Dataset schema (string length ranges per field):

| field | min length | max length |
|---|---|---|
| id | 10 | 10 |
| title | 26 | 192 |
| abstract | 172 | 1.92k |
| authors | 7 | 591 |
| published_date | 20 | 20 |
| link | 33 | 33 |
| markdown | 269 | 344k |
2309.07789
SOT-MRAM-Enabled Probabilistic Binary Neural Networks for Noise-Tolerant and Fast Training
We report the use of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) to implement a probabilistic binary neural network (PBNN) for resource-saving applications. The in-plane magnetized SOT (i-SOT) MRAM not only enables field-free magnetization switching with high endurance (> 10^11), but also hosts multiple stable probabilistic states with a low device-to-device variation (< 6.35%). Accordingly, the proposed PBNN outperforms other neural networks by achieving an 18× increase in training speed, while maintaining an accuracy above 97% under the write and read noise perturbations. Furthermore, by applying the binarization process with an additional SOT-MRAM dummy module, we demonstrate an on-chip MNIST inference performance close to the ideal baseline using our SOT-PBNN hardware.
Puyang Huang, Yu Gu, Chenyi Fu, Jiaqi Lu, Yiyao Zhu, Renhe Chen, Yongqi Hu, Yi Ding, Hongchao Zhang, Shiyang Lu, Shouzhong Peng, Weisheng Zhao, Xufeng Kou
2023-09-14T15:25:36Z
http://arxiv.org/abs/2309.07789v2
# SOT-MRAM-Enabled Probabilistic Binary Neural Networks for Noise-Tolerant and Fast Training

###### Abstract

We report the use of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) to implement a probabilistic binary neural network (PBNN) for resource-saving applications. The in-plane magnetized SOT (i-SOT) MRAM not only enables field-free magnetization switching with high endurance (\(>10^{11}\)), but also hosts multiple stable probabilistic states with a low device-to-device variation (\(<6.35\%\)). Accordingly, the proposed PBNN outperforms other neural networks by achieving an 18\(\times\) increase in training speed, while maintaining an accuracy above 97% under write and read noise perturbations. Furthermore, by applying the binarization process with an additional SOT-MRAM dummy module, we demonstrate an on-chip MNIST inference performance close to the ideal baseline using our SOT-PBNN hardware.

## I Introduction

With the advent of artificial intelligence, a seismic shift is observed in computing paradigms. As we move towards handling larger volumes of data and higher task complexities, new architectures for artificial neural networks (ANNs) have sprung up in numerous applications [1]. Conventionally, ANNs have relied on high-precision floating-point arithmetic to obtain optimal computational results. However, these approaches always require substantial computing and memory resources; therefore, as the number of operations and the data volume increase, the training process inevitably slows. In addition, the performance of these deterministic networks is heavily affected by discrepancies between standard training datasets and actual input data, which invariably result in reduced accuracy [2]. Alternatively, probabilistic-featured PBNNs, which introduce stochasticity within the network to enhance robustness, have been proposed to facilitate convergence to global optima [3]. Consequently, noise-tolerant PBNNs could offer accelerated training, robustness, and resource-saving capabilities for image/video classification and natural language processing.

In principle, the key to PBNNs lies in the conversion of probabilities into deterministic outcomes via data sampling [4]. This necessitates that the PBNN hardware withstand a large number of sampling operations, while the power consumption of each operation needs to be as low as possible. In this context, the non-destructive electrical manipulation of magnetic moments inherently allows MRAM to possess high endurance and energy-efficient write/read characteristics. More importantly, the magnetization switching probability can be well controlled by the injection current level via spin-orbit torque, making MRAM a suitable building block for constructing PBNNs.

Inspired by the above scenario (Fig. 1), we utilize the SOT-MRAM platform to harness the PBNN advantages. Devices across the 8-inch wafer exhibit highly consistent performance in terms of low resistance variation and identical probabilistic switching curves with repeatable state variables. By implementing both the vector-matrix multiplication (VMM) and the binarization operations with SOT-MRAM, we demonstrate a noise-tolerant and fast-training PBNN with an on-chip MNIST digit recognition accuracy of 90%.
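To make the sampling picture concrete, the following sketch models a voltage-controlled switching probability as a sigmoid around \(V_{0}\) and converts the probabilistic states into binary weights by repeated Bernoulli sampling. The sigmoid form, its slope, and all voltage values are illustrative assumptions, not device parameters reported in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def switching_probability(v, v0=0.5, slope=20.0):
    """Illustrative model: P(switch) rises sigmoidally with write voltage.
    v0 is the 50%-probability voltage; slope is an assumed device constant."""
    return 1.0 / (1.0 + np.exp(-slope * (v - v0)))

def sample_binary_weights(write_voltages, n_samples=500):
    """Convert probabilistic states into deterministic ±1 weights by sampling,
    mimicking repeated write/read cycles of the SOT-MRAM cells."""
    p = switching_probability(write_voltages)
    samples = rng.random((n_samples,) + write_voltages.shape) < p
    return np.where(samples, 1.0, -1.0)  # AP state -> +1, P state -> -1

v = np.linspace(0.4, 0.6, 11)          # 11 intermediate probabilistic states
w = sample_binary_weights(v)
print(np.round(w.mean(axis=0), 2))      # empirical means track 2*p(v) - 1
```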
## II Field-Free Probabilistic Switching of In-Plane Magnetized SOT-MRAM

### _Device characterizations and SOT-driven probabilistic magnetization switching_

High-quality magnetic tunnel junction (MTJ)/heavy metal (HM) thin films were prepared on 8-inch Si/SiO\({}_{2}\) wafers by magnetron sputtering. To enable field-free operation, we adopted the i-SOT MRAM configuration, in which an elliptical-shaped MTJ design warrants in-plane magnetic anisotropy. Accordingly, the spin current generated in the HM layer is parallel to the magnetization direction (\(M\)) of the free layer; therefore, the SOT can switch \(M\) directly without the presence of an assisting magnetic field (Fig. 2). After film growth, a large-scale SOT-MRAM array was fabricated, with a typical device size of 0.7 \(\upmu\)m \(\times\) 2 \(\upmu\)m. High-resolution transmission electron microscope (HR-TEM) images in Fig. 3 visualize the sharp hetero-interfaces.

Subsequently, reliable field-free SOT-driven magnetization switching was demonstrated (Fig. 4), where the response time of the SOT-MRAM is below 400 ps, and the hysteresis window of the \(R\)-\(V\) curve (_i.e._, the switching voltage \(V_{\text{C}}\) at a 400 ps pulse) is modulated by the pulse width. Moreover, because of the non-destructive SOT switching mechanism, the recorded parallel resistance (\(R_{\text{P}}\)) and antiparallel resistance (\(R_{\text{AP}}\)) in Fig. 5 did not experience any distortion after \(10^{11}\) write/read cycles (_i.e._, the tunneling magnetoresistance ratio of \(\sim\)70% is sufficient for the VMM operation in PBNNs). Besides, by varying the input voltage around \(V_{0}\) (the voltage of 50% switching probability), multiple switching probability states (_i.e._, network weights) are obtained (Fig. 6), and their corresponding probabilities are repeatable (the values were deduced by counting the number of \(R_{\text{P}}\)-to-\(R_{\text{AP}}\) switching events during 500 samplings per voltage).

Apart from single-device characterizations, device-to-device variations also play an important role in determining the overall network functionality. In this regard, Fig. 7 confirms that 8 randomly selected SOT-MRAM devices from the array all yielded 11 well-defined intermediate probabilistic states, with an average variation of 6.35% in the examined operating range. In the meantime, the standard deviation of the \(R_{\text{P}}\) (\(R_{\text{AP}}\)) normal distribution curve, collected from 100+ devices across the 8-inch wafer, is found to be 4.72% (4.79%), as shown in Fig. 8. We point out that such a small resistance variation has a negligible impact on the MNIST classification accuracy in the subsequent simulations and PBNN on-chip validation. Consequently, the proposed SOT-MRAM provides stable and uniform probabilistic switching states, thereby laying a solid foundation for the design and implementation of the PBNN.

## III SOT-MRAM-Enabled PBNN Implementation

### PBNN network structure and process flow for MNIST test

To demonstrate the SOT-MRAM-enabled PBNN, we developed a network that consists of two convolutional layers, two max-pooling layers, and three fully connected layers (_i.e._, all layers are constructed from SOT-MRAM) for the standard MNIST handwritten digit recognition test (Fig. 9). Utilizing the faster training speed of the PBNN, our PyTorch simulation results in Fig. 10 show that the ideal classification accuracy of the PBNN quickly exceeds 98% after only 4 epochs, whereas other networks require at least 20 epochs under full-precision conditions [5, 6, 7]. Equivalently, the PBNN system can realize a significant training time reduction of 6\(\times\) to 18\(\times\) (Fig. 11). Another advantage of the PBNN is its resource-saving feature: the entire network needs only eight quantized states for both weights and activations to achieve an accuracy above 98% at 30 epochs (Fig. 12).

Furthermore, the PBNN displays a salient noise-tolerant property against write and read errors. According to the error-awareness simulation data in Fig. 13, the training result remains almost constant with respect to the write error, which may benefit from the natural stochasticity of probabilistic switching. On the other hand, even though an increasing read error lowers the classification accuracy of the PBNN to 90%, its performance is still better than that of other neural network counterparts [8]. Considering that the measured write and read errors of the SOT-MRAM devices are less than 6.35% (Fig. 7) and 4.8% (Fig. 8), respectively, the overall accuracy of our SOT-PBNN can reach the 97.33% benchmark of the ideal scenario.

### Hardware implementation of SOT-MRAM PBNN

Guided by the PyTorch simulation, we further designed an on-chip PBNN system based on the in-plane magnetized SOT-MRAM array. As illustrated in Fig. 14, the row devices are selected through an SWL decoder, while the column devices are selected by write and read voltages from a digital-to-analog converter (DAC). Afterwards, a transimpedance amplifier (TIA) converts the current accumulated during the VMM operation into a voltage signal, which is binarized before passing to the next network layer. Given that the SOT-MRAM resistance changes with the read voltage, the binarization process in our SOT-PBNN system cannot be performed with a fixed-value resistor. Instead, to enable on-chip current comparison, we allocated an additional [2 \(\times\) n] dummy cell corresponding to the [m \(\times\) n] MRAM array cell. As a result, the binarization is achieved by writing \(R_{\text{AP}}\) and \(R_{\text{P}}\) into two MRAM devices along the same column of the dummy cell, and then determining the resistance state of a single MRAM using half of the summed currents under the same read voltage. An overview of the integrated SOT-MRAM PBNN chip is shown in Fig. 15.

As a proof of concept, the VMM operation was validated experimentally on a [16 \(\times\) 1] array. Specifically, after all 16 serially connected SOT-MRAM devices were initialized in the antiparallel state (_i.e._, weight assignment), the accumulated current (\(I_{\text{out}}\)) measured at the output port is highly consistent with the ideal value (\(I_{\text{sum}}=\sum_{i=1}^{16}I_{i}\)) under different read voltages (Fig. 16). It is also noted that with a maximum read voltage of 0.54 V, the average output current variation is only 4.31%, again demonstrating the low read error of our SOT-MRAM devices. Concurrently, the on-chip comparator function was evaluated on 9 randomly selected devices. Although the difference between \(R_{\text{P}}\) and \(R_{\text{AP}}\) narrows as the read voltage increases, the reference resistance \(R_{\text{ref}}\) of the dummy cell is continuously kept within the resistance gap, hence verifying a wide operating range of the SOT-MRAM dummy cell (Fig. 17).
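A minimal sketch of the dummy-cell comparison described above: the reference current is half the summed currents of one \(R_{\text{P}}\) device and one \(R_{\text{AP}}\) device in the same column, which sits midway between the two state currents. The resistance and voltage values are placeholders chosen to reproduce the reported \(\sim\)70% TMR ratio, not measured device parameters:

```python
import numpy as np

# Placeholder resistances (ohms); TMR ~ 70% implies R_AP ≈ 1.7 * R_P.
R_P, R_AP = 10e3, 17e3

def dummy_reference_current(v_read):
    """Reference from the dummy column: one cell written to R_P and one to
    R_AP; half of their summed currents lies between the two state currents."""
    return 0.5 * (v_read / R_P + v_read / R_AP)

def read_state(r_cell, v_read=0.3):
    """Binarize a cell by comparing its read current to the dummy reference."""
    return 1 if v_read / r_cell > dummy_reference_current(v_read) else 0

print(read_state(R_P), read_state(R_AP))  # -> 1 0 (P passes more current than AP)
```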
Finally, we selected an 8-level activation quantization and a 3-level weight quantization for handwritten digit recognition. After transferring the weight values to the MRAM, we conducted inference on the MNIST dataset. The measured and simulated results in Fig. 18 show that the inference current distributions exhibit the same features with respect to the assigned weights, thereby yielding the correct digit judgment. Based on the inference results after 100 training sessions, we obtained an on-chip classification accuracy of over 90% on our integrated SOT-PBNN chip (Fig. 19).

## IV Conclusion

Compared with other memristive devices and neural networks, the SOT-MRAM-enabled PBNN elaborated in this work shows advantages including long endurance, stable states, fast training, and robustness against input variations (Table 1). Our work provides a compelling framework for the design of reliable neural networks for low-power applications with limited computational resources.

## Acknowledgment

This work is supported by the National Key R&D Program of China (2021YFA0715503), the NSFC Programs (11904230, 62004013), the Shanghai Rising-Star Program (21QA1406000), and the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001).
2310.20519
Enhancing Graph Neural Networks with Quantum Computed Encodings
Transformers are increasingly employed for graph data, demonstrating competitive performance in diverse tasks. To incorporate graph information into these models, it is essential to enhance node and edge features with positional encodings. In this work, we propose novel families of positional encodings tailored for graph transformers. These encodings leverage the long-range correlations inherent in quantum systems, which arise from mapping the topology of a graph onto interactions between qubits in a quantum computer. Our inspiration stems from the recent advancements in quantum processing units, which offer computational capabilities beyond the reach of classical hardware. We prove that some of these quantum features are theoretically more expressive for certain graphs than the commonly used relative random walk probabilities. Empirically, we show that the performance of state-of-the-art models can be improved on standard benchmarks and large-scale datasets by computing tractable versions of quantum features. Our findings highlight the potential of leveraging quantum computing capabilities to potentially enhance the performance of transformers in handling graph data.
Slimane Thabet, Romain Fouilland, Mehdi Djellabi, Igor Sokolov, Sachin Kasture, Louis-Paul Henry, Loïc Henriet
2023-10-31T14:56:52Z
http://arxiv.org/abs/2310.20519v1
# Enhancing Graph Neural Networks with Quantum Computed Encodings

###### Abstract

Transformers are increasingly employed for graph data, demonstrating competitive performance in diverse tasks. To incorporate graph information into these models, it is essential to enhance node and edge features with positional encodings. In this work, we propose novel families of positional encodings tailored for graph transformers. These encodings leverage the long-range correlations inherent in quantum systems, which arise from mapping the topology of a graph onto interactions between qubits in a quantum computer. Our inspiration stems from the recent advancements in quantum processing units, which offer computational capabilities beyond the reach of classical hardware. We prove that some of these quantum features are theoretically more expressive for certain graphs than the commonly used relative random walk probabilities. Empirically, we show that the performance of state-of-the-art models can be improved on standard benchmarks and large-scale datasets by computing tractable versions of quantum features. Our findings highlight the potential of leveraging quantum computing capabilities to potentially enhance the performance of transformers in handling graph data.

## 1 Introduction

Graph machine learning (GML) is an expanding field of research with applications in chemistry (Gilmer et al., 2017), biology (Zitnik et al., 2018), drug design (Konaklieva, 2014), social networks (Scott, 2011), computer vision (Harchaoui and Bach, 2007) and science (Sanchez-Gonzalez et al., 2020; Xu et al., 2018). In the past few years, significant effort has been put into the design of Graph Neural Networks (GNNs) (Hamilton). The objective is to learn suitable representations that enable efficient solutions to the original problem. To that end, a large number of models have been developed (Kipf and Welling, 2016; Hamilton et al., 2018; Velickovic et al., 2018).

While the prevalent approach for constructing GNNs relies on the Message Passing (MP) mechanism (Gilmer et al., 2017), this approach exhibits several recognized limitations, the most significant being its limited theoretical expressivity. Indeed, two graphs that are indistinguishable via the Weisfeiler-Lehman (WL) test will lead to the same MP Neural Network (MPNN) output (Morris et al., 2019). Another limitation arises from the fact that MPNNs are more effective when dealing with homophilic data, based on the underlying assumption that nodes that are similar, either in structure or features, are more likely to be related. A study by Zhu et al. (2020) demonstrates that MPNNs encounter difficulties when applied to heterophilic graphs. Finally, MPNNs are prone to over-smoothing (Chen et al., 2020) as well as over-squashing (Topping et al., 2021). These latter aspects constitute serious limitations for datasets that exhibit long-range dependencies (Dwivedi et al., 2022).

The research community is actively exploring solutions to address these limitations. The key idea is to expand aggregation beyond neighbouring nodes by incorporating information related to the entire graph or a more extensive portion of it. Graph Transformers were created according to these requirements, with success on standard benchmarks (Ying et al., 2021; Rampasek et al., 2022). Among the myriad of proposed architectures, the Graph Inductive Bias Transformer (GRIT) (Ma et al., 2023) stands out for its impressive generalization capacity.
This stems from its independence from the MP mechanism and its utilization of multiple positional encodings in its architecture. While these features make it a good candidate for overcoming the aforementioned limitations, the authors relied on discrete \(k\)-step random walks to initialize the PE tensor. These random walks remain the most widely adopted choice to this day (He et al., 2023; Dwivedi et al., 2022a), since the main alternative, the matrix of the Laplacian eigenvectors, is invariant under a sign flip of each vector, resulting in \(2^{k}\) possible choices.

The goal of this work is to leverage new types of structural features emerging from quantum physics as positional encodings. The rapid development of quantum computers in recent years provides the opportunity to compute features that would otherwise be intractable. These features contain complex topological characteristics of the graph, and their inclusion has the potential to enhance the model's quality, reduce training or inference time, and decrease energy consumption.

The paper is organized as follows: In Section 2, we provide a concise overview of the existing research on graph transformers, along with references to the latest developments in quantum graph machine learning. Section 3 delves into the core theoretical aspects of this work. It covers quantum mechanics basics for readers unfamiliar with the topic, details the way to construct a quantum state from a graph, and explains why quantum states can provide relevant information that is hard to compute with a classical computer. Additionally, we introduce our central proposal, a framework for both static and dynamic positional encoding based on quantum correlations. Finally, Section 4 presents the outcomes of our numerical experiments and includes discussions of the results.

Figure 1: Summary of our method. **(a)** Our hybrid quantum-classical framework utilizes a classical computer for parameter optimization and employs a hybrid model using a Quantum Processing Unit (QPU) and a CPU and/or GPU, denoted as classical Processing Unit (cPU). In our quantum graph NN, we initialize the QPU in a quantum state \(|\psi_{0}\rangle\), apply a mixing Hamiltonian \(\hat{\mathcal{H}}_{\mathrm{M}}\) evolution for a duration \(\theta\), and utilize a Hamiltonian \(\hat{\mathcal{H}}_{\mathcal{G}}\) evolution for the graph feature map with a duration \(t\). \(K\) layers are used to obtain a sufficiently expressive quantum model. Finally, the output is obtained by measuring correlators, e.g., \(\langle Z_{i}Z_{j}\rangle\). See Section 3.1 for details. **(b)** Static or trainable PE is constructed for a graph \(\mathcal{G}\) via **(c)** a (quantum) random walk (static PE) or a quantum graph NN (static/trainable PE), which computes quantum correlations. Note that our PEs are not restricted to classical models (such as the transformer studied in this work) but are also applicable to all quantum models.

## 2 Related works

### Graph Transformers

Efforts have been made in the community to go beyond MPNNs due to several issues (Zhu et al., 2020; Chen et al., 2020; Topping et al., 2021). Inspired by the success of transformers in natural language processing (Vaswani et al., 2017; Alayrac et al., 2022), new GNN architectures, called graph transformers (GT), have been proposed to allow all-to-all aggregation between the nodes of a graph (Dwivedi & Bresson, 2020; Dwivedi et al., 2021; Rampasek et al., 2022; Kreuzer et al., 2021; Zhang et al., 2023; Ma et al., 2023).
However, due to the quadratic cost of computing the attention process, they are not applicable to large-scale graphs of millions of nodes or more. It has been shown that GTs that include graph inductive biases such as MP modules perform better than those that do not (Rampasek et al., 2022; Ma et al., 2023).

### Positional and Structural Encoding

Positional or structural embeddings are features computed from the graph that are concatenated to the original node or edge features to enrich GNN architectures (either MPNN or GT). These two terms are used interchangeably in the literature, and we denote them as "positional encodings" (PEs) in the rest of this work. PEs can include random walk probabilities (Rampasek et al., 2022; Ma et al., 2023), spectral information (Dwivedi et al., 2020; Rampasek et al., 2022; Kreuzer et al., 2021), shortest path distances (Li et al., 2018), or heat kernels (Mialon et al., 2021). They can also be learned (Dwivedi et al., 2021). We detail below the most common PEs used in the literature.

**Laplacian Eigenvectors.** The spectral information of the graph, more precisely the eigenvectors of the Laplacian matrix, can be used as PE. On a line graph, this almost corresponds to the positional embeddings of the transformer architecture for sequences. The main issue with this encoding is ensuring that the model remains invariant under sign changes of the eigenvectors, which has been solved by (Lim et al., 2022).

**Relative Random Walk Probabilities (RRWP).** The authors of (Ma et al., 2023) introduced the RRWP with which they initialize their model. For a graph \(\mathcal{G}\), let \(A\) be the adjacency matrix and \(D\) the degree matrix. Let \(P\) be a 3-dimensional tensor such that \(P_{k,i,j}=(M^{k})_{ij}\) with \(M=D^{-1}A\). To each pair of nodes \((i,j)\), we associate the vector \(P_{:,i,j}\), i.e., the concatenation over all \(k\) of the probabilities of getting from node \(i\) to node \(j\) in \(k\) steps of a random walk. \(P_{:,i,i}\) is the same as the Random Walk Structural Encodings (RWSE) defined in (Rampasek et al., 2022). The authors of (Ma et al., 2023) highlight the benefits of RRWP. They prove that the Generalized Distance WL (GD-WL) test introduced by (Zhang et al., 2023) with RRWP is strictly more powerful than the GD-WL test with the shortest path distance, and they prove universal approximation results for multi-layer perceptrons (MLPs) initialized with RRWP.

### Quantum Computing for Graph Machine Learning

Using quantum computing for machine learning on graphs has already been proposed in several works, as reviewed in (Tang et al., 2022). The authors of (Verdon et al., 2019) realized learning tasks by using a parameterized quantum circuit depending on a Hamiltonian whose interactions share the topology of an input graph. Comparable ideas were used to build graph kernels from the output of quantum procedures, for photonic (Schuld et al., 2020) as well as neutral atom quantum processors (Henry et al., 2021). The latter was successfully implemented on quantum hardware (Albrecht et al., 2023). The architectures proposed in these papers were entirely quantum and only relied on classical computing for the optimization of variational parameters. By contrast, in what we propose here, quantum dynamics only plays a role in the aggregation phase of a larger, entirely classical architecture.
Such a hybrid model presents the advantage of gaining access to hard-to-access graph topological features through quantum dynamics while benefiting from the power of well-known existing classical architectures.

## 3 Methods and theory

In this section, we outline the process of mapping graphs to a quantum state of a QPU. To extract graph features, we introduce correlators and define the concept of the ground state for a quantum graph representation. Finally, we explore an alternative approach for extracting graph features using quantum random walks (QRW) and their advantages over their classical analogues.

### Quantum Graph Machine Learning

**The graph as a quantum state.** We explain in this subsection how to create a quantum state that contains relevant information about the graph. More details about quantum information processing can be found in (Nielsen and Chuang, 2002). The _quantum state_ \(\ket{\psi}\) of a system of \(N\) qubits can be represented as a vector of unit norm in \(\mathbb{C}^{2^{N}}\). Quantum states are modified through the action of _operators_, which can be represented as hermitian matrices of size \(2^{N}\times 2^{N}\). The dynamics obeys the Schrodinger equation \(-i\frac{d\ket{\psi}}{dt}=\hat{\mathcal{H}}\ket{\psi}\), where the operator \(\hat{\mathcal{H}}\) is the _Hamiltonian_ of the system, with solution \(\ket{\psi(t)}=\mathcal{T}\exp\left[-i\int_{0}^{t}\hat{\mathcal{H}}(\tau)d\tau\right]\ket{\psi(0)}\), with \(\mathcal{T}\) the time-ordering operator. An operator \(\hat{\mathcal{O}}\) that can be measured is called an _observable_, and its eigenvalues correspond to the possible outcomes of its measurement. Its expectation value on the quantum state \(\ket{\psi}\) is the scalar \(\langle\hat{\mathcal{O}}\rangle=\bra{\psi}\hat{\mathcal{O}}\ket{\psi}\), where \(\bra{\psi}\) is the conjugate transpose of \(\ket{\psi}\).

The _Pauli matrices_ are defined as follows: \(I=\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)\), \(X=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\), \(Y=\left(\begin{smallmatrix}0&-i\\ i&0\end{smallmatrix}\right)\), \(Z=\left(\begin{smallmatrix}1&0\\ 0&-1\end{smallmatrix}\right)\). They form a basis of the hermitian matrices of size \(2\times 2\). A _Pauli string_ of size \(N\) is an operator that can be written as the Kronecker product of \(N\) Pauli matrices. We will denote Pauli strings by their non-trivial Pauli operations. For instance, in a system of 5 qubits, \(X_{0}Y_{3}=X\otimes I\otimes I\otimes Y\otimes I\).

We associate a graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\) with a quantum state \(\ket{\psi_{\mathcal{G}}}\) of \(|\mathcal{V}|\) qubits containing information about \(\mathcal{G}\) via a hamiltonian \(\hat{\mathcal{H}}_{\mathcal{G}}\) of the form
\[\hat{\mathcal{H}}_{\mathcal{G}}=\sum_{(i,j)\in\mathcal{E}}\hat{\mathcal{H}}_{ij} \tag{1}\]
where \(\hat{\mathcal{H}}_{ij}\) is a Pauli string acting non-trivially on \(i\) and \(j\) only. We will be focusing on the Ising hamiltonian \(\hat{\mathcal{H}}^{I}=\sum_{(i,j)\in\mathcal{E}}Z_{i}Z_{j}\) and the XY hamiltonian \(\hat{\mathcal{H}}^{XY}=\sum_{(i,j)\in\mathcal{E}}X_{i}X_{j}+Y_{i}Y_{j}\). We will denote by \(\ket{0}\) and \(\ket{1}\) the two eigenstates (or eigenvectors) of \(Z\) with respective eigenvalues 1 and -1, and we will use \(\left\{\ket{\mathbf{b}}=\bigotimes_{i=1}^{N}\ket{b_{i}}\right\}_{\mathbf{b}\in\{0,1\}^{N}}\) as a basis of the \(2^{N}\)-dimensional space of quantum states.
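As a concrete illustration of this mapping, the sketch below builds the dense Ising hamiltonian \(\hat{\mathcal{H}}^{I}=\sum_{(i,j)\in\mathcal{E}}Z_{i}Z_{j}\) for a small graph and evaluates correlators \(\langle Z_{i}Z_{j}\rangle\) on a state vector. This is a brute-force simulation for intuition only, feasible for a handful of qubits; the example graph is an arbitrary choice:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def pauli_string(ops, n):
    """Kronecker product with identity everywhere except the given positions.
    ops: dict {qubit index: 2x2 matrix}."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(n)])

def ising_hamiltonian(edges, n):
    """H^I = sum over edges of Z_i Z_j, as a dense 2^n x 2^n matrix."""
    return sum(pauli_string({i: Z, j: Z}, n) for (i, j) in edges)

def correlator(psi, i, j, n):
    """<psi| Z_i Z_j |psi> for a normalized state vector psi."""
    return np.vdot(psi, pauli_string({i: Z, j: Z}, n) @ psi).real

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a triangle with a pendant node
n = 4
H = ising_hamiltonian(edges, n)
psi = np.ones(2**n) / np.sqrt(2**n)         # uniform superposition |+...+>
print(correlator(psi, 0, 1, n))             # -> 0.0 on this product state
```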
We consider the quantum state obtained by the alternated action of \(p\) layers of \(\hat{\mathcal{H}}_{\mathcal{G}}\) and a _mixing_ hamiltonian \(\hat{\mathcal{H}}_{M}\) (which doesn't commute with \(\hat{\mathcal{H}}_{\mathcal{G}}\); for instance \(\hat{\mathcal{H}}_{M}\propto\sum_{i}Y_{i}\))
\[\ket{\psi_{\mathcal{G}}(\mathbf{\theta})}=\prod_{k=1}^{p}\left(\mathrm{e}^{-i\hat{\mathcal{H}}_{M}\theta_{k}}\mathrm{e}^{-i\hat{\mathcal{H}}_{\mathcal{G}}t_{k}}\right)\mathrm{e}^{-i\hat{\mathcal{H}}_{M}\theta_{0}}\ket{\psi_{0}}, \tag{2}\]
where \(\mathbf{\theta}=(\theta_{0},t_{1},\theta_{1},\ldots,t_{p},\theta_{p})\) is a real vector of parameters. The choice of these states is motivated by their similarity with the _Trotterized_ dynamics of many quantum systems (Suzuki, 1976).

**Correlation.** The correlations (or _correlators_) \(C_{ij}\) of local operators \(\hat{\mathcal{O}}_{i}\) and \(\hat{\mathcal{O}}_{j}\) acting respectively on qubits \(i\) and \(j\) can be defined either as the expectation value of their product \(\langle\hat{\mathcal{O}}_{i}\hat{\mathcal{O}}_{j}\rangle\), or as their covariance \(\langle\hat{\mathcal{O}}_{i}\hat{\mathcal{O}}_{j}\rangle-\langle\hat{\mathcal{O}}_{i}\rangle\langle\hat{\mathcal{O}}_{j}\rangle\) (note that the order matters if \(\hat{\mathcal{O}}_{i}\) and \(\hat{\mathcal{O}}_{j}\) don't commute). In the rest of the paper, we will use the term correlation for either expression, and be more precise when necessary. We will focus on the case where \(\hat{\mathcal{O}}_{i}\) is a Pauli string of length 1 (i.e. \(X_{i}\), \(Y_{i}\) or \(Z_{i}\)).

**Ground state.** The ground state of a system is defined as the lowest-energy eigenstate of its hamiltonian (when it is degenerate, one considers the _ground state manifold_ \(\mathbb{H}_{GS}\)). Ground states are widely studied in many-body physics, and their properties depend on the topology of the graph. Preparing this state is the purpose of quantum annealing (Das and Chakrabarti, 2008). When using neutral atom quantum processors (Henriet et al., 2020), one can natively address hamiltonians of the form \(\hat{\mathcal{H}}_{\mathcal{G}}=\sum_{(i,j)\in\mathcal{E}}J_{ij}(Z_{i}-\alpha_{i}I)(Z_{j}-\alpha_{j}I)\), with \(\alpha_{i}\) real coefficients. Its eigenstates are the basis states \(|\mathbf{b}\rangle\) described above. In the case where \(\alpha_{i}=1-\delta/(2z_{i})\) with \(z_{i}=\sum_{j|(i,j)\in\mathcal{E}}J_{ij}\) and \(J_{ij}=1/4\), the eigenenergies (or eigenvalues) are \(E(\mathbf{b})=\sum_{(i,j)\in\mathcal{E}}b_{i}b_{j}-\delta\sum_{i=1}^{N}b_{i}\). When \(0<\delta<1\), this is the cost function associated with the maximum independent set problem, an NP-hard problem (Garey & Johnson, 1979). In the absence of degeneracy-lifting or symmetry-breaking effects, a quantum annealing scheme would prepare a symmetric, equal-weight superposition of all maximum independent sets. With that in mind, we will call the state \(|\psi_{GS}\rangle=\frac{1}{\sqrt{|\mathbb{H}_{GS}|}}\sum_{\mathbf{b}\in\mathbb{H}_{GS}}|\mathbf{b}\rangle\) the _ground state of the graph_.

**Classical and Quantum Walks.** Quantum walks, as introduced by (Aharonov et al., 1993), differ fundamentally from classical random walks by evolving through unitary processes, allowing for interference between different trajectories. These walks come in two primary types: _continuous-time quantum walks_ (CQRW) (Farhi & Gutmann, 1998; Rossi et al., 2017) and _discrete-time quantum walks_ (DQRW) (Lovett et al., 2010).
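As a quick numerical illustration of the classical/quantum contrast developed below, the following sketch compares \(K\)-step classical transition probabilities (the quantities used by RRWP) with single-particle continuous-time quantum walk probabilities on a small graph. It relies on the fact, used later in Section 3.2.1, that restricted to single-excitation states the XY hamiltonian acts as a hopping term proportional to the adjacency matrix; the factor 2 and the example graph are spelled out as assumptions in the comments:

```python
import numpy as np
from scipy.linalg import expm

# Path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Classical K-step transition probabilities M^K with M = D^{-1} A (as in RRWP).
M = A / A.sum(axis=1, keepdims=True)
classical_2step = np.linalg.matrix_power(M, 2)

# 1-particle continuous-time quantum walk: on the single-excitation subspace
# each edge term X_iX_j + Y_iY_j hops the particle with amplitude 2, so the
# effective hamiltonian is 2A and [X^(1)(t)]_{ij} = |(e^{-2iAt})_{ij}|^2.
def qrw_probs(t):
    U = expm(-2j * t * A)
    return np.abs(U) ** 2

print(np.round(classical_2step, 3))  # real, row-stochastic probabilities
print(np.round(qrw_probs(1.0), 3))   # interference pattern; rows sum to 1
```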
Discrete classical random walks on \(\mathcal{G}(\mathcal{V},\mathcal{E})\) use the probability matrix \(M=D^{-1}A\) for node transitions over a walk of length \(K\), resulting in the probability distribution \(P_{K}=M^{K}P_{0}\) (Aharonov et al., 2001). Note that this approach is utilized in RRWP encodings (Ma et al., 2023). In the continuous case, CQRW can be viewed as a natural extension of continuous-time classical random walks (CRW). In CRW, the probability of a walker being at vertex \(i\) at time \(t\) is represented as \(p_{i}(t)\), which follows the differential equation \(\frac{\mathrm{d}}{\mathrm{d}t}p_{i}(t)=-\sum_{j}G_{ij}p_{j}(t)\). Here, the infinitesimal generator \(G_{ij}=-\gamma\) if an edge exists between nodes \(i\) and \(j\), and \(0\) otherwise, with diagonal elements \(G_{ii}=k_{i}\gamma\) determined by the node degree \(k_{i}\). Considering now a quantum evolution with a graph Hamiltonian \(\hat{\mathcal{H}}_{\mathcal{G}}\), given a \(2^{N}\)-dimensional Hilbert space of \(N\) qubits, the Schrodinger equation governing the evolution of a quantum state \(|\psi_{\mathcal{G}}\rangle\) projected onto a state \(|i\rangle\) reads
\[\underbrace{i\frac{\mathrm{d}}{\mathrm{d}t}\langle i|\psi_{\mathcal{G}}(t)\rangle=\sum_{j}\langle i|\hat{\mathcal{H}}_{\mathcal{G}}|j\rangle\langle j|\psi_{\mathcal{G}}(t)\rangle}_{\mathrm{Quantum}}\longleftrightarrow\underbrace{\frac{\mathrm{d}}{\mathrm{d}t}p_{i}(t)=-\sum_{j}G_{ij}p_{j}(t)}_{\mathrm{Classical}}. \tag{3}\]
Note the similarity between the differential equations of CQRW and CRW. A quantum analogue of CRW can be obtained by taking \(\langle i|\hat{\mathcal{H}}_{\mathcal{G}}|j\rangle=G_{ij}\). Probability is preserved as the sum of squared amplitudes, \(\sum_{i}|\langle i|\psi_{\mathcal{G}}(t)\rangle|^{2}=1\), in the quantum case, instead of \(\sum_{i}p_{i}(t)=1\) in the classical case. This difference between the evolution of probabilities (which are real) and the evolution of amplitudes (which are complex) leads to interesting differences between the dynamics of classical and quantum walks. Using this formalism, any quantum evolution can be thought of as a CQRW (Childs et al., 2002). Notably, quantum walks have demonstrated an exponential hitting-time advantage for graphs like hypercubes (Kempe, 2002) and glued binary trees (Childs et al., 2003). These results have recently been extended to more general hierarchical graphs (Balasubramanian et al., 2023). For an overview, refer to (Kempe, 2003).

### Positional encodings with quantum features

In this section, we detail our proposals to incorporate quantum features in GNN models, and we discuss the potential benefits and drawbacks. Our methods can be roughly divided into two categories: quantum features that are used as static positional encodings and are precomputed at the beginning of the procedure, and quantum features that can be dynamically trained.

#### 3.2.1 Static positional encoding

**Eigenvectors of the correlation on the ground state.** We propose to use the correlation matrix \(C_{ij}=\langle Z_{i}Z_{j}\rangle\) on the ground state of the graph defined in Sec. 3.1. Since this matrix is symmetric with non-negative eigenvalues, it can formally be used in the same place as the Laplacian matrix in graph learning models. Hence, we use the eigenvectors of this correlation matrix in the same way Laplacian eigenvectors (LE) are used in other graph transformer architectures.
Instead of taking the eigenvectors with the lowest eigenvalues, as for Laplacian eigenmaps, we take the ones with the highest eigenvalues, since they are the ones containing most of the information about the correlation matrix. We expect to face the same challenges regarding the sign ambiguity (Dwivedi et al., 2021; Kreuzer et al., 2021), and to implement the same techniques to alleviate them (Lim et al., 2022).

**\(k\)-particle quantum random walks (\(k\)-QRW).** In this work, we introduce the \(k\)-particle (or \(k\)-walker) random walk positional encoding, which can be obtained using \(\hat{\mathcal{H}}^{XY}\). We denote by \(\hat{\mathcal{H}}^{XY}_{k}\) the XY hamiltonian restricted to the \(k\)-particle subspace \(\mathbb{H}_{k}\) (_i.e._ the span of the states \(\left|\mathbf{b}\right\rangle\) of Hamming weight \(k\), denoted \(\left|i_{1}\ldots i_{k}\right\rangle\) and parameterized by the \(k\) occupied node indices \(i_{1},\ldots,i_{k}\)). For a 1-particle QRW, we calculate the probability \([X^{(1)}(t)]_{ij}=|\langle j|e^{-i\hat{\mathcal{H}}^{XY}_{1}t}|i\rangle|^{2}\) of finding at node \(j\) a particle coming from node \(i\) after time \(t\). Similarly, for a 2-particle QRW, we calculate \([X^{(2)}(t)]_{ij}=|\langle ij|e^{-i\hat{\mathcal{H}}^{XY}_{2}t}|\psi_{\text{init}}\rangle|^{2}\), where \(|ij\rangle\in\mathbb{H}_{2}\) is the state with walkers at nodes \(i\) and \(j\) and \(|\psi_{\text{init}}\rangle\in\mathbb{H}_{2}\) is the initial state. As choices for the initial distribution, we propose to use a localised state \(\left|\psi_{\text{init}}\right\rangle\propto|ij\rangle\), the uniform distribution over all pairs of nodes \(\left|\psi_{\text{init}}\right\rangle\propto\sum_{(i,j)\in\mathcal{V}^{2}}|ij\rangle\), or the uniform distribution over the edges of the original graph \(\left|\psi_{\text{init}}\right\rangle\propto\sum_{(i,j)\in\mathcal{E}}|ij\rangle\). From these we obtain the positional encodings \(\mathbf{P}_{ij}=[I,X^{(n_{w})}(t_{1}),X^{(n_{w})}(t_{2}),\ldots,X^{(n_{w})}(t_{K})]_{ij}\), where \(n_{w}=1,2\) is the number of walkers. From the symmetries of \(\hat{\mathcal{H}}^{XY}\), this 2-QRW can be viewed as a 1-QRW on the set of 2-particle states (see A.3.2). From there, we consider a _discrete_ 2-particle quantum-inspired RW (2-QiRW) encoding that reads \(\mathbf{P}_{ij}=\big[\langle ij|\,(\hat{\mathcal{H}}^{XY}_{2})^{k}\,|\psi_{\text{init}}\rangle\big]_{k\in[0,K]}\).

#### 3.2.2 Learnable positional encodings

Here we consider a specific case of equation 2, with \(p=1\), \(\theta_{0}=-\theta_{1}=\theta\), \(\hat{\mathcal{H}}_{M}\propto\sum_{i\in\mathcal{V}}Y_{i}\) and \(\hat{\mathcal{H}}_{\mathcal{G}}=\sum_{(i,j)\in\mathcal{E}}Z_{i}Z_{j}-\delta\sum_{i\in\mathcal{V}}Z_{i}\). A similar setting was implemented on a neutral atom QPU in (Albrecht et al., 2023), where \(\hat{\mathcal{H}}_{M}\propto\sum_{i\in\mathcal{V}}X_{i}+\varepsilon\hat{\mathcal{H}}_{\mathcal{G}}\) (with \(\varepsilon\lesssim 1\)) due to hardware constraints. We then use the covariance matrix of the occupation-number observable \(\hat{\mathcal{O}}_{i}=\frac{1}{2}(I-Z_{i})\), which equals one when the \(i\)-th atom is in its excited state, and 0 otherwise. This is one of the few particular cases where we can recover a closed formula for the correlation matrix used as PE; see Appendix A.2.1 for its full expression.
The goal in this section is to learn the positional encoding by training a GNN, or any permutation-equivariant NN, to find an optimised value of the parameters (\(\theta\), \(t\) and \(\delta\)) involved in our PE. The training of this module is carried out jointly with that of the transformer layers, and the input PE is updated after each backward pass. This allows a custom value of the parameters for each graph in the dataset. To this end, we train a GNN \(P_{\mathbf{W}}(\mathbf{A},\mathbf{X})\), with \(\mathbf{A}\) being the adjacency matrix, \(\mathbf{X}\) the feature matrix of the nodes in the graph, and \(\mathbf{W}\) the parameter set of the NN. We obtain the PE parameters as \((\theta||t||\delta)=P_{\mathbf{W}}(\mathbf{A},\mathbf{X})\in\mathbb{R}^{3\times k}\) (we learn \(k\) triples, encoding as many correlation matrices, which we concatenate into one tensor as is done in the original GRIT paper). Here we chose \(\mathbf{X}\) as the pairwise graph distance matrix between the nodes, such that we only consider structural features. Since the positional encoding discussed here is obtained from a special case of QAOA, a key aspect of this approach concerns the initial values taken by \(\theta\), \(t\) and \(\delta\) (Egger et al., 2021). Among the many possible initialization protocols, the authors of (Jain et al., 2022) used a GNN to find a warm start in the non-convex energy landscape, prior to an optimization through quantum annealing, for the Max-Cut problem. In our case, the training is carried out classically (only because we recover a closed formula), and its extension as a quantum NN module in a transformer-like architecture remains to be investigated in future work.

### GNN models with quantum aggregation

In the previous subsection, we explained how to integrate positional encodings coming from a quantum processing unit into a graph transformer or graph neural network model. In this subsection, we propose to directly include the quantum correlations as a trainable part of the model. Given \(H^{l}\), the node feature matrix at layer \(l\), the node feature matrix at the next layer is computed with the following formula:
\[H^{l+1}=\sigma((A(\theta)H^{l}||H^{l})W) \tag{4}\]
where \(A(\theta)\) is the quantum attention matrix. Given a quantum graph state, we compute for every pair of nodes \((i,j)\) the vector of 2-body observables \(C_{ij}=[\langle Z_{i}Z_{j}\rangle,\langle X_{i}X_{j}\rangle,\langle Y_{i}Y_{j}\rangle,\langle X_{i}Z_{j}\rangle,\langle X_{i}Y_{j}\rangle,\langle Y_{i}Z_{j}\rangle,\langle X_{j}Z_{i}\rangle,\langle X_{j}Y_{i}\rangle,\langle Y_{j}Z_{i}\rangle]^{T}\). The quantum attention matrix is computed by taking a linear combination of this correlation vector and optionally a softmax over the rows. More details are given in Appendix B.

### Theoretical arguments for QRWs

**Two-interacting-particle QRWs are more expressive than RRWP.** WL tests (Weisfeiler and Leman, 1968) and their extensions (Morris et al., 2019; Grohe and Otto, 2015; Grohe, 2017) are widely used algorithms for distinguishing graphs (Graph Isomorphism). A standard WL test generates node colorings as a function of the node neighbourhood. An extension called the generalized distance WL (GD-WL) test was introduced by (Zhang et al., 2023) as a generalization of the WL isomorphism test. Let \(\mathcal{G}\) be a graph on a set of vertices \(\mathcal{V}\) and let \(d(u,v)\) be the distance between nodes \(u,v\).
Just like the WL test, the GD-WL test starts with an initial color \(h_{0}(u)\) for each node \(u\). Then at iteration \(l\), the color \(h_{l}(u)\) is computed by \(h_{l}(u)=\texttt{HASH}(\{(h_{l-1}(v),d(u,v)),\,v\in\mathcal{V}\})\), where HASH is a hash function. The algorithm stops when the colors don't change after an iteration. The authors of (Zhang et al., 2023) show that, equipped with certain distances such as the shortest path distance (SPD), the GD-WL test is strictly more powerful than the WL test. The authors of (Ma et al., 2023) show that the GD-WL test equipped with RRWP is strictly more powerful than the GD-WL test with the SPD distance. We can show, however (see Appendix A.3.1 for a proof), that the GD-WL test with RRWP embeddings cannot distinguish non-isomorphic strongly regular graphs (see (Gamble et al., 2010) for the definition of strongly regular graphs):

**Theorem 1**: _The GD-WL test with RRWP embeddings fails to distinguish non-isomorphic strongly regular graphs. 2-particle QRWs can distinguish some of them._

## 4 Experiments

We performed several experiments to assess the capability of quantum features to improve existing GNN models. We report in the following subsections the most significant results. Less conclusive experiments are detailed in Appendices C.4 and C.3.

### Experiments on RW models

In this subsection, we test concatenating the QRW encodings to the RRWP in the GRIT model (Ma et al., 2023). We compute the (continuous) 1-CQRW for \(K\) random times and the discrete 2-QRW for \(K\) steps. Those encodings are computed numerically, since they are still tractable for graphs below 200 nodes, as opposed to the higher-order \(k\)-QRW ones. We benchmark our method on 7 datasets from (Dwivedi et al., 2020), following the experimental setup of (Rampasek et al., 2022) and (Ma et al., 2023). Our method is compared to many other architectures, with results taken directly from (Ma et al., 2023). We do not perform an extensive hyperparameter search for each architecture, and only run the GRIT model ourselves, taking the same hyperparameters as the authors. The experiments are done by building on the codebase of (Ma et al., 2023), which is itself built on (Rampasek et al., 2022). More details about the protocol and hyperparameters can be found in C.1, and more details about the datasets can be found in Appendix D. The results are included in Table 1. Our method performs better on ZINC, MNIST and CIFAR10 than all others, and comes second on PATTERN and CLUSTER. We also benchmark our method on large-scale datasets, ZINC-full (a bigger version of ZINC (Irwin et al., 2012)) and PCQM4Mv2 (Hu et al., 2021). As before, we only run GRIT ourselves and report the other results from (Ma et al., 2023). Our method performs best among all of them.

### Experiments on learning the positional encoding

Here we discuss results related to the learned parameters of the quantum PE, which were mainly obtained on the ZINC dataset. Comparing these results, displayed in the right part of Table 2, with those for random parameters (Table 1), we observe that the performances are significantly lower in this case. The main reason for this is the inability of the proposed model to efficiently explore the space of the encoded parameters \(\theta\), \(t\) and \(\delta\).
More details about this are provided in Appendix A.1. Given such limitations, the question arises as to whether the proposed positional encoding, via learned or randomly fixed parameters, plays any role in the training of the transformer model, or whether the attention scheme proposed in (Ma et al., 2023), along with the external graph features, is enough to obtain the same performances. To answer this, we compare our results with three different GRIT+RRWP models on randomized ZINC datasets. In the first one, we remove the structural information while keeping the degree sequence of each graph (the configuration model of the graph (Newman, 2010)); in the second, we remove any trace of structural information by replacing each graph in the dataset with a random graph with the same number of nodes and edges; and in the final case, we keep the structural information but randomly permute the feature vectors of each graph, such that we isolate the contribution of the structural features from that of the external features. In each case, we benchmark these results using a 2-layer graph neural network (with GAT convolutional layers) to encode the parameters of the PE tensor. The results of these comparisons are showcased in the right part of Table 2, and further details about the randomization process are given in Appendix A.2.

| **Model** | **ZINC** (MAE\(\downarrow\)) | **MNIST** (Accuracy\(\uparrow\)) | **CIFAR10** (Accuracy\(\uparrow\)) | **PATTERN** (Accuracy\(\uparrow\)) | **CLUSTER** (Accuracy\(\uparrow\)) |
|---|---|---|---|---|---|
| GCN | 0.367 ± 0.011 | 90.705 ± 0.218 | 55.710 ± 0.381 | 71.892 ± 0.334 | 68.498 ± 0.976 |
| GIN | 0.526 ± 0.051 | 96.485 ± 0.252 | 55.255 ± 1.527 | 58.387 ± 0.136 | 64.716 ± 1.553 |
| GAT | 0.384 ± 0.007 | 95.535 ± 0.205 | 64.223 ± 0.455 | 78.271 ± 0.186 | 70.587 ± 0.447 |
| GatedGCN | 0.282 ± 0.015 | 97.340 ± 0.143 | 67.312 ± 0.311 | 85.568 ± 0.088 | 73.840 ± 0.326 |
| DGN | 0.168 ± 0.003 | – | 72.838 ± 0.417 | 86.680 ± 0.034 | – |
| GIN-AK+ | 0.080 ± 0.001 | – | 72.19 ± 0.13 | 86.850 ± 0.057 | – |
| SAN | 0.139 ± 0.006 | – | – | 86.581 ± 0.037 | 76.961 ± 0.65 |
| K-Subgraph SAT | 0.094 ± 0.008 | 98.715 ± 0.087 | 68.702 ± 0.409 | 86.821 ± 0.020 | 79.232 ± 0.348 |
| EGT | 0.108 ± 0.009 | 98.051 ± 0.126 | 72.298 ± 0.336 | 86.685 ± 0.059 | 78.016 ± 0.180 |
| GPS | 0.059 ± 0.002 | 98.108 ± 0.111 | 76.468 ± 0.881 | 87.196 ± 0.076 | 80.026 ± 0.277 |
| GRIT (our run) | 0.060 ± 0.002 | 98.164 ± 0.054 | 76.198 ± 0.744 | 90.405 ± 0.232 | 79.856 ± 0.156 |
| **GRIT 1-CQRW** | 0.058 ± 0.002 | 98.108 ± 0.111 | 76.347 ± 0.704 | 87.205 ± 0.040 | 78.895 ± 0.1145 |
| **GRIT 2-QiRW** | 0.059 ± 0.004 | 98.204 ± 0.048 | 76.442 ± 1.07 | 90.165 ± 0.446 | 79.777 ± 0.171 |

Table 1: Test performance on five benchmarks from (Dwivedi et al., 2020). We show the mean ± s.d. of 4 runs with different random seeds, as in (Ma et al., 2023). Models are restricted to \(\sim 500K\) parameters for ZINC, PATTERN and CLUSTER, and to \(\sim 100K\) for MNIST and CIFAR10.
We compare our model to our own run of GRIT and indicate the results obtained by the authors for information. Figures other than the last 3 lines are taken from (Ma et al., 2023). Models in **bold** are ours.

| **Method** | **Model** | **ZINC-full** (MAE\(\downarrow\)) | **PCQM4Mv2** (MAE\(\downarrow\)) |
|---|---|---|---|
| MPNNs | GIN | 0.088 ± 0.002 | 0.1195 |
| | GraphSAGE | 0.126 ± 0.003 | – |
| | GAT | 0.111 ± 0.002 | – |
| | GCN | 0.113 ± 0.002 | 0.1195 |
| PE-GNN | SignNet | 0.024 ± 0.003 | – |
| Graph Transformers | Graphormer | 0.052 ± 0.005 | 0.0864 |
| | Graphormer-URPE | 0.028 ± 0.002 | – |
| | Graphormer-GD | 0.025 ± 0.004 | – |
| | GPS-medium | – | 0.0858 |
| | GRIT (Ma et al., 2023) | 0.023 ± 0.001 | 0.0859 |
| | GRIT (our run) | 0.025 ± 0.002 | 0.0842 |
| | **GRIT 1-CQRW (ours)** | 0.025 ± 0.003 | 0.0947* |
| | **GRIT 2-QiRW (ours)** | 0.023 ± 0.002 | 0.0838 |

| **Model** | **ZINC** (MAE\(\downarrow\)) |
|---|---|
| **Rand. feats + RRWP** | 0.393 ± 0.012 |
| **Rand. struct + RRWP** | 0.245 ± 0.009 |
| **Config. model + RRWP** | 0.156 ± 0.004 |
| **2-layer GAT + QCorr** | 0.134 ± 0.017 |
| **2-layer SAGE + QCorr** | 0.130 ± 0.003 |
| **2-layer Transf. + QCorr** | 0.127 ± 0.014 |
| **2-layer GCN + QCorr** | 0.111 ± 0.003 |

Table 2: Left part: test performance on ZINC-full (Irwin et al., 2012) and PCQM4Mv2. For ZINC-full, we show the mean and s.d. of 4 runs with different random seeds, and we limit the model to \(\sim 500K\) parameters. For PCQM4Mv2, we show the output of a single run due to computation time. We compare our model to our own run of GRIT and indicate the results obtained by the authors for information. Figures other than GRIT are taken from (Ma et al., 2023). Right part: results related to the isolation of the effects of structural and external graph features in the GRIT model, on the small ZINC dataset. For the result marked *, only 55 epochs were used.

### Synthetic experiments

In this section, we provide two examples of datasets with a binary graph classification task for which the use of the correlation matrix on the ground state, as defined in 3.2.1, is more powerful than other commonly used features like the eigenvectors of the Laplacian matrix or the diagonal of the random walk matrix. We name our datasets special pattern (S-PATTERN) and cross ladder (C-LADDER), and we detail their construction in Appendices D.1.1 and D.1.2. The idea is to construct graphs that exhibit very different Ising ground states but similar spectral properties or random walk transition probabilities. We illustrate the differences between the encodings in Appendix C.2. We train classical models on these datasets with LEs and random walk embeddings (RWSE) as node features, and we compare them to the same models with eigenvectors of the correlation on the ground state. We experiment with GCN and GPS over many hyperparameter combinations. More details on the protocols can be found in C.2. We also benchmark the GRIT model with RRWP. The results are shown in Table 3.
| **Dataset** | **(GCN/GPS)-LE** | **(GCN/GPS)-RWSE** | **GRIT-RRWP** | **(GCN/GPS)-Q** |
|---|---|---|---|---|
| C-LADDER | 57 | 56 | 78 | 100 |
| S-PATTERN | 56 | 55 | 100 | 100 |

Table 3: Results on synthetic data. We show the accuracy on the test set for each dataset. For each positional encoding, we show the score of the best model among all the hyperparameter combinations tested. LE: Laplacian Eigenvectors; RWSE: Random Walk Structural Encodings; Q: Quantum (eigenvectors of the correlation on the ground state).

The quantum encoding models achieve 100% accuracy in both cases, whereas the models with LE or RWSE achieve at most 57%. The GRIT model achieves 100% for S-PATTERN and 78% for C-LADDER.

### Discussion

We performed several experiments comparing the quantum encodings to the classical ones. Including the quantum walk features in state-of-the-art models improves their performance on most of the datasets tested. It is not surprising that the method works well for datasets for which random walks are known to be relevant features, like ZINC (Rampasek et al., 2022). We limited ourselves to versions of quantum features that are efficiently computable, and we were able to show a small gain in performance compared to state-of-the-art models. It is thus plausible that using quantum features that are not classically accessible could lead to a great improvement of these models, if quantum hardware can be made widely available.

We were able to engineer artificial datasets for which classical approaches have difficulty performing the associated binary classification tasks. The GPS model fails both tasks, whereas the GRIT model is successful on the S-PATTERN task and mildly successful on the C-LADDER task. We think GRIT is successful on the S-PATTERN dataset because the degree distribution is different for the two classes, and GRIT specifically uses a degree scaler in its architecture.

We have addressed, albeit at a preliminary stage, the issue of learning the parameters related to the positional encoding. We have shown that even with non-optimal parameters (denoted QCorr in the right part of Table 2), the model still performs better than under structural randomization, which underlines the discriminative capacity of the QCorr positional encoder and shows that the performance of the GRIT-QCorr model is not exclusively due to the external graph features.

## 5 Conclusion

In this paper we have investigated how quantum computing architectures can be used to construct new families of graph neural networks. This study involved measuring observables like correlations and probabilities for a quantum system whose hamiltonian has the same topology as the graph of interest. We then integrated these observables as positional encodings and used them in different classical graph neural network architectures. We also used them as attention layers in graph transformers. We proved that some positional encodings that use quantum features are theoretically more expressive than ones based on simple random walks, on certain classes of graphs. Our experiments show that state-of-the-art models can already be enhanced with restricted quantum features that are classically efficient to compute. This study provides strong indications that fully leveraging quantum hardware can lead to the development of high-performance architectures for certain tasks.
In particular, neutral atom quantum hardware (Henriet et al., 2020) would be well suited to the type of time-dependent Hamiltonian we described here. Furthermore, we can create artificial classification tasks that are easily solvable with quantum-enhanced models while classical models struggle. While the exact capabilities of our approach remain to be explored, the results we obtain show that quantum-enhanced GNNs are a promising family of models that could be fully exploited with near-term quantum hardware.

## Acknowledgements

We would like to thank Jacob Bamberger, Constantin Dalyac and Vincent Elfving for useful discussions and comments. We especially thank Jacob Bamberger for introducing us to a method for generating non-isomorphic graphs indistinguishable by the WL test.

## Reproducibility statement

The code to reproduce the experiments discussed in the main text is included in the supplementary materials, with instructions to run it. The details of the protocols are given in Appendices C.1, C.2 and C.4.
2310.20294
Robust nonparametric regression based on deep ReLU neural networks
In this paper, we consider robust nonparametric regression using deep neural networks with ReLU activation function. While several existing theoretically justified methods are geared towards robustness against identical heavy-tailed noise distributions, the rise of adversarial attacks has emphasized the importance of safeguarding estimation procedures against systematic contamination. We approach this statistical issue by shifting our focus towards estimating conditional distributions. To address it robustly, we introduce a novel estimation procedure based on ℓ-estimation. Under a mild model assumption, we establish general non-asymptotic risk bounds for the resulting estimators, showcasing their robustness against contamination, outliers, and model misspecification. We then delve into the application of our approach using deep ReLU neural networks. When the model is well-specified and the regression function belongs to an α-Hölder class, employing ℓ-type estimation on suitable networks enables the resulting estimators to achieve the minimax optimal rate of convergence. Additionally, we demonstrate that deep ℓ-type estimators can circumvent the curse of dimensionality by assuming the regression function closely resembles the composition of several Hölder functions. To attain this, new deep fully-connected ReLU neural networks have been designed to approximate this composition class. This approximation result can be of independent interest.
Juntong Chen
2023-10-31T09:05:09Z
http://arxiv.org/abs/2310.20294v1
# Robust nonparametric regression based on deep ReLU neural networks

###### Abstract.

In this paper, we consider robust nonparametric regression using deep neural networks with ReLU activation function. While several existing theoretically justified methods are geared towards robustness against identical heavy-tailed noise distributions, the rise of adversarial attacks has emphasized the importance of safeguarding estimation procedures against systematic contamination. We approach this statistical issue by shifting our focus towards estimating conditional distributions. To address it robustly, we introduce a novel estimation procedure based on \(\ell\)-estimation. Under a mild model assumption, we establish general non-asymptotic risk bounds for the resulting estimators, showcasing their robustness against contamination, outliers, and model misspecification. We then delve into the application of our approach using deep ReLU neural networks. When the model is well-specified and the regression function belongs to an \(\alpha\)-Holder class, employing \(\ell\)-type estimation on suitable networks enables the resulting estimators to achieve the minimax optimal rate of convergence. Additionally, we demonstrate that deep \(\ell\)-type estimators can circumvent the curse of dimensionality by assuming the regression function closely resembles the composition of several Holder functions. To attain this, new deep fully-connected ReLU neural networks have been designed to approximate this composition class. This approximation result can be of independent interest.

Key words and phrases: Nonparametric regression, robust estimation, deep neural networks, circumventing the curse of dimensionality, supremum of an empirical process

2010 Mathematics Subject Classification: Primary 62G35, 62G05; Secondary 68T01

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N\({}^{\text{o}}\) 811017.

## 1. Introduction

Consider the nonparametric regression model
\[Y_{i}=f^{\star}(W_{i})+\varepsilon_{i},\quad i\in\{1,\ldots,n\},\]
where the covariates \(W_{i}\) take values in a set \(\mathscr{W}\), the errors \(\varepsilon_{i}\) are i.i.d. centered Gaussian random variables with variance \(\sigma^{2}\) and \(f^{\star}:\mathscr{W}\to\mathbb{R}\) is an unknown regression function that we want to estimate. A substantial body of literature addresses this problem through the minimization of empirical least squares loss functions. By integrating such a classical estimation approach with various approximation models, several methods have been developed and investigated. These include kernel regression (e.g., Nadaraya (1964) and Watson (1964)), local polynomial regression (e.g., Fan (1992, 1993)), spline-based regression (e.g., Wahba (1990) and Friedman (1991)), and wavelet-based regression (e.g., Donoho et al. (1995) and Donoho and Johnstone (1998)), among others. In-depth discussions on different methods and theories related to nonparametric regression can also be found in books such as Gyorfi et al. (2002) and Tsybakov (2009). Particularly, when \(f^{\star}:[0,1]^{d}\to\mathbb{R}\) is of \(\alpha\)-smoothness, Stone (1982) demonstrated that the minimax optimal convergence rate is of the order \(n^{-2\alpha/(2\alpha+d)}\) with respect to some squared \(\mathbb{L}_{2}\)-loss. As the value of \(d\) becomes large, the convergence rate can become extremely slow, which is a well-known phenomenon called the curse of dimensionality.
One possible way to overcome this difficulty is to make additional structural assumptions on the regression function \(f^{\star}\), namely to assume that the unknown function \(f^{\star}\) is of the form \(f_{1}\circ f_{2}\), where \(f_{1}\) and \(f_{2}\) have some specific structures (e.g., Stone (1985), Horowitz and Mammen (2007) and Baraud and Birge (2014)). For instance, under the generalized additive structure of \(f^{\star}\), Horowitz and Mammen (2007) showed that one can estimate the regression function \(f^{\star}\) with the rate \(n^{-2\alpha/(2\alpha+1)}\), which is independent of the dimension \(d\). Recently, estimation based on neural networks has demonstrated remarkable success in both experimental and practical domains. Inspiring work has been carried out to systematically analyze the theoretical properties of least squares estimators implemented by various structured neural networks, particularly those employing a ReLU activation function. We mention the work of Schmidt-Hieber (2020), Kohler and Langer (2021), Suzuki and Nitanda (2021) and Jiao et al. (2023), among others. Based on the established approximation results, these studies have revealed that least squares estimators implemented using appropriate neural network architectures achieve the same minimax convergence rate as that obtained in Stone (1982) when considering a regression function \(f^{\star}\) with \(\alpha\)-smoothness. However, these findings also indicate that without further assumptions on the underlying model, nonparametric regression using deep neural networks is not immune to the curse of dimensionality. Much effort has been devoted to mitigating this issue through network-based estimation approaches (e.g., Schmidt-Hieber (2019), Chen et al. (2022) and Nakada and Imaizumi (2020), where the distribution of \(W\) is assumed to be supported on a low-dimensional manifold or the covariates are assumed to exhibit a low intrinsic dimension, and Bauer and Kohler (2019) and Suzuki (2019), where structural assumptions are imposed on \(f^{\star}\)). In particular, it is worth mentioning that, as shown in Schmidt-Hieber (2020), neural networks, especially deep ones, exhibit a natural advantage in approximating functions with a compositional structure compared to classical approximation methods. Given a collection of candidate estimators for \(f^{\star}\), most of the aforementioned approaches derive their estimators by minimizing a least-squares-based objective function. While possessing several desirable properties, least squares estimators are highly susceptible to data contamination and the presence of outliers, which are common scenarios encountered in practical applications. To address this issue of instability, several alternative approaches have been proposed in the context of linear regression, such as Huber regression (Huber (1973)), Tukey's biweight regression (Beaton and Tukey (1974)) and the least absolute deviation regression (Bassett and Koenker (1978)). In the realm of deep learning, a prevailing characteristic is the presence of data abundant in quantity but often deficient in quality. As a result, robustness becomes an essential property to consider when implementing estimation procedures based on deep neural networks (Barron (2019)). However, there has been significantly less research conducted in this field. In Lederer (2020), upper bounds for the expected excess risks of a specific class of estimators were established.
These estimators are obtained by minimizing empirical risk using unbounded, Lipschitz-continuous loss functions on feed-forward neural networks, covering cases such as the least absolute deviation loss, Huber loss, Cauchy loss, and Tukey's biweight loss. Jiao et al. (2023) investigated a similar class of estimators. They relaxed several assumptions required in Lederer (2020), which led to the establishment of their non-asymptotic expected excess risk bounds under milder conditions. They also considered the approximation error introduced by the ReLU neural network and demonstrated that the curse of dimensionality can be mitigated for such a class of estimators if the distribution of \(W\) is assumed to be supported on an approximately low-dimensional manifold. Drawing upon the approximation results established in Schmidt-Hieber (2020) and Suzuki (2019), Padilla et al. (2022) examined the properties of quantile regression using deep ReLU neural networks. When the underlying quantile function can be represented as a composition of Holder functions or when it belongs to a Besov space, they derived convergence rates for the resulting estimators in terms of the mean squared error at the design points. All the previously mentioned work that addresses robust nonparametric regression using deep neural networks assumes the existence of the regression function \(f^{\star}\). The approaches they considered and analyzed focus on robustness under scenarios where there is a departure from Gaussian distributions to heavy-tailed distributions. When it comes to the case of adversarial attacks, where the statistical model is misspecified from a distributional perspective, their results are unable to provide a theoretical guarantee for the performance of the resulting estimators. In this paper, we approach the nonparametric regression problem from a novel perspective that acknowledges the possibility of misspecification at the distributional level. We propose a general procedure under mild assumptions to address this problem in a robust manner and investigate its application to ReLU neural networks. Specifically, our primary contributions are as follows. 1. We consider this estimation problem from the perspective of estimating the conditional distributions \(Q_{i}^{\star}(W_{i})\) of \(Y_{i}\) given \(W_{i}\). To handle this statistical issue, we propose an \(\ell\)-type estimation procedure based on a development of the \(\ell\)-estimation methodology proposed in Baraud (2021). Our approach is based on the presumption that there exists an underlying function \(f^{\star}\) on \(\mathscr{W}\) belonging to some collection \(\overline{\mathcal{F}}\) such that \(Q_{i}^{\star}(W_{i})\) is of the form \(Q_{f^{\star}(W_{i})}=\mathcal{N}(f^{\star}(W_{i}),\sigma^{2})\) for all \(i\in\{1,\ldots,n\}\). However, our method is not confined to this assumption. In other words, we allow our statistical models to be slightly misspecified: \(Q_{i}^{\star}(W_{i})\) may not be exactly of the form \(Q_{f^{\star}(W_{i})}\) and, even if they were, \(f^{\star}\) may not belong to the class \(\overline{\mathcal{F}}\). 2. Assuming that \(\overline{\mathcal{F}}\) is a VC-subgraph class on \(\mathscr{W}\), we derive a non-asymptotic risk bound for the resulting estimators, measured in terms of a total-variation type distance. Building upon this general result, we offer a comprehensive elucidation of the robustness of our estimators with regard to model misspecification at the distributional level.
We also provide a quantitative comparison between the \(\ell\)-type estimators and another type of robust estimators known as \(\rho\)-estimators, which were introduced in Baraud and Chen (2020). 3. We showcase the application of our \(\ell\)-type estimation procedure using ReLU neural network models. In the case of a well-specified model, we derive uniform risk bounds over Holder classes for our estimators. By incorporating the lower bounds that we establish, we demonstrate that the resulting estimators achieve the minimax optimal rate of convergence. 4. We consider the problem of circumventing the curse of dimensionality by imposing structural assumptions on the underlying regression function \(f^{\star}\). More precisely, we assume the function \(f^{\star}\) can be expressed as a composition of several Holder functions, following the consideration in Schmidt-Hieber (2020). In contrast to the sparsity-based ReLU neural networks used in Schmidt-Hieber (2020), we develop new deep fully-connected ReLU neural networks to approximate composite Holder functions, which makes the architectural design more explicit. This approximation result can be of independent interest. By leveraging the derived approximation theory, we demonstrate that the \(\ell\)-type estimators implemented on appropriate network models can alleviate the curse of dimensionality while converging to the truth at a minimax optimal rate. The paper is organized as follows. In Section 2, we describe our specific statistical framework and set notation. In Section 3, we introduce our estimation procedure based on \(\ell\)-estimation and present our main result regarding the risk bounds for the resulting estimators. We also provide an explanation of why the deviation inequality we establish ensures the desired robustness property of the estimators and compare them with the \(\rho\)-estimators in that section. In Section 4, we delve into the implementation of our \(\ell\)-type estimation approach on ReLU neural networks. We establish uniform risk bounds over Holder classes when the data are truly i.i.d. and the regression function exists. By combining these with the lower bounds we have derived, we demonstrate the minimax optimality of our estimators under the well-specified scenario. The problem of circumventing the curse of dimensionality is addressed in Section 5, where we impose structural assumptions on the regression function \(f^{\star}\). Section 6 is devoted to most of the proofs in this paper.

## 2. The statistical setting

Let \(X_{i}=(W_{i},Y_{i})\), for \(i\in\{1,\ldots,n\}\), be \(n\) pairs of independent, but not necessarily i.i.d., random variables with values in a measurable product space \((\mathscr{X},\mathcal{X})=(\mathscr{W}\times\mathscr{Y},\mathcal{W}\otimes \mathcal{Y})\). Denote the set of all probabilities on \((\mathscr{Y},\mathcal{Y})\) as \(\mathscr{T}\). We assume that the conditional distribution of \(Y_{i}\) given \(W_{i}=w_{i}\) exists and is given by the value at \(w_{i}\) of a measurable function \(Q_{i}^{\star}\) from \((\mathscr{W},\mathcal{W})\) to \(\mathscr{T}\). We endow \(\mathscr{T}\) with the Borel \(\sigma\)-algebra \(\mathcal{T}\) associated with the total variation distance.
Recall that, given two probabilities \(P_{1}\) and \(P_{2}\) on a measurable space \((A,\mathscr{A})\), the total variation distance \(\|P_{1}-P_{2}\|_{TV}\) between \(P_{1}\) and \(P_{2}\) is defined as \[\|P_{1}-P_{2}\|_{TV}=\sup_{\mathcal{A}\in\mathscr{A}}\left[P_{1}(\mathcal{A}) -P_{2}(\mathcal{A})\right]=\frac{1}{2}\int_{A}\left|\frac{dP_{1}}{d\mu}-\frac {dP_{2}}{d\mu}\right|d\mu,\] where \(\mu\) is any reference measure that dominates both \(P_{1}\) and \(P_{2}\). With this choice of \(\mathcal{T}\), for any \(i\in\{1,\ldots,n\}\), the mapping \(w\mapsto\|Q_{i}^{\star}(w)-R\|_{TV}\) on \((\mathscr{W},\mathcal{W})\) is measurable for any probability \(R\in\mathscr{T}\). Given a class of real-valued measurable functions \(\overline{\mathcal{F}}\) on \(\mathscr{W}\), we presume that there exists a function \(f^{\star}\in\overline{\mathcal{F}}\) for which the conditional distributions \(Q_{i}^{\star}(W_{i})\) have the structure of \(Q_{f^{\star}(W_{i})}=\mathcal{N}(f^{\star}(W_{i}),\sigma^{2})\), or are at least in close proximity to it. The function \(f^{\star}\) is what we refer to as the regression function. It is worth emphasizing, as we mentioned in Section 1, that our statistical model could potentially be misspecified: the conditional distributions \(Q_{i}^{\star}(W_{i})\) might not precisely take the form \(Q_{f^{\star}(W_{i})}\), or even if they did, the regression function \(f^{\star}\) might not belong to the class \(\overline{\mathcal{F}}\). What we are truly assuming is that the collection \(\{Q_{f},\ f\in\overline{\mathcal{F}}\}\) provides a suitable approximation of the actual conditional distributions \(Q_{i}^{\star}\), for \(i\in\{1,\ldots,n\}\). Let \(\mathscr{Q}_{\mathscr{W}}\) represent the collection of all conditional probabilities from \((\mathscr{W},\mathcal{W})\) to \((\mathscr{T},\mathcal{T})\), and define \(\boldsymbol{\mathscr{Q}}_{\mathscr{W}}=\mathscr{Q}_{\mathscr{W}}^{n}\). As a direct result, we obtain the \(n\)-tuple \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\in\boldsymbol{ \mathscr{Q}}_{\mathscr{W}}\). We equip the space \(\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\) with a distance metric resembling the total variation distance. More precisely, for \(\mathbf{Q}=(Q_{1},\ldots,Q_{n})\) and \(\mathbf{Q}^{\prime}=(Q_{1}^{\prime},\ldots,Q_{n}^{\prime})\) in \(\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\), \[\ell(\mathbf{Q},\mathbf{Q}^{\prime}) =\frac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}\|Q_{i}(W_{i})-Q_{i}^{ \prime}(W_{i})\|_{TV}\right]\] \[=\frac{1}{n}\sum_{i=1}^{n}\int_{\mathscr{W}}\|Q_{i}(w)-Q_{i}^{ \prime}(w)\|_{TV}dP_{W_{i}}(w). \tag{1}\] In particular, when \(\ell(\mathbf{Q},\mathbf{Q}^{\prime})=0\), it signifies that \(Q_{i}=Q_{i}^{\prime}\)\(P_{W_{i}}\)-a.s., for all \(i\). Building on the \(n\) observations \(\boldsymbol{X}=(X_{1},\ldots,X_{n})\), we will introduce an estimation approach in a later section to develop an estimator \(\widehat{f}(\boldsymbol{X})\in\overline{\mathcal{F}}\) for the potential regression function \(f^{\star}\) (which may not exist). Furthermore, we aim to estimate the \(n\)-tuple \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\) by means of the structure \(\mathbf{Q}_{\widehat{f}}=(Q_{\widehat{f}},\ldots,Q_{\widehat{f}})\). We assess the performance of the estimator \(\mathbf{Q}_{\widehat{f}}\) for \(\mathbf{Q}^{\star}\) through the measure \(\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\).
We denote \(P=Q\cdot P_{W}\) when \(P\) represents the distribution of a random variable \((W,Y)\in\mathscr{W}\times\mathscr{Y}\), where the marginal distribution of \(W\) is \(P_{W}\) and the conditional distribution of \(Y\) given \(W\) is \(Q\). One can observe that when \(P_{1}=Q_{1}\cdot P_{W}\) and \(P_{2}=Q_{2}\cdot P_{W}\), the total variation distance between \(P_{1}\) and \(P_{2}\) can be represented as \[\|P_{1}-P_{2}\|_{TV}=\int_{\mathscr{W}}\|Q_{1}(w)-Q_{2}(w)\|_{TV}dP_{W}(w).\] By defining \(P_{i}^{\star}=Q_{i}^{\star}\cdot P_{W_{i}}\) and \(P_{i,f}=Q_{f}\cdot P_{W_{i}}\) for a measurable function \(f\) that maps \(\mathscr{W}\) to \(\mathbb{R}\), we can represent \(\ell(\mathbf{Q}^{\star},\mathbf{Q}_{f})\) as the average total variation distance over \(n\) samples: \[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{f})=\frac{1}{n}\sum_{i=1}^{n}\|P_{i}^{ \star}-P_{i,f}\|_{TV}. \tag{2}\] In the case where the \(W_{i}\) are i.i.d. with the common distribution \(P_{W}\) and \(Q_{i}^{\star}=Q^{\star}\) for all \(i\in\{1,\ldots,n\}\), we may slightly abuse the notation \(\ell(Q^{\star},Q_{\widehat{f}})\) to measure the distance between \(Q^{\star}\) and \(Q_{\widehat{f}}\) defined as \[\ell(Q^{\star},Q_{\widehat{f}})=\int_{\mathscr{W}}\|Q^{\star}(w)-Q_{\widehat {f}(w)}\|_{TV}dP_{W}(w). \tag{3}\] We conclude this section by introducing some notation that will be useful later. We denote by \(\mathbb{N}^{*}\) the set of all positive natural numbers and by \(\mathbb{R}^{*}_{+}\) the set of all positive real numbers. For any \(x\in\mathbb{R}\), we use the notation \(\lfloor x\rfloor\) to represent the largest integer strictly smaller than \(x\), and the notation \(\lceil x\rceil\) to represent the least integer greater than or equal to \(x\). Given any set \(J\), we denote its cardinality by \(|J|\). For any \(\mathbf{R}\in\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\) and any set \(\mathbf{A}\subset\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\), we define \(\ell(\mathbf{R},\mathbf{A})=\inf_{\mathbf{R}^{\prime}\in\mathbf{A}}\ell(\mathbf{R},\mathbf{R}^{\prime})\). Unless otherwise specified, log denotes the logarithm function with base \(e\). Let \((E,\mathcal{E})\) be a measurable space and \(\mu\) be a \(\sigma\)-finite measure on \((E,\mathcal{E})\). For \(k\in[1,+\infty]\), we define \(\mathcal{L}_{k}(E,\mu)\) as the collection of all the measurable functions \(f\) on \((E,\mathcal{E},\mu)\) such that \(\|f\|_{k,\mu}<+\infty\), where \[\|f\|_{k,\mu}=\begin{cases}\left(\int_{E}|f|^{k}d\mu\right)^{1/k},&\text{for $k \in[1,+\infty)$},\\ \inf\{K>0,\;|f|\leq K\;\mu-\text{a.e.}\},&\text{for $k=\infty$}.\end{cases}\] We denote the associated equivalence classes by \(\mathbb{L}_{k}(E,\mu)\), in which any two functions that coincide \(\mu\)-a.e. cannot be distinguished. In particular, we write the norm \(\|\cdot\|_{k}\) with \(k\in[1,+\infty]\) when \(\mu=\lambda\) is the Lebesgue measure. Throughout the paper, \(c\) or \(C\) denotes a positive numerical constant which may vary from line to line.

## 3. \(\ell\)-Type estimation under regression setting

We employ an \(\ell\)-type estimator, drawing inspiration from the concepts introduced in Baraud (2021) within a general framework, as well as from Baraud et al. (2022), which is specifically dedicated to density estimation. Consider a set of \(n\) independent random variables denoted as \(X_{1},\ldots,X_{n}\), taking values in a measurable space \((\mathscr{X},\mathcal{X})\).
In essence, \(\ell\)-estimation offers a versatile approach to acquiring a robust estimator of the actual joint distribution \(\mathbf{P}^{\star}\) of \(X_{1},\ldots,X_{n}\). The established \(\ell\)-estimation approach begins by introducing a set of potential probabilities \(\overline{\mathscr{P}}\), intended to offer a suitable approximation of \(\mathbf{P}^{\star}\). The primary challenge in implementing \(\ell\)-estimation within a regression framework lies in the absence of information concerning the marginal distributions \(P_{W_{i}}\) required for constructing candidate probabilities and designing the estimation procedure. Moreover, our objective does not encompass the task of estimating these marginal distributions. In this scenario, further effort is necessary to implement \(\ell\)-type estimation and establish a risk bound for the resulting estimator.

### Constructing the \(\ell\)-type estimator

Let \(\overline{\mathcal{F}}\) be a collection of real-valued measurable functions on \(\mathscr{W}\), which we call a model. For any \(f\in\overline{\mathcal{F}}\), we denote by \(Q_{f}\) the conditional Gaussian distribution induced by the function \(f\), i.e., given any \(w\in\mathscr{W}\), \(Q_{f(w)}\) is a normal distribution centered at \(f(w)\) with variance \(\sigma^{2}\), and we denote by \(q_{f(w)}\) the density of the Gaussian distribution \(Q_{f(w)}\) with respect to the Lebesgue measure. To prevent any measurability issue, we introduce the notation \(\mathcal{F}\), representing either a finite or, at most, countable subset of \(\overline{\mathcal{F}}\). Subsequently, the majority of our discussion will be focused on the set \(\mathcal{F}\). Nevertheless, as we delve into further details, it turns out that through a careful choice of \(\mathcal{F}\), no approximation power will be sacrificed in comparison to estimation based on \(\overline{\mathcal{F}}\). Given \(f_{1},f_{2}\in\mathcal{F}\), we define for any \((w,y)\in\mathscr{W}\times\mathscr{Y}\), \[t_{(f_{1},f_{2})}(w,y)=\mathbbm{1}_{q_{f_{2}(w)}(y)>q_{f_{1}(w)}(y)}-Q_{f_{1}(w) }\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right).\] Employing the function \(t_{(f_{1},f_{2})}(\cdot,\cdot)\) produces the following inequalities.

**Lemma 1**.: _Let \(P^{\star}=Q^{\star}\cdot P_{W}\) represent the distribution of a pair of random variables \((W,Y)\in\mathscr{W}\times\mathscr{Y}\), where the first marginal distribution is \(P_{W}\), and the conditional distribution of \(Y\) given \(W\) is denoted by \(Q^{\star}\). For any \(f_{1},f_{2}\in\mathcal{F}\), any \(P_{W}\) and any \(Q^{\star}\in\mathscr{Q}_{\mathscr{W}}\), we have_ \[\ell(Q_{f_{1}},Q_{f_{2}})-\ell(Q^{\star},Q_{f_{2}})\leq\mathbb{E}_{P^{\star}} \left[t_{(f_{1},f_{2})}(W,Y)\right]\leq\ell(Q^{\star},Q_{f_{1}}). \tag{4}\]

The proof of Lemma 1 is deferred to Section 6.1. Lemma 1 implies that the family of test statistics \(t_{(f_{1},f_{2})}\) carries information about the \(\ell\)-type distances between \(Q_{f_{1}}\), \(Q_{f_{2}}\) and \(Q^{\star}\), which is an essential property for constructing our final estimator.
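Since \(q_{f_{1}(w)}\) and \(q_{f_{2}(w)}\) are Gaussian densities with the same variance \(\sigma^{2}\), both terms of \(t_{(f_{1},f_{2})}\) admit elementary closed forms: the indicator holds exactly when \(y\) is strictly closer to \(f_{2}(w)\) than to \(f_{1}(w)\), and the centering term equals \(\Phi(-|f_{2}(w)-f_{1}(w)|/(2\sigma))\) whenever \(f_{1}(w)\neq f_{2}(w)\) (and vanishes when the two means coincide). The following Python sketch (ours, purely for illustration; it assumes NumPy and SciPy are available) evaluates \(t_{(f_{1},f_{2})}(w,y)\) and checks by Monte Carlo that \(\mathbb{E}_{P^{\star}}[t_{(f_{1},f_{2})}(W,Y)]=0\) in the well-specified case \(Q^{\star}=Q_{f_{1}}\), in accordance with both bounds of (4).

```python
import numpy as np
from scipy.stats import norm

def t_stat(m1, m2, y, sigma):
    """t_{(f1,f2)}(w, y) for the Gaussian model, with m1 = f1(w), m2 = f2(w).
    The indicator 1{q_{f2(w)}(y) > q_{f1(w)}(y)} holds iff y is strictly
    closer to m2 than to m1; Q_{f1(w)}(q_{f2(w)} > q_{f1(w)}) equals
    Phi(-|m2 - m1| / (2 sigma)) when m1 != m2, and 0 otherwise."""
    if m1 == m2:
        return 0.0
    indicator = float(abs(y - m2) < abs(y - m1))
    return indicator - norm.cdf(-abs(m2 - m1) / (2.0 * sigma))

# Monte Carlo check at a fixed w: with Y ~ Q_{f1(w)}, E[t_{(f1,f2)}] = 0,
# matching (4), whose left- and right-hand sides both vanish in this case.
rng = np.random.default_rng(0)
m1, m2, sigma = 0.0, 1.0, 1.0
ys = rng.normal(m1, sigma, size=200_000)
emp = np.mean(np.abs(ys - m2) < np.abs(ys - m1)) \
      - norm.cdf(-abs(m2 - m1) / (2.0 * sigma))
print(emp)  # close to 0, up to Monte Carlo error
```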
For any \(f_{1},f_{2}\in\mathcal{F}\) and \(n\) pairs of observations \(\mathbf{X}=(X_{1},\ldots,X_{n})\) with \(X_{i}=(W_{i},Y_{i})\), \(i\in\{1,\ldots,n\}\), we design the function \[\mathbf{T}_{l}(\mathbf{X},f_{1},f_{2})=\sum_{i=1}^{n}t_{(f_{1},f_{2})}(W_{i},Y_{i})\] and set \[\mathbf{T}_{l}(\mathbf{X},f_{1})=\sup_{f_{2}\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X},f _{1},f_{2}).\] Our final estimator of \(\mathbf{Q}^{\star}=(Q^{\star}_{1},\ldots,Q^{\star}_{n})\) is defined as \(\mathbf{Q}_{\widehat{f}}=(Q_{\widehat{f}},\ldots,Q_{\widehat{f}})\), where \(\widehat{f}(\mathbf{X})\) is an \(\epsilon\)-minimizer over \(\mathcal{F}\) of the map \(f_{1}\mapsto\mathbf{T}_{l}(\mathbf{X},f_{1})\). More precisely, given \(\epsilon>0\), the \(\ell\)-type estimator within the set \(\mathcal{F}\) is defined as any measurable element \(\widehat{f}(\mathbf{X})\) of the random (and non-void) set \[\mathscr{E}(\mathbf{X},\epsilon)=\left\{f\in\mathcal{F},\ \mathbf{T}_{l}(\mathbf{X},f) \leq\inf_{f^{\prime}\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X},f^{\prime})+\epsilon \right\}.\]

**Remark 1**.: The parameter \(\epsilon\) is devised to ensure the existence of the estimator \(\widehat{f}\). As we will explore in Section 3.2, it is prudent to choose a relatively small value for \(\epsilon\), specifically not significantly greater than \(1\), as this choice improves the risk bound of an \(\ell\)-type estimator. In particular, when a function \(f\in\mathcal{F}\) exists such that \(\mathbf{T}_{l}(\mathbf{X},f)=\inf_{f^{\prime}\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X}, f^{\prime})\), it is advisable to prioritize this \(f\) as the estimator \(\widehat{f}\). Furthermore, considering that \(\mathbf{T}_{l}(\mathbf{X},f)\geq\mathbf{T}_{l}(\mathbf{X},f,f)=0\) for all \(f\in\mathcal{F}\), any function \(\widehat{f}\in\mathcal{F}\) meeting the condition \(0\leq\mathbf{T}_{l}(\mathbf{X},\widehat{f})\leq\epsilon\) qualifies as an \(\ell\)-type estimator.

### The performance of the \(\ell\)-type estimator

Before delving into the theoretical performance of our \(\ell\)-type estimator, we lay the foundation by stating our main assumption on the model \(\overline{\mathcal{F}}\). To facilitate this, we introduce the following definition:

**Definition 1** (VC-subgraph).: _An (open) subgraph of a function \(f\) in \(\overline{\mathcal{F}}\) is the subset of \(\mathscr{W}\times\mathbb{R}\) given by_ \[\mathscr{C}_{f}=\left\{(w,u)\in\mathscr{W}\times\mathbb{R},\,f(w)>u\right\}.\] _A collection \(\overline{\mathcal{F}}\) of real-valued measurable functions on \(\mathscr{W}\) is VC-subgraph with dimension not larger than \(V\) if, for any finite subset \(\mathcal{S}\subset\mathscr{W}\times\mathbb{R}\) with \(|\mathcal{S}|=V+1\), there exists at least one subset \(S\) of \(\mathcal{S}\) such that for any \(f\in\overline{\mathcal{F}}\), \(S\) is not the intersection of \(\mathcal{S}\) with \(\mathscr{C}_{f}\), i.e._ \[S\neq\mathcal{S}\cap\mathscr{C}_{f}\quad\text{whatever }f\in\overline{ \mathcal{F}}.\]

Herein, we proceed to introduce our primary assumption concerning the model \(\overline{\mathcal{F}}\).

**Assumption 1**.: _The class of functions \(\overline{\mathcal{F}}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\geq 1\)._

Assumption 1 is formulated in a considerably broad scope and encompasses a range of widely employed examples.
For instance, when \(\overline{\mathcal{F}}\) is contained in a linear space with finite dimension \(D\), Assumption 1 is fulfilled with \(V=D+1\) according to Lemma 2.6.15 of van der Vaart and Wellner (1996). Moreover, when \(\overline{\mathcal{F}}\) represents a fully connected ReLU neural network, it has been demonstrated in Bartlett et al. (2019) [Theorem 7] that the VC-dimension of \(\overline{\mathcal{F}}\) is linked to the depth and width of the network. Further elaboration on \(\ell\)-estimation based on neural networks will be provided in Section 4 and Section 5. Building upon Assumption 1, we can establish the following non-asymptotic exponential inequalities for the upper deviations of a total variation type distance between the true distribution of the data and the estimated one based on \(\widehat{f}(\mathbf{X})\). **Theorem 1**.: _Under Assumption 1, whatever the conditional distributions \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\) of the \(Y_{i}\) given \(W_{i}\) and the distributions of \(W_{i}\), any \(\ell\)-type estimator \(\widehat{f}\) based on the class \(\mathcal{F}\) satisfies that for any \(\overline{f}\in\mathcal{F}\) and any \(\xi>0\), with a probability at least \(1-e^{-\xi}\),_ \[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q} ^{\star},\mathbf{Q}_{\overline{f}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n }+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}. \tag{5}\] _In particular, with the triangle inequality,_ \[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\leq 3\ell(\mathbf{Q}^{ \star},\boldsymbol{\mathscr{Q}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n}+ \sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}, \tag{6}\] _where \(\boldsymbol{\mathscr{Q}}=\{\mathbf{Q}_{f},\ f\in\mathcal{F}\}\). As a consequence of (6), for any \(n\geq V\), integration with respect to \(\xi>0\) yields the following risk bound for the resulting estimator \(\mathbf{Q}_{\widehat{f}}=(Q_{\widehat{f}},\ldots,Q_{\widehat{f}})\)_ \[\mathbb{E}\left[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\right]\leq C _{\epsilon}\left[\ell(\mathbf{Q}^{\star},\boldsymbol{\mathscr{Q}})+\sqrt{ \frac{V}{n}}\right], \tag{7}\] _where \(C_{\epsilon}>0\) is a numerical constant depending on \(\epsilon\) only._ The proof of Theorem 1 is deferred to Section 6.2. Let us now provide some remarks regarding this result. **Remark 2**.: Consider the set \(\overline{\boldsymbol{\mathscr{Q}}}=\{\mathbf{Q}_{f},\ f\in\overline{ \mathcal{F}}\}\). It is clear that if \(\boldsymbol{\mathscr{Q}}\) is dense in \(\overline{\boldsymbol{\mathscr{Q}}}\) with respect to the (pseudo) distance \(\ell\), both (6) and (7) also remain valid when replacing \(\boldsymbol{\mathscr{Q}}\) with \(\overline{\boldsymbol{\mathscr{Q}}}\). This is the situation in which the subset \(\mathcal{F}\) is dense in \(\overline{\mathcal{F}}\) with respect to the topology of pointwise convergence. For further insights in this direction, we refer to Section 4.2 of Baraud and Birge (2018). For the sake of simplicity in our explanation, let us temporarily assume in this section that \(\boldsymbol{\mathscr{Q}}\) is dense in \(\overline{\boldsymbol{\mathscr{Q}}}\) with respect to \(\ell\). **Remark 3**.: According to (7), the risk of the resulting estimator is bounded, up to a numerical constant, by the sum of two terms. 
The term \(\ell(\mathbf{Q}^{\star},\overline{\boldsymbol{\mathscr{Q}}})\) corresponds to the approximation error incurred by employing the model \(\overline{\mathcal{F}}\), while \(\sqrt{V/n}\) reflects the complexity of the considered model \(\overline{\mathcal{F}}\). Hence, a suitable model \(\overline{\mathcal{F}}\) should strike a balance between these two factors, namely, a model that is not excessively complex yet offers a good approximation of the underlying regression function.

**Remark 4**.: In the favourable situation where the data \(X_{i}=(W_{i},Y_{i})\) are truly i.i.d. with \(Q_{i}^{\star}=Q_{f^{\star}}\), \(i\in\{1,\ldots,n\}\), for some \(f^{\star}\in\overline{\mathcal{F}}\), we can deduce from (7) that \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon} \sqrt{\frac{V}{n}}.\] In typical situations, the value of \(V\) aligns with the number of parameters necessary to parametrize \(\overline{\mathcal{F}}\), which cannot be improved in general. As we shall observe in Section 4.2, the above risk bound will lead to an optimal rate of convergence in the minimax sense when the regression function \(f^{\star}\) is assumed to be a smooth function of regularity \(\alpha\).

**Remark 5**.: The term \(\ell(\mathbf{Q}^{\star},\overline{\boldsymbol{\mathscr{Q}}})\) elucidates the robustness property of the resulting estimator concerning model misspecification. To illustrate, let us consider the general scenario where the data are only independent and the true joint distribution is given by \[\mathbf{P}^{\star}=\bigotimes_{i=1}^{n}P_{i}^{\star}=\bigotimes_{i=1}^{n}\left[ (1-\beta_{i})P_{i,\overline{f}}+\beta_{i}R_{i}\right],\qquad\sum_{i=1}^{n} \beta_{i}\leq\frac{n}{2}, \tag{8}\] with some \(\overline{f}\in\overline{\mathcal{F}}\), \(P_{i,\overline{f}}=Q_{\overline{f}}\cdot P_{W_{i}}\), \(R_{i}\) being an arbitrary distribution on \(\mathscr{X}=\mathscr{W}\times\mathscr{Y}\) and \(\beta_{i}\) taking values in \([0,1]\) for all \(i\in\{1,\ldots,n\}\). With the connection (2) between the pseudo distance \(\ell\) and \(\|\cdot\|_{TV}\), we can deduce from (7) that \[\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\|P_{i}^{\star}-P_{i,\widehat {f}}\|_{TV}\right] \leq C_{\epsilon}\left[\frac{1}{n}\sum_{i=1}^{n}\|P_{i}^{\star}-P_ {i,\overline{f}}\|_{TV}+\sqrt{\frac{V}{n}}\right] \tag{9}\] \[\leq C_{\epsilon}\left[\frac{1}{n}\sum_{i=1}^{n}\beta_{i}+\sqrt{ \frac{V}{n}}\right],\] where the second inequality comes from the fact that \(\|\cdot\|_{TV}\) is bounded by \(1\). The above result implies that as long as the quantity \((\sum_{i=1}^{n}\beta_{i})/n\) remains small compared to the term \(\sqrt{V/n}\), the performance of the resulting estimator will not deteriorate significantly in comparison to the ideal situation presented in Remark 4. The formulation (8) can be utilized to provide a more detailed explanation of the stability of the \(\ell\)-type estimation procedure. More precisely, in the presence of outliers, the observations include several outliers whose indices form a non-empty subset \(J\) of \(\{1,\ldots,n\}\). This corresponds to taking \(R_{i}=\delta_{a_{i}}\) for \(i\in J\) and \(\beta_{i}=\mathbbm{1}_{i\in J}\) for all \(i\in\{1,\ldots,n\}\). The bound (9) indicates that our estimation procedure remains stable as long as \(|J|/n\) remains small compared to \(\sqrt{V/n}\). This accounts for the robustness when outliers are present.
In another scenario, where the data are contaminated, the \((W_{i},Y_{i})\) are i.i.d. and \(P_{W_{i}}=P_{W}\). A portion \(\beta\in(0,1/2]\) of the \(n\) samples is drawn according to an arbitrary distribution \(R_{i}=R\) (where \(R\) is not equal to \(P_{i,\overline{f}}\)), while the remaining part follows the distribution \(P_{i,\overline{f}}\). In this case, as an immediate consequence of (9), the performance of our estimator remains stable as long as the contamination proportion \(\beta\) remains small compared to the value of \(\sqrt{V/n}\).

### Connection to \(\mathbb{L}_{1}\)-distance between the regression functions

As we have seen in Section 3.2, we established non-asymptotic inequalities for the upper deviations of a total variation type distance between the true conditional distributions and the estimated one based on \(\widehat{f}\). In the context of a regression setting where the data are truly i.i.d. with the common marginal distribution \(P_{W}\), and the function \(f^{\star}\) exists such that \(Q_{i}^{\star}=Q_{f^{\star}}\), it would be interesting to investigate the performance of the \(\ell\)-type estimator \(\widehat{f}(\mathbf{X})\) in relation to the regression function \(f^{\star}\), utilizing a suitable distance metric, as typically considered in the literature. Given two real-valued functions \(f\) and \(f^{\prime}\) on \(\mathscr{W}\), it turns out that \(\ell(Q_{f},Q_{f^{\prime}})\) can be related to the \(\mathbb{L}_{1}(P_{W})\)-distance between \(f\) and \(f^{\prime}\). We present the result as follows.

**Lemma 2**.: _For any two measurable real-valued functions \(f,f^{\prime}\) on \(\mathscr{W}\), and any \(w\in\mathscr{W}\), we have_ \[\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{TV}=1-2\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{ 2\sigma}\right), \tag{10}\] _where the notation \(\Phi\) stands for the cumulative distribution function of the standard normal distribution. Consequently,_ \[0.78\min\left\{\frac{\|f-f^{\prime}\|_{1,P_{W}}}{\sqrt{2\pi}\sigma},1\right\} \leq\ell(Q_{f},Q_{f^{\prime}})\leq\min\left\{\frac{\|f-f^{\prime}\|_{1,P_{W}}}{ \sqrt{2\pi}\sigma},1\right\}\,. \tag{11}\]

Proof.: For any two probabilities \(P\) and \(R\) on the measurable space \((\mathscr{X},\mathcal{X})\), it is well known that the total variation distance can equivalently be written as \(\|P-R\|_{TV}=R(r>p)-P(r>p)\), where \(p\) and \(r\) stand for the respective densities of \(P\) and \(R\) with respect to some common dominating measure \(\mu\). Therefore, a fundamental calculation reveals that for any \(w\in\mathscr{W}\), \[\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{TV} =Q_{f^{\prime}(w)}\left(q_{f^{\prime}(w)}>q_{f(w)}\right)-Q_{f(w) }\left(q_{f^{\prime}(w)}>q_{f(w)}\right)\] \[=\left[1-\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{2\sigma}\right) \right]-\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{2\sigma}\right) \tag{12}\] \[=1-2\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{2\sigma}\right),\] which concludes the equality (10). We also note from (12) that \[\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{TV}=\mathbb{P}\left[|Z|\leq\frac{1}{2}\left( \frac{|f^{\prime}(w)-f(w)|}{\sigma}\right)\right],\] where \(Z\) is a standard real-valued Gaussian random variable. Recall that \[\ell(Q_{f},Q_{f^{\prime}})=\int_{\mathscr{W}}\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{ TV}dP_{W}(w). \tag{13}\] Based on (13), the conclusion of (11) follows by applying Lemma 1 in Baraud (2021) with \(d=1\) and replacing \(|m-m^{\prime}|\) with \((|f^{\prime}(w)-f(w)|)/\sigma\).
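Both the closed form (10) and the comparison (11) are easy to check numerically. A small sketch (ours, assuming NumPy and SciPy), comparing (10) against a direct numerical integration of the total variation distance and verifying the pointwise version of (11) at a fixed \(w\), where \(\ell(Q_{f},Q_{f^{\prime}})\) reduces to \(\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{TV}\) and \(\|f-f^{\prime}\|_{1,P_{W}}\) to \(\delta=|f(w)-f^{\prime}(w)|\):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def tv_closed_form(delta, sigma):
    # Equality (10): ||N(m, sigma^2) - N(m + delta, sigma^2)||_TV.
    return 1.0 - 2.0 * norm.cdf(-abs(delta) / (2.0 * sigma))

def tv_numeric(delta, sigma):
    # Direct evaluation of (1/2) * int |p1 - p2|; the integrand has a kink
    # at the midpoint delta / 2, which we pass to quad explicitly.
    f = lambda y: 0.5 * abs(norm.pdf(y, 0.0, sigma) - norm.pdf(y, delta, sigma))
    value, _ = quad(f, -15.0 * sigma, delta + 15.0 * sigma, points=[delta / 2.0])
    return value

sigma = 1.0
for delta in (0.05, 0.5, 2.0, 10.0):
    tv = tv_closed_form(delta, sigma)
    lin = min(delta / (np.sqrt(2.0 * np.pi) * sigma), 1.0)
    assert abs(tv - tv_numeric(delta, sigma)) < 1e-6   # checks (10)
    assert 0.78 * lin <= tv <= lin                     # pointwise version of (11)
print("Lemma 2 verified on the grid.")
```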
Lemma 2 indicates that when the two functions \(f\) and \(f^{\prime}\) are sufficiently close to each other with respect to the \(\mathbb{L}_{1}(P_{W})\)-distance, the quantity \(\ell(Q_{f},Q_{f^{\prime}})\) is of order \(\|f-f^{\prime}\|_{1,P_{W}}/(\sqrt{2\pi}\sigma)\). Conversely, when \(f\) and \(f^{\prime}\) are far apart, the value of \(\ell(Q_{f},Q_{f^{\prime}})\) remains approximately of the order of \(1\). Combining Lemma 2 with (7), we can deduce that \[\min\left\{\frac{\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]}{ \sqrt{2\pi}\sigma},1\right\}\leq C_{\epsilon}\left[\inf_{f\in\mathcal{F}}\ell (Q_{f^{\star}},Q_{f})+\sqrt{\frac{V}{n}}\right], \tag{14}\] where \(C_{\epsilon}>0\) is a numerical constant depending on \(\epsilon\) only. As we shall see later, in typical applications, if we can find a nice model to approximate the regression function \(f^{\star}\), in the sense that the right-hand side of (14) is smaller than \(1\), then we finally obtain a risk bound for \(\widehat{f}(\mathbf{X})\) with respect to the \(\mathbb{L}_{1}\)-distance: \[\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]\leq C_{\epsilon, \sigma}\left[\inf_{f\in\mathcal{F}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V}{n} }\right],\] where \(C_{\epsilon,\sigma}\) is a numerical constant depending on \(\epsilon,\sigma\) only.

### Comparison with \(\rho\)-estimation

As mentioned in Section 3.2, one notable feature of the \(\ell\)-type estimators is their robustness under misspecification. Interestingly, the \(\rho\)-estimators also exhibit robustness properties, but they are quantified using a Hellinger-type distance rather than one based on the total variation distance. For a more comprehensive understanding of the \(\rho\)-estimation methodology, one can refer to Baraud and Birge (2018) and Baraud and Chen (2020), with the latter primarily focusing on the regression setting. It is worth noting that while there exists some connection between the Hellinger distance and the total variation distance, they are not equivalent in general. The main distinctions between these two types of estimators have been examined in Section 7.1 of Baraud (2021), which includes an illustration of regression under a fixed design. Leveraging the results we have established in Sections 3.2 and 3.3, we are therefore able to delve deeper in this direction, especially under a random regression design setting. To illustrate simply, we assume the data are truly i.i.d. with \(P_{W_{i}}=P_{W}\) and \(Q_{i}^{\star}=Q^{\star}\), for all \(i\in\{1,\ldots,n\}\). We write \(P^{\star}=Q^{\star}\cdot P_{W}\) for the true distribution of \((W,Y)\in\mathscr{W}\times\mathscr{Y}\). For some \(f\in\mathcal{F}\), provided that the term \(\ell(Q^{\star},Q_{f})\) and \(1/n\) are both sufficiently small, employing Lemma 2, one can deduce from (5) that the \(\ell\)-type estimator \(\widehat{f}_{\ell}(\mathbf{X})\) satisfies \[\mathbb{E}\left[\|f-\widehat{f}_{\ell}\|_{1,P_{W}}\right]\leq C_{\sigma, \epsilon}\left[\|P^{\star}-P_{f}\|_{TV}+\sqrt{\frac{V}{n}}\right], \tag{15}\] where \(P_{f}=Q_{f}\cdot P_{W}\).
From another point of view, we can deduce, through a slight modification of Theorem 1 in Baraud and Chen (2020), that for any \(f\in\mathcal{F}\), the \(\rho\)-estimator \(\widehat{f}_{\rho}(\mathbf{X})\) complies with the following \[\mathbb{E}\left[h^{2}(P_{f},P_{\widehat{f}_{\rho}})\right]\leq C\left[h^{2}(P ^{\star},P_{f})+\frac{V(1+\log n)}{n}\right],\] where \(C>0\) is some numerical constant and \(h\) stands for the Hellinger distance. Considering the fact that \[h^{2}(P_{f},P_{\widehat{f}_{\rho}}) =\int_{\mathscr{W}}1-\exp\left[-\frac{|f(w)-\widehat{f}_{\rho}(w )|^{2}}{8\sigma^{2}}\right]dP_{W}(w)\] \[\geq(1-e^{-1})\left(\frac{\|f-\widehat{f}_{\rho}\|_{2,P_{W}}^{2} }{8\sigma^{2}}\wedge 1\right),\] we can deduce, using Holder's inequality and a similar argument to that used in obtaining (15), that for some \(f\in\mathcal{F}\), given that the term \(h^{2}(P^{\star},P_{f})\) and \(1/n\) are both sufficiently small, \[\mathbb{E}\left[\|f-\widehat{f}_{\rho}\|_{1,P_{W}}\right]\leq C_{\sigma}\left[ h(P^{\star},P_{f})+\sqrt{\frac{V(1+\log n)}{n}}\right]. \tag{16}\] If we put the numerical constants \(C_{\sigma,\epsilon}\), \(C_{\sigma}\) aside, the main difference between the two risk bounds lies in the fact that they express the robustness of the two different estimators by the approximation terms \(\|P^{\star}-P_{f}\|_{TV}\) and \(h(P^{\star},P_{f})\) respectively. With the connection that for any two probabilities \(P_{1},P_{2}\), \[\|P_{1}-P_{2}\|_{TV}\leq\sqrt{2}h(P_{1},P_{2}),\] we can conclude that the stability of the \(\ell\)-type estimators will not be significantly worse than that of the \(\rho\)-estimators. In fact, the \(\ell\)-type estimators can possess much more robustness than the \(\rho\)-estimators. To explain this in detail, consider the misspecified formulation: \[P^{\star}=(1-\beta)P_{f^{\star}}+\beta R,\quad\text{for some small $\beta\in(0,1)$},\] where \(f^{\star}\in\mathcal{F}\) and \(R\neq P_{f^{\star}}\) is an arbitrary distribution on \(\mathscr{X}=\mathscr{W}\times\mathscr{Y}\). On the one hand, we can calculate that \[\|P^{\star}-P_{f^{\star}}\|_{TV}=\beta\|P_{f^{\star}}-R\|_{TV}, \tag{17}\] which is of the order of magnitude \(\beta\). On the other hand, since the density of \(P^{\star}\) is pointwise bounded from below by \((1-\beta)\) times that of \(P_{f^{\star}}\), we have \(\int\sqrt{p^{\star}p_{f^{\star}}}\,d\mu\geq\sqrt{1-\beta}\) and therefore \[h(P^{\star},P_{f^{\star}})\leq\sqrt{1-\sqrt{1-\beta}}, \tag{18}\] which is at most of the order of magnitude \(\sqrt{\beta/2}\). Therefore, for small values of \(\beta\), the above computation indicates that the term \(h(P^{\star},P_{f^{\star}})\) can be much larger than \(\|P^{\star}-P_{f^{\star}}\|_{TV}\). Combining (15) with (17), we deduce that the \(\ell\)-type estimators remain stable as long as \(\beta\) is small compared to \(1/\sqrt{n}\). Combining (16) with (18), we know that the performance of the \(\rho\)-estimators may deteriorate as soon as \(\beta\) becomes large compared to \((\log n)/n\). This analysis implies that the \(\ell\)-type estimators possess more robustness than the ones obtained from \(\rho\)-estimation.

## 4. Applications of \(\ell\)-type Estimation Using Neural Networks

In recent years, experimental findings have demonstrated the significant success of neural network modeling in various applications. From a theoretical perspective, it has been observed that neural networks, especially deep ones (see, for example, Schmidt-Hieber (2020) and Suzuki and Nitanda (2021)), possess a natural advantage over classical methods when approximating functions with specific characteristics. In this section, we will discuss \(\ell\)-type estimation for models based on neural networks.
The covariates \(W_{i}\) are assumed to be i.i.d. on \(\mathscr{W}=\left[0,1\right]^{d}\), following the common distribution \(P_{W}\), while \(Q_{i}^{\star}=Q^{\star}\) holds for all \(i\in\{1,\ldots,n\}\).

### ReLU feedforward neural networks

We start by introducing some preliminaries on ReLU feedforward neural networks. Recall the Rectified Linear Unit (ReLU) activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), which is defined as \[\sigma(x)=\max\{0,x\}.\] For any vector \(\mathbf{x}=(x_{1},\ldots,x_{p})^{\top}\in\mathbb{R}^{p}\), where \(p\in\mathbb{N}^{*}\), the notation \(\sigma(\mathbf{x})\) represents the activation function applied component-wise, defined as follows: \[\sigma(\mathbf{x})=(\max\{0,x_{1}\},\ldots,\max\{0,x_{p}\})^{\top}.\] A fundamental and extensively employed type of feedforward neural network in practice is the multi-layer perceptron, where the neurons in consecutive layers are fully connected through linear transformation matrices. In our later discussion on applying the \(\ell\)-type estimation, we will focus on multi-layer perceptrons with the ReLU activation function. To begin, let us introduce the expression of the multi-layer perceptrons under consideration. For any \(L\in\mathbb{N}^{*}\) and any vector \(\mathbf{p}=(p_{0},\ldots,p_{L+1})\in(\mathbb{N}^{*})^{L+2}\) with \(p_{0}=d\) and \(p_{L+1}=1\), we denote by \(\overline{\mathcal{F}}_{(L,\mathbf{p})}\) the collection of functions of the form: \[f:\mathbb{R}^{d}\to\mathbb{R},\quad\mathbf{w}\mapsto f(\mathbf{w})=M_{L}\circ\sigma \circ M_{L-1}\circ\cdots\circ\sigma\circ M_{0}(\mathbf{w}),\] where \[M_{l}(\mathbf{y})=A_{l}\mathbf{y}+b_{l},\quad\text{for $l=0,\ldots,L$},\] \(A_{l}\) is a \(p_{l+1}\times p_{l}\) weight matrix and the shift vector \(b_{l}\) is of size \(p_{l+1}\), for any \(l\in\{0,\ldots,L\}\). In the first layer, the input data consist of the values of the predictor \(W\), whereas the last layer represents the output. With the expression given above, we say that the network \(\overline{\mathcal{F}}_{(L,\mathbf{p})}\) comprises \(L\) hidden layers and a total of \((L+2)\) layers. For \(l\in\{1,\ldots,L\}\), we refer to \(p_{l}\) as the width of the \(l\)-th hidden layer. The entries in these weight matrices and shift vectors, which we refer to as parameters, typically vary in \(\mathbb{R}\) or a subinterval of \(\mathbb{R}\). In the latter scenario, we employ the notation \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\subset\overline{\mathcal{F}}_{(L,\mathbf{ p})}\), denoting the set of all such functions with parameters ranging within the interval \([-K,K]\). Furthermore, we use the notation \(\mathcal{F}_{(L,\mathbf{p})}\) (or \(\mathcal{F}_{(L,\mathbf{p},K)}\)) for the multi-layer perceptron which shares the same architecture as \(\overline{\mathcal{F}}_{(L,\mathbf{p})}\) (or \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\) respectively), but with the distinction that all the parameters take values in \(\mathbb{Q}\). In some of our application scenarios, it suffices to consider a multi-layer perceptron with a rectangular design, where \(p_{l}=p\) for all \(l\in\{1,\ldots,L\}\). In this case, we may use the simplified notation \(\mathcal{F}_{(L,p)}\) (or \(\mathcal{F}_{(L,p,K)}\)) to represent the class \(\mathcal{F}_{(L,\mathbf{p})}\) (or \(\mathcal{F}_{(L,\mathbf{p},K)}\) respectively) for \(\mathbf{p}=(d,p,\ldots,p,1)\). We now discuss the implementation of the \(\ell\)-type estimation on the ReLU neural networks \(\mathcal{F}_{(L,\mathbf{p},K)}\).
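For concreteness, the following NumPy sketch (ours, for illustration only) evaluates a function of the class \(\overline{\mathcal{F}}_{(L,\mathbf{p})}\) at a point; passing a finite \(K\) clips the parameters to \([-K,K]\), which produces a member of \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\).

```python
import numpy as np

def relu_mlp(w, weights, biases, K=None):
    """Evaluate f(w) = M_L o sigma o M_{L-1} o ... o sigma o M_0(w), where
    M_l(y) = A_l y + b_l and sigma is the componentwise ReLU.  weights[l]
    has shape (p_{l+1}, p_l), with p_0 = d and p_{L+1} = 1."""
    x = np.asarray(w, dtype=float)
    L = len(weights) - 1                      # number of hidden layers
    for l, (A, b) in enumerate(zip(weights, biases)):
        if K is not None:                     # parameter restriction of F_(L, p, K)
            A, b = np.clip(A, -K, K), np.clip(b, -K, K)
        x = A @ x + b
        if l < L:                             # no activation on the output layer
            x = np.maximum(x, 0.0)
    return x.item()                           # p_{L+1} = 1, so the output is scalar

# Example: a rectangular network with L = 2 hidden layers of width p = 8, d = 3.
rng = np.random.default_rng(1)
d, p, L = 3, 8, 2
dims = [d] + [p] * L + [1]
weights = [rng.uniform(-1, 1, (dims[l + 1], dims[l])) for l in range(L + 1)]
biases = [rng.uniform(-1, 1, dims[l + 1]) for l in range(L + 1)]
print(relu_mlp(np.ones(d), weights, biases, K=1.0))
```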
To implement the procedure introduced in Section 3, we work with the countable subset \(\mathcal{F}_{(L,\mathbf{p},K)}\) of the model \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\). We can establish the following result.

**Lemma 3**.: _For any \(L\in\mathbb{N}^{*}\), \(\mathbf{p}=(p_{0},\ldots,p_{L+1})\in(\mathbb{N}^{*})^{L+2}\) with \(p_{0}=d\), \(p_{L+1}=1\) and a finite positive constant \(K\), the class of functions \(\mathcal{F}_{(L,\mathbf{p},K)}\) is dense in \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\) with respect to the supremum norm \(\|\cdot\|_{\infty}\)._

The proof of Lemma 3 is postponed to Section 6.3. Lemma 3 ensures that our estimation approach applied to the countable model \(\mathcal{F}_{(L,\boldsymbol{p},K)}\) does not compromise approximation power compared to \(\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\). The following proposition establishes VC-dimension bounds for rectangular multi-layer perceptrons employing a ReLU activation function. This result can be derived from Proposition 5 in Chen (2022) and aligns with the bounds stated in Theorem 7 of Bartlett et al. (2019).

**Proposition 1**.: _For any \(L\in\mathbb{N}^{*}\), \(p\in\mathbb{N}^{*}\), the class of functions \(\overline{\mathcal{F}}_{(L,p)}\) is a VC-subgraph on \(\mathscr{W}\) with dimension_ \[V(\overline{\mathcal{F}}_{(L,p)})\leq(L+1)\left(s+1\right)\log_{2}\left[2 \left(2e(L+1)\left(\frac{pL}{2}+1\right)\right)^{2}\right], \tag{19}\] _where \(s=p^{2}(L-1)+p(L+d+1)+1\)._

This result shows the connection between the VC-dimension bounds and the depth and width of ReLU rectangular multi-layer perceptrons. Specifically, for any finite constant \(K>0\), as \(\overline{\mathcal{F}}_{(L,p,K)}\subset\overline{\mathcal{F}}_{(L,p)}\), the dimension bound (19) also applies to the class \(\overline{\mathcal{F}}_{(L,p,K)}\). We will use Proposition 1 along with other results to derive the risk bounds for the \(\ell\)-type estimators when applying our approach to ReLU feedforward neural networks.

### Approximating functions in Holder space

In this section, we examine the performance of the \(\ell\)-type estimators implemented on ReLU feedforward neural networks. We consider the regression setting, where the regression function \(f^{\star}\) exists, and we assume that it belongs to an \(\alpha\)-smoothness Holder class. Given \(t\in\mathbb{N}^{*}\) and \(\alpha\in\mathbb{R}^{*}_{+}\), we define the \(\alpha\)-Holder ball \(\mathcal{H}^{\alpha}(D,B)\) with radius \(B\) as the collection of functions \(f:D\subset\mathbb{R}^{t}\rightarrow\mathbb{R}\) such that \[\max_{\begin{subarray}{c}\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t})^{\top}\in\mathbb{ N}^{t}\\ \sum_{j=1}^{t}\beta_{j}\leq\lfloor\alpha\rfloor\end{subarray}}\|\partial^{ \boldsymbol{\beta}}f\|_{\infty}\leq B\quad\text{and}\quad\max_{\begin{subarray} {c}\boldsymbol{\beta}\in\mathbb{N}^{t}\\ \sum_{j=1}^{t}\beta_{j}=\lfloor\alpha\rfloor\end{subarray}}\sup_{\begin{subarray} {c}\boldsymbol{x},\boldsymbol{y}\in D\\ \boldsymbol{x}\neq\boldsymbol{y}\end{subarray}}\frac{\left|\partial^{\boldsymbol{\beta}}f( \boldsymbol{x})-\partial^{\boldsymbol{\beta}}f(\boldsymbol{y})\right|}{\| \boldsymbol{x}-\boldsymbol{y}\|_{2}^{\alpha-\lfloor\alpha\rfloor}}\leq B,\] where for any \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t})^{\top}\in\mathbb{N}^{t}\), \(\partial^{\boldsymbol{\beta}}=\partial^{\beta_{1}}\cdots\partial^{\beta_{t}}\).
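Returning briefly to Proposition 1: the bound (19) is fully explicit, which makes it easy to check the sample-size conditions of the form \(n\geq V(\overline{\mathcal{F}}_{(L,p)})\) appearing in the risk bounds below. A direct evaluation (our sketch, nothing beyond (19) itself):

```python
import math

def vc_bound(L, p, d):
    """Right-hand side of (19) for the rectangular class F_(L, p) with
    input dimension d; s is the total number of network parameters."""
    s = p * p * (L - 1) + p * (L + d + 1) + 1
    inner = 2.0 * (2.0 * math.e * (L + 1) * (p * L / 2.0 + 1.0)) ** 2
    return (L + 1) * (s + 1) * math.log2(inner)

print(vc_bound(L=4, p=16, d=2))  # a modest architecture, for illustration
```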
Based on the Holder-class notation introduced above, we assume in this section that \(Q^{\star}=Q_{f^{\star}}\), where \(f^{\star}\in\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\), with a specified smoothness index \(\alpha\in\mathbb{R}^{*}_{+}\) and a finite constant \(B>0\). For any \(\alpha\in\mathbb{R}^{*}_{+}\), the following result, deduced from Corollary 3.1 of Jiao et al. (2023), quantifies the error introduced by various ReLU neural networks when approximating the class \(\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\).

**Proposition 2**.: _Assume that \(f\in\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\) with \(\alpha\in\mathbb{R}^{*}_{+}\) and a finite constant \(B>0\). For any \(M,N\in\mathbb{N}^{*}\), there exists a function \(\overline{f}\) implemented by a ReLU neural network \(\overline{\mathcal{F}}_{(L,p)}\) with a width of_ \[p=38(\lfloor\alpha\rfloor+1)^{2}3^{d}d^{\lfloor\alpha\rfloor+1}N\lceil\log_{2}(8 N)\rceil\] _and a depth of_ \[L=21(\lfloor\alpha\rfloor+1)^{2}M\lceil\log_{2}(8M)\rceil+2d\] _such that_ \[\left|f(\boldsymbol{w})-\overline{f}(\boldsymbol{w})\right|\leq 19B(\lfloor \alpha\rfloor+1)^{2}d^{\lfloor\alpha\rfloor+\frac{\alpha\lor 1}{2}}(NM)^{-\frac{2 \alpha}{d}},\] _for all \(\boldsymbol{w}\in[0,1]^{d}\)._

In fact, several approximation results have been established for Holder classes of smooth functions, for instance in Chen et al. (2019), Schmidt-Hieber (2020), and Nakada and Imaizumi (2020), among others. We rely on Proposition 2 mainly for two reasons. Firstly, unlike most existing results, where the prefactor in the error bound depends exponentially on the dimension \(d\), the prefactor in this error bound depends only polynomially on the dimension \(d\). Secondly, it specifies the structures of the neural networks to be considered, thus making the result more informative. Building upon the results of Theorem 1, Propositions 1 and 2, and Lemma 3, we derive the risk bounds for the \(\ell\)-type estimators implemented by different networks as follows.

**Corollary 1**.: _For any \(N,M\in\mathbb{N}^{*}\), no matter what the distribution of \(W\) is, the \(\ell\)-type estimator \(\widehat{f}(\boldsymbol{X})\) taking values in the network class \(\mathcal{F}_{(L,p,K)}\) with_ \[p=38(\lfloor\alpha\rfloor+1)^{2}3^{d}d^{\lfloor\alpha\rfloor+1}N\lceil\log_{2 }(8N)\rceil,\] \[L=21(\lfloor\alpha\rfloor+1)^{2}M\lceil\log_{2}(8M)\rceil+2d\] _and a sufficiently large \(K\), satisfies, for any \(f^{\star}\in\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\) and \(n\geq V(\overline{\mathcal{F}}_{(L,p)})\),_ \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon, \sigma,\alpha,d,B}\left[(NM)^{-2\alpha/d}+\frac{NM}{\sqrt{n}}\left(\log_{2}(2 N)\log_{2}(2M)\right)^{3/2}\right], \tag{20}\] _where \(C_{\epsilon,\sigma,\alpha,d,B}\) is a constant depending on \(\epsilon,\sigma,\alpha,d\) and \(B\) only._

_In particular, if we take \(N=1\) and \(M=\lceil n^{d/2(d+2\alpha)}\rceil\), combining Lemma 2 with (20) allows us to deduce that_ \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon, \sigma,\alpha,d,B}n^{-\frac{\alpha}{d+2\alpha}}\left(\log n\right)^{3/2}. \tag{21}\] _For \(n\) sufficiently large that the right-hand side of (21) is smaller than \(0.78\), according to Lemma 2, (21) is equivalent to_ \[\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]\leq C_{\epsilon, \sigma,\alpha,d,B}n^{-\frac{\alpha}{d+2\alpha}}\left(\log n\right)^{3/2},\] _which is the typical rate of convergence with respect to the \(\mathbb{L}_{1}(P_{W})\)-norm._

The proof of Corollary 1 is deferred to Section 6.4. Our remarks are provided below.

**Remark 6**.: As we will see later in Theorem 2, the convergence rate \(n^{-\alpha/(d+2\alpha)}\) is minimax optimal with respect to the distance \(\ell(\cdot,\cdot)\), at least when \(W\) is uniformly distributed on \(\left[0,1\right]^{d}\). Therefore, the risk bound (21) we obtained is optimal up to a logarithmic factor. As shown in Section 7 of Baraud (2021), \(\ell\)-estimators, unlike \(\rho\)-estimators, are not always optimal when addressing various estimation problems. However, by combining the upper bound (21) and the subsequent lower bound stated in Theorem 2, we demonstrate that implementing the \(\ell\)-type estimation procedure is optimal within our framework and offers more robustness compared to \(\rho\)-estimators.

**Remark 7**.: A noteworthy aspect of the presented result is that the stochastic error does not depend on an upper bound of the sup-norms of the functions within the class \(\overline{\mathcal{F}}_{(L,p,K)}\). This is not the case, for example, in the results established in Lemma 4 of Schmidt-Hieber (2020) and Theorem 4.2 of Jiao et al. (2023), both of which analyze the performance of the least squares estimator. As a consequence, the final risk bounds they established deteriorate as the considered model is enlarged, due to the inclusion of such an upper bound in their stochastic error terms. From this perspective, our estimation method does not suffer from this drawback. Therefore, we can accommodate a sufficiently large value of \(K\) without compromising the risk bound for the resulting estimator.

## 5. Circumventing the curse of dimensionality

As we observed in Section 4, the minimax optimal rate over an \(\alpha\)-Holder class on \(\mathscr{W}=\left[0,1\right]^{d}\) is of order \(n^{-\alpha/(d+2\alpha)}\). This rate slows down significantly as the dimensionality \(d\) increases, a phenomenon known as the curse of dimensionality. To overcome this issue, in this section, we introduce structural assumptions on \(f^{\star}\) and construct specific models using deep ReLU neural networks to implement our procedure. One natural structure for the regression function \(f^{\star}\), under which neural networks exhibit advantages, is a composition of multiple functions, as previously explored by Schmidt-Hieber (2020). More precisely, for any \(k\in\mathbb{N}^{*}\), \(\mathbf{d}=(d_{0},\ldots,d_{k})\in(\mathbb{N}^{*})^{k+1}\), \(\mathbf{t}=(t_{0},\ldots,t_{k})\in(\mathbb{N}^{*})^{k+1}\), \(\boldsymbol{\alpha}=(\alpha_{0},\ldots,\alpha_{k})\in(\mathbb{R}_{+}^{*})^{k+1}\) and a finite constant \(B\geq 0\), we denote by \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) the class of functions \[\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)=\left\{f_{k}\circ \cdots\circ f_{0},\;f_{i}=(f_{ij})_{j}:[a_{i},b_{i}]^{d_{i}}\to[a_{i+1},b_{i+1 }]^{d_{i+1}},\;f_{ij}\in\mathcal{H}^{\alpha_{i}}(\left[a_{i},b_{i}\right]^{t_ {i}},B)\text{ and }(|a_{i}|\vee|b_{i}|)\leq B\right\}, \tag{22}\]
\tag{22}\]
\[\left.f_{ij}\in\mathcal{H}^{\alpha_{i}}(\left[a_{i},b_{i}\right]^{t_{i}},B)\text{ and }(|a_{i}|\vee|b_{i}|)\leq B\right\},\]

where \(a_{0}=0\), \(b_{0}=1\), \(d_{0}=d\) and \(d_{k+1}=1\). In what follows, we assume the existence of an underlying regression function \(f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) such that \(Q^{\star}=Q_{f^{\star}}\) (or at least \(Q^{\star}\) is close to \(Q_{f^{\star}}\) with respect to \(\ell\)), where the values of \(k\), \(\mathbf{d}\), \(\mathbf{t}\) and \(\boldsymbol{\alpha}\) are considered to be known. We will then proceed to construct suitable networks for approximating the class \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) and implement the \(\ell\)-type estimation procedure to derive the final estimator \(Q_{\widehat{f}}\) of \(Q^{\star}\).

In such a composition structure, the approximation power of neural networks relies on the so-called effective smoothness indices, which are defined as

\[\alpha_{i}^{*}=\alpha_{i}\prod_{l=i+1}^{k}\left(\alpha_{l}\wedge 1\right),\quad\text{for}\ \ i\in\{0,\ldots,k-1\}\]

and \(\alpha_{k}^{*}=\alpha_{k}\). Based on Proposition 2 and the basic operation rules of neural networks, we establish the following result to approximate any function belonging to \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\).

**Proposition 3**.: _Assume that \(f\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) with \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) defined by (22). For all \(i\in\{0,\ldots,k\}\), denote_

\[p_{i}=114(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1}\]

_and_

\[L_{i}=21(\lfloor\alpha_{i}\rfloor+1)^{2}\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\lceil\log_{2}(8\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)\rceil+2t_{i}.\]

_There exists a function \(\overline{f}\) implemented by a ReLU network with a width of \(\overline{p}=\max_{i\in\{0,\ldots,k\}}d_{i+1}p_{i}\) and a depth of \(\overline{L}=k+\sum_{i=0}^{k}L_{i}\) such that_

\[\|f-\overline{f}\|_{\infty}\leq(2B)^{1+\sum_{i=1}^{k}\alpha_{i}}\left(\prod_{i=0}^{k}\sqrt{d_{i}}\right)\left[\sum_{i=0}^{k}C_{\alpha_{i},t_{i},B}^{\prod_{l=i+1}^{k}(\alpha_{l}\wedge 1)}(\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)^{-2\alpha_{i}^{*}/t_{i}}\right],\]

_where_

\[C_{\alpha_{i},t_{i},B}=19(2B)^{\alpha_{i}+1}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}.\]

The proof of Proposition 3 is postponed to Section 6.5. Compared with the sparsity-based networks considered in Schmidt-Hieber (2020), the presented approximation result is notable for offering a well-defined structure for neural networks to effectively implement diverse estimation approaches. From this point of view, Proposition 3 is more informative.

Building upon Theorem 1, Propositions 1 and 3, and Lemma 3, we can derive the following risk bound for the \(\ell\)-type estimators.

**Corollary 2**.: _Assume that \(f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)\) with \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)\) defined by (22).
For all \(i\in\{0,\ldots,k\}\), we set_

\[p_{i}=114(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1}\]

_and_

\[L_{i}=21(\lfloor\alpha_{i}\rfloor+1)^{2}\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\lceil\log_{2}(8\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)\rceil+2t_{i}.\]

_Whatever the distribution of \(W\), any \(\ell\)-type estimator \(\widehat{f}(\mathbf{X})\) implemented by a ReLU neural network \(\mathcal{F}_{(\overline{L},\overline{p},K)}\) with_

\[\overline{L}=k+\sum_{i=0}^{k}L_{i},\qquad\overline{p}=\max_{i=0,\ldots,k}d_{i+1}p_{i}\]

_and a sufficiently large \(K\), satisfies that for all \(n\geq V(\overline{\mathcal{F}}_{(\overline{L},\overline{p})})\),_

\[\mathbb{E}\left[\ell\left(Q_{f^{\star}},Q_{\widehat{f}}\right)\right]\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B}\left(\sum_{i=0}^{k}n^{-\frac{\alpha_{i}^{\star}}{t_{i}+2\alpha_{i}^{\star}}}\right)(\log n)^{3/2}, \tag{23}\]

_where \(C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B}\) is a numerical constant depending on \(\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B\) only._

We defer the proof of Corollary 2 to Section 6.6. Our comments are presented below.

**Remark 8**.: Denoting

\[\phi_{n}=\max_{i=0,\ldots,k}n^{-\alpha_{i}^{\star}/(2\alpha_{i}^{\star}+t_{i})},\]

the result (23) we have established indicates that, up to a logarithmic term, the \(\ell\)-type estimator \(\widehat{f}\) based on the class \(\mathcal{F}_{(\overline{L},\overline{p},K)}\) converges to the regression function \(f^{\star}\) at the rate of \(\phi_{n}\). Furthermore, for \(n\) sufficiently large such that the right-hand side of (23) is smaller than \(0.78\), upon applying Lemma 2, we obtain

\[\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B}\phi_{n}(\log n)^{3/2}.\]

This aligns with the risk bound established in Theorem 1 of Schmidt-Hieber (2020) for the least squares estimator with respect to the \(\mathbb{L}_{2}(P_{W})\)-norm.

**Remark 9**.: If the situation deviates from the ideal scenario where \(Q^{\star}=Q_{f^{\star}}\) and \(f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)\), a bias term \(\inf_{f\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)}\ell(Q^{\star},Q_{f})\) will be included in the final risk bound (23). However, as long as the bias term is not significantly larger than the quantity on the right-hand side of (23), the accuracy of the resulting estimator \(\widehat{f}\) remains of the same order of magnitude as in the ideal case. This follows from the robustness property of the \(\ell\)-type estimator, as explained in Section 3.

The following lower bound demonstrates that the convergence rate \(\phi_{n}\) is minimax optimal, at least when \(W\) is uniformly distributed on \([0,1]^{d_{0}}\).

**Theorem 2**.: _Let \(P_{W}\) be the uniform distribution on \([0,1]^{d_{0}}\).
For any \(k\in\mathbb{N}^{*}\), \(\mathbf{d}\in(\mathbb{N}^{*})^{k+1}\), \(\mathbf{t}\in(\mathbb{N}^{*})^{k+1}\) such that \(t_{j}\leq\min(d_{0},\ldots,d_{j-1})\) for all \(j\), any \(\mathbf{\alpha}\in(\mathbb{R}^{*}_{+})^{k+1}\) and \(B>0\) large enough, there exists a positive constant \(c\) such that_

\[\inf_{\widehat{f}}\sup_{f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)}\mathbb{E}\left[\ell\left(Q_{f^{\star}},Q_{\widehat{f}}\right)\right]\geq c\phi_{n},\]

_where the infimum runs among all possible estimators of \(f^{\star}\)._

The proof of Theorem 2 is deferred to Section 6.7.

## 6. Proofs

### 6.1. Proof of Lemma 1

Proof.: Drawing on the formulation of \(t_{(f_{1},f_{2})}\) and the definition of \(\ell(\cdot,\cdot)\) provided in (3), we can deduce that

\[\mathbb{E}_{P^{\star}}\left[t_{(f_{1},f_{2})}(W,Y)\right]\]
\[= \int_{\mathscr{W}}\left[Q_{(w)}^{\star}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)-Q_{f_{1}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)\right]dP_{W}(w)\]
\[\leq \int_{\mathscr{W}}\|Q_{(w)}^{\star}-Q_{f_{1}(w)}\|_{TV}dP_{W}(w)\]
\[= \ell(Q^{\star},Q_{f_{1}}),\]

which gives the second inequality in (4). Furthermore, for any two probabilities \(P\) and \(R\) on the measurable space \((\mathscr{X},\mathcal{X})\), it is well known that the total variation distance can equivalently be written as

\[\|P-R\|_{TV}=R(r>p)-P(r>p),\]

where \(p\) and \(r\) stand for the respective densities of \(P\) and \(R\) with respect to some common dominating measure \(\mu\). Given this fact, we can calculate

\[\mathbb{E}_{P^{\star}}\left[t_{(f_{1},f_{2})}(W,Y)\right]\]
\[= \int_{\mathscr{W}}\left[Q_{(w)}^{\star}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)-Q_{f_{2}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)\right]dP_{W}(w)\]
\[+\int_{\mathscr{W}}\left[Q_{f_{2}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)-Q_{f_{1}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)\right]dP_{W}(w)\]
\[\geq \int_{\mathscr{W}}\left[\|Q_{f_{1}(w)}-Q_{f_{2}(w)}\|_{TV}-\|Q_{(w)}^{\star}-Q_{f_{2}(w)}\|_{TV}\right]dP_{W}(w)\]
\[= \ell(Q_{f_{1}},Q_{f_{2}})-\ell(Q^{\star},Q_{f_{2}}),\]

which yields the first inequality in (4).

### 6.2. Proof of Theorem 1

Before proving Theorem 1, we first establish several auxiliary results that will serve as the foundation for its derivation.

**Proposition 4**.: _For any \(\overline{f}\in\mathcal{F}\), we define_

\[\mathscr{C}_{+}(\mathcal{F},\overline{f})=\left\{\left\{(w,y)\in\mathscr{W}\times\mathscr{Y}\ s.t.\ q_{f(w)}(y)>q_{\overline{f}(w)}(y)\right\},\ f\in\mathcal{F}\right\}\]

_and_

\[\mathscr{C}_{-}(\mathcal{F},\overline{f})=\left\{\left\{(w,y)\in\mathscr{W}\times\mathscr{Y}\ s.t.\ q_{f(w)}(y)<q_{\overline{f}(w)}(y)\right\},\ f\in\mathcal{F}\right\}.\]

_Under Assumption 1, the classes of subsets \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\) and \(\mathscr{C}_{-}(\mathcal{F},\overline{f})\) are both VC with dimensions not larger than \(9.41V\)._

Proof.: We first prove that the result holds for the class \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\).
For any \(f\in\mathcal{F}\), we define the function \(\widetilde{q}_{f}\) on \(\mathscr{W}\times\mathscr{Y}\) as

\[\widetilde{q}_{f(w)}(y)=\exp\left[\frac{yf(w)}{\sigma^{2}}-\frac{f^{2}(w)}{2\sigma^{2}}\right].\]

Since the Gaussian density factorizes as \(q_{f(w)}(y)=(2\pi\sigma^{2})^{-1/2}e^{-y^{2}/(2\sigma^{2})}\,\widetilde{q}_{f(w)}(y)\), where the first factor is positive and does not depend on \(f\), the comparison \(q_{f_{2}(w)}(y)>q_{f_{1}(w)}(y)\) is equivalent to \(\widetilde{q}_{f_{2}(w)}(y)>\widetilde{q}_{f_{1}(w)}(y)\). Hence, the class of subsets \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\) can be rewritten as

\[\mathscr{C}_{+}(\mathcal{F},\overline{f})=\left\{\left\{(w,y)\in\mathscr{W}\times\mathscr{Y}\ s.t.\ \widetilde{q}_{f(w)}(y)>\widetilde{q}_{\overline{f}(w)}(y)\right\},\ f\in\mathcal{F}\right\}.\]

We introduce the result of Proposition 5 in Baraud and Chen (2020) as follows.

**Proposition 5**.: _Let \(I\subset\mathbb{R}\) be a non-trivial interval and \(\mathcal{F}\) a class of functions from \(\mathscr{W}\) into \(I\). If \(\mathcal{F}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\), the class of functions_

\[\left\{h_{f}:(w,y)\mapsto e^{S(y)f(w)-A(f(w))},\ f\in\mathcal{F}\right\}\]

_is VC-subgraph on \(\mathscr{W}\times\mathscr{Y}\) with dimension not larger than \(9.41V\), where \(S\) is a real-valued measurable function on \(\mathscr{Y}\) and \(A\) is convex and continuous on \(I\)._

Note that the function \(\widetilde{q}_{f(w)}(y)\) takes the particular form described in Proposition 5 with \(S(y)=y/\sigma^{2}\), for all \(y\in\mathbb{R}\), and \(A(u)=u^{2}/(2\sigma^{2})\), for all \(u\in\mathbb{R}\). Therefore, under Assumption 1, the class of functions \(\{\widetilde{q}_{f},\ f\in\mathcal{F}\}\) on \(\mathscr{W}\times\mathscr{Y}\) is VC-subgraph with dimension not larger than \(9.41V\). Moreover, since for any given function \(\overline{f}\in\mathcal{F}\), \(\widetilde{q}_{\overline{f}}\) is a fixed function taking its values in \(\mathbb{R}\), applying Lemma 2.6.18 (v) of van der Vaart and Wellner (1996) (see also Proposition 42 (i) in Baraud et al. (2017)), we obtain that the class of functions \(\left\{\widetilde{q}_{f}-\widetilde{q}_{\overline{f}},\ f\in\mathcal{F}\right\}\) on \(\mathscr{W}\times\mathscr{Y}\) is VC-subgraph with dimension not larger than \(9.41V\). According to Proposition 2.1 of Baraud (2016), \(\left\{\widetilde{q}_{f}-\widetilde{q}_{\overline{f}},\ f\in\mathcal{F}\right\}\) is weak VC-major with dimension not larger than \(9.41V\), which implies that the class of subsets

\[\left\{\left\{(w,y)\in\mathscr{W}\times\mathscr{Y}\ s.t.\ \widetilde{q}_{f(w)}(y)-\widetilde{q}_{\overline{f}(w)}(y)>0\right\},\ f\in\mathcal{F}\right\}\]

is a VC-class of subsets of \(\mathscr{W}\times\mathscr{Y}\) with dimension not larger than \(9.41V\). Hence, the conclusion holds for \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\).

Now we show the conclusion also holds for the class of subsets \(\mathscr{C}_{-}(\mathcal{F},\overline{f})\). As we have seen, under Assumption 1, \(\{\widetilde{q}_{f},\ f\in\mathcal{F}\}\) on \(\mathscr{W}\times\mathscr{Y}\) is VC-subgraph with dimension not larger than \(9.41V\). By applying Proposition 42 (iii) of Baraud et al. (2017), we can establish that \(\{-\widetilde{q}_{f},\ f\in\mathcal{F}\}\) on \(\mathscr{W}\times\mathscr{Y}\) is a VC-subgraph with dimension not exceeding \(9.41V\). As a result of Lemma 2.6.18 (v) of van der Vaart and Wellner (1996), this property also holds for the class \(\{\widetilde{q}_{\overline{f}}-\widetilde{q}_{f},\ f\in\mathcal{F}\}\) for any fixed \(\overline{f}\in\mathcal{F}\). Finally, we can conclude using a similar argument to the one used for \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\).
**Proposition 6**.: _For any \(\overline{f}\in\mathcal{F}\), we define_

\[\mathscr{C}_{=}(\mathcal{F},\overline{f})=\left\{\{w\in\mathscr{W}\ s.t.\ f(w)=\overline{f}(w)\},\ f\in\mathcal{F}\right\}.\]

_Under Assumption 1, the class of subsets \(\mathscr{C}_{=}(\mathcal{F},\overline{f})\) is a VC-class of sets on \(\mathscr{W}\) with dimension not larger than \(9.41V\)._

Proof.: We set

\[\mathscr{C}_{\geq}(\mathcal{F},\overline{f})=\left\{\{w\in\mathscr{W}\ s.t.\ f(w)-\overline{f}(w)\geq 0\},\ f\in\mathcal{F}\right\}\]

and

\[\mathscr{C}_{\leq}(\mathcal{F},\overline{f})=\left\{\{w\in\mathscr{W}\ s.t.\ f(w)-\overline{f}(w)\leq 0\},\ f\in\mathcal{F}\right\}.\]

Since \(\mathcal{F}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\) and \(\overline{f}\) is a fixed function, \(\{f-\overline{f},\ f\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\), as a consequence of applying Proposition 42 (i) in Baraud et al. (2017). According to Proposition 2.1 of Baraud (2016), \(\{f-\overline{f},\ f\in\mathcal{F}\}\) is weak VC-major with dimension not larger than \(V\), which implies that the class of subsets

\[\left\{\left\{w\in\mathscr{W}\ s.t.\ f(w)>\overline{f}(w)\right\},\ f\in\mathcal{F}\right\}\]

is a VC-class of subsets of \(\mathscr{W}\) with dimension not larger than \(V\). Then Lemma 2.6.17 (i) of van der Vaart and Wellner (1996) implies that \(\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\) is a VC-class of subsets of \(\mathscr{W}\) with dimension not larger than \(V\). Following a similar argument, we can show that the same conclusion also holds for the class \(\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\). Writing

\[\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\bigwedge\mathscr{C}_{\leq}(\mathcal{F},\overline{f})=\left\{C_{\geq}\cap C_{\leq},\ C_{\geq}\in\mathscr{C}_{\geq}(\mathcal{F},\overline{f}),\ C_{\leq}\in\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\right\},\]

we can deduce that \(\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\bigwedge\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\) is a VC-class of subsets of \(\mathscr{W}\) with dimension not larger than \(9.41V\), according to Theorem 1.1 of van der Vaart and Wellner (2009). It is easy to note that \(\mathscr{C}_{=}(\mathcal{F},\overline{f})\subset\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\bigwedge\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\), which completes the proof.
**Lemma 4**.: _Under Assumption 1, whatever the conditional distributions \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\) of the \(Y_{i}\) given \(W_{i}\) and the distributions of \(W_{i}\), any \(\ell\)-type estimator \(\widehat{f}\) based on the set \(\mathcal{F}\) satisfies that for any \(\overline{f}\in\mathcal{F}\) and any \(\xi>0\), with a probability at least \(1-e^{-\xi}\),_ \[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q}^{ \star},\mathbf{Q}_{\overline{f}})+\frac{2\boldsymbol{\vartheta}(\overline{f}) }{n}+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}, \tag{24}\] _where_ \[\boldsymbol{\vartheta}(\overline{f})= \mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\mathbf{T}_{ l}(\boldsymbol{X},\overline{f},f^{\prime})-\mathbb{E}\left[\mathbf{T}_{l}( \boldsymbol{X},\overline{f},f^{\prime})\right]\right]\right]\] \[\vee\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[ \mathbb{E}\left[\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right] -\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right]\right].\] Proof.: The proof of Lemma 4 builds upon the idea presented in the proof of Theorem 1 in Baraud et al. (2022), but with certain modifications to adapt it to the regression setting. For any \(f_{1},f_{2}\in\mathcal{F}\), define \[\mathbf{Z}_{+}(\mathbf{X},f_{1}) =\sup_{f_{2}\in\mathcal{F}}\left[\mathbf{T}_{l}(\mathbf{X},f_{1},f_{2} )-\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},f_{1},f_{2})\right]\right]\] \[\mathbf{Z}_{-}(\mathbf{X},f_{1}) =\sup_{f_{2}\in\mathcal{F}}\left[\mathbb{E}\left[\mathbf{T}_{l}( \mathbf{X},f_{2},f_{1})\right]-\mathbf{T}_{l}(\mathbf{X},f_{2},f_{1})\right]\] and set \[\mathbf{Z}(\mathbf{X},f_{1})=\mathbf{Z}_{+}(\mathbf{X},f_{1})\vee\mathbf{Z}_{-}(\mathbf{X},f_{1}).\] As per Lemma 1, for any \(f,\overline{f}\in\mathcal{F}\), it holds that \[n\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{f}) \leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbb{ E}\left[\mathbf{T}_{l}(\mathbf{X},f,\overline{f})\right]\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbb{ E}\left[\mathbf{T}_{l}(\mathbf{X},f,\overline{f})\right]-\mathbf{T}_{l}(\mathbf{X},f, \overline{f})+\mathbf{T}_{l}(\mathbf{X},f,\overline{f})\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},f,\overline{f}) \tag{25}\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},f).\] By utilizing (25), substituting \(f\) with \(\widehat{f}(\mathbf{X})\in\mathscr{E}(\mathbf{X},\epsilon)\), and employing the definition of \(\widehat{f}(\mathbf{X})\), we can derive that \[n\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}}) \leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},\widehat{f}) \tag{26}\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},\overline{f})+\epsilon.\] Moreover, we can compute that \[\mathbf{T}_{l}(\mathbf{X},\overline{f}) =\sup_{f\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X},\overline{f},f) \tag{27}\] \[\leq\sup_{f\in\mathcal{F}}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f },f)-\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)\right]\right]+\sup_ {f\in\mathcal{F}}\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)\right]\] \[\leq\mathbf{Z}(\mathbf{X},\overline{f})+n\ell(\mathbf{Q}^{\star}, \mathbf{Q}_{\overline{f}}),\] where the 
second inequality is obtained by applying Lemma 1. Combining (26) and (27), we obtain that for any \(\overline{f}\in\mathcal{F}\),

\[n\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\mathbf{Z}(\mathbf{X},\overline{f})+2n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\epsilon. \tag{28}\]

In what follows, we study the term \(\mathbf{Z}(\mathbf{X},\overline{f})\) to gain further insight into the risk bound for the estimator \(\widehat{f}\). It is worth noting that for any \(\overline{f},f\in\mathcal{F}\) and \((w,y),(w^{\prime},y^{\prime})\in\mathscr{W}\times\mathscr{Y}\), the following inequality holds:

\[\big|t_{(\overline{f},f)}(w,y)-t_{(\overline{f},f)}(w^{\prime},y^{\prime})\big|\leq 2.\]

Writing \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathscr{X}^{n}\) and \(\mathbf{x}^{\prime}_{(i)}=(x_{1},\ldots,x^{\prime}_{i},\ldots,x_{n})\in\mathscr{X}^{n}\), as an immediate consequence, we can derive that

\[\frac{1}{2}\big|\mathbf{Z}_{+}(\mathbf{x},\overline{f})-\mathbf{Z}_{+}(\mathbf{x}^{\prime}_{(i)},\overline{f})\big|\leq 1.\]

By following a similar approach to the proof of Lemma 2 in Baraud (2021), with the term \(\xi\) replaced by \(\xi+\log 2\), one can conclude that with a probability of at least \(1-(1/2)e^{-\xi}\),

\[\mathbf{Z}_{+}(\mathbf{X},\overline{f})\leq\mathbb{E}\left[\mathbf{Z}_{+}(\mathbf{X},\overline{f})\right]+\sqrt{2n(\xi+\log 2)}\]
\[=\mathbb{E}\left[\sup_{f\in\mathcal{F}}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)-\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)\right]\right]\right]+\sqrt{2n(\xi+\log 2)} \tag{29}\]
\[\leq\mathbf{\vartheta}(\overline{f})+\sqrt{2n(\xi+\log 2)}.\]

A similar argument gives that with a probability at least \(1-(1/2)e^{-\xi}\),

\[\mathbf{Z}_{-}(\mathbf{X},\overline{f})\leq\mathbf{\vartheta}(\overline{f})+\sqrt{2n(\xi+\log 2)}. \tag{30}\]

By combining (29) and (30), we can derive that with a probability at least \(1-e^{-\xi}\),

\[\mathbf{Z}(\mathbf{X},\overline{f})=\mathbf{Z}_{+}(\mathbf{X},\overline{f})\lor\mathbf{Z}_{-}(\mathbf{X},\overline{f})\leq\mathbf{\vartheta}(\overline{f})+\sqrt{2n(\xi+\log 2)}. \tag{31}\]

Finally, plugging (31) into (28) gives the upper bound

\[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\frac{2\mathbf{\vartheta}(\overline{f})}{n}+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}. \tag{32}\]

**Proposition 7**.: _Let \(f_{1}\) and \(f_{2}\) be two functions belonging to \(\mathcal{F}\). For all \(w\in\mathscr{W}\), the following equality holds_

\[Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)})=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right)-\frac{1}{2}\mathds{1}_{f_{1}(w)=f_{2}(w)},\]

_where \(\Phi\) stands for the cumulative distribution function of the standard normal distribution._

Proof.: For all \(w\in\mathscr{W}\) satisfying \(f_{1}(w)=f_{2}(w)\), it is easy to see that \(Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)})=0\).
The claimed equality then holds, since

\[\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right)-\frac{1}{2}\mathds{1}_{f_{1}(w)=f_{2}(w)}=\Phi(0)-\frac{1}{2}=0.\]

For all \(w\in\mathscr{W}\) satisfying \(f_{1}(w)>f_{2}(w)\),

\[Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)}) =\int_{-\infty}^{[f_{1}(w)+f_{2}(w)]/2}q_{f_{1}(w)}(y)dy\]
\[=\int_{-\infty}^{[f_{2}(w)-f_{1}(w)]/2\sigma}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}dt\]
\[=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right).\]

For all \(w\in\mathscr{W}\) satisfying \(f_{1}(w)<f_{2}(w)\),

\[Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)}) =\int_{[f_{1}(w)+f_{2}(w)]/2}^{+\infty}q_{f_{1}(w)}(y)dy\]
\[=\int_{[f_{2}(w)-f_{1}(w)]/2\sigma}^{+\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}dt\]
\[=1-\Phi\left(\frac{f_{2}(w)-f_{1}(w)}{2\sigma}\right)\]
\[=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right).\]

Therefore, the claimed equality holds in all cases.

The following result comes from Proposition 3.1 of Baraud (2016), and we shall use it repeatedly in our proofs.

**Lemma 5**.: _Let \(X_{1},\ldots,X_{n}\) be independent random variables with values in \((E,\mathcal{E})\) and \(\mathcal{C}\) a \(VC\)-class of subsets of \(E\) with \(VC\)-dimension not larger than \(V\geq 1\) that satisfies for \(\sigma\in(0,1]\), \(\sum_{i=1}^{n}\mathbb{P}(X_{i}\in C)\leq n\sigma^{2}\) for all \(C\in\mathcal{C}\). Then,_

\[\mathbb{E}\left[\sup_{C\in\mathcal{C}}\Big|\sum_{i=1}^{n}\left(\mathds{1}_{C}(X_{i})-\mathbb{P}(X_{i}\in C)\right)\Big|\right]\leq 10(\sigma\lor a)\sqrt{nV\left[5+\log\left(\frac{1}{\sigma\lor a}\right)\right]}\]

_where_

\[a=\left[32\sqrt{\frac{(V\wedge n)}{n}\log\left(\frac{2en}{V\wedge n}\right)}\right]\wedge 1.\]

To prove Theorem 1, we also need the following result, which can be obtained by a modification of the proof of Theorem 2 in Baraud and Chen (2020).

**Lemma 6**.: _Let \(W_{1},\ldots,W_{n}\) be \(n\) independent random variables with values in \((\mathscr{W},\mathcal{W})\) and \(\mathcal{F}\) an at most countable VC-subgraph class of functions with values in \([0,1]\) and VC-dimension not larger than \(V\geq 1\). If_

\[Z(\mathcal{F})=\sup_{f\in\mathcal{F}}\left|\sum_{i=1}^{n}(f(W_{i})-\mathbb{E}\left[f(W_{i})\right])\right|\ \ \text{and}\ \ \sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[f^{2}(W_{i})\right]\leq\sigma^{2}\leq 1,\]

_then_

\[\mathbb{E}\left[Z(\mathcal{F})\right]\leq 4.61\sqrt{nV\sigma^{2}\mathcal{L}(\sigma)}+85V\mathcal{L}(\sigma),\]

_with \(\mathcal{L}(\sigma)=9.11+\log(1/\sigma^{2})\)._

Proof of Theorem 1.: We now proceed to prove Theorem 1. Utilizing the result from Lemma 4, we only need to establish an upper bound for the term \(\boldsymbol{\vartheta}(\overline{f})\). Let us express \(\boldsymbol{\vartheta}(\overline{f})\) as \(\boldsymbol{\vartheta}(\overline{f})=\boldsymbol{\vartheta}_{1}(\overline{f})\vee\boldsymbol{\vartheta}_{2}(\overline{f})\), where

\[\boldsymbol{\vartheta}_{1}(\overline{f})=\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\mathbf{T}_{l}(\boldsymbol{X},\overline{f},f^{\prime})-\mathbb{E}\left[\mathbf{T}_{l}(\boldsymbol{X},\overline{f},f^{\prime})\right]\right]\right]\]

and

\[\boldsymbol{\vartheta}_{2}(\overline{f})=\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\mathbb{E}\left[\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right]-\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right]\right].\]

In what follows, we will derive an upper bound for the term \(\boldsymbol{\vartheta}_{1}(\overline{f})\).
For any \(f_{1},f_{2}\in\mathcal{F}\), define

\[g_{(f_{1},f_{2})}(w,y)=\mathds{1}_{q_{f_{2}(w)}(y)>q_{f_{1}(w)}(y)},\quad\text{for all }(w,y)\in\mathscr{W}\times\mathscr{Y}.\]

Let \(\Phi\) be the cumulative distribution function of the standard normal distribution. For any \(f_{1},f_{2}\in\mathcal{F}\), define

\[h_{(f_{1},f_{2})}(w)=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right),\quad\text{for all }w\in\mathscr{W}\]

and

\[k_{(f_{1},f_{2})}(w)=\frac{1}{2}\mathds{1}_{f_{1}(w)=f_{2}(w)},\quad\text{for all }w\in\mathscr{W}.\]

Given any \(f_{1},f_{2}\in\mathcal{F}\), according to Proposition 7, we have that for all \((w,y)\in\mathscr{W}\times\mathscr{Y}\),

\[t_{(f_{1},f_{2})}(w,y) =g_{(f_{1},f_{2})}(w,y)-\left[h_{(f_{1},f_{2})}(w)-k_{(f_{1},f_{2})}(w)\right] \tag{33}\]
\[=g_{(f_{1},f_{2})}(w,y)-h_{(f_{1},f_{2})}(w)+k_{(f_{1},f_{2})}(w).\]

By the definition of \(\mathbf{T}_{l}\) and the equality (33), we deduce that

\[\boldsymbol{\vartheta}_{1}(\overline{f}) =\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\sum_{i=1}^{n}\left(t_{(\overline{f},f^{\prime})}(W_{i},Y_{i})-\mathbb{E}\left[t_{(\overline{f},f^{\prime})}(W_{i},Y_{i})\right]\right)\right]\right]\]
\[\leq\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(g_{(\overline{f},f^{\prime})}(W_{i},Y_{i})-\mathbb{E}\left[g_{(\overline{f},f^{\prime})}(W_{i},Y_{i})\right]\right)\right|\right]\]
\[\quad+\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(h_{(\overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[h_{(\overline{f},f^{\prime})}(W_{i})\right]\right)\right|\right]\]
\[\quad+\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(k_{(\overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[k_{(\overline{f},f^{\prime})}(W_{i})\right]\right)\right|\right].\]

As shown in Proposition 4, under Assumption 1, the class of subsets \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\) is VC with dimension not larger than \(9.41V\). Hence, applying Lemma 5 with \(\sigma=1\), we can obtain that

\[\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(g_{(\overline{f},f^{\prime})}(W_{i},Y_{i})-\mathbb{E}\left[g_{(\overline{f},f^{\prime})}(W_{i},Y_{i})\right]\right)\right|\right]\leq 68.6\sqrt{nV}. \tag{34}\]

According to Proposition 6, under Assumption 1, the class of subsets \(\mathscr{C}_{=}(\mathcal{F},\overline{f})\) is VC on \(\mathscr{W}\) with dimension not larger than \(9.41V\). Applying Lemma 5 again, we derive that

\[\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(k_{(\overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[k_{(\overline{f},f^{\prime})}(W_{i})\right]\right)\right|\right]\leq 34.3\sqrt{nV}. \tag{35}\]

Moreover, under Assumption 1, the class of functions \(\{f^{\prime}-\overline{f},\ f^{\prime}\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\). Given the value of \(\sigma>0\), since the function \(\psi(z)=-|z|/2\sigma\), for all \(z\in\mathbb{R}\), is unimodal, the class \(\{\psi\circ(f^{\prime}-\overline{f}),\ f^{\prime}\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(9.41V\), as stated in Proposition 42 (vi) of Baraud et al. (2017). Then according to Proposition 42 (ii) of Baraud et al. (2017), \(\{h_{(\overline{f},f^{\prime})},\ f^{\prime}\in\mathcal{F}\}=\{\Phi\circ\left[\psi\circ(f^{\prime}-\overline{f})\right],\ f^{\prime}\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(9.41V\).
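For the reader's convenience, let us record where these numerical constants come from. Lemma 5, applied with \(\sigma=1\) to a VC-class of dimension at most \(9.41V\), yields the factor

\[10\sqrt{5\times 9.41\,nV}\leq 68.6\sqrt{nV};\]

the bound (35) is half of (34) because \(k_{(\overline{f},f^{\prime})}\) takes values in \(\{0,1/2\}\), and the constants in (36) below arise from Lemma 6 with \(\sigma^{2}=1\), since \(4.61\sqrt{9.41\times 9.11}\leq 42.7\) and \(85\times 9.41\times 9.11\leq 7286.7\).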
It is easy to note that for any \(f^{\prime}\in\mathcal{F}\),

\[\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[h_{(\overline{f},f^{\prime})}^{2}(W_{i})\right]\leq 1.\]

Applying Lemma 6 to the class \(\{h_{(\overline{f},f^{\prime})},\ f^{\prime}\in\mathcal{F}\}\) gives the result that

\[\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(h_{(\overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[h_{(\overline{f},f^{\prime})}(W_{i})\right]\right)\right|\right]\leq 42.7\sqrt{nV}+7286.7V. \tag{36}\]

Combining (34), (35) and (36), we can conclude that for any \(\overline{f}\in\mathcal{F}\),

\[\boldsymbol{\vartheta}_{1}(\overline{f})\leq 145.6\sqrt{nV}+7286.7V. \tag{37}\]

By following a similar line of proof, one can also derive that

\[\boldsymbol{\vartheta}_{2}(\overline{f})\leq 145.6\sqrt{nV}+7286.7V. \tag{38}\]

Therefore, (37) and (38) together imply that for any \(\overline{f}\in\mathcal{F}\),

\[\boldsymbol{\vartheta}(\overline{f})=\boldsymbol{\vartheta}_{1}(\overline{f})\vee\boldsymbol{\vartheta}_{2}(\overline{f})\leq 145.6\sqrt{nV}+7286.7V. \tag{39}\]

By substituting the bound (39) into (24), we infer that for any \(\overline{f}\in\mathcal{F}\) and any \(\xi>0\), with a probability at least \(1-e^{-\xi}\),

\[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n}+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}, \tag{40}\]

which establishes inequality (5). Using the triangle inequality,

\[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\leq\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}}),\]

we derive that any \(\ell\)-type estimator \(\widehat{f}\) on the set \(\mathcal{F}\) satisfies, for all \(\xi>0\), with a probability at least \(1-e^{-\xi}\),

\[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\leq 3\ell(\mathbf{Q}^{\star},\boldsymbol{\mathscr{Q}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n}+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}.\]

### 6.3. Proof of Lemma 3

Proof.: Lemma 3 can be proven using a similar argument to the one in the proof of Lemma 11 in Chen (2022), whose main idea is inspired by the proof of Lemma 5 of Schmidt-Hieber (2020). We only need to show that for any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\), there exists a sequence of functions \(f_{i}\in\mathcal{F}_{(L,\boldsymbol{p},K)}\), \(i\in\mathbb{N}^{*}\), such that

\[\lim_{i\to+\infty}\|f-f_{i}\|_{\infty}=0.\]

For any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\), recall that it can be written as

\[f(\boldsymbol{w})=M_{L}\circ\sigma\circ M_{L-1}\circ\cdots\circ\sigma\circ M_{0}(\boldsymbol{w})\quad\text{for any }\boldsymbol{w}\in\left[0,1\right]^{d},\]

where

\[M_{l}(\boldsymbol{y})=A_{l}(\boldsymbol{y})+b_{l},\quad\text{for }l=0,\ldots,L,\]

\(A_{l}\) is a \(p_{l+1}\times p_{l}\) weight matrix and the shift vector \(b_{l}\) is of size \(p_{l+1}\) for any \(l\in\{0,\ldots,L\}\). For \(l\in\{1,\ldots,L\}\), we define the function \(f_{l}^{+}:\left[0,1\right]^{d}\to\mathbb{R}^{p_{l}}\),

\[f_{l}^{+}(\boldsymbol{w})=\sigma\circ M_{l-1}\circ\cdots\circ\sigma\circ M_{0}(\boldsymbol{w})\]

and for \(l\in\{1,\ldots,L+1\}\), we define \(f_{l}^{-}:\mathbb{R}^{p_{l-1}}\to\mathbb{R}\),

\[f_{l}^{-}(\boldsymbol{x})=M_{L}\circ\sigma\circ\cdots\circ\sigma\circ M_{l-1}(\boldsymbol{x}).\]

We set the notations \(f_{0}^{+}(\boldsymbol{x})=f_{L+2}^{-}(\boldsymbol{x})=\boldsymbol{x}\).
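With these conventions, note that the head and tail maps recompose the full network,

\[f=f_{l+1}^{-}\circ f_{l}^{+}\quad\text{for every }l\in\{0,\ldots,L\}.\]

This telescoping identity underlies the estimate below: the \(l\)-th summand there compares two networks whose first \(l-1\) affine maps come from \(f\), whose remaining maps come from \(f_{i}\), and which differ only in the \(l\)-th affine map.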
Given a vector \(\boldsymbol{v}=(v_{1},\ldots,v_{p})^{\top}\) of any size \(p\in\mathbb{N}^{*}\), we denote \(|\boldsymbol{v}|_{\infty}=\max_{i=1,\ldots,p}|v_{i}|\). For any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\), using the fact that the absolute values of all the parameters are bounded by \(K\) and \(\boldsymbol{w}\in\left[0,1\right]^{d}\), we have for all \(l\in\{1,\ldots,L\}\)

\[\left|f_{l}^{+}(\boldsymbol{w})\right|_{\infty}\leq K_{+}^{l}\prod_{k=0}^{l-1}(p_{k}+1),\]

where \(K_{+}=\max\{K,1\}\), and \(f_{l}^{-}\), \(l\in\{1,\ldots,L+1\}\), is a multivariate Lipschitz function with Lipschitz constant bounded by \(\prod_{k=l-1}^{L}(K_{+}p_{k})\).

For any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\) with weight matrices and shift vectors \(\{M_{l}=(A_{l},b_{l})\}_{l=0}^{L}\) and for all \(\epsilon>0\), since \(\mathbb{Q}\) is dense in \(\mathbb{R}\), one can choose a sequence of functions \(f_{i}\in\mathcal{F}_{(L,\boldsymbol{p},K)}\) and an \(N_{\epsilon}>0\) such that for all \(i\geq N_{\epsilon}\), all the non-zero parameters of \(f_{i}\) lie within

\[\frac{\epsilon}{(L+1)\prod_{k=0}^{L+1}\left[K_{+}(p_{k}+1)\right]}\]

of the corresponding ones in \(f\). We denote the weight matrices and shift vectors of the function \(f_{i}\) as \(\{M_{l}^{i}=(A_{l}^{i},b_{l}^{i})\}_{l=0}^{L}\). We note that

\[f_{i}(\boldsymbol{w})=f_{i,2}^{-}\circ\sigma\circ M_{0}^{i}\circ f_{0}^{+}(\boldsymbol{w})\]

and

\[f(\boldsymbol{w})=f_{i,L+2}^{-}\circ M_{L}\circ f_{L}^{+}(\boldsymbol{w}).\]

Therefore, for all \(i\geq N_{\epsilon}\) and all \(\mathbf{w}\in[0,1]^{d}\),

\[|f_{i}(\mathbf{w})-f(\mathbf{w})|\leq \sum_{l=1}^{L}\left|f_{i,l+1}^{-}\circ\sigma\circ M_{l-1}^{i}\circ f_{l-1}^{+}(\mathbf{w})-f_{i,l+1}^{-}\circ\sigma\circ M_{l-1}\circ f_{l-1}^{+}(\mathbf{w})\right|\]
\[+\left|M_{L}^{i}\circ f_{L}^{+}(\mathbf{w})-M_{L}\circ f_{L}^{+}(\mathbf{w})\right|\]
\[\leq \sum_{l=1}^{L}\left[\prod_{k=l}^{L}K_{+}p_{k}\right]\left|M_{l-1}^{i}\circ f_{l-1}^{+}(\mathbf{w})-M_{l-1}\circ f_{l-1}^{+}(\mathbf{w})\right|_{\infty}\]
\[+\left|M_{L}^{i}\circ f_{L}^{+}(\mathbf{w})-M_{L}\circ f_{L}^{+}(\mathbf{w})\right|\]
\[\leq \sum_{l=1}^{L+1}\left[\prod_{k=l}^{L+1}K_{+}p_{k}\right]\left|M_{l-1}^{i}\circ f_{l-1}^{+}(\mathbf{w})-M_{l-1}\circ f_{l-1}^{+}(\mathbf{w})\right|_{\infty}\]
\[\leq \sum_{l=1}^{L+1}\left[\prod_{k=l}^{L+1}K_{+}p_{k}\right]\left[\left|\left(A_{l-1}^{i}-A_{l-1}\right)\circ f_{l-1}^{+}(\mathbf{w})\right|_{\infty}+|b_{l-1}^{i}-b_{l-1}|_{\infty}\right]\]
\[< \frac{\sum_{l=1}^{L+1}\left[\prod_{k=l}^{L+1}K_{+}p_{k}\right]\left(p_{l-1}\left|f_{l-1}^{+}(\mathbf{w})\right|_{\infty}+1\right)}{(L+1)\prod_{k=0}^{L+1}\left[K_{+}(p_{k}+1)\right]}\epsilon\]
\[< \epsilon.\]

Hence, by definition we can conclude that \(\mathcal{F}_{(L,\mathbf{p},K)}\) is dense in \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\) with respect to the supremum norm \(\|\cdot\|_{\infty}\).

### 6.4. Proof of Corollary 1

Proof.: Recall that, in accordance with the general result (7), for any \(n\geq V(\overline{\mathcal{F}}_{(L,p)})\geq V(\overline{\mathcal{F}}_{(L,p,K)})\), we obtain

\[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon}\left[\inf_{f\in\mathcal{F}_{(L,p,K)}}\ell(Q_{f^{\star}},Q_{f})+\sqrt{\frac{V(\overline{\mathcal{F}}_{(L,p,K)})}{n}}\right], \tag{41}\]

where \(C_{\epsilon}>0\) is a numerical constant depending on \(\epsilon\) only.
Then, applying Lemma 3 and inequality (11), we derive from (41) that

\[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right] \leq C_{\epsilon,\sigma}\left[\inf_{f\in\mathcal{F}_{(L,p,K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(L,p,K)})}{n}}\right]\]
\[\leq C_{\epsilon,\sigma}\left[\inf_{f\in\overline{\mathcal{F}}_{(L,p,K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(L,p,K)})}{n}}\right], \tag{42}\]

where \(C_{\epsilon,\sigma}\) is a numerical constant depending only on \(\epsilon\) and \(\sigma\). On the one hand, as a consequence of Proposition 2, we have that for the network \(\overline{\mathcal{F}}_{(L,p,K)}\) with

\[p=38(\lfloor\alpha\rfloor+1)^{2}3^{d}d^{\lfloor\alpha\rfloor+1}N\lceil\log_{2}(8N)\rceil, \tag{43}\]

\[L=21(\lfloor\alpha\rfloor+1)^{2}M\lceil\log_{2}(8M)\rceil+2d \tag{44}\]

and \(K\) being large enough,

\[\inf_{f\in\overline{\mathcal{F}}_{(L,p,K)}}\|f^{\star}-f\|_{1,P_{W}} =\inf_{f\in\overline{\mathcal{F}}_{(L,p,K)}}\int_{\mathscr{W}}|f^{\star}(w)-f(w)|dP_{W}(w) \tag{45}\]
\[\leq 19B(\lfloor\alpha\rfloor+1)^{2}d^{\lfloor\alpha\rfloor+(\alpha\lor 1)/2}(NM)^{-2\alpha/d}.\]

On the other hand, given the equalities (43) and (44), we have \(p\geq 342\) and \(L\geq 65\), for any \(\alpha\in\mathbb{R}_{+}^{*}\). By applying Proposition 1, we can derive through a basic computation that

\[V(\overline{\mathcal{F}}_{(L,p,K)}) \leq(L+1)\left(s+1\right)\log_{2}\left[2\left(2e(L+1)\left(\frac{pL}{2}+1\right)\right)^{2}\right]\]
\[\leq C_{d}p^{2}L^{2}\log_{2}\left(pL^{2}\right)\]
\[\leq C_{\alpha,d}(NM)^{2}\left[\log_{2}(2N)\log_{2}(2M)\right]^{3}, \tag{46}\]

where \(C_{d}\) only depends on \(d\) and \(C_{\alpha,d}\) only depends on \(d\) and \(\alpha\). Plugging (45) and (46) into (42), we can conclude that

\[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon,\sigma,\alpha,d,B}\left[(NM)^{-2\alpha/d}+\frac{NM}{\sqrt{n}}\left(\log_{2}(2N)\log_{2}(2M)\right)^{3/2}\right],\]

where \(C_{\epsilon,\sigma,\alpha,d,B}>0\) only depends on \(\epsilon,\sigma,\alpha,d\) and \(B\).

### 6.5. Proof of Proposition 3

Proof.: Before proving Proposition 3, we first introduce the following rules for network combination, which are detailed extensively in Section 7.1 of Schmidt-Hieber (2020).

_Composition_: Let \(f_{1}\in\overline{\mathcal{F}}(L,\mathbf{p})\) and \(f_{2}\in\overline{\mathcal{F}}(L^{\prime},\mathbf{p}^{\prime})\) be such that \(p_{L+1}=p_{0}^{\prime}\). Let \(\mathbf{v}\in\mathbb{R}^{p_{L+1}}\) be a vector. We define the composed network \(f_{2}\circ\sigma_{\mathbf{v}}(f_{1})\), where

\[\sigma_{\mathbf{v}}\begin{pmatrix}y_{1}\\ \vdots\\ y_{p_{L+1}}\end{pmatrix}=\begin{pmatrix}\sigma(y_{1}-v_{1})\\ \vdots\\ \sigma(y_{p_{L+1}}-v_{p_{L+1}})\end{pmatrix},\]

for any vector \(\mathbf{y}=(y_{1},\ldots,y_{p_{L+1}})^{\top}\in\mathbb{R}^{p_{L+1}}\). Then \(f_{2}\circ\sigma_{\mathbf{v}}(f_{1})\) belongs to the space \(\overline{\mathcal{F}}(L+L^{\prime}+1,(\mathbf{p},p_{1}^{\prime},\ldots,p_{L+1}^{\prime}))\).

_Parallelization_: Let \(f_{1}\) and \(f_{2}\) be two networks with an equal number of hidden layers and identical input dimensions. Specifically, let \(f_{1}\in\overline{\mathcal{F}}(L,\mathbf{p})\) and \(f_{2}\in\overline{\mathcal{F}}(L,\mathbf{p}^{\prime})\), where \(p_{0}=p_{0}^{\prime}\).
The parallelized network \((f_{1},f_{2})\) concurrently computes \(f_{1}\) and \(f_{2}\) within a joint network belonging to the class \(\overline{\mathcal{F}}(L,(p_{0},p_{1}+p_{1}^{\prime},\ldots,p_{L+1}+p_{L+1}^{\prime}))\).

We will also use the following inequality later in the proof. It can be derived through a minor modification of the proof of Lemma 3 in Schmidt-Hieber (2020).

**Lemma 7**.: _Let \(k\in\mathbb{N}^{*}\), \(\mathbf{d}=(d_{0},\ldots,d_{k})\in(\mathbb{N}^{*})^{k+1}\), \(\mathbf{t}=(t_{0},\ldots,t_{k})\in(\mathbb{N}^{*})^{k+1}\) with \(t_{i}\leq d_{i}\) and \(\boldsymbol{\alpha}=(\alpha_{0},\ldots,\alpha_{k})\in(\mathbb{R}^{*}_{+})^{k+1}\). For any \(i\in\{0,\ldots,k\}\) and \(j\in\{1,\ldots,d_{i+1}\}\) with \(d_{k+1}=1\), let \(h_{ij}\in\mathcal{H}^{\alpha_{i}}([0,1]^{t_{i}}\,,Q_{i})\) taking values in \([0,1]\) for some \(Q_{i}\geq 1\) and \(h_{i}=(h_{i1},\ldots,h_{id_{i+1}})^{\top}\). Then for any function \(\widetilde{h}_{i}=(\widetilde{h}_{i1},\ldots,\widetilde{h}_{id_{i+1}})^{\top}\) with \(\widetilde{h}_{ij}:[0,1]^{t_{i}}\to[0,1]\),_

\[\|h_{k}\circ\cdots\circ h_{0}-\widetilde{h}_{k}\circ\cdots\circ\widetilde{h}_{0}\|_{\infty}\leq\left(\prod_{i=0}^{k}Q_{i}\sqrt{d_{i}}\right)\sum_{i=0}^{k}|||h_{i}-\widetilde{h}_{i}|||_{\infty}^{\prod_{l=i+1}^{k}(\alpha_{l}\wedge 1)},\]

_where \(|||f|||_{\infty}\) denotes the sup-norm of the function \(\boldsymbol{x}\mapsto|f(\boldsymbol{x})|_{\infty}\)._

The essential strategy for establishing Proposition 3 is derived from a part of the proof of Theorem 1 in Schmidt-Hieber (2020). However, we employ distinct fundamental networks, as suggested by Proposition 2, to approximate functions with Hölder smoothness. This, in turn, leads to more specific neural network structures for approximating \(f^{\star}=f_{k}\circ\cdots\circ f_{0}\) compared to the sparsity-based networks considered in Theorem 1 of Schmidt-Hieber (2020). To begin with, we rewrite

\[f^{\star}=f_{k}\circ\cdots\circ f_{0}=g_{k}\circ\cdots\circ g_{0},\]

where

\[g_{0}:=\frac{f_{0}}{2B}+\frac{1}{2},\quad g_{k}:=f_{k}(2B\cdot-B)\]

and

\[g_{i}:=\frac{f_{i}(2B\cdot-B)}{2B}+\frac{1}{2}\quad\text{for all }i\in\{1,\ldots,k-1\}.\]

Given the condition \(B\geq 1\), we can readily confirm that \(g_{0j}\in\mathcal{H}^{\alpha_{0}}([0,1]^{t_{0}}\,,Q_{0})\), \(g_{ij}\in\mathcal{H}^{\alpha_{i}}([0,1]^{t_{i}}\,,Q_{i})\), for \(i\in\{1,\ldots,k-1\}\) and \(g_{kj}\in\mathcal{H}^{\alpha_{k}}([0,1]^{t_{k}}\,,Q_{k})\), with \(Q_{0}=1\), \(Q_{i}=(2B)^{\alpha_{i}}\), for \(i\in\{1,\ldots,k-1\}\) and \(Q_{k}=2^{\alpha_{k}}B^{\alpha_{k}+1}\). We apply Proposition 2 to approximate each function \(g_{ij}\), for all \(j\in\{1,\ldots,d_{i+1}\}\), \(i\in\{0,\ldots,k\}\).
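When instantiating Proposition 2 with \(N=1\), the width constant simplifies: since \(\lceil\log_{2}(8)\rceil=3\) and \(38\times 3=114\), the width \(38(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1}N\lceil\log_{2}(8N)\rceil\) reduces to \(114(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1}\), which explains the factor \(114\) appearing in the widths \(p_{i}\) below.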
In particular, for all the functions \(g_{i1},\ldots,g_{id_{i+1}}\), we take \(N_{i}=1\), \(M_{i}=\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\) and consider a ReLU network \(\overline{\mathcal{F}}_{(L_{i},(t_{i},p_{i},\ldots,p_{i},1))}\) with

\[p_{i}=114(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1},\]

\[L_{i}=21(\lfloor\alpha_{i}\rfloor+1)^{2}\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\lceil\log_{2}(8\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)\rceil+2t_{i}.\]

According to Proposition 2, there exists a function \(\overline{g}_{ij}\in\overline{\mathcal{F}}_{(L_{i},(t_{i},p_{i},\ldots,p_{i},1))}\) such that

\[\|\overline{g}_{ij}-g_{ij}\|_{\infty} \leq 19Q_{i}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}(N_{i}M_{i})^{-2\alpha_{i}/t_{i}} \tag{47}\]
\[\leq 19Q_{i}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}(\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)^{-2\alpha_{i}/t_{i}}.\]

Let \(\widetilde{g}_{ij}=1-(1-\overline{g}_{ij})_{+}=\overline{g}_{ij}\wedge 1\). Recall that since \(\overline{g}_{ij}\in\overline{\mathcal{F}}_{(L_{i},(t_{i},p_{i},\ldots,p_{i},1))}\), it can be written as

\[\overline{g}_{ij}=\overline{M}_{L_{i}}^{(i)}\circ\sigma\circ\cdots\circ\sigma\circ\overline{M}_{0}^{(i)},\]

for some linear transformations \(\overline{M}_{0}^{(i)},\ldots,\overline{M}_{L_{i}}^{(i)}\). Let \(M_{L_{i}+2}(x)=M_{L_{i}+1}(x)=1-x\), for any \(x\in\mathbb{R}\). Then we have

\[\widetilde{g}_{ij} =M_{L_{i}+2}\circ\sigma\circ M_{L_{i}+1}\circ\overline{g}_{ij}\]
\[=M_{L_{i}+2}\circ\sigma\circ M_{L_{i}+1}\circ\overline{M}_{L_{i}}^{(i)}\circ\sigma\circ\cdots\circ\sigma\circ\overline{M}_{0}^{(i)}\]
\[=M_{L_{i}+2}\circ\sigma\circ\widetilde{M}_{L_{i}}^{(i)}\circ\sigma\circ\cdots\circ\sigma\circ\overline{M}_{0}^{(i)},\]

where \(\widetilde{M}_{L_{i}}^{(i)}=M_{L_{i}+1}\circ\overline{M}_{L_{i}}^{(i)}\). Hence, we deduce that \(\widetilde{g}_{ij}\in\overline{\mathcal{F}}_{(L_{i}+1,(t_{i},p_{i},\ldots,p_{i},1,1))}\). It is straightforward to observe that \(\sigma(\widetilde{g}_{ij})=(\overline{g}_{ij}\lor 0)\wedge 1\) assumes values in the interval \([0,1]\). Furthermore, as each function \(g_{ij}\) assumes values in the interval \([0,1]\) due to the transformation, we obtain

\[\|\sigma(\widetilde{g}_{ij})-g_{ij}\|_{\infty}\leq\|\widetilde{g}_{ij}-g_{ij}\|_{\infty}\leq\|\overline{g}_{ij}-g_{ij}\|_{\infty}. \tag{48}\]

Next, we amalgamate these individual small networks by employing the fundamental operations of neural networks introduced at the outset of this proof. Note that \(\overline{\mathcal{F}}_{(L_{i}+1,(t_{i},p_{i},\ldots,p_{i},1,1))}\subset\overline{\mathcal{F}}_{(L_{i}+1,(d_{i},p_{i},\ldots,p_{i},1,1))}\), for \(t_{i}\leq d_{i}\). By the parallelization rule, the function \(\widetilde{g}_{i}=(\widetilde{g}_{i1},\ldots,\widetilde{g}_{id_{i+1}})\) can be implemented by the ReLU neural network \(\overline{\mathcal{F}}_{(L_{i}+1,(d_{i},d_{i+1}p_{i},\ldots,d_{i+1}p_{i},d_{i+1},d_{i+1}))}\). A similar analysis implies that \(\overline{g}_{k}\) can be implemented by the ReLU neural network \(\overline{\mathcal{F}}_{(L_{k},(d_{k},d_{k+1}p_{k},\ldots,d_{k+1}p_{k},d_{k+1}))}\). To construct the function \(\widetilde{f}=\overline{g}_{k}\circ\widetilde{g}_{k-1}\circ\cdots\circ\widetilde{g}_{0}\) that approximates the function \(f^{\star}=g_{k}\circ\cdots\circ g_{0}\), we apply the composition rule to amalgamate the networks we have considered earlier.
It can be shown, with a similar argument as before, that for any \(k\in\mathbb{N}^{*}\), \(\widetilde{g}_{k-1}\circ\cdots\circ\widetilde{g}_{0}\) can be implemented by the ReLU neural network

\[\overline{\mathcal{F}}\left(\sum_{i=0}^{k-1}(L_{i}+1),\left(d_{0},\underbrace{d_{1}p_{0},\ldots,d_{1}p_{0}}_{L_{0}\text{ times}},\ldots,d_{k-1},\underbrace{d_{k}p_{k-1},\ldots,d_{k}p_{k-1}}_{L_{k-1}\text{ times}},d_{k},d_{k}\right)\right).\]

Note that \(d_{i}\geq 1\) for all \(0\leq i\leq k+1\) and \(p_{i}\geq 1\) for \(0\leq i\leq k\). Denote

\[\overline{p}=\max_{i=0,\ldots,k}d_{i+1}p_{i}.\]

Finally, we can conclude that the function \(\widetilde{f}=\overline{g}_{k}\circ\widetilde{g}_{k-1}\circ\cdots\circ\widetilde{g}_{0}\) can be implemented by the ReLU neural network \(\overline{\mathcal{F}}_{(\overline{L},(d_{0},\overline{p},\ldots,\overline{p},d_{k+1}))}\) with \(\overline{L}=k+\sum_{i=0}^{k}L_{i}\). Recall that \(Q_{0}=1\), \(Q_{i}=(2B)^{\alpha_{i}}\), for \(i\in\{1,\ldots,k-1\}\) and \(Q_{k}=2^{\alpha_{k}}B^{\alpha_{k}+1}\). Combining Lemma 7 with (47) and (48) yields the following upper bound for the approximation error,

\[\inf_{f\in\overline{\mathcal{F}}\left(\overline{L},(d_{0},\overline{p},\ldots,\overline{p},d_{k+1})\right)}\|f^{\star}-f\|_{\infty}\]
\[\leq(2B)^{1+\sum_{i=1}^{k}\alpha_{i}}\left(\prod_{i=0}^{k}\sqrt{d_{i}}\right)\left[\sum_{i=0}^{k}C_{\alpha_{i},t_{i},B}^{\prod_{l=i+1}^{k}(\alpha_{l}\wedge 1)}(\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)^{-2\alpha_{i}^{*}/t_{i}}\right],\]

where

\[C_{\alpha_{i},t_{i},B}=19(2B)^{\alpha_{i}+1}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}.\]

### 6.6. Proof of Corollary 2

Proof.: Firstly, we establish an upper bound for the VC-dimension of the ReLU neural network \(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}\). Using the fact that for any \(k\), \(\mathbf{d}\), \(\mathbf{t}\), \(\boldsymbol{\alpha}\) and any \(n\geq 1\), \(\overline{L}\geq L_{0}\geq 65\) and \(\overline{p}\geq p_{0}\geq 342\), we can deduce, through the application of Proposition 1, that

\[V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}) \leq C_{d_{0}}\overline{p}^{2}\overline{L}^{2}\log_{2}\left(\overline{p}\overline{L}^{2}\right) \tag{49}\]
\[\leq C_{k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha}}\left(\sum_{i=0}^{k}L_{i}\right)^{2}\log_{2}\left(\sum_{i=0}^{k}L_{i}\right),\]

where \(C_{k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha}}>0\) is a numerical constant depending only on \(k,\mathbf{d},\mathbf{t}\) and \(\boldsymbol{\alpha}\). Combining (7) with inequality (11), we obtain that for any \(n\geq V(\overline{\mathcal{F}}_{(\overline{L},\overline{p})})\geq V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)})\),

\[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right] \leq C_{\epsilon,\sigma}\left[\inf_{f\in\mathcal{F}_{(\overline{L},\overline{p},K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)})}{n}}\right] \tag{50}\]
\[\leq C_{\epsilon,\sigma}\left[\inf_{f\in\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)})}{n}}\right],\]

where the second inequality arises from the fact that \(\mathcal{F}_{(\overline{L},\overline{p},K)}\) is dense in \(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}\) with respect to the sup-norm, according to Lemma 3.
Finally, plugging the result provided in Proposition 3 and (49) into (50), we can conclude that

\[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right] \leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B}\left[\left(\sum_{i=0}^{k}n^{-\frac{\alpha_{i}^{\star}}{t_{i}+2\alpha_{i}^{\star}}}\right)+\left(\sum_{i=0}^{k}L_{i}\right)\sqrt{\frac{\log_{2}\left(\sum_{i=0}^{k}L_{i}\right)}{n}}\right]\]
\[\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B}\left[\sum_{i=0}^{k}\left(n^{-\frac{\alpha_{i}^{\star}}{t_{i}+2\alpha_{i}^{\star}}}+\frac{L_{i}}{\sqrt{n}}\right)\right]\sqrt{\log_{2}\left(\sum_{i=0}^{k}L_{i}\right)}\]
\[\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B}\left(\sum_{i=0}^{k}n^{-\frac{\alpha_{i}^{\star}}{t_{i}+2\alpha_{i}^{\star}}}\right)(\log n)^{3/2}.\]

### 6.7. Proof of Theorem 2

To establish lower bounds, we first prove the following variant of Assouad's lemma.

**Lemma 8**.: _Let \(\mathcal{P}\) be a family of probabilities on a measurable space \((\mathscr{X},\mathcal{X})\). If for some integer \(D\geq 1\), there is a subset of \(\mathcal{P}\) of the form \(\left\{P_{\boldsymbol{\varepsilon}},\ \boldsymbol{\varepsilon}\in\{0,1\}^{D}\right\}\) satisfying_

1. _there exists \(\eta>0\) such that for all \(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime}\in\{0,1\}^{D}\),_ \[\|P_{\boldsymbol{\varepsilon}}-P_{\boldsymbol{\varepsilon}^{\prime}}\|_{TV}\geq\eta\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})\quad\text{with}\quad\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})=\sum_{j=1}^{D}\mathds{1}_{\varepsilon_{j}\neq\varepsilon_{j}^{\prime}};\]
2. _there exists a constant \(a\in[0,1/2]\) such that_ \[h^{2}\left(P_{\boldsymbol{\varepsilon}},P_{\boldsymbol{\varepsilon}^{\prime}}\right)\leq\frac{a}{n}\quad\text{for all }\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime}\in\{0,1\}^{D}\text{ satisfying }\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})=1.\]

_Then for all measurable mappings \(\widehat{P}:\mathscr{X}^{n}\to\mathcal{P}\),_

\[\sup_{P\in\mathcal{P}}\mathbb{E}_{\mathbf{P}}\left[\|P-\widehat{P}(\boldsymbol{X})\|_{TV}\right]\geq\frac{\eta D}{4}\max\left\{1-\sqrt{2a},\ \frac{1}{2}\left(1-\frac{a}{n}\right)^{2n}\right\}, \tag{51}\]

_where \(\mathbb{E}_{\mathbf{P}}\) denotes the expectation with respect to a random variable \(\boldsymbol{X}=(X_{1},\ldots,X_{n})\) with distribution \(\mathbf{P}=P^{\otimes n}\)._

Proof.: Let \(\overline{\boldsymbol{\varepsilon}}\) minimize \(\boldsymbol{\varepsilon}\mapsto\|P-P_{\boldsymbol{\varepsilon}}\|_{TV}\) over \(\{0,1\}^{D}\) for a given probability \(P\) on \((\mathscr{X},\mathcal{X})\). Note that for all \(\boldsymbol{\varepsilon}\in\{0,1\}^{D}\),

\[\|P_{\boldsymbol{\varepsilon}}-P_{\overline{\boldsymbol{\varepsilon}}}\|_{TV}\leq\|P-P_{\boldsymbol{\varepsilon}}\|_{TV}+\|P-P_{\overline{\boldsymbol{\varepsilon}}}\|_{TV}\leq 2\|P-P_{\boldsymbol{\varepsilon}}\|_{TV}.\]

Thus, using property 1, we have for all \(\boldsymbol{\varepsilon}\in\{0,1\}^{D}\):

\[\|P_{\boldsymbol{\varepsilon}}-P\|_{TV}\geq\frac{\eta}{2}\delta(\boldsymbol{\varepsilon},\overline{\boldsymbol{\varepsilon}})=\sum_{i=1}^{D}\left[\varepsilon_{i}\ell_{i}(P)+(1-\varepsilon_{i})\ell_{i}^{\prime}(P)\right],\]

where \(\ell_{i}(P)=(\eta/2)\mathds{1}_{\overline{\varepsilon}_{i}=0}\) and \(\ell_{i}^{\prime}(P)=(\eta/2)\mathds{1}_{\overline{\varepsilon}_{i}=1}\), for \(i\in\{1,\ldots,D\}\).
Finally, the conclusion follows by applying a version of Assouad's lemma from Birgé (1986) with \(\beta_{i}=a/n\), for all \(i\in\{1,\ldots,D\}\), and \(\alpha=\eta/2\).

Now we prove Theorem 2. The roadmap is first to find a suitable collection of probabilities \(\mathcal{P}\) and then to apply Lemma 8 to derive the lower bound. The construction idea is inspired by the proof of Theorem 3 of Schmidt-Hieber (2020). Denote \(i^{*}\in\operatorname*{argmin}_{i=0,\ldots,k}\alpha_{i}^{*}/(2\alpha_{i}^{*}+t_{i})\). For simplicity, we write \(t^{*}=t_{i^{*}}\), \(\alpha^{*}=\alpha_{i^{*}}\) and \(\alpha^{**}=\alpha_{i^{*}}^{*}\). We define \(N_{n}=\lfloor\rho n^{1/(2\alpha^{**}+t^{*})}\rfloor\), \(h_{n}=1/N_{n}\) and \(\Lambda=\{0,h_{n},\ldots,(N_{n}-1)h_{n}\}\). Following the construction outlined on page 93 of Tsybakov (2009), we consider the function

\[\mathcal{K}(x)=a\exp\left(-\frac{1}{1-(2x-1)^{2}}\right)\mathds{1}_{|2x-1|\leq 1}\]

with \(a>0\). Provided that \(a\) is sufficiently small, we have \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\) with support on \([0,1]\). Moreover, for any \(\beta\in\mathbb{N}\) satisfying \(\beta\leq\lfloor\alpha^{*}\rfloor\), the \(\beta\)-th derivative of \(\mathcal{K}\) is zero at both \(x=0\) and \(x=1\), i.e., \(\mathcal{K}^{(\beta)}(0)=\mathcal{K}^{(\beta)}(1)=0\). We define the function \(\psi_{\mathbf{u}}\) on \([0,1]^{t^{*}}\) as

\[\psi_{\mathbf{u}}(w_{1},\ldots,w_{t^{*}})=h_{n}^{\alpha^{*}}\prod_{j=1}^{t^{*}}\mathcal{K}\left(\frac{w_{j}-u_{j}}{h_{n}}\right),\]

where \(\mathbf{u}=(u_{1},\ldots,u_{t^{*}})\in\mathcal{U}_{n}=\{(u_{1},\ldots,u_{t^{*}}),\;u_{i}\in\Lambda\}\). Note that for any \(\mathbf{u},\mathbf{u}^{\prime}\in\mathcal{U}_{n}\), \(\mathbf{u}\neq\mathbf{u}^{\prime}\), the supports of \(\psi_{\mathbf{u}}\) and \(\psi_{\mathbf{u}^{\prime}}\) are disjoint. For any \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t^{*}})\in\mathbb{N}^{t^{*}}\) satisfying \(\sum_{j=1}^{t^{*}}\beta_{j}\leq\lfloor\alpha^{*}\rfloor\), it holds that \(\|\partial^{\boldsymbol{\beta}}\psi_{\mathbf{u}}\|_{\infty}\leq 1\), due to the fact that \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\). Set \(\mathcal{I}_{\mathbf{u}}=[u_{1},u_{1}+h_{n}]\times\cdots\times[u_{t^{*}},u_{t^{*}}+h_{n}]\). Moreover, for any \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t^{*}})\) with \(\sum_{j=1}^{t^{*}}\beta_{j}=\lfloor\alpha^{*}\rfloor\), using the fact that \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\) and the triangle inequality, we obtain that for any \(\boldsymbol{x},\boldsymbol{y}\in\mathcal{I}_{\mathbf{u}}\),

\[\frac{\left|\partial^{\boldsymbol{\beta}}\psi_{\mathbf{u}}(\boldsymbol{x})-\partial^{\boldsymbol{\beta}}\psi_{\mathbf{u}}(\boldsymbol{y})\right|}{\|\boldsymbol{x}-\boldsymbol{y}\|_{2}^{\alpha^{*}-\lfloor\alpha^{*}\rfloor}}\leq t^{*}.\]

Therefore, we have \(\psi_{\mathbf{u}}\in\mathcal{H}^{\alpha^{*}}(\mathcal{I}_{\mathbf{u}},t^{*})\).
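Both bounds rest on an elementary scaling identity, which we record for the reader's convenience: for any \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t^{*}})\in\mathbb{N}^{t^{*}}\),

\[\partial^{\boldsymbol{\beta}}\psi_{\mathbf{u}}(w_{1},\ldots,w_{t^{*}})=h_{n}^{\alpha^{*}-\sum_{j=1}^{t^{*}}\beta_{j}}\prod_{j=1}^{t^{*}}\mathcal{K}^{(\beta_{j})}\left(\frac{w_{j}-u_{j}}{h_{n}}\right),\]

so the exponent of \(h_{n}\) is non-negative whenever \(\sum_{j}\beta_{j}\leq\lfloor\alpha^{*}\rfloor\) and the derivatives of \(\mathcal{K}\) are bounded by one, while for \(\sum_{j}\beta_{j}=\lfloor\alpha^{*}\rfloor\) the prefactor \(h_{n}^{\alpha^{*}-\lfloor\alpha^{*}\rfloor}\) exactly compensates the factor \(h_{n}^{-(\alpha^{*}-\lfloor\alpha^{*}\rfloor)}\) produced by the Hölder ratio of \(\mathcal{K}^{(\beta_{j})}\) in the rescaled variables.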
For any vector \(\boldsymbol{\varepsilon}=(\varepsilon_{\mathbf{u}})_{\mathbf{u}\in\mathcal{U}_{n}}\in\{0,1\}^{\left|\mathcal{U}_{n}\right|}\), define the function \(\phi_{\boldsymbol{\varepsilon}}\) on \([0,1]^{t^{*}}\) as

\[\phi_{\boldsymbol{\varepsilon}}(w_{1},\ldots,w_{t^{*}})=\sum_{\mathbf{u}\in\mathcal{U}_{n}}\varepsilon_{\mathbf{u}}\psi_{\mathbf{u}}(w_{1},\ldots,w_{t^{*}}).\]

Given that \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\) and \(\mathcal{K}^{(\beta)}(0)=\mathcal{K}^{(\beta)}(1)=0\), for any \(\beta\leq\lfloor\alpha^{*}\rfloor\), it is not difficult to verify that \(\phi_{\boldsymbol{\varepsilon}}\in\mathcal{H}^{\alpha^{*}}(\left[0,1\right]^{t^{*}},2t^{*})\).

Let \(d_{i}^{\prime}=\min\{d_{0},\ldots,d_{i}\}\), for all \(i\in\{0,\ldots,k\}\). For \(0\leq i<i^{*}\), we denote \(f_{i}(\boldsymbol{w})=(w_{1},\ldots,w_{d_{i+1}})^{\top}\) if \(d_{i+1}=d_{i+1}^{\prime}\); otherwise, we set \(f_{i}(\boldsymbol{w})=(w_{1},\ldots,w_{d_{i}^{\prime}},0,\ldots,0)^{\top}\). We denote \(f_{\boldsymbol{\varepsilon},i^{*}}(\boldsymbol{w})=(\phi_{\boldsymbol{\varepsilon}}(w_{1},\ldots,w_{t^{*}}),0,\ldots,0)^{\top}\), \(f_{i}(\mathbf{w})=(w_{1}^{\alpha_{i}\wedge 1},0,\ldots,0)^{\top}\) for \(i^{*}<i\leq k-1\), and \(f_{k}(\mathbf{w})=w_{1}^{\alpha_{k}\wedge 1}\). Let \(\mathcal{A}=\prod_{l=i^{*}+1}^{k}(\alpha_{l}\wedge 1)\). Since \(t_{j}\leq\min(d_{0},\ldots,d_{j-1})\), we can set

\[f_{\mathbf{\varepsilon}}(\mathbf{w}) =f_{k}\circ\cdots\circ f_{i^{*}+1}\circ f_{\mathbf{\varepsilon},i^{*}}\circ f_{i^{*}-1}\circ\cdots\circ f_{0}(\mathbf{w})=\sum_{\mathbf{u}\in\mathcal{U}_{n}}\varepsilon_{\mathbf{u}}\left[\psi_{\mathbf{u}}(w_{1},\ldots,w_{t^{*}})\right]^{\mathcal{A}}.\]

Consequently, we can observe that the resulting function \(f_{\mathbf{\varepsilon}}\) belongs to the class \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)\) when \(B\) is sufficiently large. Since \(W\) is uniformly distributed on \([0,1]^{d_{0}}\), we can compute

\[\|f_{\mathbf{\varepsilon}}-f_{\mathbf{\varepsilon}^{\prime}}\|_{2}^{2}=\delta(\mathbf{\varepsilon},\mathbf{\varepsilon}^{\prime})h_{n}^{2\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}} \tag{52}\]

and

\[\|f_{\mathbf{\varepsilon}}-f_{\mathbf{\varepsilon}^{\prime}}\|_{1}=\delta(\mathbf{\varepsilon},\mathbf{\varepsilon}^{\prime})h_{n}^{\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}, \tag{53}\]

where \(\delta(\cdot,\cdot)\) denotes the Hamming distance.
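To verify (52) and (53), note that the functions \(\psi_{\mathbf{u}}\) have disjoint supports, that the marginal of \(P_{W}\) on the first \(t^{*}\) coordinates is uniform on \([0,1]^{t^{*}}\), and that \(\alpha^{**}=\mathcal{A}\alpha^{*}\); a change of variables on each cell \(\mathcal{I}_{\mathbf{u}}\) then gives

\[\int_{\mathcal{I}_{\mathbf{u}}}\psi_{\mathbf{u}}^{2\mathcal{A}}(w_{1},\ldots,w_{t^{*}})\,dw_{1}\cdots dw_{t^{*}}=h_{n}^{2\mathcal{A}\alpha^{*}}\prod_{j=1}^{t^{*}}\int_{u_{j}}^{u_{j}+h_{n}}\mathcal{K}^{2\mathcal{A}}\left(\frac{w_{j}-u_{j}}{h_{n}}\right)dw_{j}=h_{n}^{2\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}},\]

and (53) follows in the same way with \(2\mathcal{A}\) replaced by \(\mathcal{A}\).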
For any \(P_{f_{\boldsymbol{\varepsilon}}}=Q_{f_{\boldsymbol{\varepsilon}}}\cdot P_{W}\) and \(P_{f_{\boldsymbol{\varepsilon}^{\prime}}}=Q_{f_{\boldsymbol{\varepsilon}^{\prime}}}\cdot P_{W}\), where \(P_{W}\) is the uniform distribution on \([0,1]^{d_{0}}\), we can derive that \[h^{2}(P_{f_{\boldsymbol{\varepsilon}}},P_{f_{\boldsymbol{\varepsilon}^{\prime}}})=\int_{\mathscr{W}}\left(1-\exp\left[-\frac{|f_{\boldsymbol{\varepsilon}}(w)-f_{\boldsymbol{\varepsilon}^{\prime}}(w)|^{2}}{8\sigma^{2}}\right]\right)dP_{W}(w)\leq\int_{\mathscr{W}}\frac{|f_{\boldsymbol{\varepsilon}}(w)-f_{\boldsymbol{\varepsilon}^{\prime}}(w)|^{2}}{8\sigma^{2}}dP_{W}(w)=\frac{\|f_{\boldsymbol{\varepsilon}}-f_{\boldsymbol{\varepsilon}^{\prime}}\|_{2}^{2}}{8\sigma^{2}}. \tag{54}\] According to Lemma 2, we can deduce that for \(W\) uniformly distributed on \([0,1]^{d_{0}}\), \[\ell(Q_{f_{\boldsymbol{\varepsilon}}},Q_{f_{\boldsymbol{\varepsilon}^{\prime}}})=\|P_{f_{\boldsymbol{\varepsilon}}}-P_{f_{\boldsymbol{\varepsilon}^{\prime}}}\|_{TV}\geq\frac{0.78}{\sqrt{2\pi}\sigma}\|f_{\boldsymbol{\varepsilon}}-f_{\boldsymbol{\varepsilon}^{\prime}}\|_{1}, \tag{55}\] provided \(\rho\geq 1+\left[\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}/(\sqrt{2\pi}\sigma)\right]^{1/\alpha^{**}}\) such that \(h_{n}^{\alpha^{**}}\leq\sqrt{2\pi}\sigma/\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}\). Putting (52), (53), (54) and (55) together, we observe that the family of probabilities \(\mathcal{P}=\{P_{f_{\boldsymbol{\varepsilon}}},\ \boldsymbol{\varepsilon}\in\{0,1\}^{|\mathcal{U}_{n}|}\}\) satisfies the assumptions of Lemma 8 with \(D=N_{n}^{t^{*}}\), \[\eta=\frac{0.78}{\sqrt{2\pi}\sigma}h_{n}^{\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}\quad\text{and}\quad a=\frac{1}{8\sigma^{2}}nh_{n}^{2\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}}.\] Finally, taking the constant \[\rho\geq\left[1+\left(\frac{\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}}{\sqrt{2\pi}\sigma}\right)^{\frac{1}{\alpha^{**}}}\right]\vee\left[1+\left(\frac{\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}}}{\sigma^{2}}\right)^{\frac{1}{2\alpha^{**}+t^{*}}}\right]\] such that \(h_{n}^{\alpha^{**}}\leq\left(n\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}}/\sigma^{2}\right)^{-\frac{\alpha^{**}}{2\alpha^{**}+t^{*}}}\wedge\left(\sqrt{2\pi}\sigma/\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}\right)\), we derive by Lemma 8 that there exists some constant \(c>0\) such that \[\inf_{\widehat{f}}\sup_{f^{*}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)}\mathbb{E}\left[\ell(Q_{f^{*}},Q_{\widehat{f}})\right]\geq cn^{-\frac{\alpha^{**}}{2\alpha^{**}+t^{*}}}.\]
2303.17925
Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks
In this study, we explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs), with a particular focus on complex topologies. We propose a novel methodology for constructing complex ANNs based on various topologies, including Barabási-Albert, Erdős-Rényi, Watts-Strogatz, and multilayer perceptrons (MLPs). The constructed networks are evaluated on synthetic datasets generated from manifold learning generators, with varying levels of task difficulty and noise, and on real-world datasets from the UCI suite. Our findings reveal that complex topologies lead to superior performance in high-difficulty regimes compared to traditional MLPs. This performance advantage is attributed to the ability of complex networks to exploit the compositionality of the underlying target function. However, this benefit comes at the cost of increased forward-pass computation time and reduced robustness to graph damage. Additionally, we investigate the relationship between various topological attributes and model performance. Our analysis shows that no single attribute can account for the observed performance differences, suggesting that the influence of network topology on approximation capabilities may be more intricate than a simple correlation with individual topological attributes. Our study sheds light on the potential of complex topologies for enhancing the performance of ANNs and provides a foundation for future research exploring the interplay between multiple topological attributes and their impact on model performance.
Tommaso Boccato, Matteo Ferrante, Andrea Duggento, Nicola Toschi
2023-03-31T09:48:16Z
http://arxiv.org/abs/2303.17925v2
# Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks ###### Abstract In this study, we explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs), with a particular focus on complex topologies. We propose a novel methodology for constructing complex ANNs based on various topologies, including Barabási-Albert, Erdős-Rényi, Watts-Strogatz, and multilayer perceptrons (MLPs). The constructed networks are evaluated on synthetic datasets generated from manifold learning generators, with varying levels of task difficulty and noise. Our findings reveal that complex topologies lead to superior performance in high-difficulty regimes compared to traditional MLPs. This performance advantage is attributed to the ability of complex networks to exploit the compositionality of the underlying target function. However, this benefit comes at the cost of increased forward-pass computation time and reduced robustness to graph damage. Additionally, we investigate the relationship between various topological attributes and model performance. Our analysis shows that no single attribute can account for the observed performance differences, suggesting that the influence of network topology on approximation capabilities may be more intricate than a simple correlation with individual topological attributes. Our study sheds light on the potential of complex topologies for enhancing the performance of ANNs and provides a foundation for future research exploring the interplay between multiple topological attributes and their impact on model performance. ## 1 Introduction Modern neural architectures are widely believed to draw significant design inspiration from biological neuronal networks. The artificial neuron, the fundamental functional unit of neural networks (NNs), is based on the McCulloch-Pitts unit [13], sharing conceptual similarities with its biological counterpart. Additionally, state-of-the-art convolutional NNs incorporate several operations directly inspired by the mammalian primary visual cortex, such as nonlinear transduction, divisive normalization, and maximum-based pooling of inputs. However, these architectures may be among the few examples where the evolutionary structural and functional properties of neuronal systems have been genuinely relevant for NN design. Indeed, the topology of biological connectomes has not yet been translated into deep learning model engineering. Due to the ease of implementation and deployment, widely-used neural architectures predominantly feature a regular structure resembling a sequence of functional blocks (e.g., neuronal layers). The underlying multipartite graph of a multilayer perceptron (MLP) is typically controlled by a few hyperparameters that define its basic topological properties: depth, width, and layer sizes. Only recently have computer vision engineers transitioned from chain-like structures [32] to more elaborate connectivity patterns [16; 17] (e.g., skip connections, complete graphs). Nevertheless, biological neuronal networks display much richer and less templated wirings at both the micro- and macro-scale [14]. Considering synaptic connections between individual neurons, the _C. elegans_ nematode features a hierarchical modular [5] connectome, wherein hubs with high betweenness centrality are efficiently interconnected [4; 33]. Moreover, the strength distribution of the adult Drosophila central brain closely follows a power law with an exponential cutoff [29].
As a result, the relationship between the graph structure of an NN and its predictive abilities remains unclear. In the literature, there is evidence that complex networks can be advantageous in terms of predictive accuracy and parameter efficiency [18]. However, past attempts to investigate this connection have yielded conflicting results that are difficult to generalize outside the investigated context. The first experiment on complex NNs was performed in 2005 by Simard et al., who trained a randomly rewired MLP on random binary patterns [31]. Nearly a decade later, Erkaymaz and his collaborators employed the same experimental setup on various real-life problems [12; 11; 9; 10] (e.g., diabetes diagnosis, performance prediction of solar air collectors). The best-performing models featured a number of rewirings consistent with the small-world regime. However, all assessed topologies were constrained by MLP-random interpolation. In [2], an MLP and an NN generated following the Barabási-Albert (BA) procedure were compared on a chemical process modeling problem. Both models were trained with an evolutionary algorithm, but the MLP achieved a lower RMSE. The _learning matrix_[24], a sequential algorithm for the forward/backward pass of arbitrary directed acyclic graphs (DAGs), enabled the evaluation of several well-known complex networks on classification [24] and regression [26] tasks. The experiments included random and small-world networks, two topologies based on "preferential attachment", a complete graph, and a _C. elegans_ sub-network [7]. Nevertheless, the learning matrix's time complexity limited the network sizes (i.e., 26 nodes), and for each task, a different winning topology emerged, including the MLP. Some recent works have instead focused on multipartite sparse graphs [23, 35]. While these architectures outperformed the complete baselines, their topological complexity was entirely encoded within the connections between adjacent layers. We propose the hypothesis that, given the same number of nodes (i.e., neurons) and edges (i.e., parameters), a complex NN might exhibit superior predictive abilities compared to classical, more regularly structured MLPs. Unlike previous studies, we conduct a systematic exploration of random, scale-free, and small-world graphs (Figure 1) on synthetic classification tasks, with particular emphasis on the following: * **Network size.** The defining properties of a complex topology often emerge in large-scale networks. For example, the second moment of a power-law degree distribution diverges only in the \(N\rightarrow\infty\) limit [3], where \(N\) is the network size1. The networks in [24, 26] have 15 and 26 nodes, respectively. We trained models with 128 neurons. Footnote 1: The proposition holds when the degree exponent is smaller than 3. * **Dataset size.** The _estimation error_ achieved by a predictor depends on the training set size: the greater the number of samples, the lower the error [30]. Except for studies based on multipartite graphs, all previous research operates in a small-data regime. Our datasets are three times larger than those used before. * **Hyperparameter optimization.** Learning rate and batch size are crucial in minimizing the loss function. Ref. [24] is the only one that considers finding the optimal learning rate. The role of batch size has never been investigated. Each DAG, however, could be characterized by its optimal combination of hyperparameters. Hence, we optimized the learning rate and batch size for each topology.
## 2 Theory ### Complex Graph Generators **Erdős-Rényi (ER).** An ER graph [8], or _random network_, is uniformly sampled from the set of all graphs with \(N\) nodes and \(L\) edges. For \(N\gg\langle k\rangle\), the degree distribution of a random graph is well approximated by a Poisson distribution: \(p_{k}=e^{-\langle k\rangle}\frac{\langle k\rangle^{k}}{k!}\); \(k\) and \(\langle k\rangle\) represent node degree and average degree, respectively. **Watts-Strogatz (WS).** The WS generator [34] aims to create graphs that exhibit both high clustering and the _small-world_ property; this is achieved by interpolating _lattices_ with random networks. The generation starts from a ring in which nodes are connected to their immediate neighbors. The links are then randomly rewired with probability \(p\). **Barabási-Albert (BA).** The well-known BA model [1] can be used to generate networks characterized by the \(p_{k}\propto k^{-3}\)_scale-free_ degree distribution. Since the model is inspired by the growth of real networks, the generative procedure iteratively attaches nodes with \(m\) stubs to a graph that evolves from an initial star of \(m+1\) nodes. Node additions follow the preferential attachment mechanism: the probability that a stub reaches a node is proportional to the degree of the latter. **Multilayer Perceptron (MLP).** The underlying networks of MLPs are called multipartite graphs. In a multipartite graph (i.e., a sequence of bipartite graphs) nodes are partitioned into layers, and each layer can only be connected with the adjacent ones; no intra-layer link is allowed. Additionally, inter-layer connections have to form _bicliques_ (i.e., fully-connected bipartite graphs).

Figure 1: Example feedforward NNs (128 neurons, 732 synaptic connections) based on complex topologies: scale-free (BA), random (ER), and small-world (WS). All graphs are directed and acyclic. Information flows from top to bottom. Input, hidden, and output units are denoted in blue, orange, and green, respectively. Since the networks are defined at the micro-scale, hidden and output nodes implement weighted sums over the incoming edges. In the hidden units, the computational operation is followed by an activation function. The activations of nodes located on the same horizontal layer can be computed in parallel.

## 3 Methods ### Datasets The foundation of the datasets developed, as displayed in Figure 2, is established by manifold learning generators2 provided by the scikit-learn machine learning library [25]. To modify the generators for classification purposes, 3D points sampled from one of the available curves (_s curve_ and _swiss roll_) are segmented into n_classes\(\times\)n_reps portions based on their univariate position relative to the primary dimension of the manifold samples. As the term implies, n_classes refers to the number of classes involved in the considered classification. Each segment is then arbitrarily allocated to a class, maintaining task balance (i.e., precisely n_reps segments have the same label). We define n_reps as the task _difficulty_. An additional aspect of our datasets is the standard deviation \(\sigma\) of the Gaussian noise that can be added to the points. The generation procedure is finalized with a min-max normalization. ### Feedforward Neural Networks All trainable models are produced following the same 3-step procedure and share \(N\) and \(L\). Consequently, NNs exhibit identical density and parameter counts.
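To make the matched-density setup concrete, here is a minimal sketch using networkx (seeds and the trim heuristic are illustrative, not the authors' exact code) of how the three complex generators can be tuned to share \(N=128\) nodes and \(L=732\) edges, generating slightly denser graphs and randomly removing excess edges where an exact match is impossible; the generation step itself is detailed next.

```python
import networkx as nx
import random

N, L = 128, 732  # neurons and synaptic connections, as in the experiments
rng = random.Random(0)

def trim_to(G, L):
    """Randomly remove excess edges while keeping a single connected component."""
    while G.number_of_edges() > L:
        e = rng.choice(list(G.edges))
        G.remove_edge(*e)
        if not nx.is_connected(G):
            G.add_edge(*e)  # undo removals that would disconnect the graph
    return G

er = nx.gnm_random_graph(N, L, seed=0)               # exactly L edges by construction
ba = nx.barabasi_albert_graph(N, m=6, seed=0)        # (N - m) * m = 732 edges
ws = trim_to(nx.watts_strogatz_graph(N, k=12, p=0.5, seed=0), L)  # 768 -> 732

assert er.number_of_edges() == ba.number_of_edges() == ws.number_of_edges() == L
```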
**Undirected Graph Generation.** The initial step in creating an NN involves sampling an undirected graph using the generators detailed in Section 2. Once \(N\) and \(L\) are established, all models exhibit a single parameter configuration compatible with the required density3. The WS generator is the sole exception: the probability \(p\) is allowed to vary between 0 and 1. If the generator is limited to sampling networks with a number of links from a finite set (e.g., \(L=m+(N-m-1)m\) according to the BA model), we first generate a graph with slightly higher density than the target before randomly eliminating excess edges. After obtaining the graph, we confirm the existence of a single connected component. Footnote 3: This statement is accurate if the number of MLP layers is predetermined. **Directed Acyclic Graph (DAG) Conversion.** Before performing any calculations, the direction for information propagation through the network links must be determined; this is accomplished by randomly assigning, without replacement, an integer index from \(\{1,\ldots,N\}\) to the network nodes. It can be shown that the directed graph obtained by setting the direction of each edge from the node with a lower index to the node with a higher index is free of cycles. However, this conversion results in an unpredictable number of sources and sinks. Since classification tasks typically involve a pre-defined number of input features and output classes, it is necessary to resolve such network-task discrepancies. To address this issue, we developed a straightforward heuristic capable of adjusting DAGs without altering the underlying undirected graphs.

Figure 2: Benchmark classification datasets. **Top**: the _swiss roll_. **Bottom**: the _s curve_. Each dataset is composed of 3D points divided into multiple segments. Classes are color-coded. Datasets differ in terms of difficulty (\(x\) axis) and noise (\(y\) axis).

**Mapping of Functional Roles.** The last step of the presented procedure consists of mapping computational operations to the DAG nodes. Working at the micro-scale (i.e., connections between single neurons), two operations are allowed. Source nodes implement constant functions; their role, indeed, is to feed the network with the initial conditions for computations. Hidden and sink nodes, instead, perform a weighted sum over the incoming edges, followed by an activation function: \[a_{v}=\sigma\Bigg(\sum_{u}w_{uv}a_{u}+b\Bigg) \tag{1}\] where \(a_{v}\) is the activation of node \(v\), \(\sigma\) denotes the activation function4 (SELU [20] for hidden nodes and the identity function for sinks), \(u\) represents the generic predecessor of \(v\), \(w_{uv}\) is the weight associated with edge \((u,v)\) and \(b\) the bias. In order to implement the map of functional roles, we made use of the 4Ward library5[6], developed for the purpose. Starting from a DAG, the package returns a working NN deployable as a PyTorch Module. Footnote 4: Depending on the context, we use the same \(\sigma\) notation for both the standard deviation of the dataset noise and the activation function. Footnote 5: https://github.com/BoCtrl-C/forward
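A compact illustration of the index-based DAG conversion described above (a hypothetical re-implementation, not the 4Ward code): orienting every edge from the lower- to the higher-ranked node under a random permutation is guaranteed to produce an acyclic graph.

```python
import networkx as nx
import random

def to_random_dag(G, seed=0):
    """Orient each undirected edge from the lower- to the higher-indexed node
    under a random permutation of the node labels; the result is acyclic
    because every edge follows a single total order."""
    rng = random.Random(seed)
    order = list(G.nodes)
    rng.shuffle(order)
    rank = {v: i for i, v in enumerate(order)}
    dag = nx.DiGraph()
    dag.add_nodes_from(G.nodes)
    dag.add_edges_from((u, v) if rank[u] < rank[v] else (v, u) for u, v in G.edges)
    assert nx.is_directed_acyclic_graph(dag)
    return dag

dag = to_random_dag(nx.gnm_random_graph(128, 732, seed=0))
sources = [v for v in dag if dag.in_degree(v) == 0]   # candidate input units
sinks = [v for v in dag if dag.out_degree(v) == 0]    # candidate output units
```

The unpredictable numbers of `sources` and `sinks` computed at the end are exactly what the authors' adjustment heuristic has to reconcile with the task's input and output dimensions.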
### Experiments **Dataset Partitioning.** Each generated dataset is randomly divided into 3 non-overlapping subsets: the train, validation and test splits. All model trainings are performed over the train split, while the validation split is exploited in validation epochs and hyperparameter optimization. Test samples, instead, are accessed only in the evaluation of the final models. **Model Training.** Models are trained by minimizing cross entropy with the Adam [19] optimizer (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\)). A scheduler reduces the learning rate by a factor of 0.5 if no improvement is seen on the validation loss for 10 epochs. The training procedure ends when learning stagnates (w.r.t. the validation loss) for 15 epochs, and the model weights corresponding to the epoch in which the minimum validation loss has been achieved are saved. **Hyperparameter Optimization.** Hyperparameters are optimized through a grid search over a predefined 2D space (i.e., learning rate/batch size). We generate networks of the same topological family starting from 5 different random seeds. In the MLP case, models differ only in the weight initialization. For each parameter pair, the 5 models are trained accordingly, and the resulting best validation losses are collected. Then, the learning rate and batch size that minimize the median validation loss computed across the generation seeds are selected as the optimal hyperparameters of the considered graph family. **Topology Evaluation.** Once the optimal learning rate and batch size are found, we train 15 new models characterized by the considered topology and compute mean classification accuracy and standard deviation on the dataset test split. The procedure is repeated for each investigated graph family, and a Kruskal-Wallis (H-test) [21] is performed in order to test the null hypothesis that the medians of all accuracy populations are equal. If the null hypothesis is rejected, a Mann-Whitney (U-test) [22] post hoc analysis follows. **Robustness Analysis.** We use the final trained models in a _graph damage_ study to investigate their _functional_ robustness (accuracy vs. fraction of removed nodes). The _topological_ robustness (giant component vs. fraction of removed nodes) is already well-studied in network science. We randomly remove a fixed fraction of nodes, \(f\), from a neural network and compute the accuracy achieved by the resulting model on the test dataset. Practically, node removal is implemented using PyTorch's Dropout6, which zeroes some network activations by sampling from i.i.d. Bernoulli distributions. As each batch element is associated with specific random variables, activations produced by different dataset samples are processed by differently pruned neural networks. Therefore, the figure of interest is averaged over the dataset and the 15 generation seeds. In a typical topological analysis, when \(f=0\), the giant components of all tested graphs have the same size (i.e., \(N\)). We adopt this convention in our experimental setup by replacing test accuracy with _accuracy gain_: \(\mathcal{A}(f)\). The metric is defined as the ratio between the accuracy obtained by a pruned network and the accuracy obtained by the original one (i.e., \(f=0\)). An accuracy gain \(<1\) indicates a decline in model performance. Consequently, the figure of merit for our analysis is the mean accuracy gain, with the expectation taken over the generation seeds. Footnote 6: https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html
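The accuracy-gain metric could be computed along the following lines; `model.hidden` and `model.head` are hypothetical hooks into the network's stages, and undoing Dropout's \(1/(1-f)\) rescaling is a sketch-level choice so that units are strictly zeroed rather than rescaled.

```python
import torch

@torch.no_grad()
def mean_accuracy(model, loader, f=0.0):
    """Test accuracy with a fraction f of hidden units zeroed at random.
    `model.hidden` and `model.head` are assumed hooks, not a real API."""
    damage = torch.nn.Dropout(p=f)
    damage.train()  # keep the stochastic Bernoulli masks active at test time
    correct = total = 0
    for x, y in loader:
        h = damage(model.hidden(x)) * (1.0 - f)  # undo Dropout's 1/(1-f) rescaling
        pred = model.head(h).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

def accuracy_gain(model, loader, f):
    """A(f): accuracy of the damaged network relative to the intact one."""
    return mean_accuracy(model, loader, f) / mean_accuracy(model, loader, 0.0)
```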
## 4 Results We obtained the presented results by following the experimental protocol outlined in Section 3 using the specified topologies (i.e., BA, ER, MLP, and WS) and datasets. We set n_classes = 3 and n_reps \(\in\{3,6,9,12\}\); for the _swiss roll_ dataset, \(\sigma\in\{0.0,1.0\}\), while for the _s curve_, \(\sigma\in\{0.0,0.3\}\). The train, validation, and test split sizes were 1350, 675, and 675, respectively. Given that in a 1-hidden layer MLP (h1 notation) the number of synaptic connections depends solely on \(N\) (i.e., \(L=3\times H+H\times 3\), with \(H=N-3-3\)), we chose an MLP with 128 neurons as a reference model and calculated the hyperparameters for the complex networks to achieve graphs with \(L=732\) edges. The additional degree of freedom in the WS generator enabled us to separate the small-world topology into three distinct graph families: p.5 (\(p=0.5\)), p.7 (\(p=0.7\)), and p.9 (\(p=0.9\)). The hyperparameter optimization searched for learning rates in {0.03, 0.01, 0.003, 0.001} and batch sizes in {32, 64}. Figure 3 displays the mean test accuracy achieved by each group of models as a function of task difficulty. All manifolds, noise levels, and difficulties are represented. Excluding difficulty level 9 in the _swiss roll_ dataset, the accuracy curves exhibit a clear decreasing trend. Specifically, as the difficulty increases, the performance of the MLPs degrades more rapidly than that of complex networks. Confidence intervals, on the other hand, are wider in the high-difficulty plot regions. As expected, noisy tasks were more challenging to learn.

Figure 3: Mean test accuracy as a function of the task difficulty. Confidence intervals (\(\pm\) standard deviation) are reported as well. Different subplots correspond to different datasets. Each curve denotes the trend of a specific network topology.

In Figure 4, the results obtained by the models for the two highest levels of task difficulty are shown in detail. The H-test null hypothesis is rejected for all experiments, and the U-test statistical annotations are displayed. Regardless of the scenario considered, a complex topology consistently holds the top spot in the mean accuracy ranking. MLPs, in contrast, are always the worst-performing models. Moreover, the MLP performance differs significantly from that of the complex networks, in a statistical sense. Conversely, only 3 out of 8 experiments exhibit statistical differences within the group of complex networks.

Figure 4: Mean test accuracy at the highest difficulty levels. **Left**: difficulty \(=9\). **Right**: difficulty \(=12\). The bars display both means and standard deviations. Each bar corresponds to a specific network topology and is represented by a consistent color across all histograms (following the color scheme from Figure 3). Statistical annotations appear above the histograms, with each segment indicating a significant difference between two accuracy distributions.

Figure 5 presents the results of the robustness analysis. We investigated \(f\in\{0.0,0.1,\ldots,0.5\}\) and removed nodes from the models trained on the datasets characterized by the lowest level of difficulty. On these tasks, indeed, all models behave approximately the same (see Figure 3), hinting at a fair comparison. Unsurprisingly, node removal has the same effect on all topologies: the accuracy gain decreases as \(f\) increases. MLPs, however, show enhanced robustness to random deletions. Confidence intervals of the complex graph families overlap. It is worth noting that the chance level (i.e., accuracy of \(1/3\)) could be reached by different accuracy gains depending on the task; the best accuracy under \(f=0\), indeed, varies between the manifold/noise pairs.

Figure 5: Robustness analysis. The horizontal axis reports the fraction of removed nodes (i.e., \(f\)) while the vertical one reports the accuracy gain (i.e., \(\mathcal{A}(f)\)). Each curve refers to a different network topology. Confidence intervals (\(\pm\) standard deviation) are reported.

## 5 Discussion The most significant finding from the experiments performed is the accuracy attained by the architectures built on complex topologies in the high-difficulty regime. In this context, and in light of the statistical tests carried out, the complex models prove to be a solid alternative to MLPs. Formally justifying the observed phenomenon is challenging. Fortunately, in 2017, Poggio et al. discussed two theorems [28] that guided our explanation. According to the first theorem7, a shallow network (e.g., an MLP h1) equipped with infinitely differentiable activation functions requires \(N=\mathcal{O}(\epsilon^{-n})\) units to approximate a continuous function \(f\) of \(n\) variables8 with an approximation error of at most \(\epsilon>0\). This exponential dependency is technically called the _curse of dimensionality_. On the other hand, the second theorem states that if \(f\) is compositional and the network presents its same architecture, we can escape the "curse". It is important to remember that a compositional function is defined as a composition of "local" constituent functions, \(h\in\mathcal{H}\) (e.g., \(f(x_{1},x_{2},x_{3})=h_{2}(h_{1}(x_{1},x_{2}),x_{3})\), where \(x_{1},\ x_{2},\ x_{3}\) are the input variables and \(h_{1},\ h_{2}\) the constituent functions). In other words, the structure of a compositional function can be represented by a DAG. In this approximation scenario, the required number of units is \(N=\mathcal{O}(\sum_{h}\epsilon^{-n_{h}})\), where \(n_{h}\) is the input dimensionality of function \(h\). If \(\max_{h}n_{h}=d\), then \(\sum_{h}\epsilon^{-n_{h}}\leq\sum_{h}\epsilon^{-d}=|\mathcal{H}|\epsilon^{-d}\). Footnote 7: We invite the reader to consult ref. [28] for a complete formulation of the theorems. Footnote 8: Depending on the context, we use the same \(f\) notation for both the fraction of removed nodes and the function to be approximated. The primary advantage of complex networks is their potential to avoid the curse of dimensionality when relevant graphs for the function to be learned are present. Under the assumption that the function linking the _swiss roll_ and _s curve_ points to the ground truth labels is compositional (intuitively, in non-noisy datasets, each class is a union of various segments), we conjecture that our complex NNs can exploit this compositionality. In the high-difficulty regime, the necessary network size for MLP h1 to achieve the same accuracy as complex models likely exceeds the size set for experiments. While one could argue that the datasets employed were compositionally sparse by chance, according to [27], all _efficiently computable functions_ must be _compositionally sparse_ (i.e., their constituent functions have "small" \(d\)). Performance differences on noisy datasets are less noticeable, possibly due to the minimal overlap between the functions to be approximated and the studied topologies. Notably, our setup does not precisely match the theorem formulations in [28] (e.g., SELUs are not infinitely differentiable), but Poggio et al. argue that the hypotheses can likely be relaxed.
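As a rough numerical illustration of the gap between the two bounds (with made-up values, not figures from the paper): take \(n=8\) input variables, target error \(\epsilon=10^{-1}\), and a compositional target built as a binary tree of pairwise constituent functions, so that \(d=2\) and \(|\mathcal{H}|=7\). The shallow and compositional unit counts then differ by more than five orders of magnitude, \[\epsilon^{-n}=10^{8}\qquad\text{versus}\qquad|\mathcal{H}|\,\epsilon^{-d}=7\times 10^{2}.\]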
No statistically significant differences emerged between the complex graph families from the results of Section 4. Various explanations exist for this outcome: all tested topologies could be complex enough to include relevant subgraphs of the target \(f\) functions; the random DAG conversion heuristic might have perturbed hidden topological properties of the original undirected networks; or the degree distribution of a network may not be the most relevant topological feature in a model's approximation capabilities. However, the higher accuracy in complex networks comes with trade-offs. Although the methodology in [6] improves the scalability of complex NNs and enables experimentation with arbitrary DAGs, it is important to note that 1-hidden layer MLPs typically have faster forward pass computation. In these models, the forward pass requires only two matrix multiplications, whereas, in NNs built using 4Ward, the number of operations depends on the DAG _height_. Moreover, the analyses in Figure 5 demonstrate MLPs' superiority in a graph damage scenario. We speculate that the hidden units in an MLP h1 contribute equally to the approximation of the target function. In contrast, the ability of complex networks to exploit the compositionality of the function to be learned might lead to high specialization of some hidden units. ## 6 Conclusion Our study provides valuable insights into the influence of network topology on the approximation capabilities of artificial neural networks (ANNs). Our novel methodology for constructing complex ANNs based on various topologies has enabled a systematic exploration of the impact of network structure on model performance. The experiments conducted on synthetic datasets demonstrate the potential advantages of complex topologies in high-difficulty regimes when compared to traditional MLPs. While complex networks exhibit improved performance, this comes at the cost of increased computational requirements and reduced robustness to graph damage. Our investigation of the relationship between topological attributes and model performance (Appendix A) reveals a complex interplay that cannot be explained by any single attribute. This finding highlights the need for further research to better understand the interactions among multiple topological attributes and their impact on ANN performance. As a result of this study, researchers and practitioners can consider the potential benefits and limitations of complex topologies when designing ANNs for various tasks. Moreover, our work provides a foundation for future research focused on identifying optimal topological features, understanding the impact of multiple attributes, and developing new methodologies for constructing more efficient and robust ANN architectures. By further exploring the role of network topology in ANNs, we can unlock new possibilities for improving the performance and adaptability of these models across diverse applications.
2307.16666
Improving the temporal resolution of event-based electron detectors using neural network cluster analysis
Novel event-based electron detector platforms provide an avenue to extend the temporal resolution of electron microscopy into the ultrafast domain. Here, we characterize the timing accuracy of a detector based on a TimePix3 architecture using femtosecond electron pulse trains as a reference. With a large dataset of event clusters triggered by individual incident electrons, a neural network is trained to predict the electron arrival time. Corrected timings of event clusters show a temporal resolution of 2 ns, a 1.6-fold improvement over cluster-averaged timings. This method is applicable to other fast electron detectors down to sub-nanosecond temporal resolutions, offering a promising solution to enhance the precision of electron timing for various electron microscopy applications.
Alexander Schröder, Leon van Velzen, Maurits Kelder, Sascha Schäfer
2023-07-31T13:45:57Z
http://arxiv.org/abs/2307.16666v1
Improving the temporal resolution of event-based electron detectors using neural network cluster analysis ###### Abstract Novel event-based electron detector platforms provide an avenue to extend the temporal resolution of electron microscopy into the ultrafast domain. Here, we characterize the timing accuracy of a detector based on a TimePix3 architecture using femtosecond electron pulse trains as a reference. With a large dataset of event clusters triggered by individual incident electrons, a neural network is trained to predict the electron arrival time. Corrected timings of event clusters show a temporal resolution of 2 ns, a 1.6-fold improvement over cluster-averaged timings. This method is applicable to other fast electron detectors down to sub-nanosecond temporal resolutions, offering a promising solution to enhance the precision of electron timing for various electron microscopy applications. ## I. Introduction In the recent decade, ultrafast transmission electron microscopy (UTEM) based on pico- and femtosecond electron pulses has made tremendous progress[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], reaching down to attosecond temporal resolution[11, 12] and opening new avenues for quantum electron microscopy[13, 14, 15]. In a complementary approach, fast electron cameras are being developed[16] which would allow pico- to nanosecond-scale imaging with continuous electron beams at higher average beam currents compared to UTEM. For example, a delay-line detector combined with microchannel plate (MCP) amplification demonstrated a temporal resolution of 122 ps, enabling the imaging of the gyrational motion of a magnetic vortex[17]. A further detector platform involves large arrays of time-to-digital converters (TDC) as implemented in the TimePix3 chip architecture[18] and originally employed for X-ray detection[19, 20]. First applications of the TimePix3 detector in electron imaging involve the energy- and angle-resolved detection of low-energy photoelectrons[21], as well as protein imaging[22, 23] and coincidence measurements[24] at high electron energies. The intrinsic temporal binning width of the TimePix3 chip allows for a temporal resolution of 1.56 ns. By utilizing the intrinsic correlations between the triggering time and the duration of the above-threshold signal, a root-mean-square (rms) resolution of 1.7 ns, close to the binning width, was demonstrated for low-energy electrons[25]. At higher electron energies, the substantially different electron-sensor interactions are expected to generate a more complex temporal detector response, which has not yet been experimentally characterized. Here, we utilize femtosecond electron pulses in an ultrafast transmission electron microscope to determine the temporal response and event correlation properties of a TimePix3 electron detector at 200 keV electron energies. A neural network approach is presented to improve the temporal resolution of electron event detection. The neural network is trained based on experimental data recorded for femtosecond electron pulses. Corrected timings of event clusters show a temporal width of 2 ns (rms), increasing the achievable time resolution by a factor of 1.6 compared to cluster-averaged timings. ## II. Experimental Approach The event-based electron detector used in the experiments (CheeTah T3, Amsterdam Scientific Instruments) utilizes the TimePix3 architecture[18] and is composed of four individual chips (512 x 512 pixels, 28 mm x 28 mm sensor size) bump-bonded to a 300-µm thick silicon sensor (Fig. 1a).
A high-energy electron incident on the sensor locally generates free carriers and secondary electrons, resulting in a signal peak in the input channels of close-by pixels in the TimePix chip. Within each pixel, an event is registered if the incoming signal exceeds a threshold value. A time stamp (time-of-arrival, ToA) is assigned to such an event using a global and a local clock operating at frequencies of 40 MHz and 640 MHz, respectively. In addition, the time-over-threshold (ToT) for each event is determined. ToA, ToT and the pixel coordinates of an event are buffered and finally stored on a hard drive. Each double-pixel column has a counter that is reset at the start of acquisition. Due to a known reset synchronization issue, about 5 percent of pixel columns are shifted in their time response by one period of the coarse clock (25 ns), which we manually correct. For assigning a set of registered events to a single incident electron, a cluster algorithm is used, which groups events occurring in a defined time interval (5 µs) within an 8x8 pixel area. The TimePix3 camera is incorporated into the Oldenburg ultrafast transmission electron microscope (OI-UTEM), which is based on a JEOL F200 Schottky field-emitter TEM. A modified laser-driven electron source[7] allows for the generation of femtosecond electron pulses (200-keV electron energy, 200-fs pulse duration, 400-kHz repetition rate) using ultrashort ultraviolet light pulses (200-fs temporal width, 343 nm center wavelength). The electron pulses arriving at the camera are spread across the whole detector area with an approximately homogeneous intensity. On average, 1.3 event clusters are detected on the camera per optical pulse. An electronic trigger signal synchronized to the electron pulse train is generated by a fast photodiode (12 GHz bandwidth) illuminated by the photoemission laser and fed into the TimePix3 event stream. The electron pulse duration and any jitter in the synchronous trigger signal are significantly shorter than the 1.56-ns clock period employed in the TimePix3 architecture. Therefore, the temporal spread of registered electron events directly signifies the temporal resolution of the detector. ## III. Electron Detection For characterizing the temporal response of the TimePix3 electron detector, we first consider the histogram of event timings relative to the photodiode trigger signal, as displayed in Fig. 1(b), with the mean ToA of the distribution set to zero. The temporal response function exhibits a full-width-at-half-maximum (FWHM) of approximately 10 ns (4.2 ns, rms), considerably longer than the electron pulse width and the 1.56-ns temporal bin width of the TimePix detector. In addition, the distribution function is skewed towards larger delays. The temporal accuracy can be improved by considering event clusters triggered by a single incident electron instead of the individual events in each pixel. Typical, randomly selected event clusters are shown in Fig. 1 (c,d). For the employed detector settings (about 15-keV projected threshold, 100-V bias), each incident electron triggers a cluster of 6.5 events on average. The cluster shape varies strongly, putatively due to the random scattering processes of the incident electron within the silicon sensor. Furthermore, within a cluster, time-of-arrival and time-over-threshold values differ by large amounts, which can be partially traced back to a time-walk effect, given the fixed threshold level and the varying signal amplitudes arriving at each pixel.
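The clustering step described above can be approximated with a simple greedy pass over the time-sorted event stream; the following Python sketch is our simplified reading of the vendor algorithm (with a quadratic scan a real implementation would avoid), together with the cluster-averaged ToA used as a timing baseline later in the text.

```python
import numpy as np

def cluster_events(events, dt=5e-6, window=8):
    """Greedily group raw (t, x, y, tot) events into per-electron clusters:
    an event joins an open cluster if it lies within dt seconds of the
    cluster's first event and inside an 8x8-pixel area around it."""
    clusters = []
    for t, x, y, tot in sorted(events):  # sort by time-of-arrival
        for c in clusters:
            t0, x0, y0, _ = c[0]
            if t - t0 < dt and abs(x - x0) < window and abs(y - y0) < window:
                c.append((t, x, y, tot))
                break
        else:  # no matching cluster: this event starts a new one
            clusters.append([(t, x, y, tot)])
    return clusters

def cluster_averaged_toa(cluster):
    """The simple baseline timing: mean ToA over the cluster's events."""
    return float(np.mean([e[0] for e in cluster]))
```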
However, different from previous results for the combination of a TimePix3 chip with a light-sensitive sensor[21, 25], we find no clear correlation between the time-of-arrival and time-over-threshold values registered in a pixel. This is further evidenced by considering the joint probability distribution between both quantities, as shown in Fig. 2(a). Instead, we observe distinct features in joint distributions connecting the number of involved pixels in an event cluster, i.e., the cluster size, and the summed ToT and averaged ToA within the cluster, respectively. As shown in Fig. 2(b), the distribution of cluster-summed ToT values consists of two components: one intense structure with a narrow ToT distribution independent of cluster size, and a structure corresponding to a smaller number of clusters for which the summed ToT increases approximately linearly with cluster size.

Figure 1: **Measurement principle and event timings.** (a) Experimental setup and schematic of TimePix3 event detection. (b) Uncorrected time-of-arrival distribution of photoelectron pulses relative to an optical trigger signal (averaged over \(4\times 10^{6}\) electron pulses) showing a temporal width of about 10 ns (FWHM). (c) Examples of randomly selected electron event clusters each induced by a single incident electron; color scales denote time-over-threshold (top) and time-of-arrival (bottom).

Similarly, both features are also found in the joint distribution of ToAs and cluster sizes (Fig. 2c). Here, the dominant distribution component shifts in its maximum with increasing cluster size, in line with an expected time-walk effect. A second, less intense component is observed for small cluster sizes with a slight tilt in the opposite direction as compared to the first component. A likely cause for these features are signals stemming from secondary electrons created within the sensor material and slowed-down primary electrons hitting the TimePix3 detector electronics. Summing the joint distributions along the direction of the cluster size as a random variable and neglecting the intrinsic correlations results in the broadened overall ToA distribution, as displayed in Fig. 1(b), and a concomitant loss of temporal resolution. More precise timing information is achieved by compensating for the linear shift of the main feature of Fig. 2(c) with cluster size. The resulting corrected ToA distribution is plotted in Fig. 3(b) (blue curve). ## IV. Improving temporal resolution In an ideal case, all of the intrinsically contained correlations within the event data set should be exploited to extract a most faithful estimate of the arrival time of the incident primary electron. To approximate this goal, we trained a neural network using the experimental event data from our electron pulse measurements. The network is composed of six fully connected layers with incrementally decreasing sizes (200, 150, 100, 50, 25 and 10 nodes per layer, ReLU activation function) down to a single regression output layer, as sketched in Fig. 3(a). For each event cluster, all ToA and ToT data, and event positions relative to the cluster center-of-mass are fed into the network. The number of input nodes is fixed, corresponding to a maximum of 10 events per cluster. If fewer than 10 events per cluster are registered, the remaining input nodes are filled with default values. The scalar output of the network is the predicted arrival time of the incident electron relative to the trigger signal.
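In PyTorch, the described regression network can be sketched as follows; the per-event feature layout (ToA, ToT, and two relative coordinates, flattened and zero-padded to 10 events, giving 40 inputs) is our reading of the text rather than a confirmed specification.

```python
import torch.nn as nn

MAX_EVENTS = 10   # clusters are padded/truncated to 10 events
N_FEATURES = 4    # per event: ToA, ToT, and (dx, dy) relative to the
                  # cluster center of mass -- an assumption, not confirmed

class ToANet(nn.Module):
    """Six fully connected hidden layers (200-150-100-50-25-10, ReLU)
    followed by a single linear regression output, as in Fig. 3(a)."""
    def __init__(self, in_dim=MAX_EVENTS * N_FEATURES):
        super().__init__()
        dims = [in_dim, 200, 150, 100, 50, 25, 10]
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(10, 1))  # predicted arrival time
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, 40) flattened clusters
        return self.net(x).squeeze(-1)
```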
In the experiment, the arrival times of electrons at the detector relative to the photoemission laser pulses are fixed. To avoid training the network to this trivial case, for each event cluster a random number is added to the experimental ToA values and used as the respective ground truth. Overall, the network is trained with experimental data from \(1.4\times 10^{5}\) electron event clusters.

Figure 2: Correlations between event cluster properties. (a) Two-dimensional histogram of time-of-arrival and time-over-threshold evaluated for individual events. (b-c) Correlation between the number of above-threshold pixels in an event cluster, i.e., the cluster size, and the summed time-over-threshold (b) and averaged time-of-arrival (c).

In order to gauge the precision of the predicted arrival time, the neural network is introduced to experimental recordings not employed in the training process. Using recorded event data from \(5\times 10^{6}\) electron pulses, we obtained a ToA distribution of neural-network-corrected data, as shown in Fig. 3(b) (green curve). Notably, with our neural network approach, the predicted ToA distribution shows a root-mean-squared width of only 2 ns (4.7 ns FWHM) - a factor of 2 narrower compared to the uncorrected event data. Cluster-averaging the ToA yields a 3.2 ns rms width, but does not improve the tailing to larger time delays. The neural network corrects this effect, resulting in an approximately Gaussian shape of the distribution. We further note that including the absolute positions of each event in the neural network training data did not improve the accuracy of the ToA prediction.

Figure 3: **Neural-network based timing prediction.** (a) Schematic of the neural network trained to predict the electron arrival time. (b) Histogram of the resulting time-of-arrival using uncorrected (gray), event cluster averaged (red), shift corrected (blue) and the neural network predicted data (green). See text for details. (c,d) Color-coded spatial distribution of arrival times using uncorrected (c) and neural network predicted (d) data. Color scale: time-of-arrival of an electron event.

Finally, Fig. 3 (c,d) show a graphical representation of the achieved arrival time homogeneity across the detector for uncorrected pixel-based data (c) and neural network analyzed cluster data (d). In both cases, \(2\times 10^{5}\) detected events from the photoelectron beam were acquired and are represented by a dot at the position of the event with a color corresponding to the arrival time. The centered cross pattern in Fig. 3(c) is due to a 110-\(\upmu\)m gap between the individual detector chips. As expected, applying the neural network results in homogeneous timing information throughout the image (Fig. 3d). Additionally, by only considering the center of mass of each cluster, the gap between chips can be filled without noticeable artifacts.
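One plausible reading of the random-offset trick used to build the training targets (names and the jitter scale are placeholders, not details from the paper): since every photoelectron arrives at a fixed delay after the trigger, the network is asked to recover a random shift applied to the measured ToAs, which is equivalent to predicting the arrival time up to an additive constant.

```python
import torch

def make_training_pair(cluster_features, toa_index, jitter_ns=25.0):
    """Build one (input, target) pair from a recorded cluster.  A random
    offset is added to the measured ToA entries and used as the regression
    target, preventing the network from learning a trivial constant."""
    offset = (torch.rand(()) - 0.5) * 2.0 * jitter_ns  # uniform in +/- 25 ns
    x = cluster_features.clone()
    x[toa_index] += offset                             # shift only the ToA slots
    return x, offset
```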
## V Conclusion Using the precise timing of femtosecond photoelectron sources, we characterized correlations within the event-data stream of a TimePix3 electron detector. The event stream is used to train a neural network, allowing the precision of electron timing to be improved by a factor of two. Notably, this approach solely relies on experimental input data and requires no assumptions on the intricate scattering mechanisms of fast electrons in the sensor layers or the response behavior of the detector electronics. As such, we expect this approach to be equally applicable to other fast electron detectors with sub-nanosecond temporal resolution that are currently being developed. ## Acknowledgments We acknowledge financial support by the Volkswagen Foundation as part of the Lichtenberg Professorship "Ultrafast nanoscale dynamics probed by time-resolved electron imaging" and by the German Science Foundation within the grant INST 184/211-1 FUGG. ## Conflict of Interest Two of the authors (LvV and MK) are paid employees of Amsterdam Scientific Instruments, the vendor of the TimePix3 electron detector characterized in this work. ## Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2303.00524
Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach
Prediction of taxi service demand and supply is essential for improving customer's experience and provider's profit. Recently, graph neural networks (GNNs) have been shown promising for this application. This approach models city regions as nodes in a transportation graph and their relations as edges. GNNs utilize local node features and the graph structure in the prediction. However, more efficient forecasting can still be achieved by following two main routes; enlarging the scale of the transportation graph, and simultaneously exploiting different types of nodes and edges in the graphs. However, both approaches are challenged by the scalability of GNNs. An immediate remedy to the scalability challenge is to decentralize the GNN operation. However, this creates excessive node-to-node communication. In this paper, we first characterize the excessive communication needs for the decentralized GNN approach. Then, we propose a semi-decentralized approach utilizing multiple cloudlets, moderately sized storage and computation devices, that can be integrated with the cellular base stations. This approach minimizes inter-cloudlet communication thereby alleviating the communication overhead of the decentralized approach while promoting scalability due to cloudlet-level decentralization. Also, we propose a heterogeneous GNN-LSTM algorithm for improved taxi-level demand and supply forecasting for handling dynamic taxi graphs where nodes are taxis. Extensive experiments over real data show the advantage of the semi-decentralized approach as tested over our heterogeneous GNN-LSTM algorithm. Also, the proposed semi-decentralized GNN approach is shown to reduce the overall inference time by about an order of magnitude compared to centralized and decentralized inference schemes.
Mahmoud Nazzal, Abdallah Khreishah, Joyoung Lee, Shaahin Angizi, Ala Al-Fuqaha, Mohsen Guizani
2023-02-28T00:21:18Z
http://arxiv.org/abs/2303.00524v2
Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach ###### Abstract Prediction of taxi service demand and supply is essential for improving customer's experience and provider's profit. Recently, graph neural networks (GNNs) have been shown promising for this application. This approach models city regions as nodes in a transportation graph and their relations as edges. GNNs utilize local node features and the graph structure in the prediction. However, more efficient forecasting can still be achieved by following two main routes; enlarging the scale of the transportation graph, and simultaneously exploiting different types of nodes and edges in the graphs. However, both approaches are challenged by the scalability of GNNs. An immediate remedy to the scalability challenge is to decentralize the GNN operation. However, this creates excessive node-to-node communication. In this paper, we first characterize the excessive communication needs for the decentralized GNN approach. Then, we propose a semi-decentralized approach utilizing multiple cloudlets, moderately sized storage and computation devices, that can be integrated with the cellular base stations. This approach minimizes inter-cloudlet communication thereby alleviating the communication overhead of the decentralized approach while promoting scalability due to cloudlet-level decentralization. Also, we propose a heterogeneous GNN-LSTM algorithm for improved taxi-level demand and supply forecasting for handling dynamic taxi graphs where nodes are taxis. Extensive experiments over real data show the advantage of the semi-decentralized approach as tested over our heterogeneous GNN-LSTM algorithm. Also, the proposed semi-decentralized GNN approach is shown to reduce the overall inference time by about an order of magnitude compared to centralized and decentralized inference schemes. GNN, hetGNN, taxi demand forecasting, taxi supply forecasting, ITS, decentralized inference. ## I Introduction An intelligent transportation system (ITS) is an essential component of modern city planning. A key component of an ITS is the means of public transportation such as taxis, buses, and ride-hailing vehicles. As the importance of these services grows, there is a corresponding need for accurately and efficiently forecasting the travel needs of passengers and the corresponding available supplies by vacant taxis ready to serve them. This forecasting enables efficient management of transportation resources and allows for dynamic allocation of taxis such that customer waiting time is minimized and taxi occupancy times are maximized. It can also help optimize routes, urban development, traffic flow, and public transportation planning. Taxi demand forecasting has been receiving increasing attention in the recent transportation engineering literature [1, 2, 3, 4, 5, 6, 7, 8, 9]. Similar to other prediction problems, approaches to taxi demand and supply forecasting can be broadly categorized into two main categories. First is model-based approaches, where a statistical model for traffic patterns is employed. Examples along this line include auto-regressive integrated moving average (ARIMA) [3] and linear regression [10] models. Despite their simplicity, these methods focus only on temporal dependencies and overlook exploiting spatial dependencies for the prediction.
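For concreteness, a purely temporal baseline of the kind cited above can be set up in a few lines with statsmodels; the series below is synthetic and for illustration only, standing in for hourly pick-up counts in one city region.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly pick-up counts for one region: a daily cycle plus noise
rng = np.random.default_rng(0)
hours = pd.date_range("2023-01-01", periods=24 * 28, freq="h")
demand = pd.Series(
    50 + 20 * np.sin(2 * np.pi * np.arange(len(hours)) / 24)
    + rng.normal(0, 5, len(hours)),
    index=hours,
)

# A purely temporal ARIMA baseline: no spatial information enters the model
model = ARIMA(demand, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=6)  # predicted demand over the next six hours
```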
The other category is deep learning (DL)-based methods, where data-driven techniques are shown to exploit spatiotemporal correlations well for improved prediction. Along the DL line, recurrent neural networks (RNN) such as long short-term memory (LSTM) are especially important for taxi demand and supply forecasting as they can well address time dependency. Accordingly, there is a series of works on using RNN for taxi demand and supply forecasting [11, 12, 13, 7, 14]. Recently, there has been a growing interest in the use of graph neural networks (GNNs) for taxi demand and supply prediction. This approach models city regions as nodes in a graph and their relations as the edges linking these nodes. Along this line, several works have shown the advantage of GNNs in utilizing local region information and the relationships across non-Euclidean regions in improving the forecasting performance [2, 5, 6, 8, 9, 14, 15]. Despite the promising success of GNNs for taxi demand and supply forecasting, there are still outstanding challenges hindering their potential. First, as for graph representation, it is advantageous to simultaneously expose several node and relation types in the representation learning of transportation graphs [17]. Specifically, the existence of multiple types of nodes and edges in current transportation graphs calls for adopting a heterogeneous information network (HIN) approach to seamlessly exploit them. This requires the development of corresponding heterogeneous GNNs (hetGNNs) to handle their representation learning. Second, inferring taxi demand and supply predictions at the level of the whole transportation system incurs a huge amount of computation, as it is necessary to utilize the existing salient relationships. This calls for developing solutions to improve the scalability of the GNN approach to cope with city-wide or even larger graphs [18]. Based on the above discussion, in this paper, we propose a hetGNN-LSTM-based algorithm for taxi demand and supply prediction. Compared to the existing GNN-based approaches, our algorithm defines taxis as nodes in a graph. This allows taxi graphs to be dynamic, as opposed to existing approaches assuming nodes as fixed geographical regions. On the other hand, this allows for predicting the demands and supplies for each taxi. Operating this algorithm in a centralized way is computationally intensive. Therefore, we develop a decentralized GNN inference approach. However, we show theoretically and through experiments that this decentralized approach incurs a huge message-passing delay which grows quadratically with the number of communication hops. To reduce this delay, we propose a semi-decentralized approach. This approach uses multiple cloudlet devices, each handling a subgraph of the transportation graph. A cloudlet is a moderate computing capability device placed at a base station (BS) and able to communicate with the taxis in its coverage area. From now on, we refer to the cloudlet with its network as a cloudlet network (CLN). As for the BS, we assume a 5G small cell such as the architecture of the eNodeB BS detailed in [19]. The contributions of this paper can be summarized as follows. * We consider taxi demand and supply forecasting at a taxi level. Predicting on a taxi-node level provides drivers with immediate information on the availability of pick-ups and the supply of other taxis in a region surrounding them. * We propose a novel heterogeneous graph-based algorithm for taxi demand and supply prediction utilizing hetGNNs.
We model the transportation system as a heterogeneous graph of taxis as nodes linked with three relationship types derived from road connectivity, location proximity, and target destination proximity. The proposed algorithm exploits these relationships to improve the prediction. * We propose a _semi-decentralized_ approach to GNN-based traffic prediction. This approach is proposed to mitigate the scalability limitation of centralized GNN inference and the huge message-passing delay in decentralized GNN inference. * We propose an adaptive node-CLN assignment to minimize inter-CLN communication. We develop a heuristic protocol for this assignment in a distributed manner across cloudlet devices. Experiments on real-world taxi data show the high prediction accuracy of the proposed taxi demand and supply forecasting algorithm compared to the state-of-the-art, represented by DCRNN [14], Graph WaveNet [15], and CCRNN [16], which are leading GNN-based approaches. Experiments also show that the inference time delay in decentralized GNN operation grows quadratically with the number of message-passing hops. Also, the proposed semi-decentralized GNN approach is shown to reduce the overall inference time by about 10 times compared to centralized and decentralized inference. The source code and datasets used in this paper are available on https://github.com/mahmoudkanazal/SemidecentralizedhetGNNLSTM. The rest of this work is organized as follows. Section II reviews related work. The proposed taxi demand and supply algorithm and the semi-decentralized GNN approach are detailed in Section III. Section IV presents experiments and results, with the conclusions in Section V. ## II Related Work ### _DL methods for traffic demand forecasting_ Similar to their use in other application areas, DL models achieve performance gains in a variety of traffic forecasting problems. This is due to their ability to leverage dependencies among training data. Along this line, recurrent neural networks such as LSTM [20] are used to exploit time correlations [21], whereas convolutional neural networks (CNNs) utilize spatial dependencies [22]. An LSTM is particularly well-suited for prediction tasks that involve temporal dependency. This is because LSTM networks can remember and use previous information over extended periods. A more recent research trend combines LSTMs and CNNs to exploit spatiotemporal correlations in what is known as ConvLSTM [7, 23]. However, these DL approaches share a common restriction; they overlook existing and potentially useful relationships across entities in a traffic system, such as taxis, roads, and customers. Such relationships model other types of dependencies such as road connectivity and proximity. For instance, the demands at two railway stations in the same city are very likely to be correlated [6], even though the two stations may be distant.
This combination enables GNNs to produce node embeddings that serve multiple downstream graph tasks such as node classification (inferring the class label of a node), link prediction (estimating the likelihood of a link existing between given nodes), and graph classification (inferring a property of the graph as a whole). Each node in a graph has a computational graph composed of its \(L\)-hop neighboring nodes. Node embeddings are obtained by alternating between message passing, i.e., communicating local information across nodes, and aggregation, where the received messages along with previous node information are used to obtain an updated node embedding. Message passing is done according to the topology of the graph, whereas aggregation is done by the neural network layers of the GNN model obtained by training over graph data [27, 28]. As DL methods overlook useful relationships, recent literature considers modeling the traffic system as a graph and applying a GNN approach to problems such as taxi demand and supply forecasting and flow prediction [29]. A city area is divided into many regions, and each region is represented by a node in the graph. Several works assume different edges linking these nodes, such as a common origin-destination relationship between two regions when a taxi moves from one region (the origin) to the other (the destination) [6]. A more recent example work [8] considers dividing a city into non-uniform regions serving as the graph nodes, linked with edges representing their road connectivity. Despite the clear advantages of GNNs, the existing research body on their usage for taxi demand and supply forecasting assumes homogeneous graphs. Hence, (homogeneous) GNNs along with other DL models such as LSTM are mainly used to perform the forecasting. Still, when the traffic system is modeled as a graph, it may contain different types of nodes with different relation types. Restricting the use of GNNs to homogeneous models overlooks this richness of relation types and limits the potential of GNNs. Therefore, it is advantageous to model traffic systems as HINs processed by hetGNNs [30], where the prediction can be improved by incorporating multiple relationships between the graph nodes. The main challenge hetGNNs face is handling heterogeneity. While some primitive hetGNNs project the HIN onto the graph space to eliminate its heterogeneity [31], others use the metapath concept [32]1 to maintain and utilize the heterogeneity [33, 34, 35]. This is achieved by decomposing the HIN into multiple metapaths encoded to obtain node representations under graph semantics. hetGNNs have been shown to outperform their GNN ancestors in many applications such as malicious payment detection [36], drug/illegal trade detection [37], and network intrusion attack detection [38]. Footnote 1: A metapath is a composition of relations linking two nodes. ### _Decentralizing GNN inference_ Training and testing of GNN models over large graphs require huge memory and processing costs. This is because graph nodes are mutually dependent and thus cannot be arbitrarily divided into smaller subgraphs. Techniques such as neighborhood sampling [39] may ease the problem to some extent. Still, even a sampled computational graph and the associated features may not fit in the memory of a single device [40]. Thus, centralized GNN operation faces a scalability limitation [40, 41].
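To make the neighborhood-sampling idea mentioned above concrete, the following minimal sketch uses the `NeighborLoader` utility of PyTorch Geometric (the library used in our experiments); the graph sizes and fan-out values are illustrative assumptions rather than the settings of any particular experiment in this paper.

```python
# A minimal sketch of neighborhood sampling with PyTorch Geometric's
# NeighborLoader. Graph sizes and fan-outs below are hypothetical.
import torch
from torch_geometric.data import Data
from torch_geometric.loader import NeighborLoader

# Toy homogeneous graph: 1,000 nodes with 16-dimensional features.
num_nodes = 1000
x = torch.randn(num_nodes, 16)
edge_index = torch.randint(0, num_nodes, (2, 5000))  # random edges
data = Data(x=x, edge_index=edge_index)

# Sample at most 10 neighbors per node for each of 2 hops, so a seed
# node's computational graph stays small regardless of its true degree.
loader = NeighborLoader(data, num_neighbors=[10, 10], batch_size=128)

for batch in loader:
    # `batch` is a sampled subgraph that fits in memory; a 2-layer GNN
    # can be applied to it to update the 128 seed-node embeddings.
    print(batch.num_nodes, batch.edge_index.shape)
    break
```

Even with sampling, however, the per-seed subgraphs and features of a city-scale graph may exceed the memory of a single device, which is the limitation discussed above.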
To mitigate the scalability limitation of centralized GNNs, decentralized (peer-to-peer) node inference has been applied to a few GNN applications like robot path control [42, 43] and resource optimization in wireless communication [18]. Decentralization naturally promotes the scalability of GNNs. Still, it requires excessive communication overhead between nodes [40]. In turn, this communication delay significantly slows down the operation of a GNN because the progression of calculations across GNN layers needs to wait for two-way message passing to deliver messages from \(L\)-hop neighbors. Another disadvantage of decentralized inference is the difficulty of coordinating and synchronizing the operation of all nodes. A further, less significant disadvantage is the need for each node to maintain and operate a copy of the GNN model. While the literature considers either centralized or decentralized GNNs, we became aware while developing this work of another work [41] on distributed GNN operation. [41] proposes an adaptive node-to-cloud-server assignment minimizing a general cost function with servers of different computational powers. However, this is done in a centralized manner; a solver needs to know the graphs of nodes and cloud servers to do the assignment. In our work, we optimize the assignment in a distributed manner at the cloudlet level. Also, according to [41], any node may be assigned to any cloud server, while our work focuses on the boundary nodes at each CLN and assigns them to their CLN or an adjacent one, taking the geometry into account. Moreover, while our work focuses on minimizing the communication delay, [41] adopts a general cost function where the challenge is mapping nodes to cloud servers of varying computational capabilities while optimizing the other costs, including the delay. It is also noted that [41] does not compare centralized and decentralized GNN implementations or consider their trade-offs. ## III The proposed work ### _System model_ The system model considered in this paper is a taxi service system composed of many taxis operating in a certain region/city, as shown in Fig. 1-a. The objective of the system is to provide future predictions for the demand (e.g., passengers) and supply (e.g., vacant taxis) in the vicinity around each taxi, represented by the red circles in this figure. The approach assumed in this work is a GNN approach based on a graph representation of the taxis, as shown in Fig. 1-b. To this end, we study and compare the following approaches to GNN inference with the taxi graph. * A fully centralized approach: as represented in Fig. 2-a, a server or cloud is placed at a BS with a communication range covering the entire operation area. Taxis upload their local messages to the server, which also keeps track of their locations. The server uses this information to obtain the nodes' computational graphs and uses a local GNN to obtain updated messages. According to the graph structure, the server performs message passing by computation instead of communication. Next, the server sends the updated node messages to their taxi nodes. For taxi-server communication, a communication network under the ITS-G5 standard [44] is assumed. * Our fully decentralized approach: taxis have GNN models locally and can only communicate with the taxis in their network's coverage area, as represented in Fig. 2-b. This way, each taxi forms its computational graph.

Figure 1: Taxis in a city (a) and their corresponding graph representation (b).

Taxi-to-taxi
communication is done through a wireless ad-hoc network such as the one in [45]. * Our proposed _semi-decentralized_ approach, as shown in Fig. 2-c: this approach uses cloudlets centered at CLNs. Taxis in the coverage area of each CLN use it to send their messages to its cloudlet device. Similar to the centralized setting, the cloudlet uses the uploaded taxi messages along with their locations to obtain node computational graphs and uses a local GNN to obtain updated messages. Message passing for the edges in the CLN is done by computation instead of communication, as represented by the dashed lines in Fig. 2-c. However, the messages between connected taxis in adjacent cloudlets are shared through cloudlet-cloudlet communication, as represented by the solid lines in Fig. 2-c. ### _Graph construction and problem formulation_ We represent the transportation system as a HIN composed of taxis as its nodes linked with edges of three types. First is a _connectivity_ edge [8] representing road connectivity. Second is a _proximity_ edge [9] linking nearby taxis. Third, we define a _destination-similarity_ edge linking taxis going to nearby destinations. Accordingly, the _connectivity_ adjacency matrix is as follows [8]. \[\mathbf{A}_{c}[i,j]=\begin{cases}1&\text{if there is a road connecting nodes $i$ and $j$},\\ 0&\text{otherwise.}\end{cases} \tag{1}\] The _proximity_ adjacency matrix is as follows [9]. \[\mathbf{A}_{p}[i,j]=\begin{cases}\operatorname{dist}(p_{i},p_{j})&\text{if } \operatorname{dist}(p_{i},p_{j})<th_{p},\\ 0&\text{otherwise.}\end{cases} \tag{2}\] where \(\operatorname{dist}(p_{i},p_{j})\) is a function of the Euclidean distance between taxi positions \(p_{i}\) and \(p_{j}\), and \(th_{p}\) is a certain threshold. Next, our proposed _destination-similarity_ adjacency matrix is as follows. \[\mathbf{A}_{d}[i,j]=\begin{cases}\operatorname{dist}(d_{i},d_{j})&\text{if } \operatorname{dist}(d_{i},d_{j})<th_{d},\\ 0&\text{otherwise.}\end{cases} \tag{3}\] where \(\operatorname{dist}(d_{i},d_{j})\) is a measure of the Euclidean distance between destinations \(d_{i}\) and \(d_{j}\), and \(th_{d}\) is a prescribed threshold. The structure of the proposed HIN is represented in the network schema in Fig. 3-a. We denote the HIN at a time instant \(t\) by \(G^{t}=(\mathcal{V}^{t},E_{c}^{t},E_{p}^{t},E_{d}^{t})\), or equivalently, \(G^{t}=(\mathcal{V}^{t},\mathbf{A}_{c}^{t},\mathbf{A}_{p}^{t},\mathbf{A}_{d}^{t})\), where \(\mathcal{V}^{t}\) is the node set, \(E_{c}^{t},E_{p}^{t}\), and \(E_{d}^{t}\) represent the connectivity, proximity, and destination-similarity edges, respectively, and \(\mathbf{A}_{c}^{t},\mathbf{A}_{p}^{t},\) and \(\mathbf{A}_{d}^{t}\) denote the connectivity, proximity, and destination-similarity adjacency matrices, respectively. For simplicity, we represent the operation of the system on a time-slot basis and assume the graph is fixed during a time slot.

Figure 2: Three possible computation settings. (a) centralized: taxis upload their messages to a central server that performs the computations and returns the results to the taxis. (b) decentralized: taxis exchange messages with their neighbors and perform computations locally. (c) semi-decentralized: taxis in a CLN upload their messages to a cloudlet. The cloudlet performs the computations and returns them to taxis.

Figure 3: The network schema of the proposed HIN in (a) and the proposed system architecture in (b).
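To illustrate Eqs. (1)-(3), the sketch below builds the three adjacency matrices for one snapshot of taxi positions and destinations. The choice of the raw Euclidean distance as the edge weight, the threshold values, and the `roads` connectivity input are illustrative assumptions; the paper leaves these implementation details open.

```python
# A minimal sketch of building the adjacency matrices of Eqs. (1)-(3).
# The distance function and thresholds are illustrative assumptions.
import numpy as np

def build_adjacency(positions, destinations, roads, th_p=1.0, th_d=1.0):
    """positions, destinations: (N, 2) arrays of taxi coordinates;
    roads: (N, N) boolean array, True if a road connects taxis i and j."""
    n = len(positions)
    A_c = roads.astype(float)   # Eq. (1): road connectivity
    A_p = np.zeros((n, n))      # Eq. (2): proximity
    A_d = np.zeros((n, n))      # Eq. (3): destination similarity
    for i in range(n):
        for j in range(n):
            dp = np.linalg.norm(positions[i] - positions[j])
            dd = np.linalg.norm(destinations[i] - destinations[j])
            if dp < th_p:       # the paper only requires "a function of"
                A_p[i, j] = dp  # the Euclidean distance; raw distance is
            if dd < th_d:       # used here as one possible choice
                A_d[i, j] = dd
    return A_c, A_p, A_d
```

The three matrices together with the node features then define the sub-HIN held by each cloudlet, as described next.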
At a time step \(t\), each taxi knows the \(P\)-step historical demand and supply values of its current region of dimensions \(m\times n\) taxi positions, which serve as the node message (attributes). The objective at each node is to predict the demand and supply values for the next \(Q\) time slots. In this sense, each taxi driver will be informed about both the availability of future passengers and other vacant taxis in an \(m\times n\) vicinity around their taxi. At a time instant \(t\), the graph \(G^{t}\) has an overall feature matrix \(\mathbf{X}_{t}\in\mathbb{R}^{N^{t}\times d}\), where \(d=m\times n\) is the input feature dimension and \(N^{t}\) is the number of nodes. So, we intend to obtain a mapping function \(\mathcal{F}\) as follows. \[\left[\mathbf{X}_{t-P+1:t},G^{t}\right]\overset{\mathcal{F}}{ \longrightarrow}\mathbf{X}_{t+1:t+Q}, \tag{4}\] where \(\mathbf{X}_{t+1:t+Q}\in\mathbb{R}^{Q\times N^{t}\times d}\) and \(\mathbf{X}_{t-P+1:t}\in\mathbb{R}^{P\times N^{t}\times d}\). ### _A hetGNN-LSTM algorithm for taxi demand and supply prediction in a semi-decentralized approach_ #### III-C1 The proposed hetGNN-LSTM algorithm To incorporate multiple edge types and time dependency in the prediction, we propose a hetGNN-LSTM algorithm as described in Fig. 3-b. For simplicity, we first present its operation in a centralized setting and then in a decentralized setting. After discussing the shortcomings of the centralized and decentralized settings, we present its use in the proposed semi-decentralized setting. In the centralized approach, the server first constructs a HIN according to the network schema in Fig. 3-a. For each node, messages are shared across the HIN; the hetGNN layer outcomes are then calculated accordingly, and the process continues. Eventually, the final node embedding is obtained after \(L\)-hop messages have been propagated from the neighbors to the node in question. After that, these embeddings are fed to an LSTM model to produce the eventual demand predictions. The predictions are then sent to their respective nodes. However, this centralized operation lacks scalability due to the huge amount of computation done at the server. This suggests decentralizing the operation. In the decentralized setting, represented in Fig. 2-b, each taxi maintains a copy of the hetGNN-LSTM model and is assumed to perform two main tasks. The first task is establishing the connection to receive the messages shared by its \(L\)-hop taxi nodes and using them along with its local information to obtain its final predictions, whereas the second task is sending its messages to these neighbors so that they can operate their GNNs. Due to the absence of a central server, a node needs to identify its \(L\)-hop neighbors and communicate with them. However, this is restricted by the communication abilities of these nodes and may not fully make use of the HIN structure between distant nodes. More importantly, the node-to-node message passing delay limits the number of communication hops achievable at a reasonable inference delay. Intuitively, the delay in decentralized GNN inference mainly depends on the message passing delay, which significantly increases with the number of communication hops. To quantify this dependency, in Theorem 1, we derive approximate bounds for the overall inference time delay in decentralized GNNs. We show that this delay increases quadratically with the number of communication hops. Furthermore, this increase is determined by the topology of the computational graph of a node.
The lower and upper bounds of the GNN inference delay are determined mainly by the maximal degree2 of the nodes in each communication hop and by the summation of node degrees in each communication hop, respectively. Footnote 2: A node's degree is the number of edges connected to it [46]. **Theorem 1**.: _In decentralized GNN inference, the overall inference delay with \(L\)-hop message passing, denoted by \(\Delta\), is within the following topology-dependent bounds with quadratic growth in \(L\)._ \[\sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\big[ld_{i}+\max_{x\in N_{l}(i)}\{d_{x}\}\big]+(l+1)t_{p}\Big)\\ \leq\Delta\leq\\ \sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\big[ld_{i}+\sum_{x\in N_{l}(i)}d_{x}\big]+(l+1)t_{p}\Big) \tag{5}\] _where \(L\) is the number of message passing hops (equivalently, the number of GNN layers), \(t_{r}\) is the packet transmission delay of the wireless network used for message passing, \(d_{i}\) is the degree of node \(i\), \(N_{l}(i)\) is the set of \(l\)-hop-away neighbors of node \(i\), and \(t_{p}\) is the GNN layer processing delay._ The proof of Theorem 1 is in the Appendix. To investigate the relationship between the number of hops and the overall GNN inference delay in real-world scenarios, we present the following experiment. A total of 255 taxis in a city region are considered to operate in a decentralized GNN setting. We assume an ad hoc wireless network connecting the taxi nodes. We calculate the overall inference time as specified in Section IV. For \(L\)-hop values ranging from 1 to 5, an \(L\)-hop computational graph of a node is obtained, and the overall message passing delay is calculated. We also calculate the GNN inference delay bounds presented in Theorem 1. We repeat this experiment for 10 trials and plot the average values of the overall GNN inference delay and its bounds versus the number of hops in Fig. 4. As shown in this figure, the overall inference delay grows quadratically with the number of communication hops. Also, the actual delay is within the bounds specified in Theorem 1. This result shows the message passing delay bottleneck in decentralized GNN inference. To resolve the limitations of scalability and excessive message passing delay in the centralized and decentralized schemes, respectively, we propose a semi-decentralized approach. As shown in Fig. 2-c, this approach uses a set of CLNs that span the work area of the taxis. Each CLN is associated with a certain city area and thus establishes a sub-HIN accordingly. Then, taxis in each CLN's region (sub-HIN) communicate their messages to the cloudlet, which performs predictions using its copy of the hetGNN-LSTM model, and then communicates these predictions back to their respective taxis. It is noted that some boundary taxi nodes may have edges connecting them to taxi nodes in an adjacent CLN. In this case, the CLNs serving these taxis need to exchange messages about these nodes for each GNN layer. Adjacent CLNs exchange information on their boundary taxis that have edges across these CLNs, as represented by the solid lines in Fig. 2-c. The main steps of the operation in the semi-decentralized approach are outlined in Algorithm 1. ```
Input: Initial CLN region boundaries for each cloudlet; a trained hetGNN-LSTM model at each cloudlet.
Output: The next \(Q\) future predictions of taxi demand and supply in an \(m\times n\) vicinity region around each taxi node.
1: Each cloudlet \(u\) senses the existing \(n_{u}\) taxis in its CLN region.
2: Each cloudlet \(u\) obtains a sub-HIN \(G_{u}^{t}:(\mathbf{A}_{u,c}^{t},\mathbf{A}_{u,p}^{t},\mathbf{A}_{u,d}^{t}\in\mathbb{R}^{n_{u}\times n_{u}})\) based on the taxis in its CLN region.
3: Each cloudlet \(u\) obtains the feature matrix \(\mathbf{X}_{u}^{t}\in\mathbb{R}^{n_{u}\times d}\) of the nodes in its CLN.
4: Each cloudlet \(u\) determines the boundary nodes in its CLN connected to nodes in an adjacent cloudlet \(v\).
5: Each cloudlet \(u\) computes the messages passed across its nodes.
6: Each cloudlet \(u\) sends the messages of its boundary nodes to the CLNs containing their connected nodes.
7: Each cloudlet \(u\) receives the messages from the nodes connected to its boundary nodes through their CLN.
8: The updated node messages are aggregated and fed to the \(l\)-th GNN layer to produce new messages (steps 5-8 are repeated for each GNN layer \(l=1,\dots,L\)).
9: Each cloudlet \(u\) sends the eventual embeddings to their respective nodes in the CLN.
``` **Algorithm 1** Semi-decentralized GNN inference. #### III-C2 Adaptive node-CLN assignment In semi-decentralized operation, inter-CLN communication needs to happen for boundary nodes at each GNN layer. The way the nodes are assigned to CLNs governs the number of inter-CLN edges. Therefore, an adaptive assignment can be employed to minimize the number of inter-CLN edges. We propose using minimum edge-cut graph partitioning to divide the shared subgraph between each pair of adjacent CLNs to serve this goal. As an example, in Fig. 5-a, uniformly assigning the nodes to CLNs results in 3 edges across the CLNs (denoted by the red dashed lines). However, applying minimum-cut graph partitioning to the nodes in the shared boundary region (\(\mathcal{V}_{b}\)) enclosed by the dashed lines in Fig. 5-b, a new assignment may add nodes \(v_{7}\), \(v_{8}\), and \(v_{9}\) to CLN 1, as seen in Fig. 5-c. This means that inter-CLN message passing needs to happen only across one node pair (\(v_{8}\)-\(v_{10}\)). This partitioning has to be done in a distributed manner between the CLNs. To achieve this, CLN 1 and CLN 2 share information on the nodes and their positions in the boundary subgraph \(\mathcal{V}_{b}\). Nodes in this \(2L\)-hop boundary region are then to be partitioned between the two CLNs in a distributed two-way manner. This means that one CLN will do the partitioning and instruct the other. For example, let us assume it is CLN 2. So, the partitioning problem is to optimize an assignment operator \(\Psi\) that assigns each node in the shared boundary subgraph (\(\mathcal{V}_{b}\)) to belong either to the set of nodes assigned to CLN 1 (\(\mathcal{V}^{1}\)) or to that of CLN 2 (\(\mathcal{V}^{2}\)), minimizing the inter-set edges. The node assignment problem can be formulated as follows. \[\begin{array}{ll}\operatorname{argmin}_{\Psi}&\sum_{u}\sum_{v}\mathds{1}\{\mathcal{V}_{u}^{1},\mathcal{V}_{v}^{2}\}\\ \text{subject to}&(\mathcal{V}^{1},\mathcal{V}^{2})=\Psi(\mathcal{V}_{b})\\ &\mathcal{V}^{1}\cap\mathcal{V}^{2}=\varnothing\end{array}, \tag{6}\] where the indicator function \(\mathds{1}\{\mathcal{V}_{u}^{1},\mathcal{V}_{v}^{2}\}\) is 1 if there is an edge between the \(u\)-th node in \(\mathcal{V}^{1}\) and the \(v\)-th node in \(\mathcal{V}^{2}\), and is 0 otherwise. The node assignment problem in (6) can be solved as a minimum edge-cut graph partitioning process. The overall graph is divided into multiple subgraphs with minimum edge cuts, and each subgraph is assigned to a CLN.
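The objective in Eq. (6) can be evaluated for any candidate assignment by simply counting the cut edges, as in the following sketch. The edge list and node names are illustrative, loosely following the Fig. 5 example.

```python
# A sketch of evaluating the objective in Eq. (6): counting the edges
# cut by a candidate assignment of boundary nodes to CLN 1 or CLN 2.
def cut_size(edges, assignment):
    """edges: iterable of (u, v) node pairs in the boundary subgraph V_b;
    assignment: dict mapping each boundary node to 1 (CLN 1) or 2 (CLN 2)."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Illustrative edges: as in the Fig. 5 discussion, moving v7, v8, v9 to
# CLN 1 leaves a single cut edge (v8, v10).
edges = [("v7", "v8"), ("v8", "v9"), ("v8", "v10"), ("v10", "v11")]
assignment = {"v7": 1, "v8": 1, "v9": 1, "v10": 2, "v11": 2}
print(cut_size(edges, assignment))  # -> 1
```

An exact minimization of this count is a graph partitioning problem, which motivates the heuristic discussed next.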
The graph partitioning problem is known to be NP-complete [47]. However, multi-level graph partitioning, as in the METIS algorithm [48], provides suitable solutions by operating in three stages: coarsening the graph into smaller graphs, bisecting the smallest graph, and gradually projecting the nodes back onto each graph partition. Still, the nodes in the taxi graph are aligned in a two-dimensional space. This eases the graph partitioning problem by restricting partitioning to the boundary nodes of neighboring CLNs. So, in this work, we apply k-means clustering between each pair of adjacent CLNs to partition their boundary nodes in a distributed manner. We propose a protocol for achieving adaptive assignment between each pair of adjacent CLNs in a distributed manner in Algorithm 2. #### III-C3 Computational complexity analysis The proposed algorithm assumes taxis as nodes and generates predictions in the \(m\times n\) vicinity of each node, whereas the existing approaches predict aggregate demand and supply values for each region treated as a node. Assuming a city with \(N\) taxis operating in \(K\) regions, the proposed algorithm retrieves \(2\times m\times n\times P\times Q\) data values for \(N\) taxis, whereas the existing approaches retrieve \(2\times P\times Q\) data values for \(K\) regions. It is interesting to compare the inference time complexity of the proposed algorithm to that of the existing approaches.

Fig. 4: Actual values of the overall GNN inference delay versus the number of communication hops and their corresponding bounds of Theorem 1.

The inference time complexity of the existing approaches depends on the following factors. * Time complexity of passing \(2PQ\)-dimensional messages across \(K\) (region) nodes within \(L\) GNN layers: assuming a message packet delay \(t_{r}\), this is proportional to \(P\), \(Q\), \(K\), and \(L^{2}\) (as concluded in the proof of Theorem 1). * Time complexity of processing \(2PQ\)-dimensional messages for \(L\) layers per node: assuming a processing delay \(\tau\) for each message, this is proportional to \(P\), \(Q\), \(L\), \(K\), and \(\tau\). Therefore, the inference time complexity of the existing approaches is \(O(KPQ(L^{2}t_{r}+L\tau))\). The inference time complexity of the proposed algorithm in the semi-decentralized setting is calculated at a cloudlet level since cloudlets operate concurrently. Let us assume that the number of nodes per CLN is \(N/K\), on average. The cloudlet inference time depends on the following factors. * Message passing by computation of \(mnPQ\)-dimensional messages for \(L\) GNN layers per node: let us denote by \(t_{c}\) the time to pass an \(mnPQ\)-dimensional message by computation. The time complexity due to this delay is proportional to \(N/K\), \(m\), \(n\), \(P\), \(Q\), \(t_{c}\), and \(L^{2}\). * Layer processing time of \(mnPQ\)-dimensional messages for \(L\) layers per node: denoting this processing time by \(\tau\), the time complexity due to this delay is proportional to \(L\tau\). * Cross-CLN message passing for the boundary nodes: let us assume a fraction \(\gamma\) of the cloudlet nodes are boundary nodes connected to nodes in other CLNs, and let us denote by \(t_{CLN}\) the message packet transmission delay across CLNs. Then, this delay will be \(\gamma(N/K)L^{2}t_{CLN}\). So, the overall per-cloudlet time complexity is \(O\big(mnPQ\,t_{c}(N/K)L^{2}+mnPQ(N/K)L\tau+\gamma(N/K)L^{2}t_{CLN}\big)\).
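The per-cloudlet complexity expression above can be turned into a rough inference-time estimate, as sketched below; all timing constants and the fraction of boundary nodes are placeholder assumptions rather than measured values.

```python
# A rough per-cloudlet inference-time estimate following the complexity
# expression derived above; all constants below are hypothetical.
def cloudlet_time(N, K, m, n, P, Q, L, t_c, tau, t_cln, gamma):
    nodes = N / K                            # average nodes per CLN
    msg = m * n * P * Q                      # message dimension per node
    passing = msg * t_c * nodes * L**2       # message passing by computation
    processing = msg * nodes * L * tau       # GNN layer processing
    boundary = gamma * nodes * L**2 * t_cln  # cross-CLN message passing
    return passing + processing + boundary

# Example with assumed constants: 255 taxis, 10 CLNs, 3x3 vicinity,
# P = Q = 4 historical/predicted steps, and L = 2 GNN layers.
est = cloudlet_time(N=255, K=10, m=3, n=3, P=4, Q=4, L=2,
                    t_c=1e-6, tau=1e-4, t_cln=3.3e-3, gamma=0.2)
print(f"estimated per-cloudlet time: {est:.3f} s")
```

The estimate makes the trade-off visible: finer decentralization (larger \(K\)) shrinks the first two terms, while the boundary term grows with the fraction \(\gamma\) of cross-CLN edges, which is what the adaptive assignment minimizes.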
## IV Experiments and Performance Analysis We present experiments on real-world data to analyze the operation of the proposed hetGNN-LSTM taxi demand and supply forecasting algorithm and the semi-decentralized GNN approach from the perspectives of prediction accuracy and GNN inference delay. ### _The setup and dataset_ The dataset used in this work is adopted from the NYC dataset [4]. This dataset consists of 35 million taxi trip records in New York City from April 1st, 2016 to June 30th, 2016. For each trip, the following information is recorded: pick-up time, drop-off time, pick-up longitude, pick-up latitude, drop-off longitude, and trip distance. As for the hetGNN, we use a heterogeneous graph convolutional network (HeteGCN). We implement the models using PyTorch Geometric (PyG) [49] version 2.0.3 built on PyTorch version 1.10.0. Experiments are conducted on a Lambda GPU workstation with 128 GB of RAM, two GPUs each with 10 GB of RAM, and a 10-core Intel i9 CPU at a clock speed of 3.70 GHz. Table I lists the key hyperparameters of the hetGNN-LSTM model. ### _Performance of the proposed hetGNN-LSTM forecasting algorithm_ In this experiment, we compare the accuracy of the taxi demand and supply values predicted by the proposed hetGNN-LSTM algorithm with the following approaches representing the state-of-the-art. * DCRNN [14]: uses a combination of diffusion convolutional layers and RNNs to make predictions. * Graph WaveNet [15]: uses a combination of graph convolutional layers and dilated causal convolutional layers to capture the complex patterns in the data, together with an adaptive adjacency matrix. * CCRNN [16]: uses a GCN model with coupled layer-wise graph convolution, which allows for more effective modeling of the spatial-temporal dependencies in transportation demand data with learnable adjacency matrices. Comparisons are conducted in terms of the following quality metrics. * Root mean squared error (RMSE) \[\mathrm{RMSE}(\mathbf{x},\hat{\mathbf{x}})=\sqrt{\frac{1}{n}\sum_{i}\left(x_{i}-\hat{ x}_{i}\right)^{2}}\] * Mean absolute percentage error (MAPE) \[\mathrm{MAPE}(\mathbf{x},\hat{\mathbf{x}})=\frac{1}{n}\sum_{i}\left|\frac{x_{i}-\hat{ x}_{i}}{x_{i}}\right|\] * Mean absolute error (MAE) \[\mathrm{MAE}(\mathbf{x},\hat{\mathbf{x}})=\frac{1}{n}\sum_{i}\left|x_{i}-\hat{x}_{i}\right|\] where \(\mathbf{x}=x_{1},\cdots,x_{n}\) denotes the ground truth values of taxi demand and supply, and \(\hat{\mathbf{x}}=\hat{x}_{1},\cdots,\hat{x}_{n}\) represents their predicted values. There are two major differences between the operation of the proposed algorithm and the existing algorithms in the literature [2, 5, 6, 14, 15, 16]. First, our hetGNN-LSTM algorithm considers dynamically evolving graphs since its node granularity is at the taxi level. Thus, taxis can enter or leave the system, generating a dynamic graph. In contrast, the existing approaches model city regions as graph nodes and, therefore, have static graphs. Secondly, our algorithm considers predicting the demands and supplies in a vicinity surrounding each taxi node, whereas the existing approaches only predict at the locations of nodes (since each node is a region). Therefore, it is unfair to directly compare our algorithm to the existing approaches, as this would compare different quantities. We thus include the following comparison cases, where our algorithm is operated on a region level and the other algorithms are operated on a taxi level. * A _taxi vs.
region_ comparison case: in this case, we record the taxi demand and supply values obtained by the proposed algorithm at a 3×3 region surrounding each taxi, and record the demand and supply predictions of the baselines at specific city regions. This is an unfair comparison since the proposed algorithm predicts more information compared to the baselines. * A _region vs. region_ comparison case: from the demand and supply predictions obtained with the proposed algorithm at taxi locations, we calculate the overall predictions made in each city region. We compare these values to the predictions at the same regions obtained by each of the baselines. This is a fair comparison since we compare corresponding predictions. * A _taxi vs. taxi_ comparison case: we record the taxi demands and supplies obtained by the proposed algorithm at taxi locations. For the baselines, we use them to predict demands and supplies at the same taxi locations (rather than on a region level). This is also a fair comparison case since we compare predictions of the same information. For each of the above-mentioned comparison cases, we operate the proposed hetGNN-LSTM model in the three decentralization settings, namely the centralized, fully decentralized, and semi-decentralized settings, denoted by scenarios \(SC1\), \(SC2\), and \(SC3\), respectively. Table II lists the results of this experiment. Considering Table II, let us first compare the performances of the proposed algorithm in the centralized, fully decentralized, and semi-decentralized settings (\(SC1\), \(SC2\), and \(SC3\), respectively). The centralized approach \(SC1\) has the best performance compared to \(SC2\) and \(SC3\). This is because it has access to all \(L\)-hop information for each node. Since \(SC3\) uses inter-cloudlet communication to achieve message passing between boundary nodes in adjacent CLNs, its performance is close to that of the centralized setting (\(SC1\)). It shows some degradation compared to \(SC1\) because some nodes may have dependencies across non-adjacent CLNs, which are not accounted for in the inter-cloudlet communication. However, the fully decentralized setting \(SC2\) shows more performance degradation compared to \(SC1\) and \(SC3\). This is because the number of communication hops in \(SC2\) is restricted by the communication ranges of the taxis in the wireless ad hoc network (a distance of 100 m). Now, let us compare the performance of the proposed algorithm in the three settings to that of the baselines according to Table II. First, for the _taxi-region_ comparison case, the proposed algorithm has higher RMSE and MAE values compared to the other baselines. However, it has a lower MAPE than the baselines. This result is not conclusive since this comparison case compares different quantities in different settings (the proposed algorithm predicts on a taxi level, whereas the baselines predict on a region level); we only include this result for the reader's reference. The other two comparison cases are more reasonable in the sense that similar quantities are compared. In the region-region comparison case, the proposed algorithm significantly outperforms the baselines. For example, the proposed algorithm in the centralized setting achieves reductions of 45.4%, 56.0%, and 58.5% in RMSE, MAE, and MAPE compared to CCRNN [16]. Similar reductions are obtained with the proposed algorithm in the decentralized and semi-decentralized settings.
A similar result is seen in the taxi-taxi comparison case, where the three metrics of the baselines become larger as they are operated at a taxi-node level. As an example, the proposed algorithm in the centralized setting achieves reductions of 17.46%, 28.79%, and 50.31% in RMSE, MAE, and MAPE compared to CCRNN [16]. A similar performance is obtained with the proposed algorithm in the decentralized and semi-decentralized settings. ### _The impact of decentralization on the GNN inference delay_ In this experiment, the objective is to compare the overall GNN inference time in the centralized, decentralized, and semi-decentralized settings, calculated as detailed below. * In centralized GNN: the overall inference time equals the summation of the time for the nodes to upload their messages to the server, the time of inference for the nodes using message passing by calculation and layer processing, and the time to send the eventual messages to their respective nodes. The communication medium assumed between the nodes and the server is the ITS-G5 standard [44], and we adopt a packet transmission delay of 3.3 ms as reported in [44]. As for the GNN layer computation time, we measure it as the execution time of one GNN layer. Similarly, we measure the time required to pass the messages by computation instead of communication. * In decentralized GNN: the overall inference time for the nodes in the graph is calculated on a node level, where each node has its \(L\)-hop computational graph. We calculate the overall inference time for a given node by adding the transmission delays required to send and receive the messages to/from the nodes in its \(L\)-hop computational graph, and the processing time at each GNN layer. The communication medium assumed in this setting is an ad hoc wireless network, and we adopt empirical values for the transmission delay from [45]. In this network, a source node forwards its message to relay nodes that forward it until it reaches the destination node. As described in [45], the processing delay for a source node is about 16.55 ms, which accounts for queuing and processing delays, whereas the link transmission delay is 7 ms. The processing delay at a relay node, however, is 11.65 ms. Thus, an empirical value for the transmission delay calculated in this manner is about (16.55+7) ms for source nodes and (11.65+7) ms for relay nodes. As for the layer processing time, a decentralized device is assumed to have a 100 times longer processing time compared to a centralized cloud server, as commonly assumed in the literature [50]. * In semi-decentralized GNN: the overall inference time is calculated as the time for the slowest CLN to obtain the embeddings of its nodes. This is calculated as the sum of the time required to upload the node messages to the CLN, the time for the CLN to produce latent embeddings based on the initial node messages, and the time to send these latent embeddings to their nodes. Furthermore, if a CLN has boundary nodes connected to nodes in an adjacent CLN, then the two CLNs have to exchange the updated messages of their connected nodes for each GNN layer. Thus, the overall message passing delay per GNN layer is the sum of the message passing by calculation (relatively small) and the inter-CLN message passing (relatively large). As for the layer processing time, a cloudlet in the semi-decentralized setting is assumed to have an order of magnitude slower computation compared to the cloud server in the centralized setting.
It is noted that if an adaptive assignment is applied, then we add its time delay. This is the sum of the time to send information from a CLN to its neighbor, the time for the receiving CLN to perform the adaptive k-means assignment, and the time for sending the result to the other CLN. With the above specifications, we assume a total of 255 nodes spread over a city area. Then, we quantify the overall inference time over the number of communication hops for the following scenarios. * _Cent.:_ centralized inference, as shown in Fig. 2-a. * _Decent.:_ decentralized inference, as shown in Fig. 2-b. * _Semi-decent.:_ semi-decentralized inference, as shown in Fig. 2-c, where decentralization is obtained using 10 CLNs with a uniform node assignment. We also consider the case of 20 CLNs to observe the effect of the number of CLNs on performance. * _Semi-decent.-adaptive:_ semi-decentralized inference where decentralization is obtained by splitting the nodes into the coverage of 10 non-uniform CLNs with the proposed adaptive assignment according to Algorithm 2. Similar to the previous scenario, we also include the case of using 20 CLNs. Fig. 6 shows the results of the above experiment, with the GNN model being the proposed hetGNN-LSTM model in (a), and CCRNN [16] in (b). Several conclusions can be drawn from this figure. First, decentralization reduces the overall inference time (by comparing _Cent._ and _Decent._). Second, the added benefit of semi-decentralization is shown by the significant reduction, of about an order of magnitude, in inference time attained by the proposed setting (_Semi-decent._ and _Semi-decent.-adaptive_) compared to either centralization or decentralization. Furthermore, finer decentralization (20 CLNs) is shown to be more advantageous than coarse decentralization (10 CLNs). This is consistently seen in both _Semi-decent._ and _Semi-decent.-adaptive_. Moreover, the advantage of adaptive node-CLN assignment is shown by a reduction of around 2 times in inference time in _Semi-decent.-adaptive_ compared to the uniform assignment case _Semi-decent._. This is consistent with the expected reduction in inter-CLN edge communication. Fig. 7 visually illustrates the adaptive assignment by comparing a sample uniform assignment to an adaptive one in parts (a) and (b), respectively. In part (b), we obtain the adaptive assignment using the proposed distributed adaptive assignment approach in Algorithm 2. CLN boundaries are denoted by solid yellow lines and nodes are denoted by yellow dots. It can be seen that the adaptive assignment helps reduce the number of taxis connected across CLNs and, thus, the volume of inter-CLN communication. This explains the reduction in inference delay shown in the _Semi-decent.-adaptive_ scenario in Fig. 6. ### _The generalizability of the hetGNN model_ A key advantage of GNNs is that even when trained with moderately sized graphs, they can generalize to larger graphs with unforeseen structures [42]. In this experiment, we investigate the impact of the training graph size on the performance of the proposed hetGNN-LSTM algorithm. For this purpose, we consider a sample test graph of 1916 nodes and test it with different models trained on graphs of gradually increasing size, from 220 to 1916 nodes.

Figure 6: Comparison of the overall inference time versus the number of message passing hops in four decentralization scenarios, with the proposed model in (a), and CCRNN in (b).
Fig. 8 shows the testing RMSE, MAE, and MAPE performance metrics versus the training graph size. In view of Fig. 8, it can be seen that reducing the training graph size from 1916 nodes to 220 nodes does not incur a proportional drop in performance. With about one-tenth of the full graph size used for training, the performance loss is still relatively marginal. This result establishes the generalizability of the hetGNN model.

Figure 8: Quality metrics for testing with a fixed graph size while varying training graph sizes.

Figure 7: Uniform and adaptive assignment samples in (a) and (b), respectively.

### _Training the model using federated learning_ The above experiments focus on the inference of the proposed model. Still, it is interesting to examine its operation with distributed learning such as federated learning (FL) [51, 52]. Due to the absence of a central server, we assume a server-free FL framework. This framework is similar to the server-free FL approach in [53], where neighboring clients exchange models, aggregate them to obtain a model initialization, train on local data, then exchange the trained models, and so on. We additionally assume that neighboring clients work with mutual trust. In this setting, each cloudlet is considered an FL client. Starting from the initial model states, each client exchanges its model with its adjacent neighbors, aggregates the received models along with its own local model, and trains the aggregated model over its local data. Next, the trained models are again shared across direct neighbors, and so on. This process is repeated for a predetermined number of FL rounds. In the following experiment, we divide the city area into 10 areas covered by 10 CLNs and apply this decentralized FL across them for 10 rounds. Table III lists several hyperparameters and specifications for this experiment. We repeat this experiment with varying CLN graph sizes. For each case, we test the aggregated models over a test graph of 1916 nodes and plot the average performance metrics across the 10 clients. Fig. 9 shows the results of this experiment. One can make the following observations in view of Fig. 9. First, an FL-trained model exhibits a slight degradation in its prediction quality compared to a model trained in a centralized manner. This can be seen by comparing the RMSE, MAE, and MAPE values in Fig. 9 to those in Fig. 8, where the metric values under FL training differ by about 7% on average from their counterparts under centralized training. Second, it can be seen that the size of the client graph has a small impact on the quality of the aggregated model. This result is consistent with the generalizability of the GNN model established in the previous experiment. Therefore, one can still train with moderately sized graphs, possibly at the cloudlet-area level, without sacrificing performance. ## V Conclusion and Future Work In this paper, we propose a hetGNN-LSTM algorithm for taxi demand and supply prediction making use of several edge types between graph nodes. To enable this approach for large-scale training and inference, we also propose a semi-decentralized GNN approach that resolves the scalability and excessive communication delay limitations of centralized and decentralized GNNs, respectively. We propose the use of multiple CLNs to enable this semi-decentralized approach. Through experiments using real data, the proposed hetGNN-LSTM algorithm is shown to be effective in predicting taxi demand and supply on a taxi level.
This allows for dynamic graph evolution over time, as opposed to the existing approaches assuming static graphs. The proposed hetGNN-LSTM model is shown to generalize well to larger graphs with unforeseen structures. Moreover, the proposed semi-decentralized approach allows for cloudlet-level federated learning without sacrificing performance. Future extensions to this work will include incorporating other node and edge types in the constructed HIN. Extensions will also include the development of custom-made hetGNN models that better fit traffic demand prediction, namely by more explicitly incorporating time dependency. It is also interesting to devise better ways of node-CLN assignment. ## VI Acknowledgment This work is supported in part by the National Science Foundation under Grant No. 2216772.

Fig. 9: Quality metrics for testing with a fixed graph size and different average client graph sizes in distributed FL.

Fig. 10: A sample node's computational graph in the decentralized GNN setting.

### _The proof of Theorem 1_ Proof.: To derive the delay bounds, we quantify the inference delay bounds on a small graph with 2 hops and then generalize the derived bounds to \(L\)-hop graphs with arbitrary numbers of nodes. Consider the computational graph of node \(i\) shown in Fig. 10. This node has nodes \(j\) and \(k\) as its 1-hop neighbors, and nodes \(j_{1}\) through \(j_{3}\) and \(k_{1}\) through \(k_{4}\) as its 2-hop neighbors. Let us express the overall inference delay with messages from the 1-hop and 2-hop nodes to node \(i\) according to the topology of this computational graph. We assume that nodes communicate through an ad hoc wireless network. Also, a node can only communicate with one node at a time. Let \(t_{s}\) and \(t_{r}\) denote the (per-node) sending and receiving transmission delays, respectively. For simplicity, let us further assume that nodes have similar separations and that \(t_{s}=t_{r}\). Also, let \(t_{p}\) denote the processing time for messages received at a node through a GNN layer. The total 2-hop inference delay (\(\Delta_{tot,2}\)) can be written as \[\Delta_{tot,2}=\Delta_{1}+\Delta_{2} \tag{7}\] where \(\Delta_{1}\) is the time required to pass the 1-hop messages from \(j\) and \(k\) to \(i\), process them by the first GNN layer, and send the processed message back to \(j\) and \(k\). So, the total delay of hop 1 is: \[\Delta_{1}=2(t_{s}+t_{r})+t_{p}=2\times 2t_{r}+t_{p} \tag{8}\] In general, for a node \(i\) with degree \(d_{i}\), \[\Delta_{1}=d_{i}(t_{s}+t_{r})+t_{p}=2d_{i}t_{r}+t_{p} \tag{9}\] where \(d_{i}\) is the degree of node \(i\). Let us now consider the delay in the second communication hop. \(\Delta_{2}\) is the time required to receive the messages from the neighbors of \(j\) and \(k\), send them to \(i\), process them by the second GNN layer, and then send the result back to the neighbors of \(j\) and \(k\). The time required for collecting the messages from the neighbors of \(j\) (to itself) is \(\Delta_{2}^{j}=d_{j}(t_{s}+t_{r})+t_{p}=2d_{j}t_{r}+t_{p}\). Similarly, the time required to collect the messages from the neighbors of \(k\) (to itself) is \(\Delta_{2}^{k}=2d_{k}t_{r}+t_{p}\). These two messages then need to be sent to \(i\), which requires another time delay of \(\Delta_{j,k\to i}=2d_{i}t_{r}\). To this end, the topology of the graph determines how the delays of \(j\) and \(k\) contribute to the overall delay.
Specifically, if \(j\) and \(k\) have no common nodes in their 1-hop neighborhoods (except for \(i\)), then they can work concurrently on receiving their messages. Thus, their joint delay is determined by the slowest among them, which minimizes the total delay. However, if they have common nodes, then they must communicate with these nodes sequentially. This elongates the delay, which can be maximally equal to the summation of the delays of \(j\) and \(k\). So, the delay for hop-2 message passing is \[\Delta_{tot,2}=\Delta_{1}+\Delta_{2}^{j}+\Delta_{2}^{k}-\Delta_{conc} \tag{10}\] where \(\Delta_{conc}\) is the time duration during which \(j\) and \(k\) concurrently receive messages (from their exclusive neighbors). The minimum value of \(\Delta_{conc}\) is zero when \(j\) and \(k\) cannot work simultaneously, i.e., when they share the same 1-hop neighborhood. This maximizes the delay to \(\Delta_{2,max}=\Delta_{1}+\Delta_{2}^{j}+\Delta_{2}^{k}\). On the other hand, the maximum value of \(\Delta_{conc}\) is \(\min(\Delta_{2}^{j},\Delta_{2}^{k})\), attained when \(j\) and \(k\) can work completely simultaneously (except for communicating with node \(i\)). In that case, \(\Delta_{2}\) is minimized, giving \(\Delta_{2,min}=\Delta_{1}+\max\{\Delta_{2}^{j},\Delta_{2}^{k}\}\). Expanding \(\Delta_{2}^{j}\) and \(\Delta_{2}^{k}\) and collecting common terms, the delay has the following bounds. \[2t_{r}[2d_{i}+\max_{x\in N_{2}(i)}\{d_{x}\}]+3t_{p}\\ \leq\Delta_{tot,2}\leq\\ 2t_{r}[2d_{i}+\sum_{x\in N_{2}(i)}d_{x}]+3t_{p} \tag{11}\] Following the same logic, we can generalize the above bounds to \(l\) hops, since an \(l\)-hop message received at node \(i\) is an \((l-1)\)-hop message received at \(j\) and \(k\) and then forwarded to \(i\). \[2t_{r}[ld_{i}+\max_{x\in N_{l}(i)}\{d_{x}\}]+(l+1)t_{p}\\ \leq\Delta_{tot,l}\leq\\ 2t_{r}[ld_{i}+\sum_{x\in N_{l}(i)}d_{x}]+(l+1)t_{p} \tag{12}\] where \(N_{l}(i)\) denotes the set of nodes exactly \(l\) hops away from node \(i\). In an \(L\)-hop computational graph, the hop-1 messages will be exchanged \(L\) times, the hop-2 messages will be exchanged \(L-1\) times, and so on. So, the total \(L\)-hop delay (\(\Delta_{tot,L}\)) is within the following bounds. \[\sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\big[ld_{i}+\max_{x\in N_{l}(i)}\{d_{x}\}\big]+(l+1)t_{p}\Big)\\ \leq\Delta_{tot,L}\leq\\ \sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\big[ld_{i}+\sum_{x\in N_{l}(i)}d_{x}\big]+(l+1)t_{p}\Big) \tag{13}\] These bounds grow quadratically with the number of hops.
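As a sanity check, the bounds in Eq. (13) can be evaluated numerically; the sketch below does so for the toy computational graph of Fig. 10 (node \(i\) of degree 2 with 1-hop neighbors \(j\) and \(k\) of degrees 4 and 5). The timing values and the assumption that the 2-hop leaves have degree 1 are illustrative.

```python
# A sketch that evaluates the Theorem 1 bounds, Eq. (13), on the toy
# computational graph of Fig. 10. Timing values t_r, t_p are placeholders.
def delay_bounds(L, d_i, hop_degrees, t_r, t_p):
    """hop_degrees[l] lists the degrees of the l-hop neighbors of node i."""
    lower = upper = 0.0
    for l in range(1, L + 1):
        w = L - l + 1  # hop-l messages are exchanged (L - l + 1) times
        degs = hop_degrees[l]
        lower += w * (2 * t_r * (l * d_i + max(degs)) + (l + 1) * t_p)
        upper += w * (2 * t_r * (l * d_i + sum(degs)) + (l + 1) * t_p)
    return lower, upper

# Node i has degree 2; its 1-hop neighbors j, k have degrees 4 and 5
# (counting the edge to i). The 2-hop leaves are assumed to have degree 1.
hop_degrees = {1: [4, 5], 2: [1] * 7}
lo, hi = delay_bounds(L=2, d_i=2, hop_degrees=hop_degrees, t_r=7e-3, t_p=1e-3)
print(f"bounds: [{lo*1e3:.1f} ms, {hi*1e3:.1f} ms]")  # [273.0 ms, 469.0 ms]
```

As expected, the gap between the bounds widens with denser neighborhoods, since the lower bound keeps only the maximal degree per hop while the upper bound sums all of them.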
2309.16335
End-to-end Risk Prediction of Atrial Fibrillation from the 12-Lead ECG by Deep Neural Networks
Background: Atrial fibrillation (AF) is one of the most common cardiac arrhythmias that affects millions of people each year worldwide and it is closely linked to increased risk of cardiovascular diseases such as stroke and heart failure. Machine learning methods have shown promising results in evaluating the risk of developing atrial fibrillation from the electrocardiogram. We aim to develop and evaluate one such algorithm on a large CODE dataset collected in Brazil. Results: The deep neural network model identified patients without indication of AF in the presented ECG but who will develop AF in the future with an AUC score of 0.845. From our survival model, we obtain that patients in the high-risk group (i.e. with the probability of a future AF case being greater than 0.7) are 50% more likely to develop AF within 40 weeks, while patients belonging to the minimal-risk group (i.e. with the probability of a future AF case being less than or equal to 0.1) have more than 85% chance of remaining AF free up until after seven years. Conclusion: We developed and validated a model for AF risk prediction. If applied in clinical practice, the model possesses the potential of providing valuable and useful information in decision-making and patient management processes.
Theogene Habineza, Antônio H. Ribeiro, Daniel Gedon, Joachim A. Behar, Antonio Luiz P. Ribeiro, Thomas B. Schön
2023-09-28T10:47:40Z
http://arxiv.org/abs/2309.16335v1
# End-to-end Risk Prediction of Atrial Fibrillation from the 12-Lead ECG by Deep Neural Networks ###### Abstract **Background:** Atrial fibrillation (AF) is one of the most common cardiac arrhythmias that affects millions of people each year worldwide and it is closely linked to increased risk of cardiovascular diseases such as stroke and heart failure. Machine learning methods have shown promising results in evaluating the risk of developing atrial fibrillation from the electrocardiogram. We aim to develop and evaluate one such algorithm on a large CODE dataset collected in Brazil. **Methods:** We used the CODE cohort to develop and test a model for AF risk prediction for individual patients from the raw ECG recordings without the use of additional digital biomarkers. The cohort is a collection of ECG recordings and annotations by the Telehealth Network of Minas Gerais, in Brazil. A convolutional neural network based on a residual network architecture was implemented to produce class probabilities for the classification of AF. The probabilities were used to develop a Cox proportional hazards model and a Kaplan-Meier model to carry out survival analysis. Hence, our model is able to perform risk prediction for the development of AF in patients without the condition. **Results:** The deep neural network model identified patients without indication of AF in the presented ECG but who will develop AF in the future with an AUC score of 0.845. From our survival model, we obtain that patients in the high-risk group (i.e. with the probability of a future AF case being greater than 0.7) are 50% more likely to develop AF within 40 weeks, while patients belonging to the minimal-risk group (i.e. with the probability of a future AF case being less than or equal to 0.1) have more than 85% chance of remaining AF free up until after seven years. **Conclusion:** We developed and validated a model for AF risk prediction. If applied in clinical practice, the model possesses the potential of providing valuable and useful information in decision-making and patient management processes. **Keywords:** Atrial fibrillation; Deep neural network; ECG; Risk prediction; Survival analysis ## Introduction Atrial fibrillation (AF) is progressively more common worldwide within an ageing population [1]. It is associated with adverse outcomes such as cognitive impairment and can lead to more severe heart diseases if not treated early. Previous studies have found a close link between AF and increased risk of death [2] and heart-related complications, such as stroke and heart failure [3, 4, 5]. Good assessment of patient risk can allow more frequent monitoring and facilitate early diagnosis. Early detection of the problem might allow starting anticoagulation treatment and help prevent death and disability. The electrocardiogram (ECG) is a convenient, fast, and affordable option used at many hospitals, clinics, and primary and specialised health centres to diagnose many types of cardiovascular diseases. Over the past 50 years, computer-assisted tools have complemented physician interpretation of ECGs. Notably, the realm of deep learning has emerged as a promising avenue to enhance automated ECG analysis, showcasing impressive strides in recent years [6, 7, 8]. Prior studies have predominantly explored the use of deep neural networks (DNNs) to automatically detect AF and other cardiac arrhythmias from standard 12-lead ECGs [9, 10, 11].
This advancement holds valuable implications for clinical decision support, offering auxiliary tools for diagnosing cardiac arrhythmias. However, while achieving consistent diagnoses in patients, even among those with established conditions, is an essential aspect, there remains a parallel need for systems that yield timely and early warnings for patients who are prospectively at risk of developing AF. Combining the features obtained from DNNs with survival methods is a promising approach for accurate risk prediction. Recent studies explored this approach for the risk prediction of heart diseases [12] and mortality [13, 14]. The risk prediction of AF from the 12-lead ECG has been studied before with different approaches and varying degrees of success. Raghunath et al. [15] used DNNs on a dataset collected over 30 years to directly predict new-onset AF within one year and identified the patients at risk of AF-related stroke among those predicted to be at high risk of impending AF. The authors in [16] focused on predicting future AF incidents and the time to the event but used a DNN model trained on a different dataset, and the survival analysis spanned a longer period. From our group, Zvuloni et al. [17] performed end-to-end AF risk prediction from the 12-lead ECG but did not go further to implement survival modelling and estimate the time to the AF event. Further, Biton et al. [18] present a model that used digital biomarkers in combination with deep representation learning to predict the risk of AF. Their model uses a random forest classifier that includes features from a pre-trained DNN whose weights are kept fixed from a different ECG classification task. The aim of our work is to bridge the gap between these studies. While these previous studies focused either on directly predicting future AF cases within a given time frame or incorporated DNNs trained on disparate datasets for survival modelling, there exists no comprehensive approach that synergizes the capabilities of DNNs in AF diagnosis with the precision of survival analysis techniques for estimating time-to-event outcomes. In contrast, our approach combines both of these aspects: firstly, by employing an end-to-end trained DNN to assess the risk of AF development, and secondly, by utilizing the DNN's output to construct a time-to-event model that forecasts the occurrence of AF from the date of the ECG examination. We demonstrate the effectiveness of the method, which offers accurate prognostic insights into AF occurrences. Further, we release the implementation code and trained weights to facilitate future studies. ## Methods ### The dataset The model development and testing were conducted using the CODE (Clinical Outcomes in Digital Electrocardiology) dataset [19]. The CODE dataset consists of 2,322,465 12-lead ECG records from 1,558,748 different patients. The ECG records were collected in 811 counties in the state of Minas Gerais, Brazil by a public telehealth system, the Telehealth Network of Minas Gerais (TNMG), between 2010 and 2017. A detailed description of the recordings and the labelling process for each ECG exam of the CODE dataset can be found in [11]. Information about the patients was recorded together with their ECG tracings. The average age of the patients (considering each exam separately) is 53.6 years, with a median of 54 years and a standard deviation of 17.4 years. To analyse the natural history of patients with regard to AF, we identified patients who recorded multiple ECG exams.
The distribution of the number of visits for each patient during a period of eight years is depicted in Supplementary Material Figure S.1. As the figure shows, the majority of patients recorded only a single ECG exam (1,104,588 patients). 285,685 patients performed two visits each, while the remaining 168,475 patients recorded ECG exams more than twice. The number of medical visits undertaken by each patient was taken into consideration when classifying the exams into different classes, as discussed in the problem formulation. The ECG signals are between 7 and 10 seconds long and recorded at sampling frequencies ranging from 300 to 600 Hz. The ECG records were re-sampled at 400 Hz to generate between 2800 and 4000 temporal samples. All ECGs are zero-padded to obtain a uniform size of 4096 samples for each ECG lead, which are then used as input to the convolutional model. The labels for AF in the CODE dataset were extracted from the text report produced by the expert who looked at the ECGs. To improve the quality of the annotations, some exams were reviewed by doctors; in this case, disagreement with the labels produced by the University of Glasgow automatic diagnosis software was used to select the exams to be reviewed. The procedure is described in detail in [11]. ### Problem formulation The study considered patients in the CODE database with at least two ECG exams or that have AF. Patients were classified into three groups (NoAF, BaselineAF, FutureAF) according to the presence or absence of a record with the AF condition and whether the record with AF is the baseline one or not. The ECG exams from the patients were classified into three different classes, focusing on patients who undertook multiple exams.

Figure 1: Diagram of patient groups and exam categories.

The classification process, which is illustrated in Figure 1, is detailed as follows: * _NoAF Class:_ all ECG exams from patients who recorded multiple exams without presenting an AF abnormality. We exclude the last exam for each patient and exams recorded within one week from the last exam. * _WithAF Class:_ combines all ECG exams that exhibit the AF condition. * _FutureAF Class:_ groups the normal ECG exams from patients who had normal ECG exams at the beginning but who were diagnosed with the AF condition in a follow-up exam. The retained records were made before the patients were first diagnosed with the AF condition. We exclude all subsequent normal exams after the first positive case, and exams made within one week before this case. The one-week threshold was set so that we do not have to deal with paroxysmal atrial fibrillation cases, which are brief episodes of atrial fibrillation that usually stop within 24 hours and may last up to a week. We are interested in using predictions of the FutureAF class for predicting the long-term risk of AF; hence, we consider that exams should be separated by at least one week to count as follow-up exams. Hence, ECG exams recorded within one week before the first exam with the AF condition were not added to FutureAF. Similarly, exams for which we do not follow the patient for longer than one week were not added to NoAF. We used the remaining exams for developing and testing the model. In the final dataset, 637,514 exams (92.17%) belong to the class _NoAF_; 41,851 (6.05%) to the class _WithAF_; and 12,280 (1.78%) to the class _FutureAF_. This final dataset was split uniformly at random and by patient into a train set, a validation set and a test set.
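A patient-grouped split of this kind can be sketched with scikit-learn's `GroupShuffleSplit`, as below; the function and variable names are illustrative, and this is not the exact splitting code used in the study.

```python
# A minimal sketch of a patient-level 60/10/30 split: all exams of one
# patient land in the same partition. Names are illustrative assumptions.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(exam_ids, patient_ids, seed=0):
    exam_ids = np.asarray(exam_ids)
    patient_ids = np.asarray(patient_ids)
    # First peel off 30% of the patients for the test set.
    gss = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=seed)
    dev_idx, test_idx = next(gss.split(exam_ids, groups=patient_ids))
    # Then split the remaining 70% into train (60%) and validation (10%).
    gss2 = GroupShuffleSplit(n_splits=1, test_size=10 / 70, random_state=seed)
    tr_idx, val_idx = next(gss2.split(exam_ids[dev_idx],
                                      groups=patient_ids[dev_idx]))
    return dev_idx[tr_idx], dev_idx[val_idx], test_idx
```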
60% of the data were allocated for training, 10% for validation and 30% for testing. Splitting the data into train and validation sets as we have done is common for large datasets such as ours, because cross-validation becomes computationally expensive [9, 11, 20]. The split was performed so that all ECG records belonging to one patient ended up in the same subset. ### DNN architecture and training The DNN architecture in this study was based on a deep residual neural network implemented in previous studies [11, 13]. The neural network consists of a convolutional layer followed by five residual blocks and ends with a fully connected (dense) layer that passes its output to a softmax to obtain probabilities for the three classes NoAF, WithAF and FutureAF, which sum to one. While the focus is on predicting the class FutureAF from ECG exams without the AF condition, we kept the exams belonging to the class WithAF to improve the performance of the model. Hence, the developed model also has the capability of performing automatic AF diagnosis. The DNN model was trained by minimising the average cross-entropy loss using the Adam optimiser [21]. Default parameters were used, with a weight decay of \(5\cdot 10^{-4}\) to regularise the model. As the results obtained in [11, 13] were satisfactory, this study kept most of the selected hyperparameters from these studies; hence, no further hyperparameter tuning was performed. The initial learning rate was \(10^{-3}\) and was reduced by a factor of 10 whenever the validation loss remained without improvement for 7 consecutive epochs. The dropout rate was manually tuned between 0.8 and 0.5, with the latter value resulting in improved performance. The training was performed until the minimum learning rate of \(10^{-7}\) was reached or for a maximum of 70 epochs. As a form of early stopping, we save and use as the final model the one with the best validation results (i.e. minimum validation loss) during the optimisation process. Despite the pronounced class imbalance, we abstain from employing strategies like over- or under-sampling to mitigate it. Over-sampling risks overfitting the minority class, while under-sampling discards numerous majority samples. Since our emphasis lies not on threshold-dependent metrics like accuracy, but rather on utilising the resulting class probabilities for the survival model, the class imbalance becomes less influential. ### Model evaluation and metrics After the training process, the performance of the DNN model was evaluated on the test data using classification evaluation metrics: sensitivity, positive predictive value (PPV), specificity, false positive rate, \(F_{1}\)-score, the Receiver Operating Characteristic (ROC) curve, the Area Under the Receiver Operating Characteristic Curve (AUC-ROC), the Precision-Recall curve and the Average Precision (AP) score. This study first evaluated the performance of the model on the task of classifying the three groups NoAF, WithAF and FutureAF, based on the class probabilities from the DNN model. We plotted the ROC curves, the precision-recall curves and the confusion matrix, and computed the AUC and AP scores for each class. Next, an evaluation of the model considering only the FutureAF class and the NoAF class was performed to assess the ability of the model to distinguish normal exams within the two classes, that is, to evaluate how the model performs at AF risk prediction for patients without AF. For this task, samples labelled as the WithAF class were removed.
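The optimisation recipe described above maps directly onto standard PyTorch components. The sketch below is illustrative only: a tiny stand-in model and a single synthetic batch replace the released training script, but the optimiser, schedule, stopping rule and epoch budget follow the stated setup:

```python
import torch
from torch import nn

# Stand-in for the residual ECG network (hypothetical, for illustration only).
model = nn.Sequential(nn.Conv1d(12, 16, 17, padding=8), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=7, min_lr=1e-7)
loss_fn = nn.CrossEntropyLoss()   # cross-entropy over the three classes

x = torch.randn(8, 12, 4096)      # batch of 12-lead, 4096-sample ECGs
y = torch.randint(0, 3, (8,))     # NoAF / WithAF / FutureAF targets
best_val = float("inf")
for epoch in range(70):           # at most 70 epochs
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    with torch.no_grad():         # real code: evaluate on the validation set
        val_loss = loss_fn(model(x), y).item()
    scheduler.step(val_loss)      # lr /= 10 after 7 epochs without improvement
    if val_loss < best_val:       # early stopping: keep the best weights
        best_val = val_loss
        torch.save(model.state_dict(), "best_model.pt")
    if optimizer.param_groups[0]["lr"] <= 1e-7:
        break                     # minimum learning rate reached
```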
The class probabilities for the NoAF class and for the FutureAF class were normalised for each instance to sum to one. Lastly, a probability threshold that maximises the \(F_{1}\)-score for the NoAF and FutureAF classes was selected, and the threshold-based metrics, namely sensitivity, PPV, specificity and \(F_{1}\)-score, were computed. The threshold was obtained using the validation set, while all metrics, including the plots, were measured using the test set. ### Time-to-event models This study considers non-parametric and semi-parametric methods for time-to-event prediction. Patients in the test set belonging to the class NoAF (191,665 recordings, 116,255 unique patients) and the class FutureAF (3691 recordings, 2016 unique patients) were considered for the time-to-event prediction. We used the Kaplan-Meier method [22] and Cox proportional hazards (PH) models [23]. The Kaplan-Meier method [22] (also referred to as the product-limit method) is a non-parametric method that provides an empirical estimate of the survival probability at a specific survival time using the actual sequence of the event times. As with other non-parametric methods, the advantage of the Kaplan-Meier method is that it allows for the analysis without distributional assumptions. The Cox PH model [23], on the other hand, allows us to adjust for different covariates and hence is also of interest for the analysis. Cox PH models are the most commonly used semi-parametric models for survival analysis. The model assumes that the covariates have an exponential influence on the hazard: the log-hazard of an individual is the sum of the log of the population-level baseline hazard and a linear function of the corresponding covariates. We provide two analyses for the Cox PH model: in one analysis we adjust the model for age and gender, and in a second analysis we adjust the model for comorbidities in addition to age and gender. We consider 16 variables that were recorded during a patient visit, which include comorbidities, cardiovascular risk factors and cardiovascular drug usage, namely: use of diuretics, beta-blockers, converting enzyme inhibitors, amiodarone, or calcium blockers, obesity, diabetes mellitus, smoking, previous myocardial revascularization, family history of coronary heart disease, previous myocardial infarction, dyslipidemia, chronic kidney disease, chronic lung disease, Chagas disease, and arterial hypertension. The _observation time_ \(T\) is given in weeks. During the development of the Cox PH model, patients were subdivided into four groups according to the probability output of the DNN: \([0,0.1)\), \([0.1,0.4)\), \([0.4,0.7)\) and \([0.7,1.0]\). The study used the first group of patients, having a predicted probability of less than \(0.1\), as a reference and produced hazard ratios for the remaining groups. For the Kaplan-Meier model, patients were grouped according to the same intervals: \([0,0.1)\), \([0.1,0.4)\), \([0.4,0.7)\) and \([0.7,1.0]\). We used the lifelines Python library [24]. ## Results We developed a model to predict whether a patient belongs to the classes NoAF, WithAF or FutureAF. Our results for the classification task are available in the supplementary material. Since our ultimate goal is to predict the risk of a future AF event, we present here the ability of the model to predict the class FutureAF and the results from the survival analysis. ### AF risk prediction and survival analysis The DNN model outputs class probabilities for the three classes.
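As a brief illustration of the threshold selection described in the Methods, the sketch below renormalises the two class probabilities and picks the cut-off that maximises the \(F_{1}\)-score on a validation set. The synthetic scores stand in for real model outputs, and scikit-learn is assumed:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, 5000)                     # 1 = FutureAF, 0 = NoAF
p_no = np.clip(rng.normal(0.6 - 0.3 * y_val, 0.2, 5000), 1e-6, 1)
p_future = np.clip(rng.normal(0.2 + 0.3 * y_val, 0.2, 5000), 1e-6, 1)
p_future = p_future / (p_no + p_future)              # renormalise to sum to 1

precision, recall, thresholds = precision_recall_curve(y_val, p_future)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best = np.argmax(f1[:-1])          # last PR point has no associated threshold
print(f"optimal threshold = {thresholds[best]:.4f}, F1 = {f1[best]:.3f}")
```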
In a first analysis, we excluded exams from the class WithAF in order to study the ability of the model to distinguish between FutureAF and NoAF. We compute the performance metrics using the probability of FutureAF against that of NoAF. In Table 1 we display the confusion matrix, where the predicted values are compared against the true values. In Figure 2 we show the ROC curve and the AUC-ROC score obtained for this case. The AUC-ROC score was equal to 0.845, which shows that the model can discriminate between the two classes. Figure 3 displays the PR curves and the calculated average precision (AP) scores. The AP score for the class FutureAF was quite small (\(\text{AP}=0.22\)) and its PR curve had a low area under the curve. This suggests that the model is unable to provide both high sensitivity and high PPV at once for exams in the class FutureAF. An option for applying the model to the two-class prediction task is to select a threshold that maximises the \(F_{1}\)-score, i.e. one putting equal weight on sensitivity and PPV. The threshold was computed using the validation set and was applied to the classification task for both the validation set and the test set. The obtained optimal probability threshold was equal to 0.1043, and the corresponding performance metrics are shown in Table 2. All the metrics consider the class FutureAF as the positive class. The sensitivity and PPV values on the test set are 0.322 and 0.247, respectively. In contrast, the specificity is very high (0.981), which is mainly due to class imbalance. The class probabilities from the DNN model belonging to the class FutureAF were used to develop survival models. Two Cox PH models were implemented, one adjusted for age and gender, and another adjusted for comorbidities in addition to age and gender. Table 3 shows the hazard ratios of patients whose probabilities for the class FutureAF belong to one of the groups (0.1-0.4], (0.4-0.7] and (0.7-1.0], taking patients in the group (0.0-0.1] as a reference. As the table indicates, moving from a lower probability range to a higher one, the hazard of developing AF also increases. Considering the Cox PH model adjusted for age and gender plus comorbidities, the probability range (0.7-1.0] had the highest hazard ratio, equal to 40.869 (95% CI: \(32.83-50.87\)). During the model assessment, however, some covariates (the three probability ranges in this case) failed the proportional hazards test, i.e., the null hypothesis of proportional hazards was rejected. This led the study to use a non-parametric model for further survival analyses. A Kaplan-Meier approach was used to this end. The survival curves generated through the Kaplan-Meier estimator are displayed in Figure 4. Note that, in the context of our study, survival time refers to the time to the event, namely the development of AF, and not to actual mortality-related survival. Therefore, survival probability refers to the likelihood that no event occurs. The shaded area highlights the 95% confidence interval of the survival probability at different survival times (exponential Greenwood confidence intervals were used [25]). \begin{table} \begin{tabular}{l c|c c} & & \multicolumn{2}{c}{**Predicted Value**} \\ & & NoAF & FutureAF \\ \hline **True** & NoAF & 188 606 & 3 059 \\ **Value** & FutureAF & 2 584 & 1 107 \\ \hline \end{tabular} \end{table} Table 1: Confusion matrix. Figure 2: The ROC curves and AUC scores for the FutureAF class versus the NoAF class (AUC-ROC \(=0.845\)).
Patients within the lowest risk group maintained survival probabilities greater than 0.8 during the study period of about seven years. The survival probability is reduced at a higher rate moving from patients in a lower probability range to patients in a higher probability range. The median survival times for patients in the probability groups \((0.0-0.1]\), \((0.1-0.4]\), \((0.4-0.7]\) and \((0.7-1.0]\) are infinity, 248, 82 and 40 weeks, respectively. The median time without developing AF defines the point in time at which, on average, 50% of the patients in a group would have developed the condition. That means, for example, that patients in the first cohort (probability range \((0.0-0.1]\)) have a more than 50% chance of not developing AF within the seven-year period, while patients in the last cohort (probability range \((0.7-1.0]\)) are 50% likely to develop AF within 40 weeks (less than a year). A table below the survival curve in Figure 4 shows the number of patients at risk, censored patients (i.e. no further follow-up, or the event time is beyond the study period) and patients with AF at different time intervals (50 weeks per interval). Taking the event times 0 and 50 weeks as an example, for patients within the probability range \((0-0.1]\) the number of patients at risk was 129,369 (68%), censored cases numbered 60,091, and 794 (0.42%) AF events were recorded after 50 weeks; for patients within the probability range \((0.7-1.0]\) the number of patients at risk was 61 (33.7%), censored cases numbered 26, and 94 (51.9%) AF events were recorded. This again provides an estimate of the time to event for patients in different risk groups. \begin{table} \begin{tabular}{l c c} \hline & Validation & Test (CI 95\%) \\ \hline Sensitivity & 0.315 & 0.322 (\(\pm\) 0.016) \\ PPV & 0.250 & 0.247 (\(\pm\) 0.012) \\ Specificity & 0.982 & 0.981 (\(\pm\) 0.001) \\ F1-score & 0.279 & 0.280 (\(\pm\) 0.012) \\ \hline \end{tabular} \end{table} Table 2: Performance metrics on the task of predicting the class FutureAF versus NoAF. Figure 3: The precision-recall curves and AP scores for the FutureAF class versus the NoAF class. Recall denotes the sensitivity, and precision denotes the positive predictive value. Figure 4: Survival curves for the different cohorts based on their probability range using the Kaplan-Meier model. \begin{table} \begin{tabular}{l c c c c} \hline \hline Adjusted for: & Probability Group & Hazard Ratio & CI 95\% & P-value \\ \hline & (0.1, 0.4] & 4.060 & 3.77 - 4.37 & \(<0.005\) \\ Age and sex & (0.4, 0.7] & 20.609 & 17.11 - 24.82 & \(<0.005\) \\ & (0.7, 1.0] & 42.339 & 33.99 - 52.74 & \(<0.005\) \\ \hline Age, sex, risk factors & (0.1, 0.4] & 3.995 & 3.71 - 4.30 & \(<0.005\) \\ comorbidities, & (0.4, 0.7] & 20.444 & 16.98 - 24.62 & \(<0.005\) \\ \& drug usage\({}^{*}\) & (0.7, 1.0] & 40.869 & 32.83 - 50.87 & \(<0.005\) \\ \hline \end{tabular} \({}^{*}\)**We adjust for the following comorbidities, cardiovascular risk factors, and drug usage:** use of diuretics, beta-blockers, converting enzyme inhibitors, amiodarone, or calcium blockers, obesity, diabetes mellitus, smoking, previous myocardial revascularization, family history of coronary heart disease, previous myocardial infarction, dyslipidemia, chronic kidney disease, chronic lung disease, Chagas disease, arterial hypertension. \end{table} Table 3: Hazard ratios for different probability groups from the Cox PH model.
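A minimal sketch of the survival modelling reported above, using the lifelines library mentioned in the Methods. The data here are synthetic stand-ins for the per-patient follow-up time, event indicator and DNN FutureAF probability; the column names are hypothetical:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(0)
n = 2000
p = rng.uniform(0, 1, n)                      # DNN FutureAF probability
T = rng.exponential(300 / (1 + 20 * p))       # follow-up time in weeks
E = (rng.uniform(size=n) < 0.4).astype(int)   # 1 = AF observed, 0 = censored
df = pd.DataFrame({"T": T, "E": E,
                   "age": rng.normal(54, 17, n), "sex": rng.integers(0, 2, n)})
df["group"] = pd.cut(p, bins=[0.0, 0.1, 0.4, 0.7, 1.0], include_lowest=True)

kmf = KaplanMeierFitter()                     # one survival curve per group
for g, sub in df.groupby("group", observed=True):
    kmf.fit(sub["T"], event_observed=sub["E"], label=str(g))
    print(g, kmf.median_survival_time_)       # median time without the event

# Cox PH model adjusted for age and sex; the (0, 0.1] group is the reference
# (dropped by drop_first), and hazard ratios are exp(coef).
cox_df = pd.get_dummies(df, columns=["group"], drop_first=True, dtype=float)
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="T", event_col="E")
cph.print_summary()
```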
## Discussion ### DNN model performance The DNN model produced a good AUC score for the class FutureAF, suggesting its potential for predicting this class. The actual ability to predict the class FutureAF was, however, tempered by the AP score obtained for this class (\(\text{AP}=0.22\)). This low score reveals the difficulty of predicting this class and suggests that there would be many false positive cases (incorrectly predicted as the class FutureAF) regardless of the threshold. Regarding the risk prediction task (normal ECG exams in FutureAF vs NoAF), the DNN model produced lower sensitivity and PPV values, as shown in Table 2 (the probability threshold here maximises the \(F_{1}\)-score). However, the specificity was as high as 0.982. This indicates that most of the exams predicted as negative are truly negative and that there would be very few false positive cases. Hence, the information from this prediction task can be of value during screening of a large population, i.e. one can consider that among the individuals predicted as negative, approximately 1.8% are at risk of developing AF. ### Survival analysis The survival analysis implemented in this study provided additional and valuable information about the risk level and an estimate of the time to the event of developing an AF condition. The Cox PH model produced the hazard ratios for patients belonging to four different probability groups, taking the group with the lowest risk as a reference. The Cox PH model failed the proportional hazards test; still, it provides insight into the risk level incurred by patients in different groups. As stated in [24], a model that does not meet the proportional hazards assumption can still be useful for performing prediction (e.g. predicting survival times) as opposed to making inferences. Recent work also suggests that virtually all real-world clinical datasets will violate the proportional hazards assumption if sufficiently powered, and that statistical tests for the proportional hazards assumption may be unnecessary [26]. To understand the influence of a class probability group on the survival duration, a Kaplan-Meier model was implemented. The results showed that patients in the highest risk group (FutureAF class probability range of \((0.7-1.0]\)) were approximately 60% likely to develop AF within one year, compared with less than 15% of patients in the minimal risk group (FutureAF class probability range of \((0.0-0.1]\)) who would develop the condition within the complete time span of seven years. These findings demonstrate the ability of the DNN model to identify patients with impending AF and to stratify them by risk level. Compared to the study in [18], which used digital biomarkers from the raw 12-lead ECG, clinical information and features from deep representation learning to make AF risk predictions, our approach learns predictive features directly from the raw ECG signal, precluding the need to extract any biomarker and thereby simplifying the ECG processing pipeline. It is also worth mentioning that the median survival time obtained in [18] is more than two years for patients in the probability group \((0.8-1.0]\).
Even though the methods used to produce the survival curves differ (Cox PH model versus Kaplan-Meier), as does the classifier (random forest versus a neural network with a softmax output), the results of [18] appear less alarming than those of this work, where 50% of patients in the probability group \((0.7-1.0]\) are likely to develop AF within 40 weeks (less than one year). These methodological differences may partly explain the difference in median survival times. ### Clinical implications Patients with clinical AF who are not taking anticoagulant medication have an elevated risk of stroke, and the strokes caused by AF are more severe than those from other causes [27]. AF does not always cause symptoms, and for roughly 20% of patients, stroke is the first manifestation of AF [28]. Thus, there is a lot of interest in detecting cases of AF before the occurrence of a stroke, either by systematic screening for asymptomatic AF [29] or, more recently, by the recognition of those in sinus rhythm who will develop AF in the future [9, 17, 18, 30, 31]. Among the risk scores that use clinical variables, the CHARGE-AF risk score is one of the most accurate and well-validated, and it uses variables readily available in primary care settings [30]. A recent review of risk scores based on clinical variables for the prediction of AF [31] found that 14 different scores are potentially useful, with AUC-ROC values between 0.65 and 0.77 for the general population, with the best results for the CHARGE-AF and MHS scores. Risk scores based on standard 12-lead ECGs are a promising tool considering both practical and technical questions [9, 17, 18]. Reported studies, including ours, showed much higher discrimination capacity, with AUC-ROC values over 0.85. Since ECGs are routinely performed in most subjects at risk, i.e., those older than 60 years, the prediction can be obtained automatically, without the need to input variables into a risk calculator. In this study, we also provide semi-parametric and non-parametric time-to-event models that might help inform doctors about the development of the disease for each group of patients. The model was tested in cases where the disease could be observed up to seven years after the examination, providing a more complete picture for the use of this model in clinical practice. The ability to accurately recognise patients who have a high chance of developing AF may allow intensified surveillance of those patients, with early recognition of the appearance of AF. In this case, the early institution of anticoagulant treatment could prevent the drastic event of a stroke and change the natural history of this condition. Moreover, new therapies to prevent AF could be developed and used to prevent not only stroke but potentially the whole set of complications related to the appearance of AF. All these clinical applications of the method deserve to be tested in controlled clinical trials, but preliminary prospective studies confirmed that AI-augmented ECG analysis could be helpful, at least, to recognise those at higher risk of developing AF [32]. ### Limitations One limitation lies in the dataset used for model development and testing.
Many of the patients who were considered all-time normal (without AF during the whole data collection period) dropped out of follow-up before the study period ended or had a relatively short time interval between their first and last ECG records. Therefore, it is impossible to tell with certainty whether an individual was at no risk of developing AF within seven years. Censored data are commonplace in survival analysis; for standard supervised learning, however, an ideal dataset would consist of patients who recorded ECG exams regularly throughout the considered study period. Moreover, we do not demonstrate that this approach outperforms existing clinical scores such as CHARGE-AF [30]. Similar to a statement in [18], during data selection there was a bias towards individuals who had a cardiac disease or a forthcoming heart condition, since all the patients considered had attended multiple medical visits. The AF label is also solely based on the ECG analysis. This label might contain errors from medical mistakes and from problems in the extraction of the label (see [11] for a more complete discussion of the labelling process). Consequently, some FutureAF exams might be AF cases that were previously missed during the ECG analysis. Finally, the model is developed and tested solely on patients from Brazil, and external validation in other cohorts is needed to verify the efficiency of the model in other populations. ## Conclusion This study employed ResNet-based convolutional DNNs for end-to-end AF risk prediction from 12-lead ECG signals. The trained DNN effectively identified ECG signal changes indicative of AF development, facilitating risk prediction and survival analysis. By integrating DNN probabilities into Cox PH and Kaplan-Meier models, hazard ratios and survival functions were derived, stratifying patients based on risk levels. This model holds promise for clinical application, aiding AF risk stratification and informing clinical decisions. Further validation is imperative to confirm predictive performance. Future research should encompass external validation on diverse datasets, preferably from distinct geographic populations, to assess model usability across different groups. Exploring the model's potential in identifying AF-related stroke risks is another avenue, considering the established AF-stroke connection [4, 5]. Additionally, extending this approach to predict other arrhythmias and cardiovascular diseases is a plausible direction for further development. #### Ethical approval This study complies with all relevant ethical regulations. The CODE Study was approved by the Research Ethics Committee of the Universidade Federal de Minas Gerais, protocol 49368496317.7.0000.5149. Since this is a secondary analysis of anonymized data stored in the TNMG, informed consent was not required by the Research Ethics Committee for the present study. ### Declaration of interests There are no competing interests. #### Funding This research is financially supported by the _Wallenberg AI, Autonomous Systems and Software Program (WASP)_ funded by the Knut and Alice Wallenberg Foundation, and by the _Kjell och Marta Beijer Foundation_. ALPR is supported in part by CNPq (465518/2014-1, 310790/2021-2 and 409604/2022-4) and by FAPEMIG (PPM-00428-17, RED-00081-16 and PPE-00030-21). ALPR received a Google Latin America Research Award scholarship. JB acknowledges the support of the Technion EVPR Fund: Hitman Family Fund and Grant No. ERANET-3-16881 from the Israeli Ministry of Health.
The funders had no role in the study design; collection, analysis, and interpretation of data; writing of the report; or the decision to submit the paper for publication. ### Data sharing The DNN model parameters that yield the results reported in this paper are available at ([https://zenodo.org/record/7038219#.Y9Phl4LMJNw](https://zenodo.org/record/7038219#.Y9Phl4LMJNw)). This should allow the reader to partially reproduce the results of this study. 15% of the CODE cohort (denoted CODE-15%) was also made openly available ([https://doi.org/10.5281/zenodo.4916206](https://doi.org/10.5281/zenodo.4916206)). Researchers affiliated with educational or research institutions may request access to the full CODE cohort. Requests should be made to the corresponding author of this paper; they will be forwarded and considered on an individual basis by the Telehealth Network of Minas Gerais. An estimate of the time needed for data access requests to be evaluated is three months. If approved, any data use will be restricted to non-commercial research purposes. The data will only be made available upon the execution of appropriate data use agreements. ### Code availability The code for the model training, evaluation and statistical analysis is available at the GitHub repository [https://github.com/mygithub27/af-risk-prediction-by-ecg-dnn](https://github.com/mygithub27/af-risk-prediction-by-ecg-dnn).
2304.00150
E($3$) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics
We contribute to the vastly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts. We benchmark two well-studied fluid flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow, and compare equivariant graph neural networks to their non-equivariant counterparts on different performance measures, such as kinetic energy or Sinkhorn distance. Such measures are typically used in engineering to validate numerical solvers. Our main findings are that while being rather slow to train and evaluate, equivariant models learn more physically accurate interactions. This indicates opportunities for future work towards coarse-grained models for turbulent flows, and generalization across system dynamics and parameters.
Artur P. Toshev, Gianluca Galletti, Johannes Brandstetter, Stefan Adami, Nikolaus A. Adams
2023-03-31T21:56:35Z
http://arxiv.org/abs/2304.00150v1
# E(3) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics ###### Abstract We contribute to the vastly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts. We benchmark two well-studied fluid flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow, and compare equivariant graph neural networks to their non-equivariant counterparts on different performance measures, such as kinetic energy or Sinkhorn distance. Such measures are typically used in engineering to validate numerical solvers. Our main findings are that while being rather slow to train and evaluate, equivariant models learn more physically accurate interactions. This indicates opportunities for future work towards coarse-grained models for turbulent flows, and generalization across system dynamics and parameters. ## 1 Particle-based fluid mechanics Navier-Stokes equations (NSE) are omnipresent in fluid mechanics, hydrodynamics or weather modeling. However, for the majority of problems, solutions are analytically intractable, and obtaining accurate predictions necessitates falling back to numerical solution schemes. Those can be split into two categories: grid/mesh-based (Eulerian description) and particle-based (Lagrangian description). **Smoothed Particle Hydrodynamics.** In this work, we investigate Lagrangian methods, more precisely the Smoothed Particle Hydrodynamics (SPH) approach, which was independently developed by Gingold & Monaghan (1977) and Lucy (1977) to simulate astrophysical systems. Since then, SPH has become established as the preferred approach in various applications ranging from free surfaces such as ocean waves (Violeau & Rogers, 2016), through fluid-structure interaction systems (Zhang et al., 2021), to selective laser melting in additive manufacturing (Weirather et al., 2019). Figure 1: Velocity magnitude of Taylor-Green vortex (a) and x-velocity of reverse Poiseuille (b). The main idea behind SPH is to represent the fluid properties at discrete points in space and to use truncated radial interpolation kernel functions to approximate them at any arbitrary location. The kernel functions are used to estimate state statistics which define continuum-scale interactions between particles. The justification for truncating the kernel support is the assumption of local interactions between particles. The resulting discretized equations are then integrated in time using numerical integration techniques such as the symplectic Euler scheme, by which the particle positions are updated. To generate training data, we implemented our own SPH solver based on the transport velocity formulation by Adami et al. (2013), which promises a homogeneous particle distribution over the domain. We then selected two flow cases, both of which are well known in the fluid mechanics community: the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow. We are planning to open-source the datasets in the near future. Taylor-Green Vortex. The Taylor-Green vortex system (TGV, see Figure 1 (a)) with a Reynolds number of \(\text{Re}=100\) is neither laminar nor turbulent, i.e. there is no layering of the flow (typical for laminar flows), but the small scales caused by vortex stretching also do not lead to a fully developed energy cascade (typical for turbulent flows) (Brachet et al., 1984).
The TGV has been extensively studied, starting with Taylor & Green (1937) and continuing all the way to Sharma & Sengupta (2019). The TGV system is typically initialized with a velocity field given by \[u=-\cos(kx)\sin(ky)\cos(kz)\;,\qquad v=\sin(kx)\cos(ky)\cos(kz)\;,\qquad w=0\;, \tag{1}\] where \(k\) is an integer multiple of \(2\pi\). The TGV datasets used in this work consist of 8/2/2 trajectories for training/validation/testing, where each trajectory comprises 8000 particles. Each trajectory spans 1s of physical time and was simulated with \(dt=0.001\), resulting in 1000 time steps per trajectory. The ultimate goal would be to learn the dynamics over much larger time steps than those taken by the numerical solver, but with this dataset we just want to demonstrate the applicability of learned approaches to reproducing numerical solver results. Reverse Poiseuille Flow. The Poiseuille flow, i.e. laminar channel flow, is another well-studied flow case in fluid mechanics. However, channel flow requires the treatment of wall-boundary conditions, which is beyond the focus of this work. In this work, we therefore consider data obtained by reverse Poiseuille flow (RPF, see Figure 1 (b)) (Fedosov et al., 2008), which essentially consists of two opposing streams in a fully periodic domain. These flows are exposed to opposite force fields, i.e., the upper and lower halves are accelerated in the negative \(x\) direction and the positive \(x\) direction, respectively. Because the flow is statistically stationary (the vertical velocity profile has a time-independent mean value), the RPF dataset consists of one long trajectory spanning 120s. The flow field is discretized by 8000 particles and simulated with \(dt=0.001\), followed by sub-sampling at every 10th step. Learning to directly predict every 10th state is what we call temporal coarse-graining. The resulting number of training/validation/testing instances is the same as for TGV, namely 8000/2000/2000. ## 2 (Equivariant) graph network-based simulators We first formalize the task of autoregressively predicting the next state of a Lagrangian fluid mechanics simulation, based on the notation from Sanchez-Gonzalez et al. (2020). Let \(\mathbf{X}^{t}\) denote the state of a particle system at time \(t\). One full trajectory of \(K+1\) steps can be written as \(\mathbf{X}^{t_{0:K}}=(\mathbf{X}^{t_{0}},\dots,\mathbf{X}^{t_{K}})\). Each state \(\mathbf{X}^{t}\) is made up of \(N\) particles, namely \(\mathbf{X}^{t}=(\mathbf{x}^{t}_{1},\mathbf{x}^{t}_{2},\dots,\mathbf{x}^{t}_{N})\), where each \(\mathbf{x}_{i}\) is the state vector of the \(i\)-th particle. The inputs to the learned simulator can span multiple time instances. Each node \(\mathbf{x}^{t}_{i}\) can contain node-level information like the current position \(\mathbf{p}^{t}_{i}\) and a time sequence of \(H\) previous velocity vectors \(\dot{\mathbf{p}}_{i}^{\,t_{k-H:k}}\), as well as global features like the external force field \(\mathbf{f}_{i}\) in the reverse Poiseuille flow. To build the connectivity graph, we use an interaction radius of \(\sim 1.5\) times the average interparticle distance. This results in around 10-20 one-hop neighbors. Graph Network-based Simulator. The Graph Network-based Simulator (GNS) framework (Sanchez-Gonzalez et al., 2020) is one of the most popular learned surrogates for engineering particle-based simulations.
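As a concrete footnote to the dataset description above, the initial condition of Eq. (1) takes only a few lines of NumPy. This sketch assumes a unit periodic box with \(k=2\pi\) and randomly placed particles; the actual datasets use the SPH solver's particle arrangement:

```python
import numpy as np

k = 2.0 * np.pi
pos = np.random.rand(8000, 3)                 # particle positions in [0, 1)^3
x, y, z = pos.T
u = -np.cos(k * x) * np.sin(k * y) * np.cos(k * z)
v = np.sin(k * x) * np.cos(k * y) * np.cos(k * z)
w = np.zeros_like(u)
vel = np.stack([u, v, w], axis=-1)            # divergence-free by construction
```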
The main idea of the GNS model is to use the established encoder-processor-decoder architecture (Battaglia et al., 2018) with a processor that stacks several message passing layers (Gilmer et al., 2017). One major strength of the GNS model lies in its simplicity, given that all its building blocks are simple MLPs. However, the performance of GNS when predicting long trajectories strongly depends on choosing the right amount of Gaussian noise to perturb the input data. Additionally, GNS and other non-equivariant models are less data-efficient (Batzner et al., 2022). For these reasons, we implement and tune GNS as a comparison baseline, and use it as an inspiration for which setup, features, and hyperparameters to use for equivariant models. Steerable E(3)-equivariant Graph Neural Network. Steerable E(3)-equivariant Graph Neural Networks (SEGNNs) (Brandstetter et al., 2022) are an instance of E(3)-equivariant GNNs, i.e., GNNs that are equivariant with respect to isometries of the Euclidean space (rotations, translations, and reflections). Most E(3)-equivariant GNNs that are tailored towards molecular property prediction tasks (Batzner et al., 2022; Batatia et al., 2022) restrict the parametrization of the Clebsch-Gordan tensor products to an MLP-parameterized embedding of pairwise distances. In contrast, SEGNNs use general steerable node and edge attributes which can incorporate any kind of physical quantity, and directly learn the weights of the Clebsch-Gordan tensor product. Indeed, extending methods such as NequIP (Batzner et al., 2022) towards general physical features would result in something akin to SEGNN. Steerable attributes strongly impact the Clebsch-Gordan tensor products, and thus finding physically meaningful edge and node attributes is crucial for good performance. In particular, we chose edge attributes \(\hat{a}_{ij}=V(\mathbf{p}_{ij})\), where \(V(\cdot)\) is the spherical harmonic embedding and \(\mathbf{p}_{ij}=\mathbf{p}_{i}-\mathbf{p}_{j}\) are the pairwise displacement vectors. We further choose node attributes \(\hat{a}_{i}=V(\tilde{\mathbf{p}}_{i})+\sum_{k\in\mathcal{N}(i)}\hat{a}_{ik}\), where \(\tilde{\mathbf{p}}_{i}\) are averaged historical velocities and \(\mathcal{N}(i)\) is the \(i\)-neighborhood. As for node and edge features, we found that concatenated historical velocities for the nodes and pairwise displacements for the edges best capture the Navier-Stokes dynamics. For training SEGNNs, we verified that adding Gaussian noise to the inputs (Sanchez-Gonzalez et al., 2020) indeed significantly improves performance. We further found that explicitly concatenating the external force vector \(\mathbf{f}_{i}\) to the node features boosts performance in the RPF case. However, adding \(\mathbf{f}_{i}\) to the node attributes, i.e., \(\hat{a}_{i}=V(\mathbf{f}_{i})+V(\tilde{\mathbf{p}}_{i})+\sum_{k\in\mathcal{N}(i)}\hat{a}_{ik}\), does not improve performance. Other models, like EGNN by Satorras et al. (2021), achieve equivariance by working with invariant messages, but they do not allow the same flexibility in terms of features. On a slightly more distant note, there has been a rapid rise in physics-informed machine learning (Raissi et al., 2019) and operator learning (Li et al., 2021), where functions or surrogates are learned in an Eulerian (grid-based) way. SEGNN is a sound choice for Lagrangian fluid mechanics problems since it is designed to work directly with vectorial information and particles.
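The steerable attributes above can be sketched with the e3nn library. The toy edge list, history length and \(l_{max}=1\) below are illustrative assumptions rather than the exact SEGNN configuration:

```python
import torch
from e3nn import o3

n = 100
pos = torch.randn(n, 3)                       # particle positions
vel_hist = torch.randn(n, 5, 3)               # H = 5 past velocities
senders = torch.randint(0, n, (400,))         # toy edge list
receivers = torch.randint(0, n, (400,))

irreps_sh = o3.Irreps.spherical_harmonics(1)  # "1x0e+1x1o" for l_max = 1
rel_pos = pos[senders] - pos[receivers]       # pairwise displacement vectors
edge_attr = o3.spherical_harmonics(irreps_sh, rel_pos,
                                   normalize=True, normalization="component")

mean_vel = vel_hist.mean(dim=1)               # averaged historical velocities
node_attr = o3.spherical_harmonics(irreps_sh, mean_vel,
                                   normalize=True, normalization="component")
# Add the sum of incident edge attributes to each receiving node.
node_attr = node_attr.index_add(0, receivers, edge_attr)
```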
## 3 Results The task we train on is the autoregressive prediction of accelerations \(\ddot{\mathbf{p}}_{i}\) given the current position \(\mathbf{p}_{i}\) and \(H=5\) past velocities of the particles. We measured the performance of the GNS and the SEGNN models in four aspects when evaluating on the test dataset: (i) _Mean-squared error_ (MSE) of particle positions \(\text{MSE}_{p}\) when rolling out a trajectory over 100 time steps (1 physical second for both flow cases); this is also the validation loss during training. (ii) _Sinkhorn distance_, as an optimal-transport distance measure between particle distributions; lower values indicate that the particle distribution is closer to the reference one. (iii) _Kinetic energy_ \(E_{kin}\) (\(=0.5mv^{2}\)) as a global measure of physical behavior. (iv) _Inference time_ per rollout step. Performance comparisons are summarized in Table 1. GNS and SEGNN models have roughly the same number of parameters for Taylor-Green (both have 5 layers and 128-dim features), whereas for reverse Poiseuille SEGNN has roughly three times fewer parameters than GNS (SEGNN has 64-dim features). Looking at the timings in Table 1, equivariant models of similar size are one order of magnitude slower than non-equivariant ones. This is a known result and is related to the constraints on how the Clebsch-Gordan tensor product can be implemented on accelerators like GPUs. Taylor-Green Vortex. One of the major challenges of the Taylor-Green dataset is that the input and output scales vary throughout a trajectory by up to one order of magnitude. Consequently, the first time steps carry a larger weight in the loss even after data normalization. Figure 2 (a) summarizes the most important performance properties of the Taylor-Green vortex experiment. In general, both models match the ground-truth kinetic energy well, but GNS drifts away from the reference SPH curve earlier. Both learned solvers seem to preserve larger system velocities, resulting in higher \(E_{kin}\). The rollout MSE for this case matches the behavior seen in \(E_{kin}\). **Reverse Poiseuille Flow.** The challenge of the reverse Poiseuille case lies in the different velocity scales between the main flow direction (\(x\)-axis) and the \(y\) and \(z\) components of the velocity. Although such unbalanced velocities are used as inputs, the target accelerations in the \(x\)-, \(y\)-, and \(z\)-directions all follow similar distributions. This, combined with temporal coarsening, makes the problem sensitive to input deviations. Figure 2 (b) shows that SEGNN reproduces the particle distribution almost perfectly, whereas GNS shows signs of particle clustering, resulting in a larger Sinkhorn distance. Interestingly, the shear layers in between the inverted flows (around the planes \(y=\{0,1,2\}\)) seem to have the largest deviation from the ground truth, which could be the source of the clusters; see Figure 3. ## 4 Future Work In this work, we demonstrate that equivariant models are well suited to capture underlying physical properties of particle-based fluid mechanics systems. Natural future steps are enforcing physical behaviors such as homogeneous particle distributions, and including recent developments for neural PDE training into the training procedure of Sanchez-Gonzalez et al. (2020). The latter include, e.g., the push-forward trick and temporal bundling (Brandstetter et al., 2022).
One major weakness of recursively applied solvers, which these strategies aim to mitigate, is error accumulation, which in most cases leads to out-of-distribution states and, consequently, unphysical behavior after several rollout steps. We conjecture that, together with such extensions, equivariant models offer a promising direction to tackle some of the long-standing problems in fluid mechanics, such as the learning of coarse-grained representations of turbulent flow problems, e.g. Taylor-Green (Brachet et al., 1984), or learning the multi-resolution dynamics of NSE problems (Hu et al., 2017). \begin{table} \begin{tabular}{c|c c c|c c c} & \multicolumn{3}{c|}{Taylor-Green vortex} & \multicolumn{3}{c}{Reverse Poiseuille flow} \\ \cline{2-7} & SEGNN & GNS & SPH & SEGNN & GNS & SPH \\ \hline \(\text{MSE}_{\mathbf{p}}\) & 7.7e-5 & 1.3e-4 & - & 7.7e-3 & 8.0e-3 & - \\ \(\text{MSE}_{E_{kin}}\) & 5.3e-5 & 1.3e-4 & - & 2.8e-1 & 3.0e-1 & - \\ \hline Sinkhorn & 1.3e-7 & 1.1e-7 & - & 7.8e-8 & 1.9e-6 & - \\ Time [ms] & 290 & 32 & 9.7 & 180 & 33 & 110 \\ \(\#\) params & 720k & 630k & - & 180k & 630k & - \\ \end{tabular} \end{table} Table 1: Performance measures on the Taylor-Green vortex and reverse Poiseuille flow. The Sinkhorn distance is averaged over test rollouts; the inference time is obtained for one rollout step of 8000 particles. Figure 2: Taylor-Green vortex (a) and reverse Poiseuille (b) performance evolution.
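As a closing illustration of the Sinkhorn-distance evaluation used in Section 3, the sketch below compares two particle distributions with the POT library. The positions are synthetic and uniform particle weights are assumed; the paper's exact regularisation and cost settings are not specified here:

```python
import numpy as np
import ot

n = 500
ref = np.random.rand(n, 3)                    # reference SPH positions
pred = ref + 0.01 * np.random.randn(n, 3)     # perturbed "predicted" positions

a = b = np.full(n, 1.0 / n)                   # uniform particle weights
M = ot.dist(ref, pred)                        # squared Euclidean cost matrix
d = ot.sinkhorn2(a, b, M, reg=0.01)           # entropy-regularised OT cost
print(float(d))                               # lower = closer distributions
```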
2309.16902
Investigating Shift Equivalence of Convolutional Neural Networks in Industrial Defect Segmentation
In industrial defect segmentation tasks, while pixel accuracy and Intersection over Union (IoU) are commonly employed metrics to assess segmentation performance, the output consistency (also referred to as equivalence) of the model is often overlooked. Even a small shift in the input image can yield significant fluctuations in the segmentation results. Existing methodologies primarily focus on data augmentation or anti-aliasing to enhance the network's robustness against translational transformations, but their shift equivalence performs poorly on the test set or is susceptible to nonlinear activation functions. Additionally, the variations in boundaries resulting from the translation of input images are consistently disregarded, thus imposing further limitations on the shift equivalence. In response to this particular challenge, a novel pair of down/upsampling layers called component attention polyphase sampling (CAPS) is proposed as a replacement for the conventional sampling layers in CNNs. To mitigate the effect of image boundary variations on the equivalence, an adaptive windowing module is designed in CAPS to adaptively filter out the border pixels of the image. Furthermore, a component attention module is proposed to fuse all downsampled features to improve the segmentation performance. The experimental results on the micro surface defect (MSD) dataset and four real-world industrial defect datasets demonstrate that the proposed method exhibits higher equivalence and segmentation performance compared to other state-of-the-art methods. Our code will be available at https://github.com/xiaozhen228/CAPS.
Zhen Qu, Xian Tao, Fei Shen, Zhengtao Zhang, Tao Li
2023-09-29T00:04:47Z
http://arxiv.org/abs/2309.16902v1
# Investigating Shift Equivalence of Convolutional Neural Networks in Industrial Defect Segmentation ###### Abstract In industrial defect segmentation tasks, while pixel accuracy and Intersection over Union (IoU) are commonly employed metrics to assess segmentation performance, the output consistency (also referred to as equivalence) of the model is often overlooked. Even a small shift in the input image can yield significant fluctuations in the segmentation results. Existing methodologies primarily focus on data augmentation or anti-aliasing to enhance the network's robustness against translational transformations, but their shift equivalence performs poorly on the test set or is susceptible to nonlinear activation functions. Additionally, the variations in boundaries resulting from the translation of input images are consistently disregarded, thus imposing further limitations on the shift equivalence. In response to this particular challenge, a novel pair of down/upsampling layers called component attention polyphase sampling (CAPS) is proposed as a replacement for the conventional sampling layers in CNNs. To mitigate the effect of image boundary variations on the equivalence, an adaptive windowing module is designed in CAPS to adaptively filter out the border pixels of the image. Furthermore, a component attention module is proposed to fuse all downsampled features to improve the segmentation performance. The experimental results on the micro surface defect (MSD) dataset and four real-world industrial defect datasets demonstrate that the proposed method exhibits higher equivalence and segmentation performance compared to other state-of-the-art methods. Our code will be available at [https://github.com/xiaozhen228/CAPS](https://github.com/xiaozhen228/CAPS). shift equivalence, industrial defect segmentation, U-Net, convolutional neural network (CNN), deep learning. ## I Introduction Visual inspection methods based on convolutional neural networks (CNNs) have attracted considerable interest in recent years for industrial quality control of diverse scenes, such as steel surfaces [1], printed circuit boards [2], rail surfaces [3], textured fabrics [4], and many others. Concurrently, segmentation-based networks for defect detection have become popular due to their ability to provide precise location and contour information of defects [5, 6]. However, most segmentation-based networks in defect detection primarily focus on improving segmentation metrics such as pixel accuracy and Intersection over Union (IoU), while neglecting the crucial aspect of output consistency. Output consistency refers to the concept of shift equivalence, which implies that if input images are shifted by a certain number of pixels, the corresponding segmentation masks produced by the network should also exhibit the same pixel offsets. Despite the long-held belief that CNNs inherently possess shift equivalence [7, 8], several studies [9, 10, 11] have revealed that input translation significantly affects the segmentation outcomes, especially in the industrial inspection field. To shed light on the issue of shift equivalence in CNNs, Fig. 1 visually portrays the impact of input translations on the segmentation masks. The defective raw image is partitioned into three sub-images: green, black, and red, with each pair of adjacent sub-images differing by only one pixel in position. The sub-images are subsequently fed into the segmentation network, yet the resulting segmentation masks exhibit significant disparities.
This situation often occurs in the following industrial settings: 1) when the same part is repeatedly captured by machine vision equipment with slight pixel translations due to mechanical deviations, leading to significant fluctuations in segmentation outcomes; 2) in a defective image, the same defects may vary by just a few pixels in position from image to image due to sampling, resulting in highly inconsistent segmentation outcomes. Therefore, the issue of shift equivalence has gained widespread attention among scholars in recent years. Fig. 1: Influence of input image translation on output segmentation masks. Initially, the black window in the original image is translated upwards and downwards by one pixel, resulting in the green and red windows, respectively. Subsequently, the images within the three windows are cropped and individually fed into the image segmentation network. The ground truth indicates a defect area of 44 pixels, while the predicted defect areas for the green, black, and red sub-images in the network output are 55, 49, and 63 pixels, respectively. The strategies to address the problem of shift equivalence in CNNs can be broadly categorized into learning-based and design-based approaches [12]. The former primarily focuses on enhancing network robustness through a data-driven approach, such as data augmentation. However, its segmentation performance shows a significant decline on the test set. The latter strategy seeks to redesign the network architecture in order to rectify the lack of equivalence in CNNs without relying on data. One key factor contributing to the loss of shift equivalence in CNNs is the downsampling layers, such as pooling layers and strided convolution layers, which violate the Nyquist-Shannon sampling theorem, as highlighted by Zhang [9]. Therefore, it is meaningful to devise a new downsampling technique to replace traditional sampling layers such as MaxPooling, ensuring that the downsampled feature maps remain as similar as possible before and after image translation. Currently, two new downsampling design techniques have been proposed to reduce the disparities in downsampled feature maps, namely anti-aliasing and component selection. Anti-aliasing-based methods, exemplified by BlurPool [9], aim to minimize differences between adjacent pixels by incorporating a low-pass filter (LPF) to remove high-frequency components from the image. However, these methods face limitations in nonlinear systems, especially when nonlinear activation functions like ReLU are present in the network [13]. On the other hand, the component-selection-based methods, represented by adaptive polyphase sampling (APS) [10] and learnable polyphase sampling (LPS) [11], were designed to select the same components as much as possible during downsampling before and after translation, thereby achieving structural equivalence in CNNs. Although the component-selection-based methods have demonstrated effectiveness in improving shift equivalence, they have not taken into account the variations in image boundaries that occur when input images are shifted in the manner depicted in Fig. 1. These variations result in random pixels at the image boundaries, making it challenging to ensure the similarity of downsampled results or the selection of identical components before and after image translation, thereby further constraining the shift equivalence. Furthermore, selecting a specific component implies discarding the remaining ones, which has an impact on the segmentation performance.
To address this issue, a novel method called component attention polyphase sampling (CAPS) is proposed in this paper. CAPS contains two essential layers, namely component attention polyphase downsampling (CAPD) and component attention polyphase upsampling (CAPU), to replace the conventional downsampling and upsampling layers in CNNs. CAPD aims to ensure maximum similarity of the downsampled results before and after image translation. It mainly consists of three parts: a polyphase downsampling process, an adaptive windowing (AW) module, and a component attention (CA) module. Initially, the input image undergoes polyphase downsampling, generating four components with half of the original spatial resolution. These components are then extracted as features and sequentially processed through the AW and CA modules to generate attention weights corresponding to each component. The downsampled results are finally obtained by fusing the different initial component features with the attention weights. The AW module effectively mitigates the boundary effect caused by shifts in images, thereby enhancing the consistency of downsampled features. The CA module captures global features of the components through global average pooling (GAP) and employs one-dimensional convolution to facilitate component-wise attention, leading to a significant improvement in defect segmentation performance through the fusion of all downsampled components. Corresponding to the implementation of downsampling using CAPD, CAPU restores the downsampled features to their original spatial positions, thereby ensuring shift equivalence in segmentation networks. Fig. 2: A visual comparison of two downsampling methods and their corresponding upsampling techniques based on a one-dimensional signal. (a) MaxPooling process. (b) The same input signal as in Fig. 2(a) after a one-unit leftward translation, with the MaxPooling process. (c) The proposed method. (d) The same input signal as in Fig. 2(c) after a one-unit leftward translation, with the CAPD process. First, assume that the unshifted signal input to the downsampling layer is [1, 2, 3, 4, 3, 2]. Then, the shifted signal, when shifted one unit to the left, is represented as [2, 3, 4, 3, 2, 5]. The stride and pooling size during downsampling are both set to 2. As depicted in Figs. 2(a) and (b), the downsampling results after MaxPooling for the unshifted signal and the shifted signal are [2, 4, 3] and [3, 4, 5], respectively. It is observed that the results of MaxPooling are quite different ([2, 4, 3] vs. [3, 4, 5]). However, the proposed CAPD keeps the results similar after downsampling ([1.9, 3.2, 2.0] vs. [2.0, 3.9, 2.1]), as shown in Figs. 2(c) and (d). Specifically, the input signal is first sampled into two components, Component 1 and Component 2, according to the parity of the index. Then, the boundary elements of the components are filtered out by applying a window, and the corresponding weights are acquired from the component attention module. Lastly, the different components are weighted and fused to obtain the downsampled results. Fig. 2 provides a visual comparison of two downsampling methods and their corresponding upsampling techniques based on a one-dimensional signal. It can be seen from Fig. 2 that MaxPooling selects the maximum value at fixed positions as the downsampled result. When the input undergoes translation, the maximum value within the corresponding pooling region has already changed, leading to significant alterations in the downsampled result.
However, the proposed CAPS samples the input signal into two components based on its odd and even indices. When the input undergoes translation, only the odd and even indices are swapped, and the values within each component remain the same. The fusion results of CAPD are therefore also largely identical, owing to the similarity of the components. Our contributions are summarized as follows: 1. A pair of down/upsampling layers called CAPS is proposed to address the shift equivalence problem, and the layers can serve as alternatives to conventional downsampling and upsampling layers in CNNs. To the best of our knowledge, this work is the first to investigate the issue of shift equivalence in the field of industrial defect segmentation, considering the boundary variations caused by image translation and leveraging all downsampled component features. 2. CAPD, a novel downsampling layer, is designed to maximize the similarity of downsampled results before and after image translation. The AW module mitigates the boundary effect, while the CA module integrates different components to enhance segmentation performance. 3. The proposed method outperforms other state-of-the-art anti-aliasing-based and component-selection-based methods in both shift equivalence and segmentation performance on the micro surface defect (MSD) dataset and four real-world industrial defect datasets. ## II Related Work In this section, defect detection relying on segmentation networks is first reviewed. Subsequently, a comprehensive review of works related to shift equivalence is presented. ### _Defect Segmentation_ Defect segmentation, a crucial technique in defect detection, has gained significant traction in real-world industrial vision scenarios. Currently, deep learning methods (e.g. FCN [14], DeepLabv3+ [15], U-Net [16], etc.) have emerged as popular choices for industrial vision defect segmentation due to their robust characterization and generalization capabilities. Wang et al. [17] employed a three-stage FCN to enhance the accuracy and generalization ability of defect segmentation in tire X-ray images. To address the limited-sample issue in print defect datasets, Valente et al. [18] utilized computer graphics techniques to synthesize new training samples at the pixel level, resulting in commendable segmentation performance using DeepLabv3+. Miao et al. [19] designed a loss function for the U-Net network based on the Matthews correlation coefficient to tackle the challenges posed by limited data volume and imbalanced sample categories. To alleviate the contextual feature loss caused by multiple convolutions and pooling operations during encoding, Yang et al. [20] introduced a multi-scale feature fusion module into the U-Net network. They integrated a module named bidirectional convolutional long short-term memory block attention into the skip connection structure, effectively capturing global and long-term features. Zheng et al. [21] devised a novel Residual U-Structure embedded within U-Net, complemented by a coordinate attention module to integrate multi-scale and multi-level features. In summary, current methodologies predominantly focus on extracting rich features to enhance defect segmentation performance, often overlooking the significance of shift equivalence. Therefore, it holds great significance to study shift equivalence in CNNs. ### _Shift Equivalence_ In contrast to shift equivalence in segmentation tasks, shift equivalence in image classification (also known as shift invariance) has been extensively studied, with some promising results [25, 26].
The strict distinction between shift equivalence and shift invariance is presented in Section III-A. Learning-based approaches such as data augmentation [27] and deformable convolution [22, 28] are data-driven ways to improve shift equivalence. Deformable convolution introduces additional learnable offset parameters within standard convolutional operations, which can acquire adaptive receptive fields and learn geometric transformations automatically. Deformable U-Net (DUNet) [28] is a typical application of deformable convolution in the field of image segmentation. It aims to replace the traditional convolutional layers in U-Net [16] with a deformable convolutional block to make the network adaptive in adjusting the receptive field and sampling locations according to the segmentation targets. However, this approach is time-consuming and relies heavily on training data, making it difficult to apply to real industrial scenarios. Modern CNNs lack equivalence unless the input image is shifted by an integer multiple of the downsampling factor, which is impractical [25]. As a result, the redesign of traditional downsampling methods has emerged as an effective strategy for improving equivalence. Anti-aliasing has proven to be an effective approach for enhancing equivalence by addressing the violation of Nyquist-Shannon sampling theorem during downsampling [9]. In this approach, the MaxPooling layer (stride 2) was partitioned into a dense MaxPooling layer (stride 1) and a naive downsampling layer (stride 2). Additionally, a low-pass filter (LPF) was utilized to blur the features after dense maxpooling layer, effectively mitigating aliasing effect. Zhang's work [9] paved the way for further advancements, such as the utilization of adaptive low-pass filter kernels based on BlurPool [26], and the design of Pyramidal BlurPooling (PBP) structure to gradually reduce the size of the LPF kernel at each downsampling layer [23]. Despite anti-aliasing-based approaches offering some degree of improvement in shift equivalence, their ability to address the equivalence problem remains limited. First, aliasing can only be completely eliminated in linear systems by blurring prior to downsampling, which contradicts the prevalent use of nonlinear activation functions (e.g., ReLU) in current CNNs [13]. Second, it employs LPF before downsampling, resulting in a trade-off between image quality and shift equivalence [29]. Another elegant approach for improving downsampling is component-selection-based methods. These methods involve downsampling the input using fixed strides to obtain a set of downsampled components, followed by a strategy to select one of the components as the downsampled feature map. By ensuring consistent selection of the same component for each downsampling operation, shift equivalence in CNNs can be achieved by restoring the feature map to its corresponding position during upsampling. Chaman et al. [10] employed the max-norm polyphase component during downsampling in their APS method, proving complete shift equivalence under circular shifts. Gomez et al. [11] utilized a small neural network to adaptively select components, thereby further improving segmentation performance without compromising equivalence in LPS. Both APS and LPS utilized LPF before downsampling and after upsampling, as ref. [10] indicated that anti-aliasing can further improve the segmentation performance significantly. 
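For reference, the anti-aliased downsampling of [9] discussed above can be sketched as follows; this is a hedged illustration with a fixed 3x3 binomial kernel standing in for the LPF, not the exact BlurPool implementation:

```python
import torch
import torch.nn.functional as F

def blurpool(x):  # x: (B, C, H, W)
    # Dense MaxPooling (stride 1), then blur, then stride-2 subsampling.
    x = F.max_pool2d(x, kernel_size=2, stride=1)
    k = torch.tensor([1.0, 2.0, 1.0])
    k = (k[:, None] * k[None, :]) / 16.0            # binomial LPF, sums to 1
    weight = k.view(1, 1, 3, 3).repeat(x.shape[1], 1, 1, 1)
    return F.conv2d(x, weight, stride=2, padding=1, groups=x.shape[1])

y = blurpool(torch.randn(1, 8, 128, 128))
print(y.shape)  # torch.Size([1, 8, 64, 64])
```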
However, APS and LPS did not achieve the expected performance in terms of equivalence when faced with common shifts in input images. This can be attributed to two main reasons. First, boundary variations resulting from image translation are not considered during the downsampling process, making it challenging to ensure consistent component selection before and after image translation. Second, selecting a single component as the downsampled result discards the majority of features, leading to reduced segmentation performance. In order to solve the problem of information loss during downsampling, Liu et al. [24] proposed a novel multi-level wavelet CNN (MWCNN) method, which employs the discrete wavelet transform (DWT) and inverse wavelet transform (IWT) to downsample and upsample the features, respectively. However, MWCNN concatenates the four components directly after the DWT while ignoring their order, resulting in the loss of shift equivalence. ### _Positioning of Our CAPS_ The approach proposed in this paper focuses on a structural redesign of the network to enhance shift equivalence in the segmentation task. Specifically, CAPS improves the equivalence of segmentation networks by redesigning the downsampling layer CAPD and the upsampling layer CAPU. Table I analyzes representative methods and ours in terms of whether image boundaries are considered, key points, and major shortcomings. Compared to other methods, the proposed method considers the variations in image boundaries due to translation, thereby enhancing the consistency of downsampled results before and after translation. Moreover, unlike other methods that rely on selecting a single component after downsampling, CAPS incorporates the fusion of multiple downsampled component features, thereby improving the overall segmentation performance. ## III Problem Description Here, we first provide definitions of shift equivalence and shift invariance, along with graphical examples to illustrate the issues. Then, the boundary problem that arises when input translation occurs is pointed out and a preliminary solution is proposed. To enhance readability, one-dimensional signals are employed to illustrate this section. It is worth noting that a two-dimensional image can be seen as an extension of a one-dimensional signal. ### _Definitions of shift invariance and shift equivalence_ To clarify the concept of shift equivalence, it is important to first distinguish it from the related concept of shift invariance. Shift invariance refers to a mapping that remains constant before and after the input is shifted, and is commonly used to indicate that the translation of an input image does not affect the final predicted class in image classification tasks. For an input signal \(\mathbf{x}\) and its shifted version \(T_{N}(\mathbf{x})\), a shift-invariant operation \(\tilde{f}\) can be defined as: \[\tilde{f}(\mathbf{x})=\tilde{f}(T_{N}(\mathbf{x})) \tag{1}\] where \(N\) denotes the number of signal shifts in the circular or common shift way. Shift equivalence, however, dictates that the output should shift concurrently with the input, which is commonly utilized to describe image segmentation tasks. Accordingly, a shift-equivalent operation \(\tilde{f}\) can be expressed as: \[T_{N}(\tilde{f}(\mathbf{x}))=\tilde{f}(T_{N}(\mathbf{x})) \tag{2}\] ### _Description of shift equivalence and boundary effect_ Fig. 3 illustrates the downsampling methods based on component selection for shift equivalence, such as APS and LPS.
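Before turning to the details of Fig. 3, Eqs. 1 and 2 can be checked numerically for circular shifts; the two toy maps below are hypothetical stand-ins (a global average as a shift-invariant map, a pointwise scaling as a shift-equivalent one):

```python
import numpy as np

def T(x, n):
    return np.roll(x, n)  # circular shift by n samples

f_inv = lambda x: x.mean()   # shift-invariant map (GAP-like)
f_eq  = lambda x: 2.0 * x    # shift-equivalent (pointwise) map

x, n = np.random.rand(8), 3
print(np.isclose(f_inv(x), f_inv(T(x, n))))        # Eq. 1 holds
print(np.allclose(T(f_eq(x), n), f_eq(T(x, n))))   # Eq. 2 holds
```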
In the second row, the initial signal \(\mathbf{x}\) comprises four elements: an orange triangle, a blue square, a grey pentagon, and a red pentagram. The signal \(\mathbf{x}\) is then sampled into _component 1_ and _component 2_ according to the odd/even indices, one of which is eventually selected as the result of downsampling. The components acquired from the initial signal \(\mathbf{x}\) and its circularly shifted version \(T_{N}(\mathbf{x})\) (first row) contain the same elements, only in a different order. Therefore, shift equivalence can theoretically be guaranteed if the same components are selected during downsampling when the input images are shifted, as proved by ref. [11]. Nevertheless, as depicted in the third row of Fig. 3, in the case of a common shift, its _component 1_ corresponds precisely to _component 2_ of the initial signal, while its _component 2_ manifests an additional element (represented by the green circle) absent in the initial signal. Moreover, as shown in the third row, the orange triangle in the initial signal does not appear in the common shift version \(T_{N}(\mathbf{x})\). Hence, random variations in the input signal boundaries cause variability in the selection of the downsampled components, resulting in the loss of full shift equivalence. Due to the boundary variations that occur during common shifts, Eq. 2 considers only the unchanged part of the input image before and after the translation. It is important to note that, unless explicitly specified, all shifts of input images in this paper specifically pertain to common shifts rather than circular shifts. As shown in Fig. 3, previous component-selection-based methods keep only a certain component as the result of downsampling, which does not make full use of all components. Fusing all components with a set of specific weights is thus a good way to exploit the full characteristics. Moreover, regarding the boundary variations that make downsampled results uncertain, an effective way to reduce the variation of image boundaries is to adaptively crop feature boundaries based on the input dimension. ## IV Proposed Method In the following text, the pipeline of the proposed method is introduced, and then the design details of CAPD and CAPU are presented. Following that, the equivalence proof regarding CAPS is provided. Lastly, the loss function is specified. ### _Pipeline_ The U-Net is widely used in industrial defect segmentation for its strong segmentation capabilities and simple architecture [19, 20, 21]. It not only has a fast inference speed that meets the demand of real-time industrial segmentation, but its skip-connection structure, which fuses features from multiple levels, also provides a strong guarantee of segmentation performance. Moreover, the symmetric downsampling and upsampling structure of U-Net can easily be replaced with the proposed CAPS to verify its performance. Thus, the U-Net is adopted as the base model in this paper, and the other compared methods are further improved on this basis to enhance shift equivalence. Fig. 3: The downsampling method based on component selection for shift equivalence. The rectangular region in the middle row represents the initial signal. _Component 1_ and _Component 2_ sample the odd and even positions of the input signal, respectively. The component-selection-based approaches select one of the two components as the result of downsampling according to a specific strategy. The standard U-Net consists of an encoder for feature extraction and a decoder for recovering the original spatial resolution.
By using _Crop_ and _Skip connection_ operations, lower-level features are concatenated with higher-level features along the channel dimension to fuse more informative features. CAPS is incorporated into the network architecture as illustrated in Fig. 4. Unlike the standard U-Net, the CAPD layer is designed to perform downsampling instead of MaxPooling in the encoder. Similarly, the CAPU layer replaces the transposed convolution for upsampling the features in the decoder. Let us denote \(\mathbf{X}\in\mathbb{R}^{H\times W\times C}\) as an input defect image; the output of the model \(\hat{\mathbf{Y}}\in\mathbb{R}^{H\times W\times 2}\) can be modeled as: \[\hat{\mathbf{Y}}=f_{model}(\mathbf{X},\theta) \tag{3}\] where \(f_{model}:\mathbb{R}^{H\times W\times C}\mapsto\mathbb{R}^{H\times W\times 2}\), \(\theta\) denotes the parameters of the proposed model, and the elements in \(\hat{\mathbf{Y}}\) are constrained to binary values of 0 or 1, symbolizing the background and the defect, respectively. Through the process of back-propagation, performed during the training phase, the optimal parameters \(\theta^{*}\) of the proposed model can be expressed as below: \[\theta^{*}=\operatorname*{arg\,min}_{\theta}l(\hat{\mathbf{Y}},\mathbf{Y}) \tag{4}\] in which \(\mathbf{Y}\) denotes the ground truth of the input image \(\mathbf{X}\) and \(l\) represents the loss function as outlined in Section IV-E. ### _Component Attention Polyphase Downsampling (CAPD) Layer_ The architecture of CAPD is visualized in Fig. 5, and the downsampling process is divided into three stages. The first stage is a polyphase downsampling process, and the four output downsampled components are fed into a small neural network for feature extraction. Moving on to the second stage, the feature maps derived from the first stage are processed through the AW module and the CA module. This stage adaptively removes uncertain feature boundaries and determines the initial weights for the different components. The final downsampled result is acquired in the third stage by weighting and fusing the initial four components. **Polyphase downsampling and feature extraction.** First, the input features undergo polyphase downsampling, resulting in a reduction of the original spatial resolution by half. They are partitioned into four components based on their spatial locations. Given an input feature \(\mathbf{F}\in\mathbb{R}^{h\times w\times c}\), four components are obtained by polyphase downsampling in the spatial dimension at equal intervals: \[\mathbf{F}_{(i,j)}[x,y,z]=\mathbf{F}[2x+i,2y+j,z] \tag{5}\] where \(\mathbf{F}_{(i,j)}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times c},i,j\in \{0,1\}\) denotes the four downsampled components as illustrated in Fig. 5. These components are then passed through two convolutional layers with [3\(\times\)3, 128] and [3\(\times\)3, 64] convolutional kernels, respectively. To fully extract their features, a [1\(\times\)1, 1] convolution kernel is then utilized to compress the features in the channel dimension. The resulting output \(\mathbf{P}_{(i,j)}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times 1}\) is subsequently employed as the input for the subsequent AW module.
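Eq. 5 amounts to strided slicing; a minimal PyTorch sketch with a hypothetical helper name (and the (h, w, c) layout used in the text) is:

```python
import torch

def polyphase_components(F):
    # F_(i,j)[x, y, z] = F[2x + i, 2y + j, z]
    return {(i, j): F[i::2, j::2, :] for i in (0, 1) for j in (0, 1)}

F = torch.randn(128, 128, 64)
comps = polyphase_components(F)
print(comps[(0, 1)].shape)  # torch.Size([64, 64, 64])
```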
**AW module and CA module.** The windowing process in the AW module is expressed as follows: \[\mathbf{z}=Cat\{GAP\{\mathbf{P}_{(i,j)}[hs:-hs,ws:-ws,:]\}\} \tag{6a}\] \[hs=\text{int}\left(\frac{h}{2}\times\beta\times\frac{1}{2}\right)\] (6b) \[ws=\text{int}\left(\frac{w}{2}\times\beta\times\frac{1}{2}\right) \tag{6c}\] where \(\mathbf{z}\in\mathbb{R}^{1\times 4}\) denotes the output of the AW module and \(\beta\) corresponds to the proportion of the cropped feature boundaries. The symbols \(GAP\) and \(Cat\) refer to the operations of global average pooling and concatenation, respectively. After conducting an ablation analysis of the hyperparameters, \(\beta\) was set to 0.25 to achieve optimal equivalence. The CA module is intended to enable cross-component interaction for feature fusion, motivated by [30]. The initial weights \(\boldsymbol{\rho}\in\mathbb{R}^{1\times 4}\) of the four components from the CA module can be mathematically expressed as: \[\boldsymbol{\rho}=\sigma(\mathbf{H}^{(k)}*\mathbf{z}) \tag{7}\] where \(\sigma(\cdot)\) represents the sigmoid function defined as \(\sigma(x)=1/(1+e^{-x})\), while \(\mathbf{H}\) denotes a one-dimensional convolution kernel with a size of \(k\). In this paper, the hyperparameter \(k\) was set to 2, since only four components of global features are required to interact and produce attention weights. To summarize, the CA module serves two primary purposes: 1) acquiring initial weights for the fusion of components in the third stage through attention mechanisms, and 2) facilitating end-to-end learning of the network by ensuring that the corresponding polyphase downsampled components receive similar initial weights before and after the translation of input images. Fig. 4: The network architecture of our proposed method. Compared to the standard U-Net network, only the downsampling and upsampling layers were replaced with CAPD (yellow arrows) and CAPU (orange arrows), respectively. **Fusion of components.** The utilization of component fusion approaches evidently demonstrates their advantage in improving segmentation performance compared to selecting a single component. However, every coin has two sides. Sometimes, when the initial weights of different components are similar, the model fails to concentrate on a specific component. Hence, to enhance the consistency of the downsampled feature maps, a more discriminative component weight is necessary. In this regard, the _T-softmax_ function [31] is incorporated to adjust the weights \(\rho_{i}\), resulting in a larger variance. The final component weight \(w_{i}\) is calculated as follows: \[w_{i}=\frac{\exp(\rho_{i}/T)}{\sum_{j=0}^{3}\exp(\rho_{j}/T)},i=0,1,2,3 \tag{8}\] where \(T\) denotes the temperature coefficient, set to \(10^{-3}\) according to the ablation experiments to balance shift equivalence and segmentation performance. Following this, the result of downsampling is denoted as: \[\mathbf{D}_{c}=w_{0}\mathbf{F}_{(0,0)}+w_{1}\mathbf{F}_{(0,1)}+w_{2}\mathbf{F }_{(1,0)}+w_{3}\mathbf{F}_{(1,1)} \tag{9}\] where \(\mathbf{D}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times c}\) is the final result of downsampling. In essence, the CAPD is designed to make the downsampled feature maps \(\mathbf{D}\) of an image before and after translation as similar as possible without losing feature information.
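The second and third stages (Eqs. 6-9) can be sketched as follows, assuming the compressed maps \(\mathbf{P}_{(i,j)}\) and the components \(\mathbf{F}_{(i,j)}\) are already computed; the 1-D convolution here is untrained, and the one-sided trim after the \(k=2\) convolution is a simplification of the padding used in practice -- both are assumptions of this sketch:

```python
import torch
import torch.nn as nn

beta, T = 0.25, 1e-3
P     = [torch.randn(64, 64) for _ in range(4)]      # P_(0,0) .. P_(1,1)
comps = [torch.randn(64, 64, 8) for _ in range(4)]   # F_(0,0) .. F_(1,1)

# AW module (Eq. 6): crop the boundaries, then global average pooling.
hs = int(64 * beta / 2)
z = torch.stack([p[hs:-hs, hs:-hs].mean() for p in P]).view(1, 1, 4)

# CA module (Eq. 7): 1-D convolution (k = 2) + sigmoid -> initial weights rho.
conv = nn.Conv1d(1, 1, kernel_size=2, padding=1, bias=False)
rho = torch.sigmoid(conv(z))[..., :4].flatten()

# T-softmax (Eq. 8) and component fusion (Eq. 9).
w = torch.softmax(rho / T, dim=0)
D = sum(wi * Fi for wi, Fi in zip(w, comps))
gamma = int(torch.argmax(w))          # Eq. 10: index passed on to CAPU
print(D.shape, gamma)
```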
Eventually, these downsampled feature maps are upsampled using the CAPU according to the upsampling factor \(\gamma\), which keeps track of the positions that require restoration during the upsampling process: \[\gamma=\arg\max(w_{i}),i=0,1,2,3 \tag{10}\] ### _Component Attention Polyphase Upsampling (CAPU) Layer_ The upsampling process of CAPU is straightforward, involving the placement of the components obtained from downsampling into predetermined spatial positions in the upsampled feature maps. Moreover, the remaining positions in the upsampled feature map are filled with zeros to mimic the uncertainty during the upsampling process. Fig. 6 illustrates a complete downsampling and upsampling process, assuming that the input feature is \(\mathbf{F}\in\mathbb{R}^{h\times w\times c}\). Specifically, the input \(\mathbf{F}\) first passes through the CAPD layer, yielding the downsampled feature \(\mathbf{D}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times c}\) and the sampling factor \(\gamma\) corresponding to the maximum weight. Then the upsampled result \(\mathbf{U}\in\mathbb{R}^{h\times w\times c}\) is denoted as: \[\mathbf{U}_{c}=T_{m,n}(U_{2}(\mathbf{D}_{c})) \tag{11}\] where \(m\) and \(n\) map \(\gamma\) to a two-dimensional position encoding, which can be achieved through a simple binary encoding process represented by: \[mn=\phi(\gamma) \tag{12}\] where \(m\) corresponds to the first bit of the encoding result and \(n\) represents the second bit of the encoding result. The function \(\phi\) converts a decimal number into a binary code. \(T_{m,n}(\cdot)\) represents translating the input feature by \(m\) and \(n\) pixels along the \(x\) and \(y\) axes, respectively, and \(U_{2}\) is a conventional upsampling operation. \(U_{2}(\mathbf{D}_{c})=\mathbf{Z}[x,y,z]\) can be calculated as: \[\mathbf{Z}[x,y,z]=\begin{cases}\mathbf{D}_{c}[x/2,y/2,z],&\text{when x and y are even}\\ 0,&\text{otherwise}.\end{cases} \tag{13}\] Following APS and LPS, an LPF is also added before CAPD and after CAPU to improve the segmentation performance. Moreover, in the next subsection, we show that the CAPS is completely equivalent when the boundaries of the features are not considered and \(T\to 0\). Fig. 5: The diagram of our proposed CAPD layer. In the first stage, the input features are first polyphase-sampled into four components according to odd and even indices. Then, these components are fed into a neural network with shared weights for feature extraction. The extracted features are used in the second stage to generate initial component weights that characterize the different levels of importance through the adaptive windowing and component attention modules, respectively. In the third stage, the initial weights are processed by the _T-softmax_ function to obtain the final weights, and the different components are weighted and fused using the final weights to acquire the final downsampled features. Fig. 6: One complete downsampling and upsampling process. ### _Proof of shift equivalence for CAPS_ For simplicity of the proof, the channel dimension of the input features is not considered and the stride is set to 2. The final result can also easily be generalized to multiple channels and other strides. In addition, the boundaries of the features (i.e., pixels newly entering and moving out of the subimage during the translation process) are not considered, because the boundary feature changes are random and unpredictable when a common shift is applied to the image.
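Before proceeding to the proof, a minimal sketch of CAPU (Eqs. 11-13), which is also a concrete counterpart of the operators used below, is given here; the circular `torch.roll` stands in for the translation \(T_{m,n}\) and the helper name is hypothetical:

```python
import torch

def capu(D, gamma):
    h, w, c = D.shape
    m, n = divmod(gamma, 2)              # Eq. 12: binary encoding of gamma
    U = torch.zeros(2 * h, 2 * w, c)     # Eq. 13: zero-filled upsampling U_2
    U[0::2, 0::2, :] = D
    return torch.roll(U, shifts=(m, n), dims=(0, 1))   # T_{m,n} in Eq. 11

U = capu(torch.randn(64, 64, 8), gamma=2)   # gamma = 2 -> (m, n) = (1, 0)
print(U.shape)  # torch.Size([128, 128, 8])
```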
Corresponding to \(U_{2}\) in Eq. 11, let \(D_{2}\) represent the traditional downsampling operation; \(\mathbf{Q}=D_{2}(\mathbf{F})\) is given by \(\mathbf{Q}[x,y]=\mathbf{F}[2x,2y]\). It is clear that \(\mathbf{Q}\) is a two-dimensional simplified version of the first downsampled component \(\mathbf{F}_{(0,0)}\) in Eq. 5, and the other downsampled components can be expressed as \(\{\mathbf{F}_{(i,j)}\}_{i,j=0}^{1}\), where \(\mathbf{F}_{(i,j)}=D_{2}(T_{-i,-j}(\mathbf{F}))\), i.e., \(\mathbf{F}_{(i,j)}[x,y]=\mathbf{F}[2x+i,2y+j]\). Let us denote \(D_{2}^{c}(\cdot)\) and \(U_{2}^{c}(\cdot)\) as the CAPD and CAPU operators, which are defined as: \[\mathbf{D}_{c}=\mathbf{F}_{(m,n)}=D_{2}^{c}(\mathbf{F})=D_{2}(T_{-m,-n}(\mathbf{F})) \tag{14}\] \[U_{2}^{c}(\mathbf{D}_{c},m,n)=T_{m,n}(U_{2}(\mathbf{D}_{c})) \tag{15}\] where \(m\) and \(n\) denote the index of the component with the highest weight, as indicated in Eq. 12. Note that the equality \(\mathbf{D}_{c}=\mathbf{F}_{(m,n)}\) in Eq. 14 holds under the condition that the temperature coefficient \(T\to 0\) in Eq. 8. We can now show that \(U_{2}^{c}\circ D_{2}^{c}\) is fully equivalent when variations in the image boundaries due to translation are not considered and \(T\to 0\): \[U_{2}^{c}\circ D_{2}^{c}(\widetilde{\mathbf{F}})=T_{s_{x},s_{y}}(U_{2}^{c}\circ D_{2}^{c}(\mathbf{F})),\forall s_{x},s_{y}\in\mathbb{Z} \tag{16}\] where \(\widetilde{\mathbf{F}}=T_{s_{x},s_{y}}(\mathbf{F})\) represents the result of translating the input \(\mathbf{F}\) by \(s_{x}\) and \(s_{y}\) pixels along the x-axis and y-axis directions, respectively. _Proof._ Let \(m,n\) and \(\widetilde{m},\widetilde{n}\) denote the component indices corresponding to the maximum weight obtained with \(\mathbf{F}\) and \(\widetilde{\mathbf{F}}\) as CAPS inputs, respectively. Then assume that \(s_{x}\) and \(s_{y}\) are both odd integers: \[D_{2}^{c}(\mathbf{F}) =D_{2}(T_{-m,-n}(\mathbf{F})) \tag{17a}\] \[D_{2}^{c}(\widetilde{\mathbf{F}}) =D_{2}(T_{-\widetilde{m},-\widetilde{n}}(\widetilde{\mathbf{F}})) \tag{17b}\] Based on the above properties we can get: \[U_{2}^{c}\circ D_{2}^{c}(\mathbf{F})=T_{m,n}U_{2}D_{2}(T_{-m,-n}(\mathbf{F})) \tag{18}\] Similarly, for the input after translation: \[U_{2}^{c}\circ D_{2}^{c}(\widetilde{\mathbf{F}}) =T_{\widetilde{m},\widetilde{n}}U_{2}D_{2}(T_{-\widetilde{m},-\widetilde{n}}(\widetilde{\mathbf{F}})) \tag{19a}\] \[=T_{\widetilde{m},\widetilde{n}}U_{2}D_{2}(T_{s_{x}-\widetilde{m},s_{y}-\widetilde{n}}(\mathbf{F}))\] (19b) \[=T_{\widetilde{m},\widetilde{n}}T_{s_{x}-1,s_{y}-1}U_{2}D_{2}(T_{1-\widetilde{m},1-\widetilde{n}}(\mathbf{F}))\] (19c) \[=T_{s_{x},s_{y}}(T_{m,n}U_{2}D_{2}(T_{-m,-n}(\mathbf{F}))) \tag{19d}\] where the properties \(\widetilde{m}=1-m\) and \(\widetilde{n}=1-n\) (for odd \(s_{x}\) and \(s_{y}\)) are used in Eq. 19d and hold based on the fact that the weights of the corresponding components before and after the translation are the same. This is ensured by the global average pooling layer in CAPD, as pointed out by [10, 11]. Then, Eq. 16 is shown to be valid by substituting Eq. 18 into Eq. 19d. The same conclusion can similarly be reached when \(s_{x}\) or \(s_{y}\) is even, since \(\widetilde{m}=m\) or \(\widetilde{n}=n\) in those cases. In practice, the full shift equivalence of the network cannot be satisfied, because the boundaries of the image change unpredictably before and after the input translation, as shown in Fig. 3. Therefore, the AW module is designed to minimize the effect of boundary variations on shift equivalence.
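The circular-shift case of Eq. 16 can be verified numerically; the sketch below replaces the \(T\to 0\) limit with a hard max-norm component selection (an APS-style proxy for consistent selection; helper names are hypothetical and the check is expected to print `True`):

```python
import torch

def d2c(F):                                  # CAPD with hard selection
    comps = {(i, j): F[i::2, j::2] for i in (0, 1) for j in (0, 1)}
    m, n = max(comps, key=lambda k: comps[k].norm())
    return comps[(m, n)], m, n

def u2c(D, m, n):                            # CAPU, Eq. 15
    U = torch.zeros(2 * D.shape[0], 2 * D.shape[1])
    U[0::2, 0::2] = D
    return torch.roll(U, shifts=(m, n), dims=(0, 1))

F, (sx, sy) = torch.randn(16, 16), (3, 5)
lhs = u2c(*d2c(torch.roll(F, (sx, sy), dims=(0, 1))))
rhs = torch.roll(u2c(*d2c(F)), (sx, sy), dims=(0, 1))
print(torch.allclose(lhs, rhs))
```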
In addition, although setting the \(T\) in Eq. 8 closer to 0 favours shift equivalence, a higher \(T\) facilitates the fusion of component features and thus improves segmentation performance. Thus, \(T\) is set as a hyperparameter in this paper to balance shift equivalence and segmentation performance. ### _Loss Function_ A loss function that combines the cross-entropy loss \(l_{ce}\) and the Dice loss \(l_{de}\) is utilized. Mathematically, the loss function can be expressed as the sum of both losses, denoted as \(l=l_{ce}+l_{de}\). The values of the cross-entropy loss \(l_{ce}\) and the Dice loss \(l_{de}\) for a given sample image are represented as follows: \[l_{ce}(\hat{\mathbf{Y}},\mathbf{Y})=-\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} \log q(x_{ij},y_{ij}) \tag{20}\] \[l_{de}(\hat{\mathbf{Y}},\mathbf{Y})=1-2\frac{\left|\hat{\mathbf{Y}}\cap \mathbf{Y}\right|}{\left|\hat{\mathbf{Y}}\right|+\left|\mathbf{Y}\right|} \tag{21}\] where \(q(x_{ij},y_{ij})\) denotes the probability that the pixel \(x_{ij}\) is predicted to be the ground truth \(y_{ij}\). The meanings of \(\hat{\mathbf{Y}}\) and \(\mathbf{Y}\) are consistent with those in Eq. 4. ## V Experiments In this section, the dataset utilized in the experiments is first described, and details on the generation of the training and test datasets are provided. The metrics employed to evaluate shift equivalence and segmentation performance are then defined. Subsequently, the shift equivalence problem for the most advanced image segmentation networks is investigated. Six networks explicitly designed to address image shift equivalence are then compared, demonstrating the efficacy of the proposed method. Following that, the effect of boundary variations is analyzed and ablation experiments are conducted. Model complexity and runtime are also further analyzed after the ablation experiments. Lastly, four other real industrial datasets are used to validate the effectiveness of the proposed method. ### _Generation of training and test datasets_ A publicly available micro surface defect (MSD) dataset of silicon steel strips was used in the experiments [32]. The dataset consists of 35 images of surface defects in silicon steel strips, each with a resolution of 640x480. The defects are categorized into two groups: spot-defect images (SDI) and steel-pit-defect images (SPDI), containing 20 and 15 images, respectively. Notably, one distinctive characteristic of this dataset is the presence of random background textures in the original images, with the defects occupying a small portion of the overall image, as depicted in Fig. 1. The original dataset was divided in a ratio of 0.8, 0.1 and 0.1 for use in the training, validation and testing phases, respectively. Given that the micro-defects are relatively sparse in comparison to the overall image, small-resolution images (128x128) were cropped from the original images for the generation of the training, validation and test sets. A random sampling strategy in which each raw image was sampled into 30 images was employed for the training and validation datasets, while maintaining a 3:1 ratio between defective and normal images, as illustrated in Fig. 7(a). Two test sets were constructed to evaluate the segmentation performance and shift equivalence, respectively: **Middle Defect Testset (MDT).** The MDT aims to evaluate the network when defects are located in the middle region of the image.
To generate the MDT, sampling windows were moved across the images with a one-pixel increment, and only images with defects located within the yellow window were selected from the black window as shown in Fig. 7(b). The distance between the black and yellow window boundaries was set to 40 pixels. **Boundary Defect Testset (BDT).** The BDT was created to assess the network when defects are positioned in the boundary regions of the image. The generation of the BDT followed a similar process to the MDT, but only included images where defects appeared between the black and yellow windows, as depicted in Fig. 7(c). The visualization results of the MDT and BDT can be observed in Fig. 8(a) and Fig. 8(b), respectively. ### _Implementation Details_ Our experiments were conducted using the PyTorch deep learning library [33]. The network was optimized using the SGD optimizer [34] with an initial learning rate of 0.001 and a momentum value of 0.9. To further improve the training performance, a polynomial learning rate scheduling approach with a power of 0.9 was employed in the experiments. The training process was carried out for a maximum of 500 epochs, with a batch size of 32, utilizing an NVIDIA GeForce RTX 3090 GPU. The training phase continued until there was no decrease in the loss of the validation set for 10 consecutive epochs. ### _Evaluation Metrics_ To assess the shift equivalence in our experiments, we designed two new metrics, namely mean variance of Intersection-over-Union (mvIoU) and mean variance of defect area (mvda). Both metrics are designed to describe fluctuations in defect segmentation masks. A lower value for mvIoU and mvda indicates a higher level of shift equivalence. Additionally, we utilized the mean Intersection-over-Union (mIoU), precision, recall and f1-score as measures of segmentation performance. Let us consider the test set, whether it is the MDT or BDT, divided into \(N\) subsets: \(M_{j},\ j=1,2,\ldots,N\), each subset consists of images cropped from the same raw image. \(IoU(\hat{\mathbf{Y}}_{i},\mathbf{Y}_{i})\) denotes the IoU between the predicted segmentation \(\hat{\mathbf{Y}}_{i}\) and the ground truth \(\mathbf{Y}_{i}\) corresponding to input image \(\mathbf{X}_{i}\), as described in Eq. 4. The equivalence metrics mvIoU and mvda are defined as follows: **mvIoU**: The mIoU of the set \(M_{j}\) is calculated as: \[mIoU_{j}=\frac{1}{|M_{j}|}\sum_{i=1,\mathbf{X}_{i}\in M_{j}}^{|M_{j}|}IoU(\hat {\mathbf{Y}}_{i},\mathbf{Y}_{i}) \tag{22}\] The metric mvIoU which portrays the equivalence of segmentation masks is formulated as: \[\mathrm{mvIoU}=\frac{1}{N}\sum_{j=1}^{N}\frac{1}{|M_{j}|-1}\sum_{i=1,\mathbf{ X}_{i}\in M_{j}}^{|M_{j}|}(IoU(\hat{\mathbf{Y}}_{i},\mathbf{Y}_{i})-mIoU_{j})^{2} \tag{23}\] **mvda**: Assume that the area of defects in \(\mathbf{X}_{i}\) is \(Area(\mathbf{X}_{i})\), then the average area of the predicted defects in the set \(M_{j}\) can be expressed as: \[mArea_{j}=\frac{1}{|M_{j}|}\sum_{i=1,\mathbf{X}_{i}\in M_{j}}^{|M_{j}|}Area( \mathbf{X}_{i}) \tag{24}\] The metric mvda is calculated as: \[\mathrm{mvda}=\frac{1}{N}\sum_{j=1}^{N}\frac{1}{|M_{j}|-1}\sum_{i=1,\mathbf{ X}_{i}\in M_{j}}^{|M_{j}|}(Area(\mathbf{X}_{i})-mArea_{j})^{2} \tag{25}\] To calculate the area of defects, only the connected defect domain with the largest area in the segmentation masks is considered, and the rest is deemed as overkill in defect segmentation. 
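The two equivalence metrics can be sketched directly from Eqs. 23 and 25; each subset \(M_{j}\) below collects the per-crop IoU values (for mvIoU) or largest-defect areas (for mvda) of one raw image, and the numbers are hypothetical:

```python
import numpy as np

def mv_metric(subsets):
    # Mean over subsets of the unbiased variance (the 1/(|M_j|-1) factor).
    return float(np.mean([np.var(s, ddof=1) for s in subsets]))

ious  = [np.array([0.80, 0.82, 0.79]), np.array([0.90, 0.88, 0.91])]
areas = [np.array([101.0, 99.0, 100.0]), np.array([250.0, 248.0, 252.0])]
print(mv_metric(ious), mv_metric(areas))   # mvIoU, mvda
```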
Fig. 7: The three sampling methods for generating the dataset. (a) random sampling for the training and validation datasets. (b) sliding sampling for the MDT. (c) sliding sampling for the BDT. Fig. 8: Visualization of the two test sets. (a) visualization of the MDT. (b) visualization of the BDT. ### _Comparison with current advanced segmentation networks_ Current advanced segmentation network designs have not explicitly focused on shift equivalence due to their emphasis on segmentation performance. To investigate the shift equivalence of current state-of-the-art segmentation networks, five high-performing networks were implemented for evaluation: 1) UperNet [36]: A multi-task learning framework that performs well on image segmentation by parsing multiple visual concepts such as category, material and texture; 2) PSPNet [35]: A network with a pyramid pooling module designed to achieve excellent image segmentation performance by fusing features from different receptive fields; 3) DeepLabv3+ [15]: An encoder-decoder network that utilizes the Atrous Spatial Pyramid Pooling module to extract multi-scale contextual features using dilated convolutions; 4) Mask2former [37]: A network that employs a transformer decoder with masked attention, aiming to extract local features within the region of the predicted mask. It currently achieves state-of-the-art semantic segmentation performance on various publicly available datasets; 5) SAM-Adapter [38]: A network adapter that builds upon the Segment Anything Model [39] as a foundation model. It incorporates multiple visual prompts to adapt to downstream tasks. In our evaluation, ResNet-101 was used as the backbone for UperNet, PSPNet, and DeepLabv3+, while Swin-Transformer-large was employed as the backbone for Mask2former. All four models utilize the official code from the mmsegmentation library 1 and employ the default pretrained models to achieve optimal segmentation performance. SAM-Adapter was implemented using the official code 2, initialized with SAM-Large pretrained parameters for feature extraction. Footnote 1: mmsegmentation: [https://github.com/open-mmlab/mmsegmentation](https://github.com/open-mmlab/mmsegmentation) Footnote 2: SAM-Adapter: [https://github.com/tianru-chen/SAM-Adapter-PyTorch](https://github.com/tianru-chen/SAM-Adapter-PyTorch) Table II presents the results of the five advanced image segmentation networks and ours on the MDT and BDT. It is worth noting that our network, depicted in Fig. 4, is relatively more lightweight compared to the others, which introduces some unfairness in the comparison. However, our network still exhibits superior equivalence, particularly in terms of the mvda. Its segmentation performance surpasses that of DeepLabv3+, indicating that the proposed method can simultaneously balance shift equivalence and segmentation performance. It is observed that most existing segmentation networks suffer from low shift equivalence, so it is crucial to explore methods for improving equivalence in both academic research and real-world industrial applications. ### _Comparison with other advanced shift equivalence methods_ The proposed CAPS method was compared with six advanced methods aiming to enhance shift equivalence, namely BlurPool [9], APS [10], LPS [11], PBP [23], MWCNN [24], and DUNet [22], on both the MDT and BDT. To ensure experimental fairness, all methods except DUNet utilized the U-Net structure depicted in Fig. 4 as the base model, with only the downsampling and upsampling layers replaced.
Unlike the other methods, DUNet [22] replaces only a portion of the standard convolutions in U-Net with deformable convolutional blocks. Following the recommendation of APS and LPS, circular padding was utilized in all experiments, while keeping all other settings consistent with the original papers and publicly released codes. Footnote 4: APS: [https://github.com/achaman2/truly_shift_invariant_cms](https://github.com/achaman2/truly_shift_invariant_cms) Footnote 5: LPS: [https://tayramond.yeh.com/learnable_polyphase_sampling](https://tayramond.yeh.com/learnable_polyphase_sampling) Table III provides a comparison of the different methods that contribute to the improvement of shift equivalence on the MDT and BDT. The best results are shown in bold and the second-best results are underlined for a clearer comparison. CAPS greatly reduces the mvIoU and mvda on both test sets, revealing better shift equivalence. Concretely, CAPS reduces mvIoU by 23.08% and mvda by 70.28% relative to the second-best method on the MDT, and by 12.50% and 82.32%, respectively, on the BDT. The improvement in equivalence reveals the importance of considering variations in feature boundaries during the downsampling process. The results on the BDT substantially surpass those of the other methods, further indicating that the AW module does not negatively impact segmentation performance and equivalence when defects are located at the image boundaries. Apart from the improvement in shift equivalence, CAPS achieves a new state of the art of 75.15% and 75.93% mIoU, surpassing the previous best solution, PBP, by +0.55% and +0.36% on the MDT and BDT, respectively. Although CAPS does not reach optimality in terms of precision, it exhibits +5.04% and +1.34% recall improvements compared with the second-best method, LPS. The optimal values obtained for the f1-score (0.8839 on the MDT and 0.8529 on the BDT) also demonstrate that CAPS has superior segmentation performance while balancing precision and recall. It can also be observed that DUNet and MWCNN exhibit poor shift equivalence and segmentation performance, even worse than the Baseline. **Comparison of equivalence between the MDT and BDT.** Fig. 9 shows the comparison of mvda and mvIoU between the MDT and BDT. The red bars represent the values of mvda and correspond to the y-axis on the left, while the blue bars represent mvIoU and correspond to the y-axis on the right. It is evident that the baseline method exhibits lower equivalence compared to the other methods, emphasizing the importance of redesigning the downsampling and upsampling structure. The results illustrate that almost all methods achieve higher values of mvda and mvIoU on the BDT than on the MDT, indicating the increased difficulty in maintaining equivalence when defects are located at the boundaries rather than in the middle region. **Qualitative result analysis of different methods.** The qualitative segmentation results of the different methods are shown in Fig. 10, using the more challenging BDT. The first row describes the result for the original image without any shift, while each successive row is shifted to the left by a specified number of pixels. Compared to other methods, CAPS demonstrates nearly complete equivalence in the segmentation results, with the exception of a slight discrepancy observed when shifting by 9 pixels, as depicted in the ninth row. Conversely, other methods, such as BlurPool, exhibit significant fluctuations in the segmentation mask, particularly in terms of the area of defects.
### _Effect of boundary variations on shift equivalence_ To clarify the reasons for the poor shift-equivalence performance on the BDT, the images sampled from a raw image named _SDI_3_ were taken out individually to test their IoU. Fig. 11 shows the results in the form of box plots, where the height of the boxes indicates the extent of IoU fluctuation, and the black dots signify outliers. Notably, the IoU exhibits more outliers when the defects lie at the image boundaries. This suggests that the main reason for the weaker equivalence on the BDT than on the MDT is that the segmentation results at image boundaries are more susceptible to boundary variations due to translations. Therefore, outliers such as those in Fig. 11 are more likely to be generated on the BDT, affecting the equivalence of the network. Fig. 12 illustrates the boundary differences in downsampled features that arise from translation. Specifically, the input to the proposed network consists of two sampled images, one with a black box and the other with a red box, as shown in Fig. 1. The latter is obtained by translating the former down by one pixel. Assume that the first channels in the feature maps of these two images after the first CAPD layer are denoted as \(\mathbf{D}_{1}\) and \(\mathbf{D}_{2}\), with a resolution of \(64\times 64\). The difference along the z-axis of Fig. 12 can then be calculated as \(Shift(\mathbf{D}_{1})-\mathbf{D}_{2}\). The middle pink area indicates that the two features are identical before and after translation, but the boundary-region features exhibit significant differences. These boundary differences introduce uncertainty into the component fusion process of CAPS. So the AW module is designed to disregard the feature boundaries shown in Fig. 12 when generating the weights for component fusion. By doing so, the downsampling results are similar when the input image is shifted, which improves the shift equivalence. Fig. 9: Comparison of mvda and mvIoU between the MDT and BDT. Fig. 10: The qualitative segmentation results of the different methods on the BDT. The area of the defect is labeled at the top of the image using red font. Compared to other methods, the segmentation masks of CAPS have the least fluctuation and possess the best shift equivalence. Fig. 11: The IoU results when the raw image _SDI_3_ was sampled. ### _Ablation analysis_ In this subsection, we conduct an ablation analysis on the collected MDT and BDT to verify the effectiveness of the modules designed in CAPS and to analyze the hyperparameters of the proposed method. Specifically, the efficacy of the AW, CA and LPF in CAPS is assessed. In addition, the effectiveness of data augmentation (DA) is further evaluated on CAPS by applying random transformations to the training data, including random rotation, flip, brightness, contrast and cutout [40]. **Effect of the AW:** Comparing the first and fourth rows in Table IV, it can be seen that the removal of the AW module leads to a relative increase of \(50.0\%\) and \(28.6\%\) in mvIoU on the MDT and BDT, respectively, while mvda increases by 4 to 6 times. This demonstrates the crucial role of the AW module in achieving shift equivalence. Furthermore, a substantial decrease in mIoU is observed when the AW module is removed. This overall reduction in segmentation performance highlights the significance of the AW module in maintaining segmentation performance.
**Effect of the CA:** As depicted in the second row of Table IV, the removal of the CA module leads to a relative decrease in mIoU of \(3.8\%\) and \(4.5\%\) on the MDT and BDT, respectively. Furthermore, the shift equivalence of the network decreases, as indicated by the increases in mvIoU and mvda. This emphasizes the effectiveness of the attention-based component fusion, not only in enhancing the segmentation performance of the network but also in improving shift equivalence. **Effect of the LPF:** As shown in the third and fourth rows of Table IV, the use of the LPF effectively improves segmentation performance, with +5.87% mIoU and +7.16% precision on the MDT and +3.72% mIoU and +4.67% precision on the BDT. However, shift equivalence is compromised, as indicated by the increases in both mvIoU and mvda on the MDT and BDT. This decrease in equivalence can be attributed to the LPF further blurring boundary features, leading to increased variations at the feature boundaries before and after translation. Therefore, it is suggested that when a higher demand for equivalence is prioritized over segmentation performance, the removal of the LPF in CAPS can contribute to an increase in shift equivalence. **Effect of the DA:** As can be seen from Table IV, there is a slight increase in segmentation performance (e.g. from 78.15% to 78.23% in mIoU) but a decrease in shift equivalence (e.g. from 2.4139 to 3.5132 in mvda) on the MDT. Data augmentation enhances the diversity of samples, thus benefiting the segmentation performance, but it cannot specifically improve the network's shift equivalence. Sometimes the distribution between the augmented training data and the original test data is biased, which reduces the equivalence of the network. **Effect of the hyperparameters:** Two main hyperparameters are used in our method: \(\beta\), which controls the proportion of windowing, and \(T\), which is used when the components are fused. We investigated the impact of varying \(\beta\) and \(T\) within a certain range on mIoU and mvda, as depicted in Fig. 13. The blue y-axis on the left represents mIoU, while the red y-axis on the right represents mvda. The solid and dashed lines in both images depict the experimental results on the MDT and BDT, respectively. Specifically, in Fig. 13(a), \(\beta\) indicates the truncation ratio of the AW module at the feature boundaries, which means that higher values make downsampling less affected by image boundaries. It can be found that the network exhibits the best shift equivalence for a \(\beta\) value of 0.25, with the mIoU only slightly lower than the highest value. Therefore, the hyperparameter \(\beta\) was consistently set to 0.25 in all experiments. As shown in Fig. 13(b), the temperature control factor \(T\) determines how well the component features are fused. When \(T\) approaches 0, the _T-softmax_ function approximates the _argmax_ operation, thereby increasing the shift equivalence through a smaller mvda on both the MDT and BDT. Conversely, as \(T\) approaches 1, the _T-softmax_ function becomes equivalent to the standard softmax function, enhancing component fusion, which benefits segmentation performance. However, when \(T\) exceeds \(10^{-3}\), the equivalence of the segmentation network drops drastically, as shown by the solid and dashed red lines in Fig. 13(b). Fig. 12: Boundary differences due to translation of the same component feature map.
To strike a balance between segmentation performance and shift equivalence, we set \(T\) to \(10^{-3}\). ### _Model complexity and runtime analysis_ To check the complexity of our CAPS, the number of parameters and the FLOPs of the proposed method are analysed and compared with other advanced methods, as shown in Table V. The average inference time for a single image is illustrated in Table VI. Although the number of model parameters as well as the FLOPs are larger than for several other methods, the inference time for a single image is within the requirements of real industrial scenarios (\(\leq\)40 ms). Specifically, the average inference time for a single image is 9.24 ms, 13.33 ms and 38.93 ms when the input size is 128x128, 256x256 and 512x512, respectively. Moreover, the proposed CAPS has the best shift equivalence among all the methods, so the moderate increase in inference time compared to other methods is acceptable. ### _Performance in other real-world industrial defect datasets_ To validate the effectiveness of the proposed method, four additional datasets were used to further evaluate its performance. Specifically, they are screw, leather and hazelnut from the MVTec Anomaly Detection Dataset (MVTec AD) [41] and photovoltaic modules from the Maintenance Inspection Dataset (MIAD) [42]. Fig. 14 shows the original image and ground truth for sample images from the different datasets. The number of images in each dataset and the size of the original images are shown in the second and third columns of Table VII. The training, validation and test sets are generated in line with MSD, as described in Section V-A. Additionally, defects located both in the middle and at the boundaries are collectively tested to assess the overall performance of the different methods. Fig. 13: Sensitivity analysis of the hyperparameters. (a) the analysis of \(\beta\). (b) the analysis of \(T\). Segmentation performance and shift equivalence for the four datasets are quantitatively demonstrated in Table VIII. The proposed CAPS achieves the best shift equivalence and remarkable segmentation performance compared with the other methods. For shift equivalence, CAPS has the lowest mvIoU and mvda on all datasets, implying that the proposed method not only has the smallest IoU fluctuations, but also the highest stability of the predicted defect area. Moreover, it has the highest mIoU and f1-scores on 3 out of 4 datasets, showing its powerful defect segmentation ability. It can be observed that DUNet exhibits the best recall and f1-score on photovoltaic modules, and the second-best recall and f1-score on hazelnut. However, it slows down the inference speed, as shown in Table VI, which is not suitable for industrial scenarios. Fig. 14: Visualization of four real industrial datasets. (a) screw (b) photovoltaic modules (c) leather (d) hazelnut ## VI Conclusion This paper presents a novel approach focusing on investigating the shift equivalence of CNNs in industrial defect segmentation. The proposed method designs a pair of down/upsampling layers named CAPS to replace the conventional downsampling and upsampling layers. The downsampling layer CAPD performs an attention-based fusion of the different components, taking the feature boundaries into account. The CAPU then upsamples the downsampled results to specific spatial locations, ensuring the equivalence of the segmentation results.
On the industrial defect segmentation test sets MDT and BDT, the proposed method surpasses other advanced methods such as BlurPool, APS, LPS, PBP, MWCNN and DUNet in terms of shift equivalence and segmentation performance.
2308.16424
Solar horizontal flow evaluation using neural network and numerical simulation with snapshot data
We suggest a method that evaluates the horizontal velocity in the solar photosphere with easily observable values using a combination of a neural network and radiative magnetohydrodynamics simulations. All three components of the thermal convection velocity on the solar surface have important roles in generating waves in the upper atmosphere. However, the velocity perpendicular to the line of sight (LoS) is difficult to observe. To deal with this problem, the local correlation tracking (LCT) method, which employs the difference between two images, has been widely used, but LCT has several disadvantages. We develop a method that evaluates the horizontal velocity from a snapshot of the intensity and the LoS velocity with a neural network. We use data from numerical simulations for training the neural network. While two consecutive intensity images are required for LCT, our network needs just one intensity image at a specific moment as input. From these input arrays, our network outputs a same-size array of the two-component velocity field. With only the intensity data, the network achieves a high correlation coefficient between the simulated and evaluated velocities of 0.83. In addition, the network performance can be improved when we add the LoS velocity as input, achieving a correlation coefficient of 0.90. Our method is also applied to observed data.
Hiroyuki Masaki, Hideyuki Hotta, Yukio Katsukawa, Ryohtaroh T. Ishikawa
2023-08-31T03:28:03Z
http://arxiv.org/abs/2308.16424v2
###### Abstract We suggest a method that evaluates the horizontal velocity in the solar photosphere with easily observable values using a combination of a neural network and radiative magnetohydrodynamics simulations. All three components of the thermal convection velocity on the solar surface have important roles in generating waves in the upper atmosphere. However, the velocity perpendicular to the line of sight (LoS) is difficult to observe. To deal with this problem, the local correlation tracking (LCT) method, which employs the difference between two images, has been widely used, but LCT has several disadvantages. We develop a method that evaluates the horizontal velocity from a snapshot of the intensity and the LoS velocity with a neural network. We use data from numerical simulations for training the neural network. While two consecutive intensity images are required for LCT, our network needs just one intensity image at a specific moment as input. From these input arrays, our network outputs a same-size array of the two-component velocity field. With only the intensity data, the network achieves a high correlation coefficient between the simulated and evaluated velocities of 0.83. In addition, the network performance can be improved when we add the LoS velocity as input, achieving a correlation coefficient of 0.90. Our method is also applied to observed data. Keywords: Sun: granulation -- Sun: photosphere -- Sun: magnetic fields Solar horizontal flow evaluation using neural network and numerical simulation with snapshot data Hiroyuki MASAKI\({}^{1,2}\), Hideyuki HOTTA\({}^{2,1}\), Yukio KATSUKAWA\({}^{3}\) & Ryohtaroh T. ISHIKAWA\({}^{4}\) \({}^{1}\)Department of Physics, Graduate School of Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan \({}^{2}\) Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8601, Japan \({}^{3}\)National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo, 181-8588, Japan \({}^{4}\)National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292, Japan \({}^{*}\)E-mail: [email protected] ## 1 Introduction The solar surface is filled with turbulent thermal convection. The energy is continuously generated by nuclear fusion around the centre of the sun. This input energy is transported outward by radiation in the radiation zone (the inner 70% of the solar radius). In the outer 30% of the solar interior, the energy is transported outward by thermal convection. This layer is called the convection zone (e.g., Nordlund et al. 2009). This thermal convection causes a mottled appearance called granulation at the surface. The lifetime, the spatial scale, and the typical velocity of the granulation are several minutes, 1 Mm, and 3-4 \(\rm{km~{}s^{-1}}\), respectively (e.g., Spruit et al. 1990). Thermal convection in the solar photosphere causes several phenomena in the upper atmosphere and is related to poorly understood solar phenomena, such as coronal heating and magnetic field generation. Thus, it is important to evaluate the thermal convection velocity in the sun. The line-of-sight (LoS) velocity (i.e., the Doppler velocity) can be measured via the Doppler effect. For example, satellites for solar observation, such as Hinode (Kosugi et al., 2007; Tsuneta et al., 2007) and the Solar Dynamics Observatory (SDO: Pesnell et al., 2012; Scherrer et al., 2012), have instruments for observing the Doppler shifts at multiple spectral lines.
While we can evaluate the LoS flow velocity using the Doppler effect relatively easily, the flow velocity perpendicular to the LoS is difficult to measure because the motion does not cause any Doppler shift. To deal with this problem, local correlation tracking (LCT:November and Simon, 1988) is widely used. This method evaluates the horizontal velocity field from the displacements of structures within two successive intensity maps. Because LCT compares close sub-region pairs and finds a large correlation between the two images, this method requires many numerical operations. Moreover, LCT cannot be used with images of arbitrary cadence. In addition, LCT can only evaluate the mean in time and is not good at detecting steady flow in which no apparent motion can be observed. By contrast, many magnetohydrodynamics (MHD) simulations of the solar photosphere have been improved in the past three decades (Stein and Nordlund, 1998; Vogler et al., 2005). The improvements in computer performance and algorithms make it possible to reproduce solar observations in simulations in detail. We can obtain many sets of data such as intensity and three-component velocity in each snapshot using numerical simulations. Numerical simulations have also been used to validate LCT (Verma et al., 2013). Asensio Ramos et al. (2017) developed an algorithm that estimates the horizontal velocity field using a combination of numerical simulation and neural network as a substitute for LCT. This algorithm, DeepVel, estimates horizontal velocity at optical depth \(\tau\)=1, 0.1, and 0.01 from two intensity maps obtained 30 seconds apart. The DeepVel obtains a correlation coefficient of 0.83 at \(\tau\)=1 between the estimated and simulated velocity. In addition, DeepVelU (Tremblay and Attie, 2020), an enhanced version of DeepVel, can use the intensity, the LoS velocity field, and LoS magnetic field as trackers and achieve a correlation coefficient of 0.947. These algorithms also achieved similar values of the correlation coefficient at the other optical depths and can detect vortices more clearly than LCT. The DeepVel and DeepVelU evaluate the horizontal velocity from two images at a specific interval. When the cadence of the new data is different from that used in training, the network needs to be trained again for the new data. Moreover, Ishikawa et al. (2022) improved the correlation coefficient to 0.95 using a network structure focusing on spatial scales. In this study, we perform numerical simulations to obtain modeled physical quantities and develop a method that estimates the horizontal velocity field in the solar surface from the intensity and the LoS velocity in one observation snapshot with the neural network using the calculated data. Because the network is constructed only with convolution, the network evaluation is fast for any intensity image size. A big advantage of this study compared with the previous research (e.g., Asensio Ramos et al., 2017) is that we only require a single snapshot for the evaluation. Thus, we can apply the network to observations with any length of the time cadence. We confirm that the network can be applied to observations and compare our result with that of LCT. ## 2 neural network training ### Numerical simulation The data used in this study are calculated by the Radiation and RSST for Deep Dynamics (R2D2:Hotta et al., 2019; Hotta and Iijima, 2020; Hotta and Toriumi, 2020) MHD simulation code. The R2D2 solves the following equations. 
\[\frac{\partial\rho}{\partial t} =-\nabla\cdot\left(\rho\mathbf{v}\right) \tag{1}\] \[\frac{\partial}{\partial t}(\rho\mathbf{v}) =-\nabla\cdot\left(\rho\mathbf{v}\mathbf{v}\right)-\nabla p+\rho\mathbf{g}+ \frac{1}{4\pi}\left(\nabla\times\mathbf{B}\right)\times\mathbf{B}\] (2) \[\frac{\partial\mathbf{B}}{\partial t} =\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)\] (3) \[\rho T\frac{\partial s}{\partial t} =-\rho T\left(\mathbf{v}\cdot\nabla\right)s+Q\] (4) \[p =p\left(\rho,s\right) \tag{5}\]

Here, \(\rho\), \(\mathbf{v}\), \(p\), \(T\), \(\mathbf{g}\), \(\mathbf{B}\), \(s\), and \(Q\) are the density, velocity, pressure, temperature, gravitational acceleration, magnetic field, entropy, and radiative heating, respectively. The R2D2 code solves the equations with fourth-order spatial derivatives and a four-step Runge-Kutta method for time integration. The pressure \(p\) is obtained from a table in entropy and density prepared with the OPAL equation of state, which considers partial ionisation (Rogers et al., 1996). The radiative heating \(Q\) is calculated by solving the radiative transfer equation using the grey approximation and the short-characteristic method in 24 directions. The simulation box size is 6.144 Mm \(\times\) 6.144 Mm in the horizontal directions and 3.072 Mm in the vertical direction. The number of grid cells is 128 in each direction. Thus, the horizontal and vertical grid spacings are 48 km and 24 km, respectively. Considering the typical lifetime of the granulation (several minutes), we set the output cadence to five minutes. While we could obtain more data with a shorter cadence, the convection structure does not change significantly over shorter time intervals. We choose the output cadence as a compromise between the amount of data and the efficiency of the neural network training. We initiate calculations with different initial vertical magnetic fields, 1, 20, and 30 G, to ensure data generality. We obtain about 30,000 snapshots of data. For the training, we use the data at the \(\tau=1\) surface defined with the Rosseland mean opacity.

### Network structure

In this study, we mainly train two neural networks named Networks I and IV. The radiative intensity is used as an input for both networks, and the vertical velocity at the \(\tau=1\) surface is used as an additional input for Network IV. These networks have almost the same structure. We emphasize that, while we use a huge amount of data for the training, we only require a single snapshot of the data for the practical evaluation. The output is the two components of the horizontal velocity with the same number of grid points as the original input data. The network has an encoder-decoder structure built only with convolutions. This structure is often used in image recognition because its learned filters can be applied to any image size. The encoder-decoder structure is divided into an encoder and a decoder. The encoder extracts features of the input image and compresses the image, and the decoder converts the compressed image to an output image. In this study, the network is based on U-net (Ronneberger et al., 2015), with skip connections that deliver information from the encoder to the decoder. A common encoder-decoder tends to lose positional information of the input; U-net solves this problem using the skip connections. In addition, we deepen the networks by placing residual blocks (ResidualNet) between the encoder and decoder, a structure that repeats skip connections at small intervals. The network structure is shown in Fig. 1.
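To make the described architecture concrete, the following Keras sketch builds a fully convolutional encoder-decoder with U-net skip connections and residual blocks in the bottleneck. It is a minimal sketch only: the numbers of levels, residual blocks, and base filters are illustrative assumptions, not the exact configuration of Networks I and IV.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Two 3x3 convolutions with a short skip connection (residual unit).
    h = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    h = layers.Conv2D(filters, 3, padding="same")(h)
    return layers.Activation("relu")(layers.Add()([x, h]))

def build_network(n_inputs=1, base_filters=16, depth=3, n_res_blocks=4):
    # Input: stacked maps (intensity alone, or intensity + vertical velocity);
    # fully convolutional, so any (preferably even-sized) image works.
    inp = layers.Input(shape=(None, None, n_inputs))
    x, skips, f = inp, [], base_filters
    for _ in range(depth):                      # encoder
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        skips.append(x)
        # Downsample with a strided 2x2 convolution (instead of max pooling)
        # and quadruple the filters so the amount of information is kept.
        f *= 4
        x = layers.Conv2D(f, 2, strides=2, padding="same", activation="relu")(x)
    for _ in range(n_res_blocks):               # bottleneck residual blocks
        x = residual_block(x, f)
    for skip in reversed(skips):                # decoder with U-net skips
        f //= 4
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same",
                                   activation="relu")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    # Final step: two output maps, the two horizontal velocity components.
    out = layers.Conv2D(2, 3, padding="same")(x)
    return tf.keras.Model(inp, out)

model = build_network(n_inputs=2)   # Network IV-style: intensity + v_z
model.compile(optimizer="adam", loss="mse")
```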
The network can be applied to two-dimensional input intensity and vertical-velocity images of any size. We note that some layers halve the width of an image, and errors may increase for inputs with an odd number of pixels. The kernel size of the convolutions is \(3\times 3\), and we use the rectified linear unit (ReLU) as the activation function. When the image size is reduced by downsampling, we quadruple the number of filters so that the amount of information does not decrease. The U-net was developed for biological-image segmentation, where max pooling is generally used to extract features of the image. In this study, however, all the information in the input image is related to the output images (horizontal velocities). Thus, we choose a convolution with a stride of two and a kernel size of \(2\times 2\) for downsampling. We use the same kernel size, stride, and number of filters in the deconvolution process as in the convolution process, i.e., the reversed procedure of the convolution is adopted for the deconvolution. We note that in the final step, the network outputs two velocity images (the two components of the horizontal velocity). In our network, we obtain a set of velocity fields in a square area 4 Mm on a side.

Figure 1: The network architecture is shown. The top panel shows the whole network. The bottom panel shows the residual network in the orange box in the top panel. The black squares show the shape of the data, and the numbers on the left and the top show the size and number of the images, respectively. The processes shown by the coloured arrows change the shape of the data. The input and output of the residual blocks are the same.

### Training setting

We use the intensity and vertical velocity maps of \(128\times 128\) pixels obtained from the numerical simulations as the input of the network. The intensity is normalised by the temporal and spatial average intensity of the sun. For the outputs, we use the two-component velocity field of \(128\times 128\) pixels. The unit for the velocities is km s\({}^{-1}\). We prepare about 30,000 datasets for training. We also rotate the images by 90, 180, and 270 degrees to increase the amount of data. We note that DeepVel adopted the same approach, i.e., rotated images, for increasing the data amount (Asensio Ramos et al., 2017). Finally, we use about 120,000 datasets for training. In addition to these data, we prepare about 1,000 datasets for network validation. Mini-batches comprising 32 datasets are randomly selected from the training data. We note that a mini-batch is a set of datasets, with which the network performance tends to increase (Wu and He, 2018). The network is trained for 128 epochs, and the model with the highest correlation coefficient between the network evaluation and the validation data is adopted. The network is optimised to minimise the mean square error using the Adam optimiser (Kingma and Ba, 2014). We use TensorFlow and its wrapper, Keras, for implementing the network and an Nvidia GeForce RTX 2080 Ti GPU for training.

## 3 Result

### Validation of image

Fig. 2 shows the horizontal velocity estimated by Networks I and IV. Panels a, b, c, d, e, and f show the intensity, the simulated horizontal velocity, and the horizontal velocity estimated by Network I, and the vertical magnetic field, the vertical velocity, and the horizontal velocity estimated by Network IV, respectively. The white arrows show the horizontal velocity, and the background colour map shows the horizontal divergence of the flow. We use the magnetic field in §3.3.
The results show that the structure estimated by the network roughly reproduces the original velocity field. The network can also detect the vortex at \((y,z)=(2\ \mathrm{Mm},5\ \mathrm{Mm})\). We calculate the correlation coefficient defined as: \[\frac{\sum(v_{\mathrm{sim}}-\bar{v}_{\mathrm{sim}})(v_{\mathrm{eva}}-\bar{v}_{\mathrm{eva}})}{\sqrt{\sum(v_{\mathrm{sim}}-\bar{v}_{\mathrm{sim}})^{2}}\sqrt{\sum(v_{\mathrm{eva}}-\bar{v}_{\mathrm{eva}})^{2}}} \tag{6}\] where \(v_{\mathrm{sim}}\) and \(v_{\mathrm{eva}}\) are the simulated and estimated horizontal velocities for the validation dataset, respectively, \(\sum\) is the sum over all pixels and all validation data, and the overbar is the mean of the two-dimensional data. The correlation coefficients for Networks I and IV are 0.83 and 0.90, respectively. The mean absolute error values \[\overline{|\vec{v}_{\mathrm{sim}}-\vec{v}_{\mathrm{eva}}|} \tag{7}\] are 0.92 \(\mathrm{km\,s^{-1}}\) and 0.72 \(\mathrm{km\,s^{-1}}\), the R2score values \[1-\frac{\sum(v_{\mathrm{sim}}-v_{\mathrm{eva}})^{2}}{\sum(v_{\mathrm{sim}}-\bar{v}_{\mathrm{sim}})^{2}} \tag{8}\] are 0.71 and 0.82, and the mean angular differences are 28.6\({}^{\circ}\) and 22.6\({}^{\circ}\) for Networks I and IV, respectively. The mean angular difference \(\theta\) describes the angle between the flow vectors of the evaluated and simulated flows. \[\theta=\arccos\left(\frac{\mathbf{v}_{\mathrm{eva}}\cdot\mathbf{v}_{\mathrm{sim}}}{|\mathbf{v}_{\mathrm{eva}}||\mathbf{v}_{\mathrm{sim}}|}\right) \tag{9}\] Fig. 3 shows the difference in the absolute value between the simulated and evaluated values. Panels a and b show the results from Networks I and IV, respectively. One may notice that the difference increases around the granular boundaries. This is because the structure of the granule boundary is complex, and the sign of the velocity changes in this area. This small-scale turbulence does not obey the overall coherent pattern of thermal convection, i.e., the broad areas of diverging flows and narrow lanes of converging flows. Fig. 4 shows two-dimensional histograms of the simulated and estimated velocities. Panels a and b show the results from Networks I and IV, respectively. While most data points are located on the line \(y=x\), the network tends to underestimate high simulated velocities, and the fitted slope is smaller than unity. This result indicates that our evaluation tends to show lower velocities than the simulation. In addition to Networks I (intensity only) and IV (intensity and vertical velocity), we train networks with further input combinations: (IB) intensity and vertical magnetic field, and (IVB) intensity, vertical velocity, and magnetic field. Because the intensity is the easiest to observe, we do not consider networks without intensity. Fig. 2 shows examples of the physical quantities used as input in panels a, d, and e. The distribution of the magnetic field has a large kurtosis, in which most of the values are close to zero. This type of distribution is difficult to use for the evaluation. Thus, we process the magnetic field as: \[B_{x}^{\prime}=\frac{B_{x}}{|B_{x}|}\log\left(1+\frac{|B_{x}|}{B_{\rm cr}}\right), \tag{10}\] where \(B_{x}\) and \(B_{x}^{\prime}\) are the original magnetic field and the processed quantity, respectively. \(B_{\rm cr}\) is a free parameter, and we choose \(B_{\rm cr}=1\,\rm G\). Table 2 and Fig. 6 show the results of the validation functions of the different networks and their learning curves, respectively. The network performance is improved by adding the vertical velocity. This is because the velocity field is less diffuse than the intensity and has more detailed information.
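A minimal NumPy sketch of the preprocessing in Eq. (10) and the validation scores of Eqs. (6)-(9); the per-component treatment of the mean absolute error and the function names are our own assumptions, not code from the study.

```python
import numpy as np

B_CR = 1.0  # free parameter B_cr in gauss

def preprocess_bfield(b):
    """Signed-log compression of the vertical magnetic field, Eq. (10)."""
    return np.sign(b) * np.log1p(np.abs(b) / B_CR)

def validation_scores(v_sim, v_eva):
    """Correlation coefficient (Eq. 6), mean absolute error (Eq. 7, here
    computed per velocity component), and R2score (Eq. 8)."""
    vs, ve = v_sim.ravel(), v_eva.ravel()
    corr = np.corrcoef(vs, ve)[0, 1]
    mae = np.mean(np.abs(vs - ve))
    r2 = 1.0 - np.sum((vs - ve) ** 2) / np.sum((vs - vs.mean()) ** 2)
    return corr, mae, r2

def mean_angle(vy_sim, vz_sim, vy_eva, vz_eva):
    """Mean angular difference between flow vectors, Eq. (9), in degrees."""
    dot = vy_sim * vy_eva + vz_sim * vz_eva
    norm = np.hypot(vy_sim, vz_sim) * np.hypot(vy_eva, vz_eva)
    cos = np.clip(dot / norm, -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```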
By contrast, the role of the magnetic field is much less important. In particular, the difference between Networks IV and IVB is insignificant. The effect of the vertical magnetic field on the velocity field is indirect.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline Number of datasets & 3,000 & 30,000 & 120,000 \\ \hline Correlation coefficient & 0.70 & 0.80 & 0.83 \\ R2score & 0.42 & 0.63 & 0.69 \\ Mean square error [(\(\rm km\,s^{-1}\))\({}^{2}\)] & 1.76 & 1.41 & 1.29 \\ Mean absolute error [\(\rm km\,s^{-1}\)] & 1.36 & 1.06 & 0.96 \\ Mean angle [degree] & 45.0\({}^{\circ}\) & 33.7\({}^{\circ}\) & 29.7\({}^{\circ}\) \\ \hline \end{tabular} \({}^{*}\) The results of validation functions for Network I. \end{table} Table 1: The results with different numbers of datasets

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Network & I & IV & IB & IVB \\ Input data & \(I\) & \(I,v_{x}\) & \(I,B_{x}\) & \(I,v_{x},B_{x}\) \\ \hline Correlation coefficient & 0.84 & 0.90 & 0.86 & 0.90 \\ R2score & 0.71 & 0.81 & 0.74 & 0.81 \\ Mean square error [(km s\({}^{-1}\))\({}^{2}\)] & 1.25 & 1.01 & 1.18 & 0.98 \\ Mean absolute error [km s\({}^{-1}\)] & 0.92 & 0.74 & 0.88 & 0.72 \\ Mean angle [degree] & 28.6\({}^{\circ}\) & 22.6\({}^{\circ}\) & 26.9\({}^{\circ}\) & 21.9\({}^{\circ}\) \\ \hline \end{tabular} \({}^{*}\) The results of validation functions with different inputs. \(x\) indicates the vertical direction. \end{table} Table 2: The results with different inputs

Figure 2: Examples of the evaluation: (a) intensity, (b) simulated horizontal velocity, and (c) estimated horizontal velocity by Network I, and (d) vertical magnetic field, (e) vertical velocity, and (f) estimated horizontal velocity by Network IV are shown. The white arrows show the velocity, and an example of a length of 10 \(\rm km\,s^{-1}\) is shown by the blue arrow in the lower right corner. Note that the input value for the magnetic field is preprocessed and thus differs from (d).

Figure 3: (a) and (b) The differences between the simulated and estimated velocities from Networks I (intensity) and IV (intensity and vertical velocity), respectively. The background grey map shows the input intensity.

Figure 4: Two-dimensional histogram between the simulated and estimated (evaluated) velocities is shown. (a) The result from Network I and (b) the result from Network IV. The colour map shows the number of pixels that have the corresponding value. The light blue line shows where the two values match exactly, \(v_{\rm sim}=v_{\rm eva}\). The pink line shows the result of the linear fitting of the data, where \(v_{\rm eva}=0.710v_{\rm sim}-0.076\) in (a) and \(v_{\rm eva}=0.813v_{\rm sim}-0.098\) in (b). The black line shows the average of the estimated velocities for each simulated velocity.

Figure 5: The learning curves with different numbers of datasets are shown. Panels (a) and (b) show the mean absolute error and the correlation coefficient, respectively. The solid line shows the evaluation value for the test data, and the dashed line shows the evaluation value for the training data. The horizontal axis shows the epochs, not the iterations of the network. The gap between the training and test data curves reflects the network's overtraining. Note that the trend does not change up to 128 epochs, so this figure only shows 30 epochs.

Figure 6: The learning curves of the networks with different inputs are shown. (a) The mean absolute error and (b) the correlation coefficient.

### Comparison with previous studies

We compare Network I with DeepVel (Asensio Ramos et al., 2017) and DeepVelU (Tremblay and Attie, 2020),
which obtain the solar velocity field using a method similar to that used in this study. DeepVel and DeepVelU estimate the horizontal velocity from two consecutive intensity images, the LoS velocity, or the LoS magnetic field. This means that these methods are an alternative to LCT, in contrast to the network in this study, which uses one snapshot. The training data for DeepVel are 30,000 images of \(50\times 50\) pixels, and those for DeepVelU are 2,000 images of \(48\times 48\) pixels. Our data are 120,000 images of \(128\times 128\) pixels. The amount of data used in this study is 20 times larger than that used for DeepVel. The correlation coefficient between the velocity estimated by DeepVel and the simulated one is 0.83. Because DeepVel requires two input images, the velocity can be estimated from the temporal difference. By contrast, our network can estimate it from one snapshot and achieves a performance similar to DeepVel without using information on the temporal evolution.

## 4 Application to Observation

In this section, we apply our network to observed data. The data were taken through a green continuum filter centred at 555 nm by the Hinode Solar Optical Telescope (SOT). We here use a snapshot taken at 11:46:34 on Dec. 29, 2007, with an exposure time of 0.077 s. Fig. 7 shows the overall view of the observed data. We perform linear interpolation on the 39 km \(\times\) 39 km observed data to align the resolution with the 48 km \(\times\) 48 km of the training data. The image with the aligned resolution is shown in Fig. 7b, and the area of panel b is indicated by the red square in panel a. The data are normalized by the average radiation intensity of the entire observation image. The original observation image (a) has a pixel scale of 0.054". We crop out an FOV of about 161 pixels. To apply the network to the observations, we apply Hinode's point spread function (PSF) to the intensity of the training images described above. We apply the PSF for the 555 nm green continuum described in Mathew et al. (2009). The network trained with the data with the PSF is named Network IP. Applying the PSF alone, however, is not enough to prevent the network from overtraining. Fig. 8 shows the wavenumber distribution of the observed and simulated intensity. While the wavenumber distributions at large scales are consistent, the observed intensity has small-scale structures that are not present in the simulation data. This difference is assumed to act as noise when obtaining the horizontal velocities, so we add random noise to the data so that the network ignores the small-scale structure of the observed data. We add the random noise in Fourier space. The mean of the noise is zero, and the standard deviation normalized by the maximum intensity is \(1.6\times 10^{-3}\). By adding the noise, we remove the small-scale information in which the observed and simulated data do not match, and the network learns to ignore it. We name the network trained with the data with the PSF and noise Network IPN. Because we intend to evaluate the velocity before applying the PSF, we do not apply the PSF to the horizontal velocities, which are the output. All other settings are unchanged from Networks I and IV. Fig. 9 shows the results of our application to the observed data. Panel a shows the observed intensity. Panels b and c show the network evaluations of the horizontal velocity by Networks IPN and IP, respectively.
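The noise-injection step might be sketched as follows; the exact normalization of the noise amplitude used by the authors is not specified, so the scaling below is an assumption on our part.

```python
import numpy as np

def add_fourier_noise(image, sigma_rel=1.6e-3, rng=None):
    """Add zero-mean random noise in Fourier space to wash out small-scale
    structure that differs between simulation and observation (Network IPN).
    sigma_rel is the target noise standard deviation relative to the maximum
    intensity; the spectral scaling is an assumption, not the paper's recipe."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.fft2(image)
    # For numpy's unnormalized FFT, real-space noise of std sigma corresponds
    # to Fourier coefficients with std roughly sigma * sqrt(N).
    sigma = sigma_rel * np.abs(image).max() * np.sqrt(image.size)
    noise = (rng.normal(0, sigma / np.sqrt(2), spec.shape)
             + 1j * rng.normal(0, sigma / np.sqrt(2), spec.shape))
    # Taking the real part of the inverse transform returns a real image.
    return np.fft.ifft2(spec + noise).real
```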
Network IP seems to fail the evaluation (panel c) because the evaluated velocity structure is not a typical granular network structure, while Network IPN shows a reasonable evaluation (panel b). This result shows that adding noise is an important factor in applying our network to real observed data.

Figure 7: A sample of the Hinode observation images used in this study. (a) The entire area of the observed data. (b) The area enclosed by the red square in (a), with the resolution adjusted using linear interpolation.

Figure 8: The radiation intensity spectra are shown. The blue, orange, and green lines show the observation, simulation, and training data, respectively. The horizontal axis shows the wavenumber, and the vertical axis shows the intensity normalized by the respective maximum value.

We also apply Networks IPN and IP to the simulation data. Figs 10b and 10c show the results with Networks IPN and IP, respectively. For the simulation data, the difference between Networks IPN and IP is not significant. In the early stage of training, the networks fit the large scales and gradually shift to smaller scales. It becomes difficult for Network IP to estimate the velocity from the observed data due to the noise. In Network IPN, the small-scale structure vanishes due to the random noise, and thus it can estimate the observations. The performance of Networks IP and IPN is lower than that of Network I because we apply the PSF and the noise to the training data. The correlation coefficient between the network-estimated velocity and the simulation is 0.64, and the R2score is 0.42. In addition, the two-dimensional histogram is shown in Fig. 11. For this network, applying the PSF alone does not change the final performance much, although it changes the update speed of the network. This decrease in estimation performance is due to the resolution of the observation. Currently, we cannot observe the small-scale flows achieved in the numerical simulation. If the resolution is improved with the development of observational technology, the correlation coefficient between the network estimation and the simulation will improve.

## 5 Comparison with LCT

We compare the velocity evaluated by the Fourier LCT (FLCT) with the velocity estimated using the observation-ready network from the previous section. We use the FLCT code by Fisher and Welsch (2008) for LCT. The FWHM of the Gaussian of the FLCT is 1200 km. We can obtain horizontal velocity maps from two consecutive images using LCT. Note that LCT can be performed at an arbitrary interval; we choose 30 seconds as the interval in this study. A total of 19 horizontal maps over 10 minutes obtained with LCT are averaged and compared with the result of the network. We perform the same 19-step time average for the network evaluation. We carry out a parameter survey to optimize the free parameters in applying LCT. We test LCT using parameters that cover the typical temporal and spatial scales of thermal convection, using the simulated velocity as reference. These parameters include averaging times of 5 min, 10 min, 20 min, 30 min, and a snapshot; FWHM values of 300 km, 600 km, 1200 km, and 2500 km; and intervals of 30 s, 60 s, and 120 s. We tested a total of 75 combinations of these parameters. We present a figure on this comparison in Appendix 1. We show the results of the best parameters, i.e., those that obtained the highest correlation coefficient between the horizontal velocity field from the simulation and the one estimated by LCT.
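For readers unfamiliar with LCT, the following toy sketch illustrates the principle, not the FLCT code of Fisher and Welsch (2008): Gaussian-windowed patches of two consecutive images are cross-correlated pixel by pixel, and the best-matching integer displacement gives the apparent velocity. The window shape, search range, and brute-force correlation are deliberate simplifications.

```python
import numpy as np

def toy_lct(img0, img1, dt, pixel_km=48.0, fwhm_km=1200.0, max_shift=4):
    """Toy LCT: per-pixel Gaussian-windowed cross-correlation between two
    consecutive intensity images taken dt seconds apart."""
    sigma = fwhm_km / pixel_km / 2.355           # FWHM -> sigma, in pixels
    half = int(3 * sigma)
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    window = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    ny, nx = img0.shape
    vy, vx = np.zeros((ny, nx)), np.zeros((ny, nx))
    for j in range(half, ny - half):
        for i in range(half, nx - half):
            p0 = img0[j - half:j + half + 1, i - half:i + half + 1] * window
            p1 = img1[j - half:j + half + 1, i - half:i + half + 1] * window
            p0, p1 = p0 - p0.mean(), p1 - p1.mean()
            best, shift = -np.inf, (0, 0)
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    # Shift the later patch back by the candidate displacement.
                    c = np.sum(p0 * np.roll(np.roll(p1, -dy, 0), -dx, 1))
                    if c > best:
                        best, shift = c, (dy, dx)
            vy[j, i] = shift[0] * pixel_km / dt  # km/s for dt in seconds
            vx[j, i] = shift[1] * pixel_km / dt
    return vx, vy
```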
First, we apply LCT and the network to the simulation data and compare their performance. Fig. 12 shows the velocity fields evaluated by LCT, by our network, and in the simulation. The results of the simulation and the network are similar. It appears that some structure is detected by LCT. However, the LCT results do not match the simulation in scale or structure. The correlation coefficient between the simulation and LCT velocities is 0.19, and that between the LCT velocities and the network is 0.13. Given our thorough characterization of LCT performance for a reasonable range of input parameters, we conclude that LCT is incapable of accurately recovering granular-scale flows at the spatial resolution studied here. The correlation coefficient between the network and the simulation velocity is 0.84. The correlation coefficient between the neural network and the simulation increases because the small-scale complex structure disappears due to the temporal average. Next, we compare the results with the observed data. We use the data from 2007-12-29T11:56:34 to 2007-12-29T12:06:05. The results are shown in Fig. 13. In this case, the correlation coefficient between LCT and the neural network is 0.06, i.e., the two are hardly consistent. With LCT, it is difficult to extract displacements from events with small temporal and spatial scales. We cannot capture the flow converging into the intergranular lanes. To capture this flow effectively, resolving this region with a higher pixel count than the LCT window is necessary. Observations at higher resolution are therefore required. Considering that the pixels in the simulated data (48 km in horizontal extent) approach the highest-resolution observations presently available, higher-resolution instrumentation would have to be developed to employ optical-flow methods like LCT to study sub-granular flows. In addition, because LCT tracks the apparent flow rather than the actual velocity, LCT yields smaller velocities than the simulation and network velocities. Malherbe et al. (2018) reported that LCT has difficulty in evaluating the flow on the granulation scale (see also Tremblay et al., 2018). Tremblay et al. (2018) also show that it is difficult for LCT to evaluate the velocity at the edges of the images because occasionally a feature leaves the data domain in the next step. Our neural network does not have this difficulty because only a snapshot is required for the evaluation.

## 6 Summary and Conclusion

We develop a method that estimates the solar horizontal velocity field, which cannot be directly measured, from the radiative intensity and other variables that are easier to observe, using neural network technology. The network is constructed only with convolutions and can be applied to any image size. Using a GPU, the network can estimate the velocity quickly. Although we cannot obtain the exact horizontal velocity corresponding to the intensity in real observations, we obtain the training and validation data from numerical calculations with the R2D2 radiation MHD code. When we include the vertical velocity as an additional input, the network performance improves. By contrast, the vertical magnetic field does not improve the evaluation performance much. The correlation coefficient between the simulated velocities and those estimated by the network is 0.83. The overall structure and the velocity inside the granules are consistent. However, it is not easy to estimate the velocity at the granular boundaries. High velocities (\(>5\) km s\({}^{-1}\)) tend to be underestimated.
There are still possible ways to improve the evaluation skill of our network. For example, the network can be divided into two networks: when one network evaluates the absolute value and the other evaluates the angle, the combination of the two networks may be able to evaluate the complicated turbulent structure well. Compared with DeepVel, which uses a method similar to ours, we achieve almost the same performance with less input data per evaluation. Our network estimates the horizontal velocity from one snapshot of the image. Thus, we can obtain the velocity from observations with any length of time cadence. If we estimate the horizontal velocity field for a real observation from the intensity using a network trained with simulated data, the network overtrains and does not provide a reasonable result. Therefore, we introduce the PSF of Hinode and reduce the small-scale structure by adding white noise to the training data for the network, which makes the estimated velocities more accurate. Because the networks for the observation (Networks IP and IPN) include the influence of the PSF and the noise, the correlation coefficient is decreased compared with Network I. This implicitly indicates that if the influence of the PSF and the noise is reduced in actual observations in the future, our evaluation ability will increase. We apply LCT to the simulation data to determine the optimal parameters. We cannot find parameter combinations of similar accuracy to the network estimation. Because we try to estimate the horizontal velocity on the granulation scale, LCT tends to fail the evaluation. Even for these scales, our network succeeds in evaluating the horizontal flow. By contrast, evaluating scales smaller than the granulation is difficult because of the loss of information due to the addition of noise. Even training without noise is difficult for small-scale estimation, and Ishikawa et al. (2022) suggest that a significant update is needed, such as changing the training method and increasing the amount of input information. For future studies, we consider estimating other quantities that are difficult to observe, such as the horizontal magnetic field or quantities inside the sun, by using the intensity and other observable quantities.

Figure 9: (a) The input intensity observed by Hinode, (b) the estimation by the network trained with noise, and (c) the estimation by the network trained without noise. The result in panel c does not adapt to the Hinode image due to overtraining.

Figure 10: (a) The input intensity in the simulation, (b) the estimation by the network trained with noise, and (c) the estimation by the network trained without noise.

Figure 11: A two-dimensional histogram of the velocity field estimated by Network IPN and the simulated velocity field is shown. The colour map shows the number of pixels that have the corresponding value. The light blue line shows where the two values match exactly, \(v_{\rm sim}=v_{\rm eva}\). The pink line shows the result of the linear fitting of the data, where \(v_{\rm eva}=0.403v_{\rm sim}-0.067\). The black line shows the average of the estimated velocities for each simulated velocity.

Figure 12: The results of LCT and the network for the intensity of the simulation are shown. The background in the left column shows the radiation intensity, and the red arrows show the averaged velocity field. The figures on the right show the divergence of the averaged velocity. (a) and (b) The velocity estimated with LCT, (c) and (d) the velocity estimated with our network, and (e) and (f) the velocity in the simulation. Note that the size of the arrow legend is different for each image.

## Acknowledgements

The results were obtained using Cray XC50 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. This work was supported by JST,
the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2107. This work was also supported by MEXT/JSPS KAKENHI Grant Numbers JP20K14510 and JP23H01210 (PI: H. Hotta), JP21H04492 (PI: K. Kusano), JP21H01124 (PI: T. Yokoyama), and JP21H04497 (PI: H. Miyahara), and by MEXT as Program for Promoting Research on the Supercomputer Fugaku (JPMXP1020230504 (PI: H. Hotta) and J20HJ00016 (PI: J. Makino)). YK is supported by JSPS KAKENHI Grant Numbers JP18H05234 and JP23H01220 (PI: Y. Katsukawa). R.T.I. is supported by JSPS KAKENHI Grant Number 23KJ0299 (PI: R.T. Ishikawa). We express our heartfelt gratitude to the referee for their careful review and valuable comments, which greatly enhanced the quality of this manuscript.

## Data availability

The simulation data and the source code of the neural networks underlying this article will be shared on reasonable request to the corresponding author.
2309.08533
Automated dermatoscopic pattern discovery by clustering neural network output for human-computer interaction
Background: As available medical image datasets increase in size, it becomes infeasible for clinicians to review content manually for knowledge extraction. The objective of this study was to create an automated clustering resulting in human-interpretable pattern discovery. Methods: Images from the public HAM10000 dataset, including 7 common pigmented skin lesion diagnoses, were tiled into 29420 tiles and clustered via k-means using neural network-extracted image features. The final number of clusters per diagnosis was chosen by either the elbow method or a compactness metric balancing intra-lesion variance and cluster numbers. The amount of resulting non-informative clusters, defined as those containing less than six image tiles, was compared between the two methods. Results: Applying k-means, the optimal elbow cutoff resulted in a mean of 24.7 (95%-CI: 16.4-33) clusters for every included diagnosis, including 14.9% (95% CI: 0.8-29.0) non-informative clusters. The optimal cutoff, as estimated by the compactness metric, resulted in significantly fewer clusters (13.4; 95%-CI 11.8-15.1; p=0.03) and less non-informative ones (7.5%; 95% CI: 0-19.5; p=0.017). The majority of clusters (93.6%) from the compactness metric could be manually mapped to previously described dermatoscopic diagnostic patterns. Conclusions: Automatically constraining unsupervised clustering can produce an automated extraction of diagnostically relevant and human-interpretable clusters of visual patterns from a large image dataset.
Lidia Talavera-Martinez, Philipp Tschandl
2023-09-15T16:50:47Z
http://arxiv.org/abs/2309.08533v1
# Automated dermatoscopic pattern discovery by clustering neural network output for human-computer interaction ###### Abstract Background: As available medical image datasets increase in size, it becomes infeasible for clinicians to review content manually for knowledge extraction. The objective of this study was to create an automated clustering resulting in human-interpretable pattern discovery. Methods: Images from the public HAM10000 dataset, including 7 common pigmented skin lesion diagnoses, were tiled into 29420 tiles and clustered via k-means using neural network-extracted image features. The final number of clusters per diagnosis was chosen by either the elbow method or a compactness metric balancing intra-lesion variance and cluster numbers. The amount of resulting non-informative clusters, defined as those containing less than six image tiles, was compared between the two methods. Results: Applying k-means, the optimal elbow cutoff resulted in a mean of 24.7 (95%-CI: 16.4-33) clusters for every included diagnosis, including 14.9% (95% CI: 0.8-29.0) non-informative clusters. The optimal cutoff, as estimated by the compactness metric, resulted in significantly fewer clusters (13.4; 95%-CI 11.8-15.1; p=0.03) and less non-informative ones (7.5%; 95% CI: 0-19.5; p=0.017). The majority of clusters (93.6%) from the compactness metric could be manually mapped to previously described dermatoscopic diagnostic patterns. Conclusions: Automatically constraining unsupervised clustering can produce an automated extraction of diagnostically relevant and human-interpretable clusters of visual patterns from a large image dataset. Pre-peer review version 1 Footnote 1: This is the pre-peer reviewed version of the following article: _Talavera-Martinez L, Tschandl P. Automated dermatoscopic pattern discovery by clustering neural network output for human-computer interaction. J Eur Acad Dermatol Venereol. 2023_, which has been published in final form at [https://doi.org/10.1111/jdv.19234](https://doi.org/10.1111/jdv.19234). This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions. ## I Introduction In dermatology, as in other visual medical fields, the recognition and description of specific presentations of disease are important for a precise diagnosis and for the formulation of differential diagnoses. Beyond clinical dermatology, a plethora of pattern descriptions of disease findings in dermatoscopy have been published in recent decades [1], especially for the diagnosis of skin tumors and inflammatory diseases [2], which are used for teaching and diagnosis in daily practice. These descriptions were mostly based on mono- or multicentric case collections that were reviewed manually by a few authors and evaluated for recurring patterns [3, 4, 5]. As clinical image data collections are increasing in size [6, 7], entirely manual review for discovering diagnostic patterns is not a realistic scenario anymore. In addition to an insurmountable workload, interrater disagreement may be a hindering factor in identifying and describing objective, valid and teachable pattern groups [1]. Increasingly, neural networks - especially convolutional neural networks (CNN) - are described as an aid for diagnostic classification of medical images.
In the field of dermatology, CNNs were described to have at least equal accuracy to dermatologists in experimental settings for classifying clinical and dermatoscopic images, and were shown to improve physicians' diagnostic accuracy when applied in diverse interactive settings [8, 9]. Such algorithms can not only classify images but also label anatomic areas [10], rate psoriasis [11], or retrieve images similar to a case, by implicitly analyzing patterns and pattern combinations after training to categorize images into distinct classes [12]. Therefore, we hypothesize that convolutional neural networks could be helpful in the extraction of diagnostically relevant patterns in medical image collections. Recent reports have shown the utility of unsupervised techniques when labeled data are limited [13]. The goal of this study was to create an automated workflow to extract diagnostically relevant pattern candidates for review by doctors and researchers, with dermatoscopic images of skin tumors as an example (Fig. 1). Eventually, starting from a large dataset with thousands of images, this should enable human-computer interaction and return an interpretable number of visually distinct patterns, with as few redundant or uninformative patterns as possible. The approach we propose is a pipeline based on machine learning that consists of extracting deep features from CNNs and applying an unsupervised clustering algorithm to these features. The clustering is constrained by a custom compactness metric that, in contrast to the well-known elbow method, should better balance retrieval of all relevant patterns while at the same time keeping redundant information low. ## II Materials and Methods ### _Data and processing_ This non-interventional retrospective study was conducted on public image data only, specifically the HAM10000 dataset [14]. This dataset is composed of 10015 dermatoscopic images of pigmented lesions with annotations on both the diagnosis and segmentation of the lesion area [8]. To focus on patterns rather than full images, analyses were performed on a tile level. We extracted square subregions (tiles) of an image by a sliding window with a size of 128x128 pixels and 25% overlap, discarding tiles with \(<60\%\) lesion area. In sum, 29420 tiles were extracted; Suppl. Fig. S1 shows two example cases with resulting extracted tiles, and Suppl. Table S1 the number of tiles per diagnosis. To ensure approximately equal representation of diagnoses, included nevi were limited to a random subsample of 1100 cases, and resulting tiles were limited to a maximum random subsample of 850 tiles. To reduce the influence of changes in illumination color, we applied color constancy correction [15] to all tiles (Suppl. Fig. S2). ### _Neural network and feature extraction_ A VGG16 [16] architecture, pretrained on ImageNet data, was fine-tuned to classify tiles into one of seven diagnoses included in the HAM10000 dataset. Training was performed with all 29420 tiles, using 70% for training and the remaining 30% for validation during a single training run, ensuring no overlap of tiles of the same image between sets. This training run was only performed as a means to parameterize the model; since knowledge discovery rather than classification accuracy was the goal, the complete training dataset was also used for the cluster analyses downstream. Data augmentation steps were flips in both horizontal and vertical directions, random 90\({}^{\circ}\) rotations, and zooms.
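The tiling step described above can be sketched in a few lines; `extract_tiles` is our illustrative helper under the stated constraints (128x128 window, 25% overlap, at least 60% lesion area), not code from the study.

```python
import numpy as np

def extract_tiles(image, lesion_mask, tile=128, overlap=0.25, min_lesion=0.6):
    """Slide a tile-sized window with the given overlap over the image and
    keep tiles whose area contains at least `min_lesion` lesion pixels."""
    step = int(tile * (1 - overlap))
    tiles = []
    h, w = lesion_mask.shape
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            if lesion_mask[y:y + tile, x:x + tile].mean() >= min_lesion:
                tiles.append(image[y:y + tile, x:x + tile])
    return tiles
```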
Training was performed with a batch size of 32, using the Adam [17] optimizer, a weighted categorical cross-entropy loss, an initial learning rate of 1e-5, and an early stopping policy based on validation loss. For extracting features from image tiles with the fine-tuned model, the numerical state of the layer before the classification layer was obtained, resulting in a 1280-length vector. Neural network experiments were conducted using tensorflow [18] and python 3.8. Experiments were repeated with EfficientNet-B0 [19] and a convolutional autoencoder, with results for those two models shown in the supplementary data. For the autoencoder, we trained the model from scratch with a mean-squared-error loss and extracted the features from the flattened embedding space. ### _Clustering_ The resulting extracted features are normalized and used as input to an unsupervised clustering algorithm, specifically k-means [20] with cosine distance as a distance metric. This calculation was performed using scikit-learn v1.1.2 [21] and scipy v1.9.0 [22]. To automatically obtain the optimal number of clusters without further intervention from a user, either the elbow method (optimal value as calculated by yellowbrick v1.5 [23]) or a custom compactness metric (W) was applied. The latter method is based on the assumption that each lesion, and thus also the tiles that comprise it, on average shows only one or two dermatoscopic patterns. Thus, the proposed metric measures both the similarity of the clusters to which tiles of the same image have been assigned, and the number of different clusters the tiles were assigned to with respect to the total number of clusters. The metric was implemented as follows: \[I=\{img_{1},...,img_{M}\}\] \[T_{q}=\{t_{1},...,t_{L}\}\] \[C_{q}=\{c_{1},...,c_{K}\}\] \[W=\frac{1}{M}\times\sum_{q=1}^{M}\left(\frac{K}{\min(n_{clst},L)}\times\sum_{j=1}^{L}\mathrm{cosDst}\left(\frac{1}{K}\sum_{i=1}^{K}c_{i},t_{j}\right)\right)\] where \(M\) is the number of images \(I\) in the experiment, \(T_{q}\) are the \(L\) tiles of an image \(img_{q}\), \(K\) is the number of unique clusters to which \(T_{q}\) belong, and \(n_{clst}\) is the total number of clusters used in the experiment. Reiterating, the first factor of W for an image ensures that tiles are spread over as few clusters as possible, and the second factor ensures that the distance of the tiles to the common center of the clusters they are assigned to is low. ### _Classification ability_ To assess the classification ability of the two clustering cutoff methods, clusters were created not only for each diagnosis separately, but also for the whole dataset spanning all diagnoses. The frequency of diagnoses contained in a resulting cluster was noted as a multi-class probability for a classification task. Test images from the ISIC 2018 challenge Task 3 [24, 25] were tiled and preprocessed as above (resulting in 10254 tiles from 1304 lesions with sufficient lesion area depicted), and the probabilities of the closest cluster of each tile were averaged. The top-1 class of the resulting probabilities was taken as a prediction, and accuracy as well as mean recall [24] were calculated.
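A minimal sketch of the clustering and the compactness metric W defined above. Note that scikit-learn's KMeans minimizes Euclidean distance, so cosine distance is commonly emulated by L2-normalizing the feature vectors (for unit vectors, squared Euclidean distance equals twice the cosine distance); this workaround and the variable names are our own assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster_features(features, n_clusters):
    # Cosine-distance k-means emulated via L2 normalization.
    feats = normalize(features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    return km, feats

def compactness_w(km, feats, tile_image_ids, n_clusters):
    """Compactness metric W: penalizes spreading the tiles of one lesion
    over many clusters, and rewards tiles lying close (in cosine distance)
    to the mean of the cluster centers they were assigned to."""
    centers = normalize(km.cluster_centers_)
    labels = km.labels_
    total = 0.0
    image_ids = np.unique(tile_image_ids)
    for q in image_ids:
        idx = np.where(tile_image_ids == q)[0]
        L = len(idx)
        uniq = np.unique(labels[idx])
        K = len(uniq)
        mean_center = centers[uniq].mean(axis=0)
        mean_center /= np.linalg.norm(mean_center)
        cos_dist = 1.0 - feats[idx] @ mean_center  # cosine distance per tile
        total += (K / min(n_clusters, L)) * cos_dist.sum()
    return total / len(image_ids)
```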
Fig. 1: Processing overview - Dermatoscopic images (upper left) are used as source data, and a neural network is trained for classification on tiled lesion-area tiles. Features are extracted from lesion tiles with this trained network, on which an unsupervised k-means clustering is applied to find pattern groups. Up to the closest 7 lesion-tiles within a cluster (examples shown for clusters within the BCC class) are stored as representatives of a pattern and presented to a human reader for qualitative interpretation.

### _Manual pattern descriptions_ Top-7 tiles of clusters, created for every diagnosis in the dataset separately with VGG16 feature vectors, k-means, and the described compactness metric, were inspected by a dermatologist with substantial experience in dermatoscopy (PT). Patterns were scored for redundancy, i.e. showing the same pattern as another cluster of the diagnosis, informativeness, i.e. whether any reproducible pattern can be identified, number of patterns, and previous description, i.e. whether the pattern was already identified and described in the literature. A pattern was defined as a change in color and/or structure covering the majority of the image tile. ### _Statistics_ Differences of paired values were compared using a one-sample t-test after checking for normality assumptions. Statistical analyses were performed using R Statistics v4.1.0 [26], and plots were created with ggplot2 [27]. A two-sided p-value \(<.05\) was regarded as statistically significant, with a Bonferroni-Holm type correction applied. ## III Results ### _Pattern interpretability_ Applied to clusters of every diagnosis separately, the elbow method created a mean of 24.7 (95%-CI: 16.4-33) clusters per diagnosis, whereas the compactness metric resulted in significantly fewer clusters (13.4; 95%-CI 11.8-15.1; p=0.03; Fig. 2a). The proportion of uninformative clusters was higher when using the elbow method (14.9%; 95% CI: 0.8-29.0) than when using the compactness metric (7.5%, 95% CI: 0-19.5; p=0.017; Fig. 2b). Upon qualitative interpretation of the clusters resulting from the compactness metric, at least one recognizable consistent pattern could be identified by a dermatologist for 93.6% (88 of 94) of diagnosis-specific clusters. Identified patterns could be mapped to 53 unique known diagnostic descriptions from previous literature of at least 29 publications (Suppl. Table S3). Only 51 clusters could be described with one pattern alone, whereas 30 clusters encompassed two, and 7 clusters three recognizable patterns in combination. The proportion of redundant clusters within a diagnosis ranged from 0% (basal cell carcinoma and melanoma) to 27.3% (dermatofibroma and vascular lesions). ### _Retained classification performance_ When clustering was applied to the whole dataset with all diagnoses included, the elbow method resulted in a higher number of clusters than the compactness metric (42 vs. 7), as well as a higher mean recall (46.3 vs. 34.6) and accuracy (43.4%; 95%-CI 40.7-46.2 vs. 32.2%; 95%-CI 29.7-34.8) for predictions on the ISIC2018 test set. Clusters of the compactness metric were rarely able to predict actinic keratoses, and almost never dermatofibroma (Fig. 3). ## IV Discussion With ever-growing image datasets, human interpretation of the available data becomes increasingly difficult, and herein we present an automated analysis pipeline intended to aid human-computer interaction for diagnostic marker discovery. Providing only information on the diagnosis and image area, we showed that the presented workflow can reproduce a major fraction of the diagnostic patterns in dermatoscopy described in the literature.
In contrast to other publications [6, 24, 28] that try to optimize for the best diagnostic accuracy of a neural network model, herein we propose a metric to constrain k-means clustering to optimize for human interpretability in a truly interactive human-computer interaction workflow. The proposed compactness metric reduces the information to a digestible amount, shown by the significant reduction in overall clusters (Fig. 2a), alongside a reduction of non-informative information, shown by the significant reduction of noninformative clusters (Fig. 2b). These improvements, though, come at a cost, namely a reduced diagnostic accuracy when applied in an automated classification setting. This underlines that the training proposed herein could be useful for human-computer interaction and interpretability, but not for safely predicting diagnoses as a standalone application. As biases from automated predictions of image data through neural networks are a significant problem [29], datasets should be inspected for potential biases. Although not explicitly shown in this pilot experiment, the proposed workflow may enable medical personnel and researchers to identify highly prevalent biases in a qualitative manner. It is certainly not a complete solution: based on the failure to classify rare classes (Fig. 3a), we hypothesize that biases in rare classes will equally not be detectable.

Fig. 2: Number of overall (a) and uninformative (b) clusters per diagnosis when constraining cluster numbers via either the compactness or elbow method.

Fig. 3: Confusion matrices showcasing performance of predictions via averaged nearest-neighbor cluster-probabilities constrained by (a) the compactness metric (7 clusters), or (b) the elbow method (42 clusters). Values within cells show proportions within one ground-truth class (=row).

Through qualitative analysis of the resulting clusters we found that for most it is not possible to find a single pattern to describe them, but the majority needed at least a combination of two patterns (Suppl. Table S3). This finding may help in designing future annotation and pattern analysis studies, and we hypothesize that studies trying to annotate and analyze for a single structure may not represent real patterns. Interestingly, this may be a missing link between descriptive and "metaphoric" language [1], as the former is more suitable for distinct and concise descriptions, but metaphoric language inherently tries to capture structure combinations. A further interesting insight was that the frequency of redundant clusters was not equally distributed, but higher in dermatofibroma and vascular lesions. This could be because these diagnoses generally show less variability in their patterns, but also because the used dataset, owing to the small sample size for these diagnoses, does not cover the real visual variability. Finally, it is also interesting to note that by qualitatively comparing different network architectures (Suppl. Figs. S4-S10), one can identify their different utility for the purpose of pattern discovery. While an autoencoder mainly detects color blobs, edges, corners, and curves, it focuses less on detailed structures. The top-7 tiles from clusters created using EfficientNet-B0, as a representative of a modern architecture with higher diagnostic accuracy than VGG16 [19], were less homogeneous and thus harder to interpret.
Thus, despite not being ideal for classification, we hypothesize that VGG16, through its inner architecture, is a good fit for extracting features of interpretable mid-level patterns useful for human-computer interaction. Future studies should demonstrate the feasibility of implementing this workflow not only for existing dermatoscopic datasets [30], but also for other imaging modalities such as dermatopathology and clinical images. ### _Limitations_ This pilot study was intended to showcase the general feasibility of the proposed process. Applicability to nonpigmented tumors, other localisations, inflammatory cases, and darker skin types cannot be estimated, as those were not included in the source datasets. The process at its core analyzes substructures of dermatoscopic images; thus, the overall lesion architecture is not integrated, which could theoretically be overcome by changing the tile size and minimal lesion area. The latter is a relevant consideration when applying the workflow, as with the initially chosen tile size and lesion-area constraints, some test cases with a very small depicted lesion did not produce any tile. ## Acknowledgements Lidia Talavera-Martinez was a beneficiary of the scholarship BES-2017-081264 granted by the Ministry of Economy, Industry, and Competitiveness of Spain under a program co-financed by the European Social Fund. She is also part of the R&D&i Project PID2020-113870GB-I00, funded by MCIN/AEI/10.13039/501100011033/. ## Data availability Used image data are openly available at [https://doi.org/10.7910/DVN/DBW86T](https://doi.org/10.7910/DVN/DBW86T) (Harvard Dataverse). Resulting clusters and qualitative evaluations are available in the supplementary material of this article.
2309.04452
Postprocessing of Ensemble Weather Forecasts Using Permutation-invariant Neural Networks
Statistical postprocessing is used to translate ensembles of raw numerical weather forecasts into reliable probabilistic forecast distributions. In this study, we examine the use of permutation-invariant neural networks for this task. In contrast to previous approaches, which often operate on ensemble summary statistics and dismiss details of the ensemble distribution, we propose networks that treat forecast ensembles as a set of unordered member forecasts and learn link functions that are by design invariant to permutations of the member ordering. We evaluate the quality of the obtained forecast distributions in terms of calibration and sharpness and compare the models against classical and neural network-based benchmark methods. In case studies addressing the postprocessing of surface temperature and wind gust forecasts, we demonstrate state-of-the-art prediction quality. To deepen the understanding of the learned inference process, we further propose a permutation-based importance analysis for ensemble-valued predictors, which highlights specific aspects of the ensemble forecast that are considered important by the trained postprocessing models. Our results suggest that most of the relevant information is contained in a few ensemble-internal degrees of freedom, which may impact the design of future ensemble forecasting and postprocessing systems.
Kevin Höhlein, Benedikt Schulz, Rüdiger Westermann, Sebastian Lerch
2023-09-08T17:20:51Z
http://arxiv.org/abs/2309.04452v2
# Postprocessing of Ensemble Weather Forecasts Using Permutation-invariant Neural Networks ###### Abstract Statistical postprocessing is used to translate ensembles of raw numerical weather forecasts into reliable probabilistic forecast distributions. In this study, we examine the use of permutation-invariant neural networks for this task. In contrast to previous approaches, which often operate on ensemble summary statistics and dismiss details of the ensemble distribution, we propose networks that treat forecast ensembles as a set of unordered member forecasts and learn link functions that are by design invariant to permutations of the member ordering. We evaluate the quality of the obtained forecast distributions in terms of calibration and sharpness, and compare the models against classical and neural network-based benchmark methods. In case studies addressing the postprocessing of surface temperature and wind gust forecasts, we demonstrate state-of-the-art prediction quality. To deepen the understanding of the learned inference process, we further propose a permutation-based importance analysis for ensemble-valued predictors, which highlights specific aspects of the ensemble forecast that are considered important by the trained postprocessing models. Our results suggest that most of the relevant information is contained in a few ensemble-internal degrees of freedom, which may impact the design of future ensemble forecasting and postprocessing systems. ## 1 Introduction Operational weather forecasting relies on numerical weather prediction (NWP) models. Since such models are subject to multiple sources of uncertainty, such as uncertainty in the initial conditions or model parameterizations, a quantification of the forecast uncertainty is indispensable. To achieve this, NWP models generate a set of deterministic forecasts, so-called ensemble forecasts, based on different initial conditions and variations of the underlying physical models. Since these forecasts are subject to systematic errors such as biases and dispersion errors, statistical postprocessing is used to enhance their reliability (see, e.g., Vannitsem et al. 2018). Recently, machine learning (ML) approaches for statistical postprocessing have shown superior performance over classical methods. For instance, Rasp and Lerch (2018) propose a distribution regression network (DRN) which predicts the parameters of a temperature forecast distribution from a suitable family of parametric distributions. In subsequent work, Schulz and Lerch (2022b) found that shallow multi-layer perceptrons (MLPs) with forecast distributions of different flexibility achieve state-of-the-art results in postprocessing wind gust ensemble forecasts. An ensemble forecast consists of multiple separate member forecasts, which are generated by repeatedly running NWP simulations with different model parameterizations and initial conditions. Typically, the configurations of different runs are sampled randomly from an underlying distribution of plausible simulation conditions, obtained, e.g., from uncertainty-aware data assimilation. The member forecasts can then be seen as identically distributed and interchangeable random samples from a distribution of possible future weather states. In this setting, statistical postprocessing of ensemble forecasts can be phrased as a prediction task on unordered predictor vectors and requires solutions that are tuned to match the predictor format.
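The invariance requirement can be stated compactly: a mapping \(g\) on ensembles must satisfy \(g(x)=g(Px)\) for every permutation \(P\) of the members. A toy illustration of this property (our own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
ens = rng.normal(15.0, 2.0, size=20)        # a 20-member temperature ensemble
perm = rng.permutation(20)

# Summary statistics are permutation invariant by construction ...
assert np.isclose(ens.mean(), ens[perm].mean())
assert np.isclose(ens.std(), ens[perm].std())

# ... whereas a generic function of the ordered member vector is not:
w = rng.normal(size=20)                     # fixed weights of a linear map
print(np.dot(w, ens), np.dot(w, ens[perm]))  # differs for most permutations
```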
Specifically, member interchangeability demands that the predictions of a well-designed postprocessing system should not be affected by permutations, i.e., shuffling, of the ensemble members. Systems that satisfy this requirement are called permutation invariant. Established postprocessing methods rely on basic summary statistics of the raw ensemble forecast to inform the estimation of the postprocessed distribution and are thus permutation invariant by design. However, especially in large ensembles, the details of the distribution may carry valuable information for postprocessing, and a more elaborate treatment of the inner structure of the raw forecast ensembles may be advisable. Alleviating such restrictions, Bremnes (2020) employs MLPs for postprocessing of wind speed forecasts, which receive information about the full state of a univariate ensemble forecast. Yet, the size of the ensemble, i.e., the number of member forecasts, acts as a multiplier on the dimension of the predictors in the proposed models, resulting in an increased model complexity and a higher tendency to overfitting. Only recently, studies have started to explore how more dedicated model architectures can help to improve postprocessing (Mlakar et al. 2023; Ben-Bouallegue et al. 2023), and ML provides a variety of further approaches to enforcing permutation invariance in data-driven learning (e.g., Ravanbakhsh et al., 2016; Vaswani et al., 2017; Zaheer et al., 2017; Lee et al., 2019; Sannai et al., 2019; Zhang et al., 2019). The increasing adoption of permutation-invariant statistical models in postprocessing thus raises the question of how capable different model architectures are in extracting information from the ensemble forecasts and how much value is added by considering ensemble-valued predictors instead of summary statistics. ### Contribution In this study, we investigate the capabilities of different permutation-invariant NN architectures for univariate postprocessing of station predictions. We evaluate the proposed models on two exemplary station-wise postprocessing tasks with different characteristics. The ensemble-based network models are compared to classical methods and basic NNs which operate only on ensemble summary statistics but are trained under identical predictor conditions otherwise. We further assess how much of the predictive information is carried within the details of the ensemble distribution, and how much of the model skill arises from other factors. To shed light on the sources of model skill, we propose an ensemble-oriented importance analysis and study the effect of ensemble-internal degrees of freedom using conditional feature permutation. ## 2 Related work ### Statistical postprocessing of ensemble forecasts Two of the first methods for statistical postprocessing of ensemble forecasts are ensemble model output statistics (EMOS; Gneiting et al., 2005) and Bayesian model averaging (BMA; Raftery et al., 2005). While EMOS performs a distributional regression based on a suitable family of parametric distributions and summary statistics of the ensemble, BMA generates a mixture distribution based on the individual ensemble members. Due to its simplicity, EMOS has been applied to a wide range of weather variables including temperature (Gneiting et al., 2005), wind gusts (Pantillon et al., 2018), precipitation (Scheuerer, 2014) and solar radiation (Schulz et al., 2021).
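To make the EMOS idea concrete, the following sketch fits a Gaussian EMOS model of the form N(a + b*mean, c + d*var) by minimum-CRPS estimation, as in Gneiting et al. (2005); the starting values and the choice of a Gaussian (appropriate, e.g., for temperature) are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

def crps_normal(mu, sigma, y):
    """Closed-form CRPS for a Gaussian forecast distribution."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                    + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens, y):
    """EMOS: N(a + b*mean, c + d*var), with (a, b, c, d) chosen by
    minimizing the mean CRPS over the training set."""
    m, v = ens.mean(axis=1), ens.var(axis=1)

    def mean_crps(theta):
        a, b, c, d = theta
        sigma = np.sqrt(np.maximum(c + d * v, 1e-6))  # keep scale positive
        return crps_normal(a + b * m, sigma, y).mean()

    res = optimize.minimize(mean_crps, x0=[0.0, 1.0, 1.0, 0.1],
                            method="Nelder-Mead")
    return res.x
```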
Following the simple statistical approaches, ML approaches such as quantile regression forests (Taillardat et al., 2016) or a gradient boosting extension of EMOS (Messner et al., 2017) have been introduced. The first NN-based approaches were the DRN approach (Rasp and Lerch, 2018) as an extension of the EMOS framework, and the Bernstein quantile network (BQN; Bremnes, 2020) that provides a more flexible forecast distribution. In Schulz and Lerch (2022b), NN-based approaches were adapted towards the prediction of wind gusts and outperformed classical methods. Recently, research has shifted towards the use of more sophisticated network architectures. Examples include convolutional NNs that incorporate spatial NWP output fields (Scheuerer et al., 2020; Gronquist et al., 2021; Veldkamp et al., 2021; Horat and Lerch, 2023), and generative models that produce multivariate forecast distributions (Dai and Hemri, 2021; Chen et al., 2022). Only recently, Mlakar et al. (2023) propose NN models that explicitly address the ensemble structure of the inputs by employing a dynamic attention mechanism. This model performs best in the benchmark study of Demaeyer et al. (2023). In orthogonal work, Ben-Bouallegue et al. (2023) postprocess each member individually with hierarchical ensemble transformers, and Orlova et al. (2022) found that exploiting the ensemble structure enhances the predictive performance in the context of sub-seasonal forecasting. For a general review of statistical postprocessing of weather forecasts, we refer to Vannitsem et al. (2018); a review of recent developments and challenges can be found in Vannitsem et al. (2021) and Haupt et al. (2021).

### Neural network architectures for regression on set-structured data

Vinyals et al. (2015) compare a sequential method for processing unordered predictors against a permutation-invariant alternative model and demonstrate that the lack of built-in permutation invariance may substantially affect prediction quality. Ravanbakhsh et al. (2016) introduce a permutation-equivariant layer for NNs that operate on set-structured data and combine these layers to design permutation-invariant networks. Similar layers were later used by Zaheer et al. (2017), who propose the framework _DeepSets_, which is discussed in more detail in section 4a. Murphy et al. (2018) propose Janossy pooling, which obtains a permutation-invariant mapping as the average of a permutation-sensitive function applied to all possible reorderings of the set. The authors propose methods to lessen the computational burden of the method, but the resulting mappings are subject to constraints or achieve permutation invariance only approximately, so we do not consider the approach in our comparison. Limitations of representing functions on sets have been discussed by Wagstaff et al. (2019). Lyle et al. (2020) demonstrate further that algorithmically enforced permutation invariance is favorable in a variety of tasks compared to alternative approaches, such as data augmentation. Pooling-type network architectures were introduced by Edwards and Storkey (2016) and investigated in more detail by Zaheer et al. (2017) and Sannai et al. (2019), who prove that pooling architectures with additive pooling are universal approximators of functions on sets. Yet, Soelch et al. (2019) highlight that the use of more expressive pooling functions may enhance model performance. A different approach is considered by Lee et al.
(2019), who use (multi-head) attention functions (Vaswani et al., 2017) for permutation-invariant inference on set-valued data. Attention-based models, also known as transformers, have proven powerful in a variety of computer vision tasks (e.g., Khan et al., 2022) and have more recently also found meteorological applications, such as data-driven weather forecasting (e.g., Pathak et al., 2022) and postprocessing (e.g., Finn, 2021; Ben-Bouallegue et al., 2023).

### Machine learning explainability and feature importance

ML explainability has attracted substantial interest throughout the last decade (Guidotti et al., 2018; Linardatos et al., 2020; Sahakyan et al., 2021; Burkart and Huber, 2021; Zhang et al., 2021). Model explanation aims at understanding black-box algorithms by assessing the general logic of the algorithm, often using sample-based explanations. Many variants exist (e.g., Bach et al., 2015; Ribeiro et al., 2016; Shrikumar et al., 2017; Lundberg and Lee, 2017) and are increasingly adopted in the earth-system sciences (e.g., Labe and Barnes, 2021; Farokhmanesh et al., 2023). Such techniques explain model predictions by assigning sample-specific relevance or attribution scores to the model inputs (i.e., the predictors), and deriving the effective strength of certain predictors. While attribution-based approaches are well suited for an in-depth investigation of the model inference, they usually come at high computational cost and provide information that is too fine-grained for a comparative evaluation of algorithms. For these higher-level tasks, averaged importance scores of certain predictors, as obtained, e.g., through feature permutation importance (FPI; Breiman, 2001), are more informative. FPI is commonly implemented as a post-training step, in which relevance scores are assigned to the predictors based on the accuracy loss after permuting the predictor values within the test dataset. In this work, we propose a conditional permutation importance measure for ensemble-valued predictors, which allows attributing importance values to different aspects of the ensemble-internal variability. Conditional perturbation measures have been considered in earlier works (e.g., Strobl et al., 2008; Molnar et al., 2023), yet there the importance of specific predictors is evaluated in the context of the remaining predictors, whereas our approach addresses specifically the ensemble structure of the raw forecasts encountered in postprocessing.

## 3 Benchmark methods and forecast distributions

### Assessing predictive performance

We evaluate probabilistic forecasts based on the paradigm of Gneiting et al. (2007), i.e., a forecast should maximize sharpness subject to calibration. Both sharpness and calibration can be assessed quantitatively using proper scoring rules (Gneiting and Raftery, 2007). A popular choice is the continuous ranked probability score (CRPS; Matheson and Winkler, 1976)

\[\text{CRPS}(F,y)=\int_{-\infty}^{\infty}\left(F(z)-\mathbb{1}\left\{y\leq z\right\}\right)^{2}dz,\]

wherein \(y\in\mathbb{R}\) is the observed value, \(F\) the cumulative distribution function (CDF) of the forecast distribution, and \(\mathbb{1}\) the indicator function. The CRPS can be computed analytically for a wide range of distributions, including the truncated logistic distribution and probabilistic forecasts in ensemble form (Jordan et al., 2019).
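To make the ensemble case concrete, the following minimal NumPy sketch (our own illustration, not code from any of the cited packages) evaluates the CRPS of a raw \(M\)-member ensemble via the equivalent kernel representation \(\text{CRPS}(F,y)=\mathbb{E}|X-y|-\frac{1}{2}\mathbb{E}|X-X^{\prime}|\), where \(X\) and \(X^{\prime}\) are independent draws from the empirical ensemble distribution:

```python
import numpy as np

def crps_ensemble(ens: np.ndarray, y: float) -> float:
    """CRPS of an empirical ensemble forecast `ens` (shape (M,)) for observation y.

    Uses the kernel form CRPS = E|X - y| - 0.5 * E|X - X'|, which is exact
    for the empirical CDF of the ensemble.
    """
    ens = np.asarray(ens, dtype=float)
    term_obs = np.mean(np.abs(ens - y))                          # E|X - y|
    term_spread = np.mean(np.abs(ens[:, None] - ens[None, :]))   # E|X - X'|
    return term_obs - 0.5 * term_spread

# Example: a sharp, well-centered ensemble scores lower (better)
rng = np.random.default_rng(0)
print(crps_ensemble(rng.normal(2.0, 0.5, size=20), y=2.1))
print(crps_ensemble(rng.normal(0.0, 3.0, size=20), y=2.1))
```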
In addition to the CRPS, we assess calibration based on the empirical coverage of prediction intervals (PIs) derived from the forecast distribution, and sharpness based on the corresponding length. Under the assumption of calibration, the observed coverage of a PI should match the nominal level, and a forecast is sharper the smaller the length of the PI. In line with Schulz and Lerch (2022b), we choose the PI level based on the size of the underlying ensemble. For an ensemble of size \(M\), this gives rise to a PI with nominal level \((M-1)/(M+1)\). Further, we qualitatively assess calibration based on (unified) probability integral transform (PIT) histograms (Gneiting and Katzfuss, 2014; Vogel et al., 2018). While a flat histogram indicates that the forecasts are calibrated, systematic deviations indicate miscalibration. For more details on the evaluation of probabilistic forecasts, we refer to Gneiting and Katzfuss (2014).

### Distributional regression with a parametric forecast distribution

In this study, we consider postprocessing of the ensemble forecast distribution of a real-valued random variable \(Y\) as a distributional regression task on ensemble-structured predictors. We consider the case of station-wise forecasts, which are given as prediction vectors \(\mathbf{x}\in\mathcal{P}\subseteq\mathbb{R}^{p}\), each comprising the predictions of \(p\) scalar-valued meteorological variables, such as surface temperature or 10-m wind speed at a station site. For \(M\in\mathbb{N}\), an \(M\)-member ensemble forecast \(X\in\left[\mathcal{P}\right]_{M}\), with \(\left[\mathcal{P}\right]_{M}:=\left\{\left\{\mathbf{x}_{1},...,\mathbf{x}_{M}\right\}:\mathbf{x}_{m}\in\mathcal{P},m=1,...,M\right\}\), is a set of separate prediction vectors, which all concern the same forecasting task. Within the (parametric) distributional regression framework, the parameter vector \(\mathbf{\theta}\in\Theta\subseteq\mathbb{R}^{D}\), \(D\in\mathbb{N}\), of a parametric distribution \(\mathcal{F}_{\mathbf{\theta}}\) is linked to the predictors via a function that is estimated by minimizing a proper scoring rule. The underlying model can be written as

\[Y\mid X\sim\mathcal{F}_{\mathbf{\theta}},\quad\mathbf{\theta}=g(X)\in\Theta, \tag{1}\]

where \(g:\left[\mathcal{P}\right]_{M}\to\Theta\) is called the link function. For EMOS, the link function is typically a generalized affine-linear function of ensemble summary statistics \(s:\left[\mathcal{P}\right]_{M}\rightarrow\mathbb{R}\), such as the ensemble mean or standard deviation. I.e., given \(F\in\mathbb{N}\) summary features \(s_{f}\), for \(f=1,...,F\), the link function reads

\[g(X)=\rho(\Gamma\mathbf{s}_{X}+\mathbf{\gamma})\,, \tag{2}\]

wherein \(\mathbf{s}_{X}:=(s_{1}(X),...,s_{F}(X))\in\mathbb{R}^{F}\), and \(\Gamma\in\mathbb{R}^{D\times F}\) and \(\mathbf{\gamma}\in\mathbb{R}^{D}\) denote the parameters of the optimized affine-linear transform. The function \(\rho:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) indicates a combination of element-wise activation functions, such as \(\exp(\cdot)\) or the identity. DRN (Rasp and Lerch, 2018; Schulz and Lerch, 2022b) overcomes the need to pre-define the detailed structure of \(g\) by admitting the data-driven estimation of arbitrary link functions using NNs, i.e.,

\[g(X)=\phi_{\mathbf{\beta}}\left(\mathbf{s}_{X}\right) \tag{3}\]

with \(\phi_{\mathbf{\beta}}\) denoting an MLP with parameters \(\mathbf{\beta}\). The forecast distribution as well as the underlying proper scoring rule used for optimization are two implementation choices. We note that Schulz and Lerch (2022b) employ a learned station embedding to reuse the same set of model parameters for multiple weather stations. Since this design choice does not affect the treatment of the forecast ensemble, we subsume this embedding within the model parameters \(\mathbf{\beta}\) for brevity.
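As a concrete illustration of Eqs. (2) and (3), the following minimal sketch (our own example; the layer sizes and the ordering of the two distribution parameters as location and scale are assumptions) contrasts the EMOS-style affine-linear link with a DRN-style MLP link, both mapping summary features \(\mathbf{s}_{X}\) to parameters \(\mathbf{\theta}\); the \(\exp\) activation keeps the scale parameter positive:

```python
import numpy as np

def summary_features(X: np.ndarray) -> np.ndarray:
    """Summary vector s_X of an ensemble X with shape (M, p):
    per-variable ensemble mean and standard deviation."""
    return np.concatenate([X.mean(axis=0), X.std(axis=0)])

def emos_link(s_X, Gamma, gamma):
    """Eq. (2): affine-linear link; exp(.) on the second output
    keeps the scale parameter positive."""
    z = Gamma @ s_X + gamma                   # shape (D,), here D = 2
    return np.array([z[0], np.exp(z[1])])     # (location, scale)

def drn_link(s_X, W1, b1, W2, b2):
    """Eq. (3): a one-hidden-layer MLP in place of the affine transform."""
    h = np.maximum(W1 @ s_X + b1, 0.0)        # ReLU hidden layer
    z = W2 @ h + b2
    return np.array([z[0], np.exp(z[1])])
```

In practice, the weights \(\Gamma,\mathbf{\gamma}\) and \(\mathbf{\beta}=(W_{1},b_{1},W_{2},b_{2})\) would be fitted by minimizing the CRPS of the resulting forecast distribution over the training data.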
### Flexible distribution estimator

Distributional regression methods based on a parametric forecast distribution are robust but lack flexibility, as they are bound to the parametric distribution family of choice. Typical choices of forecast distributions include the normal (Gneiting et al., 2005; Rasp and Lerch, 2018), logistic (Schulz and Lerch, 2022b) or generalized extreme value distribution (Lerch and Thorarinsdottir, 2013; Scheuerer, 2014). They all lack the ability to express multi-modalities that are required when different weather patterns occur. Hence, methods that do not rely on parametric assumptions have been proposed in the postprocessing literature. Examples are the direct adjustment of the ensemble members (van Schaeybroeck and Vannitsem, 2015) or quantile regression forests (Taillardat et al., 2016). To incorporate the full ensemble structure, we consider flexible forecast distributions that are able to represent the distributional structure of the ensemble forecast in more detail.

BQN (Bremnes, 2020) models the forecast distribution as a quantile function

\[Q(p|\mathbf{\alpha})=\sum_{\nu=0}^{d}\alpha_{\nu}\,B_{\nu d}(p)\,,\quad p\in[0,1]\,, \tag{4}\]

which is a linear combination of Bernstein (basis-)polynomials \(B_{\nu d}(p)\) of degree \(d\in\mathbb{N}\), \(\nu=0,\ldots,d\), with mixing coefficients \(\mathbf{\alpha}=(\alpha_{0},...,\alpha_{d})\) such that \(\alpha_{0}\leq\alpha_{1}\leq...\leq\alpha_{d}\). The inference network is designed to output parameters \(\mathbf{\theta}\) that parameterize the mixing coefficients, i.e., \(\mathbf{\alpha}=\mathbf{\alpha}(\mathbf{\theta})\). In contrast to DRN, this formulation offers increased flexibility for modeling multi-modality, while requiring hard upper and lower bounds for the values of the forecast variable. For BQN models, the optimization is guided by an average of quantile scores (Koenker and Bassett, 1978; Gneiting and Raftery, 2007), which can be seen as a discrete approximation of the CRPS (Gneiting and Ranjan, 2011). Note that our BQN implementation differs from that of Bremnes (2020) and Schulz and Lerch (2022b) in how the ensemble predictors are used. Bremnes (2020) specifies the link function as

\[g(X)=\phi_{\mathbf{\beta}}\left(\text{sort}(X)\right), \tag{5}\]

wherein \(\phi_{\mathbf{\beta}}\) is a NN and \(\text{sort}(\cdot)\) indicates a sorting operation of the ensemble members with respect to a pre-determined reference quantity. While Bremnes (2020) restricts the models to univariate ensemble predictors, Schulz and Lerch (2022b) use ensemble-valued predictors only for the predictor of the target variable and ensemble means for additional auxiliary predictor variables. The sorting imposes a fixed ordering of the members and thus ensures permutation invariance of the model predictions. Yet, this increases the number of weights in the initial layer of the network and restricts the trained model to ensembles of fixed size. Hence, we incorporate the ensemble as described in Eq. (3), analogous to DRN using ensemble summary statistics. A comparison of both variants is conducted in the supplementary materials, demonstrating the equivalence of both approaches.
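The following sketch (our own illustration; the monotonicity construction via cumulative sums of non-negative increments is one common choice, not necessarily the exact parameterization used here) evaluates the Bernstein quantile function of Eq. (4) and the mean quantile score used as a BQN training objective:

```python
import numpy as np
from scipy.special import comb

def bernstein_quantile(p: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Eq. (4): quantile function Q(p | alpha) as a Bernstein-polynomial
    combination; `alpha` (length d+1) must be non-decreasing so that Q is a
    valid, monotone quantile function."""
    d = len(alpha) - 1
    nu = np.arange(d + 1)
    basis = comb(d, nu) * p[:, None] ** nu * (1 - p[:, None]) ** (d - nu)
    return basis @ np.asarray(alpha)

def mean_quantile_score(alpha, y, n_levels=99):
    """Average pinball loss over equidistant quantile levels, a discrete
    approximation of the CRPS."""
    tau = np.linspace(0.01, 0.99, n_levels)
    q = bernstein_quantile(tau, alpha)
    u = y - q
    return np.mean(u * (tau - (u < 0)))

# Non-decreasing coefficients, e.g. cumulative sums of non-negative outputs
alpha = np.cumsum([1.0, 0.5, 0.2, 0.8, 1.5])
print(mean_quantile_score(alpha, y=2.3))
```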
### Usage of auxiliary predictors

In addition to the predictions of the postprocessing target variable, most algorithms use auxiliary information to improve the prediction performance (see Table 1). We distinguish between ensemble-valued and scalar-valued predictors, where ensemble-valued predictors vary between different members and scalar-valued predictors do not. In the ensemble-valued case, we further differentiate the prediction of the postprocessed quantity, termed the primary prediction, from auxiliary predictions of other meteorological variables. For either of these, postprocessing models can have access to the full set of ensemble values or only to summary statistics. Scalar-valued predictors refer to contextual information, such as station-specific coordinates and orography details (cf. Table 1: station pred.), as well as to temporal information, such as the day of the year. We consider only models that are trained on predictions for specific initialization and lead times, such that information about the diurnal cycle is not required. While most approaches include the scalar predictors explicitly as features in the regression process, EMOS takes advantage of categorical location and time information implicitly by fitting independent models for each station and month (Schulz and Lerch, 2022b). Notably, the permutation-invariant models (cf. Table 1: perm.-inv.) considered in this study have access to the richest predictor pool.

## 4 Permutation-invariant neural network architectures

From the variety of permutation-invariant model architectures, we select two representative approaches, _set pooling architectures_ and _set transformers_, which we adapt for distributional regression. Compared with the benchmark methods of section 3, the proposed networks replace the summary-based ensemble processing while the parameterization of the forecast distributions remains unchanged. A schematic comparison of both permutation-invariant architectures is shown in Fig. 1.

### Set pooling architectures

Set pooling architectures (Zaheer et al., 2017), also known as _DeepSets_, achieve permutation invariance via extraction and permutation-invariant summarization of learned latent features. The features are obtained by applying an encoder network to all ensemble members separately. The resulting link function can be expressed as

\[g(X)=\phi_{\mathbf{\beta}}^{\text{Dec}}(\text{pool}(\tilde{X})), \tag{6}\]
\[\text{where}\ \tilde{X}=\{\phi_{\mathbf{\beta}}^{\text{Enc}}(\mathbf{x}):\mathbf{x}\in X\}. \tag{7}\]

Therein, pool is a permutation-invariant pooling function, and \(\phi_{\mathbf{\beta}}^{\text{Enc}}\) and \(\phi_{\mathbf{\beta}}^{\text{Dec}}\) are trainable MLPs, acting as encoder and decoder, respectively. We will thus use the names _set pooling_ and _encoder-decoder_ (ED) architecture synonymously. We consider different variants of ensemble summarization based on average and extremum pooling, as well as adaptive pooling functions based on an attention mechanism (Lee et al., 2019; Soelch et al., 2019), discussed below. Overall, we find that the pooling mechanism is of minor importance. Detailed comparisons are thus deferred to the supplementary materials. In all subsequent experiments, we use attention-based pooling.

### Set Transformer

Set transformers (Lee et al., 2019) are NNs which model interactions between set members via self-attention. _Attention_ is a form of nonlinear activation function, in which the relevance of the inputs is determined via a matching of input-specific key and query vectors. _Multi-head attention_ allows the model to attend to multiple key patterns in parallel (Vaswani et al., 2017). Lee et al. (2019) combine multi-head attention with member-wise NNs to build a permutation-invariant set-attention block, from which a set transformer is built by stacking multiple instances. Set transformers apply straightforwardly to ensemble data and can exploit all aspects of the available ensemble dataset by allowing for information exchange between ensemble members early in the inference process (cf. Fig. 1). We construct a set transformer by using three set-attention blocks with 8 attention heads (Vaswani et al., 2017; Lee et al., 2019). Each block comprises a separate MLP with two hidden layers. Additionally, the first set-attention block is preceded by a linear layer to align the channel number of the ensemble input with the hidden dimension of the set-attention blocks. To construct vector-valued predictions from set-valued inputs, Lee et al. (2019) propose attention-based pooling, in which the output query vectors are implemented as learnable parameters. After pooling, the final prediction is obtained by applying another two-layer MLP.
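To make the set pooling design of Eqs. (6)-(7) concrete, the following PyTorch sketch (our own minimal illustration; the hidden sizes and the use of mean pooling are assumptions, as the models in this study employ attention-based pooling) encodes each member independently with shared weights, pools across the member dimension, and decodes the pooled representation into distribution parameters \(\mathbf{\theta}\):

```python
import torch
import torch.nn as nn

class SetPoolingED(nn.Module):
    """Minimal encoder-pool-decoder network, cf. Eqs. (6)-(7).
    Input: ensemble tensor of shape (batch, M, p); output: theta (batch, D)."""

    def __init__(self, n_predictors: int, n_params: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(          # applied member-wise
            nn.Linear(n_predictors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        latent = self.encoder(x)       # (batch, M, hidden), shared encoder weights
        pooled = latent.mean(dim=1)    # permutation-invariant pooling over members
        return self.decoder(pooled)    # (batch, D)

# Permutation invariance holds by construction:
model = SetPoolingED(n_predictors=61, n_params=2)
x = torch.randn(8, 20, 61)             # e.g., 20-member wind gust ensembles
perm = torch.randperm(20)
assert torch.allclose(model(x), model(x[:, perm]), atol=1e-6)
```

Mean pooling can be swapped for max pooling or a learned attention-based pooling without changing the invariance property, since any symmetric aggregation over the member dimension suffices.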
Figure 1: Set pooling architecture (top), consisting of encoder and decoder MLPs, and set transformer (bottom), featuring attention blocks and intermediate MLPs with residual connections. While the encoder-decoder architecture admits interactions between members only inside the pooling step, the set transformer admits information transfer between the members in each attention step.

## 5 Data

We evaluate the performance of the proposed models in two postprocessing tasks using the datasets described in Table 2.

### Wind gust prediction in Germany

In the first case study, we employ our methods for station-wise postprocessing of wind gust forecasts using a dataset that has previously been used in Pantillon et al. (2018) and Schulz and Lerch (2022b). The ensemble forecasts are based on the COSMO-DE (Baldauf et al., 2011) ensemble prediction system (EPS) and consist of 20 member forecasts, simulated with a horizontal resolution of 2.8 km. The forecasts are initialized at 00 UTC, and we consider the lead times 6, 12 and 18h. Other than wind gusts, the dataset comprises ensemble forecasts of several meteorological variables, such as temperature, pressure, precipitation and radiation. The predictions are verified against observations measured at 175 stations of the German weather service (Deutscher Wetterdienst; DWD). Forecasts for the individual weather stations are obtained from the closest grid point. The time period of the forecast and observation data starts on 9 December 2010 and ends on 31 December 2016. The models use the data from 2010-2015 for model estimation, with 2010-2014 as the training and 2015 as the validation period. The forecasts are then verified in 2016. As in Schulz and Lerch (2022b), each lead time is processed separately.

As detailed in Schulz and Lerch (2022b), a minor caveat is caused by a non-trivial substructure of the forecast ensembles. The 20-member ensembles constitute a conglomerate of four sub-ensembles, which are generated with slightly different model configurations. While this formally violates the assumption of statistical interchangeability of the members, the sub-ensembles are sufficiently similar to justify the application of permutation-invariant models. For the benchmark methods EMOS and DRN, we use the exact same forecasts as in Schulz and Lerch (2022b), both estimating the parameters of a truncated logistic distribution by minimizing the CRPS; see their section 3 for details. BQN is adapted as described in section 3 and Table 1.

### Temperature forecasts from the EUPPBench dataset

In a second example, we postprocess ensemble forecasts of surface temperature using a subset of the EUPPBench postprocessing benchmark dataset (Demaeyer et al., 2023). EUPPBench provides paired forecast and observation data from two sets of samples. The first part consists of 20 years of reforecast data (1997 - 2016) from the Integrated Forecasting System (IFS) of the ECMWF with 11 ensemble members. Mimicking typical operational approaches, the reforecast dataset is used as training data, complemented by an additional two years (2017 and 2018) of 51-member forecasts as test data. EUPPBench comprises sample data from multiple European countries - Austria, Belgium, France, Germany and the Netherlands - which are publicly accessible via the CliMetLab API (ECMWF, 2013). Additional data for Switzerland can be requested from the Swiss weather service, but is not used in this study. EUPPBench constitutes a comprehensive dataset of samples over a long time period.
In contrast to the wind gust forecasts, the EUPPBench ensemble members are exchangeable, so that permutation-invariant model architectures are optimally suited.

\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
 & \multicolumn{2}{c}{Ensemble-valued predictors} & \multicolumn{2}{c}{Scalar-valued predictors} \\
Method & Primary prediction & Auxiliary predictions & Spatial & Temporal \\
\hline
EMOS (Schulz and Lerch, 2022b, ours) & mean + std. dev. & – & \multicolumn{2}{c}{different models per station and month} \\
\hline
BQN (Bremnes, 2020) & ensemble (sorted) & – & station embed. & – \\
BQN (Schulz and Lerch, 2022b) & ensemble (sorted) & mean & station pred. + embed. & day of year \\
BQN (ours) & mean + std. dev. & mean & station pred. + embed. & day of year \\
DRN (Schulz and Lerch, 2022b, ours) & mean + std. dev. & mean & station pred. + embed. & day of year \\
\hline
Perm.-inv. DRN + BQN (ours) & ensemble (perm.-inv.) & ensemble (perm.-inv.) & station pred. + embed. & day of year \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Predictor utilization by postprocessing methods. Methods used in this study are indicated by _ours_.

\begin{table}
\begin{tabular}{l r r}
\hline \hline
Dataset & Wind gust forecasts & EUPPBench (re)forecasts \\
\hline
Underlying NWP model & COSMO-DE-EPS & ECMWF-IFS \\
Initialization time & 00 UTC & 00 UTC \\
Ensemble size \(M\) & 20 & Reforecasts: 11 \\
 & & Forecasts: 51 \\
Predicted ensemble forecast quantities \(p\) & 61 & 28 \\
Region & Germany & Central Europe \\
Stations & 175 & 117 \\
Lead times considered in h & 6, 12, 18 & 24, 72, 120 \\
Training samples & 315,000 & 374,000 \\
Test samples & 63,000 & Reforecasts: 97,000 \\
 & & Forecasts: 85,000 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Overview of the data used in the postprocessing applications described in section 5.

Deviating from the EUPPBench convention, models are tested on the 51-member forecasts, and the last 4 years of the reforecast dataset are considered as an independent test set of 11-member forecast samples. This allows us to assess the generalization capabilities of the ensemble-based postprocessing models on data equivalent to the training data, as well as on data with larger ensemble size. Furthermore, we use the full set of available surface- and pressure-level predictor variables, whereas the original EUPPBench task is restricted to using only surface temperature data. While this design choice hinders the direct comparison of the evaluation metrics in this paper with the original EUPPBench models, it enables a more comprehensive assessment of the relative benefits of using summary-based vs. ensemble-valued predictors. From the pool of available forecast lead times, we select 24h, 72h and 120h for a closer analysis. Unlike previous postprocessing applications for temperature (e.g., Gneiting et al. 2005; Rasp and Lerch 2018), we employ a zero-truncated logistic distribution as the parametric forecast distribution in Eq. (1), instead of a zero-truncated normal, as preliminary tests showed a slightly superior predictive performance (see supplementary material for details). Note that the zero-truncation arises from temperatures being measured in Kelvin here. Further, this allows using the same configuration as for the wind gust predictions. In particular, both the EMOS and DRN benchmark approaches are identical for both data sets.

## 6 Performance evaluation

For each of the postprocessing methods, we generated a pool of 20 networks in each forecast scenario.
To ensure a fair comparison to the benchmark methods, we follow the approach of Schulz and Lerch (2022a,b), who build an ensemble of 10 networks and aggregate the forecasts via quantile aggregation. Hence, we draw 10 members from the pool and repeat this procedure 50 times to quantify the uncertainty of sampling from the general pool. For all model variants and resamples, we select those configurations as the final forecast that yield the lowest CRPS on the validation set. Details on hyperparameter tuning are discussed in the supplementary materials.
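Quantile aggregation (also known as Vincentization) combines the member networks by averaging their predicted quantile functions level by level. The following sketch is our own illustration of this idea; the uniform grid of 99 levels and the example distributions are assumptions:

```python
import numpy as np
from scipy.stats import logistic

def aggregate_quantiles(quantile_fns, levels=None):
    """Vincentization: average the quantile functions of several models.

    quantile_fns: list of callables, each mapping an array of levels in (0, 1)
                  to the corresponding quantiles of one network's forecast.
    Returns the levels and the level-wise averaged quantiles.
    """
    if levels is None:
        levels = np.linspace(0.01, 0.99, 99)
    q = np.stack([f(levels) for f in quantile_fns])  # (n_models, n_levels)
    return levels, q.mean(axis=0)                    # level-wise average

# Example with two hypothetical logistic forecast distributions
fns = [lambda p: logistic.ppf(p, loc=1.0, scale=0.8),
       lambda p: logistic.ppf(p, loc=1.4, scale=1.0)]
levels, q_agg = aggregate_quantiles(fns)
```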
For both datasets, we compute the average CRPS, PI length and PI coverage for the different forecast lead times based on the respective test datasets. The average is calculated over the resamples of the aggregated network ensembles. In what follows, the prefixes ED and ST refer to pooling-based encoder-decoder models and set transformers, respectively, and the suffixes DRN and BQN indicate the parameterization of the forecast distribution. The model categories DRN and BQN without additional prefix refer to the benchmark models based on summary statistics.

### Wind gust forecasts

Table 3 shows the quantitative evaluation for lead times 6h, 12h and 18h. All permutation-invariant model architectures perform similarly to the DRN and BQN benchmarks and outperform both the EPS and conventional postprocessing via EMOS, thus achieving state-of-the-art performance for all lead times. Further, the PI lengths and coverages are similar to those of the benchmark methods with the same forecast distribution, indicating that the ensemble-based models achieve approximately the same level of sharpness as the benchmark networks while being well-calibrated. Note that the underlying distribution type should be taken into account when comparing the sharpness of different postprocessing models based on the PI length, as the DRN and BQN forecast distributions exhibit different tail behavior, which affects the PI lengths for different nominal levels (see supplementary materials for details). A noticeable difference between the network classes is that the ED models result in sharper PIs than the ST models. This coincides with the empirical PI coverages of the methods in that wider PIs typically result in a higher coverage. Fig. 2 shows the PIT histograms of the postprocessed forecasts. Differences are seen between DRN-type and BQN-type models, while all models within each type show very similar patterns. All models are well calibrated, yet DRN-type models reveal limitations in resolving gusts in the lower segment of the distribution, whereas BQN-type models yield very uniform calibration histograms.

### EUPPBench surface temperature reforecasts

As shown in Table 4, also for the EUPPBench dataset both ED and ST models show significant advantages compared to the EPS and EMOS in terms of CRPS and PI length. Differences between the network variants arise mainly from the use of different forecast distribution types. Note that the lead times of the wind gust dataset are in the short range with a maximum of 18h, whereas the lead times considered in the EUPPBench dataset range from one to five days. Hence, the differences between the lead times in the effects of postprocessing are more pronounced. E.g., for a lead time of 120h, the improvement of the network-based postprocessing methods over the conventional EMOS approach is much smaller than for shorter lead times. In particular, ST models perform the best for lead time 24h, and all newly proposed models result in the smallest CRPS for lead time 120h. In terms of the PI length and coverage, we find that the ED and ST models tend to generate slightly sharper predictions. A more detailed discussion of the differences in the PI lengths due to the choice of the underlying distribution is provided in the supplementary material. The PIT histograms in Fig. 2 show that the BQN models struggle to set accurate upper and lower bounds for the predicted distribution, whereas the DRN distributions do not show such issues. Instead, they face the problem that the tail is too heavy. Overall, all postprocessing methods result in calibrated forecasts, while the DRN forecasts appear slightly better calibrated than the BQN forecasts, which yield PIT histograms with a wave-like structure.

\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
Lead Time & \multicolumn{3}{c}{6h} & \multicolumn{3}{c}{12h} & \multicolumn{3}{c}{18h} \\
\hline
Method & CRPS & PI length & PI coverage & CRPS & PI length & PI coverage & CRPS & PI length & PI coverage \\
\hline
EPS & 1.31 & 2.37 & 43.18 & 1.26 & 3.31 & 56.32 & 1.32 & 3.80 & 59.78 \\
EMOS & 0.88 & 5.58 & 92.83 & 0.97 & 6.01 & 91.92 & 1.04 & 6.43 & 92.46 \\
\hline
BQN & 0.79 & 4.60 & 90.23 & 0.85 & 4.90 & 89.65 & 0.95 & 5.56 & 90.70 \\
DRN & 0.79 & 4.75 & 91.43 & 0.85 & 5.11 & 91.08 & 0.95 & 5.68 & 91.78 \\
\hline
ED-BQN & 0.80 & 4.56 & 89.83 & 0.86 & 4.92 & 89.56 & 0.95 & 5.55 & 90.55 \\
ED-DRN & 0.79 & 4.70 & 91.17 & 0.86 & 5.15 & 91.13 & 0.95 & 5.76 & 92.07 \\
ST-BQN & 0.80 & 4.67 & 90.20 & 0.87 & 5.01 & 89.94 & 0.96 & 5.61 & 90.70 \\
ST-DRN & 0.80 & 4.77 & 91.34 & 0.86 & 5.17 & 91.13 & 0.96 & 5.83 & 92.24 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Mean CRPS in m/s, PI length in m/s and PI coverage in % of the postprocessing methods for the different lead times of the wind gust data. Recall that the nominal level of the PIs is approximately 90.48%.

Figure 2: Calibration histograms of the postprocessing models on 20-member wind gust forecasts (bottom), 11-member EUPPBench reforecast ensembles (top, left) and 51-member forecast ensembles (top, right).

### Generalization to 51-member forecast ensembles

As before, postprocessing outperforms the EPS forecasts and results in calibrated and accurate forecasts (cf. Table 5 and Fig. 2). Notably, all models have been trained purely on 11-member reforecasts and are not fine-tuned to the 51-member forecast ensembles. The CRPS scores are similar, with almost identical values for all models except EMOS, for all lead times. The ST models again perform the best for the shortest lead time. For the DRN forecasts, we find that the ensemble-based networks tend to reduce the PI length, as it is smaller in all cases except lead time 120h. The corresponding PI coverages are closely connected to the length of the PIs and indicate that the PIs are too large for most postprocessing models, as the observed coverages are above the nominal level. The calibration of the methods is not as good as in the other case studies, as indicated by the PIT histograms in Fig. 2, which may be a consequence of the large learning rate used in training the models (cf. supplementary materials). All BQN forecasts have problems in the tails, where the lower and upper bounds are too extreme, such that too few observations fall into the outer bins.
DRN yields similar results as for the reforecast data, with a too heavy-tailed forecast distribution, as indicated by the least frequent last bin. The differences between the methods themselves are again minor. Still, all postprocessing methods generate reasonably well-calibrated forecasts. Overall, the ensemble-based models result in state-of-the-art performance for the generalization to 51-member forecasts or offer advantages over the summary-based benchmark methods.

## 7 Analysis of predictor importance

We analyse how the different model types distill relevant information out of the ensemble predictors. For this, we propose an ensemble-oriented feature permutation importance (FPI) analysis to assess which distribution properties of the ensemble-valued predictors have the most effect on the final prediction. In its original form, FPI (e.g., Breiman 2001; Rasp and Lerch 2018; Schulz and Lerch 2022b) is used to assign relevance scores to scalar-valued predictors by randomly shuffling the values of a single predictor across the dataset. While the idea of shuffling predictor samples translates identically from scalar-valued to ensemble-valued predictors, ensemble predictors possess internal degrees of freedom (DOFs), such as ensemble mean and ensemble range, which may affect the prediction differently. In addition to ensemble-internal DOFs, the perturbed predictor ensemble is embedded in the context of the remaining multivariate ensemble predictors, such that covariances, copulas or the rank order of the ensemble members may carry information. To account for such effects, we introduce a conditional permutation strategy that singles out the effects of different ensemble properties.

### Importance of the ensemble information

Following the notation of section 3, let \(g\) denote a postprocessing system that translates a raw ensemble forecast \(X=\left\{\mathbf{x}_{m}\in\mathcal{P}:m=1,...,M\right\}\in\left[\mathcal{P}\right]_{M}\) into a postprocessed distribution descriptor \(\mathbf{\theta}\), and let for each member forecast \(\mathbf{x}_{m}\) and predictor channel \(i\), \(1\leq i\leq p\), \(\mathbf{x}_{m}^{(i)}\) denote the value of predictor \(i\) in the respective member. Let further \(\mathcal{D}=\left\{\left(X(t),y(t)\right):1\leq t\leq T\right\}\) be a test dataset for evaluation, consisting of \(T\in\mathbb{N}\) known raw forecast-observation pairs. Given a (negatively oriented) accuracy measure \(\bar{S}\), we write \(\bar{S}_{\mathcal{D}}\left[g\right]\) to denote the accuracy score of \(g\), subject to data \(\mathcal{D}\). From here on, we choose \(\bar{S}\) to be the expected CRPS, and assume that all scores are computed based on the same test dataset, allowing us to drop the dataset index, i.e., \(\bar{S}_{\mathcal{D}}\left[g\right]\equiv\bar{S}\left[g\right]\). In this notation, the relative FPI, as used in Schulz and Lerch (2022b), can be written as

\[\Delta_{0}(P):=\frac{\bar{S}[g\circ P]-\bar{S}[g]}{\bar{S}[g]}, \tag{8}\]

wherein \(P\) indicates a perturbation operator that alters parts of the predictor data, and \(\circ\) denotes function composition. For the classical FPI, we denote the permutation operator as \(\Pi_{\pi}^{(i)}\), which shuffles the \(i\)-th predictor channel of the raw ensembles according to a permutation \(\pi\) of the dataset \(\mathcal{D}\) (omitted in the notation for brevity). For ensemble-valued predictors, we consider two generalizations of this operator.
We refer to these as the fully-random permutation, \(\Pi_{\pi}^{(i)}\), and the rank-aware random permutation, \(\tilde{\Pi}_{\pi}^{(i)}\). The former acts as a direct analog of the scalar-valued permutation case, i.e., for all \(1\leq t\leq T\) and \(1\leq m\leq M\), it replaces the values \(\mathbf{x}_{m}^{(i)}\) of the ensemble \(X(t)\) with arbitrary values \(\mathbf{x}_{m^{\prime}}^{(i)}\), \(1\leq m^{\prime}\leq M\), from the ensemble \(X(\pi(t))\), without replacement. Thus, it destroys all information of the original ensemble. The latter ranks the member values \(\mathbf{x}_{m}^{(i)}\) in \(X(t)\) and replaces them with values \(\mathbf{x}_{m^{\prime}}^{(i)}\) from \(X(\pi(t))\), where the \(m^{\prime}\) are chosen such that all members are used exactly once and the perturbed ensemble possesses the same ranking order as the original one. It thus preserves the ordering of the perturbed predictors in the context of the remaining predictors. In practice, we note that the differences in feature importance for both variants are very minor, such that we select only the rank-aware variant for further analysis.

To probe the importance of ensemble-internal DOFs, we consider additional perturbation operators, which rely on conditional shuffling of the ensemble predictors. For this, let \(s:\left[\mathbb{R}\right]_{M}\rightarrow\mathbb{R}\) be a summary function, which translates an ensemble of scalar predictor values into a real-valued summary statistic, such as the ensemble mean or standard deviation. Then an \(s\)-conditional shuffling operator \(\Pi_{(\pi_{b})\mid s}^{(i)}\) is defined as follows. For all raw predictions \(X(t)\) in the dataset, the predictor ensemble for the \(i\)-th predictor, \(X^{(i)}(t)=\{\mathbf{x}_{m}^{(i)}:\mathbf{x}_{m}\in X(t)\}\), is extracted and summary statistics \(s(X^{(i)}(t))\) are computed. The observed summary statistics are ranked from 1 to \(T\) and the corresponding ensembles \(X(t)\) are distributed into \(B\in\mathbb{N}\) evenly spaced bins, according to these ranks. For each bin \(b\), \(0\leq b<B\), a permutation \(\pi_{b}\) is sampled randomly and the values of the \(i\)-th predictor are shuffled bin-wise according to these permutations. For suitably sized bins, the shuffling preserves information about \(s\) and erases information about other DOFs. The rank-based binning ensures that each of the bins contains an approximately equal number of samples, independent of the details of the predictor distribution. In our experiments, \(B=100\) bins yielded a good balance between information preservation and randomization. Results for larger and smaller bin sizes were qualitatively similar. Note that for predictors in which certain values appear with large multiplicity, such as zero in censored variables like precipitation, the ranking is computed on the unique values of the summary statistics. This enforces a small amount of variation even in bins with degenerate values. In analogy to the rank-aware (unconditional) shuffling, the rank-aware \(s\)-conditional shuffling is denoted as \(\tilde{\Pi}_{(\pi_{b})\mid s}^{(i)}\).

For the conditional FPI analysis, we suggest the computation of importance ratios,

\[\chi(P\,|\,R):=\frac{\bar{S}[g\circ P]-\bar{S}[g]}{\bar{S}[g\circ R]-\bar{S}[g]}, \tag{9}\]

which measure the fraction of skill restored (or destroyed) by applying a shuffling operation \(P\) instead of a reference operation \(R\).
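A minimal NumPy sketch of the \(s\)-conditional, bin-wise shuffling may look as follows (our own illustration; the function and variable names are hypothetical, and the rank-aware variant would additionally reorder the replacement values to match the original member ranking). It shuffles the \(i\)-th predictor channel only among forecasts whose summary statistic falls into the same rank bin, thereby preserving information about \(s\) while randomizing the remaining ensemble-internal DOFs:

```python
import numpy as np

def conditional_shuffle(X, i, s, n_bins=100, rng=None):
    """s-conditional shuffling of predictor channel i.

    X: array of shape (T, M, p) holding T raw M-member ensemble forecasts.
    s: summary function mapping an (M,)-array to a scalar, e.g. np.mean.
    Returns a copy of X in which channel i is shuffled within rank bins of s.
    """
    rng = np.random.default_rng(rng)
    X_pert = X.copy()
    stats = np.array([s(X[t, :, i]) for t in range(X.shape[0])])
    ranks = np.argsort(np.argsort(stats))     # ranks 0..T-1 of the statistic
    bins = ranks * n_bins // X.shape[0]       # evenly filled rank bins 0..B-1
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        X_pert[idx, :, i] = X_pert[rng.permutation(idx), :, i]  # bin-wise shuffle
    return X_pert

def importance_ratio(score_cond, score_ref, score_orig):
    """Eq. (9): fraction of the skill deficit restored by conditioning."""
    return (score_cond - score_orig) / (score_ref - score_orig)
```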
The ratios of interest are \(\chi\left(\tilde{\Pi}_{(\pi_{b})\mid s}^{(i)}\,\middle|\,\tilde{\Pi}_{\pi}^{(i)}\right)\), which measure how much of the prediction skill deficit due to randomized shuffling of predictor \(i\) is restored by preserving information about the summary statistic \(s\). In the absence of sampling errors due to finite data, \(\chi\left(\tilde{\Pi}_{(\pi_{b})\mid s}^{(i)}\,\middle|\,\tilde{\Pi}_{\pi}^{(i)}\right)\) yields values between 0 and 1, with 0 indicating uninformative summary statistics, and 1 suggesting that knowledge of \(s\) is sufficient to restore the original model skill entirely. Empirically, we find that the theoretical bounds are preserved well for predictors with sufficiently large FPI.

\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
Lead Time & \multicolumn{3}{c}{24h} & \multicolumn{3}{c}{72h} & \multicolumn{3}{c}{120h} \\
\hline
Method & CRPS & PI length & PI coverage & CRPS & PI length & PI coverage & CRPS & PI length & PI coverage \\
\hline
EPS & 1.21 & 1.81 & 39.85 & 1.28 & 3.29 & 56.62 & 1.54 & 4.89 & 63.44 \\
EMOS & 0.82 & 3.85 & 82.56 & 0.96 & 4.72 & 83.96 & 1.25 & 6.05 & 83.12 \\
\hline
BQN & 0.67 & 3.32 & 84.73 & 0.87 & 4.44 & 86.08 & 1.20 & 6.24 & 86.09 \\
DRN & 0.67 & 3.28 & 84.16 & 0.86 & 4.27 & 84.58 & 1.19 & 5.70 & 83.09 \\
\hline
ED-BQN & 0.67 & 3.29 & 84.57 & 0.87 & 4.45 & 86.05 & 1.19 & 6.03 & 85.45 \\
ED-DRN & 0.67 & 3.19 & 83.39 & 0.87 & 4.25 & 84.11 & 1.19 & 5.64 & 82.60 \\
ST-BQN & 0.66 & 3.16 & 84.01 & 0.87 & 4.31 & 84.85 & 1.19 & 6.07 & 85.15 \\
ST-DRN & 0.66 & 3.06 & 82.67 & 0.87 & 4.18 & 83.44 & 1.19 & 5.77 & 83.17 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Mean CRPS in K, PI length in K and PI coverage in % of the postprocessing methods for the different lead times for the EUPPBench reforecast data. Recall that the nominal level of the PIs is approximately 83.33%.

\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
Lead Time & \multicolumn{3}{c}{24h} & \multicolumn{3}{c}{72h} & \multicolumn{3}{c}{120h} \\
\hline
Method & CRPS & PI length & PI coverage & CRPS & PI length & PI coverage & CRPS & PI length & PI coverage \\
\hline
EPS & 1.21 & 2.65 & 57.54 & 1.18 & 4.71 & 74.78 & 1.38 & 7.14 & 83.26 \\
EMOS & 0.79 & 6.31 & 96.26 & 0.90 & 7.74 & 97.49 & 1.16 & 9.92 & 97.47 \\
\hline
BQN & 0.64 & 4.32 & 94.13 & 0.80 & 6.52 & 97.23 & 1.13 & 9.18 & 97.58 \\
DRN & 0.64 & 5.48 & 97.92 & 0.80 & 7.21 & 98.37 & 1.13 & 9.58 & 98.28 \\
\hline
ED-BQN & 0.64 & 4.74 & 96.30 & 0.81 & 6.49 & 97.42 & 1.12 & 8.81 & 97.15 \\
ED-DRN & 0.64 & 5.31 & 97.62 & 0.81 & 7.09 & 98.19 & 1.12 & 9.61 & 97.90 \\
ST-BQN & 0.62 & 4.61 & 95.96 & 0.80 & 6.18 & 96.31 & 1.13 & 8.68 & 96.05 \\
ST-DRN & 0.62 & 5.10 & 97.40 & 0.81 & 6.88 & 97.55 & 1.13 & 9.43 & 97.11 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Mean CRPS in K, PI length in K and PI coverage in % of the postprocessing methods for the different lead times for the EUPPBench forecast data. Recall that the nominal level of the PIs is approximately 96.15%.

### Results

We compute FPI scores \(\Delta_{0}\left(\Pi_{\pi}^{(i)}\right)\) for all ensemble predictors and model variants. Fig. 3 depicts a selection of the FPI scores of the most important ensemble-valued predictors in both tasks. A figure with all ensemble-valued predictors is shown in the supplementary materials. Scalar-valued predictors (cf. section 3d for the terminology) are omitted for easier comparison with the conditional importance measures.
The boxplots indicate statistics obtained from 20 separate model runs, which have been evaluated independently; the bars show the mean importance. The accuracy of the wind gust models is dominated by VMAX-10M and supplemented by additional predictors with lower importance. Temperature-like predictors obtain similar or higher scores than, e.g., winds at the 850 hPa and 950 hPa pressure levels. Note that for each lead time, the importance highlights different temperature predictors, which may be attributed to the diurnal cycle. Similar arguments can explain the increasing importance of ASOB-S (short-wavelength radiation balance at the surface) with increasing lead time. In a direct comparison of the model variants, we find that the differences between BQN-type and DRN-type models are very minor. However, ED-type models attribute higher importance to the most relevant predictors (VMAX-10M, T1000, T-2M), whereas ST-type models distribute the importance more evenly and use more diverse predictor information.

In the EUPPBench case, the models focus mainly on temperature-like predictors as well as surface radiation balances. Notably, for the summary-based models, mn2t6 and mx2t6 tend to be more important than the primary predictor t2m up to lead time 72h. Since the diurnal cycle does not cause variations between the lead times here, differences in the predictor utilization must be due to the increasing uncertainty at longer lead times. The ensemble-based models rely relatively more strongly on the t2m predictor for the shorter lead time, whereas for longer lead times, the information utilization is more diverse. Qualitative differences between ED- and ST-type models are observed with respect to the humidity-related predictors tcw and tcwv. Only ST models recognize the value in these predictors, which may explain in part the different generalization properties of ED and ST models on the EUPPBench reforecast and forecast datasets.

Figs. 4 and 5 investigate the importance of ensemble-internal DOFs of selected ensemble predictors for the permutation-invariant model architectures. For both datasets, we choose a set of representative high-importance predictors and display the DOF importance for the ensemble-based models. For all predictors and lead times, we compute importance ratios \(\chi\left(\tilde{\Pi}_{(\pi_{b})\mid s}^{(i)}\,\middle|\,\tilde{\Pi}_{\pi}^{(i)}\right)\) for a selection of commonly used ensemble summary statistics. Specifically, we consider the ensemble mean as a proxy for the location of the distribution, the ensemble maximum and minimum to assess the impact of extreme values, the standard deviation, inter-quartile range and full range (difference between maximum and minimum) to quantify the scale of the distribution, as well as skewness and kurtosis as higher-order summary statistics. Due to the pairwise similarity of some of the measures, it is to be expected that conditional shuffling with respect to one of the measures preserves information also about others. To assess the information overlap between shuffling patterns with different reference statistics, Spearman rank correlations are computed between the shuffled statistics and the original statistics. The resulting correlation matrices illustrate how accurately the rank order for one statistic is preserved if the data is conditionally shuffled with respect to another. Rank correlations are chosen to minimize the effect of the marginal distribution of the respective statistics values, since these may vary considerably between different predictors and summary statistics.
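This information-overlap check can be reproduced with a few lines of SciPy (our own sketch, building on the hypothetical `conditional_shuffle` helper from the previous code block): for a pair of summary statistics, shuffle conditionally on one and correlate the values of the other before and after shuffling.

```python
import numpy as np
from scipy.stats import spearmanr

def shuffle_overlap(X, i, s_cond, s_probe, n_bins=100, rng=0):
    """Spearman rank correlation between a probe statistic s_probe evaluated
    on the original and on the s_cond-conditionally shuffled ensembles."""
    X_pert = conditional_shuffle(X, i, s_cond, n_bins=n_bins, rng=rng)
    orig = np.array([s_probe(X[t, :, i]) for t in range(X.shape[0])])
    pert = np.array([s_probe(X_pert[t, :, i]) for t in range(X.shape[0])])
    return spearmanr(orig, pert).correlation

# Example: how well does conditioning on the mean preserve the maximum?
# rho = shuffle_overlap(X, i=0, s_cond=np.mean, s_probe=np.max)
```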
The results are depicted as heatmaps in Figs. 4 and 5. For wind gust postprocessing (Fig. 4), the importance ratios suggest in many cases that virtually all of the predictor information can be restored by conditioning the shuffling procedure on the ensemble mean. Notably, this is the case for VMAX-10M, consistently for all lead times, as well as for T-G and FI850. The interaction plots suggest that the mean-conditioning preserves information about extrema to a high degree, whereas ensemble range and higher-order statistics information are mixed up. These findings are supported by observations in Schulz and Lerch (2022b), who note that omitting the standard deviation of the auxiliary ensemble predictors helps to improve the quality of the network predictions. ASOB-S and WIND850 are interesting corner cases, in which the mean-conditioning restores substantial amounts of the model skill but fails to reproduce it completely. This indicates that, while the ensemble mean is an important predictor, the remaining DOFs deliver complementary information that modulates the interpretation of the mean value. Note that the information overlap between location-like and scale-like metrics for ASOB-S predictors at 6h lead time is again an artifact of the diurnal cycle. At 6h lead time, a substantial fraction of the ASOB-S predictor ensembles exhibit zero mean and no variance due to the lack of solar irradiation, which impacts the correlation values. The relevance scores for FI850 statistics suggest possible higher-order interactions. For all lead times, it is possible to restore half of the lost model skill by conditioning on skewness and kurtosis, and the corresponding correlation plots suggest that this cannot be attributed to information overlap with other predictors, because the correlations are consistently close to zero. An explanation of this observation could be that a fully randomized shuffling of the predictor ensembles destroys the information coherence with other predictors.

In surface temperature postprocessing, t2m is an interesting case, in which for all lead times neither of the summary statistics alone is sufficient to restore the unperturbed model performance. This indicates that both ED- and ST-type models learn to attend to the details of the ensemble distribution and marks a difference from the wind gust case study, where most of the information is conveyed in the ensemble means. With increasing lead time, the mean-conditional shuffling becomes more effective in restoring the model skill. This may be due to the decreasing reliability of the EPS prediction system with increasing lead time. Similar patterns are observed also in the remaining predictors. While the model skill cannot be restored with mean-only conditioning for 24h lead time, the mean appears to become more informative for longer lead times. The radiation parameter ssrd6 sticks out visually with high correlations between location-related predictors, which occurs for the same reasons as for the ASOB-S parameter discussed before.

## 8 Discussion and Conclusion

We have introduced permutation-invariant NN architectures for postprocessing ensemble forecasts by selecting two exemplary model families and adapting them to the postprocessing task.
In two case studies, using datasets for wind gust and surface temperature postprocessing, we have validated the model performance and compared the permutation-invariant models against benchmark models from prior work. Our results show that permutation-invariant postprocessing networks achieve state-of-the-art performance in both applications. All permutation-invariant architectures outperform both the raw ensemble forecast and conventional postprocessing via EMOS by a large margin, but no systematic differences can be observed between the (more complex) permutation-invariant models and existing NN-based solutions. Based on a subsequent assessment of the permutation importance of ensemble-internal DOFs, we have seen that for many auxiliary ensemble predictors, preserving information about the ensemble mean is sufficient to maintain almost the complete information about the postprocessing target, while more detailed information is required about the primary predictors. These findings are consistent with prior work and are more comprehensive due to the larger variety of summary statistics considered in the analysis.

Figure 3: Permutation feature importance for summary-based networks (top) and permutation-invariant models (bottom) for EUPPBench and wind gust postprocessing. Predictors named _ens_ in the top figure correspond to the primary predictors t2m and VMAX-10M, respectively. The suffix _sd_ indicates the ensemble standard deviation of the predictor.

A striking advantage of the permutation-invariant models lies in the generality of the approach, i.e., the models possess the flexibility to attend to the important features in the predictor ensembles, and they are capable of identifying those during training (as shown in our feature analysis). As the added flexibility comes with a surplus of computational complexity, the benefits of the respective methods should be weighed carefully. In operational settings, it may be reasonable to consider permutation-invariant models, as proposed here, as a tool for identifying relevant aspects of the input data. The gained knowledge can then be used for data reduction and to train reduced models with a more favorable accuracy-complexity trade-off.

Despite these advantages, the apparent similarity between the performance of the ensemble-based and summary-based models remains baffling and requires further clarification. Supposing capable ensemble predictions, it seems reasonable, from a meteorological perspective, to expect that postprocessing models that operate on the entire ensemble can learn more complex patterns and relationships than models that operate on simple summary statistics. The lack of substantial improvements, as seen in this study, admits different explanations. One possibility would be that the available datasets are insufficient to establish statistically relevant connections between higher-order ensemble-internal patterns and the predicted variables. Problems could arise, e.g., due to insufficient sample counts of the overall datasets or due to ensemble sizes being too low to provide reliable representations of the forecast distribution. Yet another reason could lie in the fact that the generation mechanisms underlying the NWP ensemble forecasts fail to achieve meaningful representations of such higher-order distribution information, which would raise follow-up questions regarding the design of future ensemble prediction systems.
Given the impact and potential implications of the latter alternative, future work should examine the information content of raw ensemble predictions in more detail. The proposed permutation-invariant model architectures may help to achieve this, e.g., by conducting postprocessing experiments with dynamical toy systems that are cheap to simulate and simple to understand, such that large datasets can be generated and evidence for both hypotheses can be distinguished.

Acknowledgments. This research was funded by the subprojects B5 and C5 of the Transregional Collaborative Research Center SFB/TRR 165 "Waves to Weather" (www.wavestoweather.de) funded by the German Research Foundation (DFG). Sebastian Lerch gratefully acknowledges support by the Vector Stiftung through the Young Investigator Group "Artificial Intelligence for Probabilistic Weather Forecasting".

Data availability statement. The case study on surface temperature postprocessing is based on the EUPPBench dataset, which is publicly available. See Demaeyer et al. (2023) for details. The wind gust dataset is proprietary but can be obtained from the DWD for research purposes. Code with implementations of all methods is publicly available (Höhlein, 2023).

Figure 5: Importance of ensemble-internal DOFs for temperature postprocessing. Same as Fig. 4.

## Appendix A Description of predictors

The descriptions of the ensemble-valued predictor variables used in both case studies are shown in Tables A1 and A2 for wind-gust and surface-temperature postprocessing, respectively. The predictors listed in Table A3 are not ensemble-valued and are used equally in both cases.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
Short name & Units & Full name & Levels \\
\hline
VMAX & m/s & Maximum wind, i.e. wind gusts & 10 m \\
U & m/s & U-component of wind & 10 m, 1000 hPa, 950 hPa, 850 hPa, 700 hPa, 500 hPa \\
V & m/s & V-component of wind & 10 m, 1000 hPa, 950 hPa, 850 hPa, 700 hPa, 500 hPa \\
WIND & m/s & Wind speed, derived from U and V via \(\sqrt{\text{U}^{2}+\text{V}^{2}}\) & 10 m, 1000 hPa, 950 hPa, 850 hPa, 700 hPa, 500 hPa \\
OMEGA & Pa/s & Vertical velocity (pressure) & 1000 hPa, 950 hPa, 850 hPa, 700 hPa, 500 hPa \\
\hline
T & K & Temperature & Ground-level, 2 m, 1000 hPa, 950 hPa, 850 hPa, 700 hPa, 500 hPa \\
T-D & K & Dew point temperature & 2 m \\
\hline
RELHUM & \% & Relative humidity & 1000 hPa, 950 hPa, 850 hPa, 700 hPa, 500 hPa \\
TOT-PREC & kg/m\({}^{2}\) & Total precipitation (acc.) & – \\
RAIN-GSP & kg/m\({}^{2}\) & Large scale rain (acc.) & – \\
SNOW-GSP & kg/m\({}^{2}\) & Large scale snowfall - water equivalent (acc.) & – \\
W-SNOW & kg/m\({}^{2}\) & Snow depth water equivalent & – \\
W-SO & kg/m\({}^{2}\) & Column integrated soil moisture & multilayers: 1, 2, 6, 18, 54 \\
CLC & \% & Cloud cover & T: total; L: soil to 800 hPa; M: 800 to 400 hPa; H: 400 to 0 hPa \\
HBAS-SC & m & Cloud base above mean sea level, shallow convection & – \\
HTOP-SC & m & Cloud top above mean sea level, shallow convection & – \\
\hline
ASOB-S & W/m\({}^{2}\) & Net short wave radiation flux & surface \\
ATHB-S & W/m\({}^{2}\) & Net long wave radiation flux (m) & surface \\
ALB-RAD & \% & Albedo (in short-wave) & – \\
\hline
PMSL & Pa & Pressure reduced to mean sea level & – \\
FI & m\({}^{2}\)/s\({}^{2}\) & Geopotential & 1000 hPa, 950 hPa, 850 hPa, 700 hPa, 500 hPa \\
\hline \hline
\end{tabular}
\end{table}
Table A1: Description of meteorological parameters for wind-gust postprocessing (cf. Schulz and Lerch 2022b).
Target variable: wind speed of gust (observations). Primary predictor: VMAX-10M (ensemble forecast).

Table A2: Description of meteorological parameters for surface temperature postprocessing (EUPPBench, cf. Demaeyer et al. 2023). Target variable: t2m (observations). Primary predictor: t2m (ensemble forecast).

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Predictor & Type & Description \\
\hline
yday & Temporal & Cosine-transformed day of the year \\
lat & Spatial & Latitude of the station \\
lon & Spatial & Longitude of the station \\
alt & Spatial & Altitude of the station \\
orog & Spatial & Difference of station altitude and model surface height of nearest grid point \\
loc-bias & Spatial & Mean bias of ensemble forecasts, computed from the training data \\
loc-cover & Spatial & Mean coverage of ensemble forecasts, computed from the training data \\
\hline \hline
\end{tabular}
\end{table}
Table A3: Description of the temporal and spatial predictors that are not ensemble-valued and are used in both case studies.
2309.09638
Neural Network-Based Rule Models With Truth Tables
Understanding the decision-making process of a machine/deep learning model is crucial, particularly in security-sensitive applications. In this study, we introduce a neural network framework that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks. Our proposed framework, called $\textit{Truth Table rules}$ (TT-rules), is built upon $\textit{Truth Table nets}$ (TTnets), a family of deep neural networks initially developed for formal verification. By extracting the set of necessary and sufficient rules $\mathcal{R}$ from the trained TTnet model (global interpretability), yielding the same output as the TTnet (exact interpretability), TT-rules effectively transforms the neural network into a rule-based model. This rule-based model supports binary classification, multi-label classification, and regression tasks for tabular datasets. Furthermore, our TT-rules framework optimizes the rule set $\mathcal{R}$ into $\mathcal{R}_{opt}$ by reducing the number and size of the rules. To enhance model interpretation, we leverage Reduced Ordered Binary Decision Diagrams (ROBDDs) to visualize these rules effectively. After outlining the framework, we evaluate the performance of TT-rules on seven tabular datasets from finance, healthcare, and justice domains. We also compare the TT-rules framework to state-of-the-art rule-based methods. Our results demonstrate that TT-rules achieves equal or higher performance compared to other interpretable methods while maintaining a balance between performance and complexity. Notably, TT-rules presents the first accurate rule-based model capable of fitting large tabular datasets, including two real-life DNA datasets with over 20K features. Finally, we extensively investigate a rule-based model derived from TT-rules using the Adult dataset.
Adrien Benamira, Tristan Guérand, Thomas Peyrin, Hans Soegeng
2023-09-18T10:13:59Z
http://arxiv.org/abs/2309.09638v1
# Neural Network-Based Rule Models With Truth Tables ###### Abstract Understanding the decision-making process of a machine/deep learning model is crucial, particularly in security-sensitive applications. In this study, we introduce a neural network framework that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks. Our proposed framework, called _Truth Table rules_ (TT-rules), is built upon _Truth Table nets_ (TTnets), a family of deep neural networks initially developed for formal verification. By extracting the set of necessary and sufficient rules \(\mathcal{R}\) from the trained TTnet model (global interpretability), yielding the same output as the TTnet (exact interpretability), TT-rules effectively transforms the neural network into a rule-based model. This rule-based model supports binary classification, multi-label classification, and regression tasks for tabular datasets. Furthermore, our TT-rules framework optimizes the rule set \(\mathcal{R}\) into \(\mathcal{R}_{opt}\) by reducing the number and size of the rules. To enhance model interpretation, we leverage Reduced Ordered Binary Decision Diagrams (ROBDDs) to visualize these rules effectively. After outlining the framework, we evaluate the performance of TT-rules on seven tabular datasets from finance, healthcare, and justice domains. We also compare the TT-rules framework to state-of-the-art rule-based methods. Our results demonstrate that TT-rules achieves equal or higher performance compared to other interpretable methods while maintaining a balance between performance and complexity. Notably, TT-rules presents the first accurate rule-based model capable of fitting large tabular datasets, including two real-life DNA datasets with over 20K features. Finally, we extensively investigate a rule-based model derived from TT-rules using the Adult dataset.

## 1 Introduction

Deep Neural Networks (DNNs) have been widely and successfully employed in various machine learning tasks, but concerns regarding their security and trustworthiness persist. One of the primary issues associated with DNNs, as well as ensemble ML models in general, is their lack of explainability and the challenge of incorporating human knowledge into them due to their inherent complexity [40, 41]. Therefore, there is a significant research focus on achieving global and exact interpretability for these systems, especially in safety-critical applications [4, 20]. In contrast, rule-based models [23], including tree-based models [11], are specifically designed to offer global and exact explanations, providing insights into the decision-making process that yields the same output as the model. However, they generally exhibit lower performance compared to other models like DNNs or ensemble ML models [28]. Additionally, they encounter scalability issues when dealing with large datasets and lack flexibility in addressing various types of tasks, often being limited to binary classification [14]. To the best of our knowledge, there is currently no family of DNNs that possesses both global and exact interpretability akin to rule-based models, while also demonstrating scalability on real-life datasets without the need for an explainer. This limitation is significant since explainer methods often provide only local, inexact, and potentially misleading explanations [40, 41, 43].
**Our approach.** This paper introduces a novel neural network framework that effectively combines the interpretability of rule-based models with the high performance of DNNs. Our framework, called TT-rules, builds upon the advancements made by Benamira _et al._[10] and Agarwal _et al._[3]. The latter proposed a neural network architecture that achieves interpretability by utilizing several DNNs, each processing a single continuous input feature, and a linear layer for merging them. The effectiveness of aggregating local features on image datasets to achieve high accuracy has been demonstrated by Brendel _et al._[15]. Similarly, Agarwal _et al._[3] showed that aggregating local features on tabular datasets can yield high accuracy. Furthermore, Benamira _et al._[10] introduced a new Convolutional Neural Network (CNN) filter function called the Learning Truth Table (LTT) block. The LTT block has the unique property that its complete distribution can be computed in constant and practical time, regardless of the architecture. This allows the transformation of the LTT block from weights into an exact mathematical Boolean formula. Since an LTT block is equivalent to a CNN filter, the entire neural network model, known as Truth Table Net (TTnet), can itself be represented as a Boolean formula. To summarize, while Agarwal _et al._[3] focused on continuous inputs, and Benamira _et al._[10] focused on discrete inputs, our approach leverages the strengths of both works to achieve high accuracy while maintaining global and exact interpretability. **Our contributions.** To optimize the rule set \(\mathcal{R}\), our TT-rules framework employs two post-training steps. Firstly, we automatically integrate _"Don't Care Terms"_ (\(DCT\)), utilizing human logic, into the truth tables. This reduces the size of each rule in the set \(\mathcal{R}\). Secondly, we introduce and analyze an inter-rule correlation score to decrease the number of rules in \(\mathcal{R}\). These optimizations, specific to the TT-rules framework, automatically and efficiently transform the set \(\mathcal{R}\) into an optimized set \(\mathcal{R}_{opt}\) in constant time. We also quantify the trade-offs among performance, the number of rules, and their sizes. At this stage, we obtain a rule-based model from the trained TTnet, which can be used for prediction by adding up the rules in \(\mathcal{R}_{opt}\) according to the binary or floating-point final linear layer. To enhance the interpretability of the model, we convert all rule equations into their equivalent ROBDD representations. **Our claims.** A) The TT-rules framework demonstrates versatility and effectiveness across various tasks, including binary classification, multi-classification, and regression. A-1) Our experiments encompass five machine learning datasets: Diabetes [22] in healthcare; Adult [22], HELOC [1], and California Housing [34] in finance; and Compas [7] in the justice domain. The results clearly indicate that the TT-rules framework surpasses most interpretable models in terms of Area Under Curve/Root Mean Square Error (AUC/RMSE), including linear/logistic regression, decision trees, generalized linear models, and neural additive models. A-2) On two datasets, the TT-rules framework performs comparably to XGBoost and DNN models.
A-3) We conducted a comparative analysis of the performance-complexity tradeoff between our proposed TT-rules framework and other state-of-the-art rule-based models, such as generalized linear models [47], RIPPER [18, 19], decision trees (DT) [36], and ORS [48], specifically focusing on binary classification tasks. Our findings demonstrate that the TT-rules framework outperforms all the aforementioned models, except for the generalized linear models, in terms of the performance-complexity tradeoff. B) Scalability is a key strength of our model, enabling it to handle large datasets with tens of thousands of features, such as DNA datasets [44, 37, 33], which consist of over 20K features. Our model not only scales efficiently but also performs feature reduction, compressing the 20K features of the TCGA DNA dataset [33] into 1K rules, and reducing the 23K features of the single-cell DNA datasets [44, 37] into 9K rules. C) A distinctive feature of our framework lies in its inherent global and exact interpretability. C-1) To showcase its effectiveness, we provide a concrete use case with the Adult dataset and thoroughly investigate its interpretability. C-2) We explore the potential for incorporating human knowledge into our framework. C-3) Additionally, we highlight how experts can leverage the rules to detect concept shifts, further emphasizing the interpretability aspect of our framework. **Outline.** This paper is structured as follows. Section 2 presents a comprehensive literature review on rule-based models. In Section 3, we establish the notations and fundamental concepts that will be utilized throughout the paper. Section 4 offers a detailed analysis of the TT-rules framework, exploring its intricacies and functionalities. In Section 5, we present the experimental results obtained and compare them with the current state-of-the-art approaches. Additionally, we showcase the scalability of our framework and illustrate its applicability through a compelling case study. The limitations of the proposed approach are discussed in Section 6, followed by the concluding remarks in Section 7.

## 2 Related work

### Classical rule-based models

Rule-based models are widely used for interpretable classification and regression tasks. This class encompasses various models such as decision trees [11], rule lists [42, 6, 21], linear models, and rule sets [30, 18, 19, 38, 47]. Rule sets, in particular, offer high interpretability due to their straightforward inference process [30]. However, traditional rule sets face limitations when applied to large tabular datasets, binary classification tasks, and capturing complex feature relationships. These limitations result in reduced accuracy and limited practicality in real-world scenarios [48, 46]. To overcome these challenges, we leverage the recent work of Benamira _et al._[10], who proposed an architecture specifically designed to be encoded into CNF formulas [12]. This approach has demonstrated scalability on large datasets like ImageNet and can be extended to multi-label classification tasks. In this study, our objective is to extend Benamira's approach to handle binary and multi-class classification tasks, as well as regression tasks, across a wide range of tabular datasets ranging from 17 to 20K features.

### DNN-based rule models

There have been limited investigations into the connection between DNNs and rule-based models. Two notable works in this area are DNF-net [2] and RRL [46].
DNF-net focuses on the activation function but lacks available code, while RRL specifically addresses classification tasks. Although RRL achieved high accuracy on the Adult dataset, its interpretability raises concerns due to its complex nature, involving millions of terms, and its time-consuming training process [46]. Neural Additive Models (NAMs) [3] represent another type of neural network architecture that combines the flexibility of DNNs with the interpretability of additive models. While NAMs have demonstrated superior performance compared to traditional interpretable models, they do not strictly adhere to the rule-based model paradigm and can pose challenges in interpretation, especially when dealing with a large number of features. In this paper, we conduct a comparative analysis to evaluate the performance and interpretability of our TT-rules framework in comparison to NAMs [3].

## 3 Background

### Rule-based models

#### 3.1.1 Rules format: DNF and ROBDD

Rule-based models are a popular method for generating decision predicates expressed in DNF. For instance, in the Adult dataset [22], a rule for determining whether an individual would earn more than $50K/year might look like: \[((\text{Age}>34)\wedge\text{Married})\vee(\text{Male}\wedge(\text{Capital Loss}<1\text{k/year}))\] Although a rule is initially expressed in DNF format, a decision tree format is often preferred. To achieve this, the DNF is transformed into its equivalent Reduced Ordered Binary Decision Diagram (ROBDD) graph: a directed acyclic graph used to represent a Boolean function [31, 5, 16, 8].

#### 3.1.2 Inference with a rule-based model

In a binary classification problem, we are presented with a set of rules \(\mathcal{R}\) and a corresponding set of weights \(\mathcal{W}\). These rules and weights can be separated into two distinct sets, namely \(\mathcal{R}_{+}\) and \(\mathcal{W}_{+}\) for class 1, and \(\mathcal{R}_{-}\) and \(\mathcal{W}_{-}\) for class 0. Given an input \(I\), we can define the rule-based model as follows: \[Classifier(I,\mathcal{R})=\left\{\begin{array}{ll}1&\text{if }S_{+}(I)-S_{-}(I)>0\\ 0&\text{otherwise.}\end{array}\right.\] Here, \(S_{+}(I)\) and \(S_{-}(I)\) denote the scores for class 1 and class 0, respectively. These scores are calculated using the following equations: \[\left\{\begin{array}{l}S_{+}(I)=\sum_{(r_{+},w_{+})\in(\mathcal{R}_{+},\mathcal{W}_{+})}w_{+}\times\mathbb{I}_{r_{+}(I)\text{ is True}}\\ S_{-}(I)=\sum_{(r_{-},w_{-})\in(\mathcal{R}_{-},\mathcal{W}_{-})}w_{-}\times\mathbb{I}_{r_{-}(I)\text{ is True}}\end{array}\right.\] where \(\mathbb{I}_{r(I)\text{ is True}}\) represents the binary indicator that is equal to 1 if the input \(I\) satisfies the rule \(r\), and 0 otherwise. This rule-based model can be easily extended to multi-class classification and regression tasks.

#### 3.1.3 Comparing rule-based models

When comparing rule-based models, it is common to evaluate their quality based on three main criteria. The first is their performance, which can be measured using metrics such as AUC, accuracy, or RMSE. The second criterion is the number of rules used in the model. Finally, the overall complexity of the model is also taken into account, which is given as the sum of the size of each rule, for all rules in the model [23].
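As an illustration, the following is a minimal sketch of the inference procedure of Section 3.1.2 in Python; the rules, weights, and feature names below are hypothetical and only mimic the Adult-style example above.

```python
from typing import Callable, List, Tuple

Rule = Callable[[dict], bool]  # a DNF rule evaluated on a named-feature sample

def classify(sample: dict,
             pos: List[Tuple[Rule, float]],
             neg: List[Tuple[Rule, float]]) -> int:
    """Binary rule-based classifier: sum the weights of the satisfied
    class-1 and class-0 rules and return 1 iff S+(I) - S-(I) > 0."""
    s_pos = sum(w for r, w in pos if r(sample))
    s_neg = sum(w for r, w in neg if r(sample))
    return 1 if s_pos - s_neg > 0 else 0

# Hypothetical rules in the spirit of the example above:
pos_rules = [(lambda x: x["age"] > 34 and x["married"], 1.0),
             (lambda x: x["male"] and x["capital_loss"] < 1000, 1.0)]
neg_rules = [(lambda x: not x["married"], 1.0)]

sample = {"age": 40, "married": True, "male": False, "capital_loss": 0}
print(classify(sample, pos_rules, neg_rules))  # 1  (S+ = 1.0 > S- = 0.0)
```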
### Truth Table net (TTnet)

The paper [10] proposed a new CNN filter function, called the Learning Truth Table (LTT) block, for which one can compute the complete distribution in practical and constant time, regardless of the network architecture. Then, this LTT block is inserted inside a DNN in the same way CNN filters are integrated into deep convolutional neural networks.

#### 3.2.1 Overall LTT design

An LTT block must meet two essential criteria: * (A) The LTT block distribution must be entirely computable in practical and constant time, regardless of the complexity of the DNN. * (B) Once LTT blocks are assembled into a layer and layers into a DNN, the latter DNN should be scalable, especially on large-scale datasets such as ImageNet. To meet these criteria, Benamira _et al._[10] proposed the following LTT design rules: 1. Reduce the input size of the CNN filter to \(n\leq 9\). 2. Use binary inputs and outputs. 3. Ensure that the LTT block function uses a nonlinear function. As a result, each filter in our architecture becomes a truth table with a maximum input size of 9 bits. Notations. We denote the \(f^{th}\) 1D-LTT of a layer with input size \(n\), stride \(s\), and no padding as \(\Phi_{f}\). Let the input feature with a single input channel \(chn_{input}=1\) be represented as \((v_{0}\dots v_{L-1})\), where \(L\) is the length of the input feature. We define \(y_{i,f}\) as the output of the function \(\Phi_{f}\) at position \(i\): \[y_{i,f}=\Phi_{f}(v_{i\times s},v_{i\times s+1},\dots,v_{i\times s+(n-1)})\] Following the aforementioned rules (1) and (2), \(y_{i,f}\) and \((v_{i\times s},v_{i\times s+1},\dots,v_{i\times s+(n-1)})\) are binary values, and \(n\leq 9\). As a result, we can express the 1D-LTT function \(\Phi_{f}\) as a truth table by enumerating all \(2^{n}\) possible input combinations. The truth table can then be converted into an optimal (in terms of literals) \(\mathsf{DNF}\) formula using the Quine-McCluskey algorithm [13] for interpretation. **Example 1: From LTT weights to truth table and \(\mathsf{DNF}\).** In this example, we consider a pre-trained 1D-LTT \(\Phi_{f}\) with input size \(n=4\), a stride of size \(1\), and no padding. The architecture of \(\Phi_{f}\) is given in Figure 1(b), composed of two CNN filter layers: the first one has parameters \(\mathtt{W}_{1}\) with (input channel, output channel, kernel size, stride) = \((1,4,3,1)\), while the second \(\mathtt{W}_{2}\) has \((4,1,2,1)\). The inputs and outputs of \(\Phi_{f}\) are binary, and we denote the inputs as [\(x_{0}\), \(x_{1}\), \(x_{2}\), \(x_{3}\)]. To compute the complete distribution of \(\Phi_{f}\), we generate all \(2^{4}=16\) possible input/output pairs, as shown in Figure 1(a), and obtain the truth table in Table 1. This truth table fully characterizes the behavior of \(\Phi_{f}\). We then transform the truth table into a \(\mathsf{DNF}\) using the Quine-McCluskey algorithm [13]. This algorithm provides an optimal (in terms of literals) \(\mathsf{DNF}\) formula that represents the truth table. The resulting \(\mathsf{DNF}\) formula for \(\Phi_{f}\) can be used to compute the output of \(\Phi_{f}\) for any input. Overall, this example demonstrates the applicability of the LTT design rules in the construction of DNNs, as it meets both criteria of LTT blocks being computable in constant time and DNN scalability on large datasets.
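The enumeration step of Example 1 is easy to reproduce. Below is a minimal sketch, with a toy stand-in block rather than trained CNN weights, that tabulates a binary-in/binary-out function over all \(2^{n}\) inputs; a Quine-McCluskey implementation can then reduce the resulting minterms to the optimal DNF.

```python
import itertools

def truth_table(block, n):
    """Fully characterize a binary block by enumerating all 2**n inputs
    and recording its output (feasible since n <= 9)."""
    return {bits: block(bits) for bits in itertools.product((0, 1), repeat=n)}

# Toy stand-in for the trained LTT block of Example 1, whose DNF is
# x3 AND NOT x0 AND NOT x1 AND NOT x2:
def toy_block(x):
    return int(x[3] == 1 and x[0] == 0 and x[1] == 0 and x[2] == 0)

tt = truth_table(toy_block, n=4)
minterms = [bits for bits, out in tt.items() if out == 1]
print(minterms)  # [(0, 0, 0, 1)] -- the only input pattern mapped to 1
```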
#### 3.2.2 Overall TTnet design

We integrate LTT blocks into the neural network just as CNN filters are integrated into a deep convolutional neural network: each LTT layer is composed of multiple LTT blocks, and there are multiple LTT layers in total. Additionally, there is a pre-processing layer and a final layer. These two layers provide flexibility in adapting to different applications: scalability, formal verification, and logic circuit design.

## 4 Truth Table Rules (TT-rules)

The Truth Table rules framework consists of three essential components. The first step involves extracting the precise set of rules \(\mathcal{R}\) once the TTnet has been trained. Next, we optimize \(\mathcal{R}\) by reducing the rules' size through _Don't Care Terms_ (DCT) injection. At this point, \(\mathcal{R}\) is equivalent to the neural network model: inferring with \(\mathcal{R}\) is the same as inferring with the model. Last, we minimize the number of rules using the Truth Table Correlation metric. Both techniques serve to reduce the model's complexity while minimizing any potential loss of accuracy.

### From LTT block to set of rules \(\mathcal{R}\)

General. We now introduce a method to convert \(\Phi_{f}\) from the general \(\mathsf{DNF}\) form into the rule set \(\mathcal{R}\). In the previous section, we described the general procedure for transforming an LTT block into a \(\mathsf{DNF}\) logic gate expression. This expression is independent of the spatial position of the feature. This means that we have: \[\left\{\begin{array}{l}y_{0,f}=\Phi_{f}(v_{0},v_{1},\dots,v_{n-1})\\...\\ y_{i,f}=\Phi_{f}(v_{i\times s},v_{i\times s+1},\dots,v_{i\times s+(n-1)})\\...\\ y_{\lfloor\frac{L-n}{s}\rfloor,f}=\Phi_{f}(v_{L-n},v_{L-n+1},\dots,v_{L-1})\end{array}\right.\] When we apply the LTT \(\mathsf{DNF}\) expression to a specific spatial position on the input, we convert the \(\mathsf{DNF}\) into a rule. To convert the general \(\mathsf{DNF}\) form into a set of rules \(\mathcal{R}\), we divide the input into patches and substitute the \(\mathsf{DNF}\) literals with the corresponding feature names. The number of rules for one filter corresponds to the number of patches: \(\lfloor\frac{L-n}{s}\rfloor\). An example of this process is given in Table 1 and one is provided below. **Example 2: conversion of DNF expressions to rules.** We established the \(\Phi_{f}\) expression in DNF form as \(x_{3}\land\overline{x_{0}}\land\overline{x_{1}}\land\overline{x_{2}}\). To obtain the rules, we need to consider the padding and the stride of the LTT block. Consider the following 5-feature binary input (\(L=5\)): [Male, Go Uni., Married, Born in US, Born in UK]. In our case, with a stride of 1 and no padding, we get 2 patches: [Male, Go Uni., Married, Born US] and [Go Uni., Married, Born US, Born UK]. After the substitution of the literals by the corresponding feature names, we get 2 rules \(\mathcal{R}=\{\text{Rule}_{0}^{\text{DNF}},\text{Rule}_{1}^{\text{DNF}}\}\): \[\left\{\begin{array}{l}\text{Rule}_{0,f}^{\text{DNF}}=\text{Born US}\land\overline{\text{Male}}\land\overline{\text{Go Uni.}}\land\overline{\text{Married}}\\ \text{Rule}_{1,f}^{\text{DNF}}=\text{Born UK}\land\overline{\text{Go Uni.}}\land\overline{\text{Married}}\land\overline{\text{Born US}}\end{array}\right.\] and therefore, the output of the LTT block \(\Phi_{f}\) becomes: \[\left\{\begin{array}{l}y_{0,f}=\text{Rule}_{0,f}^{\text{DNF}}(v_{0},v_{1},v_{2},v_{3})\\ y_{1,f}=\text{Rule}_{1,f}^{\text{DNF}}(v_{1},v_{2},v_{3},v_{4})\end{array}\right.\] We underline the logic redundancy in Rule\({}_{1}^{\text{DNF}}\): if someone is born in the UK, he/she is necessarily not born in the US. We solve this issue by injecting _Don't Care Terms_ (\(DCT\)) into the truth table, as we will see in the next section.
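A minimal sketch of this patch-substitution step, assuming a list-based clause encoding (a hypothetical helper, not the paper's code); note that it produces \(\lfloor\frac{L-n}{s}\rfloor+1\) patches so that both rules of Example 2 are recovered.

```python
def dnf_to_rules(dnf_clauses, feature_names, n, s):
    """Instantiate a filter's DNF (a list of clauses; each clause a list of
    (index, negated) literal pairs over x_0..x_{n-1}) at every patch of the
    input features, yielding one named rule per patch."""
    L = len(feature_names)
    rules = []
    for p in range((L - n) // s + 1):          # one rule per patch
        patch = feature_names[p * s : p * s + n]
        rule = " OR ".join(
            " AND ".join(("NOT " if neg else "") + patch[i] for i, neg in clause)
            for clause in dnf_clauses)
        rules.append(rule)
    return rules

# DNF of Example 2: x3 AND NOT x0 AND NOT x1 AND NOT x2 (a single clause)
dnf = [[(3, False), (0, True), (1, True), (2, True)]]
names = ["Male", "Go Uni.", "Married", "Born US", "Born UK"]
for r in dnf_to_rules(dnf, names, n=4, s=1):
    print(r)
# Born US AND NOT Male AND NOT Go Uni. AND NOT Married
# Born UK AND NOT Go Uni. AND NOT Married AND NOT Born US
```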
### Automatic post-training optimizations: from \(\mathcal{R}\) to \(\mathcal{R}_{opt}\)

In this subsection, we present automatic post-training optimizations that are unique to our model and require the complete computation of the LTT truth table.

#### 4.2.1 Reducing the rules' size with Don't Care Terms (\(DCT\)) injection

We propose a method for reducing the size of rules by injecting _Don't Care Terms_ (\(DCT\)) into the truth table. These terms represent situations where the LTT block output can be either 0 or 1 for a specific input, without affecting the overall performance of the DNN. We use the Quine-McCluskey algorithm to assign the optimal value to the \(DCT\) and reduce the DNF equations. These \(DCT\) can be incorporated into the model either with background knowledge or automatically with the one-hot encodings and the Dual Step Function described in the TTnet paper [10]. To illustrate this method, we use Example 2, where we apply human common sense and reasoning to inject \(DCT\) into the truth table. For instance, since no one can be born in both the UK and the US at the same time, the literals \(x_{2}\) and \(x_{3}\) must not be 1 at the same time for the second rule. By injecting \(DCT\) into the truth table as \([0,1,0,DCT,0,0,0,DCT,0,0,0,DCT,0,0,0,DCT]\), we obtain the new reduced rule: \(\text{Rule}_{1,reduced}^{\text{DNF}}=\text{Born UK}\land\overline{\text{Go Uni.}}\land\overline{\text{Married}}\). This method significantly decreases the size of the rules while maintaining the same accuracy, as demonstrated in Table 4 in Section 5.

#### 4.2.2 Reducing the number of rules with the Truth Table Correlation metric

To reduce the number of rules obtained with the TT-rules framework, we introduce a new metric called Truth Table Correlation (\(TTC\)).

\begin{table}
\begin{tabular}{|c|c|c|c||c|}
\hline
\(x_{0}\) & \(x_{1}\) & \(x_{2}\) & \(x_{3}\) & \(\Phi_{f}\) \\
\hline
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 \\
\hline
\end{tabular}
\end{table}
Table 1: Truth Table of the LTT block \(\Phi_{f}\) characterized by the weights \(\mathcal{W}_{1}\) and \(\mathcal{W}_{2}\) with \(L=5\) and binary input feature names [Is the sex Male? (Male), Did the person go to University? (Go Uni.), Is the person married? (Married), Is the person born in the US? (Born US), Is the person born in the UK? (Born UK)].

Figure 1: A Learning Truth Table (LTT) filter example in one dimension.

This metric addresses the issue of rule redundancy by measuring the correlation between two different LTT blocks, which may learn similar rules since they are completely decoupled from each other. The idea is to identify and remove redundant rules and keep only the most relevant ones. The \(TTC\) metric is defined as follows: \[TTC(y_{1},y_{2})=\left\{\begin{array}{ll}\frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}-1&\text{if }\left|\frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}-1\right|>\frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}\\ \frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}&\text{otherwise.}\end{array}\right.\] Here, \(y_{1}\) and \(y_{2}\) are the outputs of the LTT blocks, \(\overline{y_{2}}\) is the negation of \(y_{2}\), \(|y_{1}|\) represents the number of elements in \(y_{1}\), and \(HW\) is the Hamming distance function, i.e., the number of positions at which two equal-length strings of symbols differ. The \(TTC\) metric varies from -1 to 1: when \(TTC=-1\), the LTT blocks are exactly opposite, while they are the same if \(TTC=1\).
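A minimal sketch of this metric on binary output vectors, assuming NumPy:

```python
import numpy as np

def ttc(y1, y2):
    """Truth Table Correlation between two binary LTT output vectors:
    +1 for identical blocks, -1 for exactly opposite blocks."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    # HW(y1, not y2) / |y1| is the fraction of positions where y1 == y2
    s = np.count_nonzero(y1 != 1 - y2) / y1.size
    return s - 1 if abs(s - 1) > s else s

print(ttc([0, 1, 1, 0], [0, 1, 1, 0]))  #  1.0 (identical)
print(ttc([0, 1, 1, 0], [1, 0, 0, 1]))  # -1.0 (opposite)
```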
We systematically filter redundant rules with a threshold correlation of \(\pm 0.9\). If the correlation is positive, we delete one of the two filters and reuse the output of the remaining filter in its place. If the correlation is negative, we delete one of the two filters and reuse the negated output of the remaining filter. By using this metric, we can reduce the number of rules and optimize the complexity of the model while minimizing accuracy degradation.

### Overall TT-rules architecture

#### 4.3.1 Pre-processing and final layer

To maintain interpretability, we apply a pre-processing layer consisting of batch normalization and a step function, and a final layer consisting of a single linear layer. The batch normalization allows the model to learn the thresholds for the continuous features (such as the condition \(\text{YoE}>11\) in Fig. 2). We propose two types of training for the final linear layer. The first uses a final sparse binary layer, which forces all weights to be binary and sparse according to a BinMask, as in [27]. In order to train without much loss in performance when using the Heaviside step function, Benamira _et al._[10] adopted the Straight-Through Estimator (STE) proposed by [26]. The second is designed for scalability and employs floating-point weights, which allows the model to be extended to regression tasks. To reduce overfitting, a dropout function is applied in the second case.

#### 4.3.2 Estimating complexity before training

In our TT-rules framework, the user is unable to train a final rule-based model with a fixed, pre-selected complexity. However, the complexity can be estimated. The number of rules is determined by multiplying the number of filters \(F\) by the number of patches \(\lfloor\frac{L-n}{s}\rfloor\). The complexity of each rule is based on the size of the function \(n\), and on average we can expect \(n2^{n-1}\) Boolean gates per rule before \(DCT\) injection. Therefore, the overall complexity is given by \(n\times 2^{n-1}\times\lfloor\frac{L-n}{s}\rfloor\times F\).
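This estimate can be computed before any training; a minimal sketch:

```python
def estimate_complexity(n, L, s, F):
    """Pre-training estimate of a TT-rules model: number of rules and
    expected Boolean-gate count, before Don't Care Term injection,
    following the formula above."""
    n_rules = ((L - n) // s) * F            # patches per filter x filters
    gates_per_rule = n * 2 ** (n - 1)       # average DNF size of an n-bit rule
    return n_rules, n_rules * gates_per_rule

# e.g. 4-bit LTT blocks over L = 100 binary features, stride 4, 10 filters:
print(estimate_complexity(n=4, L=100, s=4, F=10))  # (240, 7680)
```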
#### 4.3.3 Training and extraction time

Training. Compared to other rule-based models, our architecture scales well in terms of training time. The machine learning tabular datasets can be trained in 1-5 minutes for 5-fold cross-validation. For the large DNA tabular datasets, our model can be trained in 45 minutes for 5-fold cross-validation, which is not possible with other rule-based models such as GL and RIPPER. Extraction time for \(\mathcal{R}_{opt}\). Our model is capable of extracting optimized rules at a fast pace. Each truth table can be computed in \(2^{n}\) operations, with \(n\leq 9\). In terms of time, our model takes 7 to 17 seconds for Adult [22], 7 to 22 seconds for Compas [7], and 20 to 70 seconds for Diabetes [22].

## 5 Results

In this section, we present the results of applying the TT-rules framework to seven datasets, which allow us to demonstrate the effectiveness of our approach and provide evidence for the three claims stated in the introduction.

### Experimental set-up

Evaluation measures and training conditions. We used RMSE, AUC, and accuracy for the evaluation of the regression, binary classification, and multi-class classification tasks, respectively. Rules and complexity are defined in Section 3.1.3. All results are presented after grid search and 5-fold cross-validation. All the training features are detailed in the supplementary material. We compare the performance of our method with that of several other algorithms, including Linear/Logistic Regression [36], Decision Trees (DT) [36], Generalized Linear Models (GL) [47], Neural Additive Models (NAM) [3], XGBoost [17], and Deep Neural Networks (DNNs) [36]. The supplementary materials provide details on the training conditions used for these competing methods. Experiments are available on demand. Our workstation consists of an eight-core Intel(R) Core(TM) i7-8650U CPU clocked at 1.90 GHz with 16 GB RAM.

\begin{table}
\begin{tabular}{l|c|c c c|c}
\hline \hline
& **Regression** (RMSE) & \multicolumn{3}{c|}{**Binary classification** (AUC)} & **Multi-classification** (Accuracy) \\
\hline
& California Housing & Compas & Adult & HELOC & Diabetes \\
continuous/binary \# & 8/144 features & 9/17 features & 14/100 features & 24/330 features & 43/296 features \\
\hline
Linear/log & 0.728 \(\pm\) 0.015 & 0.721 \(\pm\) 0.010 & 0.883 \(\pm\) 0.002 & 0.798 \(\pm\) 0.013 & 0.581 \(\pm\) 0.002 \\
DT & 0.514 \(\pm\) 0.017 & 0.731 \(\pm\) 0.020 & 0.872 \(\pm\) 0.002 & 0.771 \(\pm\) 0.012 & 0.572 \(\pm\) 0.002 \\
GL & 0.425 \(\pm\) 0.015 & 0.735 \(\pm\) 0.013 & 0.904 \(\pm\) 0.001 & 0.803 \(\pm\) 0.001 & NA \\
NAM & 0.562 \(\pm\) 0.007 & 0.739 \(\pm\) 0.010 & - & - & - \\
TT-rules (Ours) & 0.394 \(\pm\) 0.017 & 0.742 \(\pm\) 0.007 & 0.906 \(\pm\) 0.005 & 0.800 \(\pm\) 0.001 & 0.584 \(\pm\) 0.003 \\
\hline
XGBoost & 0.532 \(\pm\) 0.014 & 0.736 \(\pm\) 0.001 & 0.913 \(\pm\) 0.002 & 0.802 \(\pm\) 0.001 & 0.591 \(\pm\) 0.001 \\
DNNs & 0.492 \(\pm\) 0.009 & 0.732 \(\pm\) 0.004 & 0.902 \(\pm\) 0.002 & 0.800 \(\pm\) 0.010 & 0.603 \(\pm\) 0.004 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparison on machine learning datasets of our method to Linear/Logistic Regression [36], Decision Trees (DT) [36], GL [47], NAM [3], XGBoost [17] and DNNs. Results are obtained with a large TT-rules model, without optimizations. Means and standard deviations are reported from 5-fold cross-validation.

Machine learning datasets. We utilized a variety of healthcare and non-healthcare datasets for our study. For multi-classification, we used the Diabetes 130 US-Hospitals dataset1 from the UCI Machine Learning Repository [22]. For binary classification tasks, we used two single-cell RNA-seq analysis datasets, one for head and neck cancer2 [37] and another for melanoma3 [44], as well as the TCGA lung cancer dataset4 [33] for regression. For binary classification tasks, we also used the Adult dataset5 from the UCI Machine Learning Repository [22], the Compas dataset6 introduced by ProPublica [7], and the HELOC dataset [1]. We additionally employed the California Housing dataset7 [34] for the regression task. Supplementary details regarding each of the datasets can be found in the supplementary materials.
Footnote 1: https://bit.ly/diabetes_130_uci
Footnote 2: https://bit.ly/acck_head_rna
Footnote 3: https://hyl.melamona_rna
Footnote 4: https://bit.ly/cega_lung_rna
Footnote 5: https://bit.ly/Adult_uci
Footnote 6: https://bit.ly/Compas_data
Footnote 7: https://bit.ly/california_statlib
Footnote 8: https://github.com/

DNA datasets. Our TT-rules framework's scalability is demonstrated using two DNA datasets, namely the single-cell RNA-seq analysis datasets for head and neck and melanoma cancer [37, 44] for binary classification, and the TCGA lung cancer dataset [33] for regression. These datasets contain 23689 and 20530 features, respectively, and are commonly used in real-life machine learning applications [32, 25, 39, 45].

### Performances comparison - Claim A)

#### 5.2.1 AUC/RMSE/Accuracy - Claim A-1) & A-2)

First, Table 2 demonstrates that our method can handle all types of tasks, including regression, binary classification, and multi-class classification. Moreover, it outperforms most of the other interpretable methods (decision tree, RIPPER, linear/log, NAM) in various prediction tasks, except for GL [47], which performs better than our method on the HELOC dataset. It is worth noting that GL does not support multi-class classification. Additionally, our method shows superior performance to more complex models such as XGBoost and DNNs on the California Housing and Compas datasets. Therefore, our method can be considered comparable or superior to the current state-of-the-art methods while providing global and exact interpretability, which will be demonstrated in Section 5.4.

#### 5.2.2 Complexity - Claim A-3)

Impact of post-training optimization. The optimizations proposed in Section 4.2 succeeded in reducing the complexity of our model, as defined in Section 3.1.3, at the cost of a small accuracy loss, as seen in Table 4. The complexity went down by factors of \(1.35\times\), \(2.22\times\), and \(1.47\times\) on the Adult, Compas, and Diabetes datasets, respectively. The accuracy went down for the Adult and Diabetes datasets by \(0.004\) and \(0.009\), respectively, and stayed the same for Compas. Comparison with rule-based models. Table 3 presents a comparison of various rule-based models, including ours, on the Compas, Adult, and HELOC datasets, in terms of accuracy, number of rules, and complexity. We note that we report accuracy rather than AUC for these binary classification tasks, as RIPPER and ORS do not provide probabilities. We propose two TT-rules models: a model for high performance, as shown in Table 2, with floating-point weights, and a small model with sparse binary weights, which is also our most compact model in terms of the number of rules and complexity. Our proposed model outperforms the others in terms of accuracy on the Compas dataset and has similar performance to GL [47] on the Adult and HELOC datasets. Although GL provides a better tradeoff between performance and complexity, we highlight that GL does not support multi-class classification tasks and is not scalable to larger datasets such as DNA datasets, as shown in the next section. We also propose a small model as an alternative to our high-performing model.
Our small model achieves accuracy that is \(0.023\), \(0.009\), and \(0.006\) lower than our best model but requires only \(3.2\times\), \(2.2\times\), and \(9.8\times\) fewer rules on the Compas, Adult, and HELOC datasets, respectively. We successfully reduce the complexity of our model by \(14.3\times\), \(34\times\), and \(180\times\) on these three datasets.

### Scalability - Claim B)

Our TT-rules framework demonstrated excellent scalability to real-life datasets with over 20K features. This result is not surprising, considering the original TTnet paper [10] showed the architecture's ability to scale to ImageNet. Furthermore, our framework's superiority was demonstrated by outperforming other rule-based models that failed to converge on such large datasets (GL [47], RIPPER [18, 19]). NAMs were not trained, as we considered the resulting 20K feature graphs to be barely interpretable.

\begin{table}
\begin{tabular}{l|c c|c c}
\hline \hline
\multirow{2}{*}{**Models**} & \multicolumn{2}{c|}{TT-rules \(\mathcal{R}\)} & \multicolumn{2}{c}{TT-rules \(\mathcal{R}_{opt}\)} \\
\hline
& Acc. & Complexity & Acc. & Complexity \\
\hline
**Adult** & \(0.846\pm 0.003\) & \(909\pm 212\) & \(0.842\pm 0.003\) & \(673\pm 145\) \\
**Compas** & \(0.664\pm 0.013\) & \(343\pm 41\) & \(0.664\pm 0.013\) & \(155\pm 22\) \\
**Diabetes** & \(0.574\pm 0.008\) & \(22\mathrm{K}\pm 2800\) & \(0.565\pm 0.009\) & \(15\mathrm{K}\pm 2225\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Reduction of the complexity of some TT-rules models after applying the optimizations from Section 4.2 on the Adult [22], Compas [7] and Diabetes [22] datasets.

\begin{table}
\begin{tabular}{l|c c c|c c c|c c c}
\hline \hline
& \multicolumn{3}{c|}{**Compas**} & \multicolumn{3}{c|}{**Adult**} & \multicolumn{3}{c}{**HELOC**} \\
\hline
& Accuracy & Rules & Complexity & Accuracy & Rules & Complexity & Accuracy & Rules & Complexity \\
\hline
GL & \(0.685\pm 0.012\) & \(16\pm 2\) & \(20\pm 6\) & \(0.852\pm 0.001\) & \(16\pm 1\) & \(23\pm 1\) & \(0.732\pm 0.001\) & \(104\pm 5\) & \(104\pm 5\) \\
RIPPER & \(0.560\pm 0.006\) & \(12\pm 2\) & \(576\pm 48\) & \(0.833\pm 0.009\) & \(43\pm 15\) & \(14154\pm 4937\) & \(0.691\pm 0.019\) & \(17\pm 4\) & \(792\pm 186\) \\
DT & \(0.673\pm 0.015\) & \(78\pm 1\) & \(12090\pm 155\) & \(0.837\pm 0.004\) & \(398\pm 5\) & \(316410\pm 3975\) & \(0.709\pm 0.011\) & \(70\pm 1\) & \(9522\pm 136\) \\
ORS & \(0.670\pm 0.015\) & \(11\pm 1\) & \(460\pm 42\) & \(0.844\pm 0.006\) & \(9\pm 3\) & \(747\pm 249\) & \(0.704\pm 0.012\) & \(16\pm 6\) & \(1888\pm 708\) \\
\hline
TT-rules big (Ours) & \(0.687\pm 0.005\) & \(42\pm 3\) & \(4893\pm 350\) & \(0.851\pm 0.003\) & \(288\pm 12\) & \(22896\pm 954\) & \(0.733\pm 0.010\) & \(807\pm 30\) & \(103763\pm 3857\) \\
TT-rules small (Ours) & \(0.664\pm 0.013\) & \(13\pm 2\) & \(155\pm 22\) & \(0.842\pm 0.003\) & \(130\pm 10\) & \(673\pm 145\) & \(0.727\pm 0.010\) & \(82\pm 30\) & \(574\pm 210\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Accuracy and complexity on the Compas, Adult and HELOC datasets for different methods. All the TT-rules are computed with our automatic post-training optimizations as described in Section 4.2. TT-rules big refers to a TTnet trained with a final linear regression with floating-point weights, whereas TT-rules small refers to a TTnet trained with a sparse binary linear regression.
Regarding performance, the TT-rules framework achieved an impressive RMSE of 0.029 on the DNA single-cell regression problem, compared to 0.092 for linear models, 0.028 for DNNs, and 0.42 for Random Forests. On the DNA classification dataset, the TT-rules framework achieved an accuracy of 83.48%, compared to 83.33% for linear models, outperforming DNNs and Random Forests by 10.8% and 10.4%, respectively. Our approach not only scales but also reduces the input feature set, acting as a feature selection method. We generated a set of 1064 rules out of 20530 features for the regression problem, corresponding to a drastic reduction in complexity. For the binary classification dataset, we generated 9472 rules, which more than halved the input size from 23689 to 9472.

### TT-rules application case study - Claim C)

In this section, we present the results of applying the TT-rules framework to the Adult dataset [22], for a specific trained example shown in Figure 2. Exact and global interpretability - Claim C-1). For global and exact interpretability, we first apply the TT-rules framework to obtain \(\mathcal{R}\) and \(\mathcal{R}_{opt}\). Then we transform the rules in \(\mathcal{R}_{opt}\) into their equivalent ROBDD representation. This transformation is fast and automatic, and the result can be observed in Figure 2: the resulting decision mechanism is small and easily understandable. In the Adult dataset, the goal is to predict whether an individual \(I\) will earn more than $50K per year in 1994. Given an individual's feature inputs \(I\), the first rule of Figure 2 can be read as follows: if \(I\) has completed more than 11 years of education, then the rule is satisfied. If not, then the rule is satisfied if \(I\) earns more than $54,200 in investments per month or loses more than $228. If the rule is satisfied, \(I\) earns one positive point. If \(I\) has more positive points than negative points, the model predicts that \(I\) will earn more than $50K per year. Human knowledge injection - Claim C-2). Figure 2 illustrates our model's capability to incorporate human knowledge by allowing the modification of existing rules. However, it is important to note that we do not claim to achieve automatic human knowledge injection. The illustration simply highlights the possibility of manual rule modification in our framework. Mitigating contextual drift in DNNs through global and exact interpretability - Claim C-3). It is essential to recognize that machine learning models may not always generalize well to new data from different geographic locations or contexts, a phenomenon known as "contextual drift" or "concept drift" [24]. The global and exact interpretation of DNNs is vital in this regard, as it allows for human feedback on the model's rules and the potential for these rules to be influenced by contextual drift. For example, as depicted in Figure 2, this accurate model trained on US data is highly biased towards the US and is likely to perform poorly if applied in South America due to rule number 3. This highlights once again the significance of having global and exact interpretability of DNNs, as emphasized by the recent NIST Artificial Intelligence Risk Management Framework [4].

## 6 Limitations and future works

Although our TT-rules framework provides a good balance between interpretability and accuracy, we observed that the generalized linear model (GL) offers a better trade-off. Specifically, for approximately the same performance, GL offers significantly less complexity.
As such, future work could explore ways to identify feature interactions that work well together, similar to what GL does. Exploring automatic rule addition as an alternative to the human-based approach used in our work could also be a fruitful direction for future research. Another interesting avenue is to apply TT-rules to time series tasks, where the interpretable rules generated by our model can provide insights into the underlying dynamics of the data. Finally, another promising area for future work would be to propose an agnostic global explainer for any model based on the TT-rules framework.

## 7 Conclusion

In conclusion, our proposed TT-rules framework provides a new and optimized approach for achieving global and exact interpretability in regression and classification tasks. With its ability to scale on large datasets and its potential for feature reduction, the TT-rules framework appears to be a valuable tool towards explainable artificial intelligence.

Figure 2: Our neural network model trained on the Adult dataset in the form of Boolean decision trees: the output of the DNN and the output of these decision trees are the same, reaching 83.6% accuracy. Added features are represented in orange rectangles. By modifying existing rules and incorporating \(r_{5}\), the **Human Added Rule**, we reach 84.6% accuracy. On the same test set, Random Forest reaches 85.1% accuracy and Decision Tree 84.4% with depth 10. There is no contradiction in the rules: one person can not be born in both Mexico and Nicaragua. The term YoE refers to the Years of Education, and the Capital Gains (Losses) refer to the amount of capital gained (lost) over the year. Each rule \(r_{i}\) is a function \(r_{i}:\{0,1\}^{n}\mapsto\{-1,0,1\}\), i.e., for each data sample \(I\) we associate for each rule \(r_{i}\) a score which is in \(\{-1,0,1\}\). The prediction of our classifier is then as stated above.
2308.16425
On the Equivalence between Implicit and Explicit Neural Networks: A High-dimensional Viewpoint
Implicit neural networks have demonstrated remarkable success in various tasks. However, there is a lack of theoretical analysis of the connections and differences between implicit and explicit networks. In this paper, we study high-dimensional implicit neural networks and provide the high dimensional equivalents for the corresponding conjugate kernels and neural tangent kernels. Built upon this, we establish the equivalence between implicit and explicit networks in high dimensions.
Zenan Ling, Zhenyu Liao, Robert C. Qiu
2023-08-31T03:28:43Z
http://arxiv.org/abs/2308.16425v1
# On the Equivalence between Implicit and Explicit Neural Networks: A High-dimensional Viewpoint ###### Abstract Implicit neural networks have demonstrated remarkable success in various tasks. However, there is a lack of theoretical analysis of the connections and differences between implicit and explicit networks. In this paper, we study high-dimensional implicit neural networks and provide the high-dimensional equivalents for the corresponding conjugate kernels and neural tangent kernels. Built upon this, we establish the equivalence between implicit and explicit networks in high dimensions.

## 1 Introduction

Implicit neural networks (NNs) [2] have recently emerged as a new paradigm in neural network design. An implicit NN is equivalent to an infinite-depth weight-shared explicit NN with input injection. Unlike explicit NNs, implicit NNs generate features by directly solving for the fixed point, rather than through layer-by-layer forward propagation. Moreover, implicit NNs have the remarkable advantage that gradients can be computed analytically through the fixed point alone with _implicit differentiation_. Therefore, training implicit NNs requires only constant memory. Despite the empirical success achieved by implicit NNs [3, 11], our theoretical understanding of these models is still limited. In particular, there is a lack of theoretical analysis of the training dynamics and generalization performance of implicit NNs, and, possibly more importantly, of whether these properties can be connected to those of explicit NNs. [2] demonstrates that any deep NN can be reformulated as a special implicit NN. However, it remains unknown whether general implicit NNs have advantages over explicit NNs. [6] extends previous neural tangent kernel (NTK) studies to implicit NNs and gives the exact expression of the NTK of ReLU implicit NNs. However, the differences between implicit and explicit NTKs are not analyzed. Moreover, previous works [9, 10] have proved the global convergence of gradient descent for training implicit NNs. However, it is still unclear what distinguishes the training dynamics of implicit NNs from those of explicit NNs. In this paper, we investigate implicit NNs from a high-dimensional view. Specifically, we perform a fine-grained asymptotic analysis of the eigenspectra of the conjugate kernels (CKs) and NTKs of implicit NNs, which play a fundamental role in the convergence and generalization of high-dimensional NNs [8]. By considering input data uniformly drawn from the unit sphere, we derive, with recent advances in random matrix theory, high-dimensional (spectral) equivalents for the CKs and NTKs of implicit NNs, and establish the equivalence between implicit and explicit NNs by matching the coefficients of the corresponding asymptotic spectral equivalents. Surprisingly, our results reveal that a _single-layer_ explicit NN with carefully designed activations has the same CK or NTK eigenspectra as a ReLU implicit NN, whose depth is essentially _infinite_.

## 2 Preliminaries

### Implicit and Explicit NNs

Implicit NNs. In this paper, we study a typical implicit neural network, the deep equilibrium model (DEQ) [2]. Let \(\mathbf{X}=[\mathbf{x}_{1},\cdots,\mathbf{x}_{n}]\in\mathbb{R}^{d\times n}\) denote the input data.
We define a vanilla DEQ with the transform at the \(l\)-th layer as \[\mathbf{h}_{i}^{(l)}=\sqrt{\frac{\sigma_{a}^{2}}{m}}\mathbf{A}\mathbf{z}_{i}^{(l-1)}+\sqrt{\frac{\sigma_{b}^{2}}{m}}\mathbf{B}\mathbf{x}_{i},\quad\mathbf{z}_{i}^{(l)}=\phi(\mathbf{h}_{i}^{(l)}) \tag{1}\] where \(\mathbf{A}\in\mathbb{R}^{m\times m}\) and \(\mathbf{B}\in\mathbb{R}^{m\times d}\) are weight matrices, \(\sigma_{a},\sigma_{b}\in\mathbb{R}\) are constants, \(\phi\) is an element-wise activation, \(\mathbf{h}_{i}^{(l)}\) is the pre-activation, and \(\mathbf{z}_{i}^{(l)}\in\mathbb{R}^{m}\) is the output feature of the \(l\)-th hidden layer corresponding to the input data \(\mathbf{x}_{i}\). The output of the last hidden layer is defined by \(\mathbf{z}_{i}^{*}\triangleq\lim_{l\to\infty}\mathbf{z}_{i}^{(l)}\), and we denote the corresponding pre-activation by \(\mathbf{h}_{i}^{*}\). Note that \(\mathbf{z}_{i}^{*}\) can be calculated by directly solving for the equilibrium point of the following equation \[\mathbf{z}_{i}^{*}=\phi\left(\sqrt{\frac{\sigma_{a}^{2}}{m}}\mathbf{A}\mathbf{z}_{i}^{*}+\sqrt{\frac{\sigma_{b}^{2}}{m}}\mathbf{B}\mathbf{x}_{i}\right). \tag{2}\] We are interested in the conjugate kernel and neural tangent kernel (Implicit-CK and Implicit-NTK, for short) of the implicit neural networks defined in Eq. (2). Following [6], we denote the corresponding Implicit-CK by \(\mathbf{G}^{*}=\lim_{l\to\infty}\mathbf{G}^{(l)}\), where the \((i,j)\)-th entry of \(\mathbf{G}^{(l)}\) is defined recursively as \[\mathbf{G}_{ij}^{(0)} =\mathbf{x}_{i}^{\top}\mathbf{x}_{j},\quad\mathbf{\Lambda}_{ij}^{(l)}=\left[\begin{array}{cc}\mathbf{G}_{ii}^{(l-1)}&\mathbf{G}_{ij}^{(l-1)}\\ \mathbf{G}_{ji}^{(l-1)}&\mathbf{G}_{jj}^{(l-1)}\end{array}\right], \tag{3}\] \[\mathbf{G}_{ij}^{(l)} =\sigma_{a}^{2}\mathbb{E}_{(\mathrm{u},\mathrm{v})\sim\mathcal{N}(0,\mathbf{\Lambda}_{ij}^{(l)})}[\phi(\mathrm{u})\phi(\mathrm{v})]+\sigma_{b}^{2}\mathbf{x}_{i}^{\top}\mathbf{x}_{j},\] \[\dot{\mathbf{G}}_{ij}^{(l)} =\sigma_{a}^{2}\mathbb{E}_{(\mathrm{u},\mathrm{v})\sim\mathcal{N}(0,\mathbf{\Lambda}_{ij}^{(l)})}[\phi^{\prime}(\mathrm{u})\phi^{\prime}(\mathrm{v})].\] And the Implicit-NTK is defined as \(\mathbf{K}^{*}=\lim_{l\to\infty}\mathbf{K}^{(l)}\), whose \((i,j)\)-th entry is defined as \[\mathbf{K}_{ij}^{(l)}=\sum_{h=1}^{l+1}\left(\mathbf{G}_{ij}^{(h-1)}\prod_{h^{\prime}=h}^{l+1}\dot{\mathbf{G}}_{ij}^{(h^{\prime})}\right). \tag{4}\] Explicit Neural Networks. We consider a single-layer fully-connected NN model defined as \(\mathbf{Y}=\sqrt{\frac{1}{p}}\sigma(\mathbf{W}\mathbf{X})\), where \(\mathbf{W}\in\mathbb{R}^{p\times d}\) is the weight matrix and \(\sigma\) is an element-wise activation function. Let \(\mathbf{w}\sim\mathcal{N}(0,\mathbf{I}_{d})\); the corresponding Explicit-CK matrix \(\mathbf{\Sigma}\) and Explicit-NTK matrix \(\mathbf{\Theta}\) are defined as follows: \[\mathbf{\Sigma}=\mathbb{E}_{\mathbf{w}}[\sigma(\mathbf{w}^{\top}\mathbf{X})^{\top}\sigma(\mathbf{w}^{\top}\mathbf{X})],\quad\mathbf{\Theta}=\mathbf{\Sigma}+\left(\mathbf{X}^{\top}\mathbf{X}\right)\odot\mathbb{E}_{\mathbf{w}}[\sigma^{\prime}(\mathbf{w}^{\top}\mathbf{X})^{\top}\sigma^{\prime}(\mathbf{w}^{\top}\mathbf{X})]. \tag{5}\]
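To make the model concrete, the following is a minimal NumPy sketch that computes the equilibrium feature of Eq. (2) by naive forward iteration; practical DEQs typically use quasi-Newton root solvers instead, and convergence of the plain iteration is only guaranteed for suitably small \(\sigma_{a}^{2}\).

```python
import numpy as np

def deq_fixed_point(A, B, x, sigma_a2=0.2, tol=1e-8, max_iter=1000):
    """Solve z = phi(sqrt(sigma_a^2/m) A z + sqrt(sigma_b^2/m) B x), Eq. (2),
    by forward iteration, with phi the normalized ReLU and
    sigma_b^2 = 1 - sigma_a^2 (as in the assumptions below)."""
    m = A.shape[0]
    phi = lambda h: np.sqrt(2.0) * np.maximum(h, 0.0)
    z = np.zeros(m)
    for _ in range(max_iter):
        z_new = phi(np.sqrt(sigma_a2 / m) * (A @ z)
                    + np.sqrt((1.0 - sigma_a2) / m) * (B @ x))
        if np.linalg.norm(z_new - z) <= tol * (1.0 + np.linalg.norm(z)):
            break
        z = z_new
    return z_new

rng = np.random.default_rng(0)
m, d = 512, 64
A, B = rng.standard_normal((m, m)), rng.standard_normal((m, d))
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                 # unit-sphere input
z_star = deq_fixed_point(A, B, x)      # equilibrium feature z*
```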
### CKs and NTKs of ReLU Implicit NNs

We make the following assumptions on the random initialization, the input data, and the activations. **Assumption 1**: _(i) As \(n\to\infty\), \(d/n\to c\in(0,\infty)\). All data points \(\mathbf{x}_{i}\), \(i\in[n]\), are independent and uniformly sampled from \(\mathbb{S}^{d-1}\). (ii) \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{W}\) are independent and have i.i.d. entries of zero mean, unit variance, and finite fourth moment. Moreover, we require \(\sigma_{a}^{2}+\sigma_{b}^{2}=1\). (iii) The activation \(\phi\) of the implicit NN is the normalized ReLU, i.e., \(\phi(x)=\sqrt{2}\max(x,0)\). The activation \(\sigma\) of the explicit NN is a \(C^{3}\) function._ **Remark 1**: _(i) Despite being derived here for the uniform distribution on the unit sphere, we conjecture that our results extend to more general distributions by using the techniques developed in [5, 7]. (ii) The additional requirement on the variances is to ensure the existence and uniqueness of the fixed point of the NTK and to keep the diagonal entries of the CK matrix at \(1\); see examples in [6]. (iii) It is possible to extend our results to implicit NNs with general activations by using the technique proposed in [10]. We defer the extension to more general data distributions and activation functions to future work._ Under Assumption 1, the limits of the Implicit-CK and Implicit-NTK exist, and one can obtain precise expressions of \(\mathbf{G}^{*}\) and \(\mathbf{K}^{*}\) as follows [6, 9]. **Lemma 1**: _Let \(f(x)=\frac{\sqrt{1-x^{2}}+(\pi-\arccos x)x}{\pi}\). Under Assumption 1, the fixed point of the Implicit-CK \(\mathbf{G}^{*}_{ij}\) is the root of_ \[\mathbf{G}^{*}_{ij}=\sigma_{a}^{2}f(\mathbf{G}^{*}_{ij})+(1-\sigma_{a}^{2})\mathbf{x}_{i}^{\top}\mathbf{x}_{j}. \tag{6}\] _The limit of the Implicit-NTK is_ \[\mathbf{K}^{*}_{ij}=h(\mathbf{G}^{*}_{ij})\triangleq\frac{\mathbf{G}^{*}_{ij}}{1-\dot{\mathbf{G}}^{*}_{ij}}\quad\text{where}\quad\dot{\mathbf{G}}^{*}_{ij}\triangleq\sigma_{a}^{2}\pi^{-1}(\pi-\arccos\bigl{(}\mathbf{G}^{*}_{ij}\bigr{)}). \tag{7}\]

## 3 Main Results

In this section, we prove the high-dimensional equivalents for the CKs and NTKs of implicit and explicit NNs. As a result, by matching the coefficients of the asymptotic spectral equivalents, we establish the equivalence between implicit and explicit NNs in high dimensions.

### Asymptotic Approximations

CKs. We begin by defining several quantities that are crucial to our results. Note that the unique fixed point of Eq. (6) exists as long as \(\sigma_{a}^{2}<1\). We define the implicit map induced from Eq. (6) as \(\mathbf{G}^{*}_{ij}\triangleq g(\mathbf{x}_{i}^{\top}\mathbf{x}_{j})\). Let \(\angle^{*}=g(0)\) be the solution of \(\angle^{*}=\sigma_{a}^{2}f(\angle^{*})\) when \(\mathbf{x}_{i}^{\top}\mathbf{x}_{j}=0\). Using implicit differentiation, one can obtain that \[g^{\prime}(0)=\frac{1-\sigma_{a}^{2}}{1-\sigma_{a}^{2}f^{\prime}(\angle^{*})},\quad g^{\prime\prime}(0)=\frac{\sigma_{a}^{2}(1-\sigma_{a}^{2})^{2}f^{\prime\prime}(\angle^{*})}{(1-\sigma_{a}^{2}f^{\prime}(\angle^{*}))^{3}}.\] Now we are ready to present the asymptotic equivalent of the Implicit-CK matrix. **Theorem 1** (Asymptotic approximation of Implicit-CKs): _Let Assumption 1 hold. As \(n,d\to\infty\), the Implicit-CK matrix \(\mathbf{G}^{*}\) defined in Eq. (6) can be approximated consistently in operator norm by the matrix \(\overline{\mathbf{G}}\), that is \(\|\mathbf{G}^{*}-\overline{\mathbf{G}}\|_{2}\to 0\), where_ \[\overline{\mathbf{G}}=\alpha\mathbf{1}\mathbf{1}^{\top}+\beta\mathbf{X}^{\top}\mathbf{X}+\mu\mathbf{I}_{n},\] _with \(\alpha=g(0)+\frac{g^{\prime\prime}(0)}{2d}\), \(\beta=g^{\prime}(0)\), and \(\mu=g(1)-g(0)-g^{\prime}(0)\)._
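For illustration, a minimal sketch that evaluates these coefficients numerically, using the elementary derivatives \(f^{\prime}(x)=(\pi-\arccos x)/\pi\) and \(f^{\prime\prime}(x)=1/(\pi\sqrt{1-x^{2}})\) of the function \(f\) from Lemma 1:

```python
import numpy as np

def f(x):
    """f(x) = (sqrt(1 - x^2) + (pi - arccos x) x) / pi, from Lemma 1."""
    return (np.sqrt(1.0 - x * x) + (np.pi - np.arccos(x)) * x) / np.pi

def g(t, sigma_a2=0.2, n_iter=500):
    """Scalar fixed point of Eq. (6): G = sigma_a^2 f(G) + (1 - sigma_a^2) t."""
    G = t
    for _ in range(n_iter):
        G = sigma_a2 * f(G) + (1.0 - sigma_a2) * t
    return G

def implicit_ck_coefficients(sigma_a2=0.2, d=1200):
    """Coefficients (alpha, beta, mu) of the equivalent matrix in Theorem 1."""
    ang = g(0.0, sigma_a2)                            # the angle g(0)
    fp = (np.pi - np.arccos(ang)) / np.pi             # f'(ang)
    fpp = 1.0 / (np.pi * np.sqrt(1.0 - ang * ang))    # f''(ang)
    g1 = (1.0 - sigma_a2) / (1.0 - sigma_a2 * fp)                              # g'(0)
    g2 = sigma_a2 * (1.0 - sigma_a2) ** 2 * fpp / (1.0 - sigma_a2 * fp) ** 3   # g''(0)
    return ang + g2 / (2.0 * d), g1, g(1.0, sigma_a2) - ang - g1
```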
As \(n,d\to\infty\), the Explicit-CK matrix \(\mathbf{\Sigma}\) defined in Eq. (5) can be approximated consistently, in operator norm, by the matrix \(\overline{\mathbf{\Xi}}\), that is, \(\|\mathbf{\Sigma}-\overline{\mathbf{\Xi}}\|_{2}\to 0\), where_ \[\overline{\mathbf{\Xi}}=\alpha_{1}\mathbf{1}\mathbf{1}^{\top}+\beta_{1}\mathbf{X}^{\top}\mathbf{X}+\mu_{1}\mathbf{I}_{n},\] _with \(\alpha_{1}=\mathbb{E}[\sigma(z)]^{2}+\frac{\mathbb{E}[\sigma^{\prime\prime}(z)]^{2}}{2d}\), \(\beta_{1}=\mathbb{E}[\sigma^{\prime}(z)]^{2}\), and \(\mu_{1}=\mathbb{E}[\sigma^{2}(z)]-\mathbb{E}[\sigma(z)]^{2}-\mathbb{E}[\sigma^{\prime}(z)]^{2}\), for \(z\sim\mathcal{N}(0,1)\)._ NTKs. For the Implicit-NTK, we define \(\mathbf{K}^{*}_{ij}=k(\mathbf{x}^{\top}_{i}\mathbf{x}_{j})\), i.e., \(k(\mathbf{x}^{\top}_{i}\mathbf{x}_{j})=h(g(\mathbf{x}^{\top}_{i}\mathbf{x}_{j}))\), for \(i,j\in[n]\). It is easy to check that \(k(0)=h(\angle^{*})\) and \(k(1)=h(g(1))\). Using implicit differentiation again, we have \[k^{\prime}(0)=\frac{(1-\sigma_{a}^{2})h^{\prime}(\angle^{*})}{1-\sigma_{a}^{2}f^{\prime}(\angle^{*})},\;k^{\prime\prime}(0)=\frac{(1-\sigma_{a}^{2})^{2}(h^{\prime\prime}(\angle^{*})-\sigma_{a}^{2}f^{\prime}(\angle^{*})h^{\prime\prime}(\angle^{*})+\sigma_{a}^{2}h^{\prime}(\angle^{*})f^{\prime\prime}(\angle^{*}))}{(1-\sigma_{a}^{2}f^{\prime}(\angle^{*}))^{3}}.\] Now we are ready to present the asymptotic equivalent of the Implicit-NTK matrix. **Theorem 3** (Asymptotic approximation for Implicit-NTKs): _Let Assumption 1 hold. As \(n,d\to\infty\), the Implicit-NTK matrix \(\mathbf{K}^{*}\) defined in Eq. (7) can be approximated consistently, in operator norm, by the matrix \(\overline{\mathbf{K}}\), that is, \(\|\mathbf{K}^{*}-\overline{\mathbf{K}}\|_{2}\to 0\), where_ \[\overline{\mathbf{K}}=\dot{\alpha}\mathbf{1}\mathbf{1}^{\top}+\dot{\beta}\mathbf{X}^{\top}\mathbf{X}+\dot{\mu}\mathbf{I}_{n},\] _with \(\dot{\alpha}=k(0)+\frac{k^{\prime\prime}(0)}{2d}\), \(\dot{\beta}=k^{\prime}(0)\), and \(\dot{\mu}=k(1)-k(0)-k^{\prime}(0)\)._ **Theorem 4** (Asymptotic approximation for Explicit-NTKs): _Let Assumption 1 hold. As \(n,d\to\infty\), the Explicit-NTK matrix \(\mathbf{\Theta}\) defined in Eq. (5) can be approximated consistently, in operator norm, by the matrix \(\overline{\mathbf{\Theta}}\), that is, \(\|\mathbf{\Theta}-\overline{\mathbf{\Theta}}\|_{2}\to 0\), where_ \[\overline{\mathbf{\Theta}}=\dot{\alpha}_{1}\mathbf{1}\mathbf{1}^{\top}+\dot{\beta}_{1}\mathbf{X}^{\top}\mathbf{X}+\dot{\mu}_{1}\mathbf{I}_{n},\] _with \(\dot{\alpha}_{1}=\mathbb{E}[\sigma(z)]^{2}+\frac{3\mathbb{E}[\sigma^{\prime\prime}(z)]^{2}}{2d}\), \(\dot{\beta}_{1}=2\mathbb{E}[\sigma^{\prime}(z)]^{2}\), and \(\dot{\mu}_{1}=\mathbb{E}[\sigma^{2}(z)]+\mathbb{E}[\sigma^{\prime}(z)^{2}]-\mathbb{E}[\sigma(z)]^{2}-2\mathbb{E}[\sigma^{\prime}(z)]^{2}\) for \(z\sim\mathcal{N}(0,1)\)._ **Remark 2**: _(i) Due to the homogeneity of the ReLU function, the Implicit-CK and the Implicit-NTK are essentially inner product kernel random matrices. Consequently, Theorems 1 and 3 can be built upon the results in [4]. We postpone the study of general activations to future work.
(ii) The results in Theorems 2 and 4 generalize those of [1, 7] to the cases of "non-centred" activations, i.e., we do not require \(\mathbb{E}[\sigma(z)]=0\) for \(z\sim\mathcal{N}(0,1)\)._ ### The Equivalence between Implicit and Explicit NNs In the following corollary, we show a concrete case of a single-layer explicit NN with a quadratic activation that matches the CK or NTK eigenspectra of a ReLU implicit NN. The idea is to utilize the results of Theorems 1-4 to match the coefficients of the asymptotic equivalents such that \(\alpha_{1}=\alpha,\beta_{1}=\beta,\mu_{1}=\mu\), or \(\dot{\alpha}_{1}=\dot{\alpha},\dot{\beta}_{1}=\dot{\beta},\dot{\mu}_{1}=\dot{\mu}\). We implement numerical simulations to verify our theory; the numerical results are shown in Figure 1. **Corollary 1**: _We consider a quadratic polynomial activation \(\sigma(t)=a_{2}t^{2}+a_{1}t+a_{0}\). Let Assumption 1 hold. As \(n,d\rightarrow\infty\), the Implicit-CK matrix \(\mathbf{G}^{*}\) defined in Eq. (6) can be approximated consistently in operator norm by the Explicit-CK matrix \(\mathbf{\Sigma}\) defined in Eq. (5), i.e., \(\|\mathbf{G}^{*}-\mathbf{\Sigma}\|_{2}\to 0\), as long as_ \[a_{2}=\pm\sqrt{\frac{\mu}{2}},\quad a_{1}=\pm\sqrt{\beta},\quad a_{0}=\pm\sqrt{\alpha-\frac{\mu}{d}}-a_{2},\] _and the Implicit-NTK matrix \(\mathbf{K}^{*}\) defined in Eq. (7) can be approximated consistently in operator norm by the Explicit-NTK matrix \(\mathbf{\Theta}\) defined in Eq. (5), i.e., \(\|\mathbf{K}^{*}-\mathbf{\Theta}\|_{2}\to 0\), as long as_ \[a_{2}=\pm\sqrt{\frac{\dot{\mu}}{6}},\quad a_{1}=\pm\sqrt{\frac{\dot{\beta}}{2}},\quad a_{0}=\pm\sqrt{\dot{\alpha}-\frac{\dot{\mu}}{d}}-a_{2}.\] ## 4 Conclusion In this paper, we study the CKs and NTKs of high-dimensional ReLU implicit NNs. We prove the asymptotic spectral equivalents for Implicit-CKs and Implicit-NTKs. Moreover, we establish the equivalence between implicit and explicit NNs by matching the coefficients of the asymptotic spectral equivalents. In particular, we show that a single-layer explicit NN with carefully designed activations has the same CK or NTK eigenspectra as a ReLU implicit NN. For future work, it would be interesting to extend our analysis to more general data distributions and activation functions. Figure 1: We independently generate \(n=1\,000\) data points from the \(d=1\,200\)-dimensional unit sphere. We use Gaussian initialization and \(\sigma_{a}^{2}\) is set to \(0.2\). Top: the CK results. Bottom: the NTK results. (a) Spectral densities of implicit kernels, (b) spectral densities of explicit kernels, (c) quadratic activations. #### Acknowledgements Z. Liao would like to acknowledge the National Natural Science Foundation of China (via fund NSFC-62206101) and the Fundamental Research Funds for the Central Universities of China (2021XXJS110) for providing partial support. R. C. Qiu and Z. Liao would like to acknowledge the National Natural Science Foundation of China (via fund NSFC-12141107), the Key Research and Development Program of Hubei (2021BAA037) and of Guangxi (GuiKe-AB21196034).
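To make the coefficient matching of Corollary 1 concrete, the following minimal sketch numerically solves the entrywise fixed point of Eq. (6), recovers the coefficients \((\alpha,\beta,\mu)\) of Theorem 1, and constructs a matching quadratic activation (CK case, "+" branches). It is an illustration under our own choices: the derivatives of the implicit map \(g\) are estimated by finite differences rather than the closed forms above, and all function and variable names are ours, not the authors' code.

```python
import numpy as np

def solve_implicit_ck(rho, sigma_a2, n_iter=200):
    """Fixed-point iteration for G = sigma_a^2 * f(G) + (1 - sigma_a^2) * rho,
    with f(x) = (sqrt(1 - x^2) + (pi - arccos(x)) * x) / pi (Lemma 1)."""
    f = lambda x: (np.sqrt(max(1 - x**2, 0.0))
                   + (np.pi - np.arccos(np.clip(x, -1, 1))) * x) / np.pi
    g = rho
    for _ in range(n_iter):  # contraction since sigma_a^2 * f'(x) <= sigma_a^2 < 1
        g = sigma_a2 * f(g) + (1 - sigma_a2) * rho
    return g

d, sigma_a2, eps = 1200, 0.2, 1e-4
g0 = solve_implicit_ck(0.0, sigma_a2)            # the fixed point "angle" g(0)
g_p = solve_implicit_ck(eps, sigma_a2)
g_m = solve_implicit_ck(-eps, sigma_a2)
gp0 = (g_p - g_m) / (2 * eps)                    # g'(0), central difference
gpp0 = (g_p - 2 * g0 + g_m) / eps**2             # g''(0), second difference
g1 = solve_implicit_ck(1.0, sigma_a2)            # g(1) = 1 for the normalized ReLU

# Coefficients of the spectral equivalent (Theorem 1) ...
alpha, beta, mu = g0 + gpp0 / (2 * d), gp0, g1 - g0 - gp0
# ... and a matching quadratic activation (Corollary 1, CK case).
a2 = np.sqrt(mu / 2)
a1 = np.sqrt(beta)
a0 = np.sqrt(alpha - mu / d) - a2
print(f"alpha={alpha:.4f} beta={beta:.4f} mu={mu:.4f}")
print(f"matching sigma(t) = {a2:.4f} t^2 + {a1:.4f} t + {a0:.4f}")
```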
2309.14523
Smooth Exact Gradient Descent Learning in Spiking Neural Networks
Artificial neural networks are highly successfully trained with backpropagation. For spiking neural networks, however, a similar gradient descent scheme seems prohibitive due to the sudden, disruptive (dis-)appearance of spikes. Here, we demonstrate exact gradient descent learning based on spiking dynamics that change only continuously. These are generated by neuron models whose spikes vanish and appear at the end of a trial, where they do not influence other neurons anymore. This also enables gradient-based spike addition and removal. We apply our learning scheme to induce and continuously move spikes to desired times, in single neurons and recurrent networks. Further, it achieves competitive performance in a benchmark task using deep, initially silent networks. Our results show how non-disruptive learning is possible despite discrete spikes.
Christian Klos, Raoul-Martin Memmesheimer
2023-09-25T20:51:00Z
http://arxiv.org/abs/2309.14523v1
# Smooth Exact Gradient Descent Learning in Spiking Neural Networks ###### Abstract Artificial neural networks are highly successfully trained with backpropagation. For spiking neural networks, however, a similar gradient descent scheme seems prohibitive due to the sudden, disruptive (dis-)appearance of spikes. Here, we demonstrate exact gradient descent learning based on spiking dynamics that change only continuously. These are generated by neuron models whose spikes vanish and appear at the end of a trial, where they do not influence other neurons anymore. This also enables gradient-based spike addition and removal. We apply our learning scheme to induce and continuously move spikes to desired times, in single neurons and recurrent networks. Further, it achieves competitive performance in a benchmark task using deep, initially silent networks. Our results show how non-disruptive learning is possible despite discrete spikes. ## I Introduction Biological neurons communicate via short electrical impulses, called spikes [1]. Besides their overall rate of occurrence, the precise timing of single spikes often carries salient information [2; 3; 4; 5]. Taking into account spikes is therefore essential for the modeling and the subsequent understanding of biological neural networks [1; 6]. To build appropriate spiking network models, powerful and well-interpretable learning algorithms are needed. They are further required for neuromorphic computing, an aspiring field that develops spiking artificial neural hardware for applications in machine learning. It aims to exploit properties of spikes such as event-based, parallel operation (neurons only need to be updated when they send or receive spikes) and the temporal and spatial (i.e. in terms of interacting neurons) sparsity of communication to achieve tasks with unprecedented energy efficiency and speed [7; 8; 9]. The prevalent approach for learning in non-spiking neural network models is to perform gradient descent on a loss function [10; 11]. Its transfer to spiking networks is, however, problematic due to the all-or-none character of spikes: The (dis-)appearance of spikes is not predictable from gradients computed for nearby parameter values. Thus, a systematic addition or removal of spikes via exact gradient descent is not possible. This can, for example, lead to permanently silent, so-called dead neurons [12; 13] and to diverging gradients [14]. Further, the network dynamics after a spike (dis-)appearance and thus also the loss may change in a disruptive manner [15; 16; 17; 18]. Nevertheless, there are two popular approaches for learning in spiking neural networks based on gradient descent: The first approach, surrogate gradient descent, assumes binned time and replaces the binary activation function with a continuous-valued surrogate for the computation of the gradient [19]. It thus sacrifices the crucial advantage of event-based processing and necessitates the computation of state variables in each time step as well as their storage [20] (but see [21]). Furthermore, the computed surrogate gradient is only an approximation of the true gradient. The second approach, spike-based gradient descent, computes the exact gradient of the loss by considering the times of existing spikes as functions of the learnable parameters [12; 22]. It allows for event-based processing but relies on ad-hoc measures to deal with spike (dis-)appearances and gradient divergence, in particular to avoid dead neurons [23; 24; 25; 26; 27].
Here we show that disruptive (dis-)appearances of spikes can be avoided. Consequently, all network spike times vary continuously and in some network models even smoothly, i.e. continuously differentiably, with the network parameters. This allows us to perform non-disruptive, exact gradient descent learning, including, as we show, the systematic addition or removal of spikes. ## II Disruptive and Non-disruptive (Dis-)appearance of spikes ### Neuron model The most frequently employed neuron models when learning spiking networks are variants of the leaky integrate-and-fire (LIF) neuron [14; 18; 19; 20; 21; 22; 23; 24; 26; 28; 29]. LIF neurons, however, suffer from the aforementioned disruptive spike (dis-)appearance. For example, spikes can appear in the middle of a trial due to a continuous, arbitrarily small change of an input weight or time (Fig. 1a,b). We therefore consider instead another important standard spiking neuron model, the quadratic integrate-and-fire (QIF) neuron (Supplemental Material Sec. I) [6; 30; 31]. In contrast to the LIF neuron, the QIF neuron explicitly incorporates the fact that in biological neurons the membrane potential further increases due to a self-amplification mechanism once it is large enough, which generates the spike upstroke. The QIF neuron may thus be considered as the simplest truly spiking neuron model [31]. The voltage self-amplification is so strong that the voltage actually reaches infinity in finite time. One can define the time when this happens as the time of the spike, reset and onset of synaptic transmission. We adopt this and henceforth call \(\infty\) the threshold of the QIF neuron for simplicity. For sufficiently negative voltage, the voltage increases strongly as well. The neuron can thus be reset to negative infinity, from where it quickly recovers. ### Non-disruptive (dis-)appearance of spikes and smooth spike timing In QIF neurons with a temporally extended, exponentially decaying input current, spike times only (dis-)appear at the end of a trial; otherwise they change smoothly with the network parameters. Importantly, this kind of spike (dis-)appearance is non-disruptive, since it cannot change subsequent spiking dynamics. The mechanism underlying this feature can be intuitively understood: The slope of the voltage at the threshold is infinitely large. If there is a small change for example in an input weight (Fig. 1 left column, blue curves), the voltage and its slope will still be large close to where the spike has previously been. Therefore a spike will still be generated, only a bit earlier or later, unless it crosses the trial end. This is in contrast to the LIF neuron, where the slope of the voltage at the threshold can tend to zero and a spike can therefore abruptly (dis-)appear, accompanied by a diverging gradient (Fig. 1 left column, purple curves). A similar mechanism applies if there are changes in an input time as in Fig. 1 right column: An inhibitory input is moved backward in time until it crosses the time of an output spike generated by a sole, previous excitatory input (\(t_{\text{in}}\) crosses \(t_{\text{sp}}\) in Fig. 1d right). In the QIF neuron the voltage and the slope are infinitely large at this point, such that the additional inhibitory input is negligible compared to the intrinsic drive. Thus there is no abrupt change in spike timing. In contrast, in the LIF neuron the inhibitory input induces a downward slope in the potential also if it is at the threshold. 
The spike induced by the excitatory input alone therefore suddenly appears once the inhibitory input arrives later. In Supplemental Material Sec. II, we prove the smoothness of the spike times and their non-disruptive (dis-)appearance in the general case with multiple inputs and output spikes. Figure 1: Disruptive and non-disruptive appearance of spikes. (a,b,d) Spikes of LIF neurons can appear disruptively, in the middle of a trial. (a,c,d) Spike times of QIF neurons only appear non-disruptively at the trial end and otherwise change continuously with changed parameters. Left column: a neuron receives a single input, whose weight is increased (traces with increasing saturation). Right column: a neuron receives an excitatory as well as an inhibitory input whose arrival is moved to larger times. (a) Setup (gray: different input currents), (b) LIF membrane potentials (purple traces, saturation corresponding to a; \(V_{\text{rest}},V_{\text{o}}\): resting and threshold potential; \(T\): trial duration) and spikes (top, tick marks), (c) like b for QIF neuron (\(V_{\text{sep}}\): separatrix potential), (d) times of the first output spike as function of the changed parameter (\(w_{\text{min}}\): weight at which the spike appears, at finite time for the LIF neuron, at infinity for QIF neuron), (e) spike time gradient, divergent for LIF neurons upon increase of input weight (left). Dots in (d,e) correspond to equally colored spikes in (b,c). ### Generalizations The crucial feature of the QIF neuron that leads to non-disruptive spike (dis-)appearances is that the voltage slope close to the threshold is positive irrespective of previous and present inputs. We therefore expect that further neuron models with this feature also exhibit spikes with continuous timings. This includes neuron models that generate spikes via a self-amplification mechanism and reach infinite voltage in finite time. One such model is the hybrid leaky integrate-and-fire neuron with an attached, non-linear spike generation mechanism. This model has been observed to match responses of biological neurons well when the attached part is taken from a QIF [32]. Further models are, with minor modifications, the Izhikevich neuron [31], which can exhibit various spike generation regimes such as bursting, the rapid theta neuron [33], the sine neuron [34] and the exponential integrate-and-fire neuron [6]. Also anti-leaky [35] and intrinsically oscillating leaky integrate-and-fire neurons possess the desired feature if the impact of synaptic input currents vanishes at their spike threshold. The synapse model may be changed as well: We expect that synapses with continuous current rise will be feasible, as well as conductance-based synapses and synapses inducing infinitesimally short currents that generate a jump-like response directly in the voltage. In the latter case, the spike times are, however, not smooth, as their derivative with respect to the time or weight of an input jumps if an input spike time crosses another one. ## III Pseudodynamics and pseudospikes In the proposed networks of QIF neurons, the disappearance of spikes happens by shifting them past the trial end, which is controllable by a spike-based gradient. The systematic addition of spikes remains a problem; from the view of the gradient it is unclear when a spike will appear. However, since such an appearance happens only at the trial end, we can solve the problem by appropriately continuing the dynamics as pseudodynamics behind it, starting with the voltages at the trial end.
Concretely, we propose two approaches. In both, the pseudodynamics generate pseudospikes, whose timings have several useful properties: (i) They depend continuously and mostly smoothly on the network parameters, also when the pseudospikes cross the trial end to turn into ordinary spikes. (ii) If the voltage at the trial end increases, the pseudospike times decrease, intuitively because the neuron is already closer to spike. (iii) The pseudospikes interact such that the components of the gradient in multi-layer networks are generically non-zero also if neurons are inactive during the actual trial duration. (iv) The pseudospike times are analytically computable. In the first approach, which we use in our applications, the neurons continue to evolve as autonomous QIF neurons, but with an added constant, superthreshold drive until they have spiked sufficiently often for the task at hand (Supplemental Material Sec. I). To ensure generically non-zero gradients, we choose the drive's value to depend on the pseudospike times of the presynaptic neurons, weighted by the synaptic strengths. The transitions from pseudospike times to ordinary spike times are smooth. If a presynaptic pseudospike becomes an ordinary one, the pseudospike times are continuous, but their derivatives are not. In Supplementary Material Sec. IB we suggest a second approach where the spike times remain completely smooth. While we focused in this section on QIF neurons with extended coupling, the derivations indicate that similar pseudospike time functions can be found for other neuron models with continuous spike times. We explicitly obtain such functions for QIF neurons with infinitesimally short synaptic currents (Supplemental Material Sec. I) and use them in one of our applications. ## IV Gradient descent learning ### Spike-based gradient descent with continuous spike times In the following, we apply spike-based gradient descent learning on the neural network models with continuous spike times identified above. We choose single neuron models with an analytical solution between spikes and for the time of an upcoming spike. The former enables and the latter simplifies the use of efficient event-based simulations and modern automatic differentiation libraries [36]. Interestingly, such solutions in terms of elementary functions exist for the QIF neuron with temporally extended, exponentially decaying input currents if the time constant of the input current is half the membrane time constant (Supplemental Material Sec. I). The condition on the synaptic time constant is compatible with often assumed biologically plausible values, for example with a membrane time constant about \(10\,\mathrm{ms}\) and a synaptic time constant about \(5\,\mathrm{ms}\)[1, 6]. In the examples in this article, we therefore use these values. In one of our applications we employ oscillating QIF neurons with infinitesimally short input currents. Between spikes, they evolve with a constant rate of change using an appropriate representation [37, 30, 6, 31], which further simplifies the event-based simulations. ### Single neuron learning As a first illustration of our scheme, we learn the spike times of a single neuron. Specifically, the neuron is a QIF neuron with extended coupling that receives several inputs, two of which possess learnable weights and times (Fig. 2a, see Supplemental Material Sec. VII for details on models and tasks). The learnable weights are initially zero and the neuron does not spike at all during the trial (Fig. 2b, left). 
We apply spike-based gradient descent to minimize the quadratic difference between two target spike times and the first two spike times (which may also be pseudospike times). The output neuron is set to initially generate two pseudospikes, one for each target spike time. While not necessary in the displayed task, superfluous (pseudo-)spikes can be included in the loss function with targets behind the trial end, to induce their removal if they enter the trial. The use of pseudospikes allows us to activate the initially silent neuron (Fig. 2c, gray background). In doing so, the pseudospike times transition smoothly into ordinary spike times (Fig. 2c, white background). They are then shifted further until they lie precisely at the desired position on the time axis (Fig. 2b, right). The spike times change smoothly (Fig. 2c) and the gradient is continuous (Fig. 2d). The example illustrates that our scheme allows us to learn precisely timed spikes of a single neuron, in a smooth fashion and even if the neuron is initially silent. Figure 2: Smooth gradient descent learning of spikes in a QIF neuron. (a) A neuron receives several inputs, the weights and times of two of them (colored) are learned with gradient descent. (b) Left: Before learning, the input spikes (bottom, learnable spikes in orange) do not result in a sufficiently strong deflection of current and potential (middle, horizontal gray lines indicate zero input current and \(V_{\mathrm{rest}}\), \(V_{\mathrm{sep}}\), respectively, black bars indicate current and potential difference of one) to result in a spike (top, gray tick marks: target spike times). Right: After learning, the neuron spikes at the desired times (top, blue lines covering gray lines). (c) During learning, the (pseudo (gray area)) spike times change smoothly (colors as in (a), gray horizontal lines: target spike times). (d) The components of the gradient of the loss function \(L\) change continuously during learning (\(\partial L/\partial w_{1}\) is mostly covered by \(\partial L/\partial t_{\mathrm{in},1}\)). Learning progress is displayed as a function of the arc length of the output spike time trajectories since the start of learning. ### Learning a recurrent neural network Next, we consider the training of a recurrent neural network (RNN). Successful learning of recurrent connections can be used for the construction of models of cortical networks, which are characterized by a high degree of recurrence [1], when the values of weights or other parameters are unknown [38, 39, 18]. In an RNN, spikes of all neurons generally influence subsequent spikes of all neurons. Thus, a change in a spike has a much broader and less straightforward impact than when training a single neuron. This renders RNN training harder. We consider a fully-connected RNN of ten QIF neurons with extended coupling and external inputs. The spike times of two network neurons are learned (Fig. 3a). In contrast to the learning of all network spikes [40, 18], such a task does not reduce to multiple single neuron learning tasks. Similar to the previous task, we apply our spike-based gradient descent to minimize the quadratic difference between spike times and their targets. Both the recurrent weights and the initial conditions of the neurons are learned. The latter exemplifies that our scheme can be applied not only to weights and input spike times but also to further network parameters. Our scheme is successful also in this scenario (Fig. 3b,c). The spike times are learned with great precision; the maximal deviation of any of the learned spikes from its target time is less than 2 ms. As in the previous example, the spike times of the first neuron change continuously during learning without discrete jumps of the spike times (Fig. 3d,e). Due to large gradients, which are typical for all kinds of RNNs [41], the spike times of the second neuron change seemingly jump-like (Supplemental Material Fig. S6). Such sudden changes can be smoothened by restricting the maximal spike time change per step with the help of adjustable update step sizes (Supplemental Material Fig. S7). Hence, the applicability of our scheme extends to multi-spike learning and recurrent networks. Figure 3: Learning precise spikes in an RNN. (a) Network schematic. Neurons receive in each trial the same spikes from external input neurons (gray). Recurrent weights and initial conditions are learned such that the first two network neurons (blue and orange) spike at desired times. (b) Loss dynamics during learning. (c) Left: Spikes of network neurons before learning. Spikes of the first two neurons are colored, their target times are displayed in gray. Right: Learning changes the network dynamics such that the first two neurons spike precisely at the desired values (the colored spikes mostly cover the gray ones). (d) Evolution of the spike times of the first neuron during learning. The times of the spikes that are supposed to lie within the trial (blue traces) shift towards their target values (gray circles). The next spike (black trace) is supposed to lie outside the trial. (Gray area indicates pseudospikes.) (e) Same as (d) but the spike times are shown as a function of the arc length of the output spike time trajectories. This demonstrates that the spike times change continuously, despite the occurrence of large gradients (cf. the step-like change in (d)). ### Solving a standard machine learning task Finally, we apply our scheme to the classification of hand-written single-digit numbers from the MNIST dataset, which is a widely used benchmark in neuromorphic computing (e.g. [20, 24, 29]). We employ a three-layer feed-forward network. For computational efficiency, we use oscillatory QIF neurons with infinitesimally short input currents. For each input pixel, there is a corresponding input neuron, which spikes once at the beginning of the trial if the binarized pixel intensity is one and otherwise remains silent. The input spikes are then further processed by two hidden layers of 100 neurons each. The index of the neuron in the output layer that spikes first is the model prediction. Such time-to-first-spike coding naturally leads to fast classification in terms of time and number of spikes. Hence, it is well suited to foster the potential advantages of neuromorphic hardware regarding energy-efficiency and inference time. From a biological perspective, there is experimental evidence that the first spikes of neurons encode sensory information [2, 42, 43]. To demonstrate that our scheme allows us to solve the dead neuron problem even if neurons in multiple layers are silent, we randomly initialize network parameters such that there are initially basically no ordinary spikes (Fig. 4a, left). Concretely, 99.9 % (mean over ten network instances, also in the following) of all hidden neurons initially do not generate ordinary spikes for any input image in the test data set.
Yet, the pseudospike time-dependent, imposed interaction between the neurons allows errors to be backpropagated. Hence, minimizing the cross-entropy loss activates the hidden (Fig. 4b) and output (Fig. 4c) neurons. The fraction of neurons that do not spike before the first output spike (where test trials can in principle be terminated) for any input image quickly decays to a final value of 0.2 % (Fig. 4d). This means nearly all hidden neurons are utilized for inference. Still, the activity after learning is sparse with 0.31 ordinary spikes per hidden neuron before the first output spike, which is beneficial in terms of energy- and time-efficiency. The final accuracy of 97.3 % when only considering ordinary output spikes is comparable to previous results where similar setups are considered [23, 24, 44, 25]. If we also allow pseudospikes in the classification, the accuracy does not change much; it becomes 97.5 %. The convergence to minimal error is, however, faster (Fig. 4e). Thus, our scheme achieves competitive performance in a neuromorphic benchmark task even if almost no neuron is initially active (see Supplemental Material Sec. VI for further quantitative measures). Figure 4: Spike-based gradient descent learning of the MNIST dataset. (a) Spike raster plot of the three-layer network. Left: It is silent before learning (inset shows example input also used on the right, and in b, c). Right: After learning, the neurons spike sparsely. (b) Voltage dynamics of the first neuron of the second hidden layer before (blue) and after (orange) learning. Despite not receiving any input before learning, our learning scheme adjusts upstream connection weights such that it eventually starts to spike. (c) Voltage dynamics of all output neurons after learning. Only the output neuron representing the correct class (“9”) spikes. (d) The fraction of neurons that do not spike before the first output spike for any input image quickly decays from (almost) one to a near zero value. (e) The networks achieve low test classification errors. If also pseudospikes are used for classification (orange), learning is faster. Horizontal gray lines in (b,c) indicate \(V_{\text{rest}}\), black bars indicate a potential difference of one. Solid lines in (d,e) indicate the mean and shaded areas the standard deviation over ten network instances. ## V Discussion We have shown that there are neural networks with spike times that vary continuously or even smoothly with network parameters; ordinary spikes only (dis-)appear at the end of the trial and can be extended to pseudospikes. The networks allow learning the timings of an arbitrary number of spikes in a continuous fashion with a spike-based gradient. Perhaps surprisingly, the networks may consist of rather simple, standard QIF neurons. These are widely used in theoretical neuroscience [6; 31], including for the supervised learning of spiking neural networks [38; 45; 46]. However, the particularity that spikes only (dis-)appear at the trial end has not been noticed and exploited. Furthermore, QIF neurons have already been implemented in neuromorphic hardware [47; 48]. On the one hand, our scheme possesses the same advantages as other spike-based gradient descent approaches, such as small memory and computational footprints and a clear interpretation as following the exact loss gradient.
On the other hand, like standard machine learning schemes, it produces no disruptive transitions during learning and no gradient divergences; it can in principle be used with any type of initialization and does not rely on ad-hoc measures to remove and add spikes and revive dead neurons. This suggests a wide range of applications: When studying biological neural networks, our scheme may be used to learn neurobiologically relevant tasks, in order to benchmark biological learning and to investigate how the network dynamical solutions may work. The scheme may also be used to reconstruct synaptic connectivity from experimentally (partially) observed spiking activity. Furthermore, it may be used to train networks in neuromorphic computing. It generally allows benchmarking other learning rules whose underlying mechanisms are less transparent and (pre-)training networks before converting to a desired neuron type that complicates learning. The dynamics of spiking and non-spiking neural networks can have long temporal dependencies, with small perturbations increasing over time [49; 50; 51; 52; 35]; see also Supplementary Material Sec. IV. For learning, this causes the well-known exploding gradient problem [41; 10]. We therefore restricted our learning examples to at most ten multiples of the membrane time constant. This fits the length of various experimentally observed precisely timed patterns of spikes [53; 54; 55; 42] and the fast processing of certain tasks in neuromorphic computing [20; 23; 24; 25; 44]. We have introduced pseudospikes to allow the gradient to "see" spikes before they appear and to thus add spikes in a systematic manner. This preserves the gradients of the ordinary spike times and solves, in particular, the dead neuron problem. The resulting possibility to initialize an entire network with small weights may be important to induce desirable and biologically plausible features such as energy-efficient final connectivity and sparse spiking [7; 57], sparse coding [58] and representation learning [59]. In a somewhat related approach, silent neurons were assumed to spike at the trial end [26; 27]. In contrast to our pseudospikes, however, this only applied to output neurons and did not allow errors to be backpropagated through silent neurons. To conclude, the present study shows that despite the inherent discreteness of spikes, it is possible to perform exact, smooth gradient descent in spiking neural networks, including the gradient-based removal of spikes and, after augmentation with pseudodynamics, also their addition. ###### Acknowledgements. We thank Sven Goedeke for helpful comments on the manuscript and the German Federal Ministry of Education and Research (BMBF) for support via the Bernstein Network (Bernstein Award 2014, 01GQ1710).
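As a toy illustration of the central mechanism, consider a QIF neuron \(\dot{V}=V^{2}+I\) with constant superthreshold drive \(I>0\): starting from \(V(0)=V_{0}\), the voltage reaches infinity (a spike) at the closed-form time \(t_{\mathrm{sp}}(I)=(\pi/2-\arctan(V_{0}/\sqrt{I}))/\sqrt{I}\), which depends smoothly on \(I\). The sketch below moves this spike to a target time by gradient descent on \(I\); it is a simplified setup of our own (no resets, no synaptic currents, gradients by finite differences), not the implementation used for the figures.

```python
import numpy as np

# Closed-form spike time of a QIF neuron dV/dt = V^2 + I (membrane time
# constant set to one) with constant drive I > 0 and initial voltage V0.
def spike_time(I, V0=0.0):
    u = np.sqrt(I)
    return (np.pi / 2 - np.arctan(V0 / u)) / u

# Gradient descent on the drive I moves the spike smoothly to a target time;
# no spike abruptly (dis-)appears along the way.
I, t_target, lr, eps = 1.0, 0.8, 2.0, 1e-6
for _ in range(300):
    # dL/dI for L = (t_sp(I) - t_target)^2, with dt_sp/dI by central differences
    dt_dI = (spike_time(I + eps) - spike_time(I - eps)) / (2 * eps)
    I -= lr * 2 * (spike_time(I) - t_target) * dt_dI
print(f"learned drive I = {I:.3f}, spike time = {spike_time(I):.3f} (target {t_target})")
```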
2309.10976
Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks
Safe deployment of graph neural networks (GNNs) under distribution shift requires models to provide accurate confidence indicators (CI). However, while it is well-known in computer vision that CI quality diminishes under distribution shift, this behavior remains understudied for GNNs. Hence, we begin with a case study on CI calibration under controlled structural and feature distribution shifts and demonstrate that increased expressivity or model size do not always lead to improved CI performance. Consequently, we instead advocate for the use of epistemic uncertainty quantification (UQ) methods to modulate CIs. To this end, we propose G-$\Delta$UQ, a new single model UQ method that extends the recently proposed stochastic centering framework to support structured data and partial stochasticity. Evaluated across covariate, concept, and graph size shifts, G-$\Delta$UQ not only outperforms several popular UQ methods in obtaining calibrated CIs, but also outperforms alternatives when CIs are used for generalization gap prediction or OOD detection. Overall, our work not only introduces a new, flexible GNN UQ method, but also provides novel insights into GNN CIs on safety-critical tasks.
Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan
2023-09-20T00:35:27Z
http://arxiv.org/abs/2309.10976v1
# Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks ###### Abstract Safe deployment of graph neural networks (GNNs) under distribution shift requires models to provide accurate confidence indicators (CI). However, while it is well-known in computer vision that CI quality diminishes under distribution shift, this behavior remains understudied for GNNs. Hence, we begin with a case study on CI calibration under controlled structural and feature distribution shifts and demonstrate that increased expressivity or model size do not always lead to improved CI performance. Consequently, we instead advocate for the use of epistemic uncertainty quantification (UQ) methods to modulate CIs. To this end, we propose G-\(\Delta\)UQ, a new single model UQ method that extends the recently proposed stochastic centering framework to support structured data and partial stochasticity. Evaluated across covariate, concept, and graph size shifts, G-\(\Delta\)UQ not only outperforms several popular UQ methods in obtaining calibrated CIs, but also outperforms alternatives when CIs are used for generalization gap prediction or OOD detection. Overall, our work not only introduces a new, flexible GNN UQ method, but also provides novel insights into GNN CIs on safety-critical tasks. ## 1 Introduction As graph neural networks (GNNs) are increasingly deployed in critical applications with test-time distribution shifts (Zhang and Chen, 2018; Gaudelet et al., 2020; Yang et al., 2018; Yan et al., 2019; Zhu et al., 2022), it becomes necessary to expand model evaluation to include safety-centric metrics, such as calibration errors (Guo et al., 2017), out-of-distribution (OOD) rejection rates (Hendrycks and Gimpel, 2017), and generalization gap estimates (Jiang et al., 2019), to holistically understand model performance in such shifted regimes (Hendrycks et al., 2022b; Trivedi et al., 2023b). Notably, such additional metrics often rely on _confidence indicators_ (CIs), such as maximum softmax or predictive entropy, which can be derived from prediction probabilities. Although there is a clear understanding in the computer vision literature that the quality of confidence indicators can noticeably deteriorate under distribution shifts (Wiles et al., 2022; Ovadia et al., 2019), and additional factors like model size or expressivity can exacerbate this deterioration (Minderer et al., 2021), the impact of these phenomena on graph neural networks (GNNs) remains under-explored. Indeed, there is an expectation that adopting more advanced or expressive architectures (Chuang and Jegelka, 2022; Alon and Yahav, 2021; Topping et al., 2022; Rampasek et al., 2022; Zhao et al., 2022) would inherently improve CI calibration on graph classification tasks. Yet, we find that using graph transformers (GTrans) (Rampasek et al., 2022) or positional encodings (Dwivedi et al., 2022a; Wang et al., 2022b; Li et al., 2020) does not significantly improve CI calibration over vanilla message-passing GNNs (MPGNNs), even under controlled, label-preserving distribution shifts. Notably, when CIs are not well-calibrated, GNNs with high accuracy may perform poorly on the additional safety metrics, leading to unforeseen risks during deployment. Given that using advanced architectures is not an immediately viable solution for improving CI calibration, we instead advocate for modulating CIs using epistemic _uncertainty estimates_.
Uncertainty quantification (UQ) methods (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017; Blundell et al., 2015) have been extensively studied for vision models (Guo et al., 2017; Minderer et al., 2021), and have been used to improve vision model CI performance under distribution shifts. Our work not only studies the effectiveness of such methods in improving GNN CIs, but also proposes a novel UQ method, G-\(\Delta\)UQ, which extends the recently proposed, state-of-the-art stochastic data-centering (or "anchoring") framework (Thiagarajan et al., 2022; Netanyahu et al., 2023) to support partial stochasticity and structured data. In brief, stochastic centering provides a scalable alternative to highly effective, but prohibitively expensive, deep ensembles by efficiently sampling a model's hypothesis space, in lieu of training multiple, independently trained models. When using the uncertainty-modulated confidence estimates from G-\(\Delta\)UQ, we outperform other popular UQ methods, not only in improving CI calibration under covariate, concept and graph size shifts, but also in improving generalization gap prediction and OOD detection performance. **Proposed Work.** This work studies the effectiveness of GNN CIs on graph classification tasks with distribution shifts, and proposes a novel uncertainty-based method for improving CI performance. Our contributions can be summarized as follows: **Sec. 2: Case Study on CI Calibration.** We find that improving GNN expressivity does not mitigate CI quality degradation under distribution shifts. **Sec. 4: (Partially) Stochastic Anchoring for GNNs.** We propose G-\(\Delta\)UQ, a novel UQ method based on stochastic centering for GNNs with support for partial stochasticity. **Sec. 5: Evaluating Uncertainty-Modulated CIs under Distribution Shifts.** Across covariate, concept and graph-size shifts and a suite of evaluation protocols (calibration, OOD rejection, generalization gap prediction), we demonstrate the effectiveness of G-\(\Delta\)UQ. ## 2 Case Study on GNN CI Calibration In this section, we demonstrate that GNNs struggle to provide calibrated confidence estimates under distribution shift (Dwivedi et al., 2020) despite improvements in architectures (He et al., 2022; Corso et al., 2020; Zhao et al., 2022) and expressivity (Wang et al., 2022; Dwivedi et al., 2022). Since assessing calibration performance does not require any additional, potentially confounding, post-processing, we perform a direct assessment of GNN CI reliability and motivate why uncertainty-based CI modulation is needed. **Notations.** Let \(\mathcal{G}=(\mathbf{X},\mathbb{E},\mathbf{A},Y)\) be a graph with node features \(\mathbf{X}\in\mathbb{R}^{N\times d_{\ell}}\), (optional) edge features \(\mathbb{E}\in\mathbb{R}^{m\times d_{\ell}}\), adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\), and graph-level label \(Y\in\{0,1\}^{c}\), where \(N,m,d_{\ell},c\) denote the number of nodes, number of edges, feature dimension and number of classes, respectively. We use \(i\) to index a particular sample in the dataset, e.g. \(\mathcal{G}_{i},\mathbf{X}_{i}\).
Then, we can define a graph neural network consisting of \(\ell\) message passing layers \((\mathtt{MPNN})\), a graph-level readout function (READOUT), and a classifier head (MLP) as follows: \[\mathbf{X}_{M}^{\ell+1},\,\mathbb{E}^{\ell+1} = \mathtt{MPNN}_{e}^{\ell}\left(\mathbf{X}^{\ell},\mathbb{E}^{\ell},\mathbf{A}\right), \tag{1}\] \[\mathbf{G} = \mathtt{READOUT}\left(\mathbf{X}_{M}^{\ell+1}\right), \tag{2}\] \[\hat{Y} = \mathtt{MLP}\left(\mathbf{G}\right), \tag{3}\] where \(\mathbf{X}_{M}^{\ell+1},\mathbb{E}^{\ell+1}\) are intermediate node and edge representations, and \(\mathbf{G}\) is the graph representation. We focus on a graph classification setting throughout our paper. **Experimental Set-up:** Our experimental set-up is as follows. All results are reported over three seeds. _Models._ While improving the expressivity of GNNs is an active area of research, positional encodings (PEs) and graph-transformer (GTran) architectures (Muller et al., 2023) have proven to be particularly popular due to their effectiveness and flexibility. Indeed, GTrans are known to not only help mitigate over-smoothing (a phenomenon where GNNs lose discriminative power as node representations become indistinguishable) and over-squashing (a phenomenon where information from distant nodes is excessively compressed) (Alon and Yahav, 2021; Topping et al., 2022) but also to better capture long-range dependencies in large graphs (Dwivedi et al., 2022). Critical to the success of any transformer architecture are well-designed PEs. Notably, graph PEs help improve GNN and GTran expressivity by distinguishing between isomorphic nodes, as well as capturing structural vs. proximity information (Dwivedi et al., 2022). Here, we ask if these enhancements translate to improved calibration under distribution shift with respect to simple MPNNs by: (i) incorporating equivariant and stable PEs (Wang et al., 2022); (ii) utilizing MPNN vs. GTran architectures; and (iii) changing model depth and width. We utilize the state-of-the-art, flexible "general, powerful, scalable" (GPS) GTran (Rampasek et al., 2022) with the GatedGCN backbone. For fair comparison, we use a GatedGCN as the compared MPNN. _Data._ Superpixel-MNIST (Dwivedi et al., 2020; Knyazev et al., 2019; Velickovic et al., 2018) is a popular graph classification benchmark that converts MNIST images into \(k\) nearest-neighbor graphs of superpixels (Achanta et al., 2012). We select this benchmark as it allows for (i) a diverse set of well-trained models without requiring independent, extensive hyper-parameter search and (ii) controlled, label-preserving distribution shifts. Inspired by Ding et al. (2021), we create structurally distorted but valid graphs by rotating MNIST images by a fixed number of degrees and then creating the super-pixel graphs from these rotated images. (See Appendix, Fig. 8.) Since superpixel segmentation on these rotated images will yield different superpixel \(k\)-nn graphs without harming class information, we can emulate label-preserving structural distortion shifts. Note that the models are trained only using the original (\(0^{\circ}\) rotation) graphs. _Evaluation._ Calibrated models are expected to produce confidence estimates that match the true probabilities of the classes being predicted (Naeini et al., 2015; Guo et al., 2017; Ovadia et al., 2019). While poorly calibrated CIs are over- or under-confident in their predictions, calibrated CIs are more trustworthy and can also improve performance on other safety-critical tasks which implicitly require reliable prediction probabilities (see Sec. 5).
Here, we report the top-1 label expected calibration error (ECE) (Kumar et al., 2019; Detlefsen et al., 2022). Partition the confidence range \([0,1]\) into \(B\) uniformly sized bins, and let \(\mathrm{acc}(b)\) and \(\mathrm{conf}(b)\) denote the mean accuracy and the mean top-1 confidence of the \(n_{b}\) samples falling into bin \(b\). Then, \(\mathrm{ECE}:=\sum_{b=1}^{B}\frac{n_{b}}{N}\left|\mathrm{acc}(b)-\mathrm{conf}(b)\right|\). **Observations.** In Fig. 1, we present our results and make the following observations. Despite the aforementioned benefits in model expressivity, GPS is noticeably worse calibrated than the MPGNN, despite achieving comparable accuracy. This is particularly apparent at severe shifts (\(60^{\circ}\), \(90^{\circ}\), \(180^{\circ}\) rotations). Furthermore, we find that PEs have minimal effects on both calibration and accuracy. This suggests that while these techniques may enhance theoretical and empirical expressivity, they do not necessarily transfer to the safety-critical task of obtaining calibrated predictions under distribution shifts. In addition, we investigate the impact of model depth and width on calibration performance, considering that model size has been known to affect both the calibration of vision models (Guo et al., 2017; Minderer et al., 2021) and the propensity for over-squashing in GNNs (Xu et al., 2021). We see that increasing the number of message passing layers (\(L=3\to L=5\)) can marginally improve accuracy, and it may also marginally decrease ECE. Moreover, we find that increasing the width of the model can lead to slightly worse calibration at high levels of shift (\(90^{\circ}\), \(180^{\circ}\)), although accuracy is not compromised. Notably, when we apply our proposed method, G-\(\Delta\)UQ (see Sec. 4), to the MPGNN with no positional encodings, it significantly improves the calibration over more expressive variants (GPS, LPE), across all levels of distribution shifts, while maintaining comparable accuracy. We briefly note that we did not tune the hyper-parameters of our method to ensure a fair comparison, so we expect that accuracy could be further improved. Overall, our results emphasize that obtaining reliable GNN CIs remains a difficult problem that cannot be easily solved through advancements in architectures and expressivity. This motivates our uncertainty-modulated CIs as an architecture-agnostic solution. Figure 1: **Calibration on Structural Distortion Distribution Shifts.** On a controlled graph structure distortion shift, we evaluate models trained on the standard superpixel MNIST benchmark (Dwivedi et al., 2020) on super-pixel \(k\)-nn graphs created from rotated MNIST images. While accuracy is expected to decrease as distribution shift increases, we observe that the expected calibration error also grows significantly worse. Importantly, this trend is persistent when considering transformer architectural variants (GPS (Rampasek et al., 2022)), as well as different depths and widths. In contrast, our proposed G-\(\Delta\)UQ method achieves substantial improvement in ECE without significantly compromising on accuracy. ## 3 Related Work Here, we discuss techniques for improving CI reliability and the recently proposed stochastic centering paradigm, before introducing our proposed method in Sec. 4. ### Improving Confidence Indicators It is well known in computer vision that CIs are often unreliable or uncalibrated directly out-of-the-box (Guo et al., 2017), especially under distribution shifts (Ovadia et al., 2019; Wiles et al., 2022; Hendrycks et al., 2019).
Given that reliable CIs are necessary for a variety of safety-critical tasks, including generalization error prediction (GEP) (Jiang et al., 2019) and out-of-distribution (OOD) detection (Hendrycks and Gimpel, 2017), many strategies have been proposed to improve CI calibration (Lakshminarayanan et al., 2017; Guo et al., 2017; Gal and Ghahramani, 2016; Blundell et al., 2015). One particularly effective strategy is to create a deep ensemble (DEns) (Lakshminarayanan et al., 2017) by training a set of independent models (e.g., with different hyper-parameters, random seeds, or data order), where the mean prediction over the set is noticeably better calibrated. However, since DEns requires training multiple models, in practice, it can be prohibitively expensive to use. To this end, we focus on single-model strategies. Single-model UQ techniques attempt to scalably and reliably provide uncertainty estimates, which can then optionally be used to modulate the prediction probabilities. Here, the intuition is that when the epistemic uncertainties are large in a data regime, confidence estimates can be tempered so that they better reflect the accuracy degradation during extrapolation (e.g., training on small-sized graphs but testing on large-sized graphs). Some popular strategies include: Monte Carlo dropout (MCD) (Gal and Ghahramani, 2016), which performs Monte Carlo dropout at inference time and takes the average prediction to improve calibration; temperature scaling (Temp) (Guo et al., 2017), which rescales logits using a temperature parameter computed from a validation set; and SVI (Blundell et al., 2015), which proposes a stochastic variational inference method for estimating uncertainty. While such methods are more scalable than DEns, in many cases, they struggle to match its performance (Ovadia et al., 2019). We note that while some recent works have studied GNN calibration, they focus on node classification settings (Hsu et al., 2022; Wang et al., 2021; Kang et al., 2022) and are not directly relevant to this work, as they make assumptions that are only applicable to node classification tasks (e.g., proposing regularizers that rely upon similarity to training nodes or neighbourhood similarity). ### Stochastic Centering for Uncertainty Quantification Recently, it was found that applying a (random) constant bias to vector-valued (and image) data leads to non-trivial changes in the resulting solution of a DNN (Thiagarajan et al., 2022). This behavior was attributed to the lack of shift-invariance in the neural tangent kernel (NTK) induced by conventional neural networks such as MLPs and CNNs. Building upon this observation, Thiagarajan et al. proposed a single-model uncertainty estimation method, \(\Delta\)-UQ, based on the principle of _anchoring_. Conceptually, anchoring is the process of creating a relative representation for an input sample \(x\) in terms of a random anchor \(c\) (which is used to perform the _stochastic centering_), \([x-c,c]\). By choosing different anchors randomly in each training iteration, \(\Delta\)-UQ emulates the process of sampling different solutions from the hypothesis space (akin to an ensemble). During inference, \(\Delta\)-UQ aggregates multiple predictions obtained via different random anchors and produces uncertainty estimates.
Formally, given a trained stochastically centered model, \(f_{\theta}:[\mathbf{X}-\mathbf{C},\mathbf{C}]\rightarrow\hat{\mathbf{Y}}\), let \(\mathbf{C}:=\mathbf{X}_{train}\) be the anchor distribution, \(\mathrm{x}\in\mathbf{X}_{test}\) be a test sample, and \(\mathrm{c}\in\mathbf{C}\) be an anchor. Then, the mean target class prediction, \(\mathbf{\mu}(y|\mathrm{x})\), and corresponding variance, \(\mathbf{\sigma}(y|\mathrm{x})\), over \(K\) random anchors are computed as: \[\mathbf{\mu}(y|\mathrm{x}) =\frac{1}{K}\sum_{k=1}^{K}f_{\theta}([\mathrm{x}-\mathrm{c}_{k},\mathrm{c}_{k}]) \tag{4}\] \[\mathbf{\sigma}(y|\mathrm{x}) =\sqrt{\frac{1}{K-1}\sum_{k=1}^{K}(f_{\theta}([\mathrm{x}-\mathrm{c}_{k},\mathrm{c}_{k}])-\mathbf{\mu})^{2}} \tag{5}\] Since the variance over \(K\) anchors captures epistemic uncertainty by sampling different hypotheses, these estimates can be used to modulate the predictions: \(\mathbf{\mu}_{\text{calib}}=\mathbf{\mu}(1-\mathbf{\sigma})\). The resulting calibrated predictions and uncertainty estimates have led to state-of-the-art performance on image outlier rejection and calibration tasks, while still only requiring a single model. Furthermore, it was separately shown that anchoring can also be used to improve the extrapolation behavior of DNNs (Netanyahu et al., 2023). However, while an attractive paradigm, there are several challenges to using stochastic centering with GNNs and graph data. We discuss and remedy these below in Sec. 4. ## 4 Graph-\(\Delta\)UQ: Uncertainty-based Prediction Calibration In this section, we introduce G-\(\Delta\)UQ, a novel single-model UQ method that helps improve the performance of CIs without sacrificing computational efficiency or accuracy by extending the recently proposed stochastic centering paradigm to graph data. (See Fig. 2 for an overview.) As discussed in Sec. 3, the stochastic centering and anchoring paradigm has demonstrated significant promise in computer vision, yet there are several challenges that must be addressed prior to applying it to GNNs and graph data. Notably, previous research on stochastic centering has focused on traditional vision models (CNNs, ResNets, ViT) and relied on straightforward input space transformations (e.g., subtraction and channel-wise concatenation: \([\mathbf{X}-\mathbf{C},\mathbf{C}]\)) to construct anchored representations. However, graph datasets are structured, discrete, and variable-sized, where such trivial transformations do not exist. Moreover, the distribution shifts encountered in graph datasets exhibit distinct characteristics compared to those typically examined in the vision literature that must be accounted for when sampling the underlying GNN hypothesis space. Therefore, it is non-trivial to design anchors that are capable of appropriately capturing epistemic uncertainty. Below, we discuss not only how to extend stochastic centering to GNNs and structured data (G-\(\Delta\)UQ), but also propose partially stochastic and pretrained variants that further improve the capabilities of anchored GNNs. In Section 5, we empirically demonstrate the advantages of our approach. Figure 2: **Overview of G-\(\Delta\)UQ. We propose three different stochastic centering variants that induce varying levels of stochasticity in the underlying GNN.
Notably, READOUT stochastic centering allows for using pretrained models with G-\(\Delta\)UQ.** ### Node Feature Anchoring Recall that in \(\Delta\)-UQ, input samples are transformed into an anchored representation by directly subtracting the input and anchor, and then concatenating them channel-wise, where the first DNN layer is correspondingly modified to accommodate the additional channels. While this is reasonable for vector-valued data or images, due to the variability of graph sizes and the discrete nature of graphs, performing a structural residual operation, \((\mathbf{A}-\mathbf{A}_{c},\mathbf{A}_{c})\), with respect to a graph sample, \(\mathcal{G}=(\mathbf{X},\mathbb{E},\mathbf{A},Y)\), and another anchor graph, \(\mathcal{G}_{c}=(\mathbf{X}_{c},\mathbb{E}_{c},\mathbf{A}_{c},Y_{c})\), would introduce artificial edge weights and connectivity artifacts that can harm convergence. Likewise, we cannot _directly_ anchor using the node features, \(\mathbf{X}\), since the underlying graphs are of different sizes, and a set of node features cannot be considered IID. To this end, we first create a distribution over the training dataset node features and sample anchors from this distribution as follows. We first fit a Gaussian distribution (\(\mathcal{N}(\mu,\sigma)\)) to the training node features. Then, during training, we randomly sample an anchor for each node. Mathematically, given the anchor \(\mathbf{C}\in\mathbb{R}^{N\times d}\) with rows drawn from \(\mathcal{N}(\mu,\sigma)\), we create the anchor/query node feature pair \([\mathbf{X}_{i}-\mathbf{C}||\mathbf{X}_{i}]\), where \(||\) denotes concatenation, and \(i\) is the node index. During inference, we sample a fixed set of \(K\) anchors and compute residuals for all nodes with respect to the same anchor, e.g., \(\mathbf{c}_{k}\sim\mathcal{N}(\mu,\sigma)\) with \(\mathbf{c}_{k}\in\mathbb{R}^{1\times d}\), giving \([\mathbf{X}_{i}-\mathbf{c}_{k}||\mathbf{X}_{i}]\) with appropriate broadcasting. For datasets with categorical node features, it is more beneficial to perform the anchoring operation after embedding the node features in a continuous space. Alternatively, considering the advantages of PEs in enhancing model expressivity (Wang et al., 2022), one can compute positional information for each node and perform anchoring based on these encodings. While performing anchoring with respect to the node features is perhaps the most direct extension of \(\Delta\)-UQ to graphs, as it results in a fully stochastically centered GNN, only using node features for anchoring neglects direct information about the underlying structure, which may lead to less diversity when sampling from the hypothesis space. Below, we introduce hidden layer variants that create partially stochastic GNNs that exploit message-passing to capture both feature and structural information during hypothesis sampling. ### Hidden Layer Anchoring While performing anchoring in the input space creates a fully stochastic neural network, as all parameters are learned using the randomized input, it was recently demonstrated with respect to Bayesian neural networks that relaxing the assumption of full stochasticity to partially stochastic neural networks not only leads to strong computational benefits, but also may improve calibration (Sharma et al., 2023). Motivated by this observation, we extend G-\(\Delta\)UQ to support anchoring in intermediate layers, in lieu of the input layer. This allows for _partially stochastic_ GNNs, wherein the layers prior to the anchoring step are deterministic.
While performing anchoring with respect to the node features is perhaps the most direct extension of \(\Delta\)-UQ to graphs, as it results in a fully stochastically centered GNN, using only node features for anchoring neglects direct information about the underlying structure, which may lead to less diversity when sampling from the hypothesis space. Below, we introduce hidden layer variants that create partially stochastic GNNs, which exploit message-passing to capture both feature and structural information during hypothesis sampling.

### Hidden Layer Anchoring

While performing anchoring in the input space creates a fully stochastic neural network, as all parameters are learned using the randomized input, it was recently demonstrated for Bayesian neural networks that relaxing fully stochastic networks to partially stochastic ones not only leads to strong computational benefits, but may also improve calibration (Sharma et al., 2023). Motivated by this observation, we extend G-\(\Delta\)UQ to support anchoring in intermediate layers, in lieu of the input layer. This allows for _partially stochastic_ GNNs, wherein the layers prior to the anchoring step are deterministic. Moreover, intermediate layer anchoring has the additional benefit that anchors will be able to sample hypotheses that consider both topological and node feature information, due to the MPNN steps, and it supports using pretrained GNNs. We introduce these variants below. (See Fig. 2 for a visual representation.)

_Intermediate MPNN Anchoring:_ Given a GNN containing \(\ell\) MPNN layers, let \(r\leq\ell\) be the layer at which we perform node feature anchoring. We obtain the anchor/sample pair by computing the intermediate node representations from the first \(r\) MPNN layers. We then randomly shuffle the node features over the entire _batch_, \(\mathbf{C}=\text{SHUFFLE}(\mathbf{X}_{i}^{r+1})\), concatenate the residuals, and proceed with the READOUT and MLP layers as with the standard \(\Delta\)-UQ model. Note that we do not consider the gradients of the query sample when updating the parameters, and the MPNN\({}^{r+1}\) layer is modified to accept inputs of dimension \(d_{r}\times 2\) (to take in anchored representations as inputs). Another difference from the input space implementation is that we fix the set of anchors and subtract a single anchor from all node representations in an iteration (instead of sampling uniquely), e.g., \(\mathbf{c}=\mathbf{X}_{c}^{r+1}[n,:]\) and \([\mathbf{X}_{i,n}^{r+1}-\mathbf{c}\,||\,\mathbf{c}]\). This process is shown below, assuming appropriate broadcasting:

\[\mathbf{X}^{r+1}=\textsc{MPNN}^{1\ldots r}\left(\mathbf{X},\mathbf{A}\right)\]
\[\mathbf{X}^{\ell+1}=\textsc{MPNN}^{r+1\ldots\ell}\left([\mathbf{X}^{r+1}-\mathbf{C},\mathbf{C}],\mathbf{A}\right)\]
\[\hat{Y}=\textsc{MLP}(\textsc{READOUT}\left(\mathbf{X}^{\ell+1}\right))\]

_Intermediate READOUT Anchoring:_ While READOUT anchoring is conceptually similar to intermediate MPNN anchoring, we now obtain a different anchor for each hidden graph representation, instead of for individual nodes. This allows us to sample hypotheses after all node information has been aggregated over \(\ell\) hops. This is demonstrated below:

\[\mathbf{G}=\texttt{READOUT}\left(\mathbf{X}^{\ell+1}\right),\quad\mathbf{G}_{c}=\texttt{READOUT}\left(\mathbf{X}_{c}^{\ell+1}\right)\]
\[\hat{Y}=\texttt{MLP}\left(\left[\mathbf{G}-\mathbf{G}_{c},\mathbf{G}_{c}\right]\right)\]

_Pretrained Anchoring:_ Lastly, we note that in order to be compatible with the stochastic centering framework, the network architecture must be modified at the input layer or the chosen intermediate layer, and retrained from scratch. To circumvent this, we consider a variant of READOUT anchoring using a pretrained GNN backbone. Here, the final MLP layer of a pretrained model is discarded and reinitialized to accommodate query/anchor pairs. We then freeze the MPNN and only train the anchored classifier head. This allows for an inexpensive, limited-stochasticity GNN.

While all G-\(\Delta\)UQ variants are able to sample from the underlying hypothesis space (see Fig. 10), each variant will provide somewhat different uncertainty estimates. Through our extensive evaluation, we show that the complexity of the task and the nature of the distribution shift determine which of the variants is best suited, and we make some recommendations on which variants to use.
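To make the hidden-layer variants concrete, here is a minimal sketch of the READOUT anchoring forward pass; `mpnn`, `pool` (the READOUT), and `mlp` are hypothetical placeholders for the backbone's components, and in the pretrained variant `mpnn` would simply be frozen:

```python
import torch

def readout_anchored_forward(mpnn, pool, mlp, X, A, X_c, A_c):
    """READOUT anchoring: the anchor is another graph's hidden representation,
    so stochasticity enters only through the pair fed to the classifier head."""
    G = pool(mpnn(X, A))          # query graph embedding
    G_c = pool(mpnn(X_c, A_c))    # anchor graph embedding (e.g., from a shuffled batch)
    return mlp(torch.cat([G - G_c, G_c], dim=-1))
```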
## 5 Uncertainty-based Prediction Calibration under Distribution Shift using G-\(\Delta\)UQ

In this section, we demonstrate the effectiveness of G-\(\Delta\)UQ in improving the reliability of CIs on various tasks (calibration, generalization gap prediction, and OOD detection) as well as under various distribution shifts (size, covariate, and concept).

### Size Generalization

While GNNs are well-known to struggle when generalizing to larger graphs (Buffelli et al., 2022; Yehudai et al., 2021; Chen et al., 2022), their predictive uncertainty behavior with respect to such shifts remains understudied. Given that such shifts can be expected at deployment, reliable uncertainty estimates under this setting are important for safety-critical applications. We note that while sophisticated training strategies can be used to improve size generalization (Buffelli et al., 2022; Bevilacqua et al., 2021), our focus is primarily on the quality of uncertainty estimates, so we do not consider such techniques. However, we note that G-\(\Delta\)UQ can be used in conjunction with such techniques.

**Experimental Set-up.** Following the procedure of (Buffelli et al., 2022; Yehudai et al., 2021), we create a size distribution shift by taking the smallest 50%-quantile of graph sizes for the training set, and reserving the larger quantiles (>50%) for evaluation. Unless otherwise noted, we report results on the largest 10% quantile to capture performance on the largest shift. We utilize this splitting procedure on four well-known benchmark binary graph classification datasets from the TUDataset repository (Morris et al., 2020): D&D, NCI1, NCI109, and PROTEINS. (See App. A.3 for dataset statistics.) We further consider three different backbone GNN models: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019), and PNA (Corso et al., 2020). All models contain three message-passing layers and the same sized hidden representation. The accuracy and expected calibration error on the larger-graph test set are reported for models trained with and without stochastic anchoring.

Figure 3: **Impact of Layer Selection on G-\(\Delta\)UQ.** Performing anchoring at different layers leads to the sampling of different hypothesis spaces. On D&D, we see that later layer anchoring corresponds to a better inductive bias and can lead to dramatically improved performance.

**Results.** As noted in Sec. 4, stochastic anchoring can be applied at different layers, leading to the sampling of different hypothesis spaces and inductive biases. In order to empirically understand this behavior, we compare the performance of stochastic centering when applied at different layers on the D&D dataset, which comprises the most severe size shift from training to test set (see Fig. 3). We observe that applying stochastic anchoring after the READOUT layer (L3) dramatically improves both accuracy and calibration as the anchoring depth increases. While this behavior is less pronounced on other datasets (see Fig. 11), we find overall that applying stochastic anchoring at the last layer yields competitive performance on size generalization benchmarks and better convergence compared to stochastic centering performed at earlier layers. Indeed, in Fig. 4, we compare the performance of last-layer anchoring against a non-anchored model on four datasets. We observe that G-\(\Delta\)UQ improves calibration performance on most datasets, while generally maintaining or even improving the accuracy. The improvement is most pronounced on the largest shift (D&D), further emphasizing the benefits of stochastic centering.
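For reference, the expected calibration error reported here is typically estimated with equal-width confidence binning; the following is a minimal sketch (the binning scheme and 10 bins are our assumptions, not necessarily the exact evaluation protocol):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: occupancy-weighted average of |accuracy - confidence|
    over equal-width confidence bins; `correct` is a 0/1 array."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```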
### Evaluation under Concept and Covariate Shifts

Here, we seek to understand the behavior of GNN CIs under controlled covariate and concept shifts, as well as to demonstrate the benefits of G-\(\Delta\)UQ in providing reliable estimates under such shifts. Notably, we expand our evaluation beyond calibration error to include the safety-critical tasks of OOD detection (Hendrycks and Gimpel, 2017; Hendrycks et al., 2019) and generalization gap prediction (Guillory et al., 2021; Ng et al., 2022; Trivedi et al., 2023; Garg et al., 2022). We begin by introducing our data and additional tasks, and then present our results.

**Experimental Set-up.** In brief, concept shift corresponds to a change in the conditional distribution of labels given inputs from the training to the evaluation datasets, while covariate shift corresponds to a change in the input distribution. We use the recently proposed Graph Out-Of-Distribution (GOOD) benchmark (Gui et al., 2022) to obtain four different datasets (GOODCMNIST, GOODMotif-basis, GOODMotif-size, GOODSST2) with their corresponding in-/out-of-distribution concept and covariate splits. To ensure fair comparison, we use the architectures and hyper-parameters suggested by the benchmark when training. Please see the supplementary for more details.

Figure 4: **Predictive Uncertainty under Size Distribution Shifts.** When evaluating the accuracy and calibration error of models trained with and without stochastic anchoring on datasets with a graph size distribution shift, we observe that stochastic centering decreases calibration error while improving or maintaining accuracy across datasets and different GNNs.

Figure 5: **Predictive Uncertainty under Concept and Covariate Shifts.** Stochastic anchoring leads to competitive in-distribution and out-of-distribution test accuracy while improving calibration, across domains and shifts. This is particularly true when comparing to other single-model UQ methods.

We consider the following baseline UQ methods in our analysis: Deep Ensembles (Lakshminarayanan et al., 2017), Monte Carlo Dropout (MCD) (Gal and Ghahramani, 2016), and our proposed G-\(\Delta\)UQ, including the pretrained variant. DeepEns is well known to be a strong baseline on uncertainty estimation tasks, but we emphasize that it requires training multiple models. This is in contrast to single-model estimators, such as MCD and G-\(\Delta\)UQ. We note that MCD and G-\(\Delta\)UQ can be applied at intermediate layers; we present results for the best-performing layer and include the full results in the supplementary.

**Using Confidence Estimates in Safety-Critical Tasks.** The safe deployment of graph machine learning models in critical applications requires that GNNs not only generalize to ID and OOD datasets, but that they do so safely. To this end, recent works (Hendrycks et al., 2022; Hendrycks et al., 2023; Trivedi et al., 2023) have expanded model evaluation to include additional robustness metrics that provide a holistic view of model performance. Notably, while reliable confidence indicators are critical to success on these metrics, the impact of distribution shift on GNN confidence estimates remains under-explored. We introduce these additional tasks below.

_Generalization Error Prediction:_ Accurate estimation of the expected generalization error on unlabeled datasets allows models with unacceptable performance to be pulled from production. To this end, generalization error predictors (GEPs) (Garg et al., 2022; Ng et al., 2022; Jiang et al., 2019; Trivedi et al., 2023; Guillory et al., 2021), which assign sample-level scores, \(S(x_{i})\), that are then aggregated into dataset-level error estimates, have become popular.
We use maximum softmax probability and a simple thresholding mechanism as the GEP (since we are interested in understanding the behavior of confidence indicators), and report the error between the predicted and true target dataset accuracy: \[\mathrm{GEP\;Error}:=\left|\,\mathrm{Acc}_{target}-\frac{1}{|X|}\sum_{i}\mathbb{I}(\mathrm{S}(\tilde{x}_{i};\mathrm{F})>\tau)\,\right|\] where \(\tau\) is tuned by minimizing the GEP error on the validation dataset. We use the confidences obtained by the different baselines as the sample-level scores, \(\mathrm{S}(x_{i})\), corresponding to the model's expectation that a sample is correct. The MAE between the estimated error and the true error is reported on both the in- and out-of-distribution test splits provided by the GOOD benchmark.

Figure 6: **Generalization Gap Prediction.** The mean absolute error when using scores obtained from different baselines in the challenging (and, to the best of our knowledge, yet unexplored for graphs) task of generalization error prediction is reported. While there is not a dominant method, stochastic anchoring is very competitive, and yields among the lowest MAE of single-model UQ estimators. Notably, pretrained G-\(\Delta\)UQ is particularly effective and outperforms the end-to-end variant.

_Out-of-Distribution Detection:_ By reliably detecting OOD samples and abstaining from making predictions, models can avoid over-extrapolating to distributions which are not relevant. While many scores have been proposed for detection (Hendrycks et al., 2019, 2022; Lee et al., 2018; Wang et al., 2022; Liu et al., 2020), flexible, popular baselines, such as maximum softmax probability and predictive entropy (Hendrycks and Gimpel, 2017), can be derived from confidence indicators relying upon prediction probabilities. Here, we report the AUROC for the binary classification task of detecting OOD samples using the maximum softmax probability (Kirchheim et al., 2022). We briefly note that while more sophisticated scores can be used, our focus is on the reliability of GNN confidence indicators, and thus we choose scores directly related to those estimates. Moreover, since sophisticated scores can often be derived from prediction probabilities, we expect their performance would also be improved with better estimates.
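Both tasks above score samples by their maximum softmax probability. A minimal sketch of the two metrics follows; the array layout and helper names are ours, and \(\tau\) is assumed to have been tuned on validation data as described above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def msp(probs):
    """Maximum softmax probability per sample; probs has shape (n, n_classes)."""
    return probs.max(axis=1)

def gep_error(probs_target, acc_target, tau):
    """Thresholded-MSP GEP: predicted accuracy is the fraction of samples
    whose confidence clears tau; report |true - predicted| accuracy."""
    return abs(acc_target - (msp(probs_target) > tau).mean())

def ood_auroc(probs_id, probs_ood):
    """AUROC for MSP-based OOD detection (ID samples should score higher)."""
    scores = np.concatenate([msp(probs_id), msp(probs_ood)])
    labels = np.concatenate([np.ones(len(probs_id)), np.zeros(len(probs_ood))])
    return roc_auc_score(labels, scores)
```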
**Results.** We report the results in Figs. 5, 6, and 7; our observations are below. Results are reported over three seeds.

_Accuracy & Calibration._ In Fig. 5, we observe that using stochastic anchoring via G-\(\Delta\)UQ yields competitive accuracy, especially in comparison to other single-model methods such as MCD, temperature scaling, or the base GNN model: in-distribution accuracy is higher on 6 out of 8 dataset/shift combinations, and out-of-distribution accuracy is higher on 5 out of 8 combinations. While Deep Ensembles is the most accurate method on a majority of datasets, it is known to be computationally expensive. Moreover, the simpler stochastic anchoring procedure generally comes close to the accuracy of Deep Ensembles, and in a few cases (covariate shift on the GOODCMNIST and GOODMotif-size datasets), can noticeably outperform it. Stochastic anchoring also excels in improving calibration, improving in-distribution calibration compared to all baselines on 4 out of 8 combinations. _Most importantly, out-of-distribution calibration error is decreased by stochastic anchoring on 7 of 8 dataset/shift combinations compared to all other methods (single-model or ensemble)._

_Generalization Gap Prediction._ Next, we study all of our methods on the GOOD benchmarks for the task of generalization gap prediction, and report the results in Fig. 6. On this challenging task, there is no clear winner across all benchmarks. However, G-\(\Delta\)UQ variants are consistently competitive in MAE, and yield among the lowest MAE (across the board lower than other single-model UQ methods). In particular, _the pretrained G-\(\Delta\)UQ variant produces on average the lowest MAE for generalization gap estimation._

_OOD Detection._ Finally, we consider the task of detecting out-of-distribution samples. In Fig. 7, we see that the performance of stochastic anchoring methods under concept shift is generally very competitive with other UQ methods. For covariate shifts, except for the GOODMotif-basis dataset, stochastic anchoring produces high AUROC scores. In particular, on the GOODCMNIST-color, GOODSST2-length, and GOODMotif-size benchmarks, the pretrained variant of G-\(\Delta\)UQ produces significantly improved AUROC scores. On GOODMotif-basis, however, both variants have lower AUROC than other baselines; we suspect the reason for this to be the inherent simplicity of this dataset, which may have made G-\(\Delta\)UQ prone to shortcuts. Overall, we find that G-\(\Delta\)UQ performs competitively across several tasks and distribution shifts, validating our approach as an effective mechanism for producing reliable confidence indicators.

Figure 7: **OOD Detection.** The AUROC is reported for the task of detecting out-of-distribution samples. Under concept shift, the proposed G-\(\Delta\)UQ variants are very competitive with other baselines, including DeepEns. Under covariate shifts, except for GOODMotif-basis, pretrained G-\(\Delta\)UQ produces significant improvements over all baselines, including end-to-end G-\(\Delta\)UQ training.

## 6 Conclusion

In this work, we take a closer look at confidence estimation under distribution shifts in the context of graph neural networks. We begin by demonstrating that techniques for improving GNN expressivity, such as transformer architectures and positional encodings, do not necessarily improve estimation performance on a simple structural distortion shift benchmark. To this end, we seek to improve the uncertainty estimation of GNNs by adapting the principle of stochastic anchoring to discrete, structured settings. We propose several G-\(\Delta\)UQ variants, and demonstrate the benefits of partial stochasticity when estimating uncertainty. Our evaluation is extensive, spanning multiple types of distribution shift (size, concept, covariate) while considering multiple safety-critical tasks that require reliable estimates (calibration, generalization gap prediction, and OOD detection). The proposed G-\(\Delta\)UQ improves estimation performance on a number of tasks, while remaining scalable. Overall, our paper rigorously studies uncertainty estimation for GNNs, identifies several shortcomings in existing approaches, and proposes a flexible framework for reliable estimation. In future work, we will extend our framework to support link prediction and node classification tasks, as well as provide an automated mechanism for creating partially stochastic GNNs.

## 7 Acknowledgements

This work was performed under the auspices of the U.S.
Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, Lawrence Livermore National Security, LLC, and is partially supported by the LLNL-LDRD Program under Project No. 2-ERD-006. This work is also partially supported by the National Science Foundation under CAREER Grant No. IIS 1845491, Army Young Investigator Award No. W911NF-18-1-0397, and Adobe, Amazon, Facebook, and Google faculty awards. Any opinions, findings, and conclusions or recommendations expressed here are those of the author(s) and do not reflect the views of funding parties. PT thanks Ekdeep Singh Lubana and Vivek Sivaraman for useful discussions during the course of this project.
2303.00055
Learning time-scales in two-layers neural networks
Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease rate of empirical risk is non-monotone even after averaging over large batches. Long plateaus in which one observes barely any progress alternate with intervals of rapid decrease. These successive phases of learning often take place on very different time scales. Finally, models learnt in an early phase are typically 'simpler' or 'easier to learn', although in a way that is difficult to formalize. Although theoretical explanations of these phenomena have been put forward, each of them captures at best certain specific regimes. In this paper, we study the gradient flow dynamics of a wide two-layer neural network in high-dimension, when data are distributed according to a single-index model (i.e., the target function depends on a one-dimensional projection of the covariates). Based on a mixture of new rigorous results, non-rigorous mathematical derivations, and numerical simulations, we propose a scenario for the learning dynamics in this setting. In particular, the proposed evolution exhibits separation of timescales and intermittency. These behaviors arise naturally because the population gradient flow can be recast as a singularly perturbed dynamical system.
Raphaël Berthier, Andrea Montanari, Kangjie Zhou
2023-02-28T19:52:26Z
http://arxiv.org/abs/2303.00055v3
# Learning time-scales in two-layers neural networks

###### Abstract

Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease rate of empirical risk is non-monotone even after averaging over large batches. Long plateaus in which one observes barely any progress alternate with intervals of rapid decrease. These successive phases of learning often take place on very different time scales. Finally, models learnt in an early phase are typically 'simpler' or 'easier to learn', although in a way that is difficult to formalize. Although theoretical explanations of these phenomena have been put forward, each of them captures at best certain specific regimes. In this paper, we study the gradient flow dynamics of a wide two-layer neural network in high-dimension, when data are distributed according to a single-index model (i.e., the target function depends on a one-dimensional projection of the covariates). Based on a mixture of new rigorous results, non-rigorous mathematical derivations, and numerical simulations, we propose a scenario for the learning dynamics in this setting. In particular, the proposed evolution exhibits separation of timescales and intermittency. These behaviors arise naturally because the population gradient flow can be recast as a singularly perturbed dynamical system.

###### Contents

* 1 Introduction
* 2 Setting and standard learning scenario
* 3 Further related work
* 4 The large-network, high-dimensional limit
  * 4.1 Connection with mean field theory
  * 4.2 A general formulation
* 5 Numerical solution
* 6 Timescales hierarchy in the gradient flow dynamics
  * 6.1 First time scale: constant component
  * 6.2 Second time scale: linear component I
  * 6.3 Third time scale: linear component II
  * 6.4 Conjectured behavior for larger time scales
* 7 Stochastic gradient descent and finite sample size
* A Appendix to Section 4
  * A.1 Proof of Proposition 1
  * A.2 Proof of Corollary 1
  * A.3 Proof of Proposition 2
  * A.4 Derivation of the mean field dynamics (28)
  * A.5 Details of the alternative mean field approach
* B Calculations for the analysis of mean-field gradient flow
  * B.1 Solution of Eq. (83)
  * B.2 Induced approximation of the risk
  * B.3 Proof of Theorem 1
* C Proofs of Theorems 2 and 3: learning with projected SGD
  * C.1 Difference between GF and GD
  * C.2 Difference between GD and SGD
  * C.3 Difference between SGD and projected SGD
  * C.4 Proof of Theorem 3
* D Counterexamples to the standard learning scenario
  * D.1 Case 1: \(\sigma_{k}=0\) for some \(k\in\mathbb{N}\)
  * D.2 Case 2: \(\varphi_{0}=\dots=\varphi_{k}=0\) for some \(k\geq 1\)
  * D.3 Case 3: \(\varphi_{k}=0\) for some \(k\geq 1\)

## 1 Introduction

It is a recurring empirical observation that the training dynamics of neural networks exhibits a whole range of surprising behaviors:

1. _Plateaus._ Plotting the training and test error as a function of SGD steps, using either small stepsize or large batches to average out stochasticity, reveals striking patterns. These error curves display long plateaus where barely anything seems to be happening, which are followed by rapid drops (Saad and Solla, 1995; Yoshida and Okada, 2019; Power et al., 2022).
2. _Time-scales separation._ The time window for this rapid descent is much shorter than the time spent in the plateaus. Additionally, subsequent phases of learning take increasingly longer times (Ghorbani et al., 2020; Barak et al., 2022).
3. _Incremental learning._ Models learnt in the first phases of learning appear to be simpler than in later phases. Among others, Arpit et al.
(2017) demonstrated that easier examples in a dataset are learned earlier; Kalimeris et al. (2019) showed that models learnt in the first phase of training correlate well with linear models; Gissin et al. (2019) showed that, in many simplified models, the dynamics of gradient descent explores the solution space in an incremental order of complexity; Power et al. (2022) demonstrated that, in certain settings, a function that approximates the target well is only learnt past the point of overfitting.

Understanding these phenomena is not a matter of intellectual curiosity. In particular, incremental learning plays a key role in our understanding of generalization in deep learning. Indeed, in this scenario, stopping the learning at a certain time \(t\) amounts to controlling the complexity of the model learnt. The notion of complexity corresponds to the order in which the space of models is explored. While a number of groups have developed models to explain these phenomena, it is fair to say that a complete picture is still lacking. An exhaustive overview of these works is out of place here. We will outline three possible explanations that have been developed in the past, and provide more pointers in Section 3.

**Theory \(\#1\): Dynamics near singular points.** Several early works (Saad and Solla, 1995; Fukumizu and Amari, 2000; Wei et al., 2008) pointed out that the parametrization of multi-layer neural networks presents symmetries and degeneracies. For instance, the function represented by a multilayer perceptron is invariant under permutations of the neurons in the same layer. As a consequence, the population risk has multiple local minima connected through saddles or other singular sub-manifolds. Dynamics near these sub-manifolds naturally exhibits plateaus. Further, random or agnostic initializations typically place the network close to such submanifolds.

**Theory \(\#2\): Linear networks.** Following the pioneering work of Baldi and Hornik (1989), a number of authors, most notably Saxe et al. (2013) and Li et al. (2020), studied the behavior of deep neural networks with linear activations. While such networks can only represent linear functions, the training dynamics is highly non-linear. As demonstrated in Saxe et al. (2013), learning happens through stages that correspond to the singular value decomposition of the input-output covariance. Time scales are determined by the singular values.

**Theory \(\#3\): Kernel regime.** Following an initial insight of Jacot et al. (2018), a number of groups proved that, for certain initializations, the training dynamics and the model learnt by overparametrized neural networks are well approximated by certain linearly parametrized models. In the limit of very wide networks, the training dynamics of these models converges in turn to the training dynamics of kernel ridge(less) regression (KRR) with respect to a deterministic kernel (independent of the random initialization). We refer to Bartlett et al. (2021) for an overview and pointers to this literature. Recently, Ghosh et al. (2021) showed that, in high dimension, the learning dynamics of KRR also exhibits plateaus and waterfalls, and learns functions of increasing complexity over a diverging sequence of timescales.

While each of these theories offers useful insights, it is important to realize that they do not agree on the basic mechanism that explains plateaus, time-scales separation, and incremental learning.
In theory \(\#1\), plateaus are associated with singular manifolds and high-dimensional saddles, while in theories \(\#2\) and \(\#3\) they are related to a hierarchy of singular values of a certain matrix. In \(\#2\), the relevant singular values are the ones of the input-output covariance, and the fact that these singular values are well separated is postulated to be a property of the data distribution. In contrast, in \(\#3\) the relevant singular values are the eigenvalues of the kernel operator, and hence completely independent of the output (the target function). In this case, eigenvalues which are very different are proved to exist under natural high-dimensional distributions.

Not only do these theories propose different explanations, they are also motivated by very different simplified models. Theory \(\#1\) has been developed only for networks with a small number of hidden units. Theory \(\#2\) only applies to networks with multiple output units, because otherwise the input-output covariance is a \(d\times 1\) matrix and hence has only one non-trivial singular value. Finally, theory \(\#3\) applies under the conditions of the linear (a.k.a. lazy) regime, namely large overparametrization and suitable initialization (see, e.g., Bartlett et al. (2021)).

In order to better understand the origin of plateaus, time-scales separation, and incremental learning, we attempt a detailed analysis of gradient flow for two-layer neural networks. We consider a simple data-generation model, and propose a precise scenario for the behavior of the learning dynamics. We do not assume any of the simplifying features of the theories described above: activations are non-linear; the number of hidden neurons is large; we place ourselves outside the linear (lazy) regime. Our analysis is based on methods from dynamical systems theory: singular perturbation theory and matched asymptotic expansions. Unfortunately, we fall short of providing a general rigorous proof of the proposed scenario, but we can nevertheless prove it in several special cases and provide a heuristic argument supporting its generality.

The rest of the paper is organized as follows. Section 2 describes our data distribution, learning model, and the proposed scenario for the learning dynamics. We review further related work in Section 3. Section 4 describes the reduction of the gradient flow to a 'mean-field' dynamics that will be the starting point of our analysis. Section 5 presents numerical evidence for the proposed learning scenario. Finally, Sections 6 and 7 present our analysis of the learning dynamics.

**Notations.** In this paper, we use the classical asymptotic notations. The notations \(f(\varepsilon)=o(g(\varepsilon))\) or \(g(\varepsilon)=\omega(f(\varepsilon))\) as \(\varepsilon\to 0\) both denote that \(|f(\varepsilon)|/|g(\varepsilon)|\to 0\) in the limit \(\varepsilon\to 0\). The notations \(f(\varepsilon)=O(g(\varepsilon))\) or \(g(\varepsilon)=\Omega(f(\varepsilon))\) both denote that the ratio \(|f(\varepsilon)|/|g(\varepsilon)|\) remains upper bounded in the limit. The notations \(f(\varepsilon)=\Theta(g(\varepsilon))\) or \(f(\varepsilon)\asymp g(\varepsilon)\) denote that \(f(\varepsilon)=O(g(\varepsilon))\) and \(g(\varepsilon)=O(f(\varepsilon))\) both hold. Finally, \(f(\varepsilon)\sim g(\varepsilon)\) denotes that \(f(\varepsilon)/g(\varepsilon)\to 1\) in the limit.
## 2 Setting and standard learning scenario

We are given pairs \(\{(x_{i},y_{i})\}_{i\leq n}\), where \(x_{i}\in\mathbb{R}^{d}\) is a feature vector and \(y_{i}\in\mathbb{R}\) is a response variable. We are interested in cases in which the feature vector is high-dimensional but does not contain strong structure, while the response depends on a low-dimensional projection of the data. We assume the simplest model of this type, the so-called single-index model: \[y_{i}=\varphi(\langle u_{*},x_{i}\rangle)\,,\qquad x_{i}\sim\mathsf{N}(0,I_{d}),\;u_{*}\in\mathbb{S}^{d-1}, \tag{1}\] where \(\varphi:\mathbb{R}\to\mathbb{R}\) is a link function, \(\mathsf{N}(0,I_{d})\) denotes the standard multivariate Gaussian distribution in dimension \(d\), and \(\mathbb{S}^{d-1}:=\{v\in\mathbb{R}^{d}:\,\|v\|_{2}=1\}\).

We study the ability to learn model (1) using a two-layers neural network with \(m\) hidden neurons: \[f(x;a,u)=\frac{1}{m}\sum_{i=1}^{m}a_{i}\sigma(\langle u_{i},x\rangle),\qquad a_{1},\cdots,a_{m}\in\mathbb{R},\;u_{1},\cdots,u_{m}\in\mathbb{S}^{d-1},\] where \((a,u):=(a_{1},\cdots,a_{m},u_{1},\cdots,u_{m})\) collectively denotes all the model's parameters. The factor \(1/m\) in the definition is relevant for the initialization and learning rate. We anticipate that we will initialize the \(a_{i}\)'s to be of order one, which results in second layer coefficients \(a_{i}/m=\Theta(1/m)\). This is often referred to as the 'mean-field initialization' and is known to drive the learning process out of the linear or kernel regime; see, e.g., (Mei et al., 2018; Chizat and Bach, 2018; Ghorbani et al., 2020; Yang and Hu, 2020; Abbe et al., 2022).

The bulk of our work will be devoted to the analysis of projected gradient flow in \((a_{i},u_{i})_{1\leq i\leq m}\) on the population risk \[\mathscr{R}(a,u)=\frac{1}{2}\mathbb{E}\big\{\big(y-f(x;a,u)\big)^{2}\big\} \tag{2}\] \[=\frac{1}{2}\mathbb{E}\Big\{\Big(\varphi(\langle u_{*},x\rangle)-\frac{1}{m}\sum_{i=1}^{m}a_{i}\sigma(\langle u_{i},x\rangle)\Big)^{2}\Big\}\,. \tag{3}\] In Section 7, we will bound the distance between stochastic gradient descent (SGD) and gradient flow on the population risk. As a consequence, we will establish finite-sample generalization guarantees for SGD learning. Projected gradient flow with respect to the risk \(\mathscr{R}(a,u)\) is defined by the following ordinary differential equations (ODEs): \[\partial_{t}(\varepsilon a_{i})=-m\partial_{a_{i}}\mathscr{R}(a,u)\,, \tag{4}\] \[\partial_{t}u_{i}=-m(I_{d}-u_{i}u_{i}^{\top})\nabla_{u_{i}}\mathscr{R}(a,u)\,. \tag{5}\]

It is useful to make a few remarks about the definition of gradient flow:

* The projection \(I_{d}-u_{i}u_{i}^{\top}\) ensures that \(u_{i}\) remains on the unit sphere \(\mathbb{S}^{d-1}\).
* The overall scaling of time is arbitrary, and the matching to SGD steps will be carried out in Section 7. The factors \(m\) on the right-hand side are introduced for mathematical convenience, since the partial derivatives are of order \(1/m\).
* The factor \(\varepsilon\) introduced in the flow of the \(a_{i}\)'s reflects the fact that usually SGD is run with respect to the overall second-layer weights \((a_{i}/m)_{1\leq i\leq m}\). This would correspond to taking \(\varepsilon=1/m\). However, we will keep \(\varepsilon\) as a free parameter independent of \(m\), and study the evolution for small \(\varepsilon\).

We assume the initialization to be random with i.i.d.
components \((a_{i,\mathrm{init}},u_{i,\mathrm{init}})\): \[(a_{i,\mathrm{init}},u_{i,\mathrm{init}})\sim\mathrm{P}_{A}\otimes\mathrm{Unif}(\mathbb{S}^{d-1})\,, \tag{6}\] where \(\mathrm{P}_{A}\) is a probability measure on \(\mathbb{R}\). The unique solution of the gradient flow ODEs with this initialization will be denoted by \((a(t),u(t))\). We will be interested in the case of large networks \((m\to\infty)\) in high dimension \((d\to\infty)\). As shown below, the two limits commute (over fixed time horizons).

Our main finding is that, in a number of cases, \(\varphi\) is learnt incrementally. Namely, the function \(f(x;a(t),u(t))\) evolves over time according to a sequence of polynomial approximations of \(\varphi(\langle u_{*},x\rangle)\). These polynomial approximations are given by the decomposition of \(\varphi\) in \(L^{2}(\mathbb{R},\phi(x)\mathrm{d}x)\), where \(\phi(x)\) is the standard normal density: \(\phi(x)=\exp(-x^{2}/2)/\sqrt{2\pi}\). (For notational simplicity, we will use the shorthand \(L^{2}\) instead of \(L^{2}(\mathbb{R},\phi(x)\mathrm{d}x)\) in the sequel.) In order to describe the polynomial approximations learnt during training more explicitly, we decompose \(\varphi\) and \(\sigma\) into normalized Hermite polynomials: \[\varphi(z)=\sum_{k=0}^{\infty}\varphi_{k}\mathrm{He}_{k}(z)\,,\quad\sigma(z)=\sum_{k=0}^{\infty}\sigma_{k}\mathrm{He}_{k}(z)\,. \tag{7}\] Here, \(\mathrm{He}_{k}\) denotes the \(k\)-th Hermite polynomial, normalized so that \(\|\mathrm{He}_{k}\|_{L^{2}(\mathbb{R},\phi(x)\mathrm{d}x)}=1\). As we will see, the incremental learning behavior arises for small \(\varepsilon\). By the law of large numbers (see below), the following almost sure limit exists (provided \(\mathrm{P}_{A}\) is square integrable): \[\mathscr{R}_{\mathrm{init}}:=\lim_{m\to\infty}\lim_{d\to\infty}\mathscr{R}(a_{\mathrm{init}},u_{\mathrm{init}})=\frac{1}{2}\left(\varphi_{0}-\sigma_{0}\int\!a\,\mathrm{P}_{A}(\mathrm{d}a)\right)^{2}+\frac{1}{2}\sum_{k\geq 1}\varphi_{k}^{2}. \tag{8}\]

We are now in a position to describe the scenario that we will study in the rest of the paper.

**Definition 1.** _We say that the standard learning scenario holds up to level \(L\) for a certain target function \(\varphi\), activation \(\sigma\), and distribution \(\mathrm{P}_{A}\), if the following hold:_

1. _The limit below exists:_ \[\mathscr{R}_{\infty}(t,\varepsilon)=\lim_{m\to\infty}\lim_{d\to\infty}\mathscr{R}(a(t),u(t))\,. \tag{9}\]
2. _There exist constants_ \(c_{2},\ldots,c_{L+1}>0\) _such that the following asymptotics hold as_ \(\varepsilon\to 0\), \(t\to 0\): \[\mathscr{R}_{\infty}(t,\varepsilon)\xrightarrow[\varepsilon\to 0,\,t\to 0]{}\begin{cases}\mathscr{R}_{\mathrm{init}}&\text{if }t=o(\varepsilon)\,,\\ \frac{1}{2}\sum_{k\geq 1}\varphi_{k}^{2}&\text{if }t=\omega(\varepsilon)\text{ and }t=\frac{1}{4|\sigma_{1}\varphi_{1}|}\varepsilon^{1/2}\log\frac{1}{\varepsilon}-\omega(\varepsilon^{1/2})\,,\\ \frac{1}{2}\sum_{k\geq 2}\varphi_{k}^{2}&\text{if }t=\frac{1}{4|\sigma_{1}\varphi_{1}|}\varepsilon^{1/2}\log\frac{1}{\varepsilon}+\omega(\varepsilon^{1/2})\text{ and }t=c_{2}\varepsilon^{1/4}-\omega(\varepsilon^{1/3})\,,\\ \frac{1}{2}\sum_{k\geq l}\varphi_{k}^{2}&\text{if }t=c_{l-1}\varepsilon^{1/(2(l-1))}+\omega(\varepsilon^{1/l})\text{ and }t=c_{l}\varepsilon^{1/(2l)}-\omega(\varepsilon^{1/(l+1)})\,,\\ &\quad\text{for all }3\leq l\leq L+1.\end{cases}\]

Figure 1 provides a cartoon illustration of the standard learning scenario.

Figure 1: Cartoon illustration of the evolution of the population risk within the standard learning scenario of Definition 1.

A specific realization of our general setup is determined by the triple \((\sigma,\varphi,\mathrm{P}_{A})\). In the rest of the paper, we will provide evidence showing that the standard learning scenario holds in a number of cases. Nevertheless, we can also construct examples in which it does not hold:

* If one or more of the Hermite coefficients of the activation vanish, then the standard scenario does not hold for general \(\varphi\). Specifically, if \(\sigma_{k}=0\), then for any \(t\) the function \(f(x;a(t),u(t))\) remains orthogonal to \(\mathrm{He}_{k}(\langle u_{*},x\rangle)\). In particular, if \(\varphi_{k}\neq 0\), then the risk remains bounded away from zero for every \(t\). We refer to Appendix D.1 for a formal statement.
* If the first \(k+1\) Hermite coefficients of \(\varphi\) vanish, \(\varphi_{0}=\cdots=\varphi_{k}=0\) for some \(k\geq 1\), then the standard scenario does not hold. (See Appendix D.2 for the proof.)
* In fact, we expect the standard scenario might fail every time one or more of the coefficients \(\varphi_{k}\) vanish, for \(k\geq 1\). Appendix D.3 provides some heuristic justification for this failure.

**Remark 2.1.** We can compare the standard learning scenario described here to the ones proposed in earlier literature and described as theories #1, #2, and #3 in the introduction. There are points of contact, but also important differences, with both theories #1 and #3:

* As in theory #1, the plateaus and separation of time scales arise because the trajectory of gradient flow is approximated by a sequence of motions along submanifolds in the space of parameters \((a,u)\). Along the \(l\)-th such submanifold, \(f(x;a,u)\) is well-approximated by a degree-\(l\) polynomial. Escaping each submanifold takes an increasingly longer time. This is reminiscent of the motion between saddles investigated in earlier work (Saad and Solla, 1995; Fukumizu and Amari, 2000; Wei et al., 2008). However, unlike in earlier work, we will see that this applies to networks with a large (possibly diverging) number of hidden neurons. Also, we identify the subsequent phases of learning with the polynomial decomposition of Eq. (7).
* As in theory #3, subsequent phases of learning correspond to increasingly accurate polynomial approximations of the target function \(\varphi(\langle u_{*},x\rangle)\). However, the underlying mechanism and time scales are completely different. In the linear regime, the different time scales emerge because of increasingly small eigenvalues of the neural tangent kernel. In that case, the time required to learn degree-\(l\) polynomials is of order \(d^{l}\) (Ghosh et al., 2021). In contrast, in the standard learning scenario, polynomials of degree \(l\) are learnt on a time scale of order one in \(d\) (and depending only on the learning rate \(\varepsilon\)). This of course has important implications when approximating gradient flow by SGD. Within the linear regime, the sample size required to learn polynomials of degree \(l\) scales like \(d^{l}\) (Ghosh et al., 2021), while in the standard scenario, it is only of order \(d\) (see Section 7).

## 3 Further related work

As we mentioned in the introduction, plateaus and time scales in the learning dynamics of kernel models were analyzed by Ghosh et al. (2021). A sharp analysis of the related random features model was developed by Bodin and Macris (2021). Our analysis builds upon the mean-field description of learning in two-layer neural networks, which was developed in a sequence of works; see, e.g., (Mei et al., 2018; Rotskoff and Vanden-Eijnden, 2018; Chizat and Bach, 2018; Mei et al., 2019). In particular, we leverage the fact that, for the data distribution (1), the population risk function is invariant under rotations around the axis \(u_{*}\), and this allows for a dimensionality reduction in the mean field description. Similar symmetry arguments were used by Mei et al. (2018) and, more recently, by Abbe et al. (2022).

The single-index model can be learnt using simpler methods than large two-layer networks. Limiting ourselves to the case of gradient descent algorithms, Mei et al. (2018) proved that gradient descent with respect to the non-convex empirical risk \(\widehat{R}_{n}(u):=n^{-1}\sum_{i=1}^{n}(y_{i}-\varphi(u^{\top}x_{i}))^{2}\) converges to a near global optimum, provided \(\varphi\) is strictly increasing. Ben Arous et al. (2021) considered online SGD under more challenging learning scenarios and characterized the time (sample size) needed for \(|\langle u,u_{*}\rangle|\) to become significantly larger than for a random unit vector \(u\).

Learning in overparametrized two-layer networks under model (1) (or its variations) has been studied recently by several groups. In particular, Ba et al. (2022) consider a training procedure which runs a single step of gradient descent, followed by freezing the first layer and performing ridge regression with respect to the second layer. This scheme is amenable to a precise characterization of the generalization error. Bietti et al. (2022) consider a similar scheme in which a first phase of gradient descent is run to achieve positive correlation with the unknown direction \(u_{*}\). Damian et al. (2022) also consider a two-phase scheme, and prove consistency and excess risk bounds for a more general class of target functions, whereby the first equation in (1) is replaced by \[y_{i}=\varphi(U_{*}^{\top}x_{i})+\varepsilon_{i}\,,\;\;\;U_{*}\in\mathbb{R}^{k\times d}\,,\varphi:\mathbb{R}^{k}\to\mathbb{R}\,, \tag{10}\] with \(k\ll d\). In particular, near-optimal error bounds are obtained under a non-degeneracy condition on \(\nabla^{2}\varphi\). Abbe et al.
(2022) consider a similar model whereby \(x\sim\text{Unif}(\{+1,-1\}^{d})\), and \(y=\varphi(x_{S})\) where \(S\subseteq[d]\), and \(x_{S}=(x_{i})_{i\in S}\) (i.e., \(x_{S}\) contains the coordinates of \(x\) indexed by entries of \(S\)). Under a structural assumption on \(\varphi\) (the 'merged staircase property'), and for \(|S|\) fixed, they prove that a two-stage algorithm learns the target function with sample complexity of order \(d\). This paper is technically related to ours in that it uses mean-field theory to obtain a characterization of learning in terms of a PDE in a reduced \((k+2)\)-dimensional space. A similar model was studied by Barak et al. (2022), who bound the sample complexity by \(d^{O(k)}\) for learning parities on \(k\) bits using gradient descent with large batches (if \(k=O(1)\), Barak et al. (2022) require \(O(1)\) steps with batch size \(d^{O(k)}\)).

Let us emphasize that our objective is quite different from these works. We do not allow ourselves deviations from standard SGD, and we try to derive a precise picture of the successive phases of learning (in particular, we do not consider two-stage schemes or layer-by-layer learning). On the other hand, we focus on a relatively simple model. To clarify the difference, it is perhaps useful to rephrase our claims in terms of sample complexity. While previous works show that the target function can be learnt with \(O(d)\) samples, we claim that it is learnt by online SGD to test error \(r\) from about \(C(r,\varepsilon)d\) samples, and we characterize the dependence of \(C(r,\varepsilon)\) on \(r\) for small \(\varepsilon\) (falling short of a proof in the general case).

After posting an initial version of this paper, we became aware that Arnaboldi et al. (2023) independently derived equations similar to (14)-(18), (25), (119). There are technical differences, and hence we cannot apply their results directly. However, Section 4.1 and Appendix A.4 are analogous to their work.

## 4 The large-network, high-dimensional limit

The first step of our analysis is a reduction of the system of ODEs (4), (5), with dimension \(m(d+1)\), to a system of ODEs in \(2m\) dimensions. We will achieve this reduction in two steps:

* First we reduce to a system in \(m(m+3)/2\) dimensions for the variables \(a_{i}\), \(\langle u_{i},u_{j}\rangle\), \(\langle u_{i},u_{*}\rangle\). This reduction is exact and is quite standard.
* We then show that the products \(\langle u_{i},u_{j}\rangle\) can be eliminated, with an error \(O(1/m)\).

As further discussed below, the resulting dynamics could also be derived from the mean field theory of Mei et al. (2018); Rotskoff and Vanden-Eijnden (2018); Chizat and Bach (2018); Mei et al. (2019) (with the required modifications for the constraints \(\left\lVert u_{i}\right\rVert=1\)). In order to define the reduced system formally, we define the functions \(U,V:[-1,1]\to\mathbb{R}\) via: \[V(s):=\mathbb{E}\{\varphi(G)\,\sigma(G_{s})\}=\sum_{k\geqslant 0}\varphi_{k}\sigma_{k}s^{k}\,,\qquad(G,G_{s})\sim\mathcal{N}\left(0,\begin{bmatrix}1&s\\ s&1\end{bmatrix}\right)\,, \tag{11}\] \[U(s):=\mathbb{E}\{\sigma(G)\,\sigma(G_{s})\}=\sum_{k\geqslant 0}\sigma_{k}^{2}s^{k}\,. \tag{12}\] Note that the above identities follow from (O'Donnell, 2014, Proposition 11.31).
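As an aside, the series in Eqs. (11)-(12) are easy to evaluate numerically: the Hermite coefficients can be computed by Gauss-Hermite quadrature and the series then truncated. The sketch below is purely illustrative (the truncation level and quadrature degree are our choices); \(V\) is obtained analogously from the products \(\varphi_{k}\sigma_{k}\):

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_coeffs(g, K=10, deg=100):
    """g_k = E[g(G) He_k(G)] in the normalized Hermite basis, computed by
    Gauss-Hermite quadrature for the weight exp(-x^2/2)."""
    x, w = hermegauss(deg)        # nodes and weights; sum(w) equals sqrt(2*pi)
    gx = g(x)
    coeffs = []
    for k in range(K + 1):
        basis = np.zeros(k + 1); basis[k] = 1.0
        he_k = hermeval(x, basis) / sqrt(factorial(k))   # normalized He_k
        coeffs.append(np.sum(w * gx * he_k) / sqrt(2 * pi))
    return np.array(coeffs)

sigma_k = hermite_coeffs(lambda z: np.maximum(z, 0.0))   # ReLU coefficients

def U(s):
    # Truncation of Eq. (12); for ReLU, sigma_k decays with k.
    return sum(c**2 * s**k for k, c in enumerate(sigma_k))
```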
Throughout this section, we will make the following assumptions.

* **A1.** The distribution of weights at initialization, \(\mathrm{P}_{A}\), is supported on \([-M_{1},M_{1}]\).
* **A2.** The activation function is bounded: \(\left\lVert\sigma\right\rVert_{\infty}\leq M_{2}\). Additionally, the functions \(V\) and \(U\) are bounded and of class \(C^{2}\), with uniformly bounded first and second derivatives over \(s\in[-1,1]\). A sufficient condition for this is \[\sup\left\{\left\lVert\sigma^{\prime}\right\rVert_{L^{2}},\,\left\lVert\sigma^{\prime\prime}\right\rVert_{L^{2}}\right\}\leq M_{2},\qquad\sup\left\{\left\lVert\varphi\right\rVert_{L^{2}},\,\left\lVert\varphi^{\prime}\right\rVert_{L^{2}},\,\left\lVert\varphi^{\prime\prime}\right\rVert_{L^{2}}\right\}\leq M_{2}.\]
* **A3.** Responses are bounded, i.e., \(\left\lVert\varphi\right\rVert_{\infty}\leq M_{3}\).

**Remark 4.1.** We hereby briefly explain why the \(L^{2}\)-boundedness of the derivatives of \(\sigma\) and \(\varphi\), as claimed in Assumption A2, is sufficient. Suppose for example that \(\left\lVert\sigma^{\prime}\right\rVert_{L^{2}}\leq M_{2}\) and \(\left\lVert\varphi^{\prime}\right\rVert_{L^{2}}\leq M_{2}\); then we have \[\sup_{s\in[-1,1]}\left\lvert V^{\prime}(s)\right\rvert\overset{(a)}{=}\sup_{s\in[-1,1]}\left\lvert\mathbb{E}\{\varphi^{\prime}(G)\,\sigma^{\prime}(G_{s})\}\right\rvert\overset{(b)}{\leq}\left\lVert\varphi^{\prime}\right\rVert_{L^{2}}\left\lVert\sigma^{\prime}\right\rVert_{L^{2}}\leq M_{2}^{2}, \tag{13}\] where \((a)\) follows from Gaussian integration by parts and \((b)\) follows from the Cauchy-Schwarz inequality.

Our first statement establishes reduction \((i)\) mentioned above. The proof of this fact is presented in Appendix A.1.

**Proposition 1** (Reduction to \(d\)-independent flow). _Define \(s_{i}=\langle u_{i},u_{*}\rangle\), \(r_{ij}=\langle u_{i},u_{j}\rangle\) for \(i,j=1,\ldots,m\). Then, letting \(R=(r_{ij})_{i,j\leq m}\), we have_ \[\mathscr{R}(a,u)=\mathscr{R}_{\mathrm{red}}(a,s,R):=\frac{1}{2}\|\varphi\|_{L^{2}}^{2}-\frac{1}{m}\sum_{i=1}^{m}a_{i}V(s_{i})+\frac{1}{2m^{2}}\sum_{i,j=1}^{m}a_{i}a_{j}U(r_{ij})\,. \tag{14}\] _If \((a(t),u(t))\) solve the gradient flow ODEs (4)-(5), then \((a(t),s(t),R(t))\) are the unique solution of the following set of ODEs (note that \(r_{ii}=1\) identically):_ \[\varepsilon\partial_{t}a_{i}=\,V(s_{i})-\frac{1}{m}\sum_{j=1}^{m}a_{j}U(r_{ij})\,, \tag{15}\] \[\partial_{t}s_{i}=\,a_{i}\left(V^{\prime}(s_{i})(1-s_{i}^{2})-\frac{1}{m}\sum_{j=1}^{m}a_{j}U^{\prime}(r_{ij})(s_{j}-r_{ij}s_{i})\right)\,, \tag{16}\] \[\partial_{t}r_{ij}=\,a_{i}\left(V^{\prime}(s_{i})(s_{j}-s_{i}r_{ij})-\frac{1}{m}\sum_{p=1}^{m}a_{p}U^{\prime}(r_{ip})(r_{jp}-r_{ip}r_{ij})\right) \tag{17}\] \[\qquad+a_{j}\left(V^{\prime}(s_{j})(s_{i}-s_{j}r_{ij})-\frac{1}{m}\sum_{p=1}^{m}a_{p}U^{\prime}(r_{jp})(r_{ip}-r_{jp}r_{ij})\right)\,. \tag{18}\]

The input dimension \(d\) does not appear in the reduced ODEs, Eqs. (15) to (18), and only plays a role in the initialization of the \(s_{i}\)'s and the \(r_{ij}\)'s. Namely, since \(u_{i,\mathrm{init}}\sim\mathrm{Unif}(\mathbb{S}^{d-1})\), we can represent \(u_{i,\mathrm{init}}=g_{i}/\|g_{i}\|_{2}\) with \(g_{i}\sim\mathsf{N}(0,I_{d}/d)\). By concentration of \(\|g_{i}\|_{2}\), this implies that, for \(1\leq i<j\leq m\), \(s_{i}\) and \(r_{ij}\) are approximately \(\mathsf{N}(0,1/d)\). This discussion immediately yields the following consequence.

**Corollary 1.** _Let \((a(t),u(t))\) be the solution of the gradient flow ODEs (4), (5) with initialization (6), and let \((a^{0}(t),s^{0}(t),R^{0}(t))\) be the unique solution of Eqs._
_(15) to (18), with initialization \(a^{0}_{i}(0)=a_{i}(0)\), \(s^{0}_{i}(0)=0\), \(r^{0}_{ij}(0)=0\) for \(i\neq j\). Then, for any fixed \(T\) (possibly dependent on \(m\) but not on \(d\)), the following holds with probability at least \(1-\exp(-C^{\prime}m)\) over the i.i.d. initialization \((a_{i}(0),u_{i}(0))_{i\in[m]}\):_ \[\sup_{t\in[0,T]}|\mathscr{R}(a(t),u(t))-\mathscr{R}_{\mathrm{red}}(a^{0}(t),s^{0}(t),R^{0}(t))|\leq\frac{CM}{\sqrt{d}}\exp\left(MT(1+T)^{2}/\varepsilon^{2}\right)\,, \tag{19}\] \[\max\left(\sup_{t\in[0,T]}\frac{1}{\sqrt{m}}\|a(t)-a^{0}(t)\|_{2},\,\sup_{t\in[0,T]}\frac{1}{\sqrt{m}}\|s(t)-s^{0}(t)\|_{2}\right)\leq\frac{1}{\sqrt{d}}\cdot C\exp\left(MT(1+T)^{2}/\varepsilon^{2}\right)\,, \tag{20}\] \[\sup_{t\in[0,T]}\frac{1}{m}\|R(t)-R^{0}(t)\|_{\mathrm{F}}\leq\frac{1}{\sqrt{d}}\cdot C\exp\left(MT(1+T)^{2}/\varepsilon^{2}\right)\,. \tag{21}\] _Here \(C,C^{\prime}\) are absolute constants and \(M\) only depends on the \(M_{i}\)'s in Assumptions A1-A3._

The proof of Corollary 1 is deferred to Appendix A.2. From now on, we will assume the initialization \(s^{0}_{i}(0)=0\), \(r^{0}_{ij}(0)=0\) for \(i\neq j\), but drop the superscript \(0\) for notational simplicity. We notice in passing that the right-hand sides of Eqs. (19) to (21) are independent of \(m\): this approximation step holds uniformly over \(m\). (Note that the left-hand sides are normalized so as to yield the root mean square error per entry.)

In order to state the reduction \((ii)\) outlined above, we define the mean field risk as \[\mathscr{R}_{\text{mf}}(a,s):=\mathscr{R}_{\text{red}}(a,s,R=ss^{\top})=\frac{1}{2}\|\varphi\|_{L^{2}}^{2}-\frac{1}{m}\sum_{i=1}^{m}a_{i}V(s_{i})+\frac{1}{2m^{2}}\sum_{i,j=1}^{m}a_{i}a_{j}U(s_{i}s_{j})\,. \tag{22}\] Further, we denote by \(\{a_{i}^{\text{mf}}(t),s_{i}^{\text{mf}}(t)\}_{i=1}^{m}\) the solution to the following ODEs: \[\varepsilon\partial_{t}a_{i}=\,V(s_{i})-\frac{1}{m}\sum_{j=1}^{m}a_{j}U(s_{i}s_{j})\,, \tag{23}\] \[\partial_{t}s_{i}=\,a_{i}\left(1-s_{i}^{2}\right)\left(V^{\prime}(s_{i})-\frac{1}{m}\sum_{j=1}^{m}a_{j}U^{\prime}(s_{i}s_{j})s_{j}\right)\,.\] Note that (23) would be identical to (15)-(16) if we had \(r_{ij}=s_{i}s_{j}\). A priori, this is not the case. However, the two systems of equations are close to each other for large \(m\), as made precise by our next proposition, which formalizes reduction \((ii)\).

**Proposition 2** (Reduction to flow in \(\mathbb{R}^{2m}\)). _Let \((a_{i}(t),s_{i}(t),r_{ij}(t))_{1\leq i<j\leq m}\) be the unique solution of the ODEs (15)-(18) with initialization \(s_{i}(0)=0\), \(r_{ij}(0)=0\) for all \(1\leq i\neq j\leq m\)._
_Let \((a_{i}^{\text{mf}}(t),s_{i}^{\text{mf}}(t))_{i\leq m}\) be the unique solution of the ODEs (23) with initialization \(s_{i}^{\text{mf}}(0)=0\), \(a_{i}^{\text{mf}}(0)=a_{i}(0)\) for all \(i\leq m\)._

_If Assumptions A1-A3 hold, then for any \(T<\infty\) there exists a constant_ \[C(T)=M\exp(MT(1+T)^{2}/\varepsilon^{2}) \tag{24}\] _(with \(M\) depending only on the constants \(\{M_{i}\}_{1\leq i\leq 3}\) appearing in Assumptions A1-A3) such that:_ \[\sup_{t\in[0,T]}\frac{1}{m}\sum_{i=1}^{m}\left\|(a_{i}(t),s_{i}(t))-(a_{i}^{\text{mf}}(t),s_{i}^{\text{mf}}(t))\right\|_{2}^{2}\leq\frac{C(T)}{m}\,.\] _Consequently,_ \[\sup_{t\in[0,T]}|\mathscr{R}_{\text{red}}\left(a(t),s(t),R(t)\right)-\mathscr{R}_{\text{mf}}\left(a^{\text{mf}}(t),s^{\text{mf}}(t)\right)|\leq\frac{C(T)}{\sqrt{m}}\,.\]

The proof of this proposition is deferred to Appendix A.3. Now, combining the propositions and corollaries in this section, we deduce that, with high probability over the i.i.d. initialization, \[\sup_{t\in[0,T]}|\mathscr{R}(a(t),u(t))-\mathscr{R}_{\text{mf}}\left(a^{\text{mf}}(t),s^{\text{mf}}(t)\right)|\leq\left(\frac{1}{\sqrt{d}}+\frac{1}{\sqrt{m}}\right)CM\exp(MT(1+T)^{2}/\varepsilon^{2}). \tag{25}\]

### Connection with mean field theory

Consider the empirical distributions of the neurons: \[\widehat{\rho}_{t}:=\frac{1}{m}\sum_{i=1}^{m}\delta_{(a_{i}(t),s_{i}(t))}\,, \tag{26}\] \[\rho_{t}:=\frac{1}{m}\sum_{i=1}^{m}\delta_{(a_{i}^{\text{mf}}(t),s_{i}^{\text{mf}}(t))}\,, \tag{27}\] with \((a_{i}(t),s_{i}(t))_{i\leq m}\), \((a_{i}^{\text{mf}}(t),s_{i}^{\text{mf}}(t))_{i\leq m}\) as in the statement of Proposition 2, i.e., solving (respectively) Eqs. (15)-(18) and Eq. (23) with the initial conditions given there. Then, it is immediate to show that \(\rho_{t}\) solves (in the weak sense) the following continuity partial differential equation (PDE); we refer to Ambrosio et al. (2005) and Santambrogio (2015) for the definition of weak solutions and basic properties, and to Appendix A.4 for a short derivation: \[\partial_{t}\rho_{t}(a,s)=-\nabla\cdot\left(\rho_{t}\Psi\left(a,s;\rho_{t}\right)\right) \tag{28}\] \[:=-\left(\partial_{a}\left(\rho_{t}\Psi_{a}\left(a,s;\rho_{t}\right)\right)+\partial_{s}\left(\rho_{t}\Psi_{s}\left(a,s;\rho_{t}\right)\right)\right), \tag{29}\] where \(\Psi=\left(\Psi_{a},\Psi_{s}\right)\) is given by \[\Psi_{a}(a,s;\rho)=\,\varepsilon^{-1}\cdot\left(V(s)-\int_{\mathbb{R}^{2}}a_{1}U(ss_{1})\rho(\mathrm{d}a_{1},\mathrm{d}s_{1})\right), \tag{30}\] \[\Psi_{s}(a,s;\rho)=a(1-s^{2})\cdot\left(V^{\prime}(s)-\int_{\mathbb{R}^{2}}a_{1}s_{1}U^{\prime}(ss_{1})\rho(\mathrm{d}a_{1},\mathrm{d}s_{1})\right). \tag{31}\] This equation can be extended to a flow in the whole space \((\mathscr{P}(\mathbb{R}^{2}),W_{2})\) (all probability measures on \(\mathbb{R}^{2}\) equipped with the second Wasserstein distance), and interpreted as a gradient flow, with respect to this metric, of the following risk: \[\mathscr{R}_{\text{mf},*}(\rho):=\frac{1}{2}\|\varphi\|_{L^{2}}^{2}-\int aV(s)\,\rho(\mathrm{d}a,\mathrm{d}s)+\frac{1}{2}\int a_{1}a_{2}U(s_{1}s_{2})\,\rho(\mathrm{d}a_{1},\mathrm{d}s_{1})\,\rho(\mathrm{d}a_{2},\mathrm{d}s_{2})\,, \tag{32}\] which is the obvious extension of \(\mathscr{R}_{\text{mf}}(a,s)\) of Eq. (22) to general probability distributions.
Proposition 2 implies that, for any \(T<\infty\), and under the above initial conditions, \[\sup_{t\in[0,T]}W_{2}(\rho_{t},\widehat{\rho}_{t})\leq\sqrt{\frac{M\exp(MT(1+T)^{2}/\varepsilon^{2})}{m}}\,. \tag{33}\] If we further denote by \(\rho_{t}^{d}\) the empirical distribution of \((a_{i}(t),s_{i}(t))\), \(i\leq m\), when \(s_{i}(0)=\langle u_{i}(0),u_{*}\rangle\), \(u_{i}(0)\sim\mathrm{Unif}(\mathbb{S}^{d-1})\), a further application of Corollary 1 yields \[\sup_{t\in[0,T]}W_{2}(\rho_{t}^{d},\rho_{t})\leq\sqrt{\frac{M\exp(MT(1+T)^{2}/\varepsilon^{2})}{m\wedge d}}\,. \tag{34}\]

Starting with Mei et al. (2018); Chizat and Bach (2018); Rotskoff and Vanden-Eijnden (2018), several authors used continuity PDEs of the form (28) to study the learning dynamics of two-layer neural networks. Following the physics tradition, this is referred to as the 'mean-field theory' of two-layer neural networks. Appendix A.5 sketches an alternative approach to proving bounds of the form (25), (34) using the results of Mei et al. (2018, 2019). The present derivation has the advantages of yielding a sharper bound and of being self-contained.

### A general formulation

As mentioned above, the system of ODEs in Eq. (23) is a special case of the Wasserstein gradient flow of Eq. (28), whereby we set \(\rho_{0}=m^{-1}\sum_{i=1}^{m}\delta_{(a_{i}^{\text{mf}}(0),s_{i}^{\text{mf}}(0))}\). In order to study the solutions of Eq. (28) (hence of Eq. (23)), we adopt the following framework. Let \((\Omega,\rho)\) denote a probability space. Let \(a=a(\omega,t)\) and \(s=s(\omega,t)\) (\(\omega\in\Omega\), \(t\geqslant 0\)) be two measurable functions satisfying (dropping dependencies on \(t\) below) \[\varepsilon\partial_{t}a(\omega)=\,V(s(\omega))-\int\mathrm{d}\rho(\nu)a(\nu)U(s(\omega)s(\nu))\,, \tag{35}\] \[\partial_{t}s(\omega)=\,a(\omega)\left(1-s(\omega)^{2}\right)\left(V^{\prime}(s(\omega))-\int\mathrm{d}\rho(\nu)a(\nu)U^{\prime}(s(\omega)s(\nu))s(\nu)\right)\,.\] If \(\omega=i\in\Omega=\{1,\dots,m\}\), endowed with the uniform measure, we obtain the equations (23). In general, the push-forward \(\rho_{t}\) of the measure \(\rho\) through the map \(\omega\in\Omega\mapsto(a(\omega,t),s(\omega,t))\in\mathbb{R}^{2}\) satisfies the mean-field equation (28). As a consequence, the dynamics (35) can be viewed as a gradient flow on the risk \[\mathscr{R}_{\text{mf},*}(\rho)=\frac{1}{2}\|\varphi\|^{2}-\int a(\omega)V(s(\omega))\mathrm{d}\rho(\omega)+\frac{1}{2}\int a(\omega_{1})a(\omega_{2})U(s(\omega_{1})s(\omega_{2}))\mathrm{d}\rho(\omega_{1})\mathrm{d}\rho(\omega_{2})\,. \tag{36}\]

## 5 Numerical solution

In Figure 2, we present the result of an Euler discretization of Eqs. (23), where \(\varphi\) is a degree-2 polynomial and \(\sigma\) is the ReLU activation, \(\sigma(s)=\max(s,0)\): \[\begin{split}\varphi(s)&=\mathrm{He}_{0}(s)-\mathrm{He}_{1}(s)+\frac{2}{3}\mathrm{He}_{2}(s)\\ &=\left(1-\frac{2\sqrt{2}}{6}\right)-s+\frac{2\sqrt{2}}{6}s^{2}\,.\end{split} \tag{37}\]

Figure 2: Simulation of the mean field neuron dynamics of Eqs. (23), with the target function of Eq. (37) and ReLU activations. We use learning rate ratios \(\varepsilon=10^{-3}\) (left) and \(\varepsilon=10^{-6}\) (right), and \(m=10\) neurons. First two rows: evolution of the risk \(\mathscr{R}_{\text{mf}}\) of Eq. (22), in linear and log scales. Third row: evolution of the first three terms of the sum in (38).

These plots clearly display two of the features emphasized in the introduction: \((i)\) plateaus separated by periods of rapid improvement of the risk; \((ii)\) increasingly long timescales (notice the logarithmic time axis in the second and third rows).
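The following illustrative sketch reproduces such a simulation, assuming the closed-form arc-cosine kernel for \(U\) under ReLU and the three-term Hermite series for \(V\) induced by Eq. (37); the step size, horizon, and initialization are our choices, not necessarily those used for Figure 2:

```python
import numpy as np

# Normalized Hermite data: ReLU has sigma_0 = 1/sqrt(2*pi), sigma_1 = 1/2,
# sigma_2 = 1/(2*sqrt(pi)); the target (37) has phi = (1, -1, 2/3), rest zero.
sig = np.array([1/np.sqrt(2*np.pi), 0.5, 1/(2*np.sqrt(np.pi))])
phi = np.array([1.0, -1.0, 2/3])

def V(s):  return phi[0]*sig[0] + phi[1]*sig[1]*s + phi[2]*sig[2]*s**2
def dV(s): return phi[1]*sig[1] + 2*phi[2]*sig[2]*s
def U(s):  return (np.sqrt(1 - s**2) + s*(np.pi - np.arccos(s))) / (2*np.pi)
def dU(s): return (np.pi - np.arccos(s)) / (2*np.pi)

def risk3(a, s):
    """First three terms of the sum in Eq. (38) (the ones plotted in Fig. 2)."""
    moments = np.array([np.mean(a * s**k) for k in range(3)])
    return 0.5 * np.sum((phi - sig * moments)**2)

def simulate(m=10, eps=1e-3, dt=1e-4, T=10.0, seed=0):
    """Euler discretization of the mean-field ODEs (23)."""
    rng = np.random.default_rng(seed)
    a, s = rng.uniform(-1.0, 1.0, m), np.zeros(m)   # s_i(0) = 0
    for _ in range(int(T / dt)):
        ss = np.clip(np.outer(s, s), -1.0, 1.0)      # matrix of s_i * s_j
        da = (V(s) - U(ss) @ a / m) / eps
        ds = a * (1 - s**2) * (dV(s) - (dU(ss) * s[None, :]) @ a / m)
        a, s = a + dt * da, np.clip(s + dt * ds, -1.0, 1.0)
    return a, s

a, s = simulate()
print(risk3(a, s))   # should be small once all three components are learned
```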
by decomposing \(\varphi\) and \(\sigma\) in the basis of Hermite polynomials:

\[\mathscr{R}_{\text{mf}}(a,s)=\frac{1}{2}\sum_{k\geqslant 0}\left(\varphi_{k}-\frac{\sigma_{k}}{m}\sum_{i=1}^{m}a_{i}s_{i}^{k}\right)^{2}\,. \tag{38}\]

We observe that, for small \(\varepsilon\), the Hermite coefficients of \(\varphi\) are learned sequentially, in the order of their degree. When \(\varepsilon\) is sufficiently small (right plots), this incremental learning happens in well-separated phases. The plateaus and waterfalls in the plots of \(\mathscr{R}_{\text{mf}}\) correspond to the network learning polynomials of increasingly high degree.

In Figure 3 we plot the evolution of the values of the \(a_{i}\) and the \(s_{i}\), for \(i\in\{1,\dots,m\}\). We observe that the order of magnitude of the \(a_{i}\)'s and the \(s_{i}\)'s increases when passing through the different phases of the incremental learning process.

Figure 3: Same simulation as in Figure 2 (b). In these plots, we show the evolution of the \(a_{i}\) and the \(s_{i}\) for \(i\in\{1,\dots,m\}\) following a discretization of Eqs. (23).

Altogether, the results of Figures 2 and 3 are consistent with the standard learning scenario up to level \(L=2\) as per Definition 1. While we conjecture that incremental learning also occurs for higher-order polynomials, we found this hard to observe in numerical simulations. First, as predicted in Definition 1, the times at which the components are learned are closer on a logarithmic scale as the degree increases. It is therefore increasingly difficult to observe time scales corresponding to higher degrees. Second, we expect there to be choices of the initialization \((a_{i,\text{init}},u_{i,\text{init}})_{i\in[m]}\), activation, and target function for which not all the components of \(\varphi\) are actually learnt. We observed empirically that this happens easily for small \(m\).

## 6 Timescales hierarchy in the gradient flow dynamics

We are interested in the behavior of the solution of the ODEs (35), initialized from \(s(\omega,0)=0\) for all \(\omega\) (as per Proposition 2). The standard learning scenario of Definition 1 concerns the behavior of solutions for \(\varepsilon\to 0\). This type of question can be addressed within the theory of dynamical systems using _singular perturbation theory_ (Holmes, 2013) ('singular' refers to the fact that \(\varepsilon\) multiplies one of the highest-order derivatives). As a side remark, we note that the system (35) can be seen as a slow-fast dynamical system, where the \(a(\omega)\)'s are the fast variables and the \(s(\omega)\)'s are the slow variables (Berglund, 2001). Formally, the time derivative of the \(a(\omega)\)'s is multiplied by a factor \(1/\varepsilon\). From a dynamical systems perspective, the present case is complicated by a bifurcation when the \(s(\omega)\)'s become non-zero. The standard learning scenario provides a detailed description of this bifurcation. We will motivate this scenario using a classical, but non-rigorous, technique of singular perturbation theory, called the _matched asymptotic expansion_ (Holmes, 2013, Chapter 2).
This technique decomposes the approximation of the solution into several time scales on which a regular approximation holds. These time scales are traditionally called _layers_ in the literature; however, we avoid this terminology due to the potential confusion with the layers of the neural network.

We will work mainly with the Hermite representation of the ODEs (35), which we write down for the reader's convenience:

\[\begin{split}\varepsilon\partial_{t}a(\omega)&=\,\sum_{k=0}^{\infty}\sigma_{k}s(\omega)^{k}\left(\varphi_{k}-\sigma_{k}\int a(\nu)s(\nu)^{k}\mathrm{d}\rho(\nu)\right)\,,\\ \partial_{t}s(\omega)&=\,a(\omega)\left(1-s(\omega)^{2}\right)\sum_{k=1}^{\infty}k\sigma_{k}s(\omega)^{k-1}\left(\varphi_{k}-\sigma_{k}\int a(\nu)s(\nu)^{k}\mathrm{d}\rho(\nu)\right)\,.\end{split} \tag{39}\]

Sections 6.1-6.3 respectively describe the first three time scales of the matched asymptotic expansion of (39). This gives, for each time scale, an approximation of the \(a(\omega)\), \(s(\omega)\). In Appendix B.2, we detail how these approximations induce an evolution of the risk that alternates between plateaus and rapid decreases, and support the standard learning scenario of Definition 1. Finally, in Section 6.4, we conjecture the behavior on longer time scales.

Notations.We denote \(\mathds{1}\) the constant function \(\mathds{1}:\omega\in\Omega\mapsto 1\in\mathbb{R}\). We denote \(\langle.,.\rangle_{L^{2}(\rho)}\) the dot product on \(L^{2}(\rho)\) and \(\|.\|_{L^{2}(\rho)}\) the associated norm. For \(x\in L^{2}(\rho)\), we denote \(x_{\perp}\) the orthogonal projection of \(x\) on the hyperplane \(\mathds{1}^{\perp}\) of \(L^{2}(\rho)\) of functions orthogonal to \(\mathds{1}\):

\[x_{\perp}(\omega)=x(\omega)-\int x(\nu)\mathrm{d}\rho(\nu)\,.\]

We denote \(a_{\mathrm{init}}(\omega)=a(\omega,0)\), so that \(a_{\perp,\mathrm{init}}\) is the orthogonal projection of \(a_{\mathrm{init}}\) on \(\mathds{1}^{\perp}\).

### First time scale: constant component

We define a "fast" time variable \(t_{1}=t/\varepsilon\) and substitute it in Eq. (39). We expand the solutions \(a(\omega)\) and \(s(\omega)\) in powers of \(\varepsilon\):

\[a(\omega)=a^{(0)}(\omega)+\varepsilon a^{(1)}(\omega)+\varepsilon^{2}a^{(2)}(\omega)+\dots\,, \tag{40}\]
\[s(\omega)=s^{(0)}(\omega)+\varepsilon s^{(1)}(\omega)+\varepsilon^{2}s^{(2)}(\omega)+\dots\,, \tag{41}\]

where \(a^{(0)}(\omega),a^{(1)}(\omega),a^{(2)}(\omega),\dots,s^{(0)}(\omega),s^{(1)}(\omega),s^{(2)}(\omega),\dots\) are implicitly functions of \(t_{1}\). They are initialized at

\[a^{(0)}(\omega,t_{1}=0)=a_{\mathrm{init}}(\omega)\,,\qquad a^{(1)}(\omega,t_{1}=0)=0\,,\qquad a^{(2)}(\omega,t_{1}=0)=0\,,\qquad\dots \tag{42}\]
\[s^{(0)}(\omega,t_{1}=0)=0\,,\qquad s^{(1)}(\omega,t_{1}=0)=0\,,\qquad s^{(2)}(\omega,t_{1}=0)=0\,,\qquad\dots \tag{43}\]

to be consistent with the initial conditions \(a(\omega,t_{1}=0)=a(\omega,t=0)=a_{\mathrm{init}}(\omega)\) and \(s(\omega,t_{1}=0)=s(\omega,t=0)=0\).
We substitute the expansion in (39): \[\partial_{t_{1}}a^{(0)}(\omega)+\varepsilon\partial_{t_{1}}a^{ (1)}(\omega)+\dots \tag{44}\] \[\quad=\sum_{k=0}^{\infty}\sigma_{k}\left(s^{(0)}(\omega)+ \varepsilon s^{(1)}(\omega)+\dots\right)^{k}\] (45) \[\qquad\times\left(\varphi_{k}-\sigma_{k}\int\left(a^{(0)}(\nu)+ \varepsilon a^{(1)}(\nu)+\dots\right)\left(s^{(0)}(\nu)+\varepsilon s^{(1)}( \nu)+\dots\right)^{k}\mathrm{d}\rho(\nu)\right)\,,\] (46) \[\partial_{t_{1}}s^{(0)}(\omega)+\varepsilon\partial_{t_{1}}s^{ (1)}(\omega)+\dots \tag{47}\] \[=\varepsilon\left(a^{(0)}(\omega)+\varepsilon a^{(1)}(\omega)+\dots \right)\left(1-\left(s^{(0)}(\omega)+\varepsilon s^{(1)}(\omega)+\dots\right)^{2} \right)\sum_{k=1}^{\infty}k\sigma_{k}\left(s^{(0)}(\omega)+\varepsilon s^{(1)}( \omega)+\dots\right)^{k-1} \tag{48}\] \[\qquad\times\left(\varphi_{k}-\sigma_{k}\int\left(a^{(0)}(\nu)+ \varepsilon a^{(1)}(\nu)+\dots\right)\left(s^{(0)}(\nu)+\varepsilon s^{(1)}( \nu)+\dots\right)^{k}\mathrm{d}\rho(\nu)\right)\,. \tag{49}\] The basic assumption of matched asymptotic expansions is that terms of the same order in \(\varepsilon\) can be identified (with some limitations that we develop below). For now, let us identify terms of order \(1=\varepsilon^{0}\): \[\partial_{t_{1}}a^{(0)}(\omega) =\sum_{k=0}^{\infty}\sigma_{k}\left(s^{(0)}(\omega)\right)^{k} \left(\varphi_{k}-\sigma_{k}\int a^{(0)}(\nu)\left(s^{(0)}(\nu)\right)^{k} \mathrm{d}\rho(\nu)\right)\,, \tag{50}\] \[\partial_{t_{1}}s^{(0)}(\nu) =0\,. \tag{51}\] From (51) and (43), we have \(s^{(0)}(\omega)=0\): time \(t_{1}=O(1)\Leftrightarrow t=O(\varepsilon)\) is too short for the \(s(\omega)\) to be of order \(1\). Substituting \(s^{(0)}(\omega)=0\) in (50), we obtain \[\partial_{t_{1}}a^{(0)}(\omega)=\sigma_{0}\left(\varphi_{0}-\sigma_{0}\int a^ {(0)}(\nu)\mathrm{d}\rho(\nu)\right)\,. \tag{52}\] Recall that \(\langle.,.\rangle_{L^{2}(\rho)}\) is the dot product on \(L^{2}(\rho)\), \(\mathds{1}\) denotes the constant function \(\mathds{1}:\omega\in\Omega\mapsto 1\in\mathbb{R}\) and \(a_{\perp}\) is the orthogonal projection of \(a\) on \(\mathds{1}^{\perp}\). Equation (52) can be rewritten as \[\partial_{t_{1}}\langle a^{(0)},\mathds{1}\rangle_{L^{2}(\rho)} =\sigma_{0}\left(\varphi_{0}-\sigma_{0}\langle a^{(0)},\mathds{1 }\rangle_{L^{2}(\rho)}\right)\,,\] \[\partial_{t_{1}}a^{(0)}_{\perp} =0\,,\] which gives after integration (using (42)): \[\langle a^{(0)},\mathds{1}\rangle_{L^{2}(\rho)} =e^{-\sigma_{0}^{2}t_{1}}(a_{\mathrm{init}},\mathds{1})_{L^{2}( \rho)}+\left(1-e^{-\sigma_{0}^{2}t_{1}}\right)\frac{\varphi_{0}}{\sigma_{0}}\,, \tag{53}\] \[a^{(0)}_{\perp} =a_{\perp,\mathrm{init}}\,.\] At this point, we have determined \(a^{(0)}(\omega)\) and \(s^{(0)}(\omega)\), and thus \(a(\omega)=a^{(0)}(\omega)+O(\varepsilon)\) and \(s(\omega)=s^{(0)}(\omega)+O(\varepsilon)\) up to a \(O(\varepsilon)\) precision, which is sufficient to obtain a \(o(1)\)-approximation of the risk \(\mathscr{R}_{\mbox{\tiny mf,*}}\) (see Section B.2). However, note that we could obtain more precise estimates by identifying higher-order terms in (44)-(49). For instance, identifying the \(O(\varepsilon)\) terms in (47)-(49), we obtain \(\partial_{t_{1}}s^{(1)}(\omega)=a^{(0)}(\omega)\sigma_{1}\varphi_{1}\). This shows that the \(s(\omega)\) become non-zero, though only of order \(\varepsilon\) on the time scale \(t_{1}\asymp 1\); the inner-layer weights develop an infinitesimal correlation with the true direction \(u_{*}\) thanks to the linear component of \(\sigma\) and \(\varphi\). 
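The closed form (53) is easy to check numerically: simulating (39) (truncated to finitely many Hermite coefficients) on the fast time scale, the mean of the \(a(\omega)\)'s should relax exponentially to \(\varphi_0/\sigma_0\) at rate \(\sigma_0^2\), while \(a_\perp\) and the \(s(\omega)\)'s remain essentially frozen. A minimal sketch, with the same Hermite-form assumptions and illustrative coefficients as in the sketch after Eq. (32):

```python
import numpy as np

phi_c = np.array([1.0, -1.0, 2.0 / 3.0])
sigma_c = np.array([0.5, 0.5, 0.3])
K, eps, m, dt1 = len(phi_c), 1e-4, 10, 1e-3   # dt1: step in t1 = t / eps

rng = np.random.default_rng(1)
a = rng.normal(size=m)
a_mean0 = a.mean()
s = np.zeros(m)

def D(a, s, k):   # the factor phi_k - sigma_k int a s^k drho in (39)
    return phi_c[k] - sigma_c[k] * np.mean(a * s**k)

for it in range(1, 40001):
    rhs_a = sum(sigma_c[k] * s**k * D(a, s, k) for k in range(K))
    rhs_s = a * (1 - s**2) * sum(k * sigma_c[k] * s**(k - 1) * D(a, s, k)
                                 for k in range(1, K))
    # One Euler step of (39) in the fast variable:
    # d a / d t1 = rhs_a   and   d s / d t1 = eps * rhs_s
    a, s = a + dt1 * rhs_a, s + dt1 * eps * rhs_s
    if it % 10000 == 0:
        t1 = it * dt1
        pred = (np.exp(-sigma_c[0]**2 * t1) * a_mean0
                + (1 - np.exp(-sigma_c[0]**2 * t1)) * phi_c[0] / sigma_c[0])
        print(f"t1 = {t1:5.1f}  <a,1> = {a.mean():+.4f}  "
              f"Eq.(53): {pred:+.4f}  max|s| = {np.abs(s).max():.2e}")
```

The printed mean of the \(a(\omega)\)'s should track (53), while \(\max_\omega|s(\omega)|\) stays of order \(\varepsilon t_1\), matching the order-\(\varepsilon\) prediction \(\partial_{t_1}s^{(1)}(\omega)=a^{(0)}(\omega)\sigma_1\varphi_1\).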
The approximation constructed above should be considered as valid on the time scale \(t_{1}\asymp 1\Leftrightarrow t\asymp\varepsilon\). The approximation breaks down when we reach a new time scale, at which the \(s(\omega)\) are large enough for the \(a(\omega)\) to be affected (at leading order) by the linear part of the functions. We detail the new time scale and its resolution in the next section.

### Second time scale: linear component I

In this section, we seek a second, slower time scale, on which the behavior of the asymptotic expansion is different.

Identification of the scale.Consider \(t_{2}=\frac{t}{\varepsilon^{\gamma}}\), where \(\gamma<1\) is to be determined. We rewrite the system (39) using \(t_{2}\), and expand the solutions \(a(\omega)\) and \(s(\omega)\):

\[a(\omega)=a^{(0)}(\omega)+\varepsilon^{\delta}a^{(1)}(\omega)+\varepsilon^{2\delta}a^{(2)}(\omega)+\ldots\,, \tag{54}\]
\[s(\omega)=\varepsilon^{\delta}s^{(1)}(\omega)+\varepsilon^{2\delta}s^{(2)}(\omega)+\ldots\,. \tag{55}\]

(Since within the previous time scale we obtained \(s(\omega)=O(\varepsilon)\), it is natural to assume \(s^{(0)}(\omega)=0\).)

Let us pause to comment on our method. Similarly to what was done on the previous time scale, we will substitute the expansions (54)-(55) in the equations (39) in order to compute the different terms in the expansion. However, this step also allows us to compute the exponents \(\gamma\) and \(\delta\), which give, respectively, the new time scale and the size of the \(s(\omega)\)'s. Note that we should have proceeded similarly for the first time scale, by introducing a first time variable \(t_{1}=\frac{t}{\varepsilon^{\gamma^{\prime}}}\), expanding \(a(\omega),s(\omega)\) in powers \(1,\varepsilon^{\delta^{\prime}},\varepsilon^{2\delta^{\prime}},\ldots\), and determining \(\gamma^{\prime}\) and \(\delta^{\prime}\) a posteriori. This would indeed have led to \(\gamma^{\prime}=1\) and \(\delta^{\prime}=1\). However, for simplicity, we preferred to fix these values, which are natural a priori. Finally, note that the expansions (40)-(41) and (54)-(55) are different, because they are valid on different time scales. In fact, the only coherence condition that we require below is that the expansions match in a joint asymptotic where \(t_{1}=\frac{t}{\varepsilon}\to\infty\) and \(t_{2}=\frac{t}{\varepsilon^{\gamma}}\to 0\). We thus build different approximations for each one of the time scales, with some matching conditions; this justifies the name of _matched asymptotic expansion_.
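It may help to see this matching logic on the simplest toy slow-fast pair (a linear caricature, not the system (39)): \(\varepsilon a'=s-a\) with \(s'=a\). On the fast scale, the 'inner' solution has \(a\) relaxing to \(s(0)\) while \(s\) stays frozen; on the \(O(1)\) scale, the 'outer' approximation \(a\approx s\) gives \(s'\approx s\), hence exponential growth, accurate up to \(O(\varepsilon)\). A minimal numerical sketch (illustrative step sizes):

```python
import numpy as np

# Toy slow-fast system (illustrative only):  eps * a' = s - a,   s' = a.
eps, dt, T = 1e-3, 1e-5, 2.0
a, s = 0.0, 1.0                      # a starts away from the slow manifold
n = int(T / dt)
for i in range(1, n + 1):
    a, s = a + dt * (s - a) / eps, s + dt * a
    if i % (n // 4) == 0:
        t = i * dt
        # Outer (slow time scale) approximation: a ~ s ~ exp(t)
        print(f"t = {t:.1f}   a = {a:.4f}   s = {s:.4f}   exp(t) = {np.exp(t):.4f}")
```

The inner layer (times \(t\lesssim\varepsilon\)) only serves to bring \(a\) onto the slow manifold \(a\approx s\); matching the inner and outer expansions fixes the initial condition of the outer problem, exactly as is done for (65)-(66) below.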
We now return to our computations and substitute (54)-(55) in (39): \[\varepsilon^{1-\gamma}\partial_{t_{2}}a^{(0)}(\omega)+\ldots =\sum_{k=0}^{\infty}\sigma_{k}\left(\varepsilon^{\delta}s^{(1)}( \omega)+\ldots\right)^{k}\] \[\qquad\qquad\qquad\times\left(\varphi_{k}-\sigma_{k}\int\left(a^ {(0)}(\nu)+\ldots\right)\left(\varepsilon^{\delta}s^{(1)}(\nu)+\ldots\right) ^{k}\mathrm{d}\rho(\nu)\right)\,,\] \[\varepsilon^{\delta}\partial_{t_{2}}s^{(1)}(\omega)+\ldots =\varepsilon^{\gamma}\left(a^{(0)}(\omega)+\ldots\right)\left(1- \left(\varepsilon^{\delta}s^{(1)}(\omega)+\ldots\right)^{2}\right)\sum_{k=1}^ {\infty}k\sigma_{k}\left(\varepsilon^{\delta}s^{(1)}(\omega)+\ldots\right)^{k-1}\] \[\qquad\qquad\qquad\qquad\times\left(\varphi_{k}-\sigma_{k}\int \left(a^{(0)}(\nu)+\ldots\right)\left(\varepsilon^{\delta}s^{(1)}(\nu)+\ldots \right)^{k}\mathrm{d}\rho(\nu)\right)\,,\] and thus \[\varepsilon^{1-\gamma}\partial_{t_{2}}a^{(0)}(\omega)+O( \varepsilon^{1-\gamma+\delta}) =\sigma_{0}\left(\varphi_{0}-\sigma_{0}\int a^{(0)}(\nu) \mathrm{d}\rho(\nu)\right) \tag{56}\] \[\qquad-\varepsilon^{\delta}\sigma_{0}^{2}\int a^{(1)}(\nu) \mathrm{d}\rho(\nu)+\varepsilon^{\delta}\sigma_{1}\varphi_{1}s^{(1)}(\omega) +O(\varepsilon^{2\delta})\,,\] (57) \[\varepsilon^{\delta}\partial_{t_{2}}s^{(1)}(\omega)+O(\varepsilon ^{2\delta}) =\varepsilon^{\gamma}\sigma_{1}\varphi_{1}a^{(0)}(\omega)+O( \varepsilon^{\gamma+\delta})\,. \tag{58}\] For the first time scale, we chose \(\gamma=\delta=1\), so that the terms of order \(\varepsilon^{\delta}\) were negligible compared to \(\varepsilon^{1-\gamma}\partial_{t_{2}}a^{(0)}(\omega)\) in (56). This means that the linear components \(\sigma_{1},\varphi_{1}\) of the functions had no effect on the \(a(\omega)\) at leading order. We are now interested in a new time scale where \(\varepsilon^{1-\gamma}\partial_{t_{2}}a^{(0)}(\omega)\) and \(\varepsilon^{\delta}\sigma_{1}\varphi_{1}s^{(1)}(\omega)\) are of the same order, i.e., \(1-\gamma=\delta\); then the linear components play a role in the dynamics. Further, for \(s^{(1)}(\omega)\) to be non-zero, we need both sides of (58) to be of the same order, thus \(\delta=\gamma\). Putting together, this gives \(\gamma=\delta=1/2\). Derivation of the ODEs for this time scale.Let us summarize equations. For \(t_{2}=\frac{t}{\varepsilon^{\nicefrac{{1}}{{2}}}}\) and \[a(\omega) =a^{(0)}(\omega)+\varepsilon^{\nicefrac{{1}}{{2}}}a^{(1)}(\omega)+ \ldots\,,\] \[s(\omega) =\varepsilon^{\nicefrac{{1}}{{2}}}s^{(1)}(\omega)+\ldots\,,\] we have from (56)-(58): \[\varepsilon^{\nicefrac{{1}}{{2}}}\partial_{t_{2}}a^{(0)}(\omega) =\sigma_{0}\left(\varphi_{0}-\sigma_{0}\int a^{(0)}(\nu)\mathrm{d} \rho(\nu)\right) \tag{59}\] \[\qquad-\varepsilon^{\nicefrac{{1}}{{2}}}\sigma_{0}^{2}\int a^{(1 )}(\nu)\mathrm{d}\rho(\nu)+\varepsilon^{\nicefrac{{1}}{{2}}}\sigma_{1}\varphi_ {1}s^{(1)}(\omega)+O(\varepsilon)\,,\] (60) \[\varepsilon^{\nicefrac{{1}}{{2}}}\partial_{t_{2}}s^{(1)}(\omega) =\varepsilon^{\nicefrac{{1}}{{2}}}\sigma_{1}\varphi_{1}a^{(0)}( \omega)+O(\varepsilon)\,. \tag{61}\] First, we identify the terms of order \(1=\varepsilon^{0}\): \[0=\sigma_{0}\left(\varphi_{0}-\sigma_{0}\int a^{(0)}(\nu)\mathrm{d}\rho(\nu) \right)\,. \tag{62}\] This means that the trajectory remains in the affine hyperplane such that \(\varphi_{0}=\sigma_{0}\int a^{(0)}(\nu)\mathrm{d}\rho(\nu)\); intuitively, that the constant part of \(\varphi\) remains learned in this second time scale. 
Second, we identify the terms of order \(\varepsilon^{\nicefrac{{1}}{{2}}}\) in (59)-(61):

\[\partial_{t_{2}}a^{(0)}(\omega)=-\sigma_{0}^{2}\int a^{(1)}(\nu)\mathrm{d}\rho(\nu)+\sigma_{1}\varphi_{1}s^{(1)}(\omega)\,, \tag{63}\]
\[\partial_{t_{2}}s^{(1)}(\omega)=\sigma_{1}\varphi_{1}a^{(0)}(\omega)\,. \tag{64}\]

In (63), the first term of the right-hand side depends on the unknown higher-order terms \(a^{(1)}(\nu)\); in fact, it is best interpreted as the Lagrange multiplier associated with the constraint (62). To eliminate this Lagrange multiplier, we again use the compact notations:

\[\partial_{t_{2}}a^{(0)}=-\sigma_{0}^{2}\langle a^{(1)},\mathds{1}\rangle_{L^{2}(\rho)}\mathds{1}+\sigma_{1}\varphi_{1}s^{(1)}\,, \tag{65}\]
\[\partial_{t_{2}}s^{(1)}=\sigma_{1}\varphi_{1}a^{(0)}\,, \tag{66}\]

and thus

\[\partial_{t_{2}}a^{(0)}_{\perp}=\sigma_{1}\varphi_{1}s^{(1)}_{\perp}\,, \tag{67}\]
\[\partial_{t_{2}}s^{(1)}_{\perp}=\sigma_{1}\varphi_{1}a^{(0)}_{\perp}\,. \tag{68}\]

Matching.The initialization of the ODEs (65)-(66) for the second time scale is determined by a classical procedure that matches with the previous time scale. In this paragraph, we denote by \(\underline{a},\underline{s}\) the approximation obtained on the first time scale (Section 6.1), and by \(\overline{a},\overline{s}\) the approximation on the second time scale, described above. Consider an intermediate time scale \(\widetilde{t}=\frac{t}{\varepsilon^{\alpha}}\), \(\nicefrac{{1}}{{2}}<\alpha<1\), and assume \(\widetilde{t}\asymp 1\) so that

\[t_{1}=\frac{t}{\varepsilon}=\frac{\widetilde{t}}{\varepsilon^{1-\alpha}}\to\infty\,,\qquad t_{2}=\frac{t}{\varepsilon^{\nicefrac{{1}}{{2}}}}=\varepsilon^{\alpha-\nicefrac{{1}}{{2}}}\widetilde{t}\to 0\,.\]

In this intermediate regime, we want the approximations provided on the first and the second time scales to match: \(\underline{a}(\widetilde{t})\) and \(\overline{a}(\widetilde{t})\) (resp. \(\underline{s}(\widetilde{t})\) and \(\overline{s}(\widetilde{t})\)) should agree to leading order. From the first time scale approximation,

\[\underline{a}=\underline{a}^{(0)}+O(\varepsilon) \tag{69}\]
Solution.As we are done with the matching procedure, we now consider the solution in the second time scale only, that we denote again by \(a\), \(s\) as in (65), (66). The matching procedure motivates us to consider the solution of (67)-(68) initialized at \(a_{\perp}^{(0)}(0)=a_{\perp,\text{init}}\), \(s_{\perp}^{(1)}=0\). This gives \[a_{\perp}^{(0)} =\cosh\left(\varphi_{1}\sigma_{1}t_{2}\right)a_{\perp,\text{init }}\,, s_{\perp}^{(1)} =\sinh\left(\varphi_{1}\sigma_{1}t_{2}\right)a_{\perp,\text{init }}\,.\] To conclude, we note that \(\langle a^{(0)},\mathds{1}\rangle_{L^{2}(\rho)}=\frac{\varphi_{0}}{\sigma_{0}}\) is constrained by (62). Further, from (64), \[\partial_{t_{2}}\langle s^{(1)},\mathds{1}\rangle_{L^{2}(\rho)} =\sigma_{1}\varphi_{1}\langle a^{(0)},\mathds{1}\rangle_{L^{2}( \rho)}=\sigma_{1}\varphi_{1}\frac{\varphi_{0}}{\sigma_{0}},\] thus \(\langle s^{(1)},\mathds{1}\rangle_{L^{2}(\rho)}=\sigma_{1}\varphi_{1}\frac{ \varphi_{0}}{\sigma_{0}}t_{2}\). Putting together, these equations give: \[a^{(0)} =\frac{\varphi_{0}}{\sigma_{0}}\mathds{1}+\cosh\left(\varphi_{1} \sigma_{1}t_{2}\right)a_{\perp,\text{init}}\,, s^{(1)} =\sigma_{1}\varphi_{1}\frac{\varphi_{0}}{\sigma_{0}}t_{2}\mathds{1}+\sinh \left(\varphi_{1}\sigma_{1}t_{2}\right)a_{\perp,\text{init}}\,. \tag{76}\] We observe that \(a^{(0)}\) and \(s^{(1)}\) diverge as \(t_{2}\to\infty\). This implies that our approximation on the second time scale must break down at a certain point. Indeed, we analyzed this time scale under the assumption that both \(a^{(0)}\) and \(s^{(1)}\) are of order \(1\). However, since \(a^{(0)}\) and \(s^{(1)}\) diverge exponentially as \(t_{2}\to\infty\), as per Eq. (76), this assumption breaks down when \(t_{2}\asymp\log(1/\varepsilon)\). More precisely, in (59) (resp. (61)), the \(O(\varepsilon)\) term includes a term of the form \[-\varepsilon s^{(1)}(\omega)\sigma_{1}^{2}\int a^{(0)}(\nu)s^{(1)}(\nu)\text{d} \rho(\nu)\qquad\left(\text{resp. }-\varepsilon a^{(0)}(\omega)\sigma_{1}^{2}\int a^{(0)}(\nu)s^{(1)}(\nu)\text{d }\rho(\nu)\right)\,.\] When \(a^{(0)}\) and \(s^{(1)}\) become of order \(\varepsilon^{-\nicefrac{{1}}{{4}}}\), this term becomes of order \(\varepsilon^{\nicefrac{{1}}{{4}}}\), which is then of the same order as the term \(\varepsilon^{\nicefrac{{1}}{{2}}}\sigma_{1}\varphi_{1}s^{(1)}(\omega)\) in (59) (resp. the term \(\varepsilon^{\nicefrac{{1}}{{2}}}\sigma_{1}\varphi_{1}a^{(0)}(\omega)\) in (61)). At this point, these terms can not be neglected anymore. From (76), we have \[a^{(0)} \sim\frac{e^{\left|\varphi_{1}\sigma_{1}\right|t_{2}}}{2}a_{\perp, \text{init}}\,, s^{(1)} \sim\text{sign}(\varphi_{1}\sigma_{1})\frac{e^{\left|\varphi_{1}\sigma_{1} \right|t_{2}}}{2}a_{\perp,\text{init}}\,, t_{2}\to\infty\,.\] Therefore, \(a^{(0)}\) and \(s^{(1)}\) become of order \(\varepsilon^{-\nicefrac{{1}}{{4}}}\) at the time \(t_{2}\sim\frac{1}{4\left|\sigma_{1}\varphi_{1}\right|}\log\frac{1}{\varepsilon}\), at which the approximation on the second time scale breaks down. We thus introduce a new time scale centered at this critical point. ### Third time scale: linear component II We now introduce the time \(t_{3}=t_{2}-\frac{1}{4|\varphi_{1}\sigma_{1}|}\log\frac{1}{\varepsilon}\). As \(t_{3}\) is only a translation from \(t_{2}\), the ODEs in terms of \(t_{3}\) are the same as the ones in term of \(t_{2}\). However, in this time scale, \(a\) and \(\varepsilon^{\nicefrac{{1}}{{2}}}s\) have diverged. 
In coherence with the discussion above, we seek expansions of the form \[a =\varepsilon^{-\nicefrac{{1}}{{4}}}a^{(-1)}+a^{(0)}+\varepsilon^{ \nicefrac{{1}}{{4}}}a^{(1)}+\ldots\,, \tag{77}\] \[s =\varepsilon^{\nicefrac{{1}}{{4}}}s^{(1)}+\varepsilon^{\nicefrac{{ 1}}{{2}}}s^{(2)}+\ldots\,. \tag{78}\] Similarly to the second time scale, we substitute (77)-(78) in (39) and obtain \[\varepsilon^{\nicefrac{{1}}{{4}}}\partial_{t_{3}}a^{(-1)}(\omega) =-\varepsilon^{-\nicefrac{{1}}{{4}}}\sigma_{0}^{2}\int a^{(-1)}( \nu)\mathrm{d}\rho(\nu)+\sigma_{0}\left(\varphi_{0}-\sigma_{0}\int a^{(0)}( \nu)\mathrm{d}\rho(\nu)\right)\] \[\quad-\varepsilon^{\nicefrac{{1}}{{4}}}\sigma_{0}^{2}\int a^{(1) }(\nu)\mathrm{d}\rho(\nu)+\varepsilon^{\nicefrac{{1}}{{4}}}\sigma_{1}\left( \varphi_{1}-\sigma_{1}\int a^{(-1)}(\nu)s^{(1)}(\nu)\mathrm{d}\rho(\nu) \right)s^{(1)}(\omega)+O(\varepsilon^{\nicefrac{{1}}{{2}}})\,,\] \[\varepsilon^{\nicefrac{{1}}{{4}}}\partial_{t_{3}}s^{(1)}(\omega) =\varepsilon^{\nicefrac{{1}}{{4}}}\sigma_{1}\left(\varphi_{1}- \sigma_{1}\int a^{(-1)}(\nu)s^{(1)}(\nu)\mathrm{d}\rho(\nu)\right)a^{(-1)}( \omega)+O(\varepsilon^{\nicefrac{{1}}{{2}}})\,.\] First, we identify the terms of order \(\varepsilon^{-\nicefrac{{1}}{{4}}}\): \[0=-\sigma_{0}^{2}\int a^{(-1)}(\nu)\mathrm{d}\rho(\nu)=-\sigma_{0}^{2}\left< a^{(-1)},\mathds{1}\right>_{L^{2}(\rho)}\,. \tag{79}\] This means that \(a\) has no component diverging in \(\varepsilon\) in the direction of \(\mathds{1}\). Second, we identify the terms of order \(1=\varepsilon^{0}\): \[0=\sigma_{0}\left(\varphi_{0}-\sigma_{0}\int a^{(0)}(\nu)\mathrm{d}\rho(\nu) \right)=\sigma_{0}\left(\varphi_{0}-\sigma_{0}\left<a^{(0)},\mathds{1}\right> _{L^{2}(\rho)}\right)\,. \tag{80}\] Put together with (79), this equation ensures that the constant component of \(\varphi\) remains learned on this third time scale. Third, we identify the terms of order \(\varepsilon^{\nicefrac{{1}}{{4}}}\): \[\partial_{t_{3}}a^{(-1)}(\omega) =-\sigma_{0}^{2}\int a^{(1)}(\nu)\mathrm{d}\rho(\nu)+\sigma_{1} \left(\varphi_{1}-\sigma_{1}\int a^{(-1)}(\nu)s^{(1)}(\nu)\mathrm{d}\rho(\nu) \right)s^{(1)}(\omega)\,, \tag{81}\] \[\partial_{t_{3}}s^{(1)}(\omega) =\sigma_{1}\left(\varphi_{1}-\sigma_{1}\int a^{(-1)}(\nu)s^{(1)}( \nu)\mathrm{d}\rho(\nu)\right)a^{(-1)}(\omega)\,.\] Again, the term \(-\sigma_{0}^{2}\int a^{(1)}(\nu)\mathrm{d}\rho(\nu)\) is best interpreted as the Lagrange multiplier associated to the constraints (79), (80). Using the compact notations, \[\int a^{(-1)}(\nu)s^{(1)}(\nu)\mathrm{d}\rho(\nu) =\left<a^{(-1)},s^{(1)}\right>_{L^{2}(\rho)}=\left<a^{(-1)}, \mathds{1}\right>_{L^{2}(\rho)}\left<\mathds{1},s^{(1)}\right>_{L^{2}(\rho)}+ \left<a^{(-1)}_{\perp},s^{(1)}_{\perp}\right>_{L^{2}(\rho)}\] \[=\left<a^{(-1)}_{\perp},s^{(1)}_{\perp}\right>_{L^{2}(\rho)}\,,\] where in the last equality we use (79). 
Thus we can rewrite (81) as

\[\partial_{t_{3}}a^{(-1)}=-\sigma_{0}^{2}\langle a^{(1)},\mathds{1}\rangle_{L^{2}(\rho)}\mathds{1}+\sigma_{1}\left(\varphi_{1}-\sigma_{1}\left\langle a^{(-1)}_{\perp},s^{(1)}_{\perp}\right\rangle_{L^{2}(\rho)}\right)s^{(1)}\,, \tag{82}\]
\[\partial_{t_{3}}s^{(1)}=\sigma_{1}\left(\varphi_{1}-\sigma_{1}\left\langle a^{(-1)}_{\perp},s^{(1)}_{\perp}\right\rangle_{L^{2}(\rho)}\right)a^{(-1)}\,,\]

and thus

\[\partial_{t_{3}}a^{(-1)}_{\perp}=\sigma_{1}\left(\varphi_{1}-\sigma_{1}\left\langle a^{(-1)}_{\perp},s^{(1)}_{\perp}\right\rangle_{L^{2}(\rho)}\right)s^{(1)}_{\perp}\,, \tag{83}\]
\[\partial_{t_{3}}s^{(1)}_{\perp}=\sigma_{1}\left(\varphi_{1}-\sigma_{1}\left\langle a^{(-1)}_{\perp},s^{(1)}_{\perp}\right\rangle_{L^{2}(\rho)}\right)a^{(-1)}_{\perp}\,.\]

In Appendix B.1, we solve this system of ODEs and determine the initial condition by matching with the previous time scale. The result is that

\[\begin{split}&a^{(-1)}=a_{\perp}^{(-1)}=\lambda a_{\perp,\text{init}}\,,\\ &s^{(1)}=s_{\perp}^{(1)}=\text{sign}(\sigma_{1}\varphi_{1})\lambda a_{\perp,\text{init}}\,,\end{split} \tag{84}\]

where \(\lambda=\lambda(t_{3})\) is the function

\[\lambda(t_{3})=\frac{|\varphi_{1}|^{\nicefrac{{1}}{{2}}}}{\left(|\sigma_{1}|\left\|a_{\perp,\text{init}}\right\|_{L^{2}(\rho)}^{2}+4|\varphi_{1}|e^{-2|\sigma_{1}\varphi_{1}|t_{3}}\right)^{\nicefrac{{1}}{{2}}}}\,. \tag{85}\]

This solution completes the description of how the linear part of the function \(\varphi\) is learned.

### Conjectured behavior for larger time scales

The analysis of the previous sections naturally suggests the existence of a sequence of cutoffs. At each time scale, a new polynomial component of \(\varphi\) is learned within a window that is much shorter than the time elapsed before that phase started. Along this sequence, we expect \(s\) and \(a\) to grow to increasingly larger scales in \(\varepsilon\) (but \(s\) remains \(o(1)\) while \(a\) diverges). More precisely, we assume that during the \(l\)-th phase, the network learns the degree-\(l\) component \(\varphi_{l}\), and various quantities satisfy the following scaling behavior:

\[a=O(\varepsilon^{-\omega_{l}}),\quad s=O(\varepsilon^{\beta_{l}}),\quad t=O(\varepsilon^{\mu_{l}})\,, \tag{86}\]

where \(\omega_{l}>0\) is an increasing sequence and \(\beta_{l},\mu_{l}>0\) are decreasing sequences. Further, while the learning of this component takes place when \(t=O(\varepsilon^{\mu_{l}})\), the actual evolution of the risk (and of the neural network) takes place on much shorter scales, namely:

\[\Delta t=O(\varepsilon^{\nu_{l}})\,, \tag{87}\]

where \(\nu_{l}\) is also decreasing, with \(\nu_{l}>\mu_{l}\). The goal of this section is to provide heuristic arguments to conjecture the values of \(\omega_{l}\), \(\beta_{l}\), \(\mu_{l}\) and \(\nu_{l}\). We will base this conjecture on a rigorous analysis of a simplified model. The simplified model is motivated by the expectation (supported by the heuristics and simulations in the previous sections) that learning each component happens independently of the details of the evolution on previous time scales. In the simplified model, the activation function \(\sigma(x)\) is proportional to the \(l\)-th Hermite polynomial, namely \(\sigma(x)=\sigma_{l}\text{He}_{l}(x)\). This is the component of \(\sigma\) that we expect to be relevant on the \(l\)-th time scale.
The gradient flow equations (39) then read:

\[\begin{split}\varepsilon\partial_{t}a(\omega)=&\ \sigma_{l}s(\omega)^{l}\left(\varphi_{l}-\sigma_{l}\int a(\nu)s(\nu)^{l}\mathrm{d}\rho(\nu)\right)\,,\\ \partial_{t}s(\omega)=&\ a(\omega)\left(1-s(\omega)^{2}\right)l\sigma_{l}s(\omega)^{l-1}\left(\varphi_{l}-\sigma_{l}\int a(\nu)s(\nu)^{l}\mathrm{d}\rho(\nu)\right)\,,\end{split} \tag{88}\]

with corresponding risk component

\[\mathscr{R}_{l}=\frac{1}{2}\left(\varphi_{l}-\sigma_{l}\int a(\nu)s(\nu)^{l}\mathrm{d}\rho(\nu)\right)^{2}.\]

We capture the effect of the learning dynamics on the previous time scales through the overall magnitude of the \(a(\omega)\)'s and \(s(\omega)\)'s at initialization. Namely, we choose the scale of initialization of the simplified model to be given by the end of the \((l-1)\)-th time scale, i.e., \(a(\omega)\asymp\varepsilon^{-\omega_{l-1}}\) and \(s(\omega)\asymp\varepsilon^{\beta_{l-1}}\). Further, in order for the \((l-1)\)-th component to be learned, namely

\[\int a(\nu)s(\nu)^{l-1}\mathrm{d}\rho(\nu)\approx\frac{\varphi_{l-1}}{\sigma_{l-1}}\,, \tag{89}\]

we require \(\omega_{l-1}=(l-1)\beta_{l-1}\) so that \(\int a(\nu)s(\nu)^{l-1}\mathrm{d}\rho(\nu)=\Theta(1)\). Analogously, we assume \(\omega_{l}=l\beta_{l}\). Based on this consideration, we introduce the rescaled variables

\[\widetilde{a}(\omega)=\varepsilon^{\omega_{l}}a(\omega),\quad\widetilde{s}(\omega)=\varepsilon^{-\beta_{l}}s(\omega),\quad\text{where}\ \widetilde{a}(\omega,0)\asymp\varepsilon^{\omega_{l}-\omega_{l-1}},\ \widetilde{s}(\omega,0)\asymp\varepsilon^{\beta_{l-1}-\beta_{l}}.\]

Rewriting Eq. (88) in terms of the \(\widetilde{a}(\omega)\)'s and \(\widetilde{s}(\omega)\)'s, and using \(\omega_{l}=l\beta_{l}\), we get that

\[\begin{split}\varepsilon^{1-2l\beta_{l}}\partial_{t}\widetilde{a}(\omega)=&\,\sigma_{l}\widetilde{s}(\omega)^{l}\left(\varphi_{l}-\sigma_{l}\int\widetilde{a}(\nu)\widetilde{s}(\nu)^{l}\mathrm{d}\rho(\nu)\right)\,,\\ \varepsilon^{2\beta_{l}}\partial_{t}\widetilde{s}(\omega)=&\,l\sigma_{l}\widetilde{a}(\omega)\widetilde{s}(\omega)^{l-1}\left(1-\varepsilon^{2\beta_{l}}\widetilde{s}(\omega)^{2}\right)\left(\varphi_{l}-\sigma_{l}\int\widetilde{a}(\nu)\widetilde{s}(\nu)^{l}\mathrm{d}\rho(\nu)\right).\end{split} \tag{90}\]

In order for the \(\widetilde{a}(\omega)\)'s and \(\widetilde{s}(\omega)\)'s to be learned simultaneously, we need \(1-2l\beta_{l}=2\beta_{l}\), which implies \(\beta_{l}=1/(2(l+1))\). Making a further change of the time variable \(t=\varepsilon^{\nu_{l}}\tau\), where \(\nu_{l}=2\beta_{l}=1/(l+1)\), it follows that

\[\begin{split}\partial_{\tau}\widetilde{a}(\omega)=&\,\sigma_{l}\widetilde{s}(\omega)^{l}\left(\varphi_{l}-\sigma_{l}\int\widetilde{a}(\nu)\widetilde{s}(\nu)^{l}\mathrm{d}\rho(\nu)\right)\,,\\ \partial_{\tau}\widetilde{s}(\omega)=&\,l\sigma_{l}\widetilde{a}(\omega)\widetilde{s}(\omega)^{l-1}\left(1-\varepsilon^{2\beta_{l}}\widetilde{s}(\omega)^{2}\right)\left(\varphi_{l}-\sigma_{l}\int\widetilde{a}(\nu)\widetilde{s}(\nu)^{l}\mathrm{d}\rho(\nu)\right).\end{split} \tag{91}\]

Moreover, rewriting the risk in terms of the rescaled variables \(\widetilde{a},\widetilde{s}\), the function \(\mathscr{R}_{l}(\tau)=\mathscr{R}_{l}(\widetilde{a}(\tau),\widetilde{s}(\tau))\) satisfies the ODE:

\[\partial_{\tau}\mathscr{R}_{l}=-2\sigma_{l}^{2}\mathscr{R}_{l}\cdot\int\widetilde{s}(\omega)^{2(l-1)}\left(l^{2}\widetilde{a}(\omega)^{2}\left(1-\varepsilon^{2\beta_{l}}\widetilde{s}(\omega)^{2}\right)+\widetilde{s}(\omega)^{2}\right)\mathrm{d}\rho(\omega)\,. \tag{92}\]
Note that with our choice of \(\beta_{l}\) and \(\omega_{l}\), we have \(\omega_{l}-\omega_{l-1}=\beta_{l-1}-\beta_{l}=1/(2l(l+1))\). This means that the \(\widetilde{a}(\omega)\)'s and \(\widetilde{s}(\omega)\)'s are initialized at the same scale, namely

\[\widetilde{a}(\omega,0),\widetilde{s}(\omega,0)=\Theta(\varepsilon^{1/(2l(l+1))})\,. \tag{93}\]

The theorem below describes quantitatively the dynamics of the simplified model for small \(\varepsilon\), and determines the value of \(\mu_{l}\) (recall that \(\nu_{l}=1/(l+1)\)):

**Theorem 1** (Evolution of the simplified gradient flow).: _Assume \(l\geq 2\) and let \((\widetilde{a}(\omega,\tau),\widetilde{s}(\omega,\tau))_{\tau\geq 0}\) be the unique solution of the ODE system (91), initialized as per Eq. (93) (note in particular that \(\sigma_{l}\varphi_{l}\widetilde{a}(\omega,0)\widetilde{s}(\omega,0)^{l}\asymp\varepsilon^{1/(2l)}\)). Then the following hold:_

_\((a)\) Let us denote_

\[A=\left\{\omega:\sigma_{l}\varphi_{l}\liminf_{\varepsilon\to 0}\varepsilon^{-1/(2l)}\widetilde{a}(\omega,0)\widetilde{s}(\omega,0)^{l}>0\right\} \tag{94}\]

_and assume \(\rho(A)>0\). For \(\Delta\in(0,\varphi_{l}^{2}/2)\), define_

\[\tau(\Delta)=\inf\{\tau\geq 0:\mathscr{R}_{l}(\widetilde{a}(\tau),\widetilde{s}(\tau))\leq\Delta\}\,. \tag{95}\]

_Then, for any fixed \(\Delta\) we have \(\tau(\Delta)=\Theta(\varepsilon^{-(l-1)/(2l(l+1))})\) as \(\varepsilon\to 0\). Further, if \(\rho\) is a discrete probability measure, then there exist \(\tau_{*}(\varepsilon)=\Theta(\varepsilon^{-(l-1)/(2l(l+1))})\) and, for any \(\Delta>0\), a constant \(c_{*}(\Delta)>0\) independent of \(\varepsilon\) such that_

\[\tau\leq\tau_{*}(\varepsilon)-c_{*}(\Delta)\Rightarrow\liminf_{\varepsilon\to 0}\mathscr{R}_{l}(\widetilde{a}(\tau),\widetilde{s}(\tau))\geq\frac{1}{2}\varphi_{l}^{2}-\Delta\,, \tag{96}\]
\[\tau\geq\tau_{*}(\varepsilon)+c_{*}(\Delta)\Rightarrow\limsup_{\varepsilon\to 0}\mathscr{R}_{l}(\widetilde{a}(\tau),\widetilde{s}(\tau))\leq\Delta\,, \tag{97}\]

_namely the \(l\)-th component is learnt in an \(O(1)\) time window around \(\tau_{*}(\varepsilon)=\Theta(\varepsilon^{-(l-1)/(2l(l+1))})\)._

_\((b)\) Similarly, we denote_

\[B=\left\{\omega:\sigma_{l}\varphi_{l}\limsup_{\varepsilon\to 0}\varepsilon^{-1/(2l)}\widetilde{a}(\omega,0)\widetilde{s}(\omega,0)^{l}<0,\text{ and }\liminf_{\varepsilon\to 0}(\widetilde{s}(\omega,0)^{2}/\widetilde{a}(\omega,0)^{2})>l\right\}\,. \tag{98}\]

_If \(\rho(B)>0\), then the same claims as in \((a)\) hold._

_\((c)\) If neither of the conditions at points \((a)\), \((b)\) holds, then_

\[\sigma_{l}\varphi_{l}\limsup_{\varepsilon\to 0}\varepsilon^{-1/(2l)}\widetilde{a}(\omega,0)\widetilde{s}(\omega,0)^{l}<0\,,\quad\limsup_{\varepsilon\to 0}(\widetilde{s}(\omega,0)^{2}/\widetilde{a}(\omega,0)^{2})<l \tag{99}\]

_for almost every \(\omega\in\Omega\). In that case, for such \(\omega\in\Omega\) and each \(\Delta>0\), there exists a constant \(C_{*}(\omega,\Delta)>0\) such that_

\[\tau\geq C_{*}(\omega,\Delta)\varepsilon^{-(l-1)/(2l(l+1))}\ \Rightarrow\ |\widetilde{s}(\omega,\tau)|\leq\Delta\varepsilon^{1/(2l(l+1))}\,, \tag{100}\]

_meaning that \(\widetilde{s}(\omega,\tau)\) eventually converges to \(0\)._

_We further note that \(\tau=\Theta(\varepsilon^{-(l-1)/(2l(l+1))})\Longleftrightarrow t=\Theta(\varepsilon^{\mu_{l}})\) with \(\mu_{l}=1/(2l)\), and \(\tau=O(1)\Longleftrightarrow t=O(\varepsilon^{\nu_{l}})\) with \(\nu_{l}=1/(l+1)\)._

The proof of Theorem 1 is deferred to Appendix B.3.
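The time scale \(\tau_{*}(\varepsilon)=\Theta(\varepsilon^{-(l-1)/(2l(l+1))})\) in Theorem 1 can be probed numerically by integrating (91) from an initialization in case \((a)\) and recording when \(\mathscr{R}_l\) falls below a fixed threshold. Below is a minimal sketch for \(l=2\), where the predicted exponent is \((l-1)/(2l(l+1))=1/12\); the coefficients, threshold, and step size are illustrative choices, not values prescribed by the theorem.

```python
import numpy as np

def crossing_time(eps, l=2, sig=1.0, phi=1.0, m=10, dtau=1e-2, seed=0):
    beta = 1.0 / (2 * (l + 1))
    scale = eps**(1.0 / (2 * l * (l + 1)))      # initialization scale, Eq. (93)
    rng = np.random.default_rng(seed)
    at = scale * np.abs(rng.normal(size=m))     # positive products: case (a)
    st = scale * np.abs(rng.normal(size=m))
    tau = 0.0
    while True:  # terminates for case-(a) initializations
        D = phi - sig * np.mean(at * st**l)     # phi_l - sigma_l int a s^l drho
        if 0.5 * D * D <= phi * phi / 4:        # risk R_l dropped below phi_l^2/4
            return tau
        da = sig * st**l * D                                 # Eq. (91), 1st line
        ds = l * sig * at * st**(l - 1) * (1 - eps**(2 * beta) * st**2) * D
        at, st = at + dtau * da, st + dtau * ds
        tau += dtau

for eps in [1e-4, 1e-6, 1e-8]:
    t = crossing_time(eps)
    # t * eps^{1/12} should approach a constant as eps -> 0
    print(f"eps = {eps:.0e}   tau(Delta) = {t:8.2f}   rescaled: {t * eps**(1/12):.3f}")
```

Changing the sign of the initial products, with the \(\widetilde{s}\) components small relative to the \(\widetilde{a}\) components, should instead reproduce the absorbing behavior of case \((c)\).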
**Remark 6.1**.: Under the conditions of cases \((a)\) and \((b)\), we see that the degree-\(l\) component of the target function is learnt within an \(O(\varepsilon^{1/(l+1)})\) time window around \(t_{*}(l,\varepsilon)\asymp\varepsilon^{1/(2l)}\), which is consistent with the timescales conjectured in Definition 1.

**Remark 6.2**.: Case \((c)\) corresponds to \(s(\omega)/s(\omega,0)\) becoming close to \(0\) in time \(t=O(\varepsilon^{\mu_{l}})\), and staying at \(0\). In other words, the neurons become orthogonal to the target direction and no longer play a role in learning higher-degree components. Informally, case \((c)\) couples the learning of the different polynomial components. It can happen that learning phase \(l-1\) induces an effective initialization \((\widetilde{a}(\omega,0),\ \widetilde{s}(\omega,0))\) within the domain of case \((c)\). We expect this not to be the case for suitable choices of the initialization (or equivalently \(\mathrm{P}_{A}\)), \(\varphi\), and \(\sigma\). Establishing this would amount to proving that the standard learning scenario holds.

## 7 Stochastic gradient descent and finite sample size

So far we focused on analyzing the projected gradient flow (GF) dynamics with respect to the population risk, as defined in Eqs. (4)-(5). In this section, we extract the implications of our analysis of GF for online projected stochastic gradient descent, which is a projected version of the SGD dynamics (151). For simplicity of notation, we denote by \(z=(y,x)\in\mathbb{R}\times\mathbb{R}^{d}\) a datapoint and by \(\theta_{i}=(a_{i},u_{i})\in\mathbb{R}\times\mathbb{S}^{d-1}\) the parameters of neuron \(i\). For \(z=(y,x)\) and \(\rho^{(m)}=(1/m)\sum_{i=1}^{m}\delta_{\theta_{i}}=(1/m)\sum_{i=1}^{m}\delta_{(a_{i},u_{i})}\), we define

\[\widehat{F}_{i}(\rho^{(m)};z)=\left(y-\frac{1}{m}\sum_{j=1}^{m}a_{j}\sigma(\langle u_{j},x\rangle)\right)\sigma(\langle u_{i},x\rangle)\,,\]
\[\widehat{G}_{i}(\rho^{(m)};z)=a_{i}\left(y-\frac{1}{m}\sum_{j=1}^{m}a_{j}\sigma(\langle u_{j},x\rangle)\right)\sigma^{\prime}(\langle u_{i},x\rangle)x\,.\]

The projected SGD dynamics is specified as follows:

\[\begin{split}\overline{a}_{i}(k+1)&=\overline{a}_{i}(k)+\varepsilon^{-1}\eta\widehat{F}_{i}(\overline{\rho}^{(m)}(k);z_{k+1})\,,\\ \overline{u}_{i}(k+1)&=\operatorname{Proj}_{\mathbb{S}^{d-1}}\left(\overline{u}_{i}(k)+\eta\widehat{G}_{i}(\overline{\rho}^{(m)}(k);z_{k+1})\right),\end{split} \tag{101}\]

where for \(u\in\mathbb{R}^{d}\) and compact \(S\subset\mathbb{R}^{d}\), \(\text{Proj}_{S}(u):=\text{argmin}_{s\in S}\left\|s-u\right\|_{2}\), and \(\overline{\rho}^{(m)}:=(1/m)\sum_{i=1}^{m}\delta_{\overline{\theta}_{i}}\). Note that the \((\overline{a}_{i},\overline{u}_{i})\)'s here are different from the \((\overline{a},\overline{s})\)'s of Section 6.

We prove that, for small \(\eta\), the projected SGD of Eq. (101) is close to the gradient flow of Eqs. (4)-(5). Throughout this section, we make the following assumptions, similar to those of Section 4:

**A1.**: \(\rho_{0}\) is supported on \([-M_{1},M_{1}]\times\mathbb{S}^{d-1}\). Hence, \(\left|a_{i}(0)\right|\leq M_{1}\) for all \(i\in[m]\).

**A2.**: The activation function is bounded: \(\left\|\sigma\right\|_{\infty}\leq M_{2}\).
Additionally, define for \(u,u^{\prime}\in\mathbb{R}^{d}\):

\[V(\left\langle u_{*},u\right\rangle;\left\|u_{*}\right\|_{2},\left\|u\right\|_{2})=\mathbb{E}\left[\varphi(\left\langle u_{*},x\right\rangle)\sigma(\left\langle u,x\right\rangle)\right], \tag{102}\]
\[U(\left\langle u,u^{\prime}\right\rangle;\left\|u\right\|_{2},\left\|u^{\prime}\right\|_{2})=\mathbb{E}\left[\sigma(\left\langle u,x\right\rangle)\sigma(\left\langle u^{\prime},x\right\rangle)\right]. \tag{103}\]

We then require the functions \(V\) and \(U\) to be bounded and differentiable, with uniformly bounded and Lipschitz continuous gradients for all \(\left\|u\right\|_{2},\left\|u^{\prime}\right\|_{2}\leq 2\):

\[\left\|\nabla_{u}V\right\|_{2}\leq M_{2},\;\left\|\nabla_{u}V-\nabla_{u^{\prime}}V\right\|_{2}\leq M_{2}\left\|u-u^{\prime}\right\|_{2}, \tag{104}\]
\[\left\|\nabla_{(u,u^{\prime})}U\right\|_{2}\leq M_{2},\;\left\|\nabla_{(u,u^{\prime})}U-\nabla_{(u_{1},u_{1}^{\prime})}U\right\|_{2}\leq M_{2}\left(\left\|u-u_{1}\right\|_{2}+\left\|u^{\prime}-u_{1}^{\prime}\right\|_{2}\right). \tag{105}\]

Similar to Remark 4.1, we can show that a sufficient condition for Eqs. (104) and (105) is

\[\sup\left\{\left\|\sigma^{\prime}\right\|_{L^{2}},\left\|\sigma^{\prime\prime}\right\|_{L^{2}}\right\}\leq M_{2}^{\prime},\quad\sup\left\{\left\|\varphi\right\|_{L^{2}},\left\|\varphi^{\prime}\right\|_{L^{2}},\left\|\varphi^{\prime\prime}\right\|_{L^{2}}\right\}\leq M_{2}^{\prime},\]

where the constant \(M_{2}^{\prime}\) depends only on \(M_{2}\).

**A3.**: Assume \((x,y)\sim\mathds{P}\); then we require that \(y\in[-M_{3},M_{3}]\) almost surely. Moreover, we assume that for all \(\left\|u\right\|_{2}\leq 2\), both \(\sigma(\left\langle u,x\right\rangle)\) and \(\sigma^{\prime}(\left\langle u,x\right\rangle)(x-\left\langle u,x\right\rangle u)\) are \(M_{3}\)-sub-Gaussian.

The following theorem upper bounds the distance between the gradient flow and the projected stochastic gradient descent dynamics.

**Theorem 2** (Difference between GF and Projected SGD).: _Let \(\theta_{i}(t)=(a_{i}(t),u_{i}(t))\) be the solution of the GF ordinary differential equations (4)-(5). There exists a constant \(M\) that only depends on the \(M_{i}\)'s from Assumptions A1-A3, such that for any \(T,z\geq 0\) and_

\[\eta\leq\frac{1}{(d+\log m+z^{2})M\exp((1+1/\varepsilon)MT(1+T/\varepsilon)^{2})}\,,\]

_the following holds with probability at least \(1-\exp(-z^{2})\):_

\[\sup_{k\in[0,T/\eta]\cap\mathbb{N}}\max_{i\in[m]}|\overline{a}_{i}(k)|\leq M(1+T/\varepsilon)\,, \tag{106}\]
\[\sup_{k\in[0,T/\eta]\cap\mathbb{N}}\max_{i\in[m]}\left\|\theta_{i}(k\eta)-\overline{\theta}_{i}(k)\right\|_{2}\leq\left(\sqrt{d+\log m}+z\right)M\exp((1+1/\varepsilon)MT(1+T/\varepsilon)^{2})\sqrt{\eta}\,, \tag{107}\]
\[\sup_{k\in[0,T/\eta]\cap\mathbb{N}}|\mathscr{R}(\overline{a}(k),\overline{u}(k))-\mathscr{R}(a(k\eta),u(k\eta))|\leq\left(\sqrt{d+\log m}+z\right)M\exp((1+1/\varepsilon)MT(1+T/\varepsilon)^{2})\sqrt{\eta}\,. \tag{108}\]

The proof is presented in Appendix C and follows the same scheme as that of Theorem 1 part (B) in [11]. The main difference with respect to that theorem is that here we are interested in projected SGD (and GF) instead of plain SGD (and GF); hence an additional approximation step is required, and the \(a_{i}\)'s and \(u_{i}\)'s need to be treated separately.

We next draw the implications of the last result for learning by online SGD within the standard learning scenario.

**Theorem 3**.: _Fix any \(\delta>0\)._
_Assume that \(\varphi,\sigma\) and the initialization \(\mathrm{P}_{A}\) are such that the standard learning scenario of Definition 1 holds up to level \(L\) for some \(L\geq 2\), and that_

\[\sum_{k\geq L+1}\varphi_{k}^{2}\leq\frac{\delta}{2}\,. \tag{109}\]

_Then, there exist constants \(\varepsilon_{*}=\varepsilon_{*}(\delta)\), \(T_{0}=T_{0}(\delta)\), \(T=T(\varepsilon,\delta)=T_{0}(\delta)\varepsilon^{1/(2L)}\) and \(M=M(\varepsilon,\delta)\) that depend on \(\varepsilon,\delta\) (together with \(\varphi,\sigma\) and \(\mathrm{P}_{A}\)) such that the following happens. Assume \(\varepsilon\leq\varepsilon_{*}(\delta)\), that \(m,d,z\) are such that \(d\geq M\), \(m\geq\max(M,z)\), and that the step size \(\eta\) and the number of samples (equivalently, number of steps) \(n\) satisfy_

\[\eta=\frac{1}{M(d+\log m+z)}\,, \tag{110}\]
\[n=MT(d+\log m+z)\,. \tag{111}\]

_Then the projected stochastic gradient descent algorithm of Eq. (101) achieves population risk at most \(\delta\) with probability at least \(1-e^{-z}\):_

\[\mathds{P}\Big{(}\mathscr{R}(\overline{a}(n),\overline{u}(n))\leq\delta\Big{)}\geq 1-e^{-z}\,. \tag{112}\]

The proof of Theorem 3 is deferred to Appendix C.4.

**Remark 7.1**.: Within the lazy or neural tangent regime, learning the projection of the target function \(\varphi(\langle u_{*},x\rangle)\) onto polynomials of degree \(\ell\) requires \(n\gg d^{\ell}\) samples and \(m\gg d^{\ell-1}\) neurons (Ghorbani et al., 2021; Mei et al., 2022; Montanari and Zhong, 2022). In contrast, Theorem 3 shows that, within the standard learning scenario, \(O(d)\) samples and \(O(1)\) neurons are sufficient. Further, as per Theorem 2, the learning dynamics is accurately described by the GF analyzed in the previous sections.

## Acknowledgments

This work was supported by the NSF through award DMS-2031883, the Simons Foundation through Award 814639 for the Collaboration on the Theoretical Foundations of Deep Learning, the NSF grant CCF-2006489 and the ONR grant N00014-18-1-2729, and a grant from Eric and Wendy Schmidt at the Institute for Advanced Study. Part of this work was carried out while Andrea Montanari was on partial leave from Stanford and a Chief Scientist at Ndata Inc dba Project N. The present research is unrelated to AM's activity while on leave.
2306.17485
Detection-segmentation convolutional neural network for autonomous vehicle perception
Object detection and segmentation are two core modules of an autonomous vehicle perception system. They should have high efficiency and low latency while reducing computational complexity. Currently, the most commonly used algorithms are based on deep neural networks, which guarantee high efficiency but require high-performance computing platforms. In the case of autonomous vehicles, i.e. cars, but also drones, it is necessary to use embedded platforms with limited computing power, which makes it difficult to meet the requirements described above. A reduction in the complexity of the network can be achieved by using an appropriate: architecture, representation (reduced numerical precision, quantisation, pruning), and computing platform. In this paper, we focus on the first factor - the use of so-called detection-segmentation networks as a component of a perception system. We considered the task of segmenting the drivable area and road markings in combination with the detection of selected objects (pedestrians, traffic lights, and obstacles). We compared the performance of three different architectures described in the literature: MultiTask V3, HybridNets, and YOLOP. We conducted the experiments on a custom dataset consisting of approximately 500 images of the drivable area and lane markings, and 250 images of detected objects. Of the three methods analysed, MultiTask V3 proved to be the best, achieving 99% mAP_50 for detection, 97% MIoU for drivable area segmentation, and 91% MIoU for lane segmentation, as well as 124 fps on the RTX 3060 graphics card. This architecture is a good solution for embedded perception systems for autonomous vehicles. The code is available at: https://github.com/vision-agh/MMAR_2023.
Maciej Baczmanski, Robert Synoczek, Mateusz Wasala, Tomasz Kryjak
2023-06-30T08:54:52Z
http://arxiv.org/abs/2306.17485v1
# Detection-segmentation convolutional neural network for autonomous vehicle perception

###### Abstract

Object detection and segmentation are two core modules of an autonomous vehicle perception system. They should have high efficiency and low latency while reducing computational complexity. Currently, the most commonly used algorithms are based on deep neural networks, which guarantee high efficiency but require high-performance computing platforms. In the case of autonomous vehicles, i.e. cars, but also drones, it is necessary to use embedded platforms with limited computing power, which makes it difficult to meet the requirements described above. A reduction in the complexity of the network can be achieved by using an appropriate: architecture, representation (reduced numerical precision, quantisation, pruning), and computing platform. In this paper, we focus on the first factor - the use of so-called detection-segmentation networks as a component of a perception system. We considered the task of segmenting the drivable area and road markings in combination with the detection of selected objects (pedestrians, traffic lights, and obstacles). We compared the performance of three different architectures described in the literature: MultiTask V3, HybridNets, and YOLOP. We conducted the experiments on a custom dataset consisting of approximately 500 images of the drivable area and lane markings, and 250 images of detected objects. Of the three methods analysed, MultiTask V3 proved to be the best, achieving 99% \(mAP_{50}\) for detection, 97% MIoU for drivable area segmentation, and 91% MIoU for lane segmentation, as well as 124 fps on the RTX 3060 graphics card. This architecture is a good solution for embedded perception systems for autonomous vehicles. The code is available at: [https://github.com/vision-agh/MMAR_2023](https://github.com/vision-agh/MMAR_2023).

detection-segmentation convolutional neural network, autonomous vehicle, embedded vision, YOLOP, HybridNets, MultiTask V3

## I Introduction

Perception systems in mobile robots, including self-driving cars and unmanned aerial vehicles (UAV), use sensors like cameras, LiDAR (Light Detection and Ranging), radar, IMU (Inertial Measurement Unit), GNSS (Global Navigation Satellite Systems) and more to provide crucial information about the vehicle's position in 3D space and to detect relevant objects (e.g. cars, pedestrians, cyclists, traffic lights, etc.). Image and LiDAR data processing involve two main tasks: detection, which identifies objects and labels them with bounding boxes or masks, and segmentation, which assigns a label to each pixel based on its representation in the image. Instance segmentation assigns different labels to objects belonging to the same class (e.g. different cars). This allows all objects to be correctly identified and tracked. Typically, both tasks are performed by different types of deep convolutional neural networks. For detection, networks from the YOLO family (_You Only Look Once_ [1]) are the most commonly used solution. For segmentation, networks based on the CNN architecture are used, such as U-Net [2] and fully convolutional networks for semantic segmentation, and Mask R-CNN for instance segmentation. It is also worth mentioning the increasing interest in transformer neural networks in this context [3]. However, the use of two independent models has a negative impact on the computational complexity and energy efficiency of the system.
For this reason, network architectures that perform both of the above tasks simultaneously are being researched. There are two approaches that can be used to solve this challenge: using instance segmentation networks or detection-segmentation networks. Instance segmentation networks are a special class of segmentation networks and require the preparation of a training dataset that is common to all detected objects. In addition, their operation is rather complex, and only part of the results are used for self-driving vehicles (distinguishing instances of classes such as road, lane, etc. is unnecessary for further analysis and often difficult to define precisely).

Detection-segmentation networks consist of a common part (called the backbone) and several detection and segmentation heads. This architecture allows the preparation of a separate training dataset for detection and often several subsets for segmentation (e.g. a separate one for lane and road marking segmentation). This allows the datasets to be scaled according to how important the accuracy of the module is. In addition, the datasets used can contain independent sets of images, which greatly simplifies data collection and labeling. The three architectures discussed so far (detection, segmentation, and detection-segmentation) are shown in Figure 1.

Fig. 1: Illustration of the discussed network architectures.

In addition, limiting the number of classes will reduce the time needed for post-processing, which involves filtering the resulting detections, e.g. using the NMS (Non-Maximum Suppression) algorithm. Segmenting the image into only selected categories can also reduce inference time and increase accuracy. All these arguments make detection-segmentation networks a good solution for embedded perception systems for autonomous vehicles.

In this paper, we compared the performance of three detection-segmentation networks: MultiTask V3 [4], HybridNets [5], and YOLOP [6]. We conducted the experiments on a custom dataset, recorded on a mock-up of a city. The road surface and road markings were segmented, and objects such as pedestrians, traffic lights, and obstacles were detected. To the best of our knowledge, this is the first comparison of these methods presented in the scientific literature.

The rest of the paper is structured as follows. Section II discusses the most important works on the use of neural networks for simultaneous object detection and segmentation. The architectures of the tested networks are then presented in Section III. The methods for training the neural networks are described in Section IV. The results obtained are summarised in Section V. The paper ends with conclusions and a discussion of possible future research.

## II Related works

Many different methods have been described in the scientific literature for the detection of the drivable area and road markings, as well as for the detection of objects, e.g. pedestrians, cars, traffic signs, traffic lights, etc. One of the solutions available is the use of deep neural networks. These can be divided into detection, segmentation, and detection-segmentation networks.

Detection networks are designed to locate, classify and label existing objects in any image using a bounding box. This is a set of coordinates of the corners of the rectangles that mark the detected objects in the image. A conventional method of object detection is based on proposing regions and then classifying each proposal into different object categories.
This includes network architectures based on regions with convolutional neural networks (R-CNN) [7]. Another approach treats object detection as a regression or classification problem in order to directly obtain the final results (categories and locations). These include, among others, the YOLOv7 network architecture [1].

Segmentation networks are based on an encoder-decoder architecture. They are used to classify each pixel in the image. Two types of segmentation can be distinguished: semantic and instance. A representative example of semantic segmentation is U-Net [2]. The encoder module uses convolution and pooling layers to perform feature extraction. The decoder module, on the other hand, recovers spatial details from the sub-resolution features while predicting the object labels. A standard choice for the encoder module is a lightweight CNN backbone, such as GoogLeNet or a revised version of it, namely Inception-v3 [8]. To improve the accuracy and efficiency of semantic segmentation networks, multi-branch architectures have been proposed. They allow high-resolution segmentation of objects in the image. To this end, multi-branch networks introduce a fusion module to combine the output of the encoding branches. This can be a Feature Fusion module, in which the output features are joined by concatenation or addition, an Aggregation Layer (BiSeNet V2 [9]), a Bilateral Fusion module (DDRNet [10]), or a Cascade Feature Fusion Unit (ICNet [11]).

Moreover, there is growing research interest in using transformer-based neural networks for object detection and segmentation, such as DETR [12] and SegFormer [13]. For the segmentation task, only a few such architectures have been proposed so far, while for the object detection task there are many solutions, among which transformer-based methods achieve the best performance. Vision transformers offer robust, unified, and even simpler solutions for various tasks. Compared to CNN approaches, most transformer-based approaches have simpler pipelines but stronger performance. However, transformer-based methods require a lot of training data.

Many dedicated solutions require both detection and segmentation of objects in the image. It should be noted that once full segmentation (i.e. for all object classes under consideration) has been performed, there is no need to implement detection - the bounding boxes can be obtained from the masks of individual objects. However, networks implementing accurate multi-class semantic segmentation or instance segmentation are characterized by high computational complexity, as highlighted in the paper [14]. The authors showed that the performance of the three most accurate networks did not exceed 24 fps (frames per second) on an RTX 3090 and 12 fps on a GTX 1080 Ti graphics card. This shows that for this type of network, achieving real-time processing (60 fps) on an embedded computing platform is challenging. Hence the idea of combining fast detection with segmentation limited to a few classes with relatively little variation (such as roadway, road markings, or vegetation/buildings). A key feature of this type of solution is the encoder common to both functionalities. This approach makes it possible to run deep networks on embedded devices equipped with computing chips that consume less power but also have less computing power. Furthermore, as will be shown later, the process of learning a segmentation-detection network is easier and faster than an alternative solution based on a segmentation network only.
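To illustrate the point above, bounding boxes can be recovered from a per-pixel label map with a few lines of OpenCV. The sketch below is a minimal example; the class ID, map size, and area threshold are hypothetical, and a label map as produced by a segmentation head is assumed:

```python
import cv2
import numpy as np

def boxes_from_mask(label_map: np.ndarray, class_id: int, min_area: int = 50):
    """Extract bounding boxes of all blobs of `class_id` in a label map."""
    binary = (label_map == class_id).astype(np.uint8)
    # Stats rows are [x, y, width, height, area]; row 0 is the background.
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    return [(x, y, x + w, y + h)
            for x, y, w, h, area in stats[1:n]
            if area >= min_area]

# Hypothetical example: pedestrians encoded as class 3 in the label map.
label_map = np.zeros((320, 512), dtype=np.uint8)
label_map[100:180, 200:240] = 3
print(boxes_from_mask(label_map, class_id=3))   # -> [(200, 100, 240, 180)]
```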
In the papers [15, 16, 4, 17, 5], detection-segmentation network architectures have been proposed that currently achieve the best results. The training process typically uses the following datasets: _KITTI_, _Cityscapes_, VOC2012 or _BDD100k_ [18, 19, 20, 21]. When pre-selecting the appropriate solutions for the experiments, we took into account the diversity of the proposed architectures, the fulfillment of the requirements related to the FPT'22 competition [22], as well as the possibility of quantizing and accelerating the network on embedded computing platforms, i.e. eGPU (embedded Graphics Processing Unit) and SoC FPGA (System on Chip Field Programmable Gate Array). Therefore, we decided to use the following three networks in our research: MultiTask V3 [4], HybridNets [5], and YOLOP [6].

## III The considered detection-segmentation neural networks

The MultiTask V3 network [4] is a model proposed by the developers of the Vitis AI (AMD Xilinx) platform for users using neural networks on SoC FPGA platforms. A scheme of the MultiTask V3 neural network architecture is shown in Figure 2. It allows five tasks to be performed simultaneously: detection, three independent image segmentations, and depth estimation. The backbone of the network, which determines the underlying feature vector, is based on the ResNet-18 convolutional neural network. Subsequent features are extracted using encoders and convolutional layer blocks. Branches responsible for a given part of the network then generate the corresponding output using convolution, ReLU activation operations, and normalization. Due to the large number of tasks to be performed, the network was trained to segment road markings, lanes (including direction), and objects (pedestrians, obstacles, and traffic lights) separately. Detection was performed on the same set of objects. The model was trained using only our own custom datasets, which were transformed into the format recommended by the network developers. The resulting network processes images with a resolution of \(512\times 320\) pixels. In addition, thanks to the model quantization tools, it is possible to reduce the precision and run the network on SoC FPGA platforms using DPUs (Deep Learning Processor Units). The performance on the original _BDD100k_ dataset [21] is not given, as the network has not previously been described in any scientific paper. The second detection-segmentation neural network considered is YOLOP [6]. A scheme of the architecture is shown in Figure 3. It performs 3 separate tasks within a single architecture: detection of objects in the road scene, segmentation of the drivable area, and segmentation of road markings. The network consists of a common encoder and 3 decoders, with each decoder dedicated to a separate task. The drivable area represents all lanes in which the vehicle was allowed to move; opposite lanes were not taken into account. The network was originally trained on the _BDD100k_ dataset [21]. To reduce memory requirements, the images were scaled from a resolution of \(1280\times 720\times 3\) to a resolution of \(640\times 384\times 3\). The network achieved a \(mAP_{50}\) score for single class detection (cars) of 76.5%, a drivable area segmentation mIoU of 91.5%, and a lane line segmentation mIoU of 70.5%. The HybridNets network [5] is another example of a simultaneous segmentation and detection model. HybridNets, like YOLOP, only performs object detection and segmentation of road markings and drivable area (without considering lane direction).
A scheme of the architecture is shown in Figure 4. It does not have the semantic segmentation and depth estimation branches available in MultiTask V3. The network consists of four elements: a feature extractor in the form of EfficientNet V2 [23], a neck in the form of BiFPN [24], and two branches, one for a detection head similar to YOLOv4 [25] and the other for segmentation, consisting of a series of convolutions and fusion of the outputs of successive layers of the neck. The network was initially trained on the _BDD100k_ dataset [21], whose images were scaled to a size of \(640\times 384\times 3\). It achieved a \(mAP_{50}\) for single class detection (cars) equal to 77.3%, a drivable area segmentation mIoU of 90.5%, and a lane line segmentation mIoU of 31.6%.

Fig. 2: Scheme of the MultiTask V3 neural network architecture.

Fig. 3: Scheme of the YOLOP neural network architecture.

Fig. 4: Scheme of the HybridNets neural network architecture.

## IV Experiments performed

A custom training dataset was prepared to compare the above-mentioned neural network models. It was divided into three subsets containing objects (pedestrian figures, obstacles, traffic lights), road markings, and drivable area, respectively. The subsets were prepared based on the collected recordings from the city mock-up, which was constructed according to the rules of the FPT'22 competition [22]. Subsequently, labels were applied to the images obtained from the recordings. The road markings dataset was prepared semi-automatically by pre-selecting a threshold and performing binarization. Annotations were then prepared for all sets using the Labelme software [26]. The resulting label sets were adapted to the formats required by the tools designed to train the aforementioned networks. The final dataset consisted of 500 images of the city mock-up with road markings, 500 images with the drivable area, and 250 images with objects. The size of the dataset was dictated by the small environment with little variation in lighting and camera angles, as the purpose of the trained model was to be used only on the given mock-up. The prepared datasets were divided into training and validation subsets in an 80/20 ratio. The validation set was later used as the test set. This decision was made because the size of the prepared dataset was relatively small (but still sufficient to properly train the model, as shown in Figure 6). An example of an input data set from a training set is shown in Figure 5. In the case of the MultiTask V3 network, a path to the prepared dataset was passed to the training program. The application managed the training sets independently, so that it was possible to run the training procedure from start to finish on all sets. The network was trained using the default hyperparameters provided by the developers. The base learning rate was set to 0.01. The optimiser used for training was Stochastic Gradient Descent (SGD). Training included data augmentation in the form of random mirroring of the input images, photometric distortion, and random image cropping. The model was trained using a batch size of 16. As the MultiTask V3 network also performs object segmentation, the maximum number of epochs was set to the highest of all the models considered. A value of 450 epochs was chosen, after which no significant increase in validation results was observed. The YOLOP network training program did not allow different parts of the model to be trained simultaneously with independent sets.
As the segmentation sets were different from the detection set, it was necessary to split the network training procedure. The training procedure began with the backbone layers and the upper detection layers (the segmentation layers were frozen). Once this was completed, the layers responsible for segmentation were unfrozen, the remaining layers were frozen, and the training procedure was restarted. The network was trained using the default hyperparameters provided by the developers. The base learning rate was set to 0.01. The optimiser used for training was the Adam algorithm. Training included data augmentation in the form of random changes in image perspective and random changes in the image's colour hue, saturation, and value. The model was trained using a batch size of 2. The training was stopped after 390 epochs, as the validation results did not improve in the following steps. As with YOLOP, the HybridNets training program does not allow simultaneous training with two independent data sets. Therefore, a similar training strategy to YOLOP was used. First, the backbone and the detection branch were trained. The default hyperparameter settings provided by the developers were used, including the AdamW optimiser. Only two parameters were changed: the batch size (4) and the initial learning rate (0.001). The learning rate was changed for detection training because starting with the default learning rate of 0.0001 did not give promising results. After 150 epochs, when no further performance improvement was observed on the validation set, training was stopped, the backbone and detection branches were frozen, and training was started on the segmentation set. This time the default hyperparameters were kept, including the learning rate of 0.0001. The segmentation branch was trained for 100 epochs until no improvement in performance was observed. In total, the network was trained for 250 epochs. Both branches were trained using the default data augmentation provided by the researchers, in the form of left-right flip, change of hue, rotation, shear and translation.

Fig. 5: Examples of training sets. Set (b) was generated for the MultiTask V3 network only, and sets (a), (c) and (d) for all models.

## V Results and Discussion

Figure 6 shows the results of the considered neural networks in terms of object detection, drivable area segmentation, and road marking segmentation for a view containing a straight road. To verify the effectiveness of our selected detection-segmentation neural network models, we compared the performance of each single-task scheme separately, as well as the multitask scheme. Table I shows the performance of the models on the NVIDIA GeForce GTX 1060 M and NVIDIA GeForce RTX 3060 graphics cards. It can be seen that, for comparable resolutions, the YOLOP and MultiTask V3 networks process data in real time, while HybridNets is slightly slower. Here it should be noted that the original implementation of HybridNets was used. Unlike the YOLOP and MultiTask V3 models, it makes extensive use of subclassing to implement most of the layers used in the network. This may cause large discrepancies in the inference speed of the network compared to other models. Table II summarises the input image resolution and computational complexity of the selected neural networks. MultiTask V3 has the highest FLOPs value, especially when normalised with respect to the input image resolution, and the highest number of parameters.
On the other hand, it achieved the best performance on both GPUs, possibly due to the highly optimised parallel implementation. We then performed an evaluation to assess the performance of each task: object detection, drivable area segmentation, and lane segmentation. We considered the object detection performance of the three models on a custom dataset. As shown in Table III, we use \(mAP_{50}\), \(mAP_{70}\), and \(mAP_{75}\) as the evaluation metrics of the detection accuracy. For YOLOP and MultiTask V3, the \(mAP_{50}\) score is above 95%, proving that both networks have been successfully trained. For MultiTask V3, the score does not change much as the IoU (Intersection over Union) acceptance level increases, while for YOLOP it decreases slightly. This result shows that the detections made by MultiTask V3 are very similar to those provided by the validation dataset, while YOLOP's detections are close to them but do not overlap perfectly. The \(mAP_{50}\) score for the HybridNets architecture is about 84%. This score is lower than for the previous two architectures but still allows for acceptable detection accuracy. We used IoU and mIoU (mean IoU) as evaluation metrics for drivable area segmentation and lane segmentation accuracy. A comparison of the drivable area segmentation results for MultiTask V3, YOLOP and HybridNets is shown in Table IV. Note that one of the requirements of the FPT'22 competition is left-hand traffic. It can be seen that the best performance is achieved by the MultiTask V3 network. However, the other neural networks also perform very well, with an accuracy of no less than 84%. A high IoU score for the drivable area class for all networks shows that the predicted segmentations are almost the same as those in the validation dataset. Achieving such high results was predictable, as the drivable area surfaces are large, simple in shape, and uniform in color. It is therefore relatively easy to distinguish them from the background. As the background is classified as any other pixel not belonging to the drivable area class, the results obtained are even higher. The city mock-up was constructed according to the requirements of the FPT'22 competition, and the dataset was created solely for training models that could be used on it. Due to the constant environmental factors and relatively few corner cases (such as intersections, turns, etc.), there was no need to obtain more data. However, for real-world applications, more work should be done to prepare the dataset. It should include more data, covering different locations and environments (lighting, weather factors, etc.), to make the models reliable in a diverse environment.

## VI Conclusions

The results obtained confirm the high attractiveness of this type of network: they allow good detection and segmentation accuracy, as well as real-time performance. Moreover, the training of these networks is simpler, since certain parts can be trained independently, even on separate datasets. Of the three methods analysed, MultiTask V3 proved to be the best, obtaining 99% \(mAP_{50}\) for detection, 97% mIoU for drivable area segmentation and 91% mIoU for lane segmentation, as well as 124 fps on the RTX 3060 graphics card. This architecture is a good solution for embedded perception systems for autonomous vehicles. As part of future work, we plan to focus on two further stages of building an embedded perception system based on a deep convolutional neural network.
First, we want to perform quantization and pruning of the analysed network architectures to see how they affect efficiency and computational complexity. Next, we will run the networks on an eGPU (e.g. Jetson Nano) and an SoC FPGA (e.g. Kria from AMD Xilinx). The networks will be compared on these platforms in terms of performance and power consumption. It is worth noting that initial tests on an eGPU with MultiTask V3 and YOLOP have shown that MultiTask V3 provides faster inference while consuming less energy. In the final step, we will add a control system to the selected perception system, place the selected computational system on a model of an autonomous vehicle and test its performance on the created mock-up. Secondly, we will attempt to use _weakly supervised learning_ and _self-supervised learning_ methods, which, in the case of an atypical, custom dataset, would allow a significant reduction of the effort needed to label the training data. Thirdly, we also want to consider adding modules for depth estimation and optical flow, as elements often used in autonomous vehicle perception systems.

## Acknowledgment

The work presented in this paper was supported by the AGH University of Krakow project no. 16.16.120.773 and by the programme "Excellence initiative - research university" for the AGH University of Krakow.
2309.11856
Activation Compression of Graph Neural Networks using Block-wise Quantization with Improved Variance Minimization
Efficient training of large-scale graph neural networks (GNNs) has been studied with a specific focus on reducing their memory consumption. Work by Liu et al. (2022) proposed extreme activation compression (EXACT) which demonstrated drastic reduction in memory consumption by performing quantization of the intermediate activation maps down to using INT2 precision. They showed little to no reduction in performance while achieving large reductions in GPU memory consumption. In this work, we present an improvement to the EXACT strategy by using block-wise quantization of the intermediate activation maps. We experimentally analyze different block sizes and show further reduction in memory consumption (>15%), and runtime speedup per epoch (about 5%) even when performing extreme extents of quantization with similar performance trade-offs as with the original EXACT. Further, we present a correction to the assumptions on the distribution of intermediate activation maps in EXACT (assumed to be uniform) and show improved variance estimations of the quantization and dequantization steps.
Sebastian Eliassen, Raghavendra Selvan
2023-09-21T07:59:08Z
http://arxiv.org/abs/2309.11856v2
Activation Compression of Graph Neural Networks Using Block-Wise Quantization With Improved Variance Minimization

###### Abstract

Efficient training of large-scale graph neural networks (GNNs) has been studied with a specific focus on reducing their memory consumption. Work by Liu et al. (2022) proposed extreme activation compression (EXACT), which demonstrated drastic reduction in memory consumption by performing quantization of the intermediate activation maps down to INT2 precision. They showed little to no reduction in performance while achieving large reductions in GPU memory consumption. In this work, we present an improvement to the EXACT strategy by using block-wise quantization of the intermediate activations. We experimentally analyze different block sizes and show further reduction in memory consumption (\(>15\%\)) and runtime speedup per epoch (\(\approx 5\%\)), even when performing extreme extents of quantization, with similar performance trade-offs as with the original EXACT. Further, we present a correction to the assumptions on the distribution of intermediate activation maps in EXACT (assumed to be uniform) and show improved variance estimations of the quantization and dequantization steps.1

Footnote 1: Source code will be available at the official paper repository [https://github.com/saintslab/i-Exact](https://github.com/saintslab/i-Exact).

Sebastian Eliassen\({}^{\star}\), Raghavendra Selvan\({}^{\star}\)

\({}^{\star}\) Department of Computer Science, University of Copenhagen

Index terms: graph neural networks, quantization, activation compression, efficient machine learning, deep learning

## 1 Introduction

Graph neural networks (GNNs) are a class of deep learning (DL) models most useful when dealing with graph structured data [1, 2]. They have found widespread use in a range of diverse applications [3, 4, 5, 6]. GNNs are known to scale poorly with the number of nodes in the graph data, primarily due to the memory requirements for storing the adjacency matrices and intermediate activation maps [7]. The increase in memory consumption necessitates the use of more computational resources. This is, unfortunately, in line with the growing resource consumption of recent classes of deep learning methods [8, 9]. A common approach to reducing the resource consumption of DL methods is to explore different efficiency strategies [10, 11], such as training neural networks with quantized weights [12] or quantized activation maps [13]. The main focus of efficiency improvements in GNNs has been either to operate on subgraphs so as to use smaller adjacency matrices [14], or to store compressed node embeddings or activation maps for computing gradients [15]. In this work, we are interested in the latter, specifically following the method introduced in [15], which proposed extreme activation compression (EXACT) using a combination of stochastic rounding-based quantization and random projections. In this work we make two contributions, starting from EXACT, that further improve the memory consumption and yield training runtime speedup. Firstly, we introduce block-wise quantization [16] of the activation maps, which quantizes large groups of tensors instead of individual tensors, with support down to INT2 precision. Secondly, the quantization variance estimation in EXACT is performed under the assumption that the activation maps are uniformly distributed. We show, with empirical evidence, that the activation maps do not follow a uniform distribution but instead a type of clipped normal distribution.
Using this insight, we present an improvement to the variance minimization strategy when performing the quantization of activation maps. Experimental evaluation on multiple graph datasets shows a consistent reduction in memory consumption and speedup in training runtime compared to EXACT.

## 2 Notations and Background

We describe a graph with \(N\) nodes as \(\mathcal{G}=(\mathbf{X},\mathbf{A})\), with node feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times F}\) containing \(F\)-dimensional features for each of the \(N\) nodes, and binary adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\) encoding the relations between the nodes. Specifically, \(\mathbf{A}_{i,j}=1\) if there is an edge between nodes \(i\) and \(j\), and \(\mathbf{A}_{i,j}=0\) otherwise. The GNN from [2] with \(L\) layers can be compactly written as the recursion: \[\mathbf{H}^{(\ell+1)}=\sigma\left(\hat{\mathbf{A}}\mathbf{H}^{(\ell)}\mathbf{\Theta}^{(\ell)}\right) \tag{1}\] where the symmetric normalized adjacency matrix is \(\hat{\mathbf{A}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{A}\tilde{\mathbf{D}}^{-\frac{1}{2}}\) with \(\tilde{\mathbf{D}}\) the degree matrix of \(\mathbf{A}+\mathbf{I}\), \(\mathbf{H}^{(0)}:=\mathbf{X}\), the trainable parameters at layer \(\ell\) are \(\mathbf{\Theta}^{(\ell)}\), and \(\sigma(\cdot)\) is a suitable non-linearity. Since the activation maps, specifically the intermediate results \(\left(\mathbf{H}^{(\ell)}\mathbf{\Theta}^{(\ell)}\right)\) and the node embedding matrix \(\mathbf{H}^{(\ell)}\), are the biggest users of memory, EXACT [15] focused on reducing the size of the activation maps from FLOAT32 to lower precision using two methods: **Stochastic Rounding**: For a given node \(i\), its embedding vector \(\mathbf{h}_{i}^{(\ell)}\) is quantized and stored using \(b\)-bit integers as: \[\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}=\mathrm{Quant}\left(\mathbf{h}_{i}^{(\ell)}\right)=\left\lfloor\frac{\mathbf{h}_{i}^{(\ell)}-Z_{i}^{(\ell)}}{r_{i}^{(\ell)}}B\right\rceil=\left\lfloor\bar{\mathbf{h}}\right\rceil \tag{2}\] where \(B=2^{b}-1\), \(Z_{i}^{(\ell)}=\min(\mathbf{h}_{i}^{(\ell)})\) is the zero-point, \(r_{i}^{(\ell)}=\max(\mathbf{h}_{i}^{(\ell)})-\min(\mathbf{h}_{i}^{(\ell)})\) is the range for \(\mathbf{h}_{i}^{(\ell)}\), \(\bar{\mathbf{h}}\) is the normalized activation map, and \(\lfloor\cdot\rceil\) is the stochastic rounding operation [17]. Stochastic rounding is a rounding method that rounds a number to one of its two nearest integers, with a probability inversely proportional to the distance from the corresponding quantization boundary.2

Footnote 2: For any scalar activation map, \(h\), stochastic rounding is given by: \[\left\lfloor h\right\rceil=\begin{cases}\left\lceil h\right\rceil,\text{with probability }h-\left\lfloor h\right\rfloor\\ \left\lfloor h\right\rfloor,\text{with probability }1-\left(h-\left\lfloor h\right\rfloor\right)\end{cases}\] where \(\lceil\cdot\rceil\), \(\lfloor\cdot\rfloor\) are the ceil and floor operators, respectively.

Because the rounding probabilities are determined by the distance to the quantization boundaries, the operator is unbiased. Figure 1-A) illustrates stochastic rounding with uniform bin widths. The inverse process of dequantization is defined as: \[\hat{\mathbf{h}}_{i}^{(\ell)}=\mathrm{Dequant}\left(\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}\right)=r_{i}^{(\ell)}\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}/B+Z_{i}^{(\ell)} \tag{3}\] which linearly transforms the quantized values from \([0,B]\) back to their original ranges.
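As a concrete illustration of Eqs. (2) and (3), the sketch below implements per-vector quantization and dequantization with stochastic rounding in NumPy. It is a minimal re-implementation written for this text, not the EXACT source code:

```python
import numpy as np

rng = np.random.default_rng(0)

def quant(h, b=2):
    """Quantize a vector h to b-bit integers with stochastic rounding, Eq. (2)."""
    B = 2**b - 1
    z = h.min()                       # zero-point Z
    r = h.max() - h.min()             # range r (assumed non-zero here)
    h_bar = (h - z) / r * B           # normalized activation in [0, B]
    low = np.floor(h_bar)
    # Round up with probability equal to the fractional part, so the rounded
    # value is an unbiased estimate of h_bar.
    h_int = low + (rng.random(h.shape) < (h_bar - low))
    return h_int.astype(np.uint8), z, r

def dequant(h_int, z, r, b=2):
    """Map the b-bit integers back to the original range, Eq. (3)."""
    B = 2**b - 1
    return r * h_int.astype(np.float64) / B + z

h = rng.normal(size=8)
h_int, z, r = quant(h)
h_hat = dequant(h_int, z, r)
print(np.abs(h - h_hat).max())        # nonzero: unbiased only in expectation
```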
Note that we still have some information loss, since \(\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}\) is only an estimate of \(\mathbf{h}_{i}^{(\ell)}\).3

Footnote 3: Note that quantization followed by dequantization is unbiased due to stochastic rounding, i.e., \(\mathbb{E}[\hat{\mathbf{h}}_{i}^{(\ell)}]=\mathbb{E}[\mathrm{Dequant}(\mathrm{Quant}(\mathbf{h}_{i}^{(\ell)}))]=\mathbf{h}_{i}^{(\ell)}\).

**Random Projection**: Another way of minimizing the memory footprint of activation maps is to perform dimensionality reduction on them. This is done via random projection in EXACT as: \[\mathbf{h}_{i_{\texttt{proj}}}^{(\ell)}=\mathrm{RP}(\mathbf{h}_{i}^{(\ell)})=\mathbf{h}_{i}^{(\ell)}\mathbf{R} \tag{4}\] where \(\mathbf{R}\in\mathbb{R}^{D\times R}\) with \(R<D\) is the normalized Rademacher random matrix [18] that satisfies \(\mathbb{E}[\mathbf{R}\mathbf{R}^{\top}]=\mathbf{I}\). The random projected node embeddings are inversely transformed by \[\hat{\mathbf{h}}_{i}^{(\ell)}=\mathrm{IRP}\left(\mathbf{h}_{i_{\texttt{proj}}}^{(\ell)}\right)=\mathbf{h}_{i_{\texttt{proj}}}^{(\ell)}\mathbf{R}^{T}. \tag{5}\] The matrices containing all projected and recovered activation maps are denoted by \(\mathbf{H}_{\mathrm{proj}}^{(\ell)}\) and \(\hat{\mathbf{H}}^{(\ell)}\), respectively.4

Footnote 4: Note that the RP and IRP operations are also unbiased, i.e., \(\mathbb{E}[\hat{\mathbf{H}}^{(\ell)}]=\mathbb{E}[\mathrm{IRP}(\mathbf{H}_{\mathrm{proj}}^{(\ell)})]=\mathbf{H}^{(\ell)}\).

The EXACT method combines random projection and quantization to obtain compounding reductions in memory consumption. Specifically, node embeddings are compressed as \(\tilde{\mathbf{h}}_{i}^{(\ell)}=\mathrm{Quant}\left(\mathrm{RP}\left(\mathbf{h}_{i}^{(\ell)}\right)\right)\) and stored in memory during the forward pass, and during the backward pass they are recovered as \(\hat{\mathbf{h}}_{i}^{(\ell)}=\mathrm{IRP}\left(\mathrm{Dequant}\left(\tilde{\mathbf{h}}_{i}^{(\ell)}\right)\right)\).

## 3 Methods

Quantizing activation maps of GNNs reduces the memory consumption when training GNNs, but introduces an additional overhead in the computation time due to the quantization/dequantization steps. We propose to perform large block-wise quantization [16] in place of quantizing individual tensors, in order to recover some of the slowdown and to further reduce the memory consumption.

### Block-wise Quantization of Activation maps

The quantization in Eq. (2) is performed over each node embedding, which is a tensor \(\mathbf{h}_{i}^{(\ell)}\in\mathbb{R}^{D}\), resulting in a sequence of \(b\)-bit integers, i.e., \(\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}\in\left\{0,\dots,B\right\}^{D}\). Instead of quantizing each node embedding, block-wise quantization takes a larger chunk of tensors and performs the quantization on them, which further reduces the memory footprint and yields speedup. Block-wise quantization has been shown to be effective in reducing the memory footprint, as demonstrated in [16], where optimizer states are block-wise quantized to \(8\) bits (INT8) [19]. Consider the complete node embedding matrix after random projection, \(\mathbf{H}_{\texttt{proj}}^{(\ell)}\in\mathbb{R}^{N\times R}\). To perform block-wise quantization, the node embedding matrix is first reshaped into a stack of tensor blocks of length \(G\): \[\mathbf{H}_{\texttt{block}}^{(\ell)}\in\mathbb{R}^{\frac{N\cdot R}{G}\times G}:=\mathrm{reshape}\left(\mathbf{H}_{\texttt{proj}}^{(\ell)},G\right). \tag{6}\]
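A sketch of the block-wise variant is given below; the reshape corresponds to Eq. (6), and it reuses the `quant` and `dequant` functions from the previous listing, now storing one zero-point and range per block of \(G\) values instead of per node (again our own simplified illustration, not the authors' implementation):

```python
import numpy as np

def blockwise_quant(H_proj, G, b=2):
    """Quantize an (N, R) matrix in blocks of G values, cf. Eq. (6)."""
    N, R = H_proj.shape
    assert (N * R) % G == 0, "the block size must divide the number of entries"
    H_block = H_proj.reshape(N * R // G, G)   # stack of length-G blocks
    # One (zero-point, range) pair is stored per block instead of per node,
    # which shrinks the FP32 metadata overhead as G grows.
    return [quant(block, b=b) for block in H_block], (N, R)

def blockwise_dequant(blocks, shape, b=2):
    rows = [dequant(h_int, z, r, b=b) for h_int, z, r in blocks]
    return np.concatenate(rows).reshape(shape)

H_proj = np.random.default_rng(1).normal(size=(6, 4))
blocks, shape = blockwise_quant(H_proj, G=8)
H_hat = blockwise_dequant(blocks, shape)
print(H_hat.shape)                            # (6, 4)
```

For \(G=R\) this reduces to the per-node quantization of Eq. (2); larger \(G\) trades a coarser shared range for fewer stored zero-points and ranges.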
Figure 1: Demonstration of stochastic rounding for \(b=2\), i.e., \(2^{b}=4\) quantization levels, for 128 uniformly sampled data points. Here the sampled points can be quantized to any of the four levels. The closer the color of the sample is to the color of the vertical bar, the larger the probability that it quantizes to said vertical bar. A) Quantization bins when using uniform bin widths are shown. B) The effect of using non-linear bin widths when performing the variance optimization introduced in Sec. 3.2 is visualized.

The sequence of random projection and quantization described in Section 2 is performed on each block \(\mathbf{h}_{i_{\texttt{block}}}^{(\ell)}\in\mathbb{R}^{G}\ \forall\ i=[1,\ldots,(N\cdot R/G)]\). Performing quantization using larger blocks of tensors is shown to improve training stability, as block-wise quantization localizes the effect of outliers to within their own block [16]. In this work, we experiment with different block sizes to study the impact on memory consumption and test performance.

### Improved Variance Minimization

Starting from the observation that \(\mathbf{h}_{i_{\texttt{int}}}^{(\ell)}\) is an unbiased estimate, we want to find the quantization boundaries such that its variance, \(\mathrm{Var}(\mathbf{h}_{i_{\texttt{int}}}^{(\ell)})\), is minimized, to further reduce the effect of quantization. To achieve this we need three components: 1) the distribution of the activation maps, 2) the variance as a function of the activation maps, and 3) minimization of the expected variance as a function of the quantization boundaries. In EXACT [15], the quantization boundaries are always set to integer values, i.e., the bins are of constant width. This stems from the assumption that the underlying activation maps are _uniformly_ distributed [15] (Figure 2-center). In this work we show, on multiple datasets, that the activation maps are more accurately described by a variation of the normal distribution, which we call the clipped normal. Letting \(B=2^{b}-1\) define the number of quantization bins, and \(\Phi^{-1}\) the percent point function (inverse CDF) of the normal distribution, we describe the clipped normal distribution as \[\mathcal{CN}_{[1/D]}(\mu,\sigma)=\min\left(\max\left(0,\mathcal{N}(\mu,\sigma)\right),B\right), \tag{7}\] \[\text{where }\mu=B/2\text{ and }\sigma=-\mu/\Phi^{-1}(1/D).\] The similarity between the observed and the modelled activation maps is visualized in Figure 2, where we can see that the clipped normal distribution is better at approximating the activation maps than the uniform distribution. We next extend stochastic rounding to use irregular bin widths. Consider a normalized activation \(h\in\bar{\mathbf{h}}\) within bin \(i\); stochastic rounding when using irregular bin widths, \(\delta_{i}\ \forall\ i=[1,\ldots,B]\), is given by: \[\lfloor h\rceil=\begin{cases}\lceil h\rceil,\text{with probability }(h-\lfloor h\rfloor)/\delta_{i}\\ \lfloor h\rfloor,\text{with probability }1-((h-\lfloor h\rfloor)/\delta_{i}).\end{cases} \tag{8}\] Following the variance estimation from [20]5 and assuming a normalized activation \(h\), we calculate its stochastic rounding variance as

Footnote 5: Check Eq. 4.4 onwards in [20] for the detailed derivation.

\[\mathrm{Var}(\lfloor h\rceil)=\sum_{i=1}^{B}\left(\delta_{i}(h-\alpha_{i-1})-(h-\alpha_{i-1})^{2}\right), \tag{9}\] where \(\delta_{i}\) is the width of the bin containing \(h\), and \(\alpha_{i-1}\) is the starting position of that bin.
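To make Eq. (9) concrete, the following sketch evaluates the stochastic-rounding variance of a normalized activation for a given set of bin edges, under the reading that only the bin containing \(h\) contributes to the sum; sweeping the central-bin edges \([\alpha,\beta]\) in this way reproduces the kind of comparison shown in Figure 3 (an illustrative re-implementation, not the authors' code):

```python
import numpy as np

def sr_variance(h, edges):
    """Variance of stochastically rounding a normalized h, cf. Eq. (9).

    edges: sorted bin boundaries, e.g. [0, alpha, beta, B] for INT2 (B = 3).
    Only the bin containing h contributes: delta*(h - a) - (h - a)**2.
    """
    i = int(np.searchsorted(edges, h, side="right")) - 1
    i = min(max(i, 0), len(edges) - 2)        # clamp h lying on a boundary
    a, delta = edges[i], edges[i + 1] - edges[i]
    return delta * (h - a) - (h - a) ** 2

# Sweep the central-bin edges [alpha, beta] as in Figure 3 (B = 3).
B = 3
for alpha, beta in [(1.0, 2.0), (0.8, 2.2), (1.2, 1.8)]:
    edges = [0.0, alpha, beta, float(B)]
    hs = np.linspace(0.0, B, 1001)
    mean_var = np.mean([sr_variance(h, edges) for h in hs])
    print(f"[{alpha}, {beta}] -> mean variance {mean_var:.4f}")
```

Averaging these variances weighted by the clipped-normal density of Eq. (7), rather than over the uniform grid used above, gives the expected variance of Eq. (10) below, which can then be minimized over \([\alpha,\beta]\) with a standard solver such as `scipy.optimize.minimize`.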
Assuming INT2 quantization, i.e., with \(B=3\) bins, the expected variance of the stochastic rounding operation under the clipped normal distribution is obtained from Eq. (9) and Eq. (7): \[\mathbb{E}[\mathrm{Var}(\lfloor h\rceil)]=\int_{0}^{\alpha}(\alpha\cdot h-h^{2})\mathcal{CN}(h;\mu,\sigma)\,dh\] \[+\int_{\alpha}^{\beta}\left((\beta-\alpha)(h-\alpha)-(h-\alpha)^{2}\right)\mathcal{CN}(h;\mu,\sigma)\,dh\] \[+\int_{\beta}^{B}\left((B-\beta)(h-\beta)-(h-\beta)^{2}\right)\mathcal{CN}(h;\mu,\sigma)\,dh \tag{10}\] where \([\alpha,\beta]\) are the tunable edges of the central bin (see Figure 1-B). Given this expected variance in Eq. (10), we optimize the boundaries \([\alpha,\beta]\) that minimize the variance due to stochastic rounding. This can be done using standard numerical solvers implemented in Python.

## 4 Experiments and Results

**Data**: Experiments are performed on two large-scale graph benchmark datasets for inductive learning tasks: the open graph benchmark (OGB) Arxiv dataset [21], consisting of a graph with \(\approx 170k\) nodes and \(>1M\) edges, and the Flickr dataset [22], consisting of \(\approx 90k\) nodes and \(\approx 900k\) edges.

**Experimental Set-up**: The GNN used in our experiments is the popular GraphSAGE architecture [14] implemented in Pytorch [23], which is also the baseline model with no activation compression, i.e., operating in FP32 precision. EXACT in INT2 precision with \(D/R=8\) is used as the second baseline, which applies extreme compression. We evaluate our proposed compression method in INT2 precision with different block sizes \(G/R=[2,4,8,16,32,64]\) to demonstrate the influence of block-wise quantization. To keep the dimensionality proportion between the GNN layers, we scale the dimensionality of each layer equally when performing grouping; hence the block size is presented using the ratio \(G/R\). The influence of variance minimization (VM) on the test performance is also reported.

Figure 2: _The observed normalized activations in a GNN model on the OGB-Arxiv data (left) compared to different modelled distributions: uniform (center), and clipped normal (right). Notice the clipped normal is able to model the observed distribution more accurately, including the edges where the spikes are caused due to clipping at the boundaries._

Figure 3: _Variance of stochastic rounding for INT2 quantization with different quantization boundaries \([\alpha,\beta]\) based on Eq. (9). When \([\alpha,\beta]=[1.0,2.0]\), uniform bin widths are obtained._

**Results**: The performance of the baseline methods and of different configurations of the method presented in this work is reported for the two datasets in Table 1. The most striking trend is that there is no noticeable difference in test performance on both datasets, across all models, even with extreme quantization (INT2) and any of the reported block sizes. With our proposed method there is a further improvement in memory consumption compared with EXACT by about 15% (97% relative to the FP32 baseline) and about 8% (97% relative to the FP32 baseline) for the Arxiv and Flickr datasets, respectively, when using the largest block size (G/R=64). We also gain a small speedup in training time per epoch: 5% for Arxiv and 2.5% for Flickr, compared to EXACT. Using the clipped normal distribution in Eq. (7) to model the activation maps is better than using the uniform distribution.
This is captured using the Jensen-Shannon divergence measure, reported in Table 2, where we observe that, for all layers in both datasets, the distance to the observed distribution is smaller for the clipped normal distribution. Variance minimization, when performed for EXACT (reported as INT2+VM in Table 1), does not show any further improvement or degradation in performance.

## 5 Discussion and Conclusion

Based on the experiments and the results in Table 1, we notice that block-wise quantization of activation maps on top of random projection and stochastic rounding yields a further reduction in memory consumption and a small speedup in runtime. Increasing the block size does not hamper the test performance but progressively yields further reduction in memory consumption. Activation maps in GNNs are not uniformly distributed; we demonstrated this using empirical visualizations in Figure 2. We quantified this using the clipped normal distribution, which had a smaller Jensen-Shannon divergence to the observed distribution, as seen in Table 2. This implies that using a uniform quantization bin width could be sub-optimal. We presented an extension to stochastic rounding that accounts for variable bin widths in Eq. (8). The influence on the quantization variance, computed using Eq. (9) and visualized in Figure 3, clearly demonstrates the value of using non-uniform bin widths.

**Limitations**: The compute overhead, even with the proposed modifications, does not fully recover the slowdown compared to the baseline, i.e., when using FP32. While modelling the activation maps with the clipped normal distribution improves the variance estimation, minimizing the variance of stochastic rounding under this distribution does not yield a noticeable improvement in test performance. This could simply be due to the fact that the overall drop in performance even with block-wise quantization is small, and there is no room for further improvement. The software implementations of the quantization and variance minimization strategies are not highly optimized, and there is room for further fine-tuning.

**Conclusion**: Improving the efficiency of training GNNs is an important step towards reducing their resource consumption. We have demonstrated that combining block-wise quantization with extreme compression (down to INT2) can be achieved with a small drop in performance. The reduction in memory consumption from the baseline (FP32) is \(>95\)%, and compared to EXACT we gain a further \(>15\)% in memory reduction and up to \(\approx 5\)% training runtime speedup per epoch. We have empirically shown that the activation maps for common GNN architectures do not follow a uniform distribution. We proposed an improved modelling of these activation maps using a variation of the normal distribution (the clipped normal) and showed that tighter variance minimization of the quantization noise was achievable.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **Quant.** & **G/R** & **Accuracy \(\uparrow\)** & **S** (e/s) \(\uparrow\) & **M**(MB) \(\downarrow\) \\ \hline \multirow{8}{*}{Arxiv} & FP32[14] & – & 71.95 \(\pm\) 0.16 & 13.07 & 786.22 \\ & INT2[15] & – & 71.16 \(\pm\) 0.21 & 10.03 & 30.47 \\ \cline{2-6} & & 2 & 71.16 \(\pm\) 0.34 & 10.23 & 27.89 \\ & & 4 & 71.77 \(\pm\) 0.22 & 10.46 & 26.60 \\ & & 8 & 71.21 \(\pm\) 0.39 & 10.54 & 25.95 \\ & INT2 & 16 & 71.01 \(\pm\) 0.19 & 10.55 & 25.72 \\ & & 32 & 70.87 \(\pm\) 0.29 & 10.54 & 25.60 \\ & & 64 & 71.28 \(\pm\) 0.25 & 10.54 & 25.56 \\ \cline{2-6} & INT2+VM & – & 71.20 \(\pm\) 0.19 & 9.16 & 30.47 \\ \hline \hline \multirow{8}{*}{Flickr} & FP32[14] & – & 51.81 \(\pm\) 0.16 & 17.95 & 546.92 \\ & INT2[15] & – & 51.65 \(\pm\) 0.23 & 11.26 & 20.39 \\ \cline{2-6} & & 2 & 51.58 \(\pm\) 0.24 & 11.38 & 19.54 \\ \cline{1-1} & & 4 & 51.57 \(\pm\) 0.29 & 11.50 & 19.12 \\ \cline{1-1} & & 8 & 51.60 \(\pm\) 0.25 & 11.55 & 18.95 \\ \cline{1-1} & INT2 & 16 & 51.65 \(\pm\) 0.21 & 11.54 & 18.86 \\ \cline{1-1} & & 32 & 51.61 \(\pm\) 0.19 & 11.53 & 18.84 \\ \cline{1-1} & & 64 & 51.72 \(\pm\) 0.24 & 11.53 & 18.84 \\ \cline{1-1} \cline{2-6} & INT2+VM & – & 51.71 \(\pm\) 0.18 & 10.78 & 20.39 \\ \hline \hline \end{tabular} \end{table} Table 1: _Performance of block-wise quantization with \(D/R=8\), different quantization precision (FP32, INT2), block size (G), and with variance minimization (VM). We report the following metrics on the Arxiv [21] and Flickr [22] datasets: accuracy (%), speed (S) reported in epochs/second, and memory (M) consumption in MB. Standard deviations of test accuracy are computed over 10 runs._

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Dataset** & **Layer** & **R** & \(\mathcal{U}\) & \(\mathcal{CN}_{\left[1/D\right]}\) \\ \hline \multirow{2}{*}{Arxiv} & layer 1 & 16 & 0.0495 & 0.0213 \\ & layer 2 & 16 & 0.0446 & 0.0016 \\ & layer 3 & 16 & 0.0451 & 0.0041 \\ \hline \multirow{2}{*}{Flickr} & layer 1 & 63 & 0.0674 & 0.0017 \\ & layer 2 & 32 & 0.0504 & 0.0033 \\ \hline \hline \end{tabular} \end{table} Table 2: _Jensen-Shannon divergence measure for the Uniform and Clipped Normal distributions compared to the normalized activations \(\bar{\mathbf{h}}\) at each layer of the GNN for the Arxiv and Flickr datasets. In all cases we see a smaller divergence between the clipped normal and the empirical distribution of activation maps._

**Acknowledgements**: The authors acknowledge funding received under the European Union's Horizon Europe Research and Innovation programme under grant agreements No. 101070284 and No. 101070408.
2310.05950
Quantization of Neural Network Equalizers in Optical Fiber Transmission Experiments
The quantization of neural networks for the mitigation of the nonlinear and components' distortions in dual-polarization optical fiber transmission is studied. Two low-complexity neural network equalizers are applied in three 16-QAM 34.4 GBaud transmission experiments with different representative fibers. A number of post-training quantization and quantization-aware training algorithms are compared for casting the weights and activations of the neural network in few bits, combined with the uniform, additive power-of-two, and companding quantization. For quantization in the large bit-width regime of $\geq 5$ bits, the quantization-aware training with the straight-through estimation incurs a Q-factor penalty of less than 0.5 dB compared to the unquantized neural network. For quantization in the low bit-width regime, an algorithm dubbed companding successive alpha-blending quantization is suggested. This method compensates for the quantization error aggressively by successive grouping and retraining of the parameters, as well as an incremental transition from the floating-point representations to the quantized values within each group. The activations can be quantized at 8 bits and the weights on average at 1.75 bits, with a penalty of $\leq 0.5$~dB. If the activations are quantized at 6 bits, the weights can be quantized at 3.75 bits with minimal penalty. The computational complexity and required storage of the neural networks are drastically reduced, typically by over 90\%. The results indicate that low-complexity neural networks can mitigate nonlinearities in optical fiber transmission.
Jamal Darweesh, Nelson Costa, Antonio Napoli, Bernhard Spinnler, Yves Jaouen, Mansoor Yousefi
2023-09-09T12:24:55Z
http://arxiv.org/abs/2310.05950v1
# Quantization of Neural Network Equalizers in Optical Fiber Transmission Experiments

###### Abstract

The quantization of neural networks for the mitigation of the nonlinear and components' distortions in dual-polarization optical fiber transmission is studied. Two low-complexity neural network equalizers are applied in three 16-QAM 34.4 GBaud transmission experiments with different representative fibers. A number of post-training quantization and quantization-aware training algorithms are compared for casting the weights and activations of the neural network in few bits, combined with the uniform, additive power-of-two, and companding quantization. For quantization in the large bit-width regime of \(\geq 5\) bits, the quantization-aware training with the straight-through estimation incurs a Q-factor penalty of less than 0.5 dB compared to the unquantized neural network. For quantization in the low bit-width regime, an algorithm dubbed companding successive alpha-blending quantization is suggested. This method compensates for the quantization error aggressively by successive grouping and retraining of the parameters, as well as an incremental transition from the floating-point representations to the quantized values within each group. The activations can be quantized at 8 bits and the weights on average at 1.75 bits, with a penalty of \(\leq 0.5\) dB. If the activations are quantized at 6 bits, the weights can be quantized at 3.75 bits with minimal penalty. The computational complexity and required storage of the neural networks are drastically reduced, typically by over 90%. The results indicate that low-complexity neural networks can mitigate nonlinearities in optical fiber transmission.

Neural network equalization, nonlinearity mitigation, optical fiber communication, quantization.

## I Introduction

The compensation of the channel impairments is essential to spectrally-efficient optical fiber transmission. The advent of the coherent receivers, combined with the advances in the digital signal processing (DSP) algorithms, has allowed for the mitigation of the fiber transmission effects in the electrical domain [1]. However, real-time energy-efficient DSP is challenging in high-speed communication. The linear transmission effects, such as the chromatic dispersion (CD) and polarization mode dispersion (PMD), can be compensated using well-established DSP algorithms [2]. The distortions arising from the fiber Kerr nonlinearity can in principle be partially compensated using digital back propagation (DBP) based on the split-step Fourier method (SSFM). DBP can be computationally complex in long-haul transmission with a large number of steps in distance [3]. The neural networks (NNs) provide an alternative approach to nonlinearity mitigation with a flexible performance-complexity trade-off [4, 5, 6, 7, 8]; see Section III-A. To implement NNs for real-time equalization, the model should be carefully optimized for the hardware. The number of bits required to represent the NN can be minimized by quantization [9] and data compression, using techniques such as pruning, weight sharing and clustering [10]. There is a significant literature showing that these methods often drastically reduce the storage requirement of the NN and its energy consumption, which is often dominated by the communication cost of fetching words from the memory to the arithmetic units [10, 11, 12]. How the NNs can be quantized with as few bits as possible, while maintaining a given Q-factor, is an important problem.
This paper is dedicated to the quantization of the NNs for nonlinearity mitigation, in order to reduce the computational complexity, memory footprint, latency and energy consumption of the DSP. There are generally two approaches to NN quantization. In post-training quantization (PTQ), the model is trained in 32- or 16-bit floating-point (FP) precision, and the resulting parameters are then quantized with a smaller number of bits [9, 13]. This approach is simple; however, quantization introduces a perturbation to the model parameters, incurring a performance penalty. As a consequence, PTQ is usually applied in applications that do not require quantization below 8 bits. In quantization-aware training (QAT), quantization is integrated into the training algorithm, and the quantization error is partly compensated [11, 14, 15, 16, 12]. However, the optimization of the loss function with gradient-based methods is not directly possible, because the quantizer has a derivative that is zero almost everywhere. In the straight-through estimator (STE), the quantizer is assumed to be the identity function, potentially saturated in an input interval, in the backpropagation algorithm used for computing the gradient of the loss function [17, 18]. QAT is used in applications requiring low complexity in inference; however, it can be more complex in training than PTQ, and needs parameter tuning and experimentation. With the exception of a few papers reviewed in Section IV-F, the quantization of the NNs for nonlinearity mitigation has not been much explored. In this paper, we study the quantization of the weights and activations of a small convolutional fully-connected (Conv-FC) and a bidirectional long short-term memory fully-connected (BiLSTM-FC) equalizer, applied to three 16-QAM 34.4 GBaud dual-polarization fiber transmission experiments. The experiments are based on a 9x50 km true-wave classic (TWC) fiber link, a 9x110 km standard single-mode fiber (SMF) link, and a 17x70 km large effective area fiber (LEAF) link. We compare the Q-factor penalty, computational complexity, and memory requirement of a number of PTQ and QAT-STE algorithms, as a function of the launch power and the quantization rate \(b\). The uniform, additive power-of-two (APoT), companding, fixed- and mixed-precision quantization are compared. It is shown that these algorithms, if optimized, work well in the large bit-width regime of \(b\geq 5\). However, they do not achieve sufficiently small distortions in our experiments in the low bit-width regime with \(b<5\), where the quantization error needs to be aggressively mitigated. For this case, we propose a companding successive alpha-blending (SAB) quantization algorithm that mitigates the quantization error by successive grouping and retraining of the parameters, combined with an incremental transition from the floating-point representations to the quantized values within each group. The algorithm also accounts for the probability distribution of the parameters. It is shown that the quantization of the activations impacts the Q-factor much more than that of the weights. The companding SAB algorithm is studied with and without the quantization of activations. The results indicate that, for quantization in the large bit-width regime, QAT-STE incurs a Q-factor penalty of less than 0.5 dB relative to the unquantized NN, while reducing the storage and computational complexity of the NN typically by over 90%.
This is obtained with the uniform, companding or APoT variant of QAT-STE, depending on the transmission experiment. If the activations are quantized at 8 bits, the weights can be quantized with the companding SAB algorithm at the average rate of 1.75 bits, paving the way to binary NN equalizers. The quantization of the activations at 6 bits and of the weights at \(3.75\) bits results in a reduction of the computational complexity by \(95\%\) and of the memory footprint by \(88\%\), with a Q-factor penalty of 0.2 dB. Overall, the results suggest that nearly-binary NNs mitigate nonlinearities in optical fiber transmission. This paper is structured as follows. In Section II, we describe the optical fiber transmission experiments. In Section III, we review the use of the NNs for the fiber nonlinearity mitigation, and in Section IV the quantization of the NNs. Finally, we compare the Q-factor penalty and the gains of quantization for several algorithms in Section V, and draw conclusions in Section VI.

## II Dual Polarization Transmission Experiment Setup

Fig. 1 shows the block diagram of the transmission experiments considered in this paper. Three experiments are performed with different representative fibers, described below.

Fig. 1: The block-diagram of the transmission experiments.

#### II-1 Transmitter

At the transmitter (TX), a pseudo-random bit sequence (PRBS) is generated for each polarization \(p\in\{x,y\}\), and mapped to a sequence of symbols \(\mathbf{s}_{p}\) taking values in a 16-QAM constellation according to the Gray mapping. The two complex-valued sequences \(\mathbf{s}_{x}\) and \(\mathbf{s}_{y}\) are converted to four real-valued sequences, and passed to an arbitrary wave generator (AWG) that modulates them to two QAM signals using a root raised cosine pulse shape with a roll-off factor of 0.1 at the rate of \(34.4\) GBaud. The AWG includes digital-to-analog converters (DACs) at \(88\) Gsamples/s. The outputs of the AWG are four continuous-time electrical signals \(I_{x}\), \(Q_{x}\), \(I_{y}\) and \(Q_{y}\) corresponding to the in-phase (I) and quadrature (Q) components of the signals of the \(x\) and \(y\) polarization. The electrical signals are converted to optical signals and polarization-multiplexed with a dual-pol IQ Mach-Zehnder modulator (MZM), driven by an external cavity laser (ECL) at wavelength \(1.55~{}\mu m\) with line-width 100 KHz. The output of the IQ-modulator is amplified by an erbium-doped fiber amplifier (EDFA), filtered by an optical band-pass filter (OBPF) and launched into the fiber link. The laser introduces phase noise, modeled by a Wiener process with the Lorentzian power spectral density [19, Chap. 3.5].

#### II-2 Fiber-optic Link

The channel is a straight-line optical fiber link in a lab, with \(N_{sp}\) spans of length \(L_{sp}\). An EDFA with a 5 dB noise figure (NF) is placed at the end of each span to compensate for the fiber loss. The experiments are performed with the TWC fiber, SMF and LEAF, with the parameters given in Table I.

\begin{table} \begin{tabular}{l c c c} & TWC fiber & SMF & LEAF \\ \cline{2-4} \(L_{sp}~{}\mathrm{km}\) & 50 & 110 & 70 \\ \(N_{sp}\) & 9 & 9 & 17 \\ \(\alpha~{}\mathrm{dB/km}\) & 0.21 & 0.22 & 0.19 \\ \(D~{}\mathrm{ps/(nm\cdot km)}\) & 5.5 & 18 & 4 \\ \(\gamma~{}(\mathrm{W\cdot km})^{-1}\) & 2.8 & 1.4 & 2.1 \\ PMD \(\tau~{}\mathrm{ps/\sqrt{km}}\) & 0.02 & 0.08 & 0.04 \\ NF \(\mathrm{dB}\) & 5 & 5 & 5 \\ \end{tabular} \end{table} TABLE I: OPTICAL LINK PARAMETERS
_TWC Fiber Experiment_: The first experiment is with a short-haul TWC fiber link with 9 spans of 50 km. The TWC fiber was a brand of nonzero dispersion shifted fiber (NZ-DSF) made by Lucent, with a low CD coefficient of \(D=5.5~{}\mathrm{ps}/(\mathrm{nm}\cdot\mathrm{km})\) at 1550 nm wavelength and a high nonlinearity parameter of \(\gamma=2.8~{}(\mathrm{Watt}\cdot\mathrm{km})^{-1}\). Thus, even though the link is short with 450 km length, the channel operates in the nonlinear regime at high powers. The link parameters, including the fiber loss coefficient \(\alpha\) and the PMD value \(\tau\), can be found in Table I.

_SMF Experiment_: The second experiment is based on a long-haul 9x110 km standard single-mode fiber link, with parameters in Table I.

_LEAF Experiment_: LEAF is also a brand of NZ-DSF, made by Corning, similar to the TWC fiber but with a smaller nonlinearity coefficient due to the larger cross-section effective area. This experiment uses a 17x70 km link described in Table I.

#### II-3 Receiver

At the receiver, the optical signal is polarization demultiplexed, and converted to four electrical signals using an integrated coherent receiver driven by a local oscillator (LO). Next, the continuous-time electrical signals are converted to discrete-time signals by an oscilloscope, which includes analog-to-digital converters (ADCs) that sample the signals at the rate of \(50\) Gsamples/s and quantize them with an effective number of bits of around \(5\). The digital signals are up-sampled at 2 samples/symbol, and equalized in the DSP chain shown in Fig. 1. The equalization is performed by the conventional dual-polarization linear DSP [1], followed by an NN. The linear DSP consists of a cascade of the frequency-domain CD compensation, multiple-input multiple-output (MIMO) equalization via the radius directed equalizer to compensate for PMD [1, Sec. VII], [20], polarization separation, carrier frequency offset (CFO) correction, and the carrier-phase estimation (CPE) using the two-stage algorithm of Pfau _et al._ to compensate for the phase offset [21]. The linearly-equalized symbols are denoted by \(\tilde{\mathbf{s}}_{p}\). Once the linear DSP is applied, the symbols are still subject to the residual CD, dual-polarization nonlinearities, and the distortions introduced by the components at TX and RX. Define the residual channel memory \(M\) to be the maximum effective length of the auto-correlation function of \(\tilde{\mathbf{s}}_{p}\) over \(p\in\{x,y\}\). The outputs of the CPE block \(\tilde{\mathbf{s}}_{p}\) are passed to a low-complexity NN, which mitigates the remaining distortions, and outputs \(\hat{\mathbf{s}}_{p}\). The architecture of the NN depends on the experiment, and will be explained in Section III-B.

## III Neural Networks for Nonlinearity Mitigation

### _Prior Work_

The NN equalizers in optical fiber communication can be classified into two categories. In _model-based equalizers_, the architecture is based on a parameterization of the channel model. An example is learned DBP (LDBP) [8], where the NN is a parameterization of the SSFM, which is often used to simulate the fiber channel. The dual-polarization LDBP is a cascade of layers, each consisting of two complex-valued symmetric filters to compensate for the CD, two real-valued asymmetric filters for the differential group delays, a unitary matrix for the polarization rotation, and a Kerr activation function for the mitigation of the fiber nonlinearity. It is shown that LDBP outperforms DBP [8].
On the other hand, in _model-agnostic equalizers_, the architecture is independent of the channel model [4, 5, 6, 7]. The model-agnostic schemes do not require the channel state information, such as the fiber parameters. Here, the NNs can be placed at the end of the conventional linear DSP for nonlinearity mitigation [22], or after the ADCs for compensating the linear and nonlinear distortions (thereby replacing the linear DSP) [23, 24]. A number of NN architectures have been proposed for nonlinearity mitigation. Fully-connected (FC) or dense NNs with 2 or 3 layers, a few hundred neurons per layer, and tanh activation were studied in [25, 26]. Overfitting and complexity become problems when the models get bigger. Convolutional NNs can model linear time-invariant (LTI) systems with a finite impulse response. The application of convolutional networks for compensating the nonlinear distortions is investigated in [27], showing that one-dimensional convolution can well compensate the CD. Receivers based on bidirectional recurrent and long short-term memory (LSTM) networks are shown to perform well in fiber-optic equalization [24]. Compared to the convolutional and dense networks, BiLSTM networks better model LTI systems with an infinite impulse response, such as the response of the CD. A comparison of the different architectures in optical transmission in [25] shows that dense and convolutional-LSTM models perform well at low and high complexities, respectively. An effect that particularly impacts the performance of the NN is PMD. In most papers, random variations of the polarization-dependent effects during the transmission have not been carefully studied. The polarization effects are sometimes neglected [22], or assumed to be static during the transmission [8]. In such simulated systems, the dual-polarization NN receivers are subject to a performance degradation compared to real-life experiments [25].

### _Two NN Models Considered in This Paper_

In this Section, we describe the two NN equalizers used in this paper. The NN is placed at the end of the linear DSP shown in Fig. 1. In consequence, since the PMD is compensated by the MIMO equalizer, the NN is static and trained offline. Due to the constraints of practical systems, low-complexity architectures are considered. A Conv-FC network is applied in the TWC fiber and SMF links, and a BiLSTM-FC network in the LEAF link. The BiLSTM-FC model has more parameters and performs better; however, the smaller Conv-FC model is sufficient in short-haul links.

#### III-B1 Conv-FC Model

The four sequences of linearly-equalized symbols \(\Re(\tilde{\mathbf{s}}_{x})\), \(\Im(\tilde{\mathbf{s}}_{x})\), \(\Re(\tilde{\mathbf{s}}_{y})\) and \(\Im(\tilde{\mathbf{s}}_{y})\) are passed to the NN. We consider a many-to-one architecture, where the NN equalizes one complex symbol per polarization given \(n_{i}\) input symbols. The inputs of the network are four vectors, each containing a window of \(n_{i}=M+1\) consecutive elements from each of the four input sequences, where \(M\) is the residual channel memory defined in Section II-3. The network outputs a vector of \(n_{o}=4\) real numbers, corresponding to the real and imaginary parts of the symbols of the two polarizations after full equalization. The size of the concatenated input of the NN is thus \(\bar{n}_{i}=4(M+1)\). The NN operates in a sliding-window fashion: as each of its input vectors is shifted forward by one element, \(4\) real numbers are produced.
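The sliding-window construction of the NN input can be sketched as follows (our own illustrative code, following the description above rather than a published implementation):

```python
import numpy as np

def make_windows(s_x, s_y, M):
    """Build many-to-one NN inputs from linearly equalized symbols.

    s_x, s_y: complex arrays of length T (one per polarization). Returns an
    array of shape (T - M, 4*(M + 1)): for each output symbol, the real and
    imaginary parts of M + 1 consecutive symbols from each polarization.
    """
    T, n_i = len(s_x), M + 1
    rows = []
    for t in range(T - M):            # slide the window one symbol at a time
        w_x, w_y = s_x[t:t + n_i], s_y[t:t + n_i]
        rows.append(np.concatenate([w_x.real, w_x.imag, w_y.real, w_y.imag]))
    return np.asarray(rows)

rng = np.random.default_rng(0)
s_x = rng.normal(size=100) + 1j * rng.normal(size=100)
s_y = rng.normal(size=100) + 1j * rng.normal(size=100)
X = make_windows(s_x, s_y, M=40)
print(X.shape)                        # (60, 164): 4*(M + 1) = 164 features
```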
The Conv-FC model is a cascade of a complex-valued convolutional layer, a FC hidden layer, and a FC output layer. The first layer implements the discrete convolution of \(\tilde{\mathbf{s}}_{p}\), \(p\in\{x,y\}\), with a kernel \(\mathbf{h}\in\mathbb{C}^{K}\), primarily to compensate the residual CD, where \(\mathbb{C}\) denotes the complex numbers and \(K\) is the number of kernel taps. The two complex convolutions \(\tilde{\mathbf{s}}_{p}\ast\mathbf{h}\) are implemented using eight real convolutions in terms of the two filters \(\Re(\mathbf{h})\) and \(\Im(\mathbf{h})\), according to

\[\tilde{\mathbf{s}}_{p}\ast\mathbf{h} =\Re(\tilde{\mathbf{s}}_{p})\ast\Re(\mathbf{h})-\Im(\tilde{\mathbf{s}}_{p})\ast\Im(\mathbf{h})+j\Big\{\Re(\tilde{\mathbf{s}}_{p})\ast\Im(\mathbf{h})+\Im(\tilde{\mathbf{s}}_{p})\ast\Re(\mathbf{h})\Big\}. \tag{1}\]

The first layer thus contains eight parallel real-valued one-dimensional convolutions, with stride one, "same padding," and no activation. There are a total of \(2K\) trainable real filter taps, typically far fewer than in generic convolutional layers used in the literature with large feature maps. The eight real convolutions are combined according to (1) or Fig. 2(a), obtaining \(\Re(\tilde{\mathbf{s}}_{x}\ast\mathbf{h})\), \(\Im(\tilde{\mathbf{s}}_{x}\ast\mathbf{h})\), \(\Re(\tilde{\mathbf{s}}_{y}\ast\mathbf{h})\) and \(\Im(\tilde{\mathbf{s}}_{y}\ast\mathbf{h})\), which are then concatenated. The resulting vector is fed to a FC hidden layer with \(n_{h}\) neurons, and tangent hyperbolic (tanh) activation. The joint processing of the two polarizations in the dense layer is necessary in order to compensate the nonlinear interactions between the two polarizations during the propagation. Finally, there is an output FC layer with \(2\) neurons for each complex-valued polarization symbol, and no activation.

The computational complexity \(\mathcal{C}\) of the unquantized NNs can be measured by the number of real multiplications per polarization, considering that the cost of the additions and the computation of the activation is comparatively negligible. For the Conv-FC model

\[\mathcal{C}_{\text{Conv-FC}}=4n_{i}K+2n_{i}n_{h}+\frac{n_{h}n_{o}}{2}. \tag{2}\]

#### III-B2 BiLSTM-FC Model

The second model is a cascade of a concatenator, a BiLSTM unit and a FC output layer, shown in Fig. 2(b). At each time step \(t\) in the recurrent model, \(n_{i}=M+1\) linearly-equalized complex symbols are taken from each polarization. The resulting vectors \(\Re(\tilde{\mathbf{s}}_{x}^{(t)})\), \(\Im(\tilde{\mathbf{s}}_{x}^{(t)})\), \(\Re(\tilde{\mathbf{s}}_{y}^{(t)})\), \(\Im(\tilde{\mathbf{s}}_{y}^{(t)})\), are concatenated in a vector of length \(\bar{n}_{i}=4(M+1)\) and fed to a many-to-many BiLSTM unit. Each LSTM cell in this unit has an input of length \(2(M+1)\) corresponding to the one-sided memory, \(n_{h}\) hidden state neurons, the recurrent activation \(\tanh\), and the gate activation sigmoid. The output of the BiLSTM unit is a vector of length \(2n_{h}\), which is fed to a FC output layer with no activation and \(n_{o}=4\) neurons1. The computational complexity of the BiLSTM-FC model is

Footnote 1: Equivalently, the input and output of the BiLSTM unit may be expressed as arrays of shape \((4,M+1)\), without concatenation.

\[\mathcal{C}_{\text{BiLSTM-FC}}=n_{h}\Big(4n_{h}+16n_{i}+3+n_{o}\Big),\]

real multiplications per polarization. The many-to-many variants of the above models are straightforward.
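Equation (1) maps directly to four real convolutions per polarization; a minimal NumPy sketch (the helper name is an assumption):

```python
import numpy as np

def complex_conv(s, h):
    """s * h via real convolutions, as in (1), with 'same' output length."""
    conv = lambda a, b: np.convolve(a, b, mode="same")
    real = conv(s.real, h.real) - conv(s.imag, h.imag)
    imag = conv(s.real, h.imag) + conv(s.imag, h.real)
    return real + 1j * imag  # applied once per polarization p in {x, y}
```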
In this case, there are \(n_{o}=4(M+1)\) neurons at the output, so that all \(M+1\) complex symbols are equalized in one shot; thus \(n_{i}=M+L\), \(\bar{n}_{i}=n_{o}=4(M+L)\). The many-to-many versions are less complex per symbol and parallelizable, but also less performant.

The performance of the receiver is measured in terms of

\[\text{Q-factor}=10\log_{10}\!\Big(2\big[\operatorname{erfc}^{-1}(2\,\text{BER})\big]^{2}\Big)\quad\text{dB},\]

where BER is the bit error rate, and \(\operatorname{erfc}(.)\) is the complementary error function. The Q-factor of the NNs is compared with that of DBP and linear equalization. The DBP replaces the CD compensation unit at the beginning of the DSP chain, and is applied with a single step per span, and 2 samples per symbol. This comparison is done to evaluate the effectiveness of the NN in jointly mitigating the residual CD and Kerr nonlinearity.

Fig. 3(a) shows the Q-factor gain of the unquantized Conv-FC model over the linear DSP in the TWC fiber experiment (\(K=M=40\)) [28]. The results demonstrate that the NN offers a Q-factor enhancement of \(0.5\) dB at -2 dBm, and \(2.3\) dB at 2 dBm. The raw data before the linear DSP were not available to add the DBP curve to Fig. 3(a). The TWC fiber link is short. On the other hand, the nonlinearities are stronger in the SMF experiment than in the TWC fiber experiment, due to the longer link. For the SMF experiment, Fig. 3(b) shows that the Conv-FC model provides a performance similar to that of DBP with 1 sample/symbol (SpS). The improvement results from the mitigation of the dual-polarization nonlinearities, as well as the equipment's distortions. The BiLSTM-based receiver in the LEAF experiment (with \(n_{h}=100\), \(M=40\)) also gives a performance comparable to the DBP, as shown in Fig. 3(c).

In general, the implementation of the NN can be computationally expensive. In order to reduce the complexity, in the next section, we quantize the NNs, casting the weights and activations into low-precision numbers.

## IV Quantization of the Neural Networks

The parameters (weights and biases) of the NN, the activations, and the input data are initially real numbers represented in 32-bit floating-point (FP32) or 64-bit formats, described, e.g., in the IEEE 754 standard. The implementation of the NNs in memory- or computationally-restricted environments requires that these numbers be represented by fewer bits and in a different format, e.g., INT8. Define the quantization grid \(\mathcal{W}\) as a finite set of numbers

\[\mathcal{W}=\big\{\hat{w}_{0},\hat{w}_{1},\cdots,\hat{w}_{N}\big\},\]

where \(\hat{w}_{i}\in\mathbb{R}\) are the quantization symbols. A continuous random variable \(w\in\mathbb{R}\) drawn from a probability distribution \(p(w)\) is quantized to \(\hat{w}=Q(w)\), where \(Q:\mathbb{R}\mapsto\mathcal{W}\) is the quantization rule or quantizer

\[Q(w)=\sum_{i=0}^{N}\hat{w}_{i}\;\mathbbm{1}_{I_{i}}(w).\]

Here, \(I_{i}=[\Delta_{i},\Delta_{i+1})\), where \(\{\Delta_{i}\}_{i=0}^{N+1}\) are the quantization thresholds, and \(\mathbbm{1}\) is the indicator function, i.e., \(\mathbbm{1}_{I_{i}}(w)=1\) if \(w\in I_{i}\), and \(\mathbbm{1}_{I_{i}}(w)=0\) otherwise. The intervals \(\{I_{i}\}_{i=0}^{N}\) are the quantization cells, partitioning the real line. The quantization rate of \(\mathcal{W}\) is \(b=\log_{2}(N+1)\) bits, assuming that the \(\hat{w}_{i}\) are equally likely. The hardware support is best when \(b\) is a power of two, commonly \(b=8\).
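Reading \(\operatorname{erfc}^{-2}\) as the squared inverse complementary error function, the Q-factor computation is:

```python
import numpy as np
from scipy.special import erfcinv

def q_factor_db(ber):
    """Q-factor in dB from the measured bit error rate."""
    return 10.0 * np.log10(2.0 * erfcinv(2.0 * ber) ** 2)

print(round(q_factor_db(1e-3), 1))  # ~9.8 dB
```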
The quality of reproduction is measured by a distortion, which is often the mean-square error (MSE) \(D(b)=\mathbb{E}(w-\hat{w})^{2}\), where the expectation \(\mathbb{E}\) is with respect to the probability distribution of \(w\) and \(Q\) (if it includes random elements). For a fixed rate \(b\), the symbols \(\hat{w}_{i}\) and \(\Delta_{i}\) (or \(Q(.)\)) are found to minimize the distortion \(D(b)\).

Fig. 3: Q-factor of the linear DSP, DBP with 1 SpS, and unquantized NN equalizers in (a) TWC fiber, (b) SMF, and (c) LEAF experiments.

### _Quantization Schemes_

There is a significant literature on quantization algorithms in deep learning. However, most of these algorithms have been developed for over-parameterized NNs with a large number of parameters. These networks have many degrees of freedom to compensate for the quantization error. It has been experimentally demonstrated that over-parameterized NNs are rather resilient to quantization, at least down to 8 bits. In contrast, the NNs used for fiber equalization are small, typically with a few hundred or thousand weights, smaller than the models deployed even in smartphones and Internet of Things applications [29]. Below, we review a number of quantization algorithms suitable for the NN equalizers.

#### IV-A1 Uniform Quantization

In uniform quantization, the quantization symbols \(\hat{w}_{i}\) are uniformly placed. Given a step size (or scale factor) \(s\) and a zero point \(z\), the uniform quantization rule is

\[\hat{w}=s(\bar{w}-z),\]

where \(\bar{w}\in\bar{\mathcal{W}}=\{0,1,\cdots,N\}\). The integer representation of \(w\) is

\[\bar{w}=\text{clip}\left(\left\lfloor\frac{w}{s}\right\rceil+z;0,N\right),\]

where \(\text{clip}(w,a,b)\), \(a\leq b\), is the clipping function

\[\text{clip}(w,a,b)=\begin{cases}a,&w<a,\\ w,&a\leq w<b,\\ b,&w\geq b,\end{cases}\]

in which \(\left\lfloor x\right\rceil\) is the rounding function, mapping \(x\) to an integer in \(\bar{\mathcal{W}}\), e.g., to the nearest symbol. The quantization grid is thus

\[\mathcal{W}_{u}(s,z,b)=\Big\{-zs,-sz+s,\cdots,-sz+sN\Big\}. \tag{3}\]

The scale factor \(s\) and zero point \(z\) can be determined by considering an interval \([\alpha,\beta]\) that contains most of the weights. Then, \(s=(\beta-\alpha)/N\) and \(z=\left\lfloor-\alpha/s\right\rfloor\). The interval \([\alpha,\beta]\) is called the clipping (or clamping, or dynamic) range, and is selected by a procedure called calibration, which may require a calibration dataset (a small set of unlabeled examples). The parameters of the uniform quantizer are thus \(\alpha\), \(\beta\), \(b\) and the choice of the rounding function. For a fixed rate \(b\), the remaining parameters can be obtained by minimizing the MSE. However, it is simpler, and sometimes about equally good (especially when \(b\geq 4\)), to set the clipping range to be an interval centered at the mean \(\mu\) of \(w\), with a width proportional to the standard deviation \(\sigma\) of \(w\)

\[\alpha=\mu-\kappa\sigma,\quad\beta=\mu+\kappa\sigma,\]

where, e.g., \(\kappa=4\). An even simpler method of calibration is setting \(\alpha\) and \(\beta\) to the minimum and maximum values of the weights \(w\), respectively [12]. The min-max choice can be sensitive to outlier parameter values, unnecessarily increasing the step size and rounding error. In symmetric quantization, \(z=0\). Thus, \(w=0\) is mapped to \(\bar{w}=0\) and \(\hat{w}=0\).
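A minimal sketch of the asymmetric uniform quantizer with min-max calibration (the function name is illustrative; this is not the paper's implementation):

```python
import numpy as np

def uniform_quantize(w, b, alpha=None, beta=None):
    """Quantize w at b bits on the grid (3), defaulting to min-max
    calibration of the clipping range [alpha, beta]."""
    alpha = w.min() if alpha is None else alpha
    beta = w.max() if beta is None else beta
    N = 2 ** b - 1
    s = (beta - alpha) / N                      # step size
    z = np.round(-alpha / s)                    # zero point
    w_bar = np.clip(np.round(w / s) + z, 0, N)  # integer representation
    return s * (w_bar - z)                      # de-quantized weights
```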
The grid of the uniform unsigned symmetric quantization is thus \(\mathcal{W}_{\text{uns}}(s)=\big\{0,s,\cdots,sN\big\}\). If the distribution of \(w\) is symmetric around the origin, symmetric signed quantization is applied, where

\[\mathcal{W}_{\text{ss}}(s,b)=\Big\{ks:\quad k=-(N+1)/2,\cdots,(N-1)/2\Big\}. \tag{4}\]

The common practice is to cast the weights with the signed symmetric quantization. However, the output of the rectified linear unit and sigmoid activation is not symmetric. Moreover, the empirical distribution of the weights can sometimes be asymmetric. For instance, Fig. 4 shows the weight distribution of a NN used in Section V. It can be seen that the distribution has a negative mean. In these cases, asymmetric, or unsigned symmetric, quantization is used.

The quantization is said to be static if \(\alpha\) and \(\beta\) are known and hard-coded a priori in hardware. The same values are used in training and inference, and for any input. In contrast, in dynamic-range quantization, \(\alpha\) and \(\beta\) are computed in real-time for each batch of the inputs to the NN. Since the activations depend on the input, their clipping range is best determined dynamically. This approach requires real-time computation of the statistics of the activations, bringing about an overhead in computational and implementation complexity, and memory.

The computation composed of the addition and multiplication of the numbers in \(\mathcal{W}_{u}\) can be performed with integer arithmetic, with the scale factor and zero point applied in FP32 at the end. In what follows, the notation UN-\(b\) is used to indicate uniform quantization of the weights and activations at \(b\) bits (with a similar notation for other quantizers).

#### IV-A2 Additive Power-of-two Quantization

In non-uniform quantization, the quantization symbols are not uniformly placed. The hardware support for these schemes is generally limited, due to, e.g., the requirements of iterative clustering (e.g., via \(k\)-means) [30]. Thus, the majority of studies adopt uniform quantization. On the other hand, the empirical probability distribution of the weights is usually nearly bell-shaped [31]; see Fig. 4. Thus, logarithmic quantization [32, 33, 34] could provide a lower rate for a given distortion compared to the uniform quantization. In the power-of-two (PoT) quantization, the quantization symbols are powers of two [32]

\[\mathcal{W}_{\text{pot}}(s,r,b)=\pm s\Big\{0,2^{0},2^{-r},\cdots,2^{-r(2^{b-1}-1)}\Big\},\]

where \(r\in\mathbb{N}\) controls the width of the distribution of the symbols, and \(s\in\mathbb{R}\) is the scale factor. The scale factor is stored in FP32, but is applied after the multiply-accumulate operations, and can be trainable. The PoT simplifies the computation by performing the multiplications via bit shifts. However, PoT is not flexible in the above form, and the symbols are sharply concentrated around zero. Further, increasing the bit-width merely sub-divides the smallest quantization cell around zero, without generating new symbols in other cells. The APoT introduces additional adjustable parameters that can be used to control the distribution of the symbols, introducing new symbols generally everywhere [33].

Fig. 4: a) Probability density function (PDF) of the weights is bell-shaped with non-zero mean, suggesting that uniform quantization is not optimal. b) APoT-4, illustrating that the quantization symbols are irregularly placed; c) CP-3.
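The PoT grid and a nearest-symbol quantizer can be sketched as follows (an illustrative helper, with the grid exactly as defined above):

```python
import numpy as np

def pot_quantize(w, s=1.0, r=1, b=4):
    """Nearest-symbol quantization on W_pot(s, r, b)."""
    mags = np.concatenate(([0.0], 2.0 ** (-r * np.arange(2 ** (b - 1)))))
    grid = s * np.unique(np.concatenate((-mags, mags)))
    idx = np.argmin(np.abs(w[..., None] - grid), axis=-1)
    return grid[idx]
```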
The APoT grid is the sum of \(n\) PoT grids with a base bit-width \(b_{0}\) and different ranges, for a given \(n\in\mathbb{N}\) and \(b_{0}\). The bit-width is thus \(b=nb_{0}\). Choosing \(b_{0}\) such that \(n=b/b_{0}\) is an integer, the quantization grid of APoT is

\[\mathcal{W}_{\text{apot}}(s,r,b,b_{0},\gamma)=\pm\,s\sum_{i=0}^{n-1}2^{-i}\big|\mathcal{W}_{\text{pot}}\big|(1,n,b_{0}+1)+\gamma,\]

where \(s\) and \(\gamma\) are trainable scale and shift factors in FP32, the absolute value in the set \(|\mathcal{W}|\) is defined per component, and \(\Sigma\) is the Minkowski set sum. It can be verified that \(|\mathcal{W}_{\text{apot}}|=2^{b}\). The shift parameter \(\gamma\) allows restricting the quantized weights to unsigned numbers. As with the PoT, the main advantage of the APoT representation is that it is multiplier-free, and thus considerably less complex than the uniform quantization. The PoT and APoT give rise to more efficient quantizers such as in DeepShift, where the bit-shifts or exponents are learned directly via STE [34]. The use of APoT in fiber-optic equalization is discussed in [28].

### _Companding Quantization_

In companding (CP) quantization, an appropriate nonlinear transformation is applied to the weights so that the distribution of the weights becomes closer to a uniform distribution, and a uniform quantizer can be applied afterwards [35]. A companding quantizer is composed of a compressor, a uniform quantizer, and an expander. The \(\mu\)-law is an example of a compressor

\[w_{c}=F(w)=\operatorname{sign}(w)\frac{\log(1+\mu|w|)}{\log(1+\mu)}, \tag{5}\]

where \(\mu>0\) is the compression factor. Its inverse

\[w=\mu^{-1}\operatorname{sign}(w_{c})\Big((1+\mu)^{|w_{c}|}-1\Big), \tag{6}\]

is the expander. Companding quantization has been widely used in data compression and digital communication. It is shown that the logarithmic companding quantization can cast the weights and biases of NN image classifiers at 2 bits [36], and outperforms the uniform and APoT quantization in the same task [37]. However, the use of companding quantization in NN equalizers has not been investigated.

### _Mixed-precision Quantization_

The majority of the quantization schemes consider fixed-precision quantization, where a global bit-width is predefined. In mixed-precision quantization, different groups of weights or activations are generally quantized at different rates [38]. The groups could be defined by layers, channels, feature maps, clusters, etc. One approach to determine the bit-width of each group is based on the sensitivity of the model using the Hessian matrix of the loss function [39]. If the Hessian matrix has a large norm on average over a particular group, a larger bit-width is assigned to that group. The output (and sometimes input) layer is often quantized at high precision, e.g., at 16 bits, as it directly influences the prediction. The biases impart a small overhead and are usually not quantized. In our work, the quantization rates are determined from the sensitivity of the loss function. The hardware support for mixed-precision quantization is limited compared to fixed-precision quantization.

### _PTQ and QAT_

#### IV-D1 Post-training Quantization

In PTQ, training is performed in full or half precision. The input tensor, activation outputs, and the weights are then quantized at fewer bits and used in inference [40].
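Equations (5) and (6) in code; \(\mu=255\) is an assumed compression factor, and the weights are assumed pre-normalized to \([-1,1]\):

```python
import numpy as np

def mu_compress(w, mu=255.0):
    """mu-law compressor (5)."""
    return np.sign(w) * np.log1p(mu * np.abs(w)) / np.log1p(mu)

def mu_expand(wc, mu=255.0):
    """mu-law expander (6), the inverse of (5)."""
    return np.sign(wc) * ((1.0 + mu) ** np.abs(wc) - 1.0) / mu

w = np.linspace(-1, 1, 11)
assert np.allclose(mu_expand(mu_compress(w)), w)  # exact round trip
```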
In practice, the quantized values are stored in integer or fixed-point representations in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), and processed in arithmetic logic units with bit-wise operations. However, general-purpose processors include FP processing units as well, where the numbers are stored and processed in FP formats. Thus, to simulate PTQ in general-purpose hardware, the quantizer \(Q(.)\) is introduced in the computational graph of the NN after each weight, bias and activation stored in FP.

PTQ has little overhead, and is useful in applications where the calibration data are not available. However, quantization below 4-8 bits can cause a significant performance degradation [41]. Several approaches have been proposed to recover the accuracy in the low bit-width regimes. Effort has been dedicated to finding a smaller clipping range from the distribution of the weights, layer- and channel-wise mixed precision, and the correction of the statistical bias in the quantized parameters. Moreover, rounding a real number to the nearest quantization symbol may not be optimal [42]. In adaptive rounding, a real number is rounded to the left or right symbol based on a Bernoulli distribution, or deterministic optimization. It has been shown that PTQ-4 with adaptive rounding incurs a small loss in accuracy in some applications [43].

#### IV-D2 Quantization-aware Training

In QAT, quantization is co-developed with the training algorithm. This usually enhances the prediction accuracy of the model by accounting for the quantization error during the training. QAT is simulated by placing the quantizer function after each weight and activation in the computational graph of the NN. The output of the quantizer is a piece-wise constant function of its input. This function is not differentiable at the points of discontinuity, and has a derivative that is zero everywhere else, i.e., \(Q^{\prime}(w)=\partial\hat{w}/\partial w=0\). Thus, the gradient of the loss function with respect to the weights is zero almost everywhere, and learning with the gradient-based methods is not directly possible. There are a number of approaches to address the zero gradient problem, such as approximating \(Q^{\prime}(w)\) with a non-zero function, as in STE. QAT usually achieves higher prediction accuracy than PTQ when quantizing at a low number of bits, at the cost of increased overhead. On the other hand, if the approximation technique is not carefully chosen, QAT may perform even worse than PTQ [44]. Training can be performed from scratch, or from a pre-trained model, followed by QAT fine-tuning the result.

_The Straight-through Estimator._ In STE, the derivative of the quantizer is approximated with the identity function, potentially truncated on the clipping range \([\alpha,\beta]\)

\[Q^{\prime}(w)\approx\begin{cases}0,&w<\alpha,\\ 1,&\alpha\leq w<\beta,\\ 0,&w\geq\beta.\end{cases} \tag{7}\]

During the NN training, \(Q(.)\) is used in the forward pass. In the backward pass, \(Q^{\prime}(.)\) in (7) is applied, which is then used in the chain rule to back-propagate the errors in training [41, 18]. Moreover, the weights remain in FP in the backward pass, to recover the accuracy lost in the forward pass. Even though (7) is not a good approximation of the zero derivative, STE works surprisingly well in some models when \(b\geq 5\) [44]. The gradient is usually sensitive to quantization, even more so than the activations.
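In TensorFlow (the framework used later in Section V), the STE rule (7) can be sketched with a custom gradient; the symmetric uniform forward pass here is an assumption for illustration:

```python
import tensorflow as tf

def ste_quantizer(s, alpha, beta):
    @tf.custom_gradient
    def quantize(w):
        # forward pass: uniform quantization on the clipping range
        w_hat = s * tf.round(tf.clip_by_value(w, alpha, beta) / s)
        def grad(dy):
            # backward pass (7): identity inside [alpha, beta), zero outside
            inside = tf.logical_and(w >= alpha, w < beta)
            return dy * tf.cast(inside, dy.dtype)
        return w_hat, grad
    return quantize
```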
It is thus either not quantized, or quantized with at least 6 bits [45]. There are non-STE approaches as well. For instance, an appropriate regularization term can be added to the loss function that penalizes the weights that take on values outside the quantization set. Another approach is the alpha-blending (AB) quantization.

_Alpha-blending Quantization._ The AB quantization addresses the problem of the quantizer's zero derivative by replacing each weight with a convex combination of the full-precision weight \(w\in\mathbb{R}\) and its quantized version \(\hat{w}=Q(w)\) [46]:

\[\tilde{w}=(1-\alpha_{j})w+\alpha_{j}\hat{w}, \tag{8}\]

where the coefficient \(\alpha_{j}\) is changed from \(0\) to \(1\) with the epoch index \(j\in\{k_{1},\cdots,k_{2}\}\) according to

\[\alpha_{j}=\begin{cases}0,&j\leq k_{1},\\ \left(\frac{j-k_{1}}{k_{2}-k_{1}}\right)^{3},&k_{1}<j\leq k_{2},\\ 1,&j\geq k_{2},\end{cases} \tag{9}\]

for some \(k_{1}\leq k_{2}\). This approach enables a smooth transition from the unquantized weights corresponding to \(\alpha_{k_{1}}=0\) to the quantized ones corresponding to \(\alpha_{k_{2}}=1\).

The AB quantization is integrated into the computational graph of the NN by placing the sub-graph shown in Fig. 5 at the end of each scalar weight. Considering \(Q^{\prime}(.)=0\), we have \(\partial\tilde{w}/\partial w=1-\alpha\), and \(\partial L(\tilde{w})/\partial w=L^{\prime}\left(\tilde{w}\right)(1-\alpha)\neq 0\). Thus, even though the quantizer has zero derivative, the derivative of the loss function with respect to \(w\) is non-zero, and the weights are updated in the gradient-based training. The activations can still be quantized with STE. The AB QAT starts with \(j=k_{1}\), and trains with one or more epochs. Then, \(j\) is incremented to \(k_{1}+1\), and the training continues, initialized with the weights obtained at \(j=k_{1}\). It has been shown that the AB quantization provides an improvement over QAT-STE in different scenarios [46].

Given a base quantizer \(Q(.)\), the AB quantization may be viewed as using the quantizer \(Q_{ab}(w)=(1-\alpha_{j})w+\alpha_{j}Q(w)\). As shown in Fig. 5(b), when \(Q(.)\) is the uniform quantizer, \(Q_{ab}(.)\) is a piece-wise linear approximation to \(Q(.)\), with slope \(1-\alpha_{j}\). As \(\alpha_{j}\to 1\), the approximation error tends to zero, and \(w\) is quantized.

#### IV-D3 Successive Post-training Quantization

Successive PTQ (SPTQ) may be viewed as a combination of PTQ and QAT [47], and is particularly effective for quantizing small NNs such as those encountered in optical fiber communication, as discussed in [48]. The idea is to compensate for the quantization error in the training. The parameters of the NN are partitioned into several sets and sequentially quantized based on a PTQ scheme. This approach is simple and tends to perform well in practice, with a good PTQ scheme and hyper-parameter optimization. At stage \(i\), the set of weights in the layer \(\ell\) denoted by \(\mathcal{W}_{i}^{(\ell)}\) is partitioned into two subsets \(\mathcal{W}_{i,1}^{(\ell)}\) and \(\mathcal{W}_{i,2}^{(\ell)}\) corresponding to the quantized and unquantized weights, respectively, i.e.,

\[\mathcal{W}_{i}^{(\ell)}=\Big\{\mathcal{W}_{i,1}^{(\ell)},\mathcal{W}_{i,2}^{(\ell)}\Big\},\quad\mathcal{W}_{i,1}^{(\ell)}\cap\mathcal{W}_{i,2}^{(\ell)}=\emptyset. \tag{10}\]

The model is first trained over the weights in \(\mathcal{W}_{i}^{(\ell)}\) in FP32.
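The schedule (9) and the blended weight (8) in code:

```python
def alpha_schedule(j, k1, k2):
    """Blending coefficient (9): 0 up to epoch k1, cubic ramp, 1 from k2 on."""
    if j <= k1:
        return 0.0
    if j >= k2:
        return 1.0
    return ((j - k1) / (k2 - k1)) ** 3

def ab_weight(w, w_hat, j, k1, k2):
    """Convex combination (8) used in the forward pass."""
    a = alpha_schedule(j, k1, k2)
    return (1.0 - a) * w + a * w_hat
```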
Then, the resulting weights in \(\mathcal{W}_{i,1}^{(\ell)}\) are quantized under a suitable PTQ scheme. Next, the weights in \(\mathcal{W}_{i,1}^{(\ell)}\) are fixed, and the model is retrained by minimizing the loss function with respect to the weights in \(\mathcal{W}_{i,2}^{(\ell)}\), starting from the previously trained values. The second group is retrained in order to compensate for the quantization error arising from the first group, and make up for the loss in accuracy. In stage \(i+1\), the above steps are repeated upon the substitution \(\mathcal{W}_{i+1}^{(\ell)}\overset{\Delta}{=}\mathcal{W}_{i,2}^{(\ell)}\). The weight partitioning, group-wise quantization, and retraining are repeated until the network is fully quantized. The total number of partition sets is denoted by \(N_{p}\).

In another version of this algorithm, the partitioning for all stages is set initially. That is to say, the weights of layer \(\ell\) are partitioned into \(N_{p}\) groups \(\{\mathcal{W}_{i}^{(\ell)}\}_{i=1}^{N_{p}}\) and successively quantized, such that at each stage the weights of the previous groups are quantized and fixed, and those of the remaining groups are retrained.

The hyper-parameters of the SPTQ are the choice of the quantizer function in PTQ and the partitioning scheme. There are several options for the partitioning, such as random grouping, neuron grouping and local clustering. It has been demonstrated that models trained with SPTQ provide classification accuracies comparable to their baseline counterparts trained in 32-bit, with fewer bits [47]. Fig. 9(c) shows that SPTQ improves the Q-factor considerably, by around 0.8 dB.

Fig. 5: (a) Sub-graph introduced after each weight \(w\) in the computational graph of the NN in the AB quantization; (b) the AB quantizer, when the base quantizer is the uniform one.

#### IV-D4 Successive Alpha-blending Quantization

In this section, we propose SAB, a quantization algorithm suitable for the conversion of a small full-precision model to a low-precision one, in the low bit-width regime of 1-3 bits, depending on whether or not the activations are quantized. SAB is an iterative algorithm with several stages, blending SPTQ and AB quantization in the particular manner described below. At stage \(i\), the weights are partitioned into the sets \(\mathcal{W}_{i,1}^{(\ell)}\) and \(\mathcal{W}_{i,2}^{(\ell)}\) as in (10). First, each weight \(w\in\mathcal{W}_{i,1}^{(\ell)}\) is updated according to the AB relation (8) as \(\tilde{w}=(1-\alpha_{j})w+\alpha_{j}\hat{w}\), where \(\alpha_{j}\) is given by (9) at \(j=k_{1}\). Then, the weights \(\tilde{w}\in\mathcal{W}_{i,1}^{(\ell)}\) are fixed, while those in \(\mathcal{W}_{i,2}^{(\ell)}\) are retrained from their previous values. Next, \(\alpha_{j}\) is incremented to the value in the sequence (9) at \(j=k_{1}+1\). The process of partitioning, AB updating, and retraining is repeated until \(\alpha_{j}=1\) is reached at \(j=k_{2}\), where all weights in \(\mathcal{W}_{i,1}^{(\ell)}\) are fully quantized. The algorithm then advances to the next stage \(i+1\), by partitioning \(\mathcal{W}_{i,2}^{(\ell)}\) into two complementary sets. The last partition is trained with the AB algorithm instead of being fixed, to address the problem of the performance drop in the last set that was encountered in SPTQ. The quantization process is summarized in Algorithm 1. Note that SAB is not directly a combination of SPTQ and AB: the successive retraining strategy is distributed within the AB algorithm with respect to \(\alpha_{j}\).
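A schematic of the SAB loop follows; `quantize` and `retrain` are hypothetical placeholders for the base PTQ rule and one retraining epoch over the still-trainable weights, and the control flow is a simplified reading of Algorithm 1, not the paper's implementation.

```python
import numpy as np

def sab(weights, partitions, quantize, retrain, k1, k2):
    """Successive alpha-blending over a fixed list of index-set partitions."""
    fp = weights.copy()                                   # full-precision copy
    for i, part in enumerate(partitions):
        rest = partitions[i + 1:]
        trainable = np.concatenate(rest) if rest else part  # last partition is
        for j in range(k1 + 1, k2 + 1):                     # AB-trained itself
            a = ((j - k1) / (k2 - k1)) ** 3                 # schedule (9)
            weights[part] = (1 - a) * fp[part] + a * quantize(fp[part])  # (8)
            weights = retrain(weights, trainable)           # compensate error
    return weights
```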
Therefore, SAB quantization improves upon SPTQ and AB quantization, since each partition is not quantized in one shot, but rather is incrementally quantized by increasing \(\alpha_{j}\). This allows the trained set \(\mathcal{W}_{i,2}^{(\ell)}\) to adapt to the changes in \(\mathcal{W}_{i,1}^{(\ell)}\). Instead of fixing the last partition as in the SPTQ scheme, the AB algorithm is applied to train the last partition and compensate for the quantization error. This modification leads to a reduction in the performance drop that occurred in the last partition.

In uniform SAB quantization, the grid is (3). On the other hand, in the companding SAB quantization, first the compressor (5) is applied so that the probability distribution of the weights is approximately uniform on the clipping range. Then, all weights are quantized with the uniform SAB algorithm, and passed through the expander (6).

### _Computational Complexity of the Quantized NNs_

In this Section, we present expressions for the computational complexity of the two NN equalizers described in Section III-B after quantization, in order to quantify the gains of quantization in memory and computation. The complexity is measured in the number of elementary bit-wise operations (BO) [49]. The reduction in memory is simply \(1-b/32\), where \(b\) is the quantization rate.

#### IV-E1 FC Layers

Consider a FC layer with \(n_{i}\) inputs each with bit-width \(b_{i}\), \(n_{o}\) neurons at the output, and a per-weight bit-width of \(b_{w}\). There are \(n_{o}\) inner products, each between vectors of length \(n_{i}\). The main step is the BO to compute an inner product, which is bounded in Appendix A. From (16),

\[\text{BO}_{\text{FC}}\leq n_{o}\Big(n_{i}b_{i}b_{w}+(n_{i}-1)(b_{i}+b_{w}+\log_{2}(n_{i}))\Big). \tag{11}\]

#### IV-E2 Convolution Layers

Consider a one-dimensional convolutional layer, with an input of length \(n_{i}\) and per-element bit-width \(b_{i}\), and a filter with length \(n_{w}\) and per-element bit-width \(b_{w}\). It is assumed that the input is zero-padded at the boundaries so that the number of output features equals the length of the input vector \(n_{i}\) ("same padding"). This layer requires \(n_{i}\) inner products between vectors of length \(n_{w}\). The BO is thus

\[\text{BO}_{\text{Conv}}\leq n_{i}\Big(n_{w}b_{i}b_{w}+(n_{w}-1)(b_{i}+b_{w}+\log_{2}(n_{w}))\Big). \tag{12}\]

#### IV-E3 LSTM Cells

Consider the LSTM cell described in [24, Eq. 13], with an input of length \(n_{i}\) and a hidden state of size \(n_{h}\) at each time step. The cell has four augmented dense matrices of dimension \(n_{h}\times(n_{i}+n_{h}+1)\), in the three gates and the cell activation state. Suppose that the activations, and thus the hidden state, are quantized at \(b_{a}\) bits. The bit-width of the Cartesian product of the quantization grids is upper bounded by the sum of the individual bit-widths. Thus, from (11)

\[\text{BO}_{\text{LSTM}}\leq 4n_{h}\Big\{(n_{h}+n_{i}+1)\times(b_{i}+b_{a})b_{w}+(n_{h}+n_{i})\times\big(b_{w}+b_{i}+b_{a}+\log_{2}(n_{h}+n_{i}+1)\big)\Big\}. \tag{13}\]

Clearly, \(\text{BO}_{\text{BiLSTM}}=2\,\text{BO}_{\text{LSTM}}\). Substituting \(b_{1}=b_{2}\) in (16), the storage and BO of the NN scale, respectively, linearly and quadratically with the bit-width. Therefore, quantization from FP32 at 4 bits reduces the memory by 8X, and the complexity by 64X. The BO of the Conv-FC and BiLSTM-FC models are obtained by combining (11), (12) and (13).
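Bounds (11)-(13) as helper functions, useful for reproducing the reported complexity reductions:

```python
from math import log2

def bo_fc(n_i, n_o, b_i, b_w):
    """Bound (11) for a FC layer."""
    return n_o * (n_i * b_i * b_w + (n_i - 1) * (b_i + b_w + log2(n_i)))

def bo_conv(n_i, n_w, b_i, b_w):
    """Bound (12) for a 1-D convolution with 'same' padding."""
    return n_i * (n_w * b_i * b_w + (n_w - 1) * (b_i + b_w + log2(n_w)))

def bo_lstm(n_i, n_h, b_i, b_w, b_a):
    """Bound (13) for one LSTM cell; double it for a BiLSTM unit."""
    m = n_h + n_i + 1
    return 4 * n_h * (m * (b_i + b_a) * b_w
                      + (n_h + n_i) * (b_w + b_i + b_a + log2(m)))
```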
### _Quantization of NNs in Optical Fiber Communication_

The uniform and PoT PTQ (representing fixed-point numbers) have been naturally applied when demonstrating the NN equalizers in FPGAs [50, 51] or ASICs [52], usually at 8 bits. PTQ has been applied to the NNs mitigating the nonlinear distortions in optical fiber [28, 48, 53, 54, 55, 56], and the inter-symbol interference (ISI) in passive optical networks (PONs) with intensity-modulation direct-detection (IMDD) [51, 57, 58] and in general dispersive additive white Gaussian noise (AWGN) channels [59]. In particular, the authors of [51] show that an MLP-based many-to-many equalizer outperforms the maximum likelihood sequence estimator in mitigating the ISI in an IMDD 30 km PON link. They implement the NN in FPGA, and determine the impact of the weight resolution on the BER at 2-8 bits. In [54], a multi-layer perceptron equalizing a 1000 km SMF link is pruned and quantized with uniform PTQ-8, and the reduction in BO is reported. The authors of [52] implement the time-domain LDBP in ASIC, where the filter coefficients, as well as the signal in each step of the SSFM, are quantized.

The APoT is considered in [28, 56, 60]. Fixed-point optimization-based PoT quantization is applied to an MLP equalizing an AWGN channel in [61]. The weights are quantized at 4 bits and the activations at 14 bits. The authors of [60] represent the weights using a 2-term APoT expression, for multiplier-free NN nonlinearity mitigation in a 22x80 km SMF link. However, the quantization rate is not constrained. The mixed-precision quantization is applied to a perturbation-based equalizer (similar to the Volterra equalizer) in an 18x100 km SMF link in [53], in which the perturbation coefficients larger than a threshold are quantized at a large bit-width, and the rest at one bit. Here, the quantization also simplifies the sum expressing the equalizer, combining the identical or similar terms [62]. In our prior work, we compared PTQ, QAT-STE, APoT [28] and SPTQ [48] for the quantization of the NN equalizers. However, the best rate obtained there is 5 bits. The authors of [56] study PTQ, QAT-STE and APoT, and demonstrate that the NN weights can be stored with a range of bit-widths and penalties, using pruning, quantization and compression.

The papers cited above mostly implement uniform, PoT, or APoT PTQ. In our experiments, these algorithms, and their combinations with the QAT-STE, did not achieve sufficiently small distortions in the low bit-width regime. The penalty due to the quantization depends on the size of the model. The current paper addresses the quantization error using the SAB algorithm, which lowers the rate markedly to 1-3 bits. Moreover, the activations are usually not quantized in the literature. In contrast, in this paper both the weights and activations are quantized. Importantly, it will be shown in Section V that the quantization of activations impacts the performance considerably. Finally, quantization has been applied in the literature usually as an ingredient in a broader study, or combined with pruning and compression techniques. This paper provides a detailed analysis of the performance and complexity trade-off of different quantization algorithms, and goes beyond the previously reported results [28, 48] in technical advances, application, and discussions.

## V Demonstration of the Quantization Gains in Experiments

In this Section, we determine the performance and complexity trade-off of several quantization algorithms.
We compute the Q-factor penalty as a function of the launch power and quantization rate, as well as the reduction in the memory and computational complexity, in the three transmission experiments described in Section II.

### _TWC Fiber Experiment_

We consider the TWC fiber dual-polarization transmission experiment in Section II-2a, with the Conv-FC model in Section III-B1.

\begin{table}
\begin{tabular}{c c c c c}
\hline\hline
\multicolumn{2}{c}{Bit-width} & & \multicolumn{2}{c}{Q-factor} \\
Convolutional & Dense & Quantizer & \(-2\) dBm & 2 dBm \\
\hline
32 & 32 & Unquantized & 8.6 & 7.54 \\
6 & 8 & Uniform & 8.1 & 6.34 \\
6 & 8 & APoT & 8.4 & 7.4 \\
\hline\hline
\end{tabular}
\end{table}
TABLE II: UNIFORM VS NON-UNIFORM QUANTIZATION IN TWC FIBER EXPERIMENT

Fig. 6: Q-factor of the NN equalizer in the TWC fiber experiment. a) PTQ; b) QAT-STE; (c) SPTQ.

The hyper-parameters of this model are the size of the convolutional filters \(K\) and the number of hidden neurons \(n_{h}\). The filters' length is set to be the residual channel memory, \(K=M\). This is estimated to be \(M=40\) complex symbols per polarization, through the auto-correlation function of the received symbols after CPE, and performance evaluation. The minimum number of hidden units is \(n_{h}=100\), below which the performance rapidly drops.

The NN is trained with 600,000 symbols from a 16-QAM constellation. A test set of 100,000 symbols is used to assess the performance of the NN. Each dataset is measured at a given power, during which the BER may fluctuate in time due to environmental changes. The symbols on the boundary of the data frame are eliminated to remove the effects of anomalies. The NN at each power is trained and tested with independent datasets of randomly chosen symbols at the same power. The NN is implemented in Python using the TensorFlow library. The loss function is the mean-squared error, and the learning algorithm is the Adam optimizer with a learning rate of 0.001. Libraries such as TensorFlow provide functions for basic PTQ and QAT-STE, but only at 8 bits or more. For quantization at an arbitrary bit-width \(b<8\), the algorithms have to be directly programmed. For benchmark models in deep learning, low bit-width implementations exist.

For quantization above 5 bits, PTQ and QAT-STE are applied, combined with APoT quantization, at fixed or mixed precision. In fixed-precision PTQ, the weights and activations of all layers are quantized at 6, 7 or 8 bits. In mixed-precision PTQ, 6 bits are assigned to the weights and activations of the convolutional layer, whereas the dense layer is given 8 bits due to its more significant impact on the performance. The Q-factor is almost unaffected at 8 bits. Fig. 6(a) demonstrates that fixed-precision PTQ-6 incurs a penalty of 0.7 dB at -2 dBm compared to the unquantized NN, and 1.9 dB at 2 dBm. This comes with an \(81\%\) reduction in memory usage and a \(95\%\) reduction in computational complexity.

The Q-factor improves using QAT-STE, as depicted in Fig. 6(b). Here, the weights are initialized with random values, then trained and quantized at 5, 6, and 7 bits, and the activations at 6 bits. In this case, the drop is reduced to 0.5 dB at -2 dBm, and 1.2 dB at 2 dBm. As the transmission power is increased, the penalty due to the quantization increases.

The distribution of the weights of the dense layer is bell-shaped, as shown in Fig. 4. In consequence, assigning more quantization symbols around the mean is a reasonable strategy.
The APoT quantization delivers a good performance, with a Q-factor penalty of less than \(0.2\) dB at \(-2\) and \(2\) dBm, as seen in Table II.

The uniform SPTQ is applied by assigning 5 bits to the weights and activations of the dense layer. The convolutional layer is given 8 bits, but this layer has few weights, and little impact on the complexity. Fig. 6(c) shows that SPTQ at 5 bits leads to a 0.2 dB Q-factor drop at -2 dBm, and 0.5 dB at 2 dBm. It can be seen that SPTQ outperforms the more complex QAT-STE by 2 bits at the same power [48]. Fig. 9(c) shows that increasing the partition size can notably enhance the Q-factor. Similar conclusions are drawn for SPTQ-4, as seen in Table III.

For quantization below 5 bits, we apply SAB. In a first study, we consider fixed-precision quantization, where the weights and activations are quantized at 4 bits successively over 4 partitions. The results in Table IV indicate that SAB outperforms SPTQ and AB, with a performance drop of 0.5 dB near the optimal power. In contrast, SPTQ and AB quantization resulted in a 1.2 dB drop in performance. In a second study, we apply mixed-precision SAB, giving more bits to the last partition. We consider a partition of size 4, with the weights and activations in the first three partition sets quantized at 4 bits, and in the last set at 6 bits, averaging to 4.5 bits. The results are shown in Fig. 9(a), indicating a Q-factor drop of 0.17 dB at -2 dBm and 0.24 dB at 2 dBm. This comes with an \(86\%\) reduction in memory usage, and \(94\%\) in computational complexity.

### _SMF Experiment_

We consider the SMF experiment described in Section II-2b, with the Conv-FC model. The NN parameters and the quantization algorithms are similar to those in the TWC fiber experiment. For quantization above 5 bits, PTQ-6 led to a Q-factor drop of 0.3 dB at 1 dBm, and 0.4 dB at 4 dBm, as shown in Fig. 7(a). For QAT-STE-6, as shown in Fig. 7(b), the drop is 0.1 dB at 1 dBm, and 0.2 dB at 4 dBm.

Fig. 7: Q-factor of the NN equalizer in the SMF experiment. a) PTQ; b) QAT-STE; c) uniform and companding PTQ.

For quantization below 5 bits, first the companding PTQ is applied. Fig. 7(c) shows that this quantizer outperforms the uniform quantization at 4 bits by about 1 dB, due to the non-uniform distribution of the weights of the dense layer. It is found that, while the APoT works well in the large bit-width regime \(b\geq 6\) (as in the TWC fiber experiment), it is uncompetitive at low bit-widths.

Next, we apply SAB quantization, with a partition of size 4, where the weights in the first 3 sets are quantized at 3 bits, and in the last set at 6 bits, with an average rate of 3.75 bits. The activations for all partition sets are quantized at 3 bits. The uniform and companding versions are both studied. Fig. 9(b) shows the results. Uniform SAB quantization results in a Q-factor drop of 0.3 dB at 1 dBm, and 0.6 dB at 4 dBm. This quantizer offers a reduction in memory usage and computational complexity by \(88\%\) and \(94\%\), respectively. Applying the companding SAB quantization, the Q-factor drop is reduced to 0.2 dB at 1 dBm.

### _LEAF Experiment_

The NN in this experiment is the BiLSTM-FC equalizer, described in Section III-B2. There are \(n_{h}=100\) hidden neurons, and the input size is \(\bar{n}_{i}=4(M+1)\), \(M=40\). This model is found to be prone to the quantization error, because small errors can be amplified by the internal activations, and accumulate over long input temporal sequences.
Thus, we quantize the weights and biases of the forget, input and output gates, as well as the activations at the output of the cell. However, the internal activations remain in full precision. Fig. 8(a) shows that PTQ-6 incurs a Q-factor penalty of \(0.9\) dB at 1 dBm, and \(1.2\) dB at \(-1\) dBm, while lowering the computational complexity by \(79\%\) and the memory usage by \(81\%\). QAT-STE significantly improves the Q-factor, as shown in Fig. 8(b). At 6 bits, the drop is \(0.1\) dB at 1 dBm, and \(0.4\) dB at \(-1\) dBm. At 5 bits, the penalty is \(0.3\) dB at both \(1\) dBm and \(-1\) dBm, with an \(82\%\) reduction in computational complexity and \(84\%\) in memory usage. Fig. 8(c) shows that the AB quantizer at 4 and 5 bits outperforms PTQ and QAT-STE. Specifically, the Q-factor drop is only \(0.2\) dB at -1 dBm, and \(0.15\) dB at 1 dBm.

Fig. 8: Q-factor of the NN equalizer in the LEAF experiment. a) PTQ; b) QAT-STE; (c) AB quantization.

\begin{table}
\begin{tabular}{c|c c c c c c c c}
\hline
\(N_{p}\) & \multicolumn{8}{c}{Q-factor} \\
 & \(\mathcal{W}_{1}\) & \(\mathcal{W}_{2}\) & \(\mathcal{W}_{3}\) & \(\mathcal{W}_{4}\) & \(\mathcal{W}_{5}\) & \(\mathcal{W}_{6}\) & \(\mathcal{W}_{7}\) & \(\mathcal{W}_{8}\) \\
\hline
2 & 7.13 & **5.6** & & & & & & \\
4 & 7.5 & 7.33 & 7.33 & **6.3** & & & & \\
8 & 7.56 & 7.5 & 7.4 & 7.33 & 7.33 & 7.33 & **6.6** & \\
\hline
\end{tabular}
\end{table}
TABLE III: Q-FACTOR OF SPTQ-4, TWC FIBER EXPERIMENT

\begin{table}
\begin{tabular}{c|c c}
\hline
Quantization scheme & Bit-width & Q-factor \\
\hline
Unquantized & 32 & 7.5 \\
SPTQ & 4 & 6.3 \\
AB & 4 & 6.3 \\
SAB & 4 & 7.0 \\
\hline
\end{tabular}
\end{table}
TABLE IV: FIXED-PRECISION QUANTIZATION, TWC FIBER EXPERIMENT

### _Quantization of the Weights, but not Activations_

In the previous sections, the weights and activations were both quantized. It can be seen that there is a cut-off bit-width around 5-6 bits, below which the performance of the QAT-STE rapidly drops. Upon investigation, we noticed that the quantization of the activations substantially impacts the Q-factor. The activation functions are nonlinear, and could amplify the quantization error. In this section, we consider quantizing the weights of the NN but not the activations. The bit-width of the activations can still be reduced from 32 to 8 with a negligible performance drop. Therefore, the activations are quantized at 8 bits.

In a first study, we quantize the weights of the Conv-FC model in the SMF experiment, using the fixed-precision SAB algorithm with a partition of size 4. The results are included in Table V, showing that the Q-factor drop at the optimal power is minimal when the dense layer is quantized at as low as 3 bits. In a second study, we apply the mixed-precision SAB quantization with the same parameters. The first three partitions are quantized at 1 bit, and the last one at 4 bits. We obtain a quantization rate of 1.75 bits/weight, with a 0.6 dB degradation in Q-factor, outperforming the state-of-the-art
However, we tested some of these approaches in our experiments, and did not observe notable gains over the linear equalization. Consequently, while extreme quantization has shown success in large models in computer vision, further work is needed to determine if it can be adapted and successfully applied to the small NN equalizers in optical fiber communication. ## VI Conclusions The paper shows that low-complexity quantized NNs can mitigate nonlinearities in optical fiber transmission. The QAT-STE partially mitigates the quantization error during the training, and is effective in the large bit-width regime with \(b>5\) bits. The companding quantization improves the Q-factor of the baseline schemes considerably, especially at low bit-widths. There is a cut-off bit-width of around 5 bits below which the penalty of the QAT-STE rapidly increases. In the low bit-width regime with \(b\leq 5\) bits, companding SAB quantization is the method of choice. There is a considerable performance penalty due to the quantization of activations. The weights of the NN can be quantized at 1.75 bits/parameter with \(\leq 0.5\) dB penalty, if the activations are quantized at \(b\geq 8\) bits. The weights and activations can be quantized at 3.75 bits/parameter, with minimal penalty. The LSTM-based receivers can be prone to the quantization error, due to the error amplification and propagation. Fully binary NN equalizers remain to be studied. ## Appendix A Bit-wise Operations for an Inner Product The cost of computation is measured here by the required bit-wise operations AND \(\wedge\), OR \(\vee\), XOR \(\oplus\), NOT and SHIFT [49]. ### _Addition and Multiplication of Integers_ The sum \(z=x+y\) of the integers \(x\) and \(y\) each with bit-width \(b\) is an integer with bit-width \(b+1\), with carry-over. Below, we show that \(z\) can be computed in \(\zeta b\) BO, where \(\zeta\) depends on the computing algorithm. Denote the binary representation of \(x\), \(y\) and \(z\) with \(x_{1}x_{2}\cdots x_{b}\), \(y_{1}y_{2}\cdots y_{b}\), and \(z_{1}z_{2}\cdots z_{b+1}\), respectively. Let \(c_{1}c_{2}\cdots c_{b+1}\) be the carry-over binary sequence, initialized with \(c_{1}=0\). Then, for \(i\in\{1,2,\cdots,b+1\}\) \[z_{i}=t\oplus c_{i},\quad c_{i+1}=(x_{i}\wedge y_{i})\vee(t\wedge c_{i}), \tag{14}\] where \(t=x_{i}\oplus y_{i}\). Thus, computing \(z\) using (14) takes \(5b\) BO, i.e., \(\zeta=5\). This approach requires one bit storage for \(t\), and \(2b\) bits transmission for memory access. Consider the multiplication of the integers \(\bar{z}=xy\), where \(x\) has bit-width \(b_{1}\) and \(y\) has \(b_{2}\) bits. Clearly, the bit-width of \(\bar{z}\) is \(b_{1}+b_{2}\). The multiplication \(2^{i}y\), \(i\in\mathbb{N}\), can be performed with one BO, by shifting the \(y\) in the binary form \(i\) positions to the left, and zero padding from right. The result is a binary sequence of the maximum length \(b_{1}+b_{2}\), and maximum \(b_{2}\) non-zero bits. Expanding \(x\) as a sum of \(b_{1}\) PoT numbers, \(\bar{z}\) is expressed as the sum of \(b_{1}\) binary sequences, each with up to \(b_{2}\) non-zero elements. Thus, \(\text{BO}=\zeta b_{1}b_{2}\). The value of \(\zeta\) can change with the algorithm, and is immaterial. In this paper, we assume \(\zeta=1\). The computation of \(z\) and \(\bar{z}\) above may not be optimal; hence the BOs are upper bounds. 
### _The Inner Product_ The sum of \(n\) numbers of bit-width \(b\) can be performed in \(\log_{2}n\) steps by pairwise addition (assuming for simplicity that \(n\) is a PoT number). The sum has bit-width \(b+\log_{2}(n)-1\) bits. The BO can be bounded as below, or obtained from [66]. \[\text{BO}_{\text{sum}} \leq b\times\frac{n}{2}+(b+1)\times\frac{n}{4}+\cdots+(b+\log_{2 }(n)-1)\times 1\] \[=\frac{n}{2}\Big{[}b\sum_{k=0}^{\log_{2}(n)-1}2^{-k}+\sum_{k=1}^ {\log_{2}(n)-1}k2^{-k}\Big{]}\] \[\leq\frac{n}{2}\Big{[}(b+\log_{2}n-1)\sum_{k=0}^{\log_{2}(n)-1}2^ {-k}\Big{]}\] \[=(b+\log_{2}n)(n-1). \tag{15}\] Consider the inner product \(y=\mathbf{w}^{T}\mathbf{x}\), where \(\mathbf{w}=(w_{1},w_{2},\cdots,w_{n})\), \(\mathbf{x}=(x_{1},x_{2},\cdots,x_{n})\), and where \(w_{i}\) and \(x_{i}\) have, respectively, bit-width \(b_{1}\) and \(b_{2}\), \(\forall i\). Then, \(y\) has bit-width \(b_{1}+b_{2}+\log_{2}(n)-1\) bits. The products \(\{w_{i}x_{i}\}_{i=1}^{n}\) are calculated in \(nb_{1}b_{2}\) BO. Their sum is computed in BO given in (15) with \(b=b_{1}+b_{2}\). Thus \[\text{BO}_{\text{inner}}\leq nb_{1}b_{2}+(n-1)(b_{1}+b_{2}+\log_{2}n). \tag{16}\]
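Bound (16) as a one-line helper:

```python
from math import log2

def bo_inner(n, b1, b2):
    """Bound (16): BO of an inner product of length n with b1-bit weights
    and b2-bit inputs."""
    return n * b1 * b2 + (n - 1) * (b1 + b2 + log2(n))
```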
2305.19659
Improving Expressivity of Graph Neural Networks using Localization
In this paper, we propose localized versions of Weisfeiler-Leman (WL) algorithms in an effort to both increase the expressivity, as well as decrease the computational overhead. We focus on the specific problem of subgraph counting and give localized versions of $k-$WL for any $k$. We analyze the power of Local $k-$WL and prove that it is more expressive than $k-$WL and at most as expressive as $(k+1)-$WL. We give a characterization of patterns whose count as a subgraph and induced subgraph are invariant if two graphs are Local $k-$WL equivalent. We also introduce two variants of $k-$WL: Layer $k-$WL and recursive $k-$WL. These methods are more time and space efficient than applying $k-$WL on the whole graph. We also propose a fragmentation technique that guarantees the exact count of all induced subgraphs of size at most 4 using just $1-$WL. The same idea can be extended further for larger patterns using $k>1$. We also compare the expressive power of Local $k-$WL with other GNN hierarchies and show that given a bound on the time-complexity, our methods are more expressive than the ones mentioned in Papp and Wattenhofer[2022a].
Anant Kumar, Shrutimoy Das, Shubhajit Roy, Binita Maity, Anirban Dasgupta
2023-05-31T08:46:11Z
http://arxiv.org/abs/2305.19659v3
# Improving Expressivity of Graph Neural Networks using Localization

###### Abstract

In this paper, we propose localized versions of Weisfeiler-Leman (WL) algorithms in an effort to both increase the expressivity, as well as decrease the computational overhead. We focus on the specific problem of subgraph counting and give localized versions of \(k-\)WL for any \(k\). We analyze the power of Local \(k-\)WL and prove that it is more expressive than \(k-\)WL and at most as expressive as \((k+1)-\)WL. We give a characterization of patterns whose count as a subgraph and induced subgraph are invariant if two graphs are Local \(k-\)WL equivalent. We also introduce two variants of \(k-\)WL: Layer \(k-\)WL and recursive \(k-\)WL. These methods are more time and space efficient than applying \(k-\)WL on the whole graph. We also propose a fragmentation technique that guarantees the exact count of all induced subgraphs of size at most 4 using just \(1-\)WL. The same idea can be extended further for larger patterns using \(k>1\). We also compare the expressive power of Local \(k-\)WL with other GNN hierarchies and show that given a bound on the time-complexity, our methods are more expressive than the ones mentioned in Papp and Wattenhofer (2022).

## 1 Introduction

Graphs have been used for representing relational and structural data that appear in a variety of domains, ranging from social network analysis and combinatorial optimization to particle physics and protein folding Dill et al. (2008). In order to learn representations of these data for various downstream learning tasks such as graph classification, graph neural networks (GNNs) have emerged as very effective models. Given the various types of GNN-based models developed in recent years Kipf and Welling (2017); Velickovic et al. (2018); Hamilton et al. (2018); Xu et al. (2019), researchers have attempted to characterize the expressive power of these models. Morris et al. (2019) showed the equivalence between message-passing GNNs and the 1-Weisfeiler-Leman (WL) algorithm Weisfeiler and Leman (1968), which is a well-known combinatorial technique for checking graph isomorphism and similarity. They also showed the equivalence between \(k\)-GNNs and \(k-\)WL. Thus, in this paper, by \(k-\)WL, we will be referring to the equivalent \(k\)-GNN model.

In general, the expressiveness of \(k-\)WL, for any \(k\), is measured by its ability to identify non-isomorphic graphs and subgraphs. In this paper, we are using _Folklore WL_ and state the results accordingly. It has been shown that \((k+1)-\)WL is more expressive than \(k-\)WL. The time and space complexity increase exponentially with \(k\). Thus, it is infeasible to run \(k-\)WL on large graphs. Also, the \(k-\)WL hierarchy is crude, as \(3-\)WL identifies almost all non-isomorphic graphs. Arvind et al. (2017) characterized the graphs that can be identified by \(1-\)WL. So, we are interested in coming up with a GNN hierarchy that can be easily extended without much computational overhead. More specifically, we want to define a GNN hierarchy whose expressiveness lies between \(k-\)WL and \((k+1)-\)WL.

A count of specific patterns is very useful in determining the similarity between two graphs. However, detecting and counting the number of subgraphs is generally NP-complete, as it is a generalization of the clique problem. There have been various works on efficient algorithms for some fixed patterns and restricted host graph classes Bressan (2018); Shervashidze et al. (2009); Bouritsas et al.
(2020); Zhao et al. (2022); Shervashidze et al. (2011); Komarath et al. (2023); Ying et al. (2019); Liu et al. (2019). Arvind et al. (2020) characterize patterns whose count is invariant for \(1-\)WL and \(2-\)WL equivalent graphs. Also, there exists a GNN hierarchy, \(S_{k}\) Papp and Wattenhofer (2022), where each node has an attribute that counts the number of induced subgraphs of size at most \(k\) that the node is participating in. It would be interesting to see whether a scalable GNN hierarchy exists that is comparable to the \(k-\)WL and \(S_{k}\) hierarchies. There also exists a GNN hierarchy, \(M_{k}\) Papp and Wattenhofer (2022); Huang et al. (2023), in which \(k\) vertices are marked or deleted and a GNN model is run on the modified graph.

Various subgraph-based GNN models have been proposed that tackle these questions Zhao et al. (2022); Zhang and Li (2021); Morris et al. (2018); Alvarez-Gonzalez et al. (2022); Maron et al. (2019); You et al. (2021); Frasca et al. (2022); Morris et al. (2021); Bevilacqua et al. (2021); Papp and Wattenhofer (2022); Barcelo et al. (2021); Huang et al. (2023). These GNNs have been effective in capturing more fine-grained patterns and relationships within the graphs, and are scalable for large graphs. Also, it has been shown that subgraph GNNs are more expressive than the traditional ones. Frasca et al. (2022) gave an upper bound on the expressive power of subgraph \(1-\)WL. This leads to the question of coming up with upper and lower bounds for the expressiveness of subgraph \(k-\)WL for arbitrary \(k\).

Consider the task of counting the occurrences of \(H\) as a subgraph or induced subgraph in the host graph \(G\). Given the effectiveness of subgraph \(k-\)WL methods in increasing the expressiveness of GNNs, we want to extend this method to the subgraph itself, and check whether we can fragment the subgraph and learn the counts of the fragments to get the actual count of \(H\) in the graph. We are interested in evaluating its expressiveness in terms of subgraph and induced subgraph counting. Also, if there exists a GNN hierarchical model, we want to compare its expressiveness to pre-existing GNN hierarchies, as done in Papp and Wattenhofer (2022).

### Our Contributions

In this paper, we attempt to answer these questions. The main contributions of our work are listed below:

1. **Characterizing the expressiveness of _Local \(k-\)WL_:** Given a graph \(G=(V,E)\), we extract an \(r\)-hop subgraph rooted at each vertex \(v\in V\), say \(G_{v}^{r}\), and run \(k-\)WL on \(G_{v}^{r}\). While GNNs based on subgraphs have been proposed in recent papers, this is the first work that gives both upper and lower bounds for the expressiveness of Local \(k-\)WL. We are also the first to characterize patterns that can be counted exactly by Local \(k-\)WL.
2. _Layer \(k-\)WL:_ To improve the space and time complexity of Local \(k-\)WL, we propose the Layer \(k-\)WL method. For this method, instead of running \(k-\)WL on \(G_{v}^{r}\), we run it on two consecutive layers of vertices. Here, the \(i\)th layer of vertices refers to the vertices that appear at an \(i\)-hop distance from \(v\) (i.e., the \(i\)th layer of a breadth-first search (BFS)).
3. _Recursive WL:_ Recursive WL is an alternative to \(k-\)WL. In this method, we first run \(1-\)WL to get a partitioning of the vertices. Then we run \((k-1)-\)WL on the vertices of each partition separately. It can be shown that this method is more expressive than \((k-1)-\)WL and less expressive than \(k-\)WL.
Also, since we are running \((k-1)-\)WL on a smaller set of vertices, it has better space and time complexity than running \(k-\)WL.

4. **Fragmentation:** For the counting task, based on the pattern \(H\) to be counted, the subgraph \(G_{v}^{r}\) is further decomposed into simpler patterns for which the exact counts of subpatterns are already known. Thus, we only need to learn easier tasks on the subgraphs of \(G_{v}^{r}\), so a smaller \(k\) suffices to count the patterns. Using this method, we show that all the patterns appearing as induced subgraphs of size four can be counted using just \(1-\)WL. This technique can be useful for counting larger and more complicated patterns: instead of training a GNN for the larger pattern, we can train GNN models for the smaller patterns and then combine their counts to get the count of the larger pattern. Using the fragmentation technique, we use the model learned for predicting \(K_{3}\) (a triangle) to predict the number of \(K_{4}\) in the graph. Similarly, if we have a model that can predict \(K_{n}\) in a graph, then we can use it to predict \(K_{n+1}\). In other words, we can reduce the counting of \(K_{n+1}\) to a triangle counting problem with a minimal increase in the number of parameters.

5. **Comparison with other GNN models:** Papp and Wattenhofer (2022a) present an analysis of four GNN models. We carry out a similar analysis for our models and compare them with the models mentioned in Papp and Wattenhofer (2022a), showing that our models are more expressive than the ones presented in that paper.

Outline of the paper: In Section 2, we introduce some of the terms used throughout the paper. In Section 4, we introduce the localized variants of the \(k-\)WL algorithm and analyze their space and time complexities. In Section 5, we give theorems that characterize the expressiveness of the localized \(k-\)WL variants proposed in our work. In Section 6, we characterize the expressiveness of our methods in terms of subgraph and induced subgraph counting. We also discuss how to count the occurrences of \(H\) in \(G\) using the localized algorithms. We discuss the fragmentation approach in Section 7, followed by a theoretical comparison of GNN models in Section 8. The model architecture, along with the parameters used for the experiments, is explained in Section 9. We report the results of our experiments in Section 10 and conclude the paper with Section 11.

## 2 Preliminaries

We consider a simple graph \(G(V,E)\). For basic definitions of graph theory, we refer the reader to West et al. (2001). The neighbourhood of a vertex \(v\) is the set of all vertices adjacent to it in \(G\) (denoted \(N_{G}(v)\)). The _closed_ neighbourhood of \(v\) is the set of all neighbours together with the vertex \(v\) itself (denoted \(N_{G}[v]\)). A graph in which all vertices have the same degree is called a _regular_ graph. A graph \(H\) is called a _subgraph_ of \(G\) if \(V(H)\subseteq V(G)\) and \(E(H)\subseteq E(G)\). The subgraph induced on \(S\subseteq V(G)\) is the graph with vertex set \(S\) containing all the edges of \(G\) whose endpoints are in \(S\); it is denoted by \(G[S]\). The _induced subgraph_ on the \(r\)-hop neighbourhood around a vertex \(v\) is denoted by \(G_{v}^{r}\). Attributed subgraphs are coloured subgraphs, also referred to as motifs. The maximum distance from a vertex to all other vertices is called the _eccentricity_ of the vertex. The minimum of the eccentricity over all the vertices is called the _radius_ of the graph. The _center_ of the graph is the set of vertices whose eccentricity equals the radius. For pattern counting, we pick one of the center vertices and call it the _key vertex_.
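As a quick, concrete check of these definitions, the following snippet (a minimal sketch assuming the networkx library; the example graph and variable names are ours) computes the eccentricities, radius, center, and a key vertex of a small path graph:

```
import networkx as nx

# P4 = 0-1-2-3: the endpoints have eccentricity 3, the inner vertices 2.
H = nx.path_graph(4)
ecc = nx.eccentricity(H)   # {0: 3, 1: 2, 2: 2, 3: 3}
radius = nx.radius(H)      # minimum eccentricity = 2
center = nx.center(H)      # vertices attaining the radius: [1, 2]
key_vertex = center[0]     # for pattern counting, pick one center vertex
```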
A _homomorphism_ from a graph \(H\) to \(G\) is a function \(f\) from \(V(H)\) to \(V(G)\) such that if \(\{u,v\}\in E(H)\), then \(\{f(u),f(v)\}\in E(G)\). Given a pattern \(H\), the set of all of its homomorphic images is called the _spasm_ of \(H\). Two graphs \(G\) and \(H\) are isomorphic if there exists a bijective function \(f:V(G)\to V(H)\) such that \(\{u,v\}\in E(G)\) if and only if \(\{f(u),f(v)\}\in E(H)\). The _orbit_ of a vertex \(v\) in \(G\) is the set of vertices to which \(v\) can be mapped by a mapping that extends to an automorphism of \(G\) (denoted \(Orbit_{G}(v)\)).

We now mention some structural graph parameters. Many problems can be solved efficiently on graphs for which these parameters are bounded.

**Graph Parameters:** We first define a tree decomposition of a graph. Given a graph \(G\), we decompose it into a tree structure \(T\), where each vertex of \(T\) is associated with a bag, which is a subset of the vertices of \(G\). The decomposition has to satisfy the following constraints:

1. Every vertex of \(G\) must lie in some bag associated with a vertex of \(T\).
2. For each edge \(\{v_{i},v_{j}\}\), there exists a bag containing both \(v_{i}\) and \(v_{j}\).
3. If a vertex \(v_{i}\in V(G)\) belongs to two bags \(B_{i}\) and \(B_{j}\) associated with two vertices \(u_{i}\) and \(u_{j}\) of \(T\), then \(v_{i}\) must be present in all the bags associated with the vertices on the path connecting \(u_{i}\) and \(u_{j}\) in \(T\).

The width of a tree decomposition is the maximum size of a bag minus one. The treewidth of the graph \(G\), denoted \(tw(G)\), is the minimum width over all such decompositions. It is NP-hard to compute the treewidth of a graph; however, there exists an efficient algorithm that checks, for a fixed \(k\), whether \(tw(G)\) is at most \(k\) Korhonen (2022); Korhonen and Lokshtanov (2022). Bounded treewidth implies sparsity; however, some sparse graphs have unbounded treewidth. For example, the grid graph on \(n\) vertices has treewidth \(\sqrt{n}\). The maximum of the treewidth over all its homomorphic images is called the _hereditary treewidth_ of a pattern \(H\), denoted by \(htw(H)\).

_Planar graphs_ are graphs that are \(K_{5}\) and \(K_{3,3}\) minor-free; equivalently, they are the graphs that can be drawn in the plane so that no edges cross each other. In such graphs, the number of edges is at most linear in the number of vertices. The _Euler genus_ of a graph is defined in a similar manner: it is the minimum genus of a surface on which the graph can be drawn without crossing edges.

Now, we look at graph classes that are dense but have nice structure, such as complete graphs. _Clique width_ has been defined for dense graphs; however, there is no efficient algorithm to check whether the clique width of a given graph is \(k\), for \(k\geq 4\). Rank-width was defined by Oum and Seymour to handle dense graph classes. Given a graph \(G\), a rank-width decomposition consists of a subcubic tree \(T\) together with a bijection from the set of leaves of \(T\) to the set of vertices of \(G\); each internal vertex of the tree is associated with the union of the vertices belonging to its two children.
Note that the deletion of a single edge of \(T\) disconnects the tree, and the vertices of the graph get partitioned into two parts, say \(X\) and \(V(G)\setminus X\). We consider the submatrix \(A(X,V(G)\setminus X)\) of the adjacency matrix, where \(a_{i,j}=1\) if and only if \(i\in X\) and \(j\in V(G)\setminus X\) are adjacent. The width of an edge of \(T\) is the rank of this submatrix, and the rank-width of \(G\) is the minimum, over all decompositions, of the maximum width of an edge. It has also been shown that bounded clique width implies bounded rank-width and vice versa Oum (2017).

## 3 Weisfeiler Leman Algorithm

Weisfeiler-Leman (WL) is a well-known combinatorial algorithm that has many theoretical and practical applications. Color refinement (\(1-\)WL) was first introduced in 1965 in Morgan (1965). The algorithm goes as follows:

* Initially, color all the vertices with color 1.
* In iteration \(i\), we color each vertex \(v\) by looking at the multiset of colors of its neighbors in the \((i-1)\)th iteration:
\[C_{i}(v)=(C_{i-1}(v),\{\{C_{i-1}(w)\}\}_{w\in N_{G}(v)})\]
We assign a new color to each vertex according to the unique tuple it belongs to. This partitions the vertex set in every iteration according to the color classes.
* The algorithm terminates if there is no further partitioning. We call the resulting color set a _stable_ color set.
* Observe that if two vertices get different colors at any stage \(i\), then they will never get the same color in later iterations. The number of iterations is at most \(n\), since the vertex set \(V(G)\) can be partitioned at most \(n\) times.
* The color class of any vertex \(v\in V(G)\) can change at most \(1+\log n\) times, and the running time is \(\mathcal{O}(n^{2}\log n)\) Immerman and Sengupta (2019). If we run only \(h\) iterations and stop before reaching the stable coloring, then the running time is \(O(nh)\).

The same idea was extended by Weisfeiler and Leman: instead of coloring vertices, they colored all 2-tuples of vertices based on whether the tuple forms an edge, a non-edge, or a pair \((v,v)\). In later iterations, the color of each 2-tuple is refined based on its neighbourhood and common neighbourhood. This partitions the set of 2-tuples of vertices. The coloring at the iteration in which no further partitioning occurs is called the _stable coloring_. This is the Weisfeiler-Leman algorithm known as the \(2-\)WL algorithm. A similar approach was later extended to coloring \(k\)-tuples, with the coloring refined in later iterations.

**Definition 1**.: _Let \(\vec{x}=(x_{1},...,x_{k})\in V^{k},y\in V\), and \(1\leq j\leq k\). Then, let \(\vec{x}[j,y]\in V^{k}\) denote the \(k\)-tuple obtained from \(\vec{x}\) by replacing \(x_{j}\) by \(y\). The \(k\)-tuples \(\vec{x}[j,y]\) and \(\vec{x}\) are said to be \(j\)-neighbors for any \(y\in V\). We also say \(\vec{x}[j,y]\) is the \(j\)-neighbor of \(\vec{x}\) corresponding to \(y\)._

* Color all the \(k\)-tuples of vertices according to their isomorphism type. Formally, \((v_{1},v_{2},....,v_{k})\) and \((w_{1},w_{2},....,w_{k})\) get the same color if, for all \(i,j\): \(v_{i}=v_{j}\) if and only if \(w_{i}=w_{j}\), and \((v_{i},v_{j})\in E(G)\) if and only if \((w_{i},w_{j})\in E(G)\).
* In every iteration, the algorithm updates the color of a tuple after seeing the colors of its adjacent \(k\)-tuples:
\[C_{i+1}^{k}(\vec{v}):=(C_{i}^{k}(\vec{v}),M_{i}(\vec{v}))\]
where \(M_{i}(\vec{v})\) is the multiset
\[\{\{\,(C_{i}^{k}(v_{1},v_{2},\ldots,v_{k-1},w),\;\ldots,\;C_{i}^{k}(v_{1},v_{2},\ldots,w,\ldots,v_{k}),\;\ldots,\;C_{i}^{k}(w,v_{2},\ldots,v_{k}))\mid w\in V\,\}\}\]
* The algorithm terminates if there is no further partitioning. We call the resulting color set a stable color set.
* We also observe that if two tuples get different colors at any stage \(i\), then they will never get the same color in later iterations. The number of iterations is at most \(n^{k}\), since \(V^{k}\) can be partitioned at most \(n^{k}\) times.
* The color class of any tuple \(\vec{v}\in V^{k}\) can change at most \(\mathcal{O}(k\log n)\) times, and the running time is \(\mathcal{O}(k^{2}n^{k+1}\log n)\) Immerman and Sengupta (2019).

Two graphs \(G\) and \(H\) are said to be \(k-\)WL equivalent (\(G\simeq_{k}H\)) if the color histograms of their stable colorings match. We say that \(G\) is \(k-\)WL identifiable if there does not exist any non-isomorphic graph that is \(k-\)WL equivalent to \(G\). Color refinement (\(1-\)WL) can recognise almost all graphs Babai et al. (1980), while \(2-\)WL can recognise almost all regular graphs Bollobas (1982). The power of \(k-\)WL increases with \(k\). The power of \(k-\)WL to distinguish two given graphs is the same as that of the counting logic \(C^{k+1}\) with \((k+1)\) variables. Also, the power of \(k-\)WL to distinguish two non-isomorphic graphs is equivalent to the Spoiler having a winning strategy in the \((k+1)\)-bijective pebble game. Recently, Dell et al. (2018) have shown that the expressive power of \(k-\)WL is captured by homomorphism counts: \(G_{1}\simeq_{k}G_{2}\) if and only if \(Hom(T,G_{1})=Hom(T,G_{2})\) for all graphs \(T\) of treewidth at most \(k\). The graphs that are identified by \(1-\)WL are called _amenable_ graphs; a complete characterization of the amenable graphs is given in Arvind et al. (2017); Kiefer et al. (2015). In the original algorithm, we initially color all the vertices with color \(1\); however, if we are given a colored graph as input, we start with the given colors as the initial colors. We can also color the edges and run \(1-\)WL Kiefer et al. (2015).

Even if \(k-\)WL may not distinguish two non-isomorphic graphs, two \(k-\)WL equivalent graphs have many invariant properties. It is well known that two \(1-\)WL equivalent graphs have the same maximum eigenvalue. Two graphs that are \(2-\)WL equivalent are co-spectral and have the same diameter. Recently, Arvind et al. have shown invariance in terms of subgraph existence and counts Arvind et al. (2020). They give a complete characterization of subgraphs whose count and existence are invariant for \(1-\)WL equivalent graph pairs, and they also list matching, cycle, and path count invariances for \(2-\)WL. There is also a relation between homomorphism counts and subgraph counts Curticapean et al. (2017): the subgraph count is a function of the homomorphism counts from the set of all homomorphic images of the pattern. The _hereditary treewidth_ of a pattern is defined as the maximum treewidth over all its homomorphic images. So, if two graphs are \(k-\)WL equivalent, then the counts of all patterns of hereditary treewidth at most \(k\) are the same. However, running \(k-\)WL takes \(O(k^{2}\cdot n^{k+1}\log n)\) time and \(O(n^{k})\) space Immerman and Sengupta (2019), so it is not practically feasible to run \(k-\)WL for large \(k\).

The expressive power of \(k-\)WL is equivalent to first-order logic on \((k+1)\) variables with counting quantifiers. Let \(G=(V,E)\), where \(V\) is a set of vertices and \(E\) is a set of edges. In logic, we define \(V\) as the universe and \(E\) as a binary relation.
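To make the color refinement procedure described at the beginning of this section concrete, here is a minimal Python sketch (our own illustration, assuming the networkx library). Running the refinement jointly on two graphs keeps the color ids comparable across graphs, so their stable histograms can be compared directly:

```
from collections import Counter
import networkx as nx

def wl1_colors(graphs, rounds):
    """Run 1-WL (color refinement) jointly on a list of graphs, so that
    the same (old color, neighbor multiset) signature receives the same
    new color id in every graph."""
    colors = [{v: 0 for v in G} for G in graphs]
    for _ in range(rounds):
        table = {}  # shared map: signature -> new color id
        refined = []
        for G, col in zip(graphs, colors):
            sigs = {v: (col[v], tuple(sorted(col[w] for w in G[v]))) for v in G}
            refined.append({v: table.setdefault(sigs[v], len(table)) for v in G})
        colors = refined
    return colors

def wl1_equivalent(G, H):
    """Two graphs are 1-WL equivalent iff their stable color histograms
    match; n rounds always suffice, since a vertex set can be partitioned
    at most n times."""
    n = max(G.number_of_nodes(), H.number_of_nodes())
    cG, cH = wl1_colors([G, H], rounds=n)
    return Counter(cG.values()) == Counter(cH.values())

# Example: two 3-regular graphs that 1-WL cannot tell apart.
print(wl1_equivalent(nx.complete_bipartite_graph(3, 3),
                     nx.circular_ladder_graph(3)))  # True
```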
Cai et al. (1992) proved that the power to distinguish two non-isomorphic graphs using \(k-\)WL is equivalent to \(C^{k+1}\), where \(C^{k+1}\) denotes first-order logic on \((k+1)\) variables with counting quantifiers (stated in Theorem 1). To prove this, they define a bijective \(k\)-pebble game whose power is equivalent to \(C^{k}\).

#### Bijective k-Pebble Game

The bijective \(k\)-pebble game (\(BP_{k}(G,H)\)) has been discussed in Kiefer (2020); Cai et al. (1992); Grohe and Neuen (2021). Let the graphs \(G\) and \(H\) have the same number of vertices and let \(k\in\mathbb{N}\). Let \(v_{i},v\in V(G)\) and \(w_{i},w\in V(H)\).

**Definition 2**.: _The position of the game in the bijective pebble game is the pair of tuples of the vertices on which the pebbles are placed._

The bijective \(k\)-pebble game is defined as follows:

1. Spoiler and Duplicator are the two players.
2. Initially, no pebbles are placed on the graphs, so the position of the game is \(((),())\) (the pair of empty tuples).
3. The game proceeds in rounds as follows:
   1. Let the position of the game after the \(i^{th}\) round be \(((v_{1},...,v_{l}),(w_{1},w_{2},...,w_{l}))\). The Spoiler has two options: either play a pebble or remove a pebble. If the Spoiler wants to remove a pebble, then the number of pebbles on the graphs must be at least one; if the Spoiler decides to play a pebble, then the number of pebbles on each graph must be less than \(k\).
   2. If the Spoiler removes the pebble from \(v_{i}\), then the new position of the game is \(((v_{1},v_{2},...v_{i-1},v_{i+1},..,v_{l}),(w_{1},w_{2},...w_{i-1},w_{i+1},..,w_{l}))\). Note that, in this round, the Duplicator has no role to play.
   3. If the Spoiler wants to play a pair of pebbles, then the Duplicator has to propose a bijection \(f:V(G)\to V(H)\) that preserves the previously pebbled vertices. The Spoiler then chooses \(v\in V(G)\) and sets \(w=f(v)\). The new position of the game is \(((v_{1},...v_{l},v),(w_{1},w_{2},...,w_{l},w))\).

The Spoiler wins the game if, for the current position \(((v_{1},...v_{l},v),(w_{1},w_{2},...,w_{l},w))\), the subgraphs induced by the pebbled vertices are not isomorphic under the pebble correspondence. If the game never ends, then the Duplicator wins. The equivalence between the bijective \(k\)-pebble game and \(k-\)WL was shown in the following theorem.

**Theorem 1**.: _Cai et al. (1992) Let \(G\) and \(H\) be two graphs. Then \(G\simeq_{k}H\) if and only if the Duplicator wins the pebble game \(BP_{k+1}(G,H)\)._

A stronger result, namely the equivalence between the number of rounds in the bijective \((k+1)\)-pebble game and the iteration number of \(k-\)WL, was stated in the following theorem.

**Theorem 2**.: _Kiefer (2020) Let \(G\) and \(H\) be graphs of the same size, whose vertices may or may not be colored. Let \(\vec{u}:=(u_{1},...,u_{k})\in(V(G))^{k}\) and \(\vec{v}:=(v_{1},...,v_{k})\in(V(H))^{k}\) be two arbitrary tuples. Then, for all \(i\in\mathbb{N}\), the following are equivalent:_

1. _The color of_ \(\vec{u}\) _is the same as the color of_ \(\vec{v}\) _after running_ \(i\) _iterations of_ \(k-\)_WL._
2. _Every counting logic formula with_ \((k+1)\) _variables and quantifier depth at most_ \(i\) _holds in_ \(G\) _if and only if it holds in_ \(H\)_._
3. _The Spoiler does not win the game_ \(BP_{k+1}(G,H)\) _with the initial configuration_ \((\vec{u},\vec{v})\) _within_ \(i\) _moves._

## 4 Local k-WL based Algorithms for GNNs

In this section, we present the Local \(k-\)WL based algorithms for GNNs.
We also give the runtime and space requirements for such GNNs.

### Local k-WL

Given a graph \(G\), we extract the subgraph induced on the \(r\)-hop neighbourhood around every vertex. We refer to it as \(G^{r}_{v}\) for the subgraph rooted at vertex \(v\) in \(G\). Then, we colour the vertices in \(G^{r}_{v}\) according to their distances from \(v\). Now, we run \(k-\)WL on the coloured subgraph \(G^{r}_{v}\). The stable colour obtained after running \(k-\)WL is taken as the attribute of vertex \(v\). Then, we run a GNN on the graph \(G\) with these attributes on each vertex \(v\). This is described in Algorithm 1.

```
1:Input: \(G,r,k\)
2:for each vertex \(v\) in \(V(G)\)do
3: Find the subgraph induced on the \(r\)-hop neighborhood rooted at vertex \(v\) (\(G_{v}^{r}\)).
4: Color the vertices whose distance from \(v\) is \(i\) with color \(i\).
5: Run \(k-\)WL on the colored graph until the colors stabilize.
6:endfor
7:Each vertex \(v\) has as an attribute the stable coloring obtained from \(G_{v}^{r}\).
8:Run a GNN on the graph \(G\) with each vertex having attributes as computed above.
```
**Algorithm 1** Local k-WL

Runtime and Space requirement Analysis: The time required to run \(k-\)WL on \(n\) vertices is \(O(n^{k+1}\log(n))\). Here, we run \(k-\)WL on an \(r\)-hop neighborhood instead of the entire graph, so \(n\) is replaced by \(n_{1}\), where \(n_{1}\) is the size of the neighborhood. If a graph has bounded degree \(d\) and we run \(k-\)WL on a \(2\)-hop neighborhood, then \(n_{1}\) is \(O(d^{2})\). Since we run Local \(k-\)WL for each vertex, the total time required is \(O(n\cdot d^{2k+2}\log(d))\). Running a traditional GNN takes time \(O((n+m)\log n)\), where \(m\) is the number of edges. So, if we assume that \(d\) is bounded, then the time required is linear in the size of the graph. Furthermore, the space required to run \(k-\)WL on an \(n\)-vertex graph is \(O(n^{k})\); hence, for Local \(k-\)WL, the space requirement is \(O(n_{1}^{k})\).

### Layer k-WL

In order to make Local \(k-\)WL more time and space efficient while maintaining the same expressive power, we propose a modification to Local \(k-\)WL. Instead of running \(k-\)WL on the entire \(r\)-hop neighbourhood, we run \(k-\)WL on consecutive layers of \(G_{v}^{r}\) (i.e., we run \(k-\)WL on the set of vertices with colour \(i\) and colour \((i+1)\)). Initially, we run \(k-\)WL on the set of vertices that are at distance \(1\) and \(2\) from \(v\). Then, we run \(k-\)WL on the set of vertices with colors \(2\) and \(3\), and so on. When running \(k-\)WL, the \(k\)-tuples are initially partitioned based on their isomorphism type; in this setting, however, we also incorporate the stabilized colouring obtained in the previous round. For \(l<k\), we define the color of an \(l\)-tuple as \(col(u_{1},u_{2},...,u_{l}):=col(u_{1},u_{2},...,u_{l},\underbrace{u_{l},\ldots,u_{l}}_{(k-l)\text{ times}})\). Consider a mixed tuple (we call a tuple _mixed_ if some of its vertices have been processed in the previous iteration and the remaining ones have not), say \((u_{1},v_{1},\ldots,u_{k})\), where \(col(u_{j})=i\) and \(col(v_{j})=i+1\) (i.e., the \(u_{j}\)'s are the processed vertices and the \(v_{j}\)'s are yet to be processed).
So, even if \((u_{1},v_{1},\ldots,u_{k})\) and \((u_{1}^{\prime},v_{1}^{\prime},\ldots,u_{k}^{\prime})\) are isomorphic as tuples, if \(col(u_{1},u_{2},\ldots u_{l})\neq col(u_{1}^{\prime},u_{2}^{\prime},\ldots u_{l}^{\prime})\), then \(col(u_{1},v_{1},\ldots,u_{k})\neq col(u_{1}^{\prime},v_{1}^{\prime},\ldots,u_{k}^{\prime})\). The algorithm is described in Algorithm 2. A GNN model incorporating Local+Layer \(k-\)WL is obtained by running Layer \(k-\)WL in line 5 of Algorithm 1.

```
1:Given \(G_{v}^{r},k\).
2:Run \(k-\)WL on the induced subgraph of levels \(1\) and \(2\).
3:for each layer \(i\) of BFS(\(v\)), \(i\geq 2\)do
4: The initial colours of the \(k\)-tuples incorporate the stabilized colours obtained from the previous iteration.
5: Run \(k-\)WL on the subgraph induced on the vertices in layers \(i\) and \((i+1)\)
6:endfor
```
**Algorithm 2** Layer k-WL(\(v\))

Runtime and Space requirement Analysis: The running time and space requirement of Layer \(k-\)WL depend on the maximum number of vertices in any two consecutive layers, say \(n_{2}\). The time required to run Layer \(k-\)WL is \(O(r\cdot(n_{2})^{k+1}\log(n_{2}))\), whereas running plain Local \(k-\)WL requires \(O((r\cdot n_{2})^{k+1}\log(r\cdot n_{2}))\) time. The space requirement is \(O(n_{2}^{k})\). Hence, running Layer \(k-\)WL is more efficient than running Local \(k-\)WL, especially when \(r\) is large.

### Recursive WL

Here, we present another variant of WL. The central idea is to first decompose the graph by running \(1-\)WL, and then decompose it further by running \(2-\)WL, and so on. Note that the final vertex partition output by \(1-\)WL after color refinement is regular when restricted to a single color class. In other words, if \(G[X]\) is the graph induced on the vertices of one color class, then \(G[X]\) is regular. Similarly, the graph \(G[X,Y]\), where \(X\) and \(Y\) are the vertex sets of two different color classes, is bi-regular. We run \(2-\)WL on each regular graph; using Bollobas (1982), we can guarantee that this distinguishes almost all regular graphs. Similarly, \(G[X,Y]\) is bi-regular and can thus be handled by running \(2-\)WL on it. We then run \(1-\)WL on \(G\) again, using the colors obtained after running \(2-\)WL, which further refines the colors of the vertices in \(G\). One can easily check that this is more expressive than \(1-\)WL and less expressive than \(2-\)WL. Figure 1 shows a graph that cannot be distinguished by Recursive \((1,2)-\)WL or \(1-\)WL but can be distinguished by \(2-\)WL. This gives an intermediate level in the \(k-\)WL hierarchy. Also, the space and time required for running \(2-\)WL on the entire graph exceed those of Recursive \((1,2)-\)WL; the running time and space required depend on the partition sizes obtained after running \(1-\)WL. Note that the color of a vertex \(v\) after running \(2-\)WL is \(col(v,v)\).

```
1:Given \(G\)
2:Run \(1-\)WL and get the partition of vertices into colour classes.
3:Let \(S=\{C_{1},C_{2},\ldots C_{l}\}\) be the color classes obtained after running \(1-\)WL.
4:for each color class \(C_{i}\) in \(S\)do
5:Run \(2-\)WL on the induced subgraph on \(C_{i}\) and get a color partition.
6:Let \(C_{i}\) be partitioned into \(C_{i,1},C_{i,2},\ldots,C_{i,l}\)
7:endfor
8:Run \(1-\)WL on the colored graph \(G\) whose colors are given by steps 5 and 6.
9:for each pair of new color classes \(C^{\prime}_{i}\) and \(C^{\prime}_{j}\)do
10:Run \(2-\)WL on the subgraph induced on the vertices in the color partitions \(C^{\prime}_{i}\) and \(C^{\prime}_{j}\) and get a new color partition.
11:endfor
12:Repeat steps 5-11 till the colours stabilize.
```
**Algorithm 3** Recursive(1,2) WL

This idea can be generalized for any suitable \(k\). We can run a smaller-dimensional \(k_{1}-\)WL and then use the resulting partition of \(k_{1}\)-tuples to obtain a finer partition of \(k_{2}\)-tuples. Assuming \(k_{1}<k_{2}\), one can see that we have to run \(k_{2}-\)WL only on smaller graphs. This reduces the time and space compared to running \(k_{2}-\)WL on the entire graph. One can easily see that the result is less expressive than \(k_{2}-\)WL but more expressive than \(k_{1}-\)WL. More specifically, we initially run \(1-\)WL and then run \((k-1)-\)WL on the color classes; one can check that this is more expressive than \((k-1)-\)WL and less expressive than \(k-\)WL.

Figure 1: Graph identifiable by 2-WL but not by Recursive \(1-\)WL

## 5 Theoretical Guarantee of Expressive Power

In this section, we theoretically prove the expressive power of the GNN models that we proposed in Section 4 in terms of graph and subgraph isomorphism. In the discussion below, we say that a GNN model \(A\) is at most as expressive as a GNN model \(B\) if any pair of non-isomorphic graphs \(G\) and \(H\) that can be distinguished by \(A\) can also be distinguished by \(B\). Also, we say a GNN model \(A\) is at least as expressive as a GNN model \(B\) if \(A\) can identify all the non-isomorphic graph pairs that can be identified by \(B\). The proofs of the theorem and lemmas presented in this section mainly use the bijective pebble game; as mentioned earlier, the expressivity of \(k-\)WL is equivalent to that of the \((k+1)\)-bijective pebble game.

### Local k-WL

It has been shown in recent works that Local \(1-\)WL is more expressive than \(1-\)WL, and an upper bound on the expressive power of Local \(1-\)WL was given in Frasca et al. (2022). However, the expressive power of Local \(k-\)WL, for arbitrary \(k\), has not been studied. In Theorem 3, we characterize the expressive power of Local \(k-\)WL and show that it is more expressive than \(k-\)WL and at most as expressive as \((k+1)-\)WL. The proof techniques we use are different from Frasca et al. (2022).

**Theorem 3**.: _Running Local \(k-\)WL is more expressive than running \(k-\)WL on the entire graph. Also, running Local \(k-\)WL is at most as expressive as running \((k+1)-\)WL on the entire graph._

Proof.: Let \(G_{1}\) and \(G_{2}\) be two graphs distinguishable by \(k-\)WL. So, the Spoiler has a winning strategy in the game (\(G_{1}\),\(G_{2}\)). Suppose \(G_{1}\) and \(G_{2}\) are not distinguished by running \(k-\)WL locally. That means the Duplicator has a winning strategy for every choice of individualized vertices. Let \(v\) in \(G_{1}\) and \(u\) in \(G_{2}\) be the individualized vertices. We play the \((k+1)\)-bijective pebble game on the entire graphs \((G_{1},G_{2})\) and on the local subgraphs \((G_{1}^{v},G_{2}^{u})\) simultaneously.
Let \(S_{1}\) and \(D_{1}\) be the Spoiler and Duplicator in the game \((G_{1},G_{2})\), respectively, and \(S_{2}\) and \(D_{2}\) be the Spoiler and Duplicator in the game \((G_{1}^{v},G_{2}^{u})\). We use the strategy of \(D_{2}\) to determine the moves of \(D_{1}\) and the strategy of \(S_{1}\) to determine the moves of \(S_{2}\). Initially, \(D_{2}\) gives a bijection \(f\) from the vertex set of \(G_{1}^{v}\) to \(G_{2}^{u}\). We let \(D_{1}\) propose the same bijection \(f\), extended by mapping \(v\) to \(u\). Now, the Spoiler \(S_{1}\) places a pebble at some pair \((v_{i},f(v_{i}))\); the Spoiler \(S_{2}\) also places a pebble at \((v_{i},f(v_{i}))\). We can show by induction on the number of rounds that if \(S_{1}\) wins the game, then \(S_{2}\) also wins the game. Our induction hypothesis is that \(S_{1}\) has not won up to the \(j^{th}\) round and the positions of both games are the same. Let the current position of both games after the \(j^{th}\) round be \(((v_{1},v_{2},\ldots,v_{l}),(f(v_{1}),f(v_{2}),\ldots,f(v_{l})))\). Now, \(S_{1}\) decides either to play a pair of pebbles or to remove one.

_Case (1): \(S_{1}\) decides to remove a pebble._ In this case, the Duplicator \(D_{1}\) has nothing to do, and \(S_{2}\) copies the strategy of \(S_{1}\). Here, \(S_{1}\) cannot win in this round, and the positions of both games remain the same.

_Case (2): \(S_{1}\) decides to play a pebble._ In this case, \(S_{2}\) also decides to play a pebble. The Duplicator \(D_{2}\) proposes a bijective function \(f\), and the same bijective function is proposed by \(D_{1}\). Now, \(S_{1}\) places a pebble at \((v,f(v))\), and \(S_{2}\) chooses the same pair of vertices. So, the positions of both games are the same, and therefore, if \(S_{1}\) wins the game, then \(S_{2}\) also wins.

Thus, running \(k-\)WL locally is at least as expressive as running \(k-\)WL on the entire graph. To see that it is strictly more expressive, consider the simple example that running \(1-\)WL on a local substructure can count the number of triangles, whereas running \(1-\)WL on the entire graph cannot distinguish graphs with different triangle counts. Also, one can observe that running \(k-\)WL locally amounts to running \(k-\)WL on the colored graph in which vertices at distinct distances get distinct colors; its power is the same as individualizing one vertex and running \(k-\)WL. Thus, running \(k-\)WL locally is more expressive than running \(k-\)WL on the entire graph.

For the upper bound, let \(G_{1}\) and \(G_{2}\) be two graphs that can be distinguished by running \(k-\)WL locally. Recall that the key vertices refer to \(v\) and \(u\), the root vertices of the local subgraphs of \(G_{1}\) and \(G_{2}\), respectively. This means the Spoiler has a winning strategy in the \((k+1)\)-bijective pebble game where the key vertices are matched to each other. Now, we use the strategy of the Spoiler on the local substructure to get a winning strategy for the Spoiler on the entire graph. At first, when the Duplicator gives a bijective function, the Spoiler places a pebble on the paired key vertices. For the remaining moves, the Spoiler on the entire graph copies the strategy of the Spoiler on the local structure, and the Duplicator's strategy on the entire graph is copied to the Duplicator's strategy on the local structures. Thus, if the Spoiler has a winning strategy on the local substructure, then the Spoiler wins the \((k+2)\)-bijective pebble game on the entire graphs.
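The triangle example in the proof above can be checked directly. In the sketch below (our own illustration, assuming networkx; the two example graphs are our choice), \(K_{3,3}\) and the triangular prism are both 3-regular, so plain \(1-\)WL assigns every vertex the same stable color in both graphs. But the number of edges of a graph is a \(1-\)WL invariant, and the subgraphs \(G_{v}^{1}\) inspected by Local \(1-\)WL differ between the two:

```
import networkx as nx

K33 = nx.complete_bipartite_graph(3, 3)   # 3-regular, triangle-free
prism = nx.circular_ladder_graph(3)       # triangular prism, 3-regular

for name, G in [("K33", K33), ("prism", prism)]:
    # number of edges in each G_v^1; an invariant preserved by 1-WL
    ego_edges = sorted(nx.ego_graph(G, v, radius=1).number_of_edges()
                       for v in G)
    print(name, ego_edges)
# K33   -> [3, 3, 3, 3, 3, 3]  (no edges among neighbors: no triangle at v)
# prism -> [4, 4, 4, 4, 4, 4]  (one neighbor edge: a triangle through v)
```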
### Layer k-WL

We presented an algorithm (Algorithm 2) for applying \(k-\)WL to consecutive layers in an \(r\)-hop subgraph of a vertex \(v\in V\). This improves the time and space efficiency of the Local \(k-\)WL method, as we have discussed above. We now describe the expressive power of Layer \(k-\)WL. In the following lemmas, we show that the expressive power of Layer \(k-\)WL is the same as that of Local \(k-\)WL.

Lemma 1.: _Running \(k-\)WL on the entire \(r\)-hop neighbourhood is at least as expressive as running Layer \(k-\)WL._

Proof.: Let \(G\) and \(H\) be the subgraphs induced on the \(r\)-hop neighborhoods. Let \((S,D)\) be the Spoiler-Duplicator pair for the game \((G,H)\). Similarly, let \((S_{i},D_{i})\) be the Spoiler-Duplicator pair for the game \((G_{i},H_{i})\), where \(G_{i}\) and \(H_{i}\) are the subgraphs induced on the vertices at the \(i\)th and \((i+1)\)th layers of \(G\) and \(H\), respectively. We claim that if any of the \(S_{i}\)'s has a winning strategy in the game \((G_{i},H_{i})\), then \(S\) has a winning strategy in the game \((G,H)\). Here, the strategy of \(D\) is copied by \(D_{i}\), and the strategy of \(S_{i}\) is copied by \(S\). We prove this using induction on the number of rounds of the game. Our induction hypothesis is that the positions of both games are the same, and if \(S_{i}\) wins after \(t\) rounds, then \(S\) also wins after \(t\) rounds.

_Base case:_ \(D\) proposes a bijective function \(f:V(G)\to V(H)\). Note that the bijection must be color-preserving; otherwise, \(S\) wins in the first round. Thus, we can assume that \(f\) is color-preserving. So, \(D_{i}\) proposes the restricted function \(f_{i}\) as a bijective function from \(V(G_{i})\) to \(V(H_{i})\). Now, \(S_{i}\) plays a pair of pebbles in \((G_{i},H_{i})\), and \(S\) plays the same pair of pebbles in the game \((G,H)\). It is easy to see that both games' positions are the same. Also, if \(S_{i}\) wins, then the numbers of vertices of some color differ, so \(S\) also has a winning strategy.

By the induction hypothesis, assume that after the \(t^{th}\) round \(S_{i}\) has not won and the position of the game is the same in both games. Consider the \((t+1)^{th}\) round in both games. \(S_{i}\) either chooses to play or to remove a pebble. If \(S_{i}\) chooses to remove a pebble, so does \(S\); again, the positions of both games are the same. Now, if \(S_{i}\) decides to play a pair of pebbles, then \(S\) also decides to play a pair of pebbles. So, \(D\) proposes a bijective function and \(D_{i}\) proposes the restricted bijective function. Now, if \(S_{i}\) plays a pair of pebbles at \((v,f_{i}(v))\), then \(S\) also plays a pair of pebbles at \((v,f(v))\). Thus, the positions of the two games are the same, which ensures that if \(S_{i}\) wins, then \(S\) also wins.

Lemma 2.: _Running Layer \(k-\)WL is at least as expressive as running \(k-\)WL on the entire induced subgraph._

Proof.: Let \(G\) and \(H\) be the subgraphs induced on an \(r\)-hop neighborhood. Let \((S,D)\) be the Spoiler-Duplicator pair for the game \((G,H)\). Similarly, let \((S_{i},D_{i})\) be the Spoiler-Duplicator pair for the game \((G_{i},H_{i})\), where \(G_{i}\) and \(H_{i}\) are the subgraphs induced on the vertices at the \(i\)th and \((i+1)\)th layers of \(G\) and \(H\), respectively. We claim that if \(S\) has a winning strategy in the game \((G,H)\), then there exists an \(S_{i}\) that has a winning strategy in the game \((G_{i},H_{i})\).
Here, the strategy of \(D\) is copied by \(D_{i}\) and the strategy of \(S_{i}\) is copied by \(S\). We prove the lemma using induction on the number of rounds of the game. Our induction hypothesis is that the position of the game \((G,H)\), when restricted to the subgraph induced by the vertices of colors \(i\) and \((i+1)\), is the same as that of \((G_{i},H_{i})\), for all \(i\). Also, if \(S\) wins after round \(t\), then there exists an \(S_{i}\) that wins after \(t\) rounds.

_Base case:_ \(D\) proposes a bijective function \(f:V(G)\longrightarrow V(H)\). Note that the bijection must be color-preserving; otherwise \(S\) wins in the first round. Thus, we can assume that \(f\) is color-preserving. So, \(D_{i}\) proposes the restricted function \(f_{i}\) as a bijective function from \(V(G_{i})\) to \(V(H_{i})\), \(\forall i\in[r]\). Now, \(S\) plays a pair of pebbles in the game \((G,H)\). Suppose \(S\) plays the pebbles at \((v,f(v))\) and \(color(v)=i\); then \(S_{i}\) and \(S_{i-1}\) play pebbles at \((v,f_{i}(v))\) in their first rounds. It is easy to see that the positions of the games \((G,H)\) and \((G_{i},H_{i})\), for all \(i\in[r]\), are the same when restricted to the subgraphs induced by the vertices of colors \(i\) and \((i+1)\). Also, if \(S\) wins, then the numbers of vertices of some color differ, so there exists some \(i\) such that \(S_{i}\) also has a winning strategy.

By the induction hypothesis, assume that after the \(t^{th}\) round \(S\) has not won and the positions of the games are as described. Consider the \((t+1)^{th}\) round in both games. \(S\) either chooses to play or to remove a pebble. If \(S\) chooses to remove a pebble from \((v,f(v))\) and \(v\) is colored with color \(i\), then \(S_{i}\) and \(S_{i-1}\) remove their pebbles from \((v,f_{i}(v))\). Again, the positions of the games are the same. Now, if \(S\) decides to play a pair of pebbles, then each \(S_{i}\) also decides to play a pair of pebbles. So, \(D\) proposes a bijective function and each \(D_{i}\) proposes the restricted bijective function. Now, suppose \(S\) plays a pair of pebbles at \((v_{1},f(v_{1}))\). If \(color(v_{1})=i\), then \(S_{i}\) and \(S_{i-1}\) also play pebbles at \((v_{1},f_{i}(v_{1}))\). Thus, the positions of the games remain as described. Now, if \(S\) wins, then there exist \(u\) and \(v\) such that either \((u,v)\in E(G)\) and \((f(u),f(v))\notin E(H)\), or \((u,v)\notin E(G)\) and \((f(u),f(v))\in E(H)\). Since the induced positions are the same, the same situation occurs for some \(S_{i}\). Therefore, \(S_{i}\) wins for some \(i\).

Thus, from the above two lemmas, we can conclude that the expressive power of Layer \(k-\)WL is the same as that of Local \(k-\)WL.

## 6 Subgraph Counting Algorithms and Characterization of Patterns

In this section, we characterize the expressive power of the proposed methods in terms of subgraph as well as induced subgraph counting, and we provide algorithms and a characterization of the patterns whose counts as subgraphs or induced subgraphs can be computed exactly. As described above, the running time depends on the size of the local substructure and on the value of \(k\). The size of the subgraph depends on the radius of the pattern, so we take an \(r\)-hop neighbourhood of each vertex \(v\) in the host graph \(G\). In Section 6.1, we show how the value of \(k\) can be decided based on the local substructure of the host graph; this choice is independent of the structure of the pattern.
It also gives an upper bound on the value of \(k\) needed to count patterns appearing as subgraphs and induced subgraphs. In Section 6.2, we first show that counting locally is sufficient for the induced subgraph count, and we then give an upper bound on \(k\) based on the pattern size; note that the value of \(k\) for the induced subgraph count depends only on the size of the pattern, not on its structure. In Section 6.3, we again show that counting subgraphs locally is sufficient, and we explore how the value of \(k\) depends on the structure of the pattern: for subgraph counting, the structure of the pattern can be exploited to get a better upper bound on \(k\). Later, for the sake of completeness, we give algorithms to count triangles, patterns of radius one, and patterns of radius \(r\).

### Deciding k based on the local substructure of the host graph

Here, we explore the local substructure of the host graph in which we count the patterns appearing as subgraphs and induced subgraphs. For a given pattern of radius \(r\), we explore the \(r\)-hop neighbourhood around every vertex \(v\) in the host graph \(G\). If two graphs \(G_{1}\) and \(G_{2}\) are isomorphic, then the numbers of subgraphs and induced subgraphs in both graphs are the same; we use this idea to count subgraphs. Cai et al. (1992) showed that dimension \(\Omega(n)\) is needed to guarantee graph isomorphism in general. However, for restricted graph classes, we can still guarantee isomorphism with small dimensions. It has been shown that \(3-\)WL is sufficient for planar graphs Kiefer et al. (2019), \(k-\)WL for graphs with treewidth at most \(k\) Kiefer and Neuen (2019), \((3k+4)-\)WL for graphs with rank-width at most \(k\) Grohe and Neuen (2021), and \((4k+3)-\)WL for graphs with Euler genus at most \(k\) Grohe and Kiefer (2019). We call these graph classes _good_ graph classes. Note that, within these classes, non-isomorphic graphs are not \(k-\)WL equivalent for the respective \(k\). Thus, running the corresponding \(k-\)WL can count patterns of radius \(r\) appearing as subgraphs and induced subgraphs.

**Theorem 4**.: _Let \(G_{v}^{r}\) denote the \(r\)-hop neighborhood around \(v\). Given a pattern of radius \(r\), the values of \(k\) that are sufficient to guarantee the count of patterns appearing either as subgraphs or induced subgraphs are:_

1. \(3-\)_WL if_ \(G_{v}^{r}\) _is planar_
2. \(k-\)_WL if_ \(tw(G_{v}^{r})\leq k\)
3. \((3k+4)-\)_WL if_ \(rankwidth(G_{v}^{r})\leq k\)
4. \((4k+3)-\)_WL if_ \(Euler-genus(G_{v}^{r})\leq k\)

Proof.: Consider the subgraph induced by the vertex \(v\) and its \(r\)-hop neighborhood in \(G\), say \(G_{v}^{r}\), and the subgraph induced by the vertex \(u\) and its \(r\)-hop neighborhood in \(H\), say \(H_{u}^{r}\). Suppose both structures belong to _good_ graph classes. We run the corresponding \(k-\)WL based on the local substructure, as stated in the theorem. If the color histograms of the stable colorings match for both graphs, then the two substructures are isomorphic, and thus the numbers of subgraphs and induced subgraphs in the two substructures are also the same. Moreover, we run the respective \(k-\)WL on a colored graph, where each vertex at distance \(i\) from \(v\) is colored \(i\); this is at least as expressive as running \(k-\)WL on the uncolored graph, and in fact strictly more expressive in distinguishing non-isomorphic graphs. Thus, the \(k-\)WL algorithms corresponding to the _good_ graph classes are sufficient for counting the number of patterns appearing as subgraphs and induced subgraphs.
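Theorem 4's case analysis can be turned into a simple per-neighborhood heuristic for picking \(k\). The sketch below is our own illustration, assuming networkx; it covers only the planarity and treewidth cases, since rank-width and Euler genus have no off-the-shelf networkx routines, and the treewidth routine returns only an upper bound on the true treewidth:

```
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def choose_k(ego):
    """Pick a k sufficient for exact counts in the local substructure
    `ego`, following cases 1 and 2 of Theorem 4 (heuristically)."""
    k, _ = treewidth_min_degree(ego)   # upper bound on tw(ego): k-WL suffices
    if nx.check_planarity(ego)[0]:
        k = min(k, 3)                  # planar neighborhood: 3-WL also suffices
    return max(k, 1)
```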
**Corollary 1**.: _If \(G_{v}^{r}\) is amenable for all \(v\in V(G)\), then Local \(1-\)WL outputs the exact count of the patterns appearing as subgraphs and induced subgraphs._

**Corollary 2**.: _Running \(1-\)WL guarantees the exact number of subgraphs and induced subgraphs for all patterns of radius one when the maximum degree of the host graph is bounded by 5. Similarly, if the maximum degree of the host graph is bounded by \(15\), then running \(2-\)WL is sufficient to count subgraphs and induced subgraphs for all patterns with a dominating vertex._

### Counting Induced Subgraphs

The following lemma shows that we can easily aggregate the local counts of the pattern \(H\) appearing as an induced subgraph to get the count of \(H\) over the entire graph.

**Lemma 3**.:
\[IndCount(H,G)=\frac{\sum_{v\in V(G)}IndCount_{(u,v)}(H,G_{v}^{r})}{|Orbit_{H}(u)|}\] (1)

Proof.: Suppose a pattern \(H\) appears in \(G\) as an induced subgraph. Then there exists an injective mapping from \(V(H)\) to \(V(G)\) that preserves both edges and non-edges. We fix one such induced copy and count the number of possible mappings onto it. The number of possible images of the key vertex \(u\) is exactly the size of the orbit of \(u\) in \(H\). This proves the claim, since every induced copy is counted exactly \(|Orbit_{H}(u)|\) times.

Assume that we want to count the number of occurrences of a pattern \(H\) in \(G\) as an (induced) subgraph. Let \(u\) be the key vertex of \(H\) and \(r\) be the radius of \(H\).

**Lemma 4**.: _It is sufficient to look at the \(r\)-hop neighborhood of \(v\) to compute \(Count_{(u,v)}(H,G)\) or \(IndCount_{(u,v)}(H,G)\)._

Proof.: Suppose there exists a subgraph of \(G\) that is isomorphic to \(H\), or to some supergraph of \(H\) with the same number of vertices, in which \(u\) is mapped to \(v\). The mapping between the graphs preserves edge relations. Consider the shortest path between \(u\) and any arbitrary vertex \(u_{i}\) in \(H\). Since the edges are preserved in the image of \(H\), the shortest distance from \(f(u)\) to any vertex \(f(u_{i})\) in \(G\) is at most \(r\). So, it is sufficient to look at the \(r\)-hop neighborhood of \(v\) in \(G\).

From the above two lemmas, we can conclude the following theorem:

**Theorem 5**.: _It is sufficient to compute \(IndCount_{(u,v)}(H,G_{v}^{r})\) for each \(v\in V(G)\), where \(G_{v}^{r}\) is the subgraph induced on the \(r\)-hop neighborhood of \(v\)._

The following theorem gives a direct comparison with the \(S_{k}\) model Papp and Wattenhofer (2022), where each node has an attribute which is the count of induced subgraphs of size at most \(k\).

**Theorem 6**.: _Local \(k-\)WL can count all induced subgraphs of size at most \((k+2)\)._

Proof.: Suppose, if possible, that \(G\) and \(H\) are Local \(k-\)WL equivalent and \(|P|\leq(k+2)\), where \(P\) is the pattern to be counted. We choose one vertex \(v\) as the key vertex. Now, we want to count \(P-v\) locally. Assume that the maximum distance from \(v\) to any other vertex of \(P\) is \(r\). So, we take the \(r\)-hop neighborhood of every vertex in \(G\) and \(H\), respectively. It is easy to see that the numbers of induced subgraphs and subgraphs of size \(k\) are the same locally if the local structures are \(k-\)WL equivalent, since we do an initial coloring of \(k\)-tuples based on isomorphism type.
Now, suppose \(P^{\prime}=P-v\) is of size \((k+1)\). Let \(P_{i}=P^{\prime}-v_{i}\), for \(i\in[k+1]\). Some of these subpatterns may be pairwise isomorphic; in that case, we keep only one representative. Let \(P_{1},P_{2},\ldots,P_{l}\) be the resulting pairwise non-isomorphic graphs. \(k-\)WL initially colors the \(k\)-tuples according to isomorphism type, so the subgraph count of each \(P_{i}\) is the same in any two \(k-\)WL equivalent graphs. Let \(V(P_{i})=(u_{1},u_{2},\ldots u_{k})\), and consider the refined colors after one iteration of the update rule from Section 3. Observe that by looking at the first coordinate of the color tuples in the multiset, we can determine the adjacency of \(u\) with \((u_{2},u_{3},\ldots u_{k})\); similarly, by looking at the second coordinate of the color tuples in the multiset, we can determine the adjacency of \(u\) with \((u_{1},u_{3},\ldots u_{k})\), and so on. Considering, for all \(u\in V(G)\), whether \(P_{i}\cup\{u\}\) forms \(P^{\prime}\) gives the count of the induced subgraph \(P^{\prime}\). Thus, if \(G\) and \(H\) are \(k-\)WL equivalent, then the size of each color class after the first iteration is the same. Now, each copy of \(P^{\prime}\) together with \(v\) forms \(P\) if it has exactly \(|N_{P}(v)|\) vertices of color \(1\). As mentioned earlier, \(k-\)WL is equivalent to the counting logic \(C^{k+1}\), so we add a unary relation encoding the required colors (e.g., that the relevant neighbors have color \(1\)). Again, using Lemma 3, we conclude that running \(k-\)WL locally can count all patterns of size \((k+2)\) appearing as induced subgraphs.

We note the corollary below, which covers the set of patterns studied in Chen et al. (2020).

**Corollary 3**.: _Local \(2-\)WL on the subgraphs induced by the neighborhood of each vertex can count each of the following patterns appearing as induced subgraphs as well as subgraphs: (a) 3-star, (b) triangle, (c) tailed triangle, (d) chordal cycle, (e) attributed triangle._

Based on the above results, we now present Algorithm 4 for counting patterns appearing as induced subgraphs in \(G\) using the localized algorithm. The function \(IndCount_{u,v}(H,G_{v}^{r})\) takes as input the pattern \(H\) and the attributed version of \(G_{v}^{r}\), and returns the induced subgraph count of \(H\) in \(G_{v}^{r}\), where \(u\in H\) is mapped to \(v\in G_{v}^{r}\). Notice that the function \(IndCount_{u,v}(H,G_{v}^{r})\) is a predictor that we learn using training data.

```
1:Given \(G,H\).
2:Find \(r=radius(H)\) and let \(u\in H\) be a corresponding center vertex.
3:for each vertex \(v\) in V(G) do
4: Extract subgraph \(G_{v}^{r}\).
5: Find a suitable \(k\) that will give an exact count based on the local substructure.
6: Run Local+Layer \(k-\)WL on \(G_{v}^{r}\).
7: Calculate \(IndCount_{u,v}(H,G_{v}^{r})\).
8:endfor
9: return \(\frac{\sum_{v\in V(G)}IndCount_{(u,v)}(H,G_{v}^{r})}{|Orbit_{H}(u)|}\)
```
**Algorithm 4** Counting induced subgraph H in G

The running time and space requirement of Algorithm 4 depend on the values of \(k\) and \(r\), and we can make informed choices for both. Notice that the value of \(k\) is chosen based on the local substructure, while the value of \(r\) is the radius of \(H\). If the local substructure is simple (planar, bounded treewidth, bounded rank-width; Theorem 4), then \(k-\)WL for small values of \(k\) is sufficient for counting the induced subgraph \(H\). Otherwise, we have to run \((|H|-2)-\)WL in the worst case.
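For small patterns, the learned predictor \(IndCount_{u,v}\) in Algorithm 4 can be replaced by exact enumeration, which also makes the orbit-normalised aggregation of Lemma 3 easy to test. The sketch below is our own illustration, assuming networkx, whose `GraphMatcher.subgraph_isomorphisms_iter` enumerates node-induced subgraph isomorphisms. It counts induced copies of \(H\) by summing, over all \(v\), the mappings inside \(G_{v}^{r}\) that let \(v\) play the role of the key vertex \(u\); each copy is then hit exactly \(|Aut(H)|\) times (\(|Orbit_{H}(u)|\) choices of \(v\) times the automorphisms fixing \(u\)):

```
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def ind_count(G, H, u):
    """Exact induced subgraph count of a connected pattern H in G,
    computed locally around each vertex as in Algorithm 4 / Lemma 3."""
    r = nx.radius(H)       # pattern radius; u is assumed to be a center vertex
    aut_H = sum(1 for _ in GraphMatcher(H, H).isomorphisms_iter())
    total = 0
    for v in G:
        ego = nx.ego_graph(G, v, radius=r)   # G_v^r
        for m in GraphMatcher(ego, H).subgraph_isomorphisms_iter():
            if m.get(v) == u:                # v plays the role of key vertex u
                total += 1
    # each induced copy is counted |Aut(H)| times over all (v, mapping) pairs
    return total // aut_H

# Sanity check: K4 contains four induced triangles.
print(ind_count(nx.complete_graph(4), nx.complete_graph(3), u=0))  # 4
```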
### Deciding k based on the pattern for counting subgraphs

For any pattern \(H\), it turns out that the number of subgraph isomorphisms from \(H\) to a host graph \(G\) is simply a linear combination of the numbers of graph homomorphisms \(Hom(H^{\prime},G)\), where \(H^{\prime}\) ranges over the spasm of \(H\), i.e., the set of all homomorphic images of \(H\). That is, there exist constants \(\alpha_{H^{\prime}}\in\mathbb{Q}\) such that:

\[Count(H,G)=\sum_{H^{\prime}}\alpha_{H^{\prime}}Hom(H^{\prime},G) \tag{2}\]

where \(H^{\prime}\) ranges over all graphs in the spasm of \(H\). This equation has been used to count subgraphs by many authors (see Alon et al. (1997); Curticapean et al. (2017)).

**Theorem 7**.: _Cai et al. (1992); Dell et al. (2018) For all \(k\geq 1\) and for all graphs \(G\) and \(H\), the following are equivalent:_

1. \(Hom(F,G)=Hom(F,H)\) _for all graphs_ \(F\) _such that_ \(tw(F)\leq k\)_;_
2. \(k-\)_WL does not distinguish_ \(G\) _and_ \(H\)_;_
3. _The graphs_ \(G\) _and_ \(H\) _are_ \(C^{k+1}\) _equivalent_1_._

Footnote 1: Counting logic with (k+1) variables

Using Equation (2) and Theorem 7, we arrive at the following theorem:

**Theorem 8**.: _Let \(G_{1}\) and \(G_{2}\) be \(k-\)WL equivalent and \(htw(H)\leq k\). Then the subgraph counts of \(H\) in \(G_{1}\) and \(G_{2}\) are the same._

**Lemma 5**.:
\[Count(H,G)=\frac{\sum_{v\in V(G)}Count_{(u,v)}(H,G_{v}^{r})}{|Orbit_{H}(u)|}\] (3)

Proof.: Suppose \(H\) appears as a subgraph in \(G\). Then there must exist an injective function mapping \(u\in V(H)\) to some \(v\in V(G)\). Thus, counting locally, every subgraph is counted. To prove Equation (3), it is sufficient to show that any given subgraph is counted exactly \(|Orbit_{H}(u)|\) times. Note that two subgraphs are the same if and only if their vertex sets and edge sets are the same. We fix a vertex set and edge set in \(G\) that is isomorphic to \(H\). Now, consider an automorphism of \(H\) that maps \(u\) to one of the vertices \(u^{\prime}\) in its orbit. We can easily find the updated isomorphism that maps \(u^{\prime}\) to \(v\). The number of choices of such \(u^{\prime}\) is exactly \(|Orbit_{H}(u)|\), so the same subgraph is counted at least \(|Orbit_{H}(u)|\) times. Suppose \(x\in V(H)\) is a vertex such that \(x\notin Orbit_{H}(u)\). Considering the fixed vertex and edge sets, if we could find an isomorphism mapping \(x\) to \(v\), it would contradict the assumption that \(x\notin Orbit_{H}(u)\). Thus, the same subgraph is counted exactly \(|Orbit_{H}(u)|\) times.

Using Theorem 8 and Lemma 5, one can easily see that for counting a pattern \(H\) as a subgraph, it is sufficient to run Local \(k-\)WL on the local substructure and count the subgraph locally.

**Theorem 9**.: _Local \(k-\)WL can exactly count any subgraph \(H\) if \(htw(H-v)\leq k\)._

The upper bound on the choice of \(k\) for running \(k-\)WL can thus be improved from the default \(|H|-2\) bound that we used for the induced subgraph count: the value of \(k\) is now upper-bounded by \(htw(H)\). Hence, we pick the minimum \(k\) based on the local substructure of \(G\) as well as the hereditary treewidth of the pattern \(H\) for computing the subgraph count of \(H\) in \(G\). The algorithm for counting subgraphs is similar to that for induced subgraphs.
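The aggregation of Lemma 5 for (not necessarily induced) subgraph counts works the same way in the enumeration-based sketch above: only the matcher call changes, since monomorphisms may ignore extra edges of the host graph. A hedged variant, again assuming networkx (whose `subgraph_monomorphisms_iter` enumerates exactly these edge-preserving injective mappings):

```
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def sub_count(G, H, u):
    """Exact subgraph (monomorphism-based) count of a connected pattern H
    in G, aggregated locally as in Lemma 5."""
    r = nx.radius(H)
    aut_H = sum(1 for _ in GraphMatcher(H, H).isomorphisms_iter())
    total = 0
    for v in G:
        ego = nx.ego_graph(G, v, radius=r)
        for m in GraphMatcher(ego, H).subgraph_monomorphisms_iter():
            if m.get(v) == u:
                total += 1
    return total // aut_H

# Sanity check: the 4-cycle contains exactly one copy of C4 as a subgraph.
print(sub_count(nx.cycle_graph(4), nx.cycle_graph(4), u=0))  # 1
```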
**Corollary 4**.: _Local \(1-\)WL can exactly count the number of occurrences of a pattern \(P\) appearing as a subgraph when \(P-v\), for a dominating vertex \(v\), is \(K_{1,s}\) or \(2K_{2}\)._

Proof.: In a star \(K_{1,s}\), all the leaves are mutually independent. By the definition of homomorphism, edges are preserved in homomorphic images, so the only possible homomorphic image of a star is another star with fewer leaves. Note that a star is a tree, and its treewidth is one. For \(2K_{2}\), the homomorphic image is either \(2K_{2}\) itself or a star. So, the treewidth of all its homomorphic images is \(1\).

**Corollary 5**.: _Local \(1-\)WL can exactly count the number of \(C_{4}\) appearing as a subgraph._

Proof.: Choosing any vertex as the key vertex, we can see that \(H-v\) is \(K_{1,2}\). Also, the orbit size is \(4\). So, we can directly use Lemma 5 to compute the count of \(C_{4}\) locally, sum it over all the vertices, and divide by \(4\).

**Corollary 6**.: _Local \(1-\)WL can exactly count patterns appearing as subgraphs of (a) 3-star, (b) triangle, (c) tailed triangle, (d) chordal cycle, (e) attributed triangle, and patterns appearing as induced subgraphs of (b) triangle and (e) attributed triangle._

Proof.: For subgraph counting, we can see that for all of the \(5\) patterns, there exists a vertex \(v\) such that \(htw(P-v)=1\). One can note that the attributed triangle can also be handled using Corollary 1. Since every pattern has a dominating vertex, running \(1-\)WL on the subgraph induced on the neighborhood is sufficient. It remains to argue for the patterns appearing as induced subgraphs. Note that the induced subgraph counts of the triangle and the attributed triangle are the same as their subgraph counts.

Note that all of the subgraph and induced subgraph counting results can easily be extended to attributed subgraph and attributed induced subgraph counting (graph motifs): we are given a coloured graph as input, and we incorporate those colours and apply the same technique as described above to get the count.

**Corollary 7**.: _If \(C(G)=C(H)\), where \(C(\cdot)\) is the color histogram, then \(Count(P,G)=Count(P,H)\), where \(P\) is an attributed subgraph._

In particular, for counting the number of triangles, it is enough to count the number of edges in the subgraph induced on the neighbourhood of each vertex. Thus, Local \(1-\)WL can give the exact count of the number of triangles; for more details, see Section 6.4. The running time of \(1-\)WL depends on the number of iterations, \(h\). In general, it takes \(O((n+m)\log n)\) time, where \(m\) is the number of edges, whereas in terms of the iteration number it requires \(O(nh)\) time.

**Lemma 6**.: _It requires \(O(n)\) time to guarantee the count of patterns that can be expressed using \(2\) variables with counting quantifiers, where the quantifier depth is constant._

For more details, see the equivalence between the number of iterations and the quantifier depth in Theorem 2. A list of patterns that can be counted as subgraphs and induced subgraphs using Local \(1-\)WL and Local \(2-\)WL is given in Table 1. The patterns including the 3-star, triangle, chordal \(4\)-cycle (chordal \(C_{4}\)), and attributed triangle have been studied in Chen et al. (2020), where it was shown that they cannot be counted by \(1-\)WL.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
**Restriction on \(G_{v}^{r}\)** & **k** & **Patterns, \(H\)** & **Induced** & **Subgraph** & **Reference** \\
\hline
\(G_{v}^{r}\) is amenable & 1 & All & ✓ & ✓ & Corollary 1 \\
\hline
Max degree \(\leq 5\) & 1 & Patterns with a dominating vertex & ✓ & ✓ & Corollary 2 \\
\hline
Max degree \(\leq 15\) & 2 & Patterns with a dominating vertex & ✓ & ✓ & Corollary 2 \\
\hline
No restriction & 2 & 3-star, triangle, tailed triangle, chordal cycle, attributed triangle & ✓ & ✓ & Corollary 3 \\
\hline
No restriction & 1 & Either \(H\) or \(H-v\) is \(K_{1,s}\) or \(2K_{2}\), where \(v\) is the dominating vertex & & ✓ & Corollary 4 \\
\hline
No restriction & 1 & \(C_{4}\), 3-star, tailed triangle, chordal cycle & & ✓ & Corollaries 5, 6 \\
\hline
No restriction & 1 & triangle, attributed triangle & ✓ & ✓ & Corollary 6 \\
\hline
\end{tabular}
\end{table}
Table 1: List of all the patterns that can be counted exactly (as a subgraph or induced subgraph), given \(G\), using Local \(k-\)WL, for different \(k\).

### Algorithms for subgraph counting

#### Triangle counting in the host graph

We describe an algorithm for counting the number of triangles in a given host graph \(G\) in Algorithm 5. Note that counting the number of triangles through a vertex \(v\) is the same as counting the number of edges in the subgraph induced by \(N_{G}(v)\). It is well known that two \(1-\)WL equivalent graphs have the same number of edges. This ensures that if we run \(1-\)WL on the subgraphs induced on the neighborhoods of the vertices, taking the color as a feature, we can guarantee the count of the triangles. On the other hand, running \(1-\)WL on the graph \(G\) itself does not guarantee the count of triangles. Running \(1-\)WL on the entire graph takes \(O((n+m)\log n)\) time and \(O(n)\) space, where \(m\) is the number of edges; thus, running \(1-\)WL locally on the neighborhoods is more space and time efficient. Note that the running time also depends on the number of iterations, \(h\): running \(1-\)WL for \(h\) iterations requires \(O(nh)\) time. The quantifier depth of counting logic with \((k+1)\) variables is equivalent to the number of iterations of \(k-\)WL (see Theorem 2). For the case of triangle counting, we just need to count the number of edges, which can be done by running a single iteration of \(1-\)WL. So, the time required is \(O(deg(v))\) for each \(v\), and this can be done in parallel over the vertices.

```
1:Let \(G\) be the host graph.
2:\(num\_edges=0\)
3:for each vertex \(v\) in \(V(G)\)do
4: Find the induced subgraph on \(N_{G}(v)\)
5: Find the number of edges in the induced subgraph on \(N_{G}(v)\)
6: Add it to \(num\_edges\)
7:endfor
8:\(c\) = \(num\_edges/3\)
9:Output: The number of triangles in graph \(G\) is \(c\).
```
**Algorithm 5** Counting the number of triangles

#### Counting subgraphs of radius one

We begin by explaining the procedure for counting the number of subgraphs having a dominating vertex (radius one). For this purpose, we fix a dominating vertex \(u\). If such a subgraph exists, then the dominating vertex must be mapped to some vertex. We iteratively map the dominating vertex to each vertex in the host graph and count the number of patterns in the neighborhood of that vertex. The procedure for counting patterns of radius one is presented in Algorithm 6.
#### Counting subgraphs of radius one

We begin by explaining the procedure for counting the number of subgraphs having a dominating vertex (radius one). For this purpose, we fix a dominating vertex \(u\). If the subgraph exists, then the dominating vertex must be mapped to some vertex. We iteratively map the dominating vertex to each vertex in the host graph and count the number of patterns in the neighborhood of the dominating vertex. We present an algorithm for counting patterns of radius one in Algorithm 6.

Note that running \(k-\)WL on the entire graph takes \(O(k^{2}\cdot n^{k+1}\log n)\) time and \(O(n^{k})\) space, whereas running it locally requires less time and space. Suppose we run it only on the neighborhood of each vertex. Then it requires \(O\big(\sum_{v\in V(G)}(deg(v))^{k+1}\log(deg(v))\big)\) time and \(O(\max_{i}(deg(v_{i}))^{k}+n)\) space. More specifically, if the given graph is \(r\)-regular, then it requires \(O(r^{k+1}\log(r)\,n)\) time and \(O(r^{k}+n)\) space. Therefore, if the graph is sparse, we can implement Local \(k-\)WL for a larger value of \(k\). Running \(k-\)WL locally does not depend exponentially on the size of the graph, whereas it is not feasible to run \(k-\)WL on the entire graph for larger values of \(k\).

```
1:Let \(H\) be a pattern having a dominating vertex \(u\) and \(G\) be the host graph.
2:for each vertex \(v\) in \(V(G)\) do
3:  if \(\text{degree}(v)+1<|V(H)|\) then
4:    skip this iteration
5:  end if
6:  Find the induced subgraph on \(N_{G}(v)\)
7:  Run \(k-\)WL on the induced subgraph on \(N_{G}(v)\)
8:  Calculate \(Count_{u,v}(H,G_{v}^{r})\)
9:end for
10:return \(\frac{\sum_{v\in V(G)}Count_{u,v}(H,G_{v}^{r})}{|Orbit_{H}(u)|}\)
```
**Algorithm 6** Counting the number of patterns of radius one

#### Counting subgraphs of radius r

In Algorithm 7, we describe how to count the number of subgraphs of radius \(r\). We iterate over all vertices, take the \(r\)-hop neighborhood around each vertex \(v\), and choose a suitable \(k\), according to the structure of the pattern, that can guarantee the count of subgraphs in the local substructure. A concrete instantiation of this localized scheme, for the radius-one case, is sketched below.
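The following Python sketch instantiates Algorithm 6 for one concrete radius-one pattern, the tailed triangle, whose dominating vertex \(u\) is its unique degree-3 vertex (so the orbit size is \(1\)). Here, directly counting copies of \(H-u\) (an edge plus one further neighbor) inside \(N_{G}(v)\) stands in for the \(k-\)WL-based count; the function name is our own:

```python
import networkx as nx

def count_tailed_triangles(G: nx.Graph) -> int:
    """Algorithm 6 instantiated for the tailed triangle (as a subgraph).
    Mapping the dominating vertex u to v, a copy of H - u inside N_G(v)
    is an edge (the opposite triangle edge) plus one further neighbor
    (the tail), so Count_{u,v} = |E(G[N_G(v)])| * (deg(v) - 2)."""
    total = 0
    for v in G.nodes:
        if G.degree(v) + 1 < 4:     # degree check of Algorithm 6, |V(H)| = 4
            continue
        e = G.subgraph(G.neighbors(v)).number_of_edges()
        total += e * (G.degree(v) - 2)
    return total                     # orbit of u has size 1, nothing to divide

print(count_tailed_triangles(nx.complete_graph(4)))  # -> 12
```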
## 7 Fragmentation

Here, we discuss the _fragmentation_ technique, which differs from the localized methods we have seen so far. From Table 1, we have seen that Local \(k-\)WL (or (Local + Layer) \(k-\)WL) is sufficient for obtaining an exact count for the patterns given in the table. Given a pattern \(P\) that is more complicated than the patterns in Table 1, we fragment \(P\) into simpler patterns whose exact counts are known. The subgraph GNNs proposed earlier operate on subgraphs of the host graph; this technique is scalable to large graphs, and subgraph GNNs are more expressive and efficient than traditional GNNs. We therefore explore the expressiveness obtained when the pattern itself is also fragmented into smaller subpatterns. The fragmentation method fragments the pattern \(P\) into smaller subpatterns and counts these subpatterns to obtain the count of \(P\) in the host graph. As described in Section 6, the value of \(k\) depends on the size of the pattern (induced subgraph count) and its structure (subgraph count). Thus, even though \(htw(P)\) may be large, if we divide \(P\) into subpatterns, then the \(k\) required to guarantee the count is reduced. This provides more expressiveness for smaller \(k\), allowing us to count patterns that cannot be counted by directly applying Local \(k-\)WL. Given a pattern, we apply the same fragmentation on \(G_{v}^{r}\), so the number of occurrences of \(H\) in \(G_{v}^{r}\) can be computed by combining the counts of the simpler patterns. Instead of training a GNN for counting \(H\), we can design GNNs for learning the easier tasks (i.e., for counting the simpler patterns) and combine the outputs of those models.

It should be noted that the fragmentation into smaller subgraphs depends on the structure of the pattern \(H\). We demonstrate this technique for counting induced tailed triangles in Figure 2. As seen in the figure, the tailed triangle pattern can be fragmented into two parts: the pendant vertex as the key vertex, and an attributed triangle. The colors assigned to the nodes of the attributed triangle depend on the distance of the nodes from the key node. Thus, the task of counting tailed triangles reduces to counting attributed triangles, as all the vertices at level 1 are connected to the root.

Suppose the task is to count the number of chordal cycles appearing as induced subgraphs. If we pick a vertex \(v\) of degree three as the key vertex, then it is enough to search the neighbourhood of \(v\) in the host graph. In \(N_{G}(v)\), counting the number of \(2\)-stars gives the count of chordal cycles appearing as subgraphs. If we eliminate the appearances of \(K_{4}\), we obtain the exact count of chordal cycles appearing as induced subgraphs. In that case, we count the number of triangles in \(N_{G}(v)\), which gives the exact count of \(K_{4}\). Using the fragmentation technique, we show that \(1-\)WL alone is sufficient for obtaining exact counts of induced subgraphs of certain sizes.

**Idea of the fragmentation algorithm (Algorithm 8):** Given a graph \(G\) and a pattern \(P\), we first fix a vertex \(u\in V(P)\) as the key vertex. Assume that the radius of the pattern is \(r\). Then, for counting \(P\) locally, it is sufficient to take the \(r\)-hop neighbourhood \(G_{v}^{r}\) of each vertex \(v\) of \(G\), as has been shown in Lemma 3 and Lemma 5; we have also proved above that counting locally is sufficient for both subgraph and induced subgraph counts. We fragment the pattern \(P\) into smaller subpatterns \(P_{1},P_{2},\ldots,P_{l}\). Based on the structure of \(P\), we consider the subgraphs of \(G_{v}^{r}\) in which each subpattern \(P_{i}\) is required to be counted. For each subpattern \(P_{i}\) of \(P\), we make a list of subgraphs \(G_{v}^{r}(1),G_{v}^{r}(2),\ldots,G_{v}^{r}(t)\) of \(G_{v}^{r}\) in which \(P_{i}\) needs to be counted. We aggregate these lists into a dictionary \(\mathcal{L}\) with the \(P_{i}\) as keys. It should be noted that the decomposition of \(P\) into the \(P_{i}\)'s is such that their counts can be calculated; that is, we have learnt models \(M_{i}\), corresponding to each \(P_{i}\), which count the number of occurrences of \(P_{i}\). The array \(c\) stores the counts of the \(P_{i}\)'s in each subgraph of \(G_{v}^{r}\). For each vertex, we use appropriate functions \(\alpha\) and \(\beta\) to combine the counts in \(c\) to obtain the count of \(P\) in \(G_{v}^{r}\). Finally, the function \(\gamma\) applies the normalizing factor to obtain the actual count of the pattern in \(G\).
```
1:Let \(G\) be the host graph and \(P\) be the list of patterns,
2:\(\mathcal{L}\) be the dictionary of subgraph rules associated with the subpatterns \(P_{i}\),
3:\(M=\{M_{1},\ldots,M_{l}\}\) the list of learnt models for counting the \(P_{i}\)'s, where \(l=|P|\)
4:\(a\leftarrow\) zero array of size \(|V(G)|\)
5:for each vertex \(v\) in \(V(G)\) do
6:  Extract \(G_{v}^{r}\)
7:  \(b\leftarrow\) zero array of size \(l\)
8:  for each pattern \(P_{i}\) in \(\mathcal{L}\) do
9:    \(c\leftarrow\) zero array of size \(s\), where \(s=|\mathcal{L}(P_{i})|\)
10:   for each rule \(k\) in \(\mathcal{L}(P_{i})\) do
11:     Extract \(G_{v}^{r}(k)\)
12:     \(c[k]=M_{i}(G_{v}^{r}(k))\)
13:   end for
14:   \(b[i]=\alpha(c)\)
15: end for
16: \(a[v]=\beta(b)\)
17:end for
18:\(Count(P,G)=\gamma(a)\)
19:return \(Count(P,G)\)
```
**Algorithm 8** Fragmentation Algorithm

**Theorem 10**.: _Using the fragmentation method, we can count induced tailed triangles, chordal \(C_{4}\) and \(3\)-stars by running Local \(1-\)WL._

Proof.: For the tailed triangle, we fix the pendant vertex as the key vertex (refer to Figure 2). We then look for the count of triangles such that exactly one node of the triangle is adjacent to the key vertex. We find the induced subgraph on the \(2\)-hop neighborhood of each key vertex and color the vertices at distance \(i\) with color \(i\). The problem of counting the number of tailed triangles then reduces to counting the number of colored triangles in the colored graph such that one node of the triangle is colored \(1\) and the remaining two nodes are colored \(2\). We can find the count of colored triangles using \(1-\)WL on the induced subgraph by Corollary 1. This count of colored triangles is the same as \(IndCount_{(u,v)}(\text{tailed triangle},G_{v}^{r})\). Now, using Lemma 3, we can say that the fragmentation technique can count tailed triangles appearing as induced subgraphs using \(1-\)WL.

Figure 2: Fragmentation for counting tailed triangles.

Consider the pattern chordal \(C_{4}\). We have proved that \(1-\)WL can count the number of chordal \(C_{4}\) appearing as subgraphs. So, to count the number of chordal \(C_{4}\) appearing as induced subgraphs, we only have to eliminate the count of \(K_{4}\). When we fix one vertex of degree \(3\) as the key vertex, we can easily compute the count of \(K_{1,2}\) in the neighborhood. We then have to eliminate all \(3\)-tuples appearing as triangles in the neighborhood of the key vertex, and we can easily count the number of triangles in the neighborhood of each vertex. This gives the exact count of chordal \(C_{4}\) appearing as subgraphs in the local structure. Using Lemma 3, we can find \(IndCount(\text{chordal }C_{4},G)\).

Consider the pattern \(3\)-star. Here, we choose any pendant vertex as the key vertex. We then have to compute the number of \(K_{1,2}\) in which the center vertex of the star is connected to the key vertex. We can easily count the number of colored \(K_{1,2}\) in the \(2\)-hop neighborhood of the key vertex. However, a triangle can also be included in this count, so we have to eliminate the \(3\)-tuples forming a triangle. Again, using the approach discussed above, we can count the number of colored triangles, and this yields the exact count of colored induced \(K_{1,2}\). Again, using Lemma 3, we can find \(IndCount(3\text{-star},G)\).
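To make the tailed-triangle argument concrete, here is a small Python rendering of this fragmentation count; the distance coloring plus colored-triangle counting in the \(2\)-hop ball stands in for the quantities that \(1-\)WL would compute, and the helper name is our own:

```python
import networkx as nx
from itertools import combinations

def induced_tailed_triangles(G: nx.Graph) -> int:
    """Fragmentation count of induced tailed triangles: fix each vertex v
    as the pendant (key) vertex, color its 2-hop ball by distance, and
    count triangles with color multiset {1, 2, 2}, i.e. exactly one
    triangle node adjacent to v."""
    total = 0
    for v in G.nodes:
        dist = nx.single_source_shortest_path_length(G, v, cutoff=2)
        ball = G.subgraph([u for u in dist if u != v])
        for a, b, c in combinations(ball.nodes, 3):
            if ball.has_edge(a, b) and ball.has_edge(b, c) and ball.has_edge(a, c):
                if sorted(dist[x] for x in (a, b, c)) == [1, 2, 2]:
                    total += 1
    return total

# triangle {0,1,2} with a pendant vertex 3 attached to 0:
G = nx.Graph([(0, 1), (1, 2), (0, 2), (0, 3)])
print(induced_tailed_triangles(G))  # -> 1
```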
**Theorem 11**.: _Using the fragmentation technique, we can count all patterns of size \(4\) appearing as induced subgraphs by just running Local \(1-\)WL._

Proof.: In Table 2, we describe how the fragmentation technique can be leveraged to count all the induced subgraphs of size \(4\).

This shows that for \(k=1\), the fragmentation technique is more expressive than \(S_{k+3}\). We can also enlist more graphs in which a larger pattern can be seen as the union of fragments of smaller patterns; in this way, the technique covers all the graphs mentioned in Chen et al. (2020). One can see that all the formulae use functions that can be computed by \(1-\)WL: the number of vertices, the number of edges, and the degree sequence can each be computed after one iteration of \(1-\)WL, and all other functions (formulae) are linear combinations of the functions computed earlier, the number of vertices, or the number of edges. In the structures drawn in Table 2 in the original, the key vertex is highlighted in light green and the other vertices are shown in black.

\begin{table}
\begin{tabular}{|c|c|}
\hline
Pattern & Formula \\
\hline
G1 & \(\binom{n}{2}-|E|\) \\
\hline
G2 & \(|E|\) \\
\hline
G3 & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{1},G-G[N_{G}[v]])}{3}\) \\
\hline
G4 & \(\sum_{v}IndCount(G_{2},G-G[N_{G}[v]])\) \\
\hline
G5 & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{2},G[N_{G}(v)])}{3}\) \\
\hline
G6 & \(\sum_{v}\left(\binom{degree(v)}{2}-|E(N_{G}(v))|\right)\) \\
\hline
G7 & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{3},G-G[N_{G}[v]])}{4}\) \\
\hline
G8 & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{5},G[N_{G}(v)])}{4}\) \\
\hline
G9 & \(\frac{\sum_{v}IndCount_{(u,v)}(G_{4},G-G[N_{G}[v]])}{2}\) \\
\hline
G10 & \(\frac{\sum_{v}\left(IndCount(G_{6},N_{G}(v))-IndCount(G_{5},N_{G}(v))\right)}{2}\) \\
\hline
G11 & \(\sum_{v}IndCount(G_{6},G-G[N_{G}[v]])\) \\
\hline
G12 & \(\sum_{v}Count(\text{attributed triangle},G[N_{G}(v)\cup N_{G}(N_{G}(v))])\) \\
\hline
\end{tabular}
\end{table}
Table 2: Fragmentation formulas for patterns of size at most 4. (The structure drawings for G1 through G12, with the key vertex highlighted in light green, appear as figures in the original and are omitted here.)

## 8 Theoretical Comparison of Graph Neural Networks

In this section, we compare the time complexity and expressiveness of the GNN hierarchies proposed in Papp and Wattenhofer (2022) with our methods. From Theorem 3, it is clear that \(k-\)WL is less expressive than Local \(k-\)WL. We have also shown that the space and time required by Local \(k-\)WL are smaller than for \(k-\)WL. \(S_{k}\) is a model in which a vertex is given an attribute based on the number of connected induced subgraphs of size at most \(k\) containing that (key) vertex. Even though it searches locally, the number of non-isomorphic graphs may be too large even for a small value of \(k\). Suppose the radius of the induced subgraph is \(r\); then it has to search up to the \(r\)-hop neighborhood. Using brute force would require \(O(n_{1}^{k})\) time to count the number of induced subgraphs of size \(k\), for each individual induced subgraph. To improve the time complexity, it must either store the previous computations, which requires a lot of space, or recompute from scratch. Thus, it would require \(O(t_{k}\times n_{1}^{k})\) time, where \(t_{k}\) is the number of distinct graphs, up to isomorphism, on at most \(k\) vertices.
Using Theorem 6, one can easily see that running \(k-\)WL locally is more expressive than \(S_{k+2}\). The \(N_{k}\) model has a preprocessing step in which it takes the \(k\)-hop neighborhood around each vertex \(v\) and assigns attributes based on the isomorphism type. In a dense graph, it is not feasible to solve the isomorphism problem in general, as the size of the induced subgraph may be some function of \(n\). Even using the best known algorithm for graph isomorphism, by Babai (2016), the time required is \(O(n_{1}^{O(\log n_{1})})\), whereas running Local \(k-\)WL requires only \(O(n_{1}^{k})\). Moreover, examples of graphs that are \(3-\)WL equivalent yet non-isomorphic are rare, so if we run \(3-\)WL locally, its expressive power matches that of \(N_{k}\) most of the time.

The \(M_{k}\) model deletes a vertex \(v\) and then runs \(1-\)WL. Papp and Wattenhofer (2022a) proposed marking \(k\) vertices in the local neighborhood instead of deleting them, and showed that marking is more expressive than deletion. The model identifies sets of \(k\) vertices in the local \(r\)-hop neighborhood of the graph. This requires \(O(n_{1}^{k+2}\log(n_{1}))\) time, since there are \(O(n_{1}^{k})\) possible choices of \(k\) vertices and running \(1-\)WL on the neighborhood requires \(O(n_{1}^{2}\log n_{1})\) time; the same overall time is required for Local \((k+1)-\)WL. That is why we compare \(M_{k-1}\) with Local \(k-\)WL in Table 3. Also, it is known that marking any \(l\) vertices and then running \(k-\)WL is less expressive than running \((k+l)-\)WL on the graph (Fürer, 2017). Plugging in the values, we see that running Local \(k-\)WL is more expressive than doing \(l\) markings and running \(1-\)WL. One can get an intuition by comparing with the \((k+1)\)-pebble bijective game: if we mark vertices, the marking pebbles are fixed, which gives less power to the Spoiler, whereas when simply running \(k-\)WL the Spoiler is free to move all the pebbles. We present a simple proof that Local \(k-\)WL is at least as expressive as \(M_{k-1}\).

**Theorem 12**.: _Local \(k-\)WL is at least as expressive as \(M_{k-1}\)._

Proof.: Let \(G_{v}^{r}\) and \(G_{u}^{r}\) be the induced subgraphs on the \(r\)-hop neighborhoods of vertices \(v\) and \(u\), respectively, and suppose \(M_{k-1}\) distinguishes \(G_{v}^{r}\) and \(G_{u}^{r}\). We claim that Local \(k-\)WL can also distinguish \(G_{v}^{r}\) and \(G_{u}^{r}\). To prove the claim, we use the bijective pebble game. \(G_{v}^{r}\) is distinguished because there exists a tuple \((v_{1},v_{2},\ldots,v_{k-1})\) such that marking these vertices and running \(1-\)WL on the graph gives a stabilized coloring of the vertices that does not match that of \(G_{u}^{r}\). Now consider two games, one corresponding to \(1-\)WL (with markings) and another to Local \(k-\)WL. For the first \((k-1)\) moves, the Spoiler chooses to play and places pebbles at \((v_{1},v_{2},\ldots,v_{k-1})\). After that, in both games there are two pebbles left and the positions of both games are the same. Let \(S_{1}\) and \(D_{1}\) be the Spoiler and Duplicator in the \((k+1)\)-pebble bijective game, and \(S_{2}\) and \(D_{2}\) be the Spoiler and Duplicator in the \(2\)-pebble bijective game. \(S_{1}\) follows the strategy of \(S_{2}\), and \(D_{2}\) follows the strategy of \(D_{1}\). We prove the claim by induction on the number of rounds. Our induction hypothesis is that the positions in both games are the same and that if \(S_{2}\) wins, then \(S_{1}\) also wins.

_Base case:_ Duplicator \(D_{1}\) proposes a bijection.
\(D_{2}\) proposes the same bijection. Now \(S_{2}\) places a pebble on some vertex \(v\); similarly, \(S_{1}\) places a pebble at \(v\). Note that the positions of the games are the same, and if \(S_{2}\) wins, then \(S_{1}\) also wins. Using the induction hypothesis, assume that the positions of both games are the same and \(S_{2}\) has not won up to round \(i\), and consider the game at round \((i+1)\). If \(S_{2}\) decides to remove a pebble, then \(S_{1}\) does the same. If \(S_{2}\) decides to play a pebble, then \(S_{1}\) also decides to play a pebble: \(D_{1}\) proposes a bijective function, \(D_{2}\) proposes the same bijective function, and when \(S_{2}\) places a pebble at some vertex \(u\), \(S_{1}\) also places a pebble at \(u\). Thus, the positions of both games are the same, and if \(S_{2}\) wins, then \(S_{1}\) also wins.

## 9 Model

Using Lemma 3 for induced subgraph counting and Lemma 5 for subgraph counting, we present the _InSigGNN_ model, which is shown in Figure 4. We have also designed a separate model, _InsideOutGNN_, for certain cases (Figure 3).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
 & **Local \(k-\)WL** & **(Local+Layer) \(k-\)WL** & **\(k-\)WL** & **\(S_{k}\)** & **\(M_{k-1}\)** \\
\hline
Expressiveness & – & – & Less & Less than Local \((k-2)-\)WL & Less \\
\hline
Time & \(O(n\times n_{1}^{k+1})\) & \(O((n_{2})^{k+1}\times rn)\) & \(O(n^{k+1})\) & \(O(t_{k}n_{1}^{k}n)\) & \(O(n_{1}^{k+1}n)\) \\
\hline
\end{tabular}
\end{table}
Table 3: Here, \(n\): number of nodes in the graph; \(n_{1}\): maximum number of nodes in an \(r\)-hop neighbourhood of a vertex; \(n_{2}\): maximum number of nodes in any two consecutive layers for a particular vertex, over all the vertices of the graph; and \(t_{k}\): number of distinct graphs, up to isomorphism, on at most \(k\) vertices. The first row compares the expressiveness of the models, and the second row compares the time complexity, with respect to our models.

### Model Description

We conducted the experiments using two different architectures, _InsideOutGNN_ and _InSigGNN_.

#### 9.1.1 InsideOutGNN Model

In the InsideOutGNN model (Figure 3), we take a graph as input and construct subgraphs with each node as the root node. For different tasks, we create the subgraphs in a different manner; for example, for counting triangles, we create the \(1\)-hop induced subgraph. These subgraphs are then taken as input for the internal GNN part of our model, for which we used GINConv layers (Xu et al., 2019). The internal GNN outputs the embeddings of the nodes present in the subgraph. We then pass these embeddings through a global add-pooling layer, whose output is treated as the embedding of the root node that was used to create the subgraph. Using this embedding, we predict the local count by passing the embedding through a linear transformation; this local count is then used to train the internal GNN. We take the union of all the subgraphs, and for the embeddings of the nodes of this union we use the embeddings learned in the internal GNN part of the model. Using these embeddings, we predict the global count of the substructure with a GIN convolutional layer in the external GNN. The motivation for splitting the model into two separate parts is to force the model to learn the local counts: if the local counts are predicted well, then we can easily obtain the global count, as the global count is just a linear function of the local counts.
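As an illustration of this description, the following PyTorch Geometric sketch shows the internal-GNN stage (two GINConv layers, add-pooling into a root embedding, and a linear head for the local count). The class name and layer sizes are illustrative choices, not the authors' released implementation:

```python
import torch.nn as nn
from torch_geometric.nn import GINConv, global_add_pool

class InternalGNN(nn.Module):
    """Internal part of InsideOutGNN: embeds one root-centered subgraph
    and predicts the local pattern count for its root node."""
    def __init__(self, in_dim: int, hidden: int = 512):
        super().__init__()
        mlp1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                             nn.Linear(hidden, hidden))
        mlp2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                             nn.Linear(hidden, hidden))
        self.conv1, self.conv2 = GINConv(mlp1), GINConv(mlp2)
        self.head = nn.Linear(hidden, 1)   # local-count readout

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        root_emb = global_add_pool(h, batch)   # one embedding per subgraph
        local_count = self.head(root_emb)      # supervised by the local count
        return h, local_count
```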
#### 9.1.2 InSigGNN

In the InSigGNN model, the internal GNN part is similar to that of the InsideOutGNN model; the architecture is shown in Figure 4. In this model, we do not transfer the embeddings learned in the internal GNN part. We use the local counts only, sum them up, and pass the sum through a linear transformation. The weights learned in the internal GNN are based only on the local counts, while the external linear transformation is learned based on the external (global) count.

### Model Usage

We know that the global count of a substructure is just a linear function of the local counts. Substructures such as \(2\)-stars and \(3\)-stars depend on the local substructures; therefore, for counting such substructures, we use the InsideOutGNN model, which uses a GIN convolutional layer in the external part of the model. Substructures such as triangles do not depend on the subgraph created with respect to the root nodes; they depend only on the number of edges in the subgraphs. We use a linear transformation on the sum of the local counts to predict the global count. Therefore, for substructures such as triangles and chordal cycles, we use a linear transformation, as those structures are not dependent on the subgraph.

Figure 3: Schematic of the InsideOutGNN model.

### Hyperparameters

We use two GIN convolutional layers for the internal GNN model. As there can be paths of length greater than one in the subgraph, two convolutional layers are beneficial for capturing the information well. In the InsideOutGNN model, we also use two GIN convolutional layers in the external part of the model. We use a learning rate of 0.0001 and a batch size of 1. We also varied the hidden dimension of the node embeddings across experiments and found the best results with a hidden dimension of 512. The experiments were conducted on an Nvidia A100 40GB GPU.

## 10 Experimental Observations

We also experimented with the fragmentation technique; the models and experimental details are described in Section 9. We used the random-graphs dataset prepared in Chen et al. (2020) and report the Mean Absolute Error (MAE) of our experiments in Table 4, comparing our results with those reported in Zhao et al. (2022). It can be observed that our model significantly outperforms the baseline. This is due to the incorporation of the counts of the patterns in the local substructures, which leads to better learning of the internal GNN (in both of our proposed models). Patterns such as the \(3\)-star and \(2\)-star require some knowledge of the overall position of the root node \(v\) in \(G_{v}^{1}\); thus, the InsideOutGNN model performs better for these patterns, as the external GNN takes the global structure of \(G\) into account. However, counts of patterns such as triangles can be computed by counting the number of edges in the \(1\)-hop neighbourhood \(G_{v}^{1}\), so the InSigGNN model, which learns the size of the orbit of the triangle, performs better. The fragmentation technique for each pattern is different and is discussed in detail in Section 9.¹

Footnote 1: The dataset and the code are present in the GitHub repository [Link].
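For contrast with the InsideOutGNN sketch above, the external part of InSigGNN reduces to a single learned linear map on the summed local counts. A minimal PyTorch rendering (our own naming) could look as follows; for triangle counting, the layer can in principle learn the orbit normalization \(1/3\):

```python
import torch
import torch.nn as nn

class InSigReadout(nn.Module):
    """External stage of InSigGNN (sketch): one linear map applied to
    the sum of the per-root local counts from the internal GNN."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)  # can learn e.g. weight 1/3 for triangles

    def forward(self, local_counts: torch.Tensor) -> torch.Tensor:
        # local_counts: shape (num_root_nodes, 1)
        return self.linear(local_counts.sum(dim=0, keepdim=True))
```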
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
 & \multicolumn{5}{c|}{**Without Fragmentation**} & \multicolumn{2}{c|}{**Fragmentation**} \\
\hline
_Models_ & _Triangle_ & _3-stars_ & _2-stars_ & \(C_{4}\) & _Chordal \(C_{4}\)_ & \(K_{4}\) & _Chordal \(C_{4}\)_ \\
\hline
Zhao et al. (2022) & 8.90E-03 & 1.48E-02 & NA & **9.00E-03** & NA & NA & NA \\
\hline
InsideOutGNN & 3.30E-03 & **2.80E-04** & **4.10E-04** & 4.40E-02 & 1.06E-02 & – & **9.14E-05** \\
\hline
InSigGNN & **6.00E-04** & 2.00E-02 & 8.30E-03 & 3.53E-02 & **3.80E-04** & **4.85E-05** & 2.30E-02 \\
\hline
\end{tabular}
\end{table}
Table 4: MAE for the subgraph count of different patterns. The results for \(2\)-stars, chordal \(C_{4}\) and \(K_{4}\) are not available in Zhao et al. (2022) and are marked as NA.

Figure 4: Schematic of the InSigGNN model.

## 11 Conclusion

In this work, we progressed toward a more precise characterization of the localized versions of WL algorithms. We showed how Local \(k-\)WL lies between the global \(k-\)WL and \((k+1)-\)WL in terms of expressiveness. We also developed strategies to make the Local \(k-\)WL algorithm more efficient by introducing techniques such as layered WL, recursive WL, and pattern fragmentation. The hope is that such generalizations of the WL algorithm will lead to a finer subdivision of the WL hierarchy as well as more efficient and expressive graph neural networks.
2306.17442
Designing strong baselines for ternary neural network quantization through support and mass equalization
Deep neural networks (DNNs) offer the highest performance in a wide range of applications in computer vision. These results rely on over-parameterized backbones, which are expensive to run. This computational burden can be dramatically reduced by quantizing (in either data-free (DFQ), post-training (PTQ) or quantization-aware training (QAT) scenarios) floating point values to ternary values (2 bits, with each weight taking value in {-1,0,1}). In this context, we observe that rounding to nearest minimizes the expected error given a uniform distribution and thus does not account for the skewness and kurtosis of the weight distribution, which strongly affects ternary quantization performance. This raises the following question: shall one minimize the highest or average quantization error? To answer this, we design two operators: TQuant and MQuant that correspond to these respective minimization tasks. We show experimentally that our approach allows to significantly improve the performance of ternary quantization through a variety of scenarios in DFQ, PTQ and QAT and give strong insights to pave the way for future research in deep neural network quantization.
Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
2023-06-30T07:35:07Z
http://arxiv.org/abs/2306.17442v1
# Designing Strong Baselines for Ternary Neural Network Quantization Through Support and Mass Equalization

###### Abstract

Deep neural networks (DNNs) offer the highest performance in a wide range of applications in computer vision. These results rely on over-parameterized backbones, which are expensive to run. This computational burden can be dramatically reduced by quantizing (in either data-free (DFQ), post-training (PTQ) or quantization-aware training (QAT) scenarios) floating point values to ternary values (2 bits, with each weight taking value in \(\{-1,0,1\}\)). In this context, we observe that rounding to nearest minimizes the expected error given a uniform distribution and thus does not account for the skewness and kurtosis of the weight distribution, which strongly affects ternary quantization performance. This raises the following question: shall one minimize the highest or average quantization error? To answer this, we design two operators: TQuant and MQuant that correspond to these respective minimization tasks. We show experimentally that our approach allows to significantly improve the performance of ternary quantization through a variety of scenarios in DFQ, PTQ and QAT and give strong insights to pave the way for future research in deep neural network quantization.

Edouard Yvinec\({}^{1,2}\), Arnaud Dapogny\({}^{2}\), Kevin Bailly\({}^{1,2}\)†

Sorbonne Université\({}^{1}\), CNRS, ISIR, F-75005, 4 Place Jussieu, 75005 Paris, France

Datakalab\({}^{2}\), 114 boulevard Malesherbes, 75017 Paris, France

Footnote †: This work has been supported by the French National Association for Research and Technology (ANRT), the company Datakalab (CIFRE convention C20/1396) and by the French National Agency (ANR) (FacIL, project ANR-17-CE33-0002). This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011013384 made by GENCI.

Quantization, Deep Learning, Computer Vision

## 1 Introduction

As the performance of deep neural networks grows, so do their computational requirements: in computer vision, popular architectures such as ResNet [1], MobileNet V2 [2] and EfficientNet [3] rely on the high expressivity of millions of parameters to effectively tackle challenging tasks such as classification [4], object detection [5] and segmentation [6]. In order to deploy these models using lower power consumption and less expensive hardware, many compression techniques have been developed to reduce the latency and memory footprint of large convolutional neural networks (CNNs). Quantization is one of the most efficient of these techniques and consists in converting floating point values with large bit-width to fixed point values encoded on a lower bit-width. Quantization can be performed in three major contexts. First, data-free quantization (DFQ) [7, 8, 9, 10, 11], where quantization is applied without data, hopefully not degrading the model accuracy. Second, post-training quantization (PTQ) [12, 13, 14]: in this setup, one seeks to tune the quantization operator given an already trained model and a calibration set (usually a fraction of the training set). Third, quantization-aware training (QAT) [15, 16, 17, 18, 19, 20, 21], in which, given a model and a training set, custom gradient descent proxies are usually implemented so as to circumvent the zero gradients that stem from the rounding operation. While nowadays most methods are successful in quantizing models to 8 or 4 bits while preserving the accuracy [22, 13, 23, 11], lower bit-widths remain an open challenge.
In particular, ternary quantization, where weights are mapped to \(\{-1,0,1\}\) [24, 12], is an extreme form of neural network quantization that comes with its own specific challenges. While most quantization techniques rely on a rounding operation [22, 7, 8, 10], they struggle to preserve high accuracies when applied in the ternary quantization regime. In this work, we show that this limitation is at least partly due to the distribution of the weights: in fact, assuming a bell-shaped distribution [7, 25], the tails of the distribution, which represent very few parameters, will be quantized to non-zero values. Consequently, the naive rounding operation erases most of the learned operations, which leads to huge accuracy drops. Based on this observation, we conclude that the naive quantization operator, which amounts to minimizing the expected error given a uniform distribution (round to nearest), is particularly ill-suited to ternary quantization. This leads to the following question: _shall one minimize the highest quantization error or the average quantization error on the weights?_ To answer this question, we design two quantization operators: TQuant, which minimizes the highest quantization error, and MQuant, which minimizes the average quantization error assuming a bell-shaped distribution. In Figure 1, we illustrate these two operators. We empirically show that these operators achieve strong improvements in a broad range of scenarios for DFQ, PTQ and QAT.

## 2 Methodology

Let us consider a network \(F\) with \(L\) layers and weights \(W_{l}\) for each layer indexed by \(l\). We note \(Q\) the \(b\)-bit quantization operator that quantizes the weights \(W_{l}\) from \([\min\{W_{l}\};\max\{W_{l}\}]\subset\mathbb{R}\) to the quantization interval \([-2^{b-1};2^{b-1}-1]\cap\mathbb{Z}\), and \(W_{l}^{q}\) the quantized weights. It is defined as

\[W^{q}=Q(W)=\left\lfloor\frac{W}{\lambda}\right\rceil \tag{1}\]

where \(\lfloor\cdot\rceil\) denotes the rounding operation and \(\lambda\) is a row-wise rescaling factor selected such that each coordinate of \(W^{q}\) falls in the quantization interval, _i.e._ \((\lambda)_{i}\) is the maximum between \(\frac{|\min\{(W)_{i}\}|}{2^{b-1}}\) and \(\frac{|\max\{(W)_{i}\}|}{2^{b-1}-1}\). When the number of bits \(b\) is large, e.g. \(b=4\) or \(b=8\), most of the scalar values \(w\) of \(W\) will not be quantized to the extreme values of the quantized interval \([-2^{b-1};2^{b-1}-1]\), and as such not considering the tails of the distribution does not affect performance. This is not true when \(b=2\) (ternary), as can be seen in Figure 1 (a): only the tails of the distribution are quantized to non-zero values, and these represent very few values. To reduce this effect, we propose two quantization operators, namely TQuant (\(Q_{T}\)), which equalizes the support of each ternary value and thereby minimizes the highest scalar quantization error, and MQuant (\(Q_{M}\)), which equalizes their mass.

### Support Equalization

We define the ternary quantization operator \(Q_{T}\) such that the measures of the pre-images of \(-1\), \(0\) and \(1\) are equal, _i.e._

\[\begin{cases}Q_{T}^{-1}(\{-1\})=\left[-\lambda;-\frac{\lambda}{3}\right]\\ Q_{T}^{-1}(\{0\})=\left[-\frac{\lambda}{3};\frac{\lambda}{3}\right]\\ Q_{T}^{-1}(\{1\})=\left[\frac{\lambda}{3};\lambda\right]\end{cases} \tag{2}\]

Consequently, all quantized values are mapped from ranges equal in size, each a third of the original support. This is achieved by using two thirds of the scaling factor in Equation 1 instead.
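As a concrete reference, here is a minimal PyTorch sketch of the rounding-based ternary quantizer of Equation 1 together with its support-equalized variant \(Q_{T}\), obtained by rescaling \(\lambda\) by \(2/3\); the function name and per-row reduction are our own choices, not the paper's released code:

```python
import torch

def ternary_quantize(W: torch.Tensor, scale_mult: float = 1.0):
    """Ternary quantization (Eq. 1) with a rescaled row-wise lambda.
    scale_mult = 1.0 gives the naive operator Q; scale_mult = 2/3 gives
    the support-equalized Q_T, so that |w| < lambda/3 maps to 0."""
    lam = W.abs().amax(dim=1, keepdim=True)      # row-wise lambda
    scale = lam * scale_mult
    W_q = torch.round(W / scale).clamp_(-1, 1)   # values in {-1, 0, 1}
    return W_q, scale                            # dequantize as W_q * scale

W = torch.randn(4, 64)
W_naive, _ = ternary_quantize(W)              # only the tails reach +/-1
W_tquant, _ = ternary_quantize(W, 2.0 / 3.0)  # equal thirds of the support
```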
Then, the maximal error between any scalar \(w\in W\) and its quantization \(Q^{-1}(Q(w))\) is \(\frac{\lambda}{3}\). This operator balances the support of the original weight values over the quantization space \(\{-1,0,1\}\); thus, \(Q_{T}\) minimizes the maximal error per weight value. However, it does not balance the mass: the number of weight values assigned to \(-1\) is not necessarily equal to the number of values assigned to \(1\) or \(0\). To achieve mass equalization, we introduce MQuant.

### Mass Equalization

For each value \(v\in\{-1,0,1\}\), the spaces \(Q_{M}^{-1}(\{v\})\) have equal mass. This is achieved using \(\lambda\times\frac{5}{7\sqrt{2}}\) as the scaling factor.

**Theorem 1**.: _Under a centered Gaussian prior, the \(Q_{M}\) operator minimizes the expected error per scalar weight value._

We provide the proof in Appendix A. This theoretical result allows us to compare the behaviour of the error in ternary quantization. _Which is more important: minimizing the maximum error or the expected error from quantization?_ In what follows, we provide empirical answers to this question and show that both operators lead to significantly more robust baselines for ternary quantization.

## 3 Experiments

We evaluate the proposed quantization operators in three contexts: quantization without data (data-free), quantization with a calibration set (PTQ), and quantization with the full dataset (QAT). To provide insightful results for real-world applications, we consider standard CNN backbones on ImageNet [4], such as ResNet 50 [1], MobileNet V2 [2] and EfficientNet B0 [3]. In the context of tiny machine learning, we also consider ResNet 8 on Cifar10.

### Data-Free Quantization

For our evaluation in the context of data-free quantization, we compare our two operators TQuant and MQuant with the baseline SQuant [9], which achieves state-of-the-art accuracy in data-free quantization. In a similar vein as [10], we evaluate our approach using different expansion orders, which allows us to find several accuracy-computational burden trade-offs.

Figure 1: Comparison of the ternary distribution (colored Dirac distributions) from different quantization operators: the naive quantization operator \(Q\), the mass balancing operator \(Q_{M}\) and the proposed ternary operator \(Q_{T}\).

In Figure 2, we compare the influence of the quantization operator on the accuracy for ternary weight quantization. We observe across all three tested architectures that both MQuant and TQuant offer higher accuracies than SQuant by a significant margin. For instance, an expansion of order \(3\) with TQuant on ResNet 50 offers \(71.72\) points higher accuracy, and MQuant offers \(58.83\) points higher accuracy for an order 4 expansion. In parallel, TQuant reaches full-precision accuracy with half the expansion order of SQuant and \(50\%\) less than MQuant, which shows the better performance of the proposed method. This suggests that the proposed ternary-specific operators TQuant and MQuant enable overall stronger baselines than SQuant. In Figure 3, we apply the expansion quantization to both weights and activations and observe very similar results. For instance, on MobileNet V2 and EfficientNet B0, only TQuant reaches the full-precision accuracy within a reasonable expansion order. It is worth noting that MQuant is significantly less efficient on activation quantization due to the lack of data.
Formally, the scaling factors are derived from the standard deviations stored in the batch normalization layers, which give a much better estimation of the maximum values (TQuant) than of the cdf (MQuant). In a data-driven context, MQuant would be very similar to the method applied in [26]. However, TQuant does not suffer from this limitation and vastly outperforms SQuant as a quantization operator for data-free quantization in ternary representation.

### Post-Training Quantization

We modified the two most popular methods, AdaRound [13] and BrecQ [14], to work with both MQuant and TQuant for ternary weight quantization (W2/A32). These techniques consist in learning, for each scalar value, whether it shall be rounded up or down; the procedure is agnostic to the scaling factor \(\lambda\). In our results, the rounding operation is determined by the PTQ method while the scales are given by the operator. In Table 1, we report our results on PTQ. First, we observe a marginal slow-down in terms of processing time when using our custom operators, which is due to the added multiplicative term on the weight scaling. It is worth noting that the resulting quantized models will all share the exact same runtime. Second, we observe a significant accuracy improvement when using TQuant with both PTQ methods, adding \(28.7\)% and \(32.22\)% accuracy with AdaRound and BrecQ respectively. Furthermore, it appears that MQuant even outperforms TQuant, which suggests that when using a calibration set, minimizing the average quantization error offers better performance than minimizing the highest quantization error.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
PTQ method & operator & accuracy & Processing Time \\
\hline
– & – & 89.100 & – \\
\hline
\multirow{3}{*}{AdaRound} & native & 11.790 \(\pm\) 3.210 & 5m01 \\
 & MQuant & **42.910** \(\pm\) 0.620 & 5m18 \\
 & TQuant & 40.490 \(\pm\) 0.250 & 5m18 \\
\hline
\multirow{3}{*}{BrecQ} & native & 25.780 \(\pm\) 2.440 & 3m45 \\
 & MQuant & **63.540** \(\pm\) 0.850 & 3m50 \\
 & TQuant & 58.000 \(\pm\) 1.120 & 3m50 \\
\hline
\end{tabular}
\end{table}
Table 1: Accuracy results of a quantized (W2/A32) ResNet 8 for Cifar10. The process is performed on an RTX 2070. We report the quantization pre-processing time in minutes (m).

### Quantization Aware Training

In QAT, the quantized weights and their scales are learned from scratch. In practice, the weights are kept in full precision (float32) during the training process: the forward pass uses their quantized version while the backward pass updates the full-precision values. In our tests, the quantized values applied in the forward passes are derived using the naive rounding operator (Equation 1) and scales from the given operator. In Table 2, we report our results for ResNet 8 on Cifar10.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
 & Baseline & MQuant & TQuant \\
\hline
accuracy & 42.910 \(\pm\) 14.61 & 68.250 \(\pm\) 6.26 & **82.620** \(\pm\) 2.43 \\
\hline
\end{tabular}
\end{table}
Table 2: Accuracy results of ResNet 8 for Cifar10, quantized in W2/A4 using straight-through estimation [27].

Figure 3: Comparison between the proposed TQuant, MQuant and the state-of-the-art SQuant [9] operator for ternary weights and activation expansion (W2/A2).

Figure 2: Comparison between the proposed TQuant, MQuant and the state-of-the-art SQuant [9] operator for ternary weights expansion and fixed 8-bit activations (W2/A8). The ImageNet top-1 accuracy is reported for ResNet 50, MobileNet V2 and EfficientNet B0.
We observe that both TQuant and MQuant improve the accuracy of the resulting model, with TQuant almost reaching full-precision accuracy. TQuant also stabilizes the result, as the standard deviation of the accuracy drops from the original \(14.61\)% down to \(2.43\)%. We conclude that, in QAT, minimizing the highest quantization error provides the best results for ternary quantization.

From all our experiments in different quantization contexts, we deduce several insights on ternary quantization:

* when _no data_ is available, we should minimize the _maximum_ quantization error (TQuant),
* when _few data_ are available, we should minimize the _average_ quantization error (MQuant),
* when _all the data_ is available, we should minimize the _maximum_ quantization error (TQuant).

## 4 Conclusion

In this work, we investigated ternary quantization of deep neural networks. This very low-bit representation offers massive inference speed-ups and energy consumption reductions, enabling the deployment of large neural networks on edge devices, but at the cost of accuracy. We highlight that part of the accuracy degradation arises from the rounding operation itself, which is unsuitable for preserving the predictive function. To tackle this limitation, we propose two quantization operators, namely TQuant and MQuant, that respectively minimize the highest and average quantization errors assuming the correct empirical prior. Extensive testing in data-free quantization, post-training quantization and quantization-aware training contexts shows that the proposed ideas, albeit simple, lead to stronger baselines for ternary deep neural network compression compared with existing state-of-the-art methods.

## Appendix A Proof of Theorem 1

Proof.: Let \(w\) be a scalar value from a symmetric distribution \(W\) with support \(]-\lambda;\lambda[\). A quantization operator \(Q_{a}^{q}\) is defined by two parameters \(a\) and \(q\) such that

\[Q_{a}^{q}:w\mapsto\begin{cases}q&\text{if }w\geq a\\ 0&\text{if }w\in\,]-a;a[\\ -q&\text{otherwise}\end{cases} \tag{3}\]

We want to solve the following minimization problem:

\[\min_{a,q}\mathbb{E}\left[\left\|Q_{a}^{q}(w)-w\right\|\right] \tag{4}\]

We develop the expression \(\mathbb{E}\left[\left\|Q_{a}^{q}(w)-w\right\|\right]\) as a sum of integrals:

\[\mathbb{E}\left[\left\|Q_{a}^{q}(w)-w\right\|\right]=\int_{-\lambda}^{-a}\left\|w+q\right\|d\mathbb{P}(w)+\int_{-a}^{a}\left\|w\right\|d\mathbb{P}(w)+\int_{a}^{\lambda}\left\|w-q\right\|d\mathbb{P}(w) \tag{5}\]

Assuming \(\|\cdot\|\) is the quadratic error, \(\int_{a}^{\lambda}\left\|w-q\right\|d\mathbb{P}(w)\) is minimized by \(q=\mathbb{E}[\mathbbm{1}_{[a;\lambda]}w]\). By hypothesis, the distribution is symmetric, _i.e._ \(\mathbb{P}(w)\) is even.
Therefore \(\mathbb{E}[\mathbbm{1}_{[-\lambda;-a]}w]=-\mathbb{E}[\mathbbm{1}_{[a;\lambda]}w]\) and

\[\mathbb{E}\left[\left\|Q_{a}^{q}(w)-w\right\|\right]=\mathbb{V}[\mathbbm{1}_{[-a;a]}w]+2\mathbb{V}[\mathbbm{1}_{[a;\lambda]}w] \tag{6}\]

Assuming a Gaussian prior over \(W\) (centered and reduced for the sake of simplicity), the variance terms correspond to variances of truncated Gaussian distributions [28]:

\[\begin{cases}\mathbb{V}[\mathbbm{1}_{[-a;a]}w]=1-\frac{2a\phi(a)}{2\Phi(a)-1}\\ \mathbb{V}[\mathbbm{1}_{[a;\lambda]}w]=1-\frac{\lambda\phi(\lambda)-a\phi(a)}{\Phi(\lambda)-\Phi(a)}-\left(\frac{\phi(\lambda)-\phi(a)}{\Phi(\lambda)-\Phi(a)}\right)^{2}\end{cases} \tag{7}\]

where \(\phi\) is the density function of the Gaussian distribution and \(\Phi\) the cumulative distribution function. To solve the minimization problem, we search for the critical points in \(a\). We recall that \(\phi^{\prime}(a)=-2a\phi(a)\) and \(\Phi^{\prime}(a)=\phi(a)\).

\[\begin{split}\frac{\partial\mathbb{E}\left[\left\|Q_{a}^{q}(w)-w\right\|\right]}{\partial a}&=2\phi(a)\frac{(2a^{2}+1)(2\Phi(a)-1)-2a\phi(a)}{(2\Phi(a)-1)^{2}}\\&+\phi(a)\frac{\lambda\phi(\lambda)-a\phi(a)-(2a^{2}+1)(\Phi(\lambda)-\Phi(a))}{(\Phi(\lambda)-\Phi(a))^{2}}\\&+\phi(a)\frac{\phi(\lambda)-\phi(a)-2a(\Phi(\lambda)-\Phi(a))}{(\Phi(\lambda)-\Phi(a))^{2}}\cdot\frac{\phi(\lambda)-\phi(a)}{\Phi(\lambda)-\Phi(a)}\end{split} \tag{8}\]

To solve \(\frac{\partial\mathbb{E}[\|Q_{a}^{q}(w)-w\|]}{\partial a}=0\) for \(a\), we simplify the equation to get

\[\begin{split}2\frac{(2a^{2}+1)(2\Phi(a)-1)-2a\phi(a)}{(2\Phi(a)-1)^{2}}\left(\Phi(\lambda)-\Phi(a)\right)^{2}-\frac{(\phi(\lambda)-\phi(a))^{2}}{\Phi(\lambda)-\Phi(a)}\\=(\lambda-2a)\phi(\lambda)+a\phi(a)-(2a^{2}+1)(\Phi(\lambda)-\Phi(a))\end{split} \tag{9}\]

To solve this equation, we assume that \(\Phi(\lambda)-\Phi(a)\approx\Phi(-a)\) and \(\phi(\lambda)\approx 0\). Consequently, we obtain the second-order polynomial \(2a^{2}+1-5a\phi(a)-3\phi(a)^{2}=0\). Solving the polynomial in \(a\) and in \(\phi(a)\), we deduce:

\[\begin{cases}a=\frac{1}{4}\left(\sqrt{49\phi(a)^{2}-8}+5\phi(a)\right)\\ \phi(a)=\frac{1}{6}\left(\sqrt{12+49a^{2}}-5a\right)\end{cases} \tag{10}\]

We find \(a=\frac{5}{7\sqrt{2}}\), and as such \(\Phi(a)\approx\frac{2}{3}\), which corresponds to the definition of \(Q_{M}\).
2309.10225
VPRTempo: A Fast Temporally Encoded Spiking Neural Network for Visual Place Recognition
Spiking Neural Networks (SNNs) are at the forefront of neuromorphic computing thanks to their potential energy-efficiency, low latencies, and capacity for continual learning. While these capabilities are well suited for robotics tasks, SNNs have seen limited adaptation in this field thus far. This work introduces a SNN for Visual Place Recognition (VPR) that is both trainable within minutes and queryable in milliseconds, making it well suited for deployment on compute-constrained robotic systems. Our proposed system, VPRTempo, overcomes slow training and inference times using an abstracted SNN that trades biological realism for efficiency. VPRTempo employs a temporal code that determines the timing of a single spike based on a pixel's intensity, as opposed to prior SNNs relying on rate coding that determined the number of spikes; improving spike efficiency by over 100%. VPRTempo is trained using Spike-Timing Dependent Plasticity and a supervised delta learning rule enforcing that each output spiking neuron responds to just a single place. We evaluate our system on the Nordland and Oxford RobotCar benchmark localization datasets, which include up to 27k places. We found that VPRTempo's accuracy is comparable to prior SNNs and the popular NetVLAD place recognition algorithm, while being several orders of magnitude faster and suitable for real-time deployment -- with inference speeds over 50 Hz on CPU. VPRTempo could be integrated as a loop closure component for online SLAM on resource-constrained systems such as space and underwater robots.
Adam D. Hines, Peter G. Stratton, Michael Milford, Tobias Fischer
2023-09-19T00:38:05Z
http://arxiv.org/abs/2309.10225v2
# VPRTempo: A Fast Temporally Encoded Spiking Neural Network for Visual Place Recognition

###### Abstract

Spiking Neural Networks (SNNs) are at the forefront of neuromorphic computing thanks to their potential energy-efficiency, low latencies, and capacity for continual learning. While these capabilities are well suited for robotics tasks, SNNs have seen limited adaptation in this field thus far. This work introduces a SNN for Visual Place Recognition (VPR) that is both trainable within minutes and queryable in milliseconds, making it well suited for deployment on compute-constrained robotic systems. Our proposed system, VPRTempo, overcomes slow training and inference times using an abstracted SNN that trades biological realism for efficiency. VPRTempo employs a temporal code that determines the _timing_ of a _single_ spike based on a pixel's intensity, as opposed to prior SNNs relying on rate coding that determined the _number_ of spikes; improving spike efficiency by over 100%. VPRTempo is trained using Spike-Timing Dependent Plasticity and a supervised delta learning rule enforcing that each output spiking neuron responds to just a single place. We evaluate our system on the Nordland and Oxford RobotCar benchmark localization datasets, which include up to 27k places. We found that VPRTempo's accuracy is comparable to prior SNNs and the popular NetVLAD place recognition algorithm, while being several orders of magnitude faster and suitable for real-time deployment - with inference speeds over 50 Hz on CPU. VPRTempo could be integrated as a loop closure component for online SLAM on resource-constrained systems such as space and underwater robots.

## I Introduction

Spiking neural networks (SNNs) are computational tools that model how neurons in the brain send and receive information [64]. SNNs have attracted significant interest due to their energy efficiency, low-latency data processing, and deployability on neuromorphic hardware such as Intel's Loihi or SynSense's Speck [41, 42]. However, modeling the complexity of biological neurons limits the computational efficiency of SNNs, especially where resource-constrained systems with real-time applications are concerned. An SNN system that trades biological realism for overall system efficiency would be well suited to a variety of robotics tasks [1, 57, 58, 61].

One such task that could benefit from fast and efficient SNNs is the visual place recognition (VPR) problem, where incoming query images are matched to a potentially very large reference database, with a range of applications in robot localization and navigation [3, 5, 21, 22, 30, 49, 50, 65], including as a loop closure component in Simultaneous Localization and Mapping (SLAM) [12, 23, 50, 62, 63]. However, VPR remains challenging, as query images often exhibit significant appearance changes when compared to the reference images, with factors such as time of day, seasonal changes, and weather contributing to these differences [33, 35, 66]. SNNs represent a unique way to solve VPR tasks thanks to their low latency and energy efficiency, especially where these are important considerations for the overall design of a robot [26]. The flexible and adaptive learning that SNNs take advantage of makes them ideal for VPR tasks, as network connections and weights learn unique place features from traversal datasets. SNNs traditionally use a rate encoding method with substantial computational costs due to its reliance on spike counts [13, 17, 46].
Our work instead adopts a temporal encoding strategy inspired by central information storage techniques used by the brain, reducing the overall content the network processes during learning [52, 54]. This takes a different approach to what information each spike represents and transmits compared to previous studies such as those by Hussaini et al. [21, 22]. We leverage a simplified SNN framework [54], modular networks [6, 18, 25], and one-hot encoding for supervised training to substantially decrease training and inference times. Compared with previous systems, our SNN achieves query speeds exceeding 50 Hz on CPU hardware for large reference databases (\(>\)27k places) and even higher inference speeds when deployed on GPUs (as high as 500 Hz), indicating its potential usefulness for real-time applications [5, 19, 21, 30, 32, 36, 65]. Specifically, the key contributions of this work are:

1. We present VPRTempo, a novel SNN system for VPR that for the first time encodes place information via a temporal spiking code (Figure 1), significantly increasing the information content of each spike.
2. We significantly lower the training time of spiking networks to sub-hour time frames _and_ increase query speeds to real-time capability on both CPUs and GPUs, with the potential to represent tens of thousands of places even in resource-limited compute scenarios.
3. We demonstrate that our lightweight and highly compute-efficient system results in comparable performance to popular place recognition systems like NetVLAD [5] on the Nordland [40] and Oxford RobotCar [34] benchmark datasets.

To foster future research and development, we have made the code available at: [https://github.com/QVPR/VPRTempo](https://github.com/QVPR/VPRTempo)

## II Related Works

In this section, we review previous works on VPR (Section II-A), how SNNs have been used and deployed in robotics (Section II-B), using SNNs to perform VPR tasks (Section II-C), and using temporal encoding in SNNs (Section II-D).

### _Visual place recognition_

Visual place recognition (VPR) is an important task in robotic navigation and localization, providing an accurate, and ideally comprehensive, representation of a robot's environment and surroundings. The basic principle of VPR is to match single or multiple query image sets to previously seen or traversed locations in a reference database [33, 35, 50, 66]. This remains challenging for numerous reasons, including but not limited to changes in viewpoint, time of day, or severe weather fluctuations [15, 51]. Several systems have been deployed to tackle this problem, ranging from simple approaches such as Sum of Absolute Differences (SAD) [36], through handcrafted systems like DenseVLAD [59], to learned approaches like NetVLAD [5, 36] and many others including [3, 30, 65]. These systems are commonly used in localization tasks such as loop closure in Simultaneous Localization And Mapping (SLAM) [60]. In this work, we are interested in alternative spiking neural network solutions to place recognition, which have desirable characteristics such as online adaptation and energy efficiency, especially when deployed on specialized neuromorphic hardware [2, 41, 42, 44].

### _SNNs in robotics_

Thanks to these characteristics, SNNs have been deployed to perform a variety of robotics tasks, including in the physical interaction domain for robotic limb control and manipulation [57], scene understanding [27], and object tracking [29].
Of significant interest is the potential to combine neuromorphic algorithms and hardware for these tasks [61], since this has the capacity to decrease compute time and conserve energy consumption where this is a concern (e.g., battery-powered autonomous vehicles). However, it is important to note that the deployment and use cases of SNNs in robotics are limited and have been mostly constrained to simulations or indoor/small-scale environments [21, 28].

### _Spiking neural networks for robot localization_

Neuromorphic computing involves the development of hardware, algorithms, and sensors inspired by neuroscience to solve a variety of tasks [9, 55, 56, 61, 65]. Clearly, the brain is robustly capable of place recognition in a variety of contexts, providing a solid rationale to explore neuromorphic systems for VPR-related challenges. A recent study exhibited prolonged training and querying times in an SNN developed with the Brian2 simulator, due to the reliance on rate encoding and the modeling of complex neuronal dynamics [21]. Robotic localization tasks have also employed SNNs to perform SLAM [14, 56], to encode a contextual-cue-driven multi-modal hybrid sensory system for place recognition [65], and for navigation [16, 31]. Our approach mitigates traditional barriers to deploying SNNs in real-time compute scenarios, enhancing the learning speed substantially by simplifying network processes and incorporating temporal spike encoding.

### _Using temporal encoding in spiking networks_

Temporal encoding has, to date, not been utilized for VPR tasks but is a common method used in SNNs [7, 11, 47, 52]. Sometimes referred to as a latency code, it takes the timing of a spike into consideration, rather than using the number of spikes to encode information. This is a particularly important concept when designing a system that updates weights using spike-timing dependent plasticity (STDP) [45], as the temporal information of a pre-synaptic spiking neuron can help determine which post-synaptic neurons to connect to.

Fig. 1: **A-i** Traversal image sequences from standard VPR datasets (Nordland, Oxford RobotCar) are filtered and processed to be converted into **A-ii** spikes, where the pixel intensity determines amplitude. In order to temporally encode spikes to an abstracted theta oscillation, **A-iii** amplitudes determine the spike timing during a timestep. **B-i** Once spikes have been generated, they are passed into a SNN with 3 layers: an input layer, a feature layer, and a one-hot encoded output layer where each output neuron represents one place. **B-ii** In order to scale the system for large datasets, we train individual expert module SNNs of up to 1000 places from subsets of an entire traversal dataset.

Various temporal coding strategies have been used for image classification, where they achieved very high accuracy [10, 47]. One way to achieve temporal coding is to define the pixel intensity of an image not by the number of spikes the system propagates, but by the timing of a single spike within a time-step [54]. Another method to achieve a similar outcome is to model the oscillatory activity of the brain to modulate when spikes occur, relative to the phase of a constant, periodic shift in a neuron's internal voltage [4, 24, 41].

## III Methodology

This work naturally expands previously published works on SNNs for VPR. Hussaini et al. [21] introduced a scalable means of representing places, whereby independent networks represent geographically distinct regions of the traverse.
However, as reviewed in Section II-C, their network used rate coding with complex neuronal dynamics, which resulted in significant training times and non-real-time deployment. Here, by leveraging temporal coding in conjunction with the spike forcing proposed in BliTNet [54], we overcome these limitations and demonstrate that BliTNet, combined with modularity, results in a high-performing VPR solution. We have completely re-implemented BliTNet in PyTorch [43], optimizing for computational efficiency and fast training times, thereby making it well-suited for deployment on compute-bound systems.

This section is structured as follows: we first briefly introduce the underlying BliTNet in Section III-A. This is followed by the modular organization of networks in Section III-B. We then describe our novel efficient parallelizable training and inference procedure in Section III-C. After that, we describe our implementation in Section III-D, the datasets used in Section III-E, evaluation metrics in Section III-F, the baseline methods that we compare and contrast our network to in Section III-G, and finally our strategy for optimizing system hyperparameters in Section III-H.

### _Temporal coding for visual place recognition_

**Network architecture:** Each network consists of 3 layers. The input layer \(L_{I}\) has as many neurons as the number of pixels in the input image, i.e. \(W\cdot H\) neurons, with \(W\) and \(H\) being the width and height of the input image, respectively. Neurons in \(L_{I}\) are sparsely connected to a feature layer \(L_{F}\). The output layer \(L_{O}\) is one-hot encoded and fully connected to the \(L_{F}\) layer, such that each neuron \(n_{i}\in L_{O}\) represents one geographically distinct place from a set of traversal images \(\mathcal{D}=\{p_{1},p_{2},...,p_{N}\}\), where the number of places \(N\) equals the number of output neurons \(|L_{O}|\). For training that uses multiple traversals from different days/times, the same output neuron is used to encode a given place, so that the network can learn changes to environments.

**Network connections:** Neuronal connections between the different layers can be excitatory, inhibitory, or both, and are determined probabilistically. Connections from \(L_{I}\to L_{F}\) are sparse and determined by an excitatory and an inhibitory connection probability, \(P_{exc}\) and \(P_{inh}\) respectively. Connections from \(L_{F}\to L_{O}\) are fully connected, such that every neuron in \(L_{O}\) receives both excitatory and inhibitory input. Excitatory and inhibitory connections are defined individually, since inhibitory weights undergo additional normalization (refer to the Homeostasis subsection in III-A).

**Postsynaptic spike calculation:** The network state \(x_{j}^{n}\) for neuron \(j\) in layer \(n\) evolves in the following way:

\[x_{j}^{n}=\sum_{m=1}^{N}\sum_{i}x_{i}^{m}(t)(W_{ji}^{+nm}-W_{ji}^{-nm})+C-\theta_{j}^{n}, \tag{1}\]

where \(x_{i}^{m}\) is input neuron \(i\) in layer \(m\), \(\theta\in[0,\theta_{max}]\) is the neuron firing threshold, \(C\) is a constant input, and \(W_{ji}^{+nm}\) and \(W_{ji}^{-nm}\) are the excitatory and inhibitory weights, respectively. \(x\) is clipped to the range \([0,1]\) in case of negative or large spikes.
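For concreteness, the following is a minimal PyTorch sketch of the state calculation in Eq. (1) for a single pre-synaptic layer; the function and argument names are illustrative assumptions of ours, not the released VPRTempo API, and dense weight matrices are assumed for simplicity.

```python
import torch

def layer_state(x_prev, W_pos, W_neg, C, theta):
    """State update of Eq. (1), restricted to one pre-synaptic layer.

    x_prev: (n_pre,) spike amplitudes of the previous layer, in [0, 1].
    W_pos, W_neg: (n_post, n_pre) excitatory and inhibitory weights.
    C: scalar constant input; theta: (n_post,) firing thresholds.
    """
    x = (W_pos - W_neg) @ x_prev + C - theta
    return x.clamp(0.0, 1.0)  # clip negative or overly large spikes to [0, 1]
```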
**Weight updates and learning rules:** All weights between connected neurons are updated using the spike timing dependent plasticity (STDP) rule [45, 54]. STDP strengthens a connection between two neurons when the pre-synaptic cell fires before the post-synaptic one, and weakens it when the order is reversed. As the spike amplitude represents the timing within an abstracted theta oscillation or time-step, we can determine whether a pre- or post-synaptic neuron fired first using:

\[\Delta W_{ji}^{nm}(t)=\frac{\eta_{\text{STDP}}(t)}{f_{j}^{n}}\cdot\Big[\Theta\big(x_{i}^{m}(t-1)\big)\cdot\Theta\big(x_{j}^{n}(t)\big)\cdot\big(0.5-x_{j}^{n}(t)\big)\Big], \tag{2}\]

where \(W_{ji}^{nm}\) refers to both positive and negative weights, \(\eta_{\text{STDP}}\) is the STDP learning rate, \(f_{j}^{n}\) is the target firing rate of neuron \(j\) in layer \(n\), \(\Theta(\cdot)\) is the Heaviside step function, \(x\) is the spike amplitude of pre- or post-synaptic neurons, and \(t\) is the timestep. Effectively, a connection whose pre-synaptic neuron fires before the post-synaptic one undergoes a positive weight shift (synaptic potentiation); the inverse scenario results in a negative weight shift (synaptic depression). To ensure that the weights persist as positive or negative connections, they cannot change sign during training. Therefore, if a sign change is detected, that connection is reset to \(\epsilon=\pm 10^{-6}\). \(\eta_{\text{STDP}}\) is initialized to \(\eta_{\text{STDP}}^{\text{init}}\) and annealed during training according to:

\[\eta_{\text{STDP}}(t)=\eta_{\text{STDP}}^{\text{init}}(1-\tfrac{t}{T})^{2}, \tag{3}\]

where \(t\) is the current time step and \(T\) is the total number of training iterations.

**Homeostasis:** Inhibitory connections undergo additional normalization to balance and control excitatory connections. Whenever the net input to a post-synaptic neuron is positive, the negative weights increase slightly; the inverse occurs when the net input to a post-synaptic neuron is negative. Both cases are implemented by:

\[W_{ji}^{-nm}(t)\gets W_{ji}^{-nm}\Big(1-\eta_{\text{STDP}}(t)\cdot\Theta\big(\sum_{i}x_{i}^{m}(t)\big)\Big). \tag{4}\]

When the network is first initialized, neurons have randomly generated firing thresholds which may never be crossed, depending on the input. To prevent neurons from being consistently inactive, we use an Intrinsic Threshold Plasticity (ITP) learning rate \(\eta_{\text{ITP}}\) to adjust the spiking threshold to an ideal value for post-synaptic neurons:

\[\Delta\theta_{j}^{n}(t)=\eta_{\text{ITP}}(t)\cdot\Big(\Theta\big(x_{j}^{n}(t)\big)-f_{j}^{n}\Big). \tag{5}\]

If a threshold value goes negative, it is reset to 0. \(\eta_{\text{ITP}}\) is initially set to \(\eta_{\text{ITP}}^{\text{init}}\) and annealed as per Equation 3.
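As an illustration, below is a hedged PyTorch sketch of the plasticity rules in Eqs. (2)-(5); all names are ours, and the sign-preserving reset follows one reasonable reading of the \(\epsilon=\pm 10^{-6}\) rule described above.

```python
import torch

def stdp_update(W, x_pre, x_post, f_target, eta, eps=1e-6):
    """STDP update of Eq. (2), with the no-sign-change constraint."""
    # Heaviside terms: both neurons must have spiked; (0.5 - x_post) sets the sign.
    delta = torch.outer((x_post > 0).float() * (0.5 - x_post),
                        (x_pre > 0).float())
    W_new = W + (eta / f_target).unsqueeze(1) * delta
    flipped = W_new * W < 0                        # connections that changed sign
    W_new[flipped] = torch.sign(W[flipped]) * eps  # reset toward +/- epsilon
    return W_new

def annealed_rate(eta_init, t, T):
    """Learning-rate annealing of Eq. (3)."""
    return eta_init * (1.0 - t / T) ** 2

def itp_update(theta, x_post, f_target, eta_itp):
    """Intrinsic threshold plasticity of Eq. (5); thresholds stay non-negative."""
    theta = theta + eta_itp * ((x_post > 0).float() - f_target)
    return theta.clamp(min=0.0)
```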
**Spike forcing:** Spike forcing [54] is used in the output layer \(L_{O}\) to create a supervised readout of the result represented in the feature layer \(L_{F}\), similar to the delta learning rule [48]. For each learned place \(p_{i}\), the assigned neuron \(n_{i}\) in the output layer \(L_{O}\) (refer to the Network Architecture subsection in III-A) is forced to spike with an amplitude of \(x_{\text{force}}=0.5\). The network uses the difference between the spikes calculated from \(L_{F}\to L_{O}\) and \(x_{\text{force}}\) to encourage the amplitudes to match. The STDP learning rule strengthens connections from pre-synaptic neurons that cause output spikes and weakens those that do not:

\[\Delta W_{ji}^{nm}(t)=\frac{\eta_{\text{STDP}}(t)}{f_{j}^{n}}\cdot\Big[x_{i}^{m}(t-1)\big(x_{\text{force},j}^{n}(t)-x_{j}^{n}(t)\big)\Big]. \tag{6}\]

### _Modular place representation for temporal codes_

The brain encodes scene memory with place cells into sparse circuitry using engrams, individual circuits of neurons that activate in response to external cues for recall [37]. To mimic the concept of an engram, and since the brain does not encode place recognition in a single network, we use modules of individually trained networks to learn subsets of traversal datasets. By training multiple individual networks we gain several advantages: 1) smaller networks train faster and are more accurate; 2) since the connection probabilities, firing thresholds, and firing rates across \(L_{I}\to L_{F}\) are uniformly random, a query image \(q\) trained in network \(N_{i}\) is unlikely to activate output neurons with a high enough amplitude in other networks to generate a false positive match; and 3) similar to what has been shown in previous work [21], modularizing greatly improves the scalability of the system to learn more places for VPR. To that end, we employ a system that can simultaneously train non-overlapping subsets of images into separate networks \(N_{i}\) and then test query images \(q\) across all networks at once [21]. Formally, the union of all networks \(U\) is described as:

\[U=\bigcup_{i=1}^{|N|}N_{i}\quad\text{with}\quad N_{i}\cap N_{j}=\emptyset\ \ \forall\,i\neq j. \tag{7}\]

### _Efficient implementation_

Our efficient implementation of VPRTempo involves training and querying a 3D weight tensor \(\mathbf{T}\in\mathbb{R}^{|N|\times|L_{i}|\times|L_{j}|}\), with \(|N|\) being the number of modules, \(|L_{i}|\) the number of neurons in the pre-synaptic layer (either \(L_{I}\) or \(L_{F}\)), and \(|L_{j}|\) the number of neurons in the post-synaptic layer (\(L_{F}\) or \(L_{O}\)). This setup capitalizes on parallel computing to boost efficiency and speed [43]. During training, the weight tensor \(\mathbf{T}\) is multiplied by an image tensor \(\mathbf{I}\in\mathbb{R}^{|N|\times|L_{i}|}\) that holds the images associated with a particular module and their input spike rates. At deployment time, a single query image is fed to all modules concurrently, leveraging parallel processing to decrease the overall inference time.

### _Implementation details_

Our network was developed and implemented in Python 3 and PyTorch [43]. The system is trained on non-overlapping and geographically distinct places from multiple standard traversal datasets (Nordland and Oxford RobotCar [34, 40]), as commonly used in the literature [21, 22, 38, 50]. All reference and query input images underwent preprocessing for gamma correction, resizing, and patch normalization [36]. Specifically, gamma correction was used to normalize the pixels \(\rho_{i}\) of the input images:

\[\rho_{i}^{\text{norm}}=\begin{cases}\rho_{i}^{\gamma}&\text{for}\quad 0\leq\rho_{i}^{\gamma}\leq 255\\ 255&\text{for}\quad\rho_{i}^{\gamma}>255,\end{cases} \tag{8}\]

where \(\gamma=\frac{\log(\lambda\times 255)}{\log(\mu)}\) with \(\lambda=0.5\) and \(\mu=\bar{\rho}_{i}\), the mean pixel intensity. Normalized images were then resized to \(W\times H=28\times 28\) pixels and patch-normalized with patch sizes \(W_{P}\times H_{P}=7\times 7\) pixels [36].
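The sketch below illustrates this preprocessing pipeline. It assumes the log-ratio form of \(\gamma\) given above (which maps the mean intensity toward mid-gray) and a common zero-mean, unit-variance per-patch normalization, which may differ in detail from the scheme in [36]; all function names are ours.

```python
import math
import torch
import torch.nn.functional as F

def gamma_correct(img, lam=0.5):
    """Eq. (8): remap intensities in [0, 255], saturating values above 255."""
    gamma = math.log(lam * 255.0) / math.log(img.float().mean().item() + 1e-8)
    return img.float().pow(gamma).clamp(max=255.0)

def patch_normalize(img, patch=7):
    """Zero-mean, unit-std normalization within non-overlapping patches."""
    out = img.clone()
    H, W = img.shape
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            p = out[r:r + patch, c:c + patch]
            out[r:r + patch, c:c + patch] = (p - p.mean()) / (p.std() + 1e-8)
    return out

def preprocess(img):
    img = gamma_correct(img)
    img = F.interpolate(img[None, None], size=(28, 28), mode="bilinear",
                        align_corners=False)[0, 0]  # resize to 28 x 28
    return patch_normalize(img, patch=7)
```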
Network spikes \(x\) are defined as floating-point amplitudes in the range \([0,1]\), where \(x=1\) is a full spike and \(x=0\) is no spike. Initial input spikes are generated from pixel intensity values \(i\in[0,255]\) from training or test images and converted to amplitudes by \(x=\frac{i}{255}\). As an abstraction of theta oscillations in the brain, the spike amplitude determines where in a timestep the spike occurs, where 1 timestep is equivalent to one phase of theta (Figure 1). The network hyperparameters, obtained by performing a grid search (Section III-H), were set to the values in Table I. Network modules were trained to learn 1100 places per module with 3 modules (totalling 3300 places) for the comparisons presented in Figure 2, and 900 places per module with 3 modules (totalling 2700 places) for Table II and Figure 3. Each training epoch contained images from multiple traversals of the datasets, captured on different days/times, and networks were trained for 4 epochs in total. The modules are trained simultaneously, such that within each epoch multiple unique places can be learned at the same time. After the network had trained for sufficient epochs, all modules were simultaneously queried, with independent network activation states, to perform place matching.

### _Datasets_

We follow the suggestion by Berton et al. [8] to have training and reference datasets that cover the same geographic area, rather than the common VPR practice of training networks on a training set that is potentially disjoint from the deployment area (i.e., the reference dataset). Specifically, our network was evaluated on two VPR datasets, Nordland [40] and Oxford RobotCar [34], with image subsets and training performed as previously described [21]. Briefly, the Nordland dataset covers an approximately 728 km train traversal in Norway, sampled over Summer, Winter, Fall, and Spring. As is standard in the field, the Nordland dataset was filtered to remove any segments containing tunnels or speeds \(<\) 15 km/h [20, 38, 21]. Images from both datasets were sub-sampled every 8 seconds, which is roughly 100 metres apart for Nordland and 20 metres for Oxford RobotCar, resulting in 3300 and 450 total places, respectively. It is important to note that while VPRSNN trained on these numbers of images, it only queried 2700 and 360 places, respectively, because 20% of the input images were used for calibration [21]; note also that our proposed VPRTempo does not require a calibration dataset.

### _Evaluation metrics_

When a query image \(q\) is processed by the network, the matched place \(\hat{p}\) is the place \(i\) assigned to the neuron \(n_{i}\) in \(L_{O}\) with the highest spike amplitude \(f_{i}\):

\[\hat{p}=\operatorname*{arg\,max}_{i}f_{i}. \tag{9}\]

Our main metric for the overall evaluation of the network is precision at 100% recall (P@100R) [50], which was used to determine performance relative to other state-of-the-art VPR systems. P@100R measures the percentage of correct matches when the network is forced to match every query image with a reference image. Recall at \(N\) (R@\(N\)) measures whether the true place is within the top \(N\) matches. A match is only considered a true positive if it aligns with the exact ground truth, i.e., there is no ground-truth tolerance around the true match.
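For clarity, a minimal sketch of the matching rule in Eq. (9) and of the P@100R metric follows; the tensor layout and helper names are illustrative assumptions of ours.

```python
import torch

def match_places(output_amplitudes):
    """Eq. (9): each query matches the output neuron with the largest amplitude.

    output_amplitudes: (num_queries, num_places) amplitudes across all modules.
    """
    return output_amplitudes.argmax(dim=1)  # (num_queries,)

def precision_at_100_recall(matches, ground_truth):
    """P@100R: every query is forced to match, so precision reduces to the
    fraction of exact ground-truth agreements (no distance tolerance)."""
    return (matches == ground_truth).float().mean().item()
```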
### _Baseline methods_

We evaluated our system against standard VPR methods and state-of-the-art SNN networks. Firstly, the simple sum of absolute differences (SAD) baseline calculates the pixel-wise absolute difference between query and reference database images and selects the match with the lowest sum value [36]. For SAD, images were resized to the same width and height used in our system (28 \(\times\) 28 pixels). NetVLAD is trained to be highly viewpoint-robust, and in this case images were kept at their original resolution [5]. We also compare with the current state-of-the-art Generalized Contrastive Loss (GCL) [30], for which the Oxford RobotCar images were resized to 240 \(\times\) 320. We note that both Nordland and Oxford RobotCar are "on-the-rails" datasets with very limited viewpoint shift, which disadvantages GCL and NetVLAD. Finally, our main comparison is VPRSNN, a recently reported SNN for performing VPR tasks based on spike rate coding, by Hussaini et al. [21].

### _Hyperparameter search_

To tune hyperparameters, we initially used a random search to identify the parameters most influential on match accuracy. Specifically, the hyperparameters we optimized were: the firing threshold \(\theta\), the initial STDP learning rate \(\eta_{\text{STDP}}^{\text{init}}\), the initial ITP learning rate \(\eta_{\text{ITP}}^{\text{init}}\), the firing rate range \(f_{min}\) and \(f_{max}\), the excitatory and inhibitory connection probabilities \(P_{exc}\) and \(P_{inh}\), and the constant input \(C\). An initial random sweep of 5,000 samples identified the parameters for a refined grid search, specifically \(f_{min}\), \(f_{max}\), \(P_{exc}\), and \(P_{inh}\). The resulting hyperparameters can be found in Table I.

## IV Results

In this section, we establish the network training and query speeds when run on either CPU or GPU hardware (Section IV-A). Then, we compare the performance of the network against state-of-the-art and standard VPR systems (VPRSNN [21], GCL [30], NetVLAD [5], and SAD [36]) (Section IV-B).

### _Training and querying speeds_

To establish how our network performs against the previous state-of-the-art SNN for VPR (VPRSNN), we measured the total time taken to train each network on 3300 places from the Nordland dataset [21]. We also tested the query speeds of both networks, and the effect the number of places has on training and querying for our network (Figure 2). VPRSNN learned 3300 places in approximately 360 mins, compared with our system which took only 10% (60 mins) and 0.4% (\(\approx\)1 min) of that time to train an equivalent number of images when run on CPU and GPU hardware, respectively (Figure 2A). Whilst our network trained fastest on GPU hardware, CPU training was still significantly faster than VPRSNN [21], indicating our system's capability for real-time operation when training new places online. We attribute this predominantly to increased spike efficiency and the shift away from modeling biological complexity, as in the Brian2 simulator [53].

Fig. 2: **A** Comparison of training times for 3300 places from the Nordland dataset, for our system against the state-of-the-art (VPRSNN [21]). VPRSNN trained in 360 mins, VPRTempo on a CPU trained in 60 mins, with our best performance of \(\approx\)1 min when VPRTempo runs on a GPU. **B** Querying speed over 2700 places: VPRSNN at 2 Hz, VPRTempo CPU at 353 Hz, and VPRTempo GPU at 1634 Hz. **C** Increasing the number of places scales the training time with a time complexity of \(\mathcal{O}(n)\), while inference time increases with a time complexity of \(\mathcal{O}(\log n)\).
Our system is capable of real-time query deployment, with speeds on both CPU and GPU hardware in the order of hundreds to thousands of Hz (Figure 2B), which is particularly useful for resource-limited compute scenarios. A single query simultaneously activating all expert modules at these speeds shows the scalability of our method if larger datasets are employed. Finally, the time scaling of our system was found to be \(\mathcal{O}(n)\) for training time and \(\mathcal{O}(\log n)\) for query speed (Figure 2C).

We next measured training and query speeds for the conventional SAD and NetVLAD methods and the state-of-the-art GCL method, as summarized in Table II. Across the board, our network trained and queried substantially faster than all of the other methods, with sub-hour training times on both CPU and GPU hardware and query speeds in the hundreds to thousands of hertz. This indicates that our network is suitable for real-time compute scenarios in resource-limited systems. We note that SAD has no training requirement, and NetVLAD and GCL use pre-trained models [5, 30]. Query speed for NetVLAD and GCL was measured as the time taken for query image feature extraction [5, 30]. Only CPU training and query speeds are listed for VPRSNN [21], since no GPU implementation of this network is available.

### _Network accuracy_

Having established the beneficial training and inference speeds, we next evaluated our system's accuracy against conventional and state-of-the-art VPR systems. For the Nordland traversal dataset, our system achieved a P@100R of 56% (Table II). The conventional methods SAD and NetVLAD achieved a P@100R of 48% and 31% respectively, with the state-of-the-art GCL at 32%. Our main comparison, VPRSNN, achieved 53% (Table II). By comparison, for the Oxford RobotCar dataset we achieved a P@100R of 37%, compared to 40% for VPRSNN (the best performer) (Table II). SAD, NetVLAD, and GCL measured 37%, 31%, and 33%, respectively. As previously discussed (refer to Section III-G), NetVLAD and GCL do not perform as well on static-viewpoint datasets like Nordland and Oxford RobotCar. In addition, we calculated the precision-recall curves and R@\(N\) for both datasets to further validate our method, shown in Figures 3C and 3D.

## V Conclusions and future work

In this paper, we presented the first temporally encoded SNN to solve VPR tasks for robotic vision and localization. We significantly improved the capacity of SNN systems to learn and query place datasets over state-of-the-art and conventional systems (SAD [36], NetVLAD [5], GCL [30], VPRSNN [21]), demonstrating the capability for real-time learning in deployment. In addition, we observed network precision comparable to these systems in correctly matching queries to database references. There are multiple future directions for this work and its application in robotic localization and VPR methods: 1) We are working toward translating our method onto Intel's neuromorphic processor, Loihi 2 [41], to deploy on energy-efficient hardware. 2) For deployment on neuromorphic hardware, we are investigating the use of event streams from event-based cameras as the input to our network, in order to further reduce latencies and improve energy efficiency.
3) Given our system's fast training and inference times, we will explore an ensemble fusing SNNs representing the same places for increased robustness. 4) Finally, we are exploring deployment of our network onto a robot for online and real-time learning of novel environments.

Fig. 3: **A** Example database and query image from the Nordland dataset (left) that are patch-normalized for network training (right). **B** Ground truth (GT, left) and descriptor similarity matrix from testing 2700 query images (right). **C** Precision-recall curves comparing our network with sum of absolute differences (SAD [36]), NetVLAD [5], Generalized Contrastive Loss (GCL [30]), and VPRSNN [21]. **D** Recall at \(N\) curves comparing methods as in **C**.
2309.06645
Bregman Graph Neural Network
Much recent research on graph neural networks (GNNs) has focused on formulating GNN architectures as an optimization problem under the smoothness assumption. However, in node classification tasks, the smoothing effect induced by GNNs tends to assimilate representations and over-homogenize the labels of connected nodes, leading to adverse effects such as over-smoothing and misclassification. In this paper, we propose a novel bilevel optimization framework for GNNs inspired by the notion of Bregman distance. We demonstrate that the GNN layer proposed accordingly can effectively mitigate the over-smoothing issue by introducing a mechanism reminiscent of the "skip connection". We validate our theoretical results through comprehensive empirical studies in which Bregman-enhanced GNNs outperform their original counterparts in both homophilic and heterophilic graphs. Furthermore, our experiments also show that Bregman GNNs can produce more robust learning accuracy even when the number of layers is high, suggesting the effectiveness of the proposed method in alleviating the over-smoothing issue.
Jiayu Zhai, Lequan Lin, Dai Shi, Junbin Gao
2023-09-12T23:54:24Z
http://arxiv.org/abs/2309.06645v1
# Bregman Graph Neural Network

###### Abstract

Much recent research on graph neural networks (GNNs) has focused on formulating GNN architectures as an optimization problem under the smoothness assumption. However, in node classification tasks, the smoothing effect induced by GNNs tends to assimilate representations and over-homogenize the labels of connected nodes, leading to adverse effects such as over-smoothing and misclassification. In this paper, we propose a novel bilevel optimization framework for GNNs inspired by the notion of Bregman distance. We demonstrate that the GNN layer proposed accordingly can effectively mitigate the over-smoothing issue by introducing a mechanism reminiscent of the "skip connection". We validate our theoretical results through comprehensive empirical studies in which Bregman-enhanced GNNs outperform their original counterparts in both homophilic and heterophilic graphs. Furthermore, our experiments also show that Bregman GNNs can produce more robust learning accuracy even when the number of layers is high, suggesting the effectiveness of the proposed method in alleviating the over-smoothing issue.

Jiayu Zhai, Lequan Lin, Dai Shi, and Junbin Gao†

†Discipline of Business Analytics, The University of Sydney Business School, The University of Sydney, Camperdown, NSW 2006, Australia. [email protected], {lequan.lin, dai.shi, junbin.gao}@sydney.edu.au

Graph Neural Networks, Over-smoothing, Heterophilic Graphs, Bregman Neural Networks

Footnote †: 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

## 1 Introduction

With the extraordinary ability to encode complex relationships among entities in a system, graph data are widely observed in many application domains, such as social networks [1, 2], biological networks [3], recommender systems [4, 5], and transportation networks [6, 7]. Graphs normally model entities as nodes and then construct edges between node pairs to represent underlying relationships. In addition, node attributes are represented as graph signals. Traditional deep feed-forward neural networks (NNs) only consider the propagation of features (i.e., columns of the graph signal matrix), which leaves the connectivity among nodes unexploited. To overcome this limitation, graph neural networks (GNNs) are designed to additionally aggregate neighbouring node features in the direction of rows, contributing to better graph representation learning (GRL) and ultimately outstanding predictive performance in various tasks [8].

Framing NNs as an optimization problem is a well-established research topic in the machine learning community [9, 10, 11]. Likewise, much recent research on GNNs focuses on the optimization formulation of GNN layers or of end-to-end GNN training. Some works have shown that GRL can be approximated by the solution of an optimization problem with a smoothness assumption on neighbouring node representations [12, 13, 14]. It has also been proven that end-to-end training of GNNs can be formulated as a bilevel optimization problem, or alternatively as a faster multi-view single-level optimization framework [15].
In this work, we consider the bilevel optimization formulation of GNNs, in which the upper-level problem optimizes the overall objective function and the lower-level problem conducts GRL. Unifying GNNs as optimization problems provides a new perspective for understanding and analyzing existing methods. For example, considering GNNs in node classification tasks, the smoothness assumption, which tends to homogenize the labels of connected nodes, can lead to several adverse effects, such as over-smoothing and inappropriate message-passing for heterophilic graphs [14, 16]. Specifically, the so-called over-smoothing issue appears when node features become indistinguishable after propagation through several GNN layers. This phenomenon is more evident in homophilic graphs, where connected nodes often share the same label. On the other hand, in heterophilic graphs, where connected nodes have different labels, the smoothing effect induced by GNNs can lead to even worse classification outcomes, because after smoothing the model is prone to assigning similar labels to connected nodes with similar features.

The above-mentioned issues can be mitigated with the concept of "skip connection" [17]. For example, APPNP [18] combines the original node features with the representation learned by each layer, which effectively preserves local information and helps mitigate the over-smoothing issue. Such methods are also helpful on heterophilic graphs, because they mitigate the effect of smoothing in representation learning. It has been shown that designing NNs as a bilevel optimization problem with a penalty on the Bregman distance between the representations of every two consecutive layers is reminiscent of, and can even outperform, skip connections [9, 19]. This method simplifies the network architecture by employing a set of invertible activation functions. However, it has no direct extension to GNNs, as the problem design is limited to the feature propagation of traditional NNs.

In this paper, we propose a novel bilevel optimization framework for GNNs, inspired by the notion of Bregman distance, that can effectively alleviate the adverse effects of smoothing. Similar to other bilevel designs, we develop the upper-level problem to optimize the overall objective function, and the lower-level problem for GRL. We show that the optimization framework can be easily applied to the computational format of GNNs by introducing the same set of activation functions as for Bregman NNs [9], and we name the resulting architectures Bregman GNNs. The contributions of this work include: (1) a novel bilevel optimization framework for designing GNNs with Bregman distance; (2) an alternative solution to the adverse effects of smoothing, via a set of specially designed activation functions that share a similar purpose with skip connections; and (3) solid numerical experiments validating the effectiveness of the new framework.

## 2 The Proposed Framework

### Preliminaries

We denote by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) an undirected graph with a set of nodes \(\mathcal{V}\) and a set of edges \(\mathcal{E}\). \(\mathbf{Z}^{l}\in\mathbb{R}^{n\times d_{l}}\) denotes the node feature matrix at layer \(l\), where \(n\) is the number of nodes and \(d_{l}\) is the embedding size. The graph adjacency matrix is denoted by \(\mathbf{A}\in\mathbb{R}^{n\times n}\), with \(\mathbf{A}_{ij}=1\) if node \(i\) is connected with node \(j\), and \(\mathbf{A}_{ij}=0\) otherwise.
We further let \(\mathbf{D}\in\mathbb{R}^{n\times n}\) be the degree matrix, where \(d_{i}=\sum_{j}\mathbf{A}_{ij}\). We now provide some necessary notations and definitions for the formulation of Bregman GNN layers.

**Definition 1** (Class of layer-wise functions \(\mathcal{F}\) [9]).: Define \(\{f_{l}\}_{l=0}^{L-1}:\mathbb{R}^{d_{l+1}}\times\mathbb{R}^{d_{l+1}}\to\mathbb{R}\) to be a specific set of bi-linear functions such that

\[f_{l}(\mathbf{z},\mathbf{z}_{i}^{l}\mathbf{M}_{l})=(\mathbf{z}_{i}^{l}\mathbf{M}_{l})^{\top}\mathbf{E}_{l}^{\top}\mathbf{z}-\mathbf{b}_{l}^{\top}\mathbf{z}-\mathbf{c}_{l}^{\top}(\mathbf{z}_{i}^{l}\mathbf{M}_{l})+\delta_{l}, \tag{1}\]

where \(\mathbf{b}_{l},\mathbf{c}_{l}\in\mathbb{R}^{d_{l+1}}\) and \(\delta_{l}\in\mathbb{R}\). \(\mathbf{z}_{i}\in\mathbb{R}^{d_{l}}\) and \(\mathbf{z}\in\mathbb{R}^{d_{l+1}}\) are the feature vectors of sample \(i\) at layers \(l\) and \(l+1\), respectively. Finally, the matrix \(\mathbf{M}_{l}\in\mathbb{R}^{d_{l}\times d_{l+1}}\) is the weight matrix, and \(\mathbf{E}_{l}\in\mathbb{R}^{d_{l+1}\times d_{l+1}}\) is the parameter matrix representing the feature correlation.

We note that such a design of \(\mathcal{F}\) guarantees a closed-form solution of the lower-level optimization problem defined in Eq. (4), and this form of \(\mathcal{F}\) has been applied to enhance the performance of NNs in the work of [9]. We now show how to extend the notion of \(\mathcal{F}\) to **graph data**. Rather than the feature propagation in NNs, where features are considered individually as single vectors, in GNNs one is required to propagate the feature matrix as a whole due to the connectivity of the nodes. Accordingly, one applies the matrix trace to each of the terms in the definition of \(\mathcal{F}\), resulting in the following form:

\[f_{l}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})=\text{tr}((\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l}\mathbf{Z}^{\top})-\langle\mathbf{1}\times\mathbf{b}_{l}^{\top},\mathbf{Z}\rangle-\langle\mathbf{1}\times\mathbf{c}_{l}^{\top},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}\rangle+\delta_{l}, \tag{2}\]

where \(\mathbf{1}\) is the \(n\)-dimensional vector of all ones, and \(\langle\cdot,\cdot\rangle\) is the inner product between two matrices. We note that the inclusion of the inner product is due to the fact that \(\langle\mathbf{A},\mathbf{B}\rangle=\mathrm{tr}(\mathbf{A}^{\top}\mathbf{B})\). Similarly, as we will show in Section 2.2, the form of \(\mathcal{F}\) in Eq. (2) also guarantees a closed-form solution of the lower-level optimization problem defined in Eq. (5) for GNNs. Additionally, to properly define the Bregman GNN layer, we further provide the notions of the Bregman distance and proximity operator as follows.

**Definition 2** (Bregman distance [19]).: The Bregman distance of the matrix \(\mathbf{P}\) from the matrix \(\mathbf{Q}\) is

\[D_{\phi}(\mathbf{P},\mathbf{Q})=\phi(\mathbf{P})-\phi(\mathbf{Q})-\langle\nabla\phi(\mathbf{Q}),\mathbf{P}-\mathbf{Q}\rangle,\]

where \(\phi\) is a Legendre function [20]. The Bregman distance generalizes many distance measures. For example, if \(\phi(\mathbf{P})=\frac{1}{2}\|\mathbf{P}\|^{2}\), then \(D_{\phi}(\mathbf{P},\mathbf{Q})=\frac{1}{2}\|\mathbf{P}-\mathbf{Q}\|^{2}\) is the squared Euclidean distance.
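As a quick numerical illustration of Definition 2 (a sketch of ours, written against PyTorch), the Bregman distance induced by \(\phi(\mathbf{P})=\frac{1}{2}\|\mathbf{P}\|^{2}\) indeed reduces to the squared Euclidean distance:

```python
import torch

def bregman_distance(P, Q, phi, grad_phi):
    """D_phi(P, Q) = phi(P) - phi(Q) - <grad phi(Q), P - Q>  (Definition 2)."""
    return phi(P) - phi(Q) - torch.sum(grad_phi(Q) * (P - Q))

phi = lambda X: 0.5 * X.pow(2).sum()   # Legendre function phi = 0.5 ||.||^2
grad_phi = lambda X: X                 # its gradient
P, Q = torch.randn(4, 3), torch.randn(4, 3)
assert torch.allclose(bregman_distance(P, Q, phi, grad_phi),
                      0.5 * (P - Q).pow(2).sum())
```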
**Definition 3** (Bregman proximity operator [21]).: The Bregman proximity operator of \(g\) with respect to \(\phi\) is denoted by

\[\mathrm{prox}_{g}^{\phi}(\mathbf{P})=\operatorname*{argmin}_{\mathbf{Q}}\{g(\mathbf{Q})+\phi(\mathbf{Q})-\langle\mathbf{Q},\mathbf{P}\rangle\}. \tag{3}\]

In the next section, we show how a bilevel optimization problem can be constructed for graph data based on these definitions.

### Bilevel optimization for graph data

We start by recalling bilevel optimization on the data input (i.e., images) in a general NN. Given a standard training data set \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{n}\), where \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}\in\mathbb{R}^{d_{0}}\times\mathbb{R}^{c}\), one can express the feature propagation of an NN as the following bilevel optimization problem [9]:

\[\operatorname*{minimize}_{\psi,\{f_{l}\}_{l=0}^{L-1}}\ \sum_{i=1}^{n}\ell\left(\psi\left(\mathbf{z}_{i}^{L}\right),\mathbf{y}_{i}\right)\quad\text{where }\forall i\in[n],\quad\left\{\begin{array}{l}\mathbf{z}_{i}^{0}=\mathbf{x}_{i}\\ \text{for }l=0,1,\ldots,L-1,\\ \mathbf{z}_{i}^{l+1}=\operatorname*{argmin}_{\mathbf{z}\in\mathbb{R}^{d_{l+1}}}\{f_{l}\left(\mathbf{z},\mathbf{z}_{i}^{l}\mathbf{M}_{l}\right)+D\left(\mathbf{z},\mathbf{z}_{i}^{l}\mathbf{M}_{l}\right)+g(\mathbf{z})\},\end{array}\right. \tag{4}\]

where \(\psi\in\mathcal{B}\left(\mathbb{R}^{d_{L}},\mathbb{R}^{c}\right)\) is a Borel measurable function, \(\{f_{l}\}_{l=0}^{L-1}\in\mathcal{F}\left(\mathbb{R}^{d_{l+1}}\times\mathbb{R}^{d_{l+1}}\right)^{L}\), and \(g\in\Gamma_{0}(\mathbb{R}^{d})\) can be treated as a simple convex regularization function. The upper-level objective is the loss between the prediction \(\widehat{\mathbf{y}}_{i}=\psi(\mathbf{z}_{i}^{L})\) and the ground truth \(\mathbf{y}_{i}\), where \(\psi\) is a simple transformation such as a linear layer, or a linear layer followed by a softmax operator. \(\ell\) is the loss function, such as cross-entropy for classification tasks or quadratic loss for regression. The lower-level optimization problem produces the layer-wise feature representation and can be unrolled as an NN layer [9].

Now we carry the notion of bilevel optimization over from NNs to graph-structured data. It is well known that the core difference between propagation in NNs and in GNNs is whether the connectivity between nodes (or samples) is considered [22]. Specifically, unlike NNs, in which each node feature is propagated individually, in GNN propagation the neighbouring information is gathered for each node according to the graph connectivity (i.e., the adjacency matrix \(\mathbf{A}\)). Therefore, it is natural to generalize the bilevel optimization process defined in Eq. (4) by including the graph adjacency information. Accordingly, the lower-level objective becomes

\[\mathbf{Z}^{l+1}=\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{n\times d_{l+1}}}\{f_{l}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+D_{\phi_{l}}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+g_{l}(\mathbf{Z})\}. \tag{5}\]

It is not difficult to verify that, with the form of \(f\) defined in **Definition 1**, the optimization above still admits a closed-form solution. The second term measures the closeness between the feature matrices at layers \(l\) and \(l+1\). Minimizing this term restricts the changes in the feature matrix between layers, thereby diluting the smoothing effects.
**Remark 1**.: The form of \(f_{l}\) can be seen as the negative energy in a Restricted Boltzmann Machine [23]. The energy between two vectors \(\mathbf{u}\) and \(\mathbf{v}\) is defined as:

\[E(\mathbf{u},\mathbf{v})=-\mathbf{u}^{\top}\mathbf{E}\mathbf{v}-\mathbf{b}^{\top}\mathbf{u}-\mathbf{c}^{\top}\mathbf{v}.\]

Thus, the optimization problem aims to maximize this energy.

### Bregman GNN layers

In this section, we show how the Bregman GNN layer is built. A demonstration of the model architecture is provided in **Fig. 1**. According to Frecon et al. [9], many widely used activation functions (e.g., ReLU and Arctan) can be written as the inverse gradient of strongly convex Legendre functions \(\phi\), and for some particular choices of \(g\) and \(\phi\), the Bregman proximity operator in Eq. (3) can be written as

\[\operatorname{prox}_{g}^{\phi}(\mathbf{P})=\nabla\phi^{-1}(\mathbf{P})=\rho(\mathbf{P}).\]

Since \(\operatorname{tr}((\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l}\mathbf{Z}^{\top})=\langle(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l},\mathbf{Z}\rangle\), Eq. (5) becomes

\[\begin{split}\mathbf{Z}^{l+1}&=\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{n\times d_{l+1}}}\{f_{l}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+D_{\phi_{l}}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+g_{l}(\mathbf{Z})\}\\ &=\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{n\times d_{l+1}}}\{g_{l}(\mathbf{Z})+\phi(\mathbf{Z})-\langle\nabla\phi(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})-(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l}+\mathbf{1}\times\mathbf{b}_{l}^{\top},\mathbf{Z}\rangle\}\\ &=\operatorname{prox}_{g}^{\phi}(\nabla\phi(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})-(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l}+\mathbf{1}\times\mathbf{b}_{l}^{\top})\\ &=\rho(\rho^{-1}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})-(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l}+\mathbf{1}\times\mathbf{b}_{l}^{\top}),\end{split} \tag{6}\]

where we have \(D_{\phi_{l}}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})=\phi_{l}(\mathbf{Z})-\phi_{l}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})-\langle\nabla\phi_{l}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}),\mathbf{Z}-\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}\rangle\). If we further let \(\mathbf{W}_{l}=-\mathbf{E}_{l}\in\mathbb{R}^{d_{l+1}\times d_{l+1}}\) be the weight matrix and \(\mathbf{b}_{l}\in\mathbb{R}^{d_{l+1}}\) be the bias, then Eq. (6) can be seen as a layer of a GNN:

\[\mathbf{Z}^{l+1}=\rho(\rho^{-1}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+\mathbf{A}\mathbf{Z}^{l}\mathbf{W}_{l}+\mathbf{1}\times\mathbf{b}_{l}^{\top}), \tag{7}\]

where \(\rho\) is the activation function and \(\rho^{-1}\) is its inverse. If \(\mathbf{Z}^{l}\) and \(\mathbf{Z}^{l+1}\) share the same dimension, i.e., \(n\times d_{l}\), then \(\mathbf{M}_{l}\in\mathbb{R}^{d_{l}\times d_{l}}\). \(\mathbf{W}_{l}\in\mathbb{R}^{d_{l}\times d_{l+1}}\) represents the weights in layer \(l\), and \(\mathbf{b}_{l}\in\mathbb{R}^{d_{l+1}}\) represents the biases in layer \(l\). Hence, the parameters the model has to learn are \(\mathbf{M}_{l}\), \(\mathbf{W}_{l}\), and \(\mathbf{b}_{l}\). Regarding the term \(\rho^{-1}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\) in the derivation of Eq. (6), applying the inverse activation function to \(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}\) brings the feature representation of the previous layer into the present layer. This serves a similar purpose as a skip connection.
Therefore, such a design helps the model maintain the desirable variation of node features, thus reducing the adverse effect of smoothing in GNN propagation. Finally, it is worth noting that the propagation in Eq. (7) can be applied to many existing spatial message-passing GNNs, such as GCN [22] and GAT [25]. In the next section, we verify the enhancement power of Eq. (7) with comprehensive empirical studies.

\begin{table} \begin{tabular}{c c c c c c c} \hline **Datasets** & Class & Feature & Node & Edge & Train/val/test & Homophily\% \\ \hline **Cora** & 7 & 1433 & 2708 & 5278 & 140/500/1000 & 82.5\% \\ **CiteSeer** & 6 & 3703 & 3327 & 4552 & 120/500/1000 & 72.1\% \\ \hline **Actor** & 5 & 932 & 7600 & 26659 & 60\%/20\%/20\% & 21.4\% \\ **Texas** & 5 & 1703 & 183 & 279 & 60\%/20\%/20\% & 11.0\% \\ \hline \end{tabular} \end{table} Table 1: Statistics of the homophilic and heterophilic datasets

Figure 1: Illustration of the Bregman GNN based on Equation (7). The model is composed of one hidden layer of classic GNN propagation, one hidden layer of Bregman-modified propagation with the invertible activation functions, and finally the output layer to make predictions. This architecture can be extended by adding more hidden layers.

## 3 Experiments

The primary objective of our experiments is to test the performance of the proposed Bregman GNNs in comparison with their standard forms, which means the experiments are conducted in an ablation fashion. We first compare the performance of Bregman-enhanced GNNs to their standard forms to show their adaptive power on both homophilic and heterophilic graphs. Then, we provide the results of an over-smoothing experiment to show the effectiveness of the proposed method in alleviating over-smoothing. Our experiment code can be found at [https://github.com/jiayuzhai1207/BregmanGNN](https://github.com/jiayuzhai1207/BregmanGNN).

### Datasets and Implementation Details

For the first experiment, we choose 4 commonly used datasets, as shown in **Table 1**, including 2 homophilic graphs, **Cora** [28] and **CiteSeer** [28], and 2 heterophilic graphs, **Actor** [29] and **Texas** [29]. For the over-smoothing experiment, we only use **Actor**. The train/validation/test splits follow the same splits as in [28] and [29]. In the first experiment, we choose 6 classic GNNs as baselines, and all networks have 3 layers including the output layer. We select this architecture because Bregman GNNs require at least 3 layers: 2 hidden layers to apply the inverse activation function, and the output layer for the final classification. In the over-smoothing experiment, we choose GCN and GAT as baselines and set the number of layers in \(\{3,5,7,9\}\). The average test accuracy and its standard deviation are calculated from the results of 10 runs. Grid search is conducted for hyperparameter tuning. For Bregman GNNs, we select from a set of invertible activation functions that have been shown to be Bregman proximity operators, such as ReLU, Tanh, ArcTan, and Softplus [9].
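To illustrate Eq. (7), below is a minimal PyTorch sketch of one Bregman-modified propagation step, using Tanh as the invertible activation (so \(\rho^{-1}=\operatorname{atanh}\)); the clamping guard and all names are our own assumptions, and \(\mathbf{A}\) is taken to be a (possibly normalized) dense adjacency matrix.

```python
import torch
import torch.nn as nn

class BregmanGNNLayer(nn.Module):
    """One propagation step of Eq. (7): Z' = rho(rho^{-1}(A Z M) + A Z W + 1 b^T)."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.M = nn.Parameter(nn.init.xavier_uniform_(torch.empty(d_in, d_out)))
        self.W = nn.Parameter(nn.init.xavier_uniform_(torch.empty(d_in, d_out)))
        self.b = nn.Parameter(torch.zeros(d_out))

    def forward(self, A, Z):
        AZ = A @ Z
        # rho = tanh, rho^{-1} = atanh; clamping keeps atanh's argument inside
        # its domain (-1, 1) -- a practical guard, not part of the formulation.
        skip = torch.atanh((AZ @ self.M).clamp(-0.999, 0.999))
        return torch.tanh(skip + AZ @ self.W + self.b)
```

Stacking one classic GNN layer, one such Bregman-modified layer, and a final linear classifier would mirror the three-layer architecture sketched in Fig. 1.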
### Results for Homophilic and Heterophilic Graphs

The experiment results are shown in **Table 2**. Overall, Bregman GNNs perform well compared to their standard counterparts across all datasets. For homophilic graphs, the Bregman architecture achieves consistent improvements over the standard baselines. Notably, the Bregman architecture enhances the accuracy of APPNP by 1.57% for **Cora** and by 1.37% for **CiteSeer**. For heterophilic graphs, the Bregman architecture successfully improves the performance of ChebNet, GCN, and GAT on both **Texas** and **Actor**, by 0.19% to 1.12%. APPNP again shows the largest improvement from the Bregman enhancement on **Actor**. One possible reason is that the Bregman architecture provides an additional path for APPNP propagation to access source terms from previous layers, which further mitigates the adverse effect of smoothing. However, no improvement is observed between GraphSAGE and its Bregman form, although the learning accuracy remains comparable between them. Finally, in most cases, Bregman GNNs show lower standard deviations, indicating higher stability in the node classification task.

### Results for the Over-smoothing Experiment

The experiment results are presented in **Fig. 2**. The classification accuracy of both Bregman GCN and GAT is consistently higher than that of their standard counterparts as the number of layers increases. We therefore conclude that Bregman GNNs are more robust to the over-smoothing issue. Nevertheless, the overall decreasing trend in accuracy indicates that the over-smoothing issue is only alleviated, not fully resolved.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline & \multicolumn{2}{c|}{**Cora**} & \multicolumn{2}{c|}{**CiteSeer**} & \multicolumn{2}{c|}{**Texas**} & \multicolumn{2}{c}{**Actor**} \\ \hline **Models** & **Bregman** & **Standard** & **Bregman** & **Standard** & **Bregman** & **Standard** & **Bregman** & **Standard** \\ \hline ChebNet [24] & \(81.22\pm 0.94\) & **81.46 \(\pm\) 0.54** & **71.70 \(\pm\) 0.50** & \(71.68\pm 1.20\) & **84.05 \(\pm\) 5.47** & \(83.51\pm 3.91\) & **35.92 \(\pm\) 0.84** & \(35.81\pm 1.16\) \\ GCN [22] & **82.58 \(\pm\) 0.84** & \(82.32\pm 0.69\) & **72.35 \(\pm\) 0.85** & \(71.51\pm 0.40\) & **63.78 \(\pm\) 5.31** & 63.24 \(\pm\) 4.55 & **29.05 \(\pm\) 0.68** & \(27.93\pm 0.79\) \\ GAT [25] & **82.19 \(\pm\) 0.69** & \(81.63\pm 0.71\) & **71.52 \(\pm\) 0.72** & \(70.31\pm 0.81\) & **63.24 \(\pm\) 3.86** & 62.70 \(\pm\) 3.15 & **29.45 \(\pm\) 0.52** & \(28.48\pm 0.70\) \\ APPNP [18] & **82.27 \(\pm\) 0.63** & \(80.70\pm 0.66\) & **72.67 \(\pm\) 0.60** & \(71.30\pm 0.78\) & 61.62 \(\pm\) 4.65 & **62.70 \(\pm\) 5.10** & **27.27 \(\pm\) 0.97** & 26.19 \(\pm\) 1.16 \\ GIN [26] & **80.36 \(\pm\) 0.76** & 80.04 \(\pm\) 1.26 & **69.82 \(\pm\) 0.79** & \(69.23\pm 0.79\) & **63.51 \(\pm\) 5.02** & 63.24 \(\pm\) 5.30 & **28.47 \(\pm\) 1.04** & 27.43 \(\pm\) 1.19 \\ GraphSAGE [27] & **81.74 \(\pm\) 0.66** & \(81.63\pm 0.47\) & **70.81 \(\pm\) 0.57** & \(70.53\pm 0.85\) & 83.51 \(\pm\) 4.75 & **83.78 \(\pm\) 3.82** & 35.34 \(\pm\) 0.68 & **35.60 \(\pm\) 0.70** \\ \hline \end{tabular} \end{table} Table 2: Comparison Experiment Results: Node Classification Accuracy (%)

Figure 2: Results on Actor for GCN and GAT with different numbers of layers. Bregman-enhanced GNNs show higher accuracy when the number of layers increases.

## 4 Conclusion

In this paper, we have proposed a novel bilevel optimization framework whose closed-form solution naturally defines a set of new network architectures, called Bregman GNNs. Our experiments show that the proposed framework can improve the performance of classic GNNs on both homophilic and heterophilic graphs and alleviate over-smoothing. However, it is worth noting that our method can only serve as a moderator and cannot fully resolve the over-smoothing issue. Future work may consider further improvements on this limitation.
2309.16318
DeepPCR: Parallelizing Sequential Operations in Neural Networks
Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed in a sequential manner. For instance, the forward and backward passes are executed layer-by-layer, and the output of diffusion models is produced by applying a sequence of denoising steps. This sequential approach results in a computational cost proportional to the number of steps involved, presenting a potential bottleneck as the number of steps increases. In this work, we introduce DeepPCR, a novel algorithm which parallelizes typically sequential operations in order to speed up inference and training of neural networks. DeepPCR is based on interpreting a sequence of $L$ steps as the solution of a specific system of equations, which we recover using the Parallel Cyclic Reduction algorithm. This reduces the complexity of computing the sequential operations from $\mathcal{O}(L)$ to $\mathcal{O}(\log_2L)$, thus yielding a speedup for large $L$. To verify the theoretical lower complexity of the algorithm, and to identify regimes for speedup, we test the effectiveness of DeepPCR in parallelizing the forward and backward pass in multi-layer perceptrons, and reach speedups of up to $30\times$ for the forward and $200\times$ for the backward pass. We additionally showcase the flexibility of DeepPCR by parallelizing training of ResNets with as many as 1024 layers, and generation in diffusion models, enabling up to $7\times$ faster training and $11\times$ faster generation, respectively, when compared to the sequential approach.
Federico Danieli, Miguel Sarabia, Xavier Suau, Pau Rodríguez, Luca Zappella
2023-09-28T10:15:30Z
http://arxiv.org/abs/2309.16318v2
# DeepPCR: Parallelizing Sequential Operations in Neural Networks

###### Abstract

Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed in a sequential manner. For instance, the forward and backward passes are executed layer-by-layer, and the output of diffusion models is produced by applying a sequence of denoising steps. This sequential approach results in a computational cost proportional to the number of steps involved, presenting a potential bottleneck as the number of steps increases. In this work, we introduce DeepPCR, a novel algorithm which _parallelizes typically sequential operations_ in order to speed up inference and training of neural networks. DeepPCR is based on interpreting a sequence of \(L\) steps as the solution of a specific system of equations, which we recover using the _Parallel Cyclic Reduction_ algorithm. This reduces the complexity of computing the sequential operations from \(\mathcal{O}(L)\) to \(\mathcal{O}(\log_{2}L)\), thus yielding a speedup for large \(L\). To verify the theoretical lower complexity of the algorithm, and to identify regimes for speedup, we test the effectiveness of DeepPCR in parallelizing the forward and backward pass in multi-layer perceptrons, and reach speedups of up to \(30\times\) for the forward and \(200\times\) for the backward pass. We additionally showcase the flexibility of DeepPCR by parallelizing training of ResNets with as many as 1024 layers, and generation in diffusion models, enabling up to \(7\times\) faster training and \(11\times\) faster generation, respectively, when compared to the sequential approach.

## 1 Introduction

Neural Networks (NNs) have proven very effective at solving complex tasks, such as classification [26; 14], segmentation [5; 30], and image or text generation [26]. Training NNs, however, is a computationally demanding task, often requiring wall-clock times in the order of days, or even weeks [35; 18], before attaining satisfactory results. Even inference in pre-trained models can be slow, particularly when complex architectures are involved [4]. To reduce training times, a great effort has been invested in speeding up inference, whether by developing dedicated software and hardware [7; 22; 23], or by investigating algorithmic techniques such as (early) pruning [28; 40; 20; 27; 43; 9]. Another possibility for reducing wall-clock time, and the one we focus on in this work, consists in parallelizing computations that would otherwise be performed sequentially. The most intuitive approach to parallelization involves identifying sets of operations which are (almost entirely) independent, and executing them concurrently. Two paradigms that follow this principle are _data-parallelization_, where multiple datapoints are processed simultaneously in batches; and _model-parallelization_, where the model is split among multiple computational units, which perform their evaluations in parallel [1]. Still, certain operations which are key for training and inference in NNs have a sequential structure. The forward and backward passes of an NN are examples of such operations, where activations (or gradients) are computed sequentially, one layer at a time. Moreover, some generative models suffer from similar shortcomings: in diffusion models (DMs), for example, the output image is generated through a sequence of denoising steps [36].
Sequential operations such as these require a computational effort which grows linearly with the sequence length \(L\) (that is, with the number of layers, or denoising steps), which represents a bottleneck when \(L\) is large. Given the prevalence of these operations, any effort towards their acceleration can result in noticeable speed gains, by drastically reducing training and inference time. Further, faster computations may allow exploration of configurations which were previously unfeasible due to the excessive time required to perform these operations sequentially: for example, extremely deep NNs, or diffusion over tens of thousands of denoising steps. In this work we introduce DeepPCR, a novel method which provides a flexible framework for turning such sequential operations into parallel ones, thus accelerating operations such as training, inference, and the denoising procedure in DMs. The core idea behind DeepPCR lies in interpreting a sequential operation of \(L\) steps as the solution of a system of \(L\) equations, as illustrated in Sec. 2. DeepPCR assumes the output of each step only depends on that of the previous one, that is, the sequence satisfies the Markov property. If this holds, we can leverage the specific structure of the resulting system to tackle its solution in parallel, using the Parallel Cyclic Reduction algorithm (PCR) [10; 2]. This algorithm, described in Sec. 3, guarantees the recovery of the solution in \(\mathcal{O}(\log_{2}L)\) steps, rather than the \(\mathcal{O}(L)\) steps required for its sequential counterpart. In our tests, this translates into inference speedups of up to \(30\times\) for the forward pass and \(200\times\) for the backward pass in certain regimes, and an \(11.2\times\) speedup in image generation via diffusion, as shown in Fig. 1. The reduced computational complexity comes in exchange for higher memory and computational intensity. Therefore, in Sec. 4.1 we investigate in detail regimes for speedup, as well as the trade-off between our method and the sequential approach, considering as model problems the forward and backward passes through multi-layer perceptrons (MLPs) of various sizes. In Sec. 4.2 we then observe how this translates into speedups when training ResNet architectures. Finally, in Sec. 4.3 we showcase how DeepPCR can be applied to accelerate other types of sequential operations as well, choosing as example the denoising procedure in DMs.

**Previous Work.** The idea of parallelizing forward and backward passes through a DNN was spearheaded in [13; 32; 24; 31; 41], under the concept of _layer-parallelization_. For the most part, these approaches have been limited to accelerating the training of deep ResNets [15], since they rely on the interpretation of a ResNet as the discretization of a time-evolving differential equation [6], whose solution is then recovered in a time-parallel fashion [11]. More closely resembling our approach is the work in [39], where the authors start by interpreting a sequential operation as the solution of a large system of equations, which is then targeted using parallel solvers. They too focus on accelerating forward and backward passes on ResNets, but also consider some autoregressive generative models (specifically, MADE [12] and PixelCNN++ [38]), similarly to what is done in [44]. The main difference between our approach and the one in [39] lies in the solvers used for tackling the target system in parallel. They rely on variations of Jacobi iterations [34], which are very cost-efficient, but "fall short when the computational graph [of the sequential operation considered] is closer to a Markov chain" [39]: we can expect the convergence of Jacobi to fall to \(\mathcal{O}(L)\) in that case, thus providing no speedup over the sequential approach. By contrast, our method specifically targets Markov sequences, solving them with complexity \(\mathcal{O}(\log_{2}L)\), and is in this sense complementary to theirs. We point out that a similar theoretical foundation for our method was proposed in [33]; however, it was not verified experimentally, nor has it been considered for applications other than forward- and backward-pass acceleration.

Figure 1: DeepPCR allows executing sequential operations, such as denoising in latent diffusion, in \(\mathcal{O}(\log_{2}L)\) time, as opposed to the \(\mathcal{O}(L)\) needed for the traditional approach (\(L\) being the number of steps). In our experiments, DeepPCR achieves an \(\mathbf{11.2\times}\) **speedup for image generation with latent diffusion** with respect to the sequential baseline, with comparable quality in the recovered result.
They rely on variations of Jacobi iterations [34], which are very cost-efficient, but "fall short when the computational graph [of the sequential operation considered] is closer to a Markov chain" [39]: we can expect the convergence of Jacobi to fall to \(\mathcal{O}(L)\) in that case, thus providing no speedup over the sequential approach. By contrast, our method specifically targets Markov sequences, solving them with complexity \(\mathcal{O}(\log_{2}L)\), and is in this sense complementary to theirs. We point out that a similar theoretical foundation for our method was proposed in [33], however it was not verified experimentally, nor has it been considered for applications other than forward and backward passes acceleration. Figure 1: DeepPCR allows executing sequential operations, such as denoising in latent diffusion, in \(\mathcal{O}(\log_{2}L)\) time, as opposed to the \(\mathcal{O}(L)\) needed for the traditional approach (\(L\) being the number of steps). In our experiments, DeepPCR achieves a \(\mathbf{11.2\times}\)**speedup for image generation with latent diffusion** with respect to the sequential baseline, with comparable quality in the recovered result. Main ContributionsThe main contributions of this work can be summarized as follows: 1. We propose DeepPCR, a novel algorithm for parallelizing sequential operations in NN training and inference, reducing the complexity of these processes from \(\mathcal{O}(L)\) to \(\mathcal{O}(\log_{2}L)\), \(L\) being the sequence length. 2. We analyze DeepPCR speedup of forward and backward passes in MLPs, to identify high-performance regimes of the method in terms of simple architecture parameters, and we discuss the trade-offs between memory consumption, accuracy of the final solution, and speedup. 3. We showcase the flexibility of DeepPCR applying it to accelerate training of deep ResNet [15] on MNIST [8], and generation in Diffusion Models trained on MNIST, CIFAR-10 [25] and CelebA [29]. Results obtained with DeepPCR are comparable to the ones obtained sequentially, but are recovered up to \(7\times\) and \(11\times\) faster, respectively. ## 2 Turning sequential operations into systems of equations Our approach is rooted in casting the application of a sequence of \(L\) steps as the solution of a system of \(L\) equations, which we then proceed to solve all at once, in parallel. In this section, we illustrate a general framework to perform this casting and recover the target system. Specific examples for the applications considered in our work (namely forward and backward passes, and generation in diffusion models) are described in appendix A. The algorithm for the parallel solution of the recovered system is outlined in Sec. 3. Consider a generic sequence of steps in the form \(\mathbf{z}_{l}=f_{l}(\mathbf{z}_{l-1})\), for \(l=1,\ldots,L\), starting from \(\mathbf{z}_{0}=f_{0}(\mathbf{x})\). The various \(f_{l}\) could represent, for example, the application of layer \(l\) to the activations \(\mathbf{z}_{l-1}\) (if we are considering a forward pass), or the application of the \(l\)-th denoising step to the partially recovered image \(\mathbf{z}_{l-1}\) (if we are considering a diffusion mechanism). 
Notice we are assuming that the output of each step \(\mathbf{z}_{l}\) depends only on that of the previous step \(\mathbf{z}_{l-1}\) and on no earlier ones: that is, we are considering sequences that satisfy the _Markov_ property (a discussion on the limitations related to this assumption, and possible workarounds to relax it, is provided in appendix B). We can collate this sequence of operations into a system of equations for the collated variable \(\mathbf{z}=[\mathbf{z}_{0}^{T},\ldots,\mathbf{z}_{L}^{T}]^{T}\), and obtain:

\[\mathcal{F}(\mathbf{z})=\begin{bmatrix}\mathbf{z}_{0}-f_{0}(\mathbf{x})\\ \mathbf{z}_{1}-f_{1}(\mathbf{z}_{0})\\ \vdots\\ \mathbf{z}_{L}-f_{L}(\mathbf{z}_{L-1})\end{bmatrix}=\begin{bmatrix}I&&&\\ -f_{1}(\cdot)&I&&\\ &\ddots&\ddots&\\ &&-f_{L}(\cdot)&I\end{bmatrix}\begin{bmatrix}\mathbf{z}_{0}\\ \mathbf{z}_{1}\\ \vdots\\ \mathbf{z}_{L}\end{bmatrix}-\begin{bmatrix}f_{0}(\mathbf{x})\\ \mathbf{0}\\ \vdots\\ \mathbf{0}\end{bmatrix}=\mathbf{0}. \tag{1}\]

Notice that, to better highlight the structure of the operator involved, we are abusing matrix notation and considering that the "multiplication" of \(f_{l}(\cdot)\) with \(\mathbf{z}_{l-1}\) results in its application \(f_{l}(\mathbf{z}_{l-1})\), although \(f_{l}\) is generally a nonlinear operator. To tackle the nonlinearity (when present), we use Newton's method [34]. In more detail, denoting with a superscript \(k\) the Newton iteration, we start from an initial guess for iteration \(k=0\), namely \(\mathbf{z}=\mathbf{z}^{0}\), and iteratively update the solution \(\mathbf{z}^{k+1}=\mathbf{z}^{k}+\delta\mathbf{z}^{k}\) by solving the linearized system

\[J_{\mathcal{F}}|_{\mathbf{z}^{k}}\,\delta\mathbf{z}^{k}=-\mathcal{F}(\mathbf{z}^{k}), \tag{2}\]

until we reach convergence. \(\left.J_{\mathcal{F}}\right|_{\mathbf{z}^{k}}\) denotes the Jacobian of the global sequential operation \(\mathcal{F}(\mathbf{z})\) evaluated at the current iterate \(\mathbf{z}^{k}\). This Jacobian defines the target system we need to solve, and obeys a very specific structure: taking the derivative of (1) with respect to \(\mathbf{z}\), and expanding (2), we see that

\[(2)\Longleftrightarrow\begin{bmatrix}I&&&\\ -\left.J_{f_{1}}\right|_{\mathbf{z}_{0}^{k}}&I&&\\ &\ddots&\ddots&\\ &&-\left.J_{f_{L}}\right|_{\mathbf{z}_{L-1}^{k}}&I\end{bmatrix}\begin{bmatrix}\delta\mathbf{z}_{0}^{k}\\ \delta\mathbf{z}_{1}^{k}\\ \vdots\\ \delta\mathbf{z}_{L}^{k}\end{bmatrix}=\begin{bmatrix}f_{0}(\mathbf{x})-\mathbf{z}_{0}^{k}\\ f_{1}(\mathbf{z}_{0}^{k})-\mathbf{z}_{1}^{k}\\ \vdots\\ f_{L}(\mathbf{z}_{L-1}^{k})-\mathbf{z}_{L}^{k}\end{bmatrix}, \tag{3}\]

that is, the system is _block bidiagonal_. This structure is a direct consequence of the Markovian nature of the sequential operation: since each step relates only two adjacent variables \(\mathbf{z}_{l-1}\) and \(\mathbf{z}_{l}\), only two diagonals appear. The core of DeepPCR lies in applying a specialized parallel algorithm for solving systems with this very structure, as described in Sec. 3.

## 3 Parallel Cyclic Reduction for NNs

The solution of a block bidiagonal system is usually obtained via forward substitution: once \(\mathbf{z}_{l}\) is known, it is used to recover \(\mathbf{z}_{l+1}\) and so on, in increasing order in \(l\). This procedure is efficient, but inherently sequential, and as such might represent a bottleneck for large \(L\).
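To make this concrete, the following minimal sketch (NumPy; the toy \(\tanh\) step function and all names are illustrative assumptions, not the paper's code) assembles the residual of (1) and performs the Newton update (2), solving the block-bidiagonal system (3) by exactly this sequential forward substitution:

```
# Newton's method (2) on the collated system (1); the linear solve (3)
# is done by O(L) sequential forward substitution.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
L, w = 64, 8                                   # sequence length, state width
Ws = [rng.normal(scale=0.5, size=(w, w)) for _ in range(L + 1)]
bs = [rng.normal(scale=0.1, size=w) for _ in range(L + 1)]

def f(l, z):                                   # step function f_l (a toy tanh layer)
    return np.tanh(Ws[l] @ z + bs[l])

def jac(l, z):                                 # Jacobian J_{f_l} evaluated at z
    pre = Ws[l] @ z + bs[l]
    return (1.0 - np.tanh(pre) ** 2)[:, None] * Ws[l]

z0 = rng.normal(size=w)                        # z_0 = f_0(x), treated as given
z = np.stack([z0] * (L + 1))                   # initial guess for all z_l

for _ in range(L + 2):                         # Newton loop (2)
    r = np.stack([z0 - z[0]] + [f(l, z[l - 1]) - z[l] for l in range(1, L + 1)])
    if np.abs(r).max() < 1e-10:                # residual of (1) small: converged
        break
    dz = np.empty_like(z)                      # forward substitution: O(L) sequential
    dz[0] = r[0]
    for l in range(1, L + 1):
        dz[l] = jac(l, z[l - 1]) @ dz[l - 1] + r[l]
    z += dz

# sanity check: the exact solution of (1) is the sequential rollout
roll = [z0]
for l in range(1, L + 1):
    roll.append(f(l, roll[-1]))
assert np.allclose(z, np.stack(roll), atol=1e-8)
```

The final assertion highlights that solving (1) exactly just reproduces the sequential rollout; the point of DeepPCR is to replace the \(\mathcal{O}(L)\) inner substitution loop above with a parallel solver.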
Interestingly, there exist alternative algorithms for the solution of such systems, which trade off more complex instructions and extra memory consumption for a higher degree of parallelization. One such algorithm, and the one our method is based on, is Parallel Cyclic Reduction (PCR) [19]. Originally, PCR was devised to parallelize the solution of tridiagonal systems; in this work, we describe its adaptation for bidiagonal systems such as (3). In a nutshell, PCR works by combining the equations of a system to progressively reduce its dimension, until it becomes easily solvable. Pseudo-code for the adapted algorithm is reported in Alg. 1, and a schematic of how the reduction is performed is outlined in Fig. 2. More details on its functioning are provided next. We start by noting that systems like (3) can be compactly represented as a set of equations involving only two _adjacent_ variables \(\delta\mathbf{z}_{l-1}\), \(\delta\mathbf{z}_{l}\):

\[\delta\mathbf{z}_{l}-\underbrace{J_{f_{l}}|_{\mathbf{z}_{l-1}}}_{=:A_{l}^{0}}\delta\mathbf{z}_{l-1}-(\underbrace{f_{l}(\mathbf{z}_{l-1})-\mathbf{z}_{l}}_{=:\mathbf{r}_{l}^{0}})=0,\qquad l=1,\dots,L, \tag{4}\]

with \(\delta\mathbf{z}_{0}=f_{0}(\mathbf{x})-\mathbf{z}_{0}^{k}\) known. The \(0\) superscripts in the operators \(A_{l}^{0}\) and vectors \(\mathbf{r}_{l}^{0}\) defined above refer to the current (0-th) PCR step. As a first step of PCR, we substitute the \((l-1)\)-th equation into the \(l\)-th, for each \(l\) in parallel, recovering

\[\delta\mathbf{z}_{l}-\underbrace{A_{l}^{0}A_{l-1}^{0}}_{=:A_{l}^{1}}\delta\mathbf{z}_{l-2}-\underbrace{\left(\mathbf{r}_{l}^{0}+A_{l}^{0}\mathbf{r}_{l-1}^{0}\right)}_{=:\mathbf{r}_{l}^{1}}=0,\qquad l=2,\dots,L. \tag{5}\]

Notice that the original structure is still preserved, but now the equations relate variables \(l\) to \(l-2\). In other words, the even and the odd variables have become separated, and we have split the original system into two independent subsystems: one involving variables \(\delta\mathbf{z}_{0},\delta\mathbf{z}_{2},\dots\), the other \(\delta\mathbf{z}_{1},\delta\mathbf{z}_{3},\dots\). At the next step, we substitute equations \(l-2\) into \(l\), to recover:

\[\delta\mathbf{z}_{l}-\underbrace{A_{l}^{1}A_{l-2}^{1}}_{=:A_{l}^{2}}\delta\mathbf{z}_{l-4}-\underbrace{\left(\mathbf{r}_{l}^{1}+A_{l}^{1}\mathbf{r}_{l-2}^{1}\right)}_{=:\mathbf{r}_{l}^{2}}=0,\qquad l=4,\dots,L, \tag{6}\]

so that now only variables at distance 4 are related. Ultimately, at each step of PCR, we are splitting each subsystem into two independent subsystems. If we iterate this procedure for \(\log_{2}L\) steps, we finally obtain \(L\) systems in one variable, which are trivially solvable, thus recovering the solution to the original system.

Figure 2: Left: pseudo-code for the PCR algorithm. Right: schematic of row reductions in PCR: green rows are combined pairwise to obtain a system of equations in the even unknowns; at the same time, blue rows are combined to obtain a system in the odd unknowns only. The result is two independent systems with half the original number of unknowns. The procedure is then repeated for \(\log_{2}L\) steps.

### Limitations of DeepPCR

The main advantage of using DeepPCR for solving (1) lies in the fact that it requires only \(\mathcal{O}(\log_{2}L)\) sequential steps, as opposed to the \(\mathcal{O}(L)\) necessary for traditional forward substitution. However, some conditions must be verified for this procedure to be effective in achieving speedups.
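Before discussing these conditions, the reduction (4)-(6) can be sketched in a few lines (NumPy; the vectorization via `einsum` and all names are our own choices). The known \(\delta\mathbf{z}_{0}\) is folded into the right-hand side by storing it as \(\mathbf{r}_{0}\) with \(A_{0}=0\), after which every doubling step is one batched matrix product:

```
# Bidiagonal PCR following (4)-(6): at distance s, each equation
# dz_l = A_l dz_{l-s} + r_l absorbs equation l-s, so s doubles.
import numpy as np

def pcr_bidiagonal(A, r):
    """A: (L+1, w, w) with A[0] = 0; r: (L+1, w) with r[0] = dz_0.
    Returns dz, where dz[l] solves dz_l = A[l] dz_{l-1} + r[l]."""
    L = r.shape[0] - 1
    s = 1
    while s <= L:
        A_shift = np.zeros_like(A); r_shift = np.zeros_like(r)
        A_shift[s:], r_shift[s:] = A[:-s], r[:-s]     # equations l-s, zero-padded
        r = r + np.einsum('lij,lj->li', A, r_shift)   # r_l <- r_l + A_l r_{l-s}
        A = np.einsum('lij,ljk->lik', A, A_shift)     # A_l <- A_l A_{l-s}
        s *= 2                                        # both updates: parallel over l
    return r                                          # all A are now zero: dz_l = r_l

# check against O(L) forward substitution
rng = np.random.default_rng(1)
L, w = 8, 3
A = rng.normal(size=(L + 1, w, w)); A[0] = 0.0
r = rng.normal(size=(L + 1, w))
dz = [r[0]]
for l in range(1, L + 1):
    dz.append(A[l] @ dz[-1] + r[l])
assert np.allclose(pcr_bidiagonal(A, r), np.stack(dz))
```

On parallel hardware the two `einsum` calls constitute the \(\mathcal{O}(\log_{2}L)\) sequential portion of the solve; on a CPU this sketch only illustrates the step count, not the wall-clock gains reported in Sec. 4.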
We discuss next some recommendations and limitations associated with DeepPCR.

**Effective speedup for deep models** While PCR requires fewer sequential steps overall, each step is in principle more computationally intensive than its sequential counterpart, as it requires multiple matrix-matrix multiplications to be conducted concurrently (by comparison, one step of the sequential case requires applying the step function \(f_{l}(\mathbf{z})\)), as per line 6 in Alg. 1. If this cannot be done efficiently, for example because of hardware limitations, then we can expect performance degradation. Moreover, the gap between the linear and logarithmic regimes becomes significant only for large \(L\). Both these facts are investigated in Sec. 4.1.

**Controlling Newton iterations** Whenever (1) is nonlinear, the complexity actually becomes \(\mathcal{O}(c_{N}\log_{2}L)\), where \(c_{N}\) identifies the number of Newton iterations necessary for convergence. On the one hand, it is important for \(c_{N}\) to remain (roughly) constant and small, particularly with respect to \(L\), for the logarithmic regime to be preserved and speedups to be attained; on the other hand, there is a positive correlation between \(c_{N}\) and the accuracy of the solution recovered by the Newton solver. Implications of this trade-off are discussed in Sec. 4.4. We also point out that, in general, Newton's method provides no guarantees on _global_ convergence (unlike Jacobi's in [39], which reduces to the sequential solution in the worst-case scenario). Even though in our experiments the method never fails to converge, it is worth keeping in mind that ultimately the solver performance depends both on the regularity of the target function (1) and on the initialization choice. In particular, the effect of the latter is investigated in appendix F, but already the simple heuristics employed in our experiments (such as using the average of the training-set images as initialization for the output of our DMs) have proven to be effective in providing valid initial guesses for Newton.

**Benefits from larger memory** To apply DeepPCR, it is necessary to store the temporary results from the equation reductions (most noticeably, the operators \(A_{l}\) in line 6 of Alg. 1). The associated memory requirements scale linearly in the number of steps \(L\) and quadratically in the dimension of each step output \(\mathbf{z}_{l}\). This results in an increase in memory usage with respect to classical approaches (roughly \(2\times\) as much for forward passes in MLPs, as measured and reported in appendix C.2). We point out that the additional memory requirements of DeepPCR may limit its applications to some distributed training settings where memory is already a bottleneck. Moreover, one can expect additional communication overhead to arise in these settings.

## 4 Results

In this section, we set out to demonstrate the applicability of DeepPCR to a variety of scenarios. We start by investigating the performance characteristics of DeepPCR when applied to the forward and backward passes through a Multi-Layer Perceptron (MLP). Experimenting with this model problem is mostly aimed at identifying regimes where DeepPCR achieves speedup. Specifically, in Sec. 4.1 we show that, when applied to the forward pass, DeepPCR becomes effective in architectures with more than \(2^{7}\) layers. For the backward pass, this regime is reached earlier, in architectures with \(2^{5}\) layers.
Next, we explore the effects of applying DeepPCR to speed up the whole training procedure, considering ResNet architectures: in Sec. 4.2 we verify not only that the speedups measured for the single forward and backward passes carry over to this scenario, achieving a \(7\times\) speedup over the sequential implementation, but also that training with DeepPCR results in models equivalent to those obtained with sequential passes. In Sec. 4.3, we showcase the flexibility of DeepPCR by using it to speed up another type of sequential operation: the denoising procedure employed by diffusion models in image generation. We consider applications to latent diffusion, and find speedups of up to \(11.2\times\), with negligible error with respect to the sequential counterpart. Lastly, in Sec. 4.4 we focus on the role of the Newton solver in the DeepPCR procedure, establishing that the method remains stable and recovers satisfactory results even when limiting the number of Newton iterations, thus allowing additional speedup to be traded for an increased approximation error with respect to the sequential solutions. All the experiments in this section were conducted on a V100 GPU with 40GB of RAM; our models are built using the PyTorch framework, without any form of neural network compilation.

### Speeding up forward and backward passes in MLPs: identifying performance regimes

Our first goal is to identify under which regimes DeepPCR can effectively provide a speedup. To this end, we consider a single forward pass through a randomly initialized MLP with a constant number of hidden units (namely, its width \(w\)) at each layer, and profile our algorithm for varying \(w\) and NN depth \(L\). Notice that these two parameters directly affect the size of (3): \(L\) determines the number of equations, while \(w\) the number of unknowns in each equation; as such, they can be used as an indication of when to expect speedups for more complex problems. Timing results for these experiments are reported in Fig. 3. The leftmost column refers to the sequential implementation of the forward (top) and backward (bottom) pass, and clearly shows the linear complexity in \(L\) of such operations: the curves flatten onto a line of slope 1. Conversely, the graphs in the middle column illustrate DeepPCR's performance, and trace a logarithmic curve for the most part, confirming the theoretical expectations on its \(\mathcal{O}(\log_{2}L)\) complexity. Notice this reduces the wall-clock time for a single forward pass from \(0.55s\) to \(0.015s\), and for a backward pass from \(589ms\) to \(2.45ms\), corresponding to speedups of \(>30\times\) and \(200\times\), respectively, at least for the most favorable architectures, and this despite more than 20 years of optimization aimed at extracting the best performance from current GPU hardware for the sequential forward and backward passes. This result is encouraging, as our proposed algorithm can gain from further optimization in each of its steps. As the MLP grows in width, however, the logarithmic regime is abandoned in favour of a linear regime. This performance degradation is due to the fact that the reductions in line 6 of Alg. 1 necessary for PCR cannot be performed concurrently anymore. Notice that \(w\) relates directly to the size of the Jacobian blocks in (3), so we can expect similar problems whenever the Jacobian size grows past a given threshold.
This issue is caused by hardware limitations, and can be addressed by using dedicated hardware or by optimizing the implementation: evidence of this claim is provided in appendix C.1, where we measure how the threshold for abandoning the logarithmic regime shifts as we use GPUs with different amounts of dedicated memory. Finally, the rightmost graphs in Fig. 3 show the ratio of timings for the sequential versus the parallel implementation: any datapoint above 1 indicates effective speedup. The break-even point between the two methods lies around \(L\approx 2^{7}\) for the forward pass.

Figure 3: Time to complete a single forward pass (top) and backward pass (bottom), for MLPs of varying depths \(L\) and widths \(w\), with ReLU activation function. Each datapoint reports the minimum time over 100 runs. The left, center, and right columns refer to the sequential implementation, the DeepPCR implementation, and the ratio between the timings of the two, respectively.

Results for the backward pass are qualitatively comparable, but achieve break-even at \(L\approx 2^{5}\): this gain is due to the fact that the backward pass is a linear operation, and as such does not require Newton iterations. For a more in-depth analysis of the role of the Newton solver, we refer to Sec. 4.4.
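The linearity just mentioned can be made explicit: backpropagation is itself a bidiagonal recurrence \(\mathbf{g}_{l-1}=J_{f_{l}}^{T}\mathbf{g}_{l}\), so it fits the PCR solver directly, with no Newton loop. A minimal sketch (reusing `pcr_bidiagonal` from the Sec. 3 sketch; the reindexing is our own illustrative choice):

```
# Backward pass as a linear bidiagonal system: reversing the index order
# gives equations g_l = A_l g_{l-1} + r_l with A_l = J_{L-l+1}^T,
# r_0 = dLoss/dz_L and r_l = 0 otherwise.  Illustrative only.
import numpy as np

def backward_via_pcr(Js, gL):
    """Js: list [J_1, ..., J_L] of layer Jacobians; gL: gradient at the output.
    Returns the gradients [g_0, ..., g_L] with respect to each z_l."""
    L, w = len(Js), gL.shape[0]
    A = np.zeros((L + 1, w, w))
    for l in range(1, L + 1):
        A[l] = Js[L - l].T            # step l applies J_{L-l+1}^T
    r = np.zeros((L + 1, w)); r[0] = gL
    g = pcr_bidiagonal(A, r)          # g[l] is the gradient at z_{L-l}
    return g[::-1]                    # back to natural layer order

# check against the usual sequential backpropagation
rng = np.random.default_rng(2)
L, w = 8, 3
Js = [rng.normal(size=(w, w)) for _ in range(L)]
g = [rng.normal(size=w)]
for J in reversed(Js):
    g.append(J.T @ g[-1])             # g_{l-1} = J_l^T g_l
assert np.allclose(backward_via_pcr(Js, g[0]), np.stack(g[::-1]))
```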
### Speeding up training of ResNets

The results in Sec. 4.1 identify regimes where one can expect to achieve speedup using DeepPCR, but they only refer to a single forward and backward pass through a freshly initialized model. The results in this section aim to verify that DeepPCR can be used to accelerate forward and backward passes for the whole training procedure, and that the speedup is maintained throughout. To this end, we train a deep ResNet model composed of only fully-connected layers. Each ResNet block consists of 4 layers of width \(2^{4}\) and the ReLU activation function. The models are trained on a classification task on MNIST [8], both using the sequential approach and DeepPCR. We train for 8 epochs using an SGD optimizer with a learning rate of \(10^{-3}\) and no scheduler. We perform training runs with various seeds but report results from only one for readability: the others are comparable, and we show their statistics in appendix D. In Fig. 4 we report the evolution of the wall-clock time measurements for the forward pass throughout the training procedure. We can notice these remain roughly constant, confirming that the speedup achieved by DeepPCR is preserved during training. Notice that using DeepPCR translates into a speedup of \(7\times\) over the sequential implementation: over the whole course of training, this entails a wall-clock time difference of \(3.2h\) versus \(30min\), even without including the gains from the backward pass. As mentioned in Sec. 3.1, we remind the reader that DeepPCR uses Newton in order to solve (1). Since Newton is an approximate solver, one may wonder whether we are accumulating numerical errors with respect to the sequential solution, how this affects the evolution of the parameters, and what is the impact on the quality of the final trained model.

Figure 4: Time to complete the forward pass during training, for the sequential (left) and DeepPCR implementation (center), and the ratio between the two (right), for ResNets of varying depths \(L\), with \(w=2^{4}\), skip connections of length \(4\), and ReLU activation function. Each datapoint is an average over 100 optimization steps, and the shaded area spans \(\pm 1\) standard deviation.

Figure 5: Loss evolution during training with forward and backward passes computed sequentially (left), with DeepPCR (center), and difference between the two (right), for ResNets of varying depths \(L\), with \(w=2^{4}\), skip connections of length \(4\), and ReLU activation function. Each datapoint is an average over 100 optimization steps, and the shaded area spans \(\pm 1\) standard deviation.

In our experiments, we measure such impact by comparing the evolution of the loss curves for the models trained sequentially and in parallel with DeepPCR. These are reported in Fig. 5, which shows that, for our experiments, the evolutions are practically equivalent. To further confirm this, we report the accuracy evolution on the test set in appendix D: in both cases, it sits around \(94\%\) at the end of training. The effects of the Newton solver on performance are further discussed in Sec. 4.4.

### Speeding up image generation in Diffusion Models

The experiments in this section showcase the flexibility of DeepPCR in accelerating more general definitions of sequential operations. As an example, we apply DeepPCR to speed up image generation via latent-space diffusion models [37]. Note that we are interested in parallelizing the whole denoising procedure, rather than the single forward pass through the denoiser: we refer to appendix A.4 for the specifics on how this operation falls within the DeepPCR framework. We consider the size of the latent space and the number of denoising steps as the two main parameters which can impact the effectiveness of DeepPCR, and measure how the performance of our method varies according to them. Notice that, in determining the size of system (3), these two parameters play the same role as \(w\) and \(L\) in Sec. 4.1, respectively, so we identify them using the same notation. Our latent diffusion model considers a simplification of the KL-AutoEncoder introduced by [37] as an encoder, and a custom MLP with residual connections as denoiser: see appendix E for details. In Fig. 6 (left) we report the average time\({}^{1}\) for completing the diffusion procedure, either sequentially or using DeepPCR, for 100 runs on architectures trained on MNIST with various values of \(w\) and \(L\). Notice how even in this case the time for the sequential approach grows linearly with respect to the number of denoising steps, while for DeepPCR the growth is logarithmic for the most part. Increasing \(w\) past \(\sim 2^{6}\), though, results in a speedup reduction for the largest \(L=2^{10}\), matching what is observed in Fig. 3: similarly, this is related to hardware limitations, and we refer again to appendix C.1 for an analysis of the phenomenon. The distributions of the associated speedups are also plotted in Fig. 6 (middle), where we can see that DeepPCR manages to generate images up to \(11\times\) faster, reducing the required time from \(1.3s\) to \(0.12s\) for certain configurations. To ensure the quality of the resulting images, we follow the FID score [16] and measure the Wasserstein-2 distance between the latent distribution of the original test set and the latent distribution of the images recovered, either sequentially or using DeepPCR. The difference of these distances is also reported in Fig. 6, and is consistently close to \(0\), hinting that using either method results in images of similar quality.
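The paper does not spell out how this distance is computed; one standard choice, sketched below (NumPy/SciPy, names ours), is the closed-form Wasserstein-2 distance between Gaussian fits to the two sets of latents, the same quantity underlying the FID score:

```
# Wasserstein-2 distance between Gaussian fits N(mu, C) to two sample
# sets, using the closed form for Gaussians.  Illustrative assumption:
# this may differ from the authors' exact implementation.
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(X, Y):
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    C_x, C_y = np.cov(X, rowvar=False), np.cov(Y, rowvar=False)
    C_x_half = sqrtm(C_x).real
    cross = sqrtm(C_x_half @ C_y @ C_x_half).real
    w2_sq = np.sum((mu_x - mu_y) ** 2) + np.trace(C_x + C_y - 2.0 * cross)
    return float(np.sqrt(max(w2_sq, 0.0)))

# toy usage on two batches of "latents"
rng = np.random.default_rng(0)
lat_seq = rng.normal(size=(500, 16))
lat_pcr = rng.normal(loc=0.1, size=(500, 16))
print(w2_gaussian(lat_seq, lat_pcr))
```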
Some example images generated sequentially or using DeepPCR can be seen in Fig. 18, further confirming that they are hardly distinguishable. We also experimented with diffusion in pixel-space: the corresponding timings can be found in Tab. 2, and their behavior mimics what was observed for latent diffusion.

Footnote 1: We point out that the timings in Fig. 6 and 7 are a proxy, evaluated assuming perfect parallelizability of the Jacobian assembly operation necessary to initialize system (3). We could not measure exact wall-clock time due to incompatibilities between the vmap and autograd functionalities provided in PyTorch. Nonetheless, this proxy is reasonably accurate, as the time required to assemble the Jacobians is negligible with respect to that for the PCR reduction (see appendix E.2, and particularly Fig. 17, for details).

Figure 6: Results from applying DeepPCR to speed up image generation in latent diffusion trained on MNIST, for various latent space dimensions \(w\) and numbers of denoising steps \(L\). Left: timings using the sequential and DeepPCR approaches (average over 100 runs). Middle: violin plots of the speedup distribution (ratio of sequential/DeepPCR timings for 100 runs). Right: difference between the Wasserstein-2 distances to the test distribution of latents recovered sequentially and using DeepPCR.

Finally, in order to provide empirical evidence of the capability of DeepPCR to provide speedup also for other datasets, we experiment with latent diffusion on CIFAR-10 [25] and CelebA [29] as well. The corresponding timing results are reported in Fig. 7. We limit ourselves to \(w>2^{6}\) due to the difficulty of training VAEs for these datasets on smaller latent dimensions. Nonetheless, the timing results are comparable to the ones measured for MNIST in Fig. 6, and even in this case we manage to recover speedups of \(8\times\) and \(9\times\) for CIFAR-10 and CelebA, respectively. We can see that also for these more complex datasets the performance of DeepPCR starts degrading for \(w>2^{7}\), similarly to what is observed in Fig. 6. This observation further confirms that the speedup attained by DeepPCR is influenced by the problem parameters \(w\) and \(L\), but is otherwise dataset-independent.

### Accuracy/Speedup trade-off: analysis on Newton convergence

As outlined in Sec. 2, when system (1) is nonlinear, DeepPCR relies on a Newton solver. This is an iterative solver, which only recovers an _approximate_ solution, correct up to a fixed tolerance. The experiments in the previous sections were conducted with a tolerance of \(10^{-4}\), as we were interested in recovering a solution which would closely match the sequential one. The tolerance of the solver, however, grants us a degree of freedom in trading off accuracy for additional speedup. In this section we investigate in detail the properties of the Newton method when used for the solution of the problems considered in Sec. 4.1 and 4.2. As a first result, we show that Newton can indeed recover high-quality solutions within a number of iterations \(c_{N}\) which is small and roughly independent of the configuration considered. To this purpose, we report in Fig. 8 the values of \(c_{N}\) recorded for the experiments in Sec. 4.1 and 4.2. In all configurations considered, they remained bounded by \(c_{N}\leq 6\), and practically independent of the system configuration, particularly of \(L\).
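Operationally, the accuracy/speedup knob of this section is just a cap on the number of Newton iterations. A hedged illustration, reusing `f`, `jac`, `z0` and `L` from the forward-substitution sketch in Sec. 3 (the cap values below are arbitrary):

```
# Illustrative only: capping Newton at c_N iterations.  Smaller caps are
# faster but leave a larger residual on the collated system (1).
def solve_capped(c_N, tol=1e-4):
    z = np.stack([z0] * (L + 1))
    for _ in range(c_N):
        r = np.stack([z0 - z[0]] + [f(l, z[l - 1]) - z[l] for l in range(1, L + 1)])
        if np.abs(r).max() < tol:
            break
        dz = np.empty_like(z)
        dz[0] = r[0]
        for l in range(1, L + 1):
            dz[l] = jac(l, z[l - 1]) @ dz[l - 1] + r[l]
        z += dz
    r = np.stack([z0 - z[0]] + [f(l, z[l - 1]) - z[l] for l in range(1, L + 1)])
    return z, np.abs(r).max()

for c_N in (1, 3, 6):
    _, res = solve_capped(c_N)
    print(f"c_N = {c_N}: residual {res:.2e}")   # residual typically shrinks with c_N
```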
In Fig. 8 (first on the left), we see that the performance of the Newton solver is indeed impacted by the type of activation function used in the layers of the MLP: using ReLUs generally requires more iterations for convergence than using a smoother counterpart such as the sigmoid. This is in line with the properties of the Newton method, which assumes differentiability of the underlying function for fast convergence. Additionally, for the same set-up, we show (second plot in Fig. 8) the error between the solution recovered via Newton with DeepPCR and the traditional solution, recovered sequentially. This error is expressed in terms of the \(L^{2}\) difference of the NN output (for the experiments in Sec. 4.1) and in terms of the \(L^{\infty}\) difference of the parameter evolution (for the experiments in Sec. 4.2), to better reflect the relevant metrics of the two experiments. The former sits almost always around machine precision, confirming that the sequential and DeepPCR solutions are extremely close. For the latter, we see that small numerical errors eventually accumulate throughout the training procedure. Still, the discrepancies are bounded, and this does not affect the final performance of the trained model (as shown also in Fig. 5, and appendix D). Finally, we conduct an ablation study on the effect of reducing the accuracy of the recovered solution. To this end, we consider again the framework in Sec. 4.2, but this time we fix the number of Newton iterations for solving the forward pass to increasingly small values, and check at which stage training of the ResNets fails.

Figure 7: Results from applying DeepPCR to speed up image generation in latent diffusion, for various latent space dimensions \(w\) and numbers of denoising steps \(L\). The timings compare the sequential (baseline) and DeepPCR approaches, reporting an average over 100 runs, for models trained on the CIFAR-10 (left) and CelebA (right) datasets.

The results reported in appendix F.1 show that, for the problem considered, stopping Newton at \(c_{N}=3\) still results in successful training. This translates into an additional \(2\times\) speedup with respect to the ResNet times reported in Fig. 4, for a total of up to \(14\times\) speedup. For more general problems, we can expect that fine-tuning the Newton solver, in particular choosing a good initial guess for the system and identifying the most apt tolerance level, would play a relevant role in the final speedup attained.

## 5 Conclusion, Limitations, and Future Work

We introduced DeepPCR, a method for parallelizing sequential operations which are relevant in NN training and inference. The method relies on the target sequence being Markovian: if this is satisfied, the sequential operation can be interpreted as the solution of a bidiagonal system of equations. The system is then tackled using Parallel Cyclic Reduction, combined with Newton's method. We investigated the effectiveness and flexibility of DeepPCR by applying it to accelerate: i) forward/backward passes in MLPs, ii) training of ResNets, and iii) image generation in diffusion models, attaining speedups of up to \(30\times\), \(7\times\), and \(11\times\) for the three problems, respectively. We identified regimes where the method is effective, and further analyzed trade-offs in terms of speedup, accuracy, and memory consumption. The main bottleneck for our DeepPCR implementation is represented by the decay in performance associated with the growth in size of the Jacobian blocks in (3).
While this can be curbed by using hardware with larger memory and/or better parallelization capabilities, investigating alternative ways to circumvent this issue would greatly benefit the applicability of DeepPCR. Another potential issue is related to the reliance of DeepPCR on a Newton solver for recovering the solution to the target system. While Newton proved to be reasonably robust for the target applications we investigated, in order to achieve the best performance one might have to perform _ad-hoc_ adjustments to the solver, depending on the specific sequential operation considered. Future work will focus on relaxing the limitations outlined above, but also on investigating the applicability of DeepPCR to speed up forward and backward passes through more complex architectures, as well as to speed up different types of sequential operations. In particular, text generation in large language models [4] could be a suitable candidate. Overall, DeepPCR represents a promising method for speeding up training and inference in applications where reducing wall-clock time is critical, and additional computational power is available for parallelization. Furthermore, DeepPCR has the potential to unlock architectures which were not previously experimented upon, due to the long computational time required to perform inference on them.

Figure 8: Newton solver analysis for the forward pass through an MLP (left), and for ResNet training (right).

## Acknowledgements

The authors would like to thank Barry Theobald, David Grangier and Ronan Collobert for their effort and help in proofreading the paper, and Nicholas Apostoloff and Jerremy Holland for supporting this work. The work by Federico Danieli was conducted as part of the AI/ML Residency Program in MLR at Apple.
2309.03846
Scalable Forward Reachability Analysis of Multi-Agent Systems with Neural Network Controllers
Neural networks (NNs) have been shown to learn complex control laws successfully, often with performance advantages or decreased computational cost compared to alternative methods. Neural network controllers (NNCs) are, however, highly sensitive to disturbances and uncertainty, meaning that it can be challenging to make satisfactory robustness guarantees for systems with these controllers. This problem is exacerbated when considering multi-agent NN-controlled systems, as existing reachability methods often scale poorly for large systems. This paper addresses the problem of finding overapproximations of forward reachable sets for discrete-time uncertain multi-agent systems with distributed NNC architectures. We first reformulate the dynamics, making the system more amenable to reachability analysis. Next, we take advantage of the distributed architecture to split the overall reachability problem into smaller problems, significantly reducing computation time. We use quadratic constraints, along with a convex representation of uncertainty in each agent's model, to form semidefinite programs, the solutions of which give overapproximations of forward reachable sets for each agent. Finally, the methodology is tested on two realistic examples: a platoon of vehicles and a power network system.
Oliver Gates, Matthew Newton, Konstantinos Gatsis
2023-09-07T17:02:09Z
http://arxiv.org/abs/2309.03846v1
# Scalable Forward Reachability Analysis of Multi-Agent Systems with Neural Network Controllers ###### Abstract Neural networks (NNs) have been shown to learn complex control laws successfully, often with performance advantages or decreased computational cost compared to alternative methods. Neural network controllers (NNCs) are, however, highly sensitive to disturbances and uncertainty, meaning that it can be challenging to make satisfactory robustness guarantees for systems with these controllers. This problem is exacerbated when considering multi-agent NN-controlled systems, as existing reachability methods often scale poorly for large systems. This paper addresses the problem of finding overapproximations of forward reachable sets for discrete-time uncertain multi-agent systems with distributed NNC architectures. We first reformulate the dynamics, making the system more amenable to reachability analysis. Next, we take advantage of the distributed architecture to split the overall reachability problem into smaller problems, significantly reducing computation time. We use quadratic constraints, along with a convex representation of uncertainty in each agent's model, to form semidefinite programs, the solutions of which give overapproximations of forward reachable sets for each agent. Finally, the methodology is tested on two realistic examples: a platoon of vehicles and a power network system. ## I Introduction There has been recent interest in the use of neural networks (NNs) for control in closed-loop feedback systems. Neural network controllers (NNCs) can be used to imitate traditional control policies, such as model predictive control (MPC), with reduced computational cost [1], or to implement deep reinforcement learning (RL) policies [2]. Even for simple linear systems, NNCs can be used to implement complex non-linear control laws (which may not be easy to achieve with existing methods). NNs are, however, highly sensitive to input perturbations, so disturbances in the closed-loop system can have adverse effects [3]. This is problematic when NNCs are used in safety-critical systems, and recent work has focused on reachability analysis for systems with NNCs; if we can overapproximate the forward reachable sets, then it can be verified that certain regions of the state space are avoided over a given horizon. The problem of computing forward reachable sets becomes more challenging when considering a multi-agent system in which each agent is controlled by an NN (or a series of NNs); the effects of small perturbations to the input of one NNC are propagated through the system. Control of multi-agent systems is well-studied, and common goals include consensus, formation control and flocking/swarming [4]. Multi-agent control architectures can be categorised according to the dependence of each agent's control input on other agents' states: (a) centralised control, in which each agent's control input is a function of all agents' states; (b) distributed control, in which each agent's control input is a function of a subset of the other agents' states; (c) decentralised control, in which each agent's control input is a function of only its own state [5]. In distributed control, common approaches include state feedback [6] and distributed model predictive control (DMPC) [7]. A number of methods have been proposed for the reachability analysis of systems with NNCs [8, 9, 10, 11, 12, 13, 14, 15, 16]. 
In general, the problem of reachability for discrete-time LTI systems is undecidable [17], so methods for safety verification of closed-loop systems often aim to find tight overapproximations of the forward reachable sets. Alternatively, additional restrictions can be placed on the problem to allow the exact reachable sets to be computed. Generally, there is a tradeoff between scalability and tightness of the bounds [3]. In [8], semidefinite programming (SDP) is used to compute overapproximations of the forward reachable sets, and [9] builds on this work by considering parameter-varying systems. In [10] and [11], the input set is partitioned into smaller sets, and a linear programming (LP) approach is used to overapproximate the reachable sets; in [10], the input set partitioning approach is also applied to the method in [8], and a comparison is made between the LP and SDP approaches, demonstrating that the former is faster but the latter results in tighter bounds. The work in [12] restricts the input sets to be constrained zonotopes, allowing for the exact computation of reachable sets, and the work in [13] represents the input sets as hybrid zonotopes, allowing for a class of non-convex input sets. In [14], polynomial zonotopes are used to abstract the closed-loop dynamics, providing tight overapproximations. Other approaches include the use of polynomial optimisation [15] and Bernstein polynomials [16]. Existing reachability methods for NN-controlled systems do not explicitly consider multi-agent systems with distributed control architectures. Similarly to the single-agent case, in which NNCs can be used to learn complex control laws, a distributed NNC architecture can be used to learn complex distributed control laws [18]. An example is DMPC, in which each agent's controller solves an optimisation problem based on its own state and those of its neighbours. An overview of DMPC is given in [7], and its applications include vehicle platooning [19], frequency regulation in power systems [20] and formation control of UAVs [21]. In this paper, we present a scalable method to compute overapproximations of the forward reachable sets for uncertain multi-agent systems with distributed NNC architectures. To the best of our knowledge, this is the first work which explicitly deals with the multi-agent NNC reachability problem. The main contributions are as follows:

* we reformulate the dynamics, making the system more amenable to reachability analysis;
* we take advantage of the distributed architecture to split the overall reachability problem into smaller problems, using an existing SDP-based approach to overapproximate the forward reachable sets for each agent, and further extend this result to incorporate model uncertainty in the agents' dynamics;
* we demonstrate the effectiveness of this methodology on two realistic multi-agent systems with different structures: a platoon of vehicles and a power network;
* we compare our approach to the approach of treating the multi-agent system as one overall system, and we demonstrate that our approach outperforms the alternative approach.

In Section II, we describe the dynamics, give some remarks about the form of the controller and describe the forward reachability problem. In Section III, we provide a simplification of the dynamics and control input, and in Section IV, we present the reachability method. In Section V, we introduce model uncertainty. In Section VI, we present experiments to demonstrate the method.
### _Notation_

The set of real \(n\times m\) matrices is denoted by \(\mathbb{R}^{n\times m}\), the set of real \(n\)-length vectors by \(\mathbb{R}^{n}\) and the set of real numbers by \(\mathbb{R}\). The set of symmetric \(n\times n\) matrices is denoted by \(\mathbb{S}^{n}\). The set of diagonal \(n\times n\) matrices is denoted by \(\mathbb{D}^{n}\). The set of positive integers is denoted by \(\mathbb{Z}^{+}\). The cardinality of a set \(\mathcal{S}\) is denoted by \(|\mathcal{S}|\). The \(n\times n\) identity matrix is denoted by \(I_{n}\). The symbols \(\geq\) and \(\leq\) apply elementwise to vectors and matrices. \(A\preceq 0\) implies that matrix \(A\) is negative semidefinite. The number \(0\) is used to represent the scalar, vector or matrix of appropriate size; the size will be clear from the context. For clarity, whenever the letters \(i\) and \(j\) are used in this paper, they refer to the index of an agent.

## II Problem Statement

### _Multi-agent dynamics_

We consider a discrete-time multi-agent system of \(M\) agents, where each agent \(i\) has linear time-invariant dynamics

\[x_{k+1}^{[i]}=A_{ii}x_{k}^{[i]}+\sum_{j\in\mathcal{N}_{i}}A_{ij}x_{k}^{[j]}+B_{i}u_{k}^{[i]}+w_{k}^{[i]}, \tag{1}\]

where for each agent \(i\in\{1,\ldots,M\}=\mathcal{I}\), \(x_{k}^{[i]}\in\mathbb{R}^{n_{x}}\) is the local state, \(x_{k}^{[j]}\in\mathbb{R}^{n_{x}}\) is the \(j^{\text{th}}\) neighbouring state, \(u_{k}^{[i]}\in\mathbb{R}^{n_{u}}\) is the control input, \(w_{k}^{[i]}\in\mathbb{R}^{n_{x}}\) is a known external input, \(A_{ii}\in\mathbb{R}^{n_{x}\times n_{x}}\) is the state matrix, \(A_{ij}\in\mathbb{R}^{n_{x}\times n_{x}}\) is the matrix describing the effect of state \(x_{k}^{[j]}\) on agent \(i\), \(B_{i}\in\mathbb{R}^{n_{x}\times n_{u}}\) is the input matrix, and \(\mathcal{N}_{i}\) is the set of neighbours. Note that for simplicity, we have assumed that all agents have the same dimensions (although this assumption could be relaxed). We also assume that \(|\mathcal{N}_{i}|>0\) for all \(i\). In this paper, without loss of generality, we focus on distributed control architectures, in which each agent's control input is a function of a subset \(\mathcal{N}_{i}\) of all other agents' states \(\mathcal{I}\). The methods presented in this paper can be easily extended to the other two cases by setting \(\mathcal{N}_{i}=\mathcal{I}\) (centralised) or \(\mathcal{N}_{i}=\{i\}\) (decentralised).

### _Control input_

In a traditional distributed control scheme with proportional feedback, the control input \(u_{\mathrm{trad},k}^{[i]}\) might be given by

\[u_{\mathrm{trad},k}^{[i]}=\sum_{j\in\mathcal{N}_{i}}K_{ij}\left(x_{k}^{[j]}-x_{k}^{[i]}\right),\]

where \(K_{ij}\in\mathbb{R}^{n_{u}\times n_{x}}\) is some gain matrix (which could represent multiple gain matrices) [22]. Similarly, in a DMPC scheme, the \(i^{\text{th}}\) control input is generated by solving an optimisation problem based on the agent's state \(x_{k}^{[i]}\) and the neighbours' states \(x_{k}^{[j]}\ \forall j\in\mathcal{N}_{i}\). In this paper, we consider the extension of the traditional distributed control schemes to NN-based control, in which the \(i^{\text{th}}\) control input is generated by some non-linear function of the agent's state \(x_{k}^{[i]}\) and the neighbours' states \(x_{k}^{[j]}\ \forall j\in\mathcal{N}_{i}\). We also consider the possibility of controller saturation.
Hence, the \(i^{\text{th}}\) control input \(u_{k}^{[i]}\) is given by

\[u_{k}^{[i]}=\mathrm{sat}_{\mathcal{U}_{i}}\left[\sum_{j\in\mathcal{N}_{i}}\pi_{ij}\left(\begin{bmatrix}x_{k}^{[i]}\\ x_{k}^{[j]}\end{bmatrix}\right)\right], \tag{2}\]

where \(\mathrm{sat}_{\mathcal{U}_{i}}\) is a projection onto the set \(\mathcal{U}_{i}=\{u\in\mathbb{R}^{n_{u}}\mid\underline{u}_{i}\leq u\leq\overline{u}_{i}\}\), where \(\underline{u}_{i}\) and \(\overline{u}_{i}\) are the lower and upper limits, respectively, for the \(i^{\text{th}}\) controller, and \(\pi_{ij}:\mathbb{R}^{2n_{x}}\rightarrow\mathbb{R}^{n_{u}}\) is a function representing the mapping of the input through a multi-layer perceptron (MLP).

**Remark 1**: _In (2), we consider a separate NN \(\pi_{ij}\) for each neighbouring agent, as this preserves the ability to consider the effects of each neighbour individually, by isolating the effect of a particular neighbour's contribution to the control input. However, the control input could also be generated by feeding the state of the agent and the states of its neighbours into a single MLP \(\Pi_{i}:\mathbb{R}^{(1+|\mathcal{N}_{i}|)n_{x}}\rightarrow\mathbb{R}^{n_{u}}\). A transformation between the two architectures is given in Section III-B._

### _Multi-layer perceptron_

The mapping \(s\mapsto\pi_{ij}(s)\) for an \(L\)-layer MLP is

\[z_{ij}^{0}=s, \tag{3a}\]
\[z_{ij}^{\ell+1}=\sigma^{\ell}\left(W_{ij}^{\ell}z_{ij}^{\ell}+b_{ij}^{\ell}\right),\quad\ell=0,\ldots,L-1, \tag{3b}\]
\[\pi_{ij}(s)=W_{ij}^{L}z_{ij}^{L}+b_{ij}^{L}, \tag{3c}\]

where \(z_{ij}^{\ell}\in\mathbb{R}^{n_{\ell}}\) is the \(\ell^{\text{th}}\) vector of activation values (note that \(n_{0}=2n_{x}\)), \(W_{ij}^{\ell}\in\mathbb{R}^{n_{\ell+1}\times n_{\ell}}\) is the \(\ell^{\text{th}}\) weight matrix, \(b_{ij}^{\ell}\in\mathbb{R}^{n_{\ell+1}}\) is the \(\ell^{\text{th}}\) bias vector, and \(\sigma^{\ell}:\mathbb{R}^{n_{\ell+1}}\rightarrow\mathbb{R}^{n_{\ell+1}}\) is the \(\ell^{\text{th}}\) ReLU activation function, i.e. \(\sigma^{\ell}(y)=\max(y,0)\), applied elementwise, such that \(\sigma^{\ell}(y)=\begin{bmatrix}\max(y_{1},0)&\cdots&\max(y_{n_{\ell+1}},0)\end{bmatrix}^{\top}\), where \(y=\begin{bmatrix}y_{1}&\ldots&y_{n_{\ell+1}}\end{bmatrix}^{\top}\) is the vector of pre-activation values. For simplicity, we assume that all MLPs have the same structure (size and number of hidden layers) for all \(i,j\).

### _Overapproximation of forward reachable sets_

We denote the set of all possible states of the \(i^{\text{th}}\) agent at time \(k\) as \(\mathcal{X}_{k}^{[i]}\), such that \(x_{k}^{[i]}\in\mathcal{X}_{k}^{[i]}\). Given \(\mathcal{X}_{k}^{[i]}\) and the sets of neighbouring states at time \(k\), i.e. \(\mathcal{X}_{k}^{[j]}\ \forall j\in\mathcal{N}_{i}\), we aim to find an overapproximation \(\widehat{\mathcal{X}}_{k+1}^{[i]}\) of the reachable set \(\mathcal{X}_{k+1}^{[i]}\) at the next time step. Specifically, we aim to find the tightest polytopic overapproximation of the reachable set

\[\min\quad\operatorname{vol}\left(\widehat{\mathcal{X}}_{k+1}^{[i]}\right) \tag{4a}\]
\[\text{subject to}\quad\widehat{\mathcal{X}}_{k+1}^{[i]}\supseteq\mathcal{X}_{k+1}^{[i]}, \tag{4b}\]
\[\phantom{\text{subject to}\quad}\widehat{\mathcal{X}}_{k+1}^{[i]}\text{ is a polytope}, \tag{4c}\]

for each agent \(i\), where \(\operatorname{vol}\) is the \(n_{x}\)-dimensional volume.
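For concreteness, the closed-loop map whose reachable sets are sought in (4a)-(4c) is just the composition of (1)-(3); a minimal simulation sketch is given below (NumPy; all dimensions, weights and names are illustrative assumptions, not the authors' code):

```
# One closed-loop step of (1)-(2) for agent i, with a per-neighbour
# ReLU MLP pi_ij as in (3a)-(3c) and input saturation.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u = 4, 2

def relu(v):
    return np.maximum(v, 0.0)

def make_pi():
    """A small ReLU MLP pi_ij : R^{2 n_x} -> R^{n_u}."""
    W0, b0 = rng.normal(size=(16, 2 * n_x)), rng.normal(size=16)
    W1, b1 = rng.normal(size=(n_u, 16)), rng.normal(size=n_u)
    return lambda s: W1 @ relu(W0 @ s + b0) + b1

def agent_step(x_i, x_neigh, A_ii, A_ij, B_i, w_i, pis, u_lo, u_hi):
    """One step of (1)-(2); x_neigh, A_ij and pis are lists aligned with N_i."""
    u = sum(pi(np.concatenate([x_i, x_j])) for pi, x_j in zip(pis, x_neigh))
    u = np.clip(u, u_lo, u_hi)                       # saturation sat_{U_i}
    x_next = A_ii @ x_i + sum(A @ x_j for A, x_j in zip(A_ij, x_neigh))
    return x_next + B_i @ u + w_i

# toy usage: one agent with two neighbours
q = 2
x_i = rng.normal(size=n_x)
x_neigh = [rng.normal(size=n_x) for _ in range(q)]
A_ii = np.eye(n_x) + 0.01 * rng.normal(size=(n_x, n_x))
A_ij = [0.01 * rng.normal(size=(n_x, n_x)) for _ in range(q)]
B_i = rng.normal(size=(n_x, n_u))
print(agent_step(x_i, x_neigh, A_ii, A_ij, B_i, np.zeros(n_x),
                 [make_pi() for _ in range(q)], -1.0, 1.0))
```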
The one-step overapproximation (4a)-(4c) could then be applied recursively to overapproximate the next forward reachable sets \(\mathcal{X}_{k+2}^{[i]},\ldots,\mathcal{X}_{k+N}^{[i]}\) over a finite horizon \(N\), and used to verify that certain unsafe regions of the state space are avoided over this horizon.

## III Reformulation of Dynamics

An obvious approach to the forward reachability problem is to augment the agents' states into one 'overall' state

\[x_{k}=\begin{bmatrix}{x_{k}^{[1]}}^{\top}&\cdots&{x_{k}^{[M]}}^{\top}\end{bmatrix}^{\top},\]

then form a recursion for the overall dynamics and use existing methods to perform reachability analysis on this system. The main issue with this approach is that it ignores the distributed architecture of the system: the dynamics of each agent depend only on its neighbours, not on all other agents. As a result, we reformulate the dynamics given by (1) and (2) to allow reachability analysis to be performed for each agent. In Section VI, we show that there is a significant computational advantage to solving \(M\) smaller reachability problems over one large reachability problem for the SDP-based method. We note that the approach of decomposing the reachability problem into smaller problems has been used more generally in work on reachability analysis [23, 24].

### _Simplification of dynamics_

Let \(g(i,j)\) be a function which returns the \(j^{\text{th}}\) neighbour of the \(i^{\text{th}}\) agent. For example, if agent \(3\) has \(\mathcal{N}_{3}=\{2,5,6\}\), then \(g(3,1)=2\), \(g(3,2)=5\) and \(g(3,3)=6\). Then we can write \(\mathcal{N}_{i}=\{g(i,1),\ldots,g(i,q_{i})\}\), where \(q_{i}=|\mathcal{N}_{i}|\), and let

\[\tilde{A}_{i}=\begin{bmatrix}A_{ii}&A_{ig(i,1)}&\cdots&A_{ig(i,q_{i})}\end{bmatrix}, \tag{5}\]
\[\tilde{x}_{k}^{[i]}=\begin{bmatrix}{x_{k}^{[i]}}^{\top}&{x_{k}^{[g(i,1)]}}^{\top}&\cdots&{x_{k}^{[g(i,q_{i})]}}^{\top}\end{bmatrix}^{\top}, \tag{6}\]

such that \(\tilde{x}_{k}^{[i]}\) is the concatenation of the state of agent \(i\) and the states of the neighbours of agent \(i\) at time \(k\), then (1) can be written as

\[x_{k+1}^{[i]}=\tilde{A}_{i}\tilde{x}_{k}^{[i]}+B_{i}u_{k}^{[i]}+w_{k}^{[i]}.\]

### _Simplification of control input_

To simplify (2), we aim to represent it as the mapping of \(\tilde{x}_{k}^{[i]}\) through a single MLP \(\Pi_{i}:\mathbb{R}^{(1+q_{i})n_{x}}\rightarrow\mathbb{R}^{n_{u}}\). Note that the layer sizes can differ from agent to agent, depending on \(|\mathcal{N}_{i}|\).
The mapping from \(\tilde{x}_{k}^{[i]}\) to \(\Pi_{i}(\tilde{x}_{k}^{[i]})\) is then

\[z_{i,k}^{0}=\tilde{x}_{k}^{[i]}, \tag{7a}\]
\[z_{i,k}^{\ell+1}=\sigma_{i}^{\ell}\left(W_{i}^{\ell}z_{i,k}^{\ell}+b_{i}^{\ell}\right),\quad\ell=0,\ldots,L-1, \tag{7b}\]
\[\Pi_{i}\left(\tilde{x}_{k}^{[i]}\right)=W_{i}^{L}z_{i,k}^{L}+b_{i}^{L}, \tag{7c}\]

where \(z_{i,k}^{\ell}\in\mathbb{R}^{\tilde{n}_{\ell}^{i}}\) is the \(\ell^{\text{th}}\) vector of activation values (note that \(\tilde{n}_{0}^{i}=(1+q_{i})n_{x}\)), \(\sigma_{i}^{\ell}:\mathbb{R}^{\tilde{n}_{\ell+1}^{i}}\rightarrow\mathbb{R}^{\tilde{n}_{\ell+1}^{i}}\) is the \(\ell^{\text{th}}\) ReLU activation function, and the weight matrices and bias vectors are given by \(W_{i}^{0}=\mathfrak{W}_{i}^{0}\Lambda_{i}\), \(b_{i}^{0}=\mathfrak{b}_{i}^{0}\), \(W_{i}^{\ell}=\mathfrak{W}_{i}^{\ell}\), \(b_{i}^{\ell}=\mathfrak{b}_{i}^{\ell}\) for \(\ell=1,\ldots,L-1\), and \(W_{i}^{L}=\Omega_{i}\mathfrak{W}_{i}^{L}\), \(b_{i}^{L}=\Omega_{i}\mathfrak{b}_{i}^{L}\), where

\[\mathfrak{W}_{i}^{\ell}=\operatorname{blkdiag}\left(W_{ig(i,1)}^{\ell},\ldots,W_{ig(i,q_{i})}^{\ell}\right),\]
\[\mathfrak{b}_{i}^{\ell}=\begin{bmatrix}{b_{ig(i,1)}^{\ell}}^{\top}&\cdots&{b_{ig(i,q_{i})}^{\ell}}^{\top}\end{bmatrix}^{\top},\]

for \(\ell=0,\ldots,L\), where \(\Lambda_{i}\in\mathbb{R}^{2q_{i}n_{x}\times\tilde{n}_{0}^{i}}\) and \(\Omega_{i}\in\mathbb{R}^{n_{u}\times q_{i}n_{u}}\) are given by

\[\Lambda_{i}=\begin{bmatrix}I_{n_{x}}&0&0&0&\cdots&0\\ 0&I_{n_{x}}&0&0&\cdots&0\\ I_{n_{x}}&0&0&0&\cdots&0\\ 0&0&I_{n_{x}}&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ I_{n_{x}}&0&0&0&\cdots&0\\ 0&0&0&0&\cdots&I_{n_{x}}\end{bmatrix},\quad\Omega_{i}=\begin{bmatrix}I_{n_{u}}\\ \vdots\\ I_{n_{u}}\end{bmatrix}^{\top}.\]

Finally, we can write

\[\sum_{j\in\mathcal{N}_{i}}\pi_{ij}\left(\begin{bmatrix}x_{k}^{[i]}\\ x_{k}^{[j]}\end{bmatrix}\right)\equiv\Pi_{i}\left(\tilde{x}_{k}^{[i]}\right),\]

which is illustrated in Figure 1.

Figure 1: Reformulation of individual MLPs into one larger MLP

### _Summary of reformulation_

**Lemma 1**: _The dynamics given by (1) and (2) can be written as_

\[x_{k+1}^{[i]}=\tilde{A}_{i}\tilde{x}_{k}^{[i]}+B_{i}\mathrm{sat}_{\mathcal{U}_{i}}\left[\Pi_{i}\left(\tilde{x}_{k}^{[i]}\right)\right]+w_{k}^{[i]}, \tag{8}\]

_where \(\tilde{A}_{i}\) and \(\tilde{x}_{k}^{[i]}\) are defined in (5) and (6), respectively, and \(\Pi_{i}(\tilde{x}_{k}^{[i]})\) is given by (7a)-(7c)._
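The construction above can be verified numerically. The following sketch (NumPy/SciPy; the one-hidden-layer case, sizes, and names are our own illustrative choices) builds \(W_{i}^{0},b_{i}^{0},W_{i}^{L},b_{i}^{L}\) from random per-neighbour weights and checks the equivalence \(\sum_{j\in\mathcal{N}_{i}}\pi_{ij}\equiv\Pi_{i}\):

```
# Collating per-neighbour MLPs into one MLP via blkdiag, Lambda_i and
# Omega_i, then checking Sec. III-B's equivalence.  Illustrative only.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
n_x, n_u, q = 3, 2, 2                      # state/input dims, |N_i|
sizes = [2 * n_x, 8, n_u]                  # per-neighbour MLP, one hidden layer

Ws = [[rng.normal(size=(sizes[l + 1], sizes[l])) for l in range(2)] for _ in range(q)]
bs = [[rng.normal(size=sizes[l + 1]) for l in range(2)] for _ in range(q)]

relu = lambda v: np.maximum(v, 0.0)
pi = lambda j, s: Ws[j][1] @ relu(Ws[j][0] @ s + bs[j][0]) + bs[j][1]

# selector Lambda_i: maps [x_i; x_1; ...; x_q] to [x_i; x_1; x_i; x_2; ...]
E = np.eye((1 + q) * n_x)
Lam = np.vstack([np.vstack([E[:n_x], E[(1 + j) * n_x:(2 + j) * n_x]])
                 for j in range(q)])
Om = np.hstack([np.eye(n_u)] * q)          # Omega_i sums the q outputs

W0 = block_diag(*[Ws[j][0] for j in range(q)]) @ Lam
b0 = np.concatenate([bs[j][0] for j in range(q)])
W1 = Om @ block_diag(*[Ws[j][1] for j in range(q)])
b1 = Om @ np.concatenate([bs[j][1] for j in range(q)])

x_tilde = rng.normal(size=(1 + q) * n_x)   # [x_i; x_1; ...; x_q]
Pi = W1 @ relu(W0 @ x_tilde + b0) + b1
ref = sum(pi(j, np.concatenate([x_tilde[:n_x],
                                x_tilde[(1 + j) * n_x:(2 + j) * n_x]]))
          for j in range(q))
assert np.allclose(Pi, ref)                # sum of pi_ij equals Pi_i
```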
## IV Forward Reachability Analysis

To overapproximate the forward reachable sets, we extend the reachability method outlined in [3] and [8] to the multi-agent case. The method in [8], called Reach-SDP, uses quadratic constraints (QCs) and semidefinite programming; the input set, NN and reachable set are abstracted using QCs. This involves forming quadratic inequalities for sets and functions by pre- and post-multiplying a matrix by a 'basis vector'. Using 'change-of-basis' matrices allows the same basis vector to be used for all inequalities, which allows linear matrix inequalities (LMIs) to be formed for forward reachability analysis of the closed-loop system.

### _Incorporation of control limits into MLP_

We first add two layers to the MLP in (7a)-(7c) to account for the control limits [8]. Recall that \(\tilde{x}_{k}^{[i]}\) is the concatenation of the state of agent \(i\) and the states of the neighbours of agent \(i\) at time \(k\). The mapping from \(\tilde{x}_{k}^{[i]}\) to \(u_{k}^{[i]}\) is now defined by (7a), (7b) and

\[z_{i,k}^{L+1}=\sigma_{i}^{L}\left(W_{i}^{L}z_{i,k}^{L}+b_{i}^{L}-\underline{u}_{i}\right), \tag{9a}\]
\[z_{i,k}^{L+2}=\sigma_{i}^{L+1}\left(-z_{i,k}^{L+1}+\overline{u}_{i}-\underline{u}_{i}\right), \tag{9b}\]
\[u_{k}^{[i]}=-z_{i,k}^{L+2}+\overline{u}_{i}. \tag{9c}\]

We then define the overall basis vector for the QCs as \(\begin{bmatrix}\mathbf{z}_{i,k}^{\top}&1\end{bmatrix}^{\top}\), where \(\mathbf{z}_{i,k}=\begin{bmatrix}{z_{i,k}^{0}}^{\top}&{z_{i,k}^{1}}^{\top}&\cdots&{z_{i,k}^{L+2}}^{\top}\end{bmatrix}^{\top}\).

### _Input set_

We describe the input set \(\widetilde{\mathcal{X}}_{k}^{[i]}\), such that \(\tilde{x}_{k}^{[i]}\in\widetilde{\mathcal{X}}_{k}^{[i]}\), by a hyper-rectangle, i.e.

\[\widetilde{\mathcal{X}}_{k}^{[i]}=\left\{x\in\mathbb{R}^{\tilde{n}_{0}^{i}}\mid\underline{\tilde{x}}_{k}^{[i]}\leq x\leq\overline{\tilde{x}}_{k}^{[i]}\right\}, \tag{10}\]

where \(\underline{\tilde{x}}_{k}^{[i]},\overline{\tilde{x}}_{k}^{[i]}\in\mathbb{R}^{\tilde{n}_{0}^{i}}\) are known bounds on the state of the agent and the states of its neighbours at time \(k\). Using [3, Definition 1] and [3, Proposition 1], and noting that the first activation value \(z_{i,k}^{0}\) is equal to \(\tilde{x}_{k}^{[i]}\), we can write (10) as

\[\begin{bmatrix}z_{i,k}^{0}\\ 1\end{bmatrix}^{\top}P_{k}^{i}(\Gamma)\begin{bmatrix}z_{i,k}^{0}\\ 1\end{bmatrix}\geq 0, \tag{11}\]

where

\[P_{k}^{i}(\Gamma)=\begin{bmatrix}-2\Gamma&\Gamma\left(\underline{\tilde{x}}_{k}^{[i]}+\overline{\tilde{x}}_{k}^{[i]}\right)\\ \left(\underline{\tilde{x}}_{k}^{[i]}+\overline{\tilde{x}}_{k}^{[i]}\right)^{\top}\Gamma&-2{\underline{\tilde{x}}_{k}^{[i]}}^{\top}\Gamma\overline{\tilde{x}}_{k}^{[i]}\end{bmatrix},\]

with \(\Gamma\in\mathbb{D}^{\tilde{n}_{0}^{i}}\) and \(\Gamma\geq 0\). By using a change-of-basis matrix [8]

\[E_{\mathrm{in}}^{i}=\begin{bmatrix}I_{\tilde{n}_{0}^{i}}&0&\cdots&0&0\\ 0&0&\cdots&0&1\end{bmatrix},\]

we can write (11) as

\[\begin{bmatrix}\mathbf{z}_{i,k}\\ 1\end{bmatrix}^{\top}\Delta_{k}^{i}(\Gamma)\begin{bmatrix}\mathbf{z}_{i,k}\\ 1\end{bmatrix}\geq 0, \tag{12}\]

where \(\Delta_{k}^{i}(\Gamma)={E_{\mathrm{in}}^{i}}^{\top}P_{k}^{i}(\Gamma)E_{\mathrm{in}}^{i}\). This is summarised in the following lemma.

**Lemma 2**: _Consider the state of agent \(i\) and the states of the neighbours of agent \(i\) at time \(k\). If these are bounded by a hyper-rectangular input set, i.e. \(\underline{\tilde{x}}_{k}^{[i]}\leq\tilde{x}_{k}^{[i]}\leq\overline{\tilde{x}}_{k}^{[i]}\), as in (10), then (12) holds._
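As a quick numerical check of Lemma 2, the sketch below (NumPy, names ours) assembles \(P_{k}^{i}(\Gamma)\) for a random diagonal \(\Gamma\geq 0\) and confirms that the quadratic form in (11) is nonnegative on samples drawn from the hyper-rectangle:

```
# Input-set quadratic constraint (11): the form is nonnegative for any
# diagonal Gamma >= 0 and any x inside the box.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 4
lo, hi = -np.ones(n), np.ones(n)            # hyper-rectangle bounds
G = np.diag(rng.uniform(0.1, 1.0, size=n))  # Gamma (diagonal, >= 0)

P = np.block([[-2 * G,                 (G @ (lo + hi))[:, None]],
              [(lo + hi)[None, :] @ G, np.array([[-2 * lo @ G @ hi]])]])

for _ in range(1000):
    x = rng.uniform(lo, hi)                 # sample inside the set
    v = np.concatenate([x, [1.0]])
    assert v @ P @ v >= -1e-9               # (11) holds
```

The form equals \(-2\sum_{r}\gamma_{r}(x_{r}-\underline{x}_{r})(x_{r}-\overline{x}_{r})\), which is nonnegative exactly when each coordinate lies between its bounds.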
### _ReLU activation functions_

First, we define \(\hat{z}_{i,k}^{\ell+1}=W_{i}^{\ell}z_{i,k}^{\ell}+b_{i}^{\ell}\) for \(\ell=0,\ldots,L-1\), \(\hat{z}_{i,k}^{L+1}=W_{i}^{L}z_{i,k}^{L}+b_{i}^{L}-\underline{u}_{i}\), and \(\hat{z}_{i,k}^{L+2}=-z_{i,k}^{L+1}+\overline{u}_{i}-\underline{u}_{i}\). Then, we can write \(z_{i,k}^{\ell}=\sigma_{i}^{\ell-1}(\hat{z}_{i,k}^{\ell})\) for \(\ell=1,\ldots,L+2\), and write the concatenation of activation functions as

\[\boldsymbol{\nu}_{i,k}=\sigma_{i}\left(\hat{\boldsymbol{\nu}}_{i,k}\right), \tag{13}\]

where \(\boldsymbol{\nu}_{i,k}^{\top}=\begin{bmatrix}{z_{i,k}^{1}}^{\top}&\cdots&{z_{i,k}^{L+2}}^{\top}\end{bmatrix}\), \(\hat{\boldsymbol{\nu}}_{i,k}^{\top}=\begin{bmatrix}{\hat{z}_{i,k}^{1}}^{\top}&\cdots&{\hat{z}_{i,k}^{L+2}}^{\top}\end{bmatrix}\), and \(\sigma_{i}:\mathbb{R}^{n_{z}^{i}+2n_{u}}\rightarrow\mathbb{R}^{n_{z}^{i}+2n_{u}}\) is applied elementwise, where \(n_{z}^{i}=\sum_{\ell=1}^{L}\tilde{n}_{\ell}^{i}\). Using [3, Definition 2] and [3, Lemma 3], we can relax (13) as

\[\begin{bmatrix}\hat{\boldsymbol{\nu}}_{i,k}\\ \boldsymbol{\nu}_{i,k}\\ 1\end{bmatrix}^{\top}Q^{i}(\lambda,\nu,\eta)\begin{bmatrix}\hat{\boldsymbol{\nu}}_{i,k}\\ \boldsymbol{\nu}_{i,k}\\ 1\end{bmatrix}\geq 0, \tag{14}\]

where \(Q^{i}(\lambda,\nu,\eta)\in\mathbb{S}^{2(n_{z}^{i}+2n_{u})+1}\) is defined in [3, Lemma 3] as \(Q\). By using a change-of-basis matrix [8]

\[E_{\mathrm{mid}}^{i}=\left[\begin{array}{cccccc|c}W_{i}^{0}&0&\cdots&0&0&0&b_{i}^{0}\\ 0&W_{i}^{1}&\cdots&0&0&0&b_{i}^{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&W_{i}^{L}&0&0&b_{i}^{L}-\underline{u}_{i}\\ 0&0&\cdots&0&-I_{n_{u}}&0&\overline{u}_{i}-\underline{u}_{i}\\ \hline 0&I_{\tilde{n}_{1}^{i}}&\cdots&0&0&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&I_{\tilde{n}_{L}^{i}}&0&0&0\\ 0&0&\cdots&0&I_{n_{u}}&0&0\\ 0&0&\cdots&0&0&I_{n_{u}}&0\\ \hline 0&0&\cdots&0&0&0&1\end{array}\right],\]

we can write (14) as

\[\begin{bmatrix}\mathbf{z}_{i,k}\\ 1\end{bmatrix}^{\top}\Theta^{i}(\lambda,\nu,\eta)\begin{bmatrix}\mathbf{z}_{i,k}\\ 1\end{bmatrix}\geq 0, \tag{15}\]

where \(\Theta^{i}(\lambda,\nu,\eta)={E_{\mathrm{mid}}^{i}}^{\top}Q^{i}(\lambda,\nu,\eta)E_{\mathrm{mid}}^{i}\). This is summarised in the following lemma.

**Lemma 3**: _If the activation values for the extended NN satisfy \(z_{i,k}^{\ell}=\sigma_{i}^{\ell-1}(\hat{z}_{i,k}^{\ell})\) for \(\ell=1,\ldots,L+2\), as in (13), then (15) holds._

### _Output set_

We parameterise the overapproximation \(\widehat{\mathcal{X}}_{k+1}^{[i]}\) of the reachable set \(\mathcal{X}_{k+1}^{[i]}\) using a polytope, i.e. the intersection of \(m\) halfspaces

\[\widehat{\mathcal{X}}_{k+1}^{[i]}=\left\{x\in\mathbb{R}^{n_{x}}\ |\ H_{1}^{\top}x\leq h_{1},\ldots,H_{m}^{\top}x\leq h_{m}\right\}, \tag{16}\]

where \(H_{1},\ldots,H_{m}\in\mathbb{R}^{n_{x}}\) and \(h_{1},\ldots,h_{m}\in\mathbb{R}\), which can be written as [8]

\[\widehat{\mathcal{X}}_{k+1}^{[i]}=\bigcap_{p=1}^{m}\left\{x\in\mathbb{R}^{n_{x}}\ \middle|\ \begin{bmatrix}x\\ 1\end{bmatrix}^{\top}S_{p}(h_{p})\begin{bmatrix}x\\ 1\end{bmatrix}\leq 0\right\},\]

where

\[S_{p}(h_{p})=\begin{bmatrix}0&H_{p}\\ H_{p}^{\top}&-2h_{p}\end{bmatrix},\]

so each halfspace can be equivalently written as

\[\begin{bmatrix}x_{k+1}^{[i]}\\ 1\end{bmatrix}^{\top}S_{p}(h_{p})\begin{bmatrix}x_{k+1}^{[i]}\\ 1\end{bmatrix}\leq 0. \tag{17}\]

By using a change-of-basis matrix

\[E_{\mathrm{out},k}^{i}=\begin{bmatrix}\tilde{A}_{i}&0&\cdots&0&-B_{i}&B_{i}\overline{u}_{i}+w_{k}^{[i]}\\ 0&0&\cdots&0&0&1\end{bmatrix},\]

we can write (17) as

\[\begin{bmatrix}\mathbf{z}_{i,k}\\ 1\end{bmatrix}^{\top}\Psi_{k}^{i}(h_{p})\begin{bmatrix}\mathbf{z}_{i,k}\\ 1\end{bmatrix}\leq 0, \tag{18}\]

where \(\Psi_{k}^{i}(h_{p})={E_{\mathrm{out},k}^{i}}^{\top}S_{p}(h_{p})E_{\mathrm{out},k}^{i}\).
Note that the structure of the change-of-basis matrix differs from that in [8], as \(\tilde{A}_{i}\) is not square, to account for the multi-agent dynamics. The result is summarised in the following lemma. **Lemma 4**: _If (18) holds for \(p=1,\ldots,m\), then \(\widehat{\mathcal{X}}_{k+1}^{[i]}\) is a polytope defined by \(H_{1},\ldots,H_{m}\) and \(h_{1},\ldots,h_{m}\), as in (16)._ ### _Reachability algorithm_ By combining the results in Lemmas 1-4, we arrive at the following result for the overapproximation of the forward reachable set of agent \(i\). **Theorem 1**: _Consider a discrete-time LTI system of the form in (1) and (2), where the structure of each MLP is given by (3a)-(3c). At time \(k\), let the state of agent \(i\) and the states of its neighbours be bounded by a hyper-rectangular input set, as in (10). If there exists a solution to_ \[\min_{\Gamma,\lambda,\nu,\eta,h_{p}}\quad h_{p}\] \[\text{subject to}\quad\Delta_{k}^{i}(\Gamma)+\Theta^{i}(\lambda,\nu,\eta)+\Psi_{k}^{i}(h_{p})\preceq 0,\] _for \(p=1,\ldots,m\), where \(H_{1},\ldots,H_{m}\) are specified by the user, then the resulting polytope is the solution to (4a)-(4c)._ _Proof:_ Note that we perform reachability analysis on the system given in (8). However, from Lemma 1, the original dynamics of the form given in (1) and (2) are equivalent to those given in (8), so performing reachability analysis on (8) is identical to performing it on the system given in (1) and (2). The remainder of the proof is similar to that in [8, Theorem 1]. First, we pre-multiply each of the three terms in the LMI by \(\begin{bmatrix}\underline{\mathbf{z}}_{i,k}^{\top}&1\end{bmatrix}\) and post-multiply each term by \(\begin{bmatrix}\underline{\mathbf{z}}_{i,k}^{\top}&1\end{bmatrix}^{\top}\), resulting in the scalar inequality \[\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}^{\top}\Delta_{k}^{i}(\Gamma)\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}+\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}^{\top}\Theta^{i}(\lambda,\nu,\eta)\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}\\ +\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}^{\top}\Psi_{k}^{i}(h_{p})\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}\leq 0.\] From Lemmas 2 and 3 and our definitions of the input set and MLP, we know that the first two terms are non-negative, and as the overall expression is non-positive, the last term must be non-positive, i.e. \[\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}^{\top}\Psi_{k}^{i}(h_{p})\begin{bmatrix}\underline{\mathbf{z}}_{i,k}\\ 1\end{bmatrix}\leq 0.\] Hence, from Lemma 4, we can conclude that \(\widehat{\mathcal{X}}_{k+1}^{[i]}\) is a polytope defined by \(H_{1},\ldots,H_{m}\) and \(h_{1},\ldots,h_{m}\), where \(\widehat{\mathcal{X}}_{k+1}^{[i]}\supseteq\mathcal{X}_{k+1}^{[i]}\). This result is implemented in Algorithm 1, which presents a method for computing hyper-rectangular output sets (a special case of the polytopic sets), given hyper-rectangular input sets. \(\mathrm{Reach}(\widehat{\mathcal{X}}_{k}^{[i]})\) represents the solution to the semidefinite program in Theorem 1, given \(\widehat{\mathcal{X}}_{k}^{[i]}\), and \(\delta_{\cdot,\cdot}\) is the Kronecker delta. This algorithm can be used iteratively to find approximate reachable sets for \(k+2\), \(k+3\), etc. Note also that this approach is parallelisable across agents. 
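For concreteness, \(\mathrm{Reach}(\cdot)\) amounts to one small semidefinite program per halfspace. The sketch below uses CVXPY in Python purely for illustration (the experiments in Section VI use CVX with MOSEK in MATLAB); `build_Delta`, `build_Theta` and `build_Psi` are hypothetical helpers assumed to assemble the matrices of (12), (15) and (18) from the multiplier variables.

```
import cvxpy as cp

def reach_halfspace(build_Delta, build_Theta, build_Psi, n0, n_nu):
    # QC multiplier variables and the halfspace offset h_p
    Gamma = cp.Variable(n0, nonneg=True)   # input-set multipliers, cf. (12)
    lam = cp.Variable(n_nu)                # ReLU multipliers, cf. (15)
    nu = cp.Variable(n_nu, nonneg=True)
    eta = cp.Variable(n_nu, nonneg=True)
    h_p = cp.Variable()

    # LMI of Theorem 1: Delta + Theta + Psi must be negative semidefinite
    lmi = build_Delta(Gamma) + build_Theta(lam, nu, eta) + build_Psi(h_p)
    problem = cp.Problem(cp.Minimize(h_p), [lmi << 0])
    problem.solve(solver=cp.MOSEK)
    return h_p.value
```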
```
1: input sets \(\mathcal{X}_{k}^{[1]},\ldots,\mathcal{X}_{k}^{[M]}\)
2: for \(i=1,\ldots,M\) do
3: \(\widehat{\mathcal{X}}_{k}^{[i]}\leftarrow\mathcal{X}_{k}^{[i]}\times\mathcal{X}_{k}^{[g(i,1)]}\times\ldots\times\mathcal{X}_{k}^{[g(i,q_{i})]}\)
4: for \(p=1,\ldots,2n_{x}\) do
5: if \(p\leq n_{x}\) then
6: \(H_{p}\leftarrow\left[\delta_{1,p}\quad\cdots\quad\delta_{n_{x},p}\right]^{\top}\)
7: else
8: \(H_{p}\leftarrow-\left[\delta_{n_{x}+1,p}\quad\cdots\quad\delta_{2n_{x},p}\right]^{\top}\)
9: end if
10: \(h_{p,k+1}^{[i]}\leftarrow\mathrm{Reach}\left(\widehat{\mathcal{X}}_{k}^{[i]}\right)\)
11: end for
12: \(\overline{x}_{k+1}^{[i]}\leftarrow\left[h_{1,k+1}^{[i]}\quad\cdots\quad h_{n_{x},k+1}^{[i]}\right]^{\top}\)
13: \(\underline{x}_{k+1}^{[i]}\leftarrow-\left[h_{n_{x}+1,k+1}^{[i]}\quad\cdots\quad h_{2n_{x},k+1}^{[i]}\right]^{\top}\)
14: \(\widehat{\mathcal{X}}_{k+1}^{[i]}\leftarrow\left\{x\in\mathbb{R}^{n_{x}}\ |\ \underline{x}_{k+1}^{[i]}\leq x\leq\overline{x}_{k+1}^{[i]}\right\}\)
15: end for
16: approximate reachable sets \(\widehat{\mathcal{X}}_{k+1}^{[1]},\ldots,\widehat{\mathcal{X}}_{k+1}^{[M]}\)
```
**Algorithm 1** One-step forward reachability analysis with hyper-rectangular constraints ## V Model Uncertainty In this section, we introduce model uncertainty into the dynamics and extend Theorem 1 to account for this. Consider the dynamics in (1). Instead of assuming direct knowledge of the state and input matrices, we now assume that they lie in the convex hull of a finite number of matrices, i.e. \[A_{ii}\in\mathrm{co}\left\{A_{ii}^{1},\ldots,A_{ii}^{C_{i}}\right\},\quad B_{i}\in\mathrm{co}\left\{B_{i}^{1},\ldots,B_{i}^{D_{i}}\right\}, \tag{19}\] where \(C_{i},D_{i}\in\mathbb{Z}^{+}\), for \(i=1,\ldots,M\). Note that \(\Psi_{k}^{i}\) in (18) depends on \(A_{ii}\) and \(B_{i}\), so we can write \(\Psi_{k}^{i}(h_{p};A_{ii},B_{i})\). **Theorem 2**: _Consider the formulation in Theorem 1, but with the addition of model uncertainty in the state and input matrices described by (19). If there exists a solution to_ \[\min_{\Gamma,\lambda,\nu,\eta,h_{p}}\quad h_{p}\] \[\text{subject to}\quad\Delta_{k}^{i}(\Gamma)+\Theta^{i}(\lambda,\nu,\eta)+\Psi_{k}^{i}(h_{p};A_{ii}^{c},B_{i}^{d})\preceq 0,\] \[\forall\ c\in\left\{1,\ldots,C_{i}\right\},\ d\in\left\{1,\ldots,D_{i}\right\},\] _for \(p=1,\ldots,m\), where \(H_{1},\ldots,H_{m}\) are specified by the user, then the resulting polytope is the solution to (4a)-(4c)._ _Proof:_ The proof relies on the fact that \(\Psi_{k}^{i}(h_{p};A_{ii}^{c},B_{i}^{d})\) depends affinely on \(A_{ii}^{c}\) and \(B_{i}^{d}\). The proof has some similarities to that in [9], but the forms of the matrices and dynamics differ, so we include the proof for completeness. First, note that we can write (19) as \[A_{ii}=\sum_{c=1}^{C_{i}}\alpha_{c}A_{ii}^{c},\quad B_{i}=\sum_{d=1}^{D_{i}}\beta_{d}B_{i}^{d}, \tag{20}\] where \(\sum_{c=1}^{C_{i}}\alpha_{c}=1,\ \alpha_{c}\geq 0\ \forall c\in\left\{1,\ldots,C_{i}\right\}\) and \(\sum_{d=1}^{D_{i}}\beta_{d}=1,\ \beta_{d}\geq 0\ \forall d\in\left\{1,\ldots,D_{i}\right\}\). 
From the definitions of \(\Psi_{k}^{i}\) and \(\tilde{A}_{i}\), we can write \[\Psi_{k}^{i}(h_{p};A_{ii}^{c},B_{i}^{d})=\begin{bmatrix}0&\Phi_{k}^{i}(A_{ii}^{c},B_{i}^{d})\\ \Phi_{k}^{i}(A_{ii}^{c},B_{i}^{d})^{\top}&\Xi_{k}^{i}(h_{p};B_{i}^{d})\end{bmatrix},\] where \[\Phi_{k}^{i}(A_{ii}^{c},B_{i}^{d})=\begin{bmatrix}\begin{bmatrix}A_{ii}^{c}&A_{ig(i,1)}&\cdots&A_{ig(i,q_{i})}\end{bmatrix}^{\top}H_{p}\\ \mathbf{0}\\ -{B_{i}^{d}}^{\top}H_{p}\end{bmatrix},\] and \[\Xi_{k}^{i}(h_{p};B_{i}^{d})=2H_{p}^{\top}\left(B_{i}^{d}\overline{u}_{i}+w_{k}^{[i]}\right)-2h_{p},\] so it can be seen that \(\Psi_{k}^{i}(h_{p};A_{ii}^{c},B_{i}^{d})\) depends affinely on \(A_{ii}^{c}\) and \(B_{i}^{d}\). Hence, multiplying by \(\alpha_{c}\) and \(\beta_{d}\), summing over \(c=1,\ldots,C_{i}\) and \(d=1,\ldots,D_{i}\), and using \(\sum_{c=1}^{C_{i}}\alpha_{c}A_{ii}^{c}=A_{ii}\), \(\sum_{c=1}^{C_{i}}\alpha_{c}=1\), \(\sum_{d=1}^{D_{i}}\beta_{d}B_{i}^{d}=B_{i}\) and \(\sum_{d=1}^{D_{i}}\beta_{d}=1\) from (20) gives \[\sum_{c=1}^{C_{i}}\sum_{d=1}^{D_{i}}\alpha_{c}\beta_{d}\Psi_{k}^{i}(h_{p};A_{ii}^{c},B_{i}^{d})=\Psi_{k}^{i}(h_{p};A_{ii},B_{i}). \tag{21}\] Finally, multiplying the LMIs in the semidefinite program by \(\alpha_{c}\) and \(\beta_{d}\) and summing over \(c=1,\ldots,C_{i}\) and \(d=1,\ldots,D_{i}\), such that \[\sum_{c=1}^{C_{i}}\sum_{d=1}^{D_{i}}\alpha_{c}\beta_{d}\left[\Delta_{k}^{i}(\Gamma)+\Theta^{i}(\lambda,\nu,\eta)\right]\\ +\sum_{c=1}^{C_{i}}\sum_{d=1}^{D_{i}}\alpha_{c}\beta_{d}\Psi_{k}^{i}(h_{p};A_{ii}^{c},B_{i}^{d})\preceq 0,\] results in \[\Delta_{k}^{i}(\Gamma)+\Theta^{i}(\lambda,\nu,\eta)+\Psi_{k}^{i}(h_{p};A_{ii},B_{i})\preceq 0,\] which follows from the fact that \(\sum_{c=1}^{C_{i}}\sum_{d=1}^{D_{i}}\alpha_{c}\beta_{d}=1\) and from (21). \(\blacksquare\) This result is useful, as we only have to solve a finite number of semidefinite programs to find an overapproximation of the forward reachable set for all convex combinations of \(A_{ii}^{1},\ldots,A_{ii}^{C_{i}}\) and \(B_{i}^{1},\ldots,B_{i}^{D_{i}}\). ## VI Experiments In this section, we use two realistic examples of multi-agent systems to demonstrate our results. We also give a comparison to the approach proposed at the start of Section III, in which the states of each agent are augmented into one overall state and existing reachability methods are applied on the overall system. We also introduce model uncertainty into one of the systems and analyse this case. Simulations were performed in MATLAB, and CVX with MOSEK was used to solve the semidefinite programs. The NNs were trained to approximate distributed MPC schemes with a given horizon; the systems were simulated with MPC, and the resulting input-output data pairs were used to train the NNs. ### _Vehicle platooning_ In the first example, we consider control of a platoon of vehicles. There are several forms of this problem, and control of a vehicular platoon has a number of benefits, including improved safety, higher road capacity, lower emissions, and/or reduced congestion [25, 26, 27, 28]. In this example, we consider the adaptive cruise control (ACC) problem, in which each vehicle aims to maintain a fixed distance from the vehicle in front, whilst travelling at a given velocity. 
The continuous-time longitudinal dynamics of each vehicle are given by [29] \[\dot{x}^{[i]}(t)=\bar{A}_{ii}x^{[i]}(t)+\bar{A}_{ii-1}x^{[i-1]}(t)+\bar{B}_{i}u^{[i]}(t),\] (note that \(\mathcal{N}_{i}=\{i-1\}\)) where the state vector is \[x^{[i]}(t)=\begin{bmatrix}e^{[i]}(t)&v^{[i]}(t)&a^{[i]}(t)\end{bmatrix}^{\top},\] where \(e^{[i]}(t)\) is the distance error between vehicle \(i\) and vehicle \(i-1\) (i.e. if the desired distance between vehicle \(i\) and the vehicle in front, \(i-1\), is \(\bar{d}^{[i]}\) and the actual distance is \(d^{[i]}(t)\), then \(e^{[i]}(t)=d^{[i]}(t)-\bar{d}^{[i]}\)), \(v^{[i]}(t)\) is the velocity of the \(i^{\text{th}}\) vehicle, \(a^{[i]}(t)\) is the acceleration of the \(i^{\text{th}}\) vehicle, and \[\bar{A}_{ii} =\begin{bmatrix}0&-1&0\\ 0&0&1\\ 0&0&-\frac{1}{\tau}\end{bmatrix},\quad\bar{A}_{ii-1}=\begin{bmatrix}0&1&0\\ 0&0&0\\ 0&0&0\end{bmatrix},\] \[\bar{B}_{i} =\begin{bmatrix}0&0&\frac{1}{\tau}\end{bmatrix}^{\top},\] where \(\tau\) is the engine time constant [30] and \(u^{[i]}(t)\) is the \(i^{\text{th}}\) acceleration input. Note that the lead vehicle (\(i=1\)) has no physical neighbour, but this can be resolved by imagining a virtual vehicle [29] with state \(x^{[0]}=\begin{bmatrix}0&\bar{v}&0\end{bmatrix}^{\top}\), where \(\bar{v}\) is the reference velocity (to be maintained by the platoon). This is shown in Figure 2. The dynamics are discretised assuming zero-order hold (ZOH) with a sample period \(T=0.1\) s, treating \(x^{[i-1]}(t)\) and \(u^{[i]}(t)\) as exogenous inputs, such that the discrete-time dynamics are in the form in (1), where \(w_{k}^{[i]}=0\)\(\forall i\in\mathcal{I}\). The NNs have \(2\) hidden layers, both with \(15\) neurons. The \(i^{\text{th}}\) control input is \[u_{k}^{[i]}=\operatorname{sat}_{\mathcal{U}_{i}}\left[\pi_{ii-1}\left(\begin{bmatrix}e_{k}^{[i]}\\ v_{k}^{[i-1]}-v_{k}^{[i]}\\ a_{k}^{[i-1]}-a_{k}^{[i]}\end{bmatrix}\right)\right],\] where \(e_{k}^{[i]}\equiv e^{[i]}(kT)\), \(v_{k}^{[i]}\equiv v^{[i]}(kT)\) and \(a_{k}^{[i]}\equiv a^{[i]}(kT)\). The forward reachable sets were computed for five time steps, \(M=9\) agents, initial conditions given by \(\mathcal{X}_{0}^{[i]}=\{x\in\mathbb{R}^{3}\mid\underline{x}\leq x\leq\overline{x}\}\)\(\forall i\in\mathcal{I}\), where \(\underline{x}^{\top}=\begin{bmatrix}-0.1&19.95&-0.01\end{bmatrix}\) and \(\overline{x}^{\top}=\begin{bmatrix}0.1&20.05&0.01\end{bmatrix}\), and controller limits given by \(\overline{u}_{i}=-\underline{u}_{i}=5\)\(\forall i\in\mathcal{I}\). A step change of \(-2\) was applied to the reference velocity at \(k=0\) (from \(\bar{v}=20\) to \(\bar{v}=18\)). The results are shown in Figure 3 for the first three vehicles. Note that the step change in \(\bar{v}\) takes some time to propagate down the platoon, hence the difference in range between the plots. A comparison between the computation time for this approach (Reach-SDP-MA) and the existing method (Reach-SDP) is shown in Table 1 for different values of \(M\). We then extend the vehicle platooning example to account for model uncertainty. Consider the case in which \(A_{ii}\in\operatorname{co}\left\{(1-\delta)A^{0},(1+\delta)A^{0}\right\}\) and \(B_{i}\in\operatorname{co}\left\{(1-\delta)B^{0},(1+\delta)B^{0}\right\}\), where \(A^{0}\in\mathbb{R}^{3\times 3}\) and \(B^{0}\in\mathbb{R}^{3}\) are the nominal values of \(A_{ii}\) and \(B_{i}\), respectively, and \(\delta=0.01\). The results are shown in Figure 4 for the first three vehicles. 
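To make the discretisation step above concrete, the following NumPy/SciPy sketch (our own illustration; the engine time constant is set to an assumed placeholder value \(\tau=0.5\), which is not stated in the text) performs the ZOH conversion while treating the predecessor state and the acceleration input as exogenous inputs:

```
import numpy as np
from scipy.signal import cont2discrete

tau, T = 0.5, 0.1   # engine time constant (assumed placeholder) and sample period
A_ii = np.array([[0.0, -1.0, 0.0],
                 [0.0,  0.0, 1.0],
                 [0.0,  0.0, -1.0 / tau]])
A_prev = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
B_i = np.array([[0.0], [0.0], [1.0 / tau]])

# Stack the predecessor state and the control input as one exogenous input vector
B_aug = np.hstack([A_prev, B_i])
A_d, B_d_aug, *_ = cont2discrete((A_ii, B_aug, np.eye(3), np.zeros((3, 4))), T, method='zoh')
A_d_prev, B_d = B_d_aug[:, :3], B_d_aug[:, 3:]   # discrete coupling and input matrices
```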
Because of the uncertainty, the size of the exact reachable sets increases, so the overapproximations are larger (compared to Figure 3). ### _Power network system_ For the second example, we consider automatic generation control (AGC) of a power network system. Unlike the previous example, the dynamics are not identical across agents, and some agents have more than one neighbour. There is also an exogenous input term. This system consists of \(M\) generation areas, and the aim is to reduce the frequency deviation in each area, in spite of load changes. Common approaches to this control problem include decentralised and distributed MPC schemes [20, 31, 32]. The continuous-time dynamics of each area are given by [31, 33] \[\dot{x}^{[i]}(t)=\bar{A}_{ii}x^{[i]}(t)+\sum_{j\in\mathcal{N}_{i}}\bar{A}_{ij}x^{[j]}(t)+\bar{B}_{i}u^{[i]}(t)+\bar{L}_{i}\Delta P_{L}^{[i]}(t),\] where the state vector for area \(i\) is \[x^{[i]}(t)=\left[\Delta\theta^{[i]}(t)\quad\Delta\omega^{[i]}(t)\quad\Delta P_{m}^{[i]}(t)\quad\Delta P_{v}^{[i]}(t)\right]^{\top},\] where \(\Delta\theta^{[i]}(t)\), \(\Delta\omega^{[i]}(t)\), \(\Delta P_{m}^{[i]}(t)\) and \(\Delta P_{v}^{[i]}(t)\) are the deviations in rotor angle, frequency, mechanical power and steam valve position, respectively, from the nominal values [34], \(u^{[i]}(t)\) is the reference power, \(\Delta P_{L}^{[i]}(t)\) is the local power load, and \[\bar{A}_{ii}=\begin{bmatrix}0&1&0&0\\ -\frac{\sum_{j\in\mathcal{N}_{i}}P_{ij}}{2H_{i}}&-\frac{D_{i}}{2H_{i}}&\frac{1}{2H_{i}}&0\\ 0&0&-\frac{1}{T_{t_{i}}}&\frac{1}{T_{t_{i}}}\\ 0&-\frac{1}{R_{i}T_{g_{i}}}&0&-\frac{1}{T_{g_{i}}}\end{bmatrix},\] \[\bar{A}_{ij}=\begin{bmatrix}0&0&0&0\\ \frac{P_{ij}}{2H_{i}}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix},\quad\bar{B}_{i}=\begin{bmatrix}0&0&0&\frac{1}{T_{g_{i}}}\end{bmatrix}^{\top},\quad\bar{L}_{i}=\begin{bmatrix}0&-\frac{1}{2H_{i}}&0&0\end{bmatrix}^{\top},\] where \(P_{ij}\), \(H_{i}\), \(D_{i}\), \(T_{t_{i}}\), \(R_{i}\) and \(T_{g_{i}}\) are defined in [31]. \begin{table} \begin{tabular}{|c||c|c|c|c|c|} \hline \(M\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline Reach-SDP-MA & \(3.85\) & \(7.04\) & \(11.60\) & \(14.17\) & \(20.82\) \\ \hline Reach-SDP [8] & \(4.01\) & \(75.39\) & \(562.94\) & \(3636.62\) & \(41025.05\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of methods (times in s) Figure 4: Plots of the reachable sets in red (solid) for distance error and velocity, and simulated trajectories (blue) for the vehicle platooning example with model uncertainty; the initial set is shown in red (dashed) – only agents \(1\) to \(3\) are shown Figure 3: Plots of the reachable sets in red (solid) for distance error and velocity, and simulated trajectories (blue) for the vehicle platooning example; the initial set is shown in red (dashed) – only agents \(1\) to \(3\) are shown Figure 2: Platoon of \(M\) vehicles, where each vehicle \(i\) only receives information from vehicle \(i-1\); the ‘virtual’ vehicle is shown with increased opacity In this example, we consider \(M=4\) generation areas (Scenario 1 in [34]), where \(\mathcal{N}_{1}=\{2\}\), \(\mathcal{N}_{2}=\{1,3\}\), \(\mathcal{N}_{3}=\{2,4\}\) and \(\mathcal{N}_{4}=\{3\}\), as shown in Figure 5. 
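For reference, the area dynamics above can be assembled directly from the model parameters. The sketch below is our own illustration; the actual parameter values (\(P_{ij}\), \(H_{i}\), \(D_{i}\), \(T_{t_{i}}\), \(R_{i}\), \(T_{g_{i}}\)) are those of [31], so any numbers passed to this function are placeholders:

```
import numpy as np

def A_bar_ii(P_row, H, D, T_t, R, T_g):
    # Continuous-time dynamics of one generation area, cf. the matrix above;
    # parameter values are placeholders, the true values are given in [31]
    return np.array([
        [0.0,                    1.0,            0.0,       0.0],
        [-sum(P_row) / (2 * H), -D / (2 * H),    1 / (2 * H), 0.0],
        [0.0,                    0.0,           -1 / T_t,   1 / T_t],
        [0.0,                   -1 / (R * T_g),  0.0,      -1 / T_g],
    ])
```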
The dynamics are discretised assuming ZOH with a sample period \(T=1\) s, treating \(x^{[j]}(t)\)\(\forall j\in\mathcal{N}_{i}\), \(u^{[i]}(t)\) and \(\Delta P_{L}^{[i]}(t)\) as exogenous inputs, such that the discrete-time dynamics are in the form in (1). The NNs have \(2\) hidden layers, both with \(10\) neurons. The \(i^{\text{th}}\) control input is \[u_{k}^{[i]}=\operatorname{sat}_{\mathcal{U}_{i}}\left[\sum_{j\in\mathcal{N}_{ i}}\pi_{ij}\left(\begin{bmatrix}x_{k}^{[i]}\\ x_{k}^{[j]}\end{bmatrix}-\begin{bmatrix}x_{\text{ref},k}^{[i]}\\ x_{\text{ref},k}^{[j]}\end{bmatrix}\right)\right],\] where \(x_{\text{ref},k}^{[i]}\) and \(x_{\text{ref},k}^{[j]}\) are the state and neighbour reference values, respectively. These are incorporated into the reachability analysis by modifying the first weight matrices and bias vectors. The forward reachable sets were computed for three time steps with initial conditions given by \(\mathcal{X}_{0}^{[i]}=\left\{x\in\mathbb{R}^{4}\mid\underline{x}\leq x\leq \overline{x}\right\}\)\(\forall i\in\mathcal{I}\), where \(\overline{x}^{\top}=-\underline{x}^{\top}=\left[10^{-4}\quad 10^{-7}\quad 10^{-3} \right]\), and controller limits given in [34]. A step change of \(-0.15\) was applied to \(\Delta P_{L,k}^{[i]}\equiv\Delta P_{L}^{[i]}(kT)\)\(\forall i\in\mathcal{I}\), and the results are shown in Figure 6. ## VII Conclusion and Future Work In this paper, we presented a scalable method to overapproximate the forward reachable sets of multi-agent systems with distributed NNC architectures. After simplifying the dynamics, we presented a method to split the overall reachability problem into multiple smaller reachability problems. We then extended this approach to account for model uncertainty. The effectiveness of this method was demonstrated on realistic examples, and it was shown to be significantly faster than the overall reachability method whilst producing the same bounds. It should also be noted that the method presented in this paper can be applied to any system that can be decomposed into the form in (1); it is not necessarily specific to multi-agent systems. Opportunities for future work include using the general framework presented in this paper to improve the efficiency of other reachability methods, such as LP-based methods. Also, further consideration could be given to synthesis of the NNCs, and reachability methods could be incorporated into the training process. Other types of activation functions and sets could also be considered.
2303.17883
Single-ended Recovery of Optical fiber Transmission Matrices using Neural Networks
Ultra-thin multimode optical fiber imaging promises next-generation medical endoscopes reaching high image resolution for deep tissues. However, current technology suffers from severe optical distortion, as the fiber's calibration is sensitive to bending and temperature and thus requires in vivo re-measurement with access to a single end only. We present a neural network (NN)-based approach to reconstruct the fiber's transmission matrix (TM) based on multi-wavelength reflection-mode measurements. We train two different NN architectures via a custom loss function insensitive to global phase-degeneracy: a fully connected NN and convolutional U-Net. We reconstruct the 64 $\times$ 64 complex-valued fiber TMs through a simulated single-ended optical fiber with $\leq$ 4\% error and cross-validate on experimentally measured TMs, demonstrating both wide-field and confocal scanning image reconstruction with small error. Our TM recovery approach is 4500 times faster, is more robust to fiber perturbation during characterization, and operates with non-square TMs.
Yijie Zheng, George S. D. Gordon
2023-03-31T08:35:22Z
http://arxiv.org/abs/2303.17883v2
# Single-ended Recovery of Optical fiber Transmission Matrices using Neural Networks ###### Abstract Ultra-thin multimode optical fiber imaging technology promises next-generation medical endoscopes that provide high image resolution deep in the body (e.g. blood vessels, brain). However, this technology suffers from severe optical distortion. The fiber's transmission matrix (TM) calibrates for this distortion but is sensitive to bending and temperature so must be measured immediately prior to imaging, i.e. _in vivo_ and thus with access to a single end only. We present a neural network (NN)-based approach that quickly reconstructs transmission matrices based on multi-wavelength reflection-mode measurements. We introduce a custom loss function insensitive to global phase-degeneracy that enables effective NN training. We then train two different NN architectures, a fully connected NN and convolutional U-Net, to reconstruct \(64\times 64\) complex-valued fiber TMs through a simulated single-ended optical fiber with \(\leq 4\%\) error. This enables image reconstruction with \(\leq 8\%\) error. This TM recovery approach shows advantages compared to conventional TM recovery methods: 4500 times faster; robustness to 6% fiber perturbation during characterization; operation with non-square TMs and no requirement for prior characterization of reflectors. _Keywords:_ Optical fiber imaging; Transmission matrix reconstruction; Custom loss function; Neural network ## 1 Introduction Ultra-thin endoscopes are a promising technique for enabling cell-scale imaging in difficult-to-reach parts of the body, with the potential to improve disease detection in organs such as the pancreas and ovaries. Commercial products using imaging fiber bundles around 1mm diameter are used in bile ducts [1] and flexible and full-color imaging has been demonstrated using distal scanning mechanisms that are typically around 2mm diameter [2; 3; 4]. To further reduce the size of endoscopes, recent work has focused on imaging through ultra-thin multimode fibers with diameters of 0.125mm and has achieved _in vivo_ fluorescence imaging in brains of immobilized mice [5]. However, there are some key limitations of these imaging systems that use ultra-thin optical fiber. First, the thinnest such imaging devices are made using multimode fiber (MMF), which suffers from significant optical distortion that changes whenever the fiber is perturbed, particularly for longer fibers (\(>\)1m) required to reach deep inside the human body [6]. Second, to calibrate this distortion, practical fiber bundle endoscopes require measurement of their transmission matrix (TM), which in turn requires optical components at the distal end to focus the light onto the distal facet. If calibration is required immediately before use, such components would be required on the distal tip for _in vivo_ use, and would thus compromise the ultra-thin form factor of the endoscopes [7]. A number of methods have been proposed to calibrate fiber TMs without distal access including guidestars [8; 9; 10], a virtual beacon source [11], or reflective structures on the fiber tips [7; 12; 13]. Gordon et al. [13] proposed a single-ended method of TM recovery based on the fiber system shown in Figure 1, with a specially designed reflector stack. This approach avoids the need for measurement at both the proximal and distal ends of the fiber, and works even for non-unitary TMs. 
The reflection matrix, \(\mathbf{C}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\), describes how an incident field \(\mathbf{E_{in}}\in\mathbb{C}^{M^{2}}\) is transformed via propagation through the optical fiber, reflected by the reflector stack and finally transferred back through the fiber into an output field \(\mathbf{E_{out}}\in\mathbb{C}^{M^{2}}\) at a wavelength of \(\lambda\): \[\mathbf{C}_{\lambda}=\mathbf{E}_{\mathbf{out},\lambda}\mathbf{E}_{\mathbf{in},\lambda}^{-1} \tag{1}\] Theoretically, the forward TM, \(\mathbf{A}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\), can be unambiguously reconstructed at a fourth wavelength based on the measured reflection matrices at 3 different wavelengths. Specifically, the reconstruction of the TM is achieved by solving a set of three quadratic matrix exponential equations: \[\mathbf{C}_{\lambda_{1}}=\mathbf{A_{\lambda_{1}}^{T}}\mathbf{R}_{\lambda_{1}}\mathbf{A}_{\lambda_{1}} \tag{2}\] \[\mathbf{C}_{\lambda_{2}}=(e^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{\lambda_{2}})})^{T}\mathbf{R}_{\lambda_{2}}(e^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{\lambda_{2}})}) \tag{3}\] \[\mathbf{C}_{\lambda_{3}}=(e^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{\lambda_{3}})})^{T}\mathbf{R}_{\lambda_{3}}(e^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{\lambda_{3}})}) \tag{4}\] where \(\mathbf{A}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) is the transmission matrix at wavelength \(\lambda\), \(\mathbf{R}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) is the reflector matrix and \(e^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{\lambda_{2}})}\) is the transmission matrix adjusted for a wavelength \(\lambda_{2}\). Currently, these equations can be solved by using an iterative approach which relies on optimization of the entire TM [13]. This therefore scales in complexity with the square of the matrix dimension, incurring significant computational time, especially for large matrices. In practice, the transmission matrix shows high sensitivity to bending and temperature, so in a practical usage scenario it would need to be measured very frequently and immediately prior to imaging. Large computational times are therefore not practical. Several methods have therefore been developed to reduce the computational time of fiber imaging. These methods typically exploit prior knowledge about the fibers to improve or speed up TM reconstruction. For example, Li et al. [14] proposed a compressed sampling method based on the optical transmission matrix to reconstruct the full-size TM of a multimode fiber supporting 754 modes at compression ratios down to 5% with good fidelity. Additionally, Huang et al. [15] retrieved the optical transmission matrix of a multimode fiber using the extended Kalman filter, enabling faster reconstruction. Recently, there has been work on using deep learning approaches, involving convolutional neural networks, to reconstruct images via multimode fibers both in transmission and reflection modes [16; 17; 18]. These methods have the advantage of being fast, and also learning and utilizing important prior information about the fiber properties and the objects being imaged. However, their performance typically degrades significantly under fiber perturbation because they do not have access to the reflection calibration measurements required to unambiguously resolve a TM. 
Further, because such approaches seek to approximate the forward propagation of light and often only consider amplitude image recovery, they often rely on classical mean-squared error loss functions for training. In order to incorporate reflection calibration measurements following fiber perturbation, it may instead be advantageous to use AI approaches to reconstruct a transmission matrix rather than an image, though there has been relatively little work in this area. When reconstructing a transmission matrix comprising complex numbers, a particular type of degeneracy arises that is not well handled by conventional AI loss functions: a global phase factor. In many physical problems, including the recovery of transmission matrices for the purposes of image reconstruction and phase-hologram generation, global phase factors are not relevant as they do not affect the perceived performance of the system: it is the _relative_ phase between pixels that must be preserved. Global phase may have a physical interpretation related to the physical length of the fiber, but in practice it is often arbitrary unless great care is taken. For example, in interferometric systems the global phase is likely to be arbitrary unless the optical path lengths of the reference and sample arms are perfectly matched, which is very challenging for multimode fibers. Further, the global phase often drifts significantly during practical experiments [19], and approaches using phase-retrieval produce entirely arbitrary global phase values [20]. Therefore, in many important practical situations, conventional loss functions will convert arbitrary shifts in the global phase to large changes in value, which can confound minimization algorithms used to fit AI models. In such cases, models may arbitrarily learn a global phase factor (a type of 'overfitting') and may thus not be generalisable. In this paper, we therefore propose a new method of implementing single-ended recovery of an optical fiber TM by solving equations 2-4 based on reflection matrix measurements at three different wavelengths. Specifically, we present two different neural network architectures, a fully connected neural network (FCNN) and a convolutional U-net, and demonstrate the performance of both. As a necessary step, a custom global phase insensitive loss function is developed to eliminate the effect of the global phase factor during the model training process. We first validate our model by recovering \(64\times 64\) complex-valued fiber transmission matrices through a simulated single-ended optical fiber system (shown in Figure 1) with \(\leq 4\%\) error for both FCNN and convolutional U-net architectures. We then demonstrate reconstructing \(8\times 8\) images through the fiber based on the recovered TM with \(\leq 8\%\) error. We highlight several advantages of this TM recovery approach compared to conventional TM recovery methods. Firstly, once the model is trained (\(\sim\)100 hours), it only requires \(\sim\)1 second for reconstruction, which is 4500 times faster than the conventional iterative approach. Secondly, the conventional method [13] can only reconstruct square TM problems, whereas this method is compatible with non-square TMs with \(\leq 8\%\) error, which is useful for many practical cases where optical systems may have different mode bases at the proximal and distal ends. Thirdly, no prior measurement of the reflectors is required for this model, removing a significant experimental challenge. 
## 2 Results ### TM recovery This TM recovery model was trained on a simulated dataset comprising 800,000 sets of simulated reflection matrices, \(\mathbf{C}_{\lambda}\) at 3 wavelengths, \(\lambda=\lambda_{1},\ \lambda_{2},\ \lambda_{3}\), as input and a complex-valued non-unitary transmission matrix at wavelength \(\lambda_{1}\), \(\mathbf{A}_{\lambda_{1}}\), as output. It was then validated using 200,000 such sets not used in training. Figure 2(a) shows the training and validation loss when training the FCNN model over 2500 epochs using different loss functions, namely conventional mean absolute error (MAE), and weighted and unweighted versions of our global-phase insensitive custom loss function (Eq. 9 and 10 respectively). Both global-phase insensitive loss functions show a decreasing loss in both training and validation in the first 2000 epochs and converge after 2500 epochs, whereas the MAE loss function exhibits fluctuating non-converging loss values for both training (green line) and validation (pink line). Comparison between the two versions of our global-phase insensitive loss function shows that the weighted version reduces loss compared to the unweighted version by \(\sim\)10% in both training (blue line) and validation (red line). An example of a reconstructed TM predicted by the FCNN model using the weighted loss function at different epochs is shown inset in Figure 2(a). It can be seen that the predicted TM gets closer to the target TM between 300 and 2500 epochs. This indicates that the custom loss function successfully avoids the global phase degeneracy that would otherwise prevent the model from learning. Figure 2(b) compares the TM results predicted by our two different neural network architectures using different loss functions. Neither the FCNN nor the convolutional U-net can recover the TM when using the MAE loss function, but both are capable of recovering the TM using either version of the global phase insensitive loss function, with a loss of \(\leq 4\%\) over 200,000 validation TMs. Compared to the unweighted version, there is a \(\sim 0.3\%\) reduction in error when using the weighted loss function in either the FCNN or convolutional U-net architecture. Figure 1: Single-ended optical fiber imaging system for TM recovery. The optical image, \(\mathbf{X}\in\mathbb{C}^{M\times M}\), is placed at the far end of the distal facet. Light with a field of \(\mathbf{E_{in}}\in\mathbb{C}^{M^{2}}\) propagates from the proximal facet through the optical fiber, with the forward transmission matrix of the optical fiber defined as \(\mathbf{A}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) at the wavelength \(\lambda\). A reflector stack with a three-layer structure is placed at the distal facet, with its reflector matrix defined as \(\mathbf{R}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) at the wavelength \(\lambda\). The reflection matrix, \(\mathbf{C}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\), can be repeatedly measured at three different wavelengths to recover the TM. Furthermore, we also evaluated the computational resource usage of the two different neural network architectures, as shown in Figure 2(c). We implemented the training process using Tensorflow 2.0 running on an NVIDIA Tesla V100 GPU. Compared to the FCNN, the convolutional U-net shows significant advantages in memory usage because it requires 1000 times fewer trainable parameters, and its convergence time is reduced by \(20\%\). However, it shows a 0.7% larger loss on the validation TMs. 
Both FCNN and convolutional U-net can recover the TM at a loss \(\leq 4\%\), and both enable \(\sim\)1s prediction time. ### Image reconstruction To evaluate the performance of recovered TMs for image reconstruction, we considered 3 example images denoted \(\mathbf{x}\in\mathbb{C}^{8\times 8}\): an amplitude-only image with a 'space invader' pattern, a phase-only digit with a uniform amplitude and a random complex-valued image. Figure 3 shows the image reconstruction results based on the recovered TM using the FCNN and convolutional U-net networks. It can be seen that all three types of images can be successfully reconstructed based on recovered TMs using both neural network models, with all image losses \(\leq 8\%\). ### Fiber perturbation We then evaluate the robustness of our TM recovery model by swapping rows between different reflection matrices, simulating the effect of the TM changing mid-way through characterization. We simulated 10 sets of 64\(\times\)64 reflection matrices with five different perturbation rates indicating the numbers of rows swapped (1/64, 4/64, 8/64, 16/64, and 32/64). Figure 4 shows the TM recovery results and their corresponding image reconstructions for different fiber perturbation rates, based on our pre-trained FCNN TM recovery model. Figure 2: (a) Training and validation loss using MAE, unweighted custom loss function and weighted loss function. TM recovery results over different epochs. (b) TM results recovered using two different neural network architectures (i.e. FCNN and convolutional U-net networks), with three different loss functions, namely MAE, the unweighted loss function in Eq. 9, and the weighted loss function in Eq. 10. (c) Comparison between FCNN and convolutional U-net architectures in terms of loss, training time, prediction time, number of converging epochs and number of trainable parameters. It can be seen that our TM recovery model is compatible with optical fibers with a small perturbation rate (below 6%) with TM loss \(\leq 8\%\) and image loss \(\leq 15\%\), but performance degrades significantly above this. ### Recovery of non-square TMs We next examine the important practical case of non-square TMs, e.g. where the desired representation at the distal end of a fiber might be different from that used at the proximal end and may have more elements. To recover a TM \(\mathbf{A}\in\mathbb{C}^{M_{p}\times M_{d}}\), we require that the reflection matrix \(\mathbf{C}\in\mathbb{C}^{M_{p}\times M_{p}}\) and that the reflector matrix \(\mathbf{R}\in\mathbb{C}^{M_{d}\times M_{d}}\), where \(M_{p}\) and \(M_{d}\) represent the number of elements used for the basis representation at the proximal and distal ends of the fiber respectively. Figure 5 shows one example of a recovered non-square TM \(\in\mathbb{C}^{12\times 6}\) using the FCNN and convolutional U-net networks, with losses of 5.95% and 9.3% respectively. Theoretically, a tall-matrix structured TM, \(\mathbf{A}\in\mathbb{C}^{M_{p}\times M_{d}}\) (where \(M_{p}>M_{d}\)), can be recovered from reflection matrices with a larger total number of elements, thus producing better recovery performance with lower loss and less training data. ### Computational resource usage As the dimension of recovered images increases, we expect an increase in the TM dimension, thus requiring more computational resources. Empirically measured computational resources are plotted in log-scale in Figure 6(a)-(c): minimum training data, minimum memory usage, and converging time respectively. Figure 3: Image reconstruction based on recovered TM. (a) and (b) show the TM and its error respectively. We consider three example images: (c) an amplitude-only ‘space invader’ pattern, (d) a phase-only digit with uniform amplitude, and (e) a random complex-valued image. We compare the reconstruction result recovered by FCNN (second column) and U-net architectures (third column) with the target result shown in the first column.
All indicate a quadratic relationship to the image dimension \(M\) for both FCNN and convolutional U-net models. For practical imaging applications we would desire at least \(32\times 32\) image resolution, giving a \(1024\times 1024\) TM, which would require training with \(>\)10 million examples, leading to memory consumption \(>\)1.5TB for the FCNN. By comparison, the convolutional U-net would require only 1.1TB of memory consumption. Compared to the FCNN, the convolutional U-net thus shows potential advantages in using 25% fewer memory resources and 20% less training data, with 15% less training time. Figure 6(d) compares the prediction time of our neural network model with the conventional method using iterative optimization [13]: our FCNN model shows significantly less reconstruction time (\(\sim\)1s vs. 1920s for a 12\(\times\)12 transmission matrix) even for large images, whereas the conventional method requires increasing time as the image size increases. Figure 4: Effect of perturbations of fiber TM during reflection-mode characterization. (a) and (b) show the TM and its error respectively. We consider three example images: (c) an amplitude-only ‘space invader’, (d) a phase-only digit and (e) a random complex-valued image. We compare the reconstruction result recovered by FCNN for increasing levels of fiber perturbation. Figure 5: Non-square-shaped TM \(\in\mathbb{C}^{12\times 6}\) recovered by our TM recovery model using FCNN architecture and convolutional U-net model. ## 3 Discussion We have demonstrated the successful reconstruction of forward fiber TMs based on reflection-mode measurements at multiple wavelengths using a novel neural network based approach encompassing two architectures: a fully-connected neural network and a convolutional U-Net. Previous work applying neural networks to fibers has focussed on image reconstruction as the end goal, but we instead focus on transmission matrix reconstruction. Such an approach is more flexible, as the inputs to the network are calibration measurements that reflect a fiber's deformation state at any given time - previous image reconstruction approaches have instead learned a static representation of the fiber TM encoded in the neural network weights. Using our approach, the recovered TM will be accurate for the most recent calibration measurements and can be used for high-speed image recovery via conventional matrix operations. Indeed, we demonstrate error values \(\leq\)8% for reconstructing complex-valued images. However, one major challenge of recovering the TM in this way is the need to recover a complex-valued TM with a degenerate global phase shift. Previous work on image reconstruction has addressed this problem by training separate networks for amplitude and phase recovery in purely real space and accepting relatively poor performance for phase recovery [16]. Here, we present a novel loss function that is insensitive to this global phase degeneracy and show a high degree of convergence compared to conventional MAE metrics. 
We believe this metric in itself could find applications in computer-generated holography via neural networks, phase retrieval problems and indeed image-reconstruction-based neural networks for fiber imaging. Applying this loss function to our single-ended TM recovery problem, we demonstrated the model for reconstructing \(64\times 64\) complex-valued fiber transmission matrices through a single-ended optical fiber system with \(\leq 4\%\) error using either the FCNN or U-net based neural network architecture, which is also capable of reconstructing \(8\times 8\) images through the fiber based on the recovered TM with \(\leq 8\%\) error. There are several major advantages to our neural network approach compared to previous iterative reconstruction approaches [13]. First, the prediction time is very fast, typically \(\sim\)1s, which makes this a feasible approach for future real-time imaging, over 4500x faster than the existing iterative approach. Training the network is significantly slower, but this would only need to be performed once per fiber for a fixed reflector, so could be performed as a one-off initial calibration step. Second, our approach shows robustness to instances where the fiber TM might change part way through characterization measurements, as is likely to happen during real _in vivo_ usage, and can tolerate up to 6% of row swaps between different reflection matrices. This performance could likely be further improved by re-training the network with perturbed examples as input, thus also learning an 'error correction' strategy. Third, the approach can reconstruct non-square transmission matrices. This is important because, due to experimental constraints, the sampling basis of the light on the proximal facet is often the pixel basis of the camera used. However, this basis may not be appropriate for imaging at the end of the fiber as it may contain many more elements than modes are supported in the fiber: multimode fiber may only support a few hundred modes depending on wavelength and core diameter. Therefore, to optimize speed and imaging performance it is often desirable to retrieve a TM in the mode basis of the fiber that can easily be addressed using our camera coordinates: hence a non-square TM. Figure 6: (a) Minimum training data versus the number of image dimensions, plotted in log-scale. (b) Minimum memory usage versus the number of image dimensions, plotted in log-scale. (c) Converging time versus the number of image dimensions. (d) Prediction time of our TM recovery model and the conventional method, plotted in log-scale. Finally, it is not required to characterize the reflector in advance, as this can effectively be inferred based on measurements of the fiber. However, there are also some trade-offs with our approach. The first trade-off is that, because the reflector need not be pre-characterized, the reflector matrix is effectively encoded in the network weights. Without careful thought about implementation, this could mean that a separate model would have to be trained for each different reflector, and since this may require millions of transmission matrices to be measured, it may be experimentally infeasible. One possible solution to this problem is to either pre-characterize reflectors as proposed previously, or else devise a method of reliably manufacturing reflectors with consistent and highly reproducible properties. 
Most of the different fiber bending conditions could then be simulated using our approach here, and so the network could be trained with relatively few experimental measurements. The second trade-off that follows from this is the need for large amounts of experimental transmission and reflection measurements. This could also be alleviated somewhat by forward simulations of the fiber, as we have previously found a high degree of alignment between simulated and experimental matrices [13]. Further, experimental and simulated datasets could be combined in a domain-transfer approach [21, 22]. The use of adaptive loss functions, such as in generative-adversarial networks, may further enable convergence on relatively small datasets, or else generate further training data. Third, the training process is very memory-intensive for the large TMs that are typically encountered in imaging applications (e.g. \(1024\times 1024\)), requiring over 1TB to train the recovery model. One possible solution is to develop matrix compression techniques, such as autoencoder models, to reduce the size of our input matrices by extracting the core features into a latent space. Reducing the batch size can also reduce memory usage, but too small a batch size leads to wider fluctuations, and thus a larger converged loss and more training time. We anticipate that this neural network-based TM recovery model, with its newly designed loss function, will lead to new machine-learning models that deal with phase information, for example in imaging through optical fiber and in holographic imaging and projection, where both phase control and speed are required. ## 4 Methods We present a new TM recovery method that uses neural networks, instead of iterative approaches [13], to solve Equations 2 - 4. Figure 7 shows the schematic of this TM recovery model. Specifically, we first simulated \(N\) optical fiber TMs, \(\mathbf{A}_{\lambda_{1}}\in\mathbb{C}^{M^{2}\times M^{2}}\), at a wavelength of \(\lambda_{1}\) as the ground truth. Then we randomly generated three complex-valued matrices as our reflector matrices, \(\mathbf{R}_{\lambda_{1}}\in\mathbb{C}^{M^{2}\times M^{2}}\), \(\mathbf{R}_{\lambda_{2}}\in\mathbb{C}^{M^{2}\times M^{2}}\), and \(\mathbf{R}_{\lambda_{3}}\in\mathbb{C}^{M^{2}\times M^{2}}\) at wavelengths \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) respectively. Finally, we generated three reflection matrices, \(\mathbf{C}_{\lambda_{1}}\in\mathbb{C}^{M^{2}\times M^{2}}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{C}^{M^{2}\times M^{2}}\), and \(\mathbf{C}_{\lambda_{3}}\in\mathbb{C}^{M^{2}\times M^{2}}\), at these wavelengths, which can be calculated using Equations 2 - 4. Here, we use wavelengths \(\lambda_{1}=850\)nm, \(\lambda_{2}=852\)nm and \(\lambda_{3}=854\)nm as physically realistic values within the TM bandwidth of a typical endoscopic length fiber (\(\sim\)2m) [13]. Each set of 3 reflection matrices, \(\mathbf{C}_{\lambda_{1..3}}\), then forms a single input to our neural network model. To feed the neural network, we pre-processed both the input data (i.e. reflection matrices) and the ground truth (i.e. TMs), converting complex-valued data to real-valued data. We then split the \(N\) data into training and validation in a 3:1 ratio before training the neural network model with the ADAM optimizer using our custom-defined loss function. 
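As an illustration of this data-generation step, the following minimal NumPy/SciPy sketch (our own, not the authors' released code) applies the wavelength-scaling relation of Equations 2 - 4 to produce the three reflection matrices from a transmission matrix and three random reflector matrices; here `A1` is just a random complex matrix standing in for the structured TM simulation described under Data Preparation below:

```
import numpy as np
from scipy.linalg import expm, logm

def reflection_matrix(A1, R, lam1, lam):
    # TM at wavelength lam inferred from the TM at lam1 via the matrix
    # log/exp relation, then the round trip fiber -> reflector -> fiber
    A_lam = expm(logm(A1) * (lam1 / lam))
    return A_lam.T @ R @ A_lam

rng = np.random.default_rng(0)
M2 = 64
A1 = rng.normal(size=(M2, M2)) + 1j * rng.normal(size=(M2, M2))  # stand-in TM
Rs = [rng.uniform(-1, 1, (M2, M2)) + 1j * rng.uniform(-1, 1, (M2, M2))
      for _ in range(3)]                                         # reflector matrices

C = [reflection_matrix(A1, R, 850.0, lam)
     for R, lam in zip(Rs, (850.0, 852.0, 854.0))]               # one input set
```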
Python was used for model training and MATLAB was used for data pre-processing and post-processing because of its ease of use for complex matrix computations. To gauge the accuracy of our TM reconstruction we define a loss metric that evaluates the performance of TM recovery by calculating the mean MSE of each validated TM over the number of validation data: \[Loss=\frac{1}{0.25N}\sum_{t=1}^{0.25N}MSE(\hat{A}_{t},A_{t}) \tag{5}\] where \(\hat{A}_{t}\) is the recovered TM and \(A_{t}\) is the target TM. \(0.25N\) is the number of data used for validation. To evaluate the performance of TM recovery in a context relevant to practical endoscopy applications, we passed an image \(\mathbf{X}\in\mathbb{C}^{M^{2}}\) via the simulated optical fiber TM and compared with the image produced using the recovered TM. In this paper, we ignore any loss within the space from the image plane to the distal end of the fiber, and also the loss in transferring through the reflector stack. Theoretically, the reconstructed image, \(\mathbf{\hat{X}}\in\mathbb{C}^{M\times M}\), can be calculated by: \[\mathbf{\hat{X}}=(\mathbf{\hat{A_{t}}^{\ T}})^{-1}\mathbf{A_{t}}^{\ T}\mathbf{X} \tag{6}\] where \(\mathbf{\hat{X}}\) is the reconstructed image, \(\mathbf{X}\) is the target image, and \(\mathbf{\hat{A_{t}}}\) and \(\mathbf{A_{t}}\) are the recovered TM and target TM respectively. ### Network architectures We defined two neural network models: a fully-connected neural network (FCNN) and a convolutional U-net based neural network, as shown in Figure 8. The FCNN is a ten-layer densely connected neural network (eight hidden layers), including 32,768 neurons in the first and last hidden layers and 8192 neurons in the other layers, all with the LeakyReLU activation function. Figure 8 shows the FCNN architecture, where the reflection matrices \(\mathbf{C}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{3}}\in\mathbb{R}^{128\times 128}\) are first flattened into \(1D\) arrays and then concatenated as the input of the model (with a size of \(49152\times 1\)) and the transmission matrix \(\mathbf{A}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\) is flattened into a \(1D\) array as the output (with a size of \(16384\times 1\)). Batch normalization layers were defined between every dense layer, and dropout layers at a rate of \(0.2\) were defined after the first two dense layers. Two skip connections were also added in order to prevent the model from overfitting. The model was trained iteratively with the weighted 'global phase insensitive' custom loss function. The training dataset for recovering the \(64\times 64\) TM consisted of 500,000 matrices and the model was run for 2500 epochs, taking 182.5 hours using Tensorflow 2.0 running on an NVIDIA Tesla V100 GPU. The Adam optimizer was used with a learning rate of 0.004 and a decay rate of \(1e^{-4}\). Next, we developed a U-net-based model that used an encoder-decoder architecture, including seven Conv2D and seven DeConv2D layers, and two MaxPooling and two UpSampling layers, with the LeakyReLU activation function in each layer. 
Figure 8 shows this architecture, where the reflection matrices \(\mathbf{C}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{3}}\in\mathbb{R}^{128\times 128}\) are defined in three channels as the input of the model (with a size of \(128\times 128\times 3\)) and the transmission matrix \(\mathbf{A}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\) as the output (with a size of \(128\times 128\times 1\)). Batch normalization layers were defined between every layer, and dropout layers at a rate of \(0.2\) were defined after the second and second-to-last Conv layers. Three skip connections were also added in order to prevent the model from overfitting. The model was trained iteratively with the weighted 'global phase insensitive' custom loss function, for 2200 epochs on a 400,000-example training dataset, taking 143 hours. The Adam optimizer was used with a learning rate of 0.004 and a decay rate of 1e-4. ### Data Preparation In terms of data preparation, we first simulated \(N\) pairs of complex-valued transmission matrices \(\mathbf{A}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\) at a wavelength of \(\lambda_{1}=850nm\) as the ground truth of the model. Figure 7: Schematic of TM recovery model, including (a) data generation, (b) data pre-processing and (c) model training. \(N\) pairs of TM are firstly simulated as the ground truth. \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) represent three different wavelengths (in our case, 850nm, 852nm and 854nm). The input of the model is all real-valued matrices concatenated with reflection matrices at three different wavelengths. \(L\) represents the custom loss function and \(w\) represents the weights updated by the optimizer. To simulate these we devised a model that recreates some characteristic properties found in fiber TMs. First, TMs are sparse in some commonly used basis, e.g. LP modes for multimode fibers or the pixel basis for multicore fibers [23]. Second, TMs can be arranged such that the majority of power intensity lies along the main diagonal with additional power spread along sub-diagonals, which is also typically observed when using bases that match relatively well to the fiber eigenbasis [24]. Third, TMs should be slightly non-unitary in realistic situations, with mode-dependent loss values (i.e. condition numbers) in the range of 3-5. To meet these requirements, we first generate a uniformly distributed random tri-diagonal matrix, \(\mathbf{B}\in\mathbb{C}^{64\times 64}\), which has non-zero elements only on the main diagonal and the diagonals below and above it. We then compute the left singular matrix \(\mathbf{U}\in\mathbb{C}^{64\times 64}\) and right singular matrix \(\mathbf{V}\in\mathbb{C}^{64\times 64}\) via singular value decomposition (SVD). 
To make it a non-unitary matrix, we apply a new singular value distribution, \(\mathbf{S_{new}}\in\mathbb{R}^{64\times 64}\), a diagonal matrix containing random values on its diagonal ranging from 0.5 to 2.5, to simulate our expected TM, matching the TMs measured during experiments [24]: \[\mathbf{A}=\mathbf{U}*\mathbf{S_{new}}*\mathbf{V}^{T} \tag{7}\] We next simulated three complex-valued reflector matrices with random uniformly distributed complex entries, \(\mathbf{R}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\), \(\mathbf{R}_{\lambda_{2}}\in\mathbb{C}^{64\times 64}\), and \(\mathbf{R}_{\lambda_{3}}\in\mathbb{C}^{64\times 64}\). Based on this, we generate \(N\) pairs of complex-valued reflection matrices \(\mathbf{C}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{C}^{64\times 64}\), and \(\mathbf{C}_{\lambda_{3}}\in\mathbb{C}^{64\times 64}\) at three different wavelengths \(\lambda_{1}=850nm\), \(\lambda_{2}=852nm\) and \(\lambda_{3}=854nm\) that correspond to the previously simulated \(\mathbf{A}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\) using Equations 2 - 4. In order to feed the neural network, both the input data (reflection matrices) and the ground truth (TM) are required to be real-valued matrices. A \(2\times 2\) complex-valued matrix can be described as a \(4\times 4\) real-valued matrix as in Equation 8: \[\begin{bmatrix}a+bi&c+di\\ e+fi&g+hi\end{bmatrix}=\begin{bmatrix}a&-b&c&-d\\ b&a&d&c\\ e&-f&g&-h\\ f&e&h&g\end{bmatrix} \tag{8}\] Finally, the inputs of the model, the three \(\mathbf{C}_{\lambda}\in\mathbb{R}^{128\times 128}\) at different wavelengths, are normalized to the range from -1 to 1. ### Weighted global phase insensitive loss function Widely-used conventional loss functions such as mean absolute error (MAE) or mean squared error (MSE) calculate the absolute difference between predicted and target output values. However, there is a class of problems whose solutions trained by deep learning models are degenerate up to a global phase factor, but whose relative phase between pixels must be preserved. This class includes problems where complex transmission matrices are reconstructed and relative phase, but not global phase, is important, but could extend to phase-hologram generation algorithms where replay-field phase is important. This is depicted visually in Figure 9(a), which shows one example of a pair of predicted and target matrices with complex entries depicted as vectors. Figure 9(b) shows the complex error between these two matrices when using MAE as the loss function. Due to the global phase shift, we observe that the vectors have large magnitudes, which will lead to an overall very large MAE when their magnitudes are summed. Figure 8: Architectures of the two different neural network models used to recover the TM. (a) Fully-connected neural network, (b) Convolutional U-net. In the limiting case (e.g. a phase shift of \(\pi\)) where the predicted and target matrices are otherwise identical, this global phase shift can result in a normalized MAE of 100% when the true value should be 0%. 
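This failure mode is easy to reproduce numerically. In the following NumPy sketch (our own illustration), two physically equivalent matrices differing only by a global phase give a large MAE, while aligning the phase first, as the loss functions below do, reduces the error to zero:

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))   # target TM
A_hat = A * np.exp(1j * np.pi / 3)         # prediction: same TM, shifted global phase

mae = np.mean(np.abs(A_hat - A))           # large, despite physical equivalence
theta = np.angle(np.sum(A_hat * A.conj())) # magnitude-weighted global phase estimate
aligned = np.mean(np.abs(A_hat - A * np.exp(1j * theta)))  # ~0 after phase alignment
```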
### Weighted global phase insensitive loss function

Widely-used conventional loss functions such as mean absolute error (MAE) or mean squared error (MSE) calculate the absolute difference between predicted and target output values. However, there is a class of problems whose solutions, as trained by deep learning models, are degenerate up to a global phase factor, while the relative phase between pixels must be preserved. This class includes problems where complex transmission matrices are reconstructed and the relative, but not the global, phase is important; it could also extend to phase-hologram generation algorithms where the replay-field phase is important. This is depicted visually in Figure 9(a), which shows one example of a pair of predicted and target matrices with complex entries depicted as vectors. Figure 9(b) shows the complex error between these two matrices when using MAE as the loss function. Due to the global phase shift, we observe that the error vectors have large magnitudes, which leads to an overall very large MAE when their magnitudes are summed. In the limiting case where the predicted and target matrices are identical up to a global phase shift (e.g. a phase shift of \(\pi\)), this can result in a normalized MAE of 100% when the true value should be 0%.

Figure 8: Architectures of the two different neural network models used for TM recovery. (a) Fully-connected neural network, (b) Convolutional U-net.

To avoid this problem, we propose a custom loss function termed a 'global phase insensitive' loss function:

\[L(\widehat{\mathbf{A_{t}}}(w),\mathbf{A_{t}}(w))=\sum_{t=1}^{4M^{2}}\left|\widehat{\mathbf{A_{t}}}(w)-\mathbf{A_{t}}(w)e^{i\phi(\sum\mathbf{A_{t}}(w)\oslash(\widehat{\mathbf{A_{t}}}(w)+\beta))}\right| \tag{9}\]

where \(\widehat{\mathbf{A_{t}}}(w)\in\mathbb{C}^{M^{2}\times M^{2}}\) and \(\mathbf{A_{t}}(w)\in\mathbb{C}^{M^{2}\times M^{2}}\) represent the predicted and target output values with respect to the weights \(w\), respectively, \(\sum\) denotes the sum over all matrix elements, \(\phi\) is the argument function for a complex-number input, \(\oslash\) denotes element-wise division, and \(\beta=0.001\) is a constant added to avoid divide-by-zero errors. We also developed an alternative custom loss function that weights phase entries by power intensity, achieved by multiplying by the complex conjugate of \(\widehat{\mathbf{A_{t}}}(w)\), denoted \(\widehat{\mathbf{A_{t}}}^{*}(w)\). We also add an \(\ell_{2}\) regularization term to improve the generalization of the model:

\[WL(\widehat{\mathbf{A_{t}}}(w),\mathbf{A_{t}}(w))=\sum_{t=1}^{4M^{2}}\left|\widehat{\mathbf{A_{t}}}(w)-\mathbf{A_{t}}(w)e^{i\phi(\sum\mathbf{A_{t}}(w)\odot\widehat{\mathbf{A_{t}}}^{*}(w))}\right|+\frac{\alpha}{2}\|w\|^{2} \tag{10}\]

where \(\odot\) denotes element-wise multiplication and \(\alpha=10^{-4}\) is the regularization parameter. This implicitly weights the phase contributions by the product of the magnitudes of the respective elements in \(\widehat{\mathbf{A_{t}}}(w)\) and \(\mathbf{A_{t}}(w)\), which upon convergence will approximately equal the squared magnitude of the target. Specifically, the global phase factor, estimated by the term \(e^{i\phi(\sum\mathbf{A_{t}}(w)\odot\widehat{\mathbf{A_{t}}}^{*}(w))}\), is the phase of a complex number representing the weighted sum of the element-wise products of the predicted and target matrices. The rationale is that when the optimization algorithm has reached a minimum, in the ideal case the remaining error for each complex element will be entirely due to aleatoric uncertainty and can thus be modelled using a circularly symmetric complex Gaussian distribution [25]. The element-wise phase errors should therefore be uniformly distributed from \(0\) to \(2\pi\). If this is not the case, then there is likely some contribution to the phase error from an arbitrary global phase, as shown in Figure 9(c). Correcting for this factor should produce the desired uniform phase distribution. To estimate the correction factor, the element-wise complex errors can be summed, as shown in Figure 9(c). This produces an overall complex factor that carries the global phase shift, shown in Figure 9(d). The predicted output value can be corrected by multiplying by this phase factor, as shown in Figure 9(e), and the result is then used to compute further parameter updates in the gradient descent algorithm. It can be seen that the complex error in Figure 9(f) between the predicted and target output values is reduced to a minimum after removing the phase factor, compared to that calculated by MAE. We then compared the absolute values of the complex error calculated by MAE (green bar) and our customized weighted 'global phase insensitive' loss function (blue bar) over 100,000 pairs of predicted and desired TMs, as shown in Figure 9(g).
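To make the correction mechanism concrete, here is a minimal differentiable sketch of the weighted loss of Equation 10 (without the \(\ell_{2}\) term, which an optimizer's weight decay can supply). It assumes complex-valued PyTorch tensors, whereas the network above is trained on the stacked real representation of Equation 8, and, as described in the text, it rotates the prediction by the estimated global phase before taking the MAE.

```python
import torch

def weighted_gpi_loss(pred, target):
    # pred, target: complex tensors of shape (batch, M, M).
    # Weighted global-phase estimate: sum of target * conj(pred); each
    # element's phase is implicitly weighted by the product of magnitudes.
    s = (target * pred.conj()).sum(dim=(-2, -1))
    phase = s / (s.abs() + 1e-12)                 # e^{i*phi}, kept differentiable
    # Rotate the prediction by the estimated global phase, then take the MAE.
    return (pred * phase[:, None, None] - target).abs().mean()

# Sanity check: a pure global phase shift incurs (almost) zero loss,
# while the plain MAE between the same matrices is large.
A = torch.randn(1, 64, 64, dtype=torch.complex64)
shifted = A * torch.exp(torch.tensor(2.0j))
print(weighted_gpi_loss(shifted, A))              # ~0
print((shifted - A).abs().mean())                 # large
```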
The error using the custom loss function is more than two times smaller than that of the conventional loss function (MAE), which suggests the potential of this custom loss function for eliminating the effect of the global phase.

## Data Availability

The data presented in this study are available from the following source: [DOI to be inserted later].

## Code Availability

The code for this study is available from the following source: [DOI to be inserted later].

## Author Contributions

## Acknowledgement

The authors acknowledge support from a UKRI Future Leaders Fellowship (MR/T041951/1).
2301.00012
GANExplainer: GAN-based Graph Neural Networks Explainer
With the rapid deployment of graph neural network (GNN)-based techniques into a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component for predictive and trustworthy decision-making. Thus, it is critical to explain why a graph neural network (GNN) makes particular predictions for them to be trusted in many applications. Some GNN explainers have been proposed recently. However, they fail to generate accurate and real explanations. To mitigate these limitations, we propose GANExplainer, based on the Generative Adversarial Network (GAN) architecture. GANExplainer is composed of a generator to create explanations and a discriminator to assist with the generator's development. We investigate the explanation accuracy of our models by comparing the performance of GANExplainer with other state-of-the-art methods. Our empirical results on synthetic datasets indicate that GANExplainer improves explanation accuracy by up to 35\% compared to its alternatives.
Yiqiao Li, Jianlong Zhou, Boyuan Zheng, Fang Chen
2022-12-30T23:11:24Z
http://arxiv.org/abs/2301.00012v1
# GANExplainer: GAN-based Graph Neural Networks Explainer

###### Abstract

With the rapid deployment of graph neural network (GNN)-based techniques into a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component for predictive and trustworthy decision-making. Thus, it is critical to explain why a graph neural network (GNN) makes particular predictions for them to be trusted in many applications. Some GNN explainers have been proposed recently. However, they fail to generate accurate and real explanations. To mitigate these limitations, we propose GANExplainer, based on the Generative Adversarial Network (GAN) architecture. GANExplainer is composed of a generator to create explanations and a discriminator to assist with the generator's development. We investigate the explanation accuracy of our models by comparing the performance of GANExplainer with other state-of-the-art methods. Our empirical results on synthetic datasets indicate that GANExplainer improves explanation accuracy by up to 35% compared to its alternatives.

## 1 Introduction

Graph neural networks (GNNs) [23], with the resurgence of deep learning, have become a powerful tool to model graph datasets and have achieved impressive performance. However, a GNN model is typically very complicated and how it makes predictions is unclear, while unboxing the working mechanism of a GNN model is crucial in many practical applications (e.g., predicting criminal associations [20], traffic forecasting [6], and medical diagnosis [2]). Thus, an explainable model is favoured and even necessary, as explanations benefit users in multiple ways, such as improving the model's fairness and security, and enhancing understanding of and trust in the model's predictions. As a result, explaining GNNs has attracted considerable research attention in recent years. Recently, several explainers [25, 5, 26, 14, 12] have been proposed to tackle the problem of explaining GNN models. These attempts can be categorized into _local_ and _global_ explainers according to their interpretation scales. In particular, if the method only provides an explanation for a specific instance, it is a _local explainer_; in contrast, if the method explains the whole model, then it is a _global explainer_. Alternatively, GNN explainers can also be classified as either _transductive_ or _inductive_ explainers based on their capacity to generalize to extra unexplained nodes. GNNExplainer [25] and GraphLIME [5] are challenging to apply in inductive settings, as their explanations are limited to a single instance; they provide local explanations, which are incapable of capturing the archetypal patterns shared by the same classes or groupings. In contrast, PGExplainer [14], Gem [12], and XGNN [26] can provide a global explanation of the model prediction. Specifically, a trained PGExplainer [14] or Gem [12] can be used in inductive scenarios to infer explanations for unexplained instances without retraining the explanation models. However, XGNN trains a graph generator for explaining a class by outputting class-wise graph patterns; it hardly applies to a specific instance, since the graph patterns may not even exist in the instance. PGExplainer trains a shared generative probabilistic model using the multiple explained instances rather than explicitly dissecting and modelling the class-wise knowledge. Gem trains an auto-encoder generative model based on Granger causality.
Still, its explanations of graph datasets are inconsistent, since it lacks a discriminator to monitor the generation progress. To address the above limitations, we propose a global inductive explainer, the Generative Adversarial Explainer (GANExplainer), which exploits the virtues of the Generative Adversarial Network (GAN) [4]. GAN applications range widely from image generation to 3D object generation, but GANs have not been used in GNN explanation; ours is the first method to use a GAN to generate explanations for GNNs. Inspired by the explanations generated by other explainers, our ultimate goal is to encourage a compact subgraph of the computation graph to have a large causal influence on the outcome of the target GNN. Our setting is general and works for any graph learning task, including node classification and graph classification. Our contributions can be summarized as follows:

* We endeavour to apply GANs to a new domain. GANs have been implemented in numerous fields, including computer vision [18], image processing [16], and natural language processing [15]. However, no attempts have been made to explain GNNs using a GAN. We are the first to use a GAN to shed light on how GNNs process and learn from graph data, thereby enhancing our understanding of their inner workings.
* We present an innovative GNN explainer. Inspired by the GAN architecture, we propose GANExplainer, which is supervised by the target GNN and can consistently provide accurate explanations. GANExplainer is composed of both a Generator and a Discriminator. The objective of the Generator is to generate explanations that can be fed into the target GNN to obtain the same predictions. The Discriminator is a graph classifier that categorizes the generated and input graphs into distinct categories.
* We enhance the accuracy of GNN explainers. GANExplainer is not only capable of generating global explanations, but can also be utilised in inductive settings. Compared to state-of-the-art inductive GNN explainers, GANExplainer has superior performance.

## 2 Related Work

Generative Adversarial Networks. Generative adversarial networks (GANs) [4] are a type of deep learning model that can generate new data samples similar to a training dataset. They consist of two neural networks: a generator and a discriminator. The generator tries to capture the distribution of actual examples and generate new data examples. The discriminator is usually a binary classifier used to discriminate generated examples from actual examples as accurately as possible. The optimization of GANs is a minimax optimization problem. The optimization terminates at a saddle point, forming a minimum for the generator and a maximum for the discriminator; that is, the GAN optimization goal is to reach a Nash equilibrium. At that point, the generator can be considered to have accurately captured the distribution of real examples. GANs provide a way to learn deep representations without extensively annotated training data. They achieve this by deriving backpropagation signals through a competitive process involving a pair of networks. The representations learned by GANs may be used in a variety of applications, including video generation [18], image generation [24], face generation [4], object detection [3], and texture synthesis [9]. However, utilizing GANs in explaining GNNs is still under exploration.

GNN Explainers. GNNs incorporate both graph structure and feature information, which results in complex non-linear models, rendering the explanation of their predictions a challenging task.
Besides, model explanations can bring many benefits to users (e.g., improving safety and promoting fairness). Thus, several popular works focusing on the explanation of GNN models by leveraging the properties of graph features and structures have emerged in recent years. We briefly review the respective GNN explainers below. GNNExplainer [25] is a seminal method in the field of explaining GNN models. It provides local explanations for GNNs by identifying the features and subgraphs that are most relevant and essential to a GNN's prediction. PGExplainer [14] introduces explanations for GNNs with the use of a probabilistic graph; it provides model-level explanations for each instance and possesses strong generalizability. CF-GNNExplainer [13] generates counterfactual explanations for the majority of instances but ignores the correlation between the prediction and the explanation; thus, Bajaj et al. [1] proposed RCExplainer, which generates robust counterfactual explanations. Wang et al. [22] proposed ReFine, which pursues multi-grained explainability by pre-training and fine-tuning. Reinforcement learning is also prevalent in explaining GNNs. For example, Yuan et al. [27] proposed XGNN, a model-level explainer that trains a graph generator to generate graph patterns that maximize a specific prediction of the model. Since XGNN focuses on model-level explanations, it may not preserve local fidelity, meaning its explanations may not be substructures existing in the input graph. To address this limitation, Wang et al. [21] proposed RC-Explainer, which generates causal explanations for GNNs by framing the causal screening process as a Markov Decision Process in reinforcement learning. Further, Shan et al. [17] proposed RG-Explainer, a reinforcement-learning-enhanced explainer that can be applied in the inductive setting, demonstrating better generalization ability. _Gem_ [12] is able to provide both local and global explanations, and it also operates in an inductive setting; thus, it can explain GNN models without retraining. Notably, it adopts a parameterized graph auto-encoder with Graph Convolutional Network (GCN) [8] layers to generate explanations, and it applies Granger causality to generate causal explanations. However, Gem does not consistently produce accurate explanations. Thus, we aim to improve on it by adding a discriminator to our framework, which provides high-accuracy explanations on synthetic and real-world datasets.

## 3 Methods

### Problem Formulation

A GNN explainer provides a faithful and compact subgraph to illustrate why the GNN makes its prediction, indicating the essential graph structures and features that lead to the model outcome. It is important to note that the explanation subgraph must be a real subgraph of the input graph, meaning it must contain a subset of the vertices and edges of the input graph. Consider a graph \(\mathbf{g}=(\mathbf{V},\mathbf{A},\mathbf{X})\) with labels \(\mathbf{L}=\{l_{1},l_{2},...,l_{i}\}\), \(l\in C=\{c_{1},c_{2},...,c_{j}\}\), where \(j\) is the number of categories. Here \(\mathbf{A}\) denotes the adjacency matrix: \(A_{ij}=1\) when node \(i\) and node \(j\) are connected, and \(A_{ij}=0\) otherwise. We have the prediction \(f(\mathbf{g})=y\) of a GNN model, and the explanation \(E(f(\mathbf{g}),\mathbf{g})=Exp\) from a GNN explainer.
We expect that feeding the explanation \(Exp\) from the GNN explainer into the target model \(f\) yields the same prediction; in short, we expect \(f(\mathbf{g})=y\) and \(f(E(f(\mathbf{g}),\mathbf{g}))=y\). We also expect the explanation to be a subgraph of the input graph, that is, \(Exp\subseteq\mathbf{g}\). Thus, to provide accurate and real explanations, we propose GANExplainer, based on the GAN architecture, which applies the features and structure of the graph together. Applying the features and structure of a graph within the GAN architecture could potentially allow GANExplainer to capture the underlying relationships and patterns in the data more accurately, which could lead to more accurate explanations.

Figure 1: The framework of GANExplainer. We generate ground truth through the Gem distillation process.

### GANExplainer

Inspired by the GAN architecture, we propose GANExplainer, which attempts to generate explanations for a GNN under the supervision of the target GNN or the ground truth. The framework of GANExplainer is shown in Figure 1. To keep the explanation generated by GANExplainer a substructure of the input graph, we assign a weight to each edge to determine its relative importance. If the explanation adjacency matrix \(\mathbf{Exp}\) contained elements that are not present in the adjacency matrix \(\mathbf{A}\) of the input graph, the explanation subgraph would not be a real subgraph of the input graph. Our objective is to generate real explanations that contain only true edges of the input graph. Thus, we use GANExplainer to produce a weight matrix \(\mathbf{W}\) instead of generating a subgraph directly. Given the weight matrix \(\mathbf{W}\), we multiply it with the adjacency matrix \(\mathbf{A}\) to obtain the explanation adjacency matrix \(\mathbf{Exp}\) (\(\mathbf{Exp}=\mathbf{W}*\mathbf{A}\)). \(\mathbf{Exp}\) represents the explanation subgraph: the element at the \(i\)-th row and \(j\)-th column of the explanation adjacency matrix is the weight of the edge between the \(i\)-th and \(j\)-th vertices in the explanation subgraph.

Generator. The purpose of the Generator is to create a weighted subgraph. The Generator is composed of an encoder and a decoder. In particular, for synthetic datasets the Generator employs a 6-layer encoder and a 2-layer decoder, while for real-world datasets it employs a 7-layer encoder and a 2-layer decoder. The Generator initially produces a weighted graph based on the input graph, the ground truth of the input graph, and the prediction of the target GNN. This product is then fed into the Discriminator, and the Generator is updated based on the Discriminator's feedback.

Discriminator. The objective of the Discriminator is to identify whether a graph is real or artificial. The Discriminator is a three-layer graph classifier with convolutional layers. Specifically, the Discriminator outputs real labels for the input graphs and fake labels for the graphs generated by the Generator; this information is fed back to the Generator and assists its development.
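For illustration, the sketch below pairs a simple Generator and Discriminator in PyTorch. It hand-rolls dense message passing (aggregate-then-project) instead of using a graph library, uses an inner-product decoder for the edge weights, and masks the result with \(\mathbf{A}\) so the explanation stays a real subgraph; the layer counts, hidden sizes, and pooling choice are our illustrative stand-ins for the deeper encoders described above, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Produces an edge-weight matrix W; Exp = W * A keeps only true edges."""
    def __init__(self, n_feat, hidden=64):
        super().__init__()
        self.enc1 = nn.Linear(n_feat, hidden)
        self.enc2 = nn.Linear(hidden, hidden)

    def forward(self, x, adj):
        # Encoder: two rounds of neighborhood aggregation + projection.
        h = torch.relu(self.enc1(adj @ x))
        h = torch.relu(self.enc2(adj @ h))
        # Decoder: inner product -> edge weights in (0, 1).
        w = torch.sigmoid(h @ h.t())
        return w * adj              # mask: the explanation stays a real subgraph

class Discriminator(nn.Module):
    """Three-layer graph-convolutional classifier: real vs. generated graphs."""
    def __init__(self, n_feat, hidden=32):
        super().__init__()
        self.g1 = nn.Linear(n_feat, hidden)
        self.g2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        h = torch.relu(self.g1(adj @ x))
        h = torch.relu(self.g2(adj @ h))
        return torch.sigmoid(self.out(h.mean(dim=0)))  # graph-level real/fake score
```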
### Improved Loss Function

GANs consist of a generator and a discriminator, which interact to reach a balance as in a minimax game. The objective of the generator \(G\) is to generate data that tricks the discriminator, whereas the objective of the discriminator \(D\) is to differentiate between actual and generated data. Consequently, the loss function of the original GAN is defined as follows:

\[\min_{G}\max_{D}V(D,G)=E_{x\sim p_{\text{data}}(x)}[\log D(x)]+E_{z\sim p_{z}(z)}[\log(1-D(G(z)))]\]

where \(x\) is the input graph data and \(z\) is the normalized graph adjacency matrix. Explainers should explain GNNs by finding both the significant subgraphs and the essential features that play a crucial role in the GNN's prediction. Therefore, we must use the target GNN as a check to ensure that we generate explanations that can explain the GNN, and that these explanations are not limited to the critical structures of the datasets. Consequently, if we used only the original GAN objective function, we would generate another graph or subgraph that cannot explain why the GNN makes its prediction. In order to generate explanations with the two virtues of fidelity and reality, we have enhanced the generator's objective function. The objective function of the Generator of GANExplainer is defined as follows:

\[\min_{G}E_{z\sim p_{z}(z)}\left[\log(1-D(G(z)))+\lambda\frac{1}{N}\sum_{i=1}^{N}(f(\mathbf{g})-f(G(z)))^{2}\right]\]

where \(f\) denotes the target GNN and \(N\) is the number of nodes of \(\mathbf{g}\). In our experiments we set \(\lambda=2\) for the synthetic datasets and \(\lambda=3\) for the real-world datasets.
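A minimal sketch of this improved generator objective is shown below; the term names follow the equation, the small epsilon inside the logarithm is our numerical-stability addition, and the per-node MSE stands in for the \(\frac{1}{N}\sum_{i}\) term.

```python
import torch

def generator_loss(d_fake, f_orig, f_exp, lam=2.0):
    """Improved generator objective: fool D while matching the target GNN.

    d_fake: discriminator score D(G(z)) for the generated explanation
    f_orig: target GNN outputs f(g) on the input graph, shape (N, C)
    f_exp:  target GNN outputs f(G(z)) on the explanation, shape (N, C)
    lam:    lambda = 2 (synthetic) or 3 (real-world datasets)
    """
    adversarial = torch.log(1.0 - d_fake + 1e-8)    # original GAN term
    fidelity = ((f_orig - f_exp) ** 2).mean()       # (1/N) sum of squared errors
    return adversarial + lam * fidelity
```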
## 4 Experiments

In this section, we conduct experiments to inspect the performance of our model. We first describe the details of the datasets and implementation in 4.1 and 4.2, respectively. After that, we present and analyze the experimental results on synthetic datasets in 4.3 and on real-world datasets in 4.4 from two aspects: quantitative evaluation and qualitative evaluation.

### Datasets

We focus on two widely used synthetic node classification datasets, BA-Shapes and Tree-Cycles [25, 10], and two real-world graph classification datasets, Mutagenicity [7] and NCI1 [19]. Statistics of these datasets are shown in Table 1. The BA-Shapes dataset is based on a Barabasi-Albert (BA) graph with 300 nodes and attaches 80 "house"-structured network motifs to randomly selected nodes of the base graph. Nodes are categorised into four classes depending on their structural roles, which correspond to nodes at the top, middle, and bottom of houses, and nodes that do not belong to a house. The Tree-Cycles dataset starts with a base 8-level balanced binary tree and attaches 80 six-node cycle motifs to random nodes of the base graph. Nodes are divided into two classes according to whether they belong to the tree or to a cycle. The Mutagenicity dataset contains 4,337 molecule graphs, where nodes represent atoms and edges denote chemical bonds. The graph classes, non-mutagenic and mutagenic, indicate their mutagenic effects on the Gram-negative bacterium Salmonella Typhimurium. Carbon rings with the chemical groups \(NH_{2}\) or \(NO_{2}\) are known to be mutagenic; carbon rings, however, exist in both mutagenic and non-mutagenic graphs, so they are not discriminative. NCI1 represents a balanced subset of chemical compounds screened for activity against non-small cell lung cancer. This dataset contains more than 4,000 chemical compounds, each of which has a class label, positive or negative. Each chemical compound is represented as an undirected graph where nodes, edges and node labels correspond to atoms, chemical bonds, and atom types, respectively.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{Node Classification} & \multicolumn{2}{c}{Graph Classification} \\ \cline{2-5} & BA-Shapes & Tree-Cycles & Mutagenicity & NCI1 \\ \hline \# of Graphs & 1 & 1 & 4,337 & 4,110 \\ \# of Edges & 4,110 & 1,950 & 266,894 & 132,753 \\ \# of Nodes & 700 & 871 & 131,488 & 122,747 \\ \# of Labels & 4 & 2 & 2 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1: Datasets information.

### Experimental Settings

In our experiments, we replicate the experimental conditions of Gem so that we can fairly compare our results to those of Gem, which is our baseline. In accordance with Gem's experimental settings, a value of K is used to select the top edges as explanations; consequently, we set K to the same value for each dataset as Gem. Consistent with Gem's experimental settings, we divide the data into 80% training data, 10% validation data, and 10% testing data, and we keep the testing data consistent. Both the training data and the validation data are utilised in their entirety during the training process.

Baseline approaches. With the wide application of GNNs, more and more GNN explainers have been proposed to address the problem of explaining GNN models. _GNNExplainer_ is a seminal method in the field of explaining GNN models; in addition, PGExplainer and Gem are the methods most related to ours. Note that for the synthetic datasets, we know the motif of each dataset. However, for real-world datasets, there are no explicit (ground truth) motifs for classification; thus, for real-world datasets, we need to explain all classes. PGExplainer assumes NO2 or NH2 as the motifs for the mutagen graphs and trains an MLP for model explanation with the mutagen graphs that include at least one of these two motifs. In addition, per the results reported by Gem, the performance of Gem is better than that of PGExplainer. Thus, we consider GNNExplainer and Gem as the alternative approaches. We set all the hyperparameters of the baselines as reported in the corresponding papers.

Metrics. A better explainer should generate more compact subgraphs that maintain the prediction accuracy when the associated explanations are fed into the target GNN. After comparing the characteristics of each metric [11, 28], we chose quantitative and qualitative evaluation. To this end, we generate the explanations for the test set based on GNNExplainer, Gem, and GANExplainer, respectively. Then we use the predictions of the target GNN on the explanations to calculate the explanation accuracy, defined as:

\[ACC_{exp}=\frac{Correct_{f(\mathbf{g})=f(Exp)}}{|Test|}\]

where \(f\) denotes the target GNN, \(\mathbf{g}\) the graph, \(Exp\) the explanation, and \(|Test|\) the total number of instances in the test set.
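Stated as code, the metric reduces to a simple agreement count between the target GNN's predictions on the input graphs and on their explanations; the callable interface assumed here is illustrative.

```python
import torch

@torch.no_grad()
def explanation_accuracy(target_gnn, test_graphs, explanations):
    """ACC_exp: fraction of test instances whose explanation reproduces f(g)."""
    correct = sum(
        int(target_gnn(g).argmax() == target_gnn(exp).argmax())
        for g, exp in zip(test_graphs, explanations)
    )
    return correct / len(test_graphs)
```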
### GANExplainer on Synthetic Datasets

First, we conduct experiments on the synthetic datasets BA-Shapes and Tree-Cycles. We evaluate the accuracy of the explanations provided by GANExplainer (our model), Gem, and GNNExplainer, and present both quantitative and qualitative evaluations. The accuracy of explanations for the synthetic datasets under various K settings is detailed in Table 2. The results indicate that GANExplainer consistently provides the most accurate explanations in almost all cases. On the BA-Shapes dataset, GNNExplainer, Gem, and GANExplainer all perform well, but GANExplainer still delivers a number of improvements and outperforms GNNExplainer and Gem. On the Tree-Cycles dataset, GANExplainer performs well, whereas neither GNNExplainer nor Gem does. Specifically, when K=7 on Tree-Cycles, GANExplainer achieves a 35% and 84% improvement over Gem and GNNExplainer, respectively.

\begin{table} \begin{tabular}{l|c c c c c|c c c c c} \hline \hline K (edges) & \multicolumn{5}{c|}{BA-Shapes} & \multicolumn{5}{c}{Tree-Cycles} \\ & 5 & 6 & 7 & 8 & 9 & 6 & 7 & 8 & 9 & 10 \\ \hline GNNExplainer & 0.7941 & 0.8824 & 0.9118 & 0.9118 & 0.9118 & 0.2000 & 0.5429 & 0.7143 & 0.8571 & 0.9429 \\ Gem & 0.9412 & 0.9412 & 0.9412 & 0.9412 & 0.9412 & 0.7429 & 0.7429 & 0.7714 & 0.8857 & 0.9143 \\ GANExplainer & 0.7647 & 1.0000 & 0.9706 & 0.9853 & 0.9853 & 0.9143 & 1.0000 & 0.9714 & 1.0000 & 1.0000 \\ \hline \hline \end{tabular} \end{table} Table 2: Results on Synthetic Datasets.

Qualitative analysis is an effective way to visualize the explanations, and we use it to examine the differences between the explanations of GNNExplainer, Gem, and GANExplainer. We visualise the explanations for Tree-Cycles with \(K=6\) in Figure 2. We note that the explanations of GNNExplainer and Gem do not yield the correct prediction when fed into the target GNN, whereas the explanation from our GANExplainer yields the same prediction that the target GNN makes on the original graph.

### GANExplainer on Real-world Datasets

We report the results on the real-world datasets in the following. The quantitative evaluation is shown in Table 3. The reported results verify that our proposed GANExplainer generates explanations that consistently yield high explanation accuracies over all datasets. To further check the explainability of the generated explanations, we report the qualitative evaluation on Mutagenicity (graphs 3903 and 3904, \(K=15\)) in Figure 3. When the target GNN makes a prediction for a graph, we expect the same prediction for the explanation. Specifically, for graph 3903 the label is 1, while the target GNN makes the wrong prediction, 0; we want to explain why the target GNN makes the prediction 0. Thus, we visualise the explanations from GNNExplainer, Gem, and GANExplainer, respectively. From the figure, we note that the explanations of GNNExplainer and GANExplainer obtain the same prediction as the original graph after being fed into the target GNN, whereas the explanation of Gem yields a different prediction. Thus, we conclude that the explanations provided by GNNExplainer and GANExplainer are correct, while the explanation of Gem is incorrect. Furthermore, comparing the explanations of GNNExplainer and GANExplainer, we note that GANExplainer provides a more complete explanation.

\begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline K (edges) & \multicolumn{4}{c|}{Mutagenicity} & \multicolumn{4}{c}{NCI1} \\ & 15 & 20 & 25 & 30 & 15 & 20 & 25 & 30 \\ \hline GNNExplainer & 0.6981 & 0.7188 & 0.7442 & 0.7834 & 0.6909 & 0.7031 & 0.7566 & 0.8004 \\ Gem & 0.6705 & 0.7027 & 0.7741 & 0.7949 & 0.6253 & 0.7055 & 0.7956 & 0.8126 \\ GANExplainer & 0.6935 & 0.7442 & 0.7650 & 0.7857 & 0.6642 & 0.7494 & 0.7908 & 0.8273 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on Real-world Datasets.
Figure 3: Explanation visualization on Mutagenicity when \(K=15\). The label of graph 3903 is 1, and the label of graph 3904 is 0. In this diagram, the first column shows the original graph structure and the predicted label of the target GNN. The second to fourth columns show the explanations of the graphs from GNNExplainer, Gem, and GANExplainer, respectively.

Figure 2: Explanation visualization of node 709 (label 1) on Tree-Cycles when \(K=6\). In this diagram, blue nodes are nodes for which the target GNN predicts label 1, and grey nodes are nodes for which the target GNN predicts label 0. The red-circled node is the node for which we need to explain why the target GNN predicts label 1. The first subfigure shows the original graph structure, for which the prediction of the target GNN is 1, meaning the target GNN makes the right prediction. The second to fourth subfigures show the explanations of node 709 from GNNExplainer, Gem, and GANExplainer, respectively.

## 5 Conclusion

In this paper, we propose GANExplainer, an explainer based on GAN, which can create accurate and real explanations for any graph neural network. Specifically, GANExplainer is model-agnostic, does not rely on a linear-independence assumption for the explained features, and does not require knowledge of the internal structure of the target GNN. Furthermore, GANExplainer can be applied in inductive settings to explain the whole GNN model. Our findings are consistent across diverse datasets and graph learning tasks.
2309.07163
Systematic Review of Experimental Paradigms and Deep Neural Networks for Electroencephalography-Based Cognitive Workload Detection
This article summarizes a systematic review of electroencephalography (EEG)-based cognitive workload (CWL) estimation. The focus of the article is twofold: identifying the disparate experimental paradigms used for reliably eliciting discrete and quantifiable levels of cognitive load, and the specific nature and representational structure of the commonly used input formulations in the deep neural networks (DNNs) used for signal classification. The analysis revealed a number of studies using EEG signals in their native representation of a two-dimensional matrix for offline classification of CWL. However, only a few studies adopted an online or pseudo-online classification strategy for real-time CWL estimation. Further, only a couple of interpretable DNNs and a single generative model had been employed for cognitive load detection at the time of this review. More often than not, researchers were using DNNs as black-box models. In conclusion, DNNs prove to be valuable tools for classifying EEG signals, primarily due to the substantial modeling power provided by the depth of their network architectures. It is further suggested that interpretable and explainable DNN models be employed for cognitive workload estimation, since existing methods are limited in the face of the non-stationary nature of the signal.
Vishnu KN, Cota Navin Gupta
2023-09-11T14:27:22Z
http://arxiv.org/abs/2309.07163v1
Systematic Review of Experimental Paradigms and Deep Neural Networks for Electroencephalography-Based Cognitive Workload Detection

###### Abstract

This article summarizes a systematic review of electroencephalography (EEG)-based cognitive workload (CWL) estimation. The focus of the article is twofold: identifying the disparate experimental paradigms used for reliably eliciting discrete and quantifiable levels of cognitive load, and the specific nature and representational structure of the commonly used input formulations in the deep neural networks (DNNs) used for signal classification. The analysis revealed a number of studies using EEG signals in their native representation of a two-dimensional matrix for offline classification of CWL. However, only a few studies adopted an online or pseudo-online classification strategy for real-time CWL estimation. Further, only a couple of interpretable DNNs and a single generative model had been employed for cognitive load detection at the time of this review. More often than not, researchers were using DNNs as black-box models. In conclusion, DNNs prove to be valuable tools for classifying EEG signals, primarily due to the substantial modeling power provided by the depth of their network architectures. It is further suggested that interpretable and explainable DNN models be employed for cognitive workload estimation, since existing methods are limited in the face of the non-stationary nature of the signal.

Cognitive Workload, Mental Workload, Deep Neural Networks, Deep Learning, Electroencephalogram

## I Introduction

Brain-Computer Interfaces (BCIs) are often employed to facilitate Human-Machine Interactions (HMIs), such as those with autonomous or semi-autonomous transportation vehicles or heavy industrial machinery. Operational aspects of these environments demand situational awareness, optimal allocation of attentional resources, and sustained vigilance from the operator due to their safety-critical nature [1]. These cognitive resource demands induce a load on the mental faculties of the human operator, and this operational load has been termed cognitive (mental) workload (CWL). The cognitive resources demanded by an operational task may vary from very low (underload) to extremely high (overload) in ominous operational situations. Both high and low CWL may adversely affect the interaction, reducing both the machine's and the operator's performance, which may result in catastrophes and cost human lives [2]. Therefore, accurate real-time estimation of task-induced workload and the user's cognitive state is critical for an adaptive-automated functional protocol in real-world HMIs like piloting an aircraft, driving an automobile, or operating heavy construction machinery. Emerging technologies like BCIs are envisioned to bridge the gap between humans and machines by providing a bio-digital interface between the two [3, 4]. The general definition of CWL is the ratio between the cognitive resources demanded by the task and the available cognitive resources that a user can allocate against the task's demands [5]. Several ways of measuring task-induced CWL exist [6]. The field has traditionally used self-reported (subjective) measures to estimate the cognitive workload experienced by a user, in addition to reaction time in a secondary task (a behavioral measure). These methods hinder the primary task execution and are therefore unsuitable for real-time estimation [7].
The adoption of neurophysiological signals such as the electroencephalogram (EEG) has increased since they can provide an objective, direct, passive, and real-time estimation of the cognitive resources demanded by a task [8]. Electroencephalographic (EEG) signals originate from a noisy nonlinear system and have traditionally been considered challenging to decode [9]. Nevertheless, EEG is still an appropriate signal for CWL estimation [10], since it is a low-cost and portable acquisition modality with high temporal resolution. However, this neuroimaging modality comes with a unique set of challenges. The high dimensionality of the EEG signal has always compelled feature extraction from the time-domain signal, followed by dimensionality reduction [9, 11]. Other than physiological artifacts [12], the presence of unrelated neural activity could be the primary reason for EEG signals being highly variable across the multiple sessions of a subject and across different subjects performing the same task. Almost all state-of-the-art BCI protocols need extensive calibration for reliable classification performance at the levels typically required by consumer BCI applications [13]. These challenges necessitate careful experimental design and extensive signal processing before conducting statistical analysis, so that the EEG signal can be correlated with an observed behavioral phenotype. CWL can be elicited using numerous tasks and may be detected as changes in the signal power of various frequency bands of the EEG. Many studies have independently verified characteristic changes in EEG sub-band oscillations during different levels of workload [14]. Alpha oscillations are characteristic of the wakeful state [15], since they relate to sensory perception and mediate sensory suppression mechanisms during selective attention [16]. Additionally, CWL can be measured using active or passive measures and tasks [17, 18]. A wide variety of these EEG-based measures have been reviewed in [19]. Passive BCIs (pBCIs) do not employ a covarying subjective or behavioral measure of CWL. The envisioned pBCI is a bio-digital interface that can provide an implicit mode of communication with a computer-controlled machine [8] by automatically detecting neurophysiological signals of specific intentions and translating this brain activity into machine-executable commands [20]. Identifying and segregating the neural activity of interest from the rest of the signal is central to a BCI protocol, but the technical challenges significantly hamper practical signal classification [21]. Recent advances in deep neural networks (DNNs) have shown promise in objectively assessing CWL levels from electroencephalogram signals [22, 23]. Deep learning (DL) algorithms can learn from the characteristically weak EEG signals, eliminating the need for feature extraction [24] and, in some cases, signal pre-processing [25]. DNNs possess superior pattern recognition abilities over traditional machine learning algorithms (MLAs) since they can leverage the parametric depth of the network while learning, enabling them to recognize the relevant features directly from the EEG signals despite the non-stationarity. In several EEG-based BCI paradigms, DNNs have surpassed the performance of traditional MLAs [25].
Notwithstanding some sparse success in the field of CWL estimation [26], DNNs currently perform worse than the state-of-the-art SVM classifiers [22, 27]. However, it is worth noting that [28] proved that DNNs could achieve performance on par with traditional classifiers using relatively small EEG datasets. The current limitations of EEG-based CWL detection, and thus the evident broad research gaps, can be identified as:

1. Inter-session/subject variations in signal features, notwithstanding the same stimulus being used for eliciting a given activity,
2. Non-stationarity of EEG signals and the need for signal features that deliver optimal classification performance for a given task,
3. Lack of models explaining the sustained inter-subject similarity in neural activity despite significant intra-subject signal variability [29], and
4. A consequent lack of consensus on the classification algorithm and the most appropriate signal feature.

Cognitive workload measurements are currently widely used in aviation [30], automobile [31], and certain BCI applications. The field uses both laboratory-based and real-world paradigms. These experimental paradigms, DNN-based detection methods, and their application domains are reviewed in this article. Though many reviews exist on the topic [24, 30, 32, 2], a systematic review focusing on EEG-based cognitive workload estimation using deep learning algorithms is absent, and this review is intended to fill that gap in knowledge.

## II Literature Search

### _Research Questions_

The central topics of this review, the keywords for article retrieval, and the PRISMA flow chart of article selection are depicted in Fig. 1. We identified 64 articles that satisfied all the set criteria; they have been analyzed using the following constructs. The research articles are systematically evaluated against predefined critical constructs expressed as questions:

1. What are the paradigm designs used to elicit different cognitive states? Are there any domain-specific cognitive states and task-design trends?
2. What are the DNNs employed for cognitive load detection? What are the preferred network architectures, input formulations, and features used for CWL detection?

## III Results

### _Experimental Paradigm for CWL Induction_

Many experimental paradigms are prevalently used in CWL research to elicit graded levels of cognitive load. These levels vary from a basic binary distinction to as many as seven levels of cognitive load. The most highly graded workload levels were provided by the Automated Cabin Air Management System (AutoCAMS) task [33]. This experimental paradigm simulates the micro-world of a space flight but is a generic operator paradigm [34] in which the subject is tasked to monitor gauge levels and make real-time decisions. The task's general similarity to many operational scenarios, including industrial process control, has inspired many studies, and about 10% of all the studies reviewed here use it. AutoCAMS simulates these graded levels by varying the number of subsystems and automation failures. The observed maximum in this survey is seven levels of cognition, as found in [35]. Most of the studies employing the AutoCAMS task used three or more categories. It is a computer simulation aimed at simulating adaptive operational environments. Additionally, different types of flight simulations were used to generate four [36, 37, 38] and five levels [39, 40] of workload.
Further, the Multi-Attribute Task Battery (MATB) [41] is also a generic operator paradigm like AutoCAMS; it simulates the generic operations a pilot performs while flying an aircraft. The task battery consists of multiple tasks to be performed simultaneously in a scheduled manner, which determines the induced workload levels. MATB has been widely used for eliciting two [42] or three [28] workload levels. Furthermore, the Simultaneous Task Capacity (SIMKAP) task [43, 44, 45] and N-back tasks [46] have been used to produce a maximum of three workload levels. The SIMKAP task is a multitasking paradigm, and a few open-source datasets collected with it are available online. In the N-back task, a series of numbers or shapes is presented to the subject on a screen, and the subject is asked to react to each stimulus by evaluating whether the current element is the same as the one that appeared n steps earlier, hence the name n-back. Other paradigms used to elicit two levels of workload include working memory tasks, mental arithmetic tasks, construction activities, and learning tasks, to name a few. Apart from these standard tasks, some studies have used in-house tasks [47, 48, 49] for eliciting cognitive workload [50].

#### Cognitive and Operative Paradigms

A recent review classified the workload-inducing tasks into 'cognitive paradigms' and 'operative paradigms' [24] (Fig. 1), in which the studies using operative paradigms specifically intended their research as a direct industrial application, unlike the cognitive paradigms, which were intended as controlled laboratory experiments focused on theoretical aspects of cognition and the cognitive workload construct. This analysis follows the same taxonomical classification while presenting results. Most articles retrieved for this survey implemented operative paradigms for inducing cognitive workload (67%), while the rest used cognitive paradigms (33%). This inference is based on Fig. 2, where the pie charts illustrate the prevalence of the experimental paradigms used to elicit cognitive workload. The prevalent cognitive paradigms encountered in this study are the N-back [51, 52, 53, 54], Sternberg working memory [55, 39], mental arithmetic (MA) [56, 57], and SIMKAP [58, 59] tasks, while general flight simulation [60, 61, 62], driving simulations [63, 64, 65], MATB [28, 42, 66], and AutoCAMS [67, 68, 69, 70, 71, 72] are the prominent tasks categorized as operative paradigms. Overall, operative paradigms were encountered more often than cognitive paradigms. Following the logic of the previous synthesis, a further sub-division of these experimental paradigms is proposed, based on the objective of the cognitive load-inducing task and the orientation of the research. This classification of experimental tasks demarcates the orientation and application of the intended experiment: whether the construct under examination is the human agent and their continually varying cognitive states, or an operational aspect of the machine that may bring about characteristic cognitive states in the user. This classification may help identify the specific context of the cognitive workload problem and help formulate it into a suitable experimental setting for the intended application. The proposed taxonomical classes are the 'operator paradigm,' 'operation paradigm,' 'user paradigm,' and 'brain paradigm.' This dichotomy is depicted in Fig. 1D. The results briefed in this section are depicted in Fig.
2.

#### Operation and Operator Paradigms

The operator paradigms simulate the general characteristics of an HMI, focusing on the operator, while the operation paradigms focus on simulating one particular aspect of a given HMI to examine the corresponding functional states of the human agent. About 47% of all the articles surveyed use an operator paradigm, while about 20% of the studies employed an operation paradigm. Within the operator paradigms, flight simulations that mimic the typical operational environment of a pilot were used the most (30%). Monotonous automobile driving (19%) and the generic operator paradigm AutoCAMS (19%) were the next most prevalent operator paradigms. Further, MATB was used by 16% of the studies. Other operator paradigms are driving in varying traffic conditions [73], construction activities [74], [75], and learning tasks [76]; together they constituted about 16% of the operator paradigms encountered in this survey. Contrary to the operator paradigms, operation paradigms focused on inducing a specific cognitive state in response to a particular operational sequence or event, such as lane deviation. The most prevalent experimental task within the operation paradigms was the lane-deviation task (46%), where a lane-perturbation event is followed by monotonous automobile driving, and the operator's reaction time is regressed against the driver's cognitive state. Other operation paradigms encountered in this survey are driving distraction [77], remote piloting of aerial vehicles [78], specific flight sequences [36], [37], [38], robot-assisted surgery [79], and construction activity [80], together constituting about half of the operation paradigms (54%).

#### Brain and User Paradigms

User paradigms focus on user skills or specific attributes of the user, like multitasking ability or language proficiency, while brain paradigms focus on cognition-related aspects such as working memory (WM) and engagement. About 18% of all the reviewed articles induce workload with a user paradigm, while brain paradigms were used by 15% of all the articles. The prevalent user paradigm was MA (46%), where the subject continuously performs difficult numerical calculations to elicit binary workload levels, followed by SIMKAP (38%), where several sub-tasks are performed simultaneously to elicit graded workload levels. Other user paradigms include visuomotor tracking tasks [81], where a visual stimulus is tracked while it moves across a screen, and language incongruency tasks, where ambiguous pronouns elicit higher workload levels. These tasks focused on the user and their response to a generic BCI protocol. Further, within the brain paradigms, the N-back task (46%) was the prevalent choice; the rest (54%) were several types of WM tasks and other in-house WM paradigms. These tasks focused on specific aspects of cognition, such as WM, attention, arousal, and vigilance.

#### Experimental Environment

Usually, computer-based simulations were used to set up task environments in both cognitive and operative paradigms. MATB and AutoCAMS are two computer-based operator paradigms extensively used for eliciting cognitive load, and they resemble the typical operational environments of an aircraft pilot and a generic industrial process controller, respectively. Unlike computer-based simulations, some deep-learning studies used EEG signals acquired from real-world vehicle operation scenarios [50], [82]. However, simulated task conditions are the norm in the field.
Typically, these tasks are implemented in an augmented or virtual reality engine or a computer-based simulator. Though cognitive paradigms were created using only computer-based simulations, the task environments of operative paradigms were set up much more diversely. In the automobile industry, augmented reality (AR) 'full-driving simulators' are less of an industry standard and are typically used only in research settings. Full-driving simulators consist of a real car mounted on a Stewart platform that can simulate motion with six degrees of freedom, surrounded by a projected display. AR environments are the industry standard for pilot training in the aviation sector. These systems are known as 'full-flight simulators' and vary in the degree of realism of the simulation they offer. On the high end, data collected using full-flight simulators were encountered in this study [61], [83]. On the lower end, simpler simulation setups were used, such as mounting the pilot's chair on a Stewart platform and providing a computer-based projected display [39], [84]. Additionally, virtual reality (VR) engines and head-mounted displays were used only for conducting construction activity paradigms [74]. These AR and VR systems are a good trade-off between real-life situations and controlled laboratory environments, and can be extremely useful in researching real-life paradigms which are often dangerous, like the lane-keeping task. However, unlike these costly systems that are not easily available, computer-based simulations are accessible to everyone. Moreover, many studies have validated computer-based operator paradigms like AutoCAMS and MATB, and a plethora of datasets collected using these tasks already exists; they can therefore be used for comparing the fidelity of detection methodologies.

Figure 1: A) The PRISMA protocol followed in this review. B) The tree diagram of topics covered in this review. C) The set of keywords identified for each branch of the topic tree. The synonyms of a concept are joined using OR blocks and the different sub-themes are joined using AND blocks to construct the final search string. D) The proposed taxonomical classification of CWL experimental paradigms.

### Cognitive States Induced by CWL

The cognitive state of arousal, characterized by attention and engagement, is achieved when workload levels are optimized. Cognitive states and varying degrees of workload levels were seen across all the studies reviewed here. It was enquired whether any experimental paradigm was preferentially used to elicit a given cognitive state, and it was observed that specific cognitive states were induced by domain-specific experimental paradigms. AutoCAMS and MATB were particularly suited for generating highly graded CWL levels due to their highly modular nature. Notably, different workload levels were elicited by 47% of the studies reviewed here. The states of attention and engagement have been explored by 10% of the studies. Overload fatigue was examined by about 16% of the studies, while underload fatigue was explored by 18%. Further, WM was explored by about 9% of the studies. These results are described in Fig. 2G. Moreover, underload fatigue was mostly explored in automobile paradigms, since detecting drowsy states is a popular domain-specific industrial need. On the other hand, operational fatigue was mostly explored in aviation paradigms. Apart from specific flight sequence simulations, only AutoCAMS was used to elicit operational exhaustion and overload fatigue.
It is interesting to note that only brain paradigms explored WM, operation paradigms explored underload fatigue (drowsiness), and operator paradigms induced operational fatigue, while all types of paradigms explored attention, engagement, and multitasking abilities.

### Deep Neural Networks for CWL Detection

There were mainly two kinds of studies that used a DNN for CWL detection: those that treated the model as a black box [28, 79] and those that reasoned out the architecture and pipeline [58]. Other studies modified parts of the architecture and pipeline to suit the specific problem of EEG-based CWL detection [45, 51]. Most networks were implemented offline, and only two studies were found to explicitly use an online pipeline [56, 81]. However, other studies [56, 74, 85] employed a pseudo-online analysis. Further, one study was found implementing a CWL detection system on a smartphone. Some studies (about 25%) introduced additional DL mechanisms like attention [86], residual identity connections [54], or multi-paths [42, 45] to endow the network with additional modeling power. Within the group of studies employing additional DL mechanisms, residual connections, commonly known as ResNets, were the prevalent (29%) choice [75, 87], followed by the attention mechanism [45, 54, 86] with about 17% prevalence. ResNets are generally used to solve the vanishing gradient problem. Attention, on the other hand, enables the network to focus only on the parts of the input relevant to the problem at hand, and is a method known to reduce the computational burden and improve performance. Attention has been used for feature selection in some studies, where it is employed before the network input layer, but most studies employed this mechanism within their network. In the latter case, the outputs of several deep learning layers containing high-dimensional features are transformed with weighted multiplication to determine their contribution to each prediction. Further, about 17% of the studies reviewed here used Ensemble Learning [70, 71], and the same share used Transfer Learning [69, 87]. Ensemble Learning trains several classifiers on subsets of the data and aggregates the information from all of them for making a prediction. The method is advantageous in the case of EEG, since the signals are highly variable across sessions and subjects; one such study used an ensemble of AE networks to mitigate the cross-subject variability of EEG signals. It was enquired whether any preference exists for particular DNNs in different application domains. It was observed that most networks have been used in all types of experimental paradigms, and no such preference exists.

Figure 2: The pie chart (B) describes the percentage of articles using each task for collecting data. The pie charts have been organized to mimic the taxonomical bifurcation of cognitive and operative paradigms, with color-coded categories. The sub-charts show the distribution of operator (A) / user paradigms (F), and operation (E) / brain paradigms (C). The sizes of the slices signify the prevalence within each paradigm category. D) Depicts the application domains of CWL research. G) The bar charts depict the prevalence of the different cognitive states encountered in this study; they are color-coded to reflect the generalizability of the study, i.e., whether it is a cross-session/subject/task analysis.

#### Network Architecture

Convolutional Neural Networks (CNNs) are the most prevalent network of choice (29%),
Plug-and-play architectures and ready availability were credited in at least one of the studies as the motivation for using a CNN for cognitive workload detection [79]. The generalizability of CNNs in recognizing spatial patterns from data structured in a 2D / 3D matrix might have been another reason for the choice [88]. Recurrent Neural Networks (RNNs) were the next most prevalent architecture (24%), and they were explicitly motivated by the recurrent nature of the network and its known capability of modeling temporal dependencies [28, 42, 54]. Hybrid Neural Networks (Hybrids) and Auto-Encoders (AEs) were used by about 15% and 12% of the studies reviewed in the survey, respectively. The hybrid networks used only CNN-RNN combinations; hybrids of other networks and algorithms were not found in this systematic survey. Other architectures encountered in this survey are Multi-Layer Perceptrons / Artificial Neural Networks (MLP/ANN) [39, 63] (9%), Deep Belief Networks (DBNs) [68, 89] (8%), Generative Adversarial Networks (GANs) [90] (2%), and Graph Neural Networks (GNNs) [73] (2%). The prevalence of neural networks is given in Fig. 3 A.

#### 3.1.2 Signal Feature Extraction

Popular features used in cognitive workload research can be categorized into six groups: _spectral_, _nonlinear_, _temporal_, _spatial_, _statistical_, and _others_. Most studies used a combination of features from these groups, and only a few chose a single type of feature. Since studies used a combination of features, each article was counted separately against each feature. About 72% of the studies reviewed here used a feature extraction step before modeling EEG with a DNN, but about 23% of studies eliminated the feature extraction step and directly fed the EEG signals to the DNN for analysis. However, within the studies that used no specific feature extraction step, most employed some signal filter or artifact reduction method to clean the signal for analysis, and very few studies directly used the raw EEG signal as input to the DNN [65]. Within the studies that employed feature extraction steps, about 54% extracted various spectral features from the EEG signals. This usually involved calculating power spectral density using various methods, including Fourier and discrete wavelet transforms. Specifically, the frequency bands of theta, alpha, and beta were extracted by most studies since they are known to be the most relevant bands for CWL detection [14]. Some studies used all frequency sub-bands; however, most studies eliminate gamma at the pre-processing stage by applying a low-pass filter that excludes gamma oscillations. Nonlinear features, such as various entropy-based measures, were the next most prevalent feature type (15%). These studies were motivated by the nonlinear behavior of EEG [29] and expected entropy measures to contribute significantly to the classification performance; [28] found that their RNN performed slightly worse when nonlinear features were not given to the network. Most notably, approximate entropy [28], Shannon entropy [68, 69], spectral entropy [69], nonlinear functional connectivity features [52], and mutual information [91] were used by the articles reviewed in this analysis. Generally, nonlinear features were fused or concatenated with other feature types before being fed to the network, as in [28], while [52, 91] trained their classifiers exclusively on nonlinear features.
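To make the spectral and entropy features above concrete, here is a minimal sketch; it is our illustration, not code from any reviewed study, and the sampling rate, window length, and band limits are assumptions:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed limits (Hz)

def band_powers(eeg, fs=256):
    """Per-channel band power from a Welch PSD; eeg has shape (channels, samples)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.trapz(psd[:, mask], freqs[mask], axis=-1)  # integrate the band
    return out

def spectral_entropy(eeg, fs=256):
    """Shannon entropy of the normalized PSD, one value per channel."""
    _, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    p = psd / psd.sum(axis=-1, keepdims=True)
    return -np.sum(p * np.log2(p + 1e-12), axis=-1)
```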
Further, some studies (11%) used statistical measures of the EEG signal, like mean, variance, and kurtosis, for training their networks. All statistical features were extracted by [76]; however, most studies that used statistical features for training their model typically extracted mean, variance, skewness, and kurtosis. Additionally, most studies concatenate all statistical features of interest before feeding them as a final input to the network. About 10% of studies used temporal features other than the time-domain signal, such as auto-regressive coefficients [89] and moving averages. All temporally varying features except the time-varying frequency (a spectral feature) and time-domain signals (no feature) are grouped into this category. The remaining feature types, combined, were found in about 8% of the studies. Among them, one study explored the use of fractals [63], while two studies explored both functional connectivity [52] and graph features. A recent review exhaustively lists the popular EEG feature-extraction methods typically used in signal classification [11].

Figure 3: D) The chart depicts the prevalence of DNNs encountered in this survey. A) The plot describes the percentage of different features used by the studies reviewed here. E) This bar chart depicts the input formulations used by the networks; 1D feature vectors were clearly the preferred input. C) The chart depicts the paradigm-specific choice of networks, according to the proposed taxonomical grouping; it is clearly seen that there is no preference of network for any of the tasks. G) The preference of a network for a given input formulation is plotted. B) The preference of features among networks is plotted in this bar chart.

It was further enquired whether any feature was preferentially employed for a given DNN, to see whether the network architecture, in addition to the complexity of the signal, necessitates feature extraction. It was observed that spectral features were used across all DNN architectures, along with nonlinear measures, and since these two feature types have strong theoretical foundations in EEG analysis, it is unsurprising that networks used them as input features. However, Graph Neural Networks (GNNs) and Generative Adversarial Networks (GANs) were not found using these two measures, possibly due to their architectures. It was also observed that DBNs and AEs did not use temporal features or the time-domain signal, as they exclusively preferred a concatenated feature vector. This analysis is depicted in Fig. 3 A & B.

#### 3.1.3 Network Input Formulation

There were three main categories of input formulation seen in this analysis: the feature vector, the image matrix, and the EEG matrix. The feature vector is usually a concatenation of all the features in a suitable format for the employed DNN, while the image matrix is single-channel or multi-channel image-like data created from the EEG signals using various signal transformation methods. The EEG matrix contains the multi-channel signals in their native 2-dimensional (2D) form. Most studies that used multiple categories of features concatenated these into a suitable feature vector. Overall, feature vectors were the most used input formulation, followed by image matrices and EEG matrices. AEs, DBNs, and ANNs have all used exclusively feature vectors as input, while CNNs have not been trained using only feature vectors as input. This result is depicted in Fig. 3 E.
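As a hypothetical illustration of the feature-vector formulation described next, the statistical moments above can be concatenated with band powers into a single 1D input (this reuses the `band_powers` helper from the earlier sketch and is our own construction):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def feature_vector(eeg, fs=256):
    """Flatten statistical and spectral features of a (channels, samples)
    EEG segment into one 1D vector, as in the feature-vector formulation."""
    stats = np.stack(
        [eeg.mean(axis=-1), eeg.var(axis=-1),
         skew(eeg, axis=-1), kurtosis(eeg, axis=-1)],
        axis=-1,
    )                                                   # (channels, 4)
    bands = band_powers(eeg, fs)                        # from the sketch above
    spectral = np.stack(list(bands.values()), axis=-1)  # (channels, 3)
    return np.concatenate([stats, spectral], axis=-1).ravel()
```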
Within the feature vector category, there were 1-dimensional (1D) and 2D feature vectors, where the 1D feature vector is usually a concatenation of all the features extracted from the data without any specific sequential relationship amongst its elements, like the ones used in [28, 67]. Only spectral and nonlinear features were concatenated into 1D or 2D vectors in [28, 69, 92], while statistical features were concatenated in addition to the above in [50, 68]. Additionally, feature vectors were created using the spectral power density in different bands, including a CWL index known as the fatigue index [37]. The 2D feature vectors are also a concatenation of features, except when the whole spectral decomposition matrix was used. In the image category, single-channel images (2D images) [55, 61, 80] and multi-channel images (3D or higher) [75, 90, 93] were used as input to the network. These images were mostly created by transforming the time-domain EEG signal into the spectral domain. Many variations of image-like data were created from EEG signals using some topographical projection and interpolation to transform the EEG data into a multi-channel or single-channel image. These methods mainly differed in the feature extraction step and the transformation used for mapping [94, 95]. Certain studies employ the spectral density at each location within a given time window to produce a series of images, like a brain power map [37]. Some studies have created an image-like representation by concatenating the spectral decomposition matrices of different frequency bands into the multiple channels of an image [53], thereby suggesting that the EEG-image is a general feature that may be used for EEG signal analysis. The EEG matrix was directly given as input to a model under the assumption that DNNs can leverage their depth to model the inherently noisy EEG signals. Time-domain input was given mostly to RNNs and occasionally to CNNs. Some studies have used EEG signals assuming them to be 2D images [79, 85]. This, however, is not entirely supported by the assumptions of the CNN models employed, since the arrangement of channels (the spatial location of the signals) does not follow any reasonable pattern resembling the electrodes' spatial locations on the scalp. Some studies used a 1-dimensional (1D) EEG vector, while others used a concatenation of multiple 2D frames into a 3-dimensional (3D) EEG matrix. Some CNNs can perform depth-wise, channel-wise, separable, and/or dilated convolutions and are adapted to process temporal dependencies. It was further enquired whether the DNNs favor any specific type of input formulation and whether any consideration needs to be given while creating the input vector for a given network. Time-domain inputs were exclusively used by MLPs, CNNs, RNNs, and their hybrids. Time-domain input is appropriate for RNNs; for the others, insensitivity to temporal dependencies may prevent the time-domain signal from being useful to the network. It was also observed that image-like representations were only used for CNNs and their hybrids. These results are summarized in Fig. 3 G.

#### 3.1.4 Generalizability of the Network

The least generalizable model is the subject-specific model, which has been explored by 27% of the studies, and many of these studies have recorded EEG from only one session per subject. Cross-session models mark the next level of generalizability, and about 10% of studies have explored such a detection strategy.
About 58% of the studies reviewed have proposed cross-subject classifiers, which suggests a high level of generalizability across subjects and different sessions. However, it is notable that most studies have pooled multiple subjects/sessions with simplistic assumptions for training and have not considered the nonlinear statistical shifts present in EEG signals from multiple sessions and subjects. The generalizability of cognitive states and the DNN detector is depicted in Fig. 3 G. The highest level of generalizability is achieved by around 4% of the studies, as they have built models to recognize workload levels across different tasks [48, 54]. These classifiers may have accurately estimated universal discriminatory features of different cognitive workload levels. However, it is still unclear what the significant contributing factors to the predictions and decisions made by these networks are; very few studies [58, 96] have interpreted the networks' latent representations and attempted explainable or interpretable deep learning.

## IV Discussion

The principal motivation for this systematic literature analysis was to identify the most suitable methods for elicitation (experimental paradigms) and detection (EEG-based DNNs) specific to the different application domains of CWL research. This analysis found no specific trends in the architectural choice or training strategy according to the tasks or the targeted cognitive states, as expected. However, clear patterns were present regarding the types of features and the data structures used for training a DNN, as described in the results section. Deliberations on the limitations of DNN-based detection lead to generalizability: overfitting is an ever-present concern for any DNN, and the peculiarities of EEG data only aggravate the issue. Some studies have built subject-specific classifiers since EEG is known to exhibit nonlinear statistical shifts across different subjects. These can be considered the least generalizable models. Additionally, since EEG is a non-stationary signal across multiple sessions of a single subject, cross-session detection of cognitive workload is a challenging problem; the number of recording sessions in typical EEG datasets might not offer enough modeling power for the network to capture variations across sessions. Most deep learning pipelines use a cross-subject training strategy to train the network. This trend may be attributed to the typically low sample sizes of EEG data, which would not offer sufficient samples from a single subject to train a DNN; notably, most studies did not employ any mathematical transformation to bring the signals from multiple subjects into a shared space and instead pooled them indiscriminately. Therefore, it can be suggested that existing DNNs can already perform cross-subject classification, offering sufficient generalizability to model users' cognitive workload levels. Further, some studies have attempted cross-task classification of workload levels using the same DNN [54, 97, 98]. The performance of these networks suggests that cognitive workload levels elicited by different tasks may elicit similar neural responses and that they can be detected using a deep neural network. In summary, DNNs offer sufficient generalizability for CWL-level detection across subjects and tasks, provided they are trained with sufficiently heterogeneous data.
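A cross-subject (leave-one-subject-out) evaluation of the kind discussed above can be sketched as follows; this is our own illustration on synthetic data, with a simple logistic regression standing in for a DNN:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))          # 120 trials x 10 features (synthetic)
y = rng.integers(0, 2, size=120)        # binary workload labels (synthetic)
subjects = np.repeat(np.arange(6), 20)  # 6 subjects, 20 trials each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))  # held-out subject
print(f"mean cross-subject accuracy: {np.mean(scores):.3f}")
```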
A key issue identified in this survey is that of an appropriate input formulation. CNNs are particularly good at learning spatial relationships in a 2D matrix representation of data. However, since the EEG channels (matrix rows) are not arranged according to the electrodes' spatial locations on the scalp, the EEG matrix does not adequately represent the spatial relationship between the channels. CNNs assume that the input data carries spatial dependencies. Thus, the DNN cannot capture scalp spatial information when the native EEG matrix is presented to the network. Therefore, further experimental controls need to be defined for employing CNNs directly on EEG matrices. One suggestion is to randomly change the location of EEG channels in the matrix representation and cross-validate the model. Further consideration of the input formulations for RNNs suggests that feeding a concatenated feature vector to an RNN is problematic, since RNNs assume that the input vector's elements share temporal dependencies. Therefore, concatenating temporally unrelated features into a feature vector is unjustifiable in the case of an RNN. This issue was correctly identified by [28], though they used a set of temporally uncorrelated spectral and nonlinear features concatenated into a 1D vector. Another key issue identified is related to the subjects of CWL experiments. In laboratory paradigms, the subjects are mostly graduate students. Aviation and automobile paradigms still possessed larger variability because professionals were used as subjects. However, all other paradigms predominantly used university students, presumably because of availability. In most cases, subjects tend to be in the below-40 age bracket. However, cognitive workload is known to change with age. Therefore, one of the suggestions this review puts forth is to include older and younger individuals alike in the subject pool. Though many have deliberated on the core problems of EEG-based cognitive state detection, solutions to these fundamental problems are still at large. This article postulates that deep neural networks offer promising solutions to the challenges of EEG-based cognitive workload detection, such as automatic feature extraction and signal variability. Further, it is hypothesized that DNNs (using transfer learning) can overcome the domain statistical shifts in EEG data across different sessions and subjects without sophisticated data-pooling techniques [28], given that the training set is sufficiently large and heterogeneous. Further, there were few online classifiers that may be useful in a practical BCI, though some studies validated their frameworks using pseudo-online DNN designs. Only one study implemented its CWL framework on a smartphone. These findings suggest that real-time frameworks need extensive research to see if DNNs are a viable computing solution for real-time cognitive state detection in online BCI protocols.

### Cognitive Load Continuum

The bibliometric data presented in this article suggest two central themes in the studies reviewed here: the overload or the underload of cognitive resources, and the distraction or drowsiness that result.
Further, this systematic review theorizes a proposition termed 'the cognitive load continuum,' in which all the disparate cognitive states and associated workload levels are expressed as a function of cognitive workload demand and the cognitive resources available for allocation, using the existing multiple-resource theory [5]. Transient neurophysiological changes that lead up to a certain cognitive state, such as fatigue, can be modeled as state-transitory causes and effects in this framework. The proposition is graphically described in Fig. 4 A. In an operational context, these operator functional states (OFS) can vary continually due to task-related affairs. The optimal OFS is hypothesized to be an unstable equilibrium in its cognitive landscape. Furthermore, a sub-optimal OFS can result from being under cognitive load for a prolonged period, which can be termed fatigue. There are two types of fatigue. Fatigue from cognitive overload, or operational exhaustion, leads to sub-optimal operator performance as the attention resources are depleted due to physiological fatigue. Fatigue from cognitive underload, or drowsiness, also leads to sub-optimal operator performance as the attention resources are reduced due to mind wandering or a preoccupation with sleep. This relationship is depicted in Fig. 4 B.

## V Conclusion

The general operator paradigm was simulated using AutoCAMS and MATB with highly graded workload levels. Further, it has been observed that specific paradigms were used for eliciting some cognitive states, though a wide variety of tasks were used for eliciting graded/binary workload levels. Notably, drowsiness and underload fatigue were explored more by automobile driving tasks, while operational exhaustion and overload fatigue were explored more often in aviation paradigms.

Figure 4: A) The extreme ends of this scale are the extremes of cognition, the lower end being an unconscious state without any perception, cognition, or action. B) The cognitive demand-allocation curve, on which the operator brain state achieves an unstable equilibrium of optimal cognitive load and delivers maximum performance; on either side of this demand-allocation curve, operator performance decreases.
2301.00007
Selected aspects of complex, hypercomplex and fuzzy neural networks
This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests of the authors and by no means can be treated as a comprehensive review of the ANN discipline. Considering the fast development of this field, a detailed review is currently impossible within a reasonable number of pages. The report is an outcome of the Project 'The Strategic Research Partnership for the mathematical aspects of complex, hypercomplex and fuzzy neural networks' meeting at the University of Warmia and Mazury in Olsztyn, Poland, organized in September 2022.
Agnieszka Niemczynowicz, Radosław A. Kycia, Maciej Jaworski, Artur Siemaszko, Jose M. Calabuig, Lluis M. García-Raffi, Baruch Schneider, Diana Berseghyan, Irina Perfiljeva, Vilem Novak, Piotr Artiemjew
2022-12-29T12:26:56Z
http://arxiv.org/abs/2301.00007v2
# Selected aspects of complex, hypercomplex and fuzzy neural networks ###### Abstract This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests of the authors and by no means can be treated as a comprehensive review of the ANN discipline. Considering the fast development of this field, a detailed review is currently impossible within a reasonable number of pages. The report is an outcome of the Project 'The Strategic Research Partnership for the mathematical aspects of complex, hypercomplex and fuzzy neural networks' meeting at the University of Warmia and Mazury in Olsztyn, Poland, organized in September 2022.
2309.13881
Skip-Connected Neural Networks with Layout Graphs for Floor Plan Auto-Generation
With the advent of AI and computer vision techniques, the quest for automated and efficient floor plan designs has gained momentum. This paper presents a novel approach using skip-connected neural networks integrated with layout graphs. The skip-connected layers capture multi-scale floor plan information, and the encoder-decoder networks with GNN facilitate pixel-level probability-based generation. Validated on the MSD dataset, our approach achieved a 93.9 mIoU score in the 1st CVAAD workshop challenge. Code and pre-trained models are publicly available at https://github.com/yuntaeJ/SkipNet-FloorPlanGe.
Yuntae Jeon, Dai Quoc Tran, Seunghee Park
2023-09-25T05:20:57Z
http://arxiv.org/abs/2309.13881v2
# Skip-Connected Neural Networks with Layout Graphs for Floor Plan Auto-Generation ###### Abstract With the advent of AI and computer vision techniques, the quest for automated and efficient floor plan designs has gained momentum. This paper presents a novel approach using skip-connected neural networks integrated with layout graphs. The skip-connected layers capture multi-scale floor plan information, and the encoder-decoder networks with GNN facilitate pixel-level probability-based generation. Validated on the MSD dataset, our approach achieved a 93.9 mIoU score in the 1st CVAAD workshop challenge. Code and pre-trained models are publicly available at [https://github.com/yuntaeJ/SkipNet-FloorPlanGen](https://github.com/yuntaeJ/SkipNet-FloorPlanGen). ## 1 Introduction Floor plan auto-generation refers to the use of computational algorithms and tools to automatically design and optimize the spatial layout of a building or structure. Traditional floor plan design often requires substantial time, expertise, and manual iteration to balance both functional needs and aesthetic considerations. The auto-generation of floor plans offers a solution to this challenge by providing rapid, objective-driven designs that can maximize space utilization, enhance occupant comfort, and reduce design overhead. In recent years, numerous studies have been conducted on floor plan auto-generation based on computer vision and deep learning. RPLAN [2] proposes encoder-decoder networks for room location and constructs an 80k floor-plan dataset from real residential buildings. Graph2Plan [1] proposes graph neural networks (GNN) and convolutional neural networks (CNN) for graph-based floor plan generation using the RPLAN dataset. There is also a GAN-based study [3] that takes a bubble diagram as input. However, there are still challenges that are hard to solve, such as: **1) Scalability Issue**: Most recent studies have been limited by their exclusive use of the RPLAN dataset, which is comprised of residential floor plans. This poses a limitation when applying these methods to buildings with different purposes, such as office buildings, and also proves challenging for larger-scale buildings. **2) Graph Utilization Issue**: In boundary-based approaches like Graph2Plan, nodes in the graph can only be used if they are placed correctly inside the boundary. On the other hand, studies utilizing the graph as a bubble diagram offer too much freedom, rendering the use of boundaries as input infeasible. We suggest encoder-decoder networks with skip connections for floor plan auto-generation. Our model takes as input both a boundary image containing exterior information and a bubble-diagram-like graph, as shown in Fig. 1. We tested on the Modified Swiss Dwellings (MSD) dataset [4] provided by the 1st Computer Vision Aided Architectural Design (CVAAD) workshop at ICCV 2023. Our main contributions can be summarized as follows: 1. We utilized skip-connected layers to better comprehend floor plan information at various scales and validated this approach on the MSD dataset, which contains a diverse range of scales. 2. We inferred bubble-diagram-style graphs using a GNN and concatenated the acquired graph features prior to the upsampling phase, enabling floor plan generation based on pixel-level probabilities.

Figure 1: **Visualization** of floor plan auto-generation. The input is a struct (boundary info) and a graph (room types and connections), and the output is a generated floor plan called full.
## 2 Method

### Boundary Image Pre-Processing

Our pre-processing of the boundary image begins by applying Mobile-SAM [5], a Segment Anything model for mobile devices. We generate segmentation masks and prioritize the largest one to get the exterior part of the building structure. After that, we can structure a processed image composed of three channels: 'in-wall-out', taking a value of 1 for the interior, 0.5 for the boundary, and 0 for the exterior; 'in-out', excluding wall information from the previous channel; and the 'raw-boundary'. This structure is inspired by the RPLAN [2] dataset.

### Skip-Connected Neural Networks

Our model employs a skip-connected architecture designed to preserve spatial details across various scales. The architecture comprises two central components, the encoder and the decoder, both supplemented with skip connections to ensure information flow across layers. The encoder extracts features from the input boundary image. Through a series of convolutional layers, it progressively down-samples the input while concurrently amplifying its feature dimensionality. This process enables the network to capture intricate patterns and semantics from the image at various scales. However, as the spatial dimensions are reduced, the risk of losing granular details increases. The decoder acts as the counterbalance to the encoder. Tasked with up-sampling the condensed feature maps, the decoder employs skip connections that bridge layers together. These connections reintroduce the spatial details lost during the encoding phase by directly linking the outputs of the encoder's layers to the decoder. In a strategic enhancement, our design also fuses the resized input boundary image at each decoding step. This integration ensures the generated floor plans are not just detailed but also strictly adhere to the input boundary constraints, ensuring the fidelity and accuracy of the generated outputs. The combined effect of this encoder-decoder architecture, when fortified by the skip connections, results in a more accurate and detail-preserving output. The network is equipped to understand and maintain the input boundary constraints efficiently across different scales, leading to enhanced consistency and fidelity in the generated floor plans.

### Graph Neural Networks

We capture layout-graph constraints using a GNN to ensure functionally feasible floor plans. We employ GCNConv layers for node representation learning, refining and aggregating the features to produce a 2D feature map. These graph features are then concatenated with the deepest outputs of the encoder, intertwining spatial details with layout-graph constraints. As this merged data proceeds through the decoding process, the model seamlessly integrates both the spatial and topological information, yielding a floor plan that effectively combines visual precision with architectural layout constraints.

## 3 Results & Discussion

The 1st CVAAD workshop at ICCV 2023 provided the MSD dataset [4], which includes boundary images, layout graphs, and ground-truth floor plans of single- as well as multi-unit building complexes across Switzerland, with 4167 floor plans for training and 1390 for testing. We evaluate our model using Intersection over Union (IoU), which calculates the average intersection over union of predicted and ground-truth segments across all classes. The training and inference processes were conducted on one NVIDIA A6000 GPU with PyTorch 2.0.0.
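To make the skip-connected encoder-decoder and GNN fusion of SS2.2-2.3 concrete, here is a minimal PyTorch sketch; it is our own illustration, not the released code, and the channel sizes, two-level depth, and mean-pooled graph-feature broadcast are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv  # assumed dependency

class SkipNetSketch(nn.Module):
    """Two-level encoder-decoder with skip connections; a pooled GCN
    feature is concatenated at the bottleneck before upsampling."""
    def __init__(self, in_ch=3, node_dim=16, n_classes=12):
        super().__init__()
        self.enc1 = nn.Conv2d(in_ch, 32, 3, stride=2, padding=1)
        self.enc2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.gcn = GCNConv(node_dim, 64)
        self.dec2 = nn.ConvTranspose2d(64 + 64, 32, 4, stride=2, padding=1)
        self.dec1 = nn.ConvTranspose2d(32 + 32, n_classes, 4, stride=2, padding=1)

    def forward(self, img, node_x, edge_index):
        e1 = F.relu(self.enc1(img))              # (B, 32, H/2, W/2)
        e2 = F.relu(self.enc2(e1))               # (B, 64, H/4, W/4)
        g = F.relu(self.gcn(node_x, edge_index)).mean(0)   # pooled graph feature
        g = g.view(1, -1, 1, 1).expand(e2.size(0), -1, e2.size(2), e2.size(3))
        d2 = F.relu(self.dec2(torch.cat([e2, g], dim=1)))  # fuse graph features
        return self.dec1(torch.cat([d2, e1], dim=1))       # skip connection + logits
```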
Figure 2: **Architecture** of our proposed SkipNet-FloorPlanGen.

### Quantitative & Qualitative Results Table 1 displays the competition leaderboard, demonstrating that the encoder-decoder model, enhanced with skip connections and concatenation with resized boundary images, is a robust method. Fig. 3 shows the qualitative results of our method on a validation set that we separated from the training set. ### Discussion This paper presents a novel approach using skip-connected neural networks integrated with layout graphs. The skip-connected layers capture multi-scale floor plan information, and the encoder-decoder networks with GNN facilitate pixel-level probability-based generation with layout constraints. Our proposed method was evaluated on the MSD dataset [4] of the 1st CVAAD workshop at ICCV 2023 and demonstrated its robustness. In the future, we will focus on transforming boundary images into graph diagrams or vectorized forms for enhanced deep learning applications. The transition could mitigate limitations in scalable representations. Additionally, we aim to construct hierarchical or probabilistic graphs considering inter-room characteristics in layout graphs, aiming to pioneer a novel approach to handling spatial representations for more robust and scalable model architectures.
2309.07412
Advancing Regular Language Reasoning in Linear Recurrent Neural Networks
In recent studies, linear recurrent neural networks (LRNNs) have achieved Transformer-level performance in natural language and long-range modeling, while offering rapid parallel training and constant inference cost. With the resurgence of interest in LRNNs, we study whether they can learn the hidden rules in training sequences, such as the grammatical structures of regular language. We theoretically analyze some existing LRNNs and discover their limitations in modeling regular language. Motivated by this analysis, we propose a new LRNN equipped with a block-diagonal and input-dependent transition matrix. Experiments suggest that the proposed model is the only LRNN capable of performing length extrapolation on regular language tasks such as Sum, Even Pair, and Modular Arithmetic. The code is released at \url{https://github.com/tinghanf/RegluarLRNN}.
Ting-Han Fan, Ta-Chung Chi, Alexander I. Rudnicky
2023-09-14T03:36:01Z
http://arxiv.org/abs/2309.07412v2
# Advancing Regular Language Reasoning in Linear Recurrent Neural Networks ###### Abstract In recent studies, linear recurrent neural networks (LRNNs) have achieved Transformer-level performance in natural language modeling and long-range modeling while offering rapid parallel training and constant inference costs. With the resurgence of interest in LRNNs, we study whether they can learn the hidden rules in training sequences, such as the grammatical structures of regular language. We theoretically analyze some existing LRNNs and discover their limitations on regular language. Motivated by the analysis, we propose a new LRNN equipped with a block-diagonal and input-dependent transition matrix. Experiments suggest that the proposed model is the only LRNN that can perform length extrapolation on regular language tasks such as Sum, Even Pair, and Modular Arithmetic. ## 1 Introduction There is a recent surge of using linear recurrent neural networks (LRNNs) [1, 13, 14] as alternatives to the de-facto Transformer architecture [15, 16] that is ingrained in the field of natural language processing. LRNNs depart from the inter-timestep non-linearity design principle of classic RNNs [1, 12, 13] while at the same time they: 1. achieve Transformer-level performance on the task of natural language modeling [12, 13] and even better performance on synthetic long-range modeling tasks [13, 14, 15, 16, 17, 18]; 2. have the added benefits of fast parallelizable training [15] and constant inference cost. In spite of the remarkable empirical performance on natural language, there has been no research on LRNNs' ability to model regular language. Regular language is a type of language that strictly follows certain rules, like a grammar.1 This is very different from natural language, as human language is full of ambiguities. Successful modeling of a regular language is important since it implies a model's ability to learn the underlying rules of the data. For example, if the training data are arithmetic operations such as \(1+2\times 3\), the model should learn the rules of \(a+b\), \(a\times b\), and that \(\times\) has a higher priority than \(+\). Learning the unambiguous rules behind the data is a critical step toward sequence modeling with regulated output. Footnote 1: Formally speaking, the rules are defined/recognized by the underlying finite state machine. In this paper, we aim to determine whether existing LRNNs are competent to learn the correct grammar of regular language by testing their language transduction capability under the extrapolation setting. Concretely, a model is trained only to predict the desired outputs on a set of short sequences of length \(L_{tr}\). It then needs to predict the correct outputs for longer testing sequences of length \(L_{ex}\gg L_{tr}\). We theoretically show that some of the recently proposed LRNNs lack the expressiveness to encode certain arithmetic operations used in the tasks of regular language. In light of this observation, we propose a new LRNN equipped with a block-diagonal and input-dependent transition matrix. These two modifications allow the model to learn and follow the grammar of the regular language. Experiments show that the proposed model is the only LRNN architecture that can extrapolate on regular language tasks such as Sum, Even Pair, and Modular Arithmetic.
## 2 Limitations of Prior Work In this section, we show that most LRNNs are unable to represent arithmetic operations, posing a serious issue under the extrapolation setting where the model has to learn the underlying language to combat the length distributional shift. ### Linear RNN In this paper, we consider a general family of LRNNs as follows. \[\begin{split} x_{k}&=A_{k}x_{k-1}+Bu_{k}\\ y_{k}&=h(x_{k}).\end{split} \tag{1}\] \(A_{k}\) is a matrix that defines the recurrence relation. \(A_{k}\) may or may not depend on the input \(u_{k}\). When it is input-independent, \(A_{k}\) reduces to \(A\); otherwise, \(A_{k}=g(u_{k})\) for some function \(g\). The first line encodes a linear recurrence on the state \(x_{k}\). The second line is an output \(y_{k}\) that depends on \(x_{k}\). To control the model's expressiveness, the function \(h\) may or may not be a linear operation. Since the existing LRNNs differ in their linear recurrence relations (Eq. (2), (3), and (4)), we mainly focus on the linear recurrence of each model in this paper. ### Input-independent LRNN To begin with, state-space models (in discrete-time format) follow the standard LRNN recurrence. \[x_{k}=Ax_{k-1}+Bu_{k} \tag{2}\] Eq. (2) encapsulates the recurrence relation of the S4 models Gu et al. (2022); Gupta et al. (2022), the S5 model Smith et al. (2023), and the Linear Recurrent Unit Orvieto et al. (2023). For example, \(A\) is in the family of HiPPO matrices Gu et al. (2023) in S4 or a complex diagonal matrix in the Linear Recurrent Unit. We show in Proposition 1 that such an input-independent \(A\) matrix cannot represent subtraction. **Proposition 1**.: _An input-independent LRNN is inconsistent in representing subtraction._ Proof.: Denote \(u_{0}\), \(u_{-}\), and \(u_{1}\) as the input vectors w.r.t. the input characters 0, -, and 1. Denote \(z\) as the initial state vector. The sequences "0-1" and "1-0" are represented as \[\begin{split} x_{0-1}&=A^{3}z+A^{2}u_{0}+Au_{-}+u_{1},\text{\quad for "0-1"}\\ x_{1-0}&=A^{3}z+A^{2}u_{1}+Au_{-}+u_{0},\text{\quad for "1-0"}\end{split}\] Because \(0-1\neq 1-0\), by forcing \(x_{0-1}\neq x_{1-0}\), we have \[A^{2}u_{0}+Au_{-}+u_{1}\neq A^{2}u_{1}+Au_{-}+u_{0}.\] On the other hand, let \(x_{0-}=A^{2}z+Au_{0}+u_{-}\) be the vector representation for "0-". The sequences "0-0-1" and "0-1-0" are represented as \[\begin{split} x_{0-0-1}&=A^{3}x_{0-}+A^{2}u_{0}+Au_{-}+u_{1}\\ x_{0-1-0}&=A^{3}x_{0-}+A^{2}u_{1}+Au_{-}+u_{0}. \end{split}\] Notice \(x_{0-0-1}\) is for "0-0-1" while \(x_{0-1-0}\) is for "0-1-0". Because both expressions evaluate to the same value (\(0-0-1=0-1-0=-1\)), enforcing \(x_{0-0-1}=x_{0-1-0}\), we have \[A^{2}u_{0}+Au_{-}+u_{1}=A^{2}u_{1}+Au_{-}+u_{0},\] which is a contradiction. ### Input-dependent Diagonal LRNN To reduce the computational complexity of LRNNs, there has been interest in applying diagonal linear recurrences Gupta et al. (2022); Smith et al. (2023); Orvieto et al. (2023). In particular, prior work adopts input-independent diagonal recurrence and is unable to represent subtraction, as shown in §2.2. As a result, one may wonder if generalizing the model to diagonal input-dependent RNNs, as defined in Eq. (3), can solve the problem. \[x_{k}=\text{diag}(v_{k})x_{k-1}+Bu_{k}, \tag{3}\] where \(v_{k}=f(u_{k})\) is a vector that depends on \(u_{k}\). To answer this question, we show in Proposition 2 that an input-dependent diagonal LRNN cannot represent subtraction.
**Proposition 2**.: _An input-dependent diagonal linear RNN is inconsistent in representing subtraction._ The proof is essentially a generalization of Proposition 1 and is deferred to Appendix A.1. ### Implication Note the failure in representation implies the extrapolation error is large, but the model may still perform well if the testing length is no greater than the training length. We will evaluate this in §4. ## 3 Proposed Method ### Motivation from Liquid-S4 The recently proposed Liquid-S4 Hasani et al. (2023) can be seen as a generalization of Eq. (2) and (3), as its transition matrix is composed of an input-independent block matrix and an input-dependent diagonal matrix with the following recurrence. \[\begin{split} x_{k}&=Ax_{k-1}+(Bu_{k})\odot x_{k-1}+Bu_{k}\\ &=(A+\text{diag}(Bu_{k}))x_{k-1}+Bu_{k},\end{split} \tag{4}\] where \(\odot\) denotes the Hadamard product and \(\text{diag}(w)\) constructs a diagonal matrix from \(w\). Although our experiments in §4.3 show that Liquid-S4 cannot extrapolate on regular language tasks, to the best of our knowledge, it is still the first to use input-dependent block matrices \(A+\text{diag}(Bu_{k})\). In §3.2, we will introduce a construction of input-dependent block matrices that can potentially model regular language tasks with sufficient numerical stability. ### Block-diagonal Input-dependent LRNN As shown in §2, in the world of LRNNs, neither the input-independent recurrence nor the input-dependent diagonal recurrence can accurately represent arithmetic operations. To balance representation ability and computational efficiency, we propose an input-dependent block-diagonal LRNN as \[x_{k}=A_{k}x_{k-1}+Bu_{k}, \tag{5}\] where \(A_{k}=g(u_{k})\) is a block-diagonal matrix that depends on \(u_{k}\) but not on the previous timesteps. \(g\) is an arbitrary function whose output has the size of \(A_{k}\). Eq. (5) is numerically unstable because the product \(\prod_{i=1}^{k}A_{i}\) could produce large numbers. The solution is to impose additional constraints on the blocks of \(A_{k}\) in Eq. (5): \[\begin{split}& A_{k}=\text{diag}\left(A_{k}^{(1)},...,A_{k}^{(h)}\right)\in\mathbb{R}^{bh\times bh}\\ & A_{k}^{(i)}=\begin{bmatrix}v_{k}^{(i,1)}&\ldots&v_{k}^{(i,b)}\end{bmatrix}\in\mathbb{R}^{b\times b}\\ &\|v_{k}^{(i,j)}\|_{p}\leq 1,&i\in[1,...,h],&j\in[1,...,b],\end{split} \tag{6}\] where \(\|\cdot\|_{p}\) denotes the vector p-norm and \(v_{k}^{(i,j)}\) is a column vector that depends on \(u_{k}\). For any vector \(v\), we can derive another vector \(v^{\prime}\) that satisfies the p-norm constraint through \(v^{\prime}=v/\max(1,\|v\|_{p})\). Because \(\|v\|_{p}\geq\|v\|_{q}\) if \(p\leq q\), a smaller \(p\) imposes a stronger constraint on the columns of \(A_{k}^{(i)}\). In other words, we can stabilize Eq. (5) by selecting a sufficiently small \(p\). Take \(p=1\) as an example. Every block \(A_{k}^{(i)}\) is a matrix for which none of the column norms is greater than 1 in \(\|\cdot\|_{1}\). This implies \(A_{k+1}^{(i)}A_{k}^{(i)}\) is the same kind of matrix. Specifically, let \(v^{(1)},...,v^{(b)}\) be the columns of \(A_{k+1}^{(i)}A_{k}^{(i)}\). We have \[\begin{split}&\left[\|v^{(1)}\|_{1}\quad\ldots\quad\|v^{(b)}\|_{1}\right]=\mathbb{1}^{\top}\left|A_{k+1}^{(i)}A_{k}^{(i)}\right|\\ &\leq\mathbb{1}^{\top}\left|A_{k+1}^{(i)}\right|\left|A_{k}^{(i)}\right|\leq\mathbb{1}^{\top}\left|A_{k}^{(i)}\right|\leq\mathbb{1}^{\top}.\end{split} \tag{7}\] Note that \(\mathbb{1}\) is a column vector of all ones.
\(|\cdot|\) and \(\leq\) are element-wise absolute value and inequality operations. The last two inequalities follow from the fact that the column norms of \(A_{k+1}^{(i)}\) and \(A_{k}^{(i)}\) are no greater than 1 in \(\|\cdot\|_{1}\). Eq. (7) demonstrates that \(p=1\) can stabilize the proposed block-diagonal recurrence, Eq. (5). However, a small \(p\) restricts the model's expressiveness. In §4.3, we will show that \(p=1.2\) is small enough to yield good empirical performance. ## 4 Experiments ### Regular Language Tasks Regular language is the type of formal language recognized by a Finite State Automaton (FSA) (Chomsky, 1956). An FSA is described by a 5-tuple \((Q,\Sigma,\delta,q_{0},F)\). \(Q\) is a finite non-empty set of states. \(\Sigma\) is a finite non-empty set of symbols. \(q_{0}\in Q\) is an initial state. \(\delta:Q\times\Sigma\to Q\) is a transition function; \(F\subseteq Q\) is a set of final states. As noted in Deletang et al. (2023), language transduction can be more useful than language recognition in practice. Language transduction maps one string to another, while language recognition classifies whether a string obeys a rule. In this work, we follow the regular language transduction tasks in Deletang et al. (2023). We are particularly interested in **Sum(5)**, **EvenPair(5)**, and **ModArith(5)**. For **Sum(M)**, the input is a string \(\{s_{i}\}_{i=0}^{n-1}\) of numbers in \([0,...,M-1]\). The output is their sum under modulo M: \(\sum_{i=0}^{n-1}s_{i}\) mod \(M\). For example, when \(M=5\), the input "0324" corresponds to the output "4" because \(0+3+2+4\) mod \(5=4\). Notably, **Sum(2)** is the famous PARITY problem that evaluates whether there is an odd number of 1's in a bit string. Thus, **Sum(M)** is a generalization of PARITY and shares the same characteristic: if one error occurs during the summation, the output will be wrong. In **EvenPair(M)**, the input is a string \(\{s_{i}\}_{i=0}^{n-1}\) of numbers in \([0,...,M-1]\). The output is 1 if \(s_{n-1}=s_{0}\) and 0 otherwise. For example, when \(M=5\), the input "0320" corresponds to the output "1" because the first entry equals the last entry. Since **EvenPair(M)** only cares about the first and last entries, the model should learn to remember the first entry and forget the entries \(i\in[1,...,n-2]\). In **ModArith(M)**, the input is a string \(\{s_{i}\}_{i=0}^{n-1}\) of odd length (i.e., \(n\) is odd). The even entries (\(i\in[0,2,...]\)) are numbers in \([0,...,M-1]\); the odd entries (\(i\in[1,3,...]\)) are symbols in \(\{+,-,\times\}\). The output is the answer to the mathematical expression under modulo M. For example, when \(M=5\), the input "1+2-3\(\times\)4" corresponds to the output "1" because \(1+2-3\times 4\) mod \(5=-9\) mod \(5=1\). **ModArith(M)** is much more complicated than **Sum(M)** and **EvenPair(M)** because the model should learn to prioritize multiplication over addition and subtraction. ### Length Extrapolation In our pilot experiments, we discovered that all models can achieve near-perfect same-length testing accuracy; i.e., testing with \(L_{\text{ex}}=L_{\text{tr}}\). We hypothesize that this is because a sufficiently large model (e.g., with an enlarged embedding dimension) can memorize all training sequences. Memorizing the training sequences does not mean the model can do well during testing, especially when the testing sequences are longer than the training ones.
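For concreteness, the three tasks defined above can be generated with a short script; this is our illustrative generator, not the authors' released code:

```python
import random

def gen_sum(M, n):
    """Sum(M): digits in [0, M-1]; target is their sum mod M."""
    s = [random.randrange(M) for _ in range(n)]
    return s, sum(s) % M

def gen_even_pair(M, n):
    """EvenPair(M): target is 1 iff the first and last symbols match."""
    s = [random.randrange(M) for _ in range(n)]
    return s, int(s[0] == s[-1])

def gen_mod_arith(M, n):
    """ModArith(M): odd-length expression over {+, -, *}, evaluated mod M."""
    assert n % 2 == 1
    s = [str(random.randrange(M))]
    for _ in range(n // 2):
        s += [random.choice("+-*"), str(random.randrange(M))]
    return s, eval("".join(s)) % M  # Python's eval respects * precedence

print(gen_mod_arith(5, 7))  # e.g. (['1', '+', '2', '-', '3', '*', '4'], 1)
```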
To evaluate whether a model learns the underlying rules of the language without simply memorizing the training data, we first train it on sequences of length \(L_{\text{tr}}\) generated by an FSA; it is then evaluated on sequences of length \(L_{\text{ex}}>L_{\text{tr}}\) generated by the same FSA. Table 1 summarizes the extrapolation setting. We mostly follow the requirements in Deletang et al. (2023), where the training and extrapolation lengths are 40 and 500. The lengths for **ModArith(5)** are 39 and 499 because this task requires odd-length inputs. ### Experimental Results We implemented LRNNs such as S4 (Gu et al., 2022), S4D (Gupta et al., 2022), and Liquid-S4 (Hasani et al., 2023) using their released codebases as baseline methods. For the proposed method, we set \(p=1.2\) in Eq. (6) and train the block-diagonal input-dependent LRNN with (b, h) = (8, 8). Because **ModArith** is more complicated than **Sum** and **EvenPair**, **ModArith** uses 3 layers while the others take 1 layer. Each layer is a full LRNN pass, as in Eq. (1). To accelerate computation, all LRNNs in Table 2 are implemented with the parallel scan algorithm (Martin and Cundy, 2018). The computational cost of our model is \(O(b^{2}h\log(T))\), where \(b\), \(h\), and \(T\) are the block size, number of blocks, and sequence length, respectively. Because the embedding dimension is held fixed at \(bh\), the complexity scales linearly w.r.t. the block size. Table 2 compares the extrapolation ability of our model with the other LRNN baselines on regular language tasks. As we can see, the proposed model, Eq. (5) and (6), is the only LRNN that can extrapolate on regular language. S4 and S4D cannot extrapolate on **ModArith**. This is expected, as Prop. 1 shows that S4 and S4D cannot represent subtraction due to their input-independent transition matrices. As for Liquid-S4, although it uses input-dependent block matrices (discussed in §3.1), it still cannot extrapolate on regular language. We believe this can be explained by its lower expressiveness (Eq. (4)) compared to the proposed model (Eq. (5) and (6)). Overall, we can see that the combination of input dependency and sufficient expressiveness plays an important role in regular language modeling. ## 5 Conclusion In this work, we explore linear RNNs in the realm of regular language modeling. We discover that existing LRNN models cannot represent subtraction and in turn propose a new LRNN equipped with a block-diagonal and input-dependent transition matrix. Our experiments confirm the proposed model's ability to model regular language tasks like Sum, Even Pair, and Modular Arithmetic under the challenging length extrapolation setting.
2309.03890
XpookyNet: Advancement in Quantum System Analysis through Convolutional Neural Networks for Detection of Entanglement
The application of machine learning models in quantum information theory has surged in recent years, driven by the recognition of entanglement and quantum states, which are the essence of this field. However, most of these studies rely on existing prefabricated models, leading to inadequate accuracy. This work aims to bridge this gap by introducing a custom deep convolutional neural network (CNN) model explicitly tailored to quantum systems. Our proposed CNN model, the so-called XpookyNet, effectively overcomes the challenge of handling complex numbers data inherent to quantum systems and achieves an accuracy of 98.5%. Developing this custom model enhances our ability to analyze and understand quantum states. However, first and foremost, quantum states should be classified more precisely to examine fully and partially entangled states, which is one of the cases we are currently studying. As machine learning and quantum information theory are integrated into quantum systems analysis, various perspectives, and approaches emerge, paving the way for innovative insights and breakthroughs in this field.
Ali Kookani, Yousef Mafi, Payman Kazemikhah, Hossein Aghababa, Kazim Fouladi, Masoud Barati
2023-09-07T17:52:43Z
http://arxiv.org/abs/2309.03890v4
XpookyNet: Advancement in Quantum System Analysis through Convolutional Neural Networks for Detection of Entanglement ###### Abstract The application of machine learning models in quantum information theory has surged in recent years, driven by the recognition of entanglement and quantum states, which are the essence of this field. However, most of these studies rely on existing prefabricated models, leading to inadequate accuracy. This work aims to bridge this gap by introducing a custom deep convolutional neural network (CNN) model explicitly tailored to quantum systems. Our proposed CNN model, the so-called XpookyNet, effectively overcomes the challenge of handling the complex-number data inherent to quantum systems and achieves an accuracy of 98.5%. Developing this custom model enhances our ability to analyze and understand quantum states. However, first and foremost, quantum states should be classified more precisely to examine fully and partially entangled states, which is one of the cases we are currently studying. As machine learning and quantum information theory are integrated into quantum systems analysis, various perspectives and approaches emerge, paving the way for innovative insights and breakthroughs in this field. * September 2023 ## 1 Introduction In quantum mechanics, an extraordinary phenomenon known as quantum entanglement arises when two or more particles interact so that their quantum states become related [1]. This relation indicates that the particles become correlated and can no longer be described independently [2]. Any change made to one particle will be instantaneously reflected in the others, even if they are far apart [3]. Creating and increasing entanglement between arbitrary qubits plays an influential role in quantum algorithms and quantum information (QI) theory protocols, in which entanglement is a vital resource [4]. For instance, it excludes undesirable energy levels in quantum annealing [5] and facilitates the exchange of quantum information over long distances [6]. It also provides conditions for transferring classical bits of information with fewer qubits [7]. The first step in creating and increasing entanglement is recognizing its existence and amount. In recent years, various entanglement detection criteria have been proposed [8]. Yet, the positive partial transpose (PPT) criterion determines entanglement only in \(2\otimes 2\) and \(2\otimes 3\) non-mixed bipartite states, by indicating that a state is separable if the partial transpose of the density matrix is positive semi-definite [9]. In other words, there are some mixed states that are entangled but still meet the PPT conditions; these are called bound entangled states, as they cannot be used to create a maximally entangled state through local operations and classical communication (LOCC), even though the reduction criterion has been practical here [10]. Moreover, Werner states are another instance in which PPT is violated [11]. Alternatively, concurrence, negativity, and relative entropy of entanglement (REE) are some well-known measures for quantifying entanglement. For a density matrix, concurrence is the maximum of 0 and the largest eigenvalue minus the sum of all the other eigenvalues (of the associated spin-flipped matrix) [12]. Negativity is the absolute value of the sum of the negative eigenvalues of a density matrix's partial transpose [13]. REE measures a quantum system's uncertainty compared to the nearest separable state via the von Neumann entropy [14].
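As a numerical illustration of these two measures for a two-qubit state, consider the following numpy sketch; it is our own illustration, not code from the paper:

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose over the second qubit of a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)                 # row index (i, k), column (j, l)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose (PPT test)."""
    ev = np.linalg.eigvalsh(partial_transpose(rho))
    return -ev[ev < 0].sum()

def concurrence(rho):
    """Wootters concurrence for a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)                        # spin-flip operator
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros((4, 4)); bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(negativity(bell), concurrence(bell))      # 0.5 and 1.0 for |Phi+>
```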
Similarly, the Entanglement of Formation (EoF) measures the level of entanglement required to generate a quantum state and represents the minimum average entanglement needed. EoF is measured by tracing out a subsystem and optimizing the entropy over all possible state decompositions [15]. Generally, EoF distinguishes entangled states from separable ones, even in mixed states, resulting in values between 0 and \(\log_{2}(d)\), where d is the dimension of the subsystem. From three qubits onwards, finding an exact solution becomes intractable, requiring more than polynomial time. Hence, entanglement witnesses are tools used to detect entanglement in quantum systems. A witness \(W\) is a Hermitian operator with a non-negative expectation value for all separable states but a negative expectation value for some entangled states [16]. Witnesses perform entanglement detection without fully characterizing the system or performing a full tomography; however, owing to the exponential growth of variables with the rise in qubit numbers, finding a suitable witness requires optimization in high-dimensional spaces. Quantum witnesses can be optimized using machine learning, because learner models can quickly identify patterns in large datasets, making them ideal for solving such complex problems [17; 18]. Four illustrations in Fig. 1 show how some of the most commonly used division methods are used for classifying separable and entangled states. Deep learning (DL) models have transformed research fields and impacted our daily lives due to their robustness and versatility. These widely used models can attain accurate results on any dataset regardless of the intended application, as long as the data is encoded to be more simplistic. Furthermore, the model must be adapted to the data. A link between learner models and QI has been extensively studied recently. Quantum neural networks detect entanglement and separability in multipartite quantum states using both discrete and continuous variables. Newly developed realignment criteria and generalized partial transposition criteria have led to the training of a neural network (NN) on bipartite and multipartite quantum systems [21]. The study of bound and noisy tripartite entanglement employs an NN with separable quantum states and a hidden mixing layer that encodes the classical probabilities of mixed quantum states. This research determines the quantum channel capacity using an NN, witnesses W/GHZ entanglement, and examines entanglement behavior based on environmental properties [22]. Generative models and multilayer perceptrons construct separable states for comparison in bipartite and multipartite systems based on separable approximations of target states and noise thresholds. The algorithm uses an ansatz to find the nearest separable state, then establishes the boundaries of separability for entangled states with bound entanglement [23]. A novel method, combining a pseudo-Siamese network with a generative adversarial net, has been developed to detect entanglement. This technique reframes the problem as an anomaly detection task, achieving over 97.5% accuracy and investigating partial entanglement [24]. NNs also classify quantum states using a Bell-type inequality for relative coherence entropy and supervised learning. The NN detects entanglement and predicts its properties. This method can be expanded to multiparty systems using Bell states in noisy channels [25].
All in all, there has not yet been a model with sufficiently rigorous accuracy for two-qubit data, and the design of DL models for QI applications remains relatively unexplored. Additionally, research needs to be more comprehensive in generating data beyond two qubits under entanglement categories and is currently limited to Bell-type data. Based on the findings of this study, a highly appropriate customized model has been developed for use as a criterion. The model can fit complex number data from QI theory into a common framework. Furthermore, there is an investigation of how purity affects the identification of states. Section 2 discusses density matrices in QI theory, highlighting their application, classification, and detection of entanglement footprints in many-body quantum systems. Section 3 outlines the process of building a deep custom model from scratch and exploring advanced techniques throughout its design and learning processes. Additionally, quantum complex number data preprocessing is addressed. Section 4 outlines the methods of generating quantum states by computer and outside of a laboratory. In Section 5, the model performance results are evaluated and scrutinized based on the data obtained in Section 4.

Figure 1: State space is divided into two parts based on whether states are entangled or separable. (a) Adjustment of a witness as a linear hyperplane and its optimization failure. (b) Entanglement witness optimization approaches, including linear: from \(W_{1}\) to \(W_{1}^{\prime}\), and nonlinear: from \(W_{2}\) to \(W_{2}^{\prime}\)[19; 20]. (c) The convexity of the target space impedes precision, even when encircling the separable states with several witnesses. (d) Using simple learner models to improve the convex witnesses isolator (which often does not accurately cover the target space).

## 2 Quantum Entanglement Formation Quantum systems involving more than one qubit, known as multi-party quantum systems, can exhibit quantum entanglement, where each qubit interacts with the others. The density matrix is a Hermitian matrix that allows us to expand it in terms of its eigenvectors and eigenvalues. Generally, it is defined as \(\rho=|\psi\rangle\langle\psi|\) for any quantum state \(|\psi\rangle\). It serves as a mathematical representation of a multi-party quantum system, especially when it is in a mixed state. Since there is no such thing as a pure state in reality, and because the coherence of the system is reduced by noise and interaction with the environment, the resulting state is a mixed quantum state. Density matrices enable us to calculate properties such as entanglement and coherence of quantum systems [26]. In terms of mixed states, they facilitate obtaining the expectation values and the time evolution of the quantum system [27]. This leads to the transformation of the equation for the expectation value in pure states, initially given as \(\langle\hat{A}\rangle=\langle\psi|\hat{A}|\psi\rangle\), to \(\langle\hat{A}\rangle=\mathrm{Tr}(\hat{\rho}\hat{A})\). Similarly, the Schrodinger equation describing the time evolution of pure states, originally stated as \(\frac{d}{dt}|\psi(t)\rangle=\frac{1}{i\hbar}\hat{H}(t)|\psi(t)\rangle\), is transformed into \(\frac{d}{dt}\hat{\rho}(t)=\frac{1}{i\hbar}[\hat{H}(t),\hat{\rho}]\), where the relative density matrix of \(|\psi\rangle\) is represented by \(\hat{\rho}\). Density matrices are viewed as raw but valuable data that contain latent patterns [28, 29, 30].
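These basic density-matrix operations are easy to verify numerically. Below is a minimal NumPy sketch of the pure-state construction \(\rho=|\psi\rangle\langle\psi|\), the Hermiticity/trace/positivity checks, and the expectation value \(\langle\hat{A}\rangle=\mathrm{Tr}(\hat{\rho}\hat{A})\); the observable \(Z\otimes I\) is an arbitrary choice for illustration only.

```python
import numpy as np

# Random two-qubit pure state |psi>, normalized.
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

rho = np.outer(psi, psi.conj())            # density matrix rho = |psi><psi|

# Basic density-matrix sanity checks.
assert np.allclose(rho, rho.conj().T)      # Hermitian
assert np.isclose(np.trace(rho).real, 1)   # unit trace
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positive semi-definite

# Expectation value <A> = Tr(rho A) for an illustrative observable Z (x) I.
Z = np.diag([1.0, -1.0])
A = np.kron(Z, np.eye(2))
print(np.trace(rho @ A).real)
```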
As these matrices are fed into the DL models and represented as data through their convolutional layers (which apply a set of filters), models analyze the density matrices thoroughly to decipher hidden patterns within them. Models learn according to a label that describes the target they must predict. In this work, the labeling was conducted according to the EoF criterion, for which an accurate and readily available implementation exists. A positive value of EoF between two systems indicates an entanglement between the two systems. To determine the EoF for bipartite systems, the Schmidt decomposition of the quantum state must be computed. The EoF is then obtained by evaluating the von Neumann entropy of the reduced states and averaging over the probabilities of the respective pure states: \[EoF=\inf{[\sum_{i}p_{i}S(\rho_{i})]}. \tag{1}\] Here, the infimum is taken over all possible state decompositions into a probabilistic mixture of pure product states. Additionally, to calculate the von Neumann entropy of the reduced density matrix for subsystem A or B (obtained by tracing out the other subsystem), the following equation is used: \[S(\rho_{A})=-\mathrm{Tr}[\rho_{A}\log_{2}{(\rho_{A})}], \tag{2}\] where \(\rho_{A}\) represents the reduced density matrix of subsystem A. ## 3 Designing Custom Model The DL models are graphs of convolutional layers, each consisting of several kernels. The presence of multiple kernels in each layer, along with the diffusion of information from one layer to the next, leads to the extraction of patterns from the lowest-level features up to the highest-level features. Through these processes, complicated patterns within a matrix can ultimately be extracted [31]. In DL, the 3D convolution operation is expressed as follows [32]: \[Y_{i,j,k}=\sigma\left(b+\sum_{p=0}^{P-1}\sum_{q=0}^{Q-1}\sum_{r=0}^{R-1}X_{i+p, j+q,k+r}\times W_{p,q,r}\right), \tag{3}\] where \(Y_{i,j,k}\) is the output element at position \((i,j,k)\), \(X_{i+p,j+q,k+r}\) is the input element at position \((i+p,j+q,k+r)\), \(W_{p,q,r}\) is the weight element at position \((p,q,r)\), \(b\) is the bias term, and \(\sigma\) is the activation function. In this formula, \(P\), \(Q\), and \(R\) are the dimensions of the convolutional kernel or filter. The sum over \(p\), \(q\), and \(r\) represents the convolution operation, where the kernel is slid over the input tensor and multiplied element-wise with the corresponding elements in the input tensor. The bias term is added to each output value, and the activation function is applied to the result. The activation function introduces non-linearity into the output of the convolutional layer, allowing the network to learn complex relationships between the input and output. The intricate nature of entanglement detection necessitates a high-capacity model with many layers. The longer the sequence of layers, the greater the chance that the gradient value of the loss function will approach zero [33]. This major problem, called vanishing gradients, is caused by activation functions that map input values to small intervals. In order to train the model effectively, the activation function must separate and focus on important information. Using Leaky ReLU as a solution to vanishing gradients, a slight negative slope is introduced for values below zero, thus allowing learning to continue [34]. In this way, the vanishing gradient problem, and to some degree over-fitting, can be mitigated.
The Leaky ReLU function is defined as follows: \[f(x)=\begin{cases}ax&;x<0\\ x&;\text{else}\end{cases}, \tag{4}\] where \(a\) represents a small positive constant; the function applies a linear transformation with a slope of \(a\) to the input when \(x\) is less than zero. Optimizing the weights in the convolution layer kernels leads to improved accuracy for DL models. The back-propagation algorithm is utilized for this optimization process, where the weights are adjusted based on a loss function called categorical cross entropy (CCE). This function is calculated for a given number of classes, denoted as \(C\), using the following equation: \[L_{CCE}=-\sum_{i=1}^{C}y_{i}\log\left(\hat{y}_{i}\right) \tag{5}\] In this equation, \(y_{i}\) represents the actual label for class \(i\), and \(\hat{y}_{i}\) denotes the predicted SoftMax probability for class \(i\). However, when entanglement detection is the sole objective, or when the system consists of only two qubits, equation (5) can be simplified to a binary classification problem by setting \(C=2\). The modified equation becomes: \[-\sum_{i=1}^{2}y_{i}\log\left(\hat{y_{i}}\right)=-y_{1}\log\left(\hat{y_{1}} \right)-y_{2}\log\left(\hat{y_{2}}\right) \tag{6}\] This equation can be further simplified to represent the formula for binary cross entropy (BCE) as follows: \[L_{BCE}=-y\log\left(\hat{y}\right)-\left(1-y\right)\log\left(1-\hat{y}\right) \tag{7}\] The DL model can be designed and developed by incorporating these components and assembling them in the forward flow, backpropagation, and selection processes. The results of this analysis provide valuable insights for further refinement and optimization. ### Deep Convolution Model DL requires layers to be appropriately arranged and hyper-parameters, such as the number of kernels and filters in each layer, to be selected wisely. We arranged different kernel sizes as shown in Fig. 2. The selection and arrangement of kernel sizes are based on the area that a kernel covers within the input tensor; e.g., a \(2\times 2\) kernel identifies the patterns between values in dimensions \(2\times 2\times N\). Typically, a smaller kernel identifies a more detailed pattern, whereas a larger kernel identifies a broader pattern [35]. Thus, the model's base path gradually reduces the size of the tensors. The parallel paths (branched model) process tensor generalities. By combining these paths, we can identify both general and specific patterns within the density matrix. The proposed model can determine entanglement, its amount, and the entangled qubits in three or more qubits. This is achieved by changing the last layer's activation function. As we examine processes like EoF and PPT, we realize there may be additional connections between density matrix values that cannot be identified mathematically. Thus, to detect _spooky action at a distance_, XpookyNet is designed to extract patterns from density matrix values using dexterous learning techniques. Regardless of how tempting it seems to simplify the complex number form of raw density matrices through measurements, we do not alter their original form [36]. Instead, we divide them into two matrices, one consisting of the real parts and the other of the imaginary parts.

Figure 2: XpookyNet’s overall scheme is as follows. The activation function of its last layer varies depending on whether the model is intended to detect entanglement or to predict the amount of entanglement. It also varies when categorizing the presence of entanglement among qubits.
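Both pieces above are straightforward to express directly. A small NumPy sketch of Eq. (4) and Eq. (7) follows; the slope \(a=0.01\) is an assumed value for illustration, not one stated in the text.

```python
import numpy as np

def leaky_relu(x, a=0.01):
    """Eq. (4): slope a for x < 0, identity otherwise."""
    return np.where(x < 0, a * x, x)

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """Eq. (7): BCE between labels y and predicted probabilities y_hat."""
    y_hat = np.clip(y_hat, eps, 1 - eps)   # guard against log(0)
    return float(np.mean(-y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(leaky_relu(x))                                    # [-0.02 -0.005 0. 1.5]
print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.8])))
```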
### Improving Deep Model We developed our model with more layers, drawing inspiration from well-known convolutional models such as VGGNET, INCEPTION, and ResNet, which have achieved unprecedented accuracy on various databases [37, 38, 39]. This model contains parallel layers with expanded kernel sizes and layers aligned in rows, as shown in Fig. 2. The main artery of the layers, depicted in blue, is in charge of extracting patterns by gradually reducing the dimensions of the data. Since the expansion of two-qubit data results in an input tensor with dimensions of \(4\times 4\times 2\), which is not large, the Max Pooling layer is not used. Parallel arteries, depicted in green, play a significant role in extracting larger patterns. A more efficient set can be produced by replacing these simple convolutional layers with "separable convolutional layers", complemented with Batch Normalization (BN). _Batch Normalization_ is a technique used in DL to improve the performance, accuracy, and speed of DL models. When BN is applied to DL models, they become more stable, because the output of the layers becomes less sensitive to changes in the input [40]. Furthermore, separable convolutions make CNN training easier and less prone to overfitting, reduce computational complexity (allowing faster training and inference), and improve accuracy by capturing more sophisticated features. Nevertheless, they may not be as effective as standard convolutions when dealing with low-dimension tensors. Additionally, they may require more layers than traditional CNNs to achieve similar accuracy levels [41]. ### Constructing Extended-Tensor QI relies on complex numbers, but DL models cannot accommodate them. Our method is inspired by models that discover patterns within images defined by three channels (red, green, and blue, known as RGB), and it converts the density matrices into three-dimensional tensors (as indicated in Fig. 3). In this way, we can use typical and advanced models designed for working with images or similar datasets to process quantum data, instead of limited, eccentric, and intricate models. As illustrated in Fig. 3, we divide the density matrices into two matrices: one containing the real parts and the other containing the imaginary parts. Finally, we obtain a tensor that forms the input data.

Figure 3: The process of converting complex number data to real number data provides simplicity in typical models.

### Training Convolution Model Optimization, a fundamental component of DL algorithms, updates the weights and biases of the model according to the loss function so that the loss function reaches its minimum. In the proposed model, the stochastic gradient descent (SGD) optimizer with a momentum of \(0.9\) is used to explore the loss function space to find and save the most accurate model with the minimum loss function. At the end of each epoch, the current model is saved if it is more accurate than the best model so far. When the loss function reaches a plateau, the learning process should be paused and the optimizer's learning rate adjusted, in order to detect small hollows in the loss function and converge to its minimum. The model's learning rate is cut to one-tenth each time it hits a plateau. In addition, the parallel paths benefit the model by passing the data through larger kernel sizes and preventing vanishing gradients with shortcuts in backpropagation. Preprocessing quantum data by balancing and shuffling is a simple but highly effective method for improving results. Entangled data is likely to be produced, and with the increase of qubits, this probability rises even more.
However, presenting all the data of one label followed by all the data of another label causes model bias and reduces generalization. ## 4 Quantum Data Generation ### Two-qubit Entangled State In order to generate two-qubit state data, we use the QuTiP library [42]; it provides a random density matrix with complex-valued elements, which ensures that the matrix is Hermitian, positive-semidefinite, and normalized to have a trace value of one. However, since generating a random bipartite entangled state is approximately three times more likely than generating a bipartite separable state, it is crucial that we store an equal number of matrices of each class in our dataset to prevent model bias. As a result of this simple action, the model's generalization improves, and as the generalization improves, so does the accuracy. We generate one million \(4\times 4\) random density matrices as the dataset. This dataset contains 500,000 entangled and 500,000 separable data. The matrices are labeled by calculating the EoF of each matrix using the Qiskit library [43]. The amount of entanglement is also stored as a result of the EoF function. ### Three-qubit Entangled States According to Fig. 4(a), the three-qubit state space divides into five categories; the appendices describe how each category is generated. Besides generating entangled tripartite data, which only leads to the known GHZ, W, and Graph states, generating entangled \(B|AC\) partial data is also challenging, as detailed in Appendix B. The partial entanglements \(A|BC\) and \(C|AB\), by contrast, are simply obtained via the tensor product of a single-qubit state and a bipartite entangled state. Fig. 4(b) demonstrates the method of physically generating states, in which randomness is introduced by the single random unitary operations \(U_{rand}^{A,B,C}\) and entanglement by the \(U_{Ent.}\) operator. We create 250,000 density matrices for three-qubit states, divided into five balanced categories as a dataset, which takes only 168 seconds to prepare when running on a regular CPU. The states are entirely pure, but if we wish to generate less pure data, we must mix more states, which requires a longer time.

Figure 4: An overview of three qubits: (a) An allegory of how state space is divided into categories. (b) The quantum circuit used for preparing known tripartite entangled states incorporates the single random unitary operators \(U_{rand}^{A,B,C}\) and the entanglement operation \(U_{Ent.}\).

## 5 Results Evaluation We summarize the key statistics and trends using a table and a graph. Next, we evaluate the performance of our classification algorithm using a confusion matrix and identify areas where it might need improvement. Finally, we present a plot that visualizes the relationships between different variables in our dataset. ### Two-qubit Entanglement Detection XpookyNet is tested with various approaches to achieve the state-of-the-art model presented thus far. It is imperative to know how effective each approach is in order to gain more meaningful insight into practice, because the design of customized learning models in QI has been sparse. We present in Table 1 the results of our tests, which comprised 10,000 two-qubit states. The table provides an overview of each of the approaches or their combination, its effect on the model's parameters, the required time to complete an epoch of learning, and, above all, the model's accuracy on the test data. As is evident from comparing the accuracy, one can immediately notice the significant superiority of the convolutional model over the NN. However, achieving near 100% accuracy still requires advanced considerations.
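Returning briefly to the pipeline of Sections 3.3 and 4.1, the generation, labeling, balancing, and channel-splitting steps compose into a few lines of Python. The sketch below assumes QuTiP's `rand_dm` and Qiskit's `entanglement_of_formation` (the EoF routine referenced above); the small threshold guarding the separable label is an added assumption to absorb numerical noise.

```python
import numpy as np
from qutip import rand_dm
from qiskit.quantum_info import DensityMatrix, entanglement_of_formation

def generate_labeled_batch(n_per_class, seed=0):
    """Random two-qubit density matrices, balanced per class, labeled via EoF,
    and converted to 4x4x2 real/imaginary extended tensors (Section 3.3)."""
    rng = np.random.default_rng(seed)
    samples = {0: [], 1: []}                      # 0: separable, 1: entangled
    while min(len(samples[0]), len(samples[1])) < n_per_class:
        rho = rand_dm(4, dims=[[2, 2], [2, 2]]).full()
        eof = entanglement_of_formation(DensityMatrix(rho))
        label = int(eof > 1e-8)                   # tolerance for numerical noise
        if len(samples[label]) < n_per_class:
            samples[label].append(np.stack([rho.real, rho.imag], axis=-1))
    x = np.array(samples[0] + samples[1])         # shape (2n, 4, 4, 2)
    y = np.array([0] * n_per_class + [1] * n_per_class)
    perm = rng.permutation(len(y))                # shuffle to avoid label runs
    return x[perm], y[perm]

x, y = generate_labeled_batch(100)
print(x.shape, y.mean())                          # (200, 4, 4, 2), balanced labels
```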
In comparing the three methods proposed separately to achieve higher accuracy, it is observed that reducing the learning rate on plateaus (ReduceLROnPlateau) is the most effective. The separable convolution and BN methods constantly interact and are more effective than branched models. Nevertheless, when these methods are combined two by two, the branched model along with ReduceLROnPlateau achieves the highest accuracy, even better than the combination of all methods.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline Models & Max. ACC & Epoch Time & Number of Conv. Layers & Number of FC Layers & Parameters \\ \hline NN & 0.8325 & 19s44ms & 0 & 3 & 3,233 \\ Simple Conv. & 0.9500 & 51s55ms & 10 & 2 & 1,334,913 \\ Brch. & 0.9624 & 63s22ms & 14 & 2 & 2,015,521 \\ BN. Sep. & 0.9632 & 57s20ms & 10 & 2 & 1,080,263 \\ Plat. & 0.9789 & 48s33ms & 10 & 2 & 1,334,913 \\ **Brch. Plt.** & **0.9852** & **64s58ms** & **14** & **2** & **2,015,521** \\ Plt. BN. Sep. & 0.9749 & 60s47ms & 10 & 2 & 1,080,263 \\ Brch. BN. Sep. & 0.9664 & 75s62ms & 14 & 2 & 1,324,615 \\ Brch. Plt. BN. Sep. & 0.9761 & 75s95ms & 14 & 2 & 1,335,430 \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of the methods used in this study. Conv., Brch., Sep., and Plat. (or Plt.) stand for Convolutional, Branched model, Separable convolution, and ReduceLROnPlateau, respectively.

As mentioned, separable convolutions are not suitable for working with small tensors. Due to the quadrupling of the tensor size in three qubits, the use of separable convolutions, along with the other methods, results in an improvement. Aside from these factors, we considered that the designed model should be highly accurate with a generalization ability, in addition to having a reasonable learning speed, and that it should be capable of attaining the highest accuracy in a relatively short time. We achieved an accuracy of 98.53% with only 14 epochs of XpookyNet; this result was obtained from test data, which is noteworthy. Additionally, with the help of our multi-functional model, in addition to detecting entanglement, XpookyNet can also determine the degree of entanglement. Observing and comparing the learning process of each situation mentioned in the table and the degree of improvement in their accuracy provides more insight. Fig. 5 shows the loss function diagrams and accuracy diagrams separately for simple and combined models. The shortcomings of the NN model compared to convolutional models and the leap caused by ReduceLROnPlateau are evident both in the loss function and in accuracy at a glance. This phenomenon is depicted in Fig. 5(c) and (d), where it is evident that the application of ReduceLROnPlateau at epochs 8 and 12 leads to a substantial reduction in the loss function. The model has an excellent start in the first epoch thanks to separable convolution and BN layers. Alternatively, reducing the learning rate on plateaus leads to better results.

Figure 5: Loss function and accuracy graphs for the methods listed in Table 1. (a) Comparison of basic models and models that adopt one of the approaches in terms of accuracy. (b) The accuracy of the combined approaches is zoomed in due to their high accuracy. (c) Loss function graphs of part (a) are shown in the same colors. (d) Loss function graphs of part (b) are shown in the same colors.

### Three-qubit Entanglement Detection The XpookyNet model was put to work on three-qubit data, with the only difference being that separable convolution layers were employed along with BN instead of plain convolution layers, as in the last case of Table 1. A comprehensive investigation of partial entanglements and mixed-state data was performed, since these are ignored when three-qubit data is classified with high accuracy over only a few categories, making tripartite entanglement and the like insignificant. XpookyNet classifies all five categories with an accuracy of 99.88%, whereas mixing the states reduces its accuracy. The decrease in states' purity makes them more difficult to analyze. From Fig. 6, it is evident that XpookyNet can quickly classify the state space in its first epoch with a high degree of precision. However, its performance diminishes with a decline in purity. Further, it is evident that even though XpookyNet initially classified the GHZ state, W state, and Graph state together as tripartite, as learning progressed, it differentiated them and divided them into three distinct categories. Most importantly, the confluence between separable and partially entangled states experiences the highest amount of collision.

Figure 6: A three-qubit state classification plot visualizes how the purity of the states and the number of epochs elapsed affect the model’s ability to classify data.

In the next step, we display in Fig. 7 the two- and three-qubit classification confusion matrices as a performance metric for evaluating XpookyNet. In the two-qubit entanglement case, the error in detecting full entanglement is expectedly higher, but when three qubits are considered, full entanglement can be detected with fewer errors. As the number of qubits (\(N\)) increases, detecting \(N\)-partite entanglement becomes undoubtedly more challenging. To address this apparent contradiction, all bipartite entanglements, whether two-qubit or partially entangled, are generated randomly with varying purities and are not limited to known types due to their labeling indexes. The tripartite set is quite limited as it only includes the GHZ state, W state, and Graph state, representing a small fraction of the entire set. Additionally, as observed, the most significant error in prediction arises when partial entanglements are incorrectly characterized as separable.

Figure 7: The confusion matrices for two-qubit and three-qubit classification.

## 6 Conclusion and Discussion In recent years, quantum information theory has witnessed rapid growth and faced notable challenges. One significant hurdle is applying artificial intelligence (AI) to this field due to the complexity of feeding data with complex numbers into conventional AI models. However, this article presents a groundbreaking solution that addresses this challenge and propels the field forward. The key contribution of this research is the development of an advanced deep Convolutional Neural Network (CNN) model, boasting an impressive accuracy rate of 98.5%. This innovative model successfully overcomes the limitations of handling data with complex numbers, thereby unlocking new possibilities for effectively leveraging advanced machine learning techniques in processing quantum information.
Furthermore, we have investigated the preparation and labeling of three-qubit states, considering both tripartite and partially entangled states. We have explored the impact of purity on the complexity of these states, shedding light on quantum systems' fundamental properties. Understanding quantum states better is of the utmost importance in QI theory, as it forms the foundation for various quantum algorithms and applications. We can solve previously unsolved problems in QI theory by leveraging advanced machine learning models, such as the deep CNN we have developed. These models offer powerful tools for analyzing and processing quantum data, enabling us to gain deeper insights into quantum phenomena. Moreover, applying these models to analyze mixed states offers new perspectives for studying noise and imperfections in quantum systems. It can alter how we investigate and mitigate noise in quantum systems, enhancing performance and robustness. Applied QI techniques hold promise for quantum computing and information processing in the future. In addition to providing insights into quantum states, our research represents a tremendous leap forward in developing a high-accuracy deep CNN model. The potential impact of these findings extends beyond theoretical research, contributing to advancing this interdisciplinary field. ## Appendix A Generation of the Two-Qubit Entangled States Two-qubit separable states are generated by: \[\rho_{sep}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{A}\otimes\rho_{i}^{B}, \tag{A.1}\] where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) ranging from 1 to an arbitrary number, and the density matrices of qubits A and B are \(\rho_{i}^{A}\) and \(\rho_{i}^{B}\), respectively. Entangled states are selected from the system's randomly generated states using the EoF criterion. Therefore, two-qubit entangled states can be considered as the following: \[\rho_{ent}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{AB}, \tag{A.2}\] where the density matrix of the two-qubit entangled state is \(\rho_{i}^{AB}\). ## Appendix B Generation of the Three-Qubit Entangled States In the three-qubit case, two entanglement classes (bipartite and tripartite entangled states) need to be generated. ### Three-qubit Separable State Separable states are prepared by applying three single-qubit operators \(U_{rand}^{A,B,C}\) to a fixed initial state \(|\psi_{0}\rangle\). Therefore, three-qubit separable states are generated by: \[\rho_{sep}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{A}\otimes\rho_{i}^{B}\otimes\rho_{i}^{C}, \tag{B.1}\] where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to 20. ### Bipartite entangled state Bipartite entangled states of a three-qubit system are prepared by generating a randomly separated one-qubit state and an entangled two-qubit pair. Entangled states are selected from randomly generated states of the entire system using the EoF criterion. Case 1: Entangled pair B and C: Bipartite entangled states are generated by: \[\rho_{A|BC}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{A}\otimes\rho_{i}^{BC}, \tag{B.2}\] where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) ranging from 1 to an arbitrary number greater than one, and \(\rho_{i}^{BC}\) is the generated entangled state for subsystem BC using the EoF criterion.
Case 2: Entangled pair A and B: Bipartite entangled states are generated by: \[\rho_{C|AB}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{AB}\otimes\rho_{i}^{C}, \tag{B.3}\] where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to 20, and \(\rho_{i}^{AB}\) is the generated entangled state for subsystem AB using the EoF criterion. Case 3: Entangled pair A and C: The state vectors of the entangled state \(|\psi_{AC}\rangle\) and the separated state \(|\psi_{B}\rangle\) are considered as: \[|\psi_{AC}\rangle=[a_{0}\ \ a_{1}\ \ a_{2}\ \ a_{3}]^{T} \tag{B.4}\] and \[|\psi_{B}\rangle=[b_{0}\ \ b_{1}]^{T}. \tag{B.5}\] Therefore, the bipartite entangled states can be considered as: \[|\psi_{B|AC}\rangle=\frac{1}{N_{B|AC}}[a_{0}b_{0}\ \ a_{1}b_{0}\ \ a_{0}b_{1}\ \ a_{1}b_{1}\ \ a_{2}b_{0}\ \ a_{3}b_{0}\ \ a_{2}b_{1}\ \ a_{3}b_{1}]^{T}, \tag{B.6}\] where \(a\) and \(b\) are state vector coefficients that should satisfy the normalization conditions \(\sum_{i}|a_{i}|^{2}=1\) and \(\sum_{i}|b_{i}|^{2}=1\). Also, \(N_{B|AC}\) is the normalization coefficient of the three-qubit state vector. Ultimately, the bipartite entangled states are generated by: \[\rho_{B|AC}=\sum_{i=1}^{m}\lambda_{i}\left(|\psi_{B|AC}\rangle\langle\psi_{B|AC}|\right)_{i}, \tag{B.7}\] where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to 20, and \(|\psi_{B|AC}\rangle\) is the generated bipartite entangled state for subsystem AC and separated qubit B. ### Tripartite GHZ state The state vector of the tripartite GHZ state (the final three-qubit entangled state \(|\psi_{f}\rangle\)) can be considered as: \[|\psi_{f}\rangle=|\Psi_{GHZ}\rangle=\frac{1}{\sqrt{N_{GHZ}}}\left(\cos{(\delta)}|000\rangle+\sin{(\delta)}e^{i\phi}|\varphi_{A}\varphi_{B}\varphi_{C}\rangle\right) \tag{B.8}\] with initial states: \[|\varphi_{A}\rangle=\cos{(\alpha)}|0\rangle+e^{i\phi_{A}}\sin{(\alpha)}|1\rangle, \tag{B.9}\] \[|\varphi_{B}\rangle=\cos{(\beta)}|0\rangle+e^{i\phi_{B}}\sin{(\beta)}|1\rangle, \tag{B.10}\] \[|\varphi_{C}\rangle=\cos{(\gamma)}|0\rangle+e^{i\phi_{C}}\sin{(\gamma)}|1\rangle, \tag{B.11}\] where \(N_{GHZ}=1/(1+\cos{(\delta)}\sin{(\delta)}\cos{(\alpha)}\cos{(\beta)}\cos{(\phi)})\). The angles belong to the intervals \(\delta\in(0,~{}\pi/4]\), \((\alpha,~{}\beta,~{}\gamma)\in(0,~{}\pi/2]\), and \(\phi\in[0,~{}2\pi)\). ### Tripartite W-state Every W-state can be written as: \[|\psi_{f}\rangle=|\Psi_{W}\rangle=\frac{1}{\sqrt{N_{W}}}\left(a~{}|001\rangle+b~{}|010\rangle+c~{}|100\rangle+d~{}|\phi\rangle\right), \tag{B.12}\] where the normalization coefficient is \(N_{W}=|a|^{2}+|b|^{2}+|c|^{2}+|d|^{2}\), and \(|\phi\rangle\) is a superposition of the remaining states that is superposed with the W-state. ### Tripartite Graph state The Graph state vector can be considered as: \[|\psi_{f}\rangle=|\Psi_{Graph}\rangle=\frac{1}{\sqrt{N_{Graph}}}(\alpha_{0}~{}|000\rangle+\alpha_{1}~{}|001\rangle+\alpha_{2}~{}|010\rangle-\alpha_{3}~{}|011\rangle+\alpha_{4}|100\rangle+\alpha_{5}~{}|101\rangle-\alpha_{6}~{}|110\rangle+\alpha_{7}~{}|111\rangle), \tag{B.13}\] where the normalization coefficient is \(N_{Graph}=|\alpha_{0}|^{2}+\ldots+|\alpha_{7}|^{2}\).
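The constructions above are direct to sample numerically. Below is a hedged NumPy sketch of the separable mixture of Eq. (B.1) and the GHZ-class state of Eq. (B.8); the Ginibre construction for the random single-qubit density matrices is an assumption (the paper itself uses QuTiP's generator), and the chosen parameter values are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_dm(d):
    """Random density matrix via the Ginibre construction: G G^dag / Tr."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def mixed_separable(n_qubits=3, m=20):
    """Eq. (B.1): rho_sep = sum_i lambda_i rho_i^A (x) rho_i^B (x) rho_i^C."""
    lam = rng.dirichlet(np.ones(m))               # lambda_i >= 0, sum to 1
    rho = np.zeros((2**n_qubits, 2**n_qubits), dtype=complex)
    for l in lam:
        term = np.array([[1.0 + 0j]])
        for _ in range(n_qubits):
            term = np.kron(term, rand_dm(2))
        rho += l * term
    return rho

def ghz_class(delta, phi, alphas, phases):
    """Eq. (B.8): cos(delta)|000> + sin(delta) e^{i phi} |phi_A phi_B phi_C>."""
    kets = [np.array([np.cos(a), np.exp(1j * p) * np.sin(a)])
            for a, p in zip(alphas, phases)]
    prod = np.kron(np.kron(kets[0], kets[1]), kets[2])
    zero = np.zeros(8, dtype=complex); zero[0] = 1.0
    psi = np.cos(delta) * zero + np.sin(delta) * np.exp(1j * phi) * prod
    return psi / np.linalg.norm(psi)              # normalization absorbs N_GHZ

print(np.trace(mixed_separable()).real)           # ~1.0 by construction
# delta = pi/4 and alphas = pi/2 give the standard (|000> + |111>)/sqrt(2):
print(np.round(np.abs(ghz_class(np.pi/4, 0, [np.pi/2]*3, [0]*3))**2, 3))
```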
2309.06081
Information Flow in Graph Neural Networks: A Clinical Triage Use Case
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs. However, efficient training of GNNs remains challenging, with several open research questions. In this paper, we investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs). Specifically, we propose a mathematical model that decouples the GNN connectivity from the connectivity of the graph data and evaluate the performance of GNNs in a clinical triage use case. Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation. Moreover, we show that negative edges play a crucial role in achieving good predictions, and that using too many GNN layers can degrade performance.
Víctor Valls, Mykhaylo Zayats, Alessandra Pascale
2023-09-12T09:18:12Z
http://arxiv.org/abs/2309.06081v1
# Information Flow in Graph Neural Networks: ###### Abstract Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs. However, efficient training of GNNs remains challenging, with several open research questions. In this paper, we investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs). Specifically, we propose a mathematical model that decouples the GNN connectivity from the connectivity of the graph data and evaluate the performance of GNNs in a clinical triage use case. Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation. Moreover, we show that negative edges play a crucial role in achieving good predictions, and that using too many GNN layers can degrade performance. ## I Introduction Machine learning algorithms were originally designed to work with data that can be represented as a sequence (e.g., text) or grid (e.g., images). However, these data structures are inadequate for modeling the data of modern applications. For instance, in digital healthcare, a patient's electronic health record (EHR) can include numerous elements, such as demographic information, medical and medication history, laboratory results, etc. One way to model data with arbitrary structure is to use a Knowledge Graph (KG): a graph where nodes represent pieces of information and the edges indicate how the information pieces relate to one another. Many learning problems on KGs can be cast as predicting links between nodes. Fig. 1 shows an example of a chronic disease prediction problem on a KG. The patient (IDXA98) is connected to its EHR (with the patient's information such as name, dob, medical conditions, etc.), and the goal is to predict to which chronic disease nodes the patient is connected (colored arrow in Fig. 1). Making such a prediction is possible by analyzing the EHR of other patients with _known_ chronic diseases. Another example of a link prediction problem on a KG is when a patient is already diagnosed with a disease (e.g., SARS-CoV-2), and the goal is to find the most effective drug/treatment to help the patient recover [1]. While there exist several methods for predicting edges on graphs [2, 3, 4], Graph Neural Networks (GNNs) have emerged as one of the most widely used techniques. In brief, GNNs were developed in parallel by two communities: _geometric deep learning_ and _graph representation learning_. The first community focused on applying neural networks for prediction tasks on graph data, while the latter community concentrated on learning low-dimensional vector representations of the nodes and edges in the graph [5]. Current GNNs approaches combine the efforts of both communities and include important extensions such as the ability to handle multi-modal and multi-relational data [6, 7]. GNNs' ability to process multi-modal and multi-relational graphs boosted their popularity in various domains, including healthcare. Some applications include the prediction of hospital readmission [8, 9], chronic diseases [10], and ICU mortality [11, 12]. However, despite their popularity, efficient training of GNNs remains challenging. 
Previous work has primarily focused on designing new architectures for embedding aggregation [6], with little emphasis on how embedding information should be exchanged in the network. For instance, the works in [6, 13] suggest that the GNNs' connectivity--which determines how nodes receive information from their neighbors--should align with the connectivity of the KG. However, there are cases where it could be advantageous to explore more complex GNN connectivities that are tailored to the specific task at hand. For example, exchanging embeddings based on the KG connectivity depicted in Fig. 1 precludes medical conditions from influencing patient embeddings. Yet, incorporating such interactions can be beneficial in tasks such as chronic disease prediction, where it is essential to capture the patients' existing medical conditions (e.g., hyperglycemia for predicting diabetes) in their embeddings. In this paper, we investigate how the flow of embeddings within a GNN affects the prediction of links in a clinical triage use case. The paper makes the following contributions: 1. We present a mathematical model for predicting links on KGs with GNNs, where we cast the prediction task as an optimization problem and leverage GNNs as an algorithmic tool to solve it (Sec. II). This model emphasizes that the GNN design parameters, such as the GNN connectivity, can be decoupled from the underlying structure of the graph data (i.e., the KG). 2. We show how to map the link prediction optimization to a program in PyG (Sec. III) and study how the GNN parameters affect the link prediction accuracy in a clinical triage use case (Sec. IV). Our findings suggest that a GNN connectivity that considers domain knowledge is more effective than just using the connectivity of the graph data, and that allowing embeddings to flow in any direction may result in poor performance (Sec. IV-C1). Additionally, we demonstrate that negative edges play a crucial role in achieving good predictions (Sec. IV-C3), and that using too many GNN layers can degrade performance (Sec. IV-C2).

Fig. 1: Example of a Knowledge Graph (KG) representing the medical record of a patient (Orla). The gray boxes represent the nodes, and the arrows the edges. The dashed and colored arrow with the question mark is the link we would like to predict.

## II Link Prediction Model ### _Multi-relational Knowledge Graph (KG)_ A multi-relational Knowledge Graph (KG) is a graph with \(n\) nodes and \(m\)_directed_ links, where each link is associated with a relation \(r\) that represents the type of connection between the nodes. For instance, in the semantic triple _patient_ (node) _suffers from_ (relation) _anemia_ (node), the relation _suffers from_ indicates the type of connection between the node _patient_ and the node _anemia_. The nodes in the graph are also associated with a type or class, e.g., the node _anemia_ can be of the type _medical condition_. Besides the links' relation, a link can be _positive_, _negative_, or _unknown_. A positive link indicates the two nodes are connected, while a negative link implies no connection. For example, if a patient has tested positive for diabetes, there will be a (positive) link between the patient's node and the diabetes node in the KG. Conversely, if the patient has tested negative, a (negative) link will indicate that such a connection does not exist. Unknown links, as the name suggests, are links whose existence is unknown from the data. This is the type of link that we would like to predict.
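A minimal sketch of how such typed, signed triples can be represented before tensorization; the tuple layout and names below are illustrative assumptions (the running example's entities), not the paper's code.

```python
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str
    relation: str
    obj: str
    sub_type: str
    obj_type: str
    positive: bool   # True: positive link, False: negative link

kg = [
    Triple("patient_1", "suffers_from", "anemia", "patient", "medical condition", True),
    Triple("patient_1", "suffers_from", "diabetes", "patient", "medical condition", False),
]
# Unknown links -- the prediction targets -- are simply absent from the list.
```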
### _Link prediction as an optimization problem_ We can model a link prediction problem in a KG as follows. Every node \(i\in\{1,\ldots,n\}\) is associated with a feature vector \(e_{i}\in\mathbf{R}^{d}\), which is also known as the _node's embedding_, or _embedding_ for short. Similarly, every link is associated with a relation matrix \(W_{r}\in\mathbf{R}^{d\times d}\), \(r\in\{1,\ldots,R\}\). Next, for every pair of nodes \((i,j)\) and relation \(r\), we define the link's "score" as \[x_{ij}^{(r)}:=f(e_{i},W_{r},e_{j}) \tag{1}\] where \(f\) is a function that takes \(e_{i}\), \(W_{r}\), and \(e_{j}\) as inputs and returns a real number in the interval \([0,1]\). Similarly, for every link connecting nodes \((i,j)\) with a relation \(r\), we define the labels \[y_{ij}^{(r)}=\begin{cases}1&\text{there is a relation $r$ from node $i$ to $j$},\\ 0&\text{there is not a relation $r$ from node $i$ to $j$}.\end{cases}\] With the above model, we can formulate the optimization problem \[\underset{e_{i},W_{r}}{\text{minimize}}\quad\mathcal{L}(\mathbf{x},\mathbf{y}) \tag{2}\] where \(\mathcal{L}:\mathbf{R}^{m}\times\mathbf{R}^{m}\rightarrow\mathbf{R}\), \(\mathbf{x}=(x_{ij}^{(r)})\in\mathbf{R}^{m}\), \(\mathbf{y}\in\{0,1\}^{m}\). The role of the loss function \(\mathcal{L}\) is to penalize vector \(\mathbf{x}\) being different from vector \(\mathbf{y}\) component-wise.2 Namely, by minimizing \(\mathcal{L}\) in (2), we are finding the nodes' embeddings \(e_{i}\) and the matrices \(W_{r}\) such that the score \(x_{ij}^{(r)}\) is equal to (or close to) the label \(y_{ij}^{(r)}\), which indicates the presence of a positive/negative link. Footnote 2: For example, \(\mathcal{L}\) can be \(\|\mathbf{x}-\mathbf{y}\|_{2}\), i.e., the \(\ell_{2}\)-norm. ### _Solving the link prediction problem with a GNN_ GNNs tackle the optimization problem (2) by computing nodes' embeddings based on their connectivity patterns with neighboring nodes. To illustrate the concept, we show in Figure 2 a toy example where a "patient" feature vector is built with the patient's medical conditions embeddings. In particular, the feature vectors of nodes \(i\) and \(j\) (medical conditions) are combined _linearly_ to obtain the embedding of node \(k\) (a patient).

Fig. 2: Toy example of how the embedding of a patient is the linear combination of two medical conditions embeddings.

GNNs combine the neighbors' embeddings by using multiple _non-linear_ functions. Fig. 3 shows an example of how a GNN uses multiple layers (i.e., functions) to combine the embeddings.

Fig. 3: Example of a GNN with three nodes and two NN layers per node. Vectors \(e_{i}^{(0)}\), \(e_{j}^{(0)}\), \(e_{k}^{(0)}\) are the initial embeddings of nodes \(i\), \(j\), \(k\), i.e., the input in the first NN layer. Vectors \(e_{i}^{(2)}=e_{i}\), \(e_{j}^{(2)}=e_{j}\), \(e_{k}^{(2)}=e_{k}\) are the embedding outputs of layer 2.

Each layer \(l\in\{1,\ldots,L\}\) fuses the feature vector in the \((l-1)\)-th layer of the node with embeddings of its neighbors, and the output is passed to the next layer where the process is repeated. The GNN connectivity depends on how nodes are connected in the KG (but is not necessarily the same), and it determines how the embedding information propagates. Designing a GNN connectivity that enables efficient learning is use-case dependent as it requires knowing how nodes should interact. In Sec.
IV-C1, we will show how different GNN connectivities affect the link-prediction performance for a clinical triage use case. ### _Link prediction_ Predicting an (unknown) link/relation in a KG consists of evaluating (1) with the embeddings and relation weights learned during the training. For example, suppose we have a patient and want to predict whether the patient may be _infected_ by COVID-19. Then, we use \(e_{\text{patient}}\), \(W_{\text{infected}}\), and \(e_{\text{COVID-19}}\) in (1), and if the score is larger than a confidence threshold (e.g., \(0.9\)), we can determine a positive link exists. Often, we need to predict links that connect nodes not seen during the training. In that case, we need to first compute the embeddings of the new nodes by combining their initial embeddings3 with the embeddings of their neighbors seen in the training. For example, a new patient may be connected to nodes that appeared during the training (e.g., fever, headache, cough). Then, the embedding of the unseen node (i.e., the patient) is calculated with the embeddings of the neighboring nodes. Footnote 3: The input in the first NN layer. See Fig. 3. ## III Link Prediction in PyG This section presents how to implement the link prediction optimization in Sec. II with PyG [14]--a Python library for GNNs built upon PyTorch. We follow a similar approach as in the PyG tutorial for node classification [15]. ### _Creating the KG and tensors_ The first step is constructing a KG with the format in Table I. Each row in the table corresponds to a subject-relation-object "triple" indicating how nodes are connected. Recall the edges in the KG are directed, where the _subject_ and _object_ are the _source_ and _target_ nodes, respectively.4 The column _link type_ indicates whether such a link is positive (True) or negative (False), and the columns _sub. type_ and _obj. type_ indicate--as the names suggest--the types of nodes. Having different node types is useful, for example, to control how the embedding information flows in the GNN. In Sec. IV-C1, we will show how different GNN connectivities affect the link prediction performance. Footnote 4: The names in the subject and object columns identify a single node in the KG. That is, there cannot be multiple nodes called _London_ referring to different cities, e.g., England (UK), Ontario (Canada), Texas (USA), etc. The second step is to map the KG to a PyG data object that contains the nodes' initial embeddings (data.x), the network connectivity (data.edge_index), the types of relations (data.edge_type), and the labels (data.y) that indicate whether the links are positive or negative. Fig. 4 shows how to map the KG in Table I to a data object, where the nodes and relations have unique IDs (integers). ### _GNN model_ The GNN model consists of two core parts: (i) the generation of the embeddings (i.e., encoder) and (ii) the scoring function (i.e., decoder). Fig. 5 shows an example of a GNN model with two RGCNConv layers [6].5 The first part is to initialize the scoring function6 and the NN functions that will generate the embeddings. The initial embeddings (data.x) in the encoder can be set manually--e.g., using a pre-trained natural language model that maps the node's name (i.e., a string) to a vector of a fixed size (e.g., as in [16])--or they can be variables in the optimization. Footnote 5: RGCNConv is a type of layer/function to compute the embeddings. Footnote 6: The weights \(W_{r}\) are part of/defined in the scoring function.
The forward function computes the nodes' embeddings and scores for every link in the KG. The embedding information is obtained with a communication mask7 that controls how the nodes propagate their embeddings to their neighbors in the GNN, i.e., the GNN connectivity. Footnote 7: A mask is a vector of booleans that selects which edges (i.e., the tensor’s rows) to use.

Fig. 5: Example of a torch module to implement a GNN with two RGCNConv layers. The layer architecture was proposed in [6] and is available directly in PyG.

### _Training the model_ The training of the GNN is shown in Fig. 6. The process follows the steps in the tutorial for node classification in [15] with two differences. The first one is that we can control the embedding communication in the GNN, which affects how the initial embeddings and the embeddings in intermediate layers are combined. The second difference is that we can filter the edges for which we want to evaluate the loss function, which is useful to specialize the model to the types of links we would like to predict.8 Footnote 8: This is similar to the mask used in [15] to filter the training data.

Fig. 6: Example of the optimization procedure using the steps in [15].

### _Link prediction_ The link prediction task consists of calling the function model(data) where data includes the links we would like to predict. For instance, if we want to predict if the node _Orla_ is connected to the node _diabetes_ with the relation type _has_ (with the mapping used in the example in Fig. 4), we need to add the tensor([0, 5]) to data.edge_index and tensor([1]) to data.edge_type.

Fig. 4: Example of a torch tensor that maps the KG in Table I with unique IDs. Each row corresponds to a row in Table I where nodes’ IDs are assigned sequentially: Orla (0), Paul (1), London (2), cholesterol (3), New York (4), diabetes (5). The relations’ IDs are also assigned sequentially: born (0), has (1).

To predict links connecting nodes not seen in the training, we must first assign the new nodes unique IDs and generate their embeddings using the same function used in the training. However, the GNN communication mask employed to create the embeddings must prevent the unseen/new nodes from affecting the embeddings of the nodes seen in the training. ## IV Use case: Clinical triage with Synthea This section presents a numerical evaluation of the GNN for clinical triage with the Synthea dataset generator [17]. The experiments' goal is to illustrate how (i) the GNN parameters and (ii) the domain knowledge affect the link prediction accuracy. In particular, we study different GNN connectivities (Sec. IV-C1), embedding sizes, and number of GNN layers (Sec. IV-C2), and the importance of negative edges in the construction of the KG (Sec. IV-C3). ### _Use case and dataset overview_ #### IV-A1 Use case The _clinical triage_ problem involves determining the appropriate course of care when a patient presents with symptoms or medical conditions at the first point of contact. This includes deciding whether the patient requires immediate attention from a healthcare professional, such as in an emergency situation (e.g., a heart attack). #### IV-A2 Dataset Synthea is a _synthetic_ healthcare dataset generator that simulates realistic patient medical records. The generated patient records include a variety of information such as demographics, medical history, medications, allergies, and encounters with healthcare providers. The resulting data is designed to be representative of the United States population in terms of age, gender, and ethnicity, and it includes data on over 10 million synthetic patients.
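Before turning to the experiment details, the pieces of Sec. III compose into a short PyG sketch. This is a minimal illustration under stated assumptions, not the authors' exact code: the IDs follow the hypothetical Fig. 4 mapping, the labels and loss mask are invented for the example, and a DistMult-style decoder with diagonal \(W_{r}\) is assumed (the scoring function named later in Sec. IV-B3).

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import RGCNConv

# KG tensors following Fig. 4's hypothetical ID mapping:
# Orla(0) born(0) London(2), Orla(0) has(1) cholesterol(3),
# Paul(1) born(0) New York(4), Paul(1) has(1) diabetes(5).
data = Data(
    x=torch.randn(6, 5),                                   # initial embeddings, d = 5
    edge_index=torch.tensor([[0, 0, 1, 1], [2, 3, 4, 5]]),
    edge_type=torch.tensor([0, 1, 0, 1]),
    y=torch.tensor([1.0, 1.0, 1.0, 1.0]),                  # illustrative link labels
)

class LinkPredictor(torch.nn.Module):
    """Encoder: two RGCNConv layers. Decoder: DistMult with diagonal W_r."""
    def __init__(self, d, num_relations):
        super().__init__()
        self.conv1 = RGCNConv(d, d, num_relations)
        self.conv2 = RGCNConv(d, d, num_relations)
        self.w_rel = torch.nn.Parameter(torch.randn(num_relations, d))

    def forward(self, data, comm_mask):
        # The communication mask selects which edges propagate embeddings,
        # decoupling the GNN connectivity from the KG connectivity.
        ei, et = data.edge_index[:, comm_mask], data.edge_type[comm_mask]
        h = torch.relu(self.conv1(data.x, ei, et))
        h = self.conv2(h, ei, et)
        src, dst = data.edge_index                         # score every KG link
        return torch.sigmoid((h[src] * self.w_rel[data.edge_type] * h[dst]).sum(-1))

model = LinkPredictor(d=5, num_relations=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
comm_mask = torch.ones(data.edge_index.size(1), dtype=torch.bool)
loss_mask = torch.tensor([False, True, False, True])       # e.g., only 'has' links

for _ in range(100):
    optimizer.zero_grad()
    scores = model(data, comm_mask)
    loss = torch.nn.functional.binary_cross_entropy(scores[loss_mask],
                                                    data.y[loss_mask])
    loss.backward()
    optimizer.step()
```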
For the clinical triage problem, we access the patient's medical records generated by Synthea and extract the patient's medical conditions and encounters. Each encounter is associated with conditions (e.g., diabetes) and observations (e.g., fever), and belongs to a class that corresponds to one of the following care actions: _wellness_, _inpatient_, _outpatient_, _ambulatory_, and _emergency_. The goal of the clinical triage problem is: Given a patient's medical encounter with some medical conditions and observations, determine the type of care action the patient should receive, i.e., to which care action node the encounter should be connected. ### _Experiment setup_ #### IV-B1 KG We generate a KG for the clinical triage problem with Synthea as shown in Fig. 7. There are five types of nodes (_encounter_, _observation_, _condition_, _patient_, and _care action_) connected by four different types of relations (_encounter-careaction_, _encounter-observation_, _encounter-condition_, and _patient-encounter_).

Fig. 7: (a) KG schema generated with Synthea for the clinical triage problem. (b) Example of a graph with 5 patients. The edges in (b) are shown as undirected due to the figure size.

As a remark, Synthea does not provide information about negative edges. Still, since an encounter can only be connected to one care action, we add negative links between an encounter and the other care actions. For example, suppose an encounter has a positive link with the care action _inpatient_. In that case, we add negative links in the KG between the encounter and the care actions _wellness_, _outpatient_, _ambulatory_, and _emergency_. #### IV-B2 GNN We will introduce the GNN connectivities in the experiment in Sec. IV-C1. The initial embeddings of the node types _observation_, _condition_, and _care action_ are variables in the optimization problem, while the initial embeddings of the node types _encounter_ and _patient_ are set to zero. This choice of initial embeddings is because, in the testing, we do not want to infer the embeddings of observations, conditions, or care actions not seen in the training. However, we allow new patients and new encounters. #### IV-B3 Training parameters We conducted the numerical experiments with 50/10 patients in the training/testing sets--sampled uniformly at random. All the experiments are run for 50 realizations, where each realization involves a random sample of 50/10 patients from the Synthea generated data.10 The scoring function is DistMult [6] with weights initialized uniformly at random, the loss function is logistic regression, and the GNN architecture uses RGCNConv layers [6]. The training is carried out with Adam [18] for 1000 epochs with a variable learning rate11 and weight decay 0.0005. Footnote 10: A training sample has, on average, 35k edges and 1.9k nodes (50 patients, 1603 encounters, 153 observations, 107 conditions, and 5 care actions). Footnote 11: The learning rate is equal to 0.1 for the first 100 epochs, 0.01 for the following 600 epochs, and 0.001 for the last 300 epochs. ### _Experiments_ #### IV-C1 GNN connectivity This experiment studies how the GNN connectivity for the propagation of the embedding information affects the link prediction performance. We consider the four GNN connectivities shown in Figure 8.
In short, the C1 connectivity corresponds to the connectivity of the positive links in the KG. The C2 connectivity is obtained by adding "reverse" links in C1.12 Thus, the embedding information flows in any direction. The C3 connectivity is as C2 but without the edge from the node care action to encounter. Finally, the C4 connectivity allows only embedding information to flow from observation and condition nodes to encounter nodes. The rationale behind C4 is that the node types observation and condition can be regarded as the "attributes" or "properties" of an encounter, and therefore the embedding of an encounter should not affect the embeddings of the observation and condition nodes.13 Footnote 12: The reverse links have relation type {_sub. type_}-{_obj. type_}. Footnote 13: i.e., there should be no edge from an encounter node to an observation/condition node.

Fig. 8: The four GNN connectivities used in the experiments in Sec. IV-C1. Acronyms: condition (C), observation (O), encounter (E), patient (P), and care action (CA).

Fig. 9 shows the number of correct care action predictions for the four GNN connectivities, where _total_ (blue bar) indicates the number of care actions of that type in the testing data (ground truth). Observe from the figure that the frequency of care actions is skewed, with _wellness_ being the most common and _emergency_ being the least common.

Fig. 9: Illustrating how the GNN connectivities in Fig. 8 affect the prediction of care actions. The results are the average of 50 random samples from the Synthea generated dataset. Each sample consists of 50/10 patients in the training/testing sets. The nodes’ embeddings have size 5 and the GNN has 2 layers. The total number of predictions per care action is indicated in blue.

Regarding correct predictions, the C4 connectivity has the best overall performance, closely followed by C3, despite C3 having more than twice as many edges as C4 (see also Table II). The C1 and C2 connectivities did not perform well, but for two different reasons. First, the C1 connectivity does not allow the encounter nodes to receive embeddings from the observation and condition nodes, which are the "characteristics that define an encounter." The C2 connectivity fails because we allow the encounter nodes to access information that is not available in the testing. Specifically, the links between care action and encounter nodes do not exist when creating the embeddings in the testing--since they are the links we would like to predict.14 Footnote 14: We want to predict the link from encounter to care action, and the reverse edge from care action to encounter will not exist in the testing either. **Conclusions:** GNN connectivities that may appear intuitive (C1 and C2) do not perform well because (i) the associated KG connectivity does not capture how the nodes' embeddings should interact, and (ii) the training uses links for computing the nodes' embeddings that are not present in the testing. Connecting nodes in every direction (C3) obtains a good performance, but it is slightly outperformed by a bespoke GNN connectivity (C4) that considers only essential connections. #### IV-C2 Embedding size and number of GNN layers This experiment investigates the impact of two basic GNN design parameters: the number of layers and the size of the nodes' embeddings. The GNN connectivity used here corresponds to the C4 connectivity described in Sec. IV-C1. Fig.
10(a) shows the average prediction accuracy of care actions as a function of the embedding size for GNNs with 2 and 3 layers. Observe from the figure that, in both cases, the accuracy improves rapidly for embedding sizes ranging from 1 to 3, but beyond that point, the increase in accuracy becomes more gradual. Fig. 10(b) shows the prediction accuracy as a function of the number of GNN layers when the embedding sizes are fixed to 5 and 10. Observe from the figure that adding more layers decreases the GNN performance, which is in stark contrast to _deep_ CNNs, which use many layers. We reckon this behavior is because of the "over-smoothing" phenomenon also noted in the literature [19, 20], where adding more layers makes the GNN "too well connected," and therefore the nodes' embeddings become "too similar." **Conclusions:** The link prediction performance improves with the embeddings' size, but the improvement gains diminish once the embeddings are large enough. Using a large number of GNN layers has a negative impact on performance. #### IV-C3 Negative edges This experiment studies the impact of removing negative edges in the KG. Recall that the negative edges in the KG are from the encounter to the care actions with no positive edge (see also Sec. IV-B1). We use the C4 GNN connectivity introduced in Sec. IV-C1 and obtain the results shown in Figure 11. The figure shows that not using negative edges considerably drops the link prediction performance. This behavior is because the link prediction can be thought of as a binary classification of an edge, and negative edges are a source of negative samples for such "classification." Notably, negative edges are not readily available in the Synthea generated data, and we had to use domain knowledge (i.e., understanding of the data) to add those. **Conclusions:** Negative edges are crucial for making good link predictions in this use case. The negative links were not available in Synthea directly, and we had to use domain knowledge (i.e., understanding of the graph data) to include them in the KG. ## V Conclusions This paper studied the flow of embedding information within GNNs and its impact on performance, specifically in a clinical triage use case. We proposed a mathematical model that decouples the GNN connectivity from the connectivity of graph data and found that incorporating domain knowledge in the GNN connectivity is more effective than relying solely on graph data connectivity. Our results also show that negative edges play a crucial role in achieving good performance, while using many GNN layers can lead to performance degradation. A future research direction is to evaluate how the approach performs on other datasets, and how to automate the learning of the "domain knowledge." Specifically, the identification of key GNN links for transporting embedding information, and the identification of negative edges in the KG that may not be explicitly present in the data.
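For reference, the negative-edge construction of Sec. IV-B1 (one positive care action per encounter, negative links to the remaining four) is mechanical to implement. A hedged sketch with illustrative names follows; the function and identifiers are assumptions, not the paper's code.

```python
def add_care_action_links(encounter_to_action, care_actions):
    """For each encounter, emit its positive care-action link plus negative
    links to every other care action (label 1: positive, 0: negative)."""
    triples = []
    for enc, pos_action in encounter_to_action.items():
        for action in care_actions:
            triples.append((enc, "encounter-careaction", action,
                            1 if action == pos_action else 0))
    return triples

care_actions = ["wellness", "inpatient", "outpatient", "ambulatory", "emergency"]
print(add_care_action_links({"enc_42": "inpatient"}, care_actions))
```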
2309.05826
KD-FixMatch: Knowledge Distillation Siamese Neural Networks
Semi-supervised learning (SSL) has become a crucial approach in deep learning as a way to address the challenge of limited labeled data. The success of deep neural networks heavily relies on the availability of large-scale high-quality labeled data. However, the process of data labeling is time-consuming and unscalable, leading to shortages in labeled data. SSL aims to tackle this problem by leveraging additional unlabeled data in the training process. One of the popular SSL algorithms, FixMatch, trains identical weight-sharing teacher and student networks simultaneously using a siamese neural network (SNN). However, it is prone to performance degradation when the pseudo labels are heavily noisy in the early training stage. We present KD-FixMatch, a novel SSL algorithm that addresses the limitations of FixMatch by incorporating knowledge distillation. The algorithm utilizes a combination of sequential and simultaneous training of SNNs to enhance performance and reduce performance degradation. Firstly, an outer SNN is trained using labeled and unlabeled data. After that, the network of the well-trained outer SNN generates pseudo labels for the unlabeled data, from which a subset of unlabeled data with trusted pseudo labels is then carefully created through high-confidence sampling and deep embedding clustering. Finally, an inner SNN is trained with the labeled data, the unlabeled data, and the subset of unlabeled data with trusted pseudo labels. Experiments on four public data sets demonstrate that KD-FixMatch outperforms FixMatch in all cases. Our results indicate that KD-FixMatch has a better training starting point that leads to improved model performance compared to FixMatch.
Chien-Chih Wang, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Bryan Wang
2023-09-11T21:11:48Z
http://arxiv.org/abs/2309.05826v1
# KD-FixMatch: Knowledge Distillation Siamese Neural Networks

###### Abstract

Semi-supervised learning (SSL) has become a crucial approach in deep learning as a way to address the challenge of limited labeled data. The success of deep neural networks heavily relies on the availability of large-scale high-quality labeled data. However, the process of data labeling is time-consuming and unscalable, leading to shortages in labeled data. SSL aims to tackle this problem by leveraging additional unlabeled data in the training process. One of the popular SSL algorithms, FixMatch [1], trains identical weight-sharing teacher and student networks simultaneously using a siamese neural network (SNN). However, it is prone to performance degradation when the pseudo labels are heavily noisy in the early training stage. We present KD-FixMatch, a novel SSL algorithm that addresses the limitations of FixMatch by incorporating knowledge distillation. The algorithm utilizes a combination of sequential and simultaneous training of SNNs to enhance performance and reduce performance degradation. Firstly, an outer SNN is trained using labeled and unlabeled data. After that, the network of the well-trained outer SNN generates pseudo labels for the unlabeled data, from which a subset of unlabeled data with trusted pseudo labels is then carefully created through high-confidence sampling and deep embedding clustering. Finally, an inner SNN is trained with the labeled data, the unlabeled data, and the subset of unlabeled data with trusted pseudo labels. Experiments on four public data sets demonstrate that KD-FixMatch outperforms FixMatch in all cases. Our results indicate that KD-FixMatch has a better training starting point that leads to improved model performance compared to FixMatch.

Chien-Chih Wang, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Bryan Wang

{ccwang, shaoyux, jinmiaof, yliuu, brywan}@amazon.com

semi-supervised learning, knowledge distillation, siamese neural networks, high-confidence sampling, deep embedding clustering

## 1 Introduction

Large-scale high-quality labeled data is the key to the tremendous success of supervised deep learning models in many fields. However, collecting labeled data remains a significant challenge, as the process of data labeling is time-consuming and resource-intensive. This is particularly evident in tasks such as detecting product image defects in e-commerce, where high-quality labeled data is essential for accurate predictions and customer satisfaction. The numbers of defective and non-defective images are sometimes extremely unbalanced, and thus it is necessary to label a large number of randomly selected images in order to collect sufficient high-quality labeled images for supervised learning.

Recently, [1] proposed a widely used SSL algorithm called FixMatch, which trains an SNN with limited labeled and extra unlabeled data and achieves significant improvements compared to traditional supervised learning methods. However, despite the superiority and success of FixMatch, it has a noticeable disadvantage. It trains the teacher and the student in an SNN simultaneously, and since the pseudo labels generated by the teacher in the early stage may not be correct, using them directly may introduce a large amount of label noise. Therefore, we propose a modified FixMatch called KD-FixMatch, which trains an outer and an inner SNN sequentially.
Our proposed method improves upon FixMatch because an outer SNN will first be trained with labeled and unlabeled data, and thus the percentage of label noise in the pseudo labels generated by the network of the well-trained outer SNN will be lower than that of the teacher in FixMatch, especially in the early stage. Our main contributions are summarized as follows:

1. In KD-FixMatch, the outer SNN is first well-trained and its network serves as a teacher for the inner SNN. Thus, we have a better training starting point than directly applying the FixMatch algorithm to the inner SNN.
2. Unlike self-training [2, 3] or Noisy Student [4], where the neural network is re-trained multiple times whenever its pseudo labels need to be updated, we only need to train each of the outer and inner SNNs once.
3. KD-FixMatch outperforms FixMatch in our experiments, although the time complexity of KD-FixMatch is more than two times that of FixMatch.

## 2 Relation to Prior Work

SSL has gained significant attention in recent years as a method to overcome the challenge of limited labeled data. By incorporating large-scale unlabeled data, SSL has been shown to enhance the performance of deep neural networks, as demonstrated in various studies [1, 4, 5, 6, 7].

### Teacher-Student Sequential Training

Knowledge distillation [8] is a well-known model compression technique that transfers knowledge from a pre-trained, larger teacher model to a smaller student model. By using the teacher to generate pseudo labels for the unlabeled data, the student trained on both labeled data and pseudo-labeled data performs better than when trained solely on the limited labeled data [8]. Noisy Student [4] takes inspiration from knowledge distillation and uses a student model that is equal to or larger than the teacher model. This allows for the inclusion of noise-inducing techniques such as dropout, stochastic depth, and data augmentation in the student's training, resulting in better generalization compared to the teacher [4]. In Noisy Student, once the student is found to be better than the teacher, the pseudo labels are updated by the current student, and a new student is re-initialized. This procedure is then repeated several times. However, this approach requires waiting for the teacher to be well-trained before generating highly confident pseudo labels for the student's training.

### Teacher-Student Simultaneous Training

Besides the aforementioned sequential methods, simultaneous training is an alternative option [1, 9, 10]. We take FixMatch [1] as an example. The core neural network architecture for FixMatch is an SNN, which consists of two identical weight-sharing classifier networks, a teacher and a student. See Figure 1.

Figure 1: An example of an SNN for FixMatch.

Assume that \(f_{\mathbf{\theta}}\) is a standard neural network with a softmax output layer, \(\mathbf{\theta}\in\Re^{n}\) is a long vector, \(\ell\) is the number of training examples, \(K\) is the number of classes, and \(n\) is the total number of parameters of \(f_{\mathbf{\theta}}\). We define the optimization problem for FixMatch to be

\[\min_{\mathbf{\theta}}\ f(\mathbf{\theta}),\text{ where }f(\mathbf{\theta})=\frac{1}{2C}\mathbf{\theta}^{T}\mathbf{\theta}+\frac{1}{\ell}\sum_{i=1}^{\ell}\xi(\mathbf{\theta};\mathbf{x}^{i},\mathbf{y}^{i}) \tag{1}\]

where \(C>0\) is a regularization parameter to avoid overfitting, \(\mathbf{x}^{i}\) is the \(i\)th input image, and \(\mathbf{y}^{i}\) is the label vector of \(\mathbf{x}^{i}\).
In addition, if \(\mathbf{x}^{i}\) belongs to the \(s\)th class, the label vector, \(\mathbf{y}^{i}\), is \([\underbrace{0,\dots,0}_{s-1},1,0,\dots,0]^{T}\in\Re^{K}\). However, training examples for SSL consist of both a set of labeled data, \(S_{l}\), and a set of unlabeled data, \(S_{u}\). Label vectors for the unlabeled data are not available. Therefore, FixMatch uses the teacher inside the SNN to generate pseudo labels as label vectors for \(\mathbf{x}^{i},i\in S_{u}\), and then the loss function, \(\xi\), in (1) can be defined as follows:

\[\xi(\mathbf{\theta};\mathbf{x}^{i},\mathbf{y}^{i})=\begin{cases}\xi_{l}\left(\mathbf{y}^{i},f_{\mathbf{\theta}}(\mathbf{x}^{i}_{w})\right)&\text{if }\ i\in S_{l},\\ \lambda_{u}\xi_{u}(\hat{\mathbf{y}}^{i},f_{\mathbf{\theta}}(\mathbf{x}^{i}_{s}))&\text{if }\ i\in S_{u},\end{cases} \tag{2}\]

where \(\hat{\mathbf{y}}^{i}\) is the pseudo label vector of \(\mathbf{x}^{i},i\in S_{u}\), \(\lambda_{u}\) is a pre-defined parameter, \(\mathbf{x}^{i}_{w}\) is the weakly augmented image of \(\mathbf{x}^{i}\), \(\mathbf{x}^{i}_{s}\) is the strongly augmented image of \(\mathbf{x}^{i}\), \(\xi_{l}\) is a loss function for the labeled data, and \(\xi_{u}\) is a loss function for the unlabeled data. In [1], \(\xi_{l}\) in (2) is the standard cross-entropy (CE) function to measure the difference between \(f_{\mathbf{\theta}}(\mathbf{x}^{i}_{w})\) and its label vector \(\mathbf{y}^{i}\). For \(\xi_{u}\) in (2), when the input of \(f_{\mathbf{\theta}}\) is \(\mathbf{x}^{i}_{w}\), \(f_{\mathbf{\theta}}\) is considered the teacher and \(f_{\mathbf{\theta}}(\mathbf{x}^{i}_{w})\) is \(\hat{\mathbf{y}}^{i}\) of \(\mathbf{x}^{i}\). Conversely, \(f_{\mathbf{\theta}}\) is considered the student when the input of \(f_{\mathbf{\theta}}\) is \(\mathbf{x}^{i}_{s}\). However, in order to reduce label noise, we only choose the highly confident pseudo labels whose maximum value is greater than or equal to a pre-defined threshold \(\tau>0\), and thus \(\xi_{u}\) in (2) can be defined as follows:

\[\mathbb{1}(\max(\hat{\mathbf{y}}^{i})\geq\tau)\xi_{u}(\text{OHL}(\hat{\mathbf{y}}^{i}),f_{\mathbf{\theta}}(\mathbf{x}^{i}_{s})), \tag{3}\]

where \(\mathbb{1}\) is an indicator function and OHL is a function mapping a softmax pseudo label vector to its one-hot label vector. After the loss function, \(\xi\), in (1) is prepared, back-propagation is applied to update the weights in \(f_{\mathbf{\theta}}\), and thus both teacher and student are updated simultaneously by minimizing (1). For the method where teacher and student are trained simultaneously, there is no need to wait for the teacher's training procedure to complete before the student's can begin. However, in the early stage, heavy label noise or insufficient correct pseudo labels generated by the immature teacher may affect the generalization ability of the student model.
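As an illustration of Eqs. (2) and (3), the sketch below computes the FixMatch loss in TensorFlow from softmax outputs of the shared network; it is a simplified reading of the formulas, not the authors' training code (the argument names are our own):

```python
import tensorflow as tf

def fixmatch_loss(y_lab, p_weak_lab, p_weak_unlab, p_strong_unlab,
                  tau=0.95, lambda_u=1.0):
    # Supervised term xi_l of Eq. (2): CE on weakly augmented labeled data.
    loss_l = tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(y_lab, p_weak_lab))
    # Unsupervised term of Eq. (3): one-hot pseudo labels from the teacher
    # (weak augmentation), kept only where the confidence reaches tau.
    pseudo = tf.one_hot(tf.argmax(p_weak_unlab, axis=-1),
                        depth=tf.shape(p_weak_unlab)[-1])
    mask = tf.cast(tf.reduce_max(p_weak_unlab, axis=-1) >= tau, tf.float32)
    per_example = tf.keras.losses.categorical_crossentropy(pseudo,
                                                           p_strong_unlab)
    loss_u = tf.reduce_mean(mask * per_example)
    return loss_l + lambda_u * loss_u
```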
## 3 Proposed Algorithm

We propose an algorithm called KD-FixMatch, which contains an outer SNN, \(f^{\text{outer}}\), and an inner SNN, \(f^{\text{inner}}\). The goal is not only to use teacher-student sequential training to fix the label noise problem in the early stage, but also to incorporate teacher-student simultaneous training as in FixMatch.

### Trusted Pseudo Label Selection

After the outer SNN is well-trained, it generates pseudo labels for the unlabeled data. Due to the imperfection of the outer SNN, we need to choose a subset of trusted pseudo labels to reduce label noise. Otherwise, it may affect the inner SNN's model performance. Similar to (3), we first choose a pre-defined threshold, \(\tau^{\text{select}}\), and choose the pseudo labels whose maximum value is greater than or equal to \(\tau^{\text{select}}\). Then, we extract the latent representations1 of the unlabeled data that has passed the \(\tau^{\text{select}}\) selection. We perform deep embedding clustering [11, 12] on the representations and choose the unlabeled data whose clustering results are consistent with their predicted class.2 Finally, the indices of the chosen unlabeled data form \(T_{u}\).

Footnote 1: The representations are derived from the layer before the last layer.

Footnote 2: The predicted class is the index of the maximum value of \(\hat{\mathbf{y}}^{i}\).
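One plausible instantiation of this selection step is sketched below, with ordinary k-means standing in for the deep embedding clustering of [11, 12]; keeping an example only if its predicted class matches the majority predicted class of its cluster is our reading of the consistency check:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_trusted(probs, reps, tau_select=0.80, n_classes=10):
    # Step 1: high-confidence sampling with threshold tau_select.
    idx = np.where(probs.max(axis=1) >= tau_select)[0]
    pred = probs[idx].argmax(axis=1)
    # Step 2: cluster the latent representations (k-means as a stand-in
    # for deep embedding clustering) and keep examples whose predicted
    # class agrees with the majority predicted class of their cluster.
    clusters = KMeans(n_clusters=n_classes, n_init=10).fit_predict(reps[idx])
    keep = []
    for c in np.unique(clusters):
        members = clusters == c
        majority = np.bincount(pred[members]).argmax()
        keep.extend(idx[members][pred[members] == majority])
    return np.array(sorted(keep))  # T_u: indices of trusted pseudo labels
```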
### Merging Conflict Pseudo Labels

From Section 3.1, \(T_{u}\) is determined. However, our inner SNN also generates pseudo labels for the unlabeled data during its training process. Following (3), we choose the pseudo labels whose maximum value is greater than or equal to a pre-defined threshold, \(\tau^{\text{inner}}>0\), to be the inner SNN's trusted pseudo labels. Therefore, conflicts may arise when both the outer and inner SNN provide their own trusted pseudo labels for the same image. Heuristic 1 is proposed to determine the pseudo label vector, \(\hat{\mathbf{y}}^{i}\), in (3) for the unlabeled data that has conflicting pseudo labels. The main idea is that the inner SNN intuitively has better generalization ability than the outer SNN since it has more reliable pseudo labels in the early stage.

### Robust Loss Functions

Label noise is one of the major problems for SSL since the teacher model is not perfect. In other words, there exist noisy labels among the pseudo labels generated by the teacher. [7] mentioned that robust loss functions may be helpful in case of noisy pseudo labels. Widely used loss functions in robust learning are symmetric loss functions [13, 14], which have been proven to be robust to noisy labels. However, there are some constraints for symmetric loss functions. Assume that \(f\) is a standard neural network with a softmax output layer. From [14], a loss function \(\xi\) is called symmetric if it satisfies \(\sum_{k=1}^{K}\xi(f(\mathbf{x}),\mathbf{e}_{k})=M,\;\forall\mathbf{x}\in X,\;\forall f\), where \(K\) is the number of classes, \(X\) is the feature space, \(f\) is a function mapping an input \(\mathbf{x}\) to a softmax output \(\mathbf{y}\), \(M\) is a constant value, and \(\mathbf{e}_{k}=[\underbrace{0,\ldots,0}_{k-1},1,0,\ldots,0]^{T}\in\Re^{K}\). Mean absolute error (MAE) [15], symmetric cross-entropy (SCE) [16], and normalized cross-entropy (NCE) [17] are examples of robust loss functions.

### Knowledge Distillation Siamese Neural Networks

We initialize an inner SNN and then train it with not only the labeled and unlabeled data but also the unlabeled data with trusted pseudo labels, \((\mathbf{x}^{i},\hat{\mathbf{y}}^{i}),i\in T_{u}\). For the inner SNN, the optimization problem and loss functions are the same as (1), (2), and (3), except that we substitute \(f\), \(C\), \(\xi_{l}\), \(\lambda_{u}\), \(\xi_{u}\), and \(\tau\) with \(f^{\text{inner}}\), \(C^{\text{inner}}\), \(\xi_{l}^{\text{inner}}\), \(\lambda_{u}^{\text{inner}}\), \(\xi_{u}^{\text{inner}}\), and \(\tau^{\text{inner}}\), respectively. A summary of our proposed algorithm is in Algorithm 1.

```
Given a labeled data set, \(S_{l}\), and an unlabeled data set, \(S_{u}\);
1: Initialize an outer SNN, \(f^{\text{outer}}\), and train it with \(S_{l}\) and \(S_{u}\) by using back-propagation to solve (1);
2: Generate pseudo labels for \(S_{u}\) using the well-trained \(f^{\text{outer}}\);
3: Select a trusted index subset \(T_{u}\) from \(S_{u}\);
4: Choose CE or a robust loss function to be \(\xi_{u}^{\text{inner}}\);
5: Initialize an inner SNN, \(f^{\text{inner}}\), and train it with not only the labeled and unlabeled data but also the unlabeled data with trusted pseudo labels, \((\mathbf{x}^{i},\hat{\mathbf{y}}^{i}),i\in T_{u}\);
6: Return the well-trained \(f^{\text{inner}}\);
```
**ALGORITHM 1** KD-FixMatch

## 4 Experiments

The main objective in this section is to compare our proposed algorithm with other methods. For a fair comparison, we implement all of the methods using Tensorflow _2.3.0_. EfficientNet-B0 [18] pre-trained on ImageNet is chosen for all of the experiments, and SNN-EB0 is an SNN that consists of two identical weight-sharing EfficientNet-B0 networks. We compare the following five methods: (a) Baseline: EfficientNet-B0 is trained with only the labeled data by solving (1) with the CE loss function; (b) FixMatch: SNN-EB0 is trained with the labeled and unlabeled data by solving (1) with (2) and (3), and both \(\xi_{l}\) and \(\xi_{u}\) are CE functions; (c) KD-FixMatch-CE: an outer and an inner SNN-EB0 are trained using Algorithm 1 with the labeled and unlabeled data, and both \(\xi_{l}\) and \(\xi_{u}\) for the outer and inner SNN-EB0 are CE functions; (d) KD-FixMatch-SCE-1.0-0.01: the same as KD-FixMatch-CE except that \(\xi_{u}\) for the inner SNN-EB0 is an SCE function with \(\alpha=1.0\) and \(\beta=0.01\); and (e) KD-FixMatch-SCE-1.0-0.1: the same as KD-FixMatch-CE except that \(\xi_{u}\) for the inner SNN-EB0 is an SCE function with \(\alpha=1.0\) and \(\beta=0.1\).

For the labeled training data in all of our experiments, we follow the sampling strategy mentioned in [1] to choose an equal number of images from each class in order to avoid model bias. For all the experiments, the Adam [19] optimizer is used, and the exponential decay rates for the \(1\)st and \(2\)nd moment estimates are \(0.9\) and \(0.999\), respectively. The batch size is set to \(128\).5 The initial learning rate is \(7\mathrm{e}{-5}\), and the learning rate scheduler mentioned in Section \(5.2\) of [18] is applied. We set \(1/C\) in (1) to be \(5\mathrm{e}{-4}\), except \(1\mathrm{e}{-3}\) for CIFAR100. For weak augmentation, we follow the method in Section 2.3 of [1]. For strong augmentation, we follow the RandAugment implementation in [20]. Following [1], we set \(\tau\) in (3) to be \(0.95\) and \(\lambda_{u}\) to be \(1\). Also, we set \(\tau^{\mathrm{select}}\) and \(\tau^{\mathrm{inner}}\) to be \(0.80\) and \(0.95\), respectively. For inference, we follow [1] and use the maintained exponential moving average of the trained parameters. The decay is set to be \(0.9999\) and num_updates to be \(10\).6

Footnote 5: Except for Baseline, 64 labeled and 64 unlabeled images are in a batch.

Footnote 6: See [https://github.com/petewarden/tensorflow_makefile/blob/master/tensorflow/python/training/moving_averages.py](https://github.com/petewarden/tensorflow_makefile/blob/master/tensorflow/python/training/moving_averages.py).
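For reference, the SCE loss [16] used in methods (d) and (e) combines CE with a reverse cross-entropy term weighted by \(\alpha\) and \(\beta\). A minimal sketch follows; the clamp value that avoids \(\log(0)\) in the reverse term is our assumption, not a value from the paper:

```python
import tensorflow as tf

def sce_loss(y_true, y_pred, alpha=1.0, beta=0.1, label_clamp=1e-4):
    # Symmetric cross-entropy: alpha * CE(y, p) + beta * reverse CE(p, y).
    # `label_clamp` keeps log(y_true) finite for one-hot labels; this value
    # is an assumption of the sketch, not taken from the paper.
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)
    ce = -tf.reduce_sum(y_true * tf.math.log(y_pred), axis=-1)
    y_clamped = tf.clip_by_value(y_true, label_clamp, 1.0)
    rce = -tf.reduce_sum(y_pred * tf.math.log(y_clamped), axis=-1)
    return tf.reduce_mean(alpha * ce + beta * rce)
```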
### Experiments on Four Public Data Sets

In this subsection, we conduct our experiments on four public data sets: SVHN [21], CIFAR10/100 [22], and FOOD101 [23]. In order to mimic practical use, we randomly split the training data from their official websites into original training data (\(80\%\)) and original validation data (\(20\%\)) for our experiments. The unlabeled data from the official websites is not used. Instead, we follow [1] to ignore the labels in the original training data and deem it as unlabeled data. All four data sets are publicly available, and a summary is in Table 1.

From Table 2, we observe that: (a) FixMatch is better than Baseline for all the cases. This is as expected because, compared to Baseline, FixMatch leverages additional unlabeled data; (b) KD-FixMatch is better than FixMatch for all the cases. The reason is that KD-FixMatch has a better starting point than FixMatch; (c) Compared to FixMatch, the more labeled data we use, the smaller the improvements of KD-FixMatch. The possible reason is that when FixMatch has a large enough initial labeled data set, it has a good enough starting point to reach results competitive with KD-FixMatch; and (d) KD-FixMatch-SCE has slightly better model performance than KD-FixMatch-CE in most of the cases, even without a grid search for the two parameters, \((\alpha,\beta)\).

## 5 Conclusion

In this paper, we have proposed an SSL algorithm, KD-FixMatch, which is an improved version of FixMatch that utilizes knowledge distillation. Based on our experiments, KD-FixMatch outperforms FixMatch and Baseline on four public data sets. Interestingly, despite the absence of parameter selection for SCE (a robust loss function), the performance of KD-FixMatch-SCE is slightly better than that of KD-FixMatch-CE in most of the cases.
2301.13694
Are Defenses for Graph Neural Networks Robust?
A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw - virtually all of the defenses are evaluated against non-adaptive attacks leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering - most defenses show no or only marginal improvement compared to an undefended baseline. We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski
2023-01-31T15:11:48Z
http://arxiv.org/abs/2301.13694v1
# Are Defenses for Graph Neural Networks Robust?

###### Abstract

A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw - virtually all of the defenses are evaluated against non-adaptive attacks leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering - most defenses show no or only marginal improvement compared to an undefended baseline. We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.

## 1 Introduction

The vision community learned a bitter lesson - we need specific, carefully crafted attacks to properly evaluate the adversarial robustness of a defense. Consequently, adaptive attacks are considered the gold standard [44]. This was not always the case; until recently, most defenses were tested only against relatively weak static attacks. The turning point was Carlini & Wagner [3]'s work showing that 10 methods for detecting adversarial attacks can be easily circumvented. Shortly after, Athalye et al. [1] showed that 7 out of the 9 defenses they studied can be broken since they (implicitly) rely on obfuscated gradients. So far, this bitter lesson is completely ignored in the graph domain.

Figure 1: Adaptive attacks draw a different picture of robustness. All defenses are less robust than reported, with an undefended GCN [33] outperforming some. We show results on Cora ML for both poisoning (attack before training) and evasion (attack after training), and both global (attack the test set jointly) and local (attack individual nodes) settings. The perturbation budget is relative w.r.t. the #edges for global attacks (5% evasion, 2.5% poisoning) and w.r.t. the degree for local attacks (100%). In (a)/(b) SVD-GCN is catastrophically broken – our adaptive attacks reach 24%/9% (not visible). Note that our non-adaptive attacks are already stronger than what is typically used (see § 5).

Virtually no existing work that proposes an allegedly robust Graph Neural Network (GNN) evaluates against adaptive attacks, leading to overly optimistic robustness estimates. To show the seriousness of this methodological flaw, we categorize 49 works that propose a robust GNN and are published at major conferences/journals. We then choose one defense per category (usually the most highly cited). Not surprisingly, we show that none of the assessed models are as robust as originally advertised in their respective papers. In Fig. 1 we summarize the results for 7 of the most popular defenses, spanning the entire spectrum of strategies (i.e., aimed at improving the graph, the architecture, or the training, see Table 1). We see that in both local and global settings, as well as for both evasion and poisoning, the adversarial accuracy under our adaptive attacks is significantly smaller compared to the routinely used non-adaptive attacks. Even more troubling is that many of the defenses perform worse than an undefended baseline (a vanilla GCN [33]). Importantly, the 7 defenses are not cherry-picked.
We report the results for every defense we assessed, and we selected each defense before running any experiments.

Adversarial robustness measures the local generalization capabilities of a model, i.e., sensitivity to (bounded) worst-case perturbations. Certificates typically provide a lower bound on the actual robustness while attacks provide an upper bound. Since stronger attacks directly translate into tighter bounds, our goal is to design the strongest attack possible. Our adaptive attacks have perfect knowledge of the model, the parameters, and the data, including all defensive measures. In contrast, non-adaptive attacks (e.g., transferred from an undefended proxy or an attack lacking knowledge about defense measures) only show how good the defense is at suppressing a narrow subset of input perturbations.2

Footnote 2: From a security perspective non-adaptive attacks (typically transfer attacks) are also relevant since a real-world adversary is unlikely to know everything about the model and the data.

Tramer et al. [44] showed that even adaptive attacks can be tricky to design, with many subtle challenges. The graph domain comes with additional challenges since graphs are typically sparse and discrete and the representation of any node depends on its neighborhood. For this reason, we describe the recurring themes, the lessons learned, and our systematic methodology for designing strong adaptive attacks for all examined models. Additionally, we find that defenses are _sometimes_ sensitive to a common attack vector and transferring attacks can also be successful. Thus, the diverse collection of perturbed adjacency matrices resulting from our attacks forms a (black-box) unit test that any truly robust model should pass before moving on to adaptive evaluation. In summary:

* We survey and categorize _49 defenses_ published across prestigious machine learning venues.
* We design custom attacks for 7 defenses (14%), covering the spectrum of defense techniques. All examined models forfeit a large fraction of previously reported robustness gains.
* We provide a transparent methodology and guidelines for designing strong adaptive attacks.
* Our collection of perturbed graphs can serve as a robustness unit test for GNNs.

## 2 Background and preliminaries

We follow the most common setup and assume GNN [20; 33] classifiers \(f_{\theta}(\mathbf{A},\mathbf{X})\) that operate on a symmetric binary adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) with binary node features \(\mathbf{X}\in\{0,1\}^{n\times d}\) and node labels \(\mathbf{y}\in\{1,2,\ldots,C\}^{n}\), where \(C\) is the number of classes, \(n\) is the number of nodes, and \(m\) the number of edges. A poisoning attack perturbs the graph (flips edges) prior to training, optimizing

\[\max_{\tilde{\mathbf{A}}\in\Phi(\mathbf{A})}\ell_{\text{attack}}(f_{\theta^{*}}(\tilde{\mathbf{A}},\mathbf{X}),\mathbf{y})\quad\text{s.t.}\quad\theta^{*}=\arg\min_{\theta}\ell_{\text{train}}(f_{\theta}(\tilde{\mathbf{A}},\mathbf{X}),\mathbf{y}) \tag{1}\]

where \(\ell_{\text{attack}}\) is the attacker's loss, which is possibly different from \(\ell_{\text{train}}\) (see § 4). In an evasion attack, \(\theta^{*}\) is kept fixed and obtained by training on the clean graph: \(\min_{\theta}\ell_{\text{train}}(f_{\theta}(\mathbf{A},\mathbf{X}),\mathbf{y})\).
In both cases, the locality constraint \(\Phi(\mathbf{A})\) enforces a budget \(\Delta\) by limiting the perturbation to an \(L_{0}\)-ball around the clean adjacency matrix: \(\|\tilde{\mathbf{A}}-\mathbf{A}\|_{0}\leq 2\Delta\). Attacks on \(\mathbf{X}\) also exist; however, this scenario is not considered by the vast majority of defenses. For example, only one out of the seven examined ones also discusses feature perturbations. We refer to § D for more details on adaptive feature attacks.

**Threat model.** Our attacks aim to either cause misclassification of the entire test set (_global_) or a single node (_local_). To obtain the strongest attack possible (i.e., the tightest robustness upper bound), we use white-box attacks. We do not constrain the attacker beyond a simple budget constraint that enforces a maximum number of perturbed edges. For our considerations on unnoticeability, see § A.

**Greedy attacks.** Attacking a GNN typically corresponds to solving a constrained discrete non-convex optimization problem that - evident by this work - is hard to solve. Commonly, approximate algorithms are used to tackle these optimization problems. For example, the single-step Fast Gradient Attack (FGA) flips the edges whose gradient (i.e., \(\nabla_{\mathbf{A}}\ell_{\text{train}}(f_{\theta^{*}}(\mathbf{A},\mathbf{X}),\mathbf{y})\)) most strongly indicates an increase in the attack loss. On the other hand, Nettack [67] and Metattack [66] are greedy multi-step attacks. The greedy approaches have the nice side effect that an attack for a high budget \(\Delta\) directly gives all attacks for budgets lower than \(\Delta\). On the other hand, they tend to be relatively weaker.

**Projected Gradient Descent (PGD).** Alternatively, PGD [53] has been applied to GNNs, where the discrete adjacency matrix is relaxed to \([0,1]^{n\times n}\) during the gradient-based optimization and the resulting weighted change reflects the probability of flipping an edge. After each gradient update, the changes are projected back such that the budget holds in expectation: \(\|\mathbb{E}[\tilde{\mathbf{A}}]-\mathbf{A}\|_{0}\leq 2\Delta\). Finally, multiple samples are obtained, and the strongest perturbation \(\tilde{\mathbf{A}}\) is chosen that obeys the budget \(\Delta\). The biggest caveats while applying \(L_{0}\)-PGD are the relaxation gap and limited scalability (see Geisler et al. [17] for a detailed discussion and a scalable alternative).

**Evasion vs. poisoning.** Evasion can be considered the easier setting from an attack perspective since the model \(f_{\theta^{*}}\) is fixed. For poisoning, on the other hand, the adjacency matrix is perturbed before training (Eq. 1). Two general strategies exist for poisoning attacks: (1) transfer a perturbed adjacency matrix from an evasion attack [67]; or (2) attack directly by, e.g., unrolling the training procedure to obtain gradients through training [66]. Xu et al. [53] propose to solve Eq. 1 with alternating optimization, which was shown to be even weaker than the evasion transfer (1). Note that evasion is particularly of interest for inductive learning and poisoning for transductive learning.
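A simplified sketch of this \(L_{0}\)-PGD procedure is given below. It assumes a differentiable attack loss `loss_fn` over a (weighted) perturbed adjacency matrix, replaces the bisection-based projection with a crude rescaling, and ignores symmetry for brevity:

```python
import torch

def pgd_structure_attack(loss_fn, A, budget, steps=200, lr=0.1, samples=20):
    # Relax edge flips to probabilities p in [0, 1]^{n x n} and ascend the
    # attack loss; `loss_fn(A_pert)` is an assumed callable returning the
    # attack loss for a (weighted) perturbed adjacency matrix.
    n = A.shape[0]
    p = torch.zeros(n, n, requires_grad=True)
    for _ in range(steps):
        A_pert = A + (1 - 2 * A) * p            # p flips both 0->1 and 1->0
        grad, = torch.autograd.grad(loss_fn(A_pert), p)
        with torch.no_grad():
            p += lr * grad
            p.clamp_(0, 1)
            if p.sum() > 2 * budget:            # crude stand-in for the
                p *= 2 * budget / p.sum()       # bisection-based projection
            p.clamp_(0, 1)
    # Sample discrete perturbations and keep the strongest within budget.
    best, best_loss = A, -float("inf")
    with torch.no_grad():
        for _ in range(samples):
            flips = (torch.rand(n, n) < p).float()
            if flips.sum() > 2 * budget:
                continue
            A_s = A + (1 - 2 * A) * flips
            loss = loss_fn(A_s).item()
            if loss > best_loss:
                best, best_loss = A_s, loss
    return best
```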
## 3 Adversarial defenses

We select the defenses s.t. we capture the entire spectrum of methods improving robustness against structure perturbations. For the selection, we extend the taxonomy proposed in [21]. We selected the subset without cherry-picking, based on the criteria elaborated below, before experimentation.

**Taxonomy.** The top-level categories are _improving the graph_ (e.g., preprocessing), _improving the training_ (e.g., adversarial training or augmentations), and _improving the architecture_. Many defenses for structure perturbations either fall into the category of improving the graph or adaptively weighting down edges through an improved architecture. Thus, we introduce further subcategories. Similar to [21]'s discussion, unsupervised improvement of the graph finds clues in the node features and graph structure, while supervised improvement incorporates gradient information from the learning objective. Conversely, for adaptive edge weighting, we identify three prevalent approaches: rule-based (e.g., using a simple metric), probabilistic (e.g., modeling a latent distribution), and robust aggregations (e.g., with guarantees). We assign each defense to the most fitting taxon (details in § B).

**Selected defenses.** To evaluate a diverse set of defenses, we select one per leaf taxon.3 We prioritize highly cited defenses published at renowned venues with publicly available code. We implement all defenses in one unified pipeline. We present the categorization of defenses and our selection in Table 1. Similarly to Tramer et al. [44], we exclude defenses in the "robust training" category (see § C for a discussion). Two of the three models in the "miscellaneous" category report some improvement in robustness, but they are not explicitly designed for defense purposes, so we exclude them from our study. Some works evaluate only against evasion [48], others only poisoning [12; 15; 58], and the rest tackle both [17; 30; 63]. In some cases the evaluation setting is not explicitly stated and is inferred by us. For completeness, we consider each defense in all four settings (local/global and evasion/poisoning). Next, we provide a short summary of the key ideas behind each defense (details in § E).

Footnote 3: The only exception is unsupervised graph improvement, as it contains two of the most popular approaches, which rely on orthogonal principles. One filters edges based on the node features [48], the other uses a low-rank approximation of the adjacency matrix [12].

**Improving the graph.** The feature-based _Jaccard-GCN_ [48] uses a preprocessing step to remove all edges between nodes whose features exhibit a Jaccard similarity below a certain threshold. This was motivated by the homophily assumption, which is violated by prior attacks that tend to insert edges between dissimilar nodes. The structure-based _SVD-GCN_ [12] replaces the adjacency matrix with a low-rank approximation prior to plugging it into a regular GNN. This defense was motivated by the observation that the perturbations from Nettack tend to disproportionately affect the high-frequency spectrum of the adjacency matrix. The key idea in _ProGNN_ [30] is to learn the graph structure by alternatingly optimizing the parameters of the GNN and the adjacency matrix (the edge weights). The loss for the latter includes the standard cross-entropy loss, the distance to the original graph, and three other objectives designed to promote sparsity, low rank, and feature smoothness.
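As a concrete example of such preprocessing, a minimal sketch of the Jaccard filtering step for binary features follows (the threshold below is a free parameter of the sketch, not the authors' value):

```python
import torch

def jaccard_filter(A, X, threshold=0.01):
    # Pairwise Jaccard similarity of binary feature vectors:
    # |x_u AND x_v| / |x_u OR x_v|, then drop edges below the threshold.
    inter = X @ X.T
    union = X.sum(1, keepdim=True) + X.sum(1) - inter
    jac = inter / union.clamp(min=1)          # avoid division by zero
    return A * (jac >= threshold).float()
```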
**Improving the training.** _GRAND_ [15] relies on random feature augmentations (zeroing features) coupled with neighbourhood augmentations \(\bar{\mathbf{X}}=(\mathbf{A}\mathbf{X}+\mathbf{A}\mathbf{A}\mathbf{X}+\cdots)\). All randomly augmented copies of \(\bar{\mathbf{X}}\) are passed through the same MLP that is trained with a consistency regularization loss.

**Improving the architecture.** _GNNGuard_ [58] filters edges in each message passing aggregation via cosine similarity (smoothed over layers). In the first layer of _RGCN_ [63] we learn a Gaussian distribution over the feature matrix, and the subsequent layers then manipulate this distribution (instead of using point estimates). For the loss we then sample from the resulting distribution. In addition, in each layer, RGCN assigns higher/lower weights to features with low/high variance. _Soft-Median-GDC_ [17] replaces the message passing aggregation function in GNNs (typically a weighted mean) with a more robust alternative by relaxing the median using differentiable sorting.

**Common themes.** One theme shared by some defenses is to first discover some property that can discriminate clean from adversarial edges (e.g., high vs. low feature similarity), and then propose a strategy based on that property (e.g., filter low-similarity edges). Often they analyze the edges from only a single attack such as Nettack [67]. The obvious pitfall of this strategy is that the attacker can easily adapt by restricting the adversarial search space to edges that will bypass the defense's (implicit) filter. Another theme is to add additional loss terms to promote some robustness objectives. Similarly, the attacker can incorporate the same terms in the attack loss to negate their influence.

## 4 Methodology: How to design strong adaptive attacks

In this section, we describe our general methodology and the lessons we learned while designing adaptive attacks. We hope these guidelines can serve as a reference for testing new defenses.

**Step 1 - Understand how the defense works** and categorize it. For example, some defenses rely on preprocessing which filters out edges that meet certain criteria (e.g., Jaccard-GCN [48]). Others introduce additional losses during training (e.g., GRAND [15]) or change the architecture (e.g., RGCN [63]). Different defenses might need different attacks or impose extra requirements on them.

**Step 2 - Probe for obvious weaknesses.** Some examples include: (a) transfer adversarial edges from another (closely related) model (see also § 6); (b) use a gradient-free (black-box) attack. For example, in our local experiments, we use a _Greedy Brute Force_ attack: in each step, it considers all possible single edge flips and chooses the one that contributes most to the attack objective (details in § A; a sketch follows below).

**Step 3 - Launch a gradient-based adaptive attack.** For rapid prototyping, use a comparably cheap attack such as FGA, and later advance to stronger attacks like PGD. For poisoning, strongly consider meta-gradient-based attacks like Metattack [66] that unroll the training procedure, as they almost always outperform just transferring perturbations from evasion. Unsurprisingly, we find that applying PGD [53] on the meta gradients often yields even stronger attacks than the greedy Metattack, and we refer to this new attack as _Meta-PGD_ (details in § A).
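A minimal sketch of the Greedy Brute Force attack mentioned in Step 2 follows; for tractability it only considers flips of edges incident to the target node (our simplification) and assumes a `loss_fn` returning the target's attack loss, e.g., its logit margin:

```python
import torch

def greedy_brute_force(loss_fn, A, target, budget):
    # In each step, try every single edge flip incident to `target`, keep
    # the flip that increases the attack loss the most, and repeat until
    # the budget is exhausted or no flip improves the objective.
    A = A.clone()
    n = A.shape[0]
    for _ in range(budget):
        best_v, best_loss = None, loss_fn(A).item()
        for v in range(n):
            if v == target:
                continue
            A[target, v] = A[v, target] = 1 - A[target, v]   # flip edge
            loss = loss_fn(A).item()
            if loss > best_loss:
                best_v, best_loss = v, loss
            A[target, v] = A[v, target] = 1 - A[target, v]   # undo flip
        if best_v is None:
            break                                            # no improvement
        A[target, best_v] = A[best_v, target] = 1 - A[target, best_v]
    return A
```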
\begin{table} \begin{tabular}{l l|l l l} \hline \hline & Taxonomy & Selected Defenses & Other Defenses \\ \hline \hline \multirow{3}{*}{Improving graph} & Unsupervised & Jaccard-GCN [48] & \multirow{3}{*}{[10, 26, 50, 59, 60]} \\ & SVD-GCN [12] & & \\ \cline{2-4} & Supervised & ProGNN [30] & [51, 43, 56] \\ \hline \multirow{3}{*}{Improving training} & Robust training & n/a (see § C) & [6, 9, 14, 22, 27, 28, 41, 52, 53, 54] \\ \cline{2-4} & Further training principles & GRAND [15] & [5, 11, 29, 39, 42, 55, 61, 64, 65] \\ \hline \multirow{3}{*}{Improving architecture} & Adaptively & Rule-based & GNNGuard [58] & [31, 36, 37, 57] \\ \cline{2-4} & weighting & Probabilistic & RGCN [63] & [8, 13, 24, 25, 38] \\ \cline{1-1} \cline{2-4} & edges & Robust agg. & Soft-Median-GDC [17] & [7, 16, 47] \\ \cline{1-1} \cline{2-4} & Miscellaneous & n/a (see above) & [40, 46, 49] \\ \hline \hline \end{tabular} \end{table} Table 1: Categorization of selected defenses. Our taxonomy extends the one by Gunnemann [21].

**Step 4 - Address gradient issues.** Some defenses contain components that are non-differentiable, lead to exploding or vanishing gradients, or obfuscate the gradients [1]. To circumvent these issues, potentially: (a) adjust the defense's hyperparameters to retain numerical stability; (b) replace the offending component with a differentiable or stable counterpart, e.g., substitute the low-rank approximation of SVD-GCN [12] with a suitable differentiable alternative; or (c) remove components, e.g., drop the "hard" filtering of edges done in the preprocessing of Soft-Median-GDC [17]. These considerations also apply to poisoning attacks, where one also needs to pay attention to all components of the training procedure. For example, we ignore the nuclear norm loss term in the training of ProGNN [30] to obtain the meta-gradient. Of course, keep the entire defense intact for its final evaluation on the found perturbations.

**Step 5 - Adjust the attack loss.** In previous works, the attack loss is often chosen to be the same as the training loss, i.e., the cross-entropy (CE). This is suboptimal since CE is not _consistent_ according to the definition by Tramer et al. [44] - higher loss values do not indicate a stronger attack. Thus, we use a variant of the consistent Carlini-Wagner loss [4] for _local_ attacks, namely the logit margin (LM), i.e., the logit difference between the ground truth class and the most-likely non-true class. However, as discussed by Geisler et al. [17], for _global_ attacks the mean LM across all target nodes is still suboptimal since it can "waste" budget on already misclassified nodes. Their tanh logit margin (TLM) loss resolves this issue. If not indicated otherwise, we either use TLM or the probability margin (PM) loss - a slight variant of LM that computes the margin after the softmax rather than before.

**Step 6 - Tune the attack hyperparameters** such as the number of PGD steps, the attack learning rate, the optimizer, etc. For example, for Metattack we observed that using the Adam optimizer [32] can weaken the attack and replacing it with SGD can increase the effectiveness.
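The attack losses from Step 5 can be sketched as follows, where higher values indicate a stronger attack (logits of shape [num_nodes, C] and ground-truth labels y):

```python
import torch
import torch.nn.functional as F

def margin_losses(logits, y):
    # LM: best non-true logit minus the true logit. TLM: tanh-squashed LM,
    # which stops already-misclassified nodes from dominating global
    # attacks. PM: the same margin computed on softmax probabilities.
    true = logits.gather(1, y.view(-1, 1)).squeeze(1)
    masked = logits.scatter(1, y.view(-1, 1), -float("inf"))
    lm = masked.max(dim=1).values - true
    tlm = torch.tanh(lm)
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, y.view(-1, 1)).squeeze(1)
    p_other = probs.scatter(1, y.view(-1, 1), 0.0).max(dim=1).values
    pm = p_other - p_true
    return lm.mean(), tlm.mean(), pm.mean()
```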
**Lessons learned.** We provide a detailed description of each adaptive attack and the necessary actions to make it as strong as possible in § E. Here, we highlight some important recurring challenges that should be kept in mind when designing adaptive attacks. (1) Numerical issues, e.g., due to division by tiny numbers, can lead to weak attacks, and we typically resolve them via clamping. (2) In some cases we observed that for PGD attacks it is beneficial to clip the gradients to stabilize the adversarial optimization. (3) For a strong attack it is essential to tune its hyperparameters. (4) Relaxing non-differentiable components and deactivating operations that filter edges/embeddings based on a threshold in order to obtain gradients for every edge is an effective strategy. (5) If the success of evasion-poisoning transfer depends on a fixed random initialization (see § J), it helps to use multiple clean auxiliary models trained with different random seeds for the PGD attack - in each PGD step we choose one model randomly. (6) Components that make the optimization more difficult but barely help the defense can be safely deactivated. (7) It is sometimes beneficial to control the randomness in the training loop of Meta-PGD. (8) For Meta-PGD it can help to initialize the attack with non-zero perturbations and, e.g., use the perturbed graph of a different attack.

**Example 1 - SVD-GCN.** To illustrate the attack process (especially steps 3 and 4), we present a case study of how we construct an adaptive attack against SVD-GCN. Gradient-free attacks like Nettack do not work well here as they waste budget on adversarial edges which are filtered out by the low-rank approximation (LRA). Moreover, to the demise of gradient-based attacks, the gradients of the adjacency matrix are very unstable due to the SVD and thus less useful. Still, we start with a gradient-based attack as it is easier to adapt, specifically FGA, whose quick runtime enables rapid prototyping as it requires only a single gradient calculation. To replace the LRA with a function whose gradients are better behaved, we first decompose the perturbed adjacency matrix \(\tilde{\mathbf{A}}=\mathbf{A}+\delta\mathbf{A}\) and, thus, only need gradients for \(\delta\mathbf{A}\). Next, we notice that the eigenvectors of \(\mathbf{A}\) usually have few large components. Perturbations along those principal dimensions are representable by the eigenvectors, hence most likely are neither filtered out nor impact the eigenvectors. Knowing this, we approximate the LRA in a tractable manner by element-wise multiplication of \(\delta\mathbf{A}\) with weights that quantify how well an edge aligns with the principal dimensions (details in § E). In short, we replace \(\mathrm{LRA}(\mathbf{A}+\delta\mathbf{A})\) with \(\mathrm{LRA}(\mathbf{A})+\delta\mathbf{A}\circ\mathrm{Weight}(\mathbf{A})\), which admits useful gradients. This approach carries over to other attacks such as Nettack - we can incorporate the weights into its score function to avoid selecting edges that will be filtered out.
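A sketch of this surrogate is shown below. The particular weight matrix, measuring how strongly each potential edge projects onto the top singular subspace, is our illustrative choice; the paper derives its own weighting in § E:

```python
import torch

def lra_surrogate(A, dA, rank=50):
    # Differentiable stand-in for SVD-GCN's low-rank approximation:
    # LRA(A + dA) is replaced by LRA(A) + dA * W(A), so gradients w.r.t.
    # the perturbation dA bypass the numerically unstable SVD.
    U, S, V = torch.svd_lowrank(A, q=rank)
    lra = U @ torch.diag(S) @ V.T      # fixed low-rank part of A
    W = U.abs() @ V.abs().T            # alignment of each entry (u, v)
    return lra + dA * W                # with the top singular subspace
```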
**Example 2 - ProGNN.** While we approached SVD-GCN with a theoretical insight, breaking a composite defense like ProGNN requires engineering and tinkering. When attacking ProGNN with PGD and transferring the perturbations to poisoning, we observe that the perturbations are only effective if the model is trained with the same random seed. This over-sensitivity can be avoided by employing lesson (5) in § 4. As ProGNN is very expensive to train due to its nuclear norm regularizer, we drop that term when training the set of auxiliary models without hurting attack strength. For unrolling the training we again drop the nuclear norm regularizer since it is non-differentiable. Sometimes PGD does not find a state with high attack loss, which can be alleviated by random restarts. As Meta-PGD optimization quickly stalls, we initialize it with a strong perturbation found by Meta-PGD on GCN. All of these tricks combined are necessary to successfully attack ProGNN.

**Effort.** Breaking Jaccard-GCN (and SVD-GCN) required around half an hour (resp. three days) of work for the initial proof of concept. Some other defenses require various adjustments that need to be developed over time, but reusing those can quickly break even challenging defenses. It is difficult to quantify this effort, but it can be greatly accelerated by adopting our lessons learned in § 4. In any case, we argue that authors proposing a new defense must put in reasonable effort to break it.

## 5 Evaluation of adaptive attacks

First, we provide details on the experimental setup and the used metrics. We then report the main results and findings. We refer to § A for details on the base attacks, including our Greedy Brute Force and Meta-PGD approaches. We provide the code, configurations, and a collection of perturbed graphs on the project website linked on the first page.

**Setup.** We use the two most widely used datasets in the literature, namely Cora ML [2] and Citeseer [19] (details in § F). Unfortunately, larger datasets are barely possible since most defenses are not very scalable. Still, in § N, we discuss scalability and apply an adaptive attack to arXiv (170k nodes) [23]. We repeat the experiments for five different data splits (10% training, 10% validation, 80% testing) and report the means and variances. We use an internal cluster with Nvidia GTX 1080Ti GPUs. Most experiments can be reproduced within a few hours. However, the experiments with ProGNN and GRAND will likely require several GPU days.

**Defense hyperparameters.** When first attacking the defenses, we observed that many exhibit poor robustness using the hyperparameters provided by their authors. To not accidentally dismiss a defense as non-robust, we tune the hyperparameters such that the clean accuracy remains constant but the robustness w.r.t. adaptive attacks is improved. Still, we run all experiments on the untuned defenses as well to confirm we achieve this goal. In the same way, we also tune the GCN model, which we use as a reference to assess whether a defense has merit. We report the configurations and verify the success of our tuning in § H.

**Attacks and budget.** In the _global_ setting, we run the experiments for budgets \(\Delta\) of up to 15% of the total number of edges in the dataset. Due to our (R)AUC metric (see below), we effectively focus on only the lower range of evaluated budgets. We apply FGA and PGD [53] for evasion. For poisoning, we transfer the found perturbations and also run Metattack [66] and our Meta-PGD. Recall that where necessary, we adapt the attacks to the defenses as outlined in § 4 and detailed in § E. In the _local_ setting, we first draw sets of 20 target nodes per split with degrees 1, 2, 3, 5, 8-10, and 15-25, respectively (a total of 120 nodes). This enables us to study how the attacks affect different types of nodes - lower-degree nodes are often conjectured to be less robust (see also § K). We then run the experiments for relative budgets \(\Delta\) of up to 200% of the target node's degree. For example, if a node has 10 neighbors and the budget is \(\Delta=70\%\), then the attacker can change up to \(10\cdot 0.7=7\) edges. This commonly used setup ensures that we treat both low and high-degree nodes fairly. We use Nettack [67], FGA, PGD, and our greedy brute force attack for evasion.
For poisoning, we only transfer the found perturbations. Again, we adapt the attacks to the defenses if necessary. In alignment with our threat model, we evaluate each found perturbation by the test set accuracy it achieves (_global_) or the ratio of target nodes that remain correctly classified (_local_). For each budget, we choose the strongest attack among all attempts (e.g., PGD, Metattack, Meta-PGD). This gives rise to an envelope curve as seen in Fig. 3. We also include lower budgets as attempts, i.e., we enforce the envelope curve to be monotonically decreasing. We introduce a rich set of attack characteristics by also transferring the perturbations supporting the envelope curve to every other defense. These transfer attacks then also contribute to the final envelope curve of each defense, but in most cases their contribution is marginal.

**Non-adaptive attacks.** We call any attack "non-adaptive" that is not aware of any changes made to the model (including defense mechanisms). Where we report results for a non-adaptive attack (e.g., Fig. 1 or Fig. 2), we specifically refer to an attack performed on a (potentially linearized) GCN with commonly used hyperparameters (i.e., untuned). We then apply the perturbed adjacency matrix to the actual defense. In other words, we transfer the adversarial perturbation from a GCN. For our _local_ non-adaptive attack, we always use Nettack. In contrast, for our _global_ non-adaptive attack, we apply all attacks listed above and then transfer, for each budget, the attack which is strongest against the GCN. Due to this ensemble of attacks, our global non-adaptive attack is expected to be slightly stronger than the non-adaptive attacks in most other works.

**Area Under the Curve (AUC).** An envelope curve gives us a detailed breakdown of the empirical robustness of a defense for different adversarial budgets. However, it is difficult to compare different attacks and defenses by only visually comparing their curves in a figure (e.g., see Fig. 4). Therefore, in addition to this breakdown per budget, we summarize robustness using the Area Under the Curve (AUC), which is independent of a specific choice of budget \(\Delta\) and also punishes defenses that achieve robustness by trading in too much clean accuracy. Intuitively, higher AUCs indicate more robust models, and conversely, lower AUCs indicate stronger attacks. As our _local_ attacks break virtually all target nodes within our conservative maximum budget (see § F), taking the AUC over all budgets conveniently measures how quickly this occurs. However, for _global_ attacks, the test set accuracy continues to decrease for unreasonably large budgets, and it is unclear when to stop. To avoid having to choose a maximum budget, we wish to stop when discarding the entire tainted graph becomes the better defense. This is fulfilled by the area between the envelope curve and the line signifying the accuracy of an MLP - a model that is oblivious to the graph structure, at the expense of a substantially lower clean accuracy than a GNN. We call this metric Relative AUC (RAUC) and illustrate it in Fig. 3. More formally, \(\mathrm{RAUC}(c)=\int_{0}^{b_{0}}(c(b)-a_{\text{MLP}})\,\mathrm{d}b\) s.t. \(b\leq b_{0}\implies c(b)\geq a_{\text{MLP}}\), where \(c(\cdot)\) is a piecewise linear robustness-per-budget curve and \(a_{\text{MLP}}\) is the accuracy of the MLP baseline. We normalize the RAUC s.t. 0% is the performance of an MLP and 100% is the optimal score (i.e., 100% accuracy).
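A sketch of the RAUC computation over a sampled robustness curve follows; for simplicity it truncates at the first evaluated budget that falls below the MLP baseline instead of interpolating the exact crossing point:

```python
import numpy as np

def rauc(budgets, accuracies, mlp_accuracy):
    # Integrate the piecewise-linear robustness curve above the MLP line,
    # stopping at the first budget where the curve drops below it, and
    # normalize so 0% equals the MLP and 100% equals perfect accuracy.
    budgets, accuracies = np.asarray(budgets), np.asarray(accuracies)
    above = accuracies >= mlp_accuracy
    cut = len(budgets) if above.all() else int(np.argmin(above))
    b, a = budgets[:cut], accuracies[:cut]
    if len(b) < 2:
        return 0.0
    area = np.trapz(a - mlp_accuracy, b)
    max_area = (1.0 - mlp_accuracy) * (b[-1] - b[0])
    return area / max_area
```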
**Finding 1 - Our adaptive attacks lower robustness by 40% on average.** In Fig. 2 we compare non-adaptive attacks, the current standard to evaluate defenses, with our adaptive attacks, which we propose as a new standard. The achieved (R)AUC in each case drops on average by 40% (similarly for Citeseer, see § F). In other words, the reported robustness in the original works proposing a defense is roughly 40% too optimistic. We confirm a statistically significant drop (\(p<0.05\)) with a one-sided t-test in 85% of all cases. Considering adversarial accuracy for a (small) fixed adversarial budget (Fig. 1) instead of the summary (R)AUC over all budgets tells the same story: non-adaptive attacks are too weak to be reliable indicators of robustness, and adaptive attacks massively shrink the alleged robustness gains.

Figure 3: The dotted lines show the test set accuracy per budget after three global poisoning attacks against a tuned GCN on Cora ML. Taking the envelope gives the solid black robustness curve. The dashed gray line denotes the accuracy of an MLP. The shaded area is the RAUC.

Figure 2: Adaptive vs. non-adaptive attacks with budget-agnostic (R)AUC on Cora ML (cf. Fig. 1). SVD-GCN (b) is disastrously broken – our adaptive attacks reach <0.02 (not visible). See § F for Citeseer.

**Finding 2 - Structural robustness of GCN is not easily improved.** In Fig. 4 (global) and Fig. 5 (local) we provide a more detailed view for different adversarial budgets and different graphs. For easier comparison we show the accuracy relative to the undefended GCN baseline. Overall, the decline is substantial. Almost half of the examined defenses perform worse than GCN, and most remaining defenses neither meaningfully improve nor lower the robustness (see also Fig. 1 and Fig. 3). GRAND and Soft-Median-GDC retain robustness in some settings, but the gains are smaller than reported.

**Finding 3 - Defense effectiveness depends on the dataset.** As we can see in Fig. 4 and Fig. 5, our ability to circumvent specific defenses tends to depend on the dataset. It appears that some defenses are more suited for different datasets. For example, GRAND seems to be a good choice for Citeseer, while it is not as strong on Cora ML. The results for local attacks (Fig. 5) paint a similar picture; here we see that Cora ML is more difficult to defend. This points to another potentially problematic pitfall: most defenses are developed only using these two datasets as benchmarks. Is robustness even worse on other graphs? We leave this question for future work.

**Finding 4 - No trade-off between accuracy and robustness for structure perturbations.** Instead, Fig. 6 shows that defenses with high clean accuracy also exhibit high RAUC, i.e., are more robust against our attacks. This appears to be in contrast to the image domain [45]. However, we cannot exclude that future, more powerful defenses might manifest this trade-off in the graph domain.

**Finding 5 - Necessity of adaptive attacks.** In Fig. 7, we show two exemplary characteristics of how an adaptive attack bypasses defensive measures. First, to attack SVD-GCN, it seems particularly effective to insert connections to high-degree nodes. Second, for GNNGuard, GRAND, and Soft-Median-GDC it is disproportionately helpful to delete edges. These examples illustrate why the existence of a one-fits-all perturbation which circumvents all possible defenses is unlikely. Instead, an adaptive attack is necessary to properly assess a defense's efficacy since different models are particularly susceptible to different perturbations.
Figure 4: Difference (defense – undefended GCN) of adversarial accuracy for the strongest global attack per budget. Almost half of the defenses perform worse than the GCN. We exclude SVD-GCN since it is catastrophically broken and plotting it would make the other defenses illegible (accuracy <24% already for a budget of 2% on Cora ML). Absolute numbers in § F.

Figure 6: Model accuracy vs. RAUC of the strongest global attacks on Cora ML. We do not observe a robustness-accuracy trade-off, but even find models with higher accuracy to be more robust.

**Additional analysis.** During this project, we generated a treasure trove of data. We perform a more in-depth analysis of our attacks in the appendix. First, we study how node degree affects attacks (see § K). For local attacks, the required budget to misclassify a node is usually proportional to the node's degree. Global attacks tend to be oblivious to degree and uniformly break nodes. Next, we perform a breakdown of each defense in terms of the sensitivity to different attacks (see § I). In short, global attacks are dominated by PGD for evasion and Metattack/Meta-PGD for poisoning with the PM or TLM loss. For local attacks, our greedy brute force is most effective, rarely beaten by PGD and Nettack. Finally, we analyze the properties of the adversarial edges in terms of various graph statistics such as edge centrality and frequency spectra (see § L and § M).

## 6 Robustness unit test

Next, we systematically study how well the attacks transfer between defenses, as introduced in the _attacks and budget_ paragraph in § 5. In Fig. 8, we see that in 15 out of 16 cases the adaptive attack is the most effective strategy (see main diagonal). However, for many defenses there is often a source model or ensemble of source models (for the latter see § G) which forms a strong transfer attack.

Figure 8: RAUC for the transfer of the strongest global adaptive attacks on Cora ML between models. The columns contain the models for which the adaptive attacks were created. The rows contain the RAUC after the transfer. With only one exception, adaptive attacks (diagonal) are most effective.

Figure 7: Exemplary metrics characterizing the attack vector of our strongest attacks, which are those visible in Fig. I.1 and Fig. I.2. We give a more elaborate study of attack characteristics in § L.

Motivated by the effectiveness of transfer attacks (especially if transferring from ProGNN [30]), we suggest this set of perturbed graphs to be used as a bare-minimum robustness unit test: one can probe a new defense by testing against these perturbed graphs, and if there exists at least one that diminishes the robustness gains, we can immediately conclude that the defense is not robust in the worst case - without the potentially elaborate process of designing a new adaptive attack. We provide instructions on how to use this collection in the accompanying code. Nevertheless, we cannot stress enough that this collection does not replace a properly developed adaptive attack. For example, if one were to come up with SVD-GCN and use our collection (excluding the perturbed graphs for SVD-GCN), the unit test would partially pass. However, as we can see in, e.g., Fig. 2, SVD-GCN can be broken with an - admittedly very distinct - adaptive attack.
2, SVD-GCN can be broken with an - admittedly very distinct - adaptive attack. ## 7 Related work Excluding attacks on undefended GNNs, previous works studying adaptive attacks in the graph domain are scarce. The recently proposed graph robustness benchmark [62] also only studies transfer attacks. Such transfer attacks are so common in the graph domain that their usage is often not even explicitly stated, and we find that the perturbations are most commonly transferred from Nettack or Metattack (both use a linearized GCN). Other times, the authors of a defense only state that they use PGD [53] (aka "topology attack") without further explanations. In this case, the authors most certainly refer to a PGD transfer attack on a GCN proxy. They almost never apply PGD to their actual defense, which would yield an adaptive attack (but possibly weak, see SS 4 for guidance). An exception where the defense authors study an adaptive attack is SVD-GCN [12]. Their attack collects the edges flipped by Nettack in a difference matrix \(\delta\mathbf{A}\), replaces its most significant singular values and vectors with those from the clean adjacency matrix \(\mathbf{A}\), and finally adds it to \(\mathbf{A}\). Notably, this yields a dense continuous perturbed adjacency matrix. While their SVD-GCN is susceptible to these perturbations, the results however do not appear as catastrophic as with our adaptive attacks, despite their severe violation of our threat model (see SS 2). Geisler et al. [17] are another exception where gradient-based greedy and PGD attacks are directly applied to their Soft-Median-GDC defense, making them adaptive. Still, our attacks manage to further reduce their robustness estimate. ## 8 Discussion We hope that the adversarial learning community for GNNs will reflect on the bitter lesson that evaluating adversarial robustness is not trivial. We show that on average adversarial robustness estimates are overstated by 40%. To ease the transition into a more reliable regime of robustness evaluation for GNNs we share our recipe for successfully designing strong adaptive attacks. Using adaptive (white-box) attacks is also interesting from a security perspective. If a model successfully defends such strong attacks, it is less likely to have remaining attack vectors for a real-world adversary. Practitioners can use our methodology to evaluate their models in hope to avoid an arms race with attackers. Moreover, the white-box assumption lowers the chance that real-world adversaries can leverage our findings, as it is unlikely that they have perfect knowledge. We also urge for caution since the attacks only provide an upper bound (which with our attacks is now 40% tighter). Nevertheless, we argue that the burden of proof that a defense is truly effective should lie with the authors proposing it. Following our methodology, the effort to design a strong adaptive attack is reduced, so we advocate for adaptive attacks as the gold-standard for future defenses. ## Acknowledgments and Disclosure of Funding This research was supported by the Helmholtz Association under the joint research school "Munich School for Data Science - MUDS". ## References * [1] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In _International Conference on Machine Learning, ICML_, 2018. * [2] Aleksandar Bojchevski and Stephan Gunnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. 
In _International Conference on Learning Representations, ICLR_, 2018. * [3] Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In _ACM Workshop on Artificial Intelligence and Security, AISec_, 2017. * [4] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In _IEEE Symposium on Security and Privacy_, 2017. * [5] Heng Chang, Yu Rong, Tingyang Xu, Yatao Bian, Shiji Zhou, Xin Wang, Junzhou Huang, and Wenwu Zhu. Not all low-pass filters are robust in graph convolutional networks. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * [6] J. Chen, X. Lin, H. Xiong, Y. Wu, H. Zheng, and Q. Xuan. Smoothing adversarial training for GNN. _IEEE Transactions on Computational Social Systems_, 8(3), 2020. * [7] Liang Chen, Jintang Li, Qibiao Peng, Yang Liu, Zibin Zheng, and Carl Yang. Understanding structural vulnerability in graph convolutional networks. In _International Joint Conference on Artificial Intelligence, IJCAI_, 2021. * [8] Lingwei Chen, Xiaoting Li, and Dinghao Wu. Enhancing robustness of graph convolutional networks via dropping graph connections. In _European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD_, 2021. * [9] Zhijie Deng, Yinpeng Dong, and Jun Zhu. Batch virtual adversarial training for graph convolutional networks. In _Workshop on Learning and Reasoning with Graph-Structured Representations at the International Conference on Machine Learning, ICML_, 2019. * [10] Dongsheng Duan, Lingling Tong, Yangxi Li, Jie Lu, Lei Shi, and Cheng Zhang. AANE: Anomaly aware network embedding for anomalous link detection. In _IEEE International Conference on Data Mining, ICDM_, 2020. * [11] Pantelis Elinas, Edwin V. Bonilla, and Louis Tiao. Variational inference for graph convolutional networks in the absence of graph data and adversarial settings. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [12] Negin Entezari, Saba A. Al-Sayouri, Amirali Darvishzadeh, and Evangelos E. Papalexakis. All you need is low (rank): Defending against adversarial attacks on graphs. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2020. * [13] Boyuan Feng, Yuke Wang, Z. Wang, and Yufei Ding. Uncertainty-aware attention graph neural network for defending adversarial attacks. In _AAAI Conference on Artificial Intelligence_, 2021. * [14] Fuli Feng, Xiangnan He, Jie Tang, and Tat-Seng Chua. Graph adversarial training: Dynamically regularizing based on graph structure. _IEEE Transactions on Knowledge and Data Engineering_, 33(6), 2021. * [15] Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, and Jie Tang. Graph random neural network for semi-supervised learning on graphs. In _International Conference on Machine Learning, ICML_, 2021. * [16] Simon Geisler, Daniel Zugner, and Stephan Gunnemann. Reliable graph neural networks via robust aggregation. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [17] Simon Geisler, Tobias Schmidt, Hakan Sirin, Daniel Zugner, Aleksandar Bojchevski, and Stephan Gunnemann. Robustness of graph neural networks at scale. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * Geisler et al. [2022] Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, and Stephan Gunnemann. Generalization of neural combinatorial solvers through the lens of adversarial robustness. 
In _International Conference on Learning Representations, ICLR_, 2022. * Giles et al. [1998] C. Lee Giles, Kurt D. Bollacker, and Steve Lawrence. CiteSeer: An automatic citation indexing system. In _ACM Conference on Digital Libraries_, 1998. * Gilmer et al. [2017] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In _International Conference on Machine Learning, ICML_, 2017. * Gunnemann [2021] Stephan Gunnemann. Graph neural networks: Adversarial robustness. In Lingfei Wu, Peng Cui, Jian Pei, and Liang Zhao, editors, _Graph Neural Networks: Foundations, Frontiers, and Applications_, chapter 8,. Springer, 2021. * Hu et al. [2021] Weibo Hu, Chuan Chen, Yaomin Chang, Zibin Zheng, and Yunfei Du. Robust graph convolutional networks with directional graph adversarial training. _Applied Intelligence_, 51(11), 2021. * Hu et al. [2020] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * Ioannidis and Giannakis [2020] Vassilis N. Ioannidis and Georgios B. Giannakis. Edge dithering for robust adaptive graph convolutional networks. In _AAAI Conference on Artificial Intelligence_, 2020. * Ioannidis et al. [2021] Vassilis N. Ioannidis, Antonio G. Marques, and Georgios B. Giannakis. Tensor graph convolutional networks for multi-relational and robust learning. _IEEE Transactions on Signal Processing_, 68, 2020. * Ioannidis et al. [2021] Vassilis N. Ioannidis, Dimitris Berberidis, and Georgios B. Giannakis. Unveiling anomalous nodes via random sampling and consensus on graphs. In _IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP_, 2021. * Jin and Zhang [2019] Hongwei Jin and Xinhua Zhang. Latent adversarial training of graph convolution networks. In _Workshop on Learning and Reasoning with Graph-Structured Representations at the International Conference on Machine Learning, ICML_, 2019. * Jin and Zhang [2021] Hongwei Jin and Xinhua Zhang. Robust training of graph convolutional networks via latent perturbation. In _European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD_, 2021. * Jin et al. [2021] Ming Jin, Heng Chang, Wenwu Zhu, and Somayeh Sojoudi. Power up! Robust graph convolutional network against evasion attacks based on graph powering. In _AAAI Conference on Artificial Intelligence_, 2021. * Jin et al. [2020] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. Graph structure learning for robust graph neural networks. In _ACM International Conference on Knowledge Discovery and Data Mining, SIGKDD_, 2020. * Jin et al. [2021] Wei Jin, Tyler Derr, Yiqi Wang, Yao Ma, Zitao Liu, and Jiliang Tang. Node similarity preserving graph convolutional networks. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2021. * Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _International Conference on Learning Representations, ICLR_, 2015. * Kipf and Welling [2017] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In _International Conference on Learning Representations, ICLR_, 2017. * Li et al. [2021] Jintang Li, Tao Xie, Chen Liang, Fenfang Xie, Xiangnan He, and Zibin Zheng. 
Adversarial attack on large scale graph. _IEEE Transactions on Knowledge and Data Engineering_, 2021. * Li et al. [2020] Yaxin Li, Wei Jin, Han Xu, and Jiliang Tang. Deeprobust: A pytorch library for adversarial attacks and defenses. _arXiv preprint arXiv:2005.06149_, 2020. * [36] Xiaorui Liu, Jiayuan Ding, Wei Jin, Han Xu, Yao Ma, Zitao Liu, and Jiliang Tang. Graph neural networks with adaptive residual. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * [37] Xiaorui Liu, Wei Jin, Yao Ma, Yaxin Li, Hua Liu, Yiqi Wang, Ming Yan, and Jiliang Tang. Elastic graph neural networks. In _International Conference on Machine Learning, ICML_, 2021. * [38] Dongsheng Luo, Wei Cheng, Wenchao Yu, Bo Zong, Jingchao Ni, Haifeng Chen, and Xiang Zhang. Learning to drop: Robust graph neural network via topological denoising. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2021. * [39] Florence Regol, Soumyasundar Pal, Jianing Sun, Yingxue Zhang, Yanhui Geng, and Mark Coates. Node copying: A random graph model for effective graph sampling. _Signal Processing_, 192, 2022. * [40] Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, and Andreas Spanias. Uncertainty-matching graph neural networks to defend against poisoning attacks. In _AAAI Conference on Artificial Intelligence_, 2021. * [41] Ke Sun, Zhouchen Lin, Hantao Guo, and Zhanxing Zhu. Virtual adversarial training on graph convolutional networks in node classification. In _Chinese Conference on Pattern Recognition and Computer Vision, PRCV_, 2019. * [42] Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, and Suhang Wang. Transferring robustness for graph neural network against poisoning attacks. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2020. * [43] Shuchang Tao, H. Shen, Q. Cao, L. Hou, and Xueqi Cheng. Adversarial immunization for certifiable robustness on graphs. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2021. * [44] Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [45] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In _International Conference on Learning Representations, ICLR_, 2019. * [46] Haibo Wang, Chuan Zhou, Xin Chen, Jia Wu, Shirui Pan, and Jilong Wang. Graph stochastic neural networks for semi-supervised learning. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [47] Yiwei Wang, Shenghua Liu, Minji Yoon, Hemank Lamba, Wei Wang, Christos Faloutsos, and Bryan Hooi. Provably robust node classification via low-pass message passing. In _IEEE International Conference on Data Mining, ICDM_, 2020. * [48] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples for graph data: Deep insights into attack and defense. In _International Joint Conference on Artificial Intelligence, IJCAI_, 2019. * [49] Tailin Wu, Hongyu Ren, Pan Li, and Jure Leskovec. Graph information bottleneck. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [50] Yang Xiao, Jie Li, and Wengui Su. A lightweight metric defence strategy for graph neural networks against poisoning attacks. In _International Conference on Information and Communications Security, ICICS_, 2021. * [51] Hui Xu, Liyao Xiang, Jiahao Yu, Anqi Cao, and Xinbing Wang. 
Speedup robust graph structure learning with low-rank information. In _ACM International Conference on Information & Knowledge Management, CIKM_, 2021. * [52] Jiarong Xu, Yang Yang, Junru Chen, Chunping Wang, Xin Jiang, Jiangang Lu, and Yizhou Sun. Unsupervised adversarially-robust representation learning on graphs. In _AAAI Conference on Artificial Intelligence_, 2022. * [53] Kaidi Xu, Hongge Chen, Sijia Liu, Pin Yu Chen, Tsui Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: An optimization perspective. In _International Joint Conference on Artificial Intelligence, IJCAI_, 2019. * [54] Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, and Xue Lin. Towards an efficient and general framework of robust training for graph neural networks. In _IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP_, 2020. * [55] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. In _Advances in Neural Information Processing Systems, NerUPS_, 2020. * [56] Baoliang Zhang, Xiaoxin Guo, Zhenchuan Tu, and Jia Zhang. Graph alternate learning for robust graph neural networks in node classification. _Neural Computing and Applications_, 34 (11), 2022. * [57] Li Zhang and Haiping Lu. A feature-importance-aware and robust aggregator for gcn. In _ACM International Conference on Information & Knowledge Management, CIKM_, 2020. * [58] Xiang Zhang and Marinka Zitnik. GNNGuard: Defending graph neural networks against adversarial attacks. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [59] Yingxue Zhang, Sakif Hossain Khan, and Mark Coates. Comparing and detecting adversarial attacks for graph deep learning. In _Workshop on Representation Learning on Graphs and Manifolds at the International Conference on Learning Representations, ICLR_, 2019. * [60] Yingxue Zhang, Florence Regol, Soumyasundar Pal, Sakif Khan, Liheng Ma, and Mark Coates. Detection and defense of topological adversarial attacks on graphs. In _International Conference on Artificial Intelligence and Statistics, AISTATS_, 2021. * [61] Cheng Zheng, Bo Zong, Wei Cheng, Dongjin Song, Jingchao Ni, Wenchao Yu, Haifeng Chen, and Wei Wang. Robust graph representation learning via neural sparsification. In _International Conference on Machine Learning, ICML_, 2020. * [62] Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, and Jie Tang. Graph robustness benchmark: Benchmarking the adversarial robustness of graph machine learning. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * [63] Dingyuan Zhu, Peng Cui, Ziwei Zhang, and Wenwu Zhu. Robust graph convolutional networks against adversarial attacks. In _ACM International Conference on Knowledge Discovery and Data Mining, SIGKDD_, 2019. * [64] Jun Zhuang and Mohammad Al Hasan. Defending graph convolutional networks against dynamic graph perturbations via bayesian self-supervision. In _AAAI Conference on Artificial Intelligence_, 2022. * [65] Jun Zhuang and Mohammad Al Hasan. How does bayesian noisy self-supervision defend graph convolutional networks? _Neural Processing Letters_, 54(4), 2022. * [66] Daniel Zugner and Stephan Gunnemann. Adversarial attacks on graph neural networks via meta learning. In _International Conference on Learning Representations, ICLR_, 2019. * [67] Daniel Zugner, Amir Akbarnejad, and Stephan Gunnemann. 
Adversarial attacks on neural networks for graph data. In _ACM International Conference on Knowledge Discovery and Data Mining, SIGKDD_, 2018. ## Checklist 1. For all authors... 1. Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] 2. Did you describe the limitations of your work? [Yes] See SS 8. 3. Did you discuss any potential negative societal impacts of your work? [Yes] See SS 8. 4. Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] 2. If you are including theoretical results... 1. Did you state the full set of assumptions of all theoretical results? [N/A] 2. Did you include complete proofs of all theoretical results? [N/A] 3. If you ran experiments... 1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See SS 5. 2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See SS 5, SS H and provided code. 3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] All experiments are repeated for five random data splits. 4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See beginning of SS 5. 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... 1. If your work uses existing assets, did you cite the creators? [Yes] 2. Did you mention the license of the assets? [No] 3. Did you include any new assets either in the supplemental material or as a URL? [Yes] See beginning of SS 5. 4. Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] 5. Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] 5. If you used crowdsourcing or conducted research with human subjects... 1. Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] 2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] 3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] Attacks overview In this section, we make the ensemble of attacks explicit and explain essential details. We then adapt these attack primitives to circumvent the defense mechanisms (see SS E). **Global evasion attacks.** The goal of a global attack is to provoke the misclassification of a large fraction of nodes (i.e., the test set) jointly, crafting a single perturbed adjacency matrix. For evasion, we use _(1) the Fast Gradient Attack (FGA)_ and _(2) Projected Gradient Descent (PGD)_. In FGA, we calculate the gradient towards the entries of the clean adjacency matrix \(\nabla_{\mathbf{A}}\ell_{\text{attack}}(f_{\theta^{*}}(\mathbf{A},\mathbf{X}), \mathbf{y})\) and then flip the highest-ranked edges at once s.t. we exhaust the budget \(\Delta\). In contrast, PGD requires multiple gradient updates since it uses gradient ascent (see SS 2 or explanation below for Meta-PGD). We deviate from the PGD implementation of Xu et al. 
[53] in two ways: (I) we adapt the initialization of the perturbation before the first attack gradient descent step, and (II) we adjust the final sampling of \(\tilde{\mathbf{A}}\). See below for more details.

**Global poisoning attacks.** We either (a) transfer the perturbation \(\tilde{\mathbf{A}}\) found by evasion attack (1) or (2) and use it to poison training, or (b) differentiate through the training procedure by unrolling it, thereby obtaining a meta gradient. The latter approach is taken by both _(3) Metattack_ [66] and _(4) our Meta-PGD_. Metattack greedily flips a single edge in each iteration and then obtains a new meta gradient at the changed adjacency matrix. In Meta-PGD, we follow the same relaxation as Xu et al. [53] (see below as well as § 2) and obtain meta gradients at the relaxed adjacency matrices. In contrast to the greedy approach of Metattack, Meta-PGD is able to revise early decisions later on.

**Meta-PGD.** Next, we explain the details of Meta-PGD and present the pseudocode for reference in Algorithm A.1. Recall that the discrete edges are relaxed \(\{0,1\}\rightarrow[0,1]\) and that the "weight" of the perturbation reflects the probability of flipping the respective edge.

```
1: Input: Adjacency matrix \(\mathbf{A}\), node features \(\mathbf{X}\), labels \(\mathbf{y}\), GNN \(f_{\theta}(\cdot)\), loss \(\ell_{\text{attack}}\)
2: Parameters: Budget \(\Delta\), iterations \(E\), learning rates \(\alpha_{t}\)
3: Initialize \(\mathbf{P}^{(0)}\in\mathbb{R}^{n\times n}\)
4: for \(t\in\{1,2,\dots,E\}\) do
5:     Step \(\mathbf{P}^{(t)}\leftarrow\mathbf{P}^{(t-1)}+\alpha_{t}\nabla_{\mathbf{P}^{(t-1)}}\left[\ell_{\text{attack}}\left(f\big(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X};\,\theta=\mathrm{train}(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X},\mathbf{y})\big),\mathbf{y}\right)\right]\)
6:     Projection \(\mathbf{P}^{(t)}\leftarrow\Pi_{\|\mathbb{E}[\mathbf{A}+\mathbf{P}^{(t)}]-\mathbf{A}\|_{0}\leq 2\Delta}(\mathbf{P}^{(t)})\)
7: Sample \(\tilde{\mathbf{A}}\) s.t. \(\|\tilde{\mathbf{A}}-\mathbf{A}\|_{0}\leq 2\Delta\)
8: Return \(\tilde{\mathbf{A}}\)
```
**Algorithm A.1** Meta-PGD

In the first step of Meta-PGD, we initialize the perturbation (line 3). In contrast to Xu et al. [53]'s suggestion, we find that initializing the perturbation with the zero matrix can cause convergence issues. Hence, we alternatively initialize the perturbation with \(\tilde{\mathbf{A}}\) from an attack on a different model (see also lesson learned #8 in § 4). In each attack iteration, a gradient ascent step is performed on the relaxed perturbed adjacency matrix \(\tilde{\mathbf{A}}^{(t-1)}=\mathbf{A}+\mathbf{P}^{(t-1)}\) (line 5). To obtain the meta gradient through the training process, the training is unrolled. For example, with vanilla gradient descent for training \(f_{\theta}(\mathbf{A},\mathbf{X})=f(\mathbf{A},\mathbf{X};\theta)\), the meta gradient resolves to

\[\nabla_{\mathbf{P}^{(t-1)}}\left(\ell_{\text{attack}}\left[f\big(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X};\,\theta=\theta_{0}-\eta\sum\limits_{k=1}^{E_{\text{train}}}\nabla_{\theta_{k-1}}\ell_{\text{train}}[f(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X};\theta=\theta_{k-1}),\mathbf{y}]\big),\mathbf{y}\right]\right)\] (A.1)

with number of training epochs \(E_{\text{train}}\), fixed training learning rate \(\eta\), and parameters after (random) initialization \(\theta_{0}\).
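To make the unrolling in Eq. (A.1) concrete, below is a minimal Python sketch of the meta-gradient computation. It is an illustrative sketch under assumptions, not the exact pipeline of this paper: `model_fn(A, X, params)` is a hypothetical purely functional GNN, training uses plain gradient descent, and both losses are shown as cross-entropy even though stronger attack surrogates (e.g., the PM or TLM losses mentioned above) are used in practice.

```
# Minimal sketch (PyTorch assumed) of the meta gradient in Eq. (A.1).
# Training is unrolled with vanilla gradient descent so that autograd can
# differentiate the trained parameters w.r.t. the perturbation P.
import torch
import torch.nn.functional as F

def meta_gradient(A, X, y, model_fn, init_params, P, eta=0.1, train_epochs=100):
    P = P.detach().requires_grad_(True)
    A_pert = A + P                              # relaxed adjacency, kept in [0, 1]
    params = [p.clone() for p in init_params]   # theta_0
    for _ in range(train_epochs):               # unrolled training loop
        train_loss = F.cross_entropy(model_fn(A_pert, X, params), y)
        grads = torch.autograd.grad(train_loss, params, create_graph=True)
        params = [p - eta * g for p, g in zip(params, grads)]
    # attack loss on the *trained* parameters; ascend on this gradient (line 5)
    attack_loss = F.cross_entropy(model_fn(A_pert, X, params), y)
    return torch.autograd.grad(attack_loss, P)[0]
```

The `create_graph=True` flag keeps the computation graph of every parameter update alive, so the final `torch.autograd.grad` call can propagate back through all \(E_{\text{train}}\) updates into \(\mathbf{P}\).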
Notice that to obtain our variant of non-meta PGD, it suffices to replace the gradient computation in line 5 with \(\nabla_{\mathbf{P}^{(t-1)}}\left[\ell_{\text{attack}}(f_{\theta^{*}}(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X}),\mathbf{y})\right]\).

Thereafter, in line 6, the perturbation is projected such that the budget is obeyed in expectation, i.e., \(\Pi_{\|\mathbb{E}[\mathbf{A}+\mathbf{P}^{(t)}]-\mathbf{A}\|_{0}\leq 2\Delta}\). First, the projection clips \(\mathbf{A}+\mathbf{P}^{(t-1)}\) to be in \([0,1]\). If the budget is violated after clipping, we solve

\[\arg\min_{\hat{\mathbf{P}}^{(t)}}\|\hat{\mathbf{P}}^{(t)}-\mathbf{P}^{(t)}\|_{2}\qquad\text{s.t.}\quad\mathbf{A}+\hat{\mathbf{P}}^{(t)}\in[0,1]^{n\times n}\text{ and }\sum|\hat{\mathbf{P}}^{(t)}|\leq 2\Delta\] (A.2)

After the last iteration (line 7), each element of \(\mathbf{P}^{(t)}\) is interpreted as a probability and multiple perturbations are sampled accordingly. The strongest drawn perturbed adjacency matrix (in terms of attack loss) is chosen as \(\tilde{\mathbf{A}}\). Specifically, in contrast to [53], we sample \(K=100\) potential solutions that all obey the budget \(\Delta\) and then choose the one that maximizes the attack loss \(\ell_{\text{attack}}\).

**Local attacks.** For local attacks we only run evasion attacks and then transfer them to poisoning. This is common practice (e.g., see Zugner et al. [67] or Li et al. [34]). The attacks we use are _(1) FGA_, _(2) PGD_, _(3) Nettack_ [67], and a _(4) Greedy Brute Force_ attack. Nettack greedily flips the best edges considering a linearized GCN, whose weights are either specially trained or taken from the attacked defense. In contrast, in each iteration, our Greedy Brute Force attack flips the current worst-case edge for the attacked model. It determines the worst-case perturbation by evaluating the model for every single edge flip. Notice that all examined models use two propagation steps, so we only consider potential edges adjoining the target node or its neighbors4. Importantly, Greedy Brute Force is adaptive for any kind of model. Runtime-wise, the algorithm evaluates the attacked model \(\mathcal{O}(\Delta nd)\) times, with the number of nodes \(n\) and the degree of the target node \(d\). We provide pseudocode in Algorithm A.2 and a sketch of the procedure below.

Footnote 4: Due to GCN-like normalization (see § E), the three-hop neighbors need to be considered to be exhaustive. However, it is questionable if perturbing a neighbor three hops away is ever the strongest perturbation there is.

```
 1: Input: Target node \(i\), adjacency matrix \(\mathbf{A}\), node features \(\mathbf{X}\), labels \(\mathbf{y}\), GNN \(f_{\theta}(\cdot)\), loss \(\ell_{\text{attack}}\)
 2: Parameter: Budget \(\Delta\)
 3: Initialize \(\tilde{\mathbf{A}}^{(0)}=\mathbf{A}\)
 4: for \(t\in\{1,2,\ldots,\Delta\}\) do
 5:     for potential edge \(e\) adjoining \(i\) or any of \(i\)'s direct neighbors do
 6:         Flip edge \(\tilde{\mathbf{A}}^{(t)}\leftarrow\tilde{\mathbf{A}}^{(t-1)}\pm e\)
 7:         Remember best \(\tilde{\mathbf{A}}^{(t)}\) in terms of \(\ell_{\text{attack}}(f_{\theta^{*}}(\tilde{\mathbf{A}}^{(t)},\mathbf{X}),\mathbf{y})\)
 8:     Recover best \(\tilde{\mathbf{A}}^{(t)}\)
 9:     if node \(i\) is misclassified then
10:         Return \(\tilde{\mathbf{A}}^{(t)}\)
11: Return \(\tilde{\mathbf{A}}^{(\Delta)}\)
```
**Algorithm A.2** Greedy Brute Force
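The following is a minimal Python sketch of Algorithm A.2 on a dense adjacency matrix. The callables `model(A, X)` (per-node logits) and `attack_loss(logits, label)` are hypothetical placeholders; the sketch mirrors the greedy loop above rather than any particular reference implementation.

```
# Minimal sketch (PyTorch assumed) of the Greedy Brute Force local attack.
import torch

def greedy_brute_force(model, A, X, y, target, budget, attack_loss):
    A_pert = A.clone()
    neighbors = lambda: torch.nonzero(A_pert[target]).flatten().tolist()
    for _ in range(budget):
        best_loss, best_edge = -float("inf"), None
        # candidate edges adjoin the target or one of its current neighbors
        for u in [target] + neighbors():
            for v in range(A.shape[0]):
                if u == v:
                    continue
                A_pert[u, v] = A_pert[v, u] = 1 - A_pert[u, v]   # flip edge
                loss = attack_loss(model(A_pert, X)[target], y[target])
                if loss > best_loss:
                    best_loss, best_edge = loss, (u, v)
                A_pert[u, v] = A_pert[v, u] = 1 - A_pert[u, v]   # undo flip
        u, v = best_edge
        A_pert[u, v] = A_pert[v, u] = 1 - A_pert[u, v]           # keep best flip
        if model(A_pert, X)[target].argmax() != y[target]:       # early exit
            break
    return A_pert
```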
**Unnoticeability** typically serves as a proxy to ensure that the label of an instance (here node) has not changed. In the image domain, it is widely accepted that a sufficiently small perturbation of the input image w.r.t. an \(L_{p}\)-norm is unnoticeable (and similarly for other threat models such as rotation). For graphs, the whole subject of unnoticeability is more nuanced. The only constraint we use is the number of edge insertions/deletions, i.e., an \(L_{0}\)-ball around the clean adjacency matrix. The only additional unnoticeability constraint proposed in the literature compares the clean and perturbed graph under a power-law assumption on the node degrees [67]. However, we do not include such a constraint since (1) the degree distribution is only one (arbitrary) property to distinguish two graphs; (2) the degree distribution is a global property with an opaque relationship to the local class labels in node classification; and (3) as demonstrated in Zugner & Gunnemann [66], enforcing an indistinguishable degree distribution only has a negligible influence on attack efficacy, i.e., their gradient-based/adaptive attack conveniently circumvents this measure. Thus, we argue that enforcing such a constraint is similar to an additional (weak) defense measure and is not the focus of this work. Finally, since many defense (and attack) works in the literature considering node classification (including the ones we study) also only use an \(L_{0}\)-ball constraint as a proxy for unnoticeability, we do the same for improved consistency. Out of scope are also other domains, like combinatorial optimization, where unnoticeability is not required since the true label of the perturbed instance is known [18].

## Appendix B Defense taxonomy

Next, we give further details behind our reasoning on how to categorize defenses for GNNs. Our taxonomy extends and largely follows Gunnemann [21]'s. The three main categories are _improving the graph_ (§ B.1), _improving the training_ (§ B.2), and _improving the architecture_ (§ B.3). We assign each defense to the category that fits best, even though some defenses additionally include ideas fitting into other categories as well. For the assignment of defenses see Table 1.

### B.1 Improving the graph

With this category, we refer to all kinds of preprocessing of the graph. Alternatively, some approaches make the graph learnable with the goal of improved robustness. In summary, this category addresses changes that take place _prior_ to the GNN (i.e., any message passing). We further distinguish _(1) unsupervised_ and _(2) supervised_ approaches.

**Unsupervised.** Any improvements that are not entangled with a learning objective, i.e., pure preprocessing, usually arising from clues found in the node features and graph structure. For example, Jaccard-GCN [48] filters out edges based on the Jaccard similarity of node features, while SVD-GCN [12] performs a low-rank approximation to filter out high-frequency perturbations. Most other approaches from this category exploit clues from features and structure simultaneously.

**Supervised.** These graph improvements are entangled with the learning objective by making the adjacency matrix learnable, often accompanied by additional regularization terms that introduce expert assumptions about robustness. For example, ProGNN [30] treats the adjacency matrix like a learnable parameter and adds loss terms s.t. it remains close to the original adjacency matrix and exhibits properties which are assumed about clean graphs, like low-rankness.

### B.2 Improving the training

These approaches improve training - without changing the architecture - s.t.
the learned parameters \(\theta^{*}\) of the GNN exhibit improved robustness. In effect, the new training "nudges" a regular GNN towards being more robust. We distinguish _(1) robust training_ and _(2) further training principles_.

**Robust training.** Alternative training schemes and losses which reward the correct classification of synthetic adversarial perturbations of the training data. With this category, Gunnemann [21] targets both straightforward adversarial training and losses stemming from certificates (i.e., improving certifiable robustness). Neither approach is interesting to us: the former is discussed in § C, and the latter targets provable robustness, which does not lend itself to empirical evaluation.

**Further training principles.** This category is distinct from robust training due to the lack of a clear mathematical definition of the training objective. It mostly captures augmentations [15; 29; 39; 42; 61] or alternative training schemes [5; 11; 55; 64] that encourage robustness. A simple example of such an approach is to pre-train the GNN weights on perturbed graphs [42]. Another recurring theme is to use multiple models during training and then, e.g., enforce consistency among them [5].

### B.3 Improving the architecture

Even though there are some exceptions (see sub-category _(2) miscellaneous_), the recurring theme in this category is to somehow weight down the influence of some edges adaptively for each layer or message passing aggregation. We refer to this type of improved architecture with _(1) adaptively weighting edges_. We further distinguish between approaches that are _(a) rule-based_, _(b) probabilistic_, or use _(c) robust aggregation_. _Rule-based_ approaches typically use some metric [31; 58], alternative message passing [36; 37], or an auxiliary MLP [57] to filter out alleged adversarial edges. _Probabilistic_ approaches either work with distributions in the latent space [63], are built upon probabilistic principles like Bayesian uncertainty quantification [13], or integrate sampling into the architecture and hence apply it also at inference time [8; 24; 25; 38]. _Robust aggregation_ defenses replace the message passing aggregation (typically a mean) with a more robust equivalent such as a trimmed mean, median, or soft median [7; 17]. In relation to the trimmed mean, we also include in this category other related approaches that come with some guarantees based on their aggregation scheme, e.g., Wang et al. [47].

## Appendix C On adversarial training defenses

The most basic form of adversarial training for structure perturbations aims to solve:

\[\min_{\theta}\max_{\mathbf{A}^{\prime}\in\Phi(\mathbf{A})}\ell(f_{\theta}(\mathbf{A}^{\prime},\mathbf{X}),\mathbf{y})\] (C.1)
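A minimal sketch of the alternating optimization behind Eq. (C.1) follows (Python/PyTorch assumed). Here, `pgd_attack` is a hypothetical inner-maximization routine, e.g., the PGD attack of § A restricted to the budget; it is a placeholder, not an API of any particular library.

```
# Minimal sketch of Eq. (C.1): alternate between the inner maximization
# (finding A' in Phi(A) within the budget) and the outer minimization
# (a training step of theta on the perturbed graph).
import torch
import torch.nn.functional as F

def adversarial_training(model, optimizer, A, X, y, budget, epochs=200):
    for _ in range(epochs):
        A_adv = pgd_attack(model, A, X, y, budget)   # placeholder inner max
        optimizer.zero_grad()
        loss = F.cross_entropy(model(A_adv, X), y)   # outer min over theta
        loss.backward()
        optimizer.step()
    return model
```

Note that for poisoning, \(\mathbf{A}\) itself would already be perturbed, which is exactly the complication raised in the first reason below.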
Similarly to [44, 1, 4], we exclude defenses that build on adversarial training in our study for three reasons. First, we observe that adversarial training requires knowing the clean \(\mathbf{A}\). However, for poisoning, we would need to substitute \(\mathbf{A}\) with an adversarially perturbed adjacency matrix \(\tilde{\mathbf{A}}\). In this case, adversarial training aims to enforce adversarial generalization \(\mathbf{A}^{\prime}\in\Phi(\tilde{\mathbf{A}})\) for the adversarially perturbed adjacency matrix \(\tilde{\mathbf{A}}\) - potentially even reinforcing the poisoning attack. Second, an adaptive poisoning attack on adversarial training is very expensive, as we need to unfold many adversarial attacks for a single training. Thus, designing truly adaptive poisoning attacks requires a considerable amount of resources, and _scaling_ these attacks to such complicated training schemes is not the main objective of this work. Third, adversarial training for structure perturbations on GNNs seems to be an unsolved question. So far, the robustness gains come from additional and orthogonal tricks such as self-training [53]. Hence, adversarial training for structure perturbations requires an entire paper on its own.

## Appendix D On defenses against feature perturbations

As introduced in § 2, attacks may perturb the adjacency matrix \(\mathbf{A}\), the feature matrix \(\mathbf{X}\), or both. However, during our survey we found that few defenses tackle feature perturbations. Similarly, 6 out of the 7 defenses chosen by us mainly based on general popularity turn out to not consciously defend against feature perturbations. The only exception is SVD-GCN [12], which also applies its low-rank approximation to the binary feature matrix. However, the authors do not report robustness under feature-only attacks; instead, they only consider mixed structure and feature attacks found by Nettack. Given the strong bias of Nettack towards structure perturbations, we argue that their experimental results do not confirm feature robustness. Correspondingly, in preliminary experiments we were not able to achieve considerable robustness gains of SVD-GCN compared to an undefended GCN - even with non-adaptive feature perturbations. If a non-adaptive attack is strong enough, there is not much merit in applying an adaptive attack.

To reiterate, due to the apparent scarcity of defenses apt against feature attacks, we decided to focus our efforts on structure attacks and defenses. However, new defenses considering feature perturbations should study robustness in the face of adaptive attacks - similarly to our work. In the following, we give some important hints for adaptive attacks using feature perturbations. We leave attacks that jointly consider feature and structure perturbations for future work due to the manifold open challenges, e.g., balancing structure and feature perturbations in the budget quantity.

**Baseline.** To gauge the robustness of defenses w.r.t. global attacks, we introduce the RAUC metric, which employs the accuracy of an MLP - which is perfectly robust w.r.t. structure perturbations - to determine the maximally sensible budget to include in the summary. As MLPs are, however, vulnerable to feature attacks, a different baseline model is required for this new setting. We propose to resolve this issue by using a label propagation approach, which is oblivious to the node features and hence perfectly robust w.r.t. feature perturbations.

**Perturbations.** The formulation of the set of admissible perturbations depends on what modality the data represents, which may differ between node features and graph edges. Convenient choices for continuous features are \(L_{p}\)-norms; in other cases, more complicated formulations are more appropriate. Accordingly, one has to choose an appropriate constrained optimization scheme.

## Appendix E Examined adversarial defenses

In this section, we portray each defense and how we adapted the base attacks to each one. We refer to Table H.1 for the used hyperparameter values for each defense. We give the used attack parameters for a GCN below and refer to the provided code for the other defenses.

**GCN.** We employ an undefended GCN [33] as our baseline.
A GCN first adds self loops to the adjacency matrix \(\mathbf{A}\) and subsequently applies GCN-normalization, thereby obtaining \(\mathbf{A}^{\prime}=(\mathbf{D}+\mathbf{I})^{-\frac{1}{2}}(\mathbf{A}+\mathbf{ I})(\mathbf{D}+\mathbf{I})^{-\frac{1}{2}}\) with the diagonal degree matrix \(\mathbf{D}\in\mathbb{N}^{n\times n}\). Then, in each GCN layer it updates the hidden states \(\mathbf{H}^{(l)}=\mathrm{dropout}(\sigma(\mathbf{A}^{\prime}\mathbf{H}^{(l-1)} \mathbf{W}^{(l-1)}+\mathbf{b}^{(l-1)}))\) where \(\mathbf{H}^{(0)}=\mathbf{X}\). We use the non-linear ReLU activation for intermediate layers. Dropout is deactivated in the last layer and we refer to the output before softmax activation as logits. We use Adam [32] to learn the model's parameters. **Attack.** We do not require special tricks since the GCN is fully differentiable and does not come with defensive measures to consider. In fact, the off-the-shelf attacks we employ are tailored to a GCN. For PGD, we use \(E=200\) iterations, \(K=100\) samples, and a base learning rate of 0.1. For Meta-PGD, we only lower the base learning rate to 0.01 and add gradient clipping to 1 (w.r.t. global \(L_{2}\)-norm). For Metattack with SGD instead of Adam for training the GCN, we use an SGD learning rate of 1 and restrict the training to \(E_{\text{train}}=100\) epochs. ### Jaccard-GCN **Defense.** Additionally to a GCN, Jaccard-GCN [48] preprocesses the adjacency matrix. It computes the Jaccard coefficient of the binarized features for the pair of nodes of every edge, i.e., \(\mathbf{J}_{ij}=\frac{\mathbf{X}_{i}\mathbf{X}_{j}}{\min\{\mathbf{X}_{i}+ \mathbf{X}_{j},1\}}\). Then edges are dropped where \(\mathbf{J}_{ij}\leq\epsilon\). **Adaptive attack.** We do not need to adapt gradient-based attacks as the gradient is equal to zero for dropped edges. Straightforwardly, we adapt Nettack to only consider non-dropped edges. Analogously, we ignore these edges in the Greedy Brute Force attack for increased efficiency. ### Svd-Gcn **Defense.** SVD-GCN [12] preprocesses the adjacency matrix with a low-rank approximation (LRA) for a fixed rank \(r\), utilizing the Singular Value Decomposition (SVD) \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\approx\mathbf{U}_{r} \mathbf{\Sigma}_{r}\mathbf{V}_{r}^{\top}=\mathbf{A}_{r}\). Note that the LRA is performed on \(\mathbf{A}\) before adding self-loops and GCN-normalization (see above). Thereafter, the dense \(\mathbf{A}_{r}\) is passed to the GCN as usual. Since \(\mathbf{A}\) is symmetric and positive semi-definite, we interchangeably refer to the singular values/vectors also as eigenvalues/eigenvectors. **Adaptive attack.** Unfortunately, the process of determining the singular vectors \(\mathbf{U}_{r}\) and \(\mathbf{V}_{r}\) is highly susceptible to small perturbations, and so is its gradient. Thus, we circumvent the need of differentiating the LRA. We now explain the approach from a geometrical perspective. Each row of \(\mathbf{A}\) (or interchangeably column as \(\mathbf{A}\) is symmetric) is interpreted as coordinates of a high-dimensional point. The \(r\) most significant eigenvectors of \(\mathbf{A}\) span an \(r\)-dimensional subspace, onto which the points are projected by the LRA. Adding or removing an adversarial edge \((i,j)\) corresponds to moving the point \(\mathbf{A}_{i}\) along dimension \(j\), i.e., \(\mathbf{A}_{i}\pm\mathbf{e}_{j}\) (vice-versa for \(\mathbf{A}_{j}\)). 
As hinted at in § 4, the \(r\) most significant eigenvectors of \(\mathbf{A}\) turn out to usually have few large components. Thus, the relevant subspace is mostly aligned with only few dimensions. Changes along the highest-valued eigenvectors are consequently preserved by the LRA. To quantify how much exactly such a movement along a dimension \(j\), i.e., \(\mathbf{e}_{j}\), is preserved, we project the movement itself onto the subspace and extract the projected vector's \(j\)-th component. More formally, we denote the projection matrix onto the subspace as \(\mathbf{P}=\sum_{k=1}^{r}\mathbf{v}_{k}\mathbf{v}_{k}^{T}\), where \(\mathbf{v}_{k}\) are the \(r\) most significant eigenvectors of \(\mathbf{A}\). We now score each dimension \(j\) with \((\mathbf{P}\mathbf{e}_{j})_{j}=\mathbf{P}_{jj}\). Since the adjacency matrix is symmetric and rows and columns are hence exchangeable, we then symmetrize the scores as \(\mathbf{W}_{ij}=(\mathbf{P}_{ii}\mathbf{P}_{jj})^{\nicefrac{1}{2}}\). Finally, we decompose the perturbed adjacency matrix \(\tilde{\mathbf{A}}=\mathbf{A}+\delta\mathbf{A}\) and, thus, only need gradients for \(\delta\mathbf{A}\). Using the approach sketched above, we now replace \(\mathrm{LRA}(\mathbf{A}+\delta\mathbf{A})\approx\mathrm{LRA}(\mathbf{A})+\delta\mathbf{A}\circ\mathbf{W}\). The weights \(\mathbf{W}\) can also be incorporated into the Greedy Brute Force attack by dropping edges with weight \(<0.2\) and, for efficient early stopping, sorting the edges to try in order of descending weight. Similarly, Nettack's score function \(s_{\text{struct}}(i,j)\) - which attains positive and negative values, while \(\mathbf{W}\) is positive - can be wrapped to \(s^{\prime}_{\text{struct}}(i,j)=\log(\exp(s_{\text{struct}}(i,j))\circ\mathbf{W})=s_{\text{struct}}(i,j)+\log\mathbf{W}_{ij}\). Note that we assume that the direction of the eigenvectors remains roughly equal after perturbing the adjacency matrix. In practice, we find this assumption to be true. Intuitively, a change along the dominant eigenvectors should even reinforce their significance.
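A minimal NumPy sketch of these LRA gradient weights follows. It assumes, as the text above does, that \(\mathbf{A}\) is symmetric and positive semi-definite so that eigenvectors and singular vectors coincide; the rank-\(r\) choice and the weighting rule follow the description above, not any reference code.

```
# Minimal sketch (NumPy assumed) of the weights W_ij = (P_ii * P_jj)^(1/2)
# used to push gradients "through" the otherwise unstable low-rank approximation.
import numpy as np

def lra_gradient_weights(A, rank):
    # eigendecomposition of the symmetric A; under the PSD premise of the text,
    # the largest eigenvalues (last columns of eigh's output) are the most
    # significant singular values
    _, vecs = np.linalg.eigh(A)
    V = vecs[:, -rank:]                 # r most significant eigenvectors
    p_diag = (V ** 2).sum(axis=1)       # P_jj of the projection P = V V^T
    return np.sqrt(np.outer(p_diag, p_diag))

# During the attack, elementwise: LRA(A + dA) ≈ LRA(A) + dA * lra_gradient_weights(A, r)
```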
### RGCN

**Defense.** The implementations of R(obust)GCN provided by the authors5 and in the widespread DeepRobust [35] library6 are consistent with each other, but diverge slightly from the paper [63]. We use and now present RGCN according to those reference implementations. Principally, RGCN models the hidden states as Gaussian vectors with diagonal variance instead of sharp vectors. In addition to GCN's \(\mathbf{A}^{\prime}\), a second \(\mathbf{A}^{\prime\prime}=(\mathbf{D}+\mathbf{I})^{-1}(\mathbf{A}+\mathbf{I})(\mathbf{D}+\mathbf{I})^{-1}\) is prepared to propagate the variances. The mean and variance of this hidden Gaussian distribution are initialized as \(\mathbf{M}^{(0)}=\mathbf{V}^{(0)}=\mathbf{X}\). Each layer first computes intermediate distributions given by \(\hat{\mathbf{M}}^{(l)}=\mathrm{elu}(\mathrm{dropout}(\mathbf{M}^{(l-1)})\mathbf{W}_{M}^{(l-1)})\) and \(\hat{\mathbf{V}}^{(l)}=\mathrm{relu}(\mathrm{dropout}(\mathbf{V}^{(l-1)})\mathbf{W}_{V}^{(l-1)})\). Then, attention coefficients \(\boldsymbol{\alpha}^{(l)}=e^{-\gamma\hat{\mathbf{V}}^{(l)}}\) are calculated with the aim to subdue high-variance dimensions (where the exponentiation is element-wise and \(\gamma\) is a hyperparameter). The final distributions are obtained with \(\mathbf{M}^{(l)}=\mathbf{A}^{\prime}(\hat{\mathbf{M}}^{(l)}\circ\boldsymbol{\alpha}^{(l)})\), and the variances are propagated analogously via \(\mathbf{A}^{\prime\prime}\). Note the absence of bias terms.

After the last layer, point estimates are sampled from the distributions via the reparameterization trick, i.e., scalars are sampled from a standard Gaussian and arranged in a matrix \(\mathbf{R}\). These samples are then used to obtain the logits via \(\mathbf{M}^{(L)}+\mathbf{R}\circ(\mathbf{V}^{(L)}+\epsilon)^{\frac{1}{2}}\) (where the square root applies element-wise and \(\epsilon\) is a hyperparameter). Adam is the default optimizer. The loss is extended with the regularizer \(\beta\sum_{i}\mathrm{KL}(\mathcal{N}(\hat{\mathbf{M}}_{i}^{(1)},\mathrm{diag}(\hat{\mathbf{V}}_{i}^{(1)}))\,\|\,\mathcal{N}(\mathbf{0},\mathbf{I}))\) (where \(\beta\) is a hyperparameter).

Footnote 5: [https://github.com/ZW-ZHANG/RobustGCN](https://github.com/ZW-ZHANG/RobustGCN)

Footnote 6: [https://github.com/DSE-MSU/DeepRobust](https://github.com/DSE-MSU/DeepRobust)

**Adaptive attack.** A direct gradient attack suffices for a strong adaptive attack. Only when unrolling the training procedure for Metattack and Meta-PGD do we increase the hyperparameter \(\epsilon\) from \(10^{-8}\) to \(10^{-2}\) to retain numerical stability.

### ProGNN

**Defense.** We use and present Pro(perty)GNN [30] exactly following the implementation provided by the authors in their DeepRobust [35] library7. ProGNN learns an alternative adjacency matrix \(\mathbf{S}\) that is initialized with \(\mathbf{A}\). A regular GCN - which, as usual, adds self-loops and applies GCN-normalization - is trained using \(\mathbf{S}\), which is simultaneously updated in every \(\tau\)-th epoch. For that, first a gradient descent step is performed on \(\mathbf{S}\) with learning rate \(\eta\) and momentum \(\mu\) towards minimizing the principal training loss alongside two regularizers that measure deviation \(\beta_{1}\|\mathbf{S}-\mathbf{A}\|_{F}^{2}\) and feature smoothness \(\frac{\beta_{2}}{2}\sum_{i,j}\mathbf{S}_{ij}\|\frac{\mathbf{X}_{i}}{\sqrt{d_{i}}}-\frac{\mathbf{X}_{j}}{\sqrt{d_{j}}}\|^{2}\) (where \(d_{i}=\sum_{j}\mathbf{S}_{ij}+10^{-3}\)). Next, the singular value decomposition \(\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{T}\) of the updated \(\mathbf{S}\) is computed, and \(\mathbf{S}\) is again updated to be \(\mathbf{U}\max(0,\boldsymbol{\Sigma}-\eta\beta_{3})\mathbf{V}^{T}\) to promote low-rankness. Thereafter, \(\mathbf{S}\) is again updated to be \(\mathrm{sgn}(\mathbf{S})\circ\max(0,|\mathbf{S}|-\eta\beta_{4})\) to promote sparsity. Finally, the epoch's resulting \(\mathbf{S}\) is obtained by clamping its elements between 0 and 1.

Footnote 7: [https://github.com/DSE-MSU/DeepRobust](https://github.com/DSE-MSU/DeepRobust)

**Adaptive attack.** Designing an adaptive attack for ProGNN proved to be a challenging endeavor. We describe the collection of tricks in § 4's Example 2. A sketch of the per-epoch update of \(\mathbf{S}\) follows below.
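The following is a minimal Python sketch of ProGNN's alternating update of \(\mathbf{S}\). Momentum and the interleaved training of the GCN weights are omitted, and `train_loss_fn` is a hypothetical closure containing the principal loss plus the \(\beta_{1}\)/\(\beta_{2}\) regularizers; it illustrates the update rules above, not the DeepRobust code.

```
# Minimal sketch (PyTorch assumed) of ProGNN's per-epoch update of the learned
# adjacency S: gradient step on the joint loss, then the proximal operators of
# the nuclear-norm (low-rank) and l1 (sparsity) penalties, then clamping.
import torch

def prognn_update_S(S, train_loss_fn, eta, beta3, beta4):
    grad = torch.autograd.grad(train_loss_fn(S), S)[0]
    S = (S - eta * grad).detach()                     # gradient step (momentum omitted)
    U, sig, Vh = torch.linalg.svd(S)                  # prox of the nuclear norm:
    S = U @ torch.diag(torch.clamp(sig - eta * beta3, min=0)) @ Vh
    S = torch.sign(S) * torch.clamp(S.abs() - eta * beta4, min=0)  # prox of l1
    return S.clamp(0, 1).requires_grad_(True)         # keep entries in [0, 1]
```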
### GNNGuard

**Defense.** We closely follow the authors' implementation8, since it deviates from the formal definitions in the paper [58]. GNNGuard adopts a regular GCN and, before each layer, it adaptively weights down alleged adversarial edges. Thus, each layer has a unique propagation matrix \(\mathbf{A}^{(l)}\) that is used instead of \(\mathbf{A}^{\prime}\).

Footnote 8: [https://github.com/mims-harvard/GNNGuard](https://github.com/mims-harvard/GNNGuard)

GNNGuard's rule-based edge reweighting can be clustered into four consecutive steps. (1) The edges are reweighted based on the pair-wise cosine similarity \(\mathbf{C}^{(l)}_{ij}=\frac{\mathbf{H}^{(l-1)}_{i}\cdot\mathbf{H}^{(l-1)}_{j}}{\|\mathbf{H}^{(l-1)}_{i}\|\,\|\mathbf{H}^{(l-1)}_{j}\|}\) according to \(\mathbf{S}^{(l)}=\mathbf{A}\circ\mathbf{C}^{(l)}\circ\mathbb{I}[\mathbf{C}^{(l)}\geq 0.1]\), where edges with too dissimilar node embeddings are removed (see the Iverson bracket \(\mathbb{I}[\mathbf{C}^{(l)}\geq 0.1]\)). Then, (2) the matrix is rescaled row-wise via \(\boldsymbol{\Gamma}^{(l)}_{ij}=\mathbf{S}^{(l)}_{ij}/\mathbf{s}^{(l)}_{i}\) with \(\mathbf{s}^{(l)}_{i}=\sum_{j}\mathbf{S}^{(l)}_{ij}\). For stability, if \(\mathbf{s}^{(l)}_{i}<\epsilon\), \(\mathbf{s}^{(l)}_{i}\) is set to 1 (here \(\epsilon\) is a small constant). Next, (3) self-loops are added and \(\boldsymbol{\Gamma}^{(l)}\) is non-linearly transformed according to \(\hat{\boldsymbol{\Gamma}}^{(l)}=\exp_{\neq 0}\big(\boldsymbol{\Gamma}^{(l)}+\mathrm{diag}(1/(1+\mathbf{d}^{(l)}))\big)\), where \(\exp_{\neq 0}\) only operates on nonzero elements and \(\mathbf{d}^{(l)}_{i}=\|\boldsymbol{\Gamma}^{(l)}_{i}\|_{0}\) is the row-wise number of nonzero entries. Last, (4) the result is smoothed over the layers with \(\boldsymbol{\Omega}^{(l)}=\sigma(\rho)\boldsymbol{\Omega}^{(l-1)}+(1-\sigma(\rho))\hat{\boldsymbol{\Gamma}}^{(l)}\) with learnable parameter \(\rho\) and sigmoid function \(\sigma(\cdot)\). The resulting reweighted adjacency matrix \(\boldsymbol{\Omega}^{(l)}\) is then GCN-normalized (without adding self-loops) and passed on to a GCN layer. Note that steps (1) to (3) are excluded from back-propagation during training. When comparing with the GNNGuard paper, one notices that, among other deviations, we have omitted learnable edge pruning because it is disabled in the reference implementation.

**Adaptive attack.** The hyperparameter \(\epsilon\) must be increased from \(10^{-6}\) to \(10^{-2}\) during the attack to retain numerical stability. In contrast to the reference implementation, but as stated above, it is important to place the hard filtering step \(\mathbb{I}[\mathbf{C}^{(l)}\geq 0.1]\) for \(\mathbf{S}^{(l)}\) s.t. the gradient calculation w.r.t. \(\mathbf{A}\) is not suppressed for these entries.
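A minimal Python sketch of steps (1)-(3) on a dense adjacency matrix follows. The layer smoothing of step (4) and all training-time details are omitted; this illustrates the reweighting rules above, not the reference implementation.

```
# Minimal sketch (PyTorch assumed) of GNNGuard's edge reweighting, steps (1)-(3).
import torch

def gnnguard_reweight(A, H, eps=1e-2):
    Hn = torch.nn.functional.normalize(H, dim=1)
    C = Hn @ Hn.T                                  # pair-wise cosine similarity
    S = A * C * (C >= 0.1)                         # step (1): prune dissimilar edges
    s = S.sum(dim=1)
    s = torch.where(s < eps, torch.ones_like(s), s)
    Gamma = S / s[:, None]                         # step (2): row-wise rescaling
    d = (Gamma != 0).sum(dim=1)                    # row-wise nonzero counts
    Gamma = Gamma + torch.diag(1.0 / (1.0 + d))    # step (3): weighted self-loops
    mask = Gamma != 0
    return torch.where(mask, Gamma.exp(), Gamma)   # exp only on nonzero entries
```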
### GRAND

**Defense.** The Graph Random Neural Network (GRAND) [15] model is the only defense from our selection that is not based on a GCN. First, \(\mathbf{A}\) is endowed with self-loops and GCN-normalized to obtain \(\mathbf{A}^{\prime}\). Also, each row of \(\mathbf{X}\) is \(l_{1}\)-normalized, yielding \(\mathbf{X}^{\prime}\). Next, rows from \(\mathbf{X}^{\prime}\) are randomly dropped with probability \(\delta\) during training to generate a random augmentation, and \(\mathbf{X}^{\prime}\) is scaled by \(1-\delta\) during inference to compensate, thereby obtaining \(\hat{\mathbf{X}}\). Those preprocessed node features are then propagated multiple times along the graph to get \(\overline{\mathbf{X}}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{A}^{\prime k}\hat{\mathbf{X}}\). Finally, dropout is applied once to \(\overline{\mathbf{X}}\), and the result is plugged into a 2-layer MLP with dropout and ReLU activation to obtain class probabilities \(\mathbf{Z}\).

The authors also propose an alternative architecture using a GCN instead of an MLP; however, we do not explore this option since the MLP version is superior according to their own results. GRAND is trained with Adam. The training loss comprises the mean of the cross-entropy losses of \(S\) model evaluations, thereby incorporating multiple random augmentations. Additionally, a consistency regularizer is added to enforce similar class probabilities across all evaluations. More formally, first the probabilities are averaged across all evaluations: \(\overline{\mathbf{Z}}=\frac{1}{S}\sum_{s=1}^{S}\mathbf{Z}^{(s)}\). Next, each node's categorical distribution is sharpened according to a temperature hyperparameter \(T\), i.e., \(\overline{\mathbf{Z}}^{\prime}_{ij}=\overline{\mathbf{Z}}_{ij}^{\nicefrac{1}{T}}/\sum_{c}\overline{\mathbf{Z}}_{ic}^{\nicefrac{1}{T}}\). The final regularizer penalizes the distance between the class probabilities and the sharpened averaged distributions, namely \(\frac{\lambda}{S}\sum_{s=1}^{S}\|\mathbf{Z}^{(s)}-\overline{\mathbf{Z}}^{\prime}\|_{F}^{2}\) (with regularization weight \(\lambda\)).

**Adaptive attack.** When unrolling the training procedure for Metattack and Meta-PGD, to reduce the memory footprint, we reduce the number of random augmentations per epoch to 1, and we use a manual gradient calculation for the propagation operation. We also initialize Meta-PGD with a strong perturbation found by Meta-PGD on ProGNN. Otherwise, the attack has issues finding a perturbation with high loss; it presumably stalls in a local optimum. It is surprising that "only" initializing from GCN instead of ProGNN does not give a satisfyingly strong attack. Finally, we use the same random seed for every iteration of Metattack and Meta-PGD, as otherwise the constantly changing random graph augmentations make the optimization very noisy.

### Soft-Median-GDC

**Defense.** Soft-Median-GDC [17] deviates in two ways from a GCN: (1) it uses Personalized PageRank (PPR) with restart probability \(\alpha=0.15\) to further preprocess the adjacency matrix after adding self-loops and applying GCN-normalization. The result is then sparsified using a row-wise top-\(k\) operation (\(k=64\)). (2) The message passing aggregation is replaced with a robust estimator called Soft-Median. From the perspective of node \(i\), a GCN uses the message passing aggregation \(\mathbf{H}_{i}^{(l)}=\mathbf{A}_{i}\mathbf{H}^{(l-1)}\), which can be interpreted as a weighted mean/sum. In Soft-Median-GDC, the "weights" \(\mathbf{A}_{i}\) are replaced with a scaled version of \(\mathbf{A}_{i}\circ\operatorname{softmax}(-\mathbf{c}/(\tau\sqrt{d}))\), with temperature \(\tau\) and hidden dimension \(d\). Here, the vector \(\mathbf{c}\) holds the distances between the hidden embeddings of the neighboring nodes and the neighborhood-specific weighted dimension-wise median: \(\mathbf{c}_{j}=\|\operatorname{Median}(\mathbf{A}_{i},\mathbf{H}^{(l-1)})-\mathbf{H}_{j}^{(l-1)}\|\). To keep the scale, these weights are rescaled s.t. they sum up to \(\sum\mathbf{A}_{i}\).

**Adaptive attack.** During gradient-based attacks, we adjust the \(\mathbf{c}\) of every node s.t. it now captures the distance to all other nodes, not only neighbors. This of course modifies the values of \(\mathbf{c}\), but is necessary to obtain a nonzero gradient w.r.t. all candidate edges. We initialize PGD with a strong perturbation found by a similar attack on GCN, and initialize Meta-PGD with a perturbation from a similar attack on ProGNN (as with GRAND, using an attack against GCN as a base would be insufficient here).
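A minimal Python sketch of the Soft-Median aggregation for one layer follows (dense adjacency for readability). As a simplifying assumption, the neighborhood-specific weighted median \(\operatorname{Median}(\mathbf{A}_i,\mathbf{H})\) is replaced by a global unweighted dimension-wise median, so this illustrates the weighting scheme above rather than the exact defense.

```
# Minimal sketch (PyTorch assumed) of the Soft-Median message passing:
# neighbors far from the (here: unweighted) dimension-wise median receive
# exponentially down-weighted aggregation weights.
import torch

def soft_median_aggregate(A, H, tau=1.0):
    n, d = H.shape
    med = H.median(dim=0).values                       # stand-in for Median(A_i, H)
    c = torch.norm(H - med, dim=1)                     # distance of each node to the median
    out = torch.empty_like(H)
    for i in range(n):
        s = A[i] * torch.softmax(-c / (tau * d ** 0.5), dim=0)
        s = s * (A[i].sum() / s.sum().clamp_min(1e-12))   # keep the scale of A_i
        out[i] = s @ H
    return out
```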
## Appendix F Evaluation of adaptive attacks

In Table F.1, we summarize the variants of the datasets we use, both of which we have precisely extracted from Nettack's code9. In Fig. F.1, we complement Fig. 2 and compare the (R)AUC of all defenses on Citeseer. The robustness estimates for the defenses on Citeseer are also much lower than originally reported. For completeness, we give absolute envelope curve plots for all settings and datasets as well as for higher budgets in Fig. F.2 and Fig. F.3 (compare with Fig. 4 and Fig. 5).

Footnote 9: [https://github.com/danielzuegner/nettack](https://github.com/danielzuegner/nettack)

Figure F.2: Absolute variant of Fig. 4, showing relative budgets up to 15%.

Figure F.3: Absolute variant of Fig. 5, showing relative budgets up to 200%.

## Appendix G Ensemble transferability study

In Fig. 8, we transfer attacks found on an _individual_ model to other models. It is natural to also assess the strength of transfer attacks supplied by _ensembles_ of models. In Fig. G.1, we address this question for 2-ensembles. For poisoning, the combination of RGCN and ProGNN turns out to be (nearly) the strongest in all cases, which is reasonable since both already form strong individual transfer attacks, as is evident in Fig. 8. For evasion, the differences are more subtle. We also investigate 3-ensembles, but omit the plots due to their size. For poisoning, RGCN and ProGNN, now combined with Soft-Median-GDC, remain the strongest transfer source, yet the improvement over the 2-ensemble is marginal. For evasion, there is still no clear winner.

## Appendix H GCN and defense hyperparameters: original vs. tuned for adaptive attacks

To allow for the fairest comparison possible, we tuned the hyperparameters for each model (including the GCN) towards maximizing both clean accuracy and adversarial robustness on a single random data split. In Table H.1, we list all hyperparameter configurations. While we cannot run an exhaustive search over all hyperparameter settings, we report substantial gains for most defenses and the GCN in Fig. H.1. The only exceptions are GRAND, Soft-Median-GDC on Cora ML, and GNNGuard. For GRAND, we do not report results for the default hyperparameters, as they did not yield satisfactory clean accuracy. Moreover, for Soft-Median-GDC on Cora ML and GNNGuard, we were not able to substantially improve over the default hyperparameters.

For the GCN, tuning is important to ensure that we have a fair and equally-well tuned baseline. A GCN is the natural baseline since most defense methods propose slight modifications of a GCN or additional steps to improve the robustness. For the defenses, tuning is vital since they were originally tuned w.r.t. non-adaptive attacks. In any case, the tuning should counterbalance slight variations in the setup.

As stated in the introduction, each attack only provides an upper bound for the actual adversarial robustness of a model (with fixed hyperparameters). A future attack of increased efficacy might lead to a tighter estimate. Thus, when we empirically compare the defenses to a GCN, we only compare upper bounds of the respective actual robustness. However, we attack the GCN with state-of-the-art approaches that were developed by multiple researchers specifically for a GCN. Even though we also tune the parameters of the adaptive attacks, we argue that the robustness estimate for a GCN is likely tighter than our robustness estimate for the defenses.
In summary, the tuning of hyperparameters is necessary so that we can fairly compare the robustness of multiple models, even though we only compare upper bounds of the true robustness. Figure H.1: Each defense’s clean accuracy vs. (R)AUC values of the strongest attacks, akin to Fig. 6. Muted (semi-transparent) colors represent untuned defenses (except for Soft-Median-GDC on Cora ML and GNNGuard), solid colors denote tuned defenses, and lines connect the two. Our tuned defenses are almost always better than untuned variants w.r.t. both clean accuracy and robustness.
2309.05818
Rice Plant Disease Detection and Diagnosis using Deep Convolutional Neural Networks and Multispectral Imaging
Rice is considered a strategic crop in Egypt as it is regularly consumed in the Egyptian people's diet. Even though Egypt is the highest rice producer in Africa with a share of 6 million tons per year, it still imports rice to satisfy its local needs due to production loss, especially due to rice disease. Rice blast disease is responsible for a 30% loss in rice production worldwide. Therefore, it is crucial to limit yield damage by detecting rice crop diseases in their early stages. This paper introduces a public multispectral and RGB images dataset and a deep learning pipeline for rice plant disease detection using multi-modal data. The collected multispectral images consist of Red, Green and Near-Infrared channels, and we show that using multispectral along with RGB channels as input achieves a higher F1 score compared to using RGB input only.
Yara Ali Alnaggar, Ahmad Sebaq, Karim Amer, ElSayed Naeem, Mohamed Elhelw
2023-09-11T20:51:21Z
http://arxiv.org/abs/2309.05818v1
Rice Plant Disease Detection and Diagnosis using Deep Convolutional Neural Networks and Multispectral Imaging ###### Abstract Rice is considered a strategic crop in Egypt as it is regularly consumed in the Egyptian people's diet. Even though Egypt is the highest rice producer in Africa with a share of 6 million tons per year [5], it still imports rice to satisfy its local needs due to production loss, especially due to rice disease. Rice blast disease is responsible for a 30% loss in rice production worldwide [9]. Therefore, it is crucial to limit yield damage by detecting rice crop diseases in their early stages. This paper introduces a public multispectral and RGB images dataset and a deep learning pipeline for rice plant disease detection using multimodal data. The collected multispectral images consist of Red, Green and Near-Infrared channels, and we show that using multispectral along with RGB channels as input achieves a higher F1 score compared to using RGB input only. Keywords: Deep learning, Computer vision, Multispectral imagery. ## 1 Introduction Rice plays an important role in the Egyptian agriculture sector, as Egypt is the largest rice producer in Africa. The total area used for rice cultivation in Egypt is about 600 thousand ha, or approximately 22% of all cultivated area in Egypt during the summer. As a result, it is critical to address the causes of rice production loss to minimize the gap between supply and consumption. Rice plant diseases contribute mostly to this loss, especially rice blast disease. According to [9], rice blast disease causes 30% of the total worldwide loss of rice production. Thus, detecting rice crop diseases, mainly rice blast disease, in the early stages can play a great role in restraining rice production loss. Early detection of rice crop diseases is a challenging task. One of the main challenges is that rice blast can be misclassified as brown spot disease by less experienced agriculture extension officers (as both are fungal diseases and have similar appearances in their early stage), which can lead to wrong treatment. Given the current scarcity of experienced extension officers in the country, there is a pressing need and opportunity for utilising recent technological advances in imaging modalities and computer vision/artificial intelligence to help in the early diagnosis of rice blast disease. Recently, multispectral photography has been deployed in agricultural tasks such as precision agriculture [3] and food safety evaluation [11]. Multispectral cameras can capture images in the Red, Red-Edge, Green and Near-Infrared wavebands, capturing what the naked eye cannot see. Integrating multispectral technology with deep learning approaches would improve crop disease identification capability. However, this requires collecting multispectral images in large numbers. In this paper, we propose a public multispectral and RGB images dataset and a deep learning pipeline for rice plant disease detection. First, the dataset we present contains 3815 pairs of multispectral and RGB images for rice crop blast, brown spot and healthy leaves. Second, we developed a deep learning pipeline trained on our dataset which calculates the Normalised Difference Vegetation Index (NDVI) channel from the multispectral image channels and concatenates it with the RGB image channels. We show that using NDVI+RGB as input achieves an F1 score higher by 1% compared to using RGB input only.
## 2 Literature Review Deep learning has emerged to tackle problems in different tasks and fields. Nowadays, it is being adopted to solve the challenge of crop disease identification. For example, Mohanty et al. [8] trained a deep learning model to classify plant crop type and disease based on images, and [1] proposed a deep learning-based approach for banana leaf disease classification. Furthermore, multispectral sensors have proven their capability as a new modality to detect crop field issues and diseases. Some approaches use multispectral images for disease detection and quantification. Cui et al. [4] developed an image processing-based method for quantitatively detecting soybean rust severity using multispectral images. Also, [12] utilized digital and multispectral images captured with quadrotor unmanned aerial vehicles (UAVs) to collect high-spatial-resolution imagery data to detect the ShB disease in rice. After the reliable and outstanding results deep learning models could achieve on RGB images, some approaches were developed to use deep learning on multispectral images, especially of crops and plants. [10] proposed a deep learning-based approach for weed detection in lettuce crops trained on multispectral images. In addition, Ampatzidis et al. [2] collect multispectral images of citrus fields using UAVs for crop phenotyping and deploy a deep learning detection model to identify trees. ## 3 Methodology ### Hardware Components We used a MAPIR Survey3N camera, shown in Figure 1, to collect our dataset. This camera model captures ground-level multispectral images in the red, green and NIR channels. It was chosen for its convenient cost and easy integration with smartphones. In addition, we used the Samsung Galaxy M51 mobile phone camera to capture RGB images, paired with the MAPIR camera. We designed a holder gadget to combine the mobile phone, MAPIR camera and a power bank in a single tool, as seen in Figure 2, to facilitate the data acquisition operation for the officers. It was designed using SolidWorks software and manufactured with a 3D printer. ### Data Collection Mobile Application An Android frontend application was also developed to enable the officers who collect the dataset to control the multispectral and smartphone cameras, capturing paired R-G-NIR/RGB images simultaneously while providing features such as image labelling, imaging session management, and geo-tagging. The mobile application is developed with Flutter and uses the Firebase real-time database to store and synchronise the captured data, including photos and metadata. Furthermore, the Hive local storage database is used within the application to maintain a local backup of the data. Figure 1: MAPIR Survey3N Camera. ### Analytics Engine Module Our engine is based on the ResNet18 [6] architecture, which consists of 18 layers and utilizes residual connections (see Figure 3) that help avoid the vanishing gradient problem. We can see how the layers are configured in the ResNet18 architecture. The architecture starts with a convolution layer with a 7x7 kernel size and a stride of 2. Next, we begin with the skip connections. The input from here is added to the output produced by a 3x3 max pool layer and two convolution layers with kernel size 3x3 and 64 kernels each. This is the first residual block. The output of this residual block is added to the output of two convolution layers with kernel size 3x3 and 128 such filters. This constitutes the second residual block.
Then the third residual block involves the output of the second block through a skip connection and the output of two convolution layers with filter size 3x3 and 256 such filters. The fourth and final residual block involves the output of the third block through skip connections and the output of two convolution layers with the same filter size of 3x3 and 512 such filters. Finally, average pooling is applied to the output of the final residual block, and the resulting feature map is given to the fully connected layer followed by a softmax function to produce the final output. The vanishing gradient is a problem that happens when training artificial neural networks with gradient-based learning and backpropagation. We use gradients to update the weights in a network, but sometimes the gradient becomes very small, effectively preventing the weights from being updated. This causes the network to stop training. To solve this problem, residual neural networks are used. Figure 2: Holder gadget. Residual neural networks are a type of neural network that applies identity mappings: the input to some layer is passed directly, as a shortcut, to some other layer. If \(x\) is the input, in our case an image or a feature map, and \(F(x)\) is the output from the layer, then the output of the residual block can be given as \(F(x)+x\), as shown in Figure 4. We changed the input shape to be 256x256 instead of 224x224, and we replaced the last layer in the original architecture with a fully connected layer whose output size was modified to three to accommodate our task labels. Figure 4: Residual block. Figure 3: ResNet18 original architecture. ## 4 Experimental Evaluation ### Dataset We have collected 3815 samples of rice crops with three labels: blast disease, brown spot disease and healthy leaves, distributed as shown in Figure 5 with 2135, 1095 and 585 samples, respectively. Each sample is composed of a pair of RGB and R-G-NIR images, as seen in Figure 6, which were captured simultaneously. Figure 7 shows samples of the three classes in our dataset. ### Training Configuration In this section, we explain our pipeline for training data preparation and preprocessing. We also describe our deep learning models' training configuration, including loss functions and hyperparameters. #### Data Preparation Figure 7: (a) Blast class sample. (b) Brown spot class sample. (c) Healthy class sample. RGB image registration. Since each sample of our collected dataset consists of a pair of RGB and R-G-NIR images, the two images are expected to have a similar field of view. However, the phone and MAPIR cameras have different fields of view: the MAPIR camera has a \(41^{\circ}\) FOV compared to the phone camera's \(123^{\circ}\) FOV. As a result, we register the RGB image to the R-G-NIR image using the OpenCV library. The registration task starts by applying an ORB detector over the two images to extract 10K features. Next, we use a brute-force matcher with Hamming distance between the two images' extracted features. Based on the calculated distances for the matches, we sort them and drop the worst 10%. Finally, the homography matrix is calculated using the matched points in the two images and applied to the RGB image. Figure 8 shows an RGB image before and after registration.
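The registration procedure just described maps almost directly onto OpenCV primitives. Below is a hedged sketch under the parameters stated above (10K ORB features, Hamming-distance brute-force matching, dropping 10% of the matches); the RANSAC threshold and all names are our assumptions, not the authors' code.

```python
# Hedged sketch of the RGB -> R-G-NIR registration with OpenCV.
import cv2
import numpy as np

def register_rgb_to_rgnir(rgb, rgnir):
    orb = cv2.ORB_create(nfeatures=10000)          # extract up to 10K features
    kp1, des1 = orb.detectAndCompute(rgb, None)
    kp2, des2 = orb.detectAndCompute(rgnir, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[: int(0.9 * len(matches))]   # keep the best 90% of matches

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = rgnir.shape[:2]
    return cv2.warpPerspective(rgb, H, (w, h))     # RGB warped into the R-G-NIR frame
```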
MAPIR camera calibration. The MAPIR camera sensor captures the reflected light in the visible and near-infrared spectrum, with wavelengths from about 400-1100nm, and saves the percentage of reflectance. After this step, a calibration of each pixel is applied to ensure that it is correct. This calibration is performed before every round of images captured, using the MAPIR Camera Reflectance Calibration Ground Target board, which consists of 4 targets with known reflectance values, as shown in Figure 9. Models training configuration. We trained our models for 50 epochs with a batch size of 16 using the Adam optimizer and a Cosine Annealing with restarts scheduler [7] with a cycle length of 10 epochs and a learning rate of 0.05. For the loss function, we used a weighted cross entropy to mitigate the imbalance of the training dataset. Images were resized to dimension 256 x 256. Figure 8: On the left is an RGB image before registration and on the right is after registration. #### 4.2.2 Results For training the deep learning model using RGB and R-G-NIR pairs, we generate an NDVI channel, using Equation 1, and concatenate it to the RGB image. Our study shows that incorporating the NDVI channel improves the model's capability to classify rice crop diseases. Our model achieves an F1 score of 84.9% with 5-fold cross-validation when using RGB+NDVI as input, compared to an F1 score of 83.9% when using only the RGB image. Detailed results are presented in Table 1. \[NDVI=\frac{NIR-Red}{NIR+Red} \tag{1}\] \begin{table} \begin{tabular}{|l|l|l|} \hline Class & RGB & RGB+NDVI \\ \hline Blast & 89.64\% & 90.02\% \\ Spot & 82.64\% & 83.26\% \\ Healthy & 79.08\% & 81.54\% \\ \hline \end{tabular} \end{table} Table 1: F1 score over our collected dataset achieved by using RGB as input versus RGB+NDVI. Figure 9: MAPIR Camera Reflectance Calibration Ground Target board. ## 5 Conclusion We presented our public dataset and deep learning pipeline for rice plant disease detection. We showed that employing multispectral imagery alongside RGB improves the model's disease identification capability by 1% compared to using solely RGB imagery. We believe that training with a larger number of images, together with a deeper model, would further improve the current results. In addition, more investigation into how to fuse multispectral imagery with RGB for training could be applied; for example, we could calculate NDVI from the blue channel instead of the red, which may also boost the model performance. Acknowledgements. The authors would like to acknowledge the support received from Data Science Africa (DSA) which made this work possible.
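As a companion to the pipeline above, the following is a hedged PyTorch sketch of the RGB+NDVI input construction (Eq. 1) and the adapted ResNet18 (4 input channels, 3 classes, 256x256 input). The R-G-NIR channel ordering and helper names are our assumptions, not the authors' code.

```python
# Hypothetical sketch: build a 4-channel RGB+NDVI input and a 3-class ResNet18.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def rgb_ndvi_input(rgb, rgnir, eps=1e-6):
    """rgb: (3,H,W); rgnir: (3,H,W) with channels assumed to be [Red, Green, NIR]."""
    red, nir = rgnir[0], rgnir[2]
    ndvi = (nir - red) / (nir + red + eps)               # Eq. (1)
    return torch.cat([rgb, ndvi.unsqueeze(0)], dim=0)    # (4,H,W)

model = resnet18(weights=None)
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 3)            # blast / brown spot / healthy

x = rgb_ndvi_input(torch.rand(3, 256, 256), torch.rand(3, 256, 256))
logits = model(x.unsqueeze(0))                           # shape (1, 3)
```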
2309.04737
Learning Spiking Neural Network from Easy to Hard task
Starting with small and simple concepts, and gradually introducing complex and difficult concepts, is the natural process of human learning. Spiking Neural Networks (SNNs) aim to mimic the way humans process information, but current SNN models treat all samples equally, which does not align with the principles of human learning and overlooks the biological plausibility of SNNs. To address this, we propose a CL-SNN model that introduces Curriculum Learning (CL) into SNNs, making SNNs learn more like humans and providing higher biological interpretability. CL is a training strategy that advocates presenting easier data to models before gradually introducing more challenging data, mimicking the human learning process. We use a confidence-aware loss to measure and process the samples with different difficulty levels. By learning the confidence of different samples, the model reduces the contribution of difficult samples to parameter optimization automatically. We conducted experiments on the static image datasets MNIST, Fashion-MNIST and CIFAR10, and the neuromorphic datasets N-MNIST, CIFAR10-DVS and DVS-Gesture. The results are promising. To the best of our knowledge, this is the first proposal to enhance the biological plausibility of SNNs by introducing CL.
Lingling Tang, Jiangtao Hu, Hua Yu, Surui Liu, Jielei Chu
2023-09-09T09:46:32Z
http://arxiv.org/abs/2309.04737v3
# Learning Spiking Neural Network from Easy to Hard task ###### Abstract Starting with small and simple concepts, and gradually introducing complex and difficult concepts, is the natural process of human learning. Spiking Neural Networks (SNNs) aim to mimic the way humans process information, but current SNN models treat all samples equally, which does not align with the principles of human learning and overlooks the biological plausibility of SNNs. To address this, we propose a CL-SNN model that introduces Curriculum Learning (CL) into SNNs, making SNNs learn more like humans and providing higher biological interpretability. CL is a training strategy that advocates presenting easier data to models before gradually introducing more challenging data, mimicking the human learning process. We use a confidence-aware loss to measure and process the samples with different difficulty levels. By learning the confidence of different samples, the model reduces the contribution of difficult samples to parameter optimization automatically. We conducted experiments on the static image datasets MNIST, Fashion-MNIST and CIFAR10, and the neuromorphic datasets N-MNIST, CIFAR10-DVS and DVS-Gesture. The results are promising. To the best of our knowledge, this is the first proposal to enhance the biological plausibility of SNNs by introducing CL. Spiking Neural Network, Curriculum Learning, Biological Plausibility, Deep Learning. ## I Introduction Efficient and low-power learning and information processing, similar to that of the human brain, has always been a goal pursued in the field of deep learning. The design of Spiking Neural Networks (SNNs) is inspired by neuroscience research on the human brain. It attempts to simulate the behavior and information processing of neurons to better approximate the functioning principles of the biological brain [1, 2]. Unlike traditional Artificial Neural Networks (ANNs) [3, 4, 5], which use continuous-valued activations, SNNs use discrete, time-based representations (spikes or action potentials) to transmit and process information [6]. SNNs have the potential for efficient and low-energy information processing, as they only transmit spikes when necessary. This makes SNNs attractive for implementing neuromorphic hardware and energy-efficient computing systems [7]. SNNs can capture the temporal dynamics and precise timing information of neural computations, which are crucial for areas such as speech recognition [8], event-based processing [7], and sensory integration [9]. Unsupervised SNNs [15, 16, 17] are primarily based on biological principles, with the most common being the Spike-Timing-Dependent Plasticity (STDP) rule [18]. It adjusts the synaptic weights based on the timing relationship between pre-synaptic and post-synaptic neurons' spike emissions, demonstrating high biological plausibility. However, it is challenging to achieve good performance on deep networks. Supervised learning methods [19, 20], on the other hand, train SNNs using labeled data by minimizing the error between predicted outputs and true labels to adjust network parameters [21]. Since spiking neural networks transmit discrete spike signals and lack differentiability [22], surrogate gradient [23] methods are commonly used for backpropagation to optimize the parameters.
This approach has shown promising results and provides a relatively simple solution for handling the non-differentiability of SNNs. Another learning approach is ANN-to-SNN conversion, where an ANN is first trained, and then a structurally equivalent SNN is constructed, initializing the SNN's weights with those of the ANN. This conversion allows SNNs to achieve performance close to that of ANNs [24, 25, 26]. However, it requires longer time steps [27] and is not suitable for event-based datasets. When we learn new knowledge, we often start with small and simple concepts and gradually introduce complex and challenging knowledge. This is a natural learning process for us, just as we receive education based on a curriculum designed by schools to continuously enhance our knowledge. SNNs were proposed to mimic the way humans process information [7]. However, current SNN models treat all samples equally during the training process without considering whether the model's learning capacity can effectively acquire relatively complex and challenging knowledge [10, 11, 12]. This approach is not reasonable and does not align with the natural laws of human learning, resulting in lower biological interpretability. Therefore, to improve the biological plausibility of SNNs, we propose introducing Curriculum Learning (CL) into SNNs to make them learn more like humans. CL is inspired by the process of humans learning new knowledge. Its main idea is to gradually increase the training difficulty, allowing the model to learn useful knowledge more quickly and better generalize to new data in later stages of training [13]. Specifically, CL provides training data to the model in a certain order called a "curriculum" [14]. Initially, the model is exposed to simple training samples, such as easily classifiable samples or correctly labeled ones. Once the model performs well, the difficulty is gradually increased by providing more complex and challenging samples. This process aligns closely with the way humans learn new knowledge: the sequential learning process of CL mimics the natural way humans acquire knowledge. In [28], CL is summarized as a combination of a difficulty estimator and a training scheduler. The difficulty estimator evaluates the difficulty of samples based on selected evaluation features such as complexity [29], noise [30, 31], or suggestions from a mature teacher network [29]. The training scheduler then feeds the samples to the model in order of difficulty according to the designed scheduling rules, which can improve the model's convergence speed. Pre-defined CL, as described in [28], relies on manually defined methods for evaluating sample difficulty and scheduling strategies. While simpler to implement, this approach tends to overlook the model's feedback during the learning process. Moreover, it is challenging to determine the most suitable training schedule, such as when and how much to introduce more challenging samples, based solely on prior knowledge, especially for specific tasks and datasets; it is more suitable for small datasets. In contrast to pre-defined CL, automatic CL introduces the concept of a teacher network. The teacher network can be the student network itself, which evaluates its own learning progress based on the training loss and adjusts the training samples accordingly.
Alternatively, it can be a well-trained, more advanced network model that assesses sample difficulty based on the teacher model's performance on those samples and dynamically adjusts the sample inputs based on feedback from the student network model. Confidence-aware loss has been proposed as a type of loss function that considers the model's confidence or certainty in its predictions [32, 33]. During the training process, it aims to assign higher importance to samples with higher confidence and lower importance to samples with lower confidence. This is achieved by introducing sample weights, namely confidences, and adjusting the weights of samples during the training process to modulate their contribution to the model parameter updates during backpropagation. The confidence-aware loss achieves curriculum learning without the need for predefined curriculum design. The paper's main contributions can be summarized as follows: * We propose a CL-SNN model that has higher biological interpretability by introducing CL into SNNs. The model dynamically evaluates the difficulty of samples, assigns higher confidence to simple samples, and amplifies their contribution in backpropagation. It automatically reduces the impact of more difficult samples on parameter updates. This approach exhibits high biological plausibility and effectively simulates the process of humans learning new knowledge. To the best of our knowledge, this is the first attempt to enhance the biological plausibility of SNNs by introducing CL. * We evaluate our CL-SNN model on three static image datasets, MNIST, CIFAR10 and Fashion-MNIST, as well as three neuromorphic datasets, DVS-Gesture, N-MNIST, and CIFAR10-DVS, for classification tasks. The results surpass the current state-of-the-art experimental results on all six datasets. The rest of the paper is organized as follows. Section II presents the proposed methods. Section III introduces the experimental results. Section IV concludes with a summary and discussion. ## II Methods ### _Neuron model_ The most commonly used spiking neuron model for SNNs is the Leaky Integrate-and-Fire (LIF) neuron model [34]. It features a simple computation process while preserving essential biological characteristics. The dynamics of LIF can be described as follows: \[\tau_{m}\frac{\mathrm{d}V\left(t\right)}{\mathrm{d}t}=-\left(V\left(t\right)-V_{r}\right)+I\left(t\right), \tag{1}\] in which \(V_{r}\) is the fixed resting potential and \(V(t)\) represents the membrane potential at time-step \(t\). \(V(t)\) decays continuously and, in the absence of input, it will decay until it reaches \(V_{r}\). \(\tau_{m}\) represents the membrane time constant. The LIF neuron model combines biological plausibility with computational convenience. Since SNNs transmit discrete spike signals, the dynamical mechanism of SNNs can be described as follows: \[Q_{t}=f(V_{t-1},I_{t}), \tag{2}\] \[O_{t}=\theta(Q_{t}-V_{th}), \tag{3}\] \[V_{t}=Q_{t}\left(1-O_{t}\right)+V_{r}O_{t}, \tag{4}\] where \(Q_{t}\) represents the membrane potential after receiving input, and depends on the membrane potential at time-step \(t-1\) and the input \(I_{t}\). Here \(I_{t}=\mathbf{w}_{ji}*O_{t}\), where \(\mathbf{w}_{ji}\) denotes the synaptic weight between spiking neurons \(j\) and \(i\). \(Q_{t}\) is calculated by the function \(f(x)\). \(O_{t}\) represents the output of the pre-neuron, in which \[\theta(x)=\left\{\begin{array}{ll}1,&x\geqslant 0\\ 0,&x<0\end{array}\right. \tag{5}\]
\(O_{t}=1\) means the potential after receiving input reaches the threshold \(V_{th}\) and a spike is generated, while \(O_{t}=0\) means the neuron keeps silent. \(V_{t}\) represents the membrane potential after the firing process. Eqs. (2), (3) and (4) describe the charging, discharging, and resetting processes of a spiking neuron, respectively. The differences between spiking neuron models mainly lie in the function \(f(x)\). The \(f(x)\) for the LIF neuron is defined as: \[Q_{t}=f(V_{t-1},I_{t})=V_{t-1}+\frac{1}{\tau_{m}}\left(-\left(V_{t-1}-V_{r}\right)+I_{t}\right). \tag{6}\] Most SNN models that utilize LIF neurons set the membrane time constant \(\tau_{m}\) as a fixed constant. However, [10] demonstrated the importance and necessity of a learnable \(\tau_{m}\) for model performance. They introduced the PLIF model, which simultaneously learns \(\tau_{m}\) and the synaptic weights during training, resulting in improved fitting capabilities. To avoid errors caused by \(\tau_{m}\) in the denominator during the learning process, Eq. (6) is rewritten as follows: \[Q_{t}=\left(1-g(a)\right)V_{t-1}+g(a)\left(V_{r}+I_{t}\right), \tag{7}\] where \(\tau_{m}=\frac{1}{g(a)}\in\left(1,+\infty\right)\) and \(a\) is a learnable parameter. In this paper, \(g(a)\) is taken as the sigmoid function, \(g(a)=\frac{1}{1+e^{-a}}\). ### _Surrogate gradient_ SNNs transmit discrete spike signals and exhibit non-continuous neuron responses. Moreover, \(\theta(x)\) is non-differentiable; its derivative is \[\theta^{\prime}(x)=\left\{\begin{array}{ll}+\infty,&x=0\\ 0,&x\neq 0\end{array}\right. \tag{8}\] which makes it challenging to use traditional backpropagation for gradient updates and parameter optimization in SNNs. Previous research [23] has proposed the surrogate gradient method to enable backpropagation in SNNs. The principle of the surrogate gradient is to use \(y=\theta(x)\) during forward propagation and \(\frac{\mathrm{d}y}{\mathrm{d}x}=\phi^{\prime}(x)\) during backward propagation, instead of \(\frac{\mathrm{d}y}{\mathrm{d}x}=\theta^{\prime}(x)\), where \(\phi(x)\) represents the surrogate function. \(\phi(x)\) is typically a smooth and continuous function that has a similar shape to \(\theta(x)\). The training process of SNNs using the surrogate gradient method is as follows: 1. Forward Propagation: The input is passed through the network, simulating the spiking behavior of neurons, and generating spike outputs. 2. Surrogate Function Calculation: Based on the spike outputs, the value of the surrogate function is computed, and the gradient of the surrogate function is calculated. 3. Backpropagation: The gradients of the surrogate function are used for backpropagation to update the network parameters. The surrogate function used in this paper is: \[\phi(x)=\frac{1}{\pi}\arctan(\frac{\pi}{2}\alpha x)+\frac{1}{2}. \tag{9}\] And we get: \[\frac{\mathrm{d}}{\mathrm{d}x}\phi(x)=\frac{\alpha}{2\left(1+\left(\frac{\pi}{2}\alpha x\right)^{2}\right)}. \tag{10}\] The surrogate gradient method simplifies the computation of SNNs and achieves promising results.
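The dynamics of Eqs. (2)-(4) and (7) together with the arctan surrogate of Eqs. (9)-(10) can be sketched in a few lines of PyTorch. This is our own minimal illustration of the mechanism, not the SpikingJelly implementation used in the experiments.

```python
# Self-contained sketch of the PLIF dynamics with the arctan surrogate gradient.
import math
import torch
import torch.nn as nn

class ArctanSpike(torch.autograd.Function):
    alpha = 2.0
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).float()                      # theta(x), Eq. (3)
    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        a = ArctanSpike.alpha
        return grad_out * (a / 2) / (1 + (math.pi / 2 * a * x) ** 2)  # Eq. (10)

class PLIFNeuron(nn.Module):
    def __init__(self, v_threshold=1.0, v_reset=0.0, a_init=0.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a_init))  # learnable; tau_m = 1 / g(a)
        self.v_th, self.v_r = v_threshold, v_reset

    def forward(self, I_seq):                        # I_seq: (T, ...) input currents
        v = torch.full_like(I_seq[0], self.v_r)
        g = torch.sigmoid(self.a)
        spikes = []
        for I_t in I_seq:
            q = (1 - g) * v + g * (self.v_r + I_t)   # charging, Eq. (7)
            o = ArctanSpike.apply(q - self.v_th)     # firing, Eq. (3)
            v = q * (1 - o) + self.v_r * o           # reset, Eq. (4)
            spikes.append(o)
        return torch.stack(spikes)
```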
### _Curriculum learning with confidence-aware loss_ The main idea of CL is to accelerate the learning of useful knowledge by gradually increasing the training difficulty, enabling the model to generalize better to new data in later stages of training. Common methods in CL involve presenting the model with simple data first and gradually introducing more challenging data in a predefined order of difficulty. This has led to the development of various difficulty measures and training schedulers [28]. However, most of these methods are suited to small datasets or focus on curriculum design at the data scheduling level. Moreover, they always require additional resources for sample difficulty evaluation and sample scheduling. Previous research [32, 33] has introduced the concept of confidence-aware loss, which is a type of loss function that takes into account the confidence or certainty of model predictions. During training, it aims to assign higher importance to samples with higher confidence and lower importance to samples with lower confidence. A confidence-aware loss function, denoted as \(l(\hat{y},y,\omega)\), introduces a learnable parameter \(\omega\) as an additional input compared to the traditional loss function \(l(\hat{y},y)\); \(\omega\) is the confidence or reliability of the current prediction result \(\hat{y}\). In [35], a general and lightweight approach to implementing the critical purpose of curriculum learning was proposed. It is based on confidence-aware loss, and its mathematical definition is as follows: \[CAL_{\lambda}\left(l_{i},\omega_{i}\right)=\left(l_{i}-\epsilon\right)\omega_{i}+\lambda\left(\log(\omega_{i})\right)^{2}, \tag{11}\] where \(l_{i}\) represents the initial training loss, which can be calculated using common loss functions such as cross-entropy loss or mean squared error (MSE). \(\omega_{i}\) is the confidence or certainty associated with sample \(i\) and is a learnable parameter. \(\epsilon\) is the threshold used to differentiate between easy and difficult samples and can be set as the average of the batch initial training losses \(l_{i}\) or a predetermined constant. \(\lambda\) is a hyperparameter that controls the regularization term. By dynamically learning the confidence \(\omega\) for each sample, the model reduces the impact of difficult samples on parameter updates while amplifying the confidence of simple and reliable samples, thereby expanding their contribution to the model. As training progresses, the fitting capacity of the model continuously improves, leading to high confidence for all samples in the end. The confidence \(\omega_{i}\) scales the learning level of the samples. To simplify the computation, the confidence \(\omega\) directly obtained from the training loss \(l_{i}\) of the sample is defined as follows [35]: \[\omega_{i}(l_{i})=\exp\left(-W\left(\frac{1}{2}\max\left(-\frac{2}{e},\eta\right)\right)\right), \tag{12}\] with \(\eta=\frac{l_{i}-\epsilon}{\lambda}\), and \(W\) is the Lambert W function. The final loss used for backpropagation is computed by combining the confidence \(\omega\) obtained from the initial training loss \(l_{i}\) and \(l_{i}\) itself. In CL-SNN, we directly calculate the initial loss \(l_{i}\) using the cross-entropy loss as follows: \[L=\frac{1}{N}\sum_{i}L_{i}=-\frac{1}{N}\sum_{i}\sum_{c=1}^{M}y_{ic}\log(p_{ic}). \tag{13}\] Combining Eqs. (11), (12) and (13), we obtain the confidence-aware loss function for CL-SNN. Samples with a large initial training loss can be considered more challenging for the current model; hence, we assign them lower confidence values to reduce their impact during parameter updates. Conversely, samples with smaller \(l_{i}\) values can be regarded as simpler and more reliable. This acts just like a dynamic CL. Fig. 1: The structure of the proposed CL-SNN.
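A minimal sketch of the resulting training loss follows, assuming \(\lambda>0\) so that the closed form of Eq. (12) applies. The per-sample confidence is computed from detached loss values via the principal branch of the Lambert W function; all names are illustrative.

```python
# Hypothetical sketch of the CL-SNN confidence-aware loss (Eqs. 11-13).
import math
import numpy as np
import torch
import torch.nn.functional as F
from scipy.special import lambertw

def confidence(l, eps, lam):
    """Closed-form optimal omega of Eq. (12), elementwise over the batch."""
    eta = torch.clamp((l.detach() - eps) / lam, min=-2.0 / math.e)
    w = np.real(lambertw(0.5 * eta.cpu().numpy()))       # Lambert W, principal branch
    return torch.as_tensor(np.exp(-w), dtype=l.dtype, device=l.device)

def cal_loss(logits, targets, lam=1.0, eps=None):
    l = F.cross_entropy(logits, targets, reduction="none")   # initial loss, Eq. (13)
    if eps is None:
        eps = l.mean().detach()            # dynamic threshold: batch-average loss
    omega = confidence(l, eps, lam)        # high for easy, low for hard samples
    return ((l - eps) * omega + lam * torch.log(omega) ** 2).mean()   # Eq. (11)
```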
## III Experiments We evaluated the proposed CL-SNN model on classification tasks using six datasets, including three static datasets (MNIST, CIFAR10, Fashion-MNIST) and three neuromorphic datasets (N-MNIST, CIFAR10-DVS, DVS-Gesture). ### _Experiment setting_ The experiments were conducted using a network architecture similar to that described in [10]. The SpikingJelly framework [36] was utilized, without employing operations similar to Poisson encoding on the input; instead, the input is fed directly into the network. For the MNIST, Fashion-MNIST, and N-MNIST datasets, we employed the same network architecture: 128c3-BN-MP2-128c3-BN-MP2-DP-FC2048-DP-FC100-AP10. Here, "128c3" refers to a convolutional layer with 128 channels and kernel size 3, "MP2" represents a max-pooling layer with kernel size 2, "BN" represents batch normalization, and "DP" refers to a dropout layer. We use the PLIF neuron described in Section II-A. The detailed network architectures for the other datasets are described in the code. In the surrogate function, we set \(\alpha=2\), so \[\phi(x)=\frac{1}{\pi}\arctan(\pi x)+\frac{1}{2}, \tag{14}\] and \(\phi^{\prime}\left(x\right)=\frac{1}{1+(\pi x)^{2}}\). For the threshold \(\epsilon\) used to distinguish between easy and hard samples in the confidence-aware loss, there are two methods of definition. One approach is to set \(\epsilon\) as a fixed constant, while the other involves using a dynamic threshold. We conducted experiments for both cases. In the first case, we set \(\epsilon=\log(c)\), where \(c\) represents the number of classes. For the MNIST, CIFAR10, Fashion-MNIST, N-MNIST, and CIFAR10-DVS datasets, \(c\) is equal to \(10\); for the DVS-Gesture dataset, \(c\) is \(11\), and we set \(\lambda=0\). In the second case, we set \(\epsilon\) as the average initial loss of each batch, resulting in a dynamic threshold, and in this case we set \(\lambda=1\). ### _Experiment results_ The experimental results of our proposed CL-SNN model on the static datasets, along with the comparison with state-of-the-art methods, are shown in Table I. Our method outperforms the comparison methods on all six datasets, and our model exhibits higher biological plausibility, aligning with the principles of human knowledge acquisition. The results on the neuromorphic datasets are shown in Table II, all showing better performance. The accuracy curves of the model for the different datasets are shown in Fig. 2. In terms of the difficulty differentiation threshold \(\epsilon\), experimental results show that the dynamic \(\epsilon\) performs better on the CIFAR10, N-MNIST and DVS-Gesture datasets, while for MNIST, Fashion-MNIST and CIFAR10-DVS, the fixed \(\epsilon\) achieves better performance. The confidence levels \(\omega\) vary for samples of different difficulties, as shown in Fig. 3(a). Easier samples reach the maximum confidence earlier, while more difficult samples take longer to reach maximum confidence. The level of the confidence-aware loss for different samples is illustrated in Fig. 3(b). ### _Other metrics_ We recorded other evaluation metrics of the model on the six datasets, including macro Precision, macro Recall, and macro F1-score, hoping to provide some assistance for future research. Macro Precision is a metric used in multi-class classification tasks to measure the average precision across all classes, and it provides an overall measure of the precision performance across the different classes in the classification task.
Similar to macro Precision, macro Recall is a metric used in multi-class classification tasks to measure the average recall across all classes. It calculates the average of the recall values for each class. Macro F1-score provides an overall measure of the F1-score performance across the different classes in the classification task. The macro Precision, macro Recall, and macro F1-score for the used datasets are shown in Table III. \begin{table} \begin{tabular}{c c c c} \hline \hline **Dataset** & **model** & **method** & **accuracy** \\ \hline \multirow{8}{*}{N-MNIST} & BackEISNN [38] & directly trained & 99.57 \\ & Ling et al. [39] & spike-based BP & 99.49 \\ & Wu et al. [44] & STBP with NeuNorm & 99.53 \\ & Zhu et al. [42] & Time-based BP & 99.39 \\ & CL-SNN with fixed \(\epsilon\)(**ours**) & Spike-based BP & **99.58** \\ & CL-SNN with dynamic \(\epsilon\)(**ours**) & Spike-based BP & **99.63** \\ \hline \multirow{8}{*}{CIFAR10-DVS} & Ling et al. [39] & spike-based BP & 64.6 \\ & Hanle et al. [45] & STBP-tdBN & 67.8 \\ & Wu et al. [44] & STBP with NeuNorm & 60.5 \\ & CL-SNN with fixed \(\epsilon\)(**ours**) & Spike-based BP & **69.4** \\ & CL-SNN with dynamic \(\epsilon\)(**ours**) & Spike-based BP & **68.6** \\ \hline \multirow{2}{*}{DVS-Gesture} & Ling et al. [39] & spike-based BP & 91.32 \\ & BRP-SNN [40] & spike-based BP & 80.9 \\ & CL-SNN with fixed \(\epsilon\)(**ours**) & Spike-based BP & **94.44** \\ & CL-SNN with dynamic \(\epsilon\)(**ours**) & Spike-based BP & **94.72** \\ \hline \hline \end{tabular} \end{table} TABLE II: Comparison of the proposed CL-SNN with state-of-the-art methods on the neuromorphic datasets (test accuracy, %). ## IV Conclusion and discussion Previous SNN models have always processed all samples indiscriminately, which is not in line with the natural process of human learning new knowledge from easy to difficult. We propose to introduce CL, based on a confidence-aware loss function, into SNNs. CL is a training strategy that makes the model learn knowledge from easy to difficult, inspired by the process of humans learning new knowledge. Based on this, a CL-SNN model was proposed, which has a high degree of biological plausibility. By scaling the contributions of samples with different confidence levels in parameter updates, the core principles of curriculum learning are achieved. To our knowledge, this is the first proposal to enhance the biological plausibility of SNNs by introducing CL. However, determining the most suitable curriculum learning strategy for a specific task still requires further exploration. Confidence-aware loss is a convenient method, but the choice of difficulty threshold and the measurement of difficulty are not unique. Nonetheless, this paper provides insights into how to design SNNs that align with human cognitive processes. ## Acknowledgments This work is supported by the National Natural Science Foundation of China (No. 62276218), Sichuan Science and Technology Program (No. 2022YFG0031) and Chengdu International Science and Technology Cooperation (No. 2023-GH02-00029-HZ).
2309.13459
A Model-Agnostic Graph Neural Network for Integrating Local and Global Information
Graph Neural Networks (GNNs) have achieved promising performance in a variety of graph-focused tasks. Despite their success, however, existing GNNs suffer from two significant limitations: a lack of interpretability in results due to their black-box nature, and an inability to learn representations of varying orders. To tackle these issues, we propose a novel \textbf{M}odel-\textbf{a}gnostic \textbf{G}raph Neural \textbf{Net}work (MaGNet) framework, which is able to effectively integrate information of various orders, extract knowledge from high-order neighbors, and provide meaningful and interpretable results by identifying influential compact graph structures. In particular, MaGNet consists of two components: an estimation model for the latent representation of complex relationships under graph topology, and an interpretation model that identifies influential nodes, edges, and node features. Theoretically, we establish the generalization error bound for MaGNet via empirical Rademacher complexity, and demonstrate its power to represent layer-wise neighborhood mixing. We conduct comprehensive numerical studies using simulated data to demonstrate the superior performance of MaGNet in comparison to several state-of-the-art alternatives. Furthermore, we apply MaGNet to a real-world case study aimed at extracting task-critical information from brain activity data, thereby highlighting its effectiveness in advancing scientific research.
Wenzhuo Zhou, Annie Qu, Keiland W. Cooper, Norbert Fortin, Babak Shahbaba
2023-09-23T19:07:03Z
http://arxiv.org/abs/2309.13459v3
# A Model-Agnostic Graph Neural Network for Integrating Local and Global Information ###### Abstract Graph Neural Networks (GNNs) have achieved promising performance in a variety of graph-focused tasks. Despite their success, existing GNNs suffer from two significant limitations: a lack of interpretability in results due to their black-box nature, and an inability to learn representations of varying orders. To tackle these issues, we propose a novel **M**odel-**a**gnostic **G**raph Neural **N**etwork (MaGNet) framework, which is able to sequentially integrate information of various orders, extract knowledge from high-order neighbors, and provide meaningful and interpretable results by identifying influential compact graph structures. In particular, MaGNet consists of two components: an estimation model for the latent representation of complex relationships under graph topology, and an interpretation model that identifies influential nodes, edges, and important node features. Theoretically, we establish the generalization error bound for MaGNet via empirical Rademacher complexity, and showcase its power to represent layer-wise neighborhood mixing. We conduct comprehensive numerical studies using simulated data to demonstrate the superior performance of MaGNet in comparison to several state-of-the-art alternatives. Furthermore, we apply MaGNet to a real-world case study aimed at extracting task-critical information from brain activity data, thereby highlighting its effectiveness in advancing scientific research. **Keywords:** Graph representation; Empirical Rademacher complexity; Information aggregation ## 1 Introduction Graph-structured data is ubiquitous throughout the natural and social sciences, from brain networks to social relationships. In the most general view, a graph is simply a collection of nodes representing entities such as people, genes, and brain regions, along with a set of edges representing interactions between pairs of nodes. By representing such interconnected entities as graphs, it is possible to leverage their geometric topology to study functional relationships in systems with network-based frameworks. For example, when studying brain function, graphs can be used to model activity relationships among neurons or brain regions. For such applications, it is essential to build relational inductive biases into machine learning methods in order to develop systems that can learn, reason, and generalize. To achieve this goal, in recent years there has been a surge in research on graph representation learning, including techniques for deep graph embedding powered by deep learning architectures, e.g., neural message passing approaches inspired by belief propagation. These advances in graph representation learning have led to new developments in various domains, including the analysis of information processing in the brain, chemical synthesis, recommender systems, and modeling social networks. Among the methods for deep graph representation, the family of graph neural networks (GNNs) has achieved remarkable success in real-world graph-focused tasks (Defferrard et al., 2016; Zhang and Chen, 2018). In general, GNNs iteratively aggregate and combine node representations within a graph, so-called message passing, to generate a set of learned hidden representation features (Gilmer et al., 2017). The main neural architectures of GNNs include graph convolutional networks (GCNs; Kipf and Welling (2016)), graph attention networks (GATs; Velickovic et al.
(2017)), and graph isomorphism networks (GINs; Xu et al. (2018a)), and many other variants (Hamilton, 2020). While GNNs are capable of capturing subgraph information through local operations, e.g., graph convolution operations, they can be prone to over-smoothing the learned representations when applying multiple rounds of local operators. This can lead to models that treat all nodes uniformly, resulting in node representations that converge towards an indistinguishable vector (Li et al., 2018). Another issue with over-smoothing is that it limits the ability to capture high-order information, which can only be aggregated by executing a sufficient number of local operations, such as when nodes have a large receptive field. Several studies have suggested that this over-smoothing phenomenon is a key factor in the performance degradation of deep GNNs (Bodnar et al., 2022). Therefore, most recent works (Zhang et al., 2022; Keriven, 2022) advocate the use of various shallow GNNs (e.g., up to three layers) to avoid the over-smoothing issue. Unfortunately, as previously discussed, simply making GNNs shallow often falls short in capturing high-order information due to insufficient rounds of message passing. Another key factor for learning sufficient representation power in GNNs involves directly incorporating and sequentially combining information from neighbors at different node neighbor orders, including both low-order and high-order levels. Statistically, by incorporating both low-order (immediate neighbors) and high-order (i.e., neighbors beyond the immediate vicinity) information, GNNs can learn a richer representation under graph topology. For example, in the brain, low-order information may capture relationships within a functional region, while high-order information may capture relationships across functional regions (Bassett and Sporns, 2017). Importantly, the information flows in a sequential manner, starting from within-region functionality and moving to cross-region functionality (Vazquez-Rodriguez et al., 2019). Models that adhere to the principle of effectively combining different-order information are known as multi-scale GNNs, as they enable the exploration and integration of information at multiple levels of granularity within a graph (Xu et al., 2018; Liao et al., 2019; Sun et al., 2019; Oono and Suzuki, 2020; Liu et al., 2022). Multi-scale GNNs provide enhanced flexibility and adaptability in facilitating information flow across various neighborhood ranges. This is achieved by directing the outputs of intermediate layers to contribute to the final representation. Nonetheless, existing methods struggle to effectively integrate representations of different orders in a sequential manner due to their memoryless property: the most recently learned representation tends to overwrite the previously acquired representation. In addition, this approach still suffers from over-smoothing issues and fails to capture high-order latent representations in general. To address these issues, we propose a novel **M**odel-**a**gnostic **G**raph neural **Net**work (MaGNet) framework with two major components: the _estimation model_ and the _interpretation model_. The estimation model captures the complex relationship between the feature information and a target outcome under graph topology, allowing for powerful latent representation.
The interpretation model, on the other hand, identifies a compact subgraph structure, i.e., influential nodes and edges, and a small subset of node features that play a crucial role in the learned estimation model. This provides a meaningful explanation of the underlying mechanisms of the relationships, allowing for a better understanding of the factors governing the behavior of complex dynamic systems. For example, for brain activity data, the interpretation model may be used to identify the most informative patterns of regional activity and of functional relationships among spatially distributed regions, as well as the most informative time periods. The main advantages of the proposed framework and our contributions are summarized as follows. First, the proposed neural architecture of the model alleviates the over-smoothing issue and thus can effectively extract knowledge from high-order neighbors. In addition, the designed actor-critic neural architecture has the ability to integrate various-order information by resolving the memoryless issue. It sequentially combines the representations learned from the actor graph neural networks, while a critic neural network plays a role in evaluating the quality of the hidden representation learned by the actor network. Second, we propose a tractable and concise interpretation framework for the black-box estimation model. We formulate our interpretation model as an optimization task that maximizes the information gain between a prediction of the estimation model and the distribution of possible subgraph structures. The approach is model-agnostic and can explain the estimation model on any graph-focused task without assumptions regarding the underlying true model or data-generating mechanisms. Third, we study the ability to integrate various-order information as well as the statistical complexity of the proposed model via a minimax measurement on empirical Rademacher complexity. Unlike existing analyses solely working on standard message passing neural networks, our results can be applied to a mixture of neural architectures between message passing neural networks and feedforward neural networks, where the message passing neural network is a special case in our theoretical framework. Furthermore, we provide a rigorous generalization error bound for the MaGNet estimation model, which consists of sequential deep-learning component models. The rest of the paper is organized as follows. Section 2 lays out the basic graph notation and model settings. Section 3 formally defines the estimation module. Section 4 illustrates how to construct the interpretation model for an estimated model. A comprehensive theoretical study of the proposed framework is provided in Section 5. Sections 6 and 7 demonstrate the empirical performance of our methods. We conclude this paper with a discussion of possible future research directions. All technical proofs are provided in the Appendix. ## 2 Preliminaries ### Graph Structure In this section, we present some preliminaries and notations used throughout the paper. Let \(G=(V,E)\) represent the graph, where \(V\) represents the vertex set consisting of nodes \(\{v_{1},v_{2},...,v_{N}\}\), and \(E\subseteq V\times V\) denotes the edge set with \((i,j)\)-th element \(e_{ij}\). The number of total nodes in the graph is denoted by \(N\). A graph can be described by a symmetric (typically sparse) adjacency matrix \(A\in\{0,1\}^{N\times N}\) derived from \(V\) and \(E\).
In this setting, \(a_{ij}=0\) indicates that the edge \(e_{ij}\) is missing, whereas \(a_{ij}=1\) indicates that the corresponding edge exists. There is a \(T\)-dimensional set of features, \(X_{i}\), associated with each node \(v_{i}\), so that the entire feature set is denoted as \(X\in\mathbb{R}^{N\times T}\). Suppose we have observed \(n\) graph instances, each consisting of a fixed graph structure but with different node features. Let \(G_{i}\) denote the \(i\)th instance of a graph, where \(i\in\{1,2,\ldots,n\}\). While our approach can be used for predictive models in general, here we focus on classification problems, where the objective is to assign a binary label \(s\in\{-1,1\}\) to each graph instance. ### Neural Message Passing The basic graph neural network (GNN) model can be motivated in a variety of ways. The same fundamental GNN model has been derived as a generalization of convolutions to non-Euclidean data (Bodnar et al., 2022), and as a differentiable variant of belief propagation (Dabkowski and Gal, 2017), as well as by analogy to classic graph isomorphism tests (Graham et al., 2019). Regardless of the motivation, the defining feature of a GNN is that it uses a form of neural message passing in which vector messages are exchanged between nodes and updated using neural networks (Abu-El-Haija et al., 2019). During each message passing iteration in a GNN, a hidden embedding corresponding to each node \(v\in\mathcal{V}\), denoted as \(H_{v}^{(k)}\), is updated according to information aggregated from \(v\)'s graph neighborhood \(\mathcal{N}(v)\). This message passing update can be expressed as follows: \[H_{v}^{(k+1)} =f_{\text{update}}^{(k)}\left(H_{v}^{(k)},f_{\text{agg}}^{(k)}\left(\left\{H_{u}^{(k)},\forall u\in\mathcal{N}(v)\right\}\right)\right)\] \[=f_{\text{update}}^{(k)}\left(H_{v}^{(k)},M_{\mathcal{N}(v)}^{(k)}\right),\] where \(f_{\text{update}}\) and \(f_{\text{agg}}\) are the update and aggregate functions, which are arbitrary differentiable functions (i.e., neural networks). The term \(M_{\mathcal{N}(v)}\) is the "message" that is aggregated from \(v\)'s graph neighborhood \(\mathcal{N}(v)\). We use superscripts to distinguish the embeddings and functions at different iterations of message passing. At each iteration \(k\) of the GNN, the aggregate function takes as input the set of embeddings of the nodes in \(v\)'s graph neighborhood \(\mathcal{N}(v)\) and generates a message \(M_{\mathcal{N}(v)}^{(k)}\) based on this aggregated neighborhood information. The update function then combines the message \(M_{\mathcal{N}(v)}^{(k)}\) with the current embedding \(H_{v}^{(k)}\) of node \(v\) to generate the updated embedding \(H_{v}^{(k+1)}\). ### Graph Convolutional Network Let \(D\) be the degree matrix corresponding to the adjacency matrix \(\tilde{A}=A+I\) with \(D_{ii}=\sum_{j=1}^{N}\tilde{A}_{ij}\). The hidden graph representation of nodes with two graph convolutional layers (Kipf and Welling, 2016) can be formulated in matrix form: \[H=\widetilde{\mathcal{L}}\text{ReLU}(\widetilde{\mathcal{L}}XW^{(0)})W^{(1)}, \tag{1}\] where \(H\in\mathbb{R}^{N\times t^{(1)}}\) is the final embedding matrix of the nodes, with \(t^{(1)}\) the dimension of the node hidden representation. The graph Laplacian is defined as \(\widetilde{\mathcal{L}}=D^{-\frac{1}{2}}\tilde{A}D^{-\frac{1}{2}}\).
In addition, the weight matrix \(W^{(0)}\in\mathbb{R}^{T\times t^{(0)}}\) is the input-to-hidden weight matrix for a hidden layer with \(t^{(0)}\) feature maps, and \(W^{(1)}\in\mathbb{R}^{t^{(0)}\times t^{(1)}}\) is the hidden-to-output weight matrix. Here we consider the two-layer case to simplify the notation; the above definition can be easily extended to \(l\) graph convolutional layers with \(l>2\). ## 3 Estimation Model In this section, we introduce a novel graph neural network, which aims to represent feature information and explain an outcome of interest by effectively analyzing the relationships between them. To achieve this goal, it integrates both low-order and high-order neighbor node information in order to form a powerful latent representation. Here, the low-order information refers to information aggregated from the local neighbors of a node, while the high-order information refers to messages aggregated beyond just the immediate neighbors, capturing the global graph information. To characterize the feasibility of capturing various-order information rigorously, we first introduce a generalization of the second-order \(\Delta\)-operator defined in Abu-El-Haija et al. (2019), i.e., its \(K\)-order counterpart for \(K\geq 3\), as follows: **Definition 3.1**.: _Given a graph neural network, the \(\Delta(K)\) representer represents \(K\)-order node neighbor information for \(K\in\mathbb{N}\), e.g., there exists a real-valued vector \(\nu=(\nu_{0},\nu_{1},...,\nu_{K})\) and an injective (one-to-one) mapping function \(g(\cdot)\), such that the output embedding of this graph neural network can be represented as_ \[g\left(\sum_{k=0}^{K}\nu_{k}\cdot\mathcal{L}^{k}X\right):=\Delta(K),\] _for any type of graph Laplacian \(\mathcal{L}\) operation and input node feature matrix \(X\)._ Learning such an operator enables GNNs to represent feature differences among \(K\)-order node neighbors' information. When a candidate GNN model learns the \(\Delta(K)\)-operator, it effectively captures the \(K\)-order neighborhood information in the hidden representation. In GCNs, the graph representation is obtained through interactions of neighboring nodes during multiple rounds of learned message passing. Ideally, one could consider a deep architecture via stacking \(K\) GCN layers in order to learn a \(\Delta(K)\) operator. Despite this, most GCN models employ shallow architectures, typically utilizing only two- or three-order information (Zhang et al., 2020). The reason behind this limitation is two-fold. First, when repeatedly applying Laplacian smoothing, GCNs may mix node features from different clusters, rendering them indistinguishable. This phenomenon is known as the _over-smoothing_ issue (Li et al., 2021). Second, most GCNs are built upon a feedforward mechanism and suffer from the _memoryless_ problem. After each layer operation, the representation learned from the current layer overwrites the representation produced by the previous layers, meaning that there is no explicit memory mechanism. In other words, the _over-smoothing_ issue creates difficulties in capturing high-order information, while the _memoryless_ issue leads to a loss of lower-order information. Theoretically, under Definition 3.1, GCN models with over-smoothing or memoryless issues cannot learn the \(\Delta(K)\)-operator in the case where \(K\) is large.
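For concreteness, the two-layer GCN propagation of Eq. (1), whose repeated Laplacian smoothing underlies the over-smoothing discussion above, can be sketched in a few lines of NumPy; variable names follow the notation of Section 2.

```python
# Compact sketch of the two-layer GCN forward pass of Eq. (1).
import numpy as np

def gcn_two_layer(A, X, W0, W1):
    A_tilde = A + np.eye(A.shape[0])             # add self-loops
    d = A_tilde.sum(axis=1)
    L = A_tilde / np.sqrt(np.outer(d, d))        # D^{-1/2} (A + I) D^{-1/2}
    return L @ np.maximum(L @ X @ W0, 0.0) @ W1  # Eq. (1), with ReLU in between
```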
### 3.1 Actor-Critic Graph Neural Network

In this section, we propose an actor-critic graph neural network that is designed to aggregate different levels of node-neighbor information to compute a powerful graph embedding. In this dual neural network structure, the actor graph neural network aims to capture the hidden representation for each order of node-neighbor information, while the critic neural network plays a role in evaluating the quality of the hidden representation learned by the actor network. With the hidden representations and evaluated quality scores, we perform a fusion operation to integrate the representations using the calculated quality scores as weights. This framework resolves the over-smoothing and memoryless issues and is guaranteed to learn a \(\Delta(K)\)-operator.

In contrast to GCNs, we adopt the simple weighted sum aggregator and abandon the use of feature transformation and nonlinear activation. As a result, the graph convolution operation in our actor graph neural network is defined as: \[H^{(k)}=(\mathcal{L})^{k}XW, \tag{2}\] where the graph Laplacian \(\mathcal{L}=D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\) and the weight matrix \(W\) is fixed to be an identity matrix in every message passing round. We argue that the nonlinear feature transformation is not critical and that the majority of the benefit arises from the local averaging of neighboring features. This is because the vectorized temporal signal is not like multi-dimensional image data, which requires many nonlinear layers to capture sufficient information. By removing the contracting nonlinear activation mapping, our graph convolution layer slows the convergence of the embedding vectors toward indistinguishable values, thus alleviating the over-smoothing issue. Additionally, abandoning the feature transformation operation greatly facilitates computation. It is also worth noting that in (2), we aggregate only the connected neighbors and do not integrate the target node itself (i.e., self-connection). That is, the graph Laplacian is based on the adjacency matrix \(A\) instead of its augmented counterpart \(\widetilde{A}\). This is different from most existing graph convolutions (Kipf and Welling, 2016), which typically aggregate extended neighbors and need to handle the self-connection specially. The fusion operation, to be discussed later, essentially captures the same effect as self-connections. Further, based on the node representation \(H^{(k)}\), we can apply a graph pooling operation to summarize the graph embedding from \(H^{(k)}\). In the following, we use the notation \(H^{(k)}\) for the graph embedding.

As previously stated, our primary objective is to aggregate mixed-order information. To this end, we propose a fusion operation to completely preserve the information from different-order neighbors. Specifically, we can regard the graph embedding from the \(k\)-th round of message passing as the output of the \(k\)-th order information summarization, denoted as \(H^{(k)}\). From the perspective of meta algorithms, the embedding \(H^{(k)}\) can be regarded as the graph embedding learned from the \(k\)-th actor graph neural network.

Figure 1: An illustrative example of the 3-layer neural architecture of the actor-critic graph neural network.
Then the ultimate graph embedding can be obtained as a weighted combination of the various-order graph embeddings, i.e., \[\widetilde{H}=\sum_{k=1}^{K}\alpha^{(k)}H^{(k)}, \tag{3}\] where \(\alpha^{(k)}\) is the fusion weight corresponding to the quality (or importance) of the \(k\)-th order knowledge for \(k=1,...,K\). This fusion operation can be understood as an ensemble of multiple single actor networks, i.e., the actor networks generating the graph embeddings \(H^{(k)}\). Thus, the fusion operation naturally combines the unique characteristics of different single learners, such as the different-order information. To determine the fusion weights \(\alpha^{(k)}\), we introduce a critic network, e.g., a feedforward neural network, which evaluates the quality of \(H^{(k)}\) through its weighted classification error, in a boosting-style manner. That is, \[\alpha^{(k)}=\frac{1}{2}\log\left(\frac{1-\epsilon^{(k)}}{\epsilon^{(k)}} \right),\] where the error rate \(\epsilon^{(k)}\) is defined as \[\epsilon^{(k)}=\sum_{i=1}^{n}\beta_{i}^{(k)}\mathds{1}\left\{s_{i}\neq\underset {\{m=-1,1\}}{\arg\max}\,\text{softmax}(H_{i}^{(k)})\right\}/\sum_{i=1}^{n}\beta _{i}^{(k)}. \tag{4}\] Here \(H_{i}^{(k)}\) denotes the graph embedding for the \(i\)-th graph sample, \(\arg\max_{\{m=-1,1\}}(x)\) is the operator returning the label of the maximum element in the two-dimensional vector \(x\), and the coefficient \(\beta_{i}^{(k)}\) denotes the weight of the \(i\)-th graph sample. Intuitively, \(\epsilon^{(k)}\) can be understood as the weighted classification error rate of the \(k\)-th single actor network, i.e., \(\text{softmax}(H^{(k)})\). For \(i=1,\ldots,n\), we adjust the graph sample weights \(\beta_{i}^{(k)}\) sequentially from the \(k\)-th to the \((k+1)\)-th step by following the updating rule: \[\beta_{i}^{(k+1)}\propto\beta_{i}^{(k)}\exp\left(\alpha^{(k)}\mathds{1}\left\{ s_{i}\neq\underset{\{m=-1,1\}}{\arg\max}\,\text{softmax}(H_{i}^{(k)})\right\} \right).\] This update rule intentionally pays more attention to the misclassified graph samples with potentially insufficient representation power, and increases their weights when training the next single actor network. Looking at the ultimate graph embedding \(\widetilde{H}\) in (3), we can see that it adaptively combines information from the first- to the \(K\)-th order node neighbors, as opposed to simply mixing the different-order embeddings with equal weights. Furthermore, the sequential updates of single actor networks maintain the same learning pattern as standard GCNs, wherein the message passing for the \((k+1)\)-th hop directly succeeds the message passing for the \(k\)-th hop. Due to this, our actor-critic graph neural network preserves most of the desirable properties of standard GCNs, such as invariance to graph isomorphism (Xu et al., 2018) and relational representation (Wu et al., 2020). Ultimately, we derive the classification model and prediction rule given the \(i\)-th graph embedding \(\widetilde{H}_{i}\), \[p(\cdot|\widetilde{H}_{i})=\text{softmax}(\widetilde{H}_{i}). \tag{5}\] This equation returns the classification logits. To streamline the notation throughout the paper, we use \(p_{\widehat{\theta}}(\cdot)\) to represent a trained actor-critic graph neural network model. To have a better understanding of the proposed actor-critic neural network, we illustrate its neural architecture in Figure 1. Finally, we note that the model training in (5) is a well-studied convex optimization problem. It can be performed using efficient second-order methods or stochastic gradient descent (SGD) (Bottou, 2010). As long as the graph connectivity pattern remains sufficiently sparse, SGD can naturally scale to handle very large graph sizes. Furthermore, we ensure the neural network architecture's consistency by training the layers sequentially, following the order of node neighbors' message passing. This sequential training approach enables us to utilize the trained parameters from the previous training to initialize the current model training, which effectively reduces computational costs.
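To tie the pieces of this section together, the following is a minimal sketch of the actor-critic fusion procedure. It is our own illustration under simplifying assumptions: the graph pooling is reduced to a plain average, and the per-order classifier \(\text{softmax}(H^{(k)})\) is replaced by a sign rule.

```python
import numpy as np

def actor_critic_fusion(L, X, s, K):
    """Fuse K orders of neighbor information, following Eqs. (2)-(4).

    L: (N, N) graph Laplacian; X: (n, N, T) node features for n graphs;
    s: (n,) labels in {-1, +1}.  Returns the fused representation, Eq. (3).
    """
    n = X.shape[0]
    beta = np.ones(n) / n                        # graph-sample weights
    fused = np.zeros_like(X)
    H_k = X.copy()
    for _ in range(K):
        H_k = np.einsum('ij,njt->nit', L, H_k)   # H^(k) = L^k X, with W = I
        scores = H_k.mean(axis=(1, 2))           # crude pooled score per graph
        pred = np.where(scores > 0, 1, -1)       # stand-in for argmax softmax(H^(k))
        miss = (pred != s)
        eps = np.clip(beta[miss].sum() / beta.sum(), 1e-6, 1 - 1e-6)  # Eq. (4)
        alpha = 0.5 * np.log((1 - eps) / eps)    # fusion weight alpha^(k)
        fused += alpha * H_k                     # running weighted sum, Eq. (3)
        beta *= np.exp(alpha * miss)             # up-weight misclassified graphs
        beta /= beta.sum()
    return fused
```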
## 4 Interpretation Model

Although the estimation model boasts the representation power to capture the complex relationship between the outcome of interest and the features, understanding the rationale behind its predictions can still be quite challenging. In this section, we present a novel interpretation framework designed to uncover the reasoning behind the "black-box" estimation model.

To bridge the gap between estimation and interpretation, we first observe that our estimation model extracts temporal feature information from various-order node neighbors as well as graph topology to output the hidden graph representation for predictions. This suggests that the prediction made by the estimation model, i.e., \(\widehat{s}=\arg\max_{\{m=-1,1\}}p_{\widehat{\theta}}(\cdot)\) in (5), is determined by the adjacency matrix \(A\) and the node feature information \(X\). Formally, to comprehend the model mechanism and provide explanations, the problem is transformed into identifying important/influential subgraphs, denoted as \(G_{sub}\subseteq G\) with a corresponding adjacency matrix \(A_{sub}\), and a small subset of the full-dimensional node features, \(X_{sub}\). We first focus on the identification of influential subgraphs by assuming \(X_{sub}\) has been obtained, and then discuss how to perform node feature selection simultaneously with subgraph identification.

We adapt the principle of information gain, which was first introduced in the context of decision trees (Larose and Larose, 2014), into our framework. In particular, we formulate an optimization framework for influential subgraph identification. Our goal is to maximize the information gain with respect to subgraph candidates \(G_{\text{sub}}\) in order to determine the most influential one: \[\operatorname*{argmax}_{G_{sub}}\operatorname{IG}\left(p_{\widehat{\theta}}, G_{sub}\right)=\eta(p_{\widehat{\theta}})-\eta\left(p_{\widehat{\theta}}\mid G _{sub},X_{sub}\right), \tag{6}\] where \(\eta(\cdot)\) and \(\eta(\cdot|\cdot)\) denote the entropy and conditional entropy, respectively. Essentially, information gain quantifies the change in prediction probability between the full model \(p_{\widehat{\theta}}(\cdot)\) and the one constrained to the subgraph \(G_{sub}\) and the subset node feature \(X_{sub}\). For example, if removing edge \(e_{ij}\), i.e., the \((i,j)\)-th element in the adjacency matrix \(A\), from the full graph \(G\) significantly decreases the prediction probability, then this edge is influential and should be included in the subgraph \(G_{sub}\). Conversely, if the edge \(e_{ij}\) is deemed redundant for prediction by the learned estimation model, it should be excluded. Examining the right-hand side of (6), we can easily observe that the entropy term \(\eta(p_{\widehat{\theta}})\) remains constant since the parameters \(\widehat{\theta}\) are fixed for an estimated model.
Consequently, the objective of maximizing the information gain in (6) is equivalent to minimizing the conditional entropy \(\eta\left(p_{\widehat{\theta}}\mid G_{sub},X_{sub}\right)\). Nevertheless, directly optimizing the above objective function is intractable, as there are \(2^{|E|}\) candidate subgraphs \(G_{sub}\). To address this issue, we consider a relaxation by assuming that the subgraph is a Gilbert random graph (Gilbert, 1959), in which edges from the original input graph \(G\) are selected conditionally independently of each other according to a probability distribution. In detail, the edge \(e_{ij}\) is a binary variable indicating whether the edge is selected, with \(e_{ij}=1\) if selected and 0 otherwise. Therefore, the graph \(G_{sub}\) is a random graph with probability \[P(G_{sub})=\Pi_{i,j\in[N]}P(e_{ij}).\] A straightforward instantiation of \(P(e_{ij})\) is the Bernoulli distribution \(e_{ij}\sim\text{Bern}(\mu_{ij})\), where \(\mu_{ij}\) is the mean. In particular, we can rewrite the parametrized objective as: \[\underset{G_{sub}}{\text{Minimize }}\eta\left(p_{\widehat{\theta}}\mid G_{ sub},X_{sub}\right)=\underset{G_{sub}(\mu)}{\text{Minimize }}\mathbb{E}_{G_{sub}(\mu)}[\eta\left(p_{\widehat{\theta}}\mid G_{ sub},X_{sub}\right)], \tag{7}\] where \(G_{sub}(\mu)\) is the parametrized random subgraph. Due to the discrete nature of the subgraph \(G_{sub}(\mu)\), the objective function is non-smooth, making optimization challenging and unstable. To address this issue, we further propose a continuous approximation for the binary sampling process. Let \(\epsilon\) be a uniform random variable, i.e., \(\epsilon\sim\text{Unif}(0,1)\), let \(\psi_{ij}\in\Psi\) be real-valued parameters, and let \(\omega\in\mathbb{R}^{+}\) be a temperature parameter; then a sample of the binary edge \(e_{ij}\) can be approximated by a sigmoid mapping: \[\widetilde{e}_{ij}=\text{sigmoid}\left(\frac{\log(\epsilon)-\log(1-\epsilon) +\psi_{ij}}{\omega}\right).\] We denote \(\widetilde{G}_{sub}(\Psi)\) as the continuous relaxation counterpart of the subgraph, with the \((i,j)\)-th element of the adjacency matrix being \(\widetilde{e}_{ij}\). Interestingly, in our analysis, the temperature parameter \(\omega\) describes the relationship between \(\widetilde{G}_{sub}(\Psi)\) and \(G_{sub}(\mu)\). We observe that as \(\omega\to 0\), the approximated edge \(\widetilde{e}_{ij}\) converges to the edge \(e_{ij}\), with the probability mass function \[\lim_{\omega\to 0}P(\widetilde{e}_{ij}=1)=\frac{\exp(\psi_{ij})}{1+\exp(\psi_{ij})}.\] Recall that the edge \(e_{ij}\) follows a Bernoulli distribution with mean \(\mu_{ij}\). If we reparameterize \(\psi_{ij}\) such that \[\psi_{ij}=\log\left(\frac{\mu_{ij}}{1-\mu_{ij}}\right),\] we achieve asymptotic consistency of the approximated subgraph, i.e., \(\lim_{\omega\to 0}\widetilde{G}_{sub}(\Psi)=G_{sub}(\mu)\). This supports the feasibility of applying a continuous relaxation to the binary distribution. Unlike the objective function in (7) induced by the original discrete subgraph, the objective function becomes smooth under the continuous edge approximation and can be easily optimized using gradient-based methods. In other words, the gradient of the continuous edge approximation \(\widetilde{e}_{ij}\) with respect to the parameters \(\psi_{ij}\) is computable.
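For reference, this continuous relaxation can be implemented in a few lines. The sketch below is ours (names and the example temperatures are illustrative), with \(\psi\) and \(\omega\) as defined in the text:

```python
import numpy as np

def sample_soft_edges(psi, omega, rng):
    """Continuously relaxed edge sample:
    e~_ij = sigmoid((log(eps) - log(1 - eps) + psi_ij) / omega),
    with eps ~ Unif(0, 1) drawn independently for each edge.
    """
    eps = rng.uniform(size=psi.shape)
    logits = (np.log(eps) - np.log1p(-eps) + psi) / omega
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
psi = np.zeros((5, 5))                              # psi_ij = 0  <=>  mu_ij = 0.5
soft = sample_soft_edges(psi, omega=0.5, rng=rng)   # values strictly in (0, 1)
hard = sample_soft_edges(psi, omega=1e-3, rng=rng)  # nearly binary as omega -> 0
```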
More importantly, the sampling randomness of the subgraph is absorbed into the uniform random variable \(\epsilon\) and decoupled from the parametrized Bernoulli distribution (a reparametrization trick), which greatly reduces the complexity of the sampling process. In this manner, the objective function in (7) can be reformulated as \[\underset{\Psi}{\text{Minimize}}\ \mathbb{E}_{\epsilon\sim\text{Unif}(0,1)}[ \eta\left(p_{\widehat{\theta}}\mid G_{sub}(\Psi),X_{sub}\right)].\] Unfortunately, solving the conditional entropy is still computationally expensive. To avoid this issue, we follow Kipf et al. (2018) and minimize a cross-entropy as the objective function. The empirical objective then becomes \[\underset{\Psi}{\text{Minimize}}\ -\frac{1}{n_{0}}\sum_{i=1}^{n_{0}}\sum_{m \in\{-1,1\}}p_{\widehat{\theta}}(s_{i}=m|X_{sub}(i))\log p_{\widehat{\theta} }(s_{i}=m|G_{sub}(\Psi),X_{sub}(i)),\] where \(n_{0}\) is the sampling size and \(p_{\widehat{\theta}}(s_{i}=m|G_{sub}(\Psi),X_{sub}(i))\) denotes the classification logits conditional on the subgraph \(G_{sub}(\Psi)\) and the subset feature \(X_{sub}(i)\) of the \(i\)-th graph sample.

So far, in the above development, we have implicitly assumed that the subset feature \(X_{sub}\) is known, which is not the case in practice. When the subset feature \(X_{sub}\) is not given, the main challenges are: 1) the important feature subset is unknown and must be identified; and 2) integrating this feature selection into the developed subgraph identification optimization framework is not trivial. Motivated by the great success of self-supervised techniques in large neural language models (Devlin et al., 2018), we propose to use a "masking" approach to convert the feature subsetting problem into an optimization problem that can be naturally combined with subgraph identification. Specifically, we define a binary vector \(\mathcal{B}\in\{0,1\}^{T}\) which has the same dimension as the raw node feature. For each node \(v_{i}\) and its raw node feature \(X_{i}\), \(i=1,...,N\), we multiply the raw feature with the binary vector \(\mathcal{B}\), i.e., \(X_{i}\odot\mathcal{B}\), where \(\odot\) is the Hadamard product. Intuitively, the vector \(\mathcal{B}\) zeroes out the values in some dimensions of the node feature. This aligns with the rationale that if a particular feature is not important, the corresponding weights in the neural network weight matrix take values close to 0. In terms of the principle of information gain, this type of masking does not significantly decrease the probability of the prediction or alter the information gain. Since the binary vector \(\mathcal{B}\) is non-smooth, we again consider a continuous relaxation, leveraging a sigmoid mapping so that the feature selection procedure becomes a smooth optimization problem, i.e., \[X\odot\text{sigmoid}(\widetilde{\mathcal{B}}),\] where \(\widetilde{\mathcal{B}}\in\mathbb{R}^{T}\) is a real-valued vector and \(\text{sigmoid}(\widetilde{\mathcal{B}})\) is applied to each row of \(X\). Next, we remove the low values in \(\widetilde{\mathcal{B}}\) through thresholding to arrive at the feature subset.
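A matching sketch for the relaxed feature mask is given below (again our own illustration; the thresholding rule is one reasonable choice, as the paper does not fix a specific threshold):

```python
import numpy as np

def masked_features(X, B_tilde, threshold=None):
    """Apply the relaxed mask X * sigmoid(B~) row-wise over nodes.

    X: (N, T) node features; B_tilde: (T,) real-valued mask parameters.
    If a threshold is supplied, the mask is binarized to produce X_sub.
    """
    mask = 1.0 / (1.0 + np.exp(-B_tilde))          # sigmoid(B~)
    if threshold is not None:
        mask = (mask >= threshold).astype(float)   # hard feature subset
    return X * mask[None, :]                       # Hadamard product per node
```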
Now, the subgraph identification and the feature selection can be naturally integrated into a single minimization problem: \[\min_{\Psi,\widetilde{\mathcal{B}}}\,-\frac{1}{n_{0}}\sum_{i=1}^{n_{0}}\sum_{m \in\{-1,1\}}p_{\widehat{\theta}}(s_{i}=m|X(i)\odot\text{sigmoid}(\widetilde {\mathcal{B}}))\log p_{\widehat{\theta}}(s_{i}=m|G_{sub}(\Psi),X(i)\odot \text{sigmoid}(\widetilde{\mathcal{B}})),\] which forms a unified optimization framework. This unified optimization framework allows for the simultaneous identification of influential subgraphs and important node features, leading to a more interpretable and efficient model. The resulting optimization problem can be solved using gradient-based techniques.

## 5 Theory

In this section, we present the main theoretical results for the estimation model. First, we demonstrate that, in contrast to standard GCNs, the proposed estimation model is capable of representing the feature differences among various-order neighbors. Second, we study the capacity of the actor graph neural network in terms of empirical Rademacher complexity. The derived bound is minimax optimal through careful analysis of both the lower and upper bounds. Furthermore, we provide a probabilistic upper bound on the generalization error of the actor-critic graph neural network as calibrated by the fusion algorithm.

**Theorem 5.1**.: _The MaGNet actor-critic graph neural network is capable of learning a \(\Delta(K)\)-representer, i.e., it is able to sufficiently capture \(K\)-order node neighbor information._

Theorem 5.1 demonstrates that our estimation model can learn various-order information. This ensures the capability of the proposed estimation model for high-order message passing, where nodes receive latent representations from their 1-order neighbors as well as farther \(K\)-order neighbors at the information aggregation step. In contrast, the existing GCNs are not capable of representing this class of operations, even when stacked over multiple layers. To establish the bounds on generalization errors and simplify the proofs, we make the following technical assumptions.

**Assumption 5.1**.: _The \(L_{2}\)-norm of the feature vector in the input space is bounded, namely, for some constant \(\widetilde{c}>0\), we assume \(\left\|X_{i}\right\|_{2}\leq\widetilde{c}\) for all \(i=1,...,N\)._

**Assumption 5.2**.: _The maximum absolute element of the graph Laplacian matrix is bounded above, i.e., \(\max_{i\in[N]}\max_{j\in[N]}|\mathcal{L}_{ij}|\leq c_{\mathcal{L}}\)._

**Assumption 5.3**.: _The Frobenius norm of any weight matrix in the estimation model is bounded. Namely,_ \[\|W^{(l)}_{\text{MLP}}\|_{F}\leq c_{2},\|W^{(l_{0})}_{\text{MLP}} \|_{F}\leq c_{1},\|W\|_{F}\leq c_{0},\] _for some constants \(c_{0},c_{1},c_{2}>0\), where \(\|\cdot\|_{F}\) is the Frobenius norm and \(W^{(l)}_{\text{MLP}}\) denotes the weight matrix in the \(l\)-th layer of the MLP, for any \(l<l_{0}\)._

**Assumption 5.4**.: _Each element of the graph Laplacian \(\mathcal{L}\) is bounded by \(c_{2}>0\), namely, \(|\mathcal{L}_{ij}|\leq c_{2}\) for any \(i,j\)._

**Assumption 5.5**.: _The number of neighbors of each node is equal across nodes, namely, for some common constant \(q\in\mathbb{N}^{+}\), assume \(q:=|N(v_{i})|\) for all nodes \(v_{i}\in V\), where \(N(\cdot)\) indicates the set of neighboring nodes._

The above assumptions are common in the (graph) neural network literature.
Assumptions 5.1, 5.2 and 5.3 impose norm constraints on the parameters, graph Laplacian matrix, and input features, making the model class fall into a compact metric space (Liao et al., 2020). Assumption 5.4 is a standard assumption to control the intensity of the graph Laplacian in the GNN literature (Hamilton, 2020; Lv, 2021). Assumption 5.5 requires us to focus on homogeneous graphs.

We first present our result on bounding the Rademacher complexity of the model class \(\mathcal{F}_{c_{0},c_{1},c_{2}}\), which is the part of the estimation model before and up to the step that produces \(H^{(K)}\). For the \(i_{0}\)-th graph sample, formally, we define our estimation model class \(\mathcal{F}_{c_{0},c_{1},c_{2}}\) in the setting of \(K=3\) and \(l_{0}=2\), without loss of generality, as \[\mathcal{F}_{c_{0},c_{1},c_{2}}:=\bigg\{f(X(i_{0}))=\sigma \Bigg(\sum_{q=1}^{d_{1}}w_{\mathrm{MLP},q}^{(2)}\sigma\Bigg(\sum_{t=1}^{k }w_{\mathrm{MLP},tq}^{(1)}\frac{1}{N}\sum_{m=1}^{N}\sum_{i=1}^{N}\mathcal{L} _{mi}\sum_{v=1}^{N}\mathcal{L}_{iv}\] \[\qquad\qquad\qquad\qquad\times\sum_{j\in N(v)}\mathcal{L}_{vj} \left<X(i_{0})_{j},\mathbf{w}_{t}\right>\Bigg)\Bigg),\quad i_{0}\in[n],\] \[\|W_{\mathrm{MLP}}^{(1)}\|_{F}\leq c_{2},\|W_{\mathrm{MLP}}^{(2) }\|_{F}\leq c_{1},\|W\|_{F}\leq c_{0}\bigg\},\] where \(\sigma(\cdot)\) is the (Lipschitz-continuous) activation function of the critic network, \(w_{\mathrm{MLP}}^{(1)}\) and \(w_{\mathrm{MLP}}^{(2)}\) are the weights of the first and second layers of the critic MLP, and \(\mathbf{w}_{t}\) is the \(t\)-th row of the weight matrix \(W\). Note that we use this particular setting as an example of the model class to simplify the expression. The following theoretical results hold for the general case of \(K\) and \(l_{0}\).

**Definition 5.1**.: _Given the input node feature matrices \(\{X(i)\}_{i=1}^{n}\) and the \(K\)-layer actor-critic graph neural network class \(\mathcal{F}_{c_{0},c_{1},c_{2}}\), the empirical Rademacher complexity of \(\mathcal{F}\) is defined as_ \[\widehat{\mathcal{R}}(\mathcal{F}_{c_{0},c_{1},c_{2}}):=\mathbb{E}_{\epsilon} \left[\frac{1}{n}\sup_{f\in\mathcal{F}}\bigg{|}\sum_{j=1}^{n}\epsilon_{j}f \left(X(j)\right)\bigg{|}\;\Bigg{|}\;X(1),X(2),\ldots,X(n)\right],\] _where \(\{\epsilon_{i}\}_{i=1}^{n}\) is an i.i.d. family of Rademacher variables, independent of \(\{X(i)\}_{i=1}^{n}\)._

**Theorem 5.2**.: _Under Assumptions 5.1-5.5, the empirical Rademacher complexity is bounded by_ \[\widehat{\mathcal{R}}(\mathcal{F}_{c_{0},c_{1},c_{2}})\leq \frac{2^{K}(L_{0})^{l_{0}}c_{0}c_{1}c_{2}c_{\mathcal{L}}^{K-1} \widetilde{c}q^{K+0.5}}{\sqrt{n}}|\lambda_{\max}(\mathcal{L})|; \text{Upper Bound}\] \[\widehat{\mathcal{R}}(\mathcal{F}_{c_{0},c_{1},c_{2}})\geq \frac{(L_{0})^{l_{0}}c_{0}c_{1}c_{2}(\min_{i,j\in[N]}\mathcal{L}_ {i,j})^{K-1}\widetilde{c}q^{K}}{\sqrt{n}}|\lambda_{\min}(\mathcal{L})|; \text{Lower Bound}\] _where \(L_{0}\) is the Lipschitz constant of the nonlinear activation function in the critic network, \(\lambda_{\min}(\mathcal{L})\) and \(\lambda_{\max}(\mathcal{L})\) are the finite minimum and maximum absolute eigenvalues of the graph Laplacian \(\mathcal{L}\), and \(c_{0},c_{1},c_{2},c_{\mathcal{L}},q,\widetilde{c}\) are the constants defined in Assumptions 5.1-5.5._

Theorem 5.2 demonstrates that the derived upper bound is tight up to constants when compared to the lower bound. Theorem 5.2 also indicates that the upper bound of \(\widehat{\mathcal{R}}(\mathcal{F}_{c_{0},c_{1},c_{2}})\) depends on the number of graph instances, the degree distribution of the graph, and the graph convolution filter.
Interestingly, for traditional regular graphs, the above bound is independent of the maximum number of nodes \(N\). Applying our results on the empirical Rademacher complexity \(\widehat{\mathcal{R}}(\mathcal{F}_{c_{0},c_{1},c_{2}})\) to generalization analysis, we now state the fundamental result on the generalization bound of the estimation model. We denote \(conv(\mathcal{F}_{c_{0},c_{1},c_{2}})\) as the closed convex hull of \(\mathcal{F}_{c_{0},c_{1},c_{2}}\). That is, \(conv(\mathcal{F}_{c_{0},c_{1},c_{2}})\) consists of all functions that are pointwise limits of convex combinations of functions from \(\mathcal{F}\): \[\text{conv}(\mathcal{F}_{c_{0},c_{1},c_{2}}):=\Big{\{}f:\forall x,f (x)=\lim_{K\rightarrow\infty}f_{K}(x),f_{K}=\sum_{k=1}^{K}w_{k}f_{k},\] \[\sum_{k=1}^{K}w_{k}=1,f_{k}\in\mathcal{F}_{c_{0},c_{1},c_{2}},K \geq 1\Big{\}}.\] Clearly, the combination in (3) belongs to \(conv(\mathcal{F}_{c_{0},c_{1},c_{2}})\). Next, we present the probabilistic generalization error bound for the estimation model:

**Theorem 5.3**.: _Under Assumptions 5.1-5.5, let \(\widehat{s}\) be the label predicted by the actor-critic graph neural network class with \(K\) layers and let \(s\) be the true label; then the generalization error satisfies the probabilistic upper bound_ \[P\left(\widehat{s}\,s\leq 0\right)\leq\prod_{k=1}^{K}\Bigg{\{} \underbrace{2\sqrt{\epsilon^{(k)}\left(1-\epsilon^{(k)}\right)}+ \left(\frac{\log\log_{2}\left(2\big{(}\log\prod_{k=1}^{K}\sqrt{\frac{1- \epsilon^{(k)}}{\epsilon^{(k)}}}\lor 1\big{)}\right)}{n}\right)^{1/2}}_{\text{fusion estimation bias}}+\underbrace{\sqrt{\frac{1}{2n}\log\frac{2}{\delta}}}_{\text{ intrinsic uncertainty}}\] \[+\underbrace{\frac{2^{K+3}(L_{0})^{l_{0}}c_{0}c_{1}c_{2}c_{ \mathcal{L}}^{K-1}\widetilde{c}q^{K+0.5}}{\sqrt{n}}|\lambda_{\max}(\mathcal{L} )|\Bigg{(}\log\prod_{k=1}^{K}\sqrt{\frac{1-\epsilon^{(k)}}{\epsilon^{(k)}}} \lor 1\Bigg{)}}_{\text{local complexity}}\Bigg{\}},\] _with probability at least \(1-\delta\) for \(\delta\in[0,1)\)._

Theorem 5.3 demonstrates that the generalization error of the proposed estimation model is bounded in terms of the error rates at each iteration, as defined in (4), provided that \(\epsilon^{(k)}\leq 1/2\) for any \(k=1,...,K\). In comparison to the generalization bounds for vanilla GNNs (Scarselli et al., 2018; Garg et al., 2020), our bound is independent of the number of hidden units and the maximum number of nodes \(N\) in any input graph. For a regular graph with \(q=\mathcal{O}(1)\) (Bollobas, 1998), we conclude that \(\lambda_{\max}(\mathcal{L})=1\), which yields a generalization error bound of order \(\mathcal{O}(1/\sqrt{n})\) that is fully independent of the number of nodes \(N\).

## 6 Simulation Studies

In this section, we present a comprehensive evaluation of MaGNet using synthetic datasets. Specifically, we investigate its accuracy via binary classification tasks. Furthermore, we evaluate the performance of MaGNet's interpretation model by conducting thorough experiments for various purposes (e.g., node-wise, edge-wise, and feature-wise). To generate graphs, we allow the number of nodes \(|V|\) in the graph to vary among 30, 50, and 75, with the graph sample size \(n\) being 100 or 250. Each node has a temporal feature of dimension \(p\in\{20,50\}\). We distinguish two categories of nodes, specifically, important nodes and non-important nodes, and we generate their features by applying two separate processes, resulting in two different settings.
**Setting 1**: For the important nodes, features are generated following a multivariate Gaussian distribution, \(\text{MVN}(0,0.1\cdot I_{p\times p})\), where \(I_{p\times p}\) is an identity matrix. On the other hand, for the non-important nodes, the instance features are sampled from a uniform distribution, \(\text{Unif}(0,0.5)\).

**Setting 2**: For the important nodes, features are generated following a Gaussian process with mean function \(\mu(x_{t})=0.1(t-25)^{2}\) and kernel covariance function \(k\left(x_{t},x_{t+h}\right)=\exp\left(-\left|x_{t}-x_{t+h}\right|^{2}/(2\sigma^ {2})\right)\), inducing dependency between the temporal features, where \(\sigma=0.5\) and \(h=3\). On the other hand, for the non-important nodes, the instance features are sampled from a Gaussian process with mean function \(\mu^{\prime}(x_{t})=0.1t\) and the same kernel covariance function \(k\left(x_{t},x_{t+h}\right)\) as for the important nodes.

The distinct generation of features for each node class aims to create a distribution gap influencing the classification target outcome. It is important to note that the target outcome or classification rule is based solely on the features of important nodes and remains independent of those of non-important nodes. This generation process allows a good classifier to separate important nodes from non-important ones. In all settings, we consider a binary outcome of interest, i.e., \(s=\{-1,1\}\), with the classification rule \(X_{V_{0}}/|V_{0}|+N(0,0.1)>0\), where \(N(0,0.1)\) introduces noise into the classification, \(X_{V_{0}}\) are the features of the important nodes, and \(|\cdot|\) is the cardinality operator.
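To make this generation process concrete, the sketch below draws Setting 2 features from the stated Gaussian processes. It reflects our own reading of the kernel (indexed by time lags); the function names, the jitter term, and the simplified labeling rule are our assumptions, not the authors' released code:

```python
import numpy as np

def gp_features(n_nodes, p, mean_fn, sigma=0.5, rng=None):
    """Draw temporal node features from a Gaussian process on t = 1, ..., p
    with squared-exponential kernel exp(-|t - t'|^2 / (2 sigma^2))."""
    rng = rng or np.random.default_rng()
    t = np.arange(1, p + 1, dtype=float)
    K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * sigma ** 2))
    cov = K + 1e-8 * np.eye(p)   # jitter for numerical stability (our addition)
    return rng.multivariate_normal(mean_fn(t), cov, size=n_nodes)

rng = np.random.default_rng(0)
p = 20
important = gp_features(10, p, lambda t: 0.1 * (t - 25) ** 2, rng=rng)
background = gp_features(20, p, lambda t: 0.1 * t, rng=rng)
X = np.vstack([important, background])   # (30, p) node-feature matrix
# Simplified version of the labeling rule: averaged important-node signal + noise.
s = 1 if important.mean() + rng.normal(0, 0.1) > 0 else -1
```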
### 6.1 Evaluation on Estimation Model

In this section, we evaluate the classification accuracy of the MaGNet estimation model, comparing it against several benchmark approaches, including DeepNet (LeCun et al., 2015), penalized logistic regression (Hastie et al., 2009), and RandomForest (Breiman, 2001). Furthermore, we compare our method with GNN models such as the vanilla graph convolutional network (V-GCN) (Kipf and Welling, 2016), the graph attention network (GAT) (Velickovic et al., 2017), and the Graph Isomorphism Network (GIN) (Xu et al., 2018a). To accommodate the non-GNN competing methods, we transform the unstructured graph data into structured data. Specifically, we treat each node as a distinct sample within the dataset. Under this setting, we have a total of \(n|V|\) samples for the training sets. This pre-processing ensures a fair comparison between our model and the non-GNN competing methods. The results are provided in Table 1 and Table 2.

As illustrated in Tables 1 and 2, the MaGNet estimation model is the best classifier among all the methods. This superiority is consistent across varying sample sizes, node quantities, and important node sizes, indicating MaGNet's robust performance in binary classification tasks. The performance gains of the MaGNet model mainly come from the ability to integrate both local and global information, which yields powerful representations for graph-structured data.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline **Sample Size** & **Important Nodes** & **Nodes** & **MaGNet** & **V-GCN** & **GAT** & **GIN** & **DeepNets** & **RF** & \(l_{1}\)**-Logit** \\ \hline \multirow{2}{*}{100} & \multirow{2}{*}{10} & 30 & **0.779** & 0.755 & 0.762 & 0.758 & 0.732 & 0.721 & 0.685 \\ & & 50 & **0.757** & 0.727 & 0.747 & 0.741 & 0.721 & 0.704 & 0.665 \\ & & 75 & **0.742** & 0.712 & 0.722 & 0.724 & 0.703 & 0.688 & 0.651 \\ \cline{2-10} & \multirow{2}{*}{20} & 30 & **0.791** & 0.775 & 0.768 & 0.764 & 0.757 & 0.752 & 0.704 \\ & & 50 & **0.768** & 0.748 & 0.754 & 0.737 & 0.714 & 0.701 & 0.678 \\ & & 75 & **0.758** & 0.738 & 0.730 & 0.729 & 0.711 & 0.693 & 0.667 \\ \hline \multirow{2}{*}{250} & \multirow{2}{*}{10} & 30 & **0.788** & 0.758 & 0.766 & 0.769 & 0.739 & 0.722 & 0.694 \\ & & 50 & **0.767** & 0.733 & 0.741 & 0.727 & 0.710 & 0.707 & 0.661 \\ & & 75 & **0.759** & 0.744 & 0.745 & 0.732 & 0.713 & 0.700 & 0.668 \\ \cline{2-10} & \multirow{2}{*}{20} & 30 & **0.797** & 0.767 & 0.754 & 0.748 & 0.728 & 0.710 & 0.684 \\ & & 50 & **0.771** & 0.741 & 0.756 & 0.733 & 0.752 & 0.747 & 0.697 \\ & & 75 & **0.771** & 0.750 & 0.745 & 0.727 & 0.733 & 0.718 & 0.687 \\ \hline \hline \end{tabular} \end{table} Table 1: The results of classification accuracy over 50 repeated experiments in Setting 1.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline **Sample Size** & **Important Nodes** & **Nodes** & **MaGNet** & **V-GCN** & **GAT** & **GIN** & **DeepNets** & **RF** & \(l_{1}\)**-Logit** \\ \hline \multirow{2}{*}{100} & \multirow{2}{*}{10} & 30 & **0.717** & 0.697 & 0.685 & 0.704 & 0.637 & 0.649 & 0.599 \\ & & 50 & **0.686** & 0.660 & 0.670 & 0.668 & 0.642 & 0.645 & 0.584 \\ & & 75 & **0.672** & 0.611 & 0.649 & 0.655 & 0.638 & 0.598 & 0.590 \\ \cline{2-10} & \multirow{2}{*}{20} & 30 & **0.713** & 0.720 & 0.690 & 0.700 & 0.715 & 0.680 & 0.642 \\ & & 50 & **0.698** & 0.661 & 0.718 & 0.685 & 0.636 & 0.607 & 0.615 \\ & & 75 & **0.685** & 0.681 & 0.659 & 0.646 & 0.638 & 0.638 & 0.616 \\ \hline \multirow{2}{*}{250} & \multirow{2}{*}{10} & 30 & **0.720** & 0.677 & 0.703 & 0.700 & 0.667 & 0.706 & 0.630 \\ & & 50 & **0.705** & 0.675 & 0.669 & 0.676 & 0.626 & 0.646 & 0.597 \\ & & 75 & **0.709** & 0.670 & 0.680 & 0.663 & 0.617 & 0.622 & 0.601 \\ \cline{2-10} & \multirow{2}{*}{20} & 30 & **0.727** & 0.684 & 0.709 & 0.705 & 0.660 & 0.655 & 0.628 \\ & & 50 & **0.718** & 0.683 & 0.675 & 0.682 & 0.690 & 0.708 & 0.668 \\ & & 75 & **0.710** & 0.677 & 0.686 & 0.669 & 0.658 & 0.648 & 0.618 \\ \hline \hline \end{tabular} \end{table} Table 2: The results of classification accuracy over 50 repeated experiments in Setting 2.

### 6.2 Evaluation on Interpretation Model

In this section, we first introduce three types of interpretation tasks. Then we report the performance of the MaGNet interpretation model with comparisons to the competing methods.

#### 6.2.1 Interpretation Tasks

As mentioned above, we consider three types of model interpretation tasks: node-wise, edge-wise, and feature-wise interpretation tasks. We note that each of them is aligned with the functionalities of the proposed MaGNet interpretation model. In the following interpretation tasks, we focus on using Setting 2 as the data generation process in order to mimic the scenario with temporal features in neural activity experiments.

In the node-wise interpretation tasks, our goal is to recover as many important nodes in the synthetic data as possible. This is particularly crucial in practice for achieving a parsimonious model that simultaneously maintains interpretability and performance. The ultimate aim is to exclude all non-important nodes and retain all important nodes after node-wise reasoning.
In the edge-wise interpretation tasks, we first define the notions of important and redundant edges (REs). An important edge refers to an edge connecting two important nodes; all other edges are considered redundant edges. In this task, we seek to minimize the number of redundant edges retained by the trained MaGNet estimation model. To evaluate the interpretation model performance, we define two metrics, the absolute metric (AM) and the relative metric (RM), as follows: \[\text{AM}:=\frac{\text{\# of existing RE after reasoning}}{\text{\# of all possible RE}}, \tag{8}\] and \[\text{RM}:=\frac{\text{\# of existing RE before reasoning}-\text{\# of existing RE after reasoning}}{\text{\# of existing RE before reasoning}}. \tag{9}\]

In the feature-wise interpretation tasks, the interpretation differs across settings. In Setting 1, the task is identical to variable selection on the node features, as we aim to identify the significant dimensions of the feature contributing to the classification. In contrast, in Setting 2, where the node features are temporally generated, the task aims to detect an important time window, as in the rat hippocampus experiments.

#### 6.2.2 Results on Interpretation Tasks

In this section, we present the results of the three types of interpretation task experiments. For the purposes of comparison, we consider two competing methods: the modified GroupLasso (Meier et al., 2008) and IntGRAD (Sundararajan et al., 2017), a gradient-based model explanation approach designed for deep neural networks. To implement GroupLasso for the node-wise and edge-wise interpretation tasks, we regard each node as a variable in a group and perform variable selection for the important nodes. For the implementation of IntGRAD, we adapt the module of IntGRAD to the graph neural network setting and apply it to our trained MaGNet estimation model. Note that for the feature-wise interpretation tasks, we only evaluate the model performance of the MaGNet interpretation method, because the two competing methods are not able to perform this type of task.

The results of the node-wise interpretation tasks are presented in Table 3 over 50 repeated experiments. Our results reveal several key findings. First, the recovery rate of important nodes using the MaGNet interpretation model consistently outperforms the competing methods across diverse settings. These advantages are due to the following strength of our method: it leverages the information gain to directly assess the reduction of uncertainty on the node subgraph. This technique incorporates statistical uncertainty as a measurement criterion instead of applying either fully deterministic gradient-based methods or variable selection methods.
Another advantage of our method is the reparametrization strategy for continuously approximating discrete variables, which makes the proposed interpretation framework more computationally stable than the competing methods. Also, because the MaGNet interpretation model is particularly designed for the graph neural network, it inherits the properties of the trained MaGNet estimation model and is thus able to leverage local and global information when producing interpretations.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Sample Size** & **Important Nodes** & **All Nodes** & **MaGNet** & **GroupLasso** & **IntGRAD** \\ \hline 100 & 10 & 30 & **0.874** & 0.720 & 0.801 \\ & & 50 & **0.845** & 0.684 & 0.736 \\ \cline{2-6} & 20 & 30 & **0.882** & 0.738 & 0.811 \\ & & 50 & **0.868** & 0.704 & 0.773 \\ \hline 250 & 10 & 30 & **0.887** & 0.729 & 0.813 \\ & & 50 & **0.856** & 0.692 & 0.746 \\ \cline{2-6} & 20 & 30 & **0.894** & 0.748 & 0.821 \\ & & 50 & **0.877** & 0.715 & 0.785 \\ \hline \hline \end{tabular} \end{table} Table 3: The node-wise interpretation performance over 50 repeated experiments.

The results of the edge-wise interpretation tasks are summarized in Table 4. It shows that our proposed method has a high edge reduction rate, underlining its effectiveness in pruning the graph of non-important edges. Importantly, the method exhibits the ability to retain significant edges, suggesting an inherent proficiency in discriminating between important and non-important edges and, thus, preserving the true graph structure. This performance is consistently validated across different settings, demonstrating the proposed method's robustness and adaptability to varying graph sizes. In addition, our model identifies and removes redundant edges while maintaining critical connections between significant nodes. This results in a more parsimonious and interpretable graph structure in practice.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Metric** & **Important Nodes** & **All Nodes** & **MaGNet** & **GroupLasso** & **IntGRAD** \\ \hline AM & 10 & 30 & **0.162** & 0.268 & 0.234 \\ & & 50 & **0.187** & 0.309 & 0.267 \\ \cline{2-6} & 20 & 30 & **0.132** & 0.245 & 0.194 \\ & & 50 & **0.157** & 0.273 & 0.222 \\ \hline RM & 10 & 30 & **0.811** & 0.741 & 0.764 \\ & & 50 & **0.783** & 0.702 & 0.746 \\ \cline{2-6} & 20 & 30 & **0.854** & 0.765 & 0.803 \\ & & 50 & **0.809** & 0.731 & 0.766 \\ \hline \hline \end{tabular} \end{table} Table 4: The edge-wise interpretation performance over two metrics with 50 repeated experiments, where lower AM is better and higher RM is better.

The results of the feature-wise interpretation task are reported in Figure 2. It indicates that our model achieves a high rate of identifying key features or time windows, successfully detecting the most influential time windows across various settings.

Figure 2: The identified important time window with different temporal feature dimensions.

## 7 Application to Local Field Potential Activity Data from the Rat Brain

In this section, we apply the proposed method to brain activity data recorded from an array of electrodes implanted inside the brain. The brain region of interest is the hippocampus, a region near the middle of the rat brain known to be important for the temporal organization of memories and behaviors. Although it is well established that the hippocampus plays a key role in this function across mammals, the underlying neuronal mechanisms remain unclear.
To shed light on these underlying mechanisms, we previously recorded neural activity in the hippocampus of rats performing a complex sequence memory task (Allen et al., 2016) (as such high-precision data are currently not available in humans). Using that dataset, our objective here is to apply the proposed method to identify key functional relationships in the local field potential (LFP) activity simultaneously recorded across electrodes during task performance, as this information could provide novel insights into potential functional networks within that region.

The LFP activity data were collected from the CA1 region of the hippocampus while rats performed an odor sequence memory task (Figure 3). In this task, rats received repeated presentations of odor sequences (e.g., ABCDE) at a single odor port and were required to identify each item as either "in sequence" (InSeq; e.g., ABC...) or "out of sequence" (OutSeq; e.g., ABD...). Importantly, the recordings were performed from surgically implanted electrodes (tetrodes), organized into two bundles, which spanned much of the proximo-distal axis of dorsal CA1. This experimental design thus provides a unique opportunity to directly examine the anatomical distribution of information processing along that axis.

Figure 3: **(a.)** The task involves repeated presentations of sequences of odors and requires rats to determine whether each odor was presented "in sequence" (InSeq; e.g., ABC\(\dots\)) or "out of sequence" (OutSeq; e.g., AB\(\underline{\text{D}}\dots\)). Using an automated delivery system (left), all odors were presented in the same odor port (median interval between odors \(\sim\)5 s). Recordings were performed from electrodes organized into two bundles (right), which spanned much of the proximo-distal axis of dorsal CA1. **(b.)** In each session, the same sequence was presented multiple times, with approximately half the presentations including all InSeq trials (left) and the other half including one OutSeq trial (right). Each odor presentation was initiated by a nosepoke and rats were required to correctly identify each odor as either InSeq (by holding their nosepoke response until a tone signaled the end of the odor at 1.2 s) or OutSeq (by withdrawing their nose before the signal; <1.2 s) to receive a water reward. Incorrect responses resulted in termination of the sequence. **(c.)** Location of three electrode tips (red circles). The leftmost and rightmost electrodes approximate the extent of the CA1 transverse axis recorded in each animal.

In recent work, Shahbaba et al. (2022) showed that information about trial content, such as the identity of the odor presented and whether it was presented in or out of sequence, could be accurately decoded from the ensemble spiking activity. However, that study did not determine whether task-relevant information was also contained in the local field potential activity. A fundamentally different data type from the discrete neural spiking activity, the LFP's continuous signal is more challenging to decode. To our knowledge, there are only two reports which exclusively use LFP to successfully decode spatial information in the hippocampus, both of which require high-density recordings (Taxidis et al., 2015; Agarwal et al., 2014), and none showing decoding of nonspatial information from hippocampal LFP alone. To address this gap in knowledge, here we examined whether the content of odor trials can be decoded from hippocampal LFP activity and, if so, whether the pattern varies over space (electrodes) and time. For this analysis, we focused on decoding the two main trial types (InSeq and OutSeq) using LFP activity from the 0-500 ms period (0 = odor onset), a time period in which there are no overt differences in the behavior of the animals between InSeq and OutSeq trials. We considered each rat's data an independent dataset and performed the classification evaluation task separately. For each rat's data, we randomly selected 200 graph instances as the training set and the other 30 graph samples as the testing set.

Figure 4 shows that the MaGNet estimation model achieves the best performance for all the rats. In particular, the proposed method indicates improvements of 8.4%, 6.9%, and 8.9% when compared to existing graph neural networks, namely GAT, GIN, and GCN, respectively. The improvements are due to our method's integration of both low-order and high-order information, effectively incorporating more information into the latent representation. In addition, by removing the non-linear feature transformation and the self-loop augmentation in GNNs, our method is less likely to suffer the _over-smoothing_ and _memoryless_ issues. Compared to the baseline classification methods, the improvement of the proposed method is more than 16.9% over DeepNet, which is the best among all the baseline approaches.

Figure 4: Barplot of estimation accuracy for the MaGNet estimation model and alternative competing approaches on decoding the two main trial types.

We then investigated the temporal dynamics of this decoding during trial periods by applying our MaGNet interpretation model. Specifically, we examined the most informative time bins for the InSeq/OutSeq classification in the first 500 ms of trials in Figure 5(a). We found that most significant time bins occurred between \(\sim\)180 ms and \(\sim\)320 ms after the rats poked into the port. This timeline is consistent with reports of hippocampal neurons responding to odor information in as little as 100 ms (Allen et al., 2020) and with the expected timeline of InSeq/OutSeq identification within trials. This implies that the MaGNet interpretation model successfully identifies the important time window.

In addition, we found that most informative electrodes clustered in the distal region of CA1 (Figure 5b). In fact, across all 5 animals, the majority (86.7%) of significant electrodes were in distal CA1, and more than half of all electrodes in distal CA1 reached significance. We further found that the majority of significant edges found by the MaGNet interpretation model also clustered in distal CA1.

Figure 5: **(a.)** Significant decoding of InSeq and OutSeq trials based on LFP activity during the first 500 ms of odor trials. Scores peak during the 185-320 ms period, prior to the behavioral response. Grey traces indicate individual subject decodings; the black line indicates the mean across subjects. **(b.)** Informative electrode nodes and edges clustered in the distal region of CA1. Schematic showing a side view of electrode bundles implanted across the CA1 proximal-distal axis (top). Schematic showing a top view of the anatomical distribution of electrodes across subjects based on electrode tract reconstruction (bottom). Yellow indicates significant nodes (electrodes). Edges indicate significant relationships between electrodes. **(c.)** The clustering of informative nodes in distal CA1 is consistent with known anatomical differences in input connections. Odor information enters the hippocampus primarily through the LEC (lateral entorhinal cortex; magenta), which more strongly projects to the distal segment of CA1. In contrast, the MEC (medial entorhinal cortex; blue) more strongly projects to proximal CA1. Approximate locations of the implanted electrode bundles are shown.

This distribution of significant nodes and clusters suggests that distal CA1 plays a more important role in representing InSeq/OutSeq information than proximal CA1, a pattern consistent with known differences in their anatomical connections (Figure 5c). However, the observation that a number of significant edges also extended into proximal CA1 suggests that functional interactions between the two segments of CA1 are critical for task performance.

Furthermore, the distal region of the hippocampus involves a more explicit layer-wise division. Specifically, it is composed of three neuronal function segments: Stratum Pyramidale (SP), Stratum Radiatum (SR), and Stratum Oriens (SO). These segments play a crucial role in the intricate circuitry of the hippocampus, encompassing not only the pyramidal neurons but also various types of interneurons that regulate the activity of these pyramidal neurons. The connectivity and interactions among neurons in these layers support complex neural conduction that contributes to the formation and retrieval of memories. As an illustrative example, Figure 6 shows that the influential nodes identified by the MaGNet interpretation model are consistently located across all three segments, suggesting a pattern of co-activation and cross-functional firing among the neurons in these segments, essential for processing in the trisynaptic circuit (Tsao et al., 2018). This co-activation and cross-firing pattern also affirms the capability of our estimation model to incorporate high-order information.

Figure 6: Three neuronal function segments in the distal area of the hippocampus. The green, red, and orange shaded areas indicate the Stratum Pyramidale, Stratum Radiatum, and Stratum Oriens segments, respectively.

## 8 Discussion

In this paper, we have proposed a novel graph neural network framework, MaGNet, which effectively integrates both low-order and high-order neighborhood information to form powerful latent representations. Furthermore, MaGNet includes an interpretation component, which offers a tractable framework for identifying influential subgraphs, including important nodes, edges, and node features. In addition, we have established rigorous theoretical foundations to assess the efficacy, statistical complexity, and generalizability of the MaGNet model. These theoretical results ensure that the proposed model is reliable and effective, contributing to the practical utility of MaGNet in various applications.

We applied the MaGNet framework to LFP activity data recorded from the hippocampus of rats as they performed a challenging non-spatial sequence memory task. Using this model, we were able to decode the trial type (whether the odor was presented in or out of sequence) as well as identify the most informative trial periods and electrodes. Therefore, not only did the model provide the first direct evidence of decoding non-spatial trial content from hippocampal LFP activity alone, it also provided a high degree of specificity about how this information was distributed over space and time.
This neuroscience result is consistent with a growing literature on the influence of anatomical gradients on information processing within brain regions (van Strien et al., 2009; Witter et al., 2017; Knierim et al., 2014), specifically with evidence that inputs carrying non-spatial information more strongly project to distal CA1 than proximal CA1 (Haberly and Price, 1978; Agster and Burwell, 2009).

While our proposed framework maintains the sequential training approach that is typical of most existing GNN architectures, the proposed training and optimization can still be costly due to the fusion step over multiple component models. Therefore, identifying ways to optimize these operations is a potential area for future research. Another potential direction for exploration is to extend the current framework to accommodate different types of tasks. For example, rather than solely focusing on graph classification, the framework can be extended to node classification, link prediction, and beyond. By exploring these directions, we can continue to broaden the utility and effectiveness of our framework for a wide range of applications. Furthermore, there is a significant need to extend the current framework to accommodate dynamic settings. While modeling time-varying changes and dynamic systems holds central importance in numerous real-world applications, the current MaGNet framework, along with the majority of GNN models, is primarily tailored to static graph data. These models are capable of incorporating structural information into the learning process, but they fall short in capturing the evolution of dynamic graphs. Typically, dynamics in a graph refer to node attribute modifications or edge-structure changes, including the additions and deletions of nodes or edges. As a possible expansion of the existing MaGNet framework, we will explore the incorporation of node and edge activation functions to signify and capture the presence of the nodes and edges at each timestamp. This will enable the subsequent utilization of attention mechanisms such as self-attention and neighborhood attention, which have shown efficacy in foundation models (Bommasani et al., 2021), to account for historical, time-evolved information from preceding timestamps.
2309.14691
On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks
Artificial neural networks (ANNs) with recurrence and self-attention have been shown to be Turing-complete (TC). However, existing work has shown that these ANNs require multiple turns or unbounded computation time, even with unbounded precision in weights, in order to recognize TC grammars. However, under constraints such as fixed or bounded precision neurons and time, ANNs without memory are shown to struggle to recognize even context-free languages. In this work, we extend the theoretical foundation for the $2^{nd}$-order recurrent network ($2^{nd}$ RNN) and prove there exists a class of a $2^{nd}$ RNN that is Turing-complete with bounded time. This model is capable of directly encoding a transition table into its recurrent weights, enabling bounded time computation and is interpretable by design. We also demonstrate that $2$nd order RNNs, without memory, under bounded weights and time constraints, outperform modern-day models such as vanilla RNNs and gated recurrent units in recognizing regular grammars. We provide an upper bound and a stability analysis on the maximum number of neurons required by $2$nd order RNNs to recognize any class of regular grammar. Extensive experiments on the Tomita grammars support our findings, demonstrating the importance of tensor connections in crafting computationally efficient RNNs. Finally, we show $2^{nd}$ order RNNs are also interpretable by extraction and can extract state machines with higher success rates as compared to first-order RNNs. Our results extend the theoretical foundations of RNNs and offer promising avenues for future explainable AI research.
Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles
2023-09-26T06:06:47Z
http://arxiv.org/abs/2309.14691v1
# On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks

###### Abstract

Artificial neural networks (ANNs) with recurrence and self-attention have been shown to be Turing-complete (TC). However, existing work has shown that these ANNs require multiple turns or unbounded computation time, even with unbounded precision in weights, in order to recognize TC grammars. However, under constraints such as fixed or bounded precision neurons and time, ANNs without memory are shown to struggle to recognize even context-free languages. In this work, we extend the theoretical foundation for the \(2^{nd}\)-order recurrent network (\(2^{nd}\) RNN) and prove there exists a class of a \(2^{nd}\) RNN that is Turing-complete with bounded time. This model is capable of directly encoding a transition table into its recurrent weights, enabling bounded time computation and is interpretable by design. We also demonstrate that 2nd order RNNs, without memory, under bounded weights and time constraints, outperform modern-day models such as vanilla RNNs and gated recurrent units in recognizing regular grammars. We provide an upper bound and a stability analysis on the maximum number of neurons required by 2nd order RNNs to recognize any class of regular grammar. Extensive experiments on the Tomita grammars support our findings, demonstrating the importance of tensor connections in crafting computationally efficient RNNs. Finally, we show \(2^{nd}\) order RNNs are also interpretable by extraction and can extract state machines with higher success rates as compared to first-order RNNs. Our results extend the theoretical foundations of RNNs and offer promising avenues for future explainable AI research.

## Introduction

Artificial neural networks (ANNs) have achieved impressive results across a wide variety of natural language processing (NLP) tasks. Two of the most promising approaches in NLP are those based on attention mechanisms and recurrence. One key way to evaluate the computational capability of these models is by comparing them with the Turing machine (TM). Importantly, it has been shown that recurrent neural networks (RNNs) [17] and transformers [16] are Turing complete when leveraging unbounded precision and weights. However, these classes of ANNs, by design, lack structural memory and thus empirically fail to learn the structure of data when tested on complex algorithmic patterns. Researchers have shown that, under restrictions or constraints such as bounded precision, RNNs operate more closely to finite automata [11], and transformers notably fail to recognize grammars [5] produced by pushdown automata. Another complementary research direction is focused on coupling ANNs with memory structures [6, 9, 10, 4, 2, 3] such as stacks, queues, and even tapes in order to overcome memory issues. It has been shown that the neural Turing machine (NTM) [2] and the differentiable neural computer (DNC) [3] are not Turing complete in their current form and can only model space-bounded TMs. However, recent work has theoretically derived a computational bound for RNNs either coupled to a growing memory module [1] or augmented with a stack-like structure [8, 20, 4, 6, 19] - these models have been shown to be Turing complete (TC) with bounded precision neurons. One of the earliest works in analyzing RNN computational capabilities was that of Siegelman and Sontag [18], which theoretically demonstrated that an RNN with unbounded precision was TC if the model satisfied two key conditions: 1)
it was capable of computing or extracting dense representations of the data, and 2. it implemented a mechanism to retrieve these stored (dense) representations. Notably, this work demonstrated that simple RNNs could model any class of Turing complete grammars and, as a result, were universal models. These proofs have recently been extended to other kinds of ANNs without recurrence [15]. In particular, transformers with residual connections as well as the Neural GPU equipped with a gating mechanism were shown to be Turing complete [15]. Despite these important efforts, there is far less work on theoretically deriving the computational limits of RNNs with finite precision and time. Therefore, in this paper, we specifically consider these constraints and formally construct the smallest TC RNNs possible, as well as show that we only require \(11\) unbounded precision neurons to simulate any TM. We also prove that a class of RNNs can simulate a TM in finite time \(O(T)\), where \(T\) is the simulation time. In other words, a simulation is finite if there is a formal guarantee that the simulation of the TM will be completed by a system with the available resources; otherwise, the computation becomes unbounded, and there is no guarantee that we will find the answer with currently available compute. To achieve this, we simulate Turing machines with RNNs built from tensor synaptic connections (TRNNs). It has been shown that tensor connections are interpretable and offer practical as well as computational benefits [7, 13]. Furthermore, when augmented with differentiable memory, such networks have been shown to be _Turing equivalent with finite precision and time_ [19]. However, none of the prior memory-less RNNs have been shown to be _Turing complete with finite time_. To this end, we will formally prove that: 1) an unbounded precision TRNN requires a smaller number of neurons and is equivalent to a universal Turing machine, and 2) a bounded precision TRNN can model any class of regular grammars and is more powerful than RNNs and transformers in recognizing these languages. As a result, the TRNN offers the following benefits: 1) **Computational benefit:** The number of hidden states required to construct a TM via a TRNN is comparatively smaller than for other ANNs, 2) **Rule insertion and interpretability:** One can add/program the transition rules of a Turing complete grammar directly into the weights of a TRNN, making the model interpretable by design, and 3) **Explainability:** One can extract the rules as well as the underlying state machine from a well-trained TRNN, making the model explainable via extraction. This work makes the following contributions: * We formally prove the Turing completeness of a TRNN with unbounded precision. * We prove that one can simulate any TM in real time and that the TRNN, by design, is interpretable, since the number of states in the TRNN is equivalent to that of the TM. * We prove that a bounded precision TRNN requires \(n+1\) neurons in order to recognize any regular grammar, where \(n\) is the total number of states in a deterministic finite automaton (DFA). The remainder of the work is organized as follows. We first describe the background and notation, presenting the Turing machine (TM) definition. We then define the TRNN architecture and establish its equivalence with a TM, before turning to the TRNN architecture with bounded precision and weights and establishing an equivalence to the DFA. We then present experiments on the Tomita grammar datasets and, finally, draw conclusions. 
## Background and Notation In the theory of computation, Turing machines (TMs) sit at the top of Chomsky's hierarchy as they can accept Type-0 unrestricted grammars. Problems that TMs cannot solve are considered to be undecidable. Conversely, if a problem is decidable, then there exists a Turing machine that can solve it. If a system of rules formulated to solve a problem can be simulated using a Turing machine, then the system is said to be Turing-complete (TC). Thus, we can see that the concept of Turing-completeness is essential for advancing a fundamental understanding of deep neural networks. A Turing machine is formally defined as a tuple \(M=\{Z,\Sigma,R,\delta,q_{0},b,F\}\) where \(Z\) is a finite and non-empty set of states, \(\Sigma\) is the set of input alphabets, and \(b\) is the blank symbol. \(R\) is a non-empty set of tape alphabets such that \(\Sigma\subseteq R\backslash\{b\}\). \(q_{0}\in Z\) is the starting state and \(F\subseteq Z\) is the set of final or accepting states. The transition function \(\delta\) is a partial function defined as \(\delta:Z\backslash F\times R\to Z\times R\times\{-1,0,1\}\). We will consider a TM with \(m\) tape symbols and \(n\) states that can be defined using a finite set as follows: \[\delta=\{<r_{i},z_{j}|r_{u},z_{v}A>\;|\;i,u=1,2,\ldots,m,\;j,v=1,2,\ldots,n\} \tag{1}\] where \(r_{i}\) represents the read symbol extracted from the tape, \(z_{j}\) represents the controller's current state, and \(A\) represents the action of the tape. As noted, each tuple in \(\delta\) represents a transition rule or function. For instance, while reading \(r_{i},z_{j}|r_{u},z_{v}A\), the control head in state \(z_{j}\), after reading symbol \(r_{i}\) from the tape, will perform one of three actions: **1)** delete the current value \(r_{i}\) from the tape and rewrite the current tape symbol as \(r_{u}\), **2)** change its state from \(z_{j}\) to the next state \(z_{v}\), and **3)** move the head position according to the value of \(A\); if \(A=1\), then move one unit to the right, if \(A=-1\), then move one unit to the left, and if \(A=0\), then stay in the same state and do not move. We introduce an additional state known as the "halt state" to model illegal strings. Therefore, an input string is said to be accepted by Turing machine \(M\) if \(M\) reaches the final state or eventually halts and stays in that state forever. Hence, one can model this using the following transition rule or function: \(<r_{i},z_{h}|r_{i},z_{h}A>\), where \(A=0\). A second (\(2^{nd}\)) order, or tensor, recurrent neural network (TRNN) is an RNN consisting of \(n\) neurons; ideally, \(n\) should be equivalent to the number of states in a TM grammar. The value of neuron \(i\in\{1,2,\ldots,n\}\) at time \(t\) is denoted \(z_{i}^{t}\); the state transition of the TRNN is computed by first taking an affine transformation followed by a nonlinear activation function as follows: \[z_{i}^{t+1}=h_{H}(W_{ijk}z_{j}^{t}x_{k}^{t}+b_{i}) \tag{2}\] where \(z^{t}\) is the state vector of the TRNN such that \(z^{t}\in\mathbb{Q}^{n}\) and \(\mathbb{Q}\) is the set of rational numbers, \(W\in\mathbb{R}^{n\times n\times n}\) is the \(2^{nd}\)-order recurrent weight tensor, \(b\in\mathbb{R}^{n}\) are the biases, \(h_{H}\) is a nonlinear activation function such as the logistic sigmoid, and \(x^{t}\) is the input symbol at time \(t\). 
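To make the state transition in Eq. (2) concrete, the following sketch (an illustration we add here, not code from the paper; the sizes \(n\), \(m\) and the random weight tensor are hypothetical placeholders) evaluates one second-order update with one-hot state and input vectors:

```python
# Minimal sketch of one TRNN update (Eq. 2); W, b, n, m are illustrative.
import numpy as np

def h(v, H=50.0):
    # Logistic sigmoid with gain H; large H pushes activations toward {0, 1}.
    return 1.0 / (1.0 + np.exp(-H * v))

n, m = 4, 3                        # n neurons (states), m input symbols
rng = np.random.default_rng(0)
W = rng.normal(size=(n, n, m))     # second-order (tensor) recurrent weights
b = rng.normal(size=n)             # biases

z = np.zeros(n); z[0] = 1.0        # one-hot state vector z^t
x = np.zeros(m); x[1] = 1.0        # one-hot input symbol x^t

# Contract over the state index j and the input index k (Eq. 2).
z_next = h(np.einsum("ijk,j,k->i", W, z, x) + b)
print(z_next)
```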
For simplicity, we will consider the saturated-linear function in this work, which is represented as follows: \[h_{H}(z)=\sigma(z):=\left\{\begin{array}{ll}0&\mbox{if}\quad z<0\\ z&\mbox{if}\quad 0\leq z\leq 1\\ 1&\mbox{if}\quad z>1.\end{array}\right. \tag{3}\] Therefore, \(z^{t}\in(\mathbb{Q}\cap[0,1])^{n}\) for all \(t>0\). In the appendix, we provide a notation table that contains all symbols used in this work alongside their definitions. ### Tensor Recurrent Neural Networks are Turing Complete Later, we will show that, without loss of generality and inducing only a very small error \(\epsilon\), one can obtain similar results with a sigmoid activation. To work with the sigmoid, we will restate the following lemma from [19]: **Lemma 0.1**: _Let \(\bar{Z}\) be a state tensor with components \(\bar{Z}_{i}\in\{0,1\}\), and let \(Z\) be a tensor satisfying:_ \[||Z-\bar{Z}||_{L^{\infty}}\leq\epsilon_{0}\] _for some \(\epsilon_{0}<\frac{1}{2}\)._ _Then, for all sufficiently small \(\epsilon>0\), for all \(H\) sufficiently large depending on \(\epsilon_{0}\) and \(\epsilon\),_ \[\max_{i}\left|\bar{Z}_{i}-h_{H}\left(Z_{i}-\frac{1}{2}\right)\right|\leq\epsilon.\] _where \(h_{H}(x)=\frac{1}{1+e^{-Hx}}\)._ Note that \(h_{H}(0)=\frac{1}{2}\) and \(h_{H}(x)\) decreases to \(0\) as \(Hx\rightarrow-\infty\) and increases to \(1\) as \(Hx\rightarrow\infty\), and that the scalar \(H\) is a sensitivity parameter. In practice, we will often show that \(x\) is bounded away from \(0\) and then assume \(H\) to be a positive constant sufficiently large such that \(h_{H}(x)\) is as close as desired to either \(0\) or \(1\), depending on the sign of the input \(x\). **Proof:** For simplicity, we will assume \(\bar{Z}\) to be a tensor represented in vector form. Since \(\epsilon_{0}<\frac{1}{2}\), choose \(H\) sufficiently large such that: \[h_{H}\left(-\left(\frac{1}{2}-\epsilon_{0}\right)\right)=1-h_{H}\left(\frac{1}{2}-\epsilon_{0}\right)\leq\epsilon.\] Note that we have the following: \[\left|\bar{Z}_{i}-h_{H}\left(Z_{i}-\frac{1}{2}\right)\right|=\left|\bar{Z}_{i}-h_{H}\left(\bar{Z}_{i}-\frac{1}{2}+(Z_{i}-\bar{Z}_{i})\right)\right|.\] Now pick an arbitrary index \(i\). If at position \(i\) we obtain \(\bar{Z}_{i}=1\), then the following condition is obtained, \[\left|\bar{Z}_{i}-h_{H}\left(Z_{i}-\frac{1}{2}\right)\right|=1-h_{H}\left(\frac{1}{2}+(Z_{i}-\bar{Z}_{i})\right)\leq 1-h_{H}\left(\frac{1}{2}-\epsilon_{0}\right)\leq\epsilon.\] However, if at position \(i\) we have \(\bar{Z}_{i}=0\), then we obtain the condition below: \[\left|\bar{Z}_{i}-h_{H}\left(Z_{i}-\frac{1}{2}\right)\right|=h_{H}\left(-\frac{1}{2}+(Z_{i}-\bar{Z}_{i})\right)\leq h_{H}\left(-\left(\frac{1}{2}-\epsilon_{0}\right)\right)\leq\epsilon\] In both cases, we can see that the thresholded vector \(h_{H}(Z-\frac{1}{2})\) is close to the ideal binary vector \(\bar{Z}\), which proves the lemma. \(\Box\) In order to simulate a Turing machine \(M\) by a TRNN, we will construct a neural network that can encode the configuration of the TM with a tape using unbounded precision. We wish to create a binary mapping between the configuration of \(M\) (with the transition function) and the neural activities of a TRNN, such that a one-to-one mapping is preserved. 
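Before turning to the simulator construction, a quick numerical check of Lemma 0.1 may be helpful; the sketch below (our illustration, with arbitrary choices of \(\epsilon_{0}\) and the gains \(H\)) perturbs a binary state vector and shows that \(h_{H}(Z-\frac{1}{2})\) recovers it as \(H\) grows:

```python
# Numerical illustration of Lemma 0.1; eps0 and the H values are arbitrary.
import numpy as np

def h(v, H):
    return 1.0 / (1.0 + np.exp(-H * v))

Z_bar = np.array([1.0, 0.0, 1.0, 0.0])               # ideal binary vector
eps0 = 0.3                                            # perturbation, < 1/2
Z = Z_bar + eps0 * np.array([-1.0, 1.0, -1.0, 1.0])  # worst-case noise

for H in (5, 20, 100):
    err = np.max(np.abs(Z_bar - h(Z - 0.5, H)))
    print(f"H = {H:3d}  max error = {err:.2e}")  # shrinks toward 0 as H grows
```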
The neural network simulator (\(S\)) constructed using a TRNN contains the following components or tuples to model \(M\): \[S=(n,z,W^{z},h_{H},z_{0}) \tag{4}\] where the transition table (\(m\times n\)) is indexed using rows (\(n=\) total number of neurons) and columns (\(m=\) tape size), \(Z\) is the set of all possible states (where \(z=z_{ij},\;-n/2<i\leq n/2,\;0\leq j\leq n-1\)), \(h_{H}\) is the activation function, and \(z_{0}\) is the initial or start state. Therefore, starting from the start state (\(z_{0}\)) at \(t=0\), the model recursively evolves as \(t\) increases, according to Equation 2. Thus, when the TRNN reaches a fixed point, i.e., a halting state or dead state, the network dynamics are represented as: \[z_{i}^{t}=h_{H}(W_{ijk}z_{j}^{t}x_{k}^{t}+b_{i}) \tag{5}\] where \(z_{i}^{t}\) stays in the same state. Now we will prove that a TRNN with two steps (or cycles) is Turing complete and only requires \(m+2n+1\) unbounded precision neurons. **Theorem 0.2**: _Given a Turing machine \(M\), with \(m\) symbols and \(n\) states, there exists a \(k\)-neuron unbounded precision TRNN with locally connected second-order connections, where \(k=m+2n+1\), that can simulate any TM in O(\(T^{2}\)) turns or cycles._ **Proof:** Recall that the transition rules of a TM \(M\) are defined using \(\delta\) as follows: \[\delta=\{<r_{i},z_{j}|r_{u},z_{v}A>\;|\;i,u=0,1,2,\ldots,m,\;j,v=1,2,\ldots,n\} \tag{6}\] where we have introduced an additional symbol \(r_{0}\), which represents a blank tape symbol. It follows that the neural or state transition of a TRNN can be represented as: \[Z_{i,j}^{t+1}=h_{H}\Big(\sum_{j_{1},j_{2}=0}^{m+2n}\big(W_{j,j_{1},j_{2}}Z_{i-1,j_{1}}^{t}Z_{i,j_{2}}^{t}+W_{j,j_{1},j_{2}}^{Z}Z_{i,j_{1}}^{t}Z_{i+1,j_{2}}^{t}\big)+\theta\Big)\] where \(h_{H}\) is the sigmoid activation function and the neural activities are close to either \(0\) or \(1\). \(F\) is another activation function -- the extended version of \(h_{H}\) -- such that: \[F=2\delta_{j_{0}}\Big(1-\sum_{j_{1}=0}^{m+2n}Z_{i,j_{1}}^{t}+\epsilon_{1}\Big)\big(Z_{i-1,j_{0}}^{t}+Z_{i+1,j_{0}}^{t}\big)+\delta_{j_{0}}\Big(1-\sum_{j_{1}=0}^{m+2n}Z_{i-1,j_{1}}^{t}+\epsilon_{2}+1+\sum_{j_{1}=0}^{m+2n}Z_{i+1,j_{1}}^{t}+\epsilon_{3}\Big)Z_{i,0}^{t}-(1.5-\epsilon_{4})\] where each \(\epsilon_{i}\), \(i\in\{1,2,\ldots,4\}\), is a small error value that shifts neuron values to be close to \(0\) or \(1\). Note that the synaptic weights of a TRNN are locally connected and each neuron at the \(i\)-th position or index is only connected to the neurons at its immediate left (\((i-1)\)-th) or right (\((i+1)\)-th). If we map these values to a 2D plane, we can easily see that the weight values are uniformly distributed across \(i\) (or the \(i\)-axis), which is independent of the column position of \(i\). If we apply Theorem 0.2 recursively until the final or halt state, we prove that the TRNN can simulate any TM in O(\(T^{2}\)) turns or cycles. One could also replace \(F\) with the \(\tanh\) activation function, since \(\tanh(Hx)=2h_{2H}(x)-1\). The mapping for a TRNN can be represented by \(T_{w,b}\), where \(T_{w,b}^{2}(z)=T_{M}(z)\). This means that \(2\) steps of a TRNN simulate one step of a TM \(M\). Let us try to visualize the simulation. First, we will assign the state mapping of Turing machine \(M\) to the subset of states \(z_{t}\) of the TRNN. Since \(k=m+2n+1\) is the maximum number of neurons with second-order weights, each column of neurons would represent a \(k\)-dimensional vector with activation function \(h_{H}\), such that neuron values converge toward either \(0\) or \(1\). 
Ideally, we need \(m+2n\) neurons with values equivalent to or close to \(0\) and only one neuron with values close to \(1\) to represent the states of TM \(M\). We represent the neuron with values close to \(1\) as \(RN_{1}\). Therefore, based on the position of \(RN_{1}\), we can have each column representing either the control head or the tape symbol. Let us see how \(RN_{1}\) functions in our construction. Assume the position or index \(j\) of \(RN_{1}\) satisfies \(0\leq j\leq m\); then, the current column would correspond to the tape symbol \(r_{j}\). One important thing to remember is that one can imagine TM \(M\) in matrix form by considering the row to correspond to the transition rule and the column to correspond to the tape, which is assumed to be infinite. Thus, a blank symbol would be represented when the position of \(RN_{1}\) is at \(j=0\). On the other hand, if the position of \(RN_{1}\) is between \(m+1\leq j\leq m+2n\), then the column would correspond to a control head representing the \(((j-m+1)/2)\)-th state. As we can see, we have introduced two additional vectors using \(RN_{1}\). In other words, the \(RN_{1}\) position at \(j\) now models two values, one at \(j=m+2q\) and one at \(j=m+2q-1\), that ideally represent the same control head configuration at the \(q\)-th state extracted from the TM \(M\). As discussed before, this construction requires \(2\) steps/cycles to represent one step/cycle of a TM \(M\). In essence, this simply states that, to simulate a tape's "left operation", we will need two steps - the first is to simulate the left move command of the control head and the second is to create a buffer that stores the "left operation" action. We can perform a similar operation to get the "right operation" of the tape, thus completing the construction. It should be noted that, under this arrangement, we can observe that the product of a blank symbol and the states that lie within the central range is equivalent to \(0\), rather than being close to \(1\). This is where the special threshold function \(F\) comes into play - it will ensure that the central state lies within the range and does not deviate away from TM \(M\)'s configuration. Next, we will derive the lower bound of a TRNN for recognizing a small universal Turing machine (UTM). Prior work has proposed four small UTMs [12]; one such configuration contains \(6\) states and \(4\) symbols and can simulate any Turing machine in time O(\(T^{6}\)), where \(T\) represents the number of steps that the TM requires to compute a final output. As shown in that work, the \(UTM_{6,4}\) or \(UTM_{7,3}\) is a universal TM, which we state in the following theorem: **Theorem 0.3**: _[_12_]_ _Given a deterministic single-tape Turing machine \(M\) that runs in time \(t\), any of the above UTMs can simulate the computation of \(M\) using space \(O(n)\) and time \(O(t^{2})\)._ We now use Theorem 0.3 to simulate a UTM, thus showing the minimal number of neurons required by a TRNN to recognize any TM grammar, and prove the following corollary: **Corollary 0.4**: _There exists a \(17\)-neuron unbounded-precision TRNN that can simulate any TM in O(\(T^{4}\)), where \(T\) is the total number of turns or steps required by the TM to compute the final output._ Now we show a much stronger bound - the TRNN can simulate any TM in real time (O(\(T\))) and the UTM in O(\(T^{2}\)). Note that the previously derived bound for the RNN without memory is O(\(T^{8}\)); with memory, it is O(\(T^{6}\)). 
**Theorem 0.5**: _Given a Turing machine \(M\), with \(m\) symbols and \(n\) states, there exists a \(k\)-neuron unbounded precision TRNN with two sets of locally connected second-order connections, where \(k=m+n+1\), that can simulate any TM in real time, i.e., in O(\(T\)) cycles._ **Proof sketch:** We add additional weights (\(W^{a}\)) that correspond to a blank tape and the \(m\) symbols of the Turing machine, and another set of weights (\(W\)) that keeps track of the empty or blank state (i.e., of the states that the control head is not referring to at any given time step \(t\)) as well as the controller states. The addition of an extra set of second-order weight matrices makes this model different from the one in Theorem 0.2. Now the state updates for the TRNN are computed as follows: \[Z_{i,j}^{t+1}=h_{H}\Big(\sum_{j_{1}=0}^{m}\sum_{j_{2}=0}^{n}W_{j,j_{1},j_{2}}Z_{i+1,j_{1}}^{t}A_{i+1,j_{2}}^{t}+\theta\Big) \tag{7}\] \[A_{i,j}^{t+1}=h_{H}\Big(\sum_{j_{1}=0}^{m}\sum_{j_{2}=0}^{n}W_{j,j_{1},j_{2}}^{a}Z_{i,j_{1}}^{t}A_{i,j_{2}}^{t}+\theta^{a}\Big) \tag{8}\] where \(h_{H}\) is the sigmoid activation function, while \(\theta\) and \(\theta^{a}\) are the extended threshold functions. The rest of the construction follows Theorem 0.2, where the left and right tape operations are done in real time. **Corollary 0.6**: _There exists an \(11\)-neuron unbounded-precision TRNN that can simulate any TM in O(\(T^{2}\)), where \(T\) is the total number of turns or steps required by the TM in order to compute the final output._ It is worth noting that prior results have shown that RNNs are Turing complete with \(40\) unbounded precision neurons. Notably, they operate in O(\(T^{3}\)) cycles and, with a UTM, in O(\(T^{6}\)). Our proof shows that the smallest RNN with only \(11\) **unbounded precision** neurons is Turing complete and can work in real time **O(\(T\))**. More precisely, we can construct a UTM with \(6\) states and \(4\) symbols that can simulate any TM in O(\(T^{2}\)) cycles. In the next section, we will show the computational limit of the TRNN under bounded precision. ### Tensor RNNs with Bounded Precision Here, we will revisit some of the tighter bounds placed on TRNNs, based on notions from [13], and show that TRNNs are strictly more powerful than modern-day RNNs such as LSTMs and GRUs. To do this, we will look at a few definitions with respect to fixed points and the sigmoid activation function. We will show a vectorized dynamic version of the DFA, which closely resembles the workings of neural networks. 
Thus, such a DFA can be represented as follows: **Definition 0.7**: (DFA, vectorized dynamic version) _At any time \(t\), the state of a deterministic finite-state automaton is represented by the state vector_ \[\bar{Q}^{t}\in\{0,1\}^{n},\] _whose components \(\bar{Q}^{t}{}_{i}\) are_ \[\bar{Q}^{t}{}_{i}=\left\{\begin{array}{cc}1&q_{i}=q^{t}\\ 0&q_{i}\neq q^{t}.\end{array}\right.\] _The next state vector \(\bar{Q}^{t+1}\) is given by the dynamic relation_ \[\bar{Q}^{t+1}=W^{t+1}\cdot\bar{Q}^{t},\] _where the transition matrix \(W^{t+1}\), determined by the transition tensor \(W\) and the input vector \(I^{t+1}\), is_ \[W^{t+1}=W\cdot I^{t+1}.\] _The input vector \(I^{t+1}\in\{0,1\}^{m}\) has the components:_ \[I^{t+1}{}_{j}=\left\{\begin{array}{cc}1&\mathfrak{a}_{j}=\mathfrak{a}^{t+1}\\ 0&\mathfrak{a}_{j}\neq\mathfrak{a}^{t+1},\end{array}\right.\] _and the transition tensor \(W\) has the components:_ \[W_{i}{}^{jk}=\left\{\begin{array}{cc}1&q_{i}=\delta(q_{j},\mathfrak{a}_{k})\\ 0&q_{i}\neq\delta(q_{j},\mathfrak{a}_{k}).\end{array}\right.\] _In component or neural network form, the dynamic relation describing the next state transition can then be rewritten as:_ \[\bar{Q}^{t+1}{}_{i}=\sum_{jk}W_{ijk}I^{t+1}{}_{k}\bar{Q}^{t}{}_{j}. \tag{9}\] **Definition 0.8**: _Let \(f:Z\to Z\) be a mapping in a metric space. A point \(z_{f}\in Z\) is called a fixed point of the mapping if \(f(z_{f})=z_{f}\)._ We are interested in showing that the TRNN has a fixed point mapping that ensures that the construction stays stable. To achieve this, we define the stability of \(z_{f}\) as follows: **Definition 0.9**: _A fixed point \(z_{f}\) is considered stable if there exists a range \(R=[i,j]\in Z\) such that \(z_{f}\in R\) and the iterations of \(f\) converge towards \(z_{f}\) for any starting point \(z_{s}\in R\)._ It is known that a continuous mapping \(f:Z\to Z\) has at least one fixed point. These definitions are important for showing that a TRNN with the sigmoid activation converges to a fixed point and stays stable. Next, using fixed point analysis and Lemma 0.1, we will prove that a TRNN can simulate any DFA with bounded precision and weights. In particular, we will formally establish: **Theorem 0.10**: _Given a DFA \(M\) with \(n\) states and \(m\) input symbols, there exists a \(k\)-neuron bounded precision TRNN with sigmoid activation function (\(h_{H}\)), where \(k=n+1\), initialized from an arbitrary distribution, that can simulate any DFA in real-time O(\(T\))._ **Proof:** It can be seen that, at time \(t\), the TRNN takes in the entire transition table as input. This means that a neural network simulator (\(Z\)) consists of the current state (\(Q\)) at time \(t\) extracted from DFA \(M\), the input symbol, and the current operation. The network dynamically evolves to determine the next state \(Z^{t+1}\) at \(t+1\), which ideally represents the next state \(Q^{t+1}\) of the DFA. In particular, when we compare Equation 9, which represents the state of the DFA, with Equation 5, which represents the state of the TRNN, we clearly see that one follows from the other. Thus, by induction, we observe that the TRNN is equivalent to the DFA using a differentiable activation function, since, for large values of the gain \(H\), the activation function will reach a stable point and converge to either \(0\) or \(1\), based on the network dynamics. Similar results have been shown in prior work [13]. 
Specifically, this effort demonstrated that, using the sigmoid activation function \((h_{H})\), one can obtain two stable fixed points in the range \([0,1]\), such that the state machine construction using a TRNN stays stable as long as the weight values remain within the basin of attraction of a fixed point. It is important to note that prior saturated function analysis results [11, 23] do not provide any bounds, and the results are only valid for binary weights. On the other hand, we have derived the true computational power of (tensor) RNNs and our results are much more robust, showing the number of bounded precision neurons required to recognize any DFA in real time. To the best of our knowledge, first-order RNNs require at least \(2mn-m+3\) neurons in order to recognize any DFA, where \(m\) represents the total number of input symbols and \(n\) represents the number of states of the DFA. In contrast, the TRNN requires only \(n+1\) neurons to recognize any DFA. This is true even with weight values that are initialized using a Gaussian distribution. Finally, our construction shows that the TRNN can encode a transition table into its weights; thus, insertion and extraction of rules is much more stable with the TRNN. As a result, the model is also desirably **interpretable** by design and extraction. ## Experimental Setup and Results We conduct experiments on the \(7\) Tomita grammars, which fall under regular grammars and are considered to be the standard benchmark for evaluating memory-less models. We randomly sample strings of lengths up to \(50\) in order to create train and validation splits. Each split contains \(2000\) samples obtained by randomly sampling without replacement from the entire distribution. This is done so as to ensure that both distributions are far apart from each other. Second, we create two test splits with \(1000\) samples in each partition. Test set #1 contains samples different from prior splits and has strings up to length \(60\), whereas test set #2 contains samples of length up to \(120\). We experiment with two widely popular architectures to serve as our baseline models: the long short-term memory (LSTM) network and the neural transformer. Mainly, we perform a grid search over the learning rate, number of layers, number of hidden units, and number of heads in order to obtain optimal (hyper-)parameter settings. Table 5 provides the range and parameter settings for each model. To better demonstrate the benefits of the TRNN and to show that the empirical model closely follows our theoretical results, we only experimented with a single-layer model with a maximum of \(32\) hidden units. Weights were initialized using Xavier initialization for the baseline models, whereas for the TRNN, weights were initialized from a centered Gaussian distribution. In Table 2, we \begin{table} \begin{tabular}{|c|c|} \hline \# & Definition \\ \hline 1 & \(a^{*}\) \\ 2 & \((ab)^{*}\) \\ 3 & An odd number of \(a\)’s must be followed by an even number of \(b\)’s \\ 4 & All strings without the trigram \(aaa\) \\ 5 & Strings \(I\) where \(\#_{a}(I)\) and \(\#_{b}(I)\) are even \\ 6 & Strings \(I\) where \(\#_{a}(I)\equiv_{3}\#_{b}(I)\) \\ \hline \end{tabular} \end{table} Table 1: Definitions of the Tomita languages (rows 5–7 shown with count notation; row 7 is \(b^{*}a^{*}b^{*}a^{*}\)). Let \(\#_{\sigma}(I)\) denote the number of occurrences of symbol \(\sigma\) in string \(I\). Let \(\equiv_{3}\) denote equivalence mod \(3\). 
observe that the transformer-based architecture struggles to recognize longer strings, despite achieving \(100\)% accuracy on the validation splits. As has been shown in prior efforts, earlier transformer-based architectures depend heavily on positional encodings, which work well for machine translation. Nevertheless, for grammatical inference, one needs to go beyond the input distribution in order to efficiently recognize longer strings. Since positional encodings do not contain information on longer strings (or on those longer than what is available in the training set), a transformer model is simply not able to learn the state machine and thus struggles to recognize the input grammar. In addition, transformers need more parameters in order to recognize the input distribution; this is evident from Table 5, where we show that transformers, even with a large number of parameters, struggle to learn complex grammars such as Tomita \(3\), \(5\), and \(6\). However, as we can see from Table 2, the LSTM matches TRNN performance but requires more parameters (as is also evident in Table 5, which reports the best settings for the LSTM). Third, when tested on longer strings, it becomes evident that the TRNN better understands the rules that can be inferred from the data, as opposed to simply memorizing the patterns (and thus only overfitting). This is clearly seen in Table 3, where the LSTM performance starts dropping (whereas, in contrast, minimal loss is observed for the tensor model). Furthermore, in the next section, we present a stability analysis showing that one can stably extract automata from a TRNN. In Figure 1, we show the comparison between ground-truth (or oracle) and extracted state machines. These simulation results support our theoretical findings that a first-order neural model, e.g., the LSTM or the transformer, requires more parameters to learn a DFA than the TRNN. In other words, for a DFA with \(100\) states and \(70\) input symbols, theoretically, a tensor RNN would only require \(101\) neurons in order to successfully recognize the grammar. First-order RNNs, on the other hand, such as the LSTM, would require \(2mn-m+3n+1\), or \(14111\), neurons. This is two orders of magnitude larger than that required by a TRNN, and such a trend is, as we have shown, empirically observable. We follow the approach of [14] to extract automata. Note that we observed slightly better performance when using a self-organizing map, as opposed to simple K-means. We observe that the vanilla RNN and the LSTM benefit from training for a longer duration in terms of extraction. In Table 4, we observe that all models reach almost \(100\%\) accuracy on validation splits within \(15\) epochs; however, training for longer always helps with stable automata extraction. For instance, if we use early stopping and stop the training whenever the model reaches \(100\)% accuracy on the validation set for \(5\) consecutive epochs, we observe that the automata extracted from the model are unstable. Another key observation is that the DFAs extracted from such models have a large number of states and, as a result, minimizing them takes a huge amount of time. We set a \(1500\)-second threshold/cut-off time where, if a model is unable to extract a DFA within this given timeframe, we consider it unstable and report this finding. For instance, for the LSTM, the success rate drops to \(4/10\), while for the RNN, it is \(3/10\). 
On the other hand, the TRNN can still extract the DFA, but with exponentially more states, and minimizing it does not necessarily produce the minimal DFA. Thus, training for longer durations always helps, leading to more stable extraction. ## Conclusion This work presents a theoretical breakthrough demonstrating that a (tensor) recurrent neural network (RNN) with only \(11\) unbounded precision neurons is Turing-complete and, importantly, operates in real time. This is the first memory-less model that works in real time and requires such a minimal number of neurons to achieve equivalence. In addition, we show that an \(11\)-neuron RNN with second-order synaptic connections can simulate any Turing machine in just \(2\) steps, while previous models with \(40\) neurons require \(6\) steps to achieve the same result. We also prove that, with bounded precision, a tensor RNN (TRNN) can simulate any deterministic finite automaton (DFA) using only \(n+1\) neurons, where \(n\) is the total number of states of the DFA. We show that our construction is more robust than previous saturated function analysis approaches and, notably, provides an upper bound on the number of neurons and weights needed to achieve equivalence. Finally, we evaluate our TRNN model on the challenging Tomita grammars, demonstrating its superior performance compared to transformers and LSTMs, even when using fewer parameters. Our results highlight the potential of TRNNs for stable automata extraction and mark a promising direction for future responsible and interpretable AI research. \begin{table} \begin{tabular}{l|r|r||r|r||r|r} & \multicolumn{2}{c||}{\(Attn\)} & \multicolumn{2}{c||}{\(LSTM\)} & \multicolumn{2}{c}{**TRNN (ours)**} \\ & **V-Acc** & **Itr** & **V-Acc** & **Itr** & **V-Acc** & **Itr** \\ \hline _Tm-1_ & \(100\) & \(12\) & \(100\) & \(5\) & \(100\) & \(12\) \\ _Tm-2_ & \(100\) & \(11\) & \(100\) & \(4\) & \(100\) & \(7\) \\ _Tm-3_ & \(98.99\) & \(18\) & \(100\) & \(12\) & \(100\) & \(18\) \\ _Tm-4_ & \(100\) & \(9\) & \(100\) & \(9\) & \(100\) & \(16\) \\ _Tm-5_ & \(99.99\) & \(13\) & \(100\) & \(9\) & \(100\) & \(15\) \\ _Tm-6_ & \(100\) & \(8\) & \(100\) & \(12\) & \(100\) & \(18\) \\ _Tm-7_ & \(100\) & \(13\) & \(100\) & \(21\) & \(100\) & \(29\) \\ \end{tabular} \end{table} Table 4: Percentage of correctly classified strings for ANNs trained on the Tomita languages (across \(5\) trials). We report the mean accuracy for each model on the validation set, along with the mean number of epochs required by each model to achieve perfect validation accuracy.
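As a closing illustration of the rule-insertion property discussed above (Definition 0.7 and Theorem 0.10), the sketch below, which we add here as a hypothetical example, encodes the two-state parity DFA over \(\{a,b\}\) into a transition tensor \(W\) and simulates it with one-hot algebra:

```python
# Parity DFA (even number of a's) encoded per Definition 0.7 / Eq. (9).
import numpy as np

n_states, n_symbols = 2, 2                 # q0 = even, q1 = odd; symbols a, b
W = np.zeros((n_states, n_states, n_symbols))
W[1, 0, 0] = W[0, 1, 0] = 1.0              # reading 'a' flips the state
W[0, 0, 1] = W[1, 1, 1] = 1.0              # reading 'b' keeps the state

def run(string):
    q = np.array([1.0, 0.0])               # start state q0
    for ch in string:
        x = np.array([1.0, 0.0]) if ch == "a" else np.array([0.0, 1.0])
        q = np.einsum("ijk,j,k->i", W, q, x)   # Eq. (9)
    return q

print(run("aab"))  # [1. 0.]: two a's, still in q0 (accepted)
print(run("ab"))   # [0. 1.]: one a, in q1 (rejected)
```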
2309.09171
On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks
The Riemann hypothesis (RH) is a long-standing open problem in mathematics. It conjectures that non-trivial zeros of the zeta function all have real part equal to 1/2. The extent of the consequences of RH is far-reaching and touches a wide spectrum of topics including the distribution of prime numbers, the growth of arithmetic functions, the growth of Euler's totient, etc. In this note, we revisit and extend an old analytic criterion of the RH known as the Nyman-Beurling criterion which connects the RH to a minimization problem that involves a special class of neural networks. This note is intended for an audience unfamiliar with RH. A gentle introduction to RH is provided.
Soufiane Hayou
2023-09-17T05:50:12Z
http://arxiv.org/abs/2309.09171v1
# On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks ###### Abstract The Riemann hypothesis (\(\mathcal{RH}\)) is a long-standing open problem in mathematics. It conjectures that non-trivial zeros of the zeta function all lie on the line \(\text{Re}(z)=1/2\). The extent of the consequences of \(\mathcal{RH}\) is far-reaching and touches a wide spectrum of topics including the distribution of prime numbers, the growth of arithmetic functions, the growth of Euler's totient, etc. In this note, we revisit and extend an old analytic criterion of the \(\mathcal{RH}\) known as the Nyman-Beurling criterion which connects the \(\mathcal{RH}\) to a minimization problem that involves a special class of neural networks. This note is intended for an audience unfamiliar with \(\mathcal{RH}\). A gentle introduction to \(\mathcal{RH}\) is provided. ## 1 Introduction The Riemann hypothesis conjectures that the non-trivial zeros of the Riemann zeta function are located on the line \(\text{Re}(z)=\frac{1}{2}\) in the complex plane \(\mathbb{C}\). This is a long-standing open problem in number theory first formulated by (Riemann, 1859). The Riemann zeta function was first defined for complex numbers \(z\) with a real part greater than \(1\) by \(\zeta(z)=\sum_{n=1}^{\infty}\frac{1}{n^{z}},z\in\mathbb{C},\text{Re}(z)>1\). However, it is the extension of the zeta function \(\zeta\) to the whole complex plane \(\mathbb{C}\) that is considered in the statement of \(\mathcal{RH}\). This extension is called the _analytic continuation_ of the zeta function (details are provided in Appendix A). There is strong empirical evidence that \(\mathcal{RH}\) holds. Recent numerical verification by Platt and Trudgian (2021) showed that \(\mathcal{RH}\) is at least true in the region \(\{z=a+ib\in\mathbb{C}:a\in(0,1),b\in(0,\gamma]\}\) where \(\gamma=3\cdot 10^{12}\), meaning that all zeros of the zeta function with imaginary parts in \((0,\gamma]\) have a real part equal to \(\frac{1}{2}\). Several other theoretical insights seem to support \(\mathcal{RH}\); we invite the reader to check Appendix A for a short summary of relevant results and insights. In this note, we are interested in a specific criterion of the \(\mathcal{RH}\), i.e., an equivalent statement of \(\mathcal{RH}\). This criterion is known as the Nyman-Beurling criterion (Nyman, 1950; Beurling, 1955), which states that \(\mathcal{RH}\) holds if and only if a special class of functions is dense in \(L_{2}(0,1)\). This class of functions can be seen as a special kind of neural network with one-dimensional input. In this note, we show that the sufficient condition can be easily extended to \(L_{2}((0,1)^{d})\). Specifically, we introduce a new class of neural networks and show that \(\mathcal{RH}\) implies the density of this class in \(L_{2}((0,1)^{d})\) for any \(d\geq 2\). The necessary condition in general dimension \(d\geq 2\) remains an open question. ## 2 Riemann Hypothesis The Riemann zeta function was originally defined for complex numbers \(z\) with a real part greater than \(1\) by \[\zeta(z)=\sum_{n=1}^{\infty}\frac{1}{n^{z}},\quad z\in\mathbb{C},\text{Re}(z)>1. \tag{1}\] The above definition of the Riemann zeta function excludes the region of interest \(\{z\in\mathbb{C}:\text{Re}(z)=\frac{1}{2}\}\) since the series in Eq. (1) diverges when \(\text{Re}(z)\leq 1\). Indeed, \(\mathcal{RH}\) is stated for an extension of the zeta function on the whole complex plane \(\mathbb{C}\). 
This extension is called the analytic continuation, and it is unique by the Identity theorem (Walz, 2017). To give the reader some intuition of how such an extension is defined, let us show how we can extend \(\zeta\) to the region \(\{z\in\mathbb{C}:\text{Re}(z)>0\}\). Observe that the function \(\zeta\) satisfies the following identity \[(1-2^{1-z})\zeta(z)=\sum_{n=1}^{\infty}\frac{1}{n^{z}}-2\sum_{n=1}^{\infty}\frac{1}{(2n)^{z}}=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^{z}},\] where the right-hand side is defined for any complex number \(z\) such that \(\text{Re}(z)>0\). Using similar techniques, we can show that for any \(z\in\mathbb{C}\) such that \(\text{Re}(z)\in(0,1)\), \[\zeta(z)=2^{z}\pi^{z-1}\sin\left(\frac{\pi z}{2}\right)\Gamma(1-z)\zeta(1-z), \tag{2}\] which helps extend \(\zeta\) to complex numbers with negative real part. A step-by-step explanation of the analytic continuation of the \(\zeta\) function is provided in Appendix A. Zeros of the \(\zeta\) function. From Eq. (2), we have \(\zeta(-2k)=0\) for any integer \(k\geq 1\). The negative even integers \(\{-2k\}_{k\geq 1}\) are thus called _trivial zeros_ of the Riemann zeta function since the result follows from the simple fact that \(\sin\left(-\pi k\right)=0\) for all integers \(k\geq 1\). The other zeros of \(\zeta\) are called non-trivial zeros, and their properties remain poorly understood. The \(\mathcal{RH}\) conjectures that they all lie on the line \(\text{Re}(z)=\frac{1}{2}\). Riemann Hypothesis (\(\mathcal{RH}\))._All non-trivial zeros of \(\zeta\) have a real part equal to \(\frac{1}{2}\)._ Whether \(\mathcal{RH}\) holds is still an open question. The consequences of the Riemann hypothesis are various (see Appendix A) and numerous equivalent results exist in the literature. In the next section, we revisit an old analytic criterion of \(\mathcal{RH}\) that involves a special type of function that can be seen as a single-layer neural network. ### A _Neural Network_ Criterion for \(\mathcal{RH}\) For \(p>1\), \(d\in\mathbb{N}\backslash\{0\}\), and some set \(S\subset\mathbb{R}^{d}\), let \(L_{p}(S)\) denote the set of real-valued functions \(f\) defined on \(S\) such that \(|f|^{p}\) is Lebesgue integrable, i.e. \(L_{p}(S)=\{f:S\to\mathbb{R}:\int_{S}|f|^{p}d\mu<\infty\},\) where \(\mu\) is the Lebesgue measure on \(\mathbb{R}^{d}\). We denote by \(\|.\|_{p}\) the standard Lebesgue norm defined by \(\|f\|_{p}=\left(\int_{S}|f|^{p}d\mu\right)^{1/p}\) for \(f\in L_{p}(S)\). For some \(k\geq 1\), let \(I_{k}\stackrel{{ def}}{{=}}(0,1)^{k}=(0,1)\times\cdots\times(0,1)\) where the product contains \(k\) terms. Let \(\rho\) denote the fractional part function given by \(\rho(x)=x-\lfloor x\rfloor,\) for \(x\in\mathbb{R}\). Consider the following class of functions defined on the interval \(I_{1}\) \[\mathcal{N}=\{f(x)=\sum_{i=1}^{m}c_{i}\rho\left(\frac{\beta_{i}}{x}\right),x\in I_{1}:m\geq 1,c\in\mathbb{R}^{m},\beta\in I_{m},c^{T}\beta=0\}.\] In machine learning nomenclature, \(\mathcal{N}\) consists of single-layer neural networks with a constrained parameter space and a specific non-linearity (or activation function) that depends on the fractional part \(\rho\). The parameters \((c,\beta)\) belong to the set \(\{c\in\mathbb{R}^{m},\beta\in(0,1)^{m},c^{T}\beta=0\}\). The values \((\rho(\beta_{i}/x))_{1\leq i\leq m}\) act as the neurons (post-activations) in the neural network. In Fig. 1, we depict neuron values for different choices of \(\beta_{i}\). 
The graphs show fluctuations when \(x\) is close to \(0\), which should be expected since the function \(x\to\rho(\beta_{i}/x)\) fluctuates indefinitely between \(0\) and \(1\) as \(x\) goes to zero, whenever \(\beta_{i}\neq 0\). In Fig. 1 (right), we show an example of a function from the class \(\mathcal{N}\) given by \(f(x)=\rho(0.7/x)-\rho(0.3/x)-4\rho(0.1/x)\). We observe that \(f\) is a step function, which might be surprising at first glance. However, it is easy to see that \(\mathcal{N}\) consists only of step functions. This is due to the constraint on the parameters \(c,\beta\), and the fact that \(\rho(x)=x-\lfloor x\rfloor\). Now, we are ready to state the main results that draw an interesting connection between \(\mathcal{RH}\) and the class \(\mathcal{N}\). **Theorem 1** (Nyman (1950)): _The \(\mathcal{RH}\) is true if and only if \(\mathcal{N}\) is dense in \(L_{2}(I_{1})\)._ Beurling (1955) later extended this result by showing that for any \(p>1\), the \(\zeta\) function has no zeroes in the set \(\{z\in\mathbb{C}:\text{Re}(z)>1/p\}\) if and only if the set \(\mathcal{N}\) is dense in \(L_{p}(I_{1})\). **Theorem 2** (Beurling (1955)): _The Riemann zeta function is free from zeros in the half plane \(Re(z)>\frac{1}{p},1<p<\infty\), if and only if \(\mathcal{N}\) is dense in \(L_{p}(I_{1})\)._ The intuition behind this connection is rather simple. The number of fluctuations of the function \(x\to\rho(\beta/x)\) near \(0\) is closely related to the \(\zeta\) function. To understand the machinery of the proofs of Theorem 1 and Theorem 2, we provide a sketch of the proof by Beurling (1955) for the sufficient condition in Appendix B. Using the same techniques, we derive the following result on zero-free regions of the zeta function. **Lemma 1** (Nyman-Beurling zero-free regions): _Let \(f\in\mathcal{N}\) and \(\delta=\|1-f\|_{2}\) be the distance between the constant function \(1\) on \(I_{1}\) and \(f\). Then, the region \(\{z\in\mathbb{C},\text{Re}(z)>\frac{1}{2}\left(1+\delta^{2}|z|^{2}\right)\}\) is free of zeroes of the Riemann zeta function \(\zeta\)._ The condition that \(\mathcal{N}\) should be dense in \(L_{2}(I_{1})\) can be replaced by the following weaker condition: the constant function \(1\) on \(I_{1}\) can be approximated up to an arbitrary accuracy with functions from \(\mathcal{N}\). This is because from the constant function \(1\), one can construct an approximation of any step-wise function, which in turn can approximate any function in \(L_{2}(I_{1})\). A discussion on the empirical implications of Theorem 1 is provided in Appendix B. In the next section, we show that the sufficient condition of Theorem 2 can be easily generalized to networks with multi-dimensional inputs, i.e. the case \(d\geq 1\). ## 3 A sufficient condition in the multi-dimensional case Let \(d\geq 1\) and consider the following class of neural networks with inputs in \(I_{d}\), \[\mathcal{N}_{d}=\{f(x)=\sum_{j=1}^{d}\sum_{i=1}^{m}c_{i,j}\rho\left(\frac{\beta_{i,j}}{x_{j}}\right),x\in I_{d}:m\geq 1,c\in\mathbb{R}^{d\times m},\beta\in I_{d\times m},c^{T}\beta=0\},\] where \(c=(c_{1,1},c_{2,1},\ldots,c_{m,1},\ldots,c_{m,d})^{\top}\in\mathbb{R}^{md}\) is the flattened vector of \((c_{\cdot,j})_{1\leq j\leq d}\). Notice that we recover the Nyman-Beurling class \(\mathcal{N}\) when \(d=1\). 
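Before proceeding, a concrete illustration in the one-dimensional case may help; the following sketch (ours, not part of the original criterion) evaluates the example of Fig. 1 (right), checks the constraint \(c^{T}\beta=0\), and estimates \(\delta=\|1-f\|_{2}\) by Monte Carlo:

```python
# Evaluate f(x) = rho(0.7/x) - rho(0.3/x) - 4*rho(0.1/x) from Fig. 1 (right)
# and estimate ||1 - f||_2 on (0, 1) by Monte Carlo.
import numpy as np

c    = np.array([1.0, -1.0, -4.0])
beta = np.array([0.7,  0.3,  0.1])
assert abs(c @ beta) < 1e-12          # the class constraint c^T beta = 0

def f(x):
    u = beta[None, :] / x[:, None]    # shape (N, m)
    return (u - np.floor(u)) @ c      # rho(t) = t - floor(t), then combine

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)
delta = np.sqrt(np.mean((1.0 - f(x)) ** 2))   # Monte Carlo estimate of delta
print(f"estimated ||1 - f||_2 = {delta:.4f}")
```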
Using this class, we can generalize the zero-free region result given by Lemma 1 to a multi-dimensional setting in the case \(p=2\).1 Footnote 1: The choice of \(p=2\) is arbitrary, and a similar result to that of Theorem 2 can be obtained for any \(p>1\). **Lemma 2** (zero-free regions for general \(d\geq 1\)): _Let \(d\geq 1\) and \(f\in\mathcal{N}_{d}\). Let \(\delta=\|1-f\|_{2}\) be the \(L_{2}\) distance between the constant function \(1\) on \(I_{d}\) and \(f\). Then, the region \(\{z\in\mathbb{C},\mathrm{Re}(z)>\frac{1}{2}\left(1+\delta^{2/d}|z|^{2}\right)\}\) is free of zeroes of the Riemann zeta function \(\zeta\)._ In Fig. 2, we depict the zero-free regions from Lemma 2. The smaller the constant \(\delta\), the larger the region. The multi-dimensional input case (\(d\geq 2\)) can therefore be interesting if we can better approximate the constant function \(1\) with functions from \(\mathcal{N}_{d}\). More precisely, the result of Lemma 2 is relevant if, for some \(d\geq 2\), we could find \(\delta\) such that \(\delta^{2/d}<\delta_{1}\), where \(\delta_{1}\) is the approximation error in the one-dimensional case \(d=1\). In this case, the zero-free region obtained with \(d\geq 2\) will be larger than the one obtained with \(d=1\). We refer the reader to Section 4 for a more in-depth discussion about the empirical implications of the multi-dimensional case. Notice that if \(\delta\) can be chosen arbitrarily small, then the zero-free region in Lemma 2 can be extended to the whole half-plane \(\{\mathrm{Re}(z)>1/2\}\). This is a generalization of the sufficient condition of Theorem 2 in the multi-dimensional case. **Corollary 3** (Sufficient condition for \(d\geq 1\)): _Let \(d\geq 1\). Assume that the class \(\mathcal{N}_{d}\) is dense in \(L_{2}(I_{d})\). Then, the region \(\{\mathrm{Re}(z)>1/2\}\) is free of the zeroes of the Riemann zeta function \(\zeta\)._ ### Open problem: The necessary condition for \(d\geq 2\) By considering the class \(\mathcal{N}_{d}\), we generalized the sufficient condition of Beurling's criterion to the multi-dimensional input case \(d\geq 2\). However, it is unclear whether a similar necessary condition holds. Proving that \(\mathcal{RH}\) implies the density of \(\mathcal{N}_{d}\) in \(L_{2}(I_{d})\) is challenging. A function \(f\in\mathcal{N}_{d}\) can be expressed as \(f(x)=\sum_{i=1}^{d}f_{i}(x_{i})\) for \(x=(x_{1},\ldots,x_{d})^{\top}\in I_{d}\), where the \(f_{i}\) are functions with one-dimensional inputs. This special additive form of functions from \(\mathcal{N}_{d}\) makes it harder to use arguments similar to the one-dimensional case (Theorem 2) to prove density results. Figure 2: Zero-free regions of the form \(\{\mathrm{Re}(z)>\frac{1}{2}(1+\Delta|z|^{2})\}\) as stated in Lemma 1, Lemma 2, and Lemma 4. ## 4 Discussion on the Implications and Limitations In this section, we discuss some empirical implications of Lemma 1 and Lemma 2. Probabilistic zero-free regions. Notice that Lemmas 1 and 2 require access to the distance \(\|1-f\|_{2}\), which is generally intractable. However, we can approximate this quantity using Monte Carlo samples and obtain high-probability bounds for this norm. Hence, the best we can do with such a criterion is to verify the non-existence of zeroes of \(\zeta\) in some region _with high probability_. Indeed, using Hoeffding's inequality, we have the following result. **Lemma 4**: _Let \(d\geq 1\), \(N\geq 1\) and \(X_{1},X_{2},\ldots,X_{N}\) be iid uniform random variables on \(I_{d}\). 
Let \(f\in\mathcal{N}_{d}\) (where for \(d=1\), we denote \(\mathcal{N}_{d}=\mathcal{N}\)) such that \(f(x)=\sum_{j=1}^{d}\sum_{i=1}^{m}c_{i,j}\rho\left(\frac{\beta_{i,j}}{x_{j}}\right)\) for all \(x\in I_{d}\), for some \(m\geq 1,\beta\in I_{m\times d},c\in\mathbb{R}^{m\times d}\). Then, for any \(\alpha\in(0,1)\), we have with probability at least \(1-\alpha\), the region \(R_{N}\stackrel{{\text{def}}}{{=}}\{\text{Re}(z)>\frac{1}{2}\left(1+\Delta_{N}(f)^{1/d}|z|^{2}\right)\}\) is free of the zeroes of \(\zeta\), where \(\Delta_{N}(f)=\frac{1}{N}\sum_{i=1}^{N}(1-f(X_{i}))^{2}+(1+\|c\|_{1}^{2})\sqrt{\frac{2\log(2/\alpha)}{N}}\), with \(\|c\|_{1}=\sum_{i,j}|c_{i,j}|\)._ The proof follows from a simple application of Hoeffding's concentration inequality to control the deviations of the empirical risk \(N^{-1}\sum_{i=1}^{N}(1-f(X_{i}))^{2}\). Hoeffding's lemma requires that the random variables \((1-f(X_{i}))^{2}\) are bounded, which is straightforward since \((1-f(X_{i}))^{2}\leq 2(1+f(X_{i})^{2})\leq 2(1+\|c\|_{1}^{2})\) almost surely. The result of Lemma 4 has an important implication for the choice of the sample size. Indeed, to have the coefficient \(\Delta_{N}(f)^{1/d}\) of order \(\epsilon\) with high probability, a necessary condition is that \(N=\mathcal{O}(\epsilon^{-2d})\). When is the multi-dimensional variant better than the one-dimensional criterion? For some \(d\geq 2\), it is straightforward that the multi-dimensional criterion given in Lemma 2 is better than the one given in Lemma 1 only if \(\inf_{f\in\mathcal{N}_{d}}\|1-f\|_{2}^{2/d}<\inf_{f\in\mathcal{N}}\|1-f\|_{2}\). Under this condition, the zero-free region is larger with \(d\geq 2\). For empirical verification of the \(\mathcal{RH}\), and for the same probability threshold \(\alpha\), Lemma 4 implies that the multi-dimensional setting is better than the one-dimensional counterpart whenever \(\inf_{f\in\mathcal{N}_{d}}\Delta_{N}(f)^{1/d}<\inf_{f\in\mathcal{N}}\Delta_{N}(f)\). We discuss the feasibility of such conditions in the next paragraph. What does it take to improve upon existing numerical verifications of \(\mathcal{RH}\)? The high-probability zero-free regions from Lemma 4 are of the form \(\{\text{Re}(z)>\frac{1}{2}(1+\Delta|z|^{2})\}\) for some constant \(\Delta>0\). Using a different analytical criterion of the \(\mathcal{RH}\), Platt and Trudgian (2021) showed that the region \(\{a+ib:a\in(0,1),b\in(0,\gamma],\gamma\approx 3\cdot 10^{12}\}\) is free of the zeroes of \(\zeta\). Hence, using Lemma 4 to improve this result requires that the region \(R_{N}\cap\{a+ib,a\in(0,1),b\in(0,\gamma]\}\) contains complex numbers \(z\) with imaginary part larger than order \(10^{12}\). Let \(z=a+ib\in\mathbb{C}\). Having \(z\in R_{N}\) implies that \(b^{2}<-a^{2}+\Delta_{N}(f)^{-1/d}(2a-1)\). For the region of interest where \(a\in(0,1)\), and assuming that \(\Delta_{N}(f)\) is small enough, the right-hand side is of order \(\Delta_{N}(f)^{-1/d}(2a-1)\), which is maximized for \(a=1\) and equal to \(\Delta_{N}(f)^{-1/d}\). Thus, to improve upon existing work (Platt and Trudgian, 2021) (at least with some high-probability certificate), we need to have \(\Delta_{N}(f)^{-1/d}\) of order \(10^{12}\), which means that \(\Delta_{N}(f)^{1/d}\) should be of order \(10^{-12}\) at most. This requires minimizing the empirical risk \(N^{-1}\sum_{i=1}^{N}(1-f(X_{i}))^{2}\) with a minimum sample size of order \(10^{24}\), which is infeasible with current compute resources.
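To make the sample-size discussion tangible, here is a minimal sketch (our addition; it reuses the \(d=1\) example function above, and \(N\) and \(\alpha\) are arbitrary choices) of the certificate \(\Delta_{N}(f)\) from Lemma 4:

```python
# Monte Carlo certificate Delta_N(f) of Lemma 4 for a d = 1 example function.
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 1_000_000, 0.05
c, beta = np.array([1.0, -1.0, -4.0]), np.array([0.7, 0.3, 0.1])

x = rng.uniform(0.0, 1.0, size=N)
u = beta[None, :] / x[:, None]
f = (u - np.floor(u)) @ c

risk = np.mean((1.0 - f) ** 2)                    # empirical risk
corr = (1.0 + np.sum(np.abs(c)) ** 2) * np.sqrt(2 * np.log(2 / alpha) / N)
Delta_N = risk + corr                             # high-probability bound
print(f"risk {risk:.4f} + Hoeffding term {corr:.4f} = Delta_N {Delta_N:.4f}")
```

With \(d=1\), the region \(\{\text{Re}(z)>\frac{1}{2}(1+\Delta_{N}(f)|z|^{2})\}\) is then zero-free with probability at least \(1-\alpha\); the Hoeffding term alone shows why driving \(\Delta_{N}(f)\) down to order \(10^{-12}\) forces \(N\) to be of order \(10^{24}\).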
2305.19921
Deep Neural Network Estimation in Panel Data Models
In this paper we study neural networks and their approximating power in panel data models. We provide asymptotic guarantees on deep feed-forward neural network estimation of the conditional mean, building on the work of Farrell et al. (2021), and explore latent patterns in the cross-section. We use the proposed estimators to forecast the progression of new COVID-19 cases across the G7 countries during the pandemic. We find significant forecasting gains over both linear panel and nonlinear time series models. Containment or lockdown policies, as instigated at the national-level by governments, are found to have out-of-sample predictive power for new COVID-19 cases. We illustrate how the use of partial derivatives can help open the "black-box" of neural networks and facilitate semi-structural analysis: school and workplace closures are found to have been effective policies at restricting the progression of the pandemic across the G7 countries. But our methods illustrate significant heterogeneity and time-variation in the effectiveness of specific containment policies.
Ilias Chronopoulos, Katerina Chrysikou, George Kapetanios, James Mitchell, Aristeidis Raftapostolos
2023-05-31T14:58:31Z
http://arxiv.org/abs/2305.19921v1
# Deep Neural Network Estimation in Panel Data Models ###### Abstract In this paper we study neural networks and their approximating power in panel data models. We provide asymptotic guarantees on deep feed-forward neural network estimation of the conditional mean, building on the work of Farrell et al. (2021), and explore latent patterns in the cross-section. We use the proposed estimators to forecast the progression of new COVID-19 cases across the G7 countries during the pandemic. We find significant forecasting gains over both linear panel and nonlinear time series models. Containment or lockdown policies, as instigated at the national level by governments, are found to have out-of-sample predictive power for new COVID-19 cases. We illustrate how the use of partial derivatives can help open the "black box" of neural networks and facilitate semi-structural analysis: school and workplace closures are found to have been effective policies at restricting the progression of the pandemic across the G7 countries. But our methods illustrate significant heterogeneity and time-variation in the effectiveness of specific containment policies. JEL codes: C33, C45. Keywords: Machine learning, neural networks, panel data, nonlinearity, forecasting, COVID-19, policy interventions. ## 1 Introduction Panel data models are widely used in economics and finance. They combine both cross-sectional and time series data. One important advantage of panel data over time-series methods (see, for example, Chapters 26 and 28 of Pesaran (2015)) is their ability to control for unobserved heterogeneity in both the temporal and longitudinal dimensions. One can then approximate this latent individual heterogeneity through identifiable effects that are otherwise non-detectable in traditional time-series data sets. There are several ways to model and control for individual heterogeneity in linear panel data models: the random effects estimator, see, for example, Balestra and Nerlove (1966), the fixed effects (within) estimator, see, for example, Mundlak (1961, 1978), and the Swamy (1970) estimator. Alternative ways to model individual heterogeneity in linear models are found in Hsiao (1974, 1975), with a thorough discussion in Hsiao and Pesaran (2004, 2008) and Part VI of Pesaran (2015). The work summarized above focuses on linear heterogeneous panel data models. However, the importance of nonlinearity has attracted increased interest in the literature. Notable contributions are Fernandez-Val and Weidner (2016), who adapt the analytical and jackknife bias correction methods introduced in Hahn and Newey (2004) to nonlinear models with additive or interactive individual and time effects, and Chen et al. (2021), who address estimation and inference in general nonlinear models using iterative estimation. Hacioglu Hoke and Kapetanios (2021) provide an approach for estimation and inference in nonlinear conditional mean panel data models in the presence of cross-sectional dependence. Jochmans (2017) develops the asymptotic properties of GMM estimators for models with two-way multiplicative fixed effects, while Charbonneau (2013) considers a logit conditional maximum likelihood approach to investigate whether existing panel methods for eliminating a single fixed effect can be modified to eliminate multiple fixed effects. In this paper we also focus on the estimation of nonlinear panels. We propose the use of a novel machine learning (ML) panel data estimator based on neural networks. 
To help delineate the contributions of this paper and the empirical application that we consider, we first provide a high-level summary of the current literature on ML. Statistical ML is a major interdisciplinary research area. In the last decade, ML methods have been incorporated, in various forms, across the natural, social, medical, and economic sciences, leading to significant research outputs. There are two main reasons for such widespread adoption. Firstly, ML methods, and specifically neural networks, the focus of this paper, have been found to exhibit outstanding empirical performance when forecasting, particularly with high-dimensional data sets. Secondly, they have great capacity to uncover potentially unknown, highly complicated, and nonlinear relationships in the data. In conjunction with the increased availability of high-dimensional data sets, and policymakers' understandable desire for accurate forecasts, considerable attention has been paid to ML. Studies have shown that feed-forward neural networks can approximate any continuous function of several real variables arbitrarily well; see, for example, Hornik (1991), Hornik et al. (1989), Gallant and White (1992), and Park and Sandberg (1991). Other nonparametric approaches, for example, splines, wavelets, the Fourier basis, as well as simple polynomial approximations, have the universal approximation property, based on the Stone-Weierstrass theorem. However, it has been convincingly argued that neural networks outperform them in prediction (see, for example, Kapetanios and Blake (2010)). More recent work by Liang and Srikant (2016) and Yarotsky (2017, 2018) considers feedforward neural networks as approximations for complex functions that accommodate multiple layers, provided sufficiently many hidden neurons and layers are available. Other examples, like Bartlett et al. (2019), provide the theoretical framework for neural network estimation, while Schmidt-Hieber (2020) focuses on the adaptation property of neural networks, showing that they can strictly improve on classical methods. If the unknown target function is a composition of simpler functions, then the composition-based deep net estimator is superior to estimators that do not use compositions. Lastly, recent work of Farrell et al. (2021), building on the work of Yarotsky (2017) and Bartlett et al. (2019), studies deep neural networks and considers their use for semi-parametric inference. In this paper we focus on nonlinear panel data models, where the source of nonlinearity lies in the conditional mean. Our contribution to the literature is as follows. We propose an ML estimator of the conditional mean, \(E(y_{it}|\mathbf{x}_{it})\), based on neural networks, and explore the idea of heterogeneity in a nonlinear panel model by allowing the conditional mean to have a panel-common nonlinear component as well as a nonlinear idiosyncratic component. We base our theoretical results mainly on Farrell et al. (2021), expanding their contribution to a panel data framework. We also find evidence of the double descent effect, whereby complex models can perform well without the need for explicit regularization (see Hastie et al. (2022) and Kelly et al. (2022), as well as Remark 5 below). We use the new deep panel data models to forecast the transmission of new COVID-19 cases during the pandemic across a number of countries. We consider the G7 countries. 
In contrast to theoretical epidemiological models, which may be misspecified, our proposed neural network models are flexible reduced-form models. They let the data determine the path of new infections over time, by modeling this path as dependent on the lagged levels of the number of infections. By comparing the models against a deep (nonlinear) time-series model, which does not exploit cross-country dependencies, we test whether there are benefits, when forecasting new COVID-19 cases, to pooling data across countries. We find that there clearly are. Importantly, our model also captures the nonlinear features of a pandemic, particularly in its early waves. Neural networks have great capacity to approximate complicated nonlinear functions and have been found to forecast well. But they are frequently criticized as non-interpretable (as being a "black box"), since they do not offer simple summaries of relationships in the data. Recently, there have been a number of papers that try to make ML output interpretable; see, for example, Athey and Imbens (2017), Wager and Athey (2018), Belloni et al. (2014), Joseph (2019), Chronopoulos et al. (2023), and Kapetanios et al. (2023). In this paper, given the many but contrasting (across time and countries) containment or social-distancing policies instigated to moderate the path of the COVID-19 pandemic, we use our model to shed light on the relative effectiveness - across time and across the G7 countries - of these policies at lowering the number of new COVID-19 cases. We do so by exploring how the use of partial derivatives, calculated from the output of our proposed neural network, can help examine the effectiveness of policy. We examine the derivatives over time and find that some, but not all, containment policies were effective at lowering new COVID-19 cases. These policies tended to be more effective two to three weeks after the policy change. There is also considerable heterogeneity across countries in the effectiveness of these policies. Policy, as a whole, was somewhat less effective in Italy, and was more effective in Japan in late summer 2022, later than in the other G7 countries. The remainder of the paper proceeds as follows. In Section 2 we introduce our main theoretical results: we discuss non-asymptotic bounds for a (potentially heterogeneous) neural network panel estimator based on a quadratic loss function. In Section 3, we discuss both methodological and implementation aspects of the proposed estimators. We undertake the modeling and forecasting of new COVID-19 cases, and the assessment of the effectiveness of containment policies, in Section 4. Section 5 concludes. We relegate to the online appendix additional forecasting results, data summaries, and further discussion of the prediction evaluation tests used.

## 2 Theoretical considerations: the deep neural panel data model

Let \(y_{it}\) be the observation for the \(i^{th}\) cross-sectional unit at time \(t\) generated by the following panel data model: \[E\left(y_{it}|\mathbf{x}_{it}\right)=\widetilde{h}_{i}\left(\mathbf{x}_{it}\right), \quad i=1,\ldots,N,\;t=1,\ldots,T, \tag{1}\] where \(\left\{\mathbf{x}_{it}\right\}=\left\{\left(x_{it,1},\ldots,x_{it,p}\right)^{ \prime}\right\}\) is a \(p\)-dimensional vector of regressors, belonging to unit \(i\), and \(\widetilde{h}_{i}(\cdot)\) are unknown functions that will be approximated with neural networks. Throughout, we abstract from unconditional mean considerations, for simplicity, by assuming \(E(y_{it})=0\).
This can be achieved by simple unit-by-unit demeaning of the dependent variable. Therefore, the model we entertain is given by: \[y_{it}=\widetilde{h}_{i}\left(\mathbf{x}_{it}\right)+\varepsilon_{it}, \tag{2}\] where \(\varepsilon_{it}\) is an error term. Next, we provide a crucial decomposition to justify the use of a panel structure. We assume that \(\widetilde{h}_{i}\left(\mathbf{x}_{it}\right)\) can be decomposed as follows: \[\widetilde{h}_{i}\left(\mathbf{x}_{it}\right)=h\left(\mathbf{x}_{it}\right)+h_{i} \left(\mathbf{x}_{it}\right), \tag{3}\] where the function \(h(\cdot)\) is the common component of the model, and is our main focus of interest, and \(h_{i}\left(\mathbf{x}_{it}\right)\) are idiosyncratic components that will also be approximated with neural networks. Assumptions needed for the identification of \(h(\cdot)\) will be given below. The main motivation for this decomposition is the familiar linear heterogeneous panel data model, which takes the form: \[y_{it}=\mathbf{x}_{it}^{\prime}\mathbf{\beta}_{i}+\varepsilon_{it}=\mathbf{x}_{it}^{\prime} \mathbf{\beta}+\mathbf{x}_{it}^{\prime}\mathbf{\eta}_{i}+\varepsilon_{it}, \tag{4}\] where \(E(\mathbf{\eta}_{i}|\mathbf{x}_{it},\varepsilon_{it})=0\). Equation (4) allows coefficients to vary across individual units. We wish to consider and analyze a nonlinear extension of this heterogeneous panel data model. The next step of our proposal involves approximating \(h(\cdot)\) and \(h_{i}(\cdot)\) with neural network functional parameterizations, given by \(g\left(\mathbf{\cdot};\mathbf{\theta}\right)\). Here, the functional form is known up to the parameter vector \(\mathbf{\theta}\), which is a vector of ancillary parameters, such as network weights and biases. More details on the choice of \(g\left(\mathbf{\cdot};\mathbf{\theta}\right)\) and the role of various neural network parameters will be provided in Section 3 below. Therefore, we parameterize (3) by proposing the following panel model: \[y_{it}=g\left(\mathbf{x}_{it};\mathbf{\theta}^{0}\right)+g\left(\mathbf{x}_{it};\mathbf{ \theta}_{i}^{0}\right)+\varepsilon_{it}, \tag{5}\] where \(\mathbf{\theta}^{0}\) and \(\mathbf{\theta}_{i}^{0}\) denote the values of the parameters that best approximate \(h\) and \(h_{i}\), respectively (see (3)), in a sense to be defined below. It is useful to draw some parallels between (4) and (5). We note that most multi-layer neural network architectures have a final linear layer given by: \[g\left(\mathbf{x}_{it};\mathbf{\theta}^{0}\right)=\mathbf{\theta}_{L}^{0\prime}\mathbf{f} \left(\mathbf{x}_{it}\right),\] where \(\mathbf{f}\) is a vector of known functions that form part of the neural network architecture and \(L\) denotes the number of network layers. Then, letting \(\mathbf{f}_{i}\) denote the analogous vector of functions for the idiosyncratic network, it follows that we have a linear representation, in \(\mathbf{f}\) and \(\mathbf{f}_{i}\), of the form: \[y_{it}=\mathbf{\theta}_{L}^{0\prime}\mathbf{f}\left(\mathbf{x}_{it}\right)+\mathbf{\theta}_{i,L}^{0\prime}\mathbf{f}_{i}\left(\mathbf{x}_{it}\right)+\varepsilon_{it}, \tag{6}\] which is reminiscent of (4) and thus provides a clear rationale for our nonlinear extension of it. Furthermore, it provides a rationale for thinking that \(\mathbf{\theta}_{i}^{0}\) plays a similar role to the idiosyncratic coefficients, \(\mathbf{\eta}_{i}\), of the linear model. Of course, one can use a different network architecture for the panel and idiosyncratic components, but for simplicity we keep the same structure. The model above encompasses a variety of nonlinear specifications.
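To make the decomposition in (5) concrete, the following is a minimal sketch of how the common and idiosyncratic components could be parameterized in code. It assumes PyTorch; the class names `FeedForwardNet` and `PanelNet`, and the default widths and depths, are illustrative choices rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class FeedForwardNet(nn.Module):
    """A plain ReLU MLP g(x; theta): p inputs -> 1 output."""
    def __init__(self, p, width=10, depth=3):
        super().__init__()
        layers, d_in = [], p
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        layers.append(nn.Linear(d_in, 1))  # final linear layer theta_L' f(x)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

class PanelNet(nn.Module):
    """Model (5): y_it = g(x_it; theta^0) + g(x_it; theta_i^0) + eps_it."""
    def __init__(self, N, p, **kw):
        super().__init__()
        self.common = FeedForwardNet(p, **kw)           # g(.; theta)
        self.idio = nn.ModuleList(
            FeedForwardNet(p, **kw) for _ in range(N))  # g(.; theta_i)

    def forward(self, x, unit):
        # x: (batch, p) regressors; unit: (batch,) unit labels in 0..N-1
        out = self.common(x)
        idio = torch.zeros_like(out)
        for i, net in enumerate(self.idio):
            mask = unit == i
            if mask.any():
                idio[mask] = net(x[mask])
        return out + idio
```

Keeping the same `FeedForwardNet` architecture for both components mirrors the simplification adopted in the text, although, as noted above, different architectures could be used.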
It is also worth emphasizing that the dimension of the regressor vector could be very large. So it is conceivable that each \(\mathbf{x}_{it}\) contains regressors from other cross-sectional units, allowing for complex nonlinear interactions across units. In the limit, each unit could have \(\left(\mathbf{x}_{1t},\ldots,\mathbf{x}_{Nt}\right)\) as the regressor vector. Next, we consider the conditions needed to identify \(h(\cdot)\). We require certain definitions. First, we define \(\varepsilon_{it}\equiv y_{it}-h\left(\mathbf{x}_{it}\right)-h_{i}\left(\mathbf{x }_{it}\right)\) and \(u_{it}=h_{i}\left(\mathbf{x}_{it}\right)+\varepsilon_{it}\), where the latter is in analogy to the usual composite error term for the linear heterogeneous panel model, given as \(\mathbf{x}_{it}^{\prime}\mathbf{\eta}_{i}+\varepsilon_{it}\). The assumption below generalizes the usual identification assumption on \(\mathbf{\eta}_{i}\) made in linear heterogeneous panel models.

**Assumption 1**: _For all \(i=1,\ldots,N,\ t=1,\ldots,T\)_ 1. _For some positive constant_ \(C\)_, we assume that_ \(h(\cdot)\) _in (_7_) below is bounded, such that_ \(\|h\|_{\infty}\leq C.\)__\(h_{i}\) _is bounded similarly to_ \(h\)_._ 2. \(\{u_{it}\}\) _is independent and bounded across_ \(i\)_._ 3. \(E[u_{it}h\left(\mathbf{x}_{it}\right)]=0.\)__

This assumption enables separation of \(h(\cdot)\) and \(h_{i}(\cdot)\) when pooled panel estimation is carried out. For neural network estimation, much stricter assumptions will be needed. In particular, the third part of the assumption is justified in view of our later assumption that \(h(\cdot)\) and \(h_{i}(\cdot)\) can be well approximated by neural network architectures, and by the linear aspect of neural networks discussed in (6), as it is similar, in functionality, to assuming that \(E(\mathbf{\eta}_{i}|\mathbf{x}_{it})=0\). Next we align our discussion with Farrell et al. (2021). The overall goal of neural network estimation in Farrell et al. (2021) is to estimate an unknown smooth function \(h(\cdot)\) that maps covariates, \(\mathbf{X}\), to an outcome \((T\times N)\) matrix \(\mathbf{Y}\), by minimizing a loss function \(g_{*}\left(\mathbf{Y},\mathbf{X};\mathbf{\theta}\right)\) with respect to the parameterization \(\mathbf{\theta}\) of a neural network function \(g\left(\mathbf{\cdot};\mathbf{\theta}^{0}\right)\). Formally, \[h=\operatorname*{arg\,min}_{\mathbf{\theta}}E\left[g_{*}\left(\mathbf{Y},\mathbf{X};\mathbf{ \theta}\right)\right]. \tag{7}\] This is a minimization of a population quantity and assumes that the true function \(h(\cdot)\) is the unique solution of (7). Note that while Farrell et al. (2021) do not specify a true function \(h(\cdot)\), we take a further step and assume that the true functions in (3) coincide with the unique solutions of (7). We do not specify \(\mathbf{Y}\) and \(\mathbf{X}\) further, since we will apply this general estimation strategy both to get a panel-based estimate of \(h(\cdot)\) and estimates of \(h_{i}(\cdot)\) via unit-specific estimation. For now, we present further sufficient general conditions on \(h(\cdot)\) and \(g_{*}\left(\mathbf{Y},\mathbf{X};\mathbf{\theta}\right)\) in order for our results to hold.
We require the following assumptions:

**Assumption 2**: _For some constant \(C_{g_{*}}>0\), we assume that \(g_{*}\left(\mathbf{Y},\mathbf{X};\mathbf{\theta}\right)\) satisfies:_ \[\sup_{\mathbf{Y},\mathbf{X}}|g_{*}\left(\mathbf{Y},\mathbf{X};\mathbf{\theta}_{1}\right)-g_{*} \left(\mathbf{Y},\mathbf{X};\mathbf{\theta}_{2}\right)|\leq C_{g_{*}}\left\|\mathbf{\theta}_{ 1}-\mathbf{\theta}_{2}\right\|,\ \text{for the Frobenius norm}\ \left\|\cdot\right\|.\]

**Assumption 3**: _Consider a Hölder space \(\mathcal{W}^{b,\infty}([-1,1]^{d})\), with \(b=1,2,\ldots\), where \(\mathcal{W}^{b,\infty}([-1,1]^{d})\) is the space of functions on \([-1,1]^{d}\) in \(L^{\infty}\), along with their weak derivatives. Recalling \(h\) in (7), we assume that \(h\) lies within \(\mathcal{W}^{b,\infty}([-1,1]^{d})\), with a norm in \(\mathcal{W}^{b,\infty}([-1,1]^{d})\):_ \[\left\|h\right\|_{\mathcal{W}^{b,\infty}([-1,1]^{d})}=\max_{\mathbf{a}:|\mathbf{a}| \leq b}\operatorname*{ess\,sup}_{x\in[-1,1]^{d}}\left|D^{a}h(x)\right|, \tag{8}\] _where \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{d})\in\mathbb{N}_{0}^{d}\) is a multi-index, \(|\mathbf{a}|=a_{1}+a_{2}+\cdots+a_{d}\), and \(D^{a}h\) is the corresponding weak derivative._

**Remark 1**: _In Assumption 3 we state a smoothness assumption following existing theoretical results; see, for example, Farrell et al. (2021) and Yarotsky (2017, 2018). A more detailed discussion of Hölder-Sobolev and Besov spaces is available in Giné and Nickl (2016). Assumption 3 holds for both \(h(\cdot)\) and \(h_{i}(\cdot)\), \(i=1,\ldots,N.\)_

In our case, and in what follows, we specialize the general framework above by using a squared error loss function that, for the panel setting, becomes: \[g_{*}\left(\mathbf{Y},\mathbf{X};\mathbf{\theta}\right)=\frac{1}{NT}\sum_{i=1}^{N}\sum_{t =1}^{T}\left(y_{it}-g(\mathbf{x}_{it};\mathbf{\theta})\right)^{2}.\] In our analysis we use feed-forward neural network architectures with rectified linear unit (ReLU) activation functions and weights that are unbounded, following Farrell et al. (2021) and the discussion below. Such networks approximate smooth functions well, as shown in Yarotsky (2017, 2018). A further assumption is required on the processes \(\{x_{it,k}\}\) and \(\{\varepsilon_{it}\}\), for each \(k=1,\ldots,p\), where \(p\) is the number of covariates.

**Assumption 4**: _We assume the following:_ 1. _The rows of_ \(\mathbf{X}_{t}\) _are_ i.i.d. _realizations from a Gaussian distribution whose_ \(p\)_-dimensional inner product matrix_ \(\mathbf{\Sigma}\) _has a strictly positive minimum eigenvalue, such that_ \(\Lambda_{\min}^{2}>0\) _and_ \(\Lambda_{\min}^{-2}=O(1).\)__ 2. _The error terms_ \(\varepsilon_{it}\) _are_ i.i.d. _realizations from a Gaussian distribution, such that_ \(\boldsymbol{\varepsilon}_{t}=(\varepsilon_{1t},\ldots,\varepsilon_{Nt})^{\prime}\sim N(0,\sigma_{\varepsilon}^{2}I_{N})\)_._ 3. \(\varepsilon_{it}\) _and_ \(\mathbf{x}_{it}\) _are mutually independent._

**Remark 2**: _Assumption 4 states that the covariate process and the errors are continuous and possess moments of all orders, while being mutually independent. These assumptions are standard in the neural network literature and useful in order to continue with the analysis on a more simplified basis. These assumptions are strict.
But it is reasonable to conjecture that similar results to those given below would hold under weaker conditions._

Having rewritten the loss function as the squared error loss with respect to the re-parameterized panel model in (5), we construct a pooled-type nonlinear estimator, \(\widehat{\mathbf{\theta}}\), such that: \[\widehat{\mathbf{\theta}}=\operatorname*{arg\,min}_{\mathbf{\theta}}g_{*}\left(\mathbf{Y },\mathbf{X};\mathbf{\theta}\right)=\operatorname*{arg\,min}_{\mathbf{\theta}\in\mathbb{R }^{d}}\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left[y_{it}-g(\mathbf{x}_{it};\mathbf{ \theta})\right]^{2}, \tag{9}\] which obeys Assumption 2. Therefore, our estimator of \(h\left(\mathbf{x}_{it}\right)\) is given by \(g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})\). Then, we proceed to estimate \(h_{i}\left(\mathbf{x}_{it}\right)\) by \(g(\mathbf{x}_{it};\widehat{\mathbf{\theta}}_{i})\), where \(\widehat{\mathbf{\theta}}_{i}\) is given by: \[\widehat{\mathbf{\theta}}_{i}=\operatorname*{arg\,min}_{\mathbf{\theta}_{i}\in\mathbb{ R}^{d}}\frac{1}{T}\sum_{t=1}^{T}\left[y_{it}-g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})-g (\mathbf{x}_{it};\mathbf{\theta}_{i})\right]^{2}, \tag{10}\] for each \(i\), given \(\widehat{\mathbf{\theta}}\) from (9). Next, we argue that the estimation in (9) can effectively separate \(h\left(\mathbf{x}_{it}\right)\) from \(h_{i}\left(\mathbf{x}_{it}\right)\) and that the unit-wise second step estimation in (10) can retrieve \(h_{i}\left(\mathbf{x}_{it}\right)\). We do this by noting the following. Consider the loss function for \(\widehat{\mathbf{\theta}}\) in (9). We have: \[\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left[y_{it}-g(\mathbf{x}_{it};\mathbf{\theta})\right]^{2}=\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left[\left(h\left(\mathbf{x}_{it}\right)-g(\mathbf{x}_{it};\mathbf{\theta})\right)+h_{i}\left(\mathbf{x}_{it}\right)+\varepsilon_{it}\right]^{2}=\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(h\left(\mathbf{x}_{it}\right)-g(\mathbf{x}_{it};\mathbf{\theta})\right)^{2}+\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\varepsilon_{it}^{2}+\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}h_{i}\left(\mathbf{x}_{it}\right)^{2}+\frac{2}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(h\left(\mathbf{x}_{it}\right)-g(\mathbf{x}_{it};\mathbf{\theta})\right)h_{i}\left(\mathbf{x}_{it}\right)+\frac{2}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(h\left(\mathbf{x}_{it}\right)-g(\mathbf{x}_{it};\mathbf{\theta})\right)\varepsilon_{it}+\frac{2}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}h_{i}\left(\mathbf{x}_{it}\right)\varepsilon_{it}=\sum_{j=1}^{6}A_{j}. \tag{11}\] Under Assumptions 1-4, terms \(A_{2}\) and \(A_{3}\) converge in probability to positive limits, while \(A_{5}\) and \(A_{6}\) converge in probability to zero and are, in fact, \(O_{p}((NT)^{-1/2})\). Additionally, under (3), \(A_{4}\) is \(O_{p}(N^{-1/2})\). Then it immediately follows that the loss function is minimized when \(\mathbf{\theta}=\mathbf{\theta}^{0}\), in view of our identification assumption in (7). It therefore follows that \(\widehat{\mathbf{\theta}}\rightarrow^{p}\mathbf{\theta}^{0}\) and \(g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})\rightarrow^{p}g(\mathbf{x}_{it};\mathbf{\theta}^ {0})=h\left(\mathbf{x}_{it}\right)\). This proves that the best pooled panel neural network approximation coincides with the true panel function.
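As a concrete illustration of the two-step scheme in (9)-(10), the sketch below first fits the pooled network and then fits each unit-specific network to the pooled residuals, which is equivalent to minimizing (10) given \(\widehat{\mathbf{\theta}}\). It reuses the hypothetical `FeedForwardNet` class from the earlier sketch; the tensors `X_pool`, `y_pool`, and `unit_pool` (stacked over units and time) and the training settings are assumed, not taken from the paper.

```python
import torch

def fit(model, params, batches, epochs=100, lr=1e-3):
    """Generic ADAM training loop used in both estimation steps."""
    opt = torch.optim.Adam(params, lr=lr)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in batches:
            opt.zero_grad()
            loss = mse(model(x), y)
            loss.backward()
            opt.step()

# Step 1, eq. (9): pooled estimation of theta over all (i, t) pairs.
# X_pool: (N*T, p), y_pool: (N*T,), unit_pool: (N*T,) long tensor of labels.
common = FeedForwardNet(p)
fit(common, common.parameters(), [(X_pool, y_pool)])

# Step 2, eq. (10): unit-wise estimation of theta_i, holding the pooled
# fit fixed; fitting each net to the residuals minimizes the same loss.
with torch.no_grad():
    resid = y_pool - common(X_pool)
idio_nets = []
for i in range(N):
    rows = unit_pool == i
    net_i = FeedForwardNet(p)
    fit(net_i, net_i.parameters(), [(X_pool[rows], resid[rows])])
    idio_nets.append(net_i)
```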
Next, we can consider a closely related and, in fact, asymptotically equivalent minimization problem given by: \[g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})=\operatorname*{arg\,min}_{\mathbf{\theta}\in \mathbb{R}^{d}}\frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{T}\sum_{t=1}^{T}y_{it}- \frac{1}{T}\sum_{t=1}^{T}g(\mathbf{x}_{it};\mathbf{\theta})\right]^{2}=\operatorname* {arg\,min}_{\mathbf{\theta}\in\mathbb{R}^{d}}\frac{1}{N}\sum_{i=1}^{N}\left[\bar{y }_{i}-\bar{g}_{i}(\mathbf{\theta})\right]^{2} \tag{12}\] and the associated model with a composite error is: \[\bar{y}_{i}=\bar{g}_{i}(\mathbf{\theta})+u_{i},\quad i=1,\ldots,N, \tag{13}\] where: \[u_{i}=\frac{1}{T}\sum_{t=1}^{T}g(\mathbf{x}_{it};\mathbf{\theta}_{i})+ \frac{1}{T}\sum_{t=1}^{T}\varepsilon_{it}=\bar{g}_{i}(\mathbf{\theta}_{i})+\bar{ \varepsilon}_{i}. \tag{14}\] Note that \(u_{i}\) obeys Assumption 1.2. Moreover, this setting corresponds to that of Theorem 1 in Farrell et al. (2021) enabling the use of the rates derived in this theorem. This analysis is summarized and extended in the following proposition: **Proposition 1**: _Suppose Assumptions 1-4 hold. Let \(g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})\) be the deep network estimator defined in (12). Then, for some \(\psi<1/2\), the following holds:_ \[\sup_{i,t}\left\|g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})-h\left(\mathbf{x}_{it} \right)\right\|_{2}^{2}=O_{P}(N^{-\psi}). \tag{15}\] The proof of Proposition 1 follows from the proof of Theorem 1 in Farrell et al. (2021), using the arguments made above the Proposition to recast our panel framework into the one of Farrell et al. (2021), by separately identifying \(h\) and \(h_{i}\). In Proposition 1, we use the results from Theorem 1 of Farrell et al. (2021) to obtain an asymptotic rate of convergence for the error in (15). It is clear that this rate of convergence is not optimal, since \(\psi<1/2\). We have provided a simplified result compared to Theorem 1 of Farrell et al. (2021). Refinements related to factors, such as the depth and width of the neural network used, can be obtained. These are also discussed in Theorem 6 of Bartlett et al. (2019) and in Lemma 6 of Farrell et al. (2021). Fast convergence of (15) depends on the trade-off between the number of neurons and layers, and more specifically on the parameterization of their relationship, that controls the approximating power of the network. We note that, in addition, one can obtain consistency for \(\widehat{\mathbf{\theta}}_{i}\) by minimizing the loss over \(\mathbf{\theta}_{i}\): \(L_{i}=\frac{1}{T}\sum_{t=1}^{T}[y_{it}-g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})-g (\mathbf{x}_{it};\mathbf{\theta}_{i})]^{2}.\) Given the rate in Proposition 1, it immediately follows that \(g\left(\mathbf{x}_{it};\mathbf{\theta}_{i}^{0}\right)\) can be consistently estimated at rate \(T^{-\psi}\), as long as \(T=o(N^{\xi})\) for some \(\xi<1\), given that then the uniform rate in Proposition 1 is faster than \(T^{-\psi}\). **Remark 3**: _Before concluding, it is of interest to consider whether an idiosyncratic component, \(g(\mathbf{x}_{it};\mathbf{\theta}_{i}^{0})\), is needed, in addition to the common component, \(g(\mathbf{x}_{it};\mathbf{\theta}^{0})\). This could be tested by a nonlinear version of a poolability test. One way to proceed is by fitting only the common component and then determining whether the residuals, \(\widehat{u}_{it}=y_{it}-g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})\), can be further explained by unit-wise neural network regressions. 
One way to do this is by constructing unit-wise \(R^{2}\) statistics. This is intuitive, if we recall the quasi-linear representation given by (6). One can regress \(\widehat{u}_{it}\) on \(\mathbf{f}_{i}(\mathbf{x}_{it})\) to obtain such \(R^{2}\) statistics. Then the null hypothesis that \(\widetilde{h}_{i}\left(\mathbf{x}_{it}\right)=h(\mathbf{x}_{it})\) can be tested using the test statistic:_ \[P=\frac{1}{\widehat{\sigma}\sqrt{N}}\sum_{i=1}^{N}\left(TR_{i}^{2}-m\right),\] _where \(\widehat{\sigma}^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(TR_{i}^{2}-m\right)^{2}\) and an appropriate centering factor, \(m\), needs to be chosen. This could be the dimension of \(\mathbf{f}_{i}\left(\mathbf{x}_{it}\right)\), although care needs to be taken, given that \(\mathbf{f}_{i}\left(\mathbf{x}_{it}\right)\) will contain estimated parameters. One way to resolve this issue may be to estimate the neural networks over a different time period to that used to run the unit-wise regressions of \(\widehat{u}_{it}\) on \(\mathbf{f}_{i}(\mathbf{x}_{it})\). Then, under our assumptions, including that of cross-sectional independence, and for an appropriate choice of \(m\), \(P\) is asymptotically standard normal under the null hypothesis._

_Further exploration of this test is of interest. However, a full and rigorous analysis is beyond the scope of the current paper._

## 3 Implementation considerations

In this section we provide details on the implementation of the proposed nonlinear estimators. First, we describe the overall neural network construction, which relates to the choice of the network's architecture and is summarized by the functional parameterization \(g\left(\boldsymbol{\cdot};\boldsymbol{\theta}\right)\) used in the approximation of \(h(\cdot)\). We limit our attention to the construction of \(g\left(\boldsymbol{\cdot};\boldsymbol{\theta}\right)\), since it directly applies to \(g\left(\boldsymbol{\cdot};\boldsymbol{\theta}_{i}\right)\). Then we illustrate how regularization can be applied in the context of the proposed estimators. Finally, we discuss the cross-validation exercise used to select the different parameters and hyperparameters of the corresponding network, and the optimization algorithm.

### Neural network construction

We focus on the construction of the _feed-forward neural network_ functional parameterization, \(g\left(\boldsymbol{\cdot};\boldsymbol{\theta}\right)\), used to approximate \(h(\cdot)\) in Section 2. The feed-forward architecture consists of: an input layer, where the covariates are introduced given an initial set of weights to the inner (hidden) part of the network; the hidden layers, where a number of computational nodes are collected in each hidden layer and nonlinear transformations of the (weighted) covariates occur; and the output layer, which gives the final predictions. Each layer involves a choice of activation function \(\sigma(x):\mathbb{R}\rightarrow\mathbb{R}\) that is applied element-wise. The architecture is feed-forward, since in each of the hidden layers there exist several interconnected neurons that allow information to flow from one layer to the other, but only in one direction. The connections between layers correspond to weights. We use \(L\) to define the total number of hidden layers and \(M^{(l)}\), \(l=1,\ldots,L\), to define the total number of neurons at the \(l^{th}\) layer. \(L\) and \(M^{(l)}\) are measures for the depth and width of the neural network, respectively.
We use the ReLU activation function, \(\sigma_{l}(\boldsymbol{X}_{t}):=\max(\boldsymbol{X}_{t},0)\), where \(\boldsymbol{X}_{t}\) is an \(N\times p\) matrix of characteristics for \(t=1,\ldots,T\), for \(l=1,\ldots,L-1\), and a linear activation function for \(l=L\). The activation functions are applied elementwise. To explain the exact computation of the outcome of the _feed-forward neural network_, we focus on the pooled-type estimator in (9). We assume that the widths (the number of neurons), \(M^{(l)}\), and depth (the number of hidden layers), \(L\), of the network are constant positive numbers. Each of the neurons undergoes a computation similar to the linear combination received in each hidden layer \(l\): \(\boldsymbol{g}^{(l)}=\sigma_{l}(\boldsymbol{g}^{(l-1)}\boldsymbol{W}^{(l)^{ \prime}}+\boldsymbol{b}^{(l)^{\prime}})\), while the final output of the network is \(\boldsymbol{g}^{(L)}=\boldsymbol{g}^{(L-1)}\boldsymbol{W}^{(L)^{\prime}}+ \boldsymbol{b}^{(L)^{\prime}}\) and \(\boldsymbol{g}^{(0)}=\boldsymbol{X}_{t}\). We can then define, for some \(t=1,\ldots,T\), \(g\left(\boldsymbol{\cdot};\boldsymbol{\theta}\right)\) as: \[g\left(\boldsymbol{X}_{t};\boldsymbol{\theta}\right)=\boldsymbol{g}^{(L)}=\sigma_{L-1}\left(\cdots\sigma_{1}\left(\boldsymbol{X}_{t}\boldsymbol{W}^{(1)^{\prime}}+\boldsymbol{b}^{(1)^{\prime}}\right)\cdots\right)\boldsymbol{W}^{(L)^{\prime}}+\boldsymbol{b}^{(L)^{\prime}}, \tag{16}\] where \(\mathbf{W}^{(l)}\) is an \(M^{(l)}\times M^{(l-1)}\) matrix of weights, \(\mathbf{b}^{(l)}\) is an \(M^{(l)}\times N\) matrix of biases at layer \(l\), with \(\mathbf{b}^{(1)}=\mathbf{0}\). Notice that at \(l=1\), the dimensions of \(\mathbf{W}^{(1)}\) are \(M^{(1)}\times p\) and of \(\mathbf{b}^{(1)}\) are \(M^{(1)}\times N\). At the final layer, that is, at \(l=L\), the dimensions of \(\mathbf{W}^{(L)}\) are \(1\times M^{(L-1)}\) and of \(\mathbf{b}^{(L)}\) are \(1\times N\). Note that throughout the paper we use \(\mathbf{\theta}\) to denote a stacked vector containing all ancillary trainable parameters affiliated with the network estimation, as defined below: \[\mathbf{\theta}=\left(\operatorname{vec}\left(\mathbf{W}^{(1)^{\prime}}\right),\ldots,\operatorname{vec}\left(\mathbf{W}^{(L)^{\prime}}\right),\operatorname{vec} \left(\mathbf{b}^{(1)^{\prime}}\right),\operatorname{vec}\left(\mathbf{b}^{(2)^{ \prime}}\right),\ldots,\operatorname{vec}\left(\mathbf{b}^{(L)^{\prime}}\right)\right)^{\prime}. \tag{17}\] We define the overall number of parameters as \(d=|\mathbf{\theta}|\). The optimization of the neural network proceeds in a forward fashion (from the input layer, that is, \(l=1\), to the output \(l=L\)) and layer-by-layer through an optimizer, for example, a version of stochastic gradient descent (SGD), where the gradients of the parameters \((\mathbf{W}^{(l)},\mathbf{b}^{(l)})\) are calculated through back-propagation (using the chain rule) to train the network.

**Remark 4**: _The exact (composition) structure described in (16) holds for a subclass of feed-forward neural networks, specifically the one with fully connected consecutive layers and no other connections. Each layer has a number of hidden units that are of the same order of magnitude. This architecture is the most commonly used in empirical research, and is often referred to as a Multi-layer Perceptron (MLP). Furthermore, the exact structure in (16) does not hold generally for any feed-forward neural network._

The specific choice of the network architecture is crucial and affects the complexity and the approximating power of \(g\left(\mathbf{\cdot};\mathbf{\theta}\right)\) in (16). Our analysis involves primarily theoretical arguments that are widely applicable to _feed-forward neural networks_ when we deal with panel data.
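For concreteness, the forward pass in (16) can be written out directly. The numpy sketch below mirrors the stated dimensions of \(\mathbf{W}^{(l)}\) and \(\mathbf{b}^{(l)}\); the toy sizes \(N=7\), \(p=4\), \(M=5\) are illustrative, not the paper's settings.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward_pass(X_t, Ws, bs):
    """Computes g(X_t; theta) as in (16).

    X_t : (N, p) matrix of covariates at time t.
    Ws  : list of weight matrices, Ws[l] of shape (M_l, M_{l-1}),
          with M_0 = p and M_L = 1, matching the text.
    bs  : list of bias matrices, bs[l] of shape (M_l, N), bs[0] = 0.
    Returns the (N, 1) fitted output g^(L).
    """
    g = X_t                                   # g^(0) = X_t
    L = len(Ws)
    for l in range(L - 1):                    # hidden layers: ReLU
        g = relu(g @ Ws[l].T + bs[l].T)
    return g @ Ws[L - 1].T + bs[L - 1].T      # final layer: linear

# A toy configuration consistent with the dimensions stated above.
N, p, M = 7, 4, 5
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(M, p)), rng.normal(size=(1, M))]
bs = [np.zeros((M, N)), rng.normal(size=(1, N))]
y_hat = forward_pass(rng.normal(size=(N, p)), Ws, bs)  # shape (N, 1)
```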
We present an example of a _feed-forward neural network_, based on (16), in Figure 1. The neural network in Figure 1 consists of two inputs \(\mathbf{X}_{t}\in\mathbb{R}^{N\times p}\), \(\mathbf{X}_{t}=(\mathbf{x}_{t}^{(1)},\mathbf{x}_{t}^{(2)})\), in particular, \(p=2\), where \(\mathbf{x}_{t}^{(j)}\) is an \(N\times 1\) vector of one characteristic at \(t=1,\ldots,T\) for some \(j=1,2\), and one fitted output \(\widehat{\mathbf{y}}_{t}\). Between the inputs and output \((\mathbf{X}_{t},\widehat{\mathbf{y}}_{t})^{\prime}\) are \(M\) hidden computational nodes/neurons, in particular \(M=5\). The neurons are connected directly, forming an acyclic graph which specifies a fixed architecture.1

Figure 1: Illustration of a _feed-forward neural network_ with two input matrices \((\mathbf{x}_{t}^{(1)},\mathbf{x}_{t}^{(2)})^{\prime}\), two layers, \(L=2\), \(5\) nodes, \(M=5\), eighteen connections, \(W=14\), and one (fitted) output \(\widehat{\mathbf{y}}_{t}.\) The inputs are illustrated with a white circle, the neurons with grey circles, the output with a black circle.

Notice that the illustration in Figure 1 can correspond to a nonlinear pooled-type estimation of \(g(\mathbf{X}_{t};\mathbf{\theta}^{0})\) in (5), where we use (9) to obtain \(\widehat{\mathbf{\theta}}\), defined in (17), with input \(\mathbf{X}_{t}=(\mathbf{x}_{t}^{(1)},\ldots,\mathbf{x}_{t}^{(p)})\), and output \(\widehat{\mathbf{y}}_{t}\in\mathbb{R}^{N\times 1}.\) The remainder of (5), \(g(\mathbf{X}_{t};\mathbf{\theta}_{i}^{0}),\ i=1,\ldots,N,\ t=1,\ldots,T\), can be described conceptually as the heterogeneous component, which differs cross-sectionally. One can obtain the estimate of this heterogeneous component following the same steps as those used to obtain \(g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})\), with the only difference that now the _feed-forward neural network_ is estimated unit-wise, similar to the logic of a fixed effects estimator for linear panel data models.

### Implementation and regularization

In this section we discuss some operational implementation aspects required for the estimation of the panel neural network estimators proposed in Section 2. We focus discussion on the following panel estimator, \(g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})\), obtained from the optimization of (12): \[g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})=\operatorname*{arg\,min}_{\mathbf{\theta}\in \mathbb{R}^{d}}\frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{T}\sum_{t=1}^{T}y_{it}- \frac{1}{T}\sum_{t=1}^{T}g(\mathbf{x}_{it};\mathbf{\theta})\right]^{2}.\] This nonlinear panel estimator, and generally neural network estimators, have many significant advantages over traditional panel models, mainly summarized in their great capacity at approximating highly nonlinear and complicated associations between variables and their outstanding forecasting performance; see, for example, the discussion in Goodfellow et al. (2016) and Gu et al. (2020, 2021). In order to be able to minimize (12) and obtain a feasible solution for the panel estimator \(g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})\), we need to choose the overall architecture of the neural network.
Following the discussion above, this reduces to choices for the total number of layers \(L\); the total number of neurons \(M^{(l)}\) at each layer \(l=1,\ldots,L\); a loss function \(g_{*}(\mathbf{y},\mathbf{X};\mathbf{\theta})\), which in this paper is taken to be the MSE loss; an updating rule for the weights (the learning rate, \(\gamma\)) during optimization; and the optimization algorithm itself, typically taken to be some variant of SGD. However, neural networks tend to overfit, which can lead to a severe deterioration in their (forecasting) performance. A common empirical solution to this is to impose a penalty on the trainable parameters of the neural network, \(\mathbf{\theta}\). The penalized estimator based on the LASSO is obtained as the solution to the following minimization problem: \[g(\mathbf{x}_{it};\widehat{\mathbf{\theta}})^{\text{LASSO}}=\operatorname*{arg\,min} _{\mathbf{\theta}\in\mathbb{R}^{d}}\frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{T}\sum_ {t=1}^{T}y_{it}-\frac{1}{T}\sum_{t=1}^{T}g(\mathbf{x}_{it};\mathbf{\theta})\right]^{2} +\lambda\left\|\mathbf{\theta}\right\|_{1},\] where \(\lambda\) is the regularization parameter. Note that while explicit regularization improves empirical solutions of neural network estimators under low signal-to-noise ratios, its role is not clear theoretically, since there are cases where simpler SGD solutions deliver comparable results; see, for example, Zhang et al. (2021). Other regularization techniques frequently employed empirically to assist in the estimation of neural networks include batch normalization, early stopping, and dropout. We succinctly discuss batch normalization below, given its importance because of the cross-sectional aspect of our estimator. We refer the reader to Gu et al. (2020) for a detailed discussion of early stopping and dropout. Batch normalization, proposed by Ioffe and Szegedy (2015), is a technique used to control the variability of the covariates across different regions of the network and datasets. It is used to address the issue of internal covariate shift, where inputs of hidden layers may follow different distributions than their counterparts in the validation sample. This is a prevalent issue when fitting, in particular, deep neural networks. Effectively, batch normalization cross-sectionally demeans and standardizes the variance of the batch inputs.

**Remark 5**: _In this paper, we consider both penalized and non-penalized estimation. While using the latter might seem problematic due to the large number of parameters that needs to be estimated, we find, in our empirical work, that this is not necessarily the case. This is not surprising. Recent work in the statistical and machine learning literature highlights what is known as the double descent effect. For linear regressions, this relates to the use of generalized inverses to construct least squares estimators, when the number of variables, \(p\), exceeds the number of observations, \(T.\) Such estimators work better either when \(p\) is small (and standard matrix inversion can be used) or when \(p\) is much larger than \(T.\) Then the quality of the performance of an estimator is implicitly measured in terms of the "bias-variance trade-off," where an optimal performance resides at the lowest reported bias and variance of the corresponding model (either linear or nonlinear). While it is widely accepted that the "bias-variance trade-off" function resembles a U-shaped curve, it has been observed, for example see Belkin et al.
(2019) and Hastie et al. (2022), that beyond the interpolation limit the test loss descends again, hence the name "double-descent." To understand why, note that such estimators implicitly impose penalization by using generalized inverses and so choose the parameter vector with the smallest norm, among all admissible vectors; see Hastie et al. (2022). So once \(p\) is much larger than \(T\) such a selection becomes more consequential, as many more candidate vectors are admissible. This linear effect is also present for neural network estimation, given the connection to linear models highlighted in Section 2, as discussed in detail in Hastie et al. (2022) and Kelly et al. (2022)._

### Cross-validation

The cross-validation (CV) scheme consists of choices on the overall architecture of the neural network: the total number of layers (\(L\)), neurons (\(M\)), the learning rate (\(\gamma\)) of SGD, the batch size, dropout rate, level of regularization (\(\lambda\)), and a choice on the activation functions. Regarding the choice of the activation functions, we use ReLU for the hidden layers and a linear function for the output layer. We tune the learning rate of the optimizer, \(\gamma,\) from five discrete values in the interval \([0.001,0.01]\). We tune the depth and width of the neural networks using the following grids, \([1,3,5,10,15]\) and \([5,10,15,20,30]\), respectively. Hence the choice between deep or shallow learning is completely data-driven, as it is selected from the CV scheme. We set the batch size to 14. For the tuning of the regularization parameter, \(\lambda\), used for LASSO penalization, we use the following grid \(c\sqrt{\log p/NT}\), where \(c=[0.001,0.01,0.1,0.5,1,5,10]\). We also use dropout regularization, where the dropout probability is up to 10 percent; see, for example, Gu et al. (2020). To select the trainable parameters, \(\mathbf{\theta}\), and the hyperparameters discussed above, we follow Gu et al. (2020, 2021) and divide our data into three disjoint time periods that maintain the temporal ordering of the data: the _training_ sub-sample, which is used to estimate the parameters of the model, \(\mathbf{\theta}\), given a specific set of hyperparameters; the _validation_ sub-sample, which is used to tune the different hyperparameters given \(\widehat{\mathbf{\theta}}\) from the _training_ sub-sample2; and, finally, the _testing_ sub-sample, which is truly out-of-sample and is used to evaluate our nonlinear models' forecasting performance. As discussed in detail below, our forecasting exercise is recursive, based on an expanding window size. Hence, at each expanding window, we need to use the train-validation split of the sample, estimate the relevant parameters, and tune the hyperparameters. At each expanding window, let \(T^{*}\) denote the total sample size for the specific window; then the _training_ sub-sample consists of \(\lfloor 0.8T^{*}\rfloor\) observations, the _validation_ sub-sample consists of \(\lfloor 0.2T^{*}\rfloor-c\) observations, and, finally, the _testing_ sub-sample consists of 7, 14, or 21 observations, depending on the forecast horizon \(h\). Here \(c\) is chosen so that the _testing_ sub-sample always has \(h\) observations, and \(\lfloor\cdot\rfloor\) stands for the floor function. Footnote 2: Note that while \(\widehat{\mathbf{\theta}}\) is used in the tuning of the hyperparameters, it is only estimated on the _training_ sub-sample.
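The following sketch illustrates the temporal train-validation-test split and the hyperparameter grids just described. It is a minimal illustration assuming a user-supplied `fit_and_score` routine; the five learning-rate values are assumed to be equally spaced in \([0.001, 0.01]\), which the text does not specify.

```python
import itertools
import numpy as np

def temporal_split(T_star, h):
    """80/20 train-validation split preserving temporal order, with the
    last h observations held out as the testing sub-sample."""
    n_train = int(0.8 * T_star)
    train = range(0, n_train)
    val = range(n_train, T_star - h)    # validation ends before the test set
    test = range(T_star - h, T_star)
    return train, val, test

# Hyperparameter grids from the text; lambda = c * sqrt(log(p) / (N * T)).
grid = {
    "lr": list(np.linspace(0.001, 0.01, 5)),  # assumed equally spaced
    "depth": [1, 3, 5, 10, 15],
    "width": [5, 10, 15, 20, 30],
    "c": [0.001, 0.01, 0.1, 0.5, 1, 5, 10],
}

def cross_validate(fit_and_score, T_star, h):
    """fit_and_score(hp, train_idx, val_idx) -> validation MSE; it trains
    on the training sub-sample and evaluates on the validation one."""
    train, val, _ = temporal_split(T_star, h)
    combos = (dict(zip(grid, values))
              for values in itertools.product(*grid.values()))
    return min(combos, key=lambda hp: fit_and_score(hp, train, val))
```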
### Optimization

The estimation of neural networks is generally a computationally cumbersome optimization problem, due to nonlinearities and non-convexities. The most commonly used solution utilizes SGD to train a neural network. SGD uses a batch of a specific size, that is, a small subset of the data at each epoch (iteration) of the optimization to evaluate the gradient, to alleviate the computational burden. The size of the update step at each epoch is controlled by the learning rate, \(\gamma\). We use the adaptive moment estimation algorithm (ADAM) proposed by Kingma and Ba (2014)3, which is a more efficient version of SGD. Finally, we set the number of epochs to \(5,000\) and use early stopping following Gu et al. (2020) to mitigate potential overfitting. Footnote 3: ADAM uses estimates of the first and second moments of the gradient to adapt the learning rate.

## 4 Empirical analysis: forecasting new COVID-19 cases

In this section, after introducing the data, we examine the predictive ability of the proposed model(s) for forecasting the daily path of new COVID-19 cases across the G7 countries. We compare the forecasting results from our new models against two restricted alternatives: a neural network without a cross-sectional dimension and a linear panel data VAR (PVAR). Comparison against these alternatives lets us examine the importance of firstly modeling the panel dimension and secondly of allowing for nonlinearities. To assess the out-of-sample Granger causality of pandemic-induced lockdown policies on the spread of COVID-19, we compare the forecasting performance of our models with and without measures of the stringency of government-imposed containment and lockdown policies. Such (non-pharmaceutical) policies were differentially adopted by many countries from March 2020, including the G7, to reduce the spread of COVID-19. Then, we discuss how partial derivatives can be used to help interpret the output of the deep panel models. They can be used to help assess the efficacy of the different containment policy measures taken by individual countries to contain the spread of COVID-19.

### The COVID-19 data and the Oxford stringency index

Our interest is in modeling and forecasting, at a daily frequency, reports of new COVID-19 cases per 100K of the population over the sample April 2020 through December 2022 for the G7 countries. We source these data from the World Health Organization coronavirus dashboard. As \(\mathbf{x}_{it}\) variables, for each country, \(i\), at day, \(t\), we consider a set of 7 lagged COVID-19 related indicators, as well as lags of new cases per-100K (our \(y_{it}\) variable). For parsimony, we confine attention to lags at 7, 14, 21, and 28 days. These 7 variables, plus lags of the dependent variable, may all have explanatory power for \(y_{it}\). The 7 variables (all reported per 100K of the population) comprise: new deaths, the reproduction rate, new tests, the share of COVID-19 tests that are positive measured as a rolling 7-day average (this is the inverse of tests per case), the number of people vaccinated, the number of people fully vaccinated, and the number of total boosters. Knutson et al. (2023), Mathieu et al. (2021), and Caporale et al. (2022) also consider such COVID-19 related variables, given that they all likely relate (contemporaneously or at a lag) to the number of new COVID-19 cases.
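As an illustration of how this regressor set can be assembled, the sketch below builds the lagged panel features and applies the \([0,1]\) normalization described later in this section. It assumes a long-format pandas DataFrame indexed by (country, date); the column names are hypothetical.

```python
import pandas as pd

LAGS = (7, 14, 21, 28)

def minmax(x):
    """Min-max normalization into [0, 1], applied per country."""
    return (x - x.min()) / (x.max() - x.min())

def build_features(df, target="new_cases_per_100k"):
    """df: smoothed panel indexed by (country, date), holding the target
    and the 7 COVID-19 indicators as columns (names are illustrative)."""
    df = df.groupby(level="country").transform(minmax)
    out = {}
    for k in LAGS:
        lagged = df.groupby(level="country").shift(k)  # lag within country
        out.update({f"{col}_lag{k}": lagged[col] for col in df.columns})
    X = pd.DataFrame(out).dropna()  # drop the first 28 days per country
    y = df.loc[X.index, target]
    return X, y
```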
To assess the role of containment policies in explaining and forecasting the spread of new COVID-19 cases, we then consider specifications that augment the aforementioned set of \(\mathbf{x}_{it}\) variables by adding in a measure or measures of the stringency of the government response to COVID-19. Specifically, we use the government response stringency index, as compiled by the Oxford Coronavirus Government Response Tracker (OxCGRT). This index is a composite measure based on 9 response indicators, namely: school closures, workplace closures, the cancellation of public events, restrictions on gatherings, public transport closures, stay-at-home requirements, restrictions on internal movement, restrictions on international travel, and public information campaigns. Throughout the pandemic the Oxford stringency index was a widely consulted measure of policy. Since the Oxford index is an aggregation of 9 indicators, with the weights subjectively chosen by Oxford, we also experiment with forecasting when the underlying 9 disaggregates enter individually into our models, so that, in effect, we objectively use the data to weight the disaggregates. Note that we always consider the lagged effects of policy changes on new COVID-19 cases, mitigating endogeneity concerns that, for example, stricter lockdown policies follow increases in new COVID-19 cases. Throughout, \(t\) corresponds to a day and we use a trailing seven-day rolling average to smooth the data. The dimension of the regressor vector is \(p=36\) when we consider the aggregate stringency index (as published by Oxford) and \(p=68\) when we consider the disaggregated stringency index. We further follow the literature (see, for example, Gu et al. (2021)) and normalize all of our variables into the \([0,1]\) interval using the min-max transformation: \[\widetilde{\mathbf{x}}_{i}=\frac{\mathbf{x}_{i}-\min(\mathbf{x}_{i})}{\max(\mathbf{x}_{i})-\min( \mathbf{x}_{i})},\quad i=1,\ldots,N.\] This normalization minimizes the influence of severely outlying observations stemming from covariate distributions that may have significant departures from normality, a common feature of COVID-19 data, especially at the beginning of the pandemic. The online Data Appendix provides additional data details. Figure 2 presents the aggregate stringency index and plots new COVID-19 cases per 100K of the population through our sample period. This figure shows that there are apparent commonalities across countries, both in the stringency of policy and the evolution of new COVID-19 cases. But there are differences too, with Japan standing out as having looser containment policies than the other countries during mid-2020 and then experiencing a later spike in new COVID-19 cases in summer 2022. Thus, it remains an empirical question whether forecasting new COVID-19 cases is improved by pooling information across countries.

Figure 2: The Oxford stringency index and new COVID-19 cases per 100K of the population

#### 4.1.1 Out-of-sample forecasting design

We recursively produce forecasts of \(y_{it}\) - new COVID-19 cases - by estimating our set of models using expanding estimation windows and evaluate these forecasts over the out-of-sample period February 6, 2021 through December 24, 2022.
Given (5), the \(h\)-day-ahead forecast of new COVID-19 cases per 100K is: \[\widehat{y}_{i,t+h}|\mathcal{F}_{t}=\widehat{g}\left(\mathbf{x}_{it};\mathbf{\theta}^{ \star}\right)+\widehat{g}\left(\mathbf{x}_{it};\mathbf{\theta}_{i}^{\star}\right), \tag{18}\] where \(\widehat{g}\left(\mathbf{x}_{it};\mathbf{\theta}^{\star}\right)\) denotes the corresponding fit of the pooled network, \(\widehat{g}\left(\mathbf{x}_{it};\mathbf{\theta}_{i}^{\star}\right)\) denotes the unit-by-unit fit of the network, and \(\widehat{y}_{i,t+h}|\mathcal{F}_{t}\) denotes the deep idiosyncratic forecast. \(\mathcal{F}_{t}\) denotes the information set up to time \(t\), for some \(t=1,\ldots,T\), \(\mathbf{\theta}^{\star}\) denotes the optimal weights obtained from the CV for the deep pooled model, and \(\mathbf{\theta}_{i}^{\star}\) denotes the optimal weights obtained from the CV for the deep idiosyncratic model. We compare this forecast against, what we call, the "deep pooled" forecast that sets \(\widehat{g}\left(\mathbf{x}_{it};\mathbf{\theta}_{i}^{\star}\right)=0\). We recursively compute \(h=7\), \(h=14\), and \(h=21\) day-ahead forecasts using an expanding estimation window (relating \(y_{i,t+h}\) to \(\mathbf{x}_{it}\), as per (18)). To ease the computational burden, given that we re-estimate the model and use CV (as discussed in Section 3) at each window, we increase the size of the estimation windows in increments of 7 days. We now summarize how estimation and forecasting work for \(h=7\) (forecasting at the longer horizons proceeds analogously): We first estimate our models using daily data from April 1, 2020 through January 30, 2021 (\(T^{0}=305\)) and produce forecasts 7 days-ahead. Then we estimate from April 1, 2020 through February 6, 2021 (\(T^{1}=312\)) and again produce forecasts 7 days-ahead. We carry on this process until we finally estimate our models over the sample April 1, 2020 through December 17, 2022 (\(T^{700}=991\)), producing forecasts 7 days-ahead. This results in an out-of-sample size of 700 days. We do not consider forecasting earlier than 7 days-ahead, given that the incubation period of COVID-19 is typically around one week, so that we should not expect policy changes to have effects within one week. During the first wave of the pandemic, many governments revised their policy measures to restrict the virus once a week, which also helps rationalize our choice of forecast horizons. Forecasts for longer horizons, \(h\), are obtained similarly. To test if and how our proposed deep neural network panel data models confer forecasting gains, we compare them against two benchmarks that switch off firstly panel (cross-country) interactions and secondly nonlinear effects. We do so by estimating: (i) a "deep time-series" model that is identical to our deep neural network panel data model but is estimated separately for each country; and (ii) a panel VAR (PVAR) model that does allow for cross-country interactions, but assumes linearity in terms of how \(\mathbf{x}_{it}\) affects \(y_{i,t+h}\). Testing our model against these two special cases isolates whether it is allowing for cross-country interaction and/or for nonlinearity that is advantageous.
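A schematic of this recursive design, under the assumption that model estimation (including CV) is wrapped in user-supplied `estimate` and `predict` routines, might look as follows; the direct h-step pairing of \(y_{i,t+h}\) with \(\mathbf{x}_{it}\) follows (18), and the window lengths mirror those stated above.

```python
def expanding_window_forecasts(estimate, predict, X, y, T0=305, h=7, step=7):
    """Recursive direct h-step forecasting with expanding windows.

    estimate(X_tr, y_tr) -> model: re-estimates (and cross-validates) on
        data up to the end of the current window, pairing y_{i,t+h}
        with x_{it}.
    predict(model, X_t) -> forecast of y at t + h from the period-t block.
    X, y: sequences ordered by time; X[t] and y[t] hold the period-t panel.
    """
    T = len(y)
    forecasts = {}
    for t_end in range(T0, T - h + 1, step):  # window grows by `step` days
        model = estimate(X[: t_end - h], y[h:t_end])  # direct h-step pairs
        # forecast origins are the last in-sample day and the days up to
        # the next re-estimation; each targets its origin plus h days
        for t in range(t_end - 1, min(t_end - 1 + step, T - h)):
            forecasts[t + h] = predict(model, X[t])
    return forecasts
```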
We follow Canova and Ciccarelli (2009) and specify the \(i^{th}\) equation of the PVAR with \(q\) lags as: \[y_{it}=A_{1i}\boldsymbol{Y}_{t-1}+\cdots+A_{qi}\boldsymbol{Y}_{t-q}+\epsilon_{ it},\quad\epsilon_{it}\sim\text{i.i.d. }N\left(0,\sigma_{i}^{2}\right), \tag{19}\] where \(A_{ji}\) for \(j=1,\ldots,q\) are coefficient matrices, we have dropped the intercept for notational simplicity, \(\boldsymbol{Y}_{t}=(z_{1t}^{\prime},\ldots,z_{Nt}^{\prime})^{\prime}\), and \(z_{it}=(y_{it},\boldsymbol{x}_{it}^{\prime})^{\prime}\). We set \(q=28\). We estimate the PVAR by OLS and compute \(h\)-day-ahead forecasts of \(y_{it+h}\) from (19) via iteration.

#### 4.1.2 Forecast evaluation

In this section we evaluate the forecasting performance of the proposed nonlinear panel estimator(s) relative to the two benchmark models, namely, the linear PVAR(28) and the deep time-series neural network. We then examine whether the inclusion of policy-related variables affects forecast accuracy. Specifically, to test for out-of-sample Granger causality of the policy measures adopted by governments to contain the spread of COVID-19, we compare the forecast accuracy of all of our models with and without the aggregate and disaggregate Oxford stringency indexes. We evaluate the accuracy of the forecasts of new COVID-19 cases using the root mean squared forecast error (RMSE): \[\text{RMSE}_{i}=\sqrt{\frac{1}{T}\underset{t=1}{\overset{T}{\sum}}\left(y_{ i,t+h}-\widehat{y}_{i,t+h}|\mathcal{F}_{t}\right)^{2}},\quad i=1,\ldots,N.\] We use the Diebold and Mariano (1995) (DM) test to test whether differences in forecast accuracy across models are statistically significant. We follow Harvey et al. (1997) and use their small-sample adjustment. Table 1 compares the accuracy of our two deep nonlinear models against the two benchmarks when we do not include the stringency-based measures of policy and instead focus on predicting new COVID cases using lags of new COVID cases and the other 7 COVID-related measures. The results are striking. Both deep nonlinear panel models provide significant forecasting gains over both the linear PVAR(28) model and the deep time-series neural network at all three forecast horizons. This shows the importance of both the panel dimension and nonlinearities in forecasting the daily path of new COVID-19 cases across the G7 countries. Of the two deep models, the deep pooled estimator delivers, for all 7 countries, more accurate forecasts than the deep idiosyncratic model. Simpler models often work better when forecasting, and this appears to be the case here too: adding country-specific effects to our deep pooled model hinders out-of-sample forecasting performance. Tables B.1-B.3 in the online appendix show that the forecasting gains of the deep models over the time-series model are statistically significant. This evidences that the gains from modeling and forecasting new COVID-19 cases come from pooling data (in a nonlinear manner) across the G7 countries. We next test whether the containment or lockdown policies, imposed at the national-level, help forecast new COVID-19 cases. If the policies were effective, conditioning on them should deliver more accurate forecasts. Table 2 presents the relative RMSE ratios for each of the four forecasting models when estimating including and excluding the aggregate stringency index.
Focusing on the deep models, given their higher accuracy as seen in Table 1, we see that policy, as measured by the aggregate stringency index, was only effective in France and Japan at 7 days. In the other 5 countries, the RMSE ratios are greater than unity, indicating that better forecasts of new COVID-19 cases are made without the stringency index. Interestingly, for the less accurate deep time-series and PVAR models, policy appears to have been more effective. But consistent with it taking time for policy changes to affect the path of the pandemic, Table 2 shows that after an additional two weeks policy was effective in all G7 countries, except Italy and the US. Table 3 then tests whether the Oxford stringency data have more value-added when forecasting if we let the models decide how much weight to attach to each of the 9 components (policy levers) in the aggregate stringency index. The fact that the RMSE ratios, for the preferred deep models, are now less than unity across all 7 countries indicates that policy was effective after all: but it is important to let the data determine what policies matter in which country. Table 3 indicates that at \(h=7\) days policy was least effective in Canada and Italy: while policy interventions still affect new COVID-19 cases, these effects, unlike in the other G7 countries, are not statistically significant. However, again demonstrating that policy changes take time to have impact, policy has a larger effect after another week (at \(h=14\) days), as in both Canada and Italy the relative RMSE ratios are lower at 14 days than at 7 days. In the online appendix, we provide additional checks on the forecasting performance of our models. We show that the forecasting gains from the models conditioning on the disaggregate stringency index are often stronger in the first half of our out-of-sample window, when in absolute terms the forecasting errors were higher, as COVID-19 infection rates were higher and more volatile. Analysis also indicates that the gains of our deep pooled models, over the linear PVAR model, were higher during these earlier waves of COVID-19. This is consistent with the pandemic exhibiting highly nonlinear features in its earlier waves, before vaccinations and other immunities helped restrain the spread of COVID-19. The fluctuation test of Giacomini and Rossi (2010) is used to show that policy in Italy and Japan proved to be effective later than in the other G7 countries: it is only by the fall of 2022 that we see policy having a marked effect on forecast accuracy. We also present results with the LASSO penalization and discuss the observed double descent pattern, whereby our deep models forecast better without any penalty.
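For reference, a sketch of the DM test with the Harvey et al. (1997) small-sample adjustment, as used to produce the significance stars in the tables below, could look as follows. This is a standard implementation of the published formulas, not the authors' code, and squared-error loss is assumed.

```python
import numpy as np
from scipy import stats

def dm_test_hln(e1, e2, h=1):
    """Diebold-Mariano test with the Harvey-Leybourne-Newbold adjustment.

    e1, e2: forecast errors of the two competing models over the
            out-of-sample period; h: forecast horizon in days.
    Returns the adjusted DM statistic and a two-sided p-value from t(T-1).
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    T = d.size
    dbar = d.mean()
    # long-run variance of dbar: autocovariances up to lag h - 1
    gamma = [np.mean((d[k:] - dbar) * (d[: T - k] - dbar)) for k in range(h)]
    var_dbar = (gamma[0] + 2.0 * sum(gamma[1:])) / T
    dm = dbar / np.sqrt(var_dbar)
    # small-sample correction factor of Harvey et al. (1997)
    k_hln = np.sqrt((T + 1 - 2 * h + h * (h - 1) / T) / T)
    dm_adj = k_hln * dm
    p_value = 2 * stats.t.sf(np.abs(dm_adj), df=T - 1)
    return dm_adj, p_value
```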
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & Canada & France & Germany & Italy & Japan & UK & US \\ \hline
& & & \(h=7\) & & & & \\ \hline
Deep pooled & 1.084 & 0.928 & 1.033 & 1.229 & 0.878 & 1.043 & 1.321 \\
Deep idiosyncratic & 1.015 & 0.897 & 1.038 & 1.152 & 0.836 & 1.309 & 1.329 \\
Deep time-series & 0.847 & 0.946 & 1.076 & 0.881 & 1.044 & 0.949 & 1.040 \\
PVAR(28) & 0.875\({}^{***}\) & 0.868\({}^{***}\) & 0.933 & 0.850\({}^{*}\) & 0.792\({}^{**}\) & 0.880\({}^{*}\) & 0.596\({}^{**}\) \\ \hline
& & & \(h=14\) & & & & \\ \hline
Deep pooled & 1.012 & 0.907\({}^{*}\) & 0.896 & 1.154 & 0.861 & 0.952 & 1.225 \\
Deep idiosyncratic & 0.967 & 0.868\({}^{*}\) & 0.913 & 1.087 & 0.841 & 1.210 & 1.191 \\
Deep time-series & 0.910 & 0.915 & 1.085 & 0.920 & 1.045 & 0.920 & 0.960 \\
PVAR(28) & 0.880\({}^{***}\) & 0.856\({}^{***}\) & 0.930 & 0.855\({}^{*}\) & 0.773\({}^{**}\) & 0.891\({}^{*}\) & 0.603\({}^{**}\) \\ \hline
& & & \(h=21\) & & & & \\ \hline
Deep pooled & 0.953 & 0.879\({}^{*}\) & 0.864\({}^{*}\) & 1.117 & 0.855\({}^{*}\) & 0.957 & 1.078 \\
Deep idiosyncratic & 0.903 & 0.838\({}^{**}\) & 0.874 & 0.989\({}^{*}\) & 0.850 & 1.124 & 1.059 \\
Deep time-series & 0.951 & 0.878 & 1.094 & 0.923 & 1.033 & 0.936 & 0.947 \\
PVAR(28) & 0.887\({}^{***}\) & 0.839\({}^{***}\) & 0.928 & 0.854\({}^{*}\) & 0.794\({}^{**}\) & 0.889\({}^{*}\) & 0.607\({}^{***}\) \\ \hline \hline
\end{tabular}
\end{table}

Table 2: RMSE ratios, comparing the forecast accuracy of each respective model with and without the aggregate Oxford stringency index at 7, 14, and 21 days-ahead. Ratios \(<1\) indicate superior predictive ability for the model with the stringency index. For a description of the 4 forecasting models, see the notes to Table 1. \(*\), \(**\), and \(***\) denote rejection of the null hypothesis of equality of forecast mean squared errors with and without the aggregate Oxford stringency index at the 10%, 5%, and 1% levels of significance, respectively, using the modified Diebold and Mariano (1995) test with the Harvey et al. (1997) adjustment.

### Policy effectiveness

A common critique of ML algorithms is their putative trade-off between accuracy and interpretability. The output of a highly complicated ML model, such as a deep neural network of the sort we consider, may fit the data well in-sample and even, as we find, out-of-sample. But the model itself is often hard to interpret. In this section, we illustrate how the use of partial derivatives provides one way to assess the impact of covariates. We focus on examination of the effects of changes in policy, as measured by the aggregate and the disaggregate stringency indexes, on the transmission of new COVID-19 cases. The use of partial derivatives to interpret model output is, of course, common practice in econometrics, ranging from the simple linear regression model to impulse response analysis. In this section, we show how partial derivatives can be used in deep neural networks to interpret highly nonlinear relationships between covariates and the dependent variable.4 Footnote 4: We prefer the use of partial derivatives over Shapley additive explanation values, as proposed by Lundberg and Lee (2017), since derivatives tend to be less noisy (see, for example, Chronopoulos et al. (2023)) and computationally less expensive to compute. Perhaps, though, the biggest disadvantage is the set of implicit assumptions used in the operational construction of Shapley values. A major one is the assumption that inputs are statistically independent.
This is discussed in Aas et al. (2021), who also discuss solutions. However, these are computationally intensive, potentially still quite poor approximations, and not appropriate for large sets of variables.

\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & Canada & France & Germany & Italy & Japan & UK & US \\ \hline \multicolumn{8}{c}{\(h=7\)} \\ \hline Deep pooled & 0.941 & 0.834\({}^{**}\) & 0.852\({}^{**}\) & 0.848 & 0.852\({}^{*}\) & 0.842\({}^{**}\) & 0.845\({}^{**}\) \\ Deep idiosyncratic & 0.866 & 0.861 & 0.913 & 0.861 & 0.875 & 1.059 & 0.947 \\ Deep time-series & 0.906 & 1.063 & 1.071 & 0.909 & 1.067 & 0.901 & 1.075 \\ PVAR(28) & 0.840 & 0.614\({}^{***}\) & 0.992 & 0.791 & 0.807 & 0.850 & 0.739 \\ \hline \multicolumn{8}{c}{\(h=14\)} \\ \hline Deep pooled & 0.921 & 0.893 & 0.868\({}^{**}\) & 0.843\({}^{**}\) & 0.860\({}^{*}\) & 0.887\({}^{*}\) & 0.908\({}^{*}\) \\ Deep idiosyncratic & 0.796\({}^{*}\) & 0.900 & 0.955 & 0.822\({}^{*}\) & 0.877 & 1.021 & 0.925 \\ Deep time-series & 0.934 & 1.079 & 1.066 & 0.917 & 1.096 & 0.890 & 1.045 \\ PVAR(28) & 0.826 & 0.579\({}^{***}\) & 1.043 & 0.783 & 0.802 & 0.840 & 0.732\({}^{*}\) \\ \hline \multicolumn{8}{c}{\(h=21\)} \\ \hline Deep pooled & 0.900 & 0.884\({}^{**}\) & 0.845\({}^{**}\) & 0.825\({}^{***}\) & 0.843\({}^{**}\) & 0.931 & 0.887\({}^{***}\) \\ Deep idiosyncratic & 0.772\({}^{**}\) & 0.869 & 0.991 & 0.787\({}^{**}\) & 0.891 & 0.953 & 0.864 \\ Deep time-series & 0.994 & 1.090 & 1.118 & 0.919 & 1.084 & 0.906 & 1.009 \\ PVAR(28) & 0.830 & 0.557\({}^{***}\) & 1.078 & 0.785 & 0.846 & 0.834 & 0.730\({}^{**}\) \\ \hline \hline \end{tabular} \end{table}

Table 3: RMSE ratios, comparing the forecast accuracy of each respective model with and without the disaggregate Oxford stringency index at 7, 14, and 21 days ahead. Ratios \(<\!1\) indicate superior predictive ability for the model with the stringency index. For a description of the 4 forecasting models, see the notes to Table 1. \(*\), \(**\), and \(***\) denote rejection of the null hypothesis of equality of forecast mean squared errors with and without the disaggregate Oxford stringency index at the 10%, 5%, and 1% levels of significance, respectively, using the modified Diebold and Mariano (1995) test with the Harvey et al. (1997) adjustment.

While our deep neural networks are highly nonlinear, their output, obtained via SGD optimization methods, can be treated as a differentiable function, since the majority of activation functions are differentiable. In this paper, we consider the case of the ReLU, which is not differentiable at zero but is at every other point of \(\mathbb{R}\). From a computational standpoint, gradient descent heuristically works well enough to treat it as a differentiable function. Furthermore, Goodfellow et al. (2016) argue that this issue is negligible: machine learning software is prone to rounding errors, making it very unlikely to compute the gradient exactly at a singularity point. Note that even in this extreme case, both SGD and ADAM will use the right sub-gradient at zero. Let the matrix of characteristics be denoted \(\mathbf{X}_{t}\in\mathbb{R}^{N\times p}\), where \(\mathbf{X}_{t}=(\mathbf{x}_{t}^{(1)},\ldots,\mathbf{x}_{t}^{(p)})\).
Then, for some \(i=1,\ldots,N\), \(j=1,\ldots,p\) and \(t=1,\ldots,T\), the partial derivatives of \(g(\mathbf{X}_{t};\widehat{\theta})\) with respect to the \(j^{th}\) characteristic in \(\mathbf{X}_{t}\) are: \[d_{i,j,t}=\frac{\partial g\left(\mathbf{X}_{t};\widehat{\mathbf{\theta}}\right)}{\partial x_{i,j,t-h}}, \tag{20}\] where \(g(\mathbf{X}_{t};\widehat{\mathbf{\theta}})\) is the function (see Section 2) that approximates the number of new cases per 100K across the \(N\) countries, in our case the G7. We assess the partial derivatives across time since, following Kapetanios (2007), we expect them to vary due to the inherent nonlinearity of the neural network. We present the partial derivatives, defined in (20), without adding confidence bands around them to assess statistical significance. The reason is that there is currently no rigorous technology in the literature to produce these, especially in the case of penalized estimation. However, recent work by Kapetanios et al. (2023) uses a bootstrap approach to construct confidence bands around partial derivatives. A full modification of this work for use in panel models is an interesting and promising avenue, but is left for future research.

In Figure 3, we present the partial derivatives with respect to the aggregate stringency index at horizons \(h\in\{7,14,21,28\}\), thereby evaluating the dynamic effectiveness of the stringency policies adopted across the G7 countries.5 There are three features that we draw out from Figure 3. First, policy is more effective at containing the spread of COVID-19 after 7 days: stronger and more negative effects of increases in stringency are seen at the longer horizons. Second, with the exception of Japan, policy was most effective in the late fall of 2021 and in early 2022, at the time of the highly contagious Omicron variant. The dynamic effects of policy are, on average, much weaker in the second half of our sample. This is consistent with higher vaccination rates meaning that from mid-2021 (non-immunization) policies became less effective at restraining the spread of new COVID-19 cases. Third, there is considerable cross-country variation in the effectiveness of policy. As referenced above when summarizing the Giacomini and Rossi (2010) fluctuation tests reported in the online appendix, policy in Japan is again seen in Figure 3 to have been most effective in late-summer 2022, consistent with COVID-19 cases peaking later in Japan than in the other countries (see Figure 2). Containment policies in Italy tended, relative to the other countries, to have a more muted effect.

Given the evidence from Table 3 that the disaggregated stringency index confers additional forecasting gains relative to the aggregate index, we next look at the partial derivatives with respect to the 9 components of the Oxford index. In this way, we aim to shed light on the effectiveness of specific policy measures. We focus on the effects of school and university closings and of workplace closings, since, of the 9 components of the Oxford stringency index, these tend to be the specific policies associated with the largest marginal effects. Results for the other policy measures are provided in the online appendix. Given the high degree of correlation between the different policy measures (see online Tables A.2-A.3), we should in any case not over-interpret these partial derivatives.
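As an illustration of how the derivatives in (20) can be obtained in practice, the sketch below evaluates the gradient of a fitted network's output with respect to one covariate via automatic differentiation. It assumes a PyTorch network `model` applied row-wise to lagged covariates; the function and variable names are illustrative, not the authors' code.

```python
import torch

def partial_derivatives(model, X, j):
    """Gradient of the network output w.r.t. the j-th covariate, cf. (20).

    model: fitted network g(.; theta_hat) applied independently to each
           row of X, returning one prediction per date.
    X:     (T, p) tensor of lagged covariates, one row per date.
    j:     column index of the covariate of interest (e.g. the
           aggregate stringency index).
    """
    X = X.clone().requires_grad_(True)
    # Because the model acts row-wise, summing the outputs makes each
    # row of X.grad equal to the gradient of that date's prediction.
    model(X).sum().backward()
    return X.grad[:, j].detach()   # one derivative per date t
```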
Figure 4 shows that over time (as \(h\) increases from 7 to 28 days) school and university closings had an increasingly strong effect. For most countries, as expected, these effects are negative: the closures lead to a fall in new COVID-19 cases. These negative effects are especially strong in Italy. But in the UK, the effects are not so clear cut, with the closings appearing to have a positive effect during the early stages of COVID-19. As in Figure 3, we again see evidence across countries that school and university closures were far more effective prior to January 2022. Thereafter, the effects are much more modest.

Figure 3: Partial derivatives: the effects of policy (as measured by the Oxford stringency index) on new COVID-19 cases 7, 14, 21, and 28 days after the policy change.

Turning to Figure 5, we see that while workplace closures tended to have a negative effect on COVID-19 soon after the policy change, in particular in Germany and the UK, thereafter the effects are more uncertain and variable across countries. This can be attributed not just to difficulties in isolating the direct effects of one policy change versus another (related) one, but also to the fact that in the intervening period there were likely additional and perhaps offsetting changes.

Figure 4: Partial derivatives: the effects of school and university closures on new COVID-19 cases 7, 14, 21, and 28 days after the policy change.

Figure 5: Partial derivatives: the effects of workplace closures on new COVID-19 cases 7, 14, 21, and 28 days after the policy change.

## Conclusion

This paper proposes a nonlinear panel data estimator of the conditional mean based on neural networks. We explore heterogeneity and latent patterns in the cross-section, and derive an estimator to account for these patterns. Furthermore, we provide asymptotic arguments for the proposed methodology, building on the work of Farrell et al. (2021). We use the proposed estimators to forecast, in a simulated out-of-sample experiment, the progression of the COVID-19 pandemic across the G7 countries. We find significant forecasting gains over both linear panel data models and time-series neural networks. Containment or lockdown policies, as instigated at the national level, are found to have out-of-sample predictive power for the spread of new COVID-19 cases. Using partial derivatives to help interpret the panel neural networks, we find considerable heterogeneity and time-variation in the effectiveness of specific containment policies.
2308.16422
Dilated convolutional neural network for detecting extreme-mass-ratio inspirals
The detection of Extreme Mass Ratio Inspirals (EMRIs) is intricate due to their complex waveforms, extended duration, and low signal-to-noise ratio (SNR), making them more challenging to be identified compared to compact binary coalescences. While matched filtering-based techniques are known for their computational demands, existing deep learning-based methods primarily handle time-domain data and are often constrained by data duration and SNR. In addition, most existing work ignores time-delay interferometry (TDI) and applies the long-wavelength approximation in detector response calculations, thus limiting their ability to handle laser frequency noise. In this study, we introduce DECODE, an end-to-end model focusing on EMRI signal detection by sequence modeling in the frequency domain. Centered around a dilated causal convolutional neural network, trained on synthetic data considering TDI-1.5 detector response, DECODE can efficiently process a year's worth of multichannel TDI data with an SNR of around 50. We evaluate our model on 1-year data with accumulated SNR ranging from 50 to 120 and achieve a true positive rate of 96.3% at a false positive rate of 1%, keeping an inference time of less than 0.01 seconds. With the visualization of three showcased EMRI signals for interpretability and generalization, DECODE exhibits strong potential for future space-based gravitational wave data analyses.
Tianyu Zhao, Yue Zhou, Ruijun Shi, Zhoujian Cao, Zhixiang Ren
2023-08-31T03:16:38Z
http://arxiv.org/abs/2308.16422v3
# DECODE: DilatEd COnvolutional neural network for Detecting Extreme-mass-ratio inspirals

###### Abstract

The detection of Extreme Mass Ratio Inspirals (EMRIs) is intricate due to their complex waveforms, extended duration, and low signal-to-noise ratio (SNR), making them more challenging to be identified compared to compact binary coalescences. While matched filtering-based techniques are known for their computational demands, existing deep learning-based methods primarily handle time-domain data and are often constrained by data duration and SNR. In addition, most existing work ignores time-delay interferometry (TDI) and applies the long-wavelength approximation in detector response calculations, thus limiting their ability to handle laser frequency noise. In this study, we introduce DECODE, an end-to-end model focusing on EMRI signal detection by sequence modeling in the frequency domain. Centered around a dilated causal convolutional neural network, trained on synthetic data considering TDI-1.5 detector response, DECODE can efficiently process a year's worth of multichannel TDI data with an SNR of around 50. We evaluate our model on 1-year data with accumulated SNR ranging from 50 to 120 and achieve a true positive rate of 96.3% at a false positive rate of 1%, keeping an inference time of less than 0.01 seconds. With the visualization of three showcased EMRI signals for interpretability and generalization, DECODE exhibits strong potential for future space-based gravitational wave data analyses.

Gravitational Wave Deep Learning EMRI

## 1 Introduction

The groundbreaking detection of gravitational waves (GWs) in 2015, exemplified by the GW150914 event, has profoundly impacted the field of astrophysics [1]. Enabled by the Laser Interferometer Gravitational Wave Observatory (LIGO) [2] and Virgo [3], this remarkable achievement unequivocally confirmed the existence of GWs, providing empirical validation of general relativity (GR) [4]. Beyond enriching our knowledge of the cosmos, this seminal discovery has ushered in a new era of astronomical observation [5]. With the spotlight now turning to space-based GW observatories [6, 7], the absence of terrestrial disturbances allows for a more dedicated exploration of low-frequency GWs [8]. This exciting pursuit carries the potential to reveal hitherto unobserved phenomena, offering profound insights into the nature of our universe [5]. Space-based GW detection, a largely unexplored domain, marks the next epoch in astrophysics [6]. Pioneering this exciting venture are projects such as the Laser Interferometer Space Antenna (LISA) [9] by the European Space Agency (ESA), with NASA's participation, and Asian projects including Japan's DECi-hertz Interferometer Gravitational wave Observatory (DECIGO) and B-DECIGO [10; 11], as well as China's Taiji [12] and TianQin [13] missions. Targeting the millihertz frequency band, these endeavors offer a novel perspective for the exploration of diverse astrophysical and cosmological phenomena through the detection of low-frequency GWs [6; 14; 15]. The scientific goals are broad, with the intent to shed light on the enigmas of massive black hole binaries (MBHBs), extreme-mass-ratio inspirals (EMRIs), continuous waves from galactic binaries (GBs), and the stochastic GW backgrounds produced by the early universe's myriad of unresolved sources [16]. In the spectrum of potential discoveries, EMRIs hold a unique position.
Initiated when a compact stellar remnant spirals into a massive black hole (MBH), these events provide opportunities to investigate the characteristics of MBHs and the nature of their surrounding environments [17]. EMRIs emit low-frequency GWs throughout their extended inspiral phase, serving as a rich source of information for understanding system physical parameters and the MBH's spacetime geometry [18]. The successful detection and parameter estimation of EMRI signals could provide novel insights into the astrophysics of MBHs and the foundational principles of gravity [19; 20].

Traditional methods for EMRI detection, which include both time-domain and time-frequency-domain techniques, have been widely studied in prior research [21; 22; 23; 24; 25]. These strategies mainly employ matched filtering [21; 24] and the Short Time Fourier Transform [22; 23; 25]. However, the inherent complexities of EMRI signals present significant obstacles. Characterized by their complex waveform templates, high-dimensional parameter space, and multiple modes within a single waveform, EMRI signals require over \(\sim 10^{35}\) templates for a matched filtering search [18], resulting in a computationally intensive and time-consuming procedure. An example of a single EMRI in both the time and frequency domains can be seen in Figure 1, showcasing the aforementioned challenges of signal detection. Additionally, EMRI signals are typically faint and buried within detector and confusion noise, necessitating extended observation durations to achieve an adequate signal-to-noise ratio (SNR) for detection [18]. Time-frequency techniques, offering representations in both time and frequency domains, are frequently less sensitive than matched filtering, which limits their ability to identify weak signals [25]. Given these challenges, exploring alternative methods, such as deep learning, becomes crucial for potentially improving the efficiency of EMRI signal detection.

Deep learning, an advanced branch of machine learning, employs neural networks with multiple layers for different types of data. By facilitating the extraction of intricate patterns and representations from large datasets, it has played a crucial role in advancing various fields, from image recognition [26] to natural language processing [27]. Among the numerous architectures, the convolutional neural network (CNN) stands out for its proficiency in handling structured data, such as images and time series, by progressively learning features in a hierarchical manner. Starting with simple features like edges in the initial layers, CNNs gradually combine these to recognize more complex patterns and structures in the deeper layers. This layered approach allows CNNs to automatically recognize and represent intricate details in the data, making them highly effective for tasks like object detection [28] and time-series classification [29].

Figure 1: **Visualization of a training data sample.** This depicts an EMRI signal from the TDI-A channel spanning 1 year with an SNR of 70. (a) Time-domain representation of the TDI-A strain, showcasing both the combined data (signal + noise) and the signal. The signal's amplitude is about 3 orders of magnitude lower than the noise, which makes the detection challenging. (b) Welch PSD of the combined data and the signal; the signal contains many modes (peaks), with some reaching the noise level, highlighting the suitability of the frequency-domain detection method. The designed detector noise PSD is also presented for reference.

In the area of GW data analysis, the potential of deep learning, especially CNNs, is becoming increasingly evident. A large number of studies [30, 31, 32, 33, 34, 35, 36, 37] have demonstrated their effectiveness in ground-based GW detection. Beyond signal detection, deep learning methods have been applied to a variety of tasks, including parameter estimation [38, 39] and glitch classification [40, 41, 42]. However, the application of these methods to space-based GW detection is still in its early stages. Some exploratory efforts exist, such as the adoption of MFCNN [32] to detect MBHBs contaminated by confusion noise [43] and the application of dictionary learning to low-SNR space-based binary black hole (BBH) detection [44]. Notably, Zhang et al. [45] pioneered the detection of EMRIs using a CNN, though without incorporating the time delay interferometry (TDI) technique. Therefore, further research is needed to harness the full capabilities of deep learning in space-based GW analysis.

In this paper, we introduce the DECODE (DilatEd COnvolutional neural network for Detecting Extreme-mass-ratio inspirals), an end-to-end model designed for detecting EMRI signals in the frequency domain with an SNR of around 50. As shown in Figure 2, the model incorporates dilated causal convolutional layers, which expand its receptive field, allowing it to efficiently process data covering an entire year in one pass. We trained our model using synthetic data that considers the TDI-1.5 detector response, accounting for unequal arm lengths. The results are promising: the DECODE detects EMRI signals with a 1-year accumulated SNR between 50 and 120, achieving a true positive rate (TPR) of 96.3% at a false positive rate (FPR) of 1%. Notably, our model can evaluate one batch of data samples within seconds.

Figure 2: **Comprehensive EMRI detection framework.** (a) Depicts the entire EMRI detection process, from initial data preprocessing to the end-to-end DECODE model. (b) Highlights the mechanism of dilated causal convolution with dilation factors of \((1,2,4,8)\) and a kernel size of 2, emphasizing the exponential growth of the receptive field. (c) Detailed architecture of the residual block in DECODE, comprising two dilated causal convolutional layers, weight normalization, ReLU, and dropout layers. A \(1\times 1\) convolution is introduced to address any dimension discrepancies between the residual input and output.

Visualizations of the model's intermediate outputs highlight its interpretable feature extraction process and its ability to generalize beyond GR. These findings emphasize the potential of DECODE in future space-based GW data analyses. The remainder of this paper is organized as follows: Section 2 provides a detailed overview of the data generation procedure and outlines the architecture of our proposed model, the DECODE. In Section 3, we present the results of our EMRI detection experiments, demonstrating the effectiveness of our approach. Finally, Section 4 concludes the paper with a summary of our findings and a discussion of potential future work.
## 2 Method

### EMRI Waveform Modeling

Detecting EMRIs has the potential to reveal key astrophysical insights, but modeling their waveforms is challenging due to the delicate balance of strong-field GR and gravitational radiation dynamics. Accurately describing EMRIs demands a solution to the self-force problem, which considers the gravitational impact of the smaller compact object on its own motion within the powerful gravitational field of the central MBH [6]. Because the self-force problem is highly non-linear and defies analytical solutions, researchers have developed approximate waveform models, commonly referred to as kludge models [46, 47]. Two commonly used kludge models in EMRI modeling are the "analytic kludge" (AK) [46] model and the numerical kludge (NK) [47] model. The AK model relies on post-Newtonian expansions and perturbative calculations to evolve the orbital parameters and generate waveforms quickly. It provides computational efficiency but suffers from dephasing compared to more accurate models, leading to potential inaccuracies in parameter estimation. The NK model, on the other hand, incorporates the orbital trajectory computed in curved space using Kerr geodesics and includes radiation-reaction effects. Although more accurate, the NK model is computationally more expensive, making EMRI signal detection with this template highly formidable. To address the limitations of both models, an augmented analytic kludge (AAK) [48, 49, 50] model has been proposed. The AAK model combines the computational efficiency of the AK model with improved phasing achieved through a mapping to Kerr geodesic frequencies and self-consistent post-Newtonian evolution. By incorporating self-force information and refining the phasing, the AAK model achieves higher waveform fidelity than the AK model while remaining computationally efficient. While its computational efficiency may not be adequate for matched filtering-based signal searches, it is suitable for producing training datasets for deep neural networks (DNNs). Despite the advancements in kludge waveform modeling, challenges remain. Incorporating second-order self-force effects into the models and refining them for orbits approaching plunge are ongoing areas of research [6]. Nonetheless, these waveform models are crucial for accurately representing the dynamics of EMRIs and enabling the detection, parameter estimation, and data analysis of these elusive astrophysical sources.

### Data Curation

The process of curating training and testing datasets for the identification of EMRI signals using a DNN is a multi-step procedure consisting of signal generation, detector response simulation, and pre-processing.

**Waveform Generation.** The first step involves the generation of signal templates. The AAK model used for generating these templates is based on [51]. The waveform, denoted \(h(t)=h_{+}(t)-ih_{\times}(t)\), is typically characterized by 14 physical parameters. The parameter space used for sampling the training and testing dataset parameters in this study is detailed in Table 1. Here, \(M\) and \(a\) represent the mass and the spin parameter of the MBH, respectively. The semi-latus rectum is denoted by \(p\), while \(e\) stands for the orbital eccentricity, and \(\iota\) signifies the orbit's inclination angle from the equatorial plane. \(Y=\cos\iota\equiv L_{z}/\sqrt{L_{z}^{2}+Q}\), where \(Q\) is the Carter constant and \(L_{z}\) is the \(z\) component of the specific angular momentum.
The polar and azimuthal sky location angles are represented by \(\theta_{S}\) and \(\phi_{S}\), respectively. The orientation of the spin angular momentum vector of the MBH is described by the azimuthal and polar angles \(\theta_{K}\) and \(\phi_{K}\). These parameters are uniformly sampled for our dataset. It is important to note that \(\Phi_{\varphi,0},\Phi_{\theta,0},\Phi_{r,0}\), which represent the initial phases of the azimuthal, polar, and radial modes, are all manually set to 0.

**TDI Response.** The next stage involves simulating the detector's response to these signals. The specific detector configurations utilized in this study are detailed in Table 2. For the breathing arm length, we employed the TDI-1.5 technique, which yields the GW strains of the TDI A and E channels, denoted \(h_{A}(t)\) and \(h_{E}(t)\), respectively. A detailed derivation of this technique can be found in Ref. [52]; its CUDA-based implementation enables us to calculate the response in seconds. The signal is then rescaled according to the desired SNR using the formula: \[\mathrm{SNR}^{2}=(h_{A}\mid h_{A})+(h_{E}\mid h_{E}). \tag{1}\] Here, the inner product \((a\mid b)\) is defined as: \[(a\mid b)=2\int_{f_{min}}^{f_{max}}\frac{\tilde{a}^{*}(f)\tilde{b}(f)+\tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}\ \mathrm{d}f. \tag{2}\] In this equation, \(f_{min}=\frac{1}{\text{Duration}}\simeq 3.17\times 10^{-8}\,\mathrm{Hz}\) and \(f_{max}=\frac{1}{2\cdot\text{Cadence}}=\frac{1}{30}\,\mathrm{Hz}\); \(\tilde{a}(f)\) and \(\tilde{b}(f)\) represent the frequency-domain signals, the superscript \(*\) denotes the complex conjugate, and \(S_{n}(f)\) is the one-sided noise power spectral density (PSD), which will be specified below.

\begin{table} \begin{tabular}{l r r} \hline \hline **Parameter** & \multicolumn{1}{c}{**Lower bound**} & \multicolumn{1}{c}{**Upper bound**} \\ \hline \hline \(\log_{10}(M/M_{\odot})\) & \(5\) & \(8\) \\ \(a\) & \(10^{-3}\) & \(0.99\) \\ \(e_{0}\) & \(10^{-3}\) & \(0.8\) \\ \(p_{0}/M\) & \(15\) & \(25\) \\ \(Y_{0}\) & \(-1\) & \(1\) \\ SNR & \(50\) & \(120\) \\ \(\theta_{S}\) & \(0\) & \(\pi\) \\ \(\phi_{S}\) & \(0\) & \(2\pi\) \\ \(\theta_{K}\) & \(0\) & \(\pi\) \\ \(\phi_{K}\) & \(0\) & \(2\pi\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of parameter setups in EMRI signal simulation.

\begin{table} \begin{tabular}{l r} \hline \hline **Parameter** & \multicolumn{1}{c}{**Configuration**} \\ \hline \hline Size of training dataset & \(5000\) \\ Size of testing dataset & \(1000\) \\ Cadence & \(15\,\mathrm{s}\) \\ Duration & \(1\,\mathrm{year}\) \\ Re-sampled data length \(N\) & \(1024/2048/4096\) \\ \hline Arm length \(L\) & \(2.5\times 10^{9}\,\mathrm{m}\) \\ Detector orbit & \(1\)st order Keplerian orbit \\ TDI & TDI-1.5 \\ Acceleration noise \(A_{\mathrm{acc}}\) & \(3\,\mathrm{fm}/\sqrt{\mathrm{Hz}}\) \\ OMS noise \(A_{\mathrm{oms}}\) & \(15\,\mathrm{pm}/\sqrt{\mathrm{Hz}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of configurations of the training and testing datasets.
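A minimal frequency-domain implementation of the inner product (2) and the two-channel SNR (1) might look as follows. This is a sketch assuming uniformly sampled real strains and a callable one-sided noise PSD; the names are illustrative, not the authors' pipeline.

```python
import numpy as np

def inner_product(a, b, dt, psd):
    """Noise-weighted inner product (a|b) of eq. (2).

    a, b: real time-domain strains sampled at cadence dt [s].
    psd:  callable returning the one-sided noise PSD S_n(f).
    """
    n = len(a)
    af = np.fft.rfft(a) * dt          # discrete approximation of the FT
    bf = np.fft.rfft(b) * dt
    f = np.fft.rfftfreq(n, dt)
    df = f[1] - f[0]
    # Skip f = 0 (below f_min, and S_n is singular there)
    integrand = (np.conj(af[1:]) * bf[1:] + af[1:] * np.conj(bf[1:])) / psd(f[1:])
    return 2.0 * np.real(np.sum(integrand) * df)

def snr(h_A, h_E, dt, psd):
    """Two-channel SNR of eq. (1)."""
    return np.sqrt(inner_product(h_A, h_A, dt, psd)
                   + inner_product(h_E, h_E, dt, psd))
```

Rescaling a simulated signal to a target SNR then simply amounts to multiplying \(h_{A}\) and \(h_{E}\) by the ratio of the target SNR to the value returned by `snr`.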
**Noise Generation.** The third step introduces noise to the signal. This noise, \(n(t)\), is modeled as colored Gaussian noise with a PSD defined by \[\mathrm{S}_{n}(f)=16\sin^{2}(\omega L)\left(P_{\mathrm{oms}}(f)+(3+\cos(2\omega L))P_{\mathrm{acc}}(f)\right)\,, \tag{3}\] with \[\begin{split} P_{\text{oms}}(f)&=A_{\text{oms}}^{2}\left[1+\left(\frac{2\,\mathrm{mHz}}{f}\right)^{4}\right]\left(\frac{2\pi f}{c}\right)^{2}\,,\\ P_{\text{acc}}(f)&=A_{\text{acc}}^{2}\left[1+\left(\frac{0.4\,\mathrm{mHz}}{f}\right)^{2}\right]\\ &\cdot\left[1+\left(\frac{f}{8\,\mathrm{mHz}}\right)^{4}\right]\left(\frac{1}{2\pi fc}\right)^{2}\,,\end{split} \tag{4}\] where \(A_{\text{acc}}\) and \(A_{\text{oms}}\) are the noise budgets of the test-mass acceleration noise and the readout noise coming from the optical metrology system (OMS), \(L\) is the arm length of the LISA detector, and \(c\) is the speed of light. The signal is then injected into the noise, resulting in the synthetic data; Figure 1 shows an example of the training data in the time and frequency domains.

**Whitening and PSD Estimation.** In the final stage of data curation, the data undergoes several pre-processing steps to prepare it for input into the DNN. The first of these steps is whitening, which removes the frequency-dependent variations in the noise. This allows the DNN to concentrate on the underlying signal patterns, simplifying the learning task and enhancing the network's ability to detect subtle patterns in the data, thereby improving the overall performance of the EMRI signal identification. Following whitening, the PSD of the data is estimated using Welch's method. The data then undergoes sub-sampling, where it is re-sampled onto a log-uniform frequency grid. This step reduces the computational load of subsequent analyses by decreasing the number of data points; three different grid densities are used, as listed in Table 2. The final pre-processing step is standardization, which ensures that all input features are on a uniform scale, a fundamental requirement for most deep learning algorithms. This step is crucial in enhancing the learning efficiency of the neural network and improving the overall performance of the model.
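The analytic PSD of eqs. (3)-(4) and the whitening plus log-uniform resampling steps can be sketched as below, using the LISA-like parameters of Table 2. This is a simplified illustration, not the production pipeline; the arm length enters the phase as \(2\pi fL/c\).

```python
import numpy as np
from scipy.signal import welch

C = 299_792_458.0            # speed of light [m/s]
L = 2.5e9                    # arm length [m]
A_ACC, A_OMS = 3e-15, 15e-12 # noise budgets from Table 2

def noise_psd(f):
    """One-sided TDI noise PSD of eqs. (3)-(4)."""
    phase = 2 * np.pi * f * L / C          # omega*L with L in seconds
    p_oms = A_OMS**2 * (1 + (2e-3 / f)**4) * (2 * np.pi * f / C)**2
    p_acc = (A_ACC**2 * (1 + (4e-4 / f)**2)
             * (1 + (f / 8e-3)**4) * (1 / (2 * np.pi * f * C))**2)
    return 16 * np.sin(phase)**2 * (p_oms + (3 + np.cos(2 * phase)) * p_acc)

def preprocess(data, dt, n_out=2048):
    """Whiten with a Welch PSD estimate, resample log-uniformly, standardize."""
    f_w, pxx = welch(data, fs=1.0 / dt, nperseg=4096)
    spec = np.fft.rfft(data)
    f = np.fft.rfftfreq(len(data), dt)
    white = spec / np.sqrt(np.interp(f, f_w, pxx))   # whitened spectrum
    grid = np.geomspace(f[1], f[-1], n_out)          # log-uniform grid
    resampled = np.interp(grid, f, np.abs(white))
    return (resampled - resampled.mean()) / resampled.std()
```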
### DECODE

In this work, we introduce the DECODE, a novel architecture for sequence modeling tasks, as illustrated in Figure 2. The DECODE is inspired by the TCN architecture [53], which has been shown to outperform traditional recurrent architectures across a diverse range of tasks and datasets. The DECODE architecture leverages the strengths of convolutional networks, which have been proven to be highly effective for sequence modeling. It incorporates dilated convolutions, which are a powerful tool for capturing long-range dependencies in sequence data, and its causal nature ensures that the model's output at each step is conditioned only on previous steps, making it suitable for tasks that require an understanding of sequential dependencies.

While the TCN and other sequence modeling architectures have predominantly been applied to time-series data, the DECODE stands out in its application to frequency-domain data. Detecting EMRIs in the time domain presents challenges due to the extended duration of the signals and their low SNR. As illustrated in Figure 1(a), the amplitude of the signal is typically three orders of magnitude lower than the noise, and the data span a full year. However, as shown in Figure 1(b), in the frequency domain the signal's PSD has many peaks, with some even reaching the noise level. Despite this shift from the time to the frequency domain, the core principles of sequence modeling remain applicable, and the DECODE effectively exploits them, achieving notable performance in EMRI signal detection.

Figure 3: **EMRI detection performance across SNR and \(N\).** All sub-plots depict receiver operating characteristic (ROC) curves for distinct input sample lengths \(N\) within specific SNR ranges, presented on a logarithmic scale. Each line style signifies the balance between TPR and FPR for a given sample length, with the area beneath each curve representing the model's efficacy. A reference yellow dashed line indicates random prediction. The use of logarithmic scales enhances the visibility of performance differences, especially at lower FPR levels. (a) Evaluation for \(\mathrm{SNR}\in[50,120]\). (b) Evaluation for \(\mathrm{SNR}\in[70,170]\). (c) Evaluation for \(\mathrm{SNR}\in[100,240]\).

**Causal Sequence Modeling.** The DECODE framework is designed for sequence modeling with causality maintained throughout its structure. Central to DECODE's design are two fundamental principles. First, the architecture ensures that the output sequence's length matches that of the input sequence. This alignment is achieved via a 1D convolutional network design, where each hidden layer matches the length of the input layer; to maintain this length consistency, zero padding of length \((\text{kernel size}-1)\) is applied. Second, the architecture enforces the causality of the sequence. This is achieved by using causal convolutions, which ensure that the output at a particular time step is convolved only with preceding elements in the previous layer.

**Dilated Convolution.** Dilated convolutions play a pivotal role in capturing long-range dependencies in sequence data. Drawing inspiration from WaveNet [54], the DECODE employs dilated convolutions to exponentially expand the receptive field without a significant increase in computational complexity or number of parameters; an illustration is provided in Figure 2(b). More formally, for a 1-D sequence input \(\mathbf{x}\in\mathbb{R}^{n}\) and a filter \(f:\{0,...,k-1\}\longrightarrow\mathbb{R}\), the dilated convolution operation \(F\) on element \(s\) of the sequence is defined as \[F(s)=\left(\mathbf{x}\ast_{d}f\right)(s)=\sum_{i=0}^{k-1}f(i)\cdot\mathbf{x}_{s-d\cdot i}\,, \tag{5}\] where \(d\) is the dilation factor, \(k\) is the filter (kernel) size, and \(s-d\cdot i\) accounts for the direction of the past. When \(d=1\), a dilated convolution reduces to a regular convolution. By employing larger dilations, the receptive field of the DECODE is effectively expanded, allowing it to capture long-range dependencies within the sequence data.

**Residual Connections.** Residual connections, another key feature of the DECODE architecture, are designed to facilitate the training of deep networks. These connections, introduced by He et al. [55], allow the gradient to flow directly through the network, mitigating the vanishing-gradient problem that often hampers deep networks. In the DECODE, a residual block is composed of two dilated causal convolutional layers, with a residual connection skipping over them. If we denote the input to the residual block as \(\mathbf{x}\), the output of the block, \(\mathbf{y}\), can be computed as \[\mathbf{y}=\mathrm{Activation}(\mathbf{x}+\mathcal{F}(\mathbf{x}))\,, \tag{6}\] where \(\mathcal{F}(\mathbf{x})\) represents the transformations performed by the dilated causal convolutional layers.
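A minimal PyTorch sketch of such a dilated causal residual block, combining eqs. (5) and (6), is given below. It is an illustrative reconstruction from the description above, not the authors' released code.

```python
import torch
import torch.nn as nn

class CausalResidualBlock(nn.Module):
    """Two dilated causal convolutions with a residual connection, cf. eq. (6)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1, dropout=0.1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left padding keeps causality
        self.conv1 = nn.utils.weight_norm(
            nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation))
        self.conv2 = nn.utils.weight_norm(
            nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation))
        self.drop = nn.Dropout(dropout)
        # 1x1 convolution matches channel counts on the residual path
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):                           # x: (batch, channels, length)
        y = nn.functional.pad(x, (self.pad, 0))     # pad only on the left
        y = self.drop(self.act(self.conv1(y)))
        y = nn.functional.pad(y, (self.pad, 0))
        y = self.drop(self.act(self.conv2(y)))
        return self.act(self.skip(x) + y)           # y = Activation(x + F(x))
```

Stacking such blocks with dilations \(1,2,4,\ldots\) yields the exponentially growing receptive field illustrated in Figure 2(b).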
This design choice has been shown to improve the performance of deep networks and is a key component of the DECODE architecture. The residual block used in the DECODE model is illustrated in Figure 2(c). Each block comprises two layers of dilated causal convolution, each followed by the rectified linear unit (ReLU) activation function; weight normalization [56] and dropout [57] are incorporated after each dilated convolution within the residual block.

**Loss Function.** In our DECODE model, the output of the residual block has a shape of \((H,N)\), where \(H\) represents the hidden size of our model and \(N\) is the length of the input sequence. The last column of this output is passed through a linear layer to generate the predicted probability for EMRI signal detection. To train the model, we use the cross-entropy loss, a common choice for classification tasks. One of the advantages of the cross-entropy loss is its ability to accelerate convergence during training, especially when compared to other loss functions like the mean squared error [58]. The cross-entropy loss for a binary classification problem is given by \[\mathcal{L}=-\frac{1}{n}\sum_{i=1}^{n}y_{i}\log(\mathcal{P}_{i})+(1-y_{i})\log(1-\mathcal{P}_{i})\,. \tag{7}\] In this equation, \(y_{i}\) denotes the actual label, while \(\mathcal{P}_{i}\) is the predicted probability for the \(i\)-th sample, with \(n\) representing the total number of samples in the training dataset. The cross-entropy loss quantifies the divergence between the actual labels and the predicted probabilities.

### Implementation Detail

For waveform generation of the training data, we employed FastEMRIWaveforms [50, 51] for EMRI signal creation and lisa-on-gpu [52] for GPU-accelerated detector response simulations, including TDI. We also integrated additional functionalities from the SciPy library. Our DECODE architecture consists of 10 residual blocks, each with a kernel size of 3 and a hidden size of 128. The model was developed using the PyTorch framework, known for its computational efficiency and speed, and computations were performed on a high-performance computing cluster equipped with NVIDIA Tesla V100 GPUs. The training utilized the Adam optimizer with a learning rate of \(2\times 10^{-4}\) and a batch size of 64.

## 3 Results

### EMRI Detection Proficiency

The receiver operating characteristic (ROC) curve and the area under the curve (AUC) are essential tools for evaluating the performance of models in binary classification tasks. In the context of our study, where the task is to detect EMRI signals buried in noise, these tools provide valuable insights. The ROC curve, which plots the TPR against the FPR, offers a visual representation of the model's performance across various threshold settings. The AUC, on the other hand, provides a single, overall measure of the model's performance across all thresholds. A model with perfect discrimination has an AUC of 1, while a model performing no better than random guessing has an AUC of 0.5. In our research, we employ ROC curves as the primary benchmark to quantify the performance of the DECODE. Our test dataset is generated like the training dataset, i.e., the waveform parameters are uniformly distributed as shown in Table 1, but with different SNR ranges. As depicted in Figures 3(a) to 3(c), we show three separate ROC curves, each corresponding to a unique input sample length fed into the DECODE.
For the specified input lengths of \(N=(1024,2048,4096)\), the SNR ranges are set at \([50,120]\), \([70,170]\), and \([100,240]\), respectively. The associated AUC values, detailed within the figures, offer quantitative insight into the model's sensitivity in detecting EMRI signals. For clarity in visual representation, especially at lower FPR values, the panels of Figure 3 adopt a logarithmic scale for their axes. It is noteworthy that our test dataset comprises signals with a duration of 1 year, achieving twice the SNR of the 3-month data scenario presented in Ref. [45]. While that study tested models on datasets with \(\mathrm{SNR}\in[50,120]\), we evaluated ours on datasets with \(\mathrm{SNR}\in[100,240]\); both datasets, when rescaled to a 1-year duration, have equivalent SNR values, implying consistent signal amplitudes. Impressively, our model attains a TPR of 97.5% at an FPR of 1%, as showcased in Figure 3(c). One significant advantage of deep learning methods over matched filtering-based approaches is their speed: once trained, the model can be rapidly deployed for inference. In our tests, conducted on a single NVIDIA Tesla V100 GPU, our model processed 2000 data samples in approximately 4 seconds, amounting to less than \(10^{-2}\) seconds per sample.

### EMRI Detection Efficacy

In Figure 4, we provide a detailed examination of the DECODE's performance across different physical parameters. Figure 4(a) illustrates the relationship between TPR and SNR. The sub-figure clearly demonstrates that as the SNR increases, the TPR increases correspondingly, particularly at the specified FPR thresholds of 0.10 and 0.01.

Figure 4: **Detection capability of DECODE across various parameters.** (a) Illustrates the TPR as a function of SNR, highlighting the model's capability to detect signals with varying strengths. (b) Showcases the TPR plotted against the relative amplitude \(\mathcal{A}\) (defined in eq. (8)), emphasizing the model's ability to detect power excesses in the frequency domain and to detect signals even when they are submerged within the noise. (c) Explores the TPR in relation to the spin parameter \(a\), keeping the MBH mass fixed at \(10^{6}M_{\odot}\); this sub-figure is evaluated at three distinct SNR levels (50, 70, and 100), shedding light on the relationship between the spin parameter and detection capability.

To gain a deeper understanding of the sensitivity of our model, we introduce the relative amplitude, denoted \(\mathcal{A}\), defined as \[\mathcal{A}=\max_{i\in A,E}\sqrt{\frac{S_{h}^{i}(f)}{S_{n}(f)}}\,, \tag{8}\] where \(S_{h}^{i}\) represents the Welch PSD of the waveform \(h_{i}\). This metric effectively captures the signal's amplitude in the frequency domain. Figure 4(b) plots the TPR against the relative amplitude at FPRs of 0.1 and 0.01; this sub-figure presents the model's proficiency in discerning power excesses in the frequency domain. Notably, the DECODE can also detect signals that are entirely submerged within the noise. In Figure 4(c), we evaluate the DECODE's sensitivity to varying spin parameters, while keeping the MBH mass constant at \(10^{6}M_{\odot}\). The evaluation, performed at SNR levels of 50, 70, and 100 and FPR thresholds of 0.1 and 0.01, indicates that the model's detection performance is mainly influenced by the SNR. In contrast, the spin parameter appears to have a limited effect on detection, suggesting that its contribution to the overall strength of the EMRI signal is relatively minor.
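The relative amplitude of eq. (8) can be computed directly from Welch PSD estimates, e.g. as in the sketch below, which reuses the `noise_psd` helper defined earlier (illustrative only):

```python
import numpy as np
from scipy.signal import welch

def relative_amplitude(h_A, h_E, dt, psd=noise_psd):
    """Relative amplitude A of eq. (8): the peak of sqrt(S_h / S_n)."""
    best = 0.0
    for h in (h_A, h_E):                       # maximize over TDI channels
        f, s_h = welch(h, fs=1.0 / dt, nperseg=4096)
        ratio = np.sqrt(s_h[1:] / psd(f[1:]))  # skip f = 0
        best = max(best, ratio.max())
    return best
```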
### Interpretability

CNN-based models are powerful tools for pattern recognition and prediction. Their architecture and operational mechanism make them relatively interpretable, a feature that is particularly valuable in interdisciplinary research. CNN-based models learn hierarchical patterns in the data through their convolutional layers, with each layer extracting a set of high-level features from the input data. These features are then used by subsequent layers to capture more complex patterns. This transparent process of feature extraction can be visualized, providing insight into how the network interprets the data and makes predictions.

Activation maps, often used in the context of neural networks, provide a visual representation of the features that the model identifies and emphasizes during its processing. Essentially, they capture the output values, or "activations", from various layers or blocks within the network when presented with an input. These maps offer insights into which parts of the input data the model finds significant or relevant for a particular task. In the case of the DECODE, the activation maps generated at the output of each residual block reveal how the model processes and interprets the frequency-domain data of EMRI signals.

The activation maps illuminate the interpretability of the DECODE: by analyzing the outputs of multiple residual blocks, the process of feature extraction is made transparent. Figure 5 provides a detailed visualization of these maps, demonstrating the ability of the DECODE to distinguish EMRI signals from noise. Specifically, panel **i** of each sub-figure depicts activation maps for inputs containing an EMRI signal, while panel **iii** depicts the corresponding frequency-domain data. These maps emphasize activated neurons in regions that correspond to the frequency components of the signal. In contrast, panel **ii** of each sub-figure depicts diminished activations for noise-only samples; the corresponding frequency-domain data for these samples are presented in panel **iv**, validating the model's ability to identify EMRI signals.

### Generalization Ability

Generalization ability is the capacity of a model trained on a specific dataset to perform well on new, untrained data. It indicates how well a model can extrapolate from its training data to make accurate predictions on unknown data. In practical applications, a model will frequently be presented with data that differ from its training set, so this ability is crucial. A model that generalizes well is robust and flexible, ensuring that it does not simply memorize the training data but rather learns the inherent patterns and relationships. In Figures 5(b) and 5(c), we provide evidence of the generalization capabilities of our model. Even though the model was trained only on AAK waveform datasets, it accurately identified the AK waveform during evaluation, with an output probability equal to 1, demonstrating its ability to generalize across waveform templates. Furthermore, the model's successful detection of the XSPECG waveform [59, 60], which was formulated using the KRZ metric, demonstrates its generalization ability with respect to different gravitational theories. These results demonstrate the generalization ability of the model, suggesting that it is capable of handling scenarios beyond its training datasets.
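Intermediate activation maps like those shown in Figure 5 can be captured with PyTorch forward hooks. The sketch below assumes a trained `model` whose residual blocks are stored in a `model.blocks` container; both names are illustrative assumptions, not part of the released code.

```python
import torch

def activation_maps(model, x):
    """Collect the output of every residual block for one input batch."""
    maps, handles = [], []

    def hook(_module, _inputs, output):
        maps.append(output.detach().cpu())

    for block in model.blocks:          # assumed container of residual blocks
        handles.append(block.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in handles:                   # remove hooks to avoid leaks
        h.remove()
    return maps                         # one (batch, H, N) tensor per block
```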
## 4 Conclusion and Discussion

The detection of EMRIs in gravitational wave astronomy presents a formidable challenge. In this paper, we introduced the DECODE, a state-of-the-art end-to-end DNN model designed for the detection of EMRI signals in the frequency domain. By leveraging dilated causal convolutional layers, the DECODE efficiently processes year-long data. Our evaluations on synthetic datasets have revealed the model's robustness and efficiency, achieving remarkable detection rates at varied SNR levels. Furthermore, the model offers rapid inference and the ability to generalize beyond its training parameters, but there is still room for future advancement.

Figure 5: **Interpretability and generalization ability showcase.** This figure provides an in-depth visualization of the intermediate outputs from each residual block, demonstrating the model's capability for feature extraction within the frequency domain and its generalization ability to different waveform templates and gravitational theories. For each sub-figure, panels **i** and **ii** represent the intermediate results corresponding to the input data samples shown in panels **iii** and **iv**. In contrast to the faint activations in panel **ii**, the noticeable activated neurons in panel **i** indicate the extraction of essential characteristics when a signal is present in the input. (a) AAK waveform. (b) AK waveform. (c) XSPEC waveform.

The precision of the EMRI detection model is intrinsically related to the quality of the training data. While our current training dataset employs the TDI-1.5 detector response, future developments could benefit from the incorporation of more sophisticated simulations, such as the TDI-2.0 technique. This would provide a more accurate simulation of the detector's response, potentially enhancing the model's applicability in the real world. Our current approach primarily focuses on the amplitude information of the EMRI signals. However, the phase information, which has been left largely unused in this work, holds considerable potential. By integrating phase-related features into the model, we could capture more intricate patterns and details of the EMRI signals, which may lead to improved detection rates and lower false-alarm rates. In conclusion, DECODE is a step forward in EMRI detection. Even though there are avenues for improvement, its foundational accomplishments demonstrate its potential as a tool for future space-based GW data analyses.

## 5 Acknowledgments

The research was supported by the Peng Cheng Laboratory and by Peng Cheng Laboratory Cloud-Brain. This work was also supported in part by the National Key Research and Development Program of China Grant No. 2021YFC2203001 and in part by the NSFC (No. 11920101003 and No. 12021003). Z.C. was supported by the "Interdisciplinary Research Funds of Beijing Normal University" and the CAS Project for Young Scientists in Basic Research YSBR-006.
2305.19935
Neural Network Approach to the Simulation of Entangled States with One Bit of Communication
Bell's theorem states that Local Hidden Variables (LHVs) cannot fully explain the statistics of measurements on some entangled quantum states. It is natural to ask how much supplementary classical communication would be needed to simulate them. We study two long-standing open questions in this field with neural network simulations and other tools. First, we present evidence that all projective measurements on partially entangled pure two-qubit states require only one bit of communication. We quantify the statistical distance between the exact quantum behaviour and the product of the trained network, or of a semianalytical model inspired by it. Second, while it is known on general grounds (and obvious) that one bit of communication cannot eventually reproduce all bipartite quantum correlation, explicit examples have proved evasive. Our search failed to find one for several bipartite Bell scenarios with up to 5 inputs and 4 outputs, highlighting the power of one bit of communication in reproducing quantum correlations.
Peter Sidajaya, Aloysius Dewen Lim, Baichu Yu, Valerio Scarani
2023-05-31T15:19:00Z
http://arxiv.org/abs/2305.19935v5
# Neural Network Approach to the Simulation of Entangled States with One Bit of Communication

###### Abstract

Bell's theorem states that Local Hidden Variables (LHVs) cannot fully explain the statistics of measurements on some entangled quantum states. It is natural to ask how much supplementary classical communication would be needed to simulate them. We study two long-standing open questions in this field with neural network simulations and other tools. First, we present evidence that all projective measurements on partially entangled pure two-qubit states require only one bit of communication. We quantify the statistical distance between the exact quantum behaviour and the product of the trained network, or of a semianalytical model inspired by it. Second, while it is known on general grounds (and obvious) that one bit of communication cannot eventually reproduce all bipartite quantum correlation, explicit examples have proved evasive. Our search failed to find one for several bipartite Bell scenarios with up to 5 inputs and 4 outputs, highlighting the power of one bit of communication in reproducing quantum correlations.

## I Introduction

Quantum Mechanics is famous for having randomness inherent in its predictions. Einstein, Podolsky and Rosen argued that this makes quantum mechanics incomplete, and suggested the existence of underlying Local Hidden Variables (LHVs) [1]. While this view was disproved by Bell's theorem [2; 3], it has nevertheless proved fruitful to approach quantum correlations, without committing to an ontology of the quantum world, by asking _which resources one would use to simulate them_. Though insufficient, LHVs provide an intuitive starting point; the question then becomes: _which additional resources, on top of the LHVs, are needed to simulate quantum correlations?_ Some works have considered nonlocal boxes as supplementary resources [4; 5; 6]: while appealing for their intrinsic no-signaling feature, these hypothetical resources are as counterintuitive as entanglement itself, if not more. Classical communication, on the other hand, is a resource that we use on a daily basis and of which we have therefore developed an intuitive understanding. Because we are thinking in terms of simulations and not of ontology, we are not impaired by the very problematic fact that the communication would have to be instantaneous if taken as the real underlying physical mechanism. Therefore, we are interested in the question of how much classical communication must supplement LHVs to simulate the behaviour of a quantum state. For the maximally entangled state of two qubits, after some partial results [7; 8], Toner and Bacon provided a definitive solution by describing a protocol that simulates the statistics of all projective measurements using only one bit of communication, which we refer to as _LHV+1_ [9]. Subsequently, Degorre and coworkers used a different approach and found another protocol which also requires only one bit of communication [10]. The case of non-maximally entangled pure states proved harder. By invoking the Toner-Bacon model, two bits of communication are certainly sufficient [9], while Brunner and coworkers proved that one PR-box is not [5]. But the simulation of those states in LHV+1 remained open. Only recently, Renner and coworkers reported an LHV+1 protocol that exactly simulates weakly entangled pure states [11]. Our neural network will provide evidence that projective measurements on all two-qubit states can be very closely approximated in LHV+1.
The LHV+1 problem could, in principle, be approached systematically, since the behaviours that can be obtained with those resources are contained in a _polytope_. However, the size of this polytope grows very fast with the number of inputs and outputs: as of today, after some initial works [12; 13], the largest LHV+1 polytope to be completely characterized has three measurements per party and binary outcomes, and no quantum violation was found [14]. Addressing the problem for higher-dimensional systems has been challenging without going to the asymptotic limit. The only work we are aware of is that of Vertesi and Bene, who showed that a pair of maximally entangled four-dimensional quantum systems cannot be simulated with only one bit of communication, by presenting a scenario involving an infinite number of measurements [15].

In recent years, there have also been increasing attempts to study quantum correlations with machine learning. Many of them reveal the great potential neural networks have in tackling the complexities of detecting nonlocality and entanglement [16; 17; 18; 19; 20; 21]. The choice of tackling the LHV+1 problem with machine learning is prompted by the fact that there is no compact parametrisation of LHVs, nor of the dependence of the bit of communication on the parameters of the problem. Thus, we are looking for a solution to a problem whose variables are themselves poorly specified. Moreover, similar to an LHV model, everything inside a neural network has definite values. It therefore seems natural to devise a machine learning tool, specifically an artificial neural network (ANN), to act as an LHV model.

This work is organized into two sections. In Section II, we use a neural network to study the simulability of the correlations of entangled states with classical resources and one bit of communication. We also present a semi-analytical protocol which _approximates_ the behaviour of partially entangled two-qubit states with one bit of communication, and we study the errors of our protocol. In Section III, we search for a quantum behaviour in dimensions higher than two qubits that cannot be simulated with a single bit of communication.

## II Two-qubit entangled states using machine learning

### Using Neural Network to generate protocols

Inspired by the use of a neural network as an oracle of locality [19], we approached the problem using an artificial neural network. The network takes measurement settings \(\hat{a}\) and \(\hat{b}\) as inputs and outputs an LHV+1 bit probability distribution, enforced by an architecture that imposes the suitable locality constraints, which we discuss below. The output distribution is then compared against the target distribution using a suitable error function, the Kullback-Leibler divergence. The Local Hidden Variables (LHV) are described by a random variable \(\lambda\) shared among both parties. \(\lambda\) can be of any form: the model of Toner and Bacon uses a pair of uniformly distributed Bloch vectors, the model by Renner [11] uses a biased distribution on the Bloch sphere, and the neural network of [19] only uses a single number, distributed normally or uniformly, as the LHV. In theory, the choice is ultimately redundant because the different LHV models can be made equivalent by some transformation. However, the neural network will perform differently, since it can only process a certain amount of complexity in the model.
From trial and error, we settled on the Toner-Bacon _uniformly distributed vector pair_ as the LHV model in our neural network. A probability distribution \(P(A,B)\) is _local_ if it can be written as \[P_{L}(A,B\mid\hat{a},\hat{b})=\int P(A\mid\hat{a},\lambda)\;P(B\mid\hat{b},\lambda)\;d\lambda. \tag{1}\] The network approximates a local distribution by the Monte Carlo method as \[P_{L}(A,B\mid\hat{a},\hat{b})=\frac{1}{N}\sum_{i=1}^{N}P(A\mid\hat{a},\lambda_{i})\;P(B\mid\hat{b},\lambda_{i}), \tag{2}\] where \(N\) is a sufficiently large number (\(\geq 1000\)). In the network, Alice and Bob are represented as a series of hidden layers. Each party takes in its inputs according to the locality constraint and outputs its own local probability distribution. The activation functions used in the hidden layers are standard ones, such as the rectified linear unit (ReLU) and the softmax function used to normalise the probabilities. The forward propagation is done \(N\) times using varying values of \(\lambda_{i}\) sampled from a probability distribution; thereafter we take the average of the probabilities over \(N\) to get the probability distribution expressed in equation (2).

To move from LHV to LHV+1, we notice that sending one bit of communication is equivalent to giving Alice the power to choose between one of two local strategies. The recipe looks as follows:

* Alice and Bob pre-agree on _two_ local strategies \(P_{L,1}\) and \(P_{L,2}\), as well as on the \(\lambda\) to be used in each round. It seems to us that all previous works in LHV+1 assumed \(P_{1}(A\mid\hat{a},\lambda)=P_{2}(A\mid\hat{a},\lambda)\), but of course there is no need to impose such a constraint.
* Upon receiving her input \(\hat{a}\), Alice decides which of the two strategies should be used for that round, taking also \(\lambda\) into account. Although all previous LHV+1 models used a deterministic decision, there is no reason to impose that: Alice's decision could be stochastic. She informs Bob of her choice with one bit of communication \(c\), and Bob consequently keeps his outcome for the chosen strategy.

Thus, given a randomly sampled LHV \(\lambda_{i}\), the LHV+1 model is described by \[\begin{split} P(A,B\mid\hat{a},\hat{b},\lambda_{i})&=P(c=+1\mid\hat{a},\lambda_{i})P_{L,1}(A,B\mid\hat{a},\hat{b},\lambda_{i})+P(c=-1\mid\hat{a},\lambda_{i})P_{L,2}(A,B\mid\hat{a},\hat{b},\lambda_{i})\\ &=P(c=+1\mid\hat{a},\lambda_{i})P_{1}(A\mid\hat{a},\lambda_{i})P_{1}(B\mid\hat{b},\lambda_{i})+P(c=-1\mid\hat{a},\lambda_{i})P_{2}(A\mid\hat{a},\lambda_{i})P_{2}(B\mid\hat{b},\lambda_{i}),\end{split} \tag{3}\] where we label \(c=+1\) (respectively \(c=-1\)) the value of the bit of communication when Alice decides for strategy 1 (resp. 2). The complete model thus consists of two local networks and one communication network. The communication network consists of a series of layers whose inputs are the same as Alice's and whose output is a number between 0 and 1, obtained with a sigmoid activation function, representing \(P(c\mid\hat{a},\lambda_{i})\); this is then used to form the convex mixture of the two local strategies for the particular inputs and LHV. The final network architecture can be seen in Fig. 1; a numerical sanity check of the structure of equation (3) is sketched below.
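As a concrete instance of equation (3) with a deterministic decision, the following sketch implements the Toner-Bacon protocol for the singlet [9] within this template and verifies the expected correlator \(\langle AB\rangle=-\hat{a}\cdot\hat{b}\). This is illustrative code, not our training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """n vectors uniformly distributed on the Bloch sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def singlet_correlator(a_hat, b_hat, n=200_000):
    """Monte Carlo estimate of <AB> for the Toner-Bacon protocol [9]."""
    l1, l2 = random_unit_vectors(n), random_unit_vectors(n)  # shared LHVs
    c = np.sign(l1 @ a_hat) * np.sign(l2 @ a_hat)   # Alice's one bit
    A = -np.sign(l1 @ a_hat)                        # Alice's output
    B = np.sign((l1 + c[:, None] * l2) @ b_hat)     # Bob's output
    return np.mean(A * B)

a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(0.6), 0.0, np.cos(0.6)])
print(singlet_correlator(a, b), -a @ b)   # the two numbers should agree
```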
This approach of using a neural network to generate local strategies was originally used in a network setting [19]. In that work, the network was used to verify non-locality by looking for transitions in the behaviours of distributions when mixed with noise. When a state is mixed with noise, it lies within a local set up to a certain noise threshold; reducing the amount of noise in the state allows for the identification of sharp transitions in the network's error, indicating when the state exits the local set. Here, instead of such an oracle, we will use the network to generate a protocol to simulate the quantum state, by analysing its outputs.

### Simulating Two-qubit States

For a two-qubit scenario, the joint measurements can be defined by two vectors in the Bloch sphere, i.e. \(\hat{a},\hat{b}\in S^{2}\). Thus, the behaviour is the set \(\mathcal{P}(\rho)=\{P_{\rho}(A,B\mid\hat{a},\hat{b})\mid\hat{a},\hat{b}\in S^{2}\}\).

#### ii.2.1 Maximally Entangled State

The maximally entangled state case has been solved analytically by Toner and Bacon [9]. Thus, we used this state as a test bed for our machine learning approach, by training the machine to simulate the distribution of the maximally entangled state \(|\Psi^{-}\rangle\). A snapshot of the behaviour of the trained model can be seen in Fig. 5. By scrutinising similar figures for different LHVs, we can infer an analytical model of the machine. Maximally entangled state protocol: 1. Alice sends to Bob \[c=\text{sgn}(\hat{a}\cdot\hat{\lambda}_{1})\text{sgn}(\hat{a}\cdot\hat{\lambda}_{2}).\] 2. Alice outputs \[A=-\text{sgn}(\hat{a}\cdot(\hat{\lambda}_{1}+c\hat{\lambda}_{2})).\] 3. Bob outputs \[B=\text{sgn}(\hat{b}\cdot(\hat{\lambda}_{1}+c\hat{\lambda}_{2})).\] The protocol bears much resemblance to Toner-Bacon's original protocol, the only difference being Alice's output, which is simply \(-\text{sgn}(\hat{a}\cdot\hat{\lambda}_{1})\) in the original protocol. However, one can check [22] that it indeed reproduces all the correct expectation values.
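As a quick numerical sanity check (our own sketch, not part of the paper's code base), the three steps of the protocol above can be sampled directly; for the singlet the estimates should approach \(\langle A\rangle=\langle B\rangle=0\) and \(\langle AB\rangle=-\hat{a}\cdot\hat{b}\), up to statistical error of order \(1/\sqrt{n}\).

```python
import numpy as np

def random_units(n):
    v = np.random.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def singlet_protocol_check(a, b, n=100_000):
    """Sample the protocol for n rounds and return E[A], E[B], E[AB]."""
    l1, l2 = random_units(n), random_units(n)   # shared vector pair
    c = np.sign(l1 @ a) * np.sign(l2 @ a)       # step 1: communicated bit
    shared = l1 + c[:, None] * l2               # lambda_1 + c * lambda_2
    A = -np.sign(shared @ a)                    # step 2: Alice's output
    B = np.sign(shared @ b)                     # step 3: Bob's output
    return A.mean(), B.mean(), (A * B).mean()

a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(0.3), 0.0, np.cos(0.3)])
print(singlet_protocol_check(a, b), "target E[AB]:", -a @ b)
```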
#### ii.2.2 Non-maximally Entangled States

We now apply the same method to the non-maximally entangled two-qubit states. Without loss of generality, any pure two-qubit state can be written in the form \[\left|\psi(\theta)\right\rangle=\cos(\theta)\left|01\right\rangle-\sin(\theta)\left|10\right\rangle,\quad\theta\in\left[0,\frac{\pi}{4}\right]\] using a suitable choice of bases. The state is maximally entangled when \(\theta=\frac{\pi}{4}\) and separable when \(\theta=0\). We train the network to simulate the distribution of \(\left|\psi(\theta)\right\rangle\) with \(\theta\in\left[0,\frac{\pi}{4}\right]\). A selection of the resulting protocols is shown in Fig. 6. The error of the models for these states is lower than the one for the maximally entangled state (see Fig. 2). This does not necessarily mean that they simulate the states exactly rather than merely approximating them. If the behaviour actually lay outside the local+1 bit set, we should expect a transition in the error when we mix the state with noise, signifying the exit of the state from that set. However, we observe no clear transition when noise is added to the state, only a shallow gradient, suggesting that the behaviour is still inside the local+1 bit set. While encouraging, this does not constitute a proof, and we would still need to write an analytical protocol. Unlike the case of maximally entangled states, the models we obtained for the non-maximally entangled states are more complex, and our attempt to infer a protocol begins by looking at figures similar to Fig. 6. We start from the parties' outputs. The outputs of Alice are of the form \[P(A_{1}=+1\mid\hat{a})=\frac{1}{2}(1-\operatorname{sgn}(\hat{a}\cdot\vec{\lambda}_{a1}+b_{a1})),\] where \(\vec{\lambda}_{a1}=u_{a1}\vec{\lambda}_{1}+\vec{\lambda}_{2}+v_{a1}\hat{z}\) decides the hemisphere direction and \(b_{a1}=w_{a1}+x_{a1}\vec{\lambda}_{1}\cdot\hat{z}+y_{a1}\vec{\lambda}_{2}\cdot\hat{z}\) decides the size of the hemisphere. Similarly, \[P(A_{2}=+1\mid\hat{a})=\frac{1}{2}(1+\operatorname{sgn}(\hat{a}\cdot\vec{\lambda}_{a2}+b_{a2})),\] \[P(B_{1}=+1\mid\hat{b})=\frac{1}{2}(1+\operatorname{sgn}(\hat{b}\cdot\vec{\lambda}_{b1}+b_{b1})),\] \[P(B_{2}=+1\mid\hat{b})=\frac{1}{2}(1-\operatorname{sgn}(\hat{b}\cdot\vec{\lambda}_{b2}+b_{b2})).\] Using numerical algorithms, we can approximately obtain the relevant coefficients, laid out in Appendix A for the different states. The (simplified) bit of communication is given by \[P(c=+1\mid\hat{a})=\frac{1}{2}(1-\operatorname{clip}(f_{c},-1,1)),\] where \[\begin{split} f_{c}&=\Theta(\hat{a}\cdot\vec{\lambda}_{1}+b_{c})\,\Theta(\hat{a}\cdot\vec{\lambda}_{2}+b_{c})+\Theta(-\hat{a}\cdot\vec{\lambda}_{1}+b_{c})\,\Theta(-\hat{a}\cdot\vec{\lambda}_{2}+b_{c})\\ &\quad-\Theta(-\hat{a}\cdot\vec{\lambda}_{1}-b_{c})\,\Theta(\hat{a}\cdot\vec{\lambda}_{2}-b_{c})-\Theta(\hat{a}\cdot\vec{\lambda}_{1}-b_{c})\,\Theta(-\hat{a}\cdot\vec{\lambda}_{2}-b_{c}),\end{split}\] with \(b_{c}=u_{c}+v_{c}(\vec{\lambda}_{2}\cdot\hat{z})(1-\vec{\lambda}_{1}\cdot\hat{z})\), \(\Theta\) the Heaviside step function, and the clip function defined as \[\operatorname{clip}(x,a,b)=\begin{cases}a&\text{if }x<a\\ b&\text{if }x>b\\ x&\text{otherwise}\end{cases}.\] Again, the relevant coefficients obtained using numerical methods are listed in Appendix A.

Figure 2: The relative error between the neural network models' behaviours and the quantum behaviours. The blue dots are for the original model described, while the red crosses are for the simplified model described in the text. The grey shaded region is the region in which an LHV+1 model is known [11].
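For concreteness, the communication rule above transcribes directly into code. In this sketch (ours), the coefficients `u_c` and `v_c` are the state-dependent numbers from Appendix A, not reproduced in this excerpt, so placeholder values must be supplied; the value of \(\Theta(0)\) is a convention left unspecified in the text and is set to 0.5 here.

```python
import numpy as np

Z = np.array([0.0, 0.0, 1.0])

def p_comm(a, lam1, lam2, u_c, v_c):
    """P(c = +1 | a) of the simplified protocol; a, lam1, lam2 are unit 3-vectors."""
    H = lambda x: np.heaviside(x, 0.5)  # Heaviside step, Theta(0) := 0.5
    b_c = u_c + v_c * (lam2 @ Z) * (1.0 - lam1 @ Z)
    f_c = (H(a @ lam1 + b_c) * H(a @ lam2 + b_c)
           + H(-(a @ lam1) + b_c) * H(-(a @ lam2) + b_c)
           - H(-(a @ lam1) - b_c) * H(a @ lam2 - b_c)
           - H(a @ lam1 - b_c) * H(-(a @ lam2) - b_c))
    return 0.5 * (1.0 - np.clip(f_c, -1.0, 1.0))
```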
### Statistical analysis of the simulations

Having presented our protocols, we can now consider their performance: both the neural network protocol itself and the semianalytical protocol we distilled from it. These LHV+1 protocols are not exact protocols but approximations, and we can describe their closeness to the quantum behaviour by providing statistical error values. To get a better intuition on the error values, let us consider a hypothesis testing scenario [23]. Suppose that we have an unknown sample of length \(n\) generated by the same measurement performed on \(n\) identical systems. Suppose also that we know that the systems are either all actual quantum systems (\(P_{Q}\)) or all our LHV+1 models (\(P_{LHV+1}\)), but we do not know which. Let us take \(P_{LHV+1}\) as the null hypothesis, and let \(\alpha\) be the Type I error (mistakenly rejecting a true null hypothesis). In our case, a Type I error would correspond to our machine learning model successfully passing itself off as a quantum system. For any decision-making procedure, the probability of a Type I error is lower bounded by \[\alpha\geq e^{-nD_{KL}(P_{Q}\|P_{LHV+1})}.\] Thus, in order to have 95% confidence in rejecting a sample from the LHV+1 model, we would need a sample size of \[n_{95\%}\geq-\frac{\ln 0.05}{D_{KL}(P_{Q}\|P_{LHV+1})}.\] The sample size \(n\) needed to distinguish the probability distributions varies with the measurement settings, some settings being more difficult to distinguish than others. The performance of our LHV+1 models (both the machine learning ones and our semianalytical approximations) over the measurement settings is given in Fig. 3.

Figure 3: Violin plots for the neural network (blue) and the semianalytical protocol we presented (red) describing the following values: **(a)** The Kullback-Leibler divergence between our protocols and the quantum behaviours. **(b)** The Total Variational Distance between our protocols and the quantum behaviours. **(c)** The minimum sample size needed to have at least 95% confidence in distinguishing the two behaviours as described in the hypothesis testing scenario. In all three, the violin shapes illustrate the distributions of the values over the different projective measurements on the two-qubit state.

It can be seen that, going from the neural network protocols to our semianalytical approximations, the Kullback-Leibler divergence increases by about two orders of magnitude. This is due to the limitations of the numerical methods used to obtain the optimal parameters, and to the fact that we were bound to miss some details of the network's behaviour when translating it into analytical expressions. Our semianalytical protocols require, on average, hundreds of measurements before they can be distinguished from real quantum behaviours, disregarding other noise sources present in an actual quantum system. Even better, considering the neural networks themselves, it would take upwards of \(10^{4}\) samples to distinguish them from an actual quantum system. Ideally, one might try to see whether the semianalytical protocols, when integrated analytically to give the full behaviour, can be made into an exact protocol with the correct parameters. However, the communication function is very tricky to integrate analytically, and thus this approach might not work. On the other hand, considering that an exact protocol can already simulate some two-qubit states, these pieces of evidence suggest that all two-qubit states can be simulated with just a single bit of communication. Ultimately, however, the question of _exactly_ simulating partially entangled states with one bit of communication remains open.
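The sample-size bound is straightforward to evaluate; a small helper (our illustration, with made-up example distributions) might look like:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete behaviours given as probability arrays."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def min_samples(p_quantum, p_model, confidence=0.95):
    """Minimum n to reject the LHV+1 null hypothesis with given confidence,
    from alpha >= exp(-n * D_KL(P_Q || P_LHV+1))."""
    return -np.log(1.0 - confidence) / kl_divergence(p_quantum, p_model)

# toy example: two nearby behaviours over the four outcome pairs
print(min_samples([0.25, 0.25, 0.25, 0.25], [0.27, 0.23, 0.25, 0.25]))
```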
## III Searching for Bell violation of the one-bit of communication polytope

Since two-qubit states are simulatable up to very good precision, we now consider a different question: can we find an explicit quantum behaviour that is unsimulatable with one bit of communication? We move to higher-dimensional systems and search for a Bell-like inequality for the communication polytope. As far as we know, no violation of a Bell-like inequality for the one-bit-of-communication polytope has ever been described. For the rest of the section, let \(\mathcal{L}\) be the local set, \(\mathcal{Q}\) the quantum set, and \(\mathcal{C}\) the one-bit-of-communication set. We are interested in points inside \(\mathcal{Q}\) that lie outside \(\mathcal{C}\).

### Description of the polytope

Similar to \(\mathcal{L}\), \(\mathcal{C}\) is also a convex polytope. However, unlike \(\mathcal{L}\), it does not lie inside the no-signalling space \(\mathcal{NS}\). Let \(\mathcal{A}\) (\(\mathcal{B}\)) be the output set of Alice (Bob) and \(\mathcal{X}\) (\(\mathcal{Y}\)) her (his) input set. The number of deterministic strategies that can be performed with a single bit is \(|\mathcal{A}|^{|\mathcal{X}|}|\mathcal{B}|^{2|\mathcal{Y}|}2^{|\mathcal{X}|}\). However, due to duplicates, the number reduces to [14] \[|\mathcal{A}|\left(|\mathcal{B}|^{|\mathcal{Y}|}+(2^{|\mathcal{X}|-1}-1)(|\mathcal{B}|^{2|\mathcal{Y}|}-|\mathcal{B}|^{|\mathcal{Y}|})\right).\] \(\mathcal{C}\) is the convex polytope formed by these vertices. In practice, we can only generate polytopes of up to around \(2\times 10^{7}\) points due to memory limitations. Since the number of extremal points of \(\mathcal{C}\) is much larger than that of \(\mathcal{L}\), we can quickly discard the possibility of performing full facet enumeration. Hence, we have to resort to other methods for our search.

### Random sampling quantum behaviours in higher dimensions

We first tried to sample points from \(\mathcal{Q}\) by measuring the maximally entangled two-qutrit and two-ququart states with measurements sampled uniformly in the Haar measure, before using linear programming to solve the membership problem for \(\mathcal{C}\). However, this method proved ineffective: we did not manage to find any behaviour lying outside \(\mathcal{C}\), and a significant fraction even lies inside \(\mathcal{L}\). The statistics of this method can be seen in Table 1.

\begin{table} \begin{tabular}{c|c c} \((|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{A}|,|\mathcal{B}|)\) & Points sampled & Proportion in \(\mathcal{L}\) \\ \hline \hline (3,3,3,3) & 10000 & 25.6\% \\ (3,4,3,3) & 300 & 10.0\% \\ (4,3,3,3) & 300 & 10.3\% \\ (4,4,3,3) & 100 & 1.0\% \\ (3,3,4,4) & 500 & 56.6\% \\ \hline \end{tabular} \end{table} Table 1: The statistics for the sampling approach. None of the points sampled fall outside \(\mathcal{C}\).
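The membership problem just mentioned reduces to a linear-programming feasibility question: is the sampled behaviour a convex combination of the polytope's vertices? A generic sketch (ours; building the actual vertex list for \(\mathcal{C}\) is the enumeration described above) could be:

```python
import numpy as np
from scipy.optimize import linprog

def in_polytope(p, vertices):
    """True if behaviour p (a flattened probability table) is a convex
    combination of the columns of `vertices` (one vertex behaviour each)."""
    n = vertices.shape[1]
    # feasibility LP: vertices @ w = p, sum(w) = 1, w >= 0, zero objective
    A_eq = np.vstack([vertices, np.ones((1, n))])
    b_eq = np.concatenate([np.asarray(p, float), [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success
```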
### Using non-signalling points

The next method was to take points in \(\mathcal{NS}\) and mix them with noise, in order to find the threshold noise levels at which they exit the sets \(\mathcal{Q}\) and \(\mathcal{C}\). If we find a point \(P_{\mathcal{NS}}\) which exits \(\mathcal{Q}\) at a lower noise level \(w_{\mathcal{Q}}\) than the corresponding one for \(\mathcal{C}\), \(w_{\mathcal{C}}\), all behaviours \[wP_{\mathcal{NS}}+(1-w)P_{noise}\] with \(w_{\mathcal{C}}<w<w_{\mathcal{Q}}\) would be behaviours in \(\mathcal{Q}\) that are unsimulatable by one bit of communication. The membership problem for \(\mathcal{Q}\) is solved using the NPA hierarchy method [24] at level 2 of the hierarchy. A graphical illustration can be seen in Fig. 4.

Figure 4: \(w_{\mathcal{Q}}\) is the threshold weight for the quantum set \(\mathcal{Q}\), while \(w_{\mathcal{C}}\) is the threshold weight for the one-bit communication set \(\mathcal{C}\). Thus, \(w_{\mathcal{C}}<w_{\mathcal{Q}}\) would imply a violation and would give a quantum behaviour that could not be simulated by a single bit of communication.

Choosing a suitable \(P_{\mathcal{NS}}\), however, proved to be a challenge. The extremal points of the \(\mathcal{NS}\) space have only been characterised for binary inputs or binary outputs [25; 26]. Here, we mostly used nonlocal points which are locally unbiased, i.e. for all inputs, all the local outputs have equal probability, and maximally correlated, i.e. for every input combination there is a perfect correlation between Alice's and Bob's outputs: given Alice's output, Bob's output is guaranteed, and vice versa. While we tried other non-signalling points, this particular class of points gave us the closest gap between \(w_{\mathcal{Q}}\) and \(w_{\mathcal{C}}\). Similarly, there are numerous choices for \(P_{noise}\), but we find that white noise gives the closest gap in most scenarios. The closest gap \((w_{\mathcal{C}}-w_{\mathcal{Q}})\) found in each scenario is listed in Table 3. In the case of \(|\mathcal{A}|=|\mathcal{B}|=3\), we did not find any violation. The closest gap was observed in the \((|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{A}|,|\mathcal{B}|)=(4,4,3,3)\) scenario, where there exists a point with \(w_{\mathcal{C}}=0.6612\) and \(w_{\mathcal{Q}}=0.6289\). However, in \(|\mathcal{A}|=|\mathcal{B}|=4\), specifically in the \((4,2,4,4)\) setting, we find a \(P_{\mathcal{NS}}\) which has \(w_{\mathcal{Q}}=w_{\mathcal{C}}=\frac{2}{3}\), described in Table 2. The table itself can also be interpreted as a Bell inequality, by taking the terms in the table as the coefficients of the correlation terms and adding all of them. When normalised into a Bell game, the value of the game is \(\frac{3}{4}\) for both \(\mathcal{Q}\) and \(\mathcal{C}\). This Bell facet is the hyperplane which has the line connecting \(P_{\mathcal{NS}}\) and \(P_{noise}\) as its normal. This point represents our closest attempt at finding a violation with this method. The number of extremal points of \(\mathcal{C}\) in the \((4,2,4,4)\) scenario is around \(1\times 10^{6}\), and it is still possible to go one input higher, to \((4,3,4,4)\) or \((5,2,4,4)\). A violation might exist there, but our heuristic search proved unfruitful. In the end, contrary to the prepare-and-measure scenario [27], it remains an open problem to find a bipartite quantum behaviour that is provably unsimulatable with one bit of communication [28].

## IV Conclusion

In this work, we sought to further the existing work on characterising the communication complexity cost of quantum behaviours. We used a neural network to obtain protocols simulating partially entangled two-qubit states, and we presented a semianalytical LHV+1 protocol based on the protocols found by the network. While these protocols only approximate the quantum behaviours, one needs on average hundreds of measurement rounds for the semianalytical protocols, and tens of thousands for the neural network protocols, in order to distinguish them from the quantum behaviour. We also tried to find quantum behaviours in higher dimensions that could not be simulated with one bit of communication. While we were able to find a Bell-like inequality that has the same maximum value in \(\mathcal{Q}\) and \(\mathcal{C}\), we were unable to find a violation. From this work and all the previous works on the topic, it can be seen that evaluating the capabilities of entangled quantum states in terms of communication complexity is very difficult. While we are confident that a behaviour that cannot be simulated with a single bit could be found, extending the work to more bits and states would probably be too difficult, barring any new revolutionary techniques.
On the other hand, given our result that numerical protocols closely approximating the behaviours of entangled two-qubit states can be found, the task of _exactly_ simulating partially entangled two-qubit states using one bit of communication is probably possible, and a fully analytical protocol may well be found in the near future.

## V Code availability

The code is available at [https://github.com/PeterSidajava/neural-network-fp/](https://github.com/PeterSidajava/neural-network-fp/).

\begin{table} \begin{tabular}{c|c|c c c c|c c c c|} & & \multicolumn{4}{c|}{\(Y=1\)} & \multicolumn{4}{c|}{\(Y=2\)} \\ & & \(P(B=1)\) & \(P(B=2)\) & \(P(B=3)\) & \(P(B=4)\) & \(P(B=1)\) & \(P(B=2)\) & \(P(B=3)\) & \(P(B=4)\) \\ \hline \multirow{4}{*}{\(X=1\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \hline \multirow{4}{*}{\(X=2\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ \hline \multirow{4}{*}{\(X=3\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \hline \multirow{4}{*}{\(X=4\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ \hline \end{tabular} \end{table} Table 2: The point in \((4,2,4,4)\) which has \(w_{\mathcal{Q}}=w_{\mathcal{C}}=\frac{2}{3}\). Each of the eight smaller boxes corresponds to the output table for a particular combination of inputs, with Alice's input indexing the vertical dimension and Bob's the horizontal. In each of the boxes, the \(4\times 4\) table corresponds to the outputs of Alice and Bob, with Alice's in the vertical and Bob's in the horizontal. Note that each 1 here denotes \(\frac{1}{4}\), so that each box sums to 1, as required.

\begin{table} \begin{tabular}{c|c c|c} \((|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{A}|,|\mathcal{B}|)\) & \(w_{\mathcal{Q}}\) & \(w_{\mathcal{C}}\) & \(w_{\mathcal{C}}-w_{\mathcal{Q}}\) \\ \hline \hline (3,3,3,3) & 0.5995 & 0.7000 & 0.1005 \\ (3,4,3,3) & 0.6022 & 0.7000 & 0.0978 \\ (4,3,3,3) & 0.5856 & 0.6766 & 0.0910 \\ (4,4,3,3) & 0.6289 & 0.6612 & 0.0323 \\ (5,3,3,3) & 0.5768 & 0.6610 & 0.0842 \\ (3,3,4,4) & 0.6159 & 0.7143 & 0.0984 \\ (4,2,4,4) & 0.6666 & 0.6666 & 0.0000 \\ \hline \end{tabular} \end{table} Table 3: The closest gap \((w_{\mathcal{C}}-w_{\mathcal{Q}})\) in each scenario we studied.

## Acknowledgments

This research is supported by the National Research Foundation, Singapore and A*STAR under its CQT Bridging Grant. We thank Maria Balanzo-Juando, Martin J. Renner, Marco Tulio Quintino, and Marco Tomamichel for discussions. We are also grateful to the authors of [19] for making their codes public. We also thank the National University of Singapore Information Technology for the use of their high performance computing resources.
2309.00168
Pose-Graph Attentional Graph Neural Network for Lidar Place Recognition
This paper proposes a pose-graph attentional graph neural network, called P-GAT, which compares (key)nodes between sequential and non-sequential sub-graphs for place recognition tasks as opposed to a common frame-to-frame retrieval problem formulation currently implemented in SOTA place recognition methods. P-GAT uses the maximum spatial and temporal information between neighbour cloud descriptors -- generated by an existing encoder -- utilising the concept of pose-graph SLAM. Leveraging intra- and inter-attention and graph neural network, P-GAT relates point clouds captured in nearby locations in Euclidean space and their embeddings in feature space. Experimental results on the large-scale publicly available datasets demonstrate the effectiveness of our approach in scenes lacking distinct features and when training and testing environments have different distributions (domain adaptation). Further, an exhaustive comparison with the state-of-the-art shows improvements in performance gains. Code is available at https://github.com/csiro-robotics/P-GAT.
Milad Ramezani, Liang Wang, Joshua Knights, Zhibin Li, Pauline Pounds, Peyman Moghadam
2023-08-31T23:17:44Z
http://arxiv.org/abs/2309.00168v3
# Pose-Graph Attentional Graph Neural Network for Lidar Place Recognition ###### Abstract This paper proposes a pose-graph attentional graph neural network, called P-GAT, which compares (key)nodes between sequential and non-sequential sub-graphs for place recognition tasks, as opposed to the common frame-to-frame retrieval problem formulation currently implemented in SOTA place recognition methods. P-GAT uses the maximum spatial and temporal information between neighbour cloud descriptors -- generated by an existing encoder -- utilising the concept of pose-graph SLAM. Leveraging intra- and inter-attention and a graph neural network, P-GAT relates point clouds captured in nearby locations in Euclidean space and their embeddings in feature space. Experimental results on large-scale publicly available datasets demonstrate the effectiveness of our approach in scenes lacking distinct features and when training and testing environments have different distributions (domain adaptation). Further, an exhaustive comparison with the state-of-the-art shows consistent performance gains. Code is available at [https://github.com/csiro-robotics/P-GAT](https://github.com/csiro-robotics/P-GAT). place recognition, spatiotemporal attention, SLAM ## I Introduction Accurate and drift-free (re)-localisation is critical for many robotic and computer vision applications, such as autonomous navigation [1] and augmented reality [2]. Achieving reliable (re)-localisation is challenging, particularly in GPS-denied environments, such as indoor, subterranean or densely vegetated environments [3, 4], due to occlusion, complex geometry, and dynamic objects in scenes. One promising direction for addressing the challenges associated with reliable (re)-localisation is to utilise a Place Recognition (PR) method to predict the coarse location of an agent within a database of previously visited places. Place Recognition is commonly framed as a retrieval task in computer vision and robotics, either vision-based [5, 6, 7, 8] or lidar-based [9, 10, 11, 12, 13]. Given a query, the method involves retrieving the most similar key in the database by first encoding the input frame (image/point cloud) as a global descriptor and matching it against the global descriptors of previously visited places. Despite remarkable improvements, visual PR is less robust against appearance, season, illumination and viewpoint variations in large-scale (_i.e._, city-scale) areas. In this paper, we consider the problem of lidar place recognition for large-scale environments. Despite all the progress in the field of lidar place recognition, most existing methods only encode a single lidar frame into a global descriptor, and hence topological scene-level understanding is often neglected. There are a few prior works [14, 15] that directly aggregate a sequence of lidar descriptors to generate one single global descriptor for each lidar sequence. However, these methods do not take advantage of the topological relationship between a sequence of point clouds in the context of a graph containing sets of nodes and edges. To exploit the spatiotemporal information between neighbouring point clouds, we propose an attentional graph neural network called P-GAT, which uses topological information, obtained by a pose-graph lidar SLAM system, between a set of point clouds to maximise the receptive field for training.
Nodes and edges generated in a pose graph (as the robot explores the environment and pose-graph SLAM optimises robot poses) are further used to generate fully connected graphs (subgraphs) that contain positional information for a given robot travel distance. These subgraphs are then fed into our P-GAT model for place recognition, comparing a pair of subgraphs rather than just a pair of point clouds (Fig. 1). The communication between the nodes of the subgraph pairs is performed leveraging an attentional graph neural network. To address the dynamic properties of subgraphs (varying number of nodes) and the presence of nodes in multiple subgraphs (in contrast to comparing between pairs of point clouds/images), we develop a customised layer normalisation module using a boolean mask for padding nodes, and an averaging scheme for efficiently computing similarity scores during inference.

Fig. 1: P-GAT aims to optimise spatiotemporal information by relating point clouds within and across subgraphs leveraging an attentional graph neural network. If point clouds are captured in nearby locations (similar point clouds), our intra- (edges in black shades) and inter- (edges in blue shades) attention mechanism reweights their embeddings to bring them closer in feature space. Intra-attention enhances distant point cloud communication within subgraphs, while inter-attention facilitates potential place recognition in revisit areas.

We demonstrate that P-GAT can be integrated with any global point cloud descriptor (_i.e._, it is cloud-encoder agnostic), which greatly improves robustness and generalisability. We extensively analyse and compare the performance of our network with the state-of-the-art over multiple large-scale public datasets. To characterise the properties of P-GAT in detail, we demonstrate the role of each component using numerous ablation studies. The proposed P-GAT achieves the state-of-the-art on various benchmark datasets for lidar place recognition tasks. ## II Related Work This section reviews hand-crafted and learning-based lidar PR methods before introducing the attentional graph neural network and its applications. ### _Handcrafted Lidar Descriptors_ The purpose of lidar PR methods is to create distinctive descriptors, based on a Local Reference Frame (LRF), along the robot's path to recognise revisited places regardless of its pose. Two types of descriptors are used: signatures and histograms. Histogram-based methods, such as PFH [16] and FPFH [17], describe the 3D surface neighbourhood of a point by encoding a few geometric features obtained individually at each point according to local coordinates. DELIGHT [18] computes intensity histograms instead of geometric properties. Signature-based algorithms, like Scan Context [19] or _Segmatch_ [20], use descriptor-based features to improve place recognition. Additionally, algorithms such as SHOT [21] combine both histograms and signatures for robust place recognition. However, handcrafted lidar descriptors require careful tuning of feature extraction and matching parameters depending on the operating environment. ### _Learning-based Lidar Descriptors_ Recent advances in learning-based approaches have shown promising results in addressing the challenges mentioned earlier. CNN-based methods, such as _Segmap_ [22] or Efficient Segment Matching (ESM) [23], encode local patches of a point cloud into local embeddings. Local descriptors can later be used for localisation [24, 25].
These methods, however, are not defined in an end-to-end fashion to form a scene-level global descriptor for point clouds. Using a convolutional bottom-up and top-down backbone, MinkLoc3D [10] and its variations [26, 27] extract local features and aggregate them into a global descriptor by Generalised-Mean pooling (GeM) [28]. _LoGG3D-Net_ [29] employs a sparse convolutional U-Net to encode point clouds into local features. During training, it ensures the maximum similarity of corresponding local features on a pair of neighbouring point clouds by defining a local consistency loss. Unlike MinkLoc3D, LoGG3D-Net uses second-order pooling to aggregate local features into global descriptors. In contrast to convolutional models, methods have been proposed based on PointNet [30], an encoder which works directly on an unordered point cloud, achieving permutation invariance to points by utilising a symmetric function. PointNetVLAD [9] is a seminal lidar PR work with a PointNet-based backbone design. It uses NetVLAD [31] to aggregate local descriptors for the generation of global descriptors. To capture local contextual information, PCAN [11] adds an attention map mechanism for predicting the significance of point features. Re-weighted local features are further aggregated into a discriminative global descriptor using NetVLAD. Similarly, LPD-Net [32] aims to overcome the limitation of PointNet in extracting local contextual information by aggregating neighbour features using a graph neural network; global descriptors are again generated using NetVLAD. In another effort, SOE-Net [12] adds an orientation-encoding unit in local descriptor extraction and a self-attention unit before the aggregation of local features through NetVLAD, improving the point-wise feature representation of PointNet. Recently, PPT-Net [13], inspired by SOE-Net [12] and the pyramid structure of PointNet++ [33], proposed a pyramid point transformer design to learn the regional contextual information of a point cloud at multiple levels, leveraging a grouped self-attention mechanism for the extraction of discriminative local embeddings. Local embeddings are also aggregated using NetVLAD. In all the lidar PR methods mentioned above, point clouds are compared pairwise, and thus the topological and sequential relationships between a set of point clouds are not explored. One work that attempts to leverage temporal information between point clouds is Locus [15], which aims to relate the local features of the current point cloud to their correspondences in previous point clouds using second-order temporal feature pooling, albeit within a short time window consisting of only three frames. This limits the generalisation of Locus [15] under test-time distribution shifts. SeqOT [14] was also proposed to benefit from temporal information, although using a sequence of range images; moreover, it generates only a single descriptor for each sequence. ### _Attentional Graph Neural Network_ The Attentional Graph Neural Network (AGNN) is designed to process data represented as a graph, using attention mechanisms [34] to selectively focus on nodes and edges. The attention mechanism helps reduce the graph's complexity and improves the effectiveness of local feature matching across nodes. AGNNs have been successfully applied to various tasks, such as image matching [35], object detection [36], and person re-identification [37]. To the best of our knowledge, AGNN has never been used in lidar PR tasks.
We believe AGNN is well-suited for lidar PR (due to its efficiency in the aggregation of contextual information) when exploring the spatiotemporal information existing between nearby point clouds. ## III The Proposed Method Our goal is to enhance PR performance by increasing the spatiotemporal information in scenarios where a graph with nodes and edges represents robot poses and the spatial constraints between them. Robot poses serve as positional information. ### _Overview_ P-GAT is defined based on a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) and \(\mathcal{E}\) denote nodes and edges, respectively. The graph is inherently created by a SLAM system (pose-graph). For example, a lidar SLAM system registers consecutive lidar scans to create the nodes and edges of the pose graph for further use in a back-end optimisation problem. Utilising pose-graph SLAM, we assume that accurate localisation is achievable within a local window of robot traversal, _e.g._, 200 m, for a vehicle travelling in a large-scale environment. However, no global localisation is provided to P-GAT. Our network comprises three major blocks: positional encoding, a graph-based attention mechanism, and binary classification. Given the local robot poses provided by SLAM, we encode relative positional information into embeddings computed by a 3D backbone, maintaining the topological relationship between a sequence of keynodes (selected robot poses, each associated with a point cloud and its corresponding descriptor). P-GAT utilises both the graph structure and the attention mechanism, described in Sec. III-C, to increase the distinctiveness of positional-aware embeddings. Inspired by SuperGlue [35], which uses an AGNN for image matching, we aim to relate the descriptors of point clouds captured in nearby locations by aggregating contextual and viewpoint information into the point clouds' descriptors using an AGNN. Figure 2 depicts the overall architecture of P-GAT and its components. ### _Problem Formulation_ To formulate the problem, we consider two sets of point clouds as two subgraphs \(\mathcal{S}_{A}=\{\mathcal{V}_{i}^{A}\}_{i=1}^{N}\) and \(\mathcal{S}_{B}=\{\mathcal{V}_{j}^{B}\}_{j=1}^{M}\), where \(N\) and \(M\) denote the number of keynodes in \(\mathcal{S}_{A}\) and \(\mathcal{S}_{B}\), respectively. Our PR problem is now defined between pairs of subgraphs instead of two individual point clouds, as is common in the literature. We formulate the PR problem as a binary classification problem using a similarity measure computed from the descriptors of every pair \((\mathcal{V}_{i}^{A},\mathcal{V}_{j}^{B})\) in subgraphs \(\mathcal{S}_{A}\) and \(\mathcal{S}_{B}\): every pair of keynodes must be classified according to whether the two point clouds were captured in the same place. As noted earlier, each keynode is associated with a robot position \(\mathbf{t}\) and a descriptor \(\mathbf{d}\), _i.e._, \(\mathcal{V}=\{\mathbf{t},\mathbf{d}\}\). The position \(\mathbf{t}\in\mathbb{R}^{3}\) is relative to the first node of the graph and normalised, _i.e._, \(\mathbf{p}=\frac{1}{\sigma}(\mathbf{t}-\mathbf{c})\), where \(\mathbf{c}\) is the centroid of the nodes' positions in a subgraph and \(\sigma\) is a scalar measuring the spread of the keynodes. This normalisation is essential to maintain the geometric consistency between keynodes and the network's generalisation when training and testing data have different distributions. The descriptor \(\mathbf{d}\in\mathbb{R}^{E}\) represents the input point cloud embedding as a fixed-size global descriptor at the time of keynode \(\mathcal{V}\), where \(E\) denotes the descriptor dimension.
### _Attentional Graph Neural Network_

To exploit the spatiotemporal information existing between keynodes in the pose graph created by SLAM, we build a fully-connected graph between the subgraph pairs \((\mathcal{S}_{A},\mathcal{S}_{B})\), _i.e._, \(\mathcal{G}_{AB}=(\mathcal{V}_{AB},\mathcal{E}_{AB})\) with \(N+M\) keynodes and \(\frac{1}{2}((N+M)^{2}-(N+M))\) undirected edges. We can then aggregate the relative positional and contextual constraints between the keynodes in \(\mathcal{G}_{AB}\) utilising an AGNN to enhance the descriptors' representation. Since \(\mathcal{G}_{AB}\) has two types of connections (multiplexity [38]), _i.e._, intra- and inter-subgraph edges, we effectively train the network to push the embeddings of point clouds captured in nearby locations together. This allows the embeddings to be invariant to dynamic and viewpoint changes, resulting in more effective place recognition.

**Positional Encoding:** To embed the keynode relative position into the descriptor of higher dimension \(E\) in each subgraph, we use a Multi-Layer Perceptron (MLP) encoder to lift the position \(\mathbf{p}\) to dimension \(E\) and integrate it with the descriptor \(\mathbf{d}\) by element-wise addition \(\oplus\): \[\mathcal{X}=\mathcal{D}\oplus\mathbf{MLP}_{\text{enc}}\left(\mathcal{P}\right), \tag{1}\] where \(\mathcal{X}\in\mathbb{R}^{E\times N}\) is the matrix of positional-aware descriptors and \(\mathcal{D}\in\mathbb{R}^{E\times N}\) is the matrix consisting of the original descriptors \(\mathbf{d}_{i},i\in\{1,...,N\}\). Position embedding is common in sequence-to-sequence learning problems [34, 39], enhancing the model's ability to capture contextual cues. In our case, it improves the embeddings' distinctiveness by allowing the GNN to distinguish between embeddings of point clouds at different positions in the subgraph, thus capturing the contextual and viewpoint information more effectively (see Sec. IV-E).

Fig. 2: P-GAT employs a multi-head attentional graph neural network for contextual and viewpoint information aggregation of sequential point clouds, refining each point cloud's descriptor. The network uses an MLP encoder that supplements descriptors with normalised positions. Intra- and inter-attention mechanisms infer relationships for all nodes simultaneously, updating the descriptors accordingly. P-GAT predicts whether a pair of point clouds is captured from the same place by comparing their final descriptors with shifted cosine similarity.
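A minimal PyTorch sketch of the position normalisation and Eq. (1) follows; the layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PositionalEncoder(nn.Module):
    """Lift normalised keynode positions to the descriptor dimension E with an
    MLP and add them element-wise to the descriptors, as in Eq. (1)."""
    def __init__(self, dim_e=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim_e))

    def forward(self, d, t):
        # d: (N, E) keynode descriptors; t: (N, 3) positions of one subgraph
        c = t.mean(dim=0, keepdim=True)   # centroid of the subgraph positions
        sigma = t.std() + 1e-8            # scalar spread of the keynodes
        p = (t - c) / sigma               # normalised positions
        return d + self.mlp(p)            # positional-aware descriptors X
```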
**Multi-head Attentional GNN:** As stated before, our graph between pairs of subgraphs is fully connected and multiplex, _i.e._, it comprises two types of edges: intra edges \(\mathcal{E}_{\text{intra}}\), the edges between nodes within one subgraph, and inter edges \(\mathcal{E}_{\text{inter}}\), the edges across the subgraphs in the input pair. Borrowing the terminology of _message passing_ in GNNs from [40], we aggregate the messages carried through edges \(e_{ij}\in\mathcal{E}\) to keynode \(i\), _i.e._, \(\mathbf{m}_{e_{i}}:=\{\mathbf{m}_{e_{ij}}\mid\forall j:e_{ij}\in\mathcal{E}\}\). The AGNN is composed of multiple layers. At each layer \(\ell\in\{1,...,L\}\), the message passing update, _i.e._, updating the intermediate feature \({}^{(\ell)}\mathbf{x}_{i}\) of keynode \(\mathcal{V}_{i}\), is concurrently conducted by aggregating messages across all the edges for all the keynodes, once for subgraph \(\mathcal{S}_{A}\) and once for \(\mathcal{S}_{B}\), using: \[{}^{(\ell+1)}\mathbf{x}_{i}={}^{(\ell)}\mathbf{x}_{i}+\mathcal{M}_{t}({}^{(\ell)}\mathbf{x}_{i},\mathbf{m}_{e_{i}}), \tag{2}\] where \(\mathcal{M}_{t}\) is the message function. Following [35], we use an MLP, albeit modified (see Sec. IV-C), and concatenate messages with intermediate representations, _i.e._, \(\mathcal{M}_{t}(\mathbf{x},\mathbf{m}_{e})=\mathbf{MLP}([\mathbf{x}\|\mathbf{m}_{e}])\), where \([.\|.]\) denotes the concatenation operator. Following the attention mechanism described in [34], we aggregate messages received by each keynode through edges within (intra-attention) and across (inter-attention) subgraphs \(\mathcal{S}_{A}\) and \(\mathcal{S}_{B}\). To this end, we consider the receiver keynode to be in subgraph \(\mathcal{S}_{R}\) and the entire set of sender keynodes to be in \(\mathcal{S}_{S}\), such that \((\mathcal{S}_{R},\mathcal{S}_{S})\in\{\mathcal{S}_{A},\mathcal{S}_{B}\}^{2}\): \[\begin{split}\mathbf{Q}&=\mathbf{W}_{Q}\ \mathcal{X}_{R}+\mathbf{b}_{Q},\\ \begin{bmatrix}\mathbf{K}\\ \mathbf{V}\end{bmatrix}&=\begin{bmatrix}\mathbf{W}_{K}\\ \mathbf{W}_{V}\end{bmatrix}\mathcal{X}_{S}+\begin{bmatrix}\mathbf{b}_{K}\\ \mathbf{b}_{V}\end{bmatrix},\end{split} \tag{3}\] where matrix \(\mathbf{Q}\) consists of the query vectors (features of the receiver keynodes) in \(\mathcal{S}_{R}\), while \(\mathbf{K}\) and \(\mathbf{V}\) consist of the keys (features used to compute the attention scores) and the values (features weighted by the attention scores to compute the output) in \(\mathcal{S}_{S}\), respectively. \(\mathcal{X}_{R}\) and \(\mathcal{X}_{S}\) are the matrices packing the intermediate features \(\mathbf{x}_{i}\) from \(\mathcal{S}_{R}\) and \(\mathbf{x}_{j}\) from \(\mathcal{S}_{S}\), respectively. The matrices \(\mathbf{W}_{Q}\), \(\mathbf{W}_{K}\) and \(\mathbf{W}_{V}\) are linear transformation weights learned during training, along with the biases \(\mathbf{b}_{Q}\), \(\mathbf{b}_{K}\) and \(\mathbf{b}_{V}\). The entire set of messages can now be computed by scaled dot-product attention as follows: \[\mathbf{m}_{e}=\text{softmax}\left(\frac{\mathbf{Q}^{\top}\mathbf{K}}{\sqrt{d_{k}}}\right)\mathbf{V}, \tag{4}\] where \(d_{k}\) is the key dimension. For large values of \(d_{k}\), the scaling is essential to avoid large dot-product magnitudes that push the softmax into regions of vanishing gradient [34]. We use multi-head attention to capture different types of information and relationships within the input more effectively.
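The intra-/inter-attention message computation of Eqs. (3)-(4) can be sketched as follows (our illustration with random features; in the real model the linear maps are learned and the update of Eq. (2) uses the modified MLP):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_messages(x_recv, x_send, w_q, w_k, w_v):
    """Scaled dot-product attention from sender features x_send (E, M) to
    receiver features x_recv (E, N); returns the messages m_e, shape (E, N)."""
    q, k, v = w_q(x_recv.T), w_k(x_send.T), w_v(x_send.T)  # (N,E), (M,E), (M,E)
    scores = q @ k.T / (k.shape[-1] ** 0.5)                # (N, M) attention logits
    return (F.softmax(scores, dim=-1) @ v).T

E, N, M = 256, 8, 10
w_q, w_k, w_v = nn.Linear(E, E), nn.Linear(E, E), nn.Linear(E, E)
x_a, x_b = torch.randn(E, N), torch.randn(E, M)
m_intra = attention_messages(x_a, x_a, w_q, w_k, w_v)  # sender = own subgraph
m_inter = attention_messages(x_a, x_b, w_q, w_k, w_v)  # sender = other subgraph
mlp = nn.Linear(2 * E, E)                               # stand-in for the MLP in Eq. (2)
x_a = x_a + mlp(torch.cat([x_a, m_inter], dim=0).T).T   # residual keynode update
```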
### _Classification Layer_

The output of the AGNN block is two tensors representing the final descriptors of subgraphs \(\mathcal{S}_{A}\) and \(\mathcal{S}_{B}\), \(\mathcal{F}^{\text{A}}=\{\mathbf{f}_{i}^{\text{A}}\}_{i=1}^{N}\in\mathbb{R}^{E\times N}\) and \(\mathcal{F}^{\text{B}}=\{\mathbf{f}_{j}^{\text{B}}\}_{j=1}^{M}\in\mathbb{R}^{E\times M}\). The final prediction in P-GAT is performed using cosine similarity to produce a similarity matrix \(\mathbf{S}=\{s_{ij}\}\in\mathbb{R}^{N\times M}\). Element \(s_{ij}\) is obtained from: \[s_{ij}=\frac{\langle\mathbf{f}_{i}^{\text{A}},\ \mathbf{f}_{j}^{\text{B}}\rangle}{\|\mathbf{f}_{i}^{\text{A}}\|\|\mathbf{f}_{j}^{\text{B}}\|}, \tag{5}\] where \(\langle.,.\rangle\) and \(\|.\|\) denote the inner product and the L2 norm, respectively. Based on the definition of PR, the descriptors of two nearby point clouds should be similar, _i.e._, \(s_{ij}\) close to \(1\), while the descriptors of two dissimilar point clouds should be distinct, _i.e._, \(s_{ij}\) close to \(-1\). Since there can be multiple pairs of nearby point clouds in a given pair of subgraphs, PR based on two subgraphs becomes a multi-label binary classification problem. Additionally, we map the scores in \(\mathbf{S}\), with range \([-1,1]\), to \(\mathbf{P}=\{p_{ij}\}\), with range \([0,1]\), using: \[p_{ij}=s_{ij}\times 0.5+0.5, \tag{6}\] and interpret \(p_{ij}\) as the probability that point cloud \(\mathcal{P}_{i}\) from subgraph \(\mathcal{S}_{A}\) and point cloud \(\mathcal{P}_{j}\) from subgraph \(\mathcal{S}_{B}\) represent the same place. Our classification problem can now be defined as a stochastic optimisation problem measuring the similarity between two probability distributions.

### _Loss_

Reformulating the binary classification problem as stochastic optimisation, we minimise the Kullback-Leibler (KL) divergence \(D_{KL}(\mathbf{y}\|\mathbf{P})=\sum_{i=1}^{N}y_{i}\log(\frac{y_{i}}{p_{i}})\) between the predicted probabilities \(p_{i}\) and the true probabilities \(y_{i}\) (ground truth). The true probabilities follow the Bernoulli distribution, _i.e._, the probability that the random variable \(x\in\{0,1\}\) belongs to a class (\(x=1\)) is \(p(x=1)=p\), otherwise \(p(x=0)=1-p\). Using the properties of the Bernoulli distribution, minimising the KL divergence reduces to minimising the Binary Cross Entropy (BCE) [41]. Since we have multiple separate classifications to perform (between the keynodes in pairs of subgraphs), our final BCE loss is defined as follows: \[\mathcal{L}\left(\mathbf{y},\mathbf{P}\right)=-\sum_{i=1}^{N}\sum_{j=1}^{M}\omega_{ij}\left(y_{ij}\cdot\log p_{ij}+(1-y_{ij})\cdot\log\left(1-p_{ij}\right)\right), \tag{7}\] where \(y_{ij}\in\{0,1\}\) is the ground truth label: if point cloud \(\mathcal{P}_{i}\) and point cloud \(\mathcal{P}_{j}\) represent the same place, \(y_{ij}=1\); otherwise, \(y_{ij}=0\). \(\omega_{ij}\) is a scalar hyperparameter indicating whether the point cloud pair \((\mathcal{P}_{i},\mathcal{P}_{j})\) contributes to the loss function, based on the conditions we follow to select positive and negative pairs (see Sec. IV-C).
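Eqs. (5)-(7) translate directly into a few lines of PyTorch; the following is a sketch under the notation above, where the weighting `w` is where the pair-selection rule of Sec. IV-C and the padding mask would enter.

```python
import torch
import torch.nn.functional as F

def similarity_probabilities(f_a, f_b):
    """Shifted cosine similarity, Eqs. (5)-(6): f_a is (E, N), f_b is (E, M)."""
    s = F.normalize(f_a, dim=0).T @ F.normalize(f_b, dim=0)  # (N, M) in [-1, 1]
    return s * 0.5 + 0.5                                     # mapped to [0, 1]

def weighted_bce_loss(p, y, w):
    """Eq. (7): element-wise binary cross-entropy weighted by w."""
    eps = 1e-7
    p = p.clamp(eps, 1 - eps)
    return -(w * (y * p.log() + (1 - y) * (1 - p).log())).sum()
```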
## IV Experiments

We briefly introduce the datasets used, followed by the evaluation settings and the implementation details. Comparisons are made between our proposed network and three state-of-the-art architectures with different backbones, serving as baselines. Additionally, we assess our network's performance in comparison to existing lidar PR approaches. We then provide detailed ablation studies to verify the network design and the impact of each component on performance gains.

### _Datasets_

We use three publicly available large-scale datasets for evaluation. We detail the characteristics of each dataset below.

**Oxford RobotCar dataset [42]**, a processed subset of the overall Oxford RobotCar dataset, has been widely used for lidar PR. It consists of point clouds captured by travelling a route (\(\sim\)10 km) 44 times across Oxford, UK, over a year. To assess performance on the Oxford RobotCar dataset, point clouds from one trip are taken as queries and matched against point clouds from other trips in an iterative process [9]. The training and testing dataset split introduced by Uy _et al._ [9] is followed in this work. In total, \(\sim\) 24.7k point clouds were used for training and testing.

**In-house dataset [9]** consists of data from three regions in Singapore - a University Sector (U.S.), a Residential Area (R.A.), and a Business District (B.D.). Similar to the evaluation on the Oxford dataset, we use the standard test dataset split described in [9]. Point clouds were collected on a car travelling a path in U.S. (\(\sim\) 10 km), R.A. (\(\sim\) 8 km) and B.D. (\(\sim\) 5 km) 5 times at different times. Point clouds from a single trip are used as queries and evaluated iteratively against point clouds from the remaining trips as the database [9]. The in-house dataset is used at test time only, to demonstrate generalisability. In total, \(\sim\) 4.5k point clouds were used for testing.

**MulRan dataset [43]** consists of traversals of several environments in South Korea - the Daejeon Convention Center (DCC) (3 runs, each \(\sim\) 5 km), the Riverside (3 runs, each \(\sim\) 6 km) of Daejeon city, the Korea Advanced Institute of Science and Technology (KAIST) (3 runs, each \(\sim\) 7 km) and Sejong city (Sejong) (3 runs, each \(\sim\) 23 km). We use the DCC and Riverside environments, training with DCC sequences 1 and 2 and testing on sequence 3, and training with Riverside sequences 1 and 3 and testing on sequence 2. The three KAIST sequences are used, unseen, only for evaluation. Because the average spacing of point clouds in the MulRan dataset is \(\sim\) 1 m, we only use point clouds that are at least 20 m apart, resulting in a total of \(\sim\) 2.5k point clouds used for training and testing.

### _Evaluation Criteria_

The datasets described above include UTM coordinates, _i.e._, ground truth obtained from IMU/GPS for Oxford/in-house and IMU/GPS/SLAM for MulRan, for each point cloud. Using this ground truth, we select a 25 m threshold to classify successful place recognition events (retrievals). We compare the cosine similarity between the refined global descriptors of each query in a run and the refined global descriptors of the remaining point clouds covering the same region in the database. For comparison, we use AR@N (and its variants, _i.e._, AR@1 and AR@1%), commonly used to measure lidar PR performance. This metric measures the percentage of correctly localised queries, where a query is correctly localised if at least one of the top-N database predictions matches it. A perfect AR@N score would be 100%, meaning all the possible revisits are correctly identified.
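As a reference for the protocol just described, AR@N can be computed as in the following sketch (our illustration; `similarity` is the query-to-database score matrix and `*_xy` are the UTM coordinates):

```python
import numpy as np

def average_recall_at_n(similarity, query_xy, db_xy, n=1, radius=25.0):
    """A query counts as correctly localised if any of its top-N retrievals
    lies within `radius` metres (25 m here) of the query's ground truth."""
    top_n = np.argsort(-similarity, axis=1)[:, :n]  # indices of the N best scores
    hits = 0
    for qi, candidates in enumerate(top_n):
        dists = np.linalg.norm(db_xy[candidates] - query_xy[qi], axis=1)
        hits += bool((dists <= radius).any())
    return 100.0 * hits / len(query_xy)
```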
### _Implementation Details_

We implemented P-GAT in PyTorch, using the Adam optimiser. The learning rate was set to \(1\times 10^{-4}\), without learning rate decay. Our model was trained with batch size 256 for 1500 epochs on Oxford and MulRan. The number of attention layers was set to 9, with four heads for the multi-head AGNN, resulting in \(\sim 12\) million learnable parameters in total. We used this single configuration across all the experiments. We create an adjacency matrix using a travel distance threshold to generate fully connected subgraphs. Using this matrix, we create subgraphs that include (key)nodes, each encompassing both the pose and its corresponding feature information. The adjacency matrix plays a crucial role in the training process for pairing subgraphs, ensuring that the nodes within each subgraph are fully interconnected. Upon pairing subgraphs, we merge the nodes and features from both subgraphs, forming a fully connected graph. Edges are defined between the nodes, and the node features are populated with descriptors obtained from an existing PR method. We randomly pair subgraphs from the database to select two subgraphs as input. Because most pairs of subgraphs do not overlap, the ground truth of the similarity matrix is almost a zero matrix, not allowing the model to update its parameters properly. Therefore, we defined a positive rate and set it to 30%, so that a sampled subgraph pair has a 30% probability of containing at least one pair of positive nodes. Subgraph generation is based on the robot's travelling distance: a subgraph starts from node \(\mathcal{V}_{i}\) and stops at node \(\mathcal{V}_{i+n}\) when the travelling distance exceeds a threshold. The stride of the subgraph generation is one, _i.e._, the next subgraph starts from node \(\mathcal{V}_{i+1}\). Since the number of subgraph nodes can vary, we use a boolean mask to deal with padding nodes. We also implemented a customised version of layer normalisation (compatible with the dynamic behaviour of subgraphs) in the MLP blocks, calculating the mean and standard deviation over all nodes except padding nodes. This approach effectively handles the dynamic number of nodes within subgraphs. During testing, because one query node can appear in multiple subgraphs (unlike point cloud pair-wise comparison), we implemented an averaging scheme in which the similarity scores related to a query node are averaged when the query node is compared against all the nodes from the database, _i.e._, \(\overline{s}_{ij}=\frac{1}{C_{1}C_{2}}\sum_{n_{1}=1}^{C_{1}}\sum_{n_{2}=1}^{C_{2}}s_{ij}(\mathbf{d}_{1}^{(n_{1})},\mathbf{d}_{2}^{(n_{2})})\), where \(C_{1}\) and \(C_{2}\) are the numbers of subgraphs from which the \(i\)-th query node and the \(j\)-th database node are seen, respectively.
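The stride-one, distance-bounded subgraph generation just described can be sketched as follows (our illustration, assuming `positions` holds the ordered keynode trajectory):

```python
import numpy as np

def make_subgraphs(positions, max_travel=200.0):
    """One subgraph per starting keynode (stride 1); each subgraph extends
    until the accumulated travel distance exceeds `max_travel` metres."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # per-edge travel
    subgraphs = []
    for i in range(len(positions)):
        travelled, j = 0.0, i
        while j < len(steps) and travelled <= max_travel:
            travelled += steps[j]
            j += 1
        subgraphs.append(list(range(i, j + 1)))  # keynode indices, fully connected
    return subgraphs
```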
### _Comparison to State-of-the-Art_

To show that P-GAT is backbone agnostic, we separately trained the model on the Oxford RobotCar [42] and MulRan [43] datasets using descriptors \(\mathbf{d}\in\mathbb{R}^{256}\) obtained from the backbone models PointNetVLAD [9], MinkLoc3D [10], and PPT-Net [13]. The descriptors of these baselines were generated after training the models on the training splits described earlier. Table I summarises the average recall@1% (AR@1%) and average recall@1 (AR@1) of our P-GAT and the three baseline models. Across all the experiments, P-GAT displays substantial performance gains (on average above 10%) compared to the baselines, indicating that integrating P-GAT with any backbone improves performance thanks to the spatiotemporal information. Additionally, following the trend of the models' performance when trained on Oxford and tested on MulRan/in-house, or trained on MulRan and tested on Oxford/in-house, P-GAT demonstrates a more consistent improvement in performance, showing that P-GAT enhances the generalisation ability of the network when training and test data are from different distributions. We further compare our P-GAT with the state-of-the-art methods listed in Table II. For this, P-GAT was trained using descriptors extracted from MinkLoc3D trained on the Oxford training subset. Comparing P-GAT with the MinkLoc3D baseline, we observe that AR@1 for Oxford, U.S., R.A., and B.D. improves by 5%, 11.3%, 13.9%, and 16.5%, respectively. Additionally, although we used MinkLoc3D descriptors, the substantial gain obtained by P-GAT allows us to outperform the other state-of-the-art methods, such as PPT-Net, SVT-Net and PVT3D, which reported higher performance than vanilla MinkLoc3D. The AR@1% scores for all datasets are higher than 99%. A high AR@1% indicates that P-GAT enables the identification of all possible revisits within a small subset of top candidates (1% of the point clouds in the database).

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Oxford} & \multicolumn{2}{c}{U.S.} & \multicolumn{2}{c}{R.A.} & \multicolumn{2}{c}{B.D.} \\ \cline{2-9} & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% \\ \hline PointNetVLAD [9] & 62.8 & 80.3 & 63.2 & 72.6 & 56.1 & 60.3 & 57.2 & 66.3 \\ PCAN [11] & 69.1 & 83.8 & 62.4 & 79.1 & 56.9 & 71.2 & 58.1 & 66.8 \\ LPD-Net [32] & 86.3 & 94.9 & 87.0 & 96.0 & 83.1 & 90.5 & 82.5 & 89.1 \\ EPC-Net [44] & 86.2 & 94.7 & — & 96.5 & — & 88.6 & — & 84.9 \\ HiTPR [45] & 86.6 & 93.7 & 80.9 & 90.2 & 78.2 & 77.2 & 74.3 & 79.8 \\ SOE-Net [12] & 89.4 & 96.4 & 82.5 & 93.2 & 82.9 & 91.5 & 83.3 & 88.5 \\ MinkLoc3D [10] & 90.7 & 93.6 & 87.6 & 95.0 & 80.4 & 91.2 & 81.5 & 88.5 \\ NDT-Transformer [46] & 93.8 & 97.7 & — & — & — & — & — & — \\ PPT-Net [13] & 93.5 & 98.1 & 90.1 & 97.5 & 84.1 & 93.3 & 84.6 & 90.0 \\ SVT-Net [47] & 91.7 & 97.8 & 90.1 & 96.5 & 84.3 & 92.7 & 85.5 & 90.7 \\ MinkLoc3Dv2 [26] & 92.8 & 81.7 & 83.1 & 67.7 & 72.6 & 57.1 & 70.4 & 62.2 \\ PVT3D [48] & 95.6 & 98.5 & 92.9 & 97.9 & 89.5 & 94.8 & 87.9 & 92.1 \\ P-GAT (Ours) & **95.0** & **99.9** & **95.0** & **100.0** & **94.3** & **100.0** & **95.0** & **99.8** \\ \hline \hline \end{tabular} \end{table} TABLE II: Average recall (%) at top 1 (AR@1) and top 1% (AR@1%) for state-of-the-art lidar-based PR models trained on the Oxford RobotCar dataset.

To evaluate the performance variance of our model compared to the baselines, we trained P-GAT, PointNetVLAD, MinkLoc3D and PPT-Net 5 times on the Oxford training subset and tested on the Oxford/in-house testing subsets using fixed settings. Fig. 3 shows the mean AR@N recalls (average of AR@N over the 5 experiments) and their standard deviations for all four models. The variance of our model is comparable with that of MinkLoc3D and PPT-Net, and it decreases as the number of top candidates increases. PointNetVLAD shows greater variance than the other methods, indicating less deterministic behaviour. The recall curves provide a visual representation of the superior performance of our P-GAT model, demonstrating the potential of our attention-based approach in improving the accuracy and stability of place recognition.

### _Ablation Studies_

We conducted ablation studies to validate our proposed method and the relative contribution of each component. For this purpose, we trained and evaluated P-GAT utilising the MinkLoc3D model trained on the Oxford dataset. This ensures a fair comparison with the results reported in Table II.

**Effects of Positional Encoding:** Table III compares the performance of our model with and without positional information. The model without positional information shows a negligible drop (2.2% in AR@1) in performance on the Oxford testing split. However, the model lacking positional information exhibits considerably inferior performance on the in-house datasets. As seen, AR@1 and AR@1% decrease by more than 40% and 10%, respectively, over R.A., U.S. and B.D.
Our P-GAT (trained on top of MinkLoc3D) performs best on all benchmarks.

\begin{table} \begin{tabular}{l l l l l l l l l l l l l l l l l l l} \hline \hline & & & \multicolumn{2}{c}{Oxford} & \multicolumn{2}{c}{DCC} & \multicolumn{2}{c}{Riverside} & \multicolumn{2}{c}{R.A.} & \multicolumn{2}{c}{U.S.} & \multicolumn{2}{c}{B.D.} & \multicolumn{2}{c}{KAIST} & \multicolumn{2}{c}{Average} \\ \cline{4-19} **Trained on:** & & & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% & AR@1 & AR@1\% \\ \hline \multirow{6}{*}{Oxford} & PointNetVLAD & Baseline & 62.8 & 83.8 & 69.3 & 81.6 & 37.0 & 36.6 & 56.2 & 65.3 & 52.3 & 68.4 & 63.1 & 79.0 & 57.2 & 73.1 & 56.8 & 72.5 \\ & & P-GAT & **96.5** & **99.9** & **80.8** & **92.1** & **57.0** & **83.5** & **89.2** & **86.6** & **76.3** & **99.6** & **79.0** & **88.2** & **73.8** & **78.9** & **79.8** \\ \cline{2-19} & MinkLoc3D & Baseline & 93.0 & 97.9 & 77.7 & 91.2 & 46.3 & 81.1 & 81.3 & 83.5 & 80.4 & 91.2 & 86.7 & 95.0 & 78.8 & 91.6 & 77.3 & 90.9 \\ & & P-GAT & **98.0** & **99.9** & **93.9** & **94.7** & **75.6** & **86.4** & **98.0** & **99.8** & **94.3** & **100.0** & **98.0** & **98.1** & **94.2** & **92.4** & **96.4** \\ \cline{2-19} & PPT-Net & Baseline & 93.5 & 98.1 & 76.0 & 89.6 & 39.4 & 84.2 & 84.6 & 90.0 & **84.1** & 93.1 & 90.1 & 97.5 & 77.4 & 88.3 & 77.9 & 61.6 \\ & & P-GAT & **97.4** & **99.9** & **93.6** & **99.2** & **73.1** & **94.0** & **80.0** & **85.1** & 82.9 & **99.8** & **99.9** & **89.3** & **91.4** & **87.4** & **97.5** \\ \hline \multirow{4}{*}{DCC} & PointNetVLAD & Baseline & 40.5 & 77.6 & 75.3 & 90.9 & 52.3 & 81.8 & 52.2 & 60.5 & 45.0 & 58.5 & 49.1 & 60.5 & 64.9 & 78.9 & 54.8 & 70.3 \\ & & P-GAT & **63.3** & **93.6** & **96.9** & **99.9** & **83.2** & **96.8** & **52.5** & **76.9** & **64.9** & **95.0** & **88.5** & **96.4** & **79.0** & **91.4** & **71.1** & **93.0** \\ \cline{2-19} & MinkLoc3D & Baseline & 68.9 & 81.5 & **95.4** & **99.2** & 78.1 & 92.7 & 94.5 & 85.7 & 79.5 & 88.7 & 84.0 & 92.5 & **85.3** & **93.5** & **81.5** & 90.5 \\ & & P-GAT & **78.9** & **97.3** & 93.2 & 98.9 & **87.3** & **95.6** & **95.4** & **99.9** & **81.3** & **99.2** & **89.5** & **89.9** & 82.2 & 98.8 & **86.5** & **97.7** \\ \hline \end{tabular} \end{table} TABLE I: Average recall (%) at top 1 (AR@1) and top 1% (AR@1%) of P-GAT and the three baseline models when trained on one dataset (leftmost column) and evaluated across datasets.

These results demonstrate that positional information is critical for our model's performance, especially when testing in unseen environments.

**Intra- and Inter-Attention Mechanisms:** In this ablation study, we examine the impact of the inter- and intra-attention mechanisms on our model's performance. Table IV displays the results, indicating that disabling intra-attention produces a slight decrease in performance across all datasets. On the other hand, disabling inter-attention results in a slight improvement in the AR@1 and AR@1% scores on the Oxford dataset but a significant decrease in AR@1 scores on the in-house datasets. The U.S. and R.A. datasets show a decrease of around 1% in AR@1% scores, whereas B.D. experiences a decrease of 10%. Notably, the model without the full attention mechanism demonstrates the lowest AR@1 and AR@1% scores on the Oxford dataset, while its performance on the in-house datasets is comparable to, and sometimes even better than, the model without inter-attention. These findings suggest that both inter- and intra-attention mechanisms enhance place recognition accuracy. Moreover, disabling inter-attention may lead to overfitting on the training dataset, resulting in poor generalisation at test time under distribution shifts.

TABLE IV: Impact of intra- and inter-attention mechanisms.
**Travel Distance in Subgraphs:** We conducted an ablation study to investigate the impact of the robot's travelling distance within subgraphs on the performance of our model. Table V tabulates the performance results for varying travelling distances (50 m, 100 m, 200 m, and 300 m) in subgraphs. The model trained on subgraphs with a 50 m length exhibits the lowest AR@1 and AR@1% scores on all datasets. As the travelling distance in subgraphs increases, the model performance also improves. The optimal results were obtained from the subgraphs with a travelling distance of 200 m, whereas the model trained on subgraphs with a travelling distance of 300 m exhibits a slight performance drop. This ablation study demonstrates the importance of the subgraph length in aggregating information within and across subgraphs: short subgraphs do not provide a sufficient receptive field over similar and dissimilar point clouds, while large subgraphs contain irrelevant information, biasing the model performance.

### _Runtime Analysis and Memory Usage_

We evaluated the average computation time per keynode in a subgraph and the memory consumption to demonstrate that our presented system can run online. The timing results are collected by running the pre-trained models on a single NVIDIA RTX A3000 Mobile GPU with an Intel(R) Xeon(R) W-11855M CPU @ 3.20GHz. Table VI reports P-GAT's runtime per frame using embeddings extracted by the baselines listed. Overall, P-GAT adds a constant memory of \(\sim 1.1\) GiB and \(\sim 20\) ms to the inference time, demonstrating that, in an end-to-end fashion, the total computation time allows online operation.

## V Conclusion

We proposed P-GAT for large-scale place recognition tasks. Through the attention mechanism and graph neural network, we increase the descriptors' distinctiveness by relating the entire point clouds collected in a sequence and in nearby locations in revisit areas. This design allows for exploiting spatiotemporal information. Extensive experiments on the Oxford dataset, the in-house dataset and the MulRan dataset demonstrate the effectiveness of the proposed method and its superiority compared to the state-of-the-art. Our proposed P-GAT's average performance across key benchmarks is superior by above 10% over the original baselines, demonstrating that P-GAT can be incorporated into any global descriptor, substantially improving its robustness and generalisation ability. In future work, we would like to extend P-GAT to the point cloud registration task to accurately determine the position and orientation of retrieved places.

## VI Appendix

In addition to the quantitative results discussed earlier, in the following we present qualitative examples of P-GAT in action, together with visualisations and analysis of the learned attention patterns.

### _Qualitative Analysis on Retrieval_

Fig. 4 visualises some semantically poor point clouds from R.A. (top) and Riverside (bottom) that P-GAT successfully recognised, whereas vanilla MinkLoc3D [10] and PPT-Net [13] failed. We show the query point clouds in the leftmost column and the top-1 retrieved point clouds from the three models in the following three columns. As seen, P-GAT correctly recognised the place and retrieved the most relevant point cloud despite the poor features, evidencing the benefit of aggregating spatiotemporal information. The third and fourth columns show the retrieval results from MinkLoc3D and PPT-Net.
The comparison between our model and the baselines demonstrates the superiority of our model in challenging scenarios.

### _Qualitative Analysis of Intra- and Inter-mechanism_

Fig. 5 demonstrates the attention pattern between two subgraphs (keynodes are indicated by red and green dots) by focusing on a keynode (receiver node) from the query subgraph. As seen in attention layer 0, P-GAT initially attends to all keynodes in the query and database subgraphs. As the attention layers deepen, P-GAT focuses on keynodes located near the receiver keynode. Intra-attention aggregates contextual information (in our case, spatiotemporal) between point clouds captured consecutively, while inter-attention aggregates information between non-consecutive point clouds in revisited areas, increasing the distinctiveness of the features and their invariance to viewpoint and temporal changes. Intermediate layers exhibit oscillating attention spans, reflecting the complexity of the learned behaviour.

TABLE V: Impact of the travel distance in the subgraphs (AR@1 and AR@1% on Oxford, U.S., R.A. and B.D. for subgraphs of 50 m, 100 m, 200 m and 300 m; the best results are obtained at 200 m).

TABLE VI: Comparison of inference speed and memory consumption.

| Model | Memory Usage | Runtime |
| --- | --- | --- |
| PointNetVLAD + P-GAT | (2.95 + 1.13 = 4.08) GiB | (22 + 18 = 40) ms |
| LPD-Net + P-GAT | (1.94 + 1.13 = 3.07) GiB | (35 + 18 = 53) ms |
| MinkLoc3D + P-GAT | (0.85 + 1.13 = 1.98) GiB | (29 + 18 = 47) ms |
| PPT-Net + P-GAT | (0.96 + 1.13 = 2.09) GiB | (30 + 18 = 48) ms |

TABLE IV: Impact of the intra- and inter-attention mechanisms (AR@1 and AR@1% on Oxford, U.S., R.A. and B.D. with intra-attention, inter-attention, or the full attention mechanism disabled).

### _Average Scheme Implementation_

As described in the paper, one keynode can be seen from multiple subgraphs; therefore, more than one similarity score will be computed for it. This is only the case at inference, when ground truth is unavailable. On the other hand, subgraphs in P-GAT are not fixed in length and can have a varying number of keynodes, hence we need to deal with this dynamic behaviour of subgraphs as well. To this end, we developed an average scheme to calculate the similarities between query and database keynodes. Alg. 1 shows the pseudo-code of our average scheme.
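Equivalently, the averaging logic can be written in a few lines of NumPy. The sketch below is a minimal illustration: the `p_gat_similarity` stub, the subgraph representation, and all array shapes are our assumptions, not the released P-GAT interface.

```python
import numpy as np

def p_gat_similarity(query_subgraph, db_subgraph):
    """Stand-in for P-GAT: returns an (M x N_l) matrix of similarity scores."""
    raise NotImplementedError  # replaced by the trained network in practice

def average_scheme(query_subgraph, db_subgraphs, M, N):
    # Alg. 1: accumulate scores and counts, then divide element-wise.
    S_bar = np.zeros((M, N))   # accumulated similarity matrix
    C = np.zeros((M, N))       # how many times each (i, d) pair was scored
    for l, db_subgraph in enumerate(db_subgraphs):      # stride-1 subgraphs
        S = p_gat_similarity(query_subgraph, db_subgraph)  # shape (M, N_l)
        for i in range(S.shape[0]):
            for j in range(S.shape[1]):
                d = j + l                  # local index j -> global index d
                S_bar[i, d] += S[i, j]
                C[i, d] += 1
    return S_bar / np.maximum(C, 1)        # element-wise division, last line of Alg. 1
```

Because the subgraphs are generated with a stride of 1, a database keynode with global index d is scored once by every subgraph that contains it, and the final division recovers the mean of those scores.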
Our algorithm initialises two matrices, the similarity matrix \(\overline{\mathbf{S}}\in\mathbb{R}^{M\times N}\) and the counting matrix \(\mathbf{C}\in\mathbb{R}^{M\times N}\), with zeros. We use the proposed P-GAT to calculate the similarity scores in the matrix \(\mathbf{S}\in\mathbb{R}^{M\times N_{l}}\) for all pairs of keynodes within the given query and database subgraphs \((\mathcal{S}_{Q},\mathcal{S}_{D}^{l})\). We add the similarity score \(s_{ij}\) of two keynodes to the corresponding element \(\overline{s}_{id}\) in the similarity matrix and record the number of added similarity scores for \(\overline{s}_{id}\) in the corresponding element \(c_{id}\) of the counting matrix, where \(d\) is the global index of the \(j\)-th keynode in \(\mathcal{S}_{D}^{l}\). We map the local keynode index \(j\) to the global keynode index \(d\) by adding the subgraph's index \(l\), as we generated the subgraphs using a stride of 1 and considering the incremental indexing of nodes in the database subgraphs. Finally, we perform an element-wise division between the similarity matrix and the counting matrix to obtain a similarity matrix whose elements correspond to the probabilities of whether pairs of keynodes represent the same place. The space complexity of Alg. 1 is \(\mathcal{O}(MN),\ M\ll N\), which is memory efficient. Once the similarity matrix is generated using Alg. 1, we follow the approach of other place recognition models and select the top K candidates based on their similarity scores.

```
Algorithm 1: Average scheme algorithm
Input:  subgraph pairs {S_Q, S_D^l}        // S_Q and S_D^l are the query and l-th database
                                           // subgraphs, l in {1, ..., L}
Output: updated similarity matrix S_bar in R^{M x N}
                                           // M: number of keynodes in S_Q; N: total number of
                                           // keynodes in {S_D^l}, counting duplicates only once
1: initialise the similarity matrix S_bar and the counting matrix C in R^{M x N} with zeros
2: for l = 1 : L do
3:     S = P-GAT(S_Q, S_D^l)
4:     for s_ij in S do        // i in {1,...,M}, j in {1,...,N_l}; N_l: number of keynodes in S_D^l
5:         d = j + l           // map the local index j to the global index d in {1,...,N}
6:         s_bar_id = s_bar_id + s_ij
7:         c_id = c_id + 1
8: S_bar = S_bar ⊘ C           // ⊘ denotes element-wise division
```

Fig. 4: Retrieval examples of our model along with two baseline models (MinkLoc3D and PPT-Net) on the R.A. (upper) and Riverside (lower) datasets. True-positive place recognition based on these point clouds is challenging due to the lack of distinct features. However, P-GAT can recognise similar places correctly, leveraging the spatiotemporal information between neighbouring point clouds. The relative distance between the query point cloud and the top candidate selected by P-GAT, MinkLoc3D and PPT-Net is 3.16 m, 801.38 m and 1027.96 m, respectively, for the R.A. example, and 2.83 m, 114.62 m and 127.64 m, respectively, for the Riverside example.

Fig. 5: Intra- and inter-attention patterns, in various layers, between a pair of subgraphs intersecting at a revisit area. Intra edges are colour-coded by grey shades, whereas inter edges by blue shades.

## Acknowledgements

This work was funded by CSIRO's Machine Learning and Artificial Intelligence Future Science Platform (MLAI FSP). P.P. and P.M. share senior authorship.
2309.11717
A class-weighted supervised contrastive learning long-tailed bearing fault diagnosis approach using quadratic neural network
Deep learning has achieved remarkable success in bearing fault diagnosis. However, its performance oftentimes deteriorates when dealing with highly imbalanced or long-tailed data, while such cases are prevalent in industrial settings because a fault is a rare event that occurs with an extremely low probability. Conventional data augmentation methods face fundamental limitations due to the scarcity of samples pertaining to the minority class. In this paper, we propose a supervised contrastive learning approach with a class-aware loss function to enhance the feature extraction capability of neural networks for fault diagnosis. The developed class-weighted contrastive learning quadratic network (CCQNet) consists of a quadratic convolutional residual network backbone, a contrastive learning branch utilizing a class-weighted contrastive loss, and a classifier branch employing a logit-adjusted cross-entropy loss. By utilizing the class-weighted contrastive loss and the logit-adjusted cross-entropy loss, our approach encourages an equidistant representation of class features, thereby inducing equal attention on all the classes. We further analyze the superior feature extraction ability of the quadratic network by establishing the connection between quadratic neurons and autocorrelation in signal processing. Experimental results on public and proprietary datasets are used to validate the effectiveness of CCQNet, and computational results reveal that CCQNet substantially outperforms SOTA methods in handling extremely imbalanced data.
Wei-En Yu, Jinwei Sun, Shiping Zhang, Xiaoge Zhang, Jing-Xiao Liao
2023-09-21T01:36:46Z
http://arxiv.org/abs/2309.11717v1
A class-weighted supervised contrastive learning long-tailed bearing fault diagnosis approach using quadratic neural network

###### Abstract

Deep learning has achieved remarkable success in bearing fault diagnosis. However, its performance oftentimes deteriorates when dealing with highly imbalanced or long-tailed data, while such cases are prevalent in industrial settings because a fault is a rare event that occurs with an extremely low probability. Conventional data augmentation methods face fundamental limitations due to the scarcity of samples pertaining to the minority class. In this paper, we propose a supervised contrastive learning approach with a class-aware loss function to enhance the feature extraction capability of neural networks for fault diagnosis. The developed class-weighted contrastive learning quadratic network (CCQNet) consists of a quadratic convolutional residual network backbone, a contrastive learning branch utilizing a class-weighted contrastive loss, and a classifier branch employing a logit-adjusted cross-entropy loss. By utilizing the class-weighted contrastive loss and the logit-adjusted cross-entropy loss, our approach encourages an equidistant representation of class features, thereby inducing equal attention on all the classes. We further analyze the superior feature extraction ability of the quadratic network by establishing the connection between quadratic neurons and autocorrelation in signal processing. Experimental results on public and proprietary datasets are used to validate the effectiveness of CCQNet, and computational results reveal that CCQNet substantially outperforms SOTA methods in handling extremely imbalanced data.

Bearing fault diagnosis, Supervised contrastive learning, Long-tailed distribution, Class-weighted loss function, Quadratic neural network.

## 1 Introduction

The health of bearings is critical to ensuring the sound performance of rotating equipment commonly used in a broad spectrum of industrial applications [1, 2]. The failure and malfunction of bearings, which account for 40%-70% of engine failures, result in serious economic losses and can even lead to casualties [3, 4]. Hence, it is of paramount importance to maintain bearings in a serviceable condition. Towards this goal, accurate and timely bearing fault diagnosis is an essential measure to reduce downtime, diminish repair costs, extend bearing life, and improve the reliability as well as the operational safety of rotating machinery [5]. A variety of studies in the literature find that bearing failures oftentimes manifest themselves as vibration signals on the surface and exhibit specific fault frequencies corresponding to different fault modes [6, 7]. Considering the valuable information embodied in the vibration signals, a series of approaches have exploited it for condition monitoring and fault diagnosis of bearings, and the physical mechanism relating vibration signals to the corresponding fault types has been revealed by a variety of analytical tools, such as envelope analysis [8, 9], time-frequency analysis [10, 11, 12], and cyclostationarity analysis [13]. In general, bearing fault diagnosis methods can be classified into two categories: signal processing-based methods and data-driven methods.
Signal processing-based methods utilize signal processing techniques, such as the wavelet transform [14], the short-time Fourier transform [15], blind deconvolution, and adaptive mode decomposition [16], to manually extract relevant characteristics from the raw vibration signal and thereby facilitate the determination of the specific fault type. Although these methods have a sound mathematical foundation, they require tailoring the usage of the signal processing techniques and tuning the model parameters for each specific scenario. Data-driven methods have the potential to overcome these shortcomings. Among data-driven methods, one representative approach is deep learning, particularly the convolutional neural network (CNN), which can learn feature representations from the raw data to support fault diagnosis in an end-to-end fashion [17, 18]. Additionally, CNNs possess the ability to handle large-scale data in high-dimensional spaces and thus emerge as a prevailing choice for many applications. As a result, CNNs have also been extensively utilized in bearing fault diagnosis. For example, Zhang et al. [19] proposed the Deep Convolutional Neural Network with Wide First-layer Kernels (WDCNN) to process 1D vibration signals in an end-to-end fashion for bearing fault diagnosis. Other improvements have been made to CNN-based diagnosis models, such as Deep Residual Shrinkage Networks (DRSN) [20], the Dislocated Time Series CNN (DTS-CNN) [21], and the probabilistic spiking response model (PSRM) [22]. These refinements lead to superior CNN performance for fault diagnosis in different contexts, such as heavy noise, varying rotation speed, and cross-domain characteristics.

Despite the aforementioned achievements in CNN-based bearing fault diagnosis, most CNN-based methods rely on balanced data to develop a well-performing model. However, under practical operating conditions, machines usually operate in a normal or healthy state for the majority of the time, while various types of failures occur only with quite a low probability [23, 24]. Take nuclear power plants as an example: regular maintenance is conducted on the bearings of rotating equipment, such as seawater booster pumps, to ensure their stability and prevent breakdowns. These maintenance activities significantly improve the reliability of nuclear power plants, so the probability of abnormal events drops to a low level [25, 26]. Statistically, the data for fault diagnosis commonly exhibit a long-tailed distribution, where the number of samples in healthy states significantly outnumbers those in fault states. In the machine learning community, the healthy class is considered the majority (head) class, while the fault classes are considered the minority (tail) classes. Under such circumstances, a long-tailed dataset poses a significant challenge to the training of CNNs. If the long-tail characteristic of the data is neglected, the trained CNN model typically exhibits poor performance, manifested as high bias, and is prone to overfitting. These factors, individually and collectively, eventually translate into a considerable increase in the misclassification rate [27]. To combat class imbalance, a typical strategy is to balance the data distribution through resampling to augment the minority class. Resampling methods, such as oversampling [28] and undersampling [29], are commonly employed to remediate class imbalance.
In addition, data generation methods, such as GANs [30] and VAEs [31], have been developed to generate synthetic samples to enrich the minority class. However, these methods face fundamental limitations when dealing with extremely imbalanced data. Some studies indicated that these conventional methods experience significant performance degradation (roughly 60% accuracy) when the imbalance rate exceeds 20:1 [32, 33, 34]. In practical scenarios, it is common to encounter such extreme class imbalance, with thousands of healthy data points but only a few dozen instances of fault data. Specifically, resampling methods might discard a substantial number of samples from the majority class and potentially result in the loss of valuable information retained in these discarded samples. Furthermore, the high repeatability of 1D signals makes it hard to acquire effective samples for training [24]. On the other hand, synthetic data generation methods face challenges because they cannot guarantee the quality of the synthetically generated samples, particularly when the training data is scarce. As reported in Ref. [32], synthetic data generation risks distorting the distribution of the actual data and causing overfitting problems.

To overcome these challenges, contrastive learning offers a new perspective, as it optimizes a contrastive loss function to increase the separability between positive and negative samples, thus learning a more discriminative feature space [35]. Contrastive learning has been proven to be highly effective for fault diagnosis, especially in cases with a limited number of training samples [36, 37]. However, existing contrastive learning approaches are not yet tailored to address the long-tailed issue, and their effectiveness is unavoidably influenced by class imbalance. As such, some studies combined contrastive learning with undersampling techniques to fight against imbalanced datasets. For instance, Zhang et al. [32] combined contrastive learning with undersampling to achieve a balanced data distribution and trained the model with contrastive learning, achieving satisfactory outcomes in long-tailed bearing fault diagnosis. Nevertheless, undersampling unavoidably causes the repetition of a small number of samples in the tail, giving rise to limited representativeness of the generated samples and poor model robustness.

In this paper, we are motivated to tackle these issues by refining the contrastive learning approach from two perspectives. In the first place, we propose to adopt a polynomial neural network, more specifically the quadratic network, to enhance the feature extraction capability of the CNN. In essence, the quadratic network replaces the linear neurons of a neural network with nonlinear quadratic neurons, and such a replacement injects heightened expressive power compared to a conventional first-order neural network according to approximation theory [38]. In addition, a previous study has demonstrated the efficacy and interpretability of the quadratic network in bearing fault diagnosis when faced with strong noise [39]. Despite these encouraging findings, the central issue of _why the quadratic network outperforms the conventional network when processing vibration signals_ remains a mystery. In this paper, we also answer this question through a rigorous mathematical deduction and conclude that quadratic neurons are able to perform local autocorrelation, thereby facilitating the extraction of fault-related features.
Such a finding is crucial for a thorough understanding of the decision-making mechanism of the quadratic network for fault diagnosis. Secondly, supervised contrastive learning has been shown to guide the model to collapse to the vertices of a regular simplex on a hypersphere when dealing with balanced datasets [40]. However, imbalanced datasets have an explicit impact on the distribution of the vertices when the model collapses, subsequently influencing the separability of the features on the hypersphere. Considering the effectiveness of reweighting techniques in dealing with imbalanced data, such as focal loss [41] and weighted cross-entropy loss [42], we argue that they can be applied to contrastive learning, particularly supervised contrastive learning, to address the long-tailed distribution problem. Compared to the state-of-the-art literature, the contributions of this paper are summarized as follows:

1. We propose a class-weighted contrastive learning quadratic network (CCQNet) for long-tailed bearing fault diagnosis. We employ a quadratic network as the feature extraction backbone and combine it with a class-weighted contrastive loss and a logit-adjusted cross-entropy loss function. Our method improves the model's ability to handle imbalanced data via a powerful feature extractor and re-balanced loss functions.
2. Mathematically, we demystify the superior signal feature representation ability of quadratic networks by deducing and establishing the connection between autocorrelation and quadratic neurons. To the best of our knowledge, this is the first theory to explain quadratic networks from the perspective of signal processing.
3. We conduct comprehensive experiments using a public dataset and our own dataset. Experimental results suggest that CCQNet outperforms other state-of-the-art methods, especially on extremely imbalanced data.

The rest of the paper is structured as follows. Section 2 gives a brief review of supervised contrastive learning. Next, the proposed CCQNet and its main operators are explained in Section 3. In Section 4, several experiments are performed to verify the effectiveness of the proposed method. Finally, we conclude this paper in Section 5.

## 2 Supervised contrastive learning

The key idea of contrastive learning is to bring samples with the same label (positive samples) closer together and push samples with different labels (negative samples) further apart. The primary difference between self-supervised contrastive learning and fully supervised contrastive learning lies in how the positive and negative samples are selected. As shown in Figure 1, fully supervised contrastive learning regards all the samples from the same class as positive samples and treats the samples from all other classes as negative samples, fully taking advantage of the labeling information [35].

Figure 1: The framework of supervised contrastive learning.

In general, supervised contrastive learning consists of three steps: (1) **Data augmentation**: Given an input dataset \(S=\{\mathbf{X}\in\mathbb{R}^{N\times n};\mathbf{Y}\in\mathbb{R}^{N}\}\) consisting of sample/label pairs \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N}\), two data augmentation methods (e.g., adding Gaussian noise and random scaling) are applied such that each \(\mathbf{x}_{i}\) is extended to two additional samples \(\tilde{\mathbf{x}}_{i1},\tilde{\mathbf{x}}_{i2}\); the raw samples are thereby augmented to \(\left\{\tilde{\mathbf{x}}_{i}\right\}_{i=1}^{2N}\), with \(i\in\mathbf{I}\equiv\{1,2,\cdots,2N\}\). (2) **CNN construction**.
The augmented samples are then passed through a backbone convolutional neural network CNN(\(\cdot\)), followed by a projection head, such as a multi-layer perceptron MLP(\(\cdot\)), and normalized to the vector \(\mathbf{z}_{i}\): \[\begin{split}\mathbf{u}_{i}&=\text{MLP}(\text{CNN}(\tilde{\mathbf{x}}_{i})),\\ \mathbf{z}_{i}&=\frac{\mathbf{u}_{i}}{||\mathbf{u}_{i}||},\end{split} \tag{1}\] where \(||\cdot||\) denotes the \(L_{2}\) norm. (3) **Building the contrastive loss function**. Assume the mini-batch size is \(B\), and the mapping network \(\phi\) projects the augmented data to their latent embeddings with \(\phi:\tilde{\mathbf{X}}=\left\{\tilde{\mathbf{x}}_{i}\right\}_{i=1}^{B}\rightarrow\mathbf{Z}=\left\{\mathbf{z}_{i}\right\}_{i=1}^{B}\). The supervised contrastive loss (SCL) function sums up the loss of each element of the mini-batch in \(\mathbf{Z}\): \[\mathcal{L}^{sup}=\sum_{i=1}^{B}\mathcal{L}^{sup}_{i}. \tag{2}\] In a mini-batch, let a positive output \(\mathbf{z}_{p}\in\mathbf{Z}\) be an output from the same category as \(\mathbf{z}_{i}\) (excluding \(\mathbf{z}_{i}\) itself), and let a negative output \(\mathbf{z}_{a}\in\mathbf{Z}\) be an arbitrary output except \(\mathbf{z}_{i}\). The SCL function of each \(\mathbf{z}_{i}\) is: \[\mathcal{L}^{sup}_{i}=\frac{-1}{|\mathbf{P}_{i}|}\sum_{p\in\mathbf{P}_{i}}\log\frac{\exp\left(\mathbf{z}_{i}\cdot\mathbf{z}_{p}/\tau\right)}{\sum_{a\in\mathbf{A}_{i}}\exp\left(\mathbf{z}_{i}\cdot\mathbf{z}_{a}/\tau\right)}, \tag{3}\] where \(\mathbf{A}_{i}\equiv\{\{1,2,\cdots,B\}\setminus\{i\}\}\) is the set of indices of all mini-batch samples other than \(i\), \(\mathbf{P}_{i}\equiv\left\{p\in\mathbf{A}_{i}:\tilde{y}_{p}=\tilde{y}_{i}\right\}\) is the set of indices of all mini-batch samples that share the class of index \(i\), \(|\mathbf{P}_{i}|\) is the number of samples in \(\mathbf{P}_{i}\), \(\cdot\) denotes the inner product, and \(\tau\in\mathbb{R}^{+}\) is the temperature parameter. Therefore, \(\frac{1}{|\mathbf{P}_{i}|}\) computes the average of the logarithmic terms. Note that in supervised contrastive learning, the SCL function uses only the augmented samples to update the network parameters. For any sample, all positives (augmented data with the same label) in a mini-batch contribute to the numerator. Thus, the goal of supervised contrastive learning is to minimize the SCL function, which increases the value of the numerator. This encourages the network to closely align the representations of all instances from the same category, as the updated network parameters make \(\mathbf{z}_{i}\cdot\mathbf{z}_{p}\) larger [35, 43].

As a type of representation learning, supervised contrastive learning also requires a classifier to perform classification tasks. Two typical strategies are two-stage and one-stage training. In the former, the first stage learns features using the SCL function and the second stage updates the classifier using the cross-entropy loss function [35]. In the latter, the SCL function and the cross-entropy function are placed in two branches and trained simultaneously [44]. The one-stage strategy has shown efficiency and effectiveness in handling small data [33, 45], and we adopt this strategy for our framework.
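For concreteness, the following is a minimal PyTorch sketch of the SCL objective in Eqs. (2)-(3). The tensor shapes, the default temperature, and the batching convention are our assumptions; this is not the authors' released implementation.

```python
import torch

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Eq. (3): z is a (2N, d) batch of L2-normalized embeddings of the
    augmented samples; labels is the (2N,) tensor of their class labels."""
    B = z.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=z.device)
    sim = z @ z.t() / tau                          # pairwise inner products z_i . z_a / tau
    # A_i: all mini-batch samples except i itself
    logits = sim.masked_fill(eye, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # P_i: same-label samples, excluding i
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # average log-probability over the positives, then sum over the batch (Eq. (2))
    loss_i = -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss_i.sum()
```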
However, supervised contrastive learning also faces difficulties in handling imbalanced data [32]. In this case, the number of healthy samples is substantially larger than the number of faulty samples: in a mini-batch, healthy samples make up the majority, and the other fault categories are only a minority or even non-existent. As a result, supervised contrastive learning struggles to increase the distance between minority classes, limiting its effectiveness in addressing class imbalance. To overcome this issue, we consider enhancing the network's feature extraction capability and balancing the contrastive loss function within each class, thereby adapting it to handle long-tailed datasets.

## 3 Proposed Methodology

An overview of our proposed method is given in Figure 2; the developed approach consists of four steps: data augmentation, quadratic network construction, classifier learning, and contrastive learning. The first step generates extra data for contrastive learning by leveraging data augmentation techniques. The second step employs a quadratic convolutional residual network as the backbone of the model to extract informative features from the raw signal. In the third and fourth steps, two MLPs are employed as projection networks: the classifier branch employs a logit-adjusted cross-entropy (CE) loss function to complete the classification task, while the contrastive learning branch utilizes a class-weighted loss function to capture the latent representation of the long-tailed data.

Figure 2: An overview of the proposed framework.

### Data augmentation

The proposed method starts by randomly sampling the long-tailed data, and each selected raw sample \(\mathbf{x}\) is passed through the data augmentation module to generate a pair of positive samples \(\tilde{\mathbf{x}}_{1},\tilde{\mathbf{x}}_{2}\) for supervised contrastive learning. Four data augmentation methods commonly used in signal processing are considered (a code sketch follows the list):

1. **Adding Gaussian noise**: Gaussian noise \(\mathbf{n}\) is added to each sample \(\mathbf{x}\). Here, we set \(\mathbf{n}\) to follow a Gaussian distribution \(N(0,0.01)\). Note that all the data augmentation parameters we use refer to Ref. [46]. \[\tilde{\mathbf{x}}=\mathbf{x}+\mathbf{n}.\] (4)
2. **Random scaling**: A random variable \(s\), with its value sampled from a Gaussian distribution \(N(0,0.01)\), is multiplied with each sample \(\mathbf{x}\). \[\tilde{\mathbf{x}}=\mathbf{x}\cdot s.\] (5)
3. **Random stretching**: The sample \(\mathbf{x}\) is stretched along the time axis. The degree of stretching is specified by a scaling factor denoted by \(r\), whose value is randomly sampled from a uniform distribution \(U(0,1)\) for each augmentation. If the scaling factor is set to \(r\), the resulting length of the stretched signal is \(r\) times the length \(n\) of the original signal. Specifically, the stretched signal, denoted \(\mathbf{x}_{\text{stretched}}[j]\), is obtained by performing a linear interpolation of \(\mathbf{x}\) at the positions \(rj\), where \(j\) is the index of the sample in the original signal. The final stretched signal is obtained by truncating the interpolated signal to the length of the original signal. \[\begin{split}&\mathbf{x}_{\text{stretched}}[j]=\mathbf{x}[rj],\\ &\tilde{\mathbf{x}}=\mathbf{x}_{\text{stretched}}[0:n].\end{split}\] (6)
4. **Random cropping**: A random interval of the sample is set to zero. We set the interval length to 30 and place it at a random position \(j\) in the sample. \[\tilde{\mathbf{x}}=\{\mathbf{x}[0:j],\underbrace{[0,0,\cdots,0]}_{30},\mathbf{x}[j+30:\text{end}]\}.\] (7)

The two augmented samples \(\tilde{\mathbf{x}}_{1},\tilde{\mathbf{x}}_{2}\) are produced by randomly selecting two augmentation methods, where each technique has an equal probability of 0.5 of being selected as the augmentation method; for example, \(P(\tilde{\mathbf{x}}=\mathbf{x}+\mathbf{n})=0.5\). In doing this, we ensure diversity in the generated contrastive samples, thus enhancing the model's generalizability.
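A minimal NumPy sketch of Eqs. (4)-(7) is given below. The parameter values follow the list above, while the uniform selection logic and the dummy signal are our own illustrative assumptions.

```python
import numpy as np

def augment(x, rng):
    """Randomly apply one of the four augmentations of Eqs. (4)-(7)."""
    choice = rng.integers(4)
    n = len(x)
    if choice == 0:                                  # Eq. (4): additive Gaussian noise
        return x + rng.normal(0.0, 0.1, size=n)      # variance 0.01 -> std 0.1
    if choice == 1:                                  # Eq. (5): random scaling
        return x * rng.normal(0.0, 0.1)              # a single scalar s ~ N(0, 0.01)
    if choice == 2:                                  # Eq. (6): random stretching
        r = rng.uniform(0.0, 1.0)
        idx = np.arange(n) * r                       # interpolation positions r*j
        return np.interp(idx, np.arange(n), x)       # linear interpolation, length n
    j = rng.integers(0, n - 30)                      # Eq. (7): random cropping
    out = x.copy()
    out[j:j + 30] = 0.0
    return out

rng = np.random.default_rng(42)
x = np.sin(0.05 * np.arange(2048))                   # dummy vibration segment
x1, x2 = augment(x, rng), augment(x, rng)            # positive pair for contrastive learning
```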
### Quadratic residual network

In this paper, we employ a quadratic convolutional network as the backbone of the neural network because a quadratic network exhibits superior feature extraction ability compared with conventional neural networks [47, 48]. Specifically, the effectiveness of the quadratic network has been verified in bearing fault diagnosis [39]. A quadratic network replaces conventional neurons with quadratic neurons composed of an inner product and a power term of the input vector. Mathematically, given an input sample \(\mathbf{x}\in\mathbb{R}^{1\times n}\), \(\mathbf{x}=\left[x_{1},x_{2},\cdots,x_{n}\right]\), a quadratic convolutional operation can be expressed as:

\[Q(\mathbf{x})=\sigma\big((\mathbf{x}*\mathbf{w}^{r}+b^{r})\odot(\mathbf{x}*\mathbf{w}^{g}+b^{g})+(\mathbf{x}\odot\mathbf{x})*\mathbf{w}^{b}+c\big), \tag{8}\]

where \(*\) denotes the convolutional operation, \(\odot\) denotes the Hadamard product, \(\mathbf{w}^{r},\mathbf{w}^{g},\mathbf{w}^{b}\in\mathbb{R}^{k\times 1}\) (with \(k\) the kernel size) denote the weights of three different convolution kernels, \(\sigma(\cdot)\) is the activation function (e.g., ReLU), and \(b^{r}\), \(b^{g}\) and \(c\) denote the biases corresponding to these convolution kernels.

Mathematically, studies have shown that quadratic neurons exhibit a superior ability to approximate radial functions with a polynomial number of neurons, whereas conventional neurons require an exponential number [48, 49]. Additionally, quadratic networks can achieve polynomial approximation, whereas conventional neural networks can only achieve piecewise approximation through nonlinear activation functions. These characteristics have the potential to enhance the generalization and expressiveness of neural networks, as real-world data distributions are usually nonlinear. The advantage of quadratic neurons is that they can flexibly and directly enhance the performance of conventional networks, as such networks are constructed by simply replacing the conventional neurons. However, quadratic networks introduce more nonlinear operations and parameters, leading to increased model complexity, which presents a challenge for achieving convergence during training.
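Eq. (8) maps directly onto three standard 1-D convolutions. Below is a minimal PyTorch sketch of such a layer (our own, not the released code), initialized in the ReLinear style explained next so that it starts as a first-order neuron:

```python
import torch
import torch.nn as nn

class QConv1d(nn.Module):
    """Quadratic 1-D convolution of Eq. (8):
    sigma((x*w_r + b_r) ⊙ (x*w_g + b_g) + (x⊙x)*w_b + c)."""
    def __init__(self, in_ch, out_ch, k, stride=1, padding=0):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, k, stride, padding)
        self.conv_g = nn.Conv1d(in_ch, out_ch, k, stride, padding)
        self.conv_b = nn.Conv1d(in_ch, out_ch, k, stride, padding)
        # ReLinear-style start: w_g = 0, b_g = 1, w_b = 0, c = 0,
        # so the initial output reduces to the first-order term.
        nn.init.zeros_(self.conv_g.weight)
        nn.init.ones_(self.conv_g.bias)
        nn.init.zeros_(self.conv_b.weight)
        nn.init.zeros_(self.conv_b.bias)

    def forward(self, x):
        return torch.relu(self.conv_r(x) * self.conv_g(x) + self.conv_b(x * x))

# e.g., the first backbone layer of Table 1: (8, 1, 2048) -> (8, 16, 1024)
y = QConv1d(1, 16, 7, stride=2, padding=3)(torch.randn(8, 1, 2048))
```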
To address the aforementioned issue, two strategies are implemented. First, quadratic networks are trained using an algorithm called ReLinear [48]. The parameters of the quadratic neurons are factorized into two groups: the first-order group \(\{\mathbf{w}^{r},b^{r}\}\) and the quadratic group \(\{\mathbf{w}^{g},b^{g},\mathbf{w}^{b},c\}\). During the initialization stage, the first group undergoes normal initialization using Kaiming initialization [50], whereas the second group is set as \(\{b^{g}=1;\ \mathbf{w}^{g},\mathbf{w}^{b},c=0\}\). During the training stage, different learning rates, \(\gamma_{r}\) and \(\gamma_{g,b}\), are assigned to these two groups, with \(\gamma_{g,b}=\alpha\gamma_{r}\), where \(0<\alpha<1\). ReLinear thus initiates the training of the quadratic network from the first-order terms and gradually trains the parameters of the quadratic terms, enabling the neural network to avoid gradient explosion.

Second, we construct a cross-layer connection strategy for the neural network to improve its stability. As depicted in Figure 3, we employ two residual blocks, referred to as QResBlocks, each composed of two QLayers. Each QLayer contains the mainline \(Q(\mathbf{x})\) and a shortcut connection. To ensure compatibility with the channel dimension of the output variable, an additional "quadratic convolution-batch normalization" structure is introduced in QLayer1 of QResBlock2, as illustrated in Figure 3 (b). Overall, the structural parameters of the quadratic residual network backbone are presented in Table 1.

Figure 3: The structure of the quadratic ResNet backbone.

### Class-weighted contrastive learning

The two generated sample pairs are subsequently sent to the contrastive learning branch after passing through the backbone. In this branch, the features are transformed by an MLP and then mapped to a hypersphere using \(L_{2}\) normalization (Eq. (1)). The key idea of class-weighted contrastive learning is to design the contrastive loss function so that it induces the neural network to pay equal attention to each class. In the case of a long-tailed dataset, the main impact on \(\mathcal{L}^{sup}\) is attributed to the term \(\sum_{a\in\mathbf{A}_{i}}\exp\left(\mathbf{z}_{i}\cdot\mathbf{z}_{a}/\tau\right)\), where \(\mathbf{z}_{a}\) represents the output of each generated sample except for \(\mathbf{z}_{i}\) [51]. When confronted with imbalanced datasets, tail classes pose a challenge for the network to effectively distinguish them from each other. This is because tail classes occupy only a small proportion of the \(\mathbf{z}_{a}\), and a given tail category may even be missing from some mini-batches. Consequently, the network's ability to accurately classify the tail classes is hindered by this imbalance in the contributions of the head and tail classes. Inspired by the reweighting technique, we design a class-aware weight to force the network to pay more attention to the tail classes. That is,

\[W_{a}=\frac{1}{|\mathbf{P}_{a}|}, \tag{9}\]

where \(|\mathbf{P}_{a}|\) indicates the number of samples belonging to class \(a\). A tail class \(a\) has a much lower \(|\mathbf{P}_{a}|\), which results in a larger \(W_{a}\). By doing this, each class has its own class weight, which balances the proportion of each class in the contrastive loss function. As a result, the representation learned by contrastive learning remains unaffected by the class imbalance. Next, we integrate this weight into the SCL loss function formulated in Eq. (3), and we thus obtain the class-weighted contrastive loss (CRCL):

\[\mathcal{L}^{CRCL}=\sum_{i=1}^{B}\mathcal{L}_{i}^{CRCL}=\sum_{i=1}^{B}-\frac{1}{|\mathbf{P}_{i}|}\sum_{p\in\mathbf{P}_{i}}\log\frac{\exp\left(\mathbf{z}_{i}\cdot\mathbf{z}_{p}\right)}{\sum_{a\in\mathbf{A}_{i}}W_{a}\exp\left(\mathbf{z}_{i}\cdot\mathbf{z}_{a}\right)}. \tag{10}\]

Compared to the original SCL loss function, for each \(\mathbf{z}_{a}\) we assign the weight \(W_{a}\) of its class \(a\). Note that the temperature parameter \(\tau\) is eliminated to reduce the number of hyperparameters that need to be fine-tuned.
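A minimal PyTorch sketch of Eq. (10) is given below. We compute \(|\mathbf{P}_{a}|\) within the mini-batch, which is our own assumption about the counting scope:

```python
import torch

def crcl_loss(z, labels):
    """Class-weighted contrastive loss of Eqs. (9)-(10): every negative z_a
    is reweighted by W_a = 1/|P_a|, the inverse count of its class."""
    B = z.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=z.device)
    sim = z @ z.t()                                   # z_i . z_a, no temperature
    counts = torch.bincount(labels)                   # |P_a| per class (batch-level)
    W = (1.0 / counts.float())[labels]                # weight W_a for every sample a
    # weighted denominator: sum over A_i of W_a * exp(z_i . z_a)
    exp_sim = torch.exp(sim).masked_fill(eye, 0.0)
    denom = (exp_sim * W.unsqueeze(0)).sum(dim=1)
    log_prob = sim - torch.log(denom).unsqueeze(1)    # log of the Eq. (10) fraction
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    loss_i = -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss_i.sum()
```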
**Remark 1.** It has been proved that SCL guides models to collapse to the vertices of a regular simplex lying on a hypersphere for balanced datasets, and empirical evidence indicates that a regular simplex configuration is beneficial to better performance [40]. We argue that the class-weighted loss function encourages the features learned for the different classes to be equidistant from each other, which places the class feature centers nearly on the vertices of a regular simplex. As shown in Figure 4, we map the imbalanced features acquired through SCL and CRCL onto a two-dimensional circle. It is evident that CRCL remarkably amplifies the separation between distinct classes. This attribute explains the ability of CRCL to handle imbalanced data.

### Classifier learning

Similar to contrastive learning, the raw signal is fed to the classifier branch after the backbone network. Here, we use the logit-adjusted cross-entropy loss function \(\mathcal{L}^{LC}\) to drive the training of the classifier [52]. This function is also class-weighted, as shown below:

\[\mathcal{L}^{LC}=\sum_{j\in\mathcal{J}}\mathcal{L}_{j}^{LC}=\sum_{j\in\mathcal{J}}-\log\frac{\exp\left(f_{y_{j}}(\mathbf{x}_{j})+\tau\log\pi_{y_{j}}\right)}{\sum_{y^{\prime}\in[L]}\exp\left(f_{y^{\prime}}(\mathbf{x}_{j})+\tau\log\pi_{y^{\prime}}\right)}, \tag{11}\]

where \(j\in\mathcal{J}\equiv\{1,2,\cdots,N\}\) are the indices of the raw data \(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{N}\), and \(f(\mathbf{x}_{j})\) denotes the output of the classifier, i.e., the logit. \(f_{y_{j}}(\mathbf{x}_{j})\) denotes the value of the element in the logit vector classified as label \(y_{j}\). Let \([L]\) be the collection of labels \(y^{\prime}\), with \(y^{\prime}\in[L]\equiv\{1,2,\cdots,L\}\); \(\pi_{y_{j}}\) denotes the prior probability of the label \(y_{j}\), and \(\tau\) indicates the temperature coefficient, which is set to \(\tau=1\) in the developed approach. Finally, we combine \(\mathcal{L}^{CRCL}\) and \(\mathcal{L}^{LC}\) to form the composite loss function \(\mathcal{L}\):

\[\mathcal{L}=\mathcal{L}^{CRCL}+\mathcal{L}^{LC}. \tag{12}\]
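Eqs. (11)-(12) reduce to a small amount of code. The sketch below is our own transcription; the `class_priors` tensor (label frequencies estimated from the training set) is an assumption about how \(\pi_{y}\) is obtained:

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_priors, tau=1.0):
    """Logit-adjusted cross-entropy of Eq. (11): add tau*log(pi_y) to each
    class logit before the softmax, then apply the usual cross-entropy."""
    adjusted = logits + tau * torch.log(class_priors).unsqueeze(0)
    return F.cross_entropy(adjusted, targets, reduction='sum')

# composite objective of Eq. (12), combining the two branches:
# loss = crcl_loss(z, aug_labels) + logit_adjusted_ce(logits, labels, priors)
```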
### The superiority of quadratic networks

Although some shreds of evidence suggest that the quadratic network is superior to its conventional counterpart in terms of feature extraction ability [38], the key question of _why the quadratic network works better than a conventional neural network when dealing with vibration signals_ remains a mystery. In this subsection, we analyze the learning power of the quadratic network and establish its connection with signal autocorrelation.

Table 1: The structural parameters of the quadratic ResNet backbone.

| Number | Type | Kernel | Channel | Stride | Padding | Output |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | Input | — | — | — | — | 1×2048 |
| 1 | QConv | 1×7 | 16 | 2 | 3 | 16×1024 |
| | BN | — | — | — | — | 16×1024 |
| | ReLU | — | — | — | — | 16×1024 |
| | Max-Pool | 1×3 | 16 | 2 | 1 | 16×512 |
| 2 | QResNet (2 QConv) | 1×3 | 16 | 1 | 1 | 16×512 |
| 3 | QResNet (2 QConv) | 1×3 | 32 | 1 | 1 | 32×256 |
| 4 | QResNet (2 QConv) | 1×3 | 64 | 1 | 1 | 64×128 |
| 5 | QResNet (2 QConv) | 1×3 | 128 | 1 | 1 | 128×64 |
| 6 | AvgPool | — | — | — | — | 128×1 |
| 7 | Flatten | — | — | — | — | 128 |

Figure 4: Features learned by SCL and CRCL on the CWRU dataset at IB rate = 10:1, where BA, IR, and OR denote ball defect, inner race defect, and outer race defect, respectively.

#### 3.5.1 Autocorrelation

Autocorrelation, also known as serial correlation in the discrete-time case, measures the correlation of a signal with a delayed copy of itself. Essentially, it quantifies the similarity between two observations of a signal as a function of the time delay between them. It is used to find a periodic signal submerged in noise or to identify a missing feature frequency in a signal1. Mathematically, the convolutional expression of the autocorrelation \(R(\mathbf{x}):\mathbb{R}^{1\times n}\rightarrow\mathbb{R}^{1\times n}\) is as follows,

Footnote 1: [https://en.wikipedia.org/wiki/Autocorrelation](https://en.wikipedia.org/wiki/Autocorrelation)

\[\begin{split} R(\mathbf{x})&=\mathbf{x}*\mathbf{x}\\ &=\left[\sum_{i=1}^{n}x_{i}^{2},\sum_{i=1}^{n-1}x_{i}x_{i+1},\cdots,\sum_{i=1}^{2}x_{i}x_{i+n-2},x_{1}x_{n}\right]\\ &=[r_{1},r_{2},\cdots,r_{n}].\end{split} \tag{13}\]

**Remark 2.** The convolution operation in signal processing contains both a flip and a translation operation, which is why the autocorrelation written as a convolution carries a time reversal (\(\mathbf{y}=\mathbf{x}(t)*\mathbf{x}(-t)\)); in deep learning, however, the convolution operation involves only translation. For the sake of formal uniformity, the deep learning form of the convolution operation is used here.

The advantage of autocorrelation is its excellent noise suppression. Suppose a noisy signal \(\mathbf{y}\) consists of two parts: \[\mathbf{y}=\mathbf{x}+\mathbf{s}, \tag{14}\] where \(\mathbf{x}\) denotes the raw signal and \(\mathbf{s}\) denotes the noise. The autocorrelation function of the noisy signal and its delayed copy is: \[\begin{split} r_{yy,\tau}&=\sum_{i=1}^{n-\tau}(x_{i}+s_{i})(x_{i+\tau}+s_{i+\tau})\\ &=\sum_{i=1}^{n-\tau}(x_{i}x_{i+\tau}+2s_{i}x_{i+\tau}+s_{i}s_{i+\tau})\\ &=r_{xx,\tau}+2r_{xs,\tau}+r_{ss,\tau},\end{split} \tag{15}\] where \(r_{ab,\tau}\) denotes the value of the correlation function of the signals \(\mathbf{a}\) and \(\mathbf{b}\) at delay \(\tau\). As \(\mathbf{s}\) is random noise, it does not correlate with \(\mathbf{x}\), so \(r_{xs,\tau}=0\). If \(\mathbf{s}\) itself is uncorrelated, then \(r_{ss,\tau}=0\) except at \(\tau=0\). The following equations hold: \[\left\{\begin{split} r_{yy,\tau}=r_{xx,\tau}+r_{ss,\tau},\ \tau=0\\ r_{yy,\tau}=r_{xx,\tau},\ \tau\neq 0.\end{split}\right. \tag{16}\] The above equations indicate that the autocorrelation function can extract the features of the signal from the noise.
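The noise-suppression property of Eq. (16) is easy to verify numerically. The toy NumPy example below is our own illustration, not from the paper; it correlates a heavily corrupted sinusoid with itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
x = np.sin(2 * np.pi * t / 64)                   # clean periodic "fault" component
y = x + rng.normal(0.0, 1.0, n)                  # measurement buried in noise

def autocorr(sig, max_lag):
    # biased sample autocorrelation: r_tau = (1/n) * sum_i sig_i * sig_{i+tau}
    return np.array([np.dot(sig[:n - tau], sig[tau:]) / n for tau in range(max_lag)])

r_y, r_x = autocorr(y, 256), autocorr(x, 256)
# Away from tau = 0, r_y closely follows r_x: the periodic structure survives,
# while the noise contribution stays small relative to r_x's ~0.5 amplitude.
print(np.max(np.abs(r_y[32:] - r_x[32:])))
```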
#### 3.5.2 Learnable autocorrelation and the quadratic neuron

We define learnable autocorrelation as multiplying a set of weight parameters \(\mathcal{W}=\{\mathbf{W}_{0},\mathbf{W}_{1},\cdots,\mathbf{W}_{n}\}\) with the autocorrelation. These parameters can be updated by backpropagation in a convolutional neural network:

\[\begin{split} R^{\mathcal{W}}(\mathbf{x})&=\mathcal{W}(\mathbf{x}*\mathbf{x})\\ &=\left[\sum_{i=1}^{n}w_{i}^{1}x_{i}^{2},\sum_{i=1}^{n-1}w_{i}^{2}x_{i}x_{i+1},\cdots,\sum_{i=1}^{2}w_{i}^{n-1}x_{i}x_{i+n-2},w_{1}^{n}x_{1}x_{n}\right]\\ &=[r_{0}^{\mathbf{W}_{0}},r_{1}^{\mathbf{W}_{1}},\cdots,r_{n}^{\mathbf{W}_{n}}].\end{split} \tag{17}\]

When \(\mathcal{W}\equiv\mathbf{1}\), the learnable autocorrelation degenerates to the conventional autocorrelation. On the other hand, the output of a quadratic convolutional layer can be factorized as:

\[Q(\mathbf{x})=[q_{1},q_{2},\cdots,q_{n}], \tag{18}\]

where

\[\begin{split} q_{j}&=\left(\sum_{i=1}^{k}w_{i}^{r}x_{i+j-1}+b^{r}\right)\left(\sum_{i=1}^{k}w_{i}^{g}x_{i+j-1}+b^{g}\right)+\sum_{i=1}^{k}w_{i}^{b}x_{i+j-1}^{2}+c\\ &=\sum_{i=1}^{k}\left(w_{i}^{r}w_{i}^{g}+w_{i}^{b}\right)x_{i+j-1}^{2}+\sum_{i=1}^{k-1}w_{i}^{r}w_{i+1}^{g}x_{i+j-1}x_{i+j}+\sum_{i=1}^{k-2}w_{i}^{r}w_{i+2}^{g}x_{i+j-1}x_{i+j+1}+\cdots+w_{1}^{r}w_{k}^{g}x_{j}x_{k+j-1}\\ &\quad+b^{r}\sum_{i=1}^{k}w_{i}^{g}x_{i+j}+b^{g}\sum_{i=1}^{k}w_{i}^{r}x_{i+j}+C.\end{split} \tag{19}\]

It is obvious that the calculation of a quadratic convolutional neuron contains the learnable autocorrelation operation. Combining Eq. (17) and Eq. (19), we have

\[q_{j}=\underbrace{r_{0}^{\mathbf{W}_{0}}+r_{1}^{\mathbf{W}_{1}}+\cdots+r_{k}^{\mathbf{W}_{k}}}_{\text{Learnable Autocorrelation}}+\underbrace{b^{r}\sum_{i=1}^{k}w_{i}^{g}x_{i+j}+b^{g}\sum_{i=1}^{k}w_{i}^{r}x_{i+j}+C}_{\text{Convolutional Operation}}. \tag{20}\]

The quadratic convolutional operation can thus be decomposed into two parts: a sum of learnable autocorrelations and a conventional convolutional operation. As shown in Figure 5, the first part applies the learnable autocorrelation to each subsequence and adds it to the final result, while the second part convolves the input signal with learnable filters, exactly as in a conventional convolutional neural network. With the above deduction, a quadratic network offers advantages over conventional neural networks when processing signals: the autocorrelation operation within a quadratic neuron aids in extracting valuable signals from random noise, a capability that is missing in conventional neural networks. By leveraging this property, our methodology enhances the power of feature extraction from the input data and thus leads to improved performance in bearing fault diagnosis.
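The algebra behind Eqs. (19)-(20) can be checked numerically. The snippet below is our own sanity check: it evaluates one quadratic-neuron output both directly and through the autocorrelation-plus-convolution decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5                                   # kernel size
x = rng.normal(size=k)                  # one receptive-field window
wr, wg, wb = rng.normal(size=(3, k))    # the three kernels of Eq. (8)
br, bg, c = rng.normal(size=3)

# direct evaluation of one output q_j (first line of Eq. (19))
q = (x @ wr + br) * (x @ wg + bg) + (x ** 2) @ wb + c

# decomposition of Eq. (20): weighted autocorrelation terms (all lagged
# products x_i * x_m, weighted by w^r_i w^g_m, plus the w^b squared terms)
auto = sum(wr[i] * wg[m] * x[i] * x[m] for i in range(k) for m in range(k)) \
       + (x ** 2) @ wb
# remaining linear convolution part, with C absorbing b^r b^g + c
linear = br * (x @ wg) + bg * (x @ wr) + br * bg + c
print(np.isclose(q, auto + linear))     # True
```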
## 4 Computational experiments

In this section, we consider two bearing datasets, the Case Western Reserve University (CWRU) rolling bearing dataset2 and our own bearing dataset, to validate the performance of the proposed method. Each of the two datasets consists of ten categories, with nine faulty bearing conditions and one healthy bearing. In the computational experiments, we first compare CCQNet with other state-of-the-art methods under a long-tailed distribution. Next, we analyze the feature extraction ability of the proposed model through visualization of interpretable feature maps and the learnable autocorrelation. Finally, we conduct several ablation studies to verify the properties of our method.

Footnote 2: [https://engineering.case.edu/bearingdatacenter/download-data-file](https://engineering.case.edu/bearingdatacenter/download-data-file)

### Dataset description

#### Case Western Reserve University bearing faults dataset (CWRU)

This extensively utilized dataset is collected by the Bearing Data Center at Case Western Reserve University (CWRU). The dataset is generated by applying electro-discharge machining (EDM) to motor deep-groove ball bearings to artificially induce faults. Specifically, faults with diameters of 0.007 inches, 0.014 inches, and 0.021 inches are introduced into the inner race, the outer race, and the ball of the bearings. Subsequently, the faulty bearings are reinstalled in the test motors, and vibration data are recorded at both the drive end (DE) and the fan end (FE). Throughout the experiments, the motor loads range from 0 to 3 HP (horsepower), while the motor speeds exhibit slight variations within the range of 1720 to 1797 RPM. The vibration signals are acquired at two sampling rates: 12 kHz and 48 kHz. In the experiments, we consider the vibration data collected at the drive end (DE) with a 0 HP load and a sampling rate of 12 kHz.

#### Our bearing dataset

In addition to CWRU, we also collect our own dataset employing angular contact ball bearings (HC7003), which are specifically designed for high-speed rotating machinery. This experiment is conducted at the MIIT Key Laboratory of Aerospace Bearing Technology and Equipment, Harbin Institute of Technology. As depicted in Figure 6, an accelerometer is directly attached to the bearings to capture the vibration signals they generate. Consistent with the CWRU dataset, we induced faults at the outer race (OR), inner race (IR), and ball (BA), encompassing three levels of severity: slight, moderate, and severe. Table 2 provides an overview of the fault types and the corresponding sizes in our dataset. During the test, the vibration signals were acquired at a constant motor speed of 1800 r/min using an NI USB-6002 device with a sampling rate of 12 kHz. We collected 47 seconds of bearing vibration data, totaling 561,152 data points per fault category. Unlike the CWRU dataset, the bearing faults in our dataset consist of cracks with uniform sizes but varying depths. As a result, the vibration signals exhibit greater similarity among different fault types, thereby increasing the difficulty for the diagnostic model to classify them accurately.

Table 2: Ten classes in our dataset. OR and IR denote that the faults appear in the outer race and inner race, respectively. Fault size denotes length×width×depth.

| Class | Fault mode | Fault size (mm) |
| --- | --- | --- |
| C1 | Healthy bearing | 0×0×0 |
| C2 | Ball cracking (minor) | 1×0.3×0.1 |
| C3 | Ball cracking (moderate) | 1×0.3×0.2 |
| C4 | Ball cracking (severe) | 1×0.3×0.3 |
| C5 | IR cracking (minor) | 2×0.3×0.1 |
| C6 | IR cracking (moderate) | 2×0.3×0.2 |
| C7 | IR cracking (severe) | 2×0.3×0.3 |
| C8 | OR cracking (minor) | 2×0.3×0.1 |
| C9 | OR cracking (moderate) | 2×0.3×0.2 |
| C10 | OR cracking (severe) | 2×0.3×0.3 |

Figure 5: The operation of the quadratic neuron.
### Experimental setup

We conduct bearing fault diagnosis experiments by constructing long-tailed datasets. Here, we consider different imbalance rates (IB rate) to investigate how the different models respond to different levels of imbalance in the dataset. The IB rate is defined as follows:

\[\text{IB Rate}=\frac{N_{normal}}{N_{fault}}, \tag{21}\]

where \(N_{normal}\) and \(N_{fault}\) represent the number of normal samples and the number of faulty samples in each class, respectively. Next, because each class of measured signals consists of only one long sequence, we randomly extract subsequences of 2048 data points, each serving as one input \(\mathbf{x}_{i}\). Afterwards, the entire dataset is divided into three parts: a training set, a validation set, and a testing set. Specifically, in the training set, the number of normal samples is set to 500, while the number of samples in each fault category is determined by the IB rate (5:1, 10:1, 20:1, 50:1), i.e., 100, 50, 25, and 10, respectively. For all categories in the validation and test sets, we extracted 250 samples with balanced classes. Identical standardization is applied to all the samples.

#### 4.2.1 Baseline methods

We compare the developed method3 with several well-known convolutional neural networks commonly used in the field of bearing fault diagnosis, including conventional CNNs, contrastive learning-based methods, and resampling-based methods. The baseline methods are described as follows:

Footnote 3: Our code is available at [https://github.com/yuueien120/CCQNet](https://github.com/yuueien120/CCQNet) for readers’ verification.

1. **WDCNN** is a widely used method for bearing fault diagnosis. It utilizes a wide convolutional kernel in the first convolutional layer to extract features and suppress noise [19].
2. **SupCon** is a supervised contrastive learning method. It leverages the concept of contrastive learning to enhance the discriminative power of the learned representations [46].
3. **SelfCon** is a self-supervised contrastive learning method. It applies contrastive learning in a self-supervised manner to learn effective representations [53].
4. **DNCNN** is a convolutional neural network that incorporates normalization techniques for both the convolutional layers and the neuron weights. This normalization helps improve the stability and generalization of the model [54].
5. **CA-SupCon** combines an oversampling technique (a class-aware sampler) with supervised contrastive learning. This method aims to address class imbalance issues in the dataset and enhance the discriminative ability of the learned representations [32].
6. **Oversample+reweight** combines oversampling techniques with neuronal normalization for the classifier. It also adopts a reweighted loss function to tackle class imbalance problems in bearing fault diagnosis [41].

#### 4.2.2 Training strategies

We employ the stochastic gradient descent (SGD) optimizer to optimize all networks. The batch size is set to 64, and the number of training epochs is set to 200 for our method. Hyperparameters of the other methods are set according to the values reported in the corresponding articles. In addition, we train CCQNet using the ReLinear algorithm [48], which facilitates the fast convergence of quadratic networks. At the initial stage, the parameters in a quadratic neuron are initialized as \(\mathbf{w}^{g}=\mathbf{0},\mathbf{w}^{b}=\mathbf{0},c=0\) and \(b^{g}=1\), while \(\mathbf{w}^{r}\) and \(b^{r}\) follow random initialization.
At the training stage, the linear terms and the quadratic terms are trained with different learning rates, \(\gamma_{l}\) and \(\gamma_{q}\), respectively. We set a scale factor \(\alpha\) to control the two learning rates, that is,

\[\gamma_{q}=\alpha\cdot\gamma_{l},\ \ 0<\alpha<1. \tag{22}\]

Finally, the learning rate is adjusted using the cosine annealing strategy.

Figure 6: Our bearing faults test rig and bearing failures of the inner race and outer race.

#### 4.2.3 Evaluation metrics

The classification performance of the model is assessed using accuracy (ACC), the F1 score, and the MCC (Matthews correlation coefficient). These metrics are chosen as they are insensitive to category imbalance. The definitions of these metrics are as follows:

\[\text{ACC}=\frac{1}{C}\sum_{i=1}^{C}\frac{\text{TP}_{i}}{\text{TP}_{i}+\text{FP}_{i}}, \tag{23}\]

\[\text{F1}=\frac{1}{C}\sum_{i=1}^{C}\frac{2\text{TP}_{i}}{2\text{TP}_{i}+\text{FP}_{i}+\text{FN}_{i}}, \tag{24}\]

\[\text{MCC}=\frac{c\cdot s-\sum_{i=1}^{C}p_{i}\cdot t_{i}}{\sqrt{\left(s^{2}-\sum_{i=1}^{C}p_{i}^{2}\right)\left(s^{2}-\sum_{i=1}^{C}t_{i}^{2}\right)}}, \tag{25}\]

where \(\text{TP}_{i}\), \(\text{FP}_{i}\), and \(\text{FN}_{i}\) denote the numbers of true positives, false positives, and false negatives for class \(i\), respectively; \(C\) denotes the number of classes; and \(t_{i}\), \(p_{i}\), \(c\) and \(s\) are the number of times class \(i\) truly occurred, the number of times class \(i\) was predicted, the total number of samples correctly predicted, and the total number of samples over the \(C\) classes, respectively.
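A direct NumPy transcription of these definitions from a confusion matrix could look as follows (our own sketch; classes with no predicted samples would additionally need a zero-division guard):

```python
import numpy as np

def metrics_from_confusion(conf):
    """ACC, F1 and MCC of Eqs. (23)-(25) from a CxC confusion matrix
    whose entry conf[t, p] counts true class t predicted as class p."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp          # predicted as class i but actually not
    fn = conf.sum(axis=1) - tp          # actually class i but missed
    acc = np.mean(tp / (tp + fp))                         # Eq. (23)
    f1 = np.mean(2 * tp / (2 * tp + fp + fn))             # Eq. (24)
    t, p = conf.sum(axis=1), conf.sum(axis=0)             # true / predicted counts
    c, s = tp.sum(), conf.sum()
    mcc = (c * s - t @ p) / np.sqrt((s**2 - p @ p) * (s**2 - t @ t))  # Eq. (25)
    return acc, f1, mcc
```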
### Long-tail fault diagnosis results

Tables 3 and 4 report the performance of the considered models on the CWRU dataset and our bearing dataset. Firstly, it is clear that as the IB rate rises, ACC, F1, and MCC all drop. Even so, our method still outperforms the other SOTA methods across all the metrics. Secondly, when the IB rate is small (5:1, 10:1), the advantage of our method is not obvious, e.g., WDCNN and CCQNet both reach 100% in F1 and MCC. As the IB Rate rises, the performance of all baseline methods drops significantly, and some even fail to classify. For instance, at an IB rate of 50:1, oversample+reweight only reaches 19% F1 on the CWRU dataset. However, CCQNet maintains over 72% F1 and MCC on the two datasets at the 50:1 IB rate, while the second-best method, WDCNN, only achieves 67.18% and 6.40% F1 on the two datasets, respectively. The results indicate that our proposed method excels at handling extremely imbalanced data.

Next, we focus on evaluating the performance of the four best-performing methods at an imbalance rate of 50:1. To this end, we conduct a detailed analysis of their classification capability for each fault mode by examining the confusion matrices. As shown in Figure 7, all the methods possess basic fault detection capabilities, while their ability to accurately classify the different fault modes is affected by the imbalance level. It is noteworthy that all methods achieve high accuracy when classifying the healthy category, which can be attributed to the sufficient number of healthy samples available to train the model. However, on the CWRU dataset and our own bearing dataset, the three baseline methods exhibit poor performance in classifying ball faults (BA1, BA2, BA3) and inner race faults (IR2, IR3), respectively. In contrast, our proposed CCQNet demonstrates much better classification accuracy for these fault modes. Additionally, CCQNet exhibits superior overall fault classification performance compared to the other three methods. These observations suggest that CCQNet is better equipped to handle extreme class imbalance conditions.

### Classification visualization

#### 4.4.1 Visualization by t-SNE

We use t-distributed stochastic neighbor embedding (t-SNE) to visualize the learned feature representations in a 2D space. For the sake of comparison, we only select the four best-performing methods. Figure 8 shows the 2D feature maps on each test set at a 100:1 IB Rate. Firstly, regarding the CWRU dataset, the ball faults (BA1, BA2, BA3) of the three compared methods are insufficiently separated, which can lead to misdiagnosis. Secondly, the same phenomenon appears on our dataset: the clusters of the three compared methods all show overlap. In contrast, the clusters of different classes are more evenly distributed for CCQNet, which better separates inter-class samples and better aggregates intra-class samples. These results also suggest that CCQNet has better feature extraction ability under extreme class imbalance conditions.

#### 4.4.2 Feature map visualization

Here, we provide insights into the superior feature extraction capabilities of quadratic networks compared to their conventional counterparts. Figure 9 showcases the feature maps at each layer of our quadratic residual network and a corresponding ResNet backbone with the same structure. Firstly, it is evident that the input signal exhibits distinct local features of high amplitude, resulting from fault-induced vibrations. Notably, both the conventional and quadratic networks preserve these local features at each layer. However, as the network layers become deeper, the quadratic network progressively places more emphasis on the local features with high amplitude values in the raw signal, surpassing the conventional network. In particular, from the second to the fifth layer of visualized features, the quadratic network gives heightened attention to local features associated with fault-related signals. This finding strongly indicates that quadratic networks have a superior feature representation to support accurate fault diagnosis.

#### 4.4.3 Learnable autocorrelation

Consistent with our earlier analysis, the effectiveness of quadratic networks lies in their ability to perform learnable autocorrelation, which enables the extraction of significant signal features from noise. To investigate the practical anti-noise robustness of quadratic networks, we conduct experiments by adding Gaussian noise with a variance of 10 to the signals. Specifically, we visualize the autocorrelation term with all weights set to 1 and the learnable autocorrelation term (Eq. (20)) in the first layer of a quadratic network. The results, shown in Figure 10, indicate that while the plain autocorrelation operation can effectively extract fault-related signals from the noise, it also amplifies certain parts of the extraneous noise. This limitation arises because autocorrelation is primarily designed to enhance transient impulses in the signal. In contrast, the learnable autocorrelation term demonstrates a remarkable ability to extract fault-related signals. The weights learned by the neural network act as adaptive filters, which further enhances the feature extraction capability.
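The contrast above can be illustrated with a rough numpy sketch. Eq. (20) itself is defined earlier in the paper; here a hand-picked band-pass kernel merely stands in for the trained weights, so the snippet is a qualitative illustration rather than the actual learnable term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
signal = np.zeros(n)
signal[::256] = 1.0                           # periodic fault-induced impulses
x = signal + rng.normal(0.0, np.sqrt(10), n)  # Gaussian noise with variance 10

# Plain autocorrelation (all weights equal to 1): it enhances the transient
# impulses but also amplifies part of the broadband noise.
lags = np.arange(512)
r_plain = np.array([np.dot(x[: n - k], x[k:]) for k in lags]) / n

# Schematic "learnable" variant: the signal is first passed through a filter
# whose taps would be learned by the network; the kernel below is a stand-in
# acting as an adaptive filter that suppresses out-of-band noise.
w = np.hanning(64) * np.cos(2 * np.pi * 0.1 * np.arange(64))
xw = np.convolve(x, w, mode="same")
r_learn = np.array([np.dot(xw[: n - k], xw[k:]) for k in lags]) / n
```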
These findings emphasize the crucial role of learnable autocorrelation terms in empowering quadratic networks for effective signal feature extraction.

### Analysis experiments

#### 4.5.1 Hyperparameter sensitivity

It is crucial to consider the hyperparameters of the quadratic network, particularly the scale factor \(\alpha\). In this study, we investigate the effect of \(\alpha\) on the model's performance and conduct experiments to evaluate its sensitivity. The results are reported in Table 5. When the IB rate is set to 50:1, we observe that using an \(\alpha\) value below \(10^{-3}\) leads to improved results. However, when \(\alpha\) exceeds \(10^{-3}\), the model parameters are unable to converge fully, resulting in a decrease in performance. Therefore, for our dataset, the optimal \(\alpha\) value is determined to be 0.00001, which achieves an accuracy (ACC) of 75.25%. For the CWRU dataset, the optimal \(\alpha\) value is 0.00005, resulting in an ACC of 80.12%. Note that the exact value of \(\alpha\) may vary across datasets; therefore, we recommend conducting thorough experiments to determine the optimal \(\alpha\).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline IB rate & \multicolumn{3}{c}{5:1} & \multicolumn{3}{c}{10:1} \\ \hline & ACC & F1 & MCC & ACC & F1 & MCC \\ \hline WDCNN & 87.74\(\pm\)0.34 & 87.46\(\pm\)0.31 & 86.58\(\pm\)0.37 & 66.31\(\pm\)3.39 & 65.02\(\pm\)3.32 & 63.23\(\pm\)3.63 \\ SupCon & 93.36\(\pm\)0.35 & 92.84\(\pm\)0.40 & 92.83\(\pm\)0.39 & 83.05\(\pm\)3.45 & 82.00\(\pm\)3.71 & 81.69\(\pm\)3.69 \\ SelfCon & 93.84\(\pm\)0.32 & 93.66\(\pm\)0.34 & 93.25\(\pm\)0.34 & 57.81\(\pm\)1.59 & 50.74\(\pm\)2.33 & 55.55\(\pm\)1.62 \\ DNCNN & 85.80\(\pm\)0.56 & 85.34\(\pm\)0.58 & 84.37\(\pm\)0.61 & 78.18\(\pm\)0.48 & 77.13\(\pm\)0.58 & 76.05\(\pm\)0.52 \\ CA-Supcon & 93.04\(\pm\)0.32 & 92.67\(\pm\)0.37 & 92.44\(\pm\)0.35 & 88.81\(\pm\)0.73 & 88.37\(\pm\)0.78 & 87.73\(\pm\)0.79 \\ oversample+reweight & 86.47\(\pm\)0.43 & 84.75\(\pm\)0.48 & 85.73\(\pm\)0.45 & 88.36\(\pm\)1.29 & 87.26\(\pm\)1.44 & 87.38\(\pm\)1.40 \\ CCQNet & **96.69\(\pm\)0.19** & **96.64\(\pm\)0.20** & **96.37\(\pm\)0.21** & **93.27\(\pm\)0.34** & **93.03\(\pm\)0.35** & **92.73\(\pm\)0.37** \\ \hline IB rate & \multicolumn{3}{c}{20:1} & \multicolumn{3}{c}{50:1} \\ \hline WDCNN & 40.96\(\pm\)1.98 & 38.04\(\pm\)2.31 & 36.15\(\pm\)2.25 & 12.65\(\pm\)0.24 & 6.40\(\pm\)0.37 & 4.99\(\pm\)0.40 \\ SupCon & 38.83\(\pm\)2.62 & 32.21\(\pm\)2.64 & 34.64\(\pm\)2.83 & 16.88\(\pm\)0.47 & 10.36\(\pm\)0.31 & 10.45\(\pm\)0.69 \\ SelfCon & 34.94\(\pm\)3.10 & 25.65\(\pm\)2.58 & 31.27\(\pm\)3.87 & 11.00\(\pm\)0.85 & 6.88\(\pm\)0.53 & 1.24\(\pm\)1.06 \\ DNCNN & 41.45\(\pm\)0.82 & 39.65\(\pm\)0.76 & 35.59\(\pm\)0.90 & 14.99\(\pm\)0.87 & 10.76\(\pm\)1.16 & 7.25\(\pm\)1.16 \\ CA-Supcon & 50.33\(\pm\)2.58 & 47.12\(\pm\)2.99 & 45.99\(\pm\)2.80 & 16.04\(\pm\)1.43 & 12.66\(\pm\)2.01 & 7.76\(\pm\)1.70 \\ oversample+reweight & 59.50\(\pm\)1.06 & 54.39\(\pm\)1.66 & 56.94\(\pm\)1.14 & 10.32\(\pm\)0.30 & 3.40\(\pm\)0.37 & 0.41\(\pm\)0.39 \\ CCQNet & **92.39\(\pm\)0.34** & **91.16\(\pm\)0.41** & **91.83\(\pm\)0.33** & **75.25\(\pm\)1.11** & **72.64\(\pm\)1.16** & **73.31\(\pm\)1.22** \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison (%) on our bearing dataset, where the bold-faced number indicates the best performance.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline IB rate & \multicolumn{3}{c}{5:1} & \multicolumn{3}{c}{10:1} \\ \hline & ACC & F1 & MCC & ACC & F1 & MCC \\ \hline WDCNN & **100.00\(\pm\)0.00** & **100.00\(\pm\)0.00** & **100.00\(\pm\)0.00** & 99.85\(\pm\)0.06 & 99.85\(\pm\)0.06 & 99.83\(\pm\)0.07 \\ SupCon & 99.61\(\pm\)0.09 & 95.09\(\pm\)0.09 & 99.57\(\pm\)0.10 & 93.36\(\pm\)0.34 & 92.39\(\pm\)0.47 & 92.95\(\pm\)0.34 \\ SelfCon & 95.24\(\pm\)0.40 & 95.09\(\pm\)0.44 & 94.82\(\pm\)0.43 & 86.30\(\pm\)0.54 & 84.64\(\pm\)0.66 & 85.51\(\pm\)0.56 \\ DNCNN & 94.43\(\pm\)0.52 & 94.26\(\pm\)0.56 & 94.03\(\pm\)0.54 & 81.95\(\pm\)0.73 & 80.59\(\pm\)0.91 & 80.48\(\pm\)0.76 \\ CA-Supcon & 99.82\(\pm\)0.08 & 99.82\(\pm\)0.08 & 99.80\(\pm\)0.09 & 96.42\(\pm\)0.30 & 96.37\(\pm\)0.31 & 96.09\(\pm\)0.32 \\ oversample+reweight & 91.67\(\pm\)0.13 & 89.80\(\pm\)0.22 & 91.41\(\pm\)0.12 & 79.96\(\pm\)0.14 & 73.86\(\pm\)0.22 & 79.25\(\pm\)0.14 \\ CCQNet & **100.00\(\pm\)0.00** & **100.00\(\pm\)0.00** & **100.00\(\pm\)0.00** & **100.00\(\pm\)0.00** & **100.00\(\pm\)0.00** & **100.00\(\pm\)0.00** \\ \hline IB rate & \multicolumn{3}{c}{20:1} & \multicolumn{3}{c}{50:1} \\ \hline WDCNN & 93.66\(\pm\)0.46 & 93.26\(\pm\)0.52 & 93.14\(\pm\)0.49 & 69.3\(\pm\)0.52 & 67.18\(\pm\)0.65 & 66.81\(\pm\)0.55 \\ SupCon & 78.72\(\pm\)0.25 & 72.88\(\pm\)0.19 & 78.03\(\pm\)0.30 & 59.92\(\pm\)0.21 & 50.74\(\pm\)0.30 & 59.78\(\pm\)0.25 \\ SelfCon & 75.23\(\pm\)0.26 & 73.23\(\pm\)0.26 & 73.33\(\pm\)0.27 & 41.63\(\pm\)0.32 & 36.55\(\pm\)0.31 & 37.13\(\pm\)0.37 \\ DNCNN & 71.19\(\pm\)0.41 & 70.85\(\pm\)0.39 & 68.35\(\pm\)0.47 & 61.24\(\pm\)0.83 & 59.61\(\pm\)0.94 & 57.79\(\pm\)0.90 \\ CA-Supcon & 82.21\(\pm\)0.39 & 81.77\(\pm\)0.42 & 80.56\(\pm\)0.42 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison (%) on the CWRU dataset, where the bold-faced number indicates the best performance.

**Quadratic network backbones.** Next, we examine the superiority of quadratic networks in bearing fault diagnosis by comparing backbones in Table 6. It is evident that QResNet achieves an accuracy of 75.25% on our dataset and 80.12% on the CWRU dataset, compared to 71.15% and 69.33% achieved by ResNet34, respectively. Additionally, the ResNet model with the same structure as QResNet exhibits the worst performance, with an accuracy that is 22.94% lower on our dataset and 13.83% lower on the CWRU dataset compared to QResNet. This further highlights the advantages of using a quadratic network in achieving better classification performance and higher computational efficiency.

**The form of quadratic neurons.** To investigate the contributions of the different parts of a quadratic neuron, we construct two degraded versions of the standard quadratic neuron, as shown in Table 7. In the first one, we remove the power term, and in the second one, we remove one of its inner-product terms. The computational results clearly suggest that the standard quadratic neuron outperforms the degraded versions. In particular, the accuracy of the standard neuron is roughly 10% higher than that of its degraded counterparts on our dataset. Secondly, the performance of the two degraded neurons varies across the two datasets. The degraded neuron lacking an inner-product term shows only a slight decrease in accuracy (79.39%) on the CWRU dataset but experiences a significant drop on our dataset (62.16%).
On the other hand, the degraded neuron without the power term demonstrates a remarkable decrease in performance on both datasets, achieving an accuracy of only 75.30% and 62.38% on the CWRU dataset and our dataset, respectively. The results indicate that the power term of a quadratic neuron plays an important role in the network.

**Quadratic network structures.** We perform experiments at an IB Rate of 50:1 to assess the effect of the number of layers and the kernel size of the quadratic network on model performance. The results are summarized in Tables 8 and 9. With regard to the former, it can be observed that increasing the number of layers in the quadratic network leads to improved performance. When the number of model layers is 5, the network achieves an accuracy of 75.25% on our dataset and an accuracy of 80.12% on the CWRU dataset. As for the latter, the optimal kernel size is 7, which yields the best performance on both datasets. Based on these results, it can be concluded that a quadratic network with five layers and a convolutional kernel size of 7 is the optimal structure for fault diagnosis.

\begin{table} \begin{tabular}{l|c c c c} \hline Method & \#Params & \#Flops & Our bearing dataset ACC(\%) & CWRU ACC(\%) \\ \hline ResNet & 200K & 22.6M & 52.31 & 66.29 \\ ResNet34 & 7200K & 719M & 71.15 & 69.33 \\ QResNet & **700K** & **66M** & **75.25** & **80.12** \\ \hline \end{tabular} \end{table} Table 6: The accuracy (%) of quadratic ResNet and conventional ResNet backbones on the CWRU and our dataset at 50:1 IB rate.

\begin{table} \begin{tabular}{c|c|c} \hline Quadratic Function & CWRU & Our dataset \\ \hline \((\mathbf{x}*\mathbf{w}_{r}+b_{r})\odot(\mathbf{x}*\mathbf{w}_{g}+b_{g})\) & 75.30 & 62.38 \\ \((\mathbf{x}*\mathbf{w}_{r}+b_{r})+(\mathbf{x}\odot\mathbf{x})*\mathbf{w}_{b}+c\) & 79.39 & 62.16 \\ \((\mathbf{x}*\mathbf{w}_{r}+b_{r})\odot(\mathbf{x}*\mathbf{w}_{g}+b_{g})+(\mathbf{x}\odot\mathbf{x})*\mathbf{w}_{b}+c\) & **80.12** & **75.25** \\ \hline \end{tabular} \end{table} Table 7: The accuracy (%) of different quadratic functions on the CWRU and our dataset at 50:1 IB rate.

Figure 9: Comparison of the features of each layer of the first-order and quadratic networks.

Figure 10: Comparison of weighted autocorrelation operations with a weight of 1 and learnable autocorrelation operations with trained weights in the time and time-frequency domains.

## 5 Conclusion

In this paper, we have introduced a novel framework called CCQNet, based on quadratic networks, for addressing long-tailed distribution scenarios in bearing fault diagnosis. CCQNet incorporates a quadratic residual network (QResNet) as its backbone for effective feature extraction and employs two branches with a class-weighted contrastive loss and a logit-adjusted cross-entropy loss to ensure equal attention to all categories. We have also provided insights into quadratic networks, highlighting their ability to focus on local fault signal features through autocorrelation operations. Experimental results have demonstrated the outstanding performance of CCQNet in highly imbalanced data scenarios. Our work promotes the application of contrastive learning for long-tailed distribution-based bearing fault diagnosis and encourages further exploration of quadratic neural networks.
As CCQNet serves as a versatile framework for long-tailed data classification, future research should explore its potential in a broader range of application scenarios, for instance, by extending it to few-shot or zero-shot learning, which are also highly desirable capabilities in fault diagnosis.

## Acknowledgement

We would like to express our sincere gratitude to Prof. Xiaoli Zhao and Mr. Hongyuan Zhang from the MIIT Key Laboratory of Aerospace Bearing Technology and Equipment, Harbin Institute of Technology, for their selfless support and valuable assistance in conducting the bearing experiments for this research. Their contributions have been invaluable and are greatly appreciated.
2309.15559
Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution
Self-interpreting neural networks have garnered significant interest in research. Existing works in this domain often (1) lack a solid theoretical foundation ensuring genuine interpretability or (2) compromise model expressiveness. In response, we formulate a generic Additive Self-Attribution (ASA) framework. Observing the absence of the Shapley value in Additive Self-Attribution, we propose the Shapley Additive Self-Attributing Neural Network (SASANet), with theoretical guarantees that its self-attribution values equal the Shapley values of its output. Specifically, SASANet uses a marginal contribution-based sequential schema and internal distillation-based training strategies to model meaningful outputs for any number of features, resulting in an un-approximated, meaningful value function. Our experimental results indicate SASANet surpasses existing self-attributing models in performance and rivals black-box models. Moreover, SASANet is shown to be more precise and efficient than post-hoc methods in interpreting its own predictions.
Ying Sun, Hengshu Zhu, Hui Xiong
2023-09-27T10:31:48Z
http://arxiv.org/abs/2309.15559v1
# Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution

###### Abstract

Self-interpreting neural networks have garnered significant interest in research. Existing works in this domain often (1) lack a solid theoretical foundation ensuring genuine interpretability or (2) compromise model expressiveness. In response, we formulate a generic Additive Self-Attribution (ASA) framework. Observing the absence of the Shapley value in Additive Self-Attribution, we propose the Shapley Additive Self-Attributing Neural Network (SASANet), with theoretical guarantees that its self-attribution values equal the Shapley values of its output. Specifically, SASANet uses a marginal contribution-based sequential schema and internal distillation-based training strategies to model meaningful outputs for any number of features, resulting in an un-approximated, meaningful value function. Our experimental results indicate SASANet surpasses existing self-attributing models in performance and rivals black-box models. Moreover, SASANet is shown to be more precise and efficient than post-hoc methods in interpreting its own predictions.

## 1 Introduction

While neural networks excel at fitting complex real-world problems due to their vast hypothesis space, their lack of interpretability poses challenges for real-world decision-making. Although post-hoc interpretation algorithms (Lundberg & Lee (2017); Shrikumar et al. (2017)) offer extrinsic, model-agnostic interpretation, the opaque intrinsic modeling procedure unavoidably leads to inaccurate interpretation (Laugel et al. (2019); Frye et al. (2020)). Thus, there is a growing need for self-interpreting neural structures that intrinsically and faithfully convey their prediction logic.

Research on self-interpreting neural networks has garnered notable interest, with the goal of inherently elucidating a model's predictive logic. Various approaches are driven by diverse scenarios. For example, Alvarez Melis & Jaakkola (2018) linearly correlates outputs with features and consistent coefficients, while Agarwal et al. (2020) employs multiple networks each focusing on a single feature, and Wang & Wang (2021) classifies by comparing inputs with transformation-equivariant prototypes. However, many existing models, despite intuitive designs, might not possess a solid theoretical foundation to ensure genuine interpretability. The effectiveness of attention-like weights, for instance, is debated (Serrano & Smith (2019); Wiegreffe & Pinter (2019)). Additionally, striving for higher interpretability often means resorting to simpler structures or intricate regularization, potentially compromising prediction accuracy.

This paper aims to achieve theoretically guaranteed, faithful self-interpretation while retaining expressiveness in prediction. To achieve this goal, we formulate a generic Additive Self-Attribution (ASA) framework. ASA offers an intuitive understanding, where the contributions of different observations are linearly combined for the final prediction. Notably, even with varying interpretative angles, many existing methods implicitly employ such a structure. Thus, we utilize ASA to encapsulate and distinguish their interpretations, clarifying when certain methods are favored over others. Upon examining studies through the ASA lens, we identified an oversight regarding the Shapley value. Widely recognized for post-hoc attribution (Lundberg and Lee (2017); Bento et al. (2021)) with robust theoretical backing from coalition game theory (Shapley et al.
(1953)), its potential has been underutilized. Notably, while a recent work called the Shapley Explanation Network (SHAPNet) (Wang et al. (2021)) achieves layer-wise Shapley attribution, it lacks model-wise feature attribution. Addressing this gap, we introduce the Shapley Additive Self-Attributing Neural Network (SASANet) under the ASA framework, depicted in Figure 1. In particular, we directly define the value function as the model output given an arbitrary number of input features. With an intermediate sequential modeling and distillation-based learning strategy, SASANet can be theoretically proven to provide additive self-attribution converging to the Shapley value of its own output. Our evaluations of SASANet on various datasets show it surpasses current self-attributing models and reaches performance comparable to black-box models. Furthermore, compared to post-hoc approaches, SASANet's self-attribution offers a more precise and streamlined interpretation of its predictions.

Figure 1: A schematic of the SASANet procedure. (a) Each sample is viewed as a set of features. The intermediate sequential module models a permutation-variant intermediate output as the cumulative contributions of these features. (b) The Shapley value module trains a self-attributing network via internal Shapley value distillation, in which the attribution values are proven to converge to the permutation-invariant final output's Shapley value.

## 2 Preliminaries

### Additive Self-Attributing Models

Oriented toward various tasks, self-interpreting networks often possess distinct designs, obscuring their interrelations and defining characteristics. Observing a commonly shared principle of linear feature-contribution aggregation across numerous studies, we introduce a unified framework:

**Definition 2.1** (Additive Self-Attributing Model): _Additive Self-Attributing Models output the sum of intrinsic feature attribution values that hold desired properties, formulated as_ \[f(x;\theta,\phi_{0})=\phi_{0}+\sum_{i\in\mathcal{N}}\phi(x;\theta)_{i},\qquad \text{s.t.}\quad\mathcal{C}(\phi,x), \tag{1}\] _where \(x=[x_{1},\cdots,x_{N}]\in\mathbb{R}^{N}\) denotes a sample with \(N\) features, \(\mathcal{N}=\{1,2,\cdots,N\}\) denotes the feature indices, \(\phi(x;\theta)_{i}\) is the \(i\)-th feature's attribution value, \(\phi_{0}\) is a sample-independent bias, and \(\mathcal{C}(\phi,x)\) denotes constraints on the attribution values._

As there are abundant ways to build additive models, the attribution terms are required to satisfy specific constraints to ensure physical meaningfulness, often achieved through regularizers or structural inductive bias. Neglecting various pre- and post-transformations for adapting to specific inputs and outputs, many existing structures providing intrinsic feature attribution match the ASA framework. Below, we introduce several representative works.

**Self-Explaining Neural Network (SENN)** (Alvarez Melis & Jaakkola (2018)). SENN models generalized coefficients for different concepts (features) and constrains the coefficients to be locally bounded by the feature transformation function. Using \(x\) to denote the explained feature unit, SENN's attribution can be formulated as \(f(x)=\sum_{i=1}^{N}a(x;\theta)_{i}x_{i}+b\), constraining \(a(x;\theta)_{i}\) to be approximately the gradient of \(f(x)\) with respect to each \(x_{i}\). By setting \(\phi(x;\theta):=[a(x;\theta)_{1}x_{1},\cdots,a(x;\theta)_{N}x_{N}]\in\mathbb{R}^{N}\), SENN aims at the form of Eq.
1 with a constraint \(\nabla_{x_{i}}f(x;\theta,\phi_{0})\approx\phi(x;\theta)_{i}/x_{i}\) for \(i\in\mathcal{N}\). To realize this constraint, SENN adds a regularizer on the distance between the coefficients and the gradients.

**Neural Additive Model (NAM)** (Agarwal et al. (2020)). NAM models each feature's independent contribution and sums them for the prediction as \(f(x)=\sum_{i=1}^{N}h_{i}(x_{i};\theta_{i})+b\). Regarding \(h_{i}(\cdot;\theta_{i}):\mathbb{R}\rightarrow\mathbb{R}\) as a function \(\phi(\cdot;\theta)_{i}:\mathbb{R}^{N}\rightarrow\mathbb{R}\) constrained to depend only on \(x_{i}\), NAM targets the form of Eq. 1 with the constraint \(\forall x,x^{\prime}\quad x_{i}=x^{\prime}_{i}\rightarrow\phi(x;\theta)_{i}=\phi(x^{\prime};\theta)_{i}\) for \(i\in\mathcal{N}\). To realize this constraint, NAM models each feature's attribution with an independent network module.

**Self-Interpretable model with Transformation Equivariant Interpretation (SITE)** (Wang & Wang (2021)). SITE models the output as a similarity between the sample and a relevant prototype \(G(x;\theta)\in\mathbb{R}^{N}\), formulated as \(f(x)=\sum_{i=1}^{N}G(x;\theta)_{i}x_{i}+b\). Each feature's contribution term \(G(x;\theta)_{i}x_{i}\) is its attribution value. The prototype is constrained to (1) resemble real samples of the corresponding class and (2) be transformation equivariant for image inputs. Thus, SITE targets the form of Eq. 1 with constraints: \(\phi(x;\theta)_{i}=G(x;\theta)_{i}x_{i}\) for \(i\in\mathcal{N}\), \(G(x;\theta)\sim\mathcal{D}_{c}\), and \(T_{\beta}^{-1}(G(T_{\beta}(x);\theta))\sim\mathcal{D}_{c}\), where \(\mathcal{D}_{c}\) is the sample distribution and \(T_{\beta}\) is a set of predefined transformations. To realize these constraints, SITE regularizes the sample-prototype distance and \(T_{\beta}\)'s reconstruction error.

**Salary-Skill Value Composition Network (SSCN)** (Sun et al. (2021)). Domain-specific studies also seek self-attributing networks. For example, SSCN models job salary as the weighted average of skill values, formulated as \(f(x;\theta_{d},\theta_{v})=\sum_{i=1}^{N}d(x;\theta_{d})_{i}v(x_{i};\theta_{v})\), where \(d(x;\theta_{d})_{i}\) denotes skill dominance and \(v(x_{i};\theta_{v})\) denotes skill value, defined as task-specific components in their study. Regarding each skill as a salary-prediction feature, SSCN inherently targets the form of Eq. 1 with constraints: \(\phi(x;\theta)_{i}=d(x;\theta_{d})_{i}v(x_{i};\theta_{v})\), \(\forall x,x^{\prime}\quad x_{i}=x^{\prime}_{i}\to v(x_{i};\theta_{v})=v(x^{\prime}_{i};\theta_{v})\), and \(\sum_{i=1}^{N}d(x;\theta_{d})_{i}=1\) for \(i\in\mathcal{N}\). To realize these constraints, SSCN models \(v\) with a single-skill network and \(d\) with a self-attention mechanism.
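To make the shared template of Eq. 1 concrete, the following is a minimal sketch of an ASA model in the NAM style, where each attribution term depends only on its own feature. The structure and names are illustrative, not the official NAM implementation.

```python
import torch
import torch.nn as nn

class NAMStyleASA(nn.Module):
    """Minimal additive self-attributing model: f(x) = phi_0 + sum_i phi_i(x_i),
    where each phi_i depends only on feature x_i (the NAM constraint)."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.phi_0 = nn.Parameter(torch.zeros(1))

    def forward(self, x):                        # x: (batch, n_features)
        phi = torch.cat(
            [net(x[:, i:i + 1]) for i, net in enumerate(self.nets)], dim=1
        )                                        # per-feature attributions
        return self.phi_0 + phi.sum(dim=1), phi  # prediction and attributions
```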
### Shapley Additive Self-Attributing Model

In coalition game theory, the Shapley value satisfies the _Efficiency_, _Linearity_, _Nullity_, and _Symmetry_ axioms for fair contribution allocation, which has largely promoted post-hoc interpretation methods (Lundberg & Lee (2017); Bento et al. (2021)). However, there is a gap in studying self-attributing networks that satisfy these axioms. Therefore, we define the Shapley ASA model as:

**Definition 2.2** (Shapley Additive Self-Attributing Model): _Shapley Additive Self-Attributing Models are ASA models that are formulated as_ \[\begin{split} f(x;\theta,\phi_{0})=&\phi_{0}+\sum_{i=1}^{N}\phi(x;\theta)_{i},\\ \text{s.t.}\quad\phi(x;\theta)_{i}=&\frac{1}{N!}\sum_{O\in\pi(\mathcal{N}),k\in\mathcal{N}}\mathbb{I}\{O_{k}=i\}(v_{f}(x_{O_{1:k-1}\cup\{i\}};\theta,\phi_{0})-v_{f}(x_{O_{1:k-1}};\theta,\phi_{0})),\end{split} \tag{2}\] _where \(\pi(\cdot)\) denotes all possible permutations, \(x_{\mathcal{S}}:=\{x_{i}|i\in\mathcal{S}\}\) represents the subset of features in \(x\) given \(\mathcal{S}\subseteq\mathcal{N}\) (\(x=x_{\mathcal{N}}\)), and \(v_{f}\) is a predefined value function measuring the effect of \(x_{\mathcal{S}}\) with respect to \(f\)._

Following coalition game theory (Shapley et al. (1953)), we constrain \(\phi(x;\theta)_{i}\) to be feature \(x_{i}\)'s average marginal contribution upon joining a random input subset, leading to the Shapley value axioms.

**Theorem 2.3**: _In a Shapley ASA model, the following axioms hold:_ _(Efficiency): \(\sum_{i\in\mathcal{N}}\phi(x;\theta)_{i}=v_{f}(x_{\mathcal{N}};\theta,\phi_{0})-v_{f}(x_{\emptyset};\theta,\phi_{0})\)._ _(Linearity): Given Shapley ASA models \(f\), \(f^{\prime}\), and \(f^{\prime\prime}\), for all \(\alpha,\beta\in\mathbb{R}\): if \(v_{f^{\prime\prime}}=\alpha v_{f}+\beta v_{f^{\prime}}\), then \(\phi^{(f^{\prime\prime})}=\alpha\phi^{(f)}+\beta\phi^{(f^{\prime})}\), where \(\phi^{(\cdot)}\) denotes the internal attribution value of a Shapley ASA model._ _(Nullity): If \(\forall\mathcal{S}\subseteq\mathcal{N}\backslash\{i\}\quad v_{f}(x_{\mathcal{S}\cup\{i\}};\theta,\phi_{0})=v_{f}(x_{\mathcal{S}};\theta,\phi_{0})\), then \(\phi(x;\theta)_{i}=0\)._ _(Symmetry): If \(\forall\mathcal{S}\subseteq\mathcal{N}\backslash\{i,j\}\quad v_{f}(x_{\mathcal{S}\cup\{i\}};\theta,\phi_{0})=v_{f}(x_{\mathcal{S}\cup\{j\}};\theta,\phi_{0})\), then \(\phi(x;\theta)_{i}=\phi(x;\theta)_{j}\)._

## 3 Method

The value function over feature subsets has inputs of variable sizes, i.e., \(v_{f}:\bigcup_{k=1}^{N}\mathbb{R}^{k}\rightarrow\mathbb{R}\), whereas most existing models accept fixed-size inputs, i.e., \(f:\mathbb{R}^{N}\rightarrow\mathbb{R}\). Transforming from \(f\) to \(v_{f}\) demands approximating the model output with missing inputs using handcrafted reference values (Lundberg and Lee (2017)) and a sampling procedure (Datta et al. (2016)). This is non-trivial given complex factors like feature dependency. SASANet addresses this by directly modeling \(f:\bigcup_{k=1}^{N}\mathbb{R}^{k}\rightarrow\mathbb{R}\) using set-based modeling for inputs of any size.

**Definition 3.1** (SASANet value function): _In SASANet, the value function \(v_{f}(x_{\mathcal{S}};\theta,\phi_{0})\) for any given feature subset \(x_{\mathcal{S}}\in\bigcup_{k=1}^{N}\mathbb{R}^{k}\) is the model output \(f(x_{\mathcal{S}};\theta,\phi_{0})\)._

With \(v_{f}=f\), ensuring that \(f:\bigcup_{k=1}^{N}\mathbb{R}^{k}\rightarrow\mathbb{R}\) meets Definition 2.2 often involves adding a regularization term. This can cause conflicting optimization directions, hindering proper model convergence. To address this, we introduce an intermediate sequential modeling framework and an internal distillation strategy, which naturally achieves \(f:\bigcup_{k=1}^{N}\mathbb{R}^{k}\rightarrow\mathbb{R}\) compliant with Definition 2.2. The process is depicted in Figure 1, with proofs and structural details in the Appendix.
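For reference, once a model accepts arbitrary feature subsets, as SASANet's value function does, the constraint in Definition 2.2 can be estimated by brute-force permutation sampling. The sketch below is a didactic baseline for this computation, not SASANet's training procedure; the dict-based interface of `f` is an assumption for illustration.

```python
import random

def sampled_shapley(f, x, n_perms=1000, seed=0):
    """Monte-Carlo estimate of Definition 2.2's constraint: feature i's
    average marginal contribution v_f(x_{S + {i}}) - v_f(x_S) over random
    permutations. `f` maps a dict {feature index: value} to a scalar and
    must accept any subset, including the empty set."""
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_perms):
        order = list(range(n))
        rng.shuffle(order)
        subset, prev = {}, f({})
        for i in order:
            subset[i] = x[i]
            cur = f(dict(subset))
            phi[i] += (cur - prev) / n_perms
            prev = cur
    return phi
```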
### Marginal Contribution-based Sequential Modeling

In SASANet, a marginal contribution-based sequential module generates a permutation-variant output for any feature set size, capturing the intermediate effects of each given feature. Specifically, for a given order \(O\) in which the features in \(\mathcal{N}\) are sequentially added for prediction, the marginal contribution of each feature is explicitly modeled as \(\triangle(x_{O_{i}},x_{O_{1:i-1}};\theta_{\triangle})\). For any feature subset \(\mathcal{S}\subseteq\mathcal{N}\), accumulating the marginal contributions given an order \(O_{\mathcal{S}}\in\pi(\mathcal{S})\) yields the permutation-variant output \(f_{c}(x_{\mathcal{S}},O_{\mathcal{S}};\theta_{\triangle})=\sum_{i=1}^{|\mathcal{S}|}\triangle(x_{O_{\mathcal{S},i}},x_{O_{\mathcal{S},1:i-1}};\theta_{\triangle})+\phi_{0}\). This module can be trained for various prediction tasks. For example, using \(\sigma\) to represent the sigmoid function, we can formulate a binary classification objective for a sample \(x\) as \[L_{m}(x,y,O)=y\log(\sigma(f_{c}(x,O;\theta_{\triangle})))+(1-y)\log(1-\sigma(f_{c}(x,O;\theta_{\triangle}))), \tag{3}\]

### Shapley Value Distillation

Following Definition 2.1, we train an attribution network \(\phi(\cdot;\theta_{\phi}):\bigcup_{k=1}^{N}\mathbb{R}^{k}\rightarrow\bigcup_{k=1}^{N}\mathbb{R}^{k}\), producing a final permutation-invariant output \(f(x_{\mathcal{S}};\theta_{\phi})=\sum_{i\in\mathcal{S}}\phi(x_{\mathcal{S}};\theta_{\phi})_{i}\) for any valid input \(x_{\mathcal{S}}\in\bigcup_{k=1}^{N}\mathbb{R}^{k}\). In particular, instead of directly supervising \(f\) with data, we propose an internal distillation method based on the intermediate marginal contribution module \(f_{c}\). Specifically, we construct a distillation loss for each feature \(i\in\mathcal{S}\) in a variable-size input \(x_{\mathcal{S}}\in\bigcup_{k=1}^{N}\mathbb{R}^{k}\) as \[L_{s}^{(i)}(x_{\mathcal{S}})=\frac{1}{|\mathcal{D}|}\sum_{O\in\mathcal{D}}(\phi(x_{\mathcal{S}};\theta_{\phi})_{i}-\sum_{k\in\mathcal{S}}\mathbb{I}\{O_{k}=i\}\triangle(x_{i},x_{O_{1:k-1}};\theta_{\triangle}))^{2}, \tag{4}\] where \(\mathcal{D}\subseteq\pi(\mathcal{S})\) denotes the permutations drawn for training. Training with \(L_{s}\), \(\phi(x_{\mathcal{S}};\theta_{\phi})_{i}\) amortizes the features' effects in \(f_{c}\). We prove that this naturally ensures the Shapley value constraints without introducing conflicting optimization directions. For simplicity, as is typical, we assume no gradient vanishing and sufficient model expressiveness to reach the optimum.

**Proposition 3.2**: _By optimizing \(L_{s}^{(i)}(x_{\mathcal{S}})\) with \(\mathcal{D}\), SASANet converges to satisfy \(\phi(x_{\mathcal{S}};\theta_{\phi})_{i}\sim\mathcal{N}(\phi_{i}^{*},\frac{\sigma_{i}^{2}}{M})\), where \(M=|\mathcal{D}|\), \(\phi_{i}^{*}=\frac{1}{|\mathcal{S}|!}\sum_{O\in\pi(\mathcal{S})}\sum_{k\in\mathcal{S}}\mathbb{I}\{O_{k}=i\}\triangle(x_{i},x_{O_{1:k-1}};\theta_{\triangle})\), and \(\sigma_{i}^{2}=\frac{1}{|\mathcal{S}|!}\sum_{O\in\pi(\mathcal{S})}\sum_{k\in\mathcal{S}}\mathbb{I}\{O_{k}=i\}(\triangle(x_{i},x_{O_{1:k-1}};\theta_{\triangle})-\phi_{i}^{*})^{2}\)._

This means \(\phi\) is trained towards the averaged intermediate feature effects in \(f_{c}\).
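The interplay of Eqs. (3) and (4) can be sketched as below. The interfaces `delta_net(i, prefix, x)` (standing in for \(\triangle\)) and `phi_net(x)` (returning the attribution vector) are assumptions made for illustration; the actual modules are attention-based, as described later.

```python
import torch

def marginal_losses(delta_net, phi_net, x, y, phi_0, n_perms=4):
    """Schematic of Eq. (3) (permutation-variant prediction objective) and
    Eq. (4) (regressing the attribution network onto sampled marginal
    contributions). x: (N,) features; y: scalar float tensor in {0, 1}."""
    n = x.shape[0]
    # BCEWithLogitsLoss is the negative of Eq. (3)'s log-likelihood, minimized.
    bce = torch.nn.BCEWithLogitsLoss()
    L_m = 0.0
    targets = torch.zeros(n)
    for _ in range(n_perms):
        order = torch.randperm(n)
        f_c = phi_0
        for k in range(n):
            i = order[k].item()
            d = delta_net(i, order[:k], x)       # Delta for feature i and prefix
            targets[i] += d.detach() / n_perms   # Monte-Carlo target of Eq. (4)
            f_c = f_c + d
        L_m = L_m + bce(f_c.reshape(1), y.reshape(1)) / n_perms
    L_s = ((phi_net(x) - targets) ** 2).mean()   # Eq. (4), averaged over features
    return L_m, L_s
```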
Then, we can derive the relationship between the final output \(f\) and the intermediate output \(f_{c}\).

**Proposition 3.3**: _By optimizing \(L_{s}^{(i)}(x_{\mathcal{S}})\) with enough permutation sampling, with small uncertainty the model converges to satisfy \(f(x_{\mathcal{S}};\theta_{\phi})=\frac{1}{|\mathcal{S}|!}\sum_{O\in\pi(\mathcal{S})}f_{c}(x_{\mathcal{S}},O;\theta_{\triangle})\)._

In this way, \(f\) acts as an implicit bagging of the permutation-variant predictions of \(f_{c}\), leading to valid permutation-invariant predictions that offer stability and higher accuracy, without requiring a direct supervision loss on the training data. We will subsequently demonstrate how this distillation loss enables \(\phi\) to model the Shapley value of \(f\).

**Theorem 3.4**: _If \(\forall O_{1},O_{2}\in\pi(\mathcal{S})\;f_{c}(x_{\mathcal{S}},O_{1})=f_{c}(x_{\mathcal{S}},O_{2})\), optimizing \(L_{s}^{(i)}(x_{\mathcal{S}})\) for sample \(x\)'s subsets \(x_{\mathcal{S}}\) with ample permutation samples ensures that \(\phi(x;\theta_{\phi})\) converges to satisfy Definition 2.2's constraint, i.e., the Shapley value in \(f(x;\theta_{\phi})\)._

While \(f_{c}\) is permutation-variant, in the subsequent section we introduce a method for \(f_{c}\) to converge to the label expectation of arbitrary feature subsets, thereby not only reflecting pertinent feature-label associations but also naturally inducing a permutation-invariance condition.

### Feature Subset Label Expectation Modeling

Notably, while training \(f_{c}\) with Eq. 3 can lead to \(f\) making good predictions for samples in the dataset, it does not guarantee meaningful outputs for feature subsets. For example, the model might output 0 whenever any feature is missing, assigning the same Shapley value to all features as the last one takes all the credit. Although this attribution captures the model's logic, it fails to represent significant feature-label associations in the data. To address this, we target the output to reflect the label expectation for samples with a specific feature subset. Specifically, we define a loss for \(x_{\mathcal{S}}\) using the training set \(\mathcal{D}_{tr}\) as \[L_{v}(x_{\mathcal{S}},O_{\mathcal{S}})=\frac{\sum_{(x^{\prime},y^{\prime})\in\mathcal{D}_{tr}}\mathbb{I}(x^{\prime}_{\mathcal{S}}=x_{\mathcal{S}})L_{m}(x_{\mathcal{S}},y^{\prime},O_{\mathcal{S}})}{\sum_{(x^{\prime},y^{\prime})\in\mathcal{D}_{tr}}\mathbb{I}(x^{\prime}_{\mathcal{S}}=x_{\mathcal{S}})}. \tag{5}\] \(L_{v}\) is designed for \(f_{c}\) instead of the Shapley module, averting conflicts with the convergence direction of \(\phi\) that we have illustrated. Nonetheless, we show that this approach makes \(\phi\) the Shapley value of \(f\) by directing \(f_{c}\)'s output to satisfy the permutation invariance stipulated in Theorem 3.4.

**Theorem 3.5**: _Optimizing \(L_{s}^{(i)}(x_{\mathcal{S}})\) and \(L_{v}(x_{\mathcal{S}},O_{\mathcal{S}})\) for sample \(x\)'s subsets \(x_{\mathcal{S}}\) over all permutations \(O_{\mathcal{S}}\in\pi(\mathcal{S})\) makes \(\phi\) converge to the Shapley value of \(f\)._

In this way, we ensure the constraint in Definition 2.2 is satisfied while training \(f\) to make a valid prediction with a unified distillation loss. Moreover, \(L_{v}\) makes the attribution values capture real-world feature-label relevance, which we discuss as follows.
**Proposition 3.6**: _Optimizing \(L_{s}^{(i)}(x_{\mathcal{S}})\) and \(L_{v}(x_{\mathcal{S}},O_{\mathcal{S}})\) over all permutations \(O_{\mathcal{S}}\in\pi(\mathcal{S})\) makes \(\sigma(f(x_{\mathcal{S}}))\) converge to \(\mathbb{E}_{\mathcal{D}_{tr}}[y|x_{\mathcal{S}}]\)._

While we cannot exhaust all the permutations in practice, by continuously sampling permutations during training, the network learns to generalize by grasping permutation patterns.

**Remark 3.7**: _\(f\) estimates the expected label when certain features are observed, represented by \(\int_{x^{\prime}_{\bar{\mathcal{S}}}}p(x^{\prime}_{\bar{\mathcal{S}}}|x_{\mathcal{S}})h(x^{\prime}_{\bar{\mathcal{S}}}\cup x_{\mathcal{S}})dx^{\prime}_{\bar{\mathcal{S}}},\) where \(\bar{\mathcal{S}}=\mathcal{N}\backslash\mathcal{S}\), \(h(x)\) signifies a sample \(x\)'s real label, and \(p(\cdot|x_{\mathcal{S}})\) is the conditional sample distribution given the feature set \(x_{\mathcal{S}}\). Then, \(\phi\) learns to estimate the actual feature-label Shapley value._

Previously, such analyses could be conducted by training a model with fixed-size inputs and employing post-hoc methods. These methods crafted value functions to estimate expectations based on model outputs when randomly replacing missing features. As outlined in the literature (Strumbelj & Kononenko (2014); Datta et al. (2016); Lundberg & Lee (2017)), this can be depicted as \(\int_{x^{\prime}_{\bar{\mathcal{S}}}}p(x^{\prime}_{\bar{\mathcal{S}}})h^{\prime}(x^{\prime}_{\bar{\mathcal{S}}}\cup x_{\mathcal{S}})dx^{\prime}_{\bar{\mathcal{S}}},\) where \(h^{\prime}\) approximates the inaccessible \(h\). The implied sample distribution over the whole feature vector is \(p_{gen}(x)=p(x_{\mathcal{S}})\cdot p(x_{\bar{\mathcal{S}}})\), which only matches \(p\) if \(\forall x_{\mathcal{S}},x_{\bar{\mathcal{S}}}\quad p(x_{\bar{\mathcal{S}}})=p(x_{\bar{\mathcal{S}}}|x_{\mathcal{S}})\). Given that full feature independence is unlikely, the generated samples may not align with real data, causing unreliable outputs. Despite progress in post-hoc studies (Laugel et al. (2019); Aas et al. (2021)) on feature dependence, complex tasks remain challenging. SASANet sidesteps this by learning an apt value function, better estimating feature-label relations via self-attribution.

### Positional Shapley Value Distillation

The marginal contribution of a feature can fluctuate based on the prefix set size. For instance, with many observed features, a model might resist changing its prediction for new ones. Such a diversified contribution distribution can hinder model convergence. Therefore, instead of directly training the overall attribution function with Eq. 4, we train a positional attribution function \(\phi(x_{\mathcal{S}};\theta_{\phi})_{i,k}\) to measure feature \(i\)'s effect in \(f_{c}\) at a certain position \(k\), with an internal positional distillation loss: \[L_{s}^{(i,k)}(x_{\mathcal{S}})=\frac{1}{|\mathcal{D}|}\sum_{O\in\mathcal{D}}\mathbb{I}\{O_{k}=i\}(\phi(x_{\mathcal{S}};\theta_{\phi})_{i,k}-\triangle(x_{i},x_{O_{1:k-1}};\theta_{\triangle}))^{2}. \tag{6}\]
**Proposition 3.8**: _By randomly drawing \(m\) permutations where \(x_{i}\) appears at a specific position \(k\) and optimizing \(L_{s}^{(i,k)}(x_{\mathcal{S}})\), we have \(\phi(x_{\mathcal{S}};\theta_{\phi})_{i,k}\sim\mathcal{N}(\phi_{i,k}^{*},\frac{\sigma_{i,k}^{2}}{m})\), where \(\phi_{i,k}^{*}=\frac{1}{(|\mathcal{S}|-1)!}\sum_{O\in\pi(\mathcal{S})}\mathbb{I}\{O_{k}=i\}\triangle(x_{i},x_{O_{1:k-1}};\theta_{\triangle})\) and \(\sigma_{i,k}^{2}=\frac{1}{(|\mathcal{S}|-1)!}\sum_{O\in\pi(\mathcal{S})}\mathbb{I}\{O_{k}=i\}(\triangle(x_{i},x_{O_{1:k-1}};\theta_{\triangle})-\phi_{i,k}^{*})^{2}\)._

**Lemma 3.9**: _When optimizing \(L_{s}^{(i,k)}(x_{\mathcal{S}})\) with enough sampling and calculating \(\phi(x_{\mathcal{S}};\theta_{\phi})_{i}=\frac{1}{|\mathcal{S}|}\sum_{k=1}^{|\mathcal{S}|}\phi(x_{\mathcal{S}};\theta_{\phi})_{i,k}\), with small uncertainty \(\phi(x_{\mathcal{S}};\theta_{\phi})_{i}\) converges to \(\frac{1}{|\mathcal{S}|!}\sum_{O\in\pi(\mathcal{S})}\sum_{k=1}^{|\mathcal{S}|}\mathbb{I}\{O_{k}=i\}\triangle(x_{i},x_{O_{1:k-1}};\theta_{\triangle})\)._

The overall attribution value produced this way mirrors the original distillation, equating to the Shapley value in our analysis. Hence, we term it positional Shapley value distillation.

**Proposition 3.10**: _Positional Shapley value distillation decreases the variance of SASANet's Shapley value estimates._

### Network Implementation

To accommodate features of diverse meanings and distributions, we employ value embedding tables for categorical features and single-input MLPs for continuous ones, outperforming field embedding methods (Song et al. (2019); Guo et al. (2017)). The marginal contribution module applies multi-head attention for the prefix representation of each feature, with position embeddings aiding in capturing sequential patterns and ensuring model convergence. Conversely, the Shapley value module uses multi-head attention for individualized sample representation, excluding position embeddings to maintain permutation invariance. Feature embeddings are combined with the prefix and sample representations to compute marginal contributions and positional Shapley values using feed-forward networks. \(\phi_{0}\) is precomputed, symbolizing the label expectation when no feature is observed. Refer to the Appendix for a model schematic.

For computational efficiency, the terms in \(L_{s}\) and \(L_{v}\) are separated by sample and permutation, resulting in the loss formulation \(L(x,y,O)=\sum_{k=1}^{N}\big(\lambda_{s}(\phi(x;\theta_{\phi})_{O_{k},k}-\triangle(x_{O_{k}},x_{O_{1:k-1}};\theta_{\triangle}))^{2}+\lambda_{v}(\sum_{i=1}^{k}\triangle(x_{O_{i}},x_{O_{1:i-1}};\theta_{\triangle})+\phi_{0}-y)^{2}\big)\). In this manner, for each sample in a batch, we randomly select a single permutation rather than sampling multiple times, promoting consistent training and enhancing efficiency.
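A one-sample, one-permutation sketch of this combined loss is given below. The interfaces `delta_net(i, prefix, x)` and `phi_pos(x, i, k)` stand in for the marginal-contribution module and the positional attribution head; both interfaces, and the detaching of the distillation target, are assumptions made for illustration rather than details of the released code.

```python
import torch

def sasanet_step(delta_net, phi_pos, x, y, phi_0, lam_s=1.0, lam_v=1.0):
    """One-sample, one-permutation version of the combined loss L(x, y, O)."""
    n = x.shape[0]
    order = torch.randperm(n)                  # a single permutation O
    f_c, loss = phi_0, 0.0
    for k in range(n):
        i = order[k].item()
        d = delta_net(i, order[:k], x)         # Delta(x_{O_k}, x_{O_{1:k-1}})
        f_c = f_c + d
        # lam_s term: positional distillation; the target is detached so this
        # term only updates the attribution head.
        loss = loss + lam_s * (phi_pos(x, i, k) - d.detach()) ** 2
        # lam_v term: every prefix prediction (phi_0 + cumulative Delta) is
        # pulled towards the label y.
        loss = loss + lam_v * (f_c - y) ** 2
    return loss
```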
## 4 Experiments

### Experimental Setups

As depicted in Section 2.1, self-attributing models (including SASANet) can integrate pre-/post-transformations to adapt to various data types, e.g., integrating image feature extractors for image concept-level attribution and then estimating pixel-level contributions with propagation methods like upsampling. Yet, the efficacy and clarity of such attributions can be substantially compromised by the quality of the concept extraction module. To ensure clarity and sidestep potential ambiguities arising from muddled concepts or propagation methods, our evaluation uses tabular data, as it presents standardized and semantically coherent concepts. This approach foregrounds the direct impact of the core attribution layer. Specifically, we tested on three public classification tasks: Census Income Prediction (Kohavi (1996)), Higgs Boson Identification (Baldi et al. (2014)), and Credit Card Fraud Detection\({}^{1}\), and examined SASANet's regression performance on the Insurance Company Benchmark\({}^{2}\), all with pre-normalized input features. We benchmarked SASANet against prominent black-box models such as LightGBM (Ke et al. (2017)) and MLP; traditional interpretable methods such as LR and DT; and other self-attributing networks such as NAM, SITE, and SENN. Hyperparameters were finely tuned for each model on each dataset for a fair comparison, and models specific to classification were not assessed on regression tasks. Configuration and dataset specifics are in Appendices H and I. For the interpretation evaluation, we did not compare with other self-attributing models, since their desired feature attribution values have different intuitions and physical meanings. Instead, we compared with post-hoc methods to demonstrate that SASANet accurately conveys its prediction rationale.

### Effectiveness: Prediction Performance Evaluation

We used AUC and AP for imbalanced-label classification, and RMSE and MAE for regression. Table 1 shows average scores from 10 tests; Appendix J lists standard deviations.

**Comparison to Baselines.** Black-box models outperform the other compared models. Classic tree and linear methods, though interpretable, are limited by their simplicity. Similarly, current self-attributing networks often sacrifice performance for interpretability due to simplified structures or added regularizers. However, SASANet matches black-box model performance while remaining interpretable. We also show that SASANet rivals separately trained MLP models in predicting with an arbitrary number of missing features, indicating that its Shapley values genuinely reflect feature-label relevance through modeling feature contributions to the label expectation. See Appendix K for details.

**Ablation Study.** First, without the Shapley value module (i.e., "SASANet-d"), there was a noticeable performance drop. This suggests that parameterizing the Shapley value enhances accuracy. The permutation-variant predictions, despite theoretical convergence, can still be unstable in practice. The permutation-invariant Shapley value, akin to inherent bagging across permutations, offers better accuracy. Second, bypassing the positional Shapley value and directly modeling the overall Shapley value (i.e., "SASANet-p") resulted in reduced performance, particularly on the Income and Higgs Boson datasets. This underscores the benefit of distinguishing marginal contributions at various positions.

### Fidelity: Feature-Masking Experiment

We observed prediction performance after masking the top 1-5 features attributed by SASANet for each test sample and compared it to the outcome when using KernelSHAP, a popular post-hoc method, and FastSHAP, a recent parametric post-hoc method. The results are shown in Table 2. SASANet's feature masking leads to the most significant drop in performance, indicating its self-attribution is more faithful than the post-hoc methods. This stems from its self-attribution being rooted in the real prediction logic. Moreover, SASANet's reliable predictions on varied feature subsets make it suitable for feature-masking experiments, sidestepping the out-of-distribution noise introduced by feature replacements. Notably, FastSHAP performs the worst in the feature-masking experiments. While it can approximate the Shapley value in numerical regression, it appears to struggle to accurately identify the most important features in a ranked manner. Additionally, we conducted an experiment adding the top 1-5 features from scratch, showing that SASANet's selection led to quicker performance improvement. See Appendix L for details.

\begin{table} \begin{tabular}{|c|c c c c c c c c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Income} & \multicolumn{2}{c}{Higgs Boson} & \multicolumn{2}{c}{Fraud} & \multicolumn{2}{c|}{Insurance} \\ & AP & AUC & AP & AUC & AP & AUC & RMSE & MAE \\ \hline LightGBM & _0.6972_ & _0.9566_ & 0.8590 & 0.8459 & 0.7934 & 0.9737 & 0.2315 & _0.1052_ \\ MLP & 0.6616 & 0.9518 & _0.8877_ & _0.8771_ & _0.8167_ & _0.9621_ & _0.2310_ & 0.1076 \\ \hline DT & 0.2514 & 0.7250 & 0.6408 & 0.6705 & 0.5639 & 0.8614 & 0.3332 & 0.1167 \\ LR & 0.3570 & 0.8717 & 0.6835 & 0.6846 & 0.7279 & 0.9620 & - & - \\ NAM & 0.6567 & 0.9506 & 0.7897 & 0.7751 & 0.7986 & 0.9590 & 0.2382 & 0.1182 \\ SITE & 0.6415 & 0.9472 & 0.8656 & 0.8597 & 0.7912 & 0.9556 & - & - \\ SENN & 0.6067 & 0.9416 & 0.7563 & 0.7556 & 0.7709 & 0.8916 & 0.2672 & 0.1313 \\ \hline SASANet-p & 0.6708 & 0.9525 & 0.8775 & 0.8656 & 0.8090 & 0.9667 & **0.2368** & **0.0894** \\ SASANet-d & 0.6811 & 0.9527 & 0.8790 & 0.8675 & 0.8090 & 0.9665 & 0.2387 & 0.1037 \\ SASANet & **0.6864** & **0.9542** & **0.8836** & **0.8721** & **0.8124** & **0.9674** & 0.2375 & 0.0901 \\ \hline \end{tabular} \end{table} Table 1: Model performance - the best performance among interpretable models is in bold.
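For completeness, the masking protocol in the fidelity experiment can be sketched as follows, assuming a model that can score an arbitrary subset of features (as SASANet can); `attribute` and the `model(x, keep)` interface are illustrative assumptions.

```python
import numpy as np

def mask_top_features(model, attribute, X, k=5):
    """For each sample, remove the k features with the largest attribution
    magnitude and re-score the prediction on the remaining subset.
    `attribute(x)` returns per-feature attributions; `model(x, keep)` predicts
    from the subset of feature indices `keep`."""
    scores = []
    for x in X:
        phi = attribute(x)
        drop = np.argsort(-np.abs(phi))[:k]            # top-k attributed features
        keep = np.setdiff1d(np.arange(len(x)), drop)
        scores.append(model(x, keep))
    return np.asarray(scores)  # evaluate AP/AUC of these scores against labels
```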
Moreover, SASANet's reliable predictions on varied feature subsets make it suitable for feature-masking experiments, sidestepping out-of-distribution noise from feature replacements. Notably, FastSHAP performs the worst in feature-masking experiments. While it can approximate the Shapley value in numerical regression, it appears to struggle in accurately identifying the most important features in a ranked manner. Additionally, we conducted experiment to add the top 1-5 features from scratch, showing SASANet's selection led to quicker performance improvement. See Appendix L for details. \begin{table} \begin{tabular}{|c|c c c c c c c c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Income} & \multicolumn{2}{c}{Higgs Boson} & \multicolumn{2}{c}{Fraud} & \multicolumn{2}{c|}{Insurance} \\ & AP & AUC & AP & AUC & AP & AUC & RMSE & MAE \\ \hline LightGBM & _0.6972_ & _0.9566_ & 0.8590 & 0.8459 & 0.7934 & 0.9737 & 0.2315 & _0.1052_ \\ MLP & 0.6616 & 0.9518 & _0.8877_ & _0.8771_ & _0.8167_ & _0.9621_ & _0.2310_ & 0.1076 \\ \hline DT & 0.2514 & 0.7250 & 0.6408 & 0.6705 & 0.5639 & 0.8614 & 0.3332 & 0.1167 \\ LR & 0.3570 & 0.8717 & 0.6835 & 0.6846 & 0.7279 & 0.9620 & - & - \\ NAM & 0.6567 & 0.9506 & 0.7897 & 0.7751 & 0.7986 & 0.9590 & 0.2382 & 0.1182 \\ SITE & 0.6415 & 0.9472 & 0.8656 & 0.8597 & 0.7912 & 0.9556 & - & - \\ SENN & 0.6067 & 0.9416 & 0.7563 & 0.7556 & 0.7709 & 0.8916 & 0.2672 & 0.1313 \\ \hline SASANet-p & 0.6708 & 0.9525 & 0.8775 & 0.8656 & 0.8090 & 0.9667 & **0.2368** & **0.0894** \\ SASANet-d & 0.6811 & 0.9527 & 0.8790 & 0.8675 & 0.8090 & 0.9665 & 0.2387 & 0.1037 \\ SASANet & **0.6864** & **0.9542** & **0.8836** & **0.8721** & **0.8124** & **0.9674** & 0.2375 & 0.0901 \\ \hline \end{tabular} \end{table} Table 1: Model performance - the best performance among interpretable models is in bold. ### Efficiency: Attribution Time Evaluation We assessed attribution time for SASANet's self-attribution and two post-hoc interpreters, KernelSHAP and LIME, on 1,000 random samples. As per Table 3, SASANet was the quickest. LIME, reliant on drawing neighboring samples for local surrogates, was the slowest, especially with larger datasets. KernelSHAP, while faster than LIME due to its linear approximation and regression-based estimation, was constrained by its sampling, making it over 200 times slower than SASANet. Notably, while parametric post-hoc methods like FastSHAP (Jethani et al. (2021)) offer swift 1-pass attribution, they demand an extensive post-hoc training involving many forward propagations of the interpreted model. This was problematic for large datasets and models, such as the Higgs Boson. Conversely, self-attribution itself is the core component of model prediction and inherently acquired during model training, eliminating post-hoc sampling or regression procedure. ### Accuracy: Comparison with Ground-Truth Shapley Value SASANet can predict for any-sized input, allowing us to directly ascertain accurate Shapley values through sufficient permutations. While getting ground truth Shapley value is time-intensive, the transformer structure in SASANet alleviates the problem since it can simultaneously produce multiple feature subsets' value by incrementally considering the attention relevant to new features. For each dataset, we sampled 1,000 test samples and estimated their real Shapley values in SASANet with 10,000 permutations. 
Furthermore, to test the generalization of the attribution module, we created a distribution-shift dataset by adding a randomly sampled large bias to each dimension, of the same scale as the normalized input. The RMSEs of KernelSHAP's, FastSHAP's, and SASANet's attribution values with respect to the real Shapley values are shown in Table 3. We observe that SASANet's self-attribution module provides accurate estimates of the Shapley value of its own output. The performance of the attribution module may decrease under distribution shift, as it is trained on the distribution of the training data in our experiment. Nevertheless, it still largely outperforms KernelSHAP and FastSHAP. Indeed, our analysis in Section 3.2 indicates that convergence to the Shapley value is ensured on the distilled sample distribution, irrespective of the input distribution. Therefore, simple tricks like introducing noise during internal distillation can alleviate this issue without harming the effectiveness of the model.

### Qualitative Evaluation

**Overall Interpretation.** We compared feature attributions from SASANet and KernelSHAP over the entire test set. In Figure 2 (a), SASANet reveals clear attribution patterns. Specifically, for the Income dataset, there is evident clustering in attribution values tied to specific feature values. Such sparsity of the attribution values aligns with the feature values, which are mostly categorical.

\begin{table} \begin{tabular}{|c|c c c c c c c c c c c|} \hline \multirow{2}{*}{Task} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Top 1} & \multicolumn{2}{c|}{Top 2} & \multicolumn{2}{c|}{Top 3} & \multicolumn{2}{c|}{Top 4} & \multicolumn{2}{c|}{Top 5} \\ & & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC \\ \hline Income & SASA. & **0.560** & **0.929** & **0.497** & **0.917** & **0.441** & **0.904** & **0.398** & **0.892** & **0.361** & **0.880** \\ & KerSH. & 0.606 & 0.939 & 0.566 & 0.932 & 0.547 & 0.928 & 0.532 & 0.926 & 0.518 & 0.923 \\ & FastSH. & 0.683 & 0.954 & 0.617 & 0.944 & 0.617 & 0.944 & 0.618 & 0.944 & 0.613 & 0.943 \\ \hline Higgs & SASA. & **0.825** & **0.813** & **0.777** & **0.764** & **0.730** & **0.715** & **0.681** & **0.665** & **0.632** & **0.616** \\ & KerSH. & 0.855 & 0.843 & 0.833 & 0.821 & 0.808 & 0.795 & 0.783 & 0.770 & 0.760 & 0.746 \\ & FastSH. & - & - & - & - & - & - & - & - & - & - \\ \hline Fraud & SASA. & **0.789** & **0.962** & **0.758** & **0.957** & **0.681** & **0.952** & **0.625** & **0.938** & **0.526** & **0.904** \\ & KerSH. & 0.806 & 0.963 & 0.782 & 0.961 & 0.732 & 0.959 & 0.693 & 0.949 & 0.627 & 0.936 \\ & FastSH. & 0.813 & 0.967 & 0.813 & 0.967 & 0.815 & 0.965 & 0.811 & 0.965 & 0.809 & 0.966 \\ \hline \end{tabular} * FastSHAP results for the Higgs dataset are missing due to prolonged training times. \end{table} Table 2: Results of feature masking experiments.
\begin{table} \begin{tabular}{|c|c c c|c c c|c c c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Time} & \multicolumn{3}{c|}{RMSE} & \multicolumn{3}{c|}{RMSE (Distribution Shift)} \\ & Income & Higgs & Fraud & Income & Higgs & Fraud & Income & Higgs & Fraud \\ \hline LIME & 1604.9s & 11792.7s & 9851.2s & - & - & - & - & - & - \\ KernelSHAP & 91.2s & 65.7s & 38.4s & 0.348 & 0.402 & 0.514 & 0.348 & 0.405 & 0.494 \\ FastSHAP\({}^{*}\) & **0.3s\({}^{*}\)** & - & **0.2s\({}^{*}\)** & 0.332 & - & 0.473 & 0.522 & - & 0.550 \\ SASANet & **0.3s** & **0.3s** & **0.2s** & **0.001** & **0.005** & **0.033** & **0.001** & **0.113** & **0.082** \\ \hline \end{tabular} * FastSHAP has a time-consuming post-hoc training procedure, whose time is not reported to prevent confusion. FastSHAP results for the Higgs dataset are missing due to prolonged training times. \end{table} Table 3: Time cost and accuracy for feature attribution.

The Higgs Boson dataset shows that SASANet's attributions tend to align with feature value changes, confirming its ability to identify pertinent feature-label relationships. In particular, SASANet's attribution values show roughly monotonic associations with feature values: a larger feature value tends to yield a positive attribution for "m_jlv" but a negative attribution for "m_bb". This suggests SASANet identifies feature-label relevance consistent with common sense; that is, despite the complex quantitative relationships between features and outcomes, the qualitative patterns are usually clear. Meanwhile, clear differences exist between the real self-attribution and the approximated post-hoc attribution, highlighting potential inaccuracies in post-hoc methods. For example, on both datasets, many features receive abnormally high attribution values under KernelSHAP. This may stem from neglecting feature interdependencies, which becomes evident upon interpreting individual samples in the subsequent part.

**Sample Interpretation.** We randomly draw a positive sample from the Higgs Boson dataset that is predicted correctly by SASANet and compare its self-attribution with KernelSHAP's attribution in Figure 2 (b). Distinct differences are evident. Notably, SASANet indicates that all top-9 features contribute positively, highlighting a simple, intuitive logic. Indeed, the objective of Higgs Boson prediction is to differentiate processes that produce Higgs bosons from those that do not. As a result, the model distinguishes the distinctive traits of Higgs bosons from the ordinary, resulting in a consistent positive influence from the key features representing these unique attributes. KernelSHAP sometimes assigns disproportionately large negative values to features. For instance, the magnitude of the attribution for "m_wbb" (\(-6.26\)) significantly surpasses SASANet's peak attribution of \(0.7\). Yet, the attributions aggregate to the final output \(f(x)=3.959\) because opposing attributions from dependent features, like the positive contribution of "m_jlv" (\(6.14\)), neutralize each other. Notably, SASANet's self-attribution deems neither "m_wbb" nor "m_jlv" so crucial. A similar phenomenon can be observed on other datasets and samples, which are visualized in Appendix M. For example, in the Income prediction task, indicators exist for both rich and poor people. Accordingly, SASANet identifies input features indicating both outcomes. However, it is still obvious that KernelSHAP attributes exaggerated values to features, which then offset one another.
This underscores how KernelSHAP can be deceptively complex by neglecting feature interdependencies, an issue that has been a pitfall in numerous post-hoc studies (Laugel et al. (2019); Frye et al. (2020); Aas et al. (2021)). SASANet naturally handles feature dependency well, since its value functions have an explicit physical meaning as estimated label expectations and are directly trained under the real data distribution.

Figure 2: Feature attribution visualizations: the x-axis shows the attribution value; the y-axis lists the top features by decreasing average absolute attribution. (a) Overall attribution. (b) Single-instance attribution.

## 5 Concluding Remarks

We proposed the Shapley Additive Self-Attributing Neural Network (SASANet) and proved that its self-attribution aligns with the Shapley values of its attribution-generated output. Through extensive experiments on real-world datasets, we demonstrated that SASANet not only surpasses current self-interpretable models in performance but also rivals the precision of black-box models while maintaining faithful Shapley attribution, bridging the gap between expressiveness and interpretability. Furthermore, we showed that SASANet excels at interpreting itself compared to using post-hoc methods. Moreover, our theoretical analysis and experiments suggest broader applications for SASANet. Firstly, its self-attribution offers a foundation for critiquing post-hoc interpreters, pinpointing areas of potential misinterpretation. Furthermore, its focus on label expectation uncovers intricate non-linear relationships between real-world features and outcomes. In the future, we will use SASANet to gain further insights for model interpretation and knowledge discovery studies.
2309.04332
Graph Neural Networks Use Graphs When They Shouldn't
Predictions over graphs play a crucial role in various domains, including social networks and medicine. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Although a graph-structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it. While GNNs have the ability to ignore the graph-structure in such cases, it is not clear that they will. In this work, we show that GNNs actually tend to overfit the given graph-structure. Namely, they use it even when a better solution can be obtained by ignoring it. We analyze the implicit bias of gradient-descent learning of GNNs and prove that when the ground truth function does not use the graphs, GNNs are not guaranteed to learn a solution that ignores the graph, even with infinite data. We examine this phenomenon with respect to different graph distributions and find that regular graphs are more robust to this overfitting. We also prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent. Finally, based on our empirical and theoretical findings, we demonstrate on real-data how regular graphs can be leveraged to reduce graph overfitting and enhance performance.
Maya Bechler-Speicher, Ido Amos, Ran Gilad-Bachrach, Amir Globerson
2023-09-08T13:59:18Z
http://arxiv.org/abs/2309.04332v2
# Graph Neural Networks Use Graphs When They Shouldn't ###### Abstract Predictions over graphs play a crucial role in various domains, including social networks, molecular biology, medicine, and more. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Instances of graph labeling problems consist of the graph-structure (i.e., the adjacency matrix), along with node-specific feature vectors. In some cases, this graph-structure is non-informative for the predictive task. For instance, molecular properties such as molar mass depend solely on the constituent atoms (node features), and not on the molecular structure. While GNNs have the ability to ignore the graph-structure in such cases, it is not clear that they will. In this work, we show that GNNs actually tend to overfit the graph-structure in the sense that they use it even when a better solution can be obtained by ignoring it. We examine this phenomenon with respect to different graph distributions and find that regular graphs are more robust to this overfitting. We then provide a theoretical explanation for this phenomenon, via analyzing the implicit bias of gradient-descent-based learning of GNNs in this setting. Finally, based on our empirical and theoretical findings, we propose a graph-editing method to mitigate the tendency of GNNs to overfit graph-structures that should be ignored. We show that this method indeed improves the accuracy of GNNs across multiple benchmarks. ## 1 Introduction Graph labeling problems arise in many domains, from social networks to molecular biology. In these settings, the goal is to label a graph or its nodes given information about the graph. The information for each graph instance is typically provided in the form of the graph-structure (i.e., its adjacency matrix) as well as the features of its nodes. Graph Neural Networks (GNNs) [16, 13, 14] have emerged as the leading approach for such tasks. The fundamental idea behind GNNs is to use neural-networks that combine the node features with the graph-structure, in order to obtain useful graph representations. This combination is done in an iterative manner, which can capture complex properties of the graph and its node features. Although the graph-structures are provided as input to the GNN, in some cases the best solution can be obtained by ignoring them. This may be due to these graph-structures being non-informative for the predictive task at hand. For instance, some molecular properties such as the molar mass (i.e., weight) depend solely on the constituent atoms (node features), and not on the molecular structure. Also, in cases where the provided graph-structure contains valuable information for the task, the GNN might struggle to effectively exploit this information. In such cases, it is anticipated that better accuracy can be achieved by ignoring the graph-structure. Motivated by this observation, we ask a core question in GNN learning: will GNNs work well in cases where the graph-structure should be ignored, or will they overfit the graph, resulting in reduced accuracy? Answering this question has several far-reaching practical implications. To illustrate, if GNNs lack the ability to discern when to disregard the graph, then providing a graph can actually hurt the performance of GNNs, and thus one must carefully re-think which graphs to provide to a GNN.
On the other hand, if GNNs easily reject the structure when they fail to exploit it, then practitioners should add it even if their domain knowledge and expertise suggest that there is only a small chance that it is informative. We consider the common setting of over-parameterized GNNs, namely, when the number of parameters the GNN uses is larger than the size of the training data. This is a very common case in deep-learning [15, 16, 17, 18], where the learned model can fit any training data. Previous studies showed that despite over-parameterization, models learned using gradient descent often generalize well. Hence, it was suggested that the learning algorithm uses an implicit bias (e.g., low parameter norm) to avoid spurious models that happen to fit the training data. Our focus is thus on the implicit bias of GNN learning and, specifically, whether GNNs are biased towards using or not using the graph-structure. If the implicit bias is towards "simple models" that do not use the graph-structure when possible, then one would expect GNNs to be oblivious to the graph-structure when it is not informative. Our first empirical finding is that this is actually not the case. Namely, GNNs tend to _not_ ignore the graph, and their performance is highly dependent on the provided graph structure. Specifically, there are graph structures that result in models with low accuracy. Next, we ask which properties of the learned graph distribution affect the GNN's ability to ignore the graph. We empirically show that graphs that are regular result in more resilient GNNs. We then provide a theoretical analysis that explains why GNNs are more robust when applied to non-informative regular graphs. Finally, based on our empirical and theoretical findings, we propose a method to mitigate the tendency of GNNs to overfit non-informative graph-structures. Our approach is to add synthetic edges to the input graphs, such that they resemble regular graphs. This turns out to improve accuracy on both synthetic and real problems. **The main contributions of this work are** (1) We show that overparameterized GNNs tend to overfit graph-structures when they should be ignored. (2) We evaluate the graph-structure overfitting phenomenon with respect to different graph distributions and find that the best performance is obtained for regular graphs. (3) We theoretically analyze the implicit bias of GNNs trained on regular graphs and show they converge to unique solutions that are more robust to graph-structure overfitting. (4) We establish an extrapolation result and assessment for GNNs trained on regular graphs, by incorporating insights from the implicit bias analysis. (5) Based on our empirical and theoretical findings, we propose a method to mitigate the tendency of GNNs to overfit the graph-structure, and show that it performs well in practice. To the best of our knowledge, this is the first work to examine the implicit bias of learning GNNs. Indeed, understanding this bias is key to designing GNN learning methods that generalize well. ## 2 GNNs Overfit the Graph-Structure In this section, we present an empirical evaluation showing that GNNs tend to overfit graph-structures that should be ignored, thus hurting their generalization accuracy. ### Preliminaries A graph example is a tuple \(G=(A,X)\). \(A\) is an adjacency matrix representing the graph-structure.
Each node \(i\) is assigned a feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), and all the feature vectors are stacked into a feature matrix \(X\in\mathbb{R}^{n\times d}\), where \(n\) is the number of nodes in \(G\). The set of neighbors of node \(i\) is denoted by \(N(i)\). We denote the number of samples in a dataset by \(m\). We focus on the common class of Message-Passing Neural Networks (Morris et al., 2021). In these networks, at each layer, each node updates its representation as follows: \[h_{i}^{(k)}=\sigma\left(W_{1}^{(k)}h_{i}^{(k-1)}+W_{2}^{(k)}\sum_{j\in N(i)}h_{j}^{(k-1)}+b^{(k)}\right) \tag{1}\] where \(W_{1}^{(k)},W_{2}^{(k)}\in\mathbb{R}^{d_{k}\times d_{k-1}}\). The initial representation of node \(i\) is its feature vector \(h_{i}^{(0)}=\mathbf{x}_{i}\). The final node representations \(\{h_{i}^{(L)}\}_{i=1}^{n}\) obtained in the last layer can then be used for downstream tasks such as node or graph labeling. We focus on graph labeling tasks, where a graph representation vector is obtained by summing all the node representations. This is then followed by a linear transformation matrix \(W_{3}\) that provides the final output for regression or classification (referred to as a _readout_). For the sake of presentation, we drop the superscript in the case of one-layer GNNs. We refer to \(W_{1}^{(k)}\) as the _root-weights_ of layer \(k\) and to \(W_{2}^{(k)}\) as the _topological-weights_ of layer \(k\). A natural way for GNNs to ignore the graph-structure is by zeroing the topological-weights, \(W_{2}^{(k)}=\vec{0}\) (a minimal implementation sketch of such a layer is given below). ### Evidence for Graph Overfitting Our goal is to examine what happens when GNNs learn over graphs that should be ignored. To that end, we conducted experiments on three datasets. **Node Sum.** This is a synthetic task where the label is independent of the graph-structure and relies only on the node features. Therefore, the graph-structures should be ignored. In the Node Sum task, the label is generated using a one-layer linear "teacher" model. This teacher simply sums the node features and applies a linear readout to produce a scalar. The label is then the sign of the result. The teacher readout is sampled once from \(\mathcal{N}(0,1)\) and used for all the graphs. All graphs have \(n=20\) nodes, and each node is assigned a feature vector in \(\mathbb{R}^{128}\) sampled i.i.d from \(\mathcal{N}(0,1)\). The non-informative graph structures are drawn from the GNP graph distribution (Erdos and Renyi, 1959), where the edges are sampled i.i.d with probability \(p\) (we used \(p=0.5\)). As the teacher model only uses the node features to compute the label, the given graph-structures are non-informative for this task. **PROTEINS and ENZYMES** (Morris et al., 2020). These are two tasks based on real-world molecular data, where the goal is to classify a molecule into one of two or six classes, respectively. We chose these datasets because Errica et al. (2022) reported, in a thorough comparison of GNNs, that the best accuracy on these datasets is achieved when the graph-structures are omitted, i.e., on empty graphs. In Errica et al. (2022), the model trained on empty graphs is not an instance of the other compared models. Therefore, one cannot immediately conclude that the other GNNs overfitted the graph-structures. For example, the superiority of the model trained on empty graphs may be due to its architecture rather than the graph information.
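As an aside, here is a minimal PyTorch sketch of the layer in Equation 1 (ours, for illustration only; it assumes a dense adjacency matrix, and the class and variable names are hypothetical):

```
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """One layer of Equation 1: W1 (root weights) acts on the node itself,
    W2 (topological weights) acts on the sum of its neighbors."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W1 = nn.Linear(d_in, d_out, bias=True)   # root weights (+ bias b)
        self.W2 = nn.Linear(d_in, d_out, bias=False)  # topological weights

    def forward(self, A, H):
        # A: (n, n) adjacency matrix, H: (n, d_in) node features.
        # A @ H computes sum_{j in N(i)} h_j for all nodes i at once.
        return torch.relu(self.W1(H) + self.W2(A @ H))

layer = MPNNLayer(128, 64)
readout = nn.Linear(64, 1)                       # the linear readout W3
A = torch.bernoulli(torch.full((20, 20), 0.5))   # a GNP(p=0.5)-style graph
A = torch.triu(A, diagonal=1); A = A + A.t()     # symmetric, no self-loops
X = torch.randn(20, 128)
y_hat = readout(layer(A, X).sum(dim=0))          # sum-pooling, then readout
# Setting layer.W2.weight to zero makes the output independent of A.
```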
Nevertheless, if a fixed architecture is used, better performance achieved when learning over empty graphs does indicate that it would have been better for the GNN to ignore the graph, but it did not. This is because, with a fixed architecture, the solution learned by the GNN trained on empty graphs is always realizable by the GNN trained on any non-empty graphs. In our experiments, we use a fixed architecture.

\begin{table} \begin{tabular}{l|c|c|c} \hline \hline & Node Sum & PROTEINS & ENZYMES \\ \hline \(GNN\) & 94.5 \(\pm\) 0.9 & 67.4 \(\pm\) 1.9 & 55.2 \(\pm\) 3.1 \\ \(GNN_{\emptyset}\) & 97.5 \(\pm\) 0.7 & 74.1 \(\pm\) 2.5 & 64.1 \(\pm\) 5.7 \\ \hline \hline \end{tabular} \end{table} Table 1: The accuracy of the same GNN architecture, trained on the given datasets (GNN) and on the same data where the graph-structure is omitted (\(GNN_{\emptyset}\)), i.e., on empty graphs. Because the solution of \(GNN_{\emptyset}\) is realizable by \(GNN\), and the only difference between the runs is the given graph-structures, this suggests that the decreased performance of \(GNN\) is due to graph-structure overfitting.

**GNN Model.** For the Node Sum task, following the teacher model, we used a 1-layer "student" GNN, using the update rule in Equation 1, with readout and ReLU activations. For the PROTEINS and ENZYMES tasks, we used 3 layers. **Protocol and Results.** On each of the three datasets, we trained the same GNN twice: once on the given graph-structures in the data (\(GNN\)) and once with the graph-structure replaced by an empty graph, so that only the node features are given for training (\(GNN_{\emptyset}\)). The hyperparameters are tuned on a separate validation set. We report test errors averaged over 10 runs with random seeds on a separate holdout test set. More information can be found in the appendix. The difference between these setups shows the effect of providing the graph-structure. Table 1 shows the results of the experiments. In all three tasks, \(GNN_{\emptyset}\) achieves higher accuracy than \(GNN\). This suggests that \(GNN\) made use of the graph in a way that led to lower test accuracy. Therefore, we conclude that \(GNN\) overfitted the graph-structure. ### How Graph Structure Affects Overfitting The previous section showed that in the Node Sum task, where the graph-structures are non-informative and should be ignored, the GNN overfits them instead. Here we further study how this phenomenon is affected by the specific graph-structure used. Thus, we repeat the setup of the Node Sum task but with different graph distributions. **Data.** We used the Node Sum task described in Section 2.2. We created 4 different datasets from this baseline by sampling graph-structures from different graph distributions. The set of node feature vectors remains the same across all the datasets, so the datasets differ only in their graph-structures. The graph distributions we used are: \(r\)-regular graphs (Regular), where all the nodes have the same degree \(r\); star-graphs (Star), where the only connections are between one specific node and all other nodes; the Erdos-Renyi graph distribution (GNP) (Erdos and Renyi, 1959), where the edges are sampled i.i.d with probability \(p\); and the preferential attachment model (BA) (Barabasi and Albert, 1999), where the graph is built by incrementally adding new nodes and connecting them to existing nodes with probability proportional to the degrees of the existing nodes.
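These four graph distributions can be sampled directly with networkx; a short illustrative sketch (ours, with the parameter values used in the results below):

```
import networkx as nx

n = 20  # nodes per graph, as in the Node Sum task

samplers = {
    "Regular": lambda: nx.random_regular_graph(10, n),  # every degree = 10
    "Star":    lambda: nx.star_graph(n - 1),            # one hub, n-1 leaves
    "GNP":     lambda: nx.gnp_random_graph(n, 0.6),     # Erdos-Renyi
    "BA":      lambda: nx.barabasi_albert_graph(n, 3),  # pref. attachment
    "Empty":   lambda: nx.empty_graph(n),
}

for name, sample in samplers.items():
    G = sample()
    degs = [deg for _, deg in G.degree()]
    print(f"{name:8s} edges={G.number_of_edges():3d} "
          f"degrees in [{min(degs)}, {max(degs)}]")
```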
**Protocol.** The GNN model was as in the Node Sum task in the previous section. On each dataset, we varied the training set size and evaluated test errors over 10 runs with random seeds. We note that the GNN has a total of \(\sim\)16,000 parameters; thus it is overparameterized and can fit the training data with perfect accuracy. More information can be found in the appendix. **Results.** For the sake of presentation, we present the results on one instance from each distribution: Regular with \(r=10\), GNP with \(p=0.6\), and BA with \(m=3\). Additional results with more parameters are given in the appendix and show similar trends. Recall that the datasets differ only by the edges and share the same set of nodes and features. Therefore, had the GNN ignored the graph-structures, we would expect to see similar performance for all datasets. As shown in Figure 1(a), the performance differs largely between different graph distributions, which indicates the GNN overfits the graphs rather than ignores them. To further understand what the GNN learns in these cases, we evaluate the ratio between the norms of the topological and root weights. Results are shown in Figure 1(b). It can be seen that for all the graphs except the empty graphs, the ratio is larger than 1, indicating that there is more norm on the topological weights than on the root weights. Specifically, the graph-structure is not ignored. In the case of empty graphs, the topological weights are not trained, and the ratio is 0 due to initialization. We also present the norms of the root and topological weights separately in the appendix.

Figure 1: (a) The learning curves of the same GNN model trained on graphs that have the same node features and only differ in their graph-structure, which is sampled from different distributions. The label is computed from the node features without the use of any graph-structure. If GNNs were to ignore the non-informative graph-structure they were given, similar performance should have been observed for all graph distributions. Among the different distributions, regular graphs exhibit the best performance. (b) The norm ratio between the topological and the root weights along the same runs. Except for the empty graphs, the ratio is always greater than 1, which indicates that more norm is given to the topological weights. On the empty graphs, the topological weights are not trained and the ratio is 0 due to initialization.

Figure 1 suggests that some graph distributions are more robust to graph-structure overfitting. The GNN trained on regular graphs performs best across all training set sizes. The good performance on regular graphs would seem to suggest that it learns to use low topological weights. However, as Figure 1(b) shows, the opposite is actually true. This may seem counter-intuitive, but in the next section we show how this comes about. ## 3 Analyzing Regular Distributions In the previous section, we saw that although GNNs tend to overfit the graph-structure when it should be ignored, regular graphs create more resilient GNNs. In this section, we theoretically examine the solutions learned by linear GNNs trained on regular graphs. We begin by analyzing their implicit bias. We then prove they are guaranteed to extrapolate to any other regular graph distribution. For the sake of clarity, we state all theorems for a one-layer GNN with sum-pooling, no readout, and output dimension 1. For simplicity, we also assume no bias term in our analysis. All the proofs and extensions can be found in the appendix.
### Implicit bias of GNNs To examine the solutions learned by GNNs trained on regular graphs, we utilize Theorem 4 from Gunasekar et al. (2018). This theorem states that homogeneous neural networks trained with GD on linearly separable data converge to the max-margin solution. Translated to our formulation of GNNs trained on \(r\)-regular graphs, we get that GD converges to the max-margin solution of the following quadratic problem: \[\min_{\mathbf{w}_{1},\mathbf{w}_{2}}\ \|\mathbf{w}_{1}\|_{2}^{2}+\|\mathbf{w}_{2}\|_{2}^{2}\quad s.t.\quad y^{(l)}\left[(\mathbf{w}_{1}+r\mathbf{w}_{2})\cdot\sum_{i=1}^{n}\mathbf{x}_{i}^{(l)}\right]\geq 1\quad\forall(G^{(l)},y^{(l)})\in S \tag{2}\] This can be viewed as a max-margin problem in \(\mathbb{R}^{d}\) where the input vector is \(\sum_{i=1}^{n}\mathbf{x}_{i}^{(l)}\). Specifically, although two weight vectors \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) are learned, in the case of \(r\)-regular graphs the GNN only utilizes their weighted sum \(\mathbf{\tilde{w}}=\mathbf{w}_{1}+r\mathbf{w}_{2}\). In the appendix, we present the same view for GNNs trained on general graphs (i.e., not necessarily regular). The next theorem shows that when a GNN is trained using gradient descent on regular graphs, the learned root and topological weights are aligned. **Theorem 3.1** (Weight Alignment).: _Let \(S\) be a set of linearly separable graph examples drawn i.i.d from an \(r\)-regular graph distribution with binary labels. A GNN trained with GD that fits \(S\) perfectly converges to a solution such that \(\mathbf{w}_{2}=r\mathbf{w}_{1}\). Specifically, the root weights \(\mathbf{w}_{1}\) and topological weights \(\mathbf{w}_{2}\) are aligned._ To prove Theorem 3.1, we analyze the KKT conditions for first-order stationary points of Equation 2. ### Extrapolation We now use Theorem 3.1 to show that when learning on regular graphs from teachers that do not use the graph-structure, the GNN will extrapolate well to any other regular graph, including the empty graph. This result is in agreement with our empirical results from the previous section, where learning with regular graphs indeed succeeds for teachers that do not use the graph-structure. **Theorem 3.2** (Extrapolation).: _Let \(S\) be a set of linearly separable graph examples drawn from an \(r\)-regular distribution, with binary labels. Assume that the ground truth function \(f^{*}\) is realizable by a GNN with \(\mathbf{w}_{2}^{*}=0\). Then a GNN that fits \(S\) perfectly will extrapolate to any \(r^{\prime}\)-regular distribution._ To prove Theorem 3.2, we substitute the topological weights in Equation 2 with the aligned weights guaranteed by Theorem 3.1. We get that the weight vector used by the GNN is \(\mathbf{w}_{1}+r^{2}\mathbf{w}_{1}\); on an \(r^{\prime}\)-regular graph it becomes \(\mathbf{w}_{1}+rr^{\prime}\mathbf{w}_{1}\), so its direction does not change when the degree changes. While Theorem 3.2 guarantees extrapolation within the family of regular distributions, we empirically observe that GNNs trained on regular graphs exhibit good extrapolation to other, non-regular graph distributions as well, as shown in Table 2. The GNN trained on 5-regular graphs generalizes perfectly under GNP distribution shifts. It generalizes well to BA distribution shifts, and there is a decrease in performance when tested on star-graphs. These extrapolation performances are non-trivial, as it was previously shown in Yehudai et al. (2020) that when there is a certain discrepancy between the train and test distributions, GNNs may fail to extrapolate.
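The reduction behind Equation 2 (on an \(r\)-regular graph, a one-layer linear GNN with sum-pooling only ever uses the combined vector \(\tilde{\mathbf{w}}=\mathbf{w}_{1}+r\mathbf{w}_{2}\)) is easy to check numerically; a small sketch (ours, with illustrative dimensions):

```
import torch
import networkx as nx

n, d, r = 20, 16, 5
A = torch.tensor(nx.to_numpy_array(nx.random_regular_graph(r, n)),
                 dtype=torch.float32)
X = torch.randn(n, d)
w1, w2 = torch.randn(d), torch.randn(d)

# One-layer linear GNN with sum-pooling and no readout (the Section 3 setting).
out_gnn = (X @ w1 + (A @ X) @ w2).sum()

# Collapsed form: the effective weight vector applied to the summed features.
out_eff = (w1 + r * w2) @ X.sum(dim=0)

print(torch.allclose(out_gnn, out_eff, atol=1e-4))  # True: only w1 + r*w2 matters
```

On non-regular graphs the two terms no longer collapse into a single vector, which is why the learned balance between \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) matters there.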
We next suggest an explanation for these extrapolation results. The following lemma shows that applying a GNN trained on regular graphs to any given graph is equivalent to applying it to an \(r^{\prime}\)-regular graph plus applying it to another \(\Delta\)-graph that depends on \(r^{\prime}\), for some \(0\leq r^{\prime}\leq n-1\). **Lemma 3.3**.: _Let \(f\) be a GNN that perfectly fits a training set of \(r\)-regular graphs. Then applying \(f\) to a graph \(G\) can be written as_ \[f(G)=\underbrace{W_{1}\sum_{i=1}^{n}\mathbf{x}_{i}+rW_{1}\sum_{i=1}^{n}r^{\prime}\mathbf{x}_{i}}_{\text{Regular Component}}+\underbrace{rW_{1}\sum_{i=1}^{n}\Delta_{r^{\prime}}(i)\mathbf{x}_{i}}_{\Delta\text{ Component}}\] _such that \(\Delta_{r^{\prime}}(i)=\deg_{G}(i)-r^{\prime}\), for any \(0\leq r^{\prime}\leq n-1\)._

\begin{table} \begin{tabular}{l|c} \hline Test distribution & Accuracy \\ \hline Regular (r=10) & 100.0 \(\pm\) 0.0 \\ Regular (r=15) & 100.0 \(\pm\) 0.0 \\ \hline GNP (p=0.2) & 100.0 \(\pm\) 0.0 \\ GNP (p=0.5) & 100.0 \(\pm\) 0.0 \\ GNP (p=0.8) & 100.0 \(\pm\) 0.0 \\ \hline BA (m=3) & 98.0 \(\pm\) 1.7 \\ BA (m=15) & 94.2 \(\pm\) 0.9 \\ \hline Star Graph & 73.9 \(\pm\) 1.1 \\ \hline \end{tabular} \end{table} Table 2: Accuracy of a GNN trained on 5-regular graphs and tested on different distribution shifts. The GNN extrapolates perfectly to regular graph distributions, as guaranteed by Theorem 3.2. It also extrapolates well to other distributions, where the lowest performance is obtained when tested on star graphs.

From Lemma 3.3 it follows that if there is \(0\leq r^{\prime}\leq n-1\) such that the \(\Delta\)-component is small with respect to the regular component, then good extrapolation follows from Theorem 3.2. In the appendix, we empirically show that, indeed, when the extrapolation is good, such an \(r^{\prime}\) exists. This suggests that applying the GNN to graphs that are "closer" to regular graphs exhibits better extrapolation. Due to space limitations, this is fully formulated and explained in the appendix. Inspired by these observations, in the next section we present a method to mitigate the tendency of GNNs to overfit the graph-structure when it should be ignored. ## 4 A Method for Reducing Graph Overfitting In the previous sections, we observed that GNNs tend to overfit the graph-structure even if it should be ignored. One simple practical approach to mitigate this issue is to always try learning a model with an empty graph. This would be equivalent to using a DeepSet model [22]. However, there are cases where the given graph-structure still carries some degree of pertinent information, but the GNN fails to exploit it. In such cases, entirely discarding the graph-structure may result in improved performance, but this performance could potentially be improved further if the GNN managed to exploit the given graph information. Consequently, we suggest a graph-editing method that improves the ability of GNNs to exploit the given graph-structures in case they carry useful information, while reducing overfitting to irrelevant structural information. We show that our method consistently improves performance over the originally given graphs on both synthetic and real-world data. **The R-COV Graph Editing Method.** The previous sections showed that regular graphs exhibit robustness to the tendency of GNNs to overfit graph-structures that should be ignored. Therefore, it may be beneficial to edit a given graph to be regular.
We suggest a method that preserves the original graph-structures while improving the ability of the GNN to exploit them when they are useful for the task and to disregard them when they are not. Unfortunately, it is not clear how to turn a given graph into a regular graph without removing edges. On the other hand, it is possible to make the graphs regular by completing them into full graphs. However, this approach may be computationally expensive or even infeasible when learning on large graphs. Our method, Reduced COV (R-COV), makes the given graphs "more similar" to regular graphs by reducing their coefficient of variation (COV), i.e., the ratio between the standard deviation and the mean of the node degrees. The COV of regular graphs is 0. Different techniques can be used to reduce the COV of a given graph. In our experiments, we reduce the COV down to a certain threshold by adding edges sampled randomly between nodes of low degree: we iteratively add to each graph a sampled batch of non-existing edges that have at least one end-point among the 10 lowest-degree nodes, until the COV drops below the threshold. For efficiency, we used batches of size 3 for small graphs (up to 100 nodes) and batches of size 50 for large graphs (more than 100 nodes). Of course, we do not want to remove the information about the original graph. Thus, we add features to edges that specify whether they were part of the original graph or were added by R-COV. **Implementation Details.** The GNN's update rule was revised to incorporate edge-features, as follows. Let \(e_{ij}\) denote the edge-features on edge \(ij\). Then the GNN update is given by: \[h_{i}^{(k)}=\sigma\left(W_{1}^{(k)}h_{i}^{(k-1)}+W_{2}^{(k)}\sum_{j\in N(i)}\left(h_{j}^{(k-1)}+\phi(e_{ij})\right)\right) \tag{3}\] Here \(\phi\) is an MLP with ReLU activations. We used edge feature values of 1 and 0.5 for the original edges and the edges added by R-COV, respectively. For the R-COV method, we treated the threshold as a hyper-parameter and tested the values \(\{0.15,0.1,0.05\}\). We did not use lower values in order to avoid graphs that are too dense.
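A minimal sketch of the editing loop just described (ours; the batch size and the 10-lowest-degree heuristic follow the text, while the helper names and the example graph are illustrative):

```
import random
import networkx as nx

def cov(G):
    """Coefficient of variation of the node degrees (0 for regular graphs)."""
    degs = [deg for _, deg in G.degree()]
    mean = sum(degs) / len(degs)
    var = sum((deg - mean) ** 2 for deg in degs) / len(degs)
    return (var ** 0.5) / mean if mean > 0 else 0.0

def r_cov(G, threshold=0.1, batch=3):
    """Add edges touching the 10 lowest-degree nodes until COV <= threshold.
    Added edges are marked so the original structure stays recoverable."""
    G = G.copy()
    nx.set_edge_attributes(G, 1.0, "feat")       # original edges: feature 1
    while cov(G) > threshold:
        low = sorted(G.nodes, key=G.degree)[:10]
        candidates = [(u, v) for u in low for v in G.nodes
                      if u != v and not G.has_edge(u, v)]
        if not candidates:
            break                                # graph is already complete
        for u, v in random.sample(candidates, min(batch, len(candidates))):
            G.add_edge(u, v, feat=0.5)           # R-COV edges: feature 0.5
    return G

G = nx.barabasi_albert_graph(50, 3)
print(f"COV before: {cov(G):.3f}, after: {cov(r_cov(G)):.3f}")
```

The edge-feature marking is what lets the GNN of Equation 3, through \(\phi\), still distinguish the original topology from the synthetic R-COV edges.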
### Experiments on Synthetic Data We evaluated R-COV1 on four synthetic tasks: one task where any graph-structure is non-informative (Node Sum), two tasks where the graph-structure is informative and the label relies fully on it (Edges, Motifs), and a task where the label can be computed using either the graph-structure or the node features (Mixed Information), i.e., the graph-structure is informative but can also be ignored. In all these datasets, each node has a constant 1 feature and 16 random features drawn i.i.d from \(\mathcal{N}(0,1)\). Footnote 1: Code can be found in github.com/mayabechlerspeicher/Graph_Neural_Networks_Overfit_Graphs **Node Sum.** This is the same task described in Section 2.2. The label depends only on the node features. We used graphs over 20 nodes drawn randomly from a GNP(\(p=0.5\)) distribution. **Edges.** In this task, the goal is to determine whether the number of edges is above a certain threshold. We use the same graphs from the Node Sum task. The label of the graph is 1 if it has at least 190 edges (the average number of edges in the dataset) and 0 otherwise. In this task, the label relies fully on the graph-structure. Therefore, the graph-structures are informative in the sense that they hold valuable information for the predictive task. **Motifs.** We used the synthetic motif dataset from Luo et al. (2020). In this dataset, each graph is a random Barabasi-Albert graph over 20 vertices. Half of the graphs are connected to a 5-node house-structured graph, and the rest are connected to a 5-node cycle graph. The task is a binary classification of the graph according to the type of structure connected to it. In this task, the graph-structure is informative and the label relies fully on it. **Mixed Information.** Here the task is again to determine whether the number of edges is above a certain threshold, as in Edges. This data is designed to allow the GNN to compute the label either from the node features or from the graph-structure. We used the same dataset from Edges, with additional node features that indicate, for each node, to which nodes in the graph it is connected (using a fixed node ordering). Therefore, the number of edges in the graph can be recovered from the node features, and so can the graph label. We then created another dataset that shares the same extended node features, but in which the graph-structures are replaced with non-informative graphs that should be ignored. We then wish to see the performance of R-COV when applied to the dataset with non-informative graphs, relative to the dataset with the informative graphs. More details about the node features and the non-informative graph generation can be found in the appendix. **Protocol.** We evaluate the model in Equation 3 on empty graphs (Empty Graphs), on the original graphs in the dataset (Original Graphs), and on the original graphs with R-COV (Original Graphs + R-COV). For each task, we sample the training, validation, and test data from the same distribution. For Node Sum and Edges, we used a one-layer student GNN with readout, 64 hidden channels, and ReLU activations, following the teacher GNN. For the Motifs dataset, two GNN layers were used (because one layer did not fit the training data well). In the Mixed Information task, following our findings in Section 2.2, we expect the GNN to overfit the non-informative graphs. Therefore, we wish to see to what extent R-COV applied to the non-informative graphs is able to improve performance, relative to the case where the informative structures, on which the label is computed, are given as the graphs. To allow a more refined analysis, we evaluated this task on varying training set sizes. All models were tuned on a validation set and tested 10 times with random seeds using the best configuration found on the validation set. More details about the training and the hyper-parameters can be found in the appendix. **Results.** The results of Node Sum, Edges, and Motifs are presented in Table 3. Across all tasks, R-COV significantly improves over the original graphs. The results of the Node Sum task are particularly interesting. The label depends only on the node features, yet with R-COV the GNN manages to significantly improve performance over empty graphs. This suggests that the GNN exploits the graph structure although it is not informative. One way in which this can happen is that the graph is used to pool information across the nodes more efficiently, or to implement other non-linearities that are invariant to node permutation. Notice that for the Edges and Motifs tasks, the task is not realizable when the graphs are empty, which explains the low performance in this case. Figure 2 shows the learning curves of each dataset in the Mixed Information task. The GNNs trained on the informative graph-structures and on the empty graph-structures perform similarly.
As expected, due to GNNs overfitting the graph-structures, when non-informative graph-structures are given, the performance decreases and does not recover even with \(10k\) samples. When R-COV is applied to the non-informative graph-structures, the performance significantly improves when at least 100 training examples are used. When \(3k\) examples are given, R-COV matches the performance of the case where the informative graphs are given.

\begin{table} \begin{tabular}{c|c|c|c} \hline \hline & Node Sum & Edges & Motifs \\ \hline Empty Graphs & 97.5 \(\pm\) 0.7 & 50.7 \(\pm\) 1.1 & 50.1 \(\pm\) 0.9 \\ Original Graphs & 94.5 \(\pm\) 0.9 & 89.9 \(\pm\) 0.7 & 85.0 \(\pm\) 0.6 \\ \hline Original Graphs + R-COV & 98.9 \(\pm\) 0.9 & 100.0 \(\pm\) 0.0 & 95.0 \(\pm\) 0.9 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy on the synthetic tasks. Across all tasks, R-COV significantly improves over the original graphs.

### Experiments on Real-World Data Next, we further evaluate R-COV on the real-world datasets used in Errica et al. (2022). All these datasets are publicly available and are frequently used in the GNN literature. **PROTEINS, ENZYMES, NCI & DD** (Shervashidze et al., 2011) are datasets of chemical compounds. In each dataset, the goal is to classify the compounds according to some property of interest. **IMDB-B, IMDB-M, COLLAB, REDDIT-B, REDDIT-5K** (Yanardag and Vishwanathan, 2015) are social network datasets. Following Errica et al. (2022), we added a node-degree feature for all the social network datasets. More information on the datasets can be found in the appendix. **Evaluation.** We used the protocol and implementation of Errica et al. (2022), who performed a thorough comparison of different GNNs, including GNNs trained on empty graphs. We evaluated the same model in Equation 3 twice: once on the original graphs in the datasets, and once with R-COV applied to the original graphs. The final reported result is the average of 30 runs (10 folds and 3 random seeds). We also included the accuracy of the best model from Errica et al. (2022) among the 5 models they compared. Additionally, we included the accuracy reported on empty graphs from Errica et al. (2022). When the information is available, we also included the best accuracy reported in Alon and Yahav (2021), where the best models from Errica et al. (2022) were trained on the given graphs with the last layer applied on a full graph (FA), to allow long-distance information flow. Additional training details, including the hyper-parameter grid, are provided in the appendix. **Results.** Across all datasets, R-COV significantly improves over the original graphs. Particularly intriguing outcomes are obtained on the PROTEINS and IMDB-M datasets. On these two datasets, superior performance is attained when learning over empty graphs in comparison to the provided graphs. Nonetheless, R-COV improves performance also with respect to the empty graphs. This observation suggests that the structural information inherent in the data is indeed informative, yet not fully optimal, as evidenced by the GNN's limited capacity to exploit it. ## 5 Discussion and Practical Implications In practice, the graph layout is typically determined based on a prior understanding of the task at hand, and it is common to assess multiple layouts. In some cases, a natural graph-structure inherently exists within the data, such as in social networks, where the network connections naturally define the graph layout. Nevertheless, it is usually not clear in advance if these graph layouts are informative for the task.
Given that certain layouts could provide valuable information for the task while others might not, and that this distinction isn't always clear beforehand, this aspect drove our research. Indeed, we found that the definition of the graph-structure, typically determined by users, emerges as a pivotal factor in performance outcomes due to the tendency of GNNs to overfit the provided graphs. This revelation opens up a fascinating avenue for further research into the significance of topological information during the training of GNNs. Understanding how GNNs respond to different structural layouts and why certain graph-structures are more effective than others could potentially revolutionize the way we design and train these models. ## 6 Conclusion In this study, we showed that although GNNs possess the capability to disregard the provided graph-structures when needed, they don't. Instead, GNNs tend to overfit the graph-structures, which results in reduced performance. We found that among different graph distributions, regular graphs are more robust to this overfitting. We analyzed the implicit bias of gradient-descent learning of GNNs, as well as their extrapolation abilities, in this setting. Our study shows that in some cases the graph structure hurts the performance of GNNs; therefore graph selection is of great importance, as is having a model that knows when to ignore the graph. Motivated by our empirical and theoretical findings, we suggested R-COV, a graph-editing method that reduces graph-structure overfitting. We demonstrated on synthetic and real datasets that R-COV consistently enhances performance. Taken together, our results demonstrate the dramatic effect of the input graph-structure on the performance of GNNs. In future work, it will be interesting to obtain a more detailed analysis of the inductive bias of GNNs, for example in cases where both the graph structure and the node features are informative. ## 7 Acknowledgements This work was supported by a grant from the Tel Aviv University Center for AI and Data Science (TAD) and by the Israeli Science Foundation research grant 1186/18.

\begin{table} \begin{tabular}{l|c|c|c|c|c} \hline & IMDB-B & IMDB-M & COLLAB & REDDIT-B & REDDIT-5K \\ \hline Empty Graphs† & 60.7 ± 2.5 & 49.1 ± 3.5 & 70.2 ± 1.5 & 82.2 ± 3.0 & 52.2 ± 1.5 \\ Original Graphs† & 71.2 ± 3.9 & 48.5 ± 3.3 & **75.6 ± 2.3** & 89.9 ± 1.9 & **56.1 ± 1.7** \\ \hline Original Graphs§ & 68.2 ± 2.1 & 47.7 ± 0.9 & 73.5 ± 1.9 & 83.9 ± 1.5 & 50.0 ± 2.1 \\ Original Graphs§ + R-COV & **74.1 ± 3.3** & **50.1 ± 3.5** & 74.7 ± 1.9 & **90.2 ± 2.1** & 52.5 ± 2.0 \\ \hline \end{tabular} \end{table} Table 4: Accuracy on real-world tasks. On a fixed architecture using Equation 3, R-COV significantly improves performance across all datasets. § - Equation 3, † - Previously reported in Errica et al. (2022), ‡ - Previously reported in Alon and Yahav (2021).
2309.13302
Gaining the Sparse Rewards by Exploring Lottery Tickets in Spiking Neural Network
Deploying energy-efficient deep learning algorithms on computational-limited devices, such as robots, is still a pressing issue for real-world applications. Spiking Neural Networks (SNNs), a novel brain-inspired algorithm, offer a promising solution due to their low-latency and low-energy properties over traditional Artificial Neural Networks (ANNs). Despite their advantages, the dense structure of deep SNNs can still result in extra energy consumption. The Lottery Ticket Hypothesis (LTH) posits that within dense neural networks, there exist winning Lottery Tickets (LTs), namely sub-networks, that can be obtained without compromising performance. Inspired by this, this paper delves into the spiking-based LTs (SLTs), examining their unique properties and potential for extreme efficiency. Then, two significant sparse ***Rewards*** are gained through comprehensive explorations and meticulous experiments on SLTs across various dense structures. Moreover, a sparse algorithm tailored for spiking transformer structure, which incorporates convolution operations into the Patch Embedding Projection (ConvPEP) module, has been proposed to achieve Multi-level Sparsity (MultiSp). MultiSp refers to (1) Patch number sparsity; (2) ConvPEP weights sparsity and binarization; and (3) ConvPEP activation layer binarization. Extensive experiments demonstrate that our method achieves extreme sparsity with only a slight performance decrease, paving the way for deploying energy-efficient neural networks in robotics and beyond.
Hao Cheng, Jiahang Cao, Erjia Xiao, Mengshu Sun, Renjing Xu
2023-09-23T08:24:36Z
http://arxiv.org/abs/2309.13302v4
# Gaining the Sparse Rewards by Exploring Binary Lottery Tickets in Spiking Neural Networks ###### Abstract Spiking Neural Network (SNN), as a brain-inspired strategy, receives lots of attention because of the high-sparsity and low-power properties derived from its inherent spiking information state. To further improve the efficiency of SNN, some works declare that the Lottery Tickets (LTs) Hypothesis, which indicates that an Artificial Neural Network (ANN) contains a subnetwork that can match the performance of the original network, also holds in SNN. However, the spiking information handled by SNN has a natural similarity and affinity with binarization in sparsification. Therefore, to further explore SNN efficiency, this paper focuses on _(1) the presence or absence of LTs in binary SNNs, and (2) whether the spiking mechanism is a superior strategy for handling binary information compared to simple model binarization_. To verify these assumptions, a sparse training method is proposed to find Binary Weights Spiking Lottery Tickets (BinW-SLT) under different network structures. Through comprehensive evaluations, we show that BinW-SLT could attain up to \(+5.86\%\) and \(+3.17\%\) improvement on CIFAR-10 and CIFAR-100 compared with binary LTs, as well as achieve 1.86x and 8.92x energy saving compared with full-precision SNN and ANN. Hao Cheng\({}^{1}\), Jiahang Cao\({}^{1}\), Erjia Xiao\({}^{1}\), Pu Zhao\({}^{2}\), Mengshu Sun\({}^{3}\), Jiaxu Wang\({}^{1}\), Jize Zhang\({}^{4}\), Xue Lin\({}^{2}\), Bhavya Kailkhura\({}^{5}\), Kaidi Xu\({}^{6}\), Renjing Xu\({}^{1}\)\({}^{1}\) The Hong Kong University of Science and Technology (Guangzhou) \({}^{2}\) Northeastern University, \({}^{3}\) Beijing University of Technology \({}^{4}\) The Hong Kong University of Science and Technology \({}^{5}\) Lawrence Livermore National Laboratory, \({}^{6}\) Drexel University Spiking Neural Networks, Lottery Tickets Hypothesis, Binary Neural Network ## 1 Introduction Although many deep learning algorithms have performed well on different tasks in the current AI research field, some practical problems still need to be resolved. Significantly, the redundancy of model structures and the computational burden greatly hinder the democratization of AI. Model compression, as a very effective remedy, has been widely combined with various existing AI algorithms. Model pruning [1] and quantization [2], two representative compression methods, can give the original network a more lightweight structure and facilitate its implementation on resource-constrained application systems, such as smartphones or Field Programmable Gate Arrays (FPGAs). For model pruning, methods initially need to go through 1) original model training, 2) model pruning, and 3) post-fine-tuning to find a sub-model according to various pruning rules, which can be categorized as irregular pruning [1] and regular pruning [3; 4; 5]. Regular pruning can be further classified into filter pruning [3], column pruning [4], and block-based pruning [5]. Recently, ANNs have been proven to contain sparser subnetworks that do not sacrifice the performance of the original network, a finding known as the LTs Hypothesis [6]. This finding makes sparse training [7; 8; 9] possible, which makes it more convenient and efficient to find sub-networks without the need for complex combinations of pretraining, pruning, and retraining. Model quantization [10] can quantize the original 32-bit model down to 16-bit or even 2-bit states.
Among them, converting 32-bit to 2-bit is the extreme case of quantization, termed the Binary Neural Network (BNN) [10]. When adopting quantization or binarization to obtain a sparse model, the processing objects can be the weights, the activations, or both; however, quantizing or binarizing only the weights incurs a smaller performance loss than handling both. Furthermore, regarding the combination of LTs with binarization, the Multi-Prize Lottery Tickets Hypothesis (MPTs) [9] certifies that lottery tickets, i.e., subnetworks, also exist in BNNs. SNN, as the most promising third-generation neural network, has received lots of attention due to its distinctive properties of high biological plausibility, temporal information processing capability, inherent binary (spiking) information processing superiority, and energy saving. For training SNNs, there are three main approaches: (1) ANN-to-SNN conversion [11], (2) direct training [12], and (3) local training [13]. Although SNN is more computationally and energy efficient than traditional ANN, further improving its sparsity and efficiency is important for scalability in resource-limited hardware scenarios. There has already been some work on SNN pruning [14] and binarization [15; 16]. Additionally, Kim et al. [17] claimed that LTs also exist in SNNs. However, whether LTs exist in the binary case of SNNs is the pending issue of most interest to this paper. Furthermore, because of the inherent advantage of SNNs in processing spiking information, which can be viewed as a special binary form, the specific performance of binarized spiking LTs is also of great interest. Therefore, two important issues to be investigated are how to discover Binary Weights Spiking Lottery Tickets (BinW-SLT) expeditiously and how to analyze the properties of BinW-SLT comprehensively. We propose a new sparse training method that can efficiently find BinW-SLT without model weight training to solve these two issues. With this training method, we can show that LTs exist in the binary SNN case. Furthermore, compared to an ANN with simultaneously binary weights and activations, BinW-SLT shows an advantage in processing binary information and attains up to \(+5.86\%\) and \(+3.17\%\) improvement on CIFAR-10 and CIFAR-100 compared with the traditional sub-model of a BNN with binary weights and activations (BNN-BinAct) [6, 10]. BinW-SLT also maintains a computational energy advantage of over 1.86x and 8.92x compared to full-precision SNN and ANN, respectively. ## 2 Methods and Experiments ### Binary Weights Spiking Lottery Tickets (BinW-SLT) **SNN Fundamentals:** Different from ANN, SNN specializes in the processing of spiking information. In this paper, we adopt the widely used Leaky Integrate-and-Fire (LIF) model [18], which is suitable for characterizing the dynamic process of spike generation and can be defined as: \[\tau\frac{\mathrm{d}V(t)}{\mathrm{d}t}=-(V(t)-V_{reset})+I(t) \tag{1}\] where \(I(t)\) represents the input synaptic current at time \(t\), which charges the neuron to produce a membrane potential \(V(t)\), and \(\tau\) is the time constant. When the membrane potential exceeds the threshold \(V_{th}\), the neuron triggers a spike and resets its membrane potential to a value \(V_{reset}\) (\(V_{reset}<V_{th}\)). The LIF neuron achieves a balance between computing cost and biological plausibility. In practice, the dynamics must be discretized to facilitate inference and training.
The discretized version of the LIF model can be described as: \[U[n]=e^{-\frac{1}{\tau}}V[n-1]+\left(1-e^{-\frac{1}{\tau}}\right)I[n] \tag{2}\] \[S[n]=\Theta(U[n]-V_{th}) \tag{3}\] \[V[n]=U[n](1-S[n])+V_{reset}S[n] \tag{4}\] where \(n\) is the discrete timestep, \(U[n]\) is the membrane potential before reset, \(S[n]\) denotes the output spike, which equals 1 when there is a spike and 0 otherwise, \(\Theta(x)\) is the Heaviside step function, and \(V[n]\) represents the membrane potential after triggering a spike. **Finding Methods:** To explore BinW-SLT, we build on the effective MPTs method [9], which can find binary spiking LTs without any weight training. The main work is to transfer MPTs to the binary spiking setting. This transfer involves two issues that need special attention: _(1) SNNs do not need to binarize activations, since the LIF mechanism for processing spiking information is already a special case of binarization; (2) compared with ANNs, SNNs have additional parameters, such as the timestep \(T\) and the decay rate \(\lambda\), that affect the final performance and need to be considered._ Therefore, to make the original MPTs satisfy these two issues, the optimization objective can be rewritten as: \[\min_{\alpha}\left\|g\big(LIF(x;\alpha(M\odot\mathrm{sign}(w)))\big)-f(x;W^{*})\right\| \tag{5}\] where \(\alpha\in\mathbb{R}\) is the gain term necessary for binary subnetworks to perform well, and \(f(x;W^{*})\) is the target original network with optimized weights \(W^{*}\) that we wish to approximate. The specific BinW-SLT finding procedure is presented in Algorithm 1. The optimization objects are still the scores \(s\) and the mask \(M\); however, compared with MPTs, some updates that adapt the procedure to SNNs are applied. In the input, Step-1 to Step-5 initialize the different parameters. The best spiking LTs are chosen in Step-6 to Step-13, which update and sort the pruning scores \(s\) and update the mask \(M\). Finally, the sparse BinW-SLT \(g(LIF(x;\alpha(M\odot sign(w))))\) with the best performance is screened out.

```
1: Input: SNN \(g(LIF(x;))\) with 2-bit spiking activation; Loss function \(L\); Training data \(\{(x^{(i)},y^{(i)})\}_{i=1}^{N}\); Dataset size \(N\); SNN parameters \(N_{snn}\).
2: Initialize SNN: Pruning rate \(r_{p}\); Timestep \(t\); Decay rate \(\lambda\).
3: Initialize spiking LTs parameters: SNN weights \(w\); Pruning scores \(s\); Layerwise masks \(M\in\{0,1\}\).
4: Initialize spiking LTs weights: \(sign(w)\)
5: Initialize gain term: \(\alpha\leftarrow||M\odot w||/||M||\)
6: for k = 1 to \(N_{epochs}\) do
7:   for \(r_{p}\), \(t\) and \(\lambda\) do
8:     \(s\gets s-\eta\nabla_{s}\ell(L(\alpha\cdot M\odot sign(w)))\)
9:     \(Proj_{[0,1]}\leftarrow\) Sorting \(s\) according to \(r_{p}N_{snn}\)
10:    \(M\gets M\odot Proj_{[0,1]}\), \(\alpha\leftarrow||M\odot w||_{1}/||M||_{1}\)
11:  end for
12: end for
13: Output: Return \(g(LIF(x;\alpha(M\odot sign(w))))\)
```
**Algorithm 1** Finding BinW-SLT

### Experiments **Experimental Setting:** This paper uses various SNN structures as basic models on two popular datasets, CIFAR-10 and CIFAR-100. On CIFAR-10, SNNs based on Conv-4 (with 4 convolutional layers), VGG-9, VGG-11, and ResNet-19 are utilized. On CIFAR-100, we choose VGG-11 and ResNet-19 SNNs to explore our BinW-SLT. For training the SNNs, the Adam optimizer with a learning rate of \(0.1\) and surrogate gradients [12] are adopted. In the BinW-SLT finding process, we adopt commonly used LIF neurons with timestep \(T=4\) and decay rate \(\lambda=0.99\).
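As an aside, the discretized dynamics in Equations 2-4 take only a few lines to implement; a minimal PyTorch sketch (ours, inference-only; training additionally requires a surrogate gradient for \(\Theta\), and the actual experiments use SpikingJelly's LIF neuron):

```
import torch

def lif_forward(I, tau=2.0, v_th=1.0, v_reset=0.0):
    """Run the discretized LIF neuron of Eqs. 2-4 over T timesteps.
    I: (T, ...) input currents; returns the (T, ...) binary spike train."""
    decay = torch.exp(torch.tensor(-1.0 / tau))   # per-step membrane decay
    v = torch.zeros_like(I[0])
    spikes = []
    for n in range(I.shape[0]):
        u = decay * v + (1 - decay) * I[n]        # Eq. 2: charge
        s = (u >= v_th).float()                   # Eq. 3: fire (Heaviside)
        v = u * (1 - s) + v_reset * s             # Eq. 4: reset
        spikes.append(s)
    return torch.stack(spikes)

out = lif_forward(2.0 * torch.rand(4, 8))  # T=4 timesteps, 8 neurons
print(out)  # entries are 0/1: spiking yields binary activations "for free"
```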
Additionally, to explore the relationship between different SNN parameters and the performance of BinW-SLT, \(T=1,2,6,8\) and \(\lambda=0.8,0.7,0.6,0.5,0.4,0.3,0.2\) are further adopted. All experiments are implemented based on PyTorch and SpikingJelly [19]. **General Performance of BinW-SLT:** According to sub-figure (a) in Fig. 2, on CIFAR-10, across most pruning-rate regions BinW-SLT attains better accuracy than the BNN with binary activations (BNN-BinAct). At a pruning rate of \(40\%\) (Table 1), compared to BNN-BinAct on CIFAR-10, BinW-SLT achieves at most \(+5.61\%\), \(+5.79\%\), \(+4.68\%\), and \(+5.86\%\) improvement on CONV-4, VGG-9, VGG-11, and ResNet-19, respectively. On CIFAR-100 (Table 2), BinW-SLT attains \(+3.17\%\) and \(+3.31\%\) enhancement on VGG-11 and ResNet-19. We also adopt the full-precision original ANN and original SNN as baselines. For CIFAR-10 in Fig. 2, from low to high pruning rates, BinW-SLT can even exceed the full-precision SNN in most cases and comes closest to the full-precision ANN. Specifically, compared with the full-precision SNN/ANN on CONV-4, VGG-9, VGG-11, and ResNet-19 for CIFAR-10, BinW-SLT with \(T=4\) yields accuracy changes of \(+0.24\%/-0.95\%\), \(+0.05\%/-1.35\%\), \(+0.56\%/-0.34\%\), and \(+0.04\%/-0.23\%\) at a pruning rate of \(40\%\). For CIFAR-100 (Table 2), compared with the SNN, BinW-SLT also maintains a relatively small performance decrease, and the gaps on VGG-11 and ResNet-19 are only \(-1.2\%\) and \(-0.54\%\). Additionally, for a comprehensive analysis, we also apply the same corresponding model with timestep \(T=1\), BinW-SLT (T=1), as a fairer comparison object in sub-figure (a) of Fig. 2. In this case, consistent with the scenarios analyzed above, BinW-SLT in each structure still exceeds BNN-BinAct and approaches the full-precision SNN and ANN. According to the results in Fig. 2, Table 1, and Table 2, we can draw a straightforward conclusion: _in the case of extreme sparsity that involves both pruning and binarization, a spiking mechanism like LIF can better preserve the original information of activations than 0-1 processing of activation values._ This is supported by the better performance of BinW-SLT compared with BNN-BinAct under different pruning rates, from \(r_{p}=20\%\) to \(r_{p}=90\%\), and timesteps, \(t=4\) or \(1\). To give a more intuitive analysis, we draw Fig. 1 to illustrate the activation maps of the full-precision ANN, BNN-BinAct, and BinW-SLT from left to right. BinW-SLT obtains a higher-level summary of the ANN activation map (left). Compared with BNN-BinAct, which performs a naive binary operation on the ANN, the activation map of BinW-SLT is sparser and more representative of the original information. **The Effect of Timestep and Decay Rate:** In Fig. 2 (b) and (c), we explore the effect of the SNN parameters, timestep \(t\) and decay rate \(\lambda\), on the performance of BinW-SLT. The experiments adopt CONV-4, VGG-9, VGG-11, and ResNet-19 with \(r_{p}=40\%,50\%,60\%\) as the basic structures, and \(t=4\) and \(\lambda=0.99\) as the main parameters. During exploration, while keeping the other parameters constant in turn, we select \(T=1,2,4,6,8\) and \(\lambda=0.99,0.8,0.7,0.6,0.5,0.4,0.3,0.2\) as the main research objects.
According to Fig. 2 (b), given that the performance of BinW-SLT is better than BNN-BinAct, the change of timestep \(t\) can bring up to \(+3.3\%\) (CONV-4), \(+1.22\%\) (VGG-9), \(+3.48\%\) (VGG-11), and \(+5.86\%\) (ResNet-19) increase compared to the full-precision SNN. This increase also confirms the theoretical role of \(t\) in SNNs, which can be viewed as processing the same input representation multiple times. In Fig. 2 (c), as the decay rate changes from \(0.99\) to \(0.2\), BinW-SLT still consistently outperforms BNN-BinAct and also maintains a performance advantage over the full-precision SNN under certain combinations of \(r_{p}\) and \(\lambda\). Concretely, BinW-SLT exhibits at most \(+2.69\%\) (CONV-4), \(+1.23\%\) (VGG-9), \(+1.91\%\) (VGG-11), and \(+4.33\%\) (ResNet-19) performance fluctuation. Additionally, regardless of the pruning rate \(r_{p}\), the effect of \(\lambda\) on the performance of BinW-SLT is related to the model size; in a word, a smaller model can tolerate a smaller decay rate \(\lambda\). Initially, CONV-4, VGG-9, VGG-11, and ResNet-19 experience some degree of accuracy rise or stabilization; however, as \(\lambda\) decreases further, the accuracy eventually declines under the different \(r_{p}\) values. From the smallest to the largest model, the corresponding critical \(\lambda\) values are the increasing values \(0.3\), \(0.4\), \(0.6\), and \(0.7\).

\begin{table} \begin{tabular}{c c c} \hline \hline **Architecture** & **Method** & **CIFAR-10 Acc(\%)** \\ \hline \multirow{3}{*}{_CONV-4_ (2.43M)} & ANN/SNN & 84.60/83.41 \\ & _BNN-BinAct_ & _79.21_ \\ & **BinW-SLT** & **83.65 (+5.61)** \\ \hline \multirow{3}{*}{_VGG-9_ (2.26M)} & ANN/SNN & 86.70/85.30 \\ & _BNN-BinAct_ & _80.68_ \\ & **BinW-SLT** & **85.35 (+5.79)** \\ \hline \multirow{3}{*}{_VGG-11_ (5.27M)} & ANN/SNN & 88.10/87.20 \\ & _BNN-BinAct_ & _83.84_ \\ & **BinW-SLT** & **87.76 (+4.68)** \\ \hline \multirow{3}{*}{_ResNet-19_ (12.63M)} & ANN/SNN & 92.05/91.78 \\ & _BNN-BinAct_ & _86.74_ \\ & **BinW-SLT** & **91.82 (+5.86)** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results on CIFAR-10. BinW-SLT is mainly compared with BNN-BinAct.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Architecture** & **Method** & **OPs (G)** & **Power (mJ)** & **CIFAR-100 Acc(\%)** \\ \hline \multirow{4}{*}{_VGG-11_ (5.27M)} & ANN & 0.209 & 0.966 & 68.64 \\ & SNN & 0.140 & 0.131 & 62.94 \\ & _BNN-BinAct_ & _0.209_ & _0.966_ & _58.57_ \\ & **BinW-SLT** & **0.132** & **0.127** & **61.74 (+3.17)** \\ \hline \multirow{4}{*}{_ResNet-19_ (12.63M)} & ANN & 2.223 & 10.225 & 86.74 \\ & SNN & 1.778 & 1.609 & 64.27 \\ & _BNN-BinAct_ & _2.223_ & _10.225_ & _60.60_ \\ & **BinW-SLT** & **1.689** & **1.528** & **63.73 (+3.31)** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results on CIFAR-100. BinW-SLT is mainly compared with BNN-BinAct. OPs (G) and Power (mJ) correspond to computing and power consumption.

Figure 1: Activation maps of the 128 last-layer neuron representations of (a) ANN, (b) BNN-BinAct, (c) BinW-SLT.
**The Effect of Fine-Tuning:** Since finding BinW-SLT does not involve weight training, BinW-SLT can also be regarded as a unique initialization strategy for the discovery of sparse SLTs, and its performance can be further improved by fine-tuning. In the right subfigure of Fig. 3, fine-tuned BinW-SLT at different \(r_{p}\) produces a tangible improvement on top of the original trend.

**Theoretical Power Consumption:** To better understand the effect of BinW-SLT on energy usage, we estimate the theoretical power consumption on neuromorphic chips based on previous studies [20]. We use OPs as the metric for computational cost. For ANNs, OPs correspond to floating point operations (FLOPs), while SNNs perform synaptic operations (SOPs), defined as [21]:

\[SOPs(l)=fr\times T\times FLOPs(l) \tag{6}\]

where \(l\) is a block/layer, \(fr\) is the firing rate of \(l\) and \(T\) is the timestep. Assuming BinW-SLT is implemented on 45nm hardware [21], where each FLOP and SOP cost \(4.6pJ\) and \(0.9pJ\) respectively, the theoretical energy consumption is calculated by:

\[E_{SNN} =E_{flop}\times\mathrm{FLOP}^{1}_{\mathrm{SConv}}+E_{sop}\times\left(\sum_{n=2}^{N}\mathrm{SOP}^{n}_{\mathrm{SConv}}+\sum_{m=1}^{M}\mathrm{SOP}^{m}_{\mathrm{SFC}}\right) \tag{7}\]

where \(N\) and \(M\) denote the numbers of spike convolutional (\(SConv\)) and fully connected (\(SFC\)) layers. We first sum up the SOPs of all \(SConv\) layers (except the first layer) and all \(SFC\) layers and multiply by \(E_{sop}\). For the first convolutional layer of the SNN, we calculate the energy consumption using FLOPs, because the spike encoding operation is performed there. We report the results of our BinW-SLT, BNN-BinAct, the full-precision ANN, and the SNN in Table 2. Also, as exemplified in Fig. 3, BinW-SLT enjoys an energy advantage of up to 1.86x and 8.92x compared with the standard SNN and ANN.

## 3 Conclusion

The main focus of this paper is the Binary Weights Spiking Lottery Ticket, abbreviated as BinW-SLT. The analysis of BinW-SLT certifies that (1) LTs exist in SNNs with binary weights; (2) the spiking mechanism can generate a higher-level binary summary and produce binary information that is superior to simple 0-1 processing of activations; and (3) as a special kind of initialization, the performance of BinW-SLT can be further enhanced by fine-tuning. Additionally, we estimate the theoretical energy consumption, and the results indicate that our BinW-SLT model can achieve energy savings of up to 1.86x and 8.92x compared to the full-precision SNN and ANN, respectively.

Figure 3: Left: Comparison of normalized compute energy computed using (a) ANN, (b) SNN without any pruning, and (c) SNN with Spiking Lottery Ticket; Right: BinW-SLTs and their Fine-Tuning (FT) performance under different pruning rates.

Figure 2: Comparison of CIFAR-10 performance under the modification of (a) Pruning Rate; (b) Timestep; (c) Decay Rate.

## Acknowledgement

BK's work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No. 22-DR-009.
2309.11515
Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach
With the increasing frequency of high-profile privacy breaches on various online platforms, users are becoming more concerned about their privacy. Recommender systems are the core component of online platforms for providing personalized service; consequently, their privacy preservation has attracted great attention. As the gold standard of privacy protection, differential privacy has been widely adopted to preserve privacy in recommender systems. However, existing differentially private recommender systems only consider static and independent interactions, so they cannot be applied to sequential recommendation, where behaviors are dynamic and dependent. Meanwhile, little attention has been paid to the privacy risk of sensitive user features; most existing methods only protect user feedback. In this work, we propose a novel DIfferentially Private Sequential recommendation framework with a noisy Graph Neural Network approach (denoted as DIPSGNN) to address these limitations. To the best of our knowledge, we are the first to achieve differential privacy in sequential recommendation with dependent interactions. Specifically, in DIPSGNN, we first leverage the piecewise mechanism to protect sensitive user features. Then, we innovatively add calibrated noise into the aggregation step of a graph neural network based on the aggregation perturbation mechanism. This noisy graph neural network can protect sequentially dependent interactions and capture user preferences simultaneously. Extensive experiments demonstrate the superiority of our method over state-of-the-art differentially private recommender systems in terms of a better balance between privacy and accuracy.
Wentao Hu, Hui Fang
2023-09-17T03:12:33Z
http://arxiv.org/abs/2309.11515v2
# Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach ###### Abstract With increasing frequency of high-profile privacy breaches in various online platforms, users are becoming more concerned about their privacy. And recommender system is the core component of online platforms for providing personalized service, consequently, its privacy preservation has attracted great attention. As the gold standard of privacy protection, differential privacy has been widely adopted to preserve privacy in recommender systems. However, existing differentially private recommender systems only consider static and independent interactions, so they cannot apply to sequential recommendation where behaviors are dynamic and dependent. Meanwhile, little attention has been paid on the privacy risk of sensitive user features, most of them only protect user feedbacks. In this work, we propose a novel DIfferentially Private Sequential recommendation framework with a noisy Graph Neural Network approach (denoted as DIPSGNN) to address these limitations. To the best of our knowledge, we are the first to achieve differential privacy in sequential recommendation with dependent interactions. Specifically, in DIPSGNN, we first leverage piecewise mechanism to protect sensitive user features. Then, we innovatively add calibrated noise into aggregation step of graph neural network based on aggregation perturbation mechanism. And this noisy graph neural network can protect sequentially dependent interactions and capture user preferences simultaneously. Extensive experiments demonstrate the superiority of our method over state-of-the-art differentially private recommender systems in terms of better balance between privacy and accuracy. ## 1 Introduction Recent years have witnessed the tremendous development of various online platforms such as Facebook, Amazon and eBay, they are playing an increasingly important role in users' daily life. And recommender system is the core component of online platforms for providing personalized services, it takes advantage of abundant personal information to recommend items or services that match user preference [1]. The direct access to sensitive personal information makes recommender system a common target of privacy attacks and thus aggravates users privacy concern. [2] find that the actions of arXiv users would be potentially "visible" under targeted attack and they propose to change the privacy settings of the recommender algorithm to mitigate such privacy risk. [3] points out that mobile health applications pose new risks to privacy as they need a large volume of health data to be collected, stored and analyzed. And [4, 5] show that users' sensitive attributes such as racial information, sexual orientation and political inclinations can be precisely predicted from their interaction history in online platforms by attribute inference attack [6, 7]. Even the outputs of recommender systems are likely to reveal users' sensitive attributes and actual interactions to malicious attackers [8, 9]. In a nutshell, despite the ubiquity of recommender system in various online platforms, it is vulnerable to privacy attacks and may cause leakage of sensitive personal information. Besides, the enactment of General Data Protection Regulation (GDPR) [10] raises users' awareness of privacy and makes it more urgent to devise privacy-preserving recommender systems. 
Existing recommender systems (RSs) can be mainly classified into two categories: traditional RSs, which include content-based and collaborative filtering RSs, and sequential RSs [11, 12]. Traditional RSs model users' historical interactions in a static and independent way, so they can only capture static long-term preferences while ignoring short-term interest and the sequential dependencies among user interactions. In contrast, sequential RSs treat user interactions as a dynamic sequence and take sequential dependencies into account to capture both long-term and short-term interest [13]. Figure 1 illustrates a sequential RS, where each user's interactions are indexed chronologically to form a historical interaction sequence. Sequential RSs then predict users' next interacted items based on their historical behavior sequences [14; 15; 16; 17]. Owing to this capability of capturing users' dynamic and evolving preferences, sequential RSs are important and popular in modern online platforms and have attracted much attention in academia. Therefore, we focus on sequential RSs in this paper and aim to build a privacy-preserving sequential RS that can simultaneously resist privacy attacks and retain considerable recommendation performance. Previous studies on privacy-preserving RSs mainly adopt anonymisation [18], encryption [19; 20] and differential privacy [21; 22] to protect sensitive user information. The drawback of anonymisation-based RSs is that they can neither provide a provable privacy guarantee nor resist reconstruction attacks [23; 24]. Meanwhile, encryption-based RSs bring heavy computation overhead and fail to prevent attackers from inferring sensitive user information based on the exposed output of RSs [4]. So we resort to differential privacy [25] to build a privacy-preserving sequential RS, on account of its provable privacy guarantee and lightweight computation overhead. On the other hand, differentially private sequential RSs are quite under-explored because of the challenge of incorporating sequential dependencies into differential privacy. Existing differentially private RSs are all based on traditional RSs and can further be divided into two categories. The first category focuses on neighbor-based collaborative filtering [4; 26; 27], where noise is added into the calculation of the item/user similarity matrix to protect users' sensitive interactions. The second category is based on matrix factorization [22; 21; 28; 29]; these methods add noise into the objective function or gradients in order to protect users' ratings or the fact that a user has rated an item. Despite their effectiveness being partially validated, we argue that these solutions suffer from the following three major limitations. First, they model users' preferences based on a static rating matrix, and thus cannot capture the dynamic and evolving preferences in users' interaction sequences. Second, interactions are considered to be independent and equally important in existing differentially private RSs. Nevertheless, users' behavior sequences are characterized by complicated sequential dependencies [15; 30], and the most recent interactions may have greater influence on user preferences [11]; these solutions are therefore not applicable to sequential recommendation. Third, they only protect users' explicit ratings or implicit feedback while neglecting the protection of users' side information such as user demographics.
[31] show that there are privacy risks on users' side information; for example, user gender can be accurately inferred from users' ratings. Although [6] design a dual-stage perturbation strategy to protect sensitive user features, they do not protect users' interactions, let alone dependent behavior sequences. In short, none of these differentially private RSs protect users' sensitive features and interactions simultaneously in order to achieve a better balance between privacy and utility. To bridge the above research gaps, we propose a differentially private sequential recommendation framework called DIPSGNN to protect user features and sequentially dependent interactions at the same time. Specifically, we first take advantage of the piecewise mechanism [32] to protect users' sensitive features at the input stage and use the perturbed features to initialize the user embedding. Then, a gated graph neural network [33] is employed to capture the sequential dependencies and dynamic preferences in users' behavior sequences. In this gated graph neural network, we innovatively add calibrated noise into the aggregation step based on the aggregation perturbation mechanism [34] to prevent attackers from inferring users' private interactions from the exposed recommendation results.

Figure 1: A toy example of a sequential RS. Each user's interactions are indexed chronologically to form an interaction sequence, and sequential RSs need to predict the next items that users will interact with based on historical interaction sequences.

To summarize, the main contributions of our work are as follows:

1. To the best of our knowledge, we are the first to achieve differential privacy for dependent interactions in sequential recommendation.
2. We propose a novel aggregation scheme that can protect time-dependent interactions and capture user preferences without considerably impairing performance.
3. Both users' features and interactions are well protected in DIPSGNN, which offers a better balance between privacy and accuracy.
4. Theoretical analysis and extensive experiments on three real-world datasets demonstrate the effectiveness and superiority of DIPSGNN over state-of-the-art differentially private recommender systems.

The rest of this article is structured as follows. Section 2 reviews the related work. Section 3 introduces the preliminaries and problem setup. Section 4 elaborates on the technical details of our proposed DIPSGNN. Section 5 discusses the experimental results and analyses. Finally, we conclude this article and propose several future directions in Section 6.

## 2 Related Work

In this section, we review three lines of studies related to our work: sequential recommendation, differentially private recommender systems and privacy-preserving graph neural networks.

### Sequential Recommendation

Sequential recommendation recommends the next item based on the chronological sequence of a user's historical interactions. The earliest work [35] on sequential recommendation leverages a Markov decision process to model item transitions. Later, FPMC [36] fuses the idea of Markov chains with matrix factorization; it learns a first-order transition matrix by assuming the next item depends only on the previous one. Nevertheless, these conventional methods combine past components independently and neglect long-range dependencies.
For stronger ability to model long-term sequential dependency, deep learning based methods represented by recurrent neural networks (RNN) [37; 38; 39] and attention mechanism [15; 40; 41] have been in blossom in sequential recommendation. For example, [39] combines the architecture of RNN and Convolutional Neural Network to capture complex dependencies in user behavior sequences. And attention-based neural networks such as Transformer [42; 43] and BERT [16] use attention scores to explore item-item relationships and achieve remarkable performance in sequential recommendation. Recently, graph neural networks (GNN) [44; 45; 17] have attracted much interest in sequential recommendation, as the input data can be represented by graphs. SRGNN [46] converts user behavior sequences into directed graphs and learns item embeddings on these graphs with gated graph neural network [33]. APGNN [47] promotes SRGNN by fusing personalized user characteristics with item transition patterns in user behavior graphs in order to better model user preferences. SUGRE [30] reconstructs loose item sequences into tight item-item interest graphs based on metric learning, which further improves the performance of GNN in sequential recommendation. By elaborately modeling user interest based on graphs constructed from interaction sequences, GNN-based methods have demonstrated great effectiveness in sequential recommendation. ### Differentially private recommender systems Differential privacy [25] has been introduced into recommender systems since [4]. It adds noise into calculation of item-similarity matrix in order to protect users' explicit ratings. After that, [27] protect users' implicit feedbacks by applying binary randomized response [48] on them and then send the perturbed feedbacks to the centralized server to calculate a private item-similarity matrix. Aside from neighborhood-based collaborative filtering, there is another line of works which are based on matrix factorization (MF) [49]. [22] integrate objective perturbation into MF to make sure the final item embedding learned by MF satisfy differential privacy. Besides, they decompose the noise component into small pieces so that it can fit with the decentralized system. [21] build a differentially private MF framework by using a novel connection between differential privacy and bayesian posterior sampling via stochastic gradient langevin dynamics. And [29] further divide user ratings into sensitive and non-sensitive ones then add different amount of noise on these two kinds of ratings when calculating gradients. Finally, they achieve a uniform privacy guarantee for sensitive ratings. However, these differentially private recommender systems can only protect static rating matrix and assume that interactions are independent and equally important. They largely ignore the sequential dependencies and users' dynamic preference, which makes them inadequate for sequential recommendation. Meanwhile, the protection on user features is overlooked in these works. Though [6] shed light on the protection towards user demographics, it lacks the ability to protect interactions let alone dependent behavior sequences. ### Privacy-preserving graph neural network Graph neural networks (GNNs) have been broadly employed in sequential recommendation as users' interaction sequences can be easily transformed into graph data. However, rich node features and edge information in GNNs make them vulnerable to privacy attacks. 
[50; 51; 52] show that private edges can be recovered from GNNs via the influence of particular edge on model output. To mitigate such privacy risk, various privacy-preserving techniques for GNNs are emerging. [53] propose a privacy-preserving representation learning framework on graphs from mutual information perspective. [54] perturb graphs based on combinatorial optimization to protect private node labels. Nevertheless, these methods cannot provide a formal privacy guarantee. To address this limitation, differential privacy (DP) has been applied to protect privacy in GNNs. [50] propose LapGraph to provide DP protection for sensitive edges by adding laplace noise into adjacency matrix. It can be regarded as a data mask method with formal differential privacy guarantee. But [55; 56] argue that adding noise into adjacency matrix destroys original graph structure and ruins the neighborhood aggregation inside GNNs. To remedy this defect, [34] propose an aggregation perturbation mechanism to safeguard the privacy of edges in undirected and unweighted graphs, which differs from the directed and weighted graphs in our work. More specifically, it forces the sensitivity of embedding update process equal to one by normalizing each row of embedding matrix. However, we find that this normalization step makes embedding matrix deviate too much from its true value and brings excessive noise. To resolve this problem, we normalize the rows of embedding matrix with tunable threshold for different datasets and achieves better utility. ## 3 Preliminaries ### Differential Privacy Differential privacy [25] is a powerful tool to provide formal privacy guarantee when processing sensitive data. It ensures the output of a randomized algorithm is insensitive to the deletion/addition of one individual record in a database by adding calibrated noise to the algorithm. The formal definition of differential privacy is as follows. **Definition 1** (Differential Privacy).: _A randomized algorithm \(\mathcal{A}\): \(\mathcal{X}^{n}\rightarrow\mathcal{Y}\) is \((\epsilon,\delta)\)-differentially private, if for all neighboring datasets \(X,X^{\prime}\in\mathcal{X}^{n}\) and all \(S\subseteq\mathcal{Y}\),_ \[\Pr[\mathcal{A}(X)\in S]\leq e^{\epsilon}\cdot\Pr\left[\mathcal{A}\left(X^{ \prime}\right)\in S\right]+\delta. \tag{1}\] where \(\Pr[\cdot]\) represents probability, \(\epsilon>0\) is privacy budget and \(\delta>0\) is failure probability. A smaller \(\epsilon\) or \(\delta\) brings a stronger privacy guarantee but forces us to add more noise in the randomized algorithm. Besides, neighboring datasets denote a pair of datasets differing at most one record. In our work, user interaction sequences are transformed into graphs and we focus on edge-level differential privacy, so neighboring datasets represent two graphs differ at only one edge. Besides graph topology data, our work also involves multidimensional numerical and categorical user feature data. [57] propose a hybrid differential privacy notion to properly perturb heterogeneous data types in social networks. They utilize edge-level differential privacy to protect graph topology data and local differential privacy [58] to protect user attributes. Inspired by hybrid differential privacy notion, we also integrate local differential privacy into our model to protect user features, as user feature data and graph topology data have different characteristics. 
The formal definition of local differential privacy is as follows: **Definition 2** (Local Differential Privacy).: _A randomized function \(f(\cdot)\) satisfies \(\epsilon-\mathrm{LDP}\) if and only if for any two respective inputs \(x\) and \(x^{\prime}\) of one user and all output \(y\),_ \[\Pr\left[f(x)=y\right]\leq e^{\epsilon}\cdot\Pr\left[f\left(x^{\prime}\right) =y\right], \tag{2}\] where \(\epsilon\) is also called privacy budget. A lower \(\epsilon\) provides stronger privacy guarantee but forces us to add heavier noise on each user's data. In local differential privacy, the perturbation of each user's data guarantees that an external attacker cannot easily infer which of any two possible inputs \(x\) and \(x^{\prime}\) from one user is used to produce the output \(y\). Thus, the true input value of this user is protected with high confidence. ### Problem Statement Let \(U=\{u_{i}\}_{i=1}^{|U|}\) and \(V=\{v_{j}\}_{j=1}^{|V|}\) be the set of users and items in the system, respectively. Each user \(u\) has a behavior sequence in the chronological order \(S^{u}=\{v_{s}^{u}\}_{s=1}^{n_{u}}\) (\(v_{s}^{u}\in V\), and \(n_{u}\) is the length of user \(u\)'s behavior sequence) and a sensitive feature vector \(\mathbf{x}_{u}\). We convert each \(S^{u}\) into a directed weighted graph \(\mathcal{G}^{u}=(\mathcal{V}^{u},\mathcal{E}^{u})\), where \(\mathcal{V}^{u}\) and \(\mathcal{E}^{u}\) represent the set of item nodes, and the set of edges, respectively. All numerical features in \(\mathbf{x}_{u}\) are normalized into \([-1,1]\) with min-max normalization and all categorical features are encoded into one-hot vectors. Based on \(\{\mathcal{G}^{u}|u\in U\}\) and \(\{\mathbf{x}_{u}|u\in U\}\), the goal of our work is to _build a differentially private sequential recommendation framework to generate accurate top-\(K\) recommendation list for each user, meanwhile prevent outside malicious attackers from inferring users' sensitive features and sequentially dependent interactions._ ## 4 Methodology Our proposed DIPSGNN seeks to protect sensitive user features and interactions without sacrificing considerable performance in sequential recommendation. Figure 2 depicts the overview of DIPSGNN: we first protect users' features by perturbing them at input stage. Then, we convert each user's behavior sequence \(S^{u}\) into a user behavior graph \(\mathcal{G}^{u}\) and feed it into DIPSGNN to update user embedding and item embedding. And we add calibrated noise in DIPSGNN layer to prevent the leakage of user interactions from recommendation results. Finally, the updated user embedding and item embedding are concatenated to make next-item recommendation. We will elaborate on details of these components subsequently. ### User Feature Protection To protect sensitive user features, we adopt the strategy to perturb them at input stage with local differential privacy. Concretely, we add noise to raw user features based on piecewise mechanism (PM) [32], as it can handle multi-dimensional numerical and categorical features. With the post-processing property of differential privacy, user features will keep private during recommendation. Suppose user \(u\)'s feature vector consists of \(n\) different features, where each numerical feature is represented by a single number and each categorical feature is represented by a single one-hot vector. 
Thus, user \(u\)'s feature vector \(\mathbf{x_{u}}=\mathbf{x_{1}}\oplus\mathbf{x_{2}}\oplus\cdots\oplus\mathbf{x _{n}}\in\mathbb{R}^{d_{0}}\) (\(d_{0}\geq n\)), where \(\oplus\) is the concatenation operation. In this part, we aim to perturb user features with privacy budget \(\epsilon_{1}\). If we perturb each feature equally, then the privacy budget for each feature shrinks to \(\epsilon_{1}/n\). This will harm the utility of the perturbed data as the incurred noise variance is not minimized in this case. To achieve the lowest incurred noise variance, we randomly select \(k\) (\(k<n\)) features from \(\mathbf{x}_{u}\) and perturb each of them with privacy budget \(\epsilon_{1}/k\), while the non-selected features are dropped by masking them with \(0\) to avoid privacy leakage. We further follow [32] to set \(k\) as: \[k=\max\{1,\min\{n,\lfloor\frac{\epsilon_{1}}{2.5}\rfloor\}\}. \tag{3}\] For each \(\mathbf{x}_{i}\) in the \(k\) selected features, if it is a numerical feature, we first normalize it into \([-1,1]\) with min-max normalization and then perturb it by executing Algorithm 1 with privacy budget \(\epsilon=\frac{\epsilon_{1}}{k}\). On the contrary, if the selected \(\mathbf{x}_{i}\) is a categorical feature, as it is represented by an one-hot vector, we perturb it with optimized unary encoding (OUE) method [59]. Because OUE method will minimize the variance when perturbed one-hot vector has a higher dimension. The details of OUE method are shown in Algorithm 2. By integrating the perturbation for numerical features and categorical features, the whole process of PM are depicted in Algorithm 3. Theorem 1 guarantees that it satisfies \(\epsilon_{1}\)-local differential privacy. Figure 2: The framework of DIPSGNN. First, user features are perturbed and protected at input stage. Next, we construct user behavior graph based on user interaction sequence. Then, user behavior graph is protected with our newly designed DIPSGNN at embedding update stage. Finally, we utilize updated user embedding and item embedding to make next-item recommendation without leakage of user features and interactions. **Theorem 1**.: _Algorithm 3 satisfies \(\epsilon_{1}\)-local differential privacy._ Proof.: Please see appendix. ### Behavior Graph Construction To capture the complex sequential dependencies and transition patterns, we convert each user's behavior sequence \(S^{u}\) into a user behavior graph \(\mathcal{G}^{u}=(\mathcal{V}^{u},\mathcal{E}^{u})\). Inspired by [47, 46], \(\mathcal{G}^{u}\) is a directed and weighted graph whose topological structure can be represented by two adjacency matrices, \(\mathbf{A}_{u}^{out}\) and \(\mathbf{A}_{u}^{in}\). The weights in \(\mathbf{A}_{u}^{out},\mathbf{A}_{u}^{in}\) are the occurrence number of consecutive interactions between two items. For instance, the weight in position \([i,j]\) of \(\mathbf{A}_{u}^{out}\) is \(\text{Count}(v_{i},v_{j})\), which means the number of times that user \(u\) interacts with \(v_{i}\) first, and then immediately with \(v_{j}\). It should be noted that we drop the normalization step [47; 46] to divide \(\text{Count}(v_{i},v_{j})\) by the outdegree of \(v_{i}\), otherwise the deletion of one interaction in \(S^{u}\) will affect one row rather than one element in \(\mathbf{A}_{u}^{out}\) or \(\mathbf{A}_{u}^{in}\), which impedes the subsequent differential privacy analysis. 
\[\mathbf{A}_{u}^{out}[i,j] =\text{Count}(v_{i},v_{j}), \tag{4}\] \[\mathbf{A}_{u}^{in}[i,j] =\text{Count}(v_{j},v_{i}).\] ### User Behavior Protection: DIPSGNN As we mentioned before that malicious outside attackers can infer user interactions from recommendation results, we need to add noise into the recommendation algorithm in order to protect interactions. Instead of perturbing user behavior graph at input stage, we choose to add calibrated noise into GNN propagation step to protect user interactions. The reason lies in that perturbation on user behavior graph will destroy original graph structure and distort aggregation process inside GNNs [55; 56], we will show the superiority of aggregation perturbation over graph structure perturbation by empirical experiments. In this section, we will dive into the details of aggregation perturbation inside DIPSGNN. As user characteristics impact their preferences, we consider user features when initializing user embedding. User \(u\)'s embedding \(\mathbf{e}_{u}^{(0)}\) and item \(v_{i}\)'s embedding \(\mathbf{e}_{i}^{(0)}\) are initialized as: \[\mathbf{e}_{u}^{(0)}=\widehat{\mathbf{x}}_{u}\mathbf{E}_{U},\;\mathbf{e}_{i}^ {(0)}=\mathbf{x}_{i}\mathbf{E}_{V}, \tag{5}\] where \(\widehat{\mathbf{x}}_{u}\in\mathbb{R}^{1\times d_{0}}\) is perturbed feature of user \(u\) in Algorithm 2 and \(\mathbf{E}_{U}\in\mathbb{R}^{d_{0}\times d^{\prime}}\) is user embedding matrix. Similarly, \(\mathbf{x}_{i}\in\mathbb{R}^{1\times|V|}\) and \(\mathbf{E}_{V}\in\mathbb{R}^{|V|\times d}\) are respectively item one-hot encoding and item embedding matrix. Then, we feed the initialized user embedding and item embedding into DIPSGNN and update them iteratively. At each time step \(t\) of node update, we fuse user embedding \(\mathbf{e}_{u}^{(t-1)}\) with item embedding \(\mathbf{e}_{i}^{(t-1)}\) to update them together. \(\mathbf{h}_{i}^{(t-1)}=\mathbf{e}_{i}^{(t-1)}\oplus\mathbf{e}_{u}^{(t-1)}\in \mathbb{R}^{1\times(d+d^{\prime})}\) denotes the joint embedding at \(t-1\) time for item \(v_{i}\), where \(\oplus\) is the concatenation operation. Combining the joint embedding of all items, we can get a joint embedding matrix \(\mathbf{H}^{(t-1)}\in\mathbb{R}^{|V|\times(d+d^{\prime})}\). To bound the sensitivity of joint embedding matrix and facilitate the privacy analysis, we clip each row of \(\mathbf{\bar{H}}^{(t-1)}\) to make its norm equal to a constant \(C\). And \(C\) can be properly tuned on different datasets for better utility. \[\mathbf{\bar{H}}_{i}^{(t-1)}=\mathbf{h}_{i}^{(t-1)}*\frac{C}{||\mathbf{h}_{i }^{(t-1)}||_{2}},\;i=1,\cdots,|V|, \tag{6}\] where \(\mathbf{\bar{H}}_{i}^{(t-1)}\) means \(i\)-th row of \(\mathbf{\bar{H}}^{(t-1)}\). Then, we use sum aggregation to aggregate information from incoming and outgoing neighbors. This step directly accesses the adjacency matrices, \(\mathbf{A}_{u}^{in}\) and \(\mathbf{A}_{u}^{out}\), which contain sensitive structure information of interactions in user behavior sequences. 
Therefore, we need to add calibrated noise in this step to protect sensitive interactions:

\[\mathbf{\widehat{H}}_{out}^{(t)} =\mathbf{A}_{u}^{out}\cdot\mathbf{\bar{H}}^{(t-1)}+\mathcal{N}(\sigma^{2}\mathbb{I}), \tag{7}\] \[\mathbf{\widehat{H}}_{in}^{(t)} =\mathbf{A}_{u}^{in}\cdot\mathbf{\bar{H}}^{(t-1)}+\mathcal{N}(\sigma^{2}\mathbb{I}),\]

where \(\mathcal{N}(\sigma^{2}\mathbb{I})\in\mathbb{R}^{|V|\times(d+d^{\prime})}\) denotes a noise matrix with each element drawn independently from the Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\), and \(\mathbf{\widehat{H}}_{out}^{(t)},\mathbf{\widehat{H}}_{in}^{(t)}\) are the privately aggregated embedding matrices. Theorem 2 proves that this step satisfies edge-level differential privacy. Besides, the post-processing property of differential privacy [60] guarantees that every subsequent operation remains private with respect to the adjacency matrices, so sensitive interactions stay protected during recommendation. After neighborhood aggregation, we conduct a linear transformation on the aggregated embedding matrices and obtain the intermediate representation of node \(v_{i}\) as follows:

\[\mathbf{a}_{out_{i}}^{(t)} =(\mathbf{\widehat{H}}_{out}^{(t)}\cdot\mathbf{W}_{out})_{i}+\mathbf{b}_{out}, \tag{8}\] \[\mathbf{a}_{in_{i}}^{(t)} =(\mathbf{\widehat{H}}_{in}^{(t)}\cdot\mathbf{W}_{in})_{i}+\mathbf{b}_{in},\] \[\mathbf{a}_{i}^{(t)} =\mathbf{a}_{out_{i}}^{(t)}\oplus\mathbf{a}_{in_{i}}^{(t)},\]

where \(i\) in \((\mathbf{\widehat{H}}^{(t)}\cdot\mathbf{W})_{i}\) denotes the \(i\)-th row, \(\mathbf{b}_{out},\mathbf{b}_{in}\in\mathbb{R}^{1\times d}\) are bias terms, and \(\mathbf{W}_{out},\mathbf{W}_{in}\in\mathbb{R}^{(d+d^{\prime})\times d}\) are learnable parameter matrices. \(\mathbf{b}_{out},\mathbf{b}_{in},\mathbf{W}_{out},\mathbf{W}_{in}\) are shared by all users. Then, we leverage a gated recurrent unit (GRU) to combine the intermediate representation of node \(v_{i}\) with its hidden state from the previous time step and update the hidden state of the current time step:

\[\mathbf{e}_{i}^{(t)}=\text{GRU}(\mathbf{a}_{i}^{(t)},\mathbf{e}_{i}^{(t-1)}). \tag{9}\]

It is worth noting that \(\mathbf{e}_{i}^{(t-1)}\) has been normalized in Equation (6) as a part of \(\mathbf{h}_{i}^{(t-1)}\), and the user embedding \(\mathbf{e}_{u}\) is also updated implicitly in this process. The whole aggregation step of DIPSGNN is shown in Algorithm 4, and Theorem 2 establishes its privacy guarantee.

**Theorem 2**.: _For any \(\delta\in(0,1)\), propagation steps \(T\geq 1\), and noise standard deviation \(\sigma>0\), Algorithm 4 satisfies edge-level \((\epsilon_{2},\delta)\)-differential privacy with \(\epsilon_{2}=\frac{TC^{2}}{2\sigma^{2}}+\frac{C\sqrt{2T\log(1/\delta)}}{\sigma}\)._

Proof.: Please see appendix.

### Prediction and Training

After finishing the update of all DIPSGNN layers, we get the final representations of all items. Then, we need to obtain a unified representation for each user to conduct next-item prediction. First, we apply a readout function to extract each user's local preference vector \(\mathbf{z}_{l}\) and global preference vector \(\mathbf{z}_{g}\) from the item representations.
\(\mathbf{z}_{l}\) is defined as \(\mathbf{e}_{n_{u}}^{(T)}\), the final representation of the last item in user \(u\)'s behavior sequence \(S^{u}\), and \(\mathbf{z}_{g}\) is defined as:

\[\alpha_{s} =\mathbf{q}^{\top}\sigma(\mathbf{W}_{1}\mathbf{e}_{n_{u}}^{(T)}+\mathbf{W}_{2}\mathbf{e}_{s}^{(T)}+\mathbf{c}), \tag{10}\] \[\mathbf{z}_{g} =\sum_{s=1}^{n_{u}}\alpha_{s}\mathbf{e}_{s}^{(T)},\]

where \(\mathbf{e}_{s}^{(T)}\) refers to the final representation of the \(s\)-th item in \(S^{u}\), and \(\mathbf{q}\in\mathbb{R}^{d}\) and \(\mathbf{W}_{1},\mathbf{W}_{2}\in\mathbb{R}^{d\times d}\) are learnable weight matrices. Following [47], we concatenate the updated user embedding \(\mathbf{e}_{u}^{(T)}\) with the local and global preference vectors; user \(u\)'s unified representation \(\mathbf{z}_{u}\) can then be expressed as:

\[\mathbf{z}_{u}=\mathbf{W}_{3}(\mathbf{z}_{g}\oplus\mathbf{z}_{l}\oplus\mathbf{e}_{u}^{(T)}), \tag{11}\]

where \(\mathbf{W}_{3}\in\mathbb{R}^{d\times(2d+d^{\prime})}\) is a learnable matrix and \(d,d^{\prime}\) are the dimensions of the item and user embeddings respectively. With user \(u\)'s unified representation and the final representations of all items, we can compute the predicted probability of the next item being \(v_{i}\) by:

\[\hat{\mathbf{y}}_{i}=\frac{\exp(\mathbf{z}_{u}^{\top}\cdot\mathbf{e}_{i}^{(T)})}{\sum_{j=1}^{|V|}\exp(\mathbf{z}_{u}^{\top}\cdot\mathbf{e}_{j}^{(T)})}. \tag{12}\]

The loss function is the cross-entropy between the prediction \(\hat{\mathbf{y}}\) and the ground truth \(\mathbf{y}\):

\[\mathcal{L}(\hat{\mathbf{y}})=-\sum_{i=1}^{|V|}\left[\mathbf{y}_{i}\log\left(\hat{\mathbf{y}}_{i}\right)+\left(1-\mathbf{y}_{i}\right)\log\left(1-\hat{\mathbf{y}}_{i}\right)\right]. \tag{13}\]

Finally, we use the back-propagation through time (BPTT) algorithm to train the proposed DIPSGNN.

## 5 Experiments

### Experimental Settings

We evaluate the performance of all methods on three real-world datasets: ML-1M, Yelp and Tmall (see Footnotes 2-4 for sources). In ML-1M and Tmall, we have 3 categorical user features such as age range, gender and occupation, while in Yelp, we have 6 numerical user features such as the number of ratings and the average rating. For the Tmall dataset, we use the click data from November 1 to November 7. After obtaining the datasets, we adopt the 10-core setting to filter out inactive users and items following [30]. Table 1 shows their statistics after preprocessing. For each user, we use the first \(80\%\) of the behavior sequence as the training set, and the remaining \(20\%\) constitutes the test set. Hyperparameters are tuned on the validation set, which is a random \(10\%\) subset of the training set. The code for our experiments will be released upon acceptance.

Footnote 2: grouplens.org/datasets/movielens/.

Footnote 3: [https://www.yelp.com/dataset](https://www.yelp.com/dataset).

Footnote 4: tianchi.aliyun.com/dataset/dataDetail?dataId=42.
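As a small illustration of the splitting protocol above, one way the per-user split could be implemented is sketched below. The function name and the choice to hold the sampled validation interactions out of the training portion are our own assumptions; the text only states that the validation set is a random \(10\%\) subset of the training set.

```python
import random

def split_user_sequence(seq, train_frac=0.8, val_frac=0.1, seed=0):
    """Chronological 80/20 train/test split of one user's sequence,
    with a random 10% of the training part held out for validation."""
    cut = max(1, int(len(seq) * train_frac))
    train_full, test = seq[:cut], seq[cut:]
    rnd = random.Random(seed)
    n_val = max(1, int(len(train_full) * val_frac))
    val_idx = set(rnd.sample(range(len(train_full)), n_val))
    train = [v for i, v in enumerate(train_full) if i not in val_idx]
    val = [v for i, v in enumerate(train_full) if i in val_idx]
    return train, val, test

# A 10-interaction sequence yields 7 training, 1 validation and 2 test items.
print(split_user_sequence(list(range(10))))
```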
To evaluate the performance of DIPSGNN, we compare it with six non-private baselines (BPRMF, SASRec, HRNN, LightGCN, SRGNN, and APGNN) and two private ones (DPMF and EdgeRand):

* **BPRMF**[61] is a widely used learning-to-rank model with a pairwise ranking objective function;
* **SASRec**[15] is a sequential prediction model based on the attention mechanism;
* **HRNN**[37] uses a hierarchical RNN model to provide personalized prediction in session-based recommendation;
* **LightGCN**[62] is a simplified graph convolution network for recommendation;
* **SRGNN**[46] utilizes gated graph neural networks to capture item transitions in user behavior sequences;
* **APGNN**[47] builds on SRGNN by incorporating user profiles and captures item transitions in a user-specific fashion;
* **DPMF**[22] is a differentially private matrix factorization method. The original version is based on explicit feedback, which we denote as DPMF_exp. There is only implicit feedback in Tmall, so we adapt it with the same negative sampling strategy as BPRMF to handle implicit feedback and denote this variant as DPMF_imp. We also evaluate DPMF_imp on the other datasets, where ratings larger than \(1\) are regarded as positive interactions;
* **EdgeRand** is a graph structure perturbation method to protect user behaviors; its protection of user features is the same as in DIPSGNN. Specifically, we add Gaussian noise to the adjacency matrices of user behavior graphs to achieve the same level of \((\epsilon,\delta)\)-differential privacy as DIPSGNN for a fair comparison. It can be seen as a data desensitization method for graphs with a formal differential privacy guarantee. The only difference between EdgeRand and the existing LapGraph [50] is that one uses the Gaussian mechanism and the other uses the Laplace mechanism. As DIPSGNN uses the notion of \((\epsilon,\delta)\)-differential privacy, we take EdgeRand as our baseline and regard it as the state-of-the-art method for protecting edges in GNNs, following [34].

Motivated by [46; 47], we adopt Recall@\(K\) and MRR@\(K\) with \(K=5,10,20\) as our evaluation metrics. We run all the evaluations \(5\) times with different random seeds and report the mean value for each method. The maximum lengths of user behavior sequences are \(100,30,50\) for ML-1M, Yelp and Tmall respectively, which are slightly larger than the average lengths of user behavior sequences.

\begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & ML-1M & Yelp & Tmall \\ \hline Users & 5,945 & 99,011 & 132,724 \\ Items & 2,810 & 56,428 & 98,226 \\ Interactions & 365,535 & 1,842,418 & 3,338,788 \\ Avg. Length & 96.07 & 27.90 & 36.18 \\ User features & 3 & 6 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of datasets after preprocessing.

We set the item embedding dimension \(d=100\) and the user embedding dimension \(d^{\prime}=50\) following [47]; other hyperparameters are tuned to their optimal values on the validation set. The model is trained with the Adam optimizer, and we train DIPSGNN for 10 epochs, by which time we observe that the model has converged. For all baseline methods, we use the optimal hyperparameters provided in the original papers. As for the privacy specification, the privacy budget \(\epsilon_{1}\) for protecting user features in EdgeRand and DIPSGNN is set to \(20\) by default following [6].
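As a concrete illustration of how a single numerical feature is perturbed under such a budget, the following is a minimal sketch of the piecewise mechanism of Algorithm 1, using the constants \(C\), \(l(x)\) and \(r(x)\) from the proof in Appendix A. The complement-sampling routine and the empirical unbiasedness check are our own additions, and the per-feature budget \(20/3\) simply instantiates Eq. (3) with \(\epsilon_{1}=20\) and the \(k=3\) features of ML-1M.

```python
import math, random

def piecewise_mechanism(x, eps):
    """Perturb one value x in [-1, 1] under eps-LDP (Algorithm 1).

    Constants from Appendix A:
      C    = (e^{eps/2} + 1) / (e^{eps/2} - 1)
      l(x) = (C + 1)/2 * x - (C - 1)/2,   r(x) = l(x) + C - 1
    With probability e^{eps/2} / (e^{eps/2} + 1) the output is uniform
    on [l(x), r(x)]; otherwise it is uniform on [-C, l(x)) U (r(x), C].
    """
    e2 = math.exp(eps / 2)
    C = (e2 + 1) / (e2 - 1)
    l = (C + 1) / 2 * x - (C - 1) / 2
    r = l + C - 1
    if random.random() < e2 / (e2 + 1):
        return random.uniform(l, r)
    # Sample the complement uniformly: pick a piece in proportion to its
    # length, then a point inside it.
    left_len = l - (-C)
    u = random.uniform(0, left_len + (C - r))
    return -C + u if u < left_len else r + (u - left_len)

# The mechanism is unbiased: the average of many perturbed copies of x
# approaches x itself.
x, eps = 0.4, 20 / 3
est = sum(piecewise_mechanism(x, eps) for _ in range(100000)) / 100000
print(round(est, 3))  # close to 0.4
```

Categorical one-hot features are instead perturbed with the optimized unary encoding of Algorithm 2.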
The privacy budget \(\epsilon_{2}\) for protecting user behaviors is set to \(5\) by default in all private methods, and we numerically calibrate the noise standard deviation \(\sigma\) according to this privacy budget following [34]. \(\delta\) is set to be smaller than the inverse of the number of edges. Meanwhile, the different privacy budgets for user features and user behaviors capture the different privacy expectations of heterogeneous data types; if all data types were treated as equally sensitive, we would add too much unneeded noise and sacrifice utility [57]. The effect of different privacy budgets will be further discussed in empirical experiments.

### Performance Comparisons

Table 2 reports the performance comparisons between DIPSGNN and the other baselines in terms of Recall@20, MRR@20, Recall@10 and MRR@10 (see Footnote 5). We have the following observations: (1) The non-private graph-based methods SRGNN and APGNN achieve the best performance among non-private methods, demonstrating the strong power of graph neural networks in capturing complex item transitions and dynamic user preferences in sequential recommendation. (2) As for the private baselines, DPMF with explicit feedback outperforms DPMF with implicit feedback, because explicit feedback carries more accurate information about user preferences. But both DPMF variants perform much worse than EdgeRand and DIPSGNN. We attribute this to the fact that DPMF treats interactions as independent, while EdgeRand and DIPSGNN consider the complex dependencies between items by building models on user behavior graphs. This highlights the importance of taking the dependencies between interactions into account rather than treating them as independent when protecting user interactions. (3) Our DIPSGNN consistently yields better performance than the state-of-the-art EdgeRand method for protecting interactions on all three datasets. The relative gap in terms of Recall@20 is \(19.99\%\), \(6.16\%\) and \(2.95\%\) on ML-1M, Yelp and Tmall respectively. The differences are also significant in terms of the other metrics, which verifies the effectiveness of DIPSGNN. Moreover, DIPSGNN even outperforms the best non-private baseline APGNN in terms of MRR@20, Recall@10 and MRR@10 on ML-1M. A possible explanation is that a controlled amount of noise during training may improve generalization performance on the test set [63]. (4) The performance of DIPSGNN is also competitive with commonly used deep learning recommendation methods such as LightGCN, HRNN and SASRec, and it beats SRGNN on ML-1M and Yelp. This demonstrates the feasibility of applying DIPSGNN in real-world applications to provide accurate recommendations and protect sensitive user information simultaneously.

Footnote 5: We can get similar results for \(K=5\). Due to space limitations, we do not report them.
\begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{ML-1M} & \multicolumn{4}{c}{Yelp} & \multicolumn{4}{c}{Tmall} \\ \cline{2-13} & & R@20 & M@20 & R@10 & M@10 & R@20 & M@20 & R@10 & M@10 & R@20 & M@20 & R@10 & M@10 \\ \hline \multirow{8}{*}{Nonp} & BPRMF & 4.56 & 0.81 & 2.20 & 0.65 & 4.38 & 1.01 & 2.58 & 0.89 & 15.19 & 6.77 & 12.28 & 6.57 \\ & SASRec & 6.69 & 1.20 & 3.44 & 0.98 & 4.70 & 1.03 & 2.69 & 0.89 & 14.46 & 5.08 & 10.46 & 4.81 \\ & HRNN & 18.43 & 4.70 & 11.70 & 4.24 & 1.49 & 0.33 & 0.83 & 0.29 & 13.34 & 5.82 & 10.45 & 5.62 \\ & LightGCN & 8.53 & 1.67 & 4.59 & 1.40 & 6.18 & 1.41 & 3.65 & 1.24 & OOM & OOM & OOM & OOM \\ & SRGNN & 21.01 & 5.15 & 13.09 & 4.61 & 7.24 & 1.77 & 4.41 & 1.58 & 24.85 & 11.55 & 20.42 & 11.24 \\ & APGNN & 21.26 & 5.29 & 13.23 & 4.75 & 7.26 & 1.81 & 4.46 & 1.62 & 24.86 & 11.57 & 20.38 & 11.26 \\ \hline \multirow{4}{*}{Priv} & DPMF \_imp & 3.15 & 0.66 & 1.77 & 0.57 & 0.41 & 0.10 & 0.25 & 0.09 & 1.95 & 0.36 & 1.18 & 0.30 \\ & DPMF\_exp & 3.40 & 0.71 & 1.89 & 0.61 & 1.30 & 0.32 & 0.83 & 0.28 & - & - & - & - \\ & EdgeRand & 17.61 & 4.28 & 10.88 & 3.82 & 6.82 & 1.66 & 4.17 & 1.49 & 20.35 & 9.80 & 16.61 & 9.54 \\ \hline \multirow{2}{*}{Ours} & DIPSGNN & **21.13** & **6.11** & **14.04** & **5.63** & **7.24** & **1.78** & **4.45** & **1.59** & **20.95** & **9.98** & **17.15** & **9.71** \\ & Improve & 19.99\%* & 42.76\%* & 29.04\%* & 47.38\%* & 6.16\%* & 7.23\%* & 6.71\%* & 6.71\%* & 2.95\%* & 1.84\%* & 3.25\%* & 1.78\%* \\ \hline \hline \end{tabular} \end{table} Table 2: Comparative results of different approaches on all datasets. Bold number means DIPSGNN outperforms EdgeRand. Statistical significance of pairwise differences between DIPSGNN vs. EdgeRand is determined by a paired t-test (\({}^{*}\) for \(p<0.01\)). OOM denotes out-of-memory and R denotes Recall, M denotes MRR for succinctness. Nonp means non-private baselines and Priv means private baselines. ### Effect of Privacy Budget To analyze the trade-off between privacy and accuracy with different privacy budgets, we first fix the budget for protecting user features \(\epsilon_{1}\) at its default value \(20\) and change the budget for protecting user interactions \(\epsilon_{2}\) in \(\{3,4,5\}\). The experimental results in terms of Recall@20 are presented in Figure 3. We can observe that both EdgeRand and DIPSGNN generally perform better with a larger \(\epsilon_{2}\), as the noise added on the adjacency matrices or item embedding matrix in GNN will both decrease when \(\epsilon_{2}\) rises thus brings more accurate recommendation. Meanwhile, DIPSGNN always outperforms EdgeRand except when \(\epsilon_{2}=3\) on Tmall. And the performance gap between them tends to enlarge with a larger \(\epsilon_{2}\), this confirms again the effectiveness of our proposed DIPSGNN for protecting user interactions. Similarly, we also conduct experiments by changing the budget for user feature protection \(\epsilon_{1}\) in \(\{10,20,30\}\) with the fixed privacy budget \(\epsilon_{2}=5\) for user interaction protection. Figure 4 shows experimental results in terms of Recall@20. The performance of EdgeRand and DIPSGNN consistently rises with a larger \(\epsilon_{1}\). As we add less noise to user features when \(\epsilon_{1}\) increases, it will facilitate more accurate modeling of user interests, leading to more satisfying recommendation results. And DIPSGNN outperforms EdgeRand all the time which again shows the superiority. 
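Putting the pieces of Section 4 together, the sketch below ties the behavior-graph construction of Eq. (4), the row clipping of Eq. (6) and the noisy sum aggregation of Eq. (7) to the noise level implied by Theorem 2. The toy sequence, the embedding width and \(\delta=10^{-6}\) are illustrative assumptions, and the bisection solver is simply one way to realize the numerical calibration of \(\sigma\) mentioned in the experimental settings; the clipping norm \(C=0.2\) is the value found best for ML-1M in Section 5.4.2.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def epsilon_of_sigma(sigma, T, C, delta):
    # Privacy cost of Theorem 2:
    # eps2 = T*C^2/(2*sigma^2) + C*sqrt(2*T*ln(1/delta))/sigma
    return T * C**2 / (2 * sigma**2) + C * math.sqrt(2 * T * math.log(1 / delta)) / sigma

def calibrate_sigma(eps2, T, C, delta, lo=1e-6, hi=1e6):
    # eps2 is strictly decreasing in sigma, so bisection finds the
    # smallest sigma that meets the target budget.
    for _ in range(200):
        mid = (lo + hi) / 2
        if epsilon_of_sigma(mid, T, C, delta) > eps2:
            lo = mid            # still too little noise: raise sigma
        else:
            hi = mid
    return hi

def behavior_graph(seq, n_items):
    # Eq. (4): unnormalized counts of consecutive interactions.
    A_out = np.zeros((n_items, n_items))
    for a, b in zip(seq[:-1], seq[1:]):
        A_out[a, b] += 1.0      # user interacted with a, then with b
    return A_out, A_out.T       # A_in[i, j] = Count(v_j, v_i)

def noisy_aggregate(A, H, C, sigma):
    # Eq. (6): clip every row of the joint embedding matrix to norm C,
    # so deleting one edge changes the aggregated sum by at most C.
    H_bar = H * (C / np.linalg.norm(H, axis=1, keepdims=True))
    # Eq. (7): sum aggregation plus i.i.d. Gaussian noise per entry.
    return A @ H_bar + rng.normal(0.0, sigma, size=H.shape)

# Toy run: one user's sequence over 4 items, joint embeddings of width 5.
seq = [0, 1, 2, 1, 3]
A_out, A_in = behavior_graph(seq, n_items=4)
H = rng.standard_normal((4, 5))
C, T = 0.2, 1
sigma = calibrate_sigma(eps2=5, T=T, C=C, delta=1e-6)
H_out_noisy = noisy_aggregate(A_out, H, C, sigma)
H_in_noisy = noisy_aggregate(A_in, H, C, sigma)
```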
### Hyperparameter Study

#### 5.4.1 Effect of GNN Propagation Steps

In this section, we first study the influence of the number of GNN propagation steps \(T\) on the performance of EdgeRand and DIPSGNN. We fix the privacy guarantees at their default values and change \(T\) in \(\{1,2,3\}\). Figure 5 shows the experimental results obtained on the three datasets. It can be observed that the performance of both DIPSGNN and EdgeRand clearly decreases with larger aggregation steps \(T\). Besides, the performance decline of EdgeRand is apparently more pronounced than that of DIPSGNN. This is because EdgeRand adds noise to the original adjacency matrix, which distorts the neighborhood aggregation inside GNNs [55, 56], and the distortion effect grows with more aggregation steps. In contrast, DIPSGNN aggregates information from neighbors based on the unperturbed adjacency matrix. The slight performance decrease of DIPSGNN comes from the fact that we need to add more noise to the node embedding matrix to maintain the same privacy guarantee, as can be derived from Theorem 2.

#### 5.4.2 Effect of Embedding Norm

We then investigate how the norm of each row of the node embedding matrix affects the performance of DIPSGNN. As can be seen from Theorem 2, a smaller \(C\) allows us to add less noise to the embedding matrix at the same privacy guarantee, but if \(C\) is too small, the elements of the embedding matrix may diverge too much from their true values. To find the proper \(C\) for each dataset, we fix the privacy guarantees at their default values and the GNN propagation steps at \(T=1\), then select \(C\) from \(\{0.2,0.4,0.6,0.8,1\}\) for ML-1M and \(\{0.1,0.3,0.5,0.7,0.9\}\) for Yelp and Tmall. The experimental results in terms of Recall@20 are shown in Figure 6. In general, Recall@20 continues to fall with a larger \(C\) on ML-1M and Yelp; it reaches the highest value at \(C=0.2\) and \(C=0.1\) on ML-1M and Yelp respectively. On Tmall, Recall@20 first increases, reaches its highest value at \(C=0.3\), and then decreases as \(C\) becomes larger. We can draw the general conclusion that a large \(C\) brings excessive noise and low utility on these three datasets.

Figure 4: Recall@20 of DIPSGNN and EdgeRand with different \(\epsilon_{1}\) (privacy budget for protecting user features).

Figure 3: Recall@20 of DIPSGNN and EdgeRand with different \(\epsilon_{2}\) (privacy budget for protecting user interactions).

### Importance of User Features

To verify the necessity of protecting user features to get a better balance between privacy and accuracy, we compare the performance of DIPSGNN with three other variants of DIPSGNN. **Nonp** means no noise is added to user features or to the training of the GNN, obtained by setting \(\epsilon_{1}=\infty\) and \(\epsilon_{2}=\infty\). **Nonp-U** means no user features are exploited and the user embedding is initialized by the user id. Similarly, **Priv** means the normal DIPSGNN with perturbed user features and noise added to the node embedding matrix during the training of the GNN. And in **Priv-U**, the perturbed user features are replaced by the user id for initializing the user embedding, but the same amount of noise is added to the node embedding matrix during the training of the GNN. We show the experimental results in Figure 7 and have the following findings. In both the non-private and private settings, adding user features helps to capture more accurate user preferences and improves recommendation performance.
It illustrates the importance of taking user side information, besides user-item interactions, into consideration for a better balance between privacy and accuracy. ## 6 Conclusions and future work With the enactment of General Data Protection Regulation (GDPR), there is an urgent need to protect sensitive user information on various online platforms. And recommender system is the core component of online platforms, it takes advantage of rich personal information to provide personalized service. Therefore, its privacy preservation is of great concern to users and regulators. A privacy-preserving recommender system will greatly alleviate privacy concern and increase user engagement on the platform, thus promote the commercial profit and sustainable development of the platform. Figure 5: Recall@20 of DIPSGNN and EdgeRand with different GNN aggregation steps \(T\). Figure 6: Recall@20 of DIPSGNN with different embedding norm \(C\). In this paper, we address how to protect sensitive user features and interactions concurrently without great sacrifice of accuracy in sequential recommender system. We propose a differentially private sequential recommendation framework named DIPSGNN. DIPSGNN protects sensitive user features by adding noise to raw features at input stage. The noise scale is determined by piecewise mechanism which can process numerical and categorical features to make them satisfy local differential privacy. And the post-processing property of differential privacy will guarantee that user features are always well protected in the recommendation algorithm. As for the protection on user interactions, we first transform interaction sequences into directed and weighted user behavior graphs. Then, user behavior graphs are fed into gated graph neural network to model sequential dependencies and user preference. In this graph neural network, we design a novel aggregation step to protect adjacency matrices of user behavior graphs, thus to protect user behaviors. Concretely, calibrated noise is added into the aggregation step to make it satisfy differential privacy with respect to adjacency matrices of user behavior graphs. And we empirically demonstrate the superiority of this aggregation perturbation method than conventional graph structure perturbation method for protecting user interactions. Besides, extensive experimental results on three datasets (ML-1M, Yelp and Tmall) evidence that our proposed DIPSGNN can achieve significant gains over state-of-the-art differentially private recommender systems. For future work, we will extend our framework to other popular graph neural networks such as graph attention networks (GATs) [45] and GraphSAGE [64]. Besides, we are also interested in incorporating personalized privacy preferences into our framework, as users vary substantially in privacy attitudes in real life [65]. ## Appendix A Proof of Theorem 1 Proof.: First, we prove Algorithm 1 satisfies \(\epsilon\)-local differential privacy. In Algorithm 1, \(C=\frac{\exp(\epsilon/2)+1}{\exp(\epsilon/2)-1}\), \(l(x)=\frac{C+1}{2}\cdot x-\frac{C-1}{2},r(x)=l(x)+C-1\). 
If \(c\in[l(x),r(x)]\), then \[\Pr(x^{\prime}=c|x) =\frac{\exp(\epsilon/2)}{\exp(\epsilon/2)+1}\cdot\frac{1}{r(x)-l( x)}\] \[=\frac{\exp(\epsilon)-\exp(\epsilon/2)}{2\exp(\epsilon/2)+2}=p.\] Similarly, if \(c\in[-C,l(x))\cup(r(x),C]\), then \[\Pr(x^{\prime}=c|x) =(1-\frac{\exp(\epsilon/2)}{\exp(\epsilon/2)+1})\cdot\frac{1}{2C- r(x)+l(x)}\] \[=\frac{\exp(\epsilon/2)-1}{2\exp(\epsilon)+2\exp(\epsilon/2)}= \frac{p}{\exp{(\epsilon)}}.\] Then if \(x_{1},x_{2}\in[-1,1]\) are any two input values and \(x^{\prime}\in[-C,C]\) is the output of Algorithm 1, we have: \[\frac{\Pr(x^{\prime}\mid x_{1})}{\Pr(x^{\prime}\mid x_{2})}\leq\frac{p}{p/\exp (\epsilon)}=\exp(\epsilon).\] Figure 7: Performance of DIPSGNN compared with three ablation models. Thus, Algorithm 1 satisfies \(\epsilon\)-LDP. In Algorithm 2, we let \(\epsilon=\frac{\epsilon_{1}}{k}\), so perturbation of numerical feature satisfies \(\frac{\epsilon_{1}}{k}\)-LDP. Analogously, we prove the perturbation of categorical features in Algorithm 2 satisfies \(\frac{\epsilon_{1}}{k}\)-LDP. Suppose \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are any two \(m\)-dimensional one-hot vector for perturbation and the output is \(\mathbf{x}^{\prime}\). In \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), \(\mathbf{x}_{1}[v_{1}]=1,\mathbf{x}_{2}[v_{2}]=1\)\((v_{1}\neq v_{2})\) and all other elements are \(0\). Let \(p=0.5\) and \(q=\frac{1}{\exp(\epsilon/k)+1}\), \(p>q\), we have: \[\frac{\Pr(\mathbf{x}^{\prime}\mid\mathbf{x}_{1})}{\Pr(\mathbf{x} ^{\prime}\mid\mathbf{x}_{2})}=\frac{\prod_{i\in[m]}\Pr(\mathbf{x}^{\prime}[i ]\mid\mathbf{x}_{1}[i])}{\prod_{i\in[m]}\Pr(\mathbf{x}^{\prime}[i]\mid \mathbf{x}_{2}[i])}\] \[=\frac{\Pr(\mathbf{x}^{\prime}[v_{1}]\mid\mathbf{x}_{1}[v_{1}]) \Pr(\mathbf{x}^{\prime}[v_{2}]\mid\mathbf{x}_{1}[v_{2}])}{\Pr(\mathbf{x}^{ \prime}[v_{1}]\mid\mathbf{x}_{2}[v_{1}])\Pr(\mathbf{x}^{\prime}[v_{2}]\mid \mathbf{x}_{2}[v_{2}])}\] \[\leq\frac{\Pr(\mathbf{x}^{\prime}[v_{1}]=1\mid\mathbf{x}_{1}[v_{1 }]=1)\Pr(\mathbf{x}^{\prime}[v_{2}]=0\mid\mathbf{x}_{1}[v_{2}]=0)}{\Pr( \mathbf{x}^{\prime}[v_{1}]=1\mid\mathbf{x}_{2}[v_{1}]=0)\Pr(\mathbf{x}^{ \prime}[v_{2}]=0\mid\mathbf{x}_{2}[v_{2}]=1)}\] \[=\frac{p}{q}\cdot\frac{1-q}{1-p}=\exp(\frac{\epsilon_{1}}{k}).\] Thus, the perturbation of categorical feature also satisfies \(\frac{\epsilon_{1}}{k}\)-LDP. As Algorithm 3 is composed of \(k\) times perturbation of numerical or categorical feature, and all of them satisfy \(\frac{\epsilon_{1}}{k}\)-LDP, so Algorithm 3 satisfies \(\epsilon_{1}\)-LDP based on the composition theorem of differential privacy. ## Appendix B Proof of Theorem 2 Proof.: The proof of Theorem 2 uses an alternative definition of differential privacy (DP), called Renyi Differential Privacy (RDP) [66] which is defined as follows, **Definition 3** (Renyi Differential Privacy).: _A randomized algorithm \(\mathcal{A}\) is \((\alpha,\epsilon)\)-RDP for \(\alpha>1,\epsilon>0\) if for every adjacent datasets \(X\sim X^{\prime}\), we have:_ \[D_{\alpha}\left(\mathcal{A}(X)\|\mathcal{A}\left(X^{\prime}\right)\right)\leq\epsilon\] _, where \(D_{\alpha}(P\|Q)\) is the Renyi divergence of order \(\alpha\) between probability distributions \(P\) and \(Q\), defined as:_ \[D_{\alpha}(P\|Q)=\frac{1}{\alpha-1}\log\mathbb{E}_{x\sim Q}\left[\frac{P(x)}{Q (x)}\right]^{\alpha}\] A basic mechanism to achieve RDP is the Gaussian mechanism. 
Let \(f:\mathcal{X}\rightarrow\mathbb{R}^{d}\), and first define the sensitivity of \(f\) as \[\Delta_{f}=\max_{X\sim X^{\prime}}\left\|f(X)-f\left(X^{\prime}\right)\right\|_{2}.\] Then, adding Gaussian noise with variance \(\sigma^{2}\) to \(f\) as \[\mathcal{A}(X)=f(X)+\mathcal{N}\left(\sigma^{2}\mathbf{e}_{d}\right),\] where \(\mathcal{N}\left(\sigma^{2}\mathbf{e}_{d}\right)\in\mathbb{R}^{d}\) is a vector with each element drawn from \(\mathcal{N}(0,\sigma^{2})\) independently, yields an \((\alpha,\frac{\Delta_{f}^{2}\alpha}{2\sigma^{2}})\)-RDP algorithm for all \(\alpha>1\) [66]. As the analysis of \(\mathbf{A}_{u}^{out}\) and \(\mathbf{A}_{u}^{in}\) is the same, we use \(\mathbf{A}\) to denote \(\mathbf{A}_{u}^{out}\) or \(\mathbf{A}_{u}^{in}\). If we delete an interaction from user \(u\)'s behavior sequence, \(\mathbf{A}_{u}^{out}\) and \(\mathbf{A}_{u}^{in}\) will each change by 1 at only a single position. Let \(\mathbf{A}\) and \(\mathbf{A}^{\prime}\) be two neighboring adjacency matrices that differ by 1 at a single position. Specifically, there exist two nodes \(p\) and \(q\) such that: \[\begin{cases}|\mathbf{A}_{i,j}-\mathbf{A}_{i,j}^{\prime}|=1&\text{if $i=p$ and $j=q$},\\ \mathbf{A}_{i,j}=\mathbf{A}_{i,j}^{\prime}&\text{otherwise}.\end{cases}\] The sensitivity of the sum aggregation step is: \[||\mathbf{A}\cdot\bar{\mathbf{H}}^{(t-1)}-\mathbf{A}^{\prime}\cdot\bar{\mathbf{H}}^{(t-1)}||_{F} \tag{14}\] \[=(\sum_{i=1}^{|V|}\|\sum_{j=1}^{|V|}(\mathbf{A}_{i,j}\bar{\mathbf{H}}_{j}^{(t-1)}-\mathbf{A}_{i,j}^{\prime}\bar{\mathbf{H}}_{j}^{(t-1)})\|_{2}^{2})^{1/2}\] \[=(\|\mathbf{A}_{p,q}\bar{\mathbf{H}}_{q}^{(t-1)}-\mathbf{A}_{p,q}^{\prime}\bar{\mathbf{H}}_{q}^{(t-1)}\|_{2}^{2})^{1/2}\] \[=\|(\mathbf{A}_{p,q}-\mathbf{A}_{p,q}^{\prime})\bar{\mathbf{H}}_{q}^{(t-1)}\|_{2}\] \[=\|\bar{\mathbf{H}}_{q}^{(t-1)}\|_{2}\] \[=C,\] where \(\bar{\mathbf{H}}_{j}^{(t-1)}\) is the \(j\)-th row of the row-normalized feature matrix \(\bar{\mathbf{H}}^{(t-1)}\). The sensitivity of each aggregation step is \(C\), so it satisfies \((\alpha,C^{2}\alpha/2\sigma^{2})\)-RDP based on the Gaussian mechanism. The aggregation in DIPSGNN can be seen as an adaptive composition of \(T\) such mechanisms; based on the composition property of RDP [66], the total privacy cost is \((\alpha,TC^{2}\alpha/2\sigma^{2})\)-RDP. As RDP is a generalization of DP, it can be converted back to standard \((\epsilon,\delta)\)-DP using the following lemma.

**Lemma 3**.: _If \(\mathcal{A}\) is an \((\alpha,\epsilon)\)-RDP algorithm, then it also satisfies \((\epsilon+\frac{\log(1/\delta)}{\alpha-1},\delta)\)-DP for any \(\delta\in(0,1)\)._

Therefore, the \((\alpha,TC^{2}\alpha/2\sigma^{2})\)-RDP guarantee in DIPSGNN is equivalent to edge-level \((\epsilon_{2},\delta)\)-DP with \(\epsilon_{2}=\frac{TC^{2}\alpha}{2\sigma^{2}}+\frac{\log(1/\delta)}{\alpha-1}\). Minimizing this expression over \(\alpha>1\) gives \(\epsilon_{2}=\frac{TC^{2}}{2\sigma^{2}}+C\sqrt{2T\log(1/\delta)}/\sigma\). So we conclude that the aggregation in DIPSGNN satisfies edge-level \((\epsilon_{2},\delta)\)-DP with \(\epsilon_{2}=\frac{TC^{2}}{2\sigma^{2}}+C\sqrt{2T\log(1/\delta)}/\sigma\).

## Acknowledgment

This work is supported by Shanghai Rising-Star Program (Grant No. 23QA1403100), Natural Science Foundation of Shanghai (Grant No. 21ZR1421900), National Natural Science Foundation of China (Grant No.
72192832), Graduate Innovation Fund of Shanghai University of Finance and Economics (Grant No. CXJJ-2022-366) and the Program for Innovative Research Team of Shanghai University of Finance and Economics.
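For concreteness, the two results proved in the appendices can be sketched in a few lines of Python. This is our own illustration rather than the authors' code: `piecewise_mechanism` mirrors Algorithm 1 with the constants \(C\), \(l(x)\) and \(r(x)\) from Appendix A, and `edge_level_epsilon` evaluates the closed-form \(\epsilon_{2}\) of Theorem 2; both function names are ours.

```python
import math
import random

def piecewise_mechanism(x: float, eps: float) -> float:
    """Perturb x in [-1, 1] under eps-LDP (sketch of Algorithm 1)."""
    C = (math.exp(eps / 2) + 1) / (math.exp(eps / 2) - 1)
    l = (C + 1) / 2 * x - (C - 1) / 2
    r = l + C - 1
    # Total mass of the high-density band [l, r] is e^{eps/2} / (e^{eps/2} + 1),
    # which yields exactly the densities p and p / e^{eps} used in the proof.
    if random.random() < math.exp(eps / 2) / (math.exp(eps / 2) + 1):
        return random.uniform(l, r)
    # Otherwise report from the low-density tails [-C, l) U (r, C],
    # choosing a tail in proportion to its length.
    left_len, right_len = l + C, C - r
    if random.random() < left_len / (left_len + right_len):
        return random.uniform(-C, l)
    return random.uniform(r, C)

def edge_level_epsilon(T: int, C: float, sigma: float, delta: float) -> float:
    """epsilon_2 of Theorem 2, after converting the composed RDP bound to (eps, delta)-DP."""
    return T * C**2 / (2 * sigma**2) + C * math.sqrt(2 * T * math.log(1 / delta)) / sigma
```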
2310.12157
Desynchronization of large-scale neural networks by stabilizing unknown unstable incoherent equilibrium states
In large-scale neural networks, coherent limit cycle oscillations usually coexist with unstable incoherent equilibrium states, which are not observed experimentally. We implement a first-order dynamic controller to stabilize unknown equilibrium states and suppress coherent oscillations. The stabilization of incoherent equilibria associated with unstable focus and saddle is considered. The algorithm is demonstrated for networks composed of quadratic integrate-and-fire (QIF) neurons and Hindmarsh-Rose neurons. The microscopic equations of an infinitely large QIF neural network can be reduced to an exact low-dimensional system of mean-field equations, which makes it possible to study the control problem analytically.
Tatjana Pyragiene, Kestutis Pyragas
2023-09-15T12:00:17Z
http://arxiv.org/abs/2310.12157v1
Desynchronization of large-scale neural networks by stabilizing unknown unstable incoherent equilibrium states

###### Abstract

In large-scale neural networks, coherent limit cycle oscillations usually coexist with unstable incoherent equilibrium states, which are not observed experimentally. We implement a first-order dynamic controller to stabilize unknown equilibrium states and suppress coherent oscillations. The stabilization of incoherent equilibria associated with unstable focus and saddle is considered. The algorithm is demonstrated for networks composed of quadratic integrate-and-fire (QIF) neurons and Hindmarsh-Rose neurons. The microscopic equations of an infinitely large QIF neural network can be reduced to an exact low-dimensional system of mean-field equations, which makes it possible to study the control problem analytically.

keywords: Neural network; Mean-field equations; Synchronization control; Quadratic integrate-and-fire neurons; Hindmarsh-Rose neurons

## 1 Introduction

Synchronization studies in large populations of coupled oscillatory or excitable elements are relevant in fields ranging from physics to neuroscience [1; 2; 3; 4]. The role of synchronization in neural systems can be twofold. In a healthy state, it is responsible for learning and cognition [5; 6]; however, excessive synchronization can cause a variety of neurological conditions such as Parkinson's disease [7], epilepsy [8; 9], tinnitus [10], and others. High-frequency (HF) deep brain stimulation (DBS) is a standard procedure for the treatment of neurological disorders [11; 12]. The mechanisms of DBS are not yet well understood [13; 14]. Simple models show that the HF DBS effect can be explained either as the result of stabilizing the resting state of individual neurons [15] or as the suppression of synchronized oscillations without forcing individual neurons into silence [16]. HF DBS may cause side effects, and its therapeutic effect may decrease over time, so there is a significant clinical need for less invasive and more effective stimulation methods [17]. In open loop control systems such as HF DBS, adverse effects on neural tissue can be reduced by optimizing the waveform of the stimulus signal [18; 19]. However, a number of theoretical works show that coherent oscillations can be desynchronized especially effectively with the help of closed-loop (feedback) control algorithms. Various control strategies based on linear [20; 21; 22; 23; 24] and nonlinear [25; 26; 27] time-delayed feedback, linear feedback bandpass filters [28; 29; 30], proportional-integro-differential feedback with a separate stimulation-registration setup [31], act-and-wait time-delayed feedback [32; 33] and others [34; 35; 36] have been considered. Recent advances in the theory of nonlinear dynamical systems have provided the neuroscience community with simple, low-dimensional models of neural networks referred to as next-generation neural mass models [37]. Such models are useful objects for developing, testing, and understanding various synchronization control algorithms. Here we show that these models can naturally explain the desynchronization mechanism of our feedback control algorithm in terms of stabilizing unknown unstable incoherent states. The next-generation models are derived directly from the microscopic dynamics of individual neurons and are accurate in the thermodynamic limit of infinite network size.
These models represent a closed system of mean-field equations for biophysically relevant parameters such as the mean membrane potential and the firing rate. Low-dimensional dynamics in a large population of coupled oscillatory elements was first discovered by Ott and Antonsen [38] in the Kuramoto model [2]. Later, this discovery was successfully applied to derive a low-dimensional system of mean-field equations for a certain class of networks consisting of all-to-all pulse-coupled QIF neurons [39], which are canonical models of class I neurons [40]. In recent years, next-generation models have been obtained for a large number of different modifications of QIF neural networks [41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. These models make it possible to carry out a detailed bifurcation analysis and reveal synchronization mechanisms. It has been shown that synchronized limit cycle oscillations can arise from various bifurcations, such as the Hopf bifurcation in Refs. [16; 42] or the homoclinic bifurcation in Ref. [42]. However, stable limit cycles are always accompanied by unstable fixed points, which correspond to unstable incoherent equilibrium states of the network. These unstable states are not observed experimentally. Here, we show that a priori unknown unstable incoherent states can be stabilized using the control algorithm proposed in Refs. [52; 53]. Initially, this algorithm was developed and tested to stabilize unknown unstable equilibrium states of low-dimensional dynamical systems, and recently it has been implemented to stabilize unstable pedestrian flows in the collective behavior of large crowds of people [54]. Here, we implement this algorithm to stabilize unstable incoherent states in large-scale neural networks consisting of QIF and Hindmarsh-Rose [55] neurons. We demonstrate effective control of two types of equilibrium states, associated with an unstable focus and a saddle point. As far as we know, the control of saddle equilibrium states in neural networks has not been considered in the literature. The paper is organized as follows. Section 2 describes the control algorithm. In Sec. 3, we apply this algorithm to a population of synaptically coupled excitatory QIF neurons. Here we stabilize incoherent states associated with an unstable focus and a saddle fixed point. The latter is stabilized by an unstable controller. Section 4 is devoted to the control of two interacting populations of excitatory and inhibitory QIF neurons. In Sec. 5, we apply our algorithm to a population of chaotically spiking Hindmarsh-Rose neurons, whose microscopic model equations cannot be reduced to a low-dimensional system. The conclusions are presented in Sec. 6.

## 2 Control algorithm

We consider a large network of coupled neurons generating collective coherent oscillations. We assume that, along with the synchronous mode of coherent oscillations, the network has an unstable equilibrium state characterized by incoherent oscillations of individual neurons. Our goal is to stabilize the incoherent state and transition the network from the synchronous to the incoherent mode. To achieve this goal, we turn to the algorithm for stabilizing unknown unstable equilibrium points of low-dimensional dynamical systems developed in Refs. [52; 53]. The algorithm uses a simple first-order dynamic controller based on a low-pass filter (LPF). The block diagram of this algorithm, adapted for neural networks, is shown in Fig. 1.
We assume that the mean membrane potential \(v(t)\) of the entire neural population, or of some part of it, can be measured at the output of the network. In addition, we assume that all or part of the population of neurons can be stimulated by the input current \(I_{c}(t)\). In general, the measured and stimulated subpopulations may differ. The input and output of the network are connected by a feedback loop described by the following equations: \[\dot{w} = \omega_{c}(v-w), \tag{1a}\] \[I_{c} = k(w-v), \tag{1b}\] where \(w\) is a dynamic variable of the controller (LPF). The control algorithm has two adjustable parameters: the cut-off frequency \(\omega_{c}\) of the LPF and the control gain \(k\). Let us denote the average membrane potential of the free network in a state of unstable equilibrium as \(v=v^{*}\), which in the thermodynamic limit should be a constant, \(v^{*}=const\). We assume that this value is a priori unknown. The control algorithm is designed in such a way that the equilibrium value \(v^{*}\) remains unchanged in the stationary state of the closed loop system. Indeed, at \(\dot{w}=0\) the control variable coincides with the mean membrane potential, \(w=w^{*}=v^{*}\), and the feedback perturbation vanishes, \(I_{c}=0\). However, the feedback perturbation affects the stability of the incoherent state. The examples below show that this state can be stabilized by adjusting the control parameters \(\omega_{c}\) and \(k\) accordingly. This algorithm has a number of advantages. Firstly, it is weakly invasive. Below, we will show that the feedback perturbation \(I_{c}\) decreases according to a power law with increasing network size and vanishes as the network size tends to infinity. Secondly, this algorithm does not require knowledge of the mean membrane potential \(v^{*}\) of an unstable equilibrium state, and, thirdly, the algorithm provides tracking of the equilibrium state in the case of slowly varying system parameters [53]. Note that the control algorithm with an ordinary LPF (\(\omega_{c}>0\)) has a limitation. It works well for unstable equilibrium points such as foci but does not work for saddles. More precisely, Ref. [52] proves that a stable controller cannot stabilize unstable equilibrium points with an odd number of real positive eigenvalues. This limitation can be avoided by using an unstable controller, in the same way as it is done in the delayed feedback control algorithm [56] when stabilizing a certain type of unstable periodic orbits [57].

Figure 1: Block diagram of stabilization of unknown incoherent states in neural networks. The mean membrane potential \(v(t)\) represents the output of the network. The network is stimulated by the input current \(I_{c}(t)\). In the feedback loop, LPF stands for low-pass filter.

An unstable LPF can be implemented using an RC circuit with a negative resistor. Here, to stabilize an unstable incoherent state of the saddle type, we will use an unstable LPF with the parameter \(\omega_{c}<0\). In the following sections, we will demonstrate the performance of this algorithm for three examples of neural networks. The first two examples deal with large populations of synaptically coupled QIF neurons. In the limit of infinite size, microscopic models of these networks can be reduced to exact low-dimensional systems of mean-field equations. In the first example, one population of excitatory neurons is considered, and in the second example, two interacting populations of excitatory and inhibitory neurons are analyzed.
The third example is devoted to electrically coupled chaotic Hindmarsh-Rose neurons.

## 3 Controlling a population of synaptically coupled excitatory QIF neurons

First, we apply the algorithm described above to a heterogeneous population of QIF excitatory neurons interacting via finite-width synaptic pulses [42]. The microscopic state of the population is defined by the set of \(N\) neurons' membrane potentials \(\{V_{j}\}_{j=1,\ldots,N}\). They satisfy the following set of equations [40]: \[\dot{V}_{j} = V_{j}^{2}+\eta_{j}+Js(t)+I_{c}(t), \tag{2}\] \[\mbox{if }V_{j}\geq V_{p}\mbox{ then }V_{j}\gets V_{r}.\] Here, \(\eta_{j}\) is a heterogeneous excitability parameter that specifies the behavior of individual neurons, and the term \(Js(t)\) stands for the synaptic coupling, where \(J\) is the synaptic weight and \(s(t)\) is the normalized mean synaptic current emitted by spiking neurons. The term \(I_{c}(t)\) describes an external current, which we interpret as a control variable. In this model, the membrane time constant of QIF neurons is assumed to be unity. This means that time here is measured in units of the membrane time constant. For \(J=0\) and \(I_{c}=0\), the neurons with the parameter \(\eta_{j}<0\) are at rest, and the neurons with the parameter \(\eta_{j}>0\) generate spikes. When the potential \(V_{j}\) reaches the threshold value \(V_{p}\), it is instantly reset to the value \(V_{r}\). We choose thresholds in the form \(V_{p}=-V_{r}=\infty\), which allows us to transform QIF neurons into theta neurons and obtain an accurate system of reduced mean-field equations [39]. We consider the case when the heterogeneous parameter \(\eta\) is distributed according to the Lorentzian density function \[g(\eta)=\frac{1}{\pi}\frac{\Delta}{(\eta-\bar{\eta})^{2}+\Delta^{2}}, \tag{3}\] where \(\Delta\) is the half-width and \(\bar{\eta}\) is the center of the distribution. For the Lorentzian heterogeneity, the reduction of the microscopic equations is most efficient. Note that other distributions of the heterogeneous parameter have been considered in recent publications [49, 50]. Here we use the model of global coupling in which neurons emit synaptic pulses of finite width, with the mean synaptic current defined as [42] \[s(t)=\frac{V_{th}}{N}\sum_{i=1}^{N}H(V_{i}(t)-V_{th}), \tag{4}\] where \(H(\cdot)\) is the Heaviside step function and \(V_{th}\) is a threshold potential that determines the height and width of synaptic pulses. In the limit \(N\rightarrow\infty\), the above microscopic model reduces to an exact system of two ordinary differential equations (ODEs) [42] \[\dot{r} = \Delta/\pi+2rv, \tag{5a}\] \[\dot{v} = \bar{\eta}+v^{2}-\pi^{2}r^{2}+Js(t)+I_{c}(t) \tag{5b}\] for two biophysically relevant parameters, the mean spiking rate \(r(t)\) and the mean membrane potential \(v(t)\). In the infinite size limit, the mean synaptic current (4) is expressed in terms of the parameters \(r(t)\) and \(v(t)\) as [42] \[s(t)=\frac{V_{th}}{\pi}\left[\frac{\pi}{2}-\arctan\left(\frac{V_{th}-v(t)}{\pi r(t)}\right)\right]. \tag{6}\] This expression closes the system of mean-field Eqs. (5). The bifurcation analysis of these equations without control (\(I_{c}(t)=0\)) was carried out in Ref. [42]. This analysis showed that synchronous limit cycle oscillations can occur through two types of bifurcations: the Hopf bifurcation and the homoclinic bifurcation. In the first case, the system (5) has a stable focus before the bifurcation.
On a microscopic level, this corresponds to a stable equilibrium state of the network with incoherent dynamics of individual neurons. After the bifurcation, the incoherent equilibrium state becomes an unstable focus, and neurons exhibit coherent limit cycle oscillations. Our goal here is to bring back the incoherent dynamics by stabilizing the unstable equilibrium state. In the case of a homoclinic bifurcation, the limit cycle touches the saddle point and becomes a homoclinic orbit. Near this bifurcation, we will suppress coherent oscillations by using an unstable controller to stabilize the incoherent state of the saddle equilibrium. We begin the application of our control algorithm with the case of limit cycle oscillations arising from the Hopf bifurcation. We use typical system parameters corresponding to this mode [42]: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=2\), and \(J=20\). For these parameters, the only attractor in the two-dimensional phase space \((r,v)\) of the free (\(I_{c}=0\)) system (5) is the limit cycle. Inside this cycle there is an unstable focus with the coordinates \((r^{*},v^{*})\approx(2.1081,-0.0755)\) and two complex-conjugate eigenvalues \(\lambda_{1,2}\approx 0.2621\pm 9.6190i\). Let us now estimate how the local properties of this fixed point change in the presence of a control defined by the Eq. (1). Due to the additional variable \(w\), the phase space of the closed loop system is expanded to three dimensions: \((r,v,w)\). The coordinates of the fixed point in the three-dimensional phase space are \((r^{*},v^{*},v^{*})\), i.e. its projection onto the original two-dimensional phase space remains unchanged. However, the stability properties of this fixed point now depend on the controller parameters \(\omega_{c}\) and \(k\) and are determined by the eigenvalue problem \[\det(A-\lambda I)=0 \tag{7}\] of the linearized system of Eqs. (5) and (1). Here \[A=\begin{pmatrix}a_{11}&a_{12}&0\\ a_{21}&a_{22}-k&k\\ 0&\omega_{c}&-\omega_{c}\end{pmatrix} \tag{8}\] is the Jacobian matrix of this system, and \(a_{ij}\) are the coefficients of the Jacobian matrix of the system (5) without control, evaluated at the fixed point \((r^{*},v^{*})\). Specifically, \(a_{11}=2v^{*}\), \(a_{12}=2r^{*}\), \(a_{21}=-2\pi^{2}r^{*}+JV_{th}(\pi r^{*})^{-2}(V_{th}-v^{*})c^{-1}\) and \(a_{22}=2v^{*}+JV_{th}\pi^{-2}(cr^{*})^{-1}\), where \(c=1+[(V_{th}-v^{*})/(\pi r^{*})]^{2}\). Finally, \(I\) is the identity matrix, and \(\lambda\) is the eigenvalue. For a given fixed point, the dependence of the solutions of Eq. (7) on the parameters \(\omega_{c}\) and \(k\) is shown in Fig. 2. The colors encode the values of \(\max[\mathrm{Re}(\lambda)]\). The thick red contour line corresponds to \(\max[\mathrm{Re}(\lambda)]=0\). It separates the regions where the fixed point is stable and unstable. We see that the control algorithm is robust to the choice of control parameters \(\omega_{c}\) and \(k\). The algorithm provides stabilization of the unstable focus for any \(\omega_{c}>0\) and \(k\gtrapprox 0.55\). Figure 3 shows the performance of the control algorithm for fixed values of \(\omega_{c}=1\) and \(k=2\). The thick gray curves show the dynamics of the free and controlled neuronal population obtained from the mean-field equations (5). During the time \(t<5\) the control is switched off and the system is in the mode of limit cycle oscillations. The mean membrane potential [Fig. 3(a)] and the mean spiking rate [Fig. 3(b)] show periodic oscillations.
At \(t>5\) the control is activated and the oscillations are damped. The system approaches a stabilized equilibrium state. The control perturbation [Fig. 3(d)] experiences transient damped oscillations and vanishes asymptotically. As a next step, we tested the performance of our algorithm for networks of finite size, described by the microscopic Eqs. (2). Unlike the low-dimensional mean-field Eqs. (5), the microscopic model is defined by a huge number of differential equations. The typical population sizes we model here are \(N\sim 10^{4}\) neurons. There is of course no _a priori_ guarantee that the control algorithm will work for such high-dimensional systems. Numerical simulation is more convenient after changing variables \[V_{j}=\tan(\theta_{j}/2), \tag{9}\] which transforms QIF neurons into theta neurons. The advantage of theta neurons is that they avoid the discontinuity problem. When the membrane potential \(V_{j}\) of the QIF neuron rises to \(+\infty\) and falls to \(-\infty\), the theta neuron simply crosses the phase \(\theta_{j}=\pi\). For theta neurons, Eqs. (2) are transformed to \[\dot{\theta}_{j} = 1-\cos\left(\theta_{j}\right)+\left[1+\cos\left(\theta_{j}\right)\right]\left[\eta_{j}+Js(t)+I_{c}(t)\right]. \tag{10}\]

Figure 3: Suppression of coherent oscillations by stabilization of an unstable focus in a population of synaptically coupled QIF neurons. For \(t<5\), there is no control and the network generates collective coherent oscillations. For \(t>5\), the control is turned on and the system goes into a previously unstable incoherent state. The dynamics of (a) mean membrane potential, (b) mean spiking rate, and (d) control perturbation derived from the mean-field Eqs. (5) are shown as thick gray curves. The thin red curves show the same results derived from the microscopic model (10). (c) Raster plot of 200 randomly selected neurons. The spike moments for each neuron are shown by dots. The neuron numbers are shown on the vertical axis. The parameters of the network are the same as in Fig. 2. Controller parameters: \(\omega_{c}=1\) and \(k=2\). The microscopic model was simulated using \(N=10^{4}\) neurons.

Figure 2: The performance of the control algorithm depending on the control parameters \(\omega_{c}\) and \(k\). The results for an unstable focus in a population of synaptically coupled QIF neurons are presented. The contour lines and colors indicate the maximum real part of the eigenvalues \(\max[\mathrm{Re}(\lambda)]\) obtained from the Eq. (7). The thick red contour line corresponds to \(\max[\mathrm{Re}(\lambda)]=0\). It separates stable and unstable regions. The originally unstable focus is stabilized in the region \(\max[\mathrm{Re}(\lambda)]<0\). Network parameters: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=2\), and \(J=20\).

We integrated these equations by the Euler method using a time step of \(dt=10^{-4}\). We have generated the values of the Lorentzian-distributed (3) heterogeneous parameter deterministically using \(\eta_{j}=\bar{\eta}+\Delta\tan[(\pi/2)(2j-N-1)/(N+1)]\) for \(j=1,\ldots,N\). For more details on modelling Eqs. (10), see Ref. [42]. From the Eqs.
(10), we estimated the Kuramoto order parameter [2] \[Z=\frac{1}{N}\sum_{j=1}^{N}\exp(i\theta_{j}) \tag{11}\] and used its relation with the spiking rate \(r\) and the mean membrane potential \(v\) [39]: \[r=\frac{1}{\pi}\operatorname{Re}\left(\frac{1-Z^{*}}{1+Z^{*}}\right),\quad v=\operatorname{Im}\left(\frac{1-Z^{*}}{1+Z^{*}}\right), \tag{12}\] where \(Z^{*}\) denotes the complex conjugate of \(Z\). Results derived from the microscopic model (10) for \(N=10^{4}\) neurons are presented in Fig. 3 by thin red curves. They are in good agreement with the results obtained from the reduced mean-field Eqs. (5). Thus, the control algorithm works well for a large population of \(N=10^{4}\) neurons, and the mean-field theory correctly predicts the dynamics of the population in the presence of control. To demonstrate network dynamics at the microscopic level, Fig. 3(c) shows raster plots of 200 randomly selected neurons. Without stimulation (\(t<5\)), most neurons spike coherently. Turning on the control at \(t>5\) destroys the coherent spiking and stabilizes the initially unstable incoherent state. Although the results of the mean-field equations and the microscopic model are very close, there is a fundamental difference in the asymptotic dynamics of these two models. As \(t\to\infty\), the dynamic variables \((r,v)\) of the mean-field equations approach exactly the unstable fixed point \((r^{*},v^{*})\) of the uncontrolled system, and the control perturbation vanishes, \(I_{c}(t)\to 0\). In the microscopic model, the variables \((r,v)\) exhibit small fluctuations around the fixed point \((r^{*},v^{*})\), and the control perturbation \(I_{c}(t)\) fluctuates around zero. Figure 4 shows the dependence of the variance \(\operatorname{Var}(I_{c})\) of the control perturbation in the post-transient regime on the network size \(N\). The variance decreases with increasing \(N\) and vanishes at \(N\to\infty\). This dependence is well described by the power law \(\operatorname{Var}(I_{c})\sim N^{-\gamma}\) with \(\gamma\approx 1.3\). Let us now consider the control of coherent oscillations near a homoclinic bifurcation. We will use the following set of parameters: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=-7\), and \(J=21\). For these parameters, the free (\(I_{c}=0\)) system (5) has a stable limit cycle and, outside it, a saddle point with coordinates \((r^{*},v^{*})\approx(0.4073,-0.3908)\) and two real eigenvalues \(\lambda_{1,2}\approx(2.5306,-3.9255)\). Stabilization of the incoherent state associated with the saddle point cannot be attained with an ordinary LPF and requires the use of an unstable LPF with a negative parameter \(\omega_{c}\). The eigenvalues of the saddle point in the presence of control are determined by the Eqs. (7) and (8). The dependence of the two largest real parts of the eigenvalues on \(k\) for a fixed \(\omega_{c}=-1\) is shown in Fig. 5. The saddle point stabilization mechanism is best understood from the root loci diagram shown in the inset. Here we show the evolution of eigenvalues in the complex plane \(\lambda\) as \(k\) changes from \(0\) to \(\infty\). Two crosses on the real axis determine the location of the eigenvalues at \(k=0\). One of them, \(\lambda=2.5306\), corresponds to a free network, and the other, \(\lambda=-\omega_{c}=1\), corresponds to a disabled unstable controller. With the increase of \(k\), they approach each other on the real axis, collide, and move into the complex plane.
At \(k\approx 15.3\), they cross symmetrically into the left half-plane (Hopf bifurcation). At a much larger gain, \(k\approx 91.8\), we have a collision on the real axis again, and then one of the roots goes to infinity, while the other approaches the origin. For \(k>15.3\), the closed loop system is stable.

Figure 4: The variance \(\operatorname{Var}(I_{c})\) of the control perturbation in the post-transient regime as a function of the network size \(N\). The asterisks show the result of the numerical simulation, and the dashed line shows the power-law approximation \(\operatorname{Var}(I_{c})=CN^{-\gamma}\) with \(C=1450\) and \(\gamma\approx 1.3\).

Figure 5: Linear stability of a saddle incoherent state of a population of QIF neurons controlled by an unstable controller with a negative parameter \(\omega_{c}=-1\). Dependence of the two largest real parts of eigenvalues of the closed loop system on the control gain \(k\). The inset shows the root loci of the characteristic Eq. (7) in the complex plane \(\lambda\) as \(k\) changes from \(0\) to \(\infty\). The crosses on the real axis indicate the location of the eigenvalues at \(k=0\), and the dot at the origin shows the location of one of the eigenvalues at \(k=\infty\). Network parameters: \(\Delta=1\), \(V_{th}=50\), \(\bar{\eta}=-7\), and \(J=21\).

Figure 6 shows the results of stabilization of a saddle incoherent state with unstable controller parameters \(\omega_{c}=-1\) and \(k=20\). As in Fig. 3, the dynamics derived from the mean-field equations are shown as thick gray curves, and the corresponding dynamics derived from the microscopic model of \(10^{4}\) neurons are shown as thin red curves. Again, there is complete agreement between the mean-field theory and the microscopic theory. For \(t<10\), there is no control, and the system is in the limit cycle mode, which is close to a homoclinic bifurcation. For \(t>10\), the control is activated and the system approaches a stabilized incoherent saddle point. In the mean-field theory, the control perturbation vanishes asymptotically, while in the microscopic model it experiences small fluctuations around zero. Note that the steady-state spiking rate in the saddle equilibrium is much lower than in the focus equilibrium [cf. post-transient dynamics in Figs. 3(b) and 6(b)].

## 4 Controlling two interacting populations of excitatory and inhibitory QIF neurons

Let us now consider the control of a more complex network built from two connected populations of excitatory and inhibitory QIF neurons. We follow the model discussed in Ref. [16], whose network architecture mimics that used in Parkinson's disease models. Such models are usually based on two interacting neural populations of the subthalamic nucleus (STN), consisting of excitatory neurons, and the external segment of the globus pallidus (GPe), consisting of inhibitory neurons (cf., e.g., Ref. [58]). It was shown in [16] that synchronous oscillations can be very effectively suppressed by HF stimulation of the inhibitory population, while HF stimulation of the excitatory population is ineffective. Here we want to test whether our control algorithm applied to the excitatory population can suppress synchronization. The microscopic model of the network considered here is determined by the set of \(2N\) neurons' membrane potentials \(\{V_{j}^{(E,I)}\}_{j=1,\ldots,N}\).
They satisfy the system of \(2N\) ODEs [16]: \[\tau_{m}\dot{V}_{j}^{(E,I)} = (V_{j}^{(E,I)})^{2}+\eta_{j}^{(E,I)}+\mathcal{I}_{j}^{(E,I)}, \tag{13}\] \[\text{if}\;\;V_{j}^{(E,I)}\geq V_{p}\;\;\text{then}\;\;V_{j}^{(E,I)}\gets V_{r},\] where \(V_{j}^{(E,I)}\) is the membrane potential of neuron \(j\) in the excitatory (E) or the inhibitory (I) population, and \(\tau_{m}\) is the membrane time constant. The threshold potential assumption is the same as in the previous model: \(V_{p}=-V_{r}=\infty\). The heterogeneous parameters \(\eta_{j}^{(E,I)}\) for populations E and I are taken from two independent Lorentzian distributions: \[g_{E,I}(\eta)=\frac{1}{\pi}\frac{\Delta_{E,I}}{(\eta-\bar{\eta}_{E,I})^{2}+\Delta_{E,I}^{2}}, \tag{14}\] where \(\Delta_{E,I}\) and \(\bar{\eta}_{E,I}\) are respectively the width and the center of the distribution for the populations E and I. The last term \(\mathcal{I}_{j}^{(E,I)}\) in Eqs. (13) describes synaptic coupling and external stimulation in the respective populations: \[\mathcal{I}_{j}^{(E)} = -J_{IE}r_{I}(t)+I_{c}(t), \tag{15a}\] \[\mathcal{I}_{j}^{(I)} = J_{EI}r_{E}(t)-J_{II}r_{I}(t). \tag{15b}\] Unlike the previous model, here the interaction between neurons is provided by instantaneous pulses. Each time the potential of a given neuron reaches \(\infty\), it resets to \(-\infty\), and the neuron emits a Dirac delta spike, which contributes to the output of the network. The mean synaptic rates of the E and I populations are as follows: \[r_{E,I}(t)=\lim_{\tau_{s}\to 0}\frac{\tau_{m}}{\tau_{s}N}\sum_{i=1}^{N}\sum_{k}\int_{t-\tau_{s}}^{t}\delta(t^{\prime}-(t_{i}^{k})_{E,I})dt^{\prime}, \tag{16}\] where \(\delta(t)\) is the Dirac delta function and \((t_{i}^{k})_{E,I}\) is the time of the \(k\)th spike of the \(i\)th neuron in the E and I population, respectively. Parameters \(J_{EI}\), \(J_{IE}\) and \(J_{II}\) denote synaptic weights. The current \(J_{EI}r_{E}(t)\) excites I neurons due to the synaptic activity of the E population, and the current \(-J_{IE}r_{I}(t)\) inhibits E neurons due to the synaptic activity of the I population. The current \(-J_{II}r_{I}(t)\) recurrently inhibits neurons in population I. We are considering a stimulation protocol in which only the excitatory population is stimulated, so the control current \(I_{c}(t)\) is only included in Eq. (15a).

Figure 6: Suppression of coherent oscillations in a population of QIF neurons by stabilization of a saddle incoherent state with an unstable controller at \(\omega_{c}=-1\) and \(k=20\). As in Fig. 3, the dynamics derived from the mean-field equations are shown as thick gray curves, and the corresponding dynamics derived from the microscopic model of \(10^{4}\) neurons are shown as thin red curves. All other designations are the same as in Fig. 3. The control turns on at \(t=10\). The network parameters correspond to Fig. 5.

In the limit \(N\to\infty\), this microscopic model reduces to an exact closed system of four ODEs for four biophysical quantities, the mean firing rates \(r_{E,I}\) and the mean membrane potentials \(v_{E,I}\) of populations E and I [39, 16]: \[\tau_{m}\dot{r}_{E} = \Delta_{E}/\pi+2r_{E}v_{E}, \tag{17a}\] \[\tau_{m}\dot{v}_{E} = \bar{\eta}_{E}+v_{E}^{2}-\pi^{2}r_{E}^{2}-J_{IE}r_{I}+I_{c}(t), \tag{17b}\] \[\tau_{m}\dot{r}_{I} = \Delta_{I}/\pi+2r_{I}v_{I}, \tag{17c}\] \[\tau_{m}\dot{v}_{I} = \bar{\eta}_{I}+v_{I}^{2}-\pi^{2}r_{I}^{2}+J_{EI}r_{E}-J_{II}r_{I}. \tag{17d}\] Bifurcation analysis of the uncontrolled (\(I_{c}=0\)) system (17) revealed a wide variety of dynamic modes [16].
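To make the closed-loop simulation concrete, the following minimal sketch (ours, not the authors' code) integrates the mean-field Eqs. (17) together with the controller Eqs. (1) by the Euler method; the parameter values are those quoted in this section, while the step size, simulation length, and initial conditions are our assumptions.

```python
import numpy as np

# Network parameters from Sec. 4 and controller parameters of Eqs. (1).
d_e, eta_e, d_i, eta_i = 0.05, 0.5, 0.5, -4.0
j_ei, j_ie, j_ii, tau_m = 20.0, 5.0, 0.5, 14.0   # tau_m in ms
omega_c, k = 0.5 / tau_m, 0.5

dt, t_on = 0.01, 300.0                            # ms; control starts at t_on
r_e, v_e, r_i, v_i, w = 0.1, 0.0, 0.1, 0.0, 0.0   # arbitrary initial state

for step in range(int(600.0 / dt)):
    t = step * dt
    i_c = k * (w - v_e) if t > t_on else 0.0      # Eq. (1b), E population only
    # Mean-field Eqs. (17), one explicit Euler step.
    dr_e = (d_e / np.pi + 2 * r_e * v_e) / tau_m
    dv_e = (eta_e + v_e**2 - np.pi**2 * r_e**2 - j_ie * r_i + i_c) / tau_m
    dr_i = (d_i / np.pi + 2 * r_i * v_i) / tau_m
    dv_i = (eta_i + v_i**2 - np.pi**2 * r_i**2 + j_ei * r_e - j_ii * r_i) / tau_m
    dw = omega_c * (v_e - w) if t > t_on else 0.0  # LPF, Eq. (1a)
    r_e += dt * dr_e; v_e += dt * dv_e
    r_i += dt * dr_i; v_i += dt * dv_i
    w += dt * dw
```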
Here we focus on the case when the system has a single attractor, the limit cycle. Specifically, we consider the following set of system parameters: \(\Delta_{E}=0.05\), \(\bar{\eta}_{E}=0.5\), \(\Delta_{I}=0.5\), \(\bar{\eta}_{I}=-4\), \(J_{EI}=20\), \(J_{IE}=5\), \(J_{II}=0.5\), and \(\tau_{m}=14\) ms. At these parameters, the system, along with a stable limit cycle, has an unstable fixed point, which is a high-dimensional focus with coordinates \[\left(r_{E}^{*},v_{E}^{*},r_{I}^{*},v_{I}^{*}\right)\approx(0.1319,-0.0603,0.0663,-1.1990)\] and two pairs of complex conjugate eigenvalues \(\lambda_{1,2}\approx(0.0448\pm 1.0304i)/\tau_{m}\) and \(\lambda_{3,4}\approx(-2.5634\pm 0.8190i)/\tau_{m}\). Our goal is to stabilize this fixed point using the control algorithm defined by Eqs. (1), with the constraint that the available network output is the mean membrane potential of the excitatory population, \(v=v_{E}\), and the control current \(I_{c}\) is applied only to the excitatory population. Linear stability of the fixed point in the presence of control can be analyzed in a way similar to the previous model. Now the characteristic equation has five eigenvalues. The dependence of \(\max[\mathrm{Re}(\lambda)]\) on the control gain \(k\) for three different values of the cutoff frequency \(\omega_{c}\) is shown in Fig. 7. Again, we see that the stability condition \(\max[\mathrm{Re}(\lambda)]<0\) is satisfied in a wide range of the control parameters \(k\) and \(\omega_{c}\). Figure 8 shows the performance of the control algorithm for fixed values of \(\omega_{c}=0.5/\tau_{m}\) and \(k=0.5\). The dynamics of the free (\(t<300\) ms) and controlled (\(t>300\) ms) network, obtained from the mean-field Eqs. (17), are shown as thick gray curves. The control switches the state of the system from coherent limit cycle oscillations to the stabilized incoherent state, and the feedback perturbation asymptotically vanishes. These results are consistent with numerical simulations of a microscopic model with \(N=10^{4}\) neurons in each of the excitatory and inhibitory populations (thin red curves). As in the previous case, we changed the variables \[V_{j}^{(E,I)}=\tan(\theta_{j}^{(E,I)}/2) \tag{18}\] to rewrite the Eqs. (13) in terms of theta neurons: \[\tau_{m}\dot{\theta}_{j}^{(E,I)} = 1-\cos\left(\theta_{j}^{(E,I)}\right)+\left[1+\cos\left(\theta_{j}^{(E,I)}\right)\right]\left[\eta_{j}^{(E,I)}+\mathcal{I}_{j}^{(E,I)}\right]. \tag{19}\] We integrated these equations by the Euler method with a time step of \(dt=2\times 10^{-5}\). For the numerical implementation of Eq. (16), we set \(\tau_{s}=5\times 10^{-5}\tau_{m}\). To estimate the variables of the mean-field theory, we calculated the Kuramoto order parameters \[Z_{E,I}=\frac{1}{N}\sum_{j=1}^{N}\exp(i\theta_{j}^{(E,I)}) \tag{20}\] for each population and evaluated the mean spiking rates and mean membrane potentials for populations E and I as [39]: \[r_{E,I}=\frac{1}{\pi}\operatorname{Re}\left(\frac{1-Z_{E,I}^{*}}{1+Z_{E,I}^{*}}\right),\ v_{E,I}=\operatorname{Im}\left(\frac{1-Z_{E,I}^{*}}{1+Z_{E,I}^{*}}\right), \tag{21}\] where \(Z_{E,I}^{*}\) denotes the complex conjugate of \(Z_{E,I}\). Panels (a), (c) and (e) in Fig. 8 show good agreement between the time traces obtained from the mean-field equations and the microscopic model. Panels (b) and (d) are raster plots of 500 randomly selected neurons in the E and I populations, respectively.
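The mapping from microscopic phases to mean-field variables used in Eqs. (11)-(12) and (20)-(21) is compact enough to sketch directly; the following helper (ours, with an illustrative usage example) computes \(r\) and \(v\) from a set of theta-neuron phases.

```python
import numpy as np

def mean_field_from_phases(theta: np.ndarray) -> tuple[float, float]:
    """Spiking rate r and mean membrane potential v from theta-neuron phases.

    Computes the Kuramoto order parameter Z of Eq. (11)/(20) and applies
    the conformal map of Eq. (12)/(21) to its complex conjugate."""
    z_conj = np.mean(np.exp(1j * theta)).conjugate()
    w = (1 - z_conj) / (1 + z_conj)
    return w.real / np.pi, w.imag  # (r, v)

# Example: fully incoherent phases (Z ~ 0) give r ~ 1/pi and v ~ 0.
rng = np.random.default_rng(0)
r, v = mean_field_from_phases(rng.uniform(-np.pi, np.pi, size=10_000))
```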
## 5 Controlling a population of chaotically spiking Hindmarsh-Rose neurons

As a final example, consider the control of synchronous oscillations in a heterogeneous population of electrically coupled Hindmarsh-Rose neurons [55]: \[\dot{v}_{j} = y_{j}-v_{j}^{3}+3v_{j}^{2}-z_{j}+I_{j}+K(v-v_{j})+I_{c}(t), \tag{22a}\] \[\dot{y}_{j} = 1-5v_{j}^{2}-y_{j}, \tag{22b}\] \[\dot{z}_{j} = r[\nu(v_{j}-\kappa)-z_{j}],\quad j=1,\ldots,N. \tag{22c}\] Here, \(v_{j}\), \(y_{j}\) and \(z_{j}\) are the membrane potential, the spiking variable and the adaptation current of the \(j\)th neuron, respectively.

Figure 7: Linear stability of the incoherent state associated with a high-dimensional focus in a system of two interacting populations of excitatory and inhibitory QIF neurons in the presence of control. The entire network is controlled using the output and input of the excitatory population only. The maximum real part of the eigenvalues as a function of the control gain \(k\) is shown for different values of the cut-off frequency \(\omega_{c}\) of the LPF. Network parameters: \(\Delta_{E}=0.05\), \(\bar{\eta}_{E}=0.5\), \(\Delta_{I}=0.5\), \(\bar{\eta}_{I}=-4\), \(J_{EI}=20\), \(J_{IE}=5\), \(J_{II}=0.5\), and \(\tau_{m}=14\) ms.

The variable \[v=\frac{1}{N}\sum_{i=1}^{N}v_{i} \tag{23}\] is the mean membrane potential. The heterogeneity of neurons is provided by the currents \(I_{j}\), which we randomly select from a Gaussian distribution with a mean value of 3 and a variance of 0.1. Parameters \(r=0.06\), \(\nu=4\) and \(\kappa=-1.56\) are chosen such that free (\(K=0\) and \(I_{c}=0\)) neurons generate chaotic bursts. The term \(K(v-v_{j})\) in Eq. (22a) determines the electrical coupling between neurons, where \(K\) is the coupling strength. To get synchronized oscillations of the uncontrolled population, we take this parameter large enough, \(K=0.1\). The last term \(I_{c}(t)\) in this equation is the control current given by Eqs. (1). Figure 9 shows how the control with fixed parameters \(\omega_{c}=0.05\) and \(k=2\) changes the dynamics of a population of \(N=10^{4}\) coupled neurons. Without control (\(t<700\)), synchronous oscillations of large amplitude are observed in the dynamics of the mean membrane potential \(v(t)\), and coherent bursts are visible on the raster plot. Activation of the control at \(t>700\) effectively suppresses synchronous oscillations of the mean membrane potential, and neurons demonstrate incoherent bursts. As in the previous examples, only small-amplitude oscillations around zero are observed in the asymptotic dynamics of the control perturbation \(I_{c}(t)\). Figure 9(d) demonstrates that the control hardly affects the amplitude dynamics of individual neurons. As an example, we show the time trace of the membrane potential of the first neuron, \(v_{1}(t)\), before and after activation of the control. Note that, unlike the previous examples, there is no known way to reduce this model to a low-dimensional system. Thus, here we cannot theoretically estimate the mean value of the membrane potential of an unstable incoherent state in the thermodynamic limit and determine whether the equilibrium is associated with an unstable focus or a saddle and how its stability changes in the presence of control. However, our algorithm does not require such detailed knowledge and, with an appropriate choice of control parameters, works just as well as in the previous, relatively simple models that allow a low-dimensional reduction in the thermodynamic limit.
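A minimal sketch of the microscopic simulation described in this section is given below (our own illustration, not the authors' code); the network and controller parameters are those quoted above, while the Euler step size, initial conditions, and the reduced population size for a quick run are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000                                  # paper uses N = 10^4; reduced here
K, k, omega_c = 0.1, 2.0, 0.05
r_par, nu, kappa = 0.06, 4.0, -1.56
I = rng.normal(3.0, np.sqrt(0.1), N)       # heterogeneous currents I_j (variance 0.1)

v = rng.uniform(-1, 1, N)                  # arbitrary initial state
y = np.zeros(N); z = np.zeros(N); w = 0.0
dt, t_on = 0.01, 700.0                     # control is switched on at t_on

for step in range(int(1500.0 / dt)):
    t = step * dt
    v_mean = v.mean()                      # mean membrane potential, Eq. (23)
    i_c = k * (w - v_mean) if t > t_on else 0.0   # controller output, Eq. (1b)
    # Hindmarsh-Rose network, Eqs. (22), one explicit Euler step.
    dv = y - v**3 + 3 * v**2 - z + I + K * (v_mean - v) + i_c
    dy = 1 - 5 * v**2 - y
    dz = r_par * (nu * (v - kappa) - z)
    v += dt * dv; y += dt * dy; z += dt * dz
    if t > t_on:
        w += dt * omega_c * (v_mean - w)   # LPF dynamics, Eq. (1a)
```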
Numerical simulations of this model show that our algorithm works only when \(\omega_{c}>0\) and fails when \(\omega_{c}<0\). This allows us to conclude that the unstable equilibrium in this model is an unstable focus.

## 6 Conclusions

We considered the problem of suppressing collective synchronous oscillations in large-scale neural networks. This problem is relevant in neurology, as excessive synchronized oscillations in certain areas of the brain are often associated with various neurological disorders [7; 8; 9; 10]. Synchronized oscillations usually appear when an equilibrium incoherent state of the network becomes unstable. Information about unstable network states is difficult to extract from experimental data. We have shown that a priori unknown unstable incoherent states of large-scale neural networks can be effectively stabilized using a simple first-order feedback controller based on a low-pass filter. Initially, this controller was developed for the stabilization of unknown unstable equilibrium points of low-dimensional dynamical systems [52; 53] and has not yet been tested for high-dimensional systems such as neural networks, consisting of a huge number of interacting neurons. We have demonstrated the effectiveness of our control algorithm on three examples of neural networks. The first two examples refer to QIF neurons. In the thermodynamic limit, microscopic models of networks built from QIF neurons can be reduced to exact low-dimensional systems of mean-field equations. This greatly simplifies the analysis of the effect of control on network dynamics. In the first example, we demonstrated the suppression of synchronous oscillations in a population of excitatory QIF neurons interacting through synaptic pulses of finite width. Here we have stabilized two types of incoherent states, associated with an unstable focus and a saddle equilibrium point. Until now, the control of the saddle equilibrium state in neural networks has not been considered in the literature. Here we have achieved stabilization of the saddle state with the help of an unstable controller.

Figure 8: Suppression of coherent oscillations in a system of two interacting populations of excitatory and inhibitory QIF neurons by stabilization of an unstable incoherent state associated with a high-dimensional focus. Dynamics of the mean spiking rate of (a) excitatory and (c) inhibitory populations, and (e) control perturbation applied to the excitatory population. The dynamics derived from the mean-field equations are shown as thick gray curves, and the corresponding dynamics derived from the microscopic model with \(10^{4}\) neurons in each excitatory and inhibitory population are shown as thin red curves. (b), (d) Raster plots of 500 randomly selected neurons in E and I populations, respectively. The control turns on at \(t=300\) ms. Network parameters as in Fig. 7. Controller parameters: \(\omega_{c}=0.5/\tau_{\rm m}\) and \(k=0.5\).

In the second example, we considered the control of a network built from two connected populations of excitatory and inhibitory QIF neurons, whose architecture mimics that used in Parkinson's disease models [58]. Previously, it was shown that high-frequency stimulation of the inhibitory population can effectively suppress synchronization in such a network, but stimulation of the excitatory population is ineffective [16]. Here, our algorithm provided effective stabilization of the incoherent state of the network by using the output and input of the excitatory population.
For the first two examples, the results derived from the mean-field equations were confirmed by numerical simulations of the respective microscopic models. We have shown that networks of \(10^{4}\) neurons are quantitatively well described by the mean-field equations. In the third example, we demonstrated the suppression of coherent oscillations in a population of electrically coupled Hindmarsh-Rose neurons. Low-dimensional reduction of the equations of the microscopic model is impossible in this case. However, the direct application of the control algorithm to the microscopic model showed that it works just as well as in the previous two examples. Note that successful stabilization of unstable incoherent states makes them experimentally observable, and this can serve as a quantitative benchmark for assessing the quality of neural network models. Finally, we summarize the main advantages of the proposed algorithm for suppressing coherent oscillations in large-scale neural networks: (i) the algorithm does not require any detailed knowledge of the network model and its unstable incoherent equilibria; (ii) the algorithm is robust to changes in control parameters; (iii) the algorithm can stabilize not only incoherent states associated with an unstable focus but also those associated with a saddle equilibrium point; (iv) for large networks the algorithm is weakly invasive: the control perturbation decreases according to a power law with increasing network size and vanishes as the network size tends to infinity; (v) the algorithm is adaptive, which means that it provides tracking of the equilibrium states in the case of slowly varying system parameters (see [52, 53] for details). In this paper, we limited ourselves to the consideration of the simplest first-order controller to stabilize unknown incoherent states. More complex networks may require higher-order generalized adaptive controllers [52]. In addition, we emphasize that mean-field equations derived from microscopic dynamics accurately describe synchronization processes in large networks, and these models are well suited for testing and developing various algorithms for suppressing unwanted coherent oscillations.

## Acknowledgments

This work is supported by grant No. S-MIP-21-2 of the Research Council of Lithuania.

Figure 9: Suppression of coherent oscillations in a population of electrically coupled Hindmarsh-Rose neurons. The control is activated at time \(t=700\). (a) Dynamics of the mean membrane potential. (b) Raster plot of 100 randomly selected neurons. (c) and (d) Time traces of the control perturbation and the membrane potential of the first neuron, respectively. Network parameters: \(r=0.06\), \(\nu=4\), \(\kappa=-1.56\), \(K=0.1\) and \(N=10^{4}\). Heterogeneous currents \(I_{j}\) in Eq. (22a) are randomly selected from a Gaussian distribution with a mean value of 3 and a variance of 0.1. Controller parameters: \(\omega_{\mathrm{c}}=0.05\) and \(k=2\).
2305.19868
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN
Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace weight multiplications in ANNs with additions, which are more energy-efficient and less computationally intensive. However, it remains a challenge to train deep SNNs due to the discrete spike function. A popular approach to circumvent this challenge is ANN-to-SNN conversion. However, due to the quantization error and accumulating error, it often requires lots of time steps (high inference latency) to achieve high performance, which negates SNN's advantages. To this end, this paper proposes Fast-SNN that achieves high performance with low latency. We demonstrate the equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, based on which the minimization of the quantization error is transferred to quantized ANN training. With the minimization of the quantization error, we show that the sequential error is the primary cause of the accumulating error, which is addressed by introducing a signed IF neuron model and a layer-wise fine-tuning mechanism. Our method achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification, object detection, and semantic segmentation. Codes are available at: https://github.com/yangfan-hu/Fast-SNN.
Yangfan Hu, Qian Zheng, Xudong Jiang, Gang Pan
2023-05-31T14:04:41Z
http://arxiv.org/abs/2305.19868v1
# Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN

###### Abstract

Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace weight multiplications in ANNs with additions, which are more energy-efficient and less computationally intensive. However, it remains a challenge to train deep SNNs due to the discrete spike function. A popular approach to circumvent this challenge is ANN-to-SNN conversion. However, due to the quantization error and accumulating error, it often requires lots of time steps (high inference latency) to achieve high performance, which negates SNN's advantages. To this end, this paper proposes Fast-SNN that achieves high performance with low latency. We demonstrate the equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, based on which the minimization of the quantization error is transferred to quantized ANN training. With the minimization of the quantization error, we show that the sequential error is the primary cause of the accumulating error, which is addressed by introducing a signed IF neuron model and a layer-wise fine-tuning mechanism. Our method achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification, object detection, and semantic segmentation. Codes are available at: [https://github.com/yangfan-hu/Fast-SNN](https://github.com/yangfan-hu/Fast-SNN).

Deep Spiking Neural Networks, Neuromorphic Computing, ANN-to-SNN Conversion, Object Detection, Semantic Segmentation

## 1 Introduction

Over the last decade, deep artificial neural networks (ANNs) have made tremendous progress in various applications, including computer vision, natural language processing, speech recognition, etc. However, due to the increasing complexity of models and datasets, state-of-the-art ANNs require heavy memory and computational resources [1]. This situation prohibits the deployment of deep ANNs in resource-constrained environments (e.g., embedded systems or mobile devices). In contrast, the human brain can efficiently perform complex perceptual and cognitive tasks with a budget of approximately 20 watts [2]. Its remarkable capacities may be attributed to spike-based temporal processing that enables sparse and efficient information transfer in networks of biological neurons [2]. Inspired by biological neural networks, Maass [3] proposed a new class of neural networks, the spiking neural networks (SNNs). SNNs exchange information via spikes (binary events that are either 0 or 1) instead of the continuous activation values in ANNs. An SNN unit (spiking neuron) only activates when it receives or emits a spike and remains dormant otherwise. Such event-driven, asynchronous characteristics of SNNs reduce energy consumption over time. In addition, SNNs use accumulate (AC) operations that are much less costly than the multiply-and-accumulate (MAC) operations in state-of-the-art deep ANNs. In the community of neuromorphic computing, researchers are developing neuromorphic computing platforms (e.g., TrueNorth [4], Loihi [5]) for SNN applications. These platforms, aiming at alleviating the von Neumann bottleneck with co-located memory and computation units, can perform SNN inference with low power consumption. Moreover, SNNs are inherently compatible with emerging event-based sensors (e.g., the dynamic vision sensor (DVS) [6]).
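To ground the mechanism described above, the following minimal sketch (ours, not from the paper) shows integrate-and-fire dynamics with the reset-by-subtraction scheme commonly used in ANN-to-SNN conversion: the neuron accumulates input charge with addition-only operations and emits a binary spike whenever its membrane potential crosses the threshold.

```python
import numpy as np

def if_neuron(inputs: np.ndarray, v_th: float = 1.0) -> np.ndarray:
    """Integrate-and-fire neuron over T time steps.

    Accumulates input charge (AC operations, no multiplies) and emits a
    binary spike whenever the membrane potential crosses v_th; the soft
    reset (subtract v_th) keeps the residual charge."""
    v, spikes = 0.0, np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v += x
        if v >= v_th:
            spikes[t] = 1.0
            v -= v_th
    return spikes

# A constant drive of 0.4 with threshold 1.0 fires on ~40% of time steps,
# so the firing rate approximates the corresponding analog activation.
print(if_neuron(np.full(10, 0.4)).mean())  # -> 0.4
```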
However, the lack of efficient training algorithms obstructs deploying SNNs in real-time applications. Due to the discontinuous functionality of spiking neurons, the gradient-descent backpropagation algorithms that have achieved great success in ANNs are not directly applicable to SNNs. Recently, researchers have made notable progress in training SNNs directly with backpropagation algorithms. They overcome the non-differentiability of the spike function by using surrogate gradients [7, 8, 9, 10, 11]. Then they apply backpropagation through time (BPTT) to optimize SNNs in a way similar to the backpropagation in ANNs. However, due to the sparsity of spike trains, directly training SNNs with BPTT is inefficient in both computation and memory with prevalent computing devices (e.g., GPUs) [7, 12]. Furthermore, the surrogate gradients would cause the vanishing or exploding gradient problem for deep networks, making direct training methods less effective for tasks of high complexity [12]. In contrast, rate-coded ANN-to-SNN conversion algorithms [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] employ the same training procedures as ANNs, which benefit from the efficient computation of ANN training algorithms. Besides, by approximating the ANN activations with SNN firing rates, ANN-to-SNN conversion algorithms have achieved promising performance of SNNs on challenging tasks, including image classification [16] and object detection [19]. Nevertheless, all existing methods suffer from the quantization error and the accumulating error [16] (see Section 2.2 for further discussion), resulting in performance degradation during conversion, especially when the latency is short. Although we can significantly mitigate the quantization error with an increasing number of discrete states (higher inference latency), doing so unfavorably reduces the computation/energy efficiency of real-time applications. The growing inference latency will proportionally increase the number of operations and the actual running time for a deployed SNN. To this end, this paper aims to build a Fast-SNN with competitive performance (i.e., comparable with ANNs) and low inference latency (i.e., 3, 7, or 15 time steps) to preserve the SNN's advantages. Our basic idea is to reduce the quantization error and the accumulating error. The main contributions are:

* **Quantization error minimization.** We show the equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs. Based on this mapping, we demonstrate that the quantization error can be minimized by the supervised training of quantized ANNs, which facilitates ANN-to-SNN conversion by finding the optimal clipping range and the novel distributions of weights and activations for each layer. Besides, we derive the upper bound of inference latency that ensures the lossless conversion from quantized ANNs to SNNs.
* **Sequential error minimization.** To further boost the speed of SNNs and reduce the inference latency, we minimize the accumulating error. We show that the sequential error at each layer is the primary cause of the accumulating error when converting a quantized ANN to an SNN. Based on this observation, we propose a signed IF neuron to mitigate the impact of wrongly fired spikes to address the sequential error at each layer, and propose a layer-wise fine-tuning mechanism to alleviate the accumulating sequential error across layers.
* **Deep models for various computer vision tasks.** Our method provides a promising solution to convert deep ANN models to SNN counterparts.
It achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification (accuracy: 71.31%, time steps: 3, ImageNet), object detection (mAP: 73.43%, time steps: 7, PASCAL VOC 2007), and semantic segmentation (mIoU: 69.7%, time steps: 15, PASCAL VOC 2012).

## 2 Related Work

We present recent advances of two promising routes to build SNNs: direct training with surrogate gradients and ANN-to-SNN conversion.

### _Direct Training with Surrogate Gradients_

Direct training methods view SNNs as special RNNs [24] and employ BPTT to propagate gradients backward through time. To address the discontinuous, non-differentiable nature of SNNs, researchers employ surrogate gradients [7, 8, 25, 26] to approximate the derivative of the spiking function, which is a Dirac function. In [9], Wu et al. first proposed a spatio-temporal backpropagation (STBP) framework to simultaneously consider the spatial and timing-dependent temporal domains during network training. Wu et al. further proposed to enhance STBP with a neuron normalization technique [27]. In [10], Gu et al. proposed spatio-temporal credit assignment (STCA) for BPTT with a temporal-based loss function. To develop a batch normalization method customized for BPTT, Kim et al. [28] proposed a batch normalization through time (BNTT) technique. Similarly, Zheng et al. [29] proposed a threshold-dependent batch normalization (tdBN) for STBP. In [30], Zhang et al. proposed TSSL-BP to break down error backpropagation across two types of inter-neuron and intra-neuron dependencies, achieving low-latency SNNs. In [31], Rathi et al. proposed to optimize the leakage and threshold in the LIF neuron model. In [32], Kim et al. proposed a Neural Architecture Search (NAS) approach to find better SNN architectures. For other perspectives on direct training methods, readers can refer to [2, 33, 34, 35]. With these emerging techniques, direct training methods can build SNNs with low latency and high accuracy [30, 36, 31]. Various SNN applications with direct training methods have also emerged [37, 38, 39, 40, 41, 42, 43]. However, due to the sparsity of spike trains, directly training SNNs with BPTT is inefficient in both computation and memory on prevalent computing devices (e.g., GPUs) [7, 12]. An SNN with \(T\) time steps propagates \(T\) times iteratively during forward and backward propagation. Compared with a full-precision ANN, it consumes more memory and requires about \(T\) times more computation time. Furthermore, the surrogate gradients would cause the vanishing or exploding gradient problem for deep networks, making direct training methods less effective for tasks of high complexity [12]. For these reasons, we focus on ANN-to-SNN conversion methods for building SNNs. It is worth noting that SNNs with \(T=1\) can be viewed as a special case of binary neural networks (BNNs) [1], making direct training and ANN-to-SNN conversion methods interchangeable. However, these SNNs suffer from a significant performance drop compared with their corresponding full-precision ANNs.

### _ANN-to-SNN Conversion_

The first ANN-to-SNN conversion method by Perez-Carrasco et al. [13] maps ANNs of sigmoid neurons into SNNs of leaky integrate-and-fire (LIF) neurons by scaling the weights of pre-trained ANNs. The scaling factor is determined by neuron parameters (e.g., threshold, leak rate, persistence).
Although this method achieves promising results on two DVS tasks (human silhouette orientation and poker card symbol recognition), it does not apply to other tasks as its hyperparameters (neuron parameters) are determined manually. In [14], Cao et al. demonstrated that the rectified linear unit (ReLU) function is functionally equivalent to the integrate-and-fire (IF) neuron, i.e., an LIF neuron with neither a leak factor nor a refractory period. With only one hyperparameter, the firing threshold of spiking neurons, SNNs of IF neurons can approximate ANNs with ReLU activation, no bias term, and average pooling. With the great success of ReLU in ANNs, most existing ANN-to-SNN conversion methods follow the framework in [14]. Within the framework of [14], the quantization error [16], usually observed as the over-/under-activation of spiking neurons compared with ANN activations [15], is recognized as the primary factor that obstructs lossless ANN-to-SNN conversion. In an SNN, the spiking functionality inherently quantizes the inputs (clipping the inputs to a range represented by temporally discrete states) and introduces the quantization error at each layer. To mitigate the quantization error, researchers proposed various normalization methods. Diehl et al. [15] proposed to scale the weights by the maximum possible activation. In [16], Rueckauer et al. improved the weight normalization of [15] by using the 99.9th percentile of activations instead of the maximum. In [19], Kim et al. proposed to apply channel-wise weight normalization to eliminate extremely small activations. In [18], Sengupta et al. proposed threshold balancing, an equivalent alternative to weight normalization, to dynamically normalize SNNs at run time. Following [18], Han et al. [20] proposed to scale the threshold by the fan-in and fan-out of the IF neuron. However, these methods only optimize the clipping range based on statistical analysis, leaving the distribution of weights/activations unoptimized. Different from the above methods, Yousefzadeh et al. [44] proposed to address the over-activation problem with a signed neuron model that adapts the firing rate based on the total membrane charges, with a positive/negative firing threshold of +1/-1. However, it also leaves the distribution of weights/activations unoptimized. In addition, as analyzed in [16], the quantization error at each layer contributes to the accumulating error that severely distorts the approximation between ANN activations and SNN firing rates at deep layers. Therefore, these methods typically employ shallow architectures and require a latency of hundreds or even thousands of time steps to achieve high accuracy. To address the accumulating error, researchers proposed to train a fitting ANN and apply statistical post-processing. In [17], Hu et al. employed ResNet, a robust deep architecture, for conversion. They devised a systematic approach to convert residual connections and reported that the observed accumulating error in a residual architecture is lower than that in a plain architecture. Furthermore, they proposed to counter the accumulating error by increasing the firing rate of neurons at deeper layers based on the statistically estimated error. In [21], Deng et al. proposed to train ANNs using a capped ReLU, i.e., ReLU1 and ReLU2. Then they applied a scaling factor to normalize the firing thresholds by the maximum activation of the capped ReLU function.
Different from statistical post-processing, an approach has emerged that employs ANN training to adjust the distribution of weights/activations. In [45], Zou et al. proposed to employ a quantization function with fixed steps during ANN training to improve the mapping from ANNs to SNNs. In [22], Yan et al. proposed a framework to adjust pre-trained ANNs with the knowledge of temporal quantization in SNNs. They introduced a residual term in ANNs to emulate the residual membrane potential in SNNs and reduce the quantization error. In [23], Li et al. introduced layer-wise calibration to optimize the weights of SNNs. In [12], Wu et al. proposed a hybrid framework called progressive tandem learning to fine-tune full-precision floating-point ANNs with the knowledge of temporal quantization. This framework achieves promising results with a latency of 16. However, it still significantly suffers from the quantization error when the latency is low, e.g., 3 (see our results in Section 4). In Table I, we summarize how each SNN method addresses the quantization error and accumulating error. By transferring the minimization of the quantization error to quantized ANN training, our method is the first to employ supervised training to optimize both the clipping range and the distribution of weights/activations. It facilitates ANN-to-SNN conversion with a learnable clipping range and a global optimum for the distribution of weights/activations. In contrast, methods such as [12], [23], [22] can only achieve a local optimum as they first train a full-precision floating-point (FP32) model and then apply fine-tuning to the FP32 model. As for [45], it can only achieve a local optimum as it does not optimize the clipping range during training. Thanks to transferring the quantization error to ANN training, our method simplifies the complexity of the accumulating error as it contains only the sequential error.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Quantization Error} & \multicolumn{2}{c|}{Accumulating Error} & \multicolumn{1}{c}{Time Step} \\ \cline{2-6} & Clipping & Distribution of & \multirow{2}{*}{Error Types} & \multirow{2}{*}{Minimization} & Required for \\ & Range & Weights/Activations & & & SOTA \\ & Optimization & Optimization & & & Performance \\ \hline Sengupta et al. 2019 [18] & \multirow{3}{*}{Statistical Analysis} & \multirow{3}{*}{N/A} & \multirow{3}{*}{Quantization \& Sequential} & \multirow{3}{*}{N/A} \\ Kim et al. 2020 [19] & & & & & \\ Han et al. 2020 [20] & & & & & \\ \hline Yousefzadeh et al. 2019 [44] & N/A & N/A & Quantization \& Sequential & N/A & \(\geq\)800 \\ \hline Hu et al. 2018 [17] & \multirow{3}{*}{Statistical Analysis} & \multirow{3}{*}{N/A} & \multirow{3}{*}{Quantization \& Sequential} & \multirow{3}{*}{N/A} & \(\geq\)16 \\ Deng et al. 2021 [21] & & & & & \\ \hline Yan et al. 2021 [22] & \multirow{3}{*}{Statistical Analysis} & \multirow{3}{*}{Local Optimum} & \multirow{3}{*}{Quantization \& Sequential} & \multirow{3}{*}{Fine-tuning} & \multirow{3}{*}{\(\geq\)16} \\ Li et al. 2021 [23] & & & & & \\ Wu et al. 2021 [12] & & & & & \\ \hline Zou et al. 2020 [45] & Fixed Value & Local Optimum & Sequential & N/A & \(\geq\)4 \\ \hline Ours & Learning Based & Global Optimum & Sequential & Fine-tuning & 3 \\ \hline \hline \end{tabular} \end{table} TABLE I: A summary of recent ANN-to-SNN conversion methods with respect to the minimization of the quantization error and accumulating error.

With this
promising solution, we further explore various computer vision tasks (including image classification, object detection, and semantic segmentation) in contrast to previous SNN methods that primarily focus on image classification.

## 3 Fast-SNN

To build Fast-SNNs, we first analyze the equivalent mapping between _spatial quantization_ (mapping inputs to a set of discrete finite values) in ANNs and _temporal quantization_ (mapping inputs to a set of temporal events/spikes) in SNNs. This analysis reveals the activation equivalence between ANNs and SNNs, based on which we transfer the minimization of the quantization error to quantized ANN training. With the quantization error minimized, the accumulating sequential error, i.e., the combination of the _accumulating error_ (an error caused by the mismatch in previous layers that grows with network propagation) and the _sequential error_ (an error caused by the sequence and firing mechanism of spikes), becomes the primary factor that degrades SNN performance. To mitigate the sequential error at each layer, we introduce a signed IF neuron model. To alleviate the accumulating error across layers, we introduce a layer-wise fine-tuning mechanism to minimize the difference between SNN firing rates and ANN activations. Fig. 1 illustrates the overview of our framework.

Fig. 1: Overview of our conversion framework. We first minimize the quantization error during quantized ANN training. Then we provide an equivalence mapping between quantized ANN values and SNN firing rates. Finally, we address the accumulating error with a signed IF neuron model and a layer-wise fine-tuning module. Details of the layer-wise fine-tuning module can be found in Fig. 5.

### _Activation Equivalence between ANNs and SNNs_

For a rate-coded spiking neuron \(i\) at layer \(l\), its spike count (number of output spikes) satisfies \(N_{i}^{l}\in\{0,1,\dots,T\}\), where \(T\) is the length of spike trains. The spike count has \(T+1\) discrete states (values). For a \(b\)-bit unsigned integer, it has \(2^{b}\) discrete states (values): \(\{0,1,\dots,2^{b}-1\}\). Our basic idea of the conversion from ANNs to SNNs is to map the integer activations of quantized ANNs \(\{0,1,\dots,2^{b}-1\}\) to spike counts \(\{0,1,\dots,T\}\), i.e., set \(T\) to \(2^{b}-1\). In the ANN domain, building ANNs with integer activations is naturally equivalent to compressing activations with the uniform quantization function that outputs uniformly distributed values. Such a function spatially discretizes a full-precision activation \(x_{i}^{l}\) of neuron \(i\) at layer \(l\) in an ANN of ReLU activation into: \[Q_{i}^{l}=\frac{s^{l}}{2^{b}-1}clip(round((2^{b}-1)\frac{x_{i}^{l}}{s^{l}}),0, 2^{b}-1), \tag{1}\] where \(Q_{i}^{l}\) denotes the spatially quantized value, \(b\) denotes the number of bits (precision), the number of states is \(2^{b}\), \(round(\cdot)\) denotes a rounding operator, \(s^{l}\) denotes the clipping threshold that determines the clipping range of input \(x_{i}^{l}\), and \(clip(x,min,max)\) is a clipping operator that saturates \(x\) within the range \([min,max]\). In the SNN domain, we consider the SNN model defined in [16] to facilitate conversion. This model employs direct coding (direct currents at the first layer) and IF (no leakage) neurons with reset-by-subtraction. At time step \(t\), the total membrane charge \(z_{i}^{l}(t)\) for neuron \(i\) at layer \(l\) is formulated as: \[z_{i}^{l}(t)=\sum_{j=1}^{M^{l-1}}W_{ij}^{l}S_{j}^{l-1}(t)+b_{i}^{l}, \tag{2}\] where \(M^{l-1}\) is the number of neurons at layer \(l-1\), \(W_{ij}^{l}\) is the weight of the synaptic connection between neurons \(i\) and \(j\), \(b_{i}^{l}\) is the bias term that indicates a constant injecting current, and \(S_{j}^{l-1}(t)\) indicates an input spike from neuron \(j\) at time \(t\).
The membrane equation of the IF neuron is then defined as follows: \[V_{i}^{l}(t)=V_{i}^{l}(t-1)+z_{i}^{l}(t)-\theta^{l}\Theta(V_{i}^{l}(t)-\theta ^{l}), \tag{3}\] where \(\theta^{l}\) denotes the firing threshold, \(t\) denotes the \(t\)-th time step, and \(\Theta\) is a step function defined as: \[\Theta(x)=\begin{cases}1&\text{if }x\geq 0,\\ 0&\text{otherwise}.\end{cases} \tag{4}\] Given a spike train of length \(T\), the total input membrane charge over the whole time window \(\widetilde{x}_{i}^{l}\) is defined as: \[\widetilde{x}_{i}^{l}=\mu^{l}+\sum_{t=1}^{T}z_{i}^{l}(t), \tag{5}\] where \(\mu^{l}\) is the initial membrane charge. The spiking functionality of IF neurons inherently quantizes \(\widetilde{x}_{i}^{l}\) into a quantized value represented by the firing rate \(r_{i}^{l}(t)\): \[\widetilde{Q}_{i}^{l}=r_{i}^{l}(t)=\frac{N_{i}^{l}}{T}=\frac{1}{T}clip(floor( \frac{\widetilde{x}_{i}^{l}}{\theta^{l}}),0,T), \tag{6}\] where \(\widetilde{Q}_{i}^{l}\) denotes the temporally quantized value and \(floor(\cdot)\) denotes a flooring operator. Since the 1st spiking layer (i.e., \(l=1\)) receives direct currents as inputs, we have \[\widetilde{x}_{i}^{1}=Tx_{i}^{1}. \tag{7}\] Comparing Eq. 6 with Eq. 1, we let \(\mu^{l}=\theta^{l}/2\), \(T=2^{b}-1\), and \(\theta^{l}=s^{l}\). Since a flooring operator can be converted to a rounding operator, \[floor(x+0.5)=round(x), \tag{8}\] we rewrite Eq. 6 to \[\widetilde{Q}_{i}^{l}=\frac{1}{T}clip(round(\frac{Tx_{i}^{l}}{\theta^{l}}),0,T)=\frac{Q _{i}^{l}}{s^{l}}. \tag{9}\] Then we scale the weights in the following layer to \(s^{l}W^{l+1}\), making an output spike equivalent to a continuous output of value \(s^{l}\). We can rewrite the effectively quantized value from Eq. 9 to: \[\widetilde{Q}_{i}^{l}=\frac{s^{l}}{T}clip(round(\frac{Tx_{i}^{l}}{\theta^{l}}),0,T)=Q _{i}^{l}. \tag{10}\] In Eq. 10, our equivalent mapping between spatial quantization in ANNs and temporal quantization in SNNs derives the activation equivalence between ANNs and SNNs. In Fig. 2, we present an example illustrating how quantized values in ANNs are mapped to firing rates in SNNs.

**Latency upper bound derivation.** According to the equivalence between quantized ANNs and SNNs shown in Eq. 10, the latency upper bound can be computed by forcing spiking neurons to start firing only after receiving all possible spikes. In such a case, the sequential error of converting a quantized ANN to a rate-coded SNN is eliminated, and the firing rates in converted SNNs are identical to activations in quantized ANNs: \[Latency=T\times L=(2^{b}-1)\times L. \tag{11}\] According to the latency bound in Eq. 11, \(b\) and \(L\) are two factors that obstruct the fast inference of SNNs. To fully realize our Fast-SNN while maintaining network performance, we explore the minimization of \(b\) in Section 3.2 and reduce the impact of deep models (i.e., with a large \(L\)) in Section 3.3.
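To make this equivalence concrete, the following Python sketch simulates a single reset-by-subtraction IF neuron (Eqs. 3-6) driven by a constant direct-coded current, with \(\mu^{l}=\theta^{l}/2\), \(\theta^{l}=s^{l}\), and \(T=2^{b}-1\), and checks Eq. 10 against the uniform quantizer of Eq. 1. The helper names are ours, and the single-neuron, constant-current setting is a deliberate simplification:

```python
import numpy as np

def ann_quantize(x, s, b):
    """Uniform quantizer of Eq. 1; floor(x + 0.5) realizes round() as in Eq. 8."""
    n = 2 ** b - 1
    return (s / n) * np.clip(np.floor(n * x / s + 0.5), 0, n)

def if_firing_rate(x, theta, T):
    """IF neuron with reset-by-subtraction and initial charge mu = theta / 2,
    driven by a constant direct-coded input current x at every time step."""
    v, spikes = theta / 2.0, 0          # v starts at the initial charge mu
    for _ in range(T):
        v += x                          # integrate the input (Eq. 2)
        if v >= theta:                  # fire and reset by subtraction (Eq. 3)
            v -= theta
            spikes += 1
    return spikes / T                   # firing rate r = N / T (Eq. 6)

b = 2
s = theta = 1.3                         # theta^l = s^l
T = 2 ** b - 1                          # T = 2^b - 1 = 3
for x in np.random.uniform(-0.5, 2.0, 1000):
    assert np.isclose(s * if_firing_rate(x, theta, T), ann_quantize(x, s, b))
print("s * firing rate equals the quantized activation for all sampled inputs")
```

Under these assumptions, the assertion holds for every sampled input, which is exactly the lossless mapping that the latency bound of Eq. 11 exploits.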
### _Quantization Error Minimization_

In Eq. 11, the bit-precision of quantized ANNs determines \(T\), the length of spike trains, and thus the latency. Since the number of operations in SNNs grows proportionally to \(T\), a high bit-precision (e.g., \(b=16\), \(T=65535\)) of quantized ANNs will negate SNN's advantages in computation/energy efficiency. The solution to reducing \(T\) while retaining high accuracy is training low-precision networks (e.g., \(b=2\), \(T=3\)) with minimized quantization error for conversion. Advances in ANN quantization [1] have shown that low-precision models can reach full-precision baseline accuracy with quantization-aware training [46, 47, 48]. With Eq. 10, we then equivalently convert the quantized ANNs to SNNs, eliminating the quantization error during conversion. Therefore, our framework fully inherits the advantages of quantization-aware training, minimizing the quantization error during training. On the one hand, we optimize network parameters with the knowledge of quantization, which minimizes the mismatch between the distribution of weights/activations and that of the discrete quantization states [46, 47, 48]. On the other hand, we optimize the clipping range for quantization by learning a clipping threshold during supervised training. Our scheme of a learnable clipping range for quantization is quite different from existing SNN methods that clip the inputs to a range determined by a pre-defined scaling factor \(\lambda\), e.g., the 99th or 99.9th percentile of all ReLU activation values [12, 16, 17, 19]. Our method determines the clipping range using supervised training instead of statistical post-processing. This difference brings three advantages:

* **Effectiveness.** Compared with a clipping threshold optimized during quantization-aware training in ANNs, a \(\lambda\) determined by the 99th or 99.9th percentile of all ReLU activations is not optimal.
* **Efficiency.** For each trained model, the normalization approach additionally calculates both the 99th and 99.9th percentile of ReLU activations for each ANN layer, respectively. Then it chooses the percentile (99th or 99.9th) that yields higher inference performance. The additional calculation would be notoriously costly if \(\lambda\) were calculated from the whole training set.
* **Stability.** For a challenging dataset such as ImageNet, it is impossible to calculate \(\lambda\) from the whole training set. A solution is calculating \(\lambda\) from a batch instead of the entire training set. However, the value of \(\lambda\) will vary from batch to batch, and the performance of converted SNNs will fluctuate accordingly.

In Fig. 3, we present the distributions of all ReLU activations for all layers of a pre-trained 2-bit AlexNet with the first batch of the original training set. As can be observed, the clipping thresholds of our method and others are quite different. Our optimal clipping range brings a significant performance advantage, as shown in Section 4.3.
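As an illustration of a learnable clipping range, the sketch below implements a \(b\)-bit activation quantizer in PyTorch whose clipping threshold \(s\) is a trainable parameter optimized with a straight-through estimator. This is a minimal PACT-style sketch of the idea, not necessarily the exact quantizer of [46]:

```python
import torch

class LearnableClipQuant(torch.nn.Module):
    """b-bit uniform activation quantizer (Eq. 1) whose clipping threshold s
    is learned jointly with the network weights via a straight-through
    estimator (STE). A PACT-style sketch, not the exact quantizer of [46]."""

    def __init__(self, b: int = 2, s_init: float = 6.0):
        super().__init__()
        self.n = 2 ** b - 1                    # number of quantization steps
        self.s = torch.nn.Parameter(torch.tensor(s_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.clamp(x / self.s, 0.0, 1.0)  # learnable clipping range [0, s]
        q = torch.round(self.n * y) / self.n   # uniform quantization
        y = y + (q - y).detach()               # STE: gradients bypass round()
        return self.s * y                      # s later becomes the threshold
```

After training, the learned \(s^{l}\) of each layer is mapped directly to the firing threshold \(\theta^{l}\) during conversion.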
### _Accumulating Error Minimization_

According to Eq. 11, we can achieve lossless conversion from quantized ANNs to SNNs by enforcing a waiting period at each layer (spiking neurons can start firing only after receiving all possible spikes for that layer). However, this scheme limits the practicality of our method on deep neural networks (i.e., when \(L\) is large). A latency of \(T\times L\) grows proportionally to the depth of the employed network, resulting in longer running time and potentially lower computation/energy efficiency in real-time applications (see our discussion in Section 4.6). If we reduce the latency by removing the waiting period, it introduces the sequential error that degrades the performance of converted deep SNNs.

Fig. 2: An example of how quantized values in ANNs are mapped to firing rates in SNNs. For a 2-bit (\(b=2\)) quantizer from Eq. 1 with \(s^{l}=1\), it has four possible quantized values (e.g., 0, 1/3, 2/3, 3/3). Each quantized value is mapped to the spike count (capped at \(T=2^{b}-1\)) in SNNs, resulting in four corresponding firing rates (e.g., 0, 1/3, 2/3, 3/3).

In Fig. 4, we demonstrate a case illustrating the cause and impact of the sequential error. Previous works of rate-coded SNNs (e.g., [16, 19, 20]) seldom consider the sequential error because it has little impact on performance when \(T\) is long enough (i.e., hundreds or thousands of time steps). That is, \[\frac{N_{i}^{l}}{T}\approx\frac{N_{i}^{l}+1}{T}, \tag{12}\] where \(N_{i}^{l}\) is the number of output spikes. However, when we reduce the latency \(T\) to several time steps, the sequential error will significantly distort the approximation between ANN activations and SNN firing rates. Furthermore, the sequential error at each layer accumulates as the network propagates, causing significant deviation at deep layers.

**Signed IF neuron.** To address the sequential error at each layer, we propose to cancel the wrongly fired spikes by introducing a signed IF neuron model. As for possible hardware implementation, neuromorphic hardware such as Loihi [5] already supports signed spikes. In our signed IF neuron model, a neuron can only fire a negative spike if it reaches the negative firing threshold and has fired at least one positive spike. To restore the wrongly subtracted membrane potential, our model changes the reset mechanism for negative spikes to reset by adding the positive threshold (i.e., \(\theta\)). Then we rewrite the spiking function \(\Theta(t)\) to \[\Theta(t)=\begin{cases}1&\text{if }V_{i}^{l}(t)\geq\theta,\\ -1&\text{if }V_{i}^{l}(t)\leq\theta^{\prime}\text{ and }N_{i}^{l}(t)\geq 1,\\ 0&\text{otherwise, no firing,}\end{cases} \tag{13}\] where \(\theta^{\prime}\) is the negative threshold and \(N_{i}^{l}(t)\) is the number of spikes neuron \(i\) has fired by time step \(t\). To boost the sensitivity to negative spikes, we set the negative threshold to a small negative value (empirically -1e-3). We then rewrite the membrane dynamics of the IF neuron in Eq. 3 to: \[\begin{split} V_{i}^{l}(t)=& V_{i}^{l}(t-1)+z_{i}^{ l}(t)-\theta^{l}\Theta(V_{i}^{l}(t)-\theta^{l})\\ &+\theta^{l}\Theta(\theta^{\prime}-V_{i}^{l}(t))\Theta(N_{i}^{l}-1). \end{split} \tag{14}\] With Eq. 13 and Eq. 14, the IF neuron fires a negative spike to cancel a wrongly fired spike and restore the membrane potential. Compared with our signed IF neuron model, the signed IF neuron model proposed by Kim et al. [19] does not apply to our problem. It is designed to approximate negative outputs of the leaky ReLU function in ANNs and takes no consideration of the sequence of spikes in SNNs. In [44], Yousefzadeh et al. proposed a signed neuron model with a fixed positive/negative firing threshold of +1/-1, making it not sensitive to the sequential error.
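The following minimal Python sketch implements the signed IF dynamics of Eqs. 13-14 and reproduces the example of Fig. 4; we track the net spike count, which we assume is what \(N_{i}^{l}(t)\) denotes, and all helper names are ours:

```python
def signed_if(charges, theta=1.0, theta_neg=-1e-3):
    """Signed IF neuron of Eqs. 13-14 (a sketch; we track the net spike
    count n, which we assume is what N_i^l(t) denotes in Eq. 13)."""
    v, n, train = 0.0, 0, []
    for z in charges:
        v += z
        if v >= theta:                   # positive spike, reset by subtraction
            v -= theta; n += 1; train.append(+1)
        elif v <= theta_neg and n >= 1:  # cancel a wrongly fired spike:
            v += theta; n -= 1; train.append(-1)  # reset by *adding* theta
        else:
            train.append(0)
    return train, n / len(charges)       # spike train and net firing rate

# Fig. 4's example: the same three charges arriving in two different orders.
print(signed_if((-1, -1, 2)))  # ([0, 0, 0], 0.0): nothing to cancel
print(signed_if((2, -1, -1)))  # ([1, 0, -1], 0.0): negative spike cancels t=1
```

Both orderings yield a net firing rate of 0, matching the ANN output ReLU(2 - 2) = 0, whereas a plain IF neuron would wrongly report a rate of 1/3 for the second ordering.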
**Layer-wise fine-tuning scheme.** Although the modified neuron model narrows the sequential error at each layer, the accumulating error still distorts the SNN firing rates at deep layers and degrades network performance. According to our analysis, the firing rate maps in an SNN with a latency of \(T\times L\) are identical to the feature maps in quantized ANNs, indicating that the accumulating error in our framework contains only the accumulating sequential error. With this insight, we propose to minimize the accumulating error by minimizing the Euclidean distance between ANN activations (free from the sequential or accumulating error) and SNN firing rates at each layer, with a framework illustrated in Fig. 5. To overcome the discontinuity in SNNs, we employ a proxy ANN that shares its parameters with the SNN to optimize the network parameters (weights and biases). We present the pseudo-code of our proposed layer-wise fine-tuning method in Algorithm 1. Compared with other fine-tuning methods [12, 22, 23], our framework simplifies the optimization as it only needs to optimize the accumulating sequential error. Compared with layer-wise progressive tandem learning [12], which fine-tunes all subsequent ANN layers together, our fine-tuning mechanism yields less computation time and cost. Compared with layer-wise weight calibration [23], our method incorporates the bias term (constant injecting current) during fine-tuning to learn a compensating membrane potential instead of calculating it statistically.

Fig. 4: An example of how the sequence of spikes causes the sequential error. For simplicity, we set the firing threshold in SNNs to 1. SIF denotes our signed IF neuron. For each SNN neuron, we use a table to display the values of \(z\), \(\Theta\), \(V\) at each time step. (a) An ANN neuron receives two inputs: 2 and -2. Its output activation is 0. (b) An IF neuron receives three spike charges (-1, -1, 2) at \(t=1,2,3\). Its output firing rate is equivalent to the ANN activation. (c) An IF neuron receives three spike charges (2, -1, -1) at \(t=1,2,3\). However, it instantly fires a spike at \(t=1\) since the membrane potential is greater than the firing threshold and outputs no events when \(t=2,3\), resulting in a firing rate that is not equivalent to the ANN activation. (d) An SIF neuron receives three spike charges (2, -1, -1) at \(t=1,2,3\). Although it fires a spike at \(t=1\), our SIF model outputs no events when \(t=2\) as the incoming current cancels the residual membrane potential, and it fires a negative spike at \(t=3\), resulting in a firing rate that is equivalent to the ANN activation.

Fig. 3: Distributions of the activations of a pre-trained 2-bit AlexNet using the first batch of the original training set. The batch size is set to 128. Solid lines denote our clipping thresholds. Values of the clipping threshold, max activation [15], and 99th/99.9th percentile [16] of activations for each layer are listed within the table. Note that the max activation and 99th/99.9th percentile of activations are determined statistically from the same batch.

### _Implementation Details_

We perform all our experiments with PyTorch [49]. To facilitate network training, we employ batch normalization layers [50] in our models to address the internal covariate shift problem with \[\frac{x-\mu}{\sqrt{\sigma^{2}+\epsilon}}\gamma+\beta, \tag{15}\] where \(\mu\) is the mini-batch mean, \(\sigma^{2}\) is the mini-batch variance, \(\epsilon\) is a constant, and \(\gamma\) and \(\beta\) are two learnable parameters. For hardware implementation, batch normalization can be incorporated into the firing threshold \(\theta\) as \[\bar{\theta}=\frac{\theta-\beta}{\gamma}\sqrt{\sigma^{2}+\epsilon}+\mu. \tag{16}\]
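The folding of Eq. 16 can be verified numerically; the PyTorch sketch below (our helper names, assuming \(\gamma>0\) and inference-time running statistics) checks that thresholding the batch-normalized activation at \(\theta\) is equivalent to thresholding the raw activation at \(\bar{\theta}\):

```python
import torch

def fold_bn_into_threshold(theta, bn):
    """Eq. 16: fold a BatchNorm layer into the firing threshold, so that
    BN(x) >= theta  <=>  x >= theta_bar (valid when gamma > 0)."""
    return ((theta - bn.bias) / bn.weight
            * torch.sqrt(bn.running_var + bn.eps) + bn.running_mean)

bn = torch.nn.BatchNorm1d(4).eval()    # eval(): use running statistics
bn.running_mean.uniform_(-1.0, 1.0)
bn.running_var.uniform_(0.5, 2.0)
bn.weight.data.uniform_(0.1, 2.0)      # keep gamma strictly positive
bn.bias.data.uniform_(-1.0, 1.0)

theta_bar = fold_bn_into_threshold(1.0, bn)
x = torch.randn(1000, 4)
with torch.no_grad():
    # Identical firing decisions (ties exactly at the boundary aside).
    assert torch.equal(bn(x) >= 1.0, x >= theta_bar)
```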
To facilitate the training of deep networks, we employ shortcut connections from the residual learning framework [51] to address the vanishing/exploding gradient problem. For hardware implementation, it only requires doubling the pre-synaptic connections to receive inputs from both the stacked layers and the shortcuts, as the integration of spikes naturally performs addition operations. To build quantized ANNs for conversion, we employ a state-of-the-art quantization method [46] during training. To enable our ANN-to-SNN conversion method, we apply uniform quantization to activations. When exploring the building of low-precision SNNs, we quantize ANN weights with additive powers-of-two quantization instead of uniform quantization for better performance.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Network & Precision & ANN & Precision & ANN & Precision & ANN \\ & W/A & Acc. & W/A & Acc. & W/A & Acc. \\ \hline AlexNet & 32/32 & 92.87 & 3/3 & 92.54 & 2/2 & 91.52 \\ VGG-11 & 32/32 & 93.60 & 3/3 & 93.71 & 2/2 & 93.06 \\ ResNet-20 & 32/32 & 93.00 & 3/3 & 92.39 & 2/2 & 90.39 \\ ResNet-44 & 32/32 & 94.17 & 3/3 & 93.41 & 2/2 & 91.57 \\ ResNet-56 & 32/32 & 94.10 & 3/3 & 93.66 & 2/2 & 91.68 \\ ResNet-18 & 32/32 & 95.85 & 32/3 & 95.62 & 32/2 & 95.51 \\ \hline \hline \end{tabular} \end{table} TABLE II: Accuracy (%) of ANNs trained with different quantization precision on CIFAR-10. We denote the bit-precision of weights and activations by 'W' and 'A', respectively. For the details of the employed architectures, please refer to Section 4.1.

Fig. 5: Layer-wise fine-tuning module. First, we obtain an SNN from a quantized ANN with \(L\) layers. Then we build a proxy ANN that shares its parameters with the SNN. Starting from the 2nd layer (no sequential error in the 1st layer), layer \(l\) of the proxy ANN receives the output map of firing rates from layer \(l-1\) of the SNN as its input. Its output is set to the map of firing rates from layer \(l\) of the SNN. We calculate the Euclidean loss between the output of layer \(l\) in the proxy ANN and the reference (activations of layer \(l\) in the quantized ANN). We then minimize the Euclidean loss by optimizing the parameters (weights and biases) of layer \(l\) in the proxy ANN. The updated parameters in the proxy ANN are mapped back to the corresponding SNN layer. We repeat this process until reaching the final classification layer. We bypass the last layer as we directly use its membrane potential for classification. Please refer to Algorithm 1 for more details.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Network & Precision & ANN & Precision & ANN & Precision & ANN \\ & W/A & Acc. & W/A & Acc. & W/A & Acc. \\ \hline AlexNet & 32/32 & 56.52 & 32/3 & 58.58 & 32/2 & 56.41 \\ VGG-16 & 32/32 & 73.36 & 32/3 & 73.02 & 32/2 & 71.91 \\ \hline \hline \end{tabular} \end{table} TABLE III: Accuracy (%) of ANNs trained with different quantization precision on ImageNet. We denote the bit-precision of weights and activations by 'W' and 'A', respectively.

In Table II and III, we demonstrate the classification accuracy of ANNs trained with different quantization precision on CIFAR-10 and ImageNet, respectively. On CIFAR-10, our ResNet-56 (ANN) with both weights and activations quantized to 3 bits achieves 93.66% accuracy, which is close to the accuracy of the full-precision ResNet-56 (ANN) implemented by us (94.10%) or reported in [51] (93.03%).
On ImageNet, our VGG-16 (ANN) with activations quantized to 3 bits achieves 73.02% accuracy, only 0.34% lower than the full-precision VGG-16 (ANN) from TorchVision [52]. Our VGG-16 (ANN) with activations quantized to 2 bits achieves 71.91% accuracy on ImageNet, only 1.45% lower than the full-precision VGG-16 (ANN) from TorchVision [52]. Based on the performance of these deep ANNs, we employ 3/2-bit ANNs for conversion in our experiments. Following previous works [14, 21, 23, 27, 16], we employ the integrate-and-fire (IF) model as our spiking neuron model to better approximate ANN activations with SNN firing rates. For spike encoding, we employ the widely-used direct coding [12, 21, 23, 31, 16]. The (positive) firing threshold \(\theta\) is directly mapped from the scaling factor \(s\) optimized in ANNs. As for the negative firing threshold \(\theta^{\prime}\), we empirically set it to -1e-3 for all experiments. ## 4 Experiments on Image Classification Image classification is a fundamental and heavily studied task in computer vision. It determines which objects are in an image or video. In the ANN domain, recent advances in image classification focus on training deep networks [51, 53, 54] and have achieved great success. In the SNN domain, image classification is also the most commonly used task for evaluation. To facilitate comparison with other SNN methods, we perform our primary experimental analysis in this section. ### _Experimental Setup_ **Datasets.** We perform image classification on two benchmark datasets: CIFAR-10 [55] and ImageNet [56]. CIFAR-10 comprises 50,000 training images and 10,000 testing images in 10 classes. ImageNet comprises 1.2 million training images, 50,000 validation images, and 100,000 test images in 1,000 classes. **Data preprocessing.** On CIFAR-10, we follow many works (e.g., [51]) for data preprocessing, i.e., standardizing the data to have zero mean and unit variance, randomly taking a 32 \(\times\) 32 crop from a padded 40 \(\times\) 40 image (4 pixels padded on each side) or its horizontal flip. On ImageNet, we also follow existing works (e.g., [57]) for data preprocessing, i.e., randomly cropping 10-100% of the original image size with a random aspect ratio in (4/5, 5/4), resizing to 224 \(\times\) 224, applying random flip and normalization by ImageNet color. For evaluation, we resize the input by its shorter edge to 256 pixels and take a 224 \(\times\) 224 center crop. **Network architecture.** On CIFAR-10, we train 32/3/2-bit AlexNet [53], VGG-11 [54], and ResNet-18/20/44/56 [51] for evaluation. For ResNet-20/44/56, we employ the original ResNet architectures defined in [51]. To facilitate comparison with the ResNet-19 in [36], we employ a ResNet-18 similar to the ResNet-19 defined in [36]. To explore the building of low-precision SNNs, we quantize both weights and activations for AlexNet, VGG-11, and ResNet-20/44/56 during ANN training. For ResNet-18, we quantize activations to enable conversion and keep full-precision weights for a fair comparison with other full-precision SNNs. On ImageNet, we train 3/2-bit AlexNet [53] and VGG-16 [54] for evaluation. For AlexNet and VGG-16, we quantize activations to enable conversion and keep full-precision weights for a fair comparison with other full-precision SNNs. For full-precision models, we report the performance of pre-trained models from TorchVision [52]. **Training details.** We follow the training protocol defined in [46]. We use stochastic gradient descent (SGD) with a momentum of 0.9. 
Table IV lists the weight decay and initial learning rate for different bit-precisions on CIFAR-10 and ImageNet. On CIFAR-10, we divide the learning rate by 10 at the 150th and 225th epochs, and finish training at the 300th epoch. On ImageNet, we decrease the number of epochs in [46] to 60 and divide the learning rate by 10 at the 20th, 40th, and 50th epochs.

\begin{table} \begin{tabular}{c c c c} \hline Dataset & Precision (W/A) & Learning Rate & Weight Decay \\ \hline \multirow{4}{*}{CIFAR-10} & 32/32 & 0.1 & 5e-4 \\ & 4/4 & 4e-2 & 1e-4 \\ & 3/3 & 4e-2 & 1e-4 \\ & 2/2 & 4e-2 & 3e-5 \\ \hline \multirow{2}{*}{ImageNet} & 32/3 & 1e-2 & 1e-4 \\ & 32/2 & 1e-2 & 1e-4 \\ \hline \end{tabular} \end{table} TABLE IV: Initial Learning Rate and Weight Decay. We denote the bit-precision of weights and activations by 'W' and 'A', respectively.

**Evaluation metrics.** In addition to the performance accuracy, we compare the computation/energy efficiency of converted SNNs to their ANN counterparts by counting the number of operations during inference [16]. For an ANN, its number of operations is defined as: \[Ops=\sum_{l=1}^{L}f_{in}^{l}M^{l}, \tag{17}\] where \(f_{in}\) denotes the fan-in (number of incoming connections to a neuron), \(L\) denotes the number of layers, and \(M^{l}\) denotes the number of neurons at layer \(l\). For an SNN, its number of operations is defined as the summation of all synaptic operations (number of membrane charges over time): \[Ops=\sum_{t=1}^{T}\sum_{l=1}^{L}\sum_{j=1}^{M^{l}}f_{out,j}^{l}s_{j}^{l}(t), \tag{18}\] where \(f_{out,j}^{l}\) denotes the fan-out of neuron \(j\) at layer \(l\) (number of output projections to neurons in the subsequent layer), \(T\) denotes the latency (number of time steps), and \(s_{j}^{l}(t)\) denotes the number of spikes neuron \(j\) at layer \(l\) has fired at time step \(t\).
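For reference, Eqs. 17-18 translate into a few lines of Python; the helper names and the nested-list layout of fan-outs and spike counts are our own illustrative choices:

```python
def ann_ops(fan_in, neurons):
    """Eq. 17: ANN operations, summed over layers: fan-in times #neurons."""
    return sum(f * m for f, m in zip(fan_in, neurons))

def snn_ops(fan_out, spike_counts):
    """Eq. 18: each spike of neuron j at layer l triggers fan_out[l][j]
    membrane charges downstream; spike_counts[t][l][j] is the number of
    spikes fired by neuron j of layer l at time step t."""
    return sum(fan_out[l][j] * s
               for step in spike_counts
               for l, layer in enumerate(step)
               for j, s in enumerate(layer))

# Toy example: two layers, T = 3 time steps, sparse spiking.
print(ann_ops(fan_in=[4, 8], neurons=[8, 10]))              # 112 MACs
spikes = [[[1, 0, 1, 0, 0, 1, 0, 1], [0, 1] * 5]] * 3
print(snn_ops(fan_out=[[10] * 8, [1] * 10], spike_counts=spikes))  # 135 ACs
```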
Compared with the latency baseline [12], our AlexNet/VGG-11 (latency is 3) outperforms their AlexNet/VGG-11 by 1.15%(0.59%)/6.23%(4.96%) on top-1 (top-5) accuracy while using 5\(\times\) fewer time steps. Notably, our method achieves a small accuracy gap between the SNNs and their counterpart ANNs. With a latency of 7, the top-1 accuracy of our AlexNet/VGG-16 drops only 0.06%/0.07%. With a latency of 3, the top-1 accuracy of our AlexNet/VGG-16 drops 0.07%/0.60%. In contrast, the latency baseline [12] reported a significant drop in accuracy (3.34%/6.57% for AlexNet/VGG-16 with a latency of 16) compared with pre-trained ANNs. Compared with a state-of-the-art direct training method [31], our VGG-16 (latency is 3) outperforms their VGG-16 by 2.31% top-1 accuracy while using about half time steps. ### _Evaluation for Minimizing the Quantization Error_ To validate the efficacy of a learnable firing threshold, we train a 2-bit AlexNet/VGG-11 on CIFAR-10. Then we convert them to corresponding SNNs with different firing thresholds: the clipping threshold, the max activation, and \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Work**} & \multirow{2}{*}{**Architecture**} & \multirow{2}{*}{**Method**} & **Precision** & **ANN** & **Precision** & **SNN** & \multirow{2}{*}{\(\Delta\)**Acc.**} & **Time** \\ & & & **(ANN W/A)** & **Acc.** & **(SNN W)** & **Acc.** & & **Steps** \\ \hline \hline \multirow{8}{*}{[18]} & Sengupta et al. 2019 [18] & VGG-16 & Norm. & 32/32 & 91.70 & 32 & 91.55 & -0.15 & 2500 \\ & Han et al. 2020 [20] & VGG-16 & Norm. & 32/32 & 93.63 & 32 & 93.63 & 0 & 2048 \\ & Deng et al. 2021 [21] & VGG-16 & Norm. & 32/32 & 92.09 & 32 & 92.29 & +0.20 & 16 \\ & Li et al. 2021 [23] & VGG-16 & Cal. & 32/32 & 95.72 & 32 & 93.71 & -2.01 & 32 \\ & Yan et al. 2021 [22] & VGG-19 & CQT & 32/32 & 93.60 & 32 & 93.44 & -0.06 & 1000 \\ & Yan et al. 2021 [22] & VGG-19 & CQT & 32/32 & 93.60 & 9 & 93.43 & -0.07 & 1000 \\ & Yan et al. 2021 [22] & VGG-19 & CQT & 32/32 & 93.60 & 8 & 92.82 & -0.78 & 1000 \\ & Wu et al. 2021 [22] & VGG-11 & PTL & 32/32 & 90.59 & 32 & 91.24 & +0.65 & 16 \\ & Wu et al. 2021 [12] & AlexNet & PTL & 32/32 & 89.59 & 32 & 90.86 & **+1.27** & 16 \\ & Wu et al. 2021 [12] & AlexNet & PTL & -/- & - & 8 & 90.11 & - & 16 \\ & Wu et al. 2021 [12] & AlexNet & PTL & -/- & - & 4 & 89.48 & - & 16 \\ & Zheng et al. 2021 [29] & ResNet-19 & STBP-tdBN & -/- & - & 32 & 93.16 & - & 6 \\ & Rathi et al. 2021 [31] & VGG-16 & DIET-SNN & 32/32 & 93.72 & 32 & 92.70 & -1.02 & 5 \\ & Deng et al. 2022 [36] & VGG-16 & Grad. Re-wt. & 32/32 & 94.97 & 32 & 94.50 & -0.47 & 6 \\ & Ours & AlexNet & Fast-SNN & 3/3 & 92.54 & 3 & 92.53 & -0.01 & 7 \\ & Ours & AlexNet & Fast-SNN & 2/2 & 91.52 & 2 & 91.63 & +0.11 & 3 \\ & Ours & VGG-11 & Fast-SNN & 3/3 & 93.71 & 3 & 93.72 & +0.01 & 7 \\ & Ours & VGG-11 & Fast-SNN & 2/2 & 93.06 & 2 & 92.99 & -0.07 & 3 \\ & Ours & ResNet-18 & Fast-SNN & 32/3 & 95.62 & 32 & **95.57** & -0.05 & 7 \\ & Ours & ResNet-18 & Fast-SNN & 32/2 & 95.51 & 32 & 95.42 & -0.09 & 3 \\ \hline \hline \multirow{8}{*}{[18]} & Sengupta et al. 2019 [18] & VGG-16 & Norm. & 32/32 & 70.52 (89.39) & 32 & 69.96 (89.01) & -0.56 (+0.38) & 2500 \\ & Han et al. 2020 [20] & VGG-16 & Norm. & 32/32 & 73.49 (-) & 32 & **73.09** (-) & -0.40 (-) & 4096 \\ & Deng et al. 2021 [21] & VGG-16 & Norm. & 32/32 & 72.40 (-) & 32 & 55.80 (-) & -16.60 (-) & 16 \\ & Li et al. 2021 [23] & VGG-16 & Cal. & 32/32 & 75.36 (-) & 32 & 63.64 (-) & -11.72 (-) & 32 \\ & Wu et al. 
2021 [12] & AlexNet & PTL & 32/32 & 58.53 (81.07) & 32 & 55.19 (84.41) & -3.34 (+2.66) & 16 \\ & Wu et al. 2021 [12] & VGG-16 & PTL & 32/32 & 71.65 (90.37) & 32 & 65.08 (85.25) & -6.57 (-5.12) & 16 \\ & Zheng et al. 2021 [29] & ResNet-34 & STBP-tdBN & -/- & - & 32 & 63.72 (-) & - & 6 \\ & Rathi et al. 2021 [31] & VGG-16 & DIET-SNN & 32/32 & 70.08 (-) & 32 & 69.00 (-) & -1.08 & 5 \\ & Deng et al. 2022 [36] & ResNet-34\({}^{a}\) & Grad. Re-wt. & 32/32 & - & 32 & 64.79 (-) & - & 6 \\ & Ours & AlexNet & Fast-SNN & 32/3 & 58.58 (80.57) & 32 & 58.52 (80.95) & **-0.06 (+0.02)** & 7 \\ & Ours & AlexNet & Fast-SNN & 32/2 & 56.41 (79.11) & 32 & 56.34 (79.00) & -0.07 (-0.11) & 3 \\ & Ours & VGG-16 & Fast-SNN & 32/3 & 73.02 (91.28) & 32 & 72.95 (91.08) & -0.07 (-0.20) & 7 \\ & Ours & VGG-16 & Fast-SNN & 32/2 & 71.91 (90.58) & 32 & 71.31 (90.21) & -0.60 (-0.37) & 3 \\ \hline \hline \end{tabular} * For a fair comparison with other SNN methods, we report ResNet-34 in [36] by the version that restricts spiking neurons to fire at most one spike at a single time step. \end{table} TABLE V: Performance comparison of different SNN methods on CIFAR-10 and ImageNet. Notions for different methods: Norm. is normalization [18], [20], [21], Cal. is calibration [23], CQT is clamped and quantized training [22], PTL is progressive tandem learning [12], STBP-tdBN is spatio-temporal backpropagation with threshold-dependent batch normalization [29], DIET-SNN is the direct training method of [31], and Grad. Re-wt. is the gradient re-weighting method of [36]. \(\Delta\)Acc. = SNN Acc. - ANN Acc.; numbers in parentheses are top-5 accuracies on ImageNet.

We determine the max activation, 99th, and 99.9th percentile of activations by the first batch of the original training set with a batch size of 128. To compare the performance without the sequential error, we set the latency to 3\(\times\)7/3\(\times\)11 for AlexNet/VGG-11 (spiking neurons start firing only after receiving all possible spikes). To demonstrate the impact of the sequential error, we set the latency to 3. As shown in Table VI, our learnable firing threshold achieves higher accuracy than all other types of firing thresholds under different latency configurations. Compared with normalization methods (statistical post-processing), our framework jointly optimizes the clipping threshold (firing threshold) at each layer, resulting in a clipping range that better fits the input data. It is worth noting that our SNNs using a latency of \((2^{b}-1)\times L\) (without the sequential error) achieve the same accuracy as their pre-trained ANNs. This result indicates no conversion error and validates the efficacy of our latency bound in Eq. 11.

### _Evaluation for Minimizing the Accumulating Error_

To validate the efficacy of our signed IF neuron model and layer-wise fine-tuning mechanism, we apply each piece of our method stage by stage to SNNs with a latency of \(2^{b}-1\). We train 3/2-bit AlexNet/VGG-11 on CIFAR-10 and AlexNet/VGG-16 on ImageNet with quantization-aware training. Then we convert the trained ANNs to SNNs. Here, we introduce a set of notions for SNNs. SNN\({}^{\alpha}\) stands for native SNNs directly converted from quantized ANNs. SNN\({}^{\beta}\) stands for SNN\({}^{\alpha}\) with our signed IF neuron model. SNN\({}^{\gamma}\) stands for SNN\({}^{\beta}\) with our layer-wise fine-tuning mechanism. As shown in Table VII, our signed IF neuron model and layer-wise fine-tuning consistently improve the performance of converted SNNs. On CIFAR-10, SNN\({}^{\beta}\) with the signed IF neuron model consistently improves the performance of SNN\({}^{\alpha}\) by at least 1.5%. For the 2-bit VGG-11, the improvement achieves a wide margin of 7.88%.
With layer-wise fine-tuning applied, all our SNNs achieve almost lossless performance compared with the corresponding quantized ANNs. Recalling the full-precision ANN results in Table II and Table III, it is also notable that our method achieves an accuracy comparable to full-precision ANNs on CIFAR-10 and ImageNet with a latency of 7. Even with a latency of 3, the accuracy of SNN\({}^{\gamma}\) (AlexNet/VGG-11) on CIFAR-10 is only 1.24%/0.61% lower than the full-precision ANNs. On ImageNet, the top-1 accuracy of SNN\({}^{\gamma}\) (AlexNet/VGG-16) with a latency of 3 is only 0.18%/2.05% lower than the full-precision ANNs. In the above experiments, the improvement from our layer-wise fine-tuning mechanism is less significant because the employed models (AlexNet, VGG-11, VGG-16) are relatively shallow (fewer than 20 layers). According to Eq. 11, when \(L\) is small, the impact of the accumulating sequential error is limited, and SNN firing rates still approximate ANN activations. To further validate the efficacy of our layer-wise fine-tuning mechanism, we apply our layer-wise fine-tuning to deep residual networks (ResNets) [51]. Kindly note that a deep architecture like ResNet can achieve high performance with fewer parameters and operations (e.g., on CIFAR-10, the 32-bit AlexNet with an accuracy of 92.87% uses 43\(\times\) more parameters and 5\(\times\) more operations than the 32-bit ResNet-20 with an accuracy of 93.00%). However, conventional SNNs may not benefit from deep architectures as the sequential error accumulates during network propagation. On CIFAR-10, we train 3/2-bit ResNet-20/44/56 and convert them to SNNs with a corresponding latency of 7/3. We measure the efficacy of our fine-tuning method by the difference between SNN\({}^{\gamma}\) accuracy and SNN\({}^{\beta}\) accuracy.

\begin{table} \begin{tabular}{c c c c c} \hline Precision (W/A) & Network & Threshold & Acc. & Steps \\ \hline 2/2 & AlexNet(ANN) & - & 91.52 & - \\ 2/2 & AlexNet(SNN) & Max & 32.71 & 3\(\times\)7 \\ 2/2 & AlexNet(SNN) & 99.9 & 72.29 & 3\(\times\)7 \\ 2/2 & AlexNet(SNN) & 99 & 89.16 & 3\(\times\)7 \\ 2/2 & AlexNet(SNN) & Ours & **91.52** & 3\(\times\)7 \\ \hline 2/2 & AlexNet(SNN) & Max & 23.09 & 3 \\ 2/2 & AlexNet(SNN) & 99.9 & 54.56 & 3 \\ 2/2 & AlexNet(SNN) & 99 & 81.66 & 3 \\ 2/2 & AlexNet(SNN) & Ours & **88.97** & 3 \\ \hline 2/2 & VGG-11(ANN) & - & 93.06 & - \\ 2/2 & VGG-11(SNN) & Max & 21.25 & 3\(\times\)11 \\ 2/2 & VGG-11(SNN) & 99.9 & 79.52 & 3\(\times\)11 \\ 2/2 & VGG-11(SNN) & 99 & 91.56 & 3\(\times\)11 \\ 2/2 & VGG-11(SNN) & Ours & **93.06** & 3\(\times\)11 \\ \hline 2/2 & VGG-11(SNN) & Max & 24.04 & 3 \\ 2/2 & VGG-11(SNN) & 99.9 & 45.56 & 3 \\ 2/2 & VGG-11(SNN) & 99 & 80.48 & 3 \\ 2/2 & VGG-11(SNN) & Ours & **84.92** & 3 \\ \hline \end{tabular} \end{table} TABLE VI: Accuracy (%) of SNNs converted from 2-bit networks using different types of firing thresholds on CIFAR-10. We denote the maximum activation [15] as Max, and the 99th/99.9th percentile [16] of activations as 99/99.9. We convert pre-trained ANNs directly to SNNs with no improvements applied. We denote the bit-precision of weights and activations by 'W' and 'A', respectively. Best numbers are indicated by the bold font.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Dataset & Precision (W/A) & Network & ANN Acc. & SNN\({}^{\alpha}\) Acc. & SNN\({}^{\beta}\) Acc. & SNN\({}^{\gamma}\) Acc. & Time Steps \\ \hline \multirow{4}{*}{CIFAR-10} & 3/3 & AlexNet & 92.54 & 90.72 & 92.46 & 92.53 & 7 \\ & 2/2 & AlexNet & 91.52 & 88.97 & 91.37 & 91.63 & 3 \\ & 3/3 & VGG-11 & 93.71 & 90.29 & 93.46 & 93.72 & 7 \\ & 2/2 & VGG-11 & 93.06 & 84.92 & 92.80 & 92.99 & 3 \\ \hline \multirow{4}{*}{ImageNet} & 32/3 & AlexNet & 58.58 & 47.74 & 58.35 & 58.52 & 7 \\ & 32/2 & AlexNet & 56.41 & 46.04 & 55.93 & 56.34 & 3 \\ & 32/3 & VGG-16 & 73.02 & 36.43 & 72.89 & 72.95 & 7 \\ & 32/2 & VGG-16 & 71.91 & 46.10 & 71.10 & 71.31 & 3 \\ \hline \hline \end{tabular} \end{table} TABLE VII: Accuracy (%) of SNN\({}^{\alpha}\) (native SNN), SNN\({}^{\beta}\) (SNN with our signed IF neuron model), and SNN\({}^{\gamma}\) (SNN with both our signed IF neuron model and layer-wise fine-tuning) on CIFAR-10 and ImageNet, with a latency of 7/3 for 3/2-bit activation precision.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Precision (W/A) & Network & ANN Acc. & SNN\({}^{\alpha}\) Acc. & SNN\({}^{\beta}\) Acc. & SNN\({}^{\gamma}\) Acc. & \(\Delta\)Acc. \\ \hline \multirow{2}{*}{3/3} & ResNet-44 & 93.41 & 61.06 & 90.74 & 92.62 & +1.88 \\ & ResNet-56 & 93.66 & 41.47 & 88.55 & 92.17 & +3.62 \\ \hline \multirow{3}{*}{2/2} & ResNet-20 & 90.39 & 81.08 & 88.81 & 90.28 & +1.47 \\ & ResNet-44 & 91.57 & 95.84 & 85.89 & 89.59 & +3.70 \\ & ResNet-56 & 91.68 & 60.95 & 81.25 & 89.25 & +8.00 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: Accuracy (%) of SNN\({}^{\alpha}\) (native SNN), SNN\({}^{\beta}\) (SNN with our signed IF neuron model), and SNN\({}^{\gamma}\) (SNN with both our signed IF neuron model and layer-wise fine-tuning) on CIFAR-10. SNNs are converted from 3/2-bit ResNets with a latency of 7/3. \(\Delta\)Acc. = SNN\({}^{\gamma}\) Acc. - SNN\({}^{\beta}\) Acc.

In Table VIII, our fine-tuning mechanism consistently improves the performance of all spiking ResNets. Notably, the accuracy gain from fine-tuning is bigger when the network is deeper or the precision is lower (lower latency). For the 2-bit ResNet-56, our fine-tuning method improves the accuracy of SNN\({}^{\beta}\) by a large margin of 8%. This result coincides with our theoretical analysis that the accumulating sequential error grows proportionally to the network depth \(L\). It also demonstrates the effectiveness of our method in approximating ANN activations with fine-tuned SNN firing rates.

### _Computational Efficiency_

In a neural network, the number of operations required for inference determines its computation/energy efficiency. In a rate-coded SNN, the number of operations grows proportionally to the latency. According to the analysis of the quantization error [16], previous methods have to balance the trade-off between classification accuracy and inference latency. However, when the latency reaches a critical point, it invalidates SNN's advantages. For example, AlexNet (SNN) with a latency of 32 yields almost the same number of operations as AlexNet (ANN) on CIFAR-10. Therefore, we should also validate the computation/energy efficiency of SNNs with respect to the latency required to achieve accuracy comparable to ANNs.
Following [16], we calculate the number of operations for our SNNs on CIFAR-10 with different inference latencies. For comparison, we choose progressive tandem learning (PTL) [12], which achieves the shortest latency among previous methods, as our baseline. Then we reproduce AlexNet/VGG-11 from [12] on CIFAR-10 using their public codes for both ANN training and SNN conversion with different latency configurations. In Fig. 6, we show the performance comparison ((a), (b), (d), (e)) and the ratio of operations ((c), (f)) on CIFAR-10 regarding different latencies. As can be observed, our method is much more computation/energy efficient compared with PTL [12]. That is, the accuracy of our AlexNet/VGG-11 (latency is 3) is higher than that of PTL's [12] AlexNet/VGG-11 (latency is 15) by 1.75%/2.00%. With a latency of 15, PTL [12] yields a ratio of operations of 47.4%/74.9%, while ours is 11.1%/24.8% with a latency of 3. This result indicates our AlexNet/VGG-11 is at least 4/3\(\times\) more computation/energy efficient than PTL [12].

Fig. 6: Performance comparison with PTL [12] on AlexNet (top) and VGG-11 (bottom). Dashed lines indicate the performance of full-precision ANNs. (a) (d) Our performance. (b) (e) The performance of PTL [12]. (c) (f) Ratio of the number of operations (SNN to ANN); smaller is better.

### _Discussion_

On CIFAR-10, we train the quantized ANNs with both weights and activations quantized. Therefore, the converted SNNs naturally inherit the weight precision of ANNs and are inherently compatible with neuromorphic hardware that supports low-bit integer weight precision. As shown in Tables V, VII, and VIII, our SNNs with quantized weights achieve high performance with only a few time steps, making them friendly to real-time applications on low-precision neuromorphic hardware. In addition, a latency of 3 also improves the computation/energy efficiency of real-time SNN applications. Regarding the number of operations, our SNNs (latency is 3) are about 10\(\times\) more efficient than their counterpart ANNs, not to mention that operations in SNNs are more efficient than operations in ANNs. Deep ANNs are primarily composed of multiply-accumulate (MAC) operations that lead to high execution time, power consumption, and area overhead. In contrast, SNNs are composed of accumulate (AC) operations that are much less costly than MAC operations [58]. Recently, Arafa et al. [59] reported that the energy consumption of an optimized 32-bit floating-point add instruction is about 10\(\times\) lower than that of a multiply instruction under NVIDIA's Turing GPU architecture. Moreover, compared with the latency bound in Eq. 11, our SNNs with a latency of \(T\) improve the efficiency of real-time applications in terms of running time (\(L\times\) faster). In practice, this also benefits energy consumption: even though the waiting period ideally introduces no additional energy consumption (the number of operations remains unchanged), real-time applications still consume energy in a dormant state due to the restrictions in current neuromorphic devices.
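A back-of-the-envelope sketch makes this compound saving explicit. The unit costs below are assumed relative values (only their ~10\(\times\) ratio follows [59]), and the 10\(\times\) operation reduction is the figure quoted above for our latency-3 SNNs:

```python
# Illustrative energy estimate with assumed unit costs; only the ~10x
# add-vs-multiply gap follows [59], the rest are placeholder numbers.
E_ADD, E_MUL = 1.0, 10.0                 # relative energy per instruction

def ann_energy(macs):
    return macs * (E_MUL + E_ADD)        # one MAC = one multiply + one add

def snn_energy(acs):
    return acs * E_ADD                   # one synaptic operation = one add

ann_macs = 1e9                           # hypothetical ANN workload
snn_acs = ann_macs / 10                  # ~10x fewer operations at latency 3
print(ann_energy(ann_macs) / snn_energy(snn_acs))   # ~110x relative saving
```

Under these assumptions, the operation reduction and the cheaper AC operations multiply, which is why short-latency SNNs are attractive for energy-constrained deployment.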
## 5 Experiments on Object Detection

Object detection is a fundamental and heavily studied task in computer vision [61, 62]. It aims to recognize and locate every object in an image, typically with a bounding box. Thanks to the advances in deep learning, object detection has received significant improvements over recent years, especially with the application of Convolutional Neural Networks (CNNs). Currently, the mainstream object detection algorithms fall into two lines of research. One line of research focuses on strong two-stage object detectors [63, 64, 65, 66]. These Region-based CNN (R-CNN) [63] algorithms first generate regions of interest (RoIs) with a region proposal network (RPN) and then perform classification and bounding box regression. While accurate, two-stage methods suffer from a slow inference speed. Another line of research focuses on one-stage object detectors [67, 68]. These methods only use CNNs for feature extraction and directly predict the categories and positions of objects. With a balanced latency and accuracy, one-stage methods are widely used in real-time applications. Although heavily studied in the ANN domain, object detection requires further exploration in the SNN domain. An existing SNN object detector is the Spiking-YOLO [19] and its improved version [60]. Spiking-YOLO employs an ANN-to-SNN conversion method with channel-wise normalization. However, Spiking-YOLO is unfriendly to real-time applications as it requires more than 5,000 time steps during inference. Here, we explore our Fast-SNN for object detection using the YOLO framework [69] for a fair comparison.

### _Experimental Setup_

**Datasets.** We perform the object detection task on two datasets: PASCAL VOC [61] and MS COCO 2017 [62]. The PASCAL VOC dataset contains 20 object categories. Specifically, the train/val/test data in VOC 2007 contains 24,640 annotated objects in 9,963 images. For VOC 2012, the train/val data contains 27,450 annotated objects in 11,530 images. The MS COCO 2017 dataset contains 80 object categories. It has 886,284 annotated objects spread over 118,287 training images and 5,000 validation images. Following a common protocol [65, 68, 69] on PASCAL VOC, the training data is the union of the VOC 2007 trainval and VOC 2012 trainval datasets. As for testing, we use the VOC 2007 test dataset.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & **Work** & **Architecture** & **Method** & **Precision (ANN W/A)** & **ANN mAP** & **Precision (SNN W)** & **SNN mAP** & \(\Delta\)**mAP** & **Time Steps** \\ \hline \hline \multirow{8}{*}{VOC 2007} & Kim et al. 2020 [19] & Tiny YOLO & Norm. & 32/32 & 53.01 & 32 & 51.83 & -1.18 & 8000 \\ & Kim et al. 2020 [60] & Tiny YOLO & Norm. & 32/32 & 53.01 & 32 & 51.44 & -1.57 & 5000 \\ & Ours & Tiny YOLO & Fast-SNN & 32/4 & 53.28 & 32 & 53.17 & -0.11 & 15 \\ & Ours & Tiny YOLO & Fast-SNN & 32/3 & 52.77 & 32 & 52.83 & +0.06 & 7 \\ & Ours & Tiny YOLO & Fast-SNN & 32/2 & 50.32 & 32 & 50.56 & **+0.24** & **3** \\ & Ours & YOLOv2(ResNet-34) & Fast-SNN & 32/4 & 76.16 & 32 & **76.05** & -0.11 & 15 \\ & Ours & YOLOv2(ResNet-34) & Fast-SNN & 32/3 & 75.27 & 32 & 73.43 & -1.84 & 7 \\ & Ours & YOLOv2(ResNet-34) & Fast-SNN & 32/2 & 73.57 & 32 & 68.57 & -5.00 & **3** \\ \hline \hline \multirow{8}{*}{COCO 2017} & Kim et al. 2020 [19] & Tiny YOLO & Norm. & 32/32 & 26.24 & 32 & 25.66 & -0.58 & 8000 \\ & Kim et al. 2020 [60] & Tiny YOLO & Norm.
& 32/32 & 26.24 & 32 & 25.78 & -0.46 & 5000 \\ & Ours & Tiny YOLO & Fast-SNN & 32/4 & 27.74 & 32 & 27.59 & **-0.15** & 15 \\ & Ours & Tiny YOLO & Fast-SNN & 32/3 & 26.84 & 32 & 26.49 & -0.35 & 7 \\ & Ours & Tiny YOLO & Fast-SNN & 32/2 & 24.34 & 32 & 22.88 & -1.46 & 3 \\ & Ours & YOLOv2(ResNet-34) & Fast-SNN & 32/4 & 46.96 & 32 & **46.40** & -0.56 & 15 \\ & Ours & YOLOv2(ResNet-34) & Fast-SNN & 32/3 & 46.32 & 32 & 41.89 & -4.43 & 7 \\ & Ours & YOLOv2(ResNet-34) & Fast-SNN & 32/2 & 43.33 & 32 & 33.84 & -9.49 & **3** \\ \hline \hline \end{tabular} \end{table} TABLE IX: Performance comparison for object detection task on PASCAL VOC 2007 and MS COCO 2017. Norm: denotes normalization. \(\Delta\)mAP = SNN mAP - ANN mAP. We present the mean AP as a percent (%). We denote the bit-precision of weights and activations by ‘W’ and ‘A’, respectively. Best and second best numbers are indicated by bold and underlined fronts, respectively. Fig. 7: Visual quality comparison of object detection results on \(test\) set of PASCAL VOC 2007 (left) and \(val\) set of MS COCO 2017 (right) with the network architecture YOLOv2(ResNet-34). From left to right: results from full-precision (FP) ANN models, our SNN with \(T=15,7,3\), respectively. training data is the union of VOC 2007 trainval and VOC 2012 trainval datasets. As for testing, we use the VOC 2007 test dataset. **Data preprocessing.** We use a similar data augmentation to YOLO [69] and SSD [68] with random crops, color shifting, etc. **Network architecture.** Following [19], we employ a simple but efficient version of YOLO, the Tiny YOLO [69] for evaluation. We modify Tiny YOLO for ANN-to-SNN conversion by replacing all leakyReLU with ReLU. We also remove max pooling layers by incorporating the downsampling operations into convolution layers. To further explore object detection with deep SNNs, we also evaluate a YOLOv2 [69] with a backbone of ResNet-34. In the remainder of this paper, we refer this architecture as YOLOv2(ResNet-34). For PASCAL VOC 2007, we predict 5 boxes with 5 coordinates each and 20 classes per box, resulting in 125 filters. For MS COCO 2017, we predict 5 boxes with 5 coordinates each and 80 classes per box, resulting in 425 filters. **Training details.** On PASCAL VOC and MS COCO, we fine-tune the models initialized from pre-trained ImageNet models to straightly adapt them to the object detection task. For Tiny YOLO, we initialize the backbone network from a model trained with the protocol in Section 4.1 on ImageNet. For YOLOv2(ResNet-34), we directly initialize the backbone from the pre-trained ResNet-34 in TorchVision [52]. Then we fine-tune the initialized models for 250 epochs using the standard SGD optimizer on PASCAL VOC 2007 and MS COCO 2017. For the first two epochs we slowly raise the learning rate from 0 to 1e-3. Then we continue training with a learning rate of 1e-3 and divide the learning rate by 10 at the the 150th, 200th epoch. **Evaluation metrics.** The performance is measured in terms of mean average precision (mAP). On PASCAL VOC, we use the definitions from VOC 2007 to calculate average precision (AP) and report the mAP over 20 object categories. On MS COCO, we follow MS COCO protocol to calculate AP and report the mAP over 80 object categories. For a fair comparison with Spiking-YOLO [19], we report the mAPs at IoU = 0.5 on both PASCAL VOC and MS COCO. ### _Overall Performance_ We summarize and compare the performance in Table IX. We include the Spiking-YOLO by Kim et al. 
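A minimal sketch of this fine-tuning schedule, assuming the warmup is applied linearly at epoch granularity (the text only states its endpoints):

```
def yolo_lr(epoch: int, base_lr: float = 1e-3) -> float:
    """Fine-tuning learning rate: linear warmup from 0 to base_lr over the
    first two epochs, then step decay (divide by 10) at epochs 150 and 200."""
    if epoch < 2:                      # slow warmup, assumed linear per epoch
        return base_lr * (epoch + 1) / 2
    if epoch < 150:
        return base_lr
    if epoch < 200:
        return base_lr / 10
    return base_lr / 100

# Example: inspect a few checkpoints of the 250-epoch schedule.
for e in [0, 1, 2, 149, 150, 199, 200, 249]:
    print(e, yolo_lr(e))
```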
**Evaluation metrics.** The performance is measured in terms of mean average precision (mAP). On PASCAL VOC, we use the definitions from VOC 2007 to calculate average precision (AP) and report the mAP over 20 object categories. On MS COCO, we follow the MS COCO protocol to calculate AP and report the mAP over 80 object categories. For a fair comparison with Spiking-YOLO [19], we report the mAPs at IoU = 0.5 on both PASCAL VOC and MS COCO.

### _Overall Performance_

We summarize and compare the performance in Table IX. We include the Spiking-YOLO by Kim et al. [19] and its improved version [60] for comparison. All numbers are taken from the corresponding papers. On PASCAL VOC 2007, our Tiny YOLO (latency is 7) outperforms [60] (Tiny YOLO) by 1.39% mean AP while using about \(714\times\) fewer time steps. Furthermore, our Tiny YOLO achieves almost lossless conversion for all latency configurations (3, 7, 15). Thanks to the shallow architecture of Tiny YOLO, our converted Tiny YOLO even outperforms its counterpart ANN when the latency is 3/7. To further explore the capacity of deep SNNs in object detection, we apply our method to the more challenging YOLOv2(ResNet-34) architecture. Compared with [60], our YOLOv2(ResNet-34) (latency is 15/7/3) achieves 24.61%/21.99%/17.13% higher mean AP while using \(333/714/1,667\times\) fewer time steps. On MS COCO 2017, our Tiny YOLO (latency is 7) outperforms [60] (Tiny YOLO) by 0.71% mean AP while using about \(714\times\) fewer time steps. Our YOLOv2(ResNet-34) (latency is 15/7/3) outperforms [60] by 20.62%/16.11%/7.67% mean AP while using \(333/714/1,667\times\) fewer time steps. We further provide visual results of our YOLOv2(ResNet-34) in Fig. 7. As shown in the figure, our SNNs are able to detect objects at a level close to the full-precision ANN. Our SNN with a latency of 15 can even detect objects (e.g., vase) that the full-precision ANN fails to detect.

## 6 Experiments on Semantic Segmentation

Semantic segmentation is another fundamental and heavily studied task in computer vision [61, 62]. It aims to predict the object class of every pixel in an image, or to assign a pixel the 'background' label if it does not belong to any of the listed classes. In recent years, semantic segmentation with deep learning has achieved great success. The fully convolutional network (FCN) [70], which regards semantic segmentation as a dense per-pixel classification problem, has been the basis of semantic segmentation with CNNs. To preserve image details, SegNet [71] employs an encoder-decoder structure, U-Net [72] introduces skip connections between the downsampling and upsampling paths, and RefineNet [73] presents multi-path refinement to exploit fine-grained low-level features. To capture contextual information at multiple scales, Deeplab [74] introduces Atrous Spatial Pyramid Pooling (ASPP), and PSPNet [75] performs spatial pyramid pooling at different scales. Although heavily studied in the ANN domain, semantic segmentation is scarcely explored in the SNN domain. An existing work by Kim et al. [43] employs SNNs directly trained with surrogate gradients for semantic segmentation. However, this method suffers from a significant performance drop compared with ANNs. Here, we explore our Fast-SNN for semantic segmentation using the Deeplab framework [74] for a fair comparison.

### _Experimental Setup_

**Datasets.** We perform the semantic segmentation task on two datasets: PASCAL VOC 2012 [61] and MS COCO 2017 [62]. PASCAL VOC 2012 is further augmented by the extra annotations provided by [76], resulting in 10,582 training images.

**Data preprocessing.** Following the original Deeplab protocol [74, 77, 78], we first standardize the data with the mean and standard deviation of the ImageNet dataset. Then we take a random \(513\times 513\) crop from the image. We further apply data augmentation by randomly scaling the input images (from 0.5 to 2.0) and flipping them horizontally with a probability of 50%.
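The joint image/mask augmentation can be sketched as follows; the zero-padding and ignore-label conventions for images smaller than the crop are assumptions, as the text only specifies the scale range, flip probability, and crop size:

```
import random
from PIL import Image

def joint_augment(img: Image.Image, mask: Image.Image, crop: int = 513):
    """Deeplab-style joint augmentation: random scaling in [0.5, 2.0],
    horizontal flip with probability 0.5, and a random crop x crop crop,
    applied identically to the image and its label mask."""
    scale = random.uniform(0.5, 2.0)
    w, h = img.size
    new_size = (int(w * scale), int(h * scale))
    img = img.resize(new_size, Image.BILINEAR)
    mask = mask.resize(new_size, Image.NEAREST)  # NEAREST keeps labels discrete

    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
        mask = mask.transpose(Image.FLIP_LEFT_RIGHT)

    # Pad up to the crop size if necessary (padding scheme is our assumption).
    pad_w, pad_h = max(crop, img.size[0]), max(crop, img.size[1])
    if (pad_w, pad_h) != img.size:
        padded_img = Image.new("RGB", (pad_w, pad_h), (0, 0, 0))
        padded_mask = Image.new("L", (pad_w, pad_h), 255)  # 255 = ignore label
        padded_img.paste(img, (0, 0))
        padded_mask.paste(mask, (0, 0))
        img, mask = padded_img, padded_mask

    x = random.randint(0, img.size[0] - crop)
    y = random.randint(0, img.size[1] - crop)
    box = (x, y, x + crop, y + crop)
    return img.crop(box), mask.crop(box)
```

ImageNet mean/std standardization is then applied when converting the cropped image to a tensor.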
**Network architecture.** We evaluate two Deeplab [74] based architectures for semantic segmentation. The first architecture is the VGG-9 defined in [43]. To facilitate ANN-to-SNN conversion, we remove the average pooling layers and incorporate the downsampling operations into convolution layers. With three downsampling layers of stride 2, the \(output\_stride\) (defined as the ratio of the input image spatial resolution to the final output resolution [78]) of this VGG-9 architecture is 8. The second architecture comprises a ResNet-34 backbone and an ASPP module. In the remainder of this paper, we refer to this architecture as ResNet-34 + ASPP. The ASPP module contains five parallel convolution layers: one \(1\times 1\) convolution and four \(3\times 3\) atrous convolutions with \(rates=(6,12,18,24)\) to capture multi-scale information, following the configurations in [74]. The \(output\_stride\) of ResNet-34 + ASPP is 16.

**Training details.** On PASCAL VOC and MS COCO, we fine-tune models initialized from pre-trained ImageNet models to directly adapt them to the semantic segmentation task. For VGG-9, we initialize the first seven convolution and batch normalization layers from the pre-trained VGG-16 in TorchVision [52]. For ResNet-34 + ASPP, we initialize the backbone from the pre-trained ResNet-34 in TorchVision [52]. We then fine-tune the initialized models for 50 epochs using the standard SGD optimizer on both PASCAL VOC 2012 and MS COCO 2017. Following [78], we initialize the learning rate to 0.007 and employ a 'poly' learning rate policy, where the initial learning rate is multiplied by

\[(1-\frac{iter}{max\_iter})^{power} \tag{19}\]

with \(power=0.9\). We use a momentum of 0.9 and a weight decay of 1e-4. We progressively build ANNs with activations quantized to 4/3/2 bits and apply ANN-to-SNN conversion.
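The 'poly' policy of Eq. (19) is straightforward to implement; the sketch below uses the stated base learning rate of 0.007 and \(power=0.9\), and assumes the rate is updated once per iteration:

```
def poly_lr(iteration: int, max_iter: int, base_lr: float = 0.007,
            power: float = 0.9) -> float:
    """'Poly' policy of Eq. (19): base_lr * (1 - iter/max_iter) ** power."""
    return base_lr * (1.0 - iteration / max_iter) ** power

# Example: the learning rate decays smoothly from 0.007 toward 0.
for it in [0, 1000, 5000, 9999]:
    print(it, round(poly_lr(it, max_iter=10000), 6))
```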
**Evaluation metrics.** The performance is measured in terms of pixel intersection-over-union (IoU) averaged across the 21 classes (20 foreground object classes and one background class) on PASCAL VOC 2012 and across the 81 classes (80 foreground object classes and one background class) on MS COCO 2017.

### _Overall Performance_

We summarize and compare the performance in Table X. On PASCAL VOC 2012, we include the Spiking-Deeplab proposed by Kim et al. [43] for comparison. All numbers are taken from their paper. Compared with [43], our VGG-9 (latency is 3) outperforms their VGG-9 by 21.56% mean IoU while using about 7\(\times\) fewer time steps. Our VGG-9 also keeps the performance gap between ANNs and SNNs small. With a latency of 15, our converted VGG-9 even improves on the ANN performance by 0.17% at mean IoU. To further explore the capacity of deep SNNs in semantic segmentation, we apply our method to the more challenging ResNet-34 + ASPP. Our ResNet-34 + ASPP achieves 69.7% mean IoU with a latency of 15, outperforming [43] by 47.4% mean IoU while using 5 fewer time steps. For the first time, our method demonstrates that SNNs can achieve performance comparable to ANNs on the challenging MS COCO dataset for the semantic segmentation task. Compared with the corresponding ANNs, our VGG-9 has less than a 2% performance drop at mean IoU for all latency configurations (3, 7, 15). Our VGG-9 (latency is 15) achieves 31.14% mean IoU, only 0.46% lower than its ANN counterpart. For a deeper architecture, our ResNet-34 + ASPP (latency is 15) achieves 50.24% mean IoU. We further provide visual results of our ResNet-34 + ASPP in Fig. 8. As shown in the figure, our SNNs are able to segment objects at a level close to the full-precision ANN. For some objects (e.g., TV), our SNNs yield better results than the full-precision ANN.

Fig. 8: Visual quality comparison of semantic segmentation results on the \(val\) set of PASCAL VOC 2012 (left) and MS COCO 2017 (right) with the network architecture ResNet-34 + ASPP. From left to right: the original image (input), results from full-precision (FP) ANN models, our SNN with \(T=15,7,3\), respectively.

TABLE X: Performance comparison for the semantic segmentation task on PASCAL VOC 2012 and MS COCO 2017 (ANN/SNN mIoU and time steps).

## 7 Conclusion

In this work, we propose a framework to build a Fast-SNN with competitive performance (i.e., comparable with ANNs) and low inference latency (i.e., 3, 7, 15). Our basic idea is to minimize the quantization error and the accumulating error. We show the equivalent mapping between temporal quantization in SNNs and spatial quantization in ANNs, based on which we transfer the minimization of the quantization error to quantized ANN training. This scheme facilitates ANN-to-SNN conversion by finding the optimal clipping range and the novel distributions of weights and activations for each layer. This mapping also makes the accumulating sequential error the only culprit of performance degradation when converting a quantized ANN to an SNN. To mitigate the impact of the sequential error at each layer, we propose a signed IF neuron to cancel wrongly fired spikes. To alleviate the accumulating sequential error, we propose a layer-wise fine-tuning mechanism to minimize the difference between SNN firing rates and ANN activations. Our framework derives an upper bound on the inference latency that guarantees no performance degradation when converting a quantized ANN to an SNN. Our method achieves state-of-the-art performance and low latency on various computer vision tasks, including image classification, object detection, and semantic segmentation. In addition, our SNNs, which inherit the quantized weights of ANNs, are intrinsically compatible with low-precision neuromorphic hardware.
2301.00169
Generative Graph Neural Networks for Link Prediction
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, existing methods developed for this purpose are typically discriminative, computing features of local subgraphs around two neighboring nodes and predicting potential links between them from the perspective of subgraph classification. In this formalism, the selection of enclosing subgraphs and heuristic structural features for subgraph classification significantly affects the performance of the methods. To overcome this limitation, this paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP. Instead of sampling positive and negative links and heuristically computing the features of their enclosing subgraphs, GraphLP utilizes the feature learning ability of deep-learning models to automatically extract the structural patterns of graphs for link prediction under the assumption that real-world graphs are not locally isolated. Moreover, GraphLP explores high-order connectivity patterns to utilize the hierarchical organizational structures of graphs for link prediction. Our experimental results on all common benchmark datasets from different applications demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
Xingping Xian, Tao Wu, Xiaoke Ma, Shaojie Qiao, Yabin Shao, Chao Wang, Lin Yuan, Yu Wu
2022-12-31T10:07:19Z
http://arxiv.org/abs/2301.00169v1
# Generative Graph Neural Networks for Link Prediction

###### Abstract

Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, existing methods developed for this purpose are typically discriminative, computing features of local subgraphs around two neighboring nodes and predicting potential links between them from the perspective of subgraph classification. In this formalism, the selection of enclosing subgraphs and heuristic structural features for subgraph classification significantly affects the performance of the methods. To overcome this limitation, this paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP. Instead of sampling positive and negative links and heuristically computing the features of their enclosing subgraphs, GraphLP utilizes the feature learning ability of deep-learning models to automatically extract the structural patterns of graphs for link prediction under the assumption that real-world graphs are not locally isolated. Moreover, GraphLP explores high-order connectivity patterns to utilize the hierarchical organizational structures of graphs for link prediction. Our experimental results on all common benchmark datasets from different applications demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction. The code is available at [https://github.com/star4455/GraphLP](https://github.com/star4455/GraphLP).

keywords: Graph Machine Learning, Graph Neural Networks, Link Prediction, Structural Patterns, Network Reconstruction.

## 1 Introduction

Graphs provide an elegant representation for characterizing entities and their interrelations in complex systems. Given that real-world graphs can usually only be partially observed and are often noisy, link prediction, which aims to infer missing and spurious links based on observed graphs, is a paradigmatic and fundamental problem across many scientific domains, including knowledge graph completion [52], experimental design in biological networks [6], fake account detection in online social networks [24], and product recommendation on e-commerce websites [31]. To address the link prediction problem, numerous heuristic methods have been proposed, including local indices such as Common Neighbors (CN) [2] and Resource Allocation (RA) [61], global indices such as Katz [25] and SimRank [23], and quasi-local indices such as the Local Path Index (LP) [61]. However, heuristic methods rely on strong assumptions about when two nodes are likely to be linked in real-world graphs and lack universal applicability to diverse areas [8]. Subsequently, statistical learning-based algorithms have been proposed and have obtained ground-breaking results, such as the maximum likelihood-based hierarchical structure model [13], the stochastic block model [21], the matrix factorization-based link prediction method [37], Linear Optimization (LO) link prediction [38], and the Low Frobenius norm-based Link Prediction (LFLP) method [53].
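For readers unfamiliar with these heuristics, the following minimal sketch computes two of the local indices mentioned above, Common Neighbors and Resource Allocation, directly from a binary adjacency matrix; it is illustrative, not code from the works cited:

```
import numpy as np

def common_neighbors(A: np.ndarray) -> np.ndarray:
    """CN(i, j) = |N(i) intersect N(j)| = [A @ A]_ij for binary A."""
    return A @ A

def resource_allocation(A: np.ndarray) -> np.ndarray:
    """RA(i, j) = sum over z in N(i) intersect N(j) of 1/deg(z),
    i.e., [A D^{-1} A]_ij with D the degree matrix."""
    deg = A.sum(axis=1)
    inv_deg = np.divide(1.0, deg, out=np.zeros_like(deg, dtype=float),
                        where=deg > 0)
    return A @ np.diag(inv_deg) @ A

# Toy example: a path graph 0-1-2; nodes 0 and 2 share neighbor 1 (degree 2).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(common_neighbors(A)[0, 2])     # 1.0
print(resource_allocation(A)[0, 2])  # 0.5
```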
With the development of network representation learning, various network embedding algorithms have been put forth so that the likelihood of non-observed links can be estimated based on the proximity of nodes in a low-dimensional vector space, including LINE [44], Node2Vec [20], and DNGR [10]. Recently, driven by the dramatic advances in deep learning techniques, neural networks have gradually been used to solve the link prediction problem. [56] trained a fully-connected neural network on the enclosing subgraphs of target links for link prediction, wherein a Weisfeiler-Lehman (WL) algorithm-based graph labeling mechanism was proposed to encode subgraphs. Based on the enclosing subgraphs extracted around links, [57] trained a Graph Neural Network (GNN) for link prediction to achieve a performance comparable to that of heuristic methods. Along this line of research, [36] encoded subgraphs into random-walk transition probabilities and then computed features using these probabilities to classify positive and negative links. Although these subgraph classification-based methods have achieved state-of-the-art link prediction performance, the prediction results are considerably affected by the extraction process of the \(k\)-hop enclosing subgraphs and by the graph structural features computed on them. For example, in representation learning on graphs [55], the range of enclosing subgraphs strongly depends on the graph structure, and the effective range should differ for subgraphs with varying properties.

Typically, from the perspective of subgraph classification, link prediction methods treat subgraphs in real-world graphs independently and equivalently; that is, the global structural information of real-world graphs is totally neglected. However, extensive empirical analyses indicate that real-world graphs are not locally isolated but globally relevant [35, 51]; nodes and edges naturally play different structural roles and contribute differently to the global organization of real-world graphs [53, 54]. Moreover, subgraph classification-based link prediction methods assume that real-world graphs exhibit low-order connectivity patterns that can be captured at the level of individual nodes and edges. However, empirical studies have discovered that real-world graphs exhibit high-order organization at the level of small subgraphs, which are recursively grouped into a hierarchical structure [7, 3]. An illustrative example of the global and high-order organization of real-world graphs is depicted in Figure 1. Hence, two challenges need to be addressed for link prediction: (i) how to learn good representations that preserve both local and global graph structural features, and (ii) how to characterize and utilize hierarchical structural patterns.

To address these challenges, instead of predicting potential links through subgraph classification, this study designs a novel generative and multi-order GNN for link prediction, called GraphLP. Evidently, real-world graphs share some global properties, such as low-rankness and sparsity, that can be used to provide guidance for graph learning. Hence, motivated by network reconstruction theory [21], GraphLP defines a self-representation model-based collaborative inference operation to refine the observed graphs globally, which assumes that the original graph can be reconstructed by utilizing the correlation between subgraph patterns.
Assuming that the paths between a pair of nodes provide evidence for the existence of potential links, GraphLP extracts the local structural information via a high-order connectivity operation on the observed graphs. Thus, every neural network layer obtains the connectivity of node pairs within a two-hop neighborhood, and a neural network with multiple connectivity layers captures the degree of connectivity between node pairs over various path lengths. Meanwhile, the weighted adjacency matrices generated by the connectivity operation in every neural network layer reflect the multi-order connectivity patterns in the graphs. Further, the hierarchical organizational structure of real-world graphs is explored by applying the collaborative inference operation. The contributions of this study can be summarized as follows:

* **Generative framework.** Rather than a subgraph classification-based discriminative scheme, a novel network reconstruction-based generative GNN is proposed for link prediction, which provides a new paradigm for the application of neural networks to the link prediction problem.
* **End-to-end learning.** Instead of designing heuristic graph structural features for subgraph representation, local and global structural patterns are extracted and fused in an end-to-end fashion for link prediction.
* **Algorithm.** A novel collaborative inference operation and a high-order connectivity computation mechanism are developed to characterize the structural patterns of real-world graphs at different scales.
* **Experiment.** Extensive experiments on real-world datasets from different areas reveal that the proposed method, GraphLP, achieves promising performance and consistently outperforms other state-of-the-art methods.

**Paper Organization.** The rest of this work is organized as follows. Section 2 discusses related studies. Section 3 presents the problem definitions and describes the preliminaries. Section 4 describes the proposed method. Section 5 presents the experimental results, and finally, Section 6 presents the conclusion and discussion.

## 2 Related Work

GNNs and the link prediction task have been extensively investigated in recent years. A brief review of related studies is provided in this section.

### Graph Neural Networks

Owing to their potential in modeling the complex structures of non-Euclidean graphs, GNNs have achieved state-of-the-art performance on almost all graph-based tasks, such as node classification, graph classification, and link prediction. Based on different theories and perspectives, a plethora of different GNNs have been proposed over the years.

Figure 1: An illustrative example depicting the global and high-order organizations of real-world graphs. (a) Gene network for C. elegans [12]. (b) Representative hierarchical star-like structure [45]. (c) Representative hierarchical modular organization [39]. (b) and (c) depict the representative structural patterns of real-world graphs such as (a).

Generally, GNNs can be divided into two categories: spectral-based and spatial-based methods. Spectral-based GNNs design graph convolution operators in the spectral domain using the graph Fourier transform. The involved convolution operation is defined as follows:

\[f_{1}*f_{2}=\mathbf{U}[(\mathbf{U}^{\mathrm{T}}f_{1})\odot(\mathbf{U}^{\mathrm{T}}f_{2})], \tag{1}\]

where \(\odot\) denotes the element-wise product.
The spectral filter is defined as \(\mathbf{g}=\mathbf{U}^{\mathrm{T}}f_{1}\), and the node signal \(\mathbf{X}\) can be processed as follows:

\[\mathbf{Z}=\mathbf{U}[\mathbf{g}(\Lambda)\odot(\mathbf{U}^{\mathrm{T}}\mathbf{X})]=\mathbf{U}\mathbf{g}(\Lambda)\mathbf{U}^{\mathrm{T}}\mathbf{X}. \tag{2}\]

where \(\mathbf{U}\) denotes the matrix of eigenvectors of the normalized graph Laplacian \(\mathbf{L}=\mathbf{I}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\mathrm{T}}\) [11]. Assuming that the feature representation of a node should be affected only by its \(k\)-hop neighborhood, [16] proposed a Chebyshev polynomial-based \(k\)-localized convolution and developed a convolutional neural network, ChebNet, which eliminated the need to compute the eigenvectors of the Laplacian. Subsequently, [50] simplified the Chebyshev polynomial filter using its first-order approximation and proposed the popular spectral-based method called Graph Convolutional Networks (GCNs). Spatial-based GNNs define the graph convolution operator based on the graph topology, wherein the feature vectors of a node's neighbors are aggregated via a permutation-invariant function. Specifically, [22] proposed the GraphSAGE approach, which samples a fixed-size set of neighborhood nodes and uses max pooling, mean pooling, or LSTM pooling schemes to aggregate neighbor information. Considering that a node's neighbors contribute with different weights, [46] proposed the Graph Attention Network (GAT) algorithm, which calculates attention coefficients and then aggregates the neighborhood information. Other related models include PATCHY-SAN [34] and DCNN [4]; further details on GNNs can be found in the review [60].

### Neural Networks based Link Prediction

Following heuristic methods, matrix completion-based methods, and network embedding-based methods, neural networks have gradually been applied to the link prediction problem and have achieved state-of-the-art results. Specifically, [56] proposed a link prediction method called the Weisfeiler-Lehman Neural Machine (WLNM), which labels nodes using the Weisfeiler-Lehman algorithm and encodes subgraphs to construct a feedforward neural network-based classification model. Next, from the perspective of subgraph classification, [57] proposed a novel GNN-based link prediction framework, SEAL, to learn subgraph structures and node features from local enclosing subgraphs. Along this line, to directly leverage the topological features of local subgraphs, [36] proposed a new random-walk-based pooling scheme, WalkPool, and built features for subgraph classification. Moreover, [18] proposed a neural network-based link prediction method that uses only one-hop neighborhood information and demonstrated performance almost equivalent to that of WLNM and SEAL. Instead of subgraph classification, [8] converted the original graph into a corresponding line graph and solved a node classification problem for link prediction. To perform link prediction for general directed or undirected complex networks, [48] represented the adjacency matrices of networks as binary images and developed a generative adversarial network (GAN)-based method. In addition, because existing GNN-based methods do not scale appropriately to large graphs, [30] extracted sparse enclosing subgraphs based on multiple random walks and presented a scalable link prediction solution called ScaLed.
To reduce the time required to determine the distances between two nodes, [27] defined an anchor-based distance and proposed a new distance-enhanced GNN method for link prediction. Among all existing methods for link prediction, the work closest to the one considered in this study is the GAN-based method [48]. However, that method predicts potential links via image processing within the GAN framework, whereas the proposed method conducts link prediction via GNN-based network reconstruction.

### Network Structure Analysis

Real-world graphs, also known as complex networks, are abstract representations of complex systems and have been extensively studied in the field of network science. Numerous studies have revealed that complex networks exhibit rich and diverse connectivity patterns. [32] argued that the organization of real networks usually embodies both regularities and irregularities, where the former can be modeled and decides the extent to which the formation of a network can be explained. Notably, link predictability reflects the structural regularities in real-world networks and denotes the inherent difficulty of link prediction. [53] proposed a self-representation network model-based method, called NetSER, for measuring and regulating the link predictability of networks. [54] proposed a deep linear coding-based link prediction adversarial attack method that disturbs the underlying structural pattern of networks, which proved that links play global structural roles in network organization. Moreover, [7] suggested that high-order connectivity patterns are essential for understanding the fundamental structures of networks and developed a framework that identifies clusters of network motifs. [41] claimed that hierarchical structure plays an important role in complex systems; to prove the existence of hierarchical organization, an unsupervised method for extracting the hierarchical organization of complex networks was introduced and validated. Although real-world graphs exhibit various structural patterns, most existing neural network-based link prediction methods simply assume that graphs are flattened and locally isolated, and these methods judge the existence of links based only on local enclosing subgraphs. Beyond local structural features, this study focuses on integrating global and hierarchical structural patterns into neural networks for link prediction.

## 3 Problem Definition and Preliminaries

### Problem Definition

**Notations.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) denote an undirected and unweighted graph, where \(\mathcal{V}=\{v_{1},\cdots,v_{N}\}\) denotes the set of nodes and \(\mathcal{E}=\{e_{1},\cdots,e_{M}\}\) denotes the set of edges. The adjacency matrix of graph \(\mathcal{G}\) is denoted as \(\mathbf{A}\in\{0,1\}^{N\times N}\), where \(\mathbf{A}_{ij}=1\) if nodes \(i\) and \(j\) are connected and \(\mathbf{A}_{ij}=0\) otherwise. Each edge \(e\) can be represented as a node pair \((u,v)\), where \(u,v\in\mathcal{V}\). Let \(\mathcal{N}(u)\) denote the neighbors of node \(u\), \(\mathcal{N}(u)=\{v\,|\,(u,v)\in\mathcal{E}\}\).
**Link Prediction.** Given an observed graph \(\mathcal{G}_{o}=(\mathcal{V},\mathcal{E}_{o})\) corresponding to the original graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), link prediction aims to infer the presence or absence of an edge between a pair of target nodes based on \(\mathcal{G}_{o}\), thereby generating a recovered graph \(\mathcal{G}^{*}\) that approximates the original graph \(\mathcal{G}\). In particular, the prediction problem involves identifying a function that generates a likelihood score for a pair of nodes \((u,v)\notin\mathcal{E}_{o}\) to infer the missing link \((u,v)\), or produces a likelihood score for an existing edge \((u,v)\in\mathcal{E}_{o}\) to identify spurious links. Thus, the link prediction problem can be formulated as \(s_{uv}=f(u,v,\mathbf{A}|\mathbf{\theta})\), where \(\mathbf{\theta}\) denotes the parameters of the link prediction model. In this work, \(\mathcal{E}_{m}\) and \(\mathcal{E}_{s}\) denote the identified missing and spurious links, respectively.

Note that data augmentation is a set of techniques that increases the amount and diversity of data by creating reasonable virtual data points from existing data, such that better machine learning models can be constructed based on them. According to [59], this study considers graph data augmentation and adopts a random mapping mechanism to produce an augmented graph set \(\mathcal{D}\) based on the observed graph \(\mathcal{G}_{o}=(\mathcal{V},\mathcal{E}_{o})\). Specifically, the set of all possible edges in the graph \(\mathcal{G}_{o}\) is denoted as \(\Omega\), the existing edge set is denoted as \(\mathcal{E}_{o}\), and the non-existing edge set is denoted as \(\mathcal{E}_{non}=\Omega-\mathcal{E}_{o}\). Thus, the candidate sets for random mapping are defined as follows:

\[\mathcal{E}_{del}^{c}=\mathcal{E}_{o},\quad\mathcal{E}_{add}^{c}=\mathcal{E}_{non}. \tag{3}\]

Thereafter, samples are randomly drawn from the candidate sets to obtain the edge sets \(\mathcal{E}_{del}\) and \(\mathcal{E}_{add}\). Finally, a new augmented graph is generated by modifying the graph \(\mathcal{G}_{o}\) based on \(\mathcal{E}_{del}\) and \(\mathcal{E}_{add}\):

\[\mathcal{G}^{\prime}=(\mathcal{V},(\mathcal{E}_{o}\cup\mathcal{E}_{add})\backslash\mathcal{E}_{del}). \tag{4}\]

Each input graph can be viewed as an instance for link prediction, owing to the generative learning scheme of the model considered in this work. Thus, the dataset containing a series of augmented graphs can be denoted as \(\mathcal{D}=\{\mathcal{G}_{i}\,|\,i=1,...,l\}\) and split to yield disjoint training and validation sets, denoted as \(\mathcal{D}_{train}\) and \(\mathcal{D}_{val}\), respectively, wherein the missing and spurious links of the validation set are guaranteed not to appear in the training set. The observed graph \(\mathcal{G}_{o}\) used to generate the augmented graphs is defined as the test set \(\mathcal{D}_{test}\).
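A minimal sketch of the random mapping mechanism of Eqs. (3)-(4); sampling \(\mathcal{E}_{del}\) and \(\mathcal{E}_{add}\) as a fixed fraction of the observed edges is an assumption here (the actual sampling sizes are specified later, in Section 5.1):

```
import random

def augment_graph(edges, num_nodes, ratio=0.1, seed=None):
    """Random-mapping augmentation: delete a random subset of observed edges
    (E_del) and add an equally sized random subset of non-existing edges
    (E_add), returning the perturbed edge set of Eq. (4)."""
    rng = random.Random(seed)
    all_pairs = {(i, j) for i in range(num_nodes)
                 for j in range(i + 1, num_nodes)}
    non_edges = sorted(all_pairs - edges)      # candidate set E_add^c
    k = int(ratio * len(edges))
    e_del = set(rng.sample(sorted(edges), k))  # drawn from E_del^c = E_o
    e_add = set(rng.sample(non_edges, k))
    return (edges | e_add) - e_del

# Example: perturb a small observed graph.
observed = {(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)}
print(augment_graph(observed, num_nodes=5, ratio=0.2, seed=0))
```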
### Graph Convolutional Networks

GCNs are a class of neural networks designed to generalize the traditional convolution operator to non-Euclidean graph-structured data. In essence, GCNs aim to learn new feature representations of the nodes in a graph by exploiting its structural information. Let the adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\) denote the structural information of the graph \(\mathcal{G}\), and \(\mathbf{X}\in\mathbb{R}^{N\times F}\) denote the feature matrix of all graph nodes. Mathematically, using the output of the \(l\)-th layer as the input for the next layer, each neural network layer can be formulated as a nonlinear function:

\[\mathbf{H}^{(l+1)}=f(\mathbf{H}^{(l)},\mathbf{A}) \tag{5}\]

where \(\mathbf{H}^{(l)}\) corresponds to the feature matrix of the \(l\)-th layer, and \(\mathbf{H}^{(0)}=\mathbf{X}\) is the input feature matrix of the first layer. Specific GCN models differ only in the manner in which the nonlinear function \(f(\cdot)\) is instantiated. A simple example of \(f(\cdot)\) is as follows:

\[f(\mathbf{H}^{(l)},\mathbf{A})=\sigma(\mathbf{A}\mathbf{H}^{(l)}\mathbf{W}^{(l)}) \tag{6}\]

where \(\sigma(\cdot)\) denotes a nonlinear activation function, such as a Rectified Linear Unit (ReLU), and \(\mathbf{W}^{(l)}\) denotes a trainable weight matrix for the \(l\)-th layer. With this propagation rule, the neighbors' features are aggregated to represent each node at every layer, and the features become increasingly abstract by stacking layers on top of each other. However, there exist two limitations: the propagation rule aggregates the features of neighboring nodes but not of the node itself, and the multiplication with \(\mathbf{A}\) is expected to change the scale of the feature vectors; that is, nodes with a high degree will have larger values, and nodes with a low degree may have smaller values. To address these problems, a new propagation function \(f(\cdot)\) is presented as follows:

\[f(\mathbf{H}^{(l)},\mathbf{A})=\sigma(\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}) \tag{7}\]

where \(\hat{\mathbf{A}}\) is obtained by adding an identity matrix \(\mathbf{I}\) to the adjacency matrix, \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), \(\hat{\mathbf{D}}\) denotes the diagonal node degree matrix of \(\hat{\mathbf{A}}\), and \(\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\) denotes symmetric normalization.
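A minimal numerical sketch of the propagation rule in Eq. (7), with ReLU as the activation:

```
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One propagation step of Eq. (7): ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU

# Toy run: 3 nodes, 4 input features, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
print(gcn_layer(A, H, W).shape)  # (3, 2)
```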
### Low-rank and Sparse Modeling

Traditionally, principal component analysis (PCA) was proposed to determine a low-dimensional representation of data while retaining as much information as possible. However, PCA is effective mainly when dealing with Gaussian noise that is independent and identically distributed with respect to the original data. Hence, Robust Principal Component Analysis (RPCA) [9] was proposed to eliminate the effect of erratic noise (outliers). PCA and RPCA implicitly assume that the underlying data structure is a single low-rank subspace; however, real-world data may be drawn from a union of multiple subspaces, in which case the modeling may be inaccurate. To this end, Low-Rank Representation (LRR) [28] has been proposed. Considering the correlation between the connectivity patterns of nodes in real-world graphs, the adjacency matrix of such graphs should be low-rank; in other words, the rows or columns of the adjacency matrix are not linearly independent. Thus, assuming that hidden non-zero entries representing missing links can be recovered from the adjacency matrix, [37] proposed an RPCA-based link prediction method, which is formulated as the following optimization problem:

\[\min_{\mathbf{X}^{*},\mathbf{E}}\ \text{rank}(\mathbf{X}^{*})+\gamma\|\mathbf{E}\|_{0}\quad s.t.\ \mathbf{A}=\mathbf{X}^{*}+\mathbf{E} \tag{8}\]

where \(\text{rank}(\mathbf{X}^{*})\) denotes the rank of matrix \(\mathbf{X}^{*}\), \(\|\cdot\|_{0}\) is the \(\ell_{0}\)-norm, and \(\gamma\) denotes the balancing parameter. The method searches for an \(\mathbf{X}^{*}\) with a rank as low as possible and an \(\mathbf{E}\) as sparse as possible given \(\mathbf{A}\). Moreover, by representing a network structure with as few representative subgraphs as possible, [53] proposed an LRR-based link prediction method, wherein networks can be modeled via a low-rank and sparse representation, as follows:

\[\min_{\mathbf{Z},\mathbf{E}}\ \text{rank}(\mathbf{Z})+\alpha\|\mathbf{Z}\|_{0}+\beta\|\mathbf{E}\|_{2,1}\quad s.t.\ \mathbf{A}=\mathbf{A}\mathbf{Z}+\mathbf{E} \tag{9}\]

where \(\mathbf{Z}\) denotes the representation matrix reflecting the organization principle of the network, and \(\|\cdot\|_{2,1}\) is the \(\ell_{2,1}\)-norm. The notations used in this study are listed in Table 1.

\begin{table}
\begin{tabular}{|c|l|}
\hline
Notations & Descriptions \\
\hline
\(\mathcal{G}\) & Original graph \\
\hline
\(\mathcal{G}_{o}\) & Observed graph \\
\hline
\(\mathbf{A}\) & Adjacency matrix of graph \\
\hline
\(\mathcal{E}_{m}\) & Missing links \\
\hline
\(\mathcal{E}_{s}\) & Spurious links \\
\hline
\(\mathcal{D}\) & Dataset that contains augmented graphs \\
\hline
\(\mathbf{H}^{(l)}\) & Feature matrix of the \(l\)-th neural network layer \\
\hline
\(\mathbf{W}^{(l)}\) & Trainable weight matrix for the \(l\)-th layer \\
\hline
\(\|\cdot\|_{0}\) & \(\ell_{0}\)-norm \\
\hline
\(\|\cdot\|_{2,1}\) & \(\ell_{2,1}\)-norm \\
\hline
\end{tabular}
\end{table}
Table 1: Notations and meanings.

## 4 The Proposed Method

This section presents the proposed link prediction method, GraphLP. As depicted in Figure 2, the framework of GraphLP consists of three main components:

* Collaborative inference operation. There exist certain similarities between the connection patterns of individuals in a complex system, such that the perturbed structure of real-world graphs can be recovered globally based on the correlation between subgraph patterns (Section 4.1).
* High-order connectivity computation. The existence of a link between two target nodes is primarily determined by the degree of connectivity between the nodes, i.e., the number of paths and their lengths. Thus, the likelihood of a link can be estimated locally by computing the connectivity (Section 4.2).
* Pattern fusion operation. In addition to the first-order adjacency matrix, the connection patterns of nodes in the high-order adjacency matrix are also correlated, and the high-order connectivity can be reconstructed based on collaborative inference. Thus, the graph topology can be estimated by fusing the \(k\)-order (\(k\geq 1\)) adjacency matrices (Section 4.3).
### Collaborative Inference Operation

[32] suggested that link formation in real-world graphs is usually driven by both regular and irregular factors, and the former can be explained by a mixture of multiple mechanisms, such as homophily, triadic closure, and preferential attachment. Meanwhile, assuming that high-dimensional data are a mixture of simple data drawn from a union of multiple low-dimensional linear subspaces, LRR has been proposed to represent the data \(\mathbf{A}=[a_{1},a_{2},...,a_{N}]\) as a linear combination of the basis in a "dictionary" \(\mathbf{D}=[d_{1},d_{2},...,d_{M}]\):

\[\min_{\mathbf{Z}}\ \text{rank}(\mathbf{Z})\quad s.t.\ \mathbf{A}=\mathbf{D}\mathbf{Z}, \tag{10}\]

Figure 2: Demonstration of our link prediction method, GraphLP. (a) Link prediction method, GraphLP. The original graph is perturbed using a random mapping mechanism to obtain the observed graph; after this, the observed graph is further perturbed to generate augmented graphs. These augmented graphs are fed into GraphLP to learn the model using the observed graph as the label. Subsequently, the learned model is used to infer the original graph based on the observed graph. (b) Self-representation-based collaborative inference. Based on the structural regularity of graphs, the original graph can be reconstructed by utilizing the correlation between subgraph patterns. (c) Example of high-order connectivity. In addition to the 1-hop neighborhood, multi-hop connectivity influences the existence of links. The right graph represents the two-hop connectivity of the graph on the left, and the red dotted lines in the left graph provide an example of the two-hop connectivity path of node 2.

Thus, the optimal representation matrix \(\mathbf{Z}^{*}\) uncovers the underlying subspaces in the data. By using each subspace to model a homogeneous subset of the data, the multiple subspaces in LRR can capture heterogeneous structures within the data. Therefore, the regular structure of real-world graphs can be described appropriately by the LRR model, wherein the generation mechanisms of graph organization essentially correspond to subspaces and the low-rankness constraint captures the global correlation in graphs. Meanwhile, based on the generation mechanisms of graph organization, individual nodes may have similar connection patterns, and substructures that follow the same generation mechanism can be represented by each other, as depicted in Figure 2(b). Therefore, by using the adjacency matrix \(\mathbf{A}\) as the dictionary, a real-world graph can be represented by itself, as follows:

\[\min_{\mathbf{Z}}\ \text{rank}(\mathbf{Z})\quad s.t.\ \mathbf{A}=\mathbf{A}\mathbf{Z}. \tag{11}\]

In addition to their regular structure, real-world graphs also contain irregular components. Thus, we let the matrix \(\mathbf{E}\) denote such irregular connections; the proposed self-representation model then becomes \(\mathbf{A}=\mathbf{A}\mathbf{Z}+\mathbf{E}\). Following the LRR, the corruptions are considered to be "sample-specific", and the \(\ell_{2,1}\)-norm is adopted to constrain the matrix \(\mathbf{E}\), i.e., \(\|\mathbf{E}\|_{2,1}\). However, although the proposed model can describe real-world graphs, the low-rank model with the \(\ell_{2,1}\)-norm constraint is usually solved using the alternating direction method (ADM), which requires a large number of iterations and has high complexity.
Therefore, a reasonable strategy is to relax the constraints with the Frobenius norm:

\[\min_{\mathbf{Z}}\ \|\mathbf{Z}\|_{F}^{2}+\lambda\|\mathbf{A}-\mathbf{A}\mathbf{Z}\|_{F}^{2} \tag{12}\]

where the residual \(\mathbf{E}=\mathbf{A}-\mathbf{A}\mathbf{Z}\) absorbs the irregular connections. Let \(\mathcal{L}=\|\mathbf{Z}\|_{F}^{2}+\lambda\|\mathbf{A}-\mathbf{A}\mathbf{Z}\|_{F}^{2}\); the partial derivative of \(\mathcal{L}\) with respect to \(\mathbf{Z}\) is \(\partial\mathcal{L}/\partial\mathbf{Z}=2\mathbf{Z}+\lambda(-2\mathbf{A}^{\mathsf{T}}\mathbf{A}+2\mathbf{A}^{\mathsf{T}}\mathbf{A}\mathbf{Z})\). By setting \(\partial\mathcal{L}/\partial\mathbf{Z}=0\), the optimal representation \(\mathbf{Z}^{*}\) can be obtained as follows:

\[\mathbf{Z}^{*}=\lambda(\lambda\mathbf{A}^{\mathsf{T}}\mathbf{A}+\mathbf{I})^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{A}. \tag{13}\]

where \(\mathbf{I}\) denotes the identity matrix. Thus, provided that the clean data are sufficient to represent the graph's structural patterns and the irregular connections are properly characterized, the structure perturbations can be inferred using \(\mathbf{A}\mathbf{Z}^{*}\). Hence, the collaborative inference operation \(\mathcal{CI}(\cdot)\) is defined as follows:

\[\mathcal{CI}(\mathbf{A})=\lambda\mathbf{A}(\lambda\mathbf{A}^{\mathsf{T}}\mathbf{A}+\mathbf{I})^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{A} \tag{14}\]

### High-order Connectivity Computation

According to local similarity indices for link prediction, the more paths two nodes share, the greater the similarity between them. Specifically, two nodes with high mutual connectivity are more likely to form a link. Thus, \(n\)-hop (\(n\geq 2\)) paths must be explored to characterize the local structural features for link prediction. Within a deep learning framework, the \(n\)-hop computation can be decomposed into two-hop operations at each neural layer. Hence, the high-order connectivity computation calculates the two-hop connectivity of graph nodes in each layer, and the mutual connectivity of two nodes can be estimated by stacking this mechanism across layers. Given that integer powers of the adjacency matrix characterize the mutual connectivity of graph nodes, that is, \([\mathbf{A}^{n}]_{ij}\) denotes the number of paths of length \(n\) connecting nodes \(i\) and \(j\), the high-order connectivity computation in each neural layer can be defined based on the second power of the adjacency matrix \(\mathbf{A}\). From the perspective of graph convolutional networks, the high-order connectivity computation \(\mathcal{HC}(\cdot)\) can be defined as

\[\mathcal{HC}(\mathbf{A})=\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\mathcal{CI}(\mathbf{A}), \tag{15}\]

where the weighted adjacency matrix generated by the proposed collaborative inference operation is viewed as the feature matrix of the graph nodes. Figure 3 illustrates the high-order connectivity computation. As presented in Equation (15), the global and local structural features can be captured for link prediction at the level of individual nodes and edges. Thus, the nonlinear propagation function can be defined as follows:

\[\mathbf{H}^{(l+1)}=\sigma(\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\mathcal{CI}(\mathbf{H}^{(l)})\mathbf{W}^{(l)}). \tag{16}\]

Thus, the hierarchical structure of real-world graphs can be characterized by executing the nonlinear propagation function iteratively, in which \(\mathcal{HC}(\mathbf{H}^{(l)})=\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\mathcal{CI}(\mathbf{H}^{(l)})\) represents the high-order connectivity of the graph nodes, as depicted in Figure 2(c), and \(\mathcal{CI}(\mathbf{H}^{(l)})\) denotes the collaborative inference.
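Both operations are closed-form matrix computations; a minimal numpy sketch of Eqs. (13)-(15) follows, using \(\lambda=0.13\) as in the experimental settings:

```
import numpy as np

def collaborative_inference(A: np.ndarray, lam: float = 0.13) -> np.ndarray:
    """Eq. (14): CI(A) = lam * A (lam * A^T A + I)^{-1} A^T A."""
    n = A.shape[0]
    Z = lam * np.linalg.solve(lam * A.T @ A + np.eye(n), A.T @ A)  # Eq. (13)
    return A @ Z

def high_order_connectivity(A: np.ndarray, lam: float = 0.13) -> np.ndarray:
    """Eq. (15): symmetrically normalized (A + I) applied to CI(A)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return A_norm @ collaborative_inference(A, lam)

# Toy example on a 4-node ring.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(high_order_connectivity(A).round(3))
```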
### Pattern Fusion Operation

To estimate the likelihood of potential links, the output of the \((l-1)\)-th layer, i.e., \(\mathbf{H}^{(l)}\), is fed as the input of the \(l\)-th layer. Based on \(\mathcal{CI}(\mathbf{H}^{(l)})\) and \(\mathcal{HC}(\mathbf{H}^{(l)})\), the shallow layers extract the low-order global and local structural features, while the deep layers extract the high-order global and local structural features. Meanwhile, the effective range from which the local structural features are drawn increases with the model depth. Therefore, the structural features at various orders and ranges, i.e., \(\mathcal{HC}(\mathbf{H}^{(l)})\) and \(\mathcal{CI}(\mathbf{H}^{(l)})\) for \(0\leq l\leq L\), all contribute to the inference of potential links, although the exact extent of their contributions depends on the graph data. To exploit all of them, in addition to being used as the inputs of the next layer, the outputs of the neural network layers are mapped to skip a block of several layers based on residual connections, as illustrated in Figure 2(a). Next, all outputs are concatenated and used as the input of a two-layer multi-layer perceptron (MLP), which is defined as:

\[\mathbf{O}=\text{MLP}(\text{concat}(\mathcal{CI}(\mathbf{H}^{(l)}),\mathcal{HC}(\mathbf{H}^{(l)}))),\quad 0\leq l\leq L. \tag{17}\]

where \(\mathbf{O}\) is a vector containing the probabilities of links between all possible node pairs, based on which missing and spurious links can be inferred.

Figure 3: Illustration of high-order connectivity computation.

### Model Training

To train the proposed model, the augmented graphs generated from the observed graph are used as training data, and the adjacency matrix of the observed graph is flattened to serve as the labels \(\mathbf{Y}\), where \(\mathbf{Y}_{i\cdot N+j}\) denotes the existence of the link between nodes \(i\) and \(j\). Correspondingly, \(\mathbf{O}\) represents the prediction results produced by the proposed model for all possible links. Here, the binary cross-entropy (BCE) is used as the loss function:

\[\mathcal{L}=-\frac{1}{N^{2}}\sum_{i=1}^{N^{2}}\left[\mathbf{Y}_{i}\log(\mathbf{O}_{i})+(1-\mathbf{Y}_{i})\log(1-\mathbf{O}_{i})\right]. \tag{18}\]

The learned model is then deployed on the observed graph to reconstruct the original graph. The training process of GraphLP is outlined in Algorithm 1.

```
Require: Training set \(\mathcal{D}_{train}\), validation set \(\mathcal{D}_{val}\), test set \(\mathcal{D}_{test}\), number of neural network layers \(L\).
Ensure: The well-trained model GraphLP.
1: while not converged do
2:   for \(0\leq l\leq L\) do
3:     Conduct the collaborative inference operation using (14);
4:     Compute the high-order connectivity using (16);
5:   end for
6:   Fuse the outputs based on the MLP using (17);
7:   Update the model by minimizing the loss function (18);
8: end while
```
**Algorithm 1** Training Process of GraphLP
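A minimal training-loop sketch matching Algorithm 1; the `model` interface (an \(N\times N\) adjacency matrix in, a flattened vector of \(N^{2}\) link probabilities out) is a hypothetical assumption, while the Adam optimizer and the 5e-4 learning rate follow the experimental settings:

```
import torch
import torch.nn as nn

def train_graphlp(model: nn.Module, train_graphs, label_adj: torch.Tensor,
                  epochs: int = 200, lr: float = 5e-4) -> nn.Module:
    """Each augmented graph is one training instance; the flattened observed
    adjacency matrix serves as the label, and BCE (Eq. 18) is the loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()
    y = label_adj.flatten()  # Y_{i*N+j} = existence of link (i, j)
    for epoch in range(epochs):
        for A in train_graphs:       # list of N x N adjacency tensors
            optimizer.zero_grad()
            out = model(A)           # hypothetical: vector O of probabilities
            loss = criterion(out, y)
            loss.backward()
            optimizer.step()
    return model
```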
### Model Analysis

(1) Generalized local similarity indices. The high-order connectivity computation \(\mathcal{HC}(\mathbf{H}^{(l)})\) in every neural network layer is essentially the second power of the adjacency matrix, and it obtains the connectivity of node pairs within a two-hop neighborhood. As the model depth increases, the connectivity of node pairs over a wider range is considered. Thus, GraphLP degenerates to \(\mathbf{S}=\mathbf{A}^{2}+\alpha\mathbf{A}^{3}+\beta\mathbf{A}^{4}+\cdots\) when the collaborative inference and deep learning mechanisms are removed.

(2) Connection to WalkPool. WalkPool [36] first generates node representations based on a GNN and encodes them into edge weights of the extracted enclosing subgraphs; following this, it uses the edge weights to compute the transition probabilities of a random walk. Next, the method calculates a list of features based on the transition probabilities to classify the subgraphs. However, for an enclosing subgraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), its variants \(\mathcal{G}^{+}=(\mathcal{V},\mathcal{E}\cup\{(i,j)\})\) and \(\mathcal{G}^{-}=(\mathcal{V},\mathcal{E}\backslash\{(i,j)\})\) are used as positive and negative samples, respectively. In essence, this method discriminates only between subgraphs that differ by a single edge and is not suitable for practical link prediction scenarios. In contrast, GraphLP can predict any potential links based on graph structural features.

(3) Connection to LFLP. The LFLP [53] constructs an adjacency matrix based on a self-representation model and then combines it with the observed network to identify missing and spurious links. The collaborative inference operation \(\mathcal{CI}(\mathbf{H}^{(l)})\) in our work is similar to that of LFLP with respect to modeling the global structure of graphs; the difference is that only low-order global structural features are considered in LFLP, whereas multi-order global and local structural features are characterized based on the deep learning framework in GraphLP.

## 5 Experiments

Extensive experiments are further conducted on real-world graphs to evaluate the performance of the proposed method GraphLP: (1) comparison of GraphLP with state-of-the-art methods; (2) comparison of GraphLP with traditional baseline methods; (3) model architecture analysis; and (4) model sensitivity analysis. Here, the Area Under Curve (AUC) and Average Precision (AP) are adopted to evaluate the performance of the methods. Furthermore, Precision is used to verify the superiority of GraphLP over traditional link prediction methods. Based on the link prediction results \(\mathbf{O}\), the scores are sorted in descending and ascending order, and their top-\(L\) links are then taken as the predicted missing and spurious links, respectively. Note that Precision is defined as the ratio of accurately discovered links to the total number of links in the probe set:

\[\text{Precision}=\mathcal{T}/\mathcal{R} \tag{19}\]

where \(\mathcal{T}\) is the number of accurately identified links, and \(\mathcal{R}\) is the total number of links in the probe set.
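A minimal sketch of this top-\(L\) evaluation for missing links (the analogous spurious-link case sorts in ascending order); the candidate enumeration is an assumption for illustration:

```
import numpy as np

def precision_at_L(scores: np.ndarray, probe, candidates, L: int) -> float:
    """Eq. (19): rank candidate pairs by predicted score (descending for
    missing links), take the top-L, and return |hits| / L, with L set to
    the probe-set size R."""
    order = sorted(candidates, key=lambda p: scores[p], reverse=True)
    hits = sum(1 for pair in order[:L] if pair in probe)
    return hits / L

# Toy example: two held-out missing links among four candidate pairs.
scores = np.zeros((4, 4))
scores[0, 2], scores[1, 3], scores[0, 3], scores[1, 2] = 0.9, 0.8, 0.3, 0.1
probe = {(0, 2), (1, 3)}
cands = [(0, 2), (1, 3), (0, 3), (1, 2)]
print(precision_at_L(scores, probe, cands, L=2))  # 1.0
```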
### Experimental Settings

#### 5.1.1 Experimental Datasets

Herein, seven widely used graph datasets are used for link prediction. (1) USAir [40]. This is the transportation network of the United States, including 332 airports as nodes and 2,126 airlines as edges, connecting the United States worldwide. The average node degree is 12.81. (2) C.ele [49]. This is a neural network of C. elegans, with 297 neurons representing nodes and 2,148 synaptic connections representing edges. The average node degree is 14.46. (3) PB [1]. This dataset is a network of hyperlinks between weblogs on US politics, with 1,222 blogs as nodes and 16,714 hyperlinks between blogs as edges. The average node degree is 27.36. (4) NS [33]. This is an undirected co-authorship network with 1,589 nodes and 2,742 edges, where the nodes denote scientists engaged in network science research, and the edges denote that two scientists have co-authored a publication. The average node degree is 3.45. (5) Yeast [47]. This represents a protein-protein interaction network in yeast with 2,375 proteins as nodes and 11,693 protein-protein interactions as edges. The average node degree is 9.85. (6) E.coli [58]. This is a pairwise reaction network of metabolites with 1,805 nodes and 14,660 edges. The average node degree is 12.55. (7) Router [43]. This is a snapshot of the Internet structure at the level of autonomous systems, with 5,022 nodes and 6,258 edges, in which the nodes represent routers and the edges represent data transmission between routers. The average node degree is 2.49. The properties of the datasets are listed in Table 2.

To extensively validate the performance of the proposed method, 90% and 50% of the links of the original graph are selected randomly to first construct the observed graphs. Thereafter, based on the observed graph \(\mathcal{G}_{o}\), 10% non-existing links are added randomly as spurious links, and 10% existing links are removed randomly as missing links, denoted as \(\mathcal{E}_{add}\) and \(\mathcal{E}_{del}\), respectively, to generate the augmented graph set \(\mathcal{D}=\{\mathcal{G}_{i}\,|\,i=1,...,l\}\). Following this, 90% and 10% of the graphs are randomly selected from \(\mathcal{D}\) as the training and validation sets, respectively, and the observed graph \(\mathcal{G}_{o}\) is used as the test set.

#### 5.1.2 Comparison Methods

The proposed method is compared with six state-of-the-art deep learning-based link prediction methods: (1) Weisfeiler-Lehman graph kernel (WLK) [42] is a fast feature extraction scheme based on the WL test of graph isomorphism, which maps the original graph to a graph sequence and adds the pair-wise similarities between the graphs. (2) Weisfeiler-Lehman Neural Machine (WLNM) [56] is a subgraph classification-based link prediction method that leverages deep learning to automatically learn topological features from enclosing subgraphs. (3) Node2Vec [20] is a network embedding method that encodes proximity information into low-dimensional vectors; the node features and low-dimensional vectors are then fed into an MLP for link prediction. (4) LINE [44] learns network embeddings that preserve the first-order and second-order proximity, and the resulting low-dimensional vectors are used for link prediction. (5) SEAL [57] extracts the enclosing subgraphs of positive and negative links and marks the different roles of their nodes; the method then trains a GNN based on the node information matrix to classify subgraphs for link prediction. (6) WalkPool (WP) [36] is a subgraph classification-based link prediction method that encodes node features and graph topology into the transition probabilities of a random walk, after which a list of features is computed to classify subgraphs.
The learning rate is set to 0.0012 for the NS dataset and 0.0005 for the other graphs. For all the datasets, the weight decay is set to 0.0. The number of epochs is 300 on the E.coli and Yeast datasets and 200 on the other datasets. Dropout is applied to the MLP, with the dropout rate set to 0.5 on Router and 0.2 on the others. The trade-off parameter \(\lambda\) is set to 0.13, and the number of neural network layers in GraphLP is set to three. The detailed hyperparameter settings for the model are listed in Table 3. \begin{table} \begin{tabular}{c|c c c c c c c} \hline Data & USAir & NS & PB & Yeast & C.ele & Router & E.coli \\ \hline WLK & 96.82 \(\pm\) 0.84 & 98.79 \(\pm\) 0.40 & 93.34 \(\pm\) 0.89 & 96.82 \(\pm\) 0.35 & 88.96 \(\pm\) 2.06 & 86.59 \(\pm\) 2.23 & 97.25 \(\pm\) 0.42 \\ \hline WLNM & 95.95 \(\pm\) 1.13 & 98.81 \(\pm\) 0.49 & 92.69 \(\pm\) 0.64 & 96.40 \(\pm\) 0.38 & 85.08 \(\pm\) 2.05 & 93.53 \(\pm\) 1.09 & 97.50 \(\pm\) 0.23 \\ \hline Node2Vec & 89.71 \(\pm\) 2.97 & 94.28 \(\pm\) 0.91 & 84.79 \(\pm\) 1.03 & 94.90 \(\pm\) 0.38 & 83.12 \(\pm\) 1.90 & 68.66 \(\pm\) 1.49 & 90.87 \(\pm\) 1.48 \\ \hline LINE & 97.70 \(\pm\) 11.76 & 85.17 \(\pm\) 1.65 & 78.82 \(\pm\) 2.71 & 90.55 \(\pm\) 2.39 & 67.51 \(\pm\) 2.72 & 71.92 \(\pm\) 1.53 & 86.45 \(\pm\) 1.82 \\ \hline SEAL & 97.13 \(\pm\) 0.80 & 99.06 \(\pm\) 0.37 & 94.55 \(\pm\) 0.43 & 98.33 \(\pm\) 0.37 & 89.48 \(\pm\) 1.85 & 96.23 \(\pm\) 1.71 & 98.03 \(\pm\) 0.20 \\ \hline WP & 98.66 \(\pm\) 0.55 & **99.09 \(\pm\) 0.29** & 95.28 \(\pm\) 0.41 & 98.64 \(\pm\) 0.28 & 91.53 \(\pm\) 1.33 & **97.20 \(\pm\) 0.38** & 98.79 \(\pm\) 0.21 \\ \hline GraphLP & **99.91 \(\pm\) 1.03** & 98.94 \(\pm\) 0.96 & **98.32 \(\pm\) 1.43** & **98.74 \(\pm\) 0.16** & **99.41 \(\pm\) 0.42** & 79.30 \(\pm\) 0.19 & **98.96 \(\pm\) 0.19** \\ \hline \end{tabular} \end{table} Table 6: Prediction measured by AUC (50% observed links). **Bold** numbers are the best results of all methods. \begin{table} \begin{tabular}{c|c c c c c c c} \hline Data & USAir & NS & PB & Yeast & C.ele & Router & E.coli \\ \hline WLK & 93.34 \(\pm\) 0.51 & 89.97 \(\pm\) 1.02 & 92.34 \(\pm\) 0.34 & 93.55 \(\pm\) 0.46 & 83.20 \(\pm\) 0.90 & 75.49 \(\pm\) 3.43 & 94.51 \(\pm\) 0.32 \\ \hline WLNM & 92.54 \(\pm\) 0.81 & 90.10 \(\pm\) 1.11 & 91.01 \(\pm\) 0.20 & 93.93 \(\pm\) 0.20 & 76.12 \(\pm\) 1.08 & 86.12 \(\pm\) 0.68 & 94.47 \(\pm\) 0.21 \\ \hline Node2Vec & 82.51 \(\pm\) 2.08 & 86.01 \(\pm\) 0.87 & 77.21 \(\pm\) 0.97 & 92.45 \(\pm\) 0.23 & 72.91 \(\pm\) 1.74 & 66.77 \(\pm\) 0.57 & 85.41 \(\pm\) 0.94 \\ \hline LINE & 71.75 \(\pm\) 11.85 & 71.53 \(\pm\) 0.97 & 78.72 \(\pm\) 1.24 & 83.06 \(\pm\) 9.70 & 60.71 \(\pm\) 6.26 & 64.87 \(\pm\) 6.76 & 75.98 \(\pm\) 14.45 \\ \hline SEAL & 94.15 \(\pm\) 0.54 & 92.21 \(\pm\) 0.97 & 93.42 \(\pm\) 0.19 & 95.32 \(\pm\) 0.38 & 81.99 \(\pm\) 2.18 & 87.79 \(\pm\) 1.71 & 95.67 \(\pm\) 0.24 \\ \hline WP & 95.87 \(\pm\) 0.74 & 92.33 \(\pm\) 0.76 & 94.22 \(\pm\) 0.27 & 96.15 \(\pm\) 0.13 & 86.25 \(\pm\) 1.42 & **89.17 \(\pm\) 0.55** & 96.36 \(\pm\) 0.34 \\ \hline GraphLP & **97.96 \(\pm\) 0.09** & **93.08 \(\pm\) 0.08** & **96.27 \(\pm\) 0.10** & **97.27 \(\pm\) 0.09** & **95.89 \(\pm\) 0.11** & 79.23 \(\pm\) 0.14 & **96.48 \(\pm\) 0.13** \\ \hline \end{tabular} \end{table} Table 7: Prediction measured by AP (50% observed links). **Bold** numbers are the best results of all methods.
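As a concrete reading of Eq. (19) and the ranking protocol above, the following sketch scores missing and spurious links from a predicted likelihood matrix \(\mathbf{O}\); the function and argument names are illustrative, and it assumes candidate node pairs are stored as tuples \((i,j)\) with \(i<j\):

```python
import numpy as np

def precision_at_L(scores, observed_adj, true_missing, true_spurious, L):
    """Precision = T / R for the top-L ranked missing and spurious links.

    scores:        (n, n) matrix of predicted link likelihoods O
    observed_adj:  (n, n) binary adjacency of the observed (perturbed) graph
    true_missing:  set of pairs (i, j) that were deleted from the graph
    true_spurious: set of pairs (i, j) that were added to the graph
    """
    n = scores.shape[0]
    iu, ju = np.triu_indices(n, k=1)          # each unordered pair once

    # Missing links: rank non-edges of the observed graph by descending score.
    non = observed_adj[iu, ju] == 0
    order = np.argsort(-scores[iu[non], ju[non]])
    top_missing = list(zip(iu[non][order][:L], ju[non][order][:L]))

    # Spurious links: rank observed edges by ascending score.
    edge = observed_adj[iu, ju] == 1
    order = np.argsort(scores[iu[edge], ju[edge]])
    top_spurious = list(zip(iu[edge][order][:L], ju[edge][order][:L]))

    p_missing = sum((i, j) in true_missing for i, j in top_missing) / L
    p_spurious = sum((i, j) in true_spurious for i, j in top_spurious) / L
    return p_missing, p_spurious
```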
\begin{table} \begin{tabular}{c|c c c c c c c} \hline Data & USAir & NS & PB & Yeast & C.ele & Router & E.coli \\ \hline WLK & 96.63 \(\pm\) 0.73 & 98.57 \(\pm\) 0.51 & 93.83 \(\pm\) 0.59 & 95.86 \(\pm\) 0.54 & 89.72 \(\pm\) 1.67 & 87.42 \(\pm\) 2.08 & 96.94 \(\pm\) 0.29 \\ \hline WLNM & 95.95 \(\pm\) 1.10 & 98.61 \(\pm\) 0.49 & 93.49 \(\pm\) 0.47 & 95.62 \(\pm\) 0.52 & 86.18 \(\pm\) 1.72 & 94.41 \(\pm\) 0.88 & 97.21 \(\pm\) 0.27 \\ \hline Node2Vec & 91.44 \(\pm\) 1.78 & 91.52 \(\pm\) 1.28 & 85.79 \(\pm\) 0.78 & 93.67 \(\pm\) 0.46 & 84.11 \(\pm\) 1.27 & 65.46 \(\pm\) 0.86 & 90.82 \(\pm\) 1.49 \\ \hline LINE & 81.47 \(\pm\) 10.71 & 80.63 \(\pm\) 1.90 & 76.95 \(\pm\) 2.76 & 87.45 \(\pm\) 3.33 & 69.21 \(\pm\) 3.14 & 67.15 \(\pm\) 2.10 & 82.38 \(\pm\) 2.19 \\ \hline SEAL & 97.09 \(\pm\) 0.7 & 98.85 \(\pm\) 0.41 & 95.01 \(\pm\) 0.34 & 97.91 \(\pm\) 0.52 & 90.30 \(\pm\) 1.35 & 96.38 \(\pm\) 1.45 & 97.64 \(\pm\) 0.22 \\ \hline WP & 98.68 \(\pm\) 0.48 & 98.95 \(\pm\) 0.41 & 95.60 \(\pm\) 0.37 & 98.37 \(\pm\) 0.25 & 95.79 \(\pm\) 1.09 & 97.27 \(\pm\) 0.28 & 98.58 \(\pm\) 0.19 \\ \hline GraphLP & **99.26 \(\pm\) 1.01** & **99.64 \(\pm\) 0.98** & **99.73 \(\pm\) 0.25** & **99.41 \(\pm\) 0.15** & **99.90 \(\pm\) 0.14** & **99.02 \(\pm\) 0.19** & **99.23 \(\pm\) 0.23** \\ \hline \end{tabular} \end{table} Table 4: Prediction measured by AUC (90% observed links). **Bold** numbers are the best results of all methods. Table 8: The precision (90% observed links) of missing links prediction. **Bold** number in the corresponding column indicates the highest accuracy. The results presented in Table 8 demonstrate that the proposed GraphLP model performs the best among the methods. Furthermore, the link prediction accuracy of the proposed model is far higher than that of the other methods, and can be at least three times that of the best-performing baseline. For spurious link prediction, the results measured by Precision are listed in Table 9. For all networks, GraphLP performs the best among the methods and is remarkably better than the second-best algorithm. The results presented in Tables 8 and 9 demonstrate that GraphLP has a stronger ability to learn structural features and can recover the structure of the original network more accurately. Based on Table 2, it can be observed that the proposed model achieves the best Precision despite the large differences between the ACC and AD across all the datasets; this indicates that the proposed model performs well for heterogeneous graph structures. ### Recovered Graph Visualization To verify the effectiveness of the proposed model for missing and spurious link inference, the topology of the recovered graphs during the model training process on the Club dataset is visually compared, as depicted in Figures 4 and 5. Figure 4: The topology visualization of the Club dataset. The experiment performs 10% link perturbation, i.e., 10% spurious links are added and 10% missing links are deleted. Figure 5: The topology visualization of the Club dataset. The experiment performs 20% link perturbation, i.e., 20% spurious links are added and 20% missing links are deleted.
\begin{table} \begin{tabular}{c|c c c c c c c c} \hline Data & Macaque & Mangwet & Jazz & Metabolic & USAir & C.ele & E.coli & Yeast \\ \hline RA & 0.5490 & 0.1380 & 0.5410 & 0.140 & 0.2650 & 0.2790 & 0.4171 & 0.1110 \\ \hline CN & 0.5710 & 0.2880 & 0.5690 & 0.1670 & 0.2480 & 0.2458 & 0.3222 & 0.073 \\ \hline LP & 0.5939 & 0.3280 & 0.7016 & 0.6911 & 0.6271 & 0.4780 & 0.6089 & 0.4585 \\ \hline NMF & 0.8090 & 0.5660 & 0.6510 & 0.2430 & 0.4820 & 0.4333 & 0.4662 & 0.2330 \\ \hline RPCA & 0.810 & 0.5180 & 0.5920 & 0.074 & 0.443 & 0.2609 & 0.3795 & 0.4250 \\ \hline LFLP & 0.818 & 0.583 & 0.663 & 0.2210 & 0.5970 & 0.4390 & 0.4246 & 0.5680 \\ \hline GraphLP & **0.9073** & **0.8750** & **0.9197** & **0.8812** & **0.9057** & **0.9533** & **0.8452** & **0.6210** \\ \hline \end{tabular} \end{table} Table 9: The precision (90% observed links) of spurious links prediction. **Bold** numbers are the best results of all methods. The top half depicts the topology of the graphs, wherein the red links denote the missing links, the blue links denote the spurious links, and the gray links denote the original links. The bottom half depicts the likelihood scores of missing and spurious links. Based on the results, it can be concluded that, as the number of epochs increases, the likelihood scores of missing links gradually increase, and the likelihood scores of spurious links gradually decline. The widths of the lines reflect this process: when the number of epochs reaches 100, the likelihood scores of missing links approach 1.0, and the likelihood scores of spurious links approach 0.0. This demonstrates that the proposed GraphLP model can distinguish between missing links and spurious links and infer them effectively. Moreover, to further demonstrate the effectiveness of the proposed model, the topology of the recovered graphs during the model training process is visualized when 20% of the links of Club are perturbed, as depicted in Figure 5. Compared to Figure 4, we find that the likelihood scores of missing and spurious links are weakened as the structure perturbation ratio increases. However, as the training epochs increase, the model is still able to distinguish and infer the missing and spurious links according to the structural patterns. For instance, when the training epoch reaches 140, the likelihood scores of missing links are greater than 0.5, and those of spurious links are less than 0.3, indicating that the proposed model can still predict missing and spurious links with high accuracy. Figure 8: Visualization of model convergence. (a) Convergence of the USAir dataset. (b) Convergence of the C.ele dataset. Figure 6: The performance of link prediction under various model depths. (a) AUC of the link prediction method GraphLP with different numbers of layers. (b) AP of the link prediction method GraphLP with different numbers of layers. Figure 7: The performance of link prediction under different values of \(\lambda\). (a) AUC of the link prediction method GraphLP under different values of \(\lambda\). (b) AP of the link prediction method GraphLP under different values of \(\lambda\). ### Impact of Model Depth Next, the performance of GraphLP is explored at various model depths. As depicted in Figure 6, the performance of the model in terms of AUC and AP trained with a one-layer neural network is poor; however, its performance improves significantly with an increase in the number of layers. In particular, when the model depth is equal to two, a significant performance improvement is noted.
The primary reason for this is that, with two layers, multi-order global and local structural features are integrated adaptively by the MLP component, which considerably improves the performance of the model. Subsequently, as the layer number increases, a slight improvement in model performance is still noted. When the number of layers is four, the accuracy of GraphLP declines significantly on NS and fluctuates on the other datasets. A possible reason for this is that the model with four layers becomes more complex, thereby requiring more training iterations or an appropriate learning rate [14]. In general, the performance of the proposed model is optimal when the depth is three, and a deep architecture is necessary. ### Impact of Trade-off Parameter To examine the sensitivity of the proposed model to the trade-off parameter, the AUC and AP values of the link prediction methods with different \(\lambda\) are presented in Figure 7. Based on the results, it can be concluded that the performance of the proposed model is not sensitive to \(\lambda\) for most datasets. In Figure 7(a), for the USAir and NS datasets, the AUC value varies significantly under different \(\lambda\), but the performance is still better than that of the other algorithms. In Figure 7(b), the AP value remains stable for different \(\lambda\) values, indicating that the proposed model is insensitive to different \(\lambda\). Overall, our proposed algorithm exhibits satisfactory performance on most datasets with various \(\lambda\). ### The Convergence Analysis Generally, GraphLP converges to optimal values after approximately 200 epochs on most datasets. In particular, Figure 8 plots the learning curves of GraphLP on the USAir and C.ele datasets, including the training loss, validation AUC, validation AP, and validation loss. The results indicate that the AUC and AP values increase rapidly as the training loss and validation loss decrease, and these values converge to the optimum when the validation loss approaches its minimum. Additionally, we observe that the validation loss is lower than the training loss, and the difference between them remains relatively stable. A possible reason for this is that dropout is applied only during training. ## 6 Conclusion This paper aims to reconstruct the graph structure to improve the performance of link prediction. In particular, unlike existing subgraph-classification-based discriminative methods, this work achieves this objective by developing a generative GNN, namely GraphLP, which considers both global and local structural features and hierarchical structural patterns. Concurrently, a novel collaborative inference operation and a high-order connectivity computation mechanism are developed. We also present an analysis of the relationship between GraphLP and other classical link prediction methods. Extensive experimental results demonstrate the superiority of the proposed method over state-of-the-art models and traditional baseline methods. This could be a fruitful avenue for future research aimed at addressing graph learning tasks. ## Acknowledgment This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 62106030, 61802039, 62272066; Chongqing Municipal Postdoctoral Science Foundation under Grant No. cstc2021jcyj-bsb0176; Chongqing Municipal Natural Science Foundation under Grant No.
cstc2020jcyj-msxmX0804; the Chongqing Research Program of Basic Research and Frontier Technology under Grant No. cstc2021jcyj-msxmX0530.
2305.20028
A Study of Bayesian Neural Network Surrogates for Bayesian Optimization
Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query. These objectives are typically represented by Gaussian process (GP) surrogate models which are easy to optimize and support exact inference. While standard GP surrogates have been well-established in Bayesian optimization, Bayesian neural networks (BNNs) have recently become practical function approximators, with many benefits over standard GPs such as the ability to naturally handle non-stationarity and learn representations for high-dimensional data. In this paper, we study BNNs as alternatives to standard GP surrogates for optimization. We consider a variety of approximate inference procedures for finite-width BNNs, including high-quality Hamiltonian Monte Carlo, low-cost stochastic MCMC, and heuristics such as deep ensembles. We also consider infinite-width BNNs, linearized Laplace approximations, and partially stochastic models such as deep kernel learning. We evaluate this collection of surrogate models on diverse problems with varying dimensionality, number of objectives, non-stationarity, and discrete and continuous inputs. We find: (i) the ranking of methods is highly problem dependent, suggesting the need for tailored inductive biases; (ii) HMC is the most successful approximate inference procedure for fully stochastic BNNs; (iii) full stochasticity may be unnecessary as deep kernel learning is relatively competitive; (iv) deep ensembles perform relatively poorly; (v) infinite-width BNNs are particularly promising, especially in high dimensions.
Yucen Lily Li, Tim G. J. Rudner, Andrew Gordon Wilson
2023-05-31T17:00:00Z
http://arxiv.org/abs/2305.20028v2
# A Study of Bayesian Neural Network Surrogates for Bayesian Optimization ###### Abstract Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query. These objectives are typically represented by Gaussian process (GP) surrogate models which are easy to optimize and support exact inference. While standard GP surrogates have been well-established in Bayesian optimization, Bayesian neural networks (BNNs) have recently become practical function approximators, with many benefits over standard GPs such as the ability to naturally handle non-stationarity and learn representations for high-dimensional data. In this paper, we study BNNs as alternatives to standard GP surrogates for optimization. We consider a variety of approximate inference procedures for finite-width BNNs, including high-quality Hamiltonian Monte Carlo, low-cost stochastic MCMC, and heuristics such as deep ensembles. We also consider infinite-width BNNs and partially stochastic models such as deep kernel learning. We evaluate this collection of surrogate models on diverse problems with varying dimensionality, number of objectives, non-stationarity, and discrete and continuous inputs. We find: (i) the ranking of methods is highly problem dependent, suggesting the need for tailored inductive biases; (ii) HMC is the most successful approximate inference procedure for fully stochastic BNNs; (iii) full stochasticity may be unnecessary as deep kernel learning is relatively competitive; (iv) infinite-width BNNs are particularly promising, especially in high dimensions. ## 1 Introduction _Bayesian optimization_ [31] is a distinctly compelling success story of Bayesian inference. In Bayesian optimization, we place a prior over the objective we wish to optimize, and use a _surrogate model_ to infer a posterior predictive distribution over the values of the objective at all feasible points in space. We then combine this predictive distribution with an _acquisition function_ that trades off exploration (moving to regions of high uncertainty) and exploitation (moving to regions with a high expected value, for maximization). The resulting approach converges quickly to a global optimum, with strong performance in many expensive black-box settings ranging from experimental design, to learning parameters for simulators, to hyperparameter tuning [7, 10]. While many acquisition functions have been proposed for Bayesian optimization [e.g. 8, 44], Gaussian processes (gps) with standard Matern or RBF kernels are almost exclusively used by default as a surrogate model for the objective, without checking whether other alternatives would be more appropriate, despite the fundamental role that the surrogate model plays in Bayesian optimization. Thus, despite promising advances in Bayesian optimization research, there is an elephant in the room: _should we be considering other surrogate models?_ It has become particularly timely to evaluate Bayesian neural network (bnn) surrogates as alternatives to Gaussian processes with standard kernels: in recent years, there has been extraordinary progress in making bnns practical [e.g. 4, 17, 34, 42, 47]. Moreover, bnns can flexibly represent the non-stationary behavior typical of optimization objectives, discover similarity measures as part of representation learning, which is useful for higher dimensional inputs, and naturally handle multi-output objectives.
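To make this loop concrete, the sketch below pairs a gp surrogate (with a Matern-5/2 kernel and fixed, assumed hyperparameters) with analytic Expected Improvement over a random candidate set. It is a minimal toy illustration of the procedure described above, not the tuned implementations evaluated in this paper:

```python
import numpy as np
from scipy.stats import norm

def matern52(A, B, ls=0.2):
    # Matern-5/2 kernel with unit signal variance; `ls` is an assumed length-scale.
    r = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)) / ls
    return (1.0 + np.sqrt(5) * r + 5.0 * r ** 2 / 3.0) * np.exp(-np.sqrt(5) * r)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Exact GP posterior mean and standard deviation at candidate points Xs.
    K = matern52(X, X) + noise * np.eye(len(X))
    Ks = matern52(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.einsum("ij,ij->j", Ks, sol), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # Analytic EI for maximization under a Gaussian predictive distribution.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        cand = rng.uniform(lo, hi, size=(1024, len(lo)))   # random candidates
        mu, sigma = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X, y = np.vstack([X, x_next]), np.append(y, objective(x_next))
    return X[np.argmax(y)], y.max()
```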
In parallel, new _Monte-Carlo_ acquisition functions [1] have been developed which only require posterior samples, significantly lowering the barrier to using non-gp surrogates that do not provide closed-form predictive distributions. In this paper, we exhaustively evaluate Bayesian neural networks as surrogate models for Bayesian optimization. We consider conventional fully stochastic multilayer bnns with a variety of approximate inference procedures, ranging from high-quality full-batch Hamiltonian Monte Carlo [15, 27, 28], to stochastic gradient Markov Chain Monte Carlo [3], to heuristics such as deep ensembles [21]. We also consider infinite-width bnns [22, 29], corresponding to gps with fixed non-stationary kernels derived from a neural network architecture, as well as partially Bayesian last-layer deep kernel learning methods [48]. This particularly wide range of neural network-based surrogates allows us to evaluate the role of representation learning, non-stationarity, and stochasticity in modeling Bayesian optimization objectives. Moreover, given that so much is unknown about the role of the surrogate model, we believe it is particularly valuable not to have a "horse in the race", such as a special bnn model specifically designed for Bayesian optimization, in order to conduct an unbiased scientific study where any outcome is highly informative. We also extensively study a variety of synthetic and real-world objectives--with a wide range of input space dimensionalities, single- and multi-dimensional output spaces, both discrete and continuous inputs, and non-stationarity. Our study provides several key findings: (1) while stochasticity is often prized in Bayesian optimization [10, 35], fully stochastic bnns do not consistently dominate deep kernel learning, which is not stochastic about network parameters, due to the small data sizes in Bayesian optimization; (2) of the fully stochastic bnns, hmc generally works the best for Bayesian optimization, and deep ensembles work surprisingly poorly, given their success in other settings; (3) on standard benchmarks, standard gps are relatively competitive, due to their strong priors and simple exact inference procedures; (4) there is no single method that dominates across most problems, demonstrating that there is significant variability across Bayesian optimization objectives, where tailoring the surrogate to the objective has particular value; (5) infinite-width bnns are surprisingly effective at high-dimensional optimization, even relative to dkl and stochastic multilayer bnns. These results suggest that the non-Euclidean similarity metrics constructed from neural networks are valuable for high-dimensional Bayesian optimization, but representation learning (provided by dkl and finite-width bnns) is not as valuable as a strong prior derived from a neural network architecture (provided by the infinite-width bnn). This study also serves as an evaluation framework for considering alternative surrogate models for Bayesian optimization. Our code is available at [https://github.com/yucenli/bnn-bo](https://github.com/yucenli/bnn-bo). ## 2 Related Work There is a large body of literature on improving the performance of Bayesian optimization. However, an overwhelming majority of this research only considers Gaussian process surrogate models, focusing on developing new acquisition functions [e.g.
8, 44], additive covariance functions [9, 16], the inclusion of gradient information [49], multi-objectives [41], trust region methods that use input partitioning for higher dimensional and non-stationary data [6], and covariance functions for discrete inputs and strings [26]. For a comprehensive review, see Garnett [10]. There has been some prior work focusing on other types of surrogate models for Bayesian optimization, such as random forests [13] and tree-structured Parzen estimators [2]. Moreover, Snoek et al. [36] apply a Bayesian linear regression model to the last layer of a deterministic neural network for computational scaling, which can be helpful for the added number of objective queries associated with higher dimensional inputs. Deep kernel learning [48], which transforms the inputs of a Gaussian process kernel with a deterministic neural network, is sometimes also used with Bayesian optimization, especially in specialized applications like protein engineering [40]. Additionally, the linearized-Laplace approximation to produce a linear model from a neural network has recently been applied to Bayesian optimization in concurrent work [19]. Despite the extraordinary recent practical advances in developing Bayesian neural networks for many tasks [e.g. 47], and recent Monte-Carlo acquisition functions which make it easier to use surrogates like bnns that do not provide closed-form predictive distributions [1], there is a vanishingly small body of work that considers bnns as surrogates for Bayesian optimization. This is surprising, since we would indeed expect bnns to have properties naturally aligned with Bayesian optimization, such as the ability to learn non-stationary functions without explicit modeling interventions and gracefully handle high-dimensional input and output spaces. Possibly the first attempt to use a Bayesian neural network surrogate for Bayesian optimization [38] came before most of these advances in bnn research, and used a form of stochastic gradient Hamiltonian Monte Carlo (sghmc) [3] for inference. Like Snoek et al. [36], the focus was largely on scalability advantages over Gaussian processes; however, the reported gains were marginal, and puzzling in that they were largest for a _small_ number of objective function queries (where the neural net would not be able to learn a rich representation, and scalability would not be required). Kim et al. [18] used the same method for bnns with Bayesian optimization, also with sghmc, targeted at scientific problems with known structures and high dimensionality. In these applications, bnns leverage auxiliary information, domain knowledge, and intermediate data, which would not typically be available in many Bayesian optimization problems. Our paper provides several key contributions in the context of this prior work, where standard gp surrogates are nearly always used with Bayesian optimization. While finite-width bnn surrogates have been attempted, they have been largely limited to sghmc inference, and are often applied in specialized settings without an effort to understand their properties. Little is known about whether bnns could generally be used as an alternative to gps for Bayesian optimization, especially in light of more recent general advances in bnn research. This is the first paper to provide a comprehensive study of bnn surrogates, considering a range of model types and experimental settings.
We test the utility of bnns in a variety of contexts, exploring their behavior as we change the dimensionality of the problem and the number of objectives, investigating their performance on non-stationary functions, and also incorporating problems with a mix of discrete and continuous input parameters. Moreover, we are the first to study infinite-width bnn models in Bayesian optimization, and to consider the role of stochasticity and representation learning in neural network based Bayesian optimization surrogates. Finally, rather than champion a specific approach, we provide an objective assessment, also highlighting the benefits of gp surrogates for general Bayesian optimization problems. ## 3 Surrogate Models We consider a wide variety of surrogate models, helping to separately understand the role of stochasticity, representation learning, and strong priors in Bayesian optimization surrogates. We provide additional information about these surrogates, as well as background about Bayesian optimization, in Appendix A. Gaussian Processes. Throughout our experiments, when we refer to Gaussian processes, we always mean _standard_ Gaussian processes, with the Matern-5/2 kernel that is typically used in Bayesian optimization [35]. These Gaussian processes have the advantage of simple exact inference procedures, strong priors, and only a few hyperparameters, such as the length-scale, which controls the rate of variability. On the other hand, these models are _stationary_, meaning the covariance function is translation invariant and models the objective as having similar properties (such as rate of variation) at different points in input space. They also provide a similarity metric for data points based on simple Euclidean distance of inputs, which is often not suitable for higher dimensional input spaces. Fully Stochastic Finite-Width Bayesian Neural Networks. These models treat all of the parameters of the neural network as random variables, which necessitates approximate inference. We consider three different mechanisms of approximate inference: (1) Hamiltonian Monte Carlo (hmc), an MCMC procedure which is the computationally expensive gold standard [15, 27, 28]; (2) Stochastic Gradient Hamiltonian Monte Carlo (sghmc), a scalable MCMC approach that works with mini-batches [3]; (3) Deep Ensembles, a practically effective heuristic that combines multiple independently retrained copies of a neural network and has been shown to approximate fully Bayesian inference [15, 21]. These approaches are fully stochastic, and can do _representation learning_, meaning that they can learn appropriate distance metrics for the data as well as particular types of non-stationarity. Deep Kernel Learning. Deep kernel learning (dkl) [48] is a hybrid Bayesian deep learning model, which layers a GP on top of a neural network feature extractor. This approach can do non-Euclidean representation learning, handle non-stationarity, and also uses exact inference. However, it is only stochastic about the last layer. Figure 1: **The design of the bnn has a significant impact on the uncertainty estimates. We visualize the uncertainty estimates and function draws produced by full-batch hmc on a simple toy objective function with four function queries (denoted in black). For the visualizations above, we fix all other design choices with the following base parameters: likelihood variance \(=1\), prior variance \(=1\), number of hidden layers \(=3\), and width \(=128\). 
We see that varying the different aspects of the model leads to significantly different posterior predictive distributions.** Linearized Laplace Approximation. The linearized-Laplace approximation (lla) is a deterministic approximate inference method that uses the Laplace approximation [24, 14] to produce a linear model from a neural network, and has recently been considered for Bayesian optimization in concurrent work [19]. Infinite-Width Bayesian Neural Networks. Infinite-width neural networks (i-bnn) refer to the behavior of neural networks as the number of nodes per hidden layer increases to infinity. Neal [27] famously showed with a central limit theorem argument that a bnn with a single infinite-width hidden layer converges to a gp with a neural network covariance function, and this result has been extended to deep neural networks by Lee et al. [22]. i-bnns are fully stochastic and very different from standard gps, as they can handle non-stationarity and provide a non-Euclidean notion of distance inspired by a neural network. However, they cannot do representation learning and instead have a fixed covariance function that provides a relatively strong prior. ### Role of Architecture We conduct a sensitivity study into the role of architecture and other key design choices for bnn surrogate models. We highlight results for hmc inference, as it is the gold standard for approximate inference in bnns [15]. Gaussian processes involve relatively few design choices -- essentially only the covariance function, which is often simply set to the RBF or Matern kernel. Additionally, we are able to have an intuitive understanding of what the induced distributions over functions look like for the different choices of covariance functions. In contrast, with bnns, we must consider the architecture, the prior over parameters, and the approximate inference procedure. It is also less clear how different modeling choices in bnns affect the inferred posterior predictive distributions. To illustrate these differences for varying network, prior, and variance parameters, we plot the inferred posterior predictive distributions over functions for different network widths and depths, the activation functions, and likelihood and variance parameters in Figure 1, and we evaluate the performance under different model choices for three synthetic data problems in Figure 2. We focus on fully-connected multi-layer perceptrons for this study: while certain architectures have powerful inductive biases for computer vision and language tasks, generic regression tasks such as Bayesian optimization tend to be well-suited for fully-connected multi-layer perceptrons, which have relatively mild inductive biases and make loose assumptions about the structure of the function we are modeling. Model Hyperparameters. We consider isotropic priors over the neural network parameters with zero mean and variance parameters 0.1, 1, and 10. Similarly, we consider Gaussian likelihood functions with variance parameters 0.1, 1, and 10. The corresponding posterior predictive distributions for full-batch hmc are shown in Figure 1(a). As would be expected, an increase in the likelihood variance results in a poor fit of the data and virtually no posterior collapse. In contrast, increasing the prior variance results in a higher predictive variance between data points with a good fit to the data points, whereas a prior variance that is too small leads to over-regularization and uncertainty collapse.
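Concretely, these two variance hyperparameters enter the unnormalized log-posterior that hmc targets. A minimal sketch, assuming a Gaussian likelihood, an isotropic zero-mean Gaussian prior over all weights, and a hypothetical `forward` function for the network:

```python
import numpy as np

def log_posterior(theta, forward, X, y, lik_var=1.0, prior_var=1.0):
    # forward(theta, X) -> predictions; a hypothetical MLP forward pass.
    resid = y - forward(theta, X)
    log_lik = -0.5 * np.sum(resid ** 2) / lik_var      # Gaussian likelihood
    log_prior = -0.5 * np.sum(theta ** 2) / prior_var  # isotropic Gaussian prior
    return log_lik + log_prior                         # up to an additive constant
```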
As shown in Figure 2(a) and Figure 2(c), lower likelihood variance parameters and larger prior variance parameters tended to perform best across three synthetic data experiments. Network Width and Depth. To better understand the effects of the network size on inference, we explore the differences in performance when varying the number of hidden layers and the number of parameters per layer, each corresponding to an increase in model complexity. In Figure 1(c), we can see that there is a significant increase in uncertainty as we increase the number of hidden layers from two to three, and a bnn with four hidden layers has even larger uncertainty with higher variations in the function draws. Figure 1(d) also shows an increase in uncertainty as we increase the width of the network, where a smaller width leads to function draws that are much flatter than function draws from a larger width. However, the best size to choose seems to be problem-dependent, as shown in Figure 2(b) and Figure 2(d). Activation Function. The choice of activation function in a neural network determines important characteristics of the function class, such as smoothness or periodicity. The impact of the activation function can be seen in Appendix D, with function draws from the ReLU bnn appearing more jagged and function draws from the tanh bnn more closely resembling the draws from a gp with a Squared Exponential or Matern 5/2 covariance function. ## 4 Empirical Evaluation We provide an extensive empirical evaluation of bnn surrogates for Bayesian optimization. We first assess how bnns compare to gps in relatively simple and well-understood settings through commonly used synthetic objective functions, and we perform an empirical comparison between gps and different types of bnns (hmc, sghmc, lla, ensemble, i-bnn, and dkl). To further ascertain whether bnns may be a suitable alternative to gps in real-world Bayesian optimization problems, we study six real-world datasets used in prior work on Bayesian optimization with gp surrogates [5, 6, 25, 30, 43]. We also provide evidence that the performance of bnns could be further improved with a careful selection of network hyperparameters. We conclude our evaluation with a case study of Bayesian optimization tasks where simple Gaussian process models may fail but bnn models would be expected to prevail. To this end, we design a set of experiments to assess the performance of gps and bnns as a function of the input dimensionality and in settings where the objective function is non-stationary. Figure 2: **There is no single architecture for hmc that performs the best across all problems. We compare the impact of the design on the Bayesian optimization performance for different benchmark problems. For each set of experiments, we fix all other aspects of the design and plot the values of the maximum reward found using hmc after 100 function evaluations over 10 trials.**
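The benchmarks below rely on Monte-Carlo acquisition functions, which require only samples from the surrogate posterior rather than a closed-form predictive distribution. A minimal sketch of sample-based Expected Improvement under that assumption:

```python
import numpy as np

def mc_expected_improvement(f_samples, y_best):
    """Monte-Carlo EI at each candidate point (maximization).

    f_samples: (S, M) array of S posterior function draws at M candidates,
               e.g. one draw per retained hmc sample of the network weights.
    y_best:    best objective value observed so far.
    """
    return np.maximum(f_samples - y_best, 0.0).mean(axis=0)

# The next query point maximizes the estimate:
# x_next = candidates[np.argmax(mc_expected_improvement(f_samples, y_best))]
```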
As shown in Figure 3, we find bnn surrogate models to show promising results; however, the specific performance of different bnn varies considerably per problem. dkl matches gps in Branin and BraninCurrin, but seems to perform poorly on highly non-stationary problems such as Ackley. i-bnn also seem to slightly underperform compared to gps on these synthetic problems, many of which have a small number of input dimensions. In contrast, we find finite-width bnn using full hmc to be comparable to gps, performing very similarly in many of the experiments, slightly underperforming in Hartmann and DTLZ5, and outperforming standard gps in the 10-dimensional Ackley experiment. However, this behavior is not generalizable to all approximate inference methods: the performance of sghmc and lla vary significantly per problem, matching the performance of hmc and gps in some experiments while failing to approach the maximum value in others. Deep ensembles also consistently underperform the other surrogate models, plateauing at noticeably lower objective values on multi-objective problems like BraninCurrin and DTLZ1. This result is surprising, since ensembles are often seen to as an effective way to measure uncertainty (Appendix D). ### Real-World Benchmarks To provide an evaluation of bnn surrogates in more realistic optimization problems, we consider a diverse selection of real-world applications which span a variety of domains, such as Figure 3: **bnns are often comparable to gps on standard synthetic benchmarks.** However, the type of bnn used has a big impact: hmc is similar to gps and even outperforms them on Ackley, while sghmc and deep ensembles seem to have less reliable performance and are often unable to effectively find the maximum. i-bnns also seem to struggle on these low-dimensional problems. For each benchmark function, we include \(d\) for the number of input dimensions, and \(o\) for the number of objectives. We plot the mean and one standard error of the mean over 5 trials. solving differential equations and monitoring cellular network coverage [5, 6, 25, 30, 43]. Many of these problems, such as the development of materials to clean oil spills, have consequential applications; however, these objectives are often multi-modal and are difficult to optimize globally. Additionally, unlike the synthetic benchmarks, many real-world applications consist of input data with ordinal or categorical values, which may be difficult for gps to handle. Several of the problems also require multiple objectives to be optimized. Detailed problem descriptions are provided in Appendix B. We share the results of our experiments in Figure 4, and details about the experiment setup can be found in Appendix C. The results are mixed: bnns are able to significantly outperform gps in the Pest Control dataset, while gps find the maximum reward in the Cell Coverage and Lunar Lander experiments. The Pest Control, Cell Coverage, and Oil Spill Sorbent experiments all include discrete input parameters, and there seems to be a slight trend of gp and i-bnns performing well, and sghmc and ensembles performing more poorly. Similar to the findings from the synthetic benchmarks, we see that the different approximate inference methods for finite-width bnns lead to significantly different Bayesian optimization performance, with hmc generally finding higher rewards compared to sghmc, lla, and deep ensembles. 
Additionally, it appears that gps perform well in the two multi-objective problems, although that may not be generalizable to additional multi-objective problems and may be more related to the curvature of the specific problem space. Figure 4: **Real world benchmarks show mixed results. bnns outperform gps on some problems and underperform on others, and there does not seem to be a noticeable preference for any particular surrogate as we increase the number of input dimensions. Additionally, there does not appear to be a clear separation between the top row of experiments, which optimize over continuous parameters, and the bottom row of experiments, which also include some discrete inputs. For each benchmark, we include \(d\) for the number of input dimensions, and \(o\) for the number of objectives. We plot the mean and one standard error of the mean over 5 trials.** ### Neural Architecture Search To better understand the impact of architecture on the performance of bnns, we conduct an extensive neural architecture search over a selection of the benchmark problems, varying the width, depth, prior variance, likelihood variance, and activation function. For our experiments, we use SMAC3 [23], a framework which uses Bayesian optimization to select the best hyperparameters, and we detail the experiment setup in Appendix C. While a thorough search over architectures is often impractical for realistic settings of Bayesian optimization since it requires a very large number of function evaluations, we use this experiment to demonstrate the flexibility of bnns and to showcase their potential when the design is well-suited for the problem. We show the effect of neural architecture search on hmc surrogate models in Figure 5. On the Cell Coverage problem, the architecture search did not drastically change the performance of hmc. In contrast, extensively optimizing the hyperparameters made a significant difference on the Pest Control problem, leading to hmc finding higher rewards than gps while using fewer function evaluations; however, on this problem i-bnn, which does not require specifying an architecture, still performs best. Neural architecture search was also able to improve the results on dtlz5, leading hmc to be competitive with other surrogate models such as i-bnn and dkl. The difference in the benefits of the search may be attributed to some problems having less inherent structure than others, where extensive hyperparameter optimization may not be as necessary. Additionally, our original hmc surrogate model may already have been a suitable choice for some problems, so an extensive search over architectures may not significantly improve the performance. ### Limitations of GP Surrogate Models Although popular, gps suffer from well-known limitations that directly impact their usefulness as surrogate models. To contrast bnn and gp surrogates, we explore two failure modes of gps and demonstrate that the increased flexibility provided by bnn surrogate models can overcome these issues and improve performance in Bayesian optimization. Figure 5: **The impact of neural architecture search on hmc is problem-dependent. The dashed green line indicates the performance of hmc after an extensive neural architecture search, compared to the solid green line representing the hmc model selected from a much smaller pool of hyperparameters. We see that it has minimal impact on Cell Coverage (left), moderate impact on dtlz5 (center), and extensive impact on Pest Control (right), even outperforming gps. For each benchmark, we include \(d\) for the number of input dimensions, and \(o\) for the number of objectives. We plot the mean and one standard error of the mean over 5 trials.** Non-Stationary Objective Functions. To use gp surrogates, we must specify a kernel function class that governs the covariance structure over data points and learn the kernel hyperparameters. We typically constrain model selection to models with kernel of the form \(k(\mathbf{x},\mathbf{x}^{\prime})=k(\|\mathbf{x}-\mathbf{x}^{\prime}\|)\). This constraint makes it easier to describe the functional form and learn the hyperparameters of the kernel.
However, because the covariance between two values only depends on their distance and not on the values themselves, this setup assumes the function is stationary and has similar mean and smoothness throughout the input space. Unfortunately, this assumption of stationarity does not hold true in many real-world settings. For example, in the common Bayesian optimization application of choosing hyperparameters of a neural network, the true loss function landscape may have vastly different behavior in one part of the input space compared to another. bnn surrogates, in contrast to gp surrogates, are able to model non-stationary functions without similar constraints. In Appendix D, we show the performance of Bayesian optimization with bnn and gp surrogate models for a non-stationary objective function. Because the gp assumes that the behavior of the function is the same throughout the input domain, it cannot accurately model the input-dependent variation of the function and leads to underfitting around the true optimum. In contrast, bnn surrogates can learn the non-stationarity of the function. High-Dimensional Input Spaces. Due to the curse of dimensionality, gps do not scale well to high-dimensional input spaces without careful human intervention. Common covariance functions may fail to faithfully represent high-dimensional input data, making the design of custom-tailored kernel functions necessary. In contrast, neural networks are well-suited for modeling high-dimensional input data [20]. To measure the effect of dimensionality on the performance of gps and bnns, we first benchmark the ability of the surrogate models to maximize synthetic test functions provided by high-dimensional polynomial functions and function draws from neural networks. To better understand the behaviors of different surrogate models in real-world settings, we also construct a high-dimensional problem by using Bayesian optimization to set the parameters of a neural network in the context of knowledge distillation. Knowledge distillation refers to the act of "distilling" information from a larger teacher model to a smaller student model by matching model outputs [11], and it is known to be a difficult optimization problem [39]. For full descriptions of the high-dimensional problems, see Appendix B. We share the results of our findings in Figure 6 and Appendix D. There is a clear trend that i-bnns perform very well in these high-dimensional settings. The i-bnn has several conceptual advantages in this setting: (1) it provides a non-Euclidean and non-stationary similarity metric, which can be particularly valuable in high dimensions; (2) it does not have any hyperparameters for learning, and thus is not "data hungry" -- especially important in high dimensional problems with small data sizes, which provide relatively little information for representation learning.
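To make the i-bnn surrogate concrete, the sketch below implements the standard covariance recursion for an infinite-width fully-connected ReLU network; the weight and bias variance constants are illustrative assumptions, and the paper's exact parameterization may differ:

```python
import numpy as np

def ibnn_relu_kernel(X1, X2, depth=3, w_var=2.0, b_var=1.0):
    # Covariance between rows of X1 and X2 under an infinite-width ReLU bnn.
    d = X1.shape[1]
    k12 = b_var + w_var * (X1 @ X2.T) / d          # input-layer cross-covariance
    k11 = b_var + w_var * (X1 * X1).sum(1) / d     # diagonal terms for X1
    k22 = b_var + w_var * (X2 * X2).sum(1) / d     # diagonal terms for X2
    for _ in range(depth):
        norm = np.sqrt(np.outer(k11, k22))
        theta = np.arccos(np.clip(k12 / norm, -1.0, 1.0))
        # Arc-cosine (ReLU) composition: E[relu(u) relu(v)] for Gaussian (u, v).
        k12 = b_var + w_var / (2 * np.pi) * norm * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        k11 = b_var + w_var * k11 / 2              # E[relu(u)^2] = k11 / 2
        k22 = b_var + w_var * k22 / 2
    return k12
```

Unlike a stationary Matern or RBF kernel, this covariance depends on the inputs themselves and not only on their distance, which is one intuition for its strong performance on non-stationary, high-dimensional objectives.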
Additionally, we find that other bnn surrogate models also outperform gps across the high-dimensional problems, providing a compelling motivation for bnns as surrogate models for Bayesian optimization. Figure 6: **i-bnns outperform other surrogates in many high-dimensional settings**. We show the results of maximizing a polynomial function (left), maximizing a fixed function draw from a neural network (center), and optimizing the parameters of a neural network in the context of knowledge distillation (right). All of these objectives are high-dimensional and non-stationary, and we find that bnns consistently find higher rewards than gps across all problems. We plot the mean and one standard error of the mean over 5 trials, and \(d\) corresponds to the number of input dimensions. ## 5 Discussion While Bayesian optimization research has made significant progress over the last few decades [10], the surrogate model is a crucial and highly underexplored design choice. Although standard gp models are the default surrogate, it is not because they have been shown to be superior to alternatives -- we simply have had almost no evidence about how alternatives would perform. It is particularly timely to consider neural network surrogates, given significant recent advances in Bayesian neural networks and related approaches. The setting of Bayesian optimization is also quite different from where bnns are typically applied, involving particularly small datasets and online learning. The fact that dkl is competitive with bnns calls into question the importance of fully stochastic surrogates -- a surprising finding given the small datasets common to Bayesian optimization, where overfitting would be a more significant concern. Moreover, infinite-width bnns show promising performance in general, but especially for higher dimensional settings. Given that they also do not involve learning many hyperparameters, and do not require approximate inference, it is possible they could become a de facto standard surrogate for Bayesian optimization. We also show that the Bayesian optimization objectives are sufficiently different such that one method does not generally dominate. This finding supports the use of simple models with strong but generic assumptions, such as standard gp models, which indeed provide relatively competitive performance. On the other hand, perhaps it is self-fulfilling that standard gps would be competitive on standard benchmarks, since most problems were designed with very few alternative models in mind. It may be time for the community to consider new benchmarks that evolve with the advances in our choice of surrogate models. ### Acknowledgements We thank Greg Benton for helpful guidance in the beginning stages of this research, and Sanyam Kapoor for discussions. This work is supported by NSF CAREER IIS-2145492, NSF I-DISRE 193471, NIH R01DA048764-01A1, NSF IIS-1910266, NSF 1922658 NRT-HDR, Meta Core Data Science, Google AI Research, BigHat Biosciences, Capital One, and an Amazon Research Award.
2305.19468
Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
This paper presents an efficient hardware implementation of the recently proposed Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients and has the combined adaptation of weights and thresholds in an efficient hierarchical structure. This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware. The implementation consists of a multi-layer Spiking Neural Network (SNN) and individual training modules for each layer that enable online self-learning without using back-propagation. By using simple local adaptive selection thresholds, a Winner-Takes-All (WTA) constraint on each layer, and a modified weight update rule that is more amenable to hardware, the trainer module allocates neuronal resources optimally at each layer without having to pass high-precision error measurements across layers. All elements in the system, including the training module, interact using event-based binary spikes. The hardware-optimized implementation is shown to preserve the performance of the original algorithm across multiple spatial-temporal classification problems with significantly reduced hardware requirements.
Ali Mehrabi, Yeshwanth Bethi, André van Schaik, Andrew Wabnitz, Saeed Afshar
2023-05-31T00:34:15Z
http://arxiv.org/abs/2305.19468v1
# Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA ###### Abstract This paper presents an efficient hardware implementation of the recently proposed Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients and has the combined adaptation of weights and thresholds in an efficient hierarchical structure. This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware. The implementation consists of a multi-layer Spiking Neural Network (SNN) and individual training modules for each layer that enable online self-learning without using back-propagation. By using simple local adaptive selection thresholds, a Winner-Takes-All (WTA) constraint on each layer, and a modified weight update rule that is more amenable to hardware, the trainer module allocates neuronal resources optimally at each layer without having to pass high-precision error measurements across layers. All elements in the system, including the training module, interact using event-based binary spikes. The hardware-optimized implementation is shown to preserve the performance of the original algorithm across multiple spatial-temporal classification problems with significantly reduced hardware requirements. Spiking Neural Networks, Supervised Learning, Neuromorphic Hardware. ## 1 Introduction Artificial Neural Networks (ANNs) and multi-layer perceptrons were developed as highly simplified models of biological neural computation through the use of distributed interconnected computing nodes, or neurons, which operate as a network, in contrast to the sequential architecture of conventional modern processors [1, 2]. Deep ANNs have been developed, widely used, and optimized in the past two decades, resulting in significant advances in many scientific fields. As universal function approximators, ANNs can be applied to complex problems such as pattern recognition, classification, time series analysis, and speech recognition using the backpropagation algorithm [3] for training. During the same period, there has been significant investigation and exploration of artificial Spiking Neural Networks (SNNs), which are closer models of biological neural networks because they incorporate the spiking behavior of neurons observed in biological nervous systems [4]. The investigation of SNNs is often motivated by the idea that the spiking behavior of biological nervous systems is functionally essential and provides computational and efficiency benefits [5, 6, 7, 8]. In contrast to ANNs, neurons in SNNs use precisely timed binary-valued pulse streams, or spikes, to transfer information. SNNs can perform sparse computations due to the inherent sparsity in their data. The ability to operate in an event-driven fashion, rather than the traditional synchronous clock-driven computational approach in ANNs, makes SNNs suitable for processing continuous-time spatio-temporal data. However, training SNNs is still an open research question, and a universal training algorithm akin to error backpropagation for ANNs is yet to be found. The spiking outputs generated by spiking neurons can be modeled as a train of Dirac delta functions, which do not have a derivative. The hard thresholding operation that is one of the key functional elements of spiking neuron models is also not differentiable.
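As a minimal discrete-time illustration (all constants below are arbitrary assumptions), spike generation in a leaky integrate-and-fire neuron is a Heaviside step applied to the membrane potential:

```python
import numpy as np

def lif_step(v, input_current, dt=1e-3, tau=2e-2, v_thresh=1.0):
    # One Euler step of a leaky integrate-and-fire neuron.
    v = v + dt / tau * (-v + input_current)    # leaky integration
    # The spike is a hard threshold (Heaviside step): its derivative is zero
    # almost everywhere and undefined at threshold, so gradients cannot flow
    # through it directly.
    spike = (v >= v_thresh).astype(float)
    v = v * (1.0 - spike)                      # reset the membrane after a spike
    return v, spike
```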
This non-differentiability of computations poses a fundamental challenge in assigning credit to earlier nodes in a network of spiking neurons to optimize synaptic weights. SpikeProp [9], Tempotron [10], Chronotron [11], ReSuMe [12], and DL-ReSuMe [13] are some early methods introduced to apply gradient descent to train single-layered SNN models using various loss functions. More recent works have focused on approximating error backpropagation in SNN architectures, for example by using surrogate gradients for the different non-differentiable computations in an SNN [14, 15, 16, 17, 18]. All the existing approximations of error backpropagation in SNNs batch data to accumulate gradients. They also require a symmetric backward data pathway to transfer continuous-valued gradients from the output layer through to the input layer to update neuronal weights in hidden layers. Some even rely on non-causal operations like Backpropagation Through Time (BPTT) to update the synaptic weights. However, such non-local and non-causal operations in learning are not biologically plausible, and evidence of symmetric backward pathways in biological nervous systems is unlikely to be found. Despite the lack of bio-plausibility, error backpropagation methods have become popular tools to train SNNs for specific tasks. The error backpropagation methods are often computationally expensive, requiring energy-intensive GPUs to train them offline. Feedback alignment [19] is one of the few alternatives to error backpropagation for SNNs, and it also requires passing continuous-valued errors to each neuron. Local learning rules that do not require access to the weights of other neurons and communication of continuous-valued error gradients have been desirable for training SNNs. Variations of Spike-Timing-Dependent Plasticity (STDP) rules were applied to perform unsupervised feature extraction to classify spatio-temporal patterns [20, 21, 22, 23, 24]. Mozafari et al. [25] used reward-modulated STDP to perform object recognition. Paredes-Valles et al. [26] used STDP rules to perform optical-flow estimation. Local learning rules close to STDP, like Supervised Hebbian Learning [27] and ReSuMe [28], were also developed to perform supervised learning. However, multilayer versions [29, 30] rely on backpropagating continuous-valued feedback across hidden layers. In addition to training concerns, von Neumann computer architectures are not well suited for SNN implementations due to the massive parallelism inherent in an SNN, where a large number of neurons must be processed simultaneously. While graphics processing units (GPUs) can implement parallelism to some extent, the kernel-launch programming paradigm makes them unsuitable for these applications. On the other hand, Field Programmable Gate Arrays (FPGAs) provide flexibility in designing parallel processing and re-configurable hardware architectures. In many applications, SNNs can provide significant efficiency in power consumption due to the sparsity of inter-neuronal communication using binary-valued spikes. Significant research has been done on implementing SNNs on FPGAs and Application-Specific Integrated Circuits (ASICs). Munoz et al. [31] implemented an SNN using the Spike Response Model (SRM) and temporal coding on a Xilinx SPARTAN 3 FPGA to detect simple patterns. Wang et al. [32] introduced a re-configurable, polychronous SNN with Spike Timing Dependent Delay Plasticity to fine-tune and add dynamics to the network.
Time multiplexing was used to fit 4096 neurons and up to 1.15 million programmable delay axons on a VIRTEX 6 FPGA. Currently, most hardware SNN systems involve a preconfigured network that is implemented on an FPGA device to accelerate a specific task, leveraging the parallel processing capabilities of FPGAs. In other words, the parameters of the SNN, i.e., the weights and thresholds of the neurons, are calculated using a simulator and are fixed in the hardware implementation. Bethi et al. [33] introduced an Optimized Deep Event-driven SNN Architecture (ODESA) that can be trained end-to-end using STDP-like rules which do not require continuous-valued gradients to be back-propagated. The ODESA training algorithm solves the credit assignment problem in SNNs by using the activity of the next layer in a network as a layer's supervisory signal. The synaptic weight adjustment in each layer only depends on the layer's trace and not on the weights of the other layers in the network. The feedback between the layers is causal and performed via binary event signals. The network does not require a symmetric backward pathway to perform training. This paper presents an efficient hardware implementation of a new SNN architecture utilizing the ODESA algorithm [33]. Each layer has its own training hardware module with minimal communication links to other layers. ODESA is an event-driven algorithm and has very sparse activity due to the hard Winner-Takes-All (WTA) constraints on the layers. All the communication between the layers and the training modules is event-based and binary-valued. The ODESA architecture and its training algorithm provide an efficient, low-power, low-resource-consuming hardware implementation of SNNs that can be trained online and on-chip. The remainder of the paper is organized as follows: Section 2 reviews the background of the Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). Section 3 provides a detailed presentation of our heuristic SNN hardware implementation. Section 4 details the training hardware using the hardware-optimized ODESA algorithm, and Section 5 presents two ODESA hardware experiments and their results. Finally, Section 6 presents the conclusion and directions for future work. ## 2 Background As the adoption of neuromorphic vision sensors increases, various dense tensor representations for sparse asynchronous event data have been investigated for learning spatio-temporal features in the data [34, 35, 36]. A time surface, a term introduced by Lagorce et al. in [37], is the trace of recent spiking activity at a given synapse at any time \(t\). Event-based time surface representations have been used for extracting features in tasks like space object detection and tracking [38, 39], neuromorphic object recognition on UAVs [40], and processing data from SPAD sensors [41]. Afshar et al. [42] introduced an algorithm to extract features from event data using neuronal layers in an unsupervised manner, called Feature Extraction using Adaptive Selection Thresholds (FEAST). FEAST is a highly abstracted and computationally optimized model of the SKAN method [43, 44]. The FEAST method has been used and extended for a range of applications such as event-based object tracking [45], activity-driven adaptation in SNNs [46], and feature extraction to solve the isolated spoken digit recognition task [47, 48]. 
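Before moving on to the FEAST mechanics, it may help to see how cheap a time surface is to compute: it is one exponentially decaying value per input channel, refreshed by the channel's most recent spike. The following minimal Python sketch illustrates the idea; the function name, the decay constant, and the example values are illustrative assumptions, not taken from [37] or [42].

```python
import numpy as np

def time_surface(last_spike_times, t, tau):
    """Exponentially decaying trace of the most recent spike on each
    input channel, evaluated at time t (one value per channel)."""
    dt = t - np.asarray(last_spike_times, dtype=float)
    ts = np.exp(-dt / tau)
    ts[dt < 0] = 0.0  # channels that have not spiked yet contribute nothing
    return ts

# Example: four channels whose last spikes arrived at 1, 5, 8 and 12 ms
print(time_surface([1.0, 5.0, 8.0, 12.0], t=13.0, tau=10.0))
```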
In addition to weights representing the features learned by each FEAST neuron, each neuron has a threshold parameter that represents the size of the receptive field around the features represented by the weights. For every input event, the dot product of the time surface context and the synaptic weight vector of a neuron is calculated. The dot products of all neurons in a layer are compared to their respective thresholds. Only the neurons with dot products crossing their respective thresholds are eligible for selection. The neuron with the largest dot product among the eligible neurons is regarded as the winner for the given input event. If there is no winner, i.e., no neuron can cross its threshold, then the thresholds of all the neurons are reduced by a constant value. However, if a neuron becomes the winner, the weights of the neuron are updated with the current event context using an exponential moving average. The threshold of the winner neuron is also increased by a fixed value. FEAST is an online learning algorithm that clusters the incoming event contexts of all the input events into as many clusters as there are neurons in the FEAST layer. The neurons' thresholds represent the clusters' boundaries (see Section 2.2 in [42]). Since there is no information about the significance of an individual event, FEAST treats each received event with equal priority, which results in learning features representing the most commonly observed spatio-temporal patterns in the input data. However, this may not be ideal for tasks that depend on more infrequent task-specific features. The Optimized Deep Event-driven Spiking neural network Architecture (ODESA) [33] is a supervised training method that locally trains hierarchies of well-balanced Excitatory-Inhibitory (EI) networks on event-based data. ODESA is an extension and generalization of FEAST. The output classification layer in ODESA has \(m\cdot N_{c}\) neurons (\(m\in\mathbb{N}\)) for a classification task with \(N_{c}\) classes. The output layer is divided into \(N_{c}\) groups (with \(m\) neurons each), each responsible for one of the \(N_{c}\) classes. Each layer has a hard Winner-Takes-All (WTA) condition, which ensures only one neuron can fire in response to any input spike to a layer. Fig. 1: Multi-Layer Supervision in ODESA using Spike-Timing-Dependent Threshold Adaptation. The shaded vertical lines represent the binary Global Attention Signal generated for each output label spike. The dotted vertical lines represent the binary Local Attention Signals sent to each layer from its next layer. The up and down arrows represent the reward and punishment of the individual neurons. Case 1: The predicted output spike matches the label spike, and the corresponding output neuron is rewarded. Case 2: The corresponding output neuron for the correct class is punished as it failed to spike in the presence of input from Layer 2. Case 3: All neurons in Layer 2 are punished as they failed to spike for an input spike from Layer 1 in the presence of the Global Attention Signal. Case 4: The active neuron in Layer 2 is rewarded in the presence of the Global Attention Signal. Case 5: The neurons with trace above the preset threshold are rewarded and the other neurons are punished in the presence of the Local Attention Signal from Layer 2. Figure reproduced from [33]. The supervisory label spikes drive the threshold adaptation in an ODESA output layer for a given input spike stream. 
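To fix ideas before the supervision details, the event-driven loop that FEAST and the ODESA layers share — dot products against a time-surface context, per-neuron thresholds, winner selection, and moving-average updates — can be sketched in a few lines of Python. All names and constants below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def feast_step(ts, W, thresholds, eta=0.01, d_open=0.002, d_close=0.01):
    """One unsupervised FEAST-style update for a single input event.
    ts: time-surface context of the event; W: (n_neurons, n_synapses)
    weight matrix. Returns the winner index, or None if no neuron fired."""
    dots = W @ ts
    eligible = dots >= thresholds
    if not eligible.any():
        thresholds -= d_open              # no winner: open every receptive field
        return None
    winner = int(np.argmax(np.where(eligible, dots, -np.inf)))
    W[winner] += eta * (ts - W[winner])   # exponential moving average of contexts
    thresholds[winner] += d_close         # shrink the winner's receptive field
    return winner
```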
Since ODESA is event-driven, it is assumed that an input spike exists for every label spike. The labeled input spikes are treated with additional attention. For the labeled input spike, if there is a spike from any of the neurons in the correct class group, the winner neuron's weights are updated with the input spike's event context, and its threshold is also updated based on the dot product. Alternatively, in the absence of an output spike from the correct class group, the thresholds of all neurons in the group are lowered. This weight update and threshold increase in a neuron can be considered "rewarding a neuron" for its correct classification. Similarly, a decrease in the threshold of a neuron to make it more receptive can be considered "punishing a neuron" for not being active. The ODESA architecture can use multiple hidden layers with different time constants to learn hierarchical spatio-temporal features simultaneously at different timescales [33]. Each hidden layer goes through a similar threshold adaptation as the output layer based on the spiking activity of its next layer in the hierarchy. A binary attention signal is generated by each layer to its previous layer whenever a neuron in the layer is active. All the neurons which were recently active in the previous layer are rewarded, and the rest of the neurons are punished. These binary signals, called Local Attention Signals (LAS), help provide the necessary feedback required to train the hidden layers. This architecture is well suited for enabling online learning in hardware, as the communication between layers is through binary attention signals only, and there is no need to calculate loss functions and pass continuous-valued gradients across the layers during training. A Global Attention Signal (GAS) is generated when a label is assigned to an input spike. The GAS is accessible by all layers. Each layer also has access to the LAS generated by its next layer in the hierarchy. There is no LAS for the output layer. The output layer compares the generated spikes with the labels to reward or punish activated neurons. Fig. 1 depicts the multi-layer supervision of the ODESA architecture. The condensed ODESA algorithm is depicted in the flowcharts of Fig. 2 and Fig. 3. When a LAS is active, the training algorithm determines the participation of a neuron in generating a spike in the next layer based on its eligibility trace. If the trace of a neuron is above a certain limit (generally set to \(10\%\) of its full scale), it is rewarded. The neurons with traces lower than the limit are punished. ## 3 ODESA hardware implementation ### _Primitive building blocks of the ODESA network_ In this Section, we introduce the primitive building blocks of the ODESA network. The primitive building blocks are reusable in different ODESA network architectures. #### 3.1.1 Synchronizer The Synchronizer is used to synchronize the asynchronous input events (spikes) with the system clock. The input spikes to the ODESA network are not necessarily synchronous with the system clock and could otherwise be missed. Fig. 4 shows the design of a Synchronizer module. If an event happens at the input of the Synchronizer, the output will be asserted at the rising edge of the next clock. The Synchronizer will not respond to new events until it is reset by the downstream logic through its 'i_rst_n' input signal. This lets the system control the acceptance or rejection of events. 
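As a rough behavioral model (not RTL), the Synchronizer's latch-and-hold handshake can be sketched as follows; the class and method names are illustrative assumptions:

```python
class Synchronizer:
    """Behavioral sketch: an asynchronous spike is latched and asserted
    on the next clock edge, then the module ignores further spikes until
    the downstream logic releases it through the reset handshake."""
    def __init__(self):
        self.pending = False   # spike captured, waiting for a clock edge
        self.busy = False      # output asserted once; waiting for 'i_rst_n'

    def event(self):           # asynchronous input spike
        if not self.busy:
            self.pending = True

    def clock_edge(self):      # returns the synchronized spike (0 or 1)
        if self.pending:
            self.pending, self.busy = False, True
            return 1
        return 0

    def reset(self):           # 'i_rst_n' handshake from the accepting logic
        self.busy = False
```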
#### 3.1.2 Leaky accumulator The Leaky accumulator is a modified digital implementation of the Leaky Integrate and Fire (LIF) model of a neuron [49]. It is used to model the synaptic response to an incoming spike in the ODESA network. In our modified design, the leaky accumulator can model either linear or exponential decay. The linear decay accumulator consists of an adjustable bit-width down-counter. Here, if an event/spike is received, the counter is reloaded with the output value of the adder and starts to count down at the rising edges of the following clocks until it decays to zero. Equation 1 shows the value of the counter in the linear decay accumulator at time \(t\) given a spike at time \(t^{\prime}\). Thus for a spike \(\delta(t-t^{\prime})\), which arrived at any time \(t^{\prime}\) between two consecutive clock cycles with time period \(T\) (i.e. \((k-1)T<t^{\prime}\leq kT\mid k\in\mathbb{N}\)), the counter value of the Leaky accumulator, \(a(t)\), at time \(t=nT\), \(n\in[k,C+k]\) can be expressed as: \[a(t)=\Big{(}C-\frac{t-kT}{T}\Big{)}\big{(}u(t-kT)-u(t-(C+k)T)\big{)}, \tag{1}\] where \(C\) is the linear decaying constant that will be loaded into the counter when a spike happens and \(u(t)\) is the unit step function. The decay rate is controlled by either the value of the decaying constant \(C\) or the clock frequency. If a new event happens when the decaying counter is not zero, it will be reloaded with the sum of the current counter value and the constant \(C\). For a stimulus \(\delta(t-t_{1})+\delta(t-t_{2})\), where two spikes \(\delta(t-t_{1})\) and \(\delta(t-t_{2})\) occur close to each other at times \(t_{1}\) and \(t_{2}\) respectively, such that \(t_{1}<t_{2}\leq t_{1}+CT\), the counter value is the superposition of the two single-spike responses of Eq. (1): \[a(t)=a_{1}(t)+a_{2}(t). \tag{2}\] Fig. 5 shows the block diagram of a linear decaying Leaky accumulator and a sample waveform. The Leaky accumulator activates a clear signal ('o_clr') three clock cycles after receiving a synchronized event. This signal is used to reset the Synchronizer and make it ready to receive new input events. The exponential decay is estimated by dividing by two (a right shift) at each clock cycle after loading the constant \(C\) into the shift register. The output of an exponential decaying Leaky accumulator, \(a(t)\), will be: \[a(t)=\frac{C}{2^{\frac{t}{T}-k}}\left(u(t-kT)-u(t-(\tau+k)T)\right). \tag{3}\] For the exponential decay, \(C\) is set to \(2^{\tau}-1\), where \(\tau\) is the decay constant. In this work, we used linear decay accumulators only. Fig. 6 shows the architecture of an exponential decaying leaky accumulator. #### 3.1.3 Synapse The Synapse module consists of a leaky accumulator and a weight multiplier. Capturing an asynchronous event forces the leaky accumulator to generate a decaying output, which is amplified by the weight multiplier. The Synapse weight is stored in a register ('r_weight'), and its value is determined during the network training process. The 'r_weight' register resides in the Training hardware, which will be detailed in Section 4. The output of a Synapse \(i\), \(b_{i}(t)\), will be: \[b_{i}(t)=w_{i}\cdot a(t), \tag{4}\] where \(w_{i}\) is the value saved in the 'r_weight' register of the Synapse \(i\). The 'TRACE' register contains the time surface value (the output of the leaky accumulator) at every clock cycle. This value is used for training the previous network layer if it exists. Fig. 7 illustrates the architecture of a Synapse. 
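A behavioral sketch of the linear-decay accumulator and the Synapse of Eqs. (1)–(4) is given below. It ignores bit-width saturation and exact clocking details; the class names are illustrative assumptions:

```python
class LinearLeakyAccumulator:
    """Down-counter model of Eqs. (1)-(2): a spike reloads the counter
    with its current value plus C, and the counter decrements by one on
    every clock cycle until it reaches zero (bit-width saturation ignored)."""
    def __init__(self, C=63):
        self.C = C
        self.count = 0

    def spike(self):
        self.count += self.C          # adder output: current value + C

    def tick(self):                   # one clock cycle; returns a(t)
        if self.count > 0:
            self.count -= 1
        return self.count


class Synapse:
    """Leaky accumulator followed by a weight multiplier, Eq. (4)."""
    def __init__(self, weight, C=63):
        self.weight = weight          # 'r_weight' register
        self.acc = LinearLeakyAccumulator(C)

    def tick(self):
        trace = self.acc.tick()       # 'TRACE' register value
        return self.weight * trace    # b_i(t) = w_i * a(t)
```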
#### 3.1.4 Neuron Each Neuron comprises several Synapses. The outputs of all the Neuron's Synapses are added together. The resulting value is equivalent to the dot product calculated in ODESA [33], and it is referred to as the "membrane potential" throughout this paper, as used in the LIF neuron model. The membrane potential is compared with the Threshold register value. The output of the Neuron is the membrane potential value if it exceeds the Threshold register value; otherwise, it is set to zero. The Threshold register is also located in the training hardware, and its value will be assigned during the training phase. The output of a Neuron with \(m\) Synapses and membrane potential \(p(t)=\sum_{i=1}^{m}b_{i}(t)\) can be written as: \[d(t)=\begin{cases}p(t),&\text{if }p(t)\geq\text{Threshold},\\ 0,&\text{otherwise}.\end{cases} \tag{5}\] Fig. 4: Synchronizer module and its timing diagram. The 'i_spike' signal will be synchronized with the rising edge of 'i_clk'. If 'i_rst_n' is not activated, a new spike will be ignored. Fig. 3: ODESA training algorithm for an output layer. Fig. 2: ODESA training algorithm for a hidden layer. In an ODESA layer, the comparator and prioritizing module compare the output values of the neurons. The neuron with the highest membrane potential and the lowest index in the layer is declared the winner. Subsequently, the comparator module generates a spike corresponding to the index of the winning neuron. The membrane potential of the winner Neuron is latched in its 'LAST_VALUE' register. The 'LAST_VALUE' register is used during training. We will discuss the training process in detail in Section 4. Fig. 8 depicts an 8-input neuron block diagram. The 'i_spike' input of the Neuron receives feedback from the output spike ('o_neuron_out'). The membrane potential is latched at the rising edge of the 'i_spike' input and can be accessed via the 'o_lv' output of the Neuron. Fig. 5: Leaky Accumulator architecture with linear decay and the circuit timing diagram with two subsequent input spikes. Fig. 6: Leaky Accumulator architecture with exponential decay and the circuit timing diagram with two subsequent input spikes. Fig. 7: Synapse architecture. Fig. 8: 8-input ODESA Neuron. #### 3.1.5 Comparator and Spike generator The Neurons' outputs are received by the Comparator module, which detects which Neuron has the highest membrane potential and prioritizes the neuron outputs based on the input index to the Comparator module. The lower the index, the higher the priority of the Neuron. The Comparator output for an \(n\)-neuron layer is the post-synaptic spike stream of all the Neurons in the layer, and it can be mathematically modeled as: \[e_{i}(t)=\begin{cases}\delta(t),&\text{if }\text{IS\_EVENT}=1,\ d_{j}(t)<d_{i}(t)\ \forall j<i,\ \text{and}\ d_{i}(t)\geq d_{j}(t)\ \forall j>i,\\ 0,&\text{otherwise},\end{cases} \tag{6}\] where \(i\) is the Neuron index in an ODESA layer with \(n\) Neurons and the 'IS_EVENT' signal indicates whether any input event has occurred during the recent clock cycles. The Comparator output is one-hot encoded, indicating the winner Neuron (the one with the highest membrane potential and the lowest index). Due to the event-driven computation, the Comparator must have an output only if the Neuron becomes a winner due to an input event. This is critical to avoid generating unwanted or unrelated spikes at the output of the ODESA layer and to remove unintended spurs generated by the Comparator's combinatorial logic, which can cause intermediate spikes even when there is no input spike to the layer. 
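The winner selection of Eq. (6), including the lowest-index priority on ties, reduces to an argmax over the gated neuron outputs. A minimal sketch (illustrative, not the RTL comparator):

```python
import numpy as np

def wta_spikes(d, is_event):
    """Eq. (6): one-hot spike vector for the neuron with the highest
    output d_i(t); d_i is already zero when the threshold is not crossed.
    np.argmax returns the first maximum, i.e., the lowest index on ties."""
    out = np.zeros(len(d), dtype=int)
    if is_event and np.max(d) > 0:
        out[int(np.argmax(d))] = 1
    return out
```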
The Spike generator is a sequential logic that receives the Comparator's output and allows a spike to appear at the output only if an input event was recorded a few clock cycles earlier. The number of clock cycles that the Spike generator module can look back on is adjustable for each module. In our design, after detecting an input event, the spike generator waits up to four clock cycles to receive a signal from the Comparator module. Fig. 9 shows the block diagram and the function of the Comparator. Fig. 10 illustrates the Spike generator logic and a sample waveform. #### 3.1.6 ODESA SNN Layers All ODESA layers, whether an input, a hidden, or an output layer, have a homogeneous architecture. That is, a number of Neurons are connected to a Comparator module. Neurons can have different numbers of synapses. However, every Neuron has only one output. The number of layer outputs is equal to the number of Neurons in the layer. As discussed, only one of the layer outputs can be active at any given time. Neurons within a layer share the inputs to the layer. The outputs of a layer are fully connected to the inputs of the following layer, except for the output layer, whose outputs indicate the classes of the classification problem the ODESA Network is designed to solve. Normally, different ODESA layers operate at different clock periods. The ratios of the hidden and output layers' clock periods to the input layer's clock period are part of the network's configuration parameters. We use a naming convention throughout this paper to reference the architecture of the ODESA network. The input layer is always called 'L1'. Then, we increment the Layer's number for the following layers up to the output layer, e.g. 'L2', 'L3', and so forth. Any ODESA network architecture can be specified using the following naming convention: ODESA {number of input spike channels}_{number of neurons at level 1}_..._{number of neurons at level n}_{number of output classes}. Fig. 11 shows an example of the ODESA 8_2_4_4 architecture. ## 4 Training hardware ODESA is a multi-layer supervised Spiking Neural Network architecture that can be trained to map an input spatio-temporal spike pattern to an output spatio-temporal spike pattern without requiring access to the weights and thresholds of other neurons or batching of the input data. The training algorithm is distinct for hidden layers (including the input layer) and the output layer. At each layer, the training is done through the guiding signals produced by the successive layer, the layer's output spikes, and the Label spikes. The original algorithm is detailed and implemented in software in [33]. In this work, we present a revised version of the algorithm enhanced for hardware implementation. If any layer fires a spike, an 'IS_WINNER' signal is generated for that layer's training logic. Each layer's training logic also receives a Local Attention Signal (LAS) and a Global Attention Signal (GAS). If there is a spike at the output of a layer, a LAS signal will be generated for its preceding layer. Fig. 10: Spike generator module and sample waveform. Fig. 9: 2-input Comparator and Spike generator and sample waveform. The GAS signal, however, is generated when a label spike exists for the current input spike and propagates through all layers. The training set, which includes the input spikes and their corresponding labels, is stored in the RAM. During the training phase, the input spikes are read from the RAM and injected into the input layer of the ODESA SNN. 
Likewise, the training hardware reads labels from the RAM and compares them with the output spikes generated by the output layer. Fig. 12 illustrates an ODESA network with the network layers and training logic for each layer. The training hardware for each layer receives the last value of the membrane potential and the trace of all Synapses in that layer. When a Neuron becomes a winner (the 'IS_WINNER' signal is asserted), a (post-synaptic) spike is generated at the output of the layer, and the values of the Synapses' Trace registers (Fig. 7) are latched into a time surface (TS) register. The value of the membrane potential (the adder output in Fig. 8) is also registered in the Last Value (LV) register. The TS register is implemented in the Training module (not visible in Fig. 12 for simplicity), and its value represents the contribution of the Synapse to the generation of the winner's membrane potential. If the Neuron remains silent in the presence of an input event and a GAS signal, then the trace of the Synapse is latched into the 'NO_WINNER' register, which is implemented in the Training module. A high value of the 'NO_WINNER' register indicates that the layer failed to spike for an input spike to the layer. The update of weights and thresholds happens in the presence of the GAS signal through a "reward" or "punish" process, which is similar for all neurons across the layers. For a Neuron \(j\) with threshold \(T_{j}\) and \(s\) Synapses, the reward process is defined according to Equation 7. \[Reward:\begin{cases}w_{ij}&\gets w_{ij}+\eta_{w}\cdot(TS_{ij}-w_{ij}),\ \forall i\in[1,s],\\ T_{j}&\gets T_{j}+\eta_{T}\cdot(LV_{j}-T_{j}),\end{cases} \tag{7}\] where \(w_{ij}\) is the synaptic weight, and \(TS_{ij}\) is the time surface register of Synapse \(i\) of Neuron \(j\). \(LV_{j}\) is the Last Value register of Neuron \(j\). The Neuron "punish" process simply lowers the Neuron's threshold by the constant value \(\Delta_{T}\), as stated in Equation 8. \[Punish:\ T_{j}\gets T_{j}-\Delta_{T}, \tag{8}\] where \(\eta_{w}<1\) and \(\eta_{T}<1\) are the learning rates of the layer; together with \(\Delta_{T}\geq 1\), they are the network hyper-parameters. For the sake of a low-cost hardware implementation, learning rate values are chosen as negative powers of two; therefore, the "reward" can be performed by simple shift and addition operations. Usually, \(\eta_{w}\) and \(\eta_{T}\) are set to the same value. Since the Weight and Threshold registers contain unsigned integer values, special consideration has to be taken to ensure that the product terms of \(\eta_{w}\) and \(\eta_{T}\) in Equation 7 never become zero, which would leave the training process in a locked state. Additionally, when experiments require learning rates that are too small to perform weight updates via shift operations, the Weight and Threshold update steps are reduced to Equation 9. The sign function is used to determine the direction of the weight (or threshold) change, which is then updated by a fixed step equal to \(\eta_{w}\) (or \(\eta_{T}\)). In this case, the \(\eta_{w}\) and \(\eta_{T}\) values are set to the lowest possible step changes (e.g., 1, 2, 3, ...), as used in Section 5.2. \[Reward:\begin{cases}w_{ij}&\gets w_{ij}+\eta_{w}\cdot\mathrm{sign}(TS_{ij}-w_{ij}),\ \forall i\in[1,s],\\ T_{j}&\gets T_{j}+\eta_{T}\cdot\mathrm{sign}(LV_{j}-T_{j}).\end{cases} \tag{9}\] 
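A software sketch of the reward and punish processes of Eqs. (7)–(9) is given below, with the learning rates expressed as shift amounts so the updates reduce to shifts and additions as described above. Register widths, saturation, and the zero-product guard are simplified; all step values are illustrative assumptions:

```python
def reward_shift(w, T, TS, LV, shift_w=3, shift_T=3):
    """Eq. (7) with eta = 2**-shift: shift-and-add updates. Note that a
    small positive (TS[i] - w[i]) can shift down to zero, the locked
    state the text warns about; a real design must guard against this."""
    for i in range(len(w)):
        w[i] += (TS[i] - w[i]) >> shift_w
    T += (LV - T) >> shift_T
    return T

def reward_fixed_step(w, T, TS, LV, step_w=1, step_T=127):
    """Eq. (9): fixed-step updates for learning rates too small for shifts."""
    for i in range(len(w)):
        if TS[i] != w[i]:
            w[i] += step_w if TS[i] > w[i] else -step_w
    if LV != T:
        T += step_T if LV > T else -step_T
    return T

def punish(T, delta_T=63):
    """Eq. (8): lower the threshold to make the neuron more receptive."""
    return max(T - delta_T, 0)
```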
Algorithms 1 and 2 show the hardware-friendly ODESA training algorithms for the Hidden and Output layers, respectively. Since the 'IS_WINNER' and GAS signals have no overlap, we use a latched version of these signals in the hardware. Fig. 11: Two-layer ODESA implementation. Layer one (L1) is the input layer with two 8-input neurons. Layer 2 (L2) is the output layer with four 2-input neurons that classify the inputs into four classes. Fig. 13 shows the training waveforms for a hidden layer. The GAS signal is asserted at the same time 'IS_EVENT' becomes active (events and labels are read simultaneously from the RAM). The 'IS_WINNER' signal spikes a time \(\Delta t_{1}\) after 'IS_EVENT'. The 'r_IS_WINNER' and 'r_GAS' signals are latched and verified after a time \(\Delta t_{pass}\), on the rising edge of the ODESA layer's clock. The LAS signal is asserted a time \(\Delta t_{2}\) after 'IS_WINNER'. The 'IS_WINNER' signal also indicates that there exists an input event for the next layer. The values of \(\Delta t_{1}\), \(\Delta t_{2}\), and \(\Delta t_{pass}\) are configurable in the Neuron's architecture. Specifically, \(\Delta t_{i}\) represents the time required for a spike to appear at the output of ODESA layer \(i\) following any input event to the layer. On the other hand, \(\Delta t_{pass}\) represents the time that the training module waits to observe the winner and update the weights and thresholds. In our design, all of these parameters are set to 3 clock cycles of the corresponding layer's clock. For an output layer, however, the WINNER is compared with the LABEL in the event of a GAS signal. If the WINNER and LABEL match, then the winner Neuron is rewarded; otherwise, the winner Neuron's weights are suppressed by a negative weight update. The negative weight update is the reverse of the weight reward process, i.e., \[w_{ij}\gets w_{ij}+\eta_{w}\cdot w_{ij}-\eta_{w}\cdot TS_{ij}\ \ \forall i\in[1,s]. \tag{10}\] Fig. 14 demonstrates the training signals for the output layer. The LABEL is latched at the rising edge of the GAS signal. It takes a time \(\Delta t_{pass}\) for the WINNER to appear at the output layer, which is then compared with the LABEL at the rising edge of the next clock to perform the weight and threshold updates. Fig. 12: ODESA network, Neuron layers, training hardware, and connections. ## 5 ODESA network Experiments ### _Experiment 1, Detection of four classes of spike patterns_ Our first experiment uses ODESA to detect four patterns consisting of 16 spikes split into two sub-patterns of 8 spikes each, which appear at a uniform time distance of \(\nu\). A label spike was attached to the last spike of each of the four patterns. The four input event patterns can be mathematically expressed as in Equation 11. Fig. 15 visualizes the four spike patterns in time and the assigned class label for each pattern. \[\forall i\in[1,8]: \tag{11}\] \[\begin{cases}Pattern\ 1:i\_event[i]=\delta(t-(i-1)\nu)+\delta(t-(8+i)\nu)\\ Pattern\ 2:i\_event[i]=\delta(t-(9-i)\nu)+\delta(t-(17-i)\nu)\\ Pattern\ 3:i\_event[i]=\delta(t-(i-1)\nu)+\delta(t-(17-i)\nu)\\ Pattern\ 4:i\_event[i]=\delta(t-(9-i)\nu)+\delta(t-(8+i)\nu)\end{cases}\] The ODESA network implemented for this application is configured with two fully connected layers. 
The input layer (L1) has two Neurons with eight inputs, and the Output layer (L2) has four Neurons with two inputs. The network architecture is illustrated in Fig. 11. The Network parameters used for detecting the four class patterns are listed in Table I. \begin{table} \begin{tabular}{l l l} \hline _Parameter_ & _L1_ & _L2_ \\ \hline \(\eta_{w}\) & \(2^{-3}\) & \(2^{-2}\) \\ \hline \(\eta_{T}\) & \(2^{-3}\) & \(2^{-2}\) \\ \hline \(\Delta_{T}\) & \(2^{6}-1\) & \(2^{6}-1\) \\ \hline _Weight register (bits)_ & 8 & 8 \\ \hline _Decaying counter (bits)_ & 6 & 6 \\ \hline _Clock frequency (MHz)_ & 0.064 & 0.032 \\ \hline _Input events time distance \(\nu\) (ms)_ & 31.25 & - \\ \hline \end{tabular} \end{table} TABLE I: ODESA 8_2_4_4 parameters for experiment 1. Fig. 13: ODESA Hidden layer training signals. Fig. 14: ODESA Output Layer training signals. In this experiment, we use 6-bit linear decaying counters at each Neuron and a decaying constant \(C=63\). The clock frequency for L1 and L2 is set to 64 kHz and 32 kHz, respectively. Thus, the linear decay to zero will take one millisecond for Neurons at L1 and two milliseconds for Neurons at L2. The distance between two spikes is set to \(\nu=8\times\) (L1 clock period). As shown in Fig. 16, for each pattern, two spikes are generated by L1. The position and time of the spike determine the input pattern injected into the ODESA network. The L1 output spikes are used as inputs to L2. As depicted in Fig. 17, L2 comprises four 2-input neurons that perform the classification task. The ODESA 8_2_4_4 implementation was performed on an Intel Cyclone V (part no. 5CSEBA6U23I7) using the Quartus 18.0 Lite design tool. Table II reports the implementation results. The implemented ODESA network achieved an accuracy rate of 100% after completing self-training. The accuracy did not change when applying random changes of \(\pm 10\%\) to the distance \(\nu\) on a trained network. \begin{table} \begin{tabular}{l l} \hline _Architecture_ & _ODESA 8_2_4_4_ \\ \hline _Used ALM_ & 1192 \\ \hline _Used registers_ & 976 \\ \hline _Used DSP units_ & 20 \\ \hline _L1 max. Clock frequency (MHz)_ & 28.25 \\ \hline _Dynamic power consumption (mW)_ & 1 \\ \hline \end{tabular} \end{table} TABLE II: ODESA 8_2_4_4 Implementation results on Intel Cyclone V. Fig. 15: Experiment 1 input spike patterns. Fig. 16: ODESA Layer L1 response to input events. Fig. 17: ODESA Layer L2 response to input events. ### _Experiment 2, Iris dataset classification_ The Iris dataset [50] is one of the best-known databases in the pattern recognition literature, notable for not being linearly separable. Different spike-encoding schemes have been used to convert the Iris dataset into spikes to test the local learning rules of SNNs [9, 13, 29]. The data set contains three classes of 50 instances each, where each class refers to a type of iris flower. The four features are sepal length (d1), sepal width (d2), petal length (d3), and petal width (d4), all in centimeters within the range \([0.1,7.7]\). To convert the input feature values into spatio-temporal spike patterns, we used a latency coding that maps the value of each input dimension to the time of a spike generated from a corresponding input channel: \[L\rightarrow\delta(t-L). \tag{12}\] However, the length \(L\) for \(d_{1}\), \(d_{2}\), \(d_{3}\), and \(d_{4}\) has to be scaled to fit in a fixed-length frame. In our case, that is the time frame within the range \([0,30]\). The dataset conditioning we applied to the original Iris dataset follows the offset and compress formulae in Equation 13. \[\begin{cases}d_{1}=\lceil 3.8\big{(}\frac{(d_{1}-1)}{2}+4\big{)}\rceil,\\ d_{2}=\lceil 3.8\big{(}\frac{(d_{2}-2)}{3}+2.5\big{)}\rceil,\\ d_{3}=\lceil 3.8d_{3}\rceil,\\ d_{4}=\lceil 9(d_{4}+0.5)\rceil.\end{cases} \tag{13}\] Each sample in the new dataset contains the four features scaled to timestamps in the range \([0,30]\). Using Equation 12, the lengths \(d1\), \(d2\), \(d3\), and \(d4\) are converted to timestamps. The dataset with timestamped events is shown in Fig. 18. The ODESA architecture we designed for the Iris dataset, with four input spikes within the timeframe \([0,30]\), is ODESA 4_6_3_3. L1 includes 6 Neurons with 4 Synapses each, and L2 comprises 3 Neurons with 6 Synapses each. The clock frequency for L1 is 2.5 MHz, i.e., \(\frac{1}{20}\) of the FPGA system clock (50 MHz). The clock frequency for L2 is set to 0.625 MHz, i.e., \(\frac{1}{4}\) of the L1 clock frequency. The ratio of the level-one clock to the level-two clock is a network parameter, which in this experiment is set to four. Samples are injected at the clock frequency of L1. Therefore, each sample's timeframe takes a maximum of \(30\times 0.4=12\) microseconds. The decaying counter designed for this application is eight bits wide, and the decaying constant is set to its maximum value \(C=255\) for both L1 and L2. The ODESA 4_6_3_3 network was implemented on Intel's Cyclone V FPGA, and the results are reported in Table III. #### 5.2.1 Training ODESA 4_6_3_3 for the Iris dataset Since the Iris dataset is more complex than our previous experiment with the patterns of Fig. 15, it requires smaller learning rates than can be achieved by shift operations. For weights with small values, the shift operation can lead to no updates. We have used the weight and threshold update steps introduced in Equation 9. The \(\eta_{w}\) for L1 is set to 1. The resulting Weight register update for each Synapse \(i\) of Neuron \(j\) follows the rule in Equation 14. \[\begin{cases}w_{ij}\gets w_{ij}+1,\text{ if }TS_{ij}>w_{ij},\\ w_{ij}\gets w_{ij}-1,\text{ if }TS_{ij}<w_{ij}.\end{cases} \tag{14}\] This weight update guarantees that the Synapse's weight value moves smoothly towards the time surface of that Synapse. Rewarding the threshold is also performed by incrementing the threshold value by a fine-tuned constant \(\eta_{T}\) according to Equation 9. This constant \(\eta_{T}\) is determined by trial. In our test, we set \(\eta_{T}\) equal to 127 decimal (0x7F hexadecimal). At L2, the weight updates require higher learning rates, and a larger step size is used for L2 to achieve this. According to Equation 15, each Synapse's weight is incremented or decremented by two in a rewarding process. \[\begin{cases}w_{ij}\gets w_{ij}+2,\text{ if }TS_{ij}>w_{ij},\\ w_{ij}\gets w_{ij}-2,\text{ if }TS_{ij}<w_{ij}.\end{cases} \tag{15}\] The threshold update is done by employing Equation 7 with \(\eta_{T}=2^{-10}\). The "punish" process uses an adaptive \(\Delta_{T}\) value according to Equation 16 to ensure that the Threshold register will never cross zero. \[\Delta_{T}=\begin{cases}2^{10}-1,\text{ if }T_{j}>2^{16}-1,\\ 2^{8}-1,\text{ if }T_{j}>2^{12}-1,\\ 2^{4}-1,\text{ if }T_{j}>2^{8}-1,\\ 1,\text{ otherwise.}\end{cases} \tag{16}\] During our experiments, we noticed that training can be significantly accelerated by masking the LAS signals of the output layer, except for the ones that occur after a GAS signal. 
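As a worked example of the latency coding above, the sketch below converts one (illustrative) Iris sample into the four spike timestamps of Eqs. (12)–(13):

```python
import math

def iris_to_spike_times(d1, d2, d3, d4):
    """Eq. (13): scale each feature into the [0, 30] frame; by Eq. (12),
    the scaled value L becomes one spike at time t = L on its channel."""
    return [
        math.ceil(3.8 * ((d1 - 1) / 2 + 4)),
        math.ceil(3.8 * ((d2 - 2) / 3 + 2.5)),
        math.ceil(3.8 * d3),
        math.ceil(9 * (d4 + 0.5)),
    ]

# One Iris-setosa-like sample: sepal 5.1/3.5 cm, petal 1.4/0.2 cm
print(iris_to_spike_times(5.1, 3.5, 1.4, 0.2))  # -> [23, 12, 6, 7]
```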
In other terms, generating LAS signals only when we anticipate a response from the network to the classification problem leads to fewer training epochs being necessary. To evaluate our network's performance and accuracy, we chose 20 random splits of the Iris dataset (30% training and 70% test splits) and ran the hardware with the training dataset splits stored in its RAM. The same dataset was used for the software version of ODESA with a similar ODESA 4_6_3_3 configuration. We have used a smaller ODESA network with fewer input and hidden neurons compared to the network architecture (ODESA 20_10_3_3) used in [33] to fit the area available on Intel's Cyclone V. \begin{table} \begin{tabular}{l l} \hline _Architecture_ & _ODESA 4_6_3_3_ \\ \hline _Used ALM_ & 2805 \\ \hline _Used registers_ & 1195 \\ \hline _Used DSP units_ & 42 \\ \hline _L1 max. clock frequency (MHz)_ & 39.88 \\ \hline _Dynamic Power consumption (mW)_ & \(<\) 2 \\ \hline \end{tabular} \end{table} TABLE III: ODESA 4_6_3_3 Implementation results on Intel Cyclone V. Fig. 18: Experiment 2, Iris dataset input spike patterns. The input dimensions of the data are 4, as compared to 20 used in the original work [33]. The dataset was converted to spikes using latency coding as described in Section 5.2, as opposed to the population code used in [33], to reduce the number of multipliers required for the hardware implementation. The software version of ODESA is the original algorithm from [33], which used floating-point operations and normalized weights and input time surfaces for calculating the dot products of the neurons. The dot product in the software version is always bounded between 0 and 1. We let the two networks (our ODESA hardware and software ODESA) run for 400 epochs on each random split. The accuracy performance of the networks is summarized in Fig. 19. The average and maximum achieved accuracy and the standard deviations are reported in Table IV. The software ODESA shows a consistent accuracy of around 83% with a small variation, while the hardware version accuracy ranges from a maximum of 86.6% down to 65%. The hardware version of ODESA does not show a considerable drop in accuracy compared to the software version, even with the usage of non-normalized integer-based weights and fixed-step weight and threshold updates as described in Equations 14, 15, and 16. But it does have a larger standard deviation compared to the software version, which is a result of using a fixed-length integer numbering system and fixed-step updates that can limit convergence. The results show that our hardware ODESA and training algorithms perform very closely to the software version of ODESA. ## 6 Conclusion For the first time, we presented an FPGA implementation of an ODESA SNN that can be trained online in a supervised manner on hardware. The training data is stored in the internal RAM of the FPGA device and is used on hardware restart to assign the SNN parameters. This trainable hardware is efficient in terms of hardware resources and computing costs, making it appealing for streaming pattern detection applications, e.g., intrusion detection on IoT devices. The architecture can asynchronously update the neuron parameters at each layer independently of the other layers in a network. All the communication in the hardware is event-based and via binary spikes. The architecture is capable of performing on-chip online learning and is a promising next step toward building energy-efficient continual learning edge devices. 
Our work aims to draw attention to designing autonomous hardware that makes decisions based on sensory inputs. Our approach could be extended to handle more complex pattern detection and classification tasks in near real-time. ## Acknowledgment This research is supported by the Commonwealth of Australia as represented by the Defense Science and Technology Group of the Department of Defense.
2310.20671
Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction
Quantum Recurrent Neural Networks (QRNNs) are robust candidates to model and predict future values in multivariate time series. However, the effective implementation of some QRNN models is limited by the need of mid-circuit measurements. Those increase the requirements for quantum hardware, which in the current NISQ era does not allow reliable computations. Emulation arises as the main near-term alternative to explore the potential of QRNNs, but existing quantum emulators are not dedicated to circuits with multiple intermediate measurements. In this context, we design a specific emulation method that relies on density matrix formalism. The mathematical development is explicitly provided as a compact formulation by using tensor notation. It allows us to show how the present and past information from a time series is transmitted through the circuit, and how to reduce the computational cost in every time step of the emulated network. In addition, we derive the analytical gradient and the Hessian of the network outputs with respect to its trainable parameters, with an eye on gradient-based training and noisy outputs that would appear when using real quantum processors. We finally test the presented methods using a novel hardware-efficient ansatz and three diverse datasets that include univariate and multivariate time series. Our results show how QRNNs can make accurate predictions of future values by capturing non-trivial patterns of input series with different complexities.
José Daniel Viqueira, Daniel Faílde, Mariamo M. Juane, Andrés Gómez, David Mera
2023-10-31T17:32:11Z
http://arxiv.org/abs/2310.20671v1
Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction ###### Abstract Quantum Recurrent Neural Networks (QRNNs) are robust candidates to model and predict future values in multivariate time series. However, the effective implementation of some QRNN models is limited by the need of mid-circuit measurements. Those increase the requirements for quantum hardware, which in the current NISQ era does not allow reliable computations. Emulation arises as the main near-term alternative to explore the potential of QRNNs, but existing quantum emulators are not dedicated to circuits with multiple intermediate measurements. In this context, we design a specific emulation method that relies on density matrix formalism. The mathematical development is explicitly provided as a compact formulation by using tensor notation. It allows us to show how the present and past information from a time series is transmitted through the circuit, and how to reduce the computational cost in every time step of the emulated network. In addition, we derive the analytical gradient and the Hessian of the network outputs with respect to its trainable parameters, with an eye on gradient-based training and noisy outputs that would appear when using real quantum processors. We finally test the presented methods using a novel hardware-efficient ansatz and three diverse datasets that include univariate and multivariate time series. Our results show how QRNNs can make accurate predictions of future values by capturing non-trivial patterns of input series with different complexities. ## I Introduction Processing and analysing multivariate time series generally involves sophisticated algorithms that load high-dimensional datasets evolving in time, wherein temporal correlations are not trivial. In classical computation, Machine Learning is a consolidated approach for prediction and anomaly detection tasks [1; 2; 3; 4; 5; 6; 7]. Recurrent Neural Networks (RNNs) are a powerful tool for learning sequential data [8], and, over the decades, several variations of the first model have improved their performance, like the well-known _Long Short-Term Memory_ (LSTM) or the _Gated Recurrent Unit_ (GRU) cells [1; 4; 9]. Transformers are a recent alternative [10] that seems to outperform the previous Neural Network models. However, there is not yet enough experience in using this architecture for numerical multivariate time series. The RNN models achieve great results for sequential learning, but they are not problem-free. Some models cannot store information from the first inputs of long time series. The LSTM cell arose to address this problem [1]. Moreover, because of the temporal correlations between different variables in multivariate time series, a high-dimensional space arises when computing data with non-linear patterns. Thus, it is necessary to build neural networks with more neurons, layers and parameters, which are computationally expensive and more challenging to train. We quickly face a Deep Learning task, which requires computational resources and algorithms that ease parameter optimisation during neural network training, such as the backpropagation algorithm for estimating the gradients [11]. Quantum Machine Learning (QML) leverages the power of Quantum Computing to produce complex patterns that are probably not straightforwardly reproducible in a classical computer [12; 13]. 
However, in the current _Noisy Intermediate-Scale Quantum_ (NISQ) era of quantum computers, there is a need for algorithms requiring a low quantum circuit depth, since only a limited number of operations are meaningful. Consequently, a substantial part of the research in quantum algorithms is focusing on Variational Quantum Algorithms (VQAs) [14]. A VQA relies on hybrid classical-quantum routines to classically minimise a cost function computed with a back-end Parameterised Quantum Circuit (PQC). In this context, a VQA can be used as a Neural Network by feeding a quantum circuit with information from our dataset and then tuning the circuit parameters to minimise the cost function [15]. The first proposals of Quantum Recurrent Neural Networks (QRNNs) consist of a temporal loop that feeds a quantum state into a quantum circuit and applies the Schrödinger equation to evolve the state. Their applications were simulating stochastic processes [16] and stochastic filtering of signals with noise [17; 18]. In the middle of the NISQ era and with the explosion of Machine Learning, several proposals have arisen to enhance the power of the current Machine Learning models by adding Parameterised Quantum Circuits as part of the algorithm [19; 20; 21; 22; 23]. The current QRNN model seeks to maximise the use and power of Quantum Computing through a PQC that computes the whole sequence, so that the only classical parts are data pre-processing, data post-processing and the optimisation of circuit parameters [24; 25; 26; 27; 28; 29]. The most common circuit architecture repeatedly encodes classical data into the quantum circuit, applies a unitary operator and measures a subset of qubits, involving multiple intermediate measurements before the end of the circuit. However, other proposals avoid intermediate measurements at the expense of increasing the number of qubits with the time-series length [27] or truncating the circuit [29]. The QRNN model is expected to provide advantages compared to classical neural networks, which extend to QML models in general. The encoding of classical data into quantum states allows us to compute a number of functions that increases exponentially with the number of qubits [15]. This feature enhances the non-linearities that are required for learning complex patterns. In classical computing, reproducing this process requires exponential resources. Besides that, circuits with intermediate measurements, known as dynamic circuits, are not widespread in the QML literature, but they can lead to a new realm of algorithms on real quantum hardware [30], and they are being introduced in Quantum Computing platforms [31; 32]. The main aim of this article is to provide the mathematical tools to emulate this type of circuit, which is the core of QRNN algorithms. This is interesting for two reasons: the mathematical formulation lets us understand how information propagates through the quantum circuit, and the emulation makes the algorithm executable on classical devices before sending it to a quantum computer. Within this context, we test the theoretical procedure in several use cases, showing its potential for multivariate time series prediction. The manuscript is structured as follows. In Section II we provide the method for emulating a quantum circuit with intermediate measurements, which has the property of returning values that depend on past data encoded in the circuit, like recurrent neural networks. 
As backpropagation is used in classical Machine Learning, in Section III we provide a method to analytically compute first- and second-order partial derivatives of the circuit outputs, by an expansion of the Parameter Shift Rule (PSR) [15; 33]. In Section IV we show the results of applying the former methods to an emulated QRNN for multivariate time series prediction, demonstrating that the algorithm works when the quantum circuit is emulated with the presented method. Finally, discussions are included in Section V. ## II Formulation of the QRNN states with density matrices The QRNN model is based on the classical RNN model, as proposed by [25]. Following this approach, we explicitly provide a tensor representation of the internal states propagated through the QRNN and its outputs, based on the density matrix formalism. This representation allows the implementation of a classical emulator based on tensor operations and even simple matrix operations, which are very fast in current computational devices. ### The classical Recurrent Neural Network Consider a set of data which is time-ordered, \[\{\mathbf{x}_{(0)},\mathbf{x}_{(1)},\cdots,\mathbf{x}_{(t)},\cdots,\mathbf{x}_{(T)}\}, \tag{1}\] which is the input multivariate time series; each item is a vector containing the information of \(n_{v}\) variables. The output series is \[\{\mathbf{y}_{(0)},\mathbf{y}_{(1)},\cdots,\mathbf{y}_{(t)},\cdots,\mathbf{y}_{(T)}\}, \tag{2}\] which is the target in the Machine Learning task. The output of an RNN at time \(t\) depends on the inputs from the previous time steps, since it preserves past information with a form of memory [34]. The easiest way to imagine a network with memory is to think of a box that continuously receives and returns data. To start, the box is supplied with an input, \(\mathbf{x}_{(0)}\), and it returns two objects: an output \(\overline{\mathbf{y}}_{(0)}\), which is read, and a _hidden state_ \(\mathbf{h}_{(0)}\), which is re-introduced into the box at the next time step. From the second time step onwards, the box receives the previous hidden state \(\mathbf{h}_{(t-1)}\) and the input \(\mathbf{x}_{(t)}\). Then, it computes them and generates the output \(\overline{\mathbf{y}}_{(t)}\) and the hidden state \(\mathbf{h}_{(t)}\) to be re-introduced. The model is represented in Fig. 1, where we can see the recurrence, so that the behaviour is dynamic, in contrast to feedforward neural networks, for which the information is never transmitted backwards. The information flux in an RNN through time splits into three different lines, represented in Fig. 1, that stand for _input data_ \(\mathbf{x}_{(t)}\), _output data_ \(\overline{\mathbf{y}}_{(t)}\) and _hidden state_ \(\mathbf{h}_{(t)}\). Both \(\overline{\mathbf{y}}_{(t)}\) and \(\mathbf{h}_{(t)}\) are functions that depend on \(\mathbf{x}_{(t)}\) and \(\mathbf{h}_{(t-1)}\), \[\left\{\begin{array}{ll}\overline{\mathbf{y}}_{(t)}&=\mathcal{Y}\left(\mathbf{x}_{(t)},\mathbf{h}_{(t-1)}\right)\\ \mathbf{h}_{(t)}&=\mathcal{S}\left(\mathbf{x}_{(t)},\mathbf{h}_{(t-1)}\right),\end{array}\right. \tag{3}\] but at the same time, \(\mathbf{h}_{(t-1)}\) also depends on previous inputs and a hidden state. Functions \(\mathcal{Y}\) and \(\mathcal{S}\) are the part that is expensive to compute. They consist of matrix calculations that take into account the tunable parameters (weights), structure and connections between the layers of neurons that form the network. 
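A minimal NumPy sketch of the recurrence in Eq. (3); the tanh hidden-state map and the affine read-out are illustrative choices for \(\mathcal{S}\) and \(\mathcal{Y}\) (they anticipate the vanilla RNN of Eq. (4) below):

```python
import numpy as np

def rnn_forward(xs, Whx, Whh, Wyh, bh, by):
    """Unrolled recurrence of Eq. (3): each step maps (x_t, h_{t-1})
    to an output y_t and a new hidden state h_t."""
    h = np.zeros(Whh.shape[0])
    ys = []
    for x in xs:
        h = np.tanh(Whx @ x + Whh @ h + bh)   # hidden state S(x_t, h_{t-1})
        ys.append(Wyh @ h + by)               # output read from h_t
    return np.array(ys), h
```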
### The Quantum Recurrent Neural Network The QRNN leverages the power of PQCs, which implicitly perform matrix calculus. As in the classical model described above, where the network returns output data every time step, we measure the quantum circuit to obtain data in our classical devices. Quantum networks also contain a _hidden-state_ flux \(\mathbf{h}\). As its name suggests, a hidden state propagates internally and carries information that is not required to be known. That is why we can think of a quantum state as a hidden state. This quantum state is actually a mixed state on a subsystem B, which is not measured, arising after the measurement of a subsystem A, provided that A and B are entangled. Hence, we can now construct a general circuit that resembles the classical RNN described above, as proposed by [25]. Since a quantum circuit diagram represents the series of operations applied over several qubits (vertical direction) over time (horizontal direction), the circuit in Fig. 2 can be interpreted as the unrolled representation through time of the QRNN. The circuit consists of two quantum registers: an _exchange register_ (E) with \(n_{E}\) qubits, and a _memory register_ (M) with \(n_{M}\) qubits. The former is used to exchange information between the quantum and the classical interface, by applying encoding and measurement operations, while the latter is never measured. The total number of qubits is \(n\). We then have a circuit _block_ for every time step: an estimation of an output from the input at time \(t\). The instructions are (see Fig. 2): 1. Initialise register E (M) to a desired pure state \(\ket{\Psi_{E}}\) (\(\ket{\Psi_{M}}\)). We will restrict the initialisation of all the qubits to \(\ket{0}\). 2. Reset of register E. All register E qubits start at \(\ket{0}\) every time step. 3. Apply a parameterised unitary (ansatz) \(U\left(\mathbf{x}_{(t)},\mathbf{\theta}\right)\) that evolves the state of each qubit and entangles some (or all) qubits from both registers E and M, correlating the information from both. \(\mathbf{\theta}\) are the variable parameters (weights) of the network. 4. Measure qubits from register E. 5. Repeat steps (2), (3), and (4) iteratively from \(t=1\) to \(t=T\). The unitary \(U\left(\mathbf{x}_{(t)},\mathbf{\theta}\right)\) must encode the classical input \(\mathbf{x}_{(t)}\) each time step, apply parameterised gates depending on the set of weights \(\mathbf{\theta}\), and apply entanglement between registers E and M. The \(\mathbf{\theta}\) are always the same for the different repetitions of the unitary, following the classical approach (see Fig. 1). Variations of this structure are available in the literature [28] when trying to create a circuit that sustains the coherence for a longer time. This is beyond the scope of this work, since we want to establish some bases for designing and emulating quantum RNNs. Solutions for NISQ-era limitations in these networks will require further research that may involve quantum hardware parameters, such as the readout features or the coherence itself [35]. 
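To make the block structure concrete, the following NumPy sketch builds \(U(\mathbf{x}_{(t)},\mathbf{\theta})=W(\mathbf{\theta})\,V(\mathbf{x}_{(t)})\) for the smallest case \(n_{E}=n_{M}=1\). The RY-based encoding (standard half-angle convention) and the single CNOT entangler are illustrative assumptions, not the hardware-efficient ansatz used in the experiments:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def ry(a):
    """Single-qubit rotation R_y(a) = exp(-i a sigma_y / 2)."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def kron(*ops):
    return reduce(np.kron, ops)

def block_unitary(x, theta):
    """One circuit block: V(x) encodes the input on register E (first
    qubit); W(theta) rotates both qubits and entangles E with M."""
    V = kron(ry(np.pi * x), I2)                   # encoding acts on E only
    W = CNOT @ kron(ry(theta[0]), ry(theta[1]))   # ansatz + entanglement
    return W @ V                                  # U(x, theta) = W V
```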
### Memory and output of a QRNN The way the vanilla RNN [36] (from which subsequent RNN algorithms originated) processes data is the clearest example of how data is explicitly handled in an RNN: \[\begin{array}{ll}\mathbf{h}_{(t)}=&\sigma_{h}\left(W_{hx}\mathbf{x}_{(t)}+W_{hh}\mathbf{h}_{(t-1)}+\mathbf{b}_{h}\right),\\ \overline{\mathbf{y}}_{(t)}=&\sigma_{y}\left(W_{yh}\mathbf{h}_{(t)}+\mathbf{b}_{y}\right),\end{array} \tag{4}\] where \(\{W_{hx},W_{hh},W_{yh}\}\) are the weight matrices, \(\{\mathbf{b}_{h},\mathbf{b}_{y}\}\) the biases, and \(\{\sigma_{h},\sigma_{y}\}\) the activation functions. Matrices (2-dimensional tensors) are always represented by uppercase letters, while vectors (1-dimensional tensors) are represented by bold lowercase letters. Figure 1: Classical Recurrent Neural Network representations. (a) Basic RNN scheme. (b) Basic RNN unrolled through time. A density matrix formalism underlies the classical emulation of the quantum circuit in Fig. 2. In this case, we can compute the exact probability distribution in every measurement of the \(n_{E}\) qubits. At the same time, we can include non-unitary operations, such as the measurement itself and the reset operation. An alternative is a state-vector emulation with sampled outputs, i.e., measurement probabilities are computed from multiple circuit executions, like in a real quantum device. However, exact probabilities are preferable when emulating circuits in VQAs, especially when the optimisation method is based on gradients. Following the instructions to build the quantum circuits, we formulate the expressions to compute two objects for every time step, as in Eq. (4): one is the reduced density matrix \(\rho_{M\,(t)}\), which transmits information from \(t\) to \(t+1\), while the other is the expected value of some observable \(\left\langle O\right\rangle_{(t)}\) (the output before classical post-processing). Following the density matrix formalism, we get \[\begin{array}{rl}\rho_{M\,(t)}=&\mathrm{Tr}_{E}\left[U\;\rho_{(t)}\,U^{\dagger}\right],\\ \left\langle O\right\rangle_{(t)}=&\mathrm{Tr}\;\left[U\;\rho_{(t)}\,U^{\dagger}\,O\otimes I^{\otimes n_{M}}\right],\end{array} \tag{5}\] where \(\rho_{(t)}\) is the initial density matrix of the circuit at time \(t\), after the reset on register E and before applying the operator \(U=U(\mathbf{x}_{(t)},\mathbf{\theta})\). \(O\) is our observable, which will be, without loss of generality, diagonal. If the observable were not diagonal, we could take its spectral decomposition and apply the necessary transformations to the circuit(s) before measurement. The output (prediction) at time \(t\) is \[\overline{y}_{(t)}=f(\left\langle O\right\rangle_{(t)}), \tag{6}\] where \(f\) is an arbitrary function. We restrict ourselves to a single-variable output. However, the generalisation to multiple variables is straightforward by considering a set of multiple observables instead of a single one. This section aims to derive a tensor representation of both \(\rho_{M\,(t)}\) and \(\left\langle O\right\rangle_{(t)}\) to (i) provide an explicit formula that can be implemented in a classical computer as matrix products, and (ii) show how the information is transmitted through the quantum circuit. For the derivation of the formulas, a symbol and a group of \(r\) indices (lowercase letters) represent every mathematical object that is an \(r\)-rank tensor. The ordering of the indices follows the criteria below. A subindex in parentheses refers to the current time step, and we can sometimes omit it. 
The rest of the indices identify the coefficients for every projector or vector inside the Hilbert space. Indices above (below) correspond to the basis vectors in the Hilbert space (dual Hilbert space). For quantum operators over the full circuit and density matrices representing the \(n\) qubits, indices come in pairs. The first index of the pair corresponds to register E, while the second one corresponds to register M. We use the Einstein summation convention, i.e., terms are summed when upper and lower indices are repeated. Note that operations do not commute in general. See Appendix A for further explanation. From now on, the dimension of the Hilbert spaces is represented by \(N_{E}=2^{n_{E}}\) and \(N_{M}=2^{n_{M}}\). The Hilbert space dimension corresponding to the \(n=n_{E}+n_{M}\) qubits is \(N=2^{n}\).

The reduced density matrix after measurement at a time \(t\) is

\[\begin{array}{rl}\left(\rho_{M}\right)^{m}_{(t)n}=&U^{im}_{(t)kq}\;\rho^{kq}_{(t)lr}\;\left(U^{\dagger}\right)^{lr}_{(t)jn}\;\delta^{j}_{i}=\\ &\sum_{i=0}^{N_{E}-1}U^{im}_{(t)kq}\;\rho^{kq}_{(t)lr}\;\left(U^{\dagger}\right)^{lr}_{(t)in}.\end{array} \tag{7}\]

By decomposing the operator into an encoding operator \(V(\mathbf{x}_{(t)})\) that acts only over register E and an operator \(W(\mathbf{\theta})\) that entangles both registers (see Fig. 3), we can separate the density matrix before applying \(W\) and then have that

\[\left(\rho_{M}\right)^{m}_{(t)n}=\sum_{i=0}^{N_{E}-1}W^{im}_{kq}\;\left(\left(\rho_{E}\right)^{k}_{(t)l}\left(\rho_{M}\right)^{q}_{(t-1)r}\right)\;\left(W^{\dagger}\right)^{lr}_{in}, \tag{8}\]

where \(\left(\rho_{E}\right)^{k}_{(t)l}=V^{k}_{(t)0}\left(V^{\dagger}\right)^{0}_{(t)l}\). We have omitted \((t)\) in \(W\) since it does not depend on time; all the blocks include the same \(W\). We can see that \(\rho_{M\,(t)}\) depends on \(\rho_{M\,(t-1)}\) and \(\mathbf{x}_{(t)}\), as the hidden state in Eq. (4), unless the \(W\) operator is separable in the two registers (i.e. no entanglement between E and M).

Figure 2: General form of the QRNN circuit. Arrows show the information flux.

Figure 3: Decomposition of \(U\) operator into encoding (\(V\)) and evolution part (\(W\)). The latter is the same for all the circuit blocks since \(\mathbf{\theta}\) does not change during a circuit evaluation.

Following Eq. (5), the expectation value of some diagonal observable \(O^{i}_{m}=d^{i}\delta^{i}_{m}\) (no sum: the index is above in both items) at time \(t\) is

\[\left\langle O\right\rangle_{(t)}=(\rho_{ij}^{\prime kl}\,O^{i}_{m}\,\delta^{j}_{n})\delta_{kl}^{mn}=(\rho_{ij}^{\prime kl}\,d^{i}\delta_{m}^{i}\delta_{n}^{j})\delta_{kl}^{mn}=(\rho_{ij}^{\prime kl}\,d^{i}\delta_{n}^{j})\delta_{kl}^{in}, \tag{9}\]

where \(\rho^{\prime}\) is the density matrix after applying the \(U\) operator. By decomposing this density matrix, we have

\[\left\langle O\right\rangle_{(t)}=\sum_{i=0}^{N_{E}-1}d^{i}\sum_{n=0}^{N_{M}-1}W_{kq}^{in}\left((\rho_{E})_{(t)l}^{k}(\rho_{M})_{(t-1)r}^{q}\right)\left(W^{\dagger}\right)_{in}^{lr}, \tag{10}\]

which, again, proves the dependency of this observable with respect to both the inputs \(\mathbf{x}_{(t)}\) and the reduced density matrix from the previous step \(\rho_{M\;(t-1)}\), provided that \(W\) is an operator entangling E and M. In Eq. (4), the output depends on the current hidden state, not the previous one; however, the recursion makes it dependent on \(\mathbf{h}_{(t-1)}\) too.
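For emulation, Eqs. (8) and (10) reduce to tensor contractions. Below is a minimal NumPy sketch, assuming the composite index ordering (E, M) used above, i.e., that the \(N\times N\) matrix \(W\) is reshaped so its first and third axes index register E and its second and fourth axes index register M; the function name and interface are illustrative.

```python
import numpy as np

def qrnn_block(W, rho_E, rho_M, d):
    """One QRNN block in the density-matrix formalism.

    W     : (N, N) unitary of the entangling part, N = N_E * N_M
    rho_E : (N_E, N_E) encoded state V|0><0|V^dagger of register E
    rho_M : (N_M, N_M) reduced memory state from the previous step
    d     : (N_E,) diagonal of the observable O on register E
    """
    N_E, N_M = rho_E.shape[0], rho_M.shape[0]
    W4 = W.reshape(N_E, N_M, N_E, N_M)                    # W^{im}_{kq}
    rho_in = np.kron(rho_E, rho_M).reshape(N_E, N_M, N_E, N_M)
    # sigma[i, m, n] = W^{im}_{kq} (rho_E rho_M)^{kq}_{lr} (W^dagger)^{lr}_{in}
    sigma = np.einsum('imkq,kqlr,inlr->imn', W4, rho_in, W4.conj())
    rho_M_next = sigma.sum(axis=0)                        # Eq. (8): sum over register E
    expval = np.einsum('i,imm->', d, sigma).real          # Eq. (10): m = n diagonal
    return rho_M_next, expval
```

Note that the intermediate tensor `sigma` is computed once and serves both outputs, which is exactly the reuse of Eqs. (8) and (10) discussed next.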
In most Quantum Machine Learning problems, the \(W\) operator is represented by a highly dense matrix (with essentially no null elements) that is difficult to decompose, since in general it is formed by several entanglement layers. During emulation it is therefore practical to build this operator once, keep it in memory, split it into parts, and operate with those parts as matrix products, which are very fast operations on classical computers. That splitting can be done by creating \(W^{i}\) matrices, which are \(N_{M}\times N\) dense matrices. With this derivation, we do not need to compute the complete \(N\times N\) density matrix before computing \(\rho_{M\;(t)}\) and \(\left\langle O\right\rangle_{(t)}\). Moreover, these operations are performed only once for both Eqs. (8) and (10), because the sum over \(n\) in Eq. (10) extracts the items with \(m=n\) from Eq. (8). In a matrix representation, this is the trace of each of the \(N_{E}\) matrices that summed together give rise to \(\rho_{M\;(t)}\).

## III Analytical derivatives

In Machine Learning, most parameter optimisation algorithms are based on gradients, which require some method to be computed. One is numerical differentiation, which is very inaccurate, and another one is symbolic differentiation, which is more computationally expensive [37]. Automatic Differentiation, with _backpropagation_ as its best-known algorithm, has many advantages. In QML, as current circuits are prone to errors and the outputs are obtained after running the circuit multiple times (shots), systematic and stochastic errors seriously affect the estimation of the cost functions. Errors are amplified when computing numerical gradients, which makes them much more difficult to compute unless we run circuits with a great number of shots and apply sophisticated error mitigation techniques. Automatic Differentiation as done in classical computing is not possible, because we would need to store intermediate results through the network, and intermediate quantum states cannot be accessed in real quantum devices without affecting them: to access them, we need to measure them, and they collapse. Despite these restrictions, the Parameter Shift Rule (PSR) [15; 33] provides a method to compute analytical gradients in quantum hardware. Hereafter, we consider circuits with parameters encoded into rotation gates \(R_{i}(\alpha)=e^{-i\alpha\sigma_{i}}\) generated by a Pauli matrix \(\sigma_{i}\), because they are a set of fundamental gates used for building PQCs in gate-based quantum devices.

### First-order partial derivatives

We define the shift expectation value at time \(t\), \(\left.\left\langle O\right\rangle_{(t)}^{\chi}\right|_{ri}\), as the expectation value at time \(t\) after shifting the parameter \(\theta_{i}\) by a value \(\chi\) at block \(t_{r}\leq t\) in the quantum circuit. We recall that if \(t_{r}>t\), the shift does not affect the output at time \(t\). For the set of gates that we consider, the shifts are \(\pm\frac{\pi}{2}\) [33]; in the remainder we therefore write only the sign. The partial derivative of the observable at time \(t\), \(\left\langle O\right\rangle_{(t)}\), is then

\[\partial_{i}\left\langle O\right\rangle_{(t)}\equiv\frac{\partial\left\langle O\right\rangle_{(t)}}{\partial\theta_{i}}=\sum_{r=0}^{t}\frac{1}{2}\left.\left(\left\langle O\right\rangle_{(t)}^{+}-\left\langle O\right\rangle_{(t)}^{-}\right)\right|_{ri}, \tag{11}\]

derived in Appendix B.
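The sum in Eq. (11) translates directly into a loop over blocks. A minimal sketch follows, where `run_qrnn` is a hypothetical emulator call taking one parameter vector per block (so a shift can be applied in block \(t_{r}\) only) and returning the outputs \(\langle O\rangle_{(0)},\dots,\langle O\rangle_{(T-1)}\):

```python
import numpy as np

def psr_partial(run_qrnn, theta, i, t, T):
    """Parameter-shift estimate of d<O>_(t) / d theta_i, following Eq. (11)."""
    grad = 0.0
    for r in range(t + 1):                 # shifts in blocks after t have no effect
        plus = [theta.copy() for _ in range(T)]
        minus = [theta.copy() for _ in range(T)]
        plus[r][i] += np.pi / 2            # shift only the occurrence in block t_r
        minus[r][i] -= np.pi / 2
        grad += 0.5 * (run_qrnn(plus)[t] - run_qrnn(minus)[t])
    return grad
```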
In classical RNNs, the chain rule propagates backwards in time, giving rise to _backpropagation through time_ (BPTT) [38]. This method is necessary for computing gradients in RNNs, just as backpropagation is used in deep learning models, but it has some caveats. Firstly, it requires computing many terms, because the network must be unrolled for as many time steps as the series contains; when contributions from the beginning of the series are negligible, truncation methods are a possible solution [39]. Secondly, the propagation backwards in time requires several weight-matrix products. Instability manifests as _vanishing_ (_exploding_) gradients if the eigenvalues of the weight matrices are lower (higher) than \(1\) [40]. Truncation can partially address this problem [41; 39].

From equation (11), computing the partial derivative of the observable at time \(t\) requires \(2t\) function evaluations with this method. Therefore, computing the gradient requires \(2tN_{\theta}\) function evaluations, with \(N_{\theta}\) the number of parameters of our ansatz. The loss function for training depends on the \(T\) time steps. We can evaluate all the outputs in a single circuit evaluation, i.e. running the circuit a given number of times (shots) with all the parameters fixed. Thus, in general, the number of function evaluations needed to compute the exact gradient of the loss function is \(2TN_{\theta}\). This is worse than the case of circuits without mid-circuit measurements. Previous proposals implement techniques to reduce the number of circuit evaluations [42; 43; 33], but not for this type of intermediate-measurement-based circuits.

Instability problems cannot appear in Quantum Neural Networks in the form of _exploding gradients_, because quantum circuits naturally implement unitary operations [44] before measuring. Unitary operations are a restriction imposed on some classical ML models precisely to avoid exploding gradients, and there are ways to implement them without losing too much expressivity [45]. In the case of a QRNN, it is straightforward to see from equation (11) that exploding gradients cannot appear: the difference of two bounded expectation values cannot blow up, and the partial derivative only involves a sum of such terms. The _vanishing gradient_ problem does appear in VQAs, where parameterised circuits can suffer from barren plateaus in the optimisation landscape [46; 47]. In the case of RNNs, vanishing gradients cause loss of memory: the network stops depending on past inputs at some moment during the training. In the QRNN model, dependencies on past inputs gradually vanish too, because, at time \(t\), the network starts in a mixed state, \(\rho_{E\,(t)}\otimes\rho_{M\,(t-1)}\), evolved by a unitary operator. The terms from \(\rho_{M\,(t-1)}\) are attenuated after these operations and many circuit blocks.
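As a concrete check of the evaluation counts above, take the case-(b) configuration used later in Section IV (\(T=20\) time steps and \(N_{\theta}=43\) parameters, cf. Table 1). One exact gradient of the loss then costs

\[2\,T\,N_{\theta}=2\cdot 20\cdot 43=1720\]

circuit evaluations, whereas a 2-point finite-difference gradient, which shifts a parameter simultaneously in all blocks, needs only about \(2N_{\theta}=86\) evaluations of the loss.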
### Second-order partial derivatives

Some optimisers require the computation of the Hessian matrix, in addition to the Jacobian (gradient). Some authors use the PSR to compute the Hessian [48; 49] in PQCs. Here, we provide the method for computing second-order partial derivatives in the QRNN by applying the PSR. We define the double-shift expectation value at time \(t\), \(\left.\left\langle O\right\rangle_{(t)}^{\chi\lambda}\right|_{sj}^{ri}\), as the expectation value at time \(t\) after shifting the parameter \(\theta_{i}\) by a value \(\chi\) in block \(t_{r}\) and the parameter \(\theta_{j}\) by a value \(\lambda\) in block \(t_{s}\) of the quantum circuit. The second-order partial derivatives are

\[\partial_{i}\partial_{j}\left\langle O\right\rangle_{(t)}\equiv\frac{\partial^{2}\left\langle O\right\rangle_{(t)}}{\partial\theta_{i}\partial\theta_{j}}=\frac{1}{4}\sum_{r}^{t}\sum_{s}^{t}\left.\left(\left\langle O\right\rangle_{(t)}^{++}+\left\langle O\right\rangle_{(t)}^{--}\right)\right|_{sj}^{ri}-\frac{1}{4}\sum_{r}^{t}\sum_{s}^{t}\left.\left(\left\langle O\right\rangle_{(t)}^{+-}+\left\langle O\right\rangle_{(t)}^{-+}\right)\right|_{sj}^{ri}, \tag{12}\]

for \(i\neq j\). Moreover,

\[\partial_{i}^{2}\left\langle O\right\rangle_{(t)}\equiv\frac{\partial^{2}\left\langle O\right\rangle_{(t)}}{\partial\theta_{i}^{2}}=\frac{1}{2}\sum_{r}^{t}\sum_{s}^{r-1}\left.\left(\left\langle O\right\rangle_{(t)}^{++}+\left\langle O\right\rangle_{(t)}^{--}\right)\right|_{si}^{ri}+\frac{1}{2}\sum_{r}^{t}\left.\left\langle O\right\rangle_{(t)}^{++}\right|_{ri}^{ri}-\frac{1}{2}\sum_{r}^{t}\sum_{s}^{r-1}\left.\left(\left\langle O\right\rangle_{(t)}^{+-}+\left\langle O\right\rangle_{(t)}^{-+}\right)\right|_{si}^{ri}-\frac{1}{2}t\left\langle O\right\rangle_{(t)} \tag{13}\]

for \(i=j\). These are derived in Appendix B. Considering these expressions, the symmetry \(\partial_{i}\partial_{j}=\partial_{j}\partial_{i}\), and the fact that we can recycle the intermediate calculations for the final estimation of the Hessian of the loss function, the total number of function evaluations is \(2T^{2}N_{\theta}^{2}+1\) (proved in Appendix B). The symmetries allow us to reduce the computational cost; however, the scaling with the number of time steps and parameters is quadratic, and some approximations would be needed to improve it. Both gradient and Hessian calculations are easy to parallelise, so it would be feasible to use multiple Quantum Processing Units (QPUs) simultaneously to perform several circuit evaluations. The problem is the high noise level of current quantum hardware, together with the different noise models of different QPUs, which could complicate the optimisation process. Further investigation is required to see the scope of these architectures.

## IV Results

We have implemented an algorithm that emulates a Quantum Recurrent Neural Network, using the definitions and methods described in the previous sections. The model is based on a quantum circuit that uses a hardware-efficient ansatz. This ansatz uses layers of rotation gates for encoding classical inputs into the quantum circuit, and alternates layers of single-qubit rotation gates parameterised by the set \(\mathbf{\theta}\) with layers of CZ gates. The explicit circuit ansatz \(U(\mathbf{x}_{(t)},\mathbf{\theta})\), used to test the performance of the general model, is represented in Fig. 4. The encoding part \(V(\mathbf{x}_{(t)})\) acts only over register E qubits and repeats the encoding of each input value \(R+1\) times in order to achieve a better expressivity [50; 51; 52]. The evolution and entanglement operator \(W(\mathbf{\theta})\) does not vary along the circuit, since it always depends on the same parameters. We try to maximise its expressibility by repeating the entangling layers several times with different parameters.

In order to test the ideal QRNN emulation and its learning capabilities, we use three datasets: (a) a damped triangular signal, (b) a non-linear damping signal with a sinusoidal perturbation, and (c) a set consisting of two non-linear damping signals as input and a linear combination of them as output. These three examples are meaningful for two reasons: the signals are nonlinear, a feature that hinders the training, and they are different enough to test several quantum circuit configurations. Moreover, the third one demonstrates the applicability of the model to, at least, two-variable series. Details about data generation are included in Appendix C. The network is trained by optimising the set of parameters \(\mathbf{\theta}\), using the L-BFGS-B algorithm [53].
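A minimal sketch of this training loop follows, assuming SciPy's L-BFGS-B implementation; `qrnn_predict` stands for a hypothetical call into the density-matrix emulator returning the predicted output points for one input window:

```python
import numpy as np
from scipy.optimize import minimize

def train(qrnn_predict, windows, targets, n_params, seed=0):
    """Fit the QRNN parameters by minimising the RMSE over training windows."""
    def rmse(theta):
        preds = np.array([qrnn_predict(theta, w) for w in windows])
        return np.sqrt(np.mean((preds - targets) ** 2))

    rng = np.random.default_rng(seed)
    theta0 = rng.uniform(0.0, 1.0, n_params)      # random initialisation in [0, 1)
    # With jac=None the optimiser uses its default 2-point finite differences;
    # an analytical gradient assembled from Eq. (11) can be passed via jac=.
    result = minimize(rmse, theta0, method="L-BFGS-B")
    return result.x, result.fun
```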
We divide the series into windows of \(T=20\) points, and the task is to predict 5 points of the output variable. The windows are divided into three sets: 20 % for testing, 20 % of the remaining for validation, and the rest for training. The distribution of validation samples is the same for the three datasets, but randomly generated. The loss function is the Root Mean Square Error (RMSE) between the output \(\overline{y}\) and the target series \(y\) over all the training windows. The prediction for every time step is

\[\overline{y}_{(t)}=\left\langle Z^{\otimes n_{E}}\right\rangle_{(t)}+b, \tag{14}\]

where \(Z\) is the \(\sigma_{z}\) Pauli matrix and \(b\) is a bias, which is an extra trainable parameter. These values are computed from an exact expectation value, without the statistical variations that typically arise when sampling the outputs in a real quantum computer. The network hyperparameters used for each case are indicated in Table 1. They were not optimised during this analysis, since the scope of this section is only to test the algorithm; further work would assess different ansatz architectures, hyperparameters and optimisation techniques.

We have executed 8 different parameter optimisations for each dataset, randomly initialising the rotation-gate parameters in the interval \([0,1)\). The set of parameters after optimisation strongly depends on the initialisation, which affects the training and leads to differences in prediction accuracy. Nonetheless, the results never show a relative RMSE greater than 10 % of the series range (from -0.75 to 0.75). The parameter set leading to the lowest RMSE on the validation set is selected as the solution among the 8 optimisation results. The corresponding datasets and their predictions are shown in Fig. 5. Table 2 contains the resulting RMSE of the estimated points with respect to the training, validation and test targets. The fourth RMSE corresponds to the prediction of the whole test series, not only the last 5 points of each window, obtained by shifting the input windows 5 by 5 points.

The L-BFGS-B optimiser uses the gradient of the loss function to find its minimum. This algorithm does not compute the complete Hessian, but approximates it, so the analytical Hessian is not needed. Furthermore, the analytical gradient can be substituted by a numerical one if the precision of the loss function is high enough. We use the default 2-point finite-difference estimation with an absolute step size \(\epsilon=1\times 10^{-8}\) implemented in the optimiser, as well as the analytical form of the gradient. For the latter, we evaluate the partial derivatives of the circuit outputs with the formulas provided in Section III, and then apply the chain rule to calculate the gradient of the loss function.

\begin{table} \begin{tabular}{c|c|c|c|c|c} Case & \(n_{E}\) & \(n_{M}\) & \(L\) & \(R\) & \(N_{\mathbf{\theta}}\) \\ \hline (a) & 1 & 2 & 2 & 3 & 31 \\ (b) & 2 & 3 & 2 & 1 & 43 \\ (c) & 2 & 3 & 5 & 3 & 100 \\ \end{tabular} \end{table} Table 1: Quantum circuit configuration for each case analysed. In case (b), we re-upload input data in two qubits.

Figure 4: QRNN ansatz, \(U(\mathbf{x}_{(t)},\mathbf{\theta})\), consisting of two parts. The first one is the data encoding, where the gates inside the orange box are repeated with different parameters, which are a subset of the trainable parameters, \(\mathbf{\alpha}_{i}^{r}\in\{\mathbf{\theta}\}\); we use one qubit per input variable. The second one is the evolution and entanglement part, where the blue box is repeated \(L\) times (layers). Each layer is a column of \(U_{3}\) rotations parameterised by a triple of parameters, \(\mathbf{\beta}_{i}^{l}\in\{\mathbf{\theta}\}\), and CZ gates entangling every qubit from E with every qubit from M. A final column of \(U_{3}\) gates is applied over register E before measurement.
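For reference, the post-processing of Eq. (14) is a one-liner once the exact outcome probabilities of register E are available; a sketch, assuming the probabilities come from the density-matrix emulator:

```python
import numpy as np

def prediction(p_E, b):
    """Eq. (14): <Z^(x)n_E> + b from the outcome probabilities p_E of
    register E at one time step; b is the trainable bias."""
    n_E = int(np.log2(len(p_E)))
    # the Z^(x)n_E eigenvalue of basis state |k> is (-1)^popcount(k)
    parity = np.array([(-1) ** bin(k).count("1") for k in range(2 ** n_E)])
    return float(parity @ p_E) + b
```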
We have compared analytical and numerical optimisation, as shown in Fig. 6 and Table 2. The optimisation converges to close RMSE values for both the training and validation loss curves, despite following different trajectories in some zones. Although analytical calculations are commonly used in neural networks to improve accuracy [37], we cannot ensure that they are better in the QML tasks presented here when emulating the circuit without any noise. Apart from that, the plots show a quick convergence in case (a), but slower ones in cases (b) and (c). Validation curves (and final loss values) are above the training curves in cases (a) and (c); however, they remain relatively stable and close to the training ones. Case (b) shows validation RMSE values below the training ones, due to the distribution of validation samples; a different choice would return distinct validation values. In fact, a higher number of samples should reduce the dependence on the choice of validation samples. In contrast, case (c) shows a training RMSE that drifts slightly below the validation one, due to mild overfitting that is not meaningful in this context. Those issues should be addressed in a complete machine learning project; since they are beyond the aim of this work, we leave them for future research.

In ideal emulation without sampling (exact), the emulated quantum circuit returns an expectation value with a precision given by the classical machine variables (e.g. float-64). As, in general, an expectation value cannot be extracted from a quantum circuit in a single shot, its evaluation by multiple repetitions (shots) of the circuit leads to stochastic variations due to this sampling process. That makes numerical gradients very inaccurate, and we must use analytical gradients.
The PSR allows us to tackle this problem in real quantum circuits, but ideal exact emulation using numerical gradients requires fewer computational resources for the cases studied. The density-matrix emulation enables exact probability calculation, but also the simulation of stochastic noise to investigate its effect on the algorithm's performance; in the latter case, we would need analytical gradients.

The final loss values and the plots in Fig. 5 show the ability of the network to model non-trivial patterns in univariate and multivariate input series. The univariate output series estimations approximate the output targets with a reasonable error, because the RMSE is, in the worst case, an order of magnitude lower than the range of output values (from -0.75 to 0.75). The lowest error corresponds to the univariate triangular signal, because it is the simplest case; note that, in this case, the test RMSE is lower because the series decreases in amplitude with time. Meanwhile, the errors for the damped oscillator signals increase to higher values, but the network can reliably extrapolate the time series into the test region, for both the 1-variable and 2-variable cases. Apart from that, the numbers of iterations that the optimiser needs during training are comparable. Furthermore, the more complex the dataset, the more parameters are needed, so qubits and layers are added with increasing complexity from case (a) to case (c).

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} **Case** & **Gradient** & \(n_{\text{it}}\) & \(n_{\text{fev}}\) & \(n_{\text{jev}}\) & **RMSE tr.** & **RMSE val.** & **RMSE tes.** & **RMSE fte.** \\ \hline (a) & analytical & 600 & 701 & 701 & 0.006 & 0.011 & 0.004 & 0.005 \\ (a) & numerical & 553 & 20896 & 653 & 0.005 & 0.008 & 0.003 & 0.004 \\ (b) & analytical & 579 & 664 & 664 & 0.091 & 0.038 & 0.118 & 0.082 \\ (b) & numerical & 581 & 28556 & 649 & 0.090 & 0.041 & 0.125 & 0.082 \\ (c) & analytical & 1000 & 1082 & 1082 & 0.035 & 0.048 & 0.050 & 0.044 \\ (c) & numerical & 1000 & 110393 & 1082 & 0.036 & 0.048 & 0.051 & 0.045 \\ \end{tabular} \end{table} Table 2: Optimisation solutions for each case with analytical and numerical gradients: number of iterations (\(n_{\text{it}}\)), number of function evaluations (\(n_{\text{fev}}\)), number of Jacobian evaluations (\(n_{\text{jev}}\)), and RMSE on the training, validation, test and full-test sequences.

Figure 5: Results of the learning task with each one of the three datasets. Here, numerical gradients were used. Each series was divided into windows of 20 points, each one being a sample. The neural network must predict the value of the last 5 outputs for each window, by reducing the RMSE with respect to the last 5 points of the target. Dashed lines are the inputs \(\mathbf{x}_{(t)}\), from which we make the prediction. Solid lines represent the targets. Points are the predictions. Windows are represented in orange (red), blue (dark blue) or green (dark green, pink) depending on whether the window is used for training (tra), validation (val) or testing (tes), respectively. In the test region, we add a window to make the prediction for the full test sequence (fte).
In short, our results confirm the applicability of QRNN models to univariate and multivariate time series prediction, and their emulation through the density matrix formalism.

## V Discussion

We have used the density matrix formalism to derive the formulation of the hidden states and the outputs in a Quantum Recurrent Neural Network with intermediate measurements, as well as its first- and second-order partial derivatives with respect to its trainable parameters. With the density matrix formulation, it is possible to directly perform an ideal emulation of the parameterised quantum circuit that makes up the network, and we showed the results for a machine learning task with three different datasets. The analytical gradients will permit training networks with noisy outputs coming from quantum circuits.

The QRNN was presented in previous works [24, 25, 26, 27, 28, 29]. Some of them delve into the inner structure of such quantum circuits, but an explicit formulation showing how quantum states are operated on through the quantum circuit was missing. With our work, we contribute to a better understanding of how information is processed inside the network, and we compare it with the classical RNN. Moreover, the given formulas allow a direct implementation of a code for emulation.

The formulation provides several remarkable advantages over other possible methods. First of all, it avoids computing the complete density matrix of the \(n\)-qubit quantum state, whose number of elements scales as \(\mathcal{O}(4^{n})\), by splitting the ansatz quantum operator. Moreover, the network outputs are directly computed from the terms that build up the reduced density matrix. Secondly, the density matrix formulation allows the computation of exact expectation values in circuits with intermediate measurements, which is not feasible with state-vector emulation for real cases of interest. Consequently, the gradients can be computed with high precision on a classical computer.

We have seen that parameter optimisations with numerical and analytical gradients converge to close values. The great difference between both methods is the computational cost. Analytical forms require \(2T\) circuit evaluations per gradient component, where \(T\) is the number of time steps (circuit blocks inside a single circuit). Meanwhile, a 2-point finite-difference approximation simultaneously shifts the parameter in all circuit blocks, needing only 2 circuit evaluations per gradient component. Nonetheless, when the precision of the variables is lost, analytical forms become necessary; we therefore expect them to be unavoidable in emulations with noise and in executions on real quantum computers. Our formulation lets us use numerical gradients due to their high precision, at least for the datasets that were studied. Future work may explore the effect of quantum circuit noise. Similar conclusions apply to the Hessian calculation, which requires a factor \(T^{2}\) more circuit evaluations to be computed analytically, but is usually approximated due to its high computational cost in most cases. In this context, the idea of distributed quantum computing, where several Quantum Processing Units work simultaneously, gathers strength.

Figure 6: Curves of optimisation for each of the three datasets, showing how the value of the RMSE varies with the iterations, for training and validation sets, and comparing results with both analytical and numerical gradients. The optimisations are those with lower final validation RMSE.
Despite the number of circuit evaluations needed, the QRNN model shows a significant feature: it requires only a few qubits. For the hardware-efficient ansatz and the datasets we studied, an ideal quantum processor of 5 qubits would be enough to run the circuits. The results are promising for potential use in real cases. To this end, optimisation of the model hyperparameters to enhance the power of the neural network, adaptation to quantum hardware limitations, research on circuit structures that capture the correlations behind complex datasets, and the adaptation of different optimisation techniques are starting points towards a generalised application of Quantum Recurrent Neural Networks to multivariate time series prediction.

###### Acknowledgements.

We thank the CESGA Quantum Computing group members for their feedback and the stimulating intellectual environment they provide. We especially thank Constantino Rodriguez Ramos for theoretical insights. This work was supported by Axencia Galega de Innovacion through the Grant Agreement "Despegamento dunha infraestructura baseada en tecnodoxias cuanticas da informacion que permitia physarum a I+D+I en Galicia" within the program FEDER Galicia 2014-2020. A. Gomez was supported by MICIN through the European Union NextGenerationEU recovery plan (PRTR-C17.11), and by the Galician Regional Government through the "Planes Complementarios de I+D+I con las Comunidades Autonomas" in Quantum Communication. Simulations in this work were performed using the Finisterrae III Supercomputer, funded by the project CESGA-01 FINISTERRAE III.

## Author Contributions

J. D. V., M. M. J. and A. G. conceived the problem. J. D. V. developed the mathematical model and programmed the emulator. D. F. reviewed the mathematical model. All the authors contributed to analysing the data and reviewing the manuscript.
2309.15018
Unidirectional brain-computer interface: Artificial neural network encoding natural images to fMRI response in the visual cortex
While significant advancements in artificial intelligence (AI) have catalyzed progress across various domains, its full potential in understanding visual perception remains underexplored. We propose an artificial neural network dubbed VISION, an acronym for "Visual Interface System for Imaging Output of Neural activity," to mimic the human brain and show how it can foster neuroscientific inquiries. Using visual and contextual inputs, this multimodal model predicts the brain's functional magnetic resonance imaging (fMRI) scan response to natural images. VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%. We further probe the trained networks to reveal representational biases in different visual areas, generate experimentally testable hypotheses, and formulate an interpretable metric to associate these hypotheses with cortical functions. With both a model and evaluation metric, the cost and time burdens associated with designing and implementing functional analysis on the visual cortex could be reduced. Our work suggests that the evolution of computational models may shed light on our fundamental understanding of the visual cortex and provide a viable approach toward reliable brain-machine interfaces.
Ruixing Liang, Xiangyu Zhang, Qiong Li, Lai Wei, Hexin Liu, Avisha Kumar, Kelley M. Kempski Leadingham, Joshua Punnoose, Leibny Paola Garcia, Amir Manbachi
2023-09-26T15:38:26Z
http://arxiv.org/abs/2309.15018v1
# Unidirectional Brain-Computer Interface: Artificial Neural Network Encoding Natural Images to fMRI Response in the Visual Cortex

###### Abstract

While significant advancements in artificial intelligence (AI) have catalyzed progress across various domains, its full potential in understanding visual perception remains underexplored. We propose an artificial neural network dubbed VISION, an acronym for "Visual Interface System for Imaging Output of Neural activity," to mimic the human brain and show how it can foster neuroscientific inquiries. Using visual and contextual inputs, this multimodal model predicts the brain's functional magnetic resonance imaging (fMRI) scan response to natural images. VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%. We further probe the trained networks to reveal representational biases in different visual areas, generate experimentally testable hypotheses, and formulate an interpretable metric to associate these hypotheses with cortical functions. With both a model and evaluation metric, the cost and time burdens associated with designing and implementing functional analysis on the visual cortex could be reduced. Our work suggests that the evolution of computational models may shed light on our fundamental understanding of the visual cortex and provide a viable approach toward reliable brain-machine interfaces. The source code can be found 1.

Ruixing Liang,\({}^{1,2}\) Xiangyu Zhang,\({}^{1}\) Qiong Li,\({}^{3}\) Lai Wei,\({}^{1,2}\) Hexin Liu,\({}^{1}\) Avisha Kumar,\({}^{1,2}\) Kelley M. Kempski Leadingham,\({}^{1,2}\) Joshua Punnoose,\({}^{1,2}\) Leibny Paola Garcia,\({}^{1}\) Amir Manbachi\({}^{1,2}\)

Footnote 1: Our open-access repository can be found via [https://github.com/Rxliang/VISION](https://github.com/Rxliang/VISION)

\({}^{1}\)Johns Hopkins University, \({}^{2}\)Johns Hopkins Medicine, \({}^{3}\)Pennsylvania State University

## 1 Introduction

A fundamental pursuit in neuroscience is to uncover the neural basis of perception. Translating 1-dimensional auditory cues, which primarily capture amplitude changes over time, into neural activity is relatively well understood [1]. On the other hand, visual stimuli's multifaceted 2D nature, encompassing attributes like color, texture, and depth, makes encoding more intricate [2]. Researchers frequently utilize functional magnetic resonance imaging (fMRI) to explore neural reactions triggered by visual stimuli. By measuring hemodynamic changes elicited by neural activity, fMRI illuminates the brain's approach to interpreting visual signals, serving as a potent modality in developing a brain-computer interface (BCI) aimed at modifying visual perception [3]. Consequently, there has been an increased emphasis on analyzing vision-related fMRI responses as subjects encounter various natural scenarios [4, 5]. Current standard continuous fMRI sessions, with a duration of 3-11 minutes and 3 mm resolution, come at a hefty cost of around $1325 per session [6]. To gather ample data, multiple fMRI sessions spread over a year are essential. The significant expenses have impeded obtaining high-quality data efficiently [7]. As a result, optimizing the use of existing datasets with minimal presuppositions to develop computational BCI models of human visual perception is paramount.
By doing so, actionable hypotheses that guide subsequent research in a more focused and evidence-based direction could be generated, ensuring the efficient utilization of resources.

Over the past few decades, deep learning (DL) has significantly transformed a myriad of scientific disciplines and industries. This transformation is largely attributed to the expanding scale of training datasets and the advancements in artificial neural networks (ANNs). Interestingly, given ANNs' inherent parallel with human brain neural pathways and their performance, the study of artificial and biological intelligence is increasingly converging [8, 9, 10]. Recognizing the potential of DL's nonlinearity and its robustness to noise, researchers have begun to leverage it in neuroscience [2].

Figure 1: Overview of VISION, an Artificial Neural Network estimating functional Magnetic Resonance Imaging (fMRI) of the visual cortex response to visual stimuli. VISION acts as a neural encoder that parallels the human visual cortex.

By treating neural responses, specifically fMRI data, as model inputs, several studies have employed generative ANN models to reconstruct visual stimuli, encompassing both static images and dynamic video streams [7, 11]. These DL models, often called "neural decoders" or "mind readers," provide a compelling avenue to decode brain activity. Conversely, there are DL models that predict neural responses based on visual stimuli, essentially acting as encoders. They offer valuable insights into the mechanisms within cascading neural circuits in the human brain, revealing the intricacies of neural computation. Crucially, by integrating these two methodologies--decoding and encoding--a holistic loop for brain-computer interfaces can be established [3]. However, a significant limitation lies in these encoding models' scalability and interpretability. DL frameworks are often described as black boxes, leading them to be perceived as agnostic computational models with limited transparency [12, 13, 14].

To address these limitations, we introduce a novel multimodal (i.e., text and images) ANN model, VISION, with a scaled structure for predicting voxel-by-voxel fMRI responses. VISION's accuracy has been evaluated for varying anatomical regions relevant to human visual processing and was found to align with hypotheses found in the neuroscience community [15, 16]. Additionally, we introduce a new approach for clear and quantifiable visual cortex functional analysis using class activation map (CAM)-based visualization and our purpose-built dataset [17]. This dataset is tailored to measure the model's attention across varied visual cues.

## 2 Methodology

### Multimodal Neural Encoding Model

As illustrated in Figure 2, VISION consists of two fundamental building blocks: a multimodal feature extractor and a dense-channel encoding interface network.

**Feature Extractor**: Inspired by recent work demonstrating that the visual cortex processes semantic contextual information in addition to visual input [15], the state-of-the-art pre-trained model BLIP [18] has been adopted as a feature extractor. The BLIP model consists of a vision transformer model [19] and three transformer-based models with a structure similar to the BERT model [20]. Instead of using image-text pairs as inputs to the BLIP model, only images have been used to reduce computational complexity. The output of the BLIP extractor is a high-dimensional feature containing textual information from the pre-training.
**Encoding Interface Network**: Given that the Multilayer Perceptron (MLP) draws inspiration from the structure of brain neurons, the encoding interface network's design was grounded in the MLP model. MLP-Mixer has demonstrated meritorious performance across various computer vision tasks [21]. Each MLP model consists of two fully-connected layers and one GELU activation function [22]. For the MLP model to better understand the features from BLIP, we performed a series of processing steps on the BLIP features. First, the BLIP features are converted into a 32×24 two-dimensional matrix, which we call a query; this reconstruction is intended to replicate the complex, layered organization of the cerebral cortex. 197 queries are produced due to the properties of the BLIP feature; each query is then processed by an MLP model and the results are combined as shown in Figure 2.

Figure 2: Overview of VISION model structure, consisting of a feature extractor network and an encoding interface network.

### Feature Space Visualization through Dimension Reduction

Dimension reduction plays a crucial role in visualization and as a preprocessing tool in deep learning due to the inherent challenges of high dimensionality [23]. Specifically, Uniform Manifold Approximation and Projection for Dimension Reduction (UMAP) has been employed to transform high-dimensional data into a compact representation, maintaining both local and global structures [24]. We extracted condensed features from the encoding interface network for visualization and paired each feature with augmented supercategories, enabling unsupervised visualization and evaluation of the model feature space.

### Neuroscientific Hypotheses Testing and Generation via CAM Visualization

In this study, we employed ScoreCAM to elucidate the pixel-wise contribution of the visual image input to each single-voxel prediction, which can intuitively be interpreted as the model's attention map, as illustrated in Figure 3 [17]. We hypothesize that the attention map of a visual cortex subregion from the VISION model reflects the input image region that its biological counterpart would process. By evaluating the similarities and distinctions across regions in the visual cortex (Figure 3), we not only verify established neuroscientific hypotheses but also pave the way for formulating new hypotheses for regions whose focus is not known. Our primary emphasis has been testing hypotheses related to the higher visual centers, particularly those concerning object comprehension [2]. First, a group of images with one main object was selected. Next, Segment Anything (SAM) was used to segment and extract the main object from the image (e.g., a cat or dog in the center) [25]. Then, we overlapped the attention maps (from ScoreCAM) with the segmented objects to compute the dissimilarity of the overlapping regions between the different visual cortex regions, further quantified by Kullback-Leibler divergence.
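A minimal sketch of this dissimilarity measure follows, assuming the ScoreCAM attention maps for the two regions of interest have already been computed on the same spatial grid; the function name is illustrative.

```python
import numpy as np

def attention_kl(att_a, att_b, eps=1e-12):
    """KL divergence between two ROI attention maps treated as
    probability distributions over pixels."""
    p = att_a.ravel() / att_a.sum()
    q = att_b.ravel() / att_b.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```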
These attention maps can also be used to predict the role of varying visual cortex regions in object comprehension by dividing the attention score on the primary object by the total attention score, as described in Equation 1:

\[P_{f}=\frac{1}{n}\sum\frac{\mu_{AF}}{\mu_{OF}+\mu_{AF}}, \tag{1}\]

where \(P_{f}\) is the probability of a visual cortex group belonging to a particular function \(f\), \(\mu_{AF}\) is the mean of the normalized attention distribution inside the area matching the function (e.g., object, edge), and \(\mu_{OF}\) is the mean of the attention outside that area. Using this process, the relationship between region characteristics of the visual image and neuronal activation can be mapped, which may spark new hypotheses on the role of varying visual cortex regions in visual processing. As this paper primarily focuses on visual comprehension, our analysis was limited to regions associated with the higher-order visual centers that control scene comprehension and object recognition [2].

## 3 Experiments

### Dataset

Our dataset comprised fMRI recordings from 8 participants as they viewed between 9,000 and 10,000 natural scenes [4]. We have integrated the Common Objects in Context (COCO) dataset, the source of the visual stimuli, to augment this dataset by adding text descriptions. This produced image-fMRI pairs and image-caption-fMRI triads. Given the original human captions' verbosity, we devised an automatic labeling and cross-validation pipeline inspired by the OpenCLIP library, preparing our model's multimodal input [26]. Unlike existing studies that focus solely on the five major visual cortical regions of interest (ROIs) [2], we encompass the entirety of the visual cortex's anatomical structure, for a total of 27 regions. Moreover, we have preprocessed resting-state fMRI scans using the dataset's atlas, delivering robust functional connectivity analysis suitable for heuristic hypothesis input and cross-validation. Lastly, 100 images have been labeled semi-automatically, based on the current understanding of visual cortex functions, to design our metric for functional analysis of different regions of the visual cortex [25].

### Experimental Settings and Neural Architecture Search

80% of the dataset from each participant has been allocated for training, and the remaining 20% has been reserved for evaluation and testing. This approach was implemented to ensure model generalization across individuals. Each model was tasked with predicting a hemisphere in a participant's brain. To evaluate prediction accuracy, noise-normalized accuracy is adopted. This entails the noise ceiling [27], given by Equation 2:

\[\mathrm{NC}=100\times\frac{\sigma_{\text{signal}}^{2}}{\sigma_{\text{signal}}^{2}+\sigma_{\text{noise}}^{2}}. \tag{2}\]

NC is the noise-ceiling value associated with each voxel, defined as the maximum percentage of variance in the voxel's responses contributed by signal (\(\sigma_{\text{signal}}\)) given the presence of estimated noise (\(\sigma_{\text{noise}}\)) provided by the dataset. Accuracy is then calculated according to Equation 3 and Equation 4 [12].
\[R_{v}=\mathrm{corr}\left(G_{v},P_{v}\right)=\frac{\sum_{t}\left(G_{v,t}-\bar{G}_{v}\right)\left(P_{v,t}-\bar{P}_{v}\right)}{\sqrt{\sum_{t}\left(G_{v,t}-\bar{G}_{v}\right)^{2}\sum_{t}\left(P_{v,t}-\bar{P}_{v}\right)^{2}}}, \tag{3}\]

\[\text{Accuracy }=\text{Median }\left\{\frac{R_{1}^{2}}{NC_{1}},\ldots,\frac{R_{v}^{2}}{NC_{v}}\right\}\times 100, \tag{4}\]

where \(G_{v}\) and \(P_{v}\) represent the ground-truth and predicted fMRI values of a voxel and its corresponding visual stimuli, indexed, respectively, by \(v\) and \(t\). Hyperparameter tuning and Neural Architecture Search (NAS) have been executed in parallel to fine-tune each model. Both have followed the Tree-structured Parzen Estimator (TPE) paradigm [28]. We adopted the Adam optimizer with a fine-tuned initial learning rate for each model. The training environment was based on PyTorch, distributed randomly between an RTX 3090 machine and an A100 machine.

### Results

#### 3.3.1 Benchmark

We have compared both the quantitative and qualitative outcomes of VISION models against two established baselines (i.e., the vanilla and baseline models) [4, 2, 12]. Furthermore, our models exhibit superior performance even when benchmarked against unpublished research in some regions [29]. A summary of the quantitative comparisons can be found in Figure 4. Pair-wise t-tests have been used to assess the significance of observed differences. Overall, for all-vertex accuracy, VISION models significantly exceed the baseline models. It is worth noting that improvements in the early visual cortex (i.e., V1, V2, V3) are minimal. This observation aligns with our initial hypothesis: these regions do not process semantic information. Additionally, the accuracy improvements in the peripheral regions are more significant, aligning with the neuroscientific understanding that semantic information is processed on the periphery [15]. This evidence demonstrates the alignment of VISION with recent visual cortex discoveries, warranting further investigation into the ability of this model to capture biological functionalities.

Figure 3: Illustration of ScoreCAM visualization to get the model's attention map of a specific region of interest in the visual cortex (i.e., hV4, V3v, and V1v).

#### 3.3.2 Feature Space

As illustrated in Figure 5, response patterns within all participants' VISION models tend to group semantically. Typically, such distinct clustering, where identical categories are close-knit while being separated from dissimilar ones, emerges in networks that have been specifically trained and regularized to differentiate between these categories. Moreover, when compared with previous works, this clustering performance is better [4]. For the "other" category, its broader nature results in a broader distribution. The overlap labeled "both" signifies the presence of a person and an animal within a single image, and its placement within the respective person and animal clusters follows expectations.
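The projection behind this feature-space view can be reproduced in a few lines; a sketch, assuming the umap-learn package and the condensed interface-network features as input:

```python
import umap

def feature_space_embedding(features, seed=0):
    """Project condensed interface-network features to 2-D for a
    Figure 5-style feature-space plot."""
    reducer = umap.UMAP(n_components=2, random_state=seed)
    return reducer.fit_transform(features)   # (n_images, 2) coordinates
```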
#### 3.3.3 Hypotheses Testing on Functional Connectivity

Following the traditional use of ScoreCAM on BLIP's vision encoder's last normalization layer, we have selected hV4, V3v, and V1v as testing regions. According to the KL divergence, the results show that hV4 is 3.72 times more similar to V3v than to V1v, which can also be visually inferred from Figure 3. This is consistent with functional connectivity analysis derived from resting-state fMRI and with current findings [16]. This high similarity demonstrates a new way of deciphering the brain by first validating existing neuroscientific hypotheses with attention and then calculating the probability of a region's possible function using the attention ratio (Equation 1). We test the object comprehension function of the hV4 region. Our results show that hV4 has a 59.5% probability of being used for object comprehension. This further shows the VISION model's resemblance to the biological visual cortex.

## 4 Conclusion

This paper introduces a transformer-based ANN model, termed VISION, designed to predict fMRI responses from visual stimuli. Utilizing a multimodal feature extractor, VISION processes visual cues and leverages pre-trained semantic information. This data then feeds into a dense-channel encoding interface network, significantly exceeding state-of-the-art accuracy. Evaluation of VISION demonstrates the model's performance and interpretability and suggests its generalizability to traditional computer vision tasks. We further analyze the model's attention map, using quantifiable metrics to test existing theories. This approach illuminates the potential benefits of integrating large-scale ANN models in neuroscience. This synergy promises progress in BCIs and presents an exciting pathway to advance our fundamental understanding of the visual cortex.

Figure 4: Prediction accuracy comparison with the vanilla model by Allen et al. [4] and the baseline model by Gifford et al. [12]. ns: no significance; *\(p<0.05\); **\(p<10^{-3}\); ***\(p<10^{-4}\); ****\(p<10^{-5}\)

Figure 5: Feature map visualization with the global and local link bundling overlay (black lines) for varying image features (i.e., other, animal, person, and both [animal + person]). Sample images are displayed accordingly on the right.

## 5 Acknowledgements

The authors declare that there is no conflict of interest. A.M., K.K.L., A.K., and R.L. acknowledge funding support from the Defense Advanced Research Projects Agency (DARPA) Award (N660012024075). Q.L. acknowledges funding support from the National Institutes of Health (NIH) Award (R01NS085200). J.P. acknowledges funding support from NIH Research Training Award (5T32AR67708-8).
2309.07193
A Robust SINDy Approach by Combining Neural Networks and an Integral Form
The discovery of governing equations from data has been an active field of research for decades. One widely used methodology for this purpose is sparse regression for nonlinear dynamics, known as SINDy. Despite several attempts, noisy and scarce data still pose a severe challenge to the success of the SINDy approach. In this work, we discuss a robust method to discover nonlinear governing equations from noisy and scarce data. To do this, we make use of neural networks to learn an implicit representation based on measurement data so that not only it produces the output in the vicinity of the measurements but also the time-evolution of output can be described by a dynamical system. Additionally, we learn such a dynamic system in the spirit of the SINDy framework. Leveraging the implicit representation using neural networks, we obtain the derivative information -- required for SINDy -- using an automatic differentiation tool. To enhance the robustness of our methodology, we further incorporate an integral condition on the output of the implicit networks. Furthermore, we extend our methodology to handle data collected from multiple initial conditions. We demonstrate the efficiency of the proposed methodology to discover governing equations under noisy and scarce data regimes by means of several examples and compare its performance with existing methods.
Ali Forootani, Pawan Goyal, Peter Benner
2023-09-13T10:50:04Z
http://arxiv.org/abs/2309.07193v1
# A Robust SINDy Approach by Combining Neural Networks and an Integral Form

###### Abstract

The discovery of governing equations from data has been an active field of research for decades. One widely used methodology for this purpose is sparse regression for nonlinear dynamics, known as SINDy. Despite several attempts, noisy and scarce data still pose a severe challenge to the success of the SINDy approach. In this work, we discuss a robust method to discover nonlinear governing equations from noisy and scarce data. To do this, we make use of neural networks to learn an implicit representation based on measurement data so that not only it produces the output in the vicinity of the measurements but also the time-evolution of output can be described by a dynamical system. Additionally, we learn such a dynamic system in the spirit of the SINDy framework. Leveraging the implicit representation using neural networks, we obtain the derivative information--required for SINDy--using an automatic differentiation tool. To enhance the robustness of our methodology, we further incorporate an integral condition on the output of the implicit networks. Furthermore, we extend our methodology to handle data collected from multiple initial conditions. We demonstrate the efficiency of the proposed methodology to discover governing equations under noisy and scarce data regimes by means of several examples and compare its performance with existing methods.

Keywords: Sparse regression, discovering governing equations, neural networks, nonlinear system identification, Runge-Kutta scheme.

+ Footnote †: journal: Journal of Computational and Graphical Statistics

## 1 Introduction

System identification is a crucial aspect of understanding and modeling the dynamics of various physical, chemical, and biological systems. Over the years, various powerful and efficient system identification techniques have been developed, and these methods have been applied in a wide range of applications; see, e.g., [1, 2, 3]. Traditionally, system identification techniques rely on prior model hypotheses. With a linear model hypothesis, several methodologies have been proposed; see, e.g., [1, 2]. However, for nonlinear system identification, defining a prior is challenging, and it is often done with the help of practitioners. Despite several earlier works [4, 5, 6], nonlinear system identification is still an active and exciting research field. Towards automatic nonlinear system identification, genetic algorithms and symbolic regression have shown their effectiveness and promise in discovering governing nonlinear equations using measurements [7, 8]. However, their computational expenses remain undesirable. Instead of building suitable functions in the spirit of symbolic regression, there has been a focus on sparsity-promoting approaches for nonlinear system identification [9, 10, 11]. They rely on the assumption that nonlinear dynamics can be defined by a few nonlinear basis functions from a dictionary containing a large collection of candidate basis functions. Such a technique enables the discovery of interpretable, parsimonious, and generalizable models that balance precision and performance. It is nowadays widely referred to as SINDy [11]. SINDy has been employed for a number of challenging model discovery problems, such as fluid dynamics [12], plasma dynamics [13], turbulence closures [14], mesoscale ocean closures [15], nonlinear optics [16], computational chemistry [17], and numerical integration [18].
Moreover, the results of SINDy have been extended widely to many applications, such as nonlinear model predictive control [19], rational functions [20, 21], enforcing known conservation laws and symmetries [12], promoting stability [22], generalizations for stochastic dynamics [23], and a Bayesian perspective [24]. Often, SINDy approaches require a reliable estimate of the derivative information, making them very challenging for noisy and scarce data regimes. Blending numerical methods [21, 25, 26, 27, 28] and weak formulations of differential equations [28] avoids these requirements, but the performance still deteriorates for low signal-to-noise measurements. In addition, the method in [28] relies on the choice of basis functions that allow writing differential equations in a weak formulation. The work in [29] utilizes ensemble concepts to improve the predictions, but it still requires reliable estimates of derivatives to some extent. To discover governing equations from noisy data, the authors in [30] proposed a scheme that aims to decompose the noisy signals into clean signals and the noise using a Runge-Kutta-based integration method. However, the scheme explicitly estimates the noise, making it harder to scale, and requires all the dependent variables to be available on the same time grid.

Recently, applications of deep neural networks (DNNs) have received attention in sparse-regression model discovery methods. For instance, in [31], a deep learning-based discovery algorithm has been employed to identify underlying (partial) differential equations. However, therein, only a single trajectory is considered to recover governing equations, whereas in many complex processes, we might require data for different parameters and initial conditions to explore rich dynamics and thus enable a reliable discovery of governing equations. Furthermore, the work [31] discovers governing equations based on estimating derivative information using automatic differentiation tools. However, we know that differential equations can also be written in an integral form, whereby numerical approaches can be employed as well; see, e.g., [21, 32].

In this paper, we discuss an approach, namely iNeural-SINDy, for the data-driven discovery of nonlinear dynamical systems using noisy data through the lens of SINDy. For this, we make use of DNNs to learn an implicit representation based on the given data set so that the network outputs denoised data, which is later utilized for the sparse regression to discover governing equations. To solve the sparse regression problem, we make use of not only automatic differentiation tools but also integral forms of differential equations. As a result, we observe a robust discovery of governing equations. We note that such a concept has recently been used in the context of neural ODEs in [33] to learn black-box dynamics using noisy and scarce data. We further discuss how to incorporate data coming from multiple initial conditions.

The rest of this paper is organized as follows. Section 2 briefly recalls the SINDy approach [11]. In Section 3, we propose a novel methodology for sparse regression to learn underlying governing equations by making use of DNNs, automatic differentiation tools, and numerical methods. Furthermore, in Section 4, we discuss its extension to multiple initial conditions and different parameters. In Section 5, we demonstrate the proposed framework by means of various synthetic noisy measurements and present a comparison with the current state-of-the-art approaches.
Finally, Section 6 concludes the paper with a brief summary and future research avenues.

## 2 A Brief Recap of SINDy

The SINDy algorithm is a nonlinear system identification approach that is based on the hypothesis that the governing equations of a nonlinear system can be given by selecting a few suitable basis functions, see, e.g., [11]. Precisely, it aims at identifying a few basis functions from a dictionary containing a large number of candidate basis functions. In this regard, sparsity-promoting approaches can be employed to discover parsimonious nonlinear dynamical systems with a good trade-off between model complexity and accuracy [34, 35]. Consider the problem of discovering nonlinear systems of the form:

\[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}(t)), \tag{1}\]

where \(\mathbf{x}(t)=\left[\mathbf{x}_{1}(t),\mathbf{x}_{2}(t),\dots,\mathbf{x}_{n}(t)\right]^{\top}\in\mathbb{R}^{n}\) denotes the state at time \(t\), and \(\mathbf{f}(\mathbf{x}):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a nonlinear function of the state \(\mathbf{x}(t)\). Towards discovering the function \(\mathbf{f}\) in (1), which defines the vector field or dynamics of the underlying system, we start by collecting time-series data of the state \(\mathbf{x}(t)\). Let us further assume that time derivative information of the state is available; if it is not readily available, we can approximate it using numerical methods, e.g., a finite difference scheme. Thus, consider that the data \(\{\mathbf{x}(t_{1}),\dots,\mathbf{x}(t_{\mathcal{N}})\}\) and its derivative \(\{\dot{\mathbf{x}}(t_{1}),\dots,\dot{\mathbf{x}}(t_{\mathcal{N}})\}\) are given. In the next step, we assemble the data in matrices as follows:

\[\mathbf{X}=\begin{bmatrix}\mathbf{x}(t_{1})^{\top}\\ \mathbf{x}(t_{2})^{\top}\\ \vdots\\ \mathbf{x}(t_{\mathcal{N}})^{\top}\end{bmatrix}=\begin{bmatrix}\mathbf{x}_{1}(t_{1})&\mathbf{x}_{2}(t_{1})&\cdots&\mathbf{x}_{n}(t_{1})\\ \mathbf{x}_{1}(t_{2})&\mathbf{x}_{2}(t_{2})&\cdots&\mathbf{x}_{n}(t_{2})\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{x}_{1}(t_{\mathcal{N}})&\mathbf{x}_{2}(t_{\mathcal{N}})&\cdots&\mathbf{x}_{n}(t_{\mathcal{N}})\end{bmatrix}, \tag{2}\]

where each row represents a snapshot of the state. Similarly, we can write the time derivative as follows:

\[\dot{\mathbf{X}}=\begin{bmatrix}\dot{\mathbf{x}}(t_{1})^{\top}\\ \dot{\mathbf{x}}(t_{2})^{\top}\\ \vdots\\ \dot{\mathbf{x}}(t_{\mathcal{N}})^{\top}\end{bmatrix}=\begin{bmatrix}\dot{\mathbf{x}}_{1}(t_{1})&\dot{\mathbf{x}}_{2}(t_{1})&\cdots&\dot{\mathbf{x}}_{n}(t_{1})\\ \dot{\mathbf{x}}_{1}(t_{2})&\dot{\mathbf{x}}_{2}(t_{2})&\cdots&\dot{\mathbf{x}}_{n}(t_{2})\\ \vdots&\vdots&\ddots&\vdots\\ \dot{\mathbf{x}}_{1}(t_{\mathcal{N}})&\dot{\mathbf{x}}_{2}(t_{\mathcal{N}})&\cdots&\dot{\mathbf{x}}_{n}(t_{\mathcal{N}})\end{bmatrix}. \tag{3}\]

The next key building block in the SINDy algorithm is the construction of a dictionary \(\Theta(\mathbf{X})\), containing candidate basis functions (e.g., constant, polynomial or trigonometric functions).
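As an illustration of how such a dictionary can be assembled in practice, the following minimal NumPy sketch builds a polynomial dictionary column by column; the function and its interface are ours and purely illustrative, not part of the original formulation.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_dictionary(X, degree=2):
    """Stack candidate functions column-wise: 1, x_i, x_i*x_j, ... up to `degree`.

    X has one row per time sample and one column per state variable.
    """
    n_samples, n_states = X.shape
    cols = [np.ones(n_samples)]                      # constant candidate
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_states), d):
            cols.append(np.prod(X[:, idx], axis=1))  # one monomial candidate
    return np.column_stack(cols)                     # shape (n_samples, D)
```

The matrix form of such a dictionary is written out next.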
For instance, our dictionary matrix can be given as follows:

\[\Theta(\mathbf{X})=\begin{bmatrix}\mathbf{1}&\mathbf{X}&\mathbf{X}^{\mathbb{P}_{2}}&\mathbf{X}^{\mathbb{P}_{3}}&\cdots\end{bmatrix}\in\mathbb{R}^{\mathcal{N}\times D}, \tag{4}\]

where \(D\) denotes the number of candidate functions, and in the above formulation polynomial terms are denoted by \(\mathbf{X}^{\mathbb{P}_{2}}\) or \(\mathbf{X}^{\mathbb{P}_{3}}\); to be more descriptive, \(\mathbf{X}^{\mathbb{P}_{2}}\) denotes the quadratic nonlinearities of the state \(\mathbf{X}\) as follows:

\[\mathbf{X}^{\mathbb{P}_{2}}=\begin{bmatrix}\mathbf{x}_{1}^{2}(t_{1})&\mathbf{x}_{1}(t_{1})\mathbf{x}_{2}(t_{1})&\cdots&\mathbf{x}_{2}^{2}(t_{1})&\mathbf{x}_{2}(t_{1})\mathbf{x}_{3}(t_{1})&\cdots&\mathbf{x}_{n}^{2}(t_{1})\\ \mathbf{x}_{1}^{2}(t_{2})&\mathbf{x}_{1}(t_{2})\mathbf{x}_{2}(t_{2})&\cdots&\mathbf{x}_{2}^{2}(t_{2})&\mathbf{x}_{2}(t_{2})\mathbf{x}_{3}(t_{2})&\cdots&\mathbf{x}_{n}^{2}(t_{2})\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \mathbf{x}_{1}^{2}(t_{\mathcal{N}})&\mathbf{x}_{1}(t_{\mathcal{N}})\mathbf{x}_{2}(t_{\mathcal{N}})&\cdots&\mathbf{x}_{2}^{2}(t_{\mathcal{N}})&\mathbf{x}_{2}(t_{\mathcal{N}})\mathbf{x}_{3}(t_{\mathcal{N}})&\cdots&\mathbf{x}_{n}^{2}(t_{\mathcal{N}})\end{bmatrix}.\]

In this setting, each column of the dictionary \(\Theta(\mathbf{X})\) denotes a candidate function for defining the function \(\mathbf{f}(\mathbf{x})\) in (1). We are interested in identifying a few candidate functions from the dictionary \(\Theta\) so that a weighted sum of these selected functions can describe the function \(\mathbf{f}\). For this, we can set up a sparse regression formulation to achieve this goal. Precisely, we seek to identify a sparse matrix \(\Xi=[\xi_{1},\ \xi_{2},\dots,\ \xi_{n}]\), where \(\xi_{i}\in\mathbb{R}^{D}\) with \(D\) denoting the number of columns in \(\Theta\), that determines which features from the dictionary are active and their corresponding coefficients. The SINDy algorithm formulates the sparse regression problem as an optimization problem as follows. Given a set of observed data \(\mathbf{X}\) and the corresponding time derivatives \(\dot{\mathbf{X}}\), the goal is to find the sparsest matrix \(\Xi\) that fulfills the following:

\[\dot{\mathbf{X}}=\Theta(\mathbf{X})\Xi.\]

However, finding such a matrix is an NP-hard problem. Therefore, there is a need to come up with a sparsity-promoting regularization; in this category, LASSO is a widely known approach [36, 37]. Despite its success, it does not necessarily yield the sparsest matrix, and the approaches discussed in, e.g., [38, 39] require prior information about how many non-zero elements are expected in the matrix \(\Xi\), which is not known. On the other hand, the authors in [11] discuss a sequential thresholding approach, where simple least-squares problems are solved iteratively, and at each step, coefficients below a given tolerance are pruned. An analysis of such an algorithm is discussed in [40]. We summarize the SINDy approach in Algorithm 1. Moreover, we mention that other regularization schemes or heuristics are discussed in [21, 22, 25, 41, 42], but in this work, we focus only on the sequential thresholding approach, as in Algorithm 1, due to its simplicity.
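To make the sequential thresholding of Algorithm 1 (stated next) concrete, we include a simplified NumPy sketch; it is ours, not a reference implementation, and the variable names are illustrative.

```python
import numpy as np

def stlsq(Theta, X_dot, tol=0.05, max_iter=10):
    """Sequentially thresholded least squares, mirroring Algorithm 1.

    Theta: (N, D) dictionary matrix; X_dot: (N, n) time derivatives.
    Returns Xi of shape (D, n), one coefficient column per state variable.
    """
    # Initial guess via an ordinary least-squares solve (Step 1).
    Xi = np.linalg.lstsq(Theta, X_dot, rcond=None)[0]
    for _ in range(max_iter):
        small = np.abs(Xi) < tol            # identify small coefficients
        Xi[small] = 0.0                     # exclude them
        for j in range(X_dot.shape[1]):     # refit the surviving terms
            keep = ~small[:, j]
            if keep.any():
                Xi[keep, j] = np.linalg.lstsq(
                    Theta[:, keep], X_dot[:, j], rcond=None)[0]
    return Xi
```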
```
0: Dictionary \(\Theta\), time-series data \(\mathbf{X}\), time derivative information \(\dot{\mathbf{X}}\), threshold value tol, and maximum iterations max-iter.
0: Estimated coefficients \(\Xi\) that define governing equations for nonlinear systems.
1: \(\Xi=(\Theta^{\top}\Theta)\backslash\Theta^{\top}\dot{\mathbf{X}}\)  \(\triangleright\) For the initial guess, solve a least-squares problem
2: \(k=1\)
3: while \(k<\texttt{max-iter}\) do
4: \(\texttt{small\_inds}=(\texttt{abs}(\Xi)<\texttt{tol})\)  \(\triangleright\) identifying small coefficients
5: \(\Xi(\texttt{small\_inds})=0\)  \(\triangleright\) excluding small coefficients
6: Solve \(\Xi=(\Theta^{\top}\Theta)\backslash\Theta^{\top}\dot{\mathbf{X}}\) subject to \(\Xi(\texttt{small\_inds})=0\)
7: \(k=k+1\)
```

**Algorithm 1** SINDy algorithm [11]

## 3 iNeural-SINDy: Neural Networks and Integrating Schemes Assisted SINDy Approach

A challenge in the classical SINDy approach discussed in the previous section is the availability of an accurate estimate of the derivative information. If the derivative information is inaccurate, the resulting sparse model may not accurately capture the underlying system dynamics. In this section, we present an approach that combines the SINDy framework with a numerical integration scheme and neural networks in a particular way so that a robust discovery of governing equations can be made amid poor signal-to-noise ratios and irregularities in the data. The methodology is inspired by the work [33]. The main components of the methodology are as follows. For given noisy data, we aim to learn an implicit representation using a neural network so that the network yields denoised data that still lie in the vicinity of the collected noisy data, and governing equations describing the dynamics of the denoised data can be obtained by employing SINDy. For SINDy, we utilize automatic differentiation tools to obtain the derivative information via the network and also make use of an integral form of the differential equations. In the following, we make these discussions more precise.

Consider noisy data \(\mathbf{y}(t)\in\mathbb{R}^{n}\) at the time instances \(\{t_{0},\ldots,t_{\mathcal{N}}\}\), i.e., \(\{\mathbf{y}(t_{0}),\ldots,\mathbf{y}(t_{\mathcal{N}})\}\). Moreover, \(\mathbf{y}(t)=\mathbf{x}(t)+\epsilon(t)\), where \(\mathbf{x}(t)\) and \(\epsilon(t)\) denote clean data and noise, respectively. Under this setting, we aim to discover the structure of the vector field \(\mathbf{f}\) by identifying the most active terms in the dictionary \(\Theta\) so that the following is satisfied:

\[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}). \tag{5}\]

Note that the noise \(\epsilon\) is unknown. In order to learn \(\mathbf{f}\) from \(\mathbf{y}\), we blend three ingredients together, which are discussed in the following.

1. **Sparse regression assumption:** In our setting, we utilize the principle of SINDy, which we discussed in the previous section. This means that the system dynamics (or the vector field defining the dynamics) can be represented by a few suitable terms from a dictionary of candidate functions. This allows us to obtain a parsimonious representation of dynamical systems and reduces the model complexity, leading to better generalization and interpretability of the models.

2. **Automatic differentiation to estimate derivative information:** As mentioned earlier, the SINDy algorithm requires accurate derivative information for the system, which can be challenging to obtain from experiments or to estimate using numerical methods.
To cope with this issue, we make use of neural networks with their automatic differentiation (AD) feature to estimate derivative information. The use of a DNN in combination with the SINDy algorithm was earlier discussed in [31], where it has been shown that nonlinear system dynamics can be discovered without the explicit need for accurate derivative information. We make use of a DNN to parameterize a nonlinear mapping from time \(t\) to the dependent variable \(\mathbf{y}(t)\). To that end, let us denote a DNN by \(\mathcal{G}_{\theta}\), where \(\theta\) contains the DNN parameters. The input to \(\mathcal{G}_{\theta}\) is time \(t\), and its output is \(\mathbf{y}(t)\), i.e., \(\mathbf{y}(t)=\mathcal{G}_{\theta}(t)\). However, in the case of noisy measurements \(\mathbf{y}(t)\) at times \(\{t_{0},\ldots,t_{\mathcal{N}}\}\), we expect \(\mathcal{G}_{\theta}\) to predict outputs in the proximity of \(\mathbf{y}\), i.e.,

\[\mathbf{y}(t)\approx\mathbf{x}(t)=\mathcal{G}_{\theta}(t),\quad t\in\{t_{0},\ldots,t_{\mathcal{N}}\}.\]

With the sparse regression hypothesis, we aim to learn a dynamical model for \(\mathbf{x}\), as it can be seen as a denoised version of \(\mathbf{y}\). For this, we construct a dictionary of possible candidate functions using \(\mathbf{x}\), which we denote by \(\Theta\big{(}\mathbf{X}(t)\big{)}\). Next, we require the derivative information of \(\mathbf{x}\) with respect to time \(t\). Since we have an implicit representation of \(\mathbf{x}(t)\) using a DNN, we can employ AD to obtain the required information. Having the dictionary and derivative information, we set up a sparse regression problem as follows:

\[\dot{\mathbf{X}}(t)=\Theta\big{(}\mathbf{X}(t)\big{)}\Xi, \tag{6}\]

where \(\Xi\) is the sparsest possible matrix, which selects the most active terms from the dictionary to define the dynamics. Finding the sparsest solution is computationally infeasible; we, thus, utilize the sequential thresholding approach as discussed in Algorithm 1 with minor modifications. Instead of solving the least-squares problems in Steps 1 and 6 of Algorithm 1, we minimize the loss function

\[\mathcal{L}(\theta,\Xi):=\sum_{i=0}^{\mathcal{N}}\lambda_{1}\|\mathbf{y}(t_{i})-\mathbf{x}(t_{i})\|+\lambda_{2}\|\dot{\mathbf{x}}(t_{i})-\Theta\big{(}\mathbf{x}(t_{i})\big{)}\Xi\| \tag{7}\]

over \(\theta\) and \(\Xi\), where \(\mathbf{x}(t_{i}):=\mathcal{G}_{\theta}(t_{i})\), and \(\lambda_{1}\) and \(\lambda_{2}\) are hyperparameters.

3. **Numerical integration scheme:** A dynamical system is a particle or an ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives [43]. To predict the evolution of the dynamical system, it is necessary to have an analytical solution of such equations or to integrate them over time through computer simulations. Therefore, we aim to incorporate the information contained in the integral form of dynamical systems while discovering governing equations via sparse regression, which is expected to make the discovery process robust to noise and scarcity of data. When differential equations are written in an integral form, derivative information is not required either; however, the resulting optimization problem then involves an integral. In this regard, one can employ the principle of Neural-ODEs [44] to efficiently solve such optimization problems.
One can also approximate the integral form using suitable integration schemes [45]; recently, the fourth-order Runge-Kutta (RK4) scheme [21] and linear multi-step methods [46] have been combined with SINDy. In this work, we make use of the RK4 scheme to approximate an integral. Following [21], our goal is to predict the state of a dynamical system \(\mathbf{x}(t_{k+1})\) at time \(t=t_{k+1}\) from the state \(\mathbf{x}(t_{k})\) at time \(t=t_{k}\), where \(k\in\{0,1,\ldots,\mathcal{N}-1\}\). By employing the RK4 scheme, \(\mathbf{x}(t_{k+1})\) can be computed as a weighted sum of four components, each being the product of the time-step and the vector field \(\mathbf{f}(\cdot)\) evaluated at specific locations. These components are computed as follows:

\[\mathbf{x}(t_{k+1})\approx\mathbf{x}(t_{k})+\frac{1}{6}h_{k}(\mathbf{a}_{1}+2\cdot\mathbf{a}_{2}+2\cdot\mathbf{a}_{3}+\mathbf{a}_{4}),\quad h_{k}=t_{k+1}-t_{k}, \tag{8}\]

where

\[\mathbf{a}_{1}=\mathbf{f}(\mathbf{x}(t_{k})),\quad\mathbf{a}_{2}=\mathbf{f}\Big{(}\mathbf{x}(t_{k})+h_{k}\frac{\mathbf{a}_{1}}{2}\Big{)},\quad\mathbf{a}_{3}=\mathbf{f}\Big{(}\mathbf{x}(t_{k})+h_{k}\frac{\mathbf{a}_{2}}{2}\Big{)},\quad\mathbf{a}_{4}=\mathbf{f}\Big{(}\mathbf{x}(t_{k})+h_{k}\mathbf{a}_{3}\Big{)}.\]

For the sake of simplicity, with a slight abuse of notation, the right-hand side of (8) is denoted by \(\mathcal{F}_{\texttt{Rk4}}\big{(}\mathbf{f},\mathbf{x}(t_{k}),h_{k}\big{)}\), i.e.,

\[\mathbf{x}(t_{k+1})=\mathbf{x}(t_{k}+h_{k})\approx\mathcal{F}_{\texttt{Rk4}}\big{(}\mathbf{f},\mathbf{x}(t_{k}),h_{k}\big{)}. \tag{9}\]

Like the SINDy algorithm, we collect samples from the dynamical system at times \(t=\{t_{0},\ldots,t_{\mathcal{N}}\}\) and define the time step as \(h_{k}:=t_{k+1}-t_{k}\). With the sparse regression assumption, we can write \(\mathbf{f}(\mathbf{x})=\Theta(\mathbf{x})\Xi\), where \(\Theta(\mathbf{x})\) is a dictionary and \(\Xi\) is a sparse matrix. Then, we can set up a sparse regression problem as follows. We seek to identify the sparsest matrix \(\Xi\) so that the following is minimized:

\[\sum_{k}\left\|\mathbf{x}(t_{k+1})-\mathcal{F}_{\texttt{Rk4}}\big{(}\Theta(\mathbf{x})\Xi,\mathbf{x}(t_{k}),h_{k}\big{)}\right\|.\]

When the RK4 scheme is merged with the previously discussed DNN framework, we apply a one-step-ahead prediction based on RK4-SINDy to the output of our DNN, i.e.,

\[\mathbf{x}_{\texttt{Rk4}}(t_{k+1})\approx\mathcal{F}_{\texttt{Rk4}}\Big{(}\mathbf{f},\mathbf{x}(t_{k}),h_{k}\Big{)}.\]

Having all these ingredients, we combine them to define a loss function to train our DNN, as well as to discover governing equations describing the underlying dynamics. To that end, we have the following loss function:

\[\mathcal{L}=\mu_{1}\mathcal{L}_{\texttt{MSE}}+\mu_{2}\mathcal{L}_{\texttt{deri}}+\mu_{3}\mathcal{L}_{\texttt{Rk4}},\quad\mu_{1},\mu_{2},\mu_{3}\in[0,1], \tag{10}\]

where \(\mathcal{L}_{\texttt{MSE}}\) is the mean square error (MSE) of the output of the DNN \(\mathcal{G}_{\theta}\) (denoted by \(\mathbf{x}\)) with respect to the collected data \(\mathbf{y}\), and \(\{\mu_{1},\mu_{2},\mu_{3}\}\) are positive constants determining the weight of the different losses in the total loss function. It is given as

\[\mathcal{L}_{\texttt{MSE}}=\frac{1}{\mathcal{N}}\sum_{k=1}^{\mathcal{N}}\left\|\mathbf{y}(t_{k})-\mathbf{x}(t_{k})\right\|_{2}^{2}. \tag{11}\]

It forces the DNN to produce output in the vicinity of the measurements, and \(\mu_{1}\) is its weight.
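As a remark, the one-step map \(\mathcal{F}_{\texttt{Rk4}}\) of (9), which enters the term \(\mathcal{L}_{\texttt{Rk4}}\) defined below, translates directly into code. The following minimal PyTorch sketch is ours (the dictionary vector field \(\mathbf{f}(\mathbf{x})=\Theta(\mathbf{x})\Xi\) is assumed to be supplied as a callable `f`):

```python
import torch

def rk4_step(f, x_k, h_k):
    # One RK4 update as in (8): x_{k+1} ≈ x_k + h_k/6 * (a1 + 2 a2 + 2 a3 + a4).
    a1 = f(x_k)
    a2 = f(x_k + 0.5 * h_k * a1)
    a3 = f(x_k + 0.5 * h_k * a2)
    a4 = f(x_k + h_k * a3)
    return x_k + (h_k / 6.0) * (a1 + 2.0 * a2 + 2.0 * a3 + a4)
```

Since every operation here is differentiable, gradients can flow through the predicted state to both the DNN parameters and the coefficient matrix \(\Xi\), which is what allows the joint gradient-based training described below.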
\(\mathcal{L}_{\texttt{deri}}\) is inspired by the sparse regression and aims to compute the sparse coefficient matrix \(\Xi\). It is computed as follows:

\[\mathcal{L}_{\texttt{deri}}=\frac{1}{\mathcal{N}}\sum_{k=1}^{\mathcal{N}}\left\|\dot{\mathbf{x}}(t_{k})-\Theta\big{(}\mathbf{x}(t_{k})\big{)}\Xi\right\|_{2}^{2}. \tag{12}\]

The term \(\mathcal{L}_{\texttt{Rk4}}\) encodes the capability of the vector field to predict the state at the next time step. It is the MSE between the output of the RK4 scheme and the output of the DNN, given as follows:

\[\mathcal{L}_{\texttt{Rk4}}=\frac{1}{\mathcal{N}-1}\sum_{k=1}^{\mathcal{N}-1}\left\|\frac{1}{h_{k}}\Big{(}\mathbf{x}(t_{k+1})-\mathcal{F}_{\texttt{Rk4}}\big{(}\Theta\big{(}\mathbf{x}(t_{k})\big{)}\Xi,\mathbf{x}(t_{k}),h_{k}\big{)}\Big{)}\right\|_{2}^{2}. \tag{13}\]

It is worth highlighting that the coefficient matrix \(\Xi\) is updated alongside the weights and biases of the DNN, and the dictionary terms are calculated as in (4). Furthermore, after a certain number of training epochs, we employ sequential thresholding on \(\Xi\) to remove small coefficients, as sketched in Algorithm 1, and update the remaining parameters thereafter. We summarize the procedure in Algorithm 2. Additional steps in Algorithm 2 are as follows. We train our network for initial iterations (denoted by init-iter) without employing sequential thresholding; this helps the DNN to learn the underlying dynamics of the dataset. Afterward, we employ sequential thresholding every \(q\) iterations. In the rest of the paper, the proposed methodology is referred to as iNeural-SINDy.

```
0: Data set \(\{\mathbf{y}(t_{0}),\mathbf{y}(t_{1}),\ldots,\mathbf{y}(t_{\mathcal{N}})\}\), tol for sequential thresholding, a dictionary containing candidate functions \(\Theta\), a neural network \(\mathcal{G}_{\theta}\) (parameterized by \(\theta\)), initial iterations (init-iter), maximum iterations max-iter, and parameters \(\{\mu_{1},\mu_{2},\mu_{3}\}\).
0: Estimated coefficients \(\Xi\), defining governing equations.
1: Initialize the DNN module parameters and the coefficients \(\Xi\)
2: \(k=1\)
3: while \(k<\texttt{max-iter}\) do
4: Feed time \(t_{i}\) as an input to the DNN (\(\mathcal{G}_{\theta}\)) and predict the output \(\mathbf{x}\).
5: Compute the derivative information \(\dot{\mathbf{x}}\) using automatic differentiation.
6: Compute the cost function (10).
7: Update the parameters of the DNN (\(\theta\)) and the coefficients \(\Xi\) using gradient descent.
8: if \(k\%q=0\) and \(k>\texttt{init-iter}\) then  \(\triangleright\) Employing sequential thresholding every \(q\) iterations
9: \(\texttt{small\_inds}=(\texttt{abs}(\Xi)<\texttt{tol})\)  \(\triangleright\) identifying small coefficients
10: \(\Xi(\texttt{small\_inds})=0\)  \(\triangleright\) excluding small coefficients
11: Update the parameters of the DNN (\(\theta\)) and the coefficients \(\Xi\) using gradient descent, while ensuring \(\Xi(\texttt{small\_inds})\) remains zero.
12: \(k=k+1\).
```

**Algorithm 2** iNeural-SINDy: SINDy combined with neural networks and an integral scheme for nonlinear system identification.

## 4 Extension to Multi-trajectories Data

Thus far, we have presented the discovery of governing equations using a single-trajectory time-series data set generated from a single initial condition. However, for complex dynamical processes, a single trajectory is not sufficient to describe the underlying dynamics completely.
Therefore, it is necessary to collect data using multiple trajectories; hence, we need to adapt our proposed methodology to account for multiple trajectories. To achieve this goal, we augment the input time \(t\) with an initial condition so that a DNN can capture the nonlinear behavior of the system with respect to different initial conditions. To that end, let us consider \(\mathcal{M}\) different trajectories with initial conditions \(y_{0}^{[j]}\), where \(j\in\{1,\ldots,\mathcal{M}\}\). To reflect the multiple trajectories in our framework, we modify the architecture of the DNN, which now takes \(t_{k}\) and \(y_{0}^{[j]}\) as inputs and predicts \(y_{k}^{[j]}\)--that is, the state at time \(t_{k}\) with respect to the initial condition \(y_{0}^{[j]}\). Then, we also adapt our loss function (10) as follows:

\[\mathcal{L}=\mu_{1}\mathcal{L}_{\texttt{MSE}}+\mu_{2}\mathcal{L}_{\texttt{deri}}+\mu_{3}\mathcal{L}_{\texttt{Rk4}},\quad\mu_{1},\mu_{2},\mu_{3}\in[0,1], \tag{14}\]

where

\[\mathcal{L}_{\texttt{MSE}}=\frac{1}{\mathcal{M}\cdot\mathcal{N}}\sum_{j=1}^{\mathcal{M}}\sum_{k=1}^{\mathcal{N}}\left\|\mathbf{y}^{[j]}(t_{k})-\mathbf{x}^{[j]}(t_{k})\right\|_{2}^{2},\]

\[\mathcal{L}_{\texttt{deri}}=\frac{1}{\mathcal{M}\cdot\mathcal{N}}\sum_{j=1}^{\mathcal{M}}\sum_{k=1}^{\mathcal{N}}\left\|\dot{\mathbf{x}}^{[j]}(t_{k})-\Theta\big{(}\mathbf{x}^{[j]}(t_{k})\big{)}\Xi\right\|_{2}^{2},\]

\[\mathcal{L}_{\texttt{Rk4}}=\frac{1}{\mathcal{M}\cdot(\mathcal{N}-1)}\sum_{j=1}^{\mathcal{M}}\sum_{k=1}^{\mathcal{N}-1}\left\|\frac{1}{h_{k}}\Big{(}\mathbf{x}^{[j]}(t_{k+1})-\mathbf{x}_{\texttt{Rk4}}^{[j]}(t_{k+1})\Big{)}\right\|_{2}^{2}\ \ \text{with}\ \ h_{k}=t_{k+1}-t_{k}.\]

We depict a schematic diagram of our proposed approach in Figure 1 for such a case.

## 5 Numerical Experiments

In this section, we demonstrate the proposed methodology, the so-called iNeural-SINDy, by means of several numerical examples and present a comparison with existing methodologies. For the comparison, we primarily consider two approaches, namely DeePyMoD [31] and RK4-SINDy [21]. DeePyMoD utilizes only automatic differentiation tools to estimate derivative information by constructing an implicit representation of the noisy data, while RK4-SINDy embeds a numerical integration scheme to avoid the computation of derivative information. The proposed methodology iNeural-SINDy can be viewed as a combination of DeePyMoD and RK4-SINDy. For the chaotic Lorenz example, we also present a comparison with Weak-SINDy [28]. To quantify the performance of the considered methodologies, we define the following coefficient error measure for each state variable \(\mathbf{x}_{i}\):

\[\mathcal{E}(\mathbf{x}_{i})=\left\|\Xi_{\mathbf{x}_{i}}^{\texttt{truth}}-\Xi_{\mathbf{x}_{i}}^{\texttt{est}}\right\|_{1}, \tag{15}\]

where \(\Xi_{\mathbf{x}_{i}}^{\texttt{truth}}\) and \(\Xi_{\mathbf{x}_{i}}^{\texttt{est}}\) are, respectively, the true and estimated coefficients corresponding to the state variable \(\mathbf{x}_{i}\), and \(\|\cdot\|_{1}\) denotes the \(l_{1}\)-norm. A motivation to quantify each state variable separately is that their dynamics can be of different scales; thus, their coefficients might also be of different orders of magnitude. Therefore, to better understand the quality of the discovered models, we analyze them separately.
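Before turning to the individual experiments, it may help to see how the pieces of Sections 3 and 4 combine into a single objective. The following heavily simplified, single-trajectory PyTorch sketch of the loss (10) is ours, not the authors' code: `net` stands for \(\mathcal{G}_{\theta}\), `theta` for a hypothetical differentiable dictionary map, and `rk4_step` for the sketch shown earlier; the default weights match the values \(\mu_{1}=1\), \(\mu_{2}=0.1\), \(\mu_{3}=0.1\) reported in the implementation details below.

```python
import torch

def ineural_sindy_loss(net, theta, Xi, t, y, mu=(1.0, 0.1, 0.1)):
    # t: (N, 1) time stamps; y: (N, n) noisy measurements.
    t = t.clone().requires_grad_(True)
    x = net(t)                                       # denoised states, (N, n)
    # Per-state time derivatives via automatic differentiation.
    x_dot = torch.stack(
        [torch.autograd.grad(x[:, i].sum(), t, create_graph=True)[0][:, 0]
         for i in range(x.shape[1])], dim=1)
    f = lambda z: theta(z) @ Xi                      # dictionary vector field
    h = t[1:] - t[:-1]                               # (N-1, 1) time steps
    loss_mse = ((y - x) ** 2).mean()                 # eq. (11)
    loss_deri = ((x_dot - f(x)) ** 2).mean()         # eq. (12)
    x_pred = rk4_step(f, x[:-1], h)                  # one-step RK4 prediction
    loss_rk4 = (((x[1:] - x_pred) / h) ** 2).mean()  # eq. (13)
    return mu[0] * loss_mse + mu[1] * loss_deri + mu[2] * loss_rk4
```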
Furthermore, to observe the performance of the methodologies under noisy data, which is often the case in real-world scenarios, we artificially generate noisy data by corrupting the clean data. For this, we use white Gaussian noise \(\mathcal{N}(\mu,\sigma^{2})\) with zero mean \(\mu=0\) and variance \(\sigma^{2}\), where \(\sigma\) denotes the standard deviation. The noise level in the data is controlled by \(\sigma\), i.e., a larger \(\sigma\) implies more noise present in the data. Additionally, since iNeural-SINDy and DeePyMoD both involve neural networks, we also compare their performance sensitivity in two scenarios as follows:

* Scene_A: In the first scenario, we consider having a single initial condition and a fixed number of neurons in the hidden layers but vary the amount of training data and noise levels.

* Scene_B: In the second one, we consider having a single initial condition and a fixed amount of training data but vary the number of neurons in the hidden layers and noise levels.

In addition, in the following, we clarify common implementation and reproducibility details that are considered for all the examples.

**Data generation.** We have generated the data synthetically by using the solve_ivp function from the scipy.integrate package to solve a given set of differential equations and produce the data set. When an identification approach terminates, based on the considered methodologies (e.g., iNeural-SINDy, DeePyMoD, RK4-SINDy, or Weak-SINDy), we multiply the dictionary \(\Theta\) by the estimated coefficient matrix \(\Xi^{\texttt{est}}\) to obtain the discovered governing equations. We then make use of the solve_ivp function from scipy.integrate to obtain the time-evolution dynamics. Moreover, we perform a data-processing step before feeding the data to a neural network by mapping the minimum and maximum values to \(-1\) and \(1\), respectively. The hyper-parameters \(\mu\)'s in (14) are set to \(\mu_{1}=1\), \(\mu_{2}=0.1\) and \(\mu_{3}=0.1\) for iNeural-SINDy. Note that we can recover the RK4-SINDy and DeePyMoD approaches by setting \(\mu_{2}=0\) and \(\mu_{3}=0\), respectively, in (14).

Figure 1: A schematic diagram of the approach iNeural-SINDy. (a) noisy measurement data, (b) feeding the initial condition \(\left(\mathbf{y}_{1,0},\ \mathbf{y}_{2,0}\right)\) and the time \(t\) to the DNN, (c) using the output of the DNN, construct a polynomial dictionary, (d) estimating the parameters of the DNN and sparse matrix \(\Xi\) by considering a loss function.

**Architecture.** We use multi-layer perceptron networks with periodic activation functions, namely SIREN [47], to learn an implicit representation based on measurement data. The numbers of hidden layers and neurons will be discussed for each example separately.

**Hardware.** For training neural networks and parameter estimation for discovering governing equations, we have used an Nvidia RTX A4000 GPU with 16 GB RAM, and for CPU computations (e.g., for generating data), we have used a 12th Gen Intel Core i5-12600K processor with 32 GB RAM.

**Training set-up.** We use the Adam optimizer [48] to update the coefficient matrix \(\Xi\), which is trained alongside the DNN parameters. The threshold value (tol), the learning rate of the optimizer, the maximum iterations (max-iter), the initial iterations (init-iter), and the iteration interval \(q\) for employing sequential thresholding in Algorithm 2 will be mentioned for each example separately.
However, we note that after each thresholding step in Algorithm 2, we reset the learning rate to \(5\times 10^{-6}\) for the DNN parameters and to \(1\times 10^{-2}\) for the coefficient matrix \(\Xi^{\texttt{est}}\), except for the Lorenz example, where the values are mentioned explicitly.

### Two-dimensional damped oscillator

In our first example, we consider the discovery of a two-dimensional linear damped oscillatory system from data. The dynamics of the oscillator can be given by

\[\begin{split}\dot{\mathbf{x}}_{1}(t)&=-0.1\mathbf{x}_{1}(t)+2.0\mathbf{x}_{2}(t),\\ \dot{\mathbf{x}}_{2}(t)&=-2.0\mathbf{x}_{1}(t)-0.1\mathbf{x}_{2}(t).\end{split} \tag{16}\]

**Simulation setup:** To generate the training data set, we consider three initial conditions in the range \([-2,2]\) for \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), and for each initial condition, we take \(400\) equidistant points in the time interval \(t\in[0,\ 10]\). Our DNN architecture has three hidden layers, each having \(32\) neurons. We set the number of epochs \(\texttt{max-iter}=15,000\) and the threshold value \(\texttt{tol}=0.05\). The initial iteration count init-iter is set to \(5,000\) with a learning rate of \(10^{-4}\) for the DNN parameters and \(10^{-3}\) for the coefficient matrix \(\Xi^{\texttt{est}}\), and every \(q=2,000\) iterations, we employ sequential thresholding. Moreover, we construct a dictionary containing polynomials of degrees up to two.

**Results:** Figure 2 demonstrates the performance of the different algorithms in the presence of noise. We consider additive white Gaussian noise with different standard deviations \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.08\}\). It shows that as we increase the noise level, RK4-SINDy fails to estimate the coefficients. However, iNeural-SINDy and DeePyMoD are robust in discovering the underlying equations accurately, even for high noise levels, and both exhibit similar performances. In Table 1 (in the appendix), we also report the learned governing equations from data with various noise levels, which again illustrates that iNeural-SINDy and DeePyMoD have similar performance, and that RK4-SINDy fails to recover governing equations from highly noisy data. Furthermore, in Figure 3, the convergence of the non-zero coefficients for the different methods is shown as the training progresses. It can be seen that iNeural-SINDy has a faster convergence rate compared to DeePyMoD and RK4-SINDy. Next, we discuss the performance of iNeural-SINDy and DeePyMoD for Scene_A and Scene_B.

* Scene_A: We consider a DNN architecture with three hidden layers, each having \(32\) neurons. For comparison, we consider noise levels with standard deviations \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\), and take sample sizes of \(\{30,\ 40,\ 50,\ 100,\ 200,\ 300,\ 400\}\) in the time interval \([0,10]\) for a single initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(5,2)\). The rest of the settings are the same as mentioned earlier in the simulation setup. By varying the noise levels and the number of samples, we report the quality of the learned governing equations in Figure 4. Note that the error criterion defined in (15) is used; each cell shows the error corresponding to a sample size and a noise level. By comparing the simulation results, we notice that DeePyMoD performs better in the low-data regime, but as the amount of data increases, both iNeural-SINDy and DeePyMoD perform similarly.
* Scene_B: In this case, we consider a DNN architecture with three hidden layers but vary the number of neurons in each layer from 2 to 64. Again, we consider various noise levels. We take 400 samples in the time interval \([0,10]\) for a single arbitrary initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(5,2)\). The rest of the settings are the same as mentioned earlier in the simulation setup. By varying the noise levels and the number of neurons, we report a comparison between iNeural-SINDy and DeePyMoD in Figure 5, where each cell shows the error corresponding to a specific number of neurons and noise level. These comparisons again show that both methodologies perform comparably and learn correct coefficients with similar performance for a large number of neurons, as the DNN then has more capacity to capture the dynamics present in the data. More interestingly, we highlight that neither method over-fits as the capacity of the DNN is increased.

Figure 2: Linear oscillator: A comparison of the learned equations using different methods under various noise levels present in the measurements with the ground truth.

Figure 3: Linear oscillator: Estimated coefficients during the training loop for iNeural-SINDy, DeePyMoD and RK4-SINDy.

### Cubic damped oscillator

The cubic oscillatory system is given by the following equations:

\[\dot{\mathbf{x}}_{1}(t)=-0.1\mathbf{x}_{1}^{3}(t)+2.0\mathbf{x}_{2}^{3}(t), \tag{17}\]
\[\dot{\mathbf{x}}_{2}(t)=-2.0\mathbf{x}_{1}^{3}(t)-0.1\mathbf{x}_{2}^{3}(t).\]

The system consists of two coupled, non-linear differential equations describing the time evolution of two variables, \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\). Given noisy data, we aim to recover the governing equations and perform a similar analysis as done for the previous example.

**Simulation setup:** To generate the training data set, we consider two initial conditions \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=\{(2,2),(-2,-2)\}\) and collect 800 points in the time interval \(t\in[0,10]\). Our DNN architecture has three hidden layers, each having 32 neurons. We set the number of epochs \(\texttt{max-iter}=30,000\) and the threshold value \(\texttt{tol}=0.05\). The initial training iteration count (init-iter) is set to \(15,000\) with a learning rate of \(10^{-4}\) for the DNN parameters and \(10^{-3}\) for the coefficient matrix \(\Xi^{\texttt{est}}\). After the initial training, every \(q=5,000\) iterations we employ sequential thresholding and update the DNN parameters and \(\Xi^{\texttt{est}}\). The dynamical system is estimated in the space of polynomials up to order three.

Figure 4: Linear oscillator: A comparison of iNeural-SINDy and DeePyMoD under Scene_A.

Figure 5: Linear oscillator: A comparison of iNeural-SINDy and DeePyMoD under Scene_B.

**Results:** To see the performance of the different methodologies in the presence of noise, we consider Gaussian noise with standard deviations \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\). We report the obtained results in Figure 6 and in Table 2 (see Appendix), and we notice that RK4-SINDy performs poorly for high noise levels, but iNeural-SINDy and DeePyMoD have competitive performance. Further, in Figure 7, we plot the convergence of the non-zero coefficients as the training progresses for the noise-free case. Here, we again observe a faster convergence for iNeural-SINDy as compared to the other two approaches. Next, we investigate the performances of iNeural-SINDy and DeePyMoD for Scene_A and Scene_B, which are discussed in the following.
* Scene_A: We fix a DNN architecture with three hidden layers, each having 32 neurons. We consider a set of noise levels with \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\) and a set of sample sizes \(\{30,\ 40,\ 50,\ 100,\ 200,\ 300,\ 400\}\). The data are collected using a random initial condition in the interval \([1,4]\) for \(\{\mathbf{x}_{1},\mathbf{x}_{2}\}\). The rest of the settings are the same as mentioned earlier in the simulation setup. The results are shown in Figure 8, where we notice that for smaller data sets, iNeural-SINDy performs slightly better than DeePyMoD, whereas for larger data sets, the opposite holds.

* Scene_B: For this case, we fix the sample size to 400 but consider a DNN architecture with three hidden layers with the number of neurons ranging from 2 to 64. Furthermore, we consider a set of noise levels with \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\). The data are generated as in Scene_A, and the training settings are as above. The results are depicted in Figure 9, where we observe that iNeural-SINDy performs better than DeePyMoD for fewer neurons, and as we increase the number of neurons, both methods perform similarly.

Figure 7: Cubic oscillator: Estimated coefficients during the training loop for iNeural-SINDy, DeePyMoD and RK4-SINDy.

Figure 8: Cubic oscillator: A comparison of iNeural-SINDy and DeePyMoD under Scene_A.

### Fitz-Hugh Nagumo model

Next, we consider the Fitz-Hugh Nagumo (FHN) model, given by

\[\dot{\mathbf{x}}_{1}(t)=1.0\mathbf{x}_{1}(t)-1.0\mathbf{x}_{2}(t)-\frac{1}{3}\mathbf{x}_{1}^{3}(t)+0.1, \tag{18}\]
\[\dot{\mathbf{x}}_{2}(t)=0.1\mathbf{x}_{1}(t)-0.1\mathbf{x}_{2}(t).\]

**Simulation setup:** For this example, we consider two initial conditions \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=\{(2,1.5),(1.5,2)\}\) and take \(400\) data points in the time interval \(t\in[0,\ 200]\). The DNN architecture has three hidden layers with \(32\) neurons. We set the number of epochs \(\texttt{max-iter}=50,000\) and the threshold value \(\texttt{tol}=0.05\). The number of iterations for the initial training is set to \(15,000\) with a learning rate of \(10^{-4}\) for the DNN parameters and \(10^{-3}\) for the coefficient matrix \(\Xi^{\texttt{est}}\). After the initial training, we employ sequential thresholding every \(q=5,000\) iterations. We aim to learn the underlying governing equations in the space of polynomials with degrees up to three.

**Results:** In contrast to the results obtained in the previous two examples, for the FHN model, iNeural-SINDy has a slower convergence rate compared to DeePyMoD and RK4-SINDy; see Figure 10. Regarding the quality of the discovered equations, we make a similar observation as before (see Figure 11, and Table 3 in the Appendix): iNeural-SINDy and DeePyMoD exhibit similar performances for lower noise levels, but for higher noise values (see the results for \(\sigma=0.08\) in Table 3), iNeural-SINDy tends to outperform DeePyMoD. Moreover, RK4-SINDy clearly fails for high noise levels. Despite its slower convergence, we highlight that iNeural-SINDy can identify governing equations from highly noisy data, as stated earlier. Next, we compare the performances of iNeural-SINDy and DeePyMoD under Scene_A and Scene_B.

* Scene_A: In this case, we consider a fixed DNN architecture with three hidden layers, each consisting of 32 neurons.
The different noise levels \(\{0.0,\ 0.02,\ 0.04,\ 0.06\}\) are considered, while the sample size ranges from 150 to 450 with an increment of 50. Here, a single initial condition is used for data collection, that is, \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(3,2)\). The rest of the settings are the same as mentioned earlier for this example. The results are shown in Figure 12, where we notice that DeePyMoD outperforms iNeural-SINDy.

* Scene_B: Here, we conduct a study where we keep the number of samples fixed at 400, obtained using the initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(3,2)\). The DNN architecture is designed to have three hidden layers. We aim to explore how iNeural-SINDy and DeePyMoD perform under different combinations of neurons per layer and noise levels. The training settings for each case remain the same, as mentioned earlier. The outcomes are presented in the heat-map depicted in Figure 13, where we notice that DeePyMoD and iNeural-SINDy have almost the same performance in all the settings.

Figure 9: Cubic oscillator: A comparison of iNeural-SINDy and DeePyMoD under Scene_B.

Figure 10: Fitz-Hugh Nagumo: Estimated coefficients during the training loop for iNeural-SINDy, DeePyMoD and RK4-SINDy.

Figure 11: Fitz-Hugh Nagumo: Comparison of the estimation with different techniques and noise levels.

Figure 12: Fitz-Hugh Nagumo: A comparison of iNeural-SINDy and DeePyMoD under Scene_A.

### Chaotic Lorenz system

The chaotic Lorenz system is a set of three differential equations as follows [49]:

\[\dot{\mathbf{x}}_{1}(t)=\gamma\big{(}\mathbf{x}_{2}(t)-\mathbf{x}_{1}(t)\big{)}, \tag{19a}\]
\[\dot{\mathbf{x}}_{2}(t)=\mathbf{x}_{1}(t)\big{(}\rho-\mathbf{x}_{3}(t)\big{)}-\mathbf{x}_{2}(t), \tag{19b}\]
\[\dot{\mathbf{x}}_{3}(t)=\mathbf{x}_{1}(t)\mathbf{x}_{2}(t)-\beta\mathbf{x}_{3}(t), \tag{19c}\]

where the parameters \(\gamma\), \(\rho\), and \(\beta\) are positive constants with the associated standard values \(\gamma=10,\ \rho=28,\ \beta=\frac{8}{3}\). The Lorenz system is a classic example of a chaotic system, which means that small differences in the initial conditions can lead to vastly different outcomes over time. It is a widely used benchmark example for discovering governing equations [11].

**Simulation setup:** We collect our data in the time interval \(t\in[0,10]\) with a sample size of \(200\) for three different initial conditions \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0),\mathbf{x}_{3}(0))=\{(-8,7,27),(-6,6,25),(-9,8,22)\}\). The DNN architecture has three hidden layers, each having \(64\) neurons. We set the number of iterations \(\texttt{max-iter}=35,000\) and the threshold value \(\mathtt{tol}=0.2\). We set the number of iterations for the initial training to \(\texttt{init-iter}=10,000\), and the learning rates to \(7\cdot 10^{-4}\) for the DNN parameters and \(10^{-2}\) for the coefficient matrix \(\Xi^{\mathtt{est}}\). After finishing the initial iterations, we employ sequential thresholding every \(q=3,000\) iterations. Moreover, after each sequential thresholding step, we reset the learning rate for the DNN parameters to \(5\cdot 10^{-6}\) and for the coefficient matrix \(\Xi^{\mathtt{est}}\) to \(10^{-2}\). The governing equations are estimated by constructing a dictionary with polynomials up to degree two. Since the magnitudes of \(\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}\}\) for the Lorenz example can be large, we consider scaling the \(\mathbf{x}\)'s using a scaling factor \(\alpha\).
Note that such scaling does not affect the interaction between the different \(\mathbf{x}\)'s; thus, the sparsity pattern remains the same as well. However, it is observed that improving the condition number of the dictionary matrix enhances the estimate of the coefficients and helps us to determine the right governing equations.

**Results:** We conduct experiments using a scaling factor \(\alpha=0.1\). Further, we aim to learn governing equations from noisy data with noise levels of \(\sigma=\{0,\ 0.04,\ 0.1,\ 0.2,\ 0.4\}\). We report the obtained results in Table 4, where we notice that iNeural-SINDy and DeePyMoD yield similar performance except for the higher noise level (e.g., see the results for \(\sigma=0.4\)), where iNeural-SINDy recovers the equations better. However, RK4-SINDy performs poorly for the higher noise levels. Next, we conduct a performance analysis of iNeural-SINDy and DeePyMoD for Scene_A and Scene_B. We note that in both scenarios, the training data are generated using a single initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0),\mathbf{x}_{3}(0))=(-8,\ 7,\ 27)\).

* Scene_A: We compare iNeural-SINDy and DeePyMoD under Scene_A. We also investigate the effect of the scaling factor \(\alpha\) and consider two values of it, i.e., \(\alpha=\{0.1,\ 1\}\). We fix the DNN architecture to have three hidden layers, each having \(64\) neurons. We consider different sample sizes and noise levels to compare the performance of iNeural-SINDy and DeePyMoD. For \(\alpha=0.1\), we show the results in Figure 14, where we notice that iNeural-SINDy outperforms DeePyMoD in most cases. A similar observation is made for \(\alpha=1\), which is reported in Figure 15. For these experiments, it is hard to draw conclusions about the effect of the scaling factor, as we notice that in some cases the scaling improves the performance, while in others it does not.

* Scene_B: In this case, we fix the sample size to \(400\). We also fix the number of hidden layers of the DNN architecture to three but vary the number of neurons in each layer. We also conduct experiments to see the effect of the scaling factor in this case. The results for \(\alpha=1\) and \(\alpha=0.1\) are shown in Figure 16 and Figure 17, respectively. These heat maps indicate that iNeural-SINDy outperforms DeePyMoD in most cases. We also observe that for a larger number of neurons, the scaling factor \(\alpha=0.1\) performs slightly better than \(\alpha=1\) for both iNeural-SINDy and DeePyMoD.

Figure 13: Fitz-Hugh Nagumo: A comparison of iNeural-SINDy and DeePyMoD under Scene_B.

Figure 14: Lorenz example: A comparison of iNeural-SINDy and DeePyMoD under Scene_A with the scaling factor \(\alpha=0.1\).

Figure 15: Lorenz example: A comparison of iNeural-SINDy and DeePyMoD under Scene_A with the scaling factor \(\alpha=1\).

Figure 16: Lorenz example: A comparison of iNeural-SINDy and DeePyMoD under Scene_B with the scaling factor \(\alpha=0.1\).

**A comparison of iNeural-SINDy with Weak-SINDy:** Besides our previous comprehensive study, we next compare iNeural-SINDy with Weak-SINDy, which also does not require any estimate of derivatives from noisy data; for more details on Weak-SINDy, we refer to [28]. For this study, we again consider the same initial condition as used for Scene_A and Scene_B. We take 2000 data points in the time interval \([0,10]\), which are corrupted using different noise levels \(\sigma=\{0,0.02,0.08,0.1\}\).
To discover the governing equations, we consider a dictionary of polynomials up to degree two. For training iNeural-SINDy, we use the same setting as discussed in Section 5.4. For Weak-SINDy, we consider the code provided by the authors1. We report the results in Table 5, which indicate that iNeural-SINDy outperforms Weak-SINDy in the presence of high noise.

Footnote 1: [https://github.com/MathBioCU/WSINDy_ODE/tree/master](https://github.com/MathBioCU/WSINDy_ODE/tree/master)

## 6 Conclusions

In this work, we proposed a methodology, namely iNeural-SINDy, to discover governing equations using noisy and scarce data. It consists of three main components: (a) learning an implicit representation of given noisy data using a deep neural network, (b) setting up a sparse regression problem, inspired by SINDy [11], to discover governing equations, and (c) utilizing an integral form of differential equations. We have combined all these components innovatively to learn governing equations from noisy data. Particularly, we highlight that we leverage the implicit representation using neural networks to estimate the derivatives using automatic differentiation, thus avoiding any numerical derivative estimation from noisy data. We have shown how iNeural-SINDy can be employed when data are collected from multiple trajectories. Furthermore, we have presented an extensive comparison of the proposed methodology with RK4-SINDy [21] and DeePyMoD [31], where we noticed that iNeural-SINDy clearly outperformed RK4-SINDy, and in many cases, iNeural-SINDy also yielded better or comparable results compared to DeePyMoD, except for the FHN example. We also compared iNeural-SINDy with Weak-SINDy using the Lorenz example, where we noticed a better performance of iNeural-SINDy. In the future, we would like to extend the proposed framework to the identification of parametric and control-driven dynamical systems. We would also like to combine the ensemble idea discussed in [29] to further improve the quality of learned governing equations.
2309.14845
Graph Neural Network Based Method for Path Planning Problem
Sampling-based path planning is a widely used method in robotics, particularly in high-dimensional state spaces. Among the whole path planning process, collision detection is the most time-consuming operation. In this paper, we propose a learning-based path planning method that aims to reduce the number of collision detections. We develop an efficient neural network model based on Graph Neural Networks (GNN) and use the environment map as input. The model outputs weights for each neighbor based on the input and current vertex information, which are used to guide the planner in avoiding obstacles. We evaluate the proposed method's efficiency through simulated random worlds and real-world experiments. The results demonstrate that the proposed method significantly reduces the number of collision detections and improves the path planning speed in high-dimensional environments.
Xingrong Diao, Wenzheng Chi, Jiankun Wang
2023-09-26T11:20:57Z
http://arxiv.org/abs/2309.14845v2
# Graph Neural Network Based Method for Path Planning Problem

###### Abstract

Sampling-based path planning is a widely used method in robotics, particularly in high-dimensional state spaces. Among the whole path planning process, collision detection is the most time-consuming operation. In this paper, we propose a learning-based path planning method that aims to reduce the number of collision detections. We develop an efficient neural network model based on Graph Neural Networks (GNN). The model outputs weights for each neighbor based on the obstacle, searched path, and random geometric graph, which are used to guide the planner in avoiding obstacles. We evaluate the proposed method's efficiency through simulated random worlds and real-world experiments. The results demonstrate that the proposed method significantly reduces the number of collision detections and improves the path planning speed in high-dimensional environments.

Graph Neural Network (GNN), Collision detection, Sampling-based path planning.

## I Introduction

The path planning problem in robotics is to find a collision-free path between the initial state and the goal state of a robot, given a description of the environment. In recent decades, graph-search and sampling-based methods have become two popular techniques for path planning problems in robotics. Graph-search methods, such as Dijkstra [1] and A* [2], usually search in a discrete space, and the quality of their solution is often related to the degree of discretization. However, as the dimension of the configuration space grows, these methods often fall into the curse of dimensionality [3], making them computationally intractable. In contrast, sampling-based methods such as Probabilistic Roadmap (PRM) [4], Rapidly-exploring Random Tree (RRT) [5], and Expansive Space Trees (EST) [6] improve efficiency and scalability in high-dimensional spaces by avoiding discretization and an explicit representation of the configuration space. They explore the whole space by random sampling, resulting in probabilistic completeness for finding a feasible solution. Some sampling-based methods use concepts from graph-search methods to find the path, such as Fast Marching Trees (FMT*) [7] and Batch Informed Trees (BIT*) [8]. FMT* and BIT* use a heuristic function to sort the samples and edges to explore, which improves the initial solution and the convergence rate to the optimum. Many sampling-based methods are improved by modifying the sampling distribution, such as Gaussian PRM [9] and GAN-based heuristic RRT [10]. However, for most existing methods, collision detection is a major computational bottleneck because they need to repeatedly check paths to ensure that they are collision-free. Typically, a path planning method spends about 70% of the computation time on collision detection. Lazy PRM [22] reduces collision detection by checking an edge only when it lies on the global shortest path. Although it is useful in high dimensions, it does not guarantee robustness. To address the aforementioned issues, we propose a learning-based path planning method for reducing the number of collision detections. Our method uses a graph neural network (GNN) model to predict the edge weights of the neighbor set of the current vertex. The weights are used to guide the planner to avoid obstacles and accelerate the search process. We evaluate the proposed method in simulations and real-world experiments and obtain good performance.

Fig. 1: Demonstrations of our method in a 7D environment. The collision check count of our method is 0.94 times and 0.0049 times that of BIT* and PRM, respectively. The planning time of our method is 1.4% and 1.1% of that of BIT* and PRM, respectively. From Left to Right: (a) GNN model. (b) BIT*
Compared with classical path planning methods, our method significantly improves the path planning speed, reduces the number of collision detections, and improves the success rate and robustness in high-dimensional environments. Our main contributions include: 1) We propose a heuristic method for sampling-based path planning with a GNN. 2) We design a GNN model to predict weights for each node in the neighbor set of the current vertex.
2308.16406
CktGNN: Circuit Graph Neural Network for Electronic Design Automation
The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications. In the past decades, intensive research efforts have mostly been paid to automate the transistor sizing with a given circuit topology. By recognizing the graph nature of circuits, this paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates the circuit topology generation and device sizing based on the encoder-dependent optimization subroutines. Particularly, CktGNN encodes circuit graphs using a two-level GNN framework (of nested GNN) where circuits are represented as combinations of subgraphs in a known subgraph basis. In this way, it significantly improves design efficiency by reducing the number of subgraphs to perform message passing. Nonetheless, another critical roadblock to advancing learning-assisted circuit design automation is a lack of public benchmarks to perform canonical assessment and reproducible research. To tackle the challenge, we introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains $10$K distinct operational amplifiers with carefully-extracted circuit specifications. OCB is also equipped with communicative circuit generation and evaluation capabilities such that it can help to generalize CktGNN to design various analog circuits by producing corresponding datasets. Experiments on OCB show the extraordinary advantages of CktGNN through representation-based optimization frameworks over other recent powerful GNN baselines and human experts' manual designs. Our work paves the way toward a learning-based open-sourced design automation for analog circuits. Our source code is available at \url{https://github.com/zehao-dong/CktGNN}.
Zehao Dong, Weidong Cao, Muhan Zhang, Dacheng Tao, Yixin Chen, Xuan Zhang
2023-08-31T02:20:25Z
http://arxiv.org/abs/2308.16406v2
# CktGNN: Circuit Graph Neural Network for Electronic Design Automation

###### Abstract

The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications. In the past decades, intensive research efforts have mostly been paid to automate the transistor sizing with a given circuit topology. By recognizing the graph nature of circuits, this paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates the circuit topology generation and device sizing based on the encoder-dependent optimization subroutines. Particularly, CktGNN encodes circuit graphs using a two-level GNN framework (of nested GNN) where circuits are represented as combinations of subgraphs in a known subgraph basis. In this way, it significantly improves design efficiency by reducing the number of subgraphs to perform message passing. Nonetheless, another critical roadblock to advancing learning-assisted circuit design automation is a lack of public benchmarks to perform canonical assessment and reproducible research. To tackle the challenge, we introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains \(10\)K distinct operational amplifiers with carefully-extracted circuit specifications. OCB is also equipped with communicative circuit generation and evaluation capabilities such that it can help to generalize CktGNN to design various analog circuits by producing corresponding datasets. Experiments on OCB show the extraordinary advantages of CktGNN through representation-based optimization frameworks over other recent powerful GNN baselines and human experts' manual designs. Our work paves the way toward a learning-based open-sourced design automation for analog circuits. Our source code is available at [https://github.com/zehao-dong/CktGNN](https://github.com/zehao-dong/CktGNN).

## 1 Introduction

Graphs are ubiquitous for modeling relational data across disciplines (Gilmer et al., 2017; Duvenaud et al., 2015; Dong et al., 2021). Graph neural networks (GNNs) (Kipf and Welling, 2016; Xu et al., 2019; Velickovic et al., 2018; You et al., 2018; Scarselli et al., 2008) have been the de facto standard for representation learning over graph-structured data due to their superior expressiveness and flexibility. In contrast to heuristics using hand-crafted node features (Kriege et al., 2020) and non-parameterized graph kernels (Vishwanathan et al., 2010; Shervashidze et al., 2009; Borgwardt and Kriegel, 2005), GNNs incorporate both graph topologies and node features to produce node/graph-level embeddings by leveraging the inductive bias in graphs; these have been extensively used for node/graph classification (Hamilton et al., 2017; Zhang et al., 2018), graph decoding (Dong et al., 2022; Li et al., 2018), link prediction (Zhang and Chen, 2018), etc. Recent successes in GNNs have boosted the demand for benchmarks to properly evaluate and compare the performance of different GNN architectures. Numerous efforts have been made to produce benchmarks of various graph-structured data. Open Graph Benchmark (OGB) (Hu et al., 2020) introduces a collection of realistic and diverse graph datasets for real-world applications including molecular networks, citation networks, source code networks, user-product networks, etc.
NAS-Bench-\(101\) (Ying et al., 2019) and NAS-Bench-\(301\) (Zela et al., 2022) create directed acyclic graph datasets for surrogate neural architecture search (Elsken et al., 2019; Wen et al., 2020). These benchmarks efficiently facilitate substantial and reproducible research, thereby advancing the study of graph representation learning. Analog circuits, an important type of integrated circuit (IC), are another essential graph modality (directed acyclic graphs, i.e., DAGs). However, since the advent of ICs, labor-intensive manual effort has dominated the analog circuit design process, which is time-consuming and costly. This problem is further exacerbated by continuous technology scaling, where the feature size of transistor devices keeps shrinking and invalidates designs built with older technology. Automated analog circuit design frameworks are thus highly in demand. Dominant representation-based approaches (Liu et al., 2021; Wang et al., 2020; Cao et al., 2022; Zhang et al., 2019) have recently been developed for analog circuit design automation. Specifically, they optimize device parameters to fulfill desired circuit specifications with a given circuit topology. Typically, GNNs are applied to encode nodes' embeddings from circuit device features based on the fixed topology, and black-box optimization techniques such as reinforcement learning (Zoph & Le, 2016) and Bayesian Optimization (Kandasamy et al., 2018) are used to optimize parameterized networks for the automated search of device parameters. While these methods promisingly outperform traditional heuristics (Liu et al., 2017) in node feature sizing (i.e., device sizing), they do not target circuit topology optimization/generation, which constitutes the most critical and challenging task in analog circuit design. In analogy to neural architecture search (NAS), we propose to encode analog circuits into continuous vectorial space to optimize both the topology and node features. Due to the DAG essence of analog circuits, recent DAG encoders for computation graph optimization tasks are applicable to circuit encoding. However, GRU-based DAG encoders such as D-VAE (Zhang et al., 2019) and DAGNN (Thost & Chen, 2021) use shallow layers to encode the computation defined by DAGs, which is insufficient to capture the contextualized information in circuits. The Transformer-based DAG encoder of Dong et al. (2022), in turn, encodes DAG structures instead of computations. Consequently, we introduce Circuit Graph Neural Network (CktGNN) to address the above issues. Particularly, CktGNN follows the nested GNN (NGNN) framework (Zhang & Li, 2021), which represents a graph with rooted subgraphs around nodes and implements message passing between nodes with each node representation encoding the subgraph around it. The core difference is that CktGNN does not extract subgraphs around each node. Instead, a subgraph basis is formulated in advance, and **each circuit is modeled as a DAG \(G\) where each node represents a subgraph in the basis**. Then CktGNN uses two-level GNNs to encode a circuit: the inner GNNs independently learn the representation of each subgraph as node embedding, and the outer GNN further performs directed message passing with learned node embeddings to learn a representation for the entire graph. 
The inner GNNs enable CktGNN to stack multiple message passing iterations to increase the expressiveness and parallelizability, while the outer directed message passing operation empowers CktGNN to encode the computation of circuits (i.e., circuit performance). Nonetheless, another critical barrier to advancing automated circuit design is the lack of public benchmarks for sound empirical evaluations. Research in this area is hard to reproduce due to non-unique simulation processes on different circuit simulators and differing search space designs. To ameliorate the issue, we introduce Open Circuit Benchmark (OCB), the first open graph dataset for optimizing both analog circuit topologies and device parameters, which is a good supplement to the growing open-source research in the electronic design automation (EDA) community for IC (Chai et al., 2022; Hakhamaneshi et al., 2022). OCB contains \(10\)K distinct operational amplifiers (circuits) whose topologies are modeled as graphs and whose performance metrics are carefully extracted from circuit simulators. Therefore, EDA research can be conducted by querying OCB without the notoriously tedious circuit reconstruction and simulation processes on a simulator. In addition, we will open-source the code of the communicative circuit generation and evaluation processes to facilitate further research by producing datasets of arbitrary size for various analog circuits. The OCB dataset will also be uploaded to OGB to augment graph machine learning research. The key contributions of this paper are: 1) we propose a novel two-level GNN, CktGNN, to encode circuits with deep contextualized information, and show that our GNN framework with a pre-designed subgraph basis can effectively increase the expressiveness and reduce the design space of a very challenging problem: circuit topology generation; 2) we introduce the first circuit benchmark dataset OCB with open-source code, which can serve as an indispensable tool to advance research in EDA; 3) experimental results on OCB show that CktGNN not only outperforms competitive GNN baselines but also produces highly competitive operational amplifiers compared with human experts' designs. ## 2 Related Works ### Graph Neural Networks **GNNs for DAGs** Directed acyclic graphs (DAGs) are another ubiquitous graph modality in the real world. Instead of implementing message passing across all nodes simultaneously, DAG GNNs (encoders) such as D-VAE (Zhang et al., 2019) and DAGNN (Thost and Chen, 2021) sequentially encode nodes following the topological order. The message passing order thus respects the computation dependency defined by DAGs. Similarly, S-VAE (Bowman et al., 2016) represents a DAG as a sequence of strings encoding each node's type and adjacency vector, and then applies a GRU-based RNN to the topologically sorted sequence to learn the DAG representation. To improve the encoding efficiency, PACE (Dong et al., 2022) encodes the node orders in the positional encoding and processes nodes simultaneously under a Transformer (Vaswani et al., 2017) architecture. ### Automated Analog Circuit Design **Design Automation Methods for Device Sizing** Intensive research efforts have been devoted in the past decades to automating analog circuit design at the pre-layout level, i.e., finding the optimal device parameters that achieve the desired circuit specifications. 
Early explorations focus on optimization-based methods, including Bayesian Optimization (Lyu et al., 2018), Geometric Programming (Colleran et al., 2003), and Genetic Algorithms (Liu et al., 2009). Recently, learning-based methods such as supervised learning methods (Zhang et al., 2019) and reinforcement learning methods (Wang et al., 2020; Li et al., 2021; Cao et al., 2022a;b) have emerged as promising alternatives. Supervised learning methods aim to learn the underlying static mapping relationship between the device parameters and circuit specifications. Reinforcement learning methods, on the other hand, endeavor to find a dynamic programming policy to update device parameters in an action space according to the observations from the state space of the given circuit. Despite their great promise, all these prior arts have been limited to optimizing the device parameters with a given analog circuit topology. There are only a few efforts (e.g., Genetic Algorithms (Das and Vemuri, 2007)) to tackle another very challenging yet more important problem, i.e., circuit topology synthesis. These works leverage genetic operations such as crossover and mutation to randomly generate circuit topologies and do not sufficiently incorporate practical constraints from feasible circuit topologies into the generation process. Therefore, most of the generated topologies are non-functional and ill-posed. Conventionally, a new useful analog circuit topology is manually invented over several weeks or months by human experts with rich domain knowledge. Our work focuses on efficiently and accurately automating circuit topology generation, based on which the device parameters for the circuit topology are further optimized. **Graph Learning for Analog Circuit Design Automation** With the increasing popularity of GNNs in various domains, researchers have recently applied GNNs to model circuit structures, as a circuit topology closely resembles a graph. Given a circuit structure, the devices in the circuit can be treated as graph vertices, and the electrical connections between devices can be abstracted as edges between vertices. Inspired by this similarity between circuit topologies and graphs, several prior arts have explored GNNs to automate device sizing for analog circuits. A supervised learning method (Zhang et al., 2019) is applied to learn the geometric parameters of passive devices with a customized circuit-topology-based GNN, and reinforcement learning-based methods (Wang et al., 2020; Cao et al., 2022) propose circuit-topology-based policy networks to search for optimal device parameters to fulfill desired circuit specifications. Distinct from these prior arts, our work harnesses a two-level GNN encoder to simultaneously optimize circuit topologies and device features. ## 3 Circuit Graph Neural Network In this section, we introduce the proposed CktGNN model, constructed upon a two-level GNN framework with a subgraph basis to reduce the topology search space for the downstream optimization algorithm. We consider the graph-level learning task on a graph \(\mathcal{G}=(V,E)\), where \(V=\{1,2,...,n\}\) is the node set with \(|V|=n\) and \(E\subseteq V\times V\) is the edge set. For each node \(v\) in a graph \(\mathcal{G}\), we let \(\mathcal{N}(v)=\{u\in V|(u,v)\in E\}\) denote the set of neighboring nodes of \(v\). 
### Two-level GNN Framework with a Subgraph Basis Most undirected GNNs follow the message passing framework that iteratively updates each node's representation by propagating information from its neighborhood into the center node. Let \(h_{v}^{t}\) denote the representation of node \(v\) at time stamp \(t\); the message passing framework is given by: \[a_{v}^{t+1}=\mathcal{A}(\{h_{u}^{t}|(u,v)\in E\}),\ \ h_{v}^{t+1}=\mathcal{U}(h_{v}^{t},a_{v}^{t+1}) \tag{1}\] Here, \(\mathcal{A}\) is an aggregation function on the multiset of representations of nodes in \(\mathcal{N}(v)\), and \(\mathcal{U}\) is an update function. Given an undirected graph \(G\), GNNs perform the message passing over all nodes simultaneously. For a DAG \(G\), the message passing progresses following the dependency of nodes in \(G\). That is, a node \(v\)'s representation is not updated until all of its predecessors are processed. It has been shown that the message passing scheme mimics the 1-dimensional Weisfeiler-Lehman (1-WL) algorithm (Leman and Weisfeiler, 1968). The learned node representation thus encodes a rooted subtree around each node, and GNNs exploit homophily as a strong inductive bias in graph learning tasks, where graphs with common substructures receive similar predictions. However, encoding rooted subtrees limits the representation ability, and the expressive power of GNNs is upper-bounded by the 1-WL test (Xu et al., 2019). For instance, message passing GNNs fail to differentiate \(d\)-regular graphs (Chen et al., 2019; Murphy et al., 2019). To improve the expressive ability, Zhang and Li (2021) introduce a two-level GNN framework, NGNN, that encodes the general local rooted subgraph around each node instead of a subtree. Concretely, given a graph \(G\), an \(h\)-hop rooted subgraph \(g_{v}^{h}\) around each node \(v\) is extracted. Then, inner GNNs are independently applied to these subgraphs \(\{g_{v}^{h}|v\in G\}\) and the learned graph representation of \(g_{v}^{h}\) is used as the input embedding of node \(v\) to the outer GNN. After that, the outer GNN applies graph pooling to get a graph representation. The NGNN framework is strictly more powerful than 1-WL and can distinguish almost all \(d\)-regular graphs. However, the two-level GNN framework cannot be directly applied to DAG encoders (GNNs). Hence, we introduce a two-level GNN framework with an (ordered) subgraph basis, which restricts the subgraphs for inner message passing to the given basis, and apply it to circuit (DAG) encoding problems in order to reduce the topology search space and increase the expressive ability. **Definition 3.1**: _(Ordered subgraph basis) An ordered subgraph basis \(\mathbb{B}=\{g_{1},g_{2},...,g_{K}\}\) is a set of subgraphs with a total order \(o\). For all \(g_{1},\ g_{2}\in\mathbb{B}\), \(g_{1}<g_{2}\) if and only if \(o(g_{1})<o(g_{2})\)._ Figure 1 illustrates the two-level GNN framework. Given a graph \(G\) and an ordered subgraph basis \(\mathbb{B}\), the rules to extract subgraphs for inner GNNs to learn representations are as follows: 1) For a node \(v\in G\), suppose it belongs to multiple subgraphs \(g_{1}^{v},...,g_{m}^{v}\in\mathbb{B}\); then the selected subgraph to perform (inner) message passing is \(g_{h}^{v}=\arg\max_{i=1,2,...,m}o(g_{i}^{v})\). 2) If connected nodes \(v\) and \(u\) select the same subgraph \(g_{h}\in\mathbb{B}\), we merge \(v\) and \(u\) and use the representation of subgraph \(g_{h}\) as the feature of the merged node when performing the outer message passing. 
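To make the two extraction rules concrete, the following is a minimal Python sketch; the helper names (`basis_order`, `node_subgraphs`) and the toy basis are illustrative assumptions, not part of the released code.

```python
# Minimal sketch of the subgraph-selection rules of the two-level framework.
# `basis_order` maps each basis subgraph (instance) to its total order o(g);
# `node_subgraphs` maps each node to the basis subgraphs it belongs to.

def select_subgraph(node, node_subgraphs, basis_order):
    """Rule 1: among all basis subgraphs containing `node`,
    pick the one with the highest order o(g)."""
    return max(node_subgraphs[node], key=lambda g: basis_order[g])

def merge_nodes(edges, node_subgraphs, basis_order):
    """Rule 2: connected nodes selecting the same subgraph are merged;
    the merged node is identified by its selected subgraph instance."""
    selected = {v: select_subgraph(v, node_subgraphs, basis_order)
                for v in node_subgraphs}
    merged_edges = set()
    for u, v in edges:
        if selected[u] != selected[v]:  # keep only edges across subgraphs
            merged_edges.add((selected[u], selected[v]))
    return selected, merged_edges

# Toy example: nodes 0-3, basis subgraphs "gA" (contains 0,1), "gB" (2,3).
node_subgraphs = {0: ["gA"], 1: ["gA"], 2: ["gB"], 3: ["gB"]}
basis_order = {"gA": 1, "gB": 2}
edges = [(0, 1), (1, 2), (2, 3)]
print(merge_nodes(edges, node_subgraphs, basis_order))
# nodes 0,1 collapse into gA and 2,3 into gB, leaving the edge (gA, gB)
```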
We show that two-level GNNs can be more powerful than 1-WL in Appendix B. Figure 1: Illustration of the two-level GNN framework with a pre-designed subgraph basis. It first represents the input graph \(g\) as a combination of subgraphs in the subgraph basis \(\mathbb{B}\) and then learns representations of subgraphs with inner GNNs. The subgraph representations are used as input features to the outer message passing operation, where the message passing can be directed/undirected. ### The CktGNN Model Next, we introduce the CktGNN model for the circuit (i.e., DAG) automation problem, which requires optimizing the circuit topology and node features at the same time. Given a circuit \(G=(V,E)\), each node \(v\in V\) has a node type \(x_{t}\) and node features \(x_{s}\), where \(x_{t}\) denotes the device type in a circuit (e.g., resistor, capacitor, etc.), and \(x_{s}\) are the categorical or continuous features of the corresponding device. Due to the similarity between the circuit automation problem and neural architecture search (NAS), potential solutions naturally come from advancements in NAS, where two frameworks have achieved impressive results in DAG architecture learning: 1) The VAE framework (Dong et al., 2022; Zhang et al., 2019) uses a GRU-based (or Transformer-based) encoder to map DAGs into vectorial space and then trains the model with a VAE loss. 2) The NAS-subroutine-based framework develops DAG encoders and implements encoding-dependent subroutines such as perturb-architecture subroutines (Real et al., 2019; White et al., 2020) and train-predictor-model subroutines (Wen et al., 2020; Shi et al., 2019). As the NAS-subroutine-based framework usually requires a relatively large training dataset, while large-scale performance simulation of circuits can be time-consuming, we resort to the VAE framework. Due to the complexity of circuit (DAG) structures and the huge size of the circuit design space (see Appendix A), the circuit automation problem is typically a highly non-convex, challenging optimization problem. Thus, GRU-based encoders that use shallow layers to encode complex DAG architectures are not sufficient to capture the complicated contextualized information of node features and topologies. To address these limitations, we introduce the CktGNN model. Figure 2 illustrates the architecture of CktGNN. The key idea is to decompose the input circuit into a combination of non-overlapping subgraphs in the subgraph basis \(\mathbb{B}\). After the graph transformation process (i.e., the graphlizer \(f\)), each node in the transformed DAG \(G'=(V',E')\) represents a subgraph in the input graph (circuit). Then, the representation of a circuit is learned in the two-level GNN framework. Each node \(v'\in V'\) in the transformed DAG \(G'\) corresponds to a subgraph \(g_{v'}\) in the input graph. CktGNN treats \(g_{v'}\) as an undirected graph and uses inner GNNs to learn the subgraph representation \(h_{v'}\); each inner GNN consists of multiple undirected message passing layers followed by a graph pooling layer to summarize a subgraph representation. Such a technique enables inner GNNs to capture the contextualized information within each subgraph, thereby increasing the representation ability. 
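A minimal sketch of such an inner GNN, assuming a plain PyTorch implementation with mean aggregation and mean pooling (the actual layer types and dimensions in CktGNN may differ):

```python
import torch
import torch.nn as nn

class InnerGNN(nn.Module):
    """Sketch of an inner GNN: a few rounds of undirected message passing
    over one subgraph, followed by mean pooling into a single embedding."""
    def __init__(self, in_dim, hid_dim, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hid_dim] * n_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(n_layers))

    def forward(self, x, adj):
        # x: (n_nodes, in_dim) node features; adj: symmetric adjacency
        # matrix with self-loops, shape (n_nodes, n_nodes)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        for layer in self.layers:
            x = torch.relu(layer(adj @ x / deg))  # mean aggregation + update
        return x.mean(dim=0)  # graph pooling -> subgraph embedding

# Toy subgraph with 3 nodes and 4-dimensional features
x = torch.randn(3, 4)
adj = torch.tensor([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])
h = InnerGNN(4, 16)(x, adj)
print(h.shape)  # torch.Size([16])
```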
In addition, undirected message passing also provides better parallelizability than directed message passing, which increases encoding efficiency. Figure 2: Illustration of the overall framework. The performance evaluation of circuits is implemented on the circuit simulator. In the representation learning process, we formulate a subgraph basis for operational amplifiers to implement the CktGNN model. In the outer-level GNN, CktGNN performs directed message passing where the aggregation function \(\mathcal{A}\) uses a gated summation and the update function \(\mathcal{U}\) uses a gated recurrent unit (GRU): \[a_{v'}=\mathcal{A}(\{z_{u'}\,|\,u'\in\mathcal{N}(v')\})=\sum_{u'\in\mathcal{N}(v')}g(z_{u'})\otimes m(z_{u'}), \tag{2}\] \[z_{v'}=\mathcal{U}(\textit{concat}(x'_{v'},h_{v'}),a_{v'}). \tag{3}\] Here, \(z_{v'}\) is the hidden representation of node \(v'\) in the transformed DAG \(G'\), \(m\) is a mapping network and \(g\) is a gating network. In the GRU, \(x'_{v'}\) is the one-hot encoding of the subgraph type, and \(h_{v'}\) is the corresponding subgraph representation learned by the inner GNNs. The outer-level GNN processes nodes in the transformed DAG \(G'\) following the topological order, and uses the hidden representation of the output node as the graph representation. ### Discussions **The Injectivity of the Graph Transformation Process** Since CktGNN is constructed upon a graphlizer \(f:G\to G'\) that converts input circuits \(G\) to DAGs \(G'\) whose nodes \(v'\) represent non-overlapping subgraphs in \(G\), it is worth discussing the injectivity of \(f\). If \(f(G)\) is not unique, CktGNN will map the same input circuit (DAG \(G\)) to different transformed \(G'\), thereby learning different representations for the same input \(G\). **Theorem 3.2**: _Let subgraph basis \(\mathbb{B}\) contain every subgraph of size \(1\). There exists an injective graph transformation \(f\) if each subgraph \(g_{v'}\in\mathbb{B}\) has only one node that can be the head (tail) of a directed edge whose tail (head) is outside the subgraph \(g_{v'}\)._ We prove Theorem 3.2 in Appendix C. The theorem implies that \(f\) exists when the subgraph basis \(\mathbb{B}\) contains every subgraph of size \(1\), and characterizes conditions under which the injectivity concern is satisfied. The proof shows that an injective \(f\) can be constructed based on the order function \(o\) over the basis \(\mathbb{B}\). The condition in the theorem holds in various real-world applications including circuit automation and neural architecture search. For instance, Figure 2 presents an ordered subgraph basis that satisfies the condition for general operational amplifiers (circuits), and we introduce operational amplifiers and the corresponding subgraph basis in Appendix A. **Comparison to Related Works.** The proposed CktGNN model extends the NGNN technique (i.e., the two-level GNN framework) (Zhang & Li, 2021) to the directed message passing framework for the DAG encoding problem. 
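As a concrete rendering of the outer pass of Eqs. (2)-(3), here is a minimal PyTorch sketch, assuming nodes are given in topological order; the class and helper names, the zero initial state for source nodes, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class OuterDAGEncoder(nn.Module):
    """Sketch of the outer directed message passing: gated-sum aggregation
    over predecessors (Eq. 2) and a GRU update (Eq. 3), with nodes
    processed in topological order."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.hid_dim = hid_dim
        self.gate = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.Sigmoid())
        self.map = nn.Linear(hid_dim, hid_dim)
        self.gru = nn.GRUCell(in_dim, hid_dim)

    def forward(self, feats, preds, topo_order):
        # feats[v]: concat of subgraph-type one-hot and inner-GNN embedding
        # preds[v]: predecessor ids; topo_order: a topological sort of nodes
        z = {}
        for v in topo_order:
            if preds[v]:  # gated summation over incoming messages
                a_v = sum(self.gate(z[u]) * self.map(z[u]) for u in preds[v])
            else:         # source nodes start from a zero aggregated state
                a_v = feats[v].new_zeros(self.hid_dim)
            z[v] = self.gru(feats[v].unsqueeze(0), a_v.unsqueeze(0)).squeeze(0)
        return z[topo_order[-1]]  # hidden state of the output node

enc = OuterDAGEncoder(in_dim=8, hid_dim=16)
feats = {v: torch.randn(8) for v in range(3)}
z_out = enc(feats, preds={0: [], 1: [0], 2: [0, 1]}, topo_order=[0, 1, 2])
print(z_out.shape)  # torch.Size([16])
```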
In contrast to NGNN, which extracts a rooted subgraph around each node within which inner GNNs perform message passing to learn the subgraph representation, CktGNN only uses inner GNNs to learn representations of non-overlapping subgraphs in a known ordered subgraph basis \(\mathbb{B}\). Two main advantages come from the framework over GRU-based DAG encoders (i.e., D-VAE and DAGNN). 1) The topology within subgraphs is automatically embedded once the subgraph type is encoded in CktGNN, which significantly reduces the size of the topology search space. 2) The inner GNNs in CktGNN help to capture the contextualized information within each subgraph, while GNNs in GRU-based DAG encoders are usually too shallow to provide sufficient representational ability. ## 4 Open Circuit Benchmark We build our open circuit benchmark on operational amplifiers (Op-Amps), as they are not only among the most difficult analog circuits to design but also the common benchmarks used by prior arts (Wang et al., 2020; Li et al., 2021; Cao et al., 2022b) to evaluate the performance of proposed methods. Our benchmark is equipped with communicative circuit generation and evaluation capabilities such that it can also incorporate a broad range of analog circuits for the evaluation of various design automation methods. Two notable features enable these capabilities and are introduced below. **Converting Circuits into Graphs** We leverage an acyclic graph mapping method by abstracting a complex circuit structure into several simple low-level functional sub-structures, similar to the standard synthesis of modern digital circuits. The conversion is thus scalable to large analog circuits with thousands of devices while effectively avoiding cycles in the resulting graph. To illustrate this general mapping idea, we take the Op-Amps in our benchmark as an example (see the upper left corner of Figure 2). An \(N\)-stage Op-Amp (\(N=2,3\)) consists of \(N\) single-stage Op-Amps in the main feedforward path (i.e., from input to output) and several feedback paths (i.e., from output to input) with different sub-circuit modules. We encode all single-stage Op-Amps and sub-circuit modules as functional sub-structures by using their behavioral models (Lu et al., 2021). In this way, each functional sub-structure can be significantly simplified without using its exact yet complex circuit structure. For instance, a single-stage Op-Amp with tens of transistors can be modeled as a voltage-controlled current source (VCCS, \(g_{m}\)) with a parasitic capacitor \(C\) and resistor \(R\). Instead of using these functional sub-structures as graph vertices, we use them as graph edges, while the connection points between these sub-structures (e.g., node \(1\) and node \(2\)) are taken as vertices. Meanwhile, we unify both feedforward and feedback directions as feedforward directions but distinguish them by adding a polarity (e.g., '\(+\)' for feedforward and '\(-\)' for feedback) on device parameters (e.g., \(g_{m}\)+ or \(g_{m}\)\(-\)). In this way, an arbitrary Op-Amp can be efficiently converted into an acyclic graph as shown in Figure 2. Conversely, a circuit topology newly generated from graph sampling can be converted back into the corresponding Op-Amp by mapping the graph and functional sub-structures back to a real circuit. 
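A minimal sketch of this mapping for a hypothetical two-stage Op-Amp, using networkx; all component values and the choice of a feedback capacitor are illustrative assumptions, not items from the dataset.

```python
import networkx as nx

# Sketch of the circuit-to-graph mapping: vertices are connection points
# between functional sub-structures; edges are the sub-structures themselves.
# A single-stage Op-Amp is abstracted as a VCCS (gm) with parasitics, and
# feedback paths are rewritten in the feedforward direction with a '-'
# polarity on the parameter. A MultiDiGraph allows parallel edges.
g = nx.MultiDiGraph()
g.add_edge("in", "n1", kind="stage", gm=1e-3, polarity="+")   # 1st gain stage
g.add_edge("n1", "out", kind="stage", gm=5e-3, polarity="+")  # 2nd gain stage
g.add_edge("n1", "out", kind="C", value=1e-13, polarity="-")  # feedback cap

assert nx.is_directed_acyclic_graph(g)  # the conversion guarantees a DAG
for u, v, attrs in g.edges(data=True):
    print(u, "->", v, attrs)
```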
**Interacting with Circuit Simulators** Another important feature of our circuit benchmark is that it can directly interact with a circuit simulator to evaluate the performance of a generated Op-Amp in real time. Once a circuit topology is generated from our graph sampling, a tailored Python script can translate it into a circuit netlist. A circuit netlist is a standard hardware description of a circuit, based on which the circuit simulator can perform a simulation (evaluation) to extract the circuit specifications, e.g., gain, phase margin, and bandwidth for Op-Amps. This process can be inherently integrated into a Python environment as both open-source and commercial circuit simulators support command-line operations. We leverage this conversion-generation-simulation loop to generate \(10,000\) different Op-Amps with detailed circuit specifications. Note that in our dataset, the topologies of Op-Amps are not always distinct from each other. Some Op-Amps have the same topology but different device parameters. Moreover, our benchmark can readily be augmented with other analog circuits, such as filters, once corresponding circuit-graph mapping methods are built. ## 5 Experiments ### Dataset, Baselines, and Tasks **Dataset** Our circuit dataset contains 10,000 operational amplifiers (circuits) obtained from the circuit generation process of OCB. Nodes in a circuit (graph) are sampled from \(C\) (capacitor), \(R\) (resistor), and single-stage Op-Amps with different polarities (positive or negative) and directions (feedforward or feedback). Node features are then determined based on the node type: resistor \(R\) has a specification from \(10^{5}\) to \(10^{7}\) Ohm, capacitor \(C\) has a specification from \(10^{-14}\) to \(10^{-12}\) F, and the single-stage Op-Amp has a specification (transconductance, \(g_{m}\)) from \(10^{-4}\) to \(10^{-2}\) S. For each circuit, the circuit simulator of OCB performs careful simulations to get the graph properties (circuit specifications): DC gain (Gain), bandwidth (BW), and phase margin (PM), which characterize the circuit performance from different perspectives. Then the Figure of Merit (FoM), an indicator of the circuit's overall performance, is computed from Gain, BW, and PM. Details are available in Appendix B. 
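The conversion-generation-simulation loop behind this dataset can be sketched as follows; the netlist lines follow generic SPICE syntax and the batch invocation assumes an ngspice-style command-line simulator, both of which are illustrative assumptions rather than the actual OCB scripts.

```python
import pathlib
import subprocess
import tempfile

def topology_to_netlist(edges):
    """Translate a graph (edges with device attributes) into netlist lines."""
    lines = ["* auto-generated op-amp netlist"]
    for i, (u, v, attrs) in enumerate(edges):
        if attrs["kind"] == "C":
            lines.append(f"C{i} {u} {v} {attrs['value']}")
        elif attrs["kind"] == "R":
            lines.append(f"R{i} {u} {v} {attrs['value']}")
        else:  # behavioral single-stage op-amp modeled as a VCCS element
            lines.append(f"G{i} {v} 0 {u} 0 {attrs['gm']}")
    lines.append(".end")
    return "\n".join(lines)

def simulate(netlist_text, simulator="ngspice"):
    """Run a batch simulation; its output is parsed for gain/BW/PM."""
    with tempfile.NamedTemporaryFile("w", suffix=".cir", delete=False) as f:
        f.write(netlist_text)
        path = f.name
    out = subprocess.run([simulator, "-b", path],
                         capture_output=True, text=True)
    pathlib.Path(path).unlink()
    return out.stdout

edges = [("in", "n1", {"kind": "stage", "gm": 1e-3}),
         ("n1", "out", {"kind": "C", "value": 1e-13})]
print(topology_to_netlist(edges))
```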
**Baselines** We compare CktGNN with GNN baselines including: 1) widely adopted (undirected) GNNs: GCN (Kipf and Welling, 2016), GIN (Xu et al., 2019), NGNN (Zhang and Li, 2021) and Graphormer (Ying et al., 2021); 2) dominant DAG encoders (directed GNNs): D-VAE (Zhang et al., 2019), DAGNN (Thost and Chen, 2021) and PACE (Dong et al., 2022). Baseline settings are provided in Appendix D. **Tasks** We compare CktGNN against baselines on the following three tasks to evaluate the expressiveness, efficiency, and real-world impact: (1) Predictive performance and topology reconstruction accuracy: We test how well the learned graph representation encodes information to predict the graph properties (i.e., Gain, BW, PM, FoM) and reconstruct the input circuit topology. (2) Circuit encoding efficiency: This task compares the training/inference time to characterize the efficiency of the circuit (DAG) encoders. (3) Effectiveness in real-world electronic circuit design automation: For the purpose of circuit generation, we test the proportions of valid DAGs, valid circuits, and novel circuits (never seen in the training set) that the decoder in the VAE architecture generates. Furthermore, for the purpose of automated circuit design, we also perform Bayesian Optimization and compare the overall performance (i.e., FoM) of the detected circuit topology and specifications. Table 1: Predictive performance (RMSE \(\downarrow\) and Pearson's \(r\) \(\uparrow\) on Gain, BW, PM, and FoM) and topology reconstruction accuracy (Acc \(\uparrow\)) of CktGNN and all baselines; CktGNN achieves the best results on most metrics. ### Predictive Performance and Topology Reconstruction Accuracy In the experiment, we test the expressive ability of circuit encoders by evaluating the predictivity of the learned graph representation. As the encoder with more expressive ability can better distinguish circuits (graphs) with different topologies and specifications, circuits with similar properties (Gain, BW, PM, or FoM) can be mapped to close representations. 
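As a concrete instance of task (1), a regressor can be fit on the learned embeddings and scored with RMSE and Pearson's \(r\); the sketch below uses random stand-in embeddings and labels, with scikit-learn's dense GP in place of the sparse GP used in the experiments that follow.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP regressor on latent circuit representations to predict one
# property (e.g., FoM), then report RMSE and Pearson's r on held-out data.
rng = np.random.default_rng(0)
z_train, z_test = rng.normal(size=(500, 32)), rng.normal(size=(100, 32))
y_train = z_train[:, 0] + 0.1 * rng.normal(size=500)  # toy stand-in labels
y_test = z_test[:, 0] + 0.1 * rng.normal(size=100)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(z_train, y_train)
pred = gp.predict(z_test)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
r, _ = pearsonr(pred, y_test)
print(f"RMSE={rmse:.3f}  Pearson r={r:.3f}")
```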
Following the experimental settings of D-VAE (and likewise DAGNN and PACE), we train sparse Gaussian Process (SGP) regression models (Snelson and Ghahramani, 2005) to predict circuit properties from the learned representations. In addition, we characterize the complexity of the topology space to encode by measuring the reconstruction accuracy of the circuit topology. We show our results in Table 1. Experimental results illustrate that CktGNN consistently achieves state-of-the-art performance in predicting circuit properties. In analogy to D-VAE, we find CktGNN encodes computation (i.e., circuit performance) instead of graph structures when the map between subgraph structures and the computations they define is injective for subgraphs in the basis \(\mathbb{B}\). Compared to the DAG encoders D-VAE and DAGNN, the inner GNNs in CktGNN enable more message passing iterations to better encode the complicated contextualized information in circuits. On the other hand, PACE, undirected GNNs, and graph Transformers inherently encode graph structures; hence, the latent graph representation space of CktGNN is smoother w.r.t. the circuit performance. Furthermore, we also find that CktGNN significantly improves the topology reconstruction accuracy. Such an observation is consistent with the purpose of CktGNN (see Section 3.2) to reduce the topology search space. ### Circuit Encoding Efficiency Besides the representation learning ability, efficiency and scalability play critical roles in the model design due to the potentially heavy workload of real-world circuit design automation. In the experiment, we compare the average training time per epoch to validate the efficiency of the VAE framework that includes a circuit encoding process and a circuit generation process, while the encoding efficiency itself is tested using the average inference time. Figure 3 illustrates the results. Compared to GRU-based computation encoders (i.e., D-VAE and DAGNN), since CktGNN utilizes simultaneous message passing within each subgraph, it significantly reduces the training/inference time. Figure 3: Comparison of training/inference efficiency. In addition, we also find that the extra computation time of CktGNN relative to fully parallelizable encoders (i.e., undirected GNNs like GCN) is marginal. ### Effectiveness in Real-World Electronic Circuit Design Next, we test the real-world performance of different circuit encoders from two aspects. 1) In a real-world automation process, the circuit generator (i.e., the decoder in the VAE framework) is required to generate proper and novel circuits effectively. Hence, we compare the proportions of valid DAGs, valid circuits, and novel circuits of different methods. 2) We also perform batch Bayesian Optimization (BO) with a batch size of \(50\) using the expected improvement heuristic (Jones et al., 1998) and compute the FoM of the detected circuits after \(10\) iterations. Table 2 illustrates the results. We find that the proportion of valid circuits generated by the CktGNN-related decoder is significantly higher than that of other decoders. One potential reason is that CktGNN has an easier topology space to encode and the corresponding decoder can thus better learn the circuit generation rules. We also find that CktGNN has the best circuit design (generation) ability in the generation process. These observations are significant for real-world applications as the automation tools can perform fewer simulations to be cost-effective and time-efficient. 
Finally, we find that the CktGNN-based VAE discovers the best circuits with the highest FoM, and we visualize the detected circuits in Appendix D. ## 6 Conclusion and Discussions In this paper, we have presented CktGNN, a two-level GNN model with a pre-designed subgraph basis for the circuit (DAG) encoding problem. Inspired by previous VAE-based neural architecture search routines, we applied CktGNN within a VAE framework for the challenging analog circuit design automation task. Experiments on the proposed open circuit benchmark (OCB) show that our automation tool can address this long-standing challenging problem in an effective and time-efficient way. To the best of our knowledge, the proposed CktGNN-based automation framework pioneers the exploration of learning-based methods to simultaneously optimize the circuit topology and device parameters to achieve the best circuit performance. In addition, the proposed OCB is also the first open-source benchmark in the field of analog circuit topology generation and device parameter optimization. Last but not least, both our method and benchmark can be generalized to other types of analog circuits with excellent scalability and compatibility. With the increasing popularity of applying deep learning methods to design industrial digital circuits such as Google TPU (Mirhoseini et al., 2021) and Nvidia GPU (Khailany et al., 2020), the attention paid to analog circuit design automation will be unprecedented, as analog circuits are a critical type of IC connecting our physical analog world and the modern digital information world. In a nutshell, we believe that deep learning (especially graph learning)-based analog circuit design automation is an important rising field, worthy of extensive exploration and interdisciplinary collaboration. ## 7 Acknowledgement This work is supported in part by NSF CCF #1942900 and NSF CBET 2225809. \begin{table} \begin{tabular}{l c c c c} \hline \hline Methods & Valid DAGs (\%) \(\uparrow\) & Valid circuits (\%) \(\uparrow\) & Novel circuits (\%) \(\uparrow\) & BO (FoM) \(\uparrow\) \\ \hline **CktGNN** & **98.92** & **98.92** & 92.29 & **33.43647** \\ \hline PACE & 83.12 & 75.82 & 97.14 & 32.271422 \\ DAGNN & 83.10 & 74.21 & 97.19 & 33.271462 \\ D-VAE & 82.12 & 73.93 & **97.15** & 32.77778 \\ \hline GCN & 81.02 & 72.03 & 97.01 & 31.624473 \\ GIN & 80.92 & 73.17 & 96.88 & 31.624473 \\ NGNN & 82.17 & 73.22 & 95.29 & 32.82656 \\ Graphormer & 82.81 & 72.70 & 94.80 & 32.82656 \\ \hline \hline \end{tabular} \end{table} Table 2: Effectiveness in real-world electronic circuit design. ## 8 Reproducibility Statement The main theoretical contribution of our paper comes from Theorem 3.2, and the complete proof of the theorem is available in Appendix C. Furthermore, our proposed circuit encoder, CktGNN, is constructed upon the two-level GNN framework with a pre-designed subgraph basis; hence we compare it with the general two-level GNN framework in Section 3.2, and discuss the expressive ability in Appendix B. For our open circuit benchmark (OCB), we provide detailed instructions in Appendix A, and provide open-source code in the supplementary material, including the circuit generation code and the simulation file for the circuit simulator. Our source code to implement the experiments is also provided in the supplementary materials, and we will make it public on GitHub in the future.
2309.17357
Module-wise Training of Neural Networks via the Minimizing Movement Scheme
Greedy layer-wise or module-wise training of neural networks is compelling in constrained and on-device settings where memory is limited, as it circumvents a number of problems of end-to-end back-propagation. However, it suffers from a stagnation problem, whereby early layers overfit and deeper layers stop increasing the test accuracy after a certain depth. We propose to solve this issue by introducing a module-wise regularization inspired by the minimizing movement scheme for gradient flows in distribution space. We call the method TRGL for Transport Regularized Greedy Learning and study it theoretically, proving that it leads to greedy modules that are regular and that progressively solve the task. Experimentally, we show improved accuracy of module-wise training of various architectures such as ResNets, Transformers and VGG, when our regularization is added, superior to that of other module-wise training methods and often to end-to-end training, with as much as 60% less memory usage.
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari
2023-09-29T16:03:25Z
http://arxiv.org/abs/2309.17357v3
# Module-wise Training of Neural Networks via the Minimizing Movement Scheme ###### Abstract Greedy layer-wise or module-wise training of neural networks is compelling in constrained and on-device settings where memory is limited, as it circumvents a number of problems of end-to-end back-propagation. However, it suffers from a stagnation problem, whereby early layers overfit and deeper layers stop increasing the test accuracy after a certain depth. We propose to solve this issue by introducing a module-wise regularization inspired by the minimizing movement scheme for gradient flows in distribution space. We call the method TRGL for Transport Regularized Greedy Learning and study it theoretically, proving that it leads to greedy modules that are regular and that progressively solve the task. Experimentally, we show improved accuracy of module-wise training of various architectures such as ResNets, Transformers and VGG, when our regularization is added, superior to that of other module-wise training methods and often to end-to-end training, with as much as \(60\%\) less memory usage. ## 1 Introduction End-to-end backpropagation is the standard training method of neural networks. However, it requires storing the whole model and computational graph during training, which entails large memory consumption. It also prohibits training the layers in parallel. Dividing the network into modules, a module being made up of one or more layers, accompanied by auxiliary classifiers, and greedily solving module-wise optimization problems sequentially (i.e., one after the other fully) or in parallel (i.e., at the same time batch-wise), consumes much less memory than end-to-end training as it does not need to store as many activations at the same time, and when done sequentially, only requires loading and training one module (so possibly one layer) at a time. Module-wise training has therefore been used in constrained settings in which end-to-end training can be impossible, such as training on mobile devices [58; 57] and dealing with very large whole slide images [65]. When combined with batch buffers, parallel module-wise training also allows for parallel training of the modules [8]. Despite its simplicity, module-wise training has recently been shown to scale well [8; 47; 60; 45], outperforming more complicated alternatives to end-to-end training such as synthetic [33; 14] and delayed [32; 31] gradients, while having superior memory savings. In a classification task, module-wise training splits the network into successive modules, a module being made up of one or more layers. Each module takes as input the output of the previous module, and each module has an auxiliary classifier so that a local loss can be computed, with backpropagation happening only inside the modules and not between them (see Figure 1 below). The main drawback of module-wise training is the well-documented _stagnation problem_ observed in [43; 7; 60; 47], whereby early modules overfit and learn more discriminative features than end-to-end training, destroying task-relevant information, and deeper modules do not improve the test accuracy significantly, or even degrade it, which limits the deployment of module-wise training. We further highlight this phenomenon in Figures 2 and 3 in Section 4.5. To tackle this issue, InfoPro [60] propose to maximize the mutual information that each module keeps with the input, in addition to minimizing the loss. [7] make the auxiliary classifier deeper and Sedona [47] make the first module deeper. 
These last two methods lack a theoretical grounding, while InfoPro requires a second auxiliary network for each module besides the classifier. We propose a different perspective, leveraging the analogy between residual connections and the Euler scheme for ODEs [61]. To preserve input information, we minimize the kinetic energy of the modules along with the training loss. Intuitively, this forces the modules to change their input as little as possible. We leverage connections with the theories of gradient flows in distribution space and optimal transport to analyze our method theoretically. Our approach is particularly well-adapted to networks that use residual connections such as ResNets [27; 28], their variants (e.g. ResNeXt [62], Wide ResNet [63], EfficientNet [56] and MobileNetV2 [48]) and vision transformers that are made up essentially of residual connections [39; 17], but is immediately usable on any network where many layers have the same input and output dimension, such as VGG [52]. Our contributions are the following: * We propose a new method for module-wise training. Being a regularization, it is lighter than many recent state-of-the-art methods (PredSim [45], InfoPro [60]) that train another auxiliary network besides the auxiliary classifier for each module. * We theoretically justify our method, proving that it is a transport regularization that forces the module to be an optimal transport map, making it more regular and stable. We also show that it amounts to a discretization of the gradient flow of the loss in probability space, which means that the modules progressively minimize the loss and explains why the method avoids the accuracy collapse observed in module-wise training. * Experimentally, we consistently improve the test accuracy of module-wise trained networks (ResNets, VGG and Swin-Transformer), beating 8 other methods, in sequential and parallel module-wise training, and also in _multi-lap sequential_ training, a variant of sequential module-wise training that we introduce and that performs better in many cases. In particular, our regularization makes parallel module-wise training superior or comparable in accuracy to end-to-end training, while consuming \(10\%\) to \(60\%\) less memory. ## 2 Transport-regularized module-wise training The typical setting of (sequential) module-wise training for minimizing a loss \(L\) is, given a dataset \(\mathcal{D}\), to solve one after the other, for \(1{\leq}k{\leq}K\), Problems \[(T_{k},F_{k})\in\arg\min_{T,F}\sum_{x\in\mathcal{D}}L(F,T\circ G_{k-1}(x)) \tag{1}\] Figure 1: Module-wise training. where \(G_{k}=T_{k}\circ\ldots\circ T_{1}\) for \(1{\leq}k{\leq}K\), \(G_{0}{=}\texttt{id}\), \(T_{k}\) is the module (one or many layers) and \(F_{k}\) is an auxiliary classifier. Module \(T_{k}\) receives the output of module \(T_{k-1}\), and auxiliary classifier \(F_{k}\) computes the prediction from the output of \(T_{k}\) so the loss can be computed. The inputs are \(x\) and \(L\) has access to their labels \(y\) to calculate the loss, i.e. \(L(F,T\circ G_{k-1}(x))=l(F\circ T\circ G_{k-1}(x),y)\) where \(l\) is a machine learning loss such as cross-entropy. See Figure 1. The final network trained this way is \(F_{K}\circ G_{K}\). But, at inference, we can stop at any depth \(k\) and use \(F_{k}\circ G_{k}\) if it performs better. Indeed, an intermediate module often performs as well or better than the last module because of the early overfitting and subsequent stagnation or collapse problem of module-wise training [43; 7; 60; 47]. 
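A minimal PyTorch sketch of this sequential setting (1), with illustrative dimensions and auxiliary heads; gradients are confined to the current module by detaching the frozen predecessors.

```python
import torch
import torch.nn as nn

# Sketch of sequential module-wise training: each module T_k is trained
# with its own auxiliary classifier F_k on the frozen output of the
# previous modules; no gradient flows between modules.
modules = [nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(4)]
aux_heads = [nn.Linear(64, 10) for _ in range(4)]
criterion = nn.CrossEntropyLoss()

def train_sequentially(loader, n_epochs):
    for k, (T, F) in enumerate(zip(modules, aux_heads)):
        opt = torch.optim.SGD(list(T.parameters()) + list(F.parameters()),
                              lr=0.1)
        for _ in range(n_epochs):
            for x, y in loader:
                with torch.no_grad():  # detach from earlier modules
                    for T_prev in modules[:k]:
                        x = T_prev(x)
                loss = criterion(F(T(x)), y)
                opt.zero_grad(); loss.backward(); opt.step()

# toy data: 64-dimensional inputs, 10 classes
loader = [(torch.randn(32, 64), torch.randint(0, 10, (32,)))
          for _ in range(8)]
train_sequentially(loader, n_epochs=2)
```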
We propose below in (2) a regularization that avoids the destruction of task-relevant information by the early modules by forcing them to minimally modify their input. Proposition 2.2 proves that by using our regularization (2), we are indeed making the modules build upon each other to solve the task, which is the property we desire in module-wise training, as the modules now act as successive proximal optimization steps in the _minimizing movement scheme_ optimization algorithm for maximizing the separability of the data representation. The background on optimal transport (OT), gradient flows and the minimizing movement scheme is in Appendices A and B. ### Method statement To keep greedily trained modules from overfitting and destroying information needed later, we penalize their kinetic energy to force them to preserve the geometry of the problem as much as possible. If each module is a single residual block (that is, a function \(T{=}\texttt{id}{+}r\), which includes many transformer architectures [39; 17]), its kinetic energy is simply the squared norm of its residue \(r{=}T{-}\texttt{id}\), which we add to the loss \(L\) in the target of the greedy problems (1). All layers that have the same input and output dimension can be rewritten as residual blocks, and the analysis applies to a large variety of architectures such as VGG [52]. Given \(\tau{>}0\), we now solve, for \(1{\leq}k{\leq}K\), Problems \[(T_{k}^{\tau},F_{k}^{\tau})\in\arg\min_{T,F}\sum_{x\in\mathcal{D}}L(F,T\circ G_{k-1}^{\tau}(x))+\frac{1}{2\tau}\|T\circ G_{k-1}^{\tau}(x){-}G_{k-1}^{\tau}(x)\|^{2} \tag{2}\] where \(G_{k}^{\tau}{=}T_{k}^{\tau}\circ\ldots\circ T_{1}^{\tau}\) for \(1{\leq}k{\leq}K\) and \(G_{0}^{\tau}{=}\texttt{id}\). The final network is \(F_{K}^{\tau}{\circ}G_{K}^{\tau}\). Intuitively, this biases the modules towards moving the points as little as possible, thus at least keeping the performance of the previous module. Residual connections are already biased towards small displacements, and this bias is desirable and should be encouraged [35; 64; 26; 15; 36]. But the method can be applied to any module where \(T(x)\) and \(x\) have the same dimension so that \(T(x){-}x\) can be computed. To facilitate the theoretical analysis, we rewrite the method in a more general formulation using the data distribution \(\rho\), which can be discrete or continuous, and the distribution-wide loss \(\mathcal{L}\) that arises from the point-wise loss \(L\). Then Problem (2) is equivalent to Problem \[(T_{k}^{\tau},F_{k}^{\tau})\in\arg\min_{T,F}\ \mathcal{L}(F,T_{\sharp}\rho_{k}^{\tau})+\frac{1}{2\tau}\int_{\Omega}\|T(x)-x\|^{2}\,\mathrm{d}\rho_{k}^{\tau}(x) \tag{3}\] with \(\rho_{k+1}^{\tau}{=}(T_{k}^{\tau})_{\sharp}\rho_{k}^{\tau}\), \(\rho_{1}^{\tau}{=}\rho\) and \(\mathcal{L}(F,T_{\sharp}\rho_{k}^{\tau})=\int L(F,T(x))\,\mathrm{d}\rho_{k}^{\tau}(x)=\int L(F,z)\,\mathrm{d}T_{\sharp}\rho_{k}^{\tau}(z)\). ### Link with the minimizing movement scheme We now formulate our main result: solving Problems (3) is equivalent to following a _minimizing movement scheme (MMS)_ [50] in distribution space for minimizing \(\mathcal{Z}(\mu):=\min_{F}\mathcal{L}(F,\mu)\), which is the loss of the best classifier. If we are limited to linear classifiers, \(\mathcal{Z}(\rho_{k}^{\tau})\) represents the linear separability of the representation \(\rho_{k}^{\tau}\) at module \(k\) of the data distribution \(\rho\). The MMS, introduced in [24; 23], is a metric counterpart to Euclidean gradient descent for minimizing functionals over distributions. 
In our case, \(\mathcal{Z}\) is the functional we want to minimize. We define the MMS below in Definition 2.1. The distribution space we work in is the metric Wasserstein space \(\mathbb{W}_{2}(\Omega)=(\mathcal{P}(\Omega),W_{2})\), where \(\Omega\subset\mathbb{R}^{d}\) is a convex compact set, \(\mathcal{P}(\Omega)\) is the set of probability distributions over \(\Omega\) and \(W_{2}\) is the Wasserstein distance over \(\mathcal{P}(\Omega)\) derived from the optimal transport problem with Euclidean cost: \[W_{2}^{2}(\alpha,\beta)=\min_{T\,\mathrm{s.t.}\ T_{\sharp}\alpha=\beta}\int_{\Omega}\|T(x)-x\|^{2}\,\mathrm{d}\alpha(x) \tag{4}\] where we assume that \(\partial\Omega\) is negligible and that the distributions are absolutely continuous. **Definition 2.1**.: Given \(\mathcal{Z}:\mathbb{W}_{2}(\Omega)\rightarrow\mathbb{R}\), and starting from \(\rho_{1}^{\tau}\in\mathcal{P}(\Omega)\), the Minimizing Movement Scheme (MMS) takes proximal steps for minimizing \(\mathcal{Z}\). It is given by \[\rho_{k+1}^{\tau}\in\arg\min_{\rho\in\mathcal{P}(\Omega)}\ \mathcal{Z}(\rho)+\frac{1}{2\tau}W_{2}^{2}(\rho,\rho_{k}^{\tau}) \tag{5}\] The MMS (5) can be seen as a non-Euclidean implicit Euler step for following the gradient flow of \(\mathcal{Z}\), and \(\rho_{k}^{\tau}\) converges to a minimizer of \(\mathcal{Z}\) under some conditions (see the end of this section). So under the mentioned assumptions on \(\Omega\) and absolute continuity of the distributions, we have that Problems (3) are equivalent to the minimizing movement scheme (5): **Proposition 2.2**.: _The distributions \(\rho_{k+1}^{\tau}=(T_{k}^{\tau})_{\sharp}\rho_{k}^{\tau}\), where the functions \(T_{k}^{\tau}\) are found by solving (3) and \(\rho_{1}^{\tau}=\rho\) is the data distribution, coincide with the MMS (5) for \(\mathcal{Z}=\min_{F}\mathcal{L}(F,.)\)._ Proof.: The minimizing movement scheme (5) is equivalent to taking \(\rho_{k+1}^{\tau}=(T_{k}^{\tau})_{\sharp}\rho_{k}^{\tau}\) where \[T_{k}^{\tau}\in\arg\min_{T:\Omega\rightarrow\Omega}\mathcal{Z}(T_{\sharp}\rho_{k}^{\tau})+\frac{1}{2\tau}W_{2}^{2}(T_{\sharp}\rho_{k}^{\tau},\rho_{k}^{\tau}) \tag{6}\] under conditions that guarantee the existence of a transport map between \(\rho_{k}^{\tau}\) and any other measure; absolute continuity of \(\rho_{k}^{\tau}\) suffices, and the loss can ensure that \(\rho_{k+1}^{\tau}\) is also absolutely continuous. Among the functions \(T_{k}^{\tau}\) that solve Problem (6) is the optimal transport map from \(\rho_{k}^{\tau}\) to \(\rho_{k+1}^{\tau}\). To solve specifically for this optimal transport map, we have to solve the equivalent Problem \[T_{k}^{\tau}\in\arg\min_{T}\mathcal{Z}(T_{\sharp}\rho_{k}^{\tau})+\frac{1}{2\tau}\int_{\Omega}\|T(x)-x\|^{2}\,\mathrm{d}\rho_{k}^{\tau}(x) \tag{7}\] Problems (6) and (7) have the same minimum value, but the minimizer of (7) is now an optimal transport map between \(\rho_{k}^{\tau}\) and \(\rho_{k+1}^{\tau}\). This is immediate from the definition (4) of the \(W_{2}\) distance. Equivalently minimizing first over \(F\) and then over \(T\) in (3), it follows from the definition of \(\mathcal{Z}\) that Problems (3) and (7) are equivalent, which concludes the proof. Since we solve Problems (3) over neural networks, their representation power shown by universal approximation theorems [13; 29] is important to get close to equivalence between (5) and (3), as we need to approximate an optimal transport map. We also know that the training of each module, if it is shallow, converges [5; 6; 34; 22; 18]. 
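For intuition, here is a toy illustration of the scheme (5) on an empirical measure, where the \(W_{2}\) proximal term reduces to a per-particle squared Euclidean penalty and \(\mathcal{Z}\) is replaced by a simple potential energy standing in for \(\min_{F}\mathcal{L}(F,\cdot)\); everything below is illustrative.

```python
import torch

# Toy minimizing movement scheme on 128 particles in the plane:
# each outer step solves the proximal problem (5) by gradient descent.
def Z(points):
    # stand-in functional: potential pushing mass toward the point (2, 2)
    return ((points - 2.0) ** 2).sum(dim=1).mean()

tau, x = 0.5, torch.randn(128, 2)  # rho_1: the initial particle cloud
for k in range(10):                # outer MMS steps
    x_prev = x.detach()
    x = x_prev.clone().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=0.05)
    for _ in range(200):           # inner solve of the proximal step
        loss = Z(x) + ((x - x_prev) ** 2).sum(dim=1).mean() / (2 * tau)
        opt.zero_grad(); loss.backward(); opt.step()
    # Z should decrease across steps, as guaranteed by the scheme
    print(f"step {k}: Z = {Z(x).item():.4f}")
```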
If \(\mathcal{Z}\) is lower semi-continuous then Problems (5) always admit a solution because \(\mathcal{P}(\Omega)\) is compact. If \(\mathcal{Z}\) is also \(\lambda\)-geodesically convex for \(\lambda{>}0\), we have convergence of \(\rho_{k}^{\tau}\) as \(k{\rightarrow}\infty\) and \(\tau{\rightarrow}0\) to a minimizer of \(\mathcal{Z}\), potentially under more technical conditions (see Appendix B). Even though a machine learning loss will usually not satisfy these conditions, this analysis offers hints as to why our method avoids in practice the problem of stagnation or collapse in performance of module-wise training as \(k\) increases, as we are making proximal local steps in Wasserstein space to minimize the loss. This convergence discussion also suggests taking \(\tau\) as small as possible and using many modules. ### Regularity result As a secondary result, we show that Problem (3) has a solution and that the solution module \(T_{k}^{\tau}\) is an optimal transport map between its input and output distributions, which means that it comes with some regularity. [36] show that these networks generalize better and overfit less in practice. We assume that the minimization in \(F\) is over a compact set \(\mathcal{F}\), that \(\rho_{k}^{\tau}\) is absolutely continuous, that \(\mathcal{L}\) is continuous and non-negative, that \(\Omega\) is convex and compact and that \(\partial\Omega\) is negligible. **Proposition 2.3**.: _Problem (3) has a minimizer \((T_{k}^{\tau},F_{k}^{\tau})\) such that \(T_{k}^{\tau}\) is an optimal transport map. And for any minimizer \((T_{k}^{\tau},F_{k}^{\tau})\), \(T_{k}^{\tau}\) is an optimal transport map._ The proof is in Appendix C. OT maps have regularity properties under some boundedness assumptions. Given Theorem A.1 in Appendix A taken from [20], \(T_{k}^{\tau}\) is \(\eta\)-Hölder continuous almost everywhere, and if the optimization algorithm we use to solve the discretized problem (2) returns an approximate solution pair \((\tilde{F}_{k}^{\tau},\tilde{T}_{k}^{\tau})\) such that \(\tilde{T}_{k}^{\tau}\) is an \(\epsilon\)-optimal transport map, i.e. \(\|\tilde{T}_{k}^{\tau}-T_{k}^{\tau}\|_{\infty}\leq\epsilon\), then we have (using the triangle inequality) the following stability property of the module \(\tilde{T}_{k}^{\tau}\): \[\|\tilde{T}_{k}^{\tau}(x)-\tilde{T}_{k}^{\tau}(y)\|\leq 2\epsilon+C\|x-y\|^{\eta} \tag{8}\] for almost every \(x,y\in\text{supp}(\rho_{k}^{\tau})\) and \(C{>}0\). Composing these stability bounds on \(T_{k}^{\tau}\) and \(\tilde{T}_{k}^{\tau}\) allows us to get bounds for the composition networks \(G_{k}^{\tau}\) and \(\tilde{G}_{k}^{\tau}{=}\tilde{T}_{k}^{\tau}\circ...\circ\tilde{T}_{1}^{\tau}\). To summarize Section 2, the transport regularization makes each module more regular and allows the modules to build on each other as \(k\) increases to solve the task, which is the property we desire. ## 3 Practical implementation ### Multi-block modules For simplicity, we presented in (2) the case where each module is a single residual block. However, in practice, we often split the network into modules that are made up of many residual blocks each. We show here that regularizing the kinetic energy of such modules still amounts to a transport regularization, which means that the theoretical results in Propositions 2.2 and 2.3 still apply. If each module \(T_{k}\) is made up of \(M\) residual blocks, i.e. 
applies \(x_{m+1}{=}x_{m}{+}r_{m}(x_{m})\) for \(0{\leq}m{<}M\), then its total discrete kinetic energy for a single data point \(x_{0}\) is the sum of its squared residue norms \(\sum\|r_{m}(x_{m})\|^{2}\), since a residual network can be seen as a discrete Euler scheme for an ordinary differential equation [61] with velocity field \(r\): \[x_{m+1}=x_{m}+r_{m}(x_{m})\ \longleftrightarrow\ \partial_{t}x_{t}=r_{t}(x_{t}) \tag{9}\] and \(\sum\|r_{m}(x_{m})\|^{2}\) is then the discretization of the total kinetic energy \(\int_{0}^{1}\|r_{t}(x)\|^{2}\,\mathrm{d}t\) of the ODE. If \(\psi_{m}^{x}\) denotes the position of a point \(x\) after \(m\) residual blocks, then regularizing the kinetic energy of multi-block modules now means solving \[(T_{k}^{\tau},F_{k}^{\tau})\in\arg\min_{T,F}\sum_{x\in\mathcal{D}}\Big(L(F,T(G_{k-1}^{\tau}(x)))+\frac{1}{2\tau}\sum_{m=0}^{M-1}\|r_{m}(\psi_{m}^{x})\|^{2}\Big) \tag{10}\] \[\text{s.t. }T=(\mathtt{id}+r_{M-1})\circ...\circ(\mathtt{id}+r_{0}),\ \psi_{0}^{x}=G_{k-1}^{\tau}(x),\ \psi_{m+1}^{x}=\psi_{m}^{x}+r_{m}(\psi_{m}^{x})\] where \(G_{k}^{\tau}{=}T_{k}^{\tau}\circ...\circ T_{1}^{\tau}\) for \(1{\leq}k{\leq}K\) and \(G_{0}^{\tau}{=}\mathtt{id}\). We also minimize this sum of squared residue norms instead of \(\|T(x)-x\|^{2}\) (the two no longer coincide) as it works better in practice, which we assume is because it offers a more localized control of the transport. As expressed in (9), a residual network can be seen as an Euler scheme of an ODE and Problem (10) is then the discretization of \[(T_{k}^{\tau},F_{k}^{\tau})\in\arg\min_{T,F}\ \mathcal{L}(F,T_{\sharp}\rho_{k}^{\tau})+\frac{1}{2\tau}\int_{0}^{1}\|v_{t}\|_{L^{2}((\phi_{t})_{\sharp}\rho_{k}^{\tau})}^{2}\,\mathrm{d}t \tag{11}\] \[\text{s.t. }T=\phi_{1},\ \partial_{t}\phi_{t}^{x}=v_{t}(\phi_{t}^{x}),\ \phi_{0}^{\cdot}=\mathtt{id}\] where \(\rho_{k+1}^{\tau}=(T_{k}^{\tau})_{\sharp}\rho_{k}^{\tau}\) and \(r_{m}\) is the discretization of vector field \(v_{t}\) at time \(t=m/M\). Here, the distributions \(\rho_{k}^{\tau}\) are pushed forward through the maps \(T_{k}^{\tau}\), which correspond to the flow \(\phi\) at time \(t=1\) of the kinetically-regularized velocity field \(v_{t}\). We recognize in the second term in the target of (11) the optimal transport problem in its dynamic formulation (15) from [9], and given the equivalence between the Monge OT problem (4) and the dynamic OT problem (15) in Appendix A, Problem (11) is in fact equivalent to the original continuous formulation (3), and the theoretical results in Section 2 follow immediately (see also the proof of Proposition 2.3 in Appendix C). ### Solving the module-wise problems The module-wise problems can be solved in two ways. One can completely train each module with its auxiliary classifier for \(N\) epochs before training the next module, which receives as input the output of the previously trained module. We call this _sequential_ module-wise training. But we can also do this batch-wise, i.e. perform a complete forward pass on each batch but, instead of a full backward pass, a backward pass that only updates the current module \(T_{k}^{\tau}\) and its auxiliary classifier \(F_{k}^{\tau}\), meaning that \(T_{k}^{\tau}\) forwards its output to \(T_{k+1}^{\tau}\) immediately after it computes it. We call this _parallel_ module-wise training. It is called _decoupled_ greedy training in [8], which shows that combining it with batch buffers solves all three locking problems and allows a linear training parallelization in the depth of the network. 
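To make Problem (10) concrete, here is a minimal PyTorch sketch of a multi-block module that accumulates its kinetic energy along the forward pass; dimensions and \(\tau\) are illustrative, and the module can be plugged into either the sequential or parallel scheme above.

```python
import torch
import torch.nn as nn

class KineticModule(nn.Module):
    """Sketch of a module of M residual blocks that accumulates the sum of
    squared residue norms (the discrete kinetic energy of Eq. 10)."""
    def __init__(self, dim, M):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.Tanh()) for _ in range(M))

    def forward(self, x):
        energy = x.new_zeros(())
        for r in self.blocks:                       # x_{m+1} = x_m + r_m(x_m)
            res = r(x)
            energy = energy + res.pow(2).sum(dim=1).mean()
            x = x + res
        return x, energy                            # output + kinetic energy

module, F = KineticModule(64, M=4), nn.Linear(64, 10)
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
out, energy = module(x)
tau = 0.1  # illustrative regularization weight
loss = nn.CrossEntropyLoss()(F(out), y) + energy / (2 * tau)
loss.backward()
```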
We propose a variant of sequential module-wise training that we call _multi-lap sequential_ module-wise training, in which instead of training each module for \(N\) epochs, we train each module from the first to the last sequentially for \(N/R\) epochs, then go back and train from the first module to the last for \(N/R\) epochs again, and we do this for \(R\) laps. For the same total number of epochs and training time, and the same advantages (loading and training one module at a time), this provides a non-negligible improvement in accuracy over normal sequential module-wise training in most cases, as shown in Section 4. Despite our theoretical framework being that of sequential module-wise training, our method improves the test accuracy of all three module-wise training regimes. ### Varying the regularization weight The discussion in Section 2.2 suggests taking a fixed weight \(\tau\) for the transport cost that is as small as possible. However, instead of using a fixed \(\tau\), we might want to vary it along the depth \(k\), using a smaller \(\tau_{k}\) to further constrain the earlier modules so that they do not overfit, or the later modules so that they maintain the accuracy of the earlier ones. We might also want to regularize the network further in earlier epochs when the data is more entangled. We propose in Appendix D to formalize this varying weight \(\tau_{k,i}\) across modules \(k\) and SGD iterations \(i\) by using a scheme inspired by the method of multipliers to solve Problems (2) and (10). However, it works best in only one experiment in practice. The observed dynamics of \(\tau_{k,i}\) suggest simply finding a fixed value of \(\tau\) that is multiplied by 2 for the second half of the network, which works best in all the other experiments (see Appendix E). ## 4 Experiments We call our method TRGL, for Transport-Regularized Greedy Learning. For the auxiliary classifiers, we use the architecture from DGL [7; 8], that is, a convolution followed by an average pooling and a fully connected layer, which is very similar to that used by InfoPro [60], except for the Swin Transformer, where we use a linear layer. We call vanilla greedy module-wise training with the same architecture but without our regularization VanGL, and we include its results in all tables for ablation study purposes. The code is available at github.com/block-wise/module-wise and implementation details are in Appendix E. ### Parallel module-wise training To compare with other methods, we focus first on parallel training, as it performs better than sequential training and has been more explored recently. The first experiment is training in parallel 3 residual architectures and a VGG-19 [52] divided into 4 modules of equal depth on TinyImageNet. We compare in Table 1 our results in this setup to three of the best recent parallel module-wise training methods: DGL [8], PredSim [45] and Sedona [47], from Table 2 in [47]. We find that our TRGL has a much better test accuracy than the three other methods, especially on the smaller architectures. It also performs better than end-to-end training on the three ResNets. Parallel TRGL in this case with 4 modules consumes \(10\) to \(21\%\) less memory than end-to-end training (with a batch size of 256). The second experiment is training in parallel two ResNets divided into 2 modules on CIFAR100 [37]. We compare in Table 2 our results in this setup to the two delayed gradient methods DDG [32] and FR [31] from Table 2 in [31].
Here again, parallel TRGL has a better accuracy than both the other two methods and end-to-end training. With only two modules, the memory gains from less backpropagation are neutralized by the weight of the extra classifier and there are negligible memory savings compared to end-to-end training. However, parallel TRGL has a better test accuracy by up to almost 2 percentage points. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Architecture & Parallel VanGL & Parallel TRGL (ours) & PredSim & DGL & Sedona & E2E \\ \hline VGG-19 & 56.17 \(\pm\) 0.29 (\(\downarrow\)\(27\%\)) & **57.28 \(\pm\)** 0.20 (\(\downarrow\)\(21\%\)) & 44.70 & 51.40 & 56.56 & 58.74 \\ ResNet-50 & 58.43 \(\pm\) 0.45 (\(\downarrow\)\(26\%\)) & **60.30 \(\pm\)** 0.58 (\(\downarrow\)\(20\%\)) & 47.48 & 53.96 & 54.40 & 58.10 \\ ResNet-101 & 63.64 \(\pm\) 0.30 (\(\downarrow\)\(24\%\)) & **63.71 \(\pm\)** 0.40 (\(\downarrow\)\(11\%\)) & 53.92 & 53.80 & 59.12 & 62.01 \\ ResNet-152 & 63.87 \(\pm\) 0.16 (\(\downarrow\)\(21\%\)) & **64.23 \(\pm\)** 0.14 (\(\downarrow\)\(10\%\)) & 51.76 & 57.64 & 64.10 & 62.32 \\ \hline \hline \end{tabular} \end{table} Table 1: Test accuracy of parallel TRGL with 4 modules (average and 95\(\%\) confidence interval over 5 runs) on TinyImageNet, compared to DGL, PredSim, Sedona and E2E from Table 2 in [47], with memory saved compared to E2E as a percentage of E2E memory consumption in red. The third experiment is training in parallel a ResNet-110 divided into two, four, eight and sixteen modules on STL10 [12]. We compare in Table 3 our results in this setup to the recent methods InfoPro [60] and DGL [8] from Table 2 in [60]. TRGL largely outperforms the other methods. It also outperforms end-to-end training in all but one case (that with 16 modules). With a batch size of 64, memory savings of parallel TRGL compared to end-to-end training reach \(48\%\) and \(58.5\%\) with 8 and 16 modules respectively, with comparable test accuracy. With 4 modules, TRGL training weighs \(24\%\) less than end-to-end training, and has a test accuracy that is better by \(2\) percentage points (see Section 4.2 and Table 5 for a detailed memory usage comparison with InfoPro). The fourth experiment is training (from scratch) in parallel a Swin-Tiny Transformer [39] divided into 4 modules on three datasets. We compare in Table 4 our results with those of InfoPro [60] and InfoProL, a variant of InfoPro proposed in [46]. TRGL outperforms the other module-wise training methods. It does not outperform end-to-end training in this case, but consumes \(29\%\) less memory on CIFAR10 and CIFAR100 and 50\(\%\) less on STL10, compared to \(38\%\) for InfoPro and \(45\%\) for InfoProL in [46]. ### Memory savings As seen above, parallel TRGL is lighter than end-to-end training by up to almost \(60\%\). The extra memory consumed by our regularization compared to parallel VanGL is between 2 and \(13\%\) of end-to-end memory. Memory savings then depend mainly on the size of the auxiliary classifier, which can easily be adjusted. Note that the delayed gradient methods DDG and FR increase memory usage [31], and Sedona does not claim to save memory, but rather to speed up training [47]. DGL is architecture-wise essentially identical to VanGL and consumes the same memory. We compare in Table 5 the memory consumption of our method to that of InfoPro [60] on a ResNet-110 on STL10 with a batch size of 64 (so the same setting as in Table 3).
InfoPro [60] also propose to split the network into modules that have the same weight but not necessarily the same number of layers. They only implement this for \(K{\leq}4\) modules. When the modules are even in weight and not in depth, we call the training methods VanGL*, TRGL* and InfoPro*. In practice, this leads to shallower early modules, which slightly hurts performance according to [47], and as seen below. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Parallel VanGL & Parallel TRGL (ours) & InfoPro & InfoProL & E2E \\ \hline STL10 & 67.00 \(\pm\) 1.36 (\(\downarrow\) 55\%) & **67.92**\(\pm\) 1.12 (\(\downarrow\) 50\%) & 64.61 (\(\downarrow\) 38\%) & 66.89 (\(\downarrow\) 45\%) & 72.19 \\ CIFAR10 & 83.94 \(\pm\) 0.42 (\(\downarrow\) 33\%) & **86.48**\(\pm\) 0.54 (\(\downarrow\) 29\%) & 83.38 (\(\downarrow\) 38\%) & 86.28 (\(\downarrow\) 45\%) & 91.37 \\ CIFAR100 & 69.34 \(\pm\) 0.91 (\(\downarrow\) 33\%) & **74.11**\(\pm\) 0.31 (\(\downarrow\) 29\%) & 68.36 (\(\downarrow\) 38\%) & 73.00 (\(\downarrow\) 45\%) & 75.03 \\ \hline \hline \end{tabular} \end{table} Table 4: Test accuracy of parallel TRGL with 4 modules (average and 95\(\%\) confidence interval over 5 runs) on a Swin-Tiny Transformer, compared to InfoPro, InfoProL and E2E from Table 3 in [46], with memory saved compared to E2E as a percentage of E2E memory consumption in red. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Architecture & Parallel VanGL & Parallel TRGL (ours) & DDG & FR & E2E \\ \hline ResNet-101 & 77.31 \(\pm\) 0.27 & **77.87**\(\pm\) 0.44 & 75.75 & 76.90 & 76.52 \\ ResNet-152 & 75.40 \(\pm\) 0.75 & **76.55**\(\pm\) 1.90 & 73.61 & 76.39 & 74.80 \\ \hline \hline \end{tabular} \end{table} Table 2: Test accuracy of parallel TRGL with 2 modules (average and 95\(\%\) confidence interval over 3 runs) on CIFAR100, compared to DDG, FR and E2E from Table 2 in [31]. However, TRGL* still outperforms InfoPro and end-to-end training, and it leads to even bigger memory savings than InfoPro*. We see in Table 5 below that TRGL saves more memory than InfoPro in two out of three cases (4 and 8 modules), and about the same in the third case (16 modules), with much better test accuracy in all cases. Likewise, TRGL* is lighter than InfoPro*, with better accuracy. See Appendix F for more details. ### Training time Since we do not implement forward unlocking with batch buffers as in DGL (i.e. only the backward passes of the modules happen in parallel), parallel module-wise training does slightly slow down training in this case. Epoch time increases by \(6\%\) with 2 modules and by \(16\%\) with 16 modules. TRGL is only slower than VanGL by \(2\%\) for all numbers of modules due to the additional regularization term. This is comparable to InfoPro, which reports a time overhead between 1 and \(27\%\) compared to end-to-end training. ### Sequential full block-wise training In block-wise sequential training, each module is a single residual block and the blocks are trained sequentially, which requires only enough memory to train one block and its classifier at a time. Even though it has been less explored in recent module-wise training methods, it has been used in practice in very constrained settings such as on-device training [58; 57]. We therefore test our regularization in this section in this setting, with more details in Appendix G. We propose here to use shallower ResNets that are initially wider. These architectures are well-adapted to layer-wise training as seen in [7].
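As a concrete view of the schedules compared in this section, here is a minimal sketch of multi-lap sequential (MLS) training from Section 3.2; `train_module` is a hypothetical helper that trains one block and its classifier on inputs pushed through the already-trained earlier blocks.

```python
def train_mls(blocks, classifiers, train_module, loader, N, R):
    """Multi-lap sequential training: R laps over the blocks, each block
    trained for N/R epochs per lap, so every block is revisited R times
    for the same total budget of N epochs per block."""
    for lap in range(R):
        for k in range(len(blocks)):
            train_module(prefix=blocks[:k], block=blocks[k],
                         classifier=classifiers[k], loader=loader,
                         epochs=N // R)
```

Plain sequential training is the special case \(R=1\); in both cases only one block and its classifier need to reside in memory at any time.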
We check first in Table 9 in Appendix G that this architecture works well with parallel module-wise training with 2 modules by comparing it favorably on CIFAR10 [37] with methods DGL [8], InfoPro [60] and DDG [32] that use a ResNet-110 with the same number of parameters. We then train a 10-block ResNet block-wise on CIFAR100. In Tables 10 and 11 in Appendix G, we see that MLS training improves the accuracy of sequential training by \(0.8\) percentage points when the trainset is full, but works less well on small train sets. Of the two regimes, it is mainly MLS training whose test accuracy the regularization improves. The improvement increases as the training set gets smaller and reaches 1 percentage point. While parallel module-wise training performs quite close to end-to-end training in the full data regime and much better in the small data regime, sequential and multi-lap sequential training are competitive with end-to-end training in the small data regime. Combining the multi-lap trick and the regularization improves the accuracy of sequential training by 1.2 percentage points when using the entire trainset. We report further results for full block-wise training on MNIST [38] and CIFAR10 [37] in Tables 12 and 13 in Appendix G. The \(88\%\) accuracy of sequential training on CIFAR10 in Table 12 is the same as in Table 2 of [7], which is the best available method for layer-wise sequential training, with VGG networks of comparable depth and width. In Figure 2, on the left (parallel module-wise training experiments from Table 3), TRGL performs worse than vanilla greedy learning early on, but surpasses it in later modules, indicating that it does avoid early overfitting. On the right (sequential block-wise training experiments from Table 12), we see a large decline in performance that the regularization avoids. We see similar patterns in Figure 3 in Appendix G with parallel and MLS block-wise training. ## 5 Limitations The results in Appendix G show a few limitations of our method, as the improvements from the regularization are sometimes minimal on sequential training. However, the results show that our approach works in all settings (parallel and sequential, with many or few modules), whereas other papers do not test their methods in all settings, and some show problems in settings other than the original one in subsequent papers (e.g. delayed gradient methods when the number of modules increases [31] and PredSim in [47]). Also, for parallel training in Section 4.1, the improvement from the regularization compared to VanGL is larger and increases with the number of modules (so with the memory savings) and reaches almost 5 percentage points. We show in Appendix H that our method is not very sensitive to the choice of hyperparameter \(\tau\) over a large scale. ## 6 Related work Layer-wise training was initially considered as a pre-training and initialization method [10; 43] and was recently shown to be competitive with end-to-end training [7; 45]. Many papers consider using a different auxiliary loss, instead of or in addition to the classification loss: kernel similarity [42], information-theory-inspired losses [53; 44; 41; 60] and biologically plausible losses [53; 45; 25; 11]. Methods [7], PredSim [45], DGL [8], Sedona [47] and InfoPro [60] report the best module-wise training results. [7; 8] do it simply through the architecture choice of the auxiliary networks.
Sedona applies architecture search to decide on where to split the network into modules and what auxiliary classifier to use before module-wise training. Only BoostResNet [30] also proposes a block-wise training idea geared towards ResNets. However, their results only show better early performance, and end-to-end fine-tuning is required to be competitive. A method called ResIST [19], which is similar to block-wise training of ResNets, randomly assigns ResBlocks to one of up to 16 modules that are trained independently and reassembled before another random partition. More of a distributed training method, it is only compared to local SGD [54]. These methods can all be combined with our regularization, and we do use the auxiliary classifier from [7; 8]. Besides module-wise training, methods such as DNI [33; 14], DDG [32] and FR [31] solve the update and backward locking problems with an eye towards parallelization by using delayed or predicted gradients, or even predicted inputs to address forward locking, which is what [55] do. But they observe training issues with more than 5 modules [31]. This makes them compare unfavorably to module-wise training [8]. Figure 2: Test accuracy after each module averaged over 10 runs with \(95\%\) confidence intervals. Left: parallel vanilla (VanGL, in blue) and regularized (TRGL, in red) module-wise training of a ResNet-110 with 16 modules on STL10 (Table 3). Right: sequential vanilla (VanGL, in blue) and regularized (TRGL, in red) block-wise training of a 10-block ResNet on \(2\%\) of CIFAR10 (Table 12). The high dimension of the predicted gradient, which scales with the size of the network, makes [33; 14] challenging in practice. Therefore, despite its simplicity, greedy module-wise training is more appealing when working in a constrained setting. Viewing ResNets as dynamic transport systems [16; 36] followed from their view as a discretization of ODEs [61]. Transport regularization of ResNets in particular is motivated by the observation that they are naturally biased towards minimally modifying their input [35; 26]. We further linked this transport viewpoint with gradient flows in the Wasserstein space to apply it in a principled way to module-wise training. Gradient flows on the data distribution appeared recently in deep learning. In [1], the focus is on functionals of measures whose first variations are known in closed form and used, through their gradients, in the algorithm. This limits the scope of their applications to transfer learning and similar tasks. Likewise, [21; 40; 4; 3] use the explicit gradient flow of \(f\)-divergences and other distances between measures for generation and generator refinement. In contrast, we use the discrete minimizing movement scheme, which does not require computation of the first variation and allows us to consider classification. ## 7 Conclusion We introduced a transport regularization for module-wise training that theoretically links module-wise training to gradient flows of the loss in probability space. Our method provably leads to more regular modules and experimentally improves the test accuracy of module-wise parallel, sequential and multi-lap sequential (a variant of sequential training that we introduce) training. Through this simple method that does not complexify the architecture, we make module-wise training competitive with end-to-end training while benefiting from its lower memory usage. Being a regularization, the method can easily be combined with other layer-wise training methods.
Future work can experiment with working in Wasserstein space \(W_{p}\) for \(p\neq 2\), i.e. regularizing with a norm \(\|.\|_{p}\) with \(p\neq 2\). One can also ask how far the obtained composition network \(G_{K}\) is from being an OT map itself, which could provide a better stability bound than the one obtained by naively chaining the stability bounds (8).
2309.16049
Neural Network Augmented Kalman Filter for Robust Acoustic Howling Suppression
Acoustic howling suppression (AHS) is a critical challenge in audio communication systems. In this paper, we propose a novel approach that leverages the power of neural networks (NN) to enhance the performance of traditional Kalman filter algorithms for AHS. Specifically, our method involves the integration of NN modules into the Kalman filter, enabling the refinement of the reference signal, a key factor in effective adaptive filtering, and the estimation of covariance matrices for the filter, which are crucial for adaptability in dynamic conditions, thereby obtaining improved AHS performance. As a result, the proposed method achieves improved AHS performance compared to both standalone NN and Kalman filter methods. Experimental evaluations validate the effectiveness of our approach.
Yixuan Zhang, Hao Zhang, Meng Yu, Dong Yu
2023-09-27T22:07:00Z
http://arxiv.org/abs/2309.16049v1
# Neural Network Augmented Kalman Filter for Robust Acoustic Howling Suppression ###### Abstract Acoustic howling suppression (AHS) is a critical challenge in audio communication systems. In this paper, we propose a novel approach that leverages the power of neural networks (NN) to enhance the performance of traditional Kalman filter algorithms for AHS. Specifically, our method involves the integration of NN modules into the Kalman filter, enabling the refinement of the reference signal, a key factor in effective adaptive filtering, and the estimation of covariance matrices for the filter, which are crucial for adaptability in dynamic conditions, thereby obtaining improved AHS performance. As a result, the proposed method achieves improved AHS performance compared to both standalone NN and Kalman filter methods. Experimental evaluations validate the effectiveness of our approach. Yixuan Zhang\({}^{1*}\), Hao Zhang\({}^{2}\), Meng Yu\({}^{2}\), Dong Yu\({}^{2}\)\({}^{1}\)The Ohio State University, Columbus, OH, USA \({}^{2}\)Tencent AI Lab, Bellevue, WA, USA Footnote †: This work was done during an internship at Tencent AI Lab. ## 1 Introduction Acoustic howling is a phenomenon that frequently arises in audio systems such as karaoke systems and public address systems, where the amplified sound from the loudspeaker is captured by the microphone and subsequently re-amplified recursively. This creates an internal positive feedback loop within audio systems, leading to an unpleasant howling sound that reinforces specific frequency components [1, 2, 3], which not only jeopardizes the proper functioning of equipment but also poses potential risks to human auditory health. Various techniques have been explored to suppress howling, including gain control [4], notch filters [5, 6], and adaptive feedback control (AFC) [6, 7, 8]. Notably, AFC methods leverage adaptive filters like the Kalman filter to suppress howling via adjustments of filter coefficients driven by iterative feedback. The real-time adaptation of AFC methods breaks the positive feedback loop and results in better AHS performance. However, such methods have shown sensitivity to control parameters and cannot effectively manage the nonlinearity introduced by amplifiers and loudspeakers. In recent years, inspired by acoustic echo cancellation (AEC) studies [9, 10], deep-learning-based approaches have been explored to solve the AHS problem. Gan et al. [11] trained a deep neural network (DNN) in the time-frequency domain to suppress howling noise from speech signals. In [12], a model based on a convolutional recurrent neural network is introduced for howling detection in real-time communication scenarios. In [13], a deep learning framework, called DeepMFC, is introduced to address marginal stability issues of acoustic feedback systems. While existing approaches seem promising, the data used for training is generated offline in a closed-loop system without AHS processing. This leads to a mismatch during streaming inference, as AHS processing is continuously integrated, influencing its input stream. To improve the mismatch issue and ease training, DeepAHS [14] leverages the teacher-forcing strategy and demonstrates superior performance compared to other approaches. HybridAHS [15], which incorporates the Kalman filter by augmenting its output as network input, further improves AHS performance. Nevertheless, although both methods address the training concern and exhibit superiority, the discrepancy between training and real-time streaming inference still exists.
NN-augmented adaptive filtering approaches, which potentially introduce less distortion, have been explored in the context of AEC. Deep Adaptive AEC [16] employs NN modules to estimate the nonlinear reference and the step size in the normalized least mean square (NLMS) algorithm, which shows improved performance compared to fully DNN-based baselines in time-varying acoustic environments. Our prior study [17] integrates NN modules into a frequency-domain Kalman filter (FDKF) for estimating the nonlinear reference signal and a nonlinear transition function, which improves the performance of FDKF significantly and outperforms the NLMS-based Deep Adaptive AEC model. It is worth noting that our prior experiments show that exclusively employing NNs to estimate Kalman filter components does not necessarily yield performance improvements. However, leveraging NNs to estimate absent or approximated components within the Kalman filter algorithm has demonstrated considerable improvements, which further motivates our continued explorations in the field of AHS. In this study, we introduce NeuralKalmanAHS, an NN-augmented Kalman filter for AHS. The proposed model incorporates NN modules into the frequency-domain Kalman filter, optimizing reference signal refinement and covariance matrix estimation. NeuralKalmanAHS is trained in a streaming mode that aligns with the streaming inference framework detailed in [14], which evaluates AHS models in recurrent and real-world settings, thus eliminating potential mismatch issues. Furthermore, we employ a howling detection strategy during training to ensure model convergence, allowing successful model training even in challenging acoustic howling scenarios. Ablation studies indicate that streaming training ensures robustness of NeuralKalmanAHS against acoustic howling, even with lightweight models focused solely on covariance estimation, while reference signal refinement substantially boosts performance. Experimental results show that the proposed NeuralKalmanAHS effectively suppresses howling noise with less distortion, demonstrating remarkable stability in challenging scenarios, and outperforming strong baseline methods. 1 Footnote 1: Demos are available at [https://yixuanz.github.io/NeuralKalmanAHS](https://yixuanz.github.io/NeuralKalmanAHS). The remainder of this paper is organized as follows. Section 2 introduces the problem formulation of acoustic howling and the frequency-domain Kalman filter. Section 3 presents the proposed NeuralKalmanAHS. Section 4 and Section 5 describe the experimental setup and evaluation results, respectively. Section 6 concludes the paper. ## 2 Acoustic Howling Suppression ### Problem formulation Fig. 1 shows a typical single-channel acoustic amplification system with the proposed NeuralKalmanAHS model. The system consists of a microphone and a loudspeaker, both of which are in the same space. The microphone receives a mixture of the near-end speech signal \(s(t)\) as well as the playback signal \(d(t)\) and sends the mixed signal \(y(t)\) to an AHS system for howling suppression. The output from the AHS system is then amplified based on the designated loudspeaker gain and the amplified output \(x(t)\) is played out by the loudspeaker. The playback signal \(d(t)\) originating from \(x(t)\) can be formulated as, \[d(t)=x(t)*h(t), \tag{1}\] where \(*\) and \(h(t)\) denote convolution and the acoustic path from the loudspeaker to the microphone, respectively.
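For intuition, here is a minimal numpy sketch of this closed loop (the recursion is formalized next in Eqs. (2)-(3)); the sample-level loop and variable names are illustrative only.

```python
import numpy as np

def simulate_loop(s, h, G, delta):
    """Sample-by-sample simulation of the amplification loop without AHS:
    y(t) = s(t) + d(t), with d(t) = (x * h)(t) and x(t) = G * y(t - delta).
    For a sufficiently large gain G, the playback is re-amplified on every
    pass through the loop and y grows into howling."""
    n, L = len(s), len(h)
    y = np.zeros(n)
    x = np.zeros(n)                 # amplified loudspeaker signal
    for t in range(n):
        lo = max(0, t - L + 1)
        d = np.dot(x[lo:t + 1][::-1], h[:t - lo + 1])   # playback d(t) = (x*h)(t)
        y[t] = s[t] + d
        if t + delta < n:
            x[t + delta] = G * y[t]  # an AHS system would feed G * s_hat(t) instead
    return y
```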
Without an AHS system, the microphone signal can be formulated as, \[y(t)=s(t)+[G\cdot y(t-\Delta t)]*h(t), \tag{2}\] where \(G\) is the loudspeaker gain and \(\Delta t\) denotes the delay between the microphone and the loudspeaker introduced by the system. The recursive relation in Eq. 2 leads to a re-amplifying of the playback signal which results in acoustic howling - a high-pitched jarring sound. With an AHS system, the formulation becomes, \[y(t)=s(t)+[G\cdot\hat{s}(t-\Delta t)]*h(t), \tag{3}\] where \(\hat{s}(t)\) is the output from the AHS system. Ideally, the estimated \(\hat{s}(t)\) will be as close as possible to \(s(t)\). Since acoustic howling is a recursive process, the robustness of the AHS system is determined by how thoroughly the howling sound can be removed in each iteration. While acoustic howling and acoustic echo share origins in feedback within communication systems, it is worth noting that they represent distinct issues for two reasons. First, while both stem from playback signals, howling involves recursively accumulated and re-amplified playback signals. Second, in acoustic howling scenarios, the playback signal originates from the same near-end speaker, thereby making AHS more challenging. ### Frequency-domain Kalman filter The frequency-domain Kalman filter (FDKF) estimates the feedback signal by modeling the acoustic path with an adaptive filter \(\mathbf{W}(k)\) (\(k\) denotes the frame index). FDKF can be understood as a two-step process, where the iterative feedback from these steps drives the update of filter weights. In the prediction phase, the frequency-domain near-end signal \(\mathbf{S}(k)\) is estimated by, \[\hat{\mathbf{S}}(k)=\mathbf{Y}(k)-\mathbf{X}(k)\hat{\mathbf{W}}(k), \tag{4}\] where \(\mathbf{X}(k)\) is the frequency-domain reference signal, which corresponds to the amplified and delayed estimates from the AHS system, and \(\mathbf{Y}(k)\) corresponds to the frequency-domain microphone signal. \(\hat{\mathbf{W}}(k)\) denotes the estimated acoustic path in the frequency domain. In the update step, the state equation for updating the estimated acoustic path \(\hat{\mathbf{W}}(k)\) is defined as, \[\hat{\mathbf{W}}(k+1)=A[\hat{\mathbf{W}}(k)+\mathbf{K}(k)\hat{\mathbf{S}}(k)], \tag{5}\] where \(A\) is the transition factor, and \(\mathbf{K}(k)\) denotes the Kalman gain. As shown in Fig. 1, the update of \(\mathbf{K}(k)\) is related to the reference signal, acoustic path, and the estimated near-end signal. The dashed line in Fig. 1 indicates the relations not expressed directly in the equations. The calculation of \(\mathbf{K}(k)\) is defined as, \[\mathbf{K}(k)=\mathbf{P}(k)\mathbf{X}^{H}(k)[\mathbf{X}(k)\mathbf{P}(k)\mathbf{X}^{H}(k)+\boldsymbol{\Psi}_{vv}(k)]^{-1}, \tag{6}\] \[\mathbf{P}(k+1)=A^{2}[\mathbf{I}-\alpha\mathbf{K}(k)\mathbf{X}(k)]\mathbf{P}(k)+\boldsymbol{\Psi}_{\Delta\Delta}(k), \tag{7}\] where \(\boldsymbol{\Psi}_{vv}(k)\) and \(\boldsymbol{\Psi}_{\Delta\Delta}(k)\) are the observation noise covariance and process noise covariance, respectively, and \(\mathbf{P}(k)\) is the state estimation error covariance. In FDKF, \(\boldsymbol{\Psi}_{vv}(k)\) and \(\boldsymbol{\Psi}_{\Delta\Delta}(k)\) are approximated by the covariance of the estimated near-end signal \(\boldsymbol{\Psi}_{\hat{s}\hat{s}}(k)\) and of the acoustic path \(\boldsymbol{\Psi}_{\hat{W}\hat{W}}(k)\), respectively. More details can be found in [18, 19]. ## 3 Proposed Method ### Overall approach The overall structure of the proposed NeuralKalmanAHS method is depicted in Fig.
1, where the frequency-domain Kalman filter (described in Section 2.2) is enhanced by integrating NN components to refine the reference signal \(\mathbf{X}(k)\) and the covariance matrices \(\boldsymbol{\Psi}_{vv}(k)\) and \(\boldsymbol{\Psi}_{\Delta\Delta}(k)\). #### 3.1.1 Modeling reference signal \(\mathbf{R}\) As a promising strategy to enhance performance, refining the original reference signal by incorporating a learned reference signal has been established in prior acoustic echo cancellation research [15, 17] to enhance adaptive algorithm capabilities. To further develop this idea in AHS, we propose to integrate a learned reference signal \(\mathbf{R}(k)\) into the Kalman filter framework, with the original reference signal \(\mathbf{X}(k-1)\) and microphone recording \(\mathbf{Y}(k)\) as inputs: \[\mathbf{R}(k)=\mathcal{H}_{r}(\mathbf{Y}(k),\mathbf{X}(k-1)), \tag{8}\] where \(\mathcal{H}_{r}\) represents the network parameters for reference signal estimation. \(\mathcal{H}_{r}\) is designed as a two-layer long short-term memory (LSTM) network with 300 units per layer followed by a linear layer with a Sigmoid activation function, which takes the concatenation of the log power spectra of the original reference signal and the microphone recording as input and estimates a ratio mask that is then applied to the microphone signal to get the refined reference signal \(\mathbf{R}(k)\). By integrating this refined reference signal, the operational load on the Kalman filter, particularly in attenuating severe acoustic howling, is reduced, enhancing its efficiency. #### 3.1.2 Modeling covariance matrices \(\boldsymbol{\Psi}_{vv}\) and \(\boldsymbol{\Psi}_{\Delta\Delta}\) In the Kalman filter, the covariance matrices \(\boldsymbol{\Psi}_{vv}(k)\) and \(\boldsymbol{\Psi}_{\Delta\Delta}(k)\) represent uncertainties associated with measurement and state variables. The accuracy of covariance matrix estimation significantly influences Kalman filter performance, affecting state estimation accuracy, adaptation to dynamic conditions, and convergence rate, which is especially crucial for dependable acoustic howling suppression. Conventional methods for covariance matrix estimation in the Kalman filter often assume linearity and stationarity, neglecting variable interdependencies and being sensitive to noise and outliers, limiting adaptability and prediction accuracy. We propose employing NN modules to learn the covariance matrices, replacing the original approximations with the learned estimates in Eqs. (9) and (10): \[\boldsymbol{\Psi}_{vv}(k)=\mathcal{H}_{\Psi 1}(\hat{\mathbf{S}}(k)), \tag{9}\] \[\boldsymbol{\Psi}_{\Delta\Delta}(k)=\mathcal{H}_{\Psi 2}(\hat{\mathbf{W}}(k)), \tag{10}\] where the estimation of \(\boldsymbol{\Psi}_{vv}(k)\) and \(\boldsymbol{\Psi}_{\Delta\Delta}(k)\) both involve training an LSTM cell with 65 hidden states. The inputs to the RNNs for estimating \(\boldsymbol{\Psi}_{vv}(k)\) and \(\boldsymbol{\Psi}_{\Delta\Delta}(k)\) are the magnitudes of the estimated near-end speech \(\hat{\mathbf{S}}(k)\) and of \(\hat{\mathbf{W}}(k)\), respectively. ### Loss function and training strategy The loss function in Eq. 11 relies on the L1 norm to quantify the difference in magnitude spectrum between the enhanced signal \(\hat{S}\) and the target signal \(S\). By utilizing the L1 loss on the magnitude spectrum, the model benefits from effective regularization of the scale of the output signal.
\[Loss=\ell_{1}(S,\hat{S}) \tag{11}\] We observe that training the NeuralKalmanAHS model can be difficult and severe acoustic howling is prone to occur with an initially randomized model. The recursive nature of streaming training often results in an energy explosion, leading to a 'not a number' (NAN) issue and halting gradient updates. In the process of training NeuralKalmanAHS, we incorporate howling detection as a key measure. During each training iteration, we monitor the NeuralKalmanAHS output, which uses a normalized scale of about -1.0 to +1.0 to represent 16-bit WAV file amplitudes. If the amplitude surpasses the upper limit for over 100 consecutive samples (a threshold set from experimental observations), training is halted to prevent howling. This strategy prevents the recursive training from triggering NAN issues, consequently preventing gradient update failures and enhancing the convergence of the model. The proposed model is trained for 60 epochs with a batch size of 128. ## 4 Experimental Setup ### Data preparation During streaming training and inference, for each sample, a pair of RIRs and speech signals are randomly selected and recursively used to generate the playback and microphone signal. The near-end speech audios are obtained from the AISHELL-2 dataset [20]. In addition, 10,000 pairs of room impulse responses (RIRs) are generated using the image method [21]. The RIRs are characterized by random room properties and reverberation times (RT60) selected within the 0 to 0.6 seconds range. \begin{table} \begin{tabular}{c|c c} \hline \(G\) = 2 & SDR (dB) & PESQ \\ \hline NeuralKalmanAHS & **2.32 \(\pm\) 1.92** & **2.27 \(\pm\) 0.46** \\ without \(\mathbf{R}\) & 1.28 \(\pm\) 1.42 & 1.72 \(\pm\) 0.38 \\ without \(\boldsymbol{\Psi}_{vv}\), \(\boldsymbol{\Psi}_{\Delta\Delta}\) & 2.17 \(\pm\) 1.85 & 2.21 \(\pm\) 0.47 \\ Kalman filter & -11.92 \(\pm\) 15.62 & 1.62 \(\pm\) 0.80 \\ \hline \end{tabular} \end{table} Table 1: Ablation study on NeuralKalmanAHS components for acoustic howling suppression. Mean and standard deviation are included for all evaluation metrics. Figure 1: Diagram of an acoustic amplification system and the proposed NeuralKalmanAHS model. Each RIR pair represents near-end speaker and loudspeaker positions. The system delay (\(\Delta t\)) of the simulated acoustic amplification system ranges randomly from 0.15 to 0.25 seconds, and the amplification gain is randomly chosen between 1 and 3. Overall, the training, validation, and testing sets comprise 38,000, 1,000, and 200 utterances, respectively. The testing data contain distinct utterances and RIRs compared to the training and validation data. All input audios are sampled at 16 kHz. The STFT is computed with an 8 ms frame length and a 50% frame shift. ### Evaluation metrics In this study, we evaluate the acoustic howling suppression (AHS) techniques using two metrics: signal-to-distortion ratio (SDR) [22] and perceptual evaluation of speech quality (PESQ) [23]. While we rely on PESQ to evaluate speech quality preservation, we emphasize SDR results to show the effectiveness of AHS methods in howling suppression, considering the insensitivity of PESQ to scale.
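For reference, a plain energy-ratio version of the SDR can be sketched in a few lines. Note that the BSS-eval definition cited as [22] additionally decomposes the estimate into target and distortion components, so this simplified form is for illustration only.

```python
import numpy as np

def sdr_db(target, estimate, eps=1e-12):
    """Simplified signal-to-distortion ratio in dB: energy of the target
    over the energy of the residual (target - estimate)."""
    residual = target - estimate
    return 10.0 * np.log10(np.sum(target ** 2) / (np.sum(residual ** 2) + eps))
```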
## 5 Experimental Results ### Ablation study We perform an ablation study on NeuralKalmanAHS, evaluating the role of modeling the reference signal and covariance matrices. Using 50 utterances randomly selected from the test set and evaluating with a fixed loudspeaker gain of 2, we train and compare various model versions: complete NeuralKalmanAHS, NeuralKalmanAHS without modeling covariance matrices, NeuralKalmanAHS without modeling the reference signal, and the original Kalman filter. Table 1 demonstrates the significance of modeling both the \(\mathbf{R}(k)\) and \(\mathbf{\Psi}_{vv}(k)\), \(\mathbf{\Psi}_{\Delta\Delta}(k)\) components. We can observe that although not modeling \(\mathbf{R}(k)\) results in a lightweight model with just 0.08 M parameters that only estimates the covariance matrices and shows inadequate performance in terms of speech quality, with streaming training the model still shows robustness in preventing severe howling, achieving a significantly higher mean and lower standard deviation of SDR compared to the Kalman filter. In addition, we observe that only estimating \(\mathbf{R}(k)\) shows strong howling suppression performance, while its SDR and PESQ are slightly worse compared to the complete NeuralKalmanAHS. Spectrograms in Fig. 2, which visualize differences among the evaluated models in howling suppression, also validate these observations. ### Comparison with other methods Table 2 compares NeuralKalmanAHS to several benchmarks: the frequency-domain Kalman filter, two offline-trained methods including DeepMFC [13] and HybridAHS [15], and an online-trained model, Neural-KG. Neural-KG, inspired by [24], models the Kalman gain in the Kalman filter; it is built upon [25] and uses streaming training. For a fair comparison, all NN-based methods, unless explicitly mentioned, utilize a two-layer LSTM network with a model size and experimental settings aligned with NeuralKalmanAHS. We evaluate the models at loudspeaker gains (\(G\)) of {1.5, 2, 2.5, 3}, where lower gains imply less challenging scenarios. Results show that DNN-based methods consistently outperform the frequency-domain Kalman filter in all cases. While DeepMFC and HybridAHS perform well at low \(G\), their efficacy diminishes as \(G\) rises. On the other hand, NeuralKalmanAHS remains robust in challenging situations with high loudspeaker gain values. Compared to the best-performing baseline, HybridAHS, NeuralKalmanAHS demonstrates robust howling suppression performance with reduced distortion, especially in challenging cases. When \(G=3\), it enhances SDR by 4.94 dB and PESQ by 0.09. Spectrograms of the estimates from all benchmark methods are illustrated in Fig. 3. We can observe that the proposed NeuralKalmanAHS consistently achieves the best results across both moderate and severe howling scenarios. ## 6 Conclusion In this study, we have introduced NeuralKalmanAHS, an NN-augmented Kalman filter for acoustic howling suppression. Our approach employs NNs to help refine the reference signal and estimate covariance matrices in the frequency-domain Kalman filter. Through an ablation study, we have demonstrated the significance of modeling the covariance matrices and reference signal, and the efficacy of streaming training, even when focusing solely on modeling covariance with a compact model. The proposed NeuralKalmanAHS outperforms strong DNN-based benchmarks and exhibits less distortion. Future work will explore lightweight model designs and extend the approach to multi-channel acoustic howling suppression.
\begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \hline Models & \multicolumn{4}{c}{SDR (dB)} & \multicolumn{4}{c}{PESQ} \\ \hline G & 1.5 & 2 & 2.5 & 3 & 1.5 & 2 & 2.5 & 3 \\ \hline no AHS & -30.51 \(\pm\) 7.23 & -31.86 \(\pm\) 5.66 & -33.10 \(\pm\) 3.96 & -33.21 \(\pm\) 3.94 & - & - & - & - \\ \hline Kalman & -5.11 \(\pm\) 13.20 & -10.33 \(\pm\) 14.84 & -14.88 \(\pm\) 15.14 & -18.25 \(\pm\) 14.77 & 1.94 \(\pm\) 0.72 & 1.65 \(\pm\) 0.73 & 1.44 \(\pm\) 0.70 & 1.30 \(\pm\) 0.64 \\ DeepMFC [13] & -0.09 \(\pm\) 6.50 & -2.78 \(\pm\) 9.44 & -5.99 \(\pm\) 11.40 & -7.69 \(\pm\) 12.26 & 2.11 \(\pm\) 0.51 & 1.88 \(\pm\) 0.59 & 1.70 \(\pm\) 0.62 & 1.56 \(\pm\) 0.59 \\ HybridAHS [15] & 2.96 \(\pm\) 3.04 & 1.25 \(\pm\) 5.79 & -1.45 \(\pm\) 9.60 & -3.49 \(\pm\) 10.90 & **2.57 \(\pm\) 0.47** & 2.33 \(\pm\) 0.53 & **2.22 \(\pm\) 0.59** & 1.95 \(\pm\) 0.62 \\ Neural-KG & 2.50 \(\pm\) 2.78 & 1.63 \(\pm\) 3.34 & -0.46 \(\pm\) 7.46 & -2.50 \(\pm\) 9.94 & 2.35 \(\pm\) 0.46 & 2.14 \(\pm\) 0.44 & 1.95 \(\pm\) 0.48 & 1.80 \(\pm\) 0.53 \\ \hline NeuralKalmanAHS & **3.65 \(\pm\) 2.01** & **2.65 \(\pm\) 1.70** & **1.98 \(\pm\) 1.49** & **1.45 \(\pm\) 1.31** & 2.55 \(\pm\) 0.44 & **2.33 \(\pm\) 0.41** & 2.17 \(\pm\) 0.39 & **2.04 \(\pm\) 0.37** \\ \hline \hline \end{tabular} \end{table} Table 2: Howling suppression performance of different methods. Mean and standard deviation are included for all evaluation metrics. Figure 3: Spectrograms of (a) target signal, (b) no AHS, (c) Kalman filter, (d) DeepMFC, (e) HybridAHS, (f) Neural-KG, and (g) Proposed NeuralKalmanAHS.
2309.04303
Fast Bayesian gravitational wave parameter estimation using convolutional neural networks
The determination of the physical parameters of gravitational wave events is a fundamental pillar in the analysis of the signals observed by the current ground-based interferometers. Typically, this is done using Bayesian inference approaches which, albeit very accurate, are very computationally expensive. We propose a convolutional neural network approach to perform this task. The convolutional neural network is trained using simulated signals injected in Gaussian noise. We verify the correctness of the neural network's output distribution and compare its estimates with the posterior distributions obtained from traditional Bayesian inference methods for some real events. The results demonstrate the convolutional neural network's ability to produce posterior distributions that are compatible with the traditional methods. Moreover, it achieves a remarkable inference speed, reducing by orders of magnitude the runtime of Bayesian inference methods and enabling real-time analysis of gravitational wave signals. Despite the reduced accuracy in the estimated parameters, the neural network provides valuable initial indications of key parameters of the event, such as the sky location, facilitating a multi-messenger approach.
M. Andrés-Carcasona, M. Martinez, Ll. M. Mir
2023-09-08T13:04:34Z
http://arxiv.org/abs/2309.04303v2
# Fast Bayesian gravitational wave parameter estimation using convolutional neural networks ###### Abstract The determination of the physical parameters of gravitational wave events is a fundamental pillar in the analysis of the signals observed by the current ground-based interferometers. Typically, this is done using Bayesian inference approaches which, albeit very accurate, are very computationally expensive. We propose a convolutional neural network approach to perform this task. The convolutional neural network is trained using simulated signals injected in Gaussian noise. We verify the correctness of the neural network's output distribution and compare its estimates with the posterior distributions obtained from traditional Bayesian inference methods for some real events. The results demonstrate the ability of the convolutional neural network to produce posterior distributions that are compatible with the traditional methods. Moreover, it achieves a remarkable inference speed, reducing by orders of magnitude the runtime of Bayesian inference methods and enabling real-time analysis of gravitational wave signals. Despite the reduced accuracy in the estimated parameters, the neural network provides valuable initial indications of key parameters of the event, such as the sky location, facilitating a multi-messenger approach. keywords: gravitational waves - software: data analysis ## 1 Introduction Gravitational waves (GW) are ripples in the fabric of spacetime produced by moving massive objects such as binary black holes or rotating neutron stars. These waves were not directly detected until the recent discovery by the LIGO and Virgo collaborations in 2015 (Abbott et al., 2016). Since then, a total of 90 events have been detected in the three publicly released catalogs (Abbott et al., 2019, 2020, 2021a), corresponding to the O1-O3 observing runs of Advanced LIGO (Aasi et al., 2015) and Advanced Virgo (Acernese et al., 2015). The fourth observing run started in Spring 2023, already detecting tens of new events. The majority of them are binary black hole mergers, but binary neutron star mergers and black hole-neutron star mergers have also been detected. The discovery of GWs has opened up a new window to study the universe, as they can be used, for example, to infer the population of merging compact objects (Abbott et al., 2023b), test the theory of General Relativity (Abbott et al., 2021b) or understand the composition of neutron stars (Abbott et al., 2022, 2022, 2022). In order to extract useful information from the detected GW signals, it is necessary to accurately estimate the parameters of the source that produced the observed signal. Typically, this is done using a Bayesian inference framework, in which a probability distribution is constructed over the parameter space based on the observed data and prior knowledge (Veitch et al., 2015; Singer and Price, 2016; Abbott et al., 2019; Ashton et al., 2019; Romero-Shaw et al., 2020; Smith et al., 2020). Markov Chain Monte Carlo (MCMC) or Nested Sampling (NS) algorithms are commonly used to wander the parameter space and sample the posterior probability distribution (Gilks, 2005; Skilling, 2006; Veitch et al., 2015; Romero-Shaw et al., 2020). These methods, albeit powerful and precise, can be computationally intensive and do not scale appropriately with the increasing number of events.
Due to the large volume of expected detections in future runs, in particular in next-generation experiments, new tools or implementations are being studied (Berry et al., 2015; Pankow et al., 2015; Singer and Price, 2016; Cuoco et al., 2020; Bhardwaj et al., 2023; Alvey et al., 2023; Crisostomi et al., 2023). Recently, there has been a growing interest in using artificial intelligence and machine learning techniques to address this task. Among those, convolutional neural networks (CNNs), normalizing flows and autoencoders have been the most common approaches followed (George and Huerta, 2018; Gabbard et al., 2018; Fan et al., 2019; Gabbard et al., 2022; Green et al., 2020; Green and Gair, 2021; Krastev et al., 2021). CNNs have been applied to detect events (George and Huerta, 2018; Menendez-Vazquez et al., 2021; Morris et al., 2022; Andres-Carcasona et al., 2023), to perform the parameter estimation of the signals (Chua and Vallisneri, 2020; Gabbard et al., 2022; Green et al., 2020) or to classify glitches (Zevin et al., 2017; George et al., 2018), among other applications (see Cuoco et al. (2020) for a comprehensive review). The best-performing machine learning algorithms applied to the parameter estimation problem are the ones described by Gabbard et al. (2022); Dax et al. (2023); Green et al. (2020); Green and Gair (2021). Gabbard et al. (2022) use a variational autoencoder to predict the posterior probability distribution typically produced by a Bayesian inference approach. Their results are almost identical to those obtained using a Bayesian inference technique, but generating \(8,000\) samples takes only \(0.1\) s, an improvement of several orders of magnitude. A different approach is followed by Dax et al. (2023), where they estimate the Bayesian posterior using a neural network and then modify the distribution using importance sampling. This method takes between one and ten hours (depending on the waveform used) to perform the parameter estimation running on a GPU and multiple cores, which is a great improvement over the several hours to days that MCMC samplers typically take to wander the parameter space. Finally, Green et al. (2020); Green and Gair (2021) use normalizing flows to produce accurate posterior distributions, comparable to those produced by traditional methods. In this paper, we present a new and fast method for GW parameter estimation using a CNN. This article is organized as follows. The training data set and preprocessing is explained in Sec. 2. In Sec. 3 the CNN architecture is presented. The training procedure is explained in Sec. 4. Finally, Sec. 5 shows the results obtained for the test set and for three real events, comparing them to the results obtained with the traditional MCMC approach. ## 2 Dataset and preprocessing To generate the training data for the CNN we first take realizations of Gaussian noise that follow the power spectral density (PSD) of the strain measured for the O3b run in the three interferometers (Abbott et al., 2021). Then, we simulate and inject into this noise GW signals using the IMRPhenomPv2 waveform (Husa et al., 2016; Khan et al., 2016) from the PyCBC library (Nitz et al., 2020; Usman et al., 2016). The parameters of the injected signals are those described in Tab. 1. The delays between the different interferometers and their antenna patterns are also taken into account. The strain is then whitened and normalized by dividing it by its standard deviation.
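A minimal PyCBC-style sketch of this generation step follows, for a single detector; the analytic aLIGO design PSD stands in for the measured O3b PSDs, the three-detector stacking is omitted, and the masses, distance and placement of the merger are illustrative values rather than the paper's draws from Tab. 1.

```python
import numpy as np
from pycbc import noise, psd
from pycbc.types import TimeSeries
from pycbc.waveform import get_td_waveform

delta_t, duration, f_low = 1.0 / 4096, 8.0, 30.0
delta_f = 1.0 / duration
p = psd.aLIGOZeroDetHighPower(int(2048 / delta_f) + 1, delta_f, f_low)

# colored Gaussian noise realization following the chosen PSD
ts = noise.noise_from_psd(int(duration / delta_t), delta_t, p, seed=0)

# simulated binary black hole signal to inject into the noise
hp, _ = get_td_waveform(approximant="IMRPhenomPv2", mass1=36.0, mass2=29.0,
                        distance=800.0, delta_t=delta_t, f_lower=f_low)

strain = ts.numpy().copy()
sig = hp.numpy()
i0 = len(strain) - len(sig) - 1024        # place the merger near the end
strain[i0:i0 + len(sig)] += sig

# whiten and normalize to unit standard deviation, as described in the text
white = TimeSeries(strain, delta_t=delta_t).whiten(4, 4)
white = white / white.numpy().std()
```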
We choose a sampling frequency of \(4096\) Hz and include the information of LIGO Hanford, LIGO Livingston and Virgo at the same time. The frequency is restricted to the \((30~{}\mathrm{Hz},1024~{}\mathrm{Hz})\) range, as the signals fit well within these values. The signal is then cropped to \(1\mathrm{s}\) containing the merger between the time \(0.5~{}\mathrm{s}\) and \(0.9~{}\mathrm{s}\) (the exact coalescence time is drawn from a uniform distribution). Therefore, the input matrix for the CNN has a shape of \((4096,3)\), each column corresponding to the data of each interferometer. The signal-to-noise ratio (SNR) is defined as \[\mathrm{SNR}=\sqrt{4\int_{f_{\mathrm{min}}}^{f_{\mathrm{max}}}\mathrm{d}f \frac{|\tilde{h}(f)|^{2}}{S_{n}(f)}}~{}, \tag{1}\] where \(\tilde{h}(f)\) is the strain in the frequency domain and \(S_{n}(f)\) is the PSD of the strain noise. The network SNR is then computed as \[\mathrm{SNR}_{\mathrm{net}}=\sqrt{\sum_{i=1}^{3}\mathrm{SNR}_{i}^{2}}~{}, \tag{2}\] where \(i\) runs over the three interferometers. We restrict ourselves to signals that satisfy \(\mathrm{SNR}_{\mathrm{net}}\geq 10\). A total of \(601,600\) signals are used for the training stage, \(38,400\) for the validation and \(100\) are kept for testing the performance afterwards. Improved efficiency has been observed when fitting some derived quantities rather than the original ones presented in Tab. 1. For instance, instead of directly estimating the individual masses of the black holes, we utilize the chirp mass, denoted by \(\mathcal{M}_{c}\) and calculated using the expression \[\mathcal{M}_{c}=\frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}}~{}, \tag{3}\] alongside the mass ratio, denoted by \(q\) and computed as \[q=\frac{m_{1}}{m_{2}}~{}, \tag{4}\] where \(m_{1}\geq m_{2}\). Similarly, instead of predicting the individual spins, we use the effective spin, defined as \[\chi_{\mathrm{eff}}=\frac{a_{1}m_{1}+a_{2}m_{2}}{m_{1}+m_{2}}~{}. \tag{5}\] Furthermore, we make the decision not to fit the polarization and inclination angles, as they are not among the key parameters required for real-time analyses. As a result, the CNN will estimate the set of parameters \(\boldsymbol{\theta}=\{\mathcal{M}_{c},q,d,t_{c},\chi_{\mathrm{eff}},\alpha, \delta\}\). ## 3 CNN architecture The CNN architecture that has been used in this work is displayed in Fig. 1. The first set of layers consists of one-dimensional convolution and one-dimensional max pooling layers, followed by a set of fully connected dense layers. Between each of the dense layers, a dropout of \(20\%\) is applied to prevent overfitting during the training stage. To be able to capture the uncertainty in the final estimation of the parameters, we choose to model the full distribution, as the posterior in Bayesian statistics does, instead of producing point estimates. Therefore, the last dense layer that contains \(14\) neurons will be connected to a TensorFlow Probability layer with a Kumaraswamy distribution for each parameter.
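A minimal sketch of such a probabilistic head with Keras and TensorFlow Probability is given below; all layer sizes before the final 14-unit layer are illustrative placeholders rather than the exact architecture of Fig. 1.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def kumaraswamy_head(t):
    # t carries (log alpha, log beta) for each of the 7 parameters;
    # the exponential keeps both concentrations positive, as in the text
    return tfd.Independent(
        tfd.Kumaraswamy(tf.exp(t[..., :7]), tf.exp(t[..., 7:])),
        reinterpreted_batch_ndims=1)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(4096, 3)),
    tf.keras.layers.Conv1D(32, 16, activation="relu"),  # illustrative sizes
    tf.keras.layers.MaxPool1D(4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(14),
    tfp.layers.DistributionLambda(kumaraswamy_head),
])

# negative log-likelihood of the true parameters, rescaled to [0, 1]
negloglik = lambda y_true, rv: -rv.log_prob(y_true)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss=negloglik)
```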
The Kumaraswamy distribution has a probability density function defined by (Kumaraswamy, 1980) \[f(x;\alpha,\beta)=\alpha\beta x^{\alpha-1}(1-x^{\alpha})^{\beta-1}~{}, \tag{6}\] for \(x\in[0,1]\), which is a flexible enough distribution. It is very similar to the beta distribution, but its simple expression of the probability density function allows an easy evaluation of the quantile function, which takes the form \[F^{-1}(\xi;\alpha,\beta)=(1-(1-\xi)^{1/\beta})^{1/\alpha}~{}. \tag{7}\] Sampling from this distribution is then trivial by applying the inverse transform sampling method. If we draw \(\xi\sim\mathrm{Uniform}(0,1)\) and then compute \(F^{-1}(\xi;\alpha,\beta)\) for each realization, we will obtain the desired samples of this distribution. This procedure is very fast in contrast to sampling from a distribution for which no analytical expression of the quantile function is available or which is computationally expensive to evaluate. \begin{table} \begin{tabular}{l c c c} \hline \hline **Parameter** & **Symbol** & **Distribution** & **Units** \\ \hline \hline Mass 1 & \(m_{1}\) & Uniform\((20,60)\) & [\(M_{\odot}\)] \\ Mass 2 & \(m_{2}\) & Uniform\((20,60)\) & [\(M_{\odot}\)] \\ Distance & \(d\) & Uniform\((100,3000)\) & [Mpc] \\ Right ascension & \(\alpha\) & Uniform\((0,2\pi)\) & [rad] \\ Cosine of declination & \(\cos(\delta)\) & Uniform\((-1,1)\) & - \\ Polarization angle & \(\psi\) & Uniform\((0,\pi)\) & [rad] \\ Inclination & \(\theta_{JN}\) & Uniform\((0,\pi/2)\) & [rad] \\ Time of coalescence & \(t_{c}\) & Uniform\((0.5,0.9)\) & [s] \\ Spin magnitude 1 & \(a_{1}\) & Uniform\((-1,1)\) & - \\ Spin magnitude 2 & \(a_{2}\) & Uniform\((-1,1)\) & - \\ \hline \hline \end{tabular} \end{table} Table 1: Parameter space of the training set. We additionally assume that \(m_{1}\geq m_{2}\). Since our variables are not in the range \([0,1]\), the estimated parameters need to be transformed as \[x=\frac{\theta-\theta_{\min}}{\theta_{\max}-\theta_{\min}}\;, \tag{8}\] where \(\theta\in\boldsymbol{\theta}\). The values of \(\alpha\) and \(\beta\) estimated for the different variables differ by orders of magnitude and, therefore, we estimate \(\log\alpha\) and \(\log\beta\) instead. This motivates taking the exponential of the 14 numbers generated by the last dense layer before evaluating the probability distribution. The loss function that will be used is the negative log-likelihood (i.e. \(L=-\log\mathcal{L}(\alpha,\beta|X)\)). For the Kumaraswamy distribution and \(n\) observations, the log-likelihood equals \[\begin{split}\log\mathcal{L}&=n\log\alpha+n\log\beta+(\alpha-1)\sum_{i=1}^{n}\log(x_{i})\\ &+(\beta-1)\sum_{i=1}^{n}\log(1-x_{i}^{\alpha})\;.\end{split} \tag{9}\] Choosing this loss function is equivalent to maximizing the log-likelihood, meaning that the neural network is actually learning to find the parameters of the distribution that best fit the data. ## 4 Training To build and train the CNN we use Keras with TensorFlow's backend and its GPU implementation (Abadi et al., 2016). For managing the probabilistic layers we use distribution layers (Dillon et al., 2017) from TensorFlow Probability. We feed the data in batches of 32 signals as was done in Menendez-Vazquez et al. (2021); Andres-Carcasona et al. (2023). The learning rate is set to \(10^{-5}\). The metrics tracked during this stage are mainly the loss and the validation loss. The former is computed with the training set and represents the function being minimized, while the latter is computed using the validation set. This allows us to observe the appearance of overfitting or underfitting during the training procedure. The training lasts for 8 epochs, and the epoch yielding the minimum validation loss provides the neural network used for inference. The evolution of the tracked metrics is displayed in Fig. 2. The loss, as expected, decreases steadily, indicating that the CNN is continuously learning to fit the data. This indicates a good behavior over the training set.
The validation loss also displays a decrease during the training, indicating that the CNN is also learning how to generalize the results on data that it has not seen before. The final loss and validation loss that are obtained for the CNN chosen, the one of the eighth epoch, are both \(-1.13\). This CNN is tested with the other data set and three real events in the following section. It is worth noting that the CNN performance could be further improved by expanding the training set to include a wider range of signals and noise conditions. Moreover, further fine-tuning of the model's hyperparameters, such as adjusting the learning rate or the architecture, could potentially enhance its accuracy and further reduce the uncertainties in the estimated parameters. This is beyond the scope of this paper, which is intended as a proof of concept. Figure 1: Architecture of the convolutional neural network used in this work. \({}^{a}\) Size of the input layer. \({}^{b}\) Filters and size of the kernel of the 1D convolution layer. \({}^{c}\) Pool size of the 1D maximum pooling layer. \({}^{d}\) Number of neurons of the fully connected dense layer. \({}^{e}\) Layer that includes the 7 Kumaraswamy distributions (one per estimated parameter). Before this layer, the exponential of the numbers outputted by the last dense fully connected layer is taken. Figure 2: Metrics tracked during the training procedure. ## 5 Results In this final section, the main results of the CNN are presented. The first test that can be performed to ensure the proper performance is constructing the probability-probability (or P-P for short) plot. This plot is constructed by performing the inference of the CNN over the test set and evaluating the p-value of the true parameter. This is computed as \[\text{p-value}=\int_{\theta_{\mathrm{true}}}^{\infty}f(\theta;\alpha,\beta)\,\mathrm{d}\theta~{}. \tag{10}\] If the distribution outputted by the CNN is a true statistical distribution, the cumulative distribution of this p-value should follow a \(45^{\circ}\) line between the points \((0,0)\) and \((1,1)\). This plot is shown in Fig. 3. These cross-validation results demonstrate the robustness of our CNN approach, as it consistently produces a reliable parameter estimation using a new subset of data. Since the CNN has been trained on simulated signals and on Gaussian noise, it is interesting to test it using real data. The GWTC-3 catalog (Abbott et al., 2021) contains the \(35\) O3b confident detections. From this collection, a judicious selection of events is needed following certain criteria. Namely, the chosen events must have been detected during periods when all three interferometers were actively online and when the data quality recorded across these interferometers was designated as optimal. Furthermore, compatibility with the CNN's training regimen requires that the parameters derived through conventional methodologies lie within the predetermined range calibrated for the neural network. Finally, only events exhibiting an SNR exceeding 10 are considered viable candidates, since this threshold was also imposed during training. Accordingly, under these stringent criteria, the events GW200129_065458, GW200224_222234 and GW200311_115853 emerge as good choices for this validation, fulfilling the requirements and having different parameters.
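Before turning to the individual events, the two ingredients above can be made concrete: inverse transform sampling through the quantile function of Eq. (7), and the p-value of Eq. (10) through the closed-form Kumaraswamy CDF. This is a numpy sketch, assuming the true parameter \(\theta\) has already been rescaled to \([0,1]\) via Eq. (8).

```python
import numpy as np

def sample_kumaraswamy(a, b, n, rng=None):
    """Inverse transform sampling: draw xi ~ Uniform(0,1) and apply the
    quantile function of Eq. (7)."""
    rng = rng or np.random.default_rng()
    xi = rng.uniform(0.0, 1.0, size=n)
    return (1.0 - (1.0 - xi) ** (1.0 / b)) ** (1.0 / a)

def p_value(theta_true, a, b):
    """Right-tail integral of Eq. (10): with the closed-form CDF
    F(x) = 1 - (1 - x**a)**b, the p-value is simply (1 - theta**a)**b."""
    return (1.0 - theta_true ** a) ** b
```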
For GW200129_065458, we apply the CNN on the data that contains it and compare it to the posterior distributions published in the GWTC-3 catalog by the LVK collaboration using the traditional Bayesian inference methods (Abbott et al., 2021, 2023). The result is shown in Fig. 4. To generate \(10,000\) samples of the posterior, the CNN, running on an NVIDIA GeForce RTX 2080 Ti GPU, takes \(0.05\) s. This implies an improvement of several orders of magnitude with respect to the traditional method. Our CNN is able to produce posterior distributions that in all cases are compatible with the published results. For this particular event, the parameter showing the worst agreement is the distance, which the CNN overestimates. In this case, the sky position has a large overlap with the MCMC-estimated one, but still has a larger uncertainty. The results for the event GW200224_222234 are shown in Fig. 5. In this case, all the estimated parameters are also in accordance with those published in the GWTC-3 catalog. The one that exhibits the worst performance is, in this case, the effective spin. Regarding the sky position, the uncertainty yielded by the CNN is too large to accurately pinpoint the event, but it is still unbiased and could produce a first alert for the instruments that look for electromagnetic counterparts, allowing them to start pointing towards a given patch of the sky while the more accurate pipelines improve the precision of the sky localization. A typical short gamma-ray burst (GRB), such as the one observed alongside GW170817 (Abbott et al., 2017), lasts less than \(2\) s; taking into account that the instruments might need time to point towards a given direction, the inference should ideally take a fraction of a second to avoid becoming the bottleneck. This is achieved by our neural network. Finally, the results for the GW200311_115853 event are shown in Fig. 6. In this particular example, the coalescence time is slightly underestimated and the distance overestimated. The rest of the parameters are well predicted, but with larger uncertainties. Overall, the results indicate a good response from the CNN. An important advantage of our approach is its computational efficiency. The CNN model provides posterior samples in a small fraction of the time required by Bayesian inference methods. This enhanced capability enables real-time analysis of GW signals and even facilitates prompt alerts for possible follow-up observations and multi-messenger astronomy collaborations. Despite the high inference speed, the CNN approach introduces a trade-off in terms of higher parameter uncertainty. This can be mainly attributed to the simplified assumptions made by the CNN architecture and the limited training data used. However, even with these uncertainties, the CNN model can serve as an initial indication for the sky position and other key parameters, enabling a swift response and triggering analyses by other slower, yet more accurate, pipelines. In summary, with these results, our CNN model demonstrates promising capabilities for fast parameter estimation of GWs. It provides reliable posterior distributions, exhibits competitive performance compared to traditional Bayesian sampling methods, and offers real-time inference capabilities.
This is only a proof-of-concept, and future research can focus on refining the CNN architecture, incorporating additional data sources, and exploring ensembling techniques to further improve the accuracy and robustness of gravitational wave parameter estimation. Figure 3: P-P plot for the different variables that are being fitted by the CNN. The individual masses are obtained by post-processing the chirp mass and mass ratio. The shaded regions indicate the \(\pm 1\sigma\), \(\pm 2\sigma\) and \(\pm 3\sigma\) confidence levels in decreasing intensity of color, respectively. Figure 4: Full posterior distribution obtained for the CNN and for the public parameter estimation release obtained with an MCMC approach for the GW200129_065458 event. ## 6 Conclusions We have presented a neural network to perform the typically computationally expensive task of estimating the parameters of gravitational wave events. The chosen architecture has been a one-dimensional convolutional neural network with three channels, one per available interferometer, predicting the parameters of a Kumaraswamy distribution. These parameters then define what can be regarded as the posterior distribution. The training shows that this architecture can correctly learn how to estimate them and that the outputted distribution behaves like a probability density function. Finally, it has been applied to real events, and the results have shown that, although our CNN has a larger uncertainty than traditional Bayesian inference approaches, the actual parameters are generally in agreement with the outputted distribution. Our approach offers real-time inference speeds that could be used for possible multi-messenger follow-ups. ## Acknowledgements This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 754510. This work is partially supported by the Spanish MCIN/AEI/10.13039/501100011033 under the Grants No. SEV-2016-0588, No. PGC2018-101858-B-I00, and No. PID2020-113701GB-I00, some of which include ERDF funds from the European Union, and by the MICINN with funding from the European Union NextGenerationEU (PRTR-C17.11) and by the Generalitat de Catalunya. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. MAC is supported by the 2022 FI-00335 grant. The corner plots use the _corner.py_ library (Foreman-Mackey, 2016). This document has received a LIGO DCC number of P2300296 and a Virgo TDS number of VIR-0791A-23. This research has made use of data, software, and/or web tools obtained from the Gravitational Wave Open Science Center ([https://www.gw-openscience.org/](https://www.gw-openscience.org/)), a service of the LIGO Laboratory, the LIGO Scientific Collaboration, and the Virgo Collaboration. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council.
Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN), and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, and Spain. Figure 5: Full posterior distribution obtained for the CNN and for the public parameter estimation release obtained with an MCMC approach for the GW200224_222234 event. ## Data Availability The codes and data generated for this paper are available upon reasonable request to the authors.
2309.04317
Actor critic learning algorithms for mean-field control with moment neural networks
We develop a new policy gradient and actor-critic algorithm for solving mean-field control problems within a continuous time reinforcement learning setting. Our approach leverages a gradient-based representation of the value function, employing parametrized randomized policies. The learning for both the actor (policy) and critic (value function) is facilitated by a class of moment neural network functions on the Wasserstein space of probability measures, and the key feature is to sample directly trajectories of distributions. A central challenge addressed in this study pertains to the computational treatment of an operator specific to the mean-field framework. To illustrate the effectiveness of our methods, we provide a comprehensive set of numerical results. These encompass diverse examples, including multi-dimensional settings and nonlinear quadratic mean-field control problems with controlled volatility.
Huyên Pham, Xavier Warin
2023-09-08T13:29:57Z
http://arxiv.org/abs/2309.04317v1
# Actor critic learning algorithms for mean-field control with moment neural networks ###### Abstract We develop a new policy gradient and actor-critic algorithm for solving mean-field control problems within a continuous time reinforcement learning setting. Our approach leverages a gradient-based representation of the value function, employing parametrized randomized policies. The learning for both the actor (policy) and critic (value function) is facilitated by a class of moment neural network functions on the Wasserstein space of probability measures, and the key feature is to sample directly trajectories of distributions. A central challenge addressed in this study pertains to the computational treatment of an operator specific to the mean-field framework. To illustrate the effectiveness of our methods, we provide a comprehensive set of numerical results. These encompass diverse examples, including multi-dimensional settings and nonlinear quadratic mean-field control problems with controlled volatility. **Keywords:** Mean-field control, reinforcement learning, policy gradient, moment neural network, actor-critic algorithms. ## 1 Introduction This paper is concerned with the numerical resolution of mean-field (a.k.a. McKean-Vlasov) control in continuous time in a partially model-free reinforcement learning setting. The dynamics of the controlled mean-field stochastic differential equation on \(\mathbb{R}^{d}\) is in the form \[\mathrm{d}X_{t}=\;b(t,X_{t},\mathbb{P}_{X_{t}},\alpha_{t})\mathrm{d}t+ \sigma(t,X_{t},\mathbb{P}_{X_{t}},\alpha_{t})\mathrm{d}W_{t},\quad 0\leq t \leq T,\;X_{0}\sim\mu_{0},\] where \(W\) is a standard \(d\)-dimensional Brownian motion on a filtered probability space \((\Omega,\mathcal{F},\mathbb{F}=(\mathcal{F}_{t})_{t},\mathbb{P})\), \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the Wasserstein space of square integrable probability measures, \(\mathbb{P}_{X_{t}}\) denotes the marginal distribution of \(X_{t}\) at time \(t\), and the control process \(\alpha\) is valued in \(A\subset\mathbb{R}^{p}\). The coefficients \(b,\sigma\) are in the separable form: \[\mathbf{(SC)}\qquad\qquad b(t,x,\mu,a)\;=\;\beta(t,x,\mu)+C(t,a),\qquad \sigma\sigma^{\intercal}(t,x,\mu,a)\;=\;\Sigma(t,x,\mu)+\vartheta(t,a),\] for \((t,x,\mu,a)\in[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d}) \times\mathbb{R}^{p}\), where \(\beta\), \(\Sigma\), which depend on the state variable and its probability distribution, are unknown functions, while the coefficients \(C\), \(\vartheta\) on the control are known functions on \([0,T]\times\mathbb{R}^{p}\). The expected total cost associated with a control \(\alpha\) is given by \[\mathbb{E}\Big{[}\int_{0}^{T}f(X_{t},\mathbb{P}_{X_{t}},\alpha_{t})\mathrm{d }t+g(X_{T},\mathbb{P}_{X_{T}})\Big{]}, \tag{1.1}\] and we denote by \((t,x,\mu)\mapsto V(t,x,\mu)\) the associated value function defined on \([0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\). The analytical forms of \(f\), \(g\) are unknown, but it is assumed that given an input \((x,\mu,a)\in\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\times A\), we obtain (by observation or from a black box) the realized output costs \(f(x,\mu,a)\) and \(g(x,\mu)\). Such a setting is partially model-free, as the model coefficients \(\beta\), \(\Sigma\), \(f\), and \(g\) are unknown, and we only assume that the action functions \(C\), \(\vartheta\) on the control are known.
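To fix ideas before reviewing the literature, the controlled dynamics above can be approximated by a finite system of interacting particles, replacing \(\mathbb{P}_{X_{t}}\) by the empirical measure of \(M\) samples; the minimal Euler-Maruyama sketch below uses illustrative placeholder coefficients and control, since the true \(\beta\), \(\Sigma\) are unknown in our setting.

```python
import numpy as np

def simulate_particles(M=10_000, n=100, T=1.0, kappa=0.6, sigma=1.0, seed=0):
    """Euler-Maruyama particle approximation of a controlled McKean-Vlasov
    SDE (d = 1); the mean-field interaction enters via the empirical mean."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = rng.normal(0.0, 1.0, size=M)             # X_0 ~ mu_0
    for _ in range(n):
        mean = X.mean()                          # empirical proxy for E[X_t]
        alpha = -0.5 * (X - mean)                # illustrative feedback control
        drift = kappa * (mean - X) + alpha       # b = beta(t, x, mu) + C(t, a)
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=M)
    return X

X_T = simulate_particles()                       # particle cloud at time T
```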
The theory and applications of mean-field control (MFC) problems have generated a vast literature in the last decade, and we refer to the monographs [2], [3] for a comprehensive treatment of this topic. From a numerical aspect, the main challenging issue is the infinite-dimensional feature of MFC coming from the distribution law of the state variable. In a model-based setting, i.e., when all the coefficients \(b\), \(\sigma\), \(f\) and \(g\) are known, several recent works have proposed deep learning schemes for MFC, based on neural network approximations of the feedback control and/or the value function solving the Hamilton-Jacobi-Bellman equation from dynamic programming, or backward stochastic differential equations (BSDEs) from the maximum principle, see [5], [8], [9], [11], [17], [16], [14]. The approximation of solutions to MFC in a (partially) model-free setting is the purpose of reinforcement learning (RL), where one learns the optimal control (and the value function) in an unknown environment by repeatedly trying policies, observing states, receiving and evaluating rewards, and improving policies. RL is a very active branch of machine learning, see the seminal reference monograph [18], and has recently attracted attention in the context of mean-field control in discrete time, mostly via \(Q\)-learning methods, see [6], [1], [10]. In this paper, we consider a partially model-free continuous time setting as described above, and adopt a policy gradient approach as in [7]. This relies on a gradient representation of the cost functional associated with randomized policies, which brings out an additional operator term \(\mathcal{H}\) compared to the classical diffusion setting of [12], specific to the mean-field setting. The computational treatment of this operator \(\mathcal{H}\) on functions defined on the Wasserstein space is the crucial issue, and has been handled in [7] only for one-dimensional linear quadratic (LQ) models, hence with a very particular dependence of the value function and optimal control on the law of the state. Here, we address the general dependence of the coefficients on the state distribution, and deal with the operator \(\mathcal{H}\) by means of the class of moment neural networks. This class of neural networks consists of functions that depend on the measure via its first \(L\) moments, and satisfies a universal approximation theorem for functions defined on the Wasserstein space, see [15], [20]. We then design an actor-critic algorithm for learning alternately the optimal policy (actor) and value function (critic) with moment neural networks, which provides an effective resolution of MFC problems in a (partially) model-free setting beyond the LQ case, for multivariate dynamics with control on the drift and the volatility. Our actor-critic algorithm has the structure of general actor-critic algorithms, but during gradient iterations, instead of following a single state trajectory by sampling, we follow the evolution of an entire distribution that is initially randomly chosen and described by its empirical measure obtained with a large fixed number of particles. Then, the batch version of the algorithm consists in sampling and following \(m\) distributions together to estimate the gradients. The outline of the paper is organized as follows. We recall in Section 2 the gradient representation of the functional cost with randomized policies and notably formulate the expression of the operator \(\mathcal{H}\).
In Section 3, we consider the class of moment neural networks, and show how it acts on the operator \(\mathcal{H}\). We present in Section 4 the actor-critic algorithm, and Section 5 is devoted to numerical results illustrating the accuracy and efficiency of our algorithm. We present various examples with control on the drift and on the volatility, and non-LQ examples in a multi-dimensional setting. **Notations.** The scalar product between two vectors \(x\) and \(y\) is denoted by \(x\cdot y\), and \(|\cdot|\) is the Euclidean norm. Let \(Q=(Q_{i_{1}\ldots i_{q}})\in\mathbb{R}^{d_{1}\times\ldots\times d_{q}}\) be a tensor of order \(q\), and \(P=(P_{i_{1}\ldots i_{p}})\in\mathbb{R}^{d_{1}\times\ldots\times d_{p}}\) be a tensor of order \(p\leq q\). We denote by \(Q\circ P\) the circ product defined as the tensor in \(\mathbb{R}^{d_{p+1}\times\ldots\times d_{q}}\) with components: \[[Q\circ P]_{i_{p+1}\ldots i_{q}}=\ \sum_{i_{1},\ldots,i_{p}}Q_{i_{1}\ldots i_{ p}i_{p+1}\ldots i_{q}}P_{i_{1}\ldots i_{p}}.\] When \(q=p=1\), \(\circ\) is the scalar product in \(\mathbb{R}^{d_{1}}\). When \(q=2\), \(p=1\), \(Q\circ P=Q^{\intercal}P\in\mathbb{R}^{d_{2}}\) where \({}^{\intercal}\) is the matrix transpose operator. When \(q=p=2\), \(\circ\) is the inner product \(Q\circ P=\operatorname{tr}(Q^{\intercal}P)\) where \(\operatorname{tr}\) is the trace operator. When \(q=3\), \(Q\circ P\) is a vector in \(\mathbb{R}^{d_{3}}\) for \(p=2\), and a matrix in \(\mathbb{R}^{d_{2}\times d_{3}}\) for \(p=1\). ## 2 Preliminaries We adopt a policy gradient approach by searching for the optimal control among parametrized randomized policies, i.e., families of probability transition kernels \(\pi_{\theta}\) from \([0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\) into \(\mathbb{R}^{p}\), with densities \(p_{\theta}(t,x,\mu,.)\) w.r.t. some measure on \(\mathbb{R}^{p}\), and thus by minimizing over the parameters \(\theta\in\mathbb{R}^{D}\) the functional \[\mathrm{J}(\theta)=\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}\int_{0}^{T}f (X_{t},\mathbb{P}_{X_{t}},\alpha_{t})\mathrm{d}t+g(X_{T},\mathbb{P}_{X_{T}}) \Big{]}. \tag{2.1}\] Here \(\alpha\sim\pi_{\theta}\) means that at each time \(t\), the action \(\alpha_{t}\) is sampled (independently from \(W\)) from the probability distribution \(\pi_{\theta}(.|t,X_{t},\mathbb{P}_{X_{t}})\). **Remark 2.1**.: _We may include an entropy (e.g. Shannon) regularizer term in the functional cost (2.1) as proposed in [19] for encouraging exploration of randomized policies. This can slightly help the convergence of the policy gradient algorithms by permitting the use of higher learning rates, but it turns out that it does not really improve the accuracy of the results.
Here, we only consider exploration through the randomization of policies._ We have the gradient representation of \(\mathrm{J}\) as derived in [7]: \[\mathrm{G}(\theta):= \ \nabla_{\theta}\mathrm{J}(\theta)\] \[= \ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}\int_{0}^{T}\nabla_{ \theta}\log p_{\theta}(t,X_{t},\mathbb{P}_{X_{t}},\alpha_{t})\big{[}\mathrm{d }J_{\theta}(t,X_{t},\mathbb{P}_{X_{t}})+f(X_{t},\mathbb{P}_{X_{t}},\alpha_{t}) \mathrm{d}t\big{]}\] \[\qquad\qquad\qquad\qquad+\ \int_{0}^{T}\mathcal{H}_{\theta}[J_{ \theta}](t,X_{t},\mathbb{P}_{X_{t}})\mathrm{d}t\Big{]}, \tag{2.2}\] where \(J_{\theta}:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\to \mathbb{R}\) is the dynamic value function associated with (2.1), hence satisfying the property that \[(\mathbf{MJ})\quad\{J_{\theta}(t,X_{t},\mathbb{P}_{X_{t}})+\int_{0}^{t}f(X_{ s},\mathbb{P}_{X_{s}},\alpha_{s})\mathrm{d}s,\ 0\leq t\leq T\}\ \ \text{is a martingale},\] and \(\mathcal{H}_{\theta}\) is the operator specific to the mean-field framework, defined by \[\mathcal{H}_{\theta}[\varphi](t,x,\mu)=\ \nabla_{\theta}\mathbb{E}_{ \xi\sim\mu}\Big{[}b_{\theta}(t,\xi,\mu)\cdot\partial_{\mu}\varphi(t,x,\mu)( \xi)+\frac{1}{2}\Sigma_{\theta}(t,\xi,\mu)\circ\partial_{\xi}\partial_{\mu} \varphi(t,x,\mu)(\xi)\Big{]}\ \in\mathbb{R}^{D}\] with \(b_{\theta}(t,x,\mu)=\int_{A}b(t,x,\mu,a)\pi_{\theta}(\mathrm{d}a|t,x,\mu)\), \(\Sigma_{\theta}(t,x,\mu)=\int_{A}\sigma\sigma^{\intercal}(t,x,\mu,a)\pi_{\theta}( \mathrm{d}a|t,x,\mu)\). Here \(\partial_{\mu}\varphi(t,x,\mu)(.)\) is the Lions derivative with respect to \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), and it is a function from \(\mathbb{R}^{d}\) into \(\mathbb{R}^{d}\), and \(\mathbb{E}_{\xi\sim\mu}[.]\) means that the expectation is taken with respect to the random variable \(\xi\) distributed according to \(\mu\). Notice that under the structure condition **(SC)**, we have \[b_{\theta}(t,x,\mu)\ =\ \beta(t,x,\mu)+C_{\theta}(t,x,\mu),\quad\Sigma_{ \theta}(t,x,\mu)\ =\ \Sigma(t,x,\mu)+\vartheta_{\theta}(t,x,\mu)\] where \(C_{\theta}(t,x,\mu):=\int_{A}C(t,a)\pi_{\theta}(\mathrm{d}a|t,x,\mu)\), \(\vartheta_{\theta}(t,x,\mu):=\int_{A}\vartheta(t,a)\pi_{\theta}(\mathrm{d}a|t,x,\mu)\) are known functions, and thus \[\mathcal{H}_{\theta}[\varphi](t,x,\mu)=\ \nabla_{\theta}\mathbb{E}_{\xi\sim\mu} \Big{[}C_{\theta}(t,\xi,\mu)\cdot\partial_{\mu}\varphi(t,x,\mu)(\xi)+\frac{1} {2}\vartheta_{\theta}(t,\xi,\mu)\circ\partial_{\xi}\partial_{\mu}\varphi(t,x, \mu)(\xi)\Big{]}.\] ## 3 Parametrization of actor/critic functions with moment neural networks A moment neural network function on \([0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\) of order \(L\in\mathbb{N}^{*}\) is a parametric function in the form \[\phi_{\eta}(t,x,\mu)=\ \Psi_{\eta}(t,x,\bar{\boldsymbol{\mu}}_{L}),\] where \(\bar{\boldsymbol{\mu}}_{L}=(\bar{\mu}^{\ell})_{\ell\in\mathcal{L}}\), with \(\bar{\mu}^{\ell}=\mathbb{E}_{\xi\sim\mu}[\prod_{i=1}^{d}\xi_{i}^{\ell_{i}}]\) for \(\ell=(\ell_{i})_{i\in[\![1,d]\!]}\in\mathcal{L}=\{\ell=(\ell_{1},\ldots,\ell_{d})\in\mathbb{N}^{d}:\sum_{i=1}^{d}\ell_{i}\leq L\}\) of cardinality \(L_{d}\), and \((t,x,y)\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{L_{d}}\mapsto\Psi_{\eta} (t,x,y)\) is a classical finite-dimensional feedforward neural network with parameters \(\eta\).
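For illustration, a minimal PyTorch sketch of such a moment neural network for \(d=1\) is given below; the measure is represented by a particle cloud from which the first \(L\) empirical moments are computed, and the layer sizes follow the configuration used later in Section 5 but are otherwise arbitrary.

```python
import torch
import torch.nn as nn

class MomentNet(nn.Module):
    """phi_eta(t, x, mu) = Psi_eta(t, x, mu_bar_L) for d = 1: the measure mu
    enters only through its first L moments, estimated from a particle cloud."""
    def __init__(self, L=2, hidden=20):
        super().__init__()
        self.L = L
        self.psi = nn.Sequential(
            nn.Linear(2 + L, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t, x, particles):
        # particles: (M,) samples X^j ~ mu; empirical moments mean(X**l), l = 1..L
        moments = torch.stack([(particles ** l).mean()
                               for l in range(1, self.L + 1)])
        moments = moments.expand(x.shape[0], self.L)     # broadcast over the batch
        inp = torch.cat([t.view(-1, 1), x.view(-1, 1), moments], dim=1)
        return self.psi(inp)
```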
Moment neural networks have been considered in [20] as a special case of cylindrical mean-field neural networks, and satisfy a universal approximation theorem for continuous functions on \([0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\), see [15]. By abuse of notation and language, we identify \(\phi_{\eta}\) and \(\Psi_{\eta}\), and call both indifferently moment neural networks. We shall parametrize the randomized policy (actor) by a Gaussian probability transition kernel in the form \[\pi_{\theta}(.|t,x,\mu) = \mathcal{N}(m_{\theta}(t,x,\bar{\boldsymbol{\mu}}_{L}),\lambda \mathbb{I}_{p}),\] where \(m_{\theta}\) is a moment neural network, hence with log density: \[\log p_{\theta}(t,x,\mu,a) = -\frac{1}{2}\log(2\pi\lambda)-\frac{|a-m_{\theta}(t,x,\bar{ \boldsymbol{\mu}}_{L})|^{2}}{2\lambda},\] and \(\lambda>0\) is a parameter for exploration. Notice that in this case, the known functions \(C_{\theta}\), \(\vartheta_{\theta}\) depend on \(\mu\) only through its \(L\) moments \(\bar{\boldsymbol{\mu}}_{L}\), and by abuse of notation we also write: \(C_{\theta}(t,x,\bar{\boldsymbol{\mu}}_{L})\), \(\vartheta_{\theta}(t,x,\bar{\boldsymbol{\mu}}_{L})\). The value function (critic) is parametrized by a moment neural network \(\mathcal{J}_{\eta}(t,x,\mu)=\mathcal{J}_{\eta}(t,x,\bar{\boldsymbol{\mu}}_{L})\), and we notice that \[\partial_{\mu}\mathcal{J}_{\eta}(t,x,\mu)(\xi)= D_{1}(\xi)\circ\nabla_{y}\mathcal{J}_{\eta}(t,x,\bar{\boldsymbol{\mu}}_{L})\] \[\partial_{\xi}\partial_{\mu}\mathcal{J}_{\eta}(t,x,\mu)(\xi)= D_{2}(\xi)\circ\nabla_{y}\mathcal{J}_{\eta}(t,x,\bar{\boldsymbol{\mu}}_{L}),\] where \(D_{1}(\xi)\) is the matrix in \(\mathbb{R}^{L_{d}\times d}\), and \(D_{2}(\xi)\) is the tensor in \(\mathbb{R}^{L_{d}\times d\times d}\) with components \[[D_{1}(\xi)]_{\ell i}= \ \ell_{i}\xi_{i}^{\ell_{i}-1}\prod_{k\neq i}\xi_{k}^{\ell_{k}}, \quad\text{ for }\xi=(\xi_{i})_{i\in[\![1,d]\!]},\ \ell=(\ell_{i})_{i\in[\![1,d]\!]},\] \[[D_{2}(\xi)]_{\ell ij}= \ \begin{cases}\ell_{i}(\ell_{i}-1)\xi_{i}^{\ell_{i}-2}\prod_{k\neq i }\xi_{k}^{\ell_{k}},\quad i=j\\ \ell_{i}\ell_{j}\xi_{i}^{\ell_{i}-1}\xi_{j}^{\ell_{j}-1}\prod_{k\neq i,j}\xi_ {k}^{\ell_{k}},\quad i\neq j.\end{cases}\] The expression of the operator \(\mathcal{H}_{\theta}\) applied to the moment neural network critic function is then given by \[\mathcal{H}_{\theta}[\mathcal{J}_{\eta}](t,x,\mu)=\ \nabla_{\theta}\Big{[} \mathbb{E}_{\xi\sim\mu}\big{[}D_{1}(\xi)C_{\theta}(t,\xi,\bar{\boldsymbol{ \mu}}_{L})+\frac{1}{2}D_{2}^{\intercal}(\xi)\circ\vartheta_{\theta}(t,\xi,\bar{ \boldsymbol{\mu}}_{L})\big{]}\cdot\nabla_{y}\mathcal{J}_{\eta}(t,x,\bar{ \boldsymbol{\mu}}_{L})\Big{]}. \tag{3.1}\] Here \(D_{2}^{\intercal}(\xi)\) is the tensor in \(\mathbb{R}^{d\times d\times L_{d}}\) with components \([D_{2}^{\intercal}(\xi)]_{ij\ell}=[D_{2}(\xi)]_{\ell ij}\).
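For concreteness, the matrix \(D_{1}(\xi)\) can be assembled directly from the multi-index set \(\mathcal{L}\); the small NumPy sketch below omits the constant zero multi-index (whose row vanishes), and \(D_{2}(\xi)\) follows analogously from its componentwise formula.

```python
import itertools
import numpy as np

def multi_indices(d, L):
    """Nonzero multi-indices l in N^d with |l| <= L, as an (L_d, d) array."""
    idx = [l for l in itertools.product(range(L + 1), repeat=d)
           if 0 < sum(l) <= L]
    return np.array(idx)

def D1(xi, idx):
    """[D1(xi)]_{l,i} = l_i * xi_i**(l_i - 1) * prod_{k != i} xi_k**l_k."""
    Ld, d = idx.shape
    out = np.zeros((Ld, d))
    for r, l in enumerate(idx):
        for i in range(d):
            if l[i] > 0:
                rest = np.prod([xi[k] ** l[k] for k in range(d) if k != i])
                out[r, i] = l[i] * xi[i] ** (l[i] - 1) * rest
    return out

idx = multi_indices(d=2, L=2)     # (0,1), (0,2), (1,0), (1,1), (2,0)
print(D1(np.array([0.5, -1.0]), idx))
```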
In the algorithm, we shall use the expectation of \(\mathcal{H}_{\theta}\), which is given from (3.1) by \[\overline{\mathcal{H}}_{\theta}[\mathcal{J}_{\eta}](t,\mu):= \ \mathbb{E}_{\xi\sim\mu}\big{[}\mathcal{H}_{\theta}[\mathcal{J}_{ \eta}](t,\xi,\mu)\big{]}\] \[= \ \nabla_{\theta}\Big{[}\mathbb{E}_{\xi\sim\mu}\big{[}D_{1}(\xi)C_ {\theta}(t,\xi,\bar{\boldsymbol{\mu}}_{L})+\frac{1}{2}D_{2}^{\intercal}(\xi) \circ\vartheta_{\theta}(t,\xi,\bar{\boldsymbol{\mu}}_{L})\big{]}\cdot\nabla_{y} \mathbb{E}_{\xi\sim\mu}\big{[}\mathcal{J}_{\eta}(t,\xi,\bar{\boldsymbol{\mu}}_{L })\big{]}\Big{]}.\] **Remark 3.1**.: **1.** _For complexity reasons, it is crucial to rely on the above expression of the operator \(\mathcal{H}_{\theta}\), where the differentiation is taken of the expectation \(\mathbb{E}_{\xi\sim\mu}[.]\), and not the reverse: the expectation of the differentiation. Indeed, in the latter case, after empirical approximation of the expectation with \(M\) samples \(\xi^{j}\sim\mu\), \(j=1,\ldots,M\), one should compute by automatic differentiation_ \[\nabla_{\theta}\big{[}D_{1}(\xi^{j})C_{\theta}(t,\xi^{j},\bar{\boldsymbol{\mu}} _{L})+\frac{1}{2}D_{2}^{\intercal}(\xi^{j})\circ\vartheta_{\theta}(t,\xi^{j}, \bar{\boldsymbol{\mu}}_{L})\big{]},\quad\nabla_{y}\mathcal{J}_{\eta}(t,\xi^{j}, \bar{\boldsymbol{\mu}}_{L}),\quad j=1,\ldots,M,\] _which is very costly as \(M\) is of order \(10^{4}\). In the former case, \(\overline{\mathcal{H}}_{\theta}[\mathcal{J}_{\eta}](t,\mu)\) is approximated by automatic differentiation via_ \[\widehat{\mathcal{H}}_{\theta}^{M}[\mathcal{J}_{\eta}](t,\mu) := \ \nabla_{\theta}\Big{[}\Big{(}\tfrac{1}{M}\sum_{j=1}^{M}D_{1}(\xi^{j})C_{ \theta}(t,\xi^{j},\bar{\boldsymbol{\mu}}_{L})+\tfrac{1}{2}D_{2}^{\intercal}(\xi^ {j})\circ\vartheta_{\theta}(t,\xi^{j},\bar{\boldsymbol{\mu}}_{L})\Big{)}\cdot\nabla_{y} \Big{(}\tfrac{1}{M}\sum_{j=1}^{M}\mathcal{J}_{\eta}(t,\xi^{j},\bar{\boldsymbol{\mu}}_{L })\Big{)}\Big{]},\] _hence saving a factor of order \(M\) in the complexity cost. In theory, it is also possible to choose other networks to take into account the dependency on the distribution \(\mu\): the cylindrical network proposed in [15] could be used, but automatic differentiation is then required to calculate \(\partial_{\mu}\mathcal{J}_{\eta}(t,x,\mu)(\xi)\) and \(\partial_{\xi}\partial_{\mu}\mathcal{J}_{\eta}(t,x,\mu)(\xi)\) for each sample of \(\xi\), leading to an explosion in the computation time._ **2.** _In order to calculate the term \(\nabla_{y}\mathcal{J}_{\eta}\), it is necessary to explore different initial distributions; otherwise \(\mathcal{J}_{\eta}\) only depends on \(t\) and \(x\) at convergence and the gradient is impossible to estimate._ ## 4 Algorithm The actor-critic method consists of two optimization stages that are performed alternately: 1.
_Policy evaluation_: given an actor policy \(\pi_{\theta}\), evaluate its cost functional with the critic function \(\mathcal{J}_{\eta}\) that minimizes the loss function arising from the martingale property **(MJ)** after time discretization of the interval \([0,T]\) with the time grid \(\{t_{k}=k\Delta t,k=0,\ldots,n\}\): \[L^{PE}(\eta)=\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}\sum_{k=0}^{n-1} \Big{|}g_{t_{n}}+\sum_{l=k}^{n-1}f_{t_{l}}\Delta t-\mathcal{J}_{\eta}(t_{k},X _{t_{k}},\mu_{t_{k}})\Big{|}^{2}\Delta t\Big{]},\] where we set \(\mu_{t_{l}}=\mathbb{P}_{X_{t_{l}}}\) for the law of \(X_{t_{l}}\), and \(f_{t_{l}}=f(X_{t_{l}},\mu_{t_{l}},\alpha_{t_{l}})\) as the output cost at time \(t_{l}\) for input state \(X_{t_{l}}\), law \(\mu_{t_{l}}\), action \(\alpha_{t_{l}}\sim\pi_{\theta}(.|t_{l},X_{t_{l}},\mu_{t_{l}})\), and \(g_{t_{n}}=g(X_{t_{n}},\mu_{t_{n}})\) the terminal output cost for input \(X_{t_{n}}\), \(\mu_{t_{n}}\). 2. _Policy gradient_: given a critic cost function \(\mathcal{J}_{\eta}\), update the parameter \(\theta\) of the actor by stochastic gradient descent using the gradient, which is given from (2.2), after time discretization, by \[G(\theta)=\ \mathbb{E}_{\alpha\sim\pi_{\theta}}\Big{[}\sum_{k=0}^{n- 1}\nabla_{\theta}\log p_{\theta}(t_{k},X_{t_{k}},\mu_{t_{k}},\alpha_{t_{k}}) \big{[}\mathcal{J}_{\eta}(t_{k+1},X_{t_{k+1}},\mu_{t_{k+1}})-\mathcal{J}_{\eta }(t_{k},X_{t_{k}},\mu_{t_{k}})\] \[\qquad\qquad\qquad+\ f_{t_{k}}\Delta t\big{]}+\mathcal{H}_{\theta}[ \mathcal{J}_{\eta}](t_{k},X_{t_{k}},\mu_{t_{k}})\Big{]}.\] In the practical implementation, we proceed as follows for each epoch e (gradient descent iteration) with a given exploration parameter \(\lambda(e)\) decreasing to \(0\): * We start with a batch \(N\) (of order \(10\)) of initial distributions \(\mu_{0}^{i}\), \(i=1,\ldots,N\), e.g. Gaussian distributions with varying mean and standard deviation parameters, and sample \(X_{0}^{i,j}\sim\mu_{0}^{i}\), \(j=1,\ldots,M\) with \(M\) of order \(10^{4}\). If our ultimate goal is to learn the optimal control and value function for other families of initial distributions, the initial distributions should be sampled accordingly. * We then run by forward induction in time: for \(k=0,\ldots,n-1\): * Empirical estimate of \(\mu_{t_{k}}^{i}\) from \((X_{t_{k}}^{i,j})_{j\in[\![1,M]\!]}\), for \(i=1,\ldots,N\). * Sample \(\alpha_{t_{k}}^{i,j}\sim\pi_{\theta}(.|t_{k},X_{t_{k}}^{i,j},\mu_{t_{k}}^{i})\), \(i\in[\![1,N]\!]\), \(j\in[\![1,M]\!]\) using the exploration parameter \(\lambda(e)\). * Observe running cost \(f_{t_{k}}^{i,j}=f(X_{t_{k}}^{i,j},\mu_{t_{k}}^{i},\alpha_{t_{k}}^{i,j})\), and next state \(X_{t_{k+1}}^{i,j}\), \(i\in[\![1,N]\!]\), \(j\in[\![1,M]\!]\). * Observe final cost \(g_{t_{n}}^{i,j}=g(X_{t_{n}}^{i,j},\mu_{t_{n}}^{i})\), \(i\in[\![1,N]\!]\), \(j\in[\![1,M]\!]\). * Compute the empirical mean approximation of \(L^{PE}(\eta)\) on all initial distributions \(\mu_{0}^{i}\), \(i\in[\![1,N]\!]\): \[\widetilde{L}_{M}^{PE}(\eta)=\ \frac{1}{MN}\sum_{i=1}^{N}\sum_{j=1}^{M}\sum_{k=0}^{n- 1}\big{|}g_{t_{n}}^{i,j}+\sum_{l=k}^{n-1}f_{t_{l}}^{i,j}\Delta t-\mathcal{J}_ {\eta}(t_{k},X_{t_{k}}^{i,j},\mu_{t_{k}}^{i})\big{|}^{2}\Delta t,\] and update the critic parameter by \[\eta\longleftarrow\ \eta-\rho^{C}\nabla_{\eta}\widetilde{L}_{M}^{PE}(\eta),\] where \(\rho^{C}\) is a learning rate. Notice that the gradient is calculated by automatic differentiation.
* Compute the empirical mean approximation of \(G(\theta)\) on all initial distributions \(\mu_{0}^{i}\), \(i\in[\![1,N]\!]\): \[\widetilde{G}_{M}(\theta)=\ \nabla_{\theta}\frac{1}{MN}\sum_{i=1}^{N}\sum_{j=1}^{M} \Big{\{}\sum_{k=0}^{n-1}\log p_{\theta}(t_{k},X_{t_{k}}^{i,j},\mu_{t_{k}}^{i}, \alpha_{t_{k}}^{i,j})\big{[}\mathcal{J}_{\eta}(t_{k+1},X_{t_{k+1}}^{i,j},\mu_{ t_{k+1}}^{i})-\mathcal{J}_{\eta}(t_{k},X_{t_{k}}^{i,j},\mu_{t_{k}}^{i})\] \[+\ f_{t_{k}}^{i,j}\Delta t\big{]}\Big{\}}+\widetilde{\mathcal{H}}_{ \theta}^{M}[\mathcal{J}_{\eta}]\] where \[\widetilde{\mathcal{H}}_{\theta}^{M}[\mathcal{J}_{\eta}]= \nabla_{\theta}\Big{(}\frac{1}{N}\sum_{i=1}^{N}\Big{[}\sum_{k=0} ^{n-1}\Big{(}\frac{1}{M}\sum_{j=1}^{M}D_{1}(X_{t_{k}}^{i,j})C_{\theta}(t_{k},X _{t_{k}}^{i,j},\mu_{t_{k}}^{i})+\frac{1}{2}D_{2}^{\intercal}(X_{t_{k}}^{i,j})\circ \vartheta_{\theta}(t_{k},X_{t_{k}}^{i,j},\mu_{t_{k}}^{i})\Big{)}\cdot\] \[\nabla_{y}\Big{(}\frac{1}{M}\sum_{j=1}^{M}\mathcal{J}_{\eta}(t_{k},X _{t_{k}}^{i,j},\mu_{t_{k}}^{i})\Big{)}\Big{]}\Big{)}\] and update the actor parameter by \[\theta\longleftarrow\ \theta-\rho^{A}\widetilde{G}_{M}(\theta),\] where \(\rho^{A}\) is a learning rate. Again for efficiency, it is crucial to compute by automatic differentiation the gradient after computing all the different expectations, as in Remark 3.1. The outputs \((\theta^{*},\eta^{*})\) are the optimal parameters obtained at convergence of the algorithm. **Remark 4.1**.: _Compared to the classical actor-critic algorithm, where one samples a trajectory for a given distribution, here the batch version of the algorithm consists in sampling and following \(N\) distributions together to estimate the gradients._ **Remark 4.2**.: _In order to check that the algorithm has effectively converged to the solution, we can use the calculated control \(m_{\theta^{*}}(t,x,\mu)\) and apply it from different initial distributions \(\mu_{0}\) sampled as \((X_{0}^{j})_{j\in[\![1,M]\!]}\) in a time-discretized version of the state dynamics. Taking the discrete expectation, we can compare the result obtained to \(\frac{1}{M}\sum_{j=1}^{M}\mathcal{J}_{\eta^{*}}(0,X_{0}^{j},\mu_{0})\). When the results are very close, we can suppose that the algorithm has effectively converged to the right solution._ **Remark 4.3**.: _In the case where we know a priori that the running cost and terminal cost functions depend on the probability distribution \(\mu\) only via its moments \(\bar{\mu}_{L}\), then we only need to estimate the moments of \(\mu_{t_{k}}^{i}\) from \((X_{t_{k}}^{i,j})_{j\in[\![1,M]\!]}\), since all the other coefficients in the algorithm depend upon the measure via its moments._ **Remark 4.4**.: _When \(C_{\theta}(t,x,\mu):=\int_{A}C(t,a)\pi_{\theta}(\mathrm{d}a|t,x,\mu)\), \(\vartheta_{\theta}(t,x,\mu):=\int_{A}\vartheta(t,a)\pi_{\theta}(\mathrm{d}a|t, x,\mu)\) are not analytically explicit, it is always possible to estimate them numerically, for example using numerical quadrature or a quasi-Monte Carlo/Monte Carlo method, but with some non-negligible extra costs._ ## 5 Numerical results Throughout this section, we use moment neural networks with 3 hidden layers and 20 neurons on each layer, and choose the activation function \(\tanh\).
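The critic update used throughout this section minimizes \(\widetilde{L}_{M}^{PE}\); the minimal PyTorch sketch below assembles this martingale loss backwards in time for one simulated distribution, reusing the `MomentNet` interface from the earlier sketch (all tensor shapes and names are illustrative).

```python
import torch

def martingale_loss(critic, times, states, run_costs, g, dt):
    """Empirical martingale loss L^PE for one simulated distribution.
    times[k], states[k]: (M,) tensors at step k; run_costs[k]: observed
    f(X_{t_k}, mu_{t_k}, alpha_{t_k}); g: terminal costs g(X_{t_n}, mu_{t_n})."""
    n = len(times)
    target = g.clone()                 # cost-to-go g + sum_{l >= k} f_{t_l} dt,
    loss = 0.0                         # accumulated backwards in time
    for k in reversed(range(n)):
        target = target + run_costs[k] * dt
        pred = critic(times[k], states[k], states[k]).squeeze(-1)
        loss = loss + ((target - pred) ** 2).mean() * dt
    return loss                        # minimized by an Adam step on eta
```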
The exploration parameter \(\lambda\) is a function of the number of gradient descent iterations (epoch number \(e\leq\hat{N}\)): \[\lambda(e)=(\bar{\lambda}-\underline{\lambda})\Big{(}1-S\big{(}\frac{20e-10 \hat{N}}{\hat{N}}\big{)}\Big{)}+\underline{\lambda},\] where \(\underline{\lambda}=0.0001\) and \(\bar{\lambda}=0.1\) and \(S\) is the sigmoid function: \(S(x)=\frac{1}{1+\exp(-x)}\). In other words, it is chosen so that the exploration period with \(\lambda\) close to \(0.1\) is long enough, then \(\lambda\) slowly decreases to \(0.0001\) and stays close to that value long enough. This function is plotted in Figure 1. Figure 1: Exploration function \(\lambda\) for a number of epochs \(\hat{N}=6000\). At a gradient descent iteration \(e\), we sample the control as: \[\pi_{\theta}(.|t,x,\mu)\sim\mathcal{N}(m_{\theta}(t,x,\mu),\lambda(e)\mathbb{I}_{ p}).\] During the gradient descent algorithm, we use the ADAM optimizer [13]. We point out that it is crucial to use a two-timescale approach (see [1]) for the learning rates \(\rho^{C}\), \(\rho^{A}\) of the critic and actor updates: \(\rho^{C}\) should be at least one order of magnitude higher than \(\rho^{A}\) to get good convergence, hence the approximate critic function should evolve faster. We take a batch size \(N=10\), while the number of samples to estimate distributions is taken equal to \(M=10000\) or \(M=20000\) depending on the examples. In the tables and figures below, we give the average analytic solution "Anal" at \(t=0\), i.e. \(\mathbb{E}_{X_{0}\sim\mu_{0}}[V(0,X_{0},\mu_{0})]\), and the average calculated value function "Calc": \(\mathbb{E}_{X_{0}\sim\mu_{0}}[\mathcal{J}_{\eta^{*}}(0,X_{0},\mu_{0})]\) obtained by the algorithm at \(t=0\), by varying the initial distributions \(\mu_{0}\). The MSE is the mean square error between the analytic and the critic value computed at \(t=0\), i.e. \(\mathbb{E}_{X_{0}\sim\mu_{0}}\big{|}\mathcal{J}_{\eta^{*}}(0,X_{0},\mu_{0})- V(0,X_{0},\mu_{0})\big{|}^{2}\), and the relative error is \[\text{RelError}=\ \frac{\mathbb{E}_{X_{0}\sim\mu_{0}}\big{[}\mathcal{J}_{ \eta^{*}}(0,X_{0},\mu_{0})-V(0,X_{0},\mu_{0})\big{]}}{\mathbb{E}_{X_{0}\sim\mu _{0}}[V(0,X_{0},\mu_{0})]}\] We shall also plot, in the one-dimensional case \(d=1\) with \(A=\mathbb{R}\), the trajectories of the optimal control \(t\mapsto\alpha_{t}^{*}\) vs the ones obtained from moment neural networks, i.e., \(t\mapsto m_{\theta^{*}}(t,X_{t},(\mathbb{E}[X_{t}^{\ell}])_{\ell\in[\![1,L]\!]})\). All training times are measured on an NVIDIA V100 32GB GPU. We consider four examples with control on the drift, including a multidimensional setting and a nonlinear quadratic mean-field control, and one example with controlled volatility, for which we have analytic solutions to be compared with the approximations calculated from our actor-critic algorithm. ### 5.1 Examples with controlled drift In the four examples of this paragraph, we take \(\vartheta(t,a)\equiv 0\), \(C(t,a)=a\) and so \(\vartheta_{\theta}\equiv 0\), \(C_{\theta}=m_{\theta}\). #### 5.1.1 Systemic risk model in one dimension We consider the model in [4]: \[\begin{cases}b(x,\mu,a)&=\ \kappa(\bar{\mu}-x)+a,\quad\sigma\ \text{positive constant}\\ f(x,\mu,a)&=\ \frac{1}{2}a^{2}-qa(\bar{\mu}-x)+\frac{p}{2}(\bar{\mu}-x)^{2}, \qquad g(x,\mu)\ =\ \frac{c}{2}(x-\bar{\mu})^{2},\end{cases}\] for \((x,\mu,a)\in\mathbb{R}\times\mathcal{P}_{2}(\mathbb{R})\times\mathbb{R}\), with some positive constants \(\kappa\), \(q\), \(p\), \(c>0\), \(q^{2}\leq p\). Here we denote by \(\bar{\mu}:=\mathbb{E}_{\xi\sim\mu}[\xi]\).
In this linear quadratic (LQ) model, the value function is explicitly given by \[V(t,x,\mu)=\ K(t)(x-\bar{\mu})^{2}+\sigma^{2}R(t), \tag{5.1}\] where \[K(t)=-\frac{1}{2}\Big{[}\kappa+q-\sqrt{\Delta}\frac{\sqrt{\Delta} \sinh(\sqrt{\Delta}(T-t))+(\kappa+q+c)\cosh(\sqrt{\Delta}(T-t))}{\sqrt{\Delta} \cosh(\sqrt{\Delta}(T-t))+(\kappa+q+c)\sinh(\sqrt{\Delta}(T-t))}\Big{]},\] with \(\sqrt{\Delta}=\sqrt{(\kappa+q)^{2}+p-q^{2}}\), and \[R(t)=\ \frac{\sigma^{2}}{2}\ln\Big{[}\cosh(\sqrt{\Delta}(T-t))+ \frac{\kappa+q+c}{\sqrt{\Delta}}\sinh(\sqrt{\Delta}(T-t))\Big{]}-\frac{\sigma ^{2}}{2}(\kappa+q)(T-t),\] while the optimal control is given by \[\alpha_{t}^{*}=\ (2K(t)+q)(\mathbb{E}[X_{t}]-X_{t}),\quad 0\leq t\leq T.\] The parameters of the model are fixed to the following values: \(\kappa=0.6\), \(\sigma=1\), \(p=c=2\), \(q=0.8\), \(T=1\). We take a number of time steps \(n=100\), \(M=10000\). At each gradient iteration, the initial distribution is sampled with \[X_{0}\sim\ \mu_{0}\ =\ \upsilon_{0}\ \mathcal{N}(0,1),\] where \(\upsilon_{0}^{2}\) is sampled at each iteration according to the uniform distribution on \([0,1]\) for each element of the batch. In Table 1, we give the results obtained in simulation with \(\upsilon_{0}^{2}\in\{0,\frac{1}{10},\ldots,1\}\) after training with \(L=2\) moments, using \(\hat{N}=6000\) gradient iterations. In Figure 2, we plot 3 trajectories of the optimal control and the ones calculated with moment neural networks, and we observe that the control is very well estimated. In this LQ example, we know that the suitable number of moments to take is \(L=2\). In a real test case, we do not know which \(L\) to take. In Table 2, we give the results with \(L=4\), which shows that the results are also very accurate and do not depend on \(L\) being small. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\upsilon_{0}^{2}\) & 0. & 0.1 & 0.2 & 0.3 & 0.4 \\ \hline Anal & 0.3870 & 0.4095 & 0.4321 & 0.4546 & 0.4772 \\ \hline Calc & 0.3958 & 0.4198 & 0.4421 & 0.4642 & 0.4875 \\ \hline MSE & 0.0001 & 0.0001 & 0.0002 & 0.0002 & 0.0004 \\ \hline \hline \(\upsilon_{0}^{2}\) & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline Anal & 0.4997 & 0.5223 & 0.5448 & 0.5674 & 0.5900 \\ \hline Calc & 0.5112 & 0.5341 & 0.5593 & 0.5858 & 0.6082 \\ \hline MSE & 0.0005 & 0.0005 & 0.0006 & 0.0005 & 0.0007 \\ \hline \end{tabular} \end{table} Table 1: Results for the systemic model using \(L=2\), \(\rho^{A}=0.0005\), \(\rho^{C}=0.01\). Training time is 106863s. Figure 2: Trajectories of control with \(\upsilon_{0}^{2}=0.9\). #### 5.1.2 An optimal trading example We consider an optimal trading model taken from [7]: \[\begin{cases}b(x,\mu,a)&=\ a,\quad\sigma\ \text{positive constant}\\ f(x,\mu,a)&=\ a^{2}+2Pa,\qquad g(x,\mu)\ =\ \gamma(x-\bar{\mu})^{2},\end{cases}\] for \((x,\mu,a)\in\mathbb{R}\times\mathcal{P}_{2}(\mathbb{R})\times\mathbb{R}\), with \(P>0\) the constant transaction price per trade, and \(\gamma>0\) the risk aversion parameter. In this LQ framework, the value function has the same form as in (5.1), with \[K(t)\ =\ \frac{\gamma}{1+\gamma(T-t)},\qquad R(t)\ =\ \sigma^{2}\log(1+\gamma(T-t))-P^{2}(T-t),\] while the optimal control is given by \[\alpha_{t}^{\star}=\ -K(t)(X_{t}-\mathbb{E}[X_{t}])-P,\quad 0\leq t\leq T.\] We take the following parameters: \(P=3\), \(\gamma=3\), \(\sigma=1\), \(T=0.5\), and \(n=100\), \(M=10000\).
At each gradient iteration, the initial distribution is sampled with \[X_{0}\sim\ \mu_{0}=\bar{\mu}_{0}+\upsilon_{0}\mathcal{N}(0,1),\] where \((\bar{\mu}_{0},\upsilon_{0}^{2})\) are sampled from \((0.4\mathcal{U}([0,1]),0.5\mathcal{U}([0,1]))\) for each element of the batch. The relative error is plotted in Figure 3 by varying \((\bar{\mu}_{0},\upsilon_{0})\), while the trajectories of the control (optimal vs moment neural network) are plotted in Figure 4. Again, in this LQ example, the suitable number of moments to take is \(L=2\), and when we increase \(L\), convergence is more difficult to achieve and the results become more unstable and may vary from run to run with the same hyper-parameters. Nevertheless, we manage to obtain very good results with \(L=4\), as shown in Figure 3. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\upsilon_{0}^{2}\) & 0. & 0.1 & 0.2 & 0.3 & 0.4 \\ \hline Anal & 0.3869 & 0.4095 & 0.4320 & 0.4546 & 0.4771 \\ \hline Calc & 0.3917 & 0.4159 & 0.4376 & 0.4588 & 0.4801 \\ \hline MSE & 0.0000 & 0.0000 & 0.0001 & 0.0001 & 0.0001 \\ \hline \hline \(\upsilon_{0}^{2}\) & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline Anal & 0.4997 & 0.5222 & 0.5448 & 0.5674 & 0.5899 \\ \hline Calc & 0.5023 & 0.5249 & 0.5471 & 0.5688 & 0.5891 \\ \hline MSE & 0.0000 & 0.0001 & 0.0001 & 0.0002 & 0.0003 \\ \hline \end{tabular} \end{table} Table 2: Results for the systemic case using \(L=4\), \(\rho^{A}=0.0005\), \(\rho^{C}=0.01\). Training time is 115183s. #### 5.1.3 A non linear quadratic mean-field control We construct an ad-hoc mean-field control model with \[\begin{cases}b(t,x,\mu,a)&=\ \beta(t,x,\mu)+a,\quad\sigma\ \mbox{positive constant}\\ f(x,\mu,a)&=F(t,x,\mu)+\frac{1}{2}|a|^{2},\quad g(x,\mu)\ =\mathbb{E}_{\xi\sim\mu}[w(x-\xi)] \end{cases}\] for some smooth \(C^{2}\) even function \(w\) on \(\mathbb{R}\), e.g. \(w(x)=\cos(x)\), and \(F\) a function to be chosen later. In this case, the optimal feedback control valued in \(A=\mathbb{R}\) is given by \[\mathfrak{a}^{\star}(t,x,\mu) = \hat{\mathrm{a}}(t,x,\mathcal{U}(t,x,\mu))\ =\ -\mathcal{U}(t,x,\mu)\ =\ -\partial_{\mu}v(t,\mu)(x)\] \[\quad\mbox{with }v(t,\mu)\ =\ \mathbb{E}_{\xi\sim\mu}[V(t,\xi,\mu)],\] and \(V\) is solution to the Master Bellman equation (see Section 6.5.2 in [2]): \[\partial_{t}V(t,x,\mu)+\big{(}\beta(t,x,\mu)-\mathcal{U}(t,x,\mu )\big{)}\partial_{x}V(t,x,\mu)+\frac{\sigma^{2}}{2}\partial_{xx}^{2}V(t,x,\mu)\] \[+\ \mathbb{E}_{\xi\sim\mu}\Big{[}\big{(}\beta(t,\xi,\mu)- \mathcal{U}(t,\xi,\mu)\big{)}\partial_{\mu}V(t,x,\mu)(\xi)+\frac{\sigma^{2}}{2 }\partial_{x^{\prime}}\partial_{\mu}V(t,x,\mu)(\xi)\Big{]}\] \[\qquad\qquad+F(t,x,\mu)+\frac{1}{2}|\mathcal{U}(t,x,\mu)|^{2}=\ 0, \tag{5.2}\] with the terminal condition \(V(T,x,\mu)=g(x,\mu)\). We look for a solution to the Master equation in the form: \(V(t,x,\mu)=e^{T-t}\mathbb{E}_{\xi\sim\mu}[w(x-\xi)]\).
For such a function \(V\), we have \(\partial_{t}V(t,x,\mu)=-V\), \[\partial_{x}V(t,x,\mu) = e^{T-t}\mathbb{E}_{\xi\sim\mu}[w^{\prime}(x-\xi)],\quad\partial_ {xx}^{2}V(t,x,\mu)\ =\ e^{T-t}\mathbb{E}_{\xi\sim\mu}[w^{\prime\prime}(x-\xi)]\] \[\partial_{\mu}V(t,x,\mu)(\xi) = -e^{T-t}w^{\prime}(x-\xi),\quad\partial_{x^{\prime}}\partial_{ \mu}V(t,x,\mu)(\xi)\ =\ e^{T-t}w^{\prime\prime}(x-\xi),\] and \[\mathcal{U}(t,x,\mu) = e^{T-t}\mathbb{E}_{\xi\sim\mu}[w^{\prime}(x-\xi)-w^{\prime}(\xi- x)]\ =\ 2e^{T-t}\mathbb{E}_{\xi\sim\mu}[w^{\prime}(x-\xi)]\ =\ 2 \partial_{x}V(t,x,\mu),\] since \(w\) is even. By plugging these derivative expressions of \(V\) into the l.h.s. of (5.2), we then see that by choosing \(F\) equal to \[F(t,x,\mu) = e^{T-t}\mathbb{E}_{\xi\sim\mu}\Big{[}(w-\sigma^{2}w^{\prime\prime })(x-\xi)+(\beta(t,\xi,\mu)-\beta(t,x,\mu))w^{\prime}(x-\xi)\Big{]}\] \[-\ 2e^{2(T-t)}\mathbb{E}_{(\xi,\xi^{\prime})\sim\mu\otimes\mu} \big{[}w^{\prime}(x-\xi)w^{\prime}(\xi-\xi^{\prime})\big{]},\] the function \(V\) satisfies the Master Bellman equation, hence is the value function to the mean-field control problem. Actually, with the choice of \(w(x)=\cos(x)\), and using trigonometric relations, we have \[F(t,x,\mu) = \cos(x)\Big{[}e^{T-t}\left((1+\sigma^{2})\mathbb{E}_{\xi\sim\mu}[ \cos(\xi)]+\mathbb{E}_{\xi\sim\mu}[\sin(\xi)\beta(t,\xi,\mu)]-\beta(t,x,\mu) \mathbb{E}_{\xi\sim\mu}[\sin(\xi)]\right)\] \[-\ 2e^{2(T-t)}\left(\mathbb{E}_{\xi\sim\mu}[\sin(\xi)\cos(\xi)] \mathbb{E}_{\xi\sim\mu}[\sin(\xi)]-\mathbb{E}_{\xi\sim\mu}[\sin^{2}(\xi)] \mathbb{E}_{\xi\sim\mu}[\cos(\xi)]\right)\Big{]}\] \[+\ \sin(x)\Big{[}e^{T-t}\left((1+\sigma^{2})\mathbb{E}_{\xi\sim\mu}[ \sin(\xi)]-\mathbb{E}_{\xi\sim\mu}[\beta(t,\xi,\mu)\cos(\xi)]+\beta(t,x,\mu) \mathbb{E}_{\xi\sim\mu}[\cos(\xi)]\right)\] \[-\ 2e^{2(T-t)}\left(\mathbb{E}_{\xi\sim\mu}[\sin(\xi)\cos(\xi)] \mathbb{E}_{\xi\sim\mu}[\cos(\xi)]-\mathbb{E}_{\xi\sim\mu}[\cos^{2}(\xi)] \mathbb{E}_{\xi\sim\mu}[\sin(\xi)]\right)\Big{]}.\] For the test case, we take \[\beta(t,x,\mu)=\ \kappa(\bar{\mu}-x), \tag{5.3}\] with the parameters \(\kappa=\sigma=1\), \(T=0.4\), \(n=40\), \(M=20000\). At each gradient iteration, the initial distribution is sampled with \[X_{0}\sim\ \mu_{0}=\bar{\mu}_{0}+\upsilon_{0}\mathcal{N}(0,1)\] where \((\bar{\mu}_{0},\upsilon_{0}^{2})\) is sampled from \((0.2\mathcal{U}([0,1]),0.5\mathcal{U}([0,1]))\) for each element of the batch. In Figure 5, we give the analytic solution depending on \((\bar{\mu}_{0},\upsilon_{0}^{2})\) and the relative error obtained by the algorithm. We observe that the results are quite accurate. We plot in Figure 6 the trajectories of the optimal control vs the moment neural network, and observe that they are very close. Figure 7 shows that using \(L=2\) or \(3\) is optimal in terms of relative error, while convergence is more difficult to achieve for high values of \(L\). #### 5.1.4 A multi-dimensional LQ example We consider a multi-dimensional extension of the LQ systemic risk model in Section 5.1.1, by supposing that on each dimension, the dynamics satisfies the same equation with independent Brownian motions, and that the cost functions are the sum over each component of the cost function in the univariate case.
In this case, the value function is given by \(V(t,x,\mu)=\sum_{i=1}^{d}V_{1}(t,x_{i},\mu_{i})\), for \(t\in[0,T]\), \(x=(x_{i})_{i\in[\![1,d]\!]}\in\mathbb{R}^{d}\), where \(\mu_{i}\) is the \(i\)-th marginal law of \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), and \(V_{1}\) is the value function in the univariate model given by (5.1). We keep the same parameters as in Section 5.1.1, with a number of time steps \(n=50\), \(M=10000\), \(L=2\), and test in dimension \(d=2\) and \(3\). At each gradient iteration, the initial distribution is sampled from \[X_{0}\sim\ \mu_{0}=\mathcal{N}(0,\upsilon_{0}),\] where \(\upsilon_{0}\) is the diagonal \(d\times d\)-matrix with diagonal elements \(\upsilon_{0,i}\) sampled from \(u_{i}\mathcal{U}([0,1])\), with constants \(u_{i}\in[0,1]\), \(i=1,\ldots,d\), for each element of the batch. We plot in Figure 8 the relative error in dimension \(d=2\) by varying \((\upsilon_{0,1},\upsilon_{0,2})\), and for \(L=2\) and \(L=4\). In Figure 9, we plot the relative error in dimension \(d=3\) by varying \((\upsilon_{0,1},\upsilon_{0,2})\), and for \(\upsilon_{0,3}=0\). Figure 7: Relative error in a non LQ model for different values of \(L\). \(\hat{N}=9000\) gradient iterations, \(\rho^{A}=0.0005\), \(\rho^{C}=0.02\). Figure 8: Relative error in dimension \(d=2\), with \(\rho^{C}=0.01\). Figure 9: Relative error for \(L=2\) in dimension \(d=3\), with \(\hat{N}=9000\) gradient iterations, \(\rho^{A}=0.0005\), \(\rho^{C}=0.01\). Training time is 144000s. ### 5.2 A non LQ example with controlled volatility We consider a one-dimensional model with \[\begin{cases}b(t,x,\mu,a)&=\ \beta(t,x,\mu),\quad\sigma(t,x,\mu,a)\ =\ a,\\ f(x,\mu,a)&=\ F(t,x,\mu)+\frac{1}{2}P|a|^{2}-a,\quad g(x,\mu)\ =\ \mathbb{E}_{\xi \sim\mu}[w(x-\xi)],\end{cases}\] where \(P\) is a positive constant, \(w\) is a smooth \(C^{2}\) even function on \(\mathbb{R}\), e.g. \(w(x)=\cos(x)\), and \(F\) is a function to be chosen later. Notice that \(C_{\theta}\equiv 0\), and \[\vartheta_{\theta}(t,x,\mu) = m_{\theta}(t,x,\mu)^{2}+\lambda,\] where \(\lambda=\lambda(e)\) (depending on the epoch \(e\)) is the exploration parameter of \(\pi_{\theta}(.|t,x,\mu)=\mathcal{N}(m_{\theta}(t,x,\mu),\lambda)\). In this model, the optimal feedback control valued in \(A=\mathbb{R}\) is given by \[\mathfrak{a}^{*}(t,x,\mu)=\ \hat{\mathrm{a}}(t,x,\partial_{x} \mathcal{U}(t,x,\mu))\ =\ \frac{1}{P+\partial_{x}\mathcal{U}(t,x,\mu)},\] \[\text{with}\ \ \mathcal{U}(t,x,\mu)\ =\ \partial_{\mu}v(t,\mu)(x),\quad v(t,\mu)\ =\ \mathbb{E}_{\xi\sim\mu}[V(t,\xi,\mu)],\] and \(V\) is solution to the Master Bellman equation: \[\partial_{t}V(t,x,\mu)+\beta(t,x,\mu)\partial_{x}V(t,x,\mu)+\frac{1} {2}\frac{1}{(P+\partial_{x}\mathcal{U}(t,x,\mu))^{2}}\partial_{xx}^{2}V(t,x,\mu)\] \[+\,\mathbb{E}_{\xi\sim\mu}\Big{[}\beta(t,\xi,\mu)\partial_{\mu}V (t,x,\mu)(\xi)+\frac{1}{2}\frac{1}{(P+\partial_{x}\mathcal{U}(t,\xi,\mu))^{2}} \partial_{x^{\prime}}\partial_{\mu}V(t,x,\mu)(\xi)\Big{]}\] \[+F(t,x,\mu)+\frac{1}{2}\frac{P}{(P+\partial_{x}\mathcal{U}(t,x, \mu))^{2}}-\frac{1}{P+\partial_{x}\mathcal{U}(t,x,\mu)}=\;0, \tag{5.4}\] with the terminal condition \(V(T,x,\mu)=g(x,\mu)\). We look for a solution to the Master equation in the form: \(V(t,x,\mu)=e^{T-t}\mathbb{E}_{\xi\sim\mu}[w(x-\xi)]\), and by similar calculations as in Section 5.1.3, we would have \(\mathcal{U}(t,x,\mu)=2\partial_{x}V(t,x,\mu)=2e^{T-t}\mathbb{E}_{\xi\sim\mu} [w^{\prime}(x-\xi)]\).
Therefore, with \(w(x)=\cos(x)\), and by choosing \(F\) equal to \[F(t,x,\mu)= -\frac{P}{2(P-2e^{T-t}\mathbb{E}_{\xi\sim\mu}[\cos(x-\xi)])^{2}}+ \frac{1}{P-2e^{T-t}\mathbb{E}_{\xi\sim\mu}[\cos(x-\xi)]}+\] \[\mathbb{E}_{\xi\sim\mu}[\cos(x-\xi)]e^{T-t}(1+\frac{1}{2}\frac{1} {(P-2e^{T-t}\mathbb{E}_{\xi\sim\mu}[\cos(x-\xi)])^{2}})+\] \[e^{T-t}\mathbb{E}_{\xi\sim\mu}[(\beta(t,x,\mu)-\beta(t,\xi,\mu) )\sin(x-\xi)]+\] \[e^{T-t}\mathbb{E}_{\xi\sim\mu}[\cos(x-\xi)\frac{1}{2}\frac{1}{(P -2e^{T-t}\mathbb{E}_{\xi^{\prime}\sim\mu}[\cos(\xi-\xi^{\prime})])^{2}}],\] the function \(V\) satisfies the Master Bellman equation, hence is the value function to the mean-field control problem. To be easily computable, the function \(F\) can be rewritten using trigonometric relations as \[F(t,x,\mu)= -\frac{P}{2}\frac{1}{(P-2e^{T-t}[\cos(x)\overline{\cos}_{\mu}+ \sin(x)\overline{\sin}_{\mu}])^{2}}+\frac{1}{P-2e^{T-t}[\cos(x)\overline{\cos} _{\mu}+\sin(x)\overline{\sin}_{\mu}]}\] \[+\;e^{T-t}[\cos(x)\overline{\cos}_{\mu}+\sin(x)\overline{\sin}_{ \mu}](1+\frac{1}{2}\frac{1}{(P-2e^{T-t}[\cos(x)\overline{\cos}_{\mu}+\sin(x) \overline{\sin}_{\mu}])^{2}})\] \[+\;e^{T-t}[\beta(t,x,\mu)\sin(x)\overline{\cos}_{\mu}-\beta(t,x, \mu)\cos(x)\overline{\sin}_{\mu}\] \[\quad-\;\mathbb{E}_{\xi\sim\mu}[\beta(t,\xi,\mu)\cos(\xi)]\sin(x )+\mathbb{E}_{\xi\sim\mu}[\beta(t,\xi,\mu)\sin(\xi)]\cos(x)]\] \[+\;\frac{e^{T-t}}{2}\cos(x)\mathbb{E}_{\xi\sim\mu}[\frac{\cos( \xi)}{2[P-2e^{T-t}(\cos(\xi)\overline{\cos}_{\mu}+\sin(\xi)\overline{\sin}_{ \mu})]^{2}}]\] \[+\;\frac{e^{T-t}}{2}\sin(x)\mathbb{E}_{\xi\sim\mu}[\frac{\sin( \xi)}{2[P-2e^{T-t}(\cos(\xi)\overline{\cos}_{\mu}+\sin(\xi)\overline{\sin}_{ \mu})]^{2}}]\] with the notations: \(\overline{\cos}_{\mu}:=\mathbb{E}_{\xi\sim\mu}[\cos(\xi)]\), \(\overline{\sin}_{\mu}:=\mathbb{E}_{\xi\sim\mu}[\sin(\xi)]\). We take \(P=2.2e^{T}\) so that the control is bounded, the same trend \(\beta\) as in (5.3), and parameters as in Section 5.1.3: \(\kappa=\sigma=1\), \(T=0.4\), \(n=40\), \(M=20000\). At each gradient iteration, the initial distribution is sampled with \[X_{0}\sim\;\mu_{0}=\bar{\mu}_{0}+\upsilon_{0}\mathcal{N}(0,1)\] where \((\bar{\mu}_{0},\upsilon_{0}^{2})\) is sampled from \((0.2\mathcal{U}([0,1]),0.5\mathcal{U}([0,1]))\). Controlling the volatility is more difficult than controlling the trend, and it is crucial for the method that \(\rho^{A}\) is very small. We take \(\hat{N}=9000\) gradient iterations. Training time with \(L=3\) is 67228s, while it takes 69073s for \(L=4\). In Figure 10, we give the relative error obtained with \(L=3\) and \(L=4\). Notice that with \(L=3\), \(\rho^{A}\) is small, and that we have to take \(\rho^{A}\) even smaller with \(L=4\). The relative error is small, but the control is not as well approximated as in the controlled drift example, as shown in Figure 11. ## 6 Conclusion In this study, we have presented a robust and effective resolution to the challenging problem of mean-field control within a partially model-free continuous-time framework. Leveraging policy gradient techniques and actor-critic algorithms, our approach has demonstrated the valuable role of moment neural networks in the sampling of distributions. We have illustrated the significance of maintaining a low number of moments (typically two or three), while underscoring the critical role played by fine-tuning the learning rates for actor and critic updates. Subsequent developments could encompass the extension to drift and diffusion coefficients with non-separable forms in the state and control components.
Furthermore, a compelling direction for further investigation could involve mean-field dynamics governed by jump diffusion processes, where the intensities of the jumps remain unknown.
2309.12212
SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices
Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic with extremely high energy efficiency. By employing the distinct polarity of current to denote logic `0' and `1', AQFP devices serve as excellent carriers for binary neural network (BNN) computations. Although recent research has made initial strides toward developing an AQFP-based BNN accelerator, several critical challenges remain, preventing the design from being a comprehensive solution. In this paper, we propose SupeRBNN, an AQFP-based randomized BNN acceleration framework that leverages software-hardware co-optimization to eventually make the AQFP devices a feasible solution for BNN acceleration. Specifically, we investigate the randomized behavior of the AQFP devices and analyze the impact of crossbar size on current attenuation, subsequently formulating the current amplitude into the values suitable for use in BNN computation. To tackle the accumulation problem and improve overall hardware performance, we propose a stochastic computing-based accumulation module and a clocking scheme adjustment-based circuit optimization method. We validate our SupeRBNN framework across various datasets and network architectures, comparing it with implementations based on different technologies, including CMOS, ReRAM, and superconducting RSFQ/ERSFQ. Experimental results demonstrate that our design achieves an energy efficiency of approximately 7.8x10^4 times higher than that of the ReRAM-based BNN framework while maintaining a similar level of model accuracy. Furthermore, when compared with superconductor-based counterparts, our framework demonstrates at least two orders of magnitude higher energy efficiency.
Zhengang Li, Geng Yuan, Tomoharu Yamauchi, Zabihi Masoud, Yanyue Xie, Peiyan Dong, Xulong Tang, Nobuyuki Yoshikawa, Devesh Tiwari, Yanzhi Wang, Olivia Chen
2023-09-21T16:14:42Z
http://arxiv.org/abs/2309.12212v1
# SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices ###### Abstract Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic with extremely high energy efficiency. By employing the distinct polarity of current to denote logic '0' and '1', AQFP devices serve as excellent carriers for binary neural network (BNN) computations. Although recent research has made initial strides toward developing an AQFP-based BNN accelerator, several critical challenges remain, preventing the design from being a comprehensive solution. In this paper, we propose SupeRBNN, an AQFP-based randomized BNN acceleration framework that leverages software-hardware co-optimization to eventually make the AQFP devices a feasible solution for BNN acceleration. Specifically, we investigate the randomized behavior of the AQFP devices and analyze the impact of crossbar size on current attenuation, subsequently formulating the current amplitude into the values suitable for use in BNN computation. To tackle the accumulation problem and improve overall hardware performance, we propose a stochastic computing-based accumulation module and a clocking scheme adjustment-based circuit optimization method. To effectively train the BNN models that are compatible with the distinctive characteristics of AQFP devices, we further propose a novel randomized BNN training solution that utilizes algorithm-hardware co-optimization, enabling simultaneous optimization of hardware configurations. In addition, we propose implementing batch normalization matching and the weight rectified clamp method to further improve the overall performance. We validate our SupeRBNN framework across various datasets and network architectures, comparing it with implementations based on different technologies, including CMOS, ReRAM, and superconducting RSFQ/ERSFQ. Experimental results demonstrate that our design achieves an energy efficiency approximately \(7.8\times 10^{4}\) times higher than that of the ReRAM-based BNN framework while maintaining a similar level of model accuracy. Furthermore, when compared with superconductor-based counterparts, our framework demonstrates at least two orders of magnitude higher energy efficiency. ## 1 Introduction In recent years, deep learning and deep neural networks (DNNs) have become the core enabler of a broad spectrum of artificial intelligence (AI) applications such as image recognition [22], natural language processing [23], and autonomous driving [6]. However, the high computation and storage demands of DNN executions remain an essential challenge for the democratization of AI. A significant amount of effort has been dedicated to reducing network energy consumption at the algorithmic level. Recent studies have proposed Binary Neural Networks (BNNs) as a solution [19, 20, 47, 58], which have a 32\(\times\) smaller memory footprint than conventional DNNs that use 32-bit floating-point precision, despite having a similar network structure. Additionally, BNNs can avoid the tremendous floating-point multiply-accumulate (MAC) operations in conventional DNN models by employing bit-wise exclusive-NOR and popcount logic. In recent years, advancements in BNN training techniques and network structure designs have led to significant improvements in network accuracy [75, 9], making BNN a promising candidate for energy-efficiency-oriented designs.
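To make the exclusive-NOR/popcount formulation concrete, the small sketch below shows the binary dot product under the usual encoding (bit 1 encoding +1 and bit 0 encoding -1); this is a generic BNN identity rather than code from the paper.

```python
import numpy as np

def bnn_dot(w_bits, x_bits):
    """Binary dot product via XNOR + popcount: with bits encoding +1/-1,
    sum_i w_i * x_i = 2 * popcount(XNOR(w, x)) - n."""
    xnor = ~(w_bits ^ x_bits)                      # bitwise exclusive-NOR
    return 2 * np.count_nonzero(xnor) - w_bits.size

w = np.array([1, 0, 1, 1], dtype=bool)
x = np.array([1, 1, 0, 1], dtype=bool)
assert bnn_dot(w, x) == ((2 * w.astype(int) - 1) * (2 * x.astype(int) - 1)).sum()
```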
In addition to algorithmic optimizations, significant advancements have been achieved in the hardware domain, with superconducting electronics (SCE) being a prime example. Superconducting logic families, leveraging magnetic flux quantization and quantum interference in Josephson-junction (JJ)-based superconductor loops, have emerged as promising candidates for future computing. The IEEE International Roadmap on Devices and Systems (IRDS) has recognized SCE as one of the top-level roadmaps since 2018 [21, 33]. Among various superconducting logic families, Adiabatic Quantum-Flux-Parametron (AQFP) logic stands out for its exceptional energy efficiency. In 2019, researchers experimentally demonstrated a 1.4 zJ energy dissipation for each operation in AQFP at the device level [67]. At the circuit level, the authors in [15] have shown that, compared to state-of-the-art CMOS technology, AQFP can achieve an energy-efficiency gain in the range of \(10^{4}\sim 10^{5}\). In addition, research on AQFP design automation has been conducted worldwide, aiming to achieve system-level AQFP circuit design and implementation [11, 12, 60, 71]. Thanks to advancements in the EDA environment for AQFP VLSI design, several successful AQFP chips have been demonstrated [3, 70, 76]. Diverging from previous neural network acceleration efforts focused on RSFQ superconducting logic, such as superNPU [37] and JBNN [27], recent research [77] has recognized the immense potential of integrating BNNs with AQFP technology to achieve exceptionally efficient DNN accelerator designs, marking an initial endeavor in this direction. In [77], a crossbar synapse array architecture using AQFP devices is proposed: a prototype module intended to efficiently compute the vector-matrix multiplications required for the MAC operations in BNNs. However, this is far from a complete solution for making AQFP devices feasible for BNN acceleration; several critical challenges still need to be addressed. First of all, the utilization of AQFP devices for constructing crossbar arrays poses a challenge regarding their _randomized behavior issue_ (**Challenge #1**). Specifically, when building an AQFP-based crossbar, the accumulated current in the analog domain on each crossbar column suffers from _current attenuation_ caused by the increasing superconductive inductance as the crossbar size increases. With an attenuated input current, the AQFP buffer may not be able to precisely detect the direction of the input current, resulting in randomized outputs (more details in Section 4.2). Such randomness introduces inaccuracy into BNN computation, resulting in significant discrepancies between the BNN model that is trained in software and its actual behavior during hardware implementation. Consequently, the accuracy of the network may degrade substantially. Moreover, due to the current attenuation issue and immature manufacturing technology, the AQFP-based crossbar has limited scalability (**Challenge #2**). This means that the size of the crossbar array cannot be arbitrarily large, so multiple crossbar arrays must be employed to accommodate all the weights of a BNN layer or a convolutional filter. However, this raises another problem: how to effectively accumulate the intermediate results from the corresponding crossbar columns across multiple crossbars (**Challenge #3**). This is not a trivial task, since only binary intermediate results are available.
Inappropriately addressing this problem can lead to significant accuracy degradation. Last but not least, hardware configurations such as the crossbar size and the threshold current of the AQFP buffer also need to be optimized to deliver the best accuracy while considering the hardware performance (**Challenge #4**). Given these critical challenges, one may ask whether this is yet another crippled design that must end with compromised accuracy or hardware performance. Fortunately, the answer is _no_. To overcome these four challenges, we first investigate the randomized behavior of AQFP devices. Then, we propose an AQFP randomized-behavior-aware BNN training paradigm, which incorporates the randomized behavior of the AQFP buffer by formulating the binarization process of the output feature maps in a probabilistic manner according to the amplitude of the crossbar's output current (**Contribution #1**). We also incorporate the weight rectified clamp method to help improve randomized BNN training accuracy. After that, we simulate the impact of current attenuation for different crossbar sizes and formulate the current amplitude into the mathematical value used in BNN computation. By doing this, the gap between the BNN model trained in software and the model's actual behavior in the hardware implementation can be mitigated (**Contribution #2**). Intriguingly, we find that the unique randomness behavior of AQFP devices is inherently compatible with the stochastic computing (SC) technique. Therefore, to solve the problem of accumulating the intermediate results from the corresponding crossbar columns across multiple crossbars, we propose a novel and efficient SC-based accumulation module circuit to add up the intermediate results and to recover the model accuracy impacted by the randomized behavior (**Contribution #3**). Since the randomized behavior that appears at the AQFP buffer's output is constrained and dependent on the input current amplitude, it can be seamlessly converted to a stochastic number (SN) via a specific observation window with minimal hardware overhead. Due to the significant influence of hardware configurations on model accuracy, we propose a comprehensive software-hardware co-optimization that tunes the hardware configurations of the AQFP-based BNN accelerator design, such as the crossbar synapse array size, the stochastic computing bit-stream length, and the "gray-zone" width of the AQFP buffer, by comprehensively considering power consumption, energy efficiency, and hardware computing error (**Contribution #4**). Besides that, we introduce a batch normalization (BN) matching method to address the floating-point computation problem induced by the BN layer with no additional peripheral circuits, and a clocking scheme adjustment-based circuit optimization to improve the hardware performance (**Contribution #5**). To validate the effectiveness of SupeRBNN, a series of detailed comparative experiments is provided. We analyze the accuracy distribution across multiple hardware configurations and the sensitivity of model accuracy to the SC bit-stream length. We also compare our method with multiple representative technologies, including CMOS, ReRAM, and RSFQ/ERSFQ, on the MNIST and CIFAR-10 datasets. SupeRBNN achieves about \(7.8\times 10^{4}\) times higher energy efficiency at a similar model accuracy level compared with the representative ReRAM-based BNN framework on the CIFAR-10 dataset.
## 2 Background and Related Work

### Model Quantization and Binary Neural Network

Model quantization is a crucial technique for DNN inference acceleration. It maps the 32-bit floating-point weight and activation values in a DNN model to representations with fewer bits. Existing model quantization research can be categorized according to quantization schemes. Binary neural networks (BNNs) [19, 20, 47, 58] and ternary neural networks (TNNs) [32, 81] use extremely low precision for DNN models, and low-bit-width fixed-point neural networks [17, 80] quantize models with the same interval between each quantization level. Among them, with weights constrained to \(\{-1,1\}\), the multiplications of a BNN can be replaced by additions/subtractions. Additions/subtractions can also be eliminated using XNOR and AND operations if activations are quantized to binary as well. This can significantly reduce operations and simplify hardware implementation, which is ideal for low-power-consumption scenarios. As a pioneering work, Courbariaux et al. [20] first binarized both weights and activations with the sign function. To overcome the almost-everywhere-zero gradients of the sign function, they incorporated the STE [8] as an approximation to enable gradient back-propagation. However, the limited representational ability of BNNs leads to a significant drop in accuracy. To mitigate the accuracy drop, XNOR-Net [58] introduces scaling factors obtained from the \(L_{1}\)-norm of the weights or activations to reduce the quantization error. Then, the rotated binary neural network (RBNN) [46] explores and reduces the quantization error by considering the influence of the angular bias between the binarized weight vector and its full-precision version. Later works propose new gradient estimation functions and binarization-friendly network architectures to promote BNN performance [30, 47, 48, 49, 75, 78].

### AQFP Superconducting Logic

AQFP originates from quantum-flux-parametron (QFP) logic, one among many superconducting logic families, which was first proposed in 1985 [50]. The authors in [66] proposed an adiabatic version of QFP that achieves extremely low energy dissipation by re-parameterizing the device to allow QFP gates to operate in an adiabatic mode, resulting in roughly 5\(\sim\)6 orders of magnitude lower energy dissipation than its CMOS counterpart. Like many other superconductor-based logic families, AQFP also employs the Josephson junction (JJ) as the basic switching element to obtain the state transitions for logic encoding. The most basic structure of AQFP circuits is the AQFP buffer, which consists of a double-Josephson-junction SQUID (\(J_{1}\), \(J_{2}\)) [18], as shown in Figure 1. A minimalist approach has been proposed to create an AQFP cell library containing essential logic gates (e.g., INVERTER, AND, OR, and MAJORITY gates) built from AQFP buffers for digital circuit design [69]. Because different directions (positive and negative) of the output current pulses (\(I_{out}\)) represent distinct logic states (0 or 1), the accumulation of outputs from various AQFP gates can be efficiently achieved through a straightforward current summation in the analog domain. Moreover, when a high excitation current is maintained, the logic state stored in an AQFP buffer is retained, making it usable as a single-bit memory cell for storing the 1-bit BNN weights.
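To make the XNOR/popcount arithmetic described above concrete, the following minimal NumPy sketch (our illustration, not code from the paper; all names are ours) shows how a \(\pm 1\) dot product reduces to an XNOR followed by a popcount.

```python
import numpy as np

def binarize(x):
    """Sign binarization to {-1, +1}; zero maps to +1, as in the sign function."""
    return np.where(x >= 0, 1, -1).astype(np.int32)

def xnor_popcount_dot(a_b, w_b):
    """Binary dot product via XNOR + popcount.
    Encoding -1 as bit 0 and +1 as bit 1, XNOR is 1 exactly where the
    signs agree, so dot = (#agree) - (#disagree) = 2*popcount - n."""
    agree = ~((a_b > 0) ^ (w_b > 0))          # bit-wise XNOR
    n = a_b.size
    return 2 * int(np.count_nonzero(agree)) - n

rng = np.random.default_rng(0)
a_b = binarize(rng.standard_normal(16))
w_b = binarize(rng.standard_normal(16))
# The XNOR/popcount result matches the ordinary +-1 multiply-accumulate.
assert xnor_popcount_dot(a_b, w_b) == int(np.dot(a_b, w_b))
```

In the AQFP crossbar discussed next, this same agreement-minus-disagreement count is obtained directly as an analog current sum rather than a digital popcount.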
These characteristics of AQFP render it well-suited for addressing MAC operations in BNNs using a crossbar-based in-memory computing architecture. However, due to the operating principle of the AQFP buffer, the output is sensitive to the direction of the input current. When the amplitude of the input current is very small and falls in the "gray-zone" \(\Delta I_{in}\)[25] of an AQFP buffer, the stochastic switching behavior (caused by thermal or quantum fluctuations) of the AQFP buffer makes it hard to detect the direction of the input current, resulting in a randomized output with a probability related to the input current, i.e., \(0<P(I_{in})<1\). This unique property is a double-edged sword: it introduces inaccuracy but also enables compatibility with stochastic computing. Diverging from previous neural network acceleration works [37] targeting RSFQ superconducting logic, recent research [77] proposes a crossbar synapse array architecture designed for implementing BNN models tailored for AQFP logic. However, unresolved issues like current attenuation, limited scalability, and the randomized behavior of AQFP buffers still hinder the true implementation of an AQFP-based crossbar array architecture. Our proposed framework addresses and resolves these challenges, making it a feasible solution.

### Stochastic Computing

Stochastic computing (SC) is a paradigm that represents a number, named a stochastic number (SN), by counting the number of ones in a bit-stream. For example, the bit-stream 0100110100, containing four ones in a 10-bit stream, represents the real number \(x=P_{X}=4/10=0.4\). (Here we use \(X\) to represent the stochastic bit-stream, whereas \(x\) represents the real value associated with \(X\).) In the bit-stream, each bit is independent and identically distributed (i.i.d.). In addition to the above unipolar encoding format, SC can also represent numbers in the range of [-1, 1] using the bipolar encoding format. Concretely, a real number \(x\) is encoded by \(P(X=1)=(x+1)/2\). Hence, 0.4 can be represented by 1011011101, as \(P(X=1)=(0.4+1)/2=7/10\), and \(-0.6\) can be represented by 0100100000, with \(P(X=1)=(-0.6+1)/2=2/10\). Figure 2 shows examples of the different SN representation formats. The recent work SC-AQFP [13] develops an AQFP-based DNN acceleration framework that uses stochastic computing to realize the whole DNN implementation. However, it can only handle very small networks for simple tasks (e.g., MNIST) without complex layers (e.g., batch normalization), and it requires a rather large bit-stream length (i.e., 256\(\sim\)2048). Compared with SC-AQFP, our proposed SupeRBNN contributes a new computational paradigm, where stochastic computing is used as a component for the accumulation of intermediate results; it can work on larger DNNs and requires a smaller bit-stream length (i.e., 16\(\sim\)32).

## 3 Challenges and Motivations

As mentioned in the introduction, the characteristics of the AQFP buffer match the needs of computation in BNN models well, so it is appealing to design an ultra-energy-efficient AQFP-based BNN accelerator. Recent work [77] proposes an AQFP-based crossbar synapse array architecture targeting BNN model implementation. This architecture pre-stores BNN weights and deploys an XNOR macro inside logic-in-memory cells, which can theoretically achieve energy-efficient in-memory computing.

Figure 1: Adiabatic Quantum-Flux-Parametron logic. (a) Schematic of an AQFP buffer. (b) Microphotograph of the fabricated AQFP buffer using a 4-layer niobium process [51].
However, the randomized behavior of AQFP buffers, current attenuation within the crossbar, the limited hardware scalability, and the hardware configuration problem make it hard to realize a practical deployment.

**Randomized Behavior of AQFP Buffer:** Because of thermal noise and/or quantum fluctuations, the output of an AQFP buffer exhibits randomized switching behavior, especially when the input current amplitude falls in a certain range, known as a finite "gray-zone" \(\Delta I_{in}\)[25], in which the AQFP buffer may not be able to precisely detect the direction of the input current. Such a phenomenon introduces inaccuracy in BNN computation and may eventually lead to degraded network accuracy. To handle this problem, we first investigate the randomized behavior and simulate this phenomenon within our research scope (4.2K), then incorporate it into our proposed AQFP-aware BNN training algorithm (Section 4.2 and Section 5.1). Previous ReRAM- and PCM-based work [38] also considers randomness on devices, where two types of noise are modeled: programming noise and drift noise. One usually adds a random variable to the original weights to mimic the potential noise/imprecision when mapping the model onto different products/hardware, so that a trained model has overall better performance/robustness across products/hardware. These noises are deterministic after a model is mapped to specific hardware; they are not data-dependent. On the contrary, the randomized behavior in AQFP devices is data-dependent: it depends on both \(I_{in}\) and the hardware configuration for each computation. Therefore, we need to analyze the probability of the intermediate results and incorporate this randomized behavior inside the training algorithm.

**Crossbar Current Attenuation and Scalability Problem:** When building an AQFP-based crossbar, the accumulated current in the analog domain on each crossbar column suffers from current attenuation caused by the increasing superconductive inductance. The relationship between the accumulated current amplitude and the mathematical value (the latent activation value in the BNN) varies with the crossbar size, which increases the randomized behavior in the value domain because the attenuated input current amplitude is more likely to fall into the "gray-zone" of the AQFP buffer. As a result, such randomness in the value domain is intensified as the crossbar becomes larger. Since excessive current attenuation results in completely randomized outputs, the AQFP crossbar's scalability is limited, and it is not able to accommodate all the weights from a BNN layer or a convolutional filter. To overcome the limited scalability of the AQFP crossbar, we use multiple crossbars to accumulate the intermediate result of each BNN filter. To mitigate the impact of current attenuation on BNN computation, we investigate the impact of crossbar size, formulate the current amplitude into the value used in BNN computation, and incorporate the factor of current attenuation into the AQFP-aware BNN training (Section 4.2).

**Accumulation of Intermediate Result Problem:** In the design of [77], an AQFP buffer is used as the neuron circuit of the crossbar (Section 4.1), functioning as both a sign operator and an analog-to-digital converter (ADC) and directly outputting the 1-bit binarized results. This architecture is ultra-energy-efficient but requires one column of the crossbar to contain a whole filter computation in the BNN.
However, as mentioned above, the limited scalability of the AQFP crossbar may not satisfy the demands of the BNN model, and we need to use multiple crossbars to accumulate the intermediate results. To handle the accumulation of intermediate results while preserving the accuracy impacted by the AQFP buffer's randomized behavior, we design a novel SC-based accumulation module circuit as the output peripheral circuit to add up the intermediate results from each crossbar and convert the stochastic numbers back to 1-bit values as the input of the next layer (Section 4.3).

**Hardware Configuration Problem:** In general, crossbar accelerator designs prefer a larger crossbar size and coarse-grained computations to ensure higher computation throughput and energy efficiency [2, 45, 64]. The AQFP-based design becomes more complex, since the hardware configurations, such as the crossbar size, the "gray-zone" width, the threshold current of the AQFP comparator (illustrated in Section 4.2), and the bit-stream length of the SNs, affect not only the energy efficiency but also the randomized behavior and, consequently, the model accuracy. We need to optimize these configurations to deliver the best accuracy while considering the hardware constraints. Therefore, we propose a comprehensive algorithm-hardware co-optimization for both randomized BNN training and hardware configurations (Section 5.4). To fully leverage the potential of AQFP devices, we also introduce a batch normalization matching method to address the floating-point computation problem induced by the BN layer with no additional peripheral circuits (Section 5.2).

## 4 Hardware Design of AQFP-Based Randomized BNN Accelerator

In this section, we first revisit the AQFP-based crossbar synapse array and the corresponding neuron circuit design proposed in [77] (Section 4.1). Then, we make a comprehensive analysis of the randomized behavior of the AQFP buffer and the crossbar current attenuation and propose our novel designs. In Section 4.2, we explore the impact of crossbar size on the current attenuation, analyze the randomized behavior of the AQFP buffer, and formulate the current amplitude into the value used in BNN computation. In Section 4.3, we propose a stochastic computing-based accumulation module to accumulate the intermediate computation results from the crossbar columns while accounting for the randomized outputs from the neuron circuits.

Figure 2: Examples of the unipolar and bipolar representations of stochastic numbers.

### AQFP-based Crossbar Synapse Array Design for BNN

Although BNNs employ binary weights and activations, they still suffer from significant data movement between the memory and computing units in conventional von Neumann architectures. This data movement can lead to performance bottlenecks and increased energy consumption. Considering that the AQFP buffer can be used as a single-bit memory cell and that its output current can be easily accumulated in the analog domain, a logic-in-memory (LiM) array-based architecture is used to perform BNNs, employing the in-memory/near-memory computing concept. Figure 3 illustrates the circuit architecture of the AQFP BNNs. The binarized weights are pre-stored in the 1-bit AQFP LiM cells and multiplied by the in-cell XNOR macro.
The output of each LiM cell is the multiplication result of the input activation \(a_{i}\) in the \(i\)-th row of the crossbar and the corresponding pre-stored weight \(w_{i,j}\) in the \(i\)-th row and \(j\)-th column of the synapse array; the multiplication result is represented as \(\mathrm{XNOR}(a_{i},w_{i,j})\). Differing from the conventional popcount-based accumulation in BNNs, the crossbar adopts an analog summation approach to add up all the outputs directly, since the logic '1' and '0' in AQFP are represented by positive and negative current pulses. The accumulation result, represented by the current sum of each column in the illustrated synapse array, is sent to the neuron circuits. As shown in Figure 1, a basic AQFP gate consists of two inductor-Josephson-junction loops \(L_{1}\)-\(J_{1}\) and \(L_{2}\)-\(J_{2}\), and the output logic state is denoted by the positive or negative current flowing through the output inductor \(L_{out}\). Therefore, an AQFP buffer can also serve as a current sensor, since AQFP buffers can detect the directions/signs of the input current and convert them into '0's or '1's. This unique characteristic makes AQFP naturally suitable as the analog-to-digital converter (ADC) in a BNN, since the BNN also requires a 1-bit representation for intermediate computation results such as the output feature maps. Therefore, the neuron circuit can simply be built using AQFP buffers. For a specific column of the crossbar, the output currents of each LiM cell are merged by magnetic coupling to obtain the accumulated current. Then, depending on the direction of the accumulated current, an AQFP buffer serves as both a sign function and an ADC to binarize and convert the accumulated current into the logic state '0' or '1'.

### Randomized Behavior of AQFP Buffer and Crossbar Current Attenuation Analysis

Ideally, the neuron circuit in the BNN should generate deterministic results to ensure accurate computation. The randomized behavior of AQFP-based neuron circuits may introduce computation inaccuracy, eventually leading to a potential accuracy drop. Therefore, we need an effective way to quantify the randomized behavior, so that we can integrate it into the BNN training process; with such randomness-aware training, the accuracy can be significantly preserved. Moreover, understanding the relationship between the crossbar size and the randomized behavior can also help us select appropriate hardware configurations for the implementation. As we mentioned earlier, the AQFP buffer can serve as a current sensor to detect the directions/signs of the input current. However, randomized switching behavior exists in an AQFP comparator when the input current amplitude falls in a certain range, known as a finite "gray-zone" \(\Delta I_{in}\)[25], resulting in a finite output probability \(0<P(I_{in})<1\) introduced by thermal or quantum fluctuations. Quantitative research [73] on the quantum fluctuation effect in Josephson devices shows that \(\Delta I_{in}\) grows at high temperatures due to thermal noise, whereas at \(T\to 0\) it saturates due to quantum fluctuations. Within our research scope (4.2K), we only consider thermal fluctuations as noise sources. Figure 4 shows the output probability of '1' corresponding to a given input current amplitude at the micro-ampere level, where the boundary of randomized switching is around \(\pm 2\mu\)A.
The probability that the AQFP buffer outputs a forward current can be formulated as:

\[P\left(I_{in}\right)=0.5+0.5\,\mathrm{erf}\left(\sqrt{\pi}\frac{\left(I_{in}-I_{th}\right)}{\Delta I_{in}}\right), \tag{1}\]

where \(I_{in}\) is the input current amplitude of the AQFP buffer, which is accumulated through the whole column in the crossbar synapse array, \(\Delta I_{in}\) is the width of the "gray-zone", \(I_{th}\) is the current threshold, which can be adjusted manually, and \(\mathrm{erf}(\cdot)\) is the error function.

Figure 3: AQFP-based crossbar synapse array circuit architecture.

Figure 4: The relationship between the output probability of '1' and the input current of the AQFP buffer.

To better explore and quantify the impact of this randomized behavior on the BNN, we conduct an analysis of the crossbar current attenuation. For the input of the crossbar synapse array, we use \(+70\mu A\) and \(-70\mu A\) to represent the values \(+1\) and \(-1\), respectively. Since the currents are added together (merged) in an analog manner via superconductive inductance, the merged current amplitude inevitably attenuates as more inputs in the merging circuits bring larger inductance. As shown in Fig. 5, we measure the degree of current attenuation under different crossbar synapse array sizes. Consistent with the rationale of current attenuation, the amplitude of the output current decreases as the crossbar size increases. We then generate a corresponding mathematical fitting curve, which can be expressed in the form:

\[I_{1}(C_{s})=A\cdot C_{s}^{-B}, \tag{2}\]

where \(I_{1}\) is the output current amplitude representing the value of 1, \(C_{s}\) is the size of the crossbar synapse array, and \(A\) and \(B\) are positive fitting constants. In consequence, the current amplitude representing the logic state '1' in the neural network varies according to the size of the crossbar synapse array. We need to determine the relationship between the output current amplitude and the represented value, and convert the current amplitude into the specific value in the intermediate feature map of the neural network. To this end, we convert the probability in Equation (1) into its DNN-value version:

\[P_{v}(V_{in})=0.5+0.5\,\mathrm{erf}\left(\sqrt{\pi}\frac{(V_{in}-V_{th})}{\Delta V_{in}(C_{s})}\right), \tag{3}\]

where \(V_{in}\) is the mathematical value converted from the input current of the AQFP buffer, and \(V_{th}\) and \(\Delta V_{in}(C_{s})\) are the counterparts of \(I_{th}\) and \(\Delta I_{in}\), respectively. \(\Delta V_{in}(C_{s})\) can be presented as follows:

\[\Delta V_{in}(C_{s})=\Delta I_{in}/I_{1}(C_{s}). \tag{4}\]

With the DNN-value version of the probability expression, we make it possible to consider the AQFP randomized behavior in the BNN training process.

### Stochastic Computing-based Accumulation Module Design

Although the randomized behavior of the AQFP buffer is not an ideal property for the neuron circuit design, it also makes the AQFP buffer inherently compatible with the stochastic computing (SC) technique. In SC, the stochastic number (SN) is used to represent the value of a number and consists of a time-independent bit sequence (as introduced in Section 2.3). Since the randomized behavior that appears in the AQFP buffer yields an output probability that depends on the input current amplitude, it can provide a sufficient supply of SNs through a certain observation window with almost no hardware overhead.
For example, as shown in Figure 6 (a), in each clock cycle/phase, an AQFP buffer in the neuron circuit generates a 1-bit output with a probability of '1' or '0' depending on the accumulated current from the corresponding crossbar column. Using the 1-bit output directly carries a higher risk of being affected by the randomized behavior of the AQFP buffer. However, if we allow a longer observation window for the output of the neuron circuit while keeping the input of the crossbar unchanged, we obtain an output bit-stream, which is naturally a stochastic number thanks to the true-randomness property of the AQFP buffer [68, 29]. Limited by the crossbar current attenuation property and the hardware manufacturing constraints, the crossbar size cannot be arbitrarily large. Therefore, multiple crossbars are needed to accommodate all the weights of the same BNN layer. To keep the convolution computation correct, the SNs from the same column of different crossbars must be accumulated, and we propose our SC-based accumulation module for this inter-crossbar accumulation. As shown in Figure 6 (b), we choose approximate parallel counters (APCs) [41] to perform the SN accumulation between different crossbars. The APC counts the number of 1s in the inputs and represents the result with a binary number. This method consumes fewer logic gates compared with the conventional accumulative parallel counter [41, 53]. The APC is followed by a comparator that acts as a step function to generate the 1-bit feature map/activation of the BNN. Note that all the logic cells/circuits, such as the APCs and comparators, are designed based on the AQFP standard cell library, which consists of all the AQFP logic gates, including AND, OR, buffer, inverter, majority, splitter, and read-out interfaces. The binary feature maps and activations are represented by positive and negative currents, so that they can be directly used as the inputs of the crossbars for the next level of computation. In general, a larger SN length results in higher SC accuracy, but at the cost of longer computation clock cycles/phases. In our work, we also make the SC bit-stream length one of the dimensions in our algorithm-hardware co-optimization (more details in Section 5.4.2). By incorporating SC and using our SC-based accumulation module, the possible accuracy loss introduced by the AQFP neuron circuit can be efficiently and effectively mitigated. To give a better understanding of where each BNN computational block is implemented, we show the overall matching graph in Fig. 7. The weight matrices in the BNN blocks are separated and pre-stored in the AQFP crossbars. The batch normalization is directly converted and matched into the neuron circuits after each crossbar without additional cost (refer to Section 5.2). The SC-based accumulation module is used to collect the output of each neuron circuit and generate the intermediate result of each BNN block. The data representations are marked for the corresponding data flows, e.g., analog, stochastic computing stream, and digital.

Figure 5: (a) Schematic of the analog accumulation circuit. (b) Current attenuation curve: the relationship between the output current and the crossbar synapse array size.

Figure 6: (a) Converting intermediate results to stochastic numbers through a certain observation window. (b) Architecture design of the stochastic computing-based accumulation module.
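To summarize Sections 4.2-4.3 operationally, the sketch below is a minimal behavioral simulation in Python of one binarized activation: Eq. (3) gives the buffer's output probability, an observation window turns the probabilistic outputs of each crossbar column into a stochastic number, and an APC-plus-comparator stage accumulates across crossbars. This is our illustration only; the fitting constants `A`, `B` and the gray-zone width are placeholder values, not the calibrated ones from the paper.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(42)

A, B = 70.0, 0.5           # placeholder fitting constants for Eq. (2)
DELTA_I = 2.4              # placeholder "gray-zone" width in uA

def i1(cs):
    """Eq. (2): attenuated output current amplitude representing the value 1."""
    return A * cs ** (-B)

def p_one(v_in, cs, v_th=0.0):
    """Eq. (3): probability that the buffer outputs '1' for the
    value-domain input v_in on a column of a size-cs crossbar."""
    dv = DELTA_I / i1(cs)                        # Eq. (4)
    return 0.5 + 0.5 * erf(sqrt(pi) * (v_in - v_th) / dv)

def column_stream(a_b, w_b, cs, window):
    """One crossbar column: XNOR products summed in the value domain,
    then the buffer is sampled `window` times -> a stochastic number."""
    v_in = int(np.sum(a_b * w_b))                # analog accumulation (value domain)
    return rng.random(window) < p_one(v_in, cs)  # randomized buffer outputs

def apc_binarize(streams, window):
    """APC + comparator: count 1s over all crossbars and cycles, then
    binarize by the sign of the implied bipolar sum."""
    ones = sum(int(np.count_nonzero(s)) for s in streams)
    return 1 if 2 * ones >= len(streams) * window else -1

cs, window = 16, 32                              # window of 16~32, cf. Section 5.4.1
a = rng.choice([-1, 1], size=(2, cs))            # activation slices on 2 crossbars
w = rng.choice([-1, 1], size=(2, cs))            # matching weight slices
streams = [column_stream(a[k], w[k], cs, window) for k in range(2)]
print("binarized activation:", apc_binarize(streams, window))
```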
### Clocking Scheme Adjustment-based Circuit Optimization

In AQFP, all logic gates are synchronized by a multi-phase clock, facilitating data propagation between adjacent logic stages during a sufficient overlapping window of their respective clock phases. This distinctive characteristic necessitates a minimum of a 3-phase clock system. Current AQFP designs commonly employ a 4-phase clocking system, as it simplifies the testing process: a 4-phase clock can be easily generated using a 2-phase AC excitation with a 90-degree phase difference in conjunction with a DC offset. Due to the synchronization nature of AQFP, numerous buffers must be inserted to ensure that all logic paths are balanced, preventing possible data propagation failures caused by the non-overlap of adjacent logic stages in a typical 4-phase clocking scheme. However, increasing the number of clock phases for the computing part can significantly reduce the buffers required for path balancing [61], because clock-phase overlap exists not only between adjacent logic stages but also between non-adjacent stages. Our simulations indicate that the total Josephson junction (JJ) count can be reduced by at least 20.8% and 27.3%, assuming 8-phase and 16-phase clocking, respectively. On the other hand, the buffer-chain-based memory (BCM) employed in this study is realized in a fully balanced structure without any inserted buffers, and its clock is independent of the computing part. Thus, we propose an alternative approach that reduces the number of clock phases in the memory design from the original 4 phases to 3 phases, resulting in a 20% reduction in the total JJ count of the memory component. These simulations demonstrate the significant potential of clock-phase-adjustment-based component circuit optimization for enhancing the performance and efficiency of AQFP-based computing systems.

## 5 Algorithm and Hardware Co-Optimization

### Randomized-aware Binary Neural Network Training

Because of the randomized behavior of the AQFP buffer, training a BNN in the usual way leads to a significant performance mismatch between the pure software results and the actual implementation on hardware, resulting in severe accuracy degradation. To mitigate this issue, it is desirable to make the training process of the BNN randomness-aware. Given a DNN, for ease of representation, we simply denote its per-layer real-valued weights as \(w_{r}\) and the inputs as \(a_{r}\). Then, the convolutional result can be expressed as:

\[Y=\text{CONV}\left(a_{r},w_{r}\right), \tag{5}\]

where \(\text{CONV}\left(\cdot\right)\) represents the standard convolution. For simplicity, we omit the effect of stochastic computing and non-linear operations in this subsection. Binarized quantization aims to quantize the weights \(w_{r}\) and activations \(a_{r}\) to binarized levels, i.e., \(\left\{+1,-1\right\}\). Following XNOR-Net [58], given \(x_{r}\), the corresponding binarized value \(x_{b}\) can be obtained by the sign function:

\[x_{b}=\text{sign}\left(x_{r}\right)=\begin{cases}+1,&\text{if }x_{r}\geq 0,\\ -1,&\text{otherwise},\end{cases} \tag{6}\]

Taking the AQFP randomized behavior into consideration, each activation value is generated according to the value-domain probability function.
Different from conventional BNN quantization, the randomized activation \(a_{b}\) can be presented as:

\[a_{b}=\text{sign}\left(a_{r}\right)=\begin{cases}+1,&\text{with probability }P_{v}\left(a_{r}\right),\\ -1,&\text{with probability }1-P_{v}\left(a_{r}\right),\end{cases} \tag{7}\]

To mitigate the large quantization error in DNN binarization, XNOR-Net [58] applies two scaling factors for the quantized weights \(w_{b}\) and activations \(a_{b}\), respectively. Since weights and activations are multiplied in convolution layers, we can simplify these two scaling factors into one parameter, denoted as \(\alpha\). Then, the binary convolution operation can be formulated as:

\[Y_{b}=\text{BCONV}\left(a_{b},w_{b}\right)\odot\alpha, \tag{8}\]

where \(\text{BCONV}\left(\cdot\right)\) denotes the binary convolution, which includes the bit-wise XNOR operations, and \(\odot\) represents element-wise multiplication. Here \(\alpha\) is set to be a learnable vector that contains independent values for each output channel. For BNN training, the forward-propagation is expressed by Equation (8) with the binarized values \(w_{b}\) and \(a_{b}\), while the real-valued \(w_{r}\) and \(a_{r}\) are updated during the back-propagation. However, the gradient of the sign function is an impulse function that breaks the transitivity of the derivative, so the back-propagation cannot be processed directly. Following the STE [8], we compute the approximate gradient of the loss function \(L\) as follows:

\[\frac{\partial L}{\partial w_{r}}=\frac{\partial L}{\partial w_{b}}\cdot\frac{\partial w_{b}}{\partial w_{r}}\approx\frac{\partial L}{\partial w_{b}}, \tag{9}\]

For the gradient of the activations, since the AQFP probability function has already turned the sign function into the error function, we can leverage this characteristic to achieve the back-propagation instead of using a piece-wise polynomial function [49]. Using the expected value of \(a_{b}\) as the approximation, we obtain the AQFP randomness-aware back-propagation as follows:

\[\frac{\partial L}{\partial a_{r}}=\frac{\partial L}{\partial a_{b}}\cdot\frac{\partial a_{b}}{\partial a_{r}}\approx\frac{\partial L}{\partial a_{b}}\cdot\frac{\partial\mathbb{E}\left(a_{b}\right)}{\partial a_{r}}, \tag{10}\]

where \(\mathbb{E}\left(a_{b}\right)=\text{erf}\left(\sqrt{\pi}\frac{\left(a_{r}-V_{th}\right)}{\Delta V_{in}\left(C_{s}\right)}\right)\). In this way, we implement both the forward-propagation and the backward-propagation, which achieves AQFP randomness-aware training.

Figure 7: Computation matching from the software model (top) to the hardware architecture (bottom).

### Batch Normalization Matching

Batch normalization (BN) [36] is a DNN layer that normalizes the activation values in each mini-batch during training. Many neural networks use BN, since it is important in stabilizing and accelerating the training process. However, BN brings additional floating-point computation in the inference period, which causes inefficiency in the BNN implementation on AQFP devices. In this section, we propose an AQFP-aware BN matching technique. BN can be described by the following equation:

\[Y=\gamma\frac{X-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta \tag{11}\]

where \(X\) and \(Y\) are the input and output of BN, and \(\gamma\), \(\beta\), \(\mu\), and \(\sigma\) stand for the weight, bias, mean, and standard deviation, respectively. \(\epsilon\) is a small constant value that prevents a potential zero in the denominator.
\(\gamma\) and \(\beta\) are updated through back-propagation in the training process, while \(\mu\) and \(\sigma\) are updated using a moving average during training but fixed at inference. Note that BN in the inference process becomes a linear transformation, which makes it possible to convert BN into a simple addition operation in the BNN and match it onto the crossbar synapse array. As shown in Fig. 8 (a), we use a common BNN cell as an example. The data pass through a convolution layer followed by a BN layer and an activation layer (HardTanh), and then enter the binarization layer before reaching the next BNN cell. The values transferred before the binarization layer revert to floating-point values \(x_{r}^{bn}\) due to the existence of BN. Given the output values \(x_{r}^{conv}\) of the binary convolution layer, the output \(x_{r}^{bn}\) of the BN layer, and the scaling factor \(\alpha\), the BN can be rewritten as:

\[x_{r}^{bn}=\gamma\frac{x_{r}^{conv}\cdot\alpha-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta \tag{12}\]

The output \(x_{b}\) of the BNN cell can be indicated as:

\[x_{b}=\text{sign}(\text{HT}(x_{r}^{bn}))=\begin{cases}+1,&\text{if }x_{r}^{bn}\geq 0,\\ -1,&\text{otherwise},\end{cases} \tag{13}\]

where \(\text{HT}\) denotes the activation function HardTanh; since HardTanh preserves the sign of its input, it does not affect the binarization result. Combining Equation (12) with the AQFP probability function, the whole cell can be merged as follows. When \(\gamma>0\):

\[x_{b}=\begin{cases}+1,&\text{with probability }P_{v}\left(D\right),\\ -1,&\text{with probability }1-P_{v}\left(D\right),\end{cases} \tag{14}\]

When \(\gamma<0\):

\[x_{b}=\begin{cases}+1,&\text{with probability }1-P_{v}\left(D\right),\\ -1,&\text{with probability }P_{v}\left(D\right),\end{cases} \tag{15}\]

where \(D=x_{r}^{conv}+\frac{\beta\sqrt{\sigma^{2}+\epsilon}}{\gamma\alpha}-\frac{\mu}{\alpha}\).

Figure 8: BNN cell architecture. (a) Basic BNN convolution cell; (b) converted AQFP-based randomized BNN convolution cell.

Thus, we can achieve an activation format similar to Equation (7) by leveraging the current threshold mentioned in Equation (1) with the setting:

\[I_{th}=\left(-\frac{\beta\sqrt{\sigma^{2}+\epsilon}}{\gamma\cdot\alpha}+\frac{\mu}{\alpha}\right)\cdot I_{1}(C_{s}). \tag{16}\]

As shown in Fig. 8 (b), by adjusting the hardware configuration \(I_{th}\) in the AQFP probability function, the whole cell is converted into one randomized binary convolution layer without additional peripheral circuits. If the computation needs to be separated into multiple crossbars with stochastic computing, as shown in Fig. 6 (b), we can divide \(I_{th}\) evenly and assign the parts to the corresponding crossbars.

### Weight Rectified Clamp Method

As pointed out in [5, 75], the real-valued weights \(w_{r}\) of a quantized network roughly follow a zero-mean Laplace distribution due to their quantization in the forward propagation. Most weights gather around the distribution peak, while many outliers fall into the two tails, far away from the peak. These outliers adversely affect the training of a BNN and slow down its convergence: although the magnitudes of the weights are updated by gradient descent during back-propagation, the chances of changing their signs are extremely small, which limits the representational ability of BNNs [75].
To revive these outlier weights and promote BNN training performance, we apply the weight rectified clamp method following ReCU [75]:

\[\text{ReCU}(w_{r})=\max\left(\min\left(w_{r},Q_{(\tau)}\right),Q_{(1-\tau)}\right), \tag{17}\]

where \(Q_{(\tau)}\) and \(Q_{(1-\tau)}\) denote the \(\tau\) quantile and the \((1-\tau)\) quantile of the weights, respectively. As proved in ReCU [75], this technique moves the outlier weights towards the distribution peak, increasing the probability of changing their signs, which decreases the quantization error and promotes the representational ability of BNNs.

### Hardware Configuration Optimizations

In this section, we optimize the hardware configurations of the AQFP-based randomized BNN accelerator design, including the crossbar synapse array size, the stochastic computing bit-stream length, and the "gray-zone" width \(\Delta I_{in}\), by comprehensively considering power consumption, energy efficiency, and hardware computing error. The computing error mainly comes from two sources: the average mismatch error (AME), which comes from the output expectation bias of the AQFP buffer (see Section 5.4.2), and the stochastic computing error, which includes SN quantization error and random fluctuation [4, 56]. For simplicity, we omit the mathematical deduction of the latter error and use a series of comparative accuracy experiments to analyze it directly (see Section 5.4.1).

#### 5.4.1 Stochastic Computing Bit-stream Length Optimization

We take advantage of the probabilistic behavior of the AQFP buffer to achieve stochastic computing among multiple crossbar synapse arrays. In this process, the stochastic computing bit-stream length is a critical configuration that is closely related to model accuracy, inference latency, and power consumption. Generally, a large bit-stream length leads to better accuracy but suffers from longer inference latency and higher power consumption. To choose a proper bit-stream length, we conduct a series of experiments to explore its effect on model accuracy. Our general observation is that, as the SC bit-stream length increases (from 1), the model accuracy improves significantly at the beginning but stabilizes after the bit-stream length reaches 16\(\sim\)32. Therefore, using a bit-stream length longer than 32 does not yield a considerable gain in accuracy. Compared with pure stochastic computing work, which generally requires a rather large bit-stream length, e.g., 512 or 1024, to maintain the stability of computation, SupeRBNN benefits from its low bit-stream-length demand, thus achieving a fast computation speed. Detailed results can be found in Section 6.3.

#### 5.4.2 Optimization for the Width of the Gray-zone \(\Delta I_{in}\) and the Crossbar Size

Generally, given a crossbar size \(C_{s}\), for the stochastic computing of bipolar signals, the information carried in a stochastic bit-stream \(X\) is \(x=(2P(X=1)-1)\cdot C_{s}=2P(X=1)\cdot C_{s}-C_{s}\), where \(X\) is the stochastic bit-stream and \(x\) represents the real value associated with \(X\) (\(-C_{s}\leq x\leq+C_{s}\)). Since the AQFP buffer is used to generate the stochastic number with the probability \(P(X=1)=P_{s}(x)\), the expected value of the carried information, \(y=(2P_{s}(x)-1)\cdot C_{s}=\text{erf}\left(\sqrt{\pi}\frac{(x-V_{th})}{\Delta V_{in}(C_{s})}\right)\cdot C_{s}\), does not exactly match the real value \(x\). The nonlinear probability function of the AQFP buffer causes an expectation mismatch, which impacts the robustness and accuracy of the model, as the short numerical sketch below illustrates.
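The following is our own minimal illustration of this mismatch, evaluating \(y=\mathrm{erf}\left(\sqrt{\pi}x/\Delta V_{in}(C_{s})\right)\cdot C_{s}\) against the ideal value \(x\) for a few \((\Delta I_{in},C_{s})\) pairs with \(V_{th}=0\); the Eq. (2) fitting constants are placeholders, not the paper's fitted values.

```python
import numpy as np
from scipy.special import erf

A, B = 70.0, 0.5                         # placeholder Eq. (2) fitting constants

def expectation(x, delta_i, cs):
    """Expected SN value y for the ideal value x (bipolar SC, V_th = 0)."""
    dv = delta_i / (A * cs ** (-B))      # value-domain gray-zone, Eq. (4)
    return erf(np.sqrt(np.pi) * x / dv) * cs

for delta_i in (1.2, 2.4, 4.8):          # gray-zone widths in uA
    for cs in (8, 16, 36):               # crossbar sizes
        x = np.linspace(-cs, cs, 401)
        worst = np.max(np.abs(x - expectation(x, delta_i, cs)))
        print(f"delta_i={delta_i:3.1f} uA  cs={cs:3d}  worst |x - y| = {worst:6.2f}")
```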
We show more comparison results in Section 6. Considering the activation value distribution, the average mismatch error (AME) can be defined as:

\[\text{AME}=\frac{1}{C_{s}}\int_{-C_{s}}^{+C_{s}}f\left(x|C_{s}\right)\left(x-y\right)^{2}\text{ d}x, \tag{18}\]

where \(f\left(x|C_{s}\right)\) is the probability density function of the AQFP-buffer input value \(x\). Early works [5, 79] have shown that the real-valued weights and activations of a quantized model roughly follow a Gaussian distribution. Thus, \(f\left(x|C_{s}\right)\) can also be approximated as a Gaussian distribution related to \(C_{s}\), i.e., \(f\left(x|C_{s}\right)\sim N\left(C_{s}\mu,C_{s}\sigma^{2}\right)\). We optimize the related hardware configuration \(\Delta I_{in}\) and the crossbar size \(C_{s}\) by minimizing the AME. Since \(C_{s}\) is highly related to hardware performance, we first constrain \(C_{s}\) to a range that meets the energy-efficiency demand, then adjust both \(C_{s}\) and \(\Delta I_{in}\) to find the local optimal solution within it. Related comparison experiments are presented in Section 6.

## 6 Experimental Results

In this section, we present optimizations of the AQFP hyperparameters along with comparison results with respect to model accuracy, power consumption, energy efficiency, etc. Finally, we perform thorough optimizations on the overall SupeRBNN to construct the AQFP-aware randomized BNN on both the MNIST and CIFAR-10 datasets, compared with multiple representative works based on different techniques, including the CMOS-based DDN [16] and SyncBNN [27], the ReRAM-based IMB [40], the RSFQ/ERSFQ-based JBNN [27], and the AQFP-based pure stochastic computing work SC-AQFP [13].

### Experiment Setup

The AQFP hardware implementation is achieved using a semi-automated design approach that targets the AIST 4-layer \(10\,\mathrm{kA/cm^{2}}\) niobium process (HSTP) [51]. Analog cells and circuits, such as AQFP neurons and merging circuits (analog accumulation), are manually designed at the Josephson-junction (JJ) level. This process takes into account device characteristics and is optimized with superconductor inductance extraction tools. In contrast, logic cells and circuits, such as logic-in-memory cells, stochastic accumulators (APCs), and comparators, are designed using the AQFP standard cell library. This library consists of all AQFP logic gates, including AND, OR, buffer, inverter, majority, splitter, and read-out interfaces. Figure 9 displays the microphotograph of a fabricated \(8\times 8\) AQFP crossbar block. The clock/excitation used to drive the entire circuit is a 4-phase sinusoidal current, achieving a \(5\,\mathrm{GHz}\) clock rate and a \(50\,\mathrm{ps}\) stage-to-stage delay. By introducing a delay-line (micro-stripline) based clocking scheme [31], the overall latency is further reduced; this approach effectively increases the total number of clock phases to 40 by delaying the sinusoidal current by \(5\,\mathrm{ps}\) between adjacent logic stages. Circuit-level verification is conducted using a modified version of the Josephson simulator Jsim [24], which accounts for thermal noise. The fabricated \(8\times 8\) AQFP crossbar block is further validated at \(4.2\,\mathrm{K}\) inside a liquid-helium Dewar, interfaced with a customized cryogenic probe, as illustrated on the right of Figure 9, which shows the block diagram of the tested system. The setup includes a chip bonded to a ceramic substrate and housed in a cryogenic probe for testing at \(4.2\,\mathrm{K}\).
Waveform generators provide the data and sinusoidal inputs, while DC voltage sources support the 4-phase clocking scheme. Low-noise differential amplifiers amplify the output signals for oscilloscope analysis. A 4-layer shield made of Permalloy effectively blocks the external magnetic field. A remote host manages all input-output operations for automated data handling. Except for the chip and probe, all equipment is at room temperature. Only low-speed module functionality (\(100\,\mathrm{kHz}\)) has been assessed so far, with high-speed tests planned. For the thorough optimizations on the CIFAR-10 dataset, SupeRBNN is trained from scratch and takes 600 epochs to perform the whole AQFP-aware randomized BNN training with a batch size of 256. The learning rate is initialized as 0.1 and decays with a cosine annealing schedule. SGD [59] is used as the optimizer in the training process. Additional training optimizations, such as warmup and the weight rectified clamp method, are performed during the training. The number of warmup epochs is 5, and following [75], we initialize the rectified clamp parameter \(\tau\) as 0.85 and then increase it gradually to a maximum of 0.99 during training.

### Hardware Results of the Proposed AQFP-based Crossbar Synapse Array

Since the crossbar synapse array size is a crucial hardware configuration that is highly related to energy efficiency, we first explore the relationship between them. As shown in Table 1, we present the hardware results of our proposed AQFP-based crossbar synapse array, including the latency, number of JJs, and energy dissipation (per clock cycle) for one crossbar synapse array of different sizes. As the crossbar area increases, all three hardware benchmarks increase, but with different growth trends. Given a number of JJs, we can obtain a range of crossbar sizes that meets our energy-efficiency requirement.

### Sensitivity Analysis of the Relationship between SC Bit-stream Length and Accuracy

To choose a proper bit-stream length, we conduct a series of experiments to explore its effect on model accuracy. As shown in Fig. 10, we use VGG-small trained on CIFAR-10 as an example, with four different crossbar sizes incorporated in the comparison. We observe that, as the SC bit-stream length increases, the model accuracy improves considerably at the beginning but stabilizes after the SC bit-stream length reaches 16\(\sim\)32. Increasing the SC bit-stream length beyond 32 yields no considerable accuracy improvement but results in a longer computing time.

### The Overall Comparison among Different \(\Delta I_{in}\) and Crossbar Size Configurations

Here, we conduct a series of experiments to validate our methodology. VGG-small is trained on the CIFAR-10 dataset for these experiments.

\begin{table} \begin{tabular}{c|c c c} \hline \hline **Crossbar Area** & **Latency (ps)** & **\#JJs** & **Energy Dissipation (aJ)** \\ \hline 4\(\times\)4 & 60 & 384 & 1.92 \\ 8\(\times\)8 & 120 & 1152 & 5.76 \\ 16\(\times\)16 & 240 & 3840 & 19.20 \\ 18\(\times\)18 & 270 & 4752 & 23.76 \\ 36\(\times\)36 & 540 & 17280 & 86.4 \\ 72\(\times\)72 & 1080 & 65664 & 328.32 \\ 144\(\times\)144 & 2160 & 255744 & 1278.72 \\ \hline \hline \end{tabular} \end{table} Table 1: Circuit latency, JJ count, and energy dissipation under different crossbar sizes.

Figure 9: Module validation setup. Left: Die micrograph of a prototype 8\(\times\)8 crossbar. Right: Block diagram of the tested system.
Using a bit-stream length of 1 as an example, the overall accuracy comparison among different \(\Delta I_{in}\) and crossbar sizes is shown in Figure 11, where the x-axis, y-axis, and z-axis represent the values of \(\Delta I_{in}\), the crossbar size \(C_{s}\), and the model accuracy, respectively. As we can see, the accuracy distribution exhibits a close relationship to both \(\Delta I_{in}\) and \(C_{s}\): the growth trend of accuracy with respect to one of the configurations changes substantially when the other one is modified. This behavior produces multiple accuracy peaks within the whole accuracy distribution, which matches what we predicted in Section 5.4. Using hardware benchmarks, e.g., energy consumption and efficiency, to constrain the crossbar size, a comprehensive optimization can be conducted as described in Section 5.4 within the target distribution area to find the local optimal solution.

### Device-Level Comparison with the Cryogenic CMOS Technique

In addition to superconducting devices, there has been notable investigation into cryogenic devices based on CMOS technology, which present themselves as a viable alternative solution. These cryo-CMOS devices offer the potential to enhance the energy efficiency of computer systems by capitalizing on diminished leakage currents and wire latency. A variety of endeavors have been undertaken in the realm of cryogenic CMOS-based research to bolster the overall performance metrics of hardware infrastructure [1, 7, 52, 63, 55, 62]. In the modern landscape of cryogenic computing, a prevalent objective encompasses the attainment of two distinct low-temperature thresholds, 77K and 4K, achieved by applying liquid nitrogen (LN) and liquid helium (LHe), respectively. Unlike superconducting computation, which operates at the 4K temperature level, the 77K level is more actively considered for cryogenic CMOS-based designs in order to save cooling consumption. According to [1, 7, 52, 63, 55, 62], for 77K, the cooling consumption is approximately 9.65 times the device consumption, and 77K cryo-CMOS can achieve about 1.5 times the energy efficiency of conventional room-temperature CMOS. According to our device-level simulation, we observe that a lower frequency can generally achieve higher energy efficiency. To make a comprehensive comparison, we test our AQFP-based device under different frequencies from 0.1GHz to 10.0GHz. CMOS-BNN (1.4MHz, 622MHz) [42], HERMES (1GHz) [39], CryoBNN (2.24GHz) [27], and their corresponding cryo-CMOS counterparts are incorporated in the comparison, as shown in Fig. 12. We consider the energy efficiency both with and without cooling consumption for cryo-CMOS and for our AQFP framework. As illustrated in Fig. 12, in contrast to cryo-CMOS, our approach consistently attains approximately four orders of magnitude higher energy efficiency when solely accounting for device consumption, and achieves a notable enhancement of two to three orders of magnitude in energy efficiency when factoring in cooling consumption.
### Optimization Result

As shown in Table 2, SupeRBNN optimizes the model accuracy according to the given energy-efficiency constraints. For CIFAR-10, we provide the results compared with DDN [16], CMOS-BNN [42], IMB [40], and STT-BNN [54]. DDN is a representative CMOS-based digital accelerator. CMOS-BNN is a BNN accelerator based on 10-nm FinFET CMOS running at a low frequency, 13MHz (and thus with a relatively high energy efficiency). IMB uses a resistive memory crossbar array (RCA) with an RRAM architecture to implement BNN computation. STT-BNN combines spin-transfer torque magnetoresistive random-access memory (MRAM) with BNNs to improve energy efficiency. Besides these works, recent work [10] explores low-temperature CMOS (77K), which may potentially achieve 1.5 times better overall energy efficiency compared with conventional room-temperature CMOS. For the full-precision VGG-small model, DDN achieves 92.5% top-1 accuracy with 0.45 TOPS/W energy efficiency. Our SupeRBNN achieves \(4.2\times 10^{5}\) times better energy efficiency with a similar accuracy level (91.7% on VGG-small and 92.2% on ResNet-18). Compared with IMB with a BNN model, SupeRBNN runs at a much higher frequency (5GHz) and achieves \(7.8\times 10^{4}\) times higher energy efficiency with similar model accuracy. When we loosen the efficiency constraint, SupeRBNN can achieve 91.7% and 92.2% top-1 model accuracy on VGG-small and ResNet-18, respectively, with an energy efficiency of \(1.9\times 10^{5}\) TOPS/W. The cooling cost for typical superconducting digital circuits is about 400\(\times\) the chip power dissipation [34]. Even considering the cryogenic energy, SupeRBNN still shows 205.8\(\times\) higher energy efficiency compared to IMB at the same level of accuracy.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline **Design** & **Scheme** & **Accuracy** & **Energy Efficiency W/O Cooling (TOPS/W)** & **Energy Efficiency W/ Cooling (TOPS/W)** & **Power (mW)** & **Throughput (images/ms)** \\ \hline DDN (VGG-Small) [16] & Full-precision & 92.5 & 0.28 & - & - & - \\ IMB [40] & Binary & 87.7 & 82.6 & - & 12.5 & 1.3 \\ STT-BNN [54] & Binary & 80.1 & 311 & - & - & - \\ CMOS-BNN [42] & Binary & 92.0 & 617 & - & - & - \\ Ours (VGG-Small) & Binary & 91.7 & \(1.9\times 10^{5}\) & \(4.8\times 10^{2}\) & \(6.2\times 10^{-3}\) & 2.0 \\ Ours (VGG-Small) & Binary & 90.6 & \(3.8\times 10^{5}\) & \(9.5\times 10^{2}\) & \(6.3\times 10^{-3}\) & 3.9 \\ Ours (VGG-Small) & Binary & 89.2 & \(1.5\times 10^{6}\) & \(3.8\times 10^{3}\) & \(6.4\times 10^{-3}\) & 15.2 \\ Ours (VGG-Small) & Binary & 87.4 & \(6.8\times 10^{6}\) & \(1.7\times 10^{4}\) & \(7.6\times 10^{-3}\) & 47.4 \\ Ours (ResNet-18) & Binary & 92.2 & \(1.9\times 10^{5}\) & \(4.8\times 10^{2}\) & \(6.2\times 10^{-3}\) & 2.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Model accuracy on the CIFAR-10 dataset under different energy-efficiency constraints. CMOS-BNN [42] has a lower frequency of 13MHz and thus a relatively higher energy efficiency compared with other CMOS-based work.

Figure 11: Accuracy distribution over the two dimensions of gray-zone \(\Delta I_{in}\) and crossbar size. The stochastic bit-stream length used here is 1.

Figure 10: Relationship between the SC bit-stream length and model accuracy. VGG-small trained on CIFAR-10 with four different crossbar sizes is deployed in the comparison. \(\Delta I_{in}\) is set to 2.4 \(\mu A\) in this experiment.
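As a quick sanity check, the numbers in Table 2 are self-consistent with the quoted 400\(\times\) cryocooler overhead [34]; the short script below (ours, for illustration) reproduces both the "W/ Cooling" column and the 205.8\(\times\) advantage over IMB from the 87.4%-accuracy row.

```python
# Effective efficiency of a superconducting design once the ~400x
# cryocooler overhead [34] is charged against the chip power.
COOLING_FACTOR = 400.0

ours_wo_cooling = 6.8e6   # Ours (VGG-Small, 87.4%), TOPS/W w/o cooling (Table 2)
imb = 82.6                # IMB [40] at 87.7% accuracy, TOPS/W

ours_w_cooling = ours_wo_cooling / COOLING_FACTOR      # -> 1.7e4 TOPS/W
print(f"w/ cooling: {ours_w_cooling:.3g} TOPS/W")
print(f"advantage over IMB: {ours_w_cooling / imb:.1f}x")  # -> 205.8x
```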
To compare with JBNN [27] and SC-AQFP [13], we test our approach on the MNIST dataset. As shown in Table 3, SyncBNN, RSFQ, and ERSFQ are the CMOS-based, RSFQ-based, and ERSFQ-based BNN accelerators from the JBNN paper [27]. SC-AQFP is the AQFP-based pure stochastic computing accelerator. We use the same model architecture (MLP) as shown in JBNN [27]. With the same BNN model, whether considering cooling energy or not, our approach consistently achieves two to four orders of magnitude better energy efficiency compared with the CMOS-based, RSFQ-based, and ERSFQ-based accelerators at similar accuracy. Compared with SC-AQFP, which performs pure stochastic computing on AQFP devices, our approach achieves 153\(\times\) better energy efficiency in both the cooling and non-cooling settings, with 2% better top-1 accuracy.

## 7 Discussion

In addition to AI-focused accelerators, the proposed AQFP technology can also be employed for conventional general-purpose computing to cater to a variety of application scenarios. The AQFP technology boasts a standard cell library designed for different manufacturing processes, such as the Japan AIST 4-layer process HSTP [51] and the US MIT-LL 8-layer Nb process SFQ5ee [72], presenting a rich assortment of over 80 cells [26]. This includes 3- and 5-input logic gates, signal-driving boosters, and refined interfaces across various superconducting logic families. A comprehensive EDA toolchain, from logic synthesis to placement and routing, has been developed specifically for this standard cell library. Digital modelling and a synthesis flow for cell-based AQFP structural circuit generation were proposed in 2017, which can be seen as the earliest attempt towards AQFP design automation [74]. This synthesis flow has been further tailored to support more AQFP features by different research groups in [28, 35, 71]. For placement and routing, T. Tanaka et al. proposed a framework using a genetic algorithm (GA) for placement and a left-edge channel routing scheme in 2019 [70], whereas Y. Chang et al. proposed another framework adopting a learning-based placer to minimize the runtime overhead in 2020 [14]. H. Li et al. have developed a different tool using a negotiation-based A* router, targeting processes that allow multiple routable metal layers [44]. System-level performance analysis has been successfully conducted using the aforementioned EDA framework [15]. These results ensure that AQFP technology is compatible with conventional logic and memory design, including but not limited to microprocessors, register files, and random-access memory. Moreover, by amplifying superconducting signals to voltage levels, specially designed on-chip interfaces between AQFP and conventional CMOS technologies have been implemented and demonstrated [65]. This paves the way for the system to be employed in broader applications, including supercomputing, cloud computing, and secure computing.

Figure 12: Comparison with room/low-temperature CMOS techniques according to energy efficiency and frequency. Among them, the Cryo-CMOS counterpart results of CMOS-BNN [42] and HERMES [39] are based on estimation; the cryogenic result of CryoBNN is from [27].

## 8 Conclusion
\begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Design**} & \multirow{2}{*}{**Accuracy**} & \multicolumn{2}{c}{**Energy Efficiency (TOPS/W)**} \\ & & W/O Cooling & W/ Cooling \\ \hline SyncBNN [27] & 98.4 & 36.6 & 36.6 \\ RSFQ [27] & 97.9 & \(2.4\times 10^{3}\) & 8.1 \\ ERSFQ [27] & 97.9 & \(1.5\times 10^{4}\) & 50.0 \\ SC-AQFP [13] & 96.9 & \(9.8\times 10^{3}\) & 24.5 \\ Ours & 98.1 & \(1.5\times 10^{6}\) & \(3.8\times 10^{3}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with RSFQ-JBNN, ERSFQ-JBNN, CMOS-based SyncBNN, SC-AQFP, and our implementation (MLP) on the MNIST dataset.

In this paper, we first analyze the randomized behavior of the AQFP buffer and the current-attenuation feature of AQFP, and then propose a randomization-aware BNN training algorithm that effectively integrates the randomized behavior into the BNN training process. To solve the intermediate-result accumulation problem while preserving model accuracy, we convert the randomized output of the neuron circuit into the stochastic computing domain and propose a novel stochastic computing-based accumulation module. Finally, we propose an algorithm-hardware co-optimization method and batch normalization matching to close the gap between software and hardware. A clocking-scheme-adjustment-based circuit optimization is also applied to improve the overall performance. Through this algorithm-hardware co-optimization, the hardware configurations of our AQFP-based randomized BNN accelerator, including the crossbar synapse array size, the stochastic computing bit-stream length, and the "gray-zone" width of the AQFP buffer, are jointly optimized by comprehensively considering power consumption, energy efficiency, and hardware computing error.

## Acknowledgments

This study was supported by the JST PRESTO Program (Grant No. JPMJPR19M7), the FOREST Program (Grant Number JPMJFR226W, Japan), and the NSF Expedition program CCF-2124453, NSF CCF-2008514.
2309.16048
Advancing Acoustic Howling Suppression through Recursive Training of Neural Networks
In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process. This framework integrates a neural network (NN) module into the closed-loop system during training with signals generated recursively on the fly to closely mimic the streaming process of acoustic howling suppression (AHS). The proposed recursive training strategy bridges the gap between training and real-world inference scenarios, marking a departure from previous NN-based methods that typically approach AHS as either noise suppression or acoustic echo cancellation. Within this framework, we explore two methodologies: one exclusively relying on NN and the other combining NN with the traditional Kalman filter. Additionally, we propose strategies, including howling detection and initialization using pre-trained offline models, to bolster trainability and expedite the training process. Experimental results validate that this framework offers a substantial improvement over previous methodologies for acoustic howling suppression.
Hao Zhang, Yixuan Zhang, Meng Yu, Dong Yu
2023-09-27T22:02:53Z
http://arxiv.org/abs/2309.16048v1
# Advancing Acoustic Howling Suppression Through Recursive Training of Neural Networks

###### Abstract

In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process. This framework integrates a neural network (NN) module into the closed-loop system during training with signals generated recursively on the fly to closely mimic the streaming process of acoustic howling suppression (AHS). The proposed recursive training strategy bridges the gap between training and real-world inference scenarios, marking a departure from previous NN-based methods that typically approach AHS as either noise suppression or acoustic echo cancellation. Within this framework, we explore two methodologies: one exclusively relying on NN and the other combining NN with the traditional Kalman filter. Additionally, we propose strategies, including howling detection and initialization using pre-trained offline models, to bolster trainability and expedite the training process. Experimental results validate that this framework offers a substantial improvement over previous methodologies for acoustic howling suppression.

Hao Zhang\({}^{1*}\), Yixuan Zhang\({}^{2*}\), Meng Yu\({}^{1}\), Dong Yu\({}^{1}\)\({}^{1}\)Tencent AI Lab, Bellevue, WA, USA \({}^{2}\)The Ohio State University, Columbus, OH, USA

Footnote †: Equal contributions by H. Zhang and Y. Zhang. This work was performed when Y. Zhang was an intern at Tencent AI Lab.

AHS, recursive training, Kalman filter, deep learning

## 1 Introduction

Acoustic howling is a phenomenon that stems from positive feedback within the audio system itself, often caused by the amplified sound output from the loudspeaker being picked up by the microphone and subsequently re-amplified [1, 2, 3]. This results in an uncontrolled positive feedback loop, leading to the undesirable amplification of specific frequency components and the generation of a sustained and unpleasant howling sound. It is commonly observed in systems like hearing aids, public address systems, and karaoke. The presence of howling not only poses a threat to the functionality of the equipment but also poses potential risks to the human auditory system. Many methods have been proposed for acoustic howling suppression (AHS), including gain control [4], frequency shift [5], notch filters [6, 7], and adaptive feedback cancellation (AFC) [8, 9, 7]. Among them, the AFC method employs adaptive filters like the Kalman filter [10] to estimate and cancel howling signals by continuously updating filter coefficients based on detected feedback, making it a powerful approach for AHS compared with other methods. However, AFC techniques are sensitive to control parameters and often fall short in feedback systems exhibiting nonlinear distortions. Acoustic howling is similar to acoustic echo, since both arise from feedback in communication systems, and mishandling acoustic echo can lead to howling. Deep learning has demonstrated impressive performance in tackling acoustic echo problems [11, 12] and has recently emerged as a promising solution for AHS tasks. Chen et al. [13] introduced a deep learning method for howling detection. Later, two deep learning-based approaches for AHS were introduced: howling noise suppression [14] and deep marginal feedback cancellation (DeepMFC) [15].

Figure 1: Configuration of an AHS system.
These methods treat AHS as a noise suppression task, training neural network (NN) modules to estimate target speech directly from offline-generated microphone signals without incorporating AHS processing. A distinct approach, known as DeepAHS [16], leverages teacher-forcing learning and displays superior performance compared to previous methods. Building upon DeepAHS, HybridAHS [17] further enhances howling suppression by incorporating the output of a Kalman filter as an additional input for model training. Despite these strides, current methods all adhere to offline-generated signal training, leading to a mismatch between training and real-time inference that ultimately curtails their effectiveness. This paper introduces an innovative training approach for acoustic howling suppression by implementing recursive training of a neural network. Specifically, we utilize a recurrent neural network with a long short-term memory (LSTM) architecture [18] as the NN module and integrate it into the closed-loop system of the howling suppression setup for frame-by-frame processing of the microphone signal. To achieve howling suppression, the NN module is trained to estimate the target speech from the microphone signal using complex ratio mask (cRM) estimation [19]. Input signals for model training are generated recursively on the fly to emulate the fundamental process of acoustic howling formation, i.e., the output of the NN at each frame is passed through the closed loop and subsequently fed back to generate the following input frames. To fully leverage the benefits of recursive training, we utilize either the previously processed loudspeaker signal or the output of the Kalman filter as the reference signal for training the NN module. Our study offers three main contributions. Firstly, recursive training of the neural network effectively eliminates the mismatch limitations observed in previous NN-based AHS methods, leading to enhanced performance and increased robustness. Secondly, we explore two configurations within this framework: a pure NN-based method and a hybrid approach that combines NN with a Kalman filter, providing flexibility for the design of NN-based AHS systems. Thirdly, we employ strategies such as howling detection and initialization using pre-trained models to facilitate convergence of recursive training. The remainder of this paper is organized as follows. Section 2 introduces the acoustic howling problem. Section 3 presents the proposed method. The experimental setup and results are described in Sections 4 and 5, respectively. Section 6 concludes the paper.

## 2 Acoustic howling suppression

### Problem formulation of acoustic howling

Let us consider a single-channel acoustic amplification system as shown in Fig. 1. The target signal \(s(t)\) captured by the microphone is transmitted to the loudspeaker for acoustic amplification. The amplified signal \(x(t)\) is played out by the loudspeaker and arrives at the microphone as playback \(d(t)\); the corresponding microphone signal is a mixture of the target speech and the playback signal: \[y(t)=s(t)+x(t)*h(t) \tag{1}\] where \(*\) denotes linear convolution and \(h(t)\) represents the acoustic path from the loudspeaker to the microphone.
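To make the closed-loop formation concrete, the sketch below simulates Eq. (1) with the feedback path described in the introduction, where the loudspeaker replays the delayed, amplified microphone signal. The RIR, gain, and delay values are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def simulate_howling(s, h, G=2.0, delay=3200):
    """Sample-level closed-loop simulation: the loudspeaker signal is the
    microphone signal delayed by `delay` samples and amplified by G."""
    y = np.zeros_like(s)
    for t in range(len(s)):
        playback = 0.0
        for k in range(len(h)):              # d(t) = (x * h)(t)
            if t - delay - k >= 0:
                playback += h[k] * G * y[t - delay - k]
        y[t] = s[t] + playback               # Eq. (1): y(t) = s(t) + d(t)
    return y

fs = 16000
s = 0.1 * np.random.randn(2 * fs)            # stand-in for target speech
h = np.array([0.6, 0.3, 0.1])                # toy acoustic path
y = simulate_howling(s, h, G=2.0, delay=int(0.2 * fs))
# With loop gain G * sum(h) > 1, the energy of y grows with every pass
# through the loop, which is the runaway behavior an AHS module must stop.
```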
Without any AHS processing, the loudspeaker signal \(x(t)\) will be an amplified version of the previous microphone signal \(y(t-\Delta t)\) and will undergo repeated re-entry into the pickup, leading to the representation of the microphone signal as: \[y(t)=s(t)+[y(t-\Delta t)\cdot G]*h(t) \tag{2}\] where \(\Delta t\) indicates the system delay from the microphone to the loudspeaker, and \(G\) denotes the amplifier gain. With proper howling suppression, the AHS module will suppress feedback and output an estimate of the target signal, \(\hat{s}(t)\), and the corresponding microphone signal will be: \[y(t)=s(t)+[\hat{s}(t-\Delta t)\cdot G]*h(t) \tag{3}\] The recursive relationship between \(y(t)\) and \(y(t-\Delta t)\) and the possible leakage in \(\hat{s}(t-\Delta t)\) give rise to the re-amplification of the playback signal, creating a feedback loop that manifests as an unpleasant, high-pitched sound known as acoustic howling.

## 3 Proposed Method

### Model training

#### 3.2.1 NN Only Method

The NN-only method takes the frequency-domain microphone signal \(\mathbf{Y}_{m}\) and reference signal \(\mathbf{R}_{m}\) as input to obtain an estimate of the target signal \(\hat{\mathbf{S}}_{m}\), as described in Algorithm 1: \[\hat{\mathbf{S}}_{m}=\mathbb{N}\mathbb{N}(\mathbf{Y}_{m},\mathbf{R}_{m}) \tag{4}\] where \(m\) denotes the frame index, and the loudspeaker signal obtained in the previous frame, \(\mathbf{X}_{m-1}\), is used as the reference signal.

#### 3.2.2 Hybrid Method

The diagram of the hybrid method is shown in Fig. 2(b). It combines the NN with a traditional Kalman filter, where the Kalman module addresses howling suppression by modeling the acoustic path with an adaptive filter and then subtracting the corresponding estimated playback signal \(\hat{\mathbf{D}}_{m}\) from the microphone recording to obtain an error signal \(\mathbf{E}_{m}\): \[\hat{\mathbf{D}}_{m} =\mathbb{K}(\mathbf{Y}_{m},\mathbf{X}_{m-1}) \tag{5}\] \[\mathbf{E}_{m} =\mathbf{Y}_{m}-\hat{\mathbf{D}}_{m}\] The output of the Kalman filter is then used as the reference signal for training the NN module in the hybrid method to obtain an estimate of the target speech: \[\hat{\mathbf{S}}_{m}=\mathbb{N}\mathbb{N}(\mathbf{Y}_{m},\mathbf{E}_{m}) \tag{6}\] The proposed hybrid method can be viewed as a recursive training adaptation of HybridAHS [17]. Unlike HybridAHS, which uses pre-processed signals from the Kalman filter during offline training, our method integrates the NN module and the Kalman filter within the closed-loop system for frame-by-frame processing. This approach capitalizes on the strengths of both modules while effectively addressing the mismatch problem in HybridAHS.

#### 3.2.3 Network structure and loss function

We employ an LSTM for complex ratio mask estimation [19] in the proposed NN module. It is worth noting that our primary focus in this study is not to introduce new network architectures; instead, our proposed method offers flexibility in the choice of network structure. The LSTM network utilized in our implementation consists of two hidden layers, each with 300 units, resulting in 1.54 million trainable parameters. The input for model training is a concatenation of \([|\mathbf{Y}|,|\mathbf{R}|,\mathbf{Y}_{r},\mathbf{Y}_{i}]\), with the training target set as \([\mathbf{S}_{r},\mathbf{S}_{i}]\). Here \(|*|\), \(*_{r}\), and \(*_{i}\) represent the magnitude, real, and imaginary spectrograms of the corresponding frequency-domain signals. We use a frame size and frame shift of 8 ms and 4 ms, respectively.
All models are trained for 60 epochs with a batch size of 128. We use mean absolute error (MAE) of real and imaginary spectrograms at the utterance level as loss function for model training: \[Loss=\text{MAE}(\hat{\mathbf{S}}_{r},\mathbf{S}_{r})+\text{MAE}(\hat{ \mathbf{S}}_{i},\mathbf{S}_{i}) \tag{7}\] ### Convergence issue in recursive training Introducing the recursive training of NN for AHS poses challenges, particularly the difficulty in achieving convergence. The inherent recursive nature of howling generation can lead to signal accumulation and energy explosion, surpassing the maximum allowable value in Python and triggering "not a number" (NAN) warnings, hindering gradient calculations and model updates. This issue is especially prominent during batch training, where the convergence failure of one utterance affects the entire batch's loss value. To address this challenge, we propose two strategies: howling detection and initialization using pre-trained models. #### 3.3.1 Howling detection (HD) An effective strategy is to integrate howling detection into the training process. During recursive training, we continuously monitor the microphone signal for the presence of howling, identified by the amplitude of microphone signal consistently exceeding a threshold for 100 consecutive samples. Upon detection, further processing of the current utterance is halted, and only the already processed portion is used for loss calculation. Excluding the howling signal from further processing and loss calculation prevents potential NAN issue and minimizes its impact on the convergence of the NN module. #### 3.3.2 Initialization using pre-trained models The other strategy we proposed for enhancing trainability and expediting training involves utilizing a pre-trained offline model to initialize the NN parameters. Normally, the NN module's parameters are initialized randomly, which may not guarantee adequate howling suppression and can lead to severe howling and NAN warnings during the initial training phases. Despite the inevitable mismatches in the recursive inference scenarios, the offline pre-trained model still demonstrates superior howling suppression compared to randomly initialized NN modules. Adopting pre-trained offline models for NN parameter initialization addresses the NAN issue and ensures the convergence of model training. This approach can be seen as a form of recursive fine-tuning of the offline model. In our implementation, we employ the pre-trained HybridAHS [17] for NN initialization. ## 4 Experimental Setup ### Data Preparation The experiments are conducted using the AISHELL-2 dataset [20]. We generate 10,000 pairs of room impulse responses (RIRs) using the image method [21] with random room characteristics and reverberation times (RT60) randomly selected within the range of 0 to 0.6 seconds. Each RIR pair include RIRs for the near-end speaker and loudspeaker positions. Model training follows the steps outlined in Algorithm 1. During training, for each training sample, we randomly select a speech sample and a pair of RIRs for generating the target speech and playback signal. The system delay \(\Delta t\) is randomly generated within the range of 0.15 to 0.25 seconds, while the amplification gain \(G\) is selected randomly within the range of 1 to 3. The training, validation, and testing set we used includes 38,000, 1000, and 200 utterances, respectively. The testing data uses different utterances and RIRs compared to the training and validation data. 
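As a concrete reference for the detection rule in Section 3.3.1, a minimal sketch of the howling detector is given below. The amplitude threshold is an assumed placeholder, since its value is not specified in the text; only the 100-consecutive-sample rule comes from the paper.

```python
import numpy as np

def detect_howling(mic, threshold=0.99, run_length=100):
    """Return the first index where |mic| stays above `threshold` for
    `run_length` consecutive samples, or -1 if no howling is detected."""
    count = 0
    for i, v in enumerate(np.abs(mic)):
        count = count + 1 if v > threshold else 0
        if count >= run_length:
            return i - run_length + 1
    return -1

# During recursive training, processing of an utterance halts at this index,
# and only the already-processed portion contributes to the loss.
```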
### Evaluation Metrics

Two metrics, presented as mean \(\pm\) standard deviation, are used to evaluate AHS performance: signal-to-distortion ratio (SDR) [22] and perceptual evaluation of speech quality (PESQ) [23]. Given PESQ's insensitivity to scale, we emphasize SDR results to demonstrate the effectiveness of howling suppression, while relying on PESQ to assess speech quality.

## 5 Experimental Results

### Convergence explorations

We first evaluate the effectiveness of our proposed strategies in achieving convergence during recursive training. The loss values across the first few epochs are depicted in Fig. 3. Without the adoption of any strategies, convergence is not assured due to the aforementioned NAN issue. Incorporating the HD strategy successfully circumvents this problem, ensuring the model's trainability. Additionally, initializing the NN module using a pre-trained model (HybridAHS) leads to substantial improvements in the model's convergence.

### Howling suppression performance

We compare the proposed method with the Kalman filter and NN-based AHS methods across different amplification gain (\(G\)) levels and present the results in Table 1. The spectrograms of a test utterance assessed under moderate and severe howling scenarios are provided in Fig. 4. All NN-based methods share the same network architecture and dataset for a fair comparison. It is worth mentioning that the proposed recursive training requires longer training times but maintains the same inference times as the baseline methods using offline training. Without any howling suppression ("no AHS"), the microphone signal exhibits average SDR values below \(-30\) dB, indicating significant howling dominance and negligible discernible speech. Therefore, calculating PESQ values in this context becomes redundant. Utilizing the Kalman filter achieves notable howling suppression compared to the "no AHS" case. Moving to NN-based AHS methods (DeepMFC and HybridAHS) results in substantial enhancements in howling suppression. Our proposed approach consistently outperforms the baseline methods in terms of SDR, particularly at higher \(G\) levels. In terms of PESQ results, our methods demonstrate comparability to HybridAHS at lower \(G\) levels and outperform the baselines at higher \(G\) values. Notably, our hybrid method surpasses the NN-only approach in performance. We also implement the proposed hybrid method using ratio mask estimation (RM) for comparison. Incorporating complex-domain estimation effectively mitigates playback leakage in the enhanced signal, resulting in improved AHS performance, as also observed in Fig. 4(g) and (h). Additionally, our proposed recursive training approach shows lower standard deviations compared to the offline-trained baseline methods, indicating enhanced stability and consistent howling suppression.

## 6 Conclusion

In this study, we have introduced a recursive training approach for NN-based AHS, utilizing howling detection and pre-trained model initialization to improve model trainability. Our approach is implemented in both NN-only and hybrid configurations. The proposed method successfully addresses the mismatch issue observed in previous NN-based AHS techniques, resulting in superior howling suppression while preserving speech quality. This study makes notable contributions to the field of howling suppression. Future directions include exploring alternative recursive training strategies and extending our approach to multi-channel AHS.
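For reference, the SDR numbers in Table 1 follow the standard ratio of target energy to distortion energy. The sketch below is a simplified definition; the BSS Eval formulation [22] additionally decomposes the error into interference and artifact terms.

```python
import numpy as np

def sdr_db(target, estimate, eps=1e-12):
    """Simplified signal-to-distortion ratio in dB between the target s
    and the estimate s_hat; higher is better."""
    noise = target - estimate
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))
```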
\begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \hline Models & \multicolumn{4}{c|}{SDR (dB)} & \multicolumn{4}{c}{PESQ} \\ \hline G & 1.5 & 2 & 2.5 & 3 & 1.5 & 2 & 2.5 & 3 \\ \hline no AHS & \(-30.51\pm 7.23\) & \(-31.86\pm 5.66\) & \(-33.10\pm 3.96\) & \(-33.21\pm 3.94\) & – & – & – & – \\ Kalman filter [10] & \(-5.11\pm 13.20\) & \(-10.33\pm 14.84\) & \(-14.88\pm 15.14\) & \(-18.25\pm 14.77\) & \(1.94\pm 0.72\) & \(1.65\pm 0.73\) & \(1.44\pm 0.70\) & \(1.30\pm 0.64\) \\ DeepMFC [15] & \(-0.09\pm 6.50\) & \(-2.78\pm 9.44\) & \(-5.59\pm 11.40\) & \(-7.69\pm 12.26\) & \(2.11\pm 0.51\) & \(1.88\pm 0.59\) & \(1.70\pm 0.62\) & \(1.56\pm 0.59\) \\ HybridAHS [17] & \(2.96\pm 3.04\) & \(1.25\pm 5.79\) & \(-1.45\pm 9.60\) & \(-3.49\pm 10.90\) & \(2.57\pm 0.47\) & \(2.33\pm 0.53\) & \(2.22\pm 0.59\) & \(1.95\pm 0.62\) \\ \hline Proposed NN & \(3.70\pm 1.70\) & \(2.85\pm 1.45\) & \(2.34\pm 1.26\) & \(1.99\pm 1.05\) & \(2.50\pm 0.43\) & \(2.28\pm 0.39\) & \(2.12\pm 0.36\) & \(2.00\pm 0.34\) \\ Proposed Hybrid (RM) & \(2.95\pm 2.02\) & \(1.92\pm 1.70\) & \(1.28\pm 1.47\) & \(0.84\pm 1.30\) & \(2.56\pm 0.40\) & \(2.35\pm 0.36\) & \(2.21\pm 0.34\) & \(2.11\pm 0.32\) \\ Proposed Hybrid & \(\mathbf{3.87\pm 1.68}\) & \(\mathbf{3.04\pm 1.34}\) & \(\mathbf{2.49\pm 1.11}\) & \(\mathbf{2.11\pm 0.98}\) & \(\mathbf{2.60\pm 0.41}\) & \(\mathbf{2.40\pm 0.38}\) & \(\mathbf{2.25\pm 0.36}\) & \(\mathbf{2.13\pm 0.34}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Streaming evaluation of different methods for howling suppression (mean \(\pm\) standard deviation).

Figure 4: Spectrograms of: (a) target signal, (b) no AHS, (c) Kalman filter [10], (d) DeepMFC [15], (e) HybridAHS [17], (f) Proposed NN, (g) Proposed Hybrid (RM), and (h) Proposed Hybrid. (Demos are available at [https://yixuanz.github.io/AHS_2023_1](https://yixuanz.github.io/AHS_2023_1)).

Figure 3: Convergence exploration of the proposed method.
2309.15555
Low Latency of object detection for spiking neural network
Spiking Neural Networks, as a third-generation neural network, are well-suited for edge AI applications due to their binary spike nature. However, when it comes to complex tasks like object detection, SNNs often require a substantial number of time steps to achieve high performance. This limitation significantly hampers the widespread adoption of SNNs in latency-sensitive edge devices. In this paper, our focus is on generating highly accurate and low-latency SNNs specifically for object detection. Firstly, we systematically derive the conversion between SNNs and ANNs and analyze how to improve the consistency between them: improving the spike firing rate and reducing the quantization error. Then we propose a structural replacement, quantization of ANN activations, and a residual fix to alleviate the disparity. We evaluate our method on the challenging datasets MS COCO and PASCAL VOC, as well as on our spike dataset. The experimental results show that the proposed method achieves higher accuracy and lower latency compared to the previous work Spiking-YOLO. The advantages of SNNs in processing spike signals are also demonstrated.
Nemin Qiu, Chuang Zhu
2023-09-27T10:26:19Z
http://arxiv.org/abs/2309.15555v1
# Low Latency Spiking Neural Network for Object Detection

###### Abstract

Spiking Neural Networks (SNNs), as a third-generation neural network, are well-suited for edge AI applications due to their binary spike nature. However, when it comes to complex tasks like object detection, SNNs often require a substantial number of time steps to achieve high performance. This limitation significantly hampers the widespread adoption of SNNs in latency-sensitive edge devices. In this paper, our focus is on generating highly accurate and low-latency SNNs specifically for object detection. Firstly, we systematically derive the conversion between SNNs and ANNs and analyze how to improve the consistency between them: improving the spike firing rate and reducing the quantization error. Then we propose a structural replacement, quantization of ANN activations, and a residual fix to alleviate the disparity. We evaluate our method on the challenging datasets MS COCO and PASCAL VOC, as well as on our spike dataset. The experimental results show that the proposed method achieves higher accuracy and lower latency compared to the previous work Spiking-YOLO. The advantages of SNNs in processing spike signals are also demonstrated.

Nemin Qiu, Chuang Zhu\({}^{\star}\)School of Artificial Intelligence, Beijing University of Posts and Telecommunications. [email protected]

SNN; ANN-SNN conversion; Time steps; Low latency.

## 1 Introduction

Artificial neural networks have achieved great success in computer vision [1], natural language processing, and other domains. Despite these achievements, there still exists a fundamental difference between the operational mechanisms of artificial neural networks and human neural activity. Consequently, some researchers have begun studying neural networks that emulate the neural activity of the human brain. Spiking neural networks (SNNs) are considered the third generation of neural network models, utilizing simplified yet biologically realistic neuron models for computation. SNNs differ from traditional artificial neural networks, such as convolutional neural networks (CNNs), in that they transmit activation data between layers as sequences of binary spikes, following specific firing rules. SNNs significantly reduce computational resource requirements and effectively avoid excessive resource consumption [2]. As SNNs have demonstrated successful applications in edge AI [2], research in this field is gaining increasing attention from researchers. In general, there are two mainstream methodologies for developing deep supervised SNNs to date: directly training SNNs and converting ANNs into SNNs. However, directly trained SNNs generally do not achieve strong performance on relatively complex vision scenes and tasks [2]. For directly trained SNNs, on the one hand, the back-propagation algorithm cannot be directly applied, as spiking activation functions are inherently non-differentiable, making it hard to update the SNN weights well. This leads to difficulties in achieving satisfactory SNN performance on tasks with complex scenarios. On the other hand, directly trained SNNs usually use complex neuron models without optimizations specialized for storage and operation on binary events, which limits their practicality [2]. For converted SNNs, as they are transferred from a pre-trained ANN model, it is possible to make the SNN achieve performance close to that of the ANN.
However, in order to attain sufficient representation precision, a considerable number of time steps is usually required for a nearly lossless conversion, which is known as the accuracy-delay tradeoff. This tradeoff significantly restricts the practical application of SNNs. The consumption of a large number of time steps can result in significant delays in SNN inference, which is detrimental for certain real-time tasks, such as the object detection task emphasized in this paper. Recent works [3, 4] propose methods to alleviate this problem by exploiting the quantization and clipping properties of aggregated representations. However, these works primarily focus on the image classification task and overlook the impact of residual voltage and neuron firing rate on error propagation. There still exists a noticeable performance gap between ANNs and SNNs when it comes to low inference latency, and the underlying cause of this degradation remains unclear. In this work, we identify that the conversion error under low time steps mainly arises from a low spike firing rate, quantization error, and the misrepresentation of residual membrane potential. These factors accurately characterize the information loss between the input and output of spiking neurons with asynchronous spike firing. Inspired by these findings, we propose methods to address these issues, namely the low-spike-firing-rate layer replacement, quantization activation, and residual fix methods. By implementing these techniques, we generate an SNN for object detection that achieves remarkable performance with an extremely low inference delay. The main contributions of this work can be summarized as follows: * We describe the specific ANN-to-SNN conversion process, model the errors introduced during the conversion, and propose methods to reduce these errors. * We propose a scheme of layer replacement for low-spike-firing-rate layers and quantized ANN activations adapted to the conversion. In the first phase, SNN-unfriendly layers are replaced in the ANN before conversion, and Quant-ReLU functions are applied to fine-tune the ANN. In the second phase, a residual fix mechanism is used in the IF neurons. * We verify the effectiveness and efficiency of the proposed methods on MS COCO, PASCAL VOC, and our spike dataset. Experimental results show significant improvements in accuracy-latency tradeoffs compared to previous works. ## 2 Related Work Existing SNNs are generally divided into two fields of study: directly trained SNNs and converted SNNs [5]. For directly trained SNNs, unsupervised and supervised learning are both attractive research topics. On the one hand, for unsupervised learning, the mainstream method is the spike-timing-dependent plasticity (STDP) rule. STDP uses synaptic plasticity and spike activity to learn features of the input data, which is biologically plausible. On the other hand, supervised SNNs can achieve much better performance given a large amount of labeled training data. There have been successful attempts to introduce BP into SNN models, such as STBP, SLAYER, and BP-STDP, which achieve good performance on some simple cognitive tasks. ANN-SNN conversion is a burgeoning research area. Cao et al. [6] proposed an ANN-SNN conversion method that neglected biases and max-pooling. In follow-up work, Diehl et al. [7] proposed data-based normalization to improve the performance of deep SNNs. Rueckauer et al. [8] presented an implementation method for batch normalization and spike max-pooling.
Sengupta et al. [5] expanded conversion methods to VGG and residual architectures. Nonetheless, most previous works have been limited to the image classification task [8]. Kim et al. [2] presented Spiking-YOLO, the first SNN model that successfully performs object detection, achieving results comparable to those of the original DNNs on the non-trivial datasets PASCAL VOC and MS COCO. Ding et al. [4] presented the Rate Norm Layer to replace the ReLU function, obtaining the scale through a gradient-based algorithm. Conversion approaches have revealed their potential for achieving ANN-level performance in various tasks [2], and by taking advantage of the success of artificial neural networks, network conversion outperforms other methods without involving auxiliary computation power. However, converted SNN models suffer from efficiency problems: they require massive numbers of time steps to reach competitive performance [2], which makes them vulnerable to high inference latency, increased energy consumption, and long inference times [2]. Building on these previous efforts, we aim to minimize the ANN-to-SNN conversion error in complex visual scene tasks, achieving high-precision SNNs with ultra-low latency.

## 3 Preliminaries

In this section, we introduce the activation propagation rule of ANNs and the working principle of spiking neurons. **ANN:** Let \(x\) denote the activation. The relationship between the activations of two adjacent layers is as follows: \[x_{k}^{l}=f(\sum_{j}(w_{k,j}^{l}\cdot x_{j}^{l-1})+b_{k}^{l}), \tag{1}\] where \(w_{k,j}^{l}\) represents the weights, \(b_{k}^{l}\) represents the biases, and \(x_{k}^{l}\) is the activation of neuron \(k\) in layer \(l\); \(f(\cdot)\) is the activation function. **SNN:** Let us focus on the IF neuron model. Let \(U_{k}^{l}(t)\) denote a transient membrane potential increment of spiking neuron \(k\) in layer \(l\): \[U_{k}^{l}(t)=\sum_{j}(w_{k,j}^{l}\cdot\Theta_{t,j}^{l-1})+b_{k}^{l}, \tag{2}\] where \(\Theta_{t,k}^{l}\) denotes a step function indicating the occurrence of a spike at time \(t\): \[\Theta_{t,k}^{l}=\Theta(V_{k}^{l}(t-1)+U_{k}^{l}(t)-V_{k,th}), \tag{3}\] \[\mathrm{with}\ \Theta(x)=\left\{\begin{array}{ll}1,&x\geq 0\\ 0,&\mathrm{otherwise}\end{array}\right., \tag{4}\] The spiking neuron integrates inputs \(U_{k}^{l}(t)\) until the membrane potential \(V_{k}^{l}(t-1)\) exceeds the threshold \(V_{k,th}\) and a spike is generated. After a spike is fired at time \(t\), the membrane potential is reset. The formula for resetting \(V_{k}^{l}(t)\) is as follows: \[V_{k}^{l}(t)=V_{k}^{l}(t-1)+U_{k}^{l}(t)-V_{k,th}\Theta_{t,k}^{l}. \tag{5}\]

## 4 Method

To solve the problem of high SNN latency on the object detection task, this paper proposes a method to generate low-latency object detection SNNs. Figure 1 illustrates the overall flow of the generation. The 'unfriendly' modules in the ANN are first replaced, and then the BN layers are fused into the convolution or linear layers. After restructuring the network, we use Quant-ReLU instead of ReLU and apply quantization training to update the parameters. After training, the weights are converted using the conversion formula [8], the original activation function is replaced with IF neurons, and finally the residual fix is added to further reduce the conversion error.
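For reference, the IF dynamics of Eqs. (2)-(5) can be condensed into a short simulation. This is a minimal sketch of a single layer's neurons with the subtraction reset used throughout this paper, not the full detection pipeline; the `v_init` argument anticipates the residual fix discussed later.

```python
import torch

def if_neuron(inputs, v_th=1.0, v_init=0.0):
    """Integrate-and-fire over T time steps with subtraction reset.
    inputs: tensor of shape (T, ...) holding the increments U(t) of Eq. (2)."""
    v = torch.full_like(inputs[0], v_init)
    spikes = []
    for u in inputs:
        v = v + u                          # integrate the input current
        s = (v >= v_th).float()            # threshold crossing, Eqs. (3)-(4)
        v = v - v_th * s                   # subtraction reset, Eq. (5)
        spikes.append(s)
    spikes = torch.stack(spikes)
    return spikes, spikes.mean(dim=0)      # spike train and firing rate r(T)
```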
This section describes the improvements we propose based on our ANN-SNN conversion error modeling.

### ANN-SNN conversion error modeling

In this section, we introduce the proof of ANN-SNN conversion [8], and from it we analyze improvements that help reduce latency and improve post-conversion performance. To simplify the description, we assume that the time-step interval \(dt=1\) and that inferring an image takes \(T\) time steps. We define the firing rate of each SNN neuron as \(r_{k}^{l}(T)=N_{k}^{l}(T)/T\), where \(N_{k}^{l}(T)=\sum_{t=1}^{T}\Theta_{t,k}^{l}\) is the number of spikes generated. From this definition, it is clear that the firing rate of IF neurons satisfies \(r_{k}^{l}(T)\,\in\,[0,1]\). Moreover, the firing rate is discrete, with a resolution of 1/T. Assume that the initial membrane potential is zero, \(V_{k}^{l}(0)=0\). After accumulating T time steps, the membrane potential at any time point \(T\) is given as \(V_{k}^{l}(T)=\sum_{t=1}^{T}\,U_{k}^{l}(t)-V_{k,th}\cdot N_{k}^{l}(T)\). From this we can deduce \(N_{k}^{l}(T)=\lfloor\frac{\sum_{t=1}^{T}U_{k}^{l}(t)-V_{k}^{l}(T)}{V_{k,th}}\rfloor\), and then the firing rate \(r_{k}^{l}(T)\) is: \[r_{k}^{l}(T)=\frac{N_{k}^{l}(T)}{T}=\frac{\lfloor(\frac{\sum_{t=1}^{T}U_{k}^{l }(t)}{V_{k,th}\cdot T}-\frac{V_{k}^{l}(T)}{V_{k,th}\cdot T})\cdot T\rfloor}{T}. \tag{6}\] Since \(V_{k}^{l}(T)=\sum_{t=1}^{T}U_{k}^{l}(t)-V_{k,th}\cdot N_{k}^{l}(T)\) holds exactly, the argument of the floor is already an integer, and Eq. (6) simplifies to \[r_{k}^{l}(T)=\frac{N_{k}^{l}(T)}{T}=\frac{\sum_{t=1}^{T}U_{k}^{l}(t)}{V_{k,th} \cdot T}-\frac{V_{k}^{l}(T)}{V_{k,th}\cdot T}. \tag{7}\] Assume that the threshold \(V_{k,th}=1\). With this subtraction mechanism, Eq. (6) changes to: \[r_{k}^{l}(T)=\frac{\lfloor(\sum_{j}(w_{k,j}^{l}\cdot\frac{\sum_{t=1}^{T}\Theta _{t,j}^{l-1}}{T})+b_{k}^{l}-\frac{V_{k}^{l}(T)}{T})\cdot T\rfloor}{T}, \tag{8}\] \[r_{k}^{l}(T)=\frac{\lfloor(\sum_{j}(w_{k,j}^{l}\cdot r_{j}^{l-1}(T))+b_{k}^{l }-\frac{V_{k}^{l}(T)}{T})\cdot T\rfloor}{T}. \tag{9}\] Here we define an approximation of the firing rate, \(\widehat{r}_{k}^{l}(T)\): \[\widehat{r}_{k}^{l}(T)=\sum_{j}(w_{k,j}^{l}\cdot r_{j}^{l-1}(T))+b_{k}^{l}- \frac{V_{k}^{l}(T)}{T}. \tag{10}\] The relationship between the approximation and the true value of the firing rate is: \[r_{k}^{l}(T)=\frac{\lfloor(\widehat{r}_{k}^{l}(T))\cdot T\rfloor}{T}. \tag{11}\] **ANN to SNN**: The similarity between IF neurons and the ReLU activation function is an important basis on which ANNs can be converted to SNNs. The principle of ANN-SNN conversion is that the firing rates of spiking neurons \(r_{k}^{l}(T)\) should correlate with the original ANN activations \(x_{k}^{l}\) such that \(r_{k}^{l}(T)\)\(\rightarrow\)\(x_{k}^{l}\). Setting \(V_{k}^{l}(0)=0\) and \(V_{k,th}=1\), Figure 2 shows the correspondence between the output of the ReLU activation function in the ANN and the output firing rate in the SNN, with the inputs \(\hat{x}_{k}^{l}=\sum_{j}(w_{k,j}^{l}\cdot x_{j}^{l-1})+b_{k}^{l}\) and \(r_{k}^{l}(T)=\sum_{j}(w_{k,j}^{l}\cdot r_{j}^{l-1}(T))+b_{k}^{l}\).

Figure 1: Overall flow of the training phase and converting phase.

Figure 2: Firing rate of IF neuron and ReLU.

### Low spike firing rate layer replacement method

After the derivation in Section 4.1, we know that the purpose of the conversion is to make \(r_{k}^{l}(T)=\sum_{j}(w_{k,j}^{l}\cdot r_{j}^{l-1}(T))+b_{k}^{l}\). Let us define the remainder \(Re=\frac{V_{k}^{l}(T)}{T}\). When the number of time steps T is very large, the remainder \(Re=\frac{V_{k}^{l}(T)}{T}\approx 0\) and \(r_{k}^{l}(T)\approx\widehat{r}_{k}^{l}(T)\).
The other case is when the membrane potential \(V_{k}^{l}(T)\) is exactly 0 after the T time steps; in this case, \(Re=\frac{V_{k}^{l}(T)}{T}=0\). In addition, the relative error caused by the remainder is reduced if the first half of Eq. (10) has a larger value, which can be understood as \(r_{k}^{l}(T)\) having a larger spike firing rate. After satisfying the above conditions, we can conclude that: \[r_{k}^{l}(T)\approx\widehat{r}_{k}^{l}(T)\approx\sum_{j}(w_{k,j}^{l}\cdot r_{j}^ {l-1}(T))+b_{k}^{l}. \tag{12}\] From the above analysis, it is clear that a low spike firing rate introduces a large error into the ANN-SNN conversion. Previous works did not consider the impact that the modules in the network structure have on the ANN-SNN conversion. We counted the spike firing rate of each layer and found that the spike firing rate decreases severely in the max-pooling layers. However, the down-sampling operation inevitably causes a loss of spike information in the convolutional feature maps, which affects the detection accuracy. Down-sampling of spike information requires a more accurate information integration process. Considering this, we modified the original network structure by replacing these layers with downsampling convolutions and transposed convolutions. After the replacement and conversion, we found an improvement in the spike firing rate of neurons in these layers, and the time steps required to achieve the same accuracy as the original model become fewer.

### Quantization activation and residual fix methods

In addition to analyzing the spike firing rate of the network structure, we also find, according to Eq. (10) and Eq. (11), that there is a quantization error due to the gap between the two activation patterns during conversion: as Figure 3 shows, the activation resolution of the spiking neural network is 1/T (where T is the number of time steps). Obviously, reducing the gap between the expressions of the two neuron types helps to reduce the conversion error. For the SNN, we add a residual-fix setting: to address the residual error, we set a specific initial value for the membrane potential. This helps to reduce the difference between the activation forms of the IF neurons and the ReLU neurons in the ANN. For the ANN, we set the activation values of the ReLU function to be discrete, also with a resolution of 1/T. We therefore propose a strategy of activation substitution in the training phase, together with setting the initial membrane potential, to reduce the error between the two. Specifically, our scheme uses a Quant-ReLU activation (a quantization-clipping function) instead of the ReLU activation when training the ANN before conversion, continuously reducing the quantization error through powerful ANN training methods so that the activation form of the SNN is simulated in the training phase.
Finally, we need to convert the weights of the ANN according to the weight conversion formula (due to the characteristics of spiking neurons, the spiking frequency cannot be higher than one) [8]. After the weight conversion as mentioned in main paper, Eq. (1) changes to the following form: \[x_{k}^{l}=f(\sum_{j}(\hat{w}_{k,j}^{l}\cdot x_{j}^{l-1})+\hat{b}_{k}^{l}), \tag{13}\] After weight conversion, the range of \(x_{k}^{l}\) is [0,1]. According to the definition of ReLU activation function, Eq. (13) can then be changed to the following form: \[x_{k}^{l}=\sum_{j}(\hat{w}_{k,j}^{l}\cdot x_{j}^{l-1})+\hat{b}_{k}^{l}, \tag{14}\] After applying the weights to the SNN (\(\hat{w}_{k,j}^{l}{\rightarrow}w_{k,j}^{l}\) and \(\hat{b}_{k}^{l}{\rightarrow}b_{k}^{l}\)), Eq. (12) can then be changed to the following form: \[r_{k}^{l}(T)\approx\sum_{j}(\hat{w}_{k,j}^{l}\cdot r_{j}^{l-1}(T))+\hat{b}_{k} ^{l}. \tag{15}\] It can be seen that between two adjacent layers, SNN and ANN pass features with the almost same formula. ## 5 Experiment and Evaluation In this section, all experiments are performed on NVIDIA Tesla V100 32G GPUs and based on the Pytorch framework. Figure 3: The firing rate of the IF neuron and ANN ReLU. For the object detection task, we compare our low-latency SNN detection network with the previous work Spiking-YOLO [2]. Our experiment is tested on MS COCO and PASCAL VOC dataset. In addition we tested on our spike dataset. Our spike dataset is mainly for object detection of people and vehicles in some traffic scenarios. Our spike dataset involved in this work are captured using spiking cameras [9] or by encoding the video with spike encoder [2]. The detection results of our experiments are evaluated using mAP50(\(\%\)). Table 1 illustrates the significant time step savings of our method over Spiking-YOLO [2] on the dataset MS COCO, we only need 150 time steps to achieve comparable performance, and our method achieves a huge improvement (+10.54) in accuracy at 300 time steps. When comparing our method to STDP-Spiking [10], which utilizes direct training with a similar network structure, we have consistently demonstrated superior performance. Even when considering a time frame of 300 steps, our approach outperforms this method as well. Table 2 illustrates the significant time step savings of our method over Spiking-YOLO [2] on the dataset PASCAL VOC, we only need 150 time steps to achieve comparable performance, and our method achieves a considerable improvement (+2.37) in accuracy at 300 time steps. By using our method, SNN with better performance can be obtained. To summarize Table 1 and Table 2, the inclusion of residual fix in our approach yields a slight yet notable performance gain. Experiments show that on MS COCO and PASCAL VOC datasets, SNNs generated using our method can significantly reduce the time step required for inference, with our method requiring only 1/23 of the time step of spiking-yolo to achieve the same accuracy, and our method achieves good performance at 300 time steps. This highlights the effectiveness of our method in pushing the boundaries of SNN capabilities and achieving superior results in neural network applications. Table 3 shows the improvement of SNN compared to ANN for the same network structure on the spike dataset we produced. This dataset is composed of gray images, along with the corresponding spike data and annotation information. This demonstrates that SNNs may have an inherent advantage when processing spike signals. 
Figure 4 shows some visualization results of object detection using our spiking neural network, with images from both MS COCO and our spike dataset. In this experiment, we set all time steps to 300. Specifically, for MS COCO, we first convert the images to spikes, then use the SNN to perform detection and overlay the detection results on the original images. For our spike dataset, we perform detection directly on the spike data and visualize the results on the corresponding images.

\begin{table} \begin{tabular}{c c c c} \hline Method & Neural Model & Time Steps & mAP \\ \hline Spiking-YOLO [2] & IBT & 3500 & 25.66 \\ \hline STDP-Spiking [10] & IF & 300 & 26.8 \\ \hline Ours & IF w IMP & 100 & 14.8 \\ \hline Ours & IF w IMP & **150** & 26.3 \\ \hline Ours & IF w IMP & 200 & 31.5 \\ \hline Ours & IF w IMP & 250 & 34.2 \\ \hline Ours & IF w/o IMP & 300 & 35.7 \\ \hline Ours & IF w IMP & 300 & **36.2** \\ \hline \end{tabular} \end{table} Table 1: Comparison of the object detection performance between our method (Ours) and Spiking-YOLO [2] (using the IBT neuron [2]) on MS COCO. (w IMP or w/o IMP means the IF neuron of the SNN with or without the residual fix.)

\begin{table} \begin{tabular}{c c c c} \hline Method & Network & Time Steps & mAP \\ \hline ANN & Tiny-yolov3 & - & 51.6 \\ \hline Ours & SNN-Tiny-yolov3 & 64 & 45.0 \\ \hline Ours & SNN-Tiny-yolov3 & 128 & 50.9 \\ \hline Ours & SNN-Tiny-yolov3 & 256 & **52.4** \\ \hline \end{tabular} \end{table} Table 3: Comparison of the object detection performance between our method (Ours) and an ANN [1] on our dataset. (The input of the ANN is the spike signal reconstructed into gray images, and the input of the SNN is the spike signal.)

## 6 Conclusion

In this paper, we present a method for generating low-latency SNNs for the object detection task, enabling accurate conversion with low latency. We theoretically analyze the error of the ANN-SNN conversion process and illustrate the influence of the quantization error and the spike firing rate on accurate conversion. We then propose the low-spike-firing-rate layer replacement method, which significantly alleviates the firing-rate degradation caused by spike feature scale variation in SNNs, thus reducing the inference time steps required by the network. To address the quantization error, we propose the Quant-ReLU quantized activation function and the residual fix mechanism to alleviate the above problem. The experimental results show that the time steps required for SNN inference are greatly reduced after applying the above methods, substantially reducing the real-time delay of the detection network. Beyond the object detection task, the proposed methods are theoretically generalizable to other SNN tasks.
2308.16375
A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and the improvement in practical applications. However, many of these models prioritize high utility performance, such as accuracy, with a lack of privacy consideration, which is a major concern in modern society where privacy attacks are rampant. To address this issue, researchers have started to develop privacy-preserving GNNs. Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain. In this survey, we aim to address this gap by summarizing the attacks on graph data according to the targeted information, categorizing the privacy preservation techniques in GNNs, and reviewing the datasets and applications that could be used for analyzing/solving privacy issues in GNNs. We also outline potential directions for future research in order to build better privacy-preserving GNNs.
Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr
2023-08-31T00:31:08Z
http://arxiv.org/abs/2308.16375v3
# A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications

###### Abstract

Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and the improvement in practical applications. However, many of these models prioritize high utility performance, such as accuracy, with a lack of privacy consideration, which is a major concern in modern society where privacy attacks are rampant. To address this issue, researchers have started to develop privacy-preserving GNNs. Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain. In this survey, we aim to address this gap by summarizing the attacks on graph data according to the targeted information, categorizing the privacy preservation techniques in GNNs, and reviewing the datasets and applications that could be used for analyzing/solving privacy issues in GNNs. We also outline potential directions for future research in order to build better privacy-preserving GNNs.

Graph Neural Networks; Privacy Attacks; Privacy Preservation; Deep Learning on Graphs

## 1 Introduction

Graph-structured data, notable for its capacity to represent objects along with their interactions for a broad range of applications, is ubiquitous in the real world. Compared with independent and identically distributed (i.i.d.) data typically utilized in deep neural networks (DNNs), graph data is more challenging to deal with due to its complexity in capturing object relationships and its irregular, non-grid-like shape. To tackle the above challenges, various Graph Neural Networks (GNNs) [1, 2, 3, 4] have been developed for multiple tasks such as node classification [5, 6], link prediction and recommendation [7, 4], community detection [8, 9], and graph classification [10, 11]. These models have achieved unprecedented success in applications across different domains such as e-commerce and recommender systems [12, 4, 13], social network analysis [14, 15], financial quantitative analysis [16, 17], and drug discovery [18, 19]. Despite their remarkable success in solving real-world tasks, most GNNs lack privacy considerations. They are designed to achieve high performance, leaving private information vulnerable to attacks. Consequently, data privacy and safety in high-stakes domains (e.g., finance, social, and medical) involving sensitive and private information could be undermined. In other words, without well-designed strategies, private information is constantly subject to leakage. Even worse, a large variety of attack models are designed based on the vulnerability of these models. The aforementioned issues have become increasingly concerning, which has spawned government regulations and laws for combating malicious attacks. For instance, the California Consumer Privacy Act (CCPA) was signed into law to protect customers' privacy by regulating the information collected by businesses; the European Union has proposed a guideline that highlights the importance of trustworthy AI and indicates that one of the ethical principles a system should follow is the prevention of harm. Therefore, it is crucial to protect private data within GNNs (i.e., the privacy of the graph-structured data and the model parameters). However, the requirement for privacy protection in GNNs differs from that of traditional DNNs.
In addition to the need to protect sensitive features of node/graph instances, there is also the need to protect the relational information among entities in graphs, which is at risk of exposure. Furthermore, the unique message-passing mechanism exacerbates the challenge of protection, since sensitive/confidential features might be leaked during the propagation process. As a result, existing privacy-protection methods developed for DNNs may not be readily adaptable to graphs, thereby imposing additional privacy requirements. These privacy requirements, which emerge uniquely in the graph domain, have motivated a stream of work by cybersecurity experts and GNN researchers from academia and industry. In this work, we give a comprehensive survey of the attack strategies and privacy-preservation techniques. We categorize the attack strategies into four categories, including Model Extraction Attacks (MEA) [20, 21], Graph Structure Reconstruction (GSR) [22, 23, 24], Attribute Inference Attacks (AIA) [25], and Membership Inference Attacks (MIA) [26, 27, 25, 28], where attackers aim to infer different parts of the graph data and GNN-based models. Specifically, MEA aims to extract a model that has similar behavior to the original model; GSR endeavors to reconstruct the graph structural information from limited information; AIA aims to infer sensitive features; and MIA seeks to determine whether a certain component (e.g., node, edge, sub-graph) is contained in the training dataset. The privacy-preserving techniques are summarized into four directions, namely Latent Factor Disentangling [29, 30, 31], Adversarial Training [29, 32, 33], Differentially Private Approaches [34, 35], and Federated Learning [36, 37, 38]. Generally, the goal of latent factor disentangling is to learn representations that do not contain confidential information. Adversarial training aims to minimize the impact of specific attacks by reducing the performance of those attacks during the training process. Differentially private approaches utilize differential privacy techniques to ensure data privacy. Federated learning [39], on the other hand, seeks to develop distributed learning frameworks that enable various organizations to collaborate in training a model without sharing their own data. This survey primarily focuses on the privacy aspect of GNNs and is organized as follows: We start by introducing the preliminary context and summarizing the relation to other surveys in Section 2, which includes the privacy concept on data, traditional privacy/attacks on deep learning, and the basic knowledge of graph data and deep learning on graphs. We then present different types of privacy attack methods on GNNs in Section 3. Thereafter, in Section 4 we discuss privacy-preserving techniques for GNNs. The datasets currently in use, along with further graph datasets suitable for studying GNN privacy attacks/preservation, and related applications, are summarized in Section 5. After that, we discuss future directions in Section 6 and then conclude in Section 7.

## 2 Preliminaries of Data Privacy, Attacks and Deep Learning on Graphs

In this section, we introduce the preliminaries for the privacy of data and models. We begin with the privacy of non-graph data and discuss privacy and attacks in the general deep learning (DL) domain. After that, we introduce graphs, graph data, and their deep learning techniques.
### _Privacy on Data_

Data privacy refers to protecting sensitive and confidential information such as personally identifiable information (PII) or information related to national security infrastructure. One standard approach to enhance the privacy of structured static data is to mask sensitive information, which supports query extraction, analysis, and sharing. Standard masking techniques such as \(k\)-anonymity, \(l\)-diversity, and \(t\)-closeness [40] are limited because another publicly accessible dataset could be used to re-identify the masked entries. This limitation invited a series of de-anonymization attacks aiming to extract or infer sensitive details or attributes associated with a specific record in the released dataset, such as isolation attacks and information amplification attacks. To overcome this challenge, a group of researchers [41] proposed a privacy-preserving technique called differential privacy (DP). The main idea is that if two datasets differ only by one record, then the same algorithm should produce similar results on both. It provides a solid mathematical privacy guarantee, formally defined as follows:

**Definition 1: Differential Privacy (DP)/Global Differential Privacy (GDP)** A randomized mechanism \(K\) gives \(\epsilon\)-DP if for all datasets \(D\) and \(D^{\prime}\) differing on at most one element, and all \(S\subseteq Range(K)\), \[Pr[K(D)\in S]\leq e^{\epsilon}Pr[K(D^{\prime})\in S],\] where \(S\) is the set of all possible outputs of \(K\); \(K(D)\) is the privacy-preserving mechanism; and \(Pr\) is the probability distribution of \(K(D)\).

The mechanism \(K\) is guaranteed to leak no more information than the amount specified by the parameter \(\epsilon\). The probability distributions of the randomized function \(K(D)\) and \(K(D^{\prime})\) overlap, making it impossible to infer which of the two datasets \(D\) and \(D^{\prime}\) the query was executed on. \(\epsilon\) is a relative measurement of the allowance of information leakage and specifies how much the probability distributions should overlap [42]. In DP, a data curator first collects the raw data and then performs an analysis (i.e., global differential privacy (GDP), as defined above). Local differential privacy (LDP) [43] is differential privacy in the local setting. In LDP, the data is perturbed first before being sent to an aggregator for analysis, which is the primary difference from GDP.

**Definition 2: Local Differential Privacy (LDP)** A randomized mechanism \(K\) satisfies \((\epsilon,\delta)\)-LDP, where \(\epsilon\geq 0\) and \(0\leq\delta\leq 1\), if and only if for any pair of input values \(v,v^{\prime}\) in the domain of \(K\) and any \(S\subseteq Range(K)\), \[Pr[K(v)\in S]\leq e^{\epsilon}Pr[K(v^{\prime})\in S]+\delta,\] where \(Range(K)\) denotes the set of all possible outputs of the algorithm \(K\).

If \(\delta=0\), the algorithm satisfies pure (strict) local differential privacy (pure LDP). If \(\delta>0\), the algorithm satisfies approximate (relaxed) local differential privacy, namely \((\epsilon,\delta)\)-LDP [44]. In summary, LDP is defined for the situation where individual data is privately protected before being added to the database, while GDP is defined for the situation where data is privately protected when it is queried from the database (see Figure 1, which visualizes LDP on the left and GDP on the right). We note that both forms of DP are needed due to varying real-world application needs and settings.
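To make the LDP guarantee concrete, below is a minimal Python sketch (illustrative, not taken from any of the surveyed works) of Warner-style randomized response for a binary attribute: reporting the true bit with probability \(e^{\epsilon}/(e^{\epsilon}+1)\) satisfies pure \(\epsilon\)-LDP, and the aggregator can still recover an unbiased population estimate from the noisy reports.

```python
import numpy as np

def randomized_response(value: bool, epsilon: float, rng) -> bool:
    """Perturb one binary value so that the report satisfies epsilon-LDP.

    The true value is kept with probability e^eps / (e^eps + 1), otherwise
    flipped, so Pr[K(v) = s] / Pr[K(v') = s] <= e^eps for any output s.
    """
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bool(value) if rng.random() < p_truth else not value

def debiased_mean(reports, epsilon: float) -> float:
    """Aggregator-side correction: estimate the true mean from noisy bits."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
true_bits = rng.random(10_000) < 0.3   # hypothetical sensitive attribute
noisy = [randomized_response(b, 1.0, rng) for b in true_bits]
print(debiased_mean(noisy, 1.0))       # close to the true rate of 0.3
```

Each user perturbs locally before sending anything, so the aggregator never observes raw values; this is exactly the local setting contrasted with GDP above.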
In general, differential privacy can be achieved by adding a reasonable amount of random noise to the output of the query function. The amount of noise ultimately affects the trade-off between privacy and utility: excessive noise compromises the utility of the dataset, while meager noise hampers the privacy guarantee. Specifically, the amount of noise can be determined by the sensitivity. Generally, there are two types of sensitivity, global sensitivity and local sensitivity, defined as follows:

Fig. 1: Visualizing the differences between local (left) and global (right) differential privacy.

**Definition 3: Global Sensitivity** Given a query function \(f\) that operates on a dataset, the global sensitivity is the maximal result difference over all dataset pairs \((D,D^{\prime})\) differing in at most one entry: \[GS\left(f\right)=\max_{D,\;D^{\prime}}\left\|f(D)-f(D^{\prime})\right\|_{1}\] where \(\|\cdot\|_{1}\) is the \(L_{1}\)-norm distance and the maximum is taken over all dataset pairs \(D,D^{\prime}\) differing in at most one element.

**Definition 4: Local Sensitivity** Given a query function \(f\) that operates on a dataset \(D\), the local sensitivity is the maximum difference that the change of one data point in \(D\) can produce: \[LS\left(f,D\right)=\max_{D^{\prime}}\left\|f(D)-f(D^{\prime})\right\|_{1}\]

However, both GDP and LDP are highly vulnerable to manipulation (e.g., an adversary could insert additional data to undermine the output quality, known as a poisoning attack). Additionally, most real-world data are unstructured, and the attacks and privacy challenges corresponding to such data will be discussed next.

### _Privacy and Attacks on Deep Learning (DL)_

Next, we discuss the vulnerabilities of deep learning, present a categorization of privacy approaches with deep learning, and summarize the most common privacy attacks on deep learning.

#### 2.2.1 Vulnerabilities of DL

By leveraging a copious amount of data, deep learning (DL) algorithms are particularly impressive at decision-making, knowledge extraction, recommendation, forecasting, and many other crucial tasks. The input to DL models is the algebraic form (e.g., scalars, vectors, matrices, tensors) corresponding to raw images, videos, audio, text, graphs, and other data forms, and the output of DL models can be a class (classification), a value (regression), an embedding (encoding), or a generated sample (generative). Unfortunately, it is often possible to discern sensitive information about the input data from the outputs of the neural network. During the training process, DL models encode sensitive information of the training data; consequently, it is not surprising that a trained DL model could disclose sensitive information [45]. The data is also vulnerable to attack because it is typically not obfuscated but instead stored in centralized repositories, which are subject to the risk of data breaches. This type of data breach has been demonstrated in [45]. In the context of private data analysis, we hope to ensure that anything that can be learned about a member of the database via a privacy-preserving database should also be learnable without access to the database [42].
For example, in a medical database of smoking patients used to investigate whether smoking causes cancer, an algorithm may infer sensitive information about individuals even if their PII has been protected.

#### 2.2.2 Privacy Approaches for DL

There are a few ways to protect the privacy of the DL model; Liu et al. [46] define them as follows (also seen in Figure 2):

1. **Privacy of DL model and data:** provides privacy protection for the DL model, training dataset, testing dataset, and output, under the assumption that the whole DL system is the target of privacy protection.
2. **DL-enhanced privacy protection:** provides privacy protection of the data, with DL serving as a tool to help protect privacy. The DL algorithm identifies sensitive information and informs the user about privacy concerns.
3. **DL-based privacy attack:** DL is used as an attack tool by an adversary without access to the original dataset used for training. This is particularly important when the model is trained to detect sensitive information such as people's identities and landmarks.

However, there are vulnerabilities in private DL, and in the next part we explain the privacy attacks for each of the categories mentioned earlier.

#### 2.2.3 Privacy Attacks on DL

Recent attacks against DL models [47, 48] emphasize the implicit risks and catalyze an urgent demand for privacy preservation. Some works focus on efficient attack strategies, while others focus on defense mechanisms, which usually rely on differential privacy to provide guarantees. Differentially private DL ensures that adversaries are incapable of inferring, with high confidence, any information about a single record from the released DL models or output results [49]. In general, attacks are split into white-box and black-box attacks based on whether the parameters of the target model are available. In a _white-box attack_, the attacker can access the target model, so it knows the gradients, architecture, hyper-parameters, and training data. In a _black-box attack_, the attacker cannot access the target model's internals and can only query it. Based on these interactions, the attacker infers information about the model, such as the possible datasets used for training. Many DL attack models have been proposed in the literature, such as the model extraction attack, model inversion attack, attribute inference attack, and membership inference attack. We explain them briefly below. More details about privacy attacks on graph neural networks are provided in Section 3.

Fig. 2: Categorization of research problems in privacy and deep learning.

**Model Extraction Attack (MEA):** The goal of this attack is to steal model parameters and hyper-parameters to duplicate or mimic the functionality of the target model. The adversary does not have any prior knowledge about the DL model parameters or training data. Wang et al. [50] design an attack to extract the hyper-parameters of a DL model. Tramer et al. [51] use the shadow model approach to obtain information about the target model. Takemura et al. [52] demonstrate the effectiveness of MEA on complex neural networks such as recurrent neural networks with or without LSTMs. Similarly, Zhang et al. [53] demonstrate MEA on pre-trained models.

**Model Inversion Attack (MIvA):** The goal of the model inversion attack is to use the output of the model to extract features that characterize one of the model's classes [54]. Fredrikson et al.
[55] develop an attack that exploits confidence values revealed along with predictions. Hidano et al. [56] extend the work of [55] and assume no knowledge of the non-sensitive attributes. Parl et al. [57] focus on the defense side by using differential privacy.

**Attribute Inference Attack (AIA):** AIA aims to reconstruct the missing attributes given partial information about the data record and access to the machine learning model. Jia et al. [58] develop a defense mechanism by adding noise to the sensitive attributes. Gong et al. [59] develop a new attack to infer sensitive attributes of social network users.

**Membership Inference Attack (MIA):** MIA aims to infer whether or not a data sample is part of the data used to train a model [28]. Shokri et al. [60] use a shadow training technique to imitate the behavior of the target model; the trained model is then used to discover differences in the target model's behavior on training and non-training inputs. Salem et al. [61] use unsupervised binary classification instead of the shadow model. Truex et al. [62] study under what circumstances a model might be more vulnerable and find that collaborative learning exposes vulnerabilities to membership inference risks when the adversary is a participant. Jia et al. [63] focus on defense mechanisms by adding noise to each confidence score vector predicted by the target classifier.

### _Graphs and Graph-structured Data_

Graphs are powerful for representing relational data, and they are widely applied in fields such as recommender systems [13, 4], chemistry [19, 18], and social science [64, 15]. Here we provide the graph notation that will be used throughout this paper. We denote a graph \(\mathcal{G}\) by \((\mathcal{V},\mathcal{E},\mathbf{X})\) with a set of nodes \(\mathcal{V}=\{v_{i}\}_{i=1}^{n}\) where \(|\mathcal{V}|=n\), a set of edges \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) among these nodes representing the connections between node pairs where \(|\mathcal{E}|=m\), and the feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) where each row \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) is a \(d\)-dimensional feature vector of node \(v_{i}\). The topological information of graph \(\mathcal{G}\) is described by the adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\), where \(\mathbf{A}_{ij}=1\) if \((v_{i},v_{j})\in\mathcal{E}\) and \(\mathbf{A}_{ij}=0\) otherwise. The neighbors of a node \(v_{i}\) are denoted as \(\mathcal{N}(v_{i})\), which consists of the nodes \(v_{j}\) connected with node \(v_{i}\) (i.e., \(\mathcal{N}(v_{i})=\{v_{j}|(v_{i},\ v_{j})\in\mathcal{E}\}\)). For example, in social networks, users are represented as nodes and their actions (e.g., commenting, following) are modeled as edges. In recommendation, users and items are nodes and the user-item interactions (e.g., purchasing) are the edges.

### _Deep Learning on Graphs_

Owing to the powerful representational ability of graphs and the rapid development of deep learning, Graph Neural Networks (GNNs) have achieved impressive success, and their strong performance has been demonstrated across a wide range of applications [65, 66, 17, 67, 2, 68]. In the following, using the basic graph notation and definitions from Section 2.3, we first introduce the main idea of GNNs via the neural message-passing mechanism. Then, we provide a brief introduction to some of the most popular GNNs.
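Before turning to the models, the notation above can be grounded in a short sketch; the toy graph and all names here are illustrative only.

```python
import numpy as np

# A toy graph G = (V, E, X) with n = 4 nodes and undirected edges E.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
n, d = 4, 2

A = np.zeros((n, n))                 # adjacency matrix A
for i, j in edges:
    A[i, j] = A[j, i] = 1.0          # undirected edge: A is symmetric

X = np.random.randn(n, d)            # feature matrix X, one d-dim row per node

def neighbors(v: int):
    """N(v): the nodes connected to v."""
    return [u for u in range(n) if A[v, u] == 1.0]

print(neighbors(0))                  # [1, 3]
```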
#### 2.4.1 Neural Message Passing GNNs

The main idea of GNNs is to leverage the message-passing mechanism, which iteratively collects information from neighbors and integrates the aggregated message with the current node representation. This iteration is described by two stages, AGGREGATE and UPDATE (illustrated in Figure 3). In one layer, these two steps iterate over all nodes in graph \(\mathcal{G}\); stacking layers builds more powerful GNNs that capture higher-order relationships. We first give the message-passing formulas for a target node \(u\) in the \(k^{\text{th}}\) layer in terms of AGGREGATE and UPDATE in Eq. (1) and Eq. (2), and then introduce the details of these two steps. More detailed explanations of how they are designed in different models are in Section 2.4.2. \[\mathbf{m}_{\mathcal{N}(u)}^{(k)}=\text{AGGREGATE}(\{\mathbf{h}_{v}^{(k)}, \forall v\in\mathcal{N}(u)\}) \tag{1}\] \[\mathbf{h}_{u}^{(k+1)}=\text{UPDATE}(\mathbf{h}_{u}^{(k)},\mathbf{m}_{ \mathcal{N}(u)}^{(k)}) \tag{2}\]

**AGGREGATE:** Aggregation means gathering information from neighbors. At the \(k^{\text{th}}\) iteration of the GNN, the AGGREGATE function takes the embeddings of the target node \(u\)'s neighbors \(\mathcal{N}(u)\) and generates a new representation \(\mathbf{m}_{\mathcal{N}(u)}^{(k)}\) based on the collected embeddings. Fundamentally, the AGGREGATE operator is a set function whose input is a set of neighbor embeddings and whose output is a single vector. There are many choices for this operator, such as degree-based graph convolution (e.g., GCN [6] and GraphSAGE [5]) or attention-based strategies (e.g., GAT [69]).

**UPDATE:** Update refreshes the representation of a node \(u\) with its own feature information and the aggregated messages from neighbors \(\mathbf{m}_{\mathcal{N}(u)}^{(k)}\). At the \(k^{\text{th}}\) layer of the GNN, the update function combines the new representation \(\mathbf{m}_{\mathcal{N}(u)}^{(k)}\) and node \(u\)'s embedding \(\mathbf{h}_{u}^{(k)}\) to obtain the new node embedding \(\mathbf{h}_{u}^{(k+1)}\). A typical UPDATE operator involves a linear combination of the node's embedding and the aggregated message, followed by a non-linear activation function (e.g., sigmoid, tanh, ReLU) after the linear transformation. Other variants have been proposed to further improve performance, such as concatenation methods [5] and skip-connection methods [70].

Fig. 3: The two-step message passing framework commonly used in many GNNs: AGGREGATE and UPDATE.

#### 2.4.2 Typical GNNs

Here are several widely used GNNs along with their corresponding message-passing functions. The AGGREGATE function is denoted by \(f_{A}^{(k)}\), through which we obtain the representation \(\mathbf{m}\), and the UPDATE function is denoted by \(f_{U}^{(k)}\). \(\sigma\) denotes the activation function, and \(\mathbf{W}\) and \(\mathbf{a}\) are trainable parameters whose dimensions are given when detailing the models. The dimensions of node representations before/after the linear transformation are denoted as \(d/d^{\prime}\), respectively. A minimal code sketch of the generic aggregate-and-update scheme is given below, before we turn to the concrete models.
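As a concrete illustration of Eqs. (1) and (2), the following minimal NumPy sketch implements one layer with a neighbor-mean AGGREGATE and a concatenate-then-transform UPDATE (previewing the GraphSAGE variant described next); it is a toy under our own naming assumptions, not any particular published implementation.

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One generic GNN layer over all nodes at once.

    A: (n, n) adjacency matrix, H: (n, d) embeddings, W: (2d, d') weights.
    AGGREGATE: m_u = mean of {h_v : v in N(u)}  (rows of A @ H / deg).
    UPDATE:    h_u' = ReLU([h_u ; m_u] W)       (concatenate, transform).
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    M = (A @ H) / deg                                            # AGGREGATE
    return np.maximum(np.concatenate([H, M], axis=1) @ W, 0.0)   # UPDATE

n, d, d_out = 4, 2, 3
rng = np.random.default_rng(0)
A = rng.random((n, n)) < 0.5
A = ((A | A.T) & ~np.eye(n, dtype=bool)).astype(float)  # random undirected graph
H = rng.standard_normal((n, d))
W = rng.standard_normal((2 * d, d_out)) * 0.1
H1 = message_passing_layer(A, H, W)   # stacking k layers gives k-hop context
```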
**GCN** [6]. The graph convolutional network (GCN) is one of the most popular GNNs, with aggregation and update functions as follows: \[f_{A}^{(k)}(\{\mathbf{h}_{v}^{(k-1)}|v\in\mathcal{N}(u)\})=\sum_{v\in \mathcal{N}(u)}\frac{\mathbf{h}_{v}^{(k-1)}}{\sqrt{deg(u)deg(v)}}\] \[f_{U}^{(k)}(\{\mathbf{h}_{u}^{(k-1)},\mathbf{m}_{u}^{(k)}\})=\sigma(\mathbf{W}^{(k)}\mathbf{m}_{u}^{(k)})\] where the aggregation is normalized by the degrees (i.e., \(deg(\cdot)\)) of both source and target nodes and \(\mathbf{W}^{(k)}\in\mathbb{R}^{d^{\prime}\times d}\) is the weight of the linear transformation layer.

**GraphSAGE** [5]. GCN models are inherently transductive: they cannot efficiently generalize to unseen nodes or different graphs. GraphSAGE is proposed to solve this issue in an inductive manner; it samples a fixed-size neighborhood of each node and leverages node attribute information to efficiently generate representations on previously unseen data. The \(\oplus\) operator concatenates the embedding of the target node and the aggregated message, and \(\mathbf{W}^{(k)}\) has dimension \(\mathbb{R}^{d^{\prime}\times 2d}\) due to the concatenation. \[f_{A}^{(k)}(\{\mathbf{h}_{v}^{(k-1)}|v\in\mathcal{N}(u)\})=\frac{1}{deg(u)}\sum_{v\in\mathcal{N}(u)}\mathbf{h}_{v}^{(k-1)}\] \[f_{U}^{(k)}(\{\mathbf{h}_{u}^{(k-1)},\mathbf{m}_{u}^{(k)}\})=\sigma(\mathbf{W}^{(k)}[\mathbf{h}_{u}^{(k-1)}\oplus\mathbf{m}_{u}^{(k)}])\]

**GAT** [69]. Graph Attention Networks (GAT) introduce an attention mechanism to compute the weight \(\alpha_{uv}\) of each edge \(e_{uv}\), which means that different neighbors contribute differently to the aggregation process based on the learned node representations, where \(\mathbf{W}^{(k)}\in\mathbb{R}^{d^{\prime}\times d}\) and \(\mathbf{a}\in\mathbb{R}^{2d^{\prime}\times 1}\). \[\alpha_{uv}^{(k)}=\frac{\exp\left(\sigma\left(\mathbf{a}^{T}[\mathbf{W}^{(k)}\mathbf{h}_{u}^{(k-1)}\oplus\mathbf{W}^{(k)}\mathbf{h}_{v}^{(k-1)}]\right)\right)}{\sum_{v^{\prime}\in\mathcal{N}(u)}\exp\left(\sigma\left(\mathbf{a}^{T}[\mathbf{W}^{(k)}\mathbf{h}_{u}^{(k-1)}\oplus\mathbf{W}^{(k)}\mathbf{h}_{v^{\prime}}^{(k-1)}]\right)\right)}\] \[f_{A}^{(k)}(\{\mathbf{h}_{v}^{(k-1)}|v\in\mathcal{N}(u)\})=\sum_{v\in\mathcal{N}(u)}\alpha_{uv}^{(k)}\mathbf{h}_{v}^{(k-1)}\] \[f_{U}^{(k)}(\{\mathbf{h}_{u}^{(k-1)},\mathbf{m}_{u}^{(k)}\})=\sigma(\mathbf{W}^{(k)}\mathbf{m}_{u}^{(k)})\]

We encourage readers who seek a more comprehensive introduction to deep learning on graphs to explore dedicated surveys [71, 2], tutorials [4, 3], and books [4, 1].

### _Motivation_

Although data privacy has been well investigated in the general DL domain, it is critical to consider privacy for graph data and models since (1) graph data and models are prevalent in real-world applications; (2) although these data and models are vulnerable to attacks, privacy in the graph domain is less explored than for other forms of data (e.g., tabular); and (3) directly extending techniques from regular data to graph data without domain-specific modification is challenging due to the complex graph structure. Therefore, discussing privacy attacks and preservation techniques that are specific to graphs becomes a necessity [72].

#### 2.5.1 Vulnerabilities of Graph Data and GNN Models

Compared to regular tabular, image, and text data, graph data has complex connections and potential edge features, both of which are potential targets of attack [73, 74]. For example, while most other forms of data are independent and identically distributed (i.i.d.), the nodes within a graph are inherently connected.
Furthermore, many real-world networks exhibit high homophily [75], where connected nodes tend to share similar features. This can be exploited to infer a node's information from its neighborhood, creating a risk of information leakage and posing brand-new challenges to privacy preservation. However, we note that even for graphs not exhibiting high homophily, GNNs have been shown to still perform well [76, 77], which suggests that such graphs are also vulnerable to privacy attacks. Furthermore, the complex connections among nodes make data partitioning difficult and thus pose significant challenges for distributed training. Therefore, GNNs are typically trained in a centralized way, where the model and data are stored in one place, which increases the risk of information leakage and might be impossible in many real-world settings. While GNNs have shown significant improvement in various applications, the unique message-passing process might exacerbate sensitive information leakage, as each node now encodes information from its neighbors. This means that to protect one node, not only must that specific node be protected, as in i.i.d. data, but the substructures surrounding that node must also be protected.

#### 2.5.2 Related Surveys and Differences

Recently, several surveys [78, 79, 80, 81] have been conducted on trustworthiness in GNNs, covering the reliability, explainability, fairness, privacy, and transparency aspects, reflecting a common interest in and concern about trustworthiness that we share here. On one hand, within the scope of trustworthiness, although surveys on the explainability [82] and fairness [83] of GNNs exist, few of them provide a comprehensive and focused discussion of privacy for GNNs. On the other hand, although there are privacy surveys in ML/DL [84, 85, 47] and social networks [86, 87], privacy specific to graph data and graph models has not been comprehensively discussed yet. This motivates us to present a systematic and in-depth review of the existing attack models and privacy-preserving techniques for GNNs, which could benefit the research community in developing privacy-preserving GNNs immune to privacy attacks.

## 3 Privacy Attacks on GNNs

Privacy attack is a popular and well-developed topic in various fields such as social network analysis, healthcare, finance, and systems [88, 89, 90]. In recent years, the surge of machine learning has provided powerful tools to solve many practical problems. However, data-driven approaches also threaten users' privacy due to the associated risks of data leakage and inference [85]. Consequently, a substantial amount of work has been devoted to investigating the vulnerabilities of ML models and the risks of privacy leakage [47]. One branch of privacy research develops privacy attack models, which have received much attention during the past few years. However, attack models targeting GNNs have only been explored very recently, because GNN techniques are relatively new compared with CNNs/Transformers in the image and natural language processing (NLP) domains, and because the irregular graph structure poses unique challenges to transferring attack techniques that are well established in other domains. In this section, we summarize papers that have developed attack models specifically targeting GNNs.
We classify the privacy attack models on GNNs into four categories (visualized in Figure 4): a) model extraction attacks (MEA), b) graph structure reconstruction (GSR), c) attribute inference attacks (AIA), and d) membership inference attacks (MIA). In MEA, the GNN model is often directly extracted/inferred with the aid of a surrogate model. Concretely, the surrogate model is trained so that it outputs predictions similar to the ground-truth values that would be generated by the target model given the same input. In a GSR attack, information related to the graph structure, such as topology and connectivity, is inferred by the attackers. Compared to MEA, GSR aims to recover information about the underlying graph rather than simply mimicking the model's performance. GSR is similar to the model inversion attack mentioned in the previous section, except that it specifically targets graphs. Note that GSR is equivalent to graph information reconstruction (GIR) as used in the literature; we rename it GSR because this attack focuses on the reconstruction of graph structural information. In AIA, concrete node features (e.g., age, salary) are obtained by the attackers. In MIA, the attackers aim to determine whether or not a node belongs to the training set. Another way to categorize the attack models is based on the accessibility of information, dividing them into _white-box attacks_ and _black-box attacks_. In the white-box setting, adversaries are assumed to have access to rich information about the GNN, such as its architecture, parameters, embeddings, and outputs. By contrast, in the black-box setting, adversaries have little, if any, information about the target model.

### _Model Extraction Attack_

MEA has mostly been studied on models based on multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs); recently, research on MEA against GNNs has grown increasingly popular. Under the MEA scheme, the attackers typically have limited information about the GNN model (i.e., black-box). During an MEA, the attackers first adopt a model (e.g., a GNN) similar to the victim GNN model. Subsequently, the adopted model is tuned so that it has performance similar to the target model in terms of criteria such as accuracy and decision boundary. To accomplish this, the attackers generate queries to the victim model, collect the outputs from the model API, feed the same queries to the extracted model, and finally tune the parameters of the adopted model so that it produces outputs similar to those of the victim model. As the performance of the two models converges, the adopted model can be considered an extracted version of the victim model; a minimal sketch of this query-and-fit loop is given below. Mathematically, a GNN model can be expressed as a function \(f\) operating on a graph \(G\); the goal of MEA is to obtain an extracted model \(f^{\prime}\) such that \(f^{\prime}(G)\approx f(G)\). Note that, due to the connectivity of the data samples in a graph, MEA on GNNs is facilitated if additional information about the graph structure is available. This is one distinct difference from MEA on CNN-based or MLP-based models, whose data samples are not connected, unlike the nodes in a graph.
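The query-and-fit loop can be sketched as follows in PyTorch; `victim_api`, the surrogate interface, and the query set are hypothetical placeholders under our own assumptions rather than any published attack implementation.

```python
import torch
import torch.nn.functional as F

def extract_model(victim_api, surrogate, A_hat, X, query_idx, epochs=200):
    """Tune `surrogate` so that surrogate(G) ~ victim(G) on queried nodes.

    victim_api: black-box function returning class probabilities for node ids.
    surrogate:  any trainable GNN taking the attacker's (partial) graph A_hat, X.
    """
    with torch.no_grad():
        soft_labels = victim_api(query_idx)            # collect victim outputs
    opt = torch.optim.Adam(surrogate.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        log_probs = F.log_softmax(surrogate(A_hat, X)[query_idx], dim=1)
        # Fit the surrogate's predictions to the victim's soft predictions.
        loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
        loss.backward()
        opt.step()
    return surrogate   # the extracted model f' with f'(G) ~ f(G)
```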
An early work studying MEA against GNNs is [21], where the extracted model is able to produce outputs with an \(80\%\) similarity to those of the victim model (also called fidelity) on the Cora and Pubmed datasets, with access limited to only a subgraph. A recent comprehensive investigation of MEA against GNNs is presented in [20]. The authors present seven possible scenarios in which the attacker possesses different amounts of prior information (e.g., node attributes and topology) for the attack. Based on these scenarios, they determine seven categories of attack. After experimenting with their attack on multiple real-world datasets, they claim that their extracted model can reach a fidelity as high as \(90\%\). Similarly, other recent work has focused on MEA against inductive GNNs [91].

Fig. 4: Illustrations of the four categories of privacy attack models on graphs: a) Model extraction attacks (MEA); b) Graph structure reconstruction (GSR); c) Attribute inference attacks (AIA); and d) Membership inference attacks (MIA).

### _Graph Structure Reconstruction_

In this subsection, we discuss recent progress on GSR specifically targeting GNNs. In a GSR attack on a GNN, the attackers seek to steal private information about the input graph, mainly pertinent to the graph structure. Duddu et al. [25] conduct a GSR attack on a GNN model to extract the graph \(G_{\text{target}}\) using the publicly accessible node embeddings \(\Psi(v),\forall v\in G_{\text{target}}\). The process has two phases: the first trains a graph encoder-decoder structure using an auxiliary graph \(G_{\text{auxiliary}}\) whose nodes follow the data distribution of the target graph \(G_{\text{target}}\), and the second leverages the publicly accessible target node embeddings and the trained decoder to estimate the target adjacency matrix \(A_{\text{target}}\), which captures the graph connectivity and edge distribution. They demonstrate that their proposed GSR attack reaches a very high precision (above 0.7) for estimating the target graph on Cora, Citeseer, and Pubmed. Zhang et al. [22] look at GSR for edge reconstruction and structural reconstruction, calling their attack model the Graph Model Inversion Attack (GraphMI). In the white-box setting, GraphMI consists of a projected gradient module, a graph auto-encoder module, and a random sampling module. The projected gradient module aims to extract the graph topology given the output labels of the target model and auxiliary knowledge, and is designed to tackle the discrete optimization problem via convex relaxation while preserving graph sparsity and feature smoothness. Then, the graph auto-encoder module takes node attributes, graph topology, and target model parameters into consideration for graph reconstruction.
Specifically, the projected gradient module solves the following optimization problem: \[\operatorname*{arg\,min}_{\mathbf{a}\in[0,1]^{n}}\mathcal{L}_{GNN}+\alpha \mathcal{L}_{s}+\beta\|\mathbf{a}\|_{2}\] where \(\mathbf{a}\in[0,1]^{n}\,(n=N(N-1)/2)\) is a continuous-valued adjacency vector, transformed from the adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\) so that the original combinatorial optimization problem can be solved with the projected gradient descent method; \(\mathcal{L}_{GNN}\) denotes the loss of the target model, aiming to make the reconstructed adjacency more similar to the original one; \(\mathcal{L}_{s}\) is a term ensuring feature smoothness in the optimized graph; the last term encourages the sparsity of the graph structure; and \(\alpha,\beta\) are constant parameters. One can refer to [92] for details of the model. The graph auto-encoder module is composed of an encoder and a decoder. The encoder is directly transferred from the target model \(f(\mathbf{x};\theta^{*})\) with partial parameters (i.e., excluding the readout layer). After the optimization of the projected gradient module, the encoder maps nodes to node embeddings using the topology information and node attributes \(\mathbf{x}\). Then, the decoder reconstructs the graph adjacency based on the node embeddings. The random sampling module is designed to recover the binary adjacency matrix. In [92], the authors claim that GraphMI is effective at inferring edges after evaluating it on three GNNs (i.e., GCN [6], GAT [69], GraphSAGE [5]). Further, based on their analysis of the relation between edge influence and model inversion risk, they conjecture that the ease of reconstruction is positively correlated with the edges' influence. In addition, the authors incorporate gradient estimation and reinforcement learning into GraphMI to render it capable of black-box attacks.

He et al. [23] first propose a threat model called the link stealing attack, which aims to infer the existence of links among nodes in target datasets. The authors systematically characterize the background knowledge (i.e., accessibility of knowledge) of an adversary along three dimensions: the attributes of nodes in the target dataset \(\mathbf{X}\), the partial graph structure of the target dataset \(\mathbf{\tilde{A}}\), and a shadow dataset \(D_{Shadow}\). Based on the different accessibility settings (i.e., access or no access) of the knowledge in these three dimensions, they develop in total 8 attack mechanisms covering all 8 possible settings. Basically, if an adversary has no knowledge or only the target node attributes \(\mathbf{X}\), the attack is conducted in an unsupervised way, based mainly on the intuition that a linked node pair shares more similar attributes, or closer posteriors queried from the target model, than an unlinked pair. If an adversary has access to the partial graph structure \(\mathbf{\tilde{A}}\) or a shadow dataset \(D_{Shadow}\), the attack model can be trained in a supervised way. For the partial graph \(\mathbf{\tilde{A}}\), the adversary takes the known links as ground-truth labels to train an attack model. Alternatively, an adversary could train an attack model on a shadow dataset \(D_{Shadow}\) so that the trained model can later be transferred to infer links in the target dataset.
The intuition is that the trained model learns to capture the similarity between two nodes' posteriors, an ability that can be transferred to different datasets (e.g., the target dataset). One can refer to [23] for more details about each attack model. Extensive experimental results demonstrate the effectiveness of the proposed attack models. More importantly, the results indicate that the output predictions of GNNs preserve rich information about the structure of the graph used to train the model. Wu et al. [24] also focus on edge privacy and aim to recover private edges from GNNs through influence analysis. [93] demonstrates that additional knowledge of post-hoc feature explanations substantially enhances such structural attacks.

### _Attribute Inference Attack_

AIA aims to infer the properties of a target training dataset. For graph-based data, the properties are usually related to nodes and edges, and they could be sensitive features (e.g., the gender and age of a user node in social network analysis [94], or chemical bond information in a molecular graph). Note that the difference between GSR and AIA can be subtle: the former aims to reproduce the graph, while the latter focuses on stealing concrete features of the dataset. Compared to many other adversarial attacks, AIA is often considered more malicious due to its capability to directly predict the sensitive features of a target user. In addition, an even more serious derivative of AIA, the data reconstruction attack, occurs when attackers try to infer a subset of the training data rather than a single record. In AIA, embeddings are often used to predict the sensitive features because of their close relationship. Duddu et al. [25] investigate AIA on graphs by inferring the gender and location of certain users in target graphs built from Facebook and LastFM. In their study, a fraction of users publicly disclose their gender and location, and a sub-graph \(G_{\text{aux}}\) can be constructed from these users. Given \(\left(\Psi(v),s_{v}\right)\forall v\in G_{\text{aux}}\), where \(\Psi(v)\) is the node embedding and \(s_{v}\) is the disclosed sensitive feature, a supervised attack classifier \(f_{\text{attack}}\) can be trained. Based on the trained model and the available node embeddings of the target graphs, the sensitive features can be estimated via \(f_{\text{attack}}\left(\Psi\left(v^{\prime}\right)\right)\) where \(v^{\prime}\in G_{\text{target}}\). Their AIA achieves high performance. Although less common, edges can also be related to or contain sensitive information. For example, in a drug-target interaction graph, a link could contain sensitive information such as the interaction pattern and affinity between a certain drug and target. Thus, AIA can also be designed to specifically target edges.

### _Membership Inference Attack_

Membership describes whether a data sample belongs to the training dataset [95]. In GNNs, an MIA can target the node level, edge level, or graph level [25, 26, 27, 96, 23, 97]. Node-level MIA infers whether a node exists in the original training graph. For instance, attackers could be interested in whether or not a person is in a certain social community by running an MIA on that social network; subsequently, the attackers could further modify/steal the private information of the targeted user. Edge-level MIA aims to infer the membership of links.
Graph-level MIA infers whether a graph was used during training, which typically arises in graph classification tasks. Typical methods for MIA include using shallow models, node embeddings, and graph embeddings [98, 99]. MIA is a binary classification problem, and typical metrics for evaluating it include the inference accuracy, the precision (i.e., the fraction of true positives), and the ROC-AUC score, which captures the trade-off between the true positive rate and the false positive rate.

**Node-level MIA.** In [26], the authors perform MIA with the help of a shadow model (i.e., a simplified model trained to approximate the target model), building the attack model in a supervised way with data generated from the trained shadow model, under the assumption that they can construct the shadow model with the same neural architecture as the target model. They train the shadow model with data from the same distribution as the training set of the target model. In order to better simulate the behavior of the target model, the output probabilities queried from the target model are treated as the ground truth during the training of the shadow model. The attack model is designed as a binary classifier that maps the output prediction confidence of a model to the membership of the corresponding input nodes. To train the attack model in a supervised way, the authors generate the dataset by using the trained shadow model to predict on the entire dataset \(\mathcal{D}_{Shadow}\) (both member and non-member nodes) and obtaining the corresponding output confidences. Finally, they can infer membership with the trained attack model once they have the output of a node queried from the target model. In addition, they also try another way to obtain the shadow model: instead of querying the target model for confidences, they use the ground-truth labels of the original nodes to train the shadow model. Interestingly, it is found that there is no significant difference in the attack success rate. The authors also claim that it is unnecessary for the shadow model to have exactly the same architecture as the target model, after observing that a standard graph convolutional network also achieves good results in MIA.

In [25], the authors develop MIA methods under the black-box and white-box settings, respectively. In the black-box setting, an adversary is assumed to have access only to the output probabilities of the target model when given a node. Under this scenario, the authors exploit the statistical difference between the prediction confidence on member (i.e., training) and non-member data. Specifically, they demonstrate that if a node belongs to the training data, the output probability queried from the target model is more confident (i.e., has higher values) on the corresponding label, while for a non-member node the output probability distribution is expected to be less confident and more uniform. Based on this setting, the authors consider two attack methods (i.e., the shadow attack and the confidence attack). The shadow attack shares a similar idea with the one in [26], which builds the attack model in a supervised way with training data generated from the trained shadow model. By contrast, the confidence attack performs inference in an unsupervised setting.
Based on the fact that nodes with a higher prediction confidence are more likely to be members, an adversary can decide memberships according to whether the highest confidence of a node's prediction is above a certain threshold, which can be fixed or learned. The authors also experimentally verify that the confidence attack performs much better than the shadow attack under the black-box setting. In the white-box setting, an adversary has access to the intermediate outputs of the target model (i.e., node embeddings in [25]). Here, the authors propose an unsupervised method that maps an intermediate embedding to a single membership value. Specifically, they train an encoder-decoder model: the encoder compresses each intermediate embedding into a single membership value, which is then passed to the decoder to reconstruct the embedding. Afterwards, they use a clustering method (e.g., K-Means) to separate the obtained membership values into clusters (i.e., members and non-members). The authors conduct their analysis using a wide range of GNNs and network-embedding methods (e.g., GCN, GraphSAGE, GAT, Topological Adaptive GCN, DeepWalk, Node2Vec) and thus show the generalizability of their results.

He et al. [27] propose an MIA model also based on training a shadow model. Instead of first querying the target model for the output probabilities of nodes in \(\mathcal{D}_{Shadow}\) as in [26], the authors directly train the shadow model with the dataset \(\mathcal{D}_{Shadow}\) and the corresponding features and labels, which are drawn from the same distribution as the training set \(\mathcal{D}_{Target}\) of the target. Similar to [26], the attack model is trained in a supervised way with the membership information in \(\mathcal{D}_{Shadow}\) and the posteriors of nodes queried from the trained shadow model. Differently, depending on the adversary's knowledge of node topology, the authors develop three query methods: the 0-hop query, the 2-hop query, and the combined query. Accordingly, three different attack models are trained with the datasets corresponding to these query methods. The experimental results demonstrate that the combined attack outperforms the 0-hop and 2-hop attacks, since it combines the advantages of the other two. Moreover, they experimentally show that both the assumption of identical distributions of \(\mathcal{D}_{Shadow}\) and \(\mathcal{D}_{Target}\) and the assumption of identical architectures of the shadow and target models can be relaxed, which is consistent with the results in [26].

**Edge-level MIA.** Similar to nodes, connections also carry valuable and potentially sensitive information and thus become a target for attackers. Edge-level MIA aims to determine whether there is a link between two nodes in the training graph [23].

**Graph-level MIA.** In addition to the node and edge levels, researchers have also investigated graph-level membership inference [100, 97, 101], where the task is to infer whether a graph/sub-graph is included in the training set. Wu et al. [97] aim to infer whether a graph sample has been used for training and design two types of attacks, training-based and threshold-based, assuming different adversarial capabilities. Zhang et al. [100] infer basic graph properties and the membership of a sub-graph based on graph embeddings.
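As a simple illustration of the unsupervised confidence attack discussed above, the sketch below thresholds the target model's top-class posterior; the threshold heuristic is our own illustrative assumption, not the exact procedure of [25].

```python
import numpy as np

def confidence_attack(posteriors: np.ndarray, threshold: float = 0.9):
    """Membership inference from queried posteriors, no training required.

    posteriors: (n, c) class-probability rows returned by the target model.
    Nodes whose top-class confidence exceeds the threshold are declared
    members, exploiting the higher model confidence on training data.
    """
    return posteriors.max(axis=1) > threshold     # True = predicted member

def pick_threshold(posteriors: np.ndarray) -> float:
    """One illustrative way to 'learn' the threshold: split the sorted
    top-class confidences at their largest gap."""
    conf = np.sort(posteriors.max(axis=1))
    gaps = np.diff(conf)
    i = int(np.argmax(gaps))
    return float(conf[i] + gaps[i] / 2.0)
```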
## 4 Privacy Preservation Techniques on GNNs

After discussing possible attacks on GNN models, we now shift our attention to preservation methods that can effectively protect against these attacks. In the following subsections, we discuss latent factor disentangling, adversarial training, differentially private approaches, and federated learning. Latent factor disentangling aims to remove sensitive information from the embedding while minimizing the loss of meaningful information for downstream tasks; adversarial training aims to render a model resistant to privacy attacks through careful training; differentially private approaches incorporate random noise into data samples or intermediate model variables to protect sensitive information during queries; and federated learning enables collaboration among users with private datasets without revealing them. Table I enumerates the techniques associated with these four categories.

TABLE I: Privacy preservation techniques that have either been utilized or have the potential to be employed on GNNs. Public code links to these methods are also provided (if available).

| Category | Method | Public Code |
| --- | --- | --- |
| Latent Factor Disentangling | APGE [29] | Link |
| | Wang et al. [30] | - |
| | DP-GCN [31] | - |
| | DGCF [102] | Link |
| Adversarial Training | NetFense [32] | Link |
| | Tian et al. [33] | - |
| | SecGNN [103] | Link |
| | FRFC [104] | - |
| | GAL [105] | Link |
| | Wang et al. [106] | - |
| | AttroDF [107] | - |
| Differential Privacy | LPGNN [34] | Link |
| | GAP [35] | Link |
| | Mueller et al. [108] | - |
| | DP-Adam [109] | Link |
| | PrivGNN [110] | - |
| | DP-GNN [111] | Link |
| | Solitude [112] | - |
| | GERAI [113] | - |
| Federated Learning | FedGraphNN [114] | Link |
| | FedGL [115] | - |
| | FedGNN [116] | - |
| | FedGCN [117] | Link |
| | VFGNN [118] | - |
| | SGNN [119] | - |
| | FedVGCN [120] | - |
| | D-FedGNN [121] | - |
| | SpreadGNN [122] | Link |
| | FedPerGNN [123] | Link |
| | FLIT(+) [124] | Link |
| | GraFeHty [125] | - |

### _Latent Factor Disentangling_

Graph/node embeddings can preserve both sensitive and non-sensitive information of the original graph. From the perspective of representation learning, latent factor disentangling aims to disentangle the sensitive information from the embedding while ensuring the utility of the disentangled embedding for downstream tasks. In this way, it becomes difficult for an adversary to implement privacy attacks with limited or no latent sensitive information. Various methods have been proposed to achieve this goal [29, 30, 31].

Li et al. [29] propose a graph embedding model to disentangle sensitive information from the embedding while preserving the structural information and data utility. The model incorporates two mechanisms in a complementary way, both of which can separately protect the sensitive information. The first mechanism is based on the graph autoencoder (GAE) model [126], which is able to process graph-structured data, or a supervised Adversarial Autoencoder (AAE) model [127], which augments the decoder with the one-hot encoding of the label so that the final embedding contains label-invariant information. Accordingly, the authors use a GCN as the encoder and incorporate privacy labels into the decoder to disentangle sensitive information from the final embedding. The proposed autoencoder achieves graph reconstruction, with the output covering both link prediction and the prediction of non-sensitive node attributes. Note that there is also a discriminator for updating the encoder. The loss function can be described as \(\mathcal{L}_{recon}=\mathcal{L}_{link}+\mathcal{L}_{attr}\), where \(\mathcal{L}_{link}\) and \(\mathcal{L}_{attr}\) denote the losses for link prediction and for the prediction of non-sensitive node attributes, respectively. The discriminator has a separate loss function \(\mathcal{L}_{dc}\). Instead of disentangling privacy labels as decoder input, the second mechanism achieves a similar effect by attaching an attacker model (i.e., a softmax classifier) that aims to predict sensitive node attributes to an obfuscator (i.e., a model similar to the autoencoder of the first mechanism but without incorporating privacy labels). The goal of the second mechanism is to reduce the performance of the attacker while preserving the graph structure and non-sensitive data utility. Given the loss function of the attacker \(\mathcal{L}_{attack}\), the loss function of the obfuscator is \(\mathcal{L}_{obf}=\mathcal{L}_{recon}-\lambda\mathcal{L}_{attack}\), where \(\lambda\) is a trade-off hyper-parameter.
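A minimal sketch of the obfuscator objective \(\mathcal{L}_{obf}=\mathcal{L}_{recon}-\lambda\mathcal{L}_{attack}\) follows; the module interfaces are hypothetical, and only the loss composition mirrors the description above.

```python
import torch
import torch.nn.functional as F

def obfuscator_loss(link_loss, attr_loss, attacker_logits,
                    sensitive_labels, lam=1.0):
    """L_obf = (L_link + L_attr) - lambda * L_attack.

    Minimizing this keeps reconstruction utility high (first term) while
    pushing the attacker's loss up (second term), i.e., making sensitive
    attributes hard to infer from the learned embedding.
    """
    l_recon = link_loss + attr_loss                      # L_recon
    l_attack = F.cross_entropy(attacker_logits, sensitive_labels)
    return l_recon - lam * l_attack
```

In practice the obfuscator and the attacker are updated alternately, which is the adversarial scheme revisited in Section 4.2.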
Finally, the authors use the two mechanisms together to protect privacy at both the input and the output of the decoder in a complementary way.

Wang et al. [30] design a framework based on an encoder-decoder architecture to learn graph representations that encode sufficient information for downstream tasks while disentangling the learned representation from privacy features. The key component is a conditional variational graph autoencoder (CVGAE), which captures the relationship between the learned embeddings and the sensitive features. They add a penalty loss to the original reconstruction objective that encourages the CVGAE to minimize the differences between the mean Gaussian distributions, so that privacy leakage is penalized. Under this framework, they consider two specific scenarios, in which the graph structure is available and unavailable to the adversary, respectively.

In some graph datasets, the same attribute could be private for some nodes while non-private for others. One example is a social network composed of people of different genders, where the age of people of certain genders (e.g., female) could be private while the age of people of the other genders (e.g., male) could be public. Due to graph homophily (i.e., connected nodes are similar), exposing the public information of some users (e.g., the ages of men) could lead to inference of private information (e.g., the ages of women who are connected to the men on the social network). To handle this issue, Hu et al. [31] propose a privacy-preserving GCN model named DP-GCN that can conceal the value of private sensitive information that has been exposed by public users in the same network. DP-GCN has a Disentangled Representation Learning Module (DRL) and a Node Classification Module (NCL). DRL separates the node attributes into sensitive and non-sensitive latent representations that are orthogonal to each other. NCL trains the GCN to determine the class of the nodes with the non-sensitive latent representations. The authors experimentally show that the attribute disentangling of public users (e.g.
men with disclosed ages) can help protect the privacy of private users (e.g., women with undisclosed ages). Besides the above influential works, researchers have also worked in this direction [102], introducing innovations that improve performance and inspire further research in privacy preservation.

### _Adversarial Training_

An intuitive way to defend against privacy attacks is to directly reduce the performance of specified attacks. Accordingly, one can train models with the objective of minimizing the performance of specified privacy attacks (i.e., privacy protection) while maintaining the performance of downstream tasks (i.e., utility). We refer to this kind of approach as adversarial training; a generic sketch of the alternating update scheme underlying it is given at the end of this subsection. In fact, the aforementioned model in [29] can also be categorized as adversarial training: its updating process involves the loss functions of the discriminator \(\mathcal{L}_{dc}\) and the attacker \(\mathcal{L}_{attack}\), which make the model robust to the attack in an adversarial way.

Hsieh et al. [32] propose an adversarial defense model against GNN-based privacy attacks named NetFense, which is able to reduce the prediction performance of the attacker while maintaining data and model utility (i.e., maintaining the performance on downstream tasks). Different from [29], this work presents a graph perturbation-based approach that perturbs the original graph (i.e., changes the adjacency matrix \(\mathbf{A}\)) to fool the attacker while maintaining utility for the downstream task (i.e., node classification). To achieve this, the model comprises three phases: candidate selection, influence with GNNs, and combinatorial optimization. The first phase identifies a set of candidate edges for perturbation. Specifically, it uses the change of the Personalized PageRank (PPR) score to measure the noticeability of a perturbation; to keep data utility, it selects candidate edge perturbations with minimal noticeability. The second and third phases then ensure model performance and privacy preservation. Compared to [29], which alternately updates the model and the attacker, NetFense does not update models in an adversarial way. Instead, given the pre-trained classification model and attack model, NetFense modifies the input graph structure to obtain a perturbed graph that protects privacy. Another difference is that NetFense assumes privacy labels are binary, so it aims to reduce the prediction accuracy of the attacker to 0.5 rather than simply maximizing the loss function of the attacker.

Tian et al. [33] adopt adversarial training for privacy preservation in social network analysis. Concretely, their model has two stages: the first is based on an \(\epsilon\)-\(k\) anonymization method, and the second on an adversarial training mechanism. The adversarial training helps the GNN extract useful information from the anonymized social network data produced by the first stage. In other words, the main purpose of adding adversarial training is to render GNNs resistant to the disturbance introduced by \(\epsilon\)-\(k\) anonymization, so that privacy is enhanced while model performance is minimally compromised. The authors show the effectiveness of their approach through experiments on classification, link prediction, and graph clustering using multiple real-world datasets. Further, they show the flexibility and efficiency of their model in the data collection/training phases.
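Under illustrative interface assumptions (the `model.embed`/`model.classify`/`attacker` methods and batch fields are placeholders, not any published API), one alternating round of such adversarial training can be sketched as:

```python
import torch
import torch.nn.functional as F

def adversarial_train_step(model, attacker, opt_model, opt_attacker,
                           batch, lam=1.0):
    """One alternating round: train the attacker, then train against it.

    Step 1 fits the attacker to infer the sensitive label from the current
    (detached) embeddings.  Step 2 updates the model to keep task accuracy
    while increasing the attacker's loss via the -lam * privacy_loss term.
    """
    # Step 1: update the attacker on detached embeddings.
    emb = model.embed(batch.x).detach()
    attacker_loss = F.cross_entropy(attacker(emb), batch.sensitive)
    opt_attacker.zero_grad()
    attacker_loss.backward()
    opt_attacker.step()

    # Step 2: update the model against the (now frozen) attacker.
    emb = model.embed(batch.x)
    task_loss = F.cross_entropy(model.classify(emb), batch.y)
    privacy_loss = F.cross_entropy(attacker(emb), batch.sensitive)
    opt_model.zero_grad()
    (task_loss - lam * privacy_loss).backward()
    opt_model.step()
```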
Apart from the influential studies discussed above, other researchers have also explored this field [104, 105, 106, 107], introducing various innovations that increase performance and inspire further work.

### _Differential Privacy_

As discussed in Section 2.2, differential privacy has been successfully applied to various machine learning models for privacy protection. However, unlike other machine learning models, in which the data points are usually assumed to be independent, the data samples of GNNs are nodes of graphs connected by edges, making methods developed for other machine learning models difficult to apply directly to GNNs. Recently, a growing number of works have developed differentially private (DP) approaches to solve the privacy-preservation problem specifically for GNNs. The key idea of DP approaches is to add random noise to data samples or intermediate model variables so that, when querying for sensitive information, an adversary only obtains perturbed data, which is useless for attacks. Obviously, too much noise also degrades the performance of GNNs, making it a major issue to find a good balance between data privacy and utility.

Sajadmanesh et al. [34] present a locally private GNN (LPGNN) using the local differential privacy (LDP) technique to protect data privacy. They add noise to both node features and labels to ensure privacy, and design denoising mechanisms so that LPGNN can be trained with the perturbed private data. More specifically, to add noise to node features, they propose an LDP encoder and an unbiased rectifier, where the former injects noise (i.e., \(\mathbf{X}\rightarrow\mathbf{X}^{*}\)) and the latter converts the encoded vector \(\mathbf{X}^{*}\) to an unbiased perturbed vector \(\mathbf{X}^{\prime}\) (i.e., \(\mathbb{E}[\mathbf{X}^{\prime}]=\mathbf{X}\)). Then, to denoise the perturbed node features, they append a multi-hop aggregation layer to the GNN, called KProp, which is able to average out the injected noise. To perturb node labels, the authors use the generalized randomized response mechanism [128]. LPGNN denoises the node labels with another mechanism, label denoising with propagation (called Drop in the paper), which also utilizes KProp. Drop recovers the true labels of nodes based on the fact that nodes with similar labels are more likely to connect with each other. Experimental results show an appropriate privacy-utility trade-off for LPGNN. More information about DP and LDP is presented in Section 2.1.

Sajadmanesh et al. [35] develop a DP-GNN by adding stochastic noise to the aggregator of the GNN such that the existence of a single edge (i.e., edge-level privacy) or a single node and its adjacent edges (i.e., node-level privacy) becomes statistically obscure. Their model consists of three modules: an encoder module, an aggregation module, and a classification module. In the encoder module, private node embeddings are learned independently of the edge information. In the aggregation module, the graph structure is used to compute noisy aggregated node embeddings. In the classification module, a neural network is trained on the private aggregations for node classification without further querying the graph edges.
Differentially private stochastic gradient descent (DP-SGD) is a popular algorithm that has greatly advanced private deep learning. It can guarantee privacy for all data points in the dataset. Yet, modifications must be made for it to be applicable to graph-structured data. This challenge is partially overcome in the work by [108], where the authors focus on the task of graph-level classification. Furthermore, in this work the authors attempt to better understand the differences between SGD and DP-SGD through the lens of GNNExplainer [129]. They find that training with DP-SGD learns similar models, but as tighter privacy guarantees are required (which entail higher levels of noise), the similarity to the non-privately trained models declines. Another investigation proposes DP-Adam [109] (i.e., a differentially private Adam-based optimization for GNNs), which achieves similar performance to [108]. In addition to the previously mentioned and extensively examined influential works in this field, researchers have pursued similar avenues [110, 111, 112, 113], presenting diverse advancements that have resulted in enhanced performance and have further stimulated this direction.
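At its core, DP-SGD replaces the ordinary gradient with a clipped, noised per-sample aggregate. A minimal sketch for graph-level classification follows; batching per graph, the clipping norm, and the noise multiplier are our illustrative assumptions:

```python
import torch

def dp_sgd_step(model, loss_fn, graphs, labels, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    # Per-sample (here: per-graph) gradients, clipped then summed.
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for g, y in zip(graphs, labels):
        model.zero_grad()
        loss_fn(model(g), y).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        total = torch.sqrt(sum(gr.pow(2).sum() for gr in grads))
        scale = (clip_norm / (total + 1e-12)).clamp(max=1.0)  # clip to clip_norm
        for s, gr in zip(summed, grads):
            s.add_(gr * scale)
    # Add calibrated Gaussian noise, average, and take an ordinary optimizer step.
    n = len(graphs)
    for p, s in zip(model.parameters(), summed):
        noise = noise_multiplier * clip_norm * torch.randn_like(s)
        p.grad = (s + noise) / n
    optimizer.step()
```

The clipping bounds each sample's influence on the update, and the added noise is what yields the formal \((\epsilon,\delta)\) guarantee; tighter budgets force a larger noise multiplier, matching the accuracy decline noted above.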
### _Federated Learning_

Federated Learning (FL) [130, 131, 132, 133] is a promising paradigm to protect data privacy, which enables clients (e.g., companies) to train models collaboratively without revealing the raw data. Such a need is prevalent in the real world. For example, several hospitals may want to train a model jointly, but their data is not allowed to be shared due to patient privacy concerns. Under this circumstance, FL comes to the rescue by collectively learning models with decentralized data in a privacy-preserving manner. The general framework [134] is to compute local updates on each client, update the global parameters on a central server, and then distribute the update back to the local clients. In this way, the raw data are only accessible on the local server, preventing information leakage. Inspired by the confluence of the development of federated learning and the popularity of graph learning, interest in the intersection of these two fields, federated graph learning (FGL), has grown rapidly in recent years [37]. Zhang et al. [36] summarize FGL models into three categories based on how the graph data is distributed, namely Inter-graph FL, Intra-graph FL, and Graph-structured FL.

**Inter-graph FL**. Inter-graph FL is designed for scenarios in which clients want to jointly train a GNN model, with each of them having a subset of graph samples. Every client here has local graph samples. Naturally, the general task for this type of GFL is graph-level prediction (i.e., graph classification). A typical example is drug development, where pharmaceutical companies possess confidential datasets, including molecular graphs and their properties. The goal is to utilize the knowledge learned by each organization to build a global GNN model while protecting the raw data. FedGraphNN [114] has provided examples of several drug-related tasks.

**Intra-graph FL**. Intra-graph FL is designed for scenarios in which clients want to jointly train a GNN model, with each of them having a subgraph. Different from inter-graph FL, the subgraphs in intra-graph FL are not independent; they are related to one another. Furthermore, depending on how the subgraphs are related to each other, it is divided into horizontal and vertical intra-graph FL [131]. As the names suggest, the local subgraphs are regarded as horizontally/vertically partitioned from the global graph. More specifically, a horizontal partition means that clients share the same feature and label space but different node ID spaces. In contrast, a vertical partition means that clients possess different feature and label spaces but share the same node ID space. For example, in the horizontal setting, each user has a local social network from different social networking apps, and the local subgraphs together form an overall social network, providing extra structural information. In the vertical setting, the users' properties in different subgraphs (e.g., a local financial graph, a local social graph, a local knowledge graph) are available, providing different perspectives on the user. Various methods have been designed to protect data privacy in horizontal intra-graph FL [115, 116, 117] and vertical intra-graph FL [118, 119, 120].

**Decentralized GFL**. The above categories mainly study the centralized setting, where a central server is required to aggregate local model information from clients, as described in the general framework. Although strategies have been applied to prevent data leakage from the local side (e.g., lossy compression before transferring [135], noise perturbation [136]), the risk remains at the central server, since it could still infer the protected information. It is therefore often not practical for clients to accept one of them as the leader (i.e., the central server). From another practical consideration, the existence of a central server becomes a bottleneck due to computation cost and communication overhead. To solve these issues, several decentralized FGL frameworks have been developed [121, 122], where clients communicate and aggregate information from each other without a central server.

**Applications**. Because of the wide range of applications of federated graph learning, we provide a separate subsection here. Recommendation, particularly collaborative filtering based on the user-item interaction graph, is one of the most crucial applications of GNNs. A federated GNN model is proposed by Huang et al. [123] to ensure personalized recommendations while minimizing the risk of exposure to adversarial attacks. Typically, personalized recommendation requires operations on the entire graph, which could easily lead to the leakage of private user information. To circumvent this, the authors mine the decentralized graph data derived from the global graph. In addition, they introduce a privacy-preserving graph expansion protocol, so that high-order information can be incorporated into the model under privacy protection. Consequently, both the local and global information of the graph are incorporated into the model with privacy protection, at the cost of only a small information loss. Another important task for GNNs is molecular property prediction [122], which often requires a large amount of training data. However, privacy concerns, regulations, and commercial competition hinder the collection of a large and centralized training sample. In the work by He et al. [122], the authors propose SpreadGNN, a decentralized multi-task federated learning GNN model for this task. Notably, SpreadGNN is compatible with partial labels (i.e., missing labels for part of the training sample). In addition, the authors show that SpreadGNN is guaranteed to converge under certain criteria and is effective on multiple datasets with partial labels.
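The per-round logic of the general framework sketched at the start of this subsection (compute local updates, aggregate on a server, redistribute) can be made concrete as follows. This is our illustrative FedAvg-style baseline, not SpreadGNN's decentralized algorithm; client dictionary keys and hyperparameters are assumptions:

```python
import copy
import torch

def fedavg_round(global_model, clients, local_steps=1, lr=0.01):
    # One communication round: local training, then sample-weighted averaging.
    state_sum, total = None, 0
    for client in clients:            # each client holds its own private graph data
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            for graph, y in client["loader"]:
                opt.zero_grad()
                client["loss_fn"](local(graph), y).backward()
                opt.step()
        w = client["num_samples"]
        sd = {k: v.float() * w for k, v in local.state_dict().items()}
        state_sum = sd if state_sum is None else \
            {k: state_sum[k] + sd[k] for k in state_sum}
        total += w
    global_model.load_state_dict({k: v / total for k, v in state_sum.items()})
```

Only model parameters cross the network; the raw graphs never leave the clients, which is the privacy argument of the framework.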
Note that one issue associated with using decentralized datasets in federated GNNs for molecular property prediction is that the datasets obtained from multiple sources can be highly heterogeneous (i.e., different datasets could cover vastly different regions of the chemical space), which could compromise model performance. The authors in [124] tackle this issue by proposing federated learning with instance reweighing (FLIT(+)), which is capable of aligning local training across multiple clients. Federated GNNs have also been explored in the field of human activity recognition from sensor measurements [125]. Two common major obstacles to human activity recognition are noisy data and privacy. To tackle these two issues, GraFeHty [125] is proposed, which utilizes a semi-supervised algorithm for the former issue and the federated learning framework for the latter. Concretely, in the federated learning framework, only the learned representation is transferred out of the device to the central server, so that user privacy can be protected. Federated learning also allows the authors to address limitations of traditional centralized machine learning for human activity recognition, e.g., infrastructure availability, network connectivity, and latency issues.

## 5 Datasets and Applications

This section lists datasets that have been used or could potentially be used to develop privacy attacks and preservation methods for GNNs. They can be divided into Social Network, Citation, User-item, Molecule, and Protein categories. We provide a succinct description of each dataset; their statistics are given in Table II.

**Social Network**. Social network analysis is a crucial domain that often requires the use of GNNs. It can be used to detect sub-communities, help marketing, identify disease propagation, etc. In a social network, nodes are typically users of social media, and edges are the relationships among users. The features of the users in a social network typically include gender, education, age, geographical information, relationship status, etc.

* **Facebook** [25] The Facebook dataset is a small user-relation network extracted from the Facebook social media platform. The nodes represent user accounts, while the edges describe the connectivity. Each user node has various features, including gender, education, hometown, etc. Defending against malicious attacks on real-world social media data is particularly meaningful due to its practical relevance.
* **Twitter** [25] The Twitter dataset is a small user-relation network extracted from the Twitter social media platform. The nodes represent user profiles, while the edges describe the connectivity. Node features are user profile information, and the dataset contains information about circles and ego networks.
* **LastFM** [25, 27] LastFM is an Asian social network, where the nodes are users from Asian countries, and the edges describe mutual follower relationships. Node features are based on the artists liked by the users.
* **Reddit** [137, 60] The Reddit dataset consists of Reddit posts obtained from Sep. 2014. The nodes are the posts, and an edge exists between two posts if the same user commented on both posts. The class label is the community that the post belongs to.
* **Computers** [137] Computers is an Amazon co-purchase graph, in which nodes are merchandise, and an edge exists between two items if they are often bought together. The node features are information extracted from merchandise reviews, and the class labels are the merchandise categories.

**Citation**. In citation graphs, nodes are papers, and edges characterize the citation relationships among papers. Compared to social network data, the consequences of adversarial attacks on citation graphs are much less serious, because the information captured by such datasets is usually not private. However, due to their high accessibility, citation data is still frequently used, and thus we summarize a few datasets below.

* **Planetoid** [25, 26, 20] As one of the most common collections of citation datasets, Planetoid includes Cora, Citeseer, and Pubmed, each of which consists of scientific publications categorized into different classes. The edges describe the citation relationships between the papers. The feature of each node is a \(0/1\)-valued word vector indicating whether a word is present or not.
* **DBLP** [91] DBLP is extracted from a computer science bibliography website. Different from the aforementioned citation graphs, DBLP is heterogeneous, as it has four entity types: authors, papers, terms, and conferences.
* **ogbn-arxiv** [138] A directed citation graph whose nodes are computer science arXiv papers indexed by the Microsoft academic graph (MAG).
* **Aminer** [137] It consists of multiple relational datasets, including citation networks, social networks, etc.

**User-item**. A user-item graph is a bipartite graph describing the relationships between users and the items they interact with. Leveraging the user-item interaction graph can enable recommendation based on collaborative filtering. Privacy attacks targeting such datasets could leak the items liked by certain users, together with item attributes and user attributes.

* **Flixster** [139] The rating data of users for movies. The listed dataset is a small Flixster subset crawled by the authors of [139].
* **Douban** [139] A user-movie interaction graph whose connections are users' comments on the movies.
* **YahooMusic** [139] YahooMusic contains information about the music liked by users.

**Molecule**. Graph-based drug discovery has gained increased attention recently due to the development of GNNs. Molecules can be represented by graphs, in which nodes are atoms and edges are chemical bonds. Unlike the previous datasets, in which all data points are used to form one single graph, each data point in a molecular dataset is a graph.

* **NCI1** [97, 100] The NCI1 dataset contains molecules assessed to be positive or negative for cell lung cancer. In other words, there are only two classes, with either 0 or 1 indicating the cancer-target interactivity.
* **AIDS** [100, 23, 92] The AIDS dataset is constructed from the AIDS Antiviral Screen Database of Active Compounds, where the molecules are specific to AIDS.
* **OVCAR-SH** [97, 100] OVCAR-SH is a database of molecules targeting the ovarian human cancer cell line.

**Protein**. Like molecules, proteins can also be represented as graphs, with nodes typically being amino acids and edges being amino acid bonds or spatial proximity.

* **PROTEINS** [97] It includes two classes: enzymes and non-enzymes. The nodes are amino acids, and two nodes are connected if they are within 6 Angstroms of each other.
* **ENZYMES** [140, 97] A dataset containing protein tertiary structures from the BRENDA enzyme database.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline Dataset Name & Data Domain & \# Graphs & (Avg.) \# Nodes & (Avg.) \# Edges & \# Features & Ref. \\ \hline Facebook & Social Network & 1 & 4,039 & 88,234 & - & [25, 34] \\ \hline Twitter & Social Network & 1 & 81,306 & 1,768,149 & - & [25, 27] \\ \hline LastFM & Social Network & 1 & 7,624 & 27,806 & 7,842 & [25, 27] \\ \hline Reddit & Social Network & 1 & 232,965 & 57,307,946 & 602 & [60, 137] \\ \hline Computers & Social Network & 1 & 13,752 & 245,861 & 767 & [137] \\ \hline Cora & Citation & 1 & 2,708 & 5,429 & 1,433 & [25, 26, 20] \\ \hline Citeseer & Citation & 1 & 3,312 & 4,715 & 3,703 & [25, 26, 20] \\ \hline PubMed & Citation & 1 & 19,717 & 44,338 & 500 & [25, 20] \\ \hline DBLP & Citation & 1 & 17,716 & 105,734 & 1,639 & [91] \\ \hline ogbn-arxiv & Citation & 1 & 169,434 & 1,166,243 & 128 & [138] \\ \hline Aminer & Citation & 1 & 659,574 & 2,875,577 & - & [137] \\ \hline Flixster & User-item & 1 & 6,000 & 26,173 & - & [139] \\ \hline Douban & User-item & 1 & 6,000 & 136,891 & - & [139] \\ \hline YahooMusic & User-item & 1 & 6,000 & 5,335 & - & [139] \\ \hline NCI1 & Molecule & 4,110 & 29.87 & 32.30 & 37 & [97, 100] \\ \hline AIDS & Molecule & 2,000 & 15.69 & 16.20 & 42 & [97, 100] \\ \hline OVCAR-SH & Molecule & 4,052 & 46.67 & 48.70 & 65 & [97, 100] \\ \hline PROTEINS & Protein & 1,113 & 39.06 & 72.82 & 29 & [97] \\ \hline ENZYMES & Protein & 600 & 32.63 & 62.14 & 21 & [97] \\ \hline \end{tabular} \end{table} TABLE II: Basic statistics of datasets that have been used or could potentially be used to benchmark privacy attacks and/or preservation methods on GNNs. We collect the sources of all datasets here.

## 6 Future directions

While many works have demonstrated the applicability and efficiency of privacy-preserved GNNs, newly developed privacy attacks are continuously being introduced that expose vulnerabilities in existing privacy-preservation techniques. Additionally, there are several avenues yet to be explored, and many challenges to be overcome before reaching a desired level of performance in preserving privacy in real-world applications that are deployed with GNNs. Below we summarize future directions that highlight some of these open challenges and important new frontiers.

* **Information leakage for pre-trained GNNs**. Pre-training and model-sharing are often used to improve the performance of GNNs under various tasks, especially when labels are inadequate, which has led to an increasing interest in leveraging self-supervised learning for GNNs [141, 142, 143]. Pre-training [144] can generally be classified into three categories: model-based, mapping-based, and parameter-based methods. The model-based one utilizes the pre-trained source domain as the starting point for the remaining training on the target domain data; this method is also called model-based fine-tuning. The mapping-based approach aligns hidden representations by reducing the difference between the source and target domains. The parameter-based one operates in a multi-task fashion to jointly update a shared network to learn transferable feature representations [145]. However, these strategies are a double-edged sword, because they can lead to private information leakage and compromise privacy. This issue is particularly concerning since the source and target datasets for transfer learning are often from different organizations.
Thus, it is critical to create a safe environment for different organizations to share datasets for building the full transfer learning model while minimizing privacy concerns [145]. Methods have been developed for general deep transfer learning to enhance privacy; for example, Wu et al. [146] and Mou et al. [147] propose methods based on stochastic gradient Langevin dynamics (SGLD), yet counterparts for GNNs still await development.

* **GNN in distributed learning settings**. Federated learning with GNNs has shown promising results in protecting data privacy, especially in healthcare and financial applications. To reiterate, in FL, clients are able to jointly train a GNN model, with each of them having a subgraph or a subset of graph samples. The focus of most existing works is on architecture learning and knowledge sharing for building a global GNN model without compromising the privacy of the raw data. Lyu et al. [148] provide a comprehensive survey of the privacy and robustness of federated learning attacks and defenses. In their survey, they cover threat models, privacy attacks and defenses, and poisoning attacks and defenses. However, we want to point out that very few works focus on attacks on GNN-based federated learning, and more investigation in this area is welcome. We also suggest that it would be beneficial to investigate the privacy preservation of GNNs under other distributed learning settings.
* **Trade-off between privacy and utility**. As a long-standing issue for ethical AI, the trade-off between the ethical implications and the utility of AI models is almost unavoidable. Metrics for evaluating the defense performance against privacy attacks include privacy loss, confidence score, and reconstruction error. The metric for evaluating model performance is classification accuracy or regression loss. However, there is no standard way of measuring the trade-off between the two groups of metrics, and we believe that studies on these aspects would be particularly helpful.
* **Privacy trade-off in Specialized GNNs**. Specialized GNNs have been developed to mitigate a plethora of data quality challenges, such as imbalanced classification [149, 150, 11], mitigating bias [64, 151, 152], heterophily [153, 154, 77], etc. An investigation of the privacy-utility-fairness trade-off in general neural networks was done by Marlotte and Giacomo [155]. In their work, the models under investigation are Simple (S-NN), Fair (F-NN), differentially private (DP-NN), and differentially private and fair (DPF-NN) neural networks. Similar analyses could be conducted on GNN-based models. Recently, a few works have started to explore this area of privacy and fairness with GNNs [156, 157]. From the imbalance perspective, it would be of interest to study disaggregated performances to better understand which nodes are more susceptible to privacy attacks, and how they might align with the majority/minority groups according to sensitive features and/or class labels. Similarly, such analysis could be done according to node homophily [75].
* **Privacy in GNNs for Complex Graphs**. While most efforts investigating privacy attacks and preservation in GNNs have focused on simple graphs, in many real-world applications the complex systems are better represented with complex graphs, for which dedicated GNN efforts have been made, e.g., on hypergraphs [158, 17], multi-dimensional graphs [159], signed graphs [15], dynamic/temporal graphs [160], knowledge graphs [161], and general heterogeneous graphs [162, 163].
It is expected that attack and preservation strategies will vary across these complex networks.

* **Generative AI Impacts on Graph/GNN Privacy Attacks/Preservation**. The recent emergence of generative AI in the image/NLP domains has raised many privacy concerns, especially in the medical/health domain [164]. Also, generated images may include sensitive information that violates companies' copyrights and discloses confidential information [165]. Moreover, the inherent uncertainty in the generation process could even exacerbate the difficulty of designing stable privacy-preserving techniques. Since these generative techniques can be easily adapted to graph-structured data [166], the same privacy concerns may also arise. In molecular generation, the generated molecules may contain confidential substructures. In the social network domain, if the generation process involves user embeddings, the generated content may reflect the profile information of that user's neighborhood.

## 7 Conclusion

In this survey, we present a comprehensive review of the privacy considerations related to graph data and models. We begin by introducing the necessary concepts and notations for understanding the topic of graph privacy. We then provide an overview of various attacks on graph privacy, categorizing them according to the targeted information. We summarize the available techniques for privacy preservation. We also review the datasets and applications that have been used in the study of privacy in graph domains. Finally, we highlight several potential directions for future research in this area. Our hope is that this work will serve as a useful resource for researchers and practitioners interested in this topic, and will encourage further exploration in this promising field.

## Acknowledgment

This research is supported by the National Science Foundation (NSF) under grant number IIS2239881, The Home Depot, and Snap Inc. This manuscript has been co-authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
2309.11928
Video Scene Location Recognition with Neural Networks
This paper provides an insight into the possibility of scene recognition from a video sequence with a small set of repeated shooting locations (such as in television series) using artificial neural networks. The basic idea of the presented approach is to select a set of frames from each scene, transform them by a pre-trained single-image pre-processing convolutional network, and classify the scene location with subsequent layers of the neural network. The considered networks have been tested and compared on a dataset obtained from The Big Bang Theory television series. We have investigated different neural network layers to combine individual frames, particularly AveragePooling, MaxPooling, Product, Flatten, LSTM, and Bidirectional LSTM layers. We have observed that only some of the approaches are suitable for the task at hand.
Lukáš Korel, Petr Pulc, Jiří Tumpach, Martin Holeňa
2023-09-21T09:42:39Z
http://arxiv.org/abs/2309.11928v1
# Video Scene Location Recognition with Neural Networks

###### Abstract

This paper provides an insight into the possibility of scene recognition from a video sequence with a small set of repeated shooting locations (such as in television series) using artificial neural networks. The basic idea of the presented approach is to select a set of frames from each scene, transform them by a pre-trained single-image pre-processing convolutional network, and classify the scene location with subsequent layers of the neural network. The considered networks have been tested and compared on a dataset obtained from The Big Bang Theory television series. We have investigated different neural network layers to combine individual frames, particularly AveragePooling, MaxPooling, Product, Flatten, LSTM, and Bidirectional LSTM layers. We have observed that only some of the approaches are suitable for the task at hand.

+ Footnote †: Copyright ©2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

## 1 Introduction

People watching videos are able to recognize where the current scene is located. When watching a film or series, they can recognize that a new scene takes place at a location they have already seen. Finally, people are able to understand the hierarchy of scenes. All this supports human comprehensibility of videos. The role of location identification in scene recognition by humans motivated our research into scene location classification by artificial neural networks (ANNs). A more ambitious goal would be to make a system able to remember unknown video locations and, using this data, identify video scenes located at the same place and mark them with the same label. This paper reports work in progress in that direction. It describes the employed methodology and presents the first experimental results obtained with six kinds of neural networks. The rest of the paper is organized as follows. The next section is about existing approaches to solving this problem. Section 3 is divided into two parts: the first is about data preparation before its usage in ANNs, and the second is about the design of the ANNs in our experiments. Finally, Section 4, the last section before the conclusion, shows the results of our experiments with these ANNs.

## 2 ANN-Based Scene Classification

The problem of scene classification has been studied for many years. There are many approaches based on neural networks, where an ANN trained on a huge amount of images learns to recognize the type of a given scene (for example, a kitchen, a bedroom, etc.). For this purpose, several datasets are available. One example is [11], but it does not specify locations, so this and similar datasets are not usable for our task. However, our classification problem is different. We want to train an ANN able to recognize a particular location (for example "Springfield-EverGreenTerrace-742-floor2-bathroom"), which can be recorded by a camera from many angles (typically, some objects can be occluded by other objects from some angles). One approach using an ANN to solve this task is described in [1], where convolutional networks were used. The difference from our approach lies, on the one hand, in the extraction and usage of video frames and, on the other hand, in the types of ANN layers. Another approach is described in [4]. The authors propose a high-level image representation, called Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors.
Leveraging the Object Bank representation, good performance on high-level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM.

## 3 Methodology

### Data Preparation

Video data consists of large video files. Therefore, the first task of video data preparation consists in loading only the data that is currently needed. We have evaluated the distribution of the data used for ANN training and found that some scenes have a low occurrence, whereas others occur up to 30 times more frequently. Hence, the second task of video data preparation is to increase the uniformity of their distribution, to prevent biasing the ANN towards the most frequent classes. This is achieved by undersampling the frequent classes in the training data.

The input consists of video files and a text file. The video files are divided into independent episodes. The text file contains information about every scene; each row describes one scene. A scene is understood as a sequence of frames that is not interrupted by any frame with a different scene location label. Every row contains a relative path to the source video file, the frame number where the scene begins, and the count of its frames. Figure 1 outlines how frames are extracted and prepared for the ANNs. For ANN training, we select from each target scene a constant count of 20 frames (denoted # frames in Figure 1). To get the most informative representation of the considered scene, frames for sampling are taken from the whole length of the scene. This, in particular, prevents selecting frames only from a short time interval. Each scene has its own frame distance computed from its frame count:

\[SL=\frac{SF}{F},\]

where \(SF\) is the count of scene frames, \(F\) is the considered constant count of selected frames, and \(SL\) is the distance between two selected frames in the scene. After frame extraction, every frame is reshaped into an input 3D matrix for the ANN. Finally, the reshaped frames are merged into one input matrix for the neural network.
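A minimal sketch of this sampling step follows, assuming OpenCV for frame access; resizing to \(224\times 224\) as the reshaping step is our assumption based on the network input described below:

```python
import cv2
import numpy as np

def sample_scene_frames(video_path, start_frame, scene_frames, f=20):
    """Select F evenly spaced frames from a scene; the step is SL = SF / F."""
    sl = scene_frames / f                      # distance between picked frames
    cap = cv2.VideoCapture(video_path)
    frames = []
    for i in range(f):
        idx = start_frame + int(i * sl)
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # jump to the i-th sampled frame
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, (224, 224)))  # BGR, VGG19 input size
    cap.release()
    return np.stack(frames)                    # shape (F, 224, 224, 3)
```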
### Used Neural Networks and Their Design

Our first idea was to create a complex neural network based on different layers. However, there were too many parameters to train in view of the amount of data that we had. Therefore, we decided to use transfer learning from a pretrained network. Because our data are actually images, we considered only ANNs pretrained on image datasets, in particular ResNet50 [9], ResNet101 [9], and VGGNet [2]. Finally, we decided to use VGGNet due to its small size. Hence, the ANNs that we trained on our data are composed of two parts. The first part, depicted in Figure 2, is based on the VGGNet. At the input, we have 20 frames (resolution \(224\times 224\), BGR colors) from one scene. This is processed by a pretrained VGG19 neural network without its two top layers, which were removed for transfer learning. Its output is a vector of size 4096. For the 20 input frames, we thus get 20 vectors of size 4096, which are merged into a 2D matrix of size \(20\times 4096\). For the second part, forming the upper layers of the final network, we have considered six possibilities: a product layer, a flatten layer, an average-pooling layer, a max-pooling layer, an LSTM layer, and a bidirectional LSTM layer. All of them, as well as the VGGNet, will be described below. Each of the listed layers is preceded by a Dense layer. The Dense layer returns a matrix of size \(20\times 12\), where the number 12 is equal to the number of classes. With this output, every model works differently.

Figure 1: Input preparation for a neural network.

Figure 2: First, untrainable part of our neural network, where the Input Layer represents a frame with resolution \(224\times 224\) in BGR colors and the output is a vector of length 4096, which is the output of the VGG19 network without its two last layers.

#### VGGnet

The VGGNets [2] were originally developed for object recognition and detection. They have deep convolutional architectures with small sizes of the convolutional kernel \((3\times 3)\), stride \((1\times 1)\), and pooling window \((2\times 2)\). There are different network structures, ranging from 11 to 19 layers. The model capability increases when the network is deeper, but this imposes a heavier computational cost. We have used the VGG19 model (a VGG network with 19 layers) from the Keras library. This model [3] won the 1st and 2nd places in the 2014 ImageNet Large Scale Visual Recognition Challenge in the two categories called **object localization** and **image classification**, respectively. It achieves a 92.7% top-5 test accuracy in image classification on the ImageNet dataset, which contains 14 million images belonging to 1000 classes. The architecture of the VGG19 model is depicted in Figure 3.

Figure 3: Architecture of the used VGG19 model [10]; in our network it is used without the FC1, FC2 and Softmax layers.

#### 3.2.1 Product array

In this approach, we apply a product array layer to all output vectors from the dense layer. A product array layer computes the product of all values in a chosen dimension of an n-dimensional array and returns an (n-1)-dimensional array. A model with a product layer is outlined in Figure 4. The output from a product layer is one number for each class, i.e., scene location, so the result is a vector with 12 numbers. It returns a probability distribution over the set of scene locations.

Figure 4: Trainable part of the neural network based on a product layer.

#### 3.2.2 Flatten

In this approach, we apply a flatten layer to all output vectors from the dense layer. A flatten layer creates one long vector from a matrix by placing all its rows in sequence. A model with a flatten layer is outlined in Figure 5. After the input and a dense layer, a flatten layer follows, which returns a long vector with \(12*20\) numbers in this case. It is followed by a second dense layer. Its output again has a dimension equal to the number of classes, and it returns a probability distribution over the set of scene locations.

Figure 5: Trainable part of the neural network based on a flatten layer.

#### 3.2.3 Average Pooling

In this approach, we apply average pooling to all output vectors from the dense layer part of the network (Figure 6). An average-pooling layer computes the average of values assigned to subsets of its preceding layer such that:

* they partition the preceding layer, i.e., that layer equals their union and they are mutually disjoint;
* they are identically sized.

Taking into account these two conditions, the size \(p_{1}\times\ldots\times p_{D}\) of the preceding layer and the size \(r_{1}\times\ldots\times r_{D}\) of the sets forming its partition determine the size of the average-pooling layer. In this case, the average-pooling layer's forming sets have size \(20\times 1\). Using this size in the average-pooling layer, we again get one number for each class, which yields a probability distribution over the set of scene locations.

Apart from average pooling, we have also tried max pooling. However, it led to substantially worse results. Its classification of the scene location was typically based on people or items in the foreground, not on the scene as a whole. Although using the average-pooling layer is simple, it gives acceptable results. The number of trainable parameters of the network is then low, which makes it suitable for our comparatively small dataset.
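To make the assembly of these variants concrete, here is a minimal Keras sketch of the frozen VGG19 trunk and the three simple heads described above. Layer choices such as the `Lambda` product and the final `Softmax` are our illustrative assumptions, not the authors' exact code:

```python
import tensorflow as tf
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import VGG19

def build_model(head="average", frames=20, num_classes=12):
    # Frozen VGG19 trunk: keep everything up to the first 4096-d FC layer.
    vgg = VGG19(weights="imagenet")
    extractor = Model(vgg.input, vgg.get_layer("fc1").output)
    extractor.trainable = False

    inp = Input(shape=(frames, 224, 224, 3))        # 20 frames per scene
    feats = layers.TimeDistributed(extractor)(inp)  # -> (frames, 4096)
    per_frame = layers.Dense(num_classes)(feats)    # -> (frames, 12)

    if head == "product":     # multiply the 20 per-frame scores per class
        merged = layers.Lambda(lambda t: tf.reduce_prod(t, axis=1))(per_frame)
    elif head == "flatten":   # concatenate rows, then a second dense layer
        merged = layers.Dense(num_classes)(layers.Flatten()(per_frame))
    else:                     # average over the time axis
        merged = layers.GlobalAveragePooling1D()(per_frame)

    return Model(inp, layers.Softmax()(merged))
```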
#### 3.2.4 Long Short Term Memory

An LSTM layer is used for the classification of sequences of feature vectors or, equivalently, of multidimensional time series with discrete time. Alternatively, that layer can also be employed to obtain sequences of such classifications, i.e., in situations when the neural network input is a sequence of feature vectors and its output is a sequence of classes, in our case of scene locations. LSTM layers are intended for recurrent signal propagation, and differently from other commonly encountered layers, they consist not of simple neurons, but of units with their own inner structure. Several variants of such a structure have been proposed (e.g., [5, 8]), but all of them include at least the following four components:

* _Memory cells_ can store values, aka cell states, for an arbitrary time. They have no activation function, thus their output is actually a biased linear combination of unit inputs and of the values coming through recurrent connections.
* _Input gate_ controls the extent to which values from the previous unit within the layer or from the preceding layer influence the value stored in the memory cell. It has a sigmoidal activation function, which is applied to a biased linear combination of the input and recurrent connections, though its bias and synaptic weights are specific and in general different from the bias and synaptic weights of the memory cell.
* _Forget gate_ controls the extent to which the memory cell state is suppressed. It again has a sigmoidal activation function, which is applied to a specific biased linear combination of input and recurrent connections.
* _Output gate_ controls the extent to which the memory cell state influences the unit output. Also this gate has a sigmoidal activation function, which is applied to a specific biased linear combination of input and recurrent connections, and subsequently composed either directly with the cell state or with its sigmoidal transformation, using a different sigmoid than is used by the gates.

Hence, using LSTM layers is a more sophisticated approach compared to simple average pooling. An LSTM layer can keep a hidden state through time with information about previous frames. Figure 7 shows that the input to an LSTM layer is a 2D matrix. Its rows are ordered by the time of the frames from the input scene. Every input frame in the network is represented by one vector. The output from the LSTM layer is a vector of the same size as in the previous approaches, which returns a probability distribution over the set of scene locations.

#### 3.2.5 Bidirectional Long Short Term Memory

An LSTM, due to its hidden state, preserves information from inputs that have already passed through it. A unidirectional LSTM only preserves information from the past, because the only inputs it has seen are from the past. A bidirectional LSTM runs inputs in two ways, one from the past to the future and one from the future to the past. To this end, it combines two hidden states, one for each direction.
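A minimal Keras sketch of these two recurrent heads follows; the unit count and the `merge_mode` keeping the output size at 12 are our assumptions, chosen to match the text:

```python
from tensorflow.keras import layers

def recurrent_head(per_frame, num_classes=12, bidirectional=False):
    # per_frame: tensor of shape (frames, num_classes) from the Dense layer.
    rnn = layers.LSTM(num_classes)          # only the final hidden state is used
    if bidirectional:
        # merge_mode="ave" keeps the output size equal to num_classes.
        rnn = layers.Bidirectional(rnn, merge_mode="ave")
    return layers.Softmax()(rnn(per_frame))
```

In the `build_model` sketch above, setting `merged = recurrent_head(per_frame, bidirectional=True)` would yield the bidirectional variant.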
Figure 8 shows that the input to a bidirectional LSTM layer is the same as the input to an LSTM layer. Every input frame in the network is represented by one vector. The output from the bidirectional LSTM layer is a vector of the same size as in the previous approaches, which returns a probability distribution over the set of scene locations.

Figure 6: Trainable part of the neural network based on an average-pooling layer.

Figure 7: Trainable part of the neural network based on an LSTM layer.

Figure 8: Trainable part of the neural network based on a bidirectional LSTM layer.

## 4 Experiments

### Experimental Setup

The ANNs for scene location classification were implemented in the Python language using the TensorFlow and Keras libraries. Neural network training was accelerated using an NVIDIA GPU. The versions of the employed hardware and software are listed in Table 1.

\begin{table} \begin{tabular}{|l|l|} \hline CPU cores & 2 \\ \hline GPU compute capability & 3.5 and higher \\ \hline OS & Linux 5.4.0 \\ \hline CUDA & 11.3 \\ \hline Python & 3.8.6 \\ \hline TensorFlow & 2.3.1 \\ \hline Keras & 2.4.0 \\ \hline OpenCV & 4.5.2 \\ \hline \end{tabular} \end{table} Table 1: Versions of the employed hardware and software

For image preparation, OpenCV and NumPy were used. The routine for preparing frames is a generator. It has lower capacity requirements, because data are loaded just in time, when they are needed, and memory is released after the data have been used by the ANN. All non-image information about the inputs (video location, scene information, etc.) is processed in text format with Pandas. We have 17 independent datasets prepared by ourselves from proprietary videos of The Big Bang Theory series; thus, the datasets cannot be made public. Each dataset originates from one episode of the series. Each experiment was trained with one dataset, so the results are independent as well, and we can compare the behavior of the models across different datasets. Our algorithm to select data in the training routine is based on oversampling: it randomly selects a target class and then randomly selects, with replacement, a source scene of that class from the whole training dataset. This algorithm is applied due to the unbalanced proportion of different target classes. Thanks to this method, all targets are distributed equally and the network does not overfit to a highly represented class.
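A minimal sketch of this oversampling routine, written as a generator as described above:

```python
import random

def balanced_scene_generator(scenes_by_class):
    """Pick a location class uniformly, then a scene of that class with
    replacement, so all classes are sampled equally often."""
    classes = list(scenes_by_class)
    while True:
        label = random.choice(classes)                  # uniform over classes
        scene = random.choice(scenes_by_class[label])   # with replacement
        yield scene, label
```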
### Results

The differences between the models considered in the second, trained part of the network were tested for significance by the Friedman test. The basic null hypothesis that the mean classification accuracy of all 6 models coincides was strongly rejected, with the achieved significance \(p=2.8\times 10^{-13}\). For the post-hoc analysis, we employed the Wilcoxon signed rank test with a two-sided alternative for all 15 pairs of the considered models, because of the inconsistency of the more commonly used mean-ranks post-hoc test, which Benavoli et al. recently pointed out [6]. For correction for multiple hypotheses testing, we used the Holm method [7]. The results of the comparison between the models are included in Table 2. Summary statistics of the predictive accuracy of classification on all 17 episode datasets are in Table 3. Every experiment was performed on every dataset at least 7 times. The tables are complemented with results for individual episodes, depicted in box plots.

The model with a max-pooling layer had the worst results (Figure 12) of all experiments. Its overall mean accuracy was around 10%, which is only slightly higher than random choice, i.e., \(1/12\). The model was not able to achieve an accuracy better than 20%. Its results were stable, and the standard deviation was very low. Slightly better results (Figure 10) were obtained by the model with a flatten layer: it was sometimes able to achieve high accuracy, but its standard deviation was very high. On the other hand, its results for some other episodes were not better than those of the max-pooling model. A better solution is the product model, whose predictive accuracy (Figure 9) was higher than 80% for several episodes. On the other hand, for other episodes it had only slightly better results than the flatten model, and it had the highest standard deviation among all considered models. The most stable results (Figure 11) with good accuracy were achieved by the model based on an average-pooling layer. Its mean accuracy was 32%, and for no episode was the accuracy substantially different. The model with a unidirectional LSTM layer had the second highest mean accuracy among our models (Figure 13). Its internal memory brings an advantage over the previous approaches, with a mean accuracy over 40%, though also a comparatively high standard deviation. The highest mean accuracy was achieved by the model with a bidirectional LSTM layer (Figure 14). It had a similar standard deviation to the one with a unidirectional LSTM, but a mean accuracy of nearly 50%.

## 5 Conclusion and Future Research

This paper provided an insight into the possibility of using artificial neural networks for scene location recognition from a video sequence with a small set of repeated shooting locations (such as in television series). Our idea was to select more than one frame from each scene and classify the scene using that sequence of frames. We used a pretrained VGG19 network without its two last layers. Its outputs were used as the input to the trainable part of our neural network architecture. We have designed six neural network models with different layer types. We have investigated different neural network layers to combine video frames, in particular average-pooling, max-pooling, product, flatten, LSTM, and bidirectional LSTM layers. The considered networks have been tested and compared on a dataset obtained from The Big Bang Theory television series.

The model with a max-pooling layer was not successful; its accuracy was the lowest of all models. The models with a flatten or product layer were very unstable, and their standard deviation was very large. The most stable among all models was the one with an average-pooling layer. The models with a unidirectional LSTM and a bidirectional LSTM had a similar standard deviation of the accuracy. The model with a bidirectional LSTM had the highest accuracy among all considered models. In our opinion, this is because its internal memory cells preserve information in both directions. These results show that models with internal memory are able to classify with a higher accuracy than models without internal memory.

Our method may have limitations due to the chosen pretrained ANN and the low dimension of some neural layer parts. In future research, it is desirable to achieve higher accuracy in scene location recognition. This task may require modifying model parameters or using other architectures. It may also require other pretrained models or a combination of several pretrained models. It is also desirable that, if the ANN detects an unknown scene, it remembers it and next time properly recognizes a scene from the same location.
## Acknowledgments

The research reported in this paper has been supported by the Czech Science Foundation (GACR) grant 18-18080S. Computational resources were supplied by the project "e-Infrastruktura CZ" (e-INFRA LM2018140) provided within the program Projects of Large Research, Development and Innovations Infrastructures. Computational resources were provided by the ELIXIR-CZ project (LM2018131), part of the international ELIXIR infrastructure.

\begin{table} \begin{tabular}{l r r r r r r r} \hline \hline & Product & Flatten & Average & Max & LSTM & BidirectionalLSTM & SummaryScore \\ \hline Product & X & **16** & \(6\) & **16** & \(5\) & 1 & 44 \\ Flatten & 1 & X & 0 & _10_ & 0 & 0 & 11 \\ Average & _11_ & **17** & X & **17** & 3 & 1 & 49 \\ Max & 1 & \(6\) & 0 & X & 0 & 0 & 7 \\ LSTM & _12_ & **17** & 14 & **17** & X & 3 & 63 \\ BidirectionalLSTM & **16** & **17** & **15** & **17** & _14_ & X & 79 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of accuracy results on all 17 episode datasets. The values in the table are counts of datasets in which the model in the row has a higher accuracy than the model in the column. If the difference is not significant in the Wilcoxon test, then the count is in italics. If the difference is significant, then the higher count is in bold.

\begin{table} \begin{tabular}{l r r r r r} \hline \hline model & mean & std & 25\% & 50\% & 75\% \\ \hline Product & 43.7 & 38.4 & 4.6 & 32.4 & 85.2 \\ Flatten & 23.6 & 30.8 & 1.0 & 5.1 & 39.6 \\ Average & 32.2 & 8.1 & 26.5 & 31.5 & 37.1 \\ Max & 9.3 & 2.9 & 8.1 & 9.3 & 10.9 \\ LSTM & 40.7 & 25.2 & 19.7 & 39.9 & 59.4 \\ BidirectionalLSTM & 47.8 & 25.1 & 29.6 & 50.5 & 67.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Aggregated predictive accuracy over all 17 datasets [%]

Figure 9: Box plot with results obtained using the product model.

Figure 10: Box plot with results obtained using the flatten model.

Figure 11: Box plot with results obtained using the average-pooling model.

Figure 12: Box plot with results obtained using the max-pooling model.

Figure 13: Box plot with results obtained using the LSTM model.

Figure 14: Box plot with results obtained using the bidirectional LSTM model.
2303.00498
Adaptive Hybrid Spatial-Temporal Graph Neural Network for Cellular Traffic Prediction
Cellular traffic prediction is an indispensable part of intelligent telecommunication networks. Nevertheless, due to the frequent user mobility and complex network scheduling mechanisms, cellular traffic often inherits complicated spatial-temporal patterns, making the prediction incredibly challenging. Although recent advanced algorithms such as graph-based prediction approaches have been proposed, they frequently model spatial dependencies based on static or dynamic graphs and neglect the coexisting multiple spatial correlations induced by traffic generation. Meanwhile, some works lack the consideration of the diverse cellular traffic patterns, resulting in suboptimal prediction results. In this paper, we propose a novel deep learning network architecture, Adaptive Hybrid Spatial-Temporal Graph Neural Network (AHSTGNN), to tackle the cellular traffic prediction problem. First, we apply adaptive hybrid graph learning to learn the compound spatial correlations among cell towers. Second, we implement a Temporal Convolution Module with multi-periodic temporal data input to capture the nonlinear temporal dependencies. In addition, we introduce an extra Spatial-Temporal Adaptive Module to conquer the heterogeneity lying in cell towers. Our experiments on two real-world cellular traffic datasets show AHSTGNN outperforms the state-of-the-art by a significant margin, illustrating the superior scalability of our method for spatial-temporal cellular traffic prediction.
Xing Wang, Kexin Yang, Zhendong Wang, Junlan Feng, Lin Zhu, Juan Zhao, Chao Deng
2023-02-28T06:46:50Z
http://arxiv.org/abs/2303.00498v1
# Adaptive Hybrid Spatial-Temporal Graph Neural Network for Cellular Traffic Prediction

###### Abstract

Cellular traffic prediction is an indispensable part of intelligent telecommunication networks. Nevertheless, due to the frequent user mobility and complex network scheduling mechanisms, cellular traffic often inherits complicated spatial-temporal patterns, making the prediction incredibly challenging. Although recent advanced algorithms such as graph-based prediction approaches have been proposed, they frequently model spatial dependencies based on static or dynamic graphs and neglect the coexisting multiple spatial correlations induced by traffic generation. Meanwhile, some works lack the consideration of the diverse cellular traffic patterns, resulting in suboptimal prediction results. In this paper, we propose a novel deep learning network architecture, Adaptive Hybrid Spatial-Temporal Graph Neural Network (AHSTGNN), to tackle the cellular traffic prediction problem. First, we apply adaptive hybrid graph learning to learn the compound spatial correlations among cell towers. Second, we implement a Temporal Convolution Module with multi-periodic temporal data input to capture the nonlinear temporal dependencies. In addition, we introduce an extra Spatial-Temporal Adaptive Module to conquer the heterogeneity lying in cell towers. Our experiments on two real-world cellular traffic datasets show AHSTGNN outperforms the state-of-the-art by a significant margin, illustrating the superior scalability of our method for spatial-temporal cellular traffic prediction.

Cellular Traffic Prediction, Spatial-Temporal Data, Graph Neural Network, Mobile Network, Deep Learning

## I Introduction

Total global mobile data traffic reached 67EB per month by the end of 2021, and is projected to grow around 4.2-fold to reach 282EB per month in 2027 [1]. The explosive growth of traffic not only brings a huge demand for network capacity, but also brings challenges for telecom network management and resource allocation. As a crucial aspect of telecom network operation, traffic prediction is essential for intelligent wireless networks. Accurate cellular traffic prediction plays an important role in network planning, traffic scheduling, network fault diagnosis, reducing operation cost, etc.

However, it is extremely challenging to predict cellular traffic for several reasons. First, cellular traffic exhibits nonlinear temporal dependencies, since mobile data traffic is extremely dynamic. For instance, a user can consume a large volume of data for a moment via a given cell tower. At the next moment, this user may stop the connection or migrate to a new cell tower, causing a certain amount of traffic to disappear suddenly [2]. The discontinuous nature of data usage makes the inherent temporal dependency of mobile traffic a complex nonlinear and unstable problem. Second, frequent user mobility and complicated network scheduling mechanisms bring complex spatial correlations between cell towers. As shown in Fig. 1, different cell towers in a wireless network maintain distinct coverage zones. Users can attach to a cell tower within its coverage area to access mobile network services and consume traffic. Due to the limited coverage of wireless signals, users could switch between multiple cell towers as they migrate between areas, resulting in the spatial correlations of cellular traffic.
Moreover, users can easily travel across the city within half an hour thanks to efficient urban transportation, which brings spatial dependencies even with distant cell towers [3]. For a better user experience, the wireless network scheduling mechanism may also hand over the user to a closer cell tower, a cell tower with fewer users, or a distant cell tower with a stronger signal, etc., which increases the complexity of the spatial correlation. Third, the different capacities, geographical locations, and surroundings of cell towers make their data traffic patterns diverse, which is called heterogeneity. For example, the data traffic of a cell tower near a shopping mall shows a significant increase on weekends compared with weekdays. Meanwhile, it is the opposite for a cell tower located in a subway station, whose weekday data traffic is regularly higher than that of the weekend, with obvious morning and evening peaks.

Recently, deep learning-based methods for mobile traffic prediction have attracted the interest of researchers due to their powerful ability to capture intricate data patterns. Typically,

Fig. 1: An example of cell tower distribution in a region. Purple, orange, and green ellipses indicate the coverage of cell towers A, B, and C. Blue ellipses denote the coverage of other cell towers. The coverage areas of different cell towers partially overlap, so users can switch between cell towers and still maintain the connection.
2309.07390
Unleashing the Power of Depth and Pose Estimation Neural Networks by Designing Compatible Endoscopic Images
Deep learning models have witnessed the depth and pose estimation framework on unannotated datasets as an effective pathway to succeed in endoscopic navigation. Most current techniques are dedicated to developing more advanced neural networks to improve the accuracy. However, existing methods ignore the special properties of endoscopic images, resulting in an inability to fully unleash the power of neural networks. In this study, we conduct a detailed analysis of the properties of endoscopic images and improve the compatibility of images and neural networks, to unleash the power of current neural networks. First, we introduce the Mask Image Modelling (MIM) module, which inputs partial image information instead of complete image information, allowing the network to recover global information from partial pixel information. This enhances the network's ability to perceive global information and alleviates the phenomenon of local overfitting in convolutional neural networks due to local artifacts. Second, we propose a lightweight neural network to enhance the endoscopic images, to explicitly improve the compatibility between images and neural networks. Extensive experiments are conducted on three public datasets and one in-house dataset, and the proposed modules improve baselines by a large margin. Furthermore, the enhanced images we propose, which have higher network compatibility, can serve as an effective data augmentation method; they are able to extract more stable feature points in traditional feature point matching tasks and achieve outstanding performance.
Junyang Wu, Yun Gu
2023-09-14T02:19:38Z
http://arxiv.org/abs/2309.07390v1
# Unleashing the Power of Depth and Pose Estimation Neural Networks by Designing Compatible Endoscopic Images

###### Abstract

Deep learning models have witnessed the depth and pose estimation framework on unannotated datasets as an effective pathway to succeed in endoscopic navigation. Most current techniques are dedicated to developing more advanced neural networks to improve the accuracy. However, existing methods ignore the special properties of endoscopic images, resulting in an inability to fully unleash the power of neural networks. In this study, we conduct a detailed analysis of the properties of endoscopic images and improve the compatibility of images and neural networks, to unleash the power of current neural networks. First, we introduce the Mask Image Modelling (MIM) module, which inputs partial image information instead of complete image information, allowing the network to recover global information from partial pixel information. This enhances the network's ability to perceive global information and alleviates the phenomenon of local overfitting in convolutional neural networks due to local artifacts. Second, we propose a lightweight neural network to enhance the endoscopic images, to explicitly improve the compatibility between images and neural networks. Extensive experiments are conducted on three public datasets and one in-house dataset, and the proposed modules improve baselines by a large margin. Furthermore, the enhanced images we propose, which have higher network compatibility, can serve as an effective data augmentation method; they are able to extract more stable feature points in traditional feature point matching tasks and achieve outstanding performance.

## 1 Introduction

With the development of precision medicine, minimally invasive surgery has become a major direction in medicine. Endoluminal intervention aims to reach a lesion through the body's cavity or lumen for biopsy or treatment. As a non-invasive intervention tool, endoscopes play an important role in endoluminal treatment. However, the pathways of luminal structures can be complex; how to design a suitable navigation algorithm to guide doctors in manipulating endoscopes to reach the target area is a clinical challenge. For visual navigation, depth and pose estimation is an important challenge. The depth information of endoscopic images can measure the distance between the endoscope and the organ wall to avoid collision damage between the instrument and the patient's organs; pose information can track the position of the endoscope in real time and guide doctors to the target area according to the preoperatively planned path. Moreover, robust pose and depth estimation techniques enable advanced applications like augmented reality and automated medical interventions, revolutionizing the field of endoscopy and pushing the boundaries of medical innovation.

Traditional multiview stereo is a fascinating area of computer vision research that aims to reconstruct the three-dimensional structure of a scene from multiple images, e.g., structure from motion (SfM) [1] and simultaneous localization and mapping (SLAM) [2]. These methods leverage the power of multiple viewpoints to infer the depths and positions of objects in the scene. By analyzing the correspondences between points in different views and incorporating geometric constraints, traditional multiview stereo methods can estimate accurate depth maps and camera poses.
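As an illustration of this classical correspondence-based pipeline, a minimal two-view sketch using OpenCV follows; the feature type, matcher, and thresholds are our illustrative choices, and `K` denotes the camera intrinsic matrix:

```python
import cv2
import numpy as np

def two_view_pose(img1, img2, K):
    # Detect and describe keypoints in both frames.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    # Match descriptors and keep mutually consistent matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Estimate the essential matrix with RANSAC, then recover R and t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # rotation and unit-scale translation between the views
```

Such pipelines depend entirely on finding enough well-distributed keypoint matches, which is precisely what endoscopic imagery tends to lack.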
However, due to the sparse and uneven distribution of feature points in endoscopy, traditional methods cannot achieve satisfactory results. In recent years, with the development of deep learning, learning-based methods for estimating pose and depth have developed rapidly. These methods [3, 4, 5, 6, 7, 8] take full advantage of the fundamental principles of SfM and train the depth and pose neural networks by minimizing the appearance difference between consecutive target and source frames. Although many works focus on designing more advanced networks for depth and pose estimation tasks, in endoscopic scenarios we believe that, in addition to considering more sophisticated networks, it is crucial to address the question of whether the original endoscopic images are compatible with current convolutional neural networks. Due to the presence of artifacts and sparse features inherent to endoscopic scenes, the original endoscopic images may introduce much noise into the neural network. In addition, within
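As a concrete illustration of the masking idea behind the MIM module described above, the following is a minimal sketch that zeroes out random square patches so a network must recover global structure from partial pixel information; the patch size and mask ratio here are hypothetical choices, not the paper's settings.

```python
import torch

def random_patch_mask(images, patch_size=16, mask_ratio=0.5):
    """Zero out a random subset of square patches so the network must
    recover global information from the remaining pixels."""
    b, _, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    keep = (torch.rand(b, 1, gh, gw) > mask_ratio).float()  # per-patch keep/drop
    # Upsample the patch-level decisions to pixel resolution.
    mask = keep.repeat_interleave(patch_size, dim=2).repeat_interleave(patch_size, dim=3)
    return images * mask, mask

# Example: mask half of the 16x16 patches in a batch of endoscopic frames.
frames = torch.rand(4, 3, 256, 320)
masked_frames, mask = random_patch_mask(frames)
```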
2309.09550
Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks
The human brain can self-organize rich and diverse sparse neural pathways to incrementally master hundreds of cognitive tasks. However, most existing continual learning algorithms for deep artificial and spiking neural networks are unable to adequately auto-regulate the limited resources in the network, which leads to a drop in performance and a rise in energy consumption as the number of tasks increases. In this paper, we propose a brain-inspired continual learning algorithm with adaptive reorganization of neural pathways, which employs Self-Organizing Regulation networks to reorganize a single, limited Spiking Neural Network (SOR-SNN) into rich sparse neural pathways to efficiently cope with incremental tasks. The proposed model demonstrates consistent superiority in performance, energy consumption, and memory capacity on diverse continual learning tasks ranging from child-like simple to complex tasks, as well as on the generalized CIFAR100 and ImageNet datasets. In particular, the SOR-SNN model excels at learning more complex tasks as well as a larger number of tasks, and is able to integrate past learned knowledge with information from the current task, showing a backward transfer ability that facilitates the old tasks. Meanwhile, the proposed model exhibits a self-repairing ability against irreversible damage: for pruned networks, it can automatically allocate new pathways from the retained network to recover the memory of forgotten knowledge.
Bing Han, Feifei Zhao, Wenxuan Pan, Zhaoya Zhao, Xianqi Li, Qingqun Kong, Yi Zeng
2023-09-18T07:56:40Z
http://arxiv.org/abs/2309.09550v2
# Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks ###### Abstract The human brain can self-organize rich and diverse sparse neural pathways to incrementally master hundreds of cognitive tasks. However, most existing continual learning algorithms for deep artificial and spiking neural networks are unable to adequately auto-regulate the limited resources in the network, which leads to a drop in performance and a rise in energy consumption as the number of tasks increases. In this paper, we propose a brain-inspired continual learning algorithm with adaptive reorganization of neural pathways, which employs Self-Organizing Regulation networks to reorganize a single, limited Spiking Neural Network (SOR-SNN) into rich sparse neural pathways to efficiently cope with incremental tasks. The proposed model demonstrates consistent superiority in performance, energy consumption, and memory capacity on diverse continual learning tasks ranging from child-like simple to complex tasks, as well as on the generalized CIFAR100 and ImageNet datasets. In particular, the SOR-SNN model excels at learning more complex tasks as well as a larger number of tasks, and is able to integrate past learned knowledge with information from the current task, showing a backward transfer ability that facilitates the old tasks. Meanwhile, the proposed model exhibits a self-repairing ability against irreversible damage: for pruned networks, it can automatically allocate new pathways from the retained network to recover the memory of forgotten knowledge. ## Keywords Self-organized Regulation, Continual Learning, Reorganize Sparse Neural Pathways, Spiking Neural Networks, Child-like Simple-to-complex Cognitive Tasks ## 1 Introduction The human brain is the ultimate self-organizing system [1, 2]. The brain has the remarkable ability to coordinate 86 billion neurons with over 100 trillion synaptic connections, self-organizing them into dynamic neural circuits for different cognitive functions [3]. Throughout the human lifetime, neural connections continue to adaptively reorganize [4], driven by genes and the external environment [5]. Under the central regulation of multiple communication modes such as feedback mechanisms [6], synchronous oscillations [7] and the third regulation factor [8], large numbers of neural connections are temporarily disconnected, leading to self-organized convergence that forms task-specific sparse neural circuits [9, 10]. Even though the number of neurons no longer increases in adulthood [11], humans still possess the capacity for lifelong learning, from simple movement and perception to complex reasoning, decision making and social cognition. Although existing artificial neural network continual learning algorithms have proposed some solutions inspired by brain mechanisms, they differ from the lifelong learning process of the brain. Most of them are based on fixed dense network structures or mask-preserving subnetworks and lack the capacity to self-organize for discovering adaptive connections and selecting efficient neural pathways. In addition, the vast majority of existing continual learning algorithms are based on Deep Neural Networks (DNNs), with little exploration of Spiking Neural Networks (SNNs). DNN-based continual learning methods have mainly been inspired by two categories of brain mechanisms: synaptic plasticity and structural plasticity. 
Synapses are carriers of memory [12], and synapse-based continual learning algorithms can be divided into synaptic importance measures and knowledge distillation. The former restricts the plasticity of important synapses [13, 14, 15], while the latter uses "soft-supervised" information from old tasks to overcome catastrophic forgetting [16, 17]. Some recently proposed dual-network continual learning algorithms [18, 19, 20, 21] use an additional network to generate the main network weights or weight regulation coefficients. However, these algorithms use fixed dense networks for all tasks, lacking sufficient memory capacity and brain-inspired sparsity to learn large-scale tasks. Inspired by the dynamic structural plasticity mechanisms of the brain, other algorithms grow new network structures for new tasks [22, 23]; as a result, network consumption skyrockets with the number of tasks, which is inconsistent with the energy efficiency of the brain. To solve this problem, subnetwork selection algorithms have been proposed that select a sparse subnetwork structure for each task by evolution [24], pruning [25, 26], or reinforcement learning [27, 28]. These algorithms reduce energy consumption but need to store a subnetwork mask for each task, and the subnetworks of all tasks are selected from the initial network, preventing knowledge transfer between tasks. Spiking neural networks (SNNs), as the third generation of artificial neural networks [29], simulate the discrete spiking information transfer mechanism of the brain [30]. Their basic unit, the spiking neuron, integrates rich spatio-temporal information and is more biologically plausible and energy efficient. As a brain-inspired cognitive computing platform, SNNs have achieved performance comparable to DNNs in classification [31, 32], reinforcement learning [33, 34] and social cognition [35] modelling. Among the few existing SNN-based continual learning algorithms, HMN [36] uses DNNs to regulate the spiking thresholds of the neurons in SNNs. ASPs [37] rely on spike-timing-dependent plasticity (STDP) to overcome catastrophic forgetting. However, both of them are only suitable for shallow networks accomplishing simple tasks. DSD-SNN [38] introduces brain-inspired structural development and knowledge reuse mechanisms that enable deep SNNs to accomplish continual learning without saving additional sub-network masks, but it still suffers from ever-expanding network consumption. The contribution of SNNs to multi-task continual learning remains to be further explored. The human brain can dynamically reorganize neural circuits during continual development and learning. In particular, the adult brain possesses a nearly fixed number of neurons and connections [11], yet it can incrementally learn and memorize new tasks by dynamically reorganizing the connections between neurons [39], as shown in Fig. 1. Inspired by this, we design Self-Organized Regulation (SOR) networks to adaptively activate sparse neural pathways from a fixed spiking neural network and endow them with synaptic strengths, enabling a single SNN to incrementally memorize multiple tasks. 
The main contributions of this paper can be summarized as follows:

* Our proposed SOR-SNN model can self-organize to activate task-related sparse neural pathways without human intervention, reducing per-task energy consumption while enabling the limited SNN to host a large number of sparse neural connectivity combinations, thus enhancing the memory capacity for more tasks.
* Extensive experiments demonstrate the superior performance of the proposed model on child-like simple-to-complex cognitive tasks and on the generalized CIFAR100 and ImageNet datasets. In addition, the proposed model reveals outstanding strengths on more complex tasks, as well as a knowledge backward transfer capability in which learning new tasks improves the performance of old tasks without replaying old task samples.
* The SOR-SNN model also shows a self-repairing ability against irreversible damage: when the structure of the SNN is pruned, resulting in forgetting of acquired knowledge, the retained neurons and synapses can be automatically reassigned to repair the memory of the forgotten task without affecting other acquired tasks.

Figure 1: **Sparse neural pathways self-organize and collaborate for continual learning.** Purple neurons and cyan neurons are individual neurons for task 1 and task 2, respectively, and blue neurons are shared by both tasks. In the blue box, different synapses of neuron D are utilized for different tasks and form sparse connections.

## 2 Results ### Spiking Neural Networks with Self-organized Regulation Networks Framework The SNN invokes different sparse neural circuits sequentially to accomplish different tasks under the regulation of the self-organized regulation network. As shown in Fig. 2, each region of the spiking neural network possesses a self-organizing regulation network, and regions comprising multiple layers are divided according to the structure of the SNN (e.g., each block in a ResNet model represents a region). The self-organizing regulation network can generate different sparse neural pathways \(P_{t}\) (Pathway Search Module) and synaptic strengths \(W_{t}\) (Fundamental Weighting Module) for different tasks \(t\). The inputs to the regulation network are a learnable task-related vector \(x_{T}\) and a learnable layer-related vector \(x_{L}\), as well as the state of the regulation network in the previous layer. In the testing phase, the SOR network generates the task-specific sparse structure and the corresponding weights of the main SNN based on the task input \(x_{T}\) and the layer input \(x_{L}\), which are combined with the sample inputs of the current task to produce the task output. During the continual learning process, the self-organized regulation network directly designs the main SNN and is optimized end-to-end to achieve high performance. In addition, to overcome catastrophic forgetting, the regulation network makes the neural pathways \(P_{t}\) activated for each task as orthogonal as possible to minimize inter-task interference. Orthogonal activation pathways allow a finite SNN, with its large number of different connection combinations, to accomplish more tasks and achieve a higher memory capacity. At the same time, we make the synaptic weights \(W_{t}\) as equal as possible across tasks to preserve memory. Based on the flexible regulation of the SOR network, the main SNN can generate diverse sub-neural pathways, showing the potential to expand the memory capacity. 
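To make this flow concrete, here is a minimal PyTorch-style sketch of one region's regulation module under our own interface assumptions (the paper specifies an LSTM with 96 hidden neurons for the regulation network, but not this exact API): it consumes learnable task and layer embeddings and emits fundamental weights gated by a binary pathway mask.

```python
import torch
import torch.nn as nn

class RegulationNet(nn.Module):
    """Hypothetical sketch of one region's self-organizing regulation network:
    an LSTM cell consumes learnable task/layer embeddings and emits, per layer,
    fundamental weights W_t gated by a binary pathway mask (A_s vs. ~A_s)."""
    def __init__(self, n_tasks, n_layers, hidden=96, n_synapses=64):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, hidden)    # learnable x_T
        self.layer_emb = nn.Embedding(n_layers, hidden)  # learnable x_L
        self.cell = nn.LSTMCell(2 * hidden, hidden)
        self.to_weights = nn.Linear(hidden, n_synapses)    # fundamental W_t
        self.to_logits = nn.Linear(hidden, 2 * n_synapses) # (A_s, ~A_s)

    def forward(self, task_id, layer_id, state):
        x = torch.cat([self.task_emb(task_id), self.layer_emb(layer_id)], dim=-1)
        h, c = self.cell(x, state)
        w_t = self.to_weights(h)
        a, a_bar = self.to_logits(h).chunk(2, dim=-1)
        pathway = (a >= a_bar).float()   # activate synapse s iff A_s >= ~A_s
        return w_t * pathway, (h, c)     # sparse task-specific weights, next state

# The state returned for one layer feeds the next, mirroring the chaining of
# hidden states across layers and regions described above.
net = RegulationNet(n_tasks=3, n_layers=4)
state = (torch.zeros(1, 96), torch.zeros(1, 96))
sparse_w, state = net(torch.tensor([0]), torch.tensor([0]), state)
```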
In this paper, we verify the performance, energy consumption, memory capacity, and backward transfer ability on child-like simple-to-complex multi-task learning and on generalized continual learning datasets (CIFAR100 and ImageNet). More importantly, the separate SOR network shows high adaptability to structural mutations of the main SNN.

Figure 2: **The procedure of the SOR-SNN model.** Each spiking neural network block in the proposed SOR-SNN model involves a self-organizing regulation network which is responsible for selectively activating task-specific sparse pathways in the SNN. For example, the purple connections form the pathway for task 1. In particular, the self-organizing regulation network contains the fundamental weighting module and the pathway search module. The large number of different combinations of connections gives the limited SNN the capacity to incrementally learn more tasks.

### Continual learning of child-like simple-to-complex tasks During child growth and development, the brain gradually learns and memorizes hundreds of cognitive tasks. This process does not happen all at once, but starts with simple tasks and gradually progresses to more complex ones. In this paper, we simulate the developmental process of children's simple-to-complex cognition by sequentially learning sketches (3,929 images), cartoon drawings (3,929 images), and real photographs (1,670 images) [40], as shown in Fig. 3 A-C. Specifically, our SNN structure is ResNet18 and the regulation network employs an LSTM with 96 hidden neurons. During the simple-to-complex experiment, samples of the three cognitive tasks are sequentially fed into the SOR-SNN model. We monitored the SNN weights and task-specific pathways under the guidance of the self-organizing regulation network during the learning process. We visualized the activation of partial weights in the fully connected output layer of the ResNet18 SNN for each cognitive task, as shown in Fig. 3 E-G. Across the whole network, the self-organizing regulation network activates different combinations of weights for different tasks, forming task-specific sparse neural pathways. This gives a single network the ability to accomplish multiple cognitive tasks while reducing mutual interference between tasks and saving the energy required per task. Meanwhile, each synapse in our SNN was involved in a different number of task-specific pathways, as shown in Fig. 3 D,H,L. Some synapses were activated in all three task-specific pathways, some were activated in only one task-specific pathway, while others remained inactive throughout. This suggests that in our SOR-SNN model, the pathways of different tasks share connections for processing common features while also having their own unique connections for recognizing task-specific features. Moreover, a portion of the connections in the network remain inactive at all times, indicating that the memory space of the network still has the capacity to accomplish more tasks. In addition, we statistically calculated the distribution of the real-valued weights of the SNN from the output of the fundamental weighting module in the SOR network for each cognitive task, as shown in Fig. 3 I-K.

Figure 3: **Validation of child-like simple-to-complex continual learning.** **(A-C)** The simple-to-complex cognitive tasks include sketches, cartoons and photos. **(D,H,L)** Visualization of synaptic activation counts in partial convolutional and fully connected layers. **(E-G)** Task-specific sparse pathways; for example, the blue, yellow and purple arrows represent the pathways for Task 1, Task 2 and Task 3, respectively, in the fully connected output layer. **(I-K)** Distribution of real-valued weights in the fully connected layer for three different tasks.
The results show that the distribution of the synaptic weights of the SNN across different tasks has minor variations but generally stays the same. Unlike existing artificial neural networks, in which past learned knowledge is lost due to large changes in weights, our SOR-SNN model applies similar weights across tasks, which effectively avoids catastrophic forgetting. ### Superiority in performance, energy consumption, memory capacity and backward transfer To verify the effectiveness of our SOR-SNN, we conduct experiments on child-like simple-to-complex continual learning tasks and on the two generalized datasets, CIFAR100 and ImageNet. For comparison, we replicate other continual learning algorithms on SNNs, including EWC [13] and MAS [15], which modify synaptic plasticity in a single network, and HNET [18], LSTM_NET [19] and DualNet [41], which use a dual network to generate weights or weight regulation coefficients. In addition, we compare against the rare prior SNN-based continual learning method DSD-SNN [38], which belongs to the sparse structural extension methods. All of these methods were run multiple times using the same LIF neurons and the same surrogate gradient training method in the SNN.

Figure 4: **The comparative performance of SOR-SNN on diverse continual learning tasks.** The average accuracy **(A-C)** and the number of inactive parameters **(D-F)** of the network for the simple-to-complex cognitive tasks and the CIFAR100 and Mini-ImageNet datasets. The average accuracy on the large-scale ImageNet dataset **(G)**.

#### 2.3.1 Continual learning accuracy Fig. 4 A-C,G shows the average accuracy over task t and its previous tasks. We find that our SOR-SNN algorithm maintains superior accuracy across multi-task sequential learning on the simple-to-complex cognitive tasks and on the CIFAR100, Mini-ImageNet, and ImageNet datasets. For the simple-to-complex cognitive tasks, our SOR-SNN achieves an average accuracy of 62.72\(\pm\)1.25%, which is consistently higher than the HNET SNN and LSTM_NET SNN continual learning algorithms. Although the EWC SNN and MAS SNN achieved high accuracy on the first and simplest task, their learning and memorization abilities decreased significantly on the more complex tasks. The DSD-SNN achieves higher accuracy than our SOR-SNN on the first two tasks, but after the most complex task is added, the average accuracy of our SOR-SNN is higher than that of DSD-SNN. This suggests that our SOR-SNN is able to gradually enhance its learning ability as knowledge accumulates and accomplishes more complex tasks better than other algorithms. For CIFAR100, we tested continual learning scenarios with 5steps (each task contains 20 classes), 10steps (each task contains 10 classes), and 20steps (each task contains 5 classes). The accuracy comparisons are depicted in Tab. 1. For 10steps, our SOR-SNN achieves an average accuracy of 80.12\(\pm\)0.25%, consistently higher than the other methods based on SNNs or replicated in SNNs. Compared to DSD-SNN, which has the second-highest average accuracy and is the only known method implementing SNN continual learning on CIFAR100, our SOR-SNN achieves a 2.20% accuracy improvement. 
Besides, our SOR-SNN achieves accuracies of 73.48\(\pm\)0.46% and 86.65\(\pm\)0.20% in 5steps and 20steps, improvements of 9.04% and 5.48% over the second-highest accuracy, respectively. In particular, in 20steps learning, the performance of the proposed model not only does not degrade with the number of tasks but actually reaches a higher accuracy, demonstrating the strong memory capability of our model. Compared to DNN-based continual learning algorithms, our model achieves superior accuracy with low energy consumption, as shown in Tab. 2. For the ImageNet dataset, which has a larger sample scale and a larger number of sample classes, we first randomly selected 100 classes to form the Mini-ImageNet dataset and divided them into ten tasks. As shown in Fig. 4C, our SOR-SNN model achieves consistent superiority over the HNET SNN and LSTM_NET SNN continual learning algorithms (EWC SNN and MAS SNN fail on Mini-ImageNet). \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{5steps} & \multicolumn{2}{c|}{10steps} & \multicolumn{2}{c}{20steps} \\ \cline{2-7} & Acc (\%) & Std (\%) & Acc (\%) & Std (\%) & Acc (\%) & Std (\%) \\ \hline EWC [13] & 45.89 & 0.96 & 61.11 & 1.43 & 50.04 & 4.26 \\ SI [14] & 66.92 & 0.17 & 64.81 & 1.00 & 61.10 & 0.82 \\ MAS [15] & 61.88 & 0.27 & 64.77 & 0.78 & 60.40 & 1.74 \\ HNET [18] & 48.69 & 0.37 & 63.57 & 1.03 & 70.48 & 0.25 \\ LSTM\_NET [19] & 60.11 & 0.88 & 66.61 & 3.77 & 79.96 & 0.26 \\ DSD-SNN [38] & 64.44 & 0.24 & 77.92 & 0.29 & 81.17 & 0.73 \\ **Our SOR-SNN** & **73.48** & **0.46** & **80.12** & **0.25** & **86.65** & **0.20** \\ \hline \end{tabular} \end{table} Table 1: Accuracy comparisons on 5steps, 10steps and 20steps for CIFAR100. Although DSD-SNN outperforms our SOR-SNN on the earlier tasks, its accuracy gradually decreases while learning the subsequent tasks. Eventually, our SOR-SNN achieves a higher average accuracy than DSD-SNN over all tasks. This demonstrates that our model has more memory capacity and is able to learn and memorize more tasks. In contrast, DSD-SNN relies on energy-consuming structure growth, which is not adequate for learning more tasks (although it brings improvements on the first few tasks). In addition, we evaluated our SOR-SNN on the complete ImageNet dataset, the first time an SNN-based model has achieved continual learning on the large-scale ImageNet dataset. Compared to the competitive DSD-SNN method, our model consistently achieves superior performance after learning 20 tasks. The accuracy of our SOR-SNN remains essentially stable over 100 tasks of continual learning. This indicates that our SOR-SNN has a large memory capacity to constantly learn new tasks. #### 2.3.2 Energy consumption Under the effect of the self-organizing regulation network, our model activates a task-specific sparse pathway per task and uses only a portion of the network parameters, reducing the energy consumption of each task. Fig. 4 D-F depicts the number of synapses used for each task on the three datasets. The activation rate of the main SNN is only between 40% and 50%. The average compression rate of our SOR-SNN is 52.37%, 53.88% and 53.08% on the simple-to-complex cognitive tasks, CIFAR100 and Mini-ImageNet, respectively. Tab. 2 compares the proposed method with other DNN-based methods in terms of accuracy and the number of synaptic parameters utilized by the SNN. 
The accuracy is reported for the 10steps setting, and the number of parameters is averaged over all tasks. Compared to regularization algorithms using dense networks, our model uses the fewest parameters and achieves at least a 15.31% improvement in accuracy. In comparison to DSD-SNN, which, unlike dense-network methods such as MAS [15], HNET [18] and LSTM_NET [19], is also based on sparse networks, our SOR-SNN method achieves a 2.2% improvement in accuracy using only 9.35% of the DSD-SNN parameters. The results show that our SOR-SNN achieves higher performance with fewer network parameters and lower energy consumption. #### 2.3.3 Backward transfer capability The brain not only has a forward transfer capacity, in which past learned knowledge assists the learning of new tasks, but learning new tasks can also improve performance on past tasks. In recent years, the forward transfer capability of continual learning algorithms has been proposed and validated in many models, but there are currently few models with backward transfer capability. This is because many models freeze the connections related to previous tasks or reduce their weight plasticity [45, 23, 38], hindering backward transfer between tasks. Our SOR-SNN model, in contrast, retains the ability of past task-related synapses to be fine-tuned for a new task while selecting new pathways for new tasks. As shown in the first box of Fig. 5 A, in our SOR-SNN model the accuracy of the second task after the third task is learned (dark blue line) is higher than when the second task was first learned (black line), and a similar phenomenon occurs for the other tasks. In addition, with the learning of subsequent tasks, the accuracy stability of previous tasks also increases to some extent. This suggests that the knowledge learned from later tasks in our SOR-SNN model contributes to better completion of previous tasks without retraining them, realizing backward transfer of knowledge. #### 2.3.4 The balance of stability and plasticity The challenge for neural networks of maintaining a stable memory of acquired knowledge while possessing the plasticity to learn new tasks is called the "stability-plasticity" dilemma, the core problem of continual learning. In our SOR-SNN model, we maintain stable memory by making the real-valued weights of different tasks as equal as possible through the memory loss, and we enable the network to learn more new tasks by making the pathway connections of different tasks as different as possible through the orthogonality loss. The coefficients \(\alpha\) and \(\beta\) respectively control the contributions of the memory loss and the orthogonality loss in network optimization. We analyze the effect of different \(\alpha\) and \(\beta\) values on the stability and plasticity of the proposed model. As shown in Fig. 5B, when \(\alpha\) is too large, the network forces the real-valued weights of different tasks to be nearly identical, resulting in a drop in the accuracy of both the old and new tasks. For example, during the learning of task 2, when \(\alpha=5\) the accuracy of task 2 is inferior, while the accuracy of the previously learned task 1 is also the lowest. For CIFAR100, when \(\alpha\) is less than 0.5, the network maintains the memory of the old tasks without affecting the learning of the new tasks. Among these settings, \(\alpha=0.5\) consistently yields the highest average accuracy over the 10-task learning process. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Method & Memory method & Accuracy (\%) & Parameters (\(10^{5}\)) \\ \hline EWC [13] & Synaptic Regularization & 61.11 \(\pm\) 1.43 & 6.9 \\ SI [14] & Synaptic Regularization & 64.81 \(\pm\) 1.00 & 6.9 \\ MAS [15] & Synaptic Regularization & 64.77 \(\pm\) 0.78 & 6.9 \\ IMM [42] & Synaptic Regularization & 63.13 & 65.5 \\ HAT [43] & Subnetwork Selection & 74.52 & 68.2 \\ PathNet [24] & Subnetwork Selection & 60.48 & 70.4 \\ DEN [23] & Subnetwork Selection & 58.10 & 3.6 \\ PGN [22] & Structural extensions & 68.32 & 68.0 \\ LG [44] & Structural extensions & 76.21 & 68.6 \\ HNET [18] & Hypernetwork & 63.57 \(\pm\) 1.03 & 4.6 \\ LSTM\_NET [19] & Hypernetwork & 66.61 \(\pm\) 3.77 & 4.6 \\ DSD-SNN [38] & Subnetwork Selection & 77.92 \(\pm\) 0.29 & 34.2 \\ **Our SOR-SNN** & **Self-organized Regulation** & **80.12 \(\pm\) 0.25** & **3.2** \\ \hline \hline \end{tabular} \end{table} Table 2: Energy consumption and accuracy comparisons with DNN-based algorithms on CIFAR100. For the orthogonality loss coefficient \(\beta\), the SNN achieves its best performance of 80.12% when \(\beta=10^{-5}\), as shown in Fig. 5C. When \(\beta\) is small, taking the learning of task 7 as an example, the new task 7 has a higher accuracy but the test accuracy of the old task 1 is lower. This is because a smaller \(\beta\) means a smaller orthogonality loss and a larger contribution of the classification loss, so the new task performs better. However, this simultaneously results in a higher overlap between task-specific pathways, so the knowledge learned in the old task is easily interfered with and disturbed. Conversely, when \(\beta\) is larger, the contribution of the orthogonality loss is greater and the pathways of the old and new tasks are more orthogonal. As a result, there is stronger memorization and a higher test accuracy for the old task 1, while, due to the overemphasis on pathway orthogonality, the accuracy on the new task 7 drops slightly. As shown in Fig. 5B-C, our SOR-SNN algorithm achieves an acceptable average accuracy for different loss parameters. This suggests that our model is robust and stable.

Figure 5: **(A)** The current test accuracy of past learned tasks in our SOR-SNN model. **(B-C)** The effect of the memory loss coefficient and the orthogonality loss coefficient on performance. **(D)** Injury schematic, containing the initial network, the network with task-specific pathways assigned, and the network after the injury to task 1 of the SNN. **(E)** Accuracy comparisons before and after injury for the first four tasks on CIFAR100 10steps.

### The injury self-repair capability of SOR-SNN Self-organization in the brain is also reflected in its self-repairing capability after injury [2]. Our brain is a robust system that can withstand various perturbations or even multiple micro-strokes without significant deleterious impact [46]. After the injury of one subsystem, the brain is able to self-organize to select new subsystems (synapses or networks) to perform the functions previously handled by the injured subsystem, which is known as functional plasticity [47, 48]. For example, in the visual system of cats, after parts of the retina are lesioned, up to 98% of afferent neurons generate new receptive fields in the residual uninjured areas by optimizing neuronal connectivity when receiving the previous inputs again [49]. 
Similarly, after partial peripheral nerve injury in the monkey brain, connections from the somatic surface to the primary sensory cortex undergo substantial reorganization. Electrophysiological recordings show that stimulus inputs that previously elicited responses in the injured area now produce a reaction in the neighbouring uninjured area [50]. This self-organizing repair capacity is also found extensively in the medial prefrontal cortex, hippocampus, and amygdala [51]. To simulate this brain mechanism and verify the self-repairing ability of our model, we pruned part of the connections unique to task 1 after continually learning the first 4 tasks, as shown in Fig. 5D. The pruned connections are no longer used for any task. When our model received samples of task 1 again, the SOR-SNN was able to self-organize to select new neural pathways among the remaining available synapses without affecting the performance of the previously learned tasks 2, 3 and 4. The experimental results show that the accuracy of our model on task 1 remained stable pre- and post-injury and that, without replaying the other tasks, tasks 2, 3 and 4 maintained their original performance and were not significantly affected (and even improved slightly), as shown in Fig. 5E. Before and after the injury, our model decreased slightly by 0.9% for task 2 and improved by 1.4% and 0.3% for tasks 3 and 4, respectively, whereas the DSD-SNN average accuracy of tasks 2, 3 and 4 dropped dramatically from 68.60% pre-injury to 16.12%, exhibiting catastrophic forgetting. This is because DSD-SNN uses a fixed pathway for all tasks and lacks the ability of adaptive regulation. Overall, the experiments demonstrate that our self-organized regulation enables a brain-inspired injury self-repair capability. ## 3 Discussion During human lifelong learning, the nerve centre of the brain self-organizes to flexibly modulate neural circuits according to the characteristics of different tasks [52, 53, 54], selectively activating appropriate neuronal connections to efficiently complete incremental tasks. Neuroscience studies have shown that the neural network scale of the adult brain hardly changes anymore [55], but the brain can form temporary task-specific neural pathways by dynamically reorganizing existing neurons and synapses [56, 57]. A neural pathway is activated when needed, and during the rest of the time its elements are involved in forming neural pathways for other tasks [39]. Inspired by this, we propose a brain-inspired continual learning algorithm for spiking neural networks based on self-organizing regulation networks. Our model dynamically builds rich combinations of task-specific sparse neuronal connections for different tasks in a limited SNN, giving the SNN the capacity to continually learn more tasks and more complex tasks. Different from other continual learning algorithms for DNNs and SNNs, the proposed algorithm constructs sparse pathways that are selected in a self-organizing manner by the regulation network, integrating past learned knowledge and current task features, rather than being artificially designed. Compared with existing synaptic regularization algorithms [16, 17, 58] and neuroregulation algorithms [18, 19, 20, 59], our SOR-SNN uses only a portion of the neural connections for each task, reducing task energy consumption while alleviating forgetting due to interference between tasks. 
For example, EWC [13] and MAS [15] use all the synaptic connections of the 6.9\(\times\)10\({}^{5}\)-parameter network, while our SOR-SNN uses an average of only 46.12% of the synaptic connections, with 3.2\(\times\)10\({}^{5}\) parameters for each task, achieving 80.12\(\pm\)0.25% accuracy, which is 19.01% and 15.35% higher than EWC and MAS, respectively. Meanwhile, compared with classical structure expansion algorithms [60, 61, 62], our SOR-SNN is able to fully utilize the limited network to form rich connection combinations without having to grow new neurons for a new task, and it temporarily inhibits some neural connections for subsequent tasks instead of actually pruning connections irrelevant to the current task. In addition, unlike the common operation in structure expansion algorithms of freezing the neurons and synapses related to old tasks [45, 23, 38], each of our neurons and synapses is continuously learned and optimized, and thus the proposed algorithm achieves a knowledge backward transfer capability, in which learning a new task improves the performance of old tasks without replaying old task samples. To validate the effectiveness of the proposed SOR-SNN, we conducted extensive experiments on different tasks and datasets. The results on the child-like simple-to-complex cognitive tasks indicate that our SOR-SNN achieves higher performance than other methods on more complex tasks and realizes the highest average accuracy. Due to its ability to flexibly select sparse neural connections, when an already learned neural pathway is damaged and no longer usable, our model is able to adaptively select suitable alternative pathways from the remaining network to repair structure and function like the biological brain, without affecting other already learned tasks. Furthermore, our SOR-SNN is among the pioneering explorations of SNN-based continual learning algorithms on the large-scale ImageNet dataset. Experiments on the generalized CIFAR100 and ImageNet datasets demonstrate that our SOR-SNN achieves superior performance among SNN-based continual learning algorithms. In summary, our SOR-SNN model continuously learns more tasks using less energy by constructing task-specific sparse neural pathways in a self-organizing manner, which opens the path towards building brain-inspired, flexible, adaptive and efficient continual learning. ## Method ### Spiking Neural Networks Motivated by the energy-efficient information transfer of the brain, spiking neurons process binary information using discrete spike sequences as input and output. The spike sequence contains dynamic temporal information; thus, unlike traditional artificial neurons, spiking neurons operate in both temporal and spatial dimensions. In the spatial dimension, spiking neuron \(i\) synthesizes the spike inputs \(S_{j}\) from presynaptic neurons \(j\) to form the input current \(I_{i}\); in the temporal dimension, the membrane potential \(U_{i}\) of the spiking neuron accumulates past spike information while receiving the current input current. In SOR-SNN, we use the common leaky integrate-and-fire (LIF) [63] spiking neuron with the following membrane potential \(U_{i}\) and spike \(S_{i}\) formulas: \[I_{i}^{t}=\sum_{j=1}^{M}P_{t}^{ij}S_{j}^{t} \tag{1}\] \[U_{i}^{t}=\tau U_{i}^{t-1}+I_{i}^{t} \tag{2}\] \[S_{i}^{t}=\begin{cases}1,&U_{i}^{t}\geq V_{th}\\ 0,&U_{i}^{t}<V_{th}\end{cases} \tag{3}\] where \(\tau=0.2\) is the time constant and \(t=4\) is the spiking time window. 
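As a sanity check of Eqs. 1-3, the following is a minimal sketch of the LIF forward pass; the threshold value \(V_{th}\) is our assumption (it is not quoted here), and no membrane reset is applied, matching the equations as written.

```python
import torch

def lif_forward(p_t, spikes_in, tau=0.2, v_th=1.0):
    """LIF dynamics of Eqs. 1-3. p_t: pathway-masked weights (n_post, n_pre);
    spikes_in: binary spikes of shape (T, n_pre). Returns spikes (T, n_post)."""
    u = torch.zeros(p_t.shape[0])
    out = []
    for s_t in spikes_in:
        i_t = p_t @ s_t                  # Eq. 1: weighted synaptic input current
        u = tau * u + i_t                # Eq. 2: leaky integration of the membrane
        out.append((u >= v_th).float())  # Eq. 3: fire when the threshold is crossed
    return torch.stack(out)

# Example with the paper's constants: tau = 0.2 over a time window of 4 steps.
spikes = lif_forward(torch.randn(8, 16), (torch.rand(4, 16) > 0.5).float())
```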
The spiking neuron receives a spike sequence input of length \(t\) and calculates the membrane potential at time \(t\) according to Eq. 2. When the membrane potential at time \(t\) exceeds the spike firing threshold \(V_{th}\), the neuron outputs \(1\) at time \(t\); otherwise the neuron outputs \(0\). Discrete spikes make spiking neurons non-differentiable. To solve this problem, we use the Qgradgate [64] surrogate gradient method to approximate the gradient of the spike output as follows: \[\frac{\partial S_{i}^{t}}{\partial U_{i}^{t}}=\begin{cases}0,&|U_{i}^{t}|>\frac{1}{\lambda}\\ -\lambda^{2}|U_{i}^{t}|+\lambda,&|U_{i}^{t}|\leq\frac{1}{\lambda}\end{cases} \tag{4}\] where the constant \(\lambda=2\). Overall, this discrete spatio-temporal spiking information transfer reduces energy consumption and enhances the knowledge representation capability of spiking neural networks. ### Self-organizing Regulation Network of our SOR-SNN We use the same regulation network to generate different sparse pathways \(P_{t}\) and weights \(W_{t}\) for different tasks \(t\). An entire regulation network consists of multiple sub-regulation networks, each responsible for one region of the main SNN. To output task-specific pathways for different tasks with a single regulation network, our regulation network receives learnable task-relevant inputs \(x_{T}\) and layer-relevant inputs \(x_{L}\) as in Eq. 5. During training, both adaptively learn representative information for different tasks and different layers, respectively; during testing, the learned \(x_{T}\) and \(x_{L}\) guide the regulation network to output the corresponding task-specific sparse pathways. The self-organizing regulation network includes a fundamental weighting module and a pathway search module. \[x=\{x_{T},x_{L}\} \tag{5}\] #### 3.2.1 Fundamental Weighting Module To synergize the synaptic activities between different regions and different layers, a recurrent neural network acts as the fundamental weighting module and outputs the real-valued weights \(W_{t}\) prepared for the SNN. In an SNN, the weights of each layer are not independent but work together to affect the overall performance. Thus we use a Long Short-Term Memory network (LSTM) to synthesize information from previous layers and find a superior state for the current layer. Specifically, the hidden state of layer \(l\) is the output of the previous layer \(o_{l-1}\); in particular, the hidden state of the first layer of each region is the output of the last layer of the previous region. The fundamental weights \(W_{t}\) are calculated as follows: \[c_{l}=c_{l-1}Forgate(x,o_{l-1})+Ingate(x,o_{l-1})Cell(x,o_{l-1}) \tag{6}\] \[o_{l}=Outgate(x,o_{l-1})\tanh(c_{l}) \tag{7}\] \[W_{t,l}=FC(o_{l}) \tag{8}\] where \(Forgate\), \(Ingate\), \(Outgate\) and \(Cell\) are respectively the forget gate, input gate, output gate and cell processing functions of the LSTM, and \(FC\) is the fully connected output layer. #### 3.2.2 Pathway Search Module Based on the fundamental weights \(W_{t}\), the pathway search module is responsible for deciding whether to activate or inhibit each weight, so as to self-organize the task-specific sparse neural pathways. Inspired by differentiable structure search algorithms [65, 66], each synapse \(s\) has two states, active and inactive, corresponding to the learnable synaptic selection parameters \(A_{s}\) and \(\widetilde{A_{s}}\), respectively. 
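The activation/inhibition rule based on \(A_{s}\) and \(\widetilde{A_{s}}\), formalized as Eq. 9 below, can be sketched as follows; the straight-through surrogate used here to keep the selection parameters trainable is our assumption, since the paper only cites differentiable structure search [65, 66].

```python
import torch

def select_pathway(w_t, a, a_bar):
    """Eq. 9: keep the fundamental weight where A_s >= ~A_s, zero it otherwise.
    A straight-through surrogate (an assumption here) lets gradients reach the
    learnable selection parameters through the hard binary decision."""
    hard = (a >= a_bar).float()         # binary activate/inhibit decision
    soft = torch.sigmoid(a - a_bar)     # differentiable relaxation
    gate = hard + soft - soft.detach()  # forward: hard; backward: grad of soft
    return w_t * gate

w_t = torch.randn(8, 16)
a = torch.randn(8, 16, requires_grad=True)
a_bar = torch.randn(8, 16, requires_grad=True)
p_t = select_pathway(w_t, a, a_bar)
p_t.sum().backward()                    # gradients flow to a and a_bar
```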
When the learnable parameter \(A_{s}\) is greater than \(\widetilde{A_{s}}\), activating synapse \(s\) is preferable for the performance of the current task compared to inhibiting it; that is, if \(A_{s}>\widetilde{A_{s}}\), synapse \(s\) is activated. Otherwise, when \(\widetilde{A_{s}}\) is greater than \(A_{s}\), synapse \(s\) is inhibited. Hence, the selection formula of the sparse neural pathways \(P_{t}\) for task \(t\) is as follows: \[P_{t}=\begin{cases}W_{t},&A_{s}\geq\widetilde{A_{s}}\\ 0,&A_{s}<\widetilde{A_{s}}\end{cases} \tag{9}\] ### The plasticity-stability balance in SOR-SNN The loss function \(L\) is divided into three parts: the classification loss \(L_{class}\), the memory loss \(L_{mem}\) and the orthogonality loss \(L_{orth}\). First, the standard classification loss \(L_{class}\) aims to improve the learning performance of the current task; our SOR-SNN model uses the cross-entropy loss. Moreover, we keep \(A_{s}\) and \(\widetilde{A_{s}}\) close to 0.5 to stabilize the selection of task-specific sparse pathways. Second, the memory loss \(L_{mem}\) aims to keep the real-valued weights \(W_{t}\) as constant as possible across tasks \(t\), as in Eq. 10. The memory loss ensures that the fundamental weights \(W_{t}\) do not change significantly across tasks, maintaining the stability of learned knowledge. \[L_{mem}=\|W_{t}-W_{t-1}\|_{2} \tag{10}\] The orthogonality loss \(L_{orth}\) encourages the sparse pathways \(P_{t}\) selected for different tasks to be as different as possible, reducing interference between tasks while keeping the SNN plastic enough to learn more new tasks through different connection combinations. The orthogonality loss is calculated as follows: \[L_{orth}=\sum_{k=1}^{t}P_{t}P_{k} \tag{11}\] The combination of the latter two losses not only saves energy but also allows the limited network to learn more tasks, achieving a larger memory capacity. The total loss \(L\) is calculated as follows: \[L=L_{class}+\alpha L_{mem}+\beta L_{orth} \tag{12}\] where \(\alpha\) and \(\beta\) are constant coefficients. ### The procedure of SOR-SNN During testing, the regulation network takes the task-relevant inputs \(x_{T}\) and layer-relevant inputs \(x_{L}\) and outputs the task-specific sparse pathways \(P_{t}\) in a self-organizing manner; the SNN receives image samples \(D_{t}\) together with the task-specific pathways \(P_{t}\) and outputs the predicted class \(y\) as follows: \[y=SNN(P_{t},D_{t}) \tag{13}\] During training, our SOR-SNN takes training samples and computes the training loss \(L\). Then, through backpropagation, our model adaptively optimizes the parameters of the regulation network, which comprise the learnable task-related inputs \(x_{T}\), the layer-related inputs \(x_{L}\), the synaptic selection parameters \(A_{s},\widetilde{A_{s}}\), and the LSTM weights. The procedure of our SOR-SNN algorithm is as follows:

Input: dataset \(D_{t}\) for each task \(t\).
Initialization: randomly initialize the learnable inputs \(x_{T},x_{L}\) and the learnable synaptic selection parameters \(A_{s},\widetilde{A_{s}}\).
Output: predicted class \(y\).
For \(D_{t}\) in sequential tasks \(T\):
  For \(e\) in \(Epoch\):
    Calculate \(W_{t}\) with the learnable inputs \(x_{T},x_{L}\) as in Eqs. 5-8;
    Select \(P_{t}\) with the synaptic parameters \(A_{s},\widetilde{A_{s}}\) as in Eq. 9;
    Perform SNN forward prediction as in Eqs. 1-3;
    Calculate the training loss as in Eqs. 10-12;
    Backpropagate to update the regulation network;
  end
end

## Data availability The data used in this study are available in the following databases. 
The simple-to-complex cognitive task data [40]: https://github.com/robertofranceschi/Domain-adaptation-on-PACS-dataset. The CIFAR100 data [67]: http://www.cs.toronto.edu/~kriz/cifar.html. The ImageNet data [68]: https://image-net.org/. ## Acknowledgments This work is supported by the National Key Research and Development Program (Grant No. 2020AAA0107800) and the National Natural Science Foundation of China (Grant No. 62106261, No. 62372453).
2309.15179
ParamANN: A Neural Network to Estimate Cosmological Parameters for $\Lambda$CDM Universe Using Hubble Measurements
In this article, we employ a machine learning (ML) approach for the estimation of four fundamental parameters, namely, the Hubble constant ($H_0$), matter ($\Omega_{0m}$), curvature ($\Omega_{0k}$) and vacuum ($\Omega_{0\Lambda}$) densities of the non-flat $\Lambda$CDM model. We use $31$ Hubble parameter values measured by the differential ages (DA) technique in the redshift interval $0.07 \leq z \leq 1.965$. We create an artificial neural network (ParamANN) and train it with simulated values of $H(z)$ using various sets of $H_0$, $\Omega_{0m}$, $\Omega_{0k}$, $\Omega_{0\Lambda}$ parameters chosen from different and sufficiently wide prior intervals. We use a correlated noise model in the analysis. We demonstrate accurate validation and prediction using ParamANN. ParamANN provides an excellent cross-check for the validity of the $\Lambda$CDM model. We obtain $H_0 = 68.14 \pm 3.96$ $\rm{kmMpc^{-1}s^{-1}}$, $\Omega_{0m} = 0.3029 \pm 0.1118$, $\Omega_{0k} = 0.0708 \pm 0.2527$ and $\Omega_{0\Lambda} = 0.6258 \pm 0.1689$ using the trained network. These parameter values agree very well with the results of the global CMB observations of the Planck collaboration. We compare the cosmological parameter values predicted by ParamANN with those obtained by the MCMC method. The two sets of results agree well with each other. This demonstrates that ParamANN is an alternative and complementary approach to the well-known Metropolis-Hastings algorithm for estimating the cosmological parameters using Hubble measurements.
Srikanta Pal, Rajib Saha
2023-09-26T18:25:57Z
http://arxiv.org/abs/2309.15179v3
ParamANN: A Neural Network to Estimate Cosmological Parameters for \(\Lambda\)CDM Universe using Hubble Measurements ###### Abstract In this article, we employ a machine learning (ML) approach for the estimation of four fundamental parameters, namely, the Hubble constant (\(H_{0}\)), matter (\(\Omega_{0m}\)), curvature (\(\Omega_{0k}\)) and vacuum (\(\Omega_{0\Lambda}\)) densities of the non-flat \(\Lambda\)CDM model. We use 53 Hubble parameter values measured by the _differential ages_ (DA) and _baryon acoustic oscillations_ (BAO) techniques in the redshift interval \(0.07\leq z\leq 2.36\). We create an _artificial neural network_ (called ParamANN) and train it with simulated values of \(H(z)\) using various sets of \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\) parameters chosen from different and sufficiently wide prior intervals. We use a correlated noise model in the analysis. We demonstrate accurate validation and prediction by ParamANN. ParamANN provides an excellent cross-check for the validity of the \(\Lambda\)CDM model and alleviates the Hubble tension problem that has been reported earlier in the literature. We obtain \(H_{0}=66.11\pm 2.59\) kmMpc\({}^{-1}\)sec\({}^{-1}\), \(\Omega_{0m}=0.3359\pm 0.0814\), \(\Omega_{0k}=0.0237\pm 0.1248\) and \(\Omega_{0\Lambda}=0.6405\pm 0.0861\) using the trained network. These parameter values agree very well with the results of the global CMB observations of the Planck collaboration. \({}^{a}\)Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal - 462066, Madhya Pradesh, India **Keywords:** Hubble parameter - Cosmological density parameters - Machine learning ## 1 Introduction Our universe consists of visible matter and radiation, as well as two mysterious components, i.e., dark matter (Zwicky, 2017; Freeman, 1970; Lacki & Beacom, 2010) and dark energy (Riess et al., 1998; Perlmutter et al., 1999). Although the properties of visible and dark matter are similar to each other, the latter does not interact through anything except gravitation. Moreover, dark energy, which explains the accelerated expansion of the universe, has the property of negative pressure. The simplest cosmological model describing our universe is the \(\Lambda\) cold dark matter (\(\Lambda\)CDM) model, where \(\Lambda\) denotes the cosmological constant first envisioned by Albert Einstein (1917). This cosmological constant, or vacuum energy, is the simplest form of dark energy. Following this \(\Lambda\)CDM model, recent observations (Planck collaboration VI, 2020) of the _cosmic microwave background_ (CMB) radiation reveal the spatial flatness of our present universe and measure its radiation density, which is of the order of \(10^{-4}\). Therefore, matter (specifically dark matter) and vacuum density are the major components of the present universe (Planck collaboration VI, 2020). Recent CMB observations agree well with the flat \(\Lambda\)CDM universe. According to CMB observations, our present universe comprises approximately 4.6% visible matter, 24% dark matter, and 71.4% vacuum energy1. However, researchers have raised concerns about the \(\Lambda\)CDM model as the final description of the universe. Zunckel and Clarkson (2008) test the viability of the \(\Lambda\)CDM model using a 'litmus test'. Their results show disagreement with \(\Lambda\) as the dark energy component of the universe and also disfavour the other dark energy models used in their analysis. 
Measurements of the Canada-France-Hawaii Telescope Lensing Survey (Macaulay et al., 2013; Raveri, 2016) raise a tension with the estimations of Planck collaboration VI (2020). Testing the Copernican principle (Uzan et al., 2008; Valkenburg et al., 2014) is also a well-known procedure to examine the existence and evolution of dark energy. For a null test of the flat \(\Lambda\)CDM universe, the \(Om(z_{i},z_{j})\) and \(Omh^{2}(z_{i},z_{j})\) diagnostics (Sahni et al., 2008; Shafieloo et al., 2012; Sahni et al., 2014) can be probed directly using Hubble measurements. Zheng et al. (2016) use \(Om(z_{i},z_{j})\) and \(Omh^{2}(z_{i},z_{j})\) to estimate the matter density using observed \(H(z)\) values, assuming a prior value of the Hubble constant, for three flat-universe cosmological models (i.e., \(\Lambda\)CDM, wCDM and CPL (Chevallier and Polarski, 2001; Linder, 2003)). Shahalam et al. (2015) apply the \(Om\) diagnostic to scalar field models and distinguish the \(\Lambda\)CDM model from non-minimally coupled scalar field, phantom field, and quintessence models. Leaf and Melia (2017) analyze only _differential ages_ (DA) Hubble measurements using a two-point diagnostic for model comparison, since the measurements of \(H(z)\) from cosmic chronometers are model independent. Geng et al. (2018) use \(H(z)\) data measured by the DA and _baryon acoustic oscillation_ (BAO) techniques to constrain the cosmological parameters and quantify the impact of future \(H(z)\) measurements on the estimation of these parameters. Recent works, e.g., (Gomez-Valent and Amendola, 2018; Ryan et al., 2018, 2019; Cao et al., 2021; Cao and Ratra, 2022), effectively use low-redshift data (i.e., DA+BAO Hubble data, QSO angular sizes, Pantheon, DES supernovae, etc.) to analyse various cosmological models. Footnote 1: https://map.gsfc.nasa.gov/media/121236/index.html In spite of the excellent agreement of the \(\Lambda\)CDM model with the observations discussed above, there are still some indications that the \(\Lambda\)CDM model may not be entirely consistent with the observations. One such area of incompatibility is the so-called 'Hubble tension' problem. The tension arises because the \(H_{0}\) values estimated by local and global observations do not agree with each other. The value of \(H_{0}\) constrained by Planck collaboration VI (2020) shows a \(\sim 3.6\sigma\) tension with the same parameter measured by local observational data, with local measurements of \(H_{0}\) favouring a higher value. Recent local observations with the Hubble Space Telescope (HST) (Riess et al., 2018, 2019; Riess, 2020; Riess et al., 2021, 2022) estimate a value of the Hubble constant that shows approximately 4-5\(\sigma\) tension with Planck's estimation of \(H_{0}\). The local measurement of \(H_{0}\) using gravitationally lensed quasars (i.e., H0LiCOW (Wong et al., 2019)) shows a \(\sim 5.3\sigma\) tension with Planck's \(H_{0}\) value. Di Valentino (2021) estimated the value of \(H_{0}\) by combining 23 local measurements of this parameter, and this estimation shows a 5.9\(\sigma\) tension with the \(H_{0}\) constrained by CMB observations (Planck collaboration VI, 2020). 
_An alternative method to estimate the \(H_{0}\) value (along with other cosmological parameters), as is done in this work, is of utmost importance in contemporary cosmological analysis._ It is worth mentioning that the Hubble tension problem seems to be somewhat perplexing in nature. On one hand, some observations (de Jaeger et al., 2022) indicate a high value of \(H_{0}\). Riess et al. (2022) constrain \(H_{0}=73.04\) kmMpc\({}^{-1}\)sec\({}^{-1}\) with an uncertainty of 1.04 kmMpc\({}^{-1}\)sec\({}^{-1}\) using Cepheid-SNe samples for their baseline redshift range \(0.0233<z<0.15\). The Pantheon+ analysis (Brout et al., 2022) uses 1550 distinct Type Ia supernovae in the redshift interval 0.001 to 2.26 and reports \(H_{0}=73.5\pm 1.1\) kmMpc\({}^{-1}\)sec\({}^{-1}\). Thus it appears that the Hubble tension extends up to a redshift of 2.26 if supernova data are used. Interestingly, using the tip of the red giant branch (TRGB) calibration for a sample of Type Ia supernovae, Freedman (2021) reports \(H_{0}=69.8\pm 0.6\) (stat) \(\pm 1.6\) (sys) kmMpc\({}^{-1}\)sec\({}^{-1}\), which is consistent with the \(H_{0}\) value estimated by Planck collaboration VI (2020). Kelly et al. (2023) obtain \(H_{0}=64.8^{+4.4}_{-4.3}\) kmMpc\({}^{-1}\)sec\({}^{-1}\) using eight lens models and \(66.6^{+4.1}_{-3.3}\) kmMpc\({}^{-1}\)sec\({}^{-1}\) from two preferred models. A recent article (Mukherjee et al., 2020) reports \(H_{0}=67.6^{+4.3}_{-4.2}\) kmMpc\({}^{-1}\)sec\({}^{-1}\) using VLBI and gravitational wave observations of the bright binary black hole GW190521 and concludes that the value is consistent with the Planck observations. These results relax the tension. Against the backdrop of such diverse observational indications, it is very important to measure the cosmological parameters in as general a framework as possible (i.e., measuring all the independent cosmological parameters) using the favoured \(\Lambda\)CDM model and new observational data. In this article, we use \(H(z)\) observations alone to constrain the parameters, using artificial intelligence (AI) as the driving engine. We use AI to estimate the Hubble constant (\(H_{0}\)) and today's density parameters (i.e., matter (\(\Omega_{0m}\)), curvature (\(\Omega_{0k}\)) and vacuum (\(\Omega_{0\Lambda}\))) for the \(\Lambda\)CDM universe from the \(H(z)\) values measured by the DA and BAO techniques. We create an ANN (hereafter ParamANN) to model a direct mapping between the observed \(H(z)\) and the corresponding four parameters of the \(\Lambda\)CDM universe. The density parameters estimated by ParamANN indicate a spatially flat \(\Lambda\)CDM universe, which is consistent with the results of Planck collaboration VI (2020). Moreover, the Hubble constant predicted by ParamANN agrees excellently with Planck's estimation of the same. Therefore, we do not see the Hubble tension problem in the Hubble data. The primary motivations of our current article stem from both theoretical and observational fronts. On the observational side, problems like the Hubble tension (Planck collaboration VI, 2020; Riess et al., 2018, 2019; Riess, 2020; Riess et al., 2021, 2022; Wong et al., 2019; Di Valentino, 2021; de Jaeger et al., 2022; Brout et al., 2022) exist and need further understanding using various types of available data. AI has become one of the most promising tools for investigating observed data, since an ML model can predict (in principle) a complicated function once it has been trained successfully. 
Moreover, there are several articles in the literature, e.g. (Macaulay et al., 2013; Raveri, 2016; Shafieloo et al., 2012; Sahni et al., 2014; Zheng et al., 2016; Linder, 2003; Shahalam et al., 2015; Leaf & Melia, 2017; Geng et al., 2018; Gomez-Valent & Amendola, 2018; Bengaly et al., 2023; Liu et al., 2019; Arjona & Nesseris, 2020; Mukherjee et al., 2022; Garcia et al., 2023), which try to measure the cosmological parameters assuming a spatially flat universe. However, it is also important to ask whether we can estimate these cosmological parameters by relaxing the assumption of flatness. In this article, we aim to estimate the cosmological density parameters along with today's Hubble parameter for a general non-flat \(\Lambda\)CDM universe. From the observational perspective, many new experiments are being proposed (e.g., Echo (aka CMB-Bharat2), CCAT-prime (Stacey et al., 2018), PICO (Hanany et al., 2019), Lite-Bird (Hazumi et al., 2020), SKA (Dewdney et al., 2009)) which have lower noise levels, implying that the model parameters of a theory can be measured with higher accuracy. The higher accuracy of future-generation observations demands accurate constraints on the cosmological parameters to distinguish between different models of the universe, a major step towards a better and more accurate theoretical understanding of the physics of the universe. Footnote 2: http://cmb-bharat.in/ In the modern era, ML techniques are utilized as powerful tools to analyze observational data in several fields of cosmology (Olvera et al., 2022). Instead of the Metropolis-Hastings algorithm, an artificial neural network (ANN) can be used as an alternative approach to Bayesian inference in cosmology, effectively reducing the computational time (Graff et al., 2012; Moss, 2020; Hortua et al., 2020; Gomez-Vargas et al., 2021). Moreover, ANNs can be employed for non-parametric reconstructions of cosmological functions (Escamilla-Rivera et al., 2020; Wang et al., 2020; Dialektopoulos et al., 2022; Gomez-Vargas et al., 2023). For the estimation of cosmological parameters using CMB data, Mancini et al. (2022) implement several types of ANNs and show that these reduce the computational time of the Bayesian process. Baccigalupi et al. (2000) utilize an ANN to separate different types of foregrounds (e.g., thermal dust emission, galactic synchrotron, and radiation emitted by galaxy clusters) from the CMB signal. Petroff et al. (2020) develop a Bayesian spherical convolutional neural network (CNN) to recover the CMB anisotropies from foreground contamination. Using a CNN, Shallue and Eisenstein (2023) reconstruct the cosmological initial conditions from the late-time, non-linearly evolved density fields. ML can be used to reconstruct full-sky CMB temperature anisotropies from partial-sky maps (Chanda and Saha, 2021; Pal et al., 2023). Khan and Saha (2023) develop an ANN to estimate the dipole modulation from foreground-cleaned CMB temperature anisotropies. In our previous article (Pal and Saha, 2023), we used a CNN to recover full-sky \(E\)- and \(B\)-mode polarization from partial-sky maps, avoiding the so-called \(E\)-to-\(B\) leakage. Wang et al. (2020) use an ANN to estimate cosmological parameters from the temperature power spectrum of the CMB. Bengaly et al. (2023) employ an ML algorithm to constrain the Hubble constant (\(H_{0}\)) using DA Hubble measurements. Moreover, the recent literature, e.g. 
(Liu et al., 2019; Arjona and Nesseris, 2020; Mukherjee et al., 2022; Garcia et al., 2023; Wang et al., 2021; Liu et al., 2021), also applies ML techniques to analyze low-redshift data (i.e., Hubble measurements (DA+BAO), Type Ia Supernovae, etc.) for constraining the cosmological parameters of the present universe. We organize our article as follows. In section 2, we describe the fundamental relation between the Hubble parameter and redshift for a \(\Lambda\)CDM universe. We discuss the methodology of our analysis in section 3. In section 3.1, we present the Hubble parameters measured by the DA and BAO techniques. In section 3.2, we discuss the procedure to generate samples of the four fundamental parameters (i.e., \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) and the corresponding mock values of \(H(z)\). In section 3.3, we describe the addition of noise to these mock \(H(z)\) values. We describe the deep-learning procedure of ParamANN in section 3.4. In section 3.4.1, we discuss the architecture of ParamANN. In section 3.4.2, we describe the preprocessing of the input data for ParamANN. In section 3.4.3, we present the loss function employed in ParamANN. After that, in section 3.4.4, we provide detailed descriptions of the training and prediction processes of ParamANN. We show the results predicted by ParamANN, with a detailed analysis, in section 4. We present the predictions of ParamANN for the test set in section 4.1. We show the results predicted by ParamANN for the observed Hubble data in section 4.2. In section 4.2.1, we present the values of the Hubble constant and the density parameters, with their corresponding uncertainties, obtained by the trained ParamANN for the observed \(H(z)\) data; in section 4.2.2 we present the Hubble parameter curve computed using these estimated parameter values, compared with the same curve obtained using the results of Planck collaboration VI (2020). Finally, in section 5, we conclude our current analysis with a discussion of this new work.

## 2 Formalism

Einstein's equations, which can be encapsulated into a compact form using tensor notation, follow \[G_{\mu\nu} = -\frac{8\pi G}{c^{4}}T_{\mu\nu}, \tag{1}\] where Einstein's tensor \(G_{\mu\nu}\) is determined by suitable second-order derivative functions of the metric tensor \(g_{\mu\nu}\) with respect to the coordinates. In equation 1, \(G\) denotes the universal gravitational constant, \(c\) is the velocity of light in vacuum, and \(T_{\mu\nu}\) is the energy-momentum tensor. In spherical coordinates, the Friedmann-Robertson-Walker (FRW) line element can be written as \[\mathrm{d}s^{2} = c^{2}\mathrm{d}t^{2}-a^{2}(t)\left[\frac{\mathrm{d}r^{2}}{1-kr^{2}}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}\right)\right], \tag{2}\] where \(r,\theta\), and \(\phi\) are the comoving coordinates of the spherical space of the universe. In equation 2, \(a(t)\) and \(k\) define the cosmological scale factor and the curvature constant of the universe respectively. Moreover, positive, negative, and zero values of \(k\) indicate spatially closed, open, and flat universes respectively.
Using equations 1 and 2, we obtain the two well-known Friedmann equations, which are expressed as \[\frac{1}{a^{2}(t)}\left[\left(\frac{\mathrm{d}a}{\mathrm{d}t} \right)^{2}+kc^{2}\right] = \frac{8\pi G}{3}\rho(t), \tag{3}\] \[\frac{2}{a(t)}\frac{\mathrm{d}^{2}a}{\mathrm{d}t^{2}}+\frac{1}{a ^{2}(t)}\left[\left(\frac{\mathrm{d}a}{\mathrm{d}t}\right)^{2}+kc^{2}\right] = -\frac{8\pi G}{c^{2}}P(t), \tag{4}\] where \(\rho(t)\) and \(P(t)\) denote the density and gravitational pressure of the universe respectively. Using equations 3 and 4, together with the equation of state \(P(t)=\omega c^{2}\rho(t)\), we obtain the energy-momentum conservation law, which is given by \[\frac{\partial\rho}{\partial t}+3\left(1+\omega\right)H(t)\rho(t) = 0, \tag{5}\] where \(\omega\) denotes the equation-of-state parameter and \(H(t)\) represents the Hubble parameter, defined as \(\frac{1}{a(t)}\frac{\mathrm{d}a}{\mathrm{d}t}\). We note that \(\omega=0,\frac{1}{3}\), and \(-1\) represent the equations of state for the matter, radiation and vacuum densities of the universe respectively. Using equation 5, the density components of the universe can be expressed as \[\rho_{m} = \rho_{0m}\left(1+z\right)^{3}, \tag{6}\] \[\rho_{r} = \rho_{0r}\left(1+z\right)^{4},\] (7) \[\rho_{\Lambda} = \rho_{0\Lambda}, \tag{8}\] where \(z\) denotes the redshift and '0' stands for the present universe (i.e., \(z=0\)). In equations 6, 7 and 8, the subscripts \(m,r\) and \(\Lambda\) denote the matter, radiation and vacuum densities. Moreover, redshift is related to the scale factor as \((1+z)=a_{0}/a\). Neglecting the radiation density for the late-time universe, equation 3 can be written as \[\Omega_{m}+\Omega_{k}+\Omega_{\Lambda} = 1. \tag{9}\] In equation 9, \(\Omega\) denotes the density parameter, defined as \(\rho/\rho_{c}\) for the matter and vacuum densities, where \(\rho_{c}\) is the critical density, expressed as \(3H^{2}/8\pi G\). However, the curvature density parameter (\(\Omega_{k}\)) is defined as \(-kc^{2}/a^{2}H^{2}\). Using equations 6 and 8 in equation 3, the Hubble parameter for a \(\Lambda\)CDM universe is expressed as \[H^{2}(z) = H_{0}^{2}\left[\Omega_{0m}\left(1+z\right)^{3}+\Omega_{0k}\left( 1+z\right)^{2}+\Omega_{0\Lambda}\right], \tag{10}\] where \(\Omega_{0m}\) and \(\Omega_{0\Lambda}\) are the matter and vacuum density parameters in the present universe respectively. Moreover, \(\Omega_{0k}\) is today's curvature density parameter, specified as \(-kc^{2}/a_{0}^{2}H_{0}^{2}\). A zero value of \(\Omega_{0k}\) specifies a spatially flat universe; positive and negative values of \(\Omega_{0k}\) define open and closed universes respectively. At the present epoch, i.e. \(z=0\), equation 10 gives \[\Omega_{0m}+\Omega_{0k}+\Omega_{0\Lambda} = 1. \tag{11}\] Applying this present-day closure condition (equation 11) to the curvature density parameter, equation 10 can be written as \[H^{2}(z) = H_{0}^{2}\left[\Omega_{0m}\left(1+z\right)^{3}+\left(1-\Omega_{ 0m}-\Omega_{0\Lambda}\right)\left(1+z\right)^{2}+\Omega_{0\Lambda}\right]. \tag{12}\] Equation 12 contains three independent parameters, \(H_{0}\), \(\Omega_{0m}\), and \(\Omega_{0\Lambda}\), while the curvature density parameter \(\Omega_{0k}\) depends on \(\Omega_{0m}\) and \(\Omega_{0\Lambda}\) through equation 11. Equation 12 represents the theoretical model of the Hubble parameter at different redshifts for a given set of values of \(H_{0}\), \(\Omega_{0m}\), and \(\Omega_{0\Lambda}\).
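For reference, equation 12 translates directly into code; the following is a minimal Python sketch (function and variable names are our own) of the Hubble-parameter model used throughout this work:

```python
import numpy as np

def hubble_model(z, H0, Om, OL):
    """H(z) for a (possibly non-flat) LambdaCDM universe, equation 12.

    The curvature density parameter is fixed by the closure relation
    Ok = 1 - Om - OL (equation 11).
    """
    Ok = 1.0 - Om - OL
    return H0 * np.sqrt(Om * (1.0 + z)**3 + Ok * (1.0 + z)**2 + OL)

# Example: H(z) at a few observed redshifts for Planck-like parameters
z = np.array([0.07, 0.5, 1.0, 2.36])
print(hubble_model(z, H0=67.66, Om=0.3111, OL=0.6889))
```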
## 3 Methodology

In this section, we first discuss the observed Hubble data (available in the range \(0.07\leq z\leq 2.36\)). Then, we describe the procedure to simulate the mock values of \(H(z)\), as well as the addition of noise to these mock \(H(z)\) data. Thereafter, we describe the deep learning of the neural network used in our analysis.

\begin{table} \begin{tabular}{l l l l|l l l l} \hline \multicolumn{4}{c|}{DA technique} & \multicolumn{4}{c}{BAO technique} \\ \hline \hline \(z\) & \(H(z)\) & \(\sigma_{H(z)}\) & Reference & \(z\) & \(H(z)\) & \(\sigma_{H(z)}\) & Reference \\ \hline \hline 0.07 & 69 & 19.6 & Zhang et al. (2014) & 0.24 & 79.69 & 2.99 & Gaztanaga et al. (2009) \\ 0.09 & 69 & 12 & Jimenez et al. (2003) & 0.30 & 81.7 & 6.22 & Oka et al. (2014) \\ 0.12 & 68.6 & 26.2 & Zhang et al. (2014) & 0.31 & 78.18 & 4.74 & Wang et al. (2017) \\ 0.17 & 83 & 8 & Simon et al. (2005) & 0.34 & 83.8 & 3.66 & Gaztanaga et al. (2009) \\ **0.1791** & **75** & **4** & Moresco et al. (2012) & 0.35 & 82.7 & 8.4 & Chuang \& Wang (2013) \\ **0.1993** & **75** & **5** & Moresco et al. (2012) & 0.36 & 79.94 & 3.38 & Wang et al. (2017) \\ 0.2 & 72.9 & 29.6 & Zhang et al. (2014) & **0.38** & **81.5** & **1.9** & Alam et al. (2017) \\ 0.27 & 77 & 14 & Simon et al. (2005) & 0.4 & 82.04 & 2.03 & Wang et al. (2017) \\ 0.28 & 88.8 & 36.64 & Zhang et al. (2014) & 0.43 & 86.45 & 3.97 & Gaztanaga et al. (2009) \\ **0.3519** & **83** & **14** & Moresco et al. (2012) & 0.44 & 84.81 & 1.83 & Wang et al. (2017) \\ **0.3802** & **83** & **13.5** & Moresco et al. (2016) & 0.48 & 87.79 & 2.03 & Wang et al. (2017) \\ **0.4004** & **77** & **10.2** & Moresco et al. (2016) & **0.51** & **90.4** & **1.9** & Alam et al. (2017) \\ **0.4247** & **87.1** & **11.2** & Moresco et al. (2016) & 0.52 & 94.35 & 2.64 & Wang et al. (2017) \\ **0.4497** & **92.8** & **12.9** & Moresco et al. (2016) & 0.56 & 93.34 & 2.3 & Wang et al. (2017) \\ 0.47 & 89 & 34 & Ratsimbazafy et al. (2017) & 0.57 & 96.8 & 3.4 & Anderson et al. (2014) \\ **0.4783** & **80.9** & **9** & Moresco et al. (2016) & 0.59 & 98.48 & 3.18 & Wang et al. (2017) \\ **0.5929** & **104** & **13** & Moresco et al. (2012) & 0.6 & 87.9 & 6.1 & Blake et al. (2012) \\ **0.6797** & **92** & **8** & Moresco et al. (2012) & **0.61** & **97.3** & **2.1** & Alam et al. (2017) \\ **0.7812** & **105** & **12** & Moresco et al. (2012) & 0.64 & 98.82 & 2.98 & Wang et al. (2017) \\ **0.8754** & **125** & **17** & Moresco et al. (2012) & 0.73 & 97.3 & 7 & Blake et al. (2012) \\ 0.88 & 90 & 40 & Stern et al. (2010) & 2.3 & 224 & 8 & Busca et al. (2013) \\ 0.9 & 117 & 23 & Simon et al. (2005) & 2.33 & 224 & 8 & Bautista et al. (2017) \\ **1.037** & **154** & **20** & Moresco et al. (2012) & 2.34 & 222 & 7 & Delubac et al. (2015) \\ 1.3 & 168 & 17 & Simon et al. (2005) & 2.36 & 226 & 8 & Font-Ribera et al. (2014) \\ **1.363** & **160** & **33.6** & Moresco (2015) & & & & \\ 1.43 & 177 & 18 & Simon et al. (2005) & & & & \\ 1.53 & 140 & 14 & Simon et al. (2005) & & & & \\ 1.75 & 202 & 40 & Simon et al. (2005) & & & & \\ **1.965** & **186.5** & **50.4** & Moresco (2015) & & & & \\ \hline \end{tabular} \end{table} Table 1: Table shows the Hubble parameters (\(H(z)\)) and the corresponding uncertainties (\(\sigma_{H(z)}\)) in kmMpc\({}^{-1}\)sec\({}^{-1}\) at different redshifts, measured by the DA and BAO techniques. We consider covariances for the Hubble data shown in bold font in this table.
We refer to section 3.3 for a detailed discussion of the covariances between the \(H(z)\) data.

### Hubble measurements

In our analysis, we create an ANN method to estimate the Hubble constant (\(H_{0}\)) and the density parameters (i.e., \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)), focussing on the 53 available Hubble measurements. These \(H(z)\) values are measured using the DA (Jimenez & Loeb, 2002) and BAO (Blake & Glazebrook, 2003; Seo & Eisenstein, 2003) techniques. In the case of the DA technique, 29 \(H(z)\) values are observed in the redshift range \(0.07\leq z\leq 1.965\) without assuming any cosmological model. The BAO technique measures 24 \(H(z)\) values in the redshift interval \(0.24\leq z\leq 2.36\), assuming a \(\Lambda\)CDM universe to determine the sound horizon. In the left panel of table 1, we show the observed \(H(z)\) and corresponding uncertainties measured by the DA technique, and the right panel of this table presents the \(H(z)\) measurements with their uncertainties for the BAO technique. We consider the redshift points (in ascending order over the range \(0.07\leq z\leq 2.36\)) of these observed \(H(z)\) data to generate the mock values of \(H(z)\) for our analysis.

### Signal model

We use the random.uniform3 function of the python library numpy4 to simulate the values of the Hubble constant (\(H_{0}\)), matter density (\(\Omega_{0m}\)), and vacuum density (\(\Omega_{0\Lambda}\)) of the universe in suitable ranges. To keep our analysis free from any prejudice that may stem from a restricted choice of priors, we use wide prior ranges for each independent parameter. We consider the uniform range \(\{60,80\}\) kmMpc\({}^{-1}\)sec\({}^{-1}\) for \(H_{0}\). Similarly, we utilize the uniform ranges \(\{0.2,0.5\}\) and \(\{0.5,0.8\}\) for \(\Omega_{0m}\) and \(\Omega_{0\Lambda}\) respectively. We generate \(1.2\times 10^{5}\) random values for each of these three parameters in their corresponding uniform ranges. Then, we use equation 11 to obtain the values of \(\Omega_{0k}\) from the simulated values of \(\Omega_{0m}\) and \(\Omega_{0\Lambda}\). After simulating the values of \(H_{0}\), \(\Omega_{0m}\) and \(\Omega_{0\Lambda}\), we use equation 12 to generate the values of \(H(z)\) at the 53 observed redshift points (i.e., \(0.07\leq z\leq 2.36\)) for each given set of simulated values of these cosmological parameters. Finally, we obtain \(1.2\times 10^{5}\) realizations of \(H(z)\), where each realization contains 53 \(H(z)\) values corresponding to the observed redshift range \(0.07\leq z\leq 2.36\).

Footnote 3: [https://numpy.org/doc/stable/reference/random/generated/numpy.random.uniform.html](https://numpy.org/doc/stable/reference/random/generated/numpy.random.uniform.html)

Footnote 4: [https://numpy.org/](https://numpy.org/)

### Noise model

We incorporate correlated Gaussian noise into the simulated \(H(z)\) data to account for the inherent noise of the Hubble measurements. We generate the full covariance matrix corresponding to the Hubble measurements following the procedure of Moresco et al. (2020), as well as including the covariances given by Alam et al. (2017). In figure 1, we show this full covariance matrix, which is used to generate the correlated Gaussian noise. We utilize the openly available code5 (Moresco et al., 2020) to estimate the covariances between the Hubble parameters measured by Moresco et al. (2012, 2016) and Moresco (2015). Moreover, following the suggestion made by Moresco et al.
(2020), we consider the bias calculations for the systematic contributions due to the initial mass function (IMF) and the stellar population synthesis (SPS) model (odd one out) when estimating the covariances corresponding to these Hubble measurements (i.e., Moresco et al. (2012, 2016); Moresco (2015)). We refer the reader to the literature (e.g., Moresco et al. (2012, 2016); Moresco (2015); Moresco et al. (2020)) for details about the systematic contributions to the Hubble parameters measured by the DA technique. For the BAO \(H(z)\) measurements, we use the covariances between the three Hubble parameters measured by Alam et al. (2017), along with their measured variances. For the rest (Zhang et al., 2014; Jimenez et al., 2003; Simon et al., 2005; Ratsimbazafy et al., 2017; Stern et al., 2010; Gaztanaga et al., 2009; Oka et al., 2014; Wang et al., 2017; Chuang & Wang, 2013; Anderson et al., 2014; Blake et al., 2012; Busca et al., 2013; Bautista et al., 2017; Delubac et al., 2015; Font-Ribera et al., 2014) of the observed Hubble data, we utilize only the corresponding variances on the diagonal of the full covariance matrix. We consider noise distributions with zero mean, governed by the full covariance matrix (shown in figure 1) corresponding to the 53 Hubble measurements. Using the random.multivariate_normal6 function of the numpy library, we generate \(1.2\times 10^{5}\) realizations of correlated Gaussian noise using different seed values. Each of these noise realizations contains 53 values corresponding to the observed redshift points (shown in table 1). We add these correlated noises to the simulated \(H(z)\) data. These noise-included realizations of \(H(z)\) are used as the input of ParamANN, and the corresponding values of \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\) are provided as the targets in the output layer of ParamANN.

Footnote 6: [https://numpy.org/doc/stable/reference/random/generated/numpy.random.multivariate_normal.html](https://numpy.org/doc/stable/reference/random/generated/numpy.random.multivariate_normal.html)

Figure 1: Figure shows the full covariance matrix of the observed Hubble parameters in the redshift interval \(0.07\leq z\leq 2.36\). The colorbar represents the non-zero covariances between Hubble measurements on a log scale. White pixels in the covariance plot represent redshift pairs for which covariance information is not available.

### Deep learning of ParamANN

We use the open-source ML platform TensorFlow7 (Abadi et al., 2015), with the python programming language, to create the architecture of ParamANN as well as for the deep learning of this neural network.

Footnote 7: [https://www.tensorflow.org/](https://www.tensorflow.org/)

#### 3.4.1 ParamANN

We construct ParamANN with one hidden layer for the direct mapping between the noise-included \(H(z)\) and the four fundamental parameters (i.e., \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) of the \(\Lambda\)CDM universe. In figure 2, we present the architecture of ParamANN. In ParamANN, the input layer contains fifty-three neurons, the hidden layer consists of thirty neurons, and the output layer comprises eight neurons. The neurons of each layer are densely connected (by weights and biases) to each neuron of the previous layer. We note that the first half of the output layer provides the predictions and the second half provides the uncertainties corresponding to these predictions.
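For concreteness, the architecture just described can be written in a few lines of TensorFlow/Keras (a minimal sketch of our own; choices beyond the stated 53-30-8 dense structure are assumptions, not the authors' released code):

```python
import tensorflow as tf

def build_paramann() -> tf.keras.Model:
    """Minimal ParamANN sketch: 53 standardized H(z) values in,
    4 parameter estimates + 4 log-variances out (see section 3.4.3)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(53,)),
        tf.keras.layers.Dense(30, activation="relu"),   # single hidden layer
        tf.keras.layers.Dense(8, activation="linear"),  # predictions + log-variances
    ])

model = build_paramann()
model.summary()
```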
To train ParamANN, we use the noise-included \(H(z)\) as input, and the corresponding parameter values (i.e., \(\hat{H}_{0}\), \(\hat{\Omega}_{0m}\), \(\hat{\Omega}_{0k}\), \(\hat{\Omega}_{0\Lambda}\)) are used as targets, where the 'hat' notation denotes the target parameters. We use the ReLU (Agarap, 2019) activation function in the hidden layer to learn the non-linearity of the direct mapping between input and targets. For the optimization process, using the mini-batch algorithm (Ruder, 2016; Sun et al., 2020), we utilize the _adaptive moment estimation_ (ADAM) (Kingma & Ba, 2014) optimizer with learning rate \(5\times 10^{-4}\) to update the weights and biases during the backward propagation (Hecht-Nielsen, 1992) of the training process of ParamANN.

#### 3.4.2 Preprocessing of data

Preprocessing is a widely used procedure to normalize the input data (generally mapping the values into a smaller range) for better performance of the supervised learning of an ANN. Moreover, normalized input data can speed up the training process of a neural network. Familiar preprocessing techniques are _min-max normalization_, _z-score normalization_ (i.e., _standardization_), etc. (Kotsiantis et al., 2007). We use the _standardization_ method to normalize the noise-included \(H(z)\) data which are used as input to ParamANN. The four fundamental parameters (i.e., \(\hat{H}_{0}\), \(\hat{\Omega}_{0m}\), \(\hat{\Omega}_{0k}\), \(\hat{\Omega}_{0\Lambda}\)) corresponding to these inputs are used as targets without any scaling. After randomly shuffling the entire data set, we split the data into three sets (i.e., training, validation and test sets). We use \(10^{5}\) samples for training, \(1.5\times 10^{4}\) samples for validation and \(5\times 10^{3}\) samples for testing the predictions of ParamANN. To perform the _standardization_ of the input, we first obtain the mean and standard deviation of the noise-included \(H(z)\) data of the training set. Then, each sample of the three sets is shifted by this mean and divided by this standard deviation. We use the mean and standard deviation of the training set even for the validation and test sets, which effectively propagates the information about the training of ParamANN into these data sets. After normalizing the noise-included \(H(z)\) of the three sets using the standardization method, we use these standardized \(H(z)\) data as the normalized input of ParamANN.

Figure 2: Figure shows the architecture of ParamANN, which contains one hidden layer between the input and output layers. The input layer contains 53 neurons representing the 53 noise-included \(H(z)\) values in the range \(0.07\leq z\leq 2.36\), and the hidden layer contains 30 neurons with ReLU activation (i.e., \(\{a_{i}:i\in\{0,1,...,29\}\}\)). The output layer contains 8 neurons, of which the first four represent the predicted parameters (i.e., \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\) and \(\Omega_{0\Lambda}\)) and the last four predict the uncertainties corresponding to these parameters.

#### 3.4.3 Loss function

In our analysis, we use the _heteroscedastic_ (HS) loss function (Kendall and Gal, 2017), which is given by \[L^{HS}=\frac{1}{2n}\sum_{q=1}^{n}\left[\exp(-s_{q})\left(y_{q}-\hat{y}_{q}\right) ^{2}+s_{q}\right], \tag{13}\] where \(n\) is the number of targets, which is four in our analysis. In equation 13, \(y_{q}\) and \(\hat{y}_{q}\) represent the predictions and targets respectively.
Moreover, in this equation, \(s_{q}\) is the log variance (i.e., \(\ln\sigma_{q}^{2}\)) corresponding to a prediction, where \(\sigma_{q}\) denotes the aleatoric uncertainty of the prediction. Using this special type of loss function, we obtain the uncertainty (i.e., the aleatoric uncertainty) corresponding to each prediction. Therefore, the output layer of ParamANN contains eight neurons, of which the first four estimate the values of the cosmological parameters and the remaining four evaluate the uncertainties corresponding to these parameters. This HS loss function is equivalent to the negative log-likelihood function commonly utilized in traditional approaches to cosmological analysis. Similar to the usual likelihood method, ParamANN minimizes the HS loss function by comparing the predictions with the targets. Therefore, the uncertainty measurements of the predicted parameters provided by this loss function are equivalent to the maximum-likelihood estimates of the cosmological parameters in a traditional method.

#### 3.4.4 Training and prediction

We train ParamANN using \(10^{5}\) realizations of standardized \(H(z)\) data (in the range \(0.07\leq z\leq 2.36\)) and the corresponding target parameters (i.e., \(\hat{H}_{0}\), \(\hat{\Omega}_{0m}\), \(\hat{\Omega}_{0k}\), \(\hat{\Omega}_{0\Lambda}\)). The training process continues iteratively by minimizing the HS loss function used in the neural network. We use 100 epochs to decide how long the optimization process should continue. Moreover, each epoch completes a fixed number of iterations, since we choose a mini-batch size of 128. Depending on the mini-batch size, each iteration takes a subset of the entire training set to minimize the HS loss function (equation 13). Therefore, one can estimate the number of iterations in each epoch by taking the ratio of the number of training samples to the mini-batch size. In our analysis, the number of iterations in each epoch is 782. We use the _model averaging ensemble_ (MAE) method (Lai et al., 2022) to reduce the epistemic uncertainties8 in the predicted parameters. For the MAE method, we perform the training of ParamANN 100 times (with the same data and the same tuning of hyperparameters) by varying the initialization of the weights using 100 randomly selected seed values. The entire training process of these 100 ensembles takes approximately 85 minutes to execute on an Intel(R) Core(TM) i7-10700 CPU system (two threads in each of eight cores) with 2.9 GHz processor speed. We also use the \(1.5\times 10^{4}\) validation samples during the training process to check for any kind of overfitting or underfitting in the minimization of the HS loss function, and we notice neither in the training of ParamANN.

Footnote 8: In the deep learning of an ANN, epistemic uncertainty exists due to a lack of knowledge about the input data as well as ignorance about the hyperparameters of the ANN model.

After completion of the entire training process, we predict the values of the parameters (i.e., \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)), with their corresponding log variances, by passing the \(5\times 10^{3}\) standardized \(H(z)\) samples of the test set through the 100 ensembles of the trained ParamANN. We estimate the final predictions of these parameters by taking the mean over the 100 ensembles of the predicted parameter values for each realization of the test set.
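For reference, a minimal numpy sketch (our own illustrative implementation, with assumed array shapes) of the heteroscedastic loss of equation 13 and of the ensemble averaging just described, including the uncertainty aggregation detailed in the next paragraph:

```python
import numpy as np

def hs_loss(y_pred, log_var, y_target):
    """Heteroscedastic loss of equation 13 for one mini-batch.

    y_pred, log_var, y_target: arrays of shape (batch, 4); log_var holds s_q.
    """
    n = y_target.shape[1]
    per_sample = np.sum(np.exp(-log_var) * (y_pred - y_target) ** 2 + log_var,
                        axis=1) / (2 * n)
    return per_sample.mean()

def ensemble_mean_and_sigma(preds, log_vars):
    """Combine the 100 ensemble outputs per test realization.

    preds, log_vars: arrays of shape (n_ensembles, n_samples, 4).
    Returns the ensemble-mean predictions and the aleatoric uncertainties
    obtained as the square root of the ensemble-averaged variances.
    """
    mean_pred = preds.mean(axis=0)                    # average the predictions
    sigma = np.sqrt(np.exp(log_vars).mean(axis=0))    # sqrt of averaged variances
    return mean_pred, sigma
```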
To calculate the uncertainties corresponding to these final predictions, we first take the exponential of the log variances to obtain the variances corresponding to the 100 ensembles of these parameters for each realization of the test set. Then, we estimate the mean over the 100 ensembles of variances and take the square root of these averaged variances to obtain the uncertainties corresponding to the final predictions of these parameters for each realization of the test set.

## 4 Results and analysis

In this section, we first show the predicted results for the test set. Then, we present the predictions of ParamANN for the Hubble measurements (DA+BAO) and compare these predictions with the results estimated by Planck collaboration VI (2020).

### Predictions for test set

We predict the Hubble constant (\(H_{0}\)) and the density parameters (i.e., \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) of the \(\Lambda\)CDM universe, with their corresponding uncertainties (i.e., \(\sigma_{H_{0}}\), \(\sigma_{\Omega_{0m}}\), \(\sigma_{\Omega_{0k}}\), \(\sigma_{\Omega_{0\Lambda}}\)), by passing the noise-included \(H(z)\) data of the test set through the trained ParamANN. We compare these predictions with the corresponding targets (i.e., \(\hat{H}_{0}\), \(\hat{\Omega}_{0m}\), \(\hat{\Omega}_{0k}\), \(\hat{\Omega}_{0\Lambda}\)) to show the accuracy of the predictions of ParamANN. We estimate the differences between the targets and predictions of the test set using the equation \[\Delta_{y} = \hat{y}-y, \tag{14}\] where \(y\) represents the predicted parameters, i.e., \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\), and \(\hat{y}\) the corresponding target parameters, i.e., \(\hat{H}_{0}\), \(\hat{\Omega}_{0m}\), \(\hat{\Omega}_{0k}\), \(\hat{\Omega}_{0\Lambda}\).

Figure 3: Top left subfigure shows the differences between the target and predicted Hubble constant. In the top right subfigure, we show the differences between the target and predicted matter density. Similarly, in the bottom left and bottom right subfigures, we present the differences between target and prediction for the curvature and vacuum density parameters respectively. Moreover, in each of these subfigures, we show three times the uncertainties of the corresponding predictions. The horizontal axis of each subfigure represents the index number of the test samples.

In the top left subfigure of figure 3, we present the differences between targets and predictions for the Hubble constant. In the same figure, we show the corresponding differences for the matter, curvature and vacuum density parameters in the top right, bottom left and bottom right subfigures respectively. In each of these subfigures of figure 3, we also present three times the predicted uncertainties of the corresponding predicted parameters for the test set. We note that the differences corresponding to each of these parameters lie predominantly within their three-times-uncertainty ranges. Therefore, we conclude that the predictions for the test set agree well with the corresponding targets within the predicted uncertainties.

### Predictions for Hubble measurements

We train ParamANN using mock \(H(z)\) values (for the range \(0.07\leq z\leq 2.36\)) as input. Moreover, these mock \(H(z)\) values contain correlated noise compatible with the observed Hubble parameters.
Therefore, we can feed the observed \(H(z)\) values (shown in table 1) directly to the trained ParamANN (after applying the standardization method) to extract the values of the Hubble constant and the matter, curvature, and vacuum density parameters, with their corresponding uncertainties, for the present \(\Lambda\)CDM universe.

#### 4.2.1 Hubble constant and density parameters

In table 2, we show the present values of the Hubble parameter (\(H_{0}\)) and the three density parameters (i.e., \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)), with their corresponding uncertainties, estimated by the trained ParamANN from the observed Hubble data (DA+BAO). We compare our estimated values of these four parameters with the values of the same parameters obtained by Planck collaboration VI (2020). In table 3, we show the values of these cosmological parameters constrained by Planck collaboration VI (2020) for the \(\Lambda\)CDM universe. We note that the predicted uncertainties corresponding to our estimated parameters are considerably larger than the uncertainties of Planck's estimates of these parameters, since the observed Hubble data are far fewer in number than the CMB data, and these \(H(z)\) data (specifically the DA data) carry larger uncertainties, including various systematic effects.

\begin{table} \begin{tabular}{c|c|c} \hline Parameter & Value & Significance \\ \hline \hline \(H_{0}\) & \(66.11\pm 2.59\) & \(0.6\sigma\) \\ \(\Omega_{0m}\) & \(0.3359\pm 0.0814\) & \(0.3\sigma\) \\ \(\Omega_{0k}\) & \(0.0237\pm 0.1248\) & \(0.18\sigma\) \\ \(\Omega_{0\Lambda}\) & \(0.6405\pm 0.0861\) & \(0.56\sigma\) \\ \hline \end{tabular} \end{table} Table 2: Table shows the values of the Hubble constant (\(H_{0}\) in kmMpc\({}^{-1}\)sec\({}^{-1}\)) and the three density parameters (i.e., \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) predicted by ParamANN from the observed \(H(z)\) data. The third column represents the significance of these predictions with respect to the results (shown in table 3) obtained by Planck collaboration VI (2020).

\begin{table} \begin{tabular}{c|c} \hline Parameter & Value \\ \hline \hline \(H_{0}\) & \(67.66\pm 0.42\) \\ \(\Omega_{0m}\) & \(0.3111\pm 0.0056\) \\ \(\Omega_{0k}\) & \(0.001\pm 0.002\) \\ \(\Omega_{0\Lambda}\) & \(0.6889\pm 0.0056\) \\ \hline \end{tabular} \end{table} Table 3: Table shows the values of the Hubble constant (\(H_{0}\) in kmMpc\({}^{-1}\)sec\({}^{-1}\)), matter (\(\Omega_{0m}\)), curvature (\(\Omega_{0k}\)) and vacuum (\(\Omega_{0\Lambda}\)) density parameters estimated by Planck collaboration VI (2020).

We calculate the significances of the parameter values predicted by ParamANN with respect to the Planck results. We estimate these significances by taking the absolute differences between our estimated parameters and Planck's results, and dividing these absolute differences by the corresponding uncertainties predicted by ParamANN. We show these significances in the third column of table 2, where \(\sigma\) denotes the uncertainty (predicted by ParamANN) corresponding to each parameter. We notice that our estimate of \(H_{0}\) from the observed Hubble data shows only a small deviation (i.e., \(0.6\sigma\)) from the value of \(H_{0}\) obtained by Planck collaboration VI (2020). This small deviation of our estimated \(H_{0}\) from Planck's \(H_{0}\) value indicates an alleviation of the so-called Hubble tension when the observed Hubble data (DA+BAO) alone are used.
Moreover, the matter (\(\Omega_{0m}\)), curvature (\(\Omega_{0k}\)) and vacuum (\(\Omega_{0\Lambda}\)) densities predicted by ParamANN show small deviations (i.e., \(0.3\sigma\), \(0.18\sigma\) and \(0.56\sigma\) respectively) from the values of these parameters constrained by Planck collaboration VI (2020). These low significances (shown in table 2) indicate good agreement between our estimates and Planck's estimates of these four cosmological parameters (i.e., \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)). We note that, although ParamANN is trained for a generalized \(\Lambda\)CDM universe by allowing a spatial curvature density, the trained ParamANN predicts a spatially flat \(\Lambda\)CDM universe and alleviates the so-called Hubble tension using the Hubble measurements (DA+BAO) alone. In figure 4, we demonstrate the estimated values of the Hubble constant (\(h_{0}\)) and the three density parameters (i.e., \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) provided by the trained ParamANN, as well as the values of these parameters constrained by Planck collaboration VI (2020). In the same figure, we also show the uncertainties corresponding to these parameters. We note that in this figure we present the Hubble constant in units of 100 kmMpc\({}^{-1}\)sec\({}^{-1}\).

Figure 4: Figure represents the values of the parameters (i.e., \(h_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) with corresponding error bars obtained by ParamANN, as well as those estimated by Planck collaboration VI (2020). The Hubble constant (\(h_{0}\)) is represented in units of 100 kmMpc\({}^{-1}\)sec\({}^{-1}\). The horizontal axis indicates the four parameters and the vertical axis represents their values.

#### 4.2.2 Hubble parameter curve

We obtain the Hubble parameter curve (equation 10) using the parameter values (shown in table 2) predicted by the trained ParamANN. We also compute the same curve using the parameter values (shown in table 3) constrained by Planck collaboration VI (2020). In figure 5, we show these two curves along with the data points (in the range \(0.07\leq z\leq 2.36\)) measured by the DA and BAO techniques. Visually, both curves seem to fit the observations well. To test how well these curves fit the observed points, we estimate the reduced-\(\chi^{2}\) statistic defined as follows9, Footnote 9: Although we have not done a \(\chi^{2}\) fitting with ParamANN, we use the reduced-\(\chi^{2}\) statistic to estimate the goodness of the fits. \[\text{reduced-}\chi^{2} = \frac{1}{dof}\sum_{i,j=0}^{52}\left[H(z_{i})-H_{obs}(z_{i}) \right]\left[C^{-1}\right]_{ij}\left[H(z_{j})-H_{obs}(z_{j})\right], \tag{15}\] where \(i,j\) are dummy indices and 'dof' indicates the number of degrees of freedom, which is 50 in this case, since we use 53 Hubble data points to estimate three independent parameters. In equation 15, \(H_{obs}(z)\) denotes the 53 observed Hubble data points (shown in table 1), \(H(z)\) represents the Hubble parameters computed using our predicted parameters (or Planck's results) at the observed redshift points, and \(C\) denotes the covariance matrix shown in figure 1. The reduced-\(\chi^{2}\) value for our analysis is \(\sim 0.96\); a comparable value (i.e., \(\sim 0.78\)) is obtained for the Planck curve. This shows that the parameters predicted by ParamANN fit the observations well. Further improvements in the error bars of the predicted parameters can be achieved once \(H(z)\) observations with lower errors become available.
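For completeness, equation 15 can be evaluated in a few lines of Python (a minimal sketch with illustrative names):

```python
import numpy as np

def reduced_chi2(H_model, H_obs, cov, n_params=3):
    """Equation 15: generalized chi-square with the full covariance matrix.

    H_model, H_obs: arrays of length 53 (model and observed H(z)).
    cov: 53x53 covariance matrix of the measurements.
    """
    r = H_model - H_obs                    # residuals at the observed redshifts
    chi2 = r @ np.linalg.solve(cov, r)     # r^T C^{-1} r without explicit inverse
    dof = len(H_obs) - n_params            # 53 data points, 3 free parameters
    return chi2 / dof
```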
Figure 5: Figure shows the Hubble parameter curve computed using our estimated parameters in green and the same curve determined using Planck's parameter values in blue. In the same figure, we also present the Hubble parameters measured by the DA and BAO techniques. The horizontal axis represents the redshifts and the vertical axis the values of the Hubble parameter (in kmMpc\({}^{-1}\)sec\({}^{-1}\)) at these redshifts.

## 5 Discussions and Conclusions

Recent CMB observations indicate that our universe follows the flat \(\Lambda\)CDM model dominated by vacuum density (Planck collaboration VI, 2020). This vacuum density (i.e., the simplest form of dark energy) is believed to accelerate the expansion of our universe (Riess et al., 1998; Perlmutter et al., 1999). The expansion rate of the present universe can be measured by estimating the value of the Hubble constant (\(H_{0}\)). Estimates of \(H_{0}\) from the CMB observations show a significant tension, the so-called Hubble tension (Planck collaboration VI, 2020; Riess et al., 2018, 2019; Riess, 2020; Riess et al., 2021, 2022; Wong et al., 2019; Di Valentino, 2021; de Jaeger et al., 2022; Brout et al., 2022), with the value of today's Hubble parameter constrained by local observational data (e.g., type Ia Supernovae measurements). In this article, we predict the value of the Hubble constant by employing an ML algorithm using local measurements (DA+BAO) of the Hubble parameter alone. We also measure the cosmological density parameters (i.e., \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) of the \(\Lambda\)CDM model without assuming spatially flat curvature for the present universe. Several earlier works constrain the cosmological parameters under the assumption of a spatially flat universe (Macaulay et al., 2013; Raveri, 2016; Shafieloo et al., 2012; Sahni et al., 2014; Zheng et al., 2016; Linder, 2003; Shahalam et al., 2015; Leaf and Melia, 2017; Geng et al., 2018; Gomez-Valent and Amendola, 2018; Bengaly et al., 2023; Liu et al., 2019; Arjona and Nesseris, 2020; Mukherjee et al., 2022; Garcia et al., 2023). In contrast, the work of this article estimates all the relevant cosmological parameters without assuming the spatial flatness of the universe. This work therefore provides a generalized investigation of the observational data to extract fundamental information about our universe. Very interestingly, our measured \(H_{0}\) value is in sharp agreement with Planck collaboration VI (2020). Hence we do not see any evidence of Hubble tension using \(H(z)\) data alone. Since these values contradict the \(H_{0}\) value measured with Supernovae data (Riess et al., 2022) and other local measurements of \(H_{0}\), further attention may be needed in future work to investigate the nature and cause of the origin of this tension. In classical parameter-fitting approaches (e.g., using a likelihood function), the analysis is performed by estimating the likelihood function of some distribution of the observational data, as well as initially providing prior values for all or some of the parameters. The ML approach used by us can be treated as a complementary approach to these, and therefore provides a mechanism to test the robustness of the scientific results with respect to various methods of analysis. ML approaches uncover the direct mapping between inputs and targets to learn the hidden pattern between them.
Therefore, ML can be used to learn the complex function that extracts the cosmological parameters from observational data (without even assuming a Gaussian model for the noise). We therefore do not need to use any prior values of some parameters to estimate the others. Given the complementarity of the ML and maximum-likelihood approaches, we note that our estimates of the cosmological parameters possess the properties of maximum-likelihood estimates as well (equation 13). We create an ANN with one hidden layer containing 30 neurons, and give this ANN the suitable name ParamANN. We thoroughly train ParamANN using \(10^{5}\) samples of mock \(H(z)\) values (which contain correlated noise appropriate for the observed Hubble data) as input, with the corresponding parameters as targets, varied uniformly in their specified ranges (i.e., \(\{60,80\}\) kmMpc\({}^{-1}\)sec\({}^{-1}\), \(\{0.2,0.5\}\) and \(\{0.5,0.8\}\) for \(H_{0}\), \(\Omega_{0m}\) and \(\Omega_{0\Lambda}\) respectively). We note that the mock values of \(\Omega_{0k}\) are calculated as \(1-\Omega_{0m}-\Omega_{0\Lambda}\). We use another \(1.5\times 10^{4}\) samples of mock data to validate the training of ParamANN and \(5\times 10^{3}\) samples to test the performance of the trained ParamANN. We note that the differences between targets and predictions for each parameter lie predominantly within three times the uncertainties (predicted by ParamANN) corresponding to the predictions of the test set. These results (shown in figure 3) for the test set show excellent agreement between the targets and predictions of ParamANN (at least within the predicted uncertainties). Finally, we use the trained ParamANN for the predictions of the Hubble constant and the density parameters from the Hubble parameters measured by the DA and BAO techniques in the redshift interval \(0.07\leq z\leq 2.36\). We obtain \(H_{0}=66.11\pm 2.59\) kmMpc\({}^{-1}\)sec\({}^{-1}\), \(\Omega_{0m}=0.3359\pm 0.0814\), \(\Omega_{0k}=0.0237\pm 0.1248\) and \(\Omega_{0\Lambda}=0.6405\pm 0.0861\). We note that the predicted density parameters show \(0.3\sigma\), \(0.18\sigma\) and \(0.56\sigma\) deviations from Planck's results for matter, curvature and vacuum respectively. Moreover, our predicted Hubble constant (showing a \(0.6\sigma\) deviation from Planck's result) agrees well with Planck's estimate of \(H_{0}\), which indicates an alleviation of the so-called Hubble tension when our ML technique is employed. The current article presents the first attempt to measure the four fundamental parameters (i.e., \(H_{0}\), \(\Omega_{0m}\), \(\Omega_{0k}\), \(\Omega_{0\Lambda}\)) of the \(\Lambda\)CDM universe from the Hubble measurements (DA+BAO) using an ML algorithm. In the current analysis, we consider a spatially non-flat \(\Lambda\)CDM model to train ParamANN for the estimation of the cosmological density parameters along with the Hubble constant. In a future article, we will apply the ML procedure to different types of dark energy models (i.e., wCDM, CPL, scalar field, etc.) for the estimation of the fundamental cosmological parameters, in order to compare these dark energy models with each other.

## Acknowledgment

We acknowledge the use of the open-source software library TensorFlow10, the python library numpy11 and the openly available code12 provided by Moresco. We thank Albin Joseph and Md Ishaque Khan for useful discussions associated with this work.
Footnote 10: [https://www.tensorflow.org/](https://www.tensorflow.org/) Footnote 11: [https://numpy.org/](https://numpy.org/) Footnote 12: [https://gitlab.com/mmoresco/CCcovariance](https://gitlab.com/mmoresco/CCcovariance)
2310.12985
Enabling Energy-Efficient Object Detection with Surrogate Gradient Descent in Spiking Neural Networks
Spiking Neural Networks (SNNs) are a biologically plausible neural network model with significant advantages in both event-driven processing and spatio-temporal information processing, rendering SNNs an appealing choice for energy-efficient object detection. However, the non-differentiability of the biological neuronal dynamics model presents a challenge during the training of SNNs. Furthermore, a suitable decoding strategy for object detection in SNNs is currently lacking. In this study, we introduce the Current Mean Decoding (CMD) method, which solves the regression problem to facilitate the training of deep SNNs for object detection tasks. Based on the gradient surrogate and CMD, we propose the SNN-YOLOv3 model for object detection. Our experiments demonstrate that SNN-YOLOv3 achieves a remarkable performance with an mAP of 61.87% on the PASCAL VOC dataset, requiring only 6 time steps. Compared to Spiking-YOLO, we have managed to increase mAP by nearly 10% while reducing energy consumption by two orders of magnitude.
Jilong Luo, Shanlin Xiao, Yinsheng Chen, Zhiyi Yu
2023-09-07T15:48:00Z
http://arxiv.org/abs/2310.12985v1
Enabling Energy-Efficient Object Detection with Surrogate Gradient Descent in Spiking Neural Networks ###### Abstract Spiking Neural Networks (SNNs) are a biologically plausible neural network model with significant advantages in both event-driven processing and spatio-temporal information processing, rendering SNNs an appealing choice for energy-efficient object detection. However, the non-differentiability of the biological neuronal dynamics model presents a challenge during the training of SNNs. Furthermore, a suitable decoding strategy for object detection in SNNs is currently lacking. In this study, we introduce the Current Mean Decoding (CMD) method, which solves the regression problem to facilitate the training of deep SNNs for object detection tasks. Based on the gradient surrogate and CMD, we propose the SNN-YOLOv3 model for object detection. Our experiments demonstrate that SNN-YOLOv3 achieves a remarkable performance with an mAP of 61.87% on the PASCAL VOC dataset, requiring only 6 time steps. Compared to Spiking-YOLO, we have managed to increase mAP by nearly 10% while reducing energy consumption by two orders of magnitude1.

Footnote 1: The code is made available at [https://github.com/xiaolongren969/SNN-YOLOv3](https://github.com/xiaolongren969/SNN-YOLOv3)

Jilong Luo, Shanlin Xiao\({}^{*}\), Yinsheng Chen, Zhiyi Yu\({}^{*}\)+ Sun Yat-sen University, China. Index terms: Energy-Efficient, Object Detection, Spiking Neural Networks, Surrogate Gradient

## 1 Introduction

As third-generation artificial neural networks [1], Spiking Neural Networks (SNNs) are promising for implementing low-power artificial intelligence algorithms on event-driven neuromorphic hardware [2, 3, 4]. Based on biological plausibility [5], SNNs emulate the information processing mechanisms observed in biological neural systems, where computation and information transfer between neurons occur through discrete binary events [6]. Despite the attractive energy efficiency of spiking neural networks, training SNNs remains a significant challenge. One of the primary reasons for this challenge is the complexity of the dynamics model and the non-differentiability of spiking neurons, typically modeled as IF or LIF neurons, which makes performing gradient-descent-based backpropagation difficult [7, 8]. Several researchers have proposed training SNNs using the ANN-to-SNN conversion method [9, 10], in which ANNs (Artificial Neural Networks) with the ReLU activation function are initially trained via gradient descent and then converted into SNNs with integrate-and-fire neurons by applying appropriate threshold balancing techniques [11]. However, SNNs obtained through ANN-to-SNN methods generally require 2000-3000 time steps to achieve acceptable accuracy. Here, a time step denotes the time unit for the forward propagation of a single layer, and effectively represents network latency [12]. To reduce this latency, surrogate-gradient-based backpropagation algorithms [7, 13] have been introduced for end-to-end gradient-descent learning on spike trains. Within these algorithms, the non-differentiable neuron model completes the backpropagation process by specifying a surrogate gradient as a continuous approximation of the actual gradient [14]. Training SNNs with surrogate gradients substantially decreases the inference latency, by nearly 100x (e.g., requiring fewer than 30 time steps).
Despite these appealing properties of SNNs, previous research has mainly focused on less complex tasks (image classification) and small-scale datasets (MNIST and CIFAR10), with relatively shallow network structures (\(<\)30 layers) [15, 16]. In this study, we investigate a more complex machine learning problem (object detection) in deep SNNs, using the surrogate gradient approach. Object detection is considered a demanding and challenging task in computer vision, aiming to recognize multiple objects and calculate the exact coordinates of their bounding boxes in images or videos. Unlike image classification, object detection requires the network to predict continuous, real-valued outputs, rather than simply selecting the category with the highest probability (via the argmax function), as is typically done in classification tasks. Our contributions can be summarized as follows: * We present the first SNN model that implements object detection using surrogate gradients, achieving state-of-the-art performance (61.87% mAP) on the non-trivial PASCAL VOC dataset. * We introduce the Current Mean Decoding (CMD) method, which solves the regression problem to facilitate the training of deep SNNs for object detection tasks.

## 2 Methods

### Surrogate gradient for spiking neuron models

Unlike ANNs, SNNs utilize spike trains for computation and information transmission among neurons. The dynamics of the classic IF neuron model [17, 18] can be described as follows: \[V_{mem,j}^{l}\left[t\right]=V_{mem,j}^{l}[t-1]+I_{j}^{l}[t]-V_{th}s_{j}^{l}[t] \tag{1}\] where \(s_{j}^{l}[t]\) represents the spike state of the j-th neuron in the l-th layer at time step t, \(I_{j}^{l}[t]\) represents the input current, and \(V_{mem,j}^{l}[t]\) represents the membrane potential of the j-th neuron in the l-th layer. The input current \(I_{j}^{l}[t]\) can be expressed as follows: \[I_{j}^{l}\left[t\right]=\sum_{i}w_{i,j}^{l}s_{i}^{l-1}\left[t\right]+b_{j}^{l} \tag{2}\] where \(w\) and \(b\) represent the synaptic weights and bias, respectively. When the membrane potential \(V_{mem,j}^{l}[t]\) of the j-th neuron in the l-th layer exceeds the threshold voltage \(V_{th}\), a spike \(s_{j}^{l}[t]\) is emitted. The mathematical formula is as follows: \[s_{j}^{l}\left[t\right]=H\left(V_{mem,j}^{l}\left[t\right]-V_{th}\right) \tag{3}\] Here, \(H(\cdot)\) is the Heaviside step function, which produces a value of 1 when its argument is greater than or equal to 0, and 0 otherwise. Due to the non-differentiability of the Heaviside step function, the surrogate gradient method is used to estimate gradient computations during backpropagation. The fundamental idea of the surrogate gradient is to update the weights using gradient backpropagation via a surrogate gradient function rather than the unit step function. In this research, we have chosen the arctangent function as our surrogate gradient function. Its mathematical expression is illustrated below: \[g\left(x\right)=\ \frac{1}{\pi}\arctan\left(\frac{\pi}{2}\alpha x\right)+\frac {1}{2} \tag{4}\] Its derivative is represented as follows: \[g^{{}^{\prime}}\left(x\right)=\frac{\alpha}{2\left(1+\left(\frac{\pi}{2} \alpha x\right)^{2}\right)} \tag{5}\]
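These neuron dynamics translate directly into a custom autograd function. The following is a minimal PyTorch sketch (our own illustration, not the authors' released code): the forward pass applies the Heaviside step of equation 3, while the backward pass substitutes the arctangent surrogate derivative of equation 5.

```python
import torch

class ATanSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, arctan surrogate (eq. 5) backward."""
    alpha = 2.0

    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        a = ATanSpike.alpha
        return grad_out * a / (2 * (1 + (torch.pi / 2 * a * x) ** 2))

def if_step(v, current, v_th=1.0):
    """One IF update (eqs. 1 and 3): integrate, fire, soft reset by subtraction."""
    v = v + current
    spike = ATanSpike.apply(v - v_th)
    v = v - v_th * spike
    return v, spike
```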
### Current mean decoding (CMD)

Rate decoding is a commonly used approach to transfer information in spiking neural networks, decoding information intensity based on the rate of neuron spike emissions [19, 20]. Nonetheless, in specific tasks, particularly those concerning regression problems like object detection, rate decoding struggles and more advanced decoding strategies are required. Object detection generally involves two primary tasks: classification and regression. In the classification task, the classification result can be determined by the maximum magnitude of the output neuron spike firing rate (rate decoding) in SNNs. However, regression tasks often require predicting the location, size and shape of the object, which requires the output value space of the network to be real-valued. Nevertheless, the discrete spikes employed in rate decoding do not map directly to a continuous numerical space, resulting in the network output being discrete. Consequently, the discrete nature of rate decoding might lead to a loss of accuracy when representing continuous outputs. While continuous values may be approximated through rate decoding, this approximation requires a compromise between precision and the number of time steps. For this reason, we introduce current mean decoding (hereafter abbreviated as CMD) in spiking neural networks, a more powerful decoding technique that exploits the dynamic properties of neurons.

Figure 1: Traditional rate decoding and proposed current mean decoding (CMD) in SNN-YOLOv3. The left part shows that the input image is coded in a direct encoding way and the activation layers in the network are replaced with IF neurons. The last layer is the decoding layer, and the right part shows the different decoding methods used in the decoding layer.

Fig. 1 illustrates a schematic diagram of CMD, which collects the currents produced at synapses upon neuron spike events. Subsequently, it accomplishes information decoding by computing the mean value of the input current, which is defined as follows: \[Output=\frac{\sum\limits_{t=1}^{T}\sum\limits_{i=1}^{n}x_{i}\left[t\right] \times w_{i}}{Timestep} \tag{6}\] where \(x_{i}[t]\) represents the spike firing state of the \(i\)-th presynaptic neuron at time step \(t\), and \(w_{i}\) represents the weight associated with the corresponding neuron. These values are multiplied to yield the synaptic current. In comparison with rate decoding, this decoding method provides a better approximation of continuous values and offers better accuracy and flexibility for regression problems.
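For illustration, the CMD readout of equation 6 over \(T\) time steps can be sketched as follows (our own minimal implementation; tensor shapes and names are assumptions):

```python
import torch

def current_mean_decode(spikes, weight):
    """CMD (eq. 6): average the decoding layer's synaptic current over time.

    spikes: binary tensor of shape (T, batch, n_in) from the presynaptic layer.
    weight: linear readout weight of shape (n_out, n_in).
    Returns a real-valued output of shape (batch, n_out).
    """
    T = spikes.shape[0]
    currents = torch.einsum("tbi,oi->tbo", spikes.float(), weight)
    return currents.sum(dim=0) / T
```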
## 3 Experiments

### Experimental setup

In this experiment, we choose the classical version of the YOLOv3 [21] real-time object detection network to validate the effectiveness of the CMD method. SNN-YOLOv3 was tested on the PASCAL VOC dataset [22] with simulations based on the PyTorch platform, and all experiments were done on NVIDIA Tesla V100 GPUs. SNN-YOLOv3 obtains more efficient and robust spike feature trains by using the first convolutional set as the encoding layer, without additional encoding layers. During the training process, we adopt a stochastic gradient descent optimiser with a momentum parameter of 0.9 and a cosine decay scheduler for fine-tuning the learning rate. We set the weight decay for the upper bound parameter \(\theta\) of SNN-YOLOv3 to \(5\times 10^{-3}\). Furthermore, we used normalization and horizontal flipping for data augmentation.

### Experimental results

In order to verify and analyse the effectiveness of our proposed method, we evaluate its performance on object detection tasks on the PASCAL VOC dataset. Fig. 2 shows the object detection performance of SNN-YOLOv3 as the number of training epochs increases. In the figure, the green and red curves represent the performance trends for the CMD method with \(T=4\) and \(T=6\), respectively, while the blue curve illustrates the accuracy variation for the rate decoding method with \(T=6\). From the experimental results, we draw the following conclusions: (1) In general, CMD brings higher performance compared to rate decoding. (2) As the current-mean-decoded SNN is trained with a larger number of time steps, its performance increases further. The results clearly indicate that when utilizing rate decoding, the SNN struggles to learn, leading to consistently low accuracy. In contrast, the CMD method shows a remarkable ability to improve accuracy with an increasing number of training iterations. This illustrates the exceptional effectiveness of the CMD method for object detection tasks in SNNs.

Figure 2: Experimental results of SNN-YOLOv3 on the PASCAL VOC dataset for various time steps; maximum mAP is in parentheses.

Figure 3: Object detection results on the PASCAL VOC dataset.

Moreover, the remarkable performance of SNN-YOLOv3 is also shown in the other examples in Fig. 3. SNN-YOLOv3 precisely locates and classifies various object categories within images, including persons, cars, and bicycles, which demonstrates its excellent object localisation capability.

### SNN-YOLOv3 energy efficiency

In order to assess the outstanding energy efficiency of SNN-YOLOv3, we compare the computational operations of SNN-YOLOv3 and YOLOv3 in the realm of digital signal processing. Within convolutional deep neural networks, the convolutional layer is the main computational region, where the multiply-accumulate (MAC) operation is the dominant operation. In contrast, the operation performed in a spiking neural network is an accumulate (AC) operation, because spiking events are binary. The input current is integrated or accumulated into the membrane potential only when the neuron receives a spike. For a fair comparison, we focus on the number of MACs and ACs consumed during single-image object detection. According to the literature [23], a 32-bit floating-point MAC operation consumes 4.6 pJ and a 32-bit floating-point AC operation consumes 0.9 pJ. Based on these per-operation energy figures, we calculated the energy consumption of YOLOv3 and SNN-YOLOv3 by multiplying the FLOPs (floating point operations) by the energy consumption per MAC or AC operation; for an SNN model, this is further multiplied by the number of time steps. According to our simulations, the FLOPs for ANN-YOLOv3 and SNN-YOLOv3 are \(66.19\) and \(0.425\) GFLOPs, respectively. Fig. 4 shows the results, where SNN-YOLOv3 is more than 158 times more energy efficient than YOLOv3 for 32-bit FL operations under both \(T=4\) and \(T=6\).
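This energy bookkeeping reduces to a one-line estimate per model. A minimal sketch under the stated assumptions (4.6 pJ per 32-bit MAC, 0.9 pJ per 32-bit AC [23]); note that the exact ratio reported in Fig. 4 may additionally depend on how FLOPs are mapped to operation counts:

```python
# Energy estimate per image: ANN energy = FLOPs x 4.6 pJ/MAC,
# SNN energy = FLOPs x 0.9 pJ/AC x number of time steps.
E_MAC_PJ = 4.6   # 32-bit floating-point MAC [23]
E_AC_PJ = 0.9    # 32-bit floating-point AC  [23]

def ann_energy_mj(gflops: float) -> float:
    return gflops * 1e9 * E_MAC_PJ * 1e-9   # pJ -> mJ

def snn_energy_mj(gflops: float, timesteps: int) -> float:
    return gflops * 1e9 * E_AC_PJ * 1e-9 * timesteps

print(ann_energy_mj(66.19))        # YOLOv3
print(snn_energy_mj(0.425, 6))     # SNN-YOLOv3 at T = 6
```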
### Comparison with the State-of-the-Art

We compare our approach with other state-of-the-art ANN-to-SNN conversion methods on the PASCAL VOC dataset to achieve SNN object detection on non-trivial datasets. Our calculated results are shown in Table 1. We calculated the energy consumption of SNN-YOLOv3 running on a neuromorphic chip (TrueNorth) and compared it to Spiking-YOLO. The efficiency of TrueNorth is 300 GFLOPs/W, and we define a time step as 1 ms (the 1 kHz synchronization signal in TrueNorth) [2]. For SNN-YOLOv3, our proposed method achieves 61.87% mAP using only 6 time steps. Compared to Spiking-YOLO, we obtain a nearly 10% increase in mAP while requiring nearly two orders of magnitude less energy. Considering that the TrueNorth chip was introduced in 2014, we can expect increased energy and computational efficiency, and hence better results, as neuromorphic chips advance.

## 4 Conclusion

In this paper, we introduce the energy-efficient SNN-YOLOv3, the first SNN model to implement object detection using surrogate gradients. It achieves state-of-the-art performance (61.87% mAP) on the PASCAL VOC dataset using only 6 time steps. Compared to previous work, we can accomplish object detection with less energy consumption. In addition, we proposed the current mean decoding method for solving regression problems in SNNs, which provides a new approach to tackling more advanced machine learning problems with deep SNNs.
2309.13534
Comparison of Random Forest and Neural Network Framework for Prediction of Fatigue Crack Growth Rate in Nickel Superalloys
The rate of fatigue crack growth in Nickel superalloys is a critical factor of safety in the aerospace industry. A machine learning approach is chosen to predict the fatigue crack growth rate as a function of the material composition, material properties and environmental conditions. Random forest and neural network frameworks are used to develop two different models and compare their results. Both frameworks give good predictions, with $r^2$ of 0.9687 for the random forest and 0.9831 for the neural network.
Raghunandan Pratoori
2023-09-24T03:08:52Z
http://arxiv.org/abs/2309.13534v1
Comparison of Random Forest and Neural Network Framework for Prediction of Fatigue Crack Growth Rate in Nickel Superalloys ###### Abstract The rate of fatigue crack growth in Nickel superalloys is a critical factor of safety in the aerospace industry. A machine learning approach is chosen to predict the fatigue crack growth rate as a function of the material composition, material properties and environmental conditions. Random forest and neural network frameworks are used to develop two different models and compare their results. Both frameworks give good predictions, with \(r^{2}\) of 0.9687 for the random forest and 0.9831 for the neural network. ## 1 Introduction Superalloys are of great utility in engineering, especially in the aerospace and power generation industries, because of their excellent balance of mechanical and chemical properties [24]. Superalloy components are subjected to high operating stresses under cyclic loading, which makes fatigue behavior a very important factor to consider. Fatigue is caused by fluctuations in the mechanical and thermal loads during the various stages of operation. Fatigue crack propagation depends on various factors such as environmental conditions, microstructure, and the composition of the material [1, 18, 28, 2]. This makes a case for developing a model to predict the fatigue crack growth rate under various conditions. Given the complexity of the phenomena, in this work a machine learning approach is chosen to develop a model for the prediction of the fatigue crack growth rate. Machine learning approaches are preferred in situations where exact real-time calculation of some attributes is not possible. Among the various available machine learning approaches, random forest and neural network frameworks are chosen. In the recent past, many researchers have used random forests in materials science to calculate properties of materials that require extreme experimental conditions to determine, to establish relationships between two or more material properties, or to enhance the material design process. Random forests have successfully been used to enhance prediction accuracy in the fields of ecology [10, 26], remote sensing [9, 11, 15, 21], and materials science [6, 19, 31]. Carrete et al. used random forests to identify compounds with low lattice thermal conductivity and the critical properties influencing it. With this approach, approximately 79,000 half-Heusler entries in the AFLOWLIB.org database were scanned with much ease, which would otherwise have been a high-cost and time-consuming experimental challenge. Nagasawa et al. used random forests to design organic photovoltaic materials and demonstrated their utility in the synthesis and characterization of the polymer. Vinci et al. used random forests to investigate the mechanical properties of Ultra-High-Temperature-Ceramic-Matrix-Composites and studied the influence of different parameters on different properties. Neural networks are one of the popular machine learning techniques often used in applications such as pattern recognition [20, 25], image processing [23, 30], biotechnology [7, 13, 17], and materials science [2, 5, 14, 12, 29]. The accurate results they provide and their simple representation make them applicable to almost all areas of research. Kotkunde et al.
have developed a neural network model enhanced with a differential evolution algorithm to predict the flow stress values for Ti-6Al-4V alloy as a function of strain, strain rate and temperature. Hassan et al. have used neural networks to predict physical properties such as density, porosity, and hardness of aluminium-copper/silicon carbide composites as a function of the weight percentage of copper and the volume fraction of the reinforced particles. Singh et al. have developed a neural network model for predicting the effective thermal conductivity of porous systems filled with different liquids. ## 2 Methods In this work, two different frameworks - random forest and neural network - are applied to predict the fatigue crack growth rate. ### Random Forest A random forest is an ensemble of \(n\) trees dependent on a \(p\)-dimensional vector of variables. The ensemble produces \(n\) outputs, one for each tree, which are averaged to produce one final prediction (Figure 1). The training algorithm proceeds as follows: 1. From the training data, draw a random sample with replacement (bootstrapping). 2. For each bootstrap sample, grow a tree, choosing the best split among a randomly selected subset of variables until no further splits are possible. 3. Repeat the above steps until \(n\) such trees are grown. Cross-validation is built into the training step of a random forest through the use of Out-Of-Bag (OOB) samples [4]. We can therefore calculate an ensemble prediction \(Y^{OOB}\) by averaging only its OOB predictions, and estimate the mean square error (MSE) for regression by \[MSE=n^{-1}\Sigma_{i=1}^{n}\{Y^{OOB}(X_{i})-Y_{i}\}^{2} \tag{1}\] The random forest algorithm is capable of selecting important variables but does not produce an explicit relationship between the variables and the predictions [3]. However, a measure of how much each variable contributes can be estimated by measuring the node purity in the course of training. ### Neural Network A neural network is made of neurons, each of which combines all the inputs given to it and transfers the result to other neurons through an activation function. Tan-sigmoid, linear, and log-sigmoid functions are the most commonly used activation (transfer) functions. A group of neurons connected together in a weighted fashion to produce an output is called a layer. The layout of a single-layer neural network is shown in Figure 2. The weights of each layer are evaluated using a back-propagation algorithm. The transfer function relating the inputs to the hidden layer is given by \[h_{i}=\tanh(\Sigma_{j}w_{ij}^{(1)}x_{j}+\theta_{i}^{(1)}) \tag{2}\] The relationship between the hidden units and the output is given by \[y=\Sigma_{i}w_{i}^{(2)}h_{i}+\theta^{(2)} \tag{3}\] ## 3 Data The data set used in this work is from published literature. It consists of 1894 data points for fatigue crack growth, dependent on 51 input variables that can be categorized into stress intensity factor, temperature, microstructure, heat treatment, load waveform, type of crack growth, material properties, and composition. The data is summarized in Figure 3. A detailed account of all the input variables is given in Fujii et al. The type of crack growth is binary valued, with short crack growth represented by 0 and long crack growth represented by 1. Since the problem is modelled here as a regression problem, type of crack growth is omitted from the analysis and hence only 50 variables are used.
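For concreteness, Eqs. (2) and (3) amount to the following few lines of NumPy. The weight shapes and random initialization are illustrative stand-ins; a trained network would obtain them from back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 50, 19                   # 50 predictors; 19 hidden nodes (best run below)
W1 = rng.normal(size=(n_hidden, n_inputs))    # w^(1)_ij
b1 = np.zeros(n_hidden)                       # theta^(1)_i
W2 = rng.normal(size=n_hidden)                # w^(2)_i
b2 = 0.0                                      # theta^(2)

def predict(x):
    """Eq. (2): h_i = tanh(sum_j w^(1)_ij x_j + theta^(1)_i);
       Eq. (3): y = sum_i w^(2)_i h_i + theta^(2)."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

x = rng.random(n_inputs)   # one normalized sample in [0, 1]
print(predict(x))
```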
Among the output variables, \(\log(da/dN)\) is chosen instead of \(da/dN\) to build the regression model, following Paris' law [22]. ## 4 Analysis As can be observed from Figure 3, the range of the data varies significantly between the variables. To prevent any adverse effect on the determination of variable influence, both the input and output variables are normalized to the range [0, 1] as follows: \[x_{N}=\frac{x-x_{min}}{x_{max}-x_{min}} \tag{4}\] where \(x_{N}\) is the normalized value of \(x\), and \(x_{min}\) and \(x_{max}\) are the minimum and maximum values of each variable in the original data. The normalized data is then analysed in the R software [27] using random forest and neural network frameworks. 75% of the data is randomly selected for training the models, which are then tested on the remaining 25%. ### Random Forest The random forest framework is developed using the randomForest package [16]. All 50 variables are used in the initial run, assuming a linear relationship between the input and output variables. The maximum number of decision trees is set to 1000, and the number of trees giving the least mean square error is selected for further analysis. Since there are 50 variables in total, the model may overfit the data and hence perform poorly on the test data. To avoid this, the factors with low %IncMSE and IncNodePurity are omitted and the analysis is repeated in the same fashion as the initial run. ### Neural Network The neural network framework is developed using the neuralnet package. All 50 variables are used in the initial run, assuming a linear relationship between the input and output variables, just as in the case of the random forest. In the initial run, only one hidden layer is considered, with the number of nodes varying from 1 to 20. These models are trained on the training data and then applied to the testing data. The number of nodes with the highest \(r^{2}\) is selected, a second layer is added, and its number of nodes is again varied from 1 to 20. These models are again applied to the testing data and the number of nodes with the highest \(r^{2}\) is selected. A sigmoid function is used as the activation function. ## 5 Results ### Random Forest For the first run with all 50 variables, the change in mean square error with increasing number of decision trees is shown in Figure 4. The least error is observed with 978 decision trees, and a random forest framework is designed accordingly. The number of variables tried at each split is 16. The mean squared residual obtained from this model is \(1.32\times 10^{-3}\), and the percentage variance explained by this framework is 96.93%, showing that the model is a good fit for the training data. Applying this framework to the testing data and comparing with the original values gives an \(r^{2}\) value of 0.9693. The correlation value of 0.9865 and p-value of \(<2.2\times 10^{-16}\) show that there is a significant correlation between the chosen variables and the data. Figure 5 compares the predicted values of \(\log da/dN\) to the original values. It can be clearly seen from Figure 5 that the points align close to the \(x=y\) line, with major deviations near the lower end. This shows that the model is not a good representation when the rate of crack growth is small. Figure 6 shows the distributions of the original and predicted rates of crack growth.
It can be clearly observed that the original data in the range (0, 0.1) has not been predicted accurately, and even the data in the range (0.8, 1.0) has not been completely reproduced. Table 1 shows the values of Mean Decrease Accuracy and Mean Decrease Gini, sorted by decreasing Mean Decrease Accuracy. The variables with low Mean Decrease Accuracy and Mean Decrease Gini are not significant and can be omitted. In this case, the variables with Mean Decrease Accuracy less than 5 and Mean Decrease Gini less than 0.005 are omitted, leaving 35 variables. For the final run, the same procedure is followed as in the initial run. The change in mean square error with increasing number of decision trees is shown in Figure 7. The least error is observed with 189 decision trees, and a random forest framework is designed accordingly. The number of variables tried at each split is 11. The mean squared residual obtained from this model is \(1.31\times 10^{-3}\) and the percentage variance explained is 96.94%, both very close to the values observed in the initial run. Applying this framework to the testing data and comparing with the original values gives an \(r^{2}\) value of 0.9687, which is 0.06% lower than the initial run. The correlation value of 0.9862 and p-value of \(<2.2\times 10^{-16}\) show that there is a significant correlation between the chosen variables and the data. Looking at the values of \(r^{2}\) and the correlation coefficient, it can be concluded that the omitted variables are insignificant in determining the rate of crack growth. Figure 8 compares the predicted values of \(\log da/dN\) to the original values. It can be clearly seen from Figure 8 that the points align close to the \(x=y\) line, with major deviations near the lower end, similar to the initial run. This shows that the model is not a good representation when the rate of crack growth is small. Figure 9 shows the distributions of the original and predicted rates of crack growth. Similar to the initial run, the prediction is not good at the extremes. Table 1 also reports the Mean Decrease Accuracy and Mean Decrease Gini for the random forest model with 35 variables. ### Neural Network In the first run with one hidden layer, the best \(r^{2}\) value on the testing data is obtained with 19 nodes in the hidden layer. The \(r^{2}\) value for this framework is 0.9831. The correlation value of 0.9916 and p-value of \(<2.2\times 10^{-16}\) show that there is a significant correlation between the chosen variables and the data. Figure 10 compares the predicted values of \(\log da/dN\) to the original values. It can be clearly seen from Figure 10 that the points align close to the \(x=y\) line, with major deviations near the lower end. This shows that the model is likewise not a good representation when the rate of crack growth is small, though slightly better than the random forest. Figure 11 shows the distributions of the original and predicted rates of crack growth. The distribution of the predicted data is much closer to the original data than what we observed for the random forest. For the second run with two hidden layers, the framework with 5 nodes in the second hidden layer has the best \(r^{2}\) value of 0.9847, which is only 0.16% better.
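The node sweep described above was run in R with the neuralnet package. A rough scikit-learn analogue of the same procedure (vary the hidden-layer width from 1 to 20, keep the width with the highest test \(r^{2}\)) might look as follows; the synthetic placeholder data stands in for the normalized crack-growth dataset, so the printed values are not the paper's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Placeholder data standing in for the normalized 50-variable dataset.
rng = np.random.default_rng(0)
X = rng.random((1894, 50))
y = X @ rng.random(50) / 50            # synthetic target in lieu of log(da/dN)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

best_r2, best_width = -np.inf, None
for width in range(1, 21):             # one hidden layer, 1-20 nodes
    net = MLPRegressor(hidden_layer_sizes=(width,), activation="logistic",
                       max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    r2 = r2_score(y_te, net.predict(X_te))
    if r2 > best_r2:
        best_r2, best_width = r2, width
print(best_width, best_r2)             # the paper selects 19 nodes (r^2 = 0.9831)
```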
This marginal gain shows that adding another hidden layer does not significantly improve the performance of the neural network while being significantly more computationally intensive for this data. So, a single hidden layer with 19 nodes is the best neural network framework for this data set. ## 6 Conclusions In this work, the rate of fatigue crack growth in Nickel superalloys is estimated as a function of 51 available variables. Two different frameworks were used - random forest and neural network. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Variable & \%IncMSE & IncNodePurity & Variable & \%IncMSE & IncNodePurity \\ \hline Atm.Pressure & 72.645326 & 0.435396 & Cobalt\_wt.. & 13.21943 & 0.582949 \\ \hline delta\_K..MPa.sqrt.m.. & 50.245341 & 12.06703 & Nickel\_wt.. & 12.33514 & 0.26295 \\ \hline Frequency..Hz. & 48.807527 & 15.70495 & HT2.Temp..K. & 12.25707 & 1.019443 \\ \hline log10.delta\_K. & 47.736954 & 10.87632 & Titanium\_wt.. & 12.24269 & 0.513162 \\ \hline Yield\_Strength..MPa. & 44.850812 & 0.503051 & Unloading.Time\_s. & 11.82461 & 0.286359 \\ \hline Temp..K. & 35.091226 & 4.990744 & HT1.CoolingRate.. & 11.81688 & 0.057953 \\ \hline Min.Grain.Size..micro.m. & 29.230863 & 0.259286 & HT3.CoolingRate.. & 11.65292 & 0.027644 \\ \hline Max.Grain.Size..micro.m. & 28.139714 & 0.196492 & Niobium\_wt.. & 11.3235 & 0.084579 \\ \hline Loading.Time..s. & 27.477331 & 2.021161 & HT2.CoolingRate... & 10.97752 & 0.06581 \\ \hline HT1.Temp..K. & 26.732803 & 0.278611 & Manganese..wt.. & 10.45497 & 0.213812 \\ \hline Diff.in.GS. & 26.287002 & 0.095541 & Copper\_wt.. & 8.091316 & 0.031112 \\ \hline Chromium..wt.. & 26.195653 & 0.5403 & Phosphorus\_wt.. & 7.709504 & 0.033819 \\ \hline R.ratio & 23.602636 & 0.486236 & Sulphur..wt.. & 7.652882 & 0.020254 \\ \hline Thickness..mm. & 19.991276 & 0.589908 & HT1.Time..hrs. & 7.056421 & 0.160201 \\ \hline HT3.Temp..K. & 19.269015 & 0.077698 & Tungsten\_wt.. & 6.754946 & 0.01083 \\ \hline HT3.Time..hrs. & 18.495807 & 0.14059 & Tantalum\_wt.. & 6.463679 & 0.023136 \\ \hline Molybdenum\_wt.. & 17.562076 & 2.840485 & Yttrium.Oxide..wt.. & 5.813943 & 0.035488 \\ \hline Iron..wt.. & 17.265415 & 0.076556 & Silver\_wt.. & 4.174516 & 0.001748 \\ \hline Silicon..wt.. & 17.074731 & 0.123597 & Rhenium\_wt.. & 4.118154 & 0.00736 \\ \hline HT2.Time..hrs. & 16.238195 & 0.07077 & Lead..wt.. & 4.059671 & 0.003141 \\ \hline Load.Shape & 16.116972 & 0.546295 & Magnesium\_wt.. & 3.879866 & 0.002424 \\ \hline Boron..wt.. & 15.479676 & 2.378501 & Calcium\_wt.. & 3.761607 & 0.00224 \\ \hline Zirconium\_wt.. & 15.323161 & 0.382954 & Tin\_wt.. & 3.729041 & 0.001958 \\ \hline Carbon..wt.. & 14.446973 & 0.210651 & Hafmium\_wt.. & 3.342515 & 0.006146 \\ \hline Aluminium\_wt.. & 14.203788 & 0.13301 & Bismuth\_wt.. & 3.321477 & 0.001853 \\ \hline \end{tabular} \end{table} Table 1: Mean Decrease Accuracy (%IncMSE) and Mean Decrease Gini (IncNodePurity) of each variable in the initial run of random forest. Looking at the \(r^{2}\) values of the data predicted using both frameworks, the performance of the neural network is only marginally better, with a 1.48% improvement. It can be concluded that for Nickel superalloys both random forest and neural network frameworks work well, but the random forest is recommended in terms of computational speed. The neural network, although giving better performance, is much more computationally intensive, which does not justify the marginal increase in accuracy.
However, if an explicit model describing the relationship between the variables and the output is preferred, the neural network is recommended.
2309.16114
Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration
Robots with increasing autonomy progress our space exploration capabilities, particularly for in-situ exploration and sampling to stand in for human explorers. Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands between the human operator and robot may cause undue delays in mission fulfillment. An autonomous robot encoded with a scientific objective and an exploration strategy incurs no communication delays and can fulfill missions more quickly. Active learning algorithms offer this capability of intelligent exploration, but the underlying model structure varies the performance of the active learning algorithm in accurately forming an understanding of the environment. In this paper, we investigate the performance differences between active learning algorithms driven by Gaussian processes or Bayesian neural networks for exploration strategies encoded on agents that are constrained in their trajectories, like planetary surface rovers. These two active learning strategies were tested in a simulation environment against science-blind strategies to predict the spatial distribution of a variable of interest along multiple datasets. The performance metrics of interest are model accuracy in root mean squared (RMS) error, training time, model convergence, total distance traveled until convergence, and total samples until convergence. Active learning strategies encoded with Gaussian processes require less computation to train, converge to an accurate model more quickly, and propose trajectories of shorter distance, except in a few complex environments in which Bayesian neural networks achieve a more accurate model in the large data regime due to their more expressive functional bases. The paper concludes with advice on when and how to implement either exploration strategy for future space missions.
Sapphira Akins, Frances Zhu
2023-09-28T02:45:14Z
http://arxiv.org/abs/2309.16114v1
Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration ###### Abstract Robots with increasing autonomy progress our space exploration capabilities, particularly for in-situ exploration and sampling to stand in for human explorers. Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands between the human operator and robot may cause undue delays in mission fulfillment. An autonomous robot encoded with a scientific objective and an exploration strategy incurs no communication delays and can fulfill missions more quickly. Active learning algorithms offer this capability of intelligent exploration, but the underlying model structure varies the performance of the active learning algorithm in accurately forming an understanding of the environment. In this paper, we investigate the performance differences between active learning algorithms driven by Gaussian processes or Bayesian neural networks for exploration strategies encoded on agents that are constrained in their trajectories, like planetary surface rovers. These two active learning strategies were tested in a simulation environment against science-blind strategies to predict the spatial distribution of a variable of interest along multiple datasets. The performance metrics of interest are model accuracy in root mean squared (RMS) error, training time, model convergence, total distance traveled until convergence, and total samples until convergence. Active learning strategies encoded with Gaussian processes require less computation to train, converge to an accurate model more quickly, and propose trajectories of shorter distance, except in a few complex environments in which Bayesian neural networks achieve a more accurate model in the large data regime due to their more expressive functional bases. The paper concludes with advice on when and how to implement either exploration strategy for future space missions. ## I Nomenclature
\(d\) = distance
\(d_{c}\) = distance until convergence
\(D\) = dataset
\(e\) = error
\(f\) = true model
\(\hat{f}\) = oracle model
\(g\) = suggestion policy
\(i\) = index
\(i_{c}\) = samples until convergence
\(J\) = objective function
\(k\) = kernel
\(N\) = normal distribution
\(\mu\) = model posterior mean
\(r\) = location in environment
\(\mathcal{R}\) = the entire environment space
\(\sigma\) = measurement noise
\(t\) = time
\(V\) = model posterior variance
\(X\) = aggregate input dataset, target position
\(x\) = single training pair, target position
\(Y\) = aggregate input dataset, target variable of interest
\(y\) = single training pair, target variable of interest
## 1 Introduction Traditionally in robotic exploration, either robots are teleoperated by humans or autonomous robots are provided with user-defined waypoints within the environment prior to deployment. There is always human involvement. Now, intelligent, adaptive autonomous robots are needed to explore unknown, dynamic environments where little is known a priori. The robot must use its own sensors to fully understand its environment. An in-situ exploration strategy that incorporates science information and maximizes a formal cost objective generates proximal destinations of interest, yielding more efficient scientific data collection, time savings, and potentially convergence properties.
Even if this exploration strategy algorithm is not fully autonomous, the generated waypoints can inform teleoperators of potential destinations of interest, which could accelerate the site selection process or affirm sites selected by teleoperators. The science mission that motivates this technology is the search for water ice. Water ice is one of the most important resources on the Moon and Mars [1, 2]. The direct detection of surface-exposed water ice using infrared data in the lunar polar regions accelerates the progress of exploring lunar ice as an in-situ resource [3]. Data gathered from observations of surface-level water-ice deposits on the Moon suggest these deposits may also exist subsurface. However, we do not currently have the knowledge necessary to classify any subset of the total volume of lunar water-ice resources. Orbital InfraRed (IR) measurements suggest that water ice exists in approximately 5% of lunar cold traps (regions where the annual maximum temperature is less than 110 K and water ice is stable) and in up to 30% of the total exposed surface mass [3]. At present, we do not yet understand enough about the physical characteristics of lunar water-ice deposits to consider these reserves for future exploration and resource utilization efforts. The most direct way to characterize the volume of subsurface water is to conduct an in-situ investigation, necessitating human or robot surface operations. Currently, human operators intuit the scientific value of exploring specific destinations, as with NASA's Sojourner, China's Yutu-2, and the MERs [4]. Although the most recently landed rover, MSL, shows hints of autonomy, the autonomous interactions are restricted to mobility actions - separate from any science [5]. Rovers will very likely face power and thermal limitations dependent on the time spent in a permanently shadowed region, so the mission cannot afford extensive sampling or for teleoperators to stop and intuit the next waypoint to visit. The optimization problem of space exploration is that a limited set of spacecraft resources (power) must be allocated between competing choices (destinations) in a way that maximizes science discovered and mitigates risk, a specific formulation of the Bayesian optimization problem [6]. This paper directly compares the performance of active learning strategies driven by a Gaussian process or a Bayesian neural network along metrics of accuracy (RMS error), training time, and samples until convergence in a constrained-trajectory exploration application. Section 2 reviews core concepts for comparing Gaussian process and neural network performance in driving active learning algorithms and distinguishes this work from previous work. Section 3 discusses the active learning algorithm, the algorithm implementation, the benchmark environments, and the experiments run to compare Gaussian processes to Bayesian neural networks. Section 4 reports the results of the comparison by defining the metrics for comparison, performance along these metrics, and an interpretation of performance for other applications. ## 2 Background Active learning algorithms use historical measurements to generate an uncertainty map that suggests a location in the space with the highest uncertainty to sample next, which offers a sample-efficient method for exploring and characterizing a space.
The agent is encoded with an objective function \(J\) that aims to minimize the error between a learned model's prediction \(\hat{f}(X,t,D,k(\cdot))\) and the ground truth \(f(X,t)\) across a set of discretized locations \(X\in[x_{1},\cdots,x_{i}]\), using dataset \(D\) and kernel \(k(\cdot)\). This model error takes the form of the \(L_{2}\) norm or root-mean-squared (RMS) error, seen in Eq. (1). The data \(D\), defined in Eq. (2), is collected iteratively by the robot in the environment with a control policy \(g^{*}\) that chooses a proximal location \(r_{vmax}\) with the highest variance (or uncertainty) \(V_{pred}\) in the model prediction \(\hat{f}(\cdot)\). \[J=\big{\|}f(X,t)-\hat{f}(X,t,D,k(\cdot))\big{\|}_{2} \tag{1}\] \[D=\begin{bmatrix}t_{1}&x_{k,1}&y_{k,1}\\ t_{2}&x_{k,2}&y_{k,2}\\ &\vdots&\\ t_{j}&x_{k,j}=r_{vmax}&y_{k,j}\\ &\vdots&\\ t_{m}&x_{k,m}&y_{k,m}\end{bmatrix} \tag{2}\] Active learning algorithms are underpinned by two components: an oracle that predicts a mean and covariance function across space, \(\hat{f}(\cdot)\), and a policy that suggests the next location to sample, \(g(\cdot)\). The oracle is typically a Gaussian process, due to its highly expressive capacity (which lends itself well to characterization) and convenient uncertainty quantification in the posterior prediction (which lends itself well to exploration), but it can be represented by any model that outputs a mean and covariance function as in Eq. (3), such as a probabilistic or Bayesian neural network. \[\hat{f}(X)\sim N(\mu,V) \tag{3}\] A Gaussian process is a probabilistic kernel method that relies on a user-defined basis kernel, most commonly the radial basis function. The basis function heavily determines the performance of the Gaussian process in generating mean and covariance functions that are accurate to the true underlying function, which is unknown. While Gaussian processes are mathematically elegant and conceptually simple, the kernel definition can be constraining. Neural networks offer more flexible, adaptable bases to represent a wider range of underlying functions, but need more data and training time to generate an accurate model. Neural networks excel in applications with large data, complex bases, and unconstrained training time. Gaussian processes excel in applications with sparse, unevenly distributed data, but can be computationally prohibitive for large datasets due to the matrix inversion involved. For the sake of exploration, a policy \(g\) chooses the location \(r_{vmax}\) that has the highest variance (or uncertainty) \(V_{pred}\) in the model prediction \(\hat{f}\) over some space \(\mathcal{R}\). \[r_{vmax}=g(r\in\mathcal{R})=\operatorname*{argmax}_{r\in\mathcal{R}}\ \hat{V}_{pred} \tag{4}\] In conventional active learning algorithms, the suggestion policy is free to select the location of high uncertainty anywhere in the global space, like a satellite leveraging remote sensing that can point at any visible location on the Earth's surface, as depicted in Figure 1. But for applications involving in-situ sampling, like a robot visiting a destination in an environment and sampling at that specific destination, as depicted in Figure 2, an agent is limited to sampling at locations within a finite distance.
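As a concrete illustration of the oracle-policy pair in Eqs. (3) and (4), the sketch below fits a scikit-learn Gaussian process to a few samples and applies the unconstrained argmax-variance policy over a candidate grid. The grid, kernel, and data are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Candidate locations r in R: a coarse grid over [-1, 1]^2.
g = np.linspace(-1, 1, 21)
R = np.array([(a, b) for a in g for b in g])

# A few observed samples (X, Y) from a toy true field f(r) = r1^2 + r2^2.
X = np.array([[-1.0, -1.0], [0.5, 0.2], [0.9, -0.4]])
Y = (X ** 2).sum(axis=1)

oracle = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, Y)
mu, std = oracle.predict(R, return_std=True)   # Eq. (3): posterior N(mu, V)

r_vmax = R[np.argmax(std ** 2)]                # Eq. (4): argmax of posterior variance
print(r_vmax)
```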
Such a sequence of sample locations \(x_{D}=[x_{k,1},\cdots,x_{k,m}]\) can be thought of as a constrained trajectory, which adds nuance to how a suggestion policy may be crafted, namely in defining 1) the distance between sequential samples, \(d_{samples}\), and 2) the uncertainty horizon over which to consider sampling, \(d_{horizon}\).

Figure 1: Difference between prediction horizon \(d_{horizon}\) and sampling distance \(d_{samples}\)

Figure 2: Difference between sampling in remote sensing (left) vs. in-situ exploration (right) applications

Given in Eq. (5), the constrained trajectory suggestion policy is a modified version of the aforementioned unconstrained suggestion policy, with the search restricted to the prediction horizon \(r_{pred}\), the set of locations within \(d_{horizon}\) of the agent's current position: \[r_{vmax}=g^{*}(r\in r_{pred})=\operatorname*{argmax}_{r\in r_{pred}}\ \hat{V}_{pred} \tag{5}\]
## 3 Methods

The experimental procedure for each exploration strategy is as follows: 1. Load the environment's geometry (parabola, Townsend, or lunar crater), size (length and width of the surface), and noise level (random Gaussian noise of varying variance). The selection of these environment geometries is discussed in the following section. 2. Define the exploration strategy (spiral, snake, or active learning) and stopping condition (number of total samples in the training dataset divided by two for active learning strategies, detailed for each surface: parabola - 219 samples, Townsend - 219 samples, 3 km lunar - 83 samples, 6 km lunar - 311 samples). 3. Define the Gaussian process model and the Bayesian neural network hyperparameters as defined in Model Selection. 4. Initialize the agent's starting location. 5. Seed the training dataset with 10 training points. a. For the spiral and snake methods, predefined pairs are utilized throughout the entire experiment. b. For active learning methods, a random walk generates the initial training data. 6. Explore the surface \(R\) until a predefined maximum number of samples is reached (a sketch of this loop is given after the experiment campaign overview below). a. For the spiral and snake methods, continue to sample and train the model along the predefined trajectory. b. For active learning:
i. Train the Gaussian process and Bayesian neural network models on the \(n\) input-output pairs in the training set so far, \((X,Y)\rightarrow\hat{f}\), where \(X=\begin{bmatrix}x_{1}\\ \vdots\\ x_{n}\end{bmatrix}\) and \(Y=\begin{bmatrix}y_{1}\\ \vdots\\ y_{n}\end{bmatrix}\). ii. Predict the scalar expected values \(\hat{Y}_{pred}\) and variance \(\hat{V}_{pred}\) in the prediction horizon \(r_{pred}\). iii. Generate a control policy \(g^{*}\) that identifies the location in the prediction horizon with the highest variance, \(r_{vmax}\), defined in Eq. (6). \[r_{vmax}=g^{*}(r\in r_{pred})=\operatorname*{argmax}_{r\in r_{pred}}\ \hat{V}_{pred} \tag{6}\] iv. Traverse to the nearest neighbor location in the direction of the high-variance location \(r_{vmax}\). The action \(a\) is the next location, given in Eq. (7). \[a=\operatorname*{argmin}_{r_{way}\in d_{samples}}\left\|r_{way}-r_{vmax}\right\|_{2} \tag{7}\] v. Sample the value \(y_{n+1}\) at this location \(a=x_{n+1}\) and append this training pair to the training set.

### Benchmark Surfaces

Each algorithm's performance is evaluated through its ability to map an environment across three distinct surfaces: parabola, Townsend, and a lunar south pole crater. Due to the surfaces' differing complexities, the Bayesian neural network and Gaussian process machine learning strategies can be evaluated on their performance under varying conditions, allowing a deeper understanding of their strengths and weaknesses in relation to the complexity of the environment. The inclusion of multiple surfaces facilitates a comprehensive assessment of algorithm adaptability across diverse environments, thus enhancing robustness. Moreover, utilization of the lunar surface enables the testing of these frameworks in a real-world setting. These surfaces have two independent dimensions (planar position \(r=(x_{1},x_{2})\)) and a third dependent dimension \(y\); the dependent variable's algebraic relationship to position is known for the parabola and Townsend benchmark surfaces but unknown for the lunar ice data. The parabola surface is defined by Eq. (8), where \(\sigma_{noise}^{2}=0.02\) or \(0\), \(x_{1}\in[-1\text{:}\,0.1\text{:}\,1]\), and \(x_{2}\in[-1\text{:}\,0.1\text{:}\,1]\). The Townsend surface is defined by Eq. (9), where \(\sigma_{noise}^{2}=0.02\) or \(0\), \(x_{1}\in[-2.5\text{:}\,0.1\text{:}\,2.5]\), and \(x_{2}\in[-2.5\text{:}\,0.1\text{:}\,2.5]\). \[y=x_{1}^{2}+x_{2}^{2}+\sigma_{noise}^{2} \tag{8}\] \[y=-\big{(}\cos((x_{1}-0.1)x_{2})\big{)}^{2}-x_{1}\sin(3x_{1}+x_{2})+\sigma_{noise}^{2} \tag{9}\] The lunar surface, derived from LAMP data [14], consists of a digital elevation map (DEM) \(\left(r=(x_{1},x_{2},x_{3})\right)\) of the lunar south pole at 5 m spatial resolution and hydroxyl data \(y\) at 250 m spatial resolution. Noise is present in the data, and significant gaps appear near the crater rim. The results comparing the six exploration strategies are presented in order of ascending complexity of the surfaces shown in Figure 3: noiseless parabola, noisy parabola, noiseless Townsend, noisy Townsend, 3 km lunar crater swath, and 6 km lunar crater swath.

### Simulation Experiment Campaign

The Gaussian process and Bayesian neural network informed exploration strategies with various movement and prediction horizons (Table 2) were evaluated on the three surfaces of varying size, toggling between noiseless and noisy measurements (Table 1).
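To make the exploration loop concrete, the following is a minimal toy re-implementation of one constrained exploration episode on the noiseless parabola of Eq. (8), using a scikit-learn Gaussian process as the oracle. It is a sketch rather than the authors' code: the grid resolution, kernel length scale, horizon radii, jitter, and sample counts are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

grid = np.linspace(-1, 1, 21)
R = np.array([(a, b) for a in grid for b in grid])
f = lambda r: r[..., 0] ** 2 + r[..., 1] ** 2          # noiseless parabola, Eq. (8)

def constrained_step(oracle, agent, d_horizon=0.3, d_sample=0.1):
    # Prediction horizon r_pred: grid points within d_horizon of the agent.
    r_pred = R[np.linalg.norm(R - agent, axis=1) <= d_horizon]
    _, std = oracle.predict(r_pred, return_std=True)
    r_vmax = r_pred[np.argmax(std)]                     # Eq. (6)
    # Eq. (7): move to the reachable neighbor closest to r_vmax.
    reachable = R[np.linalg.norm(R - agent, axis=1) <= d_sample]
    return reachable[np.argmin(np.linalg.norm(reachable - r_vmax, axis=1))]

# Seed with 10 random training points, then explore.
rng = np.random.default_rng(1)
X = R[rng.choice(len(R), size=10)]
Y = f(X)
agent = X[-1]
for _ in range(25):
    # Small alpha adds jitter so repeated sample locations stay numerically stable.
    oracle = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-6).fit(X, Y)
    agent = constrained_step(oracle, agent)
    X, Y = np.vstack([X, agent]), np.append(Y, f(agent))
```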
The movement horizon is varied between one grid space (movement to the nearest neighbor) for the active learning methods and two to four grid spaces for the science-blind snake and spiral exploration strategies. For the active learning strategies, the prediction horizon is set to one grid space (\(1\Delta r\)), three grid spaces (\(3\Delta r\)), or global across the entirety of the surface (\(r\in R\)). By varying these parameters, the exploration efficiency of each model can be analyzed thoroughly. Although different movement horizons are compared between the science-blind and active learning methods, the science-blind method serves as a baseline metric over a pre-determined path, unlike the active learning strategy, which changes with each "step" the agent takes. Additionally, note that these tests are run for multiple trials to verify the validity of the data. ## 4 Results ### Metrics To comprehensively assess the performance of the Gaussian process and Bayesian neural network active learning exploration strategies, the following metrics were utilized, with their definitions and intent:

* Training Time: All experiments were run on the same compute node of the University of Hawaii's high-performance computing cluster, leveraging 32 CPUs and 128 GB of RAM. Execution time was measured using the operating system's time function before and after either model's training function call. Training time is a proxy for the computational intensity of either implemented algorithm.
* RMS Error upon Convergence \(e_{c}\): To define convergence loosely, the 2% settling time from control theory was adopted. The global RMS error between the model prediction and the true values is inspected to verify that 1) there are enough data points to confirm convergence and 2) the final values of RMS error stay within a 2% band of the final value \(e_{f}\). The 2% error band \(\Delta e_{2\%}\) is found by differencing the initial RMS error \(e_{0}\) and the final RMS error \(e_{f}\), given in Eq. (10). The RMS error upon convergence \(e_{c}\) is then defined as the upper bound of this error band, given in Eq. (11). \[\Delta e_{2\%}=0.02(e_{0}-e_{f}) \tag{10}\] \[e_{c}=e_{f}+\Delta e_{2\%} \tag{11}\] It is important to note that RMS error upon convergence, along with samples/distance until convergence (detailed below), is particularly relevant for trials that exhibit asymptotic behavior, implying convergence to a constant value. However, such convergence can only be speculated when the terminal value is unknown.
* Samples until Convergence \(i_{c}\): The index of convergence, or samples until convergence, \(i_{c}\) is found by minimizing the difference between the error at an index \(i\), \(e_{i}\), and the error upon convergence \(e_{c}\), given in Eq. (12). \[i_{c}=\operatorname*{argmin}_{i}\|e_{i}-e_{c}\|_{2} \tag{12}\]
* Distance until Convergence \(d_{c}\): The distance traveled until convergence is the sum of the radial differences between consecutive waypoints up to the sample of convergence \(i_{c}\), given in Eq. (13). \[d_{c}=\sum_{i=1}^{i_{c}}\big{\|}x_{k,i+1}-x_{k,i}\big{\|}_{2} \tag{13}\] Note that distance until convergence provides insight into which methods are more effective, as a lower distance traversed until convergence implies a more efficient exploration strategy.
* Position Error in Identifying the Location of the Global Minimum \(e_{\min}\):
Eq. (14) calculates the difference between the location of the true minimum and the minimum converged upon by the exploration algorithm, where the true minimum location of the target surface is \(r_{min}=(0,0)\) for the parabola, \(r_{min}=(-1.75,-1.75)\) for the Townsend, and \(r_{min}=(1,0.5)\) for the lunar surface. \[e_{\min}=\left\|r_{min}-\operatorname*{argmin}_{r\in\mathcal{R}}\hat{f}(r)\right\|_{2} \tag{14}\]

### Resulting Simulations

The variety of simulations aims to emphasize the difference in performance between exploration strategies embedded with differing models, and specifically to highlight the performance of the constrained active learner. The science-blind algorithms, snake and spiral alike, offer baseline performance metrics against which to compare the effectiveness of the intelligent strategies. The constrained active learning algorithms aim to mimic rovers, the main interest of this paper. Simulations of exploration using the constrained active learner, illustrated in Figures 5 to 7, display specific exploration trajectories and model performance over the iterative sampling experiment. As previously displayed in Table 2, there are three different prediction horizons associated with the algorithms that utilize active learning and a constrained movement horizon: nearest-neighbor (NN), local, and global. Each simulation of an exploration algorithm generates figures to illustrate the evolution of the underlying model's performance. Figures 5-7 are formatted such that the top three graphs display data for the BNN algorithm, the middle three graphs display data for the GP algorithm, and the bottom two graphs compare GP and BNN performance, with GPs graphed in blue and BNNs in black. Figure 5 has each subplot labeled. Subplot a) illustrates the BNN prediction across the surface test locations, where the colored surface is the ground truth and the gray surface is the prediction. The purple star represents the agent's location on the surface and the black lines represent the agent's historical trajectory. Subplot b) displays the BNN algorithm's uncertainty across the environment at the model's most recent evaluation. Subplot c) displays the BNN algorithm's error across the surface at the most recent model evaluation. Subplots d), e), and f) show the same information for the GP algorithm. Lastly, subplot g) graphs the RMS error and subplot h) graphs the variance, defined as the mean of the uncertainty graphed in plots b) and e). Figure 5 illustrates a constrained active learning comparison between a BNN and a GP with a local prediction horizon across a noiseless parabola. The GP active learning strategy initially steers the agent to traverse the outer edges of the surface; the agent then moves inward and explores the remainder of the surface within the set number of total samples. The BNN active learning strategy explores one half of the space thoroughly. These behaviors carry over to the constrained active learning comparison with a nearest-neighbor prediction horizon across a Townsend surface, shown in Figure 6. GP-driven active learning with nearest-neighbor and local prediction horizons demonstrated the best overall performance across all active learning strategies. The results from trials utilizing these algorithms were not only consistent across trials, but also the most precise in finding the global minimum and on the higher end of computational efficiency.
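The convergence metrics of Eqs. (10)-(13) reduce to a few lines of code given an RMS-error trace and the visited waypoints. A minimal sketch, assuming the trace has settled so that the 2% band is meaningful; the toy trace at the end is illustrative data only.

```python
import numpy as np

def convergence_metrics(errors, waypoints):
    """Eqs. (10)-(13): errors[i] is the global RMS error after sample i,
    waypoints[i] is the agent location x_{k,i} at that sample."""
    e0, ef = errors[0], errors[-1]
    delta = 0.02 * (e0 - ef)                     # Eq. (10): 2% error band
    e_c = ef + delta                             # Eq. (11): RMS error upon convergence
    i_c = int(np.argmin(np.abs(errors - e_c)))   # Eq. (12): samples until convergence
    steps = np.linalg.norm(np.diff(waypoints[: i_c + 1], axis=0), axis=1)
    return e_c, i_c, steps.sum()                 # Eq. (13): distance until convergence

# Toy trace: exponentially settling error along a straight-line trajectory.
errors = 1.0 * np.exp(-0.2 * np.arange(50)) + 0.05
waypoints = np.column_stack([np.linspace(0, 5, 50), np.zeros(50)])
print(convergence_metrics(errors, waypoints))
```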
Figure 7 displays algorithm performance when a global prediction horizon is imposed on a constrained-trajectory active learner. Although still one of the higher-performing algorithms, the GP-driven global prediction horizon algorithm does not match the efficiency reached by GP algorithms with smaller prediction horizons. Instead of traveling along the edges, the agent mimics the BNN algorithm's movement pattern of oversampling a region of the space. Consequently, the GP algorithm ends with higher error in finding the global minimum compared to other exploration strategies that utilize GPs. Along with this, the GP algorithm requires more samples and distance to reach convergence.

Figure 6: Example of an active learning GP and BNN model with a nearest neighbor prediction horizon and constrained movement horizon on a noiseless Townsend

### Analysis

Across all surface environments and algorithm hyperparameters, the Gaussian process active learner required less time to train the model for training datasets containing up to 331 samples, the size of the largest environment. GP active learning algorithms are generally more accurate than the BNN active learning algorithms, with some exceptions. GPs usually converge to a good model in fewer samples than a BNN. Active learners (BNN and GP alike) require less roving distance to converge to an accurate model, though not much less than the science-blind methods. GPs are more accurate in identifying the surface's true minimum location. These findings underscore the distinct advantages of GP-based active learning strategies in optimizing training efficiency and accuracy across diverse surface environments under constrained movement horizons. The results are displayed in the order of the metrics defined in Section A. For nearly all plots, the performance metrics are plotted on a log scale along the y-axis, except when analyzing the error in locating the global minimum. The x-axis spans the surface type and exploration strategy. Exploration strategy is denoted by the acronyms "SB" and "AL", for science-blind and active learning, respectively. The movement horizon is indicated after the exploration strategy and is categorized as \(1\Delta x\), \(2\Delta x\), or \(4\Delta x\). Each point in a figure represents a mean value, and the error bars around each point represent the standard deviation across all noiseless/noisy trials completed for each exploration strategy. Note that the results for the snake and spiral science-blind strategies are combined into one data point. Lastly, whether the algorithms are driven by GPs or BNNs is denoted in the legend by "GP" or "BNN", as well as by color.

Figure 7: Example of an active learning GP and BNN model with a global prediction horizon and constrained movement horizon on a 6km lunar crater swath

GP algorithms have a shorter training time than their BNN counterparts across all surface types, as seen in Figure 8 below. The training time across science-blind and constrained active learning strategies did not differ much when comparing GP or BNN algorithms individually. GP algorithms are generally more computationally efficient than BNN algorithms. Figure 9 highlights a single trial in which the training time per sample is graphed. There appears to be a small increase in training time for the BNN algorithm as the number of samples increases.
For example, this BNN algorithm started at 23.3 seconds per sample and ended at 35.3 seconds per sample, while this instance of the GP algorithm remained steadier, moving from 0.308 seconds per sample initially to 0.288 seconds per sample by the end of the simulation.

_RMS Error Upon Convergence_

GP algorithms outperform BNN algorithms, showing lower average RMS error, as shown in Figure 10 below. There are a few exceptions to this trend. One deviation is observed in the baseline science-blind exploration strategy on the Townsend surface, where the GP science-blind algorithms have a higher RMS error than their BNN counterparts. This could be due to the complexity of the surface, which may require a more expressive basis function to model effectively. Of note, the science-blind snake method performed poorly on the Townsend surface and did not converge; therefore, the only data shown comes from the science-blind spiral method. Figure 10 illustrates that, on average, GP algorithms produce higher state accuracy than BNN algorithms. Comparing the active learning algorithms to the science-blind methods, note that as the surface complexity increases, the RMS error of the SB strategies increases. This can be seen on the Townsend surface, as well as on the lunar crater, where the RMS error of the SB methods approaches that of the active learner.

Figure 8: Comparison of BNN and GP science blind and active learning strategies on training time

Figure 9: Training time versus samples taken across a 3km lunar surface utilizing the constrained active learning NN exploration strategy

_Samples Until Convergence_

The effectiveness and superior performance of GP algorithms continues to be displayed in Figure 11, which details the samples taken until convergence is reached. The GP algorithms show a lower sample count until convergence for every exploration strategy and movement horizon, with one exception: on the 6km lunar crater surface, the baseline science-blind GP algorithm does not outperform the BNN algorithm in the number of samples taken. Regardless of this baseline metric, the active learning strategies utilizing GP algorithms take fewer samples than their BNN counterparts. Regarding the science-blind methods, fewer samples were taken in every instance (due to the nature of the pre-determined path), and therefore fewer samples were available when evaluating convergence. Regardless, Figure 11 demonstrates that the science-blind methods converged in fewer samples than the active learning strategies.

Figure 10: Comparison of BNN and GP science blind and active learning strategies on RMS error

Figure 11: Comparison of BNN and GP science blind and active learning strategies on samples until convergence

### Distance Until Convergence

Data regarding the distance traveled until convergence is illustrated in Figure 12. BNN algorithms generally travel farther than GP algorithms to converge on a model, suggesting decreased effectiveness compared to GP models. The 6km lunar surface displays an exception to this trend, where the snake science-blind GP algorithm cannot converge on an accurate model over the 6km lunar crater surface; there, the GP algorithm requires a slightly higher distance for convergence than the BNN algorithm.
Again, the increased complexity of the surface may require a more expressive basis function, which the BNN algorithm provides in this instance. This data ultimately confirms that GP algorithms generally perform better when utilizing active learning exploration strategies rather than science-blind methods.

Figure 12: Comparison of BNN and GP science blind and active learning strategies on distance until convergence

Figure 13: Comparison of BNN and GP science blind and active learning strategies on position error in finding the global minimum

_Position Error in Finding the Global Minimum_

The GP algorithms continue to outperform the BNN algorithms, as seen in Figure 13, where all active learning GP algorithms have lower average position error in finding the global minimum. In fact, the GP algorithm converges to the correct global minimum with zero position error in two instances, and in three other instances it approaches near-zero error. None of the BNN models could precisely identify the minimum location on any surface. Although the science-blind methods did converge to a global minimum with low position error, this is not suggestive of a better-performing exploration strategy, because the science-blind method traverses the environment in a meticulous way that requires the agent to travel farther than necessary. As such, the active learners provide a more cost-effective strategy. ## 5 Conclusion This paper investigates the comparative performance of active learning algorithms driven by Gaussian processes and Bayesian neural networks, tested in various simulation environments to predict the spatial distribution of a variable of interest along multiple datasets. The active learning algorithms consistently converge to an accurate model after traversing less distance than the science-blind methods. Note that a smaller distance traveled does not signify that fewer samples were taken, as the science-blind methods require fewer samples to reach convergence than their active learning counterparts. We also conclude that GP models are superior oracles for active learning strategies, providing higher computational efficiency and more accurate predictions than BNN models across all environments tested. GP algorithms outperform BNN algorithms in nearly all cases, the exceptions being when the target surface is very complex or when global prediction horizons are utilized. Instead, GP algorithms benefit from short-sightedness, where greedy actions lead to increased rewards. This model has potential for future applications in rovers traversing planetary surfaces, such as the Moon or Mars. Not only do these algorithms have the capability to assist in the search for water ice on these surfaces, but they can easily be extended to other science objectives. The authors recommend that Gaussian process oracle models assist in science operations, whether in real time onboard the rover or as a suggestion system for teleoperators offline. The next step in furthering this research is to encode the GP algorithm onto a physical rover and conduct field testing with real-time science data. ## Acknowledgments This work was supported by NASA Grant HI-80NSSC21M0334. We extend our sincerest appreciation to the University of Hawaii's high-performance computing (HPC) cluster IT department, who assisted not only in ensuring the full-time operation of the HPC cluster but also in solving the many technical difficulties that arose throughout our research.
2309.13736
Geometry of Linear Neural Networks: Equivariance and Invariance under Permutation Groups
The set of functions parameterized by a linear fully-connected neural network is a determinantal variety. We investigate the subvariety of functions that are equivariant or invariant under the action of a permutation group. Examples of such group actions are translations or $90^\circ$ rotations on images. We describe such equivariant or invariant subvarieties as direct products of determinantal varieties, from which we deduce their dimension, degree, Euclidean distance degree, and their singularities. We fully characterize invariance for arbitrary permutation groups, and equivariance for cyclic groups. We draw conclusions for the parameterization and the design of equivariant and invariant linear networks in terms of sparsity and weight-sharing properties. We prove that all invariant linear functions can be parameterized by a single linear autoencoder with a weight-sharing property imposed by the cycle decomposition of the considered permutation. The space of rank-bounded equivariant functions has several irreducible components, so it cannot be parameterized by a single network, but each irreducible component can. Finally, we show that minimizing the squared-error loss on our invariant or equivariant networks reduces to minimizing the Euclidean distance from determinantal varieties via the Eckart-Young theorem.
Kathlén Kohn, Anna-Laura Sattelberger, Vahid Shahverdi
2023-09-24T19:40:15Z
http://arxiv.org/abs/2309.13736v2
# Geometry of Linear Neural Networks: Equivariance and Invariance under Permutation Groups

###### Abstract

The set of functions parameterized by a linear fully-connected neural network is a determinantal variety. We investigate the subvariety of functions that are equivariant or invariant under the action of a permutation group. Examples of such group actions are translations or \(90^{\circ}\) rotations on images. For such equivariant or invariant subvarieties, we provide an explicit description of their dimension, their degree as well as their Euclidean distance degree, and their singularities. We fully characterize invariance for arbitrary permutation groups, and equivariance for cyclic groups. We draw conclusions for the parameterization and the design of equivariant and invariant linear networks, such as a weight sharing property, and we prove that all invariant linear functions can be learned by linear autoencoders.

###### Contents

* 1 Introduction
* 2 Warm-up and preliminaries
* 3 Invariance under permutation groups
* 4 Equivariance under cyclic subgroups of the symmetric group
* 5 Conclusion and outlook

## 1 Introduction

Neural networks that are equivariant or invariant under the action of a group attract high interest in applications and in the theory of machine learning, and it is important to thoroughly study their fundamental properties. While invariance is important for classifiers, equivariance typically comes into play in feature extraction tasks. The present article characterizes and investigates linear fully-connected neural networks. For instance, linear encoder-decoder models fit this setup well: they are families of functions \(\{f_{\theta}\}_{\theta\in\Theta}\), parameterized by a set \(\Theta=\mathbb{R}^{n\times r}\times\mathbb{R}^{r\times n}\). For each parameter \(\theta\in\Theta\), the function \(f_{\theta}\) is a composition of linear maps \[f_{\theta}\colon\,\mathbb{R}^{n}\xrightarrow{f_{1,\theta}}\mathbb{R}^{r}\xrightarrow{f_{2,\theta}}\mathbb{R}^{n}, \tag{1.1}\] where \(r\leq n\). One commonly visualizes \(f_{\theta}\) as in the figure below. If \(n=m^{2}\) is a square number, one can think of the input of the network as a quadratic image with \(m\times m\) pixels. If \(n=m^{3}\), the input might be a cubic 3D scenery. In applications, one often aims to learn functions that are equi- or invariant under certain group actions, such as translations, rotations, or reflections.

The function space of a linear fully-connected neural network is a determinantal variety: for natural numbers \(r,m,n\), we write \(\mathcal{M}_{r,m\times n}\) for the subvariety of \(\mathbb{C}^{m\times n}\) whose points are complex \(m\times n\) matrices of rank at most \(r\). In learning tasks, _real_ matrices of rank at most \(r\) are in use; these are precisely the real-valued points \(\mathcal{M}_{r,m\times n}(\mathbb{R})\) of \(\mathcal{M}_{r,m\times n}\). Reading the entries of the matrix \(M\) as variables, the variety \(\mathcal{M}_{r,m\times n}\) is the locus of simultaneous vanishing of all \((r+1)\times(r+1)\) minors of \(M\). For a linear fully-connected neural network with input dimension \(n\), output dimension \(m\), and whose smallest layer has width \(r\), the set of functions parameterized by it is exactly \(\mathcal{M}_{r,m\times n}(\mathbb{R})\). A good understanding of the geometry of the function space of a neural network is not only mathematically interesting per se; it is also useful for understanding the training process of the network.
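As a minimal numerical sketch of this determinantal description (ours, not from the article's references; assuming Python with numpy), one can check that the end-to-end matrix of a bottleneck network indeed has vanishing \((r+1)\times(r+1)\) minors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 9, 3
A = rng.standard_normal((n, r))   # decoder weights f_{2,theta}
B = rng.standard_normal((r, n))   # encoder weights f_{1,theta}
M = A @ B                         # end-to-end matrix of the network

# M lies on M_{3,9x9}: its rank is at most 3, so every 4x4 minor vanishes
print(np.linalg.matrix_rank(M))                                  # 3
minor = np.linalg.det(M[np.ix_([0, 2, 4, 6], [1, 3, 5, 7])])     # one 4x4 minor
print(abs(minor) < 1e-9)                                         # True
```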
Such an understanding is important, for instance, for determining the type of the critical points of the loss function. This behavior typically varies from architecture to architecture. In the case of linear fully-connected networks, critical points often correspond to matrices of rank even lower than \(r\), i.e., they lie in the singular locus of the determinantal variety \(\mathcal{M}_{r,m\times n}(\mathbb{R})\) [15]. Investigating those points is crucial for proving the convergence of such networks to nice minima [13]. The nature of critical points is very different in the case of linear convolutional networks. Here, critical points are almost always smooth points of the function space [10, 11].

Figure 1: A fully-connected network of depth 2.

In the present article, we investigate the subvarieties \(\mathcal{E}^{G}_{r,n\times n}\subset\mathcal{M}_{r,n\times n}\) and \(\mathcal{I}^{G}_{r,m\times n}\subset\mathcal{M}_{r,m\times n}\) of linear functions of bounded rank that are equivariant and invariant under the action of a permutation group \(G\), respectively. The group \(G\) is a subgroup of the symmetric group \(\mathcal{S}_{n}\) and acts on the input and output space \(\mathbb{R}^{n}\) (or \(\mathbb{R}^{m}\), respectively) by permuting the entries of the input or output vector. For \(m=n\), the subvarieties \(\mathcal{E}^{G}_{r,n\times n}\) and \(\mathcal{I}^{G}_{r,n\times n}\) encode the part of the function space of a linear autoencoder that is equi- and invariant under the action of \(G\), respectively.

Our main contribution is an algebraic characterization of \(\mathcal{I}^{G}_{r,m\times n}\) for arbitrary permutation groups, and for cyclic subgroups of \(\mathcal{S}_{n}\) in the case of equivariance. Our results allow for implications on the design of equi- and invariant networks. For instance, for invariant autoencoders, we prove a weight sharing property on the encoder and deduce a rank constraint, i.e., a constraint on the width of the middle layer. We prove that the function space of such a constrained autoencoder is exactly \(\mathcal{I}^{G}_{r,n\times n}\). In other words, linear autoencoders with our weight sharing property on the encoder precisely parameterize invariant functions. Taking carbon emissions during the training of models [5] into account, it is important for the role of AI in the climate crisis to counter the increasing training and hence energy costs. This is also one of the aims that the initiative _Green AI_ [14] is striving for, to which the construction of efficient autoencoders with a low-dimensional middle layer might contribute.

**Related work.** We here give a small sample of related references, which is by no means claimed to be exhaustive. An overview of equivariant neural networks is provided in the survey [12]. The study of equivariance has roots in pattern recognition [17]. Group-equivariant convolutional networks were introduced by Cohen and Welling in [4], allowing for applications in image analysis [1]. In [2], transitive group actions are considered. Therein, Bekkers proves that, on the level of feature maps, a linear map is equivariant if and only if it is a group convolution. Isometry- and gauge-equivariant CNNs on Riemannian manifolds were investigated in [16]. This geometric perspective has strong connections to physics. To the best of our knowledge, our article is the first one to tackle the _algebro-geometric_ study of equi- or invariant networks and their function spaces.
**Notation.** By \(\mathcal{F}\), we denote the function space, and by \(\Theta\) the parameter space of a neural network \(F\colon\Theta\to\mathcal{F}\). For a parameter \(\theta\in\Theta\), we denote the function \(F(\theta)\) by \(f_{\theta}\). The symbol \(\mathbb{K}\) denotes one of the fields \(\{\mathbb{R},\mathbb{C}\}\). For an ideal \(I\) in a polynomial ring \(\mathbb{K}[x_{1},\ldots,x_{n}]\), we denote by \(V(I)\) the algebraic variety \(V(I)=\{x\in\mathbb{K}^{n}\,|\,p(x)=0\text{ for all }p\in I\}\). The symbol \(\mathcal{M}_{r,m\times n}\) denotes the determinantal variety in \(\mathbb{C}^{m\times n}\) whose points are complex \(m\times n\) matrices \(M\) of rank at most \(r\), and \(\mathcal{M}_{r,m\times n}(\mathbb{R})\) its real-valued points, i.e., real matrices \(M\) of rank at most \(r\). Its subsets of equi- and invariant subspaces under the action of a group \(G\) will be denoted by \(\mathcal{E}^{G}_{r,m\times n}\) and \(\mathcal{I}^{G}_{r,m\times n}\), respectively. If the letter \(r\) is dropped in the notation, this means that no rank constraint is imposed on the matrices. For fixed \(T\in\operatorname{GL}_{n}(\mathbb{K})\) and any matrix \(M\in\mathcal{M}_{n\times n}(\mathbb{K})\), we write \(M^{\sim_{T}}\coloneqq T^{-1}MT\) for the respective similarity transform. We denote the identity matrix of size \(n\) by \(\mathrm{I}_{n}\), and by \(\mathcal{S}_{n}\) the symmetric group on the set \([n]=\{1,\ldots,n\}\). For a permutation \(\sigma\in\mathcal{S}_{n}\), \(\mathcal{P}(\sigma)\) denotes the partition \(\{A_{1},\ldots,A_{k}\}\) of the set \([n]\) induced by the decomposition \(\sigma=\pi_{1}\circ\cdots\circ\pi_{k}\) of \(\sigma\) into pairwise disjoint cycles \(\pi_{i}\), where we also count trivial cycles, i.e., cycles of length \(1\).

**Outline.** In Section 2, we give motivating examples and present the necessary preliminaries from algebraic geometry. Section 3 treats invariance under permutation groups. We characterize invariance and investigate implications on the design of invariant linear autoencoders. Section 4 characterizes equivariance of linear autoencoders under cyclic subgroups of the symmetric group. In Section 5, we outline which other groups and further generalizations we plan to tackle in future work.

## 2 Warm-up and preliminaries

We start with cyclic subgroups \(G\) of the symmetric group \(\mathcal{S}_{n}\), i.e., groups of the form \(G=\left\langle\sigma\right\rangle\) for some permutation \(\sigma\in\mathcal{S}_{n}\). Such groups \(G\) naturally act on the input space \(\mathbb{R}^{n}\) and the output space \(\mathbb{R}^{n}\) of the network (1.1) by permuting the entries of the in- and output vector, respectively, and on linear maps \(f\colon\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) by permuting the columns of the \(m\times n\) matrix representing \(f\).
### Warm-up examples

#### 2.1.1 Rotation-invariance of linear maps for \(m\times m\) pictures

Let \(n=m^{2}\) be a square number and let \(\sigma\) denote the clockwise rotation of the input \(m\times m\) picture by \(90\) degrees, i.e., \(\sigma\) is the following permutation of the pixels \(a_{ij}\): \[\sigma\colon\ \mathbb{R}^{m\times m}\rightarrow\mathbb{R}^{m\times m},\qquad\begin{pmatrix}a_{11}&a_{12}&\ldots&a_{1m}\\ a_{21}&a_{22}&\ldots&a_{2m}\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}&a_{m2}&\ldots&a_{mm}\end{pmatrix}\mapsto\begin{pmatrix}a_{m1}&a_{m-1,1}&\ldots&a_{11}\\ a_{m2}&a_{m-1,2}&\ldots&a_{12}\\ \vdots&\vdots&\ddots&\vdots\\ a_{mm}&a_{m,m-1}&\ldots&a_{1m}\end{pmatrix}. \tag{2.1}\]

Since square numbers are either \(0\) or \(1\) modulo \(4\), we distinguish between \(m\) odd and \(m\) even for the identification of \(\mathbb{R}^{m\times m}\) with \(\mathbb{R}^{n}\). If \(m\) is odd, we identify \[\mathbb{R}^{m\times m}\xrightarrow{\cong}\mathbb{R}^{n},\qquad A\ =\begin{pmatrix}a_{1,1}&a_{1,2}&\cdots&a_{1,m-1}&a_{1,m}\\ a_{2,1}&a_{2,2}&\cdots&a_{2,m-1}&a_{2,m}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ a_{m-1,1}&a_{m-1,2}&\cdots&a_{m-1,m-1}&a_{m-1,m}\\ a_{m,1}&a_{m,2}&\cdots&a_{m,m-1}&a_{m,m}\end{pmatrix}\ \mapsto\ \mathrm{vec}(A)\,, \tag{2.2}\] where \(\mathrm{vec}(A)=(a_{1,1},a_{1,m},a_{m,m},a_{m,1},a_{1,2},a_{2,m},a_{m,m-1},a_{m-1,1},\ldots,a_{1,m-1},a_{m-1,m},a_{m,2},a_{2,1},a_{2,2},a_{2,m-1},a_{m-1,m-1},a_{m-1,2},\ldots,a_{\frac{m+1}{2},\frac{m+1}{2}})^{\top}.\) The intuition of the choice of vectorization can be described as "passing from corner to corner clockwise, inwards layer by layer". Under this identification, the action of \(\sigma\) is given by the \(n\times n\) block matrix \[\begin{pmatrix}0&0&0&1&&&&&\\ 1&0&0&0&&&&&\\ 0&1&0&0&&&&&\\ 0&0&1&0&&&&&\\ &&&&\ddots&&&&\\ &&&&&0&0&0&1&\\ &&&&&1&0&0&0&\\ &&&&&0&1&0&0&\\ &&&&&0&0&1&0&\\ &&&&&&&&&1\end{pmatrix}\,, \tag{2.3}\] where non-filled entries are \(0\).

If \(m\) is even, we use the identification \[\mathbb{R}^{m\times m}\xrightarrow{\cong}\mathbb{R}^{n},\qquad A\ =\begin{pmatrix}a_{1,1}&a_{1,2}&\cdots&a_{1,m-1}&a_{1,m}\\ a_{2,1}&a_{2,2}&\cdots&a_{2,m-1}&a_{2,m}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ a_{m-1,1}&a_{m-1,2}&\cdots&a_{m-1,m-1}&a_{m-1,m}\\ a_{m,1}&a_{m,2}&\cdots&a_{m,m-1}&a_{m,m}\end{pmatrix}\ \mapsto\ \mathrm{vec}(A)\,, \tag{2.4}\] where \(\mathrm{vec}(A)=(a_{1,1},a_{1,m},a_{m,m},a_{m,1},a_{1,2},a_{2,m},a_{m,m-1},a_{m-1,1},\ldots,a_{1,m-1},a_{m-1,m},a_{m,2},a_{2,1},a_{2,2},a_{2,m-1},a_{m-1,m-1},a_{m-1,2},\ldots,a_{\frac{m}{2},\frac{m}{2}},a_{\frac{m}{2},\frac{m}{2}+1},a_{\frac{m}{2}+1,\frac{m}{2}+1},a_{\frac{m}{2}+1,\frac{m}{2}})^{\top}\). Under this identification, \(\sigma\) acts on \(\mathbb{R}^{n}\) by the \(n\times n\) block matrix \[\begin{pmatrix}0&0&0&1&&&&&\\ 1&0&0&0&&&&&\\ 0&1&0&0&&&&&\\ 0&0&1&0&&&&&\\ &&&&\ddots&&&&\\ &&&&&0&0&0&1\\ &&&&&1&0&0&0\\ &&&&&0&1&0&0\\ &&&&&0&0&1&0\end{pmatrix}\,. \tag{2.5}\]

Invariance of \(f\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) under \(\sigma\) hence implies that columns 1-4, 5-8, 9-12, and so on, of the matrix \(M\) representing \(f\) have to coincide. In particular, \(\sigma\)-invariance implies that the rank of \(f\) is at most \(\lceil\frac{m^{2}}{4}\rceil\), where \(\lceil\cdot\rceil\) denotes the ceiling function. Note that the set \(\mathcal{I}_{n\times n}^{\sigma}\) of all linear rotation-invariant maps \(f\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a vector space.
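The cycle structure of this pixel permutation, and with it the rank bound \(\lceil\frac{m^{2}}{4}\rceil\), is easy to check computationally; the following sketch (ours; assuming Python with numpy) counts the cycles of the rotation for small \(m\):

```python
import numpy as np

def rotation_permutation(m):
    """The 90-degree clockwise rotation as a permutation of the m*m pixels.
    (The array represents sigma up to inversion, which has the same cycle type.)"""
    idx = np.arange(m * m).reshape(m, m)
    return np.rot90(idx, k=-1).flatten()

def num_cycles(perm):
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

for m in (2, 3, 4, 5):
    k = num_cycles(rotation_permutation(m))
    print(m, k, -(-m * m // 4))   # the cycle count k equals ceil(m^2/4)
```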
**Remark 2.1**.: Operations such as rotations, reflections, and shifts of the rows of \(A\) can all be seen as special cases of permutations. \(\diamond\)

**Remark 2.2** (Design of rotation-invariant autoencoders).: From what was found above, we deduce that for any linear encoder-decoder \(f\colon\mathbb{R}^{m\times m}\to\mathbb{R}^{r}\to\mathbb{R}^{m\times m}\) that is invariant under \(\sigma\), the number \(r\) can be chosen to be \(\leq\lceil\frac{m^{2}}{4}\rceil\). A rank constraint with \(r\) small moreover imposes that some of the blocks of four consecutive columns coincide, are zero, or are linear combinations of each other. \(\diamond\)

#### 2.1.2 Rotation-equivariance of linear maps for \(3\times 3\) pictures

Consider the set of \(\mathbb{R}\)-linear maps \(f\colon\mathbb{R}^{9}\to\mathbb{R}^{9}\) of rank at most \(3.\) Every such map can be written as a composition of \(\mathbb{R}\)-linear maps \[\mathbb{R}^{9}\longrightarrow\mathbb{R}^{3}\longrightarrow\mathbb{R}^{9}. \tag{2.6}\] Those maps are encoded precisely by real \(9\times 9\) matrices \(M=(m_{ij})_{i,j=1,\ldots,9}\in\mathcal{M}_{9\times 9}(\mathbb{R})\) all whose \(4\times 4\) minors vanish. The minors are homogeneous polynomials of degree \(4\) in the entries of the matrix \(M.\) We denote by \(I_{3,9\times 9}\leq\mathbb{C}[\{m_{ij}\}]\) the ideal generated by those polynomials. In more geometric terms, we are looking for the real points of the \(45\)-dimensional variety \[\mathcal{M}_{3,9\times 9}\,=\,V(I_{3,9\times 9})\,\subset\,\mathbb{C}^{9\times 9}. \tag{2.7}\]

Denote by \(\sigma\) the clockwise rotation of a \(3\times 3\) matrix by \(90\) degrees, i.e., \[\sigma\colon\,\mathbb{R}^{3\times 3}\to\mathbb{R}^{3\times 3},\qquad\begin{pmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{pmatrix}\,\mapsto\,\begin{pmatrix}a_{31}&a_{21}&a_{11}\\ a_{32}&a_{22}&a_{12}\\ a_{33}&a_{23}&a_{13}\end{pmatrix}, \tag{2.8}\] and let \(G=\langle\sigma\rangle\). This is a finite, cyclic subgroup of \(\mathrm{O}(2)\) which preserves the \(m\times m\) shape of the input matrix, which we interpret as a quadratic image with \(n=9\) real pixels \(a_{ij}\). We are interested in those maps that are equivariant under \(G\), i.e., linear maps \(f\) for which \[\sigma\circ f\,=\,f\circ\sigma\,. \tag{2.9}\]

Again, we identify \(\mathbb{R}^{3\times 3}\cong\mathbb{R}^{9}\) via \[\begin{pmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{pmatrix}\mapsto\begin{pmatrix}a_{11}&a_{13}&a_{33}&a_{31}&a_{12}&a_{23}&a_{32}&a_{21}&a_{22}\end{pmatrix}^{\top}. \tag{2.10}\] Then \(\sigma(A)\) is represented by the vector \((a_{31}\ a_{11}\ a_{13}\ a_{33}\ a_{21}\ a_{12}\ a_{23}\ a_{32}\ a_{22})^{\top}.\) Under this identification, the rotation is the permutation \(\sigma=(1\,4\,3\,2)(5\,8\,7\,6)\in\mathcal{S}_{9}\) and is represented by the block diagonal matrix \[P_{\sigma}\,=\,\left(\begin{array}{cccc|cccc|c}0&0&0&1&&&&&\\ 1&0&0&0&&&&&\\ 0&1&0&0&&&&&\\ 0&0&1&0&&&&&\\ \hline&&&&0&0&0&1&\\ &&&&1&0&0&0&\\ &&&&0&1&0&0&\\ &&&&0&0&1&0&\\ \hline&&&&&&&&1\end{array}\right)\,, \tag{2.11}\] whose \(4\times 4\) blocks are as in Equation (2.3) and where non-filled entries are \(0\). A map \(f\) is equivariant under \(\sigma\), and hence under \(G\), if and only if its representing matrix \(M\) satisfies \[P_{\sigma}\cdot M\,=\,M\cdot P_{\sigma}\,. \tag{2.12}\] We therefore aim to determine all matrices \(M\) that commute with \(P_{\sigma}\). Hence, a matrix \(M\) is equivariant under \(\sigma\) if and only if \(M\) is similar to itself with the permutation matrix of \(\sigma\) as base change.
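Equivariance (2.12) is a linear condition on \(M\), so the space of equivariant matrices can be computed as a nullspace. A small sketch (ours; assuming Python with numpy), using the column-stacking identity \(\operatorname{vec}(PM-MP)=(\mathrm{I}\otimes P-P^{\top}\otimes\mathrm{I})\operatorname{vec}(M)\):

```python
import numpy as np

def perm_matrix(perm):
    """Permutation matrix with P e_j = e_{perm[j]} (a column convention;
    the transpose of the row convention used in the text)."""
    n = len(perm)
    P = np.zeros((n, n))
    P[perm, np.arange(n)] = 1.0
    return P

# a permutation of cycle type (4, 4, 1), like the rotation above (0-indexed):
P = perm_matrix([1, 2, 3, 0, 5, 6, 7, 4, 8])
n = 9
K = np.kron(np.eye(n), P) - np.kron(P.T, np.eye(n))
print(n * n - np.linalg.matrix_rank(K))   # 21 = dim E^sigma_{9x9}
```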
Also condition (2.9) can be expressed as the vanishing of polynomials read from Equation (2.12). Those \(81\) homogeneous binomials of degree \(1\) cut out the vector space \(\mathcal{E}_{9\times 9}^{\sigma}\). We see from (2.11) that the matrices in \(\mathcal{E}_{9\times 9}^{\sigma}\) must be of the form \[\left(\begin{array}{cccc|cccc|c}\alpha_{1}&\alpha_{2}&\alpha_{3}&\alpha_{4} &\beta_{1}&\beta_{2}&\beta_{3}&\beta_{4}&\varepsilon_{3}\\ \alpha_{4}&\alpha_{1}&\alpha_{2}&\alpha_{3}&\beta_{4}&\beta_{1}&\beta_{2}& \beta_{3}&\varepsilon_{3}\\ \alpha_{3}&\alpha_{4}&\alpha_{1}&\alpha_{2}&\beta_{3}&\beta_{4}&\beta_{1}& \beta_{2}&\varepsilon_{3}\\ \alpha_{2}&\alpha_{3}&\alpha_{4}&\alpha_{1}&\beta_{2}&\beta_{3}&\beta_{4}& \beta_{1}&\varepsilon_{3}\\ \hline\gamma_{1}&\gamma_{2}&\gamma_{3}&\gamma_{4}&\delta_{1}&\delta_{2}& \delta_{3}&\delta_{4}&\varepsilon_{4}\\ \gamma_{4}&\gamma_{1}&\gamma_{2}&\gamma_{3}&\delta_{4}&\delta_{1}&\delta_{2}& \delta_{3}&\varepsilon_{4}\\ \gamma_{3}&\gamma_{4}&\gamma_{1}&\gamma_{2}&\delta_{3}&\delta_{4}&\delta_{1}& \delta_{2}&\varepsilon_{4}\\ \gamma_{2}&\gamma_{3}&\gamma_{4}&\gamma_{1}&\delta_{2}&\delta_{3}&\delta_{4}& \delta_{1}&\varepsilon_{4}\\ \hline\varepsilon_{1}&\varepsilon_{1}&\varepsilon_{1}&\varepsilon_{1}& \varepsilon_{2}&\varepsilon_{2}&\varepsilon_{2}&\varepsilon_{2}&\varepsilon_ {5}\end{array}\right)\,. \tag{2.13}\] Hence the dimension of the vector space \(\mathcal{E}_{9\times 9}^{\sigma}\) is \(81-60=4\cdot 4+5\cdot 1=21.\) The matrices in this vector space that can be parameterized by the autoencoder (2.6) form the variety \(\mathcal{E}_{3,9\times 9}^{\sigma}\), which is obtained as the intersection of \(\mathcal{E}^{\sigma}\) and \(\mathcal{M}_{3,9\times 9}\), i.e., \[\mathcal{E}_{3,9\times 9}^{\sigma}\,=\,\mathcal{M}_{3,9\times 9}\cap \mathcal{E}_{9\times 9}^{\sigma}\,. \tag{2.14}\] Intersecting with \(\mathcal{M}_{3,9\times 9}\) imposes that at most \(3\) of the \(9\) columns of the matrix in (2.13) are linearly independent. ### Algebraic geometry of similarity transforms For natural numbers \(m,n\) and \(r\leq\min(m,n)\), the variety \(\mathcal{M}_{r,m\times n}\) of \(m\times n\) matrices of rank at most \(r\) has dimension \[\dim(\mathcal{M}_{r,m\times n})\,=\,r\cdot(m+n-r)\,, \tag{2.15}\] cf. [9, Proposition 12.2]. We remind our readers that the dimension of an affine variety is the Krull dimension of its coordinate ring. **Remark 2.3** (Real vs. complex).: Since all the coefficients of the contributing polynomials are real, the real variety of real-valued points \(\mathcal{M}_{r,m\times n}(\mathbb{R})=\mathcal{M}_{r,m\times n}(\mathbb{C}) \cap\mathbb{R}^{m\times n}\) of \(\mathcal{M}_{r,m\times n}\) has the same dimension as the complex variety \(\mathcal{M}_{r,m\times n}\). The points of \(\mathcal{M}_{r,m\times n}(\mathbb{R})\) are real \(m\times n\) matrices of rank \(\leq r\). \(\diamond\) As was pointed out in [9, Example 19.10], it is proven in [7, Example 14.4.11] that the degree of \(\mathcal{M}_{r,m\times n}\) is \[\deg\left(\mathcal{M}_{r,m\times n}\right)\ =\ \prod_{i=0}^{n-r-1}\frac{(m+i)!\, \cdot\!i!}{(r+i)!\,\cdot\!(m-r+i)!}\,. \tag{2.16}\] **Lemma 2.4**.: _Let \(0<r<n\). A matrix \(M\in\mathcal{M}_{r,m\times n}\) is a singular point of \(\mathcal{M}_{r,m\times n}\) if and only if its rank is strictly smaller than \(r\), i.e., if \(\operatorname{rank}(M)<r\). _ We will also investigate the Euclidean distance degree of [6]. 
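As an aside, the degree formula (2.16) is straightforward to evaluate; a small sketch (ours; assuming Python with its standard library, and exact fractions since the individual factors need not be integers):

```python
from math import factorial
from fractions import Fraction

def degree_determinantal(r, m, n):
    """Degree of the determinantal variety M_{r, m x n}, Equation (2.16)."""
    deg = Fraction(1)
    for i in range(n - r):
        deg *= Fraction(factorial(m + i) * factorial(i),
                        factorial(r + i) * factorial(m - r + i))
    return int(deg)

print(degree_determinantal(1, 2, 3))   # 3: rank-1 2x3 matrices (a Segre variety)
print(degree_determinantal(3, 9, 9))   # degree of M_{3,9x9} from Section 2.1.2
```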
**Definition 2.5**.: The _Euclidean distance (ED) degree_ of an algebraic variety \(\mathcal{X}\) in \(\mathbb{R}^{N}\) is the number of complex critical points of the squared Euclidean distance from \(\mathcal{X}\) to a general point outside the variety. It is denoted by \(\operatorname{deg}_{\operatorname{ED}}(\mathcal{X})\). **Remark 2.6**.: Consider a linear fully-connected network with input dimension \(n\), output dimension \(m\), and smallest layer of width \(r\). For sufficiently large and generic data, training that network with the squared error loss is equivalent to minimizing the Euclidean distance from the variety \(\mathcal{M}_{r,m\times n}(\mathbb{R})\) to a general matrix in \(\mathcal{M}_{m\times n}(\mathbb{R})\); see [15, Section 3.3]. \(\diamond\) The ED degree of a variety is a measure for the complexity of determining points of the variety which are closest to a data point of interest. In learning, one might want to find an in-/equivariant function that is "closest" to a learned, arbitrary function. The ED degree is a measure of computational complexity for this problem in the following sense: it is the number of critical points of the distance function from the function space to a generic linear function. Note that the ED degree is not an algebraic invariant of the real variety \(\mathcal{X}\) alone, but it depends on the chosen coordinate system of its ambient space \(\mathbb{R}^{N}\). **Example 2.7**.: The circle has ED degree 2 and generic ellipses are of ED degree 4. \(\diamond\) The ED degree of the determinantal variety \(\mathcal{M}_{r,m\times n}(\mathbb{R})\) can be computed from the Eckart-Young Theorem [6, Example 2.3], which gives \[\operatorname{deg}_{\operatorname{ED}}(\mathcal{M}_{r,m\times n}(\mathbb{R})) \,=\,\binom{\min(m,n)}{r}\,. \tag{2.17}\] In our investigations, we will commonly perform base changes of the matrices in \(\mathcal{M}_{m\times n}\). For a subvariety \(\mathcal{X}\subset\mathcal{M}_{m\times n}\) and any \(T\in\operatorname{GL}_{n}(\mathbb{C})\), we denote by \(\mathcal{X}^{\cdot T}\) the image of \(\mathcal{X}\) under the linear isomorphism \[\cdot T\colon\,\mathcal{M}_{m\times n}\longrightarrow\mathcal{M}_{m\times n}, \quad M\mapsto MT\,. \tag{2.18}\] **Lemma 2.8**.: _Let \(\mathcal{X}\subset\mathcal{M}_{m\times n}\) be a subvariety and let \(T\in\operatorname{GL}_{n}\). Then, \(\dim(\mathcal{X}^{\cdot T})=\dim\mathcal{X}\), \(\operatorname{deg}(\mathcal{X}^{\cdot T})=\operatorname{deg}\mathcal{X}\), \(\operatorname{Sing}(\mathcal{X}^{\cdot T})=\operatorname{Sing}(\mathcal{X})^{ \cdot T}\), and \((\mathcal{X}^{\cdot T})\cap\mathcal{M}_{r,m\times n}=(\mathcal{X}\cap\mathcal{ M}_{r,m\times n})^{\cdot T}\) for any \(r\leq\min(m,n)\)._ Proof.: Since (2.18) is a linear isomorphism, it preserves the dimension and the degree, and maps regular points to regular points. For the last assertion, we observe that every matrix \(M\in\mathcal{M}_{m\times n}\) satisfies that \(\operatorname{rank}(MT)=\operatorname{rank}(M)\), since \(T\) has full rank. We point out that the ED degree is in general not preserved by the isomorphism (2.18), as the following example demonstrates. **Example 2.9**.: Let \(m=1\) and \(n=3\). Consider the circle \(\mathcal{X}\subset\mathcal{M}_{1\times 3}(\mathbb{C})\) defined by the equation \(M_{1,1}^{2}+M_{1,2}^{2}-M_{1,3}^{2}=0\). 
The ED degree of \(\mathcal{X}(\mathbb{R})\) in the standard Euclidean distance \[d(0,M)\,\coloneqq\,\|M\|^{2}\,=\,\operatorname{tr}(MM^{\top})\,=\,M_{1,1}^{2}+M_{1,2}^{2}+M_{1,3}^{2} \tag{2.19}\] is equal to \(2\). Let us now consider the permutation matrix \(P\coloneqq\left(\begin{smallmatrix}0&0&1\\ 1&0&0\\ 0&1&0\end{smallmatrix}\right)\). To diagonalize it, we use the symmetric Vandermonde matrix \[T\,\coloneqq\,V(1,\zeta_{3},\zeta_{3}^{2})\,=\,\begin{pmatrix}1&1&1\\ 1&\zeta_{3}&\zeta_{3}^{2}\\ 1&\zeta_{3}^{2}&\zeta_{3}\end{pmatrix},\] where \(\zeta_{3}\) is any primitive third root of unity. In fact, we obtain \(T^{-1}PT=\operatorname{diag}(1,\zeta_{3}^{2},\zeta_{3})\). The ED degree of \(\mathcal{X}^{\cdot T}(\mathbb{R})\) is not equal to \(2\) anymore. In fact, the ED degree of \(\mathcal{X}^{\cdot T}\) counts the critical points of the function \(d(0,MT-U)\) over all \(M\in\mathcal{X}(\mathbb{R})\) for fixed, generic \(U\). Since \(U\) is generic, we can replace it by \(UT\). Hence, the ED degree of \(\mathcal{X}^{\cdot T}\) under the standard Euclidean distance (2.19) is equal to the ED degree of \(\mathcal{X}\) under the modified Euclidean distance \[d_{T}(0,M)\,\coloneqq\,d(0,MT)\,=\,3\cdot(M_{1,1}^{2}+2M_{1,2}M_{1,3})\,.\] A computation in Macaulay2 [8] shows that the ED degree of \(\mathcal{X}(\mathbb{R})\) under \(d_{T}\) is \(4\). \(\diamond\)

Depending on whether we study invariance or equivariance, we perform base changes on only one side of the matrices, or on both sides. For the latter, we write \(M^{\sim_{T}}\coloneqq T^{-1}MT\) for given matrices \(M\in\mathcal{M}_{n\times n}\) and \(T\in\operatorname{GL}_{n}\).

**Lemma 2.10**.: _Consider the matrices \(M\in\mathcal{M}_{m\times n}\), \(P\in\mathcal{M}_{n\times n}\), and \(T\in\operatorname{GL}_{n}\). Then, \(MP=M\) if and only if \(M^{\cdot T}P^{\sim_{T}}=M^{\cdot T}\). In the case that \(m=n\), we moreover have that \(MP=PM\) if and only if \(M^{\sim_{T}}P^{\sim_{T}}=P^{\sim_{T}}M^{\sim_{T}}\). _

In our investigations, we will make use of presentations of permutation matrices \(P\) in different bases. The strategy is as follows. Step 1 consists in decomposing a permutation \(\sigma\in\mathcal{S}_{n}\) into disjoint cycles \(\pi_{1},\ldots,\pi_{k}\) of lengths \(\ell_{1},\ldots,\ell_{k}\); this brings the permutation matrix \(P_{\sigma}\) of \(\sigma\) into block diagonal form, where each diagonal block is a circulant matrix. The second step is to diagonalize those circulant matrices of sizes \(\ell_{1},\ldots,\ell_{k}\). Step 3 is optional and groups the columns corresponding to the same eigenvalue.

**Procedure 2.11**.:

Step 0. Represent \(\sigma\) by the permutation matrix \(P_{\sigma}\in\mathcal{M}_{n\times n}(\{0,1\})\) with respect to the standard basis of \(\mathbb{R}^{n}\), i.e., the \(j\)-th row of \(P_{\sigma}\) is the transpose of the \(\sigma(j)\)-th standard unit vector of \(\mathbb{R}^{n}\).

Step 1. Determine a permutation matrix \(T_{1}\in\mathcal{M}_{n\times n}(\{0,1\})\) such that \(P_{\sigma}^{\sim_{T_{1}}}=T_{1}^{-1}P_{\sigma}T_{1}\) is block diagonal with blocks the circulant matrices \(C_{1},\ldots,C_{k}\) of the form \[C_{i}\,=\,\begin{pmatrix}0&&&&1\\ 1&0&&&\\ &1&\ddots&&\\ &&\ddots&&\\ &&&1&0\end{pmatrix}\in\mathcal{M}_{\ell_{i}\times\ell_{i}}(\{0,1\}). \tag{2.20}\] Each \(C_{i}\in\mathcal{M}_{\ell_{i}\times\ell_{i}}(\{0,1\})\) has \(\ell_{i}\)-th roots of unity as eigenvalues, namely \(\zeta_{\ell_{i}}^{j}\), where \(j=0,\ldots,\ell_{i}-1\), and \(\zeta_{n}\) denotes the primitive root of unity \(e^{2\pi i/n}\).
Depending on the lengths \(\ell_{i}\) of the cycles, some of the \(C_{i}\)'s might share common eigenvalues. Collect the eigenvalues of all the \(C_{i}\)'s in a set \(\{\lambda_{1},\lambda_{2},\ldots,\lambda_{s}\}\) together with their multiplicities \(b_{1},b_{2},\ldots,b_{s}\). Note that one of the \(\lambda_{i}\)'s is equal to \(1\) and for this \(\hat{i}\), \(b_{\hat{i}}=k\).

Step 2. Diagonalize each matrix \(C_{i}\) from (2.20) via a matrix in \(\mathrm{GL}_{n}(\mathbb{C})\); this can be obtained via Vandermonde matrices \(V(1,\zeta_{\ell_{i}},\ldots,\zeta_{\ell_{i}}^{\ell_{i}-1})\) as in Example 2.9. The following block diagonal matrix then diagonalizes the block circulant diagonal matrix \(P_{\sigma}^{\sim_{T_{1}}}\) from Step 1: \[T_{2}\,=\,\begin{pmatrix}V\big{(}1,\zeta_{\ell_{1}},\ldots,\zeta_{\ell_{1}}^{\ell_{1}-1}\big{)}&&\\ &\ddots&\\ &&V\Big{(}1,\zeta_{\ell_{k}},\ldots,\zeta_{\ell_{k}}^{\ell_{k}-1}\Big{)}\end{pmatrix}\,. \tag{2.21}\]

Step 3. Group identical eigenvalues: determine \(T_{3}\in\mathrm{GL}_{n}(\{0,1\})\) such that the matrix from Step 2 is block diagonal with blocks of the form \(\lambda_{i}\operatorname{I}_{b_{i}}\), i.e., determine \(T_{3}\) such that \[T_{3}^{-1}\cdot\big{(}P_{\sigma}^{\sim_{T_{1}}}\big{)}^{\sim_{T_{2}}}\cdot T_{3}\ =\ \begin{pmatrix}\lambda_{1}\operatorname{I}_{b_{1}}&&&\\ &\lambda_{2}\operatorname{I}_{b_{2}}&&\\ &&\ddots&\\ &&&\lambda_{s}\operatorname{I}_{b_{s}}\end{pmatrix}\,. \tag{2.22}\]

We demonstrate Steps 0-2 of Procedure 2.11 for an example.

**Example 2.12**.: Consider the permutation \(\sigma=\begin{pmatrix}1&2&3&4&5\\ 3&5&4&1&2\end{pmatrix}\in\mathcal{S}_{5}\). Then \[P_{\sigma}\,=\,\begin{pmatrix}0&0&1&0&0\\ 0&0&0&0&1\\ 0&0&0&1&0\\ 1&0&0&0&0\\ 0&1&0&0&0\end{pmatrix}\overset{\sim_{T_{1}}}{\mapsto}\,\begin{pmatrix}0&0&1&&\\ 1&0&0&&\\ 0&1&0&&\\ &&&0&1\\ &&&1&0\end{pmatrix}\,\overset{\sim_{T_{2}}}{\mapsto}\,\begin{pmatrix}1&&&&\\ &\zeta_{3}^{2}&&&\\ &&\zeta_{3}&&\\ &&&1&\\ &&&&-1\end{pmatrix}\] with \[T_{1}\,=\,\begin{pmatrix}0&0&1&0&0\\ 0&0&0&1&0\\ 0&1&0&0&0\\ 1&0&0&0&0\\ 0&0&0&0&1\end{pmatrix}\in\mathcal{M}_{5\times 5}(\{0,1\})\quad\text{and}\quad T_{2}\,=\,\left(\begin{array}{ccc|cc}1&1&1&&\\ 1&\zeta_{3}&\zeta_{3}^{2}&&\\ 1&\zeta_{3}^{2}&\zeta_{3}&&\\ \hline&&&1&1\\ &&&1&-1\end{array}\right). \] \(\diamond\)

The base change of Procedure 2.11 reduces commuting with \(P_{\sigma}\) to commuting with the diagonal matrix in (2.22). The latter is captured by the following standard fact from linear algebra.

**Lemma 2.13**.: _Let \(D=\operatorname{diag}(\lambda_{1}\operatorname{I}_{b_{1}},\ldots,\lambda_{s}\operatorname{I}_{b_{s}})\) with pairwise distinct \(\lambda_{1},\ldots,\lambda_{s}\). A matrix \(M\in\mathcal{M}_{n\times n}(\mathbb{C})\) commutes with \(D\) if and only if \(M\) is block diagonal with diagonal blocks of sizes \(b_{1}\times b_{1},\ldots,b_{s}\times b_{s}\). _

For \(\sigma\in\mathcal{S}_{n}\) and \(l\in\mathbb{N}_{>0}\), denote by \(d_{l}\) the number of cycles of \(\sigma\) whose length is divisible by \(l\). A primitive \(l\)-th root of unity appears as an eigenvalue of \(P_{\sigma}\) exactly once for each such cycle, hence with multiplicity \(d_{l}\); there are \(\varphi(l)\) primitive \(l\)-th roots of unity, where \(\varphi\) denotes Euler's totient function. For a field \(\mathbb{K}\), we write \(C_{\mathbb{K}}(P_{\sigma})=\{M\in\mathcal{M}_{n\times n}(\mathbb{K})\,|\,MP_{\sigma}=P_{\sigma}M\}\) for the centralizer of \(P_{\sigma}\), shortly \(C(P_{\sigma})\) when the field is clear from the context.

**Lemma 2.14**.: _After the full base change of Procedure 2.11, the matrices in \(C_{\mathbb{C}}(P_{\sigma})\) are exactly the block diagonal matrices with \(\varphi(l)\) many diagonal blocks of size \(d_{l}\times d_{l}\) for every \(l\geq 1\). In particular,_ \[\dim_{\mathbb{C}}\left(C_{\mathbb{C}}(P_{\sigma})\right)\,=\,\sum_{l\geq 1}\varphi(l)\cdot d_{l}^{2}\,. \tag{2.23}\]

**Remark 2.15**.: In the notation of Procedure 2.11, \(\{d_{i}\,|\,i\in\mathbb{N}_{>0},\;d_{i}\neq 0\}=\{b_{1},\dots,b_{s}\}\) as sets. Moreover, \(d_{1}=k\), i.e., the number of disjoint cycles into which \(\sigma\) decomposes. \(\diamond\)
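Procedure 2.11 and the dimension count (2.23) are easy to check computationally. The following sketch (ours; assuming Python with numpy) block-diagonalizes a permutation matrix along its cycles, diagonalizes each block with a Vandermonde matrix, and compares the numerically computed centralizer dimension with the formula:

```python
import numpy as np
from math import gcd

def cycles_of(perm):
    seen, out = set(), []
    for s in range(len(perm)):
        if s in seen:
            continue
        c, j = [], s
        while j not in seen:
            seen.add(j); c.append(j); j = perm[j]
        out.append(c)
    return out

def perm_matrix(perm):
    P = np.zeros((len(perm), len(perm)))
    P[perm, np.arange(len(perm))] = 1.0   # column convention P e_j = e_{sigma(j)}
    return P

# sigma from Example 2.12, 0-indexed: 1 -> 3 -> 4 -> 1 and 2 -> 5 -> 2
perm = [2, 4, 3, 0, 1]
cyc = cycles_of(perm)
order = [j for c in cyc for j in c]              # Step 1: reorder cycle by cycle
P1 = perm_matrix(perm)[np.ix_(order, order)]     # block diagonal, cyclic blocks

# Step 2: diagonalize each cyclic block with V(1, zeta, ..., zeta^(l-1))
n = len(perm)
T2 = np.zeros((n, n), dtype=complex)
pos = 0
for c in cyc:
    l = len(c)
    zeta = np.exp(2j * np.pi / l)
    T2[pos:pos + l, pos:pos + l] = [[zeta ** (j * k) for k in range(l)]
                                    for j in range(l)]
    pos += l
D = np.linalg.inv(T2) @ P1 @ T2
print(np.round(np.diag(D), 3))             # 1, zeta_3^2, zeta_3, 1, -1
print(np.allclose(D, np.diag(np.diag(D)))) # True: D is diagonal

# Lemma 2.14: dim C(P_sigma) = sum_l phi(l) d_l^2, checked against a nullspace
phi = lambda l: sum(1 for a in range(1, l + 1) if gcd(a, l) == 1)
lengths = [len(c) for c in cyc]
formula = sum(phi(l) * sum(1 for c in lengths if c % l == 0) ** 2
              for l in range(1, max(lengths) + 1))
P = perm_matrix(perm)
K = np.kron(np.eye(n), P) - np.kron(P.T, np.eye(n))
print(formula, n * n - np.linalg.matrix_rank(K))   # 7 7
```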
**Lemma 2.16**.: _Let \(\mathbb{F}\subset\mathbb{K}\) be a field extension. Then for any \(n\times n\) matrix \(M\in\mathcal{M}_{n\times n}(\mathbb{F})\), we have \(\dim_{\mathbb{F}}(C_{\mathbb{F}}(M))=\dim_{\mathbb{K}}(C_{\mathbb{K}}(M))\)._

Proof.: Let \(M\) be a matrix in \(\mathcal{M}_{n\times n}(\mathbb{F})\), and let \(Q_{M,\mathbb{F}}\colon\mathcal{M}_{n\times n}(\mathbb{F})\to\mathcal{M}_{n\times n}(\mathbb{F})\) be the linear map that sends a matrix \(N\) to its commutator with \(M\), namely to \([M,N]=MN-NM\). Since the process of Gaussian elimination is independent of the choice of the field, the kernel dimension of the linear map \(Q_{M,\cdot}\) does not change under field extension, i.e., \(\dim_{\mathbb{F}}(\ker(Q_{M,\mathbb{F}}))=\dim_{\mathbb{K}}(\ker(Q_{M,\mathbb{K}}))\).

**Corollary 2.17**.: _Lemma 2.14 holds true as well when the base field \(\mathbb{F}\) is \(\mathbb{R}\) or \(\mathbb{Q}\). _

**Example 2.18**.: The blocks of the matrix in (2.11) represent the three disjoint cycles \((1\,2\,3\,4)\), \((5\,6\,7\,8)\), and \((9)\) of length \(4\), \(4\), and \(1\), respectively. Hence the only non-zero \(d_{i}\)'s are \(d_{1}=k=3\), \(d_{2}=2\), and \(d_{4}=2\). Therefore, by Lemma 2.14 and Corollary 2.17, we have \(\dim(C(P_{\sigma}))=\varphi(1)\cdot 3^{2}+\varphi(2)\cdot 2^{2}+\varphi(4)\cdot 2^{2}=9+4+8=21\), in coherence with what was obtained in Section 2.1.2. After the full base change in Procedure 2.11 such that \(P_{\sigma}\) is the diagonal matrix in Equation (2.22), the matrices in \(C(P_{\sigma})\) are those of the form \[\left(\begin{array}{ccc|cc|cc|cc}\alpha_{11}&\alpha_{12}&\alpha_{13}&&&&&&\\ \alpha_{21}&\alpha_{22}&\alpha_{23}&&0&&0&&0\\ \alpha_{31}&\alpha_{32}&\alpha_{33}&&&&&&\\ \hline&&&\beta_{11}&\beta_{12}&&&&\\ &0&&\beta_{21}&\beta_{22}&&0&&0\\ \hline&&&&&\gamma_{11}&\gamma_{12}&&\\ &0&&0&&\gamma_{21}&\gamma_{22}&&0\\ \hline&&&&&&&\delta_{11}&\delta_{12}\\ &0&&0&&0&&\delta_{21}&\delta_{22}\end{array}\right), \tag{2.24}\] with scalars \(\alpha_{ij}\), \(\beta_{ij}\), \(\gamma_{ij}\), \(\delta_{ij}\). \(\diamond\)

## 3 Invariance under permutation groups

We here study linear maps \(f\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) and are going to deal with determinantal subvarieties of \(\mathcal{M}_{m\times n}\). For invariance, the action of the considered group is required on the input space \(\mathbb{R}^{n}\) only. Let \(G\leq\mathcal{S}_{n}\) be an arbitrary subgroup of the symmetric group. The linear map \(f\) is _invariant under \(G\)_ if \[f\circ\sigma\,=\,f \tag{3.1}\] for all permutations \(\sigma\in G\).

### Reduction to cyclic groups

For a decomposition \(\sigma=\pi_{1}\circ\cdots\circ\pi_{k}\in\mathcal{S}_{n}\) into disjoint cycles, we denote by \(\mathcal{P}(\sigma)=\{A_{1},\ldots,A_{k}\}\) its induced partition of the set \([n]\). The \(A_{i}\subset[n]\) fulfill \(\cup_{i=1}^{k}A_{i}=[n]\) and \(A_{i}\cap A_{j}=\emptyset\) whenever \(i\neq j\).

**Example 3.1**.: Let \(\sigma=(1\,3\,4)(2\,5)\in\mathcal{S}_{5}.\) Its induced partition of \([5]=\{1,2,3,4,5\}\) is \(\mathcal{P}(\sigma)=\{\{1,3,4\},\{2,5\}\}.\) Note that different permutations might induce the same partition: the permutation \(\eta=(1\,4\,3)(2\,5)\neq\sigma\) gives rise to the partition \(\mathcal{P}(\eta)=\mathcal{P}(\sigma)\) of the set \([5]\). \(\diamond\)

**Proposition 3.2**.: _Let \(G=\langle\sigma\rangle\leq\mathcal{S}_{n}\) be cyclic and \(\sigma=\pi_{1}\circ\cdots\circ\pi_{k}\) a decomposition of \(\sigma\) into \(k\) disjoint cycles \(\pi_{1},\ldots,\pi_{k}\).
The dimension of \(\mathcal{I}_{m\times n}^{G}\) is \(m\cdot k\)._

Proof.: Consider the partition \(\mathcal{P}(\sigma)=\{A_{1},\ldots,A_{k}\}\) of \([n]\) induced by the decomposition \(\sigma=\pi_{1}\circ\cdots\circ\pi_{k}\). Assuming invariance of \(M\) under \(\sigma\), each cycle \(\pi_{i}\) of \(\sigma\) forces some of the columns of \(M\) to coincide, namely the columns indexed by \(A_{i}\subset[n]\). For each \(A_{i}\in\mathcal{P}(\sigma)\), we need to remember only one column \(m_{A_{i}}\) of \(M\). For each \(i\), \(\operatorname{length}(\pi_{i})\) many identical copies of \(m_{A_{i}}\) are listed as columns in \(M\). For each of the \(m_{A_{i}}\), there are \(m\) degrees of freedom to fill its entries, which results in \(\dim(\mathcal{I}_{m\times n}^{\sigma})=mk\).

From the proof, we deduce that invariance under a permutation \(\sigma\) depends only on \(\mathcal{P}(\sigma)\). We also deduce a rank constraint on \(\sigma\)-invariant linear functions.

**Corollary 3.3**.: _If a linear function \(f\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) is invariant under \(\sigma\in\mathcal{S}_{n}\), then its rank is at most \(k\), the number of disjoint cycles into which \(\sigma\) decomposes. _

Invariance under a permutation group \(G\) with arbitrarily many generators can be reduced to invariance under cyclic groups, which we make precise in the following proposition.

**Proposition 3.4**.: _Let \(G=\langle\sigma_{1},\ldots,\sigma_{g}\rangle\leq\mathcal{S}_{n}\) be a permutation group. There exists \(\sigma\in\mathcal{S}_{n}\) such that \(\mathcal{I}_{m\times n}^{G}=\mathcal{I}_{m\times n}^{\sigma}\)._

Proof.: Decompose each \(\sigma_{i}\) into pairwise disjoint cycles \(\pi_{i,1}\circ\cdots\circ\pi_{i,k_{i}}.\) Invariance of a matrix \(M\) under \(\sigma_{i}\) forces some of the columns of \(M\) to coincide and depends only on the partition \(\mathcal{P}(\sigma_{i})\) of \([n]\). Any additional \(\sigma_{j}\), \(j\neq i\), forces further columns of \(M\) to coincide. Invariance of \(M\) under \(G\) is hence described by the finest common coarsening of \(\mathcal{P}(\sigma_{1}),\ldots,\mathcal{P}(\sigma_{g})\).

We deduce from the proof that \(\mathcal{I}_{m\times n}^{G}=\mathcal{I}_{m\times n}^{\sigma}\) for any \(\sigma\in\mathcal{S}_{n}\) whose induced partition \(\mathcal{P}(\sigma)\) is the finest common coarsening of \(\{\mathcal{P}(\sigma_{1}),\ldots,\mathcal{P}(\sigma_{g})\}\).

### Characterization of \(\mathcal{I}_{r,m\times n}^{G}\)

By omitting repeated columns, we see that \(\mathcal{I}_{r,m\times n}^{G}\) is linearly isomorphic to \(\mathcal{M}_{\min(r,k),m\times k}\). We formulate this key observation as a proposition.

**Proposition 3.5**.: _Let \(G=\langle\sigma\rangle\leq\mathcal{S}_{n}\) be cyclic, and \(\sigma=\pi_{1}\circ\cdots\circ\pi_{k}\) its decomposition into pairwise disjoint cycles \(\pi_{i}\). The variety \(\mathcal{I}_{r,m\times n}^{\sigma}\) is isomorphic to the determinantal variety \(\mathcal{M}_{\min(r,k),m\times k}\) via a linear morphism that deletes repeated columns:_ \[\psi_{\mathcal{P}(\sigma)}\colon\,\mathcal{I}_{r,m\times n}^{\sigma}\to\mathcal{M}_{\min(r,k),m\times k}\,. \tag{3.2}\]

Proof.: For each \(A_{i}\in\mathcal{P}(\sigma)\), \(\psi_{\mathcal{P}(\sigma)}\) remembers the column \(m_{A_{i}}\) of \(M\). This mapping rule is linear in the entries of \(M\). Intersecting with \(\mathcal{M}_{r,m\times n}\) imposes linear dependencies on the columns of \(M\). To invert \(\psi_{\mathcal{P}(\sigma)}\), one needs to remember \(\mathcal{P}(\sigma)\) as datum.
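Proposition 3.4 is algorithmic: the finest common coarsening of the cycle partitions is the orbit partition of the generated group, computable by a union-find pass over the generators. A small sketch (ours; plain Python):

```python
def finest_common_coarsening(n, generators):
    """Orbit partition of the group generated by the given permutations;
    invariance under G equals invariance under any sigma with this partition."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for sigma in generators:
        for j in range(n):
            parent[find(j)] = find(sigma[j])   # j and sigma(j) share an orbit
    parts = {}
    for j in range(n):
        parts.setdefault(find(j), []).append(j)
    return list(parts.values())

# G = <(1 3 4), (2 5)> acting on {0,...,4} (0-indexed generators):
print(finest_common_coarsening(5, [[2, 1, 3, 0, 4], [0, 4, 2, 3, 1]]))
# [[0, 2, 3], [1, 4]]
```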
**Example 3.6** (\(m=2\), \(n=5\), \(r=1\)).: Let \(\sigma=(1\,3\,4)(2\,5)\in\mathcal{S}_{5}\) and hence \(k=2\). Any invariant matrix \(M\in\mathcal{M}_{2\times 5}(\mathbb{R})\) is of the form \(\left(\begin{smallmatrix}\alpha&\gamma&\alpha&\alpha&\gamma\\ \beta&\delta&\beta&\beta&\delta\end{smallmatrix}\right)\) for some \(\alpha,\beta,\gamma,\delta\in\mathbb{R}\). The rank constraint \(r=1\) imposes that \((\gamma,\delta)=\lambda\cdot(\alpha,\beta)\) for some \(\lambda\in\mathbb{R}\), where we assume w.l.o.g. that \((\alpha,\beta)\neq(0,0)\) in the case of rank one. The morphism (3.2) hence is \[\psi_{\mathcal{P}(\sigma)}\colon\begin{pmatrix}\alpha&\lambda\alpha&\alpha&\alpha&\lambda\alpha\\ \beta&\lambda\beta&\beta&\beta&\lambda\beta\end{pmatrix}\mapsto\begin{pmatrix}\alpha&\lambda\alpha\\ \beta&\lambda\beta\end{pmatrix}\,. \tag{3.3}\] To invert the morphism \(\psi_{\mathcal{P}(\sigma)}\), one reads from \(\mathcal{P}(\sigma)\) how to copy and paste the columns \((\alpha,\beta)^{\top}\) and \((\lambda\alpha,\lambda\beta)^{\top}\) to recover the \(2\times 5\) matrix one started with. \(\diamond\)

**Corollary 3.7**.: _In the setup of Proposition 3.5, one has_ \[\begin{split}\dim\left(\mathcal{I}^{\sigma}_{r,m\times n}\right)&=\;\min(r,k)\cdot(m+k-\min(r,k))\,,\\ \deg\left(\mathcal{I}^{\sigma}_{r,m\times n}\right)&=\;\prod_{i=0}^{k-\min(r,k)-1}\frac{(m+i)!\,\cdot\!i!}{(\min(r,k)+i)!\,\cdot\!(m-\min(r,k)+i)!}\,,\\ \operatorname{Sing}(\mathcal{I}^{\sigma}_{r,m\times n})&=\;\psi_{\mathcal{P}(\sigma)}^{-1}\left(\mathcal{M}_{\min(r,k)-1,m\times k}\right)\,.\end{split} \tag{3.4}\]

Proof.: The statements are an immediate consequence of Lemma 2.8 combined with Equations (2.15) and (2.16), and Lemma 2.4.

The following statement is non-trivial in light of Example 2.9.

**Proposition 3.8**.: _The ED degree of \(\mathcal{I}^{\sigma}_{r,m\times n}(\mathbb{R})\) is_ \[\deg_{\mathrm{ED}}\left(\mathcal{I}^{\sigma}_{r,m\times n}(\mathbb{R})\right)\,=\,\deg_{\mathrm{ED}}\left(\mathcal{M}_{\min(r,k),m\times k}(\mathbb{R})\right). \tag{3.5}\]

Proof.: Let \(\sigma\in\mathcal{S}_{n}\) consist of \(k\) disjoint cycles \(\pi_{1},\ldots,\pi_{k}\) of lengths \(\ell_{1},\ldots,\ell_{k}\). After re-ordering the columns of \(M\in\mathcal{I}^{\sigma}_{m\times n}\) according to the cycle decomposition, the matrices \(M\) are those whose first \(\ell_{1}\) many columns are equal, whose following \(\ell_{2}\) many columns are equal, and so on. Given any data matrix \(U\in\mathcal{M}_{m\times n}\), when minimizing its Euclidean distance to \(\mathcal{I}^{\sigma}_{r,m\times n}\), we can assume without loss of generality that \(U\) is contained in the linear space \(\mathcal{I}^{\sigma}_{m\times n}\) by orthogonally projecting \(U\) onto that linear space. The squared Euclidean distance from \(M\) to such \(U\) then becomes \[\sum_{i=1}^{m}\sum_{j=1}^{k}\,\ell_{j}\cdot\left(M_{i,l(j)}-U_{i,l(j)}\right)^{2}, \tag{3.6}\] where the \(l(j)\coloneqq\ell_{1}+\cdots+\ell_{j-1}+1\) index the pairwise different columns of \(M\) resp. \(U\). In other words, only keeping the columns indexed by the \(l(j)\) is exactly the linear projection from \(\mathcal{I}^{\sigma}_{r,m\times n}\) onto \(\mathcal{M}_{\min(r,k),m\times k}\). Hence, (3.6) is a _weighted_ Euclidean distance to a data matrix. Such weighted Euclidean distances and low-rank matrix problems are discussed in Chapter 2 of the upcoming textbook [3].
Denoting by \(\tilde{M}\) and \(\tilde{U}\) the \(m\times k\) matrices with columns indexed by the \(l(j)\), we can express our ED problem explicitly as \[\min_{\tilde{M}\,\in\,\mathcal{M}_{\min(r,k),m\times k}}\,\sum_{i=1}^{m}\sum_{j=1}^{k}\,\ell_{j}\cdot\left(\tilde{M}_{i,j}-\tilde{U}_{i,j}\right)^{2}. \tag{3.7}\] In general, introducing weights changes the ED degree. However, the weights in (3.7) are special, since the entries in column \(j\) are all affected by the _same_ weight \(\ell_{j}\). Since scaling the columns of a matrix \(\tilde{M}\) by non-zero scalars (here \(\sqrt{\ell_{j}}\)) is an automorphism on \(\mathcal{M}_{\min(r,k),m\times k}\), we can rewrite (3.7) as \[\min_{\tilde{M}\in\mathcal{M}_{\min(r,k),m\times k}}\,\sum_{i=1}^{m}\sum_{j=1}^{k}\left(\sqrt{\ell_{j}}\tilde{M}_{i,j}-\sqrt{\ell_{j}}\tilde{U}_{i,j}\right)^{2}\;=\min_{\tilde{M}\in\mathcal{M}_{\min(r,k),m\times k}}\sum_{i=1}^{m}\sum_{j=1}^{k}\left(\tilde{M}_{i,j}-\sqrt{\ell_{j}}\tilde{U}_{i,j}\right)^{2}\,.\] From the latter reformulation, we conclude that the ED degree of \(\mathcal{M}_{\min(r,k),m\times k}(\mathbb{R})\) coincides with the ED degree of \(\mathcal{I}^{\sigma}_{r,m\times n}(\mathbb{R})\).

### Parameterizing invariance and network design

In this section, we investigate which implications imposing invariance has on the individual layers of a linear autoencoder. Recall that a matrix \(M\in\mathcal{M}_{n\times n}(\mathbb{K})\) has rank \(r\leq n\) if and only if there exist rank-\(r\) matrices \(A\in\mathcal{M}_{n\times r}(\mathbb{K})\) and \(B\in\mathcal{M}_{r\times n}(\mathbb{K})\) such that \(M=AB\). The factors of an invariant matrix do not need to be invariant, as the following example demonstrates. To be precise, this question makes sense to be asked a priori only for the first layer, \(B\), since the group \(G\) acts on \(\mathbb{R}^{n}\) only.

**Example 3.9**.: Let \(\sigma=(1\,2)\in\mathcal{S}_{3}\) and consider \[A\,=\,\begin{pmatrix}1&1\\ 1&1\\ 0&0\end{pmatrix}\quad\text{and}\quad B\,=\,\begin{pmatrix}1&2&1\\ 2&1&1\end{pmatrix}. \tag{3.8}\] Observe that \(B\) is not invariant under \(\sigma\), but the product \(AB\) is. \(\diamond\)

**Proposition 3.10** ([15, Proposition 22]).: _Let \(r\leq n\). Denote by \(\mu\colon\mathcal{M}_{n\times r}\times\mathcal{M}_{r\times n}\to\mathcal{M}_{n\times n},\;(A,B)\mapsto A\cdot B,\) the multiplication map. If \(\operatorname{rank}(M)=r\) and \(M=\mu(A,B)\), then the fiber of \(M\) is_ \[\mu^{-1}(M)\,=\,\left\{\left(AT^{-1},TB\right)\,|\,T\in\operatorname{GL}_{r}(\mathbb{K})\right\}\,\subset\,\mathcal{M}_{n\times r}\times\mathcal{M}_{r\times n}\,. \tag{3.9}\]

**Proposition 3.11**.: _Any matrix \(M\in\mathcal{M}_{n\times n}\) that is invariant under a permutation \(\sigma\in\mathcal{S}_{n}\) with \(k\) pairwise disjoint cycles factorizes as_ \[M\ =\ M_{1}\cdot(e_{i_{1}}|\cdots|e_{i_{n}})\,, \tag{3.10}\] _where \(M_{1}\in\mathcal{M}_{n\times k}\) and the \(e_{i_{j}}\), for \(j=1,\ldots,n\), are standard basis vectors of \(\mathbb{R}^{k}\), with repetitions being allowed._

Proof.: Let \(M\) be invariant under \(\sigma\in\mathcal{S}_{n}\). For each \(A_{i}\in\mathcal{P}(\sigma)\), remember the column \(m_{A_{i}}\) and collect these columns in the \(n\times k\) matrix \(M_{1}\coloneqq(m_{A_{1}}|\cdots|m_{A_{k}}).\) The remaining columns of \(M\) are copies of columns of \(M_{1}\), hence \(M=M_{1}\cdot(e_{i_{1}}|\cdots|e_{i_{n}})\) for standard basis vectors \(e_{i_{j}}\) of \(\mathbb{R}^{k}\), with repetitions being allowed.
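The proof of Proposition 3.8 above is effectively an algorithm for the closest invariant matrix of bounded rank: project onto the invariant subspace by averaging each cycle's columns, rescale by \(\sqrt{\ell_{j}}\), truncate the SVD (Eckart-Young), and undo the scaling. A minimal sketch (ours; assuming Python with numpy, with columns pre-ordered cycle by cycle):

```python
import numpy as np

def closest_invariant(U, r, cycle_lengths):
    """Closest sigma-invariant matrix of rank <= r to U, as in Prop. 3.8."""
    k = len(cycle_lengths)
    starts = np.cumsum([0] + cycle_lengths)
    # orthogonal projection onto I^sigma: average the columns of each cycle
    Ut = np.column_stack([U[:, starts[j]:starts[j + 1]].mean(axis=1)
                          for j in range(k)])
    w = np.sqrt(np.array(cycle_lengths, dtype=float))
    A, s, Bt = np.linalg.svd(Ut * w, full_matrices=False)
    Mt = (A[:, :r] * s[:r]) @ Bt[:r] / w      # best rank-r in the weighted norm
    # copy each column l_j times to recover the invariant n-column matrix
    return np.repeat(Mt, cycle_lengths, axis=1)

rng = np.random.default_rng(2)
U = rng.standard_normal((2, 5))
M = closest_invariant(U, 1, [3, 2])   # sigma = (1 3 4)(2 5) after reordering
print(np.linalg.matrix_rank(M), M.shape)   # 1 (2, 5)
```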
**Example 3.12** (\(n=4\)).: Let \(\sigma=(1\,3)\in\mathcal{S}_{4}\), so that \(k=3\). The matrix \[M\,=\,\begin{pmatrix}1&0&1&1\\ 2&4&2&1\\ 3&4&3&1\\ 0&2&0&1\end{pmatrix}\,=\,\begin{pmatrix}1&0&1\\ 2&4&1\\ 3&4&1\\ 0&2&1\end{pmatrix}\cdot\begin{pmatrix}1&0&1&0\\ 0&1&0&0\\ 0&0&0&1\end{pmatrix} \tag{3.11}\] is invariant under \(\sigma\) and factorizes as claimed in Proposition 3.11. \(\diamond\)

**Proposition 3.13**.: _Let \(M\) be invariant under \(\sigma\) and of maximal possible rank, i.e., of \(\operatorname{rank}(M)=k\). Then every factorization \(M=AB\) of \(M\) is of the form_ \[(A,B)\,\in\,\left\{\left(M_{1}T^{-1},T\cdot(e_{i_{1}}|\cdots|e_{i_{n}})\right)\,|\,T\in\operatorname{GL}_{k}\right\}\,. \tag{3.12}\]

Proof.: The statement is an immediate consequence of Propositions 3.10 and 3.11.

This statement tells us that linear autoencoders are well-suited for expressing invariance when one imposes appropriate weight sharing on the encoder. More precisely, the decoder factor \(M_{1}T^{-1}\) in (3.12) is an arbitrary matrix, but the encoder factor \(T\cdot(e_{i_{1}}|\cdots|e_{i_{n}})\) has repeated columns. We impose this repetition pattern via weight sharing on the encoder. Proposition 3.13 states that invariant matrices naturally lie in the function space of such an autoencoder. Given any permutation \(\sigma\in\mathcal{S}_{n}\), we say that an encoder \(\mathbb{R}^{n}\to\mathbb{R}^{r}\) has _\(\sigma\)-weight sharing_ if its representing matrices satisfy the following: for every set \(S\in\mathcal{P}(\sigma)\), the columns indexed by the elements in \(S\) coincide, and no additional weight sharing is imposed.

**Example 3.14**.: We revisit Example 2.12. The invariance of a matrix \(M=A\cdot B\in\mathcal{I}_{2,5\times 5}^{\sigma}\) forces the encoder factor \(B\) to fulfill the weight sharing property depicted in Figure 2; which weights have to coincide is to be read from the color labeling there. \(\diamond\)

Figure 2: The \(\sigma\)-weight sharing property imposed on the encoder by \(\sigma=(1\,3\,4)(2\,5)\).

**Proposition 3.15**.: _Let \(\sigma\in\mathcal{S}_{n}\) be a permutation consisting of \(k\) disjoint cycles, and let \(r\leq k\). Consider the linear autoencoder \(\mathbb{R}^{n}\to\mathbb{R}^{r}\to\mathbb{R}^{n}\) with fully-connected dense decoder \(\mathbb{R}^{r}\to\mathbb{R}^{n}\) and encoder \(\mathbb{R}^{n}\to\mathbb{R}^{r}\) with \(\sigma\)-weight sharing. Its function space is \(\mathcal{I}_{r,n\times n}^{\sigma}(\mathbb{R})\)._
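Before the proof, here is a small numerical illustration of Proposition 3.15 (ours; assuming Python with numpy): an encoder with \(\sigma\)-weight sharing composed with a dense decoder always produces a \(\sigma\)-invariant matrix of rank at most \(r\).

```python
import numpy as np

def shared_encoder(T, partition, n):
    """Encoder with sigma-weight sharing: column j of B equals column i of T
    whenever j lies in the i-th part of the cycle partition."""
    B = np.zeros((T.shape[0], n))
    for i, part in enumerate(partition):
        for j in part:
            B[:, j] = T[:, i]          # weight sharing within each cycle
    return B

rng = np.random.default_rng(3)
n, k, r = 5, 2, 2
partition = [[0, 2, 3], [1, 4]]        # cycles of sigma = (1 3 4)(2 5)
A = rng.standard_normal((n, r))        # dense decoder
T = rng.standard_normal((r, k))
M = A @ shared_encoder(T, partition, n)
# M is sigma-invariant: columns 0, 2, 3 coincide, and columns 1, 4 coincide
print(np.allclose(M[:, 0], M[:, 2]), np.allclose(M[:, 1], M[:, 4]))  # True True
print(np.linalg.matrix_rank(M) <= r)   # True
```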
Proof.: Every matrix in the function space of the autoencoder is of the form \(AB\) such that the encoder factor \(B\in\mathcal{M}_{r\times n}\) has repeated columns according to \(\mathcal{P}(\sigma).\) Therefore, the product \(AB\) has the same repetition in its columns, i.e., \(AB\in\mathcal{I}_{n\times n}^{\sigma}\). Since \(\operatorname{rank}(AB)\leq\operatorname{rank}(B)\leq r\), we conclude that \(AB\in\mathcal{I}_{r,n\times n}^{\sigma}(\mathbb{R})\). For the converse direction, consider \(M\in\mathcal{I}_{r,n\times n}^{\sigma}(\mathbb{R})\). By Proposition 3.11, that matrix can be factorized as \(M=M_{1}\cdot(e_{i_{1}}|\cdots|e_{i_{n}})\), where \(M_{1}\in\mathcal{M}_{n\times k}\). If \(r=k\), that factorization is compatible with the autoencoder and we are done. Thus, it is left to consider the case \(r<k\). Note that the factor \((e_{i_{1}}|\cdots|e_{i_{n}})\) has full rank \(k\) by construction, since the standard basis vectors in the columns of that matrix correspond to the \(k\) cycles of \(\sigma\). Hence, we have that \(\operatorname{rank}(M_{1})=\operatorname{rank}(M)=r\). Therefore, we can factorize \(M_{1}=A_{1}B_{1}\), where \(A_{1}\in\mathcal{M}_{n\times r}\) and \(B_{1}\in\mathcal{M}_{r\times k}\). The factorization \(M=A_{1}\cdot(B_{1}\cdot(e_{i_{1}}|\cdots|e_{i_{n}}))\) is compatible with the autoencoder.

### Induced filtration of \(\mathcal{M}_{r,m\times n}\)

If an \(m\times n\) matrix \(M\) is invariant under \(\sigma\in\mathcal{S}_{n}\), then it is also invariant under every permutation \(\eta\in\mathcal{S}_{n}\) whose associated partition \(\mathcal{P}(\eta)\) of \([n]\) is a refinement \(\mathcal{P}(\eta)\prec\mathcal{P}(\sigma)\) of \(\mathcal{P}(\sigma)\). This induces a filtration \(\mathcal{I}_{m\times n}^{\bullet}\) of the variety \(\mathcal{M}_{m\times n}\), which is indexed by partitions of \([n]\), and by intersecting with \(\mathcal{M}_{r,m\times n}\), we obtain a filtration \(\mathcal{I}_{r,m\times n}^{\bullet}\) of the variety \(\mathcal{M}_{r,m\times n}\). The set \(\mathcal{P}([n])\) of partitions of \([n]\) equals \(\mathcal{S}_{n}/{\sim}\), where we identify \(\sigma_{1}\sim\sigma_{2}\) if and only if \(\mathcal{P}(\sigma_{1})=\mathcal{P}(\sigma_{2})\). Here, \(\mathcal{I}_{r,m\times n}^{\mathcal{P}}\) denotes any \(\mathcal{I}_{r,m\times n}^{\sigma}\) for which \(\mathcal{P}(\sigma)=\mathcal{P}\). As we saw earlier, the variety \(\mathcal{I}_{r,m\times n}^{\sigma}\) depends only on \(\mathcal{P}(\sigma)\), but not on \(\sigma\) itself, hence this notion is well-defined. Together with refinements of partitions, the set of partitions of \([n]\) is a partially ordered set. Define the category \(\underline{\operatorname{Part}}_{[n]}^{\prec}\) whose set of objects is \(\mathcal{P}([n])\), with a morphism from \(\mathcal{P}_{1}\) to \(\mathcal{P}_{2}\) whenever \(\mathcal{P}_{1}\prec\mathcal{P}_{2}\). By \(\underline{\operatorname{Subv}}_{\mathcal{M}_{r,m\times n}}\), we denote the category whose objects are subvarieties of \(\mathcal{M}_{r,m\times n}\), with the inclusion as morphism between \(U_{1},U_{2}\in\underline{\operatorname{Subv}}_{\mathcal{M}_{r,m\times n}}\) whenever \(U_{1}\subset U_{2}\). This formulation gives rise to the functor \[\underline{\operatorname{Part}}_{[n]}^{\prec}\longrightarrow\underline{\operatorname{Subv}}_{\mathcal{M}_{r,m\times n}},\quad\mathcal{P}\,\mapsto\,\mathcal{I}_{r,m\times n}^{\mathcal{P}}\,. \tag{3.13}\]

**Remark 3.16**.: The opposite category \(\underline{\operatorname{Part}}_{[n]}^{\prec,\mathrm{op}}\) of \(\underline{\operatorname{Part}}_{[n]}^{\prec}\) is \(\underline{\operatorname{Part}}_{[n]}^{\succ}\), i.e., partitions of \([n]\) with coarsenings \(\succ\) of partitions as morphisms. In this formulation, the finest common coarsening of partitions \(\mathcal{P}_{1},\dots,\mathcal{P}_{k}\) then is their inverse limit. \(\diamond\)

## 4 Equivariance under cyclic subgroups of \(\mathcal{S}_{n}\)

In this section, we address equivariance under cyclic subgroups \(G=\langle\sigma\rangle\leq\mathcal{S}_{n}\). We explore some fundamental algebraic properties of \(\mathcal{E}_{r,n\times n}^{\sigma}\subset\mathcal{M}_{r,n\times n}\) such as its number of irreducible components, their dimension and degree as well as their singular points.

### Characterizing equivariance

#### 4.1.1 In cycle decomposition

We consider a permutation \(\sigma=\pi_{1}\circ\cdots\circ\pi_{k}\in\mathcal{S}_{n}\) with a decomposition into pairwise disjoint cycles of lengths \(\ell_{1},\dots,\ell_{k}\).
We reorder the entries in \(\mathbb{R}^{n}\) as in Step 1 of Procedure 2.11 such that the permutation matrix \(P_{\sigma}\) becomes block diagonal with cyclic blocks as in (2.20). Those square blocks have sizes \(\ell_{1},\ldots,\ell_{k}\). We are interested in matrices \(M\in\mathcal{M}_{n\times n}\) that are equivariant under \(\sigma\). We have already characterized those matrices after a base change that diagonalized \(P_{\sigma}\) in Lemma 2.13. However, in general, that base change involves complex Vandermonde matrices. Since we are mainly interested in real matrices \(M\), we now characterize the equivariant matrices in the basis where \(P_{\sigma}\) is a block diagonal matrix, with cyclic diagonal blocks of sizes \(\ell_{1},\ldots,\ell_{k}\). For that, we divide \(M\in\mathcal{M}_{n\times n}\) into blocks following the same pattern: \(M\) has square diagonal blocks \(M^{(ii)}\) of size \(\ell_{i}\times\ell_{i}\) and rectangular off-diagonal blocks \(M^{(ij)}\) of size \(\ell_{i}\times\ell_{j}\). We now show that the equivariance of \(M\) under \(\sigma\) means that its blocks have to be _(rectangular) circulant matrices_. We can observe this property in our previous example (2.13). We call a (possibly non-square) matrix _circulant_ if each row is a copy of the previous row, cyclically shifted one step to the right, and also each column is a copy of the previous column, cyclically shifted one step downwards. Some examples of such matrices are shown in Equation (4.1). A circulant matrix of size \(\ell_{i}\times\ell_{j}\) has at most \(\gcd(\ell_{i},\ell_{j})\) different entries. \[\begin{pmatrix}\alpha^{(ij)}&\beta^{(ij)}&\gamma^{(ij)}\\ \gamma^{(ij)}&\alpha^{(ij)}&\beta^{(ij)}\\ \beta^{(ij)}&\gamma^{(ij)}&\alpha^{(ij)}\end{pmatrix},\quad\begin{pmatrix}\alpha^{(ij)}&\alpha^{(ij)}&\alpha^{(ij)}\\ \alpha^{(ij)}&\alpha^{(ij)}&\alpha^{(ij)}\\ \alpha^{(ij)}&\alpha^{(ij)}&\alpha^{(ij)}\\ \alpha^{(ij)}&\alpha^{(ij)}&\alpha^{(ij)}\end{pmatrix}. \tag{4.1}\]

**Proposition 4.1**.: _The matrix \(M\in\mathcal{M}_{n\times n}\) is equivariant under \(\sigma\) if and only if each block \(M^{(ij)}\) of \(M\) is a (possibly non-square) circulant matrix._

Proof.: Since \(P_{\sigma}\) is a block diagonal matrix with cyclic blocks \(C_{1},\ldots,C_{k}\), the equivariance condition \(MP=PM\) means that \(M^{(ij)}C_{j}=C_{i}M^{(ij)}\) for all \(i,j\). The multiplication by \(C_{i}\) from the left cyclically permutes the rows of \(M^{(ij)}\), and the multiplication by \(C_{j}\) from the right cyclically permutes the columns of \(M^{(ij)}\). Since the resulting matrices need to coincide, it follows that the block \(M^{(ij)}\) has to be circulant.

#### 4.1.2 Irreducible components of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{C})\)

Our focus now shifts to exploring the algebraic properties arising from the intersection of \(\mathcal{M}_{r,n\times n}\) with \(C(P_{\sigma})\). Both \(\mathcal{M}_{r,n\times n}\) and \(C(P_{\sigma})\) are algebraic sets, meaning they can be characterized by polynomial equations. Their intersection \(\mathcal{E}^{\sigma}_{r,n\times n}=\mathcal{M}_{r,n\times n}\cap C(P_{\sigma})\) inherits this property and is also an algebraic set. In the following statement, we use the notation from Lemma 2.14.
**Proposition 4.2**.: _There is a one-to-one correspondence between the irreducible components of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{C})\) and the integer solutions \(\mathbf{r}=(r_{l,m})\) of_ \[\sum_{l\geq 1}\sum_{m\,\in\,(\mathbb{Z}/l\mathbb{Z})^{\times}}r_{l,m}\ =\ r\,,\quad\text{where }\,0\leq r_{l,m}\leq d_{l}\,. \tag{4.2}\] _The dimension of the irreducible component corresponding to such an integer solution \(\mathbf{r}\) is_ \[\sum_{l\geq 1}\sum_{m\,\in\,(\mathbb{Z}/l\mathbb{Z})^{\times}}(2d_{l}-r_{l,m})\cdot r_{l,m}\,. \tag{4.3}\]

Proof.: By Lemma 2.14, every matrix \(M\in C(P_{\sigma})\) is similar to a complex block-diagonal matrix \(B\) with \(\varphi(l)=|(\mathbb{Z}/l\mathbb{Z})^{\times}|\) many blocks of size \(d_{l}\times d_{l}\) for every \(l\). We will denote these blocks by \(B_{l,m}\) for \(m\in(\mathbb{Z}/l\mathbb{Z})^{\times}\). Imposing a rank constraint on the matrix \(M\) affects the rank of the diagonal blocks of \(B\). Hence, if \(r_{l,m}\) is the rank of the block \(B_{l,m}\), then \(M\) has rank \(r\) if and only if (4.2) holds. This implies that the number of different irreducible components of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{C})\) is exactly the number of solution vectors to (4.2), where \(0\leq r_{l,m}\leq d_{l}\). The second part of the proposition follows directly from (2.15).

**Example 4.3**.: Let \(\sigma\) again denote the clockwise rotation by \(90\) degrees on images with \(3\times 3\) pixels. The numbers \(d_{l}\) are computed in Example 2.18. For the permutation matrix in (2.11), if \(r=3\) then the number of irreducible components is equal to the number of non-negative integer solutions of the equation \(r_{1,1}+r_{2,1}+r_{4,1}+r_{4,3}=3\), where \(r_{1,1}\leq 3=d_{1}\), \(r_{2,1}\leq 2=d_{2}\), and \(r_{4,1},r_{4,3}\leq 2=d_{4}\). With the stars and bars formula, one finds that there are \(\binom{6}{3}-3=17\) solutions, and hence \(\mathcal{E}^{\sigma}_{3,9\times 9}(\mathbb{C})\) has \(17\) irreducible components. Six of those have dimension \(11\), five have dimension \(9\), and the remaining six have dimension \(7\). The six maximal-dimensional components correspond to the integer solutions \((r_{1,1},r_{2,1},r_{4,1},r_{4,3})\in\{(2,1,0,0),\,(2,0,1,0),\,(2,0,0,1),\,(1,1,1,0),\,(1,1,0,1),\,(1,0,1,1)\}\). \(\diamond\)

These discussions imply that, in contrast to the case of invariance, autoencoders are not well-suited to parameterize equivariant functions: for a rank constraint \(r<n\), \(\mathcal{E}^{\sigma}_{r,n\times n}\) has many components; the function space of an autoencoder would cover only a single one of them. Not all irreducible components of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{C})\) have to appear in the real locus \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{R})\). We will see this later in Example 4.8.

**Proposition 4.4**.: _Let \(\mathbf{r}\) be an integer solution of (4.2). The degree of the corresponding irreducible component of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{C})\) is equal to_ \[\prod_{l}\prod_{m\,\in\,(\mathbb{Z}/l\mathbb{Z})^{\times}}\prod_{i=0}^{d_{l}-r_{l,m}-1}\frac{(d_{l}+i)!\,\cdot\,\!i!}{(r_{l,m}+i)!\,\cdot\,\!(d_{l}-r_{l,m}+i)!}. \tag{4.4}\]

Proof.: After the full base change in Procedure 2.11, the irreducible component of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{C})\) corresponding to \(\mathbf{r}\) is the set of block diagonal matrices where the block with label \((l,m)\) has rank \(r_{l,m}\). Now (4.4) follows from (2.16).
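The component count and the dimensions in Example 4.3 can be reproduced by directly enumerating the solutions of (4.2); a small sketch (ours; plain Python):

```python
from itertools import product

# (l, m) labels and block sizes d_l for the rotation sigma of Example 4.3:
labels = [(1, 1), (2, 1), (4, 1), (4, 3)]
d = {(1, 1): 3, (2, 1): 2, (4, 1): 2, (4, 3): 2}
r = 3

components = [rv for rv in product(*(range(d[lab] + 1) for lab in labels))
              if sum(rv) == r]
dims = [sum((2 * d[lab] - ri) * ri for lab, ri in zip(labels, rv))
        for rv in components]
print(len(components))              # 17 irreducible components
print(sorted(dims, reverse=True))   # six 11s, five 9s, six 7s
```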
**Proposition 4.5**.: _Let \(1\leq r<n\). The locus of singular points of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{K})\) is \(\mathcal{E}^{\sigma}_{r-1,n\times n}(\mathbb{K})\)._

Proof.: Again, after Procedure 2.11, the irreducible component \(\mathcal{E}^{(\mathbf{r})}\) of \(\mathcal{E}^{\sigma}_{r,n\times n}(\mathbb{C})\) corresponding to an integer solution \(\mathbf{r}\) of (4.2) is the set of block-diagonal matrices such that each block has rank at most \(r_{l,m}\). Therefore, by Lemma 2.4, the singular points of \(\mathcal{E}^{(\mathbf{r})}\) are exactly \(\mathcal{E}^{(\mathbf{r})}\cap\mathcal{E}^{\sigma}_{r-1,n\times n}\). Finally, since the intersection of two distinct components \(\mathcal{E}^{(\mathbf{r})}\) and \(\mathcal{E}^{(\mathbf{r}^{\prime})}\) is a subset of \(\mathcal{E}^{\sigma}_{r-1,n\times n}\), we get that \[\operatorname{Sing}(\mathcal{E}^{\sigma}_{r,n\times n})\,=\,\bigcup_{\mathbf{r}}\left(\mathcal{E}^{(\mathbf{r})}\cap\mathcal{E}^{\sigma}_{r-1,n\times n}\right)\,=\,\mathcal{E}^{\sigma}_{r-1,n\times n}\,,\] concluding the proof.

### Parameterizing equivariance

We observe already in simple examples that the factors of an equivariant linear map themselves do _not_ need to be equivariant.

**Example 4.6**.: Let \(\sigma=(1\,2)\in\mathcal{S}_{3}\) and \(M\) be the invertible matrix \[M\,=\,\begin{pmatrix}1&2&0\\ 2&1&0\\ 3&3&4\end{pmatrix}.\] Indeed, \(MP_{\sigma}=P_{\sigma}M\), hence \(M\) is equivariant under \(\sigma\). Let \(M=QR\) denote the QR decomposition of \(M\); uniqueness of the decomposition is obtained by imposing that \(R\) has positive diagonal entries. One can check that neither \(Q\) nor \(R\) is equivariant under \(\sigma\). \(\diamond\)
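The claim of Example 4.6 is quickly confirmed numerically; a small numpy check (with the sign normalization making the diagonal of \(R\) positive):

```python
import numpy as np

P = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # P_sigma for sigma = (1 2)
M = np.array([[1, 2, 0], [2, 1, 0], [3, 3, 4]], dtype=float)
assert np.allclose(M @ P, P @ M)  # M is equivariant under sigma

Q, R = np.linalg.qr(M)
s = np.sign(np.diag(R))           # normalize so that R has positive diagonal
Q, R = Q * s, (R.T * s).T
print(np.allclose(Q @ P, P @ Q), np.allclose(R @ P, P @ R))  # False False
```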
**Remark 4.7**.: The question of whether the individual layers of an equivariant autoencoder are equivariant is not well-posed in its naive form. A priori, the group \(G\) acts only on the in- and output space of \(f_{\theta}\colon\mathbb{R}^{n}\to\mathbb{R}^{r}\to\mathbb{R}^{n}\). To address questions about the equivariance of the two individual layers, one would first need to define an action of \(G\) on \(\mathbb{R}^{r}\), in a way that is "meaningful" for the considered application. \(\diamond\)

In Section 3.3, we described how linear autoencoders are well-suited to parameterize permutation-invariant maps. We leave the general theory for the case of equivariance to future work and consider here the example of rotation-equivariant maps of rank at most one.

**Example 4.8** (Parameterization of \(\mathcal{E}^{\sigma}_{1,9\times 9}\)).: Let \(\sigma\in\mathcal{S}_{9}\) again denote the rotation of a \(3\times 3\) picture by \(90\) degrees. Denote by \(P\) the matrix obtained by applying Step 1 of Procedure 2.11 to \(P_{\sigma}\), i.e., \(P\) is the block diagonal matrix \(\operatorname{diag}(C_{4},C_{4},C_{1})\). Its eigenvalues are \(\{1,1,1,-1,-1,i,i,-i,-i\}\), here denoted as a multiset together with their multiplicities. We chop \(M\) into blocks of sizes determined by the blocks of \(P\), i.e., into blocks of size pattern \[\left(\begin{array}{c|c|c}4\times 4&4\times 4&4\times 1\\ \hline 4\times 4&4\times 4&4\times 1\\ \hline 1\times 4&1\times 4&1\times 1\end{array}\right).\] Each of the blocks \(M^{(i,j)}\) is circulant, as spelled out in (2.13). We now describe the set of matrices \(M\) which commute with \(P\) and are of rank at most \(1\). Imposing \(r=1\) gives rise to three irreducible components of \(\mathcal{E}^{\sigma}_{1,9\times 9}(\mathbb{C})\) of dimension \(3\) and one of dimension \(5\), according to Proposition 4.2. An explicit analysis reveals that the general matrices in the four components of \(\mathcal{E}^{\sigma}_{1,9\times 9}(\mathbb{C})\) take the following forms for scalars \(\alpha,\beta,\gamma,\lambda,\mu\). The component of dimension \(5\), corresponding to the eigenvalue \(1\), consists of the matrices

\[\left(\begin{array}{cccc|cccc|c}\alpha&\alpha&\alpha&\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\mu\alpha\\ \alpha&\alpha&\alpha&\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\mu\alpha\\ \alpha&\alpha&\alpha&\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\mu\alpha\\ \alpha&\alpha&\alpha&\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\lambda\alpha&\mu\alpha\\ \hline\beta&\beta&\beta&\beta&\lambda\beta&\lambda\beta&\lambda\beta&\lambda\beta&\mu\beta\\ \beta&\beta&\beta&\beta&\lambda\beta&\lambda\beta&\lambda\beta&\lambda\beta&\mu\beta\\ \beta&\beta&\beta&\beta&\lambda\beta&\lambda\beta&\lambda\beta&\lambda\beta&\mu\beta\\ \beta&\beta&\beta&\beta&\lambda\beta&\lambda\beta&\lambda\beta&\lambda\beta&\mu\beta\\ \hline\gamma&\gamma&\gamma&\gamma&\lambda\gamma&\lambda\gamma&\lambda\gamma&\lambda\gamma&\mu\gamma\end{array}\right),\]

the component of dimension \(3\) corresponding to the eigenvalue \(-1\) consists of the matrices

\[\left(\begin{array}{cccc|cccc|c}\alpha&-\alpha&\alpha&-\alpha&\lambda\alpha&-\lambda\alpha&\lambda\alpha&-\lambda\alpha&0\\ -\alpha&\alpha&-\alpha&\alpha&-\lambda\alpha&\lambda\alpha&-\lambda\alpha&\lambda\alpha&0\\ \alpha&-\alpha&\alpha&-\alpha&\lambda\alpha&-\lambda\alpha&\lambda\alpha&-\lambda\alpha&0\\ -\alpha&\alpha&-\alpha&\alpha&-\lambda\alpha&\lambda\alpha&-\lambda\alpha&\lambda\alpha&0\\ \hline\beta&-\beta&\beta&-\beta&\lambda\beta&-\lambda\beta&\lambda\beta&-\lambda\beta&0\\ -\beta&\beta&-\beta&\beta&-\lambda\beta&\lambda\beta&-\lambda\beta&\lambda\beta&0\\ \beta&-\beta&\beta&-\beta&\lambda\beta&-\lambda\beta&\lambda\beta&-\lambda\beta&0\\ -\beta&\beta&-\beta&\beta&-\lambda\beta&\lambda\beta&-\lambda\beta&\lambda\beta&0\\ \hline 0&0&0&0&0&0&0&0&0\end{array}\right),\]

and the two remaining components of dimension \(3\), corresponding to the eigenvalues \(i\) and \(-i\), consist of the matrices

\[\left(\begin{array}{cccc|cccc|c}\alpha&-i\alpha&-\alpha&i\alpha&\lambda\alpha&-i\lambda\alpha&-\lambda\alpha&i\lambda\alpha&0\\ i\alpha&\alpha&-i\alpha&-\alpha&i\lambda\alpha&\lambda\alpha&-i\lambda\alpha&-\lambda\alpha&0\\ -\alpha&i\alpha&\alpha&-i\alpha&-\lambda\alpha&i\lambda\alpha&\lambda\alpha&-i\lambda\alpha&0\\ -i\alpha&-\alpha&i\alpha&\alpha&-i\lambda\alpha&-\lambda\alpha&i\lambda\alpha&\lambda\alpha&0\\ \hline\beta&-i\beta&-\beta&i\beta&\lambda\beta&-i\lambda\beta&-\lambda\beta&i\lambda\beta&0\\ i\beta&\beta&-i\beta&-\beta&i\lambda\beta&\lambda\beta&-i\lambda\beta&-\lambda\beta&0\\ -\beta&i\beta&\beta&-i\beta&-\lambda\beta&i\lambda\beta&\lambda\beta&-i\lambda\beta&0\\ -i\beta&-\beta&i\beta&\beta&-i\lambda\beta&-\lambda\beta&i\lambda\beta&\lambda\beta&0\\ \hline 0&0&0&0&0&0&0&0&0\end{array}\right)\]

and

\[\left(\begin{array}{cccc|cccc|c}\alpha&i\alpha&-\alpha&-i\alpha&\lambda\alpha&i\lambda\alpha&-\lambda\alpha&-i\lambda\alpha&0\\ -i\alpha&\alpha&i\alpha&-\alpha&-i\lambda\alpha&\lambda\alpha&i\lambda\alpha&-\lambda\alpha&0\\ -\alpha&-i\alpha&\alpha&i\alpha&-\lambda\alpha&-i\lambda\alpha&\lambda\alpha&i\lambda\alpha&0\\ i\alpha&-\alpha&-i\alpha&\alpha&i\lambda\alpha&-\lambda\alpha&-i\lambda\alpha&\lambda\alpha&0\\ \hline\beta&i\beta&-\beta&-i\beta&\lambda\beta&i\lambda\beta&-\lambda\beta&-i\lambda\beta&0\\ -i\beta&\beta&i\beta&-\beta&-i\lambda\beta&\lambda\beta&i\lambda\beta&-\lambda\beta&0\\ -\beta&-i\beta&\beta&i\beta&-\lambda\beta&-i\lambda\beta&\lambda\beta&i\lambda\beta&0\\ i\beta&-\beta&-i\beta&\beta&i\lambda\beta&-\lambda\beta&-i\lambda\beta&\lambda\beta&0\\ \hline 0&0&0&0&0&0&0&0&0\end{array}\right).\]

Hence, \(\mathcal{E}^{\sigma}_{1,9\times 9}(\mathbb{C})\) has four complex irreducible components, but only the first two of them also appear in the real locus \(\mathcal{E}^{\sigma}_{1,9\times 9}(\mathbb{R})\). Note that such simple explicit descriptions of the components are a special occurrence for imposing \(r=1\). \(\diamond\)
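As a sanity check of the first displayed form: it is the rank-one matrix \(uv^{\top}\) with \(u\) in the eigenvalue-one eigenspace of \(P\) and \(v\) in that of \(P^{\top}\), so it commutes with \(P\). A short numpy verification, with arbitrary parameter values:

```python
import numpy as np

def C(l):
    """The l x l cyclic permutation matrix."""
    return np.roll(np.eye(l), 1, axis=0)

P = np.block([[C(4), np.zeros((4, 4)), np.zeros((4, 1))],
              [np.zeros((4, 4)), C(4), np.zeros((4, 1))],
              [np.zeros((1, 4)), np.zeros((1, 4)), np.eye(1)]])

a, b, g, lam, mu = 1.3, -0.7, 0.4, 2.0, -1.1
u = np.array([a, a, a, a, b, b, b, b, g])
v = np.array([1, 1, 1, 1, lam, lam, lam, lam, mu])
M = np.outer(u, v)                     # the dimension-5 component's general point
assert np.allclose(M @ P, P @ M)       # equivariance
assert np.linalg.matrix_rank(M) == 1   # rank one
```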
### Induced filtration of \(\mathcal{M}_{r,n\times n}\)

Let \(\sigma\in\mathcal{S}_{n}\). Whenever a matrix is equivariant under \(\sigma\), it is also equivariant under any power of \(\sigma\). Hence, \(\mathcal{E}^{\sigma^{k}}_{n\times n}\subset\mathcal{E}^{\sigma^{l\cdot k}}_{n\times n}\) for all \(k,l\in\mathbb{N}\). Therefore, any \(\sigma\in\mathcal{S}_{n}\) gives rise to an increasing filtration \(\mathcal{E}^{\sigma^{\bullet}}_{n\times n}\) of \(\mathcal{M}_{n\times n}\) (and \(\mathcal{E}^{\sigma^{\bullet}}_{r,n\times n}\) of \(\mathcal{M}_{r,n\times n}\)). This filtration is finite, since \(\sigma^{l\cdot k}=\mathrm{id}\) whenever \(l\cdot k\) is a multiple of \(\mathrm{ord}(\sigma)\), and hence \(\mathcal{E}^{\sigma^{l\cdot k}}_{n\times n}=\mathcal{E}^{\mathrm{id}}_{n\times n}=\mathcal{M}_{n\times n}\) for such \(l\) and \(k\). By intersecting with \(\mathcal{M}_{r,n\times n}\), we obtain analogous statements for \(\mathcal{E}^{\sigma^{\bullet}}_{r,n\times n}\).

### Example: equivariance for non-cyclic groups

We here revisit equivariance for \(3\times 3\) pictures. Characterizing equivariance for non-cyclic permutation groups is more complicated than the cyclic case. As a case study, we impose equivariance both under rotation and under reflection, i.e., we consider the group \(G=\langle\sigma,\chi\rangle\) generated by the clockwise rotation \(\sigma\) by \(90\) degrees as in (2.1) and the reflection \[\chi\colon\,\begin{pmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{pmatrix}\,\mapsto\,\begin{pmatrix}a_{13}&a_{12}&a_{11}\\ a_{23}&a_{22}&a_{21}\\ a_{33}&a_{32}&a_{31}\end{pmatrix}. \tag{4.5}\] We will again identify \(\mathbb{R}^{3\times 3}\cong\mathbb{R}^{9}\) via \[\begin{pmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{pmatrix}\mapsto\begin{pmatrix}a_{11}&a_{13}&a_{33}&a_{31}&a_{12}&a_{23}&a_{32}&a_{21}&a_{22}\end{pmatrix}^{\top}. \tag{4.6}\] Then \(\chi(A)\) is represented by the vector \((a_{13}\ a_{11}\ a_{31}\ a_{33}\ a_{12}\ a_{21}\ a_{32}\ a_{23}\ a_{22})^{\top}\). Under this identification, the reflection is \(\chi=(1\,2)(3\,4)(6\,8)\in\mathcal{S}_{9}\) and is represented by the matrix \[P_{\chi}\,=\,\left(\begin{array}{cccc|cccc|c}0&1&0&0&&&&&\\ 1&0&0&0&&&&&\\ 0&0&0&1&&&&&\\ 0&0&1&0&&&&&\\ \hline&&&&1&0&0&0&\\ &&&&0&0&0&1&\\ &&&&0&0&1&0&\\ &&&&0&1&0&0&\\ \hline&&&&&&&&1\end{array}\right). \tag{4.7}\] Hence, equivariance of a matrix \(M=(m_{ij})_{i,j}\in\mathcal{M}_{9\times 9}\) under both \(\sigma\) and \(\chi\), i.e., \(MP_{\sigma}=P_{\sigma}M\) and \(MP_{\chi}=P_{\chi}M\), implies that \(M\) has to be of the form \[M\,=\,\left(\begin{array}{cccc|cccc|c}\alpha_{1}&\alpha_{2}&\alpha_{3}&\alpha_{2}&\beta_{1}&\beta_{2}&\beta_{2}&\beta_{1}&\varepsilon_{3}\\ \alpha_{2}&\alpha_{1}&\alpha_{2}&\alpha_{3}&\beta_{1}&\beta_{1}&\beta_{2}&\beta_{2}&\varepsilon_{3}\\ \alpha_{3}&\alpha_{2}&\alpha_{1}&\alpha_{2}&\beta_{2}&\beta_{1}&\beta_{1}&\beta_{2}&\varepsilon_{3}\\ \alpha_{2}&\alpha_{3}&\alpha_{2}&\alpha_{1}&\beta_{2}&\beta_{2}&\beta_{1}&\beta_{1}&\varepsilon_{3}\\ \hline\gamma_{1}&\gamma_{1}&\gamma_{3}&\gamma_{3}&\delta_{1}&\delta_{2}&\delta_{3}&\delta_{2}&\varepsilon_{4}\\ \gamma_{3}&\gamma_{1}&\gamma_{1}&\gamma_{3}&\delta_{2}&\delta_{1}&\delta_{2}&\delta_{3}&\varepsilon_{4}\\ \gamma_{3}&\gamma_{3}&\gamma_{1}&\gamma_{1}&\delta_{3}&\delta_{2}&\delta_{1}&\delta_{2}&\varepsilon_{4}\\ \gamma_{1}&\gamma_{3}&\gamma_{3}&\gamma_{1}&\delta_{2}&\delta_{3}&\delta_{2}&\delta_{1}&\varepsilon_{4}\\ \hline\varepsilon_{1}&\varepsilon_{1}&\varepsilon_{1}&\varepsilon_{1}&\varepsilon_{2}&\varepsilon_{2}&\varepsilon_{2}&\varepsilon_{2}&\varepsilon_{5}\end{array}\right)\,. \tag{4.8}\] Therefore, \(\dim(\mathcal{E}^{G}_{9\times 9})=81-66=2\cdot 3+2\cdot 2+5\cdot 1=15\).
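These dimension counts can be verified numerically by solving the linear commutation constraints; the numpy sketch below (helper names are ours) also includes the row-shift permutation that is introduced in the next paragraph.

```python
import numpy as np

def perm_matrix(p):
    """Permutation matrix sending basis vector e_j to e_{p[j]} (0-indexed)."""
    P = np.zeros((len(p), len(p)))
    P[p, np.arange(len(p))] = 1
    return P

def equivariant_dim(perms, n=9):
    """dim of {M : M P = P M for all given P}, via vec(MP - PM) = 0."""
    I = np.eye(n)
    K = np.vstack([np.kron(P.T, I) - np.kron(I, P) for P in perms])
    return n * n - np.linalg.matrix_rank(K)

# 0-indexed cycles in the coordinates of (4.6):
rot   = perm_matrix([1, 2, 3, 0, 5, 6, 7, 4, 8])  # sigma: (1 2 3 4)(5 6 7 8)
refl  = perm_matrix([1, 0, 3, 2, 4, 7, 6, 5, 8])  # chi:   (1 2)(3 4)(6 8)
shift = perm_matrix([4, 0, 3, 6, 1, 7, 2, 8, 5])  # row shift: (1 5 2)(3 4 7)(6 8 9)

print(equivariant_dim([rot]))               # 21, cf. (2.13)
print(equivariant_dim([rot, refl]))         # 15, matching (4.8)
print(equivariant_dim([rot, refl, shift]))  # 3,  matching (4.9) below
```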
In comparison to matrices that are required to be equivariant under the rotation \(\sigma\) only (see (2.13)), the entries \(\alpha_{4}\), \(\beta_{3}\), \(\beta_{4}\), \(\gamma_{2}\), \(\gamma_{4}\), and \(\delta_{4}\) can no longer be chosen freely, which drops the dimension by \(6\). Let us add the action of another permutation on \(3\times 3\) pictures, namely shifting each row by one to the right, i.e., for \(i=1,2,3\), \(a_{i,j}\mapsto a_{i,j+1}\) for \(j=1,2\), and \(a_{i,3}\mapsto a_{i,1}\). In the choice from above, the shift corresponds to the permutation \((1\,5\,2)(3\,4\,7)(6\,8\,9)\in\mathcal{S}_{9}\). All \(9\times 9\) matrices that are equivariant under rotation, reflection, and shift are of the following form, with only three degrees of freedom \(\alpha_{1},\alpha_{2},\alpha_{3}\):

\[M\,=\,\left(\begin{array}{cccc|cccc|c}\alpha_{1}&\alpha_{2}&\alpha_{3}&\alpha_{2}&\alpha_{2}&\alpha_{3}&\alpha_{3}&\alpha_{2}&\alpha_{3}\\ \alpha_{2}&\alpha_{1}&\alpha_{2}&\alpha_{3}&\alpha_{2}&\alpha_{2}&\alpha_{3}&\alpha_{3}&\alpha_{3}\\ \alpha_{3}&\alpha_{2}&\alpha_{1}&\alpha_{2}&\alpha_{3}&\alpha_{2}&\alpha_{2}&\alpha_{3}&\alpha_{3}\\ \alpha_{2}&\alpha_{3}&\alpha_{2}&\alpha_{1}&\alpha_{3}&\alpha_{3}&\alpha_{2}&\alpha_{2}&\alpha_{3}\\ \hline\alpha_{2}&\alpha_{2}&\alpha_{3}&\alpha_{3}&\alpha_{1}&\alpha_{3}&\alpha_{2}&\alpha_{3}&\alpha_{2}\\ \alpha_{3}&\alpha_{2}&\alpha_{2}&\alpha_{3}&\alpha_{3}&\alpha_{1}&\alpha_{3}&\alpha_{2}&\alpha_{2}\\ \alpha_{3}&\alpha_{3}&\alpha_{2}&\alpha_{2}&\alpha_{2}&\alpha_{3}&\alpha_{1}&\alpha_{3}&\alpha_{2}\\ \alpha_{2}&\alpha_{3}&\alpha_{3}&\alpha_{2}&\alpha_{3}&\alpha_{2}&\alpha_{3}&\alpha_{1}&\alpha_{2}\\ \hline\alpha_{3}&\alpha_{3}&\alpha_{3}&\alpha_{3}&\alpha_{2}&\alpha_{2}&\alpha_{2}&\alpha_{2}&\alpha_{1}\end{array}\right). \tag{4.9}\]

To understand the general behavior, one will need to engage in careful combinatorics of this kind.

## 5 Conclusion and outlook

We investigated linear neural networks through the lens of algebraic geometry, with an emphasis on linear autoencoders. Their function spaces are determinantal varieties \(\mathcal{M}_{r,n\times n}\) in a natural way. We considered permutation groups \(G\) and fully characterized the elements of the function space which are invariant under the action of \(G\). They form an algebraic variety \(\mathcal{I}^{G}_{r,n\times n}\subset\mathcal{M}_{r,n\times n}\) for which we computed the dimension, singular points, degree, as well as the ED degree. The latter is a measure of the complexity of training invariant networks, as well as of finding nearest invariant networks post-training. We proved implications for the design of neural networks, such as a dimensional constraint on the middle layer of the autoencoder, as well as a weight-sharing property of the encoder. Moreover, we proved that all \(G\)-invariant functions can be learned by a linear autoencoder. For equivariance, we treated cyclic subgroups \(G=\langle\sigma\rangle\) of permutation groups. Also in this case, the resulting part of the function space is an algebraic variety \(\mathcal{E}^{\sigma}_{r,n\times n}\subset\mathcal{M}_{r,n\times n}\), for which we computed the dimension, degree, and singular points over \(\mathbb{C}\). The computation of the ED degree is more intricate in the case of equivariance, as is the generalization to non-cyclic groups. We plan to tackle both problems in follow-up work.
We constructed a parameterization of \(\mathcal{E}^{\sigma}_{1,9\times 9}\) as a starting point and leave the parameterization of \(\mathcal{E}^{G}_{r,n\times n}\) for future work. One should also address groups other than permutation groups, such as non-discrete groups. Another natural step is to generalize the network architecture to a larger number of layers, as well as to allow non-trivial activation functions such as ReLU. For the latter, we expect that tropical expertise will be helpful to study the resulting geometry of the function space, which we will tackle in future work. Having understood the geometry of the function spaces, one should also investigate the types of critical points arising during training and how they compare to those of networks without imposed equi- or invariance.

**Acknowledgments.** KK and ALS were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation.
2301.00181
Smooth Mathematical Function from Compact Neural Networks
This paper addresses the smooth approximation of functions by neural networks (NNs). Mathematical or physical functions can be replaced by NN models through regression. In this study, we obtain NNs that generate highly accurate and highly smooth functions, composed of only a few weight parameters, by discussing several topics about regression. First, we reinterpret the inside of NNs for regression; consequently, we propose a new activation function, the integrated sigmoid linear unit (ISLU). Then, the special characteristics of metadata for regression, which differ from those of other data such as images or sound, are discussed with a view to improving the performance of neural networks. Finally, a simple hierarchical NN that generates models substituting for mathematical functions is presented, and the new batch concept ``meta-batch", which improves the performance of NNs several times over, is introduced. The new activation function, the meta-batch method, the features of numerical data, meta-augmentation with metaparameters, and a structure of NN generating a compact multi-layer perceptron (MLP) are essential in this study.
I. K. Hong
2022-12-31T11:33:24Z
http://arxiv.org/abs/2301.00181v1
# Smooth Mathematical Function from Compact Neural Networks

###### Abstract

This paper addresses the smooth approximation of functions by neural networks (NNs). Mathematical or physical functions can be replaced by NN models through regression. In this study, we obtain NNs that generate highly accurate and highly smooth functions, composed of only a few weight parameters, by discussing several topics about regression. First, we reinterpret the inside of NNs for regression; consequently, we propose a new activation function, the integrated sigmoid linear unit (ISLU). Then, the special characteristics of metadata for regression, which differ from those of other data such as images or sound, are discussed with a view to improving the performance of neural networks. Finally, a simple hierarchical NN that generates models substituting for mathematical functions is presented, and the new batch concept "meta-batch", which improves the performance of NNs several times over, is introduced. The new activation function, the meta-batch method, the features of numerical data, meta-augmentation with metaparameters, and a structure of NN generating a compact multi-layer perceptron (MLP) are essential in this study.

smooth function approximation, artificial intelligence, neural network, compactness, smoothness, activation function, batch

## I Introduction

In many fields, such as astronomy, physics, and economics, one may want to obtain, through regression on fairly accurate numerical data, a general function that fits a dataset ([1; 2; 3; 4]). The problem of smoothly approximating and inferring general functions using neural networks (NNs) has been considered in some of the literature. However, there is insufficient research on using NNs to completely replace ideal, highly smooth mathematical functions that are precise enough to be problem-free when a simulation is performed. This study aims to completely replace such ideal mathematical functions. Suppose a model \(M(X)\) is developed by regression on a dataset using an NN. \(M(X)\) for input \(X\) can be thought of as a replacement of a mathematical function \(f(X)\). In this study, such an NN is called a "_neural function (NF)_": a mathematical function created by an NN. The components of an analytic mathematical function can be analyzed using a series expansion or other methods, whereas this is difficult for an NF. In this study, we _created "highly accurate" and "highly smooth" NFs with a "few parameters"_ using metadata. In particular, we combined _a new activation function, a meta-batch method, and a weight-generating network (WGN)_ to realize the desired performance. The major contributions of this study can be summarized as follows.

* We dissected and interpreted the middle layers of NNs. The outputs of each layer are considered basis functions for the next layer; from this interpretation, we proposed a _new activation function_, the integrated sigmoid linear unit (ISLU), suitable for regression.
* The characteristics and advantages of metadata for regression problems were investigated. A training technique with _fictitious metaparameters and data augmentation_, which significantly improves performance, was introduced. It was also shown that, for regression problems, _the function values at specific locations_ can be used as metaparameters representing the characteristics of a task.
* NN structures that can _generate compact_1 _NFs_ for each task from metaparameters were investigated, and a new batch concept, the _'meta-batch'_, that can be used with such structures was introduced.
Footnote 1: Comprising few parameters

## II NNs for regression

Let us discuss a simple but uncommon interpretation of regression with a multilayer perceptron (MLP). What do the outputs of each layer of an MLP mean? They can be seen as _basis functions that determine the function to be input to the next layer_. The input of the (\(i+1\))th layer can be expressed as follows: \[x_{j}^{i+1}=\sum_{k}w_{j,k}^{i}*M_{k}^{i}(x_{0})+b_{j}, \tag{1}\] where \(x_{0}\) denotes the input of the first layer, \(w_{j,k}^{i}\) denotes the weight that connects the \(k\)th node of the \(i\)th layer to the \(j\)th node of the (\(i+1\))th layer, and \(M_{k}^{i}\) denotes the model comprising the \(0\)th to \(i\)th layers with the \(k\)th node of the \(i\)th layer as its output. This is similar to the expression \(f(x)=\sum_{j}w_{j}\phi_{j}(x)+b\) of the radial basis function (RBF) kernel method. Clearly, the outputs of each layer act as basis functions for the next layer. Figure 2 shows the outputs of each layer of an MLP that learned the dataset \(D=\{(x_{i},y_{i})\,|\,y=0.2(x-1)x(x+1.5),\ x\in[-2,2]\}\) with the exponential linear unit (ELU) activation function. To efficiently extract the final function, the output functions of the intermediate layers must be well developed. If the output functions of each layer are well developed, the desired final NF can be compact. In addition, for the final function of an NN to be infinitely differentiable, the output functions of the intermediate layers should also be infinitely differentiable.

Figure 1: Perspective on MLP. Figure 2: The output graphs of each layer, trained with an MLP.

If the activation function is a rectified linear unit (ReLU), the output function bends sharply after every layer. If a one-dimensional regression problem is modeled with a simple MLP that has \(k+1\) layers with nodes \([N_{0},N_{1},N_{2},\ldots,N_{k}]\), the output function can bend up to roughly \(N_{0}*N_{1}*\cdots*N_{k}\) times. The ELU activation function weakens such bending but does not smoothen it at the level of the first derivative. Moreover, care is required when using the hyperbolic tangent function for all layers in a regression problem, because the output function bends in two places after each layer. Thus, the question is which activation function can develop the intermediate basis functions well. If the activation function starts as a linear function and bends with an appropriate curvature after each layer, the final result will be good. Therefore, we propose an activation function suitable for regression, called the _"integrated sigmoid linear unit (ISLU)"_: \[\mathrm{ISLU}(x)=\log(\alpha+\exp(\beta x))/\beta-\log(1+\alpha)/\beta, \tag{2}\] where \(\alpha\) and \(\beta\) are positive numbers. Our experiments show that ISLU performs sufficiently well and is worth further research. It can improve the accuracy and smoothness on our experimental data.2 Mathematically, ISLU for \(\alpha=1\) is a translated SoftPlus that passes through the origin, but ISLU differs fundamentally from SoftPlus: the purposes for which they were designed differ, and there is a significant difference in their results.3

Figure 4: Scores. The numerical score table is shown in Appendix E.

The experimental results are shown in Figure 4 (see footnotes 5, 6, and 7). By default, a model structure is represented in the form "[name of the model structure]_([number of layers]L,[number of nodes of all hidden layers]N)_[activation function]_[further information (optional)]".
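For concreteness, a short PyTorch sketch of ISLU as defined in (2) is given below; the implementation details and the trainable-\(\beta\) option (corresponding loosely to the ISLU[1] variants discussed next) are our own illustrative choices, not the paper's exact code.

```python
import torch
import torch.nn as nn

class ISLU(nn.Module):
    """Integrated sigmoid linear unit, eq. (2):
    ISLU(x) = log(alpha + exp(beta*x))/beta - log(1 + alpha)/beta,
    normalized so that ISLU(0) = 0."""
    def __init__(self, alpha=0.5, beta=1.0, trainable_beta=False):
        super().__init__()
        self.alpha = float(alpha)
        beta = torch.tensor(float(beta))
        self.beta = nn.Parameter(beta) if trainable_beta else beta

    def forward(self, x):
        log_alpha = torch.log(torch.tensor(self.alpha, dtype=x.dtype))
        # logaddexp gives a numerically stable log(alpha + exp(beta*x))
        return (torch.logaddexp(log_alpha, self.beta * x)
                - torch.log1p(torch.tensor(self.alpha, dtype=x.dtype))) / self.beta

# Sanity check: ISLU(0) = 0; for alpha = 1 this is SoftPlus shifted through the origin.
f = ISLU(alpha=1.0)
print(f(torch.tensor([0.0, 1.0, -1.0])))
```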
The experimental metadataset is described in Appendix A; it has \(B\), \(k\), and \(m\) as metaparameters and the corresponding task datasets over \(L\), \(t\), \(\phi\).8 The average of the "sum of squared errors" over eight tasks among the experimental metadatasets is taken as the score.

Footnote 5: In our experiment, the Swish activation function was also tested, and its performance was comparable to that of ISLU. However, for consistency, we do not discuss it in the main text; the details are presented in Appendix B.

Footnote 6: All box plots in this study are arranged in order of small scores from the top, and items wherein the box is invisible have a larger score than the shown graph range.

In Figure 4a, we consider a basic MLP structure trained on one task; WGN and fMLP, which will be introduced hereinafter, in Figures 4b and 4c are models trained using metadata. In the notation ISLU[\(\cdot\)], the number in brackets indicates the number of degrees of freedom through which the activation function's shape can be changed. ISLU[0] is trained with \(\alpha=0.5\) and \(\beta=1\); ISLU[1]\({}_{a}\) is trained with \(\alpha=0.5\) and \(\beta=var\); and ISLU[1]\({}_{b}\) is trained with \(\alpha=0.5\) and \(\beta=1+var\), where \(var\) are trainable parameters. Because variables trained in an NN tend to be learned in a distribution with a mean near zero, ISLU[1]\({}_{a}\) bends only slightly after each layer, whereas ISLU[1]\({}_{b}\) bends by a certain amount and additionally adjusts its degree.9

Footnote 7: All experimental conditions of NNs in this study are shown in Appendix D

Footnote 8: Most of the experiments in this study are done with the experimental dataset.

Footnote 9: The smaller the \(\beta\) value, the closer ISLU is to a straight line.

Considering the experimental results in Figure 4, the following is observed.

* (1) There is a significant difference in performance between SoftPlus and ISLU.
* (2) Considering an MLP, there is not much difference in performance between ISLU and ELU (Figure 4a). However, in all models trained with metadata, ISLU significantly outperforms ELU (Figures 4b and 4c).

Figure 5: Comparison of ELU and ISLU when training with WGN. From left to right, the 0th, 1st, and 2nd derivatives of the curves with respect to time t in a task in the given metadatasets. Blue lines: WGN_(4L,64N)_ELU_MB; red lines: WGN_(4L,64N)_ISLU[1]a_MB.

* (3) In Figure 4b, when the number of nodes is high (64N), ISLU[0] outperforms ISLU[1], whereas when the number of nodes is low (15N, 16N), ISLU[1] outperforms ISLU[0].
* (4) In Figure 4c, ISLU[1]\({}_{b}\) always outperforms ISLU[0].
* (5) As shown by ISLU[1]\({}_{a}\) and ISLU[1]\({}_{b}\), there are slight differences in performance depending on which base shape the ISLU starts from.

The reason for (2) can be explained as follows: setting an activation function parameter entails giving a certain bias. When given well, it considerably helps in predicting results; otherwise, it may interfere. When using metadata, the performance improves because the biases are determined by referring to various data. We now discuss the reasons for (3) and (4). In Figure 4b, fMLP indicates an MLP structure trained with fictitious metadata10 for only one task. If an MLP has many nodes, even if the curvatures of all activation functions are set to be the same, several functions can be added and combined to produce curves with the desired shapes. Meanwhile, when the nodes are few, the desired curves may not be obtained without adjusting the curvatures of the activation functions.
In Figure 4c, WGN is a network structure11 that learns the entire metadata at once. In this case, using ISLU[1] allows the activation shape to change between tasks, yielding better results than the fixed-shape ISLU[0].

Footnote 10: described in II.2

Footnote 11: described in III.1

The ISLU presented in this study is one example of an activation function for creating desired curves; better activation functions remain to be studied.

### Perspectives of Metadata

In this study, _metadata_ are data about datasets, here the sets of task datasets; _metafeatures_ are features of a task dataset; and _metalabels_ or _metaparameters_ are parameters representing metafeatures. Consider a case where a physical system has the relation \(y=f(x_{1},x_{2},\ldots)\) and the function \(f\) depends on the variables \(a_{1},a_{2},\ldots\). For example, a pendulum's kinetic energy \(E\) is \(E=f(\theta)\), where \(\theta\) denotes the angle between the string and the direction of the gravitational field, and the function \(f\) depends on the string's length \(l\) or the pendulum's mass \(m\). In this case, the kinetic energy \(E\) can be viewed not only as \(f(\theta,l,m,\ldots)\) but also as \(f_{l,m}(\theta)\). The dataset \(\mathcal{D}=\{(l_{i},m_{i},\theta_{i},E_{i})|E_{i}=f(\theta_{i},l_{i},m_{i},\ldots)\}=\{(l_{i},m_{i},D_{i})|D_{i}=D_{m_{i},l_{i}}(\theta)\}\) is a metadataset, and the numerical values \(l,m\) can be considered metaparameters. One might want to interpret the kinetic energy as \(E=f_{l,\theta}(m)\). This cannot be said to be wrong, and there may be various perspectives and interpretations of _a numerical dataset used for regression_.

Figure 6: Metadata structure.

### Advantages of Training with Metadata and Meta-Augmentation

Consider an experiment performed with the following metadata: \(\mathcal{D}_{k}=\{(x_{i},y_{i})|y_{i}=A_{k}*\sin(p_{k}*x_{i}+\phi_{k}),x\in[0,10],A_{k}\in[-1.5,1.5],p_{k}\in[0.5,1.5],\phi_{k}\in[0,2\pi]\}\). It can be seen from the perspective that the tasks \(\mathcal{D}=\{(x_{i},y_{i})|y_{i}=A*\sin(p*x_{i}+\phi)\}\) are given according to the metaparameters \(A\), \(p\), and \(\phi\). In this case, if not only \(x\) but also \(A\), \(p\), and \(\phi\) are used as training inputs, a curve can be created zero-shot just by setting \(A\), \(p\), and \(\phi\).12 Consequently, if metadata are used for learning, the accuracy of each task increases.

Footnote 12: MLP with inputs \(A,p,\phi,\theta\) and the WGN in III.1 were used for the experiment.

Taking a hint from the fact that metadata improve inference accuracy for each task, one can imagine that, even in a situation where only one task is given, fictitious metadata with fictitious metalabels (or metaparameters) can be generated to learn curves. If only fictitious metalabels are added and the data remain the same, the network learns to ignore the metalabels; therefore, some data modification is required. For the experiment, fictitious metadata comprising 10 tasks with the metaparameter \(a\) were created by shifting the \(y_{i}\) values in parallel by \(\pm 0.05\) for every \(a=\pm 0.02\), with the original data at \(a=0\), for a given task \(\mathcal{D}=\{(x_{i},y_{i})\}\). As a result of using fictitious metadata, the score improved significantly (Figure 9). The performance improvement was similar even when the fictitious metadata were generated by shifting \(x_{i}\) instead of \(y_{i}\) according to the fictitious metalabel.
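A minimal sketch of this fictitious meta-augmentation; the helper name and the symmetric range of the metalabel \(a\) are our own choices.

```python
import numpy as np

def fictitious_metadata(x, y, n_tasks=10, da=0.02, dy=0.05):
    """From one task {(x_i, y_i)}, build n_tasks fictitious tasks labeled by a
    metaparameter a, shifting y in parallel by dy for every da in a."""
    ks = np.arange(n_tasks) - n_tasks // 2   # a = 0 keeps the original data
    return [(k * da, x, y + k * dy) for k in ks]

x = np.linspace(-2.0, 2.0, 100)
y = 0.2 * (x - 1.0) * x * (x + 1.5)          # the toy curve from Section II
tasks = fictitious_metadata(x, y)            # each label a is fed as an extra input
```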
We reiterate that data augmentation _including fictitious metaparameters_ is required to achieve significant performance improvement; otherwise, there is little improvement. In this study, only the experimental results using an MLP with fictitious metaparameters added to the inputs are shown; however, further experiments show that the performance improvement due to fictitious metadata occurs independently of the model structure.

### Learning Function with Restricted Metadata

The regression task for a numerical dataset \(\mathcal{D}=\{(x_{i},y_{i})|i=0,1,2,\ldots\}\) can have a significant advantage over image problems: _the \(y_{i}\) values at particular locations can serve as metaparameters that represent the entire task dataset_. For a set of images, _knowing the RGB values at specific pixel positions does not help to distinguish the features of the images_. However, for a set of mathematical functions \(f(x)\), such as fifth-degree polynomials or sine curves, _just knowing \(f(x)\) at specific positions \(x\) lets us distinguish the functions well_. This can be shown in experiments with sine curve datasets. For the tasks \(\mathcal{D}_{k}=\{(x_{i},y_{i})|y_{i}=A_{k}*\sin(p_{k}*x_{i}+\phi_{k}),x\in[0,10],A_{k}\in[-1.5,1.5],p_{k}\in[0.5,1.5],\phi_{k}\in[0,2\pi]\}\) without given metaparameters \(A\), \(p\), and \(\phi\), it is possible to learn the sine curves just using the function values \(y_{i}\) at six points \(x_{i}\) as metaparameters (Figure 7). In other words, it is possible to perform _few-shot learning_ simply, without metalabels. In addition, the relationship between the six \(y\) points and \(A\), \(p\), and \(\phi\) can be learned with a simple MLP that has six-dimensional inputs and three-dimensional outputs, indicating that the metaparameters \(A\), \(p\), and \(\phi\) can be completely extracted from the six points to generate a sine curve.

## III Function-generating networks

### WGN

When learning metadata in a regression problem, one can think of a hierarchical NN structure in which an NF corresponding to each task is generated from the corresponding metaparameters. Structures in which a model is generated from variables have been studied extensively ([5; 6]). In this study, we consider one such function-generating structure, called the _weight-generating network (WGN)_. As shown in Figure 6, a WGN generates the parameters of the _main network_, such as its weights and biases, from metaparameters through simple MLPs called _weight generators_. If there are trainable parameters of the activation functions in the main network, they can also be generated from the metaparameters. A WGN is expected to generate _NFs comprising only a few parameters_ for each task through the weight generators. This is because the weight generators, trained on a large amount of data, carefully generate the parameters of the main network. Experiments showed that WGN is effective in creating a main network with excellent performance, although it comprises only a few parameters. What are the advantages of creating an NF with _only a few parameters_? First, because the number of times a linear input function can be bent is reduced, it may have a regularization effect and help create a smooth function. Second, it may be helpful for interpreting and analyzing the network by directly adjusting the parameters. Third, because the number of weights is small and the inference speed is fast, it can be advantageous when fast inference is required, such as in a simulation.
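A minimal PyTorch sketch of the WGN idea, in which a weight generator maps one task's metaparameters to the weight and bias of a single main-network layer; the sizes and the generator architecture are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GeneratedLayer(nn.Module):
    """One main-network layer whose weight and bias come from a generator."""
    def __init__(self, n_z, n_in, n_out, hidden=64):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        self.gen = nn.Sequential(nn.Linear(n_z, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_in * n_out + n_out))

    def forward(self, x, z):
        # x: (TB, n_in) task batch, z: (n_z,) one task's metaparameters
        p = self.gen(z)
        W = p[: self.n_in * self.n_out].reshape(self.n_in, self.n_out)
        b = p[self.n_in * self.n_out :]
        return x @ W + b
```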
### Meta-batch

When training a function-generating network such as WGN, a single metalabel (or metaparameter) \(z_{i}\) is usually fed to the weight generator's input and updated with the batch of the corresponding task on the main network. In this case, however, training proceeds with batch size 1 with respect to the metaparameters, and when updates are backpropagated one task at a time, the meta-correlation between tasks is not exploited well. To address this problem, the _meta-batch_ concept is proposed. To distinguish the meta-batch from the conventional batch, the batch of each task corresponding to one \(z_{i}\) is called a "task batch." "Meta-batch" refers to both the batch of metaparameters and the corresponding batch of tasks.

The training method for WGN using the meta-batch is as follows. Suppose a training metadataset \(\mathcal{D}=\{(\mathcal{D}_{k},z_{k})|k\in\{1,\ldots,K\}\}\) comprising task training datasets \(\mathcal{D}_{k}=\{(x_{i}^{k},y_{i}^{k})\}_{i=1}^{N_{k}}\) is given, where \(N_{k}\) is the number of datapoints of task \(\mathcal{D}_{k}\). For index sets \(M\subset\{1,\ldots,K\}\) and \(T_{k}\subset\{1,\ldots,N_{k}\}\) that determine the meta-batch and task batch, select the batches \(\mathcal{X}_{M}=\{(D_{m},z_{m})|m\in M\}\) and \(\mathcal{X}_{T}^{M}=\{(x_{t}^{m},y_{t}^{m})|t\in T_{m},m\in M\}\). We denote the dimensions of \(x_{i}\), \(y_{i}\), and \(z_{i}\) by \(N[x]\), \(N[y]\), and \(N[z]\), respectively. \(w_{ij}^{l}\) denotes the weight between the \(l\)th and (\(l+1\))th layers of the WGN's main network, which has shape \((N[w_{l}],N[w_{l+1}])\), where \(N[w_{i}]\) denotes the number of nodes at the \(i\)th layer. The inputs \(\mathcal{X}_{T}^{M}\) of the main network are rank-3 tensors of the form \((\mathrm{MB},\mathrm{TB},N[x])\), where \(\mathrm{MB}\) and \(\mathrm{TB}\) denote the sizes of \(M\) and \(T\), respectively. If \(z_{m}\) enters the weight generator as input in the form \((\mathrm{MB},N[z])\), then \(G[w_{ij}^{l}](z_{m})\) generates a tensor of the form \((\mathrm{MB},N[w_{l}]*N[w_{l+1}])\), which is reshaped to \((\mathrm{MB},N[w_{l}],N[w_{l+1}])\), where \(G[w_{ij}^{l}]\) denotes the generator that produces \(w_{ij}^{l}\). The outputs of the \(l\)th layer of the main network, which have shape \((\mathrm{MB},\mathrm{TB},N[w_{l}])\), are matrix-multiplied with the weights of the form \((\mathrm{MB},N[w_{l}],N[w_{l+1}])\), becoming a tensor of the form \((\mathrm{MB},\mathrm{TB},N[w_{l+1}])\).13 Finally, the outputs of the main network, with shape \((\mathrm{MB},\mathrm{TB},N[y])\), and the targets \(y_{t}^{m}\) are used to calculate the loss of the entire network. Conceptually, it is simple, as shown in Figure 10.

Footnote 13: All other parameters in the main network can be generated from weight generators using a similar method

As a result of the experiment, Figure 12 shows a significant difference in performance between using and not using the meta-batch, where "MB" means using the meta-batch and "ST" means training by inputting metaparameters individually without using the meta-batch. Figure 12 also shows the difference between using WGN and just using a simple MLP. The meta-batch can be used in any function-generating network structure that generates models from variables; another example is shown in Figure 11, where the outputs of the generators are concatenated with the layers of the main network. As a result of experimenting with ISLU[1] in the structure shown in Figure 11, there was a performance difference of more than four times between using and not using the meta-batch.
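The tensor manipulations above amount to a batched matrix product. Extending the single-task sketch from Section III.A to a meta-batch gives, again as an illustrative sketch:

```python
import torch
import torch.nn as nn

class MetaBatchLayer(nn.Module):
    """One generated main-network layer, processing a whole meta-batch at once."""
    def __init__(self, n_z, n_in, n_out, hidden=64):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        self.gen = nn.Sequential(nn.Linear(n_z, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_in * n_out + n_out))

    def forward(self, x, z):
        # x: (MB, TB, n_in) task batches, z: (MB, n_z) metaparameter batch
        p = self.gen(z)                                   # (MB, n_in*n_out + n_out)
        W = p[:, : self.n_in * self.n_out].reshape(-1, self.n_in, self.n_out)
        b = p[:, self.n_in * self.n_out :].unsqueeze(1)   # (MB, 1, n_out)
        return torch.bmm(x, W) + b                        # (MB, TB, n_out)
```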
Figure 13 shows the results of using WGN and the meta-batch compared with those of using only an MLP. "sWGN" indicates a WGN trained with metaparameters that are the function values at 10 points of \((L,t,\phi)\), without using the original metaparameters \(B\), \(k\), and \(m\). "mMLP" indicates an MLP trained with a six-dimensional input combining \(L\), \(t\), and \(\phi\) with the original metaparameters. "MLP" indicates a model trained for each task with just the inputs \(L\), \(t\), and \(\phi\). This figure shows that, using the meta-batch, WGN outperformed the MLP with fewer parameters. It also shows that WGN excels at learning all metadata and exploiting them with only a few parameters. Figures 14 and 15 show the results for the other metadatasets, which are described in Appendix A. The combinations of ISLU, meta-batch, and WGN give much better performance than an MLP in terms of accuracy and compactness.

Figure 12: Comparison between using meta-batch and not using meta-batch.

Figure 13: Scores for each task of metadata from different models.

## IV Conclusion

In this study, we focus on creating mathematical functions with desired shapes using an NN with a few parameters. Irregular and numerous parameters are helpful for generalization because of randomness; however, they sometimes make it difficult to interpret the network and reduce the smoothness of the functions. In this study, we dissected NNs for regression and, consequently, proposed a new activation function. We examined the special features of regression-related metadata, such as the possibility of extracting metaparameters directly, and showed how, given only one task, fictitious metaparameters and metadata can be created to improve performance several times over. In addition, network structures generating NFs from metaparameters were discussed, and the _meta-batch_ method was introduced and tested on the structure called WGN. WGN makes it possible to provide smooth, desired-shaped NFs comprising only a few parameters, because it carefully generates different parameters and activation function shapes for each task. The findings of this study, as well as the insights obtained in the process, are significant for obtaining smooth and accurate functions from NNs. One of them is the perspective of obtaining desired output functions at _intermediate_ layers from a large amount of data. Regarding regression problems, this will help elucidate how to find the metafeatures of each task, how to map them to the corresponding metaparameters, and how to obtain a smooth and compact NF of a desired shape.
2309.13907
HiGNN-TTS: Hierarchical Prosody Modeling with Graph Neural Networks for Expressive Long-form TTS
Recent advances in text-to-speech, particularly those based on Graph Neural Networks (GNNs), have significantly improved the expressiveness of short-form synthetic speech. However, generating human-parity long-form speech with high dynamic prosodic variations is still challenging. To address this problem, we expand the capabilities of GNNs with a hierarchical prosody modeling approach, named HiGNN-TTS. Specifically, we add a virtual global node in the graph to strengthen the interconnection of word nodes and introduce a contextual attention mechanism to broaden the prosody modeling scope of GNNs from intra-sentence to inter-sentence. Additionally, we perform hierarchical supervision from acoustic prosody on each node of the graph to capture the prosodic variations with a high dynamic range. Ablation studies show the effectiveness of HiGNN-TTS in learning hierarchical prosody. Both objective and subjective evaluations demonstrate that HiGNN-TTS significantly improves the naturalness and expressiveness of long-form synthetic speech.
Dake Guo, Xinfa Zhu, Liumeng Xue, Tao Li, Yuanjun Lv, Yuepeng Jiang, Lei Xie
2023-09-25T07:07:02Z
http://arxiv.org/abs/2309.13907v2
# HiGNN-TTS: Hierarchical Prosody Modeling with Graph Neural Networks for Expressive Long-Form TTS

###### Abstract

Recent advances in text-to-speech, particularly those based on Graph Neural Networks (GNNs), have significantly improved the expressiveness of short-form synthetic speech. However, generating human-parity long-form speech with high dynamic prosodic variations is still challenging. To address this problem, we expand the capabilities of GNNs with a hierarchical prosody modeling approach, named HiGNN-TTS. Specifically, we add a virtual global node in the graph to strengthen the interconnection of word nodes and introduce a contextual attention mechanism to broaden the prosody modeling scope of GNNs from intra-sentence to inter-sentence. Additionally, we perform hierarchical supervision from acoustic prosody on each node of the graph to capture the prosodic variations with a high dynamic range. Ablation studies show the effectiveness of HiGNN-TTS in learning hierarchical prosody. Both objective and subjective evaluations demonstrate that HiGNN-TTS significantly improves the naturalness and expressiveness of long-form synthetic speech1.

Footnote 1: Speech samples: [https://anonymous-asru.github.io/HiGNN-TTS/](https://anonymous-asru.github.io/HiGNN-TTS/)

Dake Guo\({}^{1}\), Xinfa Zhu\({}^{1}\), Liumeng Xue\({}^{2}\), Tao Li\({}^{1}\), Yuanjun Lv\({}^{1}\), Yuepeng Jiang\({}^{1}\), Lei Xie\({}^{1}\) \({}^{1}\)Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University, Xi'an, China \({}^{2}\)School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China

Expressive long-form TTS, hierarchical prosody modeling, graph neural network, semantic representation enhancing

## 1 Introduction

Text-to-Speech (TTS), aiming to generate human-like speech from text, has advanced dramatically in naturalness with the proliferation of sequence-to-sequence (seq2seq) neural approaches [1, 2, 3]. With the growing demand for anthropomorphic human-computer interaction, there has been increasing interest in generating speech with high expressiveness [4, 5, 6, 7], i.e., speech exhibiting a high dynamic range in prosodic variation, including pitch, duration, etc. Human speech is expressive in nature, and proper expression rendering affects overall speech perception, which is essential for many TTS applications such as newsreaders and voice assistants. However, there is still a noticeable gap between synthetic speech and human speech in terms of expressiveness, particularly in long-form speech synthesis scenarios like audiobooks. Conventional TTS usually works on the prosody modeling of a single sentence. However, long-form speech usually contains multiple semantically coherent sentences, where the prosody of each individual sentence is also affected by its corresponding context. Therefore, in long-form speech generation, besides ensuring the coherence of the overall rhythm, it is also necessary to model the local fine-grained and global prosody within each sentence, as well as the cross-sentence contextual prosody [8]. To address sentence-level global prosody, several works have focused on extracting a global style embedding from a given reference speech, which successfully captures the global style features of speech [9, 10].
In order to synthesize expressive speech without the need for auxiliary reference speech during inference, some methods attempt to obtain prosodic variations directly from text, which is more practical. By incorporating text-only predictions, the Text-Predicted Global Style Token (TP-GST) [11] extends the capabilities of GST [10], enabling the generation of style embeddings or style token weights based solely on textual input. However, sentence-level global prosody modeling alone lacks local fine-grained prosody information within a sentence, such as pauses and emphasis. By aligning the extracted prosody embedding sequence to the phoneme sequence, some works manage to model fine-grained prosody [12, 13, 14]. Moreover, the tight interplay of prosody and semantic information within sentences has led to a growing focus on leveraging pre-trained language models (such as BERT [15] and XLNet [16]) to enhance fine-grained prosody representations [17, 18]. Further, to fully utilize tree-structured syntactic information, some works have leveraged graph-based methods to model word-level prosody. Some studies incorporate BERT embeddings and employ Relational Gated Graph Networks (RGGN) to extract semantic information, which consequently enriches prosodic variations and improves expressiveness in the generated speech [19]. Instead of using BERT embeddings, other work utilizes the average pooling of hidden layer representations from the acoustic model as
2309.11651
Drift Control of High-Dimensional RBM: A Computational Method Based on Neural Networks
Motivated by applications in queueing theory, we consider a stochastic control problem whose state space is the $d$-dimensional positive orthant. The controlled process $Z$ evolves as a reflected Brownian motion whose covariance matrix is exogenously specified, as are its directions of reflection from the orthant's boundary surfaces. A system manager chooses a drift vector $\theta(t)$ at each time $t$ based on the history of $Z$, and the cost rate at time $t$ depends on both $Z(t)$ and $\theta(t)$. In our initial problem formulation, the objective is to minimize expected discounted cost over an infinite planning horizon, after which we treat the corresponding ergodic control problem. Extending earlier work by Han et al. (Proceedings of the National Academy of Sciences, 2018, 8505-8510), we develop and illustrate a simulation-based computational method that relies heavily on deep neural network technology. For test problems studied thus far, our method is accurate to within a fraction of one percent, and is computationally feasible in dimensions up to at least $d=30$.
Baris Ata, J. Michael Harrison, Nian Si
2023-09-20T21:32:58Z
http://arxiv.org/abs/2309.11651v4
# Drift Control of High-Dimensional RBM: A Computational Method Based on Neural Networks ###### Abstract Motivated by applications in queueing theory, we consider a stochastic control problem whose state space is the \(d\)-dimensional positive orthant. The controlled process \(Z\) evolves as a reflected Brownian motion whose covariance matrix is exogenously specified, as are its directions of reflection from the orthant's boundary surfaces. A system manager chooses a drift vector \(\theta(t)\) at each time \(t\) based on the history of \(Z\), and the cost rate at time \(t\) depends on both \(Z(t)\) and \(\theta(t)\). In our initial problem formulation, the objective is to minimize expected discounted cost over an infinite planning horizon, after which we treat the corresponding ergodic control problem. Extending earlier work by Han et al. (Proceedings of the National Academy of Sciences, 2018, 8505-8510), we develop and illustrate a simulation-based computational method that relies heavily on deep neural network technology. For test problems studied thus far, our method is accurate to within a fraction of one percent, and is computationally feasible in dimensions up to at least \(d=30\). ## 1 Introduction Beginning with the seminal work of Iglehart and Whitt [26, 27], there has developed over the last 50+ years a large literature that justifies the use of reflected Brownian motions as approximate models of queueing systems under "heavy traffic" conditions. In particular, a limit theorem proved by Reiman [39] justifies the use of \(d\)-dimensional reflected Brownian motion (RBM) as an approximate model of a \(d\)-station queueing network. Reiman's theory is restricted to networks of the generalized Jackson type, also called single-class networks, or networks with homogeneous customer populations, but it has been extended to more complex multi-class networks under certain restrictions, most notably by Peterson [37] and Williams [45]. The survey papers by Williams [44] and by Harrison and Nguyen [20] provide an overview of heavy traffic limit theory through its first 25 years. Many authors have commented on the compactness and simplicity of RBM as a mathematical model, at least in comparison with the conventional discrete-flow models that it replaces. For example, in the preface to Kushner [32]'s book on heavy traffic analysis one finds the following passage: "These approximating [Brownian] models have the basic structure of the original problem, but are significantly simpler. Much inessential detail is eliminated... They greatly simplify analysis, design, and optimization, [yielding] good approximations to problems that would otherwise be intractable..." Of course, having adopted RBM as a system model, one still confronts the question of how to do performance analysis, and in that regard there has been an important recent advance: Blanchet et al. [10] have developed a simulation-based method to estimate steady-state performance measures for RBM in dimensions up to 200, and those estimates come with performance guarantees. **Descriptive performance analysis versus optimal control.** Early work on heavy traffic approximations, including the papers cited above, focused on descriptive performance analysis under fixed operating policies. Harrison [18, 19] expanded the framework to include consideration of dynamic control, using informal arguments to justify Brownian approximations for queueing network models where a system manager can make sequencing, routing and/or input control decisions. 
Early papers in that vein by Harrison and Wein [22, 23] and by Wein [43] dealt with Brownian models simple enough that their associated control problems could be solved analytically. But for larger systems and/or more complex decisions, the Brownian control problem that approximates an original queueing control problem may only be solvable numerically. Such stochastic control problems may be of several different types, depending on context. At one end of the spectrum are drift control problems, in which the controlling agent can effect changes in system state only at bounded finite rates. At the other end of the spectrum are impulse control problems, in which the controlling agent can effect instantaneous jumps in system state, usually with an associated fixed cost. In between are singular control problems, in which the agent can effect instantaneous state changes of any desired size, usually at a cost proportional to the size of the displacement; see for example, Karatzas [28]. In this paper we develop a computational method for the first of those three problem classes, and then illustrate its use on selected test problems. Our method is a variant of the one developed by Han et al. [17] for solution of semi-linear partial differential equations, and in its implementation we have re-used substantial amounts of the code provided by Han et al. [17] and Zhou et al. [49]. **Literature Review.** Two of the most relevant streams of literature are _i_) drift rate control problems, and _ii_) solving PDEs using deep learning. Ata et al. [5] considers a one-dimensional drift rate control problem on a bounded interval under a general cost of control but no state costs. The authors characterize the optimal policy in closed form; and they discuss the application of their model to a power control problem in wireless communication. Ormeci Matoglu and Vande Vate [36] consider a drift rate control problem where a system controller incurs a fixed cost to change the drift rate. The authors prove that a deterministic, non-overlapping control band policy is optimal; also see Vande Vate [42]. Ghosh and Weerasinghe [15, 16] extend Ata et al. [5] by incorporating state costs, abandonments and optimally choosing the interval where the process lives. Drift control problems arise in a broad range of applications in practice. Rubino and Ata [40] studies a dynamic scheduling problem for a make-to-order manufacturing system. The authors model order cancellations as abandonments from their queueing system. This model feature gives rise to a drift rate control problem in the heavy traffic limit. Ata et al. [6] uses a drift control model to study a dynamic staffing problem in order to determine the number of volunteer gleaners, who sign up to help but may not show up, for harvesting leftover crops donated by farmers for the purpose of feeding food-insecure individuals. Bar-Ilan et al. [7] use a drift control model to study international reserves. All of the papers mentioned above study one-dimensional drift-rate control problems. To the best of our knowledge, there have not been any papers studying such problems in high dimensions. One exception to this is the recent working paper Ata and Kasikaralar [4] that studies dynamic scheduling of a multiclass queue motivated by call center industry. Focusing on the Halfin-Whitt asymptotic regime, the authors derive a (limiting) drift rate control problem whose state space is \(\mathbb{R}^{d}\), where \(d\) is the number of buffers in their queueing model. 
Similar to us, the authors build on Han et al. [17] to solve their (high-dimensional) drift rate control problem. However, our work differs from theirs significantly, because their control problem has no state space constraints. As mentioned earlier, our work builds on the seminal paper Han et al. [17]. In the last five years, there have been many papers written on solving PDEs using deep neural networks. We refer the reader to the recent survey Beck et al. [8]; also see E et al. [14]. **The remainder of this paper.** Section 2 recapitulates essential background knowledge from RBM theory, after which Section 3 states in precise mathematical terms the discounted control and ergodic control problems that are the object of our study. In each case, the problem statement is expressed in probabilistic terms initially, and then re-expressed analytically in the form of an equivalent Hamilton-Jacobi-Bellman equation (hereafter abbreviated to HJB equation). Section 4 derives key identities, that significantly contribute to the subsequent development of our computational method. Section 5 describes our computational method in detail. Section 6 specifies three families of drift control test problems, each of which has members of dimensions \(d=1,2,\ldots\). The first two families arise as heavy traffic limits of certain queueing network control problems, and we explain that motivation in some detail. Drift control problems in the third family have a separable structure that allows them to be solved exactly by analytical means, which is of obvious value for assessing the accuracy of our computational method. Section 7 presents numerical results obtained with our method for all three families of test problems. In that admittedly limited context, our computed solutions are accurate to within a fraction of one percent, and our method remains computationally feasible up to at least dimension \(d=30\), and in some cases up to dimension \(100\) or more. In Section 8 we describe variations and generalizations of the problems formulated in Section 3 that are of interest for various purposes, and which we expect to be addressed in future work. Finally, there are a number of appendices that contain proofs or other technical elaboration for arguments or procedures that have only been sketched in the body of the paper. ## 2 RBM preliminaries We consider here a reflected Brownian motion \(Z=\left\{Z(t),t\geq 0\right\}\) with state space \(\mathbb{R}_{+}^{d},\) where \(d\geq 1.\) The data of \(Z\) are a (negative) drift vector \(\mu\in\mathbb{R}^{d},\) a \(d\times d\) positive-definite covariance matrix \(A=(a_{ij}),\) and a \(d\times d\) reflection matrix \(R\) of the form \[R=I-Q,\text{ where }Q\text{ has non-negative entries and spectral radius }\rho(Q)<1. \tag{1}\] The restriction to reflection matrices of the form (1) is not essential for our purposes, but it simplifies the technical development and is consistent with usage in the related earlier paper by Blanchet et al. [10]. Denoting by \(W=\left\{W(t),t\geq 0\right\}\) a \(d\)-dimensional Brownian motion with zero drift, covariance matrix \(A,\) and \(W(0)=0,\) we then have the representation \[Z(t)=Z(0)+W(t)-\mu t+RY(t),\text{ }t\geq 0,\text{ where} \tag{2}\] \[Y_{i}(\cdot)\text{ is continuous and non-decreasing with }Y_{i}(0)=0\text{ }(i=1,2,\ldots,d),\text{ and}\] (3) \[Y_{i}(\cdot)\text{ only increases at those times }t\text{ when }Z_{i}(t)=0\text{ }(i=1,2,\ldots,d). 
\tag{4}\] Harrison and Reiman [21] showed that the relationships (1) to (4) determine \(Y\) and \(Z\) as pathwise functionals of \(W,\) and that the mapping \(W\to(Y,Z)\) is continuous in the topology of uniform convergence. We interpret the \(i^{\text{th}}\) column of \(R\) as the direction of reflection on the boundary surface \(\left\{z\in\mathbb{R}_{+}^{d}:z_{i}=0\right\}\), and call \(Y_{i}=\left\{Y_{i}(t),t\geq 0\right\}\) the "pushing process" on that boundary surface. In preparation for future developments, let \(f\) be an arbitrary \(C^{2}\) (that is, twice continuously differentiable) function \(\mathbb{R}^{d}\rightarrow\mathbb{R},\) and let \(\nabla f\) denote its gradient vector as usual. Also, we define a second-order differential operator \(\mathcal{L}\) via \[\mathcal{L}f=\frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d}a_{ij}\frac{\partial^{2}}{\partial z_{i}\partial z_{j}}f, \tag{5}\] and a first-order differential operator \(\mathcal{D}=(\mathcal{D}_{1},\ldots,\mathcal{D}_{d})^{\top}\) via \[\mathcal{D}f=R^{\top}\nabla f, \tag{6}\] where \(\top\) in (6) denotes transpose. Thus \(\mathcal{D}_{i}f(\cdot)\) is the directional derivative of \(f\) in the direction of reflection on the boundary surface \(\left\{z_{i}=0\right\}.\) With these definitions, an application of Ito's formula now gives the following identity, cf. Harrison and Reiman [21], Section 3: \[\mathrm{d}f(Z(t))=\nabla f(Z(t))\cdot\mathrm{d}W(t)+(\mathcal{L}f-\mu\cdot\nabla f)(Z(t))\,\mathrm{d}t+\mathcal{D}f(Z(t))\cdot\mathrm{d}Y(t),\text{ }t\geq 0. \tag{7}\] In the obvious way, the first inner product on the right side of (7) is shorthand for a sum of \(d\) Ito differentials, while the last one is shorthand for a sum of \(d\) Riemann-Stieltjes differentials.

## 3 Problem statements and HJB equations

Let us now consider a stochastic control problem whose state space is \(\mathbb{R}^{d}_{+}\) (\(d\geq 1\)). The controlled process \(Z\) has the form \[Z(t)=Z(0)+W(t)-\int_{0}^{t}\theta(s)\mathrm{d}s+RY(t),\ t\geq 0, \tag{8}\] where (i) \(W=\{W(t),t\geq 0\}\) is a \(d\)-dimensional Brownian motion with zero drift, covariance matrix \(A\), and \(W(0)=0\) as in Section 2, (ii) \(\theta=\{\theta(t),t\geq 0\}\) is a non-anticipating control, or non-anticipating drift process, chosen by a system manager and taking values in a bounded set \(\Theta\subset\mathbb{R}^{d}\), and (iii) \(Y=\{Y(t),t\geq 0\}\) is a \(d\)-dimensional pushing process with components \(Y_{i}\) that satisfy (3) and (4). Note that our sign convention on the drift in the basic system equation (8) is _not_ standard. That is, we denote by \(\theta(t)\) the _negative_ drift vector at time \(t\). The control \(\theta\) is chosen to optimize an economic objective (see below), and attention will be restricted to _stationary_ Markov controls, or stationary control policies, by which we mean that \[\theta(t)=u(Z(t)),\ t\geq 0\ \text{for some measurable policy function}\ u:\mathbb{R}^{d}_{+}\to\Theta. \tag{9}\] Hereafter the set \(\Theta\) of drift vectors available to the system manager will be referred to as the _action space_ for our control problem, a function \(u:\mathbb{R}^{d}_{+}\to\Theta\) will simply be called a _policy_, and we denote by \(Z^{u}\) the controlled RBM defined via (8) and (9).
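To make the controlled dynamics (8) concrete, the following minimal sketch simulates a discretized path of \(Z^{u}\): each Euler step adds the Brownian increment and subtracts the (negative) drift \(u(z)h\), and the resulting point is mapped back into the orthant by solving a discretized Skorokhod problem with reflection matrix \(R\). The code is Python with NumPy; all names and parameter values are our own illustrative assumptions, and the iterative solver mirrors the subroutine given later in Section 5.

```python
import numpy as np

def skorokhod(x, R, eps=1e-8):
    """Map x into the orthant: find y = x + R @ L >= 0 with L >= 0,
    iterating exactly as in the discretized Skorokhod solver of Section 5."""
    y = x.copy()
    while (y < -eps).any():
        B = np.where(y < eps)[0]                 # coordinates at or below the boundary
        L_B = np.linalg.solve(R[np.ix_(B, B)], -x[B])
        y = x + R[:, B] @ L_B
    return y

def simulate_rbm(z0, u, R, A, T=10.0, h=1e-2, rng=None):
    """Euler scheme for (8): free step with increment dW - u(z) h, then reflect."""
    rng = rng if rng is not None else np.random.default_rng(0)
    chol = np.linalg.cholesky(A)                 # chol @ N(0, I) has covariance A
    z, path = z0.copy(), [z0.copy()]
    for _ in range(int(T / h)):
        dW = np.sqrt(h) * (chol @ rng.standard_normal(len(z0)))
        z = skorokhod(z + dW - u(z) * h, R)      # u(z) is the *negative* drift
        path.append(z.copy())
    return np.array(path)

# Illustrative 2-d feed-forward example, cf. the reflection matrix (59) with p_1 = 1:
R = np.array([[1.0, 0.0], [-1.0, 1.0]])
path = simulate_rbm(np.zeros(2), u=lambda z: np.ones(2), R=R, A=np.eye(2))
print(path.shape, path.min())                    # coordinates stay (numerically) >= 0
```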
With regard to the system manager's objective, we take as given a continuous cost function \(c:\mathbb{R}^{d}_{+}\times\Theta\to\mathbb{R}\) with polynomial growth (see below for the precise meaning of that phrase), and assume that the cumulative cost incurred over the time interval \([0,t]\) under policy \(u\) is \[C^{u}(t)\equiv\int_{0}^{t}c(Z^{u}(s),u(Z^{u}(s)))\,\mathrm{d}s,\ t\geq 0. \tag{10}\] To be more specific, for \(m,n\geq 1\), a function \(g:D\subset\mathbb{R}^{m}\to\mathbb{R}^{n}\) is said to have polynomial growth if there exist constants \(\alpha_{1}\), \(\beta_{1}>0\) such that \[|g(z)|\leq\alpha_{1}\left(1+|z|^{\beta_{1}}\right),\ z\in D.\] Because the action space \(\Theta\) is bounded, the polynomial growth assumption on \(c\) reduces to the following: \[|c(z,\theta)|\leq\alpha_{2}\left(1+|z|^{\beta_{2}}\right)\ \text{for all}\ z\in\mathbb{R}^{d}_{+}\ \text{and}\ \theta\in\Theta, \tag{11}\] where \(\alpha_{2}\), \(\beta_{2}\) are positive constants. Because our action space \(\Theta\) is bounded by assumption, the controlled RBM \(Z^{u}\) has bounded drift under any policy \(u\), from which one can prove the following mild but useful property; see Appendix A for its proof. **Proposition 1**.: _Under any policy \(u\) and for any integer \(n=1,2,\ldots\) the function_ \[g_{n}(z,t)=\mathbb{E}_{z}\left\{|Z^{u}(t)|^{n}\right\},\ t\geq 0,\] _has polynomial growth in \(t\) for each fixed \(z\in\mathbb{R}_{+}^{d}\)._ ### Discounted control In our first problem formulation, an interest rate \(r>0\) is taken as given, and we adopt the following discounted cost objective: choose a policy \(u\) to minimize \[V^{u}(z)\equiv\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\mathrm{d}C^{u}(t) \right]=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}c(Z^{u}(t),u(Z^{u}(t))) \,\mathrm{d}t\right], \tag{12}\] where \(\mathbb{E}_{z}\left(\cdot\right)\) denotes a conditional expectation given that \(Z(0)=z.\) Given the polynomial growth condition (11), it follows from Proposition 1 that the moments of \(Z(t)\) are polynomially bounded as functions of \(t\) for each fixed initial state \(z\). Given the assumed positivity of the interest rate \(r\), the expectation in (12) is therefore well defined and finite for each \(z\in\mathbb{R}_{+}^{d}\). Hereafter we refer to \(V^{u}(\cdot)\) as the _value function_ under policy \(u\), and define the _optimal value function_ \[V(z)=\min_{u\in\mathcal{U}}V^{u}(z)\ \text{for each}\ z\in\mathbb{R}_{+}^{d}, \tag{13}\] where \(\mathcal{U}\) is the set of stationary Markov control policies. To solve for the value function \(V^{u}(\cdot)\) under an arbitrary policy \(u\), a standard argument gives the following PDE with boundary conditions, where \(\mathcal{L}\) and \(\mathcal{D}_{i}\) are the differential operators defined via (5) and (6), respectively: \[\mathcal{L}V^{u}(z)-u(z)\cdot\nabla V^{u}(z)+c(z,u(z))=rV^{u}(z),\ \ z\in \mathbb{R}_{+}^{d}, \tag{14}\] with boundary conditions \[\mathcal{D}_{i}V^{u}(z)=0\ \text{if}\ z_{i}=0\ (i=1,2,\ldots,d). \tag{15}\] The corresponding HJB equation, to be solved for the _optimal_ value function \(V(\cdot)\), is \[\mathcal{L}V(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z, \theta)\right\}=rV(z),z\in\mathbb{R}_{+}^{d}, \tag{16}\] with boundary conditions \[\mathcal{D}_{i}V(z)=0\ \text{if}\ z_{i}=0\ (i=1,2,\ldots,d). \tag{17}\] Moreover, the policy \[u^{*}(z)=\arg\max_{\theta\in\Theta}\{\theta\cdot\nabla V(z)-c(z,\theta)\} \tag{18}\] is optimal, meaning that \(V^{u^{*}}(z)=V(z)\) for \(z\in\mathbb{R}^{d}_{+}\). 
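As a sanity check on any candidate policy, the discounted value \(V^{u}(z)\) in (12) can also be estimated directly by simulation: average discretized versions of \(\int_{0}^{\infty}e^{-rt}c(Z^{u}(t),u(Z^{u}(t)))\,\mathrm{d}t\) over sample paths, truncating at a horizon \(T\) large enough that \(e^{-rT}\) is negligible. A minimal sketch, reusing `simulate_rbm` from the earlier snippet (the cost weights and horizon are our own illustrative assumptions):

```python
def discounted_value(z0, u, cost, R, A, r=0.1, T=60.0, h=1e-2, n_paths=200, seed=1):
    """Monte Carlo estimate of V^u(z0) in (12), truncated at horizon T."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        path = simulate_rbm(z0, u, R, A, T=T, h=h, rng=rng)
        t = h * np.arange(len(path))
        rates = np.array([cost(z, u(z)) for z in path])
        total += h * np.sum(np.exp(-r * t) * rates)   # left-endpoint Riemann sum
    return total / n_paths

# Linear cost c(z, theta) = h'z + c'theta as in (61), with all weights equal to one:
cost = lambda z, th: z.sum() + th.sum()
print(discounted_value(np.zeros(2), lambda z: np.ones(2), cost, R, np.eye(2)))
```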
There will be no attempt here to prove existence of \(C^{2}\) solutions, but our computational method proceeds as if that were the case, striving to compute a \(C^{2}\) function \(V\) that satisfies (16)-(17) as closely as possible in a certain sense. In Appendix B.1 we use (7) to verify that a sufficiently regular solution of the PDE (14)-(15) does in fact satisfy (12) as intended, and similarly, that a sufficiently regular solution of (16)-(17) does in fact satisfy (13). ### Ergodic control For our second problem formulation, it is assumed that \[c(z,\theta)\geq 0\text{ for all }(z,\theta)\in\mathbb{R}^{d}_{+}\times\Theta. \tag{19}\] Readers will see that our analysis can be extended to cost functions that take on negative values in at least some states, but to do so one must deal with certain irritating technicalities. To be specific, the issue is whether the expected values involved in our formulation are well defined. In preparation for future developments, let us recall that a square matrix \(R\) of the form (1), called a _Minkowski matrix_ in linear algebra (or just _M-matrix_ for brevity), is non-singular, and its inverse is given by the Neumann expansion \[R^{-1}=I+Q+Q^{2}+\ldots\geq 0.\] Hereafter, we assume that \[\text{there exists at least one }\theta\in\Theta\text{ such that }R^{-1}\theta>0. \tag{20}\] It is known that an RBM with a non-singular covariance matrix, reflection matrix \(R\), and negative drift vector \(\theta\) has a stationary distribution if and only if the inequality in (20) holds, cf. Section 6 of Harrison and Williams [24]. Of course, our statement of this "stability condition" reflects the non-standard sign convention used in this paper. That is, \(\theta\) denotes the _negative_ drift vector of the RBM under discussion. For our ergodic control problem, a policy function \(u:\mathbb{R}^{d}_{+}\rightarrow\Theta\) is said to be _admissible_ if, first, the corresponding controlled RBM \(Z^{u}\) has a unique stationary distribution \(\pi^{u}\), and if, moreover, \[\int_{\mathbb{R}^{d}_{+}}\left|f(z)\right|\pi^{u}(dz)<\infty \tag{21}\] for any function \(f:\mathbb{R}^{d}_{+}\rightarrow\mathbb{R}\) with polynomial growth. Our assumption (20) ensures the existence of at least one admissible policy \(u\), as follows. Let \(\theta\in\Theta\) be a negative drift vector satisfying (20), and consider the constant policy \(u(\cdot)\equiv\theta\). The corresponding controlled process \(Z^{u}\) is then an RBM having a unique stationary distribution \(\pi^{u}\), as noted above. It has been shown in Budhiraja and Lee [11] that the moment generating function of \(\pi^{u}\) is finite in a neighborhood of the origin, from which it follows that \(\pi^{u}\) has finite moments of all orders. Thus \(\pi^{u}\) satisfies (21) for any function \(f\) with polynomial growth, so \(u\) is admissible. Because our cost function \(c(z,\theta)\) has polynomial growth and our action space \(\Theta\) is bounded, the steady-state average cost \[\xi^{u}\equiv\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz) \tag{22}\] is well defined and finite under any admissible policy \(u\). The objective in our ergodic control problem is to find an admissible policy \(u\) for which \(\xi^{u}\) is minimal. 
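The stability condition (20) is straightforward to check numerically: since \(R\) is an M-matrix, \(R^{-1}\) exists (and equals the Neumann series \(I+Q+Q^{2}+\cdots\)), so one simply tests whether \(R^{-1}\theta\) is componentwise positive. A minimal sketch, with an illustrative \(R\) and \(\theta\) of our own choosing:

```python
import numpy as np

def is_stable(R, theta):
    """Check the stability condition (20): R^{-1} theta > 0 componentwise."""
    return bool((np.linalg.solve(R, theta) > 0).all())

R = np.array([[1.0, 0.0], [-0.5, 1.0]])      # R = I - Q with Q >= 0, rho(Q) < 1
print(is_stable(R, np.array([1.0, 1.0])))    # R^{-1} theta = (1, 1.5) -> True
print(is_stable(R, np.array([1.0, -0.6])))   # R^{-1} theta = (1, -0.1) -> False
```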
To solve for the steady-state average cost \(\xi^{u}\) and the corresponding _relative value function_, denoted by \(v^{u}(\cdot)\), under an admissible policy \(u\), a standard argument gives the following PDE: \[\mathcal{L}v^{u}(z)-u(z)\cdot\nabla v^{u}(z)+c(z,u(z))=\xi^{u}\text{ for each }z\in\mathbb{R}^{d}_{+}, \tag{23}\] with boundary conditions \[\mathcal{D}_{i}v^{u}(z)=0\text{ if }z_{i}=0\text{ }(i=1,2,\ldots,d). \tag{24}\] The HJB equation for ergodic control is again of a standard form, involving a constant \(\xi\) (interpreted as the minimum achievable steady-state average cost) and a relative value function \(v:\mathbb{R}^{d}_{+}\to\mathbb{R}\). To be specific, the HJB equation is \[\mathcal{L}v(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla v(z)-c(z,\theta)\right\}=\xi\text{ for each }z\in\mathbb{R}^{d}_{+}, \tag{25}\] with boundary conditions \[\mathcal{D}_{i}v(z)=0\text{ if }z_{i}=0\text{ }(i=1,2,\ldots,d). \tag{26}\] Paralleling the previous development for discounted control, we show the following in Appendix B.2: if a \(C^{2}\) function \(v\) and a constant \(\xi\) jointly satisfy (25)-(26), then \[\xi=\inf_{u\in\mathcal{U}}\xi^{u}, \tag{27}\] where \(\mathcal{U}\) denotes the set of admissible controls for the ergodic cost formulation. Moreover, the policy \[u^{*}(z)=\arg\max_{\theta\in\Theta}\left\{\theta\cdot\nabla v(z)-c(z,\theta)\right\},\ z\in\mathbb{R}^{d}_{+}, \tag{28}\] is optimal, meaning that \(\xi^{u^{*}}=\xi.\) Again paralleling the previous development for discounted control, there is no attempt to prove that such a solution for (25)-(26) exists. In Appendix B.2 we use (7) to verify that a sufficiently regular solution of the PDE (23)-(24) does in fact satisfy (22) as intended, and similarly, that a sufficiently regular solution of (25)-(26) does in fact satisfy (27).

## 4 Equivalent SDEs

In this section we prove two key identities, Equations (31) and (40) below, that are closely patterned after results used by Han et al. [17] to justify their "deep BSDE method" for solution of certain non-linear PDEs. That earlier work provided both inspiration and detailed guidance for our study, but we include these derivations to make the current account as nearly self-contained as possible. Sections 4.1 and 4.2 treat the discounted and ergodic cases, respectively. Our method begins by specifying what we call a _reference policy_. This is a nominal or default policy, specified at the outset but possibly revised in light of computational experience, that we use to generate sample paths of the controlled RBM \(Z\). Roughly speaking, one wants to choose the reference policy so that its paths tend to occupy parts of the state space thought to be most frequently visited by an optimal policy.

### Discounted control

Our reference policy for the discounted case chooses a constant action \(u(z)=\tilde{\theta}>0\) in every state \(z\in\mathbb{R}_{+}^{d}\). (Again we stress that, given the non-standard sign convention embodied in (8), this means that \(\tilde{Z}\) has a constant drift vector \(-\tilde{\theta}\), with all components negative.) Thus the corresponding _reference process_ \(\tilde{Z}\) is a \(d\)-dimensional RBM which, in combination with its \(d\)-dimensional pushing process \(\tilde{Y}\) and the \(d\)-dimensional Brownian motion \(W\) defined in Section 2, satisfies \[\tilde{Z}(t)=\tilde{Z}(0)+W(t)-\tilde{\theta}\,t+R\,\tilde{Y}(t),\ t\geq 0, \tag{29}\] plus the obvious analogs of Equations (3) and (4).
For the key identity (31) below, let \[F(z,x)=\tilde{\theta}\cdot x-\max_{\theta\in\Theta}\left\{\theta\cdot x-c(z,\theta)\right\}\text{ for }z\in\mathbb{R}_{+}^{d}\text{ and }x\in\mathbb{R}^{d}. \tag{30}\] **Proposition 2**.: _If \(V\left(\cdot\right)\) satisfies the HJB equation (16) - (17), then it also satisfies the following identity almost surely for any \(T>0\):_ \[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))=\int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}(t))\cdot\mathrm{d}W(t)-\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),\nabla V(\tilde{Z}(t)))\mathrm{d}t. \tag{31}\] Proof.: Applying Ito's formula to \(e^{-rt}V(\tilde{Z}(t))\) and using Equation (7) yields \[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0)) = \int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}e^{-rt}\mathcal{D}V(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)\] \[+\int_{0}^{T}e^{-rt}\left(\mathcal{L}V(\tilde{Z}(t))-\tilde{\theta}\cdot\nabla V(\tilde{Z}(t))-rV(\tilde{Z}(t))\right)\mathrm{d}t. \tag{32}\] Using the HJB boundary condition (17), plus the complementarity condition (4) for \(\tilde{Y}\) and \(\tilde{Z},\) one has \[\int_{0}^{T}e^{-rt}\mathcal{D}V(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)=0.\] Furthermore, substituting \(z=\tilde{Z}(t)\) in the HJB equation (16), multiplying both sides by \(e^{-rt},\) rearranging the terms, and integrating over \([0,T]\) yields \[\int_{0}^{T}e^{-rt}\left(\mathcal{L}V(\tilde{Z}(t))-rV(\tilde{Z}(t))\right)\mathrm{d}t=\int_{0}^{T}e^{-rt}\max_{\theta\in\Theta}\left(\theta\cdot\nabla V(\tilde{Z}(t))-c(\tilde{Z}(t),\theta)\right)\mathrm{d}t. \tag{33}\] Substituting Equation (33) into Equation (32) gives Equation (31). Proposition 2 provides the motivation for the loss function that we strive to minimize in our computational method (see Section 5). Before developing that approach, we prove the following, which can be viewed as a converse of Proposition 2. **Proposition 3**.: _Suppose that \(V:\mathbb{R}_{+}^{d}\rightarrow\mathbb{R}\) is a \(C^{2}\) function, \(G:\mathbb{R}_{+}^{d}\rightarrow\mathbb{R}^{d}\) is continuous, and \(V,\)\(\nabla V\), and \(G\) all have polynomial growth. Also assume that the following identity holds almost surely for some fixed \(T>0\) and every \(Z(0)=z\in\mathbb{R}_{+}^{d}\):_ \[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))=\int_{0}^{T}e^{-rt}G(\tilde{Z}(t))\cdot\mathrm{d}W(t)-\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G(\tilde{Z}(t)))\,\mathrm{d}t. \tag{34}\] _Then \(G(\cdot)=\nabla V(\cdot)\) and \(V\) satisfies the HJB equation (16) - (17)._ **Remark 1**.: The surprising conclusion that (34) implies \(\nabla V(\cdot)=G(\cdot),\) without any _a priori_ relationship between \(G\) and \(\nabla V\) being assumed, motivates the "double parametrization" method in Section 5. Proof of Proposition 3.: Because \(\tilde{Z}\) is a time-homogeneous Markov process, we can express (34) equivalently as follows for any \(k=0,1,\ldots\) : \[e^{-rT}V(\tilde{Z}((k+1)T))-V(\tilde{Z}(kT)) =\int_{kT}^{(k+1)T}e^{-r(t-kT)}G(\tilde{Z}(t))\cdot dW(t)\] \[-\int_{kT}^{(k+1)T}e^{-r(t-kT)}F(\tilde{Z}(t),G(\tilde{Z}(t)))\,dt. \tag{35}\] Now multiply both sides of (35) by \(e^{-rkT},\) then add the resulting relationships for \(k=0,1,\ldots,n-1\) to arrive at the following: \[e^{-rnT}V(\tilde{Z}(nT))=V(\tilde{Z}(0))+\int_{0}^{nT}e^{-rt}G(\tilde{Z}(t))\cdot dW(t)-\int_{0}^{nT}e^{-rt}F(\tilde{Z}(t),G(\tilde{Z}(t)))dt. \tag{36}\] Because \(G\) has polynomial growth, one can show that \[\mathbb{E}_{z}\left[\int_{0}^{nT}e^{-2rt}\left|G(\tilde{Z}(t))\right|^{2}\mathrm{d}t\right]<\infty\] for all \(n\geq 1\).
Thus, when we take \(\mathbb{E}_{z}\) of both sides of (36), the stochastic integral (that is, the second term) on the right side vanishes, and then rearranging terms gives the following: \[V(z)=e^{-rnT}\mathbb{E}_{z}\left[V(\tilde{Z}(nT))\right]+\mathbb{E}_{z}\left[\int_{0}^{nT}e^{-rt}F(\tilde{Z}(t),G(\tilde{Z}(t)))\mathrm{d}t\right],\] for an arbitrary positive integer \(n\). By Proposition 1 and the polynomial growth condition on \(V\), we have \(e^{-rnT}\mathbb{E}_{z}[V(\tilde{Z}(nT))]\to 0\) as \(n\to\infty\). Therefore, \[V(z)=\lim_{n\to\infty}\mathbb{E}_{z}\left[\int_{0}^{nT}e^{-rt}F\left(\tilde{Z}(t),G(\tilde{Z}(t))\right)\mathrm{d}t\right]\text{ for }z\in\mathbb{R}_{+}^{d}.\] Similarly, since \(F\) and \(G\) have polynomial growth, we conclude that \[\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\left|F\left(\tilde{Z}(t),G(\tilde{Z}(t))\right)\right|\mathrm{d}t\right]<+\infty\text{ for }z\in\mathbb{R}_{+}^{d},\text{ and}\] \[\int_{0}^{nT}e^{-rt}F\left(\tilde{Z}(t),G(\tilde{Z}(t))\right)\mathrm{d}t\leq\int_{0}^{\infty}e^{-rt}\left|F\left(\tilde{Z}(t),G(\tilde{Z}(t))\right)\right|\mathrm{d}t<+\infty\text{ for }z\in\mathbb{R}_{+}^{d}.\] Thus, by dominated convergence, we have \[V(z)=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}F\left(\tilde{Z}(t),G(\tilde{Z}(t))\right)\mathrm{d}t\right]\text{ for }z\in\mathbb{R}_{+}^{d}. \tag{37}\] In other words, \(V(z)\) can be viewed as the expected discounted cost associated with the RBM under the reference policy starting in state \(\tilde{Z}(0)=z\), where \(F\left(\cdot,G(\cdot)\right)\) is the state-cost function. Therefore, it follows from Equations (14) - (15), applied with \(u(z)=\tilde{\theta}\) and with the state-cost function \(c(z,u(z))\) replaced by \(F(z,G(z))\) for \(z\in\mathbb{R}_{+}^{d}\), that \(V\) satisfies the following PDE: \[\mathcal{L}V(z)-\tilde{\theta}\cdot\nabla V(z)+F\left(z,G(z)\right)=rV(z),\ z\in\mathbb{R}_{+}^{d}, \tag{38}\] with boundary conditions (15), and that it has polynomial growth. Suppose that \(G(\cdot)=\nabla V(\cdot)\) (which we will prove later). Substituting this into Equation (38) and using the definition of \(F\), it follows that \[\mathcal{L}V(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z,\theta)\right\}=rV(z),\ z\in\mathbb{R}_{+}^{d},\] which along with the boundary condition (15) gives the desired result. To complete the proof, it remains to show that \(G(\cdot)=\nabla V(\cdot).\) By applying Ito's formula to \(e^{-rt}V(\tilde{Z}(t))\) and using Equations (3)-(4) and (15), we conclude that \[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))= \int_{0}^{T}e^{-rt}\left(\mathcal{L}V(\tilde{Z}(t))-\tilde{\theta}\cdot\nabla V(\tilde{Z}(t))-rV(\tilde{Z}(t))\right)\mathrm{d}t\] \[+\int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}(t))\cdot\mathrm{d}W(t).\] Then, using Equation (38), we rewrite the preceding equation as follows: \[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))=\int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}(t))\cdot\mathrm{d}W(t)-\int_{0}^{T}e^{-rt}F\left(\tilde{Z}(t),G(\tilde{Z}(t))\right)\mathrm{d}t.\] Comparing this with Equation (34) yields \[\int_{0}^{T}e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\cdot\mathrm{d}W(t)=0,\] which yields the following: \[\mathbb{E}_{z}\left[\left(\int_{0}^{T}e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\cdot\mathrm{d}W(t)\right)^{2}\right]=0.
\tag{39}\] Thus, provided that \(e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\) is square integrable, Ito's isometry [48, Lemma D.1] yields the following: \[\mathbb{E}_{z}\left[\left(\int_{0}^{T}e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\cdot\mathrm{d}W(t)\right)^{2}\right]=\mathbb{E}_{z}\left[\int_{0}^{T}\left\|e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\right\|_{A}^{2}\mathrm{d}t\right]=0,\] where \(\|x\|_{A}^{2}:=x^{\top}Ax\). The square integrability of \(e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\) follows because \(G\) and \(\nabla V\) have polynomial growth and the action space \(\Theta\) is bounded. Because \(A\) is a positive definite matrix, we then have \(\nabla V(\tilde{Z}(t))=G(\tilde{Z}(t))\) almost surely. By the continuity of \(\nabla V\left(\cdot\right)\) and \(G(\cdot)\), we conclude that \(\nabla V(\cdot)=G(\cdot)\).

### Ergodic control

Again we use a reference policy with constant (negative) drift vector \(\tilde{\theta}\), and now we assume that \(R^{-1}\tilde{\theta}>0\), which ensures that the reference policy is admissible for our ergodic control formulation. For the following analogs of Propositions 2 and 3, let \[f(z,x)=\tilde{\theta}\cdot x-\max_{\theta\in\Theta}\left\{\theta\cdot x-c(z,\theta)\right\}\text{ for }x\in\mathbb{R}^{d},\,z\in\mathbb{R}^{d}_{+}.\] **Proposition 4**.: _If \(v\left(\cdot\right)\) and \(\xi\) solve the HJB equation (25) - (26), then we have_ \[v(\tilde{Z}(T))-v(\tilde{Z}(0))=\int_{0}^{T}\nabla v(\tilde{Z}(t))\cdot\mathrm{d}W(t)+T\xi-\int_{0}^{T}f(\tilde{Z}(t),\nabla v(\tilde{Z}(t)))\,\mathrm{d}t. \tag{40}\] Proof.: Applying Ito's formula to \(v(z)\) yields \[v(\tilde{Z}(T)) -v(\tilde{Z}(0)) \tag{41}\] \[=\int_{0}^{T}\nabla v(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}\mathcal{D}v(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)+\int_{0}^{T}\left(\mathcal{L}v(\tilde{Z}(t))-\tilde{\theta}\cdot\nabla v(\tilde{Z}(t))\right)\mathrm{d}t.\] Recall that the boundary condition of the HJB equation is \(\mathcal{D}_{j}v(z)=0\) if \(z_{j}=0\). Thus Equations (3)-(4) jointly imply \[\int_{0}^{T}\mathcal{D}v(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)=0.\] Then, substituting the HJB equation (25) into Equation (41) gives (40). **Proposition 5**.: _Suppose that \(v:\mathbb{R}^{d}_{+}\to\mathbb{R}\) is a \(C^{2}\) function, \(g:\mathbb{R}^{d}_{+}\to\mathbb{R}^{d}\) is continuous, and \(v,\,\nabla v,\,g\) all have polynomial growth. Also assume that the following identity holds almost surely for some fixed \(T>0\), a scalar \(\xi\) and every \(Z(0)=z\in\mathbb{R}^{d}_{+}\):_ \[v(\tilde{Z}(T))-v(\tilde{Z}(0))=\int_{0}^{T}g(\tilde{Z}(t))\cdot\mathrm{d}W(t)+T\xi-\int_{0}^{T}f(\tilde{Z}(t),g(\tilde{Z}(t)))\,\mathrm{d}t. \tag{42}\] _Then, \(g(\cdot)=\nabla v(\cdot)\) and \((v,\xi)\) satisfies the HJB equation (25) - (26)._ Proof.: Let \(\tilde{\pi}\) be the stationary distribution of the RBM \(\tilde{Z}\) under the reference policy and \(\tilde{Z}(\infty)\) be a random variable with the distribution \(\tilde{\pi}\). Then, assuming the initial distribution of the RBM under the reference policy is \(\tilde{\pi}\), i.e. \(\tilde{Z}(0)\sim\tilde{\pi}\), its marginal distribution at time \(t\) is also \(\tilde{\pi}\), i.e. \(\tilde{Z}(t)\sim\tilde{\pi}\) for every \(t\geq 0\). Because \(g\) has polynomial growth, one can show that the expectation of the stochastic integral (that is, the first term) on the right side of (42) vanishes.
Then, by taking the expectation over \(\tilde{Z}(0)\sim\tilde{\pi}\), Equation (42) implies \[\mathbb{E}_{\tilde{\pi}}\left[v(\tilde{Z}(0))\right]=\mathbb{E}_{\tilde{\pi}}\left[v(\tilde{Z}(T))\right]+\mathbb{E}_{\tilde{\pi}}\left[\int_{0}^{T}f(\tilde{Z}(t),g(\tilde{Z}(t)))\mathrm{d}t\right]-T\xi. \tag{43}\] By observing that \(\mathbb{E}_{\tilde{\pi}}[v(\tilde{Z}(0))]=\mathbb{E}_{\tilde{\pi}}[v(\tilde{Z}(T))]\) and \[\mathbb{E}_{\tilde{\pi}}\left[f(\tilde{Z}(t),g(\tilde{Z}(t)))\right] = \mathbb{E}\left[f(\tilde{Z}(\infty),g(\tilde{Z}(\infty)))\right]\text{ for }t\geq 0,\] we conclude that \(\xi=\mathbb{E}[f(\tilde{Z}(\infty),g(\tilde{Z}(\infty)))]\). In other words, \(\xi\) can be viewed as the expected steady-state cost associated with the RBM under the reference policy, where \(f\left(\cdot,g(\cdot)\right)\) is the state-cost function. Therefore, it follows from Equations (23) - (24), applied with \(u(z)=\tilde{\theta}\) and with the state-cost function \(c(z,u(z))\) replaced by \(f(z,g(z))\) for \(z\in\mathbb{R}_{+}^{d}\), that there exists a \(C^{2}\) _relative value function_ \(\tilde{v}\) with polynomial growth that satisfies the following PDE: \[\mathcal{L}\tilde{v}(z)-\tilde{\theta}\cdot\nabla\tilde{v}(z)+f\left(z,g(z)\right)=\xi,\ \ z\in\mathbb{R}_{+}^{d}, \tag{44}\] with boundary conditions \(\mathcal{D}_{i}\tilde{v}(z)=0\) if \(z_{i}=0\ \ (i=1,\ldots,d)\). Furthermore, applying Ito's formula to \(\tilde{v}(\tilde{Z}(t))\) yields \[\tilde{v}(\tilde{Z}(T))- \tilde{v}(\tilde{Z}(0)) \tag{45}\] \[=\int_{0}^{T}\nabla\tilde{v}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}\mathcal{D}\tilde{v}(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)+\int_{0}^{T}\left(\mathcal{L}\tilde{v}(\tilde{Z}(t))-\tilde{\theta}\cdot\nabla\tilde{v}(\tilde{Z}(t))\right)\mathrm{d}t.\] Since \(\tilde{v}(z)\) also satisfies the boundary conditions \(\mathcal{D}_{i}\tilde{v}(z)=0\) if \(z_{i}=0\ \ (i=1,\ldots,d)\), it follows from Equations (3)-(4) that \[\int_{0}^{T}\mathcal{D}\tilde{v}(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)=0.\] Then, substituting Equation (44) into Equation (45) gives \[\tilde{v}(\tilde{Z}(T))-\tilde{v}(\tilde{Z}(0))=\int_{0}^{T}\nabla\tilde{v}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+T\xi-\int_{0}^{T}f(\tilde{Z}(t),g(\tilde{Z}(t)))\,\mathrm{d}t. \tag{46}\] In the proof of Proposition 3, we first showed that, because \(\tilde{Z}\) is a time-homogeneous Markov process, the assumed stochastic relationship (34) can be extended to the more general form (36) with \(n\) an arbitrary positive integer. In the current context one can argue in exactly the same way to establish the following. First, the assumed stochastic relationship (42) actually holds in the more general form where \(T\) is replaced by \(nT\), with \(n\) an arbitrary positive integer. As a consequence, the derived stochastic relationship (46) also holds with \(nT\) in place of \(T\). And finally, after taking expectations on both sides of those generalized versions of (42) and (46) we arrive at the following: \[v(z) = \mathbb{E}_{z}\left[v(\tilde{Z}(nT))\right]-nT\xi+\mathbb{E}_{z}\left[\int_{0}^{nT}f(\tilde{Z}(t),g(\tilde{Z}(t)))\mathrm{d}t\right],\ \text{and} \tag{47}\] \[\tilde{v}(z) = \mathbb{E}_{z}\left[\tilde{v}(\tilde{Z}(nT))\right]-nT\xi+\mathbb{E}_{z}\left[\int_{0}^{nT}f(\tilde{Z}(t),g(\tilde{Z}(t)))\mathrm{d}t\right], \tag{48}\] for \(z\in\mathbb{R}_{+}^{d}\) and an arbitrary positive integer \(n.\) Note that the expectation of the stochastic integral vanishes because \(\nabla\tilde{v}\) has polynomial growth.
Subtracting (48) from (47) further yields \[v(z)-\tilde{v}(z)=\mathbb{E}_{z}\left[v(\tilde{Z}(nT))\right]-\mathbb{E}_{z}\left[\tilde{v}(\tilde{Z}(nT))\right].\] Without loss of generality, we assume \(\mathbb{E}\left[\tilde{v}(\tilde{Z}(\infty))\right]=\mathbb{E}\left[v(\tilde{Z}(\infty))\right]\). Since \(v(\cdot)\) and \(\tilde{v}(\cdot)\) have polynomial growth and the reference policy is admissible, we have that \(\sup_{n>0}\mathbb{E}_{z}\left[\left(v(\tilde{Z}(nT))\right)^{2}\right]<\infty\) and \(\sup_{n>0}\mathbb{E}_{z}\left[\left(\tilde{v}(\tilde{Z}(nT))\right)^{2}\right]<\infty\) by Equation (21). Then, by the Vitali convergence theorem, we have \[\lim_{n\rightarrow+\infty}\mathbb{E}_{z}\left[v(\tilde{Z}(nT))\right] = \mathbb{E}\left[v(\tilde{Z}(\infty))\right]\text{ and }\] \[\lim_{n\rightarrow+\infty}\mathbb{E}_{z}\left[\tilde{v}(\tilde{Z}(nT))\right] = \mathbb{E}\left[\tilde{v}(\tilde{Z}(\infty))\right].\] Therefore, we have \[v(z)-\tilde{v}(z)=\lim_{n\rightarrow+\infty}\left(\mathbb{E}_{z}\left[v(\tilde{Z}(nT))\right]-\mathbb{E}_{z}\left[\tilde{v}(\tilde{Z}(nT))\right]\right)=0\text{ for }z\in\mathbb{R}_{+}^{d},\] which means \(v(\cdot)\) also satisfies the PDE (44) and the associated boundary conditions. That is, \[\mathcal{L}v(z)-\tilde{\theta}\cdot\nabla v(z)+f\left(z,g(z)\right)=\xi,\text{ for }z\in\mathbb{R}_{+}^{d}. \tag{49}\] \[\mathcal{D}_{i}v(z)=0\text{ if }z_{i}=0\text{ \ }(i=1,\ldots,d). \tag{50}\] Suppose that \(g(\cdot)=\nabla v(\cdot)\) (which we will prove later). Substituting this into Equation (49) and using the definition of \(f\), it follows that \[\mathcal{L}v(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla v(z)-c(z,\theta)\right\}=\xi,\text{ \ }z\in\mathbb{R}_{+}^{d},\] which along with the boundary condition (50) gives the desired result. To complete the proof, it remains to show that \(g(\cdot)=\nabla v(\cdot).\) By applying Ito's formula to \(v(\tilde{Z}(t))\) and using Equations (3)-(4) and (50), we conclude that \[v(\tilde{Z}(T))-v(\tilde{Z}(0))=\int_{0}^{T}\left(\mathcal{L}v(\tilde{Z}(t))-\tilde{\theta}\cdot\nabla v(\tilde{Z}(t))\right)\mathrm{d}t+\int_{0}^{T}\nabla v(\tilde{Z}(t))\cdot\mathrm{d}W(t).\] Then, using Equation (49), we rewrite the preceding equation as follows: \[v(\tilde{Z}(T))-v(\tilde{Z}(0))=T\xi-\int_{0}^{T}f\left(\tilde{Z}(t),g(\tilde{Z}(t))\right)\mathrm{d}t+\int_{0}^{T}\nabla v(\tilde{Z}(t))\cdot\mathrm{d}W(t).\] Comparing this with Equation (42) yields \[\int_{0}^{T}\left(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\right)\cdot\mathrm{d}W(t)=0,\] which yields the following: \[\mathbb{E}_{z}\left[\left(\int_{0}^{T}\left(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\right)\cdot\mathrm{d}W(t)\right)^{2}\right]=0. \tag{51}\] Thus, provided \(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\) is square integrable, Ito's isometry [48, Lemma D.1] gives the following: \[\mathbb{E}_{z}\left[\left(\int_{0}^{T}\left(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\right)\cdot\mathrm{d}W(t)\right)^{2}\right]=\mathbb{E}_{z}\left[\int_{0}^{T}\left\|g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\right\|_{A}^{2}\mathrm{d}t\right]=0.\] The square integrability of \(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\) follows because \(g\) and \(\nabla v\) have polynomial growth, and \(\mathbb{E}_{z}\left(|\tilde{Z}(t)|^{k}\right)\) is finite for all \(k\) and all \(t\geq 0\) because our action space \(\Theta\) is bounded. Then, since \(A\) is positive definite, \(\nabla v(\tilde{Z}(t))=g(\tilde{Z}(t))\) almost surely. By the continuity of \(\nabla v\left(\cdot\right)\) and \(g(\cdot)\), we conclude that \(\nabla v(\cdot)=g(\cdot)\).
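A remark on computation: the driving term \(F(z,x)\) in (30) (and its ergodic analog \(f\)) is available in closed form whenever \(\Theta\) is a box and the cost is linear in \(\theta\), since the inner maximization then separates across coordinates and is attained at an endpoint. The sketch below, our own illustrative code with assumed box limits and cost weights, evaluates \(F\) this way; an integrand of this kind can then be computed inside the loss of Section 5 without any inner optimization loop.

```python
import numpy as np

def F(z, x, theta_ref, theta_lo, theta_hi, h_vec, c_vec):
    """F(z, x) of (30) for a box action space and the linear cost (61).
    The inner max over theta separates: coordinate k picks the endpoint of
    [theta_lo_k, theta_hi_k] that maximizes theta_k * (x_k - c_k)."""
    slope = x - c_vec
    inner = np.maximum(theta_lo * slope, theta_hi * slope).sum() - h_vec @ z
    return theta_ref @ x - inner

# Illustrative data: d = 2, Theta = [0, b]^2 with b = 2, cf. Section 7.
z, x = np.array([1.0, 0.5]), np.array([0.8, 1.3])
print(F(z, x, theta_ref=np.ones(2), theta_lo=np.zeros(2),
        theta_hi=2 * np.ones(2), h_vec=np.array([2.0, 1.9]), c_vec=np.ones(2)))
```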
## 5 Computational method

We follow in the footsteps of Han et al. [17], who developed a computational method to solve semilinear parabolic partial differential equations (PDEs). Those authors focused on a backward stochastic differential equation (BSDE) associated with their PDE, and in similar fashion, we focus on the stochastic differential equations (34) and (42) that are associated with our two stochastic control formulations (see Section 4). Our method differs from that of Han et al. [17], because they consider PDEs on a finite-time interval with an unbounded state space and a specified terminal condition, whereas our stochastic control problem has an infinite time horizon and state space constraints. As such, it leads to a PDE on a polyhedral domain with oblique derivative boundary conditions. We modify the approach of [17] to incorporate those additional features, treating the discounted and ergodic formulations in Sections 5.1 and 5.2, respectively.

### Discounted control

We approximate the value function \(V(\cdot)\) and its gradient \(\nabla V(\cdot)\) by deep neural networks \(V_{w_{1}}(\cdot)\) and \(G_{w_{2}}(\cdot)\), respectively, with associated parameter vectors \(w_{1}\) and \(w_{2}\). Seeking an approximate solution of the stochastic equation (34), we define the loss function \[\ell(w_{1},w_{2}) = \mathbb{E}\left[\left(e^{-rT}\,V_{w_{1}}(\tilde{Z}(T))-V_{w_{1}}(\tilde{Z}(0))\right.\right.\] \[\left.\left.-\int_{0}^{T}e^{-rt}G_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))\,\mathrm{d}t\right)^{2}\right]. \tag{52}\] Here the expectation is taken with respect to the sample path distribution of our reference process \(\tilde{Z}\), which will be specified in Algorithm 3 below. Our definition (52) of the loss function does not explicitly enforce the consistency requirement \(\nabla V_{w_{1}}(\cdot)=G_{w_{2}}(\cdot)\), but Proposition 3 provides the justification for this separate parametrization. This type of double parametrization has also been implemented by Zhou et al. [49]. Our computational method seeks a neural network parameter combination \((w_{1},w_{2})\) that minimizes an approximation of the loss defined via (52). Specifically, we first simulate multiple discretized paths of the reference RBM \(\tilde{Z}\), restricted to a fixed and finite time domain \([0,T]\). To do that, we sample discretized paths of the underlying Brownian motion \(W\), and then solve a discretized Skorokhod problem for each path of \(W\) (this is the purpose of Subroutine 2) to obtain the corresponding path of \(\tilde{Z}\). Thereafter, our method computes a discretized version of the loss (52), summing over sampled paths to approximate the expectation and over discrete time steps to approximate the integral over \([0,T]\), and minimizes it using stochastic gradient descent; see Algorithm 3. In Subroutine 2, given the index set \(B\), \(R_{B,B}\) is the submatrix derived by deleting the rows and columns of \(R\) with indices in \(\{1,\ldots,d\}\backslash B\). Similarly, \(R_{:,B}\) is the matrix that one arrives at by deleting the columns of \(R\) whose indices are in the set \(\{1,\ldots,d\}\backslash B\).
```
Input: A vector \(x\in\mathbb{R}^{d}\) and the reflection matrix \(R\).
Output: A solution \(y\in\mathbb{R}^{d}_{+}\) to the Skorokhod problem.
1: Set \(\epsilon=10^{-8}\);
2: function Skorokhod(\(x\))
3:   \(y=x\);
4:   while there exists \(y_{i}<-\epsilon\) do
5:     Compute the set \(B=\{i:y_{i}<\epsilon\}\);
6:     Compute \(L_{B}=-R_{B,B}^{-1}x_{B}\);
7:     Compute \(y=x+R_{:,B}\times L_{B}\);
8:   end while
9:   return \(y\).
10: end function
```
**Subroutine 2** Solving the discretized Skorokhod problem

**Subroutine 1** (the Euler discretization scheme, invoked as Discretize\((T,h,z)\) in Algorithms 3 and 4) simulates a discretized path of the reference RBM started at \(z\): over each step of size \(h\) it samples a Brownian increment, subtracts the drift \(\tilde{\theta}h\), and applies Subroutine 2 to map the resulting point back into the orthant.

After the parameter values \(w_{1}\) and \(w_{2}\) have been determined, our proposed policy is as follows: \[\theta_{w_{2}}(z)=\arg\max_{\theta\in\Theta}\left(\theta\cdot G_{w_{2}}(z)-c(z,\theta)\right),\ z\in\mathbb{R}^{d}_{+}. \tag{54}\] **Remark 2**.: One can also consider the policy using \(\nabla V_{w_{1}}(\cdot)\) instead of \(G_{w_{2}}(\cdot).\) That is, \[\arg\max_{\theta\in\Theta}\left(\theta\cdot\nabla V_{w_{1}}(z)-c(z,\theta)\right),\ z\in\mathbb{R}_{+}^{d}. \tag{55}\] However, our numerical experiments suggest that this policy is inferior to (54).

### Ergodic control

We parametrize \(v(\cdot)\) and \(\nabla v(\cdot)\) using deep neural networks \(v_{w_{1}}(\cdot)\) and \(g_{w_{2}}(\cdot)\) with parameters \(w_{1}\) and \(w_{2}\), respectively, and then use Equation (42) to define an auxiliary loss function \[\tilde{\ell}(w_{1},w_{2},\xi) = \mathbb{E}\left[\left(v_{w_{1}}(\tilde{Z}(T))-v_{w_{1}}(\tilde{Z}(0))\right.\right.\] \[\left.\left.-\int_{0}^{T}g_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)-T\xi+\int_{0}^{T}f\left(\tilde{Z}(t),g_{w_{2}}\left(\tilde{Z}(t)\right)\right)\mathrm{d}t\right)^{2}\right]. \tag{56}\] Then, defining the loss function \(\ell(w_{1},w_{2})=\min_{\xi}\tilde{\ell}(w_{1},w_{2},\xi)\) and noting that \[\mathrm{Var}\left(X\right)=\min_{\xi}\mathbb{E}[\left(X-\xi\right)^{2}],\] we arrive at the following expression for the loss function: \[\ell(w_{1},w_{2})=\mathrm{Var}\left(v_{w_{1}}(\tilde{Z}(T))-v_{w_{1}}(\tilde{Z}(0))-\int_{0}^{T}g_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}f\left(\tilde{Z}(t),g_{w_{2}}\left(\tilde{Z}(t)\right)\right)\mathrm{d}t\right). \tag{57}\] We present our method for the ergodic control case formally in Algorithm 4.
```
Input: The number of iteration steps \(M\), a batch size \(B\), a learning rate \(\alpha\), a time horizon \(T\), a discretization step-size \(h\) (for simplicity, we assume \(N\triangleq T/h\) is an integer), a starting point \(z\), and an optimization solver (SGD, ADAM, RMSProp, etc).
Output: A neural network approximation of the value function \(v_{w_{1}}\) and the gradient function \(g_{w_{2}}\).
1: Initialize the neural networks \(v_{w_{1}}\) and \(g_{w_{2}}\); set \(z_{0}^{(i)}=z\) for \(i=1,2,...,B\).
2: for \(k\gets 0\) to \(M-1\) do
3:   Simulate \(B\) discretized RBM paths and the Brownian increments \(\{\tilde{Z}^{(i)},\delta^{(i)}\}\) with a time horizon \(T\) and a discretization step-size \(h\) starting from \(\tilde{Z}^{(i)}(0)=z_{k}^{(i)}\) by invoking Discretize\((T,h,z_{k}^{(i)})\), for \(i=1,2,...,B\).
4:   Compute the empirical loss \[\hat{\ell}(w_{1},w_{2}) = \widehat{\mathrm{Var}}\left(v_{w_{1}}(\tilde{Z}^{(i)}(T))-v_{w_{1}}(\tilde{Z}^{(i)}(0))-\sum_{j=0}^{N-1}g_{w_{2}}(\tilde{Z}^{(i)}(hj))\cdot\delta_{j}^{(i)}\right.\] \[\left.+\sum_{j=0}^{N-1}f\left(\tilde{Z}^{(i)}(hj),g_{w_{2}}\left(\tilde{Z}^{(i)}(hj)\right)\right)h\right).\]
5:   Compute the gradient \(\partial\hat{\ell}(w_{1},w_{2})/\partial w_{1},\partial\hat{\ell}(w_{1},w_{2})/\partial w_{2}\) and update \(w_{1},w_{2}\) using the chosen optimization solver.
6:   Update \(z_{k+1}^{(i)}\) as the end point of the path \(\tilde{Z}^{(i)}\): \(z_{k+1}^{(i)}\leftarrow\tilde{Z}^{(i)}(T)\).
7: end for
8: return Functions \(v_{w_{1}}(\cdot)\) and \(g_{w_{2}}(\cdot)\).
```
**Algorithm 4** Method for the ergodic control case

After the parameter values \(w_{1}\) and \(w_{2}\) have been determined, our proposed policy is the following: \[\bar{\theta}_{w_{2}}(z)=\arg\max_{\theta\in\Theta}\left(\theta\cdot g_{w_{2}}(z)-c(z,\theta)\right),\ z\in\mathbb{R}_{+}^{d}.\]

## 6 Three families of test problems

Here we specify three families of test problems for which numerical results will be presented later (see Section 7). Each family consists of RBM drift control problems indexed by \(d=1,2,\dots\), where \(d\) is the dimension of the orthant that serves as the problem's state space. The first of the three problem families, specified in Section 6.1, is characterized by a feed-forward network structure and a linear cost of control. Recapitulating earlier work by Ata [2], Section 6.2 explains the interpretation of such problems as "heavy traffic" limits of input control problems for certain feed-forward queueing networks. Our second family of test problems is identical to the first one except that now the cost of control is quadratic rather than linear. The exact meaning of that phrase will be spelled out in Section 6.3, where we also explain the interpretation of such problems as heavy traffic limits of dynamic pricing problems for queueing networks. In Section 6.4, we describe two parametric families of policies with special structure that will be used later for comparison purposes in our numerical study. Finally, Section 6.5 specifies our third family of test problems, which have a separable structure that allows them to be solved exactly by analytical means. Such problems are of obvious value for evaluating the accuracy of our computational method.

### Main example with linear cost of control

We consider a family of test problems indexed by \(K=0,1,\ldots\), attaching to each such problem the index \(d=K+1\) (mnemonic for _dimension_). Problem \(d\) has state space \(\mathbb{R}^{d}_{+}\) and the \(d\times d\) reflection matrix \[R=\left[\begin{array}{cccc}1&&&\\ -p_{1}&1&&&\\ \vdots&&\ddots&\\ -p_{K}&&&1\end{array}\right], \tag{59}\] where \(p_{1},\ldots,p_{K}>0\) and \(p_{1}+\cdots+p_{K}=1\). Also, the set of drift vectors available in each state is \[\Theta=\prod_{k=0}^{K}\left[\underline{\theta}_{k},\overline{\theta}_{k}\right], \tag{60}\] where the lower limit \(\underline{\theta}_{k}\) and upper limit \(\overline{\theta}_{k}\) are as specified in Section 6.2 below. Similarly, the \(d\times d\) covariance matrix \(A\) for problem \(d\) is as specified in Section 6.2. Finally, the cost function for problem \(d\) has the linear form \[c(z,\theta)=h^{\top}z+c^{\top}\theta\ \ \mbox{where}\ \ h,\,c\in\mathbb{R}^{d}_{+}. \tag{61}\] That is, the cost rate \(c(Z(t),u(Z(t)))\) that the system manager incurs under policy \(u\) at time \(t\) is linear in both the state vector \(Z(t)\) and the chosen drift rate \(u(Z(t))\). In either the discounted control setting or the ergodic control setting, inspection of the HJB equation displayed earlier in Section 3 shows that, given this linear cost structure, there exists an optimal policy \(u^{*}(\cdot)\) such that \[\mbox{either}\ \ u^{*}_{k}(z)=\underline{\theta}_{k}\ \ \mbox{or}\ \ u^{*}_{k}(z)=\overline{\theta}_{k} \tag{62}\] for each state \(z\in\mathbb{R}^{K+1}_{+}\) and each component \(k=0,1,\ldots,K\).
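In code, the bang-bang structure (62) amounts to a componentwise threshold test: maximizing \(\theta\cdot\nabla V(z)-c^{\top}\theta\) over the box \(\Theta\) selects \(\overline{\theta}_{k}\) exactly when the \(k\)-th partial of \(V\) exceeds \(c_{k}\) (this criterion is made explicit in Section 6.4). A minimal sketch, where `grad_V` is a stand-in of our own for a learned gradient network such as \(G_{w_{2}}\):

```python
import numpy as np

def bang_bang_policy(z, grad_V, theta_lo, theta_hi, c_vec):
    """Optimal action (62) for linear cost: use the upper drift rate in
    coordinate k exactly when the k-th partial of V exceeds the cost c_k."""
    return np.where(grad_V(z) >= c_vec, theta_hi, theta_lo)

# Toy stand-in for a trained gradient network:
grad_V = lambda z: 0.5 * z
print(bang_bang_policy(np.array([3.0, 1.0]), grad_V,
                       theta_lo=np.zeros(2), theta_hi=2 * np.ones(2),
                       c_vec=np.ones(2)))   # -> [2., 0.]
```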
In the next section we explain how drift control problems of the form specified here arise as heavy traffic limits in queueing theory. Strictly speaking, however, that interpretation of the test problems is inessential to the main subject of this paper: the computational results presented in Section 7 can be read without reference to the queueing theoretic interpretations of our test problems.

### Interpretation as heavy traffic limits of queueing network control problems

Let us consider the feed-forward queueing network model of a make-to-order production system portrayed in Figure 1.

Figure 1: A feedforward queueing network with thin arrival streams.

There are \(d=K+1\) buffers, represented by the open-ended rectangles, indexed by \(k=0,1,\ldots,K.\) Each buffer has a dedicated server, represented by the circles in Figure 1. Arriving jobs wait in their designated buffer if the server is busy. Two types of jobs arrive to the system: regular jobs and thin stream jobs. Thin stream jobs have the same service time distributions as the regular jobs, but they differ from the regular jobs in two important ways: First, thin stream jobs can be turned away upon arrival. That is, a system manager can exercise admission control in this manner, but in contrast, she must admit all regular jobs arriving to the system. Second, the volume of thin stream jobs is smaller than that of the regular jobs; see Assumption 1. Regular jobs enter the system only through buffer zero, as shown by the solid arrow pointing to buffer zero in Figure 1. A renewal process \(E=\{E(t):t\geq 0\}\) models the cumulative number of regular jobs arriving to the system over time. We let \(\lambda\) denote the arrival rate and \(a^{2}\) denote the squared coefficient of variation of the interarrival times for the regular jobs. The thin stream jobs arrive to buffer \(k\) (as shown by the dashed arrows in Figure 1) according to the renewal process \(A_{k}=\{A_{k}(t):t\geq 0\}\) for \(k=0,1,\ldots,K\). We let \(\eta_{k}\) denote the arrival rate and \(b_{k}^{2}\) denote the squared coefficient of variation of the interarrival times for renewal process \(A_{k}\). Jobs in buffer \(k\) have i.i.d. general service time distributions with mean \(m_{k}\) and squared coefficient of variation \(s_{k}^{2}\geq 0,\ k=0,1,\ldots,K\); \(\mu_{k}=1/m_{k}\) is the corresponding service rate. We let \(S_{k}=\{S_{k}(t):t\geq 0\}\) denote the renewal process associated with the service completions by server \(k\) for \(k=0,1,\ldots,K\). To be specific, \(S_{k}(t)\) denotes the number of jobs server \(k\) processes by time \(t\) if it incurs no idleness during \([0,t]\). The jobs in each buffer are served on a first-come-first-served (FCFS) basis, and servers work continuously unless their buffer is empty. After receiving service, jobs in buffer zero join buffer \(k\) with probability \(p_{k},\ k=1,2,\ldots,K\), independently of other events. This probabilistic routing structure is captured by a vector-valued process \(\Phi(\cdot)\) where \(\Phi_{k}(\ell)\) denotes the total number of jobs routed to buffer \(k\) among the first \(\ell\) jobs served by server zero for \(k=1,\ldots,K\) and \(\ell\geq 1\). We let \(p=(p_{k})\) denote the \(K\)-dimensional vector of routing probabilities. Jobs in buffers \(1,\ldots,K\) leave the system upon receiving service. As stated earlier, the system manager makes admission control decisions for thin stream jobs. Turning away a thin stream job arriving to buffer \(k\) (externally) results in a penalty of \(c_{k}\).
For mathematical convenience, we model admission control decisions as if the system manager can simply "turn off" each of the thin stream arrival processes as desired. In particular, we let \(\Delta_{k}(t)\) denote the cumulative amount of time that the (external) thin stream input to buffer \(k\) is turned off during the interval \([0,t]\). Thus, the vector-valued process \(\Delta=(\Delta_{k})\) represents the admission control policy. Similarly, we let \(T_{k}(t)\) denote the cumulative amount of time server \(k\) is busy during the time interval \([0,t]\), and \(I_{k}(t)=t-T_{k}(t)\) denotes the cumulative amount of idleness that server \(k\) incurs during \([0,t]\). Letting \(Q_{k}(t)\) denote the number of jobs in buffer \(k\) at time \(t\), the vector-valued process \(Q=(Q_{k})\) will be called the queue-length process. Given a control \(\Delta=(\Delta_{k})\), assuming \(Q(0)=0\), it follows that \[Q_{0}(t) = E(t)+A_{0}(t-\Delta_{0}(t))-S_{0}(T_{0}(t))\geq 0,\ \ t\geq 0, \tag{63}\] \[Q_{k}(t) = A_{k}(t-\Delta_{k}(t))+\Phi_{k}\left(S_{0}(T_{0}(t))\right)-S_{k}(T_{k}(t))\geq 0,\ \ t\geq 0,\ \ k=1,\ldots,K. \tag{64}\] Moreover, the following must hold: \[I(\cdot)\text{ is continuous and nondecreasing with }I(0)=0, \tag{65}\] \[I_{k}(\cdot)\text{ only increases at those times }t\text{ when }Q_{k}(t)=0,\ \ k=0,1,\ldots,K, \tag{66}\] \[\Delta_{k}(t)-\Delta_{k}(s)\leq t-s,\ \ 0\leq s\leq t<\infty,\ \ k=0,1,\ldots,K, \tag{67}\] \[I,\Delta\text{ are non-anticipating.} \tag{68}\] The system manager also incurs a holding cost at rate \(h_{k}\) per job in buffer \(k\) per unit of time. We use the process \(\xi=\{\xi(t),t\geq 0\}\) as a proxy for the cumulative cost under a given admission control policy \(\Delta\left(\cdot\right),\) where \[\xi(t)=\sum_{k=0}^{K}c_{k}\eta_{k}\Delta_{k}(t)+\sum_{k=0}^{K}\int_{0}^{t}h_{k}Q_{k}(s)\,\mathrm{d}s,\ \ t\geq 0.\] This is an approximation of the realized cost because the first term on the right-hand side replaces the admission control penalties actually incurred with their means. In order to derive the approximating Brownian control problem, we consider a sequence of systems indexed by a system parameter \(n=1,2,\ldots\); we attach a superscript of \(n\) to various quantities of interest. Following the approach used by Ata [2], we assume that the sequence of systems satisfies the following heavy traffic assumption. **Assumption 1**.: _For \(n\geq 1,\) we have that_ \[\lambda^{n}=n\lambda,\ \eta_{k}^{n}=\eta_{k}\sqrt{n}\text{ and }\mu_{k}^{n}=n\mu_{k}+\sqrt{n}\beta_{k},\ \ k=0,1,\ldots,K,\] _where \(\lambda,\)\(\mu_{k},\eta_{k}\) and \(\beta_{k}\) are nonnegative constants. Moreover, we assume that_ \[\lambda=\mu_{0}=\frac{\mu_{k}}{p_{k}}\ \text{ for }\ k=1,\ldots,K.\] One starts the approximation procedure by defining suitably centered and scaled processes.
For \(n\geq 1,\) we define \[\hat{E}^{n}(t)=\frac{E^{n}(t)-\lambda^{n}t}{\sqrt{n}}\text{ and }\hat{\Phi}^{n}(q)=\frac{\Phi\left([nq]\right)-p\,[nq]}{\sqrt{n}},\ \ t\geq 0,\ \ q\geq 0,\] \[\hat{A}_{k}^{n}(t)=\frac{A_{k}^{n}(t)-\eta_{k}^{n}t}{\sqrt{n}}\text{ and }\hat{S}_{k}^{n}(t)=\frac{S_{k}^{n}(t)-\mu_{k}^{n}t}{\sqrt{n}},\ t\geq 0,\ k=0,1,\ldots,K,\] \[\hat{Q}^{n}(t)=\frac{Q^{n}(t)}{\sqrt{n}}\text{ and }\hat{\xi}^{n}(t)=\frac{\xi^{n}(t)}{\sqrt{n}},\ t\geq 0.\] In what follows, we assume \[T_{k}^{n}(t)=t-\frac{1}{\sqrt{n}}I_{k}(t)+o\left(\frac{1}{\sqrt{n}}\right),\ t\geq 0,\ k=0,1,\ldots,K, \tag{69}\] where \(I_{k}(\cdot)\) is the limiting idleness process for server \(k;\) see [18] for an intuitive justification of (69). Then, defining \[\chi_{0}^{n}(t) = \hat{E}^{n}(t)+\hat{A}_{0}^{n}(t-\Delta_{0}(t))-\hat{S}_{0}^{n}(T_{0}^{n}(t)),\ t\geq 0,\] \[\chi_{k}^{n}(t) = \hat{A}_{k}^{n}(t-\Delta_{k}(t))+\hat{\Phi}_{k}^{n}\left(\frac{1}{n}S_{0}^{n}(T_{0}^{n}(t))\right)+p_{k}\hat{S}_{0}^{n}(T_{0}^{n}(t))\] \[\ \ \ -\hat{S}_{k}^{n}(T_{k}^{n}(t)),\ t\geq 0,\ k=1,2,\ldots,K,\] and using Equations (63) - (64) and (69), it is straightforward to derive the following for \(t\geq 0\) and \(k=1,\ldots,K:\) \[\hat{Q}_{0}^{n}(t) = \chi_{0}^{n}(t)+\left(\eta_{0}-\beta_{0}\right)t-\eta_{0}\Delta_{0}(t)+\mu_{0}I_{0}(t)+o(1), \tag{70}\] \[\hat{Q}_{k}^{n}(t) = \chi_{k}^{n}(t)+\left(\eta_{k}+p_{k}\beta_{0}-\beta_{k}\right)t-\eta_{k}\Delta_{k}(t)+\mu_{k}I_{k}(t)-p_{k}\mu_{0}I_{0}(t)+o(1). \tag{71}\] Moreover, it follows from Equation (67) that \(\Delta_{k}(t)\) is absolutely continuous. We denote its density by \(\delta_{k}(\cdot)\), i.e., \[\Delta_{k}(t)=\int_{0}^{t}\delta_{k}(s)\mathrm{d}s,\ t\geq 0,\ k=0,1,\ldots,K,\] where \(\delta_{k}(t)\in[0,1].\) Using this, we write \[\hat{\xi}^{n}(t)=\sum_{k=0}^{K}\int_{0}^{t}c_{k}\eta_{k}\delta_{k}(s)\mathrm{d}s+\sum_{k=0}^{K}\int_{0}^{t}h_{k}\hat{Q}_{k}^{n}(s)\mathrm{d}s,\ t\geq 0.
\tag{72}\] Then passing to the limit formally as \(n\rightarrow\infty,\) and denoting the weak limit of \((\hat{Q}^{n},\chi^{n},\hat{\xi}^{n})\) by \(\left(Z,\chi,\xi\right),\) where \(\chi\) is a \((K+1)\)-dimensional driftless Brownian motion with covariance matrix (see Appendix C for its derivation) \[A=\mu_{0}\left[\begin{array}{cccc}s_{0}^{2}+a^{2}&-p_{1}s_{0}^{2}&\cdots&\cdots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}+p_{1}s_{1}^{2}&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\cdots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\cdots&\cdots&p_{K}(1-p_{K})+p_{K}^{2}s_{0}^{2}+p_{K}s_{K}^{2}\end{array}\right],\] we deduce from (70) - (71) and (72) that \[Z_{0}(t) = \chi_{0}(t)+(\eta_{0}-\beta_{0})t-\int_{0}^{t}\eta_{0}\delta_{0}(s)\mathrm{d}s+\mu_{0}I_{0}(t),\] \[Z_{k}(t) = \chi_{k}(t)+(\eta_{k}+p_{k}\beta_{0}-\beta_{k})\,t-\int_{0}^{t}\eta_{k}\delta_{k}(s)\mathrm{d}s+\mu_{k}I_{k}(t)-p_{k}\mu_{0}I_{0}(t),\ k=1,\ldots,K,\] \[\xi(t) = \sum_{k=0}^{K}\int_{0}^{t}c_{k}\eta_{k}\delta_{k}(s)\mathrm{d}s+\sum_{k=0}^{K}\int_{0}^{t}h_{k}Z_{k}(s)\mathrm{d}s.\] In order to streamline the notation, we make the following change of variables: \[Y_{k}(t) = \mu_{k}I_{k}(t),\ k=0,\ldots,K,\] \[\theta_{0}(t) = \eta_{0}\delta_{0}(t)-(\eta_{0}-\beta_{0}),\ t\geq 0,\] \[\theta_{k}(t) = \eta_{k}\delta_{k}(t)-(\eta_{k}+p_{k}\beta_{0}-\beta_{k})\] and let \[\underline{\theta}_{0} = \beta_{0}-\eta_{0}\ \ \mbox{and}\ \ \overline{\theta}_{0}=\beta_{0},\] \[\underline{\theta}_{k} = \beta_{k}-\eta_{k}-p_{k}\beta_{0}\ \ \mbox{and}\ \ \overline{\theta}_{k}=\beta_{k}-p_{k}\beta_{0},\ k=1,\ldots,K.\] Lastly, we define the set of negative drift vectors available to the system manager as in Equation (60). As a result, we arrive at the following Brownian system model: \[Z_{0}(t) = \chi_{0}(t)-\int_{0}^{t}\theta_{0}(s)\mathrm{d}s+Y_{0}(t),\ \ t\geq 0, \tag{73}\] \[Z_{k}(t) = \chi_{k}(t)-\int_{0}^{t}\theta_{k}(s)\mathrm{d}s+Y_{k}(t)-p_{k}Y_{0}(t),\ \ k=1,\ldots,K, \tag{74}\] which can be written as in Equation (8) with \(d=K+1\), where the reflection matrix \(R\) is given by Equation (59). Moreover, the processes \(Y,Z\) inherit the properties (3) - (4) from their pre-limit counterparts in the queueing model, cf. Equations (65) - (66). To minimize technical complexity, we restrict attention to stationary Markov control policies as done in Section 3. That is, \(\theta(t)=u(Z(t))\) for \(t\geq 0\) for some policy function \(u:\mathbb{R}_{+}^{d}\to\Theta.\) Then, defining \(c=(c_{0},c_{1},\ldots,c_{K})^{\top},\ h=(h_{0},h_{1},\ldots,h_{K})^{\top}\) and \[c(z,\theta)=h^{\top}z+c^{\top}\theta,\] as in Equation (61), the cumulative cost incurred over the time interval \([0,t]\) under policy \(u\) can be written as in Equation (10). Note that \(C^{u}(t)\) and \(\xi(t)\) differ only by a term that is independent of the control. Given \(C^{u}(t)\), one can formulate the discounted control problem as done in Section 3.1. Similarly, the ergodic control problem can be formulated as done in Section 3.2. **Interpreting the solution of the drift control problem in the context of the queueing network formulation.** Because the instantaneous cost rate \(c(z,\theta)\) is linear in the control, inspection of the HJB equation reveals that the optimal control is of bang-bang nature.
That is, \(\theta_{k}(t)\in\{\underline{\theta}_{k},\overline{\theta}_{k}\}\) for all \(k,t\) as stated in Equation (62). This can be interpreted in the context of the queueing network displayed in Figure 1 as follows: For \(k=0,1,\ldots,K\), whenever \(\theta_{k}(t)=\overline{\theta}_{k}\), the system manager turns away the thin stream jobs arriving to buffer \(k\) externally, i.e., she shuts off the renewal process \(A_{k}(\cdot)\) at time \(t\). Otherwise, she admits them to the system. Of course, the optimal policy is determined by the gradient \(\nabla V(z)\) of the value function through the HJB equation, which we solve for using the method described in Section 5.

### Related example with quadratic cost of control

Celik and Maglaras [12] and Ata and Barjesteh [3] advance formulations where a system manager controls the arrival rate of customers to a queueing system by exercising dynamic pricing. One can follow a similar approach for the feed-forward queueing networks displayed in Figure 1 with suitable modifications, e.g., the dashed arrows also correspond to arrivals of regular jobs. This ultimately results in a problem of drift control for RBM with the cost of control \[c(z,\theta)=\sum_{k=0}^{K}\alpha_{k}(\theta_{k}-\underline{\theta}_{k})^{2}+\sum_{k=0}^{K}h_{k}z_{k}, \tag{75}\] where \(\underline{\theta}\) is the drift rate vector corresponding to a nominal price vector.

### Two parametric families of benchmark policies

Recall that the optimal policy can be characterized as \[u^{*}(z)=\arg\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z,\theta)\right\},\ z\in\mathbb{R}_{+}^{d}. \tag{76}\] **The benchmark policy for the main test problem.** In our main test problem (see Section 6.1), we have \(c(z,\theta)=h^{\top}z+c^{\top}\theta\). Therefore, it follows from (76) that for \(k=0,1,\ldots,K\), \[u_{k}^{*}(z)=\left\{\begin{array}{ll}\overline{\theta}_{k}&\text{ if }\left(\nabla V(z)\right)_{k}\geq c_{k},\\ \underline{\theta}_{k}&\text{ otherwise.}\end{array}\right.\] Namely, the optimal policy is of bang-bang type. Therefore, we consider the following linear-boundary policies as our benchmark policies: For \(k=0,1,\ldots,K\), \[u_{k}^{\text{lbp}}(z)=\left\{\begin{array}{ll}\overline{\theta}_{k}&\text{ if }\beta_{k}^{\top}z\geq c_{k},\\ \underline{\theta}_{k}&\text{ otherwise,}\end{array}\right.\] where \(\beta_{0},\beta_{1},\ldots,\beta_{K}\in\mathbb{R}^{K+1}\) are vectors of policy parameters to be tuned. In our numerical study, we focus attention on the symmetric case where \[h_{0}>h_{1}=\ldots=h_{K},\] \[c_{0}=c_{1}=\ldots=c_{K},\] \[p_{1}=\ldots=p_{K}=\frac{1}{K},\] \[\underline{\theta}_{1}=\ldots=\underline{\theta}_{K},\] \[\overline{\theta}_{1}=\ldots=\overline{\theta}_{K}.\] Due to this symmetry, the downstream buffers look identical. As such, we restrict attention to parameter vectors of the following form: \[\beta_{0} = \left(\phi_{1},\phi_{2},\ldots,\phi_{2}\right),\text{ and}\] \[\beta_{i} = \left(\phi_{3},\phi_{4},\ldots\phi_{4},\phi_{5},\phi_{4},\ldots,\phi_{4}\right)\text{ where }\phi_{5}\text{ is the }i+1^{\text{st}}\text{ element of }\beta_{i}\text{ for }i=1,\ldots,K.\] The parameter vector \(\beta_{0}\), which is used to determine the benchmark policy for buffer zero, has two distinct parameters: \(\phi_{1}\) and \(\phi_{2}\). In considering the policy for buffer zero, \(\phi_{1}\) captures the effect of its own queue length, whereas \(\phi_{2}\) captures the effects of the downstream buffers \(1,\ldots,K\).
We use a common parameter for the downstream buffers because they look identical from the perspective of buffer zero. Similarly, the parameter vector \(\beta_{i}\) (\(i=1,\ldots,K\)) has three distinct parameters: \(\phi_{3}\), \(\phi_{4}\) and \(\phi_{5}\), where \(\phi_{3}\) is used as the multiplier for buffer zero (the upstream buffer), \(\phi_{5}\) is used to capture the effect of buffer \(i\) itself, and \(\phi_{4}\) is used for all other downstream buffers. Note that all \(\beta_{i}\) use the same three parameters \(\phi_{3}\), \(\phi_{4}\) and \(\phi_{5}\) for \(i=1,\ldots,K\). They only differ with respect to the position of \(\phi_{5}\), i.e., it is in the \(i+1^{\text{st}}\) position for \(\beta_{i}\). In summary, the benchmark policy uses five distinct parameters in the symmetric case. This allows us to do a brute-force search via simulation on a five-dimensional grid regardless of the number of buffers. **The benchmark policy for the test problem with the quadratic cost of control.** In this case, substituting Equation (75) into Equation (76) gives the following characterization of the optimal policy: \[u_{k}^{*}(z)=\underline{\theta}_{k}+\frac{(\nabla V(z))_{k}}{2\alpha_{k}},\ k=0,1,\ldots,K. \tag{77}\] Namely, the optimal policy is affine in the gradient. Therefore, we consider the following affine-rate policies as our benchmark policies: For \(k=0,1,\ldots,K\), \[u_{k}^{\text{arp}}(z)=\underline{\theta}_{k}+\beta_{k}^{\top}z,\] where \(\beta_{0},\beta_{1},\ldots,\beta_{K}\in\mathbb{R}^{K+1}\) are vectors of policy parameters to be tuned. We truncate this at the upper bound \(\overline{\theta}_{k}\) if needed. We focus attention on the symmetric case for this problem formulation too. To be specific, we assume \[h_{0}>h_{1}=\ldots=h_{K},\] \[\alpha_{0}=\alpha_{1}=\ldots=\alpha_{K},\] \[p_{1}=\ldots=p_{K}=\frac{1}{K},\] \[\underline{\theta}_{1}=\ldots=\underline{\theta}_{K},\] \[\overline{\theta}_{1}=\ldots=\overline{\theta}_{K}.\] Due to this symmetry, the downstream buffers look identical. As such, we restrict attention to parameter vectors of the following form: \[\beta_{0} = (\phi_{1},\phi_{2},\ldots,\phi_{2})\,,\text{ and}\] \[\beta_{i} = (\phi_{3},\phi_{4},\ldots\phi_{4},\phi_{5},\phi_{4},\ldots,\phi_{4})\text{ where }\phi_{5}\text{ is the }i+1^{\text{st}}\text{ element for }i=1,\ldots,K.\] As done for the first benchmark policy above, this particular form of the parameter vectors can be justified using the symmetry as well.

### Parallel-server test problems

In this section, we consider a test problem whose solution can be derived analytically by reduction to one-dimensional problems. To be specific, we consider the parallel-server network that consists of \(K\) identical single-server queues as displayed in Figure 2. Clearly, this network can be decomposed into \(K\) separate single-server queues, leading to \(K\) separate one-dimensional problem formulations, which can be solved analytically; see Appendix D for details. For this example we have that \(R=I_{d\times d}\) and \(A=I_{d\times d}\). In addition, we assume that the action space \(\Theta\) and the cost function \(c(z,\theta)\) are the same as above.

## 7 Computational results

For the test problems introduced in Section 6, we now compare the performance of policies derived using our method (see Section 5) with the best benchmark we could find. The results show that our method performs well, and it remains computationally feasible up to at least dimension \(d=30\).
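Before turning to implementation details, the following sketch makes the five-parameter symmetric benchmark of Section 6.4 concrete: it assembles the vectors \(\beta_{0},\ldots,\beta_{K}\) from \((\phi_{1},\ldots,\phi_{5})\) and evaluates the linear-boundary policy. The code and the particular parameter values are our own illustrative assumptions.

```python
import numpy as np

def beta_vectors(phi, K):
    """Assemble beta_0, ..., beta_K in R^{K+1} from the five free parameters."""
    phi1, phi2, phi3, phi4, phi5 = phi
    betas = [np.full(K + 1, phi2)]
    betas[0][0] = phi1                     # beta_0 = (phi1, phi2, ..., phi2)
    for i in range(1, K + 1):
        b = np.full(K + 1, phi4)
        b[0], b[i] = phi3, phi5            # phi5 sits in the (i+1)-st position
        betas.append(b)
    return np.array(betas)

def linear_boundary_policy(z, betas, theta_lo, theta_hi, c_vec):
    """u_k(z) = theta_hi_k if beta_k' z >= c_k, and theta_lo_k otherwise."""
    return np.where(betas @ z >= c_vec, theta_hi, theta_lo)

betas = beta_vectors((0.4, 0.1, 0.2, 0.05, 0.5), K=5)   # illustrative values
z = np.ones(6)
print(linear_boundary_policy(z, betas, np.zeros(6), 2 * np.ones(6), np.ones(6)))
```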
We implement our method using three-layer or four-layer neural networks with the elu activation function [38] in TensorFlow 2 [1], and using code adapted from that of Han et al. [17] and Zhou et al. [50]; see Appendix E for further details of our implementation.1

Footnote 1: Our code is available at [https://github.com/nian-si/RBMSSolver](https://github.com/nian-si/RBMSSolver).

For our main test problem with linear cost of control (introduced previously in Section 6.1), and also for its variant with quadratic cost of control (Section 6.3), the following parameter values are assumed: \(h_{0}=2\), \(h_{k}=1.9\) for \(k=1,\ldots,K\), \(c_{k}=1\) for \(k=0,\ldots,K\), and \(p_{k}=1/K\) for \(k=1,\ldots,K\). Also, the reflection matrix \(R\) and the covariance matrix \(A\) for those families of problems are as follows:

\[R=\left[\begin{array}{cccc}1&&&\\ -1/K&1&&\\ \vdots&&\ddots&\\ -1/K&&&1\end{array}\right]\text{ and }A=\left[\begin{array}{ccccc}1&0& \cdots&\cdots&0\\ 0&1&-\frac{1}{K^{2}}&\cdots&-\frac{1}{K^{2}}\\ \vdots&-\frac{1}{K^{2}}&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&-\frac{1}{K^{2}}\\ 0&-\frac{1}{K^{2}}&\cdots&\cdots&1\end{array}\right].\]

However, as stated previously in Section 6.5, the reflection matrix and covariance matrix for our parallel-server test problems are \(R=I_{d\times d}\) and \(A=I_{d\times d}\). Problems in that third class have \(K=d\) buffers indexed by \(k=1,\ldots,K\), and we set \(h_{1}=2\) and \(h_{k}=1.9\) for \(k=2,\ldots,K\).

Figure 2: A decomposable parallel-server queueing network.

### Main test problem with linear cost of control

For our main test problem with linear cost of control (Section 6.1), we take

\[\underline{\theta}_{k}=0\text{ and }\overline{\theta}_{k}=b\in\{2,10\}\text{ for all }k.\]

Also, interest rates \(r=0.1\) and \(r=0.01\) will be considered in the discounted case. To begin, let us consider the simple case where \(K=0\) (that is, there are no downstream buffers in the queueing network interpretation of the problem) and hence \(d=1\). In this case, one can solve the HJB equation analytically; see Appendix D for details. For the discounted formulation with \(r=0.1\), Figure 3 compares the derivative of the value function computed using the analytical solution, shown in blue, with the neural network approximation for it that we computed using our method, shown in red. (The comparisons for \(r=0.01\) and the ergodic control case are similar.) Combining Figure 3 with Equation (76), one sees that the policy derived using our method is close to the optimal policy. Table 1 reports the simulated performance with standard errors of these two policies based on four million sample paths and using the same discretization of time as in our computational method. Specifically, we report the long-run average cost under each policy in the ergodic control case, and report the simulated value \(V(0)\) in the discounted case. To repeat, the benchmark policy in this case is the optimal policy determined analytically, but not accounting for the discretization of the time scale. Of course, all the performance figures reported in the table are subject to simulation errors. Finally, it is worth noting that our method took less than one hour to compute its policy recommendations using a 10-CPU core computer. Let us now consider the two-dimensional case (\(K=1\)), where the optimal policy is unknown. Therefore, we compare our method with the best benchmark we could find: the linear-boundary policy described in Section 6.4.
In the two-dimensional case, the linear boundary policy reduces to the following:

\[\theta_{0}(z)=b\mathbb{I}\left\{\beta_{0}^{\top}z\geq 1\right\}\;\text{and} \;\;\theta_{1}(z)=b\mathbb{I}\left\{\beta_{1}^{\top}z\geq 1\right\}.\]

Through simulation, we perform a brute-force search to identify the best values of \(\beta_{0}\) and \(\beta_{1}\). The policies for \(b=2\) and \(b=10\) are shown in Figures 4 and 5, respectively, for the discounted control case with \(r=0.1\). Our proposed policy sets the drift to \(b\) in the red region and to zero in the blue region, whereas the best linear-boundary policy is represented by the white dotted line. That is, the benchmark policy sets the drift to \(b\) in the region above and to the right of the dotted line, and sets it to zero below and left of the line. Table 2 presents the costs with standard errors of the benchmark policy and our proposed policy obtained in a simulation study. The two policies have similar performance. Our method takes about one hour to compute policy recommendations using a 10-CPU core computer.

Figure 3: Comparison between the derivative \(G_{w}(\cdot)\) learned from neural networks and the derivative of the optimal value function for the case of \(d=1\) and \(r=0.1\). The dotted line indicates the cost \(c_{0}=1\). When the value function gradient is above this dotted line, the optimal control is \(\theta=b\), and otherwise it is \(\theta=0\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline & & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 1.455 \(\pm\) 0.0006 & 145.3 \(\pm\) 0.05 & 14.29 \(\pm\) 0.004 \\ & Benchmark & 1.456 \(\pm\) 0.0006 & 145.3 \(\pm\) 0.05 & 14.29 \(\pm\) 0.004 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 1.375 \(\pm\) 0.0007 & 137.2 \(\pm\) 0.06 & 13.56 \(\pm\) 0.005 \\ & Benchmark & 1.374 \(\pm\) 0.0007 & 137.2 \(\pm\) 0.06 & 13.56 \(\pm\) 0.005 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of our proposed policy with the benchmark policy in the one-dimensional case (\(K=0\)).

\begin{table} \begin{tabular}{c c c c c} \hline \hline & & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 2.471 \(\pm\) 0.0008 & 246.6 \(\pm\) 0.08 & 24.28 \(\pm\) 0.006 \\ & Benchmark & 2.473 \(\pm\) 0.0008 & 246.8 \(\pm\) 0.08 & 24.29 \(\pm\) 0.006 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 2.338 \(\pm\) 0.0009 & 233.3 \(\pm\) 0.09 & 23.10 \(\pm\) 0.006 \\ & Benchmark & 2.338 \(\pm\) 0.0009 & 233.6 \(\pm\) 0.09 & 23.10 \(\pm\) 0.006 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison of our proposed policy with the benchmark policy in the two-dimensional case (\(K=1\)).

Finally, let us consider the six-dimensional case (\(K=5\)), where the linear boundary policy reduces to

\[\theta_{i}(z)=b\mathbb{I}\left\{\beta_{i}^{\top}z\geq 1\right\}\text{ for }i=0,1,2,\ldots,5.\]

Although there appear to be 36 parameters to be tuned, recall that we reduced the number of parameters to five in Section 6.4 by exploiting symmetry. This makes the brute-force search computationally feasible. Table 3 compares the performance with standard errors of our proposed policies with the benchmark policies. They have similar performance. In this case, the running time for our method is several hours using a 10-CPU core computer.
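To illustrate the five-parameter reduction used in the brute-force search, the following numpy sketch enumerates a grid over \((\phi_{1},\ldots,\phi_{5})\) and assembles the corresponding matrix of \(\beta\) vectors. Here `simulate_cost` stands for a user-supplied, simulation-based cost estimate of the resulting linear-boundary policy; that simulator is an assumption and is not reproduced here.

```python
import itertools
import numpy as np

def best_linear_boundary(simulate_cost, grid, K):
    """Five-parameter brute-force search of Section 6.4 (symmetric case).

    grid: iterable of candidate values for each phi parameter.
    Returns the best (cost, beta) pair found, where beta stacks
    beta_0, ..., beta_K as rows of a (K+1) x (K+1) matrix."""
    best = (np.inf, None)
    for phi in itertools.product(grid, repeat=5):
        beta = np.empty((K + 1, K + 1))
        beta[0, 0], beta[0, 1:] = phi[0], phi[1]          # beta_0 = (phi1, phi2, ..., phi2)
        beta[1:, 0], beta[1:, 1:] = phi[2], phi[3]        # beta_i = (phi3, phi4, ..., phi4)
        np.fill_diagonal(beta[1:, 1:], phi[4])            # ... with phi5 in position i+1
        cost = simulate_cost(beta)
        if cost < best[0]:
            best = (cost, beta)
    return best
```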
Figure 4: Graphical representation of the policy learned from neural networks and the benchmark policy for the case \(b=2\), \(d=2\) and \(r=0.1\).

Figure 5: Graphical representation of the policy learned from neural networks and the benchmark policy for the case \(b=10\), \(d=2\) and \(r=0.1\).

### Test problems with quadratic cost of control

In this section, we consider the test problem introduced in Section 6.3, for which we set \(\alpha_{k}=1\) and \(\underline{\theta}_{k}=1\) for all \(k\). As in the previous treatment of our main test example, we report results for the cases of \(d=1,2,6\) in Tables 4, 5 and 6, respectively, where the benchmark policies are the affine-rate policies discussed in Section 6.4, with policy parameters optimized via simulation and brute-force search. We observe that our proposed policies outperform the best affine-rate policies by very small margins in all cases.

In the one-dimensional ergodic control case (\(K=0\)), we obtain analytical solutions to the RBM control problem in closed form by solving the HJB equation directly, which reduces to a first-order ordinary differential equation in this case; see Appendix D for details. Figure 6 compares the derivative of the optimal value function (derived in closed form) with its approximation via neural networks in the ergodic case. Combining Figure 6 with Equation (77) reveals that our proposed policy is close to the optimal policy.

\begin{table} \begin{tabular}{c c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 0.757 \(\pm\) 0.0004 & 75.53 \(\pm\) 0.03 & 7.415 \(\pm\) 0.003 \\ Benchmark & 0.758 \(\pm\) 0.0004 & 75.67 \(\pm\) 0.03 & 7.427 \(\pm\) 0.003 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison of our proposed policy with the benchmark policy in the case of quadratic cost of control and \(d=1\).

\begin{table} \begin{tabular}{c l c c c} \hline \hline & & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 7.927 \(\pm\) 0.001 & 791.0 \(\pm\) 0.1 & 77.83 \(\pm\) 0.01 \\ & Benchmark & 7.927 \(\pm\) 0.001 & 791.3 \(\pm\) 0.1 & 77.83 \(\pm\) 0.01 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 7.565 \(\pm\) 0.0016 & 754.8 \(\pm\) 0.15 & 74.61 \(\pm\) 0.01 \\ & Benchmark & 7.525 \(\pm\) 0.0016 & 751.7 \(\pm\) 0.15 & 74.32 \(\pm\) 0.01 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison of our proposed policy with the benchmark policy in the six-dimensional case \(d=6\) (\(K=5\)).

\begin{table} \begin{tabular}{c c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 1.216 \(\pm\) 0.0005 & 121.3 \(\pm\) 0.04 & 11.94 \(\pm\) 0.003 \\ Benchmark & 1.219 \(\pm\) 0.0005 & 121.7 \(\pm\) 0.05 & 11.96 \(\pm\) 0.003 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance comparison of our proposed policy with the benchmark policy in the case of quadratic cost of control and \(d=2\) (\(K=1\)).

In the two-dimensional case, our proposed policy is shown in Figure 7 for the ergodic case, with contour lines showing the state vectors \((z_{0},z_{1})\) for which the policy chooses successively higher drift rates. The white dotted lines similarly show the states \((z_{0},z_{1})\) for which our benchmark policy (that is, the best affine-rate policy) chooses the drift rate \(\theta_{k}=1.5\) (for \(k=0\) in the left panel and \(k=1\) in the right panel).
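In these quadratic-cost experiments, the policy our method reports is read off from the learned gradient network via Equation (77). A one-line numpy sketch (an illustration under our own naming, with `G` a stand-in for the learned gradient map \(G_{w}\)):

```python
import numpy as np

def policy_from_gradient(z, G, alpha, theta_lo, theta_hi):
    """Eq. (77): affine in the learned gradient, clipped to the action space."""
    return np.clip(theta_lo + G(z) / (2 * alpha), theta_lo, theta_hi)
```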
\begin{table} \begin{tabular}{l c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 3.863 \(\pm\) 0.0008 & 385.7 \(\pm\) 0.08 & 37.92 \(\pm\) 0.006 \\ Benchmark & 3.874 \(\pm\) 0.0008 & 386.9 \(\pm\) 0.08 & 38.04 \(\pm\) 0.006 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance comparison of our proposed policy with the benchmark policy in the case of quadratic cost of control and \(d=6\) (\(K=5\)).

Figure 6: Comparison of the gradient approximation \(G_{w}(\cdot)\) learned from neural networks with the derivative of the optimal value function for the ergodic control case with quadratic cost of control in the one-dimensional case (\(d=1\)).

Figure 7: Graphical representation of the policy learned from neural networks and the benchmark policy for the ergodic case with \(d=2\).

### Parallel-server test problems

This section focuses on parallel-server test problems (see Section 6.5) to demonstrate our method's scalability. As illustrated in Figure 2, the parallel-server networks are essentially \(K\) independent copies of the one-dimensional case. We present the results in Table 7 for \(d=30\) and linear cost of control. When \(b=2\), our policies perform essentially as well as the optimal policy, while for \(b=10\), our policies perform within \(1\%\) of the optimal policy. The run-time for our method is about one day in this case using a 20-CPU core computer.

For quadratic cost of control, we are able to solve the test problems up to at least 100 dimensions. The results for \(d=100\) are given in Table 8, where the benchmark policies are the best affine-rate policies (see Section 6.4). The performance of our policy is within \(1\%\) of the benchmark performance. The run-time for our method is several days in this case using a 30-CPU core computer.

\begin{table} \begin{tabular}{l l l l l} \hline \hline & & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 42.56 \(\pm\) 0.003 & 4247 \(\pm\) 0.3 & 417.3 \(\pm\) 0.02 \\ & Benchmark & 42.52 \(\pm\) 0.003 & 4244 \(\pm\) 0.3 & 417.2 \(\pm\) 0.02 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 40.53 \(\pm\) 0.004 & 4054 \(\pm\) 0.4 & 399.4 \(\pm\) 0.026 \\ & Benchmark & 40.23 \(\pm\) 0.004 & 4018 \(\pm\) 0.4 & 396.7 \(\pm\) 0.024 \\ \hline \hline \end{tabular} \end{table} Table 7: Performance comparison between our proposed policy and the benchmark policy for 30-dimensional parallel-server test problems with linear cost of control.
\begin{table} \begin{tabular}{l c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 72.74 \(\pm\) 0.003 & 7258.3 \(\pm\) 0.3 & 712.4 \(\pm\) 0.02 \\ Benchmark & 72.53 \(\pm\) 0.003 & 7237.3 \(\pm\) 0.3 & 710.2 \(\pm\) 0.02 \\ \hline \hline \end{tabular} \end{table} Table 8: Performance comparison between our proposed policy and the benchmark policy for 100-dimensional parallel-server test problems with quadratic cost of control.

## 8 Concluding remarks

Consider the general drift control problem formulated in Section 3, assuming specifically that the instantaneous cost rate \(c(z,\theta)\) is linear in \(\theta\), and further assuming that the set of available drift vectors is a rectangle \(\Theta=[0,b_{1}]\times\cdots\times[0,b_{d}]\). If one relaxes such a problem by letting \(b_{i}\uparrow\infty\) for one or more \(i\), then one obtains what is called a _singular_ control problem, cf. Kushner and Martins [31]. Optimal policies for such problems typically involve the imposition of _endogenous_ reflecting barriers, that is, reflecting barriers imposed by the system controller in order to minimize cost, in addition to exogenous reflecting barriers that may be imposed to represent physical constraints in the motivating application. There are many examples of queueing network control problems whose natural heavy traffic approximations involve singular control; see, for example, Krichagina and Taksar [30], Martins and Kushner [34], and Martins et al. [33]. Given that motivation, we intend to show in future work how the computational method developed in this paper for drift control can be extended in a natural way to treat singular control, and to illustrate that extension by means of queueing network applications.

Separately, the following are three desirable generalizations of the problem formulations propounded in Section 3 of this paper. Each of them is straightforward in principle, and we expect to see these extensions implemented in future work, perhaps in combination with mild additional restrictions on problem data. (a) Instead of requiring that the reflection matrix \(R\) have the Minkowski form (1), require only that \(R\) be a completely-\(\mathcal{S}\) matrix, which Taylor and Williams [41] showed is a necessary and sufficient condition for an RBM to be well defined. (b) Allow a more general state space for the controlled process \(Z\), such as the convex polyhedrons characterized by Dai and Williams [13]. (c) Remove the requirement that the action space \(\Theta\) be bounded.

## Appendix A Proof of Proposition 1

Proof.: Let \(f:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}\) be right continuous with left limits (rcll). Following Williams [46], we define the oscillation of \(f\) over an interval \([t_{1},t_{2}]\) as follows:

\[Osc(f,[t_{1},t_{2}])=\sup\left\{\left|f(t)-f(s)\right|:t_{1}\leq s<t\leq t_{2} \right\}, \tag{78}\]

where \(\left|a\right|=\max_{i=1,\ldots,d}\left|a_{i}\right|\) for any \(a\in\mathbb{R}^{d}\). Then for two rcll functions \(f,g\), the following holds:

\[Osc(f+g)\leq Osc(f)+Osc(g). \tag{79}\]

Also recall that the controlled RBM \(Z\) satisfies \(Z(t)=X(t)+RY(t)\), where

\[X(t)=W(t)-\int_{0}^{t}\theta(s)ds,\ t\geq 0. \tag{80}\]

Then it follows from Theorem 5.1 of Williams [46] that

\[Osc(Z,[0,t])\leq C\,Osc(X,[0,t])\leq C\,Osc(W,[0,t])+C\bar{\theta}t\]

for some \(C>0\), where \(\bar{\theta}=\sum_{l=1}^{d}\left(\bar{\theta}_{l}-\underline{\theta}_{l}\right)\) and \(\underline{\theta}_{l},\bar{\theta}_{l}\) are the minimal and maximal values on each dimension, and the second inequality follows from (79). Let \(\mathcal{O}=Osc(W,[0,t])\) and recall that we are interested in bounding the expectation \(\mathbb{E}\left[|Z(t)|^{n}\right]\).
To that end, note that

\[\left|Z(t)-Z(0)\right|^{n} \leq C^{n}\left(\mathcal{O}+\bar{\theta}t\right)^{n} = C^{n}\,\sum_{k=0}^{n}\binom{n}{k}\mathcal{O}^{k}\bar{\theta}^{n-k}t^{n-k}. \tag{81}\]

To bound \(\mathbb{E}\left[\mathcal{O}^{k}\right],\) note that

\[\mathcal{O} = \sup\left\{\left|W(t_{2})-W(t_{1})\right|:0\leq t_{1}<t_{2}\leq t\right\} \leq \sup\left\{W(s):0\leq s\leq t\right\}-\inf\left\{W(s):0\leq s\leq t\right\} \leq 2\sup\left\{\left|W(s)\right|:0\leq s\leq t\right\} \leq 2\sum_{l=1}^{d}\sup\left\{\left|W_{l}(s)\right|:0\leq s\leq t\right\}.\]

So, by the union bound, we write

\[\mathbb{P}\left(\mathcal{O}>x\right) \leq \sum_{l=1}^{d}\mathbb{P}\left(\sup_{0\leq s\leq t}W_{l}(s)>\frac{x}{2d}\right)+\sum_{l=1}^{d}\mathbb{P}\left(\inf_{0\leq s\leq t}W_{l}(s)<-\frac{x}{2d}\right) \leq 4\sum_{l=1}^{d}\mathbb{P}\left(W_{l}(t)>\frac{x}{2d}\right),\]

where the last inequality follows from the reflection principle. Thus,

\[\mathbb{E}[\mathcal{O}^{k}]=k\int_{0}^{\infty}x^{k-1}\mathbb{P}\left(\mathcal{O}>x\right)\mathrm{d}x\leq 4k\sum_{l=1}^{d}\int_{0}^{\infty}x^{k-1}\mathbb{P}\left(W_{l}(t)>\frac{x}{2d}\right)\mathrm{d}x.\]

By change of variable \(y=x/(2d)\), we write

\[\mathbb{E}[\mathcal{O}^{k}] \leq 4k\sum_{l=1}^{d}\left(2d\right)^{k}\int_{0}^{\infty}y^{k-1}\mathbb{P}\left(W_{l}(t)>y\right)\mathrm{d}y \leq 4\sum_{l=1}^{d}\left(2d\right)^{k}\mathbb{E}[|W_{l}(t)|^{k}] = \frac{4\left(2d\right)^{k}2^{k/2}t^{k/2}\Gamma\left(\frac{k+1}{2}\right)}{\sqrt{\pi}}\sum_{l=1}^{d}\sigma_{ll}^{k},\]

where \(\Gamma\) is the Gamma function, and the last equality is a well-known result; see, for example, Equation (12) in [47]. Substituting this into (81) gives the following:

\[\mathbb{E}[|Z(t)-Z(0)|^{n}] \leq C^{n}\sum_{k=0}^{n}\frac{4\left(2d\right)^{k}\binom{n}{k}2^{k/2}t^{k/2}\Gamma\left(\frac{k+1}{2}\right)}{\sqrt{\pi}}\bar{\theta}^{n-k}t^{n-k}\left(\sum_{l=1}^{d}\sigma_{ll}^{k}\right) \leq \tilde{C}_{n}(t^{n}+1). \tag{82}\]

Letting \(z=Z(0)\), we write

\[|Z(t)|^{n} = |Z(t)-z+z|^{n}\leq(|Z(t)-z|+|z|)^{n} \leq \sum_{k=0}^{n}\binom{n}{k}\left|Z(t)-z\right|^{k}\left|z\right|^{n-k}.\]

Using (82), we can therefore write

\[\mathbb{E}\left[|Z(t)|^{n}\right] \leq \sum_{k=0}^{n}\binom{n}{k}\tilde{C}_{k}|z|^{n-k}\left(t^{k}+1\right) \leq \hat{C}_{n}(1+t^{n}).\]

## Appendix B Validity of HJB Equations

### Discounted Control

**Proposition 6**.: _Let \(u\in\mathcal{U}\) be an admissible policy and \(V^{u}\) a \(C^{2}\) solution of the associated PDE (14)-(15). If both \(V^{u}\) and its gradient have polynomial growth, then \(V^{u}\) satisfies (12)._

Proof.: Applying Ito's formula to \(e^{-rt}\,V^{u}(Z^{u}(t))\) and using Equation (7), we write

\[e^{-rT}\,V^{u}(Z^{u}(T))-V^{u}(z)= \int_{0}^{T}e^{-rt}\,(\mathcal{L}V^{u}(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla V^{u}(Z^{u}(t))-rV^{u}(Z^{u}(t)))\,dt +\int_{0}^{T}e^{-rt}\mathcal{D}\,V^{u}(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}e^{-rt}\,\nabla V^{u}(Z^{u}(t))\cdot dW(t).\]

Then using (3)-(4) and (14)-(15), we arrive at the following:

\[e^{-rT}\,V^{u}(Z^{u}(T))-V^{u}(z)=-\int_{0}^{T}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt+\int_{0}^{T}e^{-rt}\,\nabla V^{u}(Z^{u}(t))\cdot dW(t). \tag{83}\]

Because \(\nabla V^{u}\) has polynomial growth and the action space \(\Theta\) is bounded, we have that

\[\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\,\nabla V^{u}(Z^{u}(t))\cdot dW(t)\right]=0;\]

see, for example, Theorem 3.2.1 of Oksendal [35].
Thus, taking the expectation of both sides of (83) yields

\[V^{u}(z)=\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt\right]+e^{-rT}\,\mathbb{E}_{z}\left[V^{u}(Z^{u}(T))\right].\]

Because \(V^{u}\) has polynomial growth and \(\Theta\) is bounded, the second term on the right-hand side vanishes as \(T\to\infty\). Then, because \(c\) has polynomial growth and \(\Theta\) is bounded, passing to the limit as \(T\to\infty\) completes the proof.

**Proposition 7**.: _If \(V\) is a \(C^{2}\) solution of the HJB equation (16)-(17), and if both \(V\) and its gradient have polynomial growth, then \(V\) satisfies (13)._

Proof.: First, consider an arbitrary admissible policy \(u\) and let \(V^{u}\) denote the solution of the associated PDE (14)-(15). By Proposition 6, we have that

\[V^{u}(z)=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt\right],\ z\in\mathbb{R}_{+}^{d}. \tag{84}\]

On the other hand, because \(V\) solves (16)-(17) and

\[u(z)\cdot\nabla V(z)-c(z,u(z))\leq\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z,\theta)\right\},\ z\in\mathbb{R}_{+}^{d},\]

we conclude that

\[\mathcal{L}\,V(z)-u(z)\cdot\nabla V(z)+c(z,u(z))\geq r\,V(z). \tag{85}\]

Now applying Ito's formula to \(e^{-rt}\,V(Z^{u}(t))\) and using Equation (7) yields

\[e^{-rT}\,V(Z^{u}(T))-V(z)= \int_{0}^{T}e^{-rt}\left(\mathcal{L}V(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla V(Z^{u}(t))-rV(Z^{u}(t))\right)dt +\int_{0}^{T}e^{-rt}\,\mathcal{D}\,V(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}e^{-rt}\nabla V(Z^{u}(t))\cdot dW(t).\]

Combining this with Equations (3)-(4), (16)-(17) and (85) gives

\[e^{-rT}\,V(Z^{u}(T))-V(z)\geq-\int_{0}^{T}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt+\int_{0}^{T}e^{-rt}\nabla V(Z^{u}(t))\cdot dW(t). \tag{86}\]

Because \(\nabla V\) has polynomial growth and the action space \(\Theta\) is bounded, we have that

\[\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\,\nabla V(Z^{u}(t))\cdot dW(t)\right]=0;\]

see, for example, Theorem 3.2.1 of Oksendal [35]. Using this and taking the expectation of both sides of Equation (86) yields

\[V(z)\leq\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt\right]+e^{-rT}\,\mathbb{E}\left[V(Z^{u}(T))\right].\]

Because \(V\) has polynomial growth and \(\Theta\) is bounded, the second term on the right-hand side vanishes as \(T\to\infty\). Then, because \(c\) has polynomial growth and \(\Theta\) is bounded, passing to the limit yields

\[V(z)\leq\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt\right]=V^{u}(z), \tag{87}\]

where the equality holds by Equation (84). Now, consider the optimal policy \(u^{*}\), where \(u^{*}(z)=\arg\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z,\theta)\right\}\). For notational brevity, let \(Z^{*}=Z^{u^{*}}\) denote the RBM under policy \(u^{*}\). Note from Equation (16) that

\[\mathcal{L}V(z)-u^{*}(z)\cdot\nabla V(z)+c(z,u^{*}(z))=rV(z),\ z\in\mathbb{R}_{+}^{d}. \tag{88}\]

Repeating the preceding steps with \(u^{*}\) in place of \(u\) and replacing the inequality with an equality, cf. Equations (85) and (88), we conclude that

\[V(z)=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\,c(Z^{*}(t),u^{*}(Z^{*}(t)))\,dt\right]=V^{u^{*}}(z).\]

Combining this with Equation (87) yields (13).

### Ergodic Control

**Proposition 8**.: _Let \(u\in\mathcal{U}\) be an admissible policy and \((\tilde{\xi},v^{u})\) a \(C^{2}\) solution of the associated PDE (23)-(24). Further assume that \(v^{u}\) and its gradient have polynomial growth.
Then_

\[\tilde{\xi}=\xi^{u}=\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz).\]

Proof.: Let \(\pi^{u}\) denote the stationary distribution of RBM under policy \(u\), and let \(Z^{u}\) denote the RBM under policy \(u\) that is initiated with \(\pi^{u}\). That is,

\[\mathbb{P}(Z^{u}(0)\in B)=\pi^{u}(B)\ \ \text{for}\ B\subset\mathbb{R}^{d}_{+}.\]

Then applying Ito's formula to \(v^{u}(Z^{u}(t))\) and using Equation (7) yields

\[v^{u}(Z^{u}(T))-v^{u}(Z^{u}(0))= \int_{0}^{T}(\mathcal{L}v^{u}(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla v^{u}(Z^{u}(t)))\,dt +\int_{0}^{T}\mathcal{D}v^{u}(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}\nabla v^{u}(Z^{u}(t))\cdot dW(t).\]

Then using Equations (3)-(4) and (23)-(24), we arrive at the following:

\[v^{u}(Z^{u}(T))-v^{u}(Z^{u}(0))=\int_{0}^{T}\left[\tilde{\xi}-c(Z^{u}(t),u(Z^{u}(t)))\right]dt+\int_{0}^{T}\nabla v^{u}(Z^{u}(t))\cdot dW(t). \tag{89}\]

Note that the marginal distribution of \(Z^{u}(t)\) is \(\pi^{u}\) for all \(t\geq 0\). Thus, we have that

\[\mathbb{E}_{\pi^{u}}[v^{u}(Z^{u}(T))]=\mathbb{E}_{\pi^{u}}[v^{u}(Z^{u}(0))].\]

Moreover, using Equation (21) and the polynomial growth of \(\nabla v^{u}\), we conclude that

\[\mathbb{E}_{\pi^{u}}\left[\int_{0}^{T}\left|\nabla v^{u}(Z^{u}(t))\right|^{2}\,dt\right]=T\,\int_{\mathbb{R}^{d}_{+}}\left|\nabla v^{u}(z)\right|^{2}\,\pi^{u}(dz)<\infty.\]

Consequently, we have that \(\mathbb{E}\left[\int_{0}^{T}\nabla v^{u}(Z^{u}(t))\cdot dW(t)\right]=0\); see, for example, Theorem 3.2.1 of Oksendal [35]. Combining these and taking the expectation of both sides of (89) gives

\[\tilde{\xi}=\frac{1}{T}\int_{0}^{T}\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(t),u(Z^{u}(t)))\right]\,dt=\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz)=\xi^{u}.\]

**Proposition 9**.: _Let \((v,\xi)\) be a \(C^{2}\) solution of the HJB equation (25)-(26), and further assume that both \(v\) and its gradient have polynomial growth. Then (27) holds, and moreover, \(\xi=\xi^{u^{*}}\) where the optimal policy \(u^{*}\) is defined by (28)._

Proof.: First, consider an arbitrary policy \(u\) and note that

\[\xi^{u}=\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz),\]

where \(\pi^{u}\) is the stationary distribution of RBM under policy \(u\). Let \(Z^{u}\) denote the RBM under policy \(u\) that is initiated with the stationary distribution \(\pi^{u}\). That is,

\[\mathbb{P}(Z^{u}(0)\in B)=\pi^{u}(B),\ \ B\subset\mathbb{R}^{d}_{+}.\]

On the other hand, because \((v,\xi)\) solves the HJB equation and

\[u(z)\cdot\nabla v(z)-c(z,u(z))\leq\max_{\theta\,\in\,\Theta}\left\{\theta\cdot\nabla v(z)-c(z,\theta)\right\},\]

we have that

\[\mathcal{L}v(z)-u(z)\cdot\nabla v(z)+c(z,u(z))\geq\xi. \tag{90}\]

Now, we apply Ito's formula to \(v(Z^{u}(t))\) and use Equation (7) to get

\[v(Z^{u}(T))-v(Z^{u}(0))= \int_{0}^{T}(\mathcal{L}v(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla v(Z^{u}(t)))\,dt +\int_{0}^{T}\mathcal{D}v(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}\nabla v(Z^{u}(t))\cdot dW(t).\]

Combining this with Equations (3)-(4), (26) and (90) gives

\[v(Z^{u}(T))-v(Z^{u}(0))\geq\int_{0}^{T}(\xi-c(Z^{u}(t),u(Z^{u}(t))))\,dt+\int_{0}^{T}\nabla v(Z^{u}(t))\cdot dW(t). \tag{91}\]

Note that the marginal distribution of \(Z^{u}(t)\) is \(\pi^{u}\) for all \(t\geq 0\).
Thus, we have that

\[\mathbb{E}_{\pi^{u}}[v(Z^{u}(T))]=\mathbb{E}_{\pi^{u}}[v(Z^{u}(0))].\]

Moreover, using Equation (21) and the polynomial growth of \(\nabla v\), we conclude that

\[\mathbb{E}_{\pi^{u}}\left[\int_{0}^{T}\left|\nabla v(Z^{u}(t))\right|^{2}\,dt\right]=T\,\int_{\mathbb{R}^{d}_{+}}\left|\nabla v(z)\right|^{2}\,\pi^{u}(dz)<\infty.\]

Consequently, we have that \(\mathbb{E}\left[\int_{0}^{T}\nabla v(Z^{u}(t))\,dW(t)\right]=0\); see, for example, Theorem 3.2.1 of Oksendal [35]. Combining these and taking the expectation of both sides of (91) gives

\[\xi\,T\leq\int_{0}^{T}\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(t),u(Z^{u}(t)))\right]\,dt=T\,\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(0),u(Z^{u}(0)))\right],\]

which yields

\[\xi\leq\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(0),u(Z^{u}(0)))\right]=\int_{\mathbb{R}_{+}^{d}}c(z,u(z))\,\pi^{u}(dz)=\xi^{u}. \tag{92}\]

Now, consider policy \(u^{*}\). For notational brevity, let \(Z^{*}(t)=Z^{u^{*}}(t)\) denote the RBM under policy \(u^{*}\) that is initiated with the stationary distribution \(\pi^{u^{*}}\). In addition, note from (25) that

\[\mathcal{L}v(z)-u^{*}(z)\cdot\nabla v(z)+c(z,u^{*}(z))=\xi,\ \ z\in\mathbb{R}_{+}^{d}. \tag{93}\]

Repeating the preceding steps with \(u^{*}\) in place of \(u\) and replacing the inequality with an equality, cf. Equations (90) and (93), we conclude

\[\xi=\mathbb{E}_{\pi^{u^{*}}}\left[c(Z^{*}(0),u^{*}(Z^{*}(0)))\right]=\int_{\mathbb{R}_{+}^{d}}c(z,u^{*}(z))\,\pi^{u^{*}}(dz)=\xi^{u^{*}}.\]

Combining this with Equation (92) completes the proof.

## Appendix C Derivation of the covariance matrix of the feed-forward examples

By the functional central limit theorem for the renewal process [9], we have

\[\hat{E}^{n}(\cdot)\Rightarrow W_{E}\left(\cdot\right),\]

where \(W_{E}\left(\cdot\right)\) is a one-dimensional Brownian motion with drift zero and variance \(\lambda a^{2}=\mu_{0}a^{2}\).
Furthermore, we have

\[\hat{S}_{k}^{n}(\cdot)\Rightarrow W_{k}\left(\cdot\right),\ \text{for}\ k=1,2,\ldots,K,\]

where \(W_{k}\left(\cdot\right)\) is a one-dimensional Brownian motion with drift zero and variance \(\mu_{0}p_{k}s_{k}^{2}.\) Now, we turn to \(\hat{S}_{0}^{n}(t)\) and \(\hat{\Phi}^{n}(t).\) By Harrison [18], we have

\[\text{Cov}\left(\left[\begin{array}{c}\hat{S}_{0}^{n}(t)\\ \hat{\Phi}^{n}(t)\end{array}\right]\right)=\mu_{0}\Omega^{0}+\mu_{0}s_{0}^{2}R^{0}\left(R^{0}\right)^{\top},\]

where \(\Omega_{kl}^{0}=p_{k}(\mathbb{I}\{k=l\}-p_{l})\) for \(k,l=0,\ldots,K\) and \(R^{0}=[1,-p_{1},\ldots,-p_{K}]^{\top}.\) Therefore, we have

\[\text{Cov}\left(\left[\begin{array}{c}\hat{S}_{0}^{n}(t)\\ \hat{\Phi}^{n}(t)\end{array}\right]\right)=\mu_{0}\left[\begin{array}{ccccc}s_{0}^{2}&-p_{1}s_{0}^{2}&\ldots&\ldots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ldots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\ldots&\ldots&p_{K}(1-p_{K})+p_{K}^{2}s_{0}^{2}\end{array}\right].\]

Therefore, the variance of \(\chi\) is

\[A = diag(\lambda a^{2},\mu_{1}s_{1}^{2},\ldots,\mu_{K}s_{K}^{2})+\mu_{0}\left[\begin{array}{ccccc}s_{0}^{2}&-p_{1}s_{0}^{2}&\ldots&\ldots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\cdots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\cdots&\cdots&p_{K}(1-p_{K})+p_{K}^{2}s_{0}^{2}\end{array}\right]\]
\[= \mu_{0}\left[\begin{array}{ccccc}s_{0}^{2}+a^{2}&-p_{1}s_{0}^{2}&\ldots&\ldots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}+p_{1}s_{1}^{2}&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\cdots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\cdots&\cdots&p_{K}(1-p_{K})+p_{K}^{2}s_{0}^{2}+p_{K}s_{K}^{2}\end{array}\right].\]

In particular, if the arrival and service processes are Poisson processes, we have \(a=1\) and \(s_{k}=1\) for \(k=0,1,2,\ldots,K.\) Then, we have

\[A_{\text{Poisson}}=\mu_{0}\left[\begin{array}{ccccc}2&-p_{1}&\cdots&\cdots&-p_{K}\\ -p_{1}&2p_{1}&&&\\ \vdots&&\ddots&&\\ \vdots&&&\ddots&\\ -p_{K}&&&&2p_{K}\end{array}\right]\]

Furthermore, if the service time for server zero is deterministic, i.e., \(s_{0}=0\), we have

\[A_{\text{deterministic}}=\mu_{0}\left[\begin{array}{ccccc}a^{2}&0&\cdots&\cdots&0\\ 0&p_{1}(1-p_{1})+p_{1}s_{1}^{2}&-p_{1}p_{2}&\cdots&-p_{1}p_{K}\\ \vdots&-p_{1}p_{2}&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&-p_{K-1}p_{K}\\ 0&-p_{1}p_{K}&\cdots&\cdots&p_{K}(1-p_{K})+p_{K}s_{K}^{2}\end{array}\right]\]

## Appendix D Analytical solution of one-dimensional test problems

### Ergodic control formulation with linear cost of control

We consider the one-dimensional control problem with the cost function

\[c(z,\theta)=hz+c\theta\text{ for }z\in\mathbb{R}_{+}\text{ and }\theta\in\Theta=[0,b].\]

In the ergodic control case, the HJB equation (25)-(26) is

\[\frac{a}{2}v^{\prime\prime}(z)-\max_{\theta\in[0,b]}\left\{\theta\cdot v^{\prime}(z)-hz-c\theta\right\}=\xi,\text{ and} \tag{94}\]

\[v^{\prime}(0)=0\text{ and }v^{\prime}(z)\text{ having polynomial growth rate}, \tag{95}\]
where the covariance matrix \(A=a\) in this one-dimensional case. The HJB equation (94)-(95) is equivalent to

\[\frac{a}{2}v^{\prime\prime}(z)+hz-(v^{\prime}(z)-c)^{+}b=\xi,\]

and the solution is

\[\left(v^{*}\right)^{\prime}(z)=\left\{\begin{array}{cc}\frac{2}{\sqrt{a}}\sqrt{ch+\frac{ah^{2}}{4b^{2}}}z-\frac{h}{a}z^{2}&\text{ if }z<z^{*}\\ \frac{h}{b}z+\frac{ha}{2b^{2}}-\frac{\sqrt{a}}{b}\sqrt{ch+\frac{ah^{2}}{4b^{2}}}+c&\text{ if }z\geq z^{*}\end{array}\right.,\text{ with}\]

\[z^{*}=\frac{1}{h}\sqrt{a\left(ch+\frac{ah^{2}}{4b^{2}}\right)}-\frac{a}{2b}\text{ and }\xi^{*}=\sqrt{a\left(ch+\frac{ah^{2}}{4b^{2}}\right)},\]

and the optimal control is

\[\theta^{*}(z)=\left\{\begin{array}{cc}0&\text{ if }z<z^{*},\\ b&\text{ if }z\geq z^{*}.\end{array}\right.\]

### Discounted formulation with linear cost of control

The cost function is still

\[c(z,\theta)=hz+c\theta\text{ for }z\in\mathbb{R}_{+}\text{ and }\theta\in\Theta=[0,b],\]

and in the discounted control case, the HJB equation (16)-(17) is

\[\frac{a}{2}V^{\prime\prime}(z)+hz-(V^{\prime}(z)-c)^{+}b = rV(z),\]
\[V^{\prime}(0) = 0.\]

The solution is

\[V^{*}(z)=\left\{\begin{array}{cc}V_{1}(z)&\text{ if }z<z^{*}\\ V_{2}(z)&\text{ if }z\geq z^{*}\end{array}\right.,\]

and the optimal control is

\[\theta^{*}(z)=\left\{\begin{array}{cc}0&\text{ if }z<z^{*}\\ b&\text{ if }z\geq z^{*}\end{array}\right.,\]

where

\[V_{1}(z)=\frac{h\sqrt{a}e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}}}{\sqrt{2}r^{3/2}}+\frac{hz}{r}+C_{1}e^{\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}}+C_{1}e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}},\text{ and}\]

\[V_{2}(z)=\frac{-bh+brc+hrz}{r^{2}}+C_{2}e^{z\left(\frac{b-\sqrt{b^{2}+2ra}}{a}\right)},\]

for some parameters \(z^{*},C_{1},C_{2}\) to be determined later.

**Case 1:**\(h\leq rc.\) Note that if \(C_{1}=0,\) then we have

\[V_{1}^{\prime}(z)=\frac{h}{r}\left(1-e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}}\right)<\frac{h}{r}\leq c.\]

Therefore, we have

\[V^{*}(z)=\frac{h\sqrt{a}e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}}}{\sqrt{2}r^{3/2}}+\frac{hz}{r}\]

for the case \(h\leq rc\), and the optimal control is always to set \(\theta^{*}(z)=0.\)

**Case 2:**\(h>rc.\) We have

\[z^{*} = \frac{a\log\left(\frac{(h-rc)a}{C_{2}r\left(\sqrt{b^{2}+2ra}-b\right)}\right)}{b-\sqrt{b^{2}+2ra}},\]
\[V_{2}^{\prime}(z^{*}) = c,\text{ and}\]
\[V_{2}^{\prime\prime}(z^{*}) = \frac{(h-rc)\left(\sqrt{b^{2}+2ra}-b\right)}{ra}.\]

At point \(z^{*},\) we must have

\[V_{1}^{\prime}(z^{*})=V_{2}^{\prime}(z^{*})\text{ and }V_{1}^{\prime\prime}(z^{*})=V_{2}^{\prime\prime}(z^{*}).\]

Then we can numerically solve for \(C_{1}\) and \(C_{2}\) using the following equations:

\[V_{1}^{\prime}(z^{*}) = c,\]
\[V_{1}^{\prime\prime}(z^{*}) = \frac{(h-rc)\left(\sqrt{b^{2}+2ra}-b\right)}{ra}.\]

Table 9 presents numerical values of \(z^{*}\) for different parameter combinations.

\begin{table} \begin{tabular}{c c c c} \hline \hline & & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & \(h=2\) & 0.501671 & 0.517133 \\ & \(h=1.9\) & 0.519136 & 0.535753 \\ \hline \multirow{2}{*}{\(b=10\)} & \(h=2\) & 0.660354 & 0.674135 \\ & \(h=1.9\) & 0.678797 & 0.693707 \\ \hline \hline \end{tabular} \end{table} Table 9: The numerical values of \(z^{*}\) for different parameter combinations (\(a=c=1\)).
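As a quick numerical companion to the closed-form ergodic solution of Section D.1 (an illustration only, not the authors' code), the threshold \(z^{*}\) and optimal average cost \(\xi^{*}\) can be evaluated directly:

```python
import math

def ergodic_linear_solution(h, c, a, b):
    """Closed-form solution of the one-dimensional ergodic problem (Appendix D.1):
    the optimal policy sets theta = b when z >= z_star and theta = 0 otherwise."""
    xi_star = math.sqrt(a * (c * h + a * h**2 / (4 * b**2)))  # optimal average cost
    z_star = xi_star / h - a / (2 * b)                         # switching threshold
    return z_star, xi_star

# For instance, h=2, c=1, a=1, b=2 gives z_star = 0.5 and xi_star = 1.5.
```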
### Ergodic control formulation with quadratic cost of control

We consider the cost function

\[c(\theta,z)=\alpha(\theta-\underline{\theta})^{2}+hz.\]

The HJB equation (25)-(26) then becomes

\[\frac{a}{2}v^{\prime\prime}(z)-\max_{\theta}\left\{\theta\cdot v^{\prime}(z)-hz-\alpha(\theta-\underline{\theta})^{2}\right\}=\xi,\text{ and} \tag{96}\]

\[v^{\prime}(0)=0\text{ and }v^{\prime}(z)\text{ having polynomial growth rate,} \tag{97}\]

which is equivalent to

\[hz+\frac{a}{2}v^{\prime\prime}(z)-\frac{1}{4\alpha}\left(v^{\prime}(z)\right)^{2}-\underline{\theta}v^{\prime}(z) = \xi,\]
\[v^{\prime}(0) = 0.\]

Let \(f(z)=v^{\prime}(z)\) with \(f(0)=0.\) Then we have

\[\xi=hz+\frac{a}{2}f^{\prime}(z)-\frac{1}{4\alpha}\left(f(z)\right)^{2}-\underline{\theta}f(z),\ f(0)=0,\]

which is a Riccati equation. One can solve this equation numerically to find \(\xi\) such that \(f(\cdot)\) has polynomial growth. For example, if \(\alpha=\underline{\theta}=a=1\) and \(h=2,\) we have \(\xi^{*}=0.8017.\)

## Appendix E Implementation Details of Our Method

**Neural network architecture.** We used a three- or four-layer, fully connected neural network with 20 - 1000 neurons in each layer; see Tables 10 and 11 for details.

**Common hyperparameters.** Batch size \(B=256\); time horizon \(T=0.1\), discretization step-size \(0.1/64\); see Tables 10 and 11 for details.

**Learning rate.** The learning rate starts from \(0.0005\), and decays to \(0.0003\) and \(0.0001\) at the iterations detailed in Tables 10 and 11.

**Optimizer.** We used the Adam optimizer [29].

**Reference policy.** The reference policy sets \(\tilde{\theta}=1\).

**Activation function.** We use the 'elu' activation function [38].

**Code.** Our code structure follows from that of Han et al. [17] and Zhou et al. [50]. We implement two major changes: First, we have separated the data generation and training processes to facilitate data reuse. Second, we have implemented the RBM simulation. We have also integrated all the features discussed in this section.

### Decay loss in the test example with linear cost of control

Recall that in our main test example with linear cost of control, the cost function is

\[c(z,\theta)=h^{\top}z+c^{\top}\theta.\]

In the discounted cost formulation, substituting this cost function into the \(F\) function defined in Equation (30) gives the following:

\[F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))=\tilde{\theta}\cdot x+h^{\top}z-b\sum_{i=1}^{d}\max(G_{w_{2}}(\tilde{Z}(t))_{i}-c,0). \tag{98}\]
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline Hyperparameters & \begin{tabular}{l} 1-dimensional \\ b=2 \\ \end{tabular} & \begin{tabular}{l} 2-dimensional \\ b=10 \\ \end{tabular} & \begin{tabular}{l} 6-dimensional \\ b=2 \\ \end{tabular} & \begin{tabular}{l} 100-dimensional \\ b=10 \\ \end{tabular} \\ \hline \#Iterations & 6000 & 6000 & 6000 & 6000 & 6000 & \\ \#Epochs & 13 17 & 15 & 19 & 23 & 27 & 111 & 115 \\ \multirow{4}{*}{Learning rate scheme} & 0.0005 (0,2000) & 0.0005 (0,3000) & 0.0005 (0,3000) & 0.0005 (0,9500) \\ & 0.0003 (2000,4000) & 0.0003 (3000,6000) & 0.0003 (3000,6000) & 0.0003 (9500,22000) \\ & 0.0001 (4000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (22000,\(\infty\)) \\ \#Hidden layers & 4 & 4 & 4 & 3 & \\ \#Neurons in each layer & 50 & 50 & 50 & 400 & \\ \(\tilde{c}_{0}\) & & 0.4 & 7 & 0.4 & 7 & 0.4 & 7 \\ \(\tilde{c}_{1}\) & & 800 & 2400 & & 16000 & \\ \hline \hline \end{tabular} \end{table} Table 10: Hyperparameters used in the test problems with linear costs

\begin{table} \begin{tabular}{l l l l l} \hline \hline Hyperparameters & 1-dimensional & 2-dimensional & 6-dimensional & 100-dimensional \\ \hline \#Iterations & 6000 & 6000 & 6000 & 12000 \\ \#Epochs & 12 & 14 & 22 & 110 \\ \multirow{4}{*}{Learning rate scheme} & 0.0005 (0,3000) & 0.0005 (0,3000) & 0.0005 (0,3000) & 0.0005 (0,9500) \\ & 0.0003 (3000,6000) & 0.0003 (3000,6000) & 0.0003 (3000,6000) & 0.0003 (9500,22000) \\ & 0.0001 (6000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (22000,\(\infty\)) \\ \#Hidden layers & 3 & 4 & 4 & 3 \\ \#Neurons in each layer & 20 & 50 & 50 & 1000 \\ \hline \hline \end{tabular} \end{table} Table 11: Hyperparameters used in the test problems with quadratic costs

Note that if \(G_{w_{2}}(\tilde{Z}(t))<c\), we have

\[\frac{\partial F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))}{\partial w_{2}}=0,\]

which suggests that the algorithm may suffer from the vanishing gradient problem [25], well-known in the deep learning literature. To overcome this difficulty, we propose an alternative \(F\) function

\[\tilde{F}(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))=\tilde{\theta}\cdot x+h^{\top}z-b\sum_{i=1}^{d}\max(G_{w_{2}}(\tilde{Z}(t))_{i}-c,0)-\tilde{b}\sum_{i=1}^{d}\min(G_{w_{2}}(\tilde{Z}(t))_{i}-c,0), \tag{99}\]

where \(\tilde{b}\) is a decaying function with respect to the training iteration. Specifically, we propose

\[\tilde{b}=\left(\tilde{c}_{0}-\frac{\text{iteration}}{\tilde{c}_{1}}\right)^{+},\]

for some positive constants \(\tilde{c}_{0}\) and \(\tilde{c}_{1}\). The specific choices of \(\tilde{c}_{0}\) and \(\tilde{c}_{1}\) are shown in Table 10. We proceed similarly in the ergodic cost case using the function \(f\) defined in Equation (40).

### Variance loss function in discounted control

Let us parametrize the value function as \(V_{w_{1}}(z)=\tilde{V}_{w_{1}}(z)+\xi.\) Note that \(\partial\tilde{V}_{w_{1}}(z)/\partial z=\partial V_{w_{1}}(z)/\partial z\).
Therefore, we can rewrite the loss function (52) as

\[\ell(w_{1},w_{2})=\mathbb{E}\bigg[\Big(e^{-rT}\big(\tilde{V}_{w_{1}}(\tilde{Z}(T))+\xi\big)-\big(\tilde{V}_{w_{1}}(\tilde{Z}(0))+\xi\big)-\int_{0}^{T}e^{-rt}G_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))\,\mathrm{d}t\Big)^{2}\bigg].\]

By optimizing \(\xi\) first, we obtain the following variance loss function:

\[\tilde{\ell}(w_{1},w_{2})=\mathrm{Var}\left[e^{-rT}\,\tilde{V}_{w_{1}}(\tilde{Z}(T))-\tilde{V}_{w_{1}}(\tilde{Z}(0))-\int_{0}^{T}e^{-rt}G_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))\,\mathrm{d}t\right].\]

We observe that this trick could accelerate the training speed when \(r\) is small, because when \(r>0\) is small, \(\xi\) is of the order \(O(1/r)\) while \(\tilde{V}_{w_{1}}(\cdot)\) and \(G_{w_{2}}(\cdot)\) are of the order \(O(1)\).
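As a rough PyTorch sketch of this trick (an illustration under our own naming, not the authors' TensorFlow code), suppose the per-path quantities in the displays above have already been accumulated over the discretized horizon; then the two losses differ only in replacing the mean square by the variance:

```python
import math
import torch

def discounted_losses(V_T, V_0, stoch_int, cost_int, r, T):
    """V_T, V_0: value-network outputs at t=T and t=0; stoch_int, cost_int:
    the time-discretized stochastic and cost integrals, all of shape (batch,).
    Returns the mean-square loss (with xi absorbed into V) and its variance
    form, in which the additive constant xi has been optimized out."""
    residual = math.exp(-r * T) * V_T - V_0 - stoch_int + cost_int
    mean_square = (residual ** 2).mean()   # loss ell(w1, w2)
    variance = residual.var()              # loss tilde-ell(w1, w2)
    return mean_square, variance
```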
2306.17670
Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings
Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer to the time needed for one spike to travel from one neuron to another. These delays matter because they influence the spike arrival times, and it is well-known that spiking neurons respond more strongly to coincident input spikes. More formally, it has been shown theoretically that plastic delays greatly increase the expressivity in SNNs. Yet, efficient algorithms to learn these delays have been lacking. Here, we propose a new discrete-time algorithm that addresses this issue in deep feedforward SNNs using backpropagation, in an offline manner. To simulate delays between consecutive layers, we use 1D convolutions across time. The kernels contain only a few non-zero weights - one per synapse - whose positions correspond to the delays. These positions are learned together with the weights using the recently proposed Dilated Convolution with Learnable Spacings (DCLS). We evaluated our method on three datasets: the Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its non-spiking version Google Speech Commands v0.02 (GSC) benchmarks, which require detecting temporal patterns. We used feedforward SNNs with two or three hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We showed that fixed random delays help and that learning them helps even more. Furthermore, our method outperformed the state-of-the-art in the three datasets without using recurrent connections and with substantially fewer parameters. Our work demonstrates the potential of delay learning in developing accurate and precise models for temporal data processing. Our code is based on PyTorch / SpikingJelly and available at: https://github.com/Thvnvtos/SNN-delays
Ilyass Hammouamri, Ismail Khalfaoui-Hassani, Timothée Masquelier
2023-06-30T14:01:53Z
http://arxiv.org/abs/2306.17670v3
# Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings

###### Abstract

Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer to the time needed for one spike to travel from one neuron to another. These delays matter because they influence the spike arrival times, and it is well-known that spiking neurons respond more strongly to coincident input spikes. More formally, it has been shown theoretically that plastic delays greatly increase the expressivity in SNNs. Yet, efficient algorithms to learn these delays have been lacking. Here, we propose a new discrete-time algorithm that addresses this issue in deep feedforward SNNs using backpropagation, in an offline manner. To simulate delays between consecutive layers, we use 1D convolutions across time. The kernels contain only a few non-zero weights - one per synapse - whose positions correspond to the delays. These positions are learned together with the weights using the recently proposed Dilated Convolution with Learnable Spacings (DCLS). We evaluated our method on three datasets: the Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its non-spiking version Google Speech Commands v0.02 (GSC) benchmarks, which require detecting temporal patterns. We used feedforward SNNs with two or three hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We showed that fixed random delays help and that learning them helps even more. Furthermore, our method outperformed the state-of-the-art in the three datasets without using recurrent connections and with substantially fewer parameters. Our work demonstrates the potential of delay learning in developing accurate and precise models for temporal data processing. Our code is based on PyTorch / SpikingJelly and available at: [https://github.com/Thvnvtos/SNN-delays](https://github.com/Thvnvtos/SNN-delays)

## 1 Introduction

Spiking neurons are coincidence detectors [27; 33]: they respond more when receiving synchronous, rather than asynchronous, spikes. Importantly, it is the spike arrival times that should coincide, not the spike emitting times - these times are different because propagation is usually not instantaneous. There is a delay between spike emission and reception, called the connection delay, which can vary across connections. Thanks to these heterogeneous delays, neurons can detect complex spatiotemporal spike patterns, not just synchrony patterns [23] (see Figure 1).

In the brain, the delay of a connection corresponds to the sum of the axonal, synaptic, and dendritic delays. It can reach several tens of milliseconds, but it can also be much shorter (1 ms or less) [23]. For example, the axonal delay can be reduced with myelination, which is an adaptive process that is required to learn some tasks (see [4] for a review). In other words, learning in the brain cannot be reduced to synaptic plasticity; delay learning also matters. Theoretical work has led to the same conclusion: Maass and Schmitt demonstrated, using simple spiking neuron models, that an SNN with \(k\) adjustable delays can compute a much richer class of functions than a threshold circuit with \(k\) adjustable weights [29]. Finally, on most neuromorphic chips, synapses have a programmable delay. This is the case for Intel Loihi [8], IBM TrueNorth [1], SpiNNaker [13] and SENeCA [45].
All these points have motivated us and others (see related works in the next section) to propose delay learning rules. Here we show for the first time that delays can be learned together with the weights, using backpropagation, in arbitrarily deep SNNs. The trick is to simulate delays using temporal convolutions and to learn them using the recently proposed Dilated Convolution with Learnable Spacings [24; 25]. In practice, the method is fully integrated with PyTorch and leverages its automatic-differentiation engine.

## 2 Related Work

### Deep Learning for Spiking Neural Networks

Recent advances in SNN training methods like the surrogate gradient method [30; 35] and the ANN2SNN conversion methods [5; 9; 19] made it possible to train increasingly deeper spiking neural networks. The surrogate gradient method defines a continuous relaxation of the non-smooth spiking nonlinearity: it replaces the gradient of the Heaviside function used in the spike generating process by a smooth surrogate gradient that is suitable for optimization. On the other hand, the ANN2SNN methods convert conventional artificial neural networks (ANNs) into SNNs by copying the weights from ANNs while trying to minimize the conversion error.

Other works have explored improving the spiking neurons using inspiration from biological mechanisms or techniques used in ANNs. The Parametric Leaky Integrate-and-Fire (PLIF) neuron [12] incorporates learnable membrane time constants that can be trained jointly with synaptic weights. [18] proposes a method to dynamically adapt firing thresholds in order to improve continual learning in SNNs. Spike-Element-Wise ResNet [11] addresses the problem of vanishing/exploding gradients in the plain Spiking ResNet caused by sigmoid-like surrogate functions, and successfully trained the first deep SNN with more than 150 layers. Spikformer [48] adapts the softmax-based self-attention mechanism of Transformers [40] to a spike-based formulation. These efforts have resulted in closing the gap between the performance of ANNs and SNNs on many widely used benchmarks.

Figure 1: Coincidence detection: we consider two neurons \(N1\) and \(N2\) with the same positive synaptic weight values. \(N2\) has a delayed synaptic connection denoted \(d_{21}\) of \(8\) ms, thus both spikes from spike trains \(S1\) and \(S2\) will reach \(N2\) quasi-simultaneously. As a result, the membrane potential of \(N2\) will reach the threshold \(\vartheta\) and \(N2\) will emit a spike. On the other hand, \(N1\) will not react to these same input spike trains.

### Delays in SNNs

Few previous works considered learning delays in SNNs. [41] proposed a similar method to ours, in which they convolve spike trains with an exponential kernel so that the gradient of the loss with respect to the delay can be calculated. However, their method is used only for a shallow SNN with no hidden layers. Other methods like [16; 17; 47; 39] also proposed learning rules developed specifically for shallow SNNs with only one layer. [21] proposed to learn temporal delays with Spike Timing Dependent Plasticity (STDP) in weightless SNNs. [20] proposed a method for delay-weight supervised learning in optical spiking neural networks. [2] proposed a method for deep feedforward SNNs that uses a set of multiple fixed delayed synaptic connections for the same two neurons before pruning them depending on the magnitude of the learned weights. To the best of our knowledge, [38; 37] are the only ones to learn delays and weights jointly in a deep SNN.
They proposed an adaptive maximum delay value that depends on the distribution of delays on each network layer. However, they use a finite-difference approximation to numerically estimate the gradients of the spikes with respect to the delays, and we think that those gradients are not suitable, as we achieve similar performance in our experiments with fixed random delays. We propose a control test that was not considered by the previous works and that we deem necessary: the SNN with delay learning should outperform an equivalent SNN with fixed random and uniformly distributed delays, especially with sparse connectivity.

## 3 Methods

### Spiking Neuron Model

The spiking neuron, which is the fundamental building block of SNNs, can be simulated using various models. In this work, we use the Leaky Integrate-and-Fire (LIF) model [14], which is the most widely used for its simplicity and efficiency. The membrane potential \(u_{i}^{(l)}\) of the \(i\)-th neuron in layer \(l\) follows the differential equation:

\[\tau\frac{du_{i}^{(l)}}{dt}=-(u_{i}^{(l)}(t)-u_{reset})+RI_{i}^{(l)}(t) \tag{1}\]

where \(\tau\) is the membrane time constant, \(u_{reset}\) the potential at rest, \(R\) the input resistance and \(I_{i}^{(l)}(t)\) the input current of the neuron at time \(t\). In addition to the sub-threshold dynamics, a neuron emits a unitary spike \(S_{i}^{(l)}\) when its membrane potential exceeds the threshold \(\vartheta\), after which it is instantaneously reset to \(u_{reset}\). Finally, the input current \(I_{i}^{(l)}(t)\) is stateless and represented as the sum of afferent weights \(W_{ij}^{(l)}\) multiplied by spikes \(S_{j}^{(l-1)}(t)\):

\[I_{i}^{(l)}(t)=\sum_{j}W_{ij}^{(l)}S_{j}^{(l-1)}(t) \tag{2}\]

We formulate the above equations in discrete time using Euler's method approximation, with \(u_{reset}=0\) and \(R=\tau\):

\[u_{i}^{(l)}[t] =(1-\frac{1}{\tau})u_{i}^{(l)}[t-1]+I_{i}^{(l)}[t] \tag{3}\]
\[I_{i}^{(l)}[t] =\sum_{j}W_{ij}^{(l)}S_{j}^{(l-1)}[t] \tag{4}\]
\[S_{i}^{(l)}[t] =\Theta(u_{i}^{l}[t]-\vartheta) \tag{5}\]

We use the surrogate gradient method [30] and define \(\Theta^{\prime}(x)\triangleq\sigma^{\prime}(x)\) during the backward step, where \(\sigma(x)\) is the surrogate arctangent function [12].

### Synaptic Delays as a Temporal Convolution

A feed-forward SNN model with delays is parameterized with \(W=(w_{ij}^{(l)})\in\mathbb{R}\) and \(D=(d_{ij}^{(l)})\in\mathbb{R}^{+}\), where the input of neuron \(i\) at layer \(l\) is

\[I_{i}^{(l)}[t]=\sum_{j}w_{ij}^{(l)}S_{j}^{(l-1)}[t-d_{ij}^{(l)}] \tag{6}\]

We model a synaptic connection from neuron \(j\) in layer \(l-1\) to neuron \(i\) in layer \(l\), which has a synaptic weight \(w_{ij}^{(l)}\) and delay \(d_{ij}^{(l)}\), as a one-dimensional temporal convolution (see Figure 2) with kernel \(k_{ij}^{(l)}\) as follows: \(\forall n\in\llbracket 0,...\ T_{d}-1\rrbracket\):

\[k_{ij}^{(l)}[n]=\begin{cases}w_{ij}^{(l)}&\text{if }n=T_{d}-d_{ij}^{(l)}-1\\ 0&\text{otherwise}\end{cases} \tag{7}\]

where \(T_{d}\) is the kernel size, i.e., the maximum delay + 1. Thus we redefine the input \(I_{i}^{(l)}\) in Equation 6 as a sum of convolutions:

\[I_{i}^{(l)}=\sum_{j}k_{ij}^{(l)}*S_{j}^{(l-1)} \tag{8}\]

We used a zero left-padding with size \(T_{d}-1\) on the input spike trains \(S\) so that \(I[0]\) does correspond to \(t=0\). Moreover, a zero right-padding could also be used, but it is optional; it could increase the expressivity of the learned delays, with the drawback of increasing the processing time, as the number of time-steps after the convolution will increase.

Figure 2: Example of one neuron with 2 afferent synaptic connections; convolving \(K1\) and \(K2\) with the zero left-padded \(S_{1}\) and \(S_{2}\) is equivalent to following Equation 6.

To learn the kernel element positions (i.e., delays), we use the 1D version of DCLS [24] with a Gaussian kernel [25] centered at \(T_{d}-d_{ij}^{(l)}-1\), where \(d_{ij}^{(l)}\in\llbracket 0,\ T_{d}-1\rrbracket\), and of standard deviation \(\sigma_{ij}^{(l)}\in\mathbb{R}^{*}\); thus we have: \(\forall n\in\llbracket 0,...\ T_{d}-1\rrbracket\):

\[k_{ij}^{(l)}[n]=\frac{w_{ij}^{(l)}}{c}\ \text{exp}\left(-\frac{1}{2}\left(\frac{n-T_{d}+d_{ij}^{(l)}+1}{\sigma_{ij}^{(l)}}\right)^{2}\right) \tag{9}\]

with
To learn the kernel elements positions (i.e., delays), we use the 1D version of DCLS [24] with a Gaussian kernel [25] centered at \(T_{d}-d_{ij}^{(l)}-1\), where \(d_{ij}^{(l)}\in\llbracket 0,\ T_{d}-1\rrbracket\), and of standard deviation \(\sigma_{ij}^{(l)}\in\mathbb{R}^{*}\), thus we have: \(\forall n\in\llbracket 0,...\ T_{d}-1\rrbracket\): \[k_{ij}^{(l)}[n]=\frac{w_{ij}^{(l)}}{c}\ \text{exp}\left(-\frac{1}{2}\left( \frac{n-T_{d}+d_{ij}^{(l)}+1}{\sigma_{ij}^{(l)}}\right)^{2}\right) \tag{9}\] Figure 2: Example of one neuron with 2 afferent synaptic connections, convolving \(K1\) and \(K2\) with the zero left-padded \(S_{1}\) and \(S_{2}\) is equivalent to following Equation 6 With \[c=\epsilon+\sum_{n=0}^{T_{d}-1}\text{exp}\left(-\frac{1}{2}\left(\frac{n-T_{d}+d_{ ij}^{(l)}+1}{\sigma_{ij}^{(l)}}\right)^{2}\right) \tag{10}\] a normalization term and \(\epsilon=1e-7\) to avoid division by zero, assuming that the tensors are in float32 precision. During training, \(d_{ij}^{(l)}\) are clamped after every batch to ensure their value stays in \([0,...\,T_{d}-1]\). The learnable parameters of the 1D DCLS layer with Gaussian interpolation are the weights \(w_{ij}\), the corresponding delays \(d_{ij}\), and the standard deviations \(\sigma_{ij}\). However, in our case, \(\sigma_{ij}\) are not learned, and all kernels in our model share the same decreasing standard deviation, which will be denoted as \(\sigma\). Throughout training, we exponentially decrease \(\sigma\) as our end goal is to have a sparse kernel where only the delay position is non-zero and corresponds to the weight. The Gaussian kernel transforms the discrete positions of the delays into a smoother kernel (see Figure 3), which enables the calculation of the gradients \(\frac{\partial L}{\partial d_{ij}^{(l)}}\). By adjusting the parameter \(\sigma\), we can regulate the temporal scale of the dependencies. A small value for \(\sigma\) enables the capturing of variations that occur within a brief time frame. In contrast, a larger value of \(\sigma\) facilitates the detection of temporal dependencies that extend over longer durations. Thus \(\sigma\) tuning is crucial to the trade-off between short-term precision and long-term dependencies. We start with a high \(\sigma\) value and exponentially reduce it throughout the training process, after each epoch, until it reaches its minimum value of 0.5 (Fig. 4). This approach facilitates the learning of distant long-term dependencies at the initial time. Subsequently, when \(\sigma\) has a smaller value, it enables refining both weights and delays with more precision, making the Gaussian kernel more similar to the discrete kernel that is used at inference time. As we will see later in our ablation study (Section 4.3), this approach outperforms a constant \(\sigma\). Indeed, the Gaussian kernel is only used to train the model; when evaluating on the validation or test set, it is converted to a discrete kernel as described in Equation 7 by rounding the delays. This permits to implement sparse kernels for inference which are very useful for uses on neuromorphic Figure 3: Gaussian convolution kernels for \(N\) synaptic connections. The Gaussians are centered on the delay positions, and the area under their curves corresponds to the synaptic weights \(w_{i}\). On the right, we see the delayed spike trains after being convolved with the kernels. (the \(-1\) was omitted for figure clarity). 
## 4 Experiments ### Experimental Setup We chose to evaluate our method on the SHD and SSC/GSC datasets [6], as they require leveraging temporal patterns of spike times to achieve a good classification accuracy, unlike most computer-vision spiking benchmarks. Both spiking datasets are constructed using artificial cochlear models to convert audio speech data to spikes; the original audio datasets are the Heidelberg Dataset (HD) and the Google Speech Commands v0.02 Dataset (SC) [42] for SHD and SSC, respectively. The SHD dataset consists of 10k recordings covering 20 classes: spoken digits ranging from zero to nine in both English and German. SSC and GSC are much larger datasets, consisting of 100k different recordings. The task we consider on SSC and GSC is top-1 classification over all 35 classes (similar to [6; 3]), which is more challenging than the original keyword-spotting task on 12 classes proposed in [42]. For the two spiking datasets, we used spatio-temporal bins to reduce the input dimensions. Input neurons were reduced from 700 to 140 by binning every 5 neurons; for the temporal dimension, we used a discrete time-step of \(\Delta t=10\) ms and a zero right-padding so that all recordings in a batch have the same duration. For the non-spiking GSC, we used the Mel spectrogram representation of the waveforms, with 140 frequency bins and approximately 100 time-steps, to remain consistent with the input sizes used in SSC. Figure 4: This figure illustrates the evolution of the same delay kernels for an example of eight synaptic connections of one neuron throughout the training process. The x-axis corresponds to time (each kernel is of size \(T_{d}=25\)) and the y-axis to the synapse index. (a) corresponds to the initial phase, where the standard deviation of the Gaussian \(\sigma\) is large (\(\frac{T_{d}}{2}\)), allowing long temporal dependencies to be taken into account. (b) corresponds to the intermediate phase; (c) is taken from the final phase, where \(\sigma\) is at its minimum value (0.5) and weight tuning is more emphasized. Finally, (d) represents the kernels after conversion to the discrete form with rounded positions. We used a very simple architecture: a feedforward SNN with two or three hidden fully connected layers. Each feedforward layer is implemented using a DCLS module where each synaptic connection is modeled as a 1D temporal convolution with one Gaussian kernel element (as described in Section 3.2), followed by batch normalization, a LIF module (as described in Section 3.1) and dropout. Table 1 lists the values of some hyperparameters used for the three datasets (for more details, refer to the code repository). The readout layer consists of \(n_{\text{classes}}\) LIF neurons with infinite threshold (where \(n_{\text{classes}}\) is 20 or 35 for SHD and SSC/GSC, respectively). Similar to [3], the output \(\text{out}_{i}[t]\) for every neuron \(i\) at time \(t\) is \[\text{out}_{i}[t]=\text{softmax}(u_{i}^{(r)}[t])=\frac{e^{u_{i}^{(r)}[t]}}{ \sum_{j=1}^{n_{\text{classes}}}e^{u_{j}^{(r)}[t]}} \tag{11}\] where \(u_{i}^{(r)}[t]\) is the membrane potential of neuron \(i\) in the readout layer \(r\) at time \(t\).
The final output of the model after \(T\) time-steps is defined as \[\hat{y}_{i}=\sum_{t=1}^{T}\text{out}_{i}[t] \tag{12}\] We denote the batch size by \(N\) and the ground truth by \(y\). We calculate the cross-entropy loss for one batch as \[\mathcal{L}=\frac{1}{N}\sum_{n=1}^{N}-\log(\text{softmax}(\hat{y}_{y_{n}}[n])) \tag{13}\] The Adam optimizer [26] is used for all models and groups of parameters, with base learning rates \(lr_{w}=0.001\) for synaptic weights and \(lr_{d}=0.1\) for delays. We used a one-cycle learning rate scheduler [36] for the weights and cosine annealing [28] without restarts for the delay learning rate. Our work is implemented1 using the PyTorch-based SpikingJelly [10] framework. Footnote 1: Our code is available at: [https://github.com/Thvnvtos/SNN-delays](https://github.com/Thvnvtos/SNN-delays) ### Results We compare our method in Table 2 to previous works on the SHD, SSC and GSC-35 (35 denoting the harder 35-class version) benchmark datasets in terms of accuracy, model size, and whether recurrent connections or delays were used. The reported accuracy of our method corresponds to the accuracy on the test set using the best-performing model on the validation set. However, since there is no validation set provided for SHD, we use the test set as the validation set (similar to [3]). The margins of error are calculated at a 95% confidence level using a t-distribution (we performed ten and five experiments using different random seeds for SHD and SSC/GSC, respectively). Our method outperforms the previous state-of-the-art accuracy on the three benchmarks (with a significant improvement on SSC and GSC) without using recurrent connections, with a substantially lower number of parameters, and using only vanilla LIF neurons. Other methods that use delays do have a slightly lower number of parameters than we do, yet we outperform them significantly on SHD; moreover, they did not report any results on the harder SSC/GSC benchmarks. Finally, by increasing the number of hidden layers, we found that the accuracy plateaued after two hidden layers for SHD, and three for SSC/GSC. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & \# Hidden Layers & Hidden size & \(\tau\) (ms) & Maximum Delay (ms) & Dropout rate \\ \hline SHD & 2 & 256 & \(10.05^{*}\) & 250 & 0.4 \\ SSC/GSC & 2 or 3 & 512 & 15 & 300 & 0.25 \\ \hline \hline \end{tabular} *We found that a LIF with quasi-instantaneous leak \(\tau=10.05\) (since \(\Delta t=10\)) is better than using a Heaviside function for SHD. \end{table} Table 1: Network parameters for different datasets ### Ablation study In this section, we conduct control experiments aimed at assessing the effectiveness of our delay learning method. The model trained using our full method will be referred to as _Decreasing_ \(\sigma\), while _Constant_ \(\sigma\) will refer to a model where the standard deviation \(\sigma\) is constant and equal to the minimum value of \(0.5\) throughout training. Additionally, _Fixed random delays_ will refer to a model where delays are initialized randomly and not learned, while only weights are learned. Meanwhile, _Decreasing_ \(\sigma\) - _Fixed weights_ will refer to a model where the weights are fixed and only delays are learned with a decreasing \(\sigma\). Finally, _No delays_ denotes a standard SNN without delays. To ensure equal parameter counts across all models (for a fair comparison), we increased the number of hidden neurons in the _No delays_ case.
Moreover, to make the comparison even fairer, all models have the same initialization for weights and, if required, the same initialization for delays. We compared the five different models as shown in Figure 5a. The models with delays (whether fixed or learned) significantly outperformed the _No delays_ model both on SHD (FC) and SSC (FC); for us, this was an expected outcome given the temporal nature of these benchmarks, as achieving a high accuracy necessitates learning long temporal dependencies. However, we did not expect the Fixed random delays model to be almost on par with the models where delays were trained, with the Decreasing \(\sigma\) model only slightly outperforming it. To explain this, we hypothesized that a random uniformly distributed set of delay positions will most likely cover the whole possible temporal range. This hypothesis is plausible given that the number of synaptic connections vastly outnumbers the total possible discrete delay positions for each kernel. Therefore, as the number of synaptic connections within a layer grows, the necessity of moving delay positions away from their initial state diminishes; tuning only the weights of this set of fixed delays is then enough to achieve performance comparable to delay learning. In order to validate this hypothesis, we conducted a comparison using the same models with a significantly reduced number of synaptic connections. We applied fixed binary masks to the network's synaptic weight parameters (see the sketch below). Specifically, for each neuron in the network we reduced the number of its synaptic connections to ten for both datasets (except for the No delays model, which has more connections to ensure equal parameter counts). This corresponds to 96% sparsity for SHD and 98% sparsity for SSC. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Dataset & Method & Recurrent & Delays & \# Params & Top1 Accuracy \\ \hline \multirow{8}{*}{**SHD**} & EventProp-GeNN [31] & ✓ & ✗ & N/a & 84.80 \(\pm\) 1.5\% \\ & Cuba-LIF [7] & ✗ & ✗ & N/a & 87.80 \(\pm\) 1.1\% \\ & Adaptive SRNN [44] & ✓ & ✗ & N/a & 90.40\% \\ & SNN with Delays [2] & ✗ & ✓ & 0.1M & 90.43\% \\ & TA-SNN [43] & ✗ & ✗ & N/a & 91.08\% \\ & STSC-SNN [46] & ✗ & ✗ & 2.1M & 92.36\% \\ & Adaptive Delays [37] & ✗ & ✓ & 0.1M & 92.45\% \\ & RadLIF [3] & ✓ & ✗ & 3.9M & 94.62\% \\ & **Our work (2 hidden layers)** & ✗ & ✓ & **0.2M** & **95.07 \(\pm\) 0.24\%** \\ \hline \multirow{4}{*}{**SSC**} & Recurrent SNN [6] & ✓ & ✗ & N/a & 50.90 \(\pm\) 1.1\% \\ & Heterogeneous RSNN [32] & ✓ & ✗ & N/a & 57.30\% \\ & SNN-CNN [34] & ✗ & ✓ & N/a & 72.03\% \\ & Adaptive SRNN [44] & ✓ & ✗ & N/a & 74.20\% \\ & SpikGRU [7] & ✓ & ✗ & N/a & 77.00 \(\pm\) 0.4\% \\ & RadLIF [3] & ✓ & ✗ & 3.9M & 77.40\% \\ & **Our work (2 hidden layers)** & ✗ & ✓ & **0.7M** & **79.77 \(\pm\) 0.09\%** \\ & **Our work (3 hidden layers)** & ✗ & ✓ & **1.2M** & **80.29 \(\pm\) 0.06\%** \\ \hline \multirow{4}{*}{**GSC-35**} & MSAT [22] & ✗ & ✗ & N/a & 87.33\% \\ & RadLIF [3] & ✓ & ✗ & 1.2M & 94.51\% \\ \cline{1-1} & **Our work (2 hidden layers)** & ✗ & ✓ & **0.7M** & **94.91 \(\pm\) 0.09\%** \\ \cline{1-1} & **Our work (3 hidden layers)** & ✗ & ✓ & **1.2M** & **95.29 \(\pm\) 0.11\%** \\ \hline \hline \end{tabular} \end{table} Table 2: Classification accuracy on SHD, SSC and GSC-35 datasets With the number of synaptic connections reduced, it is unlikely that the random uniform initialization of delay positions will cover most of the temporal range. Thus, specific long-term dependencies will need to be learned by moving the delays.
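The fixed binary masks mentioned above can be realized with a few lines of PyTorch. This is a minimal sketch under our own naming and layer-size assumptions; the key point is that the mask is sampled once and reapplied after every optimizer step so that pruned weights stay at zero.

```python
import torch

def make_fixed_mask(n_post, n_pre, k=10, seed=0):
    """One fixed random binary mask with exactly k afferent connections per neuron."""
    g = torch.Generator().manual_seed(seed)
    idx = torch.argsort(torch.rand(n_post, n_pre, generator=g), dim=1)[:, :k]
    mask = torch.zeros(n_post, n_pre)
    mask.scatter_(1, idx, 1.0)        # k ones per row, zeros elsewhere
    return mask

layer = torch.nn.Linear(256, 256, bias=False)
mask = make_fixed_mask(256, 256, k=10)   # ~96% sparsity, as in the SHD setup

def apply_mask():
    with torch.no_grad():
        layer.weight.mul_(mask)

apply_mask()   # once after initialization, and again after every optimizer step,
               # so that gradient updates cannot revive the pruned weights
```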
The test accuracies corresponding to this control test are shown in Figure 5b, which illustrates the difference in performance between the Fixed random delays model and the Decreasing/Constant \(\sigma\) models in the sparse case. This supports our hypothesis, shows the need for this control test when evaluating delay learning methods, and further indicates the effectiveness of our method. In addition, we also tested a model where only the delays are learned while the synaptic weights are fixed (Decreasing \(\sigma\) - Fixed weights). It can be seen that learning only the delays gives acceptable results in the fully connected case (in agreement with [15]) but not in the sparse case. To summarize, it is always preferable to learn both weights and delays (and decreasing \(\sigma\) helps). If one has to choose, then learning weights is preferable, especially with sparse connectivity. ## 5 Conclusion In this paper, we propose a method for learning delays in feedforward spiking neural networks using dilated convolutions with learnable spacings (DCLS). Every synaptic connection is modelled as a 1D Gaussian kernel centered on the delay position, and DCLS is used to learn the kernel positions (i.e., delays). The standard deviation of the Gaussians is decreased throughout training, such that at the end of training we obtain an SNN model with one discrete delay per synapse, which could potentially be compatible with neuromorphic implementations. We show that our method outperforms the state-of-the-art on the temporal spiking benchmarks SHD and SSC and the non-spiking benchmark GSC-35, while using fewer parameters than previous proposals. We also perform a rigorous control test that demonstrates the effectiveness of our method. Future work could investigate kernel functions other than the Gaussian, or apply our method to other network architectures such as convolutional networks. #### Limitations The primary limitations of our work revolve around the compatibility and constraints of our delay learning method. Specifically, our method is limited to offline training conducted in discrete-time simulations, and it cannot handle recurrent connections. Additionally, a maximum delay limit, which corresponds to the size of the kernel, must be predetermined and fixed before the learning process. Figure 5: Barplots of test accuracies on the SHD and SSC datasets for different models, with (a) fully connected layers (FC) and (b) sparse synaptic connections (S), where the number of synaptic connections of each neuron is reduced to ten for both SHD and SSC. #### Computational resources This project required about 500 GPU hours on a single Nvidia Tesla T4 GPU, with two Intel(R) Xeon(R) CPU threads @ 2.20 GHz. Given this hardware configuration, a single training session lasted approximately 1 hour for the SHD runs, while for the SSC/GSC runs, a single training session lasted around 7 hours. The available computing resources allowed us to perform the required calculations efficiently, leading to accurate and competitive outcomes within a reasonable time. #### Acknowledgment This research was supported in part by the Agence Nationale de la Recherche under Grant ANR-20-CE45-0005 BRAIN-Net. This work was granted access to the HPC resources of CALMIP supercomputing center under the allocation 2023-[P22021]. Support from the ANR-3IA Artificial and Natural Intelligence Toulouse Institute is gratefully acknowledged.
We also want to thank Wei Fang for developing the SpikingJelly framework that we used in this work.
2309.10759
A Blueprint for Precise and Fault-Tolerant Analog Neural Networks
Analog computing has reemerged as a promising avenue for accelerating deep neural networks (DNNs) due to its potential to overcome the energy efficiency and scalability challenges posed by traditional digital architectures. However, achieving high precision and DNN accuracy using such technologies is challenging, as high-precision data converters are costly and impractical. In this paper, we address this challenge by using the residue number system (RNS). RNS allows composing high-precision operations from multiple low-precision operations, thereby eliminating the information loss caused by the limited precision of the data converters. Our study demonstrates that analog accelerators utilizing the RNS-based approach can achieve ${\geq}99\%$ of FP32 accuracy for state-of-the-art DNN inference using data converters with only $6$-bit precision whereas a conventional analog core requires more than $8$-bit precision to achieve the same accuracy in the same DNNs. The reduced precision requirements imply that using RNS can reduce the energy consumption of analog accelerators by several orders of magnitude while maintaining the same throughput and precision. Our study extends this approach to DNN training, where we can efficiently train DNNs using $7$-bit integer arithmetic while achieving accuracy comparable to FP32 precision. Lastly, we present a fault-tolerant dataflow using redundant RNS error-correcting codes to protect the computation against noise and errors inherent within an analog accelerator.
Cansu Demirkiran, Lakshmi Nair, Darius Bunandar, Ajay Joshi
2023-09-19T17:00:34Z
http://arxiv.org/abs/2309.10759v1
# A blueprint for precise and fault-tolerant analog neural networks ###### Abstract Analog computing has reemerged as a promising avenue for accelerating deep neural networks (DNNs) due to its potential to overcome the energy efficiency and scalability challenges posed by traditional digital architectures. However, achieving high precision and DNN accuracy using such technologies is challenging, as high-precision data converters are costly and impractical. In this paper, we address this challenge by using the residue number system (RNS). RNS allows composing high-precision operations from multiple low-precision operations, thereby eliminating the information loss caused by the limited precision of the data converters. Our study demonstrates that analog accelerators utilizing the RNS-based approach can achieve \(\geq\)99% of FP32 accuracy for state-of-the-art DNN inference using data converters with only 6-bit precision whereas a conventional analog core requires more than 8-bit precision to achieve the same accuracy in the same DNNs. The reduced precision requirements imply that using RNS can reduce the energy consumption of analog accelerators by several orders of magnitude while maintaining the same throughput and precision. Our study extends this approach to DNN training, where we can efficiently train DNNs using 7-bit integer arithmetic while achieving accuracy comparable to FP32 precision. Lastly, we present a fault-tolerant dataflow using redundant RNS error-correcting codes to protect the computation against noise and errors inherent within an analog accelerator. ## Introduction Deep Neural Networks (DNNs) are widely employed across various applications today. Unfortunately, their compute, memory, and communication demands are continuously on the rise. The slow-down in CMOS technology scaling, along with these increasing demands, has led analog DNN accelerators to gain significant research interest. Recent research has been focused on using various analog technologies such as photonic cores [1; 2; 3; 4; 5; 6; 7], resistive arrays [8; 9; 10; 11; 12], switched capacitor arrays [13; 14], Phase Change Materials (PCM) [15], Spin-Transfer Torque (STT)-RAM [16; 17], etc., to enable highly parallel, fast, and efficient matrix-vector multiplications (MVMs) in the analog domain. These MVMs are fundamental components used to build larger general matrix-matrix multiplication (GEMM) operations, which make up more than 90% of the operations in DNN inference and training [18]. The success of this approach, however, is constrained by the limited precision of the digital-to-analog and analog-to-digital data converters (i.e., DACs and ADCs). In an analog accelerator, the data is converted between analog and digital domains using data converters before and after every analog operation. Typically, a complete GEMM operation cannot be performed at once in the analog domain due to the fixed size of the analog core. Instead, the GEMM operation is tiled into smaller MVM operations. As a result, each MVM operation produces a partial output that must be accumulated with other partial outputs to obtain the final GEMM result. Concretely, an MVM operation consists of parallel dot products between \(b_{w}\)-bit signed weight vectors and \(b_{\text{in}}\)-bit signed input vectors--each with \(h\) elements--resulting in a partial output containing \(b_{\text{out}}\) bits of information, where \(b_{\text{out}}=b_{\text{in}}+b_{w}+\log_{2}(h)-1\).
An ADC with a precision greater than \(b_{\text{out}}\) (i.e., \(b_{\text{ADC}}\geq b_{\text{out}}\)) is required to ensure no loss of information when capturing these partial outputs. Unfortunately, the energy consumption of ADCs increases exponentially with bit precision (often referred to as effective number of bits (ENOB)). This increase is roughly \(4\times\) for each additional output bit [19]. As a result, energy-efficient analog accelerator designs typically employ ADCs with lower precision than \(b_{\text{out}}\) and only capture the \(b_{\text{ADC}}\) most significant bits (MSBs) from the \(b_{\text{out}}\) bits of each partial output [20]. Reading only MSBs causes information loss in each partial output leading to accuracy degradation in DNNs, as pointed out by Rekhi et al. [20]. This degradation is most pronounced in large DNNs and large datasets. Fig. 1 shows the impact of this approach on DNN accuracy in two tasks: (1) a two-layer convolutional neural network (CNN) for classifying the MNIST dataset [21]: a simple task with only 10 classes, and (2) the ResNet50 CNN [22] for classifying the ImageNet dataset [23]: a more challenging task with 1000 classes. As the vector size \(h\) increases, higher precision is needed at the output to maintain the accuracy in both DNNs. Moreover, ResNet50 experiences accuracy degradation at smaller values of \(h\) compared to the two-layer CNN. While using a higher precision ADC can help recover from this accuracy degradation, it significantly reduces the energy efficiency of the analog hardware. Essentially, to efficiently execute large DNNs using analog accelerators, it is crucial to find a better way to achieve high accuracy than simply increasing the bit precision of the data converters. In this work, we present a universal residue number system (RNS)-based framework to overcome the above-mentioned challenge in analog DNN inference as well as DNN training. RNS represents high-precision values using multiple low-precision integer residues for a selected set of moduli. As such, RNS enables high-precision arithmetic without any information loss on the partial products, even when using low-precision DACs and ADCs. Utilization of RNS leads to a significant reduction in the data converter energy consumption, which is the primary contributor to energy usage in analog accelerators. This reduction can reach up to six orders of magnitude compared to a conventional fixed-point analog core with the same output bit precision. Our study shows that the RNS-based approach enables \(\geq 99\%\) FP-32 inference accuracy by using only 6-bit data converters for state-of-the-art MLPerf (Inference: Datacenters) benchmarks [24] and Large Language Models (LLMs). We also demonstrate the applicability of this approach in training and fine-tuning state-of-the-art DNNs using low-precision analog hardware. The RNS approach, however, is susceptible to noise as small errors in the residues scale up during output reconstruction, leading to larger errors in the standard representation. To address this issue, we incorporate the Redundant RNS (RRNS) error-correcting code [25; 26; 27] to introduce fault-tolerance capabilities into the dataflow. As RNS is closed under multiplication and addition, no significant changes are required in the design of the analog core or in how GEMM operations are performed. Unlike a conventional analog core design, performing RNS operations necessitates an analog modulo operation. 
This operation can be implemented by using ring oscillators [28] in an analog electrical core or by using optical phase shifters in an analog optical core. Our proposed framework, however, remains agnostic to the underlying technology. Importantly, arbitrary fixed-point precision can be achieved by combining the positional number system (PNS) and RNS in analog hardware. Overall, our presented RNS-based methodology offers a solution combining high accuracy, high energy efficiency, and fault tolerance in analog DNN inference and training. ## Results ### DNN Inference and Training Using RNS The RNS represents an integer as a set of smaller (integer) residues. These residues are calculated by performing a modulo operation on the said integer using a selected set of \(n\)_co-prime_ moduli. Let \(A\) be an integer. \(A\) can be represented in the RNS with \(n\) residues as \(\{a_{1},\ldots,a_{n}\}\) for a set of co-prime moduli \(\mathcal{M}=\{m_{1},\ldots,m_{n}\}\) where \(a_{i}=|A|_{m_{i}}\equiv A\mod m_{i}\) for \(i\in\{1\ldots n\}\). \(A\) can be uniquely reconstructed using the Chinese Remainder Theorem (CRT): \[A=\bigg{|}\sum_{i=1}^{n}a_{i}M_{i}T_{i}\bigg{|}_{M}, \tag{1}\] if \(A\) is within the range \([0,M)\) where \(M=\prod_{i}m_{i}\). Here, \(M_{i}=M/m_{i}\) and \(T_{i}\) is the multiplicative inverse of \(M_{i}\), i.e., \(|M_{i}T_{i}|_{m_{i}}\equiv 1\). Hereinafter, we refer to the integer \(A\) as the _standard representation_, while we refer to the set of integers \(\{a_{1},\ldots,a_{n}\}\) simply as the residues. A DNN consists of a sequence of \(L\) layers. During inference, where the DNN is previously trained and its parameters are fixed, only a forward pass is performed. Generically, the input \(X\) to \((\ell+1)\)-th layer of a DNN during the forward pass is the output generated by the previous \(\ell\)-th layer: \[X^{(\ell+1)}=f^{(\ell)}\big{(}W^{(\ell)}X^{(\ell)}\big{)}, \tag{2}\] where \(W^{(\ell)}X^{(\ell)}=O^{(\ell)}\) is a GEMM operation and \(f(\cdot)\) is an element-wise nonlinear function. DNN training requires both forward and backward passes as well as weight updates. The forward pass in the training is performed the same way as in Eq. (2). After the forward pass, a loss value \(\mathcal{L}\) is calculated using the output of the last layer and the ground truth. The gradients of the DNN activations and parameters with respect to \(\mathcal{L}\) for each layer are calculated by performing a backward pass after each forward pass: \[\frac{\partial\mathcal{L}}{\partial X^{(\ell)}}={W^{(\ell)}}^{T}\frac{\partial \mathcal{L}}{\partial O^{(\ell)}}, \tag{3}\] \[\frac{\partial\mathcal{L}}{\partial W^{(\ell)}}=\frac{\partial\mathcal{L}}{ \partial O^{(\ell)}}{X^{(\ell)}}^{T}. \tag{4}\] Figure 1: **Inference accuracy versus vector size for varying data bit-width in a conventional analog core.****a** Inference accuracy for a two-layer CNN classifying handwritten digits from the MNIST dataset. **b** Inference accuracy for ResNet50 classifying images from the ImageNet dataset evaluated in an analog core with varying precision \(b\) and vector sizes \(h\). For both **a** and **b**, **b**-bit precision means \(b=b_{\text{DAC}}=b_{\text{ADC}}<b_{\text{out}}\) where \(b\) varies between 2 and 8. 
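As a concrete illustration of the forward conversion and of the CRT reconstruction in Eq. (1), the short Python sketch below computes an integer dot product independently per modulus and recovers the exact result. The 6-bit moduli set is one of the example sets given later in Table 1; vector values and names are our own.

```python
from math import prod

M_SET = [63, 62, 61, 59]             # pairwise co-prime, ~6-bit moduli
M = prod(M_SET)                       # RNS range

def to_residues(a):
    return [a % m for m in M_SET]     # forward conversion

def crt(residues):
    """Reconstruct the standard representation from residues (Eq. 1)."""
    total = 0
    for a_i, m_i in zip(residues, M_SET):
        M_i = M // m_i
        T_i = pow(M_i, -1, m_i)       # multiplicative inverse of M_i mod m_i
        total += a_i * M_i * T_i
    return total % M

# Dot product of two small integer vectors, carried out modulus by modulus.
x = [3, 7, 12, 5]
w = [9, 2, 4, 11]
res = [sum(wi * xi for wi, xi in zip(w, x)) % m for m in M_SET]
assert crt(res) == sum(wi * xi for wi, xi in zip(w, x))   # exact, since < M
```

The assertion holds because the dot product stays below \(M\); this is precisely the constraint that Eq. (7) later imposes on the choice of moduli.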
Using these gradients \(\Delta W^{(\ell)}=\frac{\partial\mathcal{L}}{\partial W^{(\ell)}}\), the DNN parameters are updated in each iteration \(i\): \[W^{(\ell)}_{i+1}=W^{(\ell)}_{i}-\eta\Delta W^{(\ell)}_{i} \tag{5}\] with a step size \(\eta\) for a simple stochastic gradient descent (SGD) optimization algorithm. Essentially, for each layer, one GEMM operation is performed in the forward pass and two GEMM operations are performed in the backward pass. Because RNS is closed under addition and multiplication operations, GEMM operations can be performed in the RNS space. Using the RNS, Eq. (2) can be rewritten as: \[X^{(\ell+1)}=f^{(\ell)}\Bigg{(}\text{CRT}\bigg{(}\Big{|}\big{|}W^{(\ell)}\big{|} _{\mathcal{M}}\big{|}X^{(\ell)}\big{|}_{\mathcal{M}}\Big{|}_{\mathcal{M}} \bigg{)}\Bigg{)}. \tag{6}\] The same approach applies for Eqs. (3) and (4) in the backward pass. The moduli set \(\mathcal{M}\) must be chosen to ensure that the outputs of the RNS operations are smaller than \(M\), which means that \[\log_{2}M\geq b_{\text{out}}=b_{\text{in}}+b_{w}+\log_{2}(h)-1, \tag{7}\] should be guaranteed for a dot product between \(b_{\text{in}}\)-bit input and \(b_{w}\)-bit weight vectors with \(h\)-elements. This constraint prevents overflow in the computation. ### Precision and Energy Efficiency in the RNS-based Analog Core The selection of moduli set \(\mathcal{M}\), which is constrained by Eq. (7), impacts the achievable precision at the output as well as the energy efficiency of the RNS-based analog core. Table 1 compares RNS-based analog GEMM cores with example moduli sets and regular fixed-point analog GEMM cores. Here, we show two cases for the regular fixed-point representation: (1) the low-precision (LP) case where \(b_{\text{out}}>b_{\text{ADC}}=b_{\text{DAC}}\), and (2) the high-precision (HP) case where \(b_{\text{out}}=b_{\text{ADC}}>b_{\text{DAC}}\). It should be noted that all three analog cores represent data as fixed-point numbers. We use the term'regular fixed-point core' to refer to a typical analog core that performs computations in the standard representation (without RNS). 'RNS-based core' refers to an analog core that performs computations on the fixed-point residues. While the LP approach introduces \(b_{\text{out}}-b_{\text{ADC}}\) bits of information loss in every dot product, the HP approach uses high-precision ADCs to prevent this loss. For the RNS-based core, we picked \(b_{\text{in}}=b_{w}=b_{\text{ADC}}=b_{\text{DAC}}=\lceil\log_{2}m_{i}\rceil \equiv b\) for ease of comparison against the fixed-point cores. Table 1 shows example moduli sets that are chosen to guarantee Eq. (7) for \(h=128\) while keeping the moduli under the chosen bit-width \(b\). In this case, for \(n\) moduli with bit-width of \(b\), \(M\) covers \(\approx n\cdot b\) bits of range at the output. \(h\) is chosen to be 128 as an example considering the common layer sizes in the evaluated MLPerf (Inference: Datacenter) benchmarks. The chosen \(h\) provides high throughput with high utilization of the GEMM core. Fig. 2a compares the error (with respect to FP32 results) observed when performing dot products with the RNS-based core and the LP fixed-point core with the same bit precision. Both cores use the configurations described in Table 1 for the example vector size \(h=128\). The larger absolute error observed in the LP fixed-point case illustrates the effect of the information loss mentioned above. HP fixed-point case is not shown as it is equivalent to the RNS case. Fig. 
2b shows the energy consumption of DACs and ADCs per dot product for the three aforementioned analog hardware configurations. To achieve the same MVM throughput as the (LP/HP) fixed-point cores, the RNS-based core with \(n\) moduli must use \(n\) distinct MVM units and \(n\) sets of DACs and ADCs. This makes the energy consumption of the RNS-based core \(n\times\) larger compared to the LP fixed-point approach. However, the LP fixed-point approach with low-precision ADCs experiences information loss in the partial outputs and hence has lower accuracy. The RNS-based approach and the HP fixed-point approach provide the same bit precision (i.e., the same DNN accuracy). Yet, using the RNS-based approach is orders of magnitude more energy-efficient than the HP fixed-point approach. This is mainly because of the high cost of the high-precision ADCs required to capture the full output in the HP fixed-point approach. ADCs dominate the energy consumption, with approximately three orders of magnitude higher energy usage than DACs with the same bit precision. In addition, energy consumption in ADCs increases exponentially with increasing bit precision [19]. This favors using multiple DACs and ADCs with lower precision in the RNS-based approach over using a single high-precision ADC. Briefly, the RNS-based approach provides a sweet spot between the LP and HP fixed-point approaches, compromising neither high accuracy nor high energy efficiency. ### Accuracy in the RNS-based Analog Core Fig. 3a compares the inference accuracy of MLPerf (Inference: Datacenters) benchmarks [24] and OPT [29] (a transformer-based LLM) when run on an RNS-based analog core and a fixed-point (LP) analog core. The HP fixed-point analog core results are not shown as they are equivalent to the RNS-based results. The evaluated DNNs, their corresponding tasks, and the datasets are listed in Table 2. Fig. 3a shows that the RNS-based approach significantly ameliorates the accuracy drop caused by the low-precision ADCs used in the LP fixed-point approach for all the networks. By using the RNS-based approach, it is possible to achieve \(\geq\)99% of FP32 accuracy (this cut-off is defined in the MLPerf benchmarks [24]) for all evaluated benchmarks when using residues with as low as 6 bits. This number can be lowered to 5 bits for BERT-Large and RNN-T and to 4 bits for DLRM. Besides inference, the RNS-based approach opens the door for analog computing to be used in tasks that require higher precision than inference, such as DNN training. Figure 3b-d show the loss during DNN training/fine-tuning. Table 3 reports the validation accuracies after FP32 and RNS-based low-precision training. Here, the GEMM operations during forward and backward passes of training follow the same methodology as inference, with weight updates carried out in FP32. Our experiments show that \(\geq\)99% FP32 validation accuracy is achievable after training ResNet50 from scratch using the RNS-based approach with only 6-bit moduli. Similarly, fine-tuning BERT-Large and OPT-125M by using 5-bit and 7-bit moduli, respectively, can reach \(\geq\)99% FP32 validation accuracy. The results are noticeably promising, as previous efforts on analog DNN hardware that adopted the LP fixed-point approach never successfully demonstrated the training of state-of-the-art DNNs due to the limited precision of this approach. Fig. 4 illustrates the dataflow of the RNS-based analog core when performing MVM as part of the DNN inference/training.
An input vector \(X\) and a weight matrix \(W\) to be multiplied in the MVM unit are first mapped to signed integers. To mitigate the quantization effects, \(X\) and each row in \(W\) are scaled by an FP32 scaling factor that is unique to the vector (see Methods). The signed integers are then converted into RNS residues through a modulo operation (i.e., forward conversion). By construction, each residue is within the range of \([0,m_{i})\). To achieve the same throughput as a fixed-point analog core, the RNS-based analog core with \(n\) moduli requires using \(n\) analog MVM units--one for each modulus--and running them in parallel. Each analog MVM unit requires a set of DACs for converting the associated input and weight residues into the analog domain. The MVM operations are followed by an analog modulo operation on each output residue vector. Thanks to the modulo operation, the output residues--to be captured by ADCs--are reduced back to the \([0,m_{i})\) range. Therefore, a bit precision of \(\lceil\log_{2}m_{i}\rceil\) is adequate for both DACs and ADCs to perform input and output conversions without any information loss. The output residues are then converted back to the standard representation in the digital domain using Eq. (1) to generate the signed-integer output vector, which is then mapped back to an FP32 final output \(Y\). The non-linear function \(f\) (e.g., ReLU, sigmoid, etc.) is then performed digitally in FP32. \begin{table} \begin{tabular}{l l l} \hline \hline **DNN** & **Task** & **Dataset** \\ \hline ResNet50 & Image classification & ImageNet [23] \\ SSD-ResNet34 & Object detection & MS COCO [35] \\ BERT-Large & Question answering & SQuAD v1.1 [36] \\ RNN-T & Speech recognition & Librispeech [37] \\ DLRM & Recommendation & 1TB Click Logs [38] \\ OPT-125M & Language Modeling & Wikitext [39] \\ OPT-350M & Language Modeling & Wikitext [39] \\ \hline \hline \end{tabular} \end{table} Table 2: MLPerf (Inference: Datacenters) benchmarks. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multicolumn{5}{c}{**RNS-based Core (This work)**} & \multicolumn{4}{c}{**LP Fixed-Point Core**} & \multicolumn{3}{c}{**HP Fixed-Point Core**} \\ \cline{2-14} \(b_{\text{in}}\) & \(b_{w}\) & \(b_{\text{DAC}}\) & \(\lceil\log_{2}m_{i}\rceil\) & \(b_{\text{ADC}}\) & Moduli Set (\(\mathcal{M}\)) & RNS Range (\(M\)) & \(b_{\text{DAC}}\) & \(b_{\text{out}}\) & \(b_{\text{ADC}}\) & Lost Bits & \(b_{\text{DAC}}\) & \(b_{\text{out}}\) & \(b_{\text{ADC}}\) \\ \hline 4 & 4 & 4 & 4 & 4 & \{15,14,13,11\} & \(\simeq 2^{15}-1\) & 4 & 14 & 4 & 10 & 4 & 14 & 14 \\ 5 & 5 & 5 & 5 & 5 & \{31,29,28,27\} & \(\simeq 2^{19}-1\) & 5 & 16 & 5 & 11 & 5 & 16 & 16 \\ 6 & 6 & 6 & 6 & 6 & \{63,62,61,59\} & \(\simeq 2^{24}-1\) & 6 & 18 & 6 & 12 & 6 & 18 & 18 \\ 7 & 7 & 7 & 7 & 7 & \{127,126,125\} & \(\simeq 2^{21}-1\) & 7 & 20 & 7 & 13 & 7 & 20 & 20 \\ 8 & 8 & 8 & 8 & 8 & \{255,254,253\} & \(\simeq 2^{24}-1\) & 8 & 22 & 8 & 14 & 8 & 22 & 22 \\ \hline \hline \end{tabular} \end{table} Table 1: Data and data converter precision in RNS-based, LP fixed-point, and HP fixed-point analog cores. Figure 2: **Comparison of the RNS-based and regular fixed-point analog approaches.** **a** The distribution of average error observed at the output of a dot product performed with the RNS-based analog approach (pink) and the LP regular fixed-point analog approach (cyan). Error is defined as the distance from the result calculated in FP32.
The experiments are repeated for 10,000 randomly generated vector pairs with vector size \(h=128\). **b** Energy consumption of data converters (i.e., DACs and ADCs) per dot product for the RNS-based analog approach (pink) and the LP (cyan) and HP (dark blue) regular fixed-point analog approaches. See Methods for the energy estimation methodology. Figure 3: **Accuracy performance of the RNS-based analog core.** **a** Inference accuracy of regular fixed-point (LP) and RNS-based cores (see Table 1) on MLPerf (Inference: Datacenters) benchmarks. The accuracy numbers are normalized to the FP32 accuracy. **b-d** Loss during training for FP32 and RNS-based approaches with varying moduli bit-width. ResNet50 (b) is trained from scratch for 90 epochs using the SGD optimizer with momentum. BERT-Large (c) and OPT-125M (d) are fine-tuned from pre-trained models, using the Adam optimizer with a linear learning rate scheduler for 2 and 3 epochs, respectively. All inference and training experiments use FP32 for all non-GEMM operations. Figure 4: **RNS-based analog GEMM dataflow.** The operation is shown for a moduli set \(\mathcal{M}=\{m_{1},\dots,m_{n}\}\). The \(n\) \(h\times h\) analog MVM units are represented as generic blocks. The dataflow is agnostic to the underlying technology. ### Redundant RNS for Fault Tolerance Analog compute cores are sensitive to noise. In the case of RNS, even small errors in the residues can result in a large error in the corresponding integer they represent. The Redundant Residue Number System (RRNS) [25; 26; 27] can detect and correct errors--making the RNS-based analog core fault tolerant. RRNS uses a total of \(n+k\) moduli: \(n\) non-redundant and \(k\) redundant. An RRNS(\(n+k,n\)) code can detect up to \(k\) errors and can correct up to \(\lfloor\frac{k}{2}\rfloor\) errors. In particular, the error in the codeword (i.e., the \(n+k\) residues representing an integer in the RRNS space) can be one of the following cases: * **Case 1:** At most \(\lfloor\frac{k}{2}\rfloor\) residues have errors--thereby they are correctable, * **Case 2:** Between \(\lfloor\frac{k}{2}\rfloor\) and \(k\) residues have errors or the codeword with more than \(k\) errors does not overlap with another codeword in the RRNS space--thereby the error is detectable, * **Case 3:** More than \(k\) residues have errors and the erroneous codeword overlaps with another codeword in the RRNS space--thereby the error goes undetected. Errors are detected by using majority logic decoding, wherein we divide the total \(n+k\) output residues into \(\binom{n+k}{n}\) groups with \(n\) residues per group. One simple way of majority logic decoding in this context is to convert the residues in each group back to the standard representation via the CRT to generate an output value for each group and compare the results of the \(\binom{n+k}{n}\) groups. If more than 50% of the groups have the same result in the standard representation, then the generated codeword is correct. This corresponds to **Case 1**. In contrast, not having a majority indicates that the generated codeword is erroneous and cannot be corrected. This corresponds to **Case 2**. In this case, the detected errors can be eliminated by repeating the entire calculation. In **Case 3**, the erroneous codeword generated by the majority of the groups overlaps with another codeword. As a result, more than 50% of the groups have the same incorrect result and the error goes undetected.
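The Python sketch below illustrates this grouping-and-vote procedure for a toy RRNS(5, 2) code. The moduli set (with the redundant moduli chosen larger than the non-redundant ones so that every size-\(n\) group covers the legitimate range) and all names are our own assumptions, not the paper's implementation.

```python
from itertools import combinations
from math import prod
from collections import Counter

MODULI = [59, 61, 62, 63, 65]   # pairwise co-prime: n = 2 non-redundant, k = 3 redundant
N = 2
M = prod(MODULI[:N])            # legitimate range [0, M)

def crt(residues, moduli):
    m_prod = prod(moduli)
    x = 0
    for a, m in zip(residues, moduli):
        mi = m_prod // m
        x += a * mi * pow(mi, -1, m)
    return x % m_prod

def majority_decode(codeword):
    """CRT-decode every size-N group and accept a >50% majority (Case 1).
    Returns None when no majority exists (Case 2: detected, not correctable)."""
    groups = list(combinations(range(len(MODULI)), N))
    counts = Counter(
        crt([codeword[i] for i in g], [MODULI[i] for i in g]) for g in groups
    )
    value, votes = counts.most_common(1)[0]
    return value if votes > len(groups) / 2 else None

x = 1234                                # must lie in [0, M)
cw = [x % m for m in MODULI]            # RRNS(5, 2) codeword
cw[2] = (cw[2] + 5) % MODULI[2]         # inject a single residue error
assert majority_decode(cw) == x         # floor(k/2) = 1 error is corrected
```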
To optimize the hardware performance of this process, more efficient base-extension-based algorithms [30] can be used instead of the CRT for error detection. The final error probability in an RRNS code depends on the percentage of the _non-correctable_ errors observed in the residues. This probability is influenced by the chosen moduli set and the number of error correction iterations (see Methods). Let \(p_{c}\), \(p_{d}\), and \(p_{u}\) be the probabilities that Cases 1, 2, and 3 occur, respectively, when computing a single output. Overall, \(p_{c}+p_{d}+p_{u}=1\). For a single attempt (i.e., \(R=1\)), the probability of producing an incorrect output integer is \(p_{\text{err}}(R=1)=1-p_{c}=p_{u}+p_{d}\). Generally, it is possible to repeat the calculations \(R\) times until no detectable error is found, at the expense of increased compute latency. In this case, the probability of having an incorrect output after \(R\) attempts of error correction is \[p_{\text{err}}(R)=1-p_{c}\sum_{r=0}^{R-1}(p_{d})^{r}. \tag{8}\] As the number of attempts increases, the output error probability decreases and converges to \(\lim_{R\rightarrow\infty}p_{\text{err}}(R)=p_{u}/(p_{u}+p_{c})\). Fig. 5 shows \(p_{\text{err}}\) for different numbers of redundant moduli (\(k\)), attempts (\(R\)), and moduli sets with different bit-widths. Broadly, as the probability of a single residue error \(p\) increases, the output error probability tends to 1. For a given number of attempts, increasing the bit precision and the number of redundant moduli decreases \(p_{\text{err}}\). For a fixed number of redundant moduli and a fixed number of bits per modulus, \(p_{\text{err}}\) decreases as the number of attempts increases. Fig. 6 investigates the impact of noise on the accuracy of two large and important MLPerf benchmarks--ResNet50 and BERT-Large--when using RRNS. The two networks show similar behavior: adding extra moduli and increasing the number of attempts decrease \(p_{\text{err}}\) at the same value of \(p\). ResNet50 requires \(\sim\)3.9 GigaMAC operations (GOp) for one inference on a single input image. For a \(128\times 128\) MVM unit, inferring an ImageNet image through the entire network involves computing \(\sim\)29.4M partial output elements. Therefore, we expect the transition point from an accurate network to an inaccurate network to occur at \(p_{\text{err}}\leq 1/29.4\text{M}=3.4\times 10^{-8}\). This \(p_{\text{err}}\) transition point is \(\leq 1/358.6\text{M}=2.8\times 10^{-9}\) for BERT-Large. Fig. 6, however, shows that the evaluated DNNs are more resilient to noise than expected: they are able to tolerate higher \(p_{\text{err}}\) while maintaining good accuracy. The accuracy of ResNet50 only starts degrading (below 99% of FP32) when \(p_{\text{err}}\approx 4.5\times 10^{-5}\) (1000\(\times\) higher than the estimated value) on average amongst the experiments shown in Figure 6. This transition probability is \(p_{\text{err}}\approx 4\times 10^{-4}\) for BERT-Large (on average \(100,000\times\) higher than the estimated value). ## Discussion The RNS (and the fault-tolerant RRNS) framework is agnostic to the analog technology employed. Generally, the RNS GEMM operations can be performed as a regular GEMM operation followed by a modulo operation in the analog domain. Analog GEMM is well-explored in the literature. Previous works leveraged photonics [1; 2; 3; 4; 5; 6; 7], crossbar arrays consisting of resistive RAM [8; 9; 10; 11; 12], switched capacitors [13; 14], PCM cells [15], STT-RAM [16; 17], etc.
The analog modulo operation can be performed electrically or optically. In the electronic domain, one can use ring oscillators, i.e., circuits that generate a continuous waveform by cycling through a series of inverters [28], to perform modulo operations. By carefully designing the parameters of the ring oscillator, it is possible to create an output frequency that corresponds to the desired modulus value. Alternatively, the phase of a signal can be used for performing modulo due to the periodicity of phases in optical systems. Optical phase is inherently modular with respect to \(2\pi\). By modulating the phase of an optical signal, one can achieve modulo operations in the analog domain. Using RNS requires forward and reverse conversion circuits to switch between the RNS and the standard number system. The forward conversion is a modulo operation, while the reverse conversion can be done using the CRT, mixed-radix conversion, or look-up tables. The (digital) hardware costs of these circuits can be reduced by choosing special moduli sets [31; 32]. \begin{table} \begin{tabular}{l c c c} \hline \hline **Precision** & **ResNet50** & **BERT-Large** & **OPT-125M** \\ & **Acc. (\%)** & **F1 Score (\%)** & **Acc. (\%)/PPL** \\ \hline FP32 & 75.80 & 91.03 & 43.95/19.72 \\ 8-bit & 75.77 & 90.98 & 43.86/20.00 \\ 7-bit & 75.68 & 90.97 & 43.59/20.71 \\ 6-bit & 75.13 & 90.85 & 42.79/22.62 \\ 5-bit & 59.72 & 90.81 & 41.45/26.17 \\ 4-bit & 42.15 & 89.66 & 38.64/35.65 \\ \hline \hline \end{tabular} \end{table} Table 3: Validation accuracy results after training/fine-tuning. The RNS framework can be extended with the PNS to work with arbitrary precision, despite having DACs and ADCs with limited precision. For applications requiring higher-precision arithmetic than the example cases in this study (e.g., some high-performance computing applications, homomorphic encryption, etc.), a higher \(M\) value and therefore moduli with higher bit-widths might be necessary, which would be bound by the same limitations discussed in this paper. Instead, one can represent an integer value as \(D\) separate digits, where each digit is represented as a set of residues in the RNS domain and has an RNS range of \(M\). This hybrid scheme can achieve \(D\log_{2}M\) bits of precision, where \(D\) can be increased freely without increasing the bit precision of the data converters. Different from the RNS-only scheme, the hybrid scheme requires overflow detection and carry propagation from lower digits to higher digits. The overflow detection can be achieved using two sets of residues: primary and secondary. While the operations are performed with both sets of residues, base extension between the two sets helps detect any overflow and propagate the carry to the higher digits if required (see Methods). In conclusion, our work provides a methodology for precise, energy-efficient, and fault-tolerant analog DNN acceleration. Overall, we believe that RNS is a crucial numeral system for the development of next-generation analog hardware capable of both inference and training state-of-the-art neural networks for advanced applications, such as generative artificial intelligence. ## Methods ### Handling Negative Numbers with RNS An RNS with dynamic range \(M\) allows representing values within the range \([0,M)\). This range can be shifted to \([-\psi,\psi]\), where \(\psi=\lfloor(M-1)/2\rfloor\), to represent negative values.
This is achieved by reassigning the values in \((0,\psi]\) to be positive, \(0\) to be zero, and the numbers in \((\psi,2\psi]\) to be negative (i.e., mapped to \([-\psi,-1]\)). Then, the values can be recovered uniquely by using the CRT with a slight modification: \[A=\begin{cases}\Big{|}\sum\limits_{i=1}^{n}a_{i}M_{i}T_{i}\Big{|}_{M},&\text{if }\Big{|}\sum\limits_{i=1}^{n}a_{i}M_{i}T_{i}\Big{|}_{M}\leq\psi\\ \Big{|}\sum\limits_{i=1}^{n}a_{i}M_{i}T_{i}\Big{|}_{M}-M,&\text{otherwise}.\end{cases} \tag{9}\] Figure 5: **Calculated output error probability (\(p_{\text{err}}\)) versus single residue error probability (\(p\)).** **a-c** \(p_{\text{err}}\) for one (a), two (b), and infinite (c) error correction attempts and a varying number of redundant moduli (\(k\)). Figure 6: **Inference accuracy versus single residue error probability (\(p\)).** **a-f** The plots show ResNet-50 (a-c) and BERT-Large (d-f) inference accuracy results under varying \(p\) for one (a and d), two (b and e), and infinite (c and f) error correction attempts and a varying number of redundant moduli (\(k\)). ### Data Converter Energy Estimation The DAC and ADC energy numbers in Fig. 2b are estimated using the equations formulated by Murmann [19; 33]. The energy consumption of a DAC per conversion is \[E_{\text{DAC}}=\text{ENOB}^{2}C_{u}V_{\text{DD}}^{2}, \tag{10}\] where \(C_{u}=0.5\) fF is a typical unit capacitance and \(V_{\text{DD}}=1\) V is the supply voltage [19]. The energy consumption of an ADC per conversion is estimated as \[E_{\text{ADC}}=k_{1}\text{ENOB}+k_{2}4^{\text{ENOB}}, \tag{11}\] where \(k_{1}\approx 100\) fJ and \(k_{2}\approx 1\) aJ. \(E_{\text{ADC}}\) is dominated by the exponential term (i.e., \(k_{2}4^{\text{ENOB}}\)) at large ENOB (\(\geq\) 10 bits). ### Accuracy Modeling Both RNS-based and regular fixed-point analog cores are modeled using PyTorch for estimating inference and training accuracy. Convolution, linear, and batched matrix multiplication (BMM) layers are performed as tiled-GEMM operations, which are computed tile by tile as a set of tiled-MVM operations. Each input, weight, and output of the tiled MVM is quantized with the desired bit precision. Before quantization, the input vectors and weight tiles are first dynamically scaled, i.e., scaled at runtime, to mitigate the quantization effects, as follows. For an \(h\times h\) weight tile \(\mathcal{W}_{t}\), we denote each row vector as \(\mathcal{W}_{rt}\), where the subscript \(r\) stands for the row and \(t\) for the tile. Similarly, an input vector of length \(h\) is denoted as \(\mathcal{X}_{t}\), where \(t\) indicates the tile. Each weight row \(\mathcal{W}_{rt}\) shares a single FP32 scale \(s^{w}_{rt}=\max(|\mathcal{W}_{rt}|)\) and each input vector \(\mathcal{X}_{t}\) shares a single FP32 scale \(s^{x}_{t}=\max(|\mathcal{X}_{t}|)\). \(h\) scales per \(h\times h\) weight tile and 1 scale per input vector, in total \(h+1\) scales, are stored for each tiled-MVM operation. The tiled MVM is performed between the scaled weight and input vectors, \(\widetilde{\mathcal{W}}_{rt}=\mathcal{W}_{rt}/s^{w}_{rt}\) and \(\widetilde{\mathcal{X}}_{t}=\mathcal{X}_{t}/s^{x}_{t}\), to produce \(\widetilde{\mathcal{Y}}_{t}=\widetilde{\mathcal{W}}_{t}\widetilde{\mathcal{X}}_{t}\). The output \(\widetilde{\mathcal{Y}}_{rt}\) is then quantized (if required) to resemble the output ADCs and multiplied back with the appropriate scales so that the actual output elements \(Y_{rt}=\widetilde{\mathcal{Y}}_{rt}\cdot s^{w}_{rt}\,s^{x}_{t}\) are obtained.
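A minimal PyTorch sketch of this per-row/per-vector dynamic scaling is given below. The bit-widths, helper names, and the folding of the ADC quantization into a single rescale are our own simplifying assumptions, not the paper's exact model.

```python
import torch

def quantize(x, bits):
    """Symmetric uniform quantization of values assumed to lie in [-1, 1]."""
    q = 2 ** (bits - 1) - 1
    return torch.round(x * q) / q

def tiled_mvm(W_t, X_t, b_in=6, b_w=6, b_out=None):
    """One h x h tiled MVM with dynamic scales, mimicking DAC/ADC quantization."""
    s_w = W_t.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)  # per-row scales
    s_x = X_t.abs().amax().clamp(min=1e-12)                     # per-vector scale
    W_q = quantize(W_t / s_w, b_w)          # resembles the weight DACs
    X_q = quantize(X_t / s_x, b_in)         # resembles the input DACs
    Y = W_q @ X_q                           # the analog dot products
    if b_out is not None:                   # optionally resemble the output ADCs
        y_max = Y.abs().amax().clamp(min=1e-12)
        Y = quantize(Y / y_max, b_out) * y_max
    return Y * s_w.squeeze(1) * s_x         # undo the dynamic scales

h = 128
W_t, X_t = torch.randn(h, h), torch.randn(h)
err = (tiled_mvm(W_t, X_t) - W_t @ X_t).abs().max()   # small quantization error
```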
Here, the methodology is the same for the RNS-based and regular fixed-point cores. For the RNS-based case, in addition to the description above, the quantized input and weight integers are converted into the RNS space before the tiled-MVM operations. MVMs are performed separately for each set of residues and are followed by a modulo operation before the quantization step. The output residues for each tiled MVM are converted back to the standard representation using the CRT. The GEMM operations (i.e., convolution, linear, and BMM layers) are sandwiched between an input operation \(O_{\text{in}}\) and an output operation \(O_{\text{out}}\). This makes the operation order \(O_{\text{in}}\)-GEMM-\(O_{\text{out}}\) during the forward pass, and \(O_{\text{out}}\)-GEMM-\(O_{\text{in}}\) in the backward pass. \(O_{\text{in}}\) quantizes the input and weight tensors in the forward pass and is a null operation in the backward pass. In contrast, \(O_{\text{out}}\) is a null operation in the forward pass and quantizes the activation gradients in the backward pass. In this way, the quantization is always performed before the GEMM operation. The optimizer (i.e., SGD or Adam) is modified to keep a copy of the FP32 weights to use during the weight updates. Before each forward pass, the FP32 weights are copied and stored. After the forward pass, the quantized model weights are replaced by the previously stored FP32 weights before the step function so that the weight updates are performed in FP32. After the weight update, the model parameters are quantized again for the next forward pass. This high-precision weight update step is crucial for achieving high accuracy in training. In Fig. 3b-d, all the convolution, linear, and BMM layers in the models were replaced by their quantized versions. We trained ResNet50 from scratch using the SGD optimizer for 90 epochs with a momentum of 0.9 and a learning rate starting from 0.1. The learning rate was scaled down by 10 at epochs 30, 60, and 80. We fine-tuned BERT-Large and OPT-125M from the implementations available in the Hugging Face transformers repository [34]. We used the Adam optimizer for both models with the default settings. The script uses a linear learning rate scheduler. The learning rate starts at 3e-05 and 5e-05, and the models are trained for 2 and 3 epochs, respectively, for BERT-Large and OPT-125M. ### Error distribution in the RRNS code space For an \(\text{RRNS}(n+k,n)\) with \(n\) non-redundant moduli \((m_{1},m_{2},...,m_{n})\) and \(k\) redundant moduli \((m_{n+1},m_{n+2},...,m_{n+k})\), the probability distributions (\(p_{c}\), \(p_{d}\), and \(p_{u}\)) of the different types of errors (Case 1, Case 2, and Case 3 mentioned in Redundant RNS for Fault Tolerance) are related to the Hamming distance distribution of the RRNS code space. In an \(\text{RRNS}(n+k,n)\), every integer is represented as \(n+k\) residues (\(r_{i}\) where \(i\in\{1,...,n+k\}\)) and this vector of \(n+k\) residues is considered an RRNS codeword. A Hamming distance of \(\eta\in\{0,1,...,n+k\}\) between the original codeword and the erroneous codeword indicates that \(\eta\) out of \(n+k\) residues are erroneous. The erroneous codewords create a new vector space of \(n+k\)-long vectors where at least one \(r_{i}\) is replaced with \(r^{\prime}_{i}\neq r_{i}\), with \(i\in\{1,...,n+k\}\) and \(r^{\prime}_{i}<m_{i}\). This vector space includes all the \(\text{RRNS}(n+k,n)\) codewords as well as other possible \(n+k\)-long vectors that do not overlap with any codeword in the RRNS code space.
A vector represents a codeword and is in the RRNS code space if and only if it can be converted into a value within the legitimate range \([0,M)\) of the \(\text{RRNS}(n+k,n)\) by using the CRT. The number of all vectors that have a Hamming distance \(\eta\) from a codeword in \(\text{RRNS}(n+k,n)\) can be expressed as \[V_{\eta}=\sum\limits_{Q\binom{n+k}{\eta}}\prod\limits_{i=1}^{\eta}(m_{i}-1), \tag{12}\] where \(Q\binom{n+k}{\eta}\) represents one selection of \(\eta\) moduli from the \(n+k\) moduli while \(\sum_{Q\binom{n+k}{\eta}}\) represents the summation over all distinct \(\binom{n+k}{\eta}\) selections. The number of codewords that are in the RRNS code space with a Hamming distance of \(\eta\in\{0,1,...,n+k\}\) can be expressed as \[\mathcal{D}_{\eta}=\sum\limits_{h=0}^{\eta-1-k}(-1)^{h}\binom{n+k-\eta+h}{n+k-\eta}\zeta(n+k,\eta-h), \tag{13}\] for \(k+1\leq\eta\leq n+k\). For \(1\leq\eta\leq k\), \(D_{\eta}=0\), and \(D_{0}=1\). \(\zeta(n+k,\eta)\) represents the total number of non-zero common multiples in the legitimate range \([0,M)\) for any \(n+k-\eta\) moduli out of the \(n+k\) moduli of the \(\text{RRNS}(n+k,n)\) code and can be written as \[\zeta(n+k,\eta)=\sum\limits_{Q\binom{n+k}{n+k-\eta}}\left\lfloor\frac{M-1}{m_{i_{1}}m_{i_{2}}...m_{i_{n+k-\eta}}}\right\rfloor, \tag{14}\] where \((m_{i_{1}},m_{i_{2}},...,m_{i_{\lambda}})\) with \(1\leq\lambda\leq n+k\) is a subset of the \(\text{RRNS}(n+k,n)\) moduli set. An undetectable error occurs only if a codeword with errors overlaps with another codeword in the same RRNS space. Given the distance distributions for the vector space \(V\) and the code space \(D\) (Eqs. (12) and (13), respectively), the probability of observing an undetectable error (\(p_{u}\)) for \(\text{RRNS}(n+k,n)\) can be computed as \[p_{u}=\sum\limits_{\eta=k+1}^{n+k}\frac{D_{\eta}}{V_{\eta}}p_{E}(\eta), \tag{15}\] where \(p_{E}(\eta)\) is the probability of having \(\eta\) erroneous residues in a codeword, which can be calculated as \[p_{E}(\eta)=\sum\limits_{Q\binom{n+k}{\eta}}p^{\eta}(1-p)^{(n+k-\eta)}, \tag{16}\] for an error probability of \(p\) in a single residue. Eq. (13) indicates that for up to \(\eta=k\) erroneous residues \(D_{\eta}=0\), so it is not possible for an erroneous codeword to overlap with another codeword in the RRNS code space. This guarantees the successful detection of the observed error. If the Hamming distance of the erroneous codeword is \(\eta\leq\lfloor\frac{k}{2}\rfloor\), the error can be corrected by the majority logic decoding mechanism. In other words, the probability of observing a correctable error is equal to the probability of observing at most \(\lfloor\frac{k}{2}\rfloor\) errors in the residues and can be calculated as \[p_{c}=\sum_{\eta=0}^{\lfloor\frac{k}{2}\rfloor}p_{E}(\eta)=\sum_{\eta=0}^{\lfloor\frac{k}{2}\rfloor}\Big{(}\sum_{Q\binom{n+k}{\eta}}p^{\eta}(1-p)^{(n+k-\eta)}\Big{)}. \tag{17}\] All the errors that do not fall under the undetectable or correctable categories are referred to as detectable but not correctable errors, with probability \(p_{d}=1-(p_{c}+p_{u})\). The equations in this section were obtained from the work conducted by Yang [27]. To model the error in the RNS core for the analysis shown in Fig. 6, \(p_{c}\), \(p_{d}\), and \(p_{u}\) are computed for a given \(\text{RRNS}(n+k,n)\) using Eqs. (15) and (17).
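The short Python sketch below evaluates these expressions numerically (Eqs. (12)-(17)) together with the repeated-correction probability of Eq. (8). The moduli set, \(n\), and the single-residue error probability \(p\) are illustrative assumptions only.

```python
from itertools import combinations
from math import comb, floor, prod

def p_E(eta, p, nk):
    """Probability of exactly eta erroneous residues out of n+k (Eq. 16)."""
    return comb(nk, eta) * p**eta * (1 - p) ** (nk - eta)

def error_probabilities(moduli, n, p):
    k, nk = len(moduli) - n, len(moduli)
    M = prod(moduli[:n])                       # legitimate range
    def V(eta):                                # Eq. 12
        return sum(prod(moduli[i] - 1 for i in sel)
                   for sel in combinations(range(nk), eta))
    def zeta(eta):                             # Eq. 14
        return sum((M - 1) // prod(moduli[i] for i in sel)
                   for sel in combinations(range(nk), nk - eta))
    def D(eta):                                # Eq. 13
        if eta <= k:
            return 1 if eta == 0 else 0
        return sum((-1) ** h * comb(nk - eta + h, nk - eta) * zeta(eta - h)
                   for h in range(0, eta - k))
    p_u = sum(D(eta) / V(eta) * p_E(eta, p, nk) for eta in range(k + 1, nk + 1))
    p_c = sum(p_E(eta, p, nk) for eta in range(0, floor(k / 2) + 1))   # Eq. 17
    return p_c, 1 - p_c - p_u, p_u             # (p_c, p_d, p_u)

def p_err(p_c, p_d, R):                        # Eq. 8
    return 1 - p_c * sum(p_d**r for r in range(R))

p_c, p_d, p_u = error_probabilities([59, 61, 62, 63, 65], n=2, p=1e-3)
print(p_err(p_c, p_d, R=2))                    # output error after two attempts
```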
Given the number of error correction attempts, the output error probability (\(p_{\text{err}}\)) is calculated according to Eq. (8). Random noise is injected at the output of every tiled-MVM operation using a Bernoulli distribution with the probability \(p_{\text{err}}\). ## RNS Operations The proposed analog RNS-based approach requires modular arithmetic. In this section, we discuss two ways of performing modular arithmetic in the analog domain, one electrical and one optical. ### Modular Arithmetic with Ring Oscillators In a ring oscillator, where each inverter has a propagation delay \(t_{\rm prop}>0\), there is always only one inverter that has the same input and output--either 1-1 or 0-0--at any given time when the ring oscillator is on. The location of this inverter with the same input and output propagates along with the signal every \(t_{\rm prop}\) time and rotates due to the ring structure. This rotation forms a modulo-counter behavior in the ring when the location of this inverter is tracked. Let \(S_{\rm RO}(t)\) be the state of a ring oscillator with \(N\) inverters. Here, \(S_{\rm RO}(t)\in\{0,...,N-1\}\) and \(S_{\rm RO}(t)=k\) means that the \((k+1)\)-th inverter's input and output have the same value at time \(t\). \(S_{\rm RO}(t)\) keeps rotating between 0 and \(N-1\) as long as the oscillator is on. Figure 7a shows a simple example where \(N=3\). In the first \(t_{\rm prop}\) time interval, the input and output of the first inverter are both 0; therefore, the state \(S_{\rm RO}(t<t_{\rm prop})=0\). Similarly, when \(t_{\rm prop}<t<2t_{\rm prop}\), the input and output of the second inverter are 1, so \(S_{\rm RO}(t_{\rm prop}<t<2t_{\rm prop})=1\). Here, the time between two states following one another (i.e., \(t_{\rm prop}\)) is fixed as \(S_{\rm RO}(t)\) rotates, \((0,1,2,0,1,...)\). Assume the state of the ring oscillator is sampled periodically with a sampling period of \(T_{a}=A\,t_{\rm prop}\). Then, the observed change in the state of the ring oscillator between two samples (\(S_{\rm RO}(t=T_{a})-S_{\rm RO}(t=0)\)) is equivalent to \(|A|_{N}\), i.e., \(A\) modulo \(N\), where \(A\) is a positive integer value. Therefore, to perform a modulo operation against a modulus value \(m\), the number of inverters \(N\) should be equal to \(m\). The dividend number \(A\) and the sampling period can be adjusted by changing the analog input voltage to a voltage-to-time converter (VTC). In a modular dot product or an MVM operation, the dividend \(A\) is replaced by the output of the dot product. Analog dot products can be performed unchanged using traditional methods in any desired analog technology where the output can be represented as an analog electrical signal (e.g., current or voltage) before the analog modulo. ### Modular Arithmetic with Phase Shifters The amount of phase shift introduced by a single dual-rail phase shifter when \(v\) and \(-v\) voltages are applied on the upper and the bottom arms, respectively, is \[\Delta\Phi=\frac{\pi vL}{V_{\pi\text{-cm}}}, \tag{18}\] where \(V_{\pi\text{-cm}}\) is the modulation efficiency of the phase shifter and is a constant value. \(\Delta\Phi\) is proportional to both the length of the shifter \(L\) and the amount of applied voltage \(v\). Figure 7b shows an example modular dot product operation between two vectors, \(x\) and \(w\), using cascaded dual-rail phase shifters. \(w\) is encoded digit-by-digit using phase shifters with lengths proportional to \(2^{j}\), where \(j\) represents the digit number.
In the example, each element (i.e., \(w_{0}\) and \(w_{1}\)) of the 2-element vector \(w\) consists of 3 digits and uses 3 phase shifters, each with lengths \(L\), \(2L\), and \(4L\). If the \(j\)-th digit of the \(i\)-th element of \(w\), \(w_{i}^{j}=1\), a voltage \(v_{i}\) is applied to the phase shifter pair (top and bottom) with the length \(2^{j}L\). If the digit \(w_{i}^{j}=0\), then no voltage is applied and therefore no phase shift is introduced to the input signal. To encode the second operand \(x\), a voltage \(v_{i}\) that is proportional to \(x_{i}\) is applied to all non-zero digits of \(w_{i}\). To take the modulo with a modulus \(m\) instead of \(2\pi\), the input \(x\) and therefore the applied voltage \(v\) should be multiplied by the constant \(2\pi/m\). For encoding \(x_{i}\), \[v_{i}=x_{i}\cdot\frac{V_{\pi\text{-cm}}}{\pi L}\cdot\frac{2\pi}{m}, \tag{19}\] should be applied so that the total phase shift at the end of the optical path is \[\Delta\Phi_{\text{total}}=\Big|\frac{2\pi}{m}\sum_{i}\big(\sum_{j}(2^{j}w_{i}^{j})x_{i}\big)\Big|_{2\pi}=\frac{2\pi}{m}\Big|\sum_{i}(w_{i}x_{i})\Big|_{m}. \tag{20}\] The resulting output values are collected at the end of the optical path and are in the form of the phase difference between input and output. These outputs are then re-multiplied by \(m/2\pi\) to obtain the outputs of the modular dot products for each residue. ## Extended RNS By combining RNS and the positional number system (PNS), an integer value \(Z\) can be represented as \(D\) separate digits \(z_{d}\), where \(d\in\{0,1,...,D-1\}\) and \(0\leq z_{d}<M\), with \(M\) being the RNS range: \[Z=\sum_{d=0}^{D-1}z_{d}M^{d}, \tag{21}\] which can provide up to \(D\log_{2}M\) bits of precision. This hybrid scheme requires carry propagation from lower digits to higher digits, unlike the RNS-only scheme. For this purpose, one can use two sets of moduli, primary and secondary, where every operation is performed for both sets of residues. After every operation, overflow is detected for each digit and carried over to the higher-order digits. Let us pick \(n_{p}\) primary moduli \(m_{i}\), where \(i\in\{1,...,n_{p}\}\), and \(n_{s}\) secondary moduli \(m_{j}\), where \(j\in\{1,...,n_{s}\}\) and \(m_{i}\neq m_{j}\ \forall\,\{i,j\}\). Here \(M=M_{p}\cdot M_{s}=\prod_{i=1}^{n_{p}}m_{i}\cdot\prod_{j=1}^{n_{s}}m_{j}\) is large enough to represent the largest possible output of the operations performed in this numerical representation, and \(M_{p}\) and \(M_{s}\) are co-prime. To execute an operation in this hybrid number system, the operation is performed separately for each digit of the output. These operations for each digit are independent of one another and can be parallelized, except for the overflow detection and carry propagation. Assume \(z_{d}\), represented by its primary and secondary residues \(z_{d}|_{ps}\), is a calculated output digit of an operation before overflow detection. \(z_{d}\) can be decomposed as \(z_{d}=Q_{d}M_{p}+R_{d}\), where \(Q_{d}\) and \(R_{d}\) are the quotient and the remainder of the digit with respect to the primary RNS. To detect a potential overflow in the digit \(z_{d}\), a base extension from the primary to the secondary RNS is performed on \(z_{d}|_{p}\), and the base-extended residues are compared with the original secondary residues of the digit, \(z_{d}|_{s}\).
If the residues are the same, this indicates that there is no overflow, i.e., \(Q_{d}|_{ps}=0\), and both primary and secondary residues are kept without any carry propagated to the next higher digit. In contrast, if the base-extended secondary residues and the original secondary residues are not the same, it means that there exists an overflow (i.e., \(Q_{d}|_{ps}\neq 0\)). In the case of overflow, the remainder in the secondary RNS, \(R_{d}|_{s}\), is calculated through a base extension from the primary to the secondary RNS on \(R_{d}|_{p}\), where \(R_{d}|_{p}=z_{d}|_{p}\). \(Q_{d}|_{s}\) can then be computed as \(Q_{d}|_{s}=(z_{d}|_{s}-R_{d}|_{s})M_{p}^{-1}\), where \(|M_{p}\cdot M_{p}^{-1}|_{M_{s}}=1\). \(Q_{d}|_{p}\) is calculated through a base extension from the secondary to the primary RNS on the computed \(Q_{d}|_{s}\). The full quotient \(Q_{d}|_{ps}\) is then propagated to the higher-order digit. Algorithm 1 shows the pseudo-code for handling an operation \(\square\) using the extended RNS representation. The operation can be replaced by any operation that is closed under RNS. It should be noted that \(z_{d}|_{ps}\) is not always computed as \(x_{d}|_{ps}\,\square\,y_{d}|_{ps}\). For operations such as addition, each digit before carry propagation is computed by simply adding the same digits of the operands, i.e., \(z_{d}|_{ps}=x_{d}|_{ps}+y_{d}|_{ps}\). However, for multiplication, each digit of \(z_{d}|_{ps}\) should be constructed as in long multiplication. The multiplication of two numbers in the hybrid number system with \(D_{x}\) and \(D_{y}\) digits requires \(D_{x}D_{y}\) digit-wise multiplications, and the output will have \(D_{z}=D_{x}+D_{y}\) digits in total. Similarly, a dot product is a combination of multiply and add operations. For a dot product of two vectors with \(h\) elements, whose elements have \(D_{x}\) and \(D_{y}\) digits, the output will require \(D_{z}=D_{x}+D_{y}+\log_{2}h\) digits. **Algorithm 1** **Pseudocode for performing the operation \(\square\) using the hybrid number system.** Here, \(x\) and \(y\) are the inputs for operation \(\square\) and \(z\) is the output with \(D_{z}\) digits. \(z_{d}\) represents the digits of the output, where \(z_{d}|_{p}\) are the primary residues and \(z_{d}|_{s}\) are the secondary residues. Primary and secondary residues together are referred to as \(z_{d}|_{ps}\). \(Q\) is the quotient and \(R\) is the remainder, where \(z_{d}=Q_{d}M_{p}+R_{d}\). \(\mathrm{p2s}()\) and \(\mathrm{s2p}()\) refer to base extension algorithms from primary to secondary residues and from secondary to primary residues, respectively. ```
1: \(Q_{-1}|_{ps}=0\)
2: for \(d\gets 0\) to \(D_{z}\) do
3:   \(z^{\prime}_{d}|_{ps}=(x|_{ps}\,\square\,y|_{ps})_{d}\)
4: end for
5: for \(d\gets 0\) to \(D_{z}\) do
6:   \(z_{d}|_{ps}=z^{\prime}_{d}|_{ps}+Q_{d-1}|_{ps}\)
7:   \(R_{d}|_{p}=z_{d}|_{p}\)
8:   \(R_{d}|_{s}=\mathrm{p2s}(R_{d}|_{p})\)
9:   if \(R_{d}|_{s}=z_{d}|_{s}\) then
10:    \(Q_{d}|_{ps}=0\)
11:  else
12:    \(Q_{d}|_{s}=(z_{d}|_{s}-R_{d}|_{s})M_{p}^{-1}\)
13:    \(Q_{d}|_{p}=\mathrm{s2p}(Q_{d}|_{s})\)
14:  end if
15: end for
``` ## Acknowledgements We thank Dr. Rashmi Agrawal and Prof. Vijay Janapa Reddi for their insightful discussions. ## Author contributions D.B. conceived the project idea. C.D. and D.B. developed the theory. C.D. implemented the accuracy modeling and the analytical error models with feedback from D.B. and A.J. C.D. and L.N. conducted the experiments. D.B. and A.J. supervised the project. C.D.
wrote the manuscript with input from all authors. ## Competing interests The authors declare the following patent application: U.S. Patent Application No. 17/543,676. L.N. and D.B. declare individual ownership of shares in Lightmatter, a startup company developing photonic hardware for AI. Figure 7: **Analog modulo implementations.** **a** Modulo operation performed using a ring oscillator. A ring oscillator with \(N=3\) inverters is shown to perform modulo against a modulus \(m=3\). This operation is performed after every analog dot product to perform a modular dot product. **b** Modular dot product performed using phase shifters. A modular dot product operation between two \(2\)-element vectors \(x\) and \(w\), each with \(3\) digits, is shown using a dual-rail set of cascaded phase shifters. The transistor switch turns on and supplies voltage to the phase shifter when the corresponding digit of \(w\) is \(1\).
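As an illustration of the overflow-detection step of Algorithm 1, the following Python sketch works through one digit of an addition in the hybrid number system. The moduli are arbitrary example values, and the base extensions \(\mathrm{p2s}()\)/\(\mathrm{s2p}()\) are implemented naively through a full CRT reconstruction, whereas a hardware implementation would use a cheaper base extension algorithm.

```python
from math import prod

primary   = [7, 11, 13]  # example primary moduli (pairwise co-prime), Mp = 1001
secondary = [17, 19]     # example secondary moduli, co-prime with the primary set
Mp, Ms = prod(primary), prod(secondary)

def to_res(x, moduli):
    return [x % m for m in moduli]

def crt(res, moduli):
    # Reconstruct the unique integer in [0, prod(moduli)) from its residues.
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(res, moduli)) % M

def base_extend(res, src, dst):
    # Naive p2s / s2p: residues w.r.t. `src` moduli -> residues w.r.t. `dst`.
    return to_res(crt(res, src), dst)

def detect_overflow(zp, zs):
    # One digit of Algorithm 1: split z = Q * Mp + R given its primary
    # residues zp and secondary residues zs.
    Rp = zp                                  # R < Mp shares z's primary residues
    Rs = base_extend(Rp, primary, secondary)
    if Rs == zs:                             # no overflow: quotient is zero
        return (Rp, Rs), [0] * len(secondary)
    Qs = [(z - r) * pow(Mp % m, -1, m) % m   # Q|_s = (z|_s - R|_s) * Mp^{-1}
          for z, r, m in zip(zs, Rs, secondary)]
    return (Rp, Rs), Qs

# Digit-wise addition with carry: 950 + 600 = 1550 = 1 * Mp + 549.
z = 950 + 600
(Rp, Rs), Qs = detect_overflow(to_res(z, primary), to_res(z, secondary))
assert crt(Rp, primary) == 549 and crt(Qs, secondary) == 1  # carry Q upward
```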
2309.16314
A Primer on Bayesian Neural Networks: Review and Debates
Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks. To address these challenges, Bayesian neural networks (BNNs) have emerged as a compelling extension of conventional neural networks, integrating uncertainty estimation into their predictive capabilities. This comprehensive primer presents a systematic introduction to the fundamental concepts of neural networks and Bayesian inference, elucidating their synergistic integration for the development of BNNs. The target audience comprises statisticians with a potential background in Bayesian methods but lacking deep learning expertise, as well as machine learners proficient in deep neural networks but with limited exposure to Bayesian statistics. We provide an overview of commonly employed priors, examining their impact on model behavior and performance. Additionally, we delve into the practical considerations associated with training and inference in BNNs. Furthermore, we explore advanced topics within the realm of BNN research, acknowledging the existence of ongoing debates and controversies. By offering insights into cutting-edge developments, this primer not only equips researchers and practitioners with a solid foundation in BNNs, but also illuminates the potential applications of this dynamic field. As a valuable resource, it fosters an understanding of BNNs and their promising prospects, facilitating further advancements in the pursuit of knowledge and innovation.
Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, Vincent Fortuin
2023-09-28T10:09:15Z
http://arxiv.org/abs/2309.16314v1
# A Primer on Bayesian Neural Networks: Review and Debates ###### Abstract Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks. To address these challenges, Bayesian neural networks (BNNs) have emerged as a compelling extension of conventional neural networks, integrating uncertainty estimation into their predictive capabilities. This comprehensive primer presents a systematic introduction to the fundamental concepts of neural networks and Bayesian inference, elucidating their synergistic integration for the development of BNNs. The target audience comprises statisticians with a potential background in Bayesian methods but lacking deep learning expertise, as well as machine learners proficient in deep neural networks but with limited exposure to Bayesian statistics. We provide an overview of commonly employed priors, examining their impact on model behavior and performance. Additionally, we delve into the practical considerations associated with training and inference in BNNs. Furthermore, we explore advanced topics within the realm of BNN research, acknowledging the existence of ongoing debates and controversies. By offering insights into cutting-edge developments, this primer not only equips researchers and practitioners with a solid foundation in BNNs, but also illuminates the potential applications of this dynamic field. As a valuable resource, it fosters an understanding of BNNs and their promising prospects, facilitating further advancements in the pursuit of knowledge and innovation. ###### Contents * 1 Introduction * 2 Neural networks and statistical learning theory * 2.1 Choice of architecture * 2.2 Expressiveness * 2.3 Inductive bias * 2.4 Generalization and overfitting * 2.5 Limitations of the frequentist approach to deep learning * 3 Bayesian machine learning * 3.1 Bayesian paradigm * 3.2 Priors * 3.3 Computational methods * 3.3.1 Variational inference * 3.3.2 Laplace approximation * 3.3.3 Sampling methods * 3.4 Model selection * 4 What are Bayesian neural networks? * 4.1 Priors * 4.1.1 Weight priors (parameter-space) * 4.1.2 Unit priors (function-space) * 4.1.3 Regularization * 4.2 Approximate inference for Bayesian neural networks * 4.2.1 Variational inference * 4.2.2 Laplace approximation * 4.2.3 Sampling methods * 5 To be Bayesian or not to be? * 5.1 Frequentist and Bayesian connections * 5.1.1 Priors and initialization schemes * 5.1.2 Posteriors and optimization methods * 5.1.3 Cold and tempered posteriors * 5.1.4 Deep ensembles * 5.2 Performance certificates * 5.2.1 Frequentist validation of the posterior * 5.2.2 Posterior concentration and generalization to out-of-sample data * 5.2.3 Marginal likelihood and generalization * 5.3 Benchmarking * 5.3.1 Evaluation datasets * 5.3.2 Evaluation metrics-tasks * 5.3.3 Output interpretation * 6 Conclusion Introduction **Motivation.** Technological advancements have sparked an increased interest in the development of models capable of acquiring knowledge and performing tasks that resemble human abilities. These include tasks such as object recognition and scene segmentation in images, speech recognition in audio signals, and natural language understanding. They are commonly referred to as artificial intelligence (AI) tasks. AI systems possess the remarkable ability to mimic human thinking and behavior. 
Machine learning, a subset of artificial intelligence, encompasses a fundamental aspect of AI--learning the underlying relationships within data and making decisions without explicit instructions. Machine learning algorithms autonomously learn and enhance their performance by leveraging their output. These algorithms do not rely on explicit instructions to generate desired outcomes; instead, they learn by analyzing accessible datasets and comparing them with examples of the desired output. Deep learning, a specialized field within machine learning, focuses on algorithms inspired by the structure and functioning of the human brain, known as (artificial) neural networks. Deep learning concepts enable machines to acquire human-like skills. Through deep learning, computer models can be trained to perform classification tasks using inputs such as images, text, or sound. Deep learning has gained popularity due to its ability to achieve state-of-the-art performance. The training of these models involves utilizing large labeled datasets in conjunction with neural network architectures. Neural networks, or NNs, are particularly effective deep learning models that can solve a wide range of problems. They are now widely employed across various domains. For instance, they can facilitate translation between languages, guide users in banking applications, or even generate artwork in the style of famous artists based on simple photographs. However, neural networks are often regarded as black boxes due to the lack of intuitive interpretations that would allow us to trace the flow of information from input to output. In certain industries, the acceptance of AI algorithms necessitates explanations. This requirement may stem from regulations encompassed in the concept of AI safety or from human factors. In the field of medical diagnosis and treatment, decisions based on AI algorithms can have life-changing consequences. While AI algorithms excel at detecting various health conditions by identifying minute details imperceptible to the human eye, doctors may hesitate to rely on this technology if they cannot explain the rationale behind its outcomes. In the realm of finance, AI algorithms can assist in tasks such as assigning credit scores, evaluating insurance claims, and optimizing investment portfolios, among other applications. However, if these algorithms produce biased outputs, it can cause reputational damage and even legal implications. Consequently, there is a pressing need for interpretability, robustness, and uncertainty estimation in AI systems. The exceptional performance of deep learning models has fueled research efforts aimed at comprehending the mechanisms that drive their effectiveness. Nevertheless, these models remain highly opaque, as they lack the ability to provide human-understandable accounts of their reasoning processes or explanations. Understanding neural networks can significantly contribute to the development of safe and explainable AI algorithms that could be widely deployed to improve people's lives. The Bayesian perspective is often viewed as a pathway toward trustworthy AI. It employs probabilistic theory and approximation methods to express and quantify uncertainties inherent in the models. However, the practical implementation of Bayesian approaches for uncertainty quantification in deep learning models often incurs significant computational costs and necessitates the use of improved approximation techniques.
**Objectives and outline.** The recent surge of research interest in Bayesian deep learning has spawned several notable review articles that contribute valuable insights to the field. For instance, Jospin et al. (2022) present a useful contribution by offering practical implementations in Python, enhancing the accessibility of Bayesian deep learning methodologies. Another significant review by Abdar et al. (2021) provides a comprehensive assessment of uncertainty quantification techniques in deep learning, encompassing both frequentist and Bayesian approaches. This thorough examination serves as an essential resource for researchers seeking to grasp the breadth of available methods. While existing literature delves into various aspects of Bayesian neural networks, Goan and Fookes (2020) specifically focus on inference algorithms within BNNs. However, the comprehensive coverage of prior modeling, a critical component of BNNs, is not addressed in this review. Conversely, Fortuin (2022) presents a meticulous examination of priors utilized in diverse Bayesian deep learning models, encompassing BNNs, deep Gaussian processes, and variational auto-encoders (VAEs). This review offers valuable insights into the selection and impact of priors across different Bayesian modeling paradigms. In contrast to these works, our objective is to offer an accessible and comprehensive guide to Bayesian neural networks, catering to both statisticians and machine learning practitioners. The target audience comprises statisticians with a potential background in Bayesian methods but lacking deep learning expertise, as well as machine learners proficient in deep neural networks but with limited exposure to Bayesian statistics. Assuming no prior familiarity with either deep learning or Bayesian statistics, we provide succinct explanations of both domains in Section 2 and Section 3, respectively. These sections serve as concise reminders, enabling readers to grasp the foundations of each field. Subsequently, in Section 4, we delve into Bayesian neural networks, elucidating their core concepts, with a specific emphasis on frequently employed priors and inference techniques. By addressing these fundamental aspects, we equip the reader with a solid understanding of BNNs and their associated methodologies. Furthermore, in Section 5, we analyze the principal challenges encountered by contemporary Bayesian neural networks. This exploration provides readers with a comprehensive overview of the obstacles inherent to this field, highlighting areas for further investigation and improvement. Ultimately, Section 6 concludes our guide, summarizing the key points and emphasizing the significance of Bayesian neural networks. By offering this cohesive resource, our goal is to empower statisticians and machine learners alike, fostering a deeper understanding of BNNs and facilitating their broader application in practice.1 Footnote 1: We provide an up-to-date reading list of research articles related to Bayesian neural networks at this link: [https://github.com/konstantinos-p/Bayesian-Neural-Networks-Reading-List](https://github.com/konstantinos-p/Bayesian-Neural-Networks-Reading-List). Neural networks and statistical learning theory The inception of neural network models can be traced back to the late 1950s, when the first model, known as the _perceptron_, was constructed (Rosenblatt, 1958).
Subsequently, significant advancements have taken place in this field, notably the discovery of the _backpropagation_ algorithm in the 1980s (Rumelhart et al., 1986). This algorithm revolutionized neural networks by enabling efficient training through gradient-descent-based methods. However, the current era of profound progress in deep learning commenced in 2012 with a notable milestone: convolutional neural networks, when trained on graphics processing units (GPUs) for the first time, achieved exceptional performance on the ImageNet task (Krizhevsky et al., 2012). This breakthrough marked a significant turning point and propelled the rapid advancement of deep learning methodologies. **Definition and notations.**_Neural networks_ are hierarchical models made of layers: an input, several hidden layers, and an output, see Figure 1. The number of hidden layers \(L\) is called _depth_. Each layer following the input layer consists of units which are linear combinations of previous layer units transformed by a nonlinear function, often referred to as the nonlinearity or _activation function_ denoted by \(\phi:\mathbb{R}\to\mathbb{R}\). Given an input \(\mathbf{x}\in\mathbb{R}^{N}\) (for instance an image made of \(N\) pixels), the \(\ell\)-th hidden layer consists of two vectors whose size is called the _width_ of the layer, denoted by \(H_{\ell}\), where \(\ell=1,\ldots,L\). The vector of units before application of the non-linearity is called _pre-nonlinearity_ (or _pre-activation_), and is denoted by \(\mathbf{g}^{(\ell)}=\mathbf{g}^{(\ell)}(\mathbf{x})\), while the vector obtained after element-wise application of \(\phi\) is called _post-nonlinearity_ (or _post-activation_) and is denoted by \(\mathbf{h}^{(\ell)}=\mathbf{h}^{(\ell)}(\mathbf{x})\). More specifically, these vectors are defined as \[\mathbf{g}^{(\ell)}(\mathbf{x})=\mathbf{w}^{(\ell)}\mathbf{h}^{(\ell-1)}(\mathbf{x}),\quad\mathbf{h}^{(\ell)}(\mathbf{x})=\phi(\mathbf{g}^{(\ell)}(\mathbf{x})), \tag{1}\] where \(\mathbf{w}^{(\ell)}\) is a weight matrix of dimension \(H_{\ell}\times H_{\ell-1}\) including a bias vector, with the convention that \(H_{0}=N\), the input dimension. **Supervised learning.** We denote the learning sample \((X,Y)=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\in(\mathcal{X}\times\mathcal{Y})^{n}\), which contains \(n\) input-output pairs. Observations \((X,Y)\) are assumed to be randomly sampled from a distribution \(\mathfrak{D}\). Thus, we denote \((X,Y)\sim\mathfrak{D}^{n}\) the i.i.d. observation of \(n\) elements. We define the test set \((X_{\text{test}},Y_{\text{test}})\) of \(n_{\text{test}}\) samples in a similar way to that of the learning sample. We consider some loss function \(\mathcal{L}:\mathcal{F}\times\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\), where \(\mathcal{F}\) is a set of predictors \(f:\mathcal{X}\to\mathcal{Y}\). We also denote the empirical risk \(\mathcal{R}_{n}^{\mathcal{L}}(f)=(1/n)\sum_{i}\mathcal{L}(f(\mathbf{x}_{i}),\mathbf{y}_{i})\) and the risk \[\mathcal{R}_{\mathfrak{D}}^{\mathcal{L}}(f)=\mathbf{E}_{(\mathbf{x},\mathbf{y})\sim\mathfrak{D}}\mathcal{L}(f(\mathbf{x}),\mathbf{y}). \tag{2}\] Figure 1: Simple fully-connected neural network architecture. The minimizer of \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}\) is called the _Bayes optimal predictor_, \(f^{*}=\arg\min_{f:\mathcal{X}\rightarrow\mathcal{Y}}\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)\). The minimal risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f^{*})\), called the _Bayes risk_, is achieved by the Bayes optimal predictor \(f^{*}\).
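To make Eqs. (1) and (2) concrete, the following NumPy sketch implements a toy fully-connected network and evaluates its empirical risk under the squared loss; the widths and the tanh nonlinearity are illustrative choices, and the bias terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.tanh                                 # activation function phi

widths = [4, 8, 8, 1]                         # H_0 = N = 4, two hidden layers, H_L = 1
W = [rng.normal(size=(h, h_prev)) / np.sqrt(h_prev)
     for h_prev, h in zip(widths[:-1], widths[1:])]

def f(x):
    h = x
    for ell, w in enumerate(W, start=1):
        g = w @ h                             # pre-activation g^{(ell)} (Eq. (1))
        h = phi(g) if ell < len(W) else g     # identity nonlinearity on the output
    return h

# Empirical risk R_n with the squared loss on a toy learning sample (X, Y)
X, Y = rng.normal(size=(20, 4)), rng.normal(size=(20, 1))
R_n = np.mean([np.sum((f(x) - y) ** 2) for x, y in zip(X, Y)])
```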
Returning to neural networks, we denote their vectorized weights by \(\mathbf{w}\in\mathbb{R}^{d}\) with \(d=\sum_{\ell=1}^{L}H_{\ell-1}H_{\ell}\), such that \(f(\mathbf{x})=f_{\mathbf{w}}(\mathbf{x})\). The goal is to find the optimal weights such that the neural network output \(\mathbf{y}_{i}^{*}\) for input \(\mathbf{x}_{i}\) is the _closest_ to the given label \(\mathbf{y}_{i}\), which is estimated by a loss function \(\mathcal{L}\). In the regression problem, for example, the loss function \(\mathcal{L}\) could be the mean-squared error \(\|\mathbf{y}_{i}^{*}-\mathbf{y}_{i}\|^{2}\). The optimization problem is then to minimize the empirical risk: \[\hat{\mathbf{w}}=\operatorname*{arg\,min}_{\mathbf{w}}\mathcal{R}^{\mathcal{L}}_{n}(f_{\mathbf{w}}).\] With optimal weights \(\hat{\mathbf{w}}\), the empirical risk \(\mathcal{R}^{\mathcal{L}}_{n}(f_{\hat{\mathbf{w}}})\) is small and should be close to the Bayes risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f^{*})\). **Training.** The main workhorse of neural network training is gradient-based optimization: \[\mathbf{w}\leftarrow\mathbf{w}-\eta\,\nabla_{\mathbf{w}}\mathcal{R}^{\mathcal{L}}_{n}(f_{\mathbf{w}}), \tag{3}\] where \(\eta>0\) is a _step size_, or _learning rate_, and the gradients are computed as products of gradients between each layer _from right to left_, a procedure called _backpropagation_ (Rumelhart et al., 1986), thus making use of the chain rule and efficient implementations for matrix-vector products. For large datasets, this optimization is often replaced by stochastic gradient descent (SGD), where gradients are approximated on some randomly chosen subsets called _batches_ (Robbins and Monro, 1951). In this case, it requires a careful choice of the learning rate parameter. For a survey on different optimization methods, see, for example, Sun et al. (2019a). For the optimization procedure, another important aspect is how to choose the weight initialization; we discuss this in detail in Section 5.1.1. ### Choice of architecture With the progress in deep learning, different neural network architectures have been introduced to better adapt to different learning problems. Knowledge about the data allows encoding specific properties into the architecture. Depending on the architecture, this results (among other benefits) in better feature extraction, a reduced number of parameters, invariance or equivariance to certain transformations, robustness to distribution shifts and more numerically stable optimization procedures. We shortly review some important models and refer the reader to Sarker (2021) for a more in-depth overview of recent techniques. _Convolutional neural networks_ (CNNs) are widely used in computer vision. Image data has spatial features that refer to the arrangement of pixels and their relationship. For example, we can easily identify a human's face by looking at specific features like eyes, nose, mouth, etc. CNNs were introduced to capture spatial features by using _convolutional layers_, a particular case of the fully-connected layers described above, where certain sets of parameters are shared (LeCun et al., 1989; Krizhevsky et al., 2012). Convolutional layers perform a dot product of a convolution kernel with the layer's input matrix. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map of smaller dimension which serves as an input to the next layer.
It introduces the concept of parameter sharing where the same kernel, or filter, is applied across different input parts to extract the relevant features from the input. _Recurrent neural networks_ (RNNs) are designed to save the output of a layer by adding it back to the input (Rumelhart et al., 1986; Hochreiter and Schmidhuber, 1997). During training, the recurrent layer has some information from the previous time-step. Such neural networks are advantageous for sequential data where each sample can be assumed to be dependent on preceding ones. _Residual neural networks_ (ResNets) have residual blocks which add the output from the previous layer to the output of the current layer, a so-called _skip-connection_ (He et al., 2016). It allows training very deep neural networks by ensuring that deeper layers in the model will perform at least as well as layers preceding them. _Transformers_ are a type of neural network architecture that is almost entirely based on the _attention mechanism_ (Vaswani et al., 2017). The idea behind _attention_ is to find and focus on small, but important, parts of the input data. Transformers show better results than convolutional or residual networks on some tasks with big datasets such as image classification with JFT-300M (300M images), or English-French machine translation with WMT-2014 (36M sentences, split into a 32000 token vocabulary). An open question in deep learning is why deep neural networks (NNs) achieve state-of-the-art performance in a significant number of applications. The common belief is that neural networks' complexity and over-parametrization result in tremendous _expressive power_, beneficial _inductive bias_, flexibility to avoid _overfitting_ and, therefore, the ability to _generalize_ well. Yet, the high dimensionalities of the data and parameter spaces of these models make them challenging to understand theoretically. In the following, we review these open topics of research as well as the current scientific consensus on them. ### Expressiveness The expressive power describes neural networks' ability to approximate functions. In the late 1980s, a line of works established a universal approximation theorem, stating that one-hidden-layer neural networks with a suitable activation function could approximate any continuous function on a compact domain, that is \(f:[0,1]^{N}\rightarrow\mathbb{R}\), to any desired accuracy (Cybenko, 1989; Funahashi, 1989; Hornik et al., 1989; Barron, 1994). The obstacle is that the size of such networks may be exponential in the input dimension \(N\), which makes them highly prone to overfitting as well as impractical, since adding extra layers in the model is often a cheaper way to increase the representational power of the neural network. More recently, Telgarsky (2016) studied which functions neural networks could represent by focusing on the choice of the architecture and showed that deeper models are more expressive. Chatziafratis et al. (2020a,b) extended this result by obtaining width-depth trade-offs. Another approach is to analyze the finite-sample expressiveness of neural networks. Zhang et al. (2017) state that as soon as the number of parameters of a network is greater than the input sample size, even a simple two-layer neural network can represent any function of the input sample. Though neural networks are theoretically expressive, the core of the learning problem lies in their complexity, and research focuses on obtaining complexity bounds.
In general, the ability to approximate or to _express_ specific functions can be considered as explicit _inductive bias_, which we discuss in detail in the next section. ### Inductive bias By choosing a design and a training procedure for a model assigned to a given problem, we make some assumptions on the problem structure. These assumptions are summed up in the term _inductive bias_2, i.e., prior preferences for specific models and problems. Footnote 2: The term _inductive_ comes from philosophy: _inductive reasoning_ refers to _generalization_ from specific observations to a conclusion. This is a counterpoint to _deductive reasoning_, which refers to _specialization_ from general ideas to a conclusion. **Examples.** For instance, the linear regression model is built on the assumption of a linear relationship between the target variable and the features. The knowledge that the data is of a linear nature is _embedded_ into the model. Because of this limitation of the linearity of the model, linear regression is bound to perform poorly for data where the target variable does not linearly depend on the features, see the left plot of Figure 2. This assumption of a linear relationship between the target and the features is the inductive bias of linear regression. In the \(k\)-nearest neighbours model, the inductive bias is that the answer for any object should be calculated only on the basis of the answer values of the training sample elements closest to this object, see the right plot of Figure 2. In non-linear regression, the assumption is some non-linear function. **Importance.** The goal of a machine learning model is to derive a general rule for all elements of a domain based on a limited number of observations. In other words, we want the model to _generalize_ to data it has not seen before. Such generalization is impossible without the presence of _inductive bias_ in the model because the training sample is always _finite_. From a finite set of observations, without making any additional assumptions about the data, a general rule can be deduced in an infinite number of ways. Inductive bias is additional information about the nature of the data for the model; a way to show models _which way to think_. It allows the model to prioritize one generalization method over another. Thus, when choosing a model for training to solve a specific problem, one needs to choose a model whose inductive bias better matches the nature of the data and is better suited to solve this problem. The introduction of any inductive bias into a machine learning model relies on certain characteristics of the model _architecture_, _training algorithm_ and manipulations on _training data_. Figure 2: Example of using the linear regression (left) and \(k\)-nearest neighbours regression (right) models on simulated data points. **Inductive bias and training data.** One can also consider inductive bias through training data. The less data there is, the more likely the model is to choose a poor generalization method. If the training sample is small, models such as neural networks are often _overfitted_. For example, when solving the problem of classifying images of cats and dogs, sometimes attention is paid to the background and not to the animals themselves. But people, unlike neural networks, can quickly learn on the problem of classifying cats and dogs, having only a dozen pictures in the training set.
This is because people have additional inductive bias: we know that there is a background in the picture, and there is an object, and during the classification of pictures, you need to pay attention only to the object itself. And the neural network before training does not know about any "backgrounds" and "objects"--it is simply given different pictures and asked to learn how to distinguish them. Thus, _the smaller the training sample and the more complex the problem, the stronger inductive bias_ needs to be built into the model for successful training. Conversely, the more extensive and more diverse the training set, the more knowledge about the nature of the data the model receives during training. This means that the less likely the model is to choose a "bad" generalization method that will work poorly on data outside the training set. Thus, _the more data you have, the better the model will train_. One of the tricks to increase the dataset is to artificially augment the training set by introducing distortions into the inputs, a procedure known as _data augmentation_. Suppose we are trying to classify images of objects or handwritten digits. Each time we visit a training example, we can randomly distort it, for instance, by shifting it by a few pixels, adding noise, rotating it slightly, or applying some sort of warping. This can increase the effective size of the training set and make it more likely that any given test example has a closely related training example. The data augmentation procedure is a sort of inductive bias because it requires the knowledge of how to construct additional data points, such as whether the object or part of the object can be rotated, zoomed in on, etc. **Inductive bias and simplicity.** The _no free lunch_ theorem states that no single learning algorithm can succeed on all possible problems (Wolpert, 1996). It is, thus, essential to enforce a form of _simplicity_ in the algorithm, typically by restricting the class of models to be learned, which may reflect prior knowledge about the problem being tackled. This is associated with _inductive bias_, which should encode the prior knowledge to seek efficiency. In the context of neural networks, one form of simplicity is in the choice of _architecture_, such as using convolutional neural networks (LeCun et al., 1989) when learning from image data. Another example is _sparsity_, which may seek models that only rely on a few relevant variables out of many available ones and can be achieved through some regularization methods (Tibshirani, 1996). **Inductive bias of neural network architecture.** A number of deep neural network architectures have been designed with the aim of improving the inductive bias of the corresponding predictor. Here we review two popular neural network architectures that encode useful inductive biases. _Convolutional neural networks_ (CNNs). The inductive bias of convolutional layers (LeCun et al., 1989) is the assumption of compactness and translation invariance. The convolution filter is designed in such a way that at one time it captures a compact part of the entire image (for example, a \(3\times 3\)-pixel square), regardless of the distant pixels of the image. Also, in the convolutional layer, the same filter is used to process the entire image (the same filter processes every \(3\times 3\)-pixel square).
It turns out that the convolutional layer is designed in such a way that its inductive bias correlates well with the nature of images and the objects on them, which is why convolutional neural networks are so efficient at processing images (Krizhevsky et al., 2012). This is an example of the desired, or _explicit_, inductive bias. _What makes data efficiently learnable by fitting a huge neural network with a specific algorithm? Is there implicit inductive bias?_ Ulyanov et al. (2018) demonstrate that the output of a convolutional neural network with randomly initialized weights corresponds to a _deep image prior_, i.e., non-trivial image properties, _before_ training. It means that the way convolutional neural networks are designed, their architecture itself, helps to encode the information from images. Geirhos et al. (2019) show that _convolutional neural networks_ have an implicit inductive bias concerning the texture of images: it turns out that convolutional networks are designed in such a way that, when processing images, they pay more attention to textures than to the shapes of objects. To get rid of this undesirable behavior, the images from the training dataset are augmented so that the dataset contains more images of the same shape, but with different types of textures (Li et al., 2021). Despite the popularity of the topic, the implicit inductive bias in neural networks is still an open question due to the complexity of the models. _Visual transformers_ (Dosovitskiy et al., 2021) are a type of neural network architecture that shows better results than convolutional networks on some tasks, including, for example, classification of images from the JFT-300M dataset. This dataset consists of 300 million images, while ImageNet has 1.2 million images. The visual transformer is almost entirely based on the _attention mechanism_ (Vaswani et al., 2017), so the model inherits the inductive bias of attention, which consists in a shift towards simpler functions. But like convolutions, transformers also have the implicit inductive bias of neural networks (Morrison et al., 2021). Though there is still a lot of ongoing research on transformers, the inductive bias of transformers is much simpler than that of convolutional neural networks, as the former models impose fewer restrictions than the latter. Here we see confirmation that the larger the dataset we have at our disposal, the less inductive bias is required, and the better the model can learn for the task. Therefore, transformers have a simple inductive bias and show state-of-the-art results in image processing, but they require a lot of data. On the contrary, convolutional neural networks have a strong inductive bias, and they perform well on smaller datasets. Recently, d'Ascoli et al. (2021) combined the transformer and convolutional neural network architectures, introducing the ConViT model. This model is able to process images almost as well as transformers, while requiring less training data. ### Generalization and overfitting When we train a machine learning model, we do not just want it to learn to model the training data. We want it to _generalize_ to data it has not seen before. Fortunately, there is a way to measure an algorithm's generalization performance: we measure its performance on a held-out test set, consisting of examples it has not seen before. If an algorithm works well on the training set but fails to generalize, we say it suffers from _overfitting_.
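As a toy numerical illustration of this train/test gap, the following sketch fits polynomials of increasing degree to a small noisy sample; the degrees and the noise level are arbitrary choices. The high-degree model attains a near-zero training error while its held-out error deteriorates.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train, x_test = rng.uniform(0, 1, 15), rng.uniform(0, 1, 100)
f_true = lambda x: np.sin(2 * np.pi * x)
y_train = f_true(x_train) + 0.1 * rng.normal(size=15)
y_test = f_true(x_test) + 0.1 * rng.normal(size=100)

for degree in (1, 3, 12):                     # underfit, good fit, overfit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))
```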
Modern machine learning systems based on deep neural networks are usually over-parameterized, i.e., the number of parameters in the model is much larger than the size of the training data, which makes these systems prone to overfitting. **Classical regime.** Let us randomly divide the original dataset into a train, validation and test set. The model is trained by optimizing the training error computed on the train set, then its performance is checked by computing the validation error on the validation set. After tuning any existing hyperparameters by checking the validation error, the model (or models) are then evaluated one final time on the test set. During the training procedure, the model can suffer from overfitting and underfitting (see Figure 3 for an illustration), which can be described in terms of training and testing errors. _Overfitting_ is a negative phenomenon that occurs when a learning algorithm generates predictions that fit too closely or exactly to a particular dataset and are therefore not suitable for applying the algorithm to additional data or future observations. In this case, the training error is low but the error computed on a test set is high. The model finds dependencies in the train set which do not hold in the test set. As a result, the model has _high variance_, a problem caused by being highly sensitive to small deviations in the training set. The opposite of overfitting is _underfitting_, in which the learning algorithm does not provide a sufficiently small average error on the training set. Underfitting occurs when insufficiently complex models are used or the training is stopped too early. In this case, the error is high for both train and test sets. As a result, the model has _high bias_, an error of incorrect assumptions in the learning algorithm. The goal is to find the best strategy to reduce overfitting and improve the generalization, or, in other words, reduce the trained model's bias and variance. Ensembles can be used to eliminate high variance and high bias. For example, the _boosting_ procedure applied to several models with high bias can produce a model with a reduced bias. In another case, when _bagging_, several low-bias models are combined, and the resulting model can reduce the variance. But in general, reducing one of the adverse effects leads to an increase in the other. This conflict in an attempt to simultaneously minimize bias and variance is called the _bias-variance trade-off_. This trade-off is achieved at the minimum of the test error, see the classical regime region in Figure 4. **Modern regime.** In the past few years, it was shown that when increasing the model size beyond the number of training examples, the model's test error can start _decreasing again_ after reaching the interpolation peak, see Figure 4. This phenomenon is called _double-descent_ by Belkin et al. (2019), who demonstrated it for several machine learning models, including a two-layer neural network. Nakkiran et al. (2021) extensively study this double-descent phenomenon for deep neural network models and show that the double-descent phenomenon occurs when varying the width of the model or the number of iterations during the optimization. Moreover, the double-descent phenomenon can be observed as a function of dataset size, where more data sometimes leads to worse test performance. Figure 3: Examples of underfitting, optimum solution, and overfitting in a toy classification problem. The green dots and violet squares represent two classes. The lines represent different models that classify the data. The left plot shows the result of using a model that is too simple or underfitted for the presented dataset, while the right plot shows an overfitted model.
It is not fully understood yet why this phenomenon occurs in machine learning models and which inductive biases are responsible for it. However, it is important to take this aspect into account while choosing strategies to improve generalization. Figure 4: Illustration of the double-descent phenomenon. **Strategies.** One reason for overfitting is the lack of training data, making the learned distribution not mirror the real underlying distribution. Collecting data arising from all possible parts of the domain to train machine learning models is prohibitively expensive and even impossible. Therefore, enhancing the generalization ability of models is vital in both industry and academic fields. _Data augmentation methods_, which are discussed above in the context of inductive bias, extract more information from the original dataset through augmentations, and thus help to improve the generalization. Many strategies for increasing generalization performance focus on the model's architecture itself. Regularization methods are used to encourage a lower complexity of a model. Functional solutions such as dropout regularization (Srivastava et al., 2014), batch normalization (Ioffe and Szegedy, 2015), transfer learning (Weiss et al., 2016), and pretraining (Erhan et al., 2010) have been developed to try to adapt deep learning to applications on smaller datasets. Another approach is to treat the number of training epochs as a hyperparameter and to stop training if the performance of the model on the test dataset starts to degrade, e.g., loss begins to increase or accuracy begins to decrease. This procedure is called _early stopping_. Though _explicit regularization_ techniques are known to improve generalization, their absence does not imply poor generalization performance for deep learning models. Indeed, Zhang et al. (2017) argue that neural networks have _implicit regularizations_; for instance, stochastic gradient descent tends to converge to small norm solutions. The early stopping procedure can also be viewed as an _implicit regularization_ method, as it implicitly forces the use of a smaller network with less _capacity_ (Zhang et al., 2017, 2021). **Generalization bounds.** We are often interested in making the discussion on training, validation, and testing sets formal, so as to ensure that our neural network will work well on new data with high probability. We are thus often interested in finding a bound on the risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)=\mathbf{E}_{(\mathbf{x},\mathbf{y})\sim\mathfrak{D}}\mathcal{L}(f(\mathbf{x}),\mathbf{y})\) with high probability. The most common way of bounding the above in the context of deep neural networks is by use of a test set (Langford, 2005; Kaariainen and Langford, 2005). One first trains a predictor using a training set \(\mathcal{D}_{\text{train}}\), and then computes a test risk \(\mathcal{R}^{\mathcal{L}}_{\mathcal{D}_{\text{test}}}(f)\). For \(n_{\text{test}}\) test samples, and in the classification setting, this can be readily turned into a bound on the risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)\), using a tail bound on the corresponding binomial distribution (Langford, 2005). However, this approach has some shortcomings. For one, it requires a significant number of samples \(n_{\text{test}}\).
This can be a problem in that these samples cannot be used for training, possibly hindering the performance of the deep network. At the same time, for a number of fields such as healthcare, the cost of obtaining test samples can be prohibitively high (Davenport and Kalakota, 2019). Finally, even though we can prove that the true risk will be low, we do not get any information about the reason _why_ the classifier performs well in the first place. As such, researchers often use the empirical risk (on the training set) together with the _complexity_(Mohri et al., 2018) of the classifier to derive bounds roughly of the form \[\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)\leq\mathcal{R}^{\mathcal{L}}_{ \mathcal{D}_{\text{train}}}(f)+\text{complexity}.\] Intuitively, the more complex the classifier, the more it is prone to simply memorize the training data, and to learn any discriminative patterns. This leads to high true risk. Traditional data-independent complexity measures such as Rademacher complexity (Mohri et al., 2018) and VC-dimension (Blumer et al., 1989) are loose for deep neural networks. This is because they intuitively make a single complexity estimate for the neural network for all possible input datasets. Thus they are pessimistic, as a neural network could memorize one dataset (which is difficult) but learn patterns that generalize on another dataset (which might be easy). Based on the above results, researchers focused on complexity measures which are data-dependent (Golowich et al., 2017; Arora et al., 2018; Neyshabur et al., 2017; Sokolic et al., 2016; Bartlett et al., 2017; Dziugaite and Roy, 2017). This means that they assess the complexity of a deep neural network based on the specific instantiation of the weights that we inferred for a given dataset. The tightest data-dependent generalization bounds are currently PAC-Bayes generalization bounds (McAllester, 1999; Germain et al., 2016; Dziugaite and Roy, 2017; Dziugaite et al., 2021). Contrary to the VC-dimension or the Rademacher complexity, these bounds work for stochastic neural networks (which are also the topic of this review). They can be roughly seen as bounding the mutual information between the training set and the deep neural network weights. The main complexity quantity of interest is typically the Kullback-Leibler (KL) divergence between a prior and a posterior distribution over the deep neural network weights (Dziugaite and Roy, 2017; McAllester, 1999). ### Limitations of the frequentist approach to deep learning Although deep learning models have been largely used in many research areas, such as image analysis (Krizhevsky et al., 2012), signal processing (Graves et al., 2013), or reinforcement learning (Silver et al., 2016), their safety-critical real-world applications remain limited. 
Here we identify a number of limitations of the frequentist approach to deep learning: * miscalibrated and/or overconfident uncertainty estimates (Minderer et al., 2021); * non-robustness to _out-of-distribution_ samples (Lee et al., 2018; Mitros and Mac Namee, 2019; Hein et al., 2019; Ashukha et al., 2020), and sensitivity to _domain shifts_ (Ovadia et al., 2019); * sensitivity to adversarial attacks by malicious actors (Moosavi-Dezfooli et al., 2016, 2017; Wilson et al., 2016); * poor interpretability of a deep neural network's inference model (Sundararajan et al., 2017; Selvaraju et al., 2017; Lim et al., 2021; Koh and Liang, 2017); * poor understanding of generalization, over-reliance on validation sets (McAllester, 1999; Dziugaite and Roy, 2017). **Uncertainty estimates.** We typically distinguish between two types of uncertainty (Der Kiureghian and Ditlevsen, 2009). _Data (aleatoric) uncertainty_ captures noise inherent in the observations. This could be for example sensor noise or motion noise, resulting in uncertainty that cannot be reduced even if more data were to be collected. _Model (epistemic) uncertainty_ derives from the uncertainty on the model parameters, i.e., the weights in the case of a neural network (Blundell et al., 2015). This uncertainty captures our ignorance about which model generated our collected data. While aleatoric uncertainty remains even for an infinite number of samples, model uncertainty can be explained away given enough data. For an overview of methods for estimating the uncertainty in deep neural networks, see Gal (2016) and Gawlikowski et al. (2021). While NNs often achieve high train and test accuracy, the uncertainty of their predictions is miscalibrated (Guo et al., 2017). In particular, in the classification setting, interpreting softmax outputs as per-class probabilities is not well-founded from a statistical perspective. The Bayesian paradigm, by contrast, provides well-founded and well-calibrated uncertainty estimates (Kristiadi et al., 2020), by dealing with stochastic predictors and applying Bayes' rule consistently. **Distribution shift.** Traditional machine learning methods are generally built on the _iid assumption_ that training and testing data are independent and identically distributed. However, the iid assumption can hardly be satisfied in real scenarios, resulting in uncertainty problems with _in-domain_ and _out-of-domain_ samples, and _domain shifts_. _In-domain_ uncertainty is measured on data taken from the training data distribution, i.e., data from the same domain. _Out-of-domain_ uncertainty of the model is measured on data that does not follow the same distribution as the training dataset. Out-of-domain data can include data naturally corrupted with noise or relevant transformations, as well as data corrupted adversarially. Under corruption, the test domain and the training domain differ significantly. However, the model should still not be overconfident in its predictions. Hein et al. (2019) demonstrate that rectified linear unit (ReLU) networks are always overconfident on out-of-distribution examples: scaling a training point \(\mathbf{x}\in\mathbb{R}^{N}\) with a scalar \(a\) yields predictions of arbitrarily high confidence in the limit \(a\to\infty\). Modas et al. (2021) and Fawzi et al. (2016) discuss that neural networks in the case of classification can suffer from reduced accuracy in the presence of common corruptions. A common remedy is training on appropriately designed data transformations (Modas et al., 2021).
However, the Bayesian paradigm should again be beneficial. It is expected that the resulting _Bayesian_ neural networks will exhibit more uncertainty in regions far from the training data, and thus degrade gracefully as images become gradually more corrupted and diverge from the training data.

**Adversarial robustness.** As previously mentioned, modern image classifiers achieve high accuracy on iid test sets but are not robust to small, adversarially-chosen perturbations of their inputs. Given an image \(\mathbf{x}\) correctly classified by a neural network, an adversary can usually engineer an adversarial perturbation \(\mathbf{\delta}\) so small that \(\mathbf{x}+\mathbf{\delta}\) looks just like \(\mathbf{x}\) to the human eye, yet the network classifies \(\mathbf{x}+\mathbf{\delta}\) as a different, incorrect class. Bayesian neural networks with distributions placed over their weights and biases enable principled quantification of their predictions' uncertainty. Intuitively, the latter can be used to provide a natural protection against adversarial examples, making BNNs particularly appealing for safety-critical scenarios, in which the safety of the system must be provably guaranteed.

**Interpretability.** Deep neural networks are highly opaque because they cannot produce human-understandable accounts of their reasoning processes or explanations. There is a clear need for deep learning models that offer explanations that users can understand and act upon (Lipton, 2018). Some models are designed explicitly with interpretability in mind (Montavon et al., 2018; Selvaraju et al., 2017). At the same time, a number of techniques have been developed to interpret neural network predictions, including among others gradient-based methods (Sundararajan et al., 2017; Selvaraju et al., 2017), which create "heatmaps" of the most important features, as well as influence-function-based approaches (Koh and Liang, 2017). The Bayesian paradigm allows for an elegant treatment of interpretability. Defining a prior is central to the Bayesian paradigm, and selecting it helps analyze which tasks are similar to the current task, how to model the task noise, etc. (see Fortuin et al., 2021; Fortuin, 2022). Furthermore, the Bayesian paradigm incorporates a function-space view of predictors (Khan et al., 2019). Compared to the weight-space view, this can result in more interpretable architectures.

**Generalization bounds.** It is well known that traditional approaches to proving generalization using generalization bounds fail for deterministic deep neural networks. Such generalization bounds are very useful for cases where we have little training data: we might not be able to both train the predictor sufficiently and keep a large enough additional set for validation and testing. A generalization bound could therefore allow us to train on all the available data while at the same time proving generalization. However, as Zhang et al. (2017) and Golowich et al. (2017) show, generalization bounds based on the Rademacher complexity and the VC dimension are vacuous for deep networks (they provide upper bounds on the true error rate larger than 100%). On the contrary, the Bayesian paradigm currently results in the tightest generalization bounds for deep neural networks, in conjunction with a frequentist approach termed PAC-Bayes (Dziugaite and Roy, 2017). Thus following the Bayesian paradigm is a promising direction for tasks with difficult-to-obtain data.
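To make this concrete, a representative PAC-Bayes bound (in the spirit of McAllester, 1999, as used by Dziugaite and Roy, 2017; the exact constants vary between versions) states that, for a prior \(P\) fixed before seeing the data and any \(\delta\in(0,1)\), with probability at least \(1-\delta\) over the draw of a training set of size \(n\), simultaneously for all posteriors \(Q\),

\[\mathbb{E}_{\mathbf{w}\sim Q}\left[\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f_{\mathbf{w}})\right]\leq\mathbb{E}_{\mathbf{w}\sim Q}\left[\mathcal{R}^{\mathcal{L}}_{\mathcal{D}_{\text{train}}}(f_{\mathbf{w}})\right]+\sqrt{\frac{\text{KL}(Q\|P)+\log\frac{n}{\delta}}{2(n-1)}}.\]

Here the KL term plays exactly the role of the complexity term above: a posterior \(Q\) that stays close to the data-independent prior \(P\) certifies good generalization.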
We introduce the Bayesian paradigm in Section 3 and then review its application to neural networks in Section 4.

## Bayesian machine learning

Achieving a simultaneous design of adaptive and robust systems presents a significant challenge. Khan and Rue (2021) propose that effective algorithms which strike a balance between robustness and adaptivity often exhibit a Bayesian nature, as they can be viewed as approximations of Bayesian inference. The Bayesian approach has long been recognized as a well-established paradigm for working with probabilistic models and addressing uncertainty, particularly in the field of machine learning (Ghahramani, 2015). In this section, we outline the key aspects of the Bayesian paradigm, aiming to provide the necessary technical foundation for the application of Bayesian neural networks.

### Bayesian paradigm

The fundamental idea behind the Bayesian approach is to quantify the uncertainty in the inference by using probability distributions. Considering parameters as random variables is in contrast to non-Bayesian approaches, also referred to as frequentist or classic, where parameters are assumed to be deterministic quantities. A Bayesian acts by updating their beliefs as data are gathered according to Bayes' rule, an inductive learning process called Bayesian inference. The choice of resorting to Bayes' rule instead of any other has mathematical justifications dating back to works by Cox and by Savage (Cox, 1961; Savage, 1972). Recall the following notation: consider a dataset \(\mathcal{D}=\{(\mathbf{x}_{1},\mathbf{y}_{1}),\ldots,(\mathbf{x}_{n},\mathbf{y}_{n})\}\), modeled with a data-generating process characterized by a _sampling model_ or _likelihood_ \(p(\mathcal{D}|\mathbf{w})\). Let parameters \(\mathbf{w}\) belong to some parameter space denoted by \(\mathbf{\mathcal{W}}\), usually a subset of the Euclidean space \(\mathbb{R}^{d}\). A _prior distribution_ \(p(\mathbf{w})\) represents our prior beliefs about the distribution of the parameters \(\mathbf{w}\) (more details in Section 3.2). Note that simultaneously specifying a prior \(p(\mathbf{w})\) and a sampling model \(p(\mathcal{D}|\mathbf{w})\) amounts to describing the _joint distribution_ between parameters \(\mathbf{w}\) and data \(\mathcal{D}\), in the form of the product rule of probability \(p(\mathbf{w},\mathcal{D})=p(\mathbf{w})p(\mathcal{D}|\mathbf{w})\). The prior and the model are combined with Bayes' rule to yield the _posterior distribution_ \(p(\mathbf{w}|\mathcal{D})\) as follows: \[p(\mathbf{w}|\mathcal{D})=\frac{p(\mathbf{w})p(\mathcal{D}|\mathbf{w})}{p(\mathcal{D})}. \tag{4}\] The normalizing constant \(p(\mathcal{D})\) in Bayes' rule is called the model _evidence_ or _marginal likelihood_. This normalizing constant is irrelevant to the posterior since it does not depend on the parameter \(\mathbf{w}\), which is why Bayes' rule is often written in the form \[\text{posterior}\propto\text{prior}\times\text{likelihood}.\] Nevertheless, the model evidence remains critical in _model comparison_ and _model selection_, notably through _Bayes factors_. See for example Chapter 28 in MacKay (2003), and Lotfi et al. (2022) for a detailed exposition in Bayesian deep learning. It can be computed by integrating over all possible values of \(\mathbf{w}\): \[p(\mathcal{D})=\int p(\mathcal{D}|\mathbf{w})p(\mathbf{w})\mathrm{d}\mathbf{w}. \tag{5}\] Using a Bayesian approach, all information conveyed by the data is encoded in the posterior distribution.
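As a toy illustration of Bayes' rule (4), here is a minimal Python sketch for a conjugate case in which the posterior and the evidence (5) are available in closed form; the data and the Beta hyperparameters are hypothetical choices:

```python
import numpy as np
from scipy import stats
from scipy.special import betaln, comb

# Hypothetical coin-flip data: 7 heads out of 10 tosses.
n, k = 10, 7

# Beta(a0, b0) prior on the head probability w (an illustrative choice).
a0, b0 = 2.0, 2.0

# Beta prior x binomial likelihood -> Beta posterior,
# so Bayes' rule has a closed-form solution here.
a_post, b_post = a0 + k, b0 + (n - k)
posterior = stats.beta(a_post, b_post)

print("posterior mean:", posterior.mean())            # (a0 + k)/(a0 + b0 + n)
print("95% credible interval:", posterior.interval(0.95))

# The evidence p(D) is also available in closed form via Beta functions:
log_evidence = np.log(comb(n, k)) + betaln(a_post, b_post) - betaln(a0, b0)
print("log evidence:", log_evidence)
```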
Often statisticians are asked to communicate scalar summaries in the form of point estimates of the parameters or quantities of interest. A convenient way to proceed for Bayesians is to compute the _posterior mean_ of some quantity of interest \(f(\mathbf{w})\) of the parameters. The problem therefore comes down to numerical computation of the integral \[\mathbb{E}[f(\mathbf{w})|\mathcal{D}]=\int f(\mathbf{w})p(\mathbf{w}|\mathcal{D})\mathrm{d} \mathbf{w}. \tag{6}\] This includes the posterior mean if \(f(\mathbf{w})=\mathbf{w}\), as well as _predictive_ distributions. More specifically, let \(\mathbf{y}^{*}\) be a new observation associated to some input \(\mathbf{x}^{*}\) in a regression or classification task; then the prior and posterior predictive distributions are respectively \[p(\mathbf{y}^{*}|\mathbf{x}^{*}) =\mathbb{E}[p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})]\] \[=\int p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})p(\mathbf{w})\mathrm{d}\mathbf{w},\] \[\text{and}\quad p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathcal{D}) =\mathbb{E}[p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})|\mathcal{D}]\] \[=\int p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})p(\mathbf{w}|\mathcal{D})\mathrm{ d}\mathbf{w}.\] The posterior predictive distribution is typically used in order to assess model fit to the data, by performing posterior predictive checks. More generally, it allows us to account for _model uncertainty_, or _epistemic uncertainty_, in a principled way, by averaging the sampling distribution \(p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})\) over the posterior distribution \(p(\mathbf{w}|\mathcal{D})\). This model uncertainty is in contrast to the uncertainty associated with data measurement, also called _aleatoric uncertainty_ (see Section 2.5).

### Priors

Bayes' rule (4) tells us how to update our beliefs, but it does not provide any hint about what those beliefs should be. Often the choice of a prior may be dictated by computational convenience. Let us mention the case of _conjugacy_: a prior is said to be _conjugate_ to a sampling model if the posterior remains in the same parametric family. Classic examples of such conjugate pairs of [prior, model] include the [Gaussian, Gaussian], [beta, binomial], and [gamma, Poisson], among others. These three pairs have in common the fact that their model belongs to the exponential family. More generally, any model from the exponential family possesses some conjugate prior. However, the existence of conjugate priors is not a distinguishing feature of the exponential family (for example, the Pareto distribution is a conjugate prior for the uniform model on the interval \([0,\mathbf{w}]\), for a positive scalar parameter \(\mathbf{w}\)). Discussing the choice of a prior often raises the question of _how much information it conveys_, with the distinction between _objective priors_ and _subjective priors_. For example, Jeffreys' prior, defined as being proportional to the square root of the determinant of the Fisher information matrix, is considered an objective prior in the sense that it is invariant to parameterization changes. Uninformative priors often have the troublesome oddity of being _improper_, in the sense of having a density that does not integrate to a finite value (for example, a uniform distribution on an unbounded parameter space). As surprising as it may seem, such priors are commonplace in Bayesian inference and are considered valid ones as soon as they yield a proper posterior, from which one can draw practical conclusions.
However, note that an improper prior hinders the use of the prior predictive (which is de facto improper, too), as well as Bayes factors. At the opposite end from objective priors, subjective priors lie at the roots of the Bayesian approach, where one's beliefs are encoded through a prior. Eliciting a prior distribution is a delicate issue; see for instance Mikkola et al. (2023) for a recent review. Critically, encoding prior beliefs becomes more and more difficult with more complex models, where parameters may not have a direct interpretation, and with higher-dimensional parameter spaces, where the design of a prior that adequately covers the space gets intricate. In such settings, direct computation of the posterior distribution may become intractable: when exact Bayesian inference is out of reach for a model, its performance hinges critically on the form of the approximations made due to computational constraints and on the nature of the prior distribution over parameters.

### Computational methods

Posterior computation involves three terms: the prior \(p(\mathbf{w})\), likelihood \(p(\mathcal{D}|\mathbf{w})\), and evidence \(p(\mathcal{D})\). The evidence integral (5) is typically not available in closed form and becomes intractable for high-dimensional problems. The impossibility of obtaining the posterior as a closed-form solution has led to the development of different approximation methods. Inference can be performed with _sampling strategies_ like Markov chain Monte Carlo (MCMC) procedures, or with _approximation methods_ based on optimization approaches like _variational inference_ and the _Laplace method_. In recent years, the development of probabilistic programming languages has simplified the implementation of Bayesian models in numerous programming environments: we can mention Stan (Carpenter et al., 2017), PyMC3 (Salvatier et al., 2016), Nimble (de Valpine et al., 2017), but also some probabilistic extensions of deep learning libraries like TensorFlow Probability (Dillon et al., 2017) and Pyro (Bingham et al., 2019), among others. Nevertheless, there remain many options to tune and challenges at each step of a Bayesian model, which we briefly summarize in the following sections. We refer to Gelman et al. (2020) for a detailed overview of the Bayesian workflow.

#### 3.3.1 Variational inference

Variational inference (Jordan et al., 1999; Blei et al., 2017) approximates the true posterior \(p(\mathbf{w}|\mathcal{D})\) with a more tractable distribution \(q(\mathbf{w})\) called the variational posterior distribution. More specifically, variational inference hypothesizes an approximation (or variational) family of simple distributions \(q\), e.g., isotropic Gaussians, to approximate the posterior: \(p(\mathbf{w}|\mathcal{D})\approx q(\mathbf{w}|\theta)\). Variational inference seeks the distribution parameter \(\theta\) in this family by minimizing the KL divergence between the approximate posterior and the true posterior. The KL divergence from \(q(\cdot|\theta)\) (denoted simply \(q\) hereafter) to \(p(\cdot|\mathcal{D})\) is defined as \[\text{KL}(q||p(\cdot|\mathcal{D}))=\int q(\mathbf{w})\log\frac{q(\mathbf{w})}{p(\mathbf{w} |\mathcal{D})}\text{d}\mathbf{w}.\] Then, Bayesian inference is performed with the intractable posterior \(p(\mathbf{w}|\mathcal{D})\) replaced by the tractable variational posterior approximation \(q(\mathbf{w})\).
It is easy to see that \[\text{KL}(q||p(\cdot|\mathcal{D}))=-\int q(\mathbf{w})\log\frac{p(\mathbf{w})p( \mathcal{D}|\mathbf{w})}{q(\mathbf{w})}\text{d}\mathbf{w}+\log p(\mathcal{D}).\] Since the log evidence does not depend on the choice of the approximate posterior \(q\), minimizing the KL is equivalent to maximizing the so-called evidence lower bound (ELBO): \[\text{ELBO}(q) =\int q(\mathbf{w})\log\frac{p(\mathbf{w})p(\mathcal{D}|\mathbf{w})}{q(\mathbf{w})} \text{d}\mathbf{w}\] \[=-\text{KL}(q||p)+\int q(\mathbf{w})\log p(\mathcal{D}|\mathbf{w})\text{d} \mathbf{w}.\] To illustrate how to optimize the above objective, let us take the common approach where the prior \(p(\mathbf{w})\) and posterior \(q(\mathbf{w})\) are modeled as Gaussians: \(p(\mathbf{w})=\mathcal{N}(\mathbf{w}|\mathbf{w}_{p},\mathbf{\Sigma}_{p})\) and \(q(\mathbf{w})=\mathcal{N}(\mathbf{w}|\mathbf{w}_{q},\mathbf{\Sigma}_{q})\), respectively. Then the first term in the ELBO can be computed in closed form by noting that \(2\,\text{KL}(q||p)\) is equal to \[\text{tr}(\mathbf{\Sigma}_{p}^{-1}\mathbf{\Sigma}_{q})-d+(\mathbf{w}_{p}-\mathbf{w}_{q})^{\top} \mathbf{\Sigma}_{p}^{-1}(\mathbf{w}_{p}-\mathbf{w}_{q})+\log\left(\frac{\det\mathbf{\Sigma}_{ p}}{\det\mathbf{\Sigma}_{q}}\right),\] where \(d\) is the dimension of \(\mathbf{w}\). The second term can be approximated through Monte Carlo sampling as \[\int q(\mathbf{w})\log p(\mathcal{D}|\mathbf{w})\text{d}\mathbf{w}\approx\frac{1}{S}\sum_{i=1}^{S} \log p(\mathcal{D}|\mathbf{w}_{i}),\] where \(\mathbf{w}_{i}\sim q(\mathbf{w})\), \(i=1,\ldots,S\) are Monte Carlo samples. The resulting objective can typically be optimized by gradient descent, using the reparametrization trick for Gaussians (Kingma et al., 2015).

#### 3.3.2 Laplace approximation

Another popular method is the _Laplace approximation_, which uses a normal approximation centered at the maximum of the posterior distribution, or maximum a posteriori (MAP). Let us illustrate the Laplace method for approximating a distribution \(g\) (typically a posterior distribution) known up to a constant, \(g(\mathbf{w})=f(\mathbf{w})/Z\), defined over a \(d\)-dimensional space \(\mathbf{\mathcal{W}}\). At a stationary point \(\mathbf{w}_{0}\), the gradient \(\nabla f(\mathbf{w})\) vanishes. Expanding \(\log f\) to second order around this stationary point yields \[\log f(\mathbf{w})\simeq\log f(\mathbf{w}_{0})-\frac{1}{2}(\mathbf{w}-\mathbf{w}_{0})^{\top} \mathbf{A}(\mathbf{w}-\mathbf{w}_{0}),\] where the Hessian matrix \(\mathbf{A}\in\mathbb{R}^{d\times d}\) is defined by \[\mathbf{A}=-\nabla\nabla\log f(\mathbf{w})|_{\mathbf{w}=\mathbf{w}_{0}},\] and \(\nabla\) is the gradient operator. Taking the exponential of both sides we obtain \[f(\mathbf{w})\simeq f(\mathbf{w}_{0})\exp\left\{-\frac{1}{2}(\mathbf{w}-\mathbf{w}_{0})^{\top }\mathbf{A}(\mathbf{w}-\mathbf{w}_{0})\right\}.\] The distribution \(g(\mathbf{w})\) is proportional to \(f(\mathbf{w})\) and the appropriate normalization coefficient can be found by inspection, giving \[g(\mathbf{w}) =\frac{|\mathbf{A}|^{1/2}}{(2\pi)^{d/2}}\exp\left\{-\frac{1}{2}( \mathbf{w}-\mathbf{w}_{0})^{\top}\mathbf{A}(\mathbf{w}-\mathbf{w}_{0})\right\}\] \[=\mathcal{N}(\mathbf{w}|\mathbf{w}_{0},\mathbf{A}^{-1}),\] where \(|\mathbf{A}|\) denotes the determinant of \(\mathbf{A}\). This Gaussian distribution is well-defined provided its precision matrix \(\mathbf{A}\) is positive-definite, which implies that the stationary point \(\mathbf{w}_{0}\) must be a local maximum, not a minimum or a saddle point.
Identifying \(f(\mathbf{w})=p(\mathcal{D}|\mathbf{w})p(\mathbf{w})\) and \(Z=p(\mathcal{D})\) and applying the above formula yields the typical Laplace approximation to the posterior. To find a maximum \(\mathbf{w}_{0}\), one can simply run a gradient descent algorithm on \(\log f(\mathbf{w})=\log p(\mathcal{D}|\mathbf{w})+\log p(\mathbf{w})\).

#### 3.3.3 Sampling methods

Sampling methods refer to classes of algorithms that draw samples from probability distributions. They are also referred to as Monte Carlo (MC) methods when used to approximate integrals, and they have become fundamental in data analysis. In simple cases, rejection sampling or adaptive rejection sampling can be implemented to return independent samples from a distribution. For more complex, typically multidimensional, distributions, one can resort to _Markov chain Monte Carlo_ (MCMC) methods, which have become ubiquitous in Bayesian inference (Robert and Casella, 2004). This class of methods consists in devising a Markov chain whose equilibrium distribution is the target posterior distribution. Recording the chain samples, after an exploration phase known as the burn-in period, provides a sample approximately distributed according to the posterior. The Metropolis-Hastings (MH) method uses a proposal kernel that depends on the previous sample of the chain, together with an acceptance/rejection rule for the proposed samples. The choice of kernel defines different types of MH. For example, random walk MH uses a Gaussian kernel centered at the previous sample with some heuristic variance. In the multidimensional case, Gibbs sampling is a particular case of MH applicable when the full-conditional distributions are available. Gibbs sampling is appealing in the sense that samples from the full-conditional distributions are never rejected. However, full-conditional distributions are not always available in closed form. Another drawback is that the use of full-conditional distributions often results in highly correlated iterations; many extensions adjust the method to reduce these correlations. The Metropolis-Adjusted Langevin Algorithm (MALA) is another special case of MH that proposes new states according to so-called Langevin dynamics. Langevin dynamics use the gradient of the target distribution, so that the states proposed by MALA are more likely to fall in high-probability-density regions. Hamiltonian Monte Carlo (HMC) is an improvement over the MH algorithm in which the chain's trajectory follows the Hamiltonian dynamic equations. The parameter space is augmented with an auxiliary momentum variable: at each iteration, a new momentum is sampled and the Hamiltonian dynamics are simulated numerically to propose a distant state, which is then accepted or rejected. Two parameters, a step size and a number of steps for the numerical integrator, define how far the dynamics slide from one point of the space to the next in order to generate the next sample. The No-U-Turn Sampler (NUTS) is a modification of the original HMC with a criterion to stop the numerical integration. This makes NUTS a more automatic algorithm than plain HMC because it avoids the need to hand-tune the number of integration steps.
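As an illustration of the simplest of these schemes, here is a minimal random-walk Metropolis-Hastings sketch; the target is a hypothetical unnormalized log-density and `step` is a heuristic proposal scale:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(w):
    """Unnormalized log-density, playing the role of log p(w) + log p(D|w).
    Here: a hypothetical correlated 2d Gaussian, for illustration only."""
    return -0.5 * (w[0]**2 + 4.0 * (w[1] - 0.5 * w[0])**2)

def random_walk_mh(log_target, w0, n_samples=10_000, step=0.8):
    w = np.asarray(w0, dtype=float)
    lp = log_target(w)
    chain = np.empty((n_samples, len(w)))
    for i in range(n_samples):
        proposal = w + step * rng.normal(size=w.shape)  # Gaussian kernel
        lp_prop = log_target(proposal)
        # accept with probability min(1, target(proposal) / target(w))
        if np.log(rng.uniform()) < lp_prop - lp:
            w, lp = proposal, lp_prop
        chain[i] = w
    return chain

chain = random_walk_mh(log_target, w0=[3.0, 3.0])
burned = chain[2000:]  # discard the burn-in samples
print("posterior mean estimate:", burned.mean(axis=0))
```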
The main advantage of sampling methods is that they are asymptotically exact: as the number of iterations increases, the Markov chain distribution converges to the (target) posterior distribution. However, constructing efficient sampling procedures with good guarantees of convergence and satisfactory exploration of the parameter space can be prohibitively expensive, especially in high dimensions. Note that the initial samples from a chain do not come from the stationary distribution and should be discarded. The amount of time it takes to reach stationarity is called the mixing time or burn-in time, and reducing it is a key factor for making a sampling algorithm fast. Evaluating convergence of the chain can be done with numerical diagnostics (see for instance Gelman and Rubin, 1992; Vehtari et al., 2021; Moins et al., 2023).

### Model selection

The Bayesian paradigm provides a principled approach to model selection. Let \(\{\mathcal{M}_{i}\}_{i=1}^{M}\) be a set of \(M\) models. We suppose that the data is generated from one of these models but we are uncertain about which one. The uncertainty is expressed through a prior probability distribution \(p(\mathcal{M}_{i})\) which allows us to express a preference for different models, although a typical assumption is that all models are given equal prior probability \(1/M\). Given a dataset \(\mathcal{D}\), we then wish to evaluate the posterior distribution \[p(\mathcal{M}_{i}|\mathcal{D})\propto p(\mathcal{M}_{i})p(\mathcal{D}| \mathcal{M}_{i}).\] The _model evidence_ \(p(\mathcal{D}|\mathcal{M}_{i})\) describes the probability that the data were generated from each individual model \(\mathcal{M}_{i}\) (Bishop and Nasrabadi, 2006). For a model governed by a set of parameters \(\mathbf{w}\), the model evidence is obtained by integrating out the parameters \(\mathbf{w}\) from the joint distribution \(p(\mathcal{D},\mathbf{w})\), see Equation (5): \[p(\mathcal{D}|\mathcal{M}_{i}) =\int p(\mathcal{D},\mathbf{w}|\mathcal{M}_{i})\mathrm{d}\mathbf{w}\] \[=\int p(\mathcal{D}|\mathbf{w},\mathcal{M}_{i})p(\mathbf{w}|\mathcal{M}_{ i})\mathrm{d}\mathbf{w}.\] The model evidence is also sometimes called the _marginal likelihood_ because it can be viewed as a likelihood function over the space of models, in which the parameters have been marginalized out. From a sampling perspective, the marginal likelihood can be viewed as the probability of generating the dataset \(\mathcal{D}\) from a model whose parameters are sampled from the prior. If the prior probability over models is uniform, Bayesian _model selection_ corresponds to choosing the model with the highest marginal likelihood. The ratio of model evidences \(p(\mathcal{D}|\mathcal{M}_{i})/p(\mathcal{D}|\mathcal{M}_{j})\) for two models is known as a _Bayes factor_ (Kass and Raftery, 1995). The marginal likelihood serves as a criterion for choosing the best model among different hyperparameters. When derivatives of the marginal likelihood are available (such as for Gaussian process regression), we can learn optimal hyperparameters by maximizing the marginal likelihood with an optimization procedure. This procedure, known as _type 2 maximum likelihood_ (Bishop and Nasrabadi, 2006), results in the _most likely model_ that generated the data. It differs from Bayesian inference, which finds the posterior over the parameters for a given model.
In the Gaussian process literature, type 2 maximum likelihood optimization often results in better hyperparameters than cross-validation (Lotfi et al., 2022). For models other than Gaussian processes, one needs to resort to an approximation of the marginal likelihood, typically using the Laplace approximation (Bishop and Nasrabadi, 2006).

## What are Bayesian neural networks?

We have seen now that neural networks are a popular class of models due to their expressivity and generalization abilities, while Bayesian inference is a statistical technique heralded for its adaptivity and robustness. It is therefore natural to pose the question of whether we can combine these ideas to yield the best of both worlds. Bayesian neural networks (BNNs) are an attempt at achieving just this. As outlined in Section 2, we aim to infer the parameters of a neural network \(\mathbf{w}\in\mathbf{\mathcal{W}}\), which might be the weights and biases of a fully-connected network, the convolutional kernels of a CNN, the recurrent weights of an RNN, etc. However, in contrast to just using the SGD procedure from Eq. (3) to get a point estimate for \(\mathbf{w}\), we will try to use the Bayesian strategy from Eq. (4) to yield a posterior distribution \(p(\mathbf{w}|\mathcal{D})\) over parameters. This distribution enables the quantification of uncertainty associated with the model's predictions and can be updated as new data is observed. While this approach seems straightforward on paper, we will see in the following that it leads to many unique challenges in the context of BNNs, especially when compared to more conventional Bayesian models, such as Gaussian processes (Rasmussen and Williams, 2006). Firstly, the weight-space \(\mathbf{\mathcal{W}}\) of the neural network is often high-dimensional, with modern architectures featuring millions or even billions of parameters. Moreover, understanding how these weights map to the functions implemented by the network is not trivial. Both of these properties therefore strongly limit our ability to formulate sensible priors \(p(\mathbf{w})\), as illustrated in Fig. 5. We will discuss these challenges as well as strategies to overcome them in more detail in Section 4.1, focusing primarily on the theoretical understanding and explanation of empirically observed phenomena, such as the Gaussian process limit in function-space and the relationship between prior selection and implicit and explicit regularization in conventional neural networks. Secondly, due to the complicated form of the likelihood function (which is parameterized by the neural network itself), neither of the integrals in Eq. (5) and Eq. (6) is tractable. We thus have to resort to approximations, which are again made more cumbersome by the high dimensionality of \(\mathbf{\mathcal{W}}\). We will discuss different approximation techniques and their specific implementations in the context of BNNs in Section 4.2, contrasting their tradeoffs and offering guidance for practitioners. Whether the aforementioned challenges relating to priors and inference in BNNs are surmountable in practice often depends on the particular learning problem at hand and on the modeling effort and computational resources one is willing to spend. We will critically reflect on this question in the following and also offer some reconciliation with frequentist approaches later in Section 5.

### Priors

Specifying a prior distribution can be delicate for complex and extremely high-dimensional models such as neural networks.
Reasoning in terms of parameters is challenging due to their high dimension, limited interpretability, and the over-parameterization of the model. Moreover, since the true posterior can rarely be recovered, it is difficult to isolate a prior's influence, even empirically (Wenzel et al., 2020). This gives rise to the following question: _do the specifics of the prior even matter?_ This question is all the more important since inference is usually blunted by posterior approximations and enormous datasets. The machine learning interpretation of the _no free lunch_ theorem states that any supervised learning algorithm includes some _implicit prior_ (Wolpert, 1996). From the Bayesian perspective, priors are explicit, and no universal prior can be valid for every task. This line of reasoning motivates choosing the prior distribution carefully, since it can considerably help to improve the performance of the model. On the other hand, assigning priors to complex models is often thought of as imposing soft constraints, like regularization, or via data transformations like data augmentation. The idea behind this type of prior is to help and stabilize computation. These priors are sometimes called _weakly informative_ or _mildly informative_ priors. Moreover, most regularization methods used for point-estimate neural networks can be understood from a Bayesian perspective as setting a prior, see Section 4.1.3. We review recent works on the influence of the prior in _weight-space_, including how it helps to connect classical and Bayesian approaches applied to deep learning models. More discussion on the influence of the prior choice can be found in Nalisnick (2018) and Fortuin (2021). The choice of the prior and its interaction with the approximate posterior family are studied in Hron et al. (2018).

#### 4.1.1 Weight priors (parameter-space)

The Gaussian distribution is a common and default choice of prior in Bayesian neural networks. Looking for the maximum-a-posteriori (MAP) of such a Bayesian model is equivalent to training a standard neural network under a weighted \(\mathscr{L}_{2}\) regularization (see discussion in Section 4.1.3). There is no theoretical evidence that the Gaussian prior is preferable over other prior distribution choices (Murphy, 2012); yet, its well-studied mathematical properties make the Gaussian distribution the default prior. Below, we review works that show how different weight priors influence the resulting model.

**Adversarial robustness and priors.** In BNNs, one can evaluate adversarial robustness with the posterior predictive distribution of the model (Blaas and Roberts, 2021). A Lipschitz constant arising from the model can be used in order to quantify this robustness. The posterior predictive depends on the model structure and the weights' prior distribution. In quantifying how the prior distribution influences the Lipschitz constant, Blaas and Roberts (2021) establish that for BNNs with Gaussian priors, the model's Lipschitz constant is monotonically increasing with respect to the prior variance. This means that a lower prior variance should lead to a lower Lipschitz constant, and thus to higher robustness.

Figure 5: Bayesian neural network architecture, where weights \(\mathbf{w}^{(\ell)}\) at layer \(\ell\) follow some prior distribution \(p^{(\ell)}\).
**Gaussian-process-inducing priors.** A body of works imposes weight priors so that the induced priors over functions have desired properties, e.g., be close to some Gaussian process (GP). For instance, Flam-Shepherd et al. (2017), later extended by Flam-Shepherd et al. (2018), propose to tune priors over weights by minimizing the Kullback-Leibler divergence between BNN functional priors and a desired GP. However, the Kullback-Leibler divergence is difficult to work with due to the need to estimate an entropy term based on samples. To overcome this, Tran et al. (2020) suggest using the Wasserstein distance and provide an extensive study on performance improvements when imposing such priors. Similarly, Matsubara et al. (2021) use the ridgelet transform (Candes, 1998) to approximate the covariance function of a GP.

**Priors based on knowledge about function-space.** Some works suggest how to define priors using information from function-space, since function-space is easier to reason about than weight-space. Nalisnick et al. (2021) propose _predictive complexity priors_ (PREDCPs) that constrain the Bayesian prior by comparing the predictions between the model and some less complex reference model. These priors are constructed hierarchically, with first-level priors over weights (for example, Gaussian) and second-level hyper-priors over the weight priors' parameters (for example, over Gaussian variances). The hyper-priors are defined to encourage functional regularization, e.g., depth selection. In practice, a model sometimes needs to be updated with respect to its architecture, training data, or other aspects of the training setup. Khan and Swaroop (2021) propose _knowledge-adaptation priors_ (K-priors) to reduce the cost of retraining. The objective function of K-priors combines the weight and function-space divergences to reconstruct past gradients. Such priors can be viewed as a generalization of weight-space priors. More on function-space priors can be found in the next section.

#### 4.1.2 Unit priors (function-space)

Arguably, the prior that matters the most from a practitioner's point of view is the prior induced in function-space, not in parameter space or weight-space (Wilson, 2020). The prior seen at the function level can provide insight into what it means in terms of the functions it parametrizes. To some extent, priors on BNNs' parameters are often challenging to specify since it is unclear what they actually mean. As a result, researchers typically lack interpretable semantics on what each unit in the network represents. It is also hard to translate subjective domain knowledge into neural network parameter priors. Such subjective domain knowledge may include feature sparsity or signal-to-noise ratio (see for instance Cui et al., 2021). A way to address this problem is to study the priors in function-space, thus raising the natural question: _how to assign a prior on functions of interest for classification or regression settings?_ The priors over parameters can be chosen carefully by reasoning about the functions that these priors induce. Gaussian processes are perfect examples of how this approach works (Rasmussen and Williams, 2006). There is a body of work on translating priors on functions given by GPs into BNN priors (Flam-Shepherd et al., 2017, 2018; Tran et al., 2020; Matsubara et al., 2021). Recent studies establish a closer connection between infinitely-wide BNNs and GPs, which we review next.
**Infinite-width limit.** The pioneering work of Neal (1996) first connected Bayesian neural networks and Gaussian processes. Applying the central limit theorem, Neal showed that the output distribution of a one-hidden-layer neural network converges to a Gaussian process for appropriately scaled weight variances. Recently, Matthews et al. (2018) and Lee et al. (2018) extended Neal's results to deep neural networks, showing that their units' distribution converges to a Gaussian process when _the width of all the layers_ goes to infinity. These observations have recently been significantly generalized to a variety of architectures, including convolutional neural networks (Novak et al., 2020; Garriga-Alonso et al., 2019), batch norm and weight-tying in recurrent neural networks (Yang, 2019), and ResNets (Hayou, 2022). There is also a correspondence between GPs and models with _attention layers_, i.e., particular layers with an attention mechanism relating different positions of a single sequence to compute a representation of the sequence, see e.g. Vaswani et al. (2017). For multi-head attention architectures, which consist of several attention layers running in parallel, as the number of heads and the number of features tends to infinity, the outputs of an attention model also converge to a GP (Hron et al., 2020). Generally, if an architecture can be expressed solely via matrix multiplication and coordinate-wise nonlinearities (i.e., a tensor program), then it has a GP limit (Yang, 2019). Further research builds upon the limiting Gaussian process property to devise novel architecture rules for neural networks. Specifically, the neural network Gaussian process (NNGP) (Lee et al., 2018) describes the prior on function-space that is realized by an iid prior over the parameters. The function-space prior is a GP with a specific kernel defined recursively with respect to the layers. For the rectified linear unit (ReLU) activation function, the Gaussian process covariance function is obtained analytically (Cho and Saul, 2009). Stable distribution priors for weights also lead to stable processes in the infinite-width limit (Favaro et al., 2020). When the prior over functions behaves like a Gaussian process, the resulting BNN posterior in function-space also weakly converges to a Gaussian process, which was first shown empirically by Neal (1996) and Matthews et al. (2018) and then theoretically justified by Hron et al. (2020). However, given the wide variety of structural assumptions that GP kernels can represent (Rasmussen and Williams, 2006; Lloyd et al., 2014; Sun et al., 2018), BNNs outperform GPs by a significant gap in expressive power (Sun et al., 2019). Adlam et al. (2020) show that the resulting NNGP is better calibrated than its finite-width analogue. The downside is its poorer performance, in part due to the complexity of training GPs on large datasets because of matrix inversions. However, this limiting behavior has triggered a new line of research into better approximation techniques. For example, Yaida (2020) shows that finite-width corrections are beneficial to Bayesian inference. Nevertheless, infinite-width neural networks are valuable tools to obtain theoretical properties of BNNs in general and to study neural networks from a different perspective. This perspective yields learning dynamics via the _neural tangent kernel_ (Jacot et al., 2018), and an _initialization procedure_ via the so-called _Edge of Chaos_ (Poole et al., 2016; Schoenholz et al., 2017; Hayou et al., 2019).
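Before turning to these aspects, note that the GP limit itself is easy to observe empirically; here is a minimal numpy sketch in which the architecture, scaling, and sample sizes are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_outputs(width, n_nets=5000, d_in=3):
    """Sample f(x) for many random one-hidden-layer ReLU networks
    with Neal's 1/sqrt(width) scaling of the output weights."""
    x = np.ones(d_in) / np.sqrt(d_in)            # a fixed unit-norm test input
    W1 = rng.normal(size=(n_nets, width, d_in))  # input weights ~ N(0, 1)
    W2 = rng.normal(size=(n_nets, width))        # output weights ~ N(0, 1)
    h = np.maximum(W1 @ x, 0.0)                  # hidden ReLU activations
    return (W2 * h).sum(axis=1) / np.sqrt(width)

for width in [1, 10, 1000]:
    f = sample_outputs(width)
    # As the width grows, the output distribution approaches a Gaussian
    # (excess kurtosis -> 0), in line with the CLT argument of Neal (1996).
    kurt = ((f - f.mean())**4).mean() / f.var()**2 - 3
    print(f"width={width:>5}: std={f.std():.3f}, excess kurtosis={kurt:+.2f}")
```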
We describe below the aforementioned aspects in detail.

**Neural tangent kernel.** Bayesian inference and the GP limit give insights into how well over-parameterized neural networks can generalize. The idea is then to apply a similar scheme to neural networks after training and study the dynamics of gradient descent at infinite width. For any parameterized function \(f(\mathbf{x},\mathbf{w})\) let: \[K_{\mathbf{w}}(\mathbf{x},\mathbf{x}^{\prime})=\langle\nabla_{\mathbf{w}}f(\mathbf{x},\mathbf{w}), \nabla_{\mathbf{w}}f(\mathbf{x}^{\prime},\mathbf{w})\rangle. \tag{7}\] When \(f(\mathbf{x},\mathbf{w})\) is a feedforward neural network with appropriately scaled parameters, \(K_{\mathbf{w}}\) converges to a fixed kernel \(K_{\infty}\), called the neural tangent kernel (NTK), when the network's widths tend to infinity one by one starting from the first layer (Jacot et al., 2018). Yang (2019) generalizes the convergence of the NTK to the case where the widths of different layers tend to infinity together. If we choose some random weight initialization for a neural network, the initial kernel of this network approaches a deterministic kernel as the width increases. Thus, the NTK is independent of the specific initialization. Moreover, in the infinitely wide regime, the NTK stays constant over time during optimization. This finding therefore enables the study of learning dynamics in infinitely wide feed-forward neural networks. For example, Lee et al. (2019) show that NNs in this regime reduce to linear models with a fixed kernel. While this may seem promising at first, empirical results show that neural networks in this regime perform worse than practical over-parameterized networks (Arora et al., 2019; Lee et al., 2020). Nevertheless, this still provides theoretical insight into some aspects of neural network training.

**Finite width.** While infinite-width neural networks help derive theoretical insights into deep neural networks, neural networks at finite-width regimes, or approximations of infinite-width regimes, are the ones used in real-world applications. It is still not clear when the GP framework is adequate to describe BNN behavior. In some cases, finite-width neural networks outperform their infinite-width counterparts (Lee et al., 2018; Garriga-Alonso et al., 2019; Arora et al., 2019; Lee et al., 2020). Arora et al. (2019) show that convolutional neural networks outperform their corresponding limiting NTK. This performance gap is likely due to the finite-width effect, where a fixed kernel cannot fully describe the CNN dynamics. The evolution of the NTK during training benefits generalization, as shown in further works (Dyer and Gur-Ari, 2020; Huang and Yau, 2020). Thus, obtaining a unit prior description for finite-width neural networks is essential. One of the principal obstacles in pursuing this goal is that hidden units in BNNs at the finite-width regime are dependent (Vladimirova et al., 2021). The induced dependence makes it difficult to analytically obtain distribution expressions for priors in the function-space of neural networks. Here, we review works on possible solutions, such as the introduction of finite-width corrections to infinite-width models and the derivation of distributional characterizations amenable to neural networks.

**Corrections.** One of the ways to describe priors in function-space is to impose corrections to BNNs at infinite width.
In particular, Antognini (2019) shows that ensembles of finite one-hidden-layer NNs with large width can be described by Gaussian distributions perturbed by a fourth Hermite polynomial. The scale of the perturbations is inversely proportional to the neural network's width. Similar corrections are also proposed in Naveh et al. (2020). Additionally, Dyer and Gur-Ari (2020) propose a method using Feynman diagrams to bound the asymptotic behavior of correlation functions in NNs. The authors present the method as a conjecture and provide empirical evidence on feed-forward and convolutional NNs to support their claims. Further, Yaida (2020) develops a perturbative formalism that captures the flow of pre-activation distributions to deeper layers and studies the finite-width effect on Bayesian inference.

**Full description.** Springer and Thompson (1970) show that the probability density function of a product of independent normal variables can be expressed through a Meijer G-function. This yields an accurate description of the unit priors induced by Gaussian priors on weights with linear or ReLU activation functions (Zavatone-Veth and Pehlevan, 2021; Noci et al., 2021). It is the first full description of function-space priors, albeit under these strong assumptions and with fairly convoluted expressions that are hard to work with in practice. This accurate characterization is nevertheless in line with works on heavy-tailed properties of hidden units, which we discuss next.

**Distributional characteristics.** Concerning the distributional characteristics of neural network units, a number of alternative analyses to the Gaussian process limit have been developed in the literature. Bibi et al. (2018) provide the expression of the first two moments of the output units of a one-hidden-layer neural network. Obtaining moments is a preliminary step to characterizing a whole distribution. However, the methodology of Bibi et al. (2018) is also limited to one-hidden-layer neural networks. Later, Vladimirova et al. (2019, 2020) focus on the moments of hidden units and show that moments of any order are finite under mild assumptions on the activation function. More specifically, the _sub-Weibull_ property of the unit distributions is shown, indicating that hidden units become heavier-tailed when going _deeper_ in the network. This result is refined by Vladimirova et al. (2021), who show that hidden units are _Weibull-tail_ distributed. Weibull-tail distributions are characterized in a different manner than sub-Weibull distributions, not based on moments but on a precise description of their tails. These tail descriptions reveal differences between hidden units' distributional properties in finite- and infinite-width BNNs, since they are in contrast with the GP limit obtained when going _wider_.

**Representation learning.** _Representation learning_ (how a model provided with data learns to represent its features) in finite-width neural networks is not yet well understood. However, the infinitely wide case gives rise to studying representation learning from a different perspective. For instance, Zavatone-Veth et al. (2021) compute the leading perturbative finite-width corrections. Aitchison (2020) studies the prior over representations in finite and infinite Bayesian neural networks.
The narrower and deeper the network, the more flexibility it offers: the results are obtained by considering the variability in the top-layer kernel induced by the prior over a finite neural network, a variability which gradually vanishes as the network size increases.

#### 4.1.3 Regularization

Since deep learning models are over-parametrized, it is essential to avoid overfitting to help these systems generalize well. Several explicit regularization strategies are used, including Lasso \(\mathscr{L}_{1}\) and weight-decay \(\mathscr{L}_{2}\) regularization of the parameters. Another way is to inject some stochasticity into the computations, which implicitly prevents certain pathological behaviors and thus helps the network avoid overfitting. The most popular methods in this line of research are dropout (Srivastava et al., 2014) and batch normalization (Ioffe and Szegedy, 2015). It has also been observed that the stochasticity in stochastic gradient descent (which is normally considered a drawback) can itself serve as an implicit regularizer (Zhang et al., 2017). Here we draw connections between popular regularization techniques in neural networks and weight priors in their Bayesian counterparts. Khan and Rue (2021) and Wolinski et al. (2020) have discussed how different regularization methods implicitly correspond to enforcing different priors.

**Priors as regularization.** Given a dataset \(\mathcal{D}=\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i}\), where \((\mathbf{x}_{i},\mathbf{y}_{i})\) are pairs of inputs and outputs, the _maximum-a-posteriori_ (MAP) can be used to obtain a point estimate of the parameters: \[\hat{\mathbf{w}}_{\mathrm{MAP}} =\operatorname*{arg\,max}_{\mathbf{w}}\log p(\mathbf{w}|\mathcal{D}) \tag{8}\] \[=\operatorname*{arg\,max}_{\mathbf{w}}\left[\log p(\mathcal{D}|\mathbf{w })+\log p(\mathbf{w})\right].\] When performing classification with a softmax link function, \(-\log p(\mathcal{D}|\mathbf{w})\) corresponds to the cross-entropy loss. When performing regression with Gaussian noise such that \(p(\mathcal{D}|\mathbf{w})=\prod_{i}p(\mathbf{y}_{i}|\mathbf{w},\mathbf{x}_{i})=\prod_{i} \mathcal{N}\left(\mathbf{y}_{i}|f(\mathbf{x}_{i},\mathbf{w}),\sigma^{2}\right)\), then \(-\log p(\mathcal{D}|\mathbf{w})\) is a mean-squared error loss. In this context, MAP estimation with a Gaussian prior \(p(\mathbf{w})\) is equivalent to optimization of the mean-squared error loss with \(\mathscr{L}_{2}\) regularization, or weight-decay for NNs. Similarly, assigning a Laplace prior to the weights \(\mathbf{w}\) leads to \(\mathscr{L}_{1}\) regularization. In case of a flat prior (uniform and improper) distribution \(p(\mathbf{w})\propto 1\), the optimization (8) boils down to the _maximum likelihood estimator_ (MLE): \[\hat{\mathbf{w}}_{\mathrm{MLE}}=\operatorname*{arg\,max}_{\mathbf{w}}\log p(\mathcal{ D}|\mathbf{w}).\] However, it is important to note that point solutions like \(\hat{\mathbf{w}}_{\mathrm{MAP}}\) or \(\hat{\mathbf{w}}_{\mathrm{MLE}}\) are not Bayesian per se, since they do not use _marginalization_ with respect to the posterior, a distinguishing property of the Bayesian approach (Wilson, 2020).

**Dropout.** In this regularization technique due to Srivastava et al. (2014), each individual unit is removed with some probability \(\rho\) by setting its activation to zero.
This can be recast as multiplying the activations \(h_{ij}^{(\ell)}\) by a mask variable \(m_{ij}^{(\ell)}\), which randomly takes the values 0 or 1: \(h_{ij}^{(\ell)}=m_{ij}^{(\ell)}\phi(g_{ij}^{(\ell)})\). Significant work has focused on the effect of _dropout_ as a weight regularizer (Wager et al., 2013). The inductive bias (see Section 2.3) of dropout was studied by Mianjy et al. (2018): for single hidden-layer linear neural networks, they show that dropout tends to make the norms of the incoming/outgoing weight vectors of all hidden nodes equal. The dropout technique can be reinterpreted as a form of approximate Bayesian variational inference (Kingma et al., 2015; Gal and Ghahramani, 2016). Gal and Ghahramani (2016) build a connection between dropout and the Gaussian process representation, while Kingma et al. (2015) propose a way to interpret Gaussian dropout. They develop a _variational dropout_ where each weight of a model has its individual dropout rate. _Sparse variational dropout_, proposed by Molchanov et al. (2017), extends _variational dropout_ to all possible values of dropout rates and leads to a sparse solution. The approximate posterior is chosen to factorize either over rows or over individual entries of the weight matrices. The prior usually factorizes in the same way. Performing dropout can therefore be used as a Bayesian approximation. However, as noted by Duvenaud et al. (2014), dropout has no regularization effect on infinitely-wide hidden layers. Nalisnick et al. (2019) propose a Bayesian interpretation of regularization via multiplicative noise, with dropout being the particular case of Bernoulli noise. They find that noise applied to hidden units ties the scale parameters in the same way as the automatic relevance determination (ARD) algorithm (Neal, 1996), a well-studied shrinkage prior. See Section 4.2.3 for more details.

### Approximate inference for Bayesian neural networks

Exact inference is intractable for Bayesian deep neural networks (DNNs) because they are highly non-linear functions. Therefore, practitioners resort to approximate inference techniques. Typically, Bayesian approximate inference techniques fall into the following groups: 1) _variational inference_, 2) _Laplace approximation_, and 3) _Monte Carlo sampling_. These approaches for DNNs have strong similarities to the general approaches described in Section 3.3. However, the following problems arise in the deep learning setting:

* Inference is difficult or intractable: deep learning models have a very large number of parameters and the training datasets have many samples;
* The DNNs' loss landscape is multimodal: deep learning models have many local minima with near-equivalent training loss.

To address these issues, researchers propose approaches to inference in DNNs that are more efficient than those strictly following the Bayesian paradigm. Depending on one's point of view, these approaches can be seen either as very rough approximations to the true posterior distribution, or as non-Bayesian approaches that still provide useful uncertainty estimates (see more discussion on this in Section 5). In this section, we give an overview of inference methods in DNNs and describe the tractability and multimodality problems in more detail.

#### 4.2.1 Variational inference

The first _variational approach_ applied to simple neural networks is proposed by Hinton and Van Camp (1993).
They use an analytically tractable Gaussian approximation with a diagonal covariance matrix to the true posterior distribution. Further, Barber and Bishop (1998) show that this approximation can be extended to a general covariance matrix while remaining tractable. However, these methods were not deemed fully satisfactory due to their limited practicality. It took eighteen years after the pioneering work of Hinton and Van Camp (1993) to design more practical variational techniques, with the work of Graves (2011), who suggests searching for variational distributions with efficient numerical integration. This allows variational inference for very complex neural networks but remains computationally extremely heavy. Later, Kingma and Welling (2014) introduce a _reparameterization trick_ for the variational evidence lower bound (ELBO), yielding a lower bound estimator (see Section 3.3.1 for a definition of the ELBO). This estimator can be straightforwardly optimized using standard stochastic gradient methods. Along with the advances in variational methods and scalable inference, Blundell et al. (2015) propose a novel and efficient algorithm named _Bayes by Backprop_ (BBB) to quantify the uncertainty of the neural network weights. It is amenable to backpropagation and returns an approximate posterior distribution, while still allowing for complex prior distributions. This method achieves performance on par with neural networks combined with dropout. However, it requires twice as many training parameters as the original non-Bayesian neural network due to the need for Gaussian variance parameters. At the same time, Hernandez-Lobato and Adams (2015) suggest the _probabilistic backpropagation procedure_ (PBP), which propagates expectations and performs backpropagation in a standard way. In addition, both BBB and PBP assume independence between weights when optimizing the variational evidence lower bound. While they achieve good results on small datasets, this substantially restrictive assumption on the posterior distribution is likely to result in underestimating the overall posterior uncertainty. Variational inference with the _mean-field_ assumption (Blundell et al., 2015; Khan et al., 2018; Kingma et al., 2015; Khan et al., 2017) achieved early success for BNNs due to being computationally cheap and easy to adapt to modern automatic differentiation libraries. However, the mean-field assumption is too restrictive to achieve a reliable posterior approximation. A whole body of research focuses on adapting variational inference to deep learning models under different optimization methods to find flexible solutions (Louizos and Welling, 2016; Sun et al., 2017; Osawa et al., 2019; Zhang et al., 2018; Dusenberry et al., 2020; Mishkin et al., 2018). Typically, more expressive variational posteriors achieve lower test negative log-likelihood and misclassification error, as well as better uncertainty calibration. But variational inference methods are known to suffer from _mode collapse_ (Lakshminarayanan et al., 2017), i.e., they tend to focus on a single mode of the posterior distribution. Thus, the resulting variational posterior distributions still lack expressiveness. Moreover, accurate variational inference for DNNs is difficult for practitioners, as it often requires tedious optimization of hyperparameters (Wen et al., 2018).

#### 4.2.2 Laplace approximation

The Laplace approximation can be seen as an intermediate step between variational inference and sampling approaches (see Section 3.3.2 for details).
It is computationally relatively cheap and useful for theoretical analyses, resulting in an expressive posterior. The main advantage is bypassing the need to optimize the data likelihood of the stochastic predictor. Furthermore, once at a minimum of the loss landscape, Gaussian posteriors can be calculated using simple vector products. This brings significant benefits for DNNs, since optimizing the data likelihood of a stochastic neural network is challenging in practice, as mentioned in the previous section. The works conventionally credited with popularizing BNNs are MacKay (1992) and Neal (1992, 1996). MacKay (1992) was the first to perform an extensive study using the Laplace method, experimentally showing that BNNs have high predictive uncertainty in regions outside of the training data. The approach has recently seen a resurgence of interest due to these appealing properties. For a Gaussian posterior, the primary problem is choosing an appropriate approximation to the Hessian (and, therefore, the Gaussian covariance) that is computationally tractable for modern deep networks. Ritter et al. (2018) propose the Kronecker-factored Approximate Curvature (K-FAC) approximation for the Hessian (Martens and Grosse, 2015). This results in a block-diagonal covariance that can be efficiently estimated using the outer products of the gradients. Daxberger et al. (2021) introduced Laplace Redux, a Python package that automatically computes the Laplace approximation of a given network, for various approximations to the covariance. It has led to a flurry of research on the Laplace approximation that includes works on improving predictions (Immer et al., 2021; Antoran et al., 2022), the use of the marginal likelihood for model selection (Immer et al., 2021; Lotfi et al., 2022), as well as learning architectures that are invariant to transformations of the dataset (Immer et al., 2022). The Laplace method can also be used to efficiently compute a posterior on a subnetwork, resulting in a more expressive posterior of the whole network (Daxberger et al., 2021).

#### 4.2.3 Sampling methods

While the Laplace approximation offers comparable or even better posterior expressiveness and is more stable to optimize than variational inference methods, it still suffers from exploring only a single mode of the loss landscape. Sampling-based approaches offer a potential solution to this problem (see Section 3.3.3). While having a heavy computational burden, they provide (asymptotically) samples from the true posterior and should be able to explore all modes.

**MCMC/HMC.** Neal (1993) proposes the first Markov chain Monte Carlo (MCMC) sampling algorithm for Bayesian neural networks. He presents _Hamiltonian Monte Carlo_ (HMC), a sophisticated gradient-based MCMC algorithm. However, HMC is prohibitively expensive, requiring full gradient estimates as well as long burn-in periods before providing a single sample from the posterior. Only recently, Izmailov et al. (2021) revisit this approach and apply it to modern deep learning architectures. They use a large number of Tensor Processing Units (TPUs) to perform inference, which is not typically practical. Huang et al. (2023) propose a sampling approach based on adaptive importance sampling which exploits some geometric information on the complex (often multimodal) posterior distribution.
**Monte Carlo dropout.** Gal and Ghahramani (2016) establish that neural networks with dropout applied before every weight layer are mathematically equivalent to an approximation to the probabilistic deep Gaussian process (Damianou and Lawrence, 2013). This gives rise to the MC dropout method, a prevalent approach to obtaining uncertainty estimates using dropout without additional cost. More specifically, the idea of Monte Carlo dropout is simple and consists of performing random sampling at test time. Instead of turning off the dropout layers at test time (as is usually done), hidden units are randomly dropped out according to a Bernoulli(\(p\)) distribution. Repeating this operation \(M\) times provides \(M\) versions of the MAP estimate of the network parameters \(\mathbf{w}^{m}\), \(m=1,\ldots,M\) (where some units of the MAP are dropped), yielding an approximate posterior predictive in the form of the equal-weight average: \[p(y|x,\mathcal{D})\approx\frac{1}{M}\sum_{m=1}^{M}p(y|x,\mathbf{w}^{m}). \tag{9}\] However, the obtained approximate posterior exhibits some pathologies which can result in overconfidence (Foong et al., 2019). Also, Monte Carlo dropout captures some uncertainty from out-of-distribution (OOD) inputs but is nonetheless incapable of providing valid posterior uncertainty. Indeed, Monte Carlo dropout changes the Bayesian model under study, which also modifies the properties of the approximate Bayesian inference performed. Specifically, Folgoc et al. (2021) show that the Monte Carlo dropout posterior predictive (9) assigns zero probability to the true model posterior predictive distribution.

**Stochastic gradient Markov chain Monte Carlo (SG-MCMC).** The seminal work of Welling and Teh (2011) combines SGD and Langevin dynamics, providing a highly scalable sampling scheme as an efficient alternative to a full evaluation of the gradient. The tractability of mini-batch gradient evaluations in SGD is a common feature behind many subsequent proposals (Ahn et al., 2012; Chen et al., 2014; Neiswanger et al., 2014; Korattikara Balan et al., 2015; Wang et al., 2015). However, posterior distributions in deep learning often have complex geometries including multimodality, high curvatures, and saddle points. The presence of these features heavily impacts the efficacy of SG-MCMC in properly exploring the posterior. In order to partially alleviate this problem, Ma et al. (2015); Li et al. (2016) use adaptive preconditioners to mitigate the rapidly changing curvature. Borrowing ideas from the optimization literature, preconditioners use local information of the posterior geometry at each step to provide more efficient proposals. To address the multimodality problem, Zhang et al. (2019) propose an SG-MCMC with a cyclical step-size schedule. By alternating large and small step sizes, the sampler explores a large portion of the posterior, moving from one mode to another along with a local exploration of each mode. Combining these two approaches of adaptive preconditioning and cyclical step-size scheduling yields a state-of-the-art sampling algorithm in Bayesian deep learning (Wenzel et al., 2020). Both MCMC and stochastic gradient MCMC-based methods often achieve state-of-the-art results with respect to the test negative log-likelihood and accuracy (Izmailov et al., 2021), albeit with significant additional computation and storage costs compared to variational inference and the Laplace approximation.
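As an illustration of the scale of these methods, here is a minimal sketch of the stochastic gradient Langevin dynamics (SGLD) update of Welling and Teh (2011); the mini-batch loss callable and the Gaussian prior standard deviation are placeholders, and the likelihood term is assumed to already be rescaled by the dataset-to-batch size ratio:

```python
import torch

def sgld_step(params, minibatch_neg_log_lik, lr=1e-6, prior_std=1.0):
    """One SGLD update: w <- w - (lr/2) * grad U(w) + N(0, lr * I),
    where U(w) = -log p(D|w) - log p(w) is the posterior energy."""
    loss = minibatch_neg_log_lik()                 # ~ -log p(D|w), already rescaled
    for p in params:
        loss = loss + (p ** 2).sum() / (2 * prior_std ** 2)   # -log p(w), Gaussian prior
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(-0.5 * lr * g + lr ** 0.5 * torch.randn_like(p))
```

Posterior samples are then obtained by recording parameter snapshots every few hundred steps after a burn-in period; the cyclical schedule of Zhang et al. (2019) simply replaces the constant `lr` with an alternating large/small step size.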
## 5 To be Bayesian or not to be?

This section highlights several areas where Bayesian and frequentist approaches overlap, sometimes in a controversial way. In some cases, this overlap brings mutual benefits to both perspectives, resulting in theoretical and empirical advances. However, some topics do not appear to be resolved and remain open for discussion. In Section 5.1, we first discuss how the Bayesian framework can lead to insights and improvements for standard NNs and vice versa. In Section 5.1.1, we describe the connections between randomized initialization schemes for deterministic neural networks and priors in the Bayesian framework. Section 5.1.2 discusses connections between the optimization methods used for deterministic neural networks (such as SGD and ADAM) and posterior distributions in the Bayesian framework. To make BNNs competitive with their deterministic counterparts, down-weighting the effect of the prior in approximate inference is often necessary, for what is known as _cold_ or _tempered_ posteriors (Wilson, 2020; Wenzel et al., 2020). We discuss this effect and its possible interpretations given in the literature in Section 5.1.3. In Section 5.1.4, we discuss the connection between deep ensembles and approximate inference methods. In Section 5.2, we discuss certificates that can be obtained for the performance on out-of-sample data for Bayesian neural networks and relate these to the frequentist setting. In Section 5.2.1, we detail how frequentist guarantees are often used in posterior contraction, showing that the posterior converges to the true posterior when the sample size grows to infinity. In Section 5.2.2, we describe how PAC-Bayes theorems can be used to certify the performance of Bayesian neural networks on out-of-sample data with high probability. In Section 5.2.3, we discuss the use of the marginal likelihood for model selection. The marginal likelihood has been a subject of debate and various interpretations in recent years, and we detail its connections to frequentist guarantees on out-of-sample performance. Finally, in Section 5.3, we describe the difficulties encountered when benchmarking Bayesian neural networks. In Section 5.3.1, we discuss various popular datasets used to evaluate uncertainty in Bayesian deep learning. In Section 5.3.2, we discuss the different evaluation metrics that are being used for evaluation. Finally, in Section 5.3.3, we describe subtle differences in how neural network outputs can be interpreted. These differences can result in different conclusions across different researchers.

### Frequentist and Bayesian connections

Deep neural networks have typically been treated as deterministic predictors, mainly due to the significant computational costs of training. Significant research has been conducted in deriving good initialization schemes for deep neural network parameters and good optimizers. In this section, we explore the connections between the design choices in this frequentist setting and the Bayesian setting. Furthermore, we make connections between deep ensembles and Bayesian inference and provide some possible explanations as to why deterministic neural networks often outperform Bayesian ones.

```
TL;DR
* Empirical studies have demonstrated that SGD tends to induce heavy-tailed
  distributions on the weights of neural networks. This deviates from the
  prevalent assumption of Gaussian distributions in variational inference.
* By adopting Bayesian principles, frequentist optimizers can be
  reinterpreted, leading to enhanced outcomes in uncertainty estimation.
* To achieve competitive performance, it is often necessary to down-weight
  the influence of the prior distribution. The underlying reasons for this
  requirement are currently a subject of active debate within the research
  community.
* Despite ongoing efforts, Bayesian approaches often struggle to surpass the
  performance of deep ensembles in various tasks.
```
#### 5.1.1 Priors and initialization schemes

This section reviews techniques for choosing initialization distributions over weights and biases in neural networks. This is in essence a frequentist procedure, but it can also be interpreted as prior elicitation from a Bayesian standpoint. Initialization schemes often consider Gaussian distributions on the pre-activations. As such, they are closely related to the Bayesian wide regime limit when the number of hidden units per layer tends to infinity, because this regime results in a Gaussian process distribution over the network's functions (Section 4.1.2). Therefore, approaches to choosing deep neural network initializations should be fruitful in designing better deep neural network priors, and vice versa. In deep learning, initializing neural networks with appropriate weights is crucial to obtaining convergence. If the weights are too small, then the variance of the input signal is bound to decrease as it passes through successive layers of the network. As a result, the input signal may drop under some critical minimal value, leading to inefficient learning. On the other hand, if the weights are too large, then the variance of the input signal tends to grow rapidly with each layer. This leads to a saturation of neurons' activations and to gradients that approach zero. This problem is sometimes referred to as _vanishing gradients_. The opposite of the vanishing problem is the accumulation of large error gradients during backpropagation: the gradient grows exponentially through repeated multiplication, leading to _exploding gradients_. Initialization must therefore mitigate both _vanishing_ and _exploding_ gradients. In addition, the _dying ReLU_ problem is very common when depth increases (Lu et al., 2020). Initialization also must induce _symmetry breaking_, i.e., forcing neurons to learn different functions so that the effectiveness of a neural network is maximized. Usually, this issue is solved by randomization, and randomized asymmetric initialization helps to deal with the dying ReLU problem (Lu et al., 2020). Frankle and Carbin (2019) propose an iterative algorithm for parameter pruning in neural networks while saving the original initialization of the weights after pruning, also known as the _winning ticket_ of the initialization "lottery". Neural networks with such winning tickets can outperform unpruned neural networks; see Malach et al. (2020) for theoretical investigations. These findings illustrate that neural networks' initialization influences their structure, even when this influence is not apparent. This also opens a crucial question in deep learning research: _how to best assign network weights before training starts?_ The standard option for the initialization distribution is independent Gaussian. The Gaussian distribution is easy to specify as it is defined solely in terms of its mean and variance. It is also straightforward to sample from, which is an essential consideration when picking a sampling distribution in practice.
In particular, to initialize a neural network, we independently sample each bias \(b_{i}^{(\ell)}\) and each weight \(w_{ij}^{(\ell)}\) from zero-mean Gaussian distributions: \[b_{i}^{(\ell)}\sim\mathcal{N}\left(0,\sigma_{b}^{2}\right),\quad w_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{\sigma_{w}^{2}}{H_{\ell-1}}\right), \tag{10}\] for all \(i=1,\ldots,H_{\ell}\) and \(j=1,\ldots,H_{\ell-1}\). Here, the normalization of weight variances by \(1/H_{\ell-1}\) is conventional to avoid the variance explosion in wide neural networks. The bias variance \(\sigma_{b}^{2}\) and weight variance \(\sigma_{w}^{2}\) are called _initialization hyperparameters_. Note that these could depend on the layer index \(\ell\). The next question is _how to set the initialization hyperparameters_ so that the output of the neural network is well-behaved.

**Xavier's initialization.** An active line of research studies the propagation of deterministic inputs in neural networks. Some heuristics are based on the information obtained in the forward and backward passes, such as the variance and covariance between the neurons or units corresponding to different inputs. Glorot and Bengio (2010) suggest sampling weights from a uniform distribution chosen to preserve the variance of activations in the forward pass and of gradients in the backward pass, which requires variances of \(1/H_{\ell-1}\) and \(1/H_{\ell}\), respectively. Since both conditions cannot hold simultaneously, the initialization variance is a compromise between the two: \(2/(H_{\ell-1}+H_{\ell})\). The initialization distribution, called _Xavier's_ or _Glorot's_, is the following: \[w_{ij}^{(\ell)}\sim\mathcal{U}\left(-\frac{\sqrt{6}}{\sqrt{H_{\ell-1}+H_{\ell}}},\frac{\sqrt{6}}{\sqrt{H_{\ell-1}+H_{\ell}}}\right),\] with biases \(b_{i}^{(\ell)}\) set to zero. The same reasoning can be applied with a zero-mean normal distribution: \[w_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{1}{H_{\ell-1}}\right),\quad\text{or}\quad w_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{2}{H_{\ell-1}+H_{\ell}}\right).\] This heuristic, based on an analysis of linear neural networks, has been improved by He et al. (2015). First, they show that the variance of the initialization can be indifferently set to \(1/H_{\ell-1}\) or \(1/H_{\ell}\) (up to a constant factor) without damaging either information propagation or back-propagation, thus making any compromise unnecessary. Second, they show that for the ReLU activation function, the variance of the Xavier initialization should be multiplied by \(2\), that is: \[w_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{2}{H_{\ell-1}}\right).\]
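A minimal sketch of these fan-in-scaled Gaussian initializations (the layer sizes below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_init(n_in, n_out, sigma_w=1.0, sigma_b=0.0):
    """Equation (10): zero-mean Gaussians with fan-in-scaled weight variance."""
    W = rng.normal(0.0, sigma_w / np.sqrt(n_in), size=(n_out, n_in))
    b = rng.normal(0.0, sigma_b, size=n_out) if sigma_b > 0 else np.zeros(n_out)
    return W, b

def he_init(n_in, n_out):
    """He et al. (2015): weight variance 2 / n_in, suited to ReLU activations."""
    return gaussian_init(n_in, n_out, sigma_w=np.sqrt(2.0))

# Example: initialize a layer with 784 inputs and 256 hidden units.
W, b = he_init(784, 256)
```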
**Edge of Chaos.** Other works explore the covariance between pre-activations corresponding to two given different inputs. Poole et al. (2016) and Schoenholz et al. (2017) obtain recurrence relations by using Gaussian initializations and under the assumption of Gaussian pre-activations. They conclude that there is a critical line, the so-called _Edge of Chaos_, separating signal propagation into two regions. The first one is an ordered phase in which all inputs end up asymptotically fully correlated, while the second region is a chaotic phase in which all inputs end up asymptotically independent. To propagate the information deeper in a neural network, one should choose initialization hyperparameters \((\sigma_{b}^{2},\sigma_{w}^{2})\) on the separating Edge of Chaos line, which we describe below in more detail.

Let \(\mathbf{x}_{a}\) be a deterministic input vector of a data point \(a\), and \(g_{i,a}^{(\ell)}\) be the \(i\)th pre-activation at layer \(\ell\) given a data point \(a\). Since the weights and biases are randomly initialized according to a centered distribution (some Gaussian), the pre-activations \(g_{i,a}^{(\ell)}\) are also random variables, centered and identically distributed. Let \[q_{aa}^{(\ell)} =\mathbb{E}\left[\left(g_{i,a}^{(\ell)}\right)^{2}\right],\quad q_{ab}^{(\ell)}=\mathbb{E}\left[g_{i,a}^{(\ell)}g_{i,b}^{(\ell)}\right],\] \[\text{and}\quad c_{ab}^{(\ell)} =q_{ab}^{(\ell)}/\sqrt{q_{aa}^{(\ell)}q_{bb}^{(\ell)}},\] be respectively their variance according to input \(a\), and their covariance and correlation according to two inputs \(a\) and \(b\). Assume the Gaussian initialization rules (or priors) of Equation (10) for the weights \(w_{ij}^{(\ell)}\) and biases \(b_{i}^{(\ell)}\) for all \(\ell\), \(i\) and \(j\), independently. Then, under the assumption that pre-activations \(g_{i,a}\) and \(g_{i,b}\) are Gaussian, the variance and covariance defined above satisfy the following two-way recurrence relations: \[q_{aa}^{(\ell)} =\sigma_{w}^{2}\int\phi^{2}\left(u_{1}^{(\ell-1)}\right)\mathcal{D}g_{i,a}+\sigma_{b}^{2},\] \[q_{ab}^{(\ell)} =\sigma_{w}^{2}\int\phi(u_{1}^{(\ell-1)})\phi(u_{2}^{(\ell-1)})\mathcal{D}g_{i,a}\mathcal{D}g_{i,b}+\sigma_{b}^{2}.\] Here, \(\mathcal{D}g_{i,a}\) and \(\mathcal{D}g_{i,b}\) stand for the standard Gaussian measures of the pre-activations \(g_{i,a}\) and \(g_{i,b}\). Also, \((u_{1}^{(\ell-1)},u_{2}^{(\ell-1)})\) correspond to the following change of variables \[u_{1}^{(\ell-1)} =\sqrt{q_{aa}^{(\ell-1)}}g_{i,a},\] \[u_{2}^{(\ell-1)} =\sqrt{q_{bb}^{(\ell-1)}}\left(c_{ab}^{(\ell-1)}g_{i,a}+\sqrt{1-(c_{ab}^{(\ell-1)})^{2}}g_{i,b}\right).\] For any \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\), there exist limiting points \(q^{*}\) and \(c^{*}\) for the variance, \(q^{*}=\lim_{\ell\to\infty}q_{aa}^{(\ell)}\), and for the correlation, \(c^{*}=\lim_{\ell\to\infty}c_{ab}^{(\ell)}\). Two regions can be defined depending on the value of \(c^{*}\): (i) an _ordered_ region if \(c^{*}=1\), as any two inputs \(a\) and \(b\), even far from each other, tend to be fully correlated in the deep limit \(\ell\to\infty\); (ii) a _chaos_ region if \(c^{*}<1\), as any two inputs \(a\) and \(b\), even close to each other, tend to decorrelate as \(\ell\to\infty\). To study whether the point \(c^{*}=1\) is _stable_, we need to check the value of the derivative \(\chi_{1}=\frac{\partial c_{ab}^{(\ell)}}{\partial c_{ab}^{(\ell-1)}}\Big|_{c_{ab}^{(\ell-1)}=1}\). There are three cases: (i) _order_, when \(\chi_{1}<1\), i.e., the point \(c^{*}=1\) is stable; (ii) _transition_, when \(\chi_{1}=1\); (iii) _chaos_, when \(\chi_{1}>1\), i.e., the point \(c^{*}=1\) is unstable. Therefore, there exists a separating line in the hyperparameter \((\sigma_{w}^{2},\sigma_{b}^{2})\) space where \(c^{*}=1\) and \(\chi_{1}=1\), which is referred to as the _Edge of Chaos_. By assigning the hyperparameters on the Edge of Chaos line, the information propagates as deep as possible from inputs to outputs. Note that all of this procedure assumes that pre-activations \(g_{i,a}\) and \(g_{i,b}\) are Gaussian. Wolinski and Arbel (2023) analyze the Edge of Chaos framework without the Gaussian hypothesis.
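The following is a minimal numerical sketch of this analysis, assuming a tanh activation and the Gaussian pre-activation hypothesis: it iterates the variance map to its fixed point \(q^{*}\), then estimates \(\chi_{1}=\sigma_{w}^{2}\,\mathbb{E}[\phi'(\sqrt{q^{*}}z)^{2}]\) with \(z\sim\mathcal{N}(0,1)\) by Monte Carlo; comparing \(\chi_{1}\) to 1 indicates the ordered or chaotic phase.

```python
import numpy as np

def phase(sigma_w2, sigma_b2, n_iter=200, n_mc=200_000, seed=0):
    """Fixed point q* and stability chi_1 for a tanh network at (sigma_w^2, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_mc)           # Monte Carlo nodes for the Gaussian integrals
    q = 1.0                                 # iterate q -> sigma_w^2 E[phi(sqrt(q) z)^2] + sigma_b^2
    for _ in range(n_iter):
        q = sigma_w2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b2
    # chi_1 = sigma_w^2 E[phi'(sqrt(q*) z)^2], with phi'(x) = 1 - tanh(x)^2
    chi1 = sigma_w2 * np.mean((1.0 - np.tanh(np.sqrt(q) * z) ** 2) ** 2)
    return q, chi1

q_star, chi1 = phase(sigma_w2=1.5, sigma_b2=0.1)
print(f"q* = {q_star:.3f}, chi_1 = {chi1:.3f}")   # chi_1 < 1: order; chi_1 > 1: chaos
```

Scanning a grid of \((\sigma_{w}^{2},\sigma_{b}^{2})\) values and locating where \(\chi_{1}=1\) traces out the Edge of Chaos line.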
#### 5.1.2 Posteriors and optimization methods

Neural networks without explicit regularization perform well on out-of-sample data (Zhang et al., 2017a). This could mean that neural network models, and their architecture or optimization procedure in particular, have an inductive bias which leads to implicit regularization during training. A number of works aim at understanding this topic by analyzing the SGD training process. One can relate this research direction to the Bayesian perspective. In particular, in variational inference, Bayesian practitioners are greatly concerned with the family of posterior distributions they optimize. Insights into the distribution of solutions found by common optimizers could inform the design of better parametric families to optimize. Nevertheless, research on the posterior distributions induced by constant-step SGD remains in its infancy. Here we review some recent results and argue that it will be fruitful to see their implications for Bayesian inference. Some works establish that SGD induces implicit regularization. For instance, Soudry et al. (2018) show that SGD leads to \(\mathscr{L}_{2}\) regularization for linear predictors. Further, SGD applied to convolutional neural networks of depth \(L\) with linear activation function induces \(\mathscr{L}_{2/L}\) regularization (Gunasekar et al., 2018). This type of regularization can be explicitly enforced in the Bayesian setting, for example by the use of an isotropic Gaussian prior. Recent research also proposes that SGD induces heavy-tailed distributions in deep neural networks and connects this with compressibility. Mahoney and Martin (2019) empirically assess the correlation matrix of the weights. Using spectral theory, they show that the correlation matrix converges to a matrix with heavy-tailed entries during training, a phenomenon known as heavy-tailed self-regularization. Gurbuzbalaban et al. (2021) also argue that the gradient noise is heavy-tailed. This has important implications for a Bayesian practitioner. In particular, heavy-tailedness of the posterior contrasts with the Gaussian distribution assumption typically made in variational inference and the Laplace approximation. Other parametric distributions have been explored in the literature (Fortuin, 2022). Conversely, different optimizers have been proposed, partly inspired by Bayesian inference (Neelakantan et al., 2016; Foret et al., 2021; Khan and Rue, 2021). Neelakantan et al. (2016) inject noise into gradient updates, partly inspired by the SGLD algorithm from Bayesian inference. They show significant improvements in out-of-sample performance. Foret et al. (2021) relax a PAC-Bayesian objective so as to obtain an optimizer called Sharpness Aware Minimizer (SAM). The SAM optimizer makes gradient steps that have been adversarially perturbed so as to improve generalization by converging to flatter minima. SAM significantly improves performance on diverse datasets and architectures. The connections with Bayesian inference are deep; Mollenhoff and Khan (2022) show that SAM is an optimal relaxation of the ELBO objective from variational inference. Finally, Mandt et al. (2017) show that SGD can be interpreted as performing approximate Bayesian inference. The line between frequentist and Bayesian approaches is blurred and has been fruitful in both directions. A significant line of works, including Khan et al. (2017, 2018); Khan and Rue (2021); Osawa et al. (2019); Mollenhoff and Khan (2022), explores existing optimizers that work well in the frequentist setting, and reinterprets them as approximate Bayesian algorithms, subsequently proposing novel (Bayesian) optimizers.
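As an illustration of such an optimizer, here is a minimal sketch of the two-step SAM update of Foret et al. (2021): the weights are first perturbed along the normalized gradient direction, and the descent step then uses the gradient at the perturbed point. The perturbation radius `rho`, the loss callable, and the learning rate are placeholders.

```python
import torch

def sam_step(params, loss_fn, lr=0.1, rho=0.05):
    """One SAM update: w <- w - lr * grad L(w + rho * g / ||g||)."""
    grads = torch.autograd.grad(loss_fn(), params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        eps = [rho * g / norm for g in grads]
        for p, e in zip(params, eps):
            p.add_(e)                       # ascend to the adversarially perturbed point
    grads_adv = torch.autograd.grad(loss_fn(), params)
    with torch.no_grad():
        for p, e, g in zip(params, eps, grads_adv):
            p.sub_(e)                       # undo the perturbation
            p.sub_(lr * g)                  # descend with the perturbed gradient
```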
Khan et al. (2018) propose a Bayesian reinterpretation of ADAM which has favorable Bayesian inference properties compared to other VI schemes. Mollenhoff and Khan (2022) propose a Bayesian reformulation of SAM which often outperforms the conventional SAM across different metrics. Refer to Khan and Rue (2021) for a detailed treatment of this research direction.

#### 5.1.3 Cold and tempered posteriors

A tempered posterior distribution with temperature parameter \(T>0\) is defined as \(p(\mathbf{w}|D)\propto\exp(-U(\mathbf{w})/T)\), where \(U(\mathbf{w})\) is the posterior energy function \[U(\mathbf{w})\coloneqq-\log p(\mathcal{D}|\mathbf{w})-\log p(\mathbf{w}).\] Here, \(p(\mathbf{w})\) is a proper prior density function, for example, a Gaussian density. It was recently found empirically that posteriors obtained by exponentiating the posterior to some power greater than one (or, equivalently, dividing the energy function \(U(\mathbf{w})\) by some temperature \(T<1\)) perform better than the untempered posterior, an effect termed the _cold posterior effect_ by Wenzel et al. (2020). The effect is significant for Bayesian inference, as Bayesian inference should in principle result in the most likely parameters given the training data, and thus in optimal predictions; the need for cold posteriors suggests that Bayesian inference, as practiced, is sub-optimal, an observation that cannot go unnoticed. In order to explain the effect, Wenzel et al. (2020) suggest that Gaussian priors might not be appropriate for Bayesian neural networks, while Adlam et al. (2020) suggest that model misspecification might be the root cause. In some works, data augmentation is argued to be the main reason for this cold posterior effect (Izmailov et al., 2021; Nabarro et al., 2021; Bachmann et al., 2022): indeed, artificially increasing the number of observed data naturally leads to higher posterior contraction (Izmailov et al., 2021). At the same time, taking data augmentation into account does not entirely remove the cold posterior effect for some models. In addition, Aitchison (2021) demonstrates that the problem might originate in a misspecified likelihood that does not account for the fact that common benchmark datasets are highly curated, and thus have low aleatoric uncertainty. Nabarro et al. (2021) hypothesize that using an appropriate prior incorporating knowledge of the data augmentation might provide a solution. Finally, heavy-tailed priors such as Laplace and Student-t are shown to mitigate the cold posterior effect (Fortuin et al., 2021). Kapoor et al. (2022) argue that for Bayesian classification we typically use a categorical distribution in the likelihood with no mechanism to represent our beliefs about aleatoric uncertainty. This leads to likelihood misspecification. With detailed experiments, Kapoor et al. (2022) show that correctly modeling aleatoric uncertainty in the likelihood partly (but not completely) alleviates the cold posterior effect. Pitas and Arbel (2022) discuss how the commonly used evidence lower bound objective (a sub-case in the cold posterior effect literature) results in a bound on the KL divergence between the true and the approximate posterior, but not a direct bound on the test misclassification rate. They discuss how some of the tightest PAC-Bayesian generalization bounds (which directly bound the test misclassification rate) naturally incorporate a temperature parameter that trades off the effect of the prior compared to the training data.
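A minimal sketch of the tempered posterior energy used throughout this discussion, with the likelihood callable and the Gaussian prior standard deviation as placeholders; \(T<1\) gives a cold posterior:

```python
import torch

def tempered_energy(neg_log_lik, params, prior_std=1.0, T=1.0):
    """U(w)/T with U(w) = -log p(D|w) - log p(w) (Gaussian prior, up to a constant)."""
    U = neg_log_lik()                                  # -log p(D|w)
    for p in params:
        U = U + (p ** 2).sum() / (2 * prior_std ** 2)  # -log p(w)
    return U / T                                       # T < 1 sharpens ("cold"), T > 1 flattens
```

Any of the samplers or variational objectives discussed above can then target \(\exp(-U(\mathbf{w})/T)\) instead of the untempered posterior.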
Despite the aforementioned research, the cold and tempered posterior effect has still not been completely explained, posing interesting and fruitful questions for the Bayesian deep learning community.

#### 5.1.4 Deep ensembles

Lakshminarayanan et al. (2017) suggest using an _ensemble of networks_ for uncertainty estimation, which does not suffer from mode collapse but is still computationally expensive. Neural network ensembles are multiple MAP estimates of the deep neural network weights. The predictions of these MAP estimates are then averaged to make an ensemble prediction. Subsequent methods such as _snapshot ensembling_ (Huang et al., 2017), _fast geometric ensembling_ (FGE: Garipov et al., 2018), _stochastic weight averaging_ (SWA: Izmailov et al., 2019), and _SWA-Gaussian_ (SWAG: Maddox et al., 2019) greatly reduce the computation cost, but at the price of a lower predictive performance (Ashukha et al., 2020). While Lakshminarayanan et al. (2017) frame ensemble approaches as an essentially non-Bayesian technique, they can also be cast as a Bayesian model averaging technique (Wilson and Izmailov, 2020; Pearce et al., 2020), and can even asymptotically converge to true posterior samples when adding repulsion (D'Angelo and Fortuin, 2021). Specifically, they can be seen as performing a very rough Monte Carlo estimate of the posterior distribution over weights. Ensembles are cheap and, more importantly, typically outperform carefully crafted Bayesian approaches (Ashukha et al., 2020). This has been empirically explained as resulting from the increased functional diversity of different modes of the loss landscape (Fort et al., 2019). These are sampled by definition using deep ensembles, and this sampling is hard to beat using Bayesian inference.

### Performance certificates

```
TL;DR
Bayesian inference is renowned for its ability to provide guarantees on
accurate inference of the true posterior distribution given a sufficient
amount of data. However, such guarantees pertain to the accurate estimation
of the posterior distribution itself, rather than ensuring performance on
out-of-sample data. To address the latter, it becomes necessary to rely on
generalization bounds, such as the PAC-Bayes framework. Within this
framework, model comparison utilizing the marginal likelihood offers
guarantees on the performance of the selected model on out-of-sample data,
provided that the inference process has been conducted accurately.
```

#### 5.2.1 Frequentist validation of the posterior

Recent works address generalization and approximation errors for the estimation of smooth functions in a nonparametric regression framework using sparse deep NNs and study their posterior mass concentration depending on data sample size. Schmidt-Hieber (2020) shows that sparsely connected deep neural networks with ReLU activation converge at near-minimax rates when estimating Hölder-smooth functions, avoiding the curse of dimensionality. Based on this work, Polson and Rockova (2018) introduce a Spike-and-Slab prior for deep ReLU networks which induces a specific regularization scheme in the model training. The obtained posterior in such neural networks concentrates around smooth functions with near-minimax rates of convergence. Further, Kohler and Langer (2021) extend the consistency guarantees for Hölder-smooth functions of Schmidt-Hieber (2020) and Polson and Rockova (2018) to fully connected neural networks without the sparsity assumption.
Alternatively, Suzuki (2018) provides generalization error bounds for more general functions in Besov spaces and variants with mixed smoothness. One of the ways to visualize the obtained uncertainty is using credible sets around some parameter estimator, where the credible region contains a large fraction of the posterior mass (Szabo et al., 2015). Hadji and Szabo (2021) study the uncertainty resulting from using Gaussian process priors. Franssen and Szabo (2022) provide Bayesian credible sets with frequentist coverage guarantees for standard neural networks trained with gradient descent. Only the last layer is assigned a prior distribution on the parameters, and the output obtained from the previous layer is used to compute the posterior.

#### 5.2.2 Posterior concentration and generalization to out-of-sample data

It is interesting to take a step back and evaluate the difference in _goals_ between the frequentist and Bayesian approaches to machine learning. The Bayesian approach emphasizes that the posterior concentrates around the true parameter as we increase the training set size, see the previous section. The primary goal of the frequentist approach is the performance on out-of-sample data, i.e., generalization, see Section 2.4. This performance is quantified with validation and test sets. These two goals frequently align, although posterior concentration guarantees and performance on out-of-sample data are typically not mathematically equivalent problems. When the number of parameters is smaller than the number of samples \(n\), typically in parametric models, the posterior concentrates on the true set of parameters as \(n\) tends to infinity. In such cases, the posterior tends to a Dirac delta mass centered on the true parameters. In this setting, we can then argue that we are making predictions using the true predictive distribution, and frequentist and Bayesian goals align. We have inferred the true predictor (according to Bayesian goals) and can be sure that we cannot improve the predictor loss on new out-of-sample data, such as validation and test sets (according to the priorities of the frequentist approach). However, neural networks do not operate in this regime. They are heavily overparametrized, so that Bayesian model averaging always occurs empirically. Usually, we are not interested in the proposed model itself but in its predictions based on new data. Also, due to misspecification, we cannot even assume that we are concentrating around the true predictor. At this point, the frequentist and Bayesian goals diverge. But it is clear that in a non-asymptotic setting where performance on out-of-sample data is crucial, we need a more detailed description of the predictor's loss on new data. One way to approach this problem is through generalization bounds (Vapnik, 1999), which directly link the empirical loss on the training set with the loss on new data. Of particular interest are PAC-Bayes generalization bounds (McAllester, 1999; Germain et al., 2016; Dziugaite and Roy, 2017; Dziugaite et al., 2021), which directly bound the true risk of a stochastic predictor. Minimizing the ELBO objective in variational inference corresponds to minimizing a PAC-Bayes bound (Dziugaite and Roy, 2017), and thus a bound on the true risk. If alternatively one samples _exactly_ from the Gibbs posterior (for example using MCMC), then one is still minimizing a PAC-Bayes bound on the true risk (Germain et al., 2016).
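To make these objects concrete, the following is a minimal sketch of evaluating a McAllester-style PAC-Bayes bound for diagonal Gaussian posterior and prior over the weights; the empirical risk and distribution parameters are placeholders, and the exact constants vary across bound variants in the literature:

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(Q || P) between diagonal Gaussians over the weights."""
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def mcallester_bound(emp_risk, kl, n, delta=0.05):
    """With prob. >= 1 - delta: true risk <= emp_risk + sqrt((KL + log(2 sqrt(n)/delta)) / (2n))."""
    return emp_risk + np.sqrt((kl + np.log(2.0 * np.sqrt(n) / delta)) / (2.0 * n))

# Example with placeholder values: 0-1 empirical risk of the stochastic predictor.
kl = kl_diag_gaussians(mu_q=np.zeros(100), var_q=np.full(100, 0.01),
                       mu_p=np.zeros(100), var_p=np.ones(100))
print(mcallester_bound(emp_risk=0.05, kl=kl, n=60_000))
```

The KL term is exactly the complexity term that appears in the ELBO, which is why minimizing the ELBO can be read as minimizing one such bound.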
Furthermore, in the setting of exact Gibbs-posterior sampling, maximizing the _marginal likelihood_ of the model is equivalent to minimizing a PAC-Bayes bound (Germain et al., 2016), and it has been shown that PAC-Bayes bounds can be used to meta-learn better priors for BNNs (Rothfuss et al., 2021, 2022). Of particular interest in this discussion is that performing Bayesian inference is equivalent to minimizing _some_ PAC-Bayes bound and not necessarily _the tightest_ bound. PAC-Bayes bounds typically include a temperature parameter that trades off the empirical risk with the KL complexity term and plays a crucial role in the bound tightness (see Section 5.1.3). An interesting open question is whether this temperature parameter provides a justification for the _cold posterior effect_, with a number of works providing evidence to support this view (Grunwald, 2012; Pitas and Arbel, 2022).

#### 5.2.3 Marginal likelihood and generalization

The marginal likelihood (MacKay, 2003) has been explored for model selection, architecture search, and hyperparameter learning for deep neural networks. While estimating the marginal likelihood and computing its gradients is relatively straightforward for simple models such as Gaussian processes (Bishop and Nasrabadi, 2006), deep neural networks often require resorting to approximations. One approach is the Laplace approximation, as previously discussed in Section 4.2.2. Daxberger et al. (2021); Immer et al. (2021, 2022) use the Laplace approximation to the marginal likelihood to select the best-performing model on out-of-sample data. They also use the marginal likelihood to learn hyperparameters, in particular the prior variance and the softmax temperature parameter. For the case of the Laplace approximation, the marginal likelihood of training data \(\mathcal{D}\) given the deep neural network architecture \(\mathcal{M}\) can be written as \[\log p(\mathcal{D}|\mathcal{M})=\log p(\mathcal{D}|\hat{\mathbf{w}}_{\text{MAP}},\mathcal{M})+\log p(\hat{\mathbf{w}}_{\text{MAP}}|\mathcal{M})+\frac{d}{2}\log 2\pi-\frac{1}{2}\log\left|\mathbf{\Lambda}_{\hat{\mathbf{w}}_{\text{MAP}}}\right|, \tag{11}\] where \(d\) is the number of weights of the neural network, \(\hat{\mathbf{w}}_{\text{MAP}}\) is a MAP estimate of the network parameters, and \(\mathbf{\Lambda}_{\hat{\mathbf{w}}_{\text{MAP}}}\) is the precision matrix of the Gaussian posterior distribution under the Laplace approximation. Similarly to the discussion in Section 4.2.2, the primary computational problem is forming the precision matrix and estimating its determinant. Again, the generalized Gauss-Newton approximation and the empirical Fisher approximation to the Hessian (and correspondingly to the precision matrix) are the most common and efficient approximations, and are the ones used in Daxberger et al. (2021); Immer et al. (2021).
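A minimal sketch of Equation (11) with a diagonal approximation to the precision matrix, assuming the MAP log-likelihood, log-prior, and per-weight curvature values have already been computed (all names below are placeholders):

```python
import numpy as np

def laplace_log_marginal(log_lik_map, log_prior_map, fisher_diag, prior_precision):
    """Equation (11) with diagonal precision: Lambda = likelihood curvature + prior precision."""
    precision_diag = fisher_diag + prior_precision       # diagonal of Lambda at the MAP
    d = precision_diag.size
    log_det = np.sum(np.log(precision_diag))             # log |Lambda| for a diagonal matrix
    return log_lik_map + log_prior_map + 0.5 * d * np.log(2 * np.pi) - 0.5 * log_det

# Example with placeholder values for a network with 10^4 weights:
fisher = np.abs(np.random.default_rng(0).normal(size=10_000)) * 50.0
print(laplace_log_marginal(log_lik_map=-1500.0, log_prior_map=-300.0,
                           fisher_diag=fisher, prior_precision=1.0))
```

With this form, the sensitivity to the prior variance discussed next is immediate: the prior precision enters both the log-prior term and the log-determinant.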
On a conceptual level, a main criticism of the Laplace approximation for the marginal likelihood of deep neural networks is that it is unimodal, while the loss landscape of deep neural networks has multiple minima (Lotfi et al., 2022). This might severely underestimate the volume of good solutions relative to bad solutions given the prior, which is essentially what the marginal likelihood estimates. A further criticism is that this approximation to the marginal likelihood is sensitive to the prior variance. Indeed, for a fixed prior variance across different neural network architectures, Lotfi et al. (2022) show that the marginal likelihood performs poorly for model selection. However, optimizing a common prior covariance across layers, or optimizing different prior variances for different layers, results in a better empirical correlation of the marginal likelihood with out-of-sample performance. Overall, the marginal likelihood provides reasonable predictive power for out-of-sample performance for deep neural networks, and as such constitutes a reasonable approach to model selection. A different approach is to resort to the product decomposition of the marginal likelihood as \[\log p(\mathcal{D}|\mathcal{M}) =\log\prod_{i=1}^{n}p(\mathcal{D}_{i}|\mathcal{D}_{<i},\mathcal{M}) \tag{12}\] \[=\sum_{i=1}^{n}\log[\mathbf{E}_{p(\theta|\mathcal{D}_{<i})}p(\mathcal{D}_{i}|\theta,\mathcal{M})],\] which measures how good the model is at predicting each data point \(\mathcal{D}_{i}\) in sequence, given every data point before it, \(\mathcal{D}_{<i}\). Based on this observation, Lyle et al. (2020); Ru et al. (2021) propose the sum of losses of the different batches across an epoch as an approximation to the marginal likelihood. Then, they use this as a measure of the ability of a model to generalize to out-of-sample data. They also propose different heuristics, such as taking the average of the sum of the losses over multiple epochs. A further heuristic is keeping only the last epochs of training while rejecting the sum of the losses of the first epochs. Finally, the authors propose to train the neural network for a limited number of epochs, for example only half of the number of epochs that would typically be used to train to convergence. As such, the approach is computationally efficient, requiring only partial convergence of the deep neural network and a calculation of the training losses over batches, which are efficient to estimate. Ru et al. (2021) compare their approach to other common approaches on the task of architecture search. These approaches are a mixture of heuristics and frequentist statistics. The first is the sum of validation losses up to a given epoch. The second is the validation accuracy at an early epoch, which corresponds to the early-stopping practice whereby the user estimates the final test performance of a network using its validation accuracy at an early epoch. The third is the learning curve extrapolation method, which was proposed in Baker et al. (2017) and which trains a regression model on previously evaluated architecture data to predict the final test accuracy of new architectures. The inputs for the regression model comprise architecture meta-features and learning curve features up to a given epoch. They also compare to zero-cost baselines: an estimator based on input Jacobian covariance (JavCov, Mellor et al., 2021) and two adapted from pruning techniques (SNIP and SynFlow, Abdelfattah et al., 2021). The authors demonstrate significantly better rank correlation in neural architecture search (NAS) for the marginal likelihood approach compared to the baselines. These results have been further validated in Lotfi et al. (2022). Ru et al. (2021) have however been criticized for using the term "training speed" (as in the number of steps needed to reach a certain training error) to describe their approach. In short, they claim that Equation (12) corresponds to some measure of training speed, and thus that _training faster corresponds to better generalization_. This, however, is not generally true, as pointed out in Lotfi et al. (2022).
The marginal likelihood can be _larger_ for a model that converges _in more steps_ (than another model) if the marginal likelihood at step \(i=1\) in decomposition (12) is higher. There is a debate as to whether the marginal likelihood is appropriate for model selection at all. Lotfi et al. (2022) make a distinction between the question "what is the probability that a prior model generated the training data?" and the question "how likely is the posterior, conditioned on the training data, to have generated withheld points drawn from the same distribution?". They claim that the marginal likelihood answers the first question and not the second. However, a high marginal likelihood also provides frequentist guarantees on _out-of-sample_ performance through PAC-Bayesian theorems (Germain et al., 2016). If one selects a model based on the marginal likelihood and also performs Bayesian inference correctly, then the resulting model and its posterior over parameters are guaranteed to result in good performance on out-of-sample data. Overall, the debate is far from concluded, and in light of the good empirical performance of the marginal likelihood, more research is warranted in its direction.

### Benchmarking

BNNs present unique challenges in terms of their evaluation and benchmarking. Two main challenges are the choice of the evaluation _datasets_ and _metrics_, on which the community has not reached a consensus. This lack of consensus reflects a difficulty with clearly defining the goals of Bayesian deep learning in a field traditionally viewed through a _frequentist_ lens, and more specifically through performance on out-of-sample data.

#### 5.3.1 Datasets

#### 5.3.2 Evaluation metrics-tasks

For most popular machine learning tasks, the community has reached a consensus on the appropriate evaluation metric of choice, such as the mean-squared error (MSE) for regression and the zero-one loss for classification. In the case of Bayesian deep learning, there is not yet a clear choice.
Should the Bayesian approach improve on frequentist metrics such as the misclassification rate on held-out data? Should it provide solutions to known issues of traditional approaches, such as improved robustness to adversarial and non-adversarial noise? Or should Bayesian approaches be evaluated on different metrics altogether, or on metrics that capture _uncertainty_?

**Standard losses.** Practitioners propose several metrics (and corresponding tasks) for the evaluation of Bayesian deep learning approaches. By far the most popular choice is to evaluate frequentist metrics on held-out data, namely the MSE for regression and the zero-one loss for classification (Khan et al., 2018; Khan and Swaroop, 2021; Gal and Ghahramani, 2016; Izmailov et al., 2021; Wenzel et al., 2020). The intuition behind this choice is that the posterior predictive distribution should improve upon deterministic predictions as multiple predictions from the posterior are averaged. For example, in the case of classification, the posterior predictive is meant to better approximate the _probability_ that a given class is correct. One problem with this approach is that Bayesian approaches have typically provided inconsistent gains for this task-metric combination: sometimes they improve upon a deterministic neural network and sometimes they provide worse results. See for example Figure 5 in Izmailov et al. (2021), where the MSE is evaluated on UCI regression tasks. Similarly, Figure 4.a in Daxberger et al. (2021) shows that the Laplace approximation to a DNN posterior does not improve upon the MAP solution. Wenzel et al. (2020) point out that one can improve upon deterministic neural networks by using heuristics such as cold posteriors, which however deviate from the Bayesian paradigm. One common switch away from the MSE and zero-one loss consists in evaluating the (negative) log-likelihood of the test data. Here, Bayesian approaches often outperform frequentist ones, but exceptions remain (Wenzel et al., 2020).

**Calibration.** By far, the metric on which Bayesian neural networks most consistently outperform deterministic ones is _calibration_ for a classification task: if a classifier has \(x\%\) confidence when classifying samples from a sample set, it should also be correct \(x\%\) of the time. The two most popular metrics for evaluating calibration are the _expected calibration error_ (ECE: DeGroot and Fienberg, 1983) and the _thresholded adaptive calibration error_ (TACE: Nixon et al., 2019). For this type of task-metric combination, Bayesian and Bayesian-like approaches such as ensembles (see Section 5.1.4) consistently outperform deterministic neural networks (Izmailov et al., 2021; Daxberger et al., 2021; Maddox et al., 2019). Ashukha et al. (2020) provide a detailed discussion on evaluation metrics for uncertainty estimation as well as common pitfalls. They argue that for a given metric one should always compare a Bayesian method to an ensemble: ensembles provide good gains in different uncertainty metrics for each new ensemble member, whereas Bayesian methods often do not achieve the same gains for each new sample from the posterior. Other methods for evaluating calibration include reliability diagrams (Vaicenavicius et al., 2019) and calibration curves (Maddox et al., 2019). A strength of these metrics is that they are generally clear, direct, and intuitive. One weakness is that, like other visual methods, they are subject to misinterpretation. For example, calibration curves provide a simple and intuitive way to determine which classifier is better calibrated than others when the difference between the classifiers is large. However, when the difference is small, or the classifier is miscalibrated only for certain confidence levels, deriving reliable conclusions becomes more tedious.
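For concreteness, here is a minimal sketch of the ECE computation with equal-width confidence bins; the predicted probabilities and labels are placeholder arrays:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin-weighted gap between accuracy and confidence.

    probs: (n, n_classes) predicted class probabilities; labels: (n,) true classes.
    """
    conf = probs.max(axis=1)                  # confidence of the predicted class
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```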
One caveat is that a classifier that is guessing completely at random and assigns the marginal class frequencies as predictive probabilities to each data point would trivially achieve a perfect ECE of 0 (Gruber and Buettner, 2022). Moreover, it has been argued that while many of these metrics measure marginal uncertainties on single data points, joint uncertainties across many points might be more relevant in practice, e.g., for sequential decision-making (Osband et al., 2022).

**Robustness.** Many works have explored robustness to adversarial noise (Louizos and Welling, 2017; Rawat et al., 2017; Liu et al., 2019; Grosse et al., 2019; Bekasov and Murray, 2018) and to non-adversarial noise (Gal and Ghahramani, 2016; Daxberger et al., 2021; Dusenberry et al., 2020; Izmailov et al., 2021), including Gaussian noise and image rotations, among others. Band et al. (2021) analyze a form of distribution shift whereby classifiers are trained on a set of images for which diabetic retinopathy exists at moderate levels; the classifiers are then evaluated on a test set where diabetic retinopathy is more severe. The intuition is that Bayesian approaches should correctly classify these corrupted samples while assigning low confidence to their predictions. The results for these tasks and metrics are mixed. In the adversarial setting, BNNs are typically far from the state-of-the-art defenses against adversarial attacks. In the non-adversarial setting, some works show _improved_ robustness (Daxberger et al., 2021), while others show _reduced_ robustness (Izmailov et al., 2021).

#### 5.3.3 Output interpretation

We conclude by analyzing the output of BNNs, with the question of its probabilistic interpretation and its relation to evaluation metrics. We restrict the discussion to classification models, though the discussion for other tasks is similar. Both frequentist and Bayesian practitioners recognize that the outputs of a deep neural network classifier often do not accurately reflect the probability of choosing the correct class; that is, the NNs are not well calibrated. However, the frequentist and Bayesian communities propose different solutions. The frequentist solution is to transform the outputs of the classifier through a post-processing step to obtain well-calibrated outputs. Common approaches include _histogram binning_ (Zadrozny and Elkan, 2001), _isotonic regression_ (Zadrozny and Elkan, 2002), _Bayesian binning into quantiles_ (Naeini et al., 2015), as well as _Platt scaling_ (Platt, 1999). In a Bayesian setting, the predictive distribution has a clear interpretation: it is the confidence of the model in each class for a given input signal. Confusion can arise from the fact that scaling is sometimes considered part of an evaluation metric. For example, Guo et al. (2017) consider _Platt scaling_ as a post-processing step (which therefore defines a new model), while Ashukha et al. (2020) propose that it be incorporated into a new evaluation metric. The choice between the two matters because the impact of recalibration methods on the calibration of a model can be significant.
Thus, if one considers recalibration as defining a new model, as in Ashukha et al. (2020), then a K-FAC Laplace BNN significantly outperforms its corresponding frequentist model in calibration. If recalibration is part of the evaluation metric, then the gains become marginal.

## Conclusion

The present review encompasses various topics, such as the selection of the prior (Section 3.2), computational methods (Section 3.3), and model selection (Section 3.4), which pertain to Bayesian problems in a general sense as well as to Bayesian neural networks specifically. This comprehensive perspective enables the contextualization of the diverse inquiries that emerge within the Bayesian deep learning community. Despite the growing interest and advancements in inference techniques for Bayesian deep learning models, the considerable computational burden associated with Bayesian deep learning approaches remains a primary hindrance. Consequently, the community dedicated to Bayesian deep learning remains relatively small, and the adoption of these approaches in industry remains limited. The establishment of a consensus regarding evaluation metrics and benchmarking datasets for Bayesian deep learning has yet to be attained. This lack of consensus stems from the challenge of precisely defining the objectives of Bayesian deep learning within a domain traditionally perceived through a _frequentist_ framework, particularly emphasizing performance on out-of-sample data. This review provides readers with a thorough exposition of the challenges intrinsic to Bayesian deep learning, while also shedding light on avenues that warrant additional exploration and enhancement. With this cohesive resource, our objective is to empower statisticians and machine learners alike, facilitating a deeper understanding of Bayesian neural networks (BNNs) and promoting their wider practical implementation.
2309.06661
Sound field decomposition based on two-stage neural networks
A method for sound field decomposition based on neural networks is proposed. The method comprises two stages: a sound field separation stage and a single-source localization stage. In the first stage, the sound pressure at microphones synthesized by multiple sources is separated into one excited by each sound source. In the second stage, the source location is obtained as a regression from the sound pressure at microphones consisting of a single sound source. The estimated location is not affected by discretization because the second stage is designed as a regression rather than a classification. Datasets are generated by simulation using Green's function, and the neural network is trained for each frequency. Numerical experiments reveal that, compared with conventional methods, the proposed method can achieve higher source-localization accuracy and higher sound-field-reconstruction accuracy.
Ryo Matsuda, Makoto Otani
2023-09-13T01:32:46Z
http://arxiv.org/abs/2309.06661v1
# Sound field decomposition based on two-stage neural networks

###### Abstract

A method for sound field decomposition based on neural networks is proposed. The method comprises two stages: a sound field separation stage and a single-source localization stage. In the first stage, the sound pressure at microphones synthesized by multiple sources is separated into one excited by each sound source. In the second stage, the source location is obtained as a regression from the sound pressure at microphones consisting of a single sound source. The estimated location is not affected by discretization because the second stage is designed as a regression rather than a classification. Datasets are generated by simulation using Green's function, and the neural network is trained for each frequency. Numerical experiments reveal that, compared with conventional methods, the proposed method can achieve higher source-localization accuracy and higher sound-field-reconstruction accuracy.

## I Introduction

Sound field recording (i.e., recording the spatio-temporal distribution of sound pressures) is useful for better understanding the sound field through visualization and auralization of wave phenomena over a wide area. Sound field recording is an inverse problem that estimates the sound pressure at an arbitrary location in a region of interest from the sound pressure at discrete observation locations in space, such as at a single microphone array [1; 2; 3; 4], distributed microphones [5], or distributed microphone arrays [6; 7]. In a three-dimensional sound field, an arbitrary sound field can be represented by a linear combination of bases such as spherical harmonics and plane waves; therefore, we consider estimating the coefficients for these bases by regression. Once those coefficients are obtained, the sound field can be reproduced for the listener using a loudspeaker array [8; 9; 10; 11] or a set of headphones [12; 13]. When a sound field in a region is recorded, the representation of the sound field differs depending on whether the target region includes a sound source [14; 15]. In a region without a sound source (i.e., a region subject to the homogeneous Helmholtz equation), the sound field is represented through a straightforward spherical harmonic expansion or plane-wave expansion. However, in a region that includes a sound source, the sound field follows the inhomogeneous Helmholtz equation, which leads to an ill-posed problem, and cannot be directly expanded using those bases. Therefore, a method has been proposed to decompose the sound field into a superposition of a small number of point sources by imposing sparsity on the distribution of sound sources as a constraint on the acoustical environment [16; 17]. The sparsity constraint improves the estimation accuracy even at frequencies greater than the spatial Nyquist frequency. However, these methods require discretization of candidate positions for the sound source location onto a grid in advance and thus cannot accurately estimate the sound source locations when sound sources do not exist at the pre-assumed grid points. In addition, although reducing the grid interval improves the estimation accuracy, it also leads to an increase in computational complexity and memory because of the larger number of grid points.
In contrast to the aforementioned methods that discretize a priori assumed sound source positions, sound field decomposition methods based on the reciprocity gap functional (RGF) [18] and the RGF in the spherical harmonic domain [19] have been proposed as methods for gridless sound field decomposition. Because these methods can directly estimate sound source positions in closed form, they are not affected by grid discretization. However, because of the effect of spatial aliasing, the frequency band with high reproduction accuracy is limited by the number of microphones and their arrangement. Many neural-network-based methods have been proposed in the fields of sound source localization and direction-of-arrival estimation in recent years [20]. Neural-network-based methods estimate the sound source positions either by classification or regression. Classification requires the prior discretization of candidate sound source positions and has the same off-grid problem encountered in the case of sparse sound field decomposition. By contrast, regression does not have the off-grid problem because the source positions can be obtained as the output of the network. In addition, the regression model has also shown better performance than classification in single-source situations [21; 22]. However, the performance of source localization based solely on single-frequency sound field information is unclear because most regression models have been considered in the time domain [23] or time-frequency domain [24; 21; 22; 25] and are limited to specific sound source signals (e.g., speech). Therefore, in the present study, we propose a sound field decomposition method that uses a regression-type neural network based solely on sound field information at a single frequency, independent of the source information. The proposed method consists of two stages: In the first stage, the sound pressure at the microphones synthesized from multiple sound sources is separated into the sound pressure excited by each source. Then, in the second stage, the sound source position is obtained as a regression from the sound pressure at the microphones corresponding to a single sound source. The strength of each sound source is then obtained by linear regression from the separated sound pressure at the microphones and the estimated source position. The structure of the neural networks is similar to that of the source-splitting proposed in [24]. However, the proposed method explicitly separates the contributions of the sound sources in the first stage using a loss function proposed in the present study. The proposed method also limits the number of sources in advance, which imposes a sparsity constraint. This paper is organized as follows: Section II defines the problem setting of sound field decomposition based on the sparsity of the source distribution. Section III describes our proposed method using neural networks; datasets and loss functions for training the networks are also described. Section IV presents the numerical experiments and their results. Section V concludes this study. ## II Sound field decomposition ### Preliminaries Throughout this paper, the following notations are used: matrices and vectors are denoted by uppercase and lowercase boldface, respectively. The imaginary unit \(\sqrt{-1}\) is denoted by j. The wavenumber is denoted by \(k=\omega/c\), where \(\omega\) is the angular frequency and \(c\) is the sound velocity.
The position vector is denoted by \(\mathbf{r}=(x,y,z)\in\mathbb{R}^{3}\) in the Cartesian coordinate system. The time dependency is assumed as \(\exp\left(\mathrm{j}\omega t\right)\) and is hereafter omitted for simplicity. ### Problem setting Consider the reconstruction of the sound field in the region \(\Omega\) in \(\mathbb{R}^{3}\) including sound sources from the sound pressure measured by microphone sets \(\mathcal{M}\) discretely placed on the boundary \(\partial\Omega\) (Fig. 1). Figure 1: Overview of the problem setting. Sound sources exist in the target region and microphones are distributed around the region. Because \(\Omega\) includes sources (i.e., singular points), the sound pressure in \(\Omega\) satisfies the following inhomogeneous Helmholtz equation: \[(\nabla^{2}+k^{2})p(\mathbf{r},k)=-Q(\mathbf{r},k). \tag{1}\] Here, \(p(\mathbf{r},k)\) represents the sound pressure of \(k\) at \(\mathbf{r}\in\Omega\), \(Q(\mathbf{r},k)\) denotes the source distribution in \(\Omega\), and \(\nabla^{2}\) is the Laplace operator. The solution satisfying Eq. (1) can be expressed in terms of the volume integral of the three-dimensional free field Green's function \(G(\mathbf{r}|\mathbf{r}^{\prime},k)\) and \(Q(\mathbf{r}^{\prime},k)\) as \[p(\mathbf{r},k)=\int_{\Omega}Q(\mathbf{r}^{\prime},k)G(\mathbf{r}|\mathbf{r}^{\prime},k)\mathrm{d}\Omega, \tag{2}\] where \[G(\mathbf{r}|\mathbf{r}^{\prime},k)=\frac{\exp{(-\mathrm{j}k\|\mathbf{r}-\mathbf{r}^{\prime}\|_{2})}}{4\pi\|\mathbf{r}-\mathbf{r}^{\prime}\|_{2}}. \tag{3}\] If all sources in the region \(\Omega\) are assumed to be point sources, Eq. (2) can be approximated as \[p(\mathbf{r},k)\approx\sum_{s=1}^{S}a_{s}G(\mathbf{r}|\mathbf{r}_{s},k), \tag{4}\] where \(S\) denotes the number of sound sources, and \(a_{s}\in\mathbb{C}\) and \(\mathbf{r}_{s}\in\Omega\) represent the amplitude and position of the \(s\)-th source, respectively. Therefore, the sound field reconstruction in \(\Omega\) can be considered a sound field decomposition problem to estimate \(S\), \(\{a_{s}\}_{s\in\mathcal{S}}\), and \(\{\mathbf{r}_{s}\}_{s\in\mathcal{S}}\) from the set of observed sound pressure \(\{p(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\), where \(\mathcal{S}\) denotes the set of the sound sources. Hereafter, the number of sound sources is assumed to be known in advance.
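As a concrete illustration of this forward model, here is a minimal NumPy sketch of the Green's function in Eq. (3) and the point-source synthesis in Eq. (4); the microphone layout and source parameters below are placeholders rather than the spherical t-design and datasets used later in this paper.

```python
import numpy as np

def green_free_field(r, r_src, k):
    """Three-dimensional free-field Green's function, Eq. (3)."""
    d = np.linalg.norm(r - r_src, axis=-1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

# Placeholder microphone layout: 64 points projected onto a unit sphere.
rng = np.random.default_rng(0)
mics = rng.normal(size=(64, 3))
mics /= np.linalg.norm(mics, axis=1, keepdims=True)

k = 2 * np.pi * 500 / 343.0  # wavenumber at 500 Hz, assuming c = 343 m/s
src_pos = np.array([[0.2, -0.1, 0.3], [-0.4, 0.5, 0.0]])  # r_s (placeholders)
src_amp = np.array([1.0 + 0.0j, 0.5 - 0.5j])              # a_s (placeholders)

# Eq. (4): observed pressure as a superposition of point sources.
p = sum(a * green_free_field(mics, r_s, k) for a, r_s in zip(src_amp, src_pos))
```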
## III Sound field decomposition based on neural networks Sound field decomposition based on neural networks is proposed in this section. The proposed method consists of two stages: a sound field separator (SFS) stage and a sound source localizer (SSL) stage. Figure 2 shows a schematic of the proposed model. Figure 2: Overview of the proposed sound field decomposition method based on neural networks. ### Model architecture #### iii.1.1 Sound field separator The SFS aims to separate a sound field generated by multiple sound sources observed at the microphones into multiple sound fields generated by each source. The sound pressure observed at the microphones is normalized before input to the neural network as follows in order for the neural network to learn scale-independently: \[\bar{p}(\mathbf{r}_{m},k)=\frac{p(\mathbf{r}_{m},k)}{p_{\max}}, \tag{5}\] where \[p_{\max}=\max_{m\in\mathcal{M}}{(|p(\mathbf{r}_{m},k)|)}. \tag{6}\] Here, \(|\cdot|\) and \(\max(\cdot)\) denote the operations of taking the absolute value and the maximum value, respectively. Because the neural network processes real values, the complex sound pressure vector \(\bar{\mathbf{p}}\in\mathbb{C}^{M}\), which is a column vector of \(\{\bar{p}(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\), is transformed into a real-valued tensor \(\bar{\mathbf{P}}^{\mathbb{R}}\in\mathbb{R}^{2\times M}\) as \[\begin{split}\left[\bar{\mathbf{P}}^{\mathbb{R}}\right]_{0,:}&=\Re\left[\bar{\mathbf{p}}^{\top}\right],\\ \left[\bar{\mathbf{P}}^{\mathbb{R}}\right]_{1,:}&=\Im\left[\bar{\mathbf{p}}^{\top}\right],\end{split} \tag{7}\] where \(\Re[\cdot]\) and \(\Im[\cdot]\) represent the operations of taking the real and imaginary parts, respectively. The neural network of the SFS is defined as a one-dimensional U-net [26] (Fig. 3). Figure 3: Schematic of the neural network architecture of SFS in the case of \(M=64\). (Color online) Each convolution layer consists of a one-dimensional (1D) convolution followed by layer normalization and activation, except for the final layer, which has only 1D convolution. The kernel size for convolution is 5, with stride size 1 and padding size 2. Transposed convolution is defined as 1D transposed convolution with kernel size 3, stride size 2, padding size 2, and output-padding size 2. Max pooling and rectified linear unit (ReLU) functions are used for all pooling layers and activation functions, respectively. The output of the neural network corresponds to a tensor of the separated sound pressure denoted by \(\bar{\mathbf{P}}_{\text{sep}}^{\mathbb{R}}\in\mathbb{R}^{2S\times M}\). The sound pressure vector corresponding to the \(s\)-th sound source, \(\bar{\mathbf{p}}_{\text{sep},s}\in\mathbb{C}^{M}\), is represented as \[\bar{\mathbf{p}}_{\text{sep},s}=\left(\left[\bar{\mathbf{P}}_{\text{sep}}^{\mathbb{R}}\right]_{2(s-1),:}+\text{j}\left[\bar{\mathbf{P}}_{\text{sep}}^{\mathbb{R}}\right]_{2(s-1)+1,:}\right)^{\top} \tag{8}\] and then unnormalized as \[\hat{\mathbf{p}}_{\text{sep},s}=\bar{\mathbf{p}}_{\text{sep},s}\times p_{\max}. \tag{9}\] #### iii.1.2 Single source localizer In the SSL, the sound source position is located from the sound pressure at the microphones corresponding to each source as described in Sec. III.1.1. Therefore, the SSL is repeated \(S\) times. We denote \(u(\mathbf{r}_{m},k)\) as the separated pressure \(\hat{p}_{\text{sep},s}(\mathbf{r}_{m},k)\), and consider the \(s\)-th source hereafter. The sound pressure is also normalized before the neural network for scale-independent learning as \[\bar{u}(\mathbf{r}_{m},k)=\frac{u(\mathbf{r}_{m},k)}{\max_{m\in\mathcal{M}}\left(|u(\mathbf{r}_{m},k)|\right)}. \tag{10}\] The normalized spatial covariance matrix of sound pressure vectors \(\mathbf{\Sigma}=\bar{\mathbf{u}}\bar{\mathbf{u}}^{\text{H}}\in\mathbb{C}^{M\times M}\) is used as the input of the neural network. Here, \(\bar{\mathbf{u}}\in\mathbb{C}^{M}\) is a column vector of \(\{\bar{u}(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\).
The spatial covariance matrix is transformed into a real-valued tensor \(\mathbf{\Sigma}^{\mathbb{R}}\in\mathbb{R}^{2\times M\times M}\) to represent it in a format compatible with the network as follows: \[\begin{split}\left[\mathbf{\Sigma}^{\mathbb{R}}\right]_{0,:,:}&=\Re\left[\bar{\mathbf{u}}\bar{\mathbf{u}}^{\text{H}}\right],\\ \left[\mathbf{\Sigma}^{\mathbb{R}}\right]_{1,:,:}&=\Im\left[\bar{\mathbf{u}}\bar{\mathbf{u}}^{\text{H}}\right].\end{split} \tag{11}\] The neural network consists of a feature extractor composed of four convolution layers and a multilayer perceptron (MLP) composed of four linear transformation layers (Fig. 4). Figure 4: Schematic of the neural network architecture of SSL in the case of \(M=64\). (Color online) Each convolution layer consists of a 2D convolution layer, layer normalization, and activation, in that order. The kernel size for convolution is \(5\times 5\), the stride is 2, and the padding is 1. Each linear transformation layer except the final layer consists of a linear transformation, layer normalization, and activation, in that order. Layer normalization and activation are not used in the final layer. ReLU functions are used for all activation functions, and bias is added in all layers. The output of the neural network corresponds to the sound source position \(\hat{\mathbf{r}}=(\hat{x},\hat{y},\hat{z})\in\mathbb{R}^{3}\) in the Cartesian coordinate system. The signal of the sound source can be obtained by linear regression from the estimated source position \(\hat{\mathbf{r}}\) and the sound pressure at microphones \(\{u(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\) as \[\left\{\begin{array}{ll}\hat{a}(k)=\frac{\sum_{m=1}^{M}\{(u(\mathbf{r}_{m},k)-\mu_{u})(G(\mathbf{r}_{m}|\hat{\mathbf{r}},k)-\mu_{g})\}}{\sum_{m=1}^{M}(G(\mathbf{r}_{m}|\hat{\mathbf{r}},k)-\mu_{g})^{2}}&\text{if }S=1\\ \hat{\mathbf{a}}=\mathbf{G}^{\dagger}\mathbf{p}&\text{if }S>1\end{array}\right. \tag{12}\] where \(\hat{\mathbf{a}}\in\mathbb{C}^{S}\) denotes the estimated source-signal vector; \(\dagger\) denotes the Moore-Penrose pseudo-inverse; \(\mathbf{G}\in\mathbb{C}^{M\times S}\) denotes the transfer function matrix whose \((m,s)\) entry is the transfer function between the \(s\)-th source and the \(m\)-th microphone; and \(\mathbf{p}\in\mathbb{C}^{M}\) denotes the vector of recorded sound pressure at the microphones; \[\mu_{u}=\frac{1}{M}\sum_{m=1}^{M}u(\mathbf{r}_{m},k);\quad\mu_{g}=\frac{1}{M}\sum_{m=1}^{M}G(\mathbf{r}_{m}|\hat{\mathbf{r}},k). \tag{13}\]
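To make the tensor conversions and the source-signal regression above concrete, the following is a short NumPy sketch of the real-imaginary stacking of Eqs. (7) and (11) and the pseudo-inverse amplitude estimate of Eq. (12) for \(S>1\); the `green` callable and the estimated positions are assumed to be supplied by the surrounding pipeline.

```python
import numpy as np

def stack_real_imag(p_bar):
    """Eq. (7): complex pressure vector (M,) -> real tensor (2, M) for the SFS."""
    return np.stack([p_bar.real, p_bar.imag])

def covariance_input(u_bar):
    """Eq. (11): normalized pressure (M,) -> real tensor (2, M, M) for the SSL."""
    sigma = np.outer(u_bar, u_bar.conj())
    return np.stack([sigma.real, sigma.imag])

def estimate_amplitudes(p, r_hats, mics, k, green):
    """Eq. (12), S > 1 branch: a_hat = G^+ p, with G[m, s] = G(r_m | r_hat_s, k)."""
    G = np.column_stack([green(mics, r_hat, k) for r_hat in r_hats])
    return np.linalg.pinv(G) @ p
```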
### Dataset We assume that the sound field is a three-dimensional free field, that \(\Omega\) is a spherical region of radius 1.0 m with the free-field condition, that microphones are located on \(\partial\Omega\) with \(M=64\) using a spherical _t_-design [27], and that the sound sources exist inside a spherical region \(\Omega_{\text{S}}\) of radius 0.8 m. Datasets for training the SFS and SSL are prepared separately. Pairs of a sound source position and simulated sound pressure at the microphones are used as the dataset for the SSL. If the sound field is assumed to be excited by a single point source, the sound pressure observed at each microphone can be obtained by \[u(\mathbf{r}_{m},k)=a(k)G(\mathbf{r}_{m}|\mathbf{r}_{\text{src}},k)+n(\mathbf{r}_{m},k), \tag{14}\] where \(a(k)\in\mathbb{C}\) denotes the source signal, \(\mathbf{r}_{\rm src}\) denotes the single source position, and \(n(\mathbf{r}_{m},k)\) denotes the noise component. The SSL dataset comprises 10,000 pairs of \(\mathbf{r}_{\rm src}\) and \(\{u(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\) for each frequency. The positions of the sound source are randomly generated from a uniform distribution in \(\Omega_{\text{S}}\). We use 90% of the dataset for training and the remaining 10% for validation. The amplitude of the sound source \(|a(k)|\) is set to one, and the phase \(\angle a(k)\) is randomly varied from batch to batch following the uniform distribution \(\mathcal{U}(-\pi,\pi)\) for phase-independent learning. The noise is generated from a Gaussian distribution such that the signal-to-noise ratio (SNR) is in the range of \([20,60]\) dB. The number of sources \(S\) that exist in \(\Omega\) is assumed to be two for the SFS. The SFS dataset consists of pairs of the sound pressure observed at the microphones \(\{p(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\) and the separated sound pressure corresponding to each sound source denoted by \(\{p_{{\rm sep},s}(\mathbf{r}_{m},k)\}_{s\in\mathcal{S},m\in\mathcal{M}}\). The sound pressure observed at each microphone can be expressed as \[p(\mathbf{r}_{m},k)=\sum_{s=1}^{S}\underbrace{a_{s}(k)G(\mathbf{r}_{m}|\mathbf{r}_{s},k)}_{p_{{\rm sep},s}(\mathbf{r}_{m},k)}+n(\mathbf{r}_{m},k). \tag{15}\] Here, \(a_{s}(k)\in\mathbb{C}\) denotes the signal of the \(s\)-th source. The source locations \(\{\mathbf{r}_{s}\}_{s\in\mathcal{S}}\) used to generate the training dataset are selected from a random combination of 45,000 source locations used in the SSL training. Similarly, a random combination of 5,000 points from the SSL validation dataset is chosen for the source positions for validation. The source signal \(\{a_{s}(k)\}_{s\in\mathcal{S}}\) is randomly varied from batch to batch following \(\Re\left[a_{s}(k)\right]\sim\mathcal{U}(-1,1)\), \(\Im\left[a_{s}(k)\right]\sim\mathcal{U}(-1,1)\) for inter-amplitude-independent and inter-phase-independent learning. The noise is generated from a Gaussian distribution such that the SNR is in the range of \([20,60]\) dB. ### Loss function and training procedure The mean squared error (MSE) with respect to the source position is used as the loss function of the SSL: \[\mathcal{L}_{\mathrm{SSL}}=\frac{1}{3}\|\mathbf{r}_{\mathrm{src}}-\hat{\mathbf{r}}_{\mathrm{src}}\|_{2}^{2}. \tag{16}\] Here, \(\hat{\mathbf{r}}_{\mathrm{src}}\) denotes the estimated source position. The Adam optimizer [28] with a learning rate of \(5\times 10^{-4}\) is used for training, and the batch size is 100. The model is trained for 1,000 epochs. As the loss function of the SFS, we propose a permutation-invariant MSE [29] with respect to the separated sound pressure. The loss function in the case of \(S=2\) is defined as \[\mathcal{L}_{\mathrm{SFS}}=\frac{1}{S}\min\Bigl{(}\mathrm{MSE}_{11}+\mathrm{MSE}_{22},\mathrm{MSE}_{12}+\mathrm{MSE}_{21}\Bigr{)}, \tag{17}\] where \[\mathrm{MSE}_{ij}=\frac{1}{M}\sum_{m=1}^{M}|p_{\mathrm{sep},i}(\mathbf{r}_{m},k)-\hat{p}_{\mathrm{sep},j}(\mathbf{r}_{m},k)|^{2}. \tag{18}\] The Adam optimizer with a learning rate of \(1\times 10^{-3}\) is used for training, and the batch size is 100. The model is trained for 10,000 epochs.
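As an illustration, a minimal PyTorch sketch of the permutation-invariant loss of Eqs. (17)-(18) for \(S=2\) might look as follows; the tensor layout (batch, sources, microphones) is an assumption made for this sketch rather than the exact layout used in the implementation.

```python
import torch

def pi_mse_loss(pred, target):
    """Permutation-invariant MSE of Eqs. (17)-(18) for S = 2.

    pred, target: complex tensors of shape (batch, S, M) holding the
    separated pressure at the M microphones for each of the S sources.
    """
    def mse(i, j):
        # MSE_ij of Eq. (18), averaged over the M microphones
        return (target[:, i] - pred[:, j]).abs().pow(2).mean(dim=-1)

    straight = mse(0, 0) + mse(1, 1)  # MSE_11 + MSE_22
    swapped = mse(0, 1) + mse(1, 0)   # MSE_12 + MSE_21
    return 0.5 * torch.minimum(straight, swapped).mean()
```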
## IV Numerical experiments Numerical simulations were conducted to compare the performance of the proposed method with that of conventional methods (i.e., the sparse sound field decomposition method [16] and the spherical-harmonic-domain RGF [19]). Hereafter, these methods are denoted as **Proposed**, **Sparse**, and **SHD-RGF**, respectively. The arrangement of microphones was the same as that defined in Sec. III.3. Numerical simulations were performed for one and two sound sources, respectively. To validate the effectiveness of the SFS in the case of two sources, we used a neural-network-based model in which the output layer of the SSL model was changed to output two source positions; this model was used as a baseline model, hereafter referred to as **Baseline**. The permutation-invariant MSE of the source positions was used as the loss function to train the baseline, defined as \[\mathcal{L}_{\text{base}}=\frac{1}{3S}\min\Bigl{(}\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{1}\|_{2}^{2}+\|\mathbf{r}_{2}-\hat{\mathbf{r}}_{2}\|_{2}^{2},\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{2}\|_{2}^{2}+\|\mathbf{r}_{2}-\hat{\mathbf{r}}_{1}\|_{2}^{2}\Bigr{)}. \tag{19}\] The dataset and noise addition used for training **Baseline** were the same as those used for training the SFS neural network. The Adam optimizer with a learning rate of \(5\times 10^{-4}\) was used for optimization. The model was trained for 10,000 epochs with a batch size of 100. In addition, a neural network model with the same layers as the proposed SFS, trained with the loss in Eq. (19), was used to validate the effectiveness of the loss function in Eq. (17) in the case of two sources. The SSL of this model was pre-trained, and the weights of each SSL layer were fixed when training the SFS. The model was optimized by the Adam optimizer with a learning rate of \(1\times 10^{-3}\) and trained for 10,000 epochs with a batch size of 100. Hereafter, the model is denoted by **Proposed** (\(\mathcal{L}_{\text{base}}\)). In **Sparse**, it is necessary to discretize \(\Omega\) in advance in order to set up the candidate source positions. In this experiment, \(\Omega\) was discretized in the \(x,y,z\) directions into grids with \(\delta\) intervals, and the grid points were used as source-position candidates. Therefore, we discretized the source-included region \(\Omega_{\text{S}}\) by \(\delta=0.1\) m and \(\delta=0.2\) m; the total number of candidate points was 2,109 and 257, respectively. Hereafter, they are denoted by **Sparse** (\(\delta=0.1\)) and **Sparse** (\(\delta=0.2\)), respectively. The OMP [30; 31] algorithm was used for sparse decomposition. Because **SHD-RGF** requires truncation of the order of the spherical harmonic expansion, three truncation orders (i.e., 5, 6, and 7) were used; they are denoted by **SHD-RGF** (\(N=5\)), **SHD-RGF** (\(N=6\)), and **SHD-RGF** (\(N=7\)), respectively. In this experiment, we compare and evaluate each method in terms of the accuracy of sound source localization and sound field reconstruction. To evaluate the accuracy of the sound source localization, we define the root-mean-square error (RMSE) as \[\text{RMSE}=\left\{\begin{array}{ll}\sqrt{\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{1}\|_{2}^{2}}&\text{if }\,S=1\\ \sqrt{\frac{1}{S}\min\Bigl{(}\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{1}\|_{2}^{2}+\|\mathbf{r}_{2}-\hat{\mathbf{r}}_{2}\|_{2}^{2},\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{2}\|_{2}^{2}+\|\mathbf{r}_{2}-\hat{\mathbf{r}}_{1}\|_{2}^{2}\Bigr{)}}&\text{if }\,S=2.\end{array}\right. \tag{20}\] To evaluate the accuracy of the sound field reconstruction, we define the signal-to-distortion ratio (SDR) as \[\text{SDR}=10\log_{10}\frac{\int_{\Omega}|p(\mathbf{r},k)|^{2}\text{d}\mathbf{r}}{\int_{\Omega}|p_{\text{rec}}(\mathbf{r},k)-p(\mathbf{r},k)|^{2}\text{d}\mathbf{r}}\ (\text{dB}). \tag{21}\] \(\Omega\) was discretized at 0.1 m intervals to calculate the integral in Eq. (21).
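For reference, both evaluation metrics can be sketched in NumPy as follows; the two-source RMSE resolves the permutation ambiguity exactly as in Eq. (20), and the SDR integrals of Eq. (21) are approximated by sums over the 0.1 m grid.

```python
import numpy as np

def rmse_two_sources(r, r_hat):
    """Eq. (20) for S = 2; r and r_hat are (2, 3) arrays of source positions."""
    straight = np.sum((r[0] - r_hat[0])**2) + np.sum((r[1] - r_hat[1])**2)
    swapped = np.sum((r[0] - r_hat[1])**2) + np.sum((r[1] - r_hat[0])**2)
    return np.sqrt(min(straight, swapped) / 2)

def sdr_db(p_rec, p_true):
    """Eq. (21), with the integrals over Omega replaced by sums over the grid."""
    signal = np.sum(np.abs(p_true)**2)
    distortion = np.sum(np.abs(p_rec - p_true)**2)
    return 10 * np.log10(signal / distortion)
```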
### Training results All neural networks were trained using a single GPU (GeForce RTX 3090, NVIDIA). Figure 5 shows the training and validation loss of the SSL as a function of the epoch number. Figure 5: Training and validation loss of SSL plotted against the epoch number at each frequency. (Color online) The training loss was found to decrease with each additional epoch at all frequencies. However, the difference between the validation loss and the training loss increased slightly with increasing frequency. The computation time for SSL training was approximately 0.7 h for each frequency. Figure 6 shows the training and validation loss of the SFS for **Proposed** as a function of the epoch number. Figure 6: Training and validation loss of SFS for **Proposed** plotted against the epoch number at each frequency. (Color online) Unlike the SSL learning, little difference was observed between the training loss and the validation loss at all frequencies. However, the loss converged to a larger value as the frequency increased. The computation time for SFS training in **Proposed** was approximately 16.7 h for each frequency. Figure 7 shows the training and validation loss of **Baseline** as a function of the epoch number. Figure 7: Training and validation loss of **Baseline** plotted against the epoch number at each frequency. (Color online) Although the converged loss values were dependent on the frequency, the training loss and validation loss converged similarly for all of the investigated frequencies. The computation time for training **Baseline** was approximately 14.0 h for each frequency. Figure 8 shows the training and validation loss of **Proposed** (\(\mathcal{L}_{\text{base}}\)) as a function of the epoch number. Figure 8: Training and validation loss of SFS for **Proposed** (\(\mathcal{L}_{\text{base}}\)) plotted against the epoch number at each frequency. (Color online) The learning trend was similar to that of **Baseline**; however, the loss value was slightly greater. The computation time for training the SFS in **Proposed** (\(\mathcal{L}_{\text{base}}\)) was approximately 20.7 h for each frequency. ### Experiments for a single source In the case of a single sound source, **Proposed** consists of the SSL only. The source positions used in this simulation were taken from the entire SSL validation dataset described in Sec. III.3. The amplitude of the source signal was set to one, and the phase was randomly chosen from \(\mathcal{U}(-\pi,\pi)\) for each condition. We compared each method at SNRs of 40 dB and 20 dB. The results were averaged over all conditions. Figure 9 shows the RMSE plotted against frequency for the frequency range 100-900 Hz at intervals of 100 Hz. Figure 9: RMSE as a function of frequency in the case of a single source with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online) The RMSEs of **Sparse** were nearly constant at all of the investigated frequencies and SNRs for each \(\delta\). Comparing **Sparse** (\(\delta=0.1\)) and **Sparse** (\(\delta=0.2\)) reveals that a smaller discretization interval resulted in smaller RMSEs, although the RMSEs remained at almost \(\delta/2\). Figure 9(a) shows that the RMSEs of **SHD-RGF** were smaller than those of **Sparse** at frequencies less than 200 Hz. However, the RMSEs of **SHD-RGF** increased with increasing frequency because of spatial aliasing. In addition, Fig. 9(b) shows that the RMSEs of **SHD-RGF** were larger than those of **Sparse** because of the increased noise at all of the investigated frequencies. However, **Proposed** achieved much smaller RMSEs than the other methods at all frequencies under both investigated SNRs.
Figure 10 shows the SDR plotted against frequency for the frequency range 100-900 Hz at intervals of 100 Hz. Figure 10: SDR as a function of frequency in the case of a single source with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online) The SDRs of **SHD-RGF** were higher than those of **Sparse** for high SNRs (Fig. 10(a)) and at frequencies less than 200 Hz. The **Proposed** SDRs were the highest among the SDRs of the investigated methods for all of the considered frequencies and SNRs. Figures 11 and 12 show the reconstructed sound pressure distribution and the normalized error distribution at 500 Hz on the \(x\)-\(y\) plane for a single source with an SNR of 20 dB. Figure 11: Real part of true and reconstructed sound pressure distribution at 500 Hz on the \(x\)-\(y\) plane in the case of a single source with an SNR of 20 dB. The red and green crosses represent the true and estimated sound source positions, respectively. The black lines represent the sphere where microphones exist. (Color online) Figure 12: Normalized error distribution at 500 Hz on the \(x\)-\(y\) plane in the case of a single source with an SNR of 20 dB. The SDRs in (a), (b), and (c) were 26.3, 5.8, and 1.8 dB, respectively. (Color online) The true position of the source was at \((-0.05,-0.17,-0.45)\), which was chosen randomly from the validation dataset. The amplitude was set to unity. For **Sparse** and **SHD-RGF**, only the results of **Sparse** (\(\delta=0.1\)) and **SHD-RGF** (\(N=7\)) are shown. The red and green crosses correspond to the true and estimated sound source positions, respectively. The black lines represent the sphere where microphones exist. Figure 12 shows that **Proposed** achieved the lowest normalized error distribution among the investigated methods. The SDRs in **Proposed**, **Sparse** (\(\delta=0.1\)), and **SHD-RGF** (\(N=7\)) were 26.3, 5.8, and 1.8 dB, respectively. ### Experiments for two sources In this experiment, we randomly chose 1,000 validation samples from the SFS validation dataset described in Sec. III.3. The source signal \(\{a_{s}(k)\}_{s\in\mathcal{S}}\) was randomized for each condition following \(\Re\left[a_{s}(k)\right]\sim\mathcal{U}(-1,1),\ \Im\left[a_{s}(k)\right]\sim\mathcal{U}(-1,1)\). The experiments were conducted for SNRs of 40 dB and 20 dB. The results were averaged over all conditions. Figure 13 shows the RMSE plotted against frequency for frequencies ranging from 100 Hz to 900 Hz at intervals of 100 Hz in the case of two sources. Figure 13: RMSE as a function of frequency in the case of two sources with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online) In **Sparse**, the RMSE increased with increasing frequency; this trend differs somewhat from that in the case with a single sound source (Fig. 9). Comparing **Sparse** (\(\delta=0.1\)) and **Sparse** (\(\delta=0.2\)) reveals that a smaller discretization interval resulted in smaller RMSEs even in the case of two sound sources. Unlike the case of a single sound source shown in Fig. 9, the RMSEs of **SHD-RGF** were the largest at all frequencies under both investigated SNR conditions. Comparing **Proposed** (\(\mathcal{L}_{\text{base}}\)) and **Baseline** reveals that the RMSEs of **Baseline** were smaller than those of **Proposed** (\(\mathcal{L}_{\text{base}}\)).
However, **Proposed** achieved much smaller RMSEs than the other investigated methods at all frequencies under both SNR conditions, except for 100 Hz with an SNR of 40 dB. These results demonstrate the effectiveness not only of the proposed method but also of the proposed loss function. Figure 14 shows the SDR plotted against frequency for frequencies in the range of 100-900 Hz at intervals of 100 Hz. Figure 14: SDR as a function of frequency in the case of two sources with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online) **Proposed** achieved the highest SDRs among the investigated methods for all frequencies and SNRs; it also achieved the lowest RMSEs. Remarkably, at 500 Hz, the SDR was more than 10 dB greater than those of the other methods. Figures 15 and 16 show the reconstructed sound pressure distribution and the normalized error distribution at 500 Hz on the \(x\)-\(y\) plane for an SNR of 20 dB. Figure 15: Real part of true and reconstructed sound pressure distribution at 500 Hz on the \(x\)-\(y\) plane in the case of two sources with an SNR of 20 dB. (Color online) Figure 16: Normalized error distribution at 500 Hz on the \(x\)-\(y\) plane in the case of two sources with an SNR of 20 dB. The SDRs in (a), (b), (c), (d), and (e) were 19.5, 8.0, 11.9, 7.0, and 0.1 dB, respectively. (Color online) The true positions of the sources were at \((-0.54,\ 0.04,\ -0.04)\) and \((0.44,\ 0.61,\ 0.06)\), which were chosen randomly from the validation dataset. The amplitudes of the sources were set to one. For **Sparse** and **SHD-RGF**, only the results of **Sparse** (\(\delta=0.1\)) and **SHD-RGF** (\(N=7\)) are shown. Figure 16 shows that **Proposed** achieved the lowest normalized error distribution among the investigated methods. The SDRs in **Proposed**, **Proposed** (\(\mathcal{L}_{\rm base}\)), **Baseline**, **Sparse** (\(\delta=0.1\)), and **SHD-RGF** (\(N=7\)) were 19.5, 8.0, 11.9, 7.0, and 0.1 dB, respectively. ## V Conclusion A neural-network-based method for sound field decomposition was proposed. To reconstruct a sound field in a source-included region, some constraints are necessary to address the ill-posed problem. Conventional methods that use sparsity in the number of sound sources have the disadvantage of requiring source-position candidates to be determined in advance, which results in a loss of accuracy when sound sources exist at locations other than the candidate locations. In other conventional methods that use the reciprocity of the transfer function, the accuracy of sound field reconstruction is limited by the spatial Nyquist frequency. To overcome these problems, we proposed two-stage neural networks for sound field decomposition. In the first stage, the sound pressure at the microphones is separated into the sound pressure corresponding to each source. In the second stage, the position of each source is localized. For training the first stage, a loss function that explicitly separates the measured sound pressure into the sound pressure corresponding to each source was also proposed. Numerical experiments showed that the proposed method with the proposed loss function achieved more accurate sound source localization and sound field reconstruction than the investigated conventional methods. Future work will consider non-anechoic conditions where room reflections exist. ###### Acknowledgements. This research was partially supported by JSPS Grants (Nos. JP19H04153 and JP22H00523).
2307.16727
Multi Agent Navigation in Unconstrained Environments using a Centralized Attention based Graphical Neural Network Controller
In this work, we propose a learning based neural model that provides both the longitudinal and lateral control commands to simultaneously navigate multiple vehicles. The goal is to ensure that each vehicle reaches a desired target state without colliding with any other vehicle or obstacle in an unconstrained environment. The model utilizes an attention based Graphical Neural Network paradigm that takes into consideration the state of all the surrounding vehicles to make an informed decision. This allows each vehicle to smoothly reach its destination while also evading collision with the other agents. The data and corresponding labels for training such a network are obtained using an optimization based procedure. Experimental results demonstrate that our model is powerful enough to generalize even to situations with more vehicles than in the training data. Our method also outperforms comparable graphical neural network architectures. The project page, which includes the code and supplementary information, can be found at https://yininghase.github.io/multi-agent-control/
Yining Ma, Qadeer Khan, Daniel Cremers
2023-07-31T14:48:45Z
http://arxiv.org/abs/2307.16727v2
Multi Agent Navigation in Unconstrained Environments using a Centralized Attention based Graphical Neural Network Controller ###### Abstract In this work, we propose a learning based neural model that provides both the longitudinal and lateral control commands to simultaneously navigate multiple vehicles. The goal is to ensure that each vehicle reaches a desired target state without colliding with any other vehicle or obstacle in an unconstrained environment. The model utilizes an attention based Graphical Neural Network paradigm that takes into consideration the state of all the surrounding vehicles to make an informed decision. This allows each vehicle to smoothly reach its destination while also evading collision with the other agents. The data and corresponding labels for training such a network are obtained using an optimization based procedure. Experimental results demonstrate that our model is powerful enough to generalize even to situations with more vehicles than in the training data. Our method also outperforms comparable graphical neural network architectures. The project page, which includes the code and supplementary information, can be found here: [https://yininghase.github.io/multi-agent-control/](https://yininghase.github.io/multi-agent-control/) ## I Introduction Data driven approaches to sensorimotor control have seen meteoric growth with the advent of deep learning in the last decade [1, 2, 3, 4]. Powerful neural network architectures can now be trained and deployed in real-time applications [5]. The recent success of deep learning for agent control can primarily be attributed to the following two factors: 1. Cheap hardware accelerators that exploit parallel computations [6], particularly in deep neural architectures [7]. 2. The availability of simulation platforms that allow benchmarking and evaluation of various vehicle control algorithms [8, 9]. For example, [10, 11] have used such platforms to evaluate their learning based control algorithms. However, many learning based control approaches have certain limitations of their own: 1. They require the collection of tremendous amounts of labeled supervised data for training, which in some cases may not even be available, e.g., for recovering a vehicle that has driven onto a sidewalk. 2. The neural network is trained to control only one vehicle. Moreover, since the sensors are placed on the ego-vehicle, the model has partial observability of the environment. Therefore, traffic rules are used to regulate the flow of vehicles and prevent any untoward incident. Moreover, for the task of autonomous driving, the vehicle is constrained to drive only on the road. In this work, we present a technique to control not one but a variable number of vehicles in an unconstrained environment containing obstacles. The vehicles are meant to reach their desired destination/target state without collision. Supervised labeled data is also not available. Rather, we optimize against a cost function to determine the longitudinal and lateral control labels. This optimization procedure for the task of label generation is only done offline and hence does not influence the real-time operation. Fig. 1: **Overview of multi-agent control:** The initial configuration (on the left) shows five vehicles colored **black**, **red**, **green**, **blue** and **olive**. The initial starting state of each vehicle is represented by a rectangle with solid boundaries. The arrow within each rectangle depicts the corresponding orientation of that vehicle.
Meanwhile, the rectangles with broken boundaries represent the desired destination/target position of each vehicle. We would like to produce the sequence of control actions such that the five vehicles safely reach their destination state without colliding with each other or the circled obstacle. These control actions are produced by the Attention Based Graphical Neural Network (A-GNN). The A-GNN receives information about the current state and desired destination state of all the five vehicles along with information about any obstacle. The network outputs the control commands for all the five vehicles together. These control commands are executed for all the five vehicles simultaneously. Each vehicle then attains a new state. This new state is then fed again to the A-GNN as the current state to predict the new steering command. This process is iteratively repeated until all the vehicles reach their corresponding destination state. The trajectory traversed by all the vehicles as a result of this iterative process is shown on the right of the figure. Some video examples of our model demonstrating this can be found on the project page. Figure 1 describes this process. The Attention based Graphical Neural Network (A-GNN) is a core component which makes these control command predictions. It takes information of all obstacles, the current state of all vehicles, and their desired destination/target state. The outputs of the A-GNN are the control commands for all the vehicles. These control commands are then executed, which results in the vehicles attaining a new state. This new state, along with the target/desired destination state of all the vehicles, is fed back to the A-GNN to produce the new control commands. The process is iteratively repeated until all vehicles reach their destination state. Note that the A-GNN is trained with labels obtained from offline optimization performed against a cost function. The A-GNN allows each vehicle to attend to the information of other vehicles. This allows each vehicle to make informed control decisions in order to avoid collisions among themselves while also successfully reaching their destination. We summarize the contributions of our framework below: 1. Ability to control a variable number of vehicles to reach their desired destination states. We show that our model can even perform inference for more vehicles than it was initially trained on. 2. The architecture of our A-GNN outperforms other comparable attention based GNN layers such as GAIN [12] and TransformerConv [13], non-attention based architectures such as EdgeConv [14], and also the naive multilayer perceptron (MLP) method that does not utilize any edges in the graphical structure. 3. Our A-GNN can handle the presence of both dynamic vehicles and static obstacles. 4. We have released the implementation of our framework here: [https://github.com/yininghase/multi-agent-control](https://github.com/yininghase/multi-agent-control). An application of this work could be in large unmanned indoor warehouses where multiple agents are transporting items from one location to another. We assume that all the agents/vehicles move in the x-y plane and can be localized to determine their state. Localization in an indoor environment can be done using, e.g., the approaches from [15, 16]. ## II Related Work **Multi-Agent Trajectory Prediction & Control:** [17, 18, 19] model pedestrian behaviour to predict their future trajectory primarily utilizing information from the past.
Our model is rather concerned with the control of vehicle agents using current state and destination information. [20, 21] do focus on multiple agents but for the task of leader guided formation control. Meanwhile, [22] investigates connectivity restoration for formation control. In our work, the multiple vehicles are neither guided to follow a leader nor are they in pursuit of a formation. Rather, the objective of each vehicle is to independently reach its desired destination while taking information of other vehicles into consideration. Co-operation among the vehicles in our framework only exists to the extent that each vehicle can reach its desired destination without colliding with the other vehicles or the obstacles. **Attention based architectures:** In the context of deep learning, the attention mechanism was primarily introduced in the discourse of Natural Language Processing [23]. However, it has gained utility in other domains too, such as for the task of single vehicle control [24]. Here, attention is applied between different patches of the sensory data for the same vehicle. In contrast, our method applies attention between the different vehicles to be controlled. [12] uses a Graph Attention Isomorphism Neural Network (GAIN) architecture, where the task is to predict the trajectory of multiple vehicles using their past trajectory as input. Our method in contrast predicts the control commands using only the current and destination state rather than the entire historical trajectory. **Reinforcement Learning:** [25, 26] use reinforcement learning for navigating vehicular traffic. However, this is done by controlling the traffic signal rather than controlling the individual vehicles. Moreover, this is done only at intersections where clear rules for vehicle navigation exist. In our case, we handle an unconstrained environment where no predefined rules exist for negotiating bottlenecks. The only objective that the vehicles must ensure is that they avoid collisions while reaching the destination. [27] also use RL in simulation but for the control of humanoids rather than vehicles. [28] performs multi-agent path finding but in a discrete action space in a gridworld, as opposed to our approach, which operates in a continuous action space and environment. The co-operative navigation task of [29] is similar to our task of navigating multiple agents to desired destinations. However, the agents are treated as particles while our approach considers the vehicle kinematics. Another issue with RL methods is that the training tends to be heavily sample-inefficient [30, 31], requiring far more training samples than methods where the target labels are already known, such as in our case through optimization techniques. **BEV representation:** Note that the state of the vehicles is represented in a Bird's Eye View (BEV) format. This format has been used extensively in various perception related tasks such as object detection, tracking, segmentation [32, 33], etc. It has also been demonstrated to be very convenient for planning and control tasks. For example, [34] takes the BEV representation along with the intention of a single vehicle as input to predict its trajectory. [35] also uses the BEV representation to first train a teacher for the task of single vehicle control. Knowledge distillation is then used to train a subsequent student model that takes image data as input for control prediction. [36] uses pseudo-labeling of web images as part of a pipeline that predicts future waypoints in the BEV space.
The aforementioned approaches for prediction and control are only tailored towards single vehicles. Our approach, on the other hand, is capable of handling multiple vehicles. ## III Framework In this section, we first describe the architecture of our Attention based Graphical Neural Network (A-GNN) (see Subsection III-A). Next, the process to generate the control labels for training the A-GNN is discussed (see Subsection III-B). ### _Attention Graph Neural Network_ The A-GNN takes as input the state information of both the \(N_{vehicle}\) dynamic vehicles and \(N_{obstacle}\) static obstacles in the scene to predict the control commands (steering angle \(\varphi\) and pedal acceleration \(p\)) for all vehicles. Each vehicle/obstacle is indexed by \(i\in\{1,\ldots,N_{vehicle}+N_{obstacle}\}\). The feature vector for entity \(i\) at layer \(l\) of the neural network is denoted by \(z_{i}^{l}\). The input feature vector for each entity is given by \(z_{i}^{0}\in\mathbb{R}^{8}\), representing the current location (\(x\) and \(y\)), current orientation (\(\theta\)), current velocity \(v\), target position (\(\hat{x}\) and \(\hat{y}\)), target orientation (\(\hat{\theta}\)), and whether the entity is a vehicle (0) or a circular obstacle (with radius \(r\)). Hence, with this representation \(z_{i_{vehicle}}^{0}=[x,y,\theta,v,\hat{x},\hat{y},\hat{\theta},0]^{T}\). Meanwhile, \(z_{i_{obstacle}}^{0}=[x,y,\theta,0,x,y,\theta,r]^{T}\). Note that the target state of a static obstacle is the same as its current state. This is because the obstacles are stationary. We now construct a graph wherein each vehicle and obstacle is considered a node. Input node \(i\) is represented by the feature vector \(z_{i}^{0}\). Note that each vehicle node in the graph needs to retrieve state information about all other entities in the environment in order to avoid collision and reach its desired destination. Therefore, we build an edge from all the other vehicle nodes and all obstacle nodes towards this vehicle node. Meanwhile, the obstacle nodes are static and do not have any incoming edges. Mathematically, for any vehicle node \(i\) its neighbors in the graph \(G\) are \(N_{i}=\{\,j\mid j\in\{1,\ldots,N_{vehicle}+N_{obstacle}\},\ j\neq i\,\}\), and for any obstacle node the neighbor set in the graph \(G\) is \(\emptyset\). This graph \(G\) is then passed through a series of neural layers to eventually predict the control command for each vehicle. The control command \(\in\mathbb{R}^{2}\) corresponds to the steering angle (\(\varphi\)) and pedal acceleration (\(p\)) of the vehicle.
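A minimal sketch of this input-graph construction (the node features \(z_{i}^{0}\) and the neighbor sets \(N_{i}\)) in plain Python/NumPy might look as follows; the function names are assumptions chosen to mirror the notation above.

```python
import numpy as np

def vehicle_node(x, y, theta, v, x_t, y_t, theta_t):
    """z^0 for a vehicle: current state, target state, and type entry 0."""
    return np.array([x, y, theta, v, x_t, y_t, theta_t, 0.0])

def obstacle_node(x, y, theta, r):
    """z^0 for a static obstacle: the target equals the current state,
    velocity is zero, and the last entry holds the radius r."""
    return np.array([x, y, theta, 0.0, x, y, theta, r])

def neighbor_sets(n_vehicles, n_obstacles):
    """N_i: every other entity feeds into each vehicle node;
    obstacle nodes have no incoming edges."""
    n = n_vehicles + n_obstacles
    return [[j for j in range(n) if j != i] if i < n_vehicles else []
            for i in range(n)]
```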
The flow of information through the A-GNN is described as follows: The input node features are first converted to a higher dimensional latent vector: \(z_{i}^{1}=\sigma^{1}(W^{1}\cdot z_{i}^{0})\in\mathbb{R}^{d_{1}}\), where \(W^{1}\in\mathbb{R}^{d_{1}\times 8}\) is a trainable weight matrix, while \(\sigma^{1}\) is the nonlinear ReLU activation for the first layer. Next, a series of \(L\) residual graphical layers are used. In these layers, information about the neighbouring entities is retrieved by each node via an attention based mechanism. The residual connection carries information from the preceding layer \(l-1\) to a successive layer \(l+1\). This ensures that prior information important for the network is carried forward. Mathematically, these residual attention layers are described by: \[\begin{split} z_{i}^{2k}&=\sigma^{2k}(F_{Att}^{2k}(z_{i}^{2k-1},z_{N_{i}}^{2k-1}))\\ z_{i}^{2k+1}&=\sigma^{2k+1}(F_{Att}^{2k+1}(z_{i}^{2k},z_{N_{i}}^{2k})+z_{i}^{2k-1})\end{split} \tag{1}\] where \(z_{i}^{2k-1}\in\mathbb{R}^{d_{2k-1}}\), \(z_{i}^{2k}\in\mathbb{R}^{d_{2k}}\) and \(z_{i}^{2k+1}\in\mathbb{R}^{d_{2k+1}}\). To cater for the residual connection in the second equation, note that \(d_{2k+1}=d_{2k-1}\). Meanwhile, \(F_{Att}^{l+1}\) describes the _Attention_ mechanism at layer \(l+1\) and is defined by: \[F_{Att}^{l+1}(z_{i}^{l},z_{N_{i}}^{l})=W_{self}^{l}\cdot z_{i}^{l}+\sum_{j\in N_{i}}\alpha_{ij}\cdot F_{value}(z_{i}^{l}|z_{i}^{l}-z_{j}^{l}) \tag{2}\] where \(\alpha_{ij}=\frac{F_{query}(z_{i}^{l})^{T}\cdot F_{key}(z_{j}^{l})}{\sum_{k\in N_{i}}F_{query}(z_{i}^{l})^{T}\cdot F_{key}(z_{k}^{l})}\) are the attention weights of the neighbours of vehicle \(i\), and \(\cdot|\cdot\) denotes concatenation. Meanwhile, \(F_{value}\), \(F_{query}\), \(F_{key}\) are the joint trainable parameters of a U-Net inspired architecture with shared encoder weights for the attention mechanism. Details of this U-Net inspired attention mechanism can be found in Subsection VII-A. The final layer \(F\) of the A-GNN outputs the control commands for all the vehicles: \[z_{i}^{F}=\sigma^{F}(W^{F}\cdot z_{i}^{2L+1}) \tag{3}\] \(\sigma^{l}\) with \(l\in\{1,...,2L+1\}\) is the ReLU non-linear activation. Meanwhile, \(\sigma^{F}\) is chosen to be the scaled _tanh_ function to accommodate negative values of the steering angles and reverse pedal acceleration.
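As an aside on the architecture, the attention update of Eq. (2) can be sketched in PyTorch as follows; for brevity, the joint U-Net inspired \(F_{query}\), \(F_{key}\), and \(F_{value}\) transforms are replaced by plain linear layers, so this is a simplified stand-in rather than the exact layer described above.

```python
import torch
import torch.nn as nn

class SimplifiedGraphAttention(nn.Module):
    """Eq. (2) with plain linear query/key/value transforms (simplification)."""

    def __init__(self, dim):
        super().__init__()
        self.w_self = nn.Linear(dim, dim, bias=False)
        self.f_query = nn.Linear(dim, dim, bias=False)
        self.f_key = nn.Linear(dim, dim, bias=False)
        # F_value acts on the self feature concatenated with the relative
        # feature z_i - z_j, hence the doubled input dimension.
        self.f_value = nn.Linear(2 * dim, dim, bias=False)

    def forward(self, z, neighbors):
        """z: (N, dim) node features; neighbors: list of index lists N_i."""
        rows = []
        for i, nbrs in enumerate(neighbors):
            row = self.w_self(z[i])
            if nbrs:  # obstacle nodes have an empty neighbor list
                zj = z[nbrs]                                  # (|N_i|, dim)
                logits = self.f_key(zj) @ self.f_query(z[i])  # (|N_i|,)
                alpha = logits / logits.sum()                 # attention weights
                rel = torch.cat([z[i].expand_as(zj), z[i] - zj], dim=-1)
                row = row + (alpha.unsqueeze(-1) * self.f_value(rel)).sum(0)
            rows.append(row)
        return torch.stack(rows)
```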
### _Label Generation Process_ In this subsection, we describe the procedure for generating the data and corresponding control commands. This is done by first creating a motion model of each vehicle using the kinematic bicycle model [37]. Next, the optimization is run such that all vehicles reach their desired destination state without colliding with each other or any obstacles in the scene. Once we generate enough data, the A-GNN model architecture described in Subsection III-A is trained to predict the control commands for the corresponding vehicle scenarios in the dataset. Now one may rightfully ask, what is the advantage of training the A-GNN when we can already generate the control commands from optimization? The reason is that as the number of cars in the environment is increased, the optimization becomes extremely slow and may not be applicable for real-time operation. The advantage of the A-GNN is that it learns to extract patterns from the data and applies them to similar situations not seen during training. In fact, in the experiments section, we show that the data is generated using optimization for a maximum of 3 vehicles in the environment. However, the A-GNN is powerful enough to even make predictions for controlling 6 vehicles in the scene. Meanwhile, please refer to Subsection VII-C for a comparison of run times, wherein the inference time of our model remains fairly stable with the increase in the number of vehicles/obstacles. Recall that the state of the vehicle is described by its location (\(x\) and \(y\)), orientation/angle (\(\theta\)) and velocity (\(v\)). The vehicle can be controlled by adjusting the acceleration from the pedal (\(p\)) and maneuvering the steering angle (\(\varphi\)). Using the kinematic bicycle model, the equations of motion can be updated from time \(t\) to \(t+1\) using the following: \[\begin{split} x_{t+1}&=x_{t}+v_{t}\cdot\cos(\theta_{t})\cdot\Delta t\\ y_{t+1}&=y_{t}+v_{t}\cdot\sin(\theta_{t})\cdot\Delta t\\ \theta_{t+1}&=\theta_{t}+v_{t}\cdot\tan(\varphi_{t})\cdot\gamma\cdot\Delta t\\ v_{t+1}&=\beta\cdot v_{t}+p\cdot\Delta t\end{split} \tag{4}\] where \(\beta\) and \(\gamma\) are tuneable parameters during optimization. We would like to use these equations of motion to determine the control commands of each vehicle for a horizon of \(H\) timesteps ahead. The cost function to be minimized during the optimization should cater to two primary objectives: one is to guide the vehicle to the target location, and the other is to prevent collision with other vehicles/obstacles. The first component of the cost ensures that the predicted vehicle state at any step in the horizon is as close to the target state as possible. This is done by penalizing the difference between the current vehicle state and the target vehicle state through the target cost \(C_{tar}\): \[\begin{split} C_{tar}&=\sum_{t=1}^{H}\sum_{i=1}^{N_{vehicle}}\|X_{t}^{(i)}-X_{tar}^{(i)}\|_{2}\cdot w_{pos}\\ &+\|\theta_{t}^{(i)}-\theta_{tar}^{(i)}\|_{2}\cdot w_{orient}\end{split} \tag{5}\] For convenience, \(X=[x,y]^{T}\) represents the position vector. \(w_{pos}\) and \(w_{orient}\) are tuneable weights. The optimization should also ensure that a vehicle does not collide with obstacles and also stays clear of other vehicles. For an obstacle with radius \(r\), a penalty is introduced if a vehicle is within a margin of \(r_{mar\_obs}\). The cost occurring due to collision of any vehicle with any obstacle over any of the timesteps in the horizon is described by \(C_{coll\_obs}\): \[\begin{split} C_{coll\_obs}&=\sum_{t=1}^{H}\sum_{i=1}^{N_{vehicle}}\sum_{j=1}^{N_{obstacle}}[\frac{1}{\|X_{t}^{(i)}-X^{(j)}\|_{2}-r^{(j)}}\\ &-\frac{1}{r_{mar\_obs}}]\cdot\Pi_{obs}^{i,j}\cdot w_{col\_obs}\end{split} \tag{6}\] Here, the indicator \(\Pi_{obs}^{i,j}\) equals 1 if vehicle \(i\) is within the margin \(r_{mar\_obs}\) of obstacle \(j\), i.e., if \((\|X_{t}^{(i)}-X^{(j)}\|_{2}-r^{(j)}-r_{mar\_obs})<0\), and 0 otherwise (analogous to \(\Pi_{veh}^{i,j}\) below). Likewise, a penalty is also incurred if any vehicle \(i\) collides with vehicle \(j\) or is in its vicinity with a margin less than \(r_{mar\_veh}\). The cost \(C_{coll\_veh}\) is given as: \[\begin{split} C_{coll\_veh}&=\sum_{t=1}^{H}\sum_{i=1}^{N_{vehicle}-1}\sum_{j=i+1}^{N_{vehicle}}[\frac{1}{\|X_{t}^{(i)}-X_{t}^{(j)}\|_{2}}\\ &-\frac{1}{r_{mar\_veh}}]\cdot\Pi_{veh}^{i,j}\cdot w_{col\_veh}\end{split} \tag{7}\] \[\Pi_{veh}^{i,j}=\begin{cases}1&(\|X_{t}^{(i)}-X_{t}^{(j)}\|_{2}-r_{mar\_veh})<0\\ 0&\text{otherwise}\end{cases}\] Note that \(r_{mar\_obs}\) and \(r_{mar\_veh}\) are the safety margins with respect to the obstacles and the other dynamic vehicles. When an ego-vehicle enters the safety margin of another object, the collision cost starts to penalize it in inverse proportion to its distance to that object. Meanwhile, \(w_{col\_obs}\) and \(w_{col\_veh}\) are tuneable weights. Finally, we optimize for the control commands via: \[\min_{p,\varphi}[C_{tar}+C_{coll\_obs}+C_{coll\_veh}] \tag{8}\] This cost function is minimized using sequential least squares programming (SLSQP) to yield the optimal control commands.
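To make the label-generation procedure concrete, below is a minimal SciPy sketch for a single vehicle with no obstacles: it rolls out the dynamics of Eq. (4) over the horizon and minimizes the target cost of Eq. (5) with SLSQP. The collision terms of Eqs. (6)-(7) would be added to the same objective; all weights and parameter values here are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

DT, BETA, GAMMA = 0.1, 0.9, 1.0  # placeholder values for the tunable parameters
H = 20                           # optimization horizon
W_POS, W_ORIENT = 1.0, 0.1       # placeholder cost weights

def rollout(state, controls):
    """Roll out the kinematic bicycle model of Eq. (4) for H steps."""
    x, y, theta, v = state
    traj = []
    for p, phi in controls.reshape(H, 2):
        x += v * np.cos(theta) * DT
        y += v * np.sin(theta) * DT
        theta += v * np.tan(phi) * GAMMA * DT
        v = BETA * v + p * DT
        traj.append((x, y, theta))
    return np.array(traj)

def target_cost(controls, state, target):
    """Eq. (5): penalize distance to the target position and orientation."""
    traj = rollout(state, controls)
    pos_err = np.linalg.norm(traj[:, :2] - target[:2], axis=1)
    ang_err = np.abs(traj[:, 2] - target[2])
    return np.sum(W_POS * pos_err + W_ORIENT * ang_err)

state = np.array([0.0, 0.0, 0.0, 0.0])    # x, y, theta, v
target = np.array([5.0, 3.0, np.pi / 2])  # x_hat, y_hat, theta_hat
res = minimize(target_cost, np.zeros(2 * H), args=(state, target),
               method="SLSQP")
controls = res.x.reshape(H, 2)            # only controls[0] is used as a label
```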
## IV Experiments This section provides the results of our experiments. We first give details about the data generation and model training process in Subsection IV-A. Next, the evaluation metrics and the quantitative results are discussed in Subsection IV-B. Lastly, the behaviour of the attention weights of our A-GNN is analyzed for a scenario with three vehicles in Subsection IV-C. Meanwhile, videos demonstrating the qualitative performance of our model can be found on the project page. ### _Data Generation and Model Training_ We are not aware of any open source platform for the simultaneous control of multiple vehicles in unconstrained environments that also considers vehicle kinematics. Therefore, we create our own, as depicted in the provided codebase. The vehicles in this platform are maneuvered following the equations of motion described in Subsection III-B. For the purpose of training the model, we collect data using this platform and determine the labels by minimizing the cost function in Equation 8. The data for which labels are generated contain between 1-3 vehicles and 0-4 static obstacles, for a total of 20,961 trajectories. The start and destination states of the vehicles/obstacles are generated at random. Each trajectory is collected for 120 timesteps. Therefore, the total number of scenarios generated is 2,515,320. Note that increasing the number of vehicles and obstacles beyond 3 and 4, respectively, significantly slows down the optimization for determining the optimal control values during this data and label generation process. Nevertheless, we demonstrate in Table I that our model is still powerful enough to make inference for even more vehicles than it was trained on. The collected ground truth data is split into the training and validation set with a 4:1 ratio. The ground truth control data has a horizon of 20. However, only the control command predicted at the first step of the horizon is used to train the model. At inference time, this command is executed, which causes the vehicle to attain a new state. This new state is then fed back to the model to predict the new command. The process is iteratively repeated until the vehicle reaches the target state. Although the model is trained to predict only the first control command, this does not mean that the rest of the steps in the horizon are meaningless. Rather, a longer horizon enables the optimization to look further into the future and take early action to prevent a collision which would otherwise have been inevitable. When training the model, we treat the static obstacle nodes as a special type of vehicle that has zero velocity and for which the current state is the same as the target state. The control command output by the model for such static obstacle nodes is zero for both the steering angle and pedal acceleration. Treating the obstacle nodes in this manner helps the model learn that the actual dynamic vehicles should not overshoot but remain stationary once they have reached their respective target states. We use the MSE loss to train the model, which penalizes the difference between the predicted and the ground truth control variables. We use the Adam optimizer [38] with an initial learning rate of 0.01 and weight decay of 1e-6. The learning rate is reduced by a factor of 0.2 from its previous value if the validation loss does not decrease for 15 epochs. The number of training epochs is set to 500, but training is stopped early if there is no decrease in the validation loss for 50 epochs.
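A sketch of this training setup in PyTorch could look as follows; the `model`, `train_loader`, and `evaluate` arguments are assumed to be provided elsewhere, with `evaluate` returning the validation loss of the current model.

```python
import torch

def train(model, train_loader, evaluate):
    """Training loop following the schedule described above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-6)
    # Reduce the learning rate by a factor of 0.2 if the validation loss
    # stalls for 15 epochs.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.2, patience=15)

    best_val, stale = float("inf"), 0
    for epoch in range(500):
        model.train()
        for graph, labels in train_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(graph), labels)
            loss.backward()
            optimizer.step()

        val_loss = evaluate(model)
        scheduler.step(val_loss)
        if val_loss < best_val:
            best_val, stale = val_loss, 0
        else:
            stale += 1
            if stale >= 50:  # early stopping after 50 stagnant epochs
                break
```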
### _Quantitative results_ We evaluate the performance of our method in an online setting on a randomly generated dataset not seen during training. The test set contains scenarios having 1-6 vehicles and 0-4 obstacles. The model needs to predict the appropriate control commands such that all the vehicles in the test scenarios reach their target state without colliding with others. The online performance of the model is evaluated against three criteria: **1) Success-to-goal rate:** is defined as the percentage of vehicles that successfully reach their target state within a tolerance without colliding with other objects along the way. The location tolerance is set to 1.25 meters from the vehicle centers and the angle tolerance is set to 0.2 radians. A higher value of this metric is better. **2) Collision rate:** is defined as the total number of collisions caused by a model divided by the total distance travelled by all vehicles. Note that this metric is meaningless when evaluation is being done for one vehicle with no obstacles. This is because no collisions are expected to occur in such a scenario. A lower value of this metric is better. Lastly, note that the inverse of this metric describes the average total distance travelled before a collision happens. **3) Step efficiency:** A naive approach to solve the problem of navigating all the vehicles to the goal while avoiding collision is to control the individual vehicles one by one while keeping the other vehicles stationary. This considerably simplifies the problem and is akin to having one dynamic agent with multiple static obstacles. However, this approach is inefficient, taking more steps to solve the whole problem when there are many vehicles in the scene. Therefore, we introduce the step efficiency metric to show the advantage of our model in tackling all the vehicles simultaneously. It is the ratio of the lower bound of the number of steps required by running the vehicles one by one with the other vehicles kept stationary, divided by the number of steps used when navigating all the vehicles simultaneously using our approach. A higher value of this metric is better. For the step efficiency metric, calculating the total number of steps by running the vehicles one after the other is not trivial. This is because the total number of steps is influenced by the order in which the vehicles are run. Therefore, to ensure that the step efficiency metric is not affected by the permutation order, we use the lower bound of the actual number of steps for running the vehicles one after the other. To do this, all other vehicles are ignored when one of the vehicles is run with its steps being counted. This leads to a simplified version of the original task, since the vehicle being run only has to consider the presence of static obstacles. This leads to an equal or smaller number of steps taken to reach its destination. The same is done for all the other vehicles and their steps are summed to yield the lower bound of the actual number of steps taken. Since the summation is a permutation invariant function, the step efficiency metric is therefore not influenced by the vehicle order. This is implemented by simply removing the edges between vehicles in our GNN model, so that all the vehicles can be run simultaneously rather than one by one to obtain the total steps.
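Before turning to the baselines, here is a small sketch of how the three metrics defined above could be aggregated over a set of test episodes; the per-episode record fields are assumptions chosen to match those definitions.

```python
def aggregate_metrics(episodes):
    """Each episode is assumed to record: n_reached, n_vehicles,
    n_collisions, distance, steps, and steps_lower_bound."""
    reached = sum(e["n_reached"] for e in episodes)
    total = sum(e["n_vehicles"] for e in episodes)
    success_to_goal = reached / total

    collisions = sum(e["n_collisions"] for e in episodes)
    distance = sum(e["distance"] for e in episodes)
    collision_rate = collisions / distance  # inverse: distance per collision

    lower_bound = sum(e["steps_lower_bound"] for e in episodes)
    steps = sum(e["steps"] for e in episodes)
    step_efficiency = lower_bound / steps
    return success_to_goal, collision_rate, step_efficiency
```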
**Comparison with other methods:** We make a comparison with three additional graph neural network architectures: the attention-based GAIN [12] and TransformerConv [13], and the non-attention-based EdgeConv [14]. A comparison is also made with a naive Multi-Layer Perceptron (MLP) model, which has a similar architecture to our model but does not utilize any graphical edges in its structure.

**Training:** The training process for all these models is similar to that of our method described in Subsection IV-A. The difference is in the respective architectures. GAIN [12] and TransformerConv [13] do not utilize the relative information of the neighbouring edge nodes as was done for our case in Equation 2. They also do not utilize a jointly trainable U-Net-inspired architecture with shared encoder weights for the attention mechanism. Meanwhile, EdgeConv [14] differs from our architecture in that a node does not attend to other nodes, and the MLP does not use any graphical edges in its structure. Note that GAIN [12] had utilized the historical trajectory information of the road vehicles as input. We only use the current and target states and therefore do not require processing of any temporal information. Moreover, rather than having the entire road network in the map topology guiding the vehicles, our environment is unconstrained. Therefore, to adapt GAIN to our pipeline, we replace all our graph attention layers with the graph attention isomorphism operator introduced in [12], without needing any historical trajectory information or the entire road map topology. For TransformerConv [13] and EdgeConv [14], we directly use the implementation from [39].

**Evaluation:** Experimental results for all the models are shown in Table I. Note that our approach consistently performs better than all the other approaches. The model even performs well when there are 4-6 vehicles in the scene. It is important to highlight that our training data only contained up to a maximum of 3 vehicles; this demonstrates the predictive power of our A-GNN, which generalizes to even more vehicles than it was trained on. We now give some observations about the performance of our A-GNN in Table I, which is superior to the other three graph neural network architectures and the naive MLP model without edges. It can be seen that the collision rate of our model is much lower and the success-to-goal rate is higher than all the others. The reason for the poor performance of the other three models is that they only seem to learn to reach the target but do not learn how to avoid collision. Figure 2 shows that, in pursuit of reaching their desired destination, the vehicles controlled by the other models collide with one another or with other obstacles. Hence, they have an extremely high collision rate. The reason why our model outperforms all the other models is the difference in architectures. Primarily, we use a U-Net-inspired attention mechanism with shared encoder weights, along with concatenating relative information between the self-node and the neighbouring nodes when determining the attention weights; a rough sketch is given below. Subsection VII-B shows the contribution of these architectural components: removing either the U-Net architecture or the concatenation of the relative information reduces the performance. Furthermore, Figure 2 shows our model can generalize when there are 6 vehicles in the environment. Such a scenario was not present in the training dataset, which was limited to a maximum of 3 vehicles. Meanwhile, the other models show poor performance in these scenarios, with plenty of collisions among the vehicles.
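As referenced above, a rough sketch of an attention layer that scores a node against its neighbours using concatenated relative information and a shared encoder is given below; the layer sizes and the exact scoring form are illustrative assumptions, not our precise architecture from Equation 2:

```python
import torch
import torch.nn as nn

class RelativeGraphAttention(nn.Module):
    """Toy attention layer: scores depend on the self node, the neighbour node,
    and their relative state, mirroring the idea (not the exact form) of Equation 2."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)     # shared encoder weights
        self.score = nn.Linear(3 * dim, 1)     # [self, neighbour, relative]

    def forward(self, h):                       # h: (num_nodes, dim)
        z = torch.tanh(self.encoder(h))
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)    # self node, broadcast over neighbours
        zj = z.unsqueeze(0).expand(n, n, -1)    # neighbour node
        logits = self.score(torch.cat([zi, zj, zj - zi], dim=-1)).squeeze(-1)
        attn = torch.softmax(logits, dim=-1)    # row i attends over all nodes j
        return attn @ z, attn                   # aggregated features, attention matrix
```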
Note that in Table I, the success-to-goal rate for our model is not a perfect score of 1. The reason the model fails to always reach the target destination is that it tends to behave conservatively. Rather than taking the risk of collision, it sometimes stops mid-way before other objects to avoid collision. This is because the ground truth data obtained from the optimization is not always perfect. Nevertheless, it is important to keep such failed samples in the dataset for training, as they teach the model to stop before other objects to avoid collision. If these failure samples are removed from the training set, then the model performs worse, as it starts colliding with other obstacles.

Table I: Success-to-goal rate (higher is better), collision rate (lower is better), and step efficiency (higher is better) for our model, GAIN, TransformerConv, EdgeConv, and MLP across test scenarios with 1-6 vehicles and 0-4 obstacles.

Lastly, since our model is the best performing among all others, the last column in Table I also provides the _step efficiency_ metric for it. It can be observed that our method of executing the control commands for all vehicles simultaneously tends to be faster than running the vehicles one by one.

### _Attention Weights Analysis_

To get some intuition about the behavior of our model, we visualize the mean of the attention logits from all the graph attention layers of our model. Figure 4 shows a visualization of these attention logits as a matrix for four timesteps \(t_{1}\), \(t_{2}\), \(t_{3}\), and \(t_{4}\) of a scenario with 3 vehicles and 2 obstacles. The rows in the attention matrix correspond to the vehicle of interest. The columns show which vehicle/obstacle is being attended to. A lighter shade in the attention matrix depicts high attention and a darker shade represents a lack of attention. Because the static obstacles are stationary and are never a hindrance to any of the 3 vehicles, the attention logits of all the vehicles on them are very low (dark pixels in the last 2 columns) for all four timesteps. At time \(t_{1}\), the vehicles are relatively far away from each other, so the attention logits are relatively low.
At \(t=t_{2}\), the **red** vehicle (vehicle 2) is in the way of the **black** vehicle (vehicle 1), the **green** vehicle (vehicle 3) is in the way of the **red** vehicle (vehicle 2), the **red** vehicle (vehicle 2) is in the way of the **green** vehicle (vehicle 3), and all three vehicles are close to each other. So each vehicle pays more attention to the other vehicles that are in its way to the target and in close proximity. These three corresponding pixels are therefore very bright. At \(t=t_{3}\), the **red** vehicle (vehicle 2) decides to stop and let the **green** vehicle (vehicle 3) pass through, so the attention of the **green** vehicle (vehicle 3) on the **red** vehicle (vehicle 2) goes down. However, the **red** vehicle (vehicle 2) needs to wait until the **green** vehicle (vehicle 3) passes through, so the attention of the **red** vehicle (vehicle 2) on the **green** vehicle (vehicle 3) is still high. And the **black** vehicle (vehicle 1) has found its way around the **red** vehicle (vehicle 2), so the attention of the **black** vehicle (vehicle 1) on the **red** vehicle (vehicle 2) also goes down. Finally, all the vehicles reach their goals, and their attention on each other reduces. This qualitative example demonstrates that our model is able to learn driving behaviour analogous to how a human would make decisions. However, more experiments and quantitative studies would be required to better understand the reasoning process behind these graphical neural architectures. Therefore, this work can potentially be used as a starting point for interpreting the decision-making capabilities of trained models before their deployment in relevant and related applications.

## V Future Work

As we discussed in Section II, concurrent Reinforcement Learning (RL) methods do not consider the vehicle kinematics and also tend to be heavily sample-inefficient, requiring far more steps than our method to train the model. However, the advantage is that they can be trained without labels obtained from optimization. Therefore, to exploit this aspect, we can extend this work by using our pretrained model as an initializer to prevent the cold-start problem associated with Reinforcement Learning. This way, the model can be trained by progressively adding more vehicles into the environment. This prevents the cumbersome offline optimization process.

Fig. 2: Shows the trajectories traversed by the vehicles as a result of applying the control commands predicted by the five different models for a scenario containing 6 vehicles and 3 obstacles. The red dots on the trajectories show the points of collision between the vehicles. Except for our model, all other models have plenty of collisions. The video can be found here.

Fig. 3: An example of a conservative optimization yielding a sample trajectory wherein the vehicles halt rather than take the risk of collision in an attempt to reach their destination by trying to pass between the two obstacles.

## VI Conclusion

In this paper, we proposed an attention-based graphical neural network model. It is capable of predicting the control commands for multiple agents to reach their desired destinations without collision. We demonstrated that utilizing information from both the ego-vehicle and neighboring agents in the graphical layers helps the model generalize well. In fact, the model performs well even in situations with more vehicles and obstacles than those in the training set.
_Acknowledgements:_ We thank Marc Brede for helping with the initial setup of the label generation process in the early phase of the project.
2309.05208
Quaternion MLP Neural Networks Based on the Maximum Correntropy Criterion
We propose a gradient ascent algorithm for quaternion multilayer perceptron (MLP) networks based on the cost function of the maximum correntropy criterion (MCC). In the algorithm, we use the split quaternion activation function based on the generalized Hamilton-real quaternion gradient. By introducing a new quaternion operator, we first rewrite the early quaternion single layer perceptron algorithm. Secondly, we propose a gradient descent algorithm for quaternion multilayer perceptron based on the cost function of the mean square error (MSE). Finally, the MSE algorithm is extended to the MCC algorithm. Simulations show the feasibility of the proposed method.
Gang Wang, Xinyu Tian, Zuxuan Zhang
2023-09-11T02:56:55Z
http://arxiv.org/abs/2309.05208v2
# Quaternion MLP Neural Networks Based on the Maximum Correntropy Criterion

###### Abstract

We propose a gradient ascent algorithm for quaternion multilayer perceptron (MLP) networks based on the cost function of the maximum correntropy criterion (MCC). In the algorithm, we use the split quaternion activation function based on the generalized Hamilton-real quaternion gradient. By introducing a new quaternion operator, we first rewrite the early quaternion single layer perceptron algorithm. Secondly, we propose a gradient descent algorithm for quaternion multilayer perceptron based on the cost function of the mean square error (MSE). Finally, the MSE algorithm is extended to the MCC algorithm. Simulations show the feasibility of the proposed method.

Generalized Hamilton-real, least mean square, quaternion gradient, Wirtinger calculus

## I Introduction

Multilayer perceptron (MLP) neural networks are the most popular neural networks. The connection weights can be adjusted to represent the input-output relations. Recently, MLP neural network models have been explored based on quaternions in [1, 2, 3, 4, 5, 6], because 4-D signals in the quaternion domain can incorporate mutual information to reduce complexity. Applications of quaternion neural networks include image classification [7, 8, 9], signal classification [10, 11], image retrieval [12], the control problem [13, 14], the prediction of time series [15, 16, 17], forecasting three-dimensional wind signals [35], pose estimation [18, 19], and convolutional neural networks [20, 21, 22]. The adjusting process of the quaternion connection weights is also called learning. Quaternion learning algorithms can be loosely divided into non-gradient-based methods, such as extreme learning machines [23, 24, 25], and gradient-based methods, such as least-mean-square (LMS) algorithms [26]. This paper is focused on gradient-based methods. When the LMS algorithm, or the back-propagation algorithm, of MLP neural networks was generalized from the real domain to the complex domain, the analyticity of the activation functions had to be studied [27, 28, 29]. If an activation function is analytic and bounded, then it is a constant function. As an alternative, a split complex activation function was proposed for the MLP. Following the split idea in the complex domain, a split quaternion activation function was proposed in and used in a nonlinear filter, or single layer perceptron (SLP). In addition to the split function, in and, another activation function was introduced using a local analyticity condition for a quaternion nonlinear filter and echo state networks, respectively. However, the quaternion gradient in misused the non-commutative property, and this prevented the extension of the quaternion SLP to the MLP. From 2014 to 2019, the correct quaternion gradient was proposed using three approaches: generalized Hamilton-real (GHR) calculus [30, 31, 32], the quaternion product [33], and the quaternion involutions [34]. Hitzer detected the gradient mistake in, and obtained corrections based on Clifford calculus. The LMS algorithm is based on the cost function of the mean square error (MSE), and works well in Gaussian additive noise, whose higher-order statistics are constants. In real scenarios, non-Gaussian problems arise [36]. The signals are often contaminated by non-Gaussian noise or impulsive noise, and the performance of the above LMS algorithm may be poor in non-Gaussian noise. Recently, information theoretic learning was proposed for non-Gaussian noise.
The proposed cost functions include [12] the maximum correntropy criterion (MCC), improved least sum of exponentials, and least mean kurtosis. The MCC is robust to large outliers or impulsive noise, and has been applied in quaternion filters. This motivates us to study quaternion MLP networks based on the MCC [18, 19]. In this paper, we revisit the split quaternion activation function based on the GHR quaternion gradient, and rewrite the quaternion nonlinear filter algorithm by introducing a new quaternion operator. Then we propose two LMS algorithms for the quaternion MLP based on the MSE and MCC. Simulations show the feasibility of the proposed method [37, 38].

## II Preliminaries

### _Quaternion Algebra_

The quaternion domain is a non-commutative extension of the complex domain. A quaternion variable consists of a real part and three imaginary components and can be expressed as \[\mathrm{q}=\mathrm{q}_{\mathrm{a}}+\iota\mathrm{q}_{\mathrm{b}}+J\mathrm{q}_{\mathrm{c}}+\kappa\mathrm{q}_{\mathrm{d}},\] where the imaginary units satisfy \[\left\{\begin{array}{l}\iota J=\kappa,\;J\kappa=\iota,\;\kappa\iota=J,\\ \iota J\kappa=\iota^{2}=J^{2}=\kappa^{2}=-1.\end{array}\right.\] The quaternion product is non-commutative, that is, \[\mathrm{q}_{1}\mathrm{q}_{2}\neq\mathrm{q}_{2}\mathrm{q}_{1}.\] Three perpendicular quaternion involutions are given by \[\left\{\begin{array}{l}\mathrm{q}^{\iota}=-\iota\mathrm{q}\iota=\mathrm{q}_{\mathrm{a}}+\iota\mathrm{q}_{\mathrm{b}}-J\mathrm{q}_{\mathrm{c}}-\kappa\mathrm{q}_{\mathrm{d}},\\ \mathrm{q}^{J}=-J\mathrm{q}J=\mathrm{q}_{\mathrm{a}}-\iota\mathrm{q}_{\mathrm{b}}+J\mathrm{q}_{\mathrm{c}}-\kappa\mathrm{q}_{\mathrm{d}},\\ \mathrm{q}^{\kappa}=-\kappa\mathrm{q}\kappa=\mathrm{q}_{\mathrm{a}}-\iota\mathrm{q}_{\mathrm{b}}-J\mathrm{q}_{\mathrm{c}}+\kappa\mathrm{q}_{\mathrm{d}}.\end{array}\right.\] Then the quaternion conjugate operation can be expressed as \[\mathrm{q}^{*}=\mathrm{q}_{\mathrm{a}}-\iota\mathrm{q}_{\mathrm{b}}-J\mathrm{q}_{\mathrm{c}}-\kappa\mathrm{q}_{\mathrm{d}}=\frac{1}{2}\left(\mathrm{q}^{\iota}+\mathrm{q}^{J}+\mathrm{q}^{\kappa}-\mathrm{q}\right).\] The norm of a quaternion is defined as \[\left\|\mathrm{q}\right\|_{2}^{2}=\mathrm{q}\mathrm{q}^{*}=\mathrm{q}_{\mathrm{a}}^{2}+\mathrm{q}_{\mathrm{b}}^{2}+\mathrm{q}_{\mathrm{c}}^{2}+\mathrm{q}_{\mathrm{d}}^{2}.\] HR calculus is an extension of Wirtinger calculus, which is also called CR calculus. CR calculus provides a simple and straightforward approach to calculating derivatives with respect to complex parameters. HR calculus comprises the HR derivatives \[\frac{\partial J}{\partial\mathrm{q}^{*}}=\frac{1}{4}\left(\frac{\partial J}{\partial\mathrm{q}_{a}}+\frac{\partial J}{\partial\mathrm{q}_{b}}\iota+\frac{\partial J}{\partial\mathrm{q}_{c}}J+\frac{\partial J}{\partial\mathrm{q}_{d}}\kappa\right).\]

### _Quaternion Split Product_

Definition 1. (Quaternion Split Product) For two quaternions \(x,y\in H^{\mathrm{m}}\), the quaternion split product is defined as \[x\odot y=y\odot x=x_{\mathrm{a}}y_{\mathrm{a}}+\iota x_{\mathrm{b}}y_{\mathrm{b}}+Jx_{\mathrm{c}}y_{\mathrm{c}}+\kappa x_{\mathrm{d}}y_{\mathrm{d}}\in H^{\mathrm{m}},\] where the component products \(x_{\delta}y_{\delta}\), \(\delta\in\{a,b,c,d\}\), are taken element-wise (dot products for vector arguments).
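As a quick numerical check of these identities, the following NumPy sketch represents a quaternion as a 4-vector \((q_a,q_b,q_c,q_d)\); the helper names are ours, not the paper's notation:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def split_product(x, y):
    """Component-wise split product x ⊙ y from Definition 1."""
    return x * y

q1, q2 = np.array([1., 2., 3., 4.]), np.array([0.5, -1., 2., 0.])
print(qmul(q1, q2) - qmul(q2, q1))             # nonzero: product is non-commutative
print(qmul(q1, qconj(q1))[0], np.sum(q1**2))   # ||q||^2 = q q* (real part)

iota = np.array([0., 1., 0., 0.])
print(-qmul(qmul(iota, q1), iota))             # involution q^ι flips the J and κ parts
```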
The quaternion split product has the following property:
\[\left(x\odot y\right)^{*}=x\odot y^{*}=x^{*}\odot y=y^{*}\odot x=y\odot x^{*}.\]

### _Nonlinear Filtering or SLP_

When the quaternion nonlinear filtering problem is considered at discrete time \(i\), there is an input quaternion vector \(\mathbf{u}(i)\in H^{n}\) with the unknown original quaternion parameter \(\mathbf{w}^{o}\in H^{n}\) and the desired response \(d(i)\in H^{1}\), as follows:
\[\left\{\begin{array}{l}\mathrm{x}=\mathbf{w}^{H}\mathbf{u}(i),\\ \mathrm{x}=\mathrm{x}_{\mathrm{a}}+\iota\mathrm{x}_{\mathrm{b}}+J\mathrm{x}_{\mathrm{c}}+\kappa\mathrm{x}_{\mathrm{d}}\in H^{1},\\ \mathbf{w}=\mathbf{w}_{\mathrm{a}}+\iota\mathbf{w}_{\mathrm{b}}+J\mathbf{w}_{\mathrm{c}}+\kappa\mathbf{w}_{\mathrm{d}}\in H^{n},\\ \mathbf{u}(i)=\mathbf{u}_{\mathrm{a}}+\iota\mathbf{u}_{\mathrm{b}}+J\mathbf{u}_{\mathrm{c}}+\kappa\mathbf{u}_{\mathrm{d}}\in H^{n}.\end{array}\right.\]
The error signal for the quaternion nonlinear filter is defined as
\[e(i)=d(i)-\Phi\left[\mathbf{w}^{H}\mathbf{u}(i)\right].\]
The quaternion nonlinear function, or the activation function, is defined as
\[\Phi(x)=\phi(x_{a})+\phi(x_{b})\iota+\phi(x_{c})J+\phi(x_{d})\kappa,\]
where
\[\left\{\begin{array}{l}\phi(a)=\tanh(a),\\ \frac{\partial\phi(a)}{\partial a}=\mathrm{sech}^{2}(a).\end{array}\right.\]
The gradient of (13) is
\[4\frac{\partial\Phi}{\partial x^{*}}=\mathrm{sech}^{2}(x_{a})+\mathrm{sech}^{2}(x_{b})\iota+\mathrm{sech}^{2}(x_{c})J+\mathrm{sech}^{2}(x_{d})\kappa.\]
The cost function of the MSE, used to estimate the parameter \(\mathbf{w}\), is defined as
\[J(\mathbf{w})=e(i)e^{*}(i)=e(i)\left(d^{*}(i)-\Phi^{*}[x]\right)=\left(d(i)-\Phi[x]\right)e^{*}(i).\]
The corresponding gradient is defined as
\[\frac{\partial J}{\partial\mathbf{w}^{*}}=\frac{1}{4}\left(\frac{\partial J}{\partial\mathbf{w}_{a}}+\frac{\partial J}{\partial\mathbf{w}_{b}}\iota+\frac{\partial J}{\partial\mathbf{w}_{c}}J+\frac{\partial J}{\partial\mathbf{w}_{d}}\kappa\right).\]
The quaternion gradient descent algorithm was proposed in [35]:
\[\mathbf{w}(i+1)=\mathbf{w}(i)+\eta\mathbf{u}(i)\left[\mathrm{sech}^{2}(x_{a})e_{a}-\mathrm{sech}^{2}(x_{b})e_{b}\iota-\mathrm{sech}^{2}(x_{c})e_{c}J-\mathrm{sech}^{2}(x_{d})e_{d}\kappa\right],\]
where \(\eta\) is the step size. Using the definition of the quaternion split product (8), we rewrite it as
\[\mathbf{w}(i+1)=\mathbf{w}(i)+\eta\mathbf{u}(i)\left[\frac{\partial\Phi}{\partial x^{*}}\odot e^{*}(i)\right],\]
which has a similar expression to that in the complex domain.
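A minimal NumPy sketch of this split-activation SLP update, using the same 4-vector convention as the previous snippet, might read as follows; the \(1/4\) scaling of the gradient is absorbed into the step size \(\eta\):

```python
import numpy as np

def qmul(p, q):  # Hamilton product, quaternions as (a, b, c, d)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

qconj = lambda q: q * np.array([1., -1., -1., -1.])

def slp_step(w, u, d, eta=0.01):
    """One split-quaternion SLP update; w, u: (n, 4) arrays, d: (4,) target."""
    x = sum(qmul(qconj(wk), uk) for wk, uk in zip(w, u))  # x = w^H u
    e = d - np.tanh(x)                                    # split activation and error
    grad = 1.0 / np.cosh(x) ** 2                          # sech^2 per component
    g = grad * qconj(e)                                   # (∂Φ/∂x*) ⊙ e*, up to 1/4
    return w + eta * np.array([qmul(uk, g) for uk in u])  # w += η u (∂Φ/∂x* ⊙ e*)

rng = np.random.default_rng(0)
w = slp_step(rng.normal(size=(5, 4)), rng.normal(size=(5, 4)), rng.normal(size=4))
```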
## III Quaternion MLP Based on the MSE

Fig. 1 shows the schematic diagram of the MLP. The following expressions are used:
\[\left\{\begin{array}{l}\mathbf{y}=\mathbf{W}^{H}\mathbf{x}+\mathbf{p},\\ \Psi(\mathbf{y})=\Psi(\mathbf{W}^{H}\mathbf{x}+\mathbf{p}),\\ z=\mathbf{v}^{H}\Psi(\mathbf{y})+q,\\ h_{\mathbf{W},\mathbf{v}}(\mathbf{x})=\Phi(z)=\Phi\left[\mathbf{v}^{H}\Psi(\mathbf{W}^{H}\mathbf{x}+\mathbf{p})+q\right],\end{array}\right.\]
where
\[\left\{\begin{array}{l}\mathbf{y}=\left[\mathrm{y}_{1}\;\mathrm{y}_{2}\;\cdots\;\mathrm{y}_{n}\right]^{T}\in H^{n},\\ \mathbf{W}=\left[\mathbf{w}_{1}\;\mathbf{w}_{2}\;\cdots\;\mathbf{w}_{n}\right]\in H^{m\times n},\\ \mathbf{p}=\left[\mathrm{p}_{1}\;\mathrm{p}_{2}\;\cdots\;\mathrm{p}_{n}\right]^{T}\in H^{n},\\ \mathbf{v}=\left[\mathrm{v}_{1}\;\mathrm{v}_{2}\;\cdots\;\mathrm{v}_{n}\right]^{T}\in H^{n},\\ \mathrm{y}_{i}=\mathbf{w}_{i}^{H}\mathbf{x}+\mathrm{p}_{i},\;i=1,2,\cdots,n,\\ z,q,\Phi(z)\in H^{1},\;\mathbf{x}\in H^{m}.\end{array}\right.\]
The activation functions have the same expressions:
\[\left\{\begin{array}{l}\Psi(\mathbf{y})=\phi(\mathbf{y}_{a})+\phi(\mathbf{y}_{b})\iota+\phi(\mathbf{y}_{c})J+\phi(\mathbf{y}_{d})\kappa,\\ \Phi(z)=\phi(z_{a})+\phi(z_{b})\iota+\phi(z_{c})J+\phi(z_{d})\kappa.\end{array}\right.\]

Figure 1: Schematic diagram of the MLP

The cost function, used to estimate the parameters \(\mathbf{W}\), \(\mathbf{v}\), \(\mathbf{p}\), and \(q\), is defined as
\[\left\{\begin{array}{l}J(\mathbf{W},\mathbf{v},\mathbf{p},q)=ee^{*}=e\left(d^{*}-\Phi^{*}[z]\right)=\left(d-\Phi[z]\right)e^{*},\\ e=d-\Phi[z].\end{array}\right.\]

### Quaternion Gradient for V and q

The gradients of \(\mathbf{v}\) and \(q\) are similar to the gradient of \(\mathbf{w}\) in the above nonlinear filter, and can be obtained directly from (18) as follows:
\[-4\frac{\partial J}{\partial q^{*}}=2\frac{\partial\Phi(z)}{\partial z^{*}}\odot e,\]
\[-4\frac{\partial J}{\partial\mathbf{v}^{*}}=2\Psi(\mathbf{y})\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e^{*}\right],\]
where the component-wise \(\mathrm{sech}^{2}\) expansions follow from the split product. Those two equations have similar expressions to those in the complex domain.
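The forward pass \(h_{\mathbf{W},\mathbf{v}}(\mathbf{x})=\Phi[\mathbf{v}^{H}\Psi(\mathbf{W}^{H}\mathbf{x}+\mathbf{p})+q]\) can be sketched in NumPy as follows; the array shapes are our convention for storing quaternions on the trailing axis:

```python
import numpy as np

def qmul(p, q):  # Hamilton product on trailing axis (a, b, c, d)
    a1, b1, c1, d1 = np.moveaxis(p, -1, 0)
    a2, b2, c2, d2 = np.moveaxis(q, -1, 0)
    return np.stack([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2], axis=-1)

qconj = lambda q: q * np.array([1., -1., -1., -1.])

def forward(W, p, v, q, x):
    """h(x) = Φ(v^H Ψ(W^H x + p) + q).
    Shapes: W (m, n, 4), p and v (n, 4), q (4,), x (m, 4)."""
    y = qmul(qconj(W), x[:, None, :]).sum(axis=0) + p   # y_i = w_i^H x + p_i
    psi = np.tanh(y)                                    # split activation Ψ
    z = qmul(qconj(v), psi).sum(axis=0) + q             # z = v^H Ψ(y) + q
    return np.tanh(z)                                   # Φ(z)

m, n = 3, 5
rng = np.random.default_rng(1)
out = forward(rng.normal(size=(m, n, 4)), rng.normal(size=(n, 4)),
              rng.normal(size=(n, 4)), rng.normal(size=4), rng.normal(size=(m, 4)))
```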
### Quaternion Gradient for p

We rewrite the forward pass in its real-vector form to use the chain rule:
\[\left[\begin{array}{c}\mathbf{y}_{a}\\ \mathbf{y}_{b}\\ \mathbf{y}_{c}\\ \mathbf{y}_{d}\end{array}\right]=\left[\begin{array}{cccc}\mathbf{W}_{a}^{T}&\mathbf{W}_{b}^{T}&\mathbf{W}_{c}^{T}&\mathbf{W}_{d}^{T}\\ -\mathbf{W}_{b}^{T}&\mathbf{W}_{a}^{T}&\mathbf{W}_{d}^{T}&-\mathbf{W}_{c}^{T}\\ -\mathbf{W}_{c}^{T}&-\mathbf{W}_{d}^{T}&\mathbf{W}_{a}^{T}&\mathbf{W}_{b}^{T}\\ -\mathbf{W}_{d}^{T}&\mathbf{W}_{c}^{T}&-\mathbf{W}_{b}^{T}&\mathbf{W}_{a}^{T}\end{array}\right]\left[\begin{array}{c}\mathbf{x}_{a}\\ \mathbf{x}_{b}\\ \mathbf{x}_{c}\\ \mathbf{x}_{d}\end{array}\right]+\left[\begin{array}{c}\mathbf{p}_{a}\\ \mathbf{p}_{b}\\ \mathbf{p}_{c}\\ \mathbf{p}_{d}\end{array}\right],\]
with the analogous real-vector form for \(z=\mathbf{v}^{H}\Psi(\mathbf{y})+q\), and
\[\Phi[z]=\phi(z_{a})+\phi(z_{b})\iota+\phi(z_{c})J+\phi(z_{d})\kappa.\]
The corresponding gradient of \(\mathbf{p}\) is defined as
\[\frac{\partial J}{\partial\mathbf{p}^{*}}=\frac{1}{4}\left(\frac{\partial J}{\partial\mathbf{p}_{a}}+\frac{\partial J}{\partial\mathbf{p}_{b}}\iota+\frac{\partial J}{\partial\mathbf{p}_{c}}J+\frac{\partial J}{\partial\mathbf{p}_{d}}\kappa\right),\]
where
\[\frac{\partial J}{\partial\mathbf{p}_{\delta}}=-\frac{\partial e\Phi^{*}[z]}{\partial\mathbf{p}_{\delta}}-\frac{\partial\Phi[z]e^{*}}{\partial\mathbf{p}_{\delta}},\quad\delta\in\{a,b,c,d\}.\]
Note that
\[\frac{\partial e\Phi^{*}[z]}{\partial\mathbf{p}_{\delta}}=\left(\frac{\partial\Phi[z]e^{*}}{\partial\mathbf{p}_{\delta}}\right)^{*},\quad\delta\in\{a,b,c,d\}.\]
We only provide the details of \(\partial e\Phi^{*}[z]/\partial\mathbf{p}_{a}\), because of the limitation of space:
\[\frac{\partial e\Phi^{*}[z]}{\partial\mathbf{p}_{a}}=e\left[\frac{\partial\phi(z_{a})}{\partial\mathbf{p}_{a}}-\frac{\partial\phi(z_{b})}{\partial\mathbf{p}_{a}}\iota-\frac{\partial\phi(z_{c})}{\partial\mathbf{p}_{a}}J-\frac{\partial\phi(z_{d})}{\partial\mathbf{p}_{a}}\kappa\right],\]
in which each partial derivative contributes \(\mathrm{sech}^{2}\) factors from the two activation layers, with terms of the form \(\mathrm{sech}^{2}(z_{\delta})\left[\mathbf{v}_{\delta^{\prime}}\mathrm{sech}^{2}(\mathbf{y}_{a})\right]e\). Collecting all the terms yields the compact result
\[-4\frac{\partial J}{\partial\mathbf{p}^{*}}=2\left(\mathbf{v}\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e\right]\right)\odot\frac{\partial\Psi(\mathbf{y})}{\partial\mathbf{y}^{*}}.\]

### Quaternion Gradient for W

The gradient of \(\mathbf{W}\) is obtained by the same procedure. Note that
\[\frac{\partial e\Phi^{*}[z]}{\partial\mathbf{W}_{\delta}}=\left(\frac{\partial\Phi[z]e^{*}}{\partial\mathbf{W}_{\delta}}\right)^{*},\quad\delta\in\{a,b,c,d\}.\]
Because of the limitation of space, we omit the component-wise details; expanding \(\partial e\Phi^{*}[z]/\partial\mathbf{W}_{a}\) by the chain rule produces outer products of the form \(\mathbf{x}_{\delta}\left[\mathbf{v}_{\delta^{\prime}}\mathrm{sech}^{2}(\mathbf{y}_{\delta^{\prime\prime}})\right]^{T}\), and the other components are obtained similarly. Then the gradient of \(\mathbf{W}\) is
\[-4\frac{\partial J}{\partial\mathbf{W}^{*}}=2\mathbf{x}\left(\left(\mathbf{v}\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e\right]\right)\odot\left(\frac{\partial\Psi[\mathbf{y}]}{\partial\mathbf{y}^{*}}\right)\right)^{H}.\]
Finally, the quaternion LMS algorithm for the MLP (19) with the split quaternion activation function is summarized as
\[q(i+1)=q(i)+\eta_{q}\frac{\partial\Phi(z)}{\partial z^{*}}\odot e(i),\]
\[\mathbf{V}(i+1)=\mathbf{V}(i)+\eta_{v}\Psi(\mathbf{y})\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e^{*}(i)\right],\]
\[\mathbf{p}(i+1)=\mathbf{p}(i)+\eta_{p}\left(\mathbf{v}\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e\right]\right)\odot\frac{\partial\Psi(\mathbf{y})}{\partial\mathbf{y}^{*}},\]
\[\mathbf{W}(i+1)=\mathbf{W}(i)+\eta_{w}\mathbf{x}\left[\left(\mathbf{v}\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e\right]\right)\odot\left(\frac{\partial\Psi(\mathbf{y})}{\partial\mathbf{y}^{*}}\right)\right]^{H}.\]
The obtained algorithm (38a-d) has a similar expression to that in the real and complex domains.

## IV Quaternion MLP Based on the MCC

The cost function of the MCC is defined as
\[J_{MCC}(\mathbf{w})=\exp\left(-\frac{e(i)e^{*}(i)}{2\sigma^{2}}\right),\]
to estimate the parameter \(\mathbf{w}\in H^{n}\).
Then, the corresponding quaternion stochastic gradient ascent algorithm for the MLP based on the MCC can be derived from (38a-d) as
\[q(i+1)=q(i)+\eta_{q}J_{MCC}(\mathbf{w})\frac{\partial\Phi(z)}{\partial z^{*}}\odot e(i),\]
\[\mathbf{V}(i+1)=\mathbf{V}(i)+\eta_{v}J_{MCC}(\mathbf{w})\Psi(\mathbf{y})\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e^{*}(i)\right],\]
\[\mathbf{p}(i+1)=\mathbf{p}(i)+\eta_{p}J_{MCC}(\mathbf{w})\left(\mathbf{v}\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e\right]\right)\odot\frac{\partial\Psi(\mathbf{y})}{\partial\mathbf{y}^{*}},\]
\[\mathbf{W}(i+1)=\mathbf{W}(i)+\eta_{w}J_{MCC}(\mathbf{w})\mathbf{x}\left[\left(\mathbf{v}\left[\frac{\partial\Phi(z)}{\partial z^{*}}\odot e\right]\right)\odot\left(\frac{\partial\Psi(\mathbf{y})}{\partial\mathbf{y}^{*}}\right)\right]^{H}.\]

## V Simulation

In the simulations, we use the two proposed quaternion MLP algorithms to predict a chaotic time series generated by the following Mackey-Glass time-delay differential equation:
\[\dot{x}(t)=\frac{0.2x(t-\tau)}{1+x^{10}(t-\tau)}-0.1x(t),\]
where \(\tau=17\), \(x(0)=0.12\), and \(x(t)=0\) for all \(t<0\). In this prediction task, we used the previous five historical samples to predict the next sample, so the input size is five and the output size is one.

### _Prediction without Noise_

First, we simulated the one-step prediction of the Mackey-Glass data without any noise. We set the step size. Fig. 2 shows that the MSE algorithm and the MCC algorithm obtained similar results, so we only present the prediction results of the MSE algorithm in Fig. 3. The prediction results include one real part and three imaginary components.

### _Prediction under Noise_

Additionally, we assumed that the Mackey-Glass data are noisy. To simulate representative noise scenarios, two types of noise signals were generated. The first type of noise was Gaussian. We set the step size; the simulations of the MCC and MSE algorithms are shown in Figs. 4 and 5. From the comparison, we found that the MCC algorithm converged more slowly than the MSE algorithm, while the steady-state errors of the MCC algorithm were lower than those of the MSE algorithm. The difference between the steady-state errors of the two algorithms is more obvious in Figure 3(b). The second type of noise was impulsive; the corresponding error comparisons are shown in Figs. 6 and 7.

Fig. 2: Error comparison between the MCC and MSE without noise

Fig. 3: One-step prediction results for the quaternion MSE without noise

Fig. 4: Error comparison under Gaussian noise

Fig. 5: Error comparison under Gaussian noise

Fig. 6: Error comparison under impulsive noise

Fig. 7: Error comparison under impulsive noise

## VI Conclusion

In this paper, we have proposed two algorithms for quaternion MLP networks based on the cost functions of the MSE and MCC, respectively. In the algorithms, we utilized the GHR quaternion gradient. The two algorithms performed similarly under noiseless and Gaussian-noise circumstances. The MCC algorithm performed better than the MSE algorithm under impulsive noise.
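For reference, the Mackey-Glass series used in the simulations above can be generated with a simple Euler discretization; the integration step `dt` below is an assumption, since the integration scheme is not stated:

```python
import numpy as np

def mackey_glass(n_samples, tau=17, x0=0.12, dt=1.0):
    """Euler integration of dx/dt = 0.2 x(t-tau) / (1 + x(t-tau)^10) - 0.1 x(t),
    with x(0) = x0 and x(t) = 0 for t < 0."""
    delay = int(tau / dt)
    x = np.zeros(n_samples + delay)
    x[delay] = x0  # x(0); the first `delay` entries encode the zero history for t < 0
    for k in range(delay, n_samples + delay - 1):
        x_tau = x[k - delay]
        x[k + 1] = x[k] + dt * (0.2 * x_tau / (1 + x_tau**10) - 0.1 * x[k])
    return x[delay:]

series = mackey_glass(1000)
# Five past samples predict the next one, as in the prediction task above:
X = np.stack([series[i:i + 5] for i in range(len(series) - 5)])
y = series[5:]
```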
2309.10605
An Active Noise Control System Based on Soundfield Interpolation Using a Physics-informed Neural Network
Conventional multiple-point active noise control (ANC) systems require placing error microphones within the region of interest (ROI), inconveniencing users. This paper designs a feasible monitoring microphone arrangement placed outside the ROI, providing a user with more freedom of movement. The soundfield within the ROI is interpolated from the microphone signals using a physics-informed neural network (PINN). PINN exploits the acoustic wave equation to assist soundfield interpolation under a limited number of monitoring microphones, and demonstrates better interpolation performance than the spherical harmonic method in simulations. An ANC system is designed to take advantage of the interpolated signal to reduce noise signal within the ROI. The PINN-assisted ANC system reduces noise more than that of the multiple-point ANC system in simulations.
Yile Angela Zhang, Fei Ma, Thushara Abhayapala, Prasanga Samarasinghe, Amy Bastine
2023-09-19T13:20:47Z
http://arxiv.org/abs/2309.10605v1
# An Active Noise Control System Based on Soundfield Interpolation Using a Physics-Informed Neural Network

###### Abstract

Conventional multiple-point active noise control (ANC) systems require placing error microphones within the region of interest (ROI), inconveniencing users. This paper designs a feasible monitoring microphone arrangement placed outside the ROI, providing a user with more freedom of movement. The soundfield within the ROI is interpolated from the microphone signals using a physics-informed neural network (PINN). The PINN exploits the acoustic wave equation to assist soundfield interpolation under a limited number of monitoring microphones, and demonstrates better interpolation performance than the spherical harmonic method in simulations. An ANC system is designed to take advantage of the interpolated signal to reduce the noise signal within the ROI. The PINN-assisted ANC system reduces noise more than the multiple-point ANC system in simulations.

Yile (Angela) Zhang, Fei Ma, Thushara D. Abhayapala, Prasanga N. Samarasinghe, and Amy Bastine
Audio and Acoustic Signal Processing Group, The Australian National University, Australia

Active noise control (ANC), soundfield interpolation, physics-informed neural network (PINN)

## 1 Introduction

Active noise control (ANC) systems reduce unwanted primary noise by superimposing it with secondary noise generated by secondary sources [1]. Multi-channel filtered-x least mean square (FxLMS)-based ANC systems were proposed to reduce noise within a region of interest (ROI) using multiple reference and error microphones and secondary sources [2, 3]. A multiple-point ANC system is a type of multi-channel ANC system that reduces noise around the error microphones. To reduce noise within the ROI, this system requires placing monitoring microphones around the user's head to measure the residual noise field, which is undesirable. To resolve this constraint, researchers proposed using the remote microphone (RM) technique to estimate the soundfield at the ROI without any microphones inside the ROI [4, 5]. This technique provides more freedom of movement to the user, making ANC systems more practical [6]. Arikawa _et al._ [7] used reference microphones placed outside of the ROI to interpolate the primary noise field and the residual noise field within the ROI. However, they assumed the number of reference microphones to be relatively large. Jung _et al._ [8] proposed an ANC headrest system with 16 remote monitoring microphones placed around the user's head and evaluated ANC performance on automobile road-noise data. These RM techniques interpolate the virtual microphone signals based on the monitoring microphone signals only, and thus their performance is limited by the number of monitoring microphones. Recently, researchers have exploited spatial soundfield characteristics to improve soundfield interpolation performance. When combined with the RM technique, they formulated feasible ANC systems with improved setups. They proposed to decompose the soundfield onto basis functions using spherical harmonics (SH) [9, 10] and singular value decomposition [11], and interpolated the soundfield with a small number of microphones. A drawback of these decomposition approaches is the need to fully surround the ROI with microphone arrays, in either a spherical or a multiple-circular setup, which still restricts the user's access to the ROI. Chen _et al._ [12] simplified the microphone array structure by designing a compact 2D microphone array for 3D soundfield estimation using SH.
However, this design requires a large number of microphones and is infeasible in practice. Maeno _et al._ [10] improved the ANC system setup by placing multiple compact reference microphone arrays close to the noise sources, although the error microphones still surround the ROI. Motivated by previous works, we design a practical microphone arrangement and propose a physics-informed neural network (PINN)-based interpolation method [13, 14] for an ANC system. In our proposed system, a small number of monitoring microphones are placed outside the ROI (around the user's ears). This is a more feasible arrangement compared to SH-based ANC systems using spherical or circular arrays, providing the user more movement flexibility. By integrating the monitoring microphone signals with the acoustic wave equation, we design a PINN that achieves more accurate soundfield interpolation than an SH-based interpolation method in simulations. Based on the interpolated soundfield, we use the multi-channel FxLMS algorithm to minimize the error signal and achieve better noise reduction within the ROI compared with the multiple-point ANC system. The performance of this work is verified by simulations.

## 2 Problem Formulation

Consider an ANC system as shown in Fig. 1 with \(L\) secondary sources and \(Q\) monitoring microphones. A reference sensor is placed close to the primary noise source to detect the primary noise source characteristics; the reference signal is denoted \(x(n)\), where \(n\) is the discrete time index. Let \(d_{\ell}(n)\), \(\ell=1,\ldots,L\), be the secondary source signals and \(e_{q}^{\rm(M)}(n)\), \(q=1,\ldots,Q\), be the received signal at the \(q^{\text{th}}\) monitoring microphone located at \((\mathrm{x}_{q},\mathrm{y}_{q},\mathrm{z}_{q})\) in Cartesian coordinates (or \((r_{q},\theta_{q},\phi_{q})\) in spherical coordinates). The received signal is
\[e_{q}^{\rm(M)}(n)=p_{q}(n)+\sum_{\ell=1}^{L}s_{\ell,q}(n)*d_{\ell}(n), \tag{1}\]
where \(*\) is the convolution operation, \(p_{q}(n)\) is the primary signal at the \(q^{\text{th}}\) monitoring microphone, and \(s_{\ell,q}(n)\) is the impulse response of the secondary path from the \(\ell^{\text{th}}\) secondary source to the \(q^{\text{th}}\) monitoring microphone. Consider \(V\) virtual microphones positioned at or close to the two ears (ROI) at \((\mathrm{x}_{v},\mathrm{y}_{v},\mathrm{z}_{v})\) with signals \(e_{v}^{\rm(V)}(n)\), \(v=1,\ldots,V\). Although the virtual signals cannot be measured directly, they can be interpolated from the monitoring microphone measurements,
\[e_{v}^{\rm(V)}(n)=\mathcal{I}(e_{q}^{\rm(M)}(n)), \tag{2}\]
where \(\mathcal{I}(\cdot)\) is the interpolation function. The aim of our paper is twofold: 1. Interpolate the virtual microphone signals \(e_{v}^{\rm(V)}\) based on the monitoring signals \(e_{q}^{\rm(M)}\) using a PINN. 2. Set up an ANC system to reduce the noise at the ROI using the FxLMS algorithm and the interpolated signals \(e_{v}^{\rm(V)}\).

## 3 Methodology

In this section, we first introduce the multiple-point ANC system, followed by formulating the PINN-assisted ANC system using the PINN-interpolated soundfield. The two ANC systems are depicted in Fig. 2.

### Multiple-point ANC System

We consider the standard single-reference, multiple-output FxLMS algorithm [15].
The weights of the adaptive filter for the \(\ell^{\text{th}}\) secondary source are updated iteratively as
\[\mathbf{w}_{\ell}(n+1)=\mathbf{w}_{\ell}(n)+\mu\sum_{q=1}^{Q}\boldsymbol{x}_{\ell,q}^{\prime}(n)e_{q}^{\rm(M)}(n), \tag{3}\]
where \(\mu\) is the step-size parameter, and the filtered reference signal is
\[\boldsymbol{x}_{\ell,q}^{\prime}(n)=s_{\ell,q}(n)*\boldsymbol{x}(n), \tag{4}\]
where \(\boldsymbol{x}(n)=[x(n),x(n-1),\ldots,x(n-\mathcal{N}+1)]\) and \(\mathcal{N}\) is the filter length. The multiple-point ANC system aims to reduce the noise \(e_{q}^{\rm(M)}(n)\) at the multiple monitoring microphones.

### PINN-assisted ANC System

Let the time variable \(n\) and the Cartesian coordinates \((\mathrm{x},\mathrm{y},\mathrm{z})\) be the input to a fully connected feed-forward network [16], which consists of one input layer, one output layer, and \(\mathsf{L}\) hidden layers with \(\mathsf{N}\) neurons in each hidden layer. The network output \(\hat{p}(n,\mathrm{x},\mathrm{y},\mathrm{z})\) is the estimated primary signal at point \((\mathrm{x},\mathrm{y},\mathrm{z})\) at time \(n\).

Figure 1: System setup: A primary noise source generates the primary noise field and is detected by the reference sensor. An ANC system cancels the primary noise field at the ROI (user's ears) by superimposing it with a secondary noise field produced by the secondary sources.

Figure 2: Block diagram of the multiple-point ANC system and the PINN-assisted ANC system, which differ by the error signal used in the FxLMS algorithm.

We design the PINN to minimize the loss function
\[\mathfrak{L}=\underbrace{\frac{1}{Q}\sum_{q=1}^{Q}\left(\hat{p}(n_{q},\mathrm{x}_{q},\mathrm{y}_{q},\mathrm{z}_{q})-p(n_{q},\mathrm{x}_{q},\mathrm{y}_{q},\mathrm{z}_{q})\right)^{2}}_{\mathfrak{L}_{\mathrm{data}}}+\underbrace{\frac{1}{A}\sum_{a=1}^{A}\left(c^{2}\nabla^{2}\hat{p}(n_{a},\mathrm{x}_{a},\mathrm{y}_{a},\mathrm{z}_{a})-\frac{\partial^{2}}{\partial n^{2}}\hat{p}(n_{a},\mathrm{x}_{a},\mathrm{y}_{a},\mathrm{z}_{a})\right)^{2}}_{\mathfrak{L}_{\mathrm{PDE}}}, \tag{5}\]
where \(\mathfrak{L}_{\mathrm{data}}\) represents the mean squared error between the PINN-estimated primary signal and the ground-truth primary signal \(p(n,\mathrm{x},\mathrm{y},\mathrm{z})\) obtained at the monitoring microphone positions \(\{\mathrm{x}_{q},\mathrm{y}_{q},\mathrm{z}_{q}\}_{q=1}^{Q}\), and \(\{\mathrm{x}_{a},\mathrm{y}_{a},\mathrm{z}_{a}\}_{a=1}^{A}\) are the Cartesian positions of \(A\) randomly selected points around the ROI. The PDE loss \(\mathfrak{L}_{\mathrm{PDE}}\) is derived from a discrete approximation of the acoustic wave equation [17]
\[\nabla^{2}p-\frac{1}{c^{2}}\frac{\partial^{2}p}{\partial t^{2}}=0, \tag{6}\]
where \(\nabla^{2}\equiv\partial^{2}/\partial\mathrm{x}^{2}+\partial^{2}/\partial\mathrm{y}^{2}+\partial^{2}/\partial\mathrm{z}^{2}\) is the Laplacian operator, \(\partial^{2}/\partial t^{2}\) is the second partial derivative with respect to time \(t\), and \(c\) is the speed of sound. The trained PINN model can interpolate the primary signal at the virtual microphones and subsequently interpolate \(e_{v}^{(\mathrm{V})}(n)\), \(v=1,\dots,V\). Our proposed PINN-assisted ANC system replaces \(e_{q}^{(\mathrm{M})}(n)\) in (3) of the multiple-point ANC system with \(e_{v}^{(\mathrm{V})}(n)\), and aims to reduce the noise \(e_{v}^{(\mathrm{V})}(n)\) at the virtual microphone positions.
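A minimal sketch of this loss in code is given below; the network size matches the settings later reported in Section 4 (one hidden layer of 16 tanh units), while the use of PyTorch (the paper uses TensorFlow) and the tensor plumbing are our assumptions:

```python
import torch

c = 343.0  # speed of sound (m/s)

net = torch.nn.Sequential(               # input (n, x, y, z) -> estimated pressure
    torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

def pinn_loss(coords_data, p_data, coords_pde):
    """coords_*: (N, 4) tensors of (n, x, y, z); p_data: (N, 1) measured pressure."""
    # Data term: MSE at the monitoring microphone positions.
    loss_data = torch.mean((net(coords_data) - p_data) ** 2)

    # PDE term: wave-equation residual c^2 ∇²p - p_tt at the collocation points.
    coords = coords_pde.clone().requires_grad_(True)
    p = net(coords)
    grads = torch.autograd.grad(p.sum(), coords, create_graph=True)[0]  # (N, 4)
    second = []
    for k in range(4):  # second derivative w.r.t. each input dimension
        g2 = torch.autograd.grad(grads[:, k].sum(), coords, create_graph=True)[0][:, k]
        second.append(g2)
    p_tt, lap = second[0], second[1] + second[2] + second[3]
    loss_pde = torch.mean((c**2 * lap - p_tt) ** 2)
    return loss_data + loss_pde
```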
## 4 Numerical Experiments

### Experimental Settings

We consider an experiment setup as in Fig. 1 with \(L=2\) and \(V=2\). A single primary noise source is located at \((0.6,0.8,1)\) m and generates a tonal noise composed of three sinusoidal waves of 300, 400, and 500 Hz, each with a random phase. The speed of sound is set to \(c=343\ \mathrm{m/s}\). We set the sampling rate to \(24\) kHz and sample all signals for a duration of \(0.1\) s. Two secondary sources are placed at \((0,\pm 0.5,0)\) m along the \(y\)-axis. Eight monitoring microphones (\(Q=8\)) are placed on a \(r=0.26\) m sphere at \((\pm 0.15,\pm 0.15,\pm 0.15)\) m, and the two virtual error microphones are at \((0,\pm 0.1,0)\) m along the \(y\)-axis. We model the primary and secondary paths using the free-field Green's function [17]. This setting is applied to the PINN-assisted ANC system. The multiple-point ANC system cancels noise at the monitoring microphones without any interpolation.

**Interpolation using the SH method:** The sound pressure of the monitoring microphones at \((r=0.26\ \mathrm{m},\theta_{q},\phi_{q})_{q=1}^{Q=8}\) can be decomposed onto the SHs as [18]
\[p(n,r,\theta_{q},\phi_{q})\approx\sum_{u=0}^{U}\sum_{v=-u}^{u}\alpha_{u,v}(n,r)Y_{u}^{v}(\theta_{q},\phi_{q}), \tag{7}\]
where \(Y_{u}^{v}(\theta,\phi)\) is the SH of order \(u\) and degree \(v\) with coefficient \(\alpha_{u,v}\), and \(U=\lceil 2\pi f_{\mathrm{m}}r/c\rceil\) is the maximum soundfield order [19], with \(f_{\mathrm{m}}\) being the highest frequency of interest. The sound pressure at an arbitrary point \((r_{s},\theta,\phi)\) can be interpolated as
\[p_{s}(n,r_{s},\theta,\phi)\approx\sum_{u=0}^{U}\sum_{v=-u}^{u}\alpha_{uv}(n,r)*\mathcal{F}^{-1}\left[\frac{j_{u}(2\pi f_{\mathrm{m}}r_{s}/c)}{j_{u}(2\pi f_{\mathrm{m}}r/c)}\right]Y_{u}^{v}(\theta,\phi), \tag{8}\]
where \(\mathcal{F}^{-1}\) is the inverse Fourier transform and \(j_{u}(\cdot)\) is the spherical Bessel function of the first kind [17]. Accurate estimation of the SH coefficients up to order \(U\) requires the number of measurements \(Q>(U+1)^{2}\) [14]. With our setup of \(Q=8\), we set \(U=2\), which is the closest fit to this accuracy criterion [19]. We interpolate the soundfield on different spheres around the region for \(r_{s}\in[0.1,0.4]\) m. On each sphere, the soundfield is interpolated at 400 uniform sampling points [20]. The sound pressure at each point is interpolated using (7) and (8).

**Interpolation using the PINN method:** We construct the PINN model to have \(\mathsf{L}=1\) layer and \(\mathsf{N}=16\) neurons. The activation function is \(\tanh\). Network parameters are initialized using the Glorot normal initializer [21], and we use Adam [22] as the optimizer. We use automatic differentiation from TensorFlow to calculate the partial derivatives of the pressure signal. The network is trained for 5\(\times\)10\({}^{5}\) epochs with a learning rate of \(0.001\). The time variable \(n\) is normalized to the same range as the coordinate inputs, which is \([-0.15,0.15]\). The eight monitoring microphone positions are combined with 92 randomly selected positions within a \(r=0.26\) m sphere to form \(\{\mathrm{x}_{a},\mathrm{y}_{a},\mathrm{z}_{a}\}_{a=1}^{A=100}\). These positions and the time variable are inputs to the PINN model. We compare the interpolation performance between the SH and PINN methods by interpolating the pressure signal at the same positions.
### Results

We denote the soundfield interpolation error at radius \(r_{s}\) as
\[\epsilon_{r}=\frac{\sum_{b=1}^{400}\left(p(n_{b},\mathrm{x}_{b},\mathrm{y}_{b},\mathrm{z}_{b})-\hat{p}(n_{b},\mathrm{x}_{b},\mathrm{y}_{b},\mathrm{z}_{b})\right)^{2}}{\sum_{b=1}^{400}p(n_{b},\mathrm{x}_{b},\mathrm{y}_{b},\mathrm{z}_{b})^{2}}. \tag{9}\]
Figure 3 shows the soundfield interpolation error of the two methods in dB. For \(r_{s}\) from \(0.2\) m to \(0.4\) m, the PINN method outperformed SH by approximately \(8\) dB. The difference in interpolation error between the two methods is comparatively smaller for \(r_{s}<0.2\) m. Nevertheless, the PINN method consistently had a lower interpolation error. As our PINN method showed better soundfield interpolation than the SH method in general, we omit the evaluation of an ANC system using SH-interpolated signals. To evaluate ANC performance, we define the noise reduction level at the two ear locations as
\[\varepsilon=\frac{\sum_{v=1}^{2}e_{v}^{\rm(V)}(n)^{2}}{\sum_{v=1}^{2}p_{v}(n)^{2}}, \tag{10}\]
where \(p_{v}(n)\) is the primary noise signal at the two ears. Figure 4 shows the noise reduction level achieved by the multiple-point and PINN-assisted ANC systems. Both systems use the FxLMS algorithm with a step size of \(\mu=1\times 10^{-5}\) over \(10000\) iterations. While the initial noise power reduction rate in the first 500 iterations is similar for both ANC systems, our PINN approach achieved \(13\) dB more steady-state noise power reduction than the multiple-point ANC system. In Figure 5, we evaluated the signal power after convergence of the FxLMS algorithm in the \(xy\)-plane at \(441\) evaluation points, which are evenly spaced from \(-0.2\) m to \(0.2\) m along the \(x\) and \(y\) axes. Figure 5 (a) is the original primary noise field. Figures 5 (b) and (c) show the residual noise power of the multiple-point ANC system and the PINN-assisted ANC system, respectively. In the dotted region, the PINN-assisted ANC showed overall better noise reduction performance than the multiple-point ANC, with a \(10\) dB lower residual noise field around the two ear regions. This is because the multiple-point ANC system reduces noise around the monitoring microphones, whereas the PINN-assisted ANC system reduces noise around the virtual error microphones at the two ear locations.

## 5 Conclusion

In this paper, we proposed a practical ANC system with monitoring microphones placed outside the ROI. Using a PINN, the soundfield at the ROI is interpolated from the microphone signals, and the interpolated soundfield is used to reduce noise at the virtual microphone positions. Our results show that the PINN method achieves an overall better interpolation result than the SH method. The noise attenuation performance of our PINN-assisted ANC system is also shown to exceed that of the multiple-point ANC system. The proposed PINN model is simple and computationally efficient compared to deep neural networks with numerous parameters. We plan to quantitatively compare with other methods and evaluate the PINN's computational advantage. Additionally, we will extend our system to spatial ANC and replace the FxLMS algorithm with a machine learning-based controller, such as Deep MCANC [23] and DNoiseNet [24].

Figure 4: Noise power reduction at the two ear locations achieved by the multiple-point ANC and the PINN-assisted ANC system.

Figure 5: Signal power (in dB) in the \(xy\)-plane: (a) Primary noise field, and residual noise field for (b) multiple-point ANC system and (c) PINN-assisted ANC system.
The dotted square is the projection of monitoring microphones on the \(xy\)-plane and the two circles depict two \(r=0.03\) m regions around the ears. Figure 3: Soundfield interpolation error using SH method and PINN method as a function of the sphere radius \(r_{s}\).
2309.09195
SplitEE: Early Exit in Deep Neural Networks with Split Computing
Deep Neural Networks (DNNs) have drawn attention because of their outstanding performance on various tasks. However, deploying full-fledged DNNs in resource-constrained devices (edge, mobile, IoT) is difficult due to their large size. To overcome the issue, various approaches are considered, like offloading part of the computation to the cloud for final inference (split computing) or performing the inference at an intermediary layer without passing through all layers (early exits). In this work, we propose combining both approaches by using early exits in split computing. In our approach, we decide up to what depth of DNNs computation to perform on the device (splitting layer) and whether a sample can exit from this layer or need to be offloaded. The decisions are based on a weighted combination of accuracy, computational, and communication costs. We develop an algorithm named SplitEE to learn an optimal policy. Since pre-trained DNNs are often deployed in new domains where the ground truths may be unavailable and samples arrive in a streaming fashion, SplitEE works in an online and unsupervised setup. We extensively perform experiments on five different datasets. SplitEE achieves a significant cost reduction ($>50\%$) with a slight drop in accuracy ($<2\%$) as compared to the case when all samples are inferred at the final layer. The anonymized source code is available at \url{https://anonymous.4open.science/r/SplitEE_M-B989/README.md}.
Divya J. Bajpai, Vivek K. Trivedi, Sohan L. Yadav, Manjesh K. Hanawal
2023-09-17T07:48:22Z
http://arxiv.org/abs/2309.09195v1
# SplitEE: Early Exit in Deep Neural Networks with Split Computing

###### Abstract.

Deep Neural Networks (DNNs) have drawn attention because of their outstanding performance on various tasks. However, deploying full-fledged DNNs in resource-constrained devices (edge, mobile, IoT) is difficult due to their large size. To overcome the issue, various approaches are considered, like offloading part of the computation to the cloud for final inference (split computing) or performing the inference at an intermediary layer without passing through all layers (early exits). In this work, we propose combining both approaches by using early exits in split computing. In our approach, we decide up to what depth of DNNs computation to perform on the device (splitting layer) and whether a sample can exit from this layer or needs to be offloaded. The decisions are based on a weighted combination of accuracy, computational, and communication costs. We develop an algorithm named SplitEE to learn an optimal policy. Since pre-trained DNNs are often deployed in new domains where the ground truths may be unavailable, and samples arrive in a streaming fashion, SplitEE works in an online and unsupervised setup. We extensively perform experiments on five different datasets. SplitEE achieves a significant cost reduction (\(>50\%\)) with a slight drop in accuracy (\(<2\%\)) as compared to the case when all samples are inferred at the final layer. The anonymized source code is available at [https://anonymous.4open.science/r/SplitEE_M-B989/README.md](https://anonymous.4open.science/r/SplitEE_M-B989/README.md).
## 1. Introduction

… computational and offloading costs.
The computational cost captures the cost of running DNN layers on the edge device, and the offloading cost captures the cost of communicating the DNN output from the splitting layer to the cloud. We define a reward function as a weighted difference between confidence and the cost incurred. We use the multi-armed bandit framework and set our objective as minimizing the expected cumulative regret, defined as the difference between the cumulative reward obtained by an oracle and that obtained by the algorithm. SplitEE is based on the classical Upper Confidence Bound (UCB) (Cheng et al., 2015) algorithm. We also develop a variant of SplitEE that takes into account the additional information available at the intermediary layers in the form of side observations. We refer to this variant as SplitEE-S.

We use the state-of-the-art early-exit DNN ElasticBERT for natural language inference as a test bed to evaluate the performance of our algorithm. ElasticBERT is based on the BERT backbone and trains multiple exits on a large text corpus. We extensively evaluate the performance of SplitEE and SplitEE-S on five datasets, _viz._ IMDb, Yelp, SciTail, QQP and SNLI, covering three types of classification tasks: sentiment classification, entailment classification, and semantic equivalence classification. We first prepare an early-exit DNN by fine-tuning it on a similar kind of task and then perform inference on an equivalent task with a different distribution in an unsupervised, online manner. For instance, we fine-tune ElasticBERT on SST-2, a sentiment classification dataset, and then evaluate SplitEE on Yelp and IMDb, which have review classification tasks. SplitEE finds an optimal splitting layer such that samples are inferred locally only if they meet the confidence threshold. In this way, SplitEE infers only 'easy' samples locally, placing less load on mobile devices, and offloads 'hard' samples. SplitEE observes the smallest performance drop, \(<2\%\) in accuracy, with a \(>50\%\) reduction in cost as compared to the case when all samples are inferred at the final exit.

During inference, these DNNs might be applied to a dataset whose latent data distribution differs from that of the dataset used to train the DNN, and the optimal splitting layer might differ depending on the latent data distribution. Hence, SplitEE adaptively learns the optimal split point by utilizing the confidence available at the exit attached to the splitting layer together with the computational cost.

Our main contributions are as follows:

* We introduce early exits in split computing and propose a learning model. In our model, the decision is to find the split point as well as whether to exit or offload from the split point.
* To find the optimal split point, we develop the upper-confidence-based algorithm SplitEE, which decides the splitting layer on the fly without requiring any ground-truth information and achieves sub-linear regret.
* We optimize the utilization of resources on the edge and the cloud without sacrificing much accuracy by inferring only easy samples on edge devices.
* Using five distinct datasets, we empirically verify that our algorithms significantly reduce costs with a small drop in accuracy compared to the baselines and state-of-the-art algorithms.

## 2. Related Works

In this section, we discuss previous works on split computing, early-exit DNNs, and the utilization of DNNs on mobile devices.
### Split Computing in DNNs

Neurosurgeon (Shi et al., 2017) searches for the best splitting layer in a DNN model by minimizing the cost associated with a splitting layer. Split computing has been applied with different approaches. BottleNet (Bottleft et al., 2016) and BottleFit (Srivastava et al., 2017) introduce a bottleneck in split computing, where the part of the DNN on the mobile device encodes the sample into a reduced size; the reduced-size sample is then offloaded, decoded, and inferred on the cloud. There are multiple training methodologies for encoding the input on the mobile device: BottleNet++ (Bottleft et al., 2017) and (Bottleft et al., 2018) perform cross-entropy-based training, Matsubara (Matsubara, 2017) performs knowledge-distillation-based training, CDE (Srivastava et al., 2017) and Yao (Yao et al., 2017) perform reconstruction-based training, and Matsubara (Matsubara, 2017) performs head-network-distillation training to effectively encode the input for efficient offloading.

### Early-exit Neural Networks

Early-exit DNNs are employed on various tasks. In image classification, BranchyNet (Srivastava et al., 2017), among other earlier research, uses classification entropy at the exit attached after each layer to determine whether to infer the sample at the side branch; the decision to exit is made if the exit point's entropy is below a predetermined threshold. Similarly, SPINN (Shi et al., 2017) and SEE (Srivastava et al., 2017) also use an estimated confidence measure at the exit branch to decide whether to exit early, where the confidence estimate is the likelihood of the most likely class. Besides exiting early, works like FlexDNN (Chen et al., 2015) and Edgent (Edsen et al., 2016) focus mainly on finding the most appropriate DNN depth. Other works, such as Dynexit (Dyneit, 2016), focus on deploying multi-exit DNNs in hardware: Dynexit trains and deploys the DNN on Field Programmable Gate Array (FPGA) hardware, while Paul _et al._ (Paul et al., 2016) show that implementing a multi-exit DNN on an FPGA board reduces inference time and energy consumption.

In the NLP domain, DeeBERT (Denee et al., 2016), ElasticBERT (Denee et al., 2016) and BERxiT (Denee et al., 2016) are transformer-based BERT models. DeeBERT is obtained by separately training the exit points attached before the last module of the BERT backbone, while ElasticBERT trains the backbone with the attached exits jointly with the final exit. BERxiT proposes a more advanced fine-tuning strategy for the BERT model with attached exits. PABEE (Pabh et al., 2016) and Pece-BERT (Pabh et al., 2016) suggest an early exit depending on the agreement between early-exit classifiers up to a fixed patience threshold. LeeBERT (Lee et al., 2017), on the other hand, applies knowledge distillation across all exit layers rather than just distilling the knowledge prediction from the final layer.

Figure 1. Efficient edge-cloud co-inference setup where part of the layers are executed on the edge device, with an option to exit (infer a sample) at the split point, while the remaining layers on the cloud infer at the final layer.

### DNNs in Mobile Devices

Pacheco (Pacheco, 2018) utilize both multi-exit DNNs and DNN partitioning to offload from mobile devices via multi-exit DNNs. Similarly, EPNet (Beng et al., 2017) learns when to exit considering the accuracy-overhead trade-off, but in an offline fashion.
LEE (Shi et al., 2017), DEE (Shi et al., 2017) and UEE-UCB (Beng et al., 2018) utilize the multi-armed bandit framework to learn the optimal exit. However, they do not have the option to offload and infer only on mobile devices after finding the optimal exit. LEE and DEE provide efficient DNN inference for mobile devices in scenarios such as service outages. Both LEE and DEE assume that the utility is revealed, which depends on the ground-truth labels, and use the classical UCB1 (Beng et al., 2017) algorithm to learn the optimal exit. UEE-UCB learns the optimal exit in a setup similar to ours; however, it does not have the option to offload: it finds the optimal splitting layer and infers all the samples on the mobile device. It also assumes that the intermediary layers follow the strong dominance property.

The major differences between our setup and the previous setups are as follows: 1) We take into account both the computational and communication costs, in addition to accuracy, in deciding the splitting layer, whereas previous works on split computing considered only the communication cost, and early-exit works considered only the computational costs along with accuracy. 2) Our work is completely in an unsupervised, online setup as it does not require any ground-truth information. 3) For each sample, we use the contextual information (confidence) to decide dynamically whether to exit or offload at the splitting layer. Table 2 provides a direct comparison to the state-of-the-art.

## 3. Problem Setup

We are given a pre-trained DNN with \(L\) layers with attached exits after every layer. We index the layers using the set \([L]=\{1,2,\ldots,L\}\). We consider classification tasks with a target class set \(\mathcal{C}\). For an input \(x\) and layer \(i\in[L]\), let \(\hat{P}_{i}(c)\) denote the estimated probability that \(x\) belongs to class \(c\in\mathcal{C}\). Let \(C_{i}=\max_{c\in\mathcal{C}}\hat{P}_{i}(c)\) denote the confidence of the estimated class. Input \(x\) is processed sequentially through the DNN. The DNN can be split at any layer \(i\in[L]\), where layers \(1,2,\ldots,i\) are on the mobile device and the remaining layers, i.e., \(i+1,i+2,\ldots,L\), are on the cloud.

In our setup, the following two-stage decision has to be made for each sample: 1) Where to split the DNN? 2) Whether to exit from the splitting layer or offload to the cloud. The decision on where to split the DNN does not depend on the individual samples but on the underlying distribution, whereas the decision to offload or exit is made per sample as follows: if the split is at the \(i\)th layer, \(C_{i}(x)\) is computed and compared against a pre-defined threshold \(\alpha\). If \(C_{i}(x)\geq\alpha\), the sample exits and is inferred on the mobile device at the splitting layer; otherwise, it is offloaded and inferred at the final layer on the cloud.

The cost of using the DNN up to layer \(i\) can be interpreted as the computational cost of processing the sample till layer \(i\) and performing inference. Let \(\gamma_{i}\) be the cost associated with a split performed at the \(i\)th layer. We set \(\gamma_{i}\propto i\), as the amount of computation depends on the depth of the splitting layer in the DNN. We denote the cost of offloading from mobile to cloud as \(o\). The value of \(o\) across all layers depends on the size of the input and the transmission cost (e.g., Wi-Fi, 5G, 4G and 3G).
We define the reward when the splitting is performed at layer \(i\in[L]\) as \[r(i)=\left\{\begin{array}{ll}C_{i}-\mu\gamma_{i}&\quad\text{if}\quad C_{i}\geq\alpha\text{ or }i=L\\ C_{L}-\mu(\gamma_{i}+o)&\quad\text{otherwise,}\end{array}\right. \tag{1}\] where \(\mu\) is a conversion factor to express the cost in terms of confidence; \(\mu\) is set by the user depending on their preference for accuracy versus computational cost. The reward can be interpreted as follows: if the DNN is confident about the prediction obtained from the \(i\)th layer, then the reward is the confidence minus the cost of processing the sample till the \(i\)th layer and inferring. If not, the sample is offloaded to the cloud for inference, where confidence \(C_{L}\) is achieved at the last layer and an additional offloading cost \(o\) is incurred. Observe that if \(i=L\), all the computations are executed on the edge device, and the sample is inferred at the \(L\)th layer (without offloading).

We define \(i^{*}=\arg\max_{i\in[L]}\mathbb{E}[r(i)]\), where the expected reward for \(i\in[L-1]\) is \[\mathbb{E}[r(i)]=\mathbb{E}[C_{i}-\mu\gamma_{i}|C_{i}\geq\alpha]\cdot P[C_{i}\geq\alpha]+\mathbb{E}[C_{L}-\mu(\gamma_{i}+o)|C_{i}<\alpha]\cdot P[C_{i}<\alpha], \tag{2}\] and for the last layer \(L\) it is a constant given as \(\mathbb{E}[r(L)]=C_{L}-\mu\gamma_{L}\). The goal is to find an optimal splitting layer \(i^{*}\) such that a sample is inferred at \(i^{*}\) or offloaded to the cloud for inference.

We model the problem of finding the optimal splitting layer as a multi-armed bandit (MAB) problem. We define the action set as the layer indices of the DNN, \(\mathcal{A}=[L]\). Following the terminology of MABs, we also refer to elements of \(\mathcal{A}\) as arms. Consider a policy \(\pi\) that selects arm \(i_{t}\) at time \(t\) based on past observations. We define the cumulative regret of \(\pi\) over \(T\) rounds as \[R(\pi,T)=\sum_{t=1}^{T}\mathbb{E}[r(i^{*})-r(i_{t})], \tag{3}\] where the expectation is with respect to the randomness in the arm selection caused by previous samples. A policy \(\pi^{*}\) is said to achieve sub-linear regret if the average cumulative regret vanishes, i.e., \(R(\pi^{*},T)/T\to 0\). We experimentally show that both variants of Algorithm 1 achieve sub-linear regret.

## 4. Algorithm

In this section, we develop an algorithm named Split computing with Early Exit (SplitEE). The algorithm is based on the 'optimism in the face of uncertainty' principle and uses upper confidence bounds. In this variant, inference is performed only at the splitting layer, and the decision to offload or exit is based on the confidence of this inference. In the following subsection, we develop another variant, named SplitEE-S, that makes inferences at each layer and not just at the splitting layer.

### SplitEE

The inputs to this variant are the confidence threshold (\(\alpha\)), the exploration parameter (\(\beta\)), the number of layers (\(L\)), and the per-layer computational cost \(\gamma\), which can be split as \(\gamma=\lambda_{1}+\lambda_{2}\), where \(\lambda_{1}\) is the processing cost and \(\lambda_{2}\) is the inference cost at the attached exit. Since we do not utilize the exits attached to the layers before the chosen splitting layer, in this variant \(\lambda_{2}\) is accumulated only for the selected splitting layer.
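Before the formal listing in Algorithm 1 below, the following minimal Python sketch shows how the reward of Eq. (1) and the UCB arm selection fit together. It is a sketch under simplifying assumptions: per-layer cost \(\gamma_{i}=\lambda i\) (ignoring the \(\lambda_{1}/\lambda_{2}\) split), an illustrative threshold \(\alpha=0.8\), and synthetic per-exit confidences standing in for running the actual network. None of the helper names come from the paper's released code.

```python
import math
import random

L = 12            # number of layers/exits (ElasticBERT)
alpha = 0.8       # confidence threshold (illustrative; the paper tunes it on validation data)
beta = 1.0        # exploration parameter
mu = 0.1          # cost/confidence trade-off factor
lam = 1.0         # per-layer computational cost (lambda)
o = 5 * lam       # offloading cost (worst case considered in the paper)

Q = [0.0] * (L + 1)   # empirical mean reward per arm (1-indexed; index 0 unused)
N = [0] * (L + 1)     # pull counts per arm

def reward(i: int, conf: list) -> float:
    """Reward of splitting at layer i for one sample (Eq. 1)."""
    if conf[i] >= alpha or i == L:
        return conf[i] - mu * lam * i          # exit on device
    return conf[L] - mu * (lam * i + o)        # offload to cloud

def select_arm(t: int) -> int:
    """UCB index: empirical mean plus exploration bonus (line 6 of Algorithm 1)."""
    return max(range(1, L + 1),
               key=lambda i: Q[i] + beta * math.sqrt(math.log(t) / N[i]))

def splitee_step(t: int, conf: list) -> int:
    # Initialization phase: play each arm once before using UCB indices.
    i = select_arm(t) if all(N[1:]) else N[1:].index(0) + 1
    r = reward(i, conf)
    N[i] += 1
    Q[i] += (r - Q[i]) / N[i]                  # incremental empirical mean update
    return i

# Toy usage with synthetic confidences (C_i grows with depth).
for t in range(1, 2001):
    conf = [0.0] + sorted(random.uniform(0.4, 1.0) for _ in range(L))
    splitee_step(t, conf)
print("estimated optimal splitting layer:", max(range(1, L + 1), key=Q.__getitem__))
```

In a real deployment, `conf` would be produced lazily: only \(C_{i_t}\) is computed on the device, and \(C_L\) is observed only when the sample is offloaded.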
```
1:  Input: \(\alpha\) (threshold), \(\beta\geq 1\), \(L\), cost \(\gamma_{i}\ \forall i\in[L]\)
2:  Initialize: \(Q(i)\gets 0\), \(N(i)\gets 0\).
3:  Initialize by playing each arm once.
4:  for \(t=L+1,L+2,\ldots\) do
5:      Observe an instance \(x_{t}\)
6:      \(i_{t}\leftarrow\arg\max_{i\in[L]}\left(Q(i)+\beta\sqrt{\frac{\ln(t)}{N(i)}}\right)\)
7:      Pass \(x_{t}\) till layer \(i_{t}\), apply threshold \(\alpha\) and observe \(C_{i_{t}}\)
8:      if \(C_{i_{t}}\geq\alpha\) then
9:          Infer at layer \(i_{t}\) and exit
10:         \(r_{t}(i_{t})\gets C_{i_{t}}(x_{t})-\mu\gamma_{i_{t}}\),  \(N_{t}(i_{t})\gets N_{t-1}(i_{t})+1\)
11:         \(Q_{t}(i_{t})\leftarrow\sum_{j=1}^{t}r_{j}(i_{j})\mathbb{1}_{\{i_{j}=i_{t}\}}/N_{t}(i_{t})\)
12:     else
13:         Offload to the last layer. Observe \(C_{L}\)
14:         \(r_{t}(i_{t})\gets C_{L}(x_{t})-\mu(\gamma_{i_{t}}+o)\),  \(N_{t}(i_{t})\gets N_{t-1}(i_{t})+1\)
15:         \(Q_{t}(i_{t})\leftarrow\sum_{j=1}^{t}r_{j}(i_{j})\mathbb{1}_{\{i_{j}=i_{t}\}}/N_{t}(i_{t})\)
16:         Infer at the last layer
17:     end if
18: end for
```
**Algorithm 1** SplitEE

The pseudo-code of SplitEE is given in Algorithm 1. The algorithm works as follows: it plays each arm once for the first \(L\) instances to initialize the reward estimates \(Q(i)\) and the counters \(N(i)\). In succeeding rounds, it plays the arm \(i_{t}\) that maximizes the UCB index (line 6); the UCB indices are formed as the sum of the empirical average reward \(Q(i)\) and a confidence bonus. If the confidence at layer \(i_{t}\) is above the threshold \(\alpha\), the sample exits the DNN; else, it is offloaded to the cloud with an additional cost \(o\). Following the analysis of UCB1 (…), SplitEE achieves sub-linear regret.

(…)

After pre-training, discarding all exits, the ElasticBERT-base model's backbone with learnt weights remains. For more details of the training procedure, we refer to (Kumar et al., 2018). After obtaining the pre-trained backbone, we attach task-specific exits (e.g., classification heads) after all transformer layers along the backbone and fine-tune it using a labeled dataset (from a domain similar to that of the evaluation dataset). Sentence-level representations for sentence-level tasks are learned using the \([CLS]\) token; after each transformer layer, this token representation is connected to the classification heads. A sketch of the entire training procedure is provided in Figure 2, where \(w_{1}\), \(w_{2}\), \(w_{3}\) and \(w_{4}\) represent token embeddings of the given input sequence. The heads are attached to produce a representation that can be compared with the task label to compute the loss; we select cross-entropy as the loss function. Using learnable weights, the attached classification heads transform the \([CLS]\) token's \(q\)-dimensional vector representation into a probability vector for direct comparison with the task label.

### Experimental setup

In this section, we explain the experimental setup and details of SplitEE. The setup has three major steps, summarized below:

**i) Training the backbone:** Initially, we train the ElasticBERT-base model with MLM and SOP heads attached after every transformer layer of the BERT-base model. After training on a large text corpus, we remove the MLM and SOP heads from the ElasticBERT model, leaving only the backbone. We directly import the weights of the learned backbone, hence this part does not need any computation.

**ii) Fine-tuning and learning weights (Supervised):** In the backbone obtained in step (i), we attach task-specific exits (heads) after each transformer layer, and to learn weights for these heads, we perform supervised training.
Since we assume that we do not have labels for the evaluation task, we utilize a labeled dataset with a similar kind of task but with a different distribution or from a different domain. For example, we evaluate SplitEE on the IMDb and Yelp review classification datasets and learn the weights for the heads using the SST-2 dataset, which has a similar sentiment classification task but a different latent data distribution.

**iii) Online learning of the optimal splitting layer (Unsupervised):** Finally, we use the weights from step (ii) to learn the optimal splitting layer for the evaluation tasks in an unsupervised and online setup. We perform this step after the model has been deployed and is ready for inference.

We perform experiments on a single NVIDIA RTX 2070 GPU. Part (i) does not require any computation as we can directly import the weights from the backbone. Part (ii) takes a maximum of 10 GPU hours of runtime (on the MNLI dataset). Part (iii) is not computationally involved and can be executed in \(<1\) hour of CPU runtime on a single run; it does not require GPU support.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline **E. Data** & **\#Samples** & **FT Data** & **\#Samples** \\ \hline IMDb & 25K & SST-2 & 68K \\ \hline Yelp & 560K & SST-2 & 68K \\ \hline SciTail & 24K & RTE & 2.5K \\ \hline QQP & 365K & MRPC & 4K \\ \hline SNLI & 550K & MNLI & 433K \\ \hline \end{tabular} \end{table} Table 1. Information about the size of datasets. FT Data is the dataset used to prepare the ElasticBERT backbone for the corresponding task, #Samples is the number of samples in the dataset, and E. Data is the evaluation dataset.

Figure 3. Accuracy for different offloading costs (\(o\)) (SplitEE)

Figure 4. Cost (in \(10^{4}\times\lambda\) units) for different offloading costs (SplitEE)

We evaluated SplitEE on five datasets covering three types of classification tasks:

1. **Review classification on IMDb (Kumar et al., 2017) and Yelp (Bang et al., 2017):** IMDb is a movie review classification dataset, and Yelp consists of reviews from various domains such as hotels, restaurants, etc. For these two datasets, ElasticBERT is fine-tuned on the **SST-2 (Stanford Sentiment Treebank)** dataset, which is also a sentiment classification dataset.
2. **Entailment classification on SciTail:** SciTail is an entailment classification dataset created from multiple-choice science exam questions and web sentences. To evaluate SplitEE on SciTail, ElasticBERT is fine-tuned on the **RTE (Recognizing Textual Entailment)** dataset, which is also an entailment classification dataset.
3. **Entailment classification on SNLI (Stanford Natural Language Inference) (multi-class):** SNLI is a collection of human-written English sentence pairs manually labelled for balanced classification with the labels _entailment_, _contradiction_ and _neutral_. For this dataset, ElasticBERT is fine-tuned on **MNLI (Multi-Genre Natural Language Inference)**, which also contains sentence pairs as premise and hypothesis; the task is the same as for SNLI.
4. **Semantic equivalence classification on QQP (Quora Question Pairs):** QQP is a semantic equivalence classification dataset containing question pairs from the community question-answering website Quora. For this task, ElasticBERT is fine-tuned on the **MRPC (Microsoft Research Paraphrase Corpus)** dataset, which also poses a semantic equivalence task over sentence pairs extracted from online news sources.

Details about the size of these datasets are given in Table 1.
Observe from the table that the dataset used for fine-tuning is much smaller than the corresponding evaluation dataset. We do not split the evaluation dataset. Except for IMDb and Yelp, the datasets are part of the ELUE (Kumar et al., 2017) and GLUE (Kumar et al., 2017) benchmarks. We attach exits after every transformer layer in the ElasticBERT model. The predefined threshold \(\alpha\) is taken directly from the ElasticBERT model, which utilizes the validation split of the fine-tuning data to learn the best threshold. The choice of action set depends on the number of layers of the DNN being used; the action set is \(\mathcal{A}=[L]\), and for ElasticBERT \(L=12\).

We have two types of costs: computational cost and offloading cost. As explained in Section 3, the computational cost is proportional to the number of layers processed, i.e., \(\gamma_{i}=\lambda i\), where \(\lambda\) can be interpreted as the per-layer computational cost. We can split \(\lambda=\lambda_{1}+\lambda_{2}\), where \(\lambda_{1}\) and \(\lambda_{2}\) represent the per-layer processing cost and the per-layer inference cost, respectively. We relate \(\lambda_{1}\) and \(\lambda_{2}\) in terms of the number of matrix multiplications required to process and infer, and observe that \(\lambda_{2}=\lambda_{1}/6\) (5 matrix multiplications are needed for processing and 1 for inferring). Hence, if the \(i\)th layer is chosen as the splitting layer, the cost is \(\lambda i\) for SplitEE-S and \(\lambda_{1}i+\lambda_{2}\) for SplitEE. The offloading cost is user-defined and depends on the communication network used (e.g., 3G, 4G, 5G or Wi-Fi). Hence, in the experiments we provide results for different offloading costs \(o\) from the set \(\{\lambda,2\lambda,3\lambda,4\lambda,5\lambda\}\). With the increasing capabilities of broadband mobile communication, we observe that the offloading cost is at most five times the per-layer computational cost; for details on how to compute the offloading cost, we refer to (Kumar et al., 2017). For Table 2, we use a fixed offloading cost of \(o=5\lambda\) (the highest offloading cost). Without loss of generality, we choose \(\lambda=1\) for conducting all the experiments; the accumulated cost is, however, reported in terms of \(\lambda\), which is user-specific, as all costs are expressed in terms of \(\lambda\). The trade-off factor \(\mu\) ensures that the reward function gives similar preference to cost and accuracy; for the algorithm, we choose \(\mu=0.1\) to directly compare confidence and cost.

We repeat each experiment 20 times, and in each run the samples are randomly reshuffled and then fed to the algorithm in an online manner. In each round, the algorithm chooses a splitting layer and accumulates regret if the choice is not optimal. We plot the expected cumulative regret in Figure 7. The accuracy and cost reported for SplitEE and SplitEE-S are computed using the prediction of the chosen splitting layer in each round (i.e., for every sample) and then averaged per sample over the 20 runs.

Figure 5. Accuracy for different offloading costs (\(o\)) (SplitEE-S)

Figure 6. Cost (in \(10^{4}\times\lambda\) units) for different offloading costs (SplitEE-S)

### Baselines

**1) DeeBERT:** Similar to our setup, we fine-tune DeeBERT and then perform inference on the evaluation dataset.
DeeBERT prepares the early-exit model in two steps: (1) it learns the general weights and embeddings for the BERT backbone using the loss function attached only at the final layer, similar to BERT fine-tuning; (2) after freezing these weights, it attaches a loss function after every transformer layer except the final one. Note that DeeBERT does not have the option to offload. DeeBERT uses the entropy of the predicted vector as confidence; we fine-tune the entropy threshold in the same fashion as DeeBERT. Since it does not make any difference, we keep the confidence of DeeBERT as the entropy of the predicted vector. Other parameters are kept the same as in DeeBERT.

**2) ElasticBERT:** ElasticBERT is also based on the BERT-base model; the only difference is that it is jointly trained by attaching MLM and SOP heads after every transformer layer. Once the model is trained, the heads are removed, leaving the backbone. More details are in Section 5.1 and Figure 2. All the parameters are kept the same as in the ElasticBERT setup.

**3) Random selection:** We select a random exit point and process the sample till the chosen exit; if the confidence at the chosen exit is above the threshold, the sample exits, else it is offloaded. We then calculate the cost and accuracy, and report the averages over 20 runs of this procedure.

**4) Final exit:** The sample is processed till the final layer for inference. This setup has a constant cost of \(\lambda L\) and corresponds to the standard inference of neural networks. We also utilize this setup for benchmarking.

### Need for offloading

As explained in Section 5.2, the maximum possible offloading cost is five times the per-layer computational cost. Hence, if a sample does not gain sufficient confidence for classification by a pre-specified layer, we might want to offload it to the cloud. Previous methods process the sample through the DNN until it gains sufficient confidence. We observed that processing a sample beyond the 6th layer accumulates more processing cost than the offloading cost: with \(L=12\) and per-layer cost \(\lambda\), processing the remaining six layers costs \(6\lambda\), which exceeds the worst-case offloading cost of \(5\lambda\). While experimenting, we noted that on average DeeBERT processes 51% of samples and ElasticBERT processes 35% of samples beyond the 6th exit layer. These samples accumulate a large computational cost for edge devices. Since edge devices have fewer resources available, both DeeBERT and ElasticBERT may exhaust these resources, depleting battery and device lifetime. Our setup, in contrast, decides on a splitting layer as each sample arrives: the sample is processed till the chosen layer, and if it gains sufficient confidence it exits the DNN, else it is offloaded to the cloud for inference, reducing cost drastically. Additionally, offloading helps increase accuracy, as the last layer provides more accurate results on samples that were not gaining confidence initially.

### SplitEE and SplitEE-S

From Figures 3, 4, 5 and 6, we observe that SplitEE and SplitEE-S have comparable performance. However, note that SplitEE does not utilize the confidences of exits prior to the chosen splitting layer; hence we can process the sample directly to the splitting layer, avoiding the inference cost at each intermediate exit. SplitEE-S uses the confidence available at all exits on the edge device to update the rewards for multiple arms. The difference between the two is more evident in the regret curves (see Figure 7): the SplitEE-S curve saturates much earlier than that of SplitEE.
We also observed from the experiments that SplitEE is more sensitive to reshuffling of the dataset, while SplitEE-S is more robust to reshuffling as it needs fewer samples to learn the optimal splitting layer. Since in a real-world scenario the evaluation dataset may be small and we may need to adapt quickly to changes in the data distribution, SplitEE-S can be used in such cases. However, if the major concern is cost, SplitEE works better as it further reduces the inference cost (see Table 2). Note that DeeBERT and ElasticBERT also incur the inference cost at all exits up to which a sample is processed.

### Analysis with different offloading costs

Since the offloading cost is user-defined, we analyse the behaviour of accuracy and cost for different offloading costs. Except for the QQP dataset, we observe a drop in accuracy as we increase the offloading cost. The explanation is that a higher offloading cost forces more samples to exit early by choosing a deeper exit in the DNN, and more samples exiting early yield less accurate predictions. Hence, initially, when the offloading cost is small, most samples exit from the initial exits and very few are offloaded; as the offloading cost increases, the algorithm simply goes deeper to gain confidence for those samples that were previously offloaded. In terms of cost, it is evident that the cost of SplitEE goes up as we increase the offloading cost. For the QQP dataset, we observe the reverse behaviour, as very few samples are offloaded for QQP: many samples exited at the initial layers with misclassifications (which also explains the lower cost of ElasticBERT). As we increase the offloading cost, SplitEE looks for deeper exit layers to split at, hence the gain in accuracy. Still, we are always better in terms of both cost and accuracy when compared to ElasticBERT, as shown in Figures 3, 4, 5 and 6. Detailed results are given in Table 2.

### Regret Performance

We repeat each experiment 20 times. Each time, randomly reshuffled data is fed to the algorithm in an online manner. In each step, the algorithm selects a splitting layer and accumulates regret if the choice is not optimal. In Figure 7, we plot the expected cumulative regret along with a 95% confidence interval. We choose the exploration parameter \(\beta=1\). Each plot shows the results for a specific dataset. SplitEE and SplitEE-S outperform the considered alternatives, yielding lower cumulative regret and achieving sub-linear regret growth. We also observe that SplitEE-S achieves lower regret than SplitEE. This is because the side observations provide the algorithm with additional information about the environment, which can be used to learn the optimal splitting layer quickly; as a result, the algorithm with side information converges to the optimal policy faster. As observed from Figure 7, the regret starts saturating after the first 2000 samples for SplitEE and after 1000 samples for SplitEE-S.

## 6. Results

In Table 2, we report the accuracy and cost across different datasets and models. SplitEE achieves the smallest performance drop (\(<2\%\)) against the final exit together with the largest reduction in cost (\(>50\%\)) compared to the final exit. For the SciTail dataset, we obtain the same accuracy as the final layer.
This behaviour is observed because, for most samples in SciTail, the gain in confidence is insufficient at the initial layers; hence SplitEE offloads most of the samples and achieves similar performance. It still achieves a smaller cost than the other baselines, since DeeBERT and ElasticBERT process every sample to deeper exits to meet the confidence threshold and thus accumulate more cost. We observed that in QQP, 15-20% of samples were misclassified with high confidence. Hence, ElasticBERT exits many samples at the initial layers, but with misclassifications, incurring a lower cost. We, however, gain accuracy over the final layer; the lower accuracy at the final layer is the effect of overthinking\({}^{1}\) during inference. In general, the higher costs of DeeBERT and ElasticBERT can be explained by the fact that they process the sample to deeper exits until its confidence exceeds the given threshold, whereas SplitEE offloads a sample that does not gain sufficient confidence by the splitting layer. The accuracy of SplitEE is also consistently higher, as we utilize the final layer for inference in conjunction with the splitting layer; since the accuracy of the final exit is better than that of the intermediate ones, SplitEE achieves higher accuracy than the other baselines.

Footnote 1: Overthinking in inference is analogous to over-fitting in training.

## 7. Conclusion

We addressed the problem of using DNNs on resource-constrained edge devices like mobile and IoT devices. We proposed a new mobile-cloud co-inference approach combining split computing and early exits, both of which were independently proposed to address the problem of deploying DNNs in resource-constrained environments. In our approach, part of the DNN is deployed on the resource-constrained edge device and the remainder on the cloud. At the last layer of the DNN implemented on the edge device, we make an inference, and depending on the confidence of this inference, the sample either exits or is offloaded to the cloud. The main challenge in our work is deciding where to split the DNN so that it achieves good accuracy while keeping computational and communication costs low. We developed a learning algorithm named SplitEE to address these challenges using the multi-armed bandit framework, defining a reward that takes into account both accuracy and costs. Also, in our setup, ground-truth labels are not available; hence SplitEE works in an unsupervised setting, using the confidence of a prediction as a proxy for accuracy. Our experiments demonstrated that SplitEE achieves a significant reduction in cost (up to 50%) with a slight reduction in accuracy (less than 2%). We also developed a variant of SplitEE that exploits side observations to improve performance. Our work can be extended in several ways.
One, SplitEE assumed that the threshold used to decide whether to exit or offload is fixed based on offline validation; however, this threshold can be adapted based on the new samples and made a learnable parameter. Also, in our work, we looked for an optimal split across all the samples, but that too can be made adaptive per sample: each sample has a different difficulty level, and deciding the split based on its difficulty can further improve the prediction accuracy while still keeping the cost low.

\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline Model/Data & \multicolumn{2}{c}{IMDB} & \multicolumn{2}{c}{Yelp} & \multicolumn{2}{c}{SciTail} & \multicolumn{2}{c}{SNLI} & \multicolumn{2}{c}{QQP} \\ \hline & Acc & Cost & Acc & Cost & Acc & Cost & Acc & Cost & Acc & Cost \\ \hline Final-exit & 83.4 & 30.0 & 77.8 & 161.0 & 78.9 & 28.3 & 80.2 & 659.2 & 71.0 & 436.6 \\ Random-exit & -1.4 & -31.3\% & -1.2 & -38.0\% & -0.7 & -31.8\% & -2.0 & -41.5\% & -0.1 & -14.8\% \\ DeeBERT & -6.1 & -43.3\% & -2.5 & -59.0\% & -3.6 & -5.3\% & -3.5 & -38.9\% & -6.7 & -50.1\% \\ ElasticBERT & -2.5 & -62.3\% & -2.1 & -62.1\% & -0.1 & -40.2\% & -2.7 & -61.4\% & -0.2 & -57.9\% \\ SplitEE & -1.3 & **-66.6**\% & -1.1 & **-68.3**\% & 0.0 & -49.2\% & -1.6 & **-65.8**\% & -0.1 & **-59.1**\% \\ SplitEE-S & **-1.2** & -64.3\% & **-1.1** & -65.2\% & **0.0** & **-50.5**\% & **-1.7** & -62.5\% & **+0.1** & -55.1\% \\ \hline \end{tabular} \end{table} Table 2. Main Results: Results on different baselines across different datasets. Cost is left in terms of \(10^{4}\times\lambda\) units; \(\lambda\) is user-defined. The offloading cost is taken as \(5\lambda\) (worst case).

Figure 7. Regret for different models

###### Acknowledgements.

Manjesh K. Hanawal thanks funding support from SERB, Govt. of India, through Core Research Grant (CRG/2022/008807) and MATRICS grant (MTR/2021/000645).
2309.13132
Understanding Calibration of Deep Neural Networks for Medical Image Classification
In the field of medical image analysis, achieving high accuracy is not enough; ensuring well-calibrated predictions is also crucial. Confidence scores of a deep neural network play a pivotal role in explainability by providing insights into the model's certainty, identifying cases that require attention, and establishing trust in its predictions. Consequently, the significance of a well-calibrated model becomes paramount in the medical imaging domain, where accurate and reliable predictions are of utmost importance. While there has been a significant effort towards training modern deep neural networks to achieve high accuracy on medical imaging tasks, model calibration and factors that affect it remain under-explored. To address this, we conducted a comprehensive empirical study that explores model performance and calibration under different training regimes. We considered fully supervised training, which is the prevailing approach in the community, as well as a rotation-based self-supervised method with and without transfer learning, across various datasets and architecture sizes. Multiple calibration metrics were employed to gain a holistic understanding of model calibration. Our study reveals that factors such as weight distributions and the similarity of learned representations correlate with the calibration trends observed in the models. Notably, models trained using the rotation-based self-supervised pretraining regime exhibit significantly better calibration while achieving comparable or even superior performance compared to fully supervised models across different medical imaging datasets. These findings shed light on the importance of model calibration in medical image analysis and highlight the benefits of incorporating a self-supervised learning approach to improve both performance and calibration.
Abhishek Singh Sambyal, Usma Niyaz, Narayanan C. Krishnan, Deepti R. Bathula
2023-09-22T18:36:07Z
http://arxiv.org/abs/2309.13132v2
# Understanding Calibration of Deep Neural Networks for Medical Image Classification

###### Abstract

**Background and Objective -** In the field of medical image analysis, achieving high accuracy is not enough; ensuring well-calibrated predictions is also crucial. Confidence scores of a deep neural network play a pivotal role in explainability by providing insights into the model's certainty, identifying cases that require attention, and establishing trust in its predictions. Consequently, the significance of a well-calibrated model becomes paramount in the medical imaging domain, where accurate and reliable predictions are of utmost importance. While there has been a significant effort towards training modern deep neural networks to achieve high accuracy on medical imaging tasks, model calibration and factors that affect it remain under-explored.

**Methods -** To address this, we conducted a comprehensive empirical study that explores model performance and calibration under different training regimes. We considered fully supervised training, which is the prevailing approach in the community, as well as a rotation-based self-supervised method with and without transfer learning, across various datasets and architecture sizes. Multiple calibration metrics were employed to gain a holistic understanding of model calibration.

**Results -** Our study reveals that factors such as weight distributions and the similarity of learned representations correlate with the calibration trends observed in the models. Notably, models trained using the rotation-based self-supervised pretraining regime exhibit significantly better calibration while achieving comparable or even superior performance compared to fully supervised models across different medical imaging datasets.

**Conclusion -** These findings shed light on the importance of model calibration in medical image analysis and highlight the benefits of incorporating a self-supervised learning approach to improve both performance and calibration.

keywords: Calibration, deep neural network, fully-supervised, self-supervised, transfer learning, medical imaging.

+ Footnote †: journal: Computer Methods and Programs in Biomedicine

## 1 Introduction

Recent advances in deep neural networks have shown remarkable improvement in performance for many computer vision tasks like classification, segmentation, and object detection (Krizhevsky et al., 2012; He et al., 2017). However, it is essential that model predictions are not only accurate but also well calibrated (Guo et al., 2017). Model calibration refers to the accurate estimation of the probability of correctness or uncertainty of a model's predictions. As calibration directly relates to the trustworthiness of a model's predictions, it is an essential factor for evaluating models in safety-critical applications like medical image analysis (Jiang et al., 2012; Kompa et al., 2021; Ma et al., 2022; Tomani and Buettner, 2019).

Probabilities derived from deep learning models are often used as the basis for interpretation because they provide a measure of confidence or certainty associated with the predictions. When a deep learning model assigns a high probability to a particular class, it indicates a stronger belief in that prediction. For example, in medical diagnosis, a high probability assigned to a certain disease can indicate a higher likelihood of its presence based on the observed input data.
However, it is important to note that the reliability of interpretation based on probabilities depends on the calibration of the model (Murphy and Winkler, 1977; Guo et al., 2017; Caruana et al., 2015). Calibration ensures that the assigned probabilities reflect the true likelihood of events, allowing for accurate interpretation. Without proper calibration, the interpretation based solely on probabilities may be misleading or unreliable. Apart from directly interpreting the probabilities as confidence for the decision process, several explainability methods (van der Velden et al., 2022) have been proposed that depend on the information extracted from the model predictions, like weighting random masks (Petsiuk et al., 2018), perturbation (Fong and Vedaldi, 2017; Uzunova et al., 2019), prediction difference analysis (Zintgraf et al., 2017), and contribution scores (Shrikumar et al., 2017). The contribution of calibration to the model's explainability lies in providing reliable probability estimates, which aid in understanding the model's decision-making process and associated uncertainties. It is observed that improved calibration has a positive impact on the saliency maps obtained as interpretations, improving their quality in terms of faithfulness and making them more human-friendly (Scafarto et al., 2023). This interplay between explainability and calibrated predictions emerges as a pivotal factor in establishing a trustworthy model for medical decision support systems. In healthcare, even minor errors in model prediction can carry life-threatening consequences. Therefore, incorporating uncertainty assessment into model predictions can lead to more principled decision-making that safeguards patient well-being. For example, human expertise can be sought in cases with high uncertainty. A model's predictive uncertainty is influenced by noise in data, incomplete coverage of the domain, and imperfect models. Effectively estimating or minimizing these uncertainties can markedly enhance the overall quality and reliability of the results (Jungo et al., 2020; Jungo and Reyes, 2019). Considerable endeavors have been dedicated to mitigating both data and model uncertainty through strategies like data augmentation (Singh Sambyal et al., 2022; Wang et al., 2019), Bayesian inference (Blundell et al., 2015; Gal and Ghahramani, 2016; Jena and Awate, 2019), and ensembling (Mehrtash et al., 2020; Lakshminarayanan et al., 2017). Modern neural networks are known to be miscalibrated (Guo et al., 2017) (overconfident, i.e., high confidence but low accuracy, or underconfident, i.e., low confidence but high accuracy). Hence, model calibration has drawn significant attention in recent years. Approaches to improve the calibration of deep neural networks include post-hoc strategies (Platt, 1999; Guo et al., 2017), data augmentation (Zhang et al., 2018; Thulasidasan et al., 2019; Hendrycks et al., 2020) and ensembling (Lakshminarayanan et al., 2017). Similar strategies have also been utilized in the domain of medical image analysis to explore calibration with the primary goal of alleviating miscalibration (Frenkel and Goldberger, 2022; Larrazabal et al., 2021; Murugesan et al., 2023; Stolte et al., 2022). Furthermore, recent research has also investigated the impact of different training approaches on the model's performance and calibration. These include the use of focal loss (Mukhoti et al., 2020), self-supervised learning (Hendrycks et al., 2019), and fully-supervised networks with pretraining (Hendrycks et al., 2019). 
However, the scope of these studies has been limited to exploring calibration in the context of generic computer vision datasets like CIFAR10, CIFAR100, and ImageNet (Ericsson et al., 2021; Wang et al., 2023). Moreover, the majority of these studies have only utilized Expected Calibration Error (ECE) as the calibration metric. Unfortunately, ECE has several drawbacks, rendering it unfit for tasks like multi-class classification and inefficient due to the bias-variance trade-off (Nixon et al., 2019). Nevertheless, as reliable and accurate estimation of predictive uncertainty is important, measuring calibration is an ongoing, active research area resulting in many new metrics (Nixon et al., 2019; Singh et al., 2021; Thulasidasan et al., 2019; Guo et al., 2017; Nguyen and O'Connor, 2015). Model calibration is tied to the training process, which is inherently challenging for medical image analysis applications. The scarcity of labeled training datasets is a major cause for concern (Langlotz et al., 2019; Rahaman and Thiery, 2021). Gathering labeled data for the medical domain is a daunting task due to the complex and intricate annotation process requiring domain expertise. _Transfer learning_ is a popular learning paradigm to circumvent the labeled training data scarcity (Mei et al., 2022; Ma et al., 2022). Although transfer learning improves model accuracy, especially for smaller datasets, it also improves the quality of various complementary model components like adversarial robustness and uncertainty (Hendrycks et al., 2019). Remarkably, the literature suggests that the advantages of popular methods such as transfer learning on classical computer vision datasets do not extend to medical imaging applications (Raghu et al., 2019). _Self-supervised learning (SSL)_ is another promising training regime when learning from scarce labeled data in classical computer vision applications (Tendle and Hasan, 2021; Doersch et al., 2015). Though fully-supervised (pretrained) and self-supervised approaches seem to improve various model performance measures like accuracy, robustness, and uncertainty (Hendrycks et al., 2019; Navarro et al., 2021), the impact of the training regime(s) on model calibration is under-explored. Our current work addresses these crucial gaps in the literature - understanding the calibration of deep neural networks for medical image analysis in the context of different training regimes and several calibration metrics. Accordingly, our main contributions are: 1. We study the effect of different training regimes on the performance and calibration of models used for medical image analysis. Specifically, we compare three different training paradigms: Fully-Supervised with random initialization (\(FS_{r}\)), Fully-Supervised with pretraining (\(FS_{p}\)), and Rotation-based Self-Supervision with pretraining (\(SSL_{p}\)). 2. We leverage several complementary calibration metrics to provide an accurate, unbiased, and comprehensive evaluation of the predictive uncertainty of models. 3. We assess the influence of varying dataset sizes, architecture capacities, and task complexity on the performance and calibration of the models. 4. We identify some of the potential factors that are correlated with the observed changes in the calibration of models. These include layer-wise learned representations as well as the weight distribution of the model parameters. 
In general, we observe that the rotation-based self-supervised pretrained training approach provides better calibration for medical image analysis tasks than its fully supervised counterpart, with on-par or better performance. Additionally, our findings contradict recent literature (Raghu et al., 2019) that remarked "_transfer offers little benefit to performance_" for medical datasets. Furthermore, both the weight distribution and the learned representation analysis indicate that self-supervised training provides implicit regularization that in turn achieves better calibration. ## 2 Methods ### Training Regimes #### 2.1.1 Fully-Supervised and Transfer Learning In a fully-supervised training regime, we use the given input data and the corresponding target value to learn the task. We can train models in two different ways: learning from scratch, i.e., initializing model weights randomly, or pretraining, i.e., transferring knowledge from one task to another by using the learned weights. In the transfer learning approach, a model is first pretrained using supervised learning on a large labeled dataset (Krizhevsky et al., 2012; Donahue et al., 2014). Then the learned generic representations are fine-tuned on the in-domain medical data (Raghu et al., 2019; Wen et al., 2021). Generally, fine-tuning a pretrained model achieves better generalized performance and faster convergence than training a fully-supervised network from scratch (Azizi et al., 2021; Girshick et al., 2014). We have considered \(FS_{r}\) as a baseline in our experiments where the model is trained from scratch. ImageNet pretraining is used as the default pretraining approach, which has shown remarkable performance on medical imaging datasets (Wen et al., 2021). #### 2.1.2 Self-Supervised Learning In the self-supervised training regime (Hendrycks et al., 2019; Gidaris et al., 2018), illustrated in Figure 1, we train a classifier network with a separate auxiliary head to predict the induced rotation in the image. The output of the penultimate layer is given to both the classifier and the auxiliary module. The classifier predicts a k-way softmax output vector based on the chosen task/dataset, whereas the auxiliary module predicts a 4-way softmax output vector indicating the rotation degree (\(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\)). Given a dataset \(\mathcal{D}\) of \(N\) training examples, \(\mathcal{D}=\{x_{i},y_{i}\}_{i=1}^{N}\), the goal is to learn representations using a self-supervised regime. The overall loss during training is the weighted sum of the vanilla classification loss and the auxiliary task loss \[\mathcal{L}(\theta)=\mathcal{L}(y,p(y|R_{r}(x));\theta)+\lambda\mathcal{L}_{aux}(r,p(r|R_{r}(x));\theta) \tag{1}\] where \(R_{r}(x)\) is a rotation transformation on input image \(x\) and \(r\in\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\) is the ground truth label for the auxiliary task. Note that the auxiliary component does not require the ground truth training label \(y\) as input. \(\mathcal{L}_{aux}\) is the cross-entropy between \(r\) and the predicted rotation. ### Calibration Metrics _Perfect Calibration_: In a multi-class classification problem, let the input be \(X\) and the label \(Y\in\{1,2,\cdots,K\}\), and let \(f\) be the learned model. The model's output is \(f(X)=(\hat{Y},\hat{P})\) where \(\hat{Y}\) is a class prediction and \(\hat{P}\) is its associated confidence. If \(\hat{P}\) is always the true probability, then we call the model perfectly calibrated as defined in (2). 
\[\mathbb{P}\left(\hat{Y}=Y\mid\hat{P}=p\right)=p,\quad\forall p\in[0,1] \tag{2}\] The difference between the true confidence (accuracy) and the predicted confidence (output probability), \(|\mathbb{P}\left(\hat{Y}=Y\mid\hat{P}=p\right)-p|\) for a given \(p\), is known as the calibration error or miscalibration. Since \(\hat{P}\) is a continuous random variable, the probability in (2) cannot be computed using finitely many samples, resulting in different approximations of the calibration error, as discussed below. #### 2.2.1 Expected Calibration Error (ECE) The most common miscalibration measure is the ECE (Naeini et al., 2015; Guo et al., 2017), which computes the difference in the expectation between confidence and accuracy. It is a scalar summary statistic of calibration. \[\mathbb{E}_{\hat{P}}\left[\left|\mathbb{P}\left(\hat{Y}=Y\mid\hat{P}=p\right)- p\right|\right] \tag{3}\] In practice, we cannot estimate ECE without quantization; therefore, the confidence scores for the predicted class are divided into \(M\) equally spaced bins. For each bin, the average confidence (conf) and accuracy (acc) are computed. The difference between the average confidence and accuracy, weighted by the number of samples and summed over the bins, gives us the ECE measure. Formally, \[\text{ECE}=\sum_{m=1}^{M}\frac{n_{m}}{N}|\operatorname{acc}(m)-\text{conf}(m)| \tag{4}\] where \(n_{m}\) is the number of predictions in bin \(m\). While ECE is used extensively to measure calibration, it has some major drawbacks (Nixon et al., 2019): 1. Structured around binary classification, ECE only considers the class with maximum predicted probability. As a result, it discounts the accuracy with which the model predicts other class probabilities in a multi-class classification setting. 2. Deep neural network predictions are typically overconfident, causing skewness in the output probabilities. Consequently, equal-interval binning metrics like ECE are impacted by only a few bins. 3. The number of bins, as a hyperparameter, plays a crucial role in the quality of calibration estimation. However, determining the optimal number of bins is challenging due to the bias-variance tradeoff. 4. In a static binning scheme like ECE, overconfident and underconfident predictions occurring in the same bin result in a reduction of calibration error. In such cases, it is difficult to infer the true cause of improvement in model calibration. These issues have resulted in the development of novel calibration metrics discussed in the following subsections. Figure 1: Self-Supervised Learning Framework #### 2.2.2 Adaptive Calibration Error (ACE) As ECE suffers from skewness in the output predictions, ACE mainly focuses on the regions where the predictions are made. It uses an adaptive binning scheme to ensure an equal number of predictions in each bin (Nguyen and O'Connor, 2015; Nixon et al., 2019). Formally, \[\text{ACE}=\frac{1}{KR}\sum_{k=1}^{K}\sum_{r=1}^{R}|\text{acc}(r,k)-\text{conf}( r,k)| \tag{5}\] where \(\text{acc}(r,k)\) and \(\text{conf}(r,k)\) represent the accuracy and confidence for the adaptive calibration range or bin \(r\) and class label \(k\), respectively. Due to adaptive binning, the bin spacing can be unequal; wide in areas where data points are sparse, and narrow otherwise. #### 2.2.3 Maximum Calibration Error (MCE) It refers to an upper-bound estimate of miscalibration, useful in safety-critical applications. 
MCE (Naeini et al., 2015; Guo et al., 2017) captures the worst-case deviation between confidence and accuracy by measuring the maximum difference across all bins \(m\), as shown below: \[\text{MCE}=\max_{m\in\{1,\dots,M\}}|\text{acc}(m)-\text{conf}(m)| \tag{6}\] #### 2.2.4 Overconfidence Error (OE) Modern deep neural networks provide highly confident outputs despite being inaccurate. Thus, a metric that captures the model's overconfidence provides better model insights. OE (Thulasidasan et al., 2019) captures the overconfidence in the model prediction by penalizing the confidence score only when the model confidence is greater than the accuracy. \[\text{OE}=\sum_{m=1}^{M}\frac{n_{m}}{N}\left[\text{conf}(m)\times\max\big{(} \,\text{conf}(m)-\text{acc}(m),0\big{)}\right] \tag{7}\] #### 2.2.5 Brier or Quadratic Score It is a strictly proper scoring rule that measures the accuracy of the probabilistic predictions (Brier, 1950; Gneiting and Raftery, 2007; Kruppa et al., 2014). It is the mean squared difference between the one-hot encoded true label and the predicted probability. Formally, \[\text{Brier}=\sum_{k=1}^{K}(\mathbb{I}_{\{Y=k\}}-\hat{P}(Y=k\mid X))^{2} \tag{8}\] #### 2.2.6 Negative Log Likelihood (NLL) For safety-critical applications, using a probabilistic classifier that predicts the correct class and gives the probability distribution of the target classes is encouraged. Using NLL, we can evaluate models with the best predictive uncertainty by measuring the quality of the probabilistic predictions (Vaicenavicius et al., 2019; Kull and Flach, 2015; Quinonero-Candela et al., 2006). Formally, \[\text{NLL}=-\sum_{k=1}^{K}\mathbb{I}_{\{Y=k\}}\log[\hat{P}(Y=k\mid X)] \tag{9}\] Additionally, _Root Mean Square Calibration Error (RMSCE)_ (Nguyen and O'Connor, 2015; Hendrycks et al., 2019) measures the square root of the expected squared difference between confidence and accuracy. As it defines the magnitude of miscalibration, it is highly correlated with ECE. Similar to ACE, _Static Calibration Error (SCE)_ (Nixon et al., 2019) extends ECE by measuring calibration over all classes in each bin for a multi-class setting but does not use an adaptive binning approach. As a result, we exclude these metrics from our experimental analysis. It can be observed from the above definitions that none of the individual metrics takes a holistic approach. Hence, it is important to recognize that individual metrics are limited in their ability to provide accurate estimates of calibration. Consequently, a collective evaluation of these metrics is necessary for a better and unbiased understanding of calibration performance. ### Experimental Setup #### 2.3.1 Datasets We used three different datasets to investigate the classification performance and calibration of models trained under different regimes. The datasets have varying characteristics, such as different imaging modalities and sizes. * The Diabetic Retinopathy (DR) dataset contains 35K high-resolution (\(\sim 5000\times 3000\)) retinal fundus scans (EyePACS, Diabetic Retinopathy Detection). Each image is rated for the severity of diabetic retinopathy on a scale of 0-4, which makes it a five-class classification problem. The images are captured under varying imaging conditions, like different models and camera types. 
* The Histopathologic Cancer dataset contains 220K images (patches of size \(96\times 96\)) extracted from larger digital pathological scans (Ehteshami Bejnordi et al., 2017; Histopathologic Cancer Detection: Modified version of the PatchCamelyon (PCam) Benchmark Dataset; Veeling et al., 2018). Each image is annotated with a binary label indicating the presence of tumor tissue in the histopathologic scans of lymph node sections. * The COVID-19 dataset is a small dataset consisting of 317 high-resolution (\(\sim 4000\times 3000\)) chest X-ray images (Covid-19 Image Dataset; Cohen et al., 2020, 2020). This dataset corresponds to a three-class classification problem. Both the DR and Histopathologic Cancer datasets are split into four training datasets of sizes 500, 1000, 5000, and 10000, and a common test dataset of 2000 images. The _Covid-19_ dataset is partitioned into a 60/20 train/validation split and a separate 20% test set for evaluation. The images in all the datasets are resized to \(224\times 224\), which is the standard input resolution for ResNet architectures. #### 2.3.2 Implementation Details **Architectures -** Due to the popularity of ResNet architectures in medical imaging for classification tasks (Azizi et al., 2021; Wen et al., 2021; Mei et al., 2022), we choose the standard ResNet18, ResNet50 (He et al., 2016), and WideResNet (Zagoruyko and Komodakis, 2016) architectures as the network backbone to simulate small, medium, and large architecture sizes, respectively. For the training regimes relying on a pretrained model, we initialize the backbone architectures using ImageNet-pretrained weights, and the classifier and self-supervised modules using a Kaiming uniform initialization variant (He et al., 2015). **Evaluation Metrics -** We use two performance metrics - _Accuracy_ and _Area under the Receiver Operating Characteristic curve (ROC AUC)_; and six calibration metrics - _ECE, MCE, ACE, OE, Brier_ and _NLL_. The architecture details and hyperparameter settings are presented in the supplementary material, Section 5.1. ## 3 Results ### Effect of Training Regimes on Calibration In this study, we investigate the performance and calibration of three different architectures - _ResNet18, ResNet50 & WideResNet_ - using three different training regimes - _Fully-Supervised with random initialization (\(FS_{r}\)), Fully-Supervised with pretraining (\(FS_{p}\))_ and _Rotation-based Self-Supervision with pretraining (\(SSL_{p}\))_. For medical image analysis, both the accuracy and reliability of the models are crucial. In this context, there are two key scenarios we need to consider: 1. _High accuracy and high calibration error_ - When a model has high accuracy but is miscalibrated, the model's predictions may not be trustworthy. Both incorrect predictions with high confidence and correct predictions with low confidence are detrimental in healthcare applications. Reliance on accuracy alone is hazardous. 2. _High accuracy and low calibration error_ - This is the ideal scenario, where a model has high accuracy and well-calibrated confidence scores. Predictions from such a model can be trusted in the decision-making process. #### 3.1.1 Effect of Architecture and Dataset Size In this section, we present the findings of our analysis of the DR dataset. The performance and calibration scores of various architectures, as well as the effects of increasing training dataset size, are depicted in Figure 3 for the three different training regimes. 
Similar results and analysis of the WideResNet architecture on the Histopathology dataset are presented in Figure 2, and the rest can be found in the supplementary material (Figure 11). Owing to the difficulty of the task, the performance of all training regimes across all the models is not very high (\(\leq\) 75%). However, we do see a clear improvement in performance as the training dataset size increases across all architectures and regimes. Additionally, we observe that initializing models with pretrained weights (with \(SSL_{p}\) having an edge over \(FS_{p}\)) offers a significant advantage over random initialization, which contradicts existing assumptions that transfer learning from ImageNet models is not beneficial. Both \(FS_{p}\) and \(SSL_{p}\) result in similar performance when using larger models (Raghu et al., 2019). Comparing the effect of the \(FS_{p}\) and \(SSL_{p}\) training regimes on calibration, we see that \(SSL_{p}\) significantly improves calibration across all metrics for all architectures and training dataset sizes, as illustrated in Figures 3(c)-(h). The gap in the calibration metrics for \(SSL_{p}\) and \(FS_{p}\) is highest when using the largest architecture (WideResNet). While a randomly initialized model (\(FS_{r}\)) results in marginally better calibration (sometimes even better than \(SSL_{p}\)), its performance is significantly worse. Overall, we observe that models trained using self-supervision with pretrained weights show better or similar performance with a significant improvement in calibration error compared to fully-supervised pretraining. These results suggest that self-supervised training can help improve both performance and calibration, leading to more robust and reliable models for medical image analysis. We discuss the results on the Covid-19 dataset separately owing to its small size. Figure 4 depicts that all the models result in high performance on this dataset, indicating the ease of learning the task. The superior performance of \(FS_{p}\) and \(SSL_{p}\) indicates a definite advantage of transfer through pretrained weights over random initialization, contradicting the recent findings (Raghu et al., 2019). It is also evident that larger models result in better performance than shallow models. The negative impact of training from a random initialization (\(FS_{r}\)) for over-parameterized models is also evident from the drop in the performance and calibration with the increase in architecture size. While we observe a significant difference in the performance, there is only a marginal change in the calibration metrics. There is no definite trend in the calibration across the three training regimes. Thus, while transfer seems to have a positive impact on performance, calibration does not enjoy a commensurate impact. #### 3.1.2 Issues with using Single Calibration Metric In this section, we discuss the importance of collective evaluation of calibration metrics. For this purpose, let us consider the question - _Does transfer learning improve calibration?_ In the context of the DR dataset, we analyze the results in Figure 3. Comparing \(FS_{r}\) and \(FS_{p}\) using only _Brier_ for all architectures and dataset sizes, the general trend we observe is that transfer learning improves calibration. However, this observation fails when we choose the _ECE_ metric, which gives us mixed results. Similarly, incorrect conclusions could be drawn when using individual metrics like _NLL_ and _ACE_. 
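To make the collective evaluation concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of how the binned and proper-scoring metrics of Section 2.2 could be computed with NumPy from an array of softmax outputs `probs` of shape `(n, K)` and integer labels; the helper names and the choice of 15 bins are assumptions.

```python
import numpy as np

def binned_errors(probs, labels, n_bins=15):
    """ECE (Eq. 4), MCE (Eq. 6) and OE (Eq. 7) from equal-width bins
    over the maximum-class confidence."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, oe = 0.0, 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if not mask.any():
            continue
        acc_m, conf_m, w = correct[mask].mean(), conf[mask].mean(), mask.mean()
        ece += w * abs(acc_m - conf_m)
        mce = max(mce, abs(acc_m - conf_m))
        oe += w * conf_m * max(conf_m - acc_m, 0.0)
    return ece, mce, oe

def ace(probs, labels, n_ranges=15):
    """ACE (Eq. 5): equal-mass ranges per class, averaged over classes."""
    n, k = probs.shape
    total = 0.0
    for c in range(k):
        order = np.argsort(probs[:, c])          # adaptive (equal-mass) binning
        for chunk in np.array_split(order, n_ranges):
            total += abs((labels[chunk] == c).mean() - probs[chunk, c].mean())
    return total / (k * n_ranges)

def brier_and_nll(probs, labels, eps=1e-12):
    """Brier score (Eq. 8) and NLL (Eq. 9), averaged over samples."""
    onehot = np.eye(probs.shape[1])[labels]
    brier = ((onehot - probs) ** 2).sum(axis=1).mean()
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps).mean()
    return brier, nll
```

Reporting all six numbers side by side, rather than ECE alone, is what exposes the mixed trends described above.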
Figure 2: Joint evaluation for performance and calibration across different dataset sizes (x-axis) using the WideResNet architecture on the Histopathology dataset. The shaded region corresponds to \(\mu\pm\sigma\), estimated over 3 trials. \(\uparrow\): higher is better, \(\downarrow\): lower is better. Figure 3: Joint evaluation for performance and calibration across different dataset sizes (x-axis) and architectures for the DR dataset. The shaded region corresponds to \(\mu\pm\sigma\), estimated over 3 trials. The underline shows the statistical significance between \(FS_{p}\) and \(SSL_{p}\). Black and pink colors signify \(p<0.05\) and \(0.05<p<0.1\) levels of significance, respectively. \(\uparrow\): higher is better, \(\downarrow\): lower is better. Likewise, we consider the effect of architecture on performance and calibration in the context of the small Covid-19 dataset. From Figure 4, we observe that \(FS_{p}\) and \(SSL_{p}\) have comparable performances with nominal improvement with increasing architecture size. In this case, using only \(ECE\) as the calibration metric would lead us to infer that \(FS_{p}\) provides better calibration than \(SSL_{p}\) for large capacity models. In contrast, \(ACE\) suggests the opposite. However, these two training regimes are quite similar across most other metrics. These examples further highlight that in scenarios where models provide mixed calibration results, selecting the best model is non-trivial and subjective. In Section 4, we discuss some potential model selection criteria to address this issue. ### Factors affecting Performance and Calibration In this section, we explore two potential factors linked to the enhanced calibration of the self-supervised training regime. Firstly, we examine the standard deviation of weight distributions and calibration metrics across different training regimes. Secondly, we investigate the similarity of learned representations in the activations. #### 3.2.1 Weight Distribution The weight distribution of a neural network can provide useful insights into the model's performance. Regularization schemes like \(\mathcal{L}_{1}\), \(\mathcal{L}_{2}\), and dropout (Ng, 2004; Srivastava et al., 2014) are often employed to find optimal parameters of a model with low generalization error. By adding a parameter norm penalty term to the objective function, the \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) norms encourage sparse weights with many zero values and small weight values, respectively. Weighting the contribution of the penalty term controls the regularization effect. For instance, with the \(\mathcal{L}_{2}\) norm, the histogram of weights tends to a zero-mean normal distribution under a high penalty, which causes the model to underestimate the weights and hence leads to underfitting. In contrast, a low penalty yields a flatter histogram that causes the model to overfit the training data. To strike the right balance, careful hyperparameter tuning is needed to determine the data-dependent optimal penalty term contribution for better generalization. Based on this intuition, we attempt to interpret the performance and calibration of networks trained using different regimes using weight distribution analysis. To the best of our knowledge, the calibration of a model has not been explained in the context of the weight distribution of a network, especially for medical image analysis. 
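As a sketch of how this analysis can be carried out (our own illustration, assuming a standard PyTorch model), the pooled weight histogram and the per-layer standard deviation and Frobenius norm discussed next can be gathered as follows.

```python
import torch

@torch.no_grad()
def weight_statistics(model):
    """Per-layer SD and Frobenius norm of weight matrices/kernels,
    plus the pooled weight vector for histogram comparison."""
    stats, pooled = [], []
    for name, p in model.named_parameters():
        if p.dim() < 2:            # skip biases and normalisation scales
            continue
        w = p.flatten()
        stats.append((name, w.std().item(), torch.linalg.norm(w).item()))
        pooled.append(w)
    return stats, torch.cat(pooled)

# Example: contrast two training regimes (model names are hypothetical).
# stats_ssl, w_ssl = weight_statistics(model_ssl_p)
# stats_fs,  w_fs  = weight_statistics(model_fs_p)
# print(w_ssl.std().item(), w_fs.std().item())   # spread of the two distributions
```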
The comparison of weight distributions between the models trained using \(FS_{r}\), \(FS_{p}\), and \(SSL_{p}\) for the DR dataset in Figure 5a-(1),(2) reveals some interesting observations. The weight distribution of the model trained with \(FS_{p}\) exhibits a higher peak than \(SSL_{p}\), indicating that most of the weights are small. Conversely, the \(FS_{r}\) model exhibits the highest standard deviation, resembling a uniform distribution. Now, the question arises: which distribution is preferable, and which scenario leads to better generalization with improved calibration? To address this, we analyze the impact of weight distribution on the performance and calibration of the \(FS_{p}\) and \(SSL_{p}\) models using Figure 3 and Figure 5. We observe that both models show similar AUC performance, with \(SSL_{p}\) displaying a smaller peak in the weight distribution. This difference in weight distribution influences the calibration metrics, where \(SSL_{p}\) demonstrates significantly lower calibration error across most metrics. In other words, the predicted probabilities align more closely with the true probabilities using the \(SSL_{p}\) model. For the Histopathology dataset, the weight distribution of the \(SSL_{p}\) model is similar to that of the \(FS_{p}\), as seen in Figure 5b-(1),(2). This similarity in weight distribution could be attributed to an easier task, leading to higher test performance. However, despite the similarity in weight distribution, the \(SSL_{p}\) model still provides better-calibrated outputs compared to the \(FS_{p}\), but the difference in calibration error between these training regimes is now smaller. Considering the standard deviation of the weight distributions, it is suggested that a balance in the spread of weights is important for achieving good performance and calibration. It is important to note that while the \(FS_{r}\) model has the highest standard deviation and comparable calibration error, it exhibits low AUC performance, making it inconsequential among the other training regimes. In Figure 5-(3),(4), we analyze the layer-wise standard deviation and Frobenius norm of the weights. In Figure 5a, we observe the influence of \(SSL_{p}\) on the standard deviation and weight magnitudes in every layer of the network. Additionally, we notice that the standard deviation tends to be higher in the initial layers and decreases as we move towards higher layers of the network. In Figure 5b, the standard deviation and magnitude of weights are similar for both the \(SSL_{p}\) and \(FS_{p}\) training regimes. This suggests that the features extracted by each layer of the network are similar, which could be attributed to the high performance achieved by both training regimes. Despite the similarity, the \(SSL_{p}\) training regime still produces a better-calibrated model than the \(FS_{p}\), indicating the additional benefits of self-supervised training. Figure 4: Comparing performance and calibration across different architectures and training regimes for the _Covid-19_ dataset. The error bars correspond to \(\mu\pm\sigma\), estimated over 3 trials. Relying on a single calibration error metric, such as ECE or ACE, can lead to conflicting conclusions when it comes to model selection. By considering a combination of metrics, we gain a more comprehensive understanding of the model's calibration performance. \(\uparrow\): higher is better, \(\downarrow\): lower is better. 
For a more comprehensive analysis, Figure 6 further consolidates the trends between performance, the standard deviation of the weights, and model calibration. The figure highlights that achieving good performance and calibration in a model necessitates finding a balance in the spread of weights, a balance which the \(SSL_{p}\) training regime was able to achieve successfully. Due to the different scales of the calibration metrics, we plot them on multiple axes. The weight values and their standard deviation are very small; therefore, we scaled them by \(10^{2}\). In Figure 6a, \(FS_{r}\) (top left, orange) has the highest standard deviation (wide distribution) and gives us the best calibration error (x-axis) but the worst performance compared to other training regimes. The standard deviation for \(FS_{p}\) (bottom right, red) is the lowest, but the calibration error is still high, which is not ideal. On the other hand, \(SSL_{p}\) has a low standard deviation but yields the best performance and calibration. So, when there is a gap in the standard deviation of weights between different training regimes (\(SSL_{p}\) and \(FS_{p}\)), we observe that the calibration error metrics are well separated (Figure 6a). Alternatively, when the gap is negligible, the calibration error metrics overlap (Figure 6b). In summary, we observe that the \(SSL_{p}\) training regime consistently provides better calibration than the \(FS_{p}\) regime for both datasets. The magnitude of improvement or change in calibration is directly related to the differences in weight distributions. Figure 5: Comparing different aspects of WideResNet learned weights for dataset size 10000 on the DR (a) and Histopathologic Cancer (b) datasets. (1) and (2): the normalized histogram of weights for the three training regimes. (3) Layer-wise comparison of standard deviation (SD) between \(FS_{p}\) and \(SSL_{p}\). (4) Layer-wise comparison of Frobenius norm between \(FS_{p}\) and \(SSL_{p}\). #### 3.2.2 Learned Representation In addition to the diversity of the whole weight space, we explore the impact of layer-wise learned neural representations on performance and calibration. Towards this end, we use the widely popular Centered Kernel Alignment (CKA) (Kornblith et al., 2019) metric that measures the similarity between the activations of hidden layers in a neural network. Literature suggests that high representational similarity across layers indicates redundancy in the learned representations of a network. Furthermore, redundant representations impact generalizability due to the influence of regularized training (Doimo et al., 2022), which in turn improves model calibration (Guo et al., 2017). CKA analysis of WideResNet's layer representations for different training regimes on the DR dataset is shown in Figure 7. The CKA plots for \(FS_{p}\) and \(SSL_{p}\) depict comparatively similar patterns. However, the higher layers of \(FS_{p}\) show a significant decrease in representational similarity (darker region, shown in the blue box) with increasing dataset size. The relatively high CKA values of the deeper layers of \(SSL_{p}\) depict redundancy of learned representations (lighter regions) that provides implicit regularization. This in turn explains the reduced calibration error of \(SSL_{p}\) compared to \(FS_{p}\) as seen in Figure 3. A similar pattern is observed for the ResNet18 and ResNet50 architectures, as depicted in Figure 10 of the supplementary material. 
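For reference, a minimal sketch of the linear variant of CKA between the activation matrices of two layers (our simplification; Kornblith et al., 2019 also describe debiased and kernel forms):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activations X (n, d1) and Y (n, d2)
    recorded on the same n examples."""
    X = X - X.mean(axis=0)                         # centre each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2     # ||Y^T X||_F^2
    return hsic / (np.linalg.norm(X.T @ X, 'fro') *
                   np.linalg.norm(Y.T @ Y, 'fro'))
```

Evaluating this for every pair of layers produces similarity matrices such as those in Figure 7, with values near 1 in the deeper layers indicating the redundant representations discussed above.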
For the Histopathology dataset, the CKA plots (shown in Figure 12 of the supplementary material) for \(FS_{p}\) and \(SSL_{p}\) depict very similar patterns that explain the comparable performance and calibration afforded by these training regimes. To facilitate quantitative comparison, we present the mean CKA value as a summary statistic to represent the CKA plots of individual networks in Table 2 of the Supplementary Material, Section 5.5. While not very significant, these findings align with the trends observed in Figure 7. Furthermore, the difference in the mean CKA values of \(SSL_{p}\) and \(FS_{p}\) fairly correlates with the difference in the magnitude of the calibration metrics of these regimes. ## 4 Discussion For safety-critical applications like medical image analysis, it is imperative to choose models with high accuracy and low calibration errors. In this study, we investigate the performance and calibration of three different architectures using three different training regimes on medical imaging datasets of varying sizes and task complexities. Furthermore, we use six complementary calibration metrics that collectively provide a comprehensive evaluation of the predictive uncertainty of the models. _Model selection with mixed calibration results_ - While using multiple calibration metrics provides a more comprehensive evaluation, deciding on the best model can still be challenging, as observed in Section 3.1.2. There are a few strategies that can be employed to aid in the decision-making process. One approach is to use a voting-based scheme, where each model is assigned a vote based on its performance across the calibration metrics. The model with the maximum number of votes is then selected as the best choice. This approach treats all metrics equally and can be useful when there is no significant variation in the importance of different metrics. _Domain specific metric relevance_ - However, it is important to consider that different calibration metrics may have different objectives and importance in specific domains. For example, metrics like OE (Overconfidence Error) explicitly measure the overconfidence of the model predictions, while MCE (Maximum Calibration Error) provides an upper bound on the mistakes made by the model. In such cases, it might be necessary to assign more weightage to these important metrics during the voting process. The determination of metric importance is subjective and can vary depending on the application. Expert knowledge and domain expertise play a crucial role in assigning relative importance to different metrics. By incorporating the opinions of experts, the voting process can be tailored to reflect the specific requirements of the application. Figure 6: Comparing calibration metrics (x-axis) vs. standard deviation (SD, y-axis) of the WideResNet architecture for dataset size 10000 on the DR and Histopathologic Cancer datasets. Colors represent training regimes (orange for \(FS_{r}\), blue for \(SSL_{p}\), and red for \(FS_{p}\)), and markers are the lowercase initials of each calibration metric: \(e-\text{ECE}\), \(o-\text{OE}\), \(a-\text{ACE}\), \(m-\text{MCE}\), \(b-\text{Brier}\), \(n-\text{NLL}\). Alongside each calibration error cluster, the performance is also reported. Ideally, the metrics should be at the bottom left with comparable performance. **(a)** \(SSL_{p}\) has lower calibration error with on-par performance compared to the \(FS_{p}\) training regime, indicating it to be a suitable choice. The calibration error metric clusters of \(SSL_{p}\) and \(FS_{p}\) are noticeably well separated, correlating with the gap in their SD. **(b)** Here, \(SSL_{p}\) appears to be the best in calibration and performance compared to the other training regimes. The noticeable difference here is that the calibration error metric clusters of \(SSL_{p}\) and \(FS_{p}\) are close (somewhat overlapping) when the SDs of their weight distributions are similar. 
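Returning to the voting-based scheme described above, a minimal sketch (our own conventions, not a prescription from the paper) is given below; the `weights` and `margin` parameters anticipate the metric-importance and margin refinements discussed next.

```python
import numpy as np

def vote_select(scores, weights=None, margin=0.0):
    """Pick a model by (weighted) majority vote across calibration metrics.
    scores: dict {model_name: [metric_1, ..., metric_M]}, lower is better,
    with all models scored on the same metric order.
    A metric casts its vote only if the best and second-best models
    differ by more than `margin` on that metric."""
    names = list(scores)
    table = np.array([scores[n] for n in names])      # (models, metrics)
    n_models, n_metrics = table.shape
    weights = np.ones(n_metrics) if weights is None else np.asarray(weights)
    votes = np.zeros(n_models)
    for j in range(n_metrics):
        order = np.argsort(table[:, j])
        best = order[0]
        gap_ok = (n_models < 2 or
                  table[order[1], j] - table[best, j] > margin)
        if gap_ok:
            votes[best] += weights[j]
    return names[int(np.argmax(votes))], dict(zip(names, votes))
```

For instance, passing a larger weight for the OE and MCE entries would up-weight the overconfidence-sensitive metrics as suggested above.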
_Margin for model selection_ - In addition to assigning weights to metrics, introducing a margin or threshold in the voting scheme can help refine the model selection process. This threshold represents the minimum difference in calibration error between two training regimes that must be surpassed for a metric to be considered in the model selection. By setting a threshold, metrics that do not exhibit significant differences can be filtered out, keeping the focus on those that have a substantial impact on model calibration. It is worth noting that the difficulty of choosing a model also arises when one model has higher accuracy but poorer calibration while another model has lower accuracy but better calibration. This dilemma has been discussed in the literature (Minderer et al., 2021), highlighting the need for careful consideration of calibration metrics during model selection. _Selective prediction_ is one scenario where the classifier abstains from low-confidence predictions based on some threshold or the cost structure of the specific application (Hernandez-Orallo et al., 2012). In such cases, low-confidence predictions are referred to an expert for further analysis or diagnosis. This approach allows for cautious decision-making when the model's confidence is not sufficient for reliable predictions. Overall, the selection of the best model with mixed calibration results requires a combination of objective evaluation, subjective judgment of metric importance, and consideration of domain-specific requirements. _Calibration Metrics_ - While we have elaborated on the drawbacks of ECE, it provides an intuitive and straightforward interpretation, is simple to implement, and captures pure calibration. Additionally, ECE is associated with the reliability diagram - a powerful tool to visualize model calibration. It is also worth noting that alternative calibration metrics have their own shortcomings. The majority of the existing metrics suffer from challenges like scale-dependent interpretation, lack of a normalized range, arbitrary choice of the number of bins, etc. (Matsubara et al., 2023). Moreover, composite measures like NLL and Brier blend calibration and refinement, making it challenging to isolate calibration effects. Multiclass settings introduce additional complexity due to the multitude of classes, their diverse interrelations, and the absence of a universally accepted metric for gauging refinement. Furthermore, the choice of calibration metric can also be domain or application-dependent. As there is no universally applicable or acceptable calibration metric, we propose a collective evaluation of these metrics for a better and unbiased understanding of calibration performance. _Limitations_ - Our current study focused on medical image classification tasks across three different benchmark datasets. However, due to limited computational resources, we selected datasets with 2D images. Extending this work to 3D datasets as well as other tasks like medical image segmentation and registration can help broaden our understanding of calibration in the general context of medical image analysis. 
Additionally, our study highlights that using the rotation-based self-supervised learning (SSL) approach gives better-calibrated results compared to the usual fully-supervised learning. A comparison with other SSL techniques, such as contrastive SSL or generative SSL, would be interesting. _Conclusion_ - In general, for medical image classification tasks, we observe that training regimes have a varying impact on model calibration. Overall, we observe that across different architectures, training regimes, datasets, and sample sizes, (a) transfer learning through pretraining helps improve performance over randomly initialized models and (b) the pretrained self-supervised approach provides better calibration than its fully supervised counterpart, with on-par or better performance. While we notice a sizeable increase in performance with dataset sizes, only nominal improvement is realized with increasing model capacity. Furthermore, we identified the weight distribution and learned representations of a neural network as potential confounding factors that provide useful insights into model calibration, in particular, to explain the superiority of a rotation-based self-supervised training regime over fully supervised training. Figure 7: CKA plots of the trained WideResNet architecture using the fully-supervised (pretrained, \(FS_{p}\)) and self-supervised (pretrained, \(SSL_{p}\)) regimes for the DR dataset. The plots represent the similarity between representations of features. The range of the CKA metric is between 0 and 1, with 0 indicating two completely distinct activations (not similar) and 1 indicating two identical activations (similar). _Broader Impact_ - We anticipate that this analysis will offer significant insights into calibration across datasets of varying sizes and models of different complexities. This work raises a broader question regarding the search for a unified metric that can provide a comprehensive understanding of model calibration, thereby reducing the need to evaluate models based on multiple criteria. Ensuring accurate and reliable probabilistic predictions is vital for effective risk management and decision-making. It is particularly important when relying on the outputs of probabilistic models that require trust. Additionally, developing well-calibrated models is essential for promoting the widespread acceptance of machine learning methods, especially in fields like AI-driven medical diagnosis, as it directly influences the level of trust in new technologies and improves their explainability. ## Acknowledgment The support and the resources provided by PARAM Sanganak under the National Supercomputing Mission, Government of India at the Indian Institute of Technology, Kanpur are gratefully acknowledged.
2301.13710
On the Initialisation of Wide Low-Rank Feedforward Neural Networks
The edge-of-chaos dynamics of wide randomly initialized low-rank feedforward networks are analyzed. Formulae for the optimal weight and bias variances are extended from the full-rank to low-rank setting and are shown to follow from multiplicative scaling. The principal second order effect, the variance of the input-output Jacobian, is derived and shown to increase as the rank to width ratio decreases. These results inform practitioners how to randomly initialize feedforward networks with a reduced number of learnable parameters while remaining in the same ambient dimension, allowing reductions in the computational cost and memory constraints of the associated network.
Thiziri Nait Saada, Jared Tanner
2023-01-31T15:40:50Z
http://arxiv.org/abs/2301.13710v1
# On the Initialisation of Wide Low-Rank Feedforward Neural Networks ###### Abstract The edge-of-chaos dynamics of wide randomly initialized low-rank feedforward networks are analyzed. Formulae for the optimal weight and bias variances are extended from the full-rank to low-rank setting and are shown to follow from multiplicative scaling. The principal second order effect, the variance of the input-output Jacobian, is derived and shown to increase as the rank to width ratio decreases. These results inform practitioners how to randomly initialize feedforward networks with a reduced number of learnable parameters while remaining in the same ambient dimension, allowing reductions in the computational cost and memory constraints of the associated network. Machine Learning, ICML ## 1 Introduction Neural networks applied to new settings, where transfer learning is limited, are typically initialized with i.i.d. random entries. The edge-of-chaos theory of (Poole et al., 2016) determines the appropriate scaling of the weight matrices and biases so that intermediate layer representations (1) and the median of the input-output Jacobian's spectra (10) are to first order independent of the layer. Without this normalization there is typically an _exponential_ growth in the magnitude of these intermediate representations and gradients as they progress between layers of the network; such a disparity of scale inhibits the early training of the network (Glorot & Bengio, 2010). For instance, consider an untrained fully connected neural network whose weights and biases are set to be identically and independently distributed with respect to Gaussian distributions: \(W_{ij}^{(l)}\sim\mathcal{N}(0,\frac{\sigma_{W}^{2}}{N_{l-1}})\), \(b_{i}^{(l)}\sim\mathcal{N}(0,\sigma_{b}^{2})\) with \(N_{l}\) the width at layer \(l\). Starting such a network, with nonlinear activation \(\phi:\mathbb{R}\rightarrow\mathbb{R}\), from an input vector \(z^{0}:=x^{0}\in\mathbb{R}^{N_{0}}\), the data propagation is then given by the following equations, \[h_{j}^{(l)}=\sum_{k=1}^{N_{l-1}}W_{jk}^{(l)}z_{k}^{(l)}+b_{j}^{(l)},\qquad z_{ k}^{(l)}=\phi(h_{k}^{(l-1)}) \tag{1}\] where we call \(h^{(l)}\) the preactivation vector at layer \(l\). It has been shown by (Poole et al., 2016) that the geometric properties of the pre-activation vectors \(h^{l}\), namely the length \(q^{l}:=N_{l}^{-1}\left(h^{l}\right)^{T}h^{l}\) and the pairwise covariance \(q_{12}^{l}:=N_{l}^{-1}\left(h_{1}^{l}\right)^{T}h_{2}^{l}\) of two inputs \(x^{0,1}\) and \(x^{0,2}\), propagate through the network according to maps determined by the network entries' variances and the nonlinear activation \((\sigma_{b},\sigma_{W},\phi)\). These propagation maps were computed by (Poole et al., 2016) in the limiting setting of infinitely wide networks and either i.i.d. Gaussian entries or scaled randomly drawn orthonormal matrices. Here we extend this setting to its low-rank analogue. 
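Before introducing the low-rank construction, it may help to see the length map empirically; the following sketch (our own, not from the paper, with \(\phi=\tanh\)) propagates a batch of inputs through a full-rank network initialized as in (1) and records \(q^{l}\):

```python
import numpy as np

def simulate_length_map(sigma_w, sigma_b, depth=30, width=2000, n_inputs=20):
    """Empirical q^l for the iid Gaussian initialization of Eq. (1)."""
    rng = np.random.default_rng(0)
    h = rng.standard_normal((width, n_inputs))          # preactivations h^0
    q_per_layer = []
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        b = sigma_b * rng.standard_normal((width, 1))
        h = W @ np.tanh(h) + b                          # Eq. (1)
        q_per_layer.append((h ** 2).mean())             # q^l = N^{-1} ||h^l||^2
    return q_per_layer
```

For bounded activations like \(\tanh\), the recorded \(q^{l}\) settles quickly to a fixed point \(q^{*}\), matching the first-order picture described above.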
Consider rank \(r_{l}:=\gamma_{l}N_{l}\) weight matrices, \(W^{(l)}\in\mathbb{R}^{N_{l}\times N_{l-1}}\), formed as \[W_{ij}^{(l)}=\sum_{k=1}^{r_{l}}\alpha_{k,j}^{(l)}(C_{k}^{l})_{i}, \tag{2}\] where the scalars \(\left(\alpha_{k,i}^{(l)}\right)_{1\leq i\leq N_{l-1}}\in\mathbb{R}\stackrel{{ \text{iid}}}{{\sim}}\mathcal{N}(0,\frac{\sigma_{\alpha}^{2}}{N_{l-1}})\) and the columns \(C_{1}^{(l)},\dots,C_{r_{l}}^{(l)}\) are drawn jointly as the matrix \(C^{(l)}:=[C_{1}^{(l)},\dots,C_{r_{l}}^{(l)}]\in\mathbb{R}^{N_{l}\times r_{l}}\) from the Grassmannian of rank \(r_{l}\) matrices with orthonormal columns having zero mean and variance \(1/N_{l}\). Similarly, consider bias vectors within the same column span as \(W^{(l)}\), given by \(b^{(l)}(C_{1}^{(l)}+\dots+C_{r_{l}}^{(l)})\), where \(b^{(l)}\in\mathbb{R}\sim\mathcal{N}(0,\sigma_{b}^{2})\). It is shown in Appendix A.2 that, in the large width limit, the preactivation vector \(h^{(l)}\) follows a Gaussian distribution over the \(r_{l}\)-dimensional column span of \(W^{(l)}\) with a non-diagonal covariance; this differs from the full rank setting in (Poole et al., 2016) where the entries in (1) are independent. We extend the pre-activation length and correlation maps to this low-rank setting: \[q^{l} =\gamma_{l}\left(\sigma_{\alpha}^{2}\int_{\mathbb{R}}\phi^{2}( \sqrt{q^{l-1}}z)Dz+\sigma_{b}^{2}\right) \tag{3}\] \[:=\mathcal{V}(q^{(l-1)}|\sigma_{\alpha},\sigma_{b},\gamma_{l}) \tag{4}\] where \(Dz:=\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}dz\) is the Gaussian probability measure, and \[q_{12}^{l} =\gamma_{l}\bigg{(}\sigma_{\alpha}^{2}\int_{\mathbb{R}^{2}}\phi(u_ {1})\phi(u_{2})Dz_{1}Dz_{2}+\sigma_{b}^{2}\bigg{)}, \tag{5}\] \[:=\mathcal{C}(q_{12}^{l-1},q_{11}^{l-1},q_{22}^{l-1}|\sigma_{ \alpha},\sigma_{b},\gamma_{l}) \tag{6}\] with \(u_{1}=\sqrt{q_{11}^{l-1}}z_{1}\), \(u_{2}=\sqrt{q_{22}^{l-1}}\big{(}c_{12}^{l-1}z_{1}+\sqrt{1-(c_{12}^{l-1})^{2}}z_{2}\big{)}\) and \[c_{12}^{l}=q_{12}^{l}(q_{11}^{l}q_{22}^{l})^{-\frac{1}{2}}. \tag{7}\] Equations (3) and (5) are derived in Appendix A.3 and Appendix A.4 respectively. These equations exactly recover the equations by (Poole et al., 2016) when \(\gamma_{l}=1\), and show that by appropriately rescaling \(\sigma_{W}^{2}\) and \(\sigma_{b}^{2}\) by \(\gamma_{l}\) the low-rank maps remain consistent with the full rank setting. These two mappings (3) and (5) are functions of the network entries' variances, the rank to width ratio at each layer \(\gamma_{l}\) and the nonlinear activation \((\sigma_{b},\sigma_{\alpha},\gamma_{l},\phi)\), which determine the existence of eventual stable fixed points of \(q^{l}\) and \(q_{12}^{l}\) as well as the dynamics they follow through the network. The dominant quantity determining the dynamics of the network is \[\chi_{\gamma}:=\gamma\sigma_{\alpha}^{2}\int_{\mathbb{R}}\bigg{(}\phi^{\prime }(\sqrt{q^{*}}z)\bigg{)}^{2}Dz \tag{8}\] which is equal to two fundamental quantities. First, \(\chi_{\gamma}\) is equal to the gradient of the correlation map (7) evaluated at correlation \(c_{12}^{l}=1\), \[\chi_{\gamma}=\frac{\partial c_{12}^{l}}{\partial c_{12}^{l-1}}|_{c_{12}^{l-1 }=1} \tag{9}\] A detailed derivation of the equivalence of (8) and (9) is given in Appendix A.5. When there exists a fixed point \(q^{*}\) such that \(\mathcal{V}(q^{*})=q^{*}\), and \(\chi_{\gamma}<1\), then inputs with small initial correlation converge to correlation 1 at an exponential rate; this phase is referred to as _ordered_. 
Alternatively, when \(\chi_{\gamma}>1\) the fixed point \(c^{*}=1\) becomes unstable, meaning that an input and its arbitrarily small perturbation have correlation \(c_{12}^{l}\) decreasing with layers; this is referred to as the _chaotic_ phase due to all nearby points on a data manifold diverging as they progress through the network. In the ordered phase, the output function of the network is constant whereas in the chaotic phase it is non-smooth everywhere. In both cases (\(\chi_{\gamma}>1\) or \(\chi_{\gamma}<1\)), in (Schoenholz et al., 2016), the mappings \(\mathcal{V}\) and \(\frac{1}{q^{*}}\mathcal{C}\) are shown to converge exponentially fast to their fixed point, when they exist. Therefore, the data geometry is quickly lost as it is propagated through layers. The boundary between these phases, where \(\chi_{\gamma}=1\), is referred to as the edge-of-chaos and determines the scaling of \((\sigma_{w},\sigma_{b},\gamma_{l})\), as functions of the nonlinear activation \(\phi(\cdot)\), which ensures a sub-exponential asymptotic behaviour of these maps towards their fixed point and thus a deeper data propagation along layers, which facilitates early training of the network. Second, the quantity \(\chi_{\gamma}\) in (8) is equal to the median singular value of the matrix \(D^{(l)}W^{(l)}\) where \(D^{(l)}\) is the diagonal matrix at layer \(l\) with entries \(D_{ii}^{(l)}=\phi^{\prime}(h_{i}^{l})\); for details see Appendix A.9. Defining the Jacobian matrix \(J\in\mathbb{R}^{N_{L}\times N_{0}}\) of the input-output map as \[J:=\frac{\partial z^{L}}{\partial z^{0}}=\prod_{l=1}^{L}D^{(l)}W^{(l)}, \tag{10}\] we see that the average singular value of \(J\) is equal to \(\chi_{\gamma}^{L}\). If \(\chi_{\gamma}=1\) the average singular value of \(J\) is fixed at 1 throughout the network, while if \(\chi_{\gamma}\) is greater than or less than 1 the average singular value deviates from 1 at an exponential rate. Further note that the growth of a perturbation from a layer to the following one is given by the average squared singular value of \(D^{(l)}W^{(l)}\). ### Main contributions This manuscript extends the edge-of-chaos analysis of random feed-forward networks to the setting of low-rank matrices, following the work of (Poole et al., 2016). This work is motivated by the recent challenges faced to store in memory the constantly growing number of parameters used to train large Deep Learning models, see (Price and Tanner, 2022) and references therein. As shown in equations (3), (6), and (8), despite the dependence between entries in the low-rank weight matrices (2), the edge-of-chaos curve defined by \(\chi_{\gamma}=1\) can be retained by scaling the weight and bias variances \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\) respectively by the ratio of the weight matrix rank \(r_{l}\) to layer width \(\gamma_{l}:=r_{l}/N_{l}\), see Figure 1 and contrast with Figure 10. That is, a simple re-scaling retains the dominant first order dynamics of a feedforward network when the weight matrices are initialized to be low-rank. In Section 2 we show that additional first order dynamics are similarly modified through a multiplicative scaling by the rank to width factor \(\gamma_{l}=r_{l}/N_{l}\). In particular, we demonstrate the role of \(\gamma_{l}\) on the length and correlation depth scales as well as the training gradient vectors. 
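As a concrete illustration of the low-rank initialization (2) and the variance rescaling just described, the following sketch (our own naming; the orthonormal columns are drawn via a QR factorization of a Gaussian matrix) builds one rank-\(r_{l}\) layer:

```python
import numpy as np

def sample_low_rank_layer(n_out, n_in, gamma, sigma_alpha, sigma_b, rng):
    """Draw W^(l) = C @ alpha of rank r = gamma * n_out, following Eq. (2),
    together with a bias lying in the column span of W^(l)."""
    r = max(1, int(gamma * n_out))
    # orthonormal columns C in R^{n_out x r}; entries have variance ~ 1/n_out
    C, _ = np.linalg.qr(rng.standard_normal((n_out, r)))
    alpha = rng.standard_normal((r, n_in)) * sigma_alpha / np.sqrt(n_in)
    W = C @ alpha                                   # rank-r weight matrix
    b = rng.normal(0.0, sigma_b) * C.sum(axis=1)    # b^(l) (C_1 + ... + C_r)
    return W, b
```

Choosing \(\sigma_{\alpha}^{2}=\sigma_{W}^{2}/\gamma\) (and scaling \(\sigma_{b}^{2}\) likewise) then reproduces the full-rank length and correlation maps to first order, in line with Figure 1.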
However, in Section 3 we show that important second order properties of the dynamics, specifically the variance of the singular values of the input-output Jacobian given in (10), are modified by the reduced rank in a way that cannot be overcome with simple re-scaling. This result alerts practitioners to anticipate greater variability in training low-rank weight matrices and suggests that methods to reduce the variance of the spectrum may be increasingly important in this setting, see (Murray et al., 2021). The manuscript then concludes with numerical experiments in Section 4 which demonstrate that empirical measurements on the Jacobian are consistent with the established formula, and a brief summary and future work in Section 5. ## 2 Network dynamics and data propagation The parameter \(\chi_{\gamma}\) further controls the length and correlation depth scaling as well as the relative magnitude of training gradients computed via back-propagation for the sum-of-squares loss function. ### Depth scales as functions of \(\chi_{\gamma}\) The role of \(\chi_{\gamma}\) on the achievable depth scale was pioneered by (Schoenholz et al., 2016) for full-rank feedforward networks. In this subsection we extend their results to the low-rank setting with the suitably adapted spectral mean \(\chi_{\gamma}\) given in (8). #### 2.1.1 Length depth scale Assuming there exists a fixed point \(q^{*}\) such that \(\mathcal{V}(q^{*})=q^{*}\), then the dynamics of \(\mathcal{V}(q)\) can be linearized around \(q^{*}\) to obtain stability conditions and a rate of decay which determine how deeply data can propagate through the network before converging towards the fixed point. Following the computations done in (Schoenholz et al., 2016), setting a perturbation around the fixed point \(q^{*}+\epsilon_{l}\), then around the fixed point, \(\epsilon_{l}\) evolves as \(e^{-l/\xi_{q,\gamma}}\) when \(\gamma_{l}=\gamma\) is fixed along layers, and we define the following quantity, \[\xi_{q,\gamma}^{-1}:=-\log\bigg{(}\chi_{\gamma}+\gamma\sigma_{\alpha}^{2}\int Dz \phi^{\prime\prime}(\sqrt{q^{*}}z)\phi(\sqrt{q^{*}}z)\bigg{)}.\] Details are given in Appendix A.6. Given that \(\gamma\in(0,1]\), we can see that the convergence towards the fixed point gets faster when decreasing \(\gamma\). Note that when \(\gamma\sigma_{\alpha}^{2}=\sigma_{W}^{2}\), we recover the results of a full-rank feedforward neural network in (Schoenholz et al., 2016). #### 2.1.2 Correlation depth scale Similarly, we compute the dynamical evolution of the correlation map around its fixed point by considering a perturbation \(\epsilon_{l}\), and we obtain (see Appendix A.7) that, when all the ranks are set to be proportional to the width with the same coefficient of proportionality \(\gamma_{l}=\gamma\) at any layer \(l\), the perturbation vanishes exponentially fast, \(\epsilon_{l}=\mathcal{O}(e^{-l/\xi_{c,\gamma}})\), where \[\xi_{c,\gamma}^{-1}:=-\log\big{(}\chi_{\gamma}\big{)}.\] We recover that the correlation depth scale diverges to \(+\infty\) when \(\chi_{\gamma}\to 1\), underscoring again the key role of this quantity, even in the low-rank case. As \(\gamma\in(0,1]\), we can see that the convergence towards the fixed point gets faster when decreasing \(\gamma\), which highlights the tension between low rank and the depth to which data can propagate along layers. Note again we recover previous results from (Schoenholz et al., 2016) after appropriate scaling of the variance. ### Layerwise scaling of the training gradient for the sum-of-squares loss function As already shown in previous works ((Schoenholz et al., 2016) and (Poole et al., 2016)), there exists a direct link between a network's capacity to propagate data through its layers in the forward pass and to backpropagate gradients of any given error function \(E\). In this section, we extend the results known for full-rank feedforward neural networks with infinite width to the low-rank case, with rank \(r_{l}=\gamma_{l}N_{l}\) evolving proportionally to the width. Figure 1: Edge of Chaos curve of a low-rank neural network where the rank is proportional to the width by a factor \(\gamma\) and the nonlinear activation is \(\phi(x)=\tanh(x)\). The plot is generated with \(\gamma=\frac{1}{4}\), where the axes are rescaled by \(\gamma\). 
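These depth scales can be evaluated numerically, since every Gaussian integral above is one-dimensional; the following sketch (our own helper names, \(\phi=\tanh\)) uses Gauss-Hermite quadrature adapted to the measure \(Dz\):

```python
import numpy as np

# Gauss-Hermite nodes/weights for the standard normal measure Dz
nodes, ws = np.polynomial.hermite_e.hermegauss(80)
ws = ws / np.sqrt(2.0 * np.pi)              # weights now sum to 1

def gauss(f):
    """E[f(z)] for z ~ N(0, 1)."""
    return float(np.sum(ws * f(nodes)))

def fixed_point_q(sigma_a2, sigma_b2, gamma, iters=200):
    """Iterate the length map (3) to its fixed point q*."""
    q = 1.0
    for _ in range(iters):
        q = gamma * (sigma_a2 * gauss(lambda z: np.tanh(np.sqrt(q) * z) ** 2)
                     + sigma_b2)
    return q

def chi_gamma(sigma_a2, sigma_b2, gamma):
    """Eq. (8); chi_gamma = 1 marks the edge of chaos."""
    q = fixed_point_q(sigma_a2, sigma_b2, gamma)
    dphi = lambda z: 1.0 / np.cosh(np.sqrt(q) * z) ** 2    # tanh' = sech^2
    return gamma * sigma_a2 * gauss(lambda z: dphi(z) ** 2), q

def depth_scales(sigma_a2, sigma_b2, gamma):
    """xi_{q,gamma} and xi_{c,gamma} from the linearizations above."""
    chi, q = chi_gamma(sigma_a2, sigma_b2, gamma)
    s = np.sqrt(q)
    d2phi = lambda z: -2.0 * np.tanh(s * z) / np.cosh(s * z) ** 2  # tanh''
    xi_q = -1.0 / np.log(chi + gamma * sigma_a2 *
                         gauss(lambda z: d2phi(z) * np.tanh(s * z)))
    xi_c = -1.0 / np.log(chi)
    return xi_q, xi_c
```

Scanning \((\sigma_{\alpha}^{2},\sigma_{b}^{2})\) for \(\chi_{\gamma}=1\) traces the critical curve of Figure 1.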
### Layerwise scaling of the training gradient for the sum-of-squares loss function

As already shown in previous works ((Schoenholz et al., 2016) and (Poole et al., 2016)), there exists a direct link between the capacity of a network to propagate data through its layers in the forward pass and its capacity to backpropagate gradients of any given error function \(E\). In this section, we extend the results known for full-rank feedforward neural networks with infinite width to the low-rank case, with rank \(r_{l}=\gamma_{l}N_{l}\) evolving proportionally to the width.

Figure 1: Edge of Chaos curve of a low-rank neural network where the rank is proportional to the width by a factor \(\gamma\) and the nonlinear activation is \(\phi(x)=\tanh(x)\). The plot is generated with \(\gamma=\frac{1}{4}\), where the axes are rescaled by \(\gamma\).

The derivative of the training error follows by the chain rule, \[\frac{\partial E}{\partial h_{i}^{(l)}}:=\delta_{i}^{l}=\bigg(\sum_{k=1}^{N_{l+1}}\delta_{k}^{l+1}W_{ki}^{(l+1)}\bigg)\phi^{\prime}(h_{i}^{(l)}),\qquad\frac{\partial E}{\partial W_{ij}^{(l)}}=\delta_{i}^{l}\phi(h_{j}^{(l-1)}),\qquad\frac{\partial E}{\partial\alpha_{ij}^{(l)}}=\bigg(\sum_{m=1}^{r_{l}}\delta_{m}^{l}(C_{i}^{l})_{m}\bigg)\phi(h_{j}^{(l-1)}).\] Consider the propagation of the gradients \(\frac{\partial E}{\partial\alpha_{ij}^{(l)}}\) of the error with respect to our trainable parameters \(\alpha^{(l)}:=\big(\alpha_{i,j}^{(l)}\big)_{i,j}\), which are initialized i.i.d., \(\alpha_{k,i}^{(l)}\stackrel{\text{iid}}{\sim}\mathcal{N}(0,\frac{\sigma_{\alpha}^{2}}{N_{l-1}})\) for \(1\leq i\leq N_{l-1}\). The length of this gradient along layers, \(||\nabla_{\alpha^{(l)}}E||_{2}^{2}\), is proportional to \(\tilde{q}^{l}:=\mathbb{E}((\delta_{1}^{l})^{2})\) (see Appendix A.8 for proofs). In our analysis of the variance of the training error we treat the backpropagated weights as independent from the forwarded weights, which, while not strictly true, is commonly done due to its efficacy in aiding computations which reflect the observed backward dynamics of the network, see (Pennington & Bahri, 2017). Considering an input vector \(x^{0,a}\), and \(\tilde{q}_{aa}^{l}:=\tilde{q}^{l}(x^{0,a})\), \[\tilde{q}_{aa}^{l}=\tilde{q}_{aa}^{l+1}\frac{N_{l+1}}{N_{l}}\chi_{\gamma_{l+1}},\] see Appendix A.8. With constant width along layers, \(\frac{N_{l+1}}{N_{l}}\approx 1\), the sequence is asymptotically exponential and \(\tilde{q}_{aa}^{l}=\tilde{q}_{aa}^{L}\prod_{k=l+1}^{L}\chi_{\gamma_{k}}\), or, if the proportionality coefficient of the rank \(\gamma_{l}=\gamma\) is constant along layers, \(\tilde{q}_{aa}^{l}=\mathcal{O}(e^{-(L-l)/\xi_{\nabla,\gamma}})\), where \[\xi_{\nabla,\gamma}^{-1}:=-\log(\chi_{\gamma}).\] The same critical point \(\chi_{\gamma}=1\) is observed in the low-rank setting \(\gamma<1\) as in previous works (Schoenholz et al., 2016); a numerical check of the resulting scaling is sketched after the following list:

* When \(\chi_{\gamma}>1\), \(||\nabla_{\alpha^{(l)}}E||_{2}^{2}\) grows exponentially after \(|\xi_{\nabla,\gamma}|\) layers. This is the chaotic phase, in which the network is exponentially sensitive to perturbations.
* When \(\chi_{\gamma}<1\), \(||\nabla_{\alpha^{(l)}}E||_{2}^{2}\) vanishes at an exponential rate after \(\xi_{\nabla,\gamma}\) layers. This is the ordered phase, in which the network is insensitive to perturbations.
* When \(\chi_{\gamma}=1\), \(||\nabla_{\alpha^{(l)}}E||_{2}^{2}\) remains of the same scale even after an arbitrarily large number of layers, which is referred to as the edge-of-chaos.
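The following is a minimal Monte Carlo check, added for illustration, of the claim that the average squared singular value of \(J\) equals \(\prod_{l}\chi_{\gamma_{l}}\) (and hence \(\chi_{\gamma}^{L}\) for identically distributed layers); the network sizes and variances are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 500, 20
gamma, sigma_a2, sigma_b2 = 0.25, 4.0, 0.05   # gamma * sigma_a2 = 1, near the edge of chaos
r = int(gamma * N)

def low_rank_weight():
    # W = C A with orthonormal columns C (C^T C = I_r) and Gaussian A, as in (2)
    C = np.linalg.qr(rng.standard_normal((N, r)))[0]
    A = rng.standard_normal((r, N)) * np.sqrt(sigma_a2 / N)
    return C @ A

x = rng.standard_normal(N)
J = np.eye(N)
chi_prod = 1.0
for _ in range(L):
    W = low_rank_weight()
    h = W @ x + np.sqrt(sigma_b2) * rng.standard_normal(N)
    d = 1 / np.cosh(h) ** 2                    # phi'(h) for phi = tanh
    J = (d[:, None] * W) @ J                   # accumulate D^{(l)} W^{(l)}
    chi_prod *= gamma * sigma_a2 * np.mean(d ** 2)   # per-layer chi_{gamma_l} estimate
    x = np.tanh(h)

print(np.trace(J @ J.T) / N, chi_prod)         # agree up to finite-size fluctuations
```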
## 3 Dynamical isometry

Using tools from Random Matrix Theory, (Pennington et al., 2018) provides a method to compute the moments of the spectral distribution of the Jacobian, revealing secondary information beyond the mean of the spectrum. We review the most essential equations to derive the variance of the Jacobian's spectrum here, but we refer the reader to (Tao, 2012) for more details on the random matrix transforms.

### Review of the computation of the variance of the Jacobian

In this section, we review a set of definitions of random matrix transforms that allow the calculation of the spectrum of a product of matrices in terms of the individual spectra. Let \(X\) be a random matrix with spectral density \(\rho_{X}\), \[\rho_{X}(\lambda):=\bigg\langle\frac{1}{N}\sum_{i=1}^{N}\delta(\lambda-\lambda_{i})\bigg\rangle_{X},\] where \(\langle\cdot\rangle_{X}\) is the average with respect to the distribution of the random matrix \(X\), and \(\delta\) is the usual Dirac distribution. For a probability density \(\rho_{X}\) and \(z\in\mathbb{C}\setminus\mathbb{R}\), the Stieltjes transform \(G_{X}\) and its inverse are given by \[G_{X}(z):=\int\frac{\rho_{X}(t)}{z-t}\,dt,\qquad\rho_{X}(\lambda)=-\pi^{-1}\lim_{\epsilon\to 0^{+}}\text{Im}\big(G_{X}(\lambda+\epsilon i)\big).\] The moment generating function is \(M_{X}(z):=zG_{X}(z)-1=\sum_{k=1}^{\infty}m_{k}z^{-k}\) and the \(\mathcal{S}_{X}\) transform is defined as \(\mathcal{S}_{X}(z):=\frac{1+z}{zM_{X}^{-1}(z)}\). The interest of using the \(\mathcal{S}\) transform here is its multiplicative property, which in our case is desirable as the Jacobian is a product of random matrices: if \(X\) and \(Y\) are freely independent, then \(\mathcal{S}_{XY}=\mathcal{S}_{X}\mathcal{S}_{Y}\). In (Pennington et al., 2018), the authors start by establishing \(\mathcal{S}_{JJ^{T}}=S_{D^{2}}^{L}S_{W^{T}W}^{L}\), assuming the input vector is chosen such that \(q^{l}\approx q^{*}\), so that the distribution of \(D^{2}\) is independent of \(l\); the weights are already identically distributed along layers. The strategy to compute the spectral density \(\rho_{JJ^{T}}\) (and thus the density of the singular values of the Jacobian \(J\)) starts with computing the \(\mathcal{S}\) transforms of \(W^{T}W\) and \(D^{2}\) from their spectral densities, determined respectively by the way the weights are sampled at initialisation and by the choice of the activation function in the network. Note that in this study we focus only on two possible distributions for the low-rank weight matrices, either scaled Gaussian weights or scaled orthogonal matrices, that are defined more precisely in the next sections.
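As a quick sanity check of the inversion formula (an illustration we add here, not from the paper), one can recover the spectral density of a low-rank Wishart matrix from its empirical Stieltjes transform:

```python
import numpy as np

rng = np.random.default_rng(1)
N, gamma, sigma2 = 2000, 0.25, 1.0
r = int(gamma * N)
A = rng.standard_normal((r, N)) * np.sqrt(sigma2 / N)
eigs = np.linalg.eigvalsh(A.T @ A)        # spectrum of a low-rank Wishart matrix

def G(z):
    # Empirical Stieltjes transform G_X(z) = (1/N) tr (z I - X)^{-1}
    return np.mean(1.0 / (z - eigs))

lam, eps = 0.5, 0.02                      # eps small, but above the eigenvalue spacing
rho_from_G = -G(lam + 1j * eps).imag / np.pi
rho_from_hist = np.mean(np.abs(eigs - lam) < 0.05) / 0.10   # histogram density estimate
print(rho_from_G, rho_from_hist)          # the two density estimates agree
```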
Once \(\mathcal{S}_{JJ^{T}}\) is obtained by multiplying \(S_{W^{T}W}\) and \(S_{D^{2}}\), rather than inverting it back to find \(\rho_{JJ^{T}}\), the authors show there is a way to shortcut these steps and obtain the moments of \(\rho_{JJ^{T}}\) directly from the following set of equations. Defining \[m_{k}:=\int\lambda^{k}\rho_{JJ^{T}}(\lambda)\,d\lambda,\qquad S_{W^{T}W}(z):=\gamma^{-1}\sigma_{\alpha}^{-2}\big(1+\sum_{k=1}^{\infty}s_{k}z^{k}\big),\qquad\mu_{k}:=\int Dz\,\big(\phi^{\prime}(\sqrt{q^{*}}z)\big)^{2k},\] then as derived in (Pennington et al., 2018), the first two moments of the spectrum of the Jacobian are \[m_{1}=(\gamma\sigma_{\alpha}^{2}\mu_{1})^{L},\qquad m_{2}=(\gamma\sigma_{\alpha}^{2}\mu_{1})^{2L}L\left(\frac{\mu_{2}}{\mu_{1}^{2}}+\frac{1}{L}-1-s_{1}\right).\] The first moment \(m_{1}\) recovers the previous statement that the average squared singular value is equal to \(m_{1}=\chi_{\gamma}^{L}\), and the edge-of-chaos condition \(\chi_{\gamma}=\gamma\sigma_{\alpha}^{2}\mu_{1}=1\) is consistent with previous results, as the gradient either vanishes or grows exponentially along with the mean of the Jacobian's spectrum. Moreover, the variance of the spectrum of \(JJ^{T}\) about its mean at criticality, \(\chi_{\gamma}=1\), can now be computed: \[\sigma_{JJ^{T}}^{2}:=m_{2}-m_{1}^{2}=L\left(\frac{\mu_{2}}{\mu_{1}^{2}}-1-s_{1}\right). \tag{11}\] The variance \(\sigma_{JJ^{T}}^{2}\) grows linearly with depth as in the full-rank setting, recovering the full-rank result when \(\gamma=1\). As in the edge-of-chaos axes scaling in Figure 1, \(\gamma\sigma_{\alpha}^{2}\) plays the same role as \(\sigma_{W}^{2}\). Note that \(\frac{\mu_{2}}{\mu_{1}^{2}}\geq 1\), and consequently \(\sigma_{JJ^{T}}^{2}\) as given in (11) is only independent of depth \(L\) if \(s_{1}=0\), which is only achieved here for full-rank (\(\gamma=1\)) orthogonal matrices.

### Low-Rank Orthogonal weights

Consider a weight matrix whose first \(r\) columns are orthonormal columns sampled from a normal distribution, and the rest is 0, such that \(W^{T}W=\left(\begin{array}{c|c}\sigma_{\alpha}^{2}\mathbb{I}_{r}&0\\ \hline 0&\mathbb{0}_{N-r}\end{array}\right)\). Therefore the spectral distribution of \(\sigma_{\alpha}^{-2}W^{T}W\) is trivially given by \[\rho_{\sigma_{\alpha}^{-2}W^{T}W}(z)=\gamma\delta(z-1)+(1-\gamma)\delta(z),\] from which the \(\mathcal{S}\) transform is computed, see Appendix A.11, to obtain \(s_{1}=-(\gamma^{-1}-1)\). When \(\gamma=1\), the known result in the full-rank orthogonal case is retrieved.

### Low-Rank Gaussian weights

With weights at any layer \(l\) given by (2), the matrix can be rewritten as the product \(W^{l}=C^{l}A^{l}\), where \(C^{l}\in\mathbb{R}^{N_{l}\times r_{l}}\) with \(C^{l}_{ij}=(C^{l}_{j})_{i}\), and \(A^{l}\in\mathbb{R}^{r_{l}\times N_{l-1}}\) with \(A^{l}_{ij}\stackrel{{ iid}}{{\sim}}\mathcal{N}(0,\frac{\sigma_{\alpha}^{2}}{N_{l-1}})\). As \({C^{l}}^{T}C^{l}=\mathbb{I}_{r_{l}}\) by construction, \({W^{(l)}}^{T}W^{(l)}={A^{l}}^{T}{C^{l}}^{T}C^{l}A^{l}={A^{l}}^{T}A^{l}\), which is a Wishart matrix, whose spectral density is known and given by the Marcenko-Pastur distribution (Marcenko and Pastur, 1967), where some mass is added at 0 since the matrix \(A^{l}\) is not full-rank and contains some 0 eigenvalues. Recall that \(r_{l}=\gamma N_{l}\). \[\rho_{A^{T}A}(\lambda)=(1-\gamma)_{+}\delta(\lambda)+\gamma\frac{\sqrt{(\lambda^{+}-\lambda)(\lambda-\lambda^{-})}}{2\pi\lambda\sigma_{\alpha}^{2}}\mathds{1}_{[\lambda^{-},\lambda^{+}]}(\lambda),\] where \(x_{+}=\max(0,x)\), \(\lambda^{-}=(1-\frac{1}{\gamma})^{2}\) and \(\lambda^{+}=(1+\frac{1}{\gamma})^{2}\). The \(\mathcal{S}\) transform \(S_{W^{T}W}\) can be computed (see Appendix A.10) and expanded around 0, which gives \(s_{1}=-\frac{1}{\gamma}\).
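Both values of \(s_{1}\) can be checked symbolically by expanding the closed-form \(\mathcal{S}\) transforms (summarized in Table 1 below) around \(z=0\); a small verification sketch we add here:

```python
import sympy as sp

z, g = sp.symbols('z gamma', positive=True)

# S transforms of W^T W up to the common prefactor gamma^{-1} sigma_alpha^{-2}
# (closed forms as summarized in Table 1 below)
S_orth = (1 + z) / (1 + z / g)
S_gauss = (1 + z) / (1 + (1 + 1 / g) * z + z ** 2 / g)

# s_1 is the coefficient of z in the expansion around z = 0
print(sp.series(S_orth, z, 0, 2))    # 1 + (1 - 1/gamma) z + O(z^2)
print(sp.series(S_gauss, z, 0, 2))   # 1 - z/gamma + O(z^2)
```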
Note that when \(\gamma=1\), one recovers the result given in (Pennington et al., 2018). The \(\mathcal{S}\) transforms and first moments in both the orthogonal and Gaussian cases are summarized in Table 1.

\begin{table} \begin{tabular}{l c c} \hline Random Matrix W & \(S_{W^{T}W}(z)\) & \(s_{1}\) \\ \hline LR Scaled Orthogonal & \(\gamma^{-1}\sigma_{\alpha}^{-2}\frac{1+z}{1+\gamma^{-1}z}\) & \(1-\frac{1}{\gamma}\) \\ LR Scaled Gaussian & \(\gamma^{-1}\sigma_{\alpha}^{-2}\frac{1+z}{1+(1+\gamma^{-1})z+\gamma^{-1}z^{2}}\) & \(-\frac{1}{\gamma}\) \\ \hline \end{tabular} \end{table} Table 1: Transforms of weights. LR stands for Low-Rank.

## 4 Numerical experiments

In this section, we give empirical evidence in agreement with the theoretical results established above. Its interest is two-fold:

* The variance of the spectrum of the Jacobian does indeed still grow with depth even in the low-rank setting, as emphasized in Figure 2. Moreover, at a fixed depth, the rank-to-width ratio plays a key role in how the spectrum of the Jacobian spreads out around its mean value, which is 1 when the network is initialised on the edge-of-chaos.
* Figure 3 shows that the advantage that Scaled Orthogonal weights have over Scaled Gaussian weights in feedforward networks, presented in (Pennington et al., 2018), is lost for low-rank matrices. Indeed, from (11), one can see that in both situations it is not possible to make the variance depth-independent by adjusting the activation function or \(q^{*}\) through a careful choice of variances for the weights and the biases, unless \(\gamma=1\) and \(W^{(l)}\) is a scaled orthonormal matrix.

In Figure 2 and Figure 3, the variance of the spectrum of the Jacobian is computed in the low-rank Gaussian and Orthogonal cases when the activation function is chosen to be the identity. Although such a choice of activation function completely destroys the network's expressive power, it is a simple example of a situation in which, in the full-rank case, Gaussian distributed weight matrices lead to ill-conditioned Jacobians as depth increases. This still holds in the low-rank setting, as shown in the plots, since the variance \(\sigma_{JJ^{T}}^{2}>0\). Simulations are performed on a feedforward network with layers of width \(1000\), initialised and fed with a random input whose length is set to be equal to \(q^{*}\), so that the network is already at its equilibrium state without passing through a transient phase. The source code can be found at shorturl.at/syLP9.

## 5 Summary and further work

Herein the edge-of-chaos theory of (Poole et al., 2016) and (Schoenholz et al., 2016) has been extended from the setting of full-rank weight matrices to the low-rank setting. Suitable scaling by the rank-to-width factor \(\gamma_{l}:=r_{l}/N_{l}\) recovers the phenomenon driven by the mean of the Jacobian's spectrum which defines the edge-of-chaos. Moreover, the variance of the Jacobian's spectrum is shown to be strictly increasing with decreasing \(\gamma_{l}\), which suggests greater variability in the initial training of low-rank feedforward networks. The edge-of-chaos initialisation scheme has been successfully generalised to a large set of different settings, including changes of architectures such as CNNs (Xiao et al., 2018), LSTMs and GRUs (Gilboa et al., 2019), RNNs (Chen et al., 2018), ResNets (Yang and Schoenholz, 2017), and to extra features like dropout (Schoenholz et al., 2016), (Huang et al., 2019), batch normalisation (Yang et al., 2019) and pruning (Hayou et al., 2020).
It has been improved with changes of activation functions (Hayou et al., 2019), (Murray et al., 2021) to enable the data to propagate even deeper through the network. As future work, each of these settings could be extended to the setting of low-rank weight matrices.

Figure 3: Evolution of the variance of the spectrum of \(JJ^{T}\) with respect to \(\gamma\), where \(\gamma\) is the proportionality coefficient giving the rank \(r_{l}=\gamma N_{l}\) of the weight matrix at layer \(l\), whose width is \(N_{l}\). Points are obtained empirically and averaged over 3 simulations, while the lines are derived from the theory, see (11). Confidence intervals of 1 standard deviation around each mean point are shown. The weights are chosen to be low-rank Scaled Orthogonal and the activation function is linear, \(\phi:x\mapsto x\). The same seed is used to initialise the weight matrices for each simulation and \(q^{*}\) is set to 0.5. The \(y\)-axis is shown in log scale.

Figure 2: Evolution of the variance of the spectrum of \(JJ^{T}\) with respect to \(\gamma\), where \(\gamma\) is the proportionality coefficient giving the rank \(r_{l}=\gamma N_{l}\) of the weight matrix at layer \(l\), whose width is \(N_{l}\). Points are obtained empirically and averaged over 5 simulations, while the lines are derived from the theory, see (11). Confidence intervals of 1 standard deviation around each mean point are shown. The weights are chosen to be low-rank Scaled Gaussian and the activation function is linear, \(\phi:x\mapsto x\). The same seed is used to initialise the weight matrices for each simulation and \(q^{*}\) is set to 0.5. The \(y\)-axis is shown in log scale.

## Acknowledgments

TNS is financially supported by the Engineering and Physical Sciences Research Council (EPSRC). JT is supported by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA) and thanks the UCLA Department of Mathematics for kindly hosting him during the completion of this manuscript.
2306.00091
A General Framework for Equivariant Neural Networks on Reductive Lie Groups
Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or the unitary groups, play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging. In this paper, we present a general Equivariant Neural Network architecture capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie Group G. Our approach generalizes the successful ACE and MACE architectures for atomistic point clouds to any data equivariant to a reductive Lie group action. We also introduce the lie-nn software library, which provides all the necessary tools to develop and implement such general G-equivariant neural networks. It implements routines for the reduction of generic tensor products of representations into irreducible representations, making it easy to apply our architecture to a wide range of problems and groups. The generality and performance of our approach are demonstrated by applying it to the tasks of top quark decay tagging (Lorentz group) and shape recognition (orthogonal group).
Ilyes Batatia, Mario Geiger, Jose Munoz, Tess Smidt, Lior Silberman, Christoph Ortner
2023-05-31T18:09:37Z
http://arxiv.org/abs/2306.00091v1
# A General Framework for Equivariant Neural Networks on Reductive Lie Groups

###### Abstract

Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or the unitary groups, play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging. In this paper, we present a general Equivariant Neural Network architecture capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie Group \(G\). Our approach generalizes the successful ACE and MACE architectures for atomistic point clouds to any data equivariant to a reductive Lie group action. We also introduce the lie-nn software library, which provides all the necessary tools to develop and implement such general \(G\)-equivariant neural networks. It implements routines for the reduction of generic tensor products of representations into irreducible representations, making it easy to apply our architecture to a wide range of problems and groups. The generality and performance of our approach are demonstrated by applying it to the tasks of top quark decay tagging (Lorentz group) and shape recognition (orthogonal group).

## 1 Introduction

Convolutional Neural Networks (CNNs) (LeCun _et al._, 1989) have become a widely used and powerful tool for computer vision tasks, in large part due to their ability to achieve translation equivariance. This property led to improved generalization and a significant reduction in the number of parameters. Translation equivariance is one of many possible symmetries occurring in machine learning tasks. A wide range of symmetries described by reductive Lie Groups is present in physics, such as \(O(3)\) in molecular mechanics, \(\mathrm{SO}(1,3)\) in High-Energy Physics, \(\mathrm{SU}(2^{N})\) in quantum mechanics, and \(\mathrm{SU}(3)\) in quantum chromodynamics. Machine learning architectures that respect these symmetries often lead to significantly improved predictions while requiring far less training data. This has been demonstrated in many applications including 2D imaging with \(\mathrm{O}(2)\) symmetry (Cohen and Welling, 2016; Esteves _et al._, 2017), machine learning force fields with \(\mathrm{O}(3)\) symmetry (Anderson _et al._, 2019; Bartok _et al._, 2013; Batzner _et al._, 2022; Batatia _et al._, 2022a) or jet tagging with \(\mathrm{SO}^{+}(1,3)\) symmetry (Bogatskiy _et al._, 2022; Li _et al._, 2022). One way to extend CNNs to other groups (Finzi _et al._, 2020; Kondor and Trivedi, 2018) is through harmonic analysis on homogeneous spaces, where the convolution becomes an integral over the group. Other architectures work directly with finite-dimensional representations. We follow the demonstration of Bogatskiy _et al._ (2020a), who constructed a universal approximation of any equivariant map with a feed-forward neural network with vector activations belonging to finite-dimensional representations of a wide class of Lie groups. In this way, one can avoid computational challenges created by infinite-dimensional representations. Alternatively, our current work can be thought of as a generalization of the Atomic Cluster Expansion (ACE) formalism of Drautz (2019) to general Lie groups. The ACE formalism provides a complete body-ordered basis of \(\mathrm{O}(3)\)-invariant features.
By combining the concepts of ACE and \(\mathrm{E}(3)\)-equivariant neural networks, Batatia _et al._ (2022a) proposed the MACE architecture, which achieves state-of-the-art performance on learning tasks in molecular modelling. The present work generalizes the ACE and MACE architectures to arbitrary Lie groups in order to propose a generic architecture for creating representations of geometric point clouds in interaction. Concretely, our work makes the following contributions:

* We develop the \(G\)-Equivariant Cluster Expansion. This new framework generalizes the ACE (Drautz, 2019) and MACE (Batatia _et al._, 2022b) architectures to parameterize properties of point clouds that are equivariant under the action of a reductive Lie group \(G\).
* We prove that our architecture is universal, even for a single layer.
* We introduce lie-nn, a new library providing all the essential tools to apply our framework to a variety of essential Lie Groups in physics and computer vision, including the Lorentz group, \(\mathrm{SU}(N)\), \(\mathrm{SL}_{2}(\mathbb{C})\) and product groups.
* We demonstrate the generality and efficiency of our general-purpose approach by demonstrating excellent accuracy on two prototype applications, jet tagging and 3D point cloud recognition.

## 2 Background

We briefly review a few important group-theoretic concepts: A real (complex) **Lie group** is a group that is also a finite-dimensional smooth (complex) manifold in which the product and inversion of the group are also smooth (holomorphic) maps. Among the most important Lie groups are matrix Lie groups, which are closed subgroups of \(\mathrm{GL}(n,\mathbb{C})\), the group of invertible \(n\times n\) matrices with complex entries. This includes well-known groups such as \(\mathrm{Sp}(2n,\mathbb{R})\), consisting of matrices of determinant one, which is relevant in Hamiltonian dynamics.

Figure 1: Examples of natural science problems and associated reductive Lie groups. For high energy physics, the Lorentz group \(\mathrm{SO}(1,3)\); for chemistry, the Euclidean group \(\mathrm{E}(3)\); for quantum-chromodynamics, the \(\mathrm{SU}(3)\) group.

A finite-dimensional **representation** of the Lie group \(G\) is a finite-dimensional vector space \(V\) endowed with a smooth homomorphism \(\rho\colon G\to\mathrm{GL}(V)\). Features in the equivariant neural networks live in these vector spaces. An **irreducible** representation \(V\) is a representation that has no subspaces which are invariant under the action of the group (other than \(\{0\}\) and \(V\) itself). This means that \(V\) cannot be decomposed non-trivially as the direct sum of representations. A **reductive group** over a field \(F\) is a (Zariski-) closed subgroup of the group of matrices \(\mathrm{GL}(n,F)\) such that every finite-dimensional representation of \(G\) on an \(F\)-vector space can be decomposed as a sum of irreducible representations.

## 3 Related Work

**Lie group convolutions.** Convolutional neural networks (CNNs), which are translation equivariant, have also been generalized to other symmetries. For example, G-convolutions (Cohen and Welling, 2016) generalized CNNs to discrete groups, Steerable CNNs (Cohen and Welling, 2017) generalized CNNs to \(O(2)\) equivariance, and Spherical CNNs (Cohen _et al._, 2018) to \(O(3)\) equivariance. A general theory of convolution on any compact group and symmetric space was given by Kondor and Trivedi (2018). This work was further extended to equivariant convolutions on Riemannian manifolds by Weiler _et al._ (2021).
**ACE.** The Atomic Cluster Expansion (ACE) (Drautz, 2019) introduced a systematic framework for constructing complete \(O(3)\)-invariant high body order basis sets with constant cost per basis function, independent of body order [20].

**e3nn and Equivariant MLPs.** The e3nn library (Geiger and Smidt, 2022) provides a complete solution to build \(E(3)\)-equivariant neural networks based on irreducible representations. The Equivariant MLPs (Finzi _et al._, 2021) include more groups, such as \(SO(1,3)\) and \(Z_{n}\), but are restricted to reducible representations, making them much less computationally efficient than irreducible representations.

**Equivariant MPNNs and MACE.** Equivariant MPNNs [12, 13, 14, 15, 16, 17] have emerged as a powerful architecture to learn on geometric point clouds. They construct permutation-invariant and group-equivariant representations of point clouds. Successful applications include simulations in chemistry, particle physics, and 3D vision. MACE (Batatia _et al._, 2022a) generalized the \(O(3)\)-equivariant MPNNs to build messages of arbitrary body order, outperforming other approaches on molecular tasks. Batatia _et al._ (2022b) showed that the MACE design space is large enough to include most of the previously published equivariant architectures.

## 4 The \(G\)-Equivariant Cluster Expansion

We are concerned with the representation of properties of point clouds. Point clouds are described as multi-sets (unordered tuples) \(X=[x_{i}]_{i}\), where each particle \(x_{i}\) belongs to a configuration domain \(\Omega\). We denote the set of all such multi-sets by \(\mathrm{msets}(\Omega)\). For example, in molecular modeling, \(x_{i}\) might describe the position and species of an atom and therefore \(x_{i}=(\mathbf{r}_{i},Z_{i})\in\mathbb{R}^{3}\times\mathbb{Z}\), while in high energy physics, one commonly uses the four-momentum \(x_{i}=(E_{i},\mathbf{p}_{i})\in\mathbb{R}^{4}\), but one could also include additional features such as charge, spin, and so forth. A property of the point cloud is a map \[\Phi\colon\mathrm{msets}(\Omega)\to Z, \tag{1}\] i.e., \(X\mapsto\Phi(X)\in Z\), usually a scalar or tensor. The range space \(Z\) is application dependent and left abstract throughout this paper. Expressing the input as a multi-set implicitly entails two important facts: (1) it can have varying length; (2) it is invariant under permutations of the particles. The methods developed in this article are also applicable to fixed-length multi-sets, in which case \(\Phi\) is simply a permutation-invariant function defined on some \(\Omega^{N}\). Mappings that are not permutation-invariant are a special case with several simplifications. In many applications, especially in the natural sciences, particle properties satisfy additional symmetries. When a group \(G\) acts on \(\Omega\) as well as on \(Z\), we say that \(\Phi\) is \(G\)**-equivariant** if \[\Phi\circ g=\rho_{Z}(g)\Phi,\qquad g\in G, \tag{2}\] where \(\rho_{Z}(g)\) is the action of the group element \(g\) on the range space \(Z\). In order to effectively incorporate exact group symmetry into properties \(\Phi\), we consider model architectures of the form \[\Phi\colon\mathrm{msets}(\Omega)\underset{\text{embedding}}{\longrightarrow}V\underset{\text{parameterization}}{\longrightarrow}V\underset{\text{readout}}{\longrightarrow}Z, \tag{3}\] where the space \(V\) into which we "embed" the parameterization is a possibly infinite-dimensional vector space in which a convenient representation of the group is available. For simplicity we will sometimes assume that \(Z=V\).
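Definition (2) is easy to test numerically. The following toy check (our illustration, with \(G=\mathrm{O}(3)\) acting on \(\Omega=\mathbb{R}^{3}\)) verifies permutation invariance and equivariance for two simple properties:

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
X = rng.standard_normal((7, 3))              # a point cloud of 7 particles in R^3

def phi_inv(X):                              # scalar property, rho_Z = identity
    return np.sum(np.linalg.norm(X, axis=1))

def phi_equi(X):                             # vector property, rho_Z(g) = g
    return X.sum(axis=0)

g = ortho_group.rvs(3, random_state=0)       # a random element of O(3)
perm = rng.permutation(7)                    # relabelling of the multiset

assert np.isclose(phi_inv((X @ g.T)[perm]), phi_inv(X))   # invariance under G and S_n
assert np.allclose(phi_equi(X @ g.T), g @ phi_equi(X))    # equivariance in the sense of (2)
```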
The Atomic Cluster Expansion (ACE) framework (Drautz, 2019; Dusson _et al._, 2022; Drautz, 2020) produces a complete linear basis for the space of all "smooth" \(G\)-equivariant properties \(\Phi\) for the specific case when \(G=\operatorname{O}(3)\) and the \(x_{i}\) are vectorial interatomic distances. Aspects of the ACE framework were incorporated into \(\operatorname{E}(3)\)-equivariant message passing architectures, with significant improvements in accuracy (Batatia _et al._, 2022a). In the following paragraphs we demonstrate that these ideas readily generalize to arbitrary reductive Lie groups.

### 4.1 Efficient many-body expansion

The first step is to expand \(\Phi\) in terms of body orders, and truncate the expansion at a finite order \(N\): \[\Phi^{(N)}(X)=\varphi_{0}+\sum_{i}\varphi_{1}(x_{i})+\sum_{i_{1},i_{2}}\varphi_{2}(x_{i_{1}},x_{i_{2}})+\dots+\sum_{i_{1},\dots,i_{N}}\varphi_{N}(x_{i_{1}},\dots,x_{i_{N}}), \tag{4}\] where \(\varphi_{n}\) defines the \(n\)-body interaction. Formally, the expansion becomes systematic in the limit as \(N\to\infty\). The second step is the expansion of the \(n\)-particle functions \(\varphi_{n}\) in terms of a symmetrized tensor product basis. To define this we first need to specify the embedding of particles \(x\): a countable family \((\phi_{k})_{k}\) is a one-particle basis if the \(\phi_{k}\) are linearly independent on \(\Omega\) and any smooth one-particle function \(\varphi_{1}\) (not necessarily equivariant) can be expanded in terms of \((\phi_{k})_{k}\), i.e., \[\varphi_{1}(x)=\sum_{k}w_{k}\phi_{k}(x). \tag{5}\] For the sake of concreteness, we assume that \(\phi_{k}\colon\Omega\to\mathbb{C}\), but the range can in principle be any field. Let a complex vector space \(V\) be given, into which the particle embedding maps, i.e., \[(\phi_{k}(x))_{k}\in V\qquad\forall x\in\Omega.\] As a consequence of (5), any smooth scalar \(n\)-particle function \(\varphi_{n}\) can be expanded in terms of the corresponding tensor product basis, \[\varphi_{n}(x_{1},\dots,x_{n})=\sum_{k_{1},\dots,k_{n}}w_{k_{1}\dots k_{n}}\prod_{s=1}^{n}\phi_{k_{s}}(x_{s}). \tag{6}\] Inserting these expansions into (4) and interchanging summation (see appendix for the details), we arrive at a model for scalar permutation-symmetric properties, \[A_{k}=\sum_{x\in X}\phi_{k}(x),\qquad\mathbf{A}_{\mathbf{k}}=\prod_{s=1}^{n}A_{k_{s}},\qquad\Phi^{(N)}=\sum_{\mathbf{k}\in\mathcal{K}}w_{\mathbf{k}}\mathbf{A}_{\mathbf{k}}, \tag{7}\] where \(\mathcal{K}\) is the set of all \(\mathbf{k}\) tuples indexing the features \(\mathbf{A}_{\mathbf{k}}\). Since \(\mathbf{A}_{\mathbf{k}}\) is invariant under permutations of \(\mathbf{k}\), only ordered \(\mathbf{k}\) tuples are retained. The features \(A_{k}\) are an embedding of \(\operatorname{msets}(\Omega)\) into the space \(V\). The tensorial product features (basis functions) \(\mathbf{A}_{\mathbf{k}}\) form a complete linear basis of multi-set functions on \(\Omega\), and the weights \(w_{\mathbf{k}}\) can be understood as a symmetric tensor. We will extend this linear cluster expansion model \(\Phi^{(N)}\) to a message-passing type neural network model in § 4.4. We remark that, while the standard tensor product embeds \((\otimes_{s=1}^{n}\phi_{k_{s}})_{\mathbf{k}}\colon\Omega^{n}\to V^{n}\), the \(n\)-correlations \(\mathbf{A}_{\mathbf{k}}\) are _symmetric tensors_ and embed \((\mathbf{A}_{\mathbf{k}})_{\mathbf{k}}\colon\operatorname{msets}(\Omega)\to\operatorname{Sym}^{n}V\).
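The key computational point of (7) is that the \(n\)-correlations are products of the pooled features \(A_{k}\), so each basis function costs a constant amount of extra work after one pass over the particles. A toy sketch (with a made-up embedding, for illustration only):

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))             # a multiset of 10 particles

def embed(x):
    # A hypothetical one-particle embedding (phi_k)_k, k = 0..4; any linearly
    # independent family works for this illustration
    return np.array([1.0, x[0], x[1], x[2], x @ x])

A = sum(embed(x) for x in X)                 # A_k = sum_x phi_k(x): one pass over X

# n-correlations A_bk = prod_s A_{k_s} over ordered tuples bk, as in (7)
tuples = list(combinations_with_replacement(range(5), 3))
A_corr = {ks: np.prod([A[k] for k in ks]) for ks in tuples}

# A linear model Phi^(N) = sum_bk w_bk A_bk
w = rng.standard_normal(len(tuples))
Phi = sum(wi * A_corr[ks] for wi, ks in zip(w, tuples))
print(len(tuples), Phi)
```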
### Symmetrisation

With (7) we obtained a systematic linear model for (smooth) multi-set functions. It remains to incorporate \(G\)-equivariance. We assume that \(G\) is a reductive Lie group with a locally finite representation in \(V\). In other words, we choose a representation \(\rho=(\rho_{kk^{\prime}})\colon G\to\operatorname{GL}(V)\) such that \[\phi_{k}\circ g=\sum_{k^{\prime}}\rho_{kk^{\prime}}(g)\phi_{k^{\prime}}, \tag{8}\] where for each \(k\) the sum over \(k^{\prime}\) is over a finite index-set depending only on \(k\). Most Lie groups one encounters in physical applications belong to this class, the affine groups being notable exceptions. However, those can usually be treated in an _ad hoc_ fashion, which is done in all \(E(3)\)-equivariant architectures we are aware of. In practice, these requirements restrict how we can choose the embedding \((\phi_{k})_{k}\). If the point clouds \(X=[x_{i}]_{i}\) are already given in terms of a representation of the group, then one may simply construct \(V\) from iterated tensor products of \(\Omega\); see e.g. the MTP (Shapeev, 2016) and PELICAN (Bogatskiy _et al._, 2022) models. To construct an equivariant two-particle basis we need to first construct the set of all intertwining operators from \(V\otimes V\to V\). Concretely, we seek all solutions \(C^{\mathbf{\alpha},K}_{k_{1}k_{2}}\) to the equation \[\sum_{k^{\prime}_{1}k^{\prime}_{2}}C^{\mathbf{\alpha},K}_{k^{\prime}_{1}k^{\prime}_{2}}\rho_{k^{\prime}_{1}k_{1}}(g)\rho_{k^{\prime}_{2}k_{2}}(g)=\sum_{K^{\prime}}\rho_{KK^{\prime}}(g)C^{\mathbf{\alpha},K^{\prime}}_{k_{1}k_{2}}; \tag{9}\] or, written in operator notation, \[C^{\mathbf{\alpha}}\rho\otimes\rho=\rho C^{\mathbf{\alpha}}. \tag{10}\] We will call the \(C^{\mathbf{\alpha},K}_{\mathbf{k}}\)_generalized Clebsch-Gordan coefficients_, since in the case \(G=\mathrm{SO}(3)\), acting on the spherical harmonics embedding \(\phi_{lm}=Y_{l}^{m}\), these coefficients are exactly the classical Clebsch-Gordan coefficients. The index \(\mathbf{\alpha}\) enumerates a basis of the space of all solutions to this equation. For the most common groups, one normally identifies a canonical basis \(C^{\mathbf{\alpha}}\) and assigns a natural meaning to this index (cf. § A.2). Our abstract notation is chosen because of its generality and convenience for designing computational schemes. The generalization of the Clebsch-Gordan equation (9) to \(n\)-fold products of representations acting on the symmetric tensor space \(\mathrm{Sym}^{n}(V)\) becomes (cf. § A.6) \[\begin{split}&\sum_{\mathbf{k}^{\prime}}C^{\mathbf{\alpha},K}_{\mathbf{k}^{\prime}}\overline{\mathbf{\rho}}_{\mathbf{k}^{\prime}\mathbf{k}}=\sum_{K^{\prime}}\rho_{KK^{\prime}}C^{\mathbf{\alpha},K^{\prime}}_{\mathbf{k}}\qquad\forall K,\quad\mathbf{k}=(k_{1},\dots,k_{N}),\quad g\in G,\\ &\text{where}\qquad\overline{\mathbf{\rho}}_{\mathbf{k}^{\prime}\mathbf{k}}=\sum_{\begin{subarray}{c}\pi\in S_{n}\\ \pi\mathbf{k}^{\prime\prime}=\mathbf{k}^{\prime}\end{subarray}}\mathbf{\rho}_{\mathbf{k}^{\prime\prime}\mathbf{k}}\qquad\text{and}\qquad\mathbf{\rho}_{\mathbf{k}^{\prime}\mathbf{k}}=\prod_{t=1}^{n}\rho_{k^{\prime}_{t}k_{t}}.\end{split} \tag{11}\] Due to the symmetry of the \((\mathbf{A}_{\mathbf{k}})_{\mathbf{k}}\) tensors, \(\mathcal{C}^{\mathbf{\alpha},K}_{\mathbf{k}}\) need only be computed for ordered \(\mathbf{k}\) tuples and the sum \(\sum_{\mathbf{k}^{\prime}}\) also runs only over ordered \(\mathbf{k}\) tuples. Again, the index \(\mathbf{\alpha}\) enumerates a basis of the space of solutions.
Equivalently, (11) can be written in compact notation as \[\mathcal{C}^{\mathbf{\alpha}}\overline{\mathbf{\rho}}=\rho\mathcal{C}^{\mathbf{\alpha}}. \tag{12}\] These coupling operators for \(N\)-fold products can often (but not always) be constructed recursively from couplings of pairs (9). We can now define the symmetrized basis \[\mathbf{B}^{K}_{\mathbf{\alpha}}=\sum_{\mathbf{k}^{\prime}}C^{\mathbf{\alpha},K}_{\mathbf{k}^{\prime}}\mathbf{A}_{\mathbf{k}^{\prime}}. \tag{13}\] The equivariance of (13) is easily verified by applying a transformation \(g\in G\) to the input (cf. § A.3).

**Universality:** In the limit as the correlation order \(N\to\infty\), the features \((\mathbf{B}^{K}_{\mathbf{\alpha}})_{K,\mathbf{\alpha}}\) form a complete basis of smooth equivariant multi-set functions, in a sense that we make precise in Appendix A.4. Any equivariant property \(\Phi_{V}\colon\Omega\to V\) can be approximated by a linear model \[\Phi_{V}^{K}=\sum_{\mathbf{\alpha}}c^{K}_{\mathbf{\alpha}}B^{K}_{\mathbf{\alpha}}, \tag{14}\] to within arbitrary accuracy by taking the number of terms in the linear combination to infinity.

### Dimension Reduction

The tensor product of the cluster expansion in (7) is taken over all the indices of the one-particle basis. Unless the embedding \((\phi_{k})_{k}\) is very low-dimensional, it is often preferable to "sketch" this tensor product. For example, consider the canonical embedding of an atom \(x_{i}=(\mathbf{r}_{i},Z_{i})\), \[\phi_{k}(x_{i})=\phi_{znlm}(x_{i})=\delta_{zZ_{i}}R_{nl}(r_{i})Y_{l}^{m}(\hat{r}_{i}).\] Only the \((lm)\) channels are involved in the representation of \(\mathrm{O}(3)\), hence there is considerable freedom in "compressing" the \((zn)\) channels. Following Darby _et al._ (2022) we construct a sketched \(G\)-equivariant cluster expansion: we endow the one-particle basis with an additional index \(c\), referred to as the sketched channel, replacing the index \(k\) with the index pair \((c,k)\) and renaming the embedding \((\phi_{ck})_{c,k}\). In the case of three-dimensional particles one may, for example, choose \(c=(z,n)\). In general it is crucial that the representation remains in terms of the \(\rho_{kk^{\prime}}\), that is, (8) becomes \[\phi_{ck}\circ g=\sum_{k^{\prime}}\rho_{kk^{\prime}}(g)\phi_{ck^{\prime}}. \tag{15}\] Therefore, manipulating only the \(c\) channel does not change any symmetry properties of the architecture. We can use this fact to admit a learnable embedding, \[\tilde{A}_{ck}=\sum_{c^{\prime}}w_{cc^{\prime}}\phi_{c^{\prime}k}.\] This mechanism is employed in numerous architectures to reduce the dimensionality of the embedding, but the approach taken by Darby _et al._ (2022) and Batatia _et al._ (2022b), and followed here, is the exact opposite: we allow many more learnable \(c\) channels but then decouple them, resulting in a much lower-dimensional basis of \(n\)-correlations, defined by \[\tilde{\mathbf{A}}_{c\mathbf{k}}=\prod_{t=1}^{n}\Bigg(\sum_{c^{\prime}}w_{cc^{\prime}}\sum_{x\in X}\phi_{c^{\prime}k_{t}}(x)\Bigg). \tag{16}\] The resulting symmetrized basis is then obtained by \[\mathbf{B}_{c\mathbf{\alpha}}^{K}=\sum_{\mathbf{k}^{\prime}}C_{\mathbf{k}^{\prime}}^{\mathbf{\alpha},K}\tilde{\mathbf{A}}_{c\mathbf{k}^{\prime}}. \tag{17}\] Following the terminology of Darby _et al._ (2022), we call this architecture the tensor-reduced ACE, or \(G\)-TRACE. There are numerous natural variations on its construction, but for the sake of simplicity we restrict our presentation to this one case.
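A minimal sketch of the tensor-reduced correlations (16), our illustration with random stand-ins for the embeddings and a single mixing matrix \(w_{cc^{\prime}}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, K, C, n = 10, 8, 16, 3        # embedding size K, channels C, order n

Phi = rng.standard_normal((n_particles, C, K))    # stand-in for phi_{ck}(x), x in X
W = rng.standard_normal((C, C)) / np.sqrt(C)      # learnable mixing w_{cc'}

A = Phi.sum(axis=0)                                # A_{ck} = sum_x phi_{ck}(x)
mixed = W @ A                                      # sum_{c'} w_{cc'} A_{c'k}, shape (C, K)

def A_tilde(ks):
    # Eq. (16) for one tuple bk = (k_1, ..., k_n): the channels stay decoupled,
    # so each tuple yields a C-vector instead of a dense tensor entry
    return np.prod([mixed[:, k] for k in ks], axis=0)

print(A_tilde((0, 2, 5)).shape)            # (C,)
```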
**Universality:** Following the proof of Darby _et al._ (2022), one can readily see that the \(G\)-TRACE architecture inherits the universality of the cluster expansion in the limit of decoupled channels, \(\#c\rightarrow\infty\). A smooth equivariant property \(\Phi\) may be approximated to within arbitrary accuracy by an expansion \(\Phi^{K}(X)\approx\sum_{c,\mathbf{\alpha}}c^{K}_{\mathbf{\alpha}}\mathbf{B}_{c\mathbf{\alpha}}^{K}(X)\). Since the embedding \(\tilde{A}_{ck}\) is learnable, this is a _nonlinear model_. We refer to § A.4 for the details.

### G-MACE, Multi-layer cluster expansion

The \(G\)-equivariant cluster expansion is readily generalized to a multi-layer architecture by re-expanding previous features in a new cluster expansion (Batatia _et al._, 2022b). The multi-set \(X\) is endowed with extra features, \(\mathbf{h}_{i}^{t}=(h_{i,ck}^{t})_{c,k}\), that are updated for \(t\in\{1,...,T\}\) iterations. These features themselves are chosen to be a field of representations, so that they have a well-defined transformation under the action of the group. This results in \[x_{i}^{t}=(x_{i},\mathbf{h}_{i}^{t}), \tag{18}\] \[\phi_{ck}^{t}(x_{i},\mathbf{h}_{i}^{t})=\sum_{\mathbf{\alpha}}w_{\mathbf{\alpha}}^{t,ck}\sum_{k^{\prime},k^{\prime\prime}}C_{k^{\prime}k^{\prime\prime}}^{\mathbf{\alpha},k}h_{i,ck^{\prime}}^{t}\phi_{ck^{\prime\prime}}(x_{i}). \tag{19}\] The recursive update of the features proceeds as in a standard message-passing framework, but with the unique aspect that messages are formed via the \(G\)-TRACE and in particular can contain arbitrarily high correlation order: \[m_{i,cK}^{t}=\sum_{\mathbf{\alpha}}W_{\mathbf{\alpha}}^{t,cK}\mathbf{B}_{c\mathbf{\alpha}}^{t,K}. \tag{20}\] The gathered message \(\mathbf{m}_{i}^{t}=(m_{i,cK}^{t})_{c,K}\) is then used to update the particle states, \[x_{i}^{t+1}=(x_{i},\mathbf{h}_{i}^{t+1}),\qquad\mathbf{h}_{i}^{t+1}=U_{t}\big(\mathbf{m}_{i}^{t}\big), \tag{21}\] where \(U_{t}\) can be an arbitrary fixed or learnable transformation (even the identity). Lastly, a readout function maps the state of a particle to a target quantity of interest, which could be _local_ to each particle or _global_ to the mset \(X\), \[y_{i}=\sum_{t=1}^{T}\mathcal{R}_{t}^{\mathrm{loc}}(x_{i}^{t}),\qquad\text{respectively,}\qquad y=\sum_{t=1}^{T}\mathcal{R}_{t}^{\mathrm{glob}}(\{x_{i}^{t}\}_{i}). \tag{22}\] This multi-layer architecture corresponds to a general message-passing neural network with arbitrary body order of the message at each layer. We will refer to this architecture as \(G\)-MACE. The \(G\)-MACE architecture directly inherits universality from the \(G\)-ACE and \(G\)-TRACE architectures:

**Theorem 4.1** (Universality of \(G\)-MACE).: _Assume that the one-particle embedding \((\phi_{k})_{k}\) is a complete basis. Then, the set of \(G\)-MACE models, with a fixed finite number of layers \(T\), is dense in the set of continuous and equivariant properties of point clouds \(X\in\mathrm{msets}(\Omega)\), in the topology of pointwise convergence. It is dense in the uniform topology on compact and size-bounded subsets._
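To fix ideas, here is a highly simplified sketch of the recursion (18)-(21) in which only the trivial (invariant) representation is kept, so every Clebsch-Gordan contraction collapses to a plain product; the sizes, the global pooling, and the choice of \(U_{t}\) are all our own arbitrary choices for illustration, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, C, nu, T = 12, 16, 3, 2       # channels C, correlation order nu, T layers

def gmace_layer(phi_x, h, W_mix, W_msg):
    # (19): couple the state h_i with the particle embedding; with only the
    # trivial representation the CG-coupled product is an elementwise product
    phi_t = h * phi_x
    # (16): pool over the multiset, mix channels, take a body-order-nu product
    A = W_mix @ phi_t.sum(axis=0)
    B = A ** nu
    # (20)-(21): linear map of the invariant basis, then a residual update U_t
    return np.tanh(h + W_msg @ B)

phi_x = rng.standard_normal((n_particles, C))      # embeddings phi_{ck}(x_i)
h = np.ones((n_particles, C))                      # initial states h_i^0
for t in range(T):
    h = gmace_layer(phi_x, h,
                    rng.standard_normal((C, C)) / np.sqrt(C),
                    rng.standard_normal((C, C)) / np.sqrt(C))
print(h.shape)                                     # updated states after T iterations
```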
## 5 lie-nn : Generating Irreducible Representations for Reductive Lie Groups

In order to construct the \(G\)-cluster expansion for arbitrary Lie groups, one needs to compute the generalized Clebsch-Gordan coefficients (11) for a given tuple of representations (see (13)). To facilitate this task, we have implemented an open source software library, lie-nn1. In this section we review the key techniques employed in this library.

Footnote 1: [https://github.com/lie-nn/lie-nn](https://github.com/lie-nn/lie-nn)

### Lie Algebras of Reductive Lie Groups

Formally, the Lie algebra of a Lie group is its tangent space at the origin and carries an additional structure, the Lie bracket. Informally, the Lie algebra can be thought of as a linear approximation to the Lie group but, due to the group structure, this linear approximation carries (almost) full information about the group. In particular, the representation theory of the group is almost entirely determined by the Lie algebra, which is a simpler object to work with than the fully nonlinear Lie group.

**Lie algebra.** The Lie groups we study can be realized as closed subgroups \(G\subset\mathrm{GL}_{n}(\mathbb{R})\) of the general linear group. In that case their Lie algebras can be concretely realized as \(\mathfrak{g}=\mathrm{Lie}(G)=\{X\in M_{n}(\mathbb{R})\mid\forall t\in\mathbb{R}:\exp(tX)\in G\}\), where \(\exp(X)=1+X+\frac{1}{2}X^{2}+\dots\) is the standard matrix exponential. It turns out that \(\mathfrak{g}\subset M_{n}(\mathbb{R})\) is a linear subspace closed under the commutator bracket \([X,Y]=XY-YX\).

**Structure theory.** We fix a linear basis \(\{X_{i}\}\subset\mathfrak{g}\), called a set of generators for the group. The Lie algebra structure is determined by the _structure constants_ \(A_{ijk}\) defined by \([X_{i},X_{j}]=\sum_{k}A_{ijk}X_{k}\), in that if \(X=\sum_{i}a_{i}X_{i}\) and \(Y=\sum_{j}b_{j}X_{j}\) then \([X,Y]=\sum_{k}\left(\sum_{i,j}A_{ijk}a_{i}b_{j}\right)X_{k}\). The classification of reductive groups provides convenient generating sets for their Lie algebras (or their complexifications). One identifies a large commutative subalgebra \(\mathfrak{h}\subset\mathfrak{g}\) (sometimes of \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}\)) with basis \(\{H_{i}\}\) so that most (or all) of the other generators \(E_{\alpha}\) can be chosen so that \([H_{i},E_{\alpha}]=\alpha(H_{i})E_{\alpha}\) for a linear function \(\alpha\) on \(\mathfrak{h}\). These functions are the so-called _roots_ of \(\mathfrak{g}\). Structural information about \(\mathfrak{g}\) is commonly encoded pictorially via the _Dynkin diagram_ of \(\mathfrak{g}\), a finite graph the nodes of which are a certain subset of the roots. There are four infinite families of simple complex Lie algebras, \(A_{n}=\mathfrak{su}(n+1)\), \(B_{n}=\mathfrak{so}(2n+1)\), \(C_{n}=\mathfrak{sp}(2n)\), \(D_{n}=\mathfrak{so}(2n)\), and a further five exceptional simple complex Lie algebras (a general reductive Lie algebra is the direct sum of several simple ones and its centre). The Lie algebra only depends on the connected component of \(G\); thus, when the group \(G\) is disconnected, in addition to the infinitesimal generators \(\{X_{i}\}\) one also needs to fix so-called "discrete generators", a subset \(\mathbf{H}\subset G\) containing a representative from each connected component.

Figure 2: Examples of Dynkin diagrams and their associated group class.

**Representation theory.** The representation theory of complex reductive Lie algebras is completely understood. Every finite-dimensional representation is (isomorphic to) the direct sum of irreducible representations ("irreps"), with the latter parametrized by appropriate linear functionals on \(\mathfrak{h}\) ("highest weights"). Further, given a highest weight \(\lambda\) there is a construction of the associated irrep with an explicit action of the infinitesimal generators chosen above. The **Weyl Dimension Formula** gives the dimension of an irrep in terms of its highest weight.
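As a small illustration (ours, not library code) of recovering structure constants from a generating set, consider \(\mathfrak{so}(3)\), whose constants are the Levi-Civita symbol:

```python
import numpy as np

# Infinitesimal generators of so(3): a basis {X_i} of the Lie algebra
X = np.zeros((3, 3, 3))
X[0] = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
X[1] = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
X[2] = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

# Recover the structure constants A_ijk from [X_i, X_j] = sum_k A_ijk X_k by
# least squares against the flattened basis
basis = X.reshape(3, -1).T                     # 9 x 3
A = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        comm = (X[i] @ X[j] - X[j] @ X[i]).ravel()
        A[i, j] = np.linalg.lstsq(basis, comm, rcond=None)[0]
print(np.round(A, 6))    # for so(3): A_ijk = epsilon_ijk, the Levi-Civita symbol
```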
### Numerical Computations in lie-nn

The most basic class of the lie-nn library encodes a group \(G\) and an infinitesimal representation \(d\rho\) of \(\mathfrak{g}\) using the tuple \[\rho:=(A,n,\{d\rho(X_{i})\}_{i},\{\rho(h)\}_{h\in\mathbf{H}})\,, \tag{23}\] with \(A\) the structure constants of the group, \(n\) the dimension of the representation, and \(d\rho(X_{i})\) and \(\rho(h)\) being \(n\times n\) matrices encoding the action of the infinitesimal and the discrete generators, respectively. The action of the infinitesimal generators is related to the action of the group generators by the exponential: \(\forall X\in\mathfrak{g},\ \rho(e^{X})=e^{d\rho(X)}\). As the building blocks of the theory, irreps are treated specially; the package implements functionality for the following operations for each supported Lie group:

* Constructing the irrep with a given highest weight.
* Determining the dimension of an irrep.
* Decomposing the tensor product of several irreps into irreps up to isomorphism (the **selection rule**, giving the list of irreducible components and their multiplicities).
* Decomposing the tensor product of several irreps into irreps explicitly via a change of basis ("generalized **Clebsch-Gordan** coefficients").
* Computing the symmetrized tensor product of the group (see § A.6 for details).

To construct an irrep explicitly as in (23), one needs to choose a basis in the abstract representation space (including a labeling scheme for the basis) so that we can give matrix representations for the action of the generators. For this purpose, we use in lie-nn the Gelfand-Tsetlin (GT) basis (Gelfand and Tsetlin, 1950) and the associated labeling of the basis by GT patterns (this formalism was initially introduced for algebras of type \(A_{n}\) but later generalized to all classical groups). Enumerating the GT patterns for a given algebra gives the dimension of a given irrep, the selection rules can be determined combinatorially, and it is also possible to give explicit algorithms to compute Clebsch-Gordan coefficients (the case of \(A_{n}\) is treated by Alex _et al._ (2011)). For some specific groups, simplifications to this procedure are possible and GT patterns are not required. In some cases, one wants to compute coefficients for reducible representations or for representations where the analytical computation with GT patterns is too complex. In these cases, a numerical algorithm to compute the coefficients is required. Let \(d\rho_{1},d\rho_{2}\) be two Lie algebra representations of interest. The tensor product representation on the Lie algebra can be computed as \[d\rho_{1}\otimes d\rho_{2}\,(X)=d\rho_{1}(X)\otimes 1+1\otimes d\rho_{2}(X). \tag{24}\] Therefore, given sets of generators of three representations \(d\rho_{1},d\rho_{2},d\rho_{3}\), the Clebsch-Gordan coefficients are the change of basis between \(d\rho_{1}(X)\otimes 1+1\otimes d\rho_{2}(X)\) and \(d\rho_{3}(X)\). One can compute this change of basis numerically via a null space algorithm. For some groups, one can apply an iterative algorithm that generates all irreps starting with a single representation, using the above-mentioned procedure (see § A.7).
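The null-space procedure is easy to reproduce for \(\mathfrak{su}(2)\) (a sketch we add for illustration; lie-nn's actual routines differ): with \(d\rho_{1}=d\rho_{2}\) the spin-\(\tfrac{1}{2}\) irrep and \(d\rho_{3}\) the spin-\(1\) irrep, the intertwiner recovered below is the classical Clebsch-Gordan block.

```python
import numpy as np

# su(2) generators in the spin-1/2 (rho_1 = rho_2) and spin-1 (rho_3) irreps
half = [np.array([[0, 1], [1, 0]]) / 2,
        np.array([[0, -1j], [1j, 0]]) / 2,
        np.array([[1, 0], [0, -1]]) / 2]
one = [np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2),
       np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2),
       np.diag([1.0, 0.0, -1.0]).astype(complex)]

# Per (24), the tensor-product generators are kron(X, 1) + kron(1, X); the CG
# map Q solves Q T_g = R_g Q for every generator g, i.e. a joint null space
blocks = []
for X, R in zip(half, one):
    T = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)
    blocks.append(np.kron(T.T, np.eye(3)) - np.kron(np.eye(4), R))
M = np.vstack(blocks)                      # acts on vec(Q), column-major convention

_, sv, Vh = np.linalg.svd(M)
null = Vh[sv < 1e-10].conj()               # right null vectors of M
Q = null[0].reshape(3, 4, order='F')       # the intertwiner, unique up to scale

for X, R in zip(half, one):
    T = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)
    assert np.allclose(Q @ T, R @ Q)       # Q intertwines all generators
print(len(null), np.round(Q / np.abs(Q).max(), 3))   # multiplicity 1
```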
## 6 Applications

### 6.1 Lie groups and their applications

In Table 1 we give a non-exhaustive overview of Lie groups and their typical application domains, to which our methodology naturally applies. Benchmarking our method on all of these applications is beyond the scope of the present work, in particular because most of these fields do not have standardized benchmarks and baselines to compare against. The MACE architecture has proven to be state of the art for a large range of atomistic modeling benchmarks (Batatia _et al._, 2022a). In the next section, we choose two new prototypical applications and their respective groups to further assess the performance of our general approach.

\begin{table} \begin{tabular}{l r r} \hline Group & Application & Reference \\ \hline \(\mathrm{U}(1)\) & Electromagnetism & (Lagrave _et al._, 2021) \\ \(\mathrm{SU}(3)\) & Quantum Chromodynamics & (Favoni _et al._, 2022) \\ \(\mathrm{SO}(3)\) & 3D point clouds & (Batatia _et al._, 2022a) \\ \(\mathrm{SO}^{+}(1,3)\) & Particle Physics & (Bogatskiy _et al._, 2020b) \\ \(\mathrm{SL}(3,\mathbb{R})\) & Point cloud classification & - \\ \(\mathrm{SU}(2^{N})\) & Entangled QP & - \\ \hline \(\mathrm{Sp}(N)\) & Hamiltonian dynamics & - \\ \(\mathrm{SO}(2N+1)\) & Projective geometry & - \\ \hline \hline \end{tabular} \end{table} Table 1: Lie groups of interest covered by the present methods and their potential applications to equivariant neural networks. The groups above the horizontal line are already available in lie-nn. The ones below the line fall within our framework and can be added.

### 6.2 Particle physics with \(\mathrm{SO}(1,3)\)

Jet tagging consists in identifying the process that generated a collimated spray of particles, called a _jet_, after a high-energy collision occurs at particle colliders. Each jet can be defined as a multiset of four-momenta \([(E_{i},\mathbf{p}_{i})]_{i=1}^{N}\), where \(E_{i}\in\mathbb{R}^{+}\) and \(\mathbf{p}_{i}\in\mathbb{R}^{3}\). Current state-of-the-art models incorporate the natural symmetry arising from relativistic objects, i.e., the Lorentz symmetry, as model invariance. To showcase the performance and generality of the \(G\)-MACE framework, we use the Top-Tagging dataset (Butter _et al._, 2019), where the task is to differentiate boosted top quarks from the background composed of gluons and light quark jets. \(G\)-MACE achieves excellent accuracy, being the only generic group-equivariant model to reach similar accuracy to PELICAN. We refer to Appendix A.8.1 for the details of the architecture.

### 6.3 3D Shape recognition

3D shape recognition from point clouds is of central importance for computer vision. We use the ModelNet10 dataset (Wu _et al._, 2015) to test our proposed architecture in this setting. As rotated objects need to map to the same class, we use a MACE model with \(O(3)\) symmetry. To create an encoder version of \(G\)-MACE, we augment a PointNet++ implementation (Yan, 2019) with \(G\)-MACE layers. See Appendix A.8.2 for more details on the architecture.
\begin{table} \begin{tabular}{l r r r r} \hline \hline Architecture & \#Params & Accuracy & AUC & \(\mathrm{Rej}_{30\%}\) \\ \hline \hline **PELICAN** & 45k & **0.942** & **0.987** & \(\mathbf{2289\pm 204}\) \\ **ParT** & 2.14M & 0.940 & 0.986 & \(1602\pm 81\) \\ **ParticleNet** & 498k & 0.938 & 0.985 & \(1298\pm 46\) \\ **LorentzNet** & 224k & **0.942** & **0.987** & \(2195\pm 173\) \\ **BIP** & 4k & 0.931 & 0.981 & \(853\pm 68\) \\ **LGN** & 4.5k & 0.929 & 0.964 & \(435\pm 95\) \\ **EFN** & 82k & 0.927 & 0.979 & \(888\pm 17\) \\ **TopoDNN** & 59k & 0.916 & 0.972 & \(295\pm 5\) \\ **LorentzMACE** & 228k & **0.942** & **0.987** & \(1935\pm 85\) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of state-of-the-art models on the Top-Tagging dataset. Scores were taken from (Bogatskiy _et al._, 2022; Qu _et al._, 2022; Qu and Gouskos, 2020; Munoz _et al._, 2022; Bogatskiy _et al._, 2020a; Komiske _et al._, 2019; Pearkes _et al._, 2017).

\begin{table} \begin{tabular}{l r} \hline \hline Architecture & Accuracy \\ \hline PointNet (Qi _et al._, 2016) & 94.2 \\ PointNet++ (Qi _et al._, 2017) & 95.0 \\ PointMACE (ours) & **96.1** \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy in shape recognition.

## 7 Conclusion

We introduced the \(G\)-Equivariant Cluster Expansion, which generalizes the successful ACE and MACE architectures to symmetries under arbitrary reductive Lie groups. We provide an open-source Python library, lie-nn, that provides all the essential tools to construct such general Lie-group equivariant neural networks. We demonstrated that the general \(G\)-MACE architecture simultaneously achieves excellent accuracy in Chemistry, Particle Physics, and Computer Vision. Future development will implement additional groups and generalize to new application domains.

## Acknowledgments and Disclosure of Funding

IB's work was supported by the ENS Paris Saclay. CO's work was supported by NSERC Discovery Grant IDGR019381 and NFRF Exploration Grant GR022937. This work was also performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3). IB would like to thank Gabor Csanyi for his support.
2309.12204
PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements
We present a neural network for mitigating biased errors in pseudoranges to improve localization performance with data collected from mobile phones. A satellite-wise Multilayer Perceptron (MLP) is designed to regress the pseudorange bias correction from six satellite, receiver, context-related features derived from Android raw Global Navigation Satellite System (GNSS) measurements. To train the MLP, we carefully calculate the target values of pseudorange bias using location ground truth and smoothing techniques and optimize a loss function involving the estimation residuals of smartphone clock bias. The corrected pseudoranges are then used by a model-based localization engine to compute locations. The Google Smartphone Decimeter Challenge (GSDC) dataset, which contains Android smartphone data collected from both rural and urban areas, is utilized for evaluation. Both fingerprinting and cross-trace localization results demonstrate that our proposed method outperforms model-based and state-of-the-art data-driven approaches.
Xu Weng, Keck Voon Ling, Haochen Liu
2023-09-16T10:43:59Z
http://arxiv.org/abs/2309.12204v2
# PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements

###### Abstract

We present a neural network for mitigating pseudorange bias to improve localization performance with data collected from Android smartphones. We represent pseudorange bias using a pragmatic satellite-wise Multilayer Perceptron (MLP), the inputs of which are six satellite-receiver-context-related features derived from Android raw Global Navigation Satellite System (GNSS) measurements. To supervise the training process, we carefully calculate the target values of pseudorange bias using location ground truth and smoothing techniques and optimize a loss function containing the estimation residuals of smartphone clock bias. During the inference process, we employ model-based localization engines to compute locations with pseudoranges corrected by the neural network. Consequently, this hybrid pipeline can attend to both pseudorange bias and noise. We evaluate the framework on an open dataset and consider four application scenarios for investigating fingerprinting and cross-trace localization in rural and urban areas. Extensive experiments demonstrate that the proposed framework outperforms model-based and state-of-the-art data-driven approaches.

Android smartphones, deep learning, localization, GPS, pseudoranges.

## I Introduction

Since the release of Android raw Global Navigation Satellite System (GNSS) measurements, precise localization using ubiquitous and portable Android smartphones has been expected to enable various exciting localization-based applications, such as precise vehicle navigation, smart management of city assets, outdoor augmented reality, and mobile health monitoring [1]. However, it is difficult to keep such a promise because the inferior GNSS chips and antennas mounted in mass-market smartphones lead to large pseudorange noise and bias [2, 3, 4]. While filtering or smoothing can reduce pseudorange noise, it remains challenging to mitigate pseudorange bias that might be caused by multipath, non-line-of-sight (NLOS) propagation, modeling residuals of atmospheric delays, smartphone hardware delays, etc. [5]. To address this issue, we propose PrNet, a neural network for correcting pseudoranges to improve positioning with Android raw GNSS measurements. As illustrated in Fig. 1, the underlying idea is to train a neural network by regressing from six satellite-receiver-context-related features to pseudorange bias. The squared loss is optimized over all visible satellites at each epoch. After training the neural network, we can predict the biased errors, eliminate them from pseudoranges, and then feed the corrected pseudoranges into a classical localization engine to compute locations.

Fig. 1: An overview of our PrNet-based localization pipeline. The blue solid and dashed lines represent the biased pseudoranges and unbiased pseudoranges (we assume all bias is positive here for easy illustration), respectively. The red, purple, yellow, and green segments denote the pseudorange bias of satellites SV1-SV4.

However, we have to overcome two stumbling obstacles to implement the above idea. 1) _Feature Selection_: over 30 raw GNSS measurements are logged from Android smartphones [1], and relevant features must be carefully chosen to enhance the performance of the neural network while minimizing computational costs [6]. 2) _Data Labeling_: the training data should be labeled with pseudorange bias, while we can only obtain the ground truth of smartphones' locations [1]. Previous research has proposed various methods to estimate the pseudorange bias for geodetic GNSS receivers but ignored the pseudorange noise [7, 8, 9, 10]. These methods suffer degraded performance when transferred to Android measurements, which are about one order of magnitude noisier than geodetic-quality ones.
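To make the pipeline in Fig. 1 concrete, here is a minimal sketch of a satellite-wise bias regressor with a visibility mask (our illustration only: the layer sizes, feature layout, and batch shapes are assumptions, not the released PrNet code):

```python
import torch
import torch.nn as nn

class BiasMLP(nn.Module):
    """One MLP shared across satellites; regresses a scalar bias per satellite."""
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats, mask):
        # feats: (epochs, max_sats, n_features); mask: (epochs, max_sats) in {0, 1}
        bias = self.net(feats).squeeze(-1)
        return bias * mask                    # zero out satellites that are not visible

model = BiasMLP()
feats = torch.randn(32, 12, 6)                # illustrative batch of epochs
mask = (torch.rand(32, 12) > 0.3).float()
pseudoranges = torch.randn(32, 12)            # placeholder measurements
corrected = pseudoranges - model(feats, mask) # corrected pseudoranges feed a WLS/EKF engine
print(corrected.shape)
```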
To this end, we nominate two new features by visualizing Android measurements across various dimensions and derive the target values of pseudorange bias using location ground truth and the Rauch-Tung-Striebel (RTS) smoothing algorithm [11]. Fig. 1: An overview of our PrNet-based localization pipeline. The blue solid and dashed lines represent the biased pseudoranges and unbiased pseudoranges (we assume all bias is positive here for easy illustration), respectively. The red, purple, yellow, and green segments denote the pseudorange bias of satellites SV1-SV4. Besides, our experiments show that incorporating estimation residuals of smartphone clock bias into the loss function can enhance the inference ability of the neural network. To recapitulate briefly, our contributions are: * A pipeline for learning the mapping from six satellite-receiver-context-related inputs to pseudorange bias representation, which is parameterized by a pragmatic satellite-wise MLP. This includes two new proposed inputs--unit geometry vectors and smartphone headings--and a visible satellite mask layer to enable parallel computation. * Computation methods to derive labels for pseudorange bias and a differentiable loss function involving estimation residuals of smartphone clock bias. * A comprehensive evaluation from the perspectives of horizontal positioning errors, generalization ability, computational load, and ablation analysis. Our codes are available at [https://github.com/ailocar/prnet](https://github.com/ailocar/prnet). We demonstrate that our proposed PrNet-based localization framework outperforms both the model-based and data-driven state-of-the-art pseudorange-based localization approaches. To the best of our knowledge, we present the first data-driven framework for pseudorange correction and localization over Android smartphone measurements. ## II Motivation The primary motivation behind this paper is to remove the biased errors present in pseudoranges collected from Android smartphones. To investigate the pseudorange bias and its impact on horizontal positioning accuracy, we derive the pseudorange bias as detailed in Section V and adopt Weighted Least Squares (WLS), Moving Horizon Estimation (MHE), Extended Kalman Filter (EKF), and Rauch-Tung-Striebel (RTS) smoother to compute locations with the biased pseudoranges. We choose a trace of data in the Google Smartphone Decimeter Challenge (GSDC) dataset [1] and display our findings in Fig. 2. As depicted in Fig. 2(a), biased errors are pervasive across all visible satellites and can reach magnitudes up to 10 meters. Such biased errors might be attributed to multipath, NLOS, residuals in atmospheric delay modeling, smartphone hardware interference, etc. Importantly, they can hardly be modeled mathematically. Furthermore, Fig. 2(b) clearly shows how the biased pseudoranges directly translate into localization errors that are challenging to mitigate using conventional filtering or smoothing techniques. ## III Related Work This paper focuses on artificial intelligence (AI) for GNSS localization using pseudoranges, which can be categorized into five types. **(i)**_AI for Pseudorange Correction_: Recently, several learning-based methods have been proposed to predict and correct pseudorange errors using raw GNSS measurements as inputs [8, 9, 10, 12]. However, all these methods use pseudorange errors (comprising noise and bias) to label training data and cannot be transferred to Android raw GNSS measurements directly. 
**(ii)**_AI for Position Correction_: Various machine learning methods have been proposed to predict offsets between model-based position estimations and ground truth locations [13, 14]. Then, the model-based position estimations are compensated with the learned location offsets. **(iii)**_End-to-end Neural Networks for GNSS_: This type of work directly replaces the model-based positioning engines with deep neural networks [15, 16]. For example, authors in [15] leveraged a set transformer to replace the WLS engine to solve the pseudorange equations linearized about an initial location guess. Test results showed that the set transformer is sensitive to the initial location guess to a certain extent. Compared with the first type, these two kinds of approaches need to learn how to compute locations, even though location computation is already robustly and mathematically well established. **(iv)**_AI Enhanced Localization Engine_: AI has been utilized to improve the physical models employed in conventional localization engines [17, 18, 19]. For example, a parameter of a filter-based localization engine can be learned instead of being tweaked empirically [19]. **(v)**_AI for Signal Classification_: The final category involves using AI to detect and classify multipath and NLOS propagations. By identifying pseudoranges containing multipath or NLOS errors, these methods can exclude them from the further localization process [20, 21, 22]. ## IV Preliminaries of GNSS After the corrections of atmospheric delays as well as satellite clock offsets [23], the pseudorange measurement \(\rho_{c_{k}}^{(n)}\) from the \(n^{th}\) satellite to a smartphone at the \(k^{th}\) time step is given below. \[\rho_{c_{k}}^{(n)}=r_{k}^{(n)}+\delta t_{u_{k}}+\varepsilon_{k}^{(n)} \tag{1}\] where the subscript \(k\) represents the \(k^{th}\) time step. \(r_{k}^{(n)}\) denotes the geometry distance from the \(n^{th}\) satellite to the smartphone. \(\delta t_{u_{k}}\) represents the clock offsets of the smartphone relative to the GNSS reference time. We wrap up multipath delays, hardware delays, pseudorange noise, and other potential errors in one item \(\varepsilon_{k}^{(n)}\) called pseudorange errors. Then, we can estimate the smartphone's location \(\mathbf{x}_{k}=\left[x_{k},y_{k},z_{k}\right]^{T}\) in the Earth-centered, Earth-fixed (ECEF) coordinate system and its clock offset \(\delta t_{u_{k}}\) by solving the following linear equation system [23] established by \(M\) visible satellites: \[\mathbf{W}_{k}\mathbf{G}_{k}\left(\mathbf{X}_{k}-\tilde{\mathbf{X}}_{k}\right) =\mathbf{W}_{k}\Delta\mathbf{\rho}_{c_{k}} \tag{2}\] where \(\mathbf{X}_{k}=\left[x_{k},y_{k},z_{k},\delta t_{u_{k}}\right]^{T}\) is the unknown user state while \(\tilde{\mathbf{X}}_{k}=\left[\tilde{x}_{k},\tilde{y}_{k},\tilde{z}_{k},\delta \tilde{t}_{u_{k}}\right]^{T}\) is an approximation of the user state. \(\mathbf{W}_{k}\) is a diagonal matrix with the reciprocals of 1-\(\sigma\) pseudorange uncertainty of all visible satellites as its main diagonal to weight pseudoranges. Fig. 2: Impact of pseudorange bias (Trace “2021-04-28-US-MTV-1” by Pixel4 in GSDC dataset as an example). (a) Pseudorange bias of all visible satellites throughout the trace. (b) Horizontal errors calculated with Vincenty’s formulae using Weighted Least Squares (WLS), Moving Horizon Estimation (MHE), Extended Kalman Filter (EKF), and Rauch-Tung-Striebel Smoother (RTSS). 
The geometry matrix \(\mathbf{G}_{k}\) is calculated with the satellite location \(\mathbf{x}_{k}^{(n)}=\left[x_{k}^{(n)},y_{k}^{(n)},z_{k}^{(n)}\right]^{T}\) and the approximate user location \(\tilde{\mathbf{x}}_{k}\)[23]: \[\mathbf{G}_{k}=\left[\begin{array}{cccc}a_{x_{k}}^{(1)}&a_{y_{k}}^{(1)}&a_{z_{k}}^{(1)}&1\\ a_{x_{k}}^{(2)}&a_{y_{k}}^{(2)}&a_{z_{k}}^{(2)}&1\\ \vdots&\vdots&\vdots&\vdots\\ a_{x_{k}}^{(M)}&a_{y_{k}}^{(M)}&a_{z_{k}}^{(M)}&1\end{array}\right] \tag{3}\] where, \[a_{x_{k}}^{(n)} =\frac{\tilde{x}_{k}-x_{k}^{(n)}}{\tilde{r}_{k}^{(n)}},\text{ }a_{y_{k}}^{(n)}=\frac{\tilde{y}_{k}-y_{k}^{(n)}}{\tilde{r}_{k}^{(n)}},\text{ }a_{z_{k}}^{(n)}=\frac{\tilde{z}_{k}-z_{k}^{(n)}}{\tilde{r}_{k}^{(n)}}\] \[\tilde{r}_{k}^{(n)} =\sqrt{\left(\tilde{x}_{k}-x_{k}^{(n)}\right)^{2}+\left(\tilde{y}_{k}-y_{k}^{(n)}\right)^{2}+\left(\tilde{z}_{k}-z_{k}^{(n)}\right)^{2}}\] \[n=1,2,3,\cdots,M\] The pseudorange residuals \(\Delta\boldsymbol{\rho}_{c_{k}}\) for the \(M\) visible satellites at the \(k^{th}\) time step are given as follows. \[\Delta\boldsymbol{\rho}_{c_{k}}=\left[\Delta\rho_{c_{k}}^{(1)},\Delta\rho_{c_{k}}^{(2)},...,\Delta\rho_{c_{k}}^{(M)}\right]^{T}\] where \[\Delta\rho_{c_{k}}^{(n)} =\rho_{c_{k}}^{(n)}-\tilde{r}_{k}^{(n)}-\delta\tilde{t}_{u_{k}}-\varepsilon_{k}^{(n)}\] \[n=1,2,3,\cdots,M\] The WLS-based solution to (2) is shown below [23]. \[\mathbf{X}_{k} =\tilde{\mathbf{X}}_{k}+\Delta\mathbf{X}_{k}\] \[=\tilde{\mathbf{X}}_{k}+\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\Delta\boldsymbol{\rho}_{c_{k}} \tag{4}\] where \(\Delta\mathbf{X}_{k}=\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\Delta\boldsymbol{\rho}_{c_{k}}\) is the displacement from the approximate user state \(\tilde{\mathbf{X}}_{k}\) to the true one. The approximate user state \(\tilde{\mathbf{X}}_{k}\) will be updated with the result of (4), and the computation in (4) will be iterated until the accuracy requirement is satisfied. Note that the pseudorange error \(\varepsilon_{k}^{(n)}\) is unknown in practice. The estimated user state \(\hat{\mathbf{X}}_{k}=\left[\hat{x}_{k},\hat{y}_{k},\hat{z}_{k},\delta\hat{t}_{u_{k}}\right]^{T}\) in the presence of pseudorange errors is: \[\hat{\mathbf{X}}_{k}=\tilde{\mathbf{X}}_{k}+\Delta\mathbf{X}_{k}+\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\boldsymbol{\varepsilon}_{k}\] where, \[\boldsymbol{\varepsilon}_{k}=\left[\varepsilon_{k}^{(1)},\varepsilon_{k}^{(2)},...,\varepsilon_{k}^{(M)}\right]^{T}\] The resulting state estimation error \(\epsilon_{\mathbf{X}_{k}}\) is: \[\epsilon_{\mathbf{X}_{k}} =\mathbf{X}_{k}-\hat{\mathbf{X}}_{k}=-\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\boldsymbol{\varepsilon}_{k}\] \[=\left[\epsilon_{x_{k}},\epsilon_{y_{k}},\epsilon_{z_{k}},\epsilon_{\delta t_{u_{k}}}\right]^{T} \tag{5}\] ## V Data Labeling In this section, we set forth how to compute the target values of pseudorange bias using location ground truth and the RTS smoother. ### _Estimating Pseudorange Errors_ According to (1), pseudorange errors can be calculated as: \[\varepsilon_{k}^{(n)}=\rho_{c_{k}}^{(n)}-r_{k}^{(n)}-\delta t_{u_{k}}\] However, we do not have the ground truth of the smartphone's clock bias \(\delta t_{u_{k}}\). 
Thus, we employ its WLS-based estimation \(\delta\hat{t}_{u_{k}}\) to calculate pseudorange errors: \[\hat{\varepsilon}_{k}^{(n)}=\rho_{c_{k}}^{(n)}-r_{k}^{(n)}-\delta\hat{t}_{u_{k}} \tag{6}\] Substituting (1) and (5) into (6) yields: \[\hat{\varepsilon}_{k}^{(n)} =\varepsilon_{k}^{(n)}+\epsilon_{\delta t_{u_{k}}}\] \[=\varepsilon_{k}^{(n)}-\mathbf{h}_{k}^{T}\boldsymbol{\varepsilon}_{k} \tag{7}\] where \(\mathbf{h}_{k}^{T}\) is the last row vector of \(\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\). Theoretically, we can calculate the closed form of the real pseudorange error \(\boldsymbol{\varepsilon}_{k}\) with \(M\) equations defined by (7) using the least squares (LS) algorithm. However, the coefficient matrix of the system of \(M\) equations is close to singular because only two elements are different between any two rows. Consequently, it might cause large errors in numerical computation. Therefore, we use gradient descent to estimate pseudorange errors instead, which will be detailed in Section VII. ### _Smoothing Pseudorange Errors_ The pseudorange error \(\varepsilon_{k}^{(n)}\) couples the biased error \(\mu_{k}^{(n)}\) and unbiased noise \(v_{k}^{(n)}\) together: \[\varepsilon_{k}^{(n)}=\mu_{k}^{(n)}+v_{k}^{(n)} \tag{8}\] where, \[\mu_{k}^{(n)} =\mathrm{E}\left(\varepsilon_{k}^{(n)}\right)\] \[v_{k}^{(n)} =\varepsilon_{k}^{(n)}-\mu_{k}^{(n)}\] Substituting (8) into (7) yields: \[\hat{\varepsilon}_{k}^{(n)}=\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}+v_{k}^{(n)}-\mathbf{h}_{k}^{T}\boldsymbol{v}_{k} \tag{9}\] where, \[\mathbf{M}_{k}=\left[\mu_{k}^{(1)},\mu_{k}^{(2)},\cdots,\mu_{k}^{(M)}\right]^{T} \tag{10}\] \[\boldsymbol{v}_{k}=\boldsymbol{\varepsilon}_{k}-\mathbf{M}_{k}\] Next, we try to extract the biased error items \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\) from the pseudorange error estimation \(\hat{\varepsilon}_{k}^{(n)}\) to filter out the smartphone's pseudorange noise that has been proven much larger than that of geodetic receivers [4]. **Theorem 1**: _The biased error items \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\) are given by \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}=\mathbf{g}_{k}^{(n)}\left(\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)\), where \(\tilde{\mathbf{x}}_{k}=\left[\tilde{x}_{k},\tilde{y}_{k},\tilde{z}_{k}\right]^{T}\) represents the mean of the WLS-based location estimation \(\hat{\mathbf{x}}_{k}=\left[\hat{x}_{k},\hat{y}_{k},\hat{z}_{k}\right]^{T}\), and \(\mathbf{g}_{k}^{(n)}\) is the unit geometry vector:_ \[\mathbf{g}_{k}^{(n)}=\left[a_{x_{k}}^{(n)},a_{y_{k}}^{(n)},a_{z_{k}}^{(n)}\right]^{T} \tag{11}\] Proof: Replace the approximate state \(\tilde{\mathbf{X}}_{k}\) with the real state \(\mathbf{X}_{k}\) in (2), i.e., set \(\tilde{\mathbf{X}}_{k}=\mathbf{X}_{k}\). 
Thus, the WLS-based state estimation \(\hat{\mathbf{X}}_{k}\) is the optimal solution to the following optimization problem: \[\min_{\hat{\mathbf{X}}_{k}}||\mathbf{W}_{k}\mathbf{G}_{k}\left(\hat{\mathbf{X}}_{k}-\mathbf{X}_{k}\right)-\mathbf{W}_{k}\Delta\boldsymbol{\rho}_{c_{k}}||^{2}\] The optimal \(\hat{\mathbf{X}}_{k}\) satisfies \[\mathbf{G}_{k}\left(\hat{\mathbf{X}}_{k}-\mathbf{X}_{k}\right)\approx\Delta\boldsymbol{\rho}_{c_{k}}=\boldsymbol{\rho}_{c_{k}}-\mathbf{r}_{k}-\delta t_{u_{k}}=\boldsymbol{\varepsilon}_{k} \tag{12}\] where, \[\mathbf{r}_{k}=\left[r_{k}^{(1)},r_{k}^{(2)},\cdots,r_{k}^{(M)}\right]^{T}\] \[\boldsymbol{\rho}_{c_{k}}=\left[\rho_{c_{k}}^{(1)},\rho_{c_{k}}^{(2)},\cdots,\rho_{c_{k}}^{(M)}\right]^{T}\] For the \(n^{th}\) satellite, the following equation can be obtained according to (3), (5) and (12): \[\mathbf{g}_{k}^{(n)}\left(\hat{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)+\mathbf{h}_{k}^{T}\boldsymbol{\varepsilon}_{k}=\varepsilon_{k}^{(n)} \tag{13}\] After calculating the expected values of both sides of (13) and rearranging the result, we have \[\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}=\mathbf{g}_{k}^{(n)}\left(\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)\] **Corollary 1.1**: _The biased error items \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\) are given by \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}=\bar{r}_{k}^{(n)}-r_{k}^{(n)}\), where \(\bar{r}_{k}^{(n)}=||\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}^{(n)}||_{2}\)._ Proof: Using Taylor expansion at \(\mathbf{x}_{k}\), we have \[\bar{r}_{k}^{(n)} =||\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}^{(n)}||_{2}\] \[\approx||\mathbf{x}_{k}-\mathbf{x}_{k}^{(n)}||_{2}+\mathbf{g}_{k}^{(n)}\left(\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)\] \[=r_{k}^{(n)}+\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\] This paper uses \(\bar{\varepsilon}_{k}^{(n)}=\bar{r}_{k}^{(n)}-r_{k}^{(n)}\) to label training data. We consider the RTS smoother-based positioning solution as \(\tilde{\mathbf{x}}_{k}\). The smoothing process of the RTS smoother for Android raw GNSS measurements is detailed in [24]. Fig. 3 displays the pseudorange errors of four visible satellites before and after smoothing, indicating that the smoothed pseudorange errors are much smoother than the original ones and can represent the biased components. Note that the RTS smoother needs about 120 epochs for convergence. Therefore, during the training process, the data of the first 120 observation epochs are discarded. ## VI Feature Selection To enable better representation and higher efficiency of the neural network, we meticulously select input features, including the commonly utilized ones in the community and some novel features tailored specifically for Android smartphones. ### _Carrier-to-noise Density Ratio \(C/N_{0}\)_ \(C/N_{0}\) has been proven to be related to pseudorange errors [8, 9, 10]. The \(C/N_{0}\) of an Android smartphone is generally smaller than 50 dB-Hz and can be normalized accordingly: \[F_{1_{k}}^{(n)}=\frac{\mathrm{Cn0DbHz}}{50}\] where "Cn0DbHz" is one Android raw GNSS measurement. ### _Elevation Angles of Satellites_ Satellite elevation angles are also closely correlated with pseudorange errors [8, 9, 10]. We estimate the elevation angle \(E_{k}^{(n)}\) of the \(n^{th}\) satellite using the WLS-based positioning solution \(\hat{\mathbf{x}}_{k}\) and the satellite location \(\mathbf{x}_{k}^{(n)}\)[23]. 
It is standardized to \([-1,1]\) as follows: \[F_{2_{k}}^{(n)}=\left[\sin E_{k}^{(n)},\cos E_{k}^{(n)}\right]\] ### _Satellite ID_ We try to predict pseudorange errors of different visible satellites indexed by "Svid". In this work, we only consider GPS L1 signals, and the corresponding satellite ID (PRN index) ranges from 1 to 32, so we normalize it by 32. \[F_{3_{k}}^{(n)}=\frac{\mathrm{Svid}}{32}\] where "Svid" is one Android raw GNSS measurement. ### _Position Estimations_ The surrounding environment of each location possesses unique terrain characteristics, such as the distribution of buildings, mountains, tunnels, and skyways, determining the multipath/NLOS context at the site. Thus, it is reasonable to include position estimation in the input features [10, 12]. We can use the WLS-based position estimations that are provided as knowns in the GSDC dataset. To distinguish between closely located sites, we utilize latitude and longitude with fine-grained units, i.e., degree (\(\varphi_{deg_{k}}\)), minute (\(\varphi_{min_{k}}\)), and second (\(\varphi_{sec_{k}}\)) for latitude, and degree (\(\lambda_{deg_{k}}\)), minute (\(\lambda_{min_{k}}\)), and second (\(\lambda_{sec_{k}}\)) for longitude, which can be standardized as follows: \[F_{4_{k}}=\left[\frac{\varphi_{deg_{k}}}{90},\frac{\varphi_{min_{k}}}{60},\frac{\varphi_{sec_{k}}}{60},\frac{\lambda_{deg_{k}}}{180},\frac{\lambda_{min_{k}}}{60},\frac{\lambda_{sec_{k}}}{60}\right]\] Fig. 3: Pseudorange errors before and after smoothing (using four satellites from Trace “2021-04-28-US-MTV-1” by Pixel 4 in GSDC dataset as examples). ### _Unit Geometry Vectors_ The direction of signal propagation can be depicted by the unit geometry vector from the satellite to the smartphone, which is periodic at each site [12]. We visualize the unit geometry vectors in Fig. 4(c). Fig. 4(b) shows that the blue, yellow, and green traces were captured along similar routes and directions, but Fig. 4(a) indicates that only the yellow and green traces share similar patterns of pseudorange bias. This is because the unit geometry vectors of the yellow and green traces are close to each other but far away from the blue one, as displayed in Fig. 4(c). Hence, pseudorange bias is tightly correlated with unit geometry vectors. We convert the unit geometry vector \(\mathbf{g}_{k}^{(n)}\) from the ECEF coordinate system to the north-east-down (NED) coordinate system to couple it to the location information (WLS-based). Each item in the unit vector falls within \([-1,1]\) and can be directly used as input features: \[F_{5_{k}}^{(n)}=\mathbf{g}_{\mathrm{NED}_{k}}^{(n)}{}^{T}\] ### _Heading Estimations_ Fig. 4(b) shows that the blue and orange traces were collected along similar routes, and Fig. 4(c) indicates that their unit geometry vectors are also close to each other. Their pseudorange biases, however, are quite different, as shown in Fig. 4(a). This might be caused by the opposite moving directions along which the two data files are collected. According to the setup of Android smartphones [1], the moving direction determines the heading of smartphones' antennas, which may affect the GNSS signal reception. 
Therefore, we include smartphone headings \(\theta_{k}\) into the input features, which can be approximately represented by the unit vector pointing from the current smartphone's location to the next one: \[\theta_{k}=\frac{\hat{\mathbf{x}}_{k+1}-\hat{\mathbf{x}}_{k}}{||\hat{\mathbf{x}}_{k+1}-\hat{\mathbf{x}}_{k}||_{2}}\] Next, we convert the smartphone heading \(\theta_{k}\) from the ECEF coordinate system to the NED coordinate system constructed at the current location \(\hat{\mathbf{x}}_{k}\) and get \(\theta_{\mathrm{NED}_{k}}\). Each item in the unit vector \(\theta_{\mathrm{NED}_{k}}\) falls within \([-1,1]\). Thus, it can be directly included as an input feature. \[F_{6_{k}}=\theta_{\mathrm{NED}_{k}}{}^{T}\] To sum up, we choose the following input features: \[\mathbf{F}_{k}^{(n)}=\left[F_{1_{k}}^{(n)},F_{2_{k}}^{(n)},F_{3_{k}}^{(n)},F_{4_{k}},F_{5_{k}}^{(n)},F_{6_{k}}\right]^{T}\] where \(F_{1_{k}}^{(n)}\), \(F_{2_{k}}^{(n)}\), \(F_{3_{k}}^{(n)}\), and \(F_{5_{k}}^{(n)}\) vary across different satellites while \(F_{4_{k}}\) and \(F_{6_{k}}\) are common features that are shared among all visible satellites at a given observation epoch. ## VII PrNet The MLP has proved itself in learning high-dimensional representations for regression or classification as a single network [25] or a sublayer module [26]. Fig. 4: (a) Pseudorange bias of satellite PRN 2 along the traces collected by Pixel 4 in GSDC dataset. Different traces are plotted in a single figure to facilitate easy comparison while the time axis does not correspond to the exact moments when the data were collected; (b) Traces and directions along which each data file is collected; (c) Trajectories of terminal points of the unit geometry vectors from satellite PRN 2 to Pixel 4 on the unit sphere centered at Pixel 4. Fig. 5: Diagram of PrNet. PrNet comprises a satellite-wise MLP and a visible satellite mask; \(B\) represents the batch size; \(F\) denotes the dimension of input features; \(H\) is the number of hidden neurons. The proposed PrNet is based on a deep MLP that learns the mapping from six satellite-receiver-context-related features to pseudorange bias, i.e., \(\mu_{k}^{(n)}=f\left(\mathbf{F}_{k}^{(n)}\right)\). The diagram of PrNet is shown in Fig. 5. Our approach involves passing a batch of inputs through the neural network to compute the corresponding pseudorange bias. Each sample in the batch represents all visible satellites at a given time step, and all the satellites are processed by the same MLP, called the satellite-wise MLP. To address the challenge of a varying number of satellites at different time steps, we compute the pseudorange bias of all 32 satellites in the GPS constellation each time, where the inputs of non-visible satellites are set to zero. After the output layer, we add a "visible satellite mask" to filter out the meaningless output of non-visible satellites and retain the information of only visible satellites. This approach allows us to execute parallel computation on inputs of varying quantities. PrNet is designed to learn the representation of the pseudorange bias \(\mu_{k}^{(n)}\). However, the training data are labeled by \(\bar{\varepsilon}_{k}^{(n)}=\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\). 
To account for this, we add the estimation residual of smartphone clock bias \(-\mathbf{h}_{k}^{T}\hat{\mathbf{M}}_{k}\) to the loss function to align the output of PrNet with the target value \(\bar{\varepsilon}_{k}^{(n)}=\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\): \[\mathcal{L}=\sum_{n=1}^{M}||f\left(\mathbf{F}_{k}^{(n)}\right)-\mathbf{h}_{k}^{T}\hat{\mathbf{M}}_{k}-\bar{\varepsilon}_{k}^{(n)}||^{2}\] where, \[\hat{\mathbf{M}}_{k}=\left[\hat{\mu}_{k}^{(1)},\hat{\mu}_{k}^{(2)},\cdots,\hat{\mu}_{k}^{(M)}\right]^{T}\] \[\hat{\mu}_{k}^{(n)}=f\left(\mathbf{F}_{k}^{(n)}\right)\] ## VIII Implementation Details ### _Dataset_ We conduct extensive evaluations on the GSDC 2021 dataset [1]1. Most of the GSDC 2021 dataset was collected in rural areas, and only a few traces were located in urban areas. As illustrated in Fig. 6, we use the dataset to design four scenarios that encompass rural fingerprinting, rural cross-trace, urban fingerprinting, and urban cross-trace localization. Scenario I and Scenario II share the same training data, totaling 12 traces. Scenario III and Scenario IV share the same training data, totaling 2 traces. The training data was all collected using Pixel 4. The testing datasets for the four scenarios consist of 2, 1, 1, and 1 trace, respectively. Footnote 1: We opted not to utilize the GSDC 2022 dataset due to the absence of ground truth altitude information in most of its traces, which is essential for computing the target values of pseudorange bias. ### _Baseline Methods_ #### V-B1 Model-based methods We implement the vanilla WLS-based, two filtering-based (MHE and EKF), and one smoothing-based (RTS smoother) localization engines as baseline methods. More details about them can be found in [24]. #### V-B2 PBC-RF Point-based Correction Random Forest (PBC-RF) represents a state-of-the-art _machine learning_ approach for predicting pseudorange errors in _specialized GNSS receivers_, as detailed in [10]. #### V-B3 FCNN-LSTM Fully Connected Neural Network with Long Short-term Memory (FCNN-LSTM) stands as a state-of-the-art _deep learning_ model designed for the prediction of pseudorange errors in _specialized GNSS receivers_, as detailed in [9]. PBC-RF and FCNN-LSTM have not been made available as open-source software at this time. We implement them as baseline methods according to [9, 10]. Our implementations are available at [https://github.com/Aaron-WengXu](https://github.com/Aaron-WengXu). #### V-B4 Set Transformer To the best of our knowledge, it is the only open-source state-of-the-art work that applies data-driven methods to Android raw GNSS measurements [15]. We trained the neural network as one baseline method. Note that its inference performance is tightly related to the initial locations of smartphones, which are measured by the "magnitude of initialization ranges" \(\mu\) [15]. We determine the value of \(\mu\) by calculating the \(95^{th}\) percentile of the horizontal localization errors obtained from the model-based approaches. The set transformer we trained is available at [https://github.com/ailocar/deep_gnsss](https://github.com/ailocar/deep_gnsss). ### _PrNet Implementation_ The proposed neural network is implemented using PyTorch and d2l libraries [27]. After hyper-parameter tuning, we set the number of hidden neurons \(H\) to 40 and the number of hidden layers \(L\) to 20. We use the Adam optimizer with a learning rate decaying from \(1\times 10^{-2}\) to \(1\times 10^{-7}\) for optimizing its weights. 
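For concreteness, a minimal sketch of the satellite-wise MLP, the visible-satellite mask, and the loss is given below. It reflects our reading of the paper rather than the authors' released code; the input width of 16 is simply our tally of the six feature groups (1 + 2 + 1 + 6 + 3 + 3), and all names are illustrative.

```python
import torch
from torch import nn

# Illustrative sketch (not the released PrNet code): the same MLP is applied
# to all 32 GPS satellite slots; outputs of non-visible slots are zeroed.
class SatelliteWiseMLP(nn.Module):
    def __init__(self, in_features=16, hidden=40, depth=20):
        super().__init__()
        layers = [nn.Linear(in_features, hidden), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers.append(nn.Linear(hidden, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, feats, visible_mask):
        # feats: (B, 32, F) per-slot features, zero-padded for unseen satellites
        # visible_mask: (B, 32), 1 for visible satellites and 0 otherwise
        bias = self.mlp(feats).squeeze(-1)   # (B, 32) predicted bias per slot
        return bias * visible_mask           # the "visible satellite mask" step

def prnet_loss(model, feats, visible_mask, h, eps_bar):
    # h: (B, 32) last row of (W_k G_k)^+ W_k, zero-padded to the 32 slots
    # eps_bar: (B, 32) smoothed pseudorange-error labels, zero-padded likewise
    mu_hat = model(feats, visible_mask)                 # f(F) for each slot
    clock_res = (h * mu_hat).sum(dim=1, keepdim=True)   # h_k^T M_hat_k
    resid = (mu_hat - clock_res - eps_bar) * visible_mask
    return resid.pow(2).sum()
```

Because every slot runs through the same network, a whole observation epoch is processed in one batched forward pass; computing throwaway outputs for non-visible slots is the price paid for this parallelism.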
No regularization techniques are employed in the training process. In Scenario I and Scenario II (rural areas), the optimization of PrNet can converge within 0.5k iterations. In Scenario III and Scenario IV (urban areas), it takes about 3-5k iterations to train the neural network to convergence. We utilize WLS, MHE, EKF, and RTS smoother to process the pseudoranges corrected by PrNet for localization. ## IX Experiments ### _Horizontal Localization_ The primary focus of smartphone-based localization is on horizontal positioning errors. Therefore, we quantitatively compare our proposed frameworks against the baseline methods across four scenarios by computing their horizontal errors with Vincenty's formulae. Fig. 7 displays the empirical cumulative distribution function (ECDF) of horizontal errors. The corresponding horizontal scores are summarized in Table I. The proposed PrNet+RTS smoother thoroughly outperforms all the baseline methods that employ classical model-based localization engines or sophisticated data-driven models. Additionally, by comparing the vanilla model-based approaches against their PrNet-enhanced versions, PrNet can reduce the horizontal scores by up to 74% (PrNet+RTS smoother in Scenario I). The set transformer is tied to the "magnitude of initialization ranges" \(\mu\) and tends to yield horizontal scores around this particular value, which explains its poor performance in urban areas where \(\mu\) is initialized with large magnitudes. Because PBC-RF and FCNN-LSTM were originally intended to process data captured from geodetic GNSS receivers, their ability to correct pseudoranges for smartphones is limited. Nevertheless, they outperform PrNet+WLS and PrNet+MHE in Scenarios II and III. This is because we equip them with our best localization engine (RTS smoother), which already surpasses PrNet+WLS and PrNet+MHE in these scenarios. ### _Computational Load_ Deep learning promises supreme performance at the cost of substantial computational load. We summarize the computational complexity of PrNet and the other two deep learning models, i.e., set transformer and FCNN-LSTM, in Table II. In this round of competition, the set transformer outperforms the other two deep learning models even though its computational complexity is \(\mathcal{O}(M^{2})\). Such efficiency is credited to the transformer's parallel computation [27] and fewer sequential operations [15] than the other two methods. Fig. 6: (a) Scenario I: rural fingerprinting. (b) Scenario II: rural cross-trace. (c) Scenario III: urban fingerprinting. (d) Scenario IV: urban cross-trace. Fig. 7: ECDF of horizontal errors in 4 scenarios. (a) Scenario I. (b) Scenario II. (c) Scenario III. (d) Scenario IV. ### _Ablation Studies_ In Table III, we conduct an extensive ablation study to investigate the reliance of PrNet on our design choices, including two novel input features, loss function design, and label computation. We also assess the impact of position estimations, an input feature recently introduced in [10], on the resulting horizontal positioning errors. Furthermore, we analyze the scalability of PrNet by comparing models of different sizes. We present the results of PrNet+RTS smoother averaged over the four scenarios. Our best model is considered a baseline (Row 9). The results verify that all our design choices are reasonable. 
Specifically, through a stepwise removal of individual features (Rows 1-3), we observe that the two novel input features (unit geometry vectors and heading estimations) significantly impact the localization performance. In contrast, the position estimation has a trivial impact. Rows 5-6 show how the localization performance degrades if we do not include the estimation residuals of smartphone clock bias or the smoothed pseudorange errors in the loss function. Row 7 shows that PrNet can be scaled down to a smaller size with negligible performance loss, suggesting its potential for deployment on the smartphone or edge sides. Row 8 indicates that increasing the size of PrNet cannot improve its performance; in fact, it diminishes performance due to overfitting, which arises from an excessive number of learnable weights in PrNet. ### _Cross-phone Evaluation_ To investigate the generalization ability of PrNet on various mass-market smartphones, we perform cross-phone evaluations using PrNet+RTS smoother and summarize our results in Fig. 8 (other PrNet-based methods share similar results). The training data in the four scenarios are collected by Pixel 4. During the inference process, besides the data collected by Pixel 4, we also use data from other smartphones for evaluation. Note that Google used various combinations of smartphones to collect data along different traces2. Footnote 2: In Scenario I, the data collected by Mi 8 is abnormal, so we exclude it from our analysis. The results suggest that PrNet can be robustly adopted across various smartphones in rural areas (Scenarios I and II), but its performance degrades when utilized on Samsung S20 Ultra in urban settings (Scenarios III and IV). Fig. 8 indicates that, in the urban areas, Samsung S20 Ultra obtains better localization performance using the standard RTS smoother compared to the performance of Pixel 4 enhanced by PrNet. Therefore, the performance gap between these two phones in urban environments leads to large data heterogeneity, which could be a potential factor behind the adaptation failures. ## X Conclusion The proposed PrNet-based framework can regress the biased errors in pseudoranges collected by Android smartphones from six satellite-receiver-context-related inputs and eliminate them for better localization performance. We introduce two novel input features and meticulously calculate the target values of pseudorange bias to guide a satellite-wise MLP in learning a better representation of pseudorange bias than prior work. The comprehensive evaluation demonstrates its exceptional performance compared to the state-of-the-art approaches. Our future work includes: 1) more advanced deep learning models can be explored to learn the satellite and temporal correlation; 2) the pseudorange-correction neural network can be trained using location ground truth in an end-to-end way; 3) more open datasets will be collected using other modalities of sensors as ground truth; 4) data augmentation strategies can be leveraged to enhance the generalization ability across heterogeneous data. Fig. 8: Cross-phone Evaluation.
2309.09142
Performance of Graph Neural Networks for Point Cloud Applications
Graph Neural Networks (GNNs) have gained significant momentum recently due to their capability to learn on unstructured graph data. Dynamic GNNs (DGNNs) are the current state-of-the-art for point cloud applications; such applications (viz. autonomous driving) require real-time processing at the edge with tight latency and memory constraints. Conducting performance analysis on such DGNNs, thus, becomes a crucial task to evaluate network suitability. This paper presents a profiling analysis of EdgeConv-based DGNNs applied to point cloud inputs. We assess their inference performance in terms of end-to-end latency and memory consumption on state-of-the-art CPU and GPU platforms. The EdgeConv layer has two stages: (1) dynamic graph generation using k-Nearest Neighbors (kNN) and (2) node feature updation. The addition of dynamic graph generation via kNN in each (EdgeConv) layer enhances network performance compared to networks that work with the same static graph in each layer; such performance enhancement comes, however, at the added computational cost associated with the dynamic graph generation stage (via kNN algorithm). Understanding its costs is essential for identifying the performance bottleneck and exploring potential avenues for hardware acceleration. To this end, this paper aims to shed light on the performance characteristics of EdgeConv-based DGNNs for point cloud inputs. Our performance analysis on a state-of-the-art EdgeConv network for classification shows that the dynamic graph construction via kNN takes up upwards of 95% of network latency on the GPU and almost 90% on the CPU. Moreover, we propose a quasi-Dynamic Graph Neural Network (qDGNN) that halts dynamic graph updates after a specific depth within the network to significantly reduce the latency on both CPU and GPU whilst matching the original network's inference accuracy.
Dhruv Parikh, Bingyi Zhang, Rajgopal Kannan, Viktor Prasanna, Carl Busart
2023-09-17T03:05:13Z
http://arxiv.org/abs/2309.09142v1
# Performance of Graph Neural Networks for Point Cloud Applications ###### Abstract Graph Neural Networks (GNNs) have gained significant momentum recently due to their capability to learn on unstructured graph data. Dynamic GNNs (DGNNs) are the current state-of-the-art for point cloud applications; such applications (viz. autonomous driving) require real-time processing at the edge with tight latency and memory constraints. Conducting performance analysis on such DGNNs, thus, becomes a crucial task to evaluate network suitability. This paper presents a profiling analysis of EdgeConv-based DGNNs applied to point cloud inputs. We assess their inference performance in terms of end-to-end latency and memory consumption on state-of-the-art CPU and GPU platforms. The EdgeConv layer has two stages: (1) dynamic graph generation using \(k\)-Nearest Neighbors (\(k\)NN) and (2) node feature updation. The addition of dynamic graph generation via \(k\)NN in each (EdgeConv) layer enhances network performance compared to networks that work with the same static graph in each layer; such performance enhancement comes, however, at the added computational cost associated with the dynamic graph generation stage (via \(k\)NN algorithm). Understanding its costs is essential for identifying the performance bottleneck and exploring potential avenues for hardware acceleration. To this end, this paper aims to shed light on the performance characteristics of EdgeConv-based DGNNs for point cloud inputs. Our performance analysis on a state-of-the-art EdgeConv network for classification shows that the dynamic graph construction via \(k\)NN takes up upwards of \(95\%\) of network latency on the GPU and almost \(90\%\) on the CPU. Moreover, we propose a quasi-Dynamic Graph Neural Network (qDGNN) that halts dynamic graph updates after a specific depth within the network to significantly reduce the latency on both CPU and GPU whilst matching the original network's inference accuracy. Graph neural network, point cloud, \(k\)-nearest neighbors, dynamic graph construction, performance profiling ## I Introduction Graphs are effective data structures for representing intricate relationships (edges) among entities (nodes) with a high degree of interpretability. This has led to the widespread adoption of graph theory in various domains. Notably, Graph Neural Networks (GNNs) [1] have demonstrated remarkable success in addressing both conventional tasks like computer vision [2, 3] and natural language processing [4, 5], as well as non-traditional tasks such as protein interface prediction [6] and combinatorial optimization [7]. This versatility has made GNNs an integral part of deep learning methodologies, as evidenced by their numerous applications across diverse problem domains [1, 8]. A point cloud is an unstructured collection of raw data points, where each point represents a specific location associated with an object or shape, typically defined within a coordinate system (such as Cartesian or spherical). The ability of a point cloud to capture 3D environments makes it crucial for numerous scene understanding tasks [9, 10] and applications across various domains. For instance, point clouds play a vital role in autonomous driving [11, 12], virtual reality [13], augmented reality [14], the construction industry [15], robotics, computer graphics, and many more. 
The availability of affordable and accessible point cloud acquisition systems, such as LIDAR scanners and RGBD cameras [16], has further emphasized the importance of efficient learning and processing techniques for point clouds. Prior to the advent of deep learning-based methods, learning on point clouds involved constructing hand-crafted features [17]. With the introduction of deep learning, the techniques applied to point clouds can be broadly classified into two categories. These two categories are differentiated based on the pre-processing steps performed on the point clouds prior to network input [18]: (1) _Structured-grid based networks:_ These networks preprocess point clouds into a structured input format for the deep neural network. Structuring is typically achieved by generating representative views from the point clouds [19, 20, 21] or by voxelizing the point clouds into 3D grids [22]. (2) _Raw point cloud based networks:_ In this category, networks operate directly on the raw point clouds with minimal to no preprocessing. One approach involves passing multi-layer perceptrons (MLPs) through individual point features to learn local spatial relationships [23, 24, 25]. Another approach involves systematically constructing graphs using point features and applying Graph Neural Networks (GNNs) to learn labels [26, 27, 28]. Structured-grid based networks suffer from drawbacks associated with point cloud transformation. Transforming point clouds into voxels or image views can be computationally expensive and result in bulky data that is difficult to handle. Moreover, these transformations introduce various errors, such as quantization errors, which can impact the inherent properties of point clouds [23]. To address these issues, raw point cloud based networks have emerged as a solution. Among these networks, EdgeConv based networks [26, 29, 30] have become state-of-the-art in various point cloud-related tasks. They are an advancement over the basic PointNet [23] style architectures that directly operate on point features. The EdgeConv layer excels in learning local features and relationships within point clouds while maintaining permutation invariance [31]. The dynamic graph construction in the learned feature space allows EdgeConv to capture semantic similarities between points, regardless of their geometric separation. By leveraging these strengths, EdgeConv-based networks have shown remarkable performance in tasks involving point clouds, overcoming the limitations associated with structured-grid based approaches. Point cloud processing finds significant application in autonomous vehicles, which serves as a prominent use case for edge computing. These applications operate under stringent latency and memory constraints [32]. In intelligent visual systems within autonomous vehicles, point cloud processing typically constitutes just one stage within a multi-stage compute pipeline. As a result, the imposed constraints on latency and memory become even more critical and imperative to address [33]. Analyzing and profiling the performance of networks that process point clouds is a crucial task, particularly considering the prominence of EdgeConv-based networks in this domain. In this work, we make the following contributions: * **Latency analysis:** We perform an in-depth analysis of end-to-end layer-wise latency for EdgeConv-based networks used in classification and segmentation tasks on both state-of-the-art CPU and GPU platforms. 
* **Breakdown analysis:** Given the two-stage operation of the EdgeConv layer involving (1) dynamic graph construction and (2) GNN-based node feature updation, we perform breakdown analysis between \(k\)NN graph construction and GNN-based feature updating at each layer and across different layers in EdgeConv networks. * **Effects of varying \(k\):** EdgeConv networks dynamically construct a graph from a point cloud using the \(k\)-nearest neighbors (\(k\)NN) algorithm. We study the effects of varying the value of \(k\) from its optimal value, which is determined through a validation set. This analysis examines the impact of varying \(k\) on the network's inference accuracy, inference latency, and memory consumption. * **Quasi-Dynamic GNNs:** Dynamic graph construction after each layer in a Dynamic GNN (DGNN) improves its performance compared to static GNN counterparts. We investigate the extent to which performance improvement is affected when employing a quasi-Dynamic strategy, where the graph is made static towards the end of the network. * **Memory consumption:** We analyze the memory consumption of EdgeConv networks on both CPU and GPU platforms. * **Bottleneck analysis:** By identifying performance bottlenecks in the aforementioned networks, we suggest potential research directions that could help mitigate such bottlenecks and improve overall performance. * **Hardware acceleration opportunities:** We discuss the potential for hardware acceleration of EdgeConv-based networks on FPGA devices, which has garnered significant research interest due to its promising benefits. Our aim is to deepen our understanding of EdgeConv-based networks' performance characteristics when processing point clouds and provide insights into optimization opportunities and hardware acceleration possibilities. Section II briefly introduces GNNs and their application to point clouds. Section III describes the networks and datasets used for the experiments, along with the computing platforms on which the experiments were performed. Results are analyzed in section IV. Discussion and conclusions follow in sections V and VI, respectively. ## II Background ### _Graph Neural Networks (GNNs)_ Graphs \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) are defined via a set of vertices (nodes) \(\mathcal{V}\) and a set of edges (links) connecting these vertices, \(\mathcal{E}=\{(j,i):j,i\in\mathcal{V}\text{ and a link exists from }j\to i\}\). Edges typically have directional context, symbolized via the ordered pairs; an undirected edge between nodes \(i\) and \(j\) is often represented via two directed edges: \((i,j)\) and \((j,i)\). \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the adjacency matrix for a graph with \(n\) nodes representing the above edges and their weights (for a weighted graph). A graph can have features associated with both its nodes and edges - additionally, a graph may even have global graph level attributes. Typical learning tasks on graphs occur at either the node level (viz. node classification) [34], edge level (viz. link prediction) [35] or at the graph level [36]. Neural message passing is a mechanism in GNNs whereby information is passed (shared) across nodes and edges within the graph to update node embeddings via a set of learnable parameters [37]. GNNs employing this message passing framework are called Message Passing Neural Networks (MPNNs) [37]. 
For a node \(i\) with a node feature vector \(\mathbf{x_{i}}\) and a neighbor set \(\mathcal{N}(i)\), the general formulation for message passing can be described by Equation 1[37] (illustrated in Figure 1), \[\mathbf{x^{{}^{\prime}}_{i}}=\Psi_{\Theta}(\mathbf{x_{i}},\sum_{j\in\mathcal{N}(i)} \mathcal{M}_{\Phi}(\mathbf{x_{i}},\mathbf{x_{j}},\mathbf{e_{ji}})). \tag{1}\] Fig. 1: The message passing mechanism employed in GNNs. The above equation can be decomposed into the following stages: * **Message generation.** A message is constructed between a node \(i\) and its neighbor \(j\) using node level features \(\mathbf{x_{i}}\) and \(\mathbf{x_{j}}\) and edge level feature \(\mathbf{e_{ji}}\) as \(\mathcal{M}_{\Phi}(\mathbf{x_{i}},\mathbf{x_{j}},\mathbf{e_{ji}})\). * **Aggregation.** Such constructed messages from all of the node's neighbors are aggregated via the aggregation function \(\sum_{j\in\mathcal{N}(i)}\); the aggregation function \(\sum\) is typically permutation invariant for point cloud applications. * **Updation.** Finally, the aggregated message along with \(\mathbf{x_{i}}\) is used to learn the feature update for node \(i\), \(\mathbf{x_{i}}^{\prime}\) via the function \(\Psi_{\Theta}\). For a given layer, this message-aggregate-update paradigm is applied to all the nodes of the graph with parameters \((\Theta,\Phi)\) shared across all the nodes within a layer. The functions \(\mathcal{M}_{\Phi}\) and \(\Psi_{\Theta}\) are typically multi-layer perceptrons (\(\mathcal{MLP}\)s). Such GNN layers, employing the message passing framework to update the node features, as above, are central to GNNs such as Graph Convolutional Network (GCN) [1], GraphSAGE [38], Graph Isomorphism Network (GIN) [39], Graph Attention Network (GAT) [40], Principal Neighborhood Aggregation (PNA) [41], etc. EdgeConv layer [26] also utilizes this message passing paradigm - however, unlike the above networks, each EdgeConv layer first performs dynamic graph construction via a \(k\)-nearest neighbor (\(k\)NN) algorithm to construct a \(k\)NN graph; a \(k\)NN graph is a directed graph in which each node is connected to its \(k\) nearest neighboring nodes via edges (Figure 2). Then, the EdgeConv layer performs message passing within this \(k\)NN graph to update the node embeddings. ### _GNNs for Point Clouds_ The authors in PointNet [23] introduced the notion of ingesting raw, unordered point clouds to learn directly on the associated point level features. This network, at its core, passes each point level feature vector through a shared \(\mathcal{MLP}\) to update the point features. In the message passing framework, \[{\mathbf{x_{i}}^{\prime}}=\Psi_{\Theta}(\mathbf{x_{i}}) \tag{2}\] Essentially, no messages are passed and no aggregations occur - it directly updates all the node embeddings. The final layer is a global-level max pool layer which generates a global feature vector for graph level tasks. Due to the shared \(\mathcal{MLP}\) and global max pool layer, such a network is fully invariant to permutations in the ordering of the \(n\) input points. This permutation invariance is a key characteristic of GNNs despite PointNet not traditionally being a graph neural network. A class of PointNet derivative networks [42, 43, 44, 45] uses related approaches to learn directly from the point features. The EdgeConv layer uses both message passing and dynamic graph construction, as shown in Figure 2, to reach state-of-the-art results for point cloud applications. 
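Since EdgeConv specializes this same message-aggregate-update pattern, a compact code rendering of Equation 1 may be useful at this point. The following is a minimal sketch under our own assumptions (the class name, MLP widths, and the choice of sum aggregation are ours), written against PyTorch Geometric's MessagePassing base class:

```python
import torch
from torch import nn
from torch_geometric.nn import MessagePassing

class MPNNLayer(MessagePassing):
    """Illustrative rendering of Eq. (1): message, aggregate, update."""
    def __init__(self, in_dim, edge_dim, out_dim):
        super().__init__(aggr="add")  # permutation-invariant aggregation
        # Plays the role of M_Phi: builds a message from x_i, x_j and e_ji
        self.msg_mlp = nn.Sequential(nn.Linear(2 * in_dim + edge_dim, out_dim),
                                     nn.ReLU())
        # Plays the role of Psi_Theta: fuses x_i with the aggregated messages
        self.upd_mlp = nn.Sequential(nn.Linear(in_dim + out_dim, out_dim),
                                     nn.ReLU())

    def forward(self, x, edge_index, edge_attr):
        return self.propagate(edge_index, x=x, edge_attr=edge_attr)

    def message(self, x_i, x_j, edge_attr):
        return self.msg_mlp(torch.cat([x_i, x_j, edge_attr], dim=-1))

    def update(self, aggr_out, x):
        return self.upd_mlp(torch.cat([x, aggr_out], dim=-1))
```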
The message passing paradigm in EdgeConv can be described by (3) below, \[{\mathbf{x_{i}}^{\prime}}=\sum_{j\in\mathcal{N}(i)}\mathcal{M}_{\Phi}(\mathbf{x_{i}}, \mathbf{x_{j}}-\mathbf{x_{i}}) \tag{3}\] \[\sum\rightarrow\max(.) \tag{4}\] \[\mathcal{M}_{\Phi}(\mathbf{x},\mathbf{y})\rightarrow\mathcal{MLP}_{\Phi}(\mathbf{x}\,||\, \mathbf{y}) \tag{5}\] where the \(\max(.)\) in (4) is channel-wise along the nodes and the \(||\) in (5) is a concatenation operation; \(\mathcal{MLP}_{\Phi}\) is a multi-layer perceptron (parameterized by \(\Phi\)) with a ReLU non-linearity. The inclusion of the edge feature \(\mathbf{x_{j}}-\mathbf{x_{i}}\) adds local context to the updated node embedding, and the node-level feature \(\mathbf{x_{i}}\) helps the network retain global information. A single EdgeConv layer takes in an input tensor \(\mathbf{X}\in\mathbb{R}^{n\times c}\) for a point cloud with \(n\) points, each point represented by a vector node embedding of length \(c\). The output of \(k\)NN graph construction on \(\mathbf{X}\) is a tensor \(\mathbf{X}^{\prime}\in\mathbb{R}^{n\times k\times c}\) representing, for each node, its \(k\) neighboring nodes and their node embeddings (feature vectors). Before passing \(\mathbf{X}^{\prime}\) through an \(\mathcal{MLP}\) layer, each node's feature vector is subtracted from the feature vectors of its neighbors (\(\mathbf{x_{j}}-\mathbf{x_{i}}\)) and concatenated to the resultant (\(\mathbf{x_{i}}\,||\,\mathbf{x_{j}}-\mathbf{x_{i}}\)). The tensor thus obtained, \(\mathbf{X_{in}^{\prime}}\in\mathbb{R}^{n\times k\times 2c}\), is passed through \(\mathcal{MLP}_{\Phi}\{2c,\,a_{1},\,a_{2},\,a_{3},\,...,\,a_{m}\}\), which is an \(m\)-layer \(\mathcal{MLP}\) with ReLU activations, to generate \(\mathbf{X_{out}^{\prime}}\in\mathbb{R}^{n\times k\times a_{m}}\), which is finally aggregated (via \(\max\)) to generate \(\mathbf{Y}\in\mathbb{R}^{n\times a_{m}}\), the final output from the EdgeConv layer. For example, \(\mathcal{MLP}_{\Phi}\{1024,512,256\}\) is a \(2\)-layer perceptron; the number of input channels to the first layer is \(1024\), the number of output channels of the first layer is \(512\), and the number of output channels of the final (second) layer is \(256\). ## III Experimental Setting ### _Platform Details_ The performance analysis for EdgeConv is conducted on state-of-the-art GPU and CPU platforms (see Table I). The GPU used, for both training and inference, is NVIDIA RTX A6000, which has \(10,752\) NVIDIA Ampere architecture CUDA cores. The CPU utilized for inference is AMD Ryzen Threadripper 3990x with 64 CPU cores. Additionally, we use the PyTorch [46] and PyTorch Geometric [47] libraries to facilitate training and inference on the above platforms - no additional kernel-level optimization is performed. ### _Networks_ The base network that we utilize in the experiments is shown in Figure 3. This network follows the network setting in [26] that achieves state-of-the-art results on the ModelNet40 [48] graph-level classification dataset. The network comprises four (dynamic) EdgeConv layers. Each layer constructs a \(k\)-nearest neighbor graph based on the current (latest) node embeddings before performing message passing on this graph via a single-layered \(\mathcal{MLP}\) to update the embeddings. 
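As a point of reference, a sketch of this base network in PyTorch Geometric might look as follows. This is our reconstruction rather than the authors' code: the per-layer channel widths (64, 64, 64, 128, whose concatenation gives the \(n\times 320\) tensor described in Fig. 3) are assumptions, and the BatchNorm and dropout details mentioned below are omitted for brevity. DynamicEdgeConv rebuilds the \(k\)NN graph from the latest embeddings on every forward pass.

```python
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import DynamicEdgeConv, global_max_pool

class DGCNNClassifier(torch.nn.Module):
    """Hedged sketch of the four-DEC classification network of Fig. 3."""
    def __init__(self, k=20, num_classes=40):
        super().__init__()
        # Each DEC layer sees [x_i || x_j - x_i], hence 2*c input channels.
        self.dec1 = DynamicEdgeConv(Sequential(Linear(2 * 3, 64), ReLU()), k)
        self.dec2 = DynamicEdgeConv(Sequential(Linear(2 * 64, 64), ReLU()), k)
        self.dec3 = DynamicEdgeConv(Sequential(Linear(2 * 64, 64), ReLU()), k)
        self.dec4 = DynamicEdgeConv(Sequential(Linear(2 * 64, 128), ReLU()), k)
        self.lin = Linear(64 + 64 + 64 + 128, 1024)  # n x 320 -> n x 1024
        self.head = Sequential(Linear(1024, 512), ReLU(),
                               Linear(512, 256), ReLU(),
                               Linear(256, num_classes))

    def forward(self, pos, batch):
        # pos: (n, 3) point coordinates; batch: (n,) graph index per point
        x1 = self.dec1(pos, batch)   # kNN graph built on raw coordinates
        x2 = self.dec2(x1, batch)    # ... then rebuilt in each feature space
        x3 = self.dec3(x2, batch)
        x4 = self.dec4(x3, batch)
        x = self.lin(torch.cat([x1, x2, x3, x4], dim=1))
        x = global_max_pool(x, batch)            # one 1024-vector per cloud
        return self.head(x).log_softmax(dim=-1)
```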
\begin{table} \begin{tabular}{c|c c} \hline **Platforms** & CPU & GPU \\ \hline \hline Platform & AMD Threadripper 3990x & Nvidia RTX A6000 \\ Platform Technology & TSMC 7 nm & TSMC 7 nm \\ Frequency & 2.90 GHz & 1.8 GHz \\ Peak Performance & 3.7 TFLOPS & 38.7 TFLOPS \\ On-chip Memory & 256 MB L3 cache & 6 MB L2 cache \\ Memory Bandwidth & 107 GB/s & 768 GB/s \\ \hline \end{tabular} \end{table} TABLE I: Specifications of platforms Each \(\mathcal{MLP}\) in the EdgeConv layer utilizes a ReLU activation and includes a BatchNorm layer. The final \(\mathcal{MLP}\) uses a dropout layer instead of BatchNorm. We train the above network for a range of \(k\) values (\(5\), \(10\), \(15\), \(20\), \(25\), \(30\)) for the EdgeConv layer. Additionally, we also train two quasi-Dynamic variants of such Dynamic GNNs by making static the last and the last two EdgeConv layers, respectively. Making static an EdgeConv layer, here, refers to removing the dynamic-graph-generating \(k\)NN block from an EdgeConv layer. In such quasi-DGNNs, we perform dynamic graph construction in each of the initial few EdgeConv layers of the network that form the dynamic portion of the network. The latter EdgeConv layers of the network (with the \(k\)NN block removed) do not reconstruct the graph again - this static portion of the network directly uses the last graph that was constructed in the dynamic portion of the network. We thus refer to dynamic EdgeConv layers (with \(k\)NN block) as Dynamic EdgeConv (DEC) and non-dynamic ones (without the \(k\)NN block) as simply EdgeConv (EC). Specifically, the last and the last two DEC layers of the network in Figure 3 are converted to EC layers to analyze the performance of quasi-DGNNs. All the above networks are trained for a total of \(100\) epochs on the entire ModelNet40 training dataset - we use the Adam optimizer [49] with a learning rate of \(0.001\) and a step learning rate scheduler with a gamma of \(0.5\) and a step size of \(20\). ### _Datasets_ We utilize the ModelNet40 [48] dataset which contains \(12,311\) graphs. We split the entire dataset into 80% for training and 20% for testing. The dataset comprises 3D point clouds in the form of CAD models from 40 categories - we pre-process the dataset by centering it and scaling it to a range of \((-1,1)\). Additionally, we sample \(1024\) points during both training and testing. We test using a fixed random seed for reproducibility and equivalence across the tested networks. ### _Performance Metrics_ We utilize classification accuracy as the performance metric. Since point cloud applications are usually real-time, we give importance to latency and memory consumption figures while assessing the network's performance. To this end, we perform an exhaustive analysis of how latency and memory usage are distributed across a DGNN and within a DEC layer to identify valid performance trade-offs and bottlenecks. ## IV Results and Analysis ### _Baseline Model Latency Analysis_ The baseline latency analysis is performed on a fully dynamic network (with all EdgeConv layers as Dynamic EdgeConv) (Figure 3) with \(k=20\). The value of \(k\) is obtained by cross-validation over a set of values \((5,10,15,20,25,30)\). For cross-validation, we split the training data into a train and validation set. Once a value of \(k\) is selected, we re-train the network over the entire training data. 
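To make the \(k\)NN-versus-update breakdown concrete, a micro-benchmark along the following lines can time the two stages of a DEC layer separately. This is an illustrative sketch (the function name and dictionary keys are ours), using PyTorch Geometric's knn_graph and a static EdgeConv as the two building blocks:

```python
import time
import torch
from torch_geometric.nn import knn_graph

def profile_dec_stage(x, batch, conv, k=20, device="cuda"):
    """Time stage 1 (kNN graph build) vs. stage 2 (node feature update).

    x: (n, c) node embeddings entering the layer; batch: (n,) graph ids;
    conv: a (static) EdgeConv whose MLP matches x's width. Illustrative only.
    """
    x, batch, conv = x.to(device), batch.to(device), conv.to(device)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    edge_index = knn_graph(x, k=k, batch=batch)   # stage 1: dynamic graph
    if device == "cuda":
        torch.cuda.synchronize()
    t1 = time.perf_counter()
    out = conv(x, edge_index)                     # stage 2: message passing
    if device == "cuda":
        torch.cuda.synchronize()
    t2 = time.perf_counter()
    return {"knn_s": t1 - t0, "update_s": t2 - t1}, out
```

The explicit synchronization calls matter on the GPU: CUDA kernels launch asynchronously, so without them the wall-clock split between the two stages would be meaningless.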
Figures 4(a)(i) and 4(a)(ii) show the distribution of latency across all the layers of the network in Figure 3 on GPU and CPU, respectively. Figures 4(b)(i) and 4(b)(ii) contain the per-layer latency analysis and latency distribution within the DEC layers for GPU and CPU, respectively. These figures indicate that the dynamic graph construction via the \(k\)NN algorithm is the bottleneck driving down the network's (and the DEC layers') performance. ### _Analysis Under Varying \(k\)_ We analyze the effect of varying the number of nearest neighbors on the performance of the DEC layer and the point cloud classification model - we first train the network in Figure 3 for different values of \(k\) associated with its DEC layers and then perform inference latency and accuracy analysis. As seen in Figure 5(a), the performance drops as we move away from the optimal \(k\); this performance drop is sharper as we move towards the origin (towards smaller and smaller \(k\)'s). The network latency, for all \(k\) values, on both CPU and GPU, is again dominated by the \(k\)NN algorithm. ### _Quasi-Dynamic GNN (qDGNN)_ Dynamic GNNs improve upon the performance of basic GNNs - this is markedly so for point cloud applications where DGNNs have the added capability of being able to identify and learn from semantically similar points irrespective of their geometric similarity (distance). Such DGNNs, however, as already seen, have a large computational cost linked to the dynamic graph construction operation which effectively bottlenecks the network's performance. Fig. 2: Illustration of the EdgeConv layer. A point cloud (blue) is the input to the EdgeConv layer. EdgeConv layer uses \(k\)NN to generate a directed \(k\)NN graph. The message passing paradigm, applied to this graph, uses a node's neighbors to update node embeddings (message passing: red lines, node feature updates: green dots). The output of the EdgeConv layer is the point cloud with updated node features (green). Figure 6(a) clearly shows that making static the latter dynamic layers does not affect the network's accuracy - the corresponding performance gains associated with such Quasi-Dynamic GNNs are indicated in Figures 6(b), 6(c)(i) and 6(c)(ii), which show a drastic speed-up when compared against the fully dynamic baseline. This suggests that we can reach state-of-the-art performance, whilst being fast enough to operate at the edge, by utilizing a combination of dynamic and static EdgeConv layers for point clouds. ### _Memory Consumption_ As an example, the memory consumed for the \(k\)NN operation is plotted in Figure 7; however, it is important to note that from a memory consumption point of view, the \(k\)NN graph construction operation is not a bottleneck - several other operators take up a much larger memory footprint compared to \(k\)NN. The linear nature of the curves is also self-explanatory: an increase in \(k\) leads to a proportional increase in memory required to serve the \(k\)NN operator. Meanwhile, removing \(k\)NN from latter layers of the network (DEC \(\rightarrow\) EC) directly leads to a proportional reduction in memory consumption. Fig. 4: Latency analysis of the baseline model. 4(a)(i) and 4(a)(ii) show the layer-wise latency (per-graph instance inference level) on GPU and CPU. DEC_2, for example, indicates the second DEC layer in Figure 3. 4(b)(i) and 4(b)(ii) analyze the individual DEC layers (comparing \(k\)NN vs update latency for each). Note that the EdgeConv layer refers to a DynamicEdgeConv layer. 
Fig. 5: Accuracy and latency analysis under varying \(k\) values. 5(a) shows the variation of accuracy with \(k\). 5(b), 5(c)(i) and 5(c)(ii) show the latency variation and distribution, respectively, versus \(k\). The \(update\) legend in 5(c)(i) and 5(c)(ii) refers to the message passing layer of the Dynamic EdgeConv layer.

Fig. 3: Network utilized in the experiment (for the classification task). The input point cloud of shape \(n\times 3\) (\(n\) nodes with each node having \(3\) features) is transformed by four successive Dynamic EdgeConv (DEC) layers. The Dynamic prefix underscores the graph construction that occurs in each EdgeConv layer via the \(k\)NN algorithm. The output of each DEC layer is concatenated to shape \(n\times 320\), before being passed through a linear transformation that yields an \(n\times 1024\) shaped output. A global max pool layer reduces this output to a vector of size \(1024\) that is then passed through an MLP and a log-softmax layer to finally output the class log-probability vector.

## V Discussion The experimental results show the significant bottleneck introduced by the dynamic graph construction (\(k\)NN) layer in point cloud processing networks. The \(k\)NN operation accounts for up to \(95\%\) of the (base) network's latency on GPU and close to \(90\%\) on CPU. Despite this, such a layer is crucial in boosting network performance to enable a wide array of complex real-world edge applications. In this paper, we shed light on this problem whilst also providing a simple solution - quasi-Dynamic Graph Neural Networks (qDGNNs). Such networks significantly reduce the latency of the network (by up to \(58\%\) on GPU and up to \(69\%\) on CPU) whilst maintaining the same level of performance as demonstrated by DGNNs. Accelerating \(k\)NN layers on FPGAs and deploying DGNNs on FPGA platforms is also a potential solution that has attracted significant research interest recently [50, 51]; optimizing \(k\)NN algorithms [52] is likewise an area that has been studied with much interest. ## VI Conclusions and Future Work In this paper, we examined the inference performance of the DynamicEdgeConv layer and the latency bottleneck associated with it - optimizing the dynamic graph construction stage in such networks remains an open problem and a promising research avenue due to its broad applicability. ## Acknowledgement and Statement This work is supported by the DEVCOM Army Research Lab (ARL) under grant W911NF2220159. **Distribution Statement A**: Approved for public release. Distribution is unlimited.
2309.12211
Physics-informed State-space Neural Networks for Transport Phenomena
This work introduces Physics-informed State-space neural network Models (PSMs), a novel solution to achieving real-time optimization, flexibility, and fault tolerance in autonomous systems, particularly in transport-dominated systems such as chemical, biomedical, and power plants. Traditional data-driven methods fall short due to a lack of physical constraints like mass conservation; PSMs address this issue by training deep neural networks with sensor data and physics-informing using components' Partial Differential Equations (PDEs), resulting in a physics-constrained, end-to-end differentiable forward dynamics model. Through two in silico experiments -- a heated channel and a cooling system loop -- we demonstrate that PSMs offer a more accurate approach than a purely data-driven model. In the former experiment, PSMs demonstrated significantly lower average root-mean-square errors across test datasets compared to a purely data-driven neural network, with reductions of 44 %, 48 %, and 94 % in predicting pressure, velocity, and temperature, respectively. Beyond accuracy, PSMs demonstrate a compelling multitask capability, making them highly versatile. In this work, we showcase two: supervisory control of a nonlinear system through a sequentially updated state-space representation and the proposal of a diagnostic algorithm using residuals from each of the PDEs. The former demonstrates PSMs' ability to handle constant and time-dependent constraints, while the latter illustrates their value in system diagnostics and fault detection. We further posit that PSMs could serve as a foundation for Digital Twins, constantly updated digital representations of physical systems.
Akshay J. Dave, Richard B. Vilim
2023-09-21T16:14:36Z
http://arxiv.org/abs/2309.12211v2
# Physics-informed State-space Neural Networks for Transport Phenomena ###### Abstract This work introduces Physics-informed State-space neural network Models (PSMs), a novel solution to achieving real-time optimization, flexibility, and fault tolerance in autonomous systems, particularly in transport-dominated systems such as chemical, biomedical, and power plants. Traditional data-driven methods fall short due to a lack of physical constraints like mass conservation; PSMs address this issue by training deep neural networks with sensor data and physics-informing using components' Partial Differential Equations (PDEs), resulting in a physics-constrained, end-to-end differentiable forward dynamics model. Through two in silico experiments -- a heated channel and a cooling system loop -- we demonstrate that PSMs offer a more accurate approach than purely data-driven models. Beyond accuracy, there are several compelling use cases for PSMs. In this work, we showcase two: the creation of a nonlinear supervisory controller through a sequentially updated state-space representation and the proposal of a diagnostic algorithm using residuals from each of the PDEs. The former demonstrates the ability of PSMs to handle both constant and time-dependent constraints, while the latter illustrates their value in system diagnostics and fault detection. We further posit that PSMs could serve as a foundation for Digital Twins, constantly updated digital representations of physical systems. State-space model Physics-informed Data-driven Supervisory Control Diagnostics ## 1 Introduction Autonomous systems are envisioned to achieve real-time optimization, be flexible in operation, and be fault-tolerant [1]. These capabilities necessitate a model-based approach. However, purely data-driven methods, which have proliferated in the literature, lack physical constraints like mass conservation, precluding their adoption by industry. In this work, we propose a method that _fuses_ physical measurements from sensors with physical knowledge of the conservation laws. The application of this method is specifically developed for systems dominated by transport phenomena. Transport phenomena, which describe the movement of mass, momentum, and energy, are fundamental to many engineering fields as they offer a predictive understanding of systems' behavior. In chemical engineering, these phenomena facilitate the design and optimization of processes in reactors, separators, and heat exchangers, which are essential for chemical, fuel, and pharmaceutical production [2]. Biomedical engineering applies transport phenomena to comprehend nutrient, oxygen, and waste transportation in biological systems like the circulatory system, driving the development of new medical technologies such as artificial organs [3]. Power plant engineering utilizes transport phenomena in optimizing heat and mass transfer in boilers and heat exchangers for efficient and safe power generation and in emission control [4]. They are also vital in nuclear reactor design, where effective heat transfer from the reactor core to the coolant is paramount [5]. Consequently, modeling transport phenomena is crucial for the design, optimization, and enhancement of engineering systems and processes. A set of transport equations models transport phenomena. The solution of transport equations involves formulating functions of space and time for multiple fields (e.g., pressure, velocity, temperature), given a set of boundary and initial conditions. 
Classically, there have been two major approaches to solving transport equations: analytical and numerical. The former applies to situations where the problem setting or transport phenomena are simple enough to directly use mathematical methods (e.g., separation of variables) - this is generally not the case for the systems of interest described above. The latter is used for systems with complex geometries, tightly coupled fields, time-dependent boundary/initial conditions, or parameters that vary with the field itself. Numerical methods include finite difference, finite volume, and finite element methods [6]. These methods discretize the domain into a grid, and values at fixed locations represent the unknown fields. The transport equations are then approximated using numerical algorithms (e.g., the forward Euler method, the Crank-Nicolson method) and solved iteratively until convergence is achieved. Numerical methods are the current state-of-the-art approach to modeling transport phenomena for engineering systems. Their main advantage is versatility in addressing a range of complex problems with time-dependent boundary/initial conditions. However, they have several disadvantages. _Discretization error_: numerical methods discretize the domain and approximate the transport equations. This process introduces error, which can be reduced with finer grids - unfortunately increasing computational cost. _Convergence_: because numerical methods involve iterative algorithms, convergence to a solution is not guaranteed, and failure to converge can result in numerical instability. _Static assumptions_: generally, several parameters in a transport equation must be predefined (e.g., the friction factor). These assumptions form a rigidity around the application of numerical methods: they do not provide a straightforward path to adapt to changing system conditions. The first two disadvantages preclude numerical methods as a basis for the control of engineering systems. A majority of model-based control algorithms assume a reduced-order or surrogate model for control [7, 8]. In this paper, we introduce Physics-informed State-space Neural Networks for transport phenomena. Physics-Informed Neural Networks (PINNs) are a type of artificial neural network (ANN) used to solve partial differential equations (PDEs) [9]. PINNs combine the expressiveness and flexibility of ANNs with the prior knowledge of physical laws and constraints encoded in PDEs. In a PINN approach, the neural network is trained to approximate the solution of PDEs, given appropriate boundary and initial conditions, by minimizing the difference between the network outputs and the PDE constraints. Thus, PINNs could be a candidate for the solution of a system of PDEs representing transport phenomena. PINNs have several advantages. _Physical priors_: PINNs incorporate prior knowledge about the system through physical constraints. In the context of transport phenomena, prior knowledge is the conservation of mass, momentum, and energy in the spatiotemporal domain. _Mesh-free_: PINNs do not require a mesh or a grid to discretize the spatial domain, as numerical methods do. Instead, PINNs use a neural network to represent the solution in a continuous fashion, without the need for discretization. First, this aspect eliminates the need for mesh generation, which can be time-consuming and challenging. Second, it eliminates discretization error (described in the previous paragraph) arising from the mesh. _Flexible training data_: PINNs can be trained with noisy or scarce data.
Thus, sensor measurements, whose outputs have aleatoric sources of noise, may be used to train PINNs. Additionally, tolerance to data scarcity is an essential aspect, as the number of sensor locations (and hence the cost) required to utilize PINNs is low. Specifically, we introduce physics-informed _state-space_ neural networks for transport phenomena. The majority of previous PINN work focuses on solutions of one or more PDEs, with fixed boundary and initial conditions [10], \[x\left(z,t\right)=\mathcal{F}\left(z,t\right)\, \tag{1}\] where \(\mathcal{F}\) is a PINN, \(z\) is a position vector, \(t\) is time, and \(x\) is the field variable(s). However, in a control or state estimation setting, a form representing, \[x\left(z,t,v\right)=\mathcal{F}\left(z,t,v\right)\, \tag{2}\] is needed, where \(v\) represents a vector of arbitrary control actions. The control actions modify the boundary conditions (BCs) of the problem. In this setting, the PINN must be able to generalize to a much larger input space. Arnold and King [11] attempted a similar task for a system governed by a single PDE but did not train a monolithic PINN for varying \(v\) - rather, they trained multiple PINNs corresponding to a grid of anticipated control actions. The objective of this work is to develop and demonstrate a method that generates a state-space model by fusing physical measurements from sensors with physical knowledge of transport conservation laws. The paper is organized as follows. In Section 2, the definition and solution of the transport equations, the derivation of our proposed method, and its training procedure are presented. Two separate experimental settings were explored. In Section 3, results for modeling a heated channel geometry are presented, and a nonlinear supervisory control method is demonstrated. In Section 4, results for modeling a cooling system are presented, along with a demonstration of using our proposed method for physics-based diagnostics during a system fault. Finally, Section 5 contains a discussion and conclusions on advantages, disadvantages, and additional capabilities to be explored. ## 2 Methods ### Modeling fluid transport There is a large corpus of equations that model fluid transport. This work focuses on one-dimensional single-phase flow at low speed, i.e., conditions ubiquitous in the engineering systems described in the introduction. Under these conditions, the conservation of mass, momentum, and energy is modeled by the following set of PDEs: \[\frac{\partial\rho}{\partial t}+\frac{\partial\left(\rho u\right)}{\partial z} =0\, \tag{3}\] \[\rho\frac{\partial u}{\partial t}+\rho u\frac{\partial u}{\partial z} =-\frac{\partial p}{\partial z}+\rho g-\frac{f}{D_{h}}\frac{\rho u|u|}{2}\,\] (4) \[\rho C_{p}\frac{\partial T}{\partial t}+\rho C_{p}u\frac{\partial T}{\partial z} =q^{\prime\prime\prime}. \tag{5}\] In all equations, \(z\) is the spatial dimension, and \(t\) is the temporal dimension. This work uses the field variable set \((p,u,T)\), where \(p\) is the pressure, \(u\) is the one-dimensional velocity, and \(T\) is the temperature. Additionally, there are several parameters. These parameters are either constants or modeled via closure relationships. Density is modeled by a closure relationship, \(\rho=\rho(p,T)\), pre-defined via experiments; likewise, specific heat capacity is modeled as \(C_{p}=C_{p}(p,T)\). The friction factor, \(f\), can be obtained via closure relationships (Moody chart) or determined via experiments.
The gravitational coefficient, \(g\), is a constant that will depend on the orientation of the component(s) with the gravitational field (and its magnitude). The hydraulic diameter, \(D_{h}\), can be obtained via manufacturer specifications or component measurement. The volumetric heat source, \(q^{\prime\prime\prime}\), depends on the configuration of the thermal energy source, e.g., heater power magnitude and its location. ### Numerical solution The approach to obtaining numerical solutions to the transport equations is discussed. A solver, System Analysis Module (SAM) [12], is utilized. The SAM code specializes in the high-fidelity simulation of advanced nuclear power plants. As part of its capabilities, it can solve the transport equations, Eqs. (3) to (5). The Finite Element Method (FEM) is utilized to solve the PDEs. FEM works by approximating a continuous solution over a region of interest as a combination of simple functions (e.g., polynomials) that are defined over a finite set of points. The solution is then represented as a set of coefficients describing the simple functions' behavior over the entire region. SAM uses the continuous Galerkin FEM and Lagrange shape functions. This work uses first-order elements using piece-wise linear shape functions with the trapezoidal rule for numerical integration. The Jacobian-Free Newton Krylov method is used to solve the system of equations. The second-order backward difference formula is used for time integration. Additional details on the numerical scheme are provided in [12, §7]. ### PINN solution PINNs are a type of ANN that can also solve PDEs. The modern PINN concept was initially introduced by Raissi et al. [9] and has since received significant adoption by the scientific machine-learning community - reviewed exhaustively in [10]. PINNs solve parameterized and nonlinear PDEs in the form, \[\frac{\partial x}{\partial t}+\mathcal{N}(x;\lambda)=0\, \tag{6}\] where \(x=x(z,t)\) is the solution, and \(\mathcal{N}\) is a nonlinear operator parameterized by \(\lambda\). Each transport equation, Eqs. (3) to (5), can be written in this form. A neural network is formulated to approximate the solution, \[x(z,t)\approx\mathcal{F}(z,t;\theta),\ z\in\Omega,\ t\in\tau\, \tag{7}\] where \(\mathcal{F}\) is a neural network that is parameterized by \(\theta\). The solution is box constrained in an n-dimensional space domain, \(\Omega\), and in a one-dimensional time domain, \(\tau\). The space domain specification would depend on the geometry of the transport path and whether multiple dimensions are needed. The specification of the time domain depends on the timescale at which the system reaches a steady state following changes in BCs. For instance, if the system settles slowly, a large \(\tau\) is required, whereas a small \(\tau\) is needed if the system reaches a steady state quickly. In a purely data-driven approach, a series of measurements would be collected and stored in sets, \[\mathcal{X}_{m} =\{(z_{i},t_{i})_{i=1}^{N}\}\, \tag{8}\] \[\mathcal{Y} =\{(x_{i})_{i=1}^{N}\}\, \tag{9}\] where \(z_{i}\) is the position, \(t_{i}\) is the time, and \(x_{i}\) is the field value for the \(i^{\text{th}}\) data point. The set \(\mathcal{X}_{m}\) contains positions and times, where subscript \(m\) denotes measurement data. The set \(\mathcal{Y}\) contains measured field values. The superscript \(N\) is the dataset size. In a supervised learning setting, \(\mathcal{X}_{m}\) is the input dataset, and \(\mathcal{Y}\) is the output dataset.
Formally, \(x_{i}\in\mathbb{R}^{m}\), where \(m\) is the number of fields, \(z_{i}\in\mathbb{R}^{n}\cap\Omega\), where \(n\) is the number of spatial dimensions, and \(t_{i}\in\mathbb{R}\cap\tau\) is a scalar. A corresponding loss function is defined, \[\mathcal{L}_{m}=f_{\text{loss}}\left(\mathcal{F}(\mathcal{X}_{m}),\mathcal{Y}\right), \tag{10}\] where \(f_{\text{loss}}\) is a loss function (e.g., the mean squared error), and \(\mathcal{L}_{m}\) denotes the measurement loss. The first argument of the loss function is the predicted value, and the second argument is the target value. Thus, \(\mathcal{L}_{m}\) quantifies the error (or loss) between predictions from the neural network and the measured field values. The loss is then backpropagated to the neural network's parameters, \(\theta\). Each parameter has a gradient of \(\mathcal{L}_{m}\) with respect to its value, known as the local loss gradient. Then, an optimizer will adjust each parameter according to its local loss gradient, i.e., change the value of the parameter to decrease the entire network's loss averaged over the current batch. In the purely data-driven setting, the values of \(\theta\) are strictly determined by the measured field values. In a PINN setting, an additional source of information is added. The dynamics of the system being modeled are approximately known. For example, if a liquid is advected in a network of pipes, the transport equations can inform the neural network that, e.g., energy conservation must hold. Thus, the transport PDEs inform the neural network if there are inconsistencies in its solution. How is this achieved? First, the exact solution in Eq. (6) is replaced by that approximated by the neural network in Eq. (7), \[\frac{\partial\mathcal{F}}{\partial t}+\mathcal{N}\left(\mathcal{F};\lambda\right)=0\;. \tag{11}\] To evaluate the PDEs, e.g., Eqs. (3) to (5), the gradients \(\partial_{t}u,\partial_{z}u\), etc., are needed. In the numerical approach, the gradients are obtained by discretizing the space-time domain and approximating them using finite differences. The PINN approach uses a mesh-free method to evaluate the gradients. This is achieved through _forward mode_ automatic differentiation, where the derivative of any output of \(\mathcal{F}\) is obtainable with respect to any input. The physics-informed loss function is defined as, \[\mathcal{L}_{p}=f_{\text{loss}}\left(\frac{\partial\mathcal{F}}{\partial t}(\mathcal{X}_{p})+\mathcal{N}\left(\mathcal{F}(\mathcal{X}_{p});\lambda\right),\emptyset\right)\;, \tag{12}\] where \(\emptyset\) is a zero set. The target output is always zero to enforce that the neural network satisfies the right-hand side of Eq. (11). It is important to emphasize that the target output is a zero set rather than measured field values. Thus, \(\mathcal{L}_{p}\) can be evaluated at arbitrary inputs sampled from the space-time domain, \((z_{p},t_{p})\sim\Omega\times\tau\). In this way, the known physics of the system regularizes the neural network over the space-time domain outside of the measurement set, \(\mathcal{X}_{m}\). The additional set of inputs is referred to as collocation points, \[\mathcal{X}_{p}=\left\{(z_{p},t_{p})_{i=1}^{N}\right\}\,. \tag{13}\] Now, during training, the parameters of the neural network, \(\theta\), can be updated to minimize both \(\mathcal{L}_{m}\) and \(\mathcal{L}_{p}\). Thus, knowledge gathered from measurements of the system is complemented with a priori knowledge of the physics of the system.
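As a concrete illustration of Eq. (12), the following minimal PyTorch sketch evaluates a physics-informed residual loss for the energy balance of Eq. (5) at collocation points. The network interface, property values, and constant velocity are illustrative assumptions, not the full PSM:

```python
import torch

# Minimal sketch of the physics-informed loss of Eq. (12) for the energy
# balance, Eq. (5). `model` maps (z, t) -> T; rho, cp, u, and q_src are
# taken as known constants here purely for illustration.
def energy_residual_loss(model, z, t, rho=2000.0, cp=2414.0, u=0.1, q_src=0.0):
    z = z.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    T = model(torch.stack([z, t], dim=-1)).squeeze(-1)
    # Mesh-free derivatives of the network output via autodiff.
    dT_dt = torch.autograd.grad(T.sum(), t, create_graph=True)[0]
    dT_dz = torch.autograd.grad(T.sum(), z, create_graph=True)[0]
    residual = rho * cp * (dT_dt + u * dT_dz) - q_src
    # The target is the zero set: any nonzero residual is penalized.
    return residual.pow(2).mean()
```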
The resulting neural network is defined as a PINN. Measurements of a facility require the installation of sensors (e.g., thermocouples and pressure transducers) and incur associated operation and maintenance costs. The number of sensors and their locations are themselves an optimization task. The PINN approach is particularly powerful when system measurements are sparse or physically precluded (e.g., extreme temperature and pressure environments). ### State-space formulation (PSM) The objective of this work is to apply aspects of PINNs to obtain a nonlinear dynamical model as an alternative to traditional linear state-space models. A discrete-time state-space model is represented by, \[x_{k+1} =f\left(x_{k},v_{k}\right)\;, \tag{14}\] \[y_{k} =g\left(x_{k},v_{k}\right)\;, \tag{15}\] where \(x_{k}\), \(v_{k}\), and \(y_{k}\) are vectors representing states, inputs, and output measurements at time \(k\). Additionally, \(f\) is the dynamics model, and \(g\) is the measurement model. Linear state-space models are commonly used in control theory because they provide a tractable and simplified approach to analyzing and designing control systems. A linear state-space model utilizes matrices to approximate the \(f\) and \(g\) functions, \[x_{k+1} =Ax_{k}+Bv_{k}\;, \tag{16}\] \[y_{k} =Cx_{k}+Dv_{k}\;, \tag{17}\] where \(A\) and \(B\) are the state and input matrices, and \(C\) and \(D\) are the output and control pass-through matrices. The linear state-space representation allows the application of powerful mathematical techniques such as eigenvalue analysis, frequency response analysis, and convex optimization algorithms (e.g., Quadratic Programs for Model Predictive Control). However, many physical systems exhibit nonlinear behavior, including systems with time-varying parameters or systems operating in certain regimes (e.g., near steep saddle points). Thus, using linear models can lead to inaccuracies or instability in controlling a nonlinear system. A nonlinear dynamics model could address the deficiencies of a linear representation. Specifically, a neural network could be used as the nonlinear function representing \(f\) and \(g\) in Eqs. (14) and (15). The previous section presented PINNs as an approach to solving PDEs, Eq. (11). If the PDEs are a good approximation of the dynamics of a physical system, the PINN will approximate the system's state in the form of Eq. (1). However, several issues arise. First, the output of the PINN is a continuous representation of the system state \(x=x(z,t)\). A discrete representation could be defined by, \[x_{k}=\left[\mathcal{F}\left(z_{0},t_{k}\right),\mathcal{F}\left(z_{1},t_{k}\right),...,\mathcal{F}\left(z_{M},t_{k}\right)\right]^{\intercal}\;, \tag{18}\] where \(z_{0},...,z_{M}\) represent the physical locations at which the system's state is defined, and \(t_{k}\) is the discrete time. Without loss of generality, we can assume that \(t_{k}=k\Delta t\), where \(k\) is an integer indexing time, and \(\Delta t\) is a constant time-step size.
Therefore, the propagation of the state would be determined by, \[x_{0} =\left[\mathcal{F}\left(z_{0},0\right),\mathcal{F}\left(z_{1},0\right),...,\mathcal{F}\left(z_{M},0\right)\right]^{\intercal}\;, \tag{19}\] \[x_{1} =\left[\mathcal{F}\left(z_{0},t_{1}\right),\mathcal{F}\left(z_{1},t_{1}\right),...,\mathcal{F}\left(z_{M},t_{1}\right)\right]^{\intercal}\;,\] (20) \[\vdots\] \[x_{N} =\left[\mathcal{F}\left(z_{0},t_{N}\right),\mathcal{F}\left(z_{1},t_{N}\right),...,\mathcal{F}\left(z_{M},t_{N}\right)\right]^{\intercal}\;. \tag{21}\] While this is useful for systems without any inputs, additional modifications are needed to accommodate systems with varying control inputs, i.e., Eq. (2), \[x_{k}=\left[\mathcal{F}\left(z_{0},t_{k},v_{k}\right),\mathcal{F}\left(z_{1},t_{k},v_{k}\right),...,\mathcal{F}\left(z_{M},t_{k},v_{k}\right)\right]^{\intercal}\;, \tag{22}\] where \(v_{k}\) is a vector representing control inputs at step \(k\). The control inputs can manipulate BCs, e.g., the temperature or velocity at a pipe inlet, or modify jump conditions between locations, e.g., the pressure jump due to the presence of a pump. A problem with Eq. (22) is the assumption that the system's initial condition, \(x(z,t=0)\), must always be the same. For a dynamical system, an arbitrary final input value, \(v_{k\rightarrow\infty}\), would result in a new settling point, \(x_{k\rightarrow\infty}\). If we want the state-space model to generalize, a form propagating forward from an arbitrary \(x_{0}\) is needed. Therefore, the initial condition itself should be an input for the PINN, \[x_{k}=\left[\mathcal{F}\left(z_{0},t_{k},x_{0},v_{k}\right),\mathcal{F}\left(z_{1},t_{k},x_{0},v_{k}\right),...,\mathcal{F}\left(z_{M},t_{k},x_{0},v_{k}\right)\right]^{\intercal}\;, \tag{23}\] where \(x_{0}\) is the initial condition. When a dynamical system is subject to an initial condition, the system's behavior in the short term is dominated by the initial state. However, as time passes, \(k\gg 1\), the impact of the initial condition on the system's current state becomes progressively smaller. Therefore, a final modification to the formulation is needed, \[x_{k}=\left[\mathcal{F}\left(z_{0},t_{k},x_{0,k},v_{k}\right),\mathcal{F}\left(z_{1},t_{k},x_{0,k},v_{k}\right),...,\mathcal{F}\left(z_{M},t_{k},x_{0,k},v_{k}\right)\right]^{\intercal}\;, \tag{24}\] where \(x_{0,k}\) is the initial condition of the current time step. Before Eq. (22), it was assumed that \(t_{k}=k\Delta t\). When this assumption is applied to Eq. (24), over a collection of multiple experiments, the value of \(t_{k}\) would arbitrarily correspond to combinations of the inputs, \(\left(x_{0,k},v_{k}\right)\), and outputs, \(x_{k}\). From the perspective of training the neural net \(\mathcal{F}\), there would be no relationship between \(x_{k}\) and \(t_{k}\). However, \(t_{k}\) is still an important variable for estimating the time derivatives needed for physics informing. The timescale at which \(t_{k}\) does have a relationship between the inputs and outputs is the discrete time step, \(\Delta t\).
The data would be separated into an initial condition at \(t_{k}=0\), and the next step, \(t_{k}=\Delta t\), \[x_{0,k} =\left[\mathcal{F}\left(z_{0},0,x_{0,k},v_{k}\right),\mathcal{F}\left(z_{1},0,x_{0,k},v_{k}\right),...,\mathcal{F}\left(z_{M},0,x_{0,k},v_{k}\right)\right]^{\intercal},\;\mathrm{and}, \tag{25}\] \[x_{k} =\left[\mathcal{F}\left(z_{0},\Delta t,x_{0,k},v_{k}\right),\mathcal{F}\left(z_{1},\Delta t,x_{0,k},v_{k}\right),...,\mathcal{F}\left(z_{M},\Delta t,x_{0,k},v_{k}\right)\right]^{\intercal}\;. \tag{26}\] The relationships between the inputs, \(\left(x_{0,k},v_{k}\right)\), and outputs, \(x_{k}\), in Eq. (24) are visualized in Fig. 1. The resulting neural network is, \[x\left(z_{i},t_{i},v_{i},x_{0,i}\right)=\mathcal{F}\left(z_{i},t_{i},v_{i},x_{0,i}\right)\;, \tag{27}\] predicting each component of Eqs. (25) and (26). To train this neural network, the datasets in Eqs. (8) and (9) are modified, \[\mathcal{X} =\left\{(z_{i},t_{i},v_{i},x_{0,i})_{i=1}^{N}\right\}, \tag{28}\] \[\mathcal{Y} =\left\{x(z_{i},t_{i},v_{i},x_{0,i})_{i=1}^{N}\right\}, \tag{29}\] where \(v_{i}\) is the control input, and \(x_{0,i}\) is the initial condition. Formally, \(v_{i}\in\mathbb{R}^{p}\), where \(p\) is the number of control inputs, and \(x_{0,i}\in\mathbb{R}^{q}\), where \(q\) is the number of states modeled. Using this formulation, the input space of the neural network increases by \(\mathcal{O}(10^{p+q})\). However, this step is essential to allow network usage without relying on a heuristic approach, e.g., a tabular database of multiple models whose output is obtained by interpolating between \((v_{i},x_{0,i})\). Moreover, such a heuristic approach would defeat the goal of showing that the network can generalize well. The trained network is referred to as a Physics-informed State-space neural network Model (PSM). ### PSM architecture and training methodology To train a PSM, an ANN must be constructed. The ANN will represent the operator, \(\mathcal{F}\), in Eqs. (10), (12) and (24). In past research, various PINN architectures have been explored, including the Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs) [10]. For this study, we adopt an MLP architecture. The intuition behind this choice is explained by contrasting an MLP's properties against other architectures. Transport system dynamics are inherently Markovian, meaning that the state \(x_{k+1}\) depends solely on the previous state, \(x_{k}\), rather than a sequence of past states. This characteristic makes a state-less architecture, such as an MLP, more suitable than an RNN for this application. Additionally, CNNs are typically employed for tasks involving image or signal processing, where the input has a grid-like structure and strong spatiotemporal correlations, such as multi-dimensional data. However, in this study, the PSMs are designed to model one-dimensional transport, making an MLP architecture adequate for the task. Each \(i^{\text{th}}\) layer of the MLP provides an output, \[y_{j}^{i}=\sigma^{i}\left(w_{jk}^{i}y_{k}^{i-1}+b_{j}^{i}\right)\, \tag{30}\] where \(w_{jk}^{i}\) is the weight matrix, \(b_{j}^{i}\) is a bias vector, and \(\sigma^{i}\) is the activation function of the \(i^{\text{th}}\) layer. The vectors \(y_{j}^{i}\) and \(y_{k}^{i-1}\) are outputs of the \(i^{\text{th}}\) and \((i-1)^{\text{th}}\) layers. The sizes of the matrices and vectors (denoted by \(j,\ k\)) depend on the layer-wise size of \(y\). In this work, the ANN is built and trained with PyTorch [13].
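To make the state-space usage of Eqs. (25) and (26) concrete, the following sketch (hypothetical tensor shapes and function names) rolls a trained PSM forward autoregressively, aliasing each prediction as the next initial condition per Fig. 1:

```python
import torch

# Hypothetical rollout per Eqs. (25)-(26): predict x_k at t = dt from
# (x_{0,k}, v_k), then alias the prediction as x_{0,k+1} (see Fig. 1).
@torch.no_grad()
def rollout(psm, x0, controls, z_grid, dt):
    # x0: (q,) flattened state; controls: (N, p); z_grid: (M, 1)
    M = z_grid.shape[0]
    t = torch.full((M, 1), dt)                  # evaluate one step ahead
    states, x = [], x0
    for v in controls:                          # v_k for k = 0, ..., N-1
        inp = torch.cat([z_grid, t,
                         x.unsqueeze(0).expand(M, -1),
                         v.unsqueeze(0).expand(M, -1)], dim=-1)
        x = psm(inp).reshape(-1)                # next state over all z
        states.append(x)                        # becomes x_{0,k+1} next loop
    return torch.stack(states)                  # (N, q) state trajectory
```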
The architecture of the neural network and the training workflow are presented in Fig. 2. The current configuration of the neural network is described next. The activation function between all intermediate layers is the hyperbolic tangent. It was chosen because it is nonlinear, smooth, and has smooth derivatives. After the input layer, there are three "head" layers of size \(s_{H}\), followed by an "intermediate" layer of size \(s_{I}\). The output of the intermediate layer is sent to three separate "tail" layers of size \(s_{T}\). There are three tails, one for each field variable: pressure, velocity, and temperature. The intuition behind this architecture is that the head layers encode the initial condition and control input into a form used by the tail layers. The tail layers then decode the information separately for each field variable. As discussed previously, two data sets are used to train the PSM: measurement data from locations \(z_{i}\in\mathcal{X}_{m}\), Eq. (8), and physics-informing from collocation locations \(z_{i}\in\mathcal{X}_{p}\), Eq. (13). In the context of transport systems, the measurement locations are those where physical measurements of fluid flow exist, e.g., thermocouples or pressure transducers. On the other hand, collocation locations are arbitrarily chosen across the entire physical domain. #### 2.5.1 Loss formulation A discussion is presented to explain the differences between the "Measurement Loss", Eq. (10), and the "Physics-informed Loss", Eq. (12), in Fig. 2. In the conventional PINN formulation, the objective is to solve for the spatiotemporal distribution of field(s) within the boundaries of a domain under the assumptions of a fixed initial condition and BC(s). In general, it is anticipated that there are no measurements inside the spatial domain. However, in transport systems, direct measurements of pressure and temperature, and indirect measurements of velocity, are available through sensors (the type of sensor is dictated by the application, accuracy needed, and fluid properties) at multiple locations along the fluid transport path. Thus, in the PSM formulation, the spatiotemporal distribution of field(s) _between_ the measurement locations must be solved for. In this setting, the measurement losses enforce initial conditions in Eq. (25) but also act as BCs in Eq. (26). On the other hand, the physics-informed losses are calculated at random locations. The losses from these locations tune the coefficients of the PSM such that the PDE equations defined in Eq. (11) are satisfied. To tune the coefficients of the PSM, it is important to choose an appropriate loss function, \(f_{\mathrm{loss}}\) in Eqs. (10) and (12). The loss function quantifies the discrepancy between a target value and a model's prediction. Regression problems in machine learning commonly use the Mean Squared Error (MSE) as the loss function. However, the Log-Cosh loss function, Eq. (31), is utilized in this work.

Figure 1: A visualization of the nomenclature and their relationships. The red lines indicate variables needed at each step \(k\), i.e., the initial condition and input, \(x_{0,k}\) and \(v_{k}\). The blue arrows indicate the prediction of the next state, \(x_{k}\), by the numerical solver or PSM. The dashed arrows indicate the aliasing of \(x_{k}\) as \(x_{0,k+1}\).
\[f_{\mathrm{loss}}\left(y_{i},\hat{y}_{i}\right)=\frac{1}{N}\sum_{i=1}^{N}\log\left(\cosh\left(y_{i}-\hat{y}_{i}\right)\right)\;, \tag{31}\] where \(y_{i}\) is the predicted value, \(\hat{y}_{i}\) is the target, and \(N\) is the total number of samples in a batch. The Log-Cosh loss function provides benefits over MSE: it is less sensitive to outliers, has bounded gradients for numerical stability, ensures more reliable convergence in optimization, and performs better with heavy-tailed noise distributions, allowing for quicker, more effective learning and accurate fitting of real-world data. Now that the definitions of the measurement loss, the physics-informed loss, and the loss function have been presented, a discussion on the weighting of the losses follows. The total loss forwarded to the optimizer per epoch is defined as, \[\mathcal{L}_{\Sigma}=\alpha\mathcal{L}_{m}+\beta\mathcal{L}_{p}\;\big{|}\;\alpha+\beta=1\;, \tag{32}\] where \(\alpha\) and \(\beta\) are coefficients that weigh the importance of the measurement and physics-informed losses, respectively. The correct settings for \((\alpha,\beta)\) will depend on the degree of certainty in the measurements (assessable from the manufacturer-specified accuracy of the sensors) and the confidence in the suitability of the PDEs' representation of the system's dynamics. In this work, during training of the PSMs, we set \(\alpha=0.5\) and \(\beta=0.5\), equally weighting both losses. However, further consideration is warranted when applying PSMs to a physical system.

Figure 2: PSM architecture and training workflow. Top: The neural network architecture displaying relative sizes of the MLP layers. Bottom: Measurement and physics-informed losses use output from \(\mathcal{F}\left(\mathcal{X}_{m}\right)\) and \(\mathcal{F}\left(\mathcal{X}_{p}\right)\), respectively. The combined losses are used to update the parameters of the neural network, \(\theta\).

#### 2.5.2 Scaling Scaling the input and output of a neural network is crucial when processing physical measurements of different units, scales, and ranges, such as pressure, temperature, and velocity. This process facilitates the optimization of the neural network, enabling the model to converge more efficiently and quickly by ensuring that the gradients have a consistent magnitude across all input dimensions. Unscaled inputs with disparate magnitudes can lead to elongated, narrow contours in the loss function landscape, resulting in slow and unstable convergence. In this work, min-max scaling is used, \[z^{*} =z/z_{\max}\;, \tag{33}\] \[t^{*} =t/t_{\max}\;,\] (34) \[p^{*} =(p-p_{\min})/(p_{\max}-p_{\min})\;,\] (35) \[u^{*} =(u-u_{\min})/(u_{\max}-u_{\min})\;,\] (36) \[T^{*} =(T-T_{\min})/(T_{\max}-T_{\min})\;,\] (37) \[\rho^{*} =(\rho-\rho_{\min})/(\rho_{\max}-\rho_{\min})\;, \tag{38}\] for transforming position, time, pressure, velocity, temperature, and density, respectively. The value of \(z_{\max}\) equals the maximum path length for the liquid. The value of \(t_{\max}\) equals the discrete time-step between measurements, \(\Delta t\) in Eq. (26). The minimum and maximum values for the pressure, velocity, temperature, and density are based on the ranges of data collected from experiments. #### 2.5.3 Training procedure The PSM training procedure is listed in Alg. 1. During each epoch, the weights are updated from a fixed measurement dataset, whereas the collocation points are resampled at each epoch. A step learning rate scheduler was used, with a rate decay of 0.5 every 50 epochs. All models were trained for 500 epochs with a batch size of 2048.
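Before the full procedure of Alg. 1, a minimal PyTorch sketch of the Log-Cosh loss, Eq. (31), and the weighted total loss, Eq. (32), follows; the numerically stable rewriting of log-cosh is our addition:

```python
import math
import torch
import torch.nn.functional as F

# Log-Cosh loss, Eq. (31). The identity
#   log(cosh(x)) = x + softplus(-2x) - log(2)
# avoids cosh overflow for large residuals.
def log_cosh(y_pred, y_true):
    d = y_pred - y_true
    return (d + F.softplus(-2.0 * d) - math.log(2.0)).mean()

# Weighted total loss, Eq. (32), with alpha + beta = 1.
def total_loss(y_pred, y_meas, residuals, alpha=0.5, beta=0.5):
    loss_m = log_cosh(y_pred, y_meas)                          # Eq. (10)
    loss_p = log_cosh(residuals, torch.zeros_like(residuals))  # Eq. (12)
    return alpha * loss_m + beta * loss_p
```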
```
1:  Input: Measurement data \((\mathcal{X}_{m},\mathcal{Y})\), Governing equations \(G\), Loss coefficients \((\alpha,\beta)\), Number of epochs \(N\), Batch size \(B\), Learning rate scheduler \(\mathcal{S}(\cdot)\)
2:  Initialize: Network \(\mathcal{F}\) with random weights \(\theta\)
3:  for \(n=1,2,\ldots,N\) do
4:    Update learning rate \(\eta\leftarrow\mathcal{S}(n)\)
5:    for each mini-batch \((\mathcal{X}_{m,i},\mathcal{Y}_{i})\subset(\mathcal{X}_{m},\mathcal{Y})\) with size \(B\) do
6:      Compute predictions \(\hat{\mathcal{Y}}_{i}=\mathcal{F}(\mathcal{X}_{m,i})\)
7:      Compute measurement loss \(\mathcal{L}_{m}(\hat{\mathcal{Y}}_{i},\mathcal{Y}_{i})\) using Eq. (10)
8:      Randomly sample \(B\) collocation points \(\mathcal{X}_{p,i}\), and extract \((x_{0,i},v_{i})\) from \(\mathcal{X}_{m,i}\)
9:      Compute the residual values \(\mathcal{R}_{i}=G(\mathcal{F},\mathcal{X}_{p,i},x_{0,i},v_{i})\)
10:     Compute physics-informed loss \(\mathcal{L}_{p}(\mathcal{R}_{i},\emptyset)\) using Eq. (12)
11:     Compute total loss \(\mathcal{L}_{\Sigma}=\alpha\mathcal{L}_{m}+\beta\mathcal{L}_{p}\)
12:     Update network weights using optimizer: \(\theta\leftarrow\theta-\eta\nabla_{\theta}\mathcal{L}_{\Sigma}(\theta)\)
13:   endfor
14: endfor
15: return Trained PSM model \(\mathcal{F}\)
```
**Algorithm 1** Training a PSM

## 3 Results: applying PSMs for transport in a Heated Channel Two experiments are designed to investigate PSMs' performance for transport phenomena. The first experiment is a system of three conduits in series (referred to as pipes). The first and last pipes are adiabatic and \(1.0\,\mathrm{m}\) long. The middle pipe is \(0.8\,\mathrm{m}\) long and heated with a constant volumetric heat source of \(q^{\prime\prime\prime}=50.0\,\mathrm{MW}/\mathrm{m}^{3}\). All pipes have a total cross-sectional area of \(0.449\,\mathrm{m}^{2}\) and a hydraulic diameter of \(2.972\,\mathrm{mm}\). No solids are modeled - only the internal flow area. Each pipe is discretized into ten elements. The inlet of the first pipe has varying velocity and temperature BCs, \[u(z=z_{\min},t)=u_{\mathrm{in}}(t)\;, \tag{39}\] \[T(z=z_{\min},t)=T_{\mathrm{in}}(t)\;. \tag{40}\] The trajectory of the BCs is defined by linear ramps that vary in ramp rate, maximum/minimum amplitude, and rest time between manipulations. These parameters are varied to encourage regularization of the PSM. Additionally, the outlet of the third pipe has a constant pressure BC, \[p(z=z_{\mathrm{max}},t)=p_{\mathrm{out}}\;. \tag{41}\] In this work, the pressure BCs are presented as absolute pressure; however, the pressure field is presented as gage pressure. The experiment configuration is visualized in Fig. 3. A visualization of the evolution of the field variables is presented in Fig. 4. The PSM will be tasked with predicting these spatiotemporal distributions by learning from a few measurement locations and the approximate PDEs of the system. Hereafter, this experiment is referred to as the "heated channel" configuration. To close the transport equations, Eqs. (3) to (5), the thermophysical properties of the fluid must be defined. This work utilizes a molten salt, \(\mathrm{LiF}-\mathrm{BeF}_{2}\), known colloquially as 'flibe,' because flibe is the leading candidate for the primary working fluid of next-generation molten salt reactor designs currently under development [14].
The density and specific heat capacity of flibe are defined by [15], \[\rho=2413-0.488T\;\mathrm{[kg/m^{3}]}\;, \tag{42}\] \[C_{p}=2414\;\mathrm{[J/(kg\,K)]}\;, \tag{43}\] where \(T\) is the fluid temperature in Kelvin. These correlations are adopted to calculate the physics-informed loss in Eq. (12).

Figure 4: An overview of the training dataset for the heated channel configuration. Left: Velocity and temperature BCs at the inlet that are manipulated in time. The experiment chosen to display the solution is annotated ("Exp. 8"). Right: Numerical solution of all field variables over the entire spatiotemporal domain.

Figure 3: Configuration of the pipes in series. Left: Layout of the pipes and identification of the locations of imposed BCs and the heat source term. Right: A list of the fixed outlet pressure BC and ranges for the inlet BCs.

### Evaluation with test data The results for the heated channel are discussed in this section. A sample of the control input permutations and the numerical solution of the field variables were presented in Fig. 4. In this setting, the PSM must accurately propagate the system's state, \(x_{k}\), from a given initial condition, \(x_{0,k=0}\), and input trajectory, \(v_{k=0,\ldots,N}\). This is a classical advection problem in which the inlet boundary conditions are transported through the pipes and incur pressure losses due to friction from the pipe, modifications in mass flow rate (velocity) due to the boundary conditions, and changes in temperature due to energy deposition. In Fig. 5, a rollout of a single input, selected from the _test_ dataset, i.e., data that is not used to train the models, is presented. Subplots A and B show the input trajectory of the velocity and temperature at the inlet, \(u_{\mathrm{in}}\) and \(T_{\mathrm{in}}\). Subplots C-K show pressure, velocity, and temperature at fixed positions vs. time. Subplots L-T show the fields at fixed times vs. position. In subplots C-T, results from the numerical solver ("Num. Sol."), the PSM model, and an Artificial Neural Network (ANN) model are displayed. The PSM model is trained with Eq. (32), which contains both the measurement and physics-informed losses, Eqs. (10) and (12), respectively, whereas the ANN model is trained only with the measurement loss. Thus, the ANN results represent the classical, purely data-driven approach to modeling. In contrast, the PSM results combine knowledge from data with a priori knowledge of the system's dynamics in the form of transport PDEs. At first glance, the ANN appears to perform on par with the PSM. Indeed, comparing the pressure and velocity spatiotemporal predictions, the ANN does perform as well as the PSM. However, comparing the temperature distributions, there is a significant error introduced by the ANN. The errors occur at \(1.0\leq z\leq 1.8\,\mathrm{m}\), i.e., the heated pipe's location. This observation is more explicitly visualized in Fig. 6, which compares the spatiotemporal error in predicting temperature averaged over all test datasets. The PSM significantly outperforms the ANN in the prediction of the temperature field. Why does the ANN fail to accurately predict only the temperature field? This question can be answered by understanding the underlying physics and the sensor measurement locations. The advection of pressure and velocity depends on the frictional losses and the inlet mass flow rate. Because the friction factor and pipe geometry are constant, we expect a constant pressure gradient with respect to position. This is displayed in Subplots L-N in Fig. 5.
The measurement of the pressure and velocity fields occurs in the first and last unheated pipes. Therefore, to correctly estimate the pressure field, the ANN simply has to interpolate between the pressure measurement locations before and after the heated section, and modify the gradient according to the change in inlet mass flow rate (velocity). The latter is provided as an input (\(v_{k}\)) to the ANN. The results show that the ANN model can accomplish these tasks well for the pressure field. For the velocity field, we expect the spatial distribution to be nearly constant, except after the heated region. There is a change in the density because of the heating (Eq. (42)), which causes the velocity to change in order to conserve mass (Eq. (3)). This causes a small error in the ANN prediction, noticeable upon close inspection of Subplots O-Q in Fig. 5. Therefore, the performance for velocity is still acceptable.

Figure 5: Contrasting the performance of PSM and ANN models in predicting the evolution of pressure, velocity, and temperature for the heated channel. Subplots A-B: control input setting from the test dataset. Subplots C-K: field temporal evolution at fixed positions. Subplots L-T: field spatial distribution at fixed times.

However, for the temperature field, there is a large discrepancy between ANN and PSM performance within the heated region. The temperature field is affected by energy deposition (\(q^{\prime\prime\prime}\) in Eq. (5)) and the inlet BCs. When \(q^{\prime\prime\prime}=0\), there is no gradient in temperature. When \(q^{\prime\prime\prime}=c\), a constant, there is a linear gradient in temperature. The PSM correctly predicted both situations, as shown in Subplots I-K and R-T in Fig. 5. However, the ANN has no knowledge of \(q^{\prime\prime\prime}\), and thus it interpolates between the sensor readings. Therefore, the ANN model can only correctly predict temperature distributions in regions where \(q^{\prime\prime\prime}=0\), i.e., where there is no temperature gradient and any modification is provided directly by the inlet BC. ### Training with noisy measurements The data from sensor measurements in a physical plant are noisy. The noise stems from inherent randomness during fluid transport and from the signal measurement itself. It is important to quantify the performance of PSMs in such environments. The introduction of homoscedastic and heteroscedastic noise is studied. The former is introduced by, \[x_{\mathrm{noisy}}=x+\sigma\cdot\epsilon\;, \tag{44}\] where \(x_{\mathrm{noisy}}\) is the noisy signal, \(x\) is the noise-free signal, \(\sigma\) is the noise standard deviation, and \(\epsilon\) is a random noise vector sampled from a standard normal distribution. This type of noise models scenarios where the variance is constant across all magnitudes. Heteroscedastic noise is introduced by, \[x_{\mathrm{noisy}}=x+\sqrt{|x|\cdot k}\cdot\epsilon\;, \tag{45}\] where \(k\) is the variance factor. The variance factor is a parameter that controls the relationship between the signal's magnitude and the added noise's variance. Thus, this type of noise models scenarios where the noise variance is not constant but depends on the underlying signal's magnitude. The noisy signals are visualized in Fig. 7. During training, the signals are manipulated randomly for each batch. Noise is added to the sensor measurements and the initial conditions (\(x_{k}\) and \(x_{0,k}\), respectively, in Eq. (26)).
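A minimal sketch of the two noise models, Eqs. (44) and (45), as they would be applied afresh to each training batch (function names are ours):

```python
import torch

# Homoscedastic noise, Eq. (44): constant variance at all magnitudes.
def add_homoscedastic(x, sigma=0.05):
    return x + sigma * torch.randn_like(x)

# Heteroscedastic noise, Eq. (45): variance scales with |x| via factor k.
def add_heteroscedastic(x, k=0.05):
    return x + torch.sqrt(x.abs() * k) * torch.randn_like(x)
```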
It is assumed that the system's actuation is not noisy (i.e., the control signals, \(v_{k}\), are noise-free). The results from introducing noise are discussed next. The training procedure remains the same, except that artificial noise is added during each batch. In Table 1, the results are tabulated for both types of noise and varying magnitudes of the noise factors (\(\sigma\) in Eq. (44), \(k\) in Eq. (45)). First, inspecting the mean RMSE values, the PSM has a significant advantage in predicting the velocity and temperature fields. The advantage is smaller for the pressure field. This might be because the nominal RMSE values for pressure are already very low when contrasted with the pressure range (\(\Delta P\)) across all experiments. Inspecting the maximum RMSE values, the PSM retains a significant advantage in predicting the velocity and temperature fields, with some anomalies noted for \(\sigma=0.05\) and \(k=0.05\). The PSM's advantage for the pressure field is relatively moderate, likely because pressure has a simple spatiotemporal relationship in the current experimental setting. To summarize, the results show that even when noise is added to the signals, PSMs have a significant advantage over a purely data-driven model.

Figure 6: Comparison of root-mean-square error (RMSE) in the prediction of temperature for the PSM (left) and ANN (right) models. The RMSE values are averaged over all test datasets. The heated region is present between the dashed red lines.

### PSM Capability: Non-linear Supervisory Control In the previous section, the PSM architecture was shown to accurately approximate the spatiotemporal distribution of all the fields. Because the PSM is an end-to-end differentiable model, we can calculate the gradients of its outputs with respect to its inputs using automatic differentiation. In this section, the capability of a PSM to provide non-linear supervisory control is demonstrated. The Reference Governor (RG) supervisory control scheme is adopted to demonstrate this capability. The RG algorithm enforces pointwise-in-time constraints on a system by modifying the reference input setpoints [16]. The RG algorithm is summarized next. At each time-step \(k\in\mathbb{Z}_{+}\), the RG receives a reference input, \(r_{k}\in\mathbb{R}^{m}\), where \(m\) is the number of inputs. Depending on the system's current state, expected evolution, and imposed constraints, an admissible input \(v_{k}\in\mathbb{R}^{m}\), \[v_{k}=v_{k-1}+\kappa_{k}\left(r_{k}-v_{k-1}\right)\;, \tag{46}\] is obtained. The admissible input is then sent to the lower-level controllers of the system (e.g., a PID controller). In the Scalar RG (SRG) formulation, \(\kappa_{k}\in[0,1]\) is a _scalar_ that governs admissible changes to the inputs, such that a complete rejection, \(v_{k}=v_{k-1}\), acceptance, \(v_{k}=r_{k}\), or an intermediate change, \(v_{k-1}\leq v_{k}\leq r_{k}\), is possible. Next, how the SRG algorithm determines \(\kappa_{k}\) is summarized. The RG uses a linear state-space representation, Eqs. (16) and (17), and then imposes constraints on the output variables, \(y_{k}\in Y\), where a set of linear inequalities defines \(Y\).
\begin{table} \begin{tabular}{|l c c c c c c|} & \(\varepsilon_{P,\mathrm{ANN}}[\mathrm{Pa}]\) & \(\varepsilon_{u,\mathrm{ANN}}[\mathrm{mm/s}]\) & \(\varepsilon_{T,\mathrm{ANN}}[^{\circ}\mathrm{C}]\) & \(\frac{\varepsilon_{P,\mathrm{PSM}}}{\varepsilon_{P,\mathrm{ANN}}}[\%]\) & \(\frac{\varepsilon_{u,\mathrm{PSM}}}{\varepsilon_{u,\mathrm{ANN}}}[\%]\) & \(\frac{\varepsilon_{T,\mathrm{PSM}}}{\varepsilon_{T,\mathrm{ANN}}}[\%]\) \\ \hline \multicolumn{7}{|c|}{Mean} \\ \hline Noise-free & 0.51 & 0.17 & 1.01 & 58 & 52 & 6 \\ Homos. \(\sigma=0.01\) & 1.25 & 0.25 & 1.15 & 12 & 25 & 3 \\ Homos. \(\sigma=0.05\) & 0.46 & 0.22 & 0.98 & 143 & 35 & 17 \\ Homos. \(\sigma=0.10\) & 0.83 & 0.23 & 1.01 & 91 & 51 & 28 \\ Heteros. \(k=0.01\) & 0.71 & 0.25 & 1.15 & 94 & 41 & 24 \\ Heteros. \(k=0.05\) & 0.94 & 0.27 & 1.10 & 94 & 68 & 27 \\ Heteros. \(k=0.10\) & 1.07 & 0.34 & 0.97 & 89 & 63 & 33 \\ \hline \multicolumn{7}{|c|}{Maximum} \\ \hline Noise-free & 4.82 & 1.25 & 5.81 & 113 & 52 & 31 \\ Homos. \(\sigma=0.01\) & 8.97 & 1.24 & 6.08 & 10 & 43 & 13 \\ Homos. \(\sigma=0.05\) & 6.17 & 1.24 & 7.00 & 53 & 104 & 14 \\ Homos. \(\sigma=0.10\) & 6.49 & 2.38 & 6.87 & 78 & 35 & 27 \\ Heteros. \(k=0.01\) & 4.80 & 1.34 & 7.70 & 81 & 51 & 19 \\ Heteros. \(k=0.05\) & 5.47 & 1.42 & 6.54 & 95 & 120 & 35 \\ Heteros. \(k=0.10\) & 6.31 & 1.97 & 5.46 & 76 & 69 & 38 \\ \hline \multicolumn{7}{|c|}{Range across all experiments: \(\Delta P=531\,\mathrm{Pa}\), \(\Delta u=200.0\,\mathrm{mm/s}\), \(\Delta T=92.6\,^{\circ}\mathrm{C}\)} \\ \end{tabular} \end{table} Table 1: Comparison of the mean and maximum RMSE for field variables using the ANN across all test datasets. The relative RMSE error for PSMs is displayed in the last three columns: the values are color-coded to highlight a significant performance improvement when below 75 % and a degradation in performance if above 100 %. The range of field values over all experiments is displayed at the bottom.

Figure 7: Example of the manipulation of sensor measurements for all fields. During training, the signals are manipulated randomly for each batch.

Then, a construct referred to as the Maximal Output Admissible Set (\(O_{\infty}\)) is defined - it is the set of all \(x_{k}\) and constant inputs \(\tilde{v}\) such that, \[O_{\infty}=\left\{(x_{k},\tilde{v}):y_{t+k}\in Y,\ v_{t+k}=\tilde{v},\ \forall k\in\mathbb{Z}^{+}\right\}. \tag{47}\] The \(k\to\infty\) assumption of \(O_{\infty}\) is generally relaxed to a suitably large finite horizon, \(T\). Thus, at each time-step \(k\), the SRG algorithm determines the admissible \(\tilde{v}\), and therefore \(\kappa_{k}\), such that the system remains in \(O_{\infty}\). The \(O_{\infty}\) is constructed by a roll-out of Eqs. (16) and (17), detailed further in [16, §2]. For a fixed set of state-space matrices and constraints, the matrices needed to construct \(O_{\infty}\) can be precomputed and stored for each RG evaluation. A drawback of the SRG formulation is that \(\kappa_{k}\) is a scalar. If \(v_{k}-r_{k}\) is multi-dimensional and \(\kappa_{k}<1\), input movement in all dimensions is bounded. In this work, we utilize the Command Governor (CG), a variant of the RG which selects \(v_{k}\) by solving the quadratic program, \[v_{k}=\operatorname*{argmin}_{v_{k}}\left\|v_{k}-r_{k}\right\|_{Q}^{2}\;,\quad\mathrm{s.t.}\quad(x_{k},v_{k}=\tilde{v})\in O_{\infty}\;, \tag{48}\] where \(Q\) is a positive definite matrix signifying the relative importance of each \(v_{k}\) component. The quadratic program is solved by the CVXPY library [17].
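A minimal CVXPY sketch of the CG step in Eq. (48), assuming the \(O_{\infty}\) membership has been reduced to precomputed linear inequalities \(H_{x}x_{k}+H_{v}\tilde{v}\leq h\) (the matrix names are our notation):

```python
import cvxpy as cp

# Sketch of one Command Governor step, Eq. (48): find the admissible
# input v_k closest to the request r_k in the Q-weighted norm, subject
# to (x_k, v_k) remaining in O_inf, expressed as H_x x + H_v v <= h.
def command_governor_step(r_k, x_k, Q, H_x, H_v, h):
    v = cp.Variable(r_k.shape[0])
    objective = cp.Minimize(cp.quad_form(v - r_k, Q))
    constraints = [H_x @ x_k + H_v @ v <= h]
    cp.Problem(objective, constraints).solve()
    return v.value
```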
Using the CG, constraints can be enforced by manipulating \(v_{k}\) arbitrarily in \(\mathbb{R}^{m}\), constrained by \(O_{\infty}\). To utilize the CG, a linear representation of the system is needed. This work assumes full-state feedback, \(C=\mathbb{I}\), and no control pass-through, \(D=\mathbf{0}\). Because the PSM is end-to-end differentiable, we can approximate the nonlinear model, Eq. (14), as a linear representation using Jacobian linearization, \[x_{k+1} \approx x_{00}+A\delta x_{k}+B\delta v_{k}\, \tag{49}\] \[\delta x_{k} =x_{k}-x_{00}\,\] (50) \[\delta v_{k} =v_{k}-v_{00}\, \tag{51}\] where \(x_{00}\) and \(v_{00}\) are the state and input values at the linearization point. The matrices \(A\) and \(B\) are approximated by the Jacobians of \(\mathcal{F}\), Eq. (27), at the linearization point, \[A =\left[\frac{\partial\mathcal{F}_{x_{k+1}}}{\partial x_{k,0}},\frac{\partial\mathcal{F}_{x_{k+1}}}{\partial x_{k,1}},...,\frac{\partial\mathcal{F}_{x_{k+1}}}{\partial x_{k,N}}\right]^{\intercal}_{(x_{00},v_{00})}\, \tag{52}\] \[B =\left[\frac{\partial\mathcal{F}_{x_{k+1}}}{\partial v_{k,0}},\frac{\partial\mathcal{F}_{x_{k+1}}}{\partial v_{k,1}},...,\frac{\partial\mathcal{F}_{x_{k+1}}}{\partial v_{k,M}}\right]^{\intercal}_{(x_{00},v_{00})}\, \tag{53}\] where \(N\) and \(M\) are the number of states and inputs. In summary, the RG was introduced as the framework to enforce constraints, followed by a specific version, the CG, which can manipulate multiple inputs to enforce constraints on a system. Then, the approach to utilize the PSM to approximate the linear state-space matrices required by the CG was introduced. The algorithm to utilize this procedure for supervisory control is listed in Alg. 2.

```
1:  Input: PSM model \(\mathcal{F}\), Requested set point trajectory \(r\), Time-dependent constraints on states \(Y_{k}\), Input matrix \(Q\), Matrices \(A\) and \(B\) update interval \(\gamma\)
2:  Initialize: Initial condition of system \(x_{0}\), Iteration counter \(i\gets 0\)
3:  for \(k=1\) to \(\text{length}(r)\) do
4:    if \(k\mod\gamma=0\) then
5:      Obtain matrices \(A\) and \(B\) by Jacobian Linearization of \(\mathcal{F}\) at state \(x_{k},v_{k}\)
6:      Update the \(O_{\infty}\) matrices with the current linear state-space model
7:    endif
8:    if Constraint update then
9:      Update the \(O_{\infty}\) matrices with the current \(Y_{k}\)
10:   endif
11:   Formulate the quadratic program in Eq. (48) with inputs \(r_{k},x_{k}\)
12:   Solve the quadratic program to obtain the admissible set point \(v_{k}\)
13:   Return admissible set point \(v_{k}\) to controlled system
14:   Update state of system \(x_{k}\gets x_{k+1}\)
15: endfor
```
**Algorithm 2** Sequential Neural Command Governor (NCG) using PSMs

Next, results demonstrating Alg. 2 are presented. In Fig. 8, an input permutation from the test dataset is chosen randomly to manipulate the inlet BCs. The variation of the temperature BC, \(T_{\rm in}\), causes a significant temperature rise after the heated channel, observable by comparing subplots I-K. It is desirable to constrain excessive increases in temperature to avoid damage to equipment or to mitigate corrosion. Thus, a temperature constraint is assigned at a location after the heated channel, \(z=2.3\,\mathrm{m}\), shown in subplot K. The NCG algorithm admits input changes to enforce these constraints, shown in subplots A and B. The results demonstrate that the approach is successful in meeting the enforced constraints. There are a few key takeaways.
First, the NCG algorithm independently manipulates each input value, minimizing the distance from the requested input trajectories, i.e., Eq. (48). Second, a time-dependent constraint is demonstrated (annotated in subplot K). Accommodating time-dependent constraints enables flexibility during operation - it is envisioned that, as part of a comprehensive autonomous operation framework, an external algorithm could update the constraints enforced on a system depending on its health condition. Lastly, it is important to note that in this demonstration, the PSM is only used to update the \(A\) and \(B\) matrices - the numerical solver is used as the environment with which the NCG algorithm interacts.

## 4 Results: applying PSMs for transport in a Cooling System

In addition to the heated channel configuration, a "cooling system" configuration was adopted. A series of six pipes are arranged in a loop. All pipes have a total cross-sectional area of \(0.449\,\mathrm{m}^{2}\) and a hydraulic diameter of \(2.972\,\mathrm{mm}\). No solids are modeled - only the internal flow area. All pipes are \(1.0\,\mathrm{m}\) in length, except for the pipes immediately after the heated and cooled sections, which are \(2.0\,\mathrm{m}\) long. The pipes are discretized into 10 elements per meter. A pump provides the head necessary to drive the fluid flow. In addition to a volumetric heat source, a volumetric heat sink is also present. This configuration is commonly found in advanced cooling systems, such as those used in high-performance electronic devices, nuclear reactors, and aerospace applications. The loop configuration is visualized in Fig. 9. There are two independent control inputs, \[q_{\mathrm{in}}^{\prime\prime\prime}=q_{\mathrm{in}}^{\prime\prime\prime}(t)\;, \tag{54}\] \[q_{\mathrm{out}}^{\prime\prime\prime}=-q_{\mathrm{in}}^{\prime\prime\prime}(t)\;, \tag{55}\] \[\Delta p_{\mathrm{pump}}=\Delta p_{\mathrm{pump}}(t), \tag{56}\] where \(q_{\mathrm{in}}^{\prime\prime\prime}\) is the volumetric heat generation rate, \(q_{\mathrm{out}}^{\prime\prime\prime}\) is the volumetric cooling rate, and \(\Delta p_{\mathrm{pump}}\) is the pressure head provided by the pump. A control strategy is adopted that synchronizes the heating and cooling rates to increase/decrease the system's heat generation/cooling capability while maintaining constant temperatures throughout the loop. Both independent control inputs (\(q_{\mathrm{in}}^{\prime\prime\prime}(t),\Delta p_{\mathrm{pump}}(t)\)) are functions of time and will be defined by linear ramps that vary in ramp rate, maximum/minimum amplitude, and rest time between manipulations. Additionally, there is one boundary condition, \[p(z=z_{\mathrm{set}},t)=p_{\mathrm{loop}}\;, \tag{57}\] which is a static pressure boundary that defines the relative value of the pressure in the loop.

Figure 8: Demonstration of the NCG algorithm. The PSM instantiates the models needed to utilize the supervisory control algorithm. The numerical solver is used as the environment with which the NCG algorithm interacts. Subplots A-B: Control input permutations. The input value without constraints is the reference input trajectories for the NCG algorithm. Subplots C-K: Field values at fixed locations. Subplot K: A constraint is assigned at this location and manipulated as a function of time.

Unlike the heated channel configuration, the loop configuration is more challenging from a dynamics modeling perspective.
First, there is no time-dependent manipulation of the field variables via boundary conditions. The manipulation of the pump head and heat source indirectly impacts the field variables, and their evolution has a spatiotemporal effect dictated by the transport equations, Eqs. (3) to (5). A visualization of the evolution of field variables is presented in Fig. 10. The pump head trajectory strongly impacts the velocity of the fluid. Due to the control strategy, the temperature of the fluid remains approximately constant with respect to time but has a spatial variation.

Figure 9: Configuration of pipes in a loop. Left: The imposed pump BC location and heat source/sink terms. Right: A list of the ranges for the pump pressure jump, heat source/sink terms, and loop pressure setting.

Figure 10: An overview of the training dataset for the loop configuration. Left: Pump pressure jump and heat source inputs manipulated in time. The experiment chosen to display the solution is annotated ("Exp. 8"). Right: Numerical solution of all field variables over the entire spatiotemporal domain.

### Evaluation with test data

The performance of the PSM and ANN models for the cooling system is discussed next. A sample of the control input permutations and the numerical solution of the field variables are presented in Fig. 11. Through visual inspection, it is apparent that the ANN struggles to accurately predict the qualitative evolution of all the field variables. Because the ANN relies on a purely data-driven approach, the neural network correctly fits the field values about the spatial locations corresponding with sensors. Yet, it fails to correctly interpolate the field values between the sensor locations because the problem is more complicated than the advection-dominated heated channel. In contrast, the PSM provides good quantitative and qualitative spatiotemporal predictions of all field variables. The contrast between the ANN and PSM models is illustrated further in Fig. 12, which presents the error in predicting all field variables averaged over all test datasets. The PSM model offers a significant performance advantage. As the prediction horizon for the ANN increases, a significant error is introduced in all fields. However, even for small time horizons, \(t\leq 100\,\mathrm{s}\), there are significant errors in the prediction of temperature and pressure. In summary, the results in this section show that in a complex experimental setting, a purely data-driven approach, i.e., the ANN model, overfits to sensor measurements and results in unphysical predictions over the entire spatiotemporal domain. In contrast, the PSM can generalize well to test data and achieve low error.

Figure 11: Contrasting the performance of PSM and ANN models in predicting the cooling system's evolution of pressure, velocity, and temperature. Subplots A-B: control input setting from the test dataset. Subplots C-K: field temporal evolution at fixed positions. Subplots L-T: field spatial distribution at fixed times.

Figure 12: Comparison of the RMSE in predicting the spatiotemporal evolution of all field variables, averaged over all test datasets. Subplots A-C display error for the PSM model, and subplots D-F display error for the ANN model.

### PSM Capability: Diagnostics

Diagnostics, in the context of this work, refers to the systematic identification of system degradation or faults. A key benefit of utilizing PSMs is their ability to incorporate the governing physics of the system through PDEs.
These PDEs produce residuals - differences between observed and predicted behavior - that can be inspected for diagnostic purposes. In a fully trained PSM, residuals from the mass, momentum, and energy PDEs, Eqs. (3) to (5), stabilize during the system's nominal fault-free operation. If a component degrades during operation, these residuals will change. An algorithm can then use these changes in the residual signature to identify and classify the nature of the degradation. To validate this approach, a test case was designed to simulate a degradation in the cooling system's performance. In this case, an abrupt partial blockage of the system occurs in Pipe 3 (immediately before the heat sink). The blockage was simulated by increasing the friction factor, \(f\) in Eq. (4), of that pipe by an order of magnitude. In Alg. 3, a procedure is proposed to detect degradation and use a classification algorithm to diagnose the incident. The algorithm requires the continuous collection of new datasets, \((\mathcal{X}_{\mathrm{new}},\mathcal{Y}_{\mathrm{new}})\), and their analysis. It operates using two PSMs: a 'nominal model', which represents the fault-free system, and a 'twin model' whose parameters are updated once degradation is detected. The update is triggered by a parameterized threshold, \(\tau\), in Line 6. After the twin model is updated, residuals from both PSMs are analyzed using a classification model to infer the type of degradation that has occurred. The development of this classification model is outside the scope of the current work.

```
1: Input: Original dataset \((\mathcal{X},\mathcal{Y})\), new dataset \((\mathcal{X}_{\mathrm{new}},\mathcal{Y}_{\mathrm{new}})\), nominal PSM model \(\mathcal{F}_{\mathrm{nom}}\), threshold \(\tau\), classification model \(\mathcal{C}\)
2: Initialize: Twin PSM model \(\mathcal{F}_{\mathrm{deg}}\), degradation flag latch \(\leftarrow\) False
3: for all samples \((x_{k},y_{k})\) in \((\mathcal{X}_{\mathrm{new}},\mathcal{Y}_{\mathrm{new}})\) do
4:   Predict \(\hat{y}_{k}=\mathcal{F}_{\mathrm{nom}}(x_{k})\)
5:   Compute \(E=\frac{1}{n}\sum_{k=1}^{n}(\hat{y}_{k}-y_{k})^{2}\)
6:   if \(E>\tau\) and latch is False then
7:     Set latch to True
8:   end if
9:   if latch is True then
10:    Train \(\mathcal{F}_{\mathrm{deg}}\) on \((x_{k},y_{k})\) without physics-informing
11:  end if
12: end for
13: Compute nominal residuals as \(r=\text{PDEs}(\mathcal{F}_{\mathrm{nom}}(x))\) for \(x\) in \(\mathcal{X}\)
14: Compute degraded residuals as \(r_{\mathrm{deg}}=\text{PDEs}(\mathcal{F}_{\mathrm{deg}}(x))\) for \(x\) in \(\mathcal{X}\)
15: Create combined residual dataset \(\mathcal{R}=\{r,r_{\mathrm{deg}}\}\)
16: Predict degradation type \(d\) using \(\mathcal{C}\) on \(\mathcal{R}\): \(d=\mathcal{C}(\mathcal{R})\)
17: Return \(d\)
```
**Algorithm 3** Degradation Detection and Classification

The application of Alg. 3 to the data generated from the degradation test case is discussed next. In Fig. 13, the scaled residual values are contrasted for the nominal and degraded models. The residuals are presented as a function of position. Within the cooling system, Pipe 3 is located at \(4.0\,\mathrm{m}\leq z\leq 5.0\,\mathrm{m}\). There are a few important observations to consider. First, the nominal model has a relatively constant spatial distribution for the momentum residual, \(r_{\mathrm{mom}}\). In contrast, for the degraded model, the spatial distribution shows a qualitative shift in the residual around the location of Pipe 3.
Second, the mass and energy residuals do not show a significant shift in their spatial distributions. From these observations, a classification algorithm can be trained to identify the degradation that has occurred. In summary, in this section, an algorithm was proposed and applied to a test case to show that PSMs can provide meaningful data to a diagnostics algorithm to identify system degradation or faults.

## 5 Discussion and Concluding Remarks

Autonomous systems are envisioned to achieve real-time optimization, be flexible in operation, and be fault tolerant. These capabilities necessitate a model-based approach. However, purely data-driven methods lack physical constraints like mass conservation. Consequently, model-free approaches, such as PID control, dominate the operation of modern transport-dominated systems such as chemical, biomedical, and power plants. To address this challenge, we propose the adoption of PSMs. A PSM is a deep neural network trained by fusing sensor data with physics-informing using the components' PDEs. The result is a physics-constrained, end-to-end differentiable forward dynamics model. Two separate experiments were designed, in silico, to demonstrate PSM capabilities for transport-dominated systems. The first experiment was a heated channel, and the second was a cooling system loop. The datasets used to train the PSM models are available online\({}^{1}\). Evaluation on test data indicates that the PSM approach is quantitatively and qualitatively more accurate than a purely data-driven approach, such as an ANN. Additionally, both homo- and heteroscedastic noise were introduced during training to demonstrate the applicability of PSMs to a physical plant. Footnote 1: Dataset will be released on journal acceptance. Because PSMs are an end-to-end differentiable state-space model, there are several use cases:

1. The first demonstration used a PSM model to construct a supervisory controller (Section 3.3). To achieve this, the proposed NCG algorithm, Alg. 2, used a linear state-space representation sequentially updated through Jacobian linearization of the PSM. The capability was utilized to demonstrate both constant and time-dependent constraint enforcement. Thus, PSMs can be utilized for control schemes that require a non-linear or linear state-space representation.

2. The second demonstration proposed a diagnostics algorithm that utilized the PSM as a basis (Section 4.2). Conservation laws, in the form of PDEs, can be evaluated using the PSM. The proposed algorithm, Alg. 3, uses residuals from each of the PDEs, which are available as a function of location. The demonstration simulated a partial blockage of a pipe in the cooling system. Two separate PSMs were used to obtain PDE residuals, one trained on the nominal dataset and one that had been transfer-learned on the degraded dataset. It was demonstrated that the differences in the residuals of the momentum equation can then be used as inputs to a classification algorithm.

3. An additional use case of PSMs, not covered in this work, is as a basis for Digital Twins (DTs). DTs require that the "digital" model used to represent the physical asset is updated frequently through feedback from measurements to match drifts from anticipated dynamics, e.g., due to wear and tear [18]. Because the PSM is end-to-end differentiable, this feature can be accommodated through online learning of PDE coefficients.

There are potential drawbacks of the PSM that warrant further research.
First, this work did not rigorously study the number and placement of sensors (measurement locations). The optimal choice would be highly dependent on the characteristics of the system. If a PSM is applied to an existing facility, adding additional sensors may not be possible. Second, the current work intentionally focused on relatively small systems with few components. For large systems, e.g., large power plants with multiple loops, a single PSM would not be sufficient to model the entire facility. Additional methods are needed to train and coordinate multiple PSMs representing separate parts of a plant. A potential approach may be to adopt a graphical architecture where PSMs represent transport between nodes. Figure 13: A comparison of the spatial distribution of nominal and degraded residuals, \(r\) and \(r_{\mathrm{deg}}\), respectively, in Alg. 3. The residual values have been scaled by their maximum and minimum value. ## CRediT authorship contribution statement **Akshay J. Dave**: Conceptualization, Methodology, Writing - Original Draft, Funding acquisition. **Richard B. Vilim**: Supervision, Methodology, Writing - Review & Editing. ## Acknowledgements Argonne National Laboratory supports this work under the Laboratory Directed Research & Development Program, Project 2022-0077.
2309.08275
User Power Measurement Based IRS Channel Estimation via Single-Layer Neural Network
One main challenge for implementing intelligent reflecting surface (IRS) aided communications lies in the difficulty to obtain the channel knowledge for the base station (BS)-IRS-user cascaded links, which is needed to design high-performance IRS reflection in practice. Traditional methods for estimating IRS cascaded channels are usually based on the additional pilot signals received at the BS/users, which increase the system training overhead and also may not be compatible with the current communication protocols. To tackle this challenge, we propose in this paper a new single-layer neural network (NN)-enabled IRS channel estimation method based on only the knowledge of users' individual received signal power measurements corresponding to different IRS random training reflections, which are easily accessible in current wireless systems. To evaluate the effectiveness of the proposed channel estimation method, we design the IRS reflection for data transmission based on the estimated cascaded channels in an IRS-aided multiuser communication system. Numerical results show that the proposed IRS channel estimation and reflection design can significantly improve the minimum received signal-to-noise ratio (SNR) among all users, as compared to existing power measurement based designs.
He Sun, Weidong Mei, Lipeng Zhu, Rui Zhang
2023-09-15T09:36:22Z
http://arxiv.org/abs/2309.08275v1
# User Power Measurement Based IRS Channel Estimation via Single-Layer Neural Network

###### Abstract

One main challenge for implementing intelligent reflecting surface (IRS) aided communications lies in the difficulty to obtain the channel knowledge for the base station (BS)-IRS-user cascaded links, which is needed to design high-performance IRS reflection in practice. Traditional methods for estimating IRS cascaded channels are usually based on the additional pilot signals received at the BS/users, which increase the system training overhead and also may not be compatible with the current communication protocols. To tackle this challenge, we propose in this paper a new single-layer neural network (NN)-enabled IRS channel estimation method based on only the knowledge of users' individual received signal power measurements corresponding to different IRS random training reflections, which are easily accessible in current wireless systems. To evaluate the effectiveness of the proposed channel estimation method, we design the IRS reflection for data transmission based on the estimated cascaded channels in an IRS-aided multiuser communication system. Numerical results show that the proposed IRS channel estimation and reflection design can significantly improve the minimum received signal-to-noise ratio (SNR) among all users, as compared to existing power measurement based designs.

## I Introduction

Intelligent reflecting surface (IRS) has recently emerged as a candidate technology for future sixth-generation (6G) wireless communication systems due to its capability of realizing a smart and reconfigurable propagation environment cost-effectively [1]. Specifically, an IRS consists of a large number of passive reflecting elements with independently tunable reflection coefficients, which can be jointly designed to alter the phase and/or amplitude of its incident signal to achieve high-performance passive beamforming for various purposes, such as signal boosting, interference cancellation, target sensing, etc. [1, 2, 3]. To this end, IRS passive beamforming, or in general passive reflection, should be properly designed. In the existing literature, there are two main approaches for IRS passive beamforming design, which are based on channel estimation pilots and user signal power measurements, respectively. In the former approach, the cascaded base station (BS)-IRS-user/user-IRS-BS channels are first estimated based on the downlink/uplink pilots received at the users/BS with time-varying IRS training reflections, and then the IRS reflection for data transmission is optimized based on the estimated IRS cascaded channels [4, 5, 6, 7]. Alternatively, the authors in [8] proposed to train a deep neural network (NN) to directly learn the mapping from the received pilot signals to the optimal IRS reflection. However, the above pilot-based designs require additional training pilots for IRS channel estimation or NN training, which not only increases the system training overhead but also may not be compatible with current cellular transmission protocols that cater to the user-BS direct channel (without IRS) estimation only. To efficiently integrate the IRS into current wireless systems without the need of changing their protocols, the latter approach designs the IRS reflection for data transmission based on the received (pilot or data) signal power measurements at each user's receiver with time-varying IRS reflections, which can be easily obtained in existing wireless systems.
For example, passive beam training for IRS-aided millimeter-wave (mmWave) systems [9, 10] and conditional sample mean (CSM)-based IRS reflection for IRS-aided sub-6 GHz systems [11] have been proposed. In particular, it was shown in [11] that in the single-user case, the CSM method can achieve an IRS passive beamforming gain in the order of the number of IRS reflecting elements, which is identical to that under perfect channel state information (CSI) [1]. However, the number of random IRS reflections needed for CSM to obtain sufficient user power measurement samples is very large (hundreds or even thousands) even for the single-user case, which still results in high implementation overhead and large training delay. The fundamental reason for CSM's low efficiency lies in its lack of IRS channel information extraction from the users' power measurements.

Fig. 1: IRS-aided multicasting with users' power measurements.

In this paper, we propose a new IRS cascaded channel estimation and IRS reflection design method based on users' power measurements similar to CSM in [11]. However, different from CSM, we first estimate the IRS cascaded channels based on user power measurements and then design the IRS reflection for data transmission based on the estimated channels. This thus overcomes the aforementioned inefficacy of CSM due to the lack of channel information extraction. In particular, our proposed IRS channel estimation method based on user power measurements leverages a simple single-layer NN formulation. Specifically, we first reveal that for any given IRS reflection, the received signal power at each user can be equivalently modeled as the output of a single-layer NN, with its weights corresponding to the coefficients of the cascaded BS-IRS-user channel. Inspired by this, we optimize the weights of the single-layer NN to minimize the mean squared error (MSE) between its output and each user's power measurement via the stochastic gradient descent method, thereby estimating the cascaded BS-IRS-user channel. Next, to evaluate the effectiveness of the proposed channel estimation method, we design the IRS reflection for data transmission based on the estimated cascaded channels in an IRS-aided multiuser multicast communication system, as shown in Fig. 1. We aim to optimize the IRS reflection to maximize the minimum received signal-to-noise ratio (SNR) among all users and solve this problem efficiently by applying various optimization techniques. Numerical results show that the proposed IRS channel estimation and IRS reflection design can yield much better performance than existing user power measurement based schemes such as CSM. _Notations_: Scalars, vectors and matrices are denoted by lower/upper case, boldface lower case and boldface upper case letters, respectively. For any scalar/vector/matrix, \((\cdot)^{*}\), \((\cdot)^{T}\) and \((\cdot)^{H}\) respectively denote its conjugate, transpose and conjugate transpose. \(\mathbb{C}^{n\times m}\) and \(\mathbb{R}^{n\times m}\) denote the sets of \(n\times m\) complex and real matrices, respectively. \(\|\cdot\|\) denotes the Euclidean norm of a vector, and \(|\cdot|\) denotes the cardinality of a set or the amplitude of a complex scalar. \(j=\sqrt{-1}\) denotes the imaginary unit. \(\mathrm{Re}(\cdot)\) and \(\mathrm{Im}(\cdot)\) denote the real and imaginary parts of a complex vector/number, respectively. \(\mathbf{V}\succeq\mathbf{0}\) indicates that \(\mathbf{V}\) is a positive semidefinite matrix.
\(\text{Tr}(\cdot)\) denotes the trace of a matrix. The distribution of a circularly symmetric complex Gaussian (CSCG) random variable with zero mean and covariance \(\sigma^{2}\) is denoted by \(\mathcal{CN}(0,\sigma^{2})\).

## II System Model and Problem Formulation

As shown in Fig. 1, we consider an IRS-aided multicast communication system, where a single-antenna BS (or a multi-antenna BS with fixed downlink precoding) transmits a common message to \(K\) single-antenna users (or independent messages to different users over orthogonal frequency bands), with the help of an IRS consisting of \(N\) reflecting elements. It is assumed that there is a central controller in the system (the BS or another dedicated unit) which can collect the users' received signal power measurements and thereby optimize the IRS passive reflection. Let \(U_{k}\) denote the \(k\)-th user, \(k\in\mathcal{K}\triangleq\{1,2,...,K\}\). In this paper, we consider quasi-static block-fading channels and focus on a given fading block, during which all the channels involved are assumed to be constant. The baseband equivalent channel from the BS to the IRS, that from the BS to \(U_{k}\), and that from the IRS to \(U_{k}\) are denoted by \(\mathbf{h}_{BI}\in\mathbb{C}^{N\times 1}\), \(h_{BU_{k}}\in\mathbb{C}\) and \(\mathbf{h}_{IU_{k}}^{H}\in\mathbb{C}^{1\times N}\), respectively. Let \(\mathbf{\Theta}=\text{diag}(e^{j\theta_{1}},...,e^{j\theta_{N}})\) denote the reflection matrix of the IRS, where \(\theta_{i}\) denotes the phase shift of its \(i\)-th reflecting element, \(1\leq i\leq N\). Due to hardware constraints, we consider that the phase shift of each reflecting element can only take a finite number of discrete values in the set \(\Phi_{\alpha}=\{\omega,2\omega,3\omega,...,2^{\alpha}\omega\}\), where \(\alpha\) is the number of bits used to uniformly quantize the continuous phase shift in \((0,2\pi]\), and \(\omega=\frac{2\pi}{2^{\alpha}}\) [12]. Let \(P\) denote the transmit power of the BS. The effective channel from the BS to \(U_{k}\) is expressed as \[g_{k}=\sqrt{P}\left(h_{BU_{k}}+\mathbf{h}_{IU_{k}}^{H}\mathbf{\Theta}\mathbf{h}_{BI}\right),\ k\in\mathcal{K}, \tag{1}\] where we have incorporated the effect of the BS transmit power \(P\) into the BS-\(U_{k}\) effective channel, since it may be practically unknown to the central controller. Let \(\bar{\mathbf{v}}^{H}=\left[e^{j\theta_{1}},...,e^{j\theta_{N}}\right]\) denote the passive reflection of the IRS, and \(\bar{\mathbf{h}}_{k}=\text{diag}(\mathbf{h}_{IU_{k}}^{H})\mathbf{h}_{BI}\) denote the cascaded BS-IRS-\(U_{k}\) channel. As such, the channel in (1) can be simplified as \[g_{k}=\sqrt{P}h_{BU_{k}}+\sqrt{P}\bar{\mathbf{v}}^{H}\bar{\mathbf{h}}_{k},\ k\in\mathcal{K}. \tag{2}\] By extending the IRS passive reflection vector into \(\mathbf{v}^{H}=\left[1,\bar{\mathbf{v}}^{H}\right]\) and stacking the direct and cascaded BS-\(U_{k}\) channels into \(\mathbf{h}_{k}^{H}=\sqrt{P}\left[h_{BU_{k}}^{*},\bar{\mathbf{h}}_{k}^{H}\right]\), the baseband equivalent channel in (2) can be further simplified as \[g_{k}=\mathbf{v}^{H}\mathbf{h}_{k},\ k\in\mathcal{K}. \tag{3}\] Let \(s\in\mathbb{C}\) denote the transmitted symbol (pilot or data) at the BS with \(|s|^{2}=1\). Hence, the received signal at \(U_{k}\) is given by \[y_{k}=g_{k}s+n_{k},\ k\in\mathcal{K}, \tag{4}\] where \(n_{k}\sim\mathcal{CN}(0,\sigma^{2})\) denotes the complex additive white Gaussian noise (AWGN) at \(U_{k}\) with power \(\sigma^{2}\).
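As a quick numerical illustration of the signal model in (1)-(4), the following NumPy sketch draws a random extended channel and reflection and forms the received samples; all names, dimensions, and values here are illustrative assumptions, not from this paper.

```
import numpy as np

rng = np.random.default_rng(1)
N, Q = 64, 10          # reflecting elements, received samples per reflection
sigma2 = 1e-9          # noise power sigma^2 (illustrative)

# Extended cascaded channel h_k in C^{N+1} (direct term stacked on cascade)
h_k = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)

# Extended reflection v with v^H = [1, e^{j*theta_1}, ..., e^{j*theta_N}]
theta = rng.uniform(0.0, 2.0 * np.pi, N)
v = np.concatenate(([1.0 + 0.0j], np.exp(1j * theta)))

g_k = np.vdot(v, h_k)  # effective channel g_k = v^H h_k (vdot conjugates v)
s = 1.0                # transmitted symbol with |s|^2 = 1
n_k = np.sqrt(sigma2 / 2) * (rng.standard_normal(Q) + 1j * rng.standard_normal(Q))
y_k = g_k * s + n_k    # Q received samples, as in Eq. (4)
```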
Accordingly, the received SNR at \(U_{k}\) is \[\text{SNR}_{k}=\frac{|g_{k}|^{2}}{\sigma^{2}}=\frac{\mathbf{v}^{H}\mathbf{G}_{k}\mathbf{v}}{\sigma^{2}},\ k\in\mathcal{K}, \tag{5}\] where \(\mathbf{G}_{k}=\mathbf{h}_{k}\mathbf{h}_{k}^{H}\) denotes the covariance matrix of \(\mathbf{h}_{k}\). In this paper, we aim to optimize the IRS passive reflection to maximize the minimum received SNR among all \(K\) users. The associated optimization problem is thus formulated as \[\text{(P1):}\ \max_{\mathbf{v}}\ \min_{k\in\mathcal{K}}\ \frac{\mathbf{v}^{H}\mathbf{G}_{k}\mathbf{v}}{\sigma^{2}} \tag{6a}\] \[\text{s.t.}\ \ \theta_{i}\in\Phi_{\alpha},\ i=1,...,N. \tag{6b}\] Solving (P1) requires the knowledge of \(\mathbf{G}_{k},k\in\mathcal{K}\), which is difficult to acquire in practice; we therefore first estimate the cascaded channels based on the users' received signal power measurements (to be presented in the next subsection). Specifically, we assume that the IRS applies randomly generated phase shifts of its reflecting elements (subject to (6b)) to reflect the BS's signals to all \(K\) users simultaneously. Let \(M\) and \(\mathbf{v}_{m}\), \(m\in\mathcal{M}\triangleq\{1,2,...,M\}\), denote the number of random reflection sets generated and the \(m\)-th reflection set, respectively. Meanwhile, the users independently measure the power of their received signals corresponding to each IRS reflection set and send the results to the central controller (see Fig. 1). For each reflection set, we consider that \(U_{k}\) takes \(Q\) samples of its received signal to calculate the reference signal received power (RSRP) based on them. Note that in practice, it usually holds that \(Q\gg 1\), since the IRS's reflection switching rate is usually much lower than the symbol rate of each user. Thus, the power measurement of \(U_{k}\) under the IRS's \(m\)-th reflection set is given by \[\bar{p}_{k}(\mathbf{v}_{m})=\frac{1}{Q}\sum_{q=1}^{Q}\left|g_{k,m}s+n_{k}(q)\right|^{2},\ k\in\mathcal{K},\ m\in\mathcal{M}, \tag{7}\] where \(g_{k,m}=\mathbf{v}_{m}^{H}\mathbf{h}_{k}\) and \(n_{k}(q)\sim\mathcal{CN}(0,\sigma^{2})\) denotes the \(q\)-th sampled AWGN at \(U_{k}\). Let \(\mathcal{P}_{k}=[\bar{p}_{k}(\mathbf{v}_{1}),\bar{p}_{k}(\mathbf{v}_{2}),...,\bar{p}_{k}(\mathbf{v}_{M})]\) denote the collection of \(U_{k}\)'s received signal power measurements under the \(M\) reflection sets of the IRS. After the above power measurements, each user \(U_{k}\) reports \(\mathcal{P}_{k}\) to the central controller, which estimates \(\mathbf{h}_{k},k\in\mathcal{K}\) as presented next.

### _NN-enabled Channel Estimation_

For any given IRS reflection set \(\mathbf{v}\), the desired signal power at each user \(U_{k}\) is given by \[p_{k}(\mathbf{v})=\left|\mathbf{v}^{H}\mathbf{h}_{k}\right|^{2}. \tag{8}\]
It is worth mentioning that if the number of samples \(Q\) is sufficiently large, we have \(\bar{p}_{k}(\mathbf{v})\approx p_{k}(\mathbf{v})+\sigma^{2}\). Thus, we aim to estimate \(\mathbf{h}_{k},k\in\mathcal{K}\) based on \(\bar{p}_{k}(\mathbf{v}_{m}),m\in\mathcal{M}\). To this end, note that (8) can be modeled as a single-layer NN, explained as follows. In particular, this NN takes the reflection pattern \(\mathbf{v}\) and the cascaded channel \(\mathbf{h}_{k}\) as its input and weights, respectively, while the nonlinear activation function at the output layer is the squared amplitude of \(\mathbf{v}^{H}\mathbf{h}_{k}\), as given in (8). However, as both \(\mathbf{v}\) and \(\mathbf{h}_{k}\) are complex in general, such a single-layer NN requires implementation in the complex domain. To avoid this issue, we express (8) equivalently in the real domain as \[p_{k}(\mathbf{v})=\left|\mathbf{v}^{H}\mathbf{h}_{k}\right|^{2}=\left\|\mathbf{x}^{T}\mathbf{R}_{k}\right\|^{2}, \tag{9}\] where \(\mathbf{x}\) consists of the real and imaginary parts of \(\mathbf{v}\), i.e., \(\mathbf{x}^{T}=\left[\mathrm{Re}\left(\mathbf{v}^{T}\right),\ \mathrm{Im}\left(\mathbf{v}^{T}\right)\right]\), and \(\mathbf{R}_{k}\) denotes the real-valued cascaded channel, i.e., \[\mathbf{R}_{k}=\left[\begin{array}{cc}\mathrm{Re}\left(\mathbf{h}_{k}\right)&\mathrm{Im}\left(\mathbf{h}_{k}\right)\\ \mathrm{Im}\left(\mathbf{h}_{k}\right)&-\mathrm{Re}\left(\mathbf{h}_{k}\right)\end{array}\right]\in\mathbb{R}^{(2N+2)\times 2}. \tag{10}\] Based on (9), we can construct an equivalent single-layer NN to (8) in the real-number domain. Specifically, as shown in Fig. 2, the input of this single-layer NN is \(\mathbf{x}\). Let \(W_{k,i,j}\) denote the weight of the edge from the \(i\)-th input to the \(j\)-th neuron in the hidden layer, with \(i=1,2,...,2N+2\) and \(j=1,2\). The two neurons at the hidden layer of this NN are given by \[\left[\begin{array}{c}a_{k}\\ b_{k}\end{array}\right]^{T}=\mathbf{x}^{T}\mathbf{W}_{k}, \tag{11}\] where \(\mathbf{W}_{k}\in\mathbb{R}^{(2N+2)\times 2}\) denotes the weight matrix of this NN, with \(W_{k,i,j}\) being its entry in the \(i\)-th row and the \(j\)-th column. Finally, the activation function at the output layer is given by the squared norm of (11), and the output of this NN is \[\hat{p}_{k}(\mathbf{v})=a_{k}^{2}+b_{k}^{2}=\left\|\mathbf{x}^{T}\mathbf{W}_{k}\right\|^{2}. \tag{12}\]

Fig. 2: Single-layer NN architecture for \(U_{k}\).

By comparing (12) with (9), it is noted that this real-valued NN can imitate the received signal power at \(U_{k}\). In particular, if \(\mathbf{R}_{k}=\mathbf{W}_{k}\), we have \(\hat{p}_{k}(\mathbf{v})=p_{k}(\mathbf{v})\). Motivated by this, we propose to recover \(\mathbf{R}_{k}\) (and \(\mathbf{h}_{k}\)) by estimating the weight matrix \(\mathbf{W}_{k}\) via training this single-layer NN. To this end, we consider that \(\mathbf{W}_{k}\) takes a similar form to \(\mathbf{R}_{k}\) in (10), i.e., \[\mathbf{W}_{k}=\left[\begin{array}{cc}\mathbf{w}_{1,k}&\mathbf{w}_{2,k}\\ \mathbf{w}_{2,k}&-\mathbf{w}_{1,k}\end{array}\right], \tag{13}\] where \(\mathbf{w}_{1,k}\in\mathbb{R}^{N+1}\) and \(\mathbf{w}_{2,k}\in\mathbb{R}^{N+1}\) correspond to \(\mathrm{Re}\left(\mathbf{h}_{k}\right)\) and \(\mathrm{Im}\left(\mathbf{h}_{k}\right)\) in (10), respectively. With (13), we present the following lemma.
**Lemma 1**: _If_ \[\|\mathbf{x}^{T}\mathbf{W}_{k}\|^{2}=\|\mathbf{x}^{T}\mathbf{R}_{k}\|^{2} \tag{14}\] _holds for any \(\mathbf{x}\in\mathbb{R}^{2N+2}\), we have \(\mathbf{h}_{k}=\mathbf{w}_{k}e^{j\phi_{k}}\), where \(\mathbf{w}_{k}=\mathbf{w}_{1,k}+j\mathbf{w}_{2,k}\) and \(\phi_{k}\in[0,2\pi)\) denotes an arbitrary phase._

_Proof:_ By substituting (13) into the left-hand side of (14), we have \[\|\mathbf{x}^{T}\mathbf{W}_{k}\|^{2}=\left|\mathbf{v}^{H}\mathbf{w}_{k}\right|^{2}=\mathbf{v}^{H}\mathbf{w}_{k}\mathbf{w}_{k}^{H}\mathbf{v}. \tag{15}\] Next, by substituting (9) and (15) into (14), we have \[\mathbf{v}^{H}\mathbf{w}_{k}\mathbf{w}_{k}^{H}\mathbf{v}=\left|\mathbf{v}^{H}\mathbf{h}_{k}\right|^{2}=\mathbf{v}^{H}\mathbf{h}_{k}\mathbf{h}_{k}^{H}\mathbf{v},\ \forall\mathbf{v}\in\mathbb{C}^{N+1}, \tag{16}\] which implies that \[\mathbf{v}^{H}\left(\mathbf{h}_{k}\mathbf{h}_{k}^{H}-\mathbf{w}_{k}\mathbf{w}_{k}^{H}\right)\mathbf{v}=0,\ \forall\mathbf{v}\in\mathbb{C}^{N+1}. \tag{17}\] For (17) to hold for any \(\mathbf{v}\in\mathbb{C}^{N+1}\), it should be satisfied that \(\mathbf{h}_{k}\mathbf{h}_{k}^{H}=\mathbf{w}_{k}\mathbf{w}_{k}^{H}\). As such, we have \(\mathbf{h}_{k}=\mathbf{w}_{k}e^{j\phi_{k}}\). The proof is thus completed.

It follows from Lemma 1 that we can estimate \(\mathbf{h}_{k}\) by training the single-layer NN in Fig. 2 to estimate \(\mathbf{W}_{k}\) first. Although we cannot derive the exact \(\mathbf{h}_{k}\) due to the presence of the unknown phase \(\phi_{k}\), the objective function of (P1) only depends on the channel covariance matrix \(\mathbf{G}_{k}\), and we have \(\mathbf{G}_{k}=\mathbf{h}_{k}\mathbf{h}_{k}^{H}=\mathbf{w}_{k}\mathbf{w}_{k}^{H},k\in\mathcal{K}\). As such, the unknown common phase does not affect the objective function of (P1). It should also be mentioned that Lemma 1 requires that (14) holds for any \(\mathbf{x}\in\mathbb{R}^{2N+2}\) or \(\mathbf{v}\in\mathbb{C}^{N+1}\). However, due to (6b), the discrete IRS passive reflection set can only take a finite number of values in a subspace of \(\mathbb{C}^{N+1}\), and \(\mathbf{h}_{k}=\mathbf{w}_{k}e^{j\phi_{k}}\) may not always hold in such a subspace. Nonetheless, the proposed design is still effective, as will be explained in Remark 1 later. To estimate \(\mathbf{w}_{k}\) or \(\mathbf{W}_{k}\), we can train the NN in Fig. 2 by using the stochastic gradient descent method to minimize the MSE between its output and the training data. In particular, we can make full use of each user's power measurements, i.e., \(\mathcal{P}_{k},k\in\mathcal{K}\), as the training data. Specifically, we divide them into two data sets, namely, the training set and the validation set. The training set consists of \(M_{0}\) (\(M_{0}<M\)) entries of \(\mathcal{P}_{k}\), while the remaining \(M-M_{0}\) entries of \(\mathcal{P}_{k}\) are used as the validation set to evaluate the model fitting accuracy. Accordingly, the MSE over the training set is adopted as the loss function, \[\mathcal{L}_{\mathbf{W}_{k}}=\frac{1}{M_{0}}\sum_{m=1}^{M_{0}}{(\hat{p}_{k}(\mathbf{v}_{m})-\bar{p}_{k}(\mathbf{v}_{m}))^{2}}. \tag{18}\] Given this loss function, we can use backward propagation [14] to iteratively update the NN weights. Specifically, with (13), the weight matrix \(\mathbf{W}_{k}\) can be expressed by a vector \(\mathbf{\gamma}_{k}=\begin{bmatrix}\mathbf{w}_{1,k}^{T},\mathbf{w}_{2,k}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{2N+2}\).
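As an aside, the same estimator can be prototyped with automatic differentiation instead of the hand-derived gradients given next in (19)-(21); the sketch below uses our own illustrative names, initialization, and hyperparameters, and omits the validation-based model selection of (22).

```
import torch

def estimate_channel(X, p_meas, N, steps=2000, lr=1e-2):
    """X: (M0, 2N+2) real inputs x^T = [Re(v^T), Im(v^T)]; p_meas: (M0,) powers.
    Trains gamma_k = [w_1k; w_2k] on the MSE loss of Eq. (18)."""
    gamma = (0.01 * torch.randn(2 * N + 2)).requires_grad_()
    opt = torch.optim.SGD([gamma], lr=lr)
    for _ in range(steps):
        w1, w2 = gamma[: N + 1], gamma[N + 1 :]
        # Hidden neurons of Eq. (11) under the structured W_k of Eq. (13)
        a = X[:, : N + 1] @ w1 + X[:, N + 1 :] @ w2
        b = X[:, : N + 1] @ w2 - X[:, N + 1 :] @ w1
        p_hat = a**2 + b**2                       # NN output, Eq. (12)
        loss = torch.mean((p_hat - p_meas) ** 2)  # loss of Eq. (18)
        opt.zero_grad(); loss.backward(); opt.step()
    w1, w2 = gamma.detach()[: N + 1], gamma.detach()[N + 1 :]
    return torch.complex(w1, w2)                  # w_k = w_1k + j*w_2k
```

In practice, \(X\) would be built from the random training reflections and \(\mathbf{p}\) from the reported measurements \(\mathcal{P}_{k}\) (with the noise floor \(\sigma^{2}\) subtracted, if known).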
Let \(\mathbf{\gamma}_{k,t}\) denote the updated value of \(\mathbf{\gamma}_{k}\) after the \(t\)-th iteration. As such, the iteration proceeds as \[\mathbf{\gamma}_{k,t+1}=\mathbf{\gamma}_{k,t}-\rho F\left(\mathbf{\gamma}_{k,t}\right), \tag{19}\] where \(\rho>0\) denotes the learning rate, and \(F\left(\mathbf{\gamma}_{k}\right)=\frac{\partial\mathcal{L}_{\mathbf{W}_{k}}}{\partial\mathbf{\gamma}_{k}}\) denotes the derivative of the loss function \(\mathcal{L}_{\mathbf{W}_{k}}\) with respect to \(\mathbf{\gamma}_{k}\). Here, \(F\left(\mathbf{\gamma}_{k}\right)\) can be calculated using the chain rule, \[F\left(\mathbf{\gamma}_{k}\right)=\frac{\partial\mathcal{L}_{\mathbf{W}_{k}}}{\partial\hat{p}_{k}}\left[\frac{\partial\hat{p}_{k}}{\partial a_{k}},\;\frac{\partial\hat{p}_{k}}{\partial b_{k}}\right]\left[\frac{\partial a_{k}}{\partial\mathbf{\gamma}_{k}},\;\frac{\partial b_{k}}{\partial\mathbf{\gamma}_{k}}\right]^{T}, \tag{20}\] where \(\frac{\partial\mathcal{L}_{\mathbf{W}_{k}}}{\partial\hat{p}_{k}}\) can be calculated based on (18), while the other four derivatives in (20) can be calculated based on (11) as \[\frac{\partial a_{k}}{\partial\mathbf{\gamma}_{k}}=\left[1,\cos\left(\theta_{1}\right),\cdots,\cos\left(\theta_{N}\right),0,-\sin\left(\theta_{1}\right),\cdots,-\sin\left(\theta_{N}\right)\right]^{T},\] \[\frac{\partial b_{k}}{\partial\mathbf{\gamma}_{k}}=\left[0,\sin\left(\theta_{1}\right),\cdots,\sin\left(\theta_{N}\right),1,\cos\left(\theta_{1}\right),\cdots,\cos\left(\theta_{N}\right)\right]^{T},\] \[\frac{\partial\hat{p}_{k}}{\partial a_{k}}=2a_{k},\ \text{and}\ \frac{\partial\hat{p}_{k}}{\partial b_{k}}=2b_{k}. \tag{21}\] The NN training process terminates after \(Z\) rounds of iterations, and the weight matrix of the NN is determined based on the validation set as \[\hat{\mathbf{W}}_{k}=\arg\min_{1\leq t\leq Z}\left(\sum_{m=M_{0}+1}^{M}{(\hat{p}_{k,t}(\mathbf{v}_{m})-\bar{p}_{k}(\mathbf{v}_{m}))^{2}}\right), \tag{22}\] where \(\hat{p}_{k,t}(\mathbf{v}_{m})=\|\mathbf{x}_{m}^{T}\mathbf{W}_{k,t}\|^{2}\) denotes the output of the NN after the \(t\)-th iteration, and \(\mathbf{W}_{k,t}\) denotes the updated version of \(\mathbf{W}_{k}\) after the \(t\)-th iteration. Based on the above, the complex-valued cascaded channel can be estimated as \(\hat{\mathbf{w}}_{k}=\hat{\mathbf{w}}_{1,k}+j\hat{\mathbf{w}}_{2,k}\).

**Remark 1**: _In the case with one-bit IRS phase shifts, i.e., \(\alpha=1\), the cascaded channel \(\mathbf{h}_{k}\) may not be estimated as \(\mathbf{w}_{k}e^{j\phi_{k}}\). This is because in this case, we have \(\mathbf{v}^{*}=\mathbf{v}\), which results in_ \[\mathbf{v}^{H}\mathbf{h}_{k}\mathbf{h}_{k}^{H}\mathbf{v}=\mathbf{v}^{H}\mathbf{h}_{k}^{*}\mathbf{h}_{k}^{T}\mathbf{v}. \tag{23}\] _Based on (17), we may estimate \(\mathbf{h}_{k}^{*}\) as \(\mathbf{w}_{k}e^{j\phi_{k}}\), while the actual channel \(\mathbf{h}_{k}\) should be estimated as \(\mathbf{w}_{k}^{*}e^{-j\phi_{k}}\). However, this does not affect the efficacy of the proposed design, since both estimations lead to the same received signal power due to (23)._

### _IRS Reflection Optimization_

After estimating \(\mathbf{h}_{k},k\in\mathcal{K}\), we can substitute them into (6a) and solve (P1) accordingly. Next, we present the optimal and suboptimal algorithms to solve (P1) in the cases of \(K=1\) and \(K>1\), respectively.
First, if \(K=1\), it has been shown in [15] that (P1) can be optimally solved by applying a geometry-based method, and the details are thus omitted. However, if \(K>1\), problem (P1) is generally difficult to solve optimally. Next, we consider combining the SDR technique [16] and the successive refinement method [12] to solve it. First, let \(\mathbf{V}=\mathbf{v}\mathbf{v}^{H}\) denote the covariance matrix of \(\mathbf{v}\), with \(\mathbf{V}\succeq\mathbf{0}\). Problem (P1) can be equivalently reformulated as \[\text{(P2):}\ \max_{\mathbf{V}}\ \xi \tag{24a}\] \[\text{s.t.}\ \ \text{Tr}(\hat{\mathbf{G}}_{k}\mathbf{V})\geq\xi,\ \forall\ k, \tag{24b}\] \[\text{rank}(\mathbf{V})=1, \tag{24c}\] \[\mathbf{V}\succeq\mathbf{0}, \tag{24d}\] \[\theta_{i}\in\Phi_{\alpha},\ i=1,...,N, \tag{24e}\] where \(\xi\) is an auxiliary variable. Problem (P2) is still difficult to solve due to the rank-one and discrete-phase constraints in (24c) and (24e), respectively. Next, we relax both constraints and thereby transform (P2) into a semidefinite programming (SDP) problem, which can be optimally solved by the interior-point algorithm [17]. However, the obtained solution may not be rank-one, and its entries may not satisfy the discrete constraint (24e). In this case, we can apply the Gaussian randomization method jointly with solution quantization to construct a rank-one solution that satisfies (24e), denoted as \(\hat{\mathbf{v}}\). Based on the initial passive reflection \(\hat{\mathbf{v}}\), we successively refine \(\theta_{i}\) by enumerating the elements in \(\Phi_{\alpha}\), with \(\theta_{j},j\neq i,j=1,2,...,N\) being fixed, until convergence is reached.

### _Complexity Analysis_

In the proposed IRS channel estimation and IRS reflection design method, the computational complexity is mainly due to the NN training procedures for channel estimation and the passive reflection optimization for solving (P2). In particular, the training complexity depends on the size of the NN structure. In the NN for \(U_{k},k\in\mathcal{K}\), as shown in Fig. 2, the number of neurons is \(2\), and the number of weights is \(2N+2\), which entails the complexity for all \(K\) users in the order of \(\mathcal{O}\left(KN\right)\) [18]. Furthermore, in the passive reflection optimization, the SDR-based initialization incurs the complexity of \(\mathcal{O}\left((K+N)^{3.5}\right)\), while the successive refinement incurs the complexity of \(\mathcal{O}\left(KN\right)\). Thus, the overall complexity of the proposed design is dominated by the SDR, i.e., \(\mathcal{O}\left((K+N)^{3.5}\right)\). In practice, we can apply the successive refinement only for solving (P2) with affordable performance loss (to be shown in Section IV via simulation), while the complexity is decreased significantly to \(\mathcal{O}\left(KN\right)\), thus reducing the overall complexity to linear over \(N\).

## IV Numerical Results

### _Simulation Setup_

Consider a three-dimensional Cartesian coordinate system in meters (m) with \(K\) users, where the BS is deployed at \((50,-200,20)\), while the locations of all users are randomly generated in a square area with the coordinates of its four corner points given by \((0,0,0),(10,0,0),(10,10,0)\) and \((0,10,0)\), respectively. The IRS is equipped with a uniform planar array (UPA) and assumed to be parallel to the \(y\)-\(z\) plane, with \(N=N_{y}\times N_{z}\) reflecting elements, where \(N_{y}\) and \(N_{z}\) denote the numbers of reflecting elements along the axes \(y\) and \(z\), respectively.
We set \(N_{y}=N_{z}=8\) and half-wavelength spacing for the adjacent IRS reflecting elements. The location of the reference point of the IRS is set as \((-2,-1,0)\). Let \(\beta_{0,k}\), \(\beta_{1}\) and \(\beta_{2,k}\) denote the path loss (in dB) of the BS-\(U_{k}\), BS-IRS and IRS-\(U_{k}\) channels, respectively, which are set to \(\beta_{0,k}=33+37\log_{10}(d_{0,k})\), \(\beta_{1}=30+20\log_{10}(d_{1})\) and \(\beta_{2,k}=30+20\log_{10}(d_{2,k})\), respectively, with \(d_{0,k}\), \(d_{1}\) and \(d_{2,k}\) denoting the distance from the BS to \(U_{k}\), that from the BS to the IRS, and that from the IRS to \(U_{k}\). We assume Rayleigh fading for the BS-\(U_{k}\) channel, i.e., \(h_{BU_{k}}=10^{-\beta_{0,k}/20}\zeta_{k}\), where \(\zeta_{k}\) denotes the small-scale fading following \(\mathcal{CN}(0,1)\). In addition, a multipath channel model is assumed for the BS-IRS and IRS-\(U_{k}\) channels, and the BS-IRS channel is expressed as \[\mathbf{h}_{BI}=\sqrt{\frac{\varepsilon_{BI}}{1+\varepsilon_{BI}}}\mathbf{h}_{LoS}+\sqrt{\frac{1}{1+\varepsilon_{BI}}}\mathbf{h}_{NLoS}, \tag{25}\] where \(\varepsilon_{BI}\) is the ratio of the line-of-sight (LoS) path power to that of the non-LoS (NLoS) paths. \(\mathbf{h}_{LoS}\) and \(\mathbf{h}_{NLoS}\) denote the LoS and NLoS components, which are respectively given by \[\mathbf{h}_{LoS}=10^{-\beta_{1}/20}e^{\frac{-j2\pi d_{1}}{\lambda}}\mathbf{u}_{N}(\vartheta_{0},\varphi_{0}), \tag{26a}\] \[\mathbf{h}_{NLoS}=\sqrt{\frac{1}{L}}\sum_{l=1}^{L}\kappa_{l}\mathbf{u}_{N}(\vartheta_{l},\varphi_{l}), \tag{26b}\] where \(\lambda\) denotes the wavelength. In (26), \(L\) denotes the number of NLoS multipath components, \(\kappa_{l}\) denotes the amplitude of the \(l\)-th multipath component following \(\mathcal{CN}(0,10^{-\beta_{1}/10})\), and \(\mathbf{u}_{N}(\vartheta_{l},\varphi_{l})\) denotes the steering vector of the \(l\)-th path from the BS to the IRS, with \(\vartheta_{l}\in[0,\pi]\) and \(\varphi_{l}\in[0,\pi]\) denoting the azimuth and elevation angles of arrival at the IRS in this path, respectively. In particular, let \(\mathbf{e}(\gamma,n)=[1,e^{-j\pi\gamma},e^{-j2\pi\gamma},...,e^{-j(n-1)\pi\gamma}]^{T}\) denote the steering vector function of a uniform linear array with \(n\) elements and directional cosine \(\gamma\). As such, we have \(\mathbf{u}_{N}(\vartheta_{l},\varphi_{l})=\mathbf{e}(\sin(\vartheta_{l})\sin(\varphi_{l}),N_{y})\otimes\mathbf{e}(\cos(\vartheta_{l}),N_{z})\), where \(\otimes\) denotes the Kronecker product. The IRS-\(U_{k}\) channel can be expressed similarly, and we denote by \(\varepsilon_{IU_{k}}\) the ratio of its LoS path power to that of the NLoS counterpart. We set \(L=5\), \(\varepsilon_{BI}=10\) and \(\varepsilon_{IU_{k}}=1\). The number of power measurements obtained by one user under each IRS reflection set is \(Q=10\). The transmit power is \(P=30\) dBm, and the noise power is \(\sigma^{2}=-90\) dBm. All results are averaged over \(10^{3}\) realizations of channels and user locations.

### _Benchmark Schemes_

We adopt the CSM [11] and random-max sampling (RMS) [19] methods as benchmark schemes, both of which design the IRS passive reflection for data transmission based on the users' power measurements, but without estimating the IRS cascaded channels by further exploiting the power measurements.
Specifically, the RMS method sets the IRS reflection as the one that maximizes the minimum received signal power among all users over the \(M\) random IRS reflection sets, i.e., \[\mathbf{v}^{\text{RMS}}=\mathbf{v}_{m^{*}},\quad\text{with}\quad m^{*}=\arg\max_{m\in\mathcal{M}}\ \min_{k\in\mathcal{K}}\bar{p}_{k}(\mathbf{v}_{m}). \tag{27}\] Moreover, the CSM method first calculates the sample mean of the minimum power measurement among all users conditioned on \(\theta_{i}=\psi,\psi\in\Phi_{\alpha}\), i.e., \[\mathbb{E}[p|\theta_{i}=\psi]=\frac{1}{|\mathcal{A}_{i}(\psi)|}\sum_{\mathbf{v}\in\mathcal{A}_{i}(\psi)}\min_{k\in\mathcal{K}}\bar{p}_{k}(\mathbf{v}), \tag{28}\] where \(\mathcal{A}_{i}(\psi)\) denotes the subset of the \(M\) random reflection sets with \(\theta_{i}=\psi\), \(i=1,2,...,N\). Finally, the phase shift of the \(i\)-th reflecting element is set as \[\theta_{i}^{\text{CSM}}=\arg\max_{\psi\in\Phi_{\alpha}}\mathbb{E}[p|\theta_{i}=\psi],\ i=1,...,N. \tag{29}\] Both selection rules are sketched in code at the end of this section. In addition, the IRS passive reflection design based on perfect CSI, obtained by solving (P2) with \(\hat{\mathbf{G}}_{k}\) replaced by \(\mathbf{G}_{k},k\in\mathcal{K}\), is included as the performance upper bound to evaluate the efficacy of the proposed scheme.

### _Simulation Results_

We evaluate the received SNR achieved by different schemes in both single-user and multiuser scenarios. For the proposed scheme, we first use the single-layer NN to estimate \(\mathbf{h}_{k},k\in\mathcal{K}\), and then apply the geometry-based method and the SDR method to optimize the IRS passive reflection in the single-user and multiuser scenarios, respectively (labeled as "NN-GE" and "NN-SDR"), as presented in Section III-C. In addition, for both scenarios, we also show the performance obtained by directly applying the successive refinement method, where the IRS passive reflection is initialized based on RMS given in (27) (labeled as "NN-SR").

Fig. 3: Received SNR versus the number of IRS reflection sets with \(K=1\) and \(\alpha=1\).

Fig. 4: Received SNR versus the number of IRS reflection sets with \(K=1\) and \(\alpha=2\).

First, Fig. 3 and Fig. 4 show the received SNR under different schemes in the single-user case with the number of controlling bits for IRS phase shifts \(\alpha=1\) and \(\alpha=2\), respectively. It is observed that both the NN-GE and NN-SR methods significantly outperform the benchmark schemes by fully exploiting the users' power measurements for channel estimation. In particular, with increasing \(M\), the performance of our proposed scheme quickly converges to the upper bound achievable with perfect CSI. Moreover, the SNR performance improves by increasing \(\alpha\) from 1 to 2, as expected, thanks to the higher phase-shift resolution for both channel estimation and reflection design. Furthermore, the small gap between the NN-SDR and NN-SR schemes demonstrates that the IRS passive reflection can be more efficiently optimized with linear complexity over \(N\) if a small performance loss is tolerable. Next, Fig. 5 and Fig. 6 show the minimum received SNR among \(K=5\) users with \(\alpha=1\) and \(\alpha=2\), respectively. Observations similar to the single-user case can be made for the multiuser case. In addition, it is observed that CSM performs worse than RMS in the multiuser case, due to its inability to adapt to more complex utility functions such as that given in (28) when \(K>1\).
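As referenced above, the two baseline selection rules, (27) for RMS and (28)-(29) for CSM, reduce to a few lines of NumPy; the array layout below (per-user powers \(P\) of shape \(K\times M\) and per-set phase shifts of shape \(M\times N\)) is our own illustrative convention.

```
import numpy as np

def rms_reflection(P, thetas):
    """Eq. (27): pick the random set maximizing the minimum user power."""
    m_star = int(np.argmax(P.min(axis=0)))
    return thetas[m_star]

def csm_reflection(P, thetas, phi):
    """Eqs. (28)-(29): per element, keep the phase value with the largest
    conditional sample mean of the minimum user power."""
    p_min = P.min(axis=0)                      # min over users, per set
    theta_csm = np.empty(thetas.shape[1])
    for i in range(thetas.shape[1]):
        means = [p_min[np.isclose(thetas[:, i], psi)].mean() for psi in phi]
        theta_csm[i] = phi[int(np.argmax(means))]
    return theta_csm
```

For large \(M\), each candidate phase in \(\Phi_{\alpha}\) appears many times per element, so the conditional means in (28) are well defined with high probability.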
## V Conclusion

In this paper, we proposed a new IRS channel estimation method based on users' received signal power measurements with randomly generated IRS reflections, by exploiting a simple single-layer NN formulation. Numerical results showed that the IRS passive reflection design based on the estimated IRS channels can significantly outperform existing power measurement based schemes and approach the optimal performance under perfect CSI with a greatly reduced number of power measurements in an IRS-aided multiuser communication system. The proposed IRS channel estimation and reflection optimization approach can be extended to other setups, such as a multi-antenna BS with adaptive precoding, multi-antenna user receivers, and multiple IRSs, which will be studied in future work.
2309.13866
On Calibration of Modern Quantized Efficient Neural Networks
We explore calibration properties at various precisions for three architectures: ShuffleNetv2, GhostNet-VGG, and MobileOne; and two datasets: CIFAR-100 and PathMNIST. The quality of calibration is observed to track the quantization quality; it is well-documented that performance worsens with lower precision, and we observe a similar correlation with poorer calibration. This becomes especially egregious in the 4-bit activation regime. GhostNet-VGG is shown to be the most robust to overall performance drop at lower precision. We find that temperature scaling can improve calibration error for quantized networks, with some caveats. We hope that these preliminary insights can lead to more opportunities for explainable and reliable EdgeML.
Joey Kuang, Alexander Wong
2023-09-25T04:30:18Z
http://arxiv.org/abs/2309.13866v2
# On Calibration of Modern Quantized Efficient Neural Networks

###### Abstract

We explore calibration properties at various precisions for three architectures: ShuffleNetv2, GhostNet-VGG, and MobileOne; and two datasets: CIFAR-100 and PathMNIST. The quality of calibration is observed to track the quantization quality; it is well-documented that performance worsens with lower precision, and we observe a similar correlation with poorer calibration. This becomes especially egregious in the 4-bit activation regime. GhostNet-VGG is shown to be the most robust to overall performance drop at lower precision. We find that temperature scaling can improve calibration error for quantized networks, with some caveats. We hope that these preliminary insights can lead to more opportunities for explainable and reliable EdgeML.

## 1 Introduction

Enabling critical decision-making machine learning applications on the edge, such as point-of-care testing, increases efficiency and accessibility [1]. Deploying models may require quantization, which calls for consideration of the resulting performance drop. Meanwhile, critical decision-making leaves little room for predictive error, thus emphasizing the role of model uncertainty. The seminal work of Guo _et al_. [2] revealing the calibration dilemma present in modern neural networks brought forth calibration estimators and methods, and studies of the intrinsic calibration properties of an architecture [3; 4; 5; 6; 7; 8]. There are few works that intersect a model's quantization behaviour and calibration quality. Studies regarding model quantization have focused on reducing undesirable quantization noise by way of novel quantization methods [9; 10; 11; 12; 13; 14]. Intuitively, we recognize that error in quantized outputs can cause decision swaps and changes in confidence. In turn, we suspect possible consequences on calibration properties. One work by Xia _et al_. [15] observes that better calibrated models tend to provide worse post-quantization performance. In this extended abstract, we continue to explore the relationship between quantization and calibration: what do model calibration properties look like across various precision bitwidths, and how well does a quantized model receive a post-hoc calibration method?

## 2 Experiments

We conduct our analyses on the CIFAR-100 (32x32) dataset and PathMNIST2D (28x28) [16; 17]. Using biomedical image data provides context for understanding calibration properties due to related high-stakes tasks, such as lesion detection [18]. PathMNIST2D (9 classes, 107k samples total) was sourced from research on predicting survival from colorectal cancer histology slides. To investigate from a more practical perspective, we select architectures designed for mobile targets and likely to face quantization. Due to the small image sizes involved, we make modifications to each model based on the source paper or other relevant works. For ShuffleNetv2, we follow AugShuffleNet [19], which was modified for CIFAR datasets. While the main GhostNet is built on MobileNetV3 [20], Han _et al_. [21] run toy experiments on CIFAR datasets with a VGG16-like network provided by [22]. With best judgement, we reduce the depth, stride, and width of the MobileOne S0 variant to avoid excessive downsampling and overcomplexity [23].

### Training

We train models for 200 epochs with an effective batch size of 32 using SGD with momentum (\(\beta=0.9\)) [24]. We initialize with He initialization [25] and train on cross-entropy loss with a weight decay set to \(10^{-4}\).
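A minimal sketch of this stated recipe (He initialization, cross-entropy, SGD with momentum 0.9 and weight decay \(10^{-4}\)); the toy model below merely stands in for the three backbones and is not from this work.

```
import torch
import torch.nn as nn

# Placeholder model standing in for ShuffleNetV2 / GhostNet-VGG / MobileOne.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 100),
)
for m in model.modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")  # He init [25]
criterion = nn.CrossEntropyLoss()  # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
```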
Note that we specifically avoid other training-time regularization methods to prevent confounding with post-hoc calibration [2]. We initialize our learning rate at 0.1 and decay it using a cosine schedule with warm restarts; we follow Scenario 6 of the CIFAR-10 settings from [26]. We apply random crop and random horizontal flip. We follow a modified implementation of the PyTorch post-training quantization library by Xia _et al_. [15] to quantize our weights symmetrically per-channel with a moving average min-max observer; activations are quantized asymmetrically per-tensor with a histogram-based observer. We perform quantization at 3 configurations: W8/A8, W4/A8, W4/A4. Each configuration is quantized over 3 trials with randomly sampled training batches for calibrating ranges, where the total calibration size is 1024 [27]. All quantized _inference_ is performed at 8-bit.

Figure 1: ECE and top-1 error before (left column) and after (right column) temperature scaling. Model backbones are grouped by color and precision levels are grouped by marker size. Lower bitwidth models are correlated with both poorer accuracy and poorer calibration.

Figure 2: Reliability grid (blue and red) and confidence distribution (striped teal) of CIFAR-100 models (by column) of each precision level (by row) before temperature scaling. Larger red bars indicate a greater (worse) confidence-accuracy gap. Although MobileOne FP32 is well-calibrated, more so than GhostNet-VGG FP32 and ShuffleNetV2 FP32, its quantized counterparts are very poorly calibrated.

Figure 3: Reliability grid (blue and red) and confidence distribution (striped teal) of CIFAR-100 models (by column) of each precision level (by row) after temperature scaling. TS does well to correct confidence-accuracy gaps of ShuffleNetV2 and GhostNet-VGG, while extremely poorly calibrated pre-TS 4-bit weight MobileOne models cannot recover.

Figure 4: Reliability grid (blue and red) and confidence distribution (striped teal) of PathMNIST models (by column) of each precision level (by row) before temperature scaling. 4-bit weight model performance and calibration consistently deteriorates for models using inverted residual blocks (ShuffleNetV2 and MobileOne).

Figure 5: Reliability grid (blue and red) and confidence distribution (striped teal) of PathMNIST models (by column) of each precision level (by row) after temperature scaling. While the confidence-accuracy gap tends to diminish post-TS, the improvement is not as noticeable under PathMNIST settings, especially when the initial gap is incredibly large.

We choose temperature scaling (TS) as our sole post-hoc calibration method [2] and Expected Calibration Error (ECE) as the primary metric for calibration quality (lower is better) [28].
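For reference, minimal sketches of the two evaluation tools named above - ECE with equal-width confidence bins and temperature scaling fit by NLL - under common default choices that are not necessarily those of [2] or [28]:

```
import torch
import torch.nn.functional as F

def ece(logits, labels, n_bins=15):
    """Expected Calibration Error: sum_b (|B_b|/n) * |acc(B_b) - conf(B_b)|."""
    conf, pred = torch.softmax(logits, dim=1).max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    err = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = (correct[in_bin].mean() - conf[in_bin].mean()).abs()
            err = err + in_bin.float().mean() * gap
    return err.item()

def fit_temperature(logits, labels, max_iter=100):
    """Temperature scaling: one scalar T > 0 minimizing NLL on held-out data."""
    log_t = torch.zeros(1, requires_grad=True)  # T = exp(log_t) stays positive
    opt = torch.optim.LBFGS([log_t], max_iter=max_iter)
    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss
    opt.step(closure)
    return log_t.exp().item()
```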
## 3 Results and Discussion

Figure 1 shows that calibration properties are consistent between models and their precision variants across datasets. Minderer _et al_. [4] have suggested that architecture is a key determinant of calibration properties. The exact factors are unclear, though they find that it is related neither to model size nor to the amount of pretraining. We observe that GhostNet-VGG is the most robust to both quantization-induced accuracy decrease and ECE increase, while lower-bit ShuffleNetV2 and MobileOne model performance suffers. Notably, these two backbones consist of inverted residual blocks employing depthwise-separable convolutions, which have been shown to suffer from a larger accumulation of quantization errors [29].

We find that the benefit of TS is ultimately limited by how well the model performs in the first place. First, we observe the expected TS performance between Figure 2 and Figure 3 for most models, where post-TS models are well-calibrated. However, TS is unable to correct the 4-bit MobileOne models, which have a larger initial confidence-accuracy gap. This is further shown by Figure 4, particularly on PathMNIST: higher-precision models are accurate, well-calibrated (versus their counterparts trained on CIFAR-100), and refined (making more confident decisions), resulting in a tight cluster near the origin in Figure 1. Despite this success, all W4/A4 models perform terribly w.r.t. ECE and top-1, and large confidence-accuracy gaps remain in post-TS models in Figures 3 and 5. This is consistent with [15]; in addition to well-calibrated floating-point models posting worse post-quantization performance, their calibration quality appears to drop as well. Their conclusions were drawn from exploring ResNet models, which points to residual connections as a suspect for both of our findings.

Interestingly, while prior work [2, 4] has observed that, within a model family (e.g., grouped by size), a model with lower classification error tends to have higher calibration error, we show the exact opposite within a model's set of quantized counterparts. We also find the large discrepancy in the quantization performance of ShuffleNetV2 between the PathMNIST and CIFAR-100 settings particularly interesting; it may be worth investigating in relation to ECE biases and dataset attributes. Further analysis includes investigating the correlation between the pre- and post-TS ECE delta and the overall quantization quality, to understand possible parallelisms between calibration properties and quantization behaviour. While we have provided an initial discussion of certain design patterns contributing to a deployable model's degradation, additional exploration of intrinsic architectural properties relating to quantization _and_ calibration, training policies, quantization methods, and calibrators is paramount to outline best practices for _trustworthy_ EdgeML solutions.
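For reference, a minimal sketch of the two post-hoc tools used throughout this abstract, temperature scaling and an equal-width-bin ECE, might look as follows. This is our own NumPy-only simplification; in particular, the grid search over the temperature is a stand-in for the usual LBFGS fit of the NLL.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ece(probs, labels, n_bins=15):
    """Expected Calibration Error with equal-width confidence bins."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs((pred[mask] == labels[mask]).mean() - conf[mask].mean())
            total += mask.mean() * gap  # bin weight times |accuracy - confidence|
    return total

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick T minimising the NLL on a held-out set (grid search for simplicity)."""
    nll = [-np.log(softmax(logits, T)[np.arange(len(labels)), labels] + 1e-12).mean()
           for T in grid]
    return grid[int(np.argmin(nll))]
```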
2303.17939
LyAl-Net: A high-efficiency Lyman-$α$ forest simulation with a neural network
The inference of cosmological quantities requires accurate and large hydrodynamical cosmological simulations. Unfortunately, their computational time can take millions of CPU hours for a modest coverage in cosmological scales ($\approx (100\,h^{-1}\,\text{Mpc})^3$). The possibility to generate large quantities of mock Lyman-$\alpha$ observations opens up the possibility of much better control of covariance matrix estimates for cosmological parameter inference, and of the impact of systematics due to baryonic effects. We present a machine learning approach to emulate the hydrodynamical simulation of intergalactic medium physics for the Lyman-$\alpha$ forest called LyAl-Net. The main goal of this work is to provide highly efficient and cheap simulations retaining interpretation abilities about the gas field level, and as a tool for other cosmological exploration. We use a neural network based on the U-net architecture, a variant of convolutional neural networks, to predict the neutral hydrogen physical properties, density, and temperature. We train the LyAl-Net model with the Horizon-noAGN simulation, though using only 9% of the volume. We also explore the resilience of the model through tests of a transfer learning framework using cosmological simulations containing different baryonic feedback. We test our results by analysing one- and two-point statistics of emulated fields in different scenarios, as well as their stochastic properties. The ensemble average of the emulated Lyman-$\alpha$ forest absorption as a function of redshift lies within 2.5% of the one derived from the full hydrodynamical simulation. The computation of individual fields from the dark matter density agrees well with regular physical regimes of cosmological fields. The results tested on IllustrisTNG100 showed a drastic improvement in the Lyman-$\alpha$ forest flux without arbitrary rescaling.
Chotipan Boonkongkird, Guilhem Lavaux, Sebastien Peirani, Yohan Dubois, Natalia Porqueres, Eleni Tsaprazi
2023-03-31T10:06:59Z
http://arxiv.org/abs/2303.17939v1
# LyAl-Net: A high-efficiency Lyman-\(\alpha\) forest simulation with a neural network

###### Abstract

Context: The inference of cosmological quantities requires accurate and large hydrodynamical cosmological simulations. Unfortunately, their computational time can take millions of CPU hours for a modest coverage in cosmological scales (\(\approx(100h^{-1}\,\mathrm{Mpc})^{3}\)). The possibility to generate large quantities of mock Lyman-\(\alpha\) observations opens up the possibility of much better control of covariance matrix estimates for cosmological parameter inference, and of the impact of systematics due to baryonic effects.

Aims: We present a machine learning approach to emulate the hydrodynamical simulation of intergalactic medium physics for the Lyman-\(\alpha\) forest, called _LyAl-Net_. The main goal of this work is to provide highly efficient and cheap simulations that retain interpretation abilities at the gas field level, and to serve as a tool for other cosmological explorations.

Methods: We use a neural network based on the U-net architecture, a variant of convolutional neural networks, to predict the neutral hydrogen physical properties: density and temperature. We train the _LyAl-Net_ model with the Horizon-noAGN simulation, though using only 9% of the volume. We also explore the resilience of the model through tests of a transfer learning framework using cosmological simulations containing different baryonic feedback. We test our results by analysing one- and two-point statistics of the emulated fields in different scenarios, as well as their stochastic properties.

Results: The ensemble average of the emulated Lyman-\(\alpha\) forest absorption as a function of redshift lies within 2.5% of the one derived from the full hydrodynamical simulation. The computation of individual fields from the dark matter density agrees well within the regular physical regimes of cosmological fields. The results tested on IllustrisTNG100 showed a drastic improvement in the Lyman-\(\alpha\) forest flux without arbitrary rescaling.

Conclusions: Such emulators could be critical for the exploitation of upcoming surveys like WEAVE-QSO. The transfer learning technique shows promise to alleviate sensitivity to the learned physical model. This technique could have a decisive impact on the results derived from present experiments, such as QSO surveys from SDSS3, which has a resolution power \(R=1500\), and SDSS4 (\(R=2000\)) data, which are the resolution ranges available with our emulator.

## 1 Introduction

The Lyman-\(\alpha\) forest is a collection of sharp absorption features, generally observed in distant Quasi-Stellar Object (QSO) spectra, first observed by Lynds (1971). This phenomenon arises from photons emitted by background sources, with frequencies at or above the Lyman-\(\alpha\) frequency, being redshifted by the Hubble expansion and absorbed by the intervening intergalactic medium. Since the absorptions occur at different distances from an observer along a line-of-sight, they are also redshifted. This mechanism creates a forest-like ensemble of features on the observed QSO broadband Lyman-\(\alpha\) emission. While the process only happens with neutral hydrogen (HI) atoms, which represent a small fraction of the intergalactic medium (IGM), one can use the properties of the Lyman-\(\alpha\) forest to infer the spatial structure of the baryonic matter in the universe.
For the concordance \(\Lambda\)CDM model, the majority of matter in the universe is in the form of dark matter (DM) (Jarosik et al., 2011; Planck Collaboration VI, 2020). Thus the baryonic spatial distribution is mainly dictated by the gravitational potential imprinted by the dark matter density. Therefore, Lyman-\(\alpha\) forest observations are good tracers of the total matter in the context of this cosmological model (Petitjean et al., 1995; Hui & Gnedin, 1997; Croft et al., 1998). Moreover, it is crucially important for cosmology: at the relevant redshifts probed by Lyman-\(\alpha\), we do not have as many easily observable galaxies as at low redshift (\(z\lesssim 1\)). The Lyman-\(\alpha\) forest is thus a complementary probe of the matter clustering at \(z\simeq 2-3\), also known as the cosmic noon era.

The Lyman-\(\alpha\) forest can provide much insight into the cosmological model. Past work on the subject includes, for example, the recovery of the properties of the intergalactic medium using a Bayesian inversion method (Pichon et al., 2001), the constraining of the neutrino masses (Palanque-Delabrouille et al., 2015; Yeche et al., 2017), the inference of the three-dimensional matter distribution from 1D Lyman-\(\alpha\) absorption using Bayesian forward modelling (Porqueres et al., 2019; Horowitz et al., 2019; Porqueres et al., 2020; Kraljic et al., 2022), the constraints on the thermal history of the IGM (Villasenor et al., 2022), and the measurement of the Baryonic Acoustic Oscillation (BAO), which provides a tight constraint on the expansion history of the universe (Font-Ribera et al., 2014). Several large cosmological surveys nowadays include a QSO survey with spectrum measurements (BOSS, eBOSS, DESI; Lee et al., 2013; du Mas des Bourboux et al., 2020; Levi et al., 2019).

To analyse the Lyman-\(\alpha\) forest observations, we require hydrodynamical simulations of the IGM, which have to be computed alongside a dark matter \(N\)-body simulation. Though the IGM is coupled on large scales with the gravitational potential induced by the dark matter overdensity, on small scales it is dominated by baryonic shocks, cooling, and feedback, for which we do not have analytical solutions. Therefore, the IGM state has to be solved numerically. With upcoming cosmological surveys such as WEAVE-QSO (Pieri et al., 2016), the resolution and volume become higher and larger, which consequently increases the required computational time drastically. Therefore, using a full hydrodynamical simulation to match the same volume is becoming intractable.

Alternatively, \(N\)-body simulations of pure dark matter are much faster than hydrodynamical simulations. Thus, one could avoid solving for the hydrodynamics altogether and try to correct the small-scale effects phenomenologically (\(\sim 100h^{-1}\) kpc, the Jeans scale where the Lyman-\(\alpha\) absorption happens). The fluctuating Gunn-Peterson approximation (FGPA, Gunn & Peterson, 1965; Weinberg et al., 1997) is the main framework to estimate a mock Lyman-\(\alpha\) forest from the dark matter overdensity. However, this approximation does come with a few limitations. For example, it fails to capture the baryonic feedback at small scales, the power law breaks down in regions with higher density and strongly heated gas (Lukic et al., 2014; Kooistra et al., 2022), and the power law starts to decouple at higher redshifts. In most cases, the dark matter overdensity is smoothed before being used for emulation.
For example, the baryonic pressure is emulated by convolving the matter density with a Gaussian kernel (Hui & Gnedin, 1997). Other work (Sorini et al., 2016) introduced a numerical scheme called Iteratively Matched Statistics (IMS) in an effort to improve the accuracy beyond Gaussian smoothing. Finally, the LyMAS-2 technique (Peirani et al., 2014; Peirani et al., 2022) relies on a conditional probability to map the flux from a smoothed dark matter field and introduces a Wiener filter to better represent the coherence along the line-of-sight.

In recent years, the astrophysical community has increasingly adopted machine learning models because of their ability to provide a universal fitting function and their prediction speed through accelerators such as Graphics Processing Units (GPUs), which allow us to reach a model that is both highly accurate and fast, provided training data exist. Inspired by the work of Peirani et al. (2014), we propose to swap the conditional PDF with a neural network. In this work, we will test a _U-Net_ architecture, focusing on emulating the most essential fields describing the IGM for the Lyman-\(\alpha\) forest. We introduce such an emulator, called _LyAl-Net_, trained to map a dark-matter density field from an \(N\)-body simulation to IGM fields (neutral hydrogen density and temperature) derived from a sibling hydrodynamical simulation. In this work, the two simulations are Horizon-DM and Horizon-noAGN at \(z=2.43\) (from the Horizon-AGN suite of simulations; Dubois et al., 2014; Peirani et al., 2017). We then derive the Lyman-\(\alpha\) forest absorption features from the emulated fields.

Previous work followed a similar initial strategy to derive emission and absorption features of the IGM (Villaescusa-Navarro et al., 2013; Harrington et al., 2021). We improve on their work with a larger set of accuracy tests and added flexibility, so that the models can be used in an observational context. We also explore a method to re-calibrate the equation of state to improve the emulated Lyman-\(\alpha\) absorption for different gas physics, allowing the existing _LyAl-Net_ to be used in vastly different conditions at the cost of only a small additional training set. This allows us to directly affect the underlying physical quantity instead of simply adjusting the mean transmitted flux. This is a powerful tool for a fast simulation of the Lyman-\(\alpha\) forest, with the potential of generalising a machine learning model to emulate different gas physics beyond the training set. This work aims to provide a framework and a proof of concept for generating more accurate emulators of the gas hydrodynamics and the Lyman-\(\alpha\) forest in particular.

This paper is organised as follows. We discuss the physical process and the model we use for the Lyman-\(\alpha\) absorption system in Section 2. In Section 3, we motivate and describe the architecture of the neural network used for training, including the data transformations and the hyperparameters for training. We present the prediction results and accuracy benchmarks using Horizon-noAGN and Horizon-DM in Section 4. In Section 5, we discuss the results using IllustrisTNG (Nelson et al., 2019) as a test set and the transfer learning method to fine-tune the equation of state. We discuss the results and conclude in Section 6.

## 2 Physical model of the Lyman-\(\alpha\) absorption

In this section, we give a brief reminder of the fundamentals of the physical mechanisms that produce the observed Lyman-\(\alpha\) forest.
The details of the atomic physics underlying this derivation are given in earlier work by Meiksin (2009). We focus here on the relevant physical quantities we seek to model with our framework. We can break down the mechanisms behind the observed Lyman-\(\alpha\) absorption features into a few components: the intrinsic linewidth of the resonance of the Lyman-\(\alpha\) transition, the thermal broadening owing to the small-scale random motions of the atoms, and the Doppler shift due to the bulk motions of the hydrogen clouds.

The absorption strength of the broadband Lyman-\(\alpha\) emission from one QSO allows us to infer the density of the neutral hydrogen at different distances along a line-of-sight. This absorption rate depends on the number of interactions. We refer to it as the opacity, which is a function of the observed frequency \(\nu_{\rm obs}\):
\[\tau(D_{\rm QSO},\hat{n},\nu_{\rm obs})=\int_{0}^{s_{\rm QSO}(D_{\rm QSO})}n_{\rm HI}(s,\hat{n})\,\sigma_{\rm HI}(\nu_{\rm obs},s,\hat{n})\ {\rm d}s\,, \tag{1}\]
where \(n_{\rm HI}\) is the number density of neutral atomic hydrogen and \(\sigma_{\rm HI}\) is the interaction cross-section of photons with the HI atoms. We note that \(s\) is a physical proper distance. As such, \(s_{\rm QSO}(D_{\rm QSO})\) is the physical proper distance corresponding to the comoving distance \(D_{\rm QSO}\) of the QSO on the considered line-of-sight.

To derive the expression for all these quantities, we first consider the situation in the rest frame of some atomic hydrogen in space. The atomic cross-section can be described by the Lorentz profile with the following expression:
\[\sigma_{\rm HI}(\nu)=\frac{\pi e^{2}}{m_{e}c}f_{lu}L(\nu)=\frac{\pi e^{2}}{m_{e}c}f_{lu}\frac{\Gamma_{ul}/(4\pi^{2})}{(\nu-\nu_{lu})^{2}+(\Gamma_{ul}/(4\pi))^{2}}\,, \tag{2}\]
where \(L(\nu)\) is the Lorentz profile, \(\nu_{lu}\) is the frequency of the transition from the lower (\(l\)) to the upper (\(u\)) energy level, \(\Gamma_{ul}\) is the upper energy level damping width, \(f_{lu}\) is the oscillator strength, and \(m_{e}\) and \(e\) are the electron mass and charge, respectively. We note that the transition levels for the Lyman-\(\alpha\) system are \(l=1\) and \(u=2\), which gives \(f_{12}=0.4162\).

In addition to the probabilities of the fundamental transition, we have to consider two other components. The HI atoms have finite positive temperatures that induce their thermal motions, which for a non-relativistic gas are well described by a Maxwell distribution. This effect is called Doppler broadening, and we denote the distribution by \(G(\nu,T)\). This type of broadening depends on the temperature of the gas, which is itself position-dependent. As HI clouds are in motion with respect to the photons emitted by the QSO, we have to apply a further Doppler boost to put the incoming photon in the rest frame of the cloud. Taking the redshift of the photon emitter into account, the relation between the frequency in the two frames is
\[\nu=\nu_{\rm obs}(1+z)\left(1+\frac{v_{z}}{c}\right)\,, \tag{3}\]
where \(\nu\) is the central frequency, \(\nu_{\rm obs}\) the observed frequency, and \(v_{z}\) is the line-of-sight velocity. Therefore the cross-section in this scenario is the convolution of the Lorentz and Doppler profiles,
\[\sigma_{\rm HI}(\nu,T)=\frac{\pi e^{2}}{m_{e}c}f_{12}(L*G)(\nu,T)=\frac{\pi e^{2}}{m_{e}c}f_{12}V(\nu,T). \tag{4}\]
The convolution of these two profiles is called a Voigt profile, which we write as \(V(\nu,T)\):
\[V(\nu,T)=\int_{-\infty}^{\infty}{\rm d}\nu^{\prime}\frac{\Gamma_{21}/(4\pi^{2})}{(\nu^{\prime}-\nu_{12})^{2}+(\Gamma_{21}/(4\pi))^{2}}\frac{1}{\sqrt{\pi}\Delta\nu_{D}(T)}\exp\left[-\left(\frac{\nu-\nu^{\prime}}{\Delta\nu_{D}(T)}\right)^{2}\right]\,, \tag{5}\]
where
\[\Delta\nu_{D}=\frac{\nu_{12}}{c}\sqrt{\frac{2k_{B}T}{m_{H}}} \tag{6}\]
is the Doppler width, which is the frequency broadening of the Lyman-\(\alpha\) frequency due to the Doppler effect, and \(\nu_{12}\) is the Lyman-\(\alpha\) frequency.

We may rearrange the terms to simplify the expression by introducing \(x=(\nu-\nu^{\prime})/\Delta\nu_{D}(T)\), the frequency offset in units of \(\Delta\nu_{D}\). We set \(a=\Gamma_{21}/(4\pi\Delta\nu_{D})\), the ratio of the Lyman-\(\alpha\) line width to the Doppler frequency width \(\Delta\nu_{D}\). We can now rewrite Equation (5), with \(x^{\prime}=x(\nu^{\prime})\), as
\[V(\nu,T)=\int_{-\infty}^{\infty}{\rm d}\nu^{\prime}\left(\frac{1}{\pi\Delta\nu_{D}}\right)\left(\frac{a}{(x(\nu)-x^{\prime})^{2}+a^{2}}\right)\frac{1}{\sqrt{\pi}\Delta\nu_{D}}\exp\left(-x^{\prime 2}\right)\,. \tag{7}\]
The Voigt profile then reads
\[V(a,x)=\frac{a}{\pi^{3/2}\Delta\nu_{D}}\int_{-\infty}^{\infty}\frac{e^{-y^{2}}}{(x-y)^{2}+a^{2}}\,{\rm d}y\,. \tag{8}\]
The normalisation of the Voigt profile is unity. We further transform the expression to express the Voigt profile \(V(a,x)\) in terms of the Voigt function \(H\) as
\[V(a,x)=\frac{1}{\sqrt{\pi}\Delta\nu_{D}}H(a,x)\,. \tag{9}\]
We come back to the computational-efficiency motivation for using this function in the next section. With this convention, the Voigt function reads
\[H(a,x)=\frac{a}{\pi}\int_{-\infty}^{\infty}\frac{e^{-y^{2}}\,{\rm d}y}{a^{2}+(x-y)^{2}}\,. \tag{10}\]
Therefore the cross-section of the Lyman-\(\alpha\) absorption in the frame of the observer is
\[\sigma_{\rm HI}(\nu_{\rm obs})=\frac{\pi e^{2}}{m_{e}c}f_{12}\frac{H(a,x)}{\sqrt{\pi}\Delta\nu_{D}}\,. \tag{11}\]
With the cross-section in Equation (11), we then compute the normalised absorption rate for Lyman-\(\alpha\), also named the Lyman-\(\alpha\) forest, as
\[F(\lambda)=\exp\left[-\tau\left(\nu_{\rm obs}\right)\right]. \tag{12}\]
Thus, we have an explicit algorithm for computing the absorption features encoded by \(F(\lambda)\). We detail the numerical implementation in Section 3.3. We note that we have not discussed the influence of the UV background, which directly affects the amount of neutral hydrogen (Becker & Bolton 2013), the relation between the spin temperature and the gas temperature (Liszt 2001), or possible relativistic effects (Irsic et al. 2016). Also, by considering the function \(F(\lambda)\), we assume that the Lyman-\(\alpha\) profile of the QSO is fully known, since this is not the subject of this work. We focus only on how to emulate, as quickly and as broadly as possible, the effect of the hydrodynamical simulation needed to obtain the function \(F(\lambda)\).
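In practice, \(H(a,x)\) can be evaluated through the Faddeeva function \(w(z)\), available in SciPy (see Section 3.3 and Appendix A). A minimal sketch, using the standard identity \(H(a,x)=\mathrm{Re}[w(x+ia)]\) and cgs constants, could look as follows; the function names are ours, and the bulk-velocity term of Equation (3) is omitted for brevity.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt_H(a, x):
    """Voigt function H(a, x) = Re[w(x + i*a)] of eq. (10)."""
    return wofz(x + 1j * a).real

def lya_cross_section(nu_obs, temperature, z):
    """Lyman-alpha cross-section of eq. (11), in cgs units (illustrative sketch)."""
    e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10          # esu, g, cm/s
    k_B, m_H = 1.381e-16, 1.673e-24                     # erg/K, g
    nu12, f12, gamma21 = 2.466e15, 0.4162, 6.265e8      # Hz, oscillator strength, 1/s
    dnu_D = nu12 / c * np.sqrt(2 * k_B * temperature / m_H)  # Doppler width, eq. (6)
    a = gamma21 / (4 * np.pi * dnu_D)
    x = (nu_obs * (1 + z) - nu12) / dnu_D               # offset in Doppler units
    return np.pi * e**2 / (m_e * c) * f12 * voigt_H(a, x) / (np.sqrt(np.pi) * dnu_D)
```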
### Benchmark Model: modified-FGPA

To assess the performance of our model in generating the Lyman-\(\alpha\) absorption, we employ an approach similar to the fluctuating Gunn-Peterson approximation (FGPA; Gunn & Peterson 1965; Weinberg et al. 1997), an analytical framework to estimate a mock Lyman-\(\alpha\) forest from the dark matter overdensity, based on the idea that the dark matter gravitational potential attracts baryonic matter (Bi & Davidsen 1997). The approximation is expressed as
\[\tau(z,{\bf r})\propto(1+\delta({\bf r}))^{\beta}, \tag{13}\]
where \(\tau\) is the IGM optical depth at redshift \(z\) at a comoving position \({\bf r}\), \(\delta\) is the dark matter overdensity, and \(\beta\) is derived from the slope of the temperature-density power-law relation, which relies on the equilibrium between HI collisional recombination and photoionisation. We use a similar approximation for each individual field, which yields the following relation for the neutral hydrogen density:
\[n_{\rm HI}^{\rm mock}=\bar{n}_{\rm HI}\left(\frac{\rho_{\rm DM}}{\bar{\rho}_{\rm DM}}\right)^{\alpha_{n}}\,, \tag{14}\]
where \(\bar{\rho}_{\rm DM}\) is the dark matter average density, \(n_{\rm HI}^{\rm mock}\) is the estimated neutral hydrogen number density, and \(\bar{n}_{\rm HI}\) is the coefficient of the power law. From the temperature-density power-law relation, we can similarly estimate a mock hydrogen temperature using
\[T_{\rm HI}^{\rm mock}=\bar{T}_{\rm HI}\left(\frac{\rho_{\rm DM}}{\bar{\rho}_{\rm DM}}\right)^{\alpha_{T}}\,, \tag{15}\]
where \(T_{\rm HI}^{\rm mock}\) is the estimated hydrogen temperature and \(\bar{T}_{\rm HI}\) is the coefficient of the power law. The parameters obtained by fitting the Horizon-DM dark matter overdensity to the Horizon-noAGN hydrogen density and temperature are shown in Table 1. From here on, we refer to this benchmark model as modified-FGPA (mod-FGPA).

Furthermore, to compare modified-FGPA to _LyAl-Net_, we insert the estimated density and temperature into Equation (1), which also includes the Voigt profile, and we assume that the line-of-sight velocities of the dark matter and the gas are approximately the same. We also note that this model is not, strictly speaking, the FGPA, but it employs the same power-law approximation for the relation between the gas-state fields and the dark matter. We expect the result of this test to be more optimistic than the classical FGPA but worse than the one obtained with _LyAl-Net_, because it is less flexible and ignores the stochastic components of the gas.
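A minimal sketch of this benchmark is given below, assuming (our reading of the table's convention) that the \(\bar{X}\) column of Table 1 stores the decimal logarithm of the power-law amplitudes; the helper name is ours.

```python
import numpy as np

# Table 1 fits, read as log10 amplitudes (an assumption on the table's convention)
LOG10_N_BAR, ALPHA_N = -10.43, 1.27   # neutral-hydrogen density amplitude [cm^-3]
LOG10_T_BAR, ALPHA_T = 3.98, 0.58     # gas temperature amplitude [K]

def mod_fgpa(rho_dm, rho_dm_mean):
    """Mock HI density and temperature fields from eqs. (14)-(15)."""
    x = rho_dm / rho_dm_mean                 # dark matter density ratio
    n_hi = 10.0 ** LOG10_N_BAR * x ** ALPHA_N
    t_hi = 10.0 ** LOG10_T_BAR * x ** ALPHA_T
    return n_hi, t_hi
```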
## 3 Method

The model we use to calculate the absorption at different frequencies of the Lyman-\(\alpha\) broad emission line needs three ingredients that characterise the physical state of the neutral hydrogen: the temperature, the gas density, and the line-of-sight gas velocity. Therefore, the main objective of this work is to construct a neural network that can produce these parameters for the computation of the Lyman-\(\alpha\) forest model. We train _LyAl-Net_ for each gas parameter separately, to ease the optimisation and the interpretation of the predictions. This section discusses the details of the cosmological simulation used as the training set, the numerical approach for generating the mock Lyman-\(\alpha\) absorption, and the data pre-processing procedure.

### The simulation datasets

The simulation that we used is the Horizon-AGN simulation. It is a cosmological hydrodynamic simulation based on the adaptive mesh refinement code Ramses (Teyssier, 2002). The simulation contains \(1024^{3}\) dark matter particles with a mass resolution of \(M_{\rm DM,res}=8\times 10^{7}\) M\({}_{\odot}\), and the volume spans a cube of side \(100h^{-1}\,\rm Mpc\); the run took 4 million CPU hours to complete down to redshift \(z=1.2\). Horizon-AGN has been computed assuming a standard \(\Lambda\)CDM cosmology, adopting cosmological parameters compatible with the data of the WMAP mission (Komatsu et al., 2011): \(\Omega_{m}=0.272\), \(\Omega_{\Lambda}=0.728\), \(\sigma_{8}=0.81\), \(\Omega_{b}=0.045\), \(H_{0}=70.4\,{\rm km\,s^{-1}\,Mpc^{-1}}\), and \(n_{s}=0.967\). Furthermore, the simulation models the gas as an ideal monatomic gas with adiabatic index \(\gamma=5/3\). The gas cools via a metal-dependent model of hydrogen and helium emission (Sutherland & Dopita, 1993), which allows the temperature of the gas to cool down to \(10^{4}\) K. At the same time, the gas is heated via a uniform UV background following the model by Haardt & Madau (1996) after the reionisation redshift \(z_{\rm reion}=10\).

In this work, we used Horizon-noAGN as the training set. It is the sibling simulation of Horizon-AGN with one major difference: the noAGN variant does not have AGN feedback, due to the lack of black hole growth. The reason for using noAGN is that the computational time for the simulation was reduced by turning off the AGN feedback and emulating its effect by fine-tuning the UV background radiation to achieve the same gas properties (Peirani et al., 2022). Another upside of using Horizon-noAGN is that we can study the effects on, and sensitivities of, the Lyman-\(\alpha\) forest model. We also used the TNG100 simulation as a validation set to properly cross-check the results in Section 5.4.

### Dataset generation for training and validation

In this section, we describe the steps involved in creating the different datasets required for training and validating the machine learning model. The phase-space distribution of dark matter is sampled with particles in the cosmological simulations that we considered. For our emulation strategy, we require the densities to be on a regular grid mesh to facilitate the computations. We have assigned the particles to a \(1024^{3}\) grid mesh, leading to a resolution of \(\sim 98h^{-1}\,\rm kpc\) for each voxel. Several mass assignment schemes exist, among which the most used is the Cloud-In-Cell assignment (Hockney & Eastwood, 1988). Unfortunately, this may lead to mesh cells completely devoid of mass and is prone to very poor sampling in cosmic voids, which is exactly where the signal of the Lyman-\(\alpha\) forest is the most relevant (Porqueres et al., 2020). We rely on the particle assignment developed by Colombi et al. (2007) to avoid these problems. This assignment relies on an adaptive filter derived from the Smooth Particle Hydrodynamics (SPH) kernel (Monaghan, 1992). We briefly remind the reader of the procedure here.

\begin{table} \begin{tabular}{l c c} \hline \hline **Gas Parameter (\(X\))** & \(\bar{X}\) & \(\alpha_{X}\) \\ \hline Temperature & 3.98 & 0.58 \\ Density & -10.43 & 1.27 \\ \hline \hline \end{tabular} \end{table} Table 1: Fitted parameters of the power-law relation between the dark matter and the IGM parameters in Equations (14) and (15). We use the Horizon-DM dark matter overdensity and the Horizon-noAGN gas density and temperature.

Figure 1: The flowchart illustrates a schematic of the _LyAl-Net_ pipeline for the Lyman-\(\alpha\) forest simulation. The flux absorption model (Equation (12)) requires three gas physical quantities, namely the temperature, the density, and the line-of-sight velocity. _LyAl-Net_ emulates only the gas temperature and the gas density of neutral atomic hydrogen. We use the dark matter line-of-sight velocity of the same \(N\)-body simulation as an approximation of the gas line-of-sight velocity.
The complete classical description of a particle system is given by the phase-space distribution, \(f({\bf r},{\bf v}_{L})\). For our computations, we typically require the moments of that distribution, which for the velocity is
\[{\bf v}({\bf r})=\frac{1}{\rho}\int{\rm d}^{3}{\bf v}_{L}\,{\bf v}_{L}f({\bf r},{\bf v}_{L}), \tag{16}\]
where \(f({\bf r},{\bf v}_{L})\) is the phase-space density of the dark matter particles. The mass density \(\rho({\bf r})\) is obtained similarly from the integral
\[\rho({\bf r})=\int{\rm d}^{3}{\bf v}_{L}f({\bf r},{\bf v}_{L}). \tag{17}\]
In practice, we compute Equations (16) and (17) on a mesh made of small cubic patches of volume \(\Delta r^{3}\), and approximate the resulting fields piecewise. We do these computations using an approach similar to smooth particle hydrodynamics (Monaghan 1992) to reduce shot noise. We can represent each particle as a smooth cloud with a finite size based on its local density, which depends on the distances between neighbouring particles. Generally, we define the cloud size as \(2R_{\rm SPH}\) and the number of neighbouring particles as \(N_{\rm SPH}\). The algorithm we use preserves the conservation of mass and momentum by an adequate choice of mesh node weighting, as detailed below. To interpolate particles onto the grid sites, we use the following equation:
\[\tilde{A}(i,j,k)=\frac{1}{\left[R_{\rm SPH}(i,j,k)\right]^{3}}\left[\sum_{l=1}^{N_{\rm SPH}}A_{l}W_{l}{\cal S}\left(\frac{d_{l}}{R_{\rm SPH}(i,j,k)}\right)\right]. \tag{18}\]
This equation provides a weighted sum of a quantity \(A_{l}\) carried by a particle \(l\). Each neighbouring particle is multiplied by \({\cal S}(x)\), an SPH kernel, where the relative distance \(x\) is the ratio of the distance \(d_{l}\) of particle \(l\) from the grid mesh node \((i,j,k)\) to \(R_{\rm SPH}(i,j,k)\). The particles are weighted using the factor \(W_{l}\), such that the total contribution of each particle, summed over the grid sites, is equal to unity. This constraint yields the following identity for the weight of each particle:
\[W_{l}=1/S_{l}=\left[\sum_{i,j,k}\frac{1}{\left[R_{\rm SPH}(i,j,k)\right]^{3}}{\cal S}\left(\frac{d_{l}}{R_{\rm SPH}(i,j,k)}\right)\right]^{-1}. \tag{19}\]
For the particular case of assigning the mass to the grid, we end up with a mass per mesh node of the grid. We divide by the volume corresponding to this mesh node to obtain the mass density. This yields:
\[\tilde{\rho}(i,j,k)=\tilde{m}(i,j,k)/\Delta r^{3}. \tag{20}\]
Similarly, the interpolated velocity is obtained following Equation (16), which gives on the mesh the following identity:
\[\tilde{v}(i,j,k)=\tilde{p}(i,j,k)/\tilde{m}(i,j,k)\,, \tag{21}\]
where \(\tilde{p}\) is the interpolated momentum of the cell.

Unlike dark matter, the gas simulation from Ramses has an adaptive mesh refinement structure, and each raw gas cell has a different comoving size. The largest cells have an extent of \(0.0977h^{-1}\,{\rm Mpc}\), while the smallest cells have 1/32nd of this length. We homogenised the resolution by assigning each gas cell, based on its coordinates, to the nearest grid point, similar to the nearest-grid-point scheme. Some of the mesh cells may contain multiple sub-cells, e.g. two \(L_{\rm largest}/4\) cells and one \(L_{\rm largest}/2\) cell. We used the same weight regardless of the cell length to average the quantities, which is a sufficient approximation given the scale we are working on with the Lyman-\(\alpha\) system. The algorithm can easily be expressed as
\[X_{\rm cell}(i,j,k)=\frac{1}{N}\sum_{m=1}^{N}x_{m}, \tag{22}\]
where \(X_{\rm cell}\) is the target parameter at coordinate \((i,j,k)\) and \(x_{m}\) is the value of the target parameter in sub-cell \(m\).
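For the nearest-grid-point averaging of Equation (22), a compact NumPy sketch (our own helper, assuming cell-centre coordinates given in box units) is:

```python
import numpy as np

def assign_amr_cells_to_grid(coords, values, ngrid, boxsize):
    """Unweighted nearest-grid-point average of AMR cell values (eq. 22).

    coords : (M, 3) cell-centre positions in the same units as boxsize
    values : (M,) target quantity carried by each cell
    """
    idx = np.floor(coords / boxsize * ngrid).astype(int) % ngrid
    flat = np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]), (ngrid,) * 3)
    total = np.bincount(flat, weights=values, minlength=ngrid**3)
    count = np.bincount(flat, minlength=ngrid**3)
    mean = np.where(count > 0, total / np.maximum(count, 1), 0.0)
    return mean.reshape(ngrid, ngrid, ngrid)
```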
### Mock Lyman-\(\alpha\) absorption generation

We have discussed the physics of the Lyman-\(\alpha\) forest earlier in Section 2. Let us now move to the numerical approach used to calculate the normalised flux from the neutral hydrogen parameters. The opacity of the gas as a function of the observed frequency on a given line-of-sight, defined in Equation (1), is in continuous form. However, since the dataset has a grid mesh structure, we replace Equation (1) with a Riemann sum to calculate the integral. The opacity now reads
\[\tau_{{\bf a},k}=\sum_{j=0}^{N}n_{{\rm HI},{\bf a},j}\,\sigma_{\rm HI}(\nu_{{\rm obs},k},{\bf a},j)\;\delta l\,, \tag{23}\]
where \(j\) is the index of the position along the line-of-sight, \(l_{j}=j\times\delta l\) is the proper distance from an observer to a source, \({\bf a}\) is the unit vector perpendicular to the source plane, and \(\delta l\) is the physical length of each cell width, which in our case is \(\delta l=0.098h^{-1}\,{\rm Mpc}\). The Voigt function is expensive to compute numerically. We opted to rely on the Faddeeva function, which is implemented as part of SciPy (Jones et al. 2001). We refer the reader to Appendix A for the relation between this function and the actual computation we do. We then compute the normalised absorption flux \(F(\nu)\) following Equation (12). We do not consider the exact shape of the broadband Lyman-\(\alpha\) emission line of the mock QSO itself, which has to be estimated as usual in the observational context.

Converting the spatial resolution of the simulation to a spectrometer resolution gives a resolving power of \(R=\lambda/\Delta\lambda\simeq 30000\). This resolution is much higher than what is possible to achieve with current cosmological surveys. For example, WEAVE-QSO has \(R=5000\) and \(R=20000\) in low- and high-resolution modes (Dalton et al. 2014), and SDSS IV (BOSS and eBOSS) has \(R=2000\). This allows us an error budget in the simulation compared to the real surveys. Of course, a high resolution does not mean the physics of the simulation is accurate, nor does it contain all of the baryonic physics at that resolution. We will consider the reliability of our model by comparing results obtained with other simulations in Section 5.
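As a sketch of the discretised computation, assuming the per-cell cross-sections have already been evaluated (e.g. with the Voigt function sketched in Section 2), Equations (23) and (12) amount to:

```python
import numpy as np

def flux_along_sightline(n_hi, sigma_hi, dl):
    """Normalised flux F = exp(-tau) from the Riemann sum of eq. (23).

    n_hi     : (N,) HI number density in each cell along the sightline [cm^-3]
    sigma_hi : (M, N) cross-section per observed-frequency bin and cell [cm^2]
    dl       : proper width of one cell [cm]
    """
    tau = sigma_hi @ n_hi * dl   # sum over cells j for each frequency bin k
    return np.exp(-tau)          # eq. (12)
```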
### Pre-transformation of the fields before training

The dark matter density field is usually described in terms of the density contrast, defined as
\[\delta_{\rm DM}=\frac{\rho_{\rm DM}}{\bar{\rho}_{\rm DM}}-1\,, \tag{24}\]
where \(\bar{\rho}_{\rm DM}\) is the dark matter average density. The 1D distribution of \(\delta_{\rm DM}\) is heavily skewed, with a range from \(-1\) to infinity by construction. This is not necessarily the most convenient for emulators to manipulate. Given enough flexibility, a skewed dataset does not prevent the neural network from learning; however, a transformation brings several benefits: it compresses the dynamic range, allows a scale-invariant treatment, and should improve the training speed and reduce the sample size required to achieve a given accuracy. Therefore, we use the transformed density \(A_{\rho}\), derived from the density contrast as
\[A_{\rho}(\rho_{\rm DM})=\log_{10}\left(\frac{\rho_{\rm DM}}{\bar{\rho}_{\rm DM}}\right)=\log_{10}(1+\delta_{\rm DM}). \tag{25}\]
Similarly, we apply a decimal logarithmic transformation to the density and temperature of neutral hydrogen:
\[A_{\rm n,HI}=\log_{10}\frac{n_{\rm HI}}{1\ {\rm g\ cm^{-3}}}\,, \tag{26}\]
and
\[A_{\rm T,HI}=\log_{10}\frac{T_{\rm HI}}{1\ {\rm K}}. \tag{27}\]
The choice of this transformation for density and temperature is not only for the coherence of the scale used between the input and the prediction; it should also facilitate our interpretation of differences with the dark matter distribution in the prediction analysis. Most importantly, the relation between density and temperature is suggested to be a power law in the diffuse IGM regime, which is typically located in the void regions (Hui & Gnedin, 1997; Schaye et al., 1999; Meiksin, 2009). This temperature-density relation is expressed as
\[T=T_{0}\left(\frac{\rho_{b}}{\bar{\rho}_{b}}\right)^{\gamma-1}, \tag{28}\]
where \(\gamma\) is the adiabatic coefficient, \(T_{0}\) is the temperature at the mean density, \(\rho_{b}\) is the density of baryonic matter, and \(\bar{\rho}_{b}\) is the mean baryonic density. Since we are interested in neutral hydrogen, for simplicity, we assume \(\frac{\rho_{b}}{\bar{\rho}_{b}}\approx\frac{n_{\rm HI}}{\bar{n}_{\rm HI}}\). Hence, under the log transformation this relation becomes linear:
\[A_{\rm T,HI}=\bar{A}_{\rm T,HI}+\left(\gamma-1\right)\left(A_{\rm n,HI}-\bar{A}_{\rm n,HI}\right), \tag{29}\]
where the barred quantities are evaluated at the mean density.

We show a density-temperature plot in Figure 2, and we highlight the diffuse IGM region using the thresholds \(T<10^{5}\) K for the temperature and \(n_{\rm H}<10^{-4}(1+z)\ {\rm cm^{-3}}\) for the density (Martizzi et al., 2019). This gas phase contains the majority of the baryonic density, corresponding to \(\approx 99.5\%\) of the total volume and \(\approx 76.7\%\) of the total hydrogen mass. The atomic hydrogen density is also embedded within this field, therefore carrying the most important information for the Lyman-\(\alpha\) forest. For the dark matter and gas velocities, we do not re-scale for the neural network training, since they are similar in magnitude. The hope is that the neural network can construct a phenomenological model that relates corrections to this mean relation.
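In code, the pre-transformations of Equations (25) to (27) are one-liners (a sketch with a hypothetical helper name; unit handling follows the conventions above):

```python
import numpy as np

def transform_fields(rho_dm, n_hi, t_hi):
    """Decimal-log transforms used as network input and targets (eqs. 25-27)."""
    a_rho = np.log10(rho_dm / rho_dm.mean())  # A_rho = log10(1 + delta_DM)
    a_n = np.log10(n_hi)                       # n_HI expressed in g cm^-3
    a_t = np.log10(t_hi)                       # T_HI expressed in K
    return a_rho, a_n, a_t
```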
### LyAl-Net Architecture

This section discusses the neural network architecture used for this project and its advantages. We then discuss the training process, including the size of the training sample, the choice of the loss function, and the details of the architecture of _LyAl-Net_.

For this work, we chose a U-net architecture for the neural network, first introduced by Ronneberger et al. (2015) for biomedical image segmentation. A U-net is based on the convolutional neural network (CNN; O'Shea & Nash, 2015), which can extract and create feature mappings similar to how the human visual cortex works. It is well known for its pattern-recognition ability, spatial invariance, and scale invariance. This architecture is much faster than traditional artificial neural networks (ANNs; O'Shea & Nash, 2015), such as the multilayer perceptron, and has become an invaluable technique for solving various problems, especially image-focused ones. The significant difference between CNNs and ANNs consists of the convolutional layers. These layers allow the machine to perform convolutions with kernels and leverage three crucial points: sparse interactions, parameter sharing, and equivariant representations (Goodfellow et al., 2016). The U-net architecture allows us to extract and summarise the information on small scales while retaining large-scale information thanks to skip connections. With the knowledge that hydrogen gas is a tracer of dark matter and vice versa, the U-net becomes a practical approach to generating the gas fields in a short amount of time compared to a typical hydrodynamic simulation; it has also been used by Wadekar et al. (2020) and Bernardini et al. (2021).

The schematic of _LyAl-Net_ is shown in Figure 3. It comprises three major steps: the convolution side (on the left), the bottleneck (at the bottom), and the up-sampling side (on the right). We chose a \(3\times 3\times 3\) convolution kernel for each convolution step, while maintaining the dimension using 'SAME' padding and adopting a dropout layer with a rate of 0.2 (Footnote 1). A \(3\times 3\times 3\) max-pooling is also used to reduce the dimension at each contracting step (Footnote 2). After the bottleneck phase, we used a \(3\times 3\times 3\) up-sampling layer followed by the same configuration of convolutions. For each convolution layer of the up-sampling phase, we used 16 filters applied to the concatenation of the input and the bypass from the contracting side.

Footnote 1: A convolution kernel with 'SAME' padding adds zeros around the input, such that the output has the same dimensions as the input.

Footnote 2: Max-pooling operates by extracting the maximum value within the \(3\times 3\times 3\) kernel, scanning through the tensor.

The overall dimension of the input is \(81\times 81\times 81\) voxels, and the output is \(27\times 27\times 27\) voxels, cropped to prevent possible edge effects caused by the convolutions. Thus, to obtain the full simulation volume, we need to tile the outputs into a larger cube, as explained later in Section 3.5.3.

Figure 2: A density-temperature (\(n-T\)) diagram of hydrogen density versus temperature in the Horizon-noAGN simulation at \(z\approx 2.4\). The black dashed line illustrates the fitted strong power-law relation within the diffuse IGM regime, which is the main contributor to the Lyman-\(\alpha\) forest, corresponding to \(\approx 99.5\%\) of the total volume and \(\approx 76.7\%\) of the total hydrogen mass. A light grey dashed line separates the different baryonic regimes. We provide the density plot in this phase diagram by colouring each point with a log scale on the right-hand side.

We note that this architecture is not optimal, and a better architecture may exist that provides an equivalent or better result with fewer trainable parameters.

#### 3.5.1 Loss Function

We choose the mean squared error (MSE) as the loss function. The main advantage of this loss function is that it is more stable than a GAN loss (Nguyen et al., 2021). It also allows a better understanding of what the neural network learns and, therefore, better interpretability. It takes the form
\[\text{MSE}\left(Y_{i},\hat{Y}_{i}\right)=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2}\,, \tag{30}\]
where \(n\) is the batch size, \(Y_{i}\) is the prediction, and \(\hat{Y}_{i}\) is the true value. We note that this loss is applied to the transformed variables (\(A_{X}\)), and it only calculates the loss over the central part of \(27\times 27\times 27\) voxels.
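A framework-agnostic sketch of this loss, cropping the \(81^{3}\) target to the central \(27^{3}\) voxels to match the network output (our own helper), is:

```python
import numpy as np

def central_crop_mse(pred, target, out=27):
    """MSE between the 27^3 prediction and the central 27^3 crop of the target."""
    start = (target.shape[-1] - out) // 2
    c = slice(start, start + out)
    return float(np.mean((pred - target[..., c, c, c]) ** 2))
```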
For the velocity field, this stays linear. For the other quantities, such as \(T_{\text{HI}}\) and \(n_{\text{HI}}\), owing to the log transformation, the effective loss function is the mean squared log error (MSLE), which reads
\[\text{MSLE}(Y_{i},\hat{Y}_{i})=\frac{1}{n}\sum_{i=1}^{n}\left(\log\frac{Y_{i}}{\hat{Y}_{i}}\right)^{2}. \tag{31}\]
This loss weighs high and low values equally and eases the training process. The MSLE loss punishes underestimation more than overestimation. This is a two-sided choice, carrying both a risk and an advantage: it can lead to a bias in the mean response, as we will investigate later, but it also avoids being overly sensitive to high-value excursions, which are irrelevant for the Lyman-\(\alpha\) forest. As long as the prediction stays within a sensible error window given the resolution of baryonic observations, this loss function should not be the primary concern. We will discuss prediction bias and error later in Section 4.

#### 3.5.2 Training Process

We train _LyAl-Net_ to predict each gas parameter separately, because we can then tweak the training hyper-parameters and interpret the prediction results independently from the other fields. We used the Horizon-noAGN simulation as the training set, which contains both dark matter and gas components. It spans a volume of \((100h^{-1}\,\text{Mpc})^{3}\), with a resolution of 1024 voxels on each side. This snapshot has a redshift range from \(z=2.44\) to \(z=2.33\). Since we were limited to a single full simulation, we randomly selected 5120 sub-boxes, each of size \(81^{3}\) voxels, or \(\sim(7.9\ h^{-1}\,\text{Mpc})^{3}\) in volume, which is equivalent to \(\sim 9\%\) of the total simulation volume. The data sampling also allows repeated and overlapping samples in the training set, which encourages the network to learn translational invariance. The size of the sub-box is intended to fit the GPU memory and to encourage the neural network to learn the information at small scales. We note that a physically larger kernel size could improve the solution by better capturing the transfer of power from large to small scales. We will discuss this further in Section 6.

Figure 3: We show here the schematic of the _LyAl-Net_ architecture. The input of the network is the transformed dark matter field, \(\log_{10}\left(\delta_{DM}+1\right)\), with a size of \(81\times 81\times 81\) voxels. The output is the targeted gas parameter (temperature or gas density) with a size of \(27\times 27\times 27\) voxels; we crop the volume at its centre from the last convolutional layer. The arrows represent the direction of the data, where each colour represents a different operation with a \(3\times 3\times 3\) kernel size. We note that all of the convolution layers extract 16 feature maps. The model is trained to predict each gas parameter separately for simplicity of optimisation and interpretation.

We also perform rotational augmentation (Footnote 3) along all three axes, except for the line-of-sight velocity, which is rotated solely around the z-axis. The rotation is required for the trained neural network to obtain rotational invariance (He et al., 2019; Kaushal et al., 2022; Ramanah et al., 2019), reflecting the physical process involved in the prediction. We expect this process to make the emulator robust; this property will be useful for transfer learning with a different dark matter simulation.

Footnote 3: That is, increasing the diversity of the training set by generating new samples from the existing dataset.
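A minimal sketch of the sub-box sampling and rotational augmentation described above (our own helper, for scalar fields; the velocity field would restrict rotations to the z-axis) could be:

```python
import numpy as np

def sample_subbox(dm_field, gas_field, size=81, rng=None):
    """Randomly crop a (dark matter, gas) training pair and rotate it (Sec. 3.5.2)."""
    rng = rng if rng is not None else np.random.default_rng()
    n = dm_field.shape[0]
    i, j, k = rng.integers(0, n - size, size=3)        # overlapping crops allowed
    x = dm_field[i:i+size, j:j+size, k:k+size]
    y = gas_field[i:i+size, j:j+size, k:k+size]
    plane = [(0, 1), (0, 2), (1, 2)][rng.integers(0, 3)]  # random rotation plane
    turns = rng.integers(0, 4)                             # quarter-turns
    return np.rot90(x, turns, axes=plane), np.rot90(y, turns, axes=plane)
```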
Table 2 summarises the parameters of the training loop that we used, including the number of epochs and the learning rate. We used a batch size of 64 with the Adam optimiser (Kingma and Ba, 2014). We note that the hyper-parameters of _LyAl-Net_ were obtained via numerical experimentation to ensure repeatable and quick convergence.

#### 3.5.3 Tiling Algorithm

As mentioned, the simulation is discretised into \(1024^{3}\) voxels. Since _LyAl-Net_ produces emulated fields of only \(27^{3}\) voxels from an \(81^{3}\)-voxel input, we have to tile emulated elementary cubes to achieve the desired volume before assessing the accuracy and quality of the prediction model. We implemented a tiling algorithm by applying a 27-voxel sliding window, horizontally scanning from the XY-plane 38 times per row, column, and layer (Z direction). We describe the algorithm with pseudo-code in Algorithm 1. The total volume consists of \(38\times 38\times 38\) elementary cubes (equivalent to \(1026^{3}\) voxels), which is then trimmed to obtain the \(1024^{3}\) voxels. The construction of the complete volume took \(\sim 8\) minutes on an Nvidia Tesla V100 and an Intel(R) Xeon(R) Gold 6230 (40 cores). This cost can be further reduced by re-arranging some operations directly on the GPU to alleviate GPU-to-CPU data transfers. We postpone this optimisation for later; it could be useful for a fully differentiable model of the Lyman-\(\alpha\) forest.
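Algorithm 1 is essentially the following sliding-window loop. This is a sketch: the periodic padding of the input is our assumption about how boundaries are handled, and `predict` stands for the trained network mapping an \(81^{3}\) input to its central \(27^{3}\) output.

```python
import numpy as np

def tile_predictions(predict, dm_field, inp=81, out=27):
    """Tile 27^3 outputs over the full volume (38^3 tiles for 1024^3, Sec. 3.5.3)."""
    n = dm_field.shape[0]
    steps = -(-n // out)                       # ceil(n / out): 38 for n = 1024
    pad_lo = (inp - out) // 2                  # 27 voxels of context on each side
    pad_hi = (steps - 1) * out + inp - n - pad_lo
    padded = np.pad(dm_field, (pad_lo, pad_hi), mode="wrap")  # periodic box assumed
    full = np.zeros((steps * out,) * 3)        # 1026^3, trimmed below
    for a in range(steps):
        for b in range(steps):
            for c in range(steps):
                cube = padded[a*out:a*out+inp, b*out:b*out+inp, c*out:c*out+inp]
                full[a*out:(a+1)*out, b*out:(b+1)*out, c*out:(c+1)*out] = predict(cube)
    return full[:n, :n, :n]                    # trim 1026^3 back to 1024^3
```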
## 4 Results on Horizon-noAGN

In the following, we present the metrics we use to assess the quality of the emulated fields in Section 4.1. Then we consider in turn each of the physical fields in the following sections: the number density of atomic neutral hydrogen in Section 4.2, the temperature in Section 4.3, and the gas bulk velocity in Section 4.4. We emphasise that we train _LyAl-Net_ with Horizon-noAGN and use Horizon-DM as a validation set to see how the model responds to different baryonic feedback. We also use the terms "predicted fields" and "emulated fields" interchangeably.

### Prediction Quality Assessment

We consider several metrics to assess the degree of trust we may have in _LyAl-Net_ to predict physical fields. We first consider the one-point statistics, i.e. the relation between the true and the predicted field at a given point in space. The second statistic that we consider is the two-point correlation function, which provides insight into the reproduction of the mean spatial structure of the two respective fields.

#### 4.1.1 One-point Statistics

We quantify the emulator's performance by considering the conditional probability distribution function of the predicted value \(X_{\text{Pred}}\) of the field \(X\) given the true value \(X_{\text{True}}\) of that same field from the simulation. This conditional probability is obtained directly from Bayes' theorem, expressed as
\[P(X_{\text{Pred}}|X_{\text{True}})=\frac{P(X_{\text{Pred}},X_{\text{True}})}{P(X_{\text{True}})}, \tag{32}\]
where \(P(X_{\text{Pred}}|X_{\text{True}})\) is the probability of the predicted value of \(X\) given the true value of \(X\), \(P(X_{\text{Pred}},X_{\text{True}})\) is the joint probability of \(X=X_{\text{Pred}}\) and \(X=X_{\text{True}}\), and \(P(X_{\text{True}})\) is the probability of \(X_{\text{True}}\). We have estimated the joint probability distribution function of the predicted and true values, \(P(X_{\text{Pred}},X_{\text{True}})\), and \(P(X_{\text{True}})\) using kernel density estimation (KDE; Silverman, 1986; Scott, 2015). It is a technique to infer a continuous probability density function (PDF) from a finite sample by smoothing the distribution using a weighted kernel function. Owing to the computational cost of this estimator, we built it on a reduced number of voxels: we use approximately 50% of the total volume to calculate the KDE estimator. The full details are indicated in Appendix B.

#### 4.1.2 Two-point Statistics

The two-point correlation function \(\xi(r)\) is defined as
\[\xi(|\mathbf{r}|)=\langle\delta_{A}(\mathbf{r}^{\prime})\delta_{B}(\mathbf{r}^{\prime}+\mathbf{r})\rangle. \tag{33}\]
Compared to a homogeneous distribution, this measures the excess probability of finding two objects, or field fluctuations, separated by a distance \(|\mathbf{r}|\). These objects or fields may be, for example, galaxies or gas quantities. The power spectrum is the image of the correlation function in the spatial frequency domain. It is defined as
\[P(|\mathbf{k}|)=\int\mathrm{d}^{3}\mathbf{r}\ \xi(r)e^{-i\mathbf{k}\cdot\mathbf{r}}. \tag{34}\]

The metrics that we use to assess both the magnitudes and the spatial information of the emulated fields are the transfer function \(T_{X}(k)\) and the cross-correlation function \(r_{X}(k)\). The transfer function \(T_{X}(k)\), \(k\) being the spatial comoving wave number, is a convenient way to represent the departure of the predicted field from the original correlation structure of the true field \(X\) at each scale \(k\). This function is commonly used in the literature as a benchmark of the performance of an emulator or an approximation (e.g. Bardeen et al., 1986; Leclercq et al., 2013; Vlah et al., 2016; He et al., 2019; Dai & Seljak, 2021). The transfer function is defined as
\[T_{X}(k)=\sqrt{\frac{P_{\rm pred,X}(k)}{P_{\rm true,X}(k)}}\,, \tag{35}\]
where \(P_{\rm pred}(k)\) is the power spectrum of the emulated field, and \(P_{\rm true}(k)\) that of the reference, or true, field. Finally, we consider the correlation rate \(r_{X}(k)\) to check the linear correspondence:
\[r_{X}(k)=\frac{P_{{\rm true},X\times{\rm pred},X}(k)}{\sqrt{P_{{\rm true},X}(k)P_{{\rm pred},X}(k)}}, \tag{36}\]
where the numerator is the cross-power spectrum of the true and predicted fields. To compute the power spectrum of the different scalar fields considered in this work, we rely on nbodykit (Hand et al., 2018).

\begin{table} \begin{tabular}{l c c} \hline \hline **Target HI Parameter** & **Learning Rate** & **Number of Epochs** \\ \hline Density & 0.002 & 1000 \\ Temperature & 0.002 & 1000 \\ Velocity & 0.003 & 100 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the hyper-parameters used for _LyAl-Net_ training with Horizon-noAGN.
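For readers who prefer not to depend on nbodykit, a self-contained NumPy sketch of the \(|\mathbf{k}|\)-binned auto- and cross-spectra behind Equations (35) and (36) is given below; the normalisation convention is one common choice, not necessarily the one used by nbodykit.

```python
import numpy as np

def binned_spectra(true, pred, boxsize, n_bins=64):
    """|k|-binned auto- and cross-power spectra of two cubic fields."""
    n = true.shape[0]
    ft, fp = np.fft.rfftn(true), np.fft.rfftn(pred)
    kx = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    idx = np.digitize(kmag.ravel(), np.linspace(0, kmag.max(), n_bins + 1))
    vol_norm = boxsize**3 / n**6            # raw-FFT modes -> physical power
    def radial(spec):
        num = np.bincount(idx, weights=spec.ravel(), minlength=n_bins + 2)
        den = np.maximum(np.bincount(idx, minlength=n_bins + 2), 1)
        return (num / den)[1:n_bins + 1]
    p_tt = radial(np.abs(ft)**2) * vol_norm
    p_pp = radial(np.abs(fp)**2) * vol_norm
    p_tp = radial((ft * np.conj(fp)).real) * vol_norm
    return p_tt, p_pp, p_tp
    # T(k) = np.sqrt(p_pp / p_tt);  r(k) = p_tp / np.sqrt(p_tt * p_pp)
```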
For the prediction quality assessment of Horizon-noAGN, we decided to include the trained portions of the dataset in the power spectra calculation, because only \(\leq 9\%\) of the simulation volume was used to train _LyAl-Net_ (Footnote 4). The main reason is to avoid an irregular volume for the power spectra computation, and the trained volume is small compared to the rest of the validation volume.

Footnote 4: The data sampling algorithm for the training set allows repeated sampling.

### Neutral hydrogen density (\(n_{\rm HI}\))

Among the different quantities required to compute the absorption rate in the Lyman-\(\alpha\) forest, the atomic hydrogen number density is the most critical to obtain correctly, since the neutral hydrogen clouds are the absorbers in the Lyman-\(\alpha\) process. The top row of Figure 4 shows a slice of the atomic hydrogen number density \(n_{\rm HI}\) predicted from the Horizon-noAGN dark matter overdensity, together with the (logarithmic) difference between the ground truth and the prediction. Visually, the structures of both slices are in good agreement. We note no apparent defects caused by _LyAl-Net_, nor edge effects induced by our tiling procedure across the entire volume, except for spots where a high relative error is present, typically correlated with high-density regions where the Lyman-\(\alpha\) absorption is saturated.

We now follow the model evaluation procedure of Section 4.1 to compute the joint and conditional probability distributions of the emulated number density given the true number density from Horizon-noAGN. The estimated probability distribution function \(P(\log_{10}n_{\rm HI}^{\rm Pred}|\log_{10}n_{\rm HI}^{\rm True})\) is shown in Figure 5, with contour lines representing the different levels of probability. The green dashed line, referred to as \(\mu\left(\log_{10}n_{\rm HI}^{\rm Pred}|n_{\rm HI}^{\rm True}\right)\), is the mean predicted neutral hydrogen density from the conditional probability, shown together with the 68% confidence interval represented by the green band. This is modelled after the fact that the conditional probability is quite close to a Gaussian distribution. The red dashed line is given as a reference where \(n_{\rm HI}^{\rm Pred}=n_{\rm HI}^{\rm True}\). We further add a red band representing the upper and lower boundaries of a 10% error from the mid-point, as a qualitative reference to detect the mean bias of the emulator, which we call the prediction bias. The contour levels and the mean imply that the prediction behaves well within the main region of the dataset, given that the 0th and 99th percentiles are \(3.8\times 10^{-13}\) and \(4.5\times 10^{-9}\,{\rm cm}^{-3}\), respectively. The bias only starts to occur when \(n_{\rm HI}<10^{-12}\,{\rm cm}^{-3}\) and \(n_{\rm HI}>3\times 10^{-9}\,{\rm cm}^{-3}\), which might be caused by the longer distribution tail and sample variance affecting the KDE itself.

Figure 4: Comparisons of sample slices for different hydrodynamic quantities in the decimal logarithm scale. Left column: fields directly extracted from Horizon-noAGN. Middle column: fields obtained from the application of _LyAl-Net_ to the dark matter field. Right column: the difference between the leftmost panel and the middle panel. The atomic hydrogen number density (temperature, respectively) is presented on the top row (bottom row, respectively).

Figure 5: The conditional probability of the predicted decimal logarithm of the density \(n_{\rm HI}\) given the true density of Horizon-noAGN, and their corresponding marginal distributions, where the black solid contour lines represent different levels of probability (see Equation (32)). The diagonal red dashed line indicates the unbiased relation between the predicted and true density, alongside a 10% fiducial error budget. The green dashed line indicates the mean of the conditional probability distribution alongside the 68% confidence interval. We note a good agreement over several orders of magnitude of the predicted gas density, from \(1\times 10^{-12}\,\text{cm}^{-3}\) to \(6\times 10^{-10}\,\text{cm}^{-3}\). A significant saturation occurs for very low densities at \(n_{\text{HI}}\leq 10^{-12}\,\text{cm}^{-3}\).
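As a sketch of how such a conditional mean can be extracted from a Gaussian KDE of the joint distribution (our own helper; the paper's actual KDE settings are detailed in its Appendix B):

```python
import numpy as np
from scipy.stats import gaussian_kde

def conditional_mean(log_true, log_pred, true_grid, n_pred=128):
    """Mean of P(X_pred | X_true) (eq. 32) from a Gaussian KDE of the joint PDF."""
    kde = gaussian_kde(np.vstack([log_true, log_pred]))
    pred_grid = np.linspace(log_pred.min(), log_pred.max(), n_pred)
    mu = np.empty_like(true_grid, dtype=float)
    for i, t in enumerate(true_grid):
        joint = kde(np.vstack([np.full(n_pred, t), pred_grid]))  # P(t, pred_grid)
        mu[i] = np.sum(joint * pred_grid) / np.sum(joint)        # P(t) cancels out
    return mu
```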
To get a better picture of the prediction bias, we define the bias as a function of the true value as
\[\text{Bias}(X_{\rm HI}^{\rm True})=\frac{10^{\mu\left(\log_{10}X_{\rm HI}^{\rm Pred}\,|\,\log_{10}X_{\rm HI}^{\rm True}\right)}-X_{\rm HI}^{\rm True}}{X_{\rm HI}^{\rm True}}, \tag{37}\]
where \(X_{\rm HI}\) is a gas parameter. We visualise this function for the emulated \(n_{\rm HI}\) in Figure 6; it shows that the prediction bias stays within the 10% fiducial bias for \(10^{-12}\,\text{cm}^{-3}<n_{\text{HI}}<10^{-10}\,\text{cm}^{-3}\). Comparing _LyAl-Net_ to the benchmark, the prediction from modified-FGPA is unbiased only around the density \(n_{\text{HI}}\approx 10^{-11}\,\text{cm}^{-3}\) and fails to capture the broader range of densities.

While the one-point statistics show that _LyAl-Net_ performs well, we extend our analysis to the two-point statistics of the emulated fields. Before calculating the power spectra, we mask the extreme values of both the emulated and the true \(n_{\text{HI}}\) to zero. We base this on the fact that the bias starts to occur in the tails of the distribution, due to the fewer data points there. Furthermore, masking has to be done because power spectra are very sensitive to extreme values (see Figure C.1). Therefore, we implement three different masking ranges, referred to as _fidelity ranges_: low, medium, and high. Table 3 summarises the threshold values used in this process. These masking values are arbitrary, chosen so that we can identify the regions in which _LyAl-Net_ performs best. We emphasise that the maximum number of masked voxels is less than 1%; therefore, this should not artificially increase the correlation rate \(r(k)\).

We compare the transfer function and the cross-correlation using the different fidelity ranges, as illustrated in Figure 7. The transfer function is approximately stable for all fidelity ranges, while \(T_{n_{\rm HI}}(k)\) for the high fidelity range performs best and stays around 0.9 up to \(k\approx 11h\,\text{Mpc}^{-1}\). The cross-correlation functions \(r_{n_{\rm HI}}(k)\) show the convergence of all fidelity ranges, with an amplitude staying above 0.9 up to \(k\approx 3h\,\text{Mpc}^{-1}\). Based on the two-point statistics, this assessment shows that the best performance of _LyAl-Net_ is obtained up to \(n_{\text{HI}}\leq 1.10\times 10^{-9}\,\text{cm}^{-3}\), which is around the estimated diffuse IGM limit (Footnote 5).

Footnote 5: We estimated the diffuse IGM limit of the neutral hydrogen density by scaling the Martizzi et al. (2019) \(n_{\text{H}}\) limit: \(n_{\text{HI}}=\alpha n_{\text{H}}\) with \(n_{\text{H}}=10^{-4}(1+z)\,\text{cm}^{-3}\), where \(\alpha=\text{mean}(n_{\text{HI}}/n_{\text{H}})\) is obtained directly from the Horizon-noAGN simulation at \(z=2.43\).

\begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Fidelity Range**} & \multicolumn{2}{c}{**Upper Limit Masking Value**} \\ \cline{2-3} & **Density**\([\text{cm}^{-3}]\) & **Temperature**\([\text{K}]\) \\ \hline low & \(5.16\times 10^{-7}\) & \(3.08\times 10^{6}\) \\ medium & \(1.79\times 10^{-8}\) & \(5.92\times 10^{5}\) \\ high & \(1.10\times 10^{-9}\) & \(8.03\times 10^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 3: The different density ranges used in the masking process to evaluate the _LyAl-Net_ performance, where the upper density boundaries are based on the 99.95th, 99.99th, and 99.5th percentiles for the low, medium, and high fidelity ranges, respectively. The lower boundary for the neutral hydrogen density is \(4.36\times 10^{-15}\,\text{cm}^{-3}\), and for the temperature it is 600 K. The maximum number of masked voxels is less than 0.5%; therefore, this should not artificially increase the cross-correlation \(r(k)\).

Figure 6: The emulated \(n_{\text{HI}}\) bias as a function of the true \(n_{\text{HI}}\) of Horizon-noAGN at \(z=2.44\) for _LyAl-Net_ (orange solid line) and the modified-FGPA (blue solid line). The modified-FGPA prediction is unbiased only at \(n_{\text{HI}}\approx 10^{-11}\,\text{cm}^{-3}\). The _LyAl-Net_ bias shows a better prediction overall: it is mostly stable and stays inside the 10% fiducial bias levels for the density range between the 1st and 90th percentiles, highlighted by a light blue strip, which corresponds to \(1\times 10^{-12}\) and \(6\times 10^{-11}\,\text{cm}^{-3}\), respectively. The drastic behaviour of the bias outside the blue strip is expected, since the extremities of the density on both sides rarely occur.
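The masking step itself amounts to the following (a sketch, with the density thresholds of Table 3 as defaults):

```python
import numpy as np

def mask_fidelity(field, upper, lower=4.36e-15):
    """Zero voxels outside a fidelity range (Table 3) before the power spectra."""
    out = field.copy()
    out[(field < lower) | (field > upper)] = 0.0
    return out
```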
Figure 5: The conditional probability of the predicted decimal logarithm of the density \(n_{\rm HI}\) given the true density of Horizon-noAGN, and their corresponding marginal distributions, where the black solid contour lines represent different levels of probability (see Equation (32)). The diagonal red-dashed line indicates the unbiased relation between the predicted and true density, alongside a 10% fiducial error budget. The green dashed line indicates the mean of the conditional probability distribution, alongside the 68% confidence interval. We note a good agreement over several orders of magnitude of the predicted gas density, from \(1\times 10^{-12}\,{\rm cm^{-3}}\) to \(6\times 10^{-10}\,{\rm cm^{-3}}\). A significant saturation occurs for very low densities at \(n_{\rm HI}\leq 10^{-12}\,{\rm cm^{-3}}\). \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Fidelity Range**} & \multicolumn{2}{c}{**Upper Limit Masking Value**} \\ \cline{2-3} & **Density**\([\text{cm}^{-3}]\) & **Temperature**\([\text{K}]\) \\ \hline low & \(5.16\times 10^{-7}\) & \(3.08\times 10^{6}\) \\ medium & \(1.79\times 10^{-8}\) & \(5.92\times 10^{5}\) \\ high & \(1.10\times 10^{-9}\) & \(8.03\times 10^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 3: The different density ranges used in the masking process to evaluate the _LyAl-Net_ performance, where the upper-density boundaries are based on the 99.95th, 99.99th, and 99.5th percentiles for the low, medium, and high fidelity ranges, respectively. The lower boundary of the neutral hydrogen density is \(4.36\times 10^{-15}\,{\rm cm^{-3}}\), and that of the temperature is 600 K. The maximum number of masked voxels is less than 0.5%; therefore, this should not artificially increase the cross-correlation \(r(k)\). Figure 6: The emulated \(n_{\rm HI}\) bias as a function of the true \(n_{\rm HI}\) of Horizon-noAGN at \(z=2.44\), for _LyAl-Net_ (orange solid line) and the modified-FGPA (blue solid line). The modified-FGPA is only unbiased at \(n_{\rm HI}\approx 10^{-11}\,{\rm cm^{-3}}\). The _LyAl-Net_ bias shows a better prediction overall: it is mostly stable and stays inside the 10% fiducial bias levels for the density range between the 1st and 90th percentiles, highlighted by a light blue strip, which corresponds to \(1\times 10^{-12}\) and \(6\times 10^{-11}\,{\rm cm^{-3}}\) respectively. The drastic behaviour of the bias outside the blue strip is expected, since the extremities of the density on both sides rarely occur. ### HI gas temperature (\(T_{\rm HI}\)) We perform the same analysis for the HI gas temperature as for the density. The bottom row of Figure 4 shows a slice through the temperature field in the Horizon-noAGN simulation, the emulated temperature from the dark matter density, and the logarithmic difference between those two fields. The emulated and true temperature also agree, without any apparent defects caused by _LyAl-Net_. The logarithmic difference map of the temperature shows that the emulation is, on average, better than for the density, with a lower relative error in dense regions. As for the density, we also compute the joint and conditional probability distributions of the emulated temperature given the true temperature, \(P(\log_{10}T_{\rm HI}^{\rm Pred}|\log_{10}T_{\rm HI}^{\rm True})\), illustrated in Figure 8, and we follow the same colour scheme as for \(n_{\rm HI}\). The contour lines represent different probability levels, the green dashed line is the mean predicted temperature, and the green band is the 68% confidence interval around it. The red dashed line is given as a reference where \(T_{\rm HI}^{\rm Pred}=T_{\rm HI}^{\rm True}\), with a red band representing the 10% fiducial error from the midpoint as a qualitative reference. Visually, the mean values of the emulated temperature appear to have a small bias across the temperature range, which becomes more prominent in the high-temperature regime. The details of the emulated temperature bias as a function of the true values are illustrated by the solid orange line in Figure 9. We consider our emulator to be nearly unbiased and stable in the range \(2\times 10^{3}\,{\rm K}<T_{\rm HI}<10^{4}\,{\rm K}\); an overprediction only appears for extreme temperatures above \(10^{4}\,{\rm K}\). In addition, _LyAl-Net_ performs markedly better than the modified-FGPA, which suffers from prediction bias across the full temperature range.
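Given the conditional mean on a grid, Equation (37) reduces to a one-liner; the helper below is a hypothetical companion to the KDE sketch above, not the authors' code.

```python
import numpy as np

def prediction_bias(log_true_grid, mu_log_pred):
    """Equation (37): fractional bias of the conditional-mean prediction,
    mapped back to linear units from decimal logarithms."""
    x_true = 10.0 ** np.asarray(log_true_grid)
    x_pred = 10.0 ** np.asarray(mu_log_pred)   # 10^{mu(log X_pred | log X_true)}
    return (x_pred - x_true) / x_true

# x, mu, lo, hi = conditional_stats(np.log10(t_true), np.log10(t_pred))
# bias = prediction_bias(x, mu)                # compare with the 10% band
```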
To estimate the fidelity range of _LyAl-Net_ for the temperature, we again picked three masking ranges, summarised in Table 3. In Figure 10, the transfer function indicates that the amplitude stays well above 0.90 up to \(k\approx 10\,h\) Mpc\({}^{-1}\) for most fidelity ranges, with a small drop at \(k<0.3\,h\) Mpc\({}^{-1}\). The spatial information measured by the cross-correlation function indicates \(r>0.9\) up to \(k=2\,h\) Mpc\({}^{-1}\) for the low and medium fidelity ranges, and up to \(k=3\,h\) Mpc\({}^{-1}\) for the high fidelity range. This implies that the highest performance of _LyAl-Net_ lies on scales of \(14\,h^{-1}\) Mpc to \(63\,h^{-1}\) Mpc for the temperature range \(1.92\times 10^{3}<T_{\rm HI}<8.03\times 10^{4}\,{\rm K}\). It is worth pointing out that the mean prediction bias is stable when the values are within the 1st and 90th percentiles for both \(n_{\rm HI}\) and \(T_{\rm HI}\). This range is empirical evidence that _LyAl-Net_ performs best in the main body of the distribution of both fields, while a high bias only occurs in the rare and extreme environments in the tails of the distributions. This implies an intrinsic limitation of the neural network, which weighs the temperature and density in the tails of the distributions less, due to the lack of data points. Fortunately, in practice, the Lyman-\(\alpha\) absorption is not sensitive to the highly dense regions, because the saturation occurs at a density lower than where the discrepancy happens (see Equation 12). Figure 8: The conditional probability of the predicted decimal logarithm of the temperature given the true temperature of Horizon-noAGN, where the contour lines represent different probability levels. The diagonal red-dashed line indicates the unbiased relation between those two temperatures, alongside a fiducial 10% error budget in shaded red. The dashed green line indicates the mean of the conditional distribution, alongside the 68% confidence interval (in shaded green). Figure 7: Horizon-noAGN prediction quality assessment using two-point correlations for different choices of fidelity range for \(n_{\rm HI}\). **Top panel:** We show the transfer functions \(T_{n_{\rm HI}}(k)\) for the low (blue solid line), medium (orange solid line), and high (green solid line) fidelity ranges. The descriptions are given in Table 3. **Bottom panel:** We show the correlation rate \(r_{n_{\rm HI}}(k)\) for the same fidelity ranges as in the top panel, with the same colour convention. Figure 9: The emulated \(T_{\rm HI}\) bias as a function of the true \(T_{\rm HI}\) of Horizon-noAGN at \(z=2.44\), for _LyAl-Net_ (orange solid line) and the modified-FGPA (blue solid line). The modified-FGPA bias lies outside the 10% fiducial bias levels for the whole temperature range, while the _LyAl-Net_ bias is mostly stable and stays inside the 10% fiducial bias levels for temperature values between the 1st and 90th percentiles, highlighted by a light blue strip, which corresponds to \(2\times 10^{3}\,{\rm K}\) to \(2\times 10^{4}\,{\rm K}\). ### The line-of-sight velocity of the HI gas (\(v_{z,\mathrm{HI}}\)) The gas line-of-sight velocity, \(v_{z,\mathrm{HI}}\), plays a significant role in the Lyman-\(\alpha\) absorption by shifting the frequency of an incoming photon due to the Doppler effect (for a non-relativistic case).
Inaccurate values of the velocity field therefore change the absorption rate of the emulated flux and shift the Lyman-\(\alpha\) absorption features in frequency. To estimate \(v_{z,\mathrm{HI}}\), we can use the line-of-sight velocity of the dark matter, \(v_{z,\mathrm{DM}}\), directly, because the two velocity fields are nearly identical: the gravity of the dark matter dominates the dynamics of the IGM. This estimation is effective on large scales, but it is inevitable for the IGM velocity to decouple at small scales due to various shocks and feedback processes (Pando et al., 2004). Therefore, naively using \(v_{z,\mathrm{DM}}\) could potentially impact the accuracy of the absorption at small scales, which means there is room for improvement in emulating \(v_{\mathrm{HI}}\). To show the decoupling effect empirically, we compare the velocity divergences of \(v_{\mathrm{DM}}\) and \(v_{\mathrm{HI}}\) using the transfer function and cross-correlation in Figure 11 (solid blue line). The transfer function \(T_{\nabla v}(k)\) of the dark matter shows a steady amplitude of \(\approx 1.1\) for \(k<2\,h\) Mpc\({}^{-1}\), which means the dark matter velocity fluctuations are higher than those of the gas. The cross-correlation rate \(r_{\nabla v}(k)\) on the bottom panel drops below 0.95 when \(k>2\,h\) Mpc\({}^{-1}\), meaning the IGM velocity starts to decouple below scales of \(3\,h^{-1}\) Mpc. Based on the idea that _LyAl-Net_ should be able to correct the small-scale effect, we train _LyAl-Net_ to predict the HI line-of-sight velocity using the dark matter line-of-sight velocity as an input. We check whether _LyAl-Net_ improves the results, instead of simply adding more systematic effects to the dark matter velocity field; the result is illustrated by the solid orange line in Figure 11. Both methods show similar performance in the spatial distribution of the cross-correlation of the velocity divergences. However, the transfer function of the dark matter shows better reliability in the high-\(k\) regime, while _LyAl-Net_ introduces a loss of amplitude over the same range. Despite this, we use \(v_{z,\mathrm{DM}}\) as a proxy for \(v_{z,\mathrm{HI}}\) and postpone the improvement of the velocity prediction to future work. Figure 11: Horizon-noAGN prediction quality assessment using two-point correlations of the velocity divergence from the Horizon-noAGN dark matter velocity and the emulated Horizon-noAGN HI velocity, labelled as **Dark Matter** in blue and _LyAl-Net_ in orange, respectively. **Top panel:** The transfer function comparison shows that both have a similar amplitude; however, the Horizon-noAGN dark matter velocity divergence performs better where \(k\gtrsim 6\,h\) Mpc\({}^{-1}\). **Bottom panel:** The cross-correlation comparison shows that both are identical, which implies that _LyAl-Net_ does not improve the spatial distribution.
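The velocity-divergence comparison uses the same \(T(k)\)/\(r(k)\) machinery as above, applied to \(\nabla\cdot v\). A minimal spectral-derivative sketch follows (our own construction, assuming a periodic box; the function name is illustrative):

```python
import numpy as np

def divergence(vx, vy, vz, box_size):
    """Spectral divergence of a periodic 3D velocity field:
    div v = IFFT( i k . v~(k) )."""
    n = vx.shape[0]
    k_full = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    k_half = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    div_k = (1j * k_full[:, None, None] * np.fft.rfftn(vx)
             + 1j * k_full[None, :, None] * np.fft.rfftn(vy)
             + 1j * k_half[None, None, :] * np.fft.rfftn(vz))
    return np.fft.irfftn(div_k, s=vx.shape)

# tk_rk(divergence(*v_dm, L), divergence(*v_hi, L), L) then yields curves
# analogous to Figure 11, up to binning choices.
```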
### The effect of an absence of baryonic feedback With the additional complexity of the equations and the large increase in the number of timesteps, cosmological hydrodynamic simulations are more computationally expensive than an \(N\)-body simulation by at least an order of magnitude (Peirani et al., 2017). While it is cheaper to produce dark matter-only fields, neglecting the baryonic feedback can reduce the accuracy of the emulated gas fields on smaller scales. Since the main goal is to utilise _LyAl-Net_ on pure dark matter \(N\)-body simulations for time efficiency, in this section we explore how a pure dark matter \(N\)-body simulation can affect the quality of the emulated fields. We then look at the impact of this approximation on the Lyman-\(\alpha\) forest absorption in Section 4.6. For this test, we emulate the HI density and temperature from Horizon-DM. This simulation is a sibling simulation of Horizon-AGN which considers only pure dark matter dynamics. We use the same snapshot, i.e. the same redshift, as the one used for the training of _LyAl-Net_. Since the neural network model is trained on Horizon-noAGN with the dark matter density transformation described by Equation (25), _LyAl-Net_ is immune to the difference in dark matter particle mass between a pure \(N\)-body and a full hydrodynamical simulation. The fields emulated from the Horizon-noAGN dark matter can then serve as a benchmark for the fields emulated directly from Horizon-DM. Figure 10: Horizon-noAGN prediction quality assessment using two-point correlations for different choices of fidelity range for the emulated \(T_{\rm HI}\). **Top panel:** The transfer functions for the low, medium, and high fidelity ranges in blue, orange, and green, respectively. The temperature ranges are indicated in Table 3. **Bottom panel:** A comparison of the correlation rates for the same fidelity ranges, with the same colour scheme. Let us introduce a new metric, the ratio of transfer functions, \[\frac{T_{x,\mathrm{H-DM}}(k)}{T_{x,\mathrm{H-noAGN}}(k)}=\sqrt{\frac{P_{x,\mathrm{H-DM}}(k)}{P_{x,\mathrm{H-noAGN}}(k)}}, \tag{38}\] where the subscripts \(x,\mathrm{H-noAGN}\) and \(x,\mathrm{H-DM}\) denote a gas field \(x\) emulated using Horizon-noAGN and Horizon-DM, respectively. We use the high fidelity masking range introduced in Table 3 for both fields. The top panel of Figure 12 shows the ratio of the transfer functions for the emulated density and temperature. The ratio is relatively stable across the whole range of considered scales for the neutral hydrogen density. However, this stable value stays at \(\approx 0.85\), which means that the absence of baryonic feedback impacts the fluctuations of \(n_{\mathrm{HI}}\). The cross-correlation function of \(n_{\mathrm{HI}}\) (middle panel) shows a slight increase in accuracy at \(k\approx 5\,h\) Mpc\({}^{-1}\). This is slightly better than the Horizon-noAGN case, and contradicts the expectation that the fields emulated from Horizon-DM should be worse due to the lack of feedback. On the other hand, the temperature emulated from Horizon-DM performs much closer to the Horizon-noAGN case, as the ratio stays relatively stable at \(0.95\). Its cross-correlation function shows that using Horizon-DM slightly impacts the spatial distribution of the emulated field on large scales. It is worth pointing out that for \(k>10\,h\) Mpc\({}^{-1}\) the transfer function ratio starts to deviate for both fields, which implies that the lack of baryonic feedback dominantly affects scales smaller than \(0.62\,h^{-1}\) Mpc (\(k>10\,h\) Mpc\({}^{-1}\)). The reduction in the accuracy of the emulated gas fields when using a dark matter simulation with different baryonic settings is expected, especially when the shocks and feedback are absent. In Section 5.4, we will discuss this in more detail, along with possible treatments and improvements using IllustrisTNG, where the difference in the simulation parameters is much more complicated. ### Lyman-\(\alpha\) forest absorption (\(F\)) In this section, we discuss the quality of the Lyman-\(\alpha\) forest prediction using the HI density and temperature emulated from Horizon-noAGN, and especially what happens when the baryonic feedback is absent, using Horizon-DM.
We follow Equation (23) and calculate the normalised Lyman-\(\alpha\) absorption. Figure 13 shows the Lyman-\(\alpha\) absorption derived from the emulator for a sample skewer, together with the flux computed from the gas parameters of Horizon-noAGN as the ground truth. The bottom panel shows the residual, defined as \[\mathrm{Residual}\equiv\exp(-\tau_{\mathrm{True}})-\exp(-\tau_{\mathrm{Pred}})=F_{\mathrm{True}}-F_{\mathrm{Pred}}. \tag{39}\] We also include the flux residual estimated by the FGPA as a benchmark model6. A visual inspection shows that the emulated spectra agree, with the overall residual staying within the \(\pm 0.1\) fiducial error. Some prominent errors exist in the highly absorbed regions, where _LyAl-Net_ shows limited performance in extreme-density environments. Nevertheless, the overall prediction performance is better than that of the FGPA model. Individual-skewer residuals, however, do not provide enough information to assess the quality of the emulated field, because the absorption depends on three different parameters, each of which affects the emulated flux differently: Footnote 6: We include the Voigt profile described in Section 2.1. 1. the density \(n_{\mathrm{HI}}\) directly affects the amount of absorption; 2. the temperature \(T_{\mathrm{HI}}\) affects the absorption width due to thermal broadening; 3. the line-of-sight velocity affects the absorption frequency, which appears as a shift in the spectra. Therefore, to suppress the small-scale fluctuations and detect the systematic error of the emulation, we perform an ensemble average of the transmitted flux over the XY-plane to obtain the mean transmitted flux as a function of redshift, \(\langle F(z)\rangle\), illustrated in Figure 14. The top panel of the figure shows that the fluctuations of the _LyAl-Net_ flux and the true flux (referred to as "ground truth") agree over the whole redshift range; the main difference is the amount of absorption in the emulated fields. The bottom panel shows that the relative errors of _LyAl-Net_ using Horizon-noAGN and Horizon-DM are approximately 2.5% and 3.5%, respectively. This is a direct consequence of _LyAl-Net_'s underprediction of the column density in regions of high matter density. The relative error also shows prominent fluctuations around redshifts \(z\approx 2.35\), 2.40, and 2.43. These fluctuations can be explained by stochastic components from the various feedback processes that _LyAl-Net_ may ignore. Alleviating these effects would require designing a new loss function that mitigates the impact of large deviations during training. The relative error of the modified-FGPA is approximately 1%. However, the 1D distribution of the Lyman-\(\alpha\) optical depth \(\tau\) shows that _LyAl-Net_ outperforms the benchmark model, particularly in the low optical depth region, where the modified-FGPA fails. Figure 12: Impact of the alteration of the dark matter fields by baryonic physics on the quality of the emulation. **Top panel:** the ratio of the transfer functions of the fields emulated using Horizon-noAGN and Horizon-DM, showing that the temperature does not suffer from the absence of baryonic feedback, whereas the neutral hydrogen density loses power, meaning the density is smoother. **Middle panel** and **bottom panel:** comparisons of the high-fidelity-range cross-correlation functions of the density and temperature emulated from Horizon-noAGN (illustrated previously in Figures 7 and 10) and from Horizon-DM.
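Equation (39) and the ensemble average \(\langle F(z)\rangle\) reduce to a few array operations. The sketch below assumes the optical depths are stored as 3D cubes with the line of sight along the last axis — an assumption about data layout, not a statement about the authors' pipeline.

```python
import numpy as np

def residual(tau_true, tau_pred):
    """Equation (39): difference of the transmitted fluxes F = exp(-tau)."""
    return np.exp(-tau_true) - np.exp(-tau_pred)

def mean_flux(tau_cube):
    """<F(z)>: average exp(-tau) over the XY plane, keeping the
    line-of-sight (redshift) axis, assumed to be the last one."""
    return np.exp(-tau_cube).mean(axis=(0, 1))

# rel_err = np.abs(mean_flux(tau_pred_cube) / mean_flux(tau_true_cube) - 1.0)
```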
For the two-point summary statistics illustrated in Figure 15, the transfer function using Horizon-noAGN (in blue) shows that the amplitude stays above \(0.90\) up to \(k\approx 10\,h\) Mpc\({}^{-1}\). We note that the initial drop of \(T_{F}(k)\), followed by the slow rise at large scales (\(k\approx 0.1\,h\) Mpc\({}^{-1}\)), implies that the predicted flux is smoother than the ground truth. When the baryonic feedback is turned off, the flux emulated from Horizon-DM behaves in the same way as for Horizon-noAGN, but with a smaller amplitude. This reduction in amplitude is the direct impact of the \(n_{\rm HI}\) underprediction discussed in Section 4.5. Meanwhile, the modified-FGPA model shows a higher amplitude on larger scales, which is also reflected in its ensemble average in Figure 14 being closer to Horizon-noAGN. Its transfer function, however, noticeably diverges for \(k>3\,h\) Mpc\({}^{-1}\), which implies that _LyAl-Net_ successfully outperforms the benchmark model on smaller scales. The cross-correlation function on the bottom panel shows a remarkable agreement between the emulated fluxes and the ground truth, up to \(k\approx 6\,h\) Mpc\({}^{-1}\) when using Horizon-noAGN and \(k\approx 9\,h\) Mpc\({}^{-1}\) when using Horizon-DM; the modified-FGPA shows a similar level of agreement as _LyAl-Net_. The spatial distribution is thus better with Horizon-DM. We can think of two possible reasons for this: the cross-correlation of Horizon-DM is slightly better (see Figure 12), or its velocity field is coincidentally closer to the velocity field of the gas. We will investigate this in future work. We now consider whether the inaccuracies of the model would be problematic when modelling observations. As a practical example, Bautista et al. (2015) modelled mock quasar spectra as \[f(\lambda)=\left\{[F(\lambda)\cdot C(\lambda)]*\tilde{W}\left(\lambda,R_{p},R_{w}\right)+N(\lambda)\right\}\cdot M(\lambda)+\delta f_{\rm sky}(\lambda), \tag{40}\] where \(f(\lambda)\) is the quasar flux, \(F(\lambda)\) is the Lyman-\(\alpha\) absorption, \(C(\lambda)\) is the quasar continuum, \(\tilde{W}\left(\lambda,R_{p},R_{w}\right)\) is a BOSS pixelisation kernel, \(N(\lambda)\) is the noise from Bautista et al. (2015), \(M(\lambda)\) is a linear correction function, and \(\delta f_{\rm sky}\left(\lambda\right)\) is the added sky subtraction residual. Let the Lyman-\(\alpha\) absorption from a full hydrodynamical simulation be \(F(z)\). The output from _LyAl-Net_ can then be expressed as \[F(z)_{LyAl-Net}=F(z)B(z), \tag{41}\] where \(B(z)\) is an arbitrary intrinsic bias of _LyAl-Net_ in response to the \(N\)-body simulation input. Therefore, one has to take a systematic bias of the prediction model into account: in Equation (40), \(F(\lambda)\) acquires an extra factor \(B(\lambda)\), which has to be treated. If we assume, as a minimum requirement, a quasar spectrum signal-to-noise ratio of \({\rm SNR}=10\), and further take the bias to scale linearly at the 2.5% level, the SNR requirement changes to \(10.3\).
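To make the bias propagation concrete, a schematic version of Equation (40) is given below; the pixelisation kernel, noise, and sky terms are simple stand-ins for the actual BOSS ingredients of Bautista et al. (2015), and all array names are placeholders.

```python
import numpy as np

def mock_quasar_flux(F, C, kernel, noise, M, sky):
    """Equation (40): f = {[F * C] (convolved with) W + N} * M + delta f_sky,
    with all terms sampled on one common wavelength grid; `kernel` is a
    crude stand-in for the BOSS pixelisation kernel."""
    smoothed = np.convolve(F * C, kernel, mode="same")  # pixelisation step
    return (smoothed + noise) * M + sky

# Equation (41): an emulator bias enters multiplicatively before mock-making,
# F_emulated = F_hydro * B, so B(lambda) propagates straight into f(lambda).
```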
## 5 Transfer Learning with IllustrisTNG In the previous sections, we tested the performance of the direct application of _LyAl-Net_ on simulations. The real universe, however, is unlikely to follow exactly any of them, so it is essential to check the portability of the model. To guarantee this, we need a framework that generalises to other physical feedback and other updates of the gas physics. In this section, we explore how efficiently the _LyAl-Net_ trained on Horizon-noAGN can be ported to different gas physics. We use IllustrisTNG as a benchmark, and as a validation set, to test the accuracy of the predicted density and temperature fields. We first test this by applying _LyAl-Net_ straightaway (out of the box), and then we discuss the possibility of improving the prediction results with a technique inspired by transfer learning. Figure 13: **Top panel:** A comparison of the Lyman-\(\alpha\) forest normalised fluxes for the same sample skewer from the ground truth and the emulated flux, labelled as Ground Truth and _LyAl-Net_, respectively. The fluxes are calculated from Horizon-noAGN and from the HI parameters emulated by _LyAl-Net_ using the Horizon-noAGN and Horizon-DM dark matter. **Bottom panel:** Residuals of the emulated and true fluxes, along with the fiducial error plotted as dashed lines at \(-0.1\) and \(0.1\). Both _LyAl-Net_ and the modified-FGPA mostly stay within the fiducial region, while the residuals peak within the highly absorbed regions, with _LyAl-Net_ being more stable. ### TNG100 specification IllustrisTNG is a set of simulations with different settings and resolutions. We choose TNG100, since its volume is the closest to that of the Horizon-noAGN simulation. TNG100 is a hydrodynamic cosmological simulation whose volume spans a cube of \(75\,h^{-1}\,\mathrm{Mpc}\) on each side. The simulation relies on AREPO, which solves the dynamics equations on a moving unstructured mesh defined by the Voronoi tessellation of a set of discrete points (Springel 2010). We picked snapshot 28, which corresponds to redshift \(z=2.58\); it is the closest snapshot to the one used to train _LyAl-Net_ on Horizon-noAGN (\(z=2.5\)). ### Precomputation As mentioned above, the data structure of TNG100 is slightly different from Horizon-noAGN, as it is based on a Voronoi tessellation. To derive the dataset of hydrodynamic quantities for our test, we deploy the same technique, using the same adaptive filter, to efficiently assign the dark matter and gas fields onto a mesh structure with the same resolution as the Horizon-noAGN training set (\(\sim 9.78\times 10^{-2}\,h^{-1}\,\mathrm{Mpc}\)), which yields \(768\times 768\times 768\) voxels. We derived the temperature of the gas from the internal energy, \(u\), of the TNG100 simulation by using the relation 7 \[T=\mu(\gamma-1)\frac{u}{k_{B}}\, \tag{42}\] where \(T\) is the temperature of the gas in each cell, \(\gamma=5/3\) is the adiabatic index for an ideal monoatomic gas, \(k_{B}\) is the Boltzmann constant, and \(\mu\) is the mean molecular weight. The mean molecular weight is derived from the electron abundance, the ratio of the number density of free electrons to the total hydrogen number density (Kim et al. 2022), through \[\mu=\frac{4m_{p}}{1+3X_{H}+4X_{H}x_{e}}\, \tag{43}\] where \(m_{p}\) is the proton mass, \(x_{e}=n_{e}/n_{H}\) is the electron abundance, i.e. the fraction of free electrons with respect to the number density of total hydrogen, and \(X_{H}\) is the hydrogen mass fraction, which we assume to be constant and uniform. Footnote 7: We followed the guide from the official IllustrisTNG project ([https://www.tng-project.org/data/docs/faq/gen5](https://www.tng-project.org/data/docs/faq/gen5)).
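In code, Equations (42)–(43) amount to the following sketch (CGS units; the factor \(10^{10}\) converting TNG's \((\mathrm{km/s})^{2}\) internal energy to \(\mathrm{erg\,g^{-1}}\) and the value \(X_{H}=0.76\) are our assumptions, flagged in the comments):

```python
import numpy as np

K_B = 1.380649e-16      # Boltzmann constant [erg / K]
M_P = 1.6726219e-24     # proton mass [g]
GAMMA = 5.0 / 3.0       # adiabatic index, ideal monoatomic gas
X_H = 0.76              # hydrogen mass fraction (assumed constant; our choice)

def tng_temperature(u_cgs, x_e):
    """Equations (42)-(43). `u_cgs` is the internal energy per unit mass in
    erg/g (TNG snapshots store (km/s)^2, i.e. multiply the raw values by
    1e10 first -- an assumption about unit handling, not from the paper)."""
    mu = 4.0 * M_P / (1.0 + 3.0 * X_H + 4.0 * X_H * x_e)  # mean molecular weight
    return mu * (GAMMA - 1.0) * u_cgs / K_B
```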
Figure 14: One-point statistics comparisons of the Lyman-\(\alpha\) absorption between Horizon-noAGN (Ground Truth) and the emulated absorptions, using _LyAl-Net_ and the benchmark model (mod-FGPA). **Top panel:** The mean transmitted flux as a function of redshift, \(\langle F(z)\rangle\), shows that _LyAl-Net_ tracks the ground truth well. **Middle panel:** A comparison of the relative errors of the mean transmitted flux as a function of redshift for the emulated fluxes. Both _LyAl-Net_ relative errors exhibit the same shape, with the Horizon-noAGN case having a mean closer to the ground truth; the benchmark has the closest mean transmitted flux. **Bottom panel:** A comparison of the 1D optical depth distributions shows that the _LyAl-Net_ fluxes using Horizon-noAGN and Horizon-DM perform better than the benchmark model. Figure 15: The comparisons of the transfer functions (top panel) and cross-correlation functions (bottom panel) of the fluxes emulated using _LyAl-Net_ on Horizon-noAGN and Horizon-DM, and using the modified-FGPA. The transfer function of the modified-FGPA noticeably diverges for \(k>3\,h\) Mpc\({}^{-1}\), which implies that _LyAl-Net_ performs better and more stably on smaller scales. The database of the IllustrisTNG simulations does not publicly provide the neutral hydrogen density. We instead choose to estimate it from the total hydrogen density, which is available. We therefore estimated the density of neutral hydrogen, \(n_{\rm HI\,mock}\), by scaling from its mean relative abundance in the Horizon-noAGN simulation. This can be expressed as \[n_{\rm HI}{}_{\rm mock}^{\rm TNG100}=\left(\frac{n_{\rm HI}{}^{\rm HnoAGN}}{n_{\rm H}^{\rm HnoAGN}}\right)n_{\rm H}^{\rm TNG100}. \tag{44}\] The above equation is an important simplification of the complicated physics of the hydrogen ionisation state. In future work, we intend to model this effect better for the training set, through an adequate computation of this equilibrium, matching it to Horizon-noAGN. For the performance comparison, we train _LyAl-Net_ to predict the total hydrogen density, \(n_{H}^{\rm HnoAGN}\), on the same Horizon-noAGN data, allowing us to scale the density using Equation (44) to obtain the TNG100 mock neutral hydrogen density.
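Equation (44) reads most naturally as a rescaling by the mean HI/H abundance of Horizon-noAGN, consistent with footnote 5. The sketch below adopts that reading — a global mean rather than a voxel-wise ratio, which is our interpretation of the prefactor.

```python
import numpy as np

def mock_nhi(n_h_tng, n_hi_hnoagn, n_h_hnoagn):
    """Equation (44): scale the TNG100 total hydrogen density by the mean
    HI/H relative abundance measured in Horizon-noAGN."""
    alpha = np.mean(n_hi_hnoagn / n_h_hnoagn)   # mean relative abundance
    return alpha * n_h_tng
```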
### Assessment of LyAl-Net capabilities We now discuss the performance of the neutral hydrogen density and temperature emulated by applying _LyAl-Net_ to the dark matter overdensity of TNG100. #### 5.3.1 Hydrogen Density We perform the same KDE analysis for the one-point statistics on 400 slices (\(768\times 768\times 400\) voxels). Figure 16 shows that the prediction bias of _LyAl-Net_ remains relatively stable and stays around the 10% fiducial bias between the 1st and 90th percentiles, as highlighted in blue. The overall prediction bias is more prominent than for the Horizon-noAGN counterpart, which is expected. Figure 17 illustrates the transfer function (top panel) and correlation rate (bottom panel), which stay above 90% up to \(k\approx 7\,h\) Mpc\({}^{-1}\) and \(6\,h\) Mpc\({}^{-1}\), respectively. In addition, the accuracy drops from \(k\approx 10\,h\) Mpc\({}^{-1}\) compared to Horizon-noAGN. #### 5.3.2 Temperature Figure 18 compares the 1D distributions of the true and predicted temperature. The predicted temperature appears to have a higher median, with a shorter distribution tail at high temperatures (\(T>10^{4}\) K) compared to TNG100. This overall underprediction also appears in the bias in Figure 19, and should mainly be the result of the lack of AGN feedback in the training set. Two-point statistics in Figure 20 show that the transfer function of _LyAl-Net_ stays under 50%. This lack of power shows that the fluctuations of the temperature predicted by _LyAl-Net_ are lower than in the ground truth, which is also implied by the bias plot. The cross-correlation function on the bottom panel, on the other hand, shows that _LyAl-Net_ performs well up to \(k\approx 1\,h\) Mpc\({}^{-1}\), while _LyAl-Net_ applied to Horizon-noAGN performs well up to \(k\approx 3\,h\) Mpc\({}^{-1}\). Figure 16: The comparisons of the prediction bias before and after applying the transfer learning on TNG100 snapshot 28 (Section 5.4). The light blue band represents the density within the 1st and 90th percentiles. It shows a general improvement in the high-density regime, while maintaining a similar bias across the density range compared to the out-of-the-box _LyAl-Net_. Figure 17: **Top panel:** transfer functions and **bottom panel:** cross-correlation functions of the neutral hydrogen density emulated using _LyAl-Net_. The blue line represents the performance of _LyAl-Net_ out of the box, and the orange line the performance of _LyAl-Net_ with transfer layers. We used the (high fidelity) 99.5th percentile range masking, corresponding to \(1.13\times 10^{-7}\) to \(1.26\times 10^{-4}\) cm\({}^{-3}\). Figure 18: The comparison of the decimal logarithm temperature 1D distributions of the _LyAl-Net_ prediction and the true values of TNG100. ### Transfer learning _LyAl-Net_ mainly translates the dark matter overdensity into the HI density and temperature. As the previous sections show, it relies on universal spatial correlations between the matter density and the gas parameters. Even though dark matter generally traces the gas clouds, there is a spatial limit to what _LyAl-Net_ can predict. The results presented in the previous sections show that applying _LyAl-Net_ to other dark matter simulations, in this case IllustrisTNG, may result in inaccurate emulated gas fields. This is expected since, at minimum, the baryonic feedback is not identical to Horizon-noAGN. The main goal of applying _LyAl-Net_ is to have an emulator framework that can generalise and mimic the gas physics for a different equation of state, such as a different cooling rate and UV index, with the possibility of generalising to different cosmologies. Achieving this portability of _LyAl-Net_ would ultimately unlock its application to cosmological analysis and the computation of other gas quantities in IGM physics. In particular, cosmological inference requires many forward simulations to properly marginalise over systematic effects and the uncertainty on the gas physics. A generic and cheap way of generating maps of the IGM would be crucial for future surveys, such as Lyman-\(\alpha\) forest ones. In this section, we explore the possibility of slightly augmenting _LyAl-Net_ to obtain a hydrodynamic simulation with the desired cosmology. _LyAl-Net_ can be considered as a complex non-linear transformation, where the output gas field (\(y\)) is the response to the input dark matter (\(x\)), which can be expressed as \[y=f(x)\,. \tag{45}\] The emulator, _LyAl-Net_, is only able to achieve adequate transformations of dark matter into gas properties over a restricted range of scales and for a given physics. A _LyAl-Net_ trained on Horizon-noAGN has good emulation capabilities for that gas equation of state. The first thing we should try to preserve is exactly that physical property, equivalent to an nth-order expansion of an arbitrary analytical solution of the gas physics.
By adjusting the input and output of _LyAl-Net_, without retraining the entire network, we may achieve higher-order accuracy for the emulated gas parameters. Based on Equation (45), this yields \[\hat{y}=g[f[h(x)]]\,, \tag{46}\] where \(h(x)\) and \(g(y)\) are arbitrary transformation functions of the input dark matter overdensity and the output gas fields, respectively. To test this idea, we introduce a custom transformation function of the form \[L(x)=Ax+B, \tag{47}\] where \(A\) and \(B\) are two free parameters. This function performs a linear scaling and shifting in the log space of _LyAl-Net_. We assume that the physical feedback can be summarised and encoded into a few parameters, reducing the bias, especially in the diffuse region of the gas, and ultimately improving the Lyman-\(\alpha\) flux. As indicated in the introduction, to first order, this is expected to improve the emulation of the gas equation of state and density. We note that a better transformation may exist, which will be investigated in future work. We implemented this function in _LyAl-Net_ as a layer without an activation function, which we will refer to as a _transfer layer_. The transfer layers are placed before the dark matter input and after the prediction of _LyAl-Net_, effectively encasing it. By implementing the transfer layers in this way, the overall function, though built from linear layers, is no longer linear: \[y^{\prime}=L\left(\mathit{LyAlNet}\left[L(x;C,D)\right];A,B\right)\,. \tag{48}\] In Figure 21 we provide a schematic of the implementation of the transfer layers. The weights of _LyAl-Net_ are frozen. As for the original training of the network, the mean square error is used to optimise these parameters. This allows us to leverage a GPU and TensorFlow to optimise the coefficients of the transfer layers, namely \(A\), \(B\), \(C\), and \(D\), based on the \(\chi^{2}\) loss between the ground truth and the predicted values of IllustrisTNG-100. Only 512 sub-boxes were sampled, of \(27^{3}\) voxels each, equivalent to \(\sim 1\%\) of the total volume. Table 4 summarises the parameters obtained through the described optimisation. We then re-computed the emulated total hydrogen density \(n_{\rm HI}\) and temperature \(T_{\rm HI}\). Figure 21: A simple schematic of the modified emulator implementing the transfer learning procedure. The weights of the core network, _LyAl-Net_, are frozen, meaning they are not trained further. We introduce transfer layers between the dark matter overdensity and this network to tune the input to the new problem. We only optimise \(A\), \(B\), \(C\), and \(D\) to fine-tune the prediction results, which can also be considered as high-order corrections. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{**Gas**} & \multicolumn{4}{c}{**Transfer Layer Parameters**} \\ \cline{2-5} **Parameter** & **A** & **B** & **C** & **D** \\ \hline Density & 1.180 & 0.657 & 1.133 & 0.084 \\ Temperature & 0.898 & -0.862 & 1.34 & -0.466 \\ \hline \hline \end{tabular} \end{table} Table 4: The parameters of the transfer layers for the IllustrisTNG-100 total hydrogen density and temperature at redshift \(z=2.58\). All parameters are optimised with the MSE loss using a sample of 512 sub-boxes of \(27^{3}\) voxels each. Figure 19: The comparisons of the prediction bias before and after applying the transfer learning on snapshot 28 of TNG100 at redshift \(z=2.58\). We use the same colour and style conventions as in Figure 16. It shows a general improvement in the high-temperature regime. We note that we nearly reach the requirement of the diffuse IGM limit with this modification. Figure 20: The comparisons of the transfer functions (top panel) and cross-correlation functions (bottom panel) of the neutral hydrogen temperature emulated using _LyAl-Net_, using the high fidelity range masking. The blue solid line shows the results obtained with the native _LyAl-Net_, whereas the orange solid line is the result after optimising the four additional transfer parameters. We note a dramatic improvement in the performance of the network on different physics.
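A minimal TensorFlow/Keras sketch of the transfer-layer construction is given below. The paper states that TensorFlow and an MSE loss were used; the layer and model names, initialisations, and the training call are our own illustrative choices, not the authors' code.

```python
import tensorflow as tf

class TransferLayer(tf.keras.layers.Layer):
    """Scalar affine map L(x) = A * x + B of Equation (47), no activation."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.A = self.add_weight(name="A", shape=(), initializer="ones")
        self.B = self.add_weight(name="B", shape=(), initializer="zeros")

    def call(self, x):
        return self.A * x + self.B

def encase(core):
    """Equation (48): y' = L_{A,B}( LyAlNet( L_{C,D}(x) ) ), with the core
    network frozen so that only the four scalars are trainable."""
    core.trainable = False
    x = tf.keras.Input(shape=core.input_shape[1:])
    h = TransferLayer(name="input_CD")(x)       # parameters C, D
    h = core(h, training=False)
    y = TransferLayer(name="output_AB")(h)      # parameters A, B
    return tf.keras.Model(x, y)

# model = encase(lyal_net)                      # `lyal_net`: the trained emulator
# model.compile(optimizer="adam", loss="mse")   # MSE, as stated in the text
# model.fit(dm_subboxes, gas_subboxes)          # 512 sub-boxes of 27^3 voxels
```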
#### 5.4.1 Hydrogen density For the hydrogen gas density, the bias function shown in orange in Figure 16 shows that, on the one hand, the overall shift in the blue region (1st–90th percentiles) is slightly outside the fiducial bias region; on the other hand, the bias at extremely high densities is suppressed. For the two-point statistics, Figure 17 shows an overall improvement in the transfer function amplitude, though at very small scales, at \(k=10\,h\) Mpc\({}^{-1}\), the predicted density is over-boosted by 10% in the process. The cross-correlation, on the other hand, shows an impressive improvement across the visible spatial scales: the wavenumber corresponding to a correlation rate of 90% has increased by \(\sim 40\%\), from \(6.5\,h\) Mpc\({}^{-1}\) to \(9\,h\) Mpc\({}^{-1}\). #### 5.4.2 Temperature After applying the transfer layers, the bias function of the temperature in Figure 19 shows an improvement for the temperature range within the 1st and 90th percentiles, where the emulator predicts mean values that are stable within the 10% fiducial error. From Figure 20, the transfer function shows a drastic improvement, with the amplitude nearly doubling compared to the original _LyAl-Net_. The cross-correlation also shows that the spatial distribution accuracy now extends to \(k\sim 3\,h\) Mpc\({}^{-1}\), which is similar to the Horizon-noAGN counterpart. ### Lyman-\(\alpha\) forest absorption rate We illustrate a sample skewer of the normalised Lyman-\(\alpha\) flux of TNG100 in Figure 22. The top panel directly compares the ground truth in blue, the flux calculated from the out-of-the-box _LyAl-Net_ in orange, and _LyAl-Net_ equipped with transfer layers in green. The fluxes from both configurations of _LyAl-Net_ trace the true flux well. The residual plot on the bottom panel shows a similar performance for this skewer; however, the original _LyAl-Net_ performs poorly in the highly absorbed regions. After applying the transfer layers, this region improves significantly. This is mainly the result of the temperature field correction from the transfer layers, which tends to smooth out and broaden the absorption. The mean transmitted flux comparison in Figure 23 shows that _LyAl-Net_ has a relative error of \(\approx 4\%\); this error drops drastically with the transfer layers. We can see that the transfer layers improve the absorption accuracy, as illustrated by the transfer function in the top panel of Figure 24: the amplitude increases to 0.9 up to \(k\approx 10\,h\) Mpc\({}^{-1}\). The spatial accuracy measured by the cross-correlation function at the bottom shows only a tiny improvement. This implies that the current transfer layers mainly improve the amount of absorption. Moreover, the overall performance is nearly identical to the Horizon-noAGN flux prediction, as shown in Figure 15. This finding shows that _LyAl-Net_ with the experimental transfer layer model demonstrates the portability of the framework. It can also be extended to extrapolate different configurations of the baryonic feedback for future cosmological emulation. Moreover, the eight parameters (four for the gas density and four for the temperature) may be inferred jointly with the density field in a Bayesian hierarchical model of Lyman-\(\alpha\) forest observations, for example in Porqueres et al. (2020). ## 6 Discussion and conclusion We have presented _LyAl-Net_, a neural network architecture and training procedure to derive hydrogen gas quantities from the dark matter density and velocity fields at cosmological scales. We have also shown a number of benchmarks of its resilience, portability, and capabilities for producing mock Lyman-\(\alpha\) forest observables. One of the most relevant points of this network is its portability to other gas models.
In the existing literature, the mean transmitted flux from simulations has been accounted for using a simplistic re-scaling factor to match the observations (Regan et al., 2007; Lukic et al., 2014; Borde et al., 2014). This method is equivalent to re-scaling the intensity of the UV background and the photoionisation rate from the observed \(\tau\) (Meiksin, 2009). However, this re-scaling does not provide deeper insight into what happens at the IGM level. In contrast, recent works have focused on improving the accuracy of the field itself, which yields a better emulation of the flux absorption. These works also utilise a neural network approach to generate hydrodynamical simulations from \(N\)-body simulations of dark matter. For example, Horowitz et al. (2022) used a fully convolutional variational auto-encoder similar to a U-Net architecture. Figure 22: **Top panel:** A comparison of the Lyman-\(\alpha\) forest normalised fluxes for the same sample skewer from the ground truth and the emulated fluxes, labelled as Ground Truth, _LyAl-Net_, and _LyAl-Net_ with transfer layers, respectively, from IllustrisTNG-100 snapshot 28 at redshift \(z=2.58\). The _LyAl-Net_ with transfer layers shows an improvement, especially within the highly absorbed region. **Bottom panel:** Residuals of the fluxes, along with the 10% fiducial error plotted as dashed lines, show that the residuals of both configurations mostly stay within the fiducial region. Figure 23: **Top panel:** A comparison of the IllustrisTNG-100 mean transmitted flux as a function of redshift: the true flux, referred to as _Ground Truth_, in solid blue, and two emulated fluxes calculated from the IGM emulated by _LyAl-Net_; 1) the trained _LyAl-Net_ in orange, and 2) the trained _LyAl-Net_ with transfer layers in green. **Bottom panel:** A comparison of the relative errors of the emulated mean transmitted fluxes using the trained _LyAl-Net_ with and without transfer layers. It shows that the relative error is drastically reduced after applying the transfer layers. Figure 24: A performance assessment using the two-point correlations of the transmitted fluxes emulated by _LyAl-Net_ (in blue) and _LyAl-Net_ with transfer layers (in orange) from the IllustrisTNG-100 dark matter overdensity at \(z=2.58\). The top panel shows the transfer function comparison, and the bottom panel the cross-correlation comparison.
Wadekar et al. (2020) used a special _U-Net_ architecture to predict the neutral hydrogen density from a dark matter simulation; however, to capture the physics of both underdense and overdense regions, they had to combine two separate neural networks, whose output is selected depending on the amplitude of the local dark matter density. Sinigaglia et al. (2022) used a hierarchical domain-specific machine learning approach (nicknamed BAM) to obtain the Lyman-\(\alpha\) forest flux from the dark matter overdensity, but it also requires a reconstruction of HI and HII. In this work, we improved on several aspects of the existing scientific literature: a characterisation of the stochastic properties of the emulated fields with respect to the simulated fields for different implementations of the hydrodynamic physics; a flexible layer to transpose our results to unknown new physics; a simplification of the overall U-Net architecture for Lyman-\(\alpha\); and a mitigation of the particle shot-noise effect in voids, using an adaptive filter for the dark matter as used in LyMAS (Peirani et al., 2014; Peirani et al., 2022). The first point is critical for using our model as part of a Bayesian analysis. We need to know the limits of the model, at least at the level of the one-point statistics, to be able to fit it to observations, e.g. as part of a field-level analysis (e.g. BORG, Porqueres et al. 2020), a simulation-based approach (e.g. SELFI, Leclercq et al. 2019), or a standard correlation function analysis (Slosar et al., 2011). The second point in the above list is the most important, as we have no guarantee that the parameters (e.g. star formation, AGN activity) governing the simulated fields (gas density, temperature) agree with observations. A portable model allows us to probe this directly and identify the limits of the model from observations. A similar procedure was followed in Lee et al. (2015) and Palanque-Delabrouille et al. (2015), but limited to matching the two-point statistics of the FGPA model. Once the _LyAl-Net_ model is calibrated on observations, it also indirectly allows us to utilise the emulated fields as a by-product. The emulator predicts the hydrodynamic simulation at the field level, which allows _LyAl-Net_ to predict the IGM properties from any \(N\)-body simulation code. We made several improvements with respect to LyMAS-2 (Peirani et al., 2022). _LyAl-Net_ does not require a smoothing filter; the resolution of the emulated fields is fixed by the training set, so the framework can be retrained for any desired resolution. To check the portability of _LyAl-Net_, we first assessed it by naively applying the pre-trained network to the IllustrisTNG-100 dark matter. The physics of the baryonic feedback is present there, whereas it was absent in the training set originating from Horizon-noAGN. The results, given in Figure 23, showed that the relative error of \(\langle F(z)\rangle\) increases, directly impacted by the underprediction of the hydrogen density, as expected. We solved this by a simple numerical treatment using transfer layers, drastically improving the emulated flux and matching the performance of the flux emulated from Horizon-noAGN. The transfer layers we introduced may simply be interpreted as a tuning of the equation of state, shifting the medians of \(n_{H}\) and \(T\) closer to the true values of TNG100.
Of course, the form of the equation we used has only a few parameters, allowing only a one-dimensional remapping of the values, which is not expected to be able to emulate higher-order corrections. Simply put, we can improve the mean transmission without arbitrarily scaling the flux: we apply transfer layers that fine-tune the equation of state of the IGM. This approach provides extra information and estimates of the IGM for a given baryonic physics, while obtaining the individual fields of the hydrodynamical simulation as by-products. We remind the reader of the mean transmitted flux as a function of redshift in Figure 14. The relative errors when using Horizon-noAGN and Horizon-DM behaved in the same way, while Horizon-DM had a higher relative error. From this empirical observation, we can safely claim that _LyAl-Net_ can transform the dark matter field into IGM properties based on the overdensity fluctuations. However, it has some limitations in emulating the small-scale effects, due to the stochastic nature of the baryonic feedback, which _LyAl-Net_ might not capture correctly, providing only a mean response, possibly at a low order of accuracy. The approach followed in this work still has some limitations. Notably, we have tested the emulation at a fixed redshift, i.e. at a fixed conformal time, and we have no guarantee of the portability of _LyAl-Net_ to other simulation times. We have not fully explored the possibility of reducing the complexity of the U-Net, and a smaller version may reach the same level of accuracy. We also seek to improve the interpretability in future work. For example, while staying compact, the transfer layer may take a more optimal form than Equation (47). We also seek to reduce the number of free parameters required to emulate a different environment and feedback, sufficiently to match the scale and resolution of cosmological surveys. We developed a fast simulator for the Lyman-\(\alpha\) forest, using a neural network to emulate hydrodynamical simulations of the intergalactic medium, with Horizon-noAGN as a training set. The current model works well with simulations at \(z\sim 2.5\), while treating the entire simulation independently of redshift. We tested the sensitivity of the emulator using the dark matter overdensity from IllustrisTNG-100 at \(z=2.58\). This simulation contains a different baryonic feedback from the training set, and a prediction bias occurred, as expected. To post-process the emulated fields, we introduced the _transfer layers_, linear layers that calibrate the IGM equation of state. We found that this brings the Lyman-\(\alpha\) absorption flux closer to the full hydrodynamical simulation. This improvement shows that one can correct a _LyAl-Net_ trained with a fixed cosmology and baryonic feedback using these linear layers, although the corresponding parameters still need to gain interpretability. This version of _LyAl-Net_ assumes redshift independence for a given training set, which means it is limited to a single redshift covering a specific volume. The transfer layers can fine-tune the IGM equation of state even for a cosmological simulation at a different redshift, as shown in the IllustrisTNG test. However, the IGM density–temperature relation becomes less reliable at higher redshift, because the dark matter distribution is less evolved.
Moreover, with observations of higher-redshift Lyman-\(\alpha\) forest spectra becoming increasingly available, future work will look into the redshift dependency of _LyAl-Net_, so that it can be more resilient and robust. Another development for _LyAl-Net_ is its integration into inference models such as DELFI (Alsing et al., 2019), BOLFI (Leclercq, 2018), and SELFI (Leclercq et al., 2019). Such models require fast emulation, which is the main priority of this work, since they need a massive number of model evaluations for a usable analysis. _LyAl-Net_ also allows an improvement of classical cross-correlation analyses of large cosmological surveys (Seljak et al., 2006; Amendola et al., 2013; Ivezic et al., 2019). Further, _LyAl-Net_ opens the way for field-level large-scale structure inferences through a combination of Lyman-\(\alpha\) tracers and galaxy clustering at high redshifts with next-generation galaxy surveys (Jasche & Wandelt 2013; Porqueres et al. 2019; Ivezic et al. 2019; Tsaprazi et al. 2023). Contrary to galaxy clustering, the Lyman-\(\alpha\) forest is sensitive to underdensities and probes the large-scale structure at a higher resolution. Therefore, combining the two is expected to improve the constraining power of field-level inferences of structure formation, while providing the ability to emulate the physics of the intergalactic medium without the large computational cost of hydrodynamical simulations. ###### Acknowledgements. We thank Francisco Villaescusa-Navarro, Shy Genel, Simon Prunet, and Benjamin D. Wandelt for valuable discussions. This work was supported by the ANR BIG4 project, grant ANR-16-CE23-0002 of the French Agence Nationale de la Recherche. This work was supported by the Simons Collaboration on "Learning the Universe". CB acknowledges financial support from the Sorbonne Center for Artificial Intelligence (SCAI). This work was enabled by the research project grant "Understanding the Dynamic Universe" funded by the Knut and Alice Wallenberg Foundation under Dnr KAW 2018.0067. This work has made use of the Horizon cluster hosted by the Institut d'Astrophysique de Paris, on which the cosmological simulations were post-processed. We thank Stéphane Rouberol for smoothly running this cluster for us. This work is conducted within the Aquila Consortium8. We acknowledge the use of the following packages in this analysis: NumPy (Harris et al. 2020), TensorFlow (Abadi et al. 2015), JAX (Bradbury et al. 2018), IPython (Pérez & Granger 2007), Matplotlib (Hunter 2007), Numba (Lam et al. 2015), and SciPy (Jones et al. 2001). Footnote 8: [https://www.aquila-consortium.org/](https://www.aquila-consortium.org/) ## Appendix A Voigt Profile calculation The cross-section of Lyman-\(\alpha\) is modelled by a Voigt profile. This function is very expensive to compute. We rely on the function scipy.special.wofz to obtain the results efficiently. This routine implements the Faddeeva function (Johnson, 2012), a scaled complex complementary error function (Faddeeva and Terent'ev, 1961). We note that this is the most computationally expensive part of the entire pipeline.
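A minimal sketch of the wofz-based evaluation follows (the standard Voigt formula; the parameterisation by the Gaussian standard deviation and the Lorentzian HWHM is our choice, not a quotation of the paper's code):

```python
import numpy as np
from scipy.special import wofz

def voigt(delta_nu, sigma, gamma):
    """Voigt profile at frequency offset delta_nu: a Gaussian of standard
    deviation `sigma` (thermal broadening) convolved with a Lorentzian of
    HWHM `gamma` (natural broadening), via the Faddeeva function wofz."""
    z = (delta_nu + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))
```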
## Appendix B Kernel Density Estimation implementation We estimated probability density functions using scipy.stats.gaussian_kde, which applies a Gaussian kernel with Scott's rule for bandwidth selection (Scott, 2015). However, the computational time does not increase linearly with the sample size; the practical approach is therefore to use a limited number of voxels. For our test, the full simulation volume of Horizon-noAGN has \(1024\times 1024\times 1024\) voxels; we use \(1024\times 1024\times 500\) voxels, which is approximately 50% of the total volume, for the analysis. For this reduced number of points, the required wall time to generate the density estimate is approximately 38 hours in total on an AMD EPYC 7702 (128 cores). ## Appendix C Power spectra masking process The \(n_{\mathrm{HI}}\) prediction suffers much more in the high-density regime, where the emulator incorrectly produces invalid predictions. We presume that the bias from the high densities is the culprit for the misbehaviour of the two-point diagnostics documented in the main text. To check this hypothesis, we selected the density voxels below three different fidelity thresholds (see Table 3) and masked the density outside these ranges to zero. Figure C.1 shows the comparison of the density power spectra in the high-fidelity range, which now agree well with Horizon-noAGN and confirm the idea mentioned earlier.
2309.15378
Adversarial Object Rearrangement in Constrained Environments with Heterogeneous Graph Neural Networks
Adversarial object rearrangement in the real world (e.g., previously unseen or oversized items in kitchens and stores) could benefit from understanding task scenes, which inherently entail heterogeneous components such as current objects, goal objects, and environmental constraints. The semantic relationships among these components are distinct from each other and crucial for multi-skilled robots to perform efficiently in everyday scenarios. We propose a hierarchical robotic manipulation system that learns the underlying relationships and maximizes the collaborative power of its diverse skills (e.g., pick-place, push) for rearranging adversarial objects in constrained environments. The high-level coordinator employs a heterogeneous graph neural network (HetGNN), which reasons about the current objects, goal objects, and environmental constraints; the low-level 3D Convolutional Neural Network-based actors execute the action primitives. Our approach is trained entirely in simulation, and achieved an average success rate of 87.88% and a planning cost of 12.82 in real-world experiments, surpassing all baseline methods. Supplementary material is available at https://sites.google.com/umn.edu/versatile-rearrangement.
Xibai Lou, Houjian Yu, Ross Worobel, Yang Yang, Changhyun Choi
2023-09-27T03:15:45Z
http://arxiv.org/abs/2309.15378v1
Adversarial Object Rearrangement in Constrained Environments with Heterogeneous Graph Neural Networks ###### Abstract Adversarial object rearrangement in the real world (e.g., previously unseen or oversized items in kitchens and stores) could benefit from understanding task scenes, which inherently entail heterogeneous components such as current objects, goal objects, and environmental constraints. The semantic relationships among these components are distinct from each other and crucial for multi-skilled robots to perform efficiently in everyday scenarios. We propose a hierarchical robotic manipulation system that learns the underlying relationships and maximizes the collaborative power of its diverse skills (e.g., pick-place, push) for rearranging adversarial objects in constrained environments. The high-level coordinator employs a heterogeneous graph neural network (HetGNN), which reasons about the current objects, goal objects, and environmental constraints; the low-level 3D Convolutional Neural Network-based actors execute the action primitives. Our approach is trained entirely in simulation, and achieved an average success rate of 87.88% and a planning cost of 12.82 in real-world experiments, surpassing all baseline methods. Supplementary material is available at [https://sites.google.com/umn.edu/versatile-rearrangement](https://sites.google.com/umn.edu/versatile-rearrangement). Deep Learning in Grasping and Manipulation, Perception for Grasping and Manipulation ## I Introduction Real-world robots typically operate in highly structured environments rather than everyday scenarios that contain adversarial objects (e.g., previously unseen or oversized items) and complex constraints (e.g., boxes, shelves, etc.). While a factory robot simply transfers identical items on a belt drive, a domestic robot tasked with rearranging a pantry may frequently encounter oversized containers on shelves. As illustrated in Fig. 1, real-world object rearrangement tasks are inherently heterogeneous, consisting of current objects, goal objects, and environmental constraints. The semantic relationships among these components (e.g., the "meat can" on the "ground" has a goal location on the "shelf") contain essential information for efficiently completing the task. Robots that understand and utilize such knowledge are more likely to succeed in the real world, where adversarial objects and various environmental constraints are ubiquitous. The object rearrangement problem has traditionally been addressed with model-based task and motion planning (TAMP) [1], which often assumes a fully observable environment and is thus difficult to scale to previously unseen scenarios [2, 3, 4]. Recent deep learning-based approaches can generalize to novel objects, owing to the advances in perception and grasping models [5, 6]. However, they typically assume no environmental constraints (e.g., an open tabletop) [7, 8, 4] or rely on iterative collision checking [9], which limits their generalizability in the real world. Additionally, most existing works focus on graspable objects and separately study pick-place or pushing. Although some have investigated both [7, 8], they only push to facilitate grasping [8] or employ specialized tools [7]. Relatively few have explored coordinating low-cost pushing, which may be limited by the environment, with pick-place to improve the robot's capability and efficiency. Therefore, the problem of rearranging adversarial objects with multiple skills in constrained environments remains unsolved. To address this challenge, we propose to learn from the heterogeneous task components and exploit the distinct semantic relationships among them. Fig. 1: In this adversarial object rearrangement task, the color-coded heterogeneous task components (e.g., current objects, goal objects, and environmental constraints) are linked by different semantic relationships that are crucial to efficiently guiding a multi-skilled robot. By understanding that the "bowl" and the "shelf" are related by "on", a robot will swiftly push it to the nearby goal and clear space for the "meat can", which requires pick-place to move from "ground" to "shelf".
To address this challenge, we propose to learn from the heterogeneous task components and exploit the distinct semantic relationships among them. Fig. 1: In this adversarial object rearrangement task, the color-coded heterogeneous task components (e.g., current objects, goal objects, and environmental constraints) are linked by different semantic relationships that are crucial to efficiently guiding a multi-skilled robot. By understanding that the “bowl” and the “shelf” are related by “on”, a robot will swiftly push the bowl to the nearby goal and clear space for the “meat can”, which requires pick-place to move from “ground” to “shelf”. We devise an adversarial object rearrangement system that utilizes both pushing and grasping to maximize the robot's efficiency and generalize to novel constrained environments. Our hierarchical approach represents a task as a heterogeneous graph over a pair of current and goal RGB-D images, which are segmented into objects and environmental constraints. At the high level, a heterogeneous graph neural network [10] (HetGNN)-based coordinator reasons about the graph and its underlying semantic information and predicts the optimal action primitive and next target, such that the goal configuration can be successfully achieved by the low-level actors with minimal planning costs. The system operates in a closed-loop fashion, continually re-observing the scene at each time step to predict more accurate rearrangement plans. We experiment in both simulated and real-world environments. Our approach achieves, on average, an 87.88% success rate with 12.82 actions in real-world tests, outperforming several baselines by large margins. To the best of our knowledge, this is the first approach that utilizes a HetGNN to coordinate robot skills for rearranging adversarial objects in constrained environments. The main contributions of this paper are as follows: * We propose a hierarchical pushing and grasping robotic system that addresses adversarial object rearrangement problems in constrained environments. By leveraging the semantic relationships in the task, the high-level coordinator guides the 3D CNN-based low-level actors to perform more efficiently. * Our approach represents the rearrangement task as a heterogeneous graph and exploits the power of a HetGNN to reason about the underlying relationships among the task components. It learns from an expert planner in simulation and predicts the next target and action end-to-end. * While previous approaches often assume an open workspace or use hard-coded solutions, our method learns to adapt to complex environments, where previously unseen constraints could significantly limit existing works. ## II Related Work Object rearrangement is an essential challenge in robotics and embodied AI [11]. The problem is commonly studied under the broad subject of task and motion planning (TAMP) [1], which is often formulated hierarchically with a high-level task planner (i.e., which action to perform on which item) and a low-level motion planner (i.e., how to move the end-effector) such that the goals can be achieved [2, 11]. Typical TAMP approaches are model-based and often rely on task-specific knowledge and accurate 3D models of the environment [2, 3, 4]. Hence, they often do not generalize well to the real world, where the required information may not be accessible. Recent works have equipped classical TAMP with deep learning-based perception [12, 13] and grasping models [14, 15, 16] to generalize to novel objects [6, 5, 17, 8]. 
However, many researchers focus exclusively on pick-place [5, 18, 19, 17], largely limiting the robot's capability in the real world, where objects are frequently not graspable (e.g., large items with a parallel-jaw gripper or cloths with a suction gripper). To rearrange more adversarial objects, non-prehensile action primitives such as pushing are needed. Inspired by [20], Tang et al. [8] use pushing to facilitate grasping by breaking the clutter, but not to rearrange adversarial objects. While [7] sorts large-scale basic cuboids with both pushing and grasping, they build specialized end-of-arm tooling (EOAT) for pushing. Transporter [21] and TRLB [4] bypass the challenge with suction mechanisms. Long-horizon planning for object rearrangement has been studied analytically with the Rapidly Exploring Random Tree (RRT) [22] or Monte Carlo Tree Search (MCTS) [23], which explores multiple future possibilities but is less robust to noise and occlusions. PlaNet [24] addresses the partial observability issue with a learned forward dynamics model and plans actions in latent space. Similarly, Visual Robot Task Planning [25] learns to encode the scene into a latent representation and then uses tree search for planning in this latent space. Both works are task-specific in simulation and would likely require large amounts of demonstration data to generalize to a real robot. Other researchers have leveraged spatial relations for planning [18, 19]. Liu et al. [19] take language as an input that specifies the goal configuration and then employ Transformers [26] to translate the spatial relations into a sequence of pick-place instructions. Our approach conveniently uses a single imperfect RGB-D image to specify the goal and transfers directly to the real world. Prior robotics research has investigated Graph Neural Networks [27, 28] in object rearrangement problems [29, 5, 8]. Closely related to our work, NeRP [5] employs a high-level object selection module with k-GNNs that plans for rearranging novel objects with pick-place. However, it is limited to graspable objects on an open tabletop and does not consider any environmental constraints. Tang et al. [8] compare the Graph Edit Distance (GED) between the start and goal scene graphs and plan for selective object rearrangement of multiple objects, but also assume a simplified environment. In constrained environments, existing works typically assume a constant structure [30] and rely on iterative collision checking [9], which is computationally expensive and often suffers from noise and occlusion in the real world. These methods are not as generalizable as ours, as we employ a novel HetGNN-based [10] coordinator that exploits the semantic relationships among heterogeneous components in the task and significantly improves the robot's efficiency. ## III Problem Formulation We aim to design an efficient robotic manipulation system that addresses adversarial object rearrangement problems in unstructured real-world environments, where environmental constraints could heavily influence the robot's behavior. We formulate the problem as follows: **Definition 1**.: _Given a goal image \(I_{T}\) describing a desired object configuration in a constrained environment, the goal of the rearrangement task is to apply a sequence of manipulation actions on the current objects to achieve the goal configuration where every object is within \(\tau\) of its corresponding goal location in 3D space._ In our experiments, we use \(\tau=3\,\mathrm{cm}\). 
The constrained environments considered in this work are defined as follows: **Definition 2**.: _Constrained environments include geometric constraints such that certain manipulation actions are not always feasible (e.g., pushing an object across height discontinuities)._ We make the following assumptions about robot skills and objects: **Assumption 1**.: _The robot is capable of pick, place, move, and push. pick-place is a sequence of pick, move, and place, while push requires a single move._ **Assumption 2**.: _The adversarial objects are possibly unknown (i.e., novel objects) to the robot and may not be graspable (e.g., the object dimension is larger than the maximum opening of the robot's end effector), in which case only push is applicable._ Let \(\mathcal{O}^{t}=\{o_{1}^{t},o_{2}^{t},\cdots,o_{N}^{t}\}\) and \(\mathcal{O}^{T}=\{o_{1}^{T},o_{2}^{T},\cdots,o_{N}^{T}\}\) denote the set of objects in the current scene and the goal scene, respectively. The robot action \(a\in\{\text{pick-place},\text{push}\}\) for a selected object \(o_{i}^{t}\in\mathcal{O}^{t}\) is subject to a binary-valued metric \(\mathcal{S}_{a}(o_{i}^{t},o_{i}^{T},\mathcal{C})\in\{0,1\}\), where \(\mathcal{C}=\{c_{1},c_{2},\cdots,c_{N}\}\) denotes the set of environmental constraints in the scene. \(\mathcal{S}_{a}=1\) indicates that push is more effective for the selected object at time \(t\), whereas \(\mathcal{S}_{a}=0\) indicates that pick-place is more effective. When both actions are applicable, the robot performs push (i.e., \(\mathcal{S}_{a}=1\)) since it costs fewer actions. To reason about the relationships between the objects \(\mathcal{O}^{t},\mathcal{O}^{T}\) and the constrained environment \(\mathcal{C}\), we employ a heterogeneous graph representation \(\mathcal{G}\). A heterogeneous graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) and \(\mathcal{E}\) represent the set of nodes and edges, respectively, is associated with node and edge type mapping functions \(\phi\colon\mathcal{V}\rightarrow\mathcal{F}\) and \(\psi\colon\mathcal{E}\rightarrow\mathcal{R}\), where \(\mathcal{F}\) is the set of node types (e.g., "current", "goal", and "environment") and \(\mathcal{R}\) is the set of edge types describing spatial relations (e.g., a "goal" node is "in" a "box"). The heterogeneous graph \(\mathcal{G}\) is constructed from a pair of RGB-D observations of the current and goal configurations \((I_{t},I_{T})\). We would like to learn a high-level coordinator that predicts a selection probability \(p_{o}(\mathcal{C},\mathcal{O}^{t},\mathcal{O}^{T})\) for each object such that the goal configuration can be achieved with the least number of actions by rearranging the most feasible object first. The coordinator should simultaneously learn to select the appropriate action for such targets. Specifically, the action probability \(p_{a}(\mathcal{C},\mathcal{O}^{t},\mathcal{O}^{T})=p_{push}(\mathcal{C},\mathcal{O}^{t},\mathcal{O}^{T})=Pr(\mathcal{S}_{a}=1|\mathcal{G}(\mathcal{V},\mathcal{E}))\); hence the pick-place probability \(p_{pick}(\mathcal{C},\mathcal{O}^{t},\mathcal{O}^{T})=Pr(\mathcal{S}_{a}=0|\mathcal{G}(\mathcal{V},\mathcal{E}))=1-p_{push}\). 
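For concreteness, the success criterion in Definition 1 can be written in a few lines of code. The sketch below is illustrative rather than the authors' implementation; the array names and the helper function are hypothetical.

```python
import numpy as np

def rearrangement_succeeded(current_xyz: np.ndarray,
                            goal_xyz: np.ndarray,
                            tau: float = 0.03) -> bool:
    """current_xyz, goal_xyz: (N, 3) object positions in meters.

    A task succeeds when every object lies within tau (3 cm in the
    paper's experiments) of its corresponding goal location in 3D space.
    """
    distances = np.linalg.norm(current_xyz - goal_xyz, axis=1)
    return bool(np.all(distances <= tau))
```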
## IV Proposed Approach This section describes the proposed adversarial object rearrangement system that coordinates pick-place and push in constrained environments. To address the exploration challenge in long-horizon problems, our approach takes advantage of the hierarchical structure and uses a high-level coordinator in conjunction with low-level actors to guide the robot at each time step \(t\). The goal configuration at time \(T\) is given as a reference RGB-D image \(I_{T}\). Given the current observation of the scene \(I_{t}\), the HetGNN-based coordinator reasons about the underlying relationships in the heterogeneous graph and simultaneously predicts which object should be prioritized and how to move it, such that the goal can be achieved efficiently. The overview of the approach is described in Fig. 2, and the algorithm is delineated in Algorithm 1. Fig. 2: The current RGB-D image \(I_{t}\) and goal \(I_{T}\) are fed into the graph constructor, which encodes the heterogeneous task components (color-coded) into node embeddings with pre-trained 3D encoders. Then the HetGNN updates the embeddings based on its learned parameters, and the high-level coordinator predicts the object selection score \(p_{o}\) and the action selection score \(p_{a}\) for each object. We select the object with the highest \(p_{o}\) as the target and decide which action to execute based on \(p_{a}\). Finally, we feed the decision to the low-level actors, which are responsible for performing the robot’s actions. The closed-loop system will run until the goal configuration is achieved or the maximum number of steps is reached. ### _Object Matching_ Given the goal configuration specified by an RGB-D image \(I_{T}\), the object matching module finds each object's correspondence in the current observation \(I_{t}\). We first obtain the instance masks \(\mathcal{M}_{T}\) of the \(N\) objects in \(I_{T}\) using SAG [13], an object instance segmentation method with active robotic manipulation. Next, we encode each object's RGB-D cropping from \(\mathcal{M}_{T}\) into a feature vector \(\mathbf{h}_{i}\in\mathbb{R}^{10}\) using a Siamese network [31]. The network is trained with a contrastive loss such that the L2 distance in the latent space is small for the same objects and large for different ones [32, 33]. The set \(\mathcal{H}_{T}=\{\mathbf{h}_{1}^{T},\mathbf{h}_{2}^{T},\cdots,\mathbf{h}_{N}^{T}\}\) represents the features of objects in the goal configuration. At the current time step \(t\), we follow the same procedure to extract the feature set \(\mathcal{H}_{t}\) of the current objects. The L2 distance between each element in \(\mathcal{H}_{t}\) and each element in \(\mathcal{H}_{T}\) is calculated, and the current-goal correspondence \(\mathbf{c}\in\mathbb{R}^{N\times 2}\) is established by associating each goal object with the current object at the smallest L2 distance at time \(t\). 
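A minimal sketch of this matching step is shown below, assuming the Siamese embeddings have already been computed; the function name and tensor shapes are illustrative, not the paper's actual code.

```python
import torch

def match_objects(H_t: torch.Tensor, H_T: torch.Tensor) -> torch.Tensor:
    """H_t: (N, 10) current-object embeddings; H_T: (N, 10) goal embeddings.

    Returns an (N, 2) correspondence: row i pairs goal object i with the
    current object whose embedding is closest in L2 distance.
    """
    dists = torch.cdist(H_T, H_t, p=2)      # (N_goal, N_current) L2 distances
    nearest = dists.argmin(dim=1)           # closest current object per goal
    goal_ids = torch.arange(H_T.shape[0])
    return torch.stack([nearest, goal_ids], dim=1)
```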
### _Constructing Heterogeneous Graph_ The high-level coordinator is based on a HetGNN, which exploits the heterogeneity and the underlying semantic information in the input heterogeneous graph. To construct a graph that can efficiently capture the information, we consider three different node types: current objects \(\mathcal{O}^{t}\), goal objects \(\mathcal{O}^{T}\), and the environmental constraints \(\mathcal{C}\). Unlike traditional homogeneous graphs, the relationships between these nodes are represented by a set of heterogeneous edge types, which can be semantically interpreted (e.g., the edge between "current objects" nodes and "constraints" nodes representing the "in" relationship, the edge between "current objects" nodes and "goal object" nodes representing the "to" relationship). The heterogeneous graph is illustrated in Fig. 2. The nodes \(\mathcal{V}\) include current nodes \(\mathbf{v}^{t}\), goal nodes \(\mathbf{v}^{T}\), and the constraint nodes \(\mathbf{v}^{c}\), representing the different types of heterogeneous task components. The graph connectivity contains two fully-connected sub-graphs, one for current nodes and one for goal nodes. Each current node is also connected to its corresponding goal node, specified by the current-goal correspondence \(\mathbf{c}\). Each constraint node is individually connected to every object node to propagate the influence of the environmental constraints. Each node embedding is extracted from the geometric shape of the object or environment. Specifically, the point clouds of the current objects \(\mathcal{P}_{t}\), goal objects \(\mathcal{P}_{T}\), and constraints \(\mathcal{P}_{c}\) are obtained through back-projection, and transformed into voxel grids \(V_{t}\), \(V_{T}\), and \(V_{c}\), respectively. We then encode the voxel grids into geometric features \(\mathbf{x}_{t}\), \(\mathbf{x}_{T}\), and \(\mathbf{x}_{c}\) using a 3D encoder \(E_{\phi}\): Conv3D(1, 32, 5) \(\rightarrow\) ELU \(\rightarrow\) Maxpool(2) \(\rightarrow\) Conv3D(32, 32, 3) \(\rightarrow\) ELU \(\rightarrow\) Maxpool(2) \(\rightarrow\) FC(\(32\times 6\times 6\times 6\), 12). The encoder is taken from a pretrained 3D Convolutional Autoencoder, whose latent features can effectively represent the shape of the input object. Finally, each node embedding is concatenated with the object's location \(\mathbf{z}\in\mathbb{R}^{3}\). ### _HetGNN-based Coordinator_ GNNs are effective in discovering underlying relationships among nodes by learning a non-linear function \(\mathcal{F}\), which encodes a graph \(\mathcal{G}\) to \(\mathcal{G}^{\prime}\) with updated node and edge features [34]. We start with the base homogeneous Graph Attention Network (GAT) [35]. The message-passing function, parameterized by a weight matrix \(\mathbf{\Theta}\) and attention coefficients \(\alpha_{i,j}\), for updating the latent features \(\mathbf{x}_{i}\) of node \(\mathbf{v}_{i}\) is defined as \[\mathbf{x}_{i}^{\prime}=\alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i}+\sum_{j\in \mathcal{N}(i)}\alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j} \tag{1}\] where the attention coefficients \(\alpha_{i,j}\) are computed by \[\alpha_{i,j}=\frac{\exp\left(\sigma\left(\mathbf{a}^{\top}[\mathbf{\Theta} \mathbf{x}_{i}\,\|\,\mathbf{\Theta}\mathbf{x}_{j}]\right)\right)}{\sum_{k\in \mathcal{N}(i)\cup\{i\}}\exp\left(\sigma\left(\mathbf{a}^{\top}[\mathbf{\Theta }\mathbf{x}_{i}\,\|\,\mathbf{\Theta}\mathbf{x}_{k}]\right)\right)} \tag{2}\] Here, \(\mathbf{a}\) is the learned weight vector of the attention mechanism, \(\mathcal{N}(i)\) is the set of neighbors of \(\mathbf{v}_{i}\), and \(\sigma=LeakyReLU(\cdot)\). Note that homogeneous graph neural networks cannot differentiate between different types of nodes and edges; they lack the mechanism to effectively harness the heterogeneous information. 
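For reference, Eqs. (1) and (2) can be implemented compactly for a single attention head as below. This is a readability-oriented sketch; a library layer such as `GATConv` in PyTorch Geometric provides the same computation in optimized form.

```python
import torch
import torch.nn.functional as F

def gat_layer(x, adj, Theta, a):
    """x: (N, F_in) node features; adj: (N, N) 0/1 adjacency with self-loops;
    Theta: (F_in, F_out) weight matrix; a: (2 * F_out,) attention vector."""
    h = x @ Theta                                # Theta x_i for every node
    f_out = h.shape[1]
    src = (h @ a[:f_out]).unsqueeze(1)           # a^T [Theta x_i || .] term
    dst = (h @ a[f_out:]).unsqueeze(0)           # a^T [. || Theta x_j] term
    e = F.leaky_relu(src + dst)                  # unnormalized attention logits
    e = e.masked_fill(adj == 0, float("-inf"))   # restrict to N(i) plus self
    alpha = torch.softmax(e, dim=1)              # Eq. (2): softmax over neighbors
    return alpha @ h                             # Eq. (1): attention-weighted sum
```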
To exploit the semantic relationships among the heterogeneous task components, we adopt the approach in [10] that introduces heterogeneity to the homogeneous GNN by dedicating an individual message passing function to each edge type, as shown in Fig. 3. Fig. 3: The HetGNN network takes as input graphs of heterogeneous node types (e.g., \(\mathbf{x}_{T},\mathbf{x}_{t},\mathbf{x}_{c}\)). The message passing functions are duplicated for each edge type to update the weights for different relationships. Finally, the scores \(p_{a}\) and \(p_{o}\) are derived from updated current node features \(\mathbf{x}_{T}^{\prime},\mathbf{x}_{t}^{\prime},\mathbf{x}_{c}\). Given a heterogeneous graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), the network aggregates node embeddings by using the message passing functions corresponding to the active edge types, which are determined by the types of the connected nodes. For instance, the edge between current nodes \(\mathbf{v}_{t}\) and goal nodes \(\mathbf{v}_{T}\) belongs to a "current-to-goal" edge type. The HetGNN includes three graph attention convolutional layers to ensure effective learning of the underlying relational information. After the node embeddings are updated by the HetGNN, two Multi-Layer-Perceptron (MLP)-based prediction heads, object selector \(\psi_{o}:\mathcal{V}\rightarrow\mathcal{O}\) and action selector \(\psi_{a}:\mathcal{V}\rightarrow\mathcal{A}\), are connected to \(\mathbf{x}_{t}\) to estimate which action should be performed on which object. 
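The per-edge-type design can be sketched as follows in plain PyTorch; the edge-type names, feature sizes, and linear message functions are illustrative stand-ins for the actual HetGNN layers.

```python
import torch
import torch.nn as nn

class HeteroLayer(nn.Module):
    """One message-passing layer with a separate function per edge type."""

    def __init__(self, dim, edge_types):
        super().__init__()
        self.msg = nn.ModuleDict({et: nn.Linear(dim, dim) for et in edge_types})

    def forward(self, x, edges):
        # x: {"current": (Nc, d), "goal": (Ng, d), "constraint": (Ne, d)}
        # edges: {edge_type: (src_type, dst_type, (2, E) index tensor)}
        out = {k: torch.zeros_like(v) for k, v in x.items()}
        for et, (src, dst, idx) in edges.items():
            m = self.msg[et](x[src])[idx[0]]   # messages from source nodes
            out[dst].index_add_(0, idx[1], m)  # sum-aggregate at destinations
        return {k: torch.relu(v) for k, v in out.items()}
```

Stacking three such layers and attaching two small MLP heads to the current-node features mirrors the coordinator described above.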
### _Low-level Actors_ The low-level actors are responsible for executing the actions decided by the high-level coordinator. If pick-place is selected, we first generate a batch of grasp candidates following the shape completion-based sampling algorithm in [36]. Next, we use the Grasp Stability Predictor (GSP)1, a 3D CNN-based 6-DoF grasp detection algorithm, to select a feasible pose for grasping. After the target object has been successfully grasped, we place the object at its corresponding goal location by checking the current-goal correspondence \(\mathbf{c}\) calculated in IV-A. If push is the more effective action for the target, we plan a direct pushing path while checking collisions using the flexible collision library (FCL) [37]. The robot closes its fingers and follows a straight path, which is divided into multiple short segments of fixed length by intermediate waypoints. We then use the mean square error (MSE) between the object's voxel grid and the goal location to supervise a simplified model predictive control loop. Footnote 1: Note that the acronym GSP refers to the 3D CNN grasping module in [14], defined here for a concise reference. ### _Expert Planner and Training_ To obtain training data, we generated 3,000 RGB-D images of randomly positioned training objects (e.g., toy blocks and cylinders of different sizes) in environments with arbitrary constraints (e.g., bins, shelves). Then, two RGB-D images are randomly sampled as the start and goal configurations for a rearrangement task. For all the rearrangement tasks, we define a pick-place cost of **3** and a push cost of **1** following Assumption 1. The training labels, namely the action selection labels and object selection labels, are automatically annotated by an expert planner built in a fully observable simulator. First, the expert planner examines each object's mesh model and reasons about which action primitive to use. It relies on two criteria: 1) whether the object is graspable and 2) whether a direct pushing path exists between the current and goal location. The binary action selection label is 1 for push if the object is not graspable or a direct pushing path exists, and 0 for pick-place if the object is graspable and no direct pushing path exists. We assume there are no invalid tasks, such as moving an ungraspable object across a discontinuous path (e.g., moving a large plate from the table onto the shelf). Then, based on the action assigned to the objects, the expert planner computes the optimal planning solution analytically using the \(A^{*}\) algorithm, which globally minimizes the predefined operating cost function by computing all possible planning sequences. For an ungraspable object without a direct pushing path, we assign an action cost of 3 because it requires multiple pushing actions. An infinite heuristic cost is associated with a planning sequence if the goal location is blocked by other objects, determined by FCL. The binary object selection label is 1 if the object is the first in a sequence planned by \(A^{*}\) and 0 otherwise. The dataset contains 30,000 pairs of current and goal RGB-D images and labels, which are transformed into heterogeneous graphs using the methods described in Sec. IV-B. During training, we use a binary cross-entropy loss \(\mathcal{L}_{action}\) to supervise the action predictions: \[\mathcal{L}_{action}=-(y\log(p)+(1-y)\log(1-p)). \tag{3}\] The Huber loss \(\mathcal{L}_{object}\) for the object prediction head output \(\hat{y}\) is defined as \[\mathcal{L}_{object}=\begin{cases}\frac{1}{2}(y-\hat{y})^{2}&\text{if}\ \ |y-\hat{y}|<\delta\\ \delta(|y-\hat{y}|-\frac{1}{2}\delta)&\text{otherwise}\end{cases} \tag{4}\] where \(\delta=1.15\). The combined loss \(\mathcal{L}\) is defined as \[\mathcal{L}=\mathcal{L}_{object}+\lambda\mathcal{L}_{action}. \tag{5}\] We empirically found that \(\lambda=0.65\) yields the best performance for our problem. 
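Under the stated values \(\delta=1.15\) and \(\lambda=0.65\), the training objective of Eqs. (3)-(5) corresponds to the following sketch; the tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

huber = torch.nn.HuberLoss(delta=1.15)

def combined_loss(p_action, y_action, y_hat_object, y_object, lam=0.65):
    loss_action = F.binary_cross_entropy(p_action, y_action)  # Eq. (3)
    loss_object = huber(y_hat_object, y_object)               # Eq. (4)
    return loss_object + lam * loss_action                    # Eq. (5)
```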
## V Experiments We experiment in both simulated and real-world settings. These experiments are designed to: 1) demonstrate the effectiveness of our hierarchical system for the adversarial object rearrangement problem; 2) evaluate our HetGNN-based coordinator in various constrained environments and compare it to other baselines; and 3) show the generalizability of our approach to unstructured everyday scenarios. **Evaluation metrics**: Following Definition 1, we define the _success rate_ as \(\frac{\#\text{ of successful rearrangements}}{\#\text{ of total rearrangement problems}}\). If a given rearrangement is not achievable (e.g., lifting a non-graspable object from the ground to the upper shelf), the experiment will be re-initialized. Each test is limited to \(2\times N\) planning steps, where \(N\) is the number of objects. A timeout is also considered a failure. We also consider the _planning cost_, which measures the number of actions taken to rearrange the objects from the start to the goal configuration. Each push costs **1** action and each pick-place costs **3** actions, following Assumption 1. Because pick-place-only approaches cannot work in our settings due to the non-graspable objects, we instead compare our method with the following four baselines: * **Model** is a model-based approach that assumes access to ground truth IDs and mesh models. It randomly selects a target and checks if its corresponding goal location is available. If that location is occupied, it will push the occupying object to an arbitrary free space. Otherwise, it moves the object using the expert action selection algorithm described in Sec. IV-E. * **Plan** is a variant of the expert planner in Sec. IV-E. It combines an optimal planner with a deep learning-based perception module [13]. Instead of using 3D models, this classical approach is based on segmentation masks and plans the entire action sequence with the \(A^{*}\) algorithm, which globally minimizes the cost function. * **GNN** employs a homogeneous Graph Neural Network instead of the HetGNN for the coordinator. The network is trained with the same dataset as ours, except that the heterogeneous structure is not used. All other components are kept the same as in our approach. * **NeRP+Push** builds upon a recent state-of-the-art object rearrangement approach [5]. It learns a k-GNN-based planner that selects a near-optimal policy and uses pick-place for unknown object rearrangement. To adapt to our test environments, we allow NeRP to heuristically push ungraspable objects when the object mask is larger than a threshold. * **Expert** is our expert planner in Sec. IV-E, whose performance is regarded as the upper bound in each scenario. It computes the optimal solution but may fail due to unexpected object dynamics and imperfect low-level actors. ### _Simulation Experiments_ The simulated test environment is in CoppeliaSim 4.0 [38] with Bullet physics engine v2.83. The scene includes a Franka Emika Panda robot arm with the original gripper, different numbers of testing objects, and various environmental constraints. A single-view RGB-D observation is taken with a simulated Kinect camera. **Experiment scenes:** The experiments are depicted in Fig. 5. We first test in the open _Tabletop_ scenario, for which the baseline methods were designed, to verify that our approach is efficient in the simple environment. Each scene contains five to seven objects from Fig. 4 to demonstrate that our approach is generalizable to different numbers of objects (i.e., clutteredness). Each method is tested 51 times, and the success rate and planning cost are compiled in Table I and II, respectively. Fig. 4: The testing objects are drawn from the YCB dataset and differ in size, color, and shape from our basic training objects (e.g., blocks, cylinders). Some objects, such as bowls and cracker boxes, may not be graspable due to their orientation. Then we experiment with increasing the complexity of the environmental constraints: _Shelf_ demonstrates that our approach is able to efficiently solve multi-planar scenarios; _Bins_ is commonly seen in warehouses and introduces more partial occlusions; the additional shelves in _Pantry_ mimic a more realistic scene and show the generalizability to novel constraints, where analytical approaches often face difficulties. The experiments contain six YCB objects, and the results are compiled in Table III and IV. _Model_ relies on random target selection and spends extra pushes when goal locations are occupied. _Plan_'s open-loop planner could not resolve a collision immediately, potentially causing more failures and requiring more actions to complete the task afterward. _GNN_ learns less effectively and makes predictions that are not as accurate as ours (e.g., selecting pick-place while push is feasible). _NeRP+Push_ only pushes when the item is not graspable, since it could not efficiently coordinate different skills. 
The results suggest that our approach is able to generalize to different numbers of adversarial objects and is the most efficient in cluttered scenes, thanks to the HetGNN coordinator and the closed-loop design. Constrained experiments increase environmental discontinuities and clutteredness, necessitating more accurate reasoning about the relationships among task components. The privileged information available to _Model_ helps maintain its performance, while _GNN_ becomes worse since it has no knowledge of the relations between each component. This indicates that the information learned by HetGNN is crucial to efficient planning. _Plan_ depends on accurate object masks to calculate the true trajectory cost, and _NeRP+Push_ only considers objects' center locations. Consequently, their performance suffers from the additional challenge of constrained experiments. In contrast, our HetGNN-based coordinator that employs 3D shape features generalizes better with partial observations. Overall, we achieved an average success rate of 96.73%, surpassing the best-performing baseline by 7.2%, with a planning cost of only 14.50, which is the closest to _Expert_'s result. ### _Real-robot Experiments_ Our real-world experiment consists of a Franka Emika Panda robot arm with FESTO DHAS soft fingers and an Intel RealSense D415 camera that overlooks the workspace. We test each method 11 times in three real-world scenarios: _Bins_, _Shelf_, and _Novel_. _Model_ is excluded from the baseline methods because ground truth mesh models are not available in the real world. Five adversarial objects are first randomly placed in the scene as the goal configuration and then re-initialized to the start configuration. To address the sim-to-real gap of the RGB-D sensor, we fine-tuned the object-matching module with a dataset consisting of 500 real-world images. Fig. 6 depicts a successful rearrangement task in novel scenarios. Due to the perception challenge and more complex object dynamics in the real world, the performance of all the methods declines compared to the simulated results. However, ours drops much less thanks to the learned model that is robust to noise and occlusion. _Plan_ and _NeRP+Push_ sometimes select the wrong action and falsely store objects when the goal locations are available because of inaccurate masks and noise. Collisions with the occluded geometry also become more frequent in the real world, highlighting the importance of our closed-loop system. We summarized the experimental results in Table V and VI. Our approach achieves an average success rate of 87.88% and completes the task with 12.82 actions, indicating that it is the most efficient and generalizable to novel office objects and constraints. Our method is limited by the analytical pushing algorithm, which may rotate large objects unexpectedly and incur additional costs if they collide with other objects. ## VI Conclusion We presented an object rearrangement system that coordinates pick-place and push in challenging scenarios with adversarial objects and environmental constraints. Our approach hierarchically employs a HetGNN coordinator and low-level 3D CNN-based actors to achieve the goal arrangement in an efficient manner. 
The proposed simulation-trained rearrangement system achieved an average success rate of 87.88% and a planning cost of 12.82 in real-world experiments with adversarial objects and environmental constraints. One avenue for future extension is to simultaneously learn the orientations of objects during placement, as we are currently focusing on arranging objects in terms of positions. \begin{table} \begin{tabular}{c c c c c} \hline \hline & Plan & GNN & NeRP+Push & Ours \\ \hline Shelf & 81.82 & 72.73 & **90.91** & **90.91** \\ Bins & 72.73 & 63.64 & 72.73 & **81.82** \\ Novel & 63.64 & 72.73 & 81.82 & **90.91** \\ \hline \hline \end{tabular} \end{table} TABLE V: Success Rate (%) in Real World \begin{table} \begin{tabular}{c c c c c} \hline \hline & Plan & GNN & NeRP+Push & Ours \\ \hline Shelf & 14.62 & 16.67 & 19.56 & **11.81** \\ Bins & 19.13 & 17.88 & 23.83 & **14.34** \\ Novel & 20.20 & 17.37 & 21.21 & **12.31** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Planning Cost (# actions) in Real World Fig. 6: An example of real-world experiments in the _Novel_ scenario (11 actions). The HetGNN-based coordinator predicts the most feasible target and utilizes pick-place and push accordingly while the low-level actors execute the plan in closed-loop.
2309.12417
Advances in developing deep neural networks for finding primary vertices in proton-proton collisions at the LHC
We are studying the use of deep neural networks (DNNs) to identify and locate primary vertices (PVs) in proton-proton collisions at the LHC. Earlier work focused on finding primary vertices in simulated LHCb data using a hybrid approach that started with kernel density estimators (KDEs) derived heuristically from the ensemble of charged track parameters and predicted "target histogram" proxies, from which the actual PV positions are extracted. We have recently demonstrated that using a UNet architecture performs indistinguishably from a "flat" convolutional neural network model. We have developed an "end-to-end" tracks-to-hist DNN that predicts target histograms directly from track parameters using simulated LHCb data that provides better performance (a lower false positive rate for the same high efficiency) than the best KDE-to-hists model studied. This DNN also provides better efficiency than the default heuristic algorithm for the same low false positive rate. "Quantization" of this model, using FP16 rather than FP32 arithmetic, degrades its performance minimally. Reducing the number of UNet channels degrades performance more substantially. We have demonstrated that the KDE-to-hists algorithm developed for LHCb data can be adapted to ATLAS and ACTS data using two variations of the UNet architecture. Within ATLAS/ACTS, these algorithms have been validated against the standard vertex finder algorithm. Both variations produce PV-finding efficiencies similar to that of the standard algorithm and vertex-vertex separation resolutions that are significantly better.
Simon Akar, Mohamed Elashri, Rocky Bala Garg, Elliott Kauffman, Michael Peters, Henry Schreiner, Michael Sokoloff, William Tepe, Lauren Tompkins
2023-09-21T18:34:00Z
http://arxiv.org/abs/2309.12417v2
Advances in developing deep neural networks for finding primary vertices in proton-proton collisions at the LHC ###### Abstract We are studying the use of deep neural networks (DNNs) to identify and locate primary vertices (PVs) in proton-proton collisions at the LHC. Earlier work focused on finding primary vertices in simulated LHCb data using a hybrid approach that started with kernel density estimators (KDEs) derived heuristically from the ensemble of charged track parameters and predicted "target histogram" proxies, from which the actual PV positions are extracted. We have recently demonstrated that using a UNet architecture performs indistinguishably from a "flat" convolutional neural network model. We have developed an "end-to-end" tracks-to-hist DNN that predicts target histograms directly from track parameters using simulated LHCb data that provides better performance (a lower false positive rate for the same high efficiency) than the best KDE-to-hists model studied. This DNN also provides better efficiency than the default heuristic algorithm for the same low false positive rate. "Quantization" of this model, using FP16 rather than FP32 arithmetic, degrades its performance minimally. Reducing the number of UNet channels degrades performance more substantially. We have demonstrated that the KDE-to-hists algorithm developed for LHCb data can be adapted to ATLAS and ACTS data using two variations of the UNet architecture. Within ATLAS/ACTS, these algorithms have been validated against the standard vertex finder algorithm. Both variations produce PV-finding efficiencies similar to that of the standard algorithm and vertex-vertex separation resolutions that are significantly better. ## 1 Introduction Reconstruction of proton-proton collision points, referred to as primary vertices (PVs), is critical for physics analyses conducted by all experiments at the Large Hadron Collider (LHC) and for triggering in LHCb. The precise identification of the PV locations, and their other characteristics, enables the complete reconstruction of final states under investigation. Moreover, it provides crucial information about the collision environment, which is essential for obtaining accurate measurements. The task of PV reconstruction poses a significant challenge across the experiments conducted at the LHC. The LHCb detector has been upgraded for Run 3 of the LHC so that it can process a five-fold increase in its instantaneous luminosity compared to Run 2 and it has removed its hardware-level trigger in favor of a pure software trigger [1]. The average number of visible PVs detected in the vicinity of the beam crossing area has increased from 1.1 to 5.6. In contrast, the ATLAS experiment has observed an average of 40-60 simultaneous collisions (known as pile-up, \(\mu\)) during Run 3 in 2023 and is expected to see 140-200 simultaneous collisions during the coming high-luminosity phase of the LHC. These demanding conditions invite development of new PV reconstruction algorithms to address these challenges. This document presents the implementation and performance of a family of machine learning PV reconstruction algorithms known as PV-Finder for both LHCb and ATLAS. Conceptually, these algorithms compute one-dimensional Kernel Density Estimators (KDEs) that describe where charged track trajectories overlap in the vicinity of the beamline and use these as input feature sets for convolutional neural networks (CNNs) that predict target histograms that are proxies for the PV positions. 
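As a toy picture of such a proxy, the sketch below builds a target histogram by placing a narrow Gaussian peak at each true PV position along the beamline. The bin count and range follow the LHCb binning quoted in the next section, while the peak width and unit-height cap are placeholders of this illustration.

```python
import numpy as np

def target_histogram(pv_z, z_min=-100.0, z_max=300.0, n_bins=4000, sigma=0.1):
    """pv_z: true PV z positions (mm); returns an (n_bins,) proxy histogram."""
    width = (z_max - z_min) / n_bins
    centers = z_min + (np.arange(n_bins) + 0.5) * width
    hist = np.zeros(n_bins)
    for z in pv_z:
        hist += np.exp(-0.5 * ((centers - z) / sigma) ** 2)
    return np.clip(hist, 0.0, 1.0)   # cap overlapping peaks at unit height
```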
LHCb has traditionally used heuristically computed KDEs with its CNNs; in this paper, it reports merging a fully connected neural network for KDE computation with a CNN to produce an "end-to-end" tracks-to-hist deep neural network (DNN) model and compares its performance with that of older models. ATLAS currently uses an analytical approach for KDE computation (referred to as a KDE-to-hist model) and compares the performance with the Adaptive Multi-Vertex Finder (AMVF) algorithm [2], the heuristic PV identification algorithm currently used in ATLAS. ## 2 PV-Finder in LHCb The original LHCb DNN for reconstructing PVs used a single kernel density estimator (KDE) calculated using a heuristic algorithm as the input feature set for each event (each beam crossing) and produced a target histogram from which PV positions were deduced. We refer to this class of algorithms as KDE-to-hist algorithms. The results of the initial proof-of-principle project, and some details of the "toy Monte Carlo" and KDE used for that study, are reported in Ref. [3]. Using track parameters produced by the LHCb Run 3 Vertex Locator (VELO) tracking algorithm [4] leads to significantly better performance [5]. Since then, our research has advanced in several directions. We replaced our original input feature set with four input feature sets: a first KDE based on summed probabilities in voxels projected onto the beam axis, a second KDE based on summed probability-squared values in voxels projected onto the beam axis, plus the \(x-\) and \(y-\) coordinates of the maximum summed probability at each value of \(z\) (along the beam axis). We found that using a modified U-Net architecture [6] in place of our original CNN architecture provided equally good fidelity and trained much more quickly. We also investigated using a fully connected network to calculate a KDE from track parameters (a tracks-to-KDE model) and merging this model with a KDE-to-hist model to produce an "end-to-end" tracks-to-hist neural network. The fidelity of the tracks-to-hist model studied then was inferior to that of the KDE-to-hist models. The results of these studies were presented at CHEP-2021 [7]. A major advance reported at this conference (CHEP-2023) is that we have produced a tracks-to-hist model that produces efficiencies very similar to the best produced by our KDE-to-hist models _and_ produces significantly lower false positive (FP) rates. These results were reported previously at ACAT-2022 [8]. Below, we summarize the most salient features. Brand new for this conference are results using FP16 arithmetic rather than FP32 arithmetic for the tracks-to-hist model and results using smaller U-Net components in the FP16 tracks-to-hist models. The current tracks-to-hist model, whose architecture is shown in Fig. 1, includes a few updates relative to the original version described in Ref. [7]: the tracks-to-KDE part of the model consists of 6 fully connected layers that are initially trained to produce a KDE, and the weights of the first 5 layers are temporarily frozen; a variation with 8 latent feature sets is merged to a KDE-to-hist-like DNN where the classical CNN layers are replaced by a U-Net model. Critically, we also updated the structure of the input data for training and inference. In the earlier approach [7], the target histograms consisted of 4000 bins along the z-direction (beamline), each \(100\,\mathrm{\mu m}\) wide, spanning the active area of the VELO around the interaction point, such that \(z\in[-100,300]\,\mathrm{mm}\). 
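A shape-level sketch of this tracks-to-hist architecture is given below: a per-track stack of six fully connected layers produces 8 latent 100-bin channels, the per-track contributions are summed, and a convolutional stage (standing in for the 5-layer U-Net) maps the 8 channels to the final 100-bin histogram. The hidden-layer widths and the number of track parameters are guesses for illustration only.

```python
import torch
import torch.nn as nn

class TracksToHistSketch(nn.Module):
    def __init__(self, n_track_params=9, n_channels=8, n_bins=100):
        super().__init__()
        dims = [n_track_params, 64, 64, 64, 64, 64, n_channels * n_bins]
        layers = []
        for a, b in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(a, b), nn.LeakyReLU()]
        self.track_net = nn.Sequential(*layers[:-1])  # six fully connected layers
        self.n_channels, self.n_bins = n_channels, n_bins
        # stand-in for the 5-layer U-Net operating on the summed channels
        self.head = nn.Conv1d(n_channels, 1, kernel_size=5, padding=2)

    def forward(self, tracks):                        # tracks: (n_tracks, n_params)
        per_track = self.track_net(tracks)            # (n_tracks, 8 * 100)
        latent = per_track.view(-1, self.n_channels, self.n_bins).sum(0)
        return torch.sigmoid(self.head(latent.unsqueeze(0))).squeeze()  # (100,)
```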
Parameters describing all tracks served as input features. In place of describing the true PVs using a single 4000-bin histogram, we now slice each event into 40 intervals of 100 bins each. For each interval, parameters of tracks whose points of closest approach to the beamline lie within 2.5 mm of the interval edges are used as input features. This approach is motivated by the fact that the shapes of the target histogram are expected to be invariant as a function of the true PV position, and it is easier for a DNN to learn to predict target histograms over a smaller range of bins. In particular, the fully connected layers that calculate the KDE-like latent features used as input features by the U-Net layers predict heuristic KDEs as the ground truth much more effectively when training on 100-bin intervals rather than the full 4000-bin range. Additionally, the depth of the U-Net part of the DNN can be lower when processing a 100-bin feature set rather than a 4000-bin feature set. With an average of \(\sim 5\) PVs per event, most of the bins in both the KDE and target histograms have no significant activity. We expect this will allow us to eventually build a more performant inference engine in the LHCb software stack. The 40 intervals of 100 bins are independent and homogeneous between events. Each interval is treated independently, after which the predicted 4000-bin histogram is stitched back together. As in past studies, an asymmetry parameter between the cost of overestimating contributions to the target histograms and underestimating them [3] is used as a hyperparameter to allow higher efficiency by incurring higher false positive rates. Performance is evaluated using a heuristic algorithm, based on the PV positions along the beam axis, \(z\). Exactly how efficiencies and FP rates are calculated is described in Ref. [7]. The left-hand plot in Fig. 2 shows how the performance of the DNN algorithms has evolved over time. The efficiency is shown on the horizontal axis and the false positive rate per event is shown on the vertical axis. The solid blue circles show the performance of an early KDE-to-hist model described at ACAT-2019 [3]. The green squares show the performances of a KDE-to-hist model described at Connecting-the-Dots in 2020 [5]. Both of the above models were trained using "toy Monte Carlo" with proto-tracking. All subsequent DNN models were trained using the full VELO tracking algorithm [4], leading to significantly better performances (red triangles to be compared to green squares). Figure 1: This diagram illustrates the end-to-end, tracks-to-hist DNN. Each event is now sliced into 40 independent 100-bin intervals. Six fully connected layers populate 8 100-bin channels in the sixth layer, for each track. These contributions are summed and processed by a U-Net model with 5 convolutional layers to construct the final 100-bin histogram. The cyan circles and the yellow squares correspond to the best achieved performances for KDE-to-hist models using either a classical CNN architecture or the U-Net model described at CHEP-2021. The performances of all of the above models were obtained using an "older" matching procedure with a fixed search window of 0.5 mm. The magenta diamonds show the performance of the tracks-to-hist model described above using the matching procedure described in Ref. [7]. The new tracks-to-hist model enables the DNN to simultaneously reach high efficiencies (\(>97\%\)) and low false positive rates (0.03 per event or 0.6% per reconstructed PV). 
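The interval logic just described can be sketched as follows; `predict` stands in for the trained 100-bin network, and the 2.5 mm margin and binning come from the text above.

```python
import numpy as np

Z_MIN, Z_MAX, N_BINS, N_SLICES = -100.0, 300.0, 4000, 40
BIN_W = (Z_MAX - Z_MIN) / N_BINS      # 0.1 mm bins
SLICE_BINS = N_BINS // N_SLICES       # 100 bins per interval

def predict_full_histogram(predict, tracks, track_z, margin=2.5):
    """Run the 100-bin model on each interval and stitch the results."""
    full = np.zeros(N_BINS)
    for s in range(N_SLICES):
        lo = Z_MIN + s * SLICE_BINS * BIN_W
        hi = lo + SLICE_BINS * BIN_W
        mask = (track_z > lo - margin) & (track_z < hi + margin)
        full[s * SLICE_BINS:(s + 1) * SLICE_BINS] = predict(tracks[mask])
    return full
```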
Running an inference engine inside a software stack adds another "knob to turn": throughput versus fidelity. Computing resources are finite, especially in LHCb's first-level software trigger, which processes 30 MHz of beam crossing data, about 40 Tbit/s, in a GPU application [1]. Modern GPUs provide FP16 performance that can be about twice as fast as FP32 arithmetic, so it is interesting to investigate whether using FP16 arithmetic degrades performance significantly. It is similarly interesting to investigate how performance degrades as the size of the convolutional network inside our DNN is reduced. The right-hand plot in Fig. 2 shows the efficiency versus FP rate for four DNN configurations. The magenta diamonds correspond to the default tracks-to-hist configuration. These points are exactly the same as those in the left-hand plot; the ranges of the axes have been modified to focus on the region of interest. The purple "\(\times\)" markers correspond to the same logical configuration, but using FP16 arithmetic rather than FP32. Near 96% efficiency, the FP rate has increased marginally. Near 97% efficiency, the FP rate has increased much more substantially. Reducing the number of U-Net channels from 64 to 32 or 16, while using FP16 arithmetic (the darker and lighter crosses in the plot), additionally increases the FP rate near 96% efficiency by a small amount, but increases it much more significantly near 96.5%. We have begun to code an inference engine to run in LHCb's first-level software trigger. The details of the model to be instantiated will balance the fidelity of the model against throughput. Figure 2: (left) Comparison between the performances of models reported in previous years and the new tracks-to-hist model (magenta diamonds). A cost asymmetry parameter described in Ref. [3] is varied to produce the families of points observed. (right) Comparison between tracks-to-hist models. The magenta diamonds here are the same as in the plot on the left. The other models have U-Net architectures but use FP16 arithmetic rather than FP32. Two of the FP16 models have smaller U-Net components than the FP32 model. NB: the horizontal and vertical scales on the right cover more limited ranges than those on the left. 
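In a PyTorch setting, the FP16 variant amounts to casting the model and inputs to half precision, or running under autocast; the sketch below uses a stand-in model and dummy inputs since the actual network definition is not reproduced here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(9, 256), nn.ReLU(), nn.Linear(256, 100))  # stand-in
tracks = torch.randn(50, 9)   # dummy track parameters

if torch.cuda.is_available():
    model_fp16 = model.half().cuda()              # cast all weights to FP16
    with torch.no_grad():
        hist = model_fp16(tracks.half().cuda())   # pure FP16 inference
    # alternative: mixed precision, FP16 only for compute-heavy ops
    with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
        hist = model.cuda()(tracks.cuda())
```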
## 3 PV-Finder in ATLAS The ATLAS experiment at the LHC is a versatile particle detector designed with a symmetric cylindrical geometry and near-complete coverage of \(4\pi\) in solid angle [9]. It has a multi-layer structure with many sub-detector systems, including an inner tracking detector, superconducting magnets, electromagnetic and hadronic calorimeters, and a muon spectrometer. An extensive software suite [10] facilitates its various functions, such as data reconstruction and analysis, detector operations, and the trigger and data acquisition systems. The input dataset used for studying PV-Finder in ATLAS has been generated using POWHEG BOX[v2][11] interfaced with PYTHIA[8.230][12] and processed through the ATLAS detector simulation framework [10], using the GEANT4 toolkit [13]. The hard-scatter (HS) process involves the production of semi-leptonically decaying top quark pairs (\(t\bar{t}\)) from proton-proton collisions at a center-of-mass energy of 13 TeV, overlaid with simulated minimum-bias events with an average pile-up of 60. ### PV-Finder algorithm and model architecture The flowchart representing the work-flow of the PV-Finder algorithm for ATLAS is shown in Figure 3. Figure 3: Flowchart representing the work-flow of the PV-Finder algorithm from left to right. More details about the architecture can be found in the ATLAS PubNote [14]. Truth-matched reconstructed tracks passing tight quality selection cuts [15] and \(\mathrm{p_{T}}>500\) MeV are used for the preparation of input features for the neural network. A track's signed radial and longitudinal impact parameters, \(d_{0}\) and \(z_{0}\), measured at the point of closest approach (POCA) to the beamline, and their uncertainties, \(\sigma(d_{0})\) and \(\sigma(z_{0})\), are used as input to generate KDEs. Each KDE feature is a one-dimensional binned histogram with 12,000 bins in \(z\in[-240,240]\) mm, corresponding to a bin-size of 40 \(\mu\)m. To compute these features, each track is modeled as a correlated radial and longitudinal Gaussian probability distribution \(\mathbb{P}(d,z)\) centred at \((d_{0},z_{0})\), defined as follows: \[\mathbb{P}(d,z)=\frac{1}{2\pi\sqrt{|\Sigma|}}\exp\left(-\frac{1}{2}\left(\begin{array}{c}d-d_{0}\\ z-z_{0}\end{array}\right)^{T}\Sigma^{-1}\left(\begin{array}{c}d-d_{0}\\ z-z_{0}\end{array}\right)\right) \tag{1}\] where \(d\) and \(z\) are coordinates in the radial and longitudinal directions and \(\Sigma=\left(\begin{array}{cc}\sigma^{2}(d_{0})&\sigma(d_{0},z_{0})\\ \sigma(d_{0},z_{0})&\sigma^{2}(z_{0})\end{array}\right)\) is the covariance matrix. The sum of probabilities from all the contributing tracks is considered in each \(z\)-bin, and four KDE features are constructed: KDE-A (sum of track probability values), KDE-B (sum of the squares of track probability values), and XMax (YMax) (location of the maximum summed track probability in \(x\) (\(y\)) (mm)). An example illustrating these four features for a random event is shown in Fig. 4. The vertical grey lines in the upper plot mark the locations of true primary vertices, while the horizontal grey line in the lower plot denotes the position of the beam spot in the radial direction. A restricted range of the luminous region is shown so that details can be seen. To train the neural network, a one-dimensional target truth histogram, with the same binning as the input features and calculated by considering Gaussian probabilities around truth vertex locations, is also provided as input along with the four KDE features. A CNN is trained on these features, which then outputs a distribution with approximately Gaussian peaks centered at the predicted locations of PVs. An algorithm then takes this predicted distribution and identifies the candidate PV locations on the \(z\)-axis by finding the local maxima. Two NN architectures have been considered for these studies: the UNet architecture is inspired by the original architecture developed for biomedical image segmentation [6], while the UNet++ architecture is a variation of UNet with dense skip connections. 
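An illustrative computation of the KDE-A and KDE-B features follows: each track's Gaussian from Eq. (1) is evaluated along the beamline on the z grid, and the per-bin probabilities are summed (KDE-A) or squared and summed (KDE-B). Evaluating at d = 0, and the variable names, are assumptions of this sketch rather than details taken from the text.

```python
import numpy as np

def kde_features(d0, z0, cov, z_min=-240.0, z_max=240.0, n_bins=12000):
    """d0, z0: (T,) track impact parameters; cov: (T, 2, 2) covariances."""
    width = (z_max - z_min) / n_bins              # 40 micron bins
    z = z_min + (np.arange(n_bins) + 0.5) * width
    kde_a = np.zeros(n_bins)
    kde_b = np.zeros(n_bins)
    for i in range(len(d0)):
        inv = np.linalg.inv(cov[i])
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov[i])))
        # displacement (d - d0, z - z0) evaluated on the beamline, d = 0
        r = np.stack([np.full(n_bins, -d0[i]), z - z0[i]])
        p = norm * np.exp(-0.5 * np.einsum("ib,ij,jb->b", r, inv, r))
        kde_a += p
        kde_b += p ** 2
    return kde_a, kde_b
```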
### Performance The PV-Finder algorithm's performance for the UNet and UNet++ architectures has been studied, and a comparative analysis is conducted with the AMVF algorithm using an independent test data sample. Figure 5 showcases an example of two adjacent vertices accurately located by the PV-Finder algorithm. To quantitatively evaluate the performance of PV-Finder, vertex classification is performed, and efficiency and false positive rates are calculated. The classification assigns vertices into distinct categories, namely clean, merged, split, and fake, based on the distance between the center of a predicted vertex and the \(z\)-location of truth vertices. The classification is illustrated in Figure 6 and demonstrated in Figure 7 for the three approaches. The truth and reconstructed primary vertices are associated based on a vertex-vertex resolution, \(\sigma_{\text{vtx-vtx}}\), which is obtained by computing the \(z\)-difference between pairs of nearby reconstructed vertices and fitting the distribution with the fit function \(y=\frac{a}{1+\exp\left(b(R_{cc}-|x|)\right)}+c\), where \(a,b,c\) are free parameters, and \(R_{cc}\) is the cluster-cluster resolution, referred to as \(\sigma_{\text{vtx-vtx}}\). The vertex-vertex resolution for PV-Finder UNet, PV-Finder UNet++, and AMVF is presented in Figure 8 and Table 1. The vertex finding efficiency is defined as the number of truth vertices assigned to reconstructed vertices as "clean" and "merged" divided by the total number of reconstructable truth vertices, while the false positive rate is defined as the average number of predicted vertices not matched to any truth vertex. Figure 9 shows the vertex finding efficiency as a function of the number of reconstructed tracks associated to a truth vertex, and Table 1 shows the average efficiency and false positive rates obtained for the three cases. ## 4 Conclusion The PV-Finder family of algorithms has been studied by both the LHCb and ATLAS experiments. LHCb has demonstrated the performance of the end-to-end tracks-to-hist approach for several configurations, including those that use FP16 arithmetic rather than FP32. ATLAS has demonstrated that a hybrid KDE-to-hist approach produces efficiencies comparable to the ATLAS AMVF algorithm while also achieving significantly improved resolution. These enhanced efficiency and resolution metrics hold significant importance, especially considering the future High Luminosity LHC program. The results are promising and motivate further studies and refinement of the PV-Finder algorithms across experiments. [Copyright 2023 CERN for the benefit of the ATLAS and LHCb Collaborations. CC-BY-4.0 license]
2301.13659
Spyker: High-performance Library for Spiking Deep Neural Networks
Spiking neural networks (SNNs) have been recently brought to light due to their promising capabilities. SNNs simulate the brain with higher biological plausibility compared to previous generations of neural networks. Learning with fewer samples and consuming less power are among the key features of these networks. However, the theoretical advantages of SNNs have not been seen in practice due to the slowness of simulation tools and the impracticality of the proposed network structures. In this work, we implement a high-performance library named Spyker using C++/CUDA from scratch that outperforms its predecessor. Several SNNs are implemented in this work with different learning rules (spike-timing-dependent plasticity and reinforcement learning) using Spyker that achieve significantly better runtimes, to prove the practicality of the library in the simulation of large-scale networks. To our knowledge, no such tools have been developed to simulate large-scale spiking neural networks with high performance using a modular structure. Furthermore, a comparison of the represented stimuli extracted from Spyker to recorded electrophysiology data is performed to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of the brain functions. The aim of this library is to take a significant step toward uncovering the true potential of the brain computations using SNNs.
Shahriar Rezghi Shirsavar, Mohammad-Reza A. Dehaqani
2023-01-31T14:25:03Z
http://arxiv.org/abs/2301.13659v1
# Spyker: High-performance Library for Spiking Deep Neural Networks ###### Abstract Spiking neural networks (SNNs) have been recently brought to light due to their promising capabilities. SNNs simulate the brain with higher biological plausibility compared to previous generations of neural networks. Learning with fewer samples and consuming less power are among the key features of these networks. However, the theoretical advantages of SNNs have not been seen in practice due to the slowness of simulation tools and the impracticality of the proposed network structures. In this work, we implement a high-performance library named Spyker using C++/CUDA from scratch that outperforms its predecessor. Several SNNs are implemented in this work with different learning rules (spike-timing-dependent plasticity and reinforcement learning) using Spyker that achieve significantly better runtimes, to prove the practicality of the library in the simulation of large-scale networks. To our knowledge, no such tools have been developed to simulate large-scale spiking neural networks with high performance using a modular structure. Furthermore, a comparison of the represented stimuli extracted from Spyker to recorded electrophysiology data is performed to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of the brain functions. The aim of this library is to take a significant step toward uncovering the true potential of the brain computations using SNNs. Spiking Neural Network, Learning Rules, C++/CUDA, Modular Structure, Biological Plausibility ## I Introduction The human brain can operate with amazing robustness and energy efficiency. Artificial neural networks (ANNs) aim at modeling the brain, and three generations of these networks have been developed. Each generation of ANNs improves the quality of the modeling of the brain compared to the last. The first generation of ANNs makes use of the McCulloch-Pitts neurons [1]. Although these neurons are inspired by biological neurons, time dynamics are not considered in this model, and the learning rules proposed for them lack power and biological plausibility. These neurons were used in multi-layer perceptrons (MLPs) [2] and Hopfield [3] networks. The second generation of ANNs uses a continuous activation function (ReLU [4] and sigmoid [5], for example) instead of thresholding, which makes these networks suitable for processing analog signals. They have attracted the attention of researchers in recent years and were able to reach high accuracies [6, 7] (even surpassing humans) and win different challenges [8]. Despite the success of DNNs, there are structural differences between these networks and the human brain. Lack of temporal dynamics, using analog signals for network propagation and activation functions, learning rules without biological roots, and the need for large amounts of data [9] and energy [10] to achieve acceptable results are among these differences. The third generation of neural networks is spiking neural networks (SNNs). The neural models used in these networks simulate biological neurons more accurately, and the coding mechanisms used in these networks are found in neural communications. Furthermore, the learning rules used in these networks have been discovered in the brain [11, 12, 13]. 
Having lower energy consumption, learning with fewer samples, and solving more complicated tasks due to time dynamics (several electrophysiological studies emphasize the role of temporal dynamics in neural coding [14, 15]) are some of the advantages of SNNs compared to the second generation of ANNs. SNNs can be used to solve machine learning tasks, study and explore brain functionality, and run on specialized hardware with low power consumption. The research being done on these networks aims to address the disadvantages of DNNs with more realistic modeling of the brain functionality. Several high-performance, well-established frameworks like PyTorch [16], TensorFlow [17], and MXNet [18] have been developed for DNNs in recent years. These libraries have enabled DNNs to achieve new highs in solving machine learning tasks. SNNs are not yet comparable to DNNs due to the lack of fast simulation tools. There have been some attempts, like SpykeTorch [19] and BindsNet [20]. SpykeTorch, written on top of the PyTorch framework, is a simulator for spiking deep neural networks (SDNNs). However, it has a slow runtime, and training even simple networks can take up to days to complete. In order to fill this need, we have developed Spyker. To our knowledge, Spyker is the first toolbox that simulates large-scale networks with high performance, is easy to use, can be used from multiple languages, and integrates with other commonly used tools. Spyker is a C++/CUDA library written from scratch with both C++ and Python interfaces and support for dense and sparse structures. Although Spyker is a stand-alone library, it has a highly flexible API and can work with PyTorch tensors and Numpy arrays. Figure 1 shows an overview of the library. In order to increase performance, small-sized integers are used alongside floating-point numbers. It also uses highly optimized low-level back-end libraries such as OneDNN and cuDNN to speed up heavy computations such as convolutions and matrix multiplications. Spyker can be compiled on various CPUs to be optimized locally and take advantage of native CPU-specific instructions. Spiking neural networks are made of different building blocks (see [21] for more details). The first block is the modeling of the biological neurons. Some examples of this are leaky integrate-and-fire [22], the spike-response model [23], and the Izhikevich model [24]. Another building block is neural coding, which can be rate coding [25], temporal coding, phase coding and synchrony coding [26], or other coding schemes. The final building block is the learning mechanism. Examples of these mechanisms are STDP [27, 28], R-STDP [29], backpropagation [30], and conversion from ANNs to SNNs [31]. Spyker has a modular implementation of these three blocks that enables its users to build SNNs. Spyker provides SNN functionality through a high-performance, easy-to-use interface under an open-source, permissive license. It can run on CPU and CUDA devices and has both dense and sparse interfaces. The library introduces new features and fixes most of the shortcomings of its predecessor. The improvements include adding batch processing, strided convolutions, internal padding for convolutions, fully connected layers, and the rate coding mechanism. Compared to its predecessor, the interface of the library is simpler, closer to the current API of deep learning libraries, and more straightforward to use.
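As a quick taste of this container-agnostic interface, the following minimal sketch passes a NumPy array through an LoG filter layer; the argument layout follows Listing 1 later in the paper, while the input shape is arbitrary and the omission of the device argument (assuming a CPU default) is an assumption.

```python
# A minimal sketch of Spyker's container-agnostic API. The LoG argument
# layout mirrors Listing 1; the input shape and the default device are assumptions.
import numpy as np
import spyker

image = np.random.rand(1, 1, 28, 28).astype(np.float32)  # B x C x H x W
log = spyker.LoG(3, [0.471, 1.099, 2.042], pad=3)        # three LoG filters
enhanced = log(image)  # NumPy in, NumPy out; a PyTorch tensor would work the same way
```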
In this work, several successful network structures are implemented using this library to prove its operability, and its runtime is compared to SpykeTorch; the results indicate that Spyker can run up to eight times faster. The proposed work is able to reduce the gap between SNNs and DNNs and bring us a step closer to uncovering the true potential of spiking neural networks. We start with a description of the dimensionality of the input arrays and how the spike trains are implemented in the library. Afterward, we provide an explanation of the different building blocks of SNNs and how they are implemented in Spyker and modeled in the interface. Then, we implement network structures that have been successful in the literature to prove the library's operability, and we compare the performance of the library to its predecessor on these networks. Furthermore, a comparison of the represented stimuli extracted from Spyker to recorded electrophysiology data is performed to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of the brain functions. Finally, we demonstrate an example usage of the library and discuss the impacts of this work and how it can be further improved.

## II Methods

The interface of Spyker can be better explained when the classes and methods of the interface are grouped by the building blocks of SNNs. The categories are feature enhancement, neural coding, neural model, and learning. In this section, the structure of the input to the network is explained. Afterward, the sparse and the dense interfaces are compared. Finally, the building blocks of the library are discussed in detail.

### _Network Input_

Arrays passed through convolutional neural networks that process images are often four-dimensional arrays composed of batch size (B or N), number of channels (C), image height (H), and image width (W). The order can either be BCHW or BHWC (or NCHW or NHWC). SNNs have temporal dynamics, and these are implemented in Spyker as a dimension that represents time steps. The library implements five-dimensional arrays with BTCHW order (T being the time steps). Since DNNs process analog signals, the data types used in these networks are (usually four-byte) floating-point numbers. This data type can be computationally expensive compared to a small-sized integer type and takes up more space in memory. Since SNNs process binary signals, Spyker can optionally use eight-bit (or wider) integers alongside floating-point numbers to improve performance further.

### _Dense vs Sparse interface_

The dense interface of Spyker uses the fully allocated memory buffers that are used in neural network computations. However, the sparse interface only needs to hold the indices of the spikes. Conversions between the dense and sparse interfaces are provided in the library. The sparse interface has some advantages compared to the dense interface. In the dense interface, the time consumed by each operation is a function of the size of each of the five dimensions. However, in the sparse interface, it depends on the number of spikes. This means both the memory and the time consumed will be greatly reduced when processing sparser signals. Furthermore, since neurons fire at most once when using rank order coding, increasing the number of time steps has a smaller effect in the sparse interface compared to the dense interface.
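To make the dense layout concrete, the sketch below builds the kind of fully allocated BTCHW buffer the dense interface works with; the shapes are arbitrary, and the use of NumPy (rather than Spyker's built-in containers or PyTorch tensors) is just one of the supported options.

```python
# A dense BTCHW spike buffer: 2 samples, 15 time steps, 6 channels, 28x28 pixels.
# Eight-bit integers keep the memory footprint low, as described above.
import numpy as np

spikes = np.zeros((2, 15, 6, 28, 28), dtype=np.uint8)  # B x T x C x H x W
print(spikes.nbytes)  # 141120 bytes; a float32 buffer would take four times as much
```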
### _Feature Enhancement_

A transformation can be used to enhance features of the input signal (image) before the neural coding process [32, 33, 34]. This results in highlighted features having higher intensities and appearing in earlier time steps, meaning more excitation. Feature enhancement is done here through filtering the input. Various filters are supported in Spyker, and they are introduced in the following subsections.

#### Difference of Gaussian Filter

The first filter is the Difference of Gaussian (DoG). This filter increases the intensities of edges and other details in the image (see Figure 2 for an example) [35]. It approximates the center-surround properties of the ganglion cells of the retina [36] (see also [37, 38]). This operation is implemented as spyker.DoG(size, filters, pad, device), where size is the width and height of the filter, filters is a list of DoG filter descriptions (each description takes in two standard deviations), pad is the size of the padding of the image, and device is the device the filter will run on (CPU, GPU, or others).

#### Gabor Filter

The next filter is the Gabor filter, which detects the presence of specific frequency content in a specific direction in the image. Research indicates [39] that the Gabor filter is used in the human visual cortex. The Gabor filter is implemented as spyker.Gabor(size, filters, pad, device). The parameters of this class are the same as the DoG class, but the filters are Gabor filter descriptions, and each description takes in sigma, theta, gamma, lambda, and psi.

#### Laplacian of Gaussian Filter

The Laplacian of Gaussian (LoG) layer is also implemented in Spyker, and it is approximated using two DoG filters. An LoG filter with standard deviation \(\sigma\) can be approximated using two DoG filters with (\(\sigma\sqrt{2}\), \(\sigma/\sqrt{2}\)) and (\(\sigma/\sqrt{2}\), \(\sigma\sqrt{2}\)) standard deviations. This filter exists in Spyker as spyker.LoG(size, stds, pad, device), where stds is a list of standard deviations needed to describe multiple LoG filters.

#### Shape of the Filters

The previously explained filters have kernel size \(K_{c}\times K_{h}\times K_{w}\), which are square kernels (\(K_{h}=K_{w}\)). The input can have \(B\times C_{i}\times H_{i}\times W_{i}\) shape, which corresponds to the batch, channels, height, and width of the input, respectively. The output will have \(B\times C_{o}\times H_{o}\times W_{o}\) shape where:

\[\begin{split} C_{o}&=C_{i}\times K_{c}\\ H_{o}&=H_{i}+2\times P_{h}-K_{h}+1\\ W_{o}&=W_{i}+2\times P_{w}-K_{w}+1\end{split} \tag{1}\]

and \(P_{h}\) and \(P_{w}\) are the height and width padding of the filter. The \(K_{c}\) filters are applied to each channel separately.

#### Zero-phase Component Analysis

The final implemented layer is zero-phase component analysis (ZCA) whitening. It has been suggested [34] that this transformation can improve the accuracy of SNNs on real-world images. Spyker implements an efficient version of ZCA whitening by taking advantage of routines from highly optimized linear algebra libraries (BLAS and LAPACK) that operate on symmetric matrices. This layer is implemented as the spyker.ZCA class, which has a fit(array, epsilon) method and a call operator.
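Before moving on, a quick numeric check of the shape formula in (1). The values are illustrative: three LoG standard deviations give \(K_{c}=6\) kernels (each LoG is approximated by two DoG filters, as described above), and a window size of 7 with padding 3 preserves the spatial dimensions.

```python
# Worked example of equation (1) for an LoG layer with 3 standard deviations
# (K_c = 6, since each LoG is built from two DoG filters) on a 1-channel image.
C_i, H_i, W_i = 1, 28, 28
K_c, K_h, K_w = 6, 7, 7
P_h = P_w = 3

C_o = C_i * K_c                # 6 output channels
H_o = H_i + 2 * P_h - K_h + 1  # 28 + 6 - 7 + 1 = 28
W_o = W_i + 2 * P_w - K_w + 1  # 28
print(C_o, H_o, W_o)           # 6 28 28: padding keeps the height and width
```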
### _Neural Coding_

SNNs process spike trains, but the input consists of analog values (for example, images are made of pixel values). In order to make these inputs suitable for the network, a conversion scheme is needed. The mapping from stimuli to neural responses is called neural coding [40]. The coding schemes implemented in Spyker are explained in the following subsections.

#### Rate Coding

Out of the several coding schemes suggested, rate coding, in which the firing rate of the neurons represents the information, is widely used. In this scheme, the rate of firing depends on the intensity of the input value (higher intensity corresponds to faster firing) [25]. The exact time of firing in each neuron is stochastic in nature and may be modeled with a Poisson distribution. A lengthy window of time is required to transmit the information in this coding, and the spikes are not quite sparse.

#### Temporal Coding

Another popular coding scheme is temporal coding [41]. Recordings in the primary visual cortex show [42] that the response latency decreases with the stimulus contrast. This coding scheme can convey information through the timings of the spikes. Multiple forms of this scheme have been proposed, including rank order coding [43]. Instead of computing the exact timing of each spike, the timings are computed relative to one another in rank order coding. This relative (instead of exact) timing can increase invariance to changes in the input intensity and contrast [43]. It has been suggested [44] that temporal coding might be more efficient in some situations.

#### Coding in Spyker

Spyker supports rank order and rate coding. The concept of time is implemented in this library with spikes occurring in time steps. Rank order coding maps higher intensities to earlier time steps of a neuron firing. In order to calculate the time step a neuron will fire in, Spyker sorts the intensity values by default. This calculates the rank order between spikes, and the spikes will be distributed evenly among the time steps. The sorting operation is computationally expensive (especially on GPUs), and it can optionally be disabled to improve runtime (although accuracy might be affected). Since processing time steps sequentially is inefficient and time-consuming, Spyker processes all the time steps at once. To this end, when a neuron fires in time step \(t_{i}\), it will also fire at time steps \(t_{i+1}\), \(t_{i+2}\),..., \(t_{n}\), where \(n\) is the number of time steps. An example of this cumulative structure can be seen in Figure 2.

Fig. 1: Overview of the Spyker library. The Spyker API supports PyTorch tensors and Numpy arrays as well as a built-in data wrapper. The outputs of Spyker operations have the same container type as the inputs. The functionality of Spyker can be grouped into the subcategories shown in the figure.

Fig. 2: The figure shows a black and white image being filtered by DoG and Gabor filters. The theta parameter of the Gabor filter is set to -15 degrees. Then the images are coded using rank order coding into four time steps. Spikes are shown with white color on a black background through time steps. Spikes carry on from the previous to the current time step (cumulative structure).
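The following sketch shows the coding step in isolation; the call pattern follows Listing 1 later in the paper, and the assumption that spyker.code defaults to rank order coding with this signature is taken from that listing.

```python
# Minimal sketch of rank order coding into 15 time steps; the input shape is
# an assumption, and spyker.code is used as in Listing 1.
import numpy as np
import spyker

intensity = np.random.rand(1, 6, 28, 28).astype(np.float32)  # B x C x H x W
spikes = spyker.code(intensity, 15)  # adds a time dimension: B x T x C x H x W
# Cumulative structure: a neuron that fires at step t stays active in all
# later steps, so the number of active neurons can only grow over time.
```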
### _Neural Model_

Once the input is filtered and coded, it gets processed by the network. The network is built using fully connected, convolution, integrate-and-fire (IF) activation, pooling, and padding layers. These operations are explained in the following subsections.

#### Convolution

The integrate-and-fire mechanism is implemented by combining convolution and the IF activation layer. The internal potentials of the neurons are computed using the convolution operation, and the IF activation operation produces spikes where neurons have a potential higher than a specified threshold. Multiple layers can be assembled and stacked on top of one another to create deeper structures. The convolution layer has a kernel of shape \(C_{o}\times C_{i}\times K_{h}\times K_{w}\). The synaptic weights are initialized randomly with a normal distribution. It performs two-dimensional convolution with support for padding and stride. The input has \(B\times T\times C_{i}\times H_{i}\times W_{i}\) shape, which corresponds to the batch, time steps, channels, height, and width of the input, respectively. The output has \(B\times T\times C_{o}\times H_{o}\times W_{o}\) shape where:

\[\begin{split} H_{o}&=\lfloor\frac{H_{i}+2\times P_{h}-K_{h}}{S_{h}}\rfloor+1\\ W_{o}&=\lfloor\frac{W_{i}+2\times P_{w}-K_{w}}{S_{w}}\rfloor+1\end{split} \tag{2}\]

and \(P_{h}\), \(P_{w}\), \(S_{h}\), \(S_{w}\) are the height and width of the convolution padding and stride. Padding increases the size of the two-dimensional input before the convolution operation by expanding the edges of the input and filling in the new space with a constant value (usually zero). Stride is the number of steps the convolution window takes when it moves over the image. The outputs of the convolution layers are the internal potentials of neurons, which need to be passed through an IF activation layer to become output spike trains. This layer is implemented with the spyker.Conv(insize, outsize, kernel, stride, pad, mean, std, device) class in Spyker.

#### Fully Connected

The fully connected layer is combined with the IF activation to model the IF neurons, much like the convolution layers. This layer has a kernel with \(I\times O\) shape. The synaptic weights are initialized randomly with a normal distribution. The input has \(B\times T\times I\) shape, which corresponds to the batch, time steps, and input size, respectively. The output has \(B\times T\times O\) shape. The fully connected layer is represented by spyker.FC(insize, outsize, mean, std, device) in the library.

#### Pooling

The pooling layer performs a two-dimensional max pooling operation with a window size of \(L_{h}\times L_{w}\), a stride of \(S_{h}\times S_{w}\), and a padding of \(P_{h}\times P_{w}\). The input has \(B\times T\times C_{i}\times H_{i}\times W_{i}\) shape and the output has \(B\times T\times C_{o}\times H_{o}\times W_{o}\) shape where:

\[\begin{split} H_{o}&=\lfloor\frac{H_{i}+2\times P_{h}-L_{h}}{S_{h}}\rfloor+1\\ W_{o}&=\lfloor\frac{W_{i}+2\times P_{w}-L_{w}}{S_{w}}\rfloor+1\end{split} \tag{3}\]

The interface of Spyker has the spyker.pool(array, kernel, stride, pad, rates) function to run the pooling operation on the input given the kernel, stride, and padding size. The rates argument is the firing rate of the neurons when rate coding is used. The pooling operation selects neurons that fire earlier when rank order coding is used, and selects neurons that have a higher firing rate when rate coding is used.
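Putting these layers together, a single IF stage can be sketched as below; the layer sizes and threshold are borrowed from Listing 2 later in the paper, the call pattern mirrors the inference code in Listing 5, and the omission of the device argument is an assumption.

```python
# A minimal IF forward pass assembled from the layers above (a sketch, not the
# paper's exact network): convolution computes potentials, fire thresholds them,
# and pooling downsamples the resulting spike trains.
import spyker

conv = spyker.Conv(6, 100, 5, pad=2, mean=.5, std=.02)  # 6 -> 100 channels, 5x5 kernel

def forward(spikes):                     # spikes: B x T x 6 x H x W
    potential = conv(spikes)             # internal potentials: B x T x 100 x H x W
    active = spyker.fire(potential, 16)  # IF activation with threshold 16
    return spyker.pool(active, 2)        # 2x2 max pooling in each time step
```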
### _Learning_

Learning in the brain happens when the strength of the connections between its neurons changes, and this change in strength is named synaptic plasticity [45]. Learning methods that utilize synaptic plasticity have been developed for SNNs [27, 28, 29].

#### Spike-timing-dependent Plasticity

One widely recognized synaptic plasticity learning rule is spike-timing-dependent plasticity (STDP) [27, 28]. The STDP learning rule operates by adjusting synaptic weights based on the timing of the spikes. A pre-synaptic neuron firing before (after) the postsynaptic neuron results in a strengthened (weakened) connection. STDP allows the neurons to extract and learn frequent features in the input [46]. The STDP layer changes the synaptic weights; with stabilization enabled:

\[\Delta W_{i,j}=\begin{cases}A_{k}^{+}(W_{i,j}-L_{k})(U_{k}-W_{i,j}),&t_{j}\leq t_{i}\\ A_{k}^{-}(W_{i,j}-L_{k})(U_{k}-W_{i,j}),&t_{j}>t_{i}\end{cases} \tag{4}\]

where \(A_{k}^{+}\), \(A_{k}^{-}\), \(L_{k}\), \(U_{k}\) are the positive learning rate, negative learning rate, lower bound, and upper bound of the \(k\)th configuration, respectively. If stabilization is not set, then the formula becomes:

\[\Delta W_{i,j}=\begin{cases}A_{k}^{+},&t_{j}\leq t_{i}\\ A_{k}^{-},&t_{j}>t_{i}\end{cases} \tag{5}\]

then the updated weights are clipped to their bounds:

\[W_{i,j}^{+}=\max(L_{k},\min(U_{k},W_{i,j}+\Delta W_{i,j})) \tag{6}\]

The input, the winner neurons selected by the winner-take-all (WTA) mechanism, and the output are passed to a method of the fully connected or convolution layer, and the STDP learning rule is applied. Convolution or fully connected layers in Spyker can have multiple STDP configurations (different learning rates, weight clipping, enabling/disabling the stabilizer), implemented as spyker.STDPConfig(positive, negative, stabilize, lower, upper). Each winner neuron can be mapped to an STDP configuration, and that neuron will be updated using the learning rates and bounds of the selected configuration. SpykeTorch creates an STDP object for each configuration, and mapping winner neurons to different configurations is done by the user. Compared to SpykeTorch, Spyker provides a more flexible and easier-to-use API for weight updating and enables batch updating, which improves performance. Samples are processed in mini-batches, which increases performance drastically (see the results section), and the batch update rule does not differ from single-sample processing.

#### Reward-modulated STDP

Another approach is using a reinforcement learning (RL) rule. One method based on RL is reward-modulated STDP (R-STDP) [29]. R-STDP adjusts STDP such that neurons that respond correctly are rewarded and those that respond incorrectly are punished. It has been suggested [33] that when the input has non-diagnostic frequent features that are less effective in decision-making, R-STDP is able to discard these features and improve the decision-making process. Since convolution and fully connected layers accept STDP configurations as input, R-STDP can be implemented by passing two configurations to a layer (one for rewarding and one for punishing) and mapping each winner neuron to a configuration based on the data labels. If one formulates this, \(\Delta W_{i,j}\) will be:

\[\Delta W_{i,j}=\begin{cases}\begin{cases}A_{r}^{+}(W_{i,j}-L_{r})(U_{r}-W_{i,j}),&t_{pre}<t_{post}\\ A_{r}^{-}(W_{i,j}-L_{r})(U_{r}-W_{i,j}),&t_{pre}\geq t_{post}\end{cases}&\text{if reward}\\ \begin{cases}A_{p}^{-}(W_{i,j}-L_{p})(U_{p}-W_{i,j}),&t_{pre}<t_{post}\\ A_{p}^{+}(W_{i,j}-L_{p})(U_{p}-W_{i,j}),&t_{pre}\geq t_{post}\end{cases}&\text{if punish}\end{cases} \tag{7}\]
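A minimal sketch of this two-configuration setup follows; the layer sizes and learning-rate values are assumptions, the constructor layout follows the spyker.STDPConfig description above, and the sign reversal in the punishing configuration is one plausible reading of equation (7).

```python
# Sketch of R-STDP via two STDP configurations attached to one layer, as
# described above. All numeric values here are illustrative assumptions.
import spyker

fc = spyker.FC(200, 10, mean=.5, std=.02)   # sizes are assumptions
reward = spyker.STDPConfig(.0004, -.0003)   # applied when the response is correct
punish = spyker.STDPConfig(-.0004, .0003)   # reversed rates when it is wrong
fc.stdpconfig = [reward, punish]
# During training, each winner neuron is mapped to configuration 0 or 1
# depending on whether its neural map matches the data label.
```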
#### Winner-take-all and Lateral Inhibition

When a neuron fires at a specific location, the lateral inhibition [47, 48] operation inhibits neurons belonging to other neural maps from firing at that location. Lateral inhibition for the convolution operation can be used with the spyker.inhibit(array, threshold, inplace) function. The winner neurons on which STDP weight updating will be performed are selected by the winner-take-all [49, 50] operation. WTA selects neurons that fire earlier, and if the firing times of neurons are the same, then the one that has a higher internal potential will be selected. This operation is implemented with spyker.fcwta(array, radius, count, threshold) for fully connected and spyker.convwta(array, radius, count, threshold) for convolution operations.

## III Results

In this section, we test the performance of the library against the SpykeTorch library. Afterward, a comparison of the represented stimuli extracted from Spyker to recorded electrophysiology data is conducted to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of brain functions.

### _Library Performance_

In this section, we compare the performance of the library to SpykeTorch on two networks that classify the MNIST dataset.

#### R-STDP Network

The first network is the Mozafari et al. network [33], which has three convolutional layers. The first layer is trained two times with STDP, the second layer four times with STDP, and the third layer 680 times with R-STDP on the training set, with the test accuracy computed at each iteration of training the third layer. We made a small change to the structure of the network (named Alt for alternative): we removed the input padding from the last convolution layer and changed its window size to 4 and its output channels to 400. The results can be seen in Figure 3 and Table I. All the tests are performed on an Intel Core i7-9700K with 64 GB of memory and an Nvidia GeForce GTX 1080 Ti with 12 GB of memory (Ubuntu 18.04). In order to compare the results, we test whether the two-sample mean difference confidence interval (99.9%) contains zero. The null hypothesis is having the same means, and the alternative is having different means. The test results indicate that the Spyker Python implementation is faster compared to the SpykeTorch implementation (the confidence intervals are [15477, 15859] for Spyker and [72607, 80737] for SpykeTorch, showing no overlap). Furthermore, the alternative implementation is faster both in the Python implementation, with a [-3738, -3370] interval, and in the C++ implementation, with a [-3828, -3339] interval. As expected, the C++ interface is faster compared to the Python interface, with a [-1078, -520] interval. The results for the accuracy comparisons show that there are no significant differences ([96.932, 98.169] and [95.996, 97.444] for the Python and SpykeTorch implementations respectively, showing intersection; [-0.89, 0.793] for C++ vs Python; [-0.649, 0.813] for Python alternative vs Python; and [-0.763, 0.971] for C++ alternative vs C++).

#### STDP Network

Subsequently, the Kheradpisheh et al. network [32] is used for comparisons. This network is made of two convolutional layers. The first layer is trained 2 times with STDP, and the second layer is trained 20 times with STDP on the training set. The output of the network is classified using an SVM classifier. The elapsed time measured consists of the time needed to train the network on the training set and make predictions for the testing set. The time to utilize the SVM is not taken into account because the libraries that simulate the neural network portion are compared here. The results can be seen in Figure 4 and Table II. The test results indicate that the Spyker GPU implementation is faster compared to the SpykeTorch implementation (confidence interval [-2728, -2265]).
Since the SpykeTorch implementation processes one sample at a time, we also implemented a single-sample version on the GPU, and this implementation runs faster compared to the SpykeTorch implementation (confidence interval [-1795, -1338]). There is also an implementation using the sparse interface of Spyker (running on the CPU) that is faster than the SpykeTorch implementation on the GPU (confidence interval [-2586, -2120]). These results show that the Spyker implementation is faster while the accuracy is not significantly different ([-0.373, 0.511] for Spyker GPU, [-0.458, 0.603] for single-sample, and [-0.405, 0.549] for the sparse implementation, all against the SpykeTorch implementation).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Implementation & Time & Time (S\(\pm\)SD) & Accuracy (\%\(\pm\)SD) & Runs \\ \hline SpykeTorch & 21h17m & 76,672\(\pm\)916 & 96.720\(\pm\)0.163 & 12 \\ \hline Spyker Python & 04h49m & 15,668\(\pm\)52 & 97.550\(\pm\)0.169 & 30 \\ \hline Spyker Python Alt & 03h31m & 12,114\(\pm\)14 & 97.632\(\pm\)0.112 & 30 \\ \hline Spyker C++ & 03h52m & 14,869\(\pm\)50 & 97.502\(\pm\)0.157 & 30 \\ \hline \end{tabular} \end{table} TABLE I: Comparison of the runtime and accuracy of Spyker against SpykeTorch on the Mozafari et al. network.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Implementation & Time & Time (S\(\pm\)SD) & Accuracy (\%\(\pm\)SD) & Runs \\ \hline SpykeTorch GPU & 47m30s & 2,850\(\pm\)64 & 98.392\(\pm\)0.093 & 30 \\ \hline Spyker GPU Single & 21m23s & 1,283\(\pm\)66 & 98.465\(\pm\)0.095 & 30 \\ \hline Spyker GPU & 05m53s & 353\(\pm\)9 & 98.461\(\pm\)0.079 & 30 \\ \hline Spyker Sparse & 08m16s & 496\(\pm\)1 & 98.464\(\pm\)0.091 & 30 \\ \hline \end{tabular} \end{table} TABLE II: Comparison of the runtime and accuracy of Spyker against SpykeTorch on the Kheradpisheh et al. network.

Fig. 3: Comparison plots of the runtime and accuracy of Spyker against SpykeTorch on the Mozafari et al. network. The plot on the left shows the runtime comparison of the Spyker and SpykeTorch implementations. The plot on the right compares the accuracy of the implementations. Comparisons are between SpykeTorch (ST), the implementation using Spyker in Python (SP Py), the alternative version using Spyker in Python (SPA Py), and their C++ counterparts (SP C++, SPA C++). The error bars are the minimum and maximum values of the samples.

### _Analyzing the Underlying Structures of the Brain_

In order to demonstrate the use case and the importance of the library in neuroscience research, a similarity analysis is done in this section to compare the biological plausibility of an SNN and a deep CNN model. The neural data needed for the analysis is recorded as spiking activity and LFP signals from the Inferior Temporal (IT) cortex using a single electrode (169 sessions from two macaque monkeys; the neural data of the two monkeys are pooled together) [51]. The task implemented here is a Rapid Serial Visual Presentation (RSVP). The intervals are 50ms for the stimulus and 450ms interstimulus. Eighty-one greyscale images of real-world objects, plus Gaussian low-pass filtered and high-pass filtered variations of some of them, are shown during the task (155 images in total). The categories of the stimuli are animal faces (AF), human faces (HF), animal body parts (AB), human body parts (HB), natural objects (N), and man-made objects (MM). The SNN used here is structurally similar to the one introduced by Shirsavar et al. [52].
The input of the SNN is resized to 32 and passed through 3 LoG filters with standard deviations of 0.471, 1.099, and 2.042. The window size of the filters is 7. Then, the output is thresholded and coded into 15 time steps. The first convolution layer has 16 output channels with a window size of 5 and a padding of 2, and the second convolution layer has 32 output channels with a window size of 3 and a padding of 1. The pooling layers have window sizes of 2 and 3, respectively. The layers are trained 20 times on the images, and the learning rates are doubled after each image until they reach 0.15. The firing times of the final layer (divided by the number of time steps) are used as the network output. The CNN used here is a ResNet-50 with the classifier layer replaced. The network is not pretrained. The input image is resized to 256 and cropped to 224. The network is trained 15 times on the dataset with the Adam optimizer and a learning rate of 0.0001. The outputs of both networks are classified into the 6 categories using a linear SVM classifier. The accuracies for the 6 classes are 51.569 \(\pm\) 2.240 (SD), 48.623 \(\pm\) 2.538, and 51.247 \(\pm\) 2.257 for ResNet-50, the SNN, and an SVM classifier applied to the average firing rates of the neural recordings of the monkeys between 150ms and 200ms after the onset, respectively. Figure 5 shows the results of the analysis. The average Kendall's Tau value for the interval between 125ms and 175ms shown in the figure is tested between the SNN and the ResNet. A Mann-Whitney U test with an alpha value of 0.001 results in a p-value of 2.028e-07, which shows a significant difference between the two. This indicates that the SNN representation is structurally closer to that of the monkey brain.

### _Rate Coding Output_

In this section, we look at the output of an SNN that uses rate coding. The SNN used here is the Shirsavar et al. network [52]. The numbers of output channels in the convolutional layers are set to 25 and 50. The training is unchanged in that 15 time steps are used with rank order coding. However, the inference is done with 300 time steps and rate coding. Afterward, the spike outputs of 40 neurons are plotted for one testing sample per class, shown in Figure 6. The figure also contains a plot of t-SNE-transformed firing rates as output features and the recall score for each class, averaged over 30 runs. The accuracy of the 30 runs is 95.635\(\pm\)0.171 on the testing set.

## IV Library Demonstration

In this section, a sample usage of the library is illustrated. The network used here is introduced by Shirsavar et al. [52] to classify the MNIST dataset. The network has two convolutional layers trained with the STDP learning rule. The code shown in this section is only a part of the actual implementation, with the aim of providing a simple example. For the complete implementation, please visit the GitHub repository of Spyker.

Fig. 4: Comparison plots of the runtime and accuracy of Spyker against SpykeTorch on the Kheradpisheh et al. network. The plot on the left shows the runtime comparison of the Spyker and SpykeTorch implementations. The plot on the right compares the accuracy of the implementations. Comparisons are between the GPU implementation using SpykeTorch (ST GPU), the GPU implementation using Spyker with single-sample instead of batch processing (SP Single), the GPU implementation using Spyker (SP GPU), and the sparse CPU implementation using Spyker (SP Sparse). The error bars are the minimum and maximum values of the samples.

Fig. 5: Similarity comparison of the SNN and ResNet-50 to monkey neural data.
The similarity measurement used here is the cosine similarity. The RDM for the monkey is computed for the 50ms interval after the onset. The RDMs are adjusted with histogram equalization. The RSA is calculated with a 50ms window size, a 5ms stride, and a 95% confidence interval. Kendall's Tau measurement is used for the RSA analysis. The RSA is averaged in the interval between 125ms and 175ms and compared in the plot in the top right with a 95% confidence interval.

Fig. 6: Raster plot of an SNN for the MNIST test images. In this figure, 40 neurons are plotted over 300 time steps for 10 samples of the MNIST testing set, each image belonging to one class.

### _Transformation_

The transformation from the input image to the network input consists of feature enhancement and spike coding, shown in Listing 1. Here, a module named Transform is defined that performs the transformation when called. This module applies 3 LoG filters with different standard deviations to the input image, with padding to keep the original width and height of the input. The output is stored in 6 channels. Each channel of this output is then coded into fifteen time steps using rank order coding.

```
class Transform:
    def __init__(self, device):
        std = [0.471, 1.099, 2.042]
        self.fit = spyker.LoG(3, std, pad=3, device=device)

    def __call__(self, data):
        data = self.fit(data)
        spyker.threshold(data, 0.01)
        return spyker.code(data, 15)
```
Listing 1: Implementation of the Transform module

### _Network_

The network has two convolutional layers. Here, a module named Network is defined (shown in Listing 2) to train the neurons and make predictions. The convolution layers are initialized, the STDP configurations are set, and the winner selection functions are wrapped in lambda functions so that the hyperparameters are kept in the initialization function of the network.

```
class Network:
    def __init__(self, device):
        self.thresh1, self.thresh2 = 16, 5
        self.conv1 = spyker.Conv(6, 100, 5, pad=2,
                                 mean=.5, std=.02, device=device)
        self.conv2 = spyker.Conv(100, 200, 3, pad=1,
                                 mean=.5, std=.02, device=device)
        config1 = spyker.STDPConfig(.0004, -.0003)
        config2 = spyker.STDPConfig(.0004, -.0003)
        self.conv1.stdpconfig = [config1]
        self.conv2.stdpconfig = [config2]
        self.wta1 = lambda x: spyker.convwta(x, 3, 5)
        self.wta2 = lambda x: spyker.convwta(x, 1, 8)
```
Listing 2: Implementation of the Network module

### _Learning_

Training each layer is done in a separate function, shown in Listing 3. The training of the layers is done in sequential order (one layer after another). Training of the first layer is done in the train_layer1 function with the STDP learning rule. Here, the output of the first convolution is computed, and lateral inhibition is performed on it. Then, winner neurons are selected, and STDP weight updating is performed on them. The STDP learning rates in the first layer are multiplied by 1.5 every 2000 samples, and the multiplying process stops once the positive learning rate reaches 0.15. The second layer is trained in a similar way in the train_layer2 function with the STDP learning rule.
```
def train_layer1(self, data):
    output = self.conv1(data)
    spyker.threshold(output, self.thresh1)
    spyker.inhibit(output)
    winners = self.wta1(output)
    spikes = spyker.fire(output)
    self.conv1.stdp(data, winners, spikes)

def train_layer2(self, data):
    data = self.conv1(data)
    data = spyker.fire(data, self.thresh1)
    data = spyker.pool(data, 2)
    output = self.conv2(data)
    spyker.threshold(output, self.thresh2)
    spyker.inhibit(output)
    winners = self.wta2(output)
    spikes = spyker.fire(output)
    self.conv2.stdp(data, winners, spikes)
```
Listing 3: The code for training the network layers

After defining the network module, the process of training and classification is implemented. The training process, shown in Listing 4, involves training each layer once with quantization afterward.

```
for data, target in trainset:
    network.train_layer1(transform(data))
spyker.quantize(network.conv1.kernel, 0, 0.5, 1)

for data, target in trainset:
    network.train_layer2(transform(data))
spyker.quantize(network.conv2.kernel, 0, 0.5, 1)
```
Listing 4: The training process of the network

### _Inference_

The call operator of the network, shown in Listing 5, implements the prediction procedure, which processes the input spikes and produces the final network output.

```
def __call__(self, data):
    data = self.conv1(data)
    data = spyker.fire(data, self.thresh1)
    data = spyker.pool(data, 2)
    data = self.conv2(data)
    data = spyker.fire(data, self.thresh2)
    data = spyker.pool(data, 3)
    return spyker.gather(data).flatten(1)
```
Listing 5: Inference function of the network

After training, the output features for every sample in the training set and the testing set are computed (in the gather function). Then, an SVM classifier is trained on the training set outputs. Finally, predictions are made for the testing set outputs (shown in Listing 6).

```
xtr, ytr = gather(network, transform, train)
xte, yte = gather(network, transform, test)
svm = LinearSVC(C=2.4).fit(xtr, ytr)
pred = svm.predict(xte)
accuracy = (pred == yte.numpy()).mean()
```
Listing 6: Implementation of the feature gathering and classification operations

## V Discussion

Our brain has amazing capabilities. It can learn and perform complicated tasks in a robust manner and with low power consumption. Artificial neural networks have been created to mimic the processing power of the brain. Deep neural networks are ANNs that have had major success in recent years. However, there are structural differences between these networks and the brain, and they encounter problems when it comes to tolerance, energy, and sample efficiency. Spiking neural networks are the next generation of artificial neural networks. SNNs are not a new concept; however, they have been brought to attention recently due to their promising characteristics. The aim of these networks is to build a better model of the brain compared to DNNs. Several well-established simulation tools exist for DNNs. These tools have allowed DNNs to reach their great success faster and have helped them to computationally scale up. SNNs lack such high-performance simulation tools. There have been some attempts at creating such tools, but they have not been able to live up to expectations. In this work, we introduced Spyker, a high-performance library written from scratch using low-level tools to simulate spiking neural networks on both CPUs and GPUs. Despite being stand-alone, Spyker has great flexibility and the ability to integrate with other tools to create a smooth development experience. We compared the performance of this library with SpykeTorch, a simulation tool built on the PyTorch framework.
We showed that Spyker is multiple times faster compared to this library. Furthermore, to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of brain functions and the role of Spyker in this field, we compared how similar a spiking neural network implemented with this library and a ResNet model are to recordings from a macaque monkey brain. Finally, we illustrated an example implementation to demonstrate the easy and modern interface of the library. Strong SNN models can be implemented using the Spyker library to solve real-world machine learning problems. Features like fast processing and having a C++ interface alongside the Python interface make this library ready for both research and production. Generalization is an important concept in machine learning, and neural networks that learn and run fast are quite desirable. SNNs have the potential to become state-of-the-art models in machine learning. Another potential use case of the library is to study and understand how the brain processes information using simulations. In other words, this library enables us to look at neuroscience through the eyes of a brain-inspired neural network. Although this library has been shown to perform well, there is room for further improvement. Spyker has a sparse interface that runs on the CPU. The sparse interface can be extended to also run on the GPU, which can improve performance even further. Furthermore, support for a larger number of neural models, coding schemes, and learning rules can be added. This would help the library cover a greater range of SNN building blocks. When choosing a model to be deployed on embedded and neuromorphic processors, SNNs are among the top choices due to their energy efficiency, and they are often used in neuromorphic computing. This is another direction in which Spyker can grow: the computational efficiency of its sparse interface can be further improved and made compatible with these types of processors.