Fields: id, title, abstract, authors, published_date, link, markdown
2308.11200
SegRNN: Segment Recurrent Neural Network for Long-Term Time Series Forecasting
RNN-based methods have faced challenges in the Long-term Time Series Forecasting (LTSF) domain when dealing with excessively long look-back windows and forecast horizons. Consequently, the dominance in this domain has shifted towards Transformer, MLP, and CNN approaches. The substantial number of recurrent iterations is the fundamental reason behind the limitations of RNNs in LTSF. To address these issues, we propose two novel strategies to reduce the number of iterations in RNNs for LTSF tasks: Segment-wise Iterations and Parallel Multi-step Forecasting (PMF). RNNs that combine these strategies, namely SegRNN, significantly reduce the required recurrent iterations for LTSF, resulting in notable improvements in forecast accuracy and inference speed. Extensive experiments demonstrate that SegRNN not only outperforms SOTA Transformer-based models but also reduces runtime and memory usage by more than 78%. These achievements provide strong evidence that RNNs continue to excel in LTSF tasks and encourage further exploration of this domain with more RNN-based approaches. The source code is coming soon.
Shengsheng Lin, Weiwei Lin, Wentai Wu, Feiyu Zhao, Ruichao Mo, Haotong Zhang
2023-08-22T05:23:04Z
http://arxiv.org/abs/2308.11200v1
# SegRNN: Segment Recurrent Neural Network for Long-Term Time Series Forecasting

###### Abstract

RNN-based methods have faced challenges in the Long-term Time Series Forecasting (LTSF) domain when dealing with excessively long look-back windows and forecast horizons. Consequently, the dominance in this domain has shifted towards Transformer, MLP, and CNN approaches. The substantial number of recurrent iterations is the fundamental reason behind the limitations of RNNs in LTSF. To address these issues, we propose two novel strategies to reduce the number of iterations in RNNs for LTSF tasks: Segment-wise Iterations and Parallel Multi-step Forecasting (PMF). RNNs that combine these strategies, namely SegRNN, significantly reduce the required recurrent iterations for LTSF, resulting in notable improvements in forecast accuracy and inference speed. Extensive experiments demonstrate that SegRNN not only outperforms SOTA Transformer-based models but also reduces runtime and memory usage by more than 78%. These achievements provide strong evidence that RNNs continue to excel in LTSF tasks and encourage further exploration of this domain with more RNN-based approaches. The source code is coming soon.

## 1 Introduction

Time series forecasting involves using past observed time series data to predict future unknown time series. It finds applications in various fields, such as energy and smart grids, traffic flow control, and server energy optimization [1]. Recurrent Neural Networks (RNNs) [14], as a deep learning architecture, have been extensively adopted for conventional time series forecasting due to their effectiveness in capturing sequential dependencies [11].

In recent years, there has been a shift in focus towards predicting longer horizons, known as Long-term Time Series Forecasting (LTSF) [23]. Figure 1 (a) illustrates the concept of LTSF, where the objective is to provide richer semantic information by predicting a longer future sequence, thus offering more practical guidance. However, extending the forecast horizon poses significant challenges: (i) forecasting further into the future leads to increased uncertainty, resulting in decreased forecast accuracy; (ii) longer forecast horizons require models to consider a more extensive historical context for accurate predictions, significantly increasing the complexity of modeling.

While RNNs have exhibited remarkable performance in conventional time-series tasks, they have gradually lost prominence in the LTSF domain. Figure 1 (b) and (c) illustrate the limitations of RNNs (either the vanilla RNN or its variants: Long Short-Term Memory (LSTM) [15] and Gated Recurrent Unit (GRU) [1]) in LTSF: (i) the model's forecast error rapidly increases as the forecast horizon expands, especially when the horizon reaches 64, at which point it becomes comparable to random forecasting; (ii) the models' inference time rapidly increases with the length of the horizon. These observations widely support the belief that RNNs are no longer suitable for LTSF tasks that involve modeling long-term dependencies [23, 24]. Consequently, there is currently no prominent RNN-based solution in the LTSF field.

In contrast, Transformers [22], an advanced neural network architecture designed to model long-term dependencies in sequences, have achieved remarkable success in natural language processing, computer vision, and other fields. Consequently, there has been a surge in Transformer-based LTSF solutions, breaking several state-of-the-art (SOTA) records [23, 25, 26].
Figure 1: Challenges faced by vanilla RNN and its variants in LTSF. Data is obtained from the ETTh1 dataset.

The undeniable efficacy of Transformers notwithstanding, their intricate design and substantial computational requirements have constrained their accessibility. Recently, there has been significant debate regarding whether the self-attention mechanism in Transformers is suitable for modeling time-series tasks (Zeng et al., 2023; Li et al., 2023). This leads us to contemplate: _Are RNNs, which are conceptually simple and structurally well-suited for modeling time-series data, truly unsuitable for LTSF tasks?_

_The answer might be No._ It is well-known that RNNs suffer from the vanishing/exploding gradient problem (Bengio, Simard, and Frasconi, 1994), which limits the length of sequences they can effectively model. We can hypothesize that the current RNNs' failure in LTSF stems from excessively long look-back and forecast horizons, which result in prohibitively high recurrent iteration counts. To address this, we propose a straightforward yet powerful strategy: **Minimizing the count of recurrent iterations in RNNs while striving to retain sequential information.** Specifically, we introduce SegRNN, which is designed with two key components:

1. The incorporation of segment technology in RNNs, replacing point-wise iterations with segment-wise iterations, significantly reducing the number of recurrent iterations.
2. The introduction of the Parallel Multi-step Forecasting (PMF) strategy, further reducing the number of recurrent iterations. The comparison between PMF and the traditional Recurrent Multi-step Forecasting (RMF) is illustrated in Figure 2.

Experimental results on popular LTSF benchmarks demonstrate that these two key design components significantly improve RNNs' performance in the LTSF domain. Reducing the number of recurrent iterations not only greatly improves the prediction accuracy of RNNs but also significantly enhances their inference speed. In most scenarios, SegRNN outperforms SOTA Transformer-based models, featuring a runtime and memory reduction of over 78%. We provide strong evidence that RNNs still possess powerful capabilities in the LTSF domain. In summary, our contributions are as follows:

* We propose SegRNN, which utilizes a time-series segment technique to replace point-wise iterations with segment-wise iterations in LTSF.
* We further introduce the PMF technique to enhance the inference speed and performance of RNNs.
* The proposed SegRNN outperforms the current SOTA methods while significantly reducing runtime and memory usage.
* The success of SegRNN demonstrates substantial improvements over existing RNN methods, highlighting the strong potential of RNNs in the LTSF domain.

## 2 Related Work

Significant efforts have been devoted to advancing the field of time series forecasting (An and Anh, 2015). With the evolution of hardware capabilities, deep learning approaches have gained prominence in uncovering patterns within time series data (Han et al., 2021). These approaches can be broadly categorized as follows:

**Transformers.** Originally designed for natural language processing tasks (Vaswani et al., 2017), Transformers have achieved remarkable success in various domains (Khan et al., 2022; Arnab et al., 2021).
The self-attention mechanism in Transformers enables them to capture long-term dependencies in time series data, leading to a considerable body of work focused on adapting Transformers to LTSF tasks, demonstrating impressive performance (Wen et al., 2023). Earlier efforts, such as LogTrans (Li et al., 2019), Informer (Zhou et al., 2021), Pyraformer (Liu et al., 2021), Autoformer (Wu et al., 2021), and FEDformer (Zhou et al., 2022), aimed at reducing the complexity of Transformers. More recently, PatchTST (Nie et al., 2023) and Crossformer (Zhang and Yan, 2023) leveraged patch-based techniques from computer vision (Dosovitskiy et al., 2021; He et al., 2022), further enhancing the performance of Transformers. The patch technique, which inspired the segment-wise iterations technique in this paper, has proven to be influential.

**MLPs.** Multi-Layer Perceptrons (MLPs) have also found extensive use in time series forecasting (Olivares et al., 2023; Fan et al., 2022; Challu et al., 2023). Recently, DLinear achieved superiority over then-state-of-the-art Transformer-based models through a simple linear layer and a channel-independent strategy (Zeng et al., 2023). The success of DLinear has spurred the development of a plethora of MLPs in LTSF, including MTS-Mixers (Li et al., 2023), TSMixer (Vijay et al., 2023), and TiDE (Das et al., 2023). The accomplishments of these MLP-based models have raised questions about the necessity of employing complex and cumbersome Transformers for time series prediction.

**CNNs.** Initially applied in image processing for capturing local patterns and extracting meaningful features (Krizhevsky, Sutskever, and Hinton, 2012; He et al., 2016), Convolutional Neural Networks (CNNs) have also shown remarkable performance in the time series domain (Bai, Kolter, and Koltun, 2018; Franceschi, Dieuleveut, and Jaggi, 2019; Cheng, Huang, and Zheng, 2020). Recently, CNN-based models such as MICN (Wang et al., 2023), TimesNet (Wu et al., 2023), and SCINet (Liu et al., 2022) have demonstrated impressive results in the LTSF field.

Figure 2: Comparison of Recurrent Multi-step Forecasting (RMF) and Parallel Multi-step Forecasting (PMF). The positional embedding \(pe_{i}\) in PMF serves as a replacement for the sequential information of the recurrent structure.

**RNNs.** Recurrent Neural Networks (RNNs) have long been the primary choice for time series forecasting tasks due to their ability to handle sequential data. Numerous efforts have been devoted to utilizing RNNs for short-term and probabilistic forecasting, achieving significant advancements (Lai et al., 2018; Bergsma et al., 2022; Wen et al., 2018; Tan et al., 2023). However, in the LTSF domain with excessively long look-back windows and forecast horizons, RNNs have been considered inadequate for effectively capturing long-term dependencies, leading to their gradual abandonment (Zhou et al., 2021, 2022). The emergence of SegRNN aims to challenge and change this situation by attempting to address these limitations.

## 3 Preliminaries

This section introduces the formulation of the LTSF problem, the Channel Independent (CI) strategy, and the fundamental RNN and its variants.

### LTSF Problem Formulation

The LTSF problem deals with predicting the future time series \(Y\in\mathbb{R}^{H\times C}\) based on a historical multivariate time series (MTS) \(X\in\mathbb{R}^{L\times C}\).
Here, \(L\) represents the length of the historical look-back window, \(C\) denotes the number of feature dimensions or channels, and \(H\) signifies the length of the forecast horizon. The goal of the LTSF task is to extend the forecast horizon \(H\) to its maximum potential (e.g., up to 720), which poses a considerable challenge.

### Channel Independent Strategy

Intuitively, it may appear optimal to use all historical variables in an MTS to forecast all future variables simultaneously, as this captures the interrelations between the variables. However, recent studies have shown that the Channel Independent (CI) strategy surpasses this traditional approach (Han et al., 2023). The CI strategy aims to identify a function \(f:X^{(i)}\in\mathbb{R}^{L}\to Y^{(i)}\in\mathbb{R}^{H}\) that maps a univariate historical time series to its future values, as opposed to \(f:X\in\mathbb{R}^{L\times C}\to Y\in\mathbb{R}^{H\times C}\), which maps the multivariate historical time series to the multivariate future values. The current SOTA LTSF models have embraced the CI strategy (Zeng et al., 2023; Nie et al., 2023; Das et al., 2023), and we similarly align ourselves with this approach. Furthermore, our model introduces a channel identifier (Shao et al., 2022) to enhance its predictive capability for multivariate sequences, which will be discussed comprehensively in the following sections.

### RNN Variants

The vanilla RNN faces challenges such as vanishing and exploding gradients, which hinder the model's convergence during training (Bengio et al., 1994). To address these issues, the LSTM and GRU architectures optimize the structure of their cell units. Figure 3 illustrates the differences in cell unit structures among these fundamental RNN variants. For more comprehensive information, we strongly recommend referring to the original papers (Hochreiter and Schmidhuber, 1997; Cho et al., 2014). The proposed SegRNN is not limited to a specific RNN cell structure. Considering the stable performance of the GRU in practical scenarios, this paper employs the GRU as an exemplar to illustrate the architecture and performance of SegRNN. Therefore, for consistency throughout the text, the SegRNN model is assumed to be based on the GRU cell unless explicitly stated otherwise.

## 4 Model Architecture

The recurrent iterative nature of RNNs poses challenges for effective convergence when modeling extensive long sequences. SegRNN aims to reduce the number of recurrent iterations to facilitate its convergence. Specifically, SegRNN employs the following strategies:

1. In the encoding phase, it replaces the original time point-wise iterations with sequence segment-wise iterations, effectively reducing the number of iterations from \(L\) to \(L/w\).
2. In the decoding phase, it utilizes the PMF strategy to further reduce the number of iterations from \(H/w\) to 1.

The substantial reduction in the number of recurrent iterations not only leads to a remarkable performance improvement but also results in a significant increase in inference speed, as the back-of-the-envelope counts below illustrate. The model architecture of SegRNN is illustrated in Figure 4.
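To make the reduction concrete, the following snippet counts recurrent iterations under the paper's typical setting of \(L=H=720\) and \(w=48\); it is an illustration of the arithmetic above, not the authors' code:

```python
# Recurrent iteration counts for look-back L and horizon H.
L, H, w = 720, 720, 48

point_wise  = L + H            # vanilla RNN, one step per time point: 1440
segment_rmf = L // w + H // w  # segment-wise encoding + recurrent decoding: 30
segment_pmf = L // w + 1       # segment-wise encoding + parallel decoding (PMF): 16

print(point_wise, segment_rmf, segment_pmf)  # 1440 30 16
```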
### Encoding

**Segment partition and projection.** Given a sequence channel \(X^{(i)}\in\mathbb{R}^{L}\), it can be partitioned into segments \(X^{(i)}_{w}\in\mathbb{R}^{n\times w}\), where \(w\) represents the window length of each segment and \(n=\frac{L}{w}\) denotes the number of segments. These segments, \(X^{(i)}_{w}\in\mathbb{R}^{n\times w}\), are then transformed to \(X^{(i)}_{d}\in\mathbb{R}^{n\times d}\) through a learnable linear projection \(W_{prj}\in\mathbb{R}^{w\times d}\) followed by a ReLU activation, where \(d\) represents the dimensionality of the hidden state of the GRU.

**Recursive encoding.** Subsequently, the transformed \(X^{(i)}_{d}\) is fed into the GRU for recurrent iterations to capture temporal features. Specifically, for \(x_{t}\in\mathbb{R}^{d}\) in \(X^{(i)}_{d}\), the entire process within the GRU cell can be formulated as:

\[z_{t}=\sigma\left(W_{z}\cdot[h_{t-1},x_{t}]\right),\]
\[r_{t}=\sigma\left(W_{r}\cdot[h_{t-1},x_{t}]\right),\]
\[\tilde{h}_{t}=\tanh\left(W\cdot[r_{t}\times h_{t-1},x_{t}]\right),\]
\[h_{t}=(1-z_{t})\times h_{t-1}+z_{t}\times\tilde{h}_{t}.\]

After \(n\) recurrent iterations, the hidden feature \(h_{n}\) obtained from the last step already encapsulates all the temporal features of the original sequence \(X^{(i)}\). This hidden feature will be passed to the Decoding part for the subsequent steps of inference and prediction.

Figure 3: Comparison of the cell of RNN and its variants.
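A minimal PyTorch sketch of this encoding path, under the shapes defined above; the module names (`proj`, `gru`) and the batch handling are our own illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

L, w, d = 720, 48, 512   # look-back, segment length, GRU hidden size
n = L // w               # number of segments (15)

proj = nn.Sequential(nn.Linear(w, d), nn.ReLU())             # W_prj + ReLU
gru = nn.GRU(input_size=d, hidden_size=d, batch_first=True)

x = torch.randn(64, L)          # a batch of univariate channels X^(i)
segs = x.reshape(-1, n, w)      # segment partition: (batch, n, w)
x_d = proj(segs)                # projection: (batch, n, d)
_, h_n = gru(x_d)               # n segment-wise iterations; h_n: (1, batch, d)
```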
### Decoding

RMF is a straightforward method for accomplishing multi-step prediction. In RMF, single-step predictions are made to obtain \(\bar{y}_{t}\). The predicted \(\bar{y}_{t}\) is then used as input for the subsequent prediction of \(\bar{y}_{t+1}\), and this process continues until the complete set of prediction results is obtained. By incorporating the segment technique from the Encoding phase with RMF, the number of iterations required for a prediction horizon of \(H\) is reduced to \(H/w\). However, despite the reduction in the number of recurrent iterations, recursive decoding has its limitations. These include: (i) the accumulation of errors resulting from recursive predictions, and (ii) the sequential nature of recursion, which hampers parallel computation within training examples and restricts improvements in inference speed (Vaswani et al., 2017). To address these limitations, we propose a novel prediction strategy called Parallel Multi-step Forecasting (PMF), as described below.

**Positional embeddings.** During the decoding phase, the sequential order between segments is lost due to the break in the recurrent recursion. To address this, \(m\) corresponding positional embeddings, denoted as \(PE^{(i)}\in\mathbb{R}^{m\times d}\), are generated to identify the positions of the segments. Here, \(m=\frac{H}{w}\) represents the number of windows obtained by partitioning the prediction horizon into segments. Each positional embedding \(pe^{(i)}\in\mathbb{R}^{d}\) is constructed by concatenating the relative position encoding \(rp\in\mathbb{R}^{\frac{d}{2}}\) and the channel position encoding \(cp\in\mathbb{R}^{\frac{d}{2}}\). Figure 5 illustrates the positional embeddings for the target sequence \(Y^{(i)}\). The relative position encoding indicates the position of each segment that needs to be predicted within the complete sequence \(Y^{(i)}\). Meanwhile, the channel position encoding represents the channel position \(i\in\{1,2,\ldots,C\}\) of the current sequence \(Y^{(i)}\) within the multi-channel sequence \(Y\in\mathbb{R}^{H\times C}\) (Shao et al., 2022). The inclusion of the channel position encoding partially compensates for the limitations of the CI strategy in capturing relationships between variables, thereby enhancing the model's performance.

**Parallel decoding.** In the Decoding phase, the same GRU cell used in the Encoding phase is shared. Specifically, the final state \(h_{n}\) obtained from the Encoding phase is duplicated \(m\) times and combined with the \(m\) positional embeddings from \(PE^{(i)}\). These pairs are then simultaneously processed in parallel by the GRU cell. This parallel processing generates \(m\) output vectors, each with a length of \(d\), denoted as \(\bar{Y}_{d}^{(i)}\in\mathbb{R}^{m\times d}\). It is important to note that this approach differs from the previous recursive processing in RMF, as the computation of each vector is independent of the previous time step's result. As a result, intra-sample parallel computation is achieved, leading to improved inference speed. Additionally, prediction errors do not accumulate with the number of iterations, resulting in enhanced prediction accuracy.

**Prediction and sequence recovery.** The \(\bar{Y}_{d}^{(i)}\) undergoes a Dropout layer, randomly dropping out a certain proportion of values for regularization purposes. Subsequently, it is transformed into \(\bar{Y}_{w}^{(i)}\in\mathbb{R}^{m\times w}\) using a learnable linear prediction layer \(W_{prd}\in\mathbb{R}^{d\times w}\). Finally, \(\bar{Y}_{w}^{(i)}\) is reshaped into \(\bar{Y}^{(i)}\in\mathbb{R}^{H}\), representing the final prediction result.

Figure 4: The model architecture of SegRNN.

Figure 5: Positional embeddings \(PE^{(i)}\) for the target sequence \(Y^{(i)}\).

### Normalization and Evaluation

**Instance normalization.** Time series data often experience distribution shift issues, and employing simple sample normalization strategies can help alleviate this problem. In this paper, we utilize a simple sample normalization strategy [13] that involves subtracting the last value of the sequence from the input before encoding and subsequently adding back that value after decoding, formulated as:

\[x^{(i)}_{1:L}=x^{(i)}_{1:L}-x^{(i)}_{L},\qquad\bar{y}^{(i)}_{1:H}=\bar{y}^{(i)}_{1:H}+x^{(i)}_{L}.\]

**Loss function.** The mean absolute error (MAE) is employed as the loss function for our model, defined as:

\[\mathcal{L}(Y,\bar{Y})=\frac{1}{HC}\sum_{t=1}^{H}\sum_{i=1}^{C}|\bar{y}^{(i)}_{t}-y^{(i)}_{t}|.\]
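The PMF decoding path can be sketched in the same spirit as the encoder above. The construction of \(PE^{(i)}\) from the two half-size encodings and the parallel application of the shared GRU cell follow the description in this section; all names and the learnable-parameter choices are our assumptions rather than the authors' code:

```python
import torch
import torch.nn as nn

H, w, d, C = 720, 48, 512, 7
m = H // w                                   # number of horizon segments

rp = nn.Parameter(torch.randn(m, d // 2))    # relative position encoding
cp = nn.Parameter(torch.randn(C, d // 2))    # channel position encoding
gru = nn.GRU(input_size=d, hidden_size=d, batch_first=True)  # shared with encoder
dropout = nn.Dropout(0.5)
pred = nn.Linear(d, w)                       # W_prd

def decode(h_n, channel_idx):
    """h_n: (1, batch, d) from the encoder; channel_idx: (batch,) channel ids."""
    batch = h_n.shape[1]
    pe = torch.cat([rp.unsqueeze(0).expand(batch, m, -1),
                    cp[channel_idx].unsqueeze(1).expand(batch, m, -1)], dim=-1)
    h0 = h_n.repeat_interleave(m, dim=1)           # duplicate h_n for m segments
    out, _ = gru(pe.reshape(batch * m, 1, d), h0)  # all m segments in parallel
    y_w = pred(dropout(out.squeeze(1)))            # (batch*m, w)
    return y_w.reshape(batch, m * w)               # sequence recovery: (batch, H)
```

In training, the instance normalization described above would subtract \(x^{(i)}_{L}\) from the encoder input and add it back to the output of `decode`.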
## 5 Experiments

In this section, we present the main experimental results on popular LTSF benchmarks. Furthermore, we conduct an ablation study to analyze the impact of the segment-wise iterations and the PMF strategies on the effectiveness of the RNN in LTSF. We also investigate the influence of some important parameters on SegRNN. All experiments in this section are implemented in PyTorch and executed on two NVIDIA T4 GPUs, each equipped with 16 GB of memory.

### Experimental Setup

An overview of the experimental setup is presented here; for further details, please refer to the Appendix.

**Datasets.** The performance evaluation of SegRNN is carried out on 7 popular datasets in the LTSF domain, comprising 4 ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2) as well as the Traffic, Electricity, and Weather datasets. The statistics of these datasets are presented in Table 1.

| Datasets | ETTh1 | ETTh2 | ETTm1 | ETTm2 | Electricity | Traffic | Weather |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Channels | 7 | 7 | 7 | 7 | 321 | 862 | 21 |
| Frequency | 1 hour | 1 hour | 15 mins | 15 mins | 1 hour | 1 hour | 10 mins |
| Timesteps | 17,420 | 17,420 | 69,680 | 69,680 | 26,304 | 17,544 | 52,696 |

Table 1: Summary of datasets for evaluation.

**Baselines and metrics.** As baselines, we select SOTA and representative models in the LTSF domain, including the following categories: (i) Transformers: PatchTST [21], FEDformer [15], Informer [15]; (ii) MLPs: TiDE [13], DLinear [13]; (iii) CNNs: MICN [12], TimesNet [23]; (iv) RNNs: DeepAR [2], GRU [1]. It is worth mentioning that while DeepAR and GRU were not initially designed for LTSF, we include them as baselines due to the limited presence of prominent RNN solutions in the field. The proposed SegRNN aims to fill this gap. Two commonly employed evaluation metrics, Mean Squared Error (MSE) and Mean Absolute Error (MAE), are used here to assess the performance of the LTSF models.

**Model configuration.** The uniform configuration of SegRNN consists of a look-back of 720, a segment length of 48, a single GRU layer, a hidden size of 512, 30 training epochs, a learning rate decay of 0.8 after the initial 3 epochs, and early stopping with a patience of 10. The dropout rate, batch size, and learning rate vary based on the scale of the data.

### Main Result

The multivariate long-term time series forecasting results of SegRNN and other baselines are presented in Table 2. Remarkably, SegRNN achieved top-two positions in 50 out of 56 metrics across all scenarios, including 45 first-place rankings, signifying its significant superiority over other baselines, including the current SOTA Transformer-based model, PatchTST. SegRNN demonstrated outstanding performance on the ETT and Weather datasets, nearly surpassing the SOTA performance across all metrics. On larger-scale datasets such as Electricity and Traffic, where the channel numbers exceed 300 and 800, respectively, SegRNN's performance experienced a slight decrease. This is likely due to the relatively smaller capacity of the SegRNN model, as it is built on a single GRU layer. However, even in these cases, SegRNN demonstrated competitive or superior performance compared to the competing models. Regarding the RNN-based methods GRU and DeepAR, SegRNN achieved MSE improvements of 75% and 80%, respectively. This provides strong evidence that SegRNN significantly enhances the performance of existing RNN methods in the LTSF domain. These results highlight the success of the SegRNN design and demonstrate that RNN methods still hold strong potential in the current LTSF domain.

### Ablation Studies

**Segment-wise iterations vs. point-wise iterations.** Figure 6 illustrates the performance differences between segment-wise and point-wise iterations. It is essential to note that the segment length directly determines the number of iterations, and when the segment length \(w=1\), segment-wise iterations degenerate into point-wise iterations. The following observations can be made:

1. As the segment length increases (i.e., the number of iterations decreases), the forecast error consistently decreases. However, when the segment length equals the look-back length, the model degenerates into a multi-layer perceptron, leading to an increase in prediction error.
2. With the continuous increase of the segment length, the inference time steadily decreases.

These findings indicate that a relatively large yet appropriate segment length (i.e., minimizing the number of iterations) significantly improves the performance of the RNN method in LTSF.

Figure 6: The forecast error (bar plot) and the inference time (line plot) of SegRNN with different segment lengths on the ETTm1 dataset. Both look-back and horizon are 192.

**PMF vs. RMF.** Figure 7 illustrates the performance disparities between RMF and PMF.
Concerning the forecast error, PMF significantly outperforms RMF for various forecast horizons, exhibiting a more stable distribution. The advantage of PMF becomes increasingly evident as the forecast horizon increases. As for inference time, when the forecast horizon \(H<192\), PMF is slightly slower than RMF. This is because the intra-sample parallel computation advantage of PMF is not fully manifested with fewer iterations; instead, it incurs additional overhead due to data replication in memory. However, when the forecast horizon \(H>192\), the larger number of iterations allows PMF to leverage its intra-sample parallel computation advantage, leading to improved hardware utilization and accelerated computation. In conclusion, these results demonstrate that PMF significantly enhances the performance of RNNs in LTSF tasks compared to RMF.

**Impact of look-back length.** A powerful LTSF model typically performs better with a longer look-back context, as it contains more trend and periodic information. The ability to leverage a longer look-back context directly reflects the model's capability to capture long-term dependencies. However, longer look-back contexts also increase the modeling complexity, and many Transformer-based models encounter difficulties when dealing with long look-back scenarios (i.e., \(L>96\)) [23]. Regarding SegRNN, as illustrated in Figure 8, the forecast error consistently diminishes as the look-back length increases. Notably, SegRNN also exhibits commendable performance with relatively shorter look-backs, showcasing its robustness across diverse look-back lengths. This finding suggests that SegRNN not only excels in modeling long-term dependencies but also maintains its robustness in handling varying look-back requirements.

**Impact of RNN variants.** The proposed SegRNN exhibits versatility across various RNN types. Nevertheless, in practical implementations, it consistently achieves lower forecast errors and demonstrates greater stability when employed with the GRU variant, as depicted in Figure 9. This advantage is attributed to the integration of gating mechanisms in the GRU, which enhances its capacity to model long-term dependencies while retaining a simpler structure in comparison to LSTM. Consequently, SegRNN defaults to the adoption of GRU. However, should more potent RNN variants emerge in the future, SegRNN holds the potential to further bolster its robustness.

**SegRNN vs. PatchTST.** We conducted a comparison between SegRNN and the latest SOTA Transformer-based model, PatchTST, to showcase the runtime performance advantage of SegRNN. Table 3 reveals that, compared to PatchTST, SegRNN demonstrates a reduction of over 78% in average training time and a decrease of over 82% in average maximum GPU memory consumption. This significant improvement in efficiency is particularly beneficial for practical model training and deployment.

## 6 Conclusion

In this paper, we introduce SegRNN, an innovative RNN-based model designed for Long-term Time Series Forecasting (LTSF). SegRNN incorporates two fundamental strategies: (i) the replacement of point-wise iterations with segment-wise iterations, and (ii) the substitution of Recurrent Multi-step Forecasting (RMF) with Parallel Multi-step Forecasting (PMF). The segment-wise iteration strategy significantly reduces the number of required recurrent iterations for extracting temporal features, thus addressing the challenge of effectively training RNNs on excessively long sequences.
Moreover, the adoption of PMF further mitigates the issue of error accumulation inherent in traditional RMF methods. By adopting these innovative strategies, SegRNN not only outperforms the current SOTA models in terms of prediction accuracy but also yields substantial efficiency improvements, including a remarkable reduction of over 78% in training time and memory usage. These compelling outcomes serve as robust evidence that RNNs remain potent in LTSF tasks, thereby encouraging further exploration and breakthroughs with more RNN methods in the future.

Figure 8: The forecast error of SegRNN with different look-back lengths on the ETTm1 dataset. The horizon is 192.

Figure 9: The forecast error of SegRNN with different RNN variants on the ETTm1 dataset. The look-back is 720 and the horizon is 192.

| Metric | Datasets | PatchTST | SegRNN | Imp. |
| --- | --- | --- | --- | --- |
| Training Time | ETTm1 | 94.9 | 29.7 | 69% |
| | Weather | 273.4 | 50.3 | 82% |
| | Electricity | 1.97k | 313.8 | 84% |
| MACs (MMac) | ETTm1 | 265.9 | 213.4 | 20% |
| | Weather | 797.8 | 640.2 | 20% |
| | Electricity | 12.2k | 9.79k | 20% |
| Parameters (M) | ETTm1 | 2.62 | 1.63 | 38% |
| | Weather | 2.62 | 1.63 | 38% |
| | Electricity | 2.62 | 1.71 | 35% |
| Max Memory (MB) | ETTm1 | 289 | 77 | 73% |
| | Weather | 780 | 124 | 84% |
| | Electricity | 11.3k | 1.23k | 89% |

Table 3: Comparison of performance metrics between SegRNN and PatchTST with a single NVIDIA T4 GPU. The look-back is 720 and the horizon is 192.

## Appendix

In this section, we provide additional information, including more detailed experimental details, SegRNN's results in univariate long-term time series forecasting, the convergence of different iteration schemes, and an analysis of the role of positional embeddings in PMF.

### Experimental details

**Datasets.** We utilize the most popular multivariate datasets in Long-term Time Series Forecasting (LTSF), including:

* ETTs ([https://github.com/zhouhaoyi/ETDataset](https://github.com/zhouhaoyi/ETDataset)): These datasets contain data collected from electricity transformers, including 7 indicators recorded from July 2016 to July 2018 in two regions of a province in China. ETTh1 and ETTh2 provide hourly-level data, while ETTm1 and ETTm2 offer data at a 15-minute granularity.
* Electricity: This dataset comprises the hourly electricity consumption of 321 customers from 2012 to 2014.
* Traffic ([https://pems.dot.ca.gov/](https://pems.dot.ca.gov/)): Collected by the California Department of Transportation, this dataset provides hourly data describing road occupancy measured by 862 sensors on highways in the San Francisco Bay Area.
* Weather ([https://www.bgc-jena.mpg.de/wetter/](https://www.bgc-jena.mpg.de/wetter/)): This dataset includes 21 meteorological indicators, such as temperature and humidity, recorded every 10 minutes throughout the year 2020.

We split all the datasets into training, validation, and test sets in chronological order. Following previous research [21, 22, 23, 24], the ETT datasets are split into proportions of 6:2:2, while the other datasets are divided into proportions of 7:1:2.
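A minimal sketch of this chronological split; the helper name and the ratio parameterization are ours, matching the 6:2:2 and 7:1:2 proportions above:

```python
def chronological_split(series, ratios=(0.6, 0.2, 0.2)):
    """Split a time-ordered sequence into train/val/test without shuffling."""
    T = len(series)
    n_train = int(T * ratios[0])
    n_val = int(T * ratios[1])
    return (series[:n_train],
            series[n_train:n_train + n_val],
            series[n_train + n_val:])

# ETT datasets: chronological_split(data, (0.6, 0.2, 0.2))
# Others:       chronological_split(data, (0.7, 0.1, 0.2))
```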
**Baselines.** We selected various state-of-the-art (SOTA) deep learning methods in the LTSF domain as our baselines, including:

* PatchTST [23]: The most advanced Transformer-based method as of July 2023. It improves the performance of Transformers in LTSF by partitioning time series into patches and adopting a channel-independent strategy. The patch technique not only inspired subsequent works such as TSMixer but also influenced the segment-wise iterations strategy proposed in this paper.
* FEDformer [22]: This method combines the Transformer with the seasonal-trend decomposition method and frequency-enhanced techniques, making it one of the classic Transformer-based approaches for time series forecasting.
* Informer [22]: A pioneering method that introduced the ProbSparse self-attention mechanism, self-attention distilling, and the generative-style decoder. It paved the way for Transformers in LTSF and discussed the challenges faced by RNNs in long-term modeling, which led to the rise of Transformer methods and the decline of RNNs in the LTSF domain.
* DLinear [23]: An influential work that defeated the dominant Transformer-based methods, including FEDformer and Informer, by using trend decomposition and a single-layer linear approach. It questioned the necessity of attention mechanisms in modeling time series tasks. Additionally, it indirectly proposed the channel independence technique and brought attention to the importance of a long look-back, inspiring subsequent research.
* TiDE [23]: Clearly inspired by DLinear, TiDE inherits the channel independence technique and upgrades the linear layer to a multi-layer perceptron capable of modeling nonlinear dependencies. TiDE achieves comparable performance to PatchTST and is currently one of the most advanced MLP methods.
* MICN [24]: This method introduces the Multi-scale Isometric Convolution Network to capture both local and global features of time series simultaneously. It is one of the most advanced CNN-based methods as of the current state.
* TimesNet [25]: This method extends the analysis of temporal variations into 2D space by transforming 1D time series into a set of 2D tensors based on multiple periods, providing a task-general foundation model for time series analysis.
* DeepAR [26]: A classic RNN-based time series forecasting model that utilizes a deep LSTM network for probabilistic forecasting. We modified it for long-term forecasting to use as a baseline.
* GRU [1]: One of the powerful variants of the RNN that mitigates the vanishing/exploding gradient problem. We employed it for long-term forecasting as the most basic RNN baseline.

For the results in Table 2, the data for PatchTST, FEDformer, and Informer are sourced from PatchTST's official publication, while the data for DLinear, TiDE, MICN, and TimesNet are from their respective official papers. We implemented DeepAR and GRU ourselves, using a look-back of 720.

**Configuration.** We employed the Adam optimizer [10] to train the models for 30 epochs, with a learning rate decay of 0.8 after the initial 3 epochs. Early stopping was implemented with a patience of 10. The specific parameters used for SegRNN on different datasets are presented in Table 4.
| Datasets | l_back | s_len | d_model | channel | dropout | b_size | l_rate |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ETTh1 | 720 | 48 | 512 | True | 0.5 | 256 | 0.001 |
| ETTh2 | 720 | 48 | 512 | True | 0.5 | 256 | 0.0002 |
| ETTm1 | 720 | 48 | 512 | True | 0.5 | 256 | 0.0002 |
| ETTm2 | 720 | 48 | 512 | True | 0.5 | 256 | 0.0001 |
| Weather | 720 | 48 | 512 | True | 0.5 | 64 | 0.0001 |
| Electricity | 720 | 48 | 512 | True | 0.1 | 16 | 0.0005 |
| Traffic | 720 | 48 | 512 | False | 0.1 | 8 | 0.003 |

Table 4: The complete configuration of the SegRNN results in Table 2.

The meanings of each parameter in the table are as follows:

* l_back: The length of the historical look-back window.
* s_len: The window length for dividing the original sequence into segments.
* d_model: The dimensionality of the hidden variables in the RNN layer.
* channel: Whether to enable channel positional embeddings.
* dropout: The dropout rate.
* b_size: The batch size used for training.
* l_rate: The initial learning rate used in the optimization process.
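For illustration, the ETTh1 row of Table 4 could be instantiated as follows; the `SegRNN` constructor and its argument names are hypothetical, since the official code was not yet released:

```python
# Hypothetical configuration dictionary mirroring the ETTh1 row of Table 4.
etth1_cfg = dict(
    l_back=720,     # look-back window length
    s_len=48,       # segment length w
    d_model=512,    # GRU hidden size d
    channel=True,   # enable channel positional embeddings
    dropout=0.5,
    b_size=256,     # batch size
    l_rate=0.001,   # initial learning rate
)
# model = SegRNN(**etth1_cfg)  # assuming such a constructor exists
```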
### Univariate Forecasting Results

The univariate long-term time series forecasting results of SegRNN and other baselines on the full ETT benchmarks are presented in Table 5. The channel position encoding is disabled in univariate scenarios. Remarkably, SegRNN achieved a top-two position in 30 out of 32 metrics across all scenarios, including 23 first-place rankings, signifying its significant superiority over other baselines. These results further demonstrate the effectiveness of SegRNN and reaffirm the competitiveness of RNN methods in LTSF, whether in multivariate or univariate scenarios.

| Dataset | H | SegRNN (ours) | PatchTST (2023) | DLinear (2023) | MICN (2023) | FEDformer (2022) | Autoformer (2021) | Informer (2021) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ETTh1 | 96 | **0.053** / **0.18** | 0.059 / 0.189 | 0.056 / **0.18** | 0.058 / 0.186 | 0.079 / 0.215 | 0.071 / 0.206 | 0.193 / 0.377 |
| | 192 | **0.068** / 0.208 | 0.074 / 0.215 | 0.071 / **0.204** | 0.079 / 0.21 | 0.104 / 0.245 | 0.114 / 0.262 | 0.217 / 0.395 |
| | 336 | **0.073** / **0.215** | 0.076 / 0.22 | 0.098 / 0.244 | 0.092 / 0.237 | 0.119 / 0.270 | 0.107 / 0.258 | 0.202 / 0.381 |
| | 720 | **0.085** / **0.233** | 0.087 / 0.236 | 0.189 / 0.359 | 0.138 / 0.298 | 0.142 / 0.299 | 0.126 / 0.283 | 0.183 / 0.355 |
| ETTh2 | 96 | **0.121** / 0.272 | 0.131 / 0.284 | 0.131 / 0.279 | 0.155 / 0.3 | 0.128 / **0.271** | 0.153 / 0.306 | 0.213 / 0.373 |
| | 192 | **0.158** / 0.317 | 0.171 / 0.329 | 0.176 / 0.329 | 0.169 / **0.316** | 0.185 / 0.330 | 0.204 / 0.351 | 0.227 / 0.387 |
| | 336 | 0.18 / 0.345 | **0.171** / **0.336** | 0.209 / 0.367 | 0.238 / 0.384 | 0.231 / 0.378 | 0.246 / 0.389 | 0.242 / 0.401 |
| | 720 | **0.205** / **0.365** | 0.223 / 0.38 | 0.276 / 0.426 | 0.447 / 0.561 | 0.278 / 0.420 | 0.268 / 0.409 | 0.291 / 0.439 |
| ETTm1 | 96 | **0.026** / **0.121** | **0.026** / 0.123 | 0.028 / 0.123 | 0.033 / 0.134 | 0.033 / 0.140 | 0.056 / 0.183 | 0.109 / 0.277 |
| | 192 | **0.039** / 0.152 | 0.04 / **0.151** | 0.045 / 0.156 | 0.048 / 0.164 | 0.058 / 0.186 | 0.081 / 0.216 | 0.151 / 0.310 |
| | 336 | **0.052** / 0.177 | 0.053 / **0.174** | 0.061 / 0.182 | 0.079 / 0.21 | 0.084 / 0.231 | 0.076 / 0.218 | 0.427 / 0.591 |
| | 720 | 0.081 / 0.223 | **0.073** / **0.206** | 0.08 / 0.21 | 0.096 / 0.233 | 0.102 / 0.250 | 0.110 / 0.267 | 0.438 / 0.586 |
| ETTm2 | 96 | **0.059** / **0.174** | 0.065 / 0.187 | 0.063 / 0.183 | **0.059** / 0.176 | 0.067 / 0.198 | 0.065 / 0.189 | 0.088 / 0.225 |
| | 192 | **0.084** / **0.215** | 0.093 / 0.231 | 0.092 / 0.227 | 0.1 / 0.234 | 0.102 / 0.245 | 0.118 / 0.256 | 0.132 / 0.283 |
| | 336 | **0.108** / **0.25** | 0.121 / 0.266 | 0.119 / 0.261 | 0.153 / 0.301 | 0.130 / 0.279 | 0.154 / 0.305 | 0.180 / 0.336 |
| | 720 | **0.153** / **0.304** | 0.172 / 0.322 | 0.175 / 0.322 | 0.21 / 0.354 | 0.178 / 0.325 | 0.182 / 0.335 | 0.300 / 0.435 |
| Count | | **30** | 17 | 14 | 5 | 2 | 0 | 0 |

Table 5: Univariate long-term time series forecasting results. The forecast horizon \(H\in\{96,192,336,720\}\) is set for all datasets. Each cell reports MSE / MAE. The reported SegRNN results are averaged over 5 runs. The best results are highlighted in **bold**. The _Count_ row counts the total number of times each method obtained the best or second-best results.

### Convergence of Different Iteration Schemes

Figure 10 illustrates the convergence curves of point-wise iterations and segment-wise iterations. It is evident that segment-wise iterations considerably enhance both the final convergence error and the convergence speed when compared to point-wise iterations. Furthermore, as the segment length increases (indicating a decrease in the number of segment-wise iterations), this improvement becomes more pronounced. In conclusion, reducing the number of iterations as much as possible facilitates the convergence of the RNN and consequently enhances its performance.

Figure 10: The convergence curves of point-wise iterations and segment-wise iterations on the ETTm1 dataset.

### Effect of Positional Embeddings

Table 6 presents the results of the ablation study on the effect of Positional Embeddings (PE). It can be observed that the Relative Position (RP) encoding plays a crucial role in PE. Compared to the original scheme without RP (None), the MSE is reduced by 28.8%. This is because, for Parallel Multi-step Forecasting (PMF), the sequential order between segments is lost, and thus it is essential to provide the model with relative positional information. As for the Channel Position (CP) encoding, it also contributes to some performance improvements.
CP compensates for the missing relationship information between variables in the Channel Independent (CI) strategy, which has been extensively studied in STID [22]. However, in the case of Traffic, it might be an exception due to having over 800 variables. Modeling overly complex variable information solely through simple CP encoding might not be sufficient. Therefore, in the future, finding more reasonable ways to model complex multivariate relationships in time series forecasting will be a challenging yet promising direction.
2310.18708
Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization
The storage of continuous variables in working memory is hypothesized to be sustained in the brain by the dynamics of recurrent neural networks (RNNs) whose steady states form continuous manifolds. In some cases, it is thought that the synaptic connectivity supports multiple attractor manifolds, each mapped to a different context or task. For example, in hippocampal area CA3, positions in distinct environments are represented by distinct sets of population activity patterns, each forming a continuum. It has been argued that the embedding of multiple continuous attractors in a single RNN inevitably causes detrimental interference: quenched noise in the synaptic connectivity disrupts the continuity of each attractor, replacing it by a discrete set of steady states that can be conceptualized as lying on local minima of an abstract energy landscape. Consequently, population activity patterns exhibit systematic drifts towards one of these discrete minima, thereby degrading the stored memory over time. Here we show that it is possible to dramatically attenuate these detrimental interference effects by adjusting the synaptic weights. Synaptic weight adjustments are derived from a loss function that quantifies the roughness of the energy landscape along each of the embedded attractor manifolds. By minimizing this loss function, the stability of states can be dramatically improved, without compromising the capacity.
Haggai Agmon, Yoram Burak
2023-10-28T13:36:55Z
http://arxiv.org/abs/2310.18708v1
# Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization

###### Abstract

The storage of continuous variables in working memory is hypothesized to be sustained in the brain by the dynamics of recurrent neural networks (RNNs) whose steady states form continuous manifolds. In some cases, it is thought that the synaptic connectivity supports multiple attractor manifolds, each mapped to a different context or task. For example, in hippocampal area CA3, positions in distinct environments are represented by distinct sets of population activity patterns, each forming a continuum. It has been argued that the embedding of multiple continuous attractors in a single RNN inevitably causes detrimental interference: quenched noise in the synaptic connectivity disrupts the continuity of each attractor, replacing it by a discrete set of steady states that can be conceptualized as lying on local minima of an abstract energy landscape. Consequently, population activity patterns exhibit systematic drifts towards one of these discrete minima, thereby degrading the stored memory over time. Here we show that it is possible to dramatically attenuate these detrimental interference effects by adjusting the synaptic weights. Synaptic weight adjustments are derived from a loss function that quantifies the roughness of the energy landscape along each of the embedded attractor manifolds. By minimizing this loss function, the stability of states can be dramatically improved, without compromising the capacity.

## Introduction

In the brain, recurrent neural networks (RNNs) involved in working memory tasks are thought to be organized such that their dynamics exhibit multiple attractor states, enabling the maintenance of persistent neural activity even in the absence of external stimuli. In tasks that require tracking of a continuous external variable, these attractors are thought to form a continuous manifold of neural activity patterns. In some well-studied brain circuits in flies and mammals, neural activity patterns have been shown to robustly and persistently reside along a single, low-dimensional manifold, even when the neural activity is dissociated from external inputs to the network [2; 39; 21; 35; 9; 15]. In other brain regions, however, the same neural circuitry is thought to support multiple low-dimensional manifolds, such that activity lies on one of these manifolds at any given time, depending on the context or task. The prefrontal cortex, for example, is crucial for multiple working memory and evidence accumulation tasks [14; 34; 46; 47], and it is thus thought that the synaptic connectivity in this brain region supports multiple attractor manifolds. Naively, however, embedding multiple attractor manifolds in a single RNN inevitably causes interference that degrades the network's performance.

To be specific, we focus here on attractor models of spatial representation by hippocampal place cells [28]. The synaptic connectivity in area CA3 is thought to support auto-associative dynamics, and place cells in this area are often modeled as participating in a continuous attractor network (CAN) [7, 41, 40, 31, 49, 36, 11, 13, 17, 8]. Typically, neurons are first arranged on an abstract neural sheet which is mapped to their preferred firing location in the environment.
The connectivity between any two neurons is then assumed to depend on their distance on the neural sheet, with excitatory synapses between nearby neurons and effectively inhibitory synaptic connections between far-away neurons. This architecture constrains the population activity dynamics to express a localized self-sustaining activity pattern, or 'bump', which can be centered anywhere along the neural sheet. Thus, the place cell network possesses a two-dimensional continuum of possible steady states. As the animal traverses its environment, this continuum of steady states can be mapped in a one-to-one manner to the animal's position.

The same place cells, however, participate in representations of multiple distinct environments, collectively exhibiting global remapping [27]: the relation between the activity patterns of a pair of place cells in one environment cannot predict the relation of their activity patterns in a different environment, and thus each environment has its unique neural population code. To account for this phenomenon in the attractor framework, multiple, distinct continuous attractors which rely on the same place cells are embedded in the connectivity, each representing a distinct single environment [36, 5, 25, 1]. Conceptually, this is similar to the Hopfield model [18], but instead of embedding discrete memory patterns, each embedded memory pattern is a continuous manifold that corresponds to a different spatial map.

Despite the network's ability to represent multiple environments, it has been argued that the embedding of multiple continuous attractors inevitably produces frozen (or quenched) noise that eliminates the continuity of steady states in each of the discrete attractors [36, 32, 19, 25, 26], leading to detrimental drifts of the neural activity. Unlike stochastic dynamical noise, which is expressed in instantaneous neural firing rates, time-independent quenched noise is hard-wired in the connectivity: it breaks the symmetry between the continuum of neural representations. From a qualitative perspective, the bump of activity can be conceptualized as residing on a minimum of an abstract energy landscape in the \(N\)-dimensional space of neural activity (where \(N\) is the number of neurons). This energy landscape is precisely flat in the single-map case, independently of the bump's position (Fig. 1a). However, contributions to the synaptic connectivity included to support additional maps distort this flat energy landscape into a similar, yet wrinkled energy landscape, and the continuous attractors are replaced by localized discrete attractor states (Fig. 1b). Consequently, the system can no longer stably represent a continuous manifold of bump states for any of its embedded attractors. Instead, activity patterns will systematically drift into one of the energy landscape's discrete minima, thereby degrading the network's ability to sustain persistent memory. This highly detrimental effect has been explored in previous works and was recognized as an inherent property of such networks [25, 26, 1]. External sensory inputs can pin and stabilize the representation [26, 1], but it remained unknown whether internal mechanisms could potentially attenuate the quenched noise in the system and stabilize the representations, independently of external sensory inputs.
These insights raise the question of whether it is possible to embed multiple continuous attractors in a single RNN while eliminating interference effects and retaining a precisely flat energy landscape for each attractor, but without simply reducing the network capacity. Here, we first formalize the concept of an energy landscape in the multiple-attractor case. We then minimize an appropriately defined loss function to flatten the energy landscape for all embedded attractors. We show that the energy landscape can be made nearly flat, which is reflected by an attenuation of the drifts and a dramatic improvement in the stability of attractor states across the multiple maps. These results provide a proof of principle that internal brain mechanisms can support maintenance of persistent representations in neural populations which participate in multiple continuous attractors simultaneously, with much higher stability than previously and naively expected.

Figure 1: **Energy landscape.** Schematic illustration of energy surfaces along a 1-dimensional attractor when a single (**a**), and multiple (**b**), maps are embedded. Cyan traces show energy landscapes along the bottom of the surfaces.

## Results

We consider the coding of spatial position in multiple environments, or _maps_, by hippocampal place cells. Within each map, the connectivity between place cells produces a continuous attractor. We adopt a simple CAN synaptic architecture with spatially periodic boundary conditions. In the one-dimensional analogue of this architecture, this connectivity maps into the ring attractor model [7; 41; 31; 49]: neurons, functionally arranged on a ring, excite nearby neighbors, while global inhibitory connections elicit mutual suppression of activity between distant neurons. This synaptic architecture leads to a bump of activity, which can be positioned anywhere along the ring. The overall synaptic connectivity \(\mathbf{J}\) is expressed as a sum over contributions from different maps, \(\mathbf{J}=\sum_{l=1}^{L}J^{l}\), where \(L\) is the number of embedded maps (see Supplementary Material, SM). To mimic the features of global remapping, a distinct spatial map is generated independently for each environment by choosing a random permutation that assigns all place cells to a set of preferred firing locations that uniformly tile the environment.
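A minimal NumPy sketch of this construction; the kernel shape, constants, and scaling are illustrative assumptions (the paper's exact connectivity profile is given in the SM):

```python
import numpy as np

N, L_maps = 600, 10
theta = 2 * np.pi * np.arange(N) / N
# Shortest angular distance between preferred positions on the ring:
dist = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
# Local excitation minus a uniform (effectively inhibitory) background:
kernel = np.exp(-dist**2 / (2 * 0.3**2)) - 0.1

rng = np.random.default_rng(0)
J = np.zeros((N, N))
for _ in range(L_maps):
    perm = rng.permutation(N)        # random preferred-location assignment
    J += kernel[np.ix_(perm, perm)]  # this map's contribution J^l
J /= N                               # overall scaling (an assumption)
```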
Initialization of the network from any arbitrary state in this case will always converge to a steady state which is a localized _idealized bump_ of activity. Idealized bumps have a smooth symmetric structure, and are invariant as they can evolve at any position along the attractor. It is sufficient, however, to embed only one additional map to distort the continuity of true steady states. Embedding multiple maps introduces quenched noise in the system which eliminates the ability to represent a true continuum of steady states in each of the attractors. From the energy perspective, the quenched noise is reflected in distortions of the energy landscape, as it becomes wrinkled with multiple minima (Fig. 1b). Thus, the discrete attractors lose their ability to represent a true continuum of steady states, and the memory patterns consequently accrue detrimental drifts (Fig. 2). The idealized bumps from the single map case are no longer steady states of the dynamics when multiple maps are embedded. Instead, activity patterns converge on distorted versions of idealized bumps. Nevertheless, as long as the network is below its capacity, this activity is still localized. When initializing the network from an idealized bump at a random position along any of the discrete attractors, it will instantaneously become slightly distorted and will then systematically drift to the energy minimum point in its basin of attraction (Fig. 2b). These effects are enhanced as the number of embedded maps (and thus the magnitude of quenched noise) is increased and until capacity is breached when the exhibited activity patterns are not expected to be localized in any of the attractors. ### Flattening the energy landscape To attenuate the systematic drifts that emerge upon embedding of multiple maps, we focused on flattening of the wrinkled energy landscape. The methodology throughout this work relied on the following two-step procedure: first, the wrinkled energy landscape was evaluated. Then, based on this evaluation, small modifications \(\mathbf{M}\) were added to the original synaptic weights to flatten the energy landscape (SM). Importantly, the goal was to flatten the energy landscape for all maps simultaneously, which rules out the trivial solution of flattening a subset of maps by unembedding the others. Generally, the energy value of each arbitrary state for any RNN with symmetric connectivity is given by the following Lyapunov function [10]: \[E(\vec{I},\mathbf{W})=\sum_{i=1}^{N}\left(\int_{0}^{I_{i}}\mathrm{d}z_{i}z_{i} \phi^{\prime}\left(z_{i}\right)-h\phi\left(I_{i}\right)-\frac{1}{2}\sum_{j=1}^ {N}\phi\left(I_{i}\right)\mathbf{W}_{i,j}\phi\left(I_{j}\right)\right) \tag{1}\] where \(E\) is the energy value, \(\vec{I}\) is the population synaptic activity, \(\mathbf{W}\) is the connectivity, \(h\) is an external constant current, and \(\phi\) is the neural transfer function. To flatten the wrinkled energy landscape, it was first necessary to evaluate it along each of the approximated attractor manifolds. Since \(h\), \(\phi\), and number of neurons (\(N\)) are assumed to be fixed, the energy value (Eq. S8) of each state depends only on the population activity (\(\vec{I}\)) and the synaptic connectivity (\(\mathbf{W}\)) which, in our case, is decomposed into the embedded maps and the weight modifications, namely, \(\mathbf{W}=\sum_{l}J^{l}+\mathbf{M}\). 
We first took a perturbative approach in which we examined the idealized bumps that emerge when only a single map is embedded in the connectivity, while treating all the contributions to the connectivity that arise from the other maps as inducing a small perturbation to the energy. In Eq. (S8), corrections to the energy of stationary bump states in map \(l\) arise from two sources: First, a contribution arising directly from the synaptic weights associated with the other maps (\(J^{l^{\prime}}\) where \(l^{\prime}\neq l\)), as well as from the modification weights \(\mathbf{M}\) (third term in Eq. S8). This term, to leading order, is linear in the synaptic weights. Second, corrections arising from deformation of the bump state. Because a deformed bump drifts slowly when the perturbation to the synaptic weights is weak, it is possible to conceptually define bump states along the continuum of positions, which are nearly stationary (a precise way to do so will be introduced later on). This contribution to the energy modification is quadratic in the deformations, because the idealized bump states are minima of the unperturbed energy functional, and therefore the energy functional is locally quadratic near these minima. Hence, to leading order in the perturbation, we neglect this modification in this section.

Figure 2: **Embedding multiple maps distorts the idealized bump and induces systematic drifts.** **a,** Two examples (left and right) showing snapshots of population activity at three time points (0 ms, 10 ms and 1500 ms). Activity was initialized at a random position along the attractor in each example. Population activity remains stationary when a single map is embedded (top), but distorts and drifts when ten maps are embedded (bottom). **b,** Superimposed bump positions versus time of ten independent idealized bump initializations (red), in two different embedded maps (top - map #3, bottom - map #5), out of ten total embedded maps. The stable bump position obtained when a single map is embedded is plotted for reference (dashed traces). Systematic drift is evident when ten maps are embedded.

Overall, the energy \(E_{\mathrm{ib}}^{k,l}\) of an idealized bump (ib) state centered around position \(k\) in map \(l\) is,

\[E_{\mathrm{ib}}^{k,l}=E_{0}-\frac{1}{2}\sum_{l^{\prime}\neq l}\widetilde{r}_{0}^{\,k,l\,T}J^{l^{\prime}}\widetilde{r}_{0}^{\,k,l}-\frac{1}{2}\widetilde{r}_{0}^{\,k,l\,T}\mathbf{M}\widetilde{r}_{0}^{\,k,l} \tag{2}\]

where \(E_{0}\) is the energy value of an idealized bump state in the case of a single embedded map (which is independent of \(k\) and \(l\)), and \(\widetilde{r}_{0}^{k,l}=\phi\left(\widetilde{I}_{0}^{k,l}\right)\) is an idealized bump around position \(k\) in map \(l\). As a first step, we evaluated the first two terms in Eq. 2, which represent the Lyapunov energy of idealized bump states in the absence of the weight modifications \(\mathbf{M}\). In each one of the maps, the energy landscape was evaluated using these idealized bumps at \(N\) uniformly distributed locations, centered around the \(N\) preferred firing locations of the neurons. Since this resolution is much smaller than the width of the bump, the continuous energy landscape was densely sampled. As expected, a precisely flat energy landscape was observed when only a single map was embedded (Fig. 3a, blue traces), as the attractor is truly continuous in this case. However, when multiple maps were embedded, a wrinkled energy landscape was observed (Fig. 3a, orange traces), as a consequence of the quenched noise in the system.
We next sought small modifications in the synaptic weights which could potentially flatten the energy landscape with only mild effects on the population activity patterns. Under the approximations considered above, flattening the energy landscape requires that the right hand side of Eq. 2 is a constant, independent of \(k\) and \(l\). Because the energy depends on the connectivity in a linear fashion to leading order, we obtain a set of linear equations. Using the discretization described above, and when \(N>2L+1\), such a system is underdetermined, with more unknown weight modifications than sampled energy evaluations along the attractors. Out of the infinite space of solutions, the least squares solution, which has the overall minimal \(\mathrm{L}_{2}\) norm of connectivity modifications, was chosen.

Figure 3: **Energy landscape is distorted when multiple maps are embedded but can be flattened to reduce drifts.** **a,** Evaluated energy landscape using idealized bumps, along two representative embedded maps (top - map #3, bottom - map #5), out of the total \(L=10\) embedded maps (orange) used in Fig. 2. The energy landscape obtained when \(L=1\) is superimposed in blue. For visibility purposes, energy landscapes are shown after subtraction of the mean across all positions and maps (SM). **b,** Re-evaluated energy landscape using idealized bumps after weight modifications were added (orange). The landscape is precisely flat. Note that this is only an approximation to the actual energy landscape, due to the use of idealized bumps (see text). Top: energy along map #3 (blue trace is identical to orange trace in panel a, top). Bottom: same as top, for all embedded maps concatenated. **c,** Bump trajectories as in Fig. 2b, but with added weight modifications. Qualitatively, the magnitude of the drifts is reduced (compare with Fig. 2b).

As expected, re-evaluating the energy using idealized bumps with the addition of the weight modifications yielded a precisely flat energy landscape (Fig. 3b, orange traces).
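Concretely, this flattening step amounts to solving an underdetermined linear system for the free entries of \(\mathbf{M}\) and taking its minimum-norm solution (made explicit in the SM, Eqs. S11-S13). Below is a minimal NumPy sketch under our own variable names; note that for the full system (\(N=600\), \(L=10\)) the design matrix is very large, so this is an illustration of the computation rather than an optimized implementation.

```python
import numpy as np

def flattening_modifications(rates, energies):
    """Minimum-norm symmetric weight modifications M (zero diagonal) that
    equalize the energies of the sampled states to linear order
    (Eqs. S11-S13 in the SM).

    rates   : (S, N) array of firing-rate vectors, one row per sampled state
    energies: (S,) energies of these states under the unmodified connectivity
    """
    S, N = rates.shape
    iu, ju = np.triu_indices(N, k=1)      # the (N^2 - N)/2 free entries of M
    # Row s holds the coefficients of T4 = -sum_{i<j} M_ij r_i r_j (Eq. S11).
    A = -(rates[:, iu] * rates[:, ju])
    # Subtract the first equation to eliminate the unknown constant C.
    A_d = A[1:] - A[0]
    kappa = energies[0] - energies[1:]    # demand equal total energies
    # lstsq returns the minimum-norm solution of the underdetermined system.
    m, *_ = np.linalg.lstsq(A_d, kappa, rcond=None)
    M = np.zeros((N, N))
    M[iu, ju] = m
    return M + M.T
```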
This does not imply, however, that drifts will vanish, since we evaluated the true energy landscape only up to first order in the deviations from an idealized bump state. To test whether the weight modifications led to decreased drifts, we independently initialized the network using an idealized bump, centered in each realization at one of ten uniformly distributed positions in each attractor. First, changes in the energy of the population activity through time were monitored, to verify that steady states were reached. As expected for a Lyapunov energy function, this quantity decreased monotonically, and stabilized almost completely for all initial conditions within three seconds, indicating that the convergence to a steady state was nearly complete (Fig. 4a). To further validate that activity was nearly settled within three seconds, we measured the drift rates and the rate of mean squared change in the firing rate of all the neurons, as defined below. Only subtle changes were still observed beyond this time (Supplementary Fig. 1), justifying the consideration of the states achieved after three seconds as approximated steady states. We next evaluated a _bump score_ that quantifies the similarity of the population activity with any one of the idealized bump states, placed at all possible positions along all the attractors (SM). In the large \(N\) limit, a sharp transition is expected when capacity is breached, but this effect is smoothed for a smaller and finite number of neurons, as used in this study (SM). Nevertheless, adding the weight modifications did not decrease the bump score (Fig. 4b), indicating that the network can still reliably represent its multiple embedded maps. Our focus is on networks below the capacity, which we roughly estimate to be \(\sim\)20 maps for \(N=600\). To quantify the stability of the network, we measured the mean squared change in the firing rate of all neurons, from initialization and until approximated steady states were reached (at 3 s). This measure of stability improved dramatically due to the synaptic weight modifications (Fig. 4c). A second measure of stability was the drift of the bumps, obtained by examining the mean distance bumps traveled. The bump location was defined as the position that maximized the bump score. We found that the distance bumps traveled over 3 s from initialization decreased significantly when the weight modifications were added to the connectivity (Fig. 4d).

Figure 4: **Effects of weight modifications obtained using the idealized bumps scheme on the stability of bump states.** **a,** Energy difference between consecutive network states without (left) and with (right) the weight modifications as a function of \(L\), the total number of embedded maps. In both cases, the changes in energy of network states are negligible after 3 seconds, indicating that steady states were reached (see also Supplementary Fig. 1). **b,** The bump score (SM) without (blue) and with (orange) the weight modifications as a function of \(L\). **c,** Mean squared change of all neurons' firing rates [Hz] between time points 0 and 3 sec, without (blue) and with (orange) the weight modifications as a function of \(L\). Note that the scale is logarithmic. **d,** Measured drifts without (blue) and with (orange) the weight modifications as a function of \(L\). Error bars are \(\pm\)1.96 SEM in all panels (SM).

Since the spatial resolution of the energy landscape evaluation is identical to that in which neurons tile the environment, drifts lower than a single unit practically correspond to a perfectly stable network. These results (Fig. 4c-d) demonstrate that flattening the approximated energy landscape across all maps by adding the synaptic weight modifications indeed increased the stability of the multiple embedded attractors. Despite the approximation of the energy function to first order in the weight modifications, our approach achieved a dramatic improvement in the network stability. Two limitations of this approach can be clearly identified: first, the energy landscape was evaluated for idealized bump states, even though these activity patterns are not the true steady states when multiple maps are embedded in the connectivity. Second, weight modifications were designed to correct the energy only up to first order. Once introduced, these weight modifications produce slight changes in the structure of the steady states, which in turn generate higher-order corrections to the energy that were neglected in the approximation. Next, we describe how these limitations can be overcome by defining a more precise loss function for the roughness of the energy landscape.
### Iterative constrained gradient descent optimization

As discussed above, when \(L>1\) the unmodified system does not possess true steady states at all positions: following initialization, an idealized bump state is immediately distorted, and then systematically drifts to a local minimum of the energy functional (Fig. 2). In order to define an energy landscape over a continuum of positions, it is thus necessary to first formalize the concept of states that have reached an energy minimum but also span a continuum of positions. To do so, we considered the minima of the energy functional under a constraint on the center of mass of the bump. In practice, such minima are found by idealized bump initialization, followed by gradient descent on the energy functional under the constraint. A distorted bump then dynamically evolves to minimize the Lyapunov function, while maintaining a fixed center of mass at its initial position (Supplementary Fig. 2). See SM for details on the constrained optimization procedure and its validation. Since the goal of the constraint is to parametrize all states along the approximated continuous attractor, its exact definition is not crucial as long as the optimization is applied along a dense representation of positions. By finding the minima of the energy functional along such a dense sample of positions, it is possible to precisely measure the energy landscape of the system along each one of the attractors. As expected, the evaluated energy landscape obtained using gradient optimization achieved lower energy values compared to the energy landscape obtained using idealized bumps (Fig. 5a, orange traces). Using this formal definition of the position-dependent energy, it is possible to formulate a learning objective for the modification weights \(\mathbf{M}\), designed to flatten the energy landscape. We denote by \(E^{k,l}\) the constrained minimum of the energy at position \(k\) in map \(l\), evaluated using the constrained gradient minimization scheme described above. We approached our goal by attempting to equate the values of \(E^{k,l}\) for all \(k\) and \(l\), using an iterative gradient-based scheme (SM). It is straightforward to evaluate the gradient of \(E^{k,l}\) with respect to \(\mathbf{M}\), since up to first order in \(\Delta\mathbf{M}\),

\[E^{k,l}(\mathbf{M}+\Delta\mathbf{M})-E^{k,l}(\mathbf{M})\simeq-\frac{1}{2}\vec{R}^{\,k,l^{T}}\left(\Delta\mathbf{M}\right)\vec{R}^{\,k,l} \tag{3}\]

where \(\vec{R}^{k,l}\) is the firing rate population vector at the constrained minimum corresponding to \(E^{k,l}\). We note that the constrained minima themselves are modified due to the change in \(\mathbf{M}\), but these modifications contribute only terms of order \(\left(\Delta\mathbf{M}\right)^{2}\) to the change in the energy (see SM). In each iteration of the weight modification scheme, \(E^{k,l}\) and \(\vec{R}^{\,k,l}\) were first evaluated numerically for all \(k\) and \(l\) using constrained gradient optimization and the values of \(\mathbf{M}\) from the previous iteration (starting from vanishing modifications in the first iteration). Next, in iteration \(i\), we sought adjustments \(\Delta\mathbf{M}_{i}\) to the weight modifications \(\mathbf{M}\), that equate the energy across all \(k\) and \(l\), to leading order. As in the previous section, Eq. 3 yields an underdetermined set of linear equations (assuming that \(N>2L+1\)), where the least squares solution was chosen in each iteration.
Fig. 5b-c shows results obtained over several gradient optimization iterations. The mean absolute value and standard deviation of weight modifications decreased with the iteration number (Fig. 5b), and the flatness of the energy landscape improved systematically in successive iterations (Fig. 5c, top, and Supplementary Fig. 3). The standard deviation of the energy across locations and maps decreased by almost four orders of magnitude after four iterations, and the maximal deviation of the energy from its mean decreased dramatically as well (Fig. 5c, bottom). We next examined the consequences of improved flatness of the energy landscape on the network's stability. Qualitatively, individual states exhibited much less drift after five iterations (Fig. 6a), compared to the unmodified scenario (Fig. 2b), and also in comparison with the scheme based on idealized bumps (Fig. 3c). To systematically quantify the improved stability, we first validated that approximated steady states were reached three seconds after initialization (Fig. 6b), and that the bump score was not significantly affected by these modifications (Fig. 6c), as in the idealized bump approach (Fig. 4a-b). Next, we examined two measures of stability (as in Fig. 4c-d). The mean squared change in the firing rate of all the neurons, across 3 s from initialization, was dramatically improved, by almost three orders of magnitude, compared to the pre-modified networks for the smaller values of \(L\) (Fig. 6d). Measured drifts were attenuated significantly as well (Fig. 6e). Note that, as described above, drifts which are below or equal to the discretization precision of a single unit correspond to perfect stability. For completeness, drifts were also measured using the phase of the population activity vector and demonstrated similar results (Supplementary Fig. 4). Taken together, these results show that with a few constrained gradient optimization iterations, a dramatic increase can be achieved in the inherent stability of a network forming a discrete set of continuous attractors.

Figure 5: **Energy landscape evaluation and modifications using constrained gradient optimization.** **a,** Energy landscape evaluated using constrained gradient optimization (orange) for maps #3 (top) and #5 (bottom), out of \(L=10\) embedded maps as used in Figs. 2 and 3. For reference, energy landscapes evaluated using idealized bumps are plotted as well (blue traces, identical to the orange traces from Fig. 3a). As expected, the energy after gradient optimization (orange trace) is lower. Re-evaluating the energy landscape to leading order in the weight modifications yielded a precisely flat energy landscape when evaluated using the pre-modified network steady states (yellow). **b,** Top: mean absolute value of the weights in the original connectivity matrix (blue) and in modification adjustments for each gradient optimization iteration, as a function of \(L\), the total number of embedded maps. Bottom: corresponding standard deviation. Error bars are \(\pm 1.96\) SEM (SM). **c,** Top: standard deviation of the energy landscape without (blue) and with the weight modification for each gradient optimization iteration as a function of \(L\). Bottom: same as top, but showing the peak absolute difference between all energy values and their mean. Error bars are \(\pm 1.96\) SEM.

## Discussion

Interference between distinct manifolds is highly detrimental for the function of RNNs designed to represent multiple continuous attractor manifolds. Naively, it is sufficient to embed only a second
manifold in a network to destroy the continuity of the energy landscapes in each attractor. This leads to the loss of the continuity of steady states, which are replaced by a limited number of discrete attractors. In tasks that require storage of a continuous parameter in working memory, these effects are manifested by systematic drifts that degrade the memory over time. Sensory inputs can, hypothetically, pin and stabilize the representation [26; 1], but in the absence of correcting external inputs the representation is destined to systematically drift to the nearest local energy minima, thereby impairing network functionality. This has been thought to be a fundamental limitation of such networks. Here we reexamined this question by adding small weight modifications, tailored to flatten the high dimensional energy landscape along the approximate attractor, to the naive form of the connectivity matrix. We found that appropriately chosen weight modifications can dramatically improve the stability of states along the embedded attractors. Furthermore, the objective of obtaining a flat energy landscape appears to be largely decoupled from the question of capacity, which is also limited by interference between multiple attractors. Indeed, introducing the weight modifications did not qualitatively affect the dependence of the bump score on \(L\) (Fig. 6c). In particular, the kink in this function occurs at approximately \(L=20\) (for \(N=600\)) both in the pre- and post-modified networks. Theoretically, the position of the kink is expected to roughly match the location of a sharp transition in the large \(N\) limit, while keeping \(L/N\) fixed. A recent work [6] suggests that the capacity may depend logarithmically, and thus weakly, on the prescribed density of steady states. It will be interesting to explore the relation of this result to our framework, as it was obtained for RNNs composed of binary neurons, with a linear kernel support vector machine based learning rule.

Figure 6: **Effects of weight modifications obtained using constrained gradient optimization on the stability of bump states.** **a,** Bump trajectories as in Figs. 2b and 3c, but with added weight modifications obtained after five constrained gradient optimization iterations. Qualitatively, the drifts almost vanish (compare with Figs. 2b and 3c, and note cyclic boundary conditions). **b,** Same as Fig. 4a, but after five iterations of the constrained gradient optimization scheme. **c-e,** Same as Fig. 4b-d, but with superimposed results obtained after five constrained gradient optimization iterations (yellow traces). Adding weight modifications did not reduce the bump score (panel c). The network stability improved significantly (panels d-e) compared to the unmodified network (blue traces) and to the modified network using the idealized bumps scheme (orange traces). Error bars are \(\pm 1.96\) SEM (SM).

We explored our schemes in 1D to reduce the computational cost, but it is straightforward to extend our approach to 2D. One notable difference between 1D and 2D environments is that the number of neurons required to achieve a good approximation to a continuous attractor, even for a single map, scales in proportion to the area in 2D, as opposed to length in 1D. However, for a given number of neurons, there is no substantial difference between the two cases in terms of the complexity of the
problem: the number of equations scales as \(NL\), and the number of parameters (synaptic weights) scales as \(N^{2}\). Since the random permutations are completely unrelated to the spatial organization of the firing fields, quenched (frozen) noise is expected to behave similarly in the two cases. For simplicity, we assumed that each cell is active in each environment, but it is straightforward to adapt the architecture to one in which the participation ratio, \(p\), defined as the average fraction of maps in which each cell participates, is smaller than unity. Measurements in CA3, performed in the same cells in multiple environments, indicate that CA3 cells are active only in a subset of environments, with \(p\sim 15\%\) as a rough estimate [3]. Clearly, the quenched noise in the system increases in proportion to the number of embedded maps \(L\), and it will be interesting in future work to assess how the roughness of the energy landscape depends on \(N\), \(L\), and \(p\). We note that even though small \(p\) implies low interference, it also implies a reduction in the number of neurons participating in each map, and in each bump state. This is expected to reduce the resilience of each attractor to the quenched noise. Hence, the overall effect of \(p\) on the roughness of the energy landscape (when keeping \(N\) fixed) is non-trivial. Previous works have suggested that synaptic scaling [32] or short term facilitation [19, 38] mechanisms can stabilize working memory in networks with heterogeneous connectivity. These mechanisms were achieved, however, in networks which are equivalent to the single map case but with random added heterogeneity. In this respect, the source of heterogeneity in these networks is different from the one in this work, where heterogeneity arises from the embedding of multiple coexisting attractors. It would be interesting to investigate whether a synaptic facilitation mechanism [19] can be implemented in the multiple map case and further improve the network's stability alongside the energy-based approach proposed here. However, since in a discrete set of continuous attractors each neuron is participating in multiple representations, a synaptic scaling mechanism [32] is unlikely to achieve such stabilization. In addition, it may also be interesting to explore other potential methods for the design of a synaptic connectivity that supports multiple, highly continuous attractors, other than the one explored here. These might include generalization of approaches that were based on the pseudo-inverse learning rule for embedding of correlated memories in binary networks [29], or training of rate networks to possess a near-continuum of persistent states by requiring stability of a low-dimensional readout variable [12]. Our approach is based on the minimization of a loss function that quantifies the energy landscape's flatness. This raises several important questions from a biological standpoint. First, the connectivity structure obtained after training is fine-tuned, raising the question of whether neural networks in the brain can achieve a similar degree of fine tuning. A similar question applies very broadly to CAN models, yet, there is highly compelling evidence for the existence of CAN networks in the brain [45, 2, 48, 46, 39, 21, 15]. We also note that out of an infinite space of potential weight modifications only a specific set (least-squares) was chosen, leaving many unexplored solutions to the energy flattening problem which may relax the fine-tuning requirement. 
Second, our gradient-based learning rule for the minimization of the loss function was not biologically plausible. In recent years, however, many important insights were obtained on computation in biological neural networks by training RNN models using gradient based learning [43, 4, 33, 44, 24, 30, 20, 37, 16, 22, 42]. The (often implicit) assumption is that biological plasticity in the brain can reach similar connectivity structures as those obtained from gradient based learning, even if the learning rules are not yet fully understood. Thus, our results should be viewed as a proof of principle: multiple embedded manifolds, previously assumed to be inevitably unstable, can be restructured through learning to achieve highly stable, nearly continuous attractors. It will be of great interest to seek biologically plausible learning rules that could shape the neural connectivity into similar structures. Such rules might be derived from the goal of stabilizing the neural representation during memory maintenance, either based solely on the neural dynamics, or on corrective signals arising from a drift of sensory inputs relative to the internally represented memory [23].

**Acknowledgments**

The study was supported by the European Research Council Synergy Grant no. 951319 ("KILO-NEURONS"), and by grant nos. 1978/13 and 1745/18 from the Israel Science Foundation. We further acknowledge support from the Gatsby Charitable Foundation. Y.B. is the incumbent of the William N. Skirball Chair in Neurophysics. This work is dedicated to the memory of Mrs. Lily Safra, a great supporter of brain research.

## Supplementary Information

### Network connectivity

The network consists of \(N=600\) neurons, which represent positions in \(L\) one-dimensional periodic environments. A distinct spatial map is generated for each environment, by choosing a random permutation that assigns all place cells to a set of preferred firing locations that uniformly tile this environment. The synaptic connectivity between place cells is expressed as a sum over contributions from all spatial maps:

\[\mathbf{J}=\sum_{l=1}^{L}J^{l}\] (S1)

where \(J^{l}_{i,j}\) depends on the periodic distance between the preferred firing locations of cells \(i\) and \(j\) in environment \(l\), as follows

\[J^{l}_{i,j}=\begin{cases}A\mathrm{exp}\left[-\dfrac{\left(d^{l}_{i,j}\right)^{2}}{2\sigma^{2}}\right]+b&i\neq j\\ 0&i=j\end{cases}\] (S2)

The first term is an excitatory contribution to the synaptic connectivity that decays with the periodic distance \(d^{l}_{i,j}\), with a Gaussian profile \(\left(0\leq d^{l}_{i,j}<N/2\right)\). The second term is a uniform inhibitory contribution. The parameters \(A>0\), \(\sigma\), and \(b<0\) are listed in Network parameters. Note that the connectivity matrices corresponding to any two maps \(l\) and \(k\) are related to each other by a random permutation:

\[J^{l}_{i,j}=J^{k}_{\pi^{l,k}(i),\pi^{l,k}(j)}\] (S3)

where \(\pi^{l,k}\) denotes the random permutation from map \(k\) to map \(l\). Even though \(\mathbf{J}\) is an \(N\times N\) matrix, it has only \(\left(N^{2}-N\right)/2\) unique terms, since \(\mathbf{J}=\mathbf{J}^{T}\) and \(\mathbf{J}_{i,i}=0\).
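A minimal NumPy sketch of this construction (Eqs. S1-S3) follows, using the parameter values listed below in Network parameters; the random seed and function names are our own choices, not from the published code.

```python
import numpy as np

def single_map_connectivity(N=600, A=0.665, sigma=15.0, b=-0.2076):
    """Ring connectivity of one map (Eq. S2): Gaussian excitation in the
    periodic distance plus uniform inhibition b, with no self-coupling."""
    idx = np.arange(N)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, N - d)                  # periodic distance
    J = A * np.exp(-d**2 / (2.0 * sigma**2)) + b
    np.fill_diagonal(J, 0.0)
    return J

def multi_map_connectivity(N=600, L=10, seed=0):
    """Total connectivity J = sum_l J^l (Eqs. S1, S3): each map is a
    random permutation of the base ring connectivity."""
    rng = np.random.default_rng(seed)
    J0 = single_map_connectivity(N)
    J = np.zeros((N, N))
    for _ in range(L):
        p = rng.permutation(N)                # preferred-location assignment
        J += J0[np.ix_(p, p)]
    return J
```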
### Dynamics

The dynamics of neural activity are described by a standard rate model. The total synaptic current \(I_{i}\) into place cell \(i\) evolves in time according to the following equation:

\[\tau\dot{I}_{i}=-I_{i}+h+\sum_{j=1}^{N}\mathbf{J}_{i,j}\cdot\phi\left(I_{j}\right)\] (S4)

The synaptic time constant \(\tau\) is taken for simplicity to be identical for all synapses (Network parameters). The external current \(h\) is constant in time and identical for all cells. This current includes two terms: \(h=h_{0}-(L-1)N\bar{J}\bar{R}\). The first term, \(h_{0}\), is the baseline current required to drive activity when a single spatial map is embedded in the connectivity. In the second term, \(\bar{J}\) is the average of the elements in a row of the single-map connectivity matrix, and \(\bar{R}\) is the average firing rate of place cells in the bump steady state when \(L=1\). The second term compensates on average for the inputs arising from the connectivity associated with the \(L-1\) maps other than the active map. It thus guarantees that the mean input to all neurons in an idealized bump state will be independent of the number of embedded maps. The transfer function \(\phi\) determines the firing rate (in Hz) of place cells (\(r\)) as a function of their total synaptic inputs. To resemble realistic neuronal F-I curves, it is chosen to be sub-linear:

\[\phi\left(x\right)=\begin{cases}0&x\leq 0\\ \sqrt{x}&x\geq 0\end{cases}\] (S5)

Note that \(\phi^{\prime}\left(x\right)\geq 0\;\;\forall x\), which implies that the Lyapunov energy (Eq. S8) cannot increase. We implemented the dynamics using the Euler method for numeric integration, with a time step \(\Delta t\) (Network parameters).

**Network parameters**

| Parameter | Value |
| --- | --- |
| \(A\) | \(0.665\;\mathrm{Hz}\) |
| \(\sigma\) | \(15\) neuron units |
| \(b\) | \(-0.2076\;\mathrm{Hz}\) |
| \(\tau\) | \(15\;\mathrm{ms}\) |
| \(\Delta t\) | \(0.2\;\mathrm{ms}\) |
| \(h_{0}\) | \(10\;\mathrm{Hz}^{2}\) |

**Bump score and location analysis**

To identify whether the place cell network expresses a bump state, and to identify its location \(x\) and associated spatial map \(l\), we define an overlap coefficient \(q^{l}\left(x\right)\) that quantifies the normalized overlap between the population activity pattern and the activity pattern corresponding to position \(x\) in spatial map \(l\):

\[q^{l}\left(x\right)=\sum_{i}P_{i}^{l}\left(x\right)\cdot\hat{r}_{i}\] (S6)

where \(\hat{r}_{i}\) is the normalized firing rate of place cell \(i\) such that the maximal firing rate across all cells is 1, and \(P_{i}^{l}\left(x\right)\) is the normalized firing rate of neuron \(i\) in an idealized bump state localized at position \(x\) in map \(l\). The idealized bump (as defined above) is obtained from the activity of a network in which a single map (map \(l\)) is embedded in the neural connectivity, and therefore there is no quenched noise. This definition of the bump score is similar to the use of the overlap measure between the memory pattern and the network state in the theory of Hopfield networks. Next, we define a bump score for each spatial map, defined as the maximum of \(q^{l}\left(x\right)\) over all positions \(x\) in spatial map \(l\):

\[Q^{l}=\max_{x}\;q^{l}\left(x\right)\] (S7)

Finally, the map with the highest \(Q^{l}\) value is considered as the winning map, and the location \(x\) that generated that value is considered as the location of the place cell bump within that map.
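To illustrate, the following NumPy sketch integrates the rate dynamics (Eqs. S4-S5) with the Euler method and evaluates the bump score (Eqs. S6-S7). The idealized bump templates \(P_i^l(x)\) are assumed to be precomputed from single-map networks, and the constant current \(h\) is taken as given; this is a sketch under those assumptions, not the published implementation.

```python
import numpy as np

def phi(x):
    """Sub-linear transfer function (Eq. S5)."""
    return np.sqrt(np.maximum(x, 0.0))

def simulate(I0, J, h, T_ms=3000.0, tau=15.0, dt=0.2):
    """Euler integration of Eq. S4: tau dI/dt = -I + h + J @ phi(I)."""
    I = I0.copy()
    for _ in range(int(T_ms / dt)):
        I += (dt / tau) * (-I + h + J @ phi(I))
    return I

def bump_score(I, templates):
    """Bump score and location (Eqs. S6-S7).
    templates: (L, N, N) array of normalized idealized bumps P_i^l(x),
    indexed by map l, candidate position x, and neuron i."""
    r_hat = phi(I)
    r_hat = r_hat / r_hat.max()                    # peak firing rate -> 1
    q = np.einsum('lxi,i->lx', templates, r_hat)   # overlaps q^l(x)
    l_win, x_win = np.unravel_index(np.argmax(q), q.shape)
    return q[l_win, x_win], l_win, x_win
```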
**Connectivity modifications to flatten the energy landscape**

The Lyapunov energy depends on the total synaptic current of all neurons, represented by the vector \(\vec{I}\), and on the synaptic connectivity \(\mathbf{J}\) as follows (Cohen and Grossberg, 1983),

\[E(\vec{I},\mathbf{J})=\sum_{i=1}^{N}\left(\int_{0}^{I_{i}}\mathrm{d}z_{i}\,z_{i}\phi^{\prime}\left(z_{i}\right)-h\phi\left(I_{i}\right)-\frac{1}{2}\sum_{j=1}^{N}\phi\left(I_{i}\right)\mathbf{J}_{i,j}\phi\left(I_{j}\right)\right)\] (S8)

The evaluation of the energy landscape was performed for all attractors at uniformly distributed positions, where the synaptic activities \(\vec{I}\) were centered around place cell preferred firing positions. These synaptic activities were either idealized bumps or the converged activities obtained through constrained gradient optimization (described in the next subsection). After evaluating the energy landscape, we sought to find a connectivity modification matrix \(\mathbf{M}\), such that its addition to the connectivity \(\mathbf{J}\) will result in an identical energy value for each activity pattern \(\vec{I}^{k,l}\) used for the energy landscape evaluation (\(\vec{I}^{k,l}\) represents the population activity which encodes the \(k\)'th place cell preferred firing position in map \(l\)). Thus, we demanded that

\[E\left(\vec{I}^{k,l},\mathbf{J}+\mathbf{M}\right)=C\] (S9)

where \(C\) is an unknown constant energy value. Note that only the third term of Eq. S8 directly depends on the connectivity \(\mathbf{J}\), and since this dependence is linear, Eq. S8 can be decomposed into

\[E(\vec{I},\mathbf{J}+\mathbf{M})=\sum_{i=1}^{N}\left(\underbrace{\int_{0}^{I_{i}}\mathrm{d}z_{i}\,z_{i}\phi^{\prime}\left(z_{i}\right)}_{T_{1}}-\underbrace{h\phi\left(I_{i}\right)}_{T_{2}}-\underbrace{\frac{1}{2}\sum_{j=1}^{N}\phi\left(I_{i}\right)\mathbf{J}_{i,j}\phi\left(I_{j}\right)}_{T_{3}}-\underbrace{\frac{1}{2}\sum_{j=1}^{N}\phi\left(I_{i}\right)\mathbf{M}_{i,j}\phi\left(I_{j}\right)}_{T_{4}}\right)\] (S10)

To comply with the properties of \(\mathbf{J}\), we demanded that \(\mathbf{M}=\mathbf{M}^{T}\) and \(\mathbf{M}_{i,i}=0\). Hence, the number of unknown weight modifications was \(\frac{N^{2}-N}{2}\). Consequently, writing the modification term \(T_{4}\) at the evaluated \(k\) place cell position in map \(l\) (i.e., for activity \(\vec{I}^{k,l}\)) yields,

\[T_{4}^{k,l}=-\frac{1}{2}\vec{\phi}^{k,l}\cdot\mathbf{M}\cdot\vec{\phi}^{\,k,l\,T}=-\sum_{i=1}^{N}\sum_{j>i}\mathbf{M}_{i,j}\phi_{i}^{k,l}\phi_{j}^{k,l}\] (S11)

where \(\vec{\phi}^{k,l}\) is the transfer function output vector of the synaptic activity vector \(\vec{I}^{k,l}\) which encodes the \(k\)'th place cell preferred firing position in map \(l\). By rearranging only the desired \(\frac{N^{2}-N}{2}\) elements of \(\mathbf{M}\) into a column vector \(\vec{m}\), and noting that Eq. S11 defines a set of \(N\cdot L\) linear equations for these elements, we can rewrite Eq. S11 as

\[\mathbf{A}\vec{m}=\vec{\kappa}\] (S12)

where \(\mathbf{A}\) is a matrix with \(N\cdot L\) rows and \(\frac{N^{2}-N}{2}\) columns, and \(\vec{\kappa}\) is the corresponding vector of solutions (\(\vec{\kappa}=C-\left[T_{1}+T_{2}+T_{3}\right]\), evaluated in each element of \(\vec{\kappa}\) for a specific combination of \(l\) and \(k\)). Subtracting the first equation from all equations yields a similar system of \(N\cdot L-1\) linear equations, but which is independent of \(C\).
This system is underdetermined (\(\forall\ N\geq 2L+1\)), and the least squares solution for the sought weight modifications is given by

\[\vec{m}=\mathbf{A}^{T}\left(\mathbf{A}\mathbf{A}^{T}\right)^{-1}\vec{\kappa}\] (S13)

Rearranging \(\vec{m}\) back into the corresponding elements in the matrix \(\mathbf{M}\) and updating the connectivity, \(\mathbf{J}+\mathbf{M}\rightarrow\mathbf{J}\), must yield a precisely flat energy landscape when evaluated using the same activities \(\vec{I}^{k,l}\) that were previously used to evaluate the energy landscape without the weight modifications. Moreover, since the original state was a minimum of the Lyapunov energy function with respect to the activities, deviations in the activities contribute only quadratically to the energy. Consequently, the procedure described above equalizes the energy function across all states, to linear order in the weight modifications.

#### Iterative constrained gradient optimization

In the iterative optimization scheme, the unstable idealized bumps were replaced with the converged states obtained through a constrained optimization. These states exhibit distorted activity patterns and lie at a local energy minimum, subject to a constraint on the center of mass (position) of the bump. These states are found by descending through the energy landscape, while constraining the population activity bump to a fixed position. To implement the constrained optimization, the objective function of the synaptic activities \(H\left(\vec{I}\right)\) is defined as follows,

\[H\left(\vec{I}\right)=E\left(\vec{I},\mathbf{J}\right)+\lambda\left[f\left(\vec{I}\right)-f_{0}\right]\] (S14)

where \(\lambda\) is an adjustable Lagrange multiplier, the function \(f\) (specified next) returns the position (center of mass) associated with \(\vec{I}\), and \(f_{0}\) is the encoded position of the activity at initialization at timestep \(t=0\). We seek all converged states \(\vec{I}^{k}\) which minimize \(H\) while encoding all possible place cell preferred firing positions. In the first iteration, these states are achieved by initializing the network from idealized bump (ib) states \(\vec{I}^{k}_{\mathrm{ib}}\) centered around all possible place cell preferred firing positions in all the embedded maps. In the following iterations, the network is initialized with the corresponding converged states from the preceding iteration. From each such initial state, small updates \(\overrightarrow{\Delta I}\) are added to the activity, which minimize the energy value but without changing the output of \(f\). The time evolution of activity states is written as

\[\vec{I}_{t+1}=\vec{I}_{t}+\overrightarrow{\Delta I}_{t}\] (S15)

The small changes in the activity \(\overrightarrow{\Delta I}_{t}\) are determined through a gradient descent scheme where the objective function is differentiated with respect to \(\vec{I}\),

\[\overrightarrow{\Delta I}_{t}=-\eta\cdot\overrightarrow{\nabla}H=-\eta\sum_{i=1}^{N}\frac{\partial E}{\partial I_{i}}\hat{e}_{i}-\eta\lambda\sum_{i=1}^{N}\frac{\partial f}{\partial I_{i}}\hat{e}_{i}\] (S16)

where \(\eta>0\) is the learning rate and the \(\hat{e}_{i}\)'s are standard unit vectors spanning an orthonormal basis.
The change in the output position of \(f\) due to changes in the activity is written as

\[\Delta f_{t}=\sum_{j=1}^{N}\frac{\partial f}{\partial I_{j}}\hat{e}_{j}\cdot\overrightarrow{\Delta I}_{t}\] (S17)

However, since the constraint enforces that there is no change in the encoded position during such gradient steps, we demand that \(\Delta f_{t}=0\). Plugging in the expression for \(\overrightarrow{\Delta I}_{t}\) from Eq. S16 yields

\[\sum_{j=1}^{N}\frac{\partial f}{\partial I_{j}}\hat{e}_{j}\cdot\left[\eta\sum_{i=1}^{N}\frac{\partial E}{\partial I_{i}}\hat{e}_{i}+\eta\lambda\sum_{i=1}^{N}\frac{\partial f}{\partial I_{i}}\hat{e}_{i}\right]=0\] (S18)

For convenience, we omitted the timestep \(t\) index notation, as it is shared by all terms from here onward. The Lagrange multiplier, calculated at each timestep, is thus

\[\lambda=-\frac{\sum_{i=1}^{N}\frac{\partial E}{\partial I_{i}}\cdot\frac{\partial f}{\partial I_{i}}}{\sum_{i=1}^{N}\left(\frac{\partial f}{\partial I_{i}}\right)^{2}}\] (S19)

To obtain an explicit expression for \(\lambda\), we conclude by evaluating \(\frac{\partial E}{\partial I_{i}}\) and \(\frac{\partial f}{\partial I_{i}}\). Since the connectivity is symmetric, the derivative of the energy with respect to \(I_{i}\) is given by

\[\begin{split}\frac{\partial E}{\partial I_{i}}=I_{i}\phi^{\prime}\left(I_{i}\right)-h\phi^{\prime}\left(I_{i}\right)-\frac{1}{2}\sum_{a=1}^{N}\phi^{\prime}\left(I_{i}\right)\mathbf{J}_{i,a}\phi\left(I_{a}\right)-\frac{1}{2}\sum_{a=1}^{N}\phi\left(I_{a}\right)\mathbf{J}_{a,i}\phi^{\prime}\left(I_{i}\right)\\ =\phi^{\prime}\left(I_{i}\right)\left[I_{i}-h-\sum_{a=1}^{N}\mathbf{J}_{i,a}\phi\left(I_{a}\right)\right]\end{split}\] (S20)

To differentiate the constraint with respect to \(I_{i}\), we first define the function \(f\left(\vec{I}\right)\). For simplicity, this function is defined in terms of the weighted average position of the rates \(\vec{r}=\phi\left(\vec{I}\right)\), where the weights represent the position of the neurons relative to a reference position \(x_{r}\) in the vicinity of the bump (the need to measure displacements relative to a reference position arises due to the periodic boundary conditions). For simplicity, the reference position is chosen as the one that maximizes the bump score. The displacement of the bump from this reference position is evaluated as

\[S\left(\vec{r}\right)=\frac{\sum_{i=1}^{N}\Delta x_{i}\cdot r_{i}}{\sum_{i=1}^{N}r_{i}}\] (S21)

where \(\Delta x_{i}\) is the displacement of neuron \(i\) from the reference position \(x_{r}\), defined using periodic boundary conditions such that it lies in the range \([-N/2,N/2]\). Finally,

\[f\left(\vec{I}\right)=x_{r}+S\] (S22)

Differentiating the constraint yields

\[\frac{\partial f}{\partial I_{i}}=\frac{\partial S}{\partial r_{i}}\cdot\frac{\partial r_{i}}{\partial I_{i}}=\phi^{\prime}\left(I_{i}\right)\cdot\frac{\Delta x_{i}\sum_{k}r_{k}-\sum_{j}r_{j}\Delta x_{j}}{\left(\sum_{k}r_{k}\right)^{2}}\] (S23)

Plugging Eqs. S20 and S23 back into Eq. S19 yields the required Lagrange multiplier for each timestep.

#### Gradient of constrained energy with respect to weights

To derive Eq. 3, we evaluate the derivative of \(E^{k,l}\) with respect to the elements of \(\mathbf{M}\). Recall that \(I^{k,l}\) is the state that minimizes the energy (Eq. 1) under a constraint on the position of the bump, and that \(E^{k,l}\) is the energy associated with this state. Thus, both \(I^{k,l}\) and \(E^{k,l}\) depend on the weights \(\mathbf{M}\).
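Putting Eqs. S16, S19, S20 and S23 together, a single step of the constrained descent described in the previous subsection can be sketched as follows. This is a minimal NumPy illustration; the learning rate value is our own placeholder, and the displacements \(\Delta x_i\) from the reference position are assumed fixed within the step.

```python
import numpy as np

def phi(x):
    return np.sqrt(np.maximum(x, 0.0))

def dphi(x):
    # phi'(x) = 1 / (2 sqrt(x)) for x > 0, and 0 otherwise.
    return np.where(x > 0, 0.5 / np.sqrt(np.maximum(x, 1e-12)), 0.0)

def constrained_descent_step(I, J, h, dx, eta=1e-3):
    """One constrained gradient step (Eqs. S15-S16) that lowers the
    Lyapunov energy while keeping the bump's center of mass fixed.
    dx: periodic displacement of each neuron from the reference x_r."""
    r = phi(I)
    dE = dphi(I) * (I - h - J @ r)                          # Eq. S20
    Rsum = r.sum()
    df = dphi(I) * (dx * Rsum - (r * dx).sum()) / Rsum**2   # Eq. S23
    lam = -(dE @ df) / (df @ df)                            # Eq. S19
    return I - eta * (dE + lam * df)                        # Eq. S16
```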
The gradient of \(E^{k,l}\) with respect to the weights can be obtained from Eq. 1, while taking into account both the direct dependence of the energy on \(\mathbf{M}\), and the implicit dependence arising from the influence of \(\mathbf{M}\) on the state \(\vec{I}^{k,l}\):

\[\frac{\partial E^{k,l}}{\partial\mathbf{M}_{ij}}=-\frac{1}{2}\phi\left(I_{i}^{k,l}\right)\phi\left(I_{j}^{k,l}\right)+\nabla E^{k,l}\cdot\frac{\partial\vec{I}^{k,l}}{\partial\mathbf{M}_{ij}}\] (S24)

where \(\nabla\) represents a gradient with respect to \(\vec{I}^{k,l}\), and \(\vec{R}^{k,l}=\phi\left(\vec{I}^{k,l}\right)\). The first term on the right hand side is the contribution to the derivative with respect to \(\mathbf{M}_{ij}\) arising from the explicit dependence of the Lyapunov function on \(\mathbf{M}\). This term is equivalent to the second term in Eq. 3. The second term on the right hand side of Eq. S24 represents the contribution to the change in the Lyapunov energy arising from the change in the steady state activity pattern \(\vec{I}^{k,l}\) in response to an infinitesimal modification of \(\mathbf{M}_{ij}\). Since \(\vec{I}^{k,l}\) obeys a constraint on the center of mass of the bump for any choice of the weights, its derivative with respect to \(\mathbf{M}_{ij}\) is in a direction in the \(N\)-dimensional neural activity space in which the center of mass is kept fixed. On the other hand, the energy has been minimized precisely within that subspace, and therefore its gradient with respect to \(\vec{I}^{k,l}\), projected on this direction, must vanish (see also Supplementary Fig. 8). Hence, the second term in Eq. S24 vanishes, and up to linear order in \(\Delta\mathbf{M}\) only the explicit dependence of the energy on \(\mathbf{M}\) contributes to the gradient, as stated by Eq. 3.

#### Energy landscape shifts

Energy landscapes were uniformly shifted throughout the manuscript by a constant (Figs. 3a-b and 5a) in order to allow for the landscapes to be shown on the same plot for a single map and for 10 maps (Fig. 3a). This constant was selected separately for each value of embedded maps \(L\), such that the mean energy of idealized bump states across all states and maps (without any weight modification) is zero: thus, the mean of the blue trace in the bottom panel of Fig. 3b is zero (as well as single map blue traces, Fig. 3a). Uniform shifts of the energy landscape are inconsequential for the stability and dynamics of the bump states, and so this was consistently performed solely for visibility purposes of Fig. 3a.

### Statistical analysis

For each network with a different number of total embedded maps, 15 realizations were performed in which the permutations between the spatial maps were chosen independently and at random. Standard errors of the mean (SEM) were evaluated across these independent realizations. Error bars in Figs. 4, 5, 6 and Supplementary Figs. 1 and 4 are \(\pm 1.96\) SEM, corresponding to a 95% confidence interval.

### Code availability

Code is available at the public repository [https://doi.org/10.5281/zenodo.10016179](https://doi.org/10.5281/zenodo.10016179).

## Supplementary Figures

Figure 8: **Bump position remains fixed and reaches a local energy minimum during the gradient optimization.** **a,** Two examples showing the relative bump position during a constrained (blue) and unconstrained (orange) gradient optimization. The bump position remains fixed during constrained optimization and systematically drifts during unconstrained optimization.
**b,** The energy along corresponding projections of representative population activity vectors in the \(N\)-dimensional space after gradient optimization has terminated (the scalar \(\alpha\) multiplies the multidimensional activity vectors). The energy along directions which are parallel to the constraint (blue) is flat, while the energy along directions which are orthogonal to the constraint is quadratic (orange, yellow, and green traces; \(R^{2}=0.99\)).

Figure 10: **Measured drifts using weighted population activity vector phase.** Same as Fig. 6e (light blue and yellow traces) but with superimposed measured drifts using the phase of the population activity vector (dark blue and orange traces). Very similar results are obtained using the two methods, up to \(L\approx 12\) embedded maps. The bump position obtained using the weighted population activity phase is noisy, as all the neurons' activities contribute to the inference of the bump position using the population vector phase. Therefore, as the load increases, neurons in the periphery of the bump that start to fire contribute to an inaccurate inference of the bump position, leading to larger measured drifts than those observed when using the bump score. Error bars are \(\pm\)1.96 SEM.
2310.01259
Faster and Accurate Neural Networks with Semantic Inference
Deep neural networks (DNN) usually come with a significant computational burden. While approaches such as structured pruning and mobile-specific DNNs have been proposed, they incur drastic accuracy loss. In this paper we leverage the intrinsic redundancy in latent representations to reduce the computational load with limited loss in performance. We show that semantically similar inputs share many filters, especially in the earlier layers. Thus, semantically similar classes can be clustered to create cluster-specific subgraphs. To this end, we propose a new framework called Semantic Inference (SINF). In short, SINF (i) identifies the semantic cluster the object belongs to using a small additional classifier and (ii) executes the subgraph extracted from the base DNN related to that semantic cluster for inference. To extract each cluster-specific subgraph, we propose a new approach named Discriminative Capability Score (DCS) that finds the subgraph with the capability to discriminate among the members of a specific semantic cluster. DCS is independent from SINF and can be applied to any DNN. We benchmark the performance of DCS on the VGG16, VGG19, and ResNet50 DNNs trained on the CIFAR100 dataset against 6 state-of-the-art pruning approaches. Our results show that (i) SINF reduces the inference time of VGG19, VGG16, and ResNet50 respectively by up to 35%, 29% and 15% with only 0.17%, 3.75%, and 6.75% accuracy loss (ii) DCS achieves respectively up to 3.65%, 4.25%, and 2.36% better accuracy with VGG16, VGG19, and ResNet50 with respect to existing discriminative scores (iii) when used as a pruning criterion, DCS achieves up to 8.13% accuracy gain with 5.82% less parameters than the existing state of the art work published at ICLR 2023 (iv) when considering per-cluster accuracy, SINF performs on average 5.73%, 8.38% and 6.36% better than the base VGG16, VGG19, and ResNet50.
Sazzad Sayyed, Jonathan Ashdown, Francesco Restuccia
2023-10-02T14:51:10Z
http://arxiv.org/abs/2310.01259v2
# Faster and Accurate Neural Networks with Semantic Inference

###### Abstract

Deep neural networks (DNNs) usually come with a significant computational and data labeling burden. While approaches such as structured pruning and mobile-specific DNNs have been proposed, they incur drastic accuracy loss. In contrast with prior work, in this paper we leverage the intrinsic redundancy in latent representations to drastically reduce the computational load with very limited loss in performance. Specifically, we show that semantically similar inputs share a significant number of filter activations, especially in the earlier layers. As such, semantically similar classes can be "clustered" so as to create cluster-specific subgraphs. These may be "turned on" when an input belonging to a semantic cluster is being presented to the DNN, while the rest of the DNN can be "turned off". To this end, we propose a new framework called _Semantic Inference_ (SINF). In short, SINF (i) identifies the semantic cluster the object belongs to using a small additional classifier; and then (ii) executes the subgraph extracted from the base DNN related to that semantic cluster to perform the inference. To extract each cluster-specific subgraph, we propose a new approach named _Discriminative Capability Score_ (DCS) that effectively finds the subgraph with the capability to discriminate among the members of a specific semantic cluster. Importantly, DCS is independent of SINF, as it is a general-purpose quantity that can be applied to any DNN. We benchmark the performance of DCS on the VGG16, VGG19, and ResNet50 DNNs trained on the CIFAR100 dataset against 6 state-of-the-art pruning approaches. Our results show that (i) SINF reduces the inference time of VGG19, VGG16, and ResNet50 respectively by up to 35%, 29% and 15% with only 0.17%, 3.75%, and 6.75% accuracy loss; (ii) DCS achieves respectively up to 3.65%, 4.25%, and 2.36% better accuracy with VGG16, VGG19, and ResNet50 with respect to existing discriminative scores; (iii) when used as a pruning criterion, DCS achieves up to 8.13% accuracy gain with 5.82% less parameters than the existing state of the art work published at ICLR 2023; (iv) when considering per-cluster accuracy, SINF performs on average 5.73%, 8.38% and 6.36% better than the base VGG16, VGG19, and ResNet50. We share our code for reproducibility.

## 1 Introduction

Deep neural networks (DNNs) have produced significant advances in computer vision (CV) [Krizhevsky et al.(2012); Kirillov et al.(2023); Redmon et al.(2016)], natural language processing (NLP) [Vaswani et al.(2017)], and multi-modal tasks [Radford et al.(2021)], just to name a few. Usually, DNNs have a very large number of parameters. For example, the state-of-the-art YOLOv8 uses a DNN backbone with 53 layers and 40M parameters [Terven & Cordova-Esparza(2023)]. On the other hand, DNNs are increasingly being used in resource-constrained mobile systems. For example, unmanned autonomous vehicles (UAVs) and self-driving cars need to frequently perform object detection [Wu et al.(2020)] and semantic segmentation [Mo et al.(2022)] to avoid obstacles during navigation and build detailed 3D maps [Wang et al.(2020); Fraga-Lamas et al.(2019); Riti Dass (Medium) (2018)]. Applying state-of-the-art DNNs in these scenarios is hardly feasible for real-time edge deployment. As discussed in Section 2, a plethora of existing work has been devoted to reducing the complexity of DNNs [Tan & Le(2019); Howard et al.(2017); Iandola et al.(2016); Sandler et al.(2018)].
Mobile-specific DNNs such as MobileNet [Sandler et al.(2018)] and MnasNet [Tan et al.(2019)] decrease the computational requirements at the detriment of accuracy. For example, MobileNet loses up to 6.4% in accuracy as compared to ResNet-152 [He et al.(2016)]. Alternative approaches include pruning [Han et al.(2015b); Li et al.(2018); Chen et al.(2019); Tanno et al.(2019); Singh et al.(2019); Kaya et al.(2019); Yao et al.(2017); Huang et al.(2020)], quantization [Han et al.(2015a); Qin et al.(2022); Cai et al.(2020)], and coding [Gajjala et al.(2020); Han et al.(2015a)], which also incur excessive DNN performance loss. The key issue is that existing approaches do not guarantee faster inference, as previous methods depend on the way the DNN is implemented in hardware. For example, unstructured pruning or weight pruning does not translate to faster inference, since the majority of the DNN circuitry has to be executed regardless of the application [Wen et al.(2016); Ma et al.(2022)]. On the other hand, structured pruning, which encompasses filter pruning and layer pruning, makes the inference faster and less expensive in terms of energy [Ma et al.(2022)]. In stark opposition with prior work, in this paper we propose _Semantic Inference_ (SINF) to reduce the number of weights for a DNN implemented on an edge device. Our key intuition is that in practical mobile settings, DNN inputs come only from a limited set of classes which are usually _semantically similar_ and are _highly correlated_ over time. For example, DNNs deployed in drone-based surveillance systems may only need to detect and identify certain classes (e.g., people, cars, animals) and will hardly encounter indoor objects. Another intuition - proven in Figure 1 - is that semantically similar inputs share a significantly larger number of filter activations among themselves than semantically dissimilar inputs do, especially in the earlier layers. For example, it is intuitive that images of seals share significantly more filter activations with images of dolphins than with images of tables. _We use this to logically rearrange the DNN, so that only the portion of the DNN that is devoted to classifying the semantically relevant class that the current input belongs to will be executed_. This paper makes the following novel contributions:

\(\bullet\) We propose a new inference framework called SINF, which logically partitions the DNN into subgraphs relevant to semantically similar classes, so that only the subgraph related to the cluster the object belongs to gets activated. To create the subgraphs, we propose a new Discriminative Capability Score (DCS) to find the filters that can best distinguish semantically similar classes;

\(\bullet\) We benchmark the performance of our SINF on VGG16, VGG19, and ResNet50 DNNs trained on the CIFAR100 dataset. We compare DCS against state-of-the-art discriminative algorithms by Mittal et al.(2019), Molchanov et al.(2019), Hu et al.(2016), Sui et al.(2021), and by Lin et al.(2020). We also use DCS as a pruning approach and compare it against the work by Murti et al.(2023), which like DCS does not require retraining and/or fine-tuning.
Our results show that (i) SINF reduces the inference time of VGG19, VGG16, and ResNet50 respectively by up to 35%, 29% and 15% with only 0.17%, 3.75%, and 6.75% accuracy loss; (ii) DCS achieves respectively up to 3.65%, 4.25%, and 2.36% better accuracy with VGG16, VGG19, and ResNet50 with respect to existing discriminative scores; (iii) when used as a pruning criterion, DCS achieves up to 8.13% accuracy gain with 5.82% less parameters than the existing state of the art work published at ICLR 2023; (iv) when considering per-cluster accuracy, SINF performs on average 5.73%, 8.38% and 6.36% better than VGG16, VGG19, and ResNet50.

## 2 Related Work

**Model Pruning**: The _lottery ticket hypothesis_ introduced by Frankle and Carbin(2019) has spurred a plethora of research work in DNN pruning. For example, Han et al.(2015b) proposed weight-norm-based unstructured pruning, while Paul et al.(2023) tries to explain the success of the magnitude-based pruning methods. Li et al.(2017) used the L\({}_{1}\) norm of the kernel weights to prune entire filters. However, weight-norm-based strategies do not directly take into account the importance of the filters or parameters to preserve the DNN accuracy. Another approach is first-order gradient-based methods [Molchanov et al.(2017); Molchanov et al.(2019)], which estimate the importance of the filters based on the gradient of the loss function. Another class of techniques leverages the filter activation maps. For example, RoyChowdhury et al.(2017) study the presence of duplicate neurons and determine that convolutional layers are prone to developing redundant duplicate filters. To find such filters, Sui et al.(2021) uses the change in nuclear norm of the matrix formed from the activation maps when individual filters are removed from a layer. Lin et al.(2020) use the expected rank of the feature maps, while Chen et al.(2023) explain soft-threshold pruning as an implicit case of Iterative Shrinkage-Thresholding. _Although these methods determine the redundant filters, they fail to focus on the filters which are necessary to distinguish among the classes. Moreover, all of these methods require fine-tuning after pruning._ When fine-tuning is not possible, these methods do not provide satisfactory performance. Recently, Murti et al. (2023) proposed a retrain-free _IterTVSPrune_ approach based on Total Variational Distance (TVD) [Verdu (2014)]. The authors state there is no closed-form solution to TVD and approximate it with a simplified form of the Hellinger distance to make it applicable to real-world DNNs. Here, we take a semantics-based approach and attempt to find filters able to best discriminate classes belonging to a given semantic cluster.

**Quantization and Coding**: Quantization and coding are two methods that complement other model compression and acceleration approaches. The seminal work by Han et al. (2015) compressed the DNN through quantization and Huffman coding to reduce the memory footprint. Among more recent work, post-training quantization [Cai et al. (2020), Fang et al. (2020)] and quantization-aware training [Liu et al. (2021), Bhalgat et al. (2020), Zhong et al. (2022)] have been proposed. Qin et al. (2022) push the boundary using single-bit quantization of the popular language model Bidirectional Encoder Representations from Transformers (BERT). Li et al. (2020) designed a layer-wise symmetric quantizer with a learnable clip value only for the high-level feature extraction module. Tu et al.
(2023) recently designed an algorithm for network quantization catered to the needs of image super resolution. Gajjala et al. (2020) propose three variants of Huffman encoding to compress the gradients for distributed training of neural networks. Both quantization and coding are complementary to SINF and can be used to achieve further improvement in performance. **Early Exit in Neural Networks**: Early exit was proposed by Teerapittayanon et al. (2016) to make DNN inference dynamic by using auxiliary (and relatively small) neural networks attached to the output of the DNN layers. Based on the confidence of the prediction of the auxiliary networks, the decision to traverse the remaining layers is made [Matsubara et al. (2021)]. The training of the auxiliary classifiers can be done jointly with the backbone network as done by Elbayad et al. (2020), Zhou et al. (2020), and Pomponi et al. (2022). The classifiers can be trained using either cross-entropy loss [Lo et al. (2017), Wang et al. (2019)], or through knowledge distillation [Phuong & Lampert (2019), Li et al. (2019)]. The classifiers can also be trained separately as in Liu et al. (2020), Garg & Moschitti (2021), and Xin et al. (2020). Han et al. (2023) try to improve the performance of the early classifiers by using a block-dependent loss which uses information from a subset of the exits close to a block to train it. Dong et al. (2022) address the wasteful computation of the early auxiliary classifiers when they are not confident enough by predicting which early exit to use with a lightweight "Exit Predictor". Narayan et al. (2023) modeled the exit selection as an online learning problem and proposed to choose the exit in an unsupervised way. SINF also uses an auxiliary classifier, but the end goal is not early inference; rather, it is to predict the path along which the sample should be routed. ## 3 Dividing a DNN Into Semantic Subgraphs Let \(\mathcal{D}\) be a labeled dataset with classes taken from a set \(\mathcal{K}\). We define a set of \(K\) clusters of classes \(\{\gamma_{1},\cdots,\gamma_{K}\}\), such that \(\gamma_{1}\cup\gamma_{2}\cup\cdots\cup\gamma_{K}=\mathcal{K}\). We assume that these clusters are defined based on application-level similarities (e.g., classes related to flowers, insects, etc.) or pre-defined at the dataset level (e.g., as in the CIFAR100 dataset). We define \(\mathcal{F}\) as a DNN trained on dataset \(\mathcal{D}\). By viewing \(\mathcal{F}\) as a computation graph, we define the Semantic DNN Subgraph Problem (SDSP). Semantic DNN Subgraph Problem (SDSP): Find \(K\) subgraphs \(\mathcal{F}_{\gamma_{1}},\cdots,\mathcal{F}_{\gamma_{K}}\) such that \[\mathcal{L}_{eval}(\mathcal{F},\mathcal{D})\leq\frac{1}{K}\sum_{i=1}^{K}\mathcal{L}_{eval}(\mathcal{F}_{\gamma_{i}},\mathcal{D}_{\gamma_{i}}), \tag{1}\] where \(\mathcal{F}_{\gamma_{i}}\subset\mathcal{F}\) and \(\mathcal{D}_{\gamma_{i}}\subset\mathcal{D}\) are respectively the subgraph of \(\mathcal{F}\) and the subset of the dataset corresponding to the partition \(\gamma_{i}\). The score function \(\mathcal{L}_{eval}\) is the evaluation criterion used to score \(\mathcal{F}\) on dataset \(\mathcal{D}\). In other words, the subgraph \(\mathcal{F}_{\gamma_{i}}\) contains the nodes of \(\mathcal{F}\) which best classify the members of the corresponding partition \(\gamma_{i}\) on \(\mathcal{D}_{\gamma_{i}}\). We perform a series of experiments to validate the intuition behind the SDSP. It is well known that DNN filters identify parts of objects, colors or concepts.
These filters are shared among classes to reduce the number of parameters of the DNN [Bau et al. (2017)]. On the other hand, filter activations become sparser as the DNN becomes deeper, with filters reacting only to specific inputs belonging to specific classes. This phenomenon can be observed in the top portion of Figure 1, which shows the average filter activation strength for the "otter" and "seal" classes in the 40th and 49th convolutional layers of ResNet50 trained on CIFAR100. To obtain these results, we have taken the average of each filter activation vector for each input in the training set corresponding to the two classes, and performed min-max normalization to obtain values between 0 and 1. This experiment reinforces the notion that filters in earlier layers are less specialized than filters in deeper layers. Moreover, it remarks that filters from semantically similar classes get similarly activated, especially in earlier layers. _To put it in more quantitative terms, the \(L_{1}\) distance of the activation maps of the mentioned classes in the 40th layer is 0.028, while the same for the 49th layer is 0.111_. To further investigate this critical aspect, we have performed additional experiments where we have computed the percentage of filters "shared" among different classes for each layer of ResNet50. Specifically, for a filter in a given layer, we have tagged it with the top 20 classes it gets most activated for. For two classes, the sharing of the filters is calculated as the number of filters tagged with both classes over the number of filters tagged with at least one of the two classes. The results are reported in the bottom portion of Figure 1, where the first row shows the filters shared between the "dolphin" and "whale" classes - two semantically similar classes - while the second row shows the filters shared between "dolphin" and "table" - two semantically dissimilar classes. _As the numbers suggest, the semantically similar classes share more filters than semantically dissimilar classes_. These results further confirm that filter sharing among classes decreases as the DNN becomes deeper.

Figure 1: (top) Filter activations in ResNet50; (bottom) Percentage of filters shared between (a) semantically similar classes – "dolphin" and "whale"; (b) semantically dissimilar classes – "dolphin" and "table".
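The filter-sharing measurement described above can be summarized in a few lines. The following is a minimal sketch, not the authors' code: it assumes that `mean_act`, a matrix of per-class average filter activations for one layer, has already been computed from the training set, and all identifiers are ours.

```python
import numpy as np

def filter_sharing(mean_act: np.ndarray, class_a: int, class_b: int, top_k: int = 20) -> float:
    """Fraction of filters 'shared' by two classes in one layer.

    mean_act: [num_classes, num_filters] array holding the average activation
    strength of each filter for each class. A filter is tagged with a class if
    that class is among the top_k classes activating it most strongly; sharing
    is |filters tagged with both| / |filters tagged with at least one|.
    """
    # For every filter (column), find the top_k classes that activate it most.
    top_classes = np.argsort(-mean_act, axis=0)[:top_k]   # [top_k, num_filters]
    tagged_a = (top_classes == class_a).any(axis=0)       # [num_filters] booleans
    tagged_b = (top_classes == class_b).any(axis=0)
    union = (tagged_a | tagged_b).sum()
    return float((tagged_a & tagged_b).sum() / union) if union else 0.0
```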
## 4 Discriminative Capability Score We describe the procedure to obtain the DCS for a given layer \(l\) and a given semantic cluster \(\gamma_{m}\) in Algorithm 1. Let us consider a generic layer \(l\) of a DNN with \(C^{l}_{out}\) output channels. We define the activation output of this layer for an input sample \(X^{j}\) as \(\mathbf{A}^{j}_{l}\), which has dimension \(C^{l}_{out}\times H\times W\), where \(H\) and \(W\) are respectively the height and width of the activation map of each channel \(c_{i}\). For each output channel, SINF performs an adaptive pooling operation \(\mathcal{P}(\cdot)\) to reduce the size of the feature map of each channel from \(H\times W\) to \(k\times k\). Flattening the feature map gives a feature vector \(\mathbf{F}^{j}_{c_{i}}\) of size \(k^{2}\). Next, the feature vectors obtained from all the filters are concatenated to generate the complete feature vector \(\mathbf{F}^{j}\), whose dimension is \(C^{l}_{out}\cdot k^{2}\) for layer \(l\). By defining \(N_{c}\) as the number of classes in the semantic cluster and \(N_{f}=C^{l}_{out}k^{2}\) as the number of features from the respective \(l\)-th layer of the DNN, DCS finds a transformation \(\mathbf{W}\in\mathbb{R}^{N_{c}\times N_{f}}\) that optimizes the objective function \(\mathcal{L}_{DOF}\) as shown in Equation 2, where \(\mathbf{W}^{*}\) contains information about the discriminative capability of the filters of layer \(l\): \[\mathbf{W}^{*}=\operatorname*{argmin}_{\mathbf{W}}\frac{1}{|\mathcal{D}_{\gamma_{m}}|}\sum_{j=1}^{|\mathcal{D}_{\gamma_{m}}|}\mathcal{L}_{DOF}(\mathbf{W}\cdot\mathbf{F}^{j},t^{j}), \tag{2}\] where \(t^{j}\) denotes the target value or label for the input sample \(X^{j}\) in the dataset \(\mathcal{D}_{\gamma_{m}}\) corresponding to a class cluster \(\gamma_{m}\). The weights in the feature importance matrix \(\mathbf{I}=\mathbf{W}^{*}\odot\nabla_{\mathbf{W}^{*}}\mathcal{L}_{DOF}\) directly encode the importance of the individual features for distinguishing the classes. Here, \(\mathcal{L}_{DOF}\) is the objective function averaged over the samples. We define \(s_{i}\) as the norm of the \(i\)-th column of the feature importance matrix \(\mathbf{I}\). In other words, \(s_{i}\) represents the score contribution of a feature \(f_{i}\) in discriminating among the semantic classes. This converts the matrix \(\mathbf{I}\) into a vector \(\mathbf{s}\) of length \(C^{l}_{out}\cdot k^{2}\). The final step is to attribute the DCS to individual filters. We use a group norm operation \(\mathcal{G}(\cdot)\) on the obtained feature score vector \(\mathbf{s}\), where the feature scores are grouped into vectors consisting of \(k^{2}\) consecutive feature score values. The DCS of channel \(c_{i}\) is obtained as follows: \[\text{DCS}_{c_{i}}=\sqrt{\sum_{j}u_{c_{i},j}^{2}} \tag{3}\] where \(\text{DCS}_{c_{i}}\) is the desired discriminative capability score of the channel \(c_{i}\) and \(u_{c_{i},j}\) is the \(j\)-th element of the feature group corresponding to the same filter. We have investigated through experiments whether the DCS captures the phenomena explained in the previous section. Figure 2 shows the DCS distributions obtained in layers 6, 9, 11, and 13 of VGG16 by considering the cluster "fish" of CIFAR100. Figure 2 confirms that deeper layers are more specialized for individual classes, and thus the average DCS for the filters in the deeper layers is smaller - 0.68 for layer 6 vs 0.39 for layer 13. _In other words, in earlier layers more filters contribute to discriminating classes since filters are extracting more generalized features at early stages_. The decrease of the average DCS implies that fewer filters are related to a given cluster.

Figure 2: The DCS distribution for cluster "fish" of CIFAR100 in different layers of VGG16.
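To make the scoring concrete, the following is a minimal sketch of Equations 2-3 in PyTorch, using categorical cross-entropy as the discriminative objective \(\mathcal{L}_{DOF}\); the training schedule and the zero initialization of the probe are illustrative choices of ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def dcs_scores(feats: torch.Tensor, labels: torch.Tensor, c_out: int, k: int,
               epochs: int = 100, lr: float = 0.1) -> torch.Tensor:
    """Per-filter DCS for one layer and one semantic cluster.

    feats:  [N, c_out * k * k] pooled-and-flattened activations F^j (each
            channel adaptively pooled to k x k, then concatenated).
    labels: [N] integer class labels t^j within the cluster.
    Returns a [c_out] tensor of DCS values (Eq. 3).
    """
    n_classes = int(labels.max()) + 1
    W = torch.zeros(n_classes, feats.shape[1], requires_grad=True)
    opt = torch.optim.SGD([W], lr=lr)
    for _ in range(epochs):                      # Eq. 2: fit the linear probe W*
        opt.zero_grad()
        F.cross_entropy(feats @ W.T, labels).backward()
        opt.step()
    # Feature importance I = W* ⊙ ∇_W L_DOF, evaluated at the trained probe.
    loss = F.cross_entropy(feats @ W.T, labels)
    grad, = torch.autograd.grad(loss, W)
    s = (W.detach() * grad).norm(dim=0)          # column norms -> per-feature scores s_i
    return s.view(c_out, k * k).norm(dim=1)      # group norm over each filter's k^2 features
```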
## 5 Semantic Inference (SINF) A step-by-step overview of the main operations of SINF is summarized in the top portion of Figure 3. Prior to the deployment of the DNN, SINF uses the DCS to construct the semantic subgraphs, as explained later. After deployment, upon receiving an input, SINF first classifies which semantic cluster the input belongs to (**step 1**). To this end, a _Common Feature Extractor_ is trained to extract the features needed to correctly predict the semantic cluster (**step 2**). The prediction itself is performed by the _Semantic Route Predictor_ (SRP), whose structure is detailed later in this section (**step 3**). Based on the SRP output, the input is routed to the selected semantic subgraph by using a _Feature Router_ (**step 4**). Finally, the inference output is obtained from the appropriate subgraph (**step 5**). We remark that although we represent each subgraph separately for better graphical clarity, in practice the separation is only from a logical perspective. In other words, no additional memory beyond the annotations needed to characterize each subgraph is used by SINF.

Figure 3: Overview of Semantic Inference (SINF).

**Extraction of Subgraphs.** The extraction of the subgraphs follows the procedure described in Algorithm 2, with a Python sketch of the inner loop given below. We define \(L\) and \(M\) as respectively the last layer and the layer before the Common Feature Extractor. We define \(r_{X}\) as the percentage of retained filters in a generic layer \(X\). For semantic cluster \(\gamma_{i}\), we iterate from layer \(L\) to layer \(M\) to extract the subgraph. For each layer \(M\leq l\leq L\), we calculate \(r_{l}\), as well as the DCS score of the filters using Algorithm 1. We rank the filters based on the score, and the indices of the top \(r_{l}\) percent filters are saved. This is repeated for all the semantic clusters. If the average accuracy of the extracted subgraphs for the semantic clusters is above an accuracy threshold \(\tau_{acc}\), the indices of the filters belonging to the subgraphs are stored. This procedure is performed for different values of \(r_{L}\) and \(r_{M}\). In this paper, \(r_{L}\) is set between 90% and 10%, with steps of 10, while \(r_{M}\) is set between 10% and 1%, with steps of 2. The accuracy threshold \(\tau_{acc}\) is set to the accuracy of the original DNN as shown in Equation 1. Although we have used categorical cross entropy, this is DNN-independent and can be set to any objective function.

```
Input:  Partitioned dataset D = {D_γ1, D_γ2, ..., D_γK}
        Pretrained DNN F = F_{L-1} ∘ F_{L-2} ∘ ... ∘ F_0
        Discriminative objective function L_DOF
        Filter retention percentage at layer L: r_L
        Filter retention percentage at layer M: r_M
        Accuracy threshold: τ_acc
Output: Filter annotations for the subgraphs of the semantic clusters: SA[]

SA = empty dictionary   // stores the final filter annotations for the extracted subgraphs
for D_γi in D do
    SA_γi = empty dictionary
    for l = L, L-1, ..., M do
        r_l ← r_M + (l - M)(r_L - r_M)/(L - M)   // percentage of filters to retain at layer l
        DCS_l ← DCS(F, D_γi, L_DOF)              // DCS scores from Algorithm 1
        Rank the filters by DCS_l
        Save the indices of the top r_l percent filters in SA_γi with the layer number as key
    end for
    Calculate acc_avg = (1/K) Σ_{i=1}^{K} accuracy(F_γi)
    if acc_avg ≥ τ_acc then
        Save SA_γi in SA with γi as key
    end if
end for
```
**Algorithm 2** Subgraph Extraction for Semantic Clusters
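The Python sketch of the per-cluster loop referenced above follows; it assumes the DCS values of every layer have been precomputed, and the outer search over \((r_{L},r_{M})\) and the accuracy check against \(\tau_{acc}\) are omitted. All identifiers are ours.

```python
import numpy as np

def extract_subgraph(dcs_per_layer: dict, r_L: float, r_M: float, M: int, L: int) -> dict:
    """Select the filters retained at each layer M..L for one semantic cluster.

    dcs_per_layer: {layer l: 1-D array of DCS values for that layer's filters}.
    r_L, r_M: percentage (0-100) of filters retained at layers L and M;
    intermediate layers interpolate linearly, as in Algorithm 2.
    Returns {layer l: indices of the retained top-scoring filters}.
    """
    annotations = {}
    for l in range(L, M - 1, -1):
        r_l = r_M + (l - M) * (r_L - r_M) / (L - M)      # retention % at layer l
        scores = np.asarray(dcs_per_layer[l])
        keep = max(1, round(len(scores) * r_l / 100.0))  # at least one filter per layer
        annotations[l] = np.argsort(-scores)[:keep]      # indices of the top r_l% filters
    return annotations
```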
**Semantic Route Predictor.** The purpose of the Semantic Route Predictor is to predict the semantic cluster an input sample belongs to, so that the sample can be forwarded along the corresponding route for the final prediction. An auxiliary classifier \(\boldsymbol{\chi}\), attached at the \(l\)-th layer of the base model \(\mathcal{F}\), is used for this task. To train the auxiliary classifier \(\boldsymbol{\chi}\), the section up to the \(l\)-th layer of \(\mathcal{F}\) is frozen and the activation map of layer \(l\) is passed as input to the Semantic Route Predictor. The earliest layer which provides good prediction of the semantic routes (an accuracy threshold of 75% is used in this case) is chosen as the \(l\)-th layer. The architecture of the auxiliary classifier consists of two convolution layers, followed by an adaptive average pooling layer stacked on top of three fully connected layers. SINF uses the convolution layers to tailor the activation map from layer \(l\) of the base model to the classification of the semantic clusters. The output of the Semantic Route Predictor is the probability distribution over the \(K\) different semantic clusters, and the input is predicted to belong to the cluster with the highest probability. In our case, the value of \(K\) is 20, as we have 20 semantic clusters. **Feature Router.** The routing decision can be improved if SINF conditions the output of the Semantic Route Predictor \(\boldsymbol{\chi}\) on its confidence. The Feature Router calculates this confidence and routes the activation maps to the appropriate specialized path. It takes the activation map from the Semantic Route Predictor \(\boldsymbol{\chi}\) along with the probability distribution from its prediction layer. To compute the confidence of the classifier on individual decisions, the Feature Router employs the lightweight metric proposed by Park et al. (2015). The confidence score can be calculated as \(C_{\chi}=P_{h}-P_{sh}\), using the highest (\(P_{h}\)) and the second highest (\(P_{sh}\)) probabilities for the individual semantic clusters. The confidence score is a proxy for the probability that the inference aligns with the correct label; the two quantities correlate, as shown by Park et al. (2015). If the confidence score exceeds a threshold, the activation map is routed to the subgraph corresponding to the predicted semantic cluster - in other words, only the filters corresponding to this subgraph are turned on. Otherwise, the base model is used for the final decision.
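The routing rule reduces to a margin test. Here is a minimal sketch (the single-sample signature and names are ours):

```python
import torch

def route(probs: torch.Tensor, threshold: float):
    """Confidence-gated routing using the margin metric C = P_h - P_sh.

    probs: [K] softmax output of the Semantic Route Predictor over K clusters.
    Returns (cluster_id, True) to use that cluster's subgraph, or (None, False)
    to fall back to the full base model.
    """
    top2 = torch.topk(probs, k=2).values       # highest and second-highest probabilities
    confidence = float(top2[0] - top2[1])      # C_chi = P_h - P_sh
    if confidence > threshold:
        return int(torch.argmax(probs)), True
    return None, False
```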
## 6 Experimental Evaluation To quantify the improvement brought by SINF, we consider existing pruning approaches by Molchanov et al. (2019), Hu et al. (2016), Mittal et al. (2019), Sui et al. (2021), and Lin et al. (2020). In addition, to compare with a pruning approach that does not require retraining, we also consider the work by Murti et al. (2023) published at ICLR 2023. We chose the CIFAR100 dataset for our experiments, since its 100 classes are pre-grouped into 20 clusters of semantically similar classes.

Figure 4: (Top) Cumulative distribution of the confidence values of SINF for the VGG16, VGG19, and ResNet50 DNNs. (Bottom) Accuracy vs. relative inference time and confidence threshold.

**Impact of Confidence Threshold.** We first evaluate the impact of the confidence threshold \(\gamma\), given its importance in achieving the right balance between latency and accuracy in SINF. The top row of Figure 4 shows the cumulative distribution of the confidence values of SINF for the VGG16, VGG19, and ResNet50 DNNs computed over the test set. We can observe that the confidence distribution increases in skewness and average, which reflects the increasingly better predictive capabilities of the examined DNNs. In other words, the confidence score of SINF is higher in a DNN exhibiting better accuracy. In addition, the bottom portion of Figure 4 shows the decrease in accuracy and the relative latency with respect to the original DNN as a function of the confidence threshold. As expected, increasing the confidence threshold increases the accuracy while also decreasing the gain in latency. As such, the confidence threshold \(\gamma\) can be used as a hyperparameter to find the needed trade-off between accuracy and latency. We notice that with VGG19, the overall accuracy actually increases by up to 0.49% for confidence threshold values greater than 0.4 and smaller than 0.9. If we look at the confidence distribution of VGG19, we can see that the auxiliary classifier is more confident about its predictions as compared to the case of VGG16. For VGG19, we can get up to a 41.8% reduction in inference time for only a 2.83% drop in accuracy. In the best case, SINF reduces the inference time by up to 35%, 29% and 15% with only 0.17%, 3.75%, and 6.75% accuracy loss for VGG19, VGG16, and ResNet50 respectively. **DCS vs Existing Discriminative Metrics.** To evaluate the effectiveness of DCS with respect to prior approaches, we substitute DCS with the discriminative metrics proposed in prior work while keeping the same inference structure of SINF. Figure 5 compares DCS against the gradient-based approaches _Sensitivity_ by Mittal et al. (2019) and _Taylor_ by Molchanov et al. (2019), an approach based on sparsity of activation named _APOZ_ by Hu et al. (2016), an approach based on channel independence named _CHIP_ by Sui et al. (2021), and an approach based on channel importance named _HRANK_ by Lin et al. (2020). All the approaches are compared without retraining the DNN, as proposed in SINF. Figure 5 shows that in the best case, DCS has 15% higher accuracy than the second-best approach _Taylor_ for VGG16 with 75% sparsity (i.e., percentage of parameters dropped). For VGG19, DCS achieves in the best case 6.54% higher accuracy than the second-best approach _Taylor_ at 63% sparsity. Lastly, in the case of ResNet50, the best case is attained at 51% sparsity, where DCS presents 14.87% more accuracy than the second-best approach _Sensitivity_. On average, SINF achieves 3.65%, 4.25%, and 2.36% better accuracy than the second-best approaches for VGG16, VGG19, and ResNet50 respectively. **DCS as Pruning Criterion.** Since DCS scores the capability of each filter to distinguish among classes, DCS may be used as a pruning criterion. In this case, the dataset can be viewed as a single macro-cluster and DCS can be applied to determine the most relevant filters. For comparison, we consider the state-of-the-art _IterTVSPrune_ by Murti et al. (2023) published at ICLR 2023, which also does not require fine-tuning. _IterTVSPrune_ performs an iterative search with a threshold on the maximum accuracy loss to find the smallest DNN satisfying the accuracy constraint. For a fair comparison, we have taken the percentage of parameters pruned by _IterTVSPrune_ at each layer and set the same pruning threshold for DCS. In other words, we prune the same percentage of parameters at each layer. Table 1 summarizes the performance achieved by _DCS_ and _IterTVSPrune_. We did not compare on CIFAR100 with ResNet50, as the authors of Murti et al. (2023) did not provide the performance obtained by applying their approach to ResNet50 trained on CIFAR100.
We notice that for different DNN structures and datasets, DCS achieves substantially better performance in 3 out of the 5 settings considered, while achieving similar performance in the remaining 2 settings. The best results are obtained in the case of VGG16 trained on CIFAR100 and ResNet50 trained on CIFAR10, where we see respectively 9.75% and 8.13% accuracy gain with 3.6% and 5.82% fewer parameters.

Figure 5: Comparison between DCS and the state of the art for VGG19, VGG16, and ResNet50.

**Per-Cluster Accuracy Gain.** We posed ourselves the following question: "_Can SINF perform better than the original DNN when considering the accuracy obtained in individual clusters?_" To answer this question, we performed the following experiment. When applying DCS to find the cluster-specific subgraphs, we inserted a constraint that the resulting dataset-level accuracy should be at least as high as that of the original DNN. Then, we find the subgraphs with the lowest percentage of parameters retained. Figure 6 shows the accuracy gain obtained on the individual clusters as compared to the original VGG16, VGG19, and ResNet50 DNNs. Intriguingly, _SINF provides on average 5.73%, 8.38% and 6.36% better per-cluster accuracy than the original VGG16, VGG19, and ResNet50 DNNs, respectively, notwithstanding that the numbers of parameters being used are 30%, 50%, and 44% less than the original DNNs._ We believe the reason behind this improvement is that the semantic partitioning performed by SINF improves the DNN by making it more explainable. This way, the DNN becomes "less confused" among different semantic clusters, which justifies the better results when considering per-cluster accuracy.

\begin{table} \begin{tabular}{|c c c c c c|} \hline **DNN** & **Dataset** & **Parameters Pruned** & **Criterion** & **Accuracy Loss** & **Difference** \\ \hline VGG16 & CIFAR100 & 40.2\% & IterTVSPrune & 18.59\% & +9.75\% Accuracy \\ & & 43.8\% & DCS & 8.84\% & -3.6\% Parameters \\ \hline VGG16 & CIFAR10 & 37.6\% & IterTVSPrune & 1.9\% & +0.61\% Accuracy \\ & & 42\% & DCS & 1.29\% & -4.4\% Parameters \\ \hline VGG19 & CIFAR100 & 59\% & IterTVSPrune & 5.2\% & +0.05\% Accuracy \\ & & 59\% & DCS & 5.15\% & +0\% Parameters \\ \hline VGG19 & CIFAR10 & 49\% & IterTVSPrune & 1.3\% & +0.4\% Accuracy \\ & & 49.65\% & DCS & 0.9\% & -0.65\% Parameters \\ \hline ResNet50 & CIFAR10 & 34.1\% & IterTVSPrune & 9.94\% & +8.13\% Accuracy \\ & & 39.92\% & DCS & 1.81\% & -5.82\% Parameters \\ \hline \end{tabular} \end{table} Table 1: Using DCS as a Pruning Criterion vs _IterTVSPrune_ (ICLR 2023).

Figure 6: Performance gain compared to the original DNN. The x-axis shows the ids of different semantic clusters from the CIFAR100 dataset and the y-axis shows the performance improvement when the models corresponding to specific clusters are used.

## 7 Concluding Remarks In this paper, we have proposed a new framework called _Semantic Inference_ (SINF), which achieves faster execution of DNNs without compromising accuracy. As part of SINF, we have proposed a new approach named _Discriminative Capability Score_ (DCS) to find subgraphs inside large DNNs that discriminate among the members of a specific semantic cluster. We have benchmarked the performance of SINF on the VGG16, VGG19 and ResNet50 DNNs trained on the CIFAR100 and CIFAR10 datasets. By comparing the performance of SINF with several existing approaches, we have shown that SINF outperforms the state of the art.
2310.09163
Jointly-Learned Exit and Inference for a Dynamic Neural Network : JEI-DNN
Large pretrained models, coupled with fine-tuning, are slowly becoming established as the dominant architecture in machine learning. Even though these models offer impressive performance, their practical application is often limited by the prohibitive amount of resources required for every inference. Early-exiting dynamic neural networks (EDNN) circumvent this issue by allowing a model to make some of its predictions from intermediate layers (i.e., early-exit). Training an EDNN architecture is challenging as it consists of two intertwined components: the gating mechanism (GM) that controls early-exiting decisions and the intermediate inference modules (IMs) that perform inference from intermediate representations. As a result, most existing approaches rely on thresholding confidence metrics for the gating mechanism and strive to improve the underlying backbone network and the inference modules. Although successful, this approach has two fundamental shortcomings: 1) the GMs and the IMs are decoupled during training, leading to a train-test mismatch; and 2) the thresholding gating mechanism introduces a positive bias into the predictive probabilities, making it difficult to readily extract uncertainty information. We propose a novel architecture that connects these two modules. This leads to significant performance improvements on classification datasets and enables better uncertainty characterization capabilities.
Florence Regol, Joud Chataoui, Mark Coates
2023-10-13T14:56:38Z
http://arxiv.org/abs/2310.09163v2
# Jointly-Learned Exit and Inference for a Dynamic Neural Network : JEI-DNN ###### Abstract Large pretrained models, coupled with fine-tuning, are slowly becoming established as the dominant architecture in machine learning. Even though these models offer impressive performance, their practical application is often limited by the prohibitive amount of resources required for _every_ inference. Early-exiting dynamic neural networks (EDNN) circumvent this issue by allowing a model to make some of its predictions from intermediate layers (i.e., early-exit). Training an EDNN architecture is challenging as it consists of two intertwined components: the gating mechanism (GM) that controls early-exiting decisions and the intermediate inference modules (IMs) that perform inference from intermediate representations. As a result, most existing approaches rely on thresholding confidence metrics for the gating mechanism and strive to improve the underlying backbone network and the inference modules. Although successful, this approach has two fundamental shortcomings: 1) the GMs and the IMs are decoupled during training, leading to a train-test mismatch; and 2) the thresholding gating mechanism introduces a positive bias into the predictive probabilities, making it difficult to readily extract uncertainty information. We propose a novel architecture that connects these two modules. This leads to significant performance improvements on classification datasets and enables better uncertainty characterization capabilities. ## 1 Introduction The dominant approach to improve machine learning models is to develop larger networks that can handle every potential sample. As a result, despite very impressive performance, the resource overhead is huge (Scao et al., 2023). The push for larger model size is often driven by the need to handle a small percentage of samples that are particularly challenging to infer (Bolukbasi et al., 2017); most inferences do not need the full power of a large network to be successfully executed. Nonetheless, most traditional neural network (NN) models have a fixed processing pipeline. This means that every sample, simple or complex, is processed the same way. To tackle this inefficiency, dynamic networks have been introduced (see (Han et al., 2022) for a review). These models adapt the computational processing to the specific sample being processed. Early-exit dynamic networks (EEDNs) tailor their depth to the sample, allowing easy-to-infer samples to exit at shallower layers. Compared to conventional neural networks, EEDNs incorporate two additional components: 1) Intermediate Inference Modules (IMs) receive a sample's representation at their respective network depth and generate predictions; 2) Gating Mechanisms (GMs) decide which intermediate inference module should be employed to derive the final prediction. The vast majority of EEDNs employ a simple threshold-based gating mechanism. The _threshold GM_ applies thresholds to confidence metrics obtained from the inference modules. To integrate more sophisticated GMs, the gating mechanism can be treated as a post-training add-on component. Given a resource budget and a set of pre-trained IMs, the optimal gating decisions are learned by taking into account both performance and inference cost. Unfortunately in both strategies, the IMs and GMs are decoupled during training; the IMs are trained on the full dataset even though they will only infer a subset of the data at inference time. 
This inevitably creates a mismatch between the train and test distributions. The reliance on threshold GMs also makes it more difficult to obtain confidence levels for predictions. The information required for this determination has already been "consumed" in the gating decision; moreover, applying a threshold introduces an overconfidence bias. In a setting where a model can produce different outputs using varying levels of computational resources, confidence measures are essential. They enable end-users to make informed decisions, whether to accept a cost-efficient output or to request additional computation for a more reliable answer. Currently, state-of-the-art EEDNs do not offer confidence information. **Contributions:** We propose a novel architecture and training approach for the GMs and IMs given a fixed backbone network. Our approach involves joint training, so it directly avoids train-test mismatch and provides good uncertainty characterization. We show empirically that the approach leads to a better overall inference/performance trade-off. The benefits are threefold: 1) we close the training gap between IMs and GMs, which leads to better performance; 2) the architecture produces reliable uncertainty characterization in the form of conformal intervals and well-calibrated predicted probabilities; and 3) the generality of our approach facilitates integration into any state-of-the-art architecture, thus allowing it to significantly outperform dedicated EEDN backbone architectures. ## 2 Related work In this section we briefly summarize the most relevant early-exit dynamic networks literature. Appendix 8.10 contains a more in-depth discussion as well as a larger coverage of peripheral fields, including online learning with adaptive architectures, and sparsely activated Mixtures of Experts. **EEDN-tailored Architectures with threshold GMs:** Much of the EEDN literature focuses on designing architectures that better lend themselves to early-exiting (EEDN-tailored architectures). BranchyNet (Teerapittayanon et al., 2016) augments the backbone network with early-exit branches and is trained end-to-end using a weighted loss. Shallow-deep networks (SDNs) additionally quantify the confusion of the network by comparing the outputs of different IMs (Kaya et al., 2019). Efficient end-to-end training is difficult because earlier branches influence the performance of later ones. Subsequent EEDN-tailored architectures, such as MSDNet (Huang et al., 2018) and RANet (Yang et al., 2020), alleviate this problem via dense connections or increased resolution. Training can be improved by reducing conflicting gradient updates on the backbone parameters using position-based rescaling (Li et al., 2019) or meta-learning (Sun et al., 2022). Reintegrating computations from earlier layers can boost performance via ensembling (Wolczyk et al., 2021; Liu et al., 2022; Passalis et al., 2020). In spite of their differences, all of these works rely on confidence score thresholds to decide whether to early-exit. Many recent advances in the EEDN field also rely on simple threshold GMs (Wang et al., 2020; Han et al., 2023; Wang et al., 2021; Chen et al., 2023b). In general, the thresholds are treated as hyperparameters and assigned values using a validation set. **Learnable GMs, fixed IMs:** A second class of EEDNs considers the problem of learning GMs for frozen pre-trained backbones and IMs. Bolukbasi et al. (2017) train the gates sequentially using a top-down approach.
In EPNet (Dai et al., 2020) the gate-training is formulated as a Markov decision process. PTEENet (Lahiany & Aperstein, 2022) employs a much simpler gate architecture. Karpikova et al. (2023) design learnable gates with a specific focus on confidence metrics for images. All of these models learn gating functions after having trained and frozen all of the IMs. While this two-phase procedure simplifies training, it ignores the coupling that is inherent to these two tasks. **Addressing the train-test mismatch:** Ignoring the influence of the GM on the IMs by training learnable gates after freezing the IMs introduces a train-test mismatch (Yu et al., 2023; Han et al., 2022b). BoostedNet (Yu et al., 2023) proposes an architecture-agnostic training procedure inspired by gradient boosting to close the train-test mismatch. Deeper IMs are trained with an emphasis on samples that are not correctly classified by earlier IMs. Han et al. (2022b) cast the problem as a meta-learning task where a _weight prediction network (WPN)_ is trained jointly with the inference modules. For a given sample, the WPN predicts the gate that will lead to the lowest classification loss and weighs the inference module losses accordingly. Although both methods are good steps towards aligning the training of the IMs with the gating mechanism, they do not fully close the train-test gap in practice. Both methods still employ threshold GMs, and the proposed specialized training of the IMs is not directly tied to which gate is ultimately selected by the threshold GMs. ## 3 Problem Setting We consider a classification problem with a training dataset \(\mathcal{D}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N}\) where \(\mathbf{x}_{i}\in\mathbb{R}^{D}\) denotes the input and \(y_{i}\in\mathcal{K}\) its corresponding classification target (\(\mathcal{K}=\{1,\ldots,K\}\)). We are given a fixed network \(NN:\mathbb{R}^{D}\rightarrow\mathcal{K}\) pretrained on the same task that can be decomposed into a composition of \(L\) layers: \(NN(\mathbf{x})=\sigma\circ h^{L}\circ h^{L-1}\circ\cdots\circ h^{1}(\mathbf{x})\), where \(\sigma\) denotes the softmax operator. Let \(\mathbf{z}^{l}\) be the \(l\)-th intermediate representation of \(NN\) such that \(\mathbf{z}^{l}=h^{l}(\mathbf{z}^{l-1})\) with \(\mathbf{z}^{0}=\mathbf{x}\). Our goal is to augment this fixed network (the backbone) with additional trainable components such that we can obtain a final prediction \(\hat{y}_{i}\) with its associated predicted probability vector \(\hat{\mathbf{p}}_{i}\) at a reduced inference cost. This setting is categorized as **inference cost-only training** (IC-only training) in the review by Laskaridis et al. (2021). To measure the performance of our model, we consider three types of metrics: (i) performance-based (the accuracy of \(\hat{y}\)); (ii) uncertainty-based (the calibration error and the inefficiency score of the conformal intervals obtained from \(\hat{\mathbf{p}}_{i}\)); and (iii) efficiency-based (the inference cost to obtain \(\hat{y}\)). **Inference cost:** In line with BoostedNet (Yu et al., 2023) and L2W-DEN (Han et al., 2022b), the inference cost \(IC_{\hat{y}_{i}}\) for a single prediction \(\hat{y}_{i}\) represents the number of multiply-add (Mul-Add) operations needed to obtain \(\hat{y}_{i}\). We approximate the expected inference cost \(E[IC]\) as: \[E[IC]\approx IC=\frac{1}{N}\sum_{i=1}^{N}IC_{\hat{y}_{i}}\;. \tag{1}\]
**Uncertainty metrics:** We assess the calibration of the model via the expected calibration error (ECE) of the probability of the predicted class (the maximum probability of \(\hat{\mathbf{p}}_{i}\): \(\hat{p}_{i}^{\max}=\max(\hat{\mathbf{p}}_{i})\)). To estimate the ECE, we follow the approach in (Guo et al., 2017). More details concerning the computation of the inference cost and ECE are provided in Appendix 8.2. To obtain conformal intervals, we use the typical conformal score \(\mathbf{s}_{i}\triangleq\mathbf{1}-\hat{\mathbf{p}}_{i}\) and compute a conformal threshold \(\tau^{\mathcal{V},\alpha}\) from a validation set \(\mathcal{V}\) so that we can guarantee a **coverage** of \(1-\alpha\) (see Appendix 8.3 for details). A conformal interval \(\hat{\mathcal{C}}_{i}\) for a sample \(\mathbf{x}_{i}\) is then constructed by thresholding the conformal score \(\mathbf{s}_{i}\): \(\hat{\mathcal{C}}_{i}=\{k;\mathbf{s}_{i}^{(k)}<\tau^{\mathcal{V},\alpha}\}\), where \(\mathbf{s}^{(k)}\) denotes the \(k\)-th element of \(\mathbf{s}\). We measure the performance of our predicted conformal intervals \(\hat{\mathcal{C}}_{i}\) with two metrics: **empirical coverage** \(1-\hat{\alpha}\), which computes the empirical probability that the ground truth is contained in the interval, and **inefficiency** \(|\bar{\mathcal{C}}|\), which is the average cardinality of the predicted sets: \[|\bar{\mathcal{C}}|\triangleq\frac{1}{N}\sum_{i=1}^{N}|\hat{\mathcal{C}}_{i}|\,,\quad 1-\hat{\alpha}\triangleq\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}[y_{i}\in\hat{\mathcal{C}}_{i}]\,. \tag{2}\] A well-behaved conformal prediction model has small inefficiency \(|\bar{\mathcal{C}}|\) (intervals are small on average) while maintaining an empirical coverage close to the desired one: \(1-\alpha\approx 1-\hat{\alpha}\). ## 4 Methodology: JEI-DNN We introduce our method, named Jointly-Learned Exit and Inference for a Dynamic Neural Network (JEI-DNN). Our model augments a pretrained and fixed network by attaching \(L\) classifiers and \(L\) gates to the intermediate layers of the network architecture. We refer to each of these classifiers as an inference module (IM). The \(l^{th}\) IM, \(f^{l}_{\theta_{l}}(\cdot)\), uses the \(l^{th}\) representation \(\mathbf{z}^{l}_{i}\) of the input \(\mathbf{x}_{i}\) to model class probabilities \(\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i})=f^{l}_{\theta}(\mathbf{z}^{l}_{i})\) and to obtain a prediction \(\hat{y}^{l}_{i}=\arg\max_{k}\hat{\mathbf{p}}^{l,(k)}_{\theta}(\mathbf{x}_{i})\). We set the last IM to be the original classifier of the pretrained network: \(f^{L}=h^{L}\). The output of the \(l^{th}\) gate, \(e^{l}\in\{0,1\}\), is modelled by a random variable \(E^{l}|X\) that specifies whether the \(l\)-th exit gate is open. If a sample reaches the \(l\)-th gate and it is open, then it exits via the \(l^{th}\) IM. The assumption is that some samples are "easy enough" to be classified by the early classifiers \(f^{l}_{\theta}(\mathbf{z}^{l})\). Since the last classifier should be able to classify the entire dataset, the last gate is always set to one, i.e., \(e^{L}=1\). The output of our model, \(\hat{y}\), is the prediction of the first IM whose gate is open: \[\hat{y}(\mathbf{x})=\hat{y}^{l}(\mathbf{x})\text{ where }l=\min\{j\in[L]:e^{j}(\mathbf{x})=1\},\quad e^{j}(\mathbf{x})\sim p(E^{j}|X=\mathbf{x}). \tag{3}\]
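As a concrete illustration of equation 3, here is a minimal sketch of the early-exit forward pass for a single sample. It is our own pseudo-implementation, not the authors' code: for simplicity the gate module here consumes the IM output directly and uses a deterministic 0.5 cutoff rather than sampling \(e^{l}\).

```python
import torch

@torch.no_grad()
def early_exit_forward(x, layers, ims, gates):
    """Forward pass implementing Eq. 3 for one sample: evaluate the backbone
    block by block and return the first IM prediction whose gate opens.

    layers: L backbone blocks h^l; ims: L inference modules f^l_theta;
    gates: L-1 gate modules returning P(E^l = 1 | x); the last gate is
    implicitly always open (e^L = 1).
    """
    z = x
    for l, layer in enumerate(layers):
        z = layer(z)                                   # z^l = h^l(z^{l-1})
        p_hat = ims[l](z).softmax(dim=-1)              # IM prediction at depth l
        is_last = l == len(layers) - 1
        if is_last or float(gates[l](p_hat)) > 0.5:    # exit at the first open gate
            return p_hat.argmax(dim=-1), p_hat, l
```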
We can view each IM \(f^{l}_{\theta}\) as having to solve a subtask over a subset of the data \(\mathcal{D}^{l}\subset\mathcal{D}\), where the partitioning is determined by the gates, and \(\mathcal{D}^{j}\cap\mathcal{D}^{k}=\varnothing\) for \(j\neq k\) and \(\cup_{l\in[L]}\mathcal{D}^{l}=\mathcal{D}\). Since the \(E^{l}\) are random, we can also define the distribution \(P(G)\) of exiting at a gate \(G\in[L]\). This is equal to the probability that the sample does not exit at any previous gate multiplied by the probability that the sample exits at gate \(l\): \[P(G=l|X=\mathbf{x})=P(E^{l}|X=\mathbf{x})\prod_{j=1}^{l-1}\left(1-P(E^{j}|X=\mathbf{x})\right). \tag{4}\] The first \(l\) layers of the network as well as the first \(l\) IMs and gates must be evaluated in order to exit at the \(l^{th}\) layer. Assuming input samples of fixed size, the inference cost \(IC_{\hat{y}^{l}_{i}}\) associated with obtaining a prediction from the \(l^{th}\) IM can be approximated as a constant \(IC^{l}\) that only depends on \(l\). That is, for all predictions \(\hat{y}^{l}_{i}\) obtained at level \(l\), we have \(IC_{\hat{y}^{l}_{i}}=IC^{l}\). Appendix 8.2 provides more details concerning how \(IC^{l}\) is computed for our model. During training we use a normalized version \(IC^{l}_{\text{norm}}=\frac{IC^{l}}{IC^{L}}\) that represents the relative cost incurred by exiting at \(l\) compared to going through the full network (i.e., all \(L\) layers). We can then formulate our loss as the expectation over the exited gates \(G|X\) and the data distribution \(X,Y\) of the traditional cross entropy loss \(\mathcal{L}^{CE}(Y,\hat{\mathbf{p}}^{G|X}_{\theta}(X))\) added to the associated inference cost \(IC^{G|X}\) scaled by a parameter \(\lambda\) that controls the importance of the inference cost: \[\mathcal{L}=\mathbb{E}_{Y,X}\mathbb{E}_{G|X}[\mathcal{L}^{CE}(Y,\hat{\mathbf{p}}^{G|X}_{\theta}(X))+\lambda IC^{G|X}_{\text{norm}}], \tag{5}\] \[\mathcal{L}\approx\frac{1}{N}\sum_{i=1}^{N}\Big{(}\sum_{l=1}^{L}(\mathcal{L}^{CE}(y_{i},\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i}))+\lambda IC^{l}_{\text{norm}})P(G=l|\mathbf{x}_{i})\Big{)}\,. \tag{6}\] Our goal is to simultaneously learn the parameters \(\theta\) of the IMs and choose \(P(G=l|\mathbf{x}_{i})\) in order to minimize the expected loss. We target \(P(G=l|\mathbf{x}_{i})\) rather than \(P(E^{j}|\mathbf{x}_{i})\) for \(j=1,\dots,l\) because directly learning the \(P(E^{j})\) leads to optimization issues. The product of probabilities across multiple layers can quickly vanish or saturate to 1. **Modeling \(P(G|\mathbf{x}_{i})\):** In practice, we only need to evaluate \(P(G=l|\mathbf{x}_{i})\) if the dynamic evaluation has reached that point, i.e., everything up to \(f^{l}_{\theta}(\mathbf{z}^{l}_{i})\) has been evaluated and is accessible. Therefore, our parameterization of \(P(G=l|\mathbf{x}_{i})\) can include any intermediate values calculated by the base architecture, the inference modules, and the gates up to and including layer \(l\). We denote this aggregation of information as \(\mathbf{c}^{\leq l}_{i}\). This however also implies that we cannot directly model the multiclass distribution \(P(G|\mathbf{x}_{i})\) with the traditional softmax approach, since it would require \(f^{l}_{\theta}(\mathbf{z}^{l}_{i})\) to be evaluated for all \(l\). This would defeat the purpose of early-exiting.
Instead, we model each \(P(G=l|\mathbf{x}_{i})\) sequentially, starting from the first layer \(l=1\), with a learnable variable \(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i})\in[0,1]\). We set \(P(G=l|\mathbf{x}_{i})\) to the minimum of the learnable variable \(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i})\) and the probability mass not yet assigned to the earlier gates: \[P_{\phi}(G=l|\mathbf{x}_{i})=\min\left(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i}),\,1-\sum_{j=1}^{l-1}g^{j}_{\phi}(\mathbf{c}^{\leq j}_{i})\right),\] so that the probability assigned to gate \(l\) never exceeds the mass left over from the earlier gates and only the first \(l\) gates need to be evaluated (recall that the last gate is always open). Substituting this parameterization into equation 6 yields the loss \[\mathcal{L}\approx\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}\bigl{(}\mathcal{L}^{CE}(y_{i},\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i}))+\lambda IC^{l}_{\text{norm}}\bigr{)}\min\left(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i}),1-\sum_{j=1}^{l-1}g^{j}_{\phi}(\mathbf{c}^{\leq j}_{i})\right). \tag{9}\] Any parameterization of the IMs \(f^{l}_{\theta}\) and gate functions \(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i})\) can be adopted. However, the modules must be small compared to the size of one layer of the backbone model to prevent them from substantially increasing the computational cost of the entire layer during inference. In our experiments, \(f^{l}_{\theta}\) and \(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i})\) are parameterized as simple one-layer neural networks and uncertainty metrics are extracted from \(\mathbf{c}^{\leq l}_{i}\) to serve as input to \(g^{l}_{\phi}(\cdot)\). A detailed description of our modules is included in Section 5.1. Appendix 8.2 includes the inference cost of the gates and IMs to demonstrate that the added computation is minimal when compared to the inference cost of a single layer. (The total added computation amounts to less than 1.12% of the Mul-Adds of the backbone.) **Uncertainty prediction:** Next we describe how we use the predicted probabilities \(\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i})\) to form conformal intervals. The conformal intervals are derived from a conformal threshold \(\tau^{\mathcal{V},\alpha}\) that is computed using a validation set \(\mathcal{V}\). However, since we can view the IMs as solving different subtasks over different subsets of the data \(\mathcal{D}^{l}\), we should compute a different conformal threshold for each subtask. Hence, we form one validation set per gate, \(\mathcal{V}^{l}\), where \(\mathcal{V}^{l}=\{i;l\sim P_{\phi}(G|\mathbf{x}_{i})\,,i\in\mathcal{V}\}\), and compute a conformal threshold from that subset: \(\tau^{\mathcal{V}^{l},\alpha}\). If there are too few samples in the validation set associated with gate \(l\), we set \(\tau^{\mathcal{V}^{l},\alpha}\) to a general threshold \(\tau^{\mathcal{V},\alpha}\) computed using the entire validation set. The conformal intervals are then constructed as follows: \[\hat{\mathcal{C}}_{i}=\{k;1-\hat{\mathbf{p}}^{l,(k)}_{\theta}(\mathbf{x}_{i})<\tau^{\mathcal{V}^{l},\alpha}\,,l\sim P(G|\mathbf{x}_{i})\} \tag{10}\] There are many other ways to construct the sets used to compute the threshold in the EEDN setting. In Appendix 8.3, we show that a variety of methods produce similar results.
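As a concrete illustration, the following is a minimal sketch of the per-gate threshold computation, assuming the validation conformal scores and exit gates have been collected; the fallback sample count `min_count` is our own illustrative choice (the text above only requires "too few samples").

```python
import numpy as np

def conformal_thresholds(scores, exit_gate, alpha, num_gates, min_count=50):
    """Per-gate conformal thresholds tau^{V^l, alpha} from a validation set.

    scores:    [N] conformal scores s_i = 1 - p_hat(true class).
    exit_gate: [N] gate index at which each validation sample exited.
    Gates with fewer than min_count samples fall back to the global threshold.
    """
    def quantile(s):
        n = len(s)
        q = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
        return np.quantile(s, q)

    tau_global = quantile(scores)
    taus = {}
    for l in range(num_gates):
        s_l = scores[exit_gate == l]
        taus[l] = quantile(s_l) if len(s_l) >= min_count else tau_global
    return taus

# At test time, a sample exiting at gate l gets the prediction set (Eq. 10):
#   C_hat = {k : 1 - p_hat[k] < taus[l]}
```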
### Optimization Directly optimizing the loss in equation 9 is challenging because of the \(\min\) operator: \[\phi^{*},\theta^{*}=\operatorname*{arg\,min}_{\phi,\theta}\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}\bigl{(}\mathcal{L}^{CE}(y_{i},\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i}))+\lambda IC^{l}_{\text{norm}}\bigr{)}\min\left(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i}),1-\sum_{j=1}^{l-1}g^{j}_{\phi}(\mathbf{c}^{\leq j}_{i})\right)\!. \tag{11}\] However, there is a distinction between the parameters \(\theta\) and \(\phi\) which makes this loss a good candidate for a bi-level optimization formulation (see (Chen et al., 2023a) for a survey). Hence, using \(C^{l}_{\theta(\phi)}=\mathcal{L}^{CE}(y_{i},\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i}))+\lambda IC^{l}_{\text{norm}}\), we can rewrite equation 11 as follows: \[\phi^{*}=\operatorname*{arg\,min}_{\phi}\mathcal{L}^{out}\triangleq\operatorname*{arg\,min}_{\phi}\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}C^{l}_{\theta^{*}(\phi)}P_{\phi}(G=l|\mathbf{x}_{i}), \tag{12}\] \[s.t.\quad\theta^{*}(\phi)=\operatorname*{arg\,min}_{\theta}\mathcal{L}^{in}\triangleq\operatorname*{arg\,min}_{\theta}\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}\bigl{(}\mathcal{L}^{CE}(y_{i},\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i}))+\lambda IC^{l}_{\text{norm}}\bigr{)}P_{\phi}(G=l|\mathbf{x}_{i}). \tag{13}\] Since the gate probabilities \(P_{\phi}(G=l|\mathbf{x}_{i})\) take as input the aggregation of intermediate values that were previously calculated, \(\mathbf{c}^{\leq l}_{i}\), they can depend on \(\theta\). To make that dependence explicit in the calculation of the gradient with respect to \(\theta\), we denote \(P_{\phi}(G=l|\mathbf{x}_{i})=G_{\phi}(m(\theta,\mathbf{x}_{i}))\). The derivative of the loss with respect to the two different sets of parameters is given by: \[\frac{\partial\mathcal{L}^{in}}{\partial\theta}=\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}\frac{\partial\mathcal{L}^{CE}(y_{i},\hat{\mathbf{p}}^{l}_{\theta}(\mathbf{x}_{i}))}{\partial\theta}P_{\phi}(G=l|\mathbf{x}_{i})+\frac{\partial G_{\phi}(m(\theta,\mathbf{x}_{i}))}{\partial\theta}C^{l}_{\theta(\phi)}\,, \tag{14}\] \[\frac{\partial\mathcal{L}^{out}}{\partial\phi}=\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}\frac{\partial P_{\phi}(G=l|\mathbf{x}_{i})}{\partial\phi}C^{l}_{\theta^{*}(\phi)}=\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}\frac{\partial\min\left(g^{l}_{\phi}(\mathbf{c}^{\leq l}_{i}),1-\sum_{j=1}^{l-1}g^{j}_{\phi}(\mathbf{c}^{\leq j}_{i})\right)}{\partial\phi}C^{l}_{\theta^{*}(\phi)}\,. \tag{15}\] We note that equation 15 becomes undefined when the two terms of the \(\min\) operator are equal. We address this issue below when describing how to optimize the gates. **Optimizing the IMs:** By inspecting equation 14, we can recognize that the left term is the same gradient that is encountered for a straightforward weighted cross-entropy loss. Interestingly, our principled approach leads to an objective with a term that is similar to the one proposed by Han et al. (2022b) to address the train-test mismatch issue. In our case, the weights emerge directly from our proposed gating mechanism: \(P_{\phi}(G=l|\mathbf{x}_{i})\). The right term is driven by the impact of the \(\theta\) parameters on the gates: \(\frac{\partial G_{\phi}(m(\theta,\mathbf{x}_{i}))}{\partial\theta}\). In practice, we observe that ignoring the right term of the gradient does not impact performance and leads to faster convergence, as we show in Appendix 8.7.
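In code, the resulting IM update is a weighted cross-entropy with stop-gradient weights. The following is a minimal sketch of one loss evaluation (our own illustration, not the authors' implementation); the clamp guarding against the \(g\) values summing past one is a safety choice of ours.

```python
import torch
import torch.nn.functional as F

def im_loss(logits_per_gate, targets, g_vals):
    """Weighted cross-entropy for the IMs (left term of Eq. 14).

    logits_per_gate: list of L tensors [B, K], the IM outputs f^l_theta.
    g_vals: [B, L] gate outputs g^l_phi in [0, 1]; the last column is 1.
    The exit probabilities P(G=l|x) follow the min-rule and are detached,
    so gradients flow only through the cross-entropy term (the right term
    of Eq. 14 is dropped, as discussed above).
    """
    _, L = g_vals.shape
    cum_g = torch.zeros_like(g_vals[:, 0])
    p_exit = []
    for l in range(L):
        remaining = (1.0 - cum_g).clamp(min=0.0)              # mass left for gate l
        p_exit.append(torch.minimum(g_vals[:, l], remaining).detach())
        cum_g = cum_g + g_vals[:, l]
    loss = 0.0
    for l in range(L):
        ce = F.cross_entropy(logits_per_gate[l], targets, reduction="none")
        loss = loss + (p_exit[l] * ce).mean()                 # weight by P(G=l|x)
    return loss
```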
**Optimizing the gates:** The derivative in equation 15 is more challenging, making direct optimization undesirable. Instead, we construct surrogate binary classification tasks to train the \(g_{\phi}^{l}\). For a given sample \(\mathbf{x}_{i}\) we construct binary targets for \(g_{\phi}^{1}(\mathbf{c}_{i}^{\leq 1}),\ldots,g_{\phi}^{L-1}(\mathbf{c}_{i}^{\leq L-1})\) by evaluating the relative cost of each gate \(C_{\theta^{*}(\phi)}^{l}\) and determining which of the \(L\) gates has the lowest cost, denoted as \(l^{*}=\arg\min_{l}C_{\theta^{*}(\phi)}^{l}\). This is the gate at which \(\mathbf{x}_{i}\) should exit. Hence, we set the binary target of gate \(l^{*}\) to 1, and the binary targets of all the preceding gates to zero. As for the subsequent gates, since the sample is supposed to exit at \(l^{*}\), we assume that it should also exit at any later gate, so we set the binary targets of all subsequent gates to 1 as well. Hence, the targets \(t_{i}^{1},\ldots,t_{i}^{L}\) of the binary tasks for our gates \(g_{\phi}^{1}(\mathbf{c}_{i}^{\leq 1}),\ldots,g_{\phi}^{L-1}(\mathbf{c}_{i}^{\leq L-1})\) are given by \(t_{i}^{j}=0\) for \(j<l^{*}\) and \(t_{i}^{j}=1\) for \(j\geq l^{*}\). The connection between these surrogate tasks and the initial objective from equation 12 can be established by showing that they can share the same solution under some assumptions. We demonstrate this in Appendix 8.4. Hence, instead of following the gradient from equation 15, we approximate it by the gradient of the surrogate loss: \[\frac{\partial\mathcal{L}^{out}}{\partial\phi}\approx\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L}\frac{\partial\mathcal{L}^{CE}(t_{i}^{l},g_{\phi}^{l}(\mathbf{c}_{i}^{\leq l}))}{\partial\phi}. \tag{16}\] This amounts to summing the gradients of the losses of \(L\) independent binary classification tasks. If the tasks are highly imbalanced, we compute the class imbalance ratio on a validation set at each epoch and use weighted class training. Following the bi-level optimization procedure, training is carried out by alternating between optimizing using the gradients in equation 16 and equation 14. In practice, we start by training all IMs on the full dataset in a warmup stage, as described in Section 5. Algorithm 1 in Appendix 8.5 provides a detailed exposition of the entire algorithm.
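The surrogate targets and the gate loss of equation 16 can be sketched as follows (our own minimal illustration; the class-imbalance weighting mentioned above is omitted).

```python
import torch
import torch.nn.functional as F

def gate_targets(ce_per_gate, ic_norm, lam):
    """Binary targets t^l for the surrogate gate tasks.

    ce_per_gate: [B, L] cross-entropy of each IM for each sample.
    ic_norm:     [L] normalized inference costs IC^l / IC^L.
    The best exit l* minimizes C^l = CE^l + lam * IC^l_norm; targets are 0
    before l* and 1 from l* onward.
    """
    cost = ce_per_gate + lam * ic_norm.unsqueeze(0)      # [B, L]
    l_star = cost.argmin(dim=1, keepdim=True)            # [B, 1] best exit per sample
    gate_idx = torch.arange(cost.shape[1]).unsqueeze(0)  # [1, L]
    return (gate_idx >= l_star).float()                  # t^l = 1 for l >= l*

def gate_loss(g_vals, ce_per_gate, ic_norm, lam):
    """Sum of L independent binary cross-entropy tasks (Eq. 16)."""
    targets = gate_targets(ce_per_gate, ic_norm, lam)
    return F.binary_cross_entropy(g_vals, targets)
```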
## 5 Experiments We use the vision transformers T2T-ViT-7 and T2T-ViT-14 (Yuan et al., 2021) pretrained on the ImageNet dataset (Deng et al., 2009), which we then transfer-learn to the datasets CIFAR10, CIFAR100, CIFAR100-LT (Krizhevsky, 2009), and SVHN (Netzer et al., 2011). For dataset details, see Appendix 8.1.1. For transfer-learning, we use the procedure from (Yuan et al., 2021). We provide checkpoints for all these models in our code. Our backbone \(NN\) for each dataset is the frozen transfer-learned model. We augment the backbone with gates and intermediate IMs at every layer. We generate results by varying the inference cost parameter \(\lambda\) over the range 0.01 to 10. Appendix 8.1.1 contains further details concerning hyperparameters and the experimental procedure. ### 5.1 Our model **Gate design \(g_{\phi}^{l}(\mathbf{c}_{i}^{\leq l})\):** To ensure a lightweight design, we construct a small number of features by computing uncertainty statistics from the output \(\hat{\mathbf{p}}^{l}\) of the \(l\)-th IM, \(f_{\theta}^{l}(\mathbf{z}_{i}^{l})\). Hence the \(l\)-th gate can be represented as \(g_{\phi}^{l}(\mathbf{c}_{i}^{\leq l})=g_{\phi}^{l}(m(\theta,\mathbf{x}_{i}))\). We choose \(m(\theta,\mathbf{x}_{i})=[\hat{p}_{i}^{l,\max}(\mathbf{x}_{i}),h^{l}(\mathbf{x}_{i}),h_{pow}^{l}(\mathbf{x}_{i}),mar^{l}(\mathbf{x}_{i})]^{T}\), where \(\hat{p}_{i}^{l,\max}(\mathbf{x}_{i})\) is the maximum predicted probability, \(h^{l}(\mathbf{x}_{i})\) is the entropy of the predictions, \(h_{pow}^{l}(\mathbf{x}_{i})\) is the entropy of the predictions scaled by a temperature, and \(mar^{l}(\mathbf{x}_{i})\) is the difference between the two most confident predictions. For more detail see Appendix 8.9. Rather than sampling \(e^{l}\sim p(E^{l}|\mathbf{x})\) to decide whether to exit, we use the deterministic decision \(P(E^{l})>0.5\), which leads to more stable results. **IM design \(f_{\theta}^{l}(\mathbf{z}_{i}^{l})\):** The IMs all have the same architecture. We reduce cost by limiting their input size and number of layers. The input to \(f_{\theta}^{l}\) is only \(\mathbf{z}_{i}^{l}\) and the IM is a single-layer NN. The lightweight design allows us to insert exit branches at every layer. This gives our model greater exit flexibility, in contrast with many existing approaches (Chiang et al., 2021; Lahiany and Aperstein, 2022; Li et al., 2023; Ilhan et al., 2023), which typically use only a few exit branches. **Training procedure:** Our training procedure consists of two phases: (i) warm-up; and (ii) bi-level optimization. In the warm-up phase the first \(L-1\) IMs are trained in parallel on all samples for a fixed number of warm-up epochs. This ensures that the intermediate IMs perform reasonably well when we start training the gates. During the bi-level optimization, we alternate between optimizing the gate parameters \(\phi\) and the IM parameters \(\theta\). See Appendix 8.1.4 for more detail. ### 5.2 Baselines We compare our proposal with the following baselines: * **BoostedNet** (Yu et al., 2023) and **L2W-DEN** (Han et al., 2022b) are state-of-the-art benchmarks with architecture-agnostic training procedures for general-purpose networks. * **MSDNet** (Huang et al., 2018) and **RANet** (Yang et al., 2020) are representative EEDN-tailored architectures. We additionally compare with the improved training procedures in Yu et al. (2023) and Han et al. (2022b). To obtain the uncertainty metrics, we calibrate the predicted \(\hat{\mathbf{p}}\) by first applying the post-calibration algorithm of temperature scaling from Guo et al. (2017) on the output logits using a validation set. Since all baselines also require a validation set to select gating thresholds, we split the validation set into two: \(\mathcal{V}=\mathcal{V}^{1}\cup\mathcal{V}^{2}\); \(\mathcal{V}^{1}\) is used to compute the gating thresholds and \(\mathcal{V}^{2}\) is used for hyperparameter tuning, early stopping, setting the conformal threshold, and calibrating the output. ### 5.3 Results **Observation 1:** _Joint training of the GMs and the IMs closes the train-test mismatch, which leads to significant performance improvements._ Figure 1 depicts performance versus inference cost curves. Appendix 8.8 presents results on additional datasets and architectures. Focusing on the region exceeding \(80\%\) of the end accuracy of the total network, our suggested approach significantly outperforms the baselines for all datasets. Outside of that range, it is on par or better. Moreover, the \(95\%\) confidence intervals around the mean of the accuracy of our proposed approach (shaded areas) are significantly tighter. The CIs of all methods are wider for CIFAR100-LT.
The imbalance leads to noisier predictions, which makes it particularly challenging for the baselines. To provide further insight into why our model outperforms the baselines, we analyze the results in the subsequent sections. Appendix 8.6 provides an ablation study which demonstrates the value of learnable gates and joint training. **Observation 2:** _Training the GMs leads to a better gate selection._ Because our GMs are jointly trained with the IMs and take into account the cost of inference, the GMs can avoid poorly performing gates or gates that provide only marginal performance improvement. Figure 2 shows which gates are selected for exit by our proposed method (JEI-DNN) and the baselines. We see that JEI-DNN avoids using early IMs altogether and focuses on IMs 5 to 10. The joint training approach thus effectively addresses the exit placement task. Rather than employing a few more capable, complex GMs and IMs and selecting where to place them, as in Li et al. (2023), our approach introduces very simple, lightweight GMs and IMs, and learns to focus on the IMs that can perform satisfactorily for a given inference budget. In contrast, the threshold GMs employed by the baselines lead to more evenly distributed exits. Since the threshold GMs make decisions by thresholding confidence metrics derived from predicted probabilities, it is more difficult to avoid exiting at early, poorly performing IMs (these IMs are overly confident for some samples).

Figure 1: Accuracy vs Mul-Add for: **Left** CIFAR100 (t2t-14); **Middle** SVHN (t2t-7); and **Right** CIFAR100-LT (t2t-14). The x-axes are scaled by the full model inference cost, Mul-Add (\(IC^{L}\)).

Moreover, the thresholding approach implicitly relies on the assumption that the IMs are well calibrated, which cannot be guaranteed (Guo et al., 2017; Minderer et al., 2021). Experimentally, we verify that this is not the case and that calibration is positively correlated with the depth of the IMs (see Figure 3). Consequently, calibration is also correlated with accuracy, which is in line with the findings of Minderer et al. (2021). Poor calibration leads to inaccurate predicted probabilities, which introduces noise in the inputs to the threshold GM. As a result, the exit selection is also more variable, leading to a less predictable accuracy-IC trade-off (as exhibited by the much wider CIs in Figure 1). **Observation 3:** _Better gate selection concentrates training on better IMs._ Figure 2(b) highlights that the improvement of our model does not come from IMs that are superior over all of the training data. Baseline and JEI-DNN IMs perform similarly. The improvement in the IC-accuracy trade-off is explained by _the trend_ of IM accuracies in conjunction with IM usage. JEI-DNN exits very few samples at IMs 1-4 and 11-14. The early IMs are too inaccurate. The accuracy gain achieved by postponing exit to the late IMs (11-14) is not justified by the increased inference cost. The GMs must be able to accurately direct easier samples to earlier IMs. JEI-DNN clearly achieves this. For IMs 6-8, where most samples exit, the accuracy is higher over the exiting samples (Fig. 2(a)) than over all samples (Fig. 2(b)). By contrast, for gates 9-10, where only a few challenging samples exit, the exit accuracy is lower. For JEI-DNN, the most challenging samples exit at gates 9-11; for the baselines, they exit at gate 13. Despite having access to less informative features, JEI-DNN achieves a better average accuracy for these challenging samples at considerably lower inference cost.
**Observation 4:** _Learnable GMs lead to better uncertainty characterization._ Fig. 4(a) highlights that even with post-calibration, the baselines' predicted probabilities remain extremely poorly calibrated. JEI-DNN's calibration is much better and the calibration error correlates negatively with accuracy. This is desirable; a higher inference cost should lead to better accuracy and better calibration. When the baselines provide more accuracy, the calibration error is much worse. This disparity has two causes: (i) JEI-DNN refrains from using the early, worst-calibrated gates (Fig. 3); (ii) the baselines must increase the GM thresholds to achieve higher accuracy, but this leads to a bias -- samples only exit if the predictions are very confident.

Figure 3: ECE of the IMs averaged over all baselines on all datasets with t2t-7.

Figure 2: Decomposition of the contributions of the IMs to the final accuracies (depicted by dotted lines) for CIFAR100. The operational point is marked by a star in the left panel of Figure 1. **a)** Accuracy of each IM, \(f_{\theta}^{l}\), evaluated only on their exited samples (\(\mathcal{D}^{l}\)). **b)** Accuracy of each IM, \(f_{\theta}^{l}\), on the full dataset \(\mathcal{D}\). The size of a circle is proportional to the number of samples exited for a trial.

We now consider conformal intervals. These should circumvent the threshold GM bias as they are derived via a validation set and have probabilistic guarantees. Fig. 4(b) shows that JEI-DNN offers significantly tighter conformal intervals \(|\tilde{\mathcal{C}}|\) for a given constraint on empirical coverage \(\hat{\alpha}\). The empirical coverage values for JEI-DNN are also closer to the requested coverages \(\alpha\) (see Fig. 4(c)).

**Observation 5:** _JEI-DNN using a fixed backbone outperforms dedicated EEDN architectures._ We now compare with state-of-the-art dedicated EEDN approaches that employ a trainable backbone. We select MSDNet (Huang et al., 2018), RANet (Yang et al., 2020), BoostedNet (Yu et al., 2023), and L2W-DEN (Han et al., 2022b) as representative models. Figure 5 shows that JEI-DNN with a T2T-ViT-14 backbone achieves much better accuracy while keeping the inference cost low. We achieve an accuracy of 82% for 22% of the inference cost of the full trainable backbone architectures. By contrast, the trainable architectures can only achieve an accuracy of 73% at this cost. However, in the very low-cost regime, dedicated EE architectures like MSDNet and RANet are able to maintain higher accuracy. By that point, however, the reduction in accuracy is severe.

## 6 Limitations

Although our proposal outperforms the baselines, both with respect to the accuracy-IC trade-off and uncertainty characterization, it has some drawbacks. First, since we must specify the inference cost parameter \(\lambda\), we only obtain one accuracy/IC operational point per training. To obtain different values, we need to retrain the model with a changed \(\lambda\) value. By contrast, the baseline methods that employ threshold GMs can modify the inference time after deployment by changing the threshold. Second, we cannot directly target a specific accuracy or inference cost. In practice, if there is a constraint on one of the metrics, we either must train multiple models, adjusting \(\lambda\) until the constraint is met, or use an ad-hoc approach by applying different thresholds to the \(g_{\phi}^{l}\) values.
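A minimal sketch of this ad-hoc thresholding fallback (the rule and the names are our own illustration, not the authors' code) might be:

```python
import numpy as np

def exit_with_threshold(gate_probs: np.ndarray, thr: float) -> int:
    """Post-deployment control of the accuracy/IC trade-off: exit at the
    first gate whose learned exit probability exceeds `thr`. Lowering `thr`
    favors earlier (cheaper) exits; raising it favors accuracy."""
    for l, g in enumerate(gate_probs):
        if g > thr:
            return l                 # exit at gate l
    return len(gate_probs) - 1       # otherwise use the final IM
```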
In summary, if the desired trade-off between inference cost and accuracy is likely to vary over time, and it is acceptable to sacrifice some accuracy, then the approach taken by the baselines would be preferable. ## 7 Conclusion We have introduced a novel early-exit dynamic neural network architecture, JEI-DNN, that is compatible with off-the-shelf backbone architectures. By augmenting a backbone with lightweight, trainable gates and inference modules that are jointly optimized, we close the train-test gap, leading to better performance of inference modules. We show that this approach also outperforms state-of-the-art dedicated EEDN architectures. Furthermore, our approach provides significantly better uncertainty characterization. We show that the inference modules are better calibrated in the proposed JEI-DNN and tighter conformal intervals are obtained for a desired marginal coverage.
2307.05396
Handwritten Text Recognition Using Convolutional Neural Network
OCR (Optical Character Recognition) is a technology that offers comprehensive alphanumeric recognition of handwritten and printed characters at electronic speed by merely scanning the document. Recently, the understanding of visual data has been termed Intelligent Character Recognition (ICR). Intelligent Character Recognition (ICR) is the OCR module that can convert scans of handwritten or printed characters into ASCII text. ASCII data is the standard format for data encoding in electronic communication. ASCII assigns standard numeric values to letters, numerals, symbols, white-spaces and other characters. In more technical terms, OCR is the process of using an electronic device to transform 2-dimensional textual information into machine-encoded text. Anything that contains text, whether machine-written or handwritten, can be scanned through a scanner, or simply a picture of the text is enough for the recognition system to distinguish the text. The goal of this paper is to show the results of a Convolutional Neural Network model which has been trained on the National Institute of Standards and Technology (NIST) dataset containing over 100,000 images. The network learns from the features extracted from the images and uses them to generate the probability of each class to which a picture belongs. We have achieved an accuracy of 90.54% with a loss of 2.53%.
Atman Mishra, A. Sharath Ram, Kavyashree C
2023-07-11T15:57:15Z
http://arxiv.org/abs/2307.05396v1
# Handwritten Text Recognition Using Convolutional Neural Network

###### Abstract

OCR (Optical Character Recognition) is a technology that offers comprehensive alphanumeric recognition of handwritten and printed characters at electronic speed by merely scanning the document. Recently, the understanding of visual data has been termed Intelligent Character Recognition (ICR). Intelligent Character Recognition (ICR) is the OCR module that can convert scans of handwritten or printed characters into ASCII text. ASCII data is the standard format for data encoding in electronic communication. ASCII assigns standard numeric values to letters, numerals, symbols, white-spaces and other characters. In more technical terms, OCR is the process of using an electronic device to transform 2-dimensional textual information into machine-encoded text. Anything that contains text, whether machine-written or handwritten, can be scanned through a scanner, or simply a picture of the text is enough for the recognition system to distinguish the text. The goal of this paper is to show the results of a Convolutional Neural Network model which has been trained on the National Institute of Standards and Technology (NIST) dataset containing over 100,000 images. The network learns from the features extracted from the images and uses them to generate the probability of each class to which a picture belongs. We have achieved an accuracy of 90.54% with a loss of 2.53%.

Neural Networks, OCR, Convolution, Pooling, Regularisation, Pre-processing, Output Layers

## I Introduction

In the past three decades, much work has been devoted to handwritten text recognition, which is used to convert human-readable handwritten language into machine-readable codes. Handwritten text recognition has attracted a great deal of interest because it provides a method for automatically processing enormous quantities of handwritten data in a variety of scientific and business applications. The underlying problem with handwritten text has been that various individuals' representations of the same character are not identical. An additional difficulty experienced while attempting to decipher English handwritten characters is the variance in personal writing styles and situational differences in a person's writing style. In addition, the writer's disposition and writing environment may influence writing styles. The complexity of optical pattern recognition becomes apparent only when one attempts to create a computer system that can understand handwriting. The strategy using artificial neural networks is thought to be the most effective for creating handwriting recognition systems. Neural networks help significantly in modelling how the human brain operates when identifying handwritten language, and do so in an efficient manner. They enable machines to interpret handwriting on par with or better than human ability. Humans use a variety of writing styles, many of which are difficult to read. Additionally, reading handwriting may be time-consuming and difficult, particularly when one is required to examine several documents with handwriting from various persons. Neural networks are the best choice for the suggested system since they can extract meaning from complicated data and spot trends that are hard to spot manually or using other methods. The primary goal of this project is to create a model based on the concept of the Convolutional Neural Network that can recognize handwritten digits and characters from a picture.
We have built a simple Convolutional Neural Network (CNN) system which has been trained on the NIST dataset.

## II Related Work

Many researchers have tried to develop handwritten text recognition models in the past, but none of them are perfectly accurate, and the field still requires much more research. M. Brisinello et al [1] proposed a pre-processing method which improves Tesseract OCR 4.0's performance by approximately 20%. They implemented a two-step process which involves clustering of input images and a classifier which identifies whether the images contain text or not. In [2], neural networks are used to sample the pixels in the image into a matrix and match them to a known pixel matrix. It achieved an astounding accuracy of 95.44%. An open-source OCR engine developed by HP, called the Tesseract OCR Engine, has been implemented in research which achieved an accuracy of 95.305%, as can be seen in [3]. A different approach has been implemented in [4], which used RNNs and an embedded system for character recognition of Devanagari, a script used by 120 Indian languages. Methods based on confusion matrices were employed in the study that Yuan-Xiang Li et al [5] conducted. The approach starts with the original candidates, utilises them to conjecture which characters are most likely to be correct, and then combines the postulated set with the original candidates to obtain a new set of candidates. In [6] by T. Sari et al., the technology of character segmentation has been incorporated in a different manner of implementation. In many OCR systems, character segmentation is a prerequisite step for character recognition. It is crucial because poorly segmented characters are unlikely to be correctly recognised.

## III Convolutional Neural Network

In the field of Deep Learning, Convolutional Neural Networks (CNNs) are a type of Artificial Neural Network (ANN) frequently used to analyse visual data, including photographs and videos, in order to discover patterns or trends in the visual data. This type of analysis is typically carried out in order to discover new applications for the visual data. A multilayer perceptron is typically a fully connected network, and a CNN is a simplified version of this type of network architecture. The Convolutional Neural Network (CNN) is made up of a variety of components and operations, such as Pooling, Convolution, and Neural Network, among others.

### _Neural Networks_

Neural Networks, or Artificial Neural Networks, are computerized systems developed to replicate the workings of the animal brain. ANNs are usually used to perform a certain limited task, but they can be trained to perform difficult tasks that replicate or exceed what is done by humans. Neural networks are composed of 3 layers, shown in Fig 1:

* Input Layer
* Hidden Layer
* Output Layer

Every artificial neuron receives an input and produces a single output that can be spread to a number of different artificial neurons. The feature values of a sample of external data, such as photographs, can serve as the inputs. Alternatively, the outputs of other neurons can serve in this capacity. The objective, such as recognising an object in an image, is completed by the outputs of the final output neurons of the neural network. Neural networks are also frequently described as weighted graphs. Each connection between neurons has a weight 'W', and a bias 'b' is added in order to form a weighted input.
After that, the weighted input will be utilised in an activation function that is present in a neuron in order to bring about non-linearity in the output of the neuron. An activation function is a mathematical function employed in the hidden layers that uses the weighted input and bias to determine whether or not the particular neuron will be activated. The activation function also applies a non-linear transformation to the input, which enables the system to acquire new knowledge about the input. The output of the activation function will be regarded as an input to the next neuron. The above-mentioned process for a single neuron is shown in Fig 2.

### _Input_

A basic Convolutional Neural Network uses image matrices as the input. Images are stored in a system in the form of mathematical matrices. For example, a colored image of dimensions 1920 \(\times\) 1080 is stored in the system as a 3D matrix of size (1920, 1080, 3), where 1920 is the width, 1080 is the height, and 3 represents the number of color channels, i.e., RGB. Each cell in the image matrix contains the RGB value of the corresponding original image, as shown in Fig 3. The image matrix is given as an input to the neural network for further processing, which involves steps such as feature extraction using convolution and pooling for faster processing, among other similar processes.

Fig. 1: A Basic Neural Network

Fig. 2: Single Neuron

Fig. 3: Matrix representation of a color image

### _Convolution_

The mathematical process known as convolution is performed on two functions in order to generate a third function that expresses how the shape of one function affects the shape of the second function. In other terms, "convolution" refers to the act of multiplying two functions point-wise to create a third function. In this case, one of the functions is the image pixel matrix, and the other is the filter. A filter is a tiny matrix that is used to extract features from the input image matrix through the convolution operation. The matrix that is produced is called a feature map. In mathematical form,

\[y[i,j]=\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty}h[m,n]\cdot x[i-m,j-n]\]

where \(m\) and \(n\) index the rows and columns of the filter \(h\), and \(x\) is the image. The convolution process for matrices is shown in Fig 4.

### _Pooling_

The process of pooling entails sliding a small two-dimensional matrix over the feature map in an effort to reduce the dimensions of the map without losing the knowledge of the features located in each region. Pooling reduces the quantity of parameters that must be learned, which in turn makes the computation more efficient. Pooling can be broken down into two distinct categories: maximum pooling and average pooling. The output of the max pooling method is determined by the highest value in a region, whereas the output of the average pooling method is determined by the average of all the values in that region, as illustrated in Figure 5.

### _Regularization_

Regularization is a technique used to decrease the complexity of the neural network during training in order to avoid overfitting the model. Three types of regularization are used: L1, L2 and Dropout. We are using the Dropout regularization method, which drops or turns off random nodes in the network according to some probability \(P\), thus reducing the complexity of the neural network and avoiding overfitting. The Dropout process is shown in Fig 6.
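To make the two operations above concrete, the following is a minimal NumPy sketch of the sliding-filter computation and of max pooling (our own illustration, not code from this paper):

```python
import numpy as np

def conv2d(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Direct evaluation of y[i, j] = sum_m sum_n h[m, n] * x[i-m, j-n],
    restricted to the 'valid' region of the image, in the sliding-filter
    form of Fig 4 (flip h in both axes for true convolution; CNN libraries
    usually compute this correlation form)."""
    H, W = x.shape
    m, n = h.shape
    y = np.zeros((H - m + 1, W - n + 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            y[i, j] = np.sum(h * x[i:i + m, j:j + n])
    return y

def max_pool(x: np.ndarray, k: int = 2) -> np.ndarray:
    """k x k max pooling with stride k, as in Fig 5."""
    H, W = x.shape
    return x[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).max(axis=(1, 3))
```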
### _Hidden and Output layers_

Hidden layers, also called Fully Connected Layers, use non-linear activation functions to apply a non-linear transformation on a flattened, i.e., 1-D, input. They use activation functions like ReLU (Rectified Linear Unit), tanh, sigmoid, etc. The output layer uses probabilistic functions like sigmoid (logistic), softmax, or linear functions to find the probability of each class the network is classifying.

## IV System Design

### _Dataset_

The dataset utilized in this project has been contributed by NIST (National Institute of Standards and Technology) and contains a total of 101,784 images across 47 classes. The 47 classes include 0-9, A-Z, and a, b, d, e, f, g, h, n, q, r, t. Only some of the lowercase letters are considered, owing to the similarity between the lowercase and uppercase forms of the rest, which reduces the complexity of model training. A sample from the dataset is shown in Fig 7.

Fig. 4: Convolution Operation on Matrices

Fig. 5: Pooling process

Fig. 6: Dropout Regularization with \(P\)=0.5

Fig. 7: A sample from the character dataset

### _Preprocessing_

The images in the dataset are of dimensions 128 x 128, which would exponentially increase the number of parameters during training, resulting in longer training times. To avoid this issue, the images are rescaled to 32 x 32, and the images along with the labels are converted into two NumPy arrays, respectively. A resized image uses fewer CPU and GPU resources because there are fewer pixels to process. The images are already in grey-scale, so we don't need to add extra filters. The resulting dimension of a single image is 32 x 32 x 1, where 1 represents the single grey-scale color channel.

### _Splitting into Training and Testing Dataset_

Training and testing datasets are created from the dataset. A ratio of 70:30 is employed for splitting. The training dataset contains 71,249 images and the testing dataset contains 30,535 images. Both datasets are shuffled using a permutation to reduce variance.

### _CNN Model_

The CNN model used for this project is quite different from the pre-trained CNN architectures available, such as ResNet, GoogLeNet, etc. The model used for the experiment is implemented with a custom-built architecture, which is displayed in Fig 8. Three convolution and pooling layers are used. The first conv layer uses a total of 1024 filters of size 5 x 5, the second conv layer uses 512 filters of size 3 x 3, and the third conv layer uses 256 filters of size 3 x 3. The activation function used for these layers is ReLU (Rectified Linear Unit), which has the range \([0,\infty)\). A dropout regularisation layer and a flatten layer are added. In the fully connected part, two layers containing 256 and 128 neurons, respectively, are connected to the 47 output neurons.

### _Output Layer_

In order to provide an accurate prediction regarding the image's category, the output layer makes use of the softmax probabilistic function. Categorical cross-entropy is used as the loss function, along with the Adam optimizer. Adam is an adaptive optimization method that employs individualised learning rates for each of the parameters.

### _Train the Model_

The CNN model is trained over a period of 20 epochs, with 357 steps per epoch. Figure 9 shows the training accuracy against the validation accuracy and the training loss against the validation loss.
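For concreteness, the architecture described above can be sketched in Keras as follows. The filter counts, kernel sizes, dense widths, loss, optimizer, and learning rate follow the text; the 'same' padding, pooling sizes, and dropout rate are our assumptions, since the paper does not state them.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),                 # 32 x 32 grey-scale input
    layers.Conv2D(1024, 5, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(512, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(256, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Dropout(0.5),                             # dropout rate assumed
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(47, activation="softmax"),          # one unit per character class
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
```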
## V Results

The use of a convolutional neural network model to detect handwritten English alphabetic and numeric characters is discussed in this paper. Though many more advanced techniques have been invented for this problem, CNN, being the predecessor of these techniques, has given satisfying results. The model developed for this paper has given largely accurate predictions, with an accuracy of 90.54% and a loss of 2.53%. A learning rate of 0.001 is used for better performance.

Fig. 8: The CNN model architecture

Fig. 9: The line plot about loss and accuracy

Fig. 10: Images for model testing

The model has been evaluated with a few random photos of handwritten characters. The sample images used for the testing are shown in Fig 10, and the results are shown in the table above. The model has not been perfected yet, due to which a noticeable error can be seen in the table for the class 't', which is wrongly predicted as 'T'. The receiver operating characteristic (ROC) curve is shown in Figures 11 through 14 for a variety of classes. The ROC curve is a graph depicting the performance of the classification model at different thresholds; it plots the True Positive Rate (recall) against the False Positive Rate:

\[TPR=\frac{TP}{TP+FN},\qquad FPR=\frac{FP}{FP+TN}\]

where TP = True Positive, FP = False Positive, FN = False Negative, and TN = True Negative.

The term "precision" refers to the proportion of predicted positive cases that are actually positive. Precision is helpful when the risk of obtaining a false positive is more significant than the risk of obtaining a false negative. The term "recall" refers to the percentage of real positive cases that our model properly predicted. It is a helpful statistic in situations where the risk of a false negative is more significant than the risk of a false positive. A combined measure such as the F1-score reaches its maximum value when precision and recall are equal. The AUC (Area Under the Curve) measures the two-dimensional area under the ROC curve and offers an overall performance measurement across all classification thresholds. A higher AUC indicates better classification performance for the corresponding class. It is clear from the ROC curve figures that the classes "1," "J," "S," "T," and "f" have low AUCs for their curves. The similarity between these characters might be the cause of these errors.

## VI Conclusion

Based on all of the evaluation metrics presented above, it is possible to conclude that the Convolutional Neural Network is an effective method for solving the problem of handwritten character recognition, one that is simple to implement and produces high levels of accuracy in its predictions. Although it might not be the most effective recognition algorithm available, it gets the job done nonetheless.
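As a supplement to the Results section above, the TPR and FPR that underlie each point of a per-class ROC curve can be computed as in this sketch (illustrative code, not the paper's):

```python
import numpy as np

def roc_point(y_true: np.ndarray, y_score: np.ndarray, thr: float):
    """TPR and FPR at one threshold for a single class (one-vs-rest).

    y_true: 0/1 labels for the class; y_score: predicted probabilities."""
    y_pred = (y_score >= thr).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tpr = tp / (tp + fn) if tp + fn else 0.0   # recall
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr
```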
2308.05513
Measuring Renyi Entropy in Neural Network Quantum States
We compute the Renyi entropy in a one-dimensional transverse-field quantum Ising model by employing a swapping operator acting on the states which are prepared from the neural network methods. In the static ground state, Renyi entropy can uncover the critical point of the quantum phase transition from paramagnetic to ferromagnetic. At the critical point, the relation between the Renyi entropy and the subsystem size satisfies the predictions from conformal field theory. In the dynamical case, we find coherent oscillations of the Renyi entropy after the end of the linear quench. These oscillations have universal frequencies which may come from the superpositions of excited states. The asymptotic form of the Renyi entropy implies a new length scale away from the critical point. This length scale is also verified by the overlap of the reduced Renyi entropy against the dimensionless subsystem size.
Han-Qing Shi, Hai-Qing Zhang
2023-08-10T11:52:44Z
http://arxiv.org/abs/2308.05513v2
# Measuring Renyi Entropy in Neural Network Quantum States

###### Abstract

We compute the Renyi entropy in a one-dimensional transverse-field quantum Ising model by employing a swapping operator acting on the states which are prepared from the neural network methods. In the static ground state, Renyi entropy can uncover the critical point of the quantum phase transition from paramagnetic to ferromagnetic. At the critical point, the relation between the Renyi entropy and the subsystem size satisfies the predictions from conformal field theory. In the dynamical case, we find coherent oscillations of the Renyi entropy after the end of the linear quench. These oscillations have universal frequencies which may come from the superpositions of excited states. The asymptotic form of the Renyi entropy implies a new length scale away from the critical point. This length scale is also verified by the overlap of the reduced Renyi entropy against the dimensionless subsystem size.

_Introduction_.-- In recent years, neural networks with machine learning methods have become a powerful tool for solving quantum many-body problems [1; 2], ranging from static ground states [3] to time-dependent evolutions [4; 5]. Besides these successes in quantum physics, they have also been adopted across a broad spectrum of physics, from computational physics [6] to high energy physics [7] and cosmology [8]. The wide range of applications is due to the highly efficient representational power of neural networks, which can characterize the states of a system with an acceptable number of parameters [9]. Neural networks therefore make the otherwise overwhelming complexity computationally tractable. Computation of entanglement in a quantum system is formidable because of the exponential complexity of states in the Hilbert space [10]. Studying the entanglement of a quantum system by virtue of neural networks therefore becomes an intriguing task. Entanglement entropy (von Neumann entropy, i.e., \(S_{1}\) defined below) is an important concept in quantum physics. Measuring entanglement entropy can reveal some universal properties of quantum many-body systems. For instance, in a one-dimensional conformal field theory, there is a universal logarithmic scaling of the entanglement entropy [11; 12]. Renyi entropy (\(S_{n}\), defined below), a generalized version of the entanglement entropy, has attracted much attention recently [13], since Renyi entropy can encode much more information than von Neumann entropy in studying the entanglement spectrum of the density matrix [14]. We focus on studying the second Renyi entropy \(S_{2}\) of a one-dimensional transverse-field quantum Ising model (TFQIM) by utilizing neural network methods. We prepare the neural-network quantum states (NQS) of the TFQIM with machine learning methods [5]. We then recast the study of \(S_{2}\) as computing the expectation value of a swapping operator [15]. First, we study the Renyi entropy in static ground states. When varying the transverse magnetic field, the peak of the Renyi entropy appears at the critical value of the magnetic field, which reflects a quantum phase transition from paramagnetic to ferromagnetic. Therefore, the Renyi entropy can serve as an order parameter to disclose the phase transition. As the transverse magnetic field goes to zero, the Renyi entropy tends to \(\ln 2\), which is consistent with the doubly degenerate vacuum states in the pure ferromagnetic phase, i.e., the spins all pointing up or down [11].
We also study the relation between the Renyi entropy and the subsystem size at the critical point, and find that it satisfies the theoretical predictions of conformal field theory [11]. Later, we study the Renyi entropy in a dynamical setting induced by a linear quench of the transverse magnetic field. This is a paradigmatic model for studying the formation of one-dimensional defects (kinks) through the celebrated Kibble-Zurek mechanism (KZM) [16; 17]. According to the KZM, there is an adiabatic-impulse-adiabatic process as the magnetic field traverses from the paramagnetic to the ferromagnetic state. In the impulse regime near the critical point, the system is excited rather than following the original ground state. Therefore, the Renyi entropy in this case is expected to be different from that in the static case. Specifically, we quench the transverse magnetic field from above the critical value to zero, and then let the system evolve freely. We find that superpositions of different symmetry-broken states result in coherent oscillations of the Renyi entropy after the end of the quench. The periods (or frequencies) of these oscillations for different quench rates are identical, which implies the degeneracy of energy levels as the magnetic field vanishes. The length scale \(\xi\) in this free-evolution regime is different from that near the critical point, i.e., \(\hat{\xi}\), which is usually used in the KZM to predict the number density of the defects. We see that the relation between the asymptotic behavior of the Renyi entropy and the quench rate satisfies the theoretical formula if we recognize \(\xi\) as the new length scale. Besides, the collapse of the relations between the reduced Renyi entropy and the reduced dimensionless subsystem size also supports the assertion that in the free-evolution regime \(\xi\) is the new length scale.

_Model_.-- The Hamiltonian of the one-dimensional TFQIM with \(N\) sites is [18]

\[H=-J\sum_{i=1}^{N}(\sigma_{i}^{z}\sigma_{i+1}^{z}+h\sigma_{i}^{x}) \tag{1}\]

where \(\sigma_{i}^{z}\) and \(\sigma_{i}^{x}\) are the Pauli matrices at site \(i\), \(J\) represents the coupling strength between nearest-neighbor sites, and \(h\) denotes the strength of the transverse magnetic field. We adopt periodic boundary conditions (PBC), i.e., \(\vec{\sigma}_{N+1}=\vec{\sigma}_{1}\). In this paper we set \(J=1\). In the ground state, there is a quantum phase transition at the critical value \(h_{c}=1\). (We only consider \(h\geq 0\) since \(h\leq 0\) exhibits similar phenomena.) For \(h\gg 1\), the system is in a paramagnetic phase with spins all pointing along the \(x\)-direction. On the contrary, for \(h<1\), the system is in a ferromagnetic phase. In particular, when \(h=0\) the system falls into a completely ferromagnetic phase in which it is in a doubly degenerate state with all spins either pointing up or down along the \(z\)-direction. This phase transition can also be captured by the Renyi entropy [11], defined as

\[S_{n}(\rho_{A})=\frac{1}{1-n}\ln\left[\operatorname{Tr}(\rho_{A}^{n})\right] \tag{2}\]

where \(S_{n}(\rho_{A})\) is the \(n\)-th Renyi entropy of the subsystem \(A\), and \(\rho_{A}\) is the reduced density matrix obtained by tracing out the rest of the system. As \(n\to 1\), it reduces to the von Neumann entropy \(S_{1}=-\operatorname{Tr}(\rho_{A}\ln\rho_{A})\). For an infinitely large system, the Renyi entropy diverges at the critical point \(h_{c}=1\), while for a finite system it has a finite peak at this point.
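As a self-contained cross-check of Eqs. (1) and (2), a brute-force sketch for a small chain (ours, not the paper's neural-network code) is:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]); sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2)

def tfqim_hamiltonian(N, h, J=1.0):
    """Exact H = -J * sum_i (sz_i sz_{i+1} + h sx_i) with PBC, for small N."""
    def op(single, i):
        return reduce(np.kron, [single if j == i else I2 for j in range(N)])
    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H -= J * op(sz, i) @ op(sz, (i + 1) % N) + J * h * op(sx, i)
    return H

def renyi2(psi, N, NA):
    """Second Renyi entropy of the first NA sites: S2 = -ln Tr(rho_A^2)."""
    C = psi.reshape(2**NA, 2**(N - NA))       # coefficients C_{alpha beta}
    rho_A = C @ C.conj().T                    # trace out part B
    return -np.log(np.real(np.trace(rho_A @ rho_A)))

# Example: ground state of a 10-site chain at the critical field h = 1.
N = 10
w, v = np.linalg.eigh(tfqim_hamiltonian(N, h=1.0))
print(renyi2(v[:, 0], N, NA=5))
```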
At \(h=0\), due to the \(Z_{2}\) symmetry of the completely ferromagnetic system, the value of the Renyi entropy should be \(\ln 2\). For \(h>1\), the system is in the paramagnetic phase and the Renyi entropy decreases, vanishing in the limit \(h\to\infty\). However, for a time-dependent \(h(t)\), studies of the Renyi entropy in the dynamical phase transition of the TFQIM are rare. We assume that the system undergoes a linear quench of the transverse magnetic field from the paramagnetic to the ferromagnetic phase, i.e.,

\[h(t)=-\frac{t}{\tau_{Q}},\quad t\in[-T,0]. \tag{3}\]

where \(\tau_{Q}\) is the quench rate and \(-T\) is the initial time of the quench. At the initial time, \(h\gg 1\) and the system is prepared in the paramagnetic phase; by quenching the system across the critical point \(h_{c}=1\), dynamical symmetry breaking brings out topological defects (kinks) with a density depending on the quench rate \(\tau_{Q}\) according to the KZM [16; 17]. This quantum phase transition inevitably excites the system, and the final state is a superposition of excited states with different symmetry-broken domains [19; 20]. Therefore, we can readily expect that the Renyi entropy in the dynamical case will be different from that in the static case.

_Neural network machine learning._-- We utilize a restricted Boltzmann machine (RBM) as the neural network to represent the states of the TFQIM. This network has two layers: one includes \(N\) visible neurons \(s_{j}\) (i.e., the spins at sites \(j\)) and the other has \(M\) hidden neurons \(h_{i}\). The visible neurons are assigned as input the directions of the spins, with \(s(\uparrow)=1\) and \(s(\downarrow)=-1\). The hidden neurons take the values \(h_{i}\in\{1,-1\}\). According to the structure of the RBM, we can write down the expression for the NQS with the neural network parameters \(\mathcal{W}=\{a,b,w\}\)[4],

\[\Psi_{\text{NQS}}(s,\mathcal{W})=\sum_{\{h_{i}\}}e^{\sum_{j}a_{j}s_{j}+\sum_{i}b_{i}h_{i}+\sum_{ij}w_{ij}h_{i}s_{j}}\,. \tag{4}\]

The dynamics we consider is an evolution from an initial ground state with a strong transverse magnetic field to a state with vanishing transverse magnetic field. To obtain this initial ground state, we train a wave function whose parameters are initialized as random complex numbers, which is realized by minimizing the expectation value of the energy with the Stochastic Reconfiguration (SR) method [21]. After preparing the initial ground state, the NQS needs to satisfy the time-dependent Schrodinger equation according to the linear quench (3). To this end, we utilize the time-dependent variational principle (TDVP) to solve for the time-dependent neural network parameters \(\mathcal{W}(t)\). In each iteration, we need to minimize the Fubini-Study distance

\[\mathcal{D}(\mathcal{W})=\arccos\left[\frac{\langle\partial_{t}\Psi|H\Psi\rangle\langle H\Psi|\partial_{t}\Psi\rangle}{\langle\partial_{t}\Psi|\partial_{t}\Psi\rangle\langle H\Psi|H\Psi\rangle}\right]^{1/2} \tag{5}\]

which describes the difference between the exact time evolution and the variational evolution.
Eventually, this yields an evolution equation for the parameters \(\mathcal{W}\)[4],

\[S_{k^{\prime}k}\dot{\mathcal{W}}_{k}=-iF_{k^{\prime}} \tag{6}\]

where the overdot represents the time derivative, \(S_{k^{\prime}k}=\langle O_{k^{\prime}}^{*}O_{k}\rangle-\langle O_{k^{\prime}}^{*}\rangle\langle O_{k}\rangle\) is the covariance matrix of the operator \(O_{k}=\frac{\partial\ln\Psi}{\partial\mathcal{W}_{k}}\), and \(F_{k^{\prime}}=\langle O_{k^{\prime}}^{*}E_{\text{loc}}\rangle-\langle O_{k^{\prime}}^{*}\rangle\langle E_{\text{loc}}\rangle\) is the generalized force, with \(E_{\text{loc}}=\frac{\langle s|H|\Psi\rangle}{\Psi(s)}\). According to the TDVP, the updated parameters in each iteration are \(\mathcal{W}(t+\delta t)=\mathcal{W}(t)+\dot{\mathcal{W}}\delta t\).

_Swapping operation._-- In this paper, we focus on how to compute the second Renyi entropy \(S_{2}\) from the neural networks. For simplicity, we refer to \(S_{2}\) as the Renyi entropy in the following. We utilize a swapping operation \(S_{\text{wap}}\) to measure \(S_{2}\) from the NQS [15]. Assume that one can construct the state \(|\Psi\rangle\) of the system from the bases of two parts, \(|\alpha\rangle\) and \(|\beta\rangle\), where \(|\alpha\rangle\) is a complete basis of states in part \(A\) and \(|\beta\rangle\) is a complete basis of states in the complementary part \(B\). The whole state of the system can then be decomposed in this product basis as \(|\Psi\rangle=\sum_{\alpha\beta}C_{\alpha\beta}|\alpha\rangle|\beta\rangle\). The swapping operator \(S_{\text{wap}}\) exchanges the states in part \(A\) between the original and the copied one, such that

\[S_{\text{wap}}\left(\sum_{\alpha_{1}\beta_{1}}C_{\alpha_{1}\beta_{1}}|\alpha_{1}\rangle|\beta_{1}\rangle\right)\otimes\left(\sum_{\alpha_{2}\beta_{2}}D_{\alpha_{2}\beta_{2}}|\alpha_{2}\rangle|\beta_{2}\rangle\right)=\sum_{\alpha_{1}\beta_{1}\alpha_{2}\beta_{2}}C_{\alpha_{1}\beta_{1}}D_{\alpha_{2}\beta_{2}}|\alpha_{2}\rangle|\beta_{1}\rangle\otimes|\alpha_{1}\rangle|\beta_{2}\rangle \tag{7}\]

According to this definition, one can link the expectation value \(\langle S_{\rm wap}\rangle\) to the Renyi entropy,

\[\langle\Psi\otimes\Psi|S_{\rm wap}|\Psi\otimes\Psi\rangle=\sum_{\alpha_{1}\alpha_{2}\beta_{1}\beta_{2}}C_{\alpha_{1}\beta_{1}}C^{*}_{\alpha_{2}\beta_{1}}C_{\alpha_{2}\beta_{2}}C^{*}_{\alpha_{1}\beta_{2}}=\sum_{\alpha_{1}\alpha_{2}}[\rho_{A}]_{\alpha_{1}\alpha_{2}}[\rho_{A}]_{\alpha_{2}\alpha_{1}}={\rm Tr}(\rho_{A}^{2})=e^{-S_{2}} \tag{8}\]

where \([\rho_{A}]_{\alpha_{1}\alpha_{2}}=\sum_{\beta_{1}}C_{\alpha_{1}\beta_{1}}C^{*}_{\alpha_{2}\beta_{1}}\) is an element of the reduced density matrix \(\rho_{A}\). The coefficients \(C\) and \(C^{*}\) can be obtained by sampling from the NQS with quantum Monte Carlo methods (a minimal code sketch of this estimator is given at the end of this paper).

_Renyi entropy in the static ground state._-- First, we study the Renyi entropy from the neural networks for the static one-dimensional TFQIM with \(N=100\) sites. In Fig.1(a), we show the Renyi entropy against the transverse magnetic field strength \(h\) for various network sizes \(\alpha\). We define \(\alpha\) as the ratio between the numbers of hidden and visible neurons, \(\alpha=M/N\); a bigger \(\alpha\) therefore represents a higher accuracy of the neural network. Theoretically, the entropy diverges at the critical point \(h_{c}=1\) in an infinite-size system, while in a finite system a finite peak replaces the divergence. Here, the Renyi entropy is for the half-size subsystem, i.e., an interval with 50 sites. We indeed see a peak around \(h_{c}=1\).
When increasing \(\alpha\), the peak moves closer to the critical point \(h_{c}=1\); see the inset plot in Fig.1(a). As \(h\to 0\), the Renyi entropy goes to the theoretically predicted value \(\ln 2\approx 0.6931\) due to the doubly degenerate vacuum states in the pure ferromagnetic phase, while as \(h\rightarrow\infty\), it decreases and tends to zero. This figure is consistent with theoretical predictions [12]. At the critical point, the dependence of the Renyi entropy \(S_{n}\) on the size \(N_{A}\) of the subsystem \(A\) is (with PBC) [12]

\[S_{n}(N_{A})=\frac{c}{6}\left(1+\frac{1}{n}\right)\ln\left(\frac{N}{\pi a}\sin\left(\frac{\pi N_{A}}{N}\right)\right)+{\rm const.} \tag{9}\]

where \(c=1/2\) is the central charge of the TFQIM, while \(a\) is the lattice spacing, set to \(a=1\) throughout this paper. Fig.1(b) exhibits a linear relation between \(S_{2}\) and the logarithmic term for \(N_{A}\leq 50\). The slope 1/8 is consistent with the theoretical prediction (9). In the inset plot we show the regular relation between \(S_{2}\) and the size \(N_{A}\) over the whole range. The numerical data are symmetric under \(N_{A}\to 100-N_{A}\), and the maximum value appears at \(N_{A}=50\).

Figure 1: (a) Renyi entropy for the half-size subsystem in static ground states against the transverse-field strength with network size \(\alpha=1,2,4\). The inset plot shows that the peak of the Renyi entropy moves closer to the critical point \(h_{c}=1\) as the size \(\alpha\) increases; (b) Linear relation between the Renyi entropy and \(\ln\left(\frac{N}{\pi a}\sin\left(\frac{\pi N_{A}}{N}\right)\right)\) at the critical point, satisfying the theoretical result (9). The inset plot shows the relation between \(S_{2}\) and the subsystem size \(N_{A}\) in ordinary coordinates. The error bars stand for the standard errors, and the dashed lines are fits to the numerical data.

_Renyi entropy in quenched dynamics._-- In the time-dependent evolution, we consider a linear quench of the transverse magnetic field as in Eq.(3). At the initial time we set \(h=2\), where the system is prepared in the paramagnetic ground state. In the numerics, the time step is set to \(\Delta t=10^{-3}\). We then quench the magnetic field linearly to zero and let the system evolve freely with \(h=0\) for a while. During the quench, the system is excited to a ferromagnetic state with kink configurations due to the KZM. The superpositions of the excited states result in behaviors of the Renyi entropy different from the static case. We show the time evolution of \(S_{2}\) in Fig.2(a) for various quench rates; \(t=0\) is the instant at which the quench ends. We see that the Renyi entropy grows in time and then enters an oscillatory phase. For a faster quench (smaller \(\tau_{Q}\)), the ramp is steeper. However, in the oscillatory phase the periods of the oscillations are almost the same for different quench rates. In Fig.2(b) we slightly shift the Renyi entropy by a phase difference \(t_{*}\) in order to compare the periods conveniently. Specifically, \(t_{*}(\tau_{Q}=1)=0.2\), \(t_{*}(\tau_{Q}=2)=-0.4\), \(t_{*}(\tau_{Q}=4)=0.12\), and \(t_{*}(\tau_{Q}=8)=-0.34\). We can clearly see that the periods are almost identical and roughly \(T_{\rm period}\approx 1.5867\). Interestingly, this period coincides with that of the coherent oscillatory behavior of the transverse magnetization in [20], for which the analytical period was \(\pi/2\), very close to our numerical result.

Figure 2: (a) Time evolution of the Renyi entropy for the various quench rates \(\tau_{Q}=1,2,4\) and \(8\). For \(t<0\), the system undergoes a linear quench from \(h=2\) to \(h=0\); after \(t=0\), we keep the system evolving with zero transverse magnetic field for a while. The Renyi entropy begins to oscillate after the ramp; (b) In order to compare the periods of the oscillations for different quench rates, we shift the Renyi entropy by the phase differences \(t_{*}\). The periods of these oscillations are almost the same and close to \(\pi/2\).

The coherent oscillations are interpreted as superpositions of the different symmetry-broken excited states. Therefore, we can speculate that the oscillatory behavior of \(S_{2}\) in our paper may also come from the superpositions of the excited states after the end of the quench. At the end of the quench, i.e., at \(t=0\), we can read off the relation between the Renyi entropy and the logarithmic function of the quench rate, which is shown in Fig.3. We already saw from Fig.2(a) that the Renyi entropy is oscillatory at \(t=0\); therefore, it is easy to deduce that \(S_{2}\) at the end of the quench is not monotonic in the quench rate. From Fig.3 we indeed see that \(S_{2}\) is oscillatory with respect to \(\tau_{Q}\) as well. However, as \(\tau_{Q}\) grows, this oscillation becomes smaller, which is reflected in Fig.2(a) by the amplitudes of \(S_{2}\) becoming smaller for greater \(\tau_{Q}\).\({}^{1}\) The asymptotic form of the Renyi entropy as \(\tau_{Q}\rightarrow\infty\) can be fitted by the theoretical formula for the static case. At the instant \(t=0\) the system is away from the critical point. According to [22], a new length scale \(\xi\simeq\sqrt{\tau_{Q}}\ln\tau_{Q}\) emerges in this case. Therefore, the Renyi entropy for an interval can be predicted from the length scale \(\xi\) as [12],

Footnote 1: In the numerics, the largest quench rate we used is \(\tau_{Q}=16\), in order to balance the precision and run time of the codes.

\[S_{n} = 2\times\frac{c}{12}\left(1+\frac{1}{n}\right)\ln\left(\frac{\xi}{a}\right)+\text{const.}\quad\Rightarrow\quad S_{2} = \frac{1}{8}\ln(\sqrt{\tau_{Q}}\ln\tau_{Q})+\text{const.} \tag{10}\]

Note that the prefactor 2 arises because the interval has two boundary points [12; 23]. The asymptotic behavior of \(S_{2}\) is denoted by the red dashed line in Fig.3, which is consistent with the theoretical formula (10). In Fig.4(a) we show the relation of the Renyi entropy to the size of the subsystem at the end of the quench, \(t=0\), for various quench rates. It is obvious that this relation should be different from the static case, i.e., Eq.(9). As an example, we show in the inset plot the relation between \(S_{2}\) and \(\ln\left(\frac{N}{\pi a}\sin\left(\frac{\pi N_{A}}{N}\right)\right)\) for \(\tau_{Q}=7.4643\); it deviates from the linear relation Eq.(9). From Fig.4(a) we can still see the symmetry under \(N_{A}\to 100-N_{A}\). Besides, in the range \(N_{A}\in[0,50]\), the Renyi entropy grows as the subsystem size grows, and then saturates near \(N_{A}=50\). We define the Renyi entropy at saturation as \(S_{\text{satu.}}\). As mentioned above, in this case the length scale is \(\xi\simeq\sqrt{\tau_{Q}}\ln\tau_{Q}\). Therefore, the ratio \(N_{A}/(\sqrt{\tau_{Q}}\ln\tau_{Q})\) is a reduced dimensionless scale. We further define a reduced Renyi entropy as the ratio between \(S_{2}\) and \(S_{\text{satu.}}\) for each quench rate. Their relations can be found in Fig.4(b).
We see that for each quench rate they overlap very well, which implies that the length scale \(\xi\simeq\sqrt{\tau_{Q}}\ln\tau_{Q}\) at \(t=0\) is reasonable.

Figure 3: Renyi entropy at the end of the quench with respect to the logarithmic function of the quench rate, \(\ln(\sqrt{\tau_{Q}}\ln\tau_{Q})\). The asymptotic form of \(S_{2}\) as \(\tau_{Q}\rightarrow\infty\) is fitted by the red dashed line, which is consistent with Eq.(10).

Figure 4: (a) Renyi entropy against the size of the interval \(N_{A}\) for several quench rates. The Renyi entropy increases with \(N_{A}\) and reaches saturation quickly before half of the system. The error bars represent the standard errors. The inset is an example for \(\tau_{Q}=7.4643\) showing that the relation Eq.(9) of the static case does not hold in the quenched case; (b) The relation between the reduced Renyi entropy and the reduced length scale. The curves collapse together for each quench rate.

_Conclusions and discussions._-- We studied the second Renyi entropy \(S_{2}\) of the one-dimensional TFQIM with neural networks in both the static ground state and quenched dynamics. By adopting the swapping operation, we transformed the computation of \(S_{2}\) into calculating the expectation value of the swapping operator \(S_{\text{wap}}\). In the static case, the peak of the Renyi entropy can reveal the critical point of the quantum phase transition from paramagnetic to ferromagnetic. The Renyi entropy in this sense can therefore play the role of an order parameter for uncovering phase transitions. The relation between the Renyi entropy and the size of the subsystem matches the theoretical predictions very well, which verifies the accuracy of the neural network methods. In the quenched dynamics, we linearly quenched the transverse magnetic field from \(h>1\) to \(h=0\) and then let the system evolve freely. We found coherent oscillations of the Renyi entropy during the free evolution. The oscillation periods are identical and equal to \(\pi/2\), consistent with the coherent oscillations of the transverse magnetization studied in the existing literature. We interpreted this oscillatory behavior as superpositions of the different symmetry-broken excited states. At the end of the quench, the system embodies a new length scale \(\xi\simeq\sqrt{\tau_{Q}}\ln\tau_{Q}\), different from the scale \(\hat{\xi}\simeq\sqrt{\tau_{Q}}\) in the impulse regime near the critical point. We verified this new length scale from the asymptotic behavior of the Renyi entropy in the limit of large quench rate. Besides, the relations between the reduced Renyi entropy \(S_{2}/S_{\text{satu.}}\) and the reduced size of the subsystem \(N_{A}/\xi\) collapse together, which indicates that the new length scale \(\xi\simeq\sqrt{\tau_{Q}}\ln\tau_{Q}\) at the end of the quench is correct.

## Acknowledgements

We appreciate the helpful discussions with Marek Rams. This work was partially supported by the National Natural Science Foundation of China (Grants No.12175008).
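As promised above, here is a minimal Monte Carlo sketch of the swapping estimator of Eq. (8), using the hidden-unit-summed RBM amplitude of Eq. (4); the routine that draws spin configurations from \(|\Psi|^{2}\) is assumed to exist and is not shown, and this is our own illustration rather than the paper's code.

```python
import numpy as np

def log_psi_rbm(s, a, b, W):
    """log of the RBM amplitude of Eq. (4) after summing out the hidden units:
    Psi(s) = exp(a.s) * prod_i 2 cosh(b_i + sum_j W_ij s_j), with s_j = +/-1
    and complex parameters a, b, W."""
    return a @ s + np.sum(np.log(2.0 * np.cosh(b + W @ s)))

def renyi2_swap(samples1, samples2, log_psi, NA):
    """Monte Carlo estimate of S2 = -ln Tr(rho_A^2) = -ln <S_wap> (Eq. (8)).

    samples1/samples2: two independent batches of spin configurations drawn
    from |Psi|^2; the first NA sites form subsystem A and are exchanged."""
    vals = []
    for s1, s2 in zip(samples1, samples2):
        s1p, s2p = s1.copy(), s2.copy()
        s1p[:NA], s2p[:NA] = s2[:NA], s1[:NA]         # swap subsystem A
        vals.append(np.exp(log_psi(s1p) + log_psi(s2p)
                           - log_psi(s1) - log_psi(s2)))
    return -np.log(np.real(np.mean(vals)))
```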
2306.01756
CSI-Based Efficient Self-Quarantine Monitoring System Using Branchy Convolution Neural Network
Nowadays, Coronavirus disease (COVID-19) has become a global pandemic because of its fast spread in various countries. To build an anti-epidemic barrier, self-isolation is required for people who have been to any at-risk places or have been in close contact with infected people. However, existing camera or wearable device-based monitoring systems may present privacy leakage risks or cause user inconvenience in some cases. In this paper, we propose a Wi-Fi-based device-free self-quarantine monitoring system. Specifically, we exploit channel state information (CSI) derived from Wi-Fi signals as human activity features. We collect CSI data in a simulated self-quarantine scenario and present BranchyGhostNet, a lightweight convolution neural network (CNN) with an early exit prediction branch, for the efficient joint task of room occupancy detection (ROD) and human activity recognition (HAR). The early exiting branch is used for ROD, and the final one is used for HAR. Our experimental results indicate that the proposed model can achieve an average accuracy of 98.19% for classifying five different human activities. They also confirm that after leveraging the early exit prediction mechanism, the inference latency for ROD can be significantly reduced by 54.04% when compared with the final exiting branch while guaranteeing the accuracy of ROD.
Jingtao Guo, Ivan Wang-Hei Ho
2023-05-24T04:02:49Z
http://arxiv.org/abs/2306.01756v1
# CSI-Based Efficient Self-Quarantine Monitoring System Using Branchy Convolution Neural Network ###### Abstract Nowadays, Coronavirus disease (COVID-19) has become a global pandemic because of its fast spread in various countries. To build an anti-epidemic barrier, self-isolation is required for people who have been to any at-risk places or have been in close contact with infected people. However, existing camera or wearable device-based monitoring systems may present privacy leakage risks or cause user inconvenience in some cases. In this paper, we propose a Wi-Fi-based device-free self-quarantine monitoring system. Specifically, we exploit channel state information (CSI) derived from Wi-Fi signals as human activity features. We collect CSI data in a simulated self-quarantine scenario and present BranchyGhostNet, a lightweight convolution neural network (CNN) with an early exit prediction branch, for the efficient joint task of room occupancy detection (ROD) and human activity recognition (HAR). The early exiting branch is used for ROD, and the final one is used for HAR. Our experimental results indicate that the proposed model can achieve an average accuracy of 98.19% for classifying five different human activities. They also confirm that after leveraging the early exit prediction mechanism, the inference latency for ROD can be significantly reduced by 54.04% when compared with the final exiting branch while guaranteeing the accuracy of ROD. Self-Quarantine Monitoring, Channel State Information (CSI), Branchy Convolutional Neural Network (CNN), Early Exit Prediction, Human Activity Recognition (HAR) ## I Introduction In recent years, COVID-19 (an infectious disease caused by the SARS-CoV-2 virus) has become a global pandemic and caused a serious threat to our health [1]. Over one million cases have been reported in Hong Kong [2], causing a heavy burden on the public health system. According to the existing findings [3], when infected people cough, sneeze, speak, or breath, the virus spreads in little liquid particles from their mouth or nose, and on average, it takes five to six days from the time people are infected with the virus to the time they develop symptoms, but it can take up to 14 days. Therefore, self-quarantine is required for those who have been in close contact with infected people or to any at-risk places to prevent the virus from spreading around the community. ### _Related Works and Motivations_ _Strengths of Device-Free Wireless Sensing Based COVID-19 Monitoring System When Compared With Existing Systems:_ Currently, several COVID-19 monitoring and tracing systems are developed by various countries and regions to control the spread of the COVID-19 virus for protecting the health of their community. For example, the Malaysian government has developed a contact tracing application termed MySejahtera [4] to trace users who have stayed with infected people on the same premises via a QR code check-in scheme. However, this application is unsuitable for monitoring someone who is undertaking self-quarantine since it serves as proactively preventative measures to alert users if they have been in close contact with infected people. Hong Kong government proposed a self-quarantine monitoring scheme named StayHomeSafe [5]. This scheme comprises an application and a wristband with a QR code for scanning at a random time to prevent quarantine users from leaving their place during the quarantine period. 
[6] also presented a self-quarantine monitoring system that used GPS signals and a camera to locate and verify a quarantine user, respectively. After that, [7] introduced a monitoring system that can identify a quarantine user and periodically check the user's location and health status. However, camera-based monitoring systems usually require complex data pre-processing methods in dark scenarios. They also present privacy leakage risks. Monitoring systems with wearable devices may cause user inconvenience. Compared with the sensing technologies mentioned above, wireless sensing technology has great advantages: a low level of privacy leakage, contactless operation, and good performance in low-light and non-line-of-sight (NLOS) environments [8].

_Existing CSI-Based HAR and ROD Systems:_ In the past decades, Wi-Fi CSI-based wireless sensing technology has been gaining popularity as a result of the availability of different CSI tools like the Linux 802.11n CSI Tool [9] and Nexmon CSI [10]. Based on these tools, several CSI-based wireless sensing systems have been proposed. For modeling the relationship between human activities and the dynamically changing signal features, machine learning (ML) technologies are applied in these systems. For example, [11] used one-class support vector machine (SVM) and random forest (RF) algorithms for HAR and ROD. [12] also introduced a k-nearest neighbor (KNN) classifier for human identification. However, conventional ML approaches usually require hand-crafted feature extractors [13]. Hence, more researchers exploit deep learning (DL) algorithms for HAR and ROD. DL technologies can be regarded as an end-to-end learning approach where feature extraction, a time-consuming and knowledge-demanding process in traditional ML algorithms, is accomplished automatically and embedded implicitly in the network architecture [14]. For instance, [15] proposed a temporal Unet (TUNet) for sample-level classification of different human activities. [16] further leveraged the ResNet [17] architecture to design a one-dimensional ResNet (1D-ResNet) for the joint task of HAR and indoor localization. [18] also presented an attention-based bi-directional long short-term memory (ABLSTM) for passive HAR and showed satisfactory performance. After that, [19] combined the 1D-CNN and bi-directional LSTM (BiLSTM) architectures to present a deep convolutional LSTM (DeepConvLSTM) model and achieved 92% average accuracy for eleven activities. [20] exploited an efficient 2D-CNN named EfficientNet-B2 [21] for HAR using four action recognition datasets. [22] also leveraged 2D-CNN and transfer learning to conduct ROD. In [23], InceptionTime [24] and BiLSTM-based models are introduced to evaluate their performance for HAR. However, these DL benchmarks either only consider single-task learning or do not achieve a good balance between accuracy and latency.

### _Contributions_

In this paper, we propose a Wi-Fi-based self-quarantine monitoring system. Specifically, we utilize CSI extracted from Wi-Fi signals as human activity features. Since ROD is an easier task than HAR in a self-quarantine scenario, we introduce the early exit prediction mechanism to the GhostNet [25] model to design a new DL model for the joint task of HAR and ROD. This mechanism can significantly reduce the inference time of the model for ROD without compromising the detection accuracy. The prediction result is sent to a cloud IoT platform for visualization and remote monitoring.
Compared with the other monitoring systems mentioned above, our proposed method has great advantages in terms of its low level of privacy leakage and contact-free monitoring. Real-world experiments have been conducted to verify the efficiency of the proposed model. The experimental results are compared with the other seven DL benchmarks presented in the literature. Overall, the main contributions of this study are three-fold:

* We present a Wi-Fi-based self-quarantine monitoring system to detect human activities and room occupancy, where CSI data extracted from Wi-Fi signals are exploited as human activity features. We also introduce a demonstration leveraging the Raspberry Pi 4B, ThingsBoard, and Telegram platforms.
* To improve the efficiency of ROD, we then introduce BranchyGhostNet for the joint task of HAR and ROD, in which the early exit prediction mechanism can significantly reduce the inference time of the model for ROD without compromising the detection accuracy.
* We conduct extensive experiments to evaluate our model. Our results indicate that the proposed BranchyGhostNet can achieve up to a 13.22% performance boost for HAR compared with the other six commonly used DL models and reduce the inference time for ROD by 54.04% compared with its final exiting branch.

The remaining part of the paper proceeds as follows: In Section II, we illustrate our proposed monitoring system and BranchyGhostNet. After that, we introduce the experimental setup for evaluating the proposed system in Section III. Section IV presents the experimental results. Finally, Section V concludes the paper.

## II The Proposed Self-Quarantine Monitoring System and BranchyGhostNet

### _Main Idea of the proposed system_

Our proposed monitoring system aims to detect quarantine user activities and prevent the user from leaving the quarantine place or being in close contact with another person. Fig. 1 demonstrates the overview of our system. We suppose a user is undertaking self-quarantine. The following three parts comprise our monitoring system. \((1)\) Train a lightweight model using the CSI data collected in the simulated self-quarantine scenario. \((2)\) The trained model is converted and deployed to the Raspberry Pi 4B for online testing. \((3)\) In the online testing stage, the prediction result is sent to a cloud IoT platform for remote monitoring, and an alarm message is sent to the messenger if the room is empty or the user comes into contact with another person.

Fig. 1: The overview of our proposed self-quarantine monitoring system.

The main computational procedure in our proposed system comes from offline model training. We exploit a lightweight model named GhostNet [25] to design a new DL model named BranchyGhostNet for the joint task of HAR and ROD. This model is quite suitable for edge devices because of its low computational cost and inference latency.
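As a rough sketch of how such a two-branch model can be trained and served (our own illustration anticipating the objective in Eq. (1) below and the exit criterion described in Section III, not the authors' released code):

```python
import torch
import torch.nn as nn

class BranchyModel(nn.Module):
    """Minimal two-branch early-exit wrapper. `stem` holds the shared early
    layers, `head` the remaining layers; the ROD branch has 3 classes
    (nobody / one / two persons) and the HAR branch 5 activities."""
    def __init__(self, stem, head, feat_early, feat_final):
        super().__init__()
        self.stem, self.head = stem, head
        self.exit_early = nn.Linear(feat_early, 3)   # ROD branch B_e
        self.exit_final = nn.Linear(feat_final, 5)   # HAR branch B_o

    def forward(self, x):
        z = self.stem(x)                             # shared features (B, C, H, W)
        rod = self.exit_early(z.mean(dim=(2, 3)))
        # At inference (batch size one), exit early when ROD predicts
        # "nobody" or "two persons" -- the alarm-triggering cases.
        if not self.training and rod.argmax(1).item() in (0, 2):
            return rod, None
        har = self.exit_final(self.head(z).mean(dim=(2, 3)))
        return rod, har

def joint_loss(rod_logits, har_logits, rod_label, har_label):
    """Sum of the two cross-entropy terms, mirroring Eq. (1)."""
    ce = nn.CrossEntropyLoss()
    return ce(rod_logits, rod_label) + ce(har_logits, har_label)
```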
The learning goal for user \(p\)'s model is represented as
\[\arg\min_{\Theta^{p}}\mathcal{L}_{p}=\sum_{i=1}^{n^{p}}\mathcal{C}\left(\mathbf{l}_{i,r}^{p},\mathbf{B}_{e}^{p}\left(\mathbf{X}_{i}^{p}\right)\right)+\sum_{i=1}^{n^{p}}\mathcal{C}\left(\mathbf{l}_{i,h}^{p},\mathbf{B}_{o}^{p}\left(\mathbf{X}_{i}^{p}\right)\right) \tag{1}\]
where \(\Theta^{p}\) denotes the optimal parameters of all layers of user \(p\)'s model, \(\mathcal{C}\left(\cdot,\cdot\right)\) denotes the loss function used for model training (e.g., the cross-entropy loss), and \(\mathcal{L}_{p}\) is the overall training loss. \(\mathbf{B}_{e}^{p}\) and \(\mathbf{B}_{o}^{p}\) represent the early-exit and final-exit prediction branches of \(p\)'s model, respectively. \(\left\{\mathbf{X}_{i}^{p},\ \mathbf{l}_{i,r}^{p},\ \mathbf{l}_{i,h}^{p}\right\}_{i=1}^{n^{p}}\) are the samples and their corresponding true ROD and HAR labels from \(p\)'s dataset of size \(n^{p}\).

### _Model Architecture_

A deeper structure is usually required for a DL model to achieve higher accuracy. However, the added latency and energy usage become prohibitive for real-time and energy-sensitive applications as the model grows deeper and larger. Since ROD is an easier task than HAR under the self-quarantine scenario, and inspired by [26], we introduce the early-exit prediction mechanism into the original GhostNet model to conduct the joint task of HAR and ROD while reducing the inference latency for ROD, thereby improving the monitoring efficiency. This mechanism uses the early layers of a DL model to learn the hidden features for ROD, which means that the early-exit prediction branch shares part of its layers with the final prediction branch and allows certain prediction results to exit the model early. With this mechanism, our proposed BranchyGhostNet structure is obtained by adding an exit branch after a certain layer of the original GhostNet model, as shown in Fig. 2. The BranchyGhostNet model comprises 97 convolution layers and two fully connected layers, with only about 370 million multiply-accumulate operations (MACs). The main component of our model is the Ghost module, which mainly uses cheap linear operations to augment features and generate more feature maps after obtaining a few inherent channels via a regular convolution operation [25]. The Ghost bottleneck with stride=1 comprises two Ghost modules, and that with stride=2 further includes a depthwise convolution layer with stride=2 between the two Ghost modules. The squeeze-and-excitation module, which can exploit the correlations between feature maps, is also adopted in some Ghost bottlenecks.

Fig. 2: The architecture of the proposed BranchyGhostNet.

## III Experiment Setup

### _Equipment and Data Collection Method_

Fig. 3 illustrates the communication between the devices for CSI data collection. The PC is used to send control messages to the AP to generate the wireless signals that contain CSI data, while a Raspberry Pi 4B serves as a data collector to extract the CSI data. Ping packets are sent from the PC to the AP, and pong packets are sent back from the AP to the PC.

Fig. 3: CSI data collection method.
The Pi 4B in monitor mode was configured with Raspberry Pi OS (Buster/Linux 5.4.83) with the pi-5.4.51-plus branch of nexmon_csi1 installed. The following filter parameters were used to configure nexmon: channel 36/80, Core 1, NSS mask 1. The Raspberry Pi 4B was paired to the AP. A computer was connected to the Pi 4B over the SSH protocol on a separate 2.4 GHz channel to control the data collection and reduce interference. The AP operates on channel 36 with an 80 MHz bandwidth. Note that the model of the AP is not restricted as long as it supports the 802.11ac protocol. Finally, the PC is connected to the 5 GHz channel of the AP to generate the data flow from which the Pi 4B in monitor mode can capture CSI data. The PC is configured to send a ping flood to the AP at a rate of 100 Hz.

Footnote 1: [https://github.com/zeroby0/nexmom_csi/tree/pi-5.4.51-plus](https://github.com/zeroby0/nexmom_csi/tree/pi-5.4.51-plus)

### _Environments and Datasets_

Fig. 4 shows the layout of the environment considered for CSI data collection. In this environment, we used the nexmon CSI patch to collect five human activities (sit, stand, walk, stand up, and sit down) and three room-occupancy scenarios (nobody, one person, and two persons). The tcpdump command is used to capture the CSI data into a pcap file. An open-source framework named csiread is used to interpret this file, which generates a 256 x 300 CSI amplitude matrix after computing the absolute values and applying a transpose operation. Note that 256 is the number of sub-carriers, and 300 is the number of received data packets. Overall, the datasets contain 10,000 CSI samples for training and 2,300 CSI samples for testing.

Fig. 4: The layout of the simulated self-quarantine environment.

### _Data Preprocessing_

Since signal preprocessing can degrade the timeliness of a real-time system, only the amplitude response of the CSI is utilized in this study, and we minimize the signal preprocessing in our system. First, 8 Pilot sub-carriers and 14 Null sub-carriers (including Guard sub-carriers) are filtered out according to [27]. Then, we adopt the moving average method to eliminate short-term fluctuations of the signals and highlight their long-term trends. Finally, we save them as CSI radio images with 234 x 300 resolution to utilize the spatial and temporal correlations between adjacent channels and samples [28].

### _Model Training and Testing Configuration_

Our model is implemented in PyTorch and trained on a computer equipped with an Nvidia RTX 3090 GPU. Before training the model, we use several general data augmentation methods (RandomResizedCrop, RandomHorizontalFlip, and ColorJitter) when loading the training data. The AdamW optimizer is used to optimize the model parameters during the training phase. The CosineAnnealingLR scheduler is applied to attenuate the learning rate via the cosine function, and a cross-entropy loss function is adopted for computing the training loss. The number of training epochs, the training batch size, and the testing batch size are set to 400, 50, and 1, respectively. For the early-exit prediction criterion, the prediction result is output early from the model if it equals zero or two. In our experiments, four criteria, i.e., accuracy, precision, recall, and F1-score [29], are used to evaluate the performance of BranchyGhostNet. We also assess its inference latency on a GPU and compare it with those of the seven other DL benchmarks.
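Putting this configuration together with the joint objective of Eq. (1), one training step reduces to summing a cross-entropy term per branch. Below is a minimal PyTorch sketch under these settings, reusing the `EarlyExitSketch` from the earlier sketch (any model returning `(rod_logits, har_logits)` works); the dummy dataset and the single input channel are our own placeholders, not the paper's data pipeline:

```python
import torch
import torch.nn as nn

# Dummy stand-in data; in practice the 234 x 300 CSI radio images are
# loaded with the augmentations of Sec. III-D (RandomResizedCrop,
# RandomHorizontalFlip, ColorJitter) applied on the fly.
train_set = torch.utils.data.TensorDataset(
    torch.randn(100, 1, 234, 300),   # CSI radio images (channel count assumed)
    torch.randint(0, 3, (100,)),     # ROD labels: nobody / one / two persons
    torch.randint(0, 5, (100,)),     # HAR labels: five activities
)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=50, shuffle=True)

model = EarlyExitSketch()            # returns (rod_logits, har_logits)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters())
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=400)

for epoch in range(400):
    for x, rod_labels, har_labels in train_loader:
        rod_logits, har_logits = model(x)
        # Joint objective of Eq. (1): early-exit (ROD) + final-exit (HAR) terms.
        loss = criterion(rod_logits, rod_labels) + criterion(har_logits, har_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()   # cosine learning-rate annealing over the 400 epochs
```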
## IV Experiment Evaluation

In this section, we use CSI data collected by the Raspberry Pi 4B platform via Wi-Fi channels from a single router as human activity features to validate the performance of our model and compare it with seven other commonly used DL benchmarks. We also utilize this model to build a Wi-Fi-based self-quarantine monitoring system.

### _The performance of BranchyGhostNet_

Tables I and II show the accuracy, precision, recall, F1-score, and mean GPU latency of eight different DL models. As can be seen from the tables, EfficientNet-B2 presents the best performance among the eight DL models for both ROD and HAR. However, it also has the largest latency, which may not be suitable for resource-constrained devices and latency-sensitive tasks. Compared with EfficientNet-B2, our model has only a 0.58% and 0.23% accuracy drop while achieving a 73.82% and 43.03% latency reduction for ROD and HAR, respectively. Comparing the mean latency of BranchyGhostNet in Table I with that in Table II shows that, after introducing the early-exit prediction mechanism for ROD, an approximately two-fold computation speed-up can be achieved without compromising the accuracy. From the results in Tables I and II, we can also observe that most of the LSTM-based models perform worse than the CNN-based models, which implies a strong feature extraction capability of CNNs for CSI-based HAR and ROD.

The confusion matrices illustrating the ROD and HAR accuracy of our proposed model are presented in Figs. 5 and 6, respectively. From the two confusion matrices, we can see that higher accuracies are obtained for ROD. That may be because the number of classes in HAR is larger than that in ROD, resulting in more powerful shared layers for the model and a strong feature extraction capability for ROD. It can be seen from the confusion matrix in Fig. 6 that all activities except "walk" are recognized well, while "walk" shows a slight drop, with an accuracy below 90%. This activity causes larger fluctuations in the received signals, which may produce noisier data patterns for the model.

Fig. 5: Confusion matrix (%) of BranchyGhostNet for ROD.

Fig. 6: Confusion matrix (%) of BranchyGhostNet for HAR.

### _Demonstration of the proposed self-quarantine monitoring system_

By exploiting the proposed BranchyGhostNet, we present a Wi-Fi CSI-based self-quarantine monitoring system. Fig. 7 shows the visualization of the detection results in ThingsBoard (a cloud Internet of Things (IoT) platform) and the receipt of alarm messages in Telegram (a messenger) for remote monitoring. Since the Raspberry Pi 4B only has a CPU, we further converted the model from the PyTorch framework to the Open Neural Network Exchange (ONNX) format to reduce the inference latency on the CPU.

## V Conclusion and Future Work

This paper proposed a CSI-based device-free system for monitoring a user during the quarantine period while preserving the user's privacy. To improve the efficiency of ROD, we introduced the early-exit prediction mechanism into the original GhostNet to design the BranchyGhostNet model for the joint task of HAR and ROD. We have also developed a graphical user interface (GUI) to visualize the prediction results and used a messenger to receive the warnings. We have evaluated the performance of our proposed model.
The experimental results showed that our model achieves the best balance between latency and accuracy (i.e., it provides low latency while guaranteeing high accuracy) compared with the seven other commonly used DL benchmarks. After introducing the early-exit prediction mechanism for ROD in BranchyGhostNet, its latency can be reduced by 54.04% compared with the final exiting branch. A limitation of this study is that we currently only collect predefined activity data under an ideal scenario to train and test the model. Future research could investigate how to handle unseen activities without the extra effort of data collection and model retraining. We will also explore how to leverage unlabeled CSI data for model training to reduce the need for data labeling.

## VI Acknowledgment

This work was supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B090928001); and by The Hong Kong Polytechnic University (Project No. 4-ZZMU, Q-CDAS).
2305.08544
Quantum Neural Network for Quantum Neural Computing
Neural networks have achieved impressive breakthroughs in both industry and academia. How to effectively develop neural networks on quantum computing devices is a challenging open problem. Here, we propose a new quantum neural network model for quantum neural computing using (classically-controlled) single-qubit operations and measurements on real-world quantum systems with naturally occurring environment-induced decoherence, which greatly reduces the difficulties of physical implementations. Our model circumvents the problem that the state-space size grows exponentially with the number of neurons, thereby greatly reducing memory requirements and allowing for fast optimization with traditional optimization algorithms. We benchmark our model for handwritten digit recognition and other nonlinear classification tasks. The results show that our model has an amazing nonlinear classification ability and robustness to noise. Furthermore, our model allows quantum computing to be applied in a wider context and inspires the earlier development of a quantum neural computer than standard quantum computers.
Min-Gang Zhou, Zhi-Ping Liu, Hua-Lei Yin, Chen-Long Li, Tong-Kai Xu, Zeng-Bing Chen
2023-05-15T11:16:47Z
http://arxiv.org/abs/2305.08544v1
# Quantum Neural Network for Quantum Neural Computing

###### Abstract

Neural networks have achieved impressive breakthroughs in both industry and academia. How to effectively develop neural networks on quantum computing devices is a challenging open problem. Here, we propose a new quantum neural network model for quantum neural computing using (classically-controlled) single-qubit operations and measurements on real-world quantum systems with naturally occurring environment-induced decoherence, which greatly reduces the difficulties of physical implementations. Our model circumvents the problem that the state-space size grows exponentially with the number of neurons, thereby greatly reducing memory requirements and allowing for fast optimization with traditional optimization algorithms. We benchmark our model for handwritten digit recognition and other nonlinear classification tasks. The results show that our model has an amazing nonlinear classification ability and robustness to noise. Furthermore, our model allows quantum computing to be applied in a wider context and inspires the earlier development of a quantum neural computer than standard quantum computers.

+
Footnote †: These authors contributed equally to this work

## Introduction

Developing new computing paradigms [1; 2; 3; 4] has attracted considerable attention in recent years due to the increasing cost of computing and the von Neumann bottleneck [5]. Conventional (hard) computing is characterized by precision, certainty, and rigor. In contrast, "soft computing" [1; 2] is a newer approach to computing that mimics human thinking to learn and reason in an environment of imprecision, uncertainty, and partial truth. This approach aims to address real-world complexities with tractability, robustness, and low solution costs. In particular, neural networks (NNs), a subfield of soft computing, have rapidly evolved in both theory and practice during the current machine learning boom [6; 7]. With backpropagation algorithms, NNs have achieved impressive breakthroughs in both industry and academia [8; 9] and may even alter the way computation is performed [4]. However, the training cost of NNs can become very expensive as the network size increases [10]. More seriously, it is difficult for NNs to simulate quantum many-body systems with exponentially large quantum state spaces [11], which restricts basic scientific research as well as intelligent approaches to biopharmaceutical and materials design.

Quantum computing [3] is another paradigm shift in computing, and it promises to solve the aforementioned difficulties of NNs. How to effectively develop NNs on quantum computing devices is a challenging open problem [12; 13; 11] that is still in its initial stages of exploration. In recent years, many novel and original works have attempted to develop well-performing quantum NN models [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25] on noisy intermediate-scale quantum devices [26], and these networks can be used to learn tasks involving quantum data or to improve classical models. However, despite the remarkable progress in the physical implementation of quantum computing in recent years, a number of significant challenges remain for building a large-scale quantum computer [27; 28; 29]. Thus, if the quest for quantum NNs relies heavily on standard quantum computing devices, the scope of applying quantum NNs might be quite restrictive.
A real-world quantum system is always characterized by nonunitary, faulty evolutions and is coupled with a noisy and dissipative environment. The real-system complexities in the quantum domain call for a new paradigm of quantum computing that aims at nonclassical computation using real-world quantum systems. This new quantum computing paradigm, called soft quantum computing by analogy with classical soft computing, deals with classically intractable computation under the conditions of noisy and faulty quantum evolutions and measurements, while being tolerant of effects that are detrimental to the standard quantum computing paradigm. Here, we propose for the first time a quantum NN model to illustrate soft quantum computing. Unlike other quantum NN models, we develop NNs for quantum neural computing based on "soft quantum neurons", which are the building blocks of soft quantum computing and are subject to only single-qubit operations, classically-controlled single-qubit operations, and measurements, thus significantly reducing the difficulties of physical implementations. We demonstrate that quantum correlations characterized by non-zero quantum discord are present between the quantum neurons in our model. The simulation results show that our quantum perceptron can be used to classify nonlinear problems and simulate the XOR gate. In contrast, classical perceptrons do not possess such nonlinear classification capabilities. Furthermore, our model is able to classify handwritten digits with an extraordinary generalization ability even without hidden layers. Our model also has a significant accuracy advantage over other quantum NNs for the above-mentioned tasks. Prominently, the proposed soft quantum neurons can be integrated into quantum analogues of typical topological architectures [30; 31; 32; 33] used in classical NNs. The respective advantages of quantum technology and classical network architectures can thus be well combined in our quantum NN model.

**Results**

**Soft quantum neurons.** Quantizing the smallest building block of classical NNs, namely, the neuron, is a key challenge in building quantum NNs. Our soft quantum neuron model is inspired by biological neurons (Fig. 1) and can be implemented on realistic quantum systems. The term "soft" is utilized here to highlight the ability of our model to handle realistic environments and evolutions, distinguishing it from the standard quantum computing models. It is worth noting that soft computing is an established term that is conceptually opposite to hard computing.

Figure 1: **A drastically simplified drawing of a neuron and the soft quantum neuron model.** **a** The neuron integrates hundreds or thousands of impinging signals through its dendrites. After processing by the cell body, the neuron outputs a signal through its axon to another neuron for processing in the form of an action potential when its internal potential exceeds a certain threshold. **b** Similarly, a “soft quantum neuron” can in principle receive hundreds or thousands of input signals. These signals affect the evolution of the soft quantum neuron. The evolved soft quantum neuron is measured and decides whether to output a signal according to the measurement result.

In our proposal, a quantum neuron is modelled by a noisy qubit, which can be coupled with its surrounding environment. The initial state of the \(j\)th neuron can be described by a density matrix \(\rho_{j}^{in}\) in the computational basis \(\ket{0}\) and \(\ket{1}\). The quantum neuron \(\rho_{j}^{in}\) accepts \(n_{j}\) outputs \(s_{i}\) (\(i=1,2,...,n_{j}\)) from the final states \(\rho_{i}^{out}\) (\(i=1,2,...,n_{j}\)) of the other possible \(n_{j}\) neurons. The output \(s_{i}\) is determined by a two-outcome projective measurement on \(\rho_{i}^{out}\) in the computational basis. It is therefore a classical binary signal, namely, \(s_{i}=0\) or \(1\).
When \(s_{i}=1\), corresponding to the case where \(\rho_{i}^{out}\) is measured and collapses to the state \(\ket{1}\), the quantum neuron \(\rho_{j}^{in}\) is acted upon by an arbitrary superoperator \(\mathcal{W}_{ij}\), while when \(s_{i}=0\) (\(\rho_{i}^{out}\) collapses to the state \(\ket{0}\)), nothing happens to \(\rho_{j}^{in}\). Ideally, \(\mathcal{W}_{ij}\) can be replaced by a corresponding unitary operator \(W_{ij}\). As a result, the evolution of the whole system from the state \(\bigotimes_{i=1}^{n_{j}}\rho_{i}^{out}\otimes\rho_{j}^{in}\) is
\[\begin{split}\rho_{\{i\}j}^{mid}&=\mathcal{T}\bigotimes_{i=1}^{n_{j}}\mathcal{O}_{ij}(\rho_{i}^{out}\otimes\rho_{j}^{in})\\ &\equiv\mathcal{T}\bigotimes_{i=1}^{n_{j}}[\mathcal{P}_{\ket{0}_{i}}\otimes\hat{I}_{j}+\mathcal{P}_{\ket{1}_{i}}\otimes\mathcal{W}_{ij}](\rho_{i}^{out}\otimes\rho_{j}^{in}).\end{split} \tag{1}\]
Here, \(\mathcal{O}_{ij}\) is a classically-controlled single-qubit operation, the superprojectors \(\mathcal{P}_{\ket{s}}\) are defined by \(\mathcal{P}_{\ket{s}}\rho=\ket{s}\bra{s}\rho\ket{s}\bra{s}\), \(\hat{I}\) is the identity operator, and \(\mathcal{T}\) represents a time-ordering operation. All \(\mathcal{W}_{ij}\) act upon the target neuron \(\rho_{j}^{in}\) with specific temporal patterns. As different quantum operations \(\mathcal{W}_{ij}\) might be noncommutative, the time-ordering of these operations is important. The state of the target neuron after the evolution of Eq. (1) can be obtained by tracing out all the input neurons \(\rho_{i}^{out}\), namely, \(\rho_{j}^{mid}=\mathrm{tr}_{\{i\}}\rho_{\{i\}j}^{mid}=\mathcal{T}\prod_{i=1}^{n_{j}}[p_{i}\hat{I}_{j}+(1-p_{i})\mathcal{W}_{ij}]\rho_{j}^{in}\), where \(p_{i}\equiv p_{i}(0)=\mathrm{tr}(\ket{0}_{i}\bra{0}\rho_{i}^{out})\). After the evolution of Eq. (1), the target neuron \(\rho_{j}^{mid}\) is independently acted upon by a local bias superoperator \(\mathcal{U}_{j}\). This operator is designed to improve the flexibility and learning ability of quantum neurons. Ideally, \(\mathcal{U}_{j}\) can be replaced by a corresponding unitary operator \(U_{j}\). The action of \(\mathcal{U}_{j}\) is similar to adding a bias to neurons in classical NNs [6; 7]. The final state of the target neuron is thus \(\rho_{j}^{out}=\mathcal{U}_{j}(\rho_{j}^{mid})\). Similarly, the output \(s_{j}\) of the target neuron is obtained by a two-outcome projective measurement on \(\rho_{j}^{out}\) in the computational basis. The output signal \(s_{j}\) of the target neuron is
\[s_{j}=\left\{\begin{array}{ll}0&\text{with probability }p_{j}(0)\\ 1&\text{with probability }1-p_{j}(0).\end{array}\right. \tag{2}\]
The output \(s_{j}\) can be accepted by all other connecting quantum neurons and affects the evolution of the quantum neurons that accept it. This completes the specification of our proposed quantum neuron model. Strikingly, our model accommodates noisy cases, which allows it to work under the conditions of noisy and faulty quantum evolutions and measurements. An elementary setup of our model is the soft quantum perceptron, which consists of a soft quantum neuron accepting inputs from \(n\) other soft quantum neurons and providing a single output, albeit probabilistically.
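Because each neuron is a single qubit, the rules above are cheap to simulate classically with 2x2 density matrices. The following NumPy sketch is our own illustration of one stochastic forward pass in the ideal case; for simplicity it takes every \(\mathcal{W}_{ij}\) and \(\mathcal{U}_{j}\) to be a \(Y\)-axis rotation and starts the target neuron in \(\ket{0}\bra{0}\), both of which are assumptions (the model allows arbitrary single-qubit operations and initial states):

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def soft_quantum_neuron(p_inputs, w_angles, u_angle, rng):
    """One stochastic forward pass of a soft quantum neuron (ideal case).

    p_inputs : p_i = Pr(input neuron i is measured as |0>)
    w_angles : angles of the classically controlled unitaries W_ij
    u_angle  : angle of the local bias unitary U_j
    Returns Pr(s_j = 1) for this run; averaging over many runs
    reproduces the mixed state rho_j^mid described in the text.
    """
    rho = np.array([[1.0, 0.0], [0.0, 0.0]])      # target starts in |0><0| (assumed)
    for p_i, theta in zip(p_inputs, w_angles):    # time-ordered inputs
        s_i = rng.random() > p_i                  # s_i = 1 with prob 1 - p_i
        if s_i:                                   # apply W_ij only when s_i = 1
            W = ry(theta)
            rho = W @ rho @ W.T
    U = ry(u_angle)                               # local bias operation U_j
    rho = U @ rho @ U.T
    return rho[1, 1]                              # 1 - p_j(0), cf. Eq. (2)

rng = np.random.default_rng(0)
print(soft_quantum_neuron([0.3, 0.9], [np.pi / 3, np.pi / 4], np.pi / 8, rng))
```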
**Quantumness of quantum neurons.** All final states of our quantum neurons are mixed states, as the evolution of these neurons depends on the measurements of their input neurons, thus introducing classical probability. Although such measurements make the neurons evolve into mixed states, the proposed quantum neurons can still develop quantum correlations arising from quantum discord. To make this clear, we consider the simplest two-neuron case. For the two neurons in the states \(\rho_{1}^{out}=p_{1}\ket{0}_{1}\bra{0}+(1-p_{1})\ket{1}_{1}\bra{1}\) (\(p_{1}\neq 0,1\)) and \(\rho_{2}^{in}\), the action of an operation \(\mathcal{O}_{12}\) results in the state
\[\begin{split}\rho_{12}^{mid}&=(\mathcal{P}_{\ket{0}_{1}}\otimes\hat{I}_{2}+\mathcal{P}_{\ket{1}_{1}}\otimes\mathcal{W}_{12})(\rho_{1}^{out}\otimes\rho_{2}^{in})\\ &=p_{1}\ket{0}_{1}\bra{0}\otimes\rho_{2}^{in}+(1-p_{1})\ket{1}_{1}\bra{1}\otimes\mathcal{W}_{12}(\rho_{2}^{in}),\end{split} \tag{3}\]
where \(\mathcal{W}_{12}\) represents a specific quantum channel. Quantum correlations, if any, of \(\rho_{12}^{mid}\) can be quantified by the quantum discord [34]. Any bipartite state is called fully classically correlated if it is of the form \(\sum_{i,j}p_{ij}\ket{i}_{1}\bra{i}\otimes\ket{j}_{2}\bra{j}\) [35]; otherwise, it is quantum correlated. Here, \(\ket{i}_{1}\) and \(\ket{j}_{2}\) are the orthonormal bases of the two parties, with nonnegative probabilities \(p_{ij}\). Obviously, for \(\rho_{12}^{mid}\) in Eq. (3), the first neuron becomes quantum-correlated with the second as long as \(\mathcal{W}_{12}(\rho_{2}^{in})\) and \(\rho_{2}^{in}\) are nonorthogonal [36; 37; 38]. In particular, Refs. [37; 38] show the creation of discord from classically correlated two-qubit states by applying an amplitude-damping process on only one of the qubits; for the phase-damping process, see Ref. [35]. Actually, \(\rho_{12}^{mid}\) in Eq. (3) is the _classical-quantum state_, as dubbed in Ref. [37]. While measurements on neuron-1 give zero discord, measurements on neuron-2 in general lead to nonzero discord. Thus, we reveal a crucial property of our quantum neuron model: quantum correlations arising from quantum discord can be developed between the proposed quantum neurons, although these neurons are generally in mixed states. Note that existing quantum neural network models are mainly based on variational quantum circuits requiring two-qubit gates. More remarkably, our model can be equated to quantum circuits generating quantum entanglement. To illustrate this more clearly, we take the three neurons in Fig. 2**a** as an example and represent their interactions by the quantum circuit model shown in Fig. 2**b**.
To facilitate the demonstration, we consider the ideal case where \(\rho_{2}^{in}\) and \(\rho_{3}^{in}\) are pure states and \(\mathcal{W}_{ij}\) and \(\mathcal{U}_{j}\) are replaced by the corresponding unitary operators \(W_{ij}\) and \(U_{j}\), respectively. According to the principle of deferred measurement [3], measurements can always be moved from an intermediate step of a quantum circuit to the end of the circuit. Therefore, the circuit in Fig. 2**c** is equivalent to that in Fig. 2**b**. In the equivalent circuit, the unitaries that are conditional on the measurement results are replaced by controlled unitary operations on \(\rho_{2}^{in}\) and \(\rho_{3}^{in}\). It is easy to verify that quantum entanglement can exist between neuron-1 and neuron-2 (as well as neuron-3) in Fig. 2**c**. Another example of the principle of deferred measurement can be found in teleportation [3]. Nonetheless, it remains unclear whether this equivalence can be effectively utilized in computing tasks. We leave this matter for future work.

**Soft quantum neural network.** Quantum neurons are connected together in various configurations to form quantum NNs with learning abilities, thus representing a quantum neural computing device obeying the evolution-measurement rules provided above. Our neurons can in principle be combined into quantum analogues of any classical network architecture that has proven effective in many applications. In this work, we present a fully-connected soft quantum feedforward NN (SQFNN) for application to supervised learning. Neurons are arranged in layers in a fully-connected feedforward NN (FNN). Each neuron accepts all the signals sent by the neurons in the previous layer and outputs the integrated signal to each neuron in the next layer. Note that there is no signal transmission between neurons within the same layer. To date, there has been no satisfactory quantum version of this simple model. Because the quantum no-cloning theorem [39] forbids any neuron from perfectly copying its quantum state into multiple duplicates as outputs to the next layer, the output cannot be perfectly shared by the neurons in the next layer. Because of the same theorem, quantum neural computing and standard quantum computing have incompatible requirements that are difficult to reconcile [12]. Our quantum NN model resolves this incompatibility by measuring each soft quantum neuron to give classical information as the integrated signal. This feature is essential for our model to be a genuine quantum NN model, which, while incorporating a neural computing mechanism, uses quantum laws consistently throughout neural computing. In fact, many studies have made bold attempts in this challenging area. For example, Ref. [14] introduces a general "fan-out" unit that distributes information about the input state into several output qubits. The quantum neuron in Ref. [15] is modelled as an arbitrary unitary operator with \(m\) input qubits and \(n\) output qubits. These attempts provide new perspectives for resolving the above-mentioned incompatibility. Unfortunately, none of them directly confronts this incompatibility. The neurons in these schemes still cannot share the outputs of the neurons in the previous layer; conversely, each neuron can only send different signals to different neurons in the next layer. In that sense, our NN is quite different from these quantum NNs. Figure 3 shows the concept of an SQFNN. Without loss of generality, we specify that signals propagate from top to bottom and from left to right.
Therefore, the evolution equation of the \(j\)th neuron in the \(l\)th layer is
\[\rho_{j^{(l)}}^{\text{out}}\equiv\mathcal{U}_{j^{(l)}}\,\mathrm{tr}_{\{i^{(l-1)}\}}\Big(\bigotimes_{i^{(l-1)}=1}^{n^{(l-1)}}\mathcal{O}_{i^{(l-1)}j^{(l)}}\big(\rho_{i^{(l-1)}}^{\text{out}}\otimes\rho_{j^{(l)}}^{\text{in}}\big)\Big), \tag{4}\]
where \(\mathcal{O}_{i^{(l-1)}j^{(l)}}\) acts on the \(i\)th neuron in the \((l-1)\)th layer and the \(j\)th neuron in the \(l\)th layer. The final state of the output layer of the network can be obtained by calculating the final state of each neuron layer by layer with Eq. (4), after accounting for the local bias superoperator acting upon each neuron. Note that due to the randomness introduced by the measurement operations, the result of a single run of the quantum NN is unstable, i.e., probabilistic. One way to prevent this instability is to obtain the average output of the network by resetting and rerunning the entire network multiple times. This average output is more representative of the prediction made by our quantum NN and is therefore defined as the final output of the network. For each neuron of the output layer, the average output includes the binary outputs in the computational basis and their corresponding probabilities. Although running the network multiple times seems to consume more time and resources, this increase only amounts to an additional constant factor on the original consumption [15] and has no serious consequences. Therefore, running the network multiple times is common practice for extracting the information of quantum NNs and is also widely adopted by other quantum NN models [15; 20]. Strikingly, this repetitive operation is easy and fast for a quantum computer. For example, the "Sycamore" quantum computer executed an instance of a quantum circuit a million times in 200 seconds [40]. In supervised learning, the NN must output a value close to the label of the training point. The closeness between the output and the label is usually measured by defining a loss function. The loss function in our model can be defined in various ways, e.g., by the fidelity between the output and the expected output or by a certain distance measure. In the simulations shown below, a mean squared error (MSE) loss function is adopted, which can be written as
\[\mathcal{L}=\frac{1}{N}\sum_{k=1}^{N}\left|y^{k}-\tilde{y}^{k}\right|^{2}, \tag{5}\]
where \(N\) represents the size of the training set, \(y^{k}\) represents the label of the \(k\)-th training point, and \(\tilde{y}^{k}\) represents the label predicted by our network for the \(k\)-th training point, which is the average value of the output layer of the network obtained by resetting and rerunning the entire network multiple times. This loss function can be driven to a very low value by updating the parameters of the network, thereby improving the network performance. However, the loss function is nonconvex and thus requires iterative, gradient-based optimizers. As information is forward-propagated in our network, we can use a backpropagation algorithm to update the parameters of the quantum operations. Moreover, since only single-qubit gates are involved in our model, the total number of parameters is not large and is approximately \(\sum_{l=1}^{L-1}3\left(n_{l}+1\right)\times n_{l+1}\), where \(L\) is the total number of layers in the network and \(n_{l}\) is the number of neurons in the \(l\)th layer.
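As a concrete check of this counting (our own worked example), a hypothetical 2-4-1 SQFNN would carry roughly \(3(2+1)\times 4+3(4+1)\times 1=51\) trainable parameters: each connection operation \(\mathcal{W}_{ij}\) and each bias \(\mathcal{U}_{j}\) contributes the three angles of a general single-qubit unitary.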
This number is directly proportional to the length \(L\) of the network and to the square of the average width of the network (i.e., the average number of neurons per layer). In particular, the state space involved in computing the gradients is always that of a single neuron, thus circumventing the problem that the state-space size grows exponentially with the number of neurons. Many optimization algorithms widely used in classical NNs are therefore effectively compatible with our quantum NN, such as Adagrad [41], RMSprop [42], and Adam [43].

Figure 3: **The concept of SQFNNs.** The network architecture of a soft quantum feedforward NN (SQFNN) is similar to that of the fully-connected feedforward NN (FNN) displayed at the bottom right corner. There is no feedback in the entire network. Signals propagate unidirectionally from the input layer to the output layer. The first action occurs between the red neuron and the blue neuron. The part in the white box represents the output layer, whose average output is defined as the final output of the network.

Both classical and quantum samples are available for our network, similar to other quantum NNs. For classical data, the input features need to be encoded into qubits and fed to the input layer. For quantum data, the quantum states can be decomposed into a tensor product of the qubits in the input layer, as in quantum circuits.

#### Simulations

In this section, we benchmark soft quantum perceptrons and SQFNNs on simple XOR gate learning, nonlinear dataset classification, and handwritten digit recognition. Our models show extraordinary generalization abilities and robustness to noise in the numerical simulations.

#### XOR gate learning.

The XOR gate is a logic gate that cannot be simulated by classical perceptrons because the input-output relationship of the gate is nonlinear. Figure 4 reports the results of XOR gate learning with a soft quantum perceptron. Figures 4**a**,**b** show the structure and setting of the soft quantum perceptron (see Methods for details). The results clearly show that the soft quantum perceptron is able to learn the data structure of the XOR gate with very high accuracy (Fig. 4**c**). Figure 4**d** shows the training process, where the training accuracy of our model converges quickly, after even a single epoch. These results show that our soft quantum perceptron has an extraordinary nonlinear classification ability. In addition, we add the bit flip channel, the phase flip channel, and the bit-phase flip channel to this task to further demonstrate the performance of our model on realistic quantum systems. We assume that each quantum neuron passes through the same type of quantum noise channel with probability \(p\) while waiting to be operated on. To make the results more reliable, we repeat the prediction 100 times with the trained model and use the average accuracy as the evaluation metric. We set the highest noise level in the simulations to \(p=0.5\). Measurements in the simulations are calculated in the limit of an infinite number of shots. Details of the simulation results can be found in Table 1 in Methods. The results show that our model is robust to these different quantum channels. A remarkable result is that our model is fully tolerant to the phase flip channel for the XOR gate learning task. In particular, our model achieves up to 75% accuracy even with a probability of a bit flip or bit-phase flip as high as 0.40.
When the probability of a bit flip or bit-phase flip reaches 0.50, the noise makes the qubits \(|0\rangle\) and \(|1\rangle\) completely indistinguishable. Our model naturally does not work in this case, which is consistent with theoretical predictions.

#### Classifying nonlinear datasets.

Two standard two-dimensional datasets ("circles" and "moons") are studied to further demonstrate the ability of soft quantum perceptrons to classify nonlinear datasets and form decision boundaries (see Methods for details). For each dataset, 200 (100) points are generated as the training (test) set. Figure 5**a** visualizes the training sets for the two datasets, where the red (blue) dots represent class 1 (class 2). Obviously, the two datasets are linearly inseparable. Figure 5**b** reports the results of classifying the "circles" dataset with different models. Displayed from left to right are the simulation results for the classical multilayer perceptron (MLP), the parameterized quantum circuit (PQC) model, and our model. The settings of these models are discussed in detail in the Methods. Figure 5**c** shows that all three models achieved 100% classification accuracy on the test set of "circles". However, the soft quantum perceptron converges faster and learns more robust decision boundaries. It is worth reemphasizing that soft quantum perceptrons do not have hidden layers and do not require two-qubit gates. We also test the tolerance of soft quantum perceptrons to different noise types on this task (see Table 2 in Methods). The noise types are added in a manner consistent with the XOR gate learning task described above. The results show that the soft quantum perceptron maintains 100% accuracy on the test set of "circles" even when the probability of a bit flip or bit-phase flip is as high as 0.40. In particular, a soft quantum perceptron can achieve up to 96% accuracy when the probability of a bit flip is as high as 0.49. In addition, the soft quantum perceptron can maintain over 90% accuracy when the probability of a phase flip is as high as 0.5. We also found that the robustness of our model can be greatly enhanced by using SQFNNs. For example, by adopting a 2-4-2-1 network structure, we obtain 100% accuracy even when the probability of a phase flip is as high as 0.50. This suggests that the capabilities of our model can be enhanced by building more complex network structures, which gives us strong confidence in handling more complex classification problems with our model. Figures 5**d**-**e** show the results of classifying the "moons" dataset with different models. The MLP achieved 100% accuracy, which is slightly higher than the 99% accuracy of the soft quantum perceptron. However, the soft quantum perceptron learns a decision boundary that is better suited to the original data. For comparison, the PQC model can only achieve 92% accuracy. In the experimental setup currently used, our model shows clear advantages over the PQC model in some tasks.

#### Handwritten digit recognition.

Finally, we use QuantumFlow, the classical MLP, the PQC model, the soft multioutput perceptron (SMP), and the SQFNN to recognize handwritten digits to demonstrate the ability of our models to solve specific practical problems (see Methods for details). QuantumFlow is a codesign framework of NNs and quantum circuits, and it can be used to design shallow networks that can be implemented on quantum computers [18]. The SMP can also be regarded as an SQFNN without hidden layers.
The simulation setting is discussed in the Methods section. Figure 6 shows the results of the different classifiers for classifying different sub-datasets from MNIST [44]. The results show that the classical MLP performs better than the other four quantum models on all these subdatasets except for \(\{3,9\}\). This may be because the classical optimization algorithm is better adapted to the classical MLP model. Strikingly, the performance of our models (i.e., SMP and the SQFNN) is significantly better than that of QuantumFlow as the number of classes in the dataset increases, implying that our models may have more advantages in dealing with more complex classification problems. For the datasets with two or three classes, our models also perform significantly better than the PQC model and perform comparably to QuantumFlow. For example, the SQFNN achieves \(89.67\%\) accuracy on the \(\{3,8\}\) dataset, which is \(2.47\%\) and \(4.34\%\) higher than those of QuantumFlow and the PQC model, respectively. However, our models require only classically controlled single-qubit operations and single-qubit operations, whereas QuantumFlow requires a large number of controlled two-qubit gates or even Toffoli gates to implement the task. In particular, SMP is able to effectively classify handwritten digits with a structure without hidden layers, which is not possible for classical multioutput perceptrons.

**Discussion**

In this work, we develop a new route for quantum NNs as a platform for quantum neural computing on real-world quantum systems. The proposed soft quantum neurons are subject merely to local or classically controlled single-qubit gates and single-qubit measurements. The simulation results show that soft quantum perceptrons have a nonlinear classification ability beyond that of classical perceptrons. Furthermore, our model is able to classify handwritten digits with extraordinary generalization ability, even in the absence of hidden layers. This performance, combined with the quantum correlations arising from quantum discord in our model, makes it possible to perform nonclassical computations on realistic quantum devices that are extensible to a large scale. Thus, the proposed computing paradigm is not only physically easy to implement, but also predictably exciting beyond classical computing capabilities.

Figure 4: **Results of XOR gate learning with a soft quantum perceptron.** **a** The structure of a soft quantum perceptron for learning the XOR gate. The input layer receives the two features of the XOR gate, namely, input 1 and input 2 of XOR. The output layer predicts the results. **b** The quantum circuit model corresponding to (**a**). \(R^{Y}(x_{1}^{k}\pi)\) and \(R^{Y}(x_{2}^{k}\pi)\) are used to encode the input features (see Methods for details). \(W_{13}^{s_{1}}\), \(W_{23}^{s_{2}}\) and \(U_{3}\) are single-qubit gates with parameters, where the values of the parameters converge during the learning process. **c** The simulation results of learning the XOR gate. The yellow (black) area represents an output of 1 (0), which is consistent with the truth table of the XOR gate. The truth table of the XOR gate is displayed at the four corners of the figure. The soft quantum perceptron fully learns the data structure of the XOR gate. **d** The training process for the XOR gate. Loss (accuracy) is the value of the loss function (the test accuracy). The soft quantum perceptron achieves \(100\%\) test accuracy after the first epoch.
The soft quantum neurons are modelled as independent signal processing units and offer more flexibility in the network architecture. Similar to classical perceptrons [6; 7], such units can receive signals from any number of neurons and send their outputs to any number of neurons. This property allows our quantum NNs to adopt classical network architectures that have proven effective, thereby exploiting the respective advantages of quantum technology and classical network architectures. For example, soft quantum neurons can be combined into quantum convolutional NNs modeled on the convolutional NNs that are widely used in large-scale pattern recognition [30]. Moreover, our model enables the construction of quantum-classical hybrid NNs by introducing classical layers. As the final output of our quantum NN is classical information, part of the classical information can also be processed by classical perceptrons. This advantage makes our model more flexible and thus more adaptable to various problems. Our results provide an easier and more realistic route to quantum artificial intelligence. However, some limitations are worth noting. Although the quantum state space involved in computing the gradients in our model is always that of a single neuron, there may also be a barren plateau in the loss function landscape, which hinders the further optimization of the network. Additionally, while soft quantum NNs are much easier to build than standard ones, more work is needed to understand which kinds of tasks they learn well. Future work should therefore include further research on optimization algorithms and building various soft quantum NNs inspired by classical architectures to solve problems that are intractable with classical models.

Figure 5: **Nonlinear decision boundaries by the classical MLP, the PQC model and our model.** **a** Displayed from left to right are the visualizations of the “circles” dataset and the “moons” dataset. \(X_{1}\) (\(X_{2}\)) represents the horizontal (vertical) coordinate of the input point. The red (blue) dots represent class 1 (class 2). Both datasets are linearly inseparable. **b** Displayed from left to right are the simulation results for the classical multilayer perceptron (MLP), the PQC model and our model. The classification accuracy is displayed at the bottom right corner of each subfigure. **c** The training process of learning the “circles” dataset with different models. The corresponding results for the “moons” dataset are shown in **d** and **e**.

Figure 6: **Handwriting recognition with the classical MLP, the PQC model, QuantumFlow and our models.** The brown, orange, yellow, purple and blue bars represent the soft multioutput perceptron (SMP), SQFNN, QuantumFlow, the classical MLP and PQC models, respectively. SMP can also be regarded as an SQFNN without hidden layers, and it accurately identifies handwritten digits, which is impossible for classical multioutput perceptrons.

**Methods**

**Soft quantum perceptron for XOR gate learning.** We now discuss the details of the simulation setting for XOR gate learning. Figure 4**a** shows the model structure for learning an XOR gate, where the two neurons in the input layer receive and encode the data points, and the neuron in the output layer predicts the outcome. We adopt a simpler and more efficient angle encoding method instead of the method adopted in Ref. [45] to encode the data (Fig. 4**b**), which accelerates the convergence of the training process.
Specifically, for an input set \(\{x^{k}\}\), we encode the \(i\)-th feature \(x^{k}_{i}\) of the \(k\)-th data point by applying a single-qubit rotation gate \(R^{Y}(x^{k}_{i}\pi)\) to the initial qubit \(\ket{0}\), where \(Y\) denotes a rotation about the \(Y\) axis and \(x^{k}_{i}\pi\) is the rotation angle. Note that a common MSE loss function and the Adam algorithm [43] are used in the training processes for all tasks in this study. The soft quantum perceptron for learning XOR is optimized for 20 epochs, and the learning rate is set to 0.1. Table 1 shows how the test accuracy of our model for the XOR gate learning task varies as the flip probability \(p\) increases.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Noise channels} & \multicolumn{6}{c}{Flip probability} \\
 & 0.10 & 0.20 & 0.30 & 0.35 & 0.40 & 0.50 \\
\hline
Bit flip & 100\% & 100\% & 100\% & 100\% & 75\% & 50\% \\
Phase flip & 100\% & 100\% & 100\% & 100\% & 100\% & 100\% \\
Bit-phase flip & 100\% & 100\% & 100\% & 100\% & 75\% & 50\% \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Test accuracies of learning the XOR gate with the soft quantum perceptron after a bit flip channel, phase flip channel, or bit-phase flip channel with different flip probabilities.

**Classifying nonlinear datasets.** A 2-4-1 MLP structure is used for comparison in the task of classifying the "circles" dataset, as classical perceptrons are unable to classify nonlinear datasets. The reason for using this structure is that the 2-4-1 MLP needs to learn 17 parameters, which is approximately the same number of parameters that our model needs to learn. The structure of the PQC model used for comparison is adopted from Ref. [46]. This common layered PQC model is denoted as
\[U(\bar{\theta})=B_{d}\left(\bar{\theta}_{d}\right)\cdots B_{\ell}\left(\bar{\theta}_{\ell}\right)\cdots B_{1}\left(\bar{\theta}_{1}\right), \tag{6}\]
where \(\bar{\theta}\) represents the overall learnable parameters of the PQC, \(B_{\ell}\left(\bar{\theta}_{\ell}\right)\) is a parameterized block consisting of a certain number of single-qubit gates and entangling controlled gates, and the depth \(d\) represents the total number of such blocks. The qubits and controlled gates in the same block \(B_{\ell}\left(\bar{\theta}_{\ell}\right)\) form a cyclic code. The control proximity range of a cyclic code, denoted as \(r\), defines how the controlled gates work. For any qubit index \(j\in[0,N-1]\) of an \(N\)-qubit circuit, the entangling code block has one controlled gate with the \(j\)th qubit as the target and the qubit with index \(k=(j+r)\bmod N\) as the control qubit (see Ref. [46] for details). In each block \(B_{\ell}\left(\bar{\theta}_{\ell}\right)\) of our setting, each qubit is acted on by a parameterized universal single-qubit gate, and then the entangling code block follows. One more optimizable single-qubit gate \(R^{Y}\) acts on each qubit in the final \(B_{\ell}\left(\bar{\theta}_{\ell}\right)\). The control proximity range of the cyclic code \(r\) is fixed to 1. Specifically, a 2-qubit circuit of depth \(d=1\) and size \(s=6\) is used to classify the "circles" dataset, where \(d\) is the number of blocks \(B_{\ell}\left(\bar{\theta}_{\ell}\right)\) and \(s\) is the total number of gates in the circuit other than those in the encoding layer. In particular, the encoding method of Ref. [45] is also used in this PQC model for classifying nonlinear datasets. To enrich the expressivity of our model, we adopt the "parallel encodings" strategy mentioned in Ref. [47] when classifying the "moons" dataset, that is, using multiple neurons to repeatedly encode the same input in the input layer. In the task of classifying the "moons" dataset, we repeatedly encode each input with three neurons. For comparison, we also simulate the results of a 2-10-1 MLP and a 4-qubit PQC with \(d=2\) and \(s=24\). The MLP has 41 parameters to learn. The PQC model also adopts the "parallel encodings" strategy in this task. In particular, the soft quantum perceptron does not have hidden layers, so its structure is simpler than that of the MLP.
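For reference, the flip channels behind Tables 1 and 2 act on a density matrix as \(\rho\mapsto(1-p)\rho+p\,\sigma\rho\sigma^{\dagger}\), with \(\sigma=X\), \(Z\), or \(Y\) for the bit, phase, and bit-phase flip channels, respectively. Below is a minimal NumPy sketch of the angle encoding followed by such a channel; this is our own illustration, and the exact points at which the channels are inserted follow the main text:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def encode(x):
    """Angle encoding R_Y(x*pi)|0>, returned as a density matrix."""
    theta = x * np.pi
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return np.outer(psi, psi.conj())

def flip_channel(rho, p, sigma):
    """Bit (X), phase (Z), or bit-phase (Y) flip with probability p."""
    return (1 - p) * rho + p * sigma @ rho @ sigma.conj().T

rho = flip_channel(encode(0.5), 0.2, X)  # feature 0.5 under a 20% bit flip
print(np.real(rho[1, 1]))                # Pr(measuring |1>) afterwards
```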
In fact, in addition to the results presented in the main text, we also found that a 4-qubit PQC with \(d=10\) and \(s=40\) could only achieve 94% accuracy when classifying the "moons" dataset. Table 2 shows how the test accuracy of our model for classifying the "circles" dataset varies as the flip probability \(p\) increases. Note that this effect is continuous, but the presentation of our results in a discrete table format may create an impression of discontinuity. Moreover, when the probability of a bit flip or bit-phase flip reaches 0.5, the \(\ket{0}\) and \(\ket{1}\) components of the corresponding quantum state become indistinguishable in the computational basis, resulting in the inability to extract any relevant information. This causes a sudden drop in the probability of successful learning, an effect that is particularly pronounced in close proximity to the 0.5 probability threshold.

**Simulation setting for handwritten digit recognition.** The specific simulation setting is as follows. First, we extract several subdatasets from MNIST. For example, \(\{3,6\}\) represents the subdataset containing the two classes of digits 3 and 6. After that, we apply the same downsampling size to all images from the same subdataset of MNIST. Specifically, we downsample the resolution of the original images from \(28\times 28\) to \(4\times 4\) for the datasets with two or three classes, and to \(8\times 8\) for the datasets with four or five classes. Finally, we use the structure from Ref. [18] that contains a hidden layer for QuantumFlow, the classical MLP, and the SQFNN, where the hidden layer contains 4 neurons for two-class datasets, 8 neurons for three-class datasets, and 16 neurons for four- and five-class datasets. The input and output layers of these models (including SMP) are determined by the downsampling size and the number of digits in the subdatasets. Note that the PQC model is designed as a 4-qubit circuit with \(d=10\) and \(s=120\) due to the lack of the concept of neurons. The PQC model is usually used as a binary classifier in current studies. Therefore, the PQC model is only used to classify the datasets with two classes in this task. Other QuantumFlow settings, such as accuracy, are consistent with those in Ref. [18].

**Data Availability**

Data generated and analyzed during the current study are available from the corresponding author upon reasonable request.

**Conflicts of Interest**

The authors declare that there is no conflict of interest regarding the publication of this article.

**Authors' Contributions**

Z.-B.C. conceived and supervised the study. M.-G.Z., Z.-P.L., H.-L.Y., and Z.-B.C. built the theoretical model. M.-G.Z., Z.-P.L., and H.-L.Y. performed the simulations. M.-G.Z., Z.-P.L., H.-L.Y., and Z.-B.C. cowrote the manuscript, with inputs from the other authors. All authors have discussed the results and proofread the manuscript.
**Acknowledgements**

We gratefully acknowledge support from the National Natural Science Foundation of China (No. 12274223), the Natural Science Foundation of Jiangsu Province (No. BK20211145), the Fundamental Research Funds for the Central Universities (No. 020414380182), the Key Research and Development Program of Nanjing Jiangbei New Area (No. ZDYD20210101), and the Program for Innovative Talents and Entrepreneurs in Jiangsu (No. JSSCRC2021484).
2303.08812
Trigger-Level Event Reconstruction for Neutrino Telescopes Using Sparse Submanifold Convolutional Neural Networks
Convolutional neural networks (CNNs) have seen extensive applications in scientific data analysis, including in neutrino telescopes. However, the data from these experiments present numerous challenges to CNNs, such as non-regular geometry, sparsity, and high dimensionality. Consequently, CNNs are highly inefficient on neutrino telescope data, and require significant pre-processing that results in information loss. We propose sparse submanifold convolutions (SSCNNs) as a solution to these issues and show that the SSCNN event reconstruction performance is comparable to or better than traditional and machine learning algorithms. Additionally, our SSCNN runs approximately 16 times faster than a traditional CNN on a GPU. As a result of this speedup, it is expected to be capable of handling the trigger-level event rate of IceCube-scale neutrino telescopes. These networks could be used to improve the first estimation of the neutrino energy and direction to seed more advanced reconstructions, or to provide this information to an alert-sending system to quickly follow-up interesting events.
Felix J. Yu, Jeffrey Lazar, Carlos A. Argüelles
2023-03-15T17:59:01Z
http://arxiv.org/abs/2303.08812v2
Trigger-Level Event Reconstruction for Neutrino Telescopes Using Sparse Submanifold Convolutional Neural Networks

###### Abstract

Convolutional neural networks (CNNs) have seen extensive applications in scientific data analysis, including in neutrino telescopes. However, the data from these experiments present numerous challenges to CNNs, such as non-regular geometry, sparsity, and high dimensionality. Consequently, CNNs are highly inefficient on neutrino telescope data, and require significant pre-processing that results in information loss. We propose sparse submanifold convolutions (SSCNNs) as a solution to these issues and show that the SSCNN event reconstruction performance is comparable to or better than traditional and machine learning algorithms. Additionally, our SSCNN runs approximately 16 times faster than a traditional CNN on a GPU. As a result of this speedup, it is expected to be capable of handling the trigger-level event rate of IceCube-scale neutrino telescopes. These networks could be used to improve the first estimation of the neutrino energy and direction to seed more advanced reconstructions, or to provide this information to an alert-sending system to quickly follow up interesting events.

## I Introduction

Gigaton-scale neutrino telescopes have opened a new window to the Universe, allowing us to study the highest-energy neutrinos. While there are a variety of proposed designs, many follow the detection principle outlined by the DUMAND project [1] and consist of an array of optical modules (OMs) deployed in liquid or solid water. This detector paradigm shows great promise, and analyses by these experiments have already provided the first evidence of astrophysical neutrino sources [2; 3]. Before they can be analyzed, however, high-energy neutrinos must be isolated from the immense cosmic-ray-muon-induced background. While a high-energy neutrino may trigger a detector once every few minutes, cosmic-ray muons typically induce a trigger rate on the order of kHz. Since they are unable to traverse a substantial portion of the Earth without coming to rest, cosmic-ray muons have a distinct zenith dependence. This allows them to be removed by cutting on the reconstructed direction of an event. Thus, a reliable reconstruction that is capable of keeping up with the \(\sim\)kHz background rate is the first step in isolating neutrinos. Moreover, a rapid reconstruction method could serve as part of an alert system that notifies researchers of events that are highly likely to be astrophysical neutrinos. For example, the real-time follow-up of such an IceCube event led to the observation of the first astrophysical neutrino source candidate, TXS 0506+056, by detecting a neutrino in coincidence with a gamma-ray flare [2]. Along this line, similar efforts are underway in water-based detectors such as ANTARES; see [8] for a recent review. At the trigger level, a simple but fast reconstruction is typically done by solving a least squares problem via matrix inversion, as is the case for LineFit [9] in IceCube or QFit in ANTARES [10]. Machine learning has shown promise by delivering a comparable-quality reconstruction with smaller runtime requirements [7; 11]; however, the fastest convolutional neural network (CNN) developed for high-energy neutrinos is not able to keep pace with a kHz-scale trigger-level rate. In this article, we introduce a reconstruction method using a sparse submanifold CNN (SSCNN), which overcomes this runtime issue.

Figure 1: _Event rates of triggers in different neutrino telescopes [4; 5; 6] compared to the run-times of various reconstruction methods._ Sparse submanifold CNNs and their performance are detailed in this article. The CNN and maximum likelihood method run-times are taken from [7]. Notably, sparse submanifold CNNs can process events well above standard trigger rates in both ice- and water-based experiments.

We will illustrate our method by focusing on solid-water detectors, but our results and conclusions readily generalize
We will illustrate our method by focusing on solid-water detectors, but our results and conclusions readily generalize to water-based detectors. Figure 1: _Event rates of triggers in different neutrino telescopes [4; 5; 6] compared to the run-times of various reconstruction methods._ Sparse submanifold CNNs and their performance are detailed in this article. The CNN and maximum likelihood method run-times are taken from [7]. Notably, sparse submanifold CNNs can process events well above standard trigger rates in both ice- and water-based experiments. In this context, our SSCNN achieves better angular resolutions than methods such as LineFit while requiring a comparable run-time, enabling improved trigger-level cuts and serving as a better seed for the likelihood-based reconstruction. Fig. 1 summarizes typical event rates found in neutrino telescopes and compares these to the execution rate of various reconstructions. At the same time, SSCNN is also able to reconstruct the neutrino energy, a task which has not been done at trigger level. While the rest of this article will concentrate on the implementation of SSCNN in an ice-embedded IceCube-like detector, it should be noted that our method is also applicable to water-based neutrino telescopes, where we expect similar performance gains. The rest of this article is organized as follows. In Sec. II we motivate and introduce sparse submanifold convolutions; in Sec. III we describe the data sets used for training and testing; in Sec. IV we evaluate the performance of the network. Finally, in Sec. V we conclude with some parting words. The code detailing our implementation of SSCNN has been made available at Ref. [12]. ## II Methods ### Sparse Submanifold Convolutions Convolutional neural networks (CNNs) have become the staple architecture for image-like data, and have achieved great success in a wide range of applications, including neutrino physics [13, 14, 15, 16]. However, data from neutrino telescopes presents inherent challenges to CNNs. In particular: * _Non-regular geometry_: CNNs are designed to operate on images, which are arranged on Cartesian grids. Neutrino telescope sensors are typically spaced irregularly [17, 18, 5, 19], with varying distances and arrangements in between each sensor. * _Sparsity_: Traditional CNNs use convolutions which operate on all points in the given input data. This leads to computational inefficiencies when the data is sparse. * _High dimensionality_: Events occur on large spatial and temporal scales. This makes using traditional CNNs computationally unfeasible on raw 4D data (three spatial and one time) without information loss or significant pre-processing. In this article, we propose a solution to these challenges using sparse submanifold convolutions [20]. This strategy has already shown success in liquid argon time projection chamber neutrino experiments [21, 22]. The usage of sparse submanifold convolutions in our network naturally solves the challenges laid out above. Sparsity and high dimensionality are no longer a concern, as the number of computations performed will depend only on the number of OM hits. With this improved computational efficiency, we can also handle non-regular geometries more smoothly by using the spatial coordinates of each OM hit (in meters from the center of the detector). 
This allows us to consider data of any shape or arrangement, without restricting ourselves to a Cartesian grid; thus our algorithm can be easily adapted from our IceCube-like test case to, _e.g._ IceCube, KM3NeT, P-ONE, _etc_. Our SSCNN replaces traditional convolutions with sparse submanifold convolutions. While a traditional convolution extracts features by mapping a learned kernel over all input data, a sparse submanifold convolution operates only on the non-zero elements. This circumvents the inefficiency of using CNNs on sparse data, wherein the vast majority of operations are wasted multiplying zeros together. Furthermore, to preserve the sparsity of the data after applying multiple layers in succession, sparse submanifold convolutions enforce that the coordinates and number of output activations match those of the input. In other words, the features do not spread layer after layer, as shown in Fig. 2. This compromise is necessary for the efficiency of very deep SSCNNs, since otherwise feature spreading would cause the data to become less and less sparse throughout the network. The lack of feature spreading will have a minimal impact on performance as long as the network can rely on local information. It should be noted that SSCNNs still compute over a grid-like structure, but this structure can be arbitrarily large because the network only operates on a submanifold of it. Figure 2: _Comparison of conventional and submanifold convolution with a Gaussian kernel._ The submanifold convolution maintains the sparsity of the input, while the traditional convolution blurs the input, making it less sparse. In this example, a traditional convolution would require 18 or 25 matrix multiplications for sparse and non-sparse convolution respectively, whereas the bottom image only requires three matrix multiplications. ### Input Format As input, the SSCNN takes in two tensors: a coordinate tensor \(C\) and a feature tensor \(F\). In symbols: \[C=\begin{bmatrix}x_{1}&y_{1}&z_{1}&t_{1}\\ \vdots&\vdots&\vdots&\vdots\\ x_{n}&y_{n}&z_{n}&t_{n}\end{bmatrix},F=\begin{bmatrix}h_{1}\\ \vdots\\ h_{n}\end{bmatrix}, \tag{1}\] where the coordinate tensor is an \(n\times 4\) tensor representing the space-time coordinates of the OMs in which there were a nonzero number of photon hits. The symbol \(n\) represents the total number of photon hits in the event, which is variable and can typically range from one to hundreds of thousands. The feature tensor contains the number of photon hits which occurred within a 1 ns time window on that OM, starting from the time indicated in the coordinate tensor. Nanosecond units were chosen for very fine timing resolution, as we aim to deploy the network on both low- and high-energy events. However, depending on the application, the 1 ns time window can be expanded to trade off timing resolution for even better run-time and memory efficiency. ## III Event Simulation Our benchmark case follows an ice-embedded IceCube-like geometry, where the OMs are spaced out approximately 125 meters horizontally and 17 meters vertically. The events used in this work are \(\mu^{-}\) from \(\nu_{\mu}\) charged-current interactions. The initial neutrino sampling, charged lepton propagation, and photon propagation were simulated using the Prometheus package [23]. The incident neutrinos have energies between \(10^{2}\) GeV and \(10^{6}\) GeV sampled from a power law with a spectral index of -1. 
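To make the input format of Eq. (1) concrete, the following is a minimal sketch of how the coordinate and feature tensors could be assembled from raw photon hits; the function and variable names are ours for illustration, not taken from the code release at Ref. [12].

```python
import torch

def build_sscnn_input(hit_positions, hit_times, time_bin_ns=1.0):
    """Assemble the (C, F) tensors of Eq. (1) from raw photon hits.

    hit_positions: (n_hits, 3) float tensor of OM positions, in meters
                   from the detector center.
    hit_times:     (n_hits,) float tensor of photon arrival times, in ns.
    """
    # Bin arrival times into 1 ns windows, as described above.
    t_binned = torch.floor(hit_times / time_bin_ns) * time_bin_ns
    coords = torch.cat([hit_positions, t_binned.unsqueeze(1)], dim=1)  # (n, 4)
    # Merge hits sharing the same OM and time bin; the feature is the count.
    uniq, inverse = torch.unique(coords, dim=0, return_inverse=True)
    counts = torch.zeros(len(uniq)).index_add_(0, inverse, torch.ones(len(coords)))
    return uniq, counts.unsqueeze(1)  # C: (n', 4), F: (n', 1)
```

In practice, these two tensors would then be wrapped into the sparse-tensor type of a library such as MinkowskiEngine [30], which expects exactly this coordinate-plus-feature pairing.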
Since most of the events that trigger neutrino telescopes are downward-going cosmic-ray muons, we generated a down-going dataset. Specifically, the initial momenta have zenith angles between \(80^{\circ}\) and \(180^{\circ}\). It is worth noting that this definition of zenith angle is different from the convention which is typically used by neutrino telescopes, which take \(0^{\circ}\) to be downgoing. Internally, LeptonInjector [24] samples the energy, direction, and interaction vertex. PROPOSAL [25] then propagates the outgoing muon, recording all energy losses that happen within 1 km of the instrumented volume. PPC [26] then generates the photon yield from the hadronic shower and each muon energy loss, and propagates these photons until they either reach an OM or are absorbed. If a photon reaches an OM, the module ID, module position, and time of arrival are recorded. We then add noise in the style of [27] to the resulting photon distributions. This model accounts for nuclear decays in the OMs' glass pressure housings and thermal emission of an electron from a PMT's photocathode. The latter process strongly depends on the ambient temperature near the OM and varies between PMTs. Since this information is not publicly available, we simplify the model and vary the thermal noise rate linearly from 40 Hz at the top of the detector to 20 Hz near the bottom, which approximately agrees with the findings from [27]. We then take the nuclear decay rate to be 250 Hz and generate a number of photons drawn from a Poisson distribution with a mean of 8 for each decay. Before moving on, it is important to note that the photons generated in the previous steps are only tracked to the surface of the OM. In a full simulation of the detection process, one would need to simulate the electronics inside the OM, which could introduce timing uncertainties. Furthermore, the digitized signal reported by, _e.g._, the IceCube OMs must be unfolded to get the number of photons per unit time. These steps require access to proprietary information that is not available externally. Thus, we cannot include the effects of these detailed detector processes in our simulation. For example, the process by which IceCube unfolds the photon arrival times is described in [28]. They find that this process introduces a timing uncertainty typically on the order of 1 ns, but one that may grow up to 10 ns under certain conditions. While this may affect our results, we expect the impact to be small since, by grouping the photon arrival times into ns-wide bins, we are already introducing a timing uncertainty of a similar scale. Once all photons have been added, we then implement a trigger criterion similar to the one described in [29]. This requires that a pair of neighboring or next-to-neighboring OMs see light in a 1 \(\mu\)s time window. If 8 such pairs are found in a 5 \(\mu\)s time window, we consider the trigger to be satisfied. As before, the exact details of the triggering process require access to proprietary information; however, the events which pass our trigger should be qualitatively similar to those which would trigger IceCube. After this cut, we are left with 462,892 events from 3 million simulated events, which we split into training and test data sets of 412,892 and 50,000 events, respectively. One can see distributions of the events which pass this trigger as a function of true energy, zenith, and azimuth in Fig. 3. 
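As an illustration, the trigger criterion just described can be sketched in a few lines of Python; this is a simplified stand-in for the (proprietary) detector logic, with names and data layout of our choosing.

```python
import numpy as np

def passes_trigger(om_ids, hit_times, neighbor_pairs,
                   pair_window_ns=1_000.0, event_window_ns=5_000.0, n_pairs=8):
    """Simplified majority trigger: 8 (next-to-)neighbor pairs within 5 us.

    om_ids:         (n,) array of OM indices, one entry per photon hit
    hit_times:      (n,) array of photon arrival times in ns
    neighbor_pairs: set of frozensets {om_a, om_b} of neighboring or
                    next-to-neighboring OMs
    """
    order = np.argsort(hit_times)
    om_ids, hit_times = om_ids[order], hit_times[order]
    pair_times = []
    for i in range(len(hit_times)):  # find coincident hits on neighboring OMs
        j = i + 1
        while j < len(hit_times) and hit_times[j] - hit_times[i] <= pair_window_ns:
            if frozenset((om_ids[i], om_ids[j])) in neighbor_pairs:
                pair_times.append(hit_times[i])
            j += 1
    pair_times = np.asarray(pair_times)
    for t0 in pair_times:  # slide the 5 us window over the coincidence times
        if np.sum((pair_times >= t0) & (pair_times <= t0 + event_window_ns)) >= n_pairs:
            return True
    return False
```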
In addition to the trigger-level dataset, we also evaluate the network performance on a dataset with further quality cuts, so that we can understand performance on events which are more likely to make it into a final analysis. In order to do this, we consider three quantities: \(N_{\text{OM}}\), \(r_{\text{COG}}\), and \(R_{\text{ell}}\). The first two quantities, the number of distinct OMs that saw light and the distance between the charge-weighted center of gravity and the center of the detector, are fairly straightforward, but the last requires more explanation. To compute \(R_{\text{ell}}\), we fit a two-parameter ellipsoid to all OMs which saw light, and then take the ratio of the long axis to the short axis. A ratio close to one indicates a spherical event, whereas larger ratios indicate longer, track-like events. We perform straight cuts on these variables, requiring that events have \(N_{\text{OM}}>11\), \(r_{\text{COG}}<400\) m, and \(2<R_{\text{ell}}<8\). The first cut removes low-charge events which are difficult to reconstruct, while the second removes "corner clipper" events caused by \(\mu^{-}\) passing near the edge of the detector. The final cut on \(R_{\rm ell}\) helps ensure that the events have a long lever-arm for reconstruction. These cuts reduce the training and testing dataset sizes to 108,585 and 13,183 events, respectively. The spatial sparsity of these improved-quality events is about 3%, as on average 154 OMs were hit out of the 5,160 total OMs in our example detector. For the trigger-level events, the spatial sparsity is about 2%. The time dimension adds another level of sparsity, as typical events can last tens of thousands of nanoseconds compared to the microsecond time window. ## IV Performance ### Training and Architecture Details We utilize a ResNet-based architecture, taking advantage of residual connections between layers to promote robust learning for deeper networks. More details on the network architecture can be found in Fig. 4. A typical block of the network consists of a sparse submanifold convolution, followed by batch normalization and the parametric rectified linear unit (PReLU) activation function. Downsampling is performed using a stride-2 sparse submanifold convolution. We use the PyTorch deep learning framework and the MinkowskiEngine [30] library to implement the network. The network was trained on each dataset (trigger and quality) for 25 epochs using a batch size of 128 and the Adam optimizer. The initial learning rate was set at 0.001 and was dropped periodically during training. For the purpose of this article, we train the network to infer the primary neutrino energy \(E_{\nu}\) and the three components of its directional pointing vector, (\(X_{\nu}\), \(Y_{\nu}\), \(Z_{\nu}\)). The directional vector is learned rather than the zenith and azimuth angles because of complications with azimuthal periodicity and undesirable boundary condition behavior at large or small angles. The network is trained to predict the logarithmic energy, \(\log_{10}(E_{\nu})\), and the normalized directional vectors, as they can vary over a wide range of magnitudes. To train the energy reconstruction task, the LogCosh loss function is used, since it is more robust to outliers than the standard MSE loss. The loss function is defined as follows, \[\mathcal{L}_{E}=\frac{1}{N}\sum_{i}^{N}\log{(\cosh{(x_{i}-y_{i})})}, \tag{2}\] where \(N\) is the number of events in the batch, \(x_{i}\) are the predictions, and \(y_{i}\) are the labels. For the angular reconstruction, an angular distance loss function is used, namely, \[\mathcal{L}_{A}=\frac{1}{N}\sum_{i}^{N}\arccos{\left(\frac{\vec{X_{i}}\cdot\vec{Y_{i}}}{||\vec{X_{i}}||\ ||\vec{Y_{i}}||}\right)}, \tag{3}\] where \(\vec{X_{i}}\) and \(\vec{Y_{i}}\) are the predicted and true directional vectors, respectively. The total loss is then given by \[\mathcal{L}_{tot}=\alpha_{E}\mathcal{L}_{E}+\alpha_{A}\mathcal{L}_{A}, \tag{4}\] where weighting factors \(\alpha_{E}\) and \(\alpha_{A}\) are applied to the separate loss terms to ensure balanced learning between the two different tasks. 
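A minimal PyTorch sketch of this two-task objective might look as follows; the weighting values \(\alpha_{E}\) and \(\alpha_{A}\) are not quoted in the text, so they appear here as placeholders.

```python
import torch

def total_loss(pred_logE, true_logE, pred_dir, true_dir, alpha_E=1.0, alpha_A=1.0):
    """Sketch of the combined objective of Eqs. (2)-(4)."""
    # Eq. (2): LogCosh loss on the logarithmic energy.
    loss_E = torch.log(torch.cosh(pred_logE - true_logE)).mean()
    # Eq. (3): angular distance between predicted and true direction vectors.
    cos = torch.nn.functional.cosine_similarity(pred_dir, true_dir, dim=1)
    loss_A = torch.arccos(cos.clamp(-1 + 1e-7, 1 - 1e-7)).mean()  # clamp for stability
    # Eq. (4): weighted sum; alpha_E and alpha_A are placeholder values.
    return alpha_E * loss_E + alpha_A * loss_A
```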
Figure 3: _Distributions of events in true energy and zenith. Left:_ The distribution of events used as a function of true neutrino energy. As expected, the generated distribution is flat when binned logarithmically since the generation was sampled according to a \(E_{\nu}^{-1}\) distribution. Furthermore, the fraction of generated events which produce light, and the fraction of light-producing events which pass the trigger threshold, increase with energy. _Right:_ The same distribution as a function of true neutrino zenith angle. Once again the generated distribution is flat in the cosine of this angle, which is proportional to the differential solid angle; the fraction of events passing the trigger is nearly flat in zenith, with slightly lower efficiency near the horizon. ### Run-time Performance We evaluate the run-time performance of the SSCNN in terms of the forward pass duration on both CPU and GPU hardware. The CPU benchmark is performed on a single core of an Intel Xeon Platinum 8358 CPU, while the GPU benchmark uses a 40 GB NVIDIA A100. As is generally the case for neural networks, running on GPU is preferred due to its superior parallel computation capabilities. Additionally, the use of sparse submanifold convolutions has greatly enhanced our GPU memory efficiency, enabling us to run larger batch sizes during inference. The SSCNN can process events at a rate of 11,098 Hz on a 40 GB NVIDIA A100 GPU, while handling a batch size of 12,288 events simultaneously. This is fast enough to handle the \(\sim\)kHz trigger rates expected in current and planned large neutrino telescopes. The run-time on a single-core CPU is slower and largely dependent on the number of photon hits in the event due to the limited parallel computation capabilities. However, the SSCNN run-time on a CPU core is comparable to that of the likelihood-based method and is more consistent, as indicated by the lower standard deviation of the run-time distribution. The run-time results on both GPU and CPU are summarized in Table 1. ### Reconstruction Performance We first test the network on reconstructing the direction of the primary neutrino. We measure performance using the angular resolution metric, which is calculated by taking the angular difference between the predicted and true directional vectors. Fig. 5 shows the angular resolutions as a function of the true neutrino energy. Lower-energy events generally produce fewer photon hits, leading to a shorter lever-arm and, consequently, worse resolution. As expected, the trigger-level events are generally harder to reconstruct due to the lower light yield and the presence of corner-clipper events. 
This is especially true for angular reconstruction; even so, the SSCNN is able to reach a median angular resolution of under 4\({}^{\circ}\) on the highest-energy events. Enforcing the previously described quality cuts improves the results of the SSCNN by roughly 2\({}^{\circ}\) across the entire energy range. This performance is comparable to or better than current trigger-level reconstruction methods used in neutrino telescopes. For example, the current trigger-level direction reconstruction at IceCube is done using the traditional LineFit algorithm [9], which has a median angular resolution of approximately \(10^{\circ}\) on raw data. Figure 4: _Network architecture overview._ The network accepts as input a 4D point cloud of photon OM hits, as shown by the colored points in the figure. The color indicates the timing (red is earlier, blue is later). Residual connections, denoted with \(\oplus\), are used in between convolutions. Downsampling (dashed red lines) is performed after a series of convolutions. The final layer of the network is a fully connected layer, which outputs the logarithmic \(\nu_{\mu}\) energy and the three components of the normalized \(\nu_{\mu}\) direction vector. Figure 5: _Angular reconstruction performance as a function of the true neutrino energy._ The angular resolution results are binned by the true neutrino energy, with the median taken from each bin to form the lines shown. Table 1: _Per-event average run-time performance._ The forward pass run-times (mean \(\pm\) STD) for SSCNN were evaluated on trigger-level events. A likelihood-based method for energy and angular reconstruction is included for reference [7; 31].

| Method | Run-time |
| --- | --- |
| **SSCNN (GPU)** | **0.090 \(\pm\) 0.007 ms** |
| **SSCNN (CPU)** | **65.22 \(\pm\) 117.04 ms** |
| Max Likelihood (CPU) | 42.6 \(\pm\) 175 ms |

We also test the networks by reconstructing the energy of the primary neutrino. Fig. 6 summarizes the energy reconstruction results. Events where the interaction point of the neutrino occurs outside the detector, known as through-going events, make up the majority of our dataset. As a result, predicting the neutrino energy has an inherent, irreducible uncertainty produced by the unknown interaction vertex and the muon losses outside of the detector. This missing-information problem leads to an intrinsic uncertainty in the logarithmic neutrino energy of approximately 0.4 for a through-going event. Additionally, the network performs noticeably worse at the lowest and highest energies, with a tendency to over-predict at low energies and under-predict at high energies. This behavior can be attributed to the artificial energy bounds on the simulated training dataset. ## V Conclusions In this article, we have demonstrated the application of an SSCNN for event reconstruction in neutrino telescopes. We have shown that these networks are capable of maintaining competitive performance on the tasks of energy and angular reconstruction while running on the \(\mu\)s time scale. This speedup enables the SSCNN to process events at a rate well above the current trigger rates of neutrino telescopes, which is expected to remain true for other detectors currently operating or under construction, such as IceCube, KM3NeT, P-ONE, and Baikal-GVD. 
Reaching this threshold makes the SSCNN a feasible option for online reconstruction at the detector site, where resources are limited and where first guesses of the energy and direction of the neutrino are made. As discussed in the introduction, this can have a substantial impact on current real-time analyses, where our first estimations can also be utilized in an alert-sending system, which will notify collaborators if the detector sees an interesting event. Additionally, these reconstructions can serve as seeds for more time-consuming reconstructions, and thus improving these first estimations will be beneficial to all subsequent analyses. ###### Acknowledgements. We thank Tong Zhu, Alfonso Garcia Soto, and Miaochen Jin for useful discussions. Additionally, we would like to thank David Kim for help with LineFit and the rest of the Prometheus authors. CAA, JL, FJY are supported by the Faculty of Arts and Sciences of Harvard University. Additionally, FJY is supported by the Harvard Physics Department Purcell Fellowship.
2306.09478
Understanding and Mitigating Extrapolation Failures in Physics-Informed Neural Networks
Physics-informed Neural Networks (PINNs) have recently gained popularity due to their effective approximation of partial differential equations (PDEs) using deep neural networks (DNNs). However, their out of domain behavior is not well understood, with previous work speculating that the presence of high frequency components in the solution function might be to blame for poor extrapolation performance. In this paper, we study the extrapolation behavior of PINNs on a representative set of PDEs of different types, including high-dimensional PDEs. We find that failure to extrapolate is not caused by high frequencies in the solution function, but rather by shifts in the support of the Fourier spectrum over time. We term these spectral shifts and quantify them by introducing a Weighted Wasserstein-Fourier distance (WWF). We show that the WWF can be used to predict PINN extrapolation performance, and that in the absence of significant spectral shifts, PINN predictions stay close to the true solution even in extrapolation. Finally, we propose a transfer learning-based strategy to mitigate the effects of larger spectral shifts, which decreases extrapolation errors by up to 82%.
Lukas Fesser, Luca D'Amico-Wong, Richard Qiu
2023-06-15T20:08:42Z
http://arxiv.org/abs/2306.09478v2
# Understanding and Mitigating Extrapolation Failures in Physics-Informed Neural Networks ###### Abstract Physics-informed Neural Networks (PINNs) have recently gained popularity in the scientific community due to their effective approximation of partial differential equations (PDEs) using deep neural networks. However, their application has been generally limited to interpolation scenarios, where predictions rely on inputs within the support of the training set. In real-world applications, extrapolation is often required, but the out of domain behavior of PINNs is understudied. In this paper, we investigate PINNs' extrapolation behavior in detail and provide evidence against several previously held assumptions: we study the effects of different model choices on extrapolation and find that once the model can achieve zero interpolation error, further increases in architecture size or in the number of points sampled have no effect on extrapolation behavior. We also show that for some PDEs, PINNs perform nearly as well in extrapolation as in interpolation. By analyzing the Fourier spectra of the solution functions, we characterize the PDEs that yield favorable extrapolation behavior, and show that the presence of high frequencies in the solution function is not to blame for poor extrapolation behavior. Finally, we propose a transfer learning-based strategy based on our Fourier results, which decreases extrapolation errors in PINNs by up to \(82\%\). ## 1 Introduction Understanding the dynamics of complex physical processes is crucial in many applications in science and engineering. Oftentimes, these dynamics are modeled as partial differential equations (PDEs) that depend on time. In the PDE setting, we want to find a solution function \(u(x,t)\) that satisfies a given governing equation of the form \[f(x,t):=u_{t}+\mathcal{N}(u)=0,\quad x\in\Omega,\;t\in[0,T] \tag{1}\] where \(u_{t}:=\frac{\partial u}{\partial t}\) denotes the partial derivative of \(u\) with respect to time, \(\mathcal{N}\) is a, generally nonlinear, differential operator, \(\Omega\subset\mathbb{R}^{d}\), with \(d\in\{1,2,3\}\), is a spatial domain, and \(T\) is the final time for which we are interested in the solution. Moreover, we impose an initial condition \(u(x,0)=u^{0}(x)\), \(\forall x\in\Omega\) on \(u(x,t)\), as well as a set of boundary conditions. Together, these conditions specify the behavior of the solution on the boundaries of the spatio-temporal domain. Generally, solving these problems is particularly difficult when the differential operator \(\mathcal{N}\) is highly nonlinear. With recent progress in deep learning, many data-centric approaches based on the universal approximation theorem have been proposed. Among these approaches, physics-informed neural networks (PINNs) as introduced in [16] have caught the community's attention because of their simple but effective way of approximating time-dependent PDEs with relatively simple deep neural networks. PINNs preserve important physical properties described by the governing equations by parameterizing the solution and the governing equation simultaneously with a set of shared network parameters. We will go into more detail on this in the next section. After the great success of the seminal paper [16], many follow-up works have applied PINNs to various PDE applications, e.g. [1, 19, 20, 5]. However, most previous studies using the vanilla PINNs introduced in [16] have demonstrated the performance of their methods in interpolation only. 
By interpolation, we mean a set of testing points sampled within the same temporal range on which the network has been trained. We refer to points sampled beyond the final time of the training domain as extrapolation. In principle, standard PINNs are expected to be able to learn the dynamics in Eq. (1) and, consequently, to approximate \(u(x,t)\) accurately in extrapolation. However, previous work in [7] and [17] has shown that this is not the case: PINNs can deviate significantly from the true solution once they are evaluated in an extrapolation setting, calling into question their capability as a tool for learning the dynamics of physical processes. The question of whether and when PINNs can extrapolate beyond their training domain is therefore relevant from a foundational standpoint, but it also has immediate consequences for applications. As discussed in [2] and [22], PINNs cannot always be retrained from scratch when faced with a point that is outside their initial training domain, so anticipating whether their predictions remain accurate beyond the training domain is crucial. Several PINN-related methods for improving extrapolation behavior, also called long-time integration, have been proposed. However, these methods either use specialized architectures instead of the standard MLP architecture [11, 21], or recast the problem by sequentially constructing the solution function \(u(x,t)\) [17]. To the best of our knowledge, only [7] propose a method for the vanilla PINN setting introduced in [16]. In addition to this, even a basic characterization of extrapolation behavior for PINNs trained to solve time-dependent PDEs is still absent from the literature, with previous work considering vanilla PINNs incapable of extrapolating beyond the training domain and suspecting the presence of high frequencies in the solution function to be the cause of the problem [17]. In this paper, our contributions are therefore as follows. * We show that PINNs are capable of almost perfect extrapolation behavior for certain time-dependent PDEs. However, we also provide evidence that even when the learned solution stays close to the true solution in extrapolation, the \(L^{2}\)-error increases exponentially in time, independent of model size, choice of activation function, training time, and number of samples. * We characterize PDEs for which PINNs can extrapolate well by analyzing the Fourier spectra of the solution functions. We show that unlike with training failures in interpolation, the presence of high frequencies alone is not to blame for the poor extrapolation behavior of PINNs on some PDEs. Rather, standard PINNs seem to generally fail to anticipate shifts in the support of the Fourier spectrum over time. The PDEs for which extrapolation works well exhibit a constant support of the solution function's Fourier spectra over time. * We show that transfer learning from a wider family of PDEs can reduce extrapolation error, even when the initial training regime does not contain the extrapolation domain. This suggests that placing stronger inductive biases, through transfer learning, may improve extrapolation behavior in vanilla PINNs. The rest of this paper is structured as follows: in section 2, we formally introduce physics-informed neural networks and define what we mean by interpolation and extrapolation. We also briefly discuss the DPM method proposed by [7] and its performance on several benchmark PDEs. 
In section 3, we study PINNs' extrapolation behavior for various PDEs, and show that good extrapolation behavior is attainable, even with standard PINNs. In section 4, we go a step further and characterize the PDEs for which good extrapolation accuracy is possible using the Fourier spectra of their solution functions. In section 5, we investigate the viability of transfer learning approaches in improving extrapolation and demonstrate experimental results. Finally, section 6 discusses our results in the context of the existing literature and section 7 concludes. ## 2 Background and definitions ### Physics-Informed Neural Networks As mentioned in the previous section, PINNs parameterize both the solution \(u\) and the governing equation \(f\). Denote the neural network approximating the solution \(u(x,t)\) by \(\tilde{u}(x,t;\theta)\) and let \(\theta\) be the network's weights. Then the governing equation \(f\) is approximated by a neural network \(\tilde{f}(x,t,\tilde{u};\theta):=\tilde{u}_{t}+\mathcal{N}(\tilde{u}(x,t;\theta))\). The partial derivatives here can be obtained via automatic differentiation. We note that \(\tilde{f}(x,t,\tilde{u};\theta)\) shares its network weights with \(\tilde{u}(x,t;\theta)\). The name "physics-informed" neural network comes from the fact that the physical laws we are interested in are enforced by applying an extra, problem-specific, nonlinear activation, which is defined by the PDE in Eq. (1) (i.e., \(\tilde{u}_{t}+\mathcal{N}(\tilde{u})\)). We learn the shared network weights using a loss function consisting of two terms, which are associated with approximation errors in \(\tilde{u}\) and \(\tilde{f}\), respectively. The original paper [16] considers a loss function consisting of two error terms, i.e. \(L:=\alpha L_{u}+\beta L_{f}\). Here, \(\alpha,\beta\in\mathbb{R}\) are coefficients and \(L_{u}\) and \(L_{f}\) are defined as follows: \[L_{u} =\frac{1}{N_{u}}\sum_{i=1}^{N_{u}}\left|u(x_{u}^{i},t_{u}^{i})-\tilde{u}(x_{u}^{i},t_{u}^{i};\theta)\right|^{2} \tag{2}\] \[L_{f} =\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}\left|\tilde{f}(x_{f}^{i},t_{f}^{i},\tilde{u};\theta)\right|^{2} \tag{3}\] \(L_{u}\) enforces the initial and boundary conditions using a set of training data \(\left\{(x_{u}^{i},t_{u}^{i}),u(x_{u}^{i},t_{u}^{i})\right\}_{i=1}^{N_{u}}\). The first element of the tuple is the input to the neural network \(\tilde{u}\) and the second element is the ground truth that the output of \(\tilde{u}\) attempts to match. We can collect this data from the specified initial and boundary conditions since we know them a priori. Meanwhile, \(L_{f}\) minimizes the discrepancy between the governing equation \(f\) and the neural network's approximation \(\tilde{f}\). We evaluate the network at collocation points \(\left\{(x_{f}^{i},t_{f}^{i}),f(x_{f}^{i},t_{f}^{i})\right\}_{i=1}^{N_{f}}\). Note that here, the ground truth \(\left\{f(x_{f}^{i},t_{f}^{i})\right\}_{i=1}^{N_{f}}\) consists of all zeros. We also refer to \(\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}\left|\tilde{f}(x_{f}^{i},t_{f}^{i},\tilde{u};\theta)\right|\) as the mean absolute residual (MAR): its value denotes how far the network is from satisfying the governing equation. Note that using this loss, i) no costly evaluations of the solutions \(u(x,t)\) at collocation points are required to gather training data, ii) initial and boundary conditions are enforced using a training dataset that can easily be generated, and iii) the physical law encoded in the governing equation \(f\) in Eq. (1) is enforced by minimizing \(L_{f}\). 
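As a concrete illustration, a minimal PyTorch sketch of \(L_{u}\) and \(L_{f}\) might look as follows. Burgers' equation, one of the benchmarks used later, serves here as the specific choice of \(\mathcal{N}(u)\); all names are illustrative and not taken from any released code.

```python
import math
import torch

def pinn_loss(model, xt_data, u_data, xt_coll, nu=0.01 / math.pi):
    """L = L_u + L_f (alpha = beta = 1) for u_t + u u_x - nu u_xx = 0.

    model:   network mapping (x, t) -> u_tilde, input shape (N, 2)
    xt_data: (N_u, 2) initial/boundary points with targets u_data of shape (N_u, 1)
    xt_coll: (N_f, 2) collocation points
    """
    # Eq. (2): data misfit on the initial and boundary conditions.
    loss_u = torch.mean((model(xt_data) - u_data) ** 2)
    # Eq. (3): PDE residual at collocation points via automatic differentiation.
    xt = xt_coll.clone().requires_grad_(True)
    u = model(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    loss_f = torch.mean((u_t + u * u_x - nu * u_xx) ** 2)
    return loss_u + loss_f
```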
In the original paper by [16], both loss terms have equal weight, i.e. \(\alpha=\beta=1\), and the combined loss term \(L\) is minimized. ### Interpolation and extrapolation For the rest of this paper, as mentioned briefly in section 1, we refer to points \((x^{i},t^{i})\) as _interpolation points_ if \(t^{i}\in[0,T_{train}]\), and as _extrapolation points_ if \(t^{i}\in(T_{train},T]\) for \(T>T_{train}\). We are primarily interested in the \(L^{2}\)_error_ of the learned solution, i.e. in \(\|u(x^{i},t^{i})-\tilde{u}(x^{i},t^{i};\theta)\|_{2}\), and in the \(L^{2}\)_relative error_, which is the \(L^{2}\) error divided by the norm of the function value at that point, i.e. \(\|u(x^{i},t^{i})\|_{2}\). When we sample evaluation points from the extrapolation domain, we refer to the \(L^{2}\) (relative) error as the _(relative) extrapolation error_. Similarly, we are interested in the (mean) absolute residual as defined above, i.e. in \(\left|\tilde{f}(x^{i},t^{i},\tilde{u};\theta)\right|\). For points sampled from the extrapolation domain, we refer to this as the _extrapolation residual_. ### Prior work on extrapolation failures in vanilla PINNs The dynamic pulling method (DPM) proposed by [7] is motivated by the observation that PINNs typically perform poorly in extrapolation. The authors attribute this poor generalization to the noisy domain loss curves observed during training, and DPM modifies the gradient update as a way to smooth the domain loss curves and thereby improve extrapolation performance. Their method introduces several hyperparameters, and they find that, following a hyperparameter search, their method has better generalization performance than vanilla PINNs. We briefly investigate the performance of their method and outline our experiments in the appendix. ## 3 Characterization of extrapolation behavior in PINNs In this section, we compare the extrapolation errors and residuals which standard PINNs display for the Allen-Cahn equation, the viscous Burgers' equation, a diffusion equation, and a diffusion-reaction equation, before varying the model's parameters in the second half of the section. More details on the PDEs under consideration can be found in the appendix. ### Extrapolation behavior for different PDEs For each of the four PDEs introduced above, we train a 4-layer MLP with 50 neurons per layer and \(tanh\) activation on the interpolation domains specified for 50,000 epochs using the Adam optimizer. Figures 1 **(a)** and **(b)** show the \(L^{2}\) relative error and the mean absolute residual for \(t\in[0,1]\). We note that for all four PDEs, we achieve zero interpolation error and residual. We observe that the \(L^{2}\) relative errors for the Burgers' equation and for the Allen-Cahn equation become significantly larger than for the diffusion and diffusion-reaction equations when we move from \(t=0.5\) to \(t=1\). As can be seen in Figure 1 **(c)**, the solution learned for the diffusion-reaction equation disagrees only minimally with the true solution, even at \(t=1\), which shows that for this particular PDE, PINNs can extrapolate almost perfectly well. However, we also find that irrespective of the PDE we consider, the \(L^{2}\) relative error and the MAR increase exponentially in \(t\) as soon as we leave the interpolation domain. Figure 1: **(a)** \(L^{2}\) relative extrapolation error of MLP(4, 50) with \(tanh\) activation, trained on \([0,0.5]\). 
**(b)** MAR for the same MLP, and **(c)** the solution for the diffusion-reaction equation at \(t=1\) and the function learned by the corresponding MLP. Figure 2: Mean \(L^{2}\) relative errors over the interpolation (extrapolation) domain of MLP(4, 50) with \(tanh\) activation in **(a)** and with \(\sin\) activation in **(b)** with increasing number of training epochs. **(c)** plots the relative error against the number of samples, in the order (domain, boundary condition, initial condition). ### Varying model parameters While we observe drastically different extrapolation behaviors depending on the underlying PDE as mentioned above, the extrapolation for a given PDE seems to be more or less independent of model choices, such as number of layers or neurons per layer, activation function, number of samples, or training time. Once the chosen parameters allow the model to achieve zero error in the interpolation domain, adding more layers, neurons, or samples, or alternatively training longer does not seem to have an effect on the extrapolation error and MAR. Figure 2 illustrates this behavior for the Burgers' equation by plotting the interpolation and extrapolation errors when training for an increasing number of epochs: **(a)** and **(b)** show this for a \(tanh\) and a \(\sin\) activation function, respectively. Figure 2 **(c)** shows the effect of increasing the number of samples from the interpolation domain on the extrapolation error. Depicted is the relative \(L^{2}\) error of a 4-layer MLP with 50 neurons each and \(tanh\) activation, trained using the same hyperparameters as in the previous section. ## 4 Fourier-based analysis of extrapolation behavior Motivated by the results of section 3, one might ask which structural elements of a given PDE determine extrapolation performance. That is, given a defined PDE, can we predict how well PINNs will extrapolate on the equation beyond their training domain? Recent literature has found that neural networks tend to be biased towards low-complexity solutions due to implicit regularization inherent in their gradient descent learning processes [13; 12]. In particular, deep neural networks have been found to possess an inductive bias towards learning lower frequency functions, a phenomenon termed the _spectral bias_ of neural networks [15; 3]. To better understand how these phenomena relate to the extrapolation behavior of PINNs, we examine the true and predicted solutions of our four PDEs in the Fourier domain. While we find little evidence to support the claim that the presence of high frequency components hurts extrapolation, we find that the dynamics of the Fourier spectra for a given PDE tend to be strongly predictive of extrapolation performance. Specifically, we decompose the change in the true solution's Fourier spectra over time into two types: changes in the overall amplitude of the components and changes in how these amplitudes are distributed over the support of the spectra. While PINNs are able to handle changes in amplitude quite well, we find that extrapolation is especially poor for those PDEs whose spectra shift over time. ### Examining solutions in the Fourier domain Towards understanding the interaction of extrapolation behavior with the Fourier spectrum\({}^{1}\) of the solution to a given PDE, we plot both the reference solution and the predicted solution in the Fourier domain for all four of our PDEs. We additionally plot the absolute difference between the two Fourier spectra of the reference and predicted solution. 
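For reference, the per-time-slice spectra used in this analysis can be obtained with a standard FFT; a minimal sketch (the arrays below are stand-ins, not the paper's actual solutions) is:

```python
import numpy as np

def fourier_spectrum(u_slice):
    """Amplitude spectrum of a solution slice u(., t) on a uniform spatial grid."""
    return np.abs(np.fft.rfft(u_slice)) / len(u_slice)

# Compare reference and predicted solutions at a fixed time t.
x = np.linspace(-1, 1, 256, endpoint=False)
u_ref = np.sin(np.pi * x)           # stand-in for the reference solution at t
u_pred = 0.9 * np.sin(np.pi * x)    # stand-in for the PINN prediction at t
spectral_error = np.abs(fourier_spectrum(u_ref) - fourier_spectrum(u_pred))
```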
Plots for the Burgers' equation are provided in Figure 3 while plots for the other three PDEs are provided in Appendix A.3. Footnote 1: For a fixed time \(t\), the Fourier spectrum is computed using a discrete Fourier transform on \(256\) equally spaced points (\(201\) for Allen-Cahn) in the spatial domain. In all cases, the majority of the error in the Fourier domain is concentrated in the lower-frequency regions. While this is partially due to the fact that the low frequency components of the solutions have larger magnitude, it suggests that in extrapolation, PINNs fail even to learn the low frequency parts of the solution. Thus, the presence of high frequencies alone fails to explain the extrapolation failure of PINNs. We provide some additional evidence for this by studying the extrapolation behavior of Multi-scale Fourier feature networks [18] in Appendix A.6. Even though these architectures were designed specifically to make learning higher frequencies easier, we find their extrapolation error to be at least as large or larger than that of standard PINNs. There are still notable differences between the spectra of the two "good" PDEs and the two "bad" PDEs, however. Namely, both the diffusion and diffusion-reaction equations have Fourier spectra whose support does not shift over time; the amplitudes of each frequency decrease with time, but the distribution of these amplitudes is constant. In contrast, for both the Burgers' and Allen-Cahn equations, we see decreases in amplitudes accompanied by shifts in the spectra, suggesting that PINNs may struggle to extrapolate well when the true solution's Fourier spectrum shifts over time. ### Decomposing the changes in Fourier spectra Given the observed differences in the temporal dynamics of the Fourier spectra, we aim to isolate changes in the overall amplitudes from shifts in the Fourier spectra to better understand how each change affects extrapolation behavior. #### 4.2.1 Changes in amplitude To isolate the effect of amplitude changes, we train a PINN on the following PDE. \[\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+e^{-t}\left(\sum_{j=1}^{K}\frac{(j^{2}-1)}{j}\sin(jx)\right) \tag{4}\] \[u(x,0)=\sum_{j=1}^{K}\frac{\sin(jx)}{j}\qquad u(-\pi,t)=u(\pi,t)=0 \tag{5}\] for \(x\in[-\pi,\pi]\) and \(t\in[0,1]\). The reference solution is given by \(u(x,t)=e^{-t}\left(\sum_{j=1}^{K}\frac{\sin(jx)}{j}\right)\). As with our other experiments, we use \(t\in[0,0.5]\) as the temporal training domain and consider \(t\in(0.5,1]\) as the extrapolation area. \(K\) here is a hyperparameter that controls the size of the spectrum of the solution. Note that for a fixed \(K\), the support of the Fourier spectrum of the reference solution never changes over time, with only the amplitudes of each component scaled down by an identical constant factor. For various values of \(K\), we find that our trained PINNs are able to extrapolate well, provided that we use larger architectures for more extreme values of \(K\), as can be seen in Figure 4 **(a)**. Together with our observations of the diffusion and diffusion-reaction equations, this provides support for the claim that extrapolation failures seem to be primarily governed by shifts in the underlying spectra 
rather than changes in the amplitude of the respective components. Figure 3: For times \(t=0.25\) (top, interpolation) and \(t=0.99\) (bottom, extrapolation), we plot the reference and predicted solutions in the spatio-temporal (left) and Fourier (middle) domains for the Burgers' equation. The absolute difference in the Fourier spectra is plotted on the right. Additionally, these observations continue to hold even when we increase the complexity of the reference solution by increasing the support of the Fourier spectra. For the sake of completeness, we also investigate the effect of the speed of decay of the amplitudes in the Fourier spectra. We train a PINN on the following variation of the diffusion-reaction equation. \[\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+e^{-Mt}\left(\sum_{j\in\{1,2,3,4,8\}}\frac{(j^{2}-M)}{j}\sin(jx)\right) \tag{6}\] for \(x\in[-\pi,\pi]\) and \(t\in[0,1]\) with the initial condition \(u(x,0)=\sin(x)+\frac{\sin(2x)}{2}+\frac{\sin(3x)}{3}+\frac{\sin(4x)}{4}+\frac{\sin(8x)}{8}\) and the Dirichlet boundary condition \(u(-\pi,t)=u(\pi,t)=0\). The reference solution is \[u(x,t)=e^{-Mt}\left(\sin(x)+\frac{\sin(2x)}{2}+\frac{\sin(3x)}{3}+\frac{\sin(4x)}{4}+\frac{\sin(8x)}{8}\right) \tag{7}\] with the same interpolation and extrapolation areas as before. Figure 4 **(b)** shows the relative interpolation and extrapolation errors against increasing values of \(M\). We find that an increase in the speed of the decay seems to increase the extrapolation error more than an increase in the size of the spectrum. However, since the solutions to the Allen-Cahn equation and to the Burgers' equation do not exhibit exponentially fast changes in their amplitudes, we now focus on shifts in the support of the spectrum instead. Figure 4: Mean \(L^{2}\) relative interpolation and extrapolation errors of MLP(6, 50) with tanh activation, trained on [0, 0.5]. In **(a)**, we plot this against the size of the spectrum, i.e. the parameter \(K\) in Equation (4), and in **(b)** we plot this against the speed of the decay of the amplitudes, i.e. the parameter \(M\) in Equation (6). #### 4.2.2 Changes in the support To quantify the temporal shifts in the support of the Fourier spectrum, we consider the _Wasserstein distance_ between the normalized Fourier spectra at different points in time. Consider two discrete CDFs \(F_{1},F_{2}\) supported on the domain \(\mathcal{X}\). The Wasserstein distance between \(F_{1}\) and \(F_{2}\) is defined as \(W(F_{1},F_{2})=\sum_{x\in\mathcal{X}}|F_{1}(x)-F_{2}(x)|\). Given two discrete Fourier spectra \(f_{1},f_{2}\), the _Wasserstein-Fourier distance_ [4] is defined as \(W\left(\frac{f_{1}}{\|f_{1}\|_{1}},\frac{f_{2}}{\|f_{2}\|_{1}}\right)\). The Wasserstein distance is closely related to optimal transport theory and represents the minimum amount of probability mass needed to convert one distribution into another. We present plots of the pairwise Wasserstein-Fourier distances for each \(t_{1},t_{2}\in[0,1]\) for each of our four PDEs in Appendix A.4. The Wasserstein-Fourier distance of the true solution is zero everywhere for both the diffusion and diffusion-reaction equations, reflecting the constant support of the spectra. In contrast, the pairwise distance matrices for the Burgers' and Allen-Cahn equations exhibit a block-like structure, with times in disjoint blocks exhibiting pronouncedly different distributions in the amplitudes of their respective Fourier spectra. These shifts are not captured by the learned solution, leading to large \(L^{2}\) errors. 
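The Wasserstein-Fourier distance just defined reduces to a few lines of NumPy; the following sketch mirrors the formula above, with names of our choosing.

```python
import numpy as np

def wasserstein_fourier(f1, f2):
    """Wasserstein-Fourier distance between two discrete amplitude spectra."""
    p1 = f1 / np.sum(f1)                    # normalize spectra to unit mass
    p2 = f2 / np.sum(f2)
    F1, F2 = np.cumsum(p1), np.cumsum(p2)   # discrete CDFs
    return np.sum(np.abs(F1 - F2))          # W(F1, F2) as defined above
```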
## 5 Transfer learning and extrapolation Finally, we examine whether transfer learning from PINNs trained across a family of similar PDEs can improve generalization performance. Empirically, in other domains, transfer learning across multiple tasks has been effective in improving generalization [6; 10]. Roughly speaking, for PINNs, one might expect transfer learning to cement stronger inductive biases towards learning the true PDE kernel than training from scratch, which will in turn improve generalization performance. We perform transfer learning following the procedure outlined in [14], where we initially train a PINN with multiple outputs on a sample from a family of PDEs (e.g. the Burgers' equation with varying values of the viscosity) and transfer to a new unseen PDE in the same family (e.g. the Burgers' equation with a different viscosity) by freezing all but the last layer and training with the loss this new PDE induces. We note that [14] only consider transfer learning for linear PDEs by analytically computing the final PINN layer, but we extend their method to nonlinear PDEs by performing gradient descent to learn the final layer instead. We perform transfer learning from a collection of Burgers' equations with varying viscosities (\(\nu/\pi=\{0.01,0.05,0.1\}\)) to a new Burgers' equation (\(\nu/\pi=0.075\)). In the first set of experiments, we train on equations in the domain \(t\in[0,0.5]\), and in the second set, we train on equations in the domain \(t\in[0,1]\). For all experiments, we use fully connected PINNs with 5 hidden layers, each 100 hidden units wide, Xavier normal initialization, and \(\tanh\) activation. We train with Adam [8] with default parameters. We use equal weighting of the boundary and domain loss terms. The extrapolation loss reported is sampled uniformly from the extrapolation domain (explicitly \((x,t)\in[-1,1]\times[0.5,1]\)). We show results from our transfer learning experiments in Figure 5. For each experiment, we perform multiple runs, changing only the random seed. Motivated by [7], we primarily examine the interpolation and extrapolation loss of each run, as well as its decomposition into domain and boundary terms. We observe first that for both baseline and transfer learning experiments, the extrapolation loss is dominated by the domain loss; the boundary loss is subdominant. As one might expect, transfer learning from PDEs on the whole domain (\(t\in[0,1]\)) substantially improves results compared to baseline. However, we find that transfer learning even when the model does not see the extrapolation domain during initial training (e.g. \(t\in[0,0.5]\)) also improves performance over baseline, though less than transfer learning from the full domain. We find the reverse in interpolation. Namely, our baseline model has the lowest interpolation error, followed by half-domain transfer learning, and then full-domain transfer learning, which performs the worst in interpolation. For a subset of experimental runs, we also examined the \(L^{2}\) error from numerical solutions in extrapolation. Compared to the baseline (no transfer learning), we found an average reduction in extrapolation error of \(82\%\) when transfer learning from the full domain, and of \(51\%\) when transfer learning from half the domain, i.e. with \(t\in[0,0.5]\). ## 6 Discussion In section 3, we saw that independent of the model's parameters, such as model size, choice of activation functions, training time, and number of samples, the \(L^{2}\) relative error increases exponentially in time once we enter the extrapolation domain. 
This seems to provide at least some evidence against a double-descent phenomenon for the extrapolation error, which [22] speculated might exist. We also saw, using the diffusion and diffusion-reaction equations as examples, that standard PINNs are able to achieve excellent extrapolation behavior. This provides evidence against the assumption that PINNs may be incapable of yielding accurate solutions for extrapolation problems, made for example in [17]. Motivated by recent literature on the spectral bias of neural networks [3; 15; 18], we examined the reference and true solutions to our four PDEs in the Fourier domain in section 4. We found little evidence to support the claim that the presence of high frequencies hinders extrapolation, as was speculated in [17]. Instead, we found that extrapolation performance is far better for those PDEs whose Fourier spectra exhibit unchanging support (diffusion, diffusion-reaction) than those PDEs whose spectra shift over time (Burgers', Allen-Cahn). Additionally, we looked at the Wasserstein-Fourier distance as a metric for quantifying the temporal shift in Fourier spectra and linked it to extrapolation. Finally, in section 5, we investigated whether transfer learning from several PDEs in a larger family of PDEs to a specific PDE within the same family can improve generalization performance. We found that compared to training from scratch, extrapolation performance is indeed improved with transfer learning, at the cost of interpolation performance. As one might expect, extrapolation performance is best when including the extrapolation domain in pre-training, though it is still clearly improved when the extrapolation domain is excluded. This may suggest that transfer learning enforces stronger inductive biases from the wider PDE family which in turn improve extrapolation performance. Our results are limited in that we only examined four example PDEs, which future research might extend to other PDEs. We also leave it as future work to investigate the effects of architectures other than MLPs [16], such as Recurrent Neural Networks [11] and ResNets [7], and of other training paradigms, for example domain decomposition [9]. Figure 5: Domain, boundary, and combined mean squared interpolation error (top) and extrapolation error (bottom) between our baseline (PINNs trained from scratch) and transfer learning experiments. The only variation between data points is the random seed. There are 50, 11, and 10 runs of the baseline, transfer with \(t\in[0,1]\), and transfer with \(t\in[0,0.5]\), respectively. Note the vertical scales differ between the interpolation and extrapolation domains. ## 7 Conclusion In summary, we have examined PINNs' extrapolation behavior with a wide range of tools and pushed back against assumptions previously held in the literature. We first found that for some PDEs, PINNs are capable of near perfect extrapolation. However, we also found that the extrapolation error for a given PINN increases exponentially in time, and that this increase is robust to choice of architecture and other hyperparameters of the model. Following this, we examined the solution space learned by PINNs in the Fourier domain, finding that extrapolation performance is improved for PDEs with stable Fourier spectra, and that the presence of high frequencies in the solution function has minimal effect on extrapolation for large enough models. 
Finally, to shed light on the role of inductive bias in extrapolation, we show that transfer learning can improve extrapolation performance. We believe that our results are of immediate interest for PINN application areas, where standard PINNs are still the most common approach by far, and hope that our work will motivate further investigations into the generalization performance of PINNs.
2302.08291
On-chip fully reconfigurable Artificial Neural Network in 16 nm FinFET for Positron Emission Tomography
Smarty is a fully-reconfigurable on-chip feed-forward artificial neural network (ANN) with ten integrated time-to-digital converters (TDCs) designed in a 16 nm FinFET CMOS technology node. The integration of TDCs together with an ANN aims to reduce system complexity and minimize data throughput requirements in positron emission tomography (PET) applications. The TDCs have an average LSB of 53.5 ps. The ANN is fully reconfigurable, the user being able to change its topology as desired within a set of constraints. The chip can execute 363 MOPS with a maximum power consumption of 1.9 mW, for an efficiency of 190 GOPS/W. The system performance was tested in a coincidence measurement setup interfacing Smarty with two groups of five 4 mm x 4 mm analog silicon photomultipliers (A-SiPMs) used as inputs for the TDCs. The ANN successfully distinguished between six different positions of a radioactive source placed between the two photodetector arrays by solely using the TDC timestamps.
Andrada Muntean, Yonatan Shoshan, Slava Yuzhaninov, Emanuele Ripiccini, Claudio Bruschini, Alexander Fish, Edoardo Charbon
2023-02-16T13:40:53Z
http://arxiv.org/abs/2302.08291v1
On-chip fully reconfigurable Artificial Neural Network in 16 nm FinFET for Positron Emission Tomography ###### Abstract Smarty is a fully-reconfigurable on-chip feed-forward artificial neural network (ANN) with ten integrated time-to-digital converters (TDCs) designed in a 16 nm FinFET CMOS technology node. The integration of TDCs together with an ANN aims to reduce system complexity and minimize data throughput requirements in positron emission tomography (PET) applications. The TDCs have an average LSB of 53.5 ps. The ANN is fully reconfigurable, the user being able to change its topology as desired within a set of constraints. The chip can execute 363 MOPS with a maximum power consumption of 1.9 mW, for an efficiency of 190 GOPS/W. The system performance was tested in a coincidence measurement setup interfacing Smarty with two groups of five 4 mm \(\times\) 4 mm analog silicon photomultipliers (A-SiPMs) used as inputs for the TDCs. The ANN successfully distinguished between six different positions of a radioactive source placed between the two photodetector arrays by solely using the TDC timestamps. artificial neural network, ANN, ANN-reconfigurability, feed-forward ANN, genetic algorithm, time-to-digital converter, TDC, position reconstruction. ## I Introduction For a long time, artificial intelligence (AI) has been applied in different areas such as biology, economics, medical healthcare, automotive, etc. [1, 2, 3, 4]. AI makes use of different tools depending on the problem that needs to be solved, such as deep learning, ANNs, probabilistic methods, and many others [5, 6, 7]. ANNs, in particular, were initially inspired by the structure of the human brain. Neurons are at the core of our nervous system, whose connections are established through synapses. The connection between neurons is essential for learning, as it serves as a way in which information is sent between them. ANNs are based on the same approach, each of them featuring a set of neurons organized in layers and connections. Each connection is assigned a specific weight, which determines how much influence the input has on the output value. A typical feed-forward artificial neural network consists of several layers which are usually classified as input, hidden, and output layers. Usually, the input layer receives the information coming from outside, in our case from the TDCs, while the output layer provides the final result. However, there are many ways in which ANNs can be implemented. In the past, ANNs have found applicability in the medical field by assisting image reconstruction algorithms in enhancing image quality [8, 9, 10, 11, 12]. Considering that clinical diagnostic systems, such as magnetic resonance imaging (MRI), computed tomography (CT), or PET, need to handle large amounts of data that are usually processed offline, ANNs proved to be a valuable tool for this task. PET is a nuclear imaging technique heavily used in oncology for diagnostics, treatment, and monitoring of cancerous tumors. This technique uses radioisotopes, which are injected into the patient's body and, in some cases, concentrate in the area where the tumor is located. The radioisotope undergoes positron decay, and the emitted positron travels a short distance in the tissue until it interacts with an electron. This process, called annihilation, results in two annihilation photons emitted with a photon energy of 511 keV at 180 degrees, forming a line-of-response (LOR). 
The emitted gamma rays are absorbed by scintillators and converted into visible photons, which are then detected by optical photodetectors, as depicted in Fig. 1. The role of the PET scanner is to acquire a subset of LORs and to reconstruct the most likely annihilation point in 3D. Time-of-flight PET (ToF-PET) systems, in particular, make use of timing information obtained from the photodetectors to reconstruct the annihilation point with more advanced reconstruction algorithms, enabling enhanced image quality. Over the years, different techniques have been used to acquire the ToF-PET information (timing, and energy to discriminate the 511 keV), such as constant fraction discrimination, leading edge discrimination, or estimation algorithms based on statistical models [13, 14]. More recently, the research focus shifted towards ToF estimation with ANNs. The work carried out in [13] presents a nine-layer off-chip convolutional neural network, which uses digitized waveforms, obtained through constant-fraction discriminators, as input to estimate ToF information. The ANN is trained in Matlab with millions of coincidence events acquired with a \({}^{22}\)Na point source at different timing delays. With this approach, an average timing precision of 32 ps was obtained. Another interesting study has been presented in [15]. The authors analyze the capabilities of monolithic scintillators with respect to timing and spatial resolution by using an off-chip neural network. The proposed ANN is implemented in an FPGA and comprises thousands of neurons and coefficients, while the readout electronics is composed of custom-developed application-specific integrated circuits (ASICs). One of the main issues in PET is the large amount of data generated by thousands of photodetectors, which needs to be transferred for external processing [16, 17, 18]. Moreover, the data has to be filtered to remove random and scattered events in order to enhance the final image quality. In this paper, we propose an on-chip fully-reconfigurable ANN with the goal of reducing complexity and minimizing data throughput. Smarty comprises both timestamping circuitry and an ANN. Ten independent channels are directly connected to Smarty's reconfigurable ANN, which has been trained for the reconstruction of the position of a \({}^{22}\)Na radioactive source placed in-between two photodetector arrays. The data from the ten input channels is reduced to a single word per frame, and special cases, when not enough channels fired, are automatically discarded by the ANN. The topology of the ANN can be changed within certain design limits, i.e., a maximum of 1024 weights and biases and 128 neurons. Smarty is thus a fully-integrated, reconfigurable ANN used to reconstruct the position of a \({}^{22}\)Na source along the X axis; results for both floating-point and quantized representations are presented. This work is organized as follows: in section II the on-chip neural network modelling, which is the basis of the entire design, is presented. This is followed by an in-depth description of the Smarty system architecture in section III, where the TDC and ANN implementations are discussed. Section IV presents the performance characterization results, where the TDCs, the genetic algorithm training, and the first on-chip ANN performance are validated. Section V presents the source position reconstruction measurements performed with Smarty's ANN, followed by section VI, which provides the conclusions of this work.
## II On-chip neural network modelling The neural network was first described through a mathematical model before being implemented on-chip. The ANN's reconfigurability is encoded by means of a topology file, which contains information on the ANN's structure, such as the number of neurons, the number of hidden layers and the connections between the neurons. A fully-connected neural network with three input neurons was chosen for the example mathematical description and is depicted in Fig. 2. The neural network comprises one input layer with three neurons, one hidden layer with four neurons and one output layer with two neurons. The mathematical description of the aforementioned ANN is presented below: \[O_{3}=w_{0}+\sum_{i=1}^{1}w_{i}\times O_{i-1}=w_{0}+w_{1}\times O_{0}\] \[O_{4}=w_{2}+\sum_{i=3}^{3}w_{i}\times O_{i-2}=w_{2}+w_{3}\times O_{1}\] \[O_{5}=w_{4}+\sum_{i=5}^{5}w_{i}\times O_{i-3}=w_{4}+w_{5}\times O_{2}\] \[O_{6}=w_{6}+\sum_{i=7}^{9}w_{i}\times O_{i-4}=w_{6}+w_{7}\times O_{3}+w_{8}\times O_{4}+w_{9}\times O_{5}\] \[O_{7}=w_{10}+\sum_{i=11}^{13}w_{i}\times O_{i-8}=w_{10}+w_{11}\times O_{3}+w_{12}\times O_{4}+w_{13}\times O_{5}\] \[O_{8}=w_{14}+\sum_{i=15}^{17}w_{i}\times O_{i-12}=w_{14}+w_{15}\times O_{3}+w_{16}\times O_{4}+w_{17}\times O_{5}\] \[O_{9}=w_{18}+\sum_{i=19}^{21}w_{i}\times O_{i-16}=w_{18}+w_{19}\times O_{3}+w_{20}\times O_{4}+w_{21}\times O_{5}\] \[O_{10}=w_{22}+\sum_{i=23}^{26}w_{i}\times O_{i-17}=w_{22}+w_{23}\times O_{6}+w_{24}\times O_{7}+w_{25}\times O_{8}+w_{26}\times O_{9}\] \[O_{11}=w_{27}+\sum_{i=28}^{31}w_{i}\times O_{i-22}=w_{27}+w_{28}\times O_{6}+w_{29}\times O_{7}+w_{30}\times O_{8}+w_{31}\times O_{9} \tag{1}\] Fig. 1: Conceptual representation of a PET ring. The interaction between positron and electron (annihilation) results in two gamma rays emitted with a photon energy of 511 keV at 180 degrees along a LOR. The gamma rays first reach the scintillator, which converts them into visible photons. The visible photons are then detected by the photodetectors. Fig. 2: Example of a fully-connected artificial neural network which served as a basis for the extended mathematical model. The neural network presents: one input layer with three neurons (\(O_{3}\), \(O_{4}\), \(O_{5}\)), one hidden layer with four neurons (\(O_{6}\), \(O_{7}\), \(O_{8}\), \(O_{9}\)) and one output layer with two neurons (\(O_{10}\), \(O_{11}\)). This model serves as the basis for any feed-forward fully-connected ANN of any dimension. The topology file is uploaded into a 624-bit memory, while the neural network's weights and biases are separately stored in a 10.24 kbit dual-port memory on the chip. In order to understand the ANN's maximum capability, a Matlab model was implemented in floating point and used as a reference. From now on, the Matlab code will be referred to as the golden code, which provides a set of golden outputs; due to its floating-point representation, it provides much higher precision than a fixed-point representation, and its output results are therefore considered correct and used as the reference. The ANN was also fully described in C, and high-level synthesis (HLS) was used to obtain the equivalent register transfer level (RTL) implementation. The ANN's performance in floating point (golden code) and fixed point (C code) is then compared. A conceptual representation of the modelling and comparison procedures is depicted in Fig. 3. Fig. 3: Conceptual representation of Smarty's ANN modelling and comparison procedure. The ANN's performance in Matlab (floating point) is compared to that of the HLS-inferred system, which is fixed-point bounded. At the end, the relative error between the two results is calculated.
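To make the comparison procedure concrete, the following sketch evaluates the network of Eq. (1) in floating point and with a rounded fixed-point representation, then reports the relative rounding error. This is only an illustration of the golden-code-versus-HLS flow described above (the actual golden code is in Matlab and the fixed-point model in C); the function names, the random coefficients, and the rounding scheme are our assumptions.

```python
import numpy as np

def forward(x, w, frac_bits=None):
    """Evaluate the example network of Fig. 2 per Eq. (1).

    x: the three external inputs (O0, O1, O2); w: the 32 coefficients.
    If frac_bits is given, every neuron output is rounded to that many
    fractional bits, mimicking a fixed-point implementation (assumption).
    """
    q = (lambda v: np.round(v * 2**frac_bits) / 2**frac_bits) if frac_bits is not None else (lambda v: v)
    O3 = q(w[0] + w[1] * x[0])                                  # input-layer neurons
    O4 = q(w[2] + w[3] * x[1])
    O5 = q(w[4] + w[5] * x[2])
    O6 = q(w[6] + w[7]*O3 + w[8]*O4 + w[9]*O5)                  # hidden layer
    O7 = q(w[10] + w[11]*O3 + w[12]*O4 + w[13]*O5)
    O8 = q(w[14] + w[15]*O3 + w[16]*O4 + w[17]*O5)
    O9 = q(w[18] + w[19]*O3 + w[20]*O4 + w[21]*O5)
    O10 = q(w[22] + w[23]*O6 + w[24]*O7 + w[25]*O8 + w[26]*O9)  # output layer
    O11 = q(w[27] + w[28]*O6 + w[29]*O7 + w[30]*O8 + w[31]*O9)
    return np.array([O10, O11])

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, 32)            # random coefficients, as in the paper's test
x = rng.uniform(0, 1, 3)
golden = forward(x, w)                # floating-point "golden" output
fixed = forward(x, w, frac_bits=8)    # 8 fractional bits, as chosen on-chip
print(np.abs((fixed - golden) / golden))  # relative rounding error
```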
An ANN example was used in order to determine the number of fractional bits needed for the on-chip implementation. The on-chip ANN has to be carefully designed considering the availability of resources and area; therefore, the number of fractional bits is an important decision in the ANN design. An ANN with a large input layer of 10 neurons, 4 hidden layers of 8 neurons each, and a 6-neuron output layer was analyzed. A set of random coefficients (weights and biases) sampled from a uniform distribution was used. The ANN's input is given by the 20-bit TDC output codes, which were simulated as random numbers sampled from a uniform distribution across different ranges: [0, 300000], [300000, 600000], [600000, 900000], [900000, 1000000]. The same ANN configuration with the same values was used in the HLS test bench. At the end, the output performance files of both the Matlab golden code and HLS were compared, as illustrated in Fig. 4. The result indicated a relative rounding error of less than 0.03% across all TDC input ranges for an 8-fractional-bit representation. Fig. 4: Relative rounding error of Smarty ANN's outputs. The error was obtained by comparing the golden outputs in floating point and the fixed-point outputs. Four different TDC ranges were analyzed: 0 corresponds to [0, 300000], 1 corresponds to [300000, 600000], 2 corresponds to [600000, 900000] and 3 corresponds to [900000, 1000000]. ## III System architecture Smarty is part of a system-on-chip (SoC) in TSMC's 16 nm FinFET process, and it interfaces with a RISC-V processor through an AXI bus. The system comprises a timing block, consisting of 10 TDCs, and a processing block, consisting of a fully-reconfigurable feed-forward ANN. ANN readout and configuration are also performed through the AXI bus. The SoC PLL provides the clock for the entire system. The ANN was operated at 100 MHz. A block diagram of the main blocks implemented in Smarty is presented in Fig. 5. Three different isolated supply voltage domains are provided for the design: a dedicated power supply for the TDC voltage-controlled ring oscillators (VDD_RING), enabling individual control of the oscillation frequency of the TDCs; a supply voltage dedicated to the ANN (VDD_ANN), allowing the user to test the ANN independently; and a dedicated supply voltage for all the remaining circuits of Smarty (VDD_CORE). Hereafter, the main blocks are described in detail. ### _Time-to-digital converter_ All 10 TDCs present in Smarty are based on the same architecture, a ring topology that comprises a voltage-controlled oscillator (VCO), an asynchronous ripple counter and a thermometer decoder, as shown in Fig. 6. The VCO is based on four delay stages connected in a ring, comprising buffers, inverters and NAND gates, as illustrated in Fig. 7.
An enable signal (EN) is formed by the rising edges of a START and a STOP signal; the VCO starts oscillating when this signal is asserted. The oscillation stops on the EN's falling edge, and the frozen state is read using the four outputs Q\(<\)0:3\(>\). The 20-bit asynchronous ripple counter keeps track of the number of oscillations through the ring and returns the most-significant bits (MSBs), while the least-significant bits are given by the four outputs Q\(<\)0:3\(>\) through a thermometer decoder. In order to reduce power consumption, the VCO's outputs are buffered and are only available when EN_read is asserted, the only exception being the Q\(<\)3\(>\) signal, which acts as a clock signal for the counter and has an always-on buffer in order to balance the load along the ring. The final TDC result is given by the following equation: \[N_{result}=4\times N_{coarse}+N_{fine}, \tag{2}\] where \(N_{coarse}\) is the counter value and \(N_{fine}\) is the decoded fine bit value. Each TDC can be independently read out. The signals TDC_START_ELECTRIC and TDC_STOP_ELECTRIC are generated by the FPGA during electrical testing, so as to perform single-shot measurements. TDC_START_ELECTRIC is generated at the same frequency as TDC_STOP_ELECTRIC. An adjustable phase with respect to the STOP signal allows different impulse widths to be fed into the TDC, thereby sweeping a larger TDC range. Another option is to start all the TDCs together by using TDC_START_ALL and TDC_STOP_ELECTRIC. In this way, the measurement time is decreased and all TDCs can measure the same impulse width. The TDC's oscillation frequency can be easily determined by monitoring the 7\({}^{th}\) counter bit of each TDC, selected using the TDC_CNT_SEL signal. The TDC's operating principle is illustrated in a timing diagram in Fig. 8. Fig. 5: Smarty block diagram. The chip comprises 10 TDCs whose outputs provide the ANN's inputs. Two dual-port memories are used for storing the weights, biases and the ANN outputs. All TDCs can be bypassed and the ANN can be used as a stand-alone structure. Three different isolated supply domains are provided: VDD_RING, VDD_CORE and VDD_ANN. Fig. 6: TDC's main building blocks: a VCO that returns the four phases of the TDC (Q\(<\)0:3\(>\)), a 20-bit asynchronous ripple counter that returns the TDC's MSBs and a thermometer decoder which returns the decoded values of the TDC's LSBs. Fig. 7: TDC's ring oscillator structure [20]. ### _Feed-forward ANN_ The ANN comprises three main memory blocks: a 4096-bit memory, which contains the description parameters for all the neurons; a 10 kbit coefficient memory, which comprises the values of all the weights and biases; and a 624-bit memory for the topology file. The ANN also comprises 4 processors that are fully synthesized through HLS and are used to accelerate the operations performed by the neural network. A control logic unit implements all the necessary sequential steps that are described in the behavioral code and is fully inferred by the HLS tool. A conceptual diagram of the neural network is depicted in Fig. 9. The on-chip ANN implementation requires many resources and much area; therefore, constraints in terms of the maximum number of neurons and coefficients that can be used were imposed. As a result, the ANN can benefit from a maximum of 128 neurons in total, along with 1024 weights and biases, in a fully-connected configuration.
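As a small illustration of Eq. (2) and of the resource limits just described, the sketch below decodes a TDC code into a timestamp and checks whether a candidate fully-connected topology fits on-chip. The helper names and the assumption of one bias per non-input neuron are ours, not taken from the chip's toolchain.

```python
TDC_LSB_PS = 53.5  # average TDC LSB in picoseconds (from the measurements)

def tdc_result(n_coarse: int, n_fine: int) -> int:
    # Eq. (2): N_result = 4 * N_coarse + N_fine
    return 4 * n_coarse + n_fine

def tdc_time_ps(n_coarse: int, n_fine: int) -> float:
    # Convert the decoded code into a time interval using the average LSB.
    return tdc_result(n_coarse, n_fine) * TDC_LSB_PS

def topology_fits(layers) -> bool:
    """Check a fully-connected topology against the on-chip limits:
    at most 128 neurons and 1024 coefficients (weights + biases,
    assuming one bias per non-input neuron)."""
    neurons = sum(layers)
    coeffs = sum(a * b + b for a, b in zip(layers, layers[1:]))
    return neurons <= 128 and coeffs <= 1024

# The narrow-deep topology used later (10 inputs, 5x13 hidden, 3 outputs)
# fits within the constraints:
print(topology_fits([10, 13, 13, 13, 13, 13, 3]))  # True
```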
The ANN's Si area is 89.79 \(\upmu\)m \(\times\) 182.16 \(\upmu\)m and the TDC bank is 20 \(\upmu\)m \(\times\) 250 \(\upmu\)m, both designed in 16 nm FinFET CMOS technology. ## IV Performance characterization results The Smarty chip was characterized from different perspectives. Initially, the TDCs were characterized, followed by the ANN; finally, a coincidence measurement setup was built with two boards placed in front of each other in order to reconstruct the radioactive source position along the line connecting them. ### _TDC performance_ Electrical tests were performed to determine the transfer functions of each TDC. The transfer function was determined by using the TDC_START_ALL and TDC_STOP_ELECTRIC signals. A large range of impulse widths was covered by using an FPGA to send TDC_STOP_ELECTRIC and TDC_START_ALL with different phases with respect to each other. The transfer functions of all ten TDCs are shown in Fig. 10. The bin width of each TDC was measured by monitoring the oscillation period of the 7\({}^{\text{th}}\) counter bit. The TDCs present an average LSB of 53.5 ps. The nonlinearities of the TDCs were measured through a code density test, illuminating a photodetector connected to the TDC with white light and accumulating multiple frames. The DNL and INL results of all TDCs are presented in Table I. Fig. 8: TDC operating principle. The Q\(<\)3\(>\) signal represents the counter CLK. The VCO oscillates during the period when the EN signal is set high. Fig. 10: Transfer functions of all ten TDCs. Measurements performed over a range of 800 ns. Fig. 9: Conceptual representation of the neural network implementation. The ANN's main blocks are: the weights and bias memory, the neuron memory and the control logic unit. An AXI bus provides the communication with the ANN. ### _Genetic algorithm training_ The ANN has been trained in Python 3.6 by using the PyTorch open-source machine learning framework and genetic algorithms [19, 20, 21]. After exploring different training approaches, the use of genetic algorithms (GAs) proved to be a good solution for the problems to be solved with Smarty's ANN. The training starts with a set of \(n\) individuals which are part of a group called the population. The number of individuals in each population is chosen by the user and differs from one problem to another; there is no specific number that guarantees the best result. Initially, the evolution starts from a random population with a certain number of individuals. As in real life, each individual is characterized by genes organized in chromosomes, in this case weight and bias values. A loss function is defined, and each individual is evaluated with it during the training process. In the end, the best individual in the final generation is chosen as the final solution. Mutation and crossover are important parameters which influence the final result. Crossover is the chromosome combination process between two parents whose result is a new offspring. Crossover is usually performed between the parents which present the best chromosomes, so that the new generation has more advanced individuals. A mutation is a genetic operator that takes place after the crossover has occurred and represents a random change in one or more sections of the new offspring's chromosome. In this way, there is a higher chance that the algorithm converges faster and does not get stuck in local minima; a minimal sketch of such a loop is given below.
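The toy sketch below illustrates this evaluate-select-recombine loop over flat coefficient vectors. It is only a schematic rendering of the procedure described above, not the authors' PyTorch training code; the selection scheme, the mutation noise and all parameter values are assumptions.

```python
import numpy as np

def evolve(loss_fn, n_coeffs, pop_size=20, generations=30,
           mutation_rate=0.002, rng=np.random.default_rng(0)):
    """Toy GA: each individual is a flat vector of weights and biases."""
    pop = rng.uniform(-1.0, 1.0, (pop_size, n_coeffs))  # random first generation
    for _ in range(generations):
        losses = np.array([loss_fn(ind) for ind in pop])
        parents = pop[np.argsort(losses)[: pop_size // 2]]  # keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_coeffs) < 0.5             # uniform crossover
            child = np.where(mask, p1, p2)
            hit = rng.random(n_coeffs) < mutation_rate    # random mutation
            child[hit] += rng.normal(0.0, 0.1, hit.sum())
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    losses = np.array([loss_fn(ind) for ind in pop])
    return pop[np.argmin(losses)]  # best individual of the final generation
```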
As in the case of the population size and generation number, there is no fixed value for the mutation and crossover parameters that returns the best performance. These values are chosen by the user depending on the problem being solved. An illustration of the main steps of the genetic algorithm used in this framework is depicted in Fig. 11. The GA was used to train a fixed ANN topology across all generations, as depicted in Fig. 12. Each individual in each generation represents an ANN. The mutation and crossover operators were used to optimize the weights and biases and minimize the loss function. All the following results are obtained through training with genetic algorithms, and the respective chosen parameters will be mentioned along with each presented result. ### _Measurement setups_ To assess the performance of Smarty, three different measurement setups have been created. The first setup, presented in Fig. 13, is a PCB (green) designed to accommodate the testing of the entire SoC; it can be used for electrical measurements only. The second PCB (black) is an interface board which comprises a set of SMAs allowing the interfacing and control of Smarty's TDCs. In this way, the TDCs can be accessed and triggered externally or interfaced with photodetectors, such as silicon photomultipliers. The last PCB is an application-dedicated design which comprises a set of five silicon photomultipliers (Hamamatsu S14160/S14161 series). The A-SiPMs are placed in a line arrangement, one next to the other, as depicted in Fig. 14. A dedicated 3D-printed support was designed for this board in order to allow its attachment to an optical table and keep it in a stable position. Each A-SiPM has a corresponding amplifier and comparator whose output is available through SMA connectors. Fig. 11: Main steps of the genetic algorithm used in Smarty's ANN. Adapted from [20]. Fig. 12: An ANN example. Each circle in each generation is an ANN of a fixed topology. Each generation has 20 individuals. The biases and weights recombine to create the children of the next generation. A mutation occurs randomly in the chromosomes and is represented by the red circles. The sensor board is shown in Fig. 14. In this way, the A-SiPM signals can be sent as inputs to the TDCs in Smarty. A connection between the interface board and the board which comprises the photodetectors is made through coaxial cables of equal length, as shown in Fig. 15 b). The A-SiPM features 50% PDE at 450 nm for an excess bias voltage of 2.7 V, corresponding to a total voltage of 40.7 V [19]. ### _Electro-optical ANN performance evaluation_ The first measurements performed with Smarty's ANN were single-shot optical measurements. The A-SiPMs were illuminated with a 375 nm picosecond diode laser (PiL037-FC). The board was interfaced with an FPGA, and a START signal was generated by the FPGA serving as a trigger for the laser controller. The arrival of the A-SiPM output pulse with respect to a STOP signal, also generated by the FPGA, was measured. In a single-shot measurement setup, different delays between the START and STOP signals are measured by the TDCs. A specific TDC code, with some variation, is generated at each measured interval. The TDC codes corresponding to all five A-SiPMs were read out, and a set of 10,000 frames was accumulated for each single-shot measurement. A picture of the measurement setup is presented in Fig. 15.
The data of all the single-shot points were transferred to the PC and used to train the ANN using the Python 3.6 training flow with the GA. The TDC outputs are connected to the ANN; therefore, all the TDC codes were used as input data for the ANN. The chosen topology for the training contains 5 inputs, 5 hidden layers with 13 neurons each and one output neuron. For the GA training, the following parameters were chosen: 10 generations with 30 individuals each (each individual is a neural network of the topology described before), uniform crossover (each gene is chosen from either parent with a certain probability in order to be transferred to one of the two children), a 0.2% mutation rate and 10000 epochs. The inputs of the ANN are the TDC codes obtained through the optical single-shot measurement, while the output of the ANN is the delay between the START and STOP signals of the TDC (called the EN width pulse). The final training results are presented in Fig. 16. Fig. 13: SoC board (green PCB) connected to the interface board (black PCB). A set of SMA connectors is available for testing and interfacing with other circuits. Fig. 14: Sensor board which comprises five 4 mm \(\times\) 4 mm A-SiPMs (Hamamatsu S14160/S14161 series). An amplifier (ADA4807) and a comparator (LT1394) are connected to each A-SiPM; the output of the comparator is made available through SMA connectors. The board can be interfaced with the board of Fig. 13 through coaxial cables, as shown in Fig. 15 b). Fig. 15: a) Optical measurement setup for the single-shot measurements. A sensor board which comprises five A-SiPMs is used. All SiPMs are illuminated with a 375 nm picosecond laser. A diffuser (DG20-220-MD) is used between the laser and the A-SiPMs in order to assure uniform illumination across all five photodetectors. b) Conceptual representation of the measurement setup presented in a). The training results show the capability of the ANN to distinguish all 6 different single-shot points solely using the TDC codes. The loss is calculated for each individual in each generation, and at the end, the individual with the best average loss across all the frames is chosen. In this case, the loss is calculated as the absolute error of the ANN output. It can clearly be noted that the loss value improved across all 10 generations. At the end of the training, the weights and biases were extracted and transferred to the on-chip ANN. As discussed earlier, the on-chip design makes use of a fixed-point representation; therefore, the weights and biases required off-chip quantization. The final on-chip ANN performance is illustrated in Fig. 17. Initially, a naive quantization method was used by directly converting to fixed point through rounding. However, in this case, the results were far from the desired values. A second approach, which consisted of scaling the weights and biases before converting to the fixed-point representation, yielded much better results. The performance can be further improved by exploring different quantization methods as well as performing quantization-aware training. The quantized ANN was able to distinguish all 6 single-shot measured points as well as to successfully interpolate three never-before-seen points (28, 33, 47, represented by green dots in Fig. 17). ## V ANN performance in a coincidence measurement setup The ANN designed in Smarty can be used in different measurement setups dedicated to different applications selected by the user.
In the scope of this paper, the performance of the ANN was also tested in a coincidence measurement setup, with applicability to PET scenarios. The goal is to test the ANN's capabilities for the reconstruction of the position of a radioactive source. One of the goals of a PET system is to reconstruct the position of the annihilation point along a line-of-response with very good precision. Through the following simulations and measurements, the ANN's performance has been tested, aiming for a good reconstruction along the X axis. ### _Simulation setup_ Firstly, the performance of the ANN was tested by using synthetic data generated with the Geant4 platform [20]. The simulated model emulates the behavior of the gamma interaction inside a 4 mm \(\times\) 4 mm \(\times\) 20 mm LYSO scintillator. This specific scintillator dimension was chosen so that it matches the total area occupied by all five A-SiPMs of the sensor board (sidewise readout). A spherical \({}^{22}\)Na source with a diameter of 3 mm and an activity of 3.7 MBq, placed on a disk case with a diameter of 25 mm and a thickness of 6 mm, was used in the simulation environment. The gamma rays emitted by the source interact with the scintillators placed in opposition. Upon interaction, the scintillators produce a burst of visible photons. The resulting detection times recorded at the output surface of the scintillator (in this case the 20 mm surface covered by the A-SiPMs) are used as training data for the ANN. Each source position was simulated, and the photon arrival times at the output surface of each scintillator were recorded individually for each source position. A conceptual block diagram which depicts the simulation environment is shown in Fig. 18. Fig. 16: a) Histogram of the EN width pulse estimation at the output of the neural network when presented with blind validation input frames for 6 different values of the single-shot optical measurement. Ground truth is shown by red dots. b) Average loss of the best performing individual in each generation of the GA. The EN width is presented in arbitrary units and represents the number of clock cycles chosen during the optical measurement (one clock cycle is 5 ns). Fig. 17: Smarty ANN on-chip performance in the single-shot measurement setup. The coefficient quantization effects on the ANN's output are depicted by the yellow and blue lines. Yellow: naive quantization method which rounds the coefficients. Blue: coefficients are clipped within a desired range. The ANN interpolated three never-before-seen points, represented by green dots. All the timestamps that reach the two scintillators corresponding to detector 1 and detector 2 are used for the ANN training. Each scintillator has an ID, which is 1 or 2, so that the spatial information on the arrival of the gamma photons is retained. Each source position is simulated one at a time and generates its own dataset containing the spatial coordinates of the source, \(\mathrm{X_{source}}\) and \(\mathrm{Y_{source}}\), the time of arrival of the photons at the detection surface of the scintillator, and the ID of the scintillator (detector 1 or detector 2). The recorded timestamps were then transformed into TDC codes in order to be consistent with the real measurement scenario, in which the ANN directly receives the final codes of all 10 TDCs. Each TDC records a single timestamp per frame per A-SiPM. The length of the frame is set so that multiple TDCs fire. As before, the ANN has been trained in Python 3.6 by using solely timestamps.
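The conversion from simulated arrival times to TDC codes can be sketched as follows; this is an illustrative helper (not the authors' code) that combines the measured 53.5 ps average LSB with the 120 ps FWHM jitter imposed on the synthetic data, as detailed in the frame-preparation steps below.

```python
import numpy as np

TDC_LSB_PS = 53.5                  # average TDC LSB, in picoseconds
JITTER_FWHM_PS = 120.0             # jitter imposed on the synthetic timestamps
SIGMA_PS = JITTER_FWHM_PS / 2.355  # FWHM -> Gaussian standard deviation

def to_tdc_codes(arrival_times_ps, rng=np.random.default_rng(0)):
    """Turn simulated photon arrival times (ps) into integer TDC codes."""
    times = np.asarray(arrival_times_ps, dtype=float)
    noisy = times + rng.normal(0.0, SIGMA_PS, times.shape)
    return np.round(noisy / TDC_LSB_PS).astype(int)
```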
The organization of the timestamps in preparation for the ANN training is the following: * all the timestamps are sorted, and their corresponding source position coordinates, \(\mathrm{X_{source}}\) and \(\mathrm{Y_{source}}\), are retained; * the data is organized in exposure frames; * if all 10 TDCs fired, all ten timestamps are kept for each frame;1 Footnote 1: There are also frames with a smaller number of timestamps, due to the fact that not all TDCs fired. The first timestamp for each A-SiPM in one frame is kept. A jitter of 120 ps FWHM has been imposed over the synthetic data. * each A-SiPM has an ID from 1 to 10, and each timestamp is assigned to a specific A-SiPM depending on its position at the detection surface of the scintillator; * at the end, the dataset is organized into a training set which contains 80% of all the data and a validation set which contains the remaining 20% of the dataset. As before, the ANN was trained using the GA, using solely the TDCs' timestamps. The GA training framework used for the simulation of the coincidence setup is as follows: * the first generation contains a population of 20 individuals. Each individual is an ANN with a certain topology that will be presented along with the simulation results; * the Adam optimization algorithm [24] is used along with a large number of epochs, with a variable learning rate that decreases from 0.01 in the first epoch to 0.001 in the last epoch and is varied across different training trials; * the algorithm is run over 30 generations with uniform crossover and a small mutation rate (\(<1\%\)); * each frame of the dataset contains the TDCs' timing information and the corresponding radioactive source position in the plane; * finally, the average loss of each individual is calculated for each generation, and the individual with the best average loss value from the last generation is reported. The loss function is defined as the distance between the estimated source position and the actual source position in space, and it is calculated as follows: \[loss=\|P_{1}P_{2}\|=\sqrt{(X_{1}-X_{2})^{2}+(Y_{1}-Y_{2})^{2}}, \tag{3}\] where \(P_{1}\) is the point corresponding to the actual source position in space with its respective coordinates (\(X_{1}\) and \(Y_{1}\)), and \(P_{2}\) is the point corresponding to the ANN-estimated source position with its respective coordinates (\(X_{2}\) and \(Y_{2}\)). The ANN's reconfigurability consists in changing its topology within the design limitations. In general, there is no best choice in terms of the ANN's topology, crossover or mutation rate. The choice is made by the user and depends on the problem at hand. In this sense, the exploration space is very large and cannot be covered entirely. Therefore, the ANN's performance in a coincidence measurement setup for radioactive source position reconstruction was analyzed by training two different feed-forward topologies: fully-connected narrow-deep and wide-shallow, as depicted in Fig. 19. The topology of the feed-forward narrow-deep configuration comprises 10 input neurons, 5 hidden layers of 13 neurons per layer and 2 output neurons, while the feed-forward wide-shallow configuration comprises 10 input neurons, 1 hidden layer with 70 neurons and 2 output neurons. The performance of the best performing topology, the narrow-deep fully-connected ANN with a mutation rate of 0.2%, is depicted in Fig. 20. In both cases, the training has been performed with different
mutation rates, 1% and 0.2% respectively. The ANN presents better results in the case of training performed with a 0.2% mutation rate. For both topologies, the ANN was able to distinguish six different radioactive source positions along the X axis. The performance of both ANN topologies is presented in Table II. Fig. 18: Geant4 simulation setup with two scintillators placed in coincidence at a distance of 220 mm. Two 4 mm \(\times\) 4 mm \(\times\) 20 mm LYSO scintillators were used. The black dots represent the radioactive source positions that are simulated in the Geant4 environment one at a time. ### _On-chip performance_ The coincidence measurement setup is presented in Fig. 21. Two A-SiPM sensor boards are placed in coincidence at a distance of 220 mm from each other. A \({}^{22}\)Na radioactive source is placed on an optical rail, and its position along the X axis can be changed manually. The 10 comparator outputs from the two boards are connected to the Smarty board via equal-length coaxial cables. The control of the boards is performed with the aid of an FPGA. Thousands of frames are accumulated with the TDCs at each source position along the X axis. All the data acquired with the TDCs for the different radioactive source positions is used for the training of the ANN. The ANN has been trained using the GA as described before. Compared with the previous training, a classification approach was used due to its better performance in this specific case. Fig. 21: Smarty coincidence measurement setup. a) Two A-SiPM boards are placed in coincidence at a distance of 220 mm from each other. Each board is coupled with a LYSO scintillator of 4 mm \(\times\) 4 mm \(\times\) 20 mm. A \({}^{22}\)Na source can be moved along the X axis between the two sensor boards on a dovetail optical rail. b) Conceptual representation of the coincidence measurement setup. Fig. 19: a) Narrow-deep fully-connected ANN with 10 input neurons, 5 hidden layers of 13 neurons each and 2 output neurons. b) Wide-shallow fully-connected ANN with 10 input neurons, 1 hidden layer of 70 neurons and 2 output neurons. Fig. 20: Narrow-deep feed-forward fully-connected ANN trained with a mutation rate of 0.2%. a) Histogram of the X coordinate estimation at the output of the neural network when presented with never-before-seen validation input frames for 6 radioactive source positions. The ANN's output is divided into a set of classes depending on the number of source positions used for training. The ANN returns the class the source position corresponds to, instead of its absolute value in mm. The ANN comprises 10 input neurons, 5 hidden layers with 13 neurons each and 3 output neurons. The performance of the ANN is reported as a confusion matrix. For the training process, frames in which only the TDCs corresponding to one detector fired were considered non-valid frames, and a class called -120 mm, which is a position beyond the distance between the two photodetectors, was associated with them. Non-valid frames are frames in which only one of the detectors (detector 1 or detector 2) fired; therefore, no coincidence can be established by the ANN. The classification results for two radioactive source positions, placed at -57 mm and 65 mm along the X axis between the two sensor boards, are presented in Fig. 22 for both the floating-point and quantized models. The performance is quantified using two parameters: accuracy and precision.
\[ACCURACY=\frac{TP+TN}{TP+TN+FP+FN},\qquad PRECISION=\frac{TP}{TP+FP} \tag{4}\] where TP are true positives, TN true negatives, FP false positives and FN false negatives. The average accuracy and precision of the floating-point representation are 83.59% and 68.69% respectively, while the quantized model presents an accuracy of 83.48% and a precision of 68.59%. From the results presented in Fig. 22, it can be observed that the ANN was able to clearly distinguish between valid and non-valid frames with a high degree of certainty. Both positions, -57 mm and 65 mm, have been distinguished by the ANN; however, it favors the former, most likely due to an unequal number of frames in the input dataset and bias differences between the two detectors, which led to a slight increase in the count rate on the SiPMs on the left. This effect is more evident in the quantized model. The same analysis has been repeated for three different source positions, -57 mm, 0 mm and 65 mm, plus the non-valid class (-120 mm). The ANN's classification results are shown in Fig. 23. In this case, the floating-point model has an accuracy of 81.26% and a precision of 53.70%, while the quantized model has an accuracy of 81.34% and a precision of 53.88%. As in the previous case, the ANN distinguished with a very high degree of certainty between valid and non-valid frames. The two models have a similar performance. ### _Hardware performance evaluation_ The execution time of the ANN was measured considering an ANN with 10 input neurons, 5 hidden layers with 13 neurons per hidden layer and 3 output neurons. For a 105 MHz clock, the ANN has an execution time of 22.44 \(\upmu\)s. Considering a total number of 1710 operations, the ANN executes 76.15 MOPS. However, the ANN was designed to run at a maximum frequency of 500 MHz, which results in a maximum performance of 363 MOPS. At a frequency of 100 MHz, which proved to be sufficient for the current experiment, the ANN itself consumes 0.4 mW, which is equivalent to 190 GOPS/W. The entire SoC, highlighting the Smarty design, is shown in Fig. Fig. 23: Confusion matrices of the ANN classification results for 3 different source positions placed along the X axis, -57 mm, 0 mm and 65 mm. The non-valid frames are marked with -120. a) Floating-point representation, b) quantized model. The numbers represent the number of frames. Fig. 22: Confusion matrices of the ANN classification results for 2 different source positions placed along the X axis, -57 mm and 65 mm. The non-valid frames are marked with -120. a) Floating-point representation, b) quantized model. The numbers represent the number of frames. ## VI Conclusions Smarty is a fully-integrated and fully-reconfigurable feed-forward artificial neural network with ten TDCs, designed in a 16 nm FinFET CMOS technology and intended to operate in a PET coincidence measurement setup with reduced system complexity and data throughput. The system exhibits a high degree of flexibility, the user being able to decide the ANN topology in accordance with the problem at hand, within a set of constraints. The average LSB of the TDCs is 53.5 ps, while the ANN operates at up to 363 MOPS with an efficiency of 190 GOPS/W. Smarty was validated in an experimental coincidence setup where it successfully distinguished between six different positions of a radioactive source when using a floating-point implementation and three different source positions with a quantized model.
In all the test cases, the ANN was able to filter, with a high degree of certainty, frames which were considered non-valid, i.e., frames without timestamps in which no coincidence occurred. Smarty represents a first step towards reducing output data throughput and overall system complexity by bringing a level of preprocessing close to the photodetector. Conflicts of Interest: The authors declare no conflict of interest.
2308.09842
Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees
Identifying safe areas is a key point to guarantee trust for systems that are based on Deep Neural Networks (DNNs). To this end, we introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe, i.e., where the property does hold. Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe. Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits, and can provide a tight (with provable probabilistic guarantees) lower estimate of the safe areas. Our empirical evaluation on different standard benchmarks shows the scalability and effectiveness of our method, offering valuable insights for this new type of verification of DNNs.
Luca Marzari, Davide Corsi, Enrico Marchesini, Alessandro Farinelli, Ferdinando Cicalese
2023-08-18T22:30:35Z
http://arxiv.org/abs/2308.09842v2
# Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees ###### Abstract Identifying safe areas is a key point to guarantee trust for systems that are based on Deep Neural Networks (DNNs). To this end, we introduce the _AllDNN-Verification_ problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe, i.e., where the property does hold. Due to the #P-hardness of the problem, we propose an efficient approximation method called \(\epsilon\)-ProVe. Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits, and can provide a tight --with provable probabilistic guarantees-- lower estimate of the safe areas. Our empirical evaluation on different standard benchmarks shows the scalability and effectiveness of our method, offering valuable insights for this new type of verification of DNNs. ## Introduction Deep Neural Networks (DNNs) have emerged as a groundbreaking technology revolutionizing various fields, ranging from autonomous navigation [12, 13] to image classification [14] and robotics for medical applications [15]. However, while DNNs can perform remarkably well in different scenarios, their reliance on massive data for training can lead to unexpected behaviors and vulnerabilities in real-world applications. In particular, DNNs are often considered "black-box" systems, meaning their internal representation is not fully transparent. A crucial weakness of DNNs is their vulnerability to adversarial attacks [16, 1], wherein small, imperceptible modifications to input data can lead to wrong and potentially catastrophic decisions when deployed. To this end, Formal Verification (FV) of DNNs [12, 13] holds great promise for providing assurances on the safety of these functions before their actual deployment in real scenarios. In detail, the decision version of the _DNN-Verification_ problem takes as input a trained DNN \(\mathcal{N}\) and a safety property, typically expressed as an input-output relationship for \(\mathcal{N}\), and aims at determining whether there exists at least one input configuration which results in a violation of the safety property. It is crucial to point out that, since the DNN's input is typically defined in a continuous domain, any empirical evaluation of a safety property cannot rely on testing all the (infinitely many) possible input configurations. In contrast, FV can provide provable assurances on the DNNs' safety. However, despite the considerable advancements made by DNN verifiers over the years [17, 18, 19], the binary result (_safe_ or _unsafe_) provided by these tools is generally not sufficient to gain a comprehensive understanding of these functions. For instance, when comparing two neural networks and employing an FV tool that yields an _unsafe_ answer for both (i.e., indicating the existence of at least one violation point), we cannot distinguish whether one model exhibits only a small area of violation around the identified counterexample, while the other may have multiple and widespread violation areas. To overcome this limitation, a quantitative variant of FV, asking for the number of violation points, has been proposed and analyzed, first in [1] for the restricted class of Binarized Neural Networks (BNNs) and more recently in [11] for general DNNs. Following [11] we will henceforth refer to such a counting problem as _#DNN-Verification_.
Due to the #P-completeness of the _#DNN-Verification_ problem, both studies in [1, 12] focus on efficient approximate solutions, which allow the resolution of large-scale real-world problems while providing provable (probabilistic) guarantees regarding the computed count. Solutions to the _#DNN-Verification_ problem make it possible to estimate the probability that a DNN violates a given property, but they do not provide information on the actual input configurations that are safe or violations for the property of interest. On the other hand, knowledge of the distribution of safe and unsafe areas in the input space is a key element to devise approaches that can enhance the safety of DNNs, e.g., by patching unsafe areas through re-training. To this aim, we introduce the _AllDNN-Verification_ problem, which corresponds to computing the set of all the areas that do not result in a violation for a given DNN and a safety property (i.e., enumerating all the safe areas of a property's input domain). The _AllDNN-Verification_ is at least as hard as _#DNN-Verification_, i.e., it is easily shown to be #P-hard. Hence, we propose \(\epsilon\)-ProVe, an approximation approach that provides provable (probabilistic) guarantees on the returned areas. \(\epsilon\)-ProVe is built upon interval analysis of the DNN output reachable set and the iterative refinement approach [22], enabling efficient and reliable enumeration of safe areas.1 Footnote 1: We point out that the _AllDNN-Verification_ can also be defined to compute the set of unsafe regions. For better readability, in this work, we will only focus on safe regions. The definition and the solution proposed are directly derivable also when applied to unsafe areas. Notice that state-of-the-art FV methods typically propose an over-approximated output reachable set, thereby ensuring the soundness of the result. Nonetheless, the relaxation of the nonlinear activation functions employed to compute the over-approximated reachable set has non-negligible computational demands. In contrast, \(\epsilon\)-ProVe provides a scalable solution based on an underestimation of the output reachable set that exploits the _Statistical Prediction of Tolerance Limits_ [13]. In particular, we demonstrate how, with a confidence \(\alpha\), our underestimation of the reachable set computed with \(n\) random input configurations sampled from the initial property's domain \(A\) is a correct output reachable set for at least a fraction \(R\) of an indefinitely large further sample of points. Broadly speaking, this result tells us that if all the input configurations obtained in a random sample produce an output reachable set that does not violate the safety property (i.e., a _safe_ output reachable set), then, with probability \(\alpha\), at least a subset of \(A\) of size \(R\cdot|A|\) is safe (i.e., \(R\) is a lower bound on the safe rate in \(A\) with confidence \(\alpha\)). In summary, the main contributions of this paper are the following: * We initiate the study of the _AllDNN-Verification_ problem, the enumeration version of the _DNN-Verification_ problem. * Due to the #P-hardness of the problem, we propose \(\epsilon\)-ProVe, a novel polynomial approximation method to obtain a provable (probabilistic) lower bound of the safe zones within a given property's domain. * We evaluate our approach on FV standard benchmarks, showing that \(\epsilon\)-ProVe is scalable and effective for real-world scenarios. ## Preliminaries In this section, we discuss existing FV approaches and related key concepts on which our approach is based. In contrast to the standard robustness and adversarial attack literature [16, 15, 14], FV of DNNs seeks formal guarantees on the safety of the neural network given a specific input domain of interest. Broadly speaking, if a _DNN-Verification_ tool states that a region is provably verified, this implies that there are no adversarial examples - violation points - in that region. We recall in the next section the formal definition of the satisfiability problem for _DNN-Verification_ [16]. ### DNN-Verification In the _DNN-Verification_ problem, we have as input a tuple \(\mathcal{T}=\langle\mathcal{N},\mathcal{P},\mathcal{Q}\rangle\), where \(\mathcal{N}\) is a DNN, \(\mathcal{P}\) is a precondition on the input, and \(\mathcal{Q}\) a postcondition on the output. In particular, \(\mathcal{P}\) denotes a particular input domain or region for which we require a particular postcondition \(\mathcal{Q}\) to hold on the output of \(\mathcal{N}\). Since we are interested in discovering a possible counterexample, \(\mathcal{Q}\) typically encodes the negation of the desired behavior for \(\mathcal{N}\). Hence, the possible outcomes are SAT if there exists an input configuration that lies in the input domain of interest, satisfying the predicate \(\mathcal{P}\), and for which the DNN satisfies the postcondition \(\mathcal{Q}\), i.e., at least one violation exists in the considered area, and UNSAT otherwise. To provide the reader with a better intuition on the _DNN-Verification_ problem, we discuss a toy example. **Example 1**.: _(DNN-Verification) Suppose we want to verify that for the toy DNN \(\mathcal{N}\) depicted in Fig. 1, given an input vector \(x=(x_{1},x_{2})\in[0,1]\times[0,1]\), the resulting output should always be \(y\geq 0\). We define \(\mathcal{P}\) as the predicate on the input vector \(x=(x_{1},x_{2})\) which is true iff \(x\in[0,1]\times[0,1]\), and \(\mathcal{Q}\) as the predicate on the output \(y\) which is true iff \(y=\mathcal{N}(x)<0\), that is, we set \(\mathcal{Q}\) to be the negation of our desired property. As reported in Fig. 1, given the vector \(x=(1,0)\) we obtain \(y<0\), hence the verification tool returns a SAT answer, meaning that a specific counterexample exists and thus the original safety property does not hold._ Figure 1: A counterexample for a toy _DNN-Verification_ problem. ### #DNN-Verification Despite the provable guarantees and the advancements that formal verification tools have shown in recent years [15, 16, 14, 22], the binary nature of the result of the _DNN-Verification_ problem may hide additional information about the safety of the DNN. To address this limitation, in [16] the authors introduce the _#DNN-Verification_ problem, i.e., the extension of the decision problem to its counting version. In this problem, the input is the same as in the decision version, but we denote as \(\Gamma(\mathcal{T})\) the set of _all_ the input configurations for \(\mathcal{N}\) satisfying the property defined by \(\mathcal{P}\) and \(\mathcal{Q}\), i.e. \[\Gamma(\mathcal{T})=\left\{x\;\big{|}\;\mathcal{P}(x)\wedge\mathcal{Q}(\mathcal{N}(x))\right\} \tag{1}\] Then, the _#DNN-Verification_ problem consists of computing \(|\Gamma(\mathcal{T})|\). The approach (reported in Fig. 2) solves the problem in a sound and complete fashion, where any state-of-the-art FV tool for the decision problem can be employed to check each node of the Branch-and-Bound (Bunel et al.
## Preliminaries In this section, we discuss existing FV approaches and related key concepts on which our approach is based. In contrast to the standard robustness and adversarial attack literature [16, 15, 14], FV of DNNs seeks formal guarantees on the safety aspect of the neural network given a specific input domain of interest. Broadly speaking, if a DNN-Verification tool states that the region is provably verified, this implies that there are no adversarial examples - violation points - in that region. We recall in the next section the formal definition of the satisfiability problem for _DNN-Verification_[16]. ### DNN-Verification In the _DNN-Verification_, we have as input a tuple \(\mathcal{T}=\langle\mathcal{N},\mathcal{P},\mathcal{Q}\rangle\), where \(\mathcal{N}\) is a DNN, \(\mathcal{P}\) is precondition on the input, and \(\mathcal{Q}\) a postcondition on the output. In particular, \(\mathcal{P}\) denotes a particular input domain or region for which we require a particular postcondition \(\mathcal{Q}\) to hold on the output of \(\mathcal{N}\). Since we are interested in discovering a possible counterexample, \(\mathcal{Q}\) typically encodes the negation of the desired behavior for \(\mathcal{N}\). Hence, the possible outcomes are SAT if there exists an input configuration that lies in the input domain of interest, satisfying the predicate \(\mathcal{P}\), and for which the DNN satisfies the postcondition \(\mathcal{Q}\), i.e., at least one violation exists in the considered area, UNSAT otherwise. To provide the reader with a better intuition on the _DNN-Verification_ problem, we discuss a toy example. **Example 1**.: _(DNN-Verification) Suppose we want to verify that for the toy DNN \(\mathcal{N}\) depicted in Fig. 1 given an input vector \(x=(x_{1},x_{2})\in[0,1]\times[0,1]\), the resulting output should always be \(y\geq 0\). We define \(\mathcal{P}\) as the predicate on the input vector \(x=(x_{1},x_{2})\) which is true iff \(x\in[0,1]\times[0,1]\), and \(\mathcal{Q}\) as the predicate on the output \(y\) which is true iff \(y=\mathcal{N}(x)<0\), that is, we set \(\mathcal{Q}\) to be the negation of our desired property. As reported in Fig. 1, given the vector \(x=(1,0)\) we obtain \(y<0\), hence the verification tool returns a SAT answer, meaning that a specific counterexample exists and thus the original safety property does not hold._ ### #DNN-Verification Despite the provable guarantees and the advancement that formal verification tools have shown in recent years [15, 16, 14, 22], the binary nature of the result of the _DNN-Verification_ problem may hide additional information about the safety aspect of the DNNs. To address this limitation in [16] the authors introduce the _#DNN-Verification_, i.e., the extension of the decision problem to its counting version. In this problem, the input is the same as the decision version, but we denote as \(\Gamma(\mathcal{T})\) the set of _all_ the input configurations for \(\mathcal{N}\) satisfying the property defined by \(\mathcal{P}\) and \(\mathcal{Q}\), i.e. \[\Gamma(\mathcal{T})=\left\{x\;\big{|}\;\mathcal{P}(x)\wedge\mathcal{Q}( \mathcal{N}(x))\right\} \tag{1}\] Then, the _#DNN-Verification_ consists of computing \(|\Gamma(\mathcal{T})|\). The approach (reported in Fig.2) solves the problem in a sound and complete fashion where any state-of-the-art FV tool for the decision problem can be employed to check each Figure 1: A counterexample for a toy _DNN-Verification_ problem. node of the Branch-and-Bound (Bunel et al. 
2018) tree recursively. In detail, each node produces a partition of the input space into two equal parts as long as it contains both a point that violates the property and a point that satisfies it. The leaves of this recursion tree procedure correspond to partitioning the input space into parts where we have either complete violations or safety. Hence, the provable count of the safe areas is easily computable by summing up the cardinality of the subinput spaces in the leaves that present complete safety. Clearly, by using this method, it is possible to exactly count (and even to enumerate) the safe points. However, due to the necessity of solving a _DNN-Verification_ instance at each node (an intractable problem that might require exponential time), this approach becomes soon unfeasible and struggles to scale on real-world scenarios. In fact, it turns out that under standard complexity assumption, no efficient and scalable approach can return the exact set of areas in which a DNN is provably safe (as detailed in the next section). To address this concern, after formally defining the _AllDNN-Verification_ problem and its complexity, we propose first of all a relaxation of the problem, and subsequently, an approximate method that exploits the analysis of underestimated output reachable sets obtained using statistical prediction of tolerance limits (Wilks 1942) and provides a tight underapproximation of the safe areas with strong probabilistic guarantees. ## The AllDNN-Verification Problem The _AllDNN-Verification_ problem asks for the set of all the safe points for a particular tuple \(\langle\mathcal{N},\mathcal{P},\mathcal{Q}\rangle\). Formally: **Definition 1** (_AllDNN-Verification Problem_).: **Input**: _A tuple \(\mathcal{T}=\langle\mathcal{N},\mathcal{P},\mathcal{Q}\rangle\)._ **Output**: _the set of safe points \(\Gamma(\mathcal{T})\), as given in (1)._ Considering the example of Fig. 2 solving the _AllDNN-Verification_ problem for the safe areas consists in returning the set: \(\Gamma=\Bigg{\{}\big{[}[0.5,1],[0,1]\big{]}\cup\big{[}[0,0.24],[0,0.49]\big{]} \Bigg{\}}\). ### Hardness of _AllDNN-Verification_ From the #P-completeness of the _#DNN-Verification_ problem proved in (Marzari et al. 2023) and the fact that exact enumeration also provides exact counting it immediately follows that the _AllDNN-Verification_ is _#P-hard_, which essentially states that no polynomial algorithm is expected to exists for the _AllDNN-Verification_ problem. #### \(\epsilon\)-ProVe: a Provable (Probabilistic) Approach In view of the structural scalability issue of any solution to the _AllDNN-Verification_ problem, due to its #P-hardness, we propose to resort to an approximate solution. More precisely, we define the following approximate version of the _AllDNN-Verification_ problem: **Definition 2** (\(\epsilon\)-_Rectilinear Under-Approximation of safe areas for DNN (\(\epsilon\)-Rua-Dnn)_).: **Input**: _A tuple \(\mathcal{T}=\langle\mathcal{N},\mathcal{P},\mathcal{Q}\rangle\)._ **Output**: _a family \(\mathcal{R}=\{r_{1},\ldots,r_{m}\}\) of disjoint rectilinear \(\epsilon\)-bounded hyperrectangles such that \(\bigcup_{i}r_{i}\subseteq\Gamma(\mathcal{T})\) and \(|\Gamma(\mathcal{T})\setminus\bigcup_{i}r_{i}|\) is minimum._ A _rectilinear \(\epsilon\)-bounded hyperrectangle_ is defined as the cartesian product of intervals of size at least \(\epsilon\). 
Moreover, for \(\epsilon>0,\) we say that a rectilinear hyperrectangle \(r=\times_{i}[\ell_{i},u_{i}]\) is \(\epsilon\)_-aligned_ if for each \(i,\) both extremes \(\ell_{i}\) and \(u_{i}\) are a multiple of \(\epsilon\). The rational behind this new formulation of the problem is twofold: on the one hand, we are relaxing the request for the exact enumeration of safe points; on the other hand, we are requiring that the output is more concisely representable by means of hyperrectangles of _some significant_ size. Note that for \(\epsilon\to 0\), \(\epsilon\)-_RUA-DNN_ and _AllDNN-Verification_ become the same problem. More generally, whenever the solution \(\Gamma(\mathcal{T})\) to an instance \(\mathcal{T}\) of _AllDNN-Verification_ can be partitioned into a collection of rectilinear \(\epsilon\)-bounded hyperrectangles, \(\Gamma(\mathcal{T})\) can be attained by an optimal solution for the \(\epsilon\)-_RUA-DNN_. This allows as to tackle the _AllDNN-Verification_ problem via an efficient approach with strong probabilistic approximation guarantee to solve the \(\epsilon\)-_RUA-DNN_ problem. Our method is based on two main concepts: the analysis of an underestimated output reachable set with probabilistic guarantees and the _iterative refinement_ approach (Wang et al. 2018b). In particular, in Fig. 3 we report a schematic representation of the approach that can be set up through reachable set analysis. Let us consider a possible domain for the safety property, i.e., the polygon highlighted in light blue in the upper left corner of Fig. 3. Suppose that the undesired output reachable set is the one highlighted in red called \(\mathcal{R}^{*}\) in the bottom left part of the image, i.e., this set describes all the unsafe outcomes the DNN should never output starting from \(\mathcal{X}\). Hence, in order to formally verify that the network respects the desired safety property, the output reachable set computed from the domain of the property (i.e., the green \(\mathcal{R}\) area of the left side of the image) should have an empty intersection with the undesired reachable set (the red one). If this condition is not respected, as, e.g., in the left part of the figure, then there Figure 2: Explanatory image execution of exact count for a particular \(\mathcal{N}\) and safety property. exists at least an input configuration for which the property is not respected. To find all the portions of the property's domain where either the undesired reachable set and the output reachable set are disjoint, i.e., \(\mathcal{R}^{*}\bigcap\mathcal{R}=\emptyset\), or, dually, discover the unsafe areas where the condition \(\mathcal{R}\subseteq\mathcal{R}^{*}\) holds (as shown in the right part of Fig. 3) we can exploit the _iterative refinement_ approach [22]. However, given the nonlinear nature of DNNs, computing the exact output reachable set is infeasible. To address this issue, the reachable set is typically over-approximated, thereby ensuring the soundness of the result. Still, the relaxation of the nonlinear activation functions used to compute the over-approximated reachable set is computationally demanding. In contrast, we propose a computationally efficient solution that uses underestimation of the reachable set and construct approximate solutions for the \(\epsilon\)-_RUA-DNN_ problem with strong probabilistic guarantees. 
### Probabilistic Reachable Set Given the complexity of computing the exact minimum and maximum of the function computed by a DNN, we propose to approximate the output reachable set using a statistical approach known as _Statistical Prediction of Tolerance Limits_ (Wilks 1942). We use a Monte Carlo sampling approach: for an appropriately chosen \(n\), we sample \(n\) input points and take the smallest and the greatest value achieved in the output node as the lower and the upper extreme of our probabilistic estimate of the reachable set. The choice of the sample size is based on the results of (Wilks 1942) that allow us to choose \(n\) in order to achieve a given desired guarantee on the probability \(\alpha\) that our estimate of the output reachable set holds for at least a fixed (chosen) fraction \(R\) of a further possibly infinitely large sample of inputs. Crucially, this statistical result is independent of the probability function that describes the distribution of values in our function of interest and thus also applies to general DNNs. Stated in terms more directly applicable to our setting, the main result of (Wilks 1942) is as follows. **Lemma 1**.: _For any \(R\in(0,1)\) and integer \(n\), given a sample of \(n\) values from a (continuous) set \(X\), the probability that at least a fraction \(R\) of the values in a further, possibly infinite, sequence of samples from \(X\) are all not smaller (respectively, not larger) than the minimum value (resp. maximum value) estimated with the first \(n\) samples is given by the \(\alpha\) satisfying the following equation_ \[n\cdot\int_{R}^{1}x^{n-1}\ dx=\alpha \tag{2}\] ### Computation of Safe Regions We are now ready to give a detailed account of our algorithm \(\epsilon\)-ProVe. Our approximation is based on the analysis of an under-estimated output reachable set obtained by sampling a set of \(n\) points \(P_{A}\) from a domain of interest \(A\). We start by observing that it is possible to assume, without loss of generality, that the network has a single output node on whose reachable set we can verify the desired property [10]. For networks not satisfying this assumption, we can enforce it by adding one layer. For example, consider the network in the example of Fig. 4 and suppose we are interested in knowing if, for a given input configuration in a domain \(A=[0,1]\times[0,1]\), the output \(y_{1}\) is always less than \(y_{2}\). By adding a new output layer with a single node \(y^{*}\) connected to \(y_{1}\) with weight \(-1\) and to \(y_{2}\) with weight \(1\), the required condition reduces to checking that all the values in the reachable set for \(y^{*}\) are positive. In general, from the analysis of the underestimated reachable set of the output node, computed as \(\mathcal{R}=[\min_{i=1,\ldots,n}y_{i},\ \max_{i=1,\ldots,n}y_{i}]\), we can obtain one of these three conditions: \[\begin{cases}A\text{ is unsafe}&\text{upper bound of }\mathcal{R}<0\\ A\text{ is safe}&\text{lower bound of }\mathcal{R}\geq 0\\ unknown&\text{otherwise}\end{cases} \tag{3}\] With reference to the toy example in Fig. 4, suppose we sample only \(n=2\) input configurations, \((1,0)\) and \((0,1)\), which, when propagated through \(\mathcal{N}\), produce the vector \(y^{*}=[8,5]\) in the new output layer. This results in the estimated reachable set \(\mathcal{R}=[5,8]\). Since the lower bound of this interval is positive, we conclude that the region \(A\), under consideration, is completely safe. 
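To make this construction concrete, the following minimal Python sketch (our own illustration, not the authors' released code) estimates the reachable set of a single-output network over a box-shaped region and applies the decision rule of (3). Here `net` stands in for the augmented network \(\mathcal{N}^{\prime}\), e.g., a function computing \(y_{2}-y_{1}\) for the two-output example of Fig. 4, and uniform sampling over \(A\) is one admissible choice, since Lemma 1 is distribution-independent:

```python
import numpy as np

def estimate_reachable_set(net, lows, highs, n=3500, rng=None):
    """Probabilistic reachable set of a single-output `net` over the box
    [lows, highs], estimated as [min, max] over n sampled inputs."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(lows, highs, size=(n, len(lows)))  # n samples from A
    y = net(x)                                          # shape (n,)
    return y.min(), y.max()

def classify_region(lb, ub):
    """Decision rule of Eq. (3) on the estimated reachable set [lb, ub]."""
    if ub < 0:
        return "unsafe"
    if lb >= 0:
        return "safe"
    return "unknown"

# Solving Eq. (2) gives alpha = 1 - R**n: with R = 0.995 and n = 3500
# samples, the per-region tolerance-limit guarantee holds with
# alpha ~ 1 - 2.4e-8.
R, n = 0.995, 3500
alpha = 1 - R ** n
```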
To confirm the correctness of our construction, we can check the partial values of the original output layer and notice that no input generates \(y_{1}\geq y_{2}\). Specifically, if all inputs result in \(y_{1}\geq y_{2}\) (violating the specification we are trying to verify), then the reachable set must have, by construction, a negative upper bound, leading to the correct conclusion that the area is unsafe. On the other hand, if only some inputs produce \(y_{1}\geq y_{2}\), then we obtain a reachable set with a negative lower bound and a positive upper bound; thus, we cannot state whether the area is unsafe or not, and we should proceed with an interval refinement process. Figure 3: Explanatory image of how to exploit the reachable set result for solving the _AllDNN-Verification_ problem. Figure 4: Example of the computation of a single reachable set for a DNN with two outputs. Hence, this approach allows us to obtain the situations shown to the right of Fig. 3, that is, a situation in which the reachable set is either completely positive (\(A\) safe) or completely negative (\(A\) unsafe). We present the complete pipeline of \(\epsilon\)-ProVe in Algorithm 1. Our approach receives as input a standard tuple for the _DNN-Verification_ and creates the augmented DNN \(\mathcal{N}^{\prime}\) (line 3) following the intuitions provided above. Moreover, we initialize the set of safe regions as an empty set and the unchecked regions as the entire domain of the safety specification encoded in \(\mathcal{P}\) (line 5). Inside the loop (line 6), our approximation iteratively considers one area \(A\) at a time and begins by computing the reachable set, as shown above. We then proceed with the analysis of the computed interval: in case we obtain a positive reachable set, i.e., the lower bound is positive (lines 9-10), the area under consideration is deemed safe and stored in the set of safe regions we are enumerating. On the other hand, if the interval is negative, that is, the upper bound is negative, then we ignore the area and proceed (lines 11-12). Finally, if we are not in any of these cases, we cannot draw any conclusion about the nature of the region we are checking, and therefore we must proceed with splitting the area according to the heuristic we prefer (lines 13-14). The loop ends when either we have checked all areas of the domain of interest, or we have reached the \(\epsilon\)-precision on the iterative refinement. In detail, given the continuous nature of the domain, it is always possible to split an interval into two subparts, that is, the process could continue indefinitely in the worst case. For this reason, as is the case for other state-of-the-art FV methods that are based on this approach, we use a parameter to decide when to stop the process. This does not affect the correctness of the output since our goal is to (tightly) underapproximate the safe regions; thus, in case the \(\epsilon\)-precision is reached, the area under consideration is simply not included in the set that the algorithm returns, preserving the correctness of the result. Although the level of precision can be set arbitrarily, it does have an effect on the performance of the method. In the supplementary material, we discuss the impact that different heuristics and different settings of hyper-parameters have on the resulting approximation. 
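A compact Python rendering of this loop, reusing `estimate_reachable_set` from the sketch above, may help fix ideas (the line numbers in the text refer to Algorithm 1 in the paper; `split` and `too_small` below are placeholder helpers for the refinement heuristic and the \(\epsilon\)-precision test, and the whole sketch is ours):

```python
def eps_prove(net, domain, n=3500, eps=0.001):
    """Sketch of the eps-ProVe loop: returns an under-approximation of the
    safe regions. `domain` is a list of (low, high) intervals and `net` is
    the augmented single-output network N'."""
    safe, unchecked = [], [domain]
    while unchecked:
        A = unchecked.pop()
        lb, ub = estimate_reachable_set(net, *map(list, zip(*A)), n=n)
        if lb >= 0:                   # positive reachable set: A is safe
            safe.append(A)
        elif ub < 0:                  # negative reachable set: discard A
            continue
        elif not too_small(A, eps):   # unknown: refine unless eps-precision hit
            unchecked.extend(split(A))
    return safe

def too_small(A, eps):
    # eps-precision test: every side of A has size at most eps.
    return all(hi - lo <= eps for lo, hi in A)

def split(A):
    # One possible heuristic: bisect the widest dimension into two halves.
    i = max(range(len(A)), key=lambda j: A[j][1] - A[j][0])
    lo, hi = A[i]
    mid = (lo + hi) / 2
    return [A[:i] + [(lo, mid)] + A[i + 1:], A[:i] + [(mid, hi)] + A[i + 1:]]
```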
### Theoretical Guarantees In this section, we analyze the theoretical guarantees that our approach can provide. We assume that the IntervalRefinement procedure consists of iteratively choosing one of the dimensions of the input domain and splitting the area into two halves of equal size, as in (Wang et al. 2018b). The theoretical guarantee easily extends to any other heuristic, provided that each split produces two parts, both at least a fixed constant fraction \(\beta\) of the subdivided area. Moreover, we assume that reaching the \(\epsilon\)-precision is implemented as testing that the area has reached size \(\epsilon^{d}\), i.e., it is the Cartesian product of \(d\) intervals of size \(\epsilon\). It follows that, by definition, the areas output by \(\epsilon\)-ProVe are \(\epsilon\)-bounded and \(\epsilon\)-aligned. The following proposition is the basis of the approximation guarantee (in terms of the size of the safe area returned) on the solution output by \(\epsilon\)-ProVe on an instance of the \(\epsilon\)-_RUA-DNN_ problem. **Proposition 2**.: _Fix a real number \(\epsilon>0\), an integer \(k\geq 3\), and a real \(\gamma>k\epsilon\). Let \(\mathcal{T}\) be an instance of the \(\epsilon\)-RUA-DNN problem. Then for any solution \(\mathcal{R}=\{r_{1},\ldots,r_{m}\}\) such that for each \(i=1,\ldots,m\), \(r_{i}\) is \(\gamma\)-bounded, there is a solution \(\mathcal{R}^{(\epsilon)}=\{r_{1}^{(\epsilon)},\ldots,r_{m}^{(\epsilon)}\}\) such that each \(r_{i}^{(\epsilon)}\) is \(\epsilon\)-aligned and \(||\mathcal{R}^{(\epsilon)}||\geq\left(\frac{k-2}{k}\right)^{d}||\mathcal{R}||\), where \(d\) is the number of dimensions of the input space, and for every solution \(\mathcal{R}^{\prime}\), \(||\mathcal{R}^{\prime}||=|\cup_{i}r_{i}|\) is the total area covered by the hyperrectangles in \(\mathcal{R}^{\prime}\)._ The result is obtained by applying the following lemma to each hyperrectangle of the solution \(\mathcal{R}\). **Lemma 3**.: _Fix a real number \(\epsilon>0\) and an integer \(k\geq 3\). For any \(\gamma>k\epsilon\) and any \(\gamma\)-bounded rectilinear hyperrectangle \(r\subseteq\mathbb{R}^{d},\) there is an \(\epsilon\)-aligned rectilinear hyperrectangle \(r^{(\epsilon)}\) such that: (i) \(r^{(\epsilon)}\subseteq r\); and (ii) \(|r^{(\epsilon)}|\geq\left(\frac{k-2}{k}\right)^{d}|r|\)._ Figure 5: An example of applying Lemma 3 with \(k=3\). Fig. 5 gives a pictorial explanation of the lemma. In the example shown, \(k=3\) and the parameter \(\epsilon\) is the unit of the grid, which we can imagine superimposed on the two-dimensional (\(d=2\)) space in which the hyperrectangles live. Hence \(\gamma=3\epsilon\). The non-\(\epsilon\)-aligned \(r_{i}\) (depicted in blue) is \(\gamma\)-bounded, since its width \(w=(b_{1}-a_{1})=4\epsilon\) and its height \(h=(b_{2}-a_{2})=3\epsilon\) are both \(\geq 3\epsilon\). Hence, measuring lengths in grid units, it completely covers at least \(w-2\) columns and \(h-2\) rows of the grid. These rows and columns define the green \(\epsilon\)-aligned hyperrectangle \(r_{i}^{(\epsilon)},\) of dimension \[\geq(w-2)\cdot(h-2)\geq\frac{k-2}{k}w\cdot\frac{k-2}{k}h=\left(\frac{k-2}{k} \right)^{d}|r_{i}|.\] The following theorem summarizes the coverage approximation guarantee and the confidence guarantee on the safety nature of the areas returned by \(\epsilon\)-ProVe. **Theorem 4**.: _Fix a positive integer \(d\) and real values \(\epsilon,\alpha,R\in(0,1),\) with \(R>1-\epsilon^{d}.\) Let \(\mathcal{T}\) be an instance of the AllDNN-Verification with input in \([0,1]^{d}\) (this assumption is w.l.o.g. modulo some normalization), and let \(k\) be the largest integer such that \(\Gamma(\mathcal{T})\) can be partitioned into \(k\epsilon\)-bounded rectilinear hyperrectangles. 
Let \(\mathcal{R}^{(\epsilon)}=\{r_{i}^{(\epsilon)}\mid i=1,\ldots,m\}\) be the set of areas returned by \(\epsilon\)-ProVe using \(n\) samples at each iteration, with \(n\geq\log_{R}(1-\alpha^{1/m})\). Then,_ 1. _(coverage guarantee) with probability_ \(1-R^{n}\) _the solution_ \(\mathcal{R}^{(\epsilon)}\) _is a_ \(\left(\frac{k-2}{k}\right)^{d}\) _approximation of_ \(\Gamma(\mathcal{T}),\) _i.e.,_ \(||\mathcal{R}^{(\epsilon)}||\geq\left(\frac{k-2}{k}\right)^{d}|\Gamma(\mathcal{ T})|\)_;_ 2. _(safety guarantee) with probability at least_ \(\alpha\)_, in each hyperrectangle_ \(r\in\mathcal{R}^{(\epsilon)}\) _at most_ \((1-R)\cdot|r|\) _points are not safe._ This theorem gives two types of guarantees on the solution returned by \(\epsilon\)-ProVe. Specifically, point 2. states that for any \(R<1\) and \(\alpha<1\), \(\epsilon\)-ProVe can guarantee that, with probability \(\alpha\), no more than a fraction \((1-R)\) of the points classified as safe can, in fact, be violations. Moreover, point 1. guarantees that, provided the space of safe points is not too scattered (formalized by the existence of some representation in \(k\epsilon\)-bounded hyperrectangles), the total area returned by \(\epsilon\)-ProVe is guaranteed to be close to the actual \(\Gamma(\mathcal{T})\). Finally, the theorem shows that the two guarantees are attainable in an efficient way, providing a quantification of the size \(n\) of the sample needed at each iteration. Note that the value of \(m\) needed in defining \(n\) can either be estimated, e.g., by a standard doubling technique, or be set using the upper limit \(2^{d\log(1/\epsilon)}\), which is the maximum number of possible split operations performed before reaching the \(\epsilon\)-precision limit. Proof.: (Sketch) The safety guarantee (item 2.) is a direct consequence of Lemma 1. In fact, a hyperrectangle \(r\) is returned as safe if all the \(n\) sampled points from \(r\) are not violations, i.e., their output is \(\geq 0\) (see (3)). By Lemma 1, at most a fraction \((1-R)\) of the points in \(r\) can give an output \(<0\), i.e., can be a violation, where, from the solution of (2), \[\hat{\alpha}=(1-R^{n}). \tag{4}\] Since samples are chosen independently in different hyperrectangles, this bound on the number of violations in a hyperrectangle of \(\mathcal{R}^{(\epsilon)}\) holds simultaneously for all \(m\) of them with probability \(\hat{\alpha}^{m}\). Taking \(\alpha=\hat{\alpha}^{m}\), and solving (4) for \(n\), we get \(n\geq\log_{R}(1-\alpha^{1/m})\) as desired. For the coverage guarantee, we start by noticing that, under the hypotheses on \(k\), Proposition 2 guarantees the existence of a solution \(\mathcal{R}^{*}_{1}\) made of \(\epsilon\)-bounded and \(\epsilon\)-aligned rectilinear hyperrectangles. Let \(\mathcal{R}^{*}_{2}\) be a solution obtained from \(\mathcal{R}^{*}_{1}\) by partitioning each hyperrectangle into pieces of minimum possible size \(\epsilon^{d}\), each one \(\epsilon\)-aligned. The first observation is that, since the solution produced by \(\epsilon\)-ProVe is made of \(\epsilon\)-bounded and \(\epsilon\)-aligned rectilinear hyperrectangles, \(\mathcal{R}^{*}_{2}\) is also among the solutions possibly returned by the algorithm. We now observe that with probability \(1-R^{n}\) each hyperrectangle \(r^{\prime}\) in \(\mathcal{R}^{*}_{2}\) must be contained in some hyperrectangle \(r\) in the solution \(\mathcal{R}^{(\epsilon)}\) returned by \(\epsilon\)-ProVe. 
First, note that if, over the iterations of the execution of \(\epsilon\)-ProVe, \(r^{\prime}\) keeps being contained in an area where both safe and violation points are sampled, then eventually \(r^{\prime}\) will itself become an area to analyze. At such a step, clearly every sample in \(r^{\prime}\) will be safe and \(r^{\prime}\) will be included in \(\mathcal{R}^{(\epsilon)},\) as desired. Therefore, the only possibility for \(r^{\prime}\) not to be contained in any \(r\in\mathcal{R}^{(\epsilon)}\) is that at some iteration an area \(A\supseteq r^{\prime}\) is analyzed and all the \(n\) points sampled in \(A\) are violation points, so that the whole \(A\) (including \(r^{\prime}\)) is discarded (as unsafe) by \(\epsilon\)-ProVe. However, by Lemma 1, with probability \((1-R^{n})\) this can happen only if \(\epsilon^{d}=|r^{\prime}|<(1-R)|A|,\) which contradicts the hypotheses. Hence, with probability \((1-R^{n})\), \(r^{\prime}\) must be contained in a hyperrectangle of \(\mathcal{R}^{(\epsilon)}\), concluding the argument. Fig. 6 (left) shows the relation between the number of points to be sampled and the number of areas obtained by \(\epsilon\)-ProVe when we want to obtain a total confidence of \(\alpha=99.9\%\) and a lower bound \(R=99.5\%\). As the plot shows, if we compute our output reachable set by sampling \(n=3250\) points, we are able to obtain the desired confidence and lower bound as long as the number of regions is in \([1,10000]\). For this reason, in all our empirical evaluations, we use \(n=3500\) to compute \(\mathcal{R}\). An explanatory example of the possible result achievable using our approach is depicted in Fig. 6 (right). Figure 6: Left: relation between the points to sample and the number of (un)safe areas using \(\epsilon\)-ProVe to obtain a confidence \(\alpha=99.9\%\) and a lower bound \(R=99.5\%\). Right: example of a set of safe regions (in green) returned by \(\epsilon\)-ProVe (scaled x100). 
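The sample size prescribed by Theorem 4 is straightforward to compute; a small sketch (ours, with the values of \(\alpha\), \(R\), and \(m\) taken from the setup above):

```python
import math

def sample_size(alpha, R, m):
    """n >= log_R(1 - alpha^(1/m)) from Theorem 4: samples per iteration so
    that the safety guarantee holds simultaneously over m returned regions."""
    return math.ceil(math.log(1 - alpha ** (1 / m)) / math.log(R))

# With alpha = 99.9%, R = 99.5%, and up to m = 10,000 regions, this gives
# n = 3216, consistent with the n = 3250 read off Fig. 6 (left) and safely
# below the n = 3500 used in the experiments.
print(sample_size(0.999, 0.995, 10_000))
```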
## Empirical Evaluation In this section, we evaluate the scalability of our approach, and we validate the theoretical guarantees discussed in the previous section. Our analysis considers both simple DNNs, to analyze the theoretical guarantees in detail, and two real-world scenarios to evaluate scalability. The first scenario is ACAS xu [16], an airborne collision avoidance system for aircraft, which is a well-known standard benchmark for formal verification of DNNs [16, 17, 18]. The second scenario considers DNNs trained and employed for autonomous mapless navigation tasks in a Deep Reinforcement Learning (DRL) context [16, 15, 14]. All the data are collected on a commercial PC equipped with Apple M2 silicon. The code used to collect the results and several additional experiments and discussions on the impact of different heuristics for our approximation are available in the supplementary material. ### Correctness and Scalability Experiments These experiments aim to assess the correctness and scalability of our approach. Specifically, for each model tested, we used \(\epsilon\)-ProVe to return the set of safe regions in the domain of the property under consideration (we recall that the same can similarly be done for the set of unsafe areas). All data are collected with parameters \(\alpha_{TOT}=99.9\%\), \(R=99.5\%\), and \(n=3500\) points to compute the reachable set used for the analysis. The results are presented in Table 1. For all experiments, we report the number of safe regions returned by \(\epsilon\)-ProVe (for which we also know the hyperrectangles' positions in the property domain), the percentage of safe areas relative to the total starting area (i.e., the safe rate), and the computation time. Moreover, we include a comparison, measured as percentage distance, of the safe rate computed with alternative methods, such as an exact enumeration method (whenever feasible, given the scalability issue discussed above) and a Monte Carlo (MC) sampling approach using a large number of samples (i.e., 1 million). It is important to note that MC sampling only provides a probabilistic estimate of the safe rate, lacking information about the location of safe regions in the input domain. The first block of Table 1 involves two-dimensional models with two hidden layers of 32 nodes activated with ReLU. The safety property consists of all the intervals of \(\mathcal{P}\) in the range \([0,1]\) and a postcondition \(\mathcal{Q}\) that encodes a strictly positive output. Notably, \(\epsilon\)-ProVe is able to return the set of safe regions in a fraction of a second, and the safe rate returned by our approximation deviates at most 1.75% from the one computed by an exact count, which shows the tightness of the bound returned by our approach. In the second block of Tab. 1, the Mapless Navigation (MN) DNNs are composed of 22 inputs, two hidden layers of 64 nodes activated with ReLU, and, finally, an output space composed of five nodes, which encode the possible actions of the robot. We test a behavioral safety property where \(\mathcal{P}\) encodes a potentially unsafe situation (e.g., _there is an obstacle in front_), and the postcondition \(\mathcal{Q}\) specifies the unsafe action that should not be selected (\(\bigvee_{i=2,3}y_{1}<y_{i}\) if \(y_{1}\) encodes a forward movement). The table illustrates how increasing input space and complexity affects computation time. Nevertheless, the proposed approximation remains efficient even for the ACAS xu tests, returning results within seconds. Crucially, focusing on \(Model\_MN\_2\) and \(\phi_{2}\) ACAS Xu_3.3, \(\epsilon\)-ProVe states that the whole property domain is safe (i.e., there are no violation points). The correctness of these results was verified by employing VeriNet [1], a state-of-the-art FV tool. ## Discussion We studied the _AllDNN-Verification_ problem, a novel problem in the area of FV of DNNs asking for the set of all the safe regions for a given safety property. Due to the #P-hardness of the problem, we proposed an approximation approach, \(\epsilon\)-ProVe, which is, to the best of our knowledge, the first method able to efficiently approximate the safe regions with some guarantees on the tightness of the solution returned. We believe \(\epsilon\)-ProVe is an important step towards providing consistent and effective tools for analyzing safety in DNNs. An interesting future direction could be testing the enumeration of the unsafe regions for subsequent patching or safe retraining on the areas discovered by our approach. 
\begin{table} \begin{tabular}{|c||c c c||c c||c|} \hline **Instance** & \multicolumn{3}{c||}{\(\epsilon\)**-ProVe** (\(\alpha_{TOT}=99.9\%\))} & \multicolumn{2}{c||}{**Exact count** or _MC sampling_} & \multicolumn{1}{c|}{**Under-estimation (\% distance)**} \\ & \# Safe regions & Safe rate & Time & Safe rate & Time & \\ \hline Model\_2\_20 & 335 & 78.50\% & 0.4s & 79.1\% & 234min & 0.74\% \\ Model\_2\_56 & 251 & 43.69\% & 0.3s & 44.46\% & 196min & 1.75\% \\ \hline Model\_MN\_1 & 545 & 64.72\% & 60.6s & _67.59\%_ & _0.6s_ & 4.24\% \\ Model\_MN\_2 & 1 & 100\% & 0.4s & _100\%_ & _0.4s_ & - \\ \hline \(\phi_{2}\) ACAS Xu\_2\_1 & 2462 & 97.47\% & 26.9s & 99.25\% & _0.6s_ & 1.81\% \\ \(\phi_{2}\) ACAS Xu\_3.3 & 1 & 100\% & 0.4s & _100\%_ & _0.5s_ & - \\ \hline \end{tabular} \end{table} Table 1: Comparison of \(\epsilon\)-ProVe and the Exact count or Monte Carlo (MC) sampling approach on different benchmark setups. Full results and other experiments are reported in the supplementary material.
2305.11997
Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees
There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly. Towards finding robust counterfactuals, existing literature often assumes that the original model $m$ and the new model $M$ are bounded in the parameter space, i.e., $\|\text{Params}(M){-}\text{Params}(m)\|{<}\Delta$. However, models can often change significantly in the parameter space with little to no change in their predictions or accuracy on the given dataset. In this work, we introduce a mathematical abstraction termed $\textit{naturally-occurring}$ model change, which allows for arbitrary changes in the parameter space such that the change in predictions on points that lie on the data manifold is limited. Next, we propose a measure -- that we call $\textit{Stability}$ -- to quantify the robustness of counterfactuals to potential model changes for differentiable models, e.g., neural networks. Our main contribution is to show that counterfactuals with sufficiently high value of $\textit{Stability}$ as defined by our measure will remain valid after potential $\textit{naturally-occurring}$ model changes with high probability (leveraging concentration bounds for Lipschitz function of independent Gaussians). Since our quantification depends on the local Lipschitz constant around a data point which is not always available, we also examine practical relaxations of our proposed measure and demonstrate experimentally how they can be incorporated to find robust counterfactuals for neural networks that are close, realistic, and remain valid after potential model changes. This work also has interesting connections with model multiplicity, also known as, the Rashomon effect.
Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
2023-05-19T20:48:05Z
http://arxiv.org/abs/2305.11997v3
# Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees ###### Abstract There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly. Towards finding robust counterfactuals, existing literature often assumes that the original model \(m\) and the new model \(M\) are bounded in the parameter space, i.e., \(\|\text{Params}(M)-\text{Params}(m)\|<\Delta\). However, models can often change significantly in the parameter space with little to no change in their predictions or accuracy on the given dataset. In this work, we introduce a mathematical abstraction termed _naturally-occurring_ model change, which allows for arbitrary changes in the parameter space such that the change in predictions on points that lie on the data manifold is limited. Next, we propose a measure - that we call _Stability_ - to quantify the robustness of counterfactuals to potential model changes for differentiable models, e.g., neural networks. Our main contribution is to show that counterfactuals with sufficiently high value of _Stability_ as defined by our measure will remain valid after potential "naturally-occurring" model changes with high probability (leveraging concentration bounds for Lipschitz functions of independent Gaussians). Since our quantification depends on the local Lipschitz constant around a data point which is not always available, we also examine practical relaxations of our proposed measure and demonstrate experimentally how they can be incorporated to find robust counterfactuals for neural networks that are close, realistic, and remain valid after potential model changes. This work also has interesting connections with model multiplicity, also known as the Rashomon effect. ## 1 Introduction Counterfactual explanations (Wachter et al., 2017; Karimi et al., 2020; Barocas et al., 2020) have garnered significant interest in various high-stakes applications, such as lending, hiring, etc. Counterfactual explanations aim to guide an applicant on how they can change a model outcome by providing suggestions for improvement. Given an original data-point (e.g., an applicant who is denied a loan), the goal is to try to find a point on the other (desired) side of the decision boundary (a hypothetical applicant who is approved for the loan) which also satisfies several other preferred constraints, such as (i) proximity to the original point; (ii) changes in as few features as possible; and (iii) conforming to the data manifold. Such a data-point that alters the model decision is widely referred to as a "counterfactual." However, in several real-world scenarios, such as credit lending, the models making these high-stakes decisions have to be updated for various reasons (Upadhyay et al., 2021; Black et al., 2021), e.g., to retrain on a few additional data points, change the hyper-parameters or seed, or transition to a different model class (Pawelczyk et al., 2020). Such model changes can often cause the counterfactuals to become invalid because typically they are quite close to the original data point, and hence, also quite close to the decision boundary. For instance, suppose the counterfactual explanation suggests that an applicant increase their income by \$10K to get approved for a loan and they actually act upon it, but now, due to updates to the original model, they are still denied by the updated model. 
If counterfactuals become invalid due to model updates, this can lead to confusion and distrust in the use of algorithms in high-stakes applications altogether. Users would typically act on the suggested counterfactuals over a period of time, e.g., increase their income for credit lending, but only to find that it is no longer enough since the model has slightly changed (perhaps due to retraining with a new seed or hyperparameter). This cycle of invalidation and regenerating new counterfactuals can not only be frustrating and time-consuming for users but can also hurt an institution's reputation. This motivates our primary question: _How do we provide theoretical guarantees on the robustness of counterfactuals to potential model changes?_ Towards addressing this question, in this work, we introduce the abstraction of "naturally-occurring" model change for differentiable models. Our abstraction allows for arbitrary changes in the parameter space such that the change in predictions on points that lie on the data manifold is limited. This abstraction motivates a measure of robustness for counterfactuals that comes with provable probabilistic guarantees on their validity under naturally-occurring model change. We also introduce the notion of "targeted" model change and provide an impossibility result for such model change. Next, by leveraging a _computable relaxation_ of our proposed measure of robustness, we then design and implement algorithms to find robust counterfactuals for neural networks. Our experimental results validate our theoretical understanding and illustrate the efficacy of our proposed algorithms. We summarize our contributions here: * **Abstraction of "naturally-occurring" model change for differentiable models:** Existing literature (Upadhyay et al., 2021; Black et al., 2021) on robust counterfactuals often assumes that the original model \(m\) and the new model \(M\) are bounded in the parameter space, i.e., \(\|\text{Params}(M)-\text{Params}(m)\|{<}\Delta\). Building on Dutta et al. (2022) for tree-based models, we note that models can often change significantly in the parameter space with little to no change in their predictions or accuracy on the given dataset. To capture this, we introduce an abstraction (see Definition 5), that we call _naturally-occurring_ model change, which instead allows for arbitrary changes in the parameter space such that the change in predictions on points that lie on the data manifold is limited. Our proposed abstraction of naturally-occurring model change also has interesting connections with predictive/model multiplicity, also known as the Rashomon Effect (Breiman, 2001; Marx et al., 2020). * **A measure of robustness with probabilistic guarantees:** We propose a measure - that we call Stability - to quantify the robustness of counterfactuals to potential model changes for differentiable models. Stability of a counterfactual \(x\in\mathbb{R}^{d}\) with respect to a model \(m(\cdot)\) is given by: \[R_{k,\sigma^{2}}(x,m)=\frac{1}{k}\sum_{x_{i}\in N_{x,k}}\left(m(x_{i})-\gamma_{x}\|x-x_{i}\|\right),\] where \(N_{x,k}\) is a set of \(k\) points in \(\mathbb{R}^{d}\) drawn from the Gaussian distribution \(\mathcal{N}(x,\sigma^{2}\text{I}_{d})\) with \(\text{I}_{d}\) being the identity matrix, and \(\gamma_{x}\) is the local Lipschitz constant of the model \(m(\cdot)\) around \(x\) (Definition 6). 
Our main contribution in this work is to provide a theoretical guarantee (see Theorem 1) that counterfactuals with a sufficiently high value of Stability (as defined by our measure) will remain valid with high probability after "naturally-occurring" model change. Our result leverages concentration bounds for Lipschitz functions of independent Gaussian random variables (see Lemma 3). Since our proposed Stability measure depends on the local Lipschitz constant which is not always available, we also examine practical relaxations of our measure of the form: \[\hat{R}_{k,\sigma^{2}}(x,m)=\frac{1}{k}\sum_{x_{i}\in N_{x,k}}\left(m(x_{i}) - \left|m(x)-m(x_{i})\right|\right).\] The first term essentially captures the mean value of the model output in a region around the counterfactual (a higher mean is expected to be more robust and reliable). The second term captures the local variability of the model output around it (lower variability is expected to be more reliable). This intuition is in alignment with the results in Dutta et al. (2022) for tree-based models. * **Impossibility under targeted model change:** We also make a clear distinction between our proposed naturally-occurring and targeted model change. Under targeted model change, we provide an impossibility result (see Theorem 2) that given any counterfactual for a model, one can always design a new model that is quite similar to the original model and that renders that particular counterfactual invalid. However, in this work, our focus is on non-targeted model change such as retraining on a few additional data points, changing some hyperparameters or seed, etc., for which we have defined the abstraction of "naturally-occurring" model change (see Definition 5). * **Experimental results:** We explore methods for incorporating our relaxed measure into generating robust counterfactuals for neural networks. We introduce T-Rex:I (Algorithm 1), which finds robust counterfactuals that are close to the original data point. T-Rex:I can be integrated into any base technique for generating counterfactuals to improve robustness. We also propose T-Rex:NN (Algorithm 2), which generates robust counterfactuals that are data-supported, making them more realistic (along the lines of Dutta et al. (2022) for tree-based models). Our experiments show that T-Rex:I can improve robustness for neural networks without significantly increasing the cost, and T-Rex:NN consistently generates counterfactuals that are similar to the data manifold, as measured using the local outlier factor (LOF). ### Related Works Counterfactual explanations have seen growing interest in recent years (Verma et al., 2020; Karimi et al., 2020; Wachter et al., 2017). Regarding their robustness to model changes, Pawelczyk et al. (2020); Kanamori et al. (2020); Poyiadzi et al. (2020) argue that counterfactuals situated on the data manifold are more likely to be robust than the closest counterfactuals. Later, Dutta et al. (2022) demonstrate that generating counterfactuals on the data manifold may not be sufficient for robustness. While the importance of robustness in local explanation methods has been emphasized (Hancox-Li, 2020), the problem of specifically generating robust counterfactuals has been less explored, with the notable exceptions of some recent works (Upadhyay et al., 2021; Rawal et al., 2020; Black et al., 2021; Dutta et al., 2022; Jiang et al., 2022). In Upadhyay et al. 
(2021), the authors propose an algorithm called ROAR that uses min-max optimization to find the _closest_ counterfactuals that are also robust. In Rawal et al. (2020), the focus is on analytical trade-offs between validity and cost. Jiang et al. (2022) introduces a method for identifying close and robust counterfactuals based on a framework that utilizes interval neural networks. Black et al. (2021) suggest that local Lipschitzness can be leveraged to generate consistent counterfactuals and propose an algorithm called Stable Neighbor Search to generate consistent counterfactuals for neural networks. Our research builds on this perspective and further performs Gaussian sampling around the counterfactual, leading to a novel estimator for which we are also able to provide probabilistic guarantees going beyond the bounded-model-change assumption. Furthermore, examining all three performance metrics, namely, cost, validity (robustness), and likeness to the data manifold, has received less attention, with the notable exception of Dutta et al. (2022), which focuses only on (non-differentiable) tree-based models. We also refer to Mishra et al. (2021) for a survey. We note that Laugel et al. (2019); Alvarez-Melis and Jaakkola (2018) propose an alternate perspective of robustness in explanations (called \(L\)-stability in Alvarez-Melis and Jaakkola (2018)) which is built on similar individuals receiving similar explanations. Pawelczyk et al. (2022); Maragno et al. (2023); Dominguez-Olmedo et al. (2022) focus on finding counterfactuals that are robust to small input perturbations (noisy counterfactuals). In contrast, our focus is on counterfactuals remaining valid after some changes to the model, and on providing theoretical guarantees thereof. Our work also shares interesting conceptual connections with a body of work on model multiplicity or predictive multiplicity, also known as the Rashomon effect (Breiman, 2001; Marx et al., 2020; Black et al., 2022; Hsu and Calmon, 2022). Breiman (2001) suggested that models can be very different from each other but have almost similar performance on the data manifold. The term predictive multiplicity was suggested by Marx et al. (2020), who define it as the ability of a prediction problem to admit competing models with conflicting predictions and introduce formal measures to evaluate its severity. Black et al. (2022) investigates ways to leverage model multiplicity beneficially in model selection processes while simultaneously addressing its concerning implications. Watson-Daniels et al. (2023) offered a framework for measuring predictive multiplicity in classification, introducing measures that encapsulate the variation in risk estimates over the ensemble of competing models. Hsu and Calmon (2022) unveiled a novel metric, Rashomon Capacity, for measuring predictive multiplicity in probabilistic classification. Our proposed abstraction of naturally-occurring model change, as explored in this work, can be viewed as a fresh perspective on model multiplicity that further emphasizes the models that are more likely to occur. ## 2 Preliminaries Let \(\mathcal{X}\subseteq\mathbb{R}^{d}\) denote the input space and let \(\mathcal{S}{=}\{x_{i}\in\mathcal{X}\}_{i=1}^{n}\) be a dataset consisting of \(n\) independent and identically distributed data points generated from a density \(q\) over \(\mathcal{X}\). 
We also let \(m(\cdot):\mathbb{R}^{d}\rightarrow[0,1]\) denote the original machine learning model that takes a \(d\)-dimensional input value and produces an output probability lying between \(0\) and \(1\). The final decision is denoted by \(\mathbb{1}(m(x)\geq 0.5)\) where \(\mathbb{1}(\cdot)\) denotes the indicator function. **Definition 1** (\(\gamma-\)Lipschitz).: _A function \(m(\cdot)\) is said to be \(\gamma-\)Lipschitz if \(|m(x)-m(x^{\prime})|{\leq}\gamma\|x-x^{\prime}\|\) for all \(x,x^{\prime}{\in}\mathbb{R}^{d}\)._ Here \(\|\cdot\|\) denotes the Euclidean norm, i.e., for \(u\in\mathbb{R}^{d}\), we have \(\|u\|=\sqrt{u_{1}^{2}+u_{2}^{2}+\ldots+u_{d}^{2}}\). In Remark 2, we also discuss relaxations to local Lipschitz constants from global Lipschitz constants. We denote the updated or changed model as \(M(\cdot):\mathbb{R}^{d}\rightarrow[0,1]\) where \(M\) is a random entity. We mostly use capital letters to denote random entities, e.g., \(M\), \(X\), etc., and small letters to denote non-random entities, e.g., \(m\), \(x\), \(\gamma\), \(n\), etc. ### Background on Counterfactuals **Definition 2** (Closest Counterfactual \(\mathcal{C}_{p}(x,m)\)).: _Given \(x\in\mathbb{R}^{d}\) such that \(m(x)<0.5\), its closest counterfactual (in terms of \(l_{p}\)-norm) with respect to the model \(m(\cdot)\) is defined as a point \(x^{\prime}\in\mathbb{R}^{d}\) that minimizes the \(l_{p}\) norm \(\|x-x^{\prime}\|_{p}\) such that \(m(x^{\prime})\geq 0.5\)._ \[\mathcal{C}_{p}(x,m)=\arg\min_{x^{\prime}\in\mathbb{R}^{d}}\|x-x^{\prime}\|_{ p}\text{ such that }m(x^{\prime})\geq 0.5.\] When one is interested in finding counterfactuals by changing as few features as possible, the \(l_{1}\) norm is used (enforcing a sparsity constraint). These counterfactuals are also called _sparse_ counterfactuals (Pawelczyk et al., 2020). Closest counterfactuals often fall too far from the data manifold, resulting in unrealistic and anomalous instances, as noted in Poyiadzi et al. (2020); Pawelczyk et al. (2020); Kanamori et al. (2020); Verma et al. (2020); Karimi et al. (2020); Albini et al. (2022). This highlights the need for generating counterfactuals that lie on the data manifold. **Definition 3** (Closest Data-Manifold Counterfactual \(\mathcal{C}_{p,\mathcal{X}}(x,m)\)).: _Given \(x\in\mathbb{R}^{d}\) such that \(m(x)<0.5\), its closest data-manifold counterfactual \(\mathcal{C}_{p,\mathcal{X}}(x,m)\) with respect to the model \(m(\cdot)\) and data manifold \(\mathcal{X}\) is defined as a point \(x^{\prime}\in\mathcal{X}\) that minimizes the \(l_{p}\) norm \(\|x-x^{\prime}\|_{p}\) such that \(m(x^{\prime})\geq 0.5\)._ \[\mathcal{C}_{p,\mathcal{X}}(x,m)=\arg\min_{x^{\prime}\in\mathcal{X}}\|x-x^{ \prime}\|_{p}\text{ such that }m(x^{\prime})\geq 0.5.\] In order to assess the similarity or anomalous nature of a point concerning the given dataset \(\mathcal{S}\subseteq\mathcal{X}\), various metrics can be employed, e.g., K-nearest neighbors, Mahalanobis distance, Kernel density, LOF. These metrics play a crucial role in understanding the quality of counterfactual explanations generated by a model. One widely used metric in the literature on counterfactual explanations (Pawelczyk et al., 2020; Kanamori et al., 2020; Dutta et al., 2022) is the Local Outlier Factor (LOF). **Definition 4** (Local Outlier Factor (Breunig et al., 2000)).: _For \(x\in\mathcal{S}\), let \(L_{k}(x)\) be its \(k\)-Nearest Neighbors (\(k\)-NN) in \(\mathcal{S}\). 
The \(k\)-reachability distance \(rd_{k}\) of \(x\) with respect to \(x^{\prime}\) is defined by \(rd_{k}(x,x^{\prime})=\max\{\delta(x,x^{\prime}),d_{k}(x^{\prime})\}\), where \(d_{k}(x^{\prime})\) is the distance \(\delta\) between \(x^{\prime}\) and its \(k\)-th nearest instance in \(\mathcal{S}\). The \(k\)-local reachability density of \(x\) is defined by \(lrd_{k}(x)=|L_{k}(x)|(\sum_{x^{\prime}\in L_{k}(x)}rd_{k}(x,x^{\prime}))^{-1}\). Then, the k-LOF of \(x\) on \(\mathcal{S}\) is defined as follows:_ \[LOF_{k,\mathcal{S}}(x)=\frac{1}{|L_{k}(x)|}\sum_{x^{\prime}\in L_{k}(x)}\frac {lrd_{k}(x^{\prime})}{lrd_{k}(x)}.\] _Here, \(\delta(x,x^{\prime})\) is the distance between two \(d\)-dimensional feature vectors. The LOF predicts \(-1\) for anomalous points and \(+1\) for inlier points._ **Goals:** In this work, our main goal is to provide _probabilistic guarantees_ on the robustness of counterfactuals to potential model changes for differentiable models such as neural networks. Towards achieving this goal, our objective involves: (i) introducing an abstraction that rigorously defines the class of model changes that we are interested in; and (ii) establishing a measure, denoted as \(R_{\Phi}(x,m)\), for a counterfactual \(x\) and a given model \(m(\cdot)\), that quantifies its robustness to potential model changes. Here, \(\Phi\) represents the hyperparameters of the robustness measure. Ideally, we desire that the measure \(R_{\Phi}(x,m)\) should be high if the counterfactual \(x\) is less likely to be invalidated by potential model changes. We seek: (i) to provide theoretical guarantees on the validity of counterfactuals with a sufficiently high value of \(R_{\Phi}(x,m)\); and (ii) to incorporate our measure \(R_{\Phi}(x,m)\) into an algorithmic framework for generating robust counterfactuals which also meet other requirements, such as low cost or likeness to the data manifold. ## 3 Main Theoretical Contributions In this section, we first introduce our proposed abstraction of _naturally-occurring_ model change and then propose a novel measure - that we call _Stability_ - to quantify the robustness of counterfactuals to potential model changes. We derive a theoretical guarantee that counterfactuals that have a sufficiently high value of _Stability_ will remain valid after potential _naturally-occurring_ model change with high probability. But since our quantification would depend on the local Lipschitz constant around a data point, which is not always known, we also examine a practical relaxation of our proposed measure and demonstrate its applicability. ### Naturally-Occurring Model Change A popular assumption in existing literature (Upadhyay et al., 2021; Black et al., 2021) to quantify potential model changes is to assume that the model changes are bounded in the parameter space, i.e., \[\|\text{Params}(M)-\text{Params}(m)\|<\Delta\text{ for a constant }\Delta.\] Here, \(\text{Params}(M)\) denotes the parameters of the model \(M\), e.g., the weights of a neural network. However, we note that models can often change drastically in the parameter space while causing little to no change in the actual decisions on points on the data manifold (see Fig. 1 for an example). Figure 1: Models can often change drastically in the parameter space causing little to no change in the actual decisions on the points on the data manifold. Figure 2: Illustrates our proposed abstraction of naturally-occurring model change: the distribution of the changed model outputs \(M(x)\) (stochastic) is centered around the original model output \(m(x)\). Points lying on the data manifold act as anchors, changing little, as they exhibit lower variance in model outputs compared to points outside the manifold. This visualization also connects with the Rashomon effect, encapsulating the diverse yet similarly accurate models that can be learned from a given dataset. 
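This phenomenon is easy to reproduce empirically. The following minimal sketch (our own illustration, not the paper's experimental code) retrains the same architecture on the two-moons data with different weight initializations and contrasts the distance in parameter space against the empirical quantity of Lemma 1 (introduced below), \(\frac{1}{n}\sum_{i=1}^{n}|M(x_{i})-m(x_{i})|\), on manifold points:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

def fit(seed):
    return MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=seed).fit(X, y)

m, M = fit(seed=0), fit(seed=1)  # same data, different initializations

# The distance in parameter space can be large ...
theta_m = np.concatenate([w.ravel() for w in m.coefs_ + m.intercepts_])
theta_M = np.concatenate([w.ravel() for w in M.coefs_ + M.intercepts_])
print("parameter distance:", np.linalg.norm(theta_M - theta_m))

# ... while the mean absolute change in outputs on data-manifold points,
# (1/n) * sum_i |M(x_i) - m(x_i)|, stays small.
pm, pM = m.predict_proba(X)[:, 1], M.predict_proba(X)[:, 1]
print("mean output change on manifold:", np.abs(pM - pm).mean())
```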
In this work, we relax the bounded-model-change assumption, and instead introduce the notion of a naturally-occurring model change as defined in Definition 5. Our abstraction allows for arbitrary model changes such that the change in predictions on points that lie on the data manifold is limited (see Fig. 2 for an illustration). **Definition 5** (Naturally-Occurring Model Change).: _A naturally-occurring model change is defined as follows:_ 1. \(\mathbb{E}\left[M(X)|X=x\right]=\mathbb{E}\left[M(x)\right]=m(x)\) _where the expectation is over the randomness of_ \(M\) _given a fixed value of_ \(X=x\in\mathbb{R}^{d}\)_._ 2. _Whenever_ \(m(x)\) _is_ \(\gamma_{m}\)_-Lipschitz, any updated model_ \(M(x)\) _is also_ \(\gamma-\)_Lipschitz for some constant_ \(\gamma\)_. Note that this constant_ \(\gamma\) _does not depend on_ \(M\) _since we may define_ \(\gamma\) _to be an upper bound on the Lipschitz constants for all possible_ \(M\) _as well as_ \(m\)_._ 3. \(\mathrm{Var}\left[M(X)|X=x\right]=\mathrm{Var}\left[M(x)\right]=\nu_{x}\) _which depends on the fixed value of_ \(X=x\in\mathbb{R}^{d}\)_. Furthermore, whenever_ \(x\) _lies on the data manifold_ \(\mathcal{X}\)_, we have_ \(\nu_{x}\leq\nu\) _for a small constant_ \(\nu\)_._ Closely connected to naturally-occurring model change is the idea of the Rashomon effect, alternatively known as predictive or model multiplicity (Breiman, 2001; Pawelczyk et al., 2020; Marx et al., 2020; Hsu and Calmon, 2022), which suggests that models can be very different from each other but have almost similar performance on the data manifold, e.g., \(\frac{1}{n}\sum_{i=1}^{n}|M(x_{i})-m(x_{i})|\) is small when the points \(x_{i}\) lie on the data manifold. Under the naturally-occurring model change, this holds in expectation as follows: **Lemma 1** (Connection to Rashomon Effect).: _For points \(x_{1},\ldots,x_{n}\in\mathcal{X}\) (lying on the data-manifold) under naturally-occurring model change, the following holds:_ \[\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}|M(x_{i})-m(x_{i})|\right]\leq\sqrt{ \nu}. \tag{1}\] Thus, Definition 5 might be better suited than boundedness in the parameter space. Proof of Lemma 1 is in Appendix B. **Remark 1** (Targeted Model Change).: _In contrast with naturally-occurring model change, we also introduce the notion of targeted model change (adversarial, worst-case) which essentially refers to a model change that is more deliberately targeted to make a particular counterfactual invalid. For example, one could have a new model \(M(x)=m(x)\) almost everywhere except at or around the targeted point \(x^{\prime}\), i.e., \(M(x^{\prime})=1-m(x^{\prime})\). 
See Section 3.3 for more details._ ### A Measure of Robustness With Probabilistic Guarantees on Validity #### 3.2.1 Proposed Measure: Stability **Definition 6** (Stability).: _The stability of a counterfactual \(x\in\mathbb{R}^{d}\) is defined as follows:_ \[R_{k,\sigma^{2}}(x,m)=\frac{1}{k}\sum_{x_{i}\in N_{x,k}}\left(m(x_{i})-\gamma \|x-x_{i}\|\right), \tag{2}\] _where \(N_{x,k}\) is a set of \(k\) points drawn from the Gaussian distribution \(\mathcal{N}(x,\sigma^{2}\mathrm{I}_{d})\) with \(\mathrm{I}_{d}\) being the identity matrix, and \(\gamma\) is an upper bound on the Lipschitz constant for all models \(M(\cdot)\) under naturally-occurring change._ **Remark 2** (Relaxations to local Lipschitz).: _While we prove our theoretical result (Theorem 1) with the global Lipschitz constant \(\gamma\), we can relax this to local Lipschitz constants \(\gamma_{x}\) around a given point \(x\). This is because we sample from a Gaussian centered around the point \(x\) and hence mainly capture the variability around \(x\). So most points will be very close to \(x\), but a few points can still lie far away. Potential extensions of our guarantees could apply to truncated Gaussian and uniform sampling methods, given their sub-Gaussian properties. This is because Lipschitz concentration inherently extends to sub-Gaussian random variables (Baraniuk et al., 2008)._ #### 3.2.2 Probabilistic Guarantee **Theorem 1** (Probabilistic Guarantee).: _Let \(X_{1},X_{2},\ldots,X_{k}\) be \(k\) iid random variables with distribution \(\mathcal{N}(x,\sigma^{2}I_{d})\) and \(Z=\frac{1}{k}\sum_{i=1}^{k}(m(X_{i})-M(X_{i}))\). Suppose \(\left|\mathbb{E}\left[Z|M\right]-\mathbb{E}\left[Z\right]\right|<\epsilon^{\prime}\). Then, for any \(\epsilon>2\epsilon^{\prime}\), a counterfactual \(x\in\mathcal{X}\) under naturally-occurring model change satisfies:_ \[\Pr(M(x)\geq R_{k,\sigma^{2}}(x,m)-\epsilon)\geq 1-\exp\left(\frac{-k \epsilon^{2}}{8(\gamma_{m}+\gamma)^{2}\sigma^{2}}\right).\] _The probability is over the randomness of both \(M\) and the \(X_{i}\)'s._ **Intuition Behind Our Result:** This stability metric (Definition 6) is a way to measure the robustness of counterfactuals that are subject to naturally-occurring model changes (see Definition 5). The first term in the metric, represented by \(\frac{1}{k}\sum_{i=1}^{k}m(X_{i})\), captures the average model outputs for a group of points centered around the counterfactual \(x\). The second term, represented by \(\gamma\|x-X_{i}\|\), is an upper bound on the potential difference in outputs of any new model on the points \(x\) and \(X_{i}\) (recall the Lipschitz property of \(M\) around the point \(x\)). Using our measure, the guarantee in Theorem 1 can be rewritten as: \[\Pr\left(\frac{1}{k}\sum_{i=1}^{k}m(X_{i})- M(x)\leq\frac{\gamma}{k}\sum_{i=1}^{k}\|x-X_{i}\|+\epsilon\right)\\ \geq 1-\exp\left(\frac{-k\epsilon^{2}}{8(\gamma+\gamma_{m})^{2} \sigma^{2}}\right). \tag{3}\] This form of the inequality allows for the following interpretation of Theorem 1: with high probability, the gap between the average prediction of the old model over the neighborhood of the given input, i.e., \(\frac{1}{k}\sum_{i=1}^{k}m(X_{i})\), and the output of the new model at \(x\), i.e., \(M(x)\), is upper bounded by \(\gamma\) times the average distance of the sampled points from \(x\), i.e., \(\frac{\gamma}{k}\sum_{i=1}^{k}\|x-X_{i}\|\), plus the correction term \(\epsilon\). Proof Sketch.: The complete proof of Theorem 1 is provided in Appendix C.1. Here, we include a proof sketch. 
Notice that, using the Lipschitz property of \(M(\cdot)\) around \(x\), we have \(M(x)\geq M(X_{i})-\gamma\|x-X_{i}\|\) for all \(X_{i}\). Thus, \[M(x) \geq\frac{1}{k}\sum_{i=1}^{k}(M(X_{i})-\gamma\|x-X_{i}\|) \tag{4}\] \[\stackrel{{(a)}}{{\geq}}\frac{1}{k}\sum_{i=1}^{k}( m(X_{i})-\gamma\|x-X_{i}\|)-\epsilon, \tag{5}\] where (a) holds from Lemma 2 with probability at least \(1-\exp\Big{(}\frac{-k\epsilon^{2}}{8(\gamma+\gamma_{m})^{2}\sigma^{2}}\Big{)}\). **Lemma 2** (Deviation Bound).: _Let \(X_{1},X_{2},\ldots,X_{k}\sim\mathcal{N}(x,\sigma^{2}I_{d})\) and \(Z{=}\frac{1}{k}\sum_{i=1}^{k}(m(X_{i}){-}M(X_{i}))\). Suppose \(|\mathbb{E}\left[Z|M\right]-\mathbb{E}\left[Z\right]|<\epsilon^{\prime}\). Then, under naturally-occurring model change, we have \(\mathbb{E}\left[Z\right]{=}0\). Moreover, for any \(\epsilon{>}2\epsilon^{\prime},\)_ \[\Pr(Z\geq\epsilon)\leq\exp\bigg{(}\frac{-k\epsilon^{2}}{8(\gamma+\gamma_{m}) ^{2}\sigma^{2}}\bigg{)}. \tag{6}\] Proof Sketch.: The proof of Lemma 2 leverages concentration bounds for Lipschitz functions of independent Gaussian random variables (see Lemma 3). The complete proof of Lemma 2 is provided in Appendix C. **Lemma 3** (Gaussian Concentration Inequality).: _Let \(W=(W_{1},W_{2},\ldots,W_{n})\) consist of n i.i.d. random variables belonging to \(\mathcal{N}(0,\sigma^{2})\), and \(Z=f(W)\) be a \(\gamma\)-Lipschitz function, i.e., \(|f(W)-f(W^{\prime})|\leq\gamma\|W-W^{\prime}\|.\) Then, we have,_ \[\Pr(Z-\mathbb{E}\left[Z\right]\geq\epsilon)\leq\exp\bigg{(}\frac{-\epsilon^{ 2}}{2\gamma^{2}\sigma^{2}}\bigg{)}\text{ for all }\epsilon>0. \tag{7}\] For the proof of Lemma 3, refer to Boucheron et al. (2013), p. 125. Our robustness guarantee (Theorem 1) essentially states that \(\Pr(M(x)\leq R_{k,\sigma^{2}}(x,m)-\epsilon)\leq\exp\big{(}\frac{-k\epsilon^{2 }}{8(\gamma+\gamma_{m})^{2}\sigma^{2}}\big{)}\) under naturally-occurring model change. For instance, if we find a counterfactual \(x\) such that \(R_{k,\sigma^{2}}(x,m)-\epsilon\) is greater than or equal to \(0.5\), then \(M(x)\) would also be greater than \(0.5\) with high probability. The term \(\exp\big{(}\frac{-k\epsilon^{2}}{8(\gamma+\gamma_{m})^{2}\sigma^{2}}\big{)}\) decays with \(k\). #### 3.2.3 Practical Relaxation of Stability and Its Properties While our proposed measure Stability (Definition 6) has probabilistic guarantees, we note that it relies on the Lipschitz constant \(\gamma\) (or the local Lipschitz constant \(\gamma_{x}\) around the point \(x\)), which is often unknown. Therefore, we next propose a practical relaxation of the measure as follows: **Definition 7** (Stability (Relaxed)).: _The stability (relaxed) of a counterfactual \(x\in\mathbb{R}^{d}\) is defined as follows:_ \[\hat{R}_{k,\sigma^{2}}(x,m)=\frac{1}{k}\sum_{x_{i}\in N_{x,k}}(m(x_{i})-|m(x)- m(x_{i})|),\] _where \(N_{x,k}\) is a set of \(k\) points drawn from the Gaussian distribution \(\mathcal{N}(x,\sigma^{2}\mathrm{I}_{d})\) with \(\mathrm{I}_{d}\) being the identity matrix._ To arrive at this relaxation, we utilize the Lipschitz property to approximate the term that involves the Lipschitz constant, specifically, by approximating \(\gamma_{x}\|x-x_{i}\|\) with \(|m(x)-m(x_{i})|\). Another possibility is to consider an estimate of \(\gamma_{x}\) given by: \[\hat{\gamma}_{x}=\max_{x_{i}\in N_{x,k}}\frac{|m(x)-m(x_{i})|}{\|x-x_{i}\|}. \tag{8}\] We observed that the experimental results with both these stability estimates are in the same ballpark. 
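Both quantities are straightforward to estimate with a single Gaussian sample; a minimal NumPy sketch (ours, assuming a vectorized `model` that maps a batch of inputs to outputs in \([0,1]\)):

```python
import numpy as np

def stability_relaxed(model, x, k=1000, sigma=0.1, rng=None):
    """Relaxed Stability of Definition 7: mean model output over a Gaussian
    neighborhood of x, penalized by the local variability |m(x) - m(x_i)|."""
    rng = rng or np.random.default_rng(0)
    Xs = rng.normal(loc=x, scale=sigma, size=(k, x.shape[0]))  # N(x, sigma^2 I)
    mx = model(x[None, :])[0]
    mXs = model(Xs)
    return np.mean(mXs - np.abs(mx - mXs))

def lipschitz_estimate(model, x, k=1000, sigma=0.1, rng=None):
    """Local Lipschitz estimate of Eq. (8) from the same kind of sample."""
    rng = rng or np.random.default_rng(0)
    Xs = rng.normal(loc=x, scale=sigma, size=(k, x.shape[0]))
    mx = model(x[None, :])[0]
    mXs = model(Xs)
    return np.max(np.abs(mx - mXs) / np.linalg.norm(x - Xs, axis=1))
```

The fixed sampling seed makes the estimator deterministic for a given \(x\), which is convenient when the measure is used inside an optimization loop (Section 4).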
To gain a deeper understanding of the relaxed stability measure, we now consider some desirable properties of counterfactuals that make them more robust and then demonstrate that our proposed relaxation of Stability (Definition 7) satisfies those desirable properties. These properties are inspired by Dutta et al. (2022), which proposed such properties for tree-based ensembles. The first property is based on the fact that the output of a model \(m(x)\in[0,1]\) is expected to be higher if the model has more confidence in that prediction. **Property 1**.: _For any \(x\in\mathbb{R}^{d}\), a higher value of \(m(x)\) makes it less likely to be invalidated due to model changes._ Having a high \(m(x)\) alone does not guarantee robustness, as local variability around \(x\) can make predictions less reliable. E.g., points with high \(m(x)\) near the decision boundary are also vulnerable to invalidation under model changes. **Property 2**.: _An \(x{\in}\mathbb{R}^{d}\) is less likely to be invalidated if several points close to \(x\) (denoted by \(x^{\prime}\)) have a high value of \(m(x^{\prime})\)._ Counterfactuals may also be more likely to be invalidated if they lie in a highly variable region of the model output function. This is because the confidence of the model predictions in that region may be less reliable. **Property 3**.: _An \(x\in\mathbb{R}^{d}\) is less likely to be invalidated if model outputs around \(x\) have low variability._ Our stability measure aligns with these desired properties. Given a point \(x\in\mathbb{R}^{d}\), it generates a set of \(k\) points centered around \(x\). The first term \(\frac{1}{k}\sum_{x^{\prime}\in N_{x,k}}m\left(x^{\prime}\right)\) is expected to be high if the model output value \(m(x)\) is high for \(x\) as well as several points close to \(x\). But the mean value of \(m(x^{\prime})\) around a point \(x\) may not always capture the variability in that region; hence the second term of our stability measure, i.e., \(\frac{1}{k}\sum_{x^{\prime}\in N_{x,k}}|m(x)-m(x^{\prime})|\). This term captures the variability of the model output values in a region around \(x\). It is worth noting that the variability term is only useful in conjunction with the mean term. This is because even points on the opposite side of the decision boundary can have varying levels of variability, regardless of whether \(m(x^{\prime})\) is less than or greater than \(0.5\). In Fig. 3, we provide an example on the synthetic moon dataset to observe the effect of our stability measure on naturally-changed models. Note that these changed models were realized from actual experiments by retraining with different weight initializations. ### Impossibility Under Targeted Model Change In this work, we make a key distinction between naturally-occurring and targeted model changes. While we are able to provide probabilistic guarantees for naturally-occurring model change, we also demonstrate an impossibility result for targeted model change. **Theorem 2** (Impossibility Under Targeted Change).: _Given a model and a counterfactual, one can design another similar model such that the particular targeted counterfactual can be invalidated._ What this result essentially demonstrates is that for a given model, one can design another similar model such that any particular targeted counterfactual can be invalidated. 
The proof relies on the possibility that one could have a new model \(M(x)=m(x)\) almost everywhere except at or around the targeted point \(x^{\prime}\), i.e., \(M(x^{\prime})=1-m(x^{\prime})\).

Figure 3: Effect of the stability measure on naturally-occurring model changes: (a) corresponds to the original data distribution and the trained model. (b)-(e) demonstrate some examples of changed models obtained on retraining with different weight initializations. One may notice that the model decision boundary changes substantially in the sparse regions of the data manifold (few data points), possibly violating the bounded-parameter-change assumption, but the predictions on the dense regions of the data manifold do not change much (in alignment with the Rashomon effect). This motivates our proposed abstraction of naturally-occurring model change, which allows for arbitrary changes in the parameter space with little change in the actual predictions on the dense regions of the data manifold. (f) demonstrates our proposed measure of stability \(\hat{R}_{k,\sigma^{2}}(x,m)\) (high mean model output, low variability, _almost_ like a Gaussian filter), for which we derive probabilistic guarantees on validity. In essence, we show that under the abstraction of naturally-occurring model change, the stability measure captures the reliable intersecting region of changed models with high probability. In the original model, we observe that certain non-robust regions (i.e., those caused by overfitting to certain data points in the original model) have higher local Lipschitz values and variability. Counterfactuals assigned to these regions (even if \(m(x)\) is high) would be invalidated in the changed models. The stability measure, which samples around a region, penalizes these higher local Lipschitz values.

## 4 Generating Robust Counterfactuals using Our Proposed Measure: Stability In this section, we examine two techniques for incorporating our proposed measure, Stability (relaxed; see Definition 7), into the generation of robust counterfactuals for neural networks. To begin, along the lines of Dutta et al. (2022), we first define a counterfactual robustness test. **Definition 8** (Counterfactual Robustness Test).: _A counterfactual \(x\in\mathbb{R}^{d}\) satisfies the robustness test if:_ \[\hat{R}_{k,\sigma^{2}}(x,m)\geq\tau. \tag{9}\] Now, we would like to find a reasonable point \(x^{\prime}\) that receives a positive prediction from the model (essentially \(m(x^{\prime})\geq 0.5\)), while also satisfying the robustness test, \(\hat{R}_{k,\sigma^{2}}(x^{\prime},m)\geq\tau\). The threshold value \(\tau\) can be adjusted based on the desired effective validity (recall Theorem 1): a larger threshold makes it more likely that the counterfactual remains valid under the new model \(M\). In trying to find a reasonable point \(x^{\prime}\), one may strive to generate robust counterfactuals that are as close as possible to the original point. One might also want the generated counterfactuals to be as realistic as possible, i.e., lie on the data manifold. Toward that end, we propose two algorithms. We propose Algorithm 1, T-Rex:I, which incorporates our measure to find robust counterfactuals that are close to the original data point. T-Rex:I works with any preferred base method for generating counterfactuals; a minimal sketch of its update loop is given below.
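The following is a hypothetical sketch of the kind of update loop T-Rex:I performs, described in detail next. Here `stability` is an estimator of \(\hat{R}_{k,\sigma^{2}}\) (e.g., the sketch given earlier, instantiated with a fixed random seed so the objective is deterministic), `m` returns model outputs in \([0,1]\), and a finite-difference gradient stands in for the automatic differentiation used in the paper; all names, step sizes, and the \([0,1]\) clipping are illustrative assumptions.

```
import numpy as np

def t_rex_i(x_cf, m, stability, tau=0.7, step=0.05, max_iters=100, fd_eps=1e-3):
    """Gradient-ascent sketch of T-Rex:I: starting from a base counterfactual
    x_cf, increase its stability until the robustness test R_hat >= tau passes
    while the prediction stays positive (m >= 0.5).
    """
    x = x_cf.copy()
    for _ in range(max_iters):
        if stability(x) >= tau and m(x[None, :])[0] >= 0.5:
            return x  # robust counterfactual found
        # Central finite differences; O(2d) stability evaluations per step.
        grad = np.zeros_like(x)
        for j in range(x.shape[0]):
            e = np.zeros_like(x)
            e[j] = fd_eps
            grad[j] = (stability(x + e) - stability(x - e)) / (2 * fd_eps)
        x = np.clip(x + step * grad, 0.0, 1.0)  # features normalized to [0, 1]
    return None  # no robust counterfactual found within the budget
```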
T-Rex:I evaluates the stability of the generated counterfactual and, if necessary, iteratively updates it through a gradient-ascent process until a robust counterfactual that meets the desired criteria is obtained. **Remark 3** (Gradient of Stability).: _In Algorithm 1, we compute the gradient of \(R(x,m)\) with respect to \(x\) (not the model parameters \(m\)). Such gradients with respect to \(x\) rather than \(m\) are also commonly computed in adversarial machine learning and in feature attributions for explainability. We use TensorFlow tf.GradientTape for automatic differentiation, which allows for the computation of gradients with respect to certain inputs._ To ensure that the counterfactuals are as realistic as possible, we also define the T-Rex:NN counterfactual, which considers counterfactuals that lie within a dataset to avoid unrealistic or anomalous results (see Algorithm 2). **Definition 9** (Robust Nearest Neighbor Counterfactual).: _Given \(x\in\mathbb{R}^{d}\) such that \(m(x)<0.5\), its robust nearest neighbor counterfactual \(\mathcal{C}_{p,\mathcal{S}}^{(\tau)}(x,m)\) with respect to the model \(m(\cdot)\) and dataset \(\mathcal{S}\) is defined as another point \(x^{\prime}\in\mathcal{S}\) that minimizes the \(l_{p}\) norm \(\left\|x-x^{\prime}\right\|_{p}\) such that \(m\left(x^{\prime}\right)\geq 0.5\) and \(\hat{R}_{k,\sigma^{2}}(x^{\prime},m)\geq\tau\)._ The closest data-supported counterfactual serves as a reliable reference, as it inherently has a high Local Outlier Factor (LOF). The T-Rex:I algorithm may find counterfactuals with lower costs, but they may compromise on the LOF and result in unrealistic samples. To address this, we propose Algorithm 2, T-Rex:NN, for finding data-supported counterfactuals. The algorithm first finds the \(K\) nearest neighbor counterfactuals of \(x\) in dataset \(\mathcal{S}\), checks each of them to see whether it satisfies the robustness test, \(\hat{R}_{k,\sigma^{2}}(x^{\prime},m)\geq\tau\), and terminates once such a counterfactual is found. ```
Input: Model m(·), datapoint x with m(x) < 0.5, dataset S, algorithm parameters (K, σ², k, τ).
Find the K nearest neighbor counterfactuals x'_i ∈ S to x with respect to model m(·),
    i.e., NN_x = (x'_1, x'_2, ..., x'_K).
for x'_i ∈ NN_x do
    Perform the counterfactual robustness test on x'_i: check if R̂_{k,σ²}(x'_i, m) ≥ τ
    if the robustness test is satisfied then
        Output x'_i and exit
    end if
end for
Output: No robust counterfactual found and exit
``` **Algorithm 2** T-Rex:NN: Theoretically Robust EXplanations: Nearest Neighbor Version ## 5 Experiments In this section, we present experimental results to demonstrate the effectiveness of our proposed measure in capturing robustness and in generating robust counterfactuals that remain valid after potential model changes. We illustrate how our proposed Algorithms 1 and 2 utilize the stability measure to effectively generate robust counterfactuals. **Datasets:** We conduct experiments on several benchmark datasets, namely, HELOC (FICO, 2018), German Credit, Cardiotocography (CTG), Adult (Dua & Graff, 2017), and Taiwanese Credit (Yeh & hui Lien, 2009). These have two classes, with one class representing the most favorable outcome and the other the least desirable outcome, for which we aim to generate counterfactuals.
For simplicity, we normalize the features to lie in \([0,1]\). **Performance Metrics:** Our metrics of interest are: * Cost: Average \(l_{1}\) or \(l_{2}\) distance between counterfactuals \(x^{\prime}\) and original points \(x\). * Validity (%): Percentage of counterfactuals that remain valid under the new model \(M\). * LOF: Predicts \(-1\) for anomalous points and \(+1\) for inliers. A high average LOF suggests that the points lie on the data manifold and are hence more realistic, i.e., _higher is better_ (see Definition 4). We use the existing implementation in Scikit-Learn to compute the LOF. **Methodology:** We begin by training a baseline neural network model and aim to find counterfactuals for data points with true negative predictions. To test the robustness of these counterfactual examples, we then train \(50\) new models (\(M\)) and evaluate the \(validity\) of the counterfactuals under different model change scenarios, which include: (i) Weight Initialization (WI): Retraining new models using the same hyperparameters but with different weight initializations, using a different random seed for each new model; and (ii) Leave Out (LO): Retraining new models by randomly removing a small portion (\(1\%\)) of the training data each time (with replacement), as well as using different weight initializations. **Hyperparameter selection:** Our theoretical findings indicate that a higher \(k\) improves the robustness guarantee but increases computation; we found \(k=1000\) to be sufficient. The value of \(\sigma^{2}\) was determined by analyzing the variance of the features; with features normalized to \([0,1]\), we found that \(\sigma^{2}=0.01\) produced good results. The threshold \(\tau\) is a critical aspect of our method and can be adjusted based on the desired effective validity: a higher \(\tau\) improves validity at the expense of \(l_{1}\) or \(l_{2}\) cost. See Appendix D for more details. **Baseline:** We compare our approaches with established baselines. First, we find the min Cost (\(l_{1}\) and \(l_{2}\)) counterfactual (Wachter et al., 2017) and use it as our base method for generating counterfactuals. We then compare T-Rex:I to Stable Neighbor Search (SNS) (Black et al., 2021) and Robust Algorithmic Recourse (ROAR) (Upadhyay et al., 2021). We evaluate the performance of our Robust Nearest Neighbor algorithm (Algorithm 2: T-Rex:NN) against plain Nearest Neighbor (NN) counterfactuals, i.e., the closest data-supported counterfactual without the robustness test of Definition 9. We choose a value of \(\tau\) that yields high validity and compare cost and LOF with the baselines. **Results:** Results for the HELOC, German Credit, and CTG datasets are summarized in Table 1. Observe that, as expected, the min Cost counterfactual is not robust to variations in the training data or weight initialization. ROAR generates counterfactuals with high validity, albeit at the expense of a higher cost. Our proposed method, T-Rex:I, significantly improves the validity of the counterfactuals compared to the min Cost baseline. The T-Rex:I algorithm achieves validity comparable to the SNS method for both types of model changes, and often accomplishes this with lower costs and higher LOF. This can be observed across all three datasets for both \(l_{1}\) and \(l_{2}\) cost metrics. The T-Rex:NN algorithm also significantly improves the validity of the counterfactuals compared to the traditional Nearest Neighbor (NN) method and maintains a high LOF.
It comes at a price of increased cost, but the counterfactuals are guaranteed to be realistic since they are data supported. Refer to Appendix D for additional results for Adult and Taiwanese credit datasets. **Ablation:** To evaluate the efficacy of our proposed stability measure, we conduct an ablation study on the German credit dataset. We first evaluate a robustness measure that solely relies on the model's prediction of the counterfactual, denoted as \(r(x^{\prime},m)=m(x^{\prime})\). We then examine a measure that only incorporates the mean, the average predictions for \(k\) points sampled from the distribution \(N(x^{\prime},\sigma^{2}I_{d})\), denoted as \(r_{k,\sigma^{2}}(x^{\prime},m)=\frac{1}{k}\sum_{x^{\prime}_{i}\in N_{x^{\prime },k}}m(x^{\prime}_{i})\). We compare these with our proposed robustness measure \(\hat{R}_{k,\sigma^{2}}(x^{\prime},m)\), which also takes into account the variability around the counterfactual. The results of the ablation study, for various \(\tau\) thresholds, are summarized in Table 7 in Appendix D. \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{\(l_{1}\)_based_} & \multicolumn{4}{c}{\(l_{2}\)_based_} \\ \cline{3-10} & Method & COST & LOF & WI VAL & LO VAL & COST & LOF & WI VAL & LO VAL \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & min Cost & 0.40 & 0.49 & 38.8\% & 35.2\% & 0.11 & 0.75 & 13.5\% & 13.5\% \\ & min Cost+T-Rex:I (Ours) & 1.02 & 0.38 & 98.2\% & 98.1\% & 0.29 & 0.68 & 98.5\% & 98.2\% \\ & min Cost+SNS & 1.20 & 0.30 & 98.0\% & 97.8\% & 0.31 & 0.64 & 97.9\% & 97.0\% \\ & ROAR & 1.69 & 0.41 & 92.6\% & 91.2\% & 1.91 & 0.43 & 86.3 \% & 84.8\% \\ \cline{2-10} & NN & 1.91 & 0.80 & 51.1\% & 50.3\% & 0.56 & 0.80 & 51.1\% & 50.3\% \\ & T-Rex:NN (Ours) & 2.50 & 0.92 & 84.0\% & 84.0\% & 0.77 & 0.92 & 84.0\% & 84.0\% \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & min Cost & 1.42 & 0.77 & 58.8\% & 56.7\% & 0.48 & 0.81 & 26.6\% & 26.6\% \\ & min Cost+T-Rex:I (Ours) & 4.81 & 0.72 & 98.0\% & 96.5\% & 1.20 & 0.75 & 99.2\% & 98.7\% \\ & min Cost+SNS & 5.71 & 0.67 & 97.5\% & 98.1\% & 1.44 & 0.68 & 99.9\% & 98.9\% \\ & ROAR & 7.63 & 0.54 & 96.3\% & 92.3\% & 6.81 & 0.58 & 87.8\% & 85.2\% \\ \cline{2-10} & NN & 7.05 & 1.00 & 95.3\% & 95.4\% & 2.50 & 1.00 & 95.3\% & 95.3\% \\ & T-Rex:NN (Ours) & 10.13 & 1.00 & 100\% & 100\% & 3.04 & 1.00 & 100\% & 100\% \\ \hline \multirow{4}{*}{ \begin{tabular}{} \end{tabular} } & min Cost & 0.21 & 0.94 & 74.6\% & 70.2\% & 0.08 & 1.00 & 19.7\% & 14.1\% \\ & min Cost+T-Rex:I (Ours) & 1.11 & 0.83 & 100\% & 98.8\% & 0.42 & 0.94 & 100\% & 99.7\% \\ & min Cost+SNS & 3.34 & -1.00 & 100\% & 98.2\% & 1.07 & -1.00 & 100\% & 99.3\% \\ & ROAR & 3.68 & 0.64 & 98.7\% & 96.4\% & 1.35 & 0.59 & 98.9\% & 97.2\% \\ \cline{2-10} & NN & 0.39 & 1.00 & 70.5\% & 67.5\% & 0.15 & 1.00 & 70.5\% & 67.5\% \\ & T-Rex:NN (Ours) & 2.22 & -0.33 & 100\% & 100\% & 1.00 & -0.33 & 100\% & 100\% \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental results. ## 6 Discussion We introduce an abstraction called naturally-occurring model change and propose a measure, Stability, to quantify the robustness of counterfactuals with probabilistic guarantees. We show that counterfactuals with high Stability will remain valid after potential model changes with high probability. We investigate various techniques for incorporating stability in generating robust counterfactuals and introduce the T-Rex:I and T-Rex:NN algorithms. 
We also make a novel conceptual connection with the body of work on model multiplicity, further emphasizing the models that are more likely to occur. **Limitations and Broader Impact:** Our abstraction of naturally-occurring model change rests on assumptions that may not apply to all models or datasets. Our relaxed stability, although practically implementable, lacks the theoretical guarantees of the initial stability measure. Estimating the Lipschitz constant around a counterfactual can be computationally demanding, particularly when leveraging gradient descent to optimize stability. Though generating robust counterfactuals is a key step towards trustworthy AI, it does not by itself address other important desiderata such as fairness (Sharma et al., 2019; Gupta et al., 2019; Ley et al., 2022; Raman et al., 2023; Ehyaei et al., 2023). Future research could explore links between robustness and fairness, improve the estimation of stability, or integrate Stability into training-time approaches for generating robust counterfactuals. **Disclaimer:** This paper was prepared for informational purposes in part by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates ("JP Morgan"), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability for the completeness, accuracy, or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
2305.15529
Editable Graph Neural Network for Node Classifications
Graph Neural Networks (GNNs) have achieved prominent success in many graph-based learning problems, such as credit risk assessment in financial networks and fake news detection in social networks. However, trained GNNs still make errors, and these errors may cause serious negative impact on society. \textit{Model editing}, which corrects the model behavior on wrongly predicted target samples while leaving model predictions unchanged on unrelated samples, has garnered significant interest in the fields of computer vision and natural language processing. However, model editing for graph neural networks (GNNs) is rarely explored, despite GNNs' widespread applicability. To fill the gap, we first observe that existing model editing methods significantly deteriorate prediction accuracy (up to a $50\%$ accuracy drop) in GNNs, while causing only a slight accuracy drop in a multi-layer perceptron (MLP). The rationale behind this observation is that node aggregation in GNNs spreads the editing effect throughout the whole graph, pushing the node representations far from their original ones. Motivated by this observation, we propose \underline{E}ditable \underline{G}raph \underline{N}eural \underline{N}etworks (EGNN), a neighbor-propagation-free approach to correct the model prediction on misclassified nodes. Specifically, EGNN simply stitches an MLP to the underlying GNN, where the weights of the GNN are frozen during model editing. In this way, EGNN disables propagation during editing while still utilizing the neighbor propagation scheme for node prediction to obtain satisfactory results. Experiments demonstrate that EGNN outperforms existing baselines in terms of effectiveness (correcting wrong predictions with lower accuracy drop), generalizability (correcting wrong predictions for other similar nodes), and efficiency (low training time and memory) on various graph datasets.
Zirui Liu, Zhimeng Jiang, Shaochen Zhong, Kaixiong Zhou, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
2023-05-24T19:35:42Z
http://arxiv.org/abs/2305.15529v1
# Editable Graph Neural Network for Node Classifications ###### Abstract Graph Neural Networks (GNNs) have achieved prominent success in many graph-based learning problems, such as credit risk assessment in financial networks and fake news detection in social networks. However, trained GNNs still make errors, and these errors may cause serious negative impact on society. _Model editing_, which corrects the model behavior on wrongly predicted target samples while leaving model predictions unchanged on unrelated samples, has garnered significant interest in the fields of computer vision and natural language processing. However, model editing for graph neural networks (GNNs) is rarely explored, despite GNNs' widespread applicability. To fill the gap, we first observe that existing model editing methods significantly deteriorate prediction accuracy (up to a 50% accuracy drop) in GNNs, while causing only a slight accuracy drop in a multi-layer perceptron (MLP). The rationale behind this observation is that node aggregation in GNNs spreads the editing effect throughout the whole graph, pushing the node representations far from their original ones. Motivated by this observation, we propose Editable Graph Neural Networks (EGNN), a neighbor-propagation-free approach to correct the model prediction on misclassified nodes. Specifically, EGNN simply stitches an MLP to the underlying GNN, where the weights of the GNN are frozen during model editing. In this way, EGNN disables propagation during editing while still utilizing the neighbor propagation scheme for node prediction to obtain satisfactory results. Experiments demonstrate that EGNN outperforms existing baselines in terms of effectiveness (correcting wrong predictions with lower accuracy drop), generalizability (correcting wrong predictions for other similar nodes), and efficiency (low training time and memory) on various graph datasets. ## 1 Introduction Graph Neural Networks (GNNs) have achieved prominent results in learning the features and topology of graph data (Ying et al., 2018; Hamilton et al., 2017; Ling et al., 2023; Zeng et al., 2020; Hu et al., 2020; Zhou et al., 2021; Jiang et al., 2022; Han et al., 2022b, a; Ling et al., 2023; Duan et al., 2022; Zhou et al., 2021). Based on spatial message passing, GNNs learn a representation for each node by recursively aggregating the representations of its neighbors and of the node itself. Once trained, the model is typically deployed as a static artifact to make decisions on a wide range of tasks, such as credit risk assessment in financial networks (Petrone and Latora, 2018) and fake news detection in social networks (Shu et al., 2017). The cost of making a wrong decision can be high in these graph applications: overestimating the creditworthiness of borrowers can lead to severe losses for lenders, and failing to detect fake news has a serious negative impact on society. Ideally, it is desirable to correct these serious errors and generalize the corrections to similar mistakes, while preserving the model's prediction accuracy on unrelated input samples. To obtain generalization to similar samples, the most prevalent method is to fine-tune the model with a new label on the single example to be corrected. However, this approach often spoils the model prediction on other unrelated samples; a minimal sketch of this naive editor is shown below.
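To make the naive editor concrete, here is a minimal, hypothetical PyTorch-style sketch, written for a generic classifier that maps inputs to logits; all names and hyperparameters are illustrative (the paper later refers to this strategy as the GD editor). Note that for a GNN the forward pass additionally requires the whole graph, which is precisely what makes this strategy expensive and non-local on graphs.

```
import torch
import torch.nn.functional as F

def fine_tune_single_example(model, x_e, y_e, lr=1e-3, max_steps=100):
    """Fine-tune all model parameters on one misclassified example (x_e, y_e)
    until the prediction flips. x_e has shape (1, d); y_e is the desired class.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    target = torch.tensor([y_e])
    for _ in range(max_steps):
        logits = model(x_e)                       # shape (1, num_classes)
        if logits.argmax(dim=-1).item() == y_e:   # edit succeeded
            break
        loss = F.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```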
To cope with this challenge, many model editing frameworks have been proposed to adjust model behaviors by correcting errors as they appear (Sinitsin et al., 2020; Mitchell et al., 2021, 2022; De Cao et al., 2021). Specifically, these editors usually require an additional training phase to help the model "prepare" for the editing process before applying any edits (Sinitsin et al., 2020; Mitchell et al., 2021, 2022; De Cao et al., 2021). Although model editing has shown promise in modifying vision and language models, to the best of our knowledge, there is no existing work tackling such critical mistakes on graph data. Despite the straightforward concept, it is challenging to efficiently change a GNN's behavior on massively connected nodes. First, due to the message-passing mechanism in GNNs, editing the model behavior on a single node can propagate changes across the entire graph, significantly altering the node's original representation, which may destroy the prediction performance on the training dataset. Therefore, compared to neural networks for computer vision or natural language processing, it is harder to maintain the model prediction on other input samples. Second, unlike other types of neural networks, the input nodes are connected in the graph domain. Thus, when editing the model prediction on a single node using gradient descent, the representation of each node in the whole graph is required (Liu et al., 2022; Han et al., 2023; Hamilton et al., 2017). This distinction introduces complexity and computational challenges when making targeted adjustments to GNNs, especially on large graphs. In this work, we delve into the graph model editing problem, which is more challenging than editing independent samples. We first observe that existing editors significantly harm the overall node classification accuracy even though the misclassified nodes are corrected. The test accuracy drop is up to 50%, which prevents GNNs from being practically deployed. We experimentally study the rationale behind this observation through the lens of loss landscapes. Specifically, we visualize the loss landscape of the Kullback-Leibler (KL) divergence between node embeddings obtained before and after the model editing process in GNNs. We found that a slight weight perturbation can significantly enlarge the KL divergence. In contrast, other types of neural networks, such as Multi-Layer Perceptrons (MLPs), exhibit a much flatter KL loss landscape and display greater robustness against weight variations. These observations align with our viewpoint that, after editing on misclassified samples, GNNs are prone to propagating the editing effect widely and affecting the remaining nodes. Based on the sharp loss landscape of model editing in GNNs, we propose Editable Graph Neural Network (EGNN), a neighbor-propagation-free approach to correct model predictions on graph data. Specifically, suppose we have a well-trained GNN and we want to correct its prediction on some misclassified nodes. EGNN stitches a randomly initialized MLP to the trained GNN. We then train the MLP for a few iterations to ensure that it does not significantly alter the model's prediction. When performing the edit, we only update the parameters of the stitched MLP while freezing the parameters of the GNN. In particular, the node embeddings from the GNN are first inferred offline.
Then the MLP learns an additional representation, which is combined with the fixed embeddings inferred from the GNN to make the final prediction. When a misclassified node is received, the gradient is back-propagated to update the parameters of the MLP instead of the GNN's. In this way, we decouple the _neighbor propagation process_ of learning structure-aware node embeddings from the _model editing process_ of correcting the misclassified nodes. Thus, EGNN disables propagation during editing while still utilizing the neighbor propagation scheme for node prediction to obtain satisfactory results. Compared to directly applying existing model editing methods to GNNs: * We can leverage the structure learning of GNNs while avoiding the spread of editing errors, thereby preserving overall node classification performance. * Experimental results validate our solution, which corrects all the erroneous samples and delivers up to a **90% improvement in overall accuracy**. * By freezing the GNN, EGNN scales to correcting misclassified nodes in million-node graphs, saving more than \(2\times\) in memory footprint and model editing time. ## 2 Preliminary **Graph Neural Networks.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be an undirected graph with \(\mathcal{V}=(v_{1},\cdots,v_{|\mathcal{V}|})\) and \(\mathcal{E}=(e_{1},\cdots,e_{|\mathcal{E}|})\) being the set of nodes and edges, respectively. Let \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\) be the node feature matrix. \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) is the graph adjacency matrix, where \(\mathbf{A}_{i,j}=1\) if \((v_{i},v_{j})\in\mathcal{E}\) else \(\mathbf{A}_{i,j}=0\). \(\tilde{\mathbf{A}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I})\tilde{\mathbf{D}}^{-\frac{1}{2}}\) is the normalized adjacency matrix, where \(\tilde{\mathbf{D}}\) is the degree matrix of \(\mathbf{A}+\mathbf{I}\). In this work, we are mostly interested in the task of node classification, where each node \(v\in\mathcal{V}\) is associated with a label \(y_{v}\), and the goal is to learn a representation \(\mathbf{h}_{v}\) from which \(y_{v}\) can be easily predicted. To obtain such a representation, GNNs follow a neural message passing scheme (Kipf and Welling, 2017). Specifically, GNNs recursively update the representation of a node by aggregating the representations of its neighbors. For example, the \(l^{\text{th}}\) Graph Convolutional Network (GCN) layer (Kipf and Welling, 2017) can be defined as: \[\mathbf{H}^{(l+1)}=\text{ReLU}(\tilde{\mathbf{A}}\mathbf{H}^{(l)}\mathbf{\Theta}^{(l)}), \tag{1}\] where \(\mathbf{H}^{(l)}\) is the node embedding matrix containing \(\mathbf{h}_{v}\) for each node \(v\) at the \(l^{\text{th}}\) layer, \(\mathbf{H}^{(0)}=\mathbf{X}\), and \(\mathbf{\Theta}^{(l)}\) is the weight matrix of the \(l^{\text{th}}\) layer. A minimal illustrative sketch of this propagation rule is given below.
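The following dense numpy sketch is a concrete reference for Eq. (1); the function name is illustrative, and real implementations use sparse operations.

```
import numpy as np

def gcn_layer(A, H, Theta):
    """One GCN layer, Eq. (1): H' = ReLU(A_norm @ H @ Theta), where
    A_norm = D^{-1/2} (A + I) D^{-1/2} is the normalized adjacency
    with self-loops.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^{-1/2} diagonal
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ Theta, 0.0)      # ReLU
```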
**The Model Editing Problem.** The goal of model editing is to alter a base model's output for a misclassified sample \(x_{e}\), as well as for its similar samples, via model fine-tuning using only a single pair of input \(x_{e}\) and desired output \(y_{e}\), while leaving the model behavior on unrelated inputs intact (Sinitsin et al., 2020; Mitchell et al., 2021, 2022). We are the first to propose the model editing problem for graph data, where decision faults on a small number of critical nodes can lead to significant financial loss and/or fairness concerns. For node classification, suppose a well-trained GNN incorrectly predicts a specific node. **Model editing** is used to correct this undesirable prediction behavior by using the node's features and desired label to update the model. Ideally, model editing ensures that the updated model makes accurate predictions for the specific node and its similar samples while maintaining the model's original behavior on the remaining unrelated inputs. Some model editors, such as the one presented in this paper, require a training phase before they can be used for editing. ## 3 Proposed Methods In this section, we first empirically show that vanilla model editing performs far worse for GNNs than for MLPs due to neighbor propagation (Section 3.1). Intuitively, due to the message-passing mechanism in GNNs, editing the model behavior on a single node can propagate changes across the entire graph, significantly altering the node's original representation. Then, by visualizing the loss landscape, we find that for GNNs even a slight weight perturbation pushes the node representations far away from the original ones (Section 3.2). Based on these observations, we propose a propagation-free GNN editing method called EGNN (Section 3.3). ### 3.1 Motivation: Model Editing may Cry in GNNs **Setting:** We train GCN, GraphSAGE, and MLP on Cora, Flickr, Reddit, and ogbn-arxiv, following the training setup described in Section 5. To evaluate the difficulty of editing, _we ensured that the node to be edited was not present during training_, meaning that _the models were trained inductively._ Specifically, we trained each model on a subgraph containing only the training nodes and evaluated its performance on the validation and test sets of nodes. Next, we selected a misclassified node from the validation set and applied gradient descent only on that node until the model made a correct prediction for it. Following previous work (Sinitsin et al., 2020; Mitchell et al., 2022), we perform 50 independent edits and report the averaged test accuracy before and after performing a single edit. **Results:** As shown in Table 1, we observe that **(1)** GNNs consistently outperform MLP on all the graph datasets before editing. This is consistent with previous graph analysis results: the neural message passing in GNNs exploits the graph topology to benefit node representation learning and thereby classification accuracy. **(2)** After editing, the accuracy drop of GNNs is significantly larger than that of MLP. For example, GraphSAGE has an almost 50% drop in test accuracy on ogbn-arxiv after editing even a single point. MLP with editing even delivers higher overall accuracies on Flickr and ogbn-arxiv compared with the GNN-based approaches. An intuitive explanation is that the slightly fine-tuned weights in MLP mainly affect the target node rather than other unrelated samples. However, due to the message-passing mechanism in GNNs, the edited node representation is propagated over the whole graph and can thus change the decisions on a large set of nodes. These comparison results reveal the unique challenge of editing correlated nodes with GNNs, compared with conventional neural networks working on isolated samples. **(3)** After editing, the test accuracies of GCN, GraphSAGE, and MLP become too low for practical deployment. This is quite different from the model editing problems in computer vision and natural language processing, where the modified models suffer only an acceptable accuracy drop.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & & GCN & GraphSAGE & MLP \\ \hline \multirow{3}{*}{Cora} & w./o. edit & **89.4** & 86.6 & 71.8 \\ & w./ edit & **84.36** & 82.06 & 68.33 \\ & \(\Delta\) Acc. & 5.03\(\downarrow\) & 4.53\(\downarrow\) & **3.46**\(\downarrow\) \\ \hline \multirow{3}{*}{Flickr} & w./o. edit & **51.19** & 49.03 & 46.77 \\ & w./ edit & 13.94 & 17.15 & **36.68** \\ & \(\Delta\) Acc. & 37.25\(\downarrow\) & 31.88\(\downarrow\) & **10.08**\(\downarrow\) \\ \hline \multirow{3}{*}{Reddit} & w./o. edit & 95.52 & **96.55** & 72.41 \\ & w./ edit & **75.20** & 55.85 & 69.86 \\ & \(\Delta\) Acc. & 20.32\(\downarrow\) & 40.70\(\downarrow\) & **2.54**\(\downarrow\) \\ \hline \multirow{3}{*}{ogbn-arxiv} & w./o. edit & **70.20** & 68.38 & 52.65 \\ & w./ edit & 23.70 & 19.06 & **45.15** \\ \cline{1-1} & \(\Delta\) Acc. & 46.49\(\downarrow\) & 49.31\(\downarrow\) & **7.52**\(\downarrow\) \\ \hline \hline \end{tabular} \end{table} Table 1: The test accuracy (%) before ("w./o. edit") and after editing ("w./ edit") on one single data point. \(\Delta\) Acc. is the accuracy drop before and after performing the edit. All results are averaged over 50 independent model edits. The best result is highlighted in **bold**.

### 3.2 Sharp Locality of GNNs through Loss Landscape Intuitively, due to the message-passing mechanism in GNNs, editing the model behavior for a single node can cause the editing effect to propagate across the entire graph, pushing the node representation far from its original one. **Thus, we hypothesize that the difficulty of editing GNNs stems from the neighbor propagation of GNNs.** Model editing aims to correct the prediction of the misclassified node using the cross-entropy loss with the desired label. Intuitively, the large accuracy drop can be interpreted as low similarity between the model predictions before and after editing, which we refer to as the locality. To quantitatively measure the locality, we use the KL divergence between the node representations learned before and after model editing. A higher KL divergence means that, after editing, the node representations are farther from the original ones; in other words, a higher KL divergence implies poorer model locality, which is undesirable in the context of model editing. In particular, we visualize the locality loss landscape for the Cora dataset in Figure 1. We observe several **insights:** (1) GNNs (e.g., GCN and GraphSAGE) suffer from a much sharper loss landscape: even a slight edit of the weights dramatically increases the KL divergence loss. This means GNNs are hard to fine-tune while keeping the locality. (2) MLP shows a flatter loss landscape and demonstrates much better locality, preserving the overall node representations. This is consistent with the accuracy analysis in Table 1, where the accuracy drop of MLP is smaller. To understand more deeply why model editing fails in GNNs, we also provide a pilot theoretical analysis of the KL locality difference before and after model editing for a one-layer GCN and an MLP in Appendix D. We theoretically show that when model editing corrects the model predictions on misclassified nodes, GNNs are susceptible to altering the predictions on other connected nodes, which results in an increased KL divergence difference. ### 3.3 EGNN: Neighbor-Propagation-Free GNN Editing In our previous analysis, we hypothesized that the difficulty of editing GNNs stems from neighbor propagation.
However, as Table 1 suggests, neighbor propagation is necessary for obtaining good performance on graph datasets. On the other hand, MLP stabilizes most of the node representations during model editing, although it has weaker node classification capability. Thus, we need a way to "disable" the propagation during editing while still utilizing the neighbor propagation scheme for node prediction to obtain satisfactory results. Following this motivation, we propose to attach a compact MLP to the well-trained GNN and only modify the MLP during editing. In this way, we can correct the model's predictions through the additional MLP while freezing the neighbor propagation. During inference, the GNN and MLP are used together in tandem to harness the full predictive power of GNNs. The whole algorithm is shown in Algorithm 1.

Figure 1: The loss landscape of various model architectures on the Cora dataset. Similar results can be found in Appendix C.

Figure 2: The overview of EGNN. We fix the backbone of the GNN (in blue), while only updating the small MLP (in orange) during editing. The wrongly predicted nodes are highlighted in orange, and the MLP only requires their node features.

```
procedure MLP TRAINING:
    Input: MLP g_Φ, dataset D, the node embedding h_v for each node v in D
    for t = 1, ..., T do
        Sample x_v, y_v ~ D^train
        L_loc  = KL(h_v + g_Φ(x_v) || h_v)
        L_task = -log p_Φ(y_v | h_v + g_Φ(x_v))
        L = L_task + α · L_loc
        Φ ← Adam(Φ, ∇L)
    end for
end procedure

procedure EGNN EDIT:
    Input: data pair (x_e, y_e) to be edited, the node embedding h_e for node e
    ŷ = argmax_y p_Φ(y | x_e, h_e)
    while ŷ ≠ y_e do
        L = -log p_Φ(y_e | x_e, h_e)
        Φ ← Adam(Φ, ∇L)
        ŷ = argmax_y p_Φ(y | x_e, h_e)
    end while
end procedure
```
**Algorithm 1** Proposed EGNN

**Before editing.** We first stitch a randomly initialized compact MLP to the trained GNN. To mitigate the potential impact of random initialization on the model's prediction, we introduce a training procedure for the stitched MLP, as outlined in the "MLP training" procedure of Algorithm 1: we train the MLP for a few iterations to ensure that it does not significantly alter the model's prediction. With the GNN's weights frozen, we first obtain the node embedding \(\mathbf{h}_{v}\) at the last layer of the trained GNN by running a single forward pass, and then stitch the MLP to the trained GNN. Mathematically, we denote the MLP as \(g_{\Phi}\), where \(\Phi\) are the parameters of the MLP. For a given input sample \(\mathbf{x}_{v},y_{v}\), the model output now becomes \(\mathbf{h}_{v}+g_{\Phi}(\mathbf{x}_{v})\). We calculate two losses based on this prediction, i.e., the task-specific loss \(\mathcal{L}_{\text{task}}\) and the locality loss \(\mathcal{L}_{\text{loc}}\).
Namely, \[\mathcal{L}_{\text{task}} =-\log p_{\boldsymbol{\Phi}}(y_{v}|\boldsymbol{h}_{v}+g_{\boldsymbol{\Phi}}(\boldsymbol{x}_{v})),\] \[\mathcal{L}_{\text{loc}} =\text{KL}(\boldsymbol{h}_{v}+g_{\boldsymbol{\Phi}}(\boldsymbol{x}_{v})\,||\,\boldsymbol{h}_{v}),\] where \(\boldsymbol{h}_{v}+g_{\boldsymbol{\Phi}}(\boldsymbol{x}_{v})\) is the model prediction with the additional MLP and \(p_{\boldsymbol{\Phi}}(y_{v}|\boldsymbol{h}_{v}+g_{\boldsymbol{\Phi}}(\boldsymbol{x}_{v}))\) is the probability of class \(y_{v}\) given by the model. \(\mathcal{L}_{\text{task}}\) is the cross-entropy between the model prediction and the label. \(\mathcal{L}_{\text{loc}}\) is the locality loss, which equals the KL divergence between the original prediction \(\boldsymbol{h}_{v}\) and the prediction with the additional MLP, \(\boldsymbol{h}_{v}+g_{\boldsymbol{\Phi}}(\boldsymbol{x}_{v})\). The final loss \(\mathcal{L}\) is the weighted combination of the two parts, i.e., \(\mathcal{L}=\mathcal{L}_{\text{task}}+\alpha\mathcal{L}_{\text{loc}}\), where \(\alpha\) is the weight of the locality loss. \(\mathcal{L}\) guides the MLP to fit the task while keeping the model prediction unchanged. **When editing.** EGNN **freezes the model parameters of the GNN and only updates the parameters of the MLP.** Specifically, as outlined in the "EGNN edit" procedure of Algorithm 1, we update the parameters of the MLP until the model prediction for the misclassified sample is corrected. Since the MLP only relies on the node features, we can easily perform these updates in mini-batches, which enables us to edit GNNs on large graphs. Lastly, we visualize the KL locality loss landscape of EGNN (including GCN-MLP and SAGE-MLP) in Figure 1. The proposed EGNN shows a flatter loss landscape than both MLP and GNNs, which implies that EGNN preserves the overall node representations better than the other model architectures. ## 4 Related Work and Discussion Due to the page limit, below we discuss the related work on model editing. We also discuss limitations in Appendix B. **Model Editing.** Many approaches have been proposed for model editing. The most straightforward method adopts standard fine-tuning to update model parameters based on misclassified samples while preserving model locality by constraining how far the parameters travel in weight space (Zhu et al., 2020; Sotoudeh and Thakur, 2019). Sinitsin et al. (2020) introduce meta-learning to find a pre-trained model that can be rapidly and easily fine-tuned for model editing. Another line of work facilitates model editing by relying on external learned editors that modify the model under several constraints (Mitchell et al., 2021; Hase et al., 2021; De Cao et al., 2021; Mitchell et al., 2022). Editing of the activation map has been proposed to correct misclassified samples (Dai et al., 2021; Meng et al., 2022), based on the belief that knowledge is attributed to model neurons. While all these works either update base model parameters or import external separate modules to induce the desired prediction change, the data they consider is i.i.d., and they may not work well on graph data due to the essential node interactions during neighborhood propagation. In this paper, we propose EGNN, which uses a stitched MLP module to edit the output space of the base GNN model for node classification tasks. The key insight behind this solution is the sharp locality of GNNs, i.e., the prediction of GNNs can be easily altered after model editing.
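Before turning to the experiments, here is a minimal, hypothetical PyTorch-style sketch of the EGNN objectives and edit loop from Section 3.3. It assumes \(\mathbf{h}_{v}\) is the frozen GNN's last-layer output in logit space and that \(g_{\Phi}\) adds a correction before the softmax; shapes, names, and hyperparameters are illustrative assumptions rather than the authors' released code.

```
import torch
import torch.nn.functional as F

def egnn_loss(mlp, x, h, y, alpha=0.1):
    """Combined objective L = L_task + alpha * L_loc for the stitched MLP.
    x: (n, d) node features; h: (n, C) frozen GNN logits; y: (n,) labels.
    """
    log_p_new = F.log_softmax(h + mlp(x), dim=-1)   # prediction with the MLP
    log_p_old = F.log_softmax(h, dim=-1)            # original GNN prediction
    p_new = log_p_new.exp()
    task = F.nll_loss(log_p_new, y)                 # cross-entropy L_task
    loc = (p_new * (log_p_new - log_p_old)).sum(-1).mean()  # KL(new || old)
    return task + alpha * loc

def egnn_edit(mlp, x_e, h_e, y_e, lr=1e-2, max_steps=100):
    """Edit step: the GNN stays frozen; only the MLP is updated until the
    prediction for the misclassified node flips to the desired class y_e."""
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(max_steps):
        logits = h_e + mlp(x_e)                     # shape (1, C)
        if logits.argmax(dim=-1).item() == int(y_e):
            break                                   # prediction corrected
        loss = F.cross_entropy(logits, torch.tensor([int(y_e)]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mlp
```

Because the MLP here consumes only node features, the edit loop never touches the adjacency structure, which is what makes mini-batched editing on large graphs possible.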
## 5 Experiments The experiments are designed to answer the following research questions. **RQ1:** Can EGNN correct wrong model predictions? Moreover, what is the difference in accuracy before and after editing with EGNN? **RQ2:** Can the edits generalize to correct the model prediction on other similar inputs? **RQ3:** What are the time and memory requirements of EGNN when performing edits? ### Experimental Setup **Datasets and Models.** To evaluate EGNN, we adopt four small-scale and four large-scale graph benchmarks from different domains. For small-scale datasets, we adopt Cora, A-computers (Shchur et al., 2018), A-photo (Shchur et al., 2018), and Coauthor-CS (Shchur et al., 2018). For large-scale datasets, we adopt Reddit (Hamilton et al., 2017), Flickr (Zeng et al., 2020), _ogbn-arxiv_ (Hu et al., 2020), and _ogbn-products_ (Hu et al., 2020). We integrate EGNN with two popular models: GCN (Kipf and Welling, 2017) and GraphSAGE (Hamilton et al., 2017). _To avoid confusion, GCN and GraphSAGE are both trained with the whole graph at each step._ We evaluate EGNN under the **inductive setting**: we train the model on a subgraph containing only the training nodes and evaluate it on the whole graph. Details about the hyperparameters and datasets are in Appendix A. **Compared Methods.** We compare our EGNN editor with the following two baselines: the vanilla gradient descent editor (GD) and the Editable Neural Network editor (ENN) (Sinitsin et al., 2020). GD is the same editor we used in our preliminary analysis in Section 3. **We note that other model editors, e.g., MEND (Mitchell et al., 2021) and SERAC (Mitchell et al., 2022), are tailored for NLP applications and cannot be directly applied to the graph domain.** Specifically, GD applies gradient descent **on the parameters of the GNN** until the GNN makes the right prediction. ENN first trains **the parameters of the GNN** for a few steps to prepare it for subsequent edits; then, similar to the GD editor, it applies gradient descent **on the parameters of the GNN** until the GNN makes the right prediction. For EGNN, we only train **the stitched MLP** for a few steps, and we only update the **weights of the MLP** during edits. Detailed hyperparameters are listed in Appendix A. **Evaluation Metrics.** Following previous work (Sinitsin et al., 2020; Mitchell et al., 2022, 2021), we evaluate the effectiveness of different methods by the following three metrics. **DrawDown (DD)**: the mean absolute difference of test accuracy before and after performing an edit; a smaller drawdown indicates better editor locality. **Success Rate (SR)**: the fraction of edits for which the editor successfully corrects the model prediction. **Edit Time**: the wall-clock time of a single edit that corrects the model prediction. ### The Effectiveness of EGNN for Editing GNNs In many real-world applications, it is common to encounter situations where a trained model produces incorrect predictions on unseen data. It is crucial to address these errors as soon as they are identified.
To assess the usage of editors in real-world applications (**RQ1**), **we select misclassified nodes from the validation set, which is not seen during the training process.** We then employ the editor to correct the model's predictions for those misclassified nodes, and measure the drawdown and edit success rate on the test set. The results after editing a single node are shown in Table 2 and Table 3.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multirow{2}{*}{Editor} & \multicolumn{3}{c}{Cora} & \multicolumn{3}{c}{A-computers} & \multicolumn{3}{c}{A-photo} & \multicolumn{3}{c}{Coauthor-CS} \\ \cline{3-14} & & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) \\ \hline \multirow{3}{*}{GCN} & GD & 84.37\(\pm\)5.84 & 5.03\(\pm\)6.40 & 1.0 & 44.78\(\pm\)22.41 & 43.09\(\pm\)22.32 & 1.0 & 28.70\(\pm\)21.26 & 65.08\(\pm\)20.13 & 1.0 & 91.07\(\pm\)3.23 & 3.30\(\pm\)2.22 & 1.0 \\ & ENN & 37.16\(\pm\)3.80 & 52.24\(\pm\)4.76 & 1.0 & 55.11\(\pm\)10.99 & 72.36\(\pm\)10.87 & 1.0 & 16.71\(\pm\)14.81 & 77.07\(\pm\)15.20 & 1.0 & 4.94\(\pm\)3.78 & 89.43\(\pm\)3.34 & 1.0 \\ & EGNN & **87.80\(\pm\)2.34** & **1.80\(\pm\)2.13** & **1.0** & **82.85\(\pm\)5.20** & **2.32\(\pm\)5.11** & 0.98 & **91.97\(\pm\)5.55** & **2.39\(\pm\)5.34** & **1.0** & **94.54\(\pm\)0.07** & **-0.17\(\pm\)0.07** & **1.0** \\ \hline \multirow{3}{*}{GraphSAGE} & GD & 82.06\(\pm\)4.33 & 4.54\(\pm\)5.23 & 1.0 & 21.68\(\pm\)20.98 & 61.15\(\pm\)20.33 & 1.0 & 38.98\(\pm\)30.24 & 55.32\(\pm\)29.35 & 1.0 & 91.55\(\pm\)5.55 & 50.15\(\pm\)5.32 & 1.0 \\ & ENN & 31.61\(\pm\)4.5 & 53.44\(\pm\)2.23 & 1.0 & 16.89\(\pm\)16.98 & 65.94\(\pm\)16.75 & 1.0 & 15.06\(\pm\)11.92 & 79.24\(\pm\)11.25 & 1.0 & 13.71\(\pm\)2.73 & 81.45\(\pm\)2.11 & 1.0 \\ & EGNN & **85.65\(\pm\)1.23** & **0.55\(\pm\)1.26** & **1.0** & **84.34\(\pm\)4.84** & **2.72\(\pm\)5.03** & 0.94 & **92.53\(\pm\)2.90** & **1.83\(\pm\)3.22** & **1.0** & **95.27\(\pm\)10.08** & **-0.01\(\pm\)0.10** & **1.0** \\ \hline \hline \end{tabular} \end{table} Table 2: The results on four small-scale datasets after applying one single edit. The reported numbers are averaged over 50 independent edits. **SR** is the edit success rate, **Acc** is the test accuracy after editing, and **DD** is the test drawdown. "OOM" is the out-of-memory error.

We observe that _unlike editing Transformers on text data (Mitchell et al., 2021, 2022; Huang et al., 2023), all editors can successfully correct the model prediction in the graph domain_. As shown in Table 3, all editors have a 100% success rate when editing GNNs. In contrast, for transformers, the edit success rate is often less than 50%, and the drawdown is much smaller than for GNNs (Mitchell et al., 2021, 2022; Huang et al., 2023). This observation suggests that, **unlike transformers, GNNs can be easily perturbed to produce correct predictions, but at the cost of a huge drawdown on other unrelated nodes. Thus, the main challenge lies in maintaining the locality between predictions for unrelated nodes before and after editing.** This observation aligns with our initial analysis, which highlighted the interconnected nature of nodes and the fact that an edit on a single node may propagate throughout the entire graph. _EGNN significantly outperforms both GD and ENN in terms of the test drawdown_.
This is mainly because both GD and ENN try to correct the model's predictions by updating the parameters of the GNN, a process that inevitably relies on neighbor propagation. In contrast, EGNN attains much better test accuracy after editing. Notably, for Reddit, the accuracy drop decreases from roughly 80% to \(\approx 1\%\), which is significantly better than the baselines. This is because EGNN decouples neighbor propagation from the editing process. Interestingly, ENN is significantly worse than the vanilla editor, i.e., GD, when applied to GNNs. As shown in Appendix C, this discrepancy arises from the ENN training procedure, which significantly compromises the model's performance in order to prepare it for editing. In Figures 3, 4, and 5 we present the ablation study under the sequential setting. This is a more challenging scenario where the model is edited sequentially as errors arise. In particular, we plot the test accuracy drawdown against the number of sequential edits for GraphSAGE on the ogbn-arxiv dataset. We observe that _EGNN consistently surpasses both GD and ENN in the sequential setting_. However, the drawdown is considerably greater than in the single-edit setting. For instance, EGNN exhibits a 0.64% drawdown for GraphSAGE on the ogbn-arxiv dataset in the single-edit setting, which escalates to a drawdown of up to 20% in the sequential-edit setting. These results also highlight the difficulty of maintaining the locality of GNN predictions after editing.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multirow{2}{*}{Editor} & \multicolumn{3}{c}{Flickr} & \multicolumn{3}{c}{Reddit} & \multicolumn{3}{c}{ogbn-arxiv} & \multicolumn{3}{c}{ogbn-products} \\ & & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) & Acc\(\uparrow\) & DD\(\downarrow\) & SR\(\uparrow\) \\ \hline \multirow{3}{*}{GCN} & GD & 13.95\(\pm\)11.0 & 37.25\(\pm\)10.2 & 1.0 & 75.20\(\pm\)12.3 & 20.32\(\pm\)11.3 & 1.0 & 23.71\(\pm\)16.9 & 46.50\(\pm\)14.9 & 1.0 & OOM & OOM & 0 \\ & ENN & 25.82\(\pm\)14.9 & 25.38\(\pm\)16.9 & 1.0 & 11.65\(\pm\)1.5 & 84.36\(\pm\)1.1 & 1.0 & 16.39\(\pm\)7.7 & 53.62\(\pm\)6.7 & 1.0 & OOM & OOM & 0 \\ & EGNN & **44.91\(\pm\)12.2** & **6.34\(\pm\)10.3** & **1.0** & **94.46\(\pm\)10.4** & 1.03\(\pm\)0.6 & **1.0** & **67.34\(\pm\)8.7** & **2.67\(\pm\)4.4** & **1.0** & **74.19\(\pm\)3.4** & **0.81\(\pm\)0.23** & **1.0** \\ \hline \multirow{3}{*}{GraphSAGE} & GD & 17.16\(\pm\)12.2 & 31.88\(\pm\)12.2 & 1.0 & 55.85\(\pm\)22.5 & 40.71\(\pm\)20.3 & 1.0 & 19.07\(\pm\)14.1 & 36.68\(\pm\)10.1 & 1.0 & OOM & OOM & 0 \\ & ENN & 28.73\(\pm\)5.6 & 20.31\(\pm\)5.6 & 1.0 & 5.88\(\pm\)3.9 & 90.68\(\pm\)4.3 & 1.0 & 8.14\(\pm\)8.6 & 47.61\(\pm\)7.6 & 1.0 & OOM & OOM & 0 \\ & EGNN & **43.52\(\pm\)10.8** & **5.12\(\pm\)10.8** & **1.0** & **96.50\(\pm\)0.1** & **0.05\(\pm\)0.1** & **1.0** & **67.91\(\pm\)2.9** & **0.64\(\pm\)2.3** & **1.0** & **76.27\(\pm\)0.6** & **0.17\(\pm\)0.10** & **1.0** \\ \hline \hline \end{tabular} \end{table} Table 3: The results on four large-scale datasets after applying one single edit. "OOM" is the out-of-memory error.

Figure 3: Sequential edit drawdown of GCN on four small scale datasets.

### The Generalization of the Edits of EGNN Ideally, we aim for the edit applied to a specific node to generalize to similar nodes while preserving the model's initial behavior for unrelated nodes.
To evaluate the generalization of the EGNN edits, we conduct the following experiment: **(1)** We first select a particular group (i.e., class) of nodes based on their labels. **(2)** Next, we randomly flip the labels of 10% of the training nodes within this group and train a GNN on the modified training set. **(3)** For each flipped training node, we correct the trained model's prediction for that node back to its original class and assess whether the model's predictions for other nodes in the same group are also corrected. If the model's predictions for other nodes in the same class are also corrected after modifying a single flipped node, it indicates that the EGNN edits can effectively generalize to address similar erroneous behavior of the model.

Figure 4: Sequential edit drawdown of GraphSAGE on four small scale datasets.

Figure 5: Sequential edit test drawdown of GCN and GraphSAGE on the Reddit and ogbn-arxiv datasets.

Figure 6: The subgroup and overall test accuracy before and after one single edit. The results are averaged over 50 independent edits.

Figure 7: T-SNE visualizations of GNN embeddings before and after edits on the Cora dataset. The flipped nodes are all from class 0, which is marked in red color.

To answer **RQ2**, we conduct the above experiment and report the **subgroup and overall test accuracy** after performing a single edit on a flipped training node. The results are shown in Figure 6. We observe that, _from Figure 6(a) and Figure 6(c), EGNN significantly improves the subgroup accuracy after performing even a single edit_. Notably, the subgroup accuracy is significantly lower than the overall accuracy. For example, on the Flickr dataset, both GCN and GraphSAGE have a subgroup accuracy of less than \(5\%\) before editing. This is mainly because the GNN is trained on a graph where \(10\%\) of the training-node labels in the subgroup are flipped. However, even after editing a single node, the subgroup accuracy is significantly boosted. These results indicate that the EGNN edits can effectively generalize to address wrong predictions on other nodes in the same group. In Figure 7, we also visualize the node embeddings before and after editing by EGNN on the Cora dataset. All of the flipped nodes are from class 0, which is marked in red in Figure 7. Before editing, the red cluster has many outliers that lie in the embedding space of other classes, mainly because the labels of some of the nodes in this class are flipped. In contrast, after editing, the nodes in the red cluster become significantly closer to each other, with a substantial reduction in the number of outliers. ### The Efficiency of EGNN We want to patch the model as soon as possible to correct errors as they appear. Thus, ideally, the editor should be efficient and scalable to large graphs. We summarize the edit time and memory required for performing the edits in Table 4. We observe that EGNN is about \(2\sim 5\times\) faster than the GD editor in terms of wall-clock edit time. This is because EGNN only updates the parameters of the MLP and entirely avoids the expensive graph-based sparse operations (Liu et al., 2022b, a; Han et al., 2023b).
Also, updating the parameters of a GNN requires storing the node embeddings in memory, which is directly proportional to the number of nodes in the graph and can be exceedingly expensive for large graphs. However, with EGNN, we only use node features for updating the MLP, meaning that memory consumption does not depend on the graph size. Consequently, EGNN can efficiently scale up to handle graphs with millions of nodes, e.g., ogbn-products, whereas the vanilla editor raises an OOM error. ## 6 Conclusion In this paper, we explore a new and important problem, i.e., GNN model editing for node classification. We first empirically observe that the vanilla model editing method may not perform well due to node aggregation, and then theoretically investigate the underlying reason through the lens of the locality loss landscape with quantitative analysis. Furthermore, we propose EGNN to correct misclassified samples while preserving the predictions on other unrelated nodes, via stitching a trainable MLP. In this way, EGNN integrates the predictive power of GNNs with an editing-friendly MLP.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & \multirow{2}{*}{Editor} & \multicolumn{2}{c}{Flickr} & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{ogbn-arxiv} & \multicolumn{2}{c}{ogbn-products} \\ & & Edit Time (ms) & Peak Memory (MB) & Edit Time (ms) & Peak Memory (MB) & Edit Time (ms) & Peak Memory (MB) & Edit Time (ms) & Peak Memory (MB) \\ \hline \multirow{2}{*}{GCN} & GD & 379.86 & 707 & 1835.24 & 3429 & 663.17 & 967 & OOM & OOM \\ & EGNN & 246.63 & 315 & 765.15 & 2089 & 299.71 & 248 & 5122.53 & 5747 \\ \hline \multirow{2}{*}{GraphSAGE} & GD & 712.07 & 986 & 4781.92 & 5057 & 668.77 & 1109 & OOM & OOM \\ & EGNN & 389.37 & 328 & 1516.68 & 2252 & 174.82 & 260 & 5889.59 & 6223 \\ \hline \hline \end{tabular} \end{table} Table 4: The edit time and memory required for editing.
2305.01548
Explainable Conversational Question Answering over Heterogeneous Sources via Iterative Graph Neural Networks
In conversational question answering, users express their information needs through a series of utterances with incomplete context. Typical ConvQA methods rely on a single source (a knowledge base (KB), or a text corpus, or a set of tables), thus being unable to benefit from increased answer coverage and redundancy of multiple sources. Our method EXPLAIGNN overcomes these limitations by integrating information from a mixture of sources with user-comprehensible explanations for answers. It constructs a heterogeneous graph from entities and evidence snippets retrieved from a KB, a text corpus, web tables, and infoboxes. This large graph is then iteratively reduced via graph neural networks that incorporate question-level attention, until the best answers and their explanations are distilled. Experiments show that EXPLAIGNN improves performance over state-of-the-art baselines. A user study demonstrates that derived answers are understandable by end users.
Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum
2023-05-02T15:52:53Z
http://arxiv.org/abs/2305.01548v2
Explainable Conversational Question Answering over Heterogeneous Sources via Iterative Graph Neural Networks ###### Abstract. In conversational question answering, users express their information needs through a series of utterances with incomplete context. Typical ConvQA methods rely on a single source (a knowledge base (KB), _or_ a text corpus, _or_ a set of tables), thus being unable to benefit from increased answer coverage and redundancy of multiple sources. Our method Explaignn overcomes these limitations by integrating information from a mixture of sources with user-comprehensible explanations for answers. It constructs a heterogeneous graph from entities and evidence snippets retrieved from a KB, a text corpus, web tables, _and_ infoboxes. This large graph is then iteratively reduced via graph neural networks that incorporate question-level attention, until the best answers and their explanations are distilled out. Experiments show that Explaignn improves performance over state-of-the-art baselines. A user study demonstrates that derived answers are understandable by end users.

Footnote 1: Code, data, and demo at [https://explaigm.mpi-inf.mpg.de](https://explaigm.mpi-inf.mpg.de).

## 1. Introduction

**Motivation**. In conversational question answering (ConvQA), users issue a sequence of questions, and the ConvQA system computes crisp answers [39, 45, 47]. The main challenge in ConvQA systems is that inferring answers requires understanding the current context, since incomplete, ungrammatical and informal _follow-up questions_ make sense only when considering the conversation history so far. Existing ConvQA models mostly focused on using either (i) a curated knowledge base (KB) [22, 23, 24, 26, 31, 51], or (ii) a text corpus [5, 17, 38, 39, 41], or (iii) a set of web tables [18, 33] as the source to compute answers. These methods are not geared for tapping into multiple sources jointly, which is often crucial as one source could compensate for gaps in others. Consider the conversation:

\(q^{1}\): _Who wrote the book Angels and Demons?_ \(a^{1}\): Dan Brown
\(q^{2}\): _the main character in his books?_ \(a^{2}\): Robert Langdon
\(q^{3}\): _who played him in the films?_ \(a^{3}\): Tom Hanks
\(q^{4}\): _to which headquarters was robert flown in the book?_ \(a^{4}\): CERN
\(q^{5}\): _how long is the novel?_ \(a^{5}\): 768 pages
\(q^{6}\): _what about the movie?_ \(a^{6}\): 2 h 18 min

Some of these questions can be conveniently answered using a KB (\(q^{1}\), \(q^{3}\)), tables (\(q^{5}\), \(q^{6}\)), or infoboxes (\(q^{1}\), \(q^{3}\)) as they ask about salient attributes of entities, and some via text sources (\(q^{2}\), \(q^{3}\), \(q^{4}\)) as they are more likely to be contained in book contents and discussion. However, none of these individual sources represents the whole information required to answer _all questions_ of this conversation. Recently, there has been preliminary work on ConvQA over a mixture of input sources [9, 10]. This improves the recall of the QA system via higher _answer coverage_, and the partial _answer redundancy_ across sources helps improve precision.
**Limitations of state-of-the-art methods**. Existing methods for ConvQA over heterogeneous sources rely on neural sequence-to-sequence models to compute answers [9, 10]. However, this has two significant limitations: (i) sequence-to-sequence models are _not explainable_, as they only generate strings as outputs, making it infeasible for users to decide whether to trust the answer; (ii) sequence-to-sequence models require inputs to be cast into token sequences first, which loses insightful information on relationships between evidences [64]. Such inter-evidence connections can be helpful in separating relevant information from noise.

**Approach**. We introduce Explaignn1 (EXPLAInable Conversational Question Answering over Heterogeneous Graphs via Iterative Graph Neural Networks), a flexible pipeline that can be configured for optimizing _performance, efficiency, and explainability_ of ConvQA systems over heterogeneous sources. The proposed method operates in three stages:

Footnote 1: Code, data, and demo at [https://explaigm.mpi-inf.mpg.de](https://explaigm.mpi-inf.mpg.de).

1. Derivation of a self-contained _structured representation (SR)_ of the user's information need (or intent) from the potentially incomplete input utterance and the conversational context, making the entities, relation, and expected answer type explicit.
2. Retrieval of relevant evidences and answer candidates from heterogeneous information sources: a curated KB, a text corpus, a collection of web tables, and infoboxes.
3. Construction of a graph from these evidences, as the basis for applying graph neural networks (GNNs). The GNNs are iteratively applied for computing the best answers and supporting evidences in a small number of steps. A key novelty is that each iteration reduces the graph in size, and only the final iteration yields the answer and a small user-comprehensible set of explanatory evidences.

Our overarching goal is to provide end-user _explain_ability to _GNN_ inference by _iteratively_ reducing the graph size, so that the final answers can indeed be claimed to be causal w.r.t. the remaining evidences, hence the name Explaignn. An example of such GNN-based reduction is in Fig. 1.

**Contributions**. We make the following salient contributions:

* Proposing a new method for ConvQA over heterogeneous sources, with a focus on computing explainable answers.
* Devising a mechanism for iterative application of GNN inference to reduce such graphs until the best answers and their explanatory evidences are obtained.
* Developing an attention mechanism which ensures that during message passing only question-relevant information is spread over the local neighborhoods.

## 2. Concepts and Notation

We now introduce salient concepts and notation that will help understand the remainder of the paper. Table 1 contains the important notation.

**Question**. A _question_ \(q\) asks about _factoid_ information, like _Who wrote the book Angels and Demons?_ (intent is explicit), or _How long is the novel?_ (intent is implicit).

**Answer**. An _answer_ \(a\) to \(q\) can be an entity (like Tom Hanks), or a literal (like 768 pages).

**Conversation**. A _conversation_ is a sequence of questions and answers \(\langle q^{1},a^{1},q^{2},a^{2},\dots\rangle\). The initial question \(q^{1}\) is complete, i.e. makes the information need explicit. The follow-up questions \(q^{t}\) (\(t>1\)) may be incomplete, building upon the ongoing conversational history, therefore leaving context information implicit.

**Turn**.
A conversation turn \(t\) comprises a \(\langle q^{t},a^{t}\rangle\) pair.

**Knowledge base**. A curated _knowledge base_ is defined as a set of facts. Each fact consists of a subject, a predicate, an object, and an optional series of \(\langle\)qualifier-predicate, qualifier-object\(\rangle\) pairs: \(\langle s,p,o;qp_{1},qo_{1};qp_{2},qo_{2};\ldots\rangle\). An example fact is: (Angels and Demons, cast member, Tom Hanks; character, Robert Langdon).

**Text corpus**. A _text corpus_ consists of a set of text documents.

**Table**. A _table_ is a structured form of representing information, and is typically organized into a grid of rows and columns. Individual rows usually record information corresponding to specific entities, while columns refer to specific attributes for these entities. The row header, column header, and cell hold the entity name, attribute name, and attribute value, respectively.

**Infobox**. An _infobox_ consists of several entries that are \(\langle\)attribute name, attribute value\(\rangle\) pairs, and provides salient information on a certain entity. An infobox can be perceived as a special instantiation of a table, recording information on a single entity, and consisting of exactly two columns and a variable number of rows.

**Evidence**. An _evidence_ \(\epsilon\) is a short text snippet expressing factual information, and can be retrieved from a KB, a text corpus, a table, or an infobox. To be specific, _evidences_ are verbalized KB-facts, text-sentences, table-records, or infobox-entries.

Figure 1. Toy heterogeneous graph for answering \(q^{3}\), showing two pruning iterations. The graph is iteratively reduced by GNN inference to identify the key evidences. The subgraph surrounded by the blue dotted line is the result of the first iteration, while the green line indicates the graph after the second. From this smaller subgraph, the final answer (Tom Hanks) is inferred.

**Structured representation**. The _structured representation (SR)_ (Kumar et al., 2017) for \(q\) is an intent-explicit version of the question. The SR represents the current question using four slots: (i) _context entity_, (ii) _question entity_, (iii) _relation_, (iv) _expected answer type_. This intent-explicit representation can be represented in linear form as a single string, using delimiters to separate the slots ('|' in our case). The SR for \(q^{3}\) is: \[\begin{array}{l}\texttt{(Angels and Demons|Robert Langdon|}\\ \texttt{who played him in the films|human)}\end{array}\] In this example, Angels and Demons is the context entity, Robert Langdon the question entity, who played him in the films the relation, and human the expected answer type. We consider a relaxed notion of relations, in the sense of not being tied to KB terminology that canonicalizes textual relations to predicates. This allows for softer matching in evidences, and for answering questions for which the information cannot easily be represented by such predicates.

## 3. Overview

The architecture of Explaignn (Fig. 2) follows the pipeline of Convinse (Kumar et al., 2017): (i) an intent-explicit structured representation (SR) of the current information need is generated, (ii) evidences are retrieved from heterogeneous sources, and (iii) this large set of relevant evidences is used for answering the question and providing explanatory evidences.
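For illustration, the four-slot SR and its '|'-delimited linearization can be captured in a few lines of Python; the class and method names below are our own and not from the paper's code.

```python
from dataclasses import dataclass

@dataclass
class StructuredRepresentation:
    """Four-slot, intent-explicit form of a conversational question."""
    context_entity: str
    question_entity: str
    relation: str
    answer_type: str

    def linearize(self, delim: str = "|") -> str:
        # Single-string form with '|' separating the slots, as described above.
        return delim.join(
            (self.context_entity, self.question_entity, self.relation, self.answer_type)
        )

sr_q3 = StructuredRepresentation(
    context_entity="Angels and Demons",
    question_entity="Robert Langdon",
    relation="who played him in the films",
    answer_type="human",
)
print(sr_q3.linearize())
# -> Angels and Demons|Robert Langdon|who played him in the films|human
```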
### Question understanding

We follow (Kumar et al., 2017) and generate a structured representation (SR) capturing the complete intent of the current question within the conversational context (the SR was loosely inspired by literature on quantity queries (Kumar et al., 2017)). For generating SRs, we leverage a fine-tuned auto-regressive sequence-to-sequence model (BART (BART, 2017)).

**Preventing hallucination**. We propose a novel mechanism to avoid _hallucinations_ (Sanderson, 2017) in SRs. For \(q^{3}\) of the running example, the trained model could generate Robert de Niro as the (topically unrelated) question entity of the output SR: (Dan Brown|Robert de Niro|who played him in the films|human). This would lead the entire QA system astray. The SR is supposed to represent the information need on the _surface level_, and is therefore expected to use the vocabulary present in the input (the conversational history and the current question). This makes it possible to identify hallucinations: output words that are absent from the entire conversation so far indicate such a situation. To fix this, we generate the top-\(k\) SRs (\(k\)=10 in experiments), and choose the highest-ranked SR that does not include any hallucinated words. Note that the expected answer type is an exception here: it may, by design, not be present in the input. So we remove this slot before performing the hallucination check. Answer types are often not made explicit but substantially help the QA system (Kumar et al., 2017).

### Evidence retrieval

_KB evidences_ and entity disambiguations are obtained via running Clocq (Kumar et al., 2017) on the SR (without delimiters). _Text, table and infobox evidences_ are obtained by mapping the disambiguated KB-entities to Wikipedia pages, which are then parsed for extracting text-sentences, table-records, and infobox-entries corresponding to the respective entities. Evidences from the KB, web tables, or infoboxes, being natively in (semi-)structured form, are then _verbalized_ (Sanderson, 2017) into token sequences (as in (Kumar et al., 2017)). Examples can be seen inside Fig. 1, where each evidence is tagged with its source.

**Use of SR slot labels.** Convinse considers all entities in the SR during retrieval, regardless of the slot in which they appear. In contrast, Explaignn _restricts_ the entities by retaining only those mentioned within the _context entity_ or _question entity_ slots. Evidences are then only retrieved for this restricted set of entities. This prunes noisy disambiguations in the relation and type slots.

## 4. Heterogeneous Answering

We first describe the construction of the heterogeneous answering graph (Sec. 4.1). Next, we present the proposed question-aware GNN architecture, consisting of the encoder (Sec. 4.2), the message passing procedure (Sec. 4.3), the answer candidate scoring (Sec. 4.4), and the multi-task learning mechanism for GNN training (Sec. 4.5).

### Graph construction

Given the evidences retrieved in the previous stage, we construct a _heterogeneous answering graph_ that has two types of nodes: entities and evidences.
The graph contains _textual information_ in the form of entity labels and evidence texts, as well as the _connections_ between these two kinds of nodes. Specifically, an entity node \(e\) is connected to an evidence node \(\epsilon\) if \(e\) is mentioned in \(\epsilon\). There are no direct edges between pairs of entities, or between pairs of evidences. An example heterogeneous graph is shown in Fig. 1.

\begin{table}
\begin{tabular}{l l}
\hline \hline
**Notation** & **Concept** \\
\hline
\(q^{t}\), \(a^{t}\) & Question and answer at turn \(t\) \\
\(SR^{t}\) & Structured representation at turn \(t\) \\
\hline
\(e\), \(\epsilon\) & Entity, evidence \\
\(E\), \(\mathcal{E}\) & Sets of entity nodes and evidence nodes in the graph \\
\(\mathcal{N}(e)\) & Evidences in the 1-hop neighborhood of entity \(e\) \\
\(\mathcal{N}(\epsilon)\) & Entities in the 1-hop neighborhood of evidence \(\epsilon\) \\
\hline
\(\mathbf{e}\), \(\boldsymbol{\epsilon}\), \(\mathbf{SR}\) & Encoding of \(e\) / \(\epsilon\) / \(SR\) \\
\(\mathbf{e}^{l}\), \(\boldsymbol{\epsilon}^{l}\) & Encoding of \(e\) / \(\epsilon\) after \(l\) GNN layers \\
\(d\) & Encoding dimension \\
\(\alpha_{e,\epsilon}\), \(\alpha_{\epsilon,e}\) & SR-attention of \(e\) (\(\epsilon\)) for updating \(\epsilon\) (\(e\)) \\
\(m_{e}\), \(m_{\epsilon}\) & Aggregated messages passed to \(e\) / \(\epsilon\) \\
\(s_{e}\), \(s_{\epsilon}\) & Score for \(e\) / \(\epsilon\) \\
\(w_{e}\), \(w_{\epsilon}\) & Multi-task weight for answer / evidence score prediction \\
\(\mathcal{L}\), \(\mathcal{L}_{e}\), \(\mathcal{L}_{\epsilon}\) & Loss functions: total, entity relevance, evidence relevance \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Notation for salient concepts in Explaignn.

Figure 2. An overview of the Explaignn pipeline, illustrating the three main stages of the approach.

**Inducing connections between retrieved evidences**. Shared entities are the key elements that induce _relationships_ between the initial plain _set_ of retrieved evidences. So one requires entity markup on the verbalized evidences coming from the different sources, grounded to a KB for _canonicalization_. Note that during evidence verbalization, the original formats are not discarded. Thus, for KB-facts, entity mappings are already known. For text, table, and infobox evidences from Wikipedia, we link anchor texts to their referenced entity pages. These are then mapped to their corresponding KB-entities. In the absence of anchor texts, named entity recognition and disambiguation systems can be used (Han et al., 2017; Wang et al., 2018). Dates and years that appear in evidences are detected using regular expressions, and are added as entities to the graph as well. In Fig. 1, entity mentions are underlined within evidences.

### Node encodings

GNNs incrementally update node encodings within local neighborhoods, leveraging message passing algorithms. However, these node encodings first have to be initialized using an encoder.

**Evidence encodings**. For the initial encoding of the nodes, we make use of cross-encodings (Wang et al., 2018) (originally proposed for sentence pair classification tasks in (Krizhevsky et al., 2015)). The evidence text, concatenated with the SR, is fed into a pre-trained language model. By using SR-specific cross-encodings, we ensure that the node encodings capture the information relevant for the current question, as represented by the SR. The encodings obtained for the individual tokens are averaged, yielding the initial evidence encoding \(\boldsymbol{\epsilon}^{0}\in\mathbb{R}^{d}\).
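As an illustration of such cross-encodings, the following sketch pairs an evidence text with the SR using the Hugging Face `transformers` library and mean-pools the token encodings. DistilRoBERTa is the encoder named later in the experimental setup (Sec. 6.1), but the exact pairing format and the inference-only `no_grad` usage here are our simplifying assumptions; in the paper, the encoder parameters are updated during GNN training.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilroberta-base")
lm = AutoModel.from_pretrained("distilroberta-base")   # d = 768

def cross_encode(evidence: str, sr: str) -> torch.Tensor:
    """Initial node encoding: mean over token encodings of the (evidence, SR) pair."""
    batch = tok(evidence, sr, return_tensors="pt", truncation=True)
    with torch.no_grad():                              # inference-time illustration
        out = lm(**batch).last_hidden_state            # (1, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)       # ignore padding tokens
    return (out * mask).sum(1) / mask.sum(1)           # (1, 768) averaged encoding

eps0 = cross_encode(
    "Angels and Demons, cast member, Tom Hanks; character, Robert Langdon",
    "Angels and Demons|Robert Langdon|who played him in the films|human",
)
```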
**Entity encodings**. The entity encodings are derived analogously, using cross-encodings with the SR. We further append the KB-type of an entity to the entity label with a separator, before feeding the respective token sequence into the language model. Including entity types is beneficial, and often crucial, for the reasons below: (i) The cross-encoding can leverage the attention between the _expected answer type_ in the SR and the _entity type_, which can be viewed as a soft matching between the two. (ii) When there are multiple entities with the same label, the entity type may be a discriminating factor downstream for each of these entities (e.g., there are three entities of different types named "_Robert Langdon_" in Fig. 1). (iii) For long-tail entities, the entity type can add decisive informative value to the plain entity label. Analogous to evidence nodes, the encodings of the individual tokens are averaged to obtain the entity encoding \(\mathbf{e}^{0}\in\mathbb{R}^{d}\).

**SR encoding**. The SR is also encoded via the language model, averaging the token encodings to obtain the SR encoding \(\mathbf{SR}\in\mathbb{R}^{d}\). Note that the same language model is used for the initial encodings of evidences, entities and the SR. The parameters of this language model are updated during GNN training, to ensure that the encoder adapts to the syntactic structure of the SR.

### Message passing

The core part of the GNN is the _message passing_ (Wang et al., 2018) procedure. In this step, information is propagated among neighboring nodes, leveraging the graph structure. Given our graph design, in each message passing step information is shared between evidences and the connected entities. Again, we aim to focus on question-relevant information (Bahdan et al., 2017; Chen et al., 2018), as captured by the SR, instead of spreading general information within the graph. Therefore, we propose to weight the messages of neighboring entities using a novel _attention mechanism_ that re-weights the messages by their question-relevance, or equivalently, their SR-relevance. This _SR-attention_ \(\alpha^{l}_{e,\epsilon}\in\mathbb{R}\) is computed as: \[\alpha^{l}_{e,\epsilon}=\underset{e_{i}\in\mathcal{N}(\epsilon)}{\text{softmax}}\Big(\text{lin}^{l}_{\alpha_{e}}(\mathbf{e}^{l-1})\cdot\mathbf{SR}\Big)=\frac{\exp\big(\text{lin}^{l}_{\alpha_{e}}(\mathbf{e}^{l-1})\cdot\mathbf{SR}\big)}{\sum\limits_{e_{i}\in\mathcal{N}(\epsilon)}\exp\big(\text{lin}^{l}_{\alpha_{e}}(\mathbf{e}^{l-1}_{i})\cdot\mathbf{SR}\big)} \tag{1}\] where we first project the entity encodings using a linear transformation (\(\text{lin}^{l}_{\alpha_{e}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\)), and then take the dot product with the SR encoding to obtain a score. The softmax function is then applied over all entities neighboring the respective evidence (\(e_{i}\in\mathcal{N}(\epsilon)\)). Thus, an entity can obtain different SR-attention scores for each evidence, depending on the scores of other neighboring entities.
The messages passed to \(\epsilon\) are then aggregated, weighted by the respective SR-attention, and projected using another linear layer: \[\mathbf{m}^{l}_{\epsilon}=\text{lin}^{l}_{m_{\epsilon}}\bigg(\sum\limits_{e\in\mathcal{N}(\epsilon)}\alpha^{l}_{e,\epsilon}\cdot\mathbf{e}^{l-1}\bigg) \tag{2}\] where \(\text{lin}^{l}_{m_{\epsilon}}\) is the linear layer (\(\text{lin}^{l}_{m_{\epsilon}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\)). The updated evidence encoding is then given by adding the evidence encoding from the previous layer (\(\boldsymbol{\epsilon}^{l-1}\)) and the messages passed from the neighbors (\(\mathbf{m}^{l}_{\epsilon}\)), activated by a ReLU function: \[\boldsymbol{\epsilon}^{l}=\text{ReLU}\big(\mathbf{m}^{l}_{\epsilon}+\boldsymbol{\epsilon}^{l-1}\big) \tag{3}\] The intuition here is that in each evidence update, the question-relevant information held by neighboring entities is passed on to the evidence, and then incorporated into its encoding. The process for updating the entity encodings is analogous, but makes use of different linear transformation functions. The SR-attention \(\alpha^{l}_{\epsilon,e}\) of evidences for an entity \(e\) is obtained as follows: \[\alpha^{l}_{\epsilon,e}=\underset{\epsilon_{i}\in\mathcal{N}(e)}{\text{softmax}}\Big(\text{lin}^{l}_{\alpha_{\epsilon}}(\boldsymbol{\epsilon}^{l-1})\cdot\mathbf{SR}\Big) \tag{4}\] where \(\text{lin}^{l}_{\alpha_{\epsilon}}\) (\(\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\)) is the linear transformation function. Here, the softmax function is applied over all evidences surrounding the respective entity (i.e. \(\epsilon_{i}\in\mathcal{N}(e)\)). Again, the messages passed to an entity \(e\) are weighted by the respective SR-attention, and projected using a linear layer (\(\text{lin}^{l}_{m_{e}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\)): \[\mathbf{m}^{l}_{e}=\text{lin}^{l}_{m_{e}}\bigg(\sum\limits_{\epsilon\in\mathcal{N}(e)}\alpha^{l}_{\epsilon,e}\cdot\boldsymbol{\epsilon}^{l-1}\bigg) \tag{5}\] The updated entity encoding is then given by: \[\mathbf{e}^{l}=\text{ReLU}\big(\mathbf{m}^{l}_{e}+\mathbf{e}^{l-1}\big) \tag{6}\] These message passing steps are repeated \(L\) times, i.e. the GNN has \(L\) layers. Within these layers, the question-relevant information is spread over the graph. Basically, nodes in the graph learn about their question relevance, based on the surrounding nodes and their relevance, and capture this information in their node encodings.

### Answer score prediction

Scoring answer candidates makes use of the node encodings obtained after \(L\) message passing steps. We model the answer prediction as a _node classification_ task (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016), computing an answer score for each entity node in consideration of its question relevance as captured within the node encodings. The computation of the answer score \(s_{e}\) is similar to the technique used for computing the SR-attention of an entity: \[s_{e}=\underset{e_{i}\in E}{\text{softmax}}\Big(\text{lin}_{e}(\mathbf{e}^{L})\cdot\mathbf{SR}\Big) \tag{7}\] We project the entity encoding using a linear layer (\(\text{lin}_{e}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\)), and multiply the projected encoding with the encoding of the SR. The softmax function is applied over all entity nodes (i.e. \(e_{i}\in E\)).
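Eqs. 1-7 map directly onto a few tensor operations. The following PyTorch sketch uses a dense entity-evidence incidence matrix and per-layer linear maps; all names are ours, and batching, sparse graphs, and training code are omitted, so this is an illustrative reading rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def masked_softmax(scores: torch.Tensor, mask: torch.Tensor, dim: int) -> torch.Tensor:
    """Softmax restricted to neighbors; nodes without neighbors get all-zero weights."""
    att = torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=dim)
    return torch.nan_to_num(att)

class SRAttnLayer(nn.Module):
    """One bipartite message-passing layer (a compact reading of Eq. 1-6)."""

    def __init__(self, d: int):
        super().__init__()
        self.att_e = nn.Linear(d, d)    # lin_alpha_e   (Eq. 1)
        self.msg_ev = nn.Linear(d, d)   # lin_m_eps     (Eq. 2)
        self.att_ev = nn.Linear(d, d)   # lin_alpha_eps (Eq. 4)
        self.msg_e = nn.Linear(d, d)    # lin_m_e       (Eq. 5)

    def forward(self, ent, ev, adj, sr):
        # ent: (nE, d) entities, ev: (nV, d) evidences,
        # adj: (nE, nV) bool incidence matrix, sr: (d,) SR encoding.
        rel_e = self.att_e(ent) @ sr                                      # (nE,)
        alpha = masked_softmax(rel_e[:, None].expand_as(adj), adj, dim=0)  # Eq. 1
        ev_new = torch.relu(self.msg_ev(alpha.t() @ ent) + ev)             # Eq. 2-3

        rel_ev = self.att_ev(ev) @ sr                                     # (nV,)
        beta = masked_softmax(rel_ev[None, :].expand_as(adj), adj, dim=1)  # Eq. 4
        ent_new = torch.relu(self.msg_e(beta @ ev) + ent)                  # Eq. 5-6
        return ent_new, ev_new

def answer_scores(ent, sr, lin_ans):
    """Eq. 7: softmax over all entity nodes of lin(e^L) . SR."""
    return torch.softmax(lin_ans(ent) @ sr, dim=0)

# Toy usage with L = 3 layers, as in the paper's experiments.
d = 768
layers = nn.ModuleList(SRAttnLayer(d) for _ in range(3))
lin_ans = nn.Linear(d, d)
ent, ev, sr = torch.randn(6, d), torch.randn(10, d), torch.randn(d)
adj = torch.rand(6, 10) < 0.3
for layer in layers:
    ent, ev = layer(ent, ev, adj, sr)
s_e = answer_scores(ent, sr, lin_ans)   # answer = entity with the highest score
```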
### Multi-task learning

Our training data for the GNN consists of \(\langle\)graph, answer\(\rangle\) pairs. The gold answer is always an entity or a small set of entities. Consequently, the positive training data is sparse: there can be hundreds of entities in the graph but only one gold answer. To better use our training data, we propose a _multi-task learning_ (MTL) (Krizhevsky et al., 2014; Krizhevsky et al., 2016) approach. Given a GNN, we pose two complementary node classification tasks: (i) the answer prediction, and (ii) the prediction of evidence relevance. Evidences connected to gold answers are viewed as relevant, and others as irrelevant. Our method learns to predict a relevance score \(s_{\epsilon}\) for each evidence node \(\epsilon\), analogous to the answer score prediction: \[s_{\epsilon}=\underset{\epsilon_{i}\in\mathcal{E}}{\text{softmax}}\Big(\text{lin}_{\epsilon}(\boldsymbol{\epsilon}^{L})\cdot\mathbf{SR}\Big) \tag{8}\] where \(\mathcal{E}\) is the set of all evidence nodes (and \(\text{lin}_{\epsilon}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\)). For both tasks, answer prediction and evidence relevance prediction, we use binary cross-entropy over the predicted scores as the loss functions: \(\mathcal{L}_{e}\) and \(\mathcal{L}_{\epsilon}\), respectively. The final loss used for training the GNN is then defined as a weighted sum: \[\mathcal{L}=w_{e}\cdot\mathcal{L}_{e}+w_{\epsilon}\cdot\mathcal{L}_{\epsilon} \tag{9}\] where \(w_{e}\) and \(w_{\epsilon}\) are hyper-parameters to control the multi-task learning, and are chosen such that \(w_{e}+w_{\epsilon}=1\). The described GNN architecture can then be trained for predicting scores of answer candidates and evidences, and used for inference on the whole input graphs in _one shot_.

## 5. Iterative graph neural networks

We now outline how we use trained GNNs for iteratively reducing the graph at inference time. Specifically, we comment on the benefits that such iterative GNNs have for robustness, explainability and efficiency.

**Drawbacks of a one-shot prediction**. There are several drawbacks to predicting the answer from the full graph _during inference_: (i) Directly predicting the answer from hundreds of answer candidates is non-trivial, and node classification may struggle to manifest fine-grained differences between such candidate answers in their encodings. This can negatively impact the _robustness_ of the method on unseen data. (ii) Further, if the answer is predicted from the whole input graph at inference time, all nodes in the graph contribute towards the answer prediction. Showing the whole graph, consisting of hundreds of nodes, to explain how the answer was derived is not practical. The SR-attention scores could be an indicator as to which nodes were more relevant, but attention is not always sufficient as an explanation (Krizhevsky et al., 2016). Hence, _answer explainability_ would be limited. (iii) Finally, obtaining cross-encodings for hundreds of nodes (entities and evidences) can be computationally expensive, affecting the _runtime efficiency_ of the system. A large fraction of these initial graph nodes might be rather irrelevant, which can often be identified using a more light-weight (i.e. more efficient) encoder.

**Iterative inference of GNNs**. To overcome these drawbacks, we propose _iterative GNNs_: instead of predicting the answer in one shot, we _iteratively apply trained GNNs_ of the outlined architecture _during inference_. The key idea is to shrink the graph after each iteration.
This can be done via the evidence scores \(s_{\epsilon}\) predicted by the GNNs, to identify the most relevant evidences. These evidences, and the connected entities, are used to initiate the graph given as input to the next iteration. In the final iteration, the answer is predicted from the reduced graph only. Fig. 3 illustrates training and inference with iterative GNNs. Note that these GNNs are still trained on the full graphs in the training data, and are run iteratively only at inference time. This iterative procedure is feasible as the proposed GNN architecture is inherently independent of the input graph size: the same GNN trained on 500 evidences can be applied on a graph with 100, 20, or 5 evidences. The GNNs essentially learn to spread question-relevant information within local neighborhoods, which is not only required for large graphs with hundreds of evidences, but also for smaller graphs with a handful of nodes. Further, this iterative procedure is tractable only with the flexibility of scoring both entities and evidences in the graph, using the same GNN architecture.

Figure 3. Training of and inference with iterative GNNs.

**Enhancing robustness**. Within each iteration, the task complexity is decreased compared to the task complexity the original GNN was trained on. For example, the GNN was originally trained for predicting small answer sets and relevant evidences from several hundreds of nodes, but in the iterative setup it rather needs to identify, say, the top-100 evidences during inference. Thus, the GNN model now has to be _less discriminatory_ at inference time than during training. This can help improve _robustness_.

**Facilitating explainability.** A primary benefit of the iterative mechanism is that the intermediate graphs can be used to better understand how answer prediction works via a GNN. Showing all the information contained in the original input graph, with hundreds of entities and evidences, to the user is not practical: instead, we can iteratively derive a small set of evidences (say five), from which the answer is predicted. These evidences can then be presented to the end user, enhancing _user explainability_.

**Improving efficiency.** To facilitate the runtime _efficiency_ of the answering process, we refrain from encoding entities via cross-encodings in the _shrinking_ (or _pruning_) iterations. Instead, we initiate the entity encodings using a sum of the surrounding evidence encodings, weighted by their question-relevance (i.e. their SR-attention): \[\mathbf{e}^{0}=\sum_{\epsilon\in\mathcal{N}(e)}\alpha_{\epsilon,e}\cdot\boldsymbol{\epsilon}^{0} \tag{10}\] where the SR-attention \(\alpha_{\epsilon,e}\) is computed as in Eq. 1, employing a different linear projection. This can be perceived as obtaining _alternating encodings_ of entities: the initial evidence encodings are used to initialize the entity encodings (inspired by (Beng et al., 2017)). The message passing then proceeds as outlined in Sec. 4.3.

**Instantiation.** We train several one-shot GNNs of the architecture outlined above, using different weights on the answer prediction and evidence relevance prediction tasks (\(w_{e}\) and \(w_{\epsilon}\), respectively) in the MTL setup. Further, we train GNNs using either cross-encodings or alternating encodings for entities. Training is conducted on the full input graphs present in the training set. We then simply instantiate all _pruning iterations_ with the GNN that obtains the best evidence prediction performance on the graphs in the development (dev) set.
Similarly, we use the trained GNN that obtained the best answering performance on the dev set to initiate the final _answering iteration_. Finally, the answer predicted by the system is given by: \[a_{pred}=\arg\max_{e\in E}s_{e} \tag{11}\] This is shown to the end user, together with the explanatory evidences \(\{\epsilon_{pred}\}\) (see outputs in Fig. 2), which are simply the set of evidences used in the answer prediction step.

## 6. Results and Insights

### Experimental setup

**Dataset**. We train and evaluate Explaignn on the ConvMix (Chen et al., 2017) benchmark, which was designed for ConvQA over heterogeneous information sources. The dataset has 16,000 questions (train: 8,400 questions, dev: 2,800 questions, test: 4,800 questions), within 3,000 conversations of five (2,800) or ten turns (200, only used for testing).

**Metrics**. For assessing the answering performance, we use _precision at 1_ (**P@1**), as suggested for the ConvMix benchmark. To investigate the ranking capabilities of different methods in more detail, we also measure the _mean reciprocal rank_ (**MRR**), and _hit at 5_ (**Hit@5**). The _answer presence_ (**Ans. pres.**) is the fraction of questions for which a gold answer is present in a given set of evidences.

**Baselines.** We compare Explaignn with the state-of-the-art method on the ConvMix dataset, Convinse (Chen et al., 2017). Convinse leverages a Fusion-in-Decoder (FiD) (Li et al., 2017) model for obtaining the top answer, which is designed to generate a single answer string. In (Chen et al., 2017), the ranked entity answers are then derived by collecting the top-\(k\) answer candidates with the highest surface-form match, with respect to Levenshtein distance, with the generated answer string. This procedure is somewhat limiting when measuring metrics beyond the first rank (i.e. MRR or Hit@5). Therefore, we enhanced the FiD model to directly generate top-\(k\) answer strings, and then consider the answer candidate with the highest surface-form match for each such generated string (**top-\(k\) FiD**), for a fair comparison. We further compare with baselines proposed in (Chen et al., 2017) using question completion and resolution, and use the values reported in (Chen et al., 2017) for consistency. Note that these methods transform a conversational question to a self-sufficient form, and still need to be coupled with retrieval and answer generation modules.

**Configurations.** The QU and ER stages were initialized using the Convinse (Chen et al., 2017) code and data: we used Wikidata as the KB, and made use of the same version (31 January 2022) as earlier work. The Wikipedia evidences were taken from here2, whenever applicable, and retrieved on-the-fly otherwise. This ensures that results are comparable with the results provided in earlier work. ConvMix has \(\simeq\)3% yes/no questions: these are out of scope for Explaignn. To be able to report numbers on the full benchmark, we follow previous work (Chen et al., 2017) and detect such questions as starting with an auxiliary verb, and answer "yes" to these.

Footnote 2: [http://qa.mpi-inf.mpg.de/convinse/convmix_data/wikipedia.zip](http://qa.mpi-inf.mpg.de/convinse/convmix_data/wikipedia.zip)

The GNNs were implemented from scratch via PyTorch. We used DistilRoBERTa provided by Hugging Face3 as the encoder, which we found to perform slightly better than DistilBERT (Wang et al., 2019) (the distillation procedure is the same). DistilRoBERTa has an embedding dimension of \(d\)=768. The GNNs were trained for 5 epochs.
We chose an epoch-wise evaluation strategy, and kept the model that achieved the best performance on the dev set. We found three-layer GNNs (\(L\)=3) most effective. With our graph schema of alternating entities and evidences, using an odd number of layers allows information from evidences to reach relevant entities in their immediate (one-hop) and slightly more distant neighborhoods (three or five hops, say). AdamW was used as the optimizer, with a learning rate of \(10^{-5}\), a batch size of 1, and a weight decay of 0.01.

Footnote 3: [https://huggingface.co/distilroberta-base](https://huggingface.co/distilroberta-base)

We also used the dev set for choosing the number of GNN iterations, and the MTL weights for pruning and answering. The number of iterations was set to \(i\)=3. The GNN that maintains the highest answer presence among the top-5 evidences was chosen for instantiating the pruning iterations (alternating encodings for entities, \(w_{e}\)=0.3, \(w_{\epsilon}\)=0.7), and the GNN obtaining the highest P@1 was chosen for the answer prediction (cross-encodings for entities, \(w_{e}\)=0.5, \(w_{\epsilon}\)=0.5). In case there are more than 500 evidences, we retain only the top-500 obtained via BM25 scoring as input to the answering stage. A single GPU (NVIDIA Quadro RTX 8000, 48 GB GDDR6) was used to train and evaluate the models.

### Key findings

This section presents the main experimental results on the ConvMix test set. All provided metrics are averaged over all _questions_ in the dataset. The best method for each column is shown in **bold**. Statistical significance over the best baseline is indicated by an asterisk (*), and is measured via paired \(t\)-tests for MRR, and McNemar's test for binary variables (P@1 or Hit@5), with \(p<0.05\) in both cases.

**Explaignn improves the answering performance.** The main results in Table 2 demonstrate the performance benefits of Explaignn over the baselines. Explaignn significantly outperforms the best baseline on all metrics, illustrating the success of using _iterative graph neural networks_ in ConvQA. As is clear from the method descriptions in Table 2, all baselines crucially rely on the generative reader model of FiD. FiD can ingest multiple evidences to produce the answer, but fails to capture their relationships explicitly, a capability unique to our graph-based pipeline. Our adaptation of using top-\(k\) FiD instead of the default top-1 improved the ranking capabilities (MRR, Hit@5) of the strongest baseline Convinse. However, Explaignn still substantially improved over Convinse with top-\(k\) FiD.

**Explaignn is robust to wrong predictions in earlier turns.** Unlike many existing works, we also evaluated the methods in a _realistic scenario_, in which the predicted answers \(a_{pred}\) are used as (noisy) input for the conversational history, instead of the standard yet impractical choice of inserting gold answers from the benchmark. Results are shown in Table 3 (cf. Table 2 results, which show an evaluation with gold answers). While the performance of all methods drops in this more challenging setting, the trends are very similar, indicating that Explaignn can successfully overcome failures (i.e. incorrect answer predictions) in earlier turns. Explaignn again outperforms all baselines significantly, including a P@1 jump from \(0.279\) for the strongest baseline to \(0.339\).

**Heterogeneous sources improve performance.** An analysis of source combinations is in Table 4.
The first takeaway is that the answering performance was the best when the full spectrum of sources was used. Explaignn can make use of the enhanced answer presence in this case, and answer more questions correctly. Next, the results indicate that adding an information source is always beneficial: the performance of combinations of two sources is in all cases better than for the two sources individually.

### In-depth analysis

**Multi-task learning enables flexibility.** A systematic analysis of the effect of different multi-task learning weights is shown in Table 5. This analysis is conducted on the dev set, for choosing the best GNN for pruning and answering, respectively. Entities are either encoded via cross-encodings, or alternating encodings (see Eq. 10). The results indicate the runtime benefits of using alternating encodings for entities. Further, when optimized for evidence relevance prediction (\(w_{\epsilon}>0.5\)), it can maintain a high answer presence (measured within the top-5 evidences), indicating that light-weight encoders are indeed sufficient for the pruning iterations. Further, we found putting equal weights on the answer and evidence relevance prediction to be beneficial for answering.

**Iterative GNNs do not compromise runtimes.** Table 6 reports results of varying the number of iterations \(i\in\{1,2,3,4\}\), and the graph size in the number of evidences the answer is predicted from (\(|\mathcal{E}|\in\{500,100,50,20,5\}\)). For each row, the reduction in graph size in terms of the number of evidences considered was kept roughly constant for consistency. By our smart use of alternating encodings of entities in the pruning iterations, _runtimes remain immune_ to the number of pruning iterations (times for \(i\)=4 are not necessarily higher than those for \(i\)=3, and so on). Rather, the runtime is primarily influenced by the size of the graph (in terms of the number of evidences) given to the final answer prediction step (compare runtimes within each iteration group).

\begin{table}
\begin{tabular}{l|c c c|c}
\hline \hline
**Method \(\downarrow\)** & **P@1** & **MRR** & **Hit@5** & **Ans. pres.** \\
\hline
**KB** & 0.363 & 0.427 & 0.511 & 0.617 \\
**Text** & 0.233 & 0.300 & 0.380 & 0.530 \\
**Tables** & 0.064 & 0.084 & 0.108 & 0.155 \\
**Infoboxes** & 0.256 & 0.302 & 0.362 & 0.409 \\
\hline
**KB+Text** & 0.399 & 0.464 & 0.549 & 0.672 \\
**KB+Tables** & 0.363 & 0.429 & 0.515 & 0.629 \\
**KB+Infoboxes** & 0.376 & 0.443 & 0.532 & 0.640 \\
**Text+Tables** & 0.235 & 0.305 & 0.392 & 0.540 \\
**Text+Infoboxes** & 0.309 & 0.369 & 0.445 & 0.572 \\
**Tables+Infoboxes** & 0.263 & 0.312 & 0.374 & 0.453 \\
\hline
**All sources** & **0.406** & **0.471** & **0.561** & **0.683** \\
\hline \hline
\end{tabular}
\end{table}
Table 4. Effect of varying source combinations at inference time (test set). Explaignn is still _trained_ on all sources.

\begin{table}
\begin{tabular}{l|c c c}
\hline \hline
**Method \(\downarrow\)** & **P@1** & **MRR** & **Hit@5** \\
\hline
**Q. Resolution** (Spi and Welling, 2017) & 0.282 & 0.289 & 0.297 \\
**Q. Rewriting** (Spi and Welling, 2017) & 0.271 & 0.278 & 0.285 \\
**Convinse** (Spi and Welling, 2017) & 0.342 & 0.365 & 0.386 \\
**Convinse** (Spi and Welling, 2017) (top-\(k\) FiD) & 0.343 & 0.378 & 0.431 \\
\hline
**Explaignn (proposed)** & **0.406\({}^{*}\)** & **0.471\({}^{*}\)** & **0.561\({}^{*}\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Comparison of answering performance on the ConvMix (Spi and Welling, 2017) test set, using _gold answers_ \(\{a_{gold}\}\) in the history.
\begin{table}
\begin{tabular}{l|c c c}
\hline \hline
**Method \(\downarrow\)** & **P@1** & **MRR** & **Hit@5** \\
\hline
**Q. Resolution** (Spi and Welling, 2017) + BM25 + FiD (Hit et al., 2017) & 0.282 & 0.289 & 0.297 \\
**Q. Rewriting** (Spi and Welling, 2017) + BM25 + FiD (Hit et al., 2017) & 0.271 & 0.278 & 0.285 \\
**Convinse** (Spi and Welling, 2017) & 0.342 & 0.365 & 0.386 \\
**Convinse** (Spi and Welling, 2017) (top-\(k\) FiD) & 0.343 & 0.378 & 0.431 \\
\hline
**Explaignn (proposed)** & **0.406\({}^{*}\)** & **0.471\({}^{*}\)** & **0.561\({}^{*}\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 3. Comparison of answering performance on the ConvMix test set, using _predicted answers_ \(\{a_{pred}\}\) in the history.

Recall that this graph size for answer prediction can impact _explainability_ to end users, if it is not small enough. Notably, _performance_ remains reasonably stable in most cases. A key takeaway from these results is that the trained GNN models generalize well to graphs of different sizes. Concretely, while all of these models are trained on graphs established from 500 evidences, they can be applied to score nodes in graphs of variable sizes.

**Explaignn can be applied zero-shot**. For testing the generalizability of Explaignn, we applied the pipeline trained on the ConvMix dataset out-of-the-box, without any training or fine-tuning, on the ConvQuestions (Chen et al., 2017) dataset. ConvQuestions is a competitive benchmark for ConvQA methods operating over KBs. We test the same Explaignn pipeline in two different modes: (i) using only facts from the KB, and (ii) using evidences from all information sources. Table 7 shows the results (\({}^{\dagger}\) and \({}^{\ddagger}\) indicate statistical significance over the leaderboard toppers Krr (Zhu et al., 2017) and Praline (Praline, 2017), respectively). In the KB-only setting, Explaignn obtains state-of-the-art performance, reaching the highest MRR score. Also, we found that integrating heterogeneous sources can improve the answering performance substantially for ConvQuestions, even though this benchmark was created with a KB in mind.

**Iterative GNNs improve robustness**. In Sec. 5, we argued that iterative GNNs enhance the pipeline's robustness over a single GNN applied on the full graph in one shot. While the performance of the one-shot GNN on the ConvMix dev set is comparable (Table 6), we found that it cannot generalize as well to a different dataset. When applied on ConvQuestions, the respective performance of the one-shot GNN is significantly lower than for Explaignn, in both the KB-only (P@1: 0.330 versus 0.281) and heterogeneous (P@1: 0.363 versus 0.318) settings.

**SR-attention, cross-encodings and entity types are crucial**. Table 8 shows the results of our ablation study (\({}^{\dagger}\) indicates a significant performance drop). We found that each of these mechanisms helps the pipeline to improve QA performance. The most decisive factor is the SR-attention (Sec. 4.3), which ensures that only the question-relevant information is spread within the local neighborhoods: without this component, the performance drops substantially (P@1 from 0.442 to 0.062). Similarly, the cross-encodings (Sec. 4.2) initialize the nodes with question-relevant encodings. Also notable is the crucial role of entity types (Sec. 4.2), which help suppress irrelevant answer candidates with mismatched answer types.

**Error analysis**.
We identified three key sources of error: (i) the answer is not present in the initial graph (53.9% of error cases), which can be mitigated by improving the QU and ER stages of the pipeline; (ii) the answer is dropped when shrinking the graph (8.1%); and (iii) the answer is present in the final graph but not identified as the correct answer (38.0%). The graph shrinking procedure is responsible for only a few errors (2.2% in the first iteration, 5.9% in the second), demonstrating the viability of our iterative approach.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
**Method \(\downarrow\)** & **P@1** & **MRR** & **Hit@5** & \multicolumn{3}{c}{**Ans. pres. (no. of evidences) after pruning iteration**} & **HA runtime** \\
\hline
**Explaignn** (\(i\)=1: 500\(\rightarrow a_{pred}\)) & 0.442 & 0.502 & 0.581 & – & – & – & 1,017 ms \\
\hline
**Explaignn** (\(i\)=2: 500\(\rightarrow\)100\(\rightarrow a_{pred}\)) & 0.441 & 0.504 & 0.587 & – & – & 0.687 (100) & 744 ms \\
**Explaignn** (\(i\)=2: 500\(\rightarrow\)50\(\rightarrow a_{pred}\)) & 0.440 & 0.504 & 0.588 & – & – & 0.675 (50) & 591 ms \\
**Explaignn** (\(i\)=2: 500\(\rightarrow\)20\(\rightarrow a_{pred}\)) & 0.438 & 0.504 & 0.591 & – & – & 0.655 (20) & 515 ms \\
**Explaignn** (\(i\)=2: 500\(\rightarrow\)5\(\rightarrow a_{pred}\)) & 0.422 & 0.480 & 0.560 & – & – & 0.589 (5) & 459 ms \\
\hline
**Explaignn** (\(i\)=3: 500\(\rightarrow\)200\(\rightarrow\)100\(\rightarrow a_{pred}\)) & 0.441 & 0.504 & 0.585 & – & 0.694 (200) & 0.685 (100) & 995 ms \\
**Explaignn** (\(i\)=3: 500\(\rightarrow\)150\(\rightarrow\)50\(\rightarrow a_{pred}\)) & 0.441 & **0.505** & 0.586 & – & 0.693 (150) & 0.678 (50) & 741 ms \\
**Explaignn** (\(i\)=3: 500\(\rightarrow\)100\(\rightarrow\)20\(\rightarrow a_{pred}\); **proposed**) & **0.442** & **0.505** & 0.589 & – & 0.687 (100) & 0.654 (20) & 601 ms \\
**Explaignn** (\(i\)=3: 500\(\rightarrow\)50\(\rightarrow\)5\(\rightarrow a_{pred}\)) & 0.419 & 0.475 & 0.556 & – & 0.675 (50) & 0.579 (5) & 511 ms \\
\hline
**Explaignn** (\(i\)=4: 500\(\rightarrow\)300\(\rightarrow\)150\(\rightarrow\)100\(\rightarrow a_{pred}\)) & 0.441 & 0.504 & 0.587 & 0.696 (300) & 0.691 (150) & 0.686 (100) & 1,232 ms \\
**Explaignn** (\(i\)=4: 500\(\rightarrow\)200\(\rightarrow\)100\(\rightarrow\)50\(\rightarrow a_{pred}\)) & 0.440 & 0.504 & 0.585 & 0.694 (200) & 0.685 (100) & 0.677 (50) & 945 ms \\
**Explaignn** (\(i\)=4: 500\(\rightarrow\)200\(\rightarrow\)50\(\rightarrow\)20\(\rightarrow a_{pred}\)) & 0.436 & 0.500 & 0.584 & 0.694 (200) & 0.677 (50) & 0.652 (20) & 769 ms \\
**Explaignn** (\(i\)=4: 500\(\rightarrow\)100\(\rightarrow\)20\(\rightarrow\)5\(\rightarrow a_{pred}\)) & 0.422 & 0.476 & 0.553 & 0.687 (100) & 0.654 (20) & 0.575 (5) & 577 ms \\
\hline \hline
\end{tabular}
\end{table}
Table 6. Varying the no. of iterations, pruning factors, and evidences the answer is predicted from _during inference_ (dev set).
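To make the pruning schedules of Table 6 concrete (e.g., 500\(\rightarrow\)100\(\rightarrow\)20\(\rightarrow a_{pred}\)), a possible shape of the inference loop is sketched below. The graph interface (`subgraph_from_evidences`), the returned score keys, and the call signatures are hypothetical, invented for illustration; the GNNs themselves would score evidences and entities as in Eq. 7-8.

```python
import torch

def iterative_inference(prune_gnn, answer_gnn, graph, sr, schedule=(100, 20)):
    """Sketch of iterative inference: prune the graph per schedule, then answer."""
    for k in schedule:
        # The pruning GNN scores all evidence nodes; keep the top-k evidences
        # and the entities connected to them as the next, smaller graph.
        s_eps = prune_gnn(graph, sr)["evidence_scores"]
        keep = torch.topk(s_eps, k=min(k, s_eps.numel())).indices
        graph = graph.subgraph_from_evidences(keep)
    # Final answering iteration on the reduced graph (Eq. 11).
    s_e = answer_gnn(graph, sr)["entity_scores"]
    a_pred = graph.entities[s_e.argmax()]
    explanation = graph.evidences      # the small evidence set shown to the user
    return a_pred, explanation
```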
\begin{table}
\begin{tabular}{l|c c c}
\hline \hline
**Method \(\downarrow\)** & **P@1** & **MRR** & **Hit@5** \\
\hline
**Convex** (Chen et al., 2017) & 0.184 & 0.200 & 0.219 \\
**Focal Entity Model** (Zhu et al., 2017) & 0.248 & 0.248 & 0.248 \\
**Oat** (Chen et al., 2017) & 0.166 & 0.175 & – \\
**Oat** (Chen et al., 2017) (w/ gold seed entities) & 0.250 & 0.260 & – \\
**Conquer** (Chen et al., 2017) & 0.240 & 0.279 & 0.329 \\
**Praline** (Praline, 2017) & 0.294 & 0.373 & 0.464 \\
**Krr** (Zhu et al., 2017) (w/ gold seed entities) & **0.397** & 0.397 & 0.397 \\
\hline
**Explaignn** (KB-only) & 0.330\({}^{\ddagger}\) & **0.399\({}^{\ddagger}\)** & **0.480\({}^{\ddagger}\)** \\
**Explaignn** & 0.363\({}^{\ddagger}\) & **0.447\({}^{\dagger\ddagger}\)** & **0.546\({}^{\ddagger}\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 7. Out-of-the-box Explaignn, without further training or fine-tuning, on the ConvQuestions (Chen et al., 2017) benchmark.

## 7. User Study on Explainability

We evaluate the explainability of our pipeline by a user study on Amazon Mechanical Turk (AMT). The use case scrutinized here is that a user has an information need, obtains the answer predicted by the system, and is unsure whether to trust the provided answer. Thus, the key objective of the explanations in this work is to help the user decide whether to trust the answer. If the user is able to make this decision easily, the explanations serve their purpose.

**User study design.** For a predicted answer to a conversational question, the user is given the conversation history, the current question, the SR, and the explaining evidences. Examples of the input presented to the user are shown in Fig. 9 and Fig. 10 (we used five explanatory evidences). The user then has to decide whether the provided answer is correct. We randomly sample 1,200 instances on which we measure user accuracy. One of our main considerations during sampling was that one half (600) was correctly answered and the other half (600) incorrectly answered by Explaignn.

**User study interface**. For each instance, we then ask Turkers the following questions: (i) _"Do you think that the provided answer is correct?"_ (**User correctness**), (ii) _"Are you certain about your decision?"_ (**User certainty**), and (iii) _"Why are you certain about the decision?"_ or _"Why are you uncertain about the decision?"_. The first two questions can be answered by either _"Yes"_ or _"No"_. Depending on the answer to the second question, the third question asks for reasons for their (un)certainty. The user can select multiple provided options. If the user is certain, these options are _good explanation_, _prior knowledge_, or _common sense_. If the user is uncertain, then she must choose between _bad explanation_ or _question/conversation unclear_. The idea is to remove confounding cases in which users make a decision regardless of the provided explanation, since we cannot infer much from these. Note that Turkers were not allowed to access external sources like web search.

**Quality control**. For quality control, we restrict the participation in our user study to Master Turkers with an approval rate of \(\geq\)95%. Further, we added honeypot questions, for which the answer and provided information are clearly irrelevant with respect to the question (domain and answer type mismatch). We excluded submissions from workers who gave incorrect responses to the honeypots.

**Explaignn provides explainable answers**. Findings are presented as a confusion matrix in Table 11 (values computed after removing confounding assessments).
Among the 771 of the 1,200 observations that remain, we found that the user can accurately decide the correctness of the system answer (76.1%) and is certain about their assessment (79.8%) most of the time. This proves that our explanations are indeed useful to end users. If the user is certain about their assessment, then we observed that the accuracy in deciding the correctness was higher: P(User correct | User certain) = 0.792.

(Figs. 9 and 10, legend: **Answer correctness** = correctness of the provided answer; **Predicted correctness** = user assessment of the answer correctness; **User correctness** = correctness of the user assessment; **User certainty** = user certainty about her assessment.)

**Anecdotal examples**. Fig. 9 and Fig. 10 show example outputs of Explaignn, and the corresponding results from the user study. For the examples in Fig. 9, the Explaignn answer was _correct_. Based on the explanation (system interpretation and supporting evidences), the users could _certainly_ tell the correctness of the answer. For the examples in Fig. 10, the answer provided by Explaignn was _incorrect_. In the first case, the user was able to identify this incorrectness _with certainty_ from the explanation (the first and second supporting evidences mention that Allison Janney played Bonnie Plunkett, and not Christy Plunkett). The user could then try to reformulate (Les and Plunkett, 2018) the question to obtain the correct answer. In the second case, the provided information was not sufficient, since there is no information on the club Rivaldo played for in 1999. So the user was _uncertain_ about the answer correctness. Note that even in this scenario the end user would understand that the provided answer needs to be verified. This is in contrast to incorrect answers generated by large language models, which often look perfect on the surface, giving the user a false sense of correctness. The guidelines, code, and the results for the user study are publicly available4.

Footnote 4: [https://qa.mpi-inf.mpg.de/explaignn/explaignn_user_study.zip](https://qa.mpi-inf.mpg.de/explaignn/explaignn_user_study.zip)

\begin{table}
\begin{tabular}{l|c|c|c}
 & **User correct** & **User incorrect** & \\
\hline
**User certain** & 0.632 & 0.166 & 0.798 \\
**User uncertain** & 0.129 & 0.073 & 0.202 \\
\hline
 & 0.761 & 0.239 & \\
\end{tabular}
\end{table}
Table 11. Confusion matrix for the user study, showing the probabilities of user correctness against user certainty.
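As a quick arithmetic check, the marginals (76.1%, 79.8%) and the conditional probability P(User correct | User certain) = 0.792 reported above follow directly from the four cells of Table 11:

```python
# Probabilities over the 771 retained assessments, read off Table 11.
p = {("certain", "correct"): 0.632, ("certain", "incorrect"): 0.166,
     ("uncertain", "correct"): 0.129, ("uncertain", "incorrect"): 0.073}

p_correct = p[("certain", "correct")] + p[("uncertain", "correct")]    # 0.761
p_certain = p[("certain", "correct")] + p[("certain", "incorrect")]    # 0.798
p_correct_given_certain = p[("certain", "correct")] / p_certain        # ~0.792
print(round(p_correct, 3), round(p_certain, 3), round(p_correct_given_certain, 3))
```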
## 8. Related Work

**Conversational question answering**. There has been extensive research on ConvQA (Wang et al., 2018; Wang et al., 2019) in recent years, which can largely be divided into methods using a KB (Wang et al., 2018; Wang et al., 2019), methods using a text corpus (Wang et al., 2019), and methods integrating tables (Wang et al., 2019). In **ConvQA over KBs**, the (potentially completed) question is often mapped to logical forms that are run over the KB to obtain the answer (Wang et al., 2018; Wang et al., 2019). A different type of approach is to search for the answer in the local neighborhoods of an ongoing context graph, which captures the relevant entities of the conversation (Wang et al., 2019). Early **ConvQA systems over textual sources** assumed the relevant information (i.e. text passage or document) to be given (Wang et al., 2019), and modeled the problem as a machine reading comprehension (MRC) task (Wang et al., 2019). This assumption was challenged in (Wang et al., 2019), which proposed ORConvQA, introducing a retrieval stage. Recent works follow similar ideas, and mostly rely on _question rewriting_ (Wang et al., 2019) or _question resolution_ (Wang et al., 2019), and then employ an MRC model. In related work on **ConvQA over tables**, the answer is either derived via logical forms (Wang et al., 2019), or via pointer-networks operating on graph-encodings of the tables (Wang et al., 2019). All of these methods rely on a single information source for answering questions, inherently limiting their answer coverage. Recently, there has been preliminary work on ConvQA using a mixture of sources (Chen et al., 2021). The method proposed in (Chen et al., 2021) appends incoming questions to the conversational history, and then generates a program to derive the answer from a table using a sequence-to-sequence model. In (Chen et al., 2021), evidences from heterogeneous sources are concatenated and fed into a sequence-to-sequence model to generate the answer. Both methods heavily rely on sequence-to-sequence models, where the generated outputs are not explainable, and may even be hallucinated.

**QA over heterogeneous sources**. In addition to work on ConvQA over heterogeneous sources, there is a long line of work on answering _one-off questions_ using such mixtures of sources (Chen et al., 2021; Wang et al., 2019). More recently, UniK-QA (Wang et al., 2019) proposed the verbalization of evidences, and then applied FiD (Chen et al., 2021) for the answering task.
UDT-QA (Wang et al., 2019) improved over UniK-QA by implementing more sophisticated mechanisms for evidence verbalization, and uses T5 (Wang et al., 2019) to generate the answer. Similarly, Shen et al. (2019) propose a dataset and method for answering questions on products from heterogeneous sources, leveraging BART (Wang et al., 2019). These approaches, being sequence-to-sequence models at their core, face similar problems as mentioned before. HeteroQA (Chen et al., 2021) explores heterogeneity in the context of community QA, where retrieval sources can be posts, comments, or even other questions. **Explainable QA**. Existing work is mostly on single-turn methods operating over a single source, with templates (Chen et al., 2021; Chen et al., 2021) and graph-based derivation sequences (Wang et al., 2019; Wang et al., 2019) as mechanisms for ensuring explainability. Works on text-QA provide end users with the actual passages and reasoning paths used for answering (Wang et al., 2019; Wang et al., 2019). Post-hoc explainability for QA over KBs and text is investigated in (Wang et al., 2019). These methods cannot be easily adapted to a conversational setting with incomplete follow-up questions. **Explainability in GNNs**. Explainability for GNNs is an active field of research (Wang et al., 2019), devising general techniques to identify important features for the nodes in the graphs, or to provide model-level explanations. Such approaches are mostly designed for developers, and are therefore hardly applicable in our scenario. Through our iterative model, we propose a QA-specific method for deriving explanations of GNN predictions that can be understood by _average web users_. ## 9. Conclusion There are three main takeaways for a broader audience from this work. First, at a time when large language models (LLMs) like ChatGPT are used as a one-stop shop for most NLP tasks including ConvQA, our method Explaignn stands out by providing traceable provenance of its answer predictions. Next, explainability for graph neural networks is an unsolved concern: we propose an iterative model that sequentially reduces the graph size as a medium for offering causal insights into the prediction process. Finally, for several systems in IR and NLP, performance, efficiency, and explainability are seen as trade-offs. Through our highly configurable solution, we show that in certain use cases it is actually possible to find configurations that lie at the sweet spot of all three factors. A natural future work would be to generalize these insights to other problems, like graph-based neural recommendation.
2308.02903
LaDA: Latent Dialogue Action For Zero-shot Cross-lingual Neural Network Language Modeling
Cross-lingual adaptation has proven effective in spoken language understanding (SLU) systems with limited resources. Existing methods are frequently unsatisfactory for intent detection and slot filling, particularly for distant languages that differ significantly from the source language in scripts, morphology, and syntax. A Latent Dialogue Action (LaDA) layer is proposed to optimize the decoding strategy in order to address the aforementioned issues. The model consists of an additional latent dialogue action layer. It enables our model to improve a system's capability of handling conversations with the complex multilingual intents and slot values of distant languages. To the best of our knowledge, this is the first exhaustive investigation of the use of latent variables for optimizing cross-lingual SLU policy during the decoding stage. LaDA obtains state-of-the-art results on public datasets for both zero-shot and few-shot adaptation.
Zhanyu Ma, Jian Ye, Shuang Cheng
2023-08-05T15:51:45Z
http://arxiv.org/abs/2308.02903v1
# LaDA: Latent Dialogue Action For Zero-shot Cross-lingual Neural Network Language Modeling ###### Abstract Cross-lingual adaptation has shown effectiveness in low-resource spoken language understanding (SLU) systems. However, existing approaches often show unsatisfactory performance on intent detection and slot filling, especially for distant languages, which differ dramatically from the source language in scripts, morphology, or syntax. To solve the aforementioned challenges, we propose a Latent Dialogue Action (LaDA) layer to optimize the decoding strategy. The model consists of an extra latent dialogue action layer. It enables our model to enhance a system's ability to handle conversations with the complex multi-lingual intents and slot values of distant languages. To the best of our knowledge, our work is the first comprehensive study of the use of latent variables for cross-lingual SLU policy optimization during the decoding stage. LaDA achieves state-of-the-art results in both zero-shot and few-shot adaptation on public datasets. **Keywords:** Artificial Intelligence; Computer Science; Language understanding; Machine learning; Cross-linguistic analysis; Neural Networks ## Introduction In spoken language understanding (SLU) systems, data-driven neural-based supervised training strategies have proven successful [22, 13, 14]. However, gathering large quantities of high-quality training data is not only costly but also time-consuming, making these techniques inapplicable to low-resource languages owing to a lack of training data [16, 15]. Cross-lingual adaptation has evolved naturally to address this challenge [16, 17], using training data in high-resource source languages while minimizing the need for training examples in low-resource target languages. In general, the first challenge in cross-lingual adaptation is that the imperfect alignment of word-level and sentence-level representations between the source and target language limits the adaptation performance [17]. Existing cross-lingual transfer learning approaches rely mostly on pretrained cross-lingual word embeddings or contextual models, which represent natural language texts with comparable meanings in different languages in close proximity to each other in a common vector space [14, 15]. Unfortunately, those approaches often show good performance on intent detection only in a few languages [15]. The results on slot filling and intent detection, however, are not satisfactory, especially for some distant languages, which differ dramatically from English in scripts, morphology, or syntax, such as Hindi (hi) and Thai (th) [18]. Similarly, even if we assume that the alignment is perfect, the trained model generations still suffer from intent/slot conflicts, repetitiveness, and contradictions owing to grammatical and syntactical variances across languages. Obviously, focusing on the alignment of representations alone is not enough. Therefore, we identify a second challenge: current cross-lingual models do not adequately understand the deeper meaning of the generated slots and frequently contradict their own intents. Focusing only on multilingual representation learning neglects the quality of model generation; semantic parsing in the decoding stage is also significant. For conventional modular systems, the slot probabilistic space is defined by hand-crafted semantic representations such as slot-values, and the objective is to establish a dialog policy that selects the optimal hand-crafted values at each dialog turn [16].
But this is limited, because it can only handle simple multi-lingual intent domains whose entire slot space can be captured by hand-crafted representations in similar language scripts, morphology, or syntax [15, 17]. Meanwhile, this cripples a system's ability to handle conversations with complex multi-lingual intents and slot values, especially when there are significant grammatical and syntactical variances across languages. Some recent works show that slots and intents are highly correlated and have similar semantic meanings in a sentence [16, 15, 18]. Since intent recognition can help with slot filling, they propose joint models that consider the correlation between these two tasks. However, errors in intent recognition propagate until the slot information is filled: even if the intent is correctly identified, the slot information still has a high probability of being wrong. In this paper, we are concerned with the problem of conflicts between intent and slot information in cross-linguistic contexts with significant grammatical differences, and we improve the performance of the SLU task in both few-shot and zero-shot scenarios. To solve the aforementioned challenges, we present a new model architecture named Latent Dialogue Action (LaDA) that is capable of training on supervised slot-filling data. Prior work has shown that learning with latent variables leads to benefits like multi-lingual representation (Bao, He, Wang, Wu, and Wang, 2020; Liu, Winata, Xu, Lin, and Fung, 2020; Liu et al., 2019; Ma, Liu, and Ye, 2023). The model consists of a standard decoder architecture with an extra latent dialogue action layer for each output token, in addition to the usual language modeling output, as shown in Fig. 1. Both the language modeling layer and the latent dialogue action layer are trained using labeled data, with the bulk of the decoder parameters being shared across the two tasks. To the best of our knowledge, our work is the first comprehensive study of the use of latent variables for slot-filling policy optimization in cross-lingual spoken language understanding. Experiments demonstrate that our model enhances a system's ability to handle conversations with complex multi-lingual intents and slot values, especially when there are significant grammatical and syntactical variances across languages, improving adaptation robustness. LaDA achieves state-of-the-art results in zero-shot and few-shot adaptation to German, Spanish, French, Hindi and Thai for the spoken language understanding task (i.e., intent detection and slot filling). ## Model In this section, we introduce the LaDA model. We start by laying out the notation and background of language modeling and then introduce our new architecture. ### Problem Formulation NLU is primarily responsible for intent detection and slot filling. Given \(L\) words \(u=[w_{1},w_{2},\ldots,w_{L}]\) as well as a collection of predetermined intent types \(I\) and slot types \(S\), the purpose of intent detection is to predict the intent \(o^{I}\in I\) based on the utterance \(u\), while slot filling is a sequence tagging problem, mapping the word sequence \([w_{1},w_{2},\ldots,w_{L}]\) into semantic slots \([s_{1},s_{2},\ldots,s_{L}]\), where \(s\in S\). Few-shot learning (\(\mathrm{FSL}\)) extracts prior experience that allows quick adaptation to new problems. Therefore, \(\mathrm{FSL}\) models are usually first trained on a set of source languages, then evaluated on another set of unseen target languages.
A target language only contains a few labeled examples, which form the support set \(\mathcal{S}=\left\{(x^{(i)},y^{(i)})\right\}_{i=1}^{|\mathcal{S}|}\). \(\mathcal{S}\) includes \(K\) examples (K-shot) for each of \(N\) classes (N-way). Taking a classification problem as an instance: given an input query example \(x=\langle x_{1},x_{2},\ldots,x_{n}\rangle\) and a K-shot support set \(\mathcal{S}\) as references, we find the most appropriate class \(y^{*}\) of \(x\): \[y^{*}=\underset{y}{\arg\max}\ \ p(y\mid x,\mathcal{S}). \tag{1}\] Zero-shot learning is a limiting case of few-shot learning, which further tests the cross-lingual ability of the model. In the zero-shot setting, the support set \(\mathcal{S}\) is empty, and the trained cross-lingual model is used directly to infer on the target language, which is not in the training set. ### Intent detection and slot filling Intent detection and slot filling are two key tasks of \(\mathrm{SLU}\). The word sequence \(u=w_{1},w_{2},\ldots,w_{L}\) is passed into a multilingual language model (e.g., mBERT2 or XLM3) to derive the sentence representation \(h^{u}\) and the contextual embeddings \(e_{1},e_{2},\ldots,e_{L}\). To jointly learn both tasks, the objective function \(\mathcal{L}\) is formulated as: Footnote 2: [https://huggingface.co/bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) Footnote 3: [https://github.com/facebookresearch/XLM/](https://github.com/facebookresearch/XLM/) \[\mathcal{L}_{SLU}=-\sum_{i=1}^{n^{I}}\hat{y}_{i}^{I}\log(y_{i}^{I})-\sum_{j=1}^{L}\sum_{i=1}^{n^{S}}\hat{y}_{j,i}^{S}\log(y_{j,i}^{S}) \tag{2}\] where \(n^{I}\) is the number of intent types and \(\hat{y}_{i}^{I}\) is the gold intent label; \(L\) is the sequence length of a sentence, \(n^{S}\) is the number of slot types, and \(\hat{y}_{j,i}^{S}\) is the gold slot label. \[y^{I}=\mathrm{Softmax}(W_{h}^{I}h^{u}+b_{h}^{I}) \tag{3}\] \(y^{I}\) is the intent prediction distribution and \(W_{h}^{I},b_{h}^{I}\) are trainable parameters. The slot distribution for each word is predicted as \(y_{j}^{S}\): \[y_{j}^{S}=\mathrm{Softmax}(W_{h}^{S}h_{j}^{S}+b_{h}^{S}),\quad j\in\{1,\ldots,L\} \tag{4}\] where \(W_{h}^{S},b_{h}^{S}\) are trainable parameters. Here \(h_{j}^{S}\) is a combined representation of the word embedding \(e_{j}\) and the intent embedding \(h^{I}\): \[h_{j}^{S}=\text{Average}(e_{j},h^{I}). \tag{5}\] Figure 1: Cross-lingual \(\mathrm{SLU}\) task and the framework of LaDA. Ac-Layer denotes the latent dialogue action layer and L-Layer denotes the language modeling output layer. ### LaDA Language Model Let the word sequence \(u=w_{1},w_{2},\ldots,w_{L}\) of \(U\) be supervised training data where each token sequence \(x_{1},...,x_{T}\) is labeled. This is accomplished either by marking the whole sequence with a class \(y=c\) or, in the fine-grained scenario, by giving each token a class label \(y_{1},...,y_{T}\). The goal is then to learn to generate conditioned on a specified class, which means modeling \(P(x_{t}|x_{1},...,x_{t-1},y_{t})\). Using the conditional generation probability, this factorizes as: \[P(x_{t}|x_{1},...,x_{t-1},y_{t})\propto P(x_{t}|x_{1},...,x_{t-1})P(y_{t}|x_{1},...,x_{t}). \tag{6}\] The first term can be calculated using a language model, but the second needs a classifier that minimizes the cross-entropy loss: \[\mathcal{L}_{\text{action}}=-\log P(y_{t}=c|x_{1},...,x_{t}). \tag{7}\] We thus propose LaDA, a model that combines conventional language modeling with a latent dialogue action layer.
This enables efficient training of the model using supervised data. Then, at inference time, we can generate based on required properties (positive class labels). As shown in Fig. 1, initial processing of the input tokens is performed by a shared autoregressive core, for which a transformer decoder was used in our studies. The processed token representations are then delivered to two separate layers. The first layer is a conventional language modeling layer consisting of a linear layer followed by a softmax to generate a multinomial distribution. This layer is trained by optimizing the loss \(\mathcal{L}_{SLU}\). The second layer is the dialogue action layer; it also maps each token representation into a \(|y_{j}^{S}|\)-dimensional vector using a linear layer. Then, however, a _sigmoid_ is applied to each entry to generate an independent binomial distribution: \[y_{j}^{S}=\text{Sigmoid}(W_{ac}^{S}h_{j}^{S}+b_{ac}^{S})=\frac{\exp(W_{ac}^{S}h_{j}^{S}+b_{ac}^{S})}{\exp(W_{ac}^{S}h_{j}^{S}+b_{ac}^{S})+1} \tag{8}\] where \(W_{ac}^{S}\) and \(b_{ac}^{S}\) are trainable parameters. Note that while the tokens \(x_{1},...,x_{t-1}\) are inputted and processed by the shared transformer core, the next-token candidates for \(x_{t}\) are encoded in the row vectors of the linear layer in the latent dialogue action layer. This layer optimizes the loss \(\mathcal{L}_{\text{action}}\) from Eq. 7. The final objective function is formulated as: \[\mathcal{L}=\mathcal{L}_{SLU}+\alpha\mathcal{L}_{\text{action}}, \tag{9}\] where \(\alpha\) is a hyperparameter weighting the dialogue action loss. To generate a sequence conditioned on a certain class \(c\) according to Eq. 6, we aggregate the outputs from the two layers to determine the probability of the next token: \[P(x_{t})\propto P_{LM}(x_{t})\,P_{\text{action}}^{\alpha}(y_{t}=c). \tag{10}\] Adjusting the parameter \(\alpha\) at inference time allows us to change the weight of the latent dialogue action relative to the language modeling layer, where \(\alpha=0\) reverts to traditional language modeling. Tokens are generated in the same way as in traditional language models, from left to right. Two characteristics make LaDA's unified design efficient: 1. Because the latent dialogue action is autoregressive rather than bidirectional, the computations of previous token representations can be reused for future token classifications rather than processing the complete sequence \(x_{1},...,x_{t}\) at each time step \(t\). 2. Instead of classifying each token candidate \(x_{t}\) individually, the latent dialogue action layer classifies them all simultaneously. In large transformers, executing it once requires the same processing power as the typical language modeling layer. LaDA's computational efficiency during training and inference is therefore comparable to running the language model alone.
```
Input:  Training data tuples in English; inference data tuples in the target language
Output: Intent and slot values in English; intent and slot values in the target language (cf. Fig. 1)
 1: PXLM <- pretrained cross-lingual LM
 2: X <- training data tuples in English
 3: # training loop
 4: for (x_t, y) in X do
 5:     E_t <- PXLM(x_t)                       # get semantic embedding
 6:     P(x_t) <- L-Layer(E_t)                 # refer to Eq. 1 and Fig. 1
 7:     P_action^alpha <- Ac-Layer(E_t)        # refer to Eq. 8 and Fig. 1
 8:     y_hat <- P(x_t) * P_action^alpha       # refer to Eq. 10
 9:     total_loss <- task_loss_fn(y_hat, y)   # refer to Eq. 9
10:     # update model parameters
11: end for
```
**Algorithm 1** The LaDA algorithm.
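To complement Algorithm 1, the following PyTorch sketch illustrates the two output heads and the combination rules of Eqs. 8-10. All dimensions and names are our own illustrative assumptions, not the authors' implementation; following the description above, the action head here produces one independent binomial per next-token candidate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LaDAHeads(nn.Module):
    """L-Layer (language modeling) and Ac-Layer (latent dialogue action)
    on top of a shared autoregressive decoder; all sizes illustrative."""
    def __init__(self, d_model: int = 768, vocab: int = 32000,
                 alpha: float = 0.125):
        super().__init__()
        self.lm_head = nn.Linear(d_model, vocab)   # L-Layer
        self.ac_head = nn.Linear(d_model, vocab)   # Ac-Layer
        self.alpha = alpha

    def forward(self, h: torch.Tensor):
        lm_logits = self.lm_head(h)                 # softmax -> multinomial
        act_probs = torch.sigmoid(self.ac_head(h))  # Eq. 8: independent binomials
        return lm_logits, act_probs

    def loss(self, lm_logits, act_probs, tokens, labels):
        # Language-modeling cross-entropy over the vocabulary
        l_lm = F.cross_entropy(lm_logits.transpose(1, 2), tokens)
        # Eq. 7: -log P(y_t = c | x_1..t) for the observed tokens
        p_c = act_probs.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
        l_action = F.binary_cross_entropy(p_c, labels.float())
        return l_lm + self.alpha * l_action         # Eq. 9

def next_token_distribution(lm_logits_t, act_probs_t, alpha=0.125):
    """Eq. 10: reweight the LM distribution by the per-candidate action
    probability of the desired class, then renormalize."""
    p = F.softmax(lm_logits_t, dim=-1) * act_probs_t.pow(alpha)
    return p / p.sum(dim=-1, keepdim=True)
```

Setting `alpha=0` in `next_token_distribution` recovers plain language modeling, mirroring the observation above.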
## Experiments ### Implementation Details We adopt classification accuracy (Acc.) to evaluate the performance of intent detection, while using the F1 score to measure the performance of slot filling. Our investigations employ WordPiece embeddings with a 110k-token vocabulary (Devlin et al., 2019). LASER produces 1024-dimensional sentence embeddings, whereas mBERT generates 768-dimensional word embeddings. Notice that we take the first subword embedding as the word-level representation, following Qin et al. (2021), while incorporating mBERT for slot filling. We also set the size of the intent embedding to 768, then train our model for 9 epochs with a batch size of 64 and a learning rate of \(0.002\); \(\alpha\) in Eq. 9 is set to 0.125. We adopt AdamW (Loshchilov and Hutter, 2018) to optimize our LaDA and select the hyperparameters by beam search (\(num\_beam=4\)). Note that we choose the best model according to the validation split of the English dialog data. Besides, during the early stages of the training period, we adopt the gold intent instead of the predicted intent \(o^{I}=\mathrm{Argmax}(y^{I})\) to guide the slot filling and avoid error propagation. We fine-tune mBERT using the sequence labeling task and then apply it to the slot filling job, following (Pires et al., 2019). ### Datasets **MTOP** is a Multilingual Task-Oriented Parsing dataset provided by (Li et al., 2021) that covers interactions with a personal assistant (intent recognition and slot filling tasks). We use the standard flat version; a compositional version of the data designed for nested queries is also provided. MTOP contains 100K+ human-translated examples in 6 languages (en, de, es, fr, th, hi) and 11 domains. ### Baselines We introduce several competitive baselines, including CoSDA-ML (Qin, Ni, et al., 2021), ORT (Liu et al., 2021), SLU-LR (Liu et al., 2020), BiLSTM with CRF (LVM) (Liu et al., 2019), XLM (Conneau and Lample, 2019) and mBERT (Pires et al., 2019). ### Metrics The performance evaluation metrics (PEM) used for quantifying our method are accuracy (Acc.), precision (P), recall (R), F1 score (F1), and their micro-average weighted values. LaDA's predictions can be divided into TP (true positive), TN (true negative), FP (false positive), and FN (false negative). Here, TP/FP (and TN/FN) signify the number of correct/incorrect predictions for the positive (and negative) class, respectively. Based on the TP, TN, FP, and FN values, the PEMs are defined as: Acc. \(=\frac{TP+TN}{TP+TN+FP+FN}\), \(P=\frac{TP}{TP+FP}\), \(R=\frac{TP}{TP+FN}\), and \(F1=\frac{2TP}{2TP+FP+FN}\).
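For concreteness, these PEMs can be computed directly from the confusion counts; a trivial Python sketch:

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, precision, recall and F1 from the confusion counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)   # equals 2*P*R / (P + R)
    return acc, precision, recall, f1
```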
### Zero-shot setting Tables 1 and 2 show that LaDA improves the state-of-the-art performance in the zero-shot setting; the intent detection and slot filling performance of our best zero-shot models for German, Spanish, French, Hindi and Thai is comparable to the strong baseline CoSDA-ML 1%-few-shot model, which uses some multilingual resources. LaDA enhances a system's ability to handle conversations with complex multi-lingual intents and slot values, especially when there are significant grammatical and syntactical variances across languages, improving adaptation robustness. Self-supervised training with LaDA also disentangles latent variables for the various slot types, increasing robustness. Even though we improve Thai intent recognition by more than 50% over mBERT, Thai slot filling performance remains limited. We hypothesize that there is not enough Thai corpus in the cross-lingual pre-training data. To sum up, in the zero-shot scenario on MTOP, our model outperforms CoSDA-ML, the former SOTA model, in intent accuracy/slot F1 by 1.74%/0.93% in German, 6.42%/3.62% in Spanish, 3.53%/2.51% in French, 9.74%/2.61% in Hindi, and 8.67%/3.69% in Thai. ### Ablation Study To investigate the effect of each individual component, we conduct an ablation study and report the results in Tables 1 and 2. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**Test on MTOP: Intent Acc**} \\ & de & es & fr & hi & th \\ \hline \multicolumn{6}{|c|}{Zero-shot Learning} \\ \hline \hline ORT & 72.91 & 69.34 & 66.54 & 43.18 & 44.05 \\ SLU+LR\&ALVM & 72.73 & 71.09 & 66.22 & 41.69 & 46.77 \\ BiLSTM with CRF & 72.43 & 69.90 & 66.88 & 46.70 & 46.19 \\ BiLSTM with LVM & 68.52 & 68.00 & 62.62 & 37.10 & 52.36 \\ mBERT & 68.29 & 65.02 & 66.18 & 22.76 & 24.77 \\ XLM & 58.61 & 58.46 & 65.24 & 36.65 & 34.22 \\ mBERT+CoSDA-ML & 88.70 & 84.45 & 85.95 & 63.23 & 68.81 \\ XLM+CoSDA-ML & 77.13 & 76.93 & 85.85 & 65.46 & 61.12 \\ LaDA & **90.44** & **90.87** & **89.48** & **75.20** & **77.48** \\ \hline _w/o LaDA_ & 81.43 & 80.18 & 82.63 & 65.24 & 68.00 \\ _w/o LASER, w/ mBERT_ & 82.07 & 83.61 & 80.59 & 69.32 & 69.26 \\ _w/o LASER, w/ XLM_ & 81.41 & 81.09 & 82.30 & 71.00 & 73.41 \\ _w/o mBERT, w/ XLM_ & 87.22 & 88.36 & 86.44 & 73.61 & 75.30 \\ \hline \hline \end{tabular} \end{table} Table 1: Acc comparison on MTOP between different cross-lingual approaches. We highlight the best scores in bold and underline the second best for each language. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**Test on MTOP: Slot F1**} \\ & de & es & fr & hi & th \\ \hline \multicolumn{6}{|c|}{Zero-shot Learning} \\ \hline \hline ORT & 51.42 & 36.17 & 32.75 & 20.63 & 18.31 \\ SLU+LR\&ALVM & 52.30 & 37.88 & 34.17 & 21.28 & 19.67 \\ BiLSTM with CRF & 43.19 & 29.12 & 30.27 & 20.13 & 16.71 \\ BiLSTM with LVM & 45.73 & 34.05 & 32.07 & 17.12 & 19.62 \\ mBERT & 45.25 & 42.88 & 43.99 & 13.43 & 9.38 \\ XLM & 36.99 & 23.23 & 34.76 & 15.99 & 5.28 \\ mBERT+CoSDA-ML & 74.19 & 70.30 & 72.12 & 47.99 & 33.50 \\ XLM+CoSDA-ML & 64.90 & 40.77 & 61.00 & 42.10 & 13.91 \\ LaDA & **75.12** & **73.92** & **74.63** & **50.60** & **37.46** \\ \hline _w/o LaDA_ & 67.42 & 63.90 & 70.56 & 45.81 & 31.40 \\ _w/o LASER, w/ mBERT_ & 70.21 & 66.21 & 69.80 & 45.29 & 33.62 \\ _w/o LASER, w/ XLM_ & 69.42 & 64.93 & 67.30 & 44.00 & 32.74 \\ _w/o mBERT, w/ XLM_ & 72.61 & 70.88 & 74.02 & 48.45 & 35.14 \\ \hline \hline \end{tabular} \end{table} Table 2: F1 comparison on MTOP between different cross-lingual approaches. We highlight the best scores in bold and underline the second best for each language. Firstly, we remove the latent dialogue action from the proposed model and find that it performs worse without the guidance from the latent action. It drops 9.01/7.70 in German, 10.69/10.02 in Spanish, 6.85/4.07 in French, 9.96/4.79 in Hindi, and 9.48/6.06 in Thai, for intent detection/slot filling respectively on MTOP. This is probably because the latent action can provide related information and help to find more accurate semantic slots.
Secondly, we investigate the importance of sentence alignment by replacing LASER with two pre-trained language models, i.e., mBERT or XLM, to derive the sentence embeddings. The results validate the effectiveness of LASER: the performance of models with either mBERT or XLM is inferior to that with LASER, which can produce aligned sentence representations in different languages. In addition, we also find that mBERT can produce better token-level embeddings for slot filling than XLM; thus we derive the word embeddings with mBERT in LaDA. ### Inference Speed The inference speed of various models is shown in Figure 2. The experimental environment is an Intel(R) Xeon(R) Silver 4214R CPU @ 2.40GHz and an NVIDIA Quadro GP100 (16 GB). LaDA has only one additional latent dialogue action layer per token, but otherwise it is the same size as the baseline and thus generates almost the same number of samples per second. Even though separate models must be used for encoding and classification, the reorderer operating on beam search candidates does not cause much slowdown. ## Conclusion In this paper, we propose a new network structure, the latent dialogue action layer, which acts in the decoding stage of conditional generative models. Extensive experimental results show that our model enhances a system's ability to improve adaptation robustness and handle conversations with complex multi-lingual intents and slot values, especially when there are significant grammatical and syntactical variances across languages. Our proposed method is independent of the backbone network (e.g., the mBERT or XLM model) and is highly adaptable. Reinforcement learning could also be used to act in the intent and slot state space, and we will continue to explore this direction. As future work, we plan to investigate the performance of our method on different cross-lingual tasks. ## Acknowledgements The research work is supported by the National Key R&D Program of China (No. 2022YFB3904700), the Key Research and Development Program of Shandong Province (2019JZZY020102), the Key Research and Development Program of Jiangsu Province (No. BE2018084), the Industrial Internet Innovation and Development Project in 2021 (TC210A02M, TC210804D), and the Opening Project of the Beijing Key Laboratory of Mobile Computing and Pervasive Devices.
2310.01073
Complex-valued neural networks to speed-up MR Thermometry during Hyperthermia using Fourier PD and PDUNet
Hyperthermia (HT) in combination with radio- and/or chemotherapy has become an accepted cancer treatment for distinct solid tumour entities. In HT, tumour tissue is exogenously heated to temperatures between 39 and 43 $^\circ$C for 60 minutes. Temperature monitoring can be performed non-invasively using dynamic magnetic resonance imaging (MRI). However, the slow nature of MRI leads to motion artefacts in the images due to the movements of patients during image acquisition. By discarding parts of the data, the speed of the acquisition can be increased - known as undersampling. However, due to the invalidation of the Nyquist criterion, the acquired images might be blurry and can also produce aliasing artefacts. The aim of this work was, therefore, to reconstruct highly undersampled MR thermometry acquisitions with better resolution and with fewer artefacts compared to conventional methods. The use of deep learning in the medical field has emerged in recent times, and various studies have shown that deep learning has the potential to solve inverse problems such as MR image reconstruction. However, most of the published work only focuses on the magnitude images, while the phase images are ignored, which are fundamental requirements for MR thermometry. This work, for the first time, presents deep learning-based solutions for reconstructing undersampled MR thermometry data. Two different deep learning models have been employed here, the Fourier Primal-Dual network and the Fourier Primal-Dual UNet, to reconstruct highly undersampled complex images of MR thermometry. The method reduced the temperature difference between the undersampled MRIs and the fully sampled MRIs from 1.3 $^\circ$C to 0.6 $^\circ$C in full volume and 0.49 $^\circ$C to 0.06 $^\circ$C in the tumour region for an acceleration factor of 10.
Rupali Khatun, Soumick Chatterjee, Christoph Bert, Martin Wadepohl, Oliver J. Ott, Rainer Fietkau, Andreas Nürnberger, Udo S. Gaipl, Benjamin Frey
2023-10-02T10:34:24Z
http://arxiv.org/abs/2310.01073v2
# Fourier PD and PDUNet: Complex-valued networks to speed-up MR Thermometry during Hyperthermia ###### Abstract Hyperthermia (HT) in combination with radio- and/or chemotherapy has become an accepted cancer treatment for distinct solid tumour entities. In HT, tumour tissue is exogenously heated to temperatures of 39 to 43 \({}^{\circ}\)C for 60 minutes. Temperature monitoring can be performed non-invasively using dynamic magnetic resonance imaging (MRI). However, the slow nature of MRI leads to motion artefacts in the images due to movements of the patients during image acquisition. By discarding parts of the data, the speed of the acquisition can be increased - known as undersampling. However, due to the invalidation of the Nyquist criterion, the acquired images have a lower resolution and can also contain artefacts. The aim of this work was, therefore, to reconstruct highly undersampled MR thermometry acquisitions with better resolution and with fewer artefacts compared to conventional techniques like compressed sensing. The use of deep learning in the medical field has emerged in recent times, and various studies have shown that deep learning has the potential to solve inverse problems such as MR image reconstruction. However, most of the published work only focusses on the magnitude images, while the phase images are ignored, which are fundamental requirements for MR thermometry. This work, for the first time, presents deep learning based solutions for reconstructing undersampled MR thermometry data. Two different deep learning models have been employed here, the Fourier Primal-Dual network and the Fourier Primal-Dual UNet, to reconstruct highly undersampled complex images of MR thermometry. It was observed that the method was able to reduce the temperature difference between the undersampled MRIs and the fully sampled MRIs from 1.5 \({}^{\circ}\)C to 0.5 \({}^{\circ}\)C. keywords: Hyperthermia, MRI, MR Image Reconstruction, Deep Learning, Undersampled MRI, MR Reconstruction, Complex Image, MR Thermometry ## 1 Introduction Hyperthermia (HT) has become one of the well-accepted cancer treatments, in combination with radio-/chemotherapy. In HT, tumour tissue is exogenously heated to temperatures of 39 to 44 \({}^{\circ}\)C for 60 minutes to sensitise tumour cells for chemo- and/or radiotherapy (Chorric et al., 2015; Datta et al., 2019; Van der Zee, 2002; Kok et al., 2020). Temperature monitoring is an important part of quality-controlled HT and can be performed non-invasively by magnetic resonance imaging (MRI). However, a major challenge is that MRI is an inherently slow process. Consequently, the scan time for high-resolution imaging is long, compromising the temporal resolution. Longer scan times can also lead to an increase in motion artefacts due to movements of the patient during image acquisition. The speed of image acquisition can be increased by discarding parts of the data, known as undersampling (Chatterjee et al., 2022). But that leads to a loss of resolution and can also produce artefacts due to the invalidation of the Nyquist criterion (Nyquist, 1928; Shannon, 1949). Hence, MR image reconstruction and the reduction of motion artefacts are in high demand. This work aims to reconstruct highly undersampled MR thermometry acquisitions of patients with sarcoma with better resolution and with fewer artefacts compared to conventional techniques such as compressed sensing.
The use of deep learning in the medical field is spreading, including for undersampled MR reconstruction. Using the ReconResNet model as the network backbone, the NCC1701 pipeline has been shown to be capable of removing artefacts from highly undersampled images with acceleration factors as high as 20 (Chatterjee et al., 2022). However, that work only focusses on the magnitude images, while the phase images are ignored, which are fundamental requirements for MR thermometry (Chatterjee et al., 2022). ### Thermal Therapy and Thermometry One of the oldest and easiest thermal analysis techniques is thermometry, which involves the measurement of temperatures over time. Magnetic resonance imaging (MRI) has the ability to map temperatures (Cline et al., 1994), and for more than 30 years several extensive studies have been performed to understand the quality of temperature monitoring in thermal treatment. The hotspot must be located correctly during ablation therapy with the use of MR guidance: it is necessary to locate the ablation site extremely precisely in order to burn just the unhealthy cells and spare the normal ones. The required temperatures are achieved using microwave (MW), radio frequency (RF), ultrasound (US), or infrared (IR) techniques. Thermal therapy can be divided into two techniques. Low-temperature hyperthermia (HT), where tumour tissue is heated to a minimum temperature of 40 to 44 \({}^{\circ}C\) for 60 minutes with the aim of directly killing cancer cells, increasing oxygenation, and thus also increasing the radiosensitisation of the cancer cells (Kim and Hahn, 1979). Local, regional, and whole-body hyperthermia can be distinguished on the basis of the size of the heated area. External heat sources, as well as the intraluminal or interstitial insertion of microwave guides or wires, can be used to apply heat to the tumour. High-temperature thermal ablation, where the tumour tissues are heated to temperatures of 50-80 \({}^{\circ}C\) or higher for a shorter period of time with the aim of killing cancer cells (Thomsen, 1991). ### Deep learning in medical imaging The use of deep learning in the medical field, especially in the field of medical imaging, is increasing rapidly. Deep learning has achieved outstanding performance in the task of undersampled MR image reconstruction and the elimination of artefacts present in those MRIs (Qin et al., 2018; Lyu et al., 2021). Wang et al. (2016) applied deep learning to compressed sensing MRI. The Deep Residual Network (ResNet) (He et al., 2016) was proposed in 2016 to optimise and improve the accuracy of deep learning architectures. ResNet was able to mitigate the vanishing gradient problem and opened the door to deeper networks. ResNet was proposed mainly for image classification, but it has since been used for many other applications, such as classification (Mou et al., 2017; Zhang et al., 2019), image segmentation (Pakhomov et al., 2019) and image denoising (Jifara et al., 2019; Zhu et al., 2017). The residual learning model proved to be very efficient in MRI reconstruction as well (He et al., 2016). One of the most commonly used network architectures for MRI reconstruction is UNet (Hyun et al., 2018), which was first employed for the task of MR reconstruction in 2018 and is capable of reconstructing highly undersampled images. In 2021, Chatterjee et al. (2022a) came up with the NCC1701 pipeline with the ReconResNet model as the backbone.
This has shown an improvement in the reconstruction of undersampled Cartesian and radial MRIs over UNet, and demonstrated that it is capable of reconstructing acquisitions with acceleration factors of up to 20 and 17 for Cartesian and radial MRIs, respectively. UNet and ReconResNet work only in the image space (magnitude images), completely discarding the phase images. Methods that do work with both image space and k-space typically apply real-valued convolution operations to the complex-valued image-space and k-space data, which destroys the rich geometric relationship present within the complex data. In 2021, complex-valued convolutions were applied for the first time to the task of undersampled MRI reconstruction directly in k-space (Chatterjee et al., 2021). On the other hand, Adler and Oktem (2018) proposed the Primal-Dual network (PDNet) for the reconstruction of sparsely sampled computed tomography (CT) data. Given that CT and radial MRI reconstructions are mathematically similar due to the Fourier slice theorem, Ernst et al. (2023) applied PDNet successfully to the task of undersampled radial MRI reconstruction, and also extended PDNet to PDUNet - which outperformed PDNet with statistical significance. Both networks employ two types of network blocks, filtering in the image space and in the sinogram space. In 2022, these models were further extended using complex-valued convolutions into Fourier-PDNet and Fourier-PDUNet by Chatterjee et al. (2022c). These two models work in the k-space (i.e. the raw data space of MRI) instead of the sinogram space, along with working in the image space. Although several models have been proposed for reconstructing undersampled MRIs, including models that work in both image space and k-space and models that work directly with complex data, the main focus of the evaluations carried out in those manuscripts was on the magnitude images. The current research aims to bridge this gap by focussing on both the magnitude and the phase images, and by further evaluating the quality of MR thermometry obtained from the reconstructed MRIs. ### MR-guided thermometry With the benefit of obtaining 3D temperature maps, MR-guided hyperthermia provides a non-invasive approach for temperature monitoring (Wust et al., 2006). Various methods of measuring temperature with an MR system have been reported, based on the proton density, the T1 or T2 relaxation time, the molecular diffusion coefficient of water, magnetisation transfer, temperature-sensitive contrast agents, proton resonance frequency (PRF) shift imaging, or spectroscopy (Kuroda, 2005; Ludemann et al., 2010; Quesson et al., 2000; Rieke and Pauly, 2008; Wlodarczyk et al., 1999). Techniques such as measuring the longitudinal and transverse relaxation times (Parker et al., 1983), the diffusion coefficient, or the proton density rely heavily on the characteristics of the tissue. Because of this, the PRF shift technique is now the preferred technique for MRI-based temperature measurements, due to its potential for online imaging and tumour control during treatments, i.e., good linearity and high temperature sensitivity (Cernicanu et al., 2008; Gellermann et al., 2005). The pre-clinical calibrations and uses of the PRF shift method are outlined in (McDannold, 2005). The PRF shift measurement is the current standard for non-invasive temperature assessment in daily clinical practice.
The goal of the guided system is to capture the real-time temperature distribution, to deliver quality-controlled treatment, and to be able to correlate the treatment temperature with the treatment outcome in terms of actual thermal tissue damage. Fig. 1 shows an example of MR-based temperature monitoring at different time points. ## 2 Methodology Most of the previous work took either of two directions: working only with the magnitude images (ignoring the phase images completely), or working with the complex image by splitting the data into real and imaginary parts before supplying them to the network as two separate channels. The first approach is not suitable for the task at hand, while the second approach destroys the rich geometric structure present in the complex data. Both of these approaches apply real-valued convolution operations. As the data are complex-valued, applying complex-valued convolutions, which can work directly with the complex-valued data without splitting them into channels, is better suited and effectively preserves the geometric structure. ### Experiment Design MR images of 44 patients with different sarcoma cancers who received HT treatment in combination with radiation/chemotherapy were used in this study (refer to Fig. 2). One key goal of this work is the reconstruction of the temperature, so phase images are as important as magnitude images. As the next step, the magnitude and phase images were combined, and complex images were created. These complex images were artificially undersampled. Afterwards, the undersampled complex images were randomly divided into three different sets - training, validation, and test sets - with 26, 7, and 11 subjects, respectively. In the following steps, the training and validation sets were used to train the PDNet (Adler and Oktem, 2018) and PDUNet (Ernst et al., 2023) models, and the test set was used to test the models. After testing, the Structural Similarity Index (SSIM) (Renieblas et al., 2017) was used to validate the results produced by the models. ### Network Architectures Primal-Dual network (PDNet): The Primal-Dual network is a deep learning-based technique for computed tomography data with sparse sampling (Adler and Oktem, 2018). The algorithm unrolls a primal-dual optimisation scheme, with convolutional neural networks in place of the proximal operators, to accommodate (potentially non-linear) forward operators in deep neural networks. The algorithm is trained end-to-end, using only the raw measured data, and does not depend on any initial reconstruction, such as filtered backprojection. This not only raises the quality of the final reconstruction, but also ensures data consistency. Like many iterative algorithms, the quality of PDNet depends on the number of iterations; an optimal number of iterations is needed for the convergence of all the parameters of the network. The fewer the parameters of the convolutional block, the more iterations are needed for convergence. Primal-Dual UNet (PDUNet): Primal-Dual UNet (Ernst et al., 2023) is an improved version of the Primal-Dual network in terms of accuracy and reconstruction speed. A UNet is used in place of the convolutional block of PDNet in the image space, to obtain a higher number of parameters with low processing time. ### Data consistency step In the data consistency step, the actually acquired undersampled data replace the network's output. The network only fills in the data that were discarded during undersampling; in this way, the final output does not depend entirely on the network. Following (Hyun et al., 2018), a data consistency step was performed after reconstructing the undersampled Cartesian data. First, an FFT was performed on the output image to obtain the corresponding k-space. Then, an inverted mask was applied to identify the k-space values that were not acquired. The estimated k-space values from the network for these missing positions were combined with the measured data, and finally an iFFT was applied to this combined k-space to obtain the final output.
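A minimal sketch of this data consistency step is given below, assuming a binary Cartesian sampling mask (1 = acquired) aligned with a centred k-space, and using PyTorch's FFT routines; all names are illustrative and not taken from the original implementation.

```python
import torch

def data_consistency(recon_img: torch.Tensor,
                     measured_kspace: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """Replace the network's k-space estimates with the actually
    acquired values wherever data were measured (mask == 1)."""
    # FFT of the network output to obtain the corresponding k-space
    est_kspace = torch.fft.fftshift(torch.fft.fft2(recon_img))
    # The inverted mask keeps the estimates only at the non-acquired
    # positions; the measured data are re-inserted everywhere else
    combined = est_kspace * (1 - mask) + measured_kspace * mask
    # iFFT of the combined k-space yields the final output
    return torch.fft.ifft2(torch.fft.ifftshift(combined))
```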
### Dataset In this work, MRI images of 44 patients treated at the Department of Radiation Oncology of the University Hospital Erlangen, acquired from 2015 to 2020, who underwent HT treatment with MR thermometry, were used. Image sets of the 44 patients, with different types of sarcoma originating mainly in the legs of the patients (details in Table 3), were acquired at 1.5T; the scanning sequence is GR (Gradient Recalled), with sequence name fl2D (Fast Low Angle Shot, FLASH 2D). The age of the patients ranges from 23 to 80 years. 26 of the 44 subjects were randomly selected for the training set, seven subjects were used for the validation set, and 11 subjects for testing the model. MR thermometry images have two types of acquisition, static and dynamic; the parameters of these two acquisition types are shown in Tables 1 and 2, respectively. As the datasets do not contain any raw MR data, the single-channel fully sampled raw data and the various undersampled datasets were generated artificially using the MRUnder (Chatterjee, 2020) pipeline. Figure 1: Example of non-invasive PRF shift MR thermometry to monitor and control the temperature during clinical hyperthermia (HT). The initial temperature map at time step 0 requires two MR images and is calculated by voxel-wise subtracting the second phase image from the first reference phase image. This reference image is subtracted from the phase images of the subsequently acquired MR images, which are taken every 10 minutes during HT therapy. The temperature map is shown as a colour overlay on the initial high-precision magnitude MR image (blue: relative temperature decrease; green: constant temperature; red: relative temperature rise). Figure 2: Experiment design. Figure 3: The Primal-Dual Network (PDNet): network architecture used to solve the missing-data problem in computed tomography. The dual iterates are shown in green boxes, whereas the primal iterates are shown in blue boxes. All blue boxes have the same architecture, which is illustrated in the corresponding large boxes. Several arrows pointing to one box indicate concatenation. The initial estimates enter from the left, as the data are transmitted to the dual iterates. Cartesian raw data were artificially undersampled using k-space sampling patterns (sampling masks) (Lustig et al., 2007): the first mask was created by randomly choosing fully sampled readout lines in the phase-encoding direction, with the sampling density following a one-dimensional normal distribution centred on the centre of k-space (referred to as 1D varden, Fig. 5(a)). Furthermore, another mask was designed to include a densely sampled centre consisting of eight lines while gradually decreasing the sampling density towards the edges of k-space (Lustig et al., 2007) (referred to as 2D varden, Fig. 5(b)); the densely sampled centre covers 2.5% of the k-space, and the remaining samples are randomly distributed according to a two-dimensional normal distribution. Three distinct Cartesian undersampling patterns were used in the first round of experiments: 1D and 2D varden masks were generated by randomly sampling 25% or 10% of the k-space - achieving acceleration factors of 4 and 10, respectively.
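For illustration, a 1D varden mask of the kind described above could be generated as follows; the exact density function and parameters of the MRUnder pipeline may differ, so this is only a sketch under the stated assumptions (`sigma_frac` is a hypothetical width parameter).

```python
import numpy as np

def varden_mask_1d(shape=(256, 256), fraction=0.25, sigma_frac=0.15, seed=0):
    """1D varden mask: fully sampled readout lines chosen along the
    phase-encoding direction with a normal density centred on k-space."""
    rng = np.random.default_rng(seed)
    n_pe = shape[0]                        # number of phase-encoding lines
    n_keep = int(round(fraction * n_pe))   # 25% -> acceleration factor 4
    lines = np.arange(n_pe)
    # Normal-distribution sampling weights, peaked at the k-space centre
    weights = np.exp(-0.5 * ((lines - n_pe / 2) / (sigma_frac * n_pe)) ** 2)
    weights /= weights.sum()
    chosen = rng.choice(n_pe, size=n_keep, replace=False, p=weights)
    mask = np.zeros(shape, dtype=np.float32)
    mask[chosen, :] = 1.0                  # each chosen line is fully sampled
    return mask
```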
### Implementation As input, the complex undersampled images are used to train the network to obtain the reconstructed complex images. However, most deep learning networks are implemented only for real-valued data, not complex-valued data. Therefore, the use of complex-valued convolutions (CV-CNN) (Chatterjee et al., 2022c) was a necessity. The convolution operation is the main component of CNNs; it is computed as the sum of the product of two functions - the input (x) and the kernel (w) - and the outcome is referred to as the feature map or activation map (s). It is given by: \[s(t)=(w\star x)(t)=\sum_{a}x(t+a)w(a) \tag{1}\] In this case, w and x are both real-valued. Complex-valued convolutional networks, commonly referred to as CV-CNNs, improve on this by using the complex-valued convolution operation, which is defined as: \[\begin{split} C_{r}(t)&=(w_{r}\star x_{r})\left(t\right)-(w_{i}\star x_{i})\left(t\right)\\ C_{i}(t)&=(w_{i}\star x_{r})\left(t\right)+(w_{r}\star x_{i})\left(t\right)\end{split} \tag{2}\] where \(x_{r}\) and \(x_{i}\) are the real and imaginary components of the complex-valued input \(x\), respectively. Similarly, \(w_{r}\) and \(w_{i}\) are the components of the complex-valued kernel \(w\), and \(C_{r}\) and \(C_{i}\) are the components of the generated complex-valued feature map \(s\). This can also be expressed in matrix notation: \[\left[\begin{array}{c}\Re(\mathbf{w}\star\mathbf{x})\\ \Im(\mathbf{w}\star\mathbf{x})\end{array}\right]=\left[\begin{array}{cc}\mathbf{w}_{r}&-\mathbf{w}_{i}\\ \mathbf{w}_{i}&\mathbf{w}_{r}\end{array}\right]\star\left[\begin{array}{c}\mathbf{x}_{r}\\ \mathbf{x}_{i}\end{array}\right] \tag{3}\] CV-CNNs can learn more sophisticated representations while preserving the algebraic structure of complex-valued data. Fig. 6 shows the working mechanism of the proposed framework.
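Eq. 2 can be realised with two standard real-valued convolutions, as in the following PyTorch sketch; the class name and hyperparameters are our own, illustrative choices.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex-valued convolution (Eq. 2) built from two real-valued
    convolutions that hold the real and imaginary parts of the kernel
    w = w_r + i*w_i. Complex biases are omitted for simplicity."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 padding: int = 1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size,
                                padding=padding, bias=False)  # w_r
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size,
                                padding=padding, bias=False)  # w_i

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xr, xi = x.real, x.imag
        # Eq. 2: C_r = w_r*x_r - w_i*x_i ;  C_i = w_i*x_r + w_r*x_i
        cr = self.conv_r(xr) - self.conv_i(xi)
        ci = self.conv_i(xr) + self.conv_r(xi)
        return torch.complex(cr, ci)

# Example: one complex-valued image batch of shape (batch, channels, H, W)
x = torch.randn(1, 1, 256, 256, dtype=torch.complex64)
y = ComplexConv2d(1, 16)(x)   # complex feature maps, shape (1, 16, 256, 256)
```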
### Training and Inference Fig. 6 demonstrates the operational principles of the proposed framework, which includes a network backbone (ReconResNet) for reconstructions using Cartesian undersampling. During the training process, only the network backbone was used; however, the entire framework is used during inference. The L1 loss or mean absolute error (MAE), i.e. the absolute difference between a prediction and the actual value calculated for each sample in the dataset, was used to train ReconResNet: \[f(y,\hat{y})=\frac{1}{N}\sum_{i=1}^{N}|y_{i}-\hat{y}_{i}| \tag{4}\] where \(y\) is the actual value or ground truth, \(\hat{y}\) is the predicted value, and \(N\) is the number of samples. The model was trained for 100 epochs with a batch size of one, and the loss was minimised using the Adam optimiser (initial learning rate 0.0001, decayed by a factor of 10 after every 50 epochs; \(\beta_{1}=0.9,\beta_{2}=0.999,\epsilon=10^{-9}\)). The network was implemented using PyTorch (Paszke et al., 2019) with Python 3.10.9 and was trained on an NVIDIA GeForce RTX 2080 Ti. ### Evaluation Criteria The Structural Similarity Index (SSIM, Eq. 5) (Renieblas et al., 2017), the normalised root-mean-squared error (NRMSE, Eq. 7) and the Universal Image Quality Index (UIQI) (Wang and Bovik, 2002) have been used to evaluate the results. The range of SSIM values is between zero and one; the higher the SSIM value, the higher the similarity between the two images. \[SSIM(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})} \tag{5}\] where \(x\) and \(y\) are the two images between which the structural similarity is to be calculated; \(\mu_{x},\mu_{y},\sigma_{x},\sigma_{y}\) and \(\sigma_{xy}\) are the pixel mean of \(x\), the pixel mean of \(y\), the standard deviations, and the cross-covariance of the images \(x\) and \(y\), respectively; \(C_{1}=(k_{1}L)^{2}\) and \(C_{2}=(k_{2}L)^{2}\), where \(L\) is the dynamic range of the pixel values, \(k_{1}=0.01\) and \(k_{2}=0.03\). To statistically compare the two images (output and ground truth), NRMSE was used, calculated as: \[MSE=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2} \tag{6}\] \[NRMSE=\frac{\sqrt{MSE}}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}Y_{i}^{2}}} \tag{7}\] where the pixels of the fully sampled ground-truth image are denoted as \(Y_{i}\), the pixels of the undersampled image or of the reconstruction (depending on the comparison performed) are denoted as \(\hat{Y}_{i}\), and \(n\) denotes the number of pixels in the image. Figure 5: (a) 1D varden mask and (b) 2D varden mask, both for an image size of 256×256. Universal Image Quality Index (UIQI) (Wang and Bovik, 2002): any image distortion can be modelled as a combination of three components - loss of correlation, luminance distortion, and contrast distortion. The index is simple to calculate and adaptable to numerous image processing applications, as opposed to conventional error summation techniques. \[Q=\frac{\sigma_{xy}}{\sigma_{x}\sigma_{y}}\cdot\frac{2\bar{x}\bar{y}}{(\bar{x})^{2}+(\bar{y})^{2}}\cdot\frac{2\sigma_{x}\sigma_{y}}{\sigma_{x}^{2}+\sigma_{y}^{2}} \tag{8}\] \[\bar{x}=\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}x[i,j]\quad\bar{y}=\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}y[i,j]\] \[\sigma_{xy}=\frac{1}{MN-1}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}(x[i,j]-\bar{x})(y[i,j]-\bar{y})\] \[\sigma_{x}^{2}=\frac{1}{MN-1}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}(x[i,j]-\bar{x})^{2}\] \[\sigma_{y}^{2}=\frac{1}{MN-1}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}(y[i,j]-\bar{y})^{2}\] where \(x\) and \(y\) are two images, considered as matrices with \(M\) columns and \(N\) rows and pixels \(x[i,j]\), \(y[i,j]\) (\(0\leq i<M\), \(0\leq j<N\)), and \(Q\) is the Universal Image Quality Index. \(Q\) is obtained by multiplying three components together. The first component is the correlation coefficient, which quantifies the degree of linear correlation between the images \(x\) and \(y\); its range is \([-1,1]\). The second component assesses the similarity of the mean luminance between the images and has a value range of \([0,1]\). The third component, with a range of \([0,1]\), quantifies how closely the contrasts of the images match.
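As a reference, Eqs. 6-8 can be computed globally per image pair as follows; this is a sketch using standard sample statistics, and the published evaluation may additionally compute UIQI over sliding windows.

```python
import numpy as np

def nrmse(gt: np.ndarray, pred: np.ndarray) -> float:
    """Normalised root-mean-squared error (Eqs. 6-7)."""
    mse = np.mean((gt - pred) ** 2)
    return float(np.sqrt(mse) / np.sqrt(np.mean(gt ** 2)))

def uiqi(x: np.ndarray, y: np.ndarray) -> float:
    """Universal Image Quality Index (Eq. 8), computed globally as the
    product of correlation, luminance, and contrast terms."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)             # sample variances
    cov = ((x - mx) * (y - my)).sum() / (x.size - 1)  # sample covariance
    correlation = cov / np.sqrt(vx * vy)
    luminance = 2 * mx * my / (mx ** 2 + my ** 2)
    contrast = 2 * np.sqrt(vx * vy) / (vx + vy)
    return float(correlation * luminance * contrast)
```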
Figure 6: Working model. ### Temperature Map Temperature maps were generated to evaluate the temperatures retrieved by the Fourier-PDNet and Fourier-PDUNet models, as well as from the undersampled inputs and the ground-truth images for comparison. To create these temperature maps, the proton resonance frequency shift (PRFS) method (McDannold, 2005) was used. The PRF shift approach is currently the clinically recommended practice for MRI-based temperature measurements. The linear proton resonance frequency shift of water molecules with temperature is one of the most widely used methods for MR-based temperature monitoring (McDannold, 2005). In gradient recalled echo (GRE) images, the change of the resonance frequency is expressed as a phase change. The temperature difference can be derived by calculating the phase difference between a GRE image at a certain temperature and one at a reference temperature (Ishihara et al., 1995). The linear relationship between the temperature difference and the phase change can be expressed by the following equation: \[\Delta T=\frac{\phi(T)-\phi(T_{ref})}{\Upsilon\alpha\beta\,TE} \tag{9}\] where \(\Delta T\) is the temperature difference, \(\alpha\) the temperature sensitivity of the PRFS, \(\Upsilon\) the gyromagnetic constant, \(\beta\) the main magnetic field strength, and \(TE\) the echo time. According to Eq. 10, a complex calculation is performed to construct the phase difference, which avoids the phase wrapping problem during the heating cycle (Peters, 2000): \[\Delta\phi=\mathrm{atan}\left(\frac{\mathrm{Re}(I_{\mathrm{ref}})\cdot\mathrm{Im}(I_{H})-\mathrm{Im}(I_{\mathrm{ref}})\cdot\mathrm{Re}(I_{H})}{\mathrm{Re}(I_{\mathrm{ref}})\cdot\mathrm{Re}(I_{H})+\mathrm{Im}(I_{\mathrm{ref}})\cdot\mathrm{Im}(I_{H})}\right) \tag{10}\] where \(\mathrm{Re}\) and \(\mathrm{Im}\) are the real and imaginary components of the heated (\(I_{H}\)) and reference (\(I_{\mathrm{ref}}\)) images.
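Eqs. 9 and 10 can be implemented directly on the complex images; the sketch below uses numpy, with illustrative constants (α ≈ −0.01 ppm/°C is a commonly quoted PRFS coefficient, and B0 and TE must match the actual acquisition).

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6   # gyromagnetic ratio of 1H [rad/s/T]
ALPHA = -0.01e-6               # PRFS temperature sensitivity [per degC] (assumed)
B0 = 1.5                       # main field strength [T], as in this study
TE = 0.01                      # echo time [s] (illustrative value)

def temperature_change(img_ref: np.ndarray, img_heated: np.ndarray) -> np.ndarray:
    """Relative temperature change from two complex GRE images."""
    # Eq. 10: the complex product yields exactly the atan expression,
    # avoiding explicit phase unwrapping
    delta_phi = np.angle(img_heated * np.conj(img_ref))
    # Eq. 9: linear PRFS relation between phase change and temperature
    return delta_phi / (GAMMA * ALPHA * B0 * TE)
```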
## 3 Results

In this research, images from 44 patients with sarcoma cancer were undersampled with an acceleration factor of 4. The resulting 1D varden 25% undersampled images had average SSIM values of 63% and 31% for the magnitude and phase images, respectively; the Fourier-PDNet and Fourier-PDUNet models reconstructed those data with average SSIM values of 91% and 90% for the magnitude images, while achieving 44% and 40% for the phase images, respectively. The results are displayed using the violin plots in Fig. 10. Example outputs from two different subjects for the 1D varden 25% sampling pattern are shown in Fig. 7 for qualitative evaluation. The average SSIM values of the 2D varden undersampled MRIs with 25% undersampling were 43% and 29% for the magnitude and phase images. The Fourier-PDNet and Fourier-PDUNet models reconstructed those with average SSIM values of 94% and 93% for the magnitude images, while achieving 47% and 46% for the phase images, respectively. The SSIM values as violin plots and example results are shown in Figures 10 and 8, respectively. Finally, with an acceleration factor of 10, the SSIM values of the 2D varden 10% undersampled images were 39% and 28% for the magnitude and phase images, respectively. The Fourier-PDNet and Fourier-PDUNet models improved the SSIM values to 87% and 86% for the magnitude images, while achieving 43% and 41% for the phase images, respectively.

The violin plots of the SSIM values and the qualitative comparison for two subjects are shown in Figures 10 and 9, respectively. Table 5 provides a complete quantitative overview of the results using NRMSE and UIQI, along with the SSIM values. The temperature difference between the ground truth and the most strongly undersampled images (2D varden 10%) was 1.299\(\pm\)0.032, i.e., around 1.3 \({}^{\circ}\)C. The models, however, reduced the difference to 0.618\(\pm\)0.016 and 0.643\(\pm\)0.022 for Fourier-PDNet and Fourier-PDUNet, respectively, which is only around half a \({}^{\circ}\)C above the ground truth (Table 4). The models thus reduce the temperature-map error by more than 50% compared to the undersampled MRIs. This means that MR acquisition can be sped up by a factor of 10 at the cost of only around half a \({}^{\circ}\)C of temperature error. Examples of the reconstructed temperature maps are shown in Fig. 11.

## 4 Discussion

The assessment of the proposed framework for reconstructing MRI images from undersampled data has shown that it can be applied effectively not only to anatomical MRI but also to MR thermometry, as the models were also capable of reconstructing the temperatures. To the best of the authors' knowledge, this manuscript is the first to address undersampled MR thermometry, and by extension MR-guided hyperthermia, using deep learning based methods; it is also the first work discussing the need for and the feasibility of accelerating MR-guided hyperthermia.

From the results, it can be observed that the framework is robust against various undersampling patterns. For example, the SSIM values of the 1D varden 25% undersampled images are 63% and 31% for the magnitude and phase images, and the Fourier-PDNet and Fourier-PDUNet models reconstructed those data with average SSIM values of 91% and 90% for the magnitude images, while achieving 44% and 40% for the phase images, respectively. The SSIM values of the 2D varden 25% undersampled images are 43% and 29% for the magnitude and phase images; Fourier-PDNet and Fourier-PDUNet reconstructed those data with average SSIM values of 94% and 93% for the magnitude images, while achieving 47% and 46% for the phase images. For 2D varden 10%, the values are 39% and 28% for the magnitude and phase images, and the models reconstructed those data with average SSIM values of 87% and 86% for the magnitude images, while achieving 43% and 41% for the phase images, respectively. These results are displayed in the violin plots (Fig. 10) as well as in Table 5.

The results show that both the Fourier Primal-Dual Network (Fourier-PDNet) and the Fourier Primal-Dual UNet (Fourier-PDUNet) were able to alleviate the undersampling problem, and they indicate that deep learning models have the potential to improve this novel hyperthermia treatment. From the quantitative results in Table 5, it is clear that Fourier-PDNet outperformed Fourier-PDUNet in terms of SSIM. The same holds for UIQI, but a slightly different behaviour was observed for NRMSE: for 1D varden 25% and 2D varden 25%, the NRMSE of Fourier-PDUNet was better than that of Fourier-PDNet. The results also make clear that both models performed considerably better on the magnitude images than on the phase images, which is likely the reason for the remaining 0.5 \({}^{\circ}\)C difference between the reconstructed temperatures and the ground truth. Improving the models on the phase images should therefore also decrease the temperature difference.
Finally, this research demonstrates the possibility of accelerating MR thermometry during MR-guided hyperthermia. By using undersampling patterns such as 2D varden 10%, the acquisition can be made ten times faster, and the methods presented here considerably reduce the compromise in terms of temperature accuracy. The faster acquisition would not only reduce the chance of motion artefacts due to patient movement during the scan, but would also significantly improve the temporal resolution of the imaging, making it possible to capture subtle changes in temperature over time.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Type of Undersampling** & **Undersampled** & **Fourier-PDNet** & **Fourier-PDUNet** \\ \hline 1D Varden 25\% & 1.221\(\pm\)0.035 & 0.637\(\pm\)0.016 & 0.641\(\pm\)0.018 \\ \hline 2D Varden 25\% & 1.296\(\pm\)0.031 & 0.596\(\pm\)0.014 & 0.657\(\pm\)0.019 \\ \hline 2D Varden 10\% & 1.299\(\pm\)0.032 & 0.618\(\pm\)0.016 & 0.643\(\pm\)0.022 \\ \hline \end{tabular} \end{table} Table 4: The mean MSE of the reconstructed temperature maps for the different models.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{} & \multicolumn{3}{c|}{**2D Varden 10\%**} & \multicolumn{3}{c|}{**2D Varden 25\%**} & \multicolumn{3}{c|}{**1D Varden 25\%**} \\ \cline{4-12} \multicolumn{3}{|c|}{} & SSIM & NRMSE & UIQI & SSIM & NRMSE & UIQI & SSIM & NRMSE & UIQI \\ \hline \multirow{2}{*}{**Input**} & \multirow{2}{*}{Undersampled} & _Magnitude_ & 0.385\(\pm\)0.009 & 0.541\(\pm\)0.022 & 0.536\(\pm\)0.003 & 0.343\(\pm\)0.008 & 0.478\(\pm\)0.015 & 0.563\(\pm\)0.002 & 0.653\(\pm\)0.005 & 0.358\(\pm\)0.012 & 0.705\(\pm\)0.003 \\ & & _Phase_ & 0.28\(\pm\)0.011 & 1.147\(\pm\)0.006 & 0.356\(\pm\)0.013 & 0.294\(\pm\)0.011 & 1.128\(\pm\)0.005 & 0.363\(\pm\)0.012 & 0.312\(\pm\)0.009 & 0.172\(\pm\)0.005 & 0.382\(\pm\)0.011 \\ \hline \multirow{4}{*}{**Output**} & \multirow{2}{*}{Fourier-PDNet} & _Magnitude_ & 0.886\(\pm\)0.004 & 0.121\(\pm\)0.002 & 0.807\(\pm\)0.006 & 0.941\(\pm\)0.002 & 0.675\(\pm\)0.001 & 0.551\(\pm\)0.003 & 0.509\(\pm\)0.001 & 0.129\(\pm\)0.002 & 0.843\(\pm\)0.004 \\ & & _Phase_ & 0.429\(\pm\)0.01 & 0.935\(\pm\)0.006 & 0.433\(\pm\)0.01 & 0.47\(\pm\)0.009 & 0.983\(\pm\)0.004 & 0.501\(\pm\)0.009 & 0.443\(\pm\)0.001 & 0.961\(\pm\)0.005 & 0.477\(\pm\)0.01 \\ \cline{2-12} & \multirow{2}{*}{Fourier-PDUNet} & _Magnitude_ & 0.864\(\pm\)0.012 & 0.864\(\pm\)0.012 & 0.799\(\pm\)0.008 & 0.933\(\pm\)0.011 & 0.084\(\pm\)0.005 & 0.526\(\pm\)0.009 & 0.905\(\pm\)0.002 & 0.144\(\pm\)0.002 & 0.834\(\pm\)0.003 \\ & & _Phase_ & 0.405\(\pm\)0.013 & 1.095\(\pm\)0.003 & 0.452\(\pm\)0.011 & 0.462\(\pm\)0.014 & 0.963\(\pm\)0.01 & 0.498\(\pm\)0.012 & 0.401\(\pm\)0.012 & 0.934\(\pm\)0.005 & 0.466\(\pm\)0.011 \\ \hline \end{tabular} \end{table} Table 5: Quantitative results for the Cartesian undersampling patterns, with the models trained separately on each pattern.

Figure 7: Qualitative results: 1D varden 25%.

Figure 8: Qualitative results: 2D varden 25%.

Figure 9: Qualitative results: 2D varden 10%.

Figure 10: (a) SSIM values of the reconstructed magnitude images for the different models. (b) SSIM values of the reconstructed phase images for the different models. The blue violin plots show the SSIM values of Fourier-PDUNet, the orange violin plots those of Fourier-PDNet, and the green violin plots those of the undersampled images.

## 5 Conclusion and future works

This paper introduced deep learning based reconstruction of undersampled MR thermometry acquired during hyperthermia, using the Fourier-PDNet and Fourier-PDUNet models.
Across the experiments with the different undersampling patterns and percentages, the results show that the methods were able to alleviate the undersampling problem, achieving SSIM scores of 0.886\(\pm\)0.004 for the magnitude images and 0.429\(\pm\)0.01 for the phase images for the highest undersampling pattern (2D varden 10%). This means that the MR acquisition becomes ten times faster while the reconstructed temperatures stay close to the ground truth, with a remaining error of 0.618\(\pm\)0.016. Still, around half a \({}^{\circ}\)C of temperature difference can be seen in the deep learning results. This can be attributed to the performance gap of the models between the magnitude and phase images. Future work will focus on improving the networks' performance on the phase images, which should also reduce the temperature difference. Furthermore, combining the Fourier-PDNet and Fourier-PDUNet models with dynamic MRI-centric pipelines (Sarasaen et al., 2021; Chatterjee et al., 2022b) could allow these models to better exploit the spatio-temporal nature of MR thermometry data, improving the overall reconstruction quality.

Another future direction for research is to focus on the latent space. Exploring the latent space can be useful for improving the image reconstruction quality in undersampled reconstruction tasks where the input images contain artefacts: the latent space represents, in theory, a low-dimensional representation of the input images without the artefacts, and the input images with artefacts can be considered augmented versions of the artefact-free images. Different types of variational auto-encoder methods (Makhzani et al., 2015) can be used, such as the Factorised Variational Auto-encoder (FactorVAE) (Kim and Mnih, 2018), the Vector Quantised Variational Auto-encoder (VQ-VAE) (Van Den Oord et al., 2017), and Masked Autoencoders (MAE) (He et al., 2022). The use of post-hoc explainability methods such as Saliency (Simonyan et al., 2013), Occlusion (Zeiler and Fergus, 2014), and Guided Backpropagation (Mahendran and Vedaldi, 2016) could give a better understanding of what goes wrong with the phase images, which would help improve the network accordingly.

## Acknowledgement

This research has received support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie (MSCA-ITN) grant "Hyperboost" project, no. 955625.
2306.04835
Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design. However, GNNs lack transparency as they cannot provide understandable explanations for their predictions. To address this issue, counterfactual reasoning is used. The main goal is to make minimal changes to the input graph of a GNN in order to alter its prediction. While several algorithms have been proposed for counterfactual explanations of GNNs, most of them have two main drawbacks. Firstly, they only consider edge deletions as perturbations. Secondly, the counterfactual explanation models are transductive, meaning they do not generalize to unseen data. In this study, we introduce an inductive algorithm called INDUCE, which overcomes these limitations. By conducting extensive experiments on several datasets, we demonstrate that incorporating edge additions leads to better counterfactual results compared to the existing methods. Moreover, the inductive modeling approach allows INDUCE to directly predict counterfactual perturbations without requiring instance-specific training. This results in significant computational speed improvements compared to baseline methods and enables scalable counterfactual analysis for GNNs.
Samidha Verma, Burouj Armgaan, Sourav Medya, Sayan Ranu
2023-06-07T23:40:18Z
http://arxiv.org/abs/2306.04835v1
# Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity

###### Abstract

Graph neural networks (Gnns) have various practical applications, such as drug discovery, recommendation engines, and chip design. However, Gnns lack transparency as they cannot provide understandable explanations for their predictions. To address this issue, counterfactual reasoning is used. The main goal is to make minimal changes to the input graph of a Gnn in order to alter its prediction. While several algorithms have been proposed for counterfactual explanations of Gnns, most of them have two main drawbacks. Firstly, they only consider edge deletions as perturbations. Secondly, the counterfactual explanation models are transductive, meaning they do not generalize to unseen data. In this study, we introduce an inductive algorithm called InduCE, which overcomes these limitations. By conducting extensive experiments on several datasets, we demonstrate that incorporating edge additions leads to better counterfactual results compared to the existing methods. Moreover, the inductive modeling approach allows InduCE to directly predict counterfactual perturbations without requiring instance-specific training. This results in significant computational speed improvements compared to baseline methods and enables scalable counterfactual analysis for Gnns.

## 1 Introduction and Related Work

The applications of Graph Neural Networks (Gnns) have percolated beyond the academic community. Gnns have been used for drug discovery [20], designing chips [16], and recommendation engines [31]. Despite significant success in prediction accuracy, Gnns, like other deep learning based models, lack the ability to explain why a particular prediction was made. Explainability of a prediction model is important towards making it trustworthy. In addition, it sheds light on potential flaws and generates insights on how to further refine a model.

**Existing Works:** At a high level, Gnn explainers can be classified into the two groups of _instance-level_ [32; 14; 18; 36; 7; 35; 13; 21; 12; 4; 1; 27] and _model-level_ explanations [33]. Consistent with their nomenclature, instance-level explainers explain a specific input graph, whereas model-level explainers provide a high-level explanation of the general behaviour of the Gnn model trained over a set of graphs. Recent research has also focused on global concept-based explainers [30; 2] that provide both model- and instance-level explanations. Instance-level methods can broadly be grouped into two categories: _factual_ reasoning [32; 14; 18; 36; 7; 35] and _counterfactual_ reasoning [13; 21; 4; 1; 27]. Given the input graph and a Gnn, factual reasoners seek to identify the smallest subgraph that is sufficient to make the same prediction as on the entire input graph. Counterfactual reasoners, on the other hand, seek to identify the smallest perturbation of the input data that changes the Gnn's prediction. Perturbations correspond to the removal and addition of edges. Compared to factual reasoning, counterfactual reasoners have the additional advantage of providing a means for recourse [23]. For example, in drug discovery [8, 29], mutagenicity is an adverse property of a molecule that hampers its potential to become a drug [9]. While factual explainers can attribute the subgraph causing mutagenicity, counterfactual reasoners can identify this subgraph along with the changes that would make the molecule non-mutagenic.
In this work, we study counterfactual reasoning over Gnns for node classification. To illustrate our problem, consider the input graph shown in Fig. 1. Here, each node belongs to the _green_ class if it is part of the motif (subgraph) shown on the right; otherwise, it belongs to the _yellow_ class. Assume, for now, that the dotted edge incident on node A does not exist. At this stage, if we ask the counterfactual reasoner to flip the label of node A, the best answer would be to add the dotted edge. Similarly, for node B, one possible answer would be to delete the edge marked with \(\otimes\). Existing works on counterfactual reasoning over Gnns suffer from two key limitations:

* **Ability to add edges:** Most of the existing techniques do not consider the addition of edges (or nodes); they only consider edge removals. This limitation severely restricts the search space of possible "changes" on the input graph. As an example, in Fig. 1, if we only consider deletions, it is impossible to flip the label of A.
* **Inductive modeling:** Existing techniques, with the exception of Gem [12], are transductive in nature, i.e., they cannot generate counterfactuals on unseen nodes. As an example, if the model is trained to generate counterfactuals on node \(v\) of graph \(G\), it cannot be used to generate counterfactuals on another node \(u\) of \(G\). Consequently, these transductive models need to be retrained on each node of an input graph. In contrast, an inductive model learns parameters from a train set of nodes, which in turn can be used to _predict_ counterfactuals on unseen nodes. In addition, an inductive model is robust to changes in the input graph due to external factors such as new friend connections in a social network, new citations in a citation network, etc.

Table G in the Appendix presents a structured summary of the instance-level explainers.

**Contributions:** In this work, we develop InduCE (Inductive Counterfactual Explanations), which addresses the above limitations of existing counterfactual reasoners. We make the following contributions:

* **Novel formulation:** We formulate the novel problem of _model-agnostic, inductive_ counterfactual reasoning over Gnns for node classification. It is worth noting that both inductive modeling and the ability to add edges introduce non-trivial challenges. In inductive modeling, we need to learn parameters that embody general rules to be used for predicting counterfactuals; in the transductive approach, since parameters are learned for each specific node, there is no generalization component. Edge additions introduce a significant scalability challenge, as the number of possible additions grows quadratically with the number of nodes in the graph, whereas the number of edge deletions is \(O(|\mathcal{E}|)\), where \(\mathcal{E}\) is the set of edges in the graph (§ 2).
* **Algorithm:** Identifying the smallest number of edge additions or removals that alters the prediction is a combinatorial optimization problem. We prove that computing the optimal solution to this problem is NP-hard (§ 2). As a heuristic, we _learn_ to solve this combinatorial optimization problem through reinforcement learning powered by _policy gradients_ [28] (§ 3).
* **Empirical validation:** Through extensive experiments on benchmark graph datasets, we show that InduCE outperforms state-of-the-art algorithms in metrics relevant to counterfactual reasoning.
We further analyze the generated counterfactuals and provide compelling evidence that enabling edge additions is indeed the key driver of InduCE's superior performance. Finally, we showcase the computational gains obtained by embracing the inductive paradigm instead of transductive modeling (§ 4).

Figure 1: The figure contains two graphs, with the right graph labeled "Motif". Each node in the left graph belongs to either the green class (label) or the yellow class. The green class indicates a node that is part of a subgraph isomorphic to the motif, yellow otherwise. Adding the dotted edge incident on node A changes its label from yellow to green, since A becomes part of the motif.

## 2 Preliminaries and Problem Formulation

We use the notation \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) to denote a graph with node set \(\mathcal{V}\) and edge set \(\mathcal{E}\). We assume each node \(v_{i}\in\mathcal{V}\) is characterized by a feature vector \(x_{i}\in\mathbb{R}^{d}\). Furthermore, \(l:\mathcal{V}\rightarrow\mathcal{C}\) is a function mapping each node \(v\) to its true class label drawn from a set \(\mathcal{C}\). We assume there exists a Gnn \(\Phi\) that has been trained on \(\mathcal{G}\). Given an input node \(v\in\mathcal{V}\), we assume \(\Phi(\mathcal{G},v,c)\) outputs a probability distribution over class labels \(c\in\mathcal{C}\). The predicted class label is therefore the class with the highest probability, which we denote as \(L_{\Phi}(\mathcal{G},v)=\arg\max_{c\in\mathcal{C}}\{\Phi(\mathcal{G},v,c)\}\).

**Problem 1** (Counterfactual Reasoning on Gnns): _Given input graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), a target node \(v\in\mathcal{V}\), a Gnn model \(\Phi\), and an optional set of node pairs \(\mathcal{V}_{c}=\{(v_{i},v_{j})\mid v_{i},v_{j}\in\mathcal{V}\}\) between which edges may be perturbed, find the closest graph \(\mathcal{G}^{*}\), by minimizing the number of perturbations, such that \(L_{\Phi}(\mathcal{G}^{*},v)\neq L_{\Phi}(\mathcal{G},v)\) and all perturbed edges are among pairs in \(\mathcal{V}_{c}\)._

In the real world, we may not have control over all perturbations; \(\mathcal{V}_{c}\) allows us to specify this. If \(\mathcal{V}_{c}\subseteq\mathcal{E}\), we restrict the reasoner to deletions only. On the other hand, if \(\mathcal{V}_{c}\cap\mathcal{E}=\emptyset\), we only allow additions. In our problem, we enforce two restrictions on the counterfactual reasoner. First, it should be _model-agnostic_, i.e., only the output of \(\Phi\) is visible to us, but not its parameters. Second, the reasoner should be _inductive_, which means we should learn a _predictive model_ \(\Pi\) that can predict the counterfactual graph \(\mathcal{G}^{*}\) given the inputs \(\mathcal{G}\), Gnn \(\Phi\), and target node \(v\).

**Theorem 1** (NP-hardness): _Counterfactual reasoning for Gnns, i.e., Prob. 1, is NP-hard._

We prove NP-hardness by mapping counterfactual reasoning over Gnns to the _set-cover_ problem; the details are provided in App. A. Owing to NP-hardness, it is not feasible to identify the closest counterfactual graph in polynomial time. Hence, we aim to design effective heuristics.

## 3 InduCE

Our goal is to learn an inductive counterfactual reasoning model \(\Pi\), and thus the proposed algorithm is split into two phases: _training_ and _inference_. During training, we learn the parameters of the model \(\Pi\), and during inference, we predict the counterfactual graph using \(\Pi\).
Theorem 1 prohibits us from using supervised learning, since generating training data of ground-truth counterfactuals is NP-hard. Hence, we use reinforcement learning. Through _discounted rewards_, reinforcement learning allows us to model the combinatorial relationships [10] in the perturbation space.

### Learning \(\Pi\) as an MDP

Given graph \(\mathcal{G}\), we randomly select a subset of vertices from \(\mathcal{V}\) to train \(\Pi\). Given a target node \(v\), the task of \(\Pi\) is to iteratively delete or add edges such that each perturbation maximally increases the likelihood of \(L_{\Phi}(\mathcal{G}^{t},v)\neq L_{\Phi}(\mathcal{G},v)\). Here, \(\mathcal{G}^{t}=(\mathcal{V},\mathcal{E}^{t})\) denotes graph \(\mathcal{G}\) after \(t\) perturbations, starting with \(\mathcal{G}^{0}=\mathcal{G}\). We model this task of iterative perturbations as a _Markov decision process (MDP)_. Specifically, the _state_ captures a latent representation of the graph indicative of how it would react to a perturbation. An _action_ corresponds to an edge addition or deletion by \(\Pi\). Finally, the _reward_ is a function of the number of perturbations, which we want to minimize, and the probability of the prediction on \(v\) flipping following the next action (edge addition or deletion), which we want to maximize. We next formalize each of these notions.

**State:** Intuitively, the state should characterize how likely the class label of the target node \(v\) is to flip following a given action. Towards that end, we observe that a Gnn of \(\ell\) layers aggregates information from the \(\ell\)-hop neighborhood of \(v\); nodes outside this neighborhood do not impact the prediction of the Gnn. Motivated by this design of Gnns, the state in our problem is the set of node representations in the \(h\)-hop neighborhood of the target node \(v\), where ideally \(h>\ell\). Specifically, at time \(t\), the state is:

\[\mathbf{S}^{t}_{v}=\left\{\mathbf{x}^{t}_{u}\mid u\in\mathcal{N}^{h}_{v}\right\} \tag{1}\]

\[\mathcal{N}^{h}_{v}=\left\{u\in\mathcal{V}\mid sp(v,u)\leq h\right\} \tag{2}\]

Here, \(sp(v,u)\) denotes the length of the shortest path from \(v\) to \(u\) in the original graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). The representations of the nodes, i.e., \(\mathbf{x}^{t}_{u}\), are constructed using a combination of semantic, topological, and statistical features:

* _Original node features:_ It is common to encounter graphs where nodes are annotated with features or labels (recall the definition of \(x_{i}\) in § 2). We retain these features.
* _Degree centrality:_ The higher the degree of a node, the more information it receives from its neighbors. Thus, when an edge is added or deleted between the target node and a high-degree node, it may have a significant impact on the representation of the target. Based on this observation, we use the degree of a node as part of its representation.
* _Entropy:_ The entropy of node \(u\) at time \(t\) is defined as \(e_{u}^{t}=-\sum_{\forall c\in\mathcal{C}}p_{c}\log p_{c}\), where \(p_{c}=\Phi\left(\mathcal{G}^{t},u,c\right)\). The entropy quantifies the uncertainty of the Gnn \(\Phi\) on a given node. We hypothesize that if \(\Phi\) is highly certain (i.e., low entropy) about the class label of some node \(u\), then any perturbation on \(u\) is unlikely to make it flip. Similarly, the opposite holds for nodes with high entropy. Due to this information content, we use the entropy as one of the features in \(\mathbf{x}_{u}^{t}\).
* _Class label:_ Finally, we include the predicted class label of a node, i.e., \(L_{\Phi}(\mathcal{G}^{t},u)\), in the form of a one-hot encoding of dimension \(|\mathcal{C}|\).

The final representation of node \(u\) at time \(t\) is therefore the concatenation of the above features, i.e.,

\[\mathbf{x}_{u}^{t}=x_{u}\parallel degree_{u}^{t}\parallel e_{u}^{t}\parallel\left(\text{one-hot}\left(L_{\Phi}\left(\mathcal{G}^{t},u\right)\right)\right) \tag{3}\]

Here, \(\parallel\) represents the _concatenation_ operator.

**Actions:** The action space consists of all possible deletions of edges in the \(h\)-hop neighborhood of target node \(v\), and all possible additions of edges from \(v\) to non-attached nodes in its \(h\)-hop neighborhood. Formally, these sets are defined as follows:

\[\mathcal{E}_{v,del}^{t}=\left\{e=\left(u_{i},u_{j}\right)\in\mathcal{E}^{t}\mid u_{i},u_{j}\in\mathcal{N}_{v}^{h}\right\} \tag{4}\]

\[\mathcal{E}_{v,add}^{t}=\left\{e=\left(v,u_{j}\right)\not\in\mathcal{E}^{t}\mid u_{j}\in\mathcal{N}_{v}^{h}\right\} \tag{5}\]

The action space is the perturbation set:

\[\mathcal{P}_{v}^{t}=\mathcal{E}_{v,del}^{t}\cup\mathcal{E}_{v,add}^{t} \tag{6}\]

**Reward:** Our objective is to flip the predicted label of target node \(v\) with the minimum number of perturbations in \(\mathcal{N}_{v}^{h}\). To capture these intricacies, we formulate the reward of an action \(a\) as a combination of the prediction accuracy of Gnn \(\Phi\) and the number of perturbations made so far:

\[\mathcal{R}_{v}^{t}(a)=\frac{1}{\mathcal{L}_{v,pred}^{t+1}+\beta\times d(\mathcal{G},\mathcal{G}^{t})}\text{, where} \tag{7}\]

\[\mathcal{L}_{v,pred}^{t}=\sum_{\forall c\in\mathcal{C}}\mathds{1}_{l(v)=c}\log\left(\Phi\left(\mathcal{G}^{t},v,c\right)\right),\quad d(\mathcal{G},\mathcal{G}^{t})=t+1 \tag{8}\]

In simple terms, \(\mathcal{L}_{v,pred}^{t+1}\) is the _log-likelihood_ of the data predicted by \(\Phi\) on \(v\) in \(\mathcal{G}^{t+1}\), where \(\mathcal{G}^{t+1}\) is the graph created by perturbing \(\mathcal{G}^{t}\) with action \(a\). \(\beta\) is a hyper-parameter that regulates how much weight is given to the log-likelihood of the data vs. the perturbation count. \(d\) is the distance function, which in our case is simply the number of edge edits made to \(\mathcal{G}\) up to time step \(t\).

**State Transitions:** At time \(t\), the action corresponds to selecting a perturbation \(a\in\mathcal{P}_{v}^{t}\) (recall Eq. 6) with probability \(p_{a,v}^{t}\sim\Pi\left(a\mid\mathcal{S}_{v}^{t}\right)\). We discuss the computation of \(p_{a,v}^{t}\) in § 3.2.
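To illustrate Eq. 3, a minimal PyTorch sketch of assembling the state features for all nodes at once might look as follows; the function name is ours, and `phi_probs` stands for the class distributions \(\Phi(\mathcal{G}^{t},u,\cdot)\) produced by the black-box Gnn, which are assumed to be precomputed:

```python
import torch
import torch.nn.functional as F

def node_state_features(x_orig: torch.Tensor,    # [N, d] original node features
                        degrees: torch.Tensor,   # [N]    node degrees in G^t
                        phi_probs: torch.Tensor  # [N, C] black-box GNN class probabilities
                        ) -> torch.Tensor:
    """Assemble x_u^t = x_u || degree_u^t || e_u^t || one-hot(L_Phi(G^t, u)) (Eq. 3)."""
    eps = 1e-12
    # Entropy feature e_u^t quantifying the GNN's uncertainty on each node.
    entropy = -(phi_probs * (phi_probs + eps).log()).sum(dim=1, keepdim=True)
    # Predicted class label L_Phi(G^t, u) as a one-hot encoding.
    labels = phi_probs.argmax(dim=1)
    one_hot = F.one_hot(labels, num_classes=phi_probs.shape[1]).float()
    # Concatenation (the || operator of Eq. 3).
    return torch.cat([x_orig, degrees.unsqueeze(1).float(), entropy, one_hot], dim=1)
```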
### Neural Architecture for Policy Training

To learn \(p_{a,v}^{t}\), we take the representations in \(\mathbf{S}_{v}^{t}\) and pass them through a neural network comprising a \(K\)-layered _Graph Attention Network_ (Gat) [22], an MLP, and a final SoftMax layer. The Gat learns a \(d\)-dimensional representation \(\mathbf{a}\in\mathbb{R}^{d}\) for each perturbation \(a\in\mathcal{P}_{v}^{t}\). \(\mathbf{a}\) is then passed through a _Multi-layered Perceptron (MLP)_ that embeds it into a scalar representing its value, which is finally passed through a SoftMax layer to learn a distribution over \(\mathcal{P}_{v}^{t}\). The entire network is trained end-to-end. We next detail each of these components.

Figure 2: Pipeline of the policy learning algorithm in InduCE. \(\delta\) indicates the maximum number of allowed perturbations.

**Gat:** Let \(\mathbf{h}_{u}^{0}=\mathbf{x}_{u}^{t}\) for all \(u\in\mathcal{N}_{v}^{h}\) (recall Eq. 3). In each layer \(k\in[1,K]\), we perform the following transformation:

\[\mathbf{h}_{u}^{k}=\sigma\left(\sum_{\forall u^{\prime}\in\mathcal{N}_{u}^{1}\cup\{u\}}\alpha_{u,u^{\prime}}^{k}\mathbf{W}^{k}\mathbf{h}_{u^{\prime}}^{k-1}\right) \tag{9}\]

\(\sigma\) is an activation function, \(\alpha_{u,u^{\prime}}^{k}\) are learnable, layer-specific attention weights, and \(\mathbf{W}^{k}\in\mathbb{R}^{d^{k-1}\times d^{k}}\) is a learnable, layer-specific weight matrix, where \(d^{k}\) is a hyper-parameter denoting the representation dimension in hidden layer \(k\). In our implementation, we use LeakyReLU with negative slope \(0.01\) as the activation function. The attention weights are learned through an MLP followed by a SoftMax layer. Specifically,

\[e_{u,u^{\prime}}^{k}=\texttt{MLP}(\mathbf{h}_{u}^{k-1}\parallel\mathbf{h}_{u^{\prime}}^{k-1}),\text{ where }e_{u,u^{\prime}}^{k}\in\mathbb{R},\qquad\alpha_{u,u^{\prime}}^{k}=\frac{exp\left(e_{u,u^{\prime}}^{k}\right)}{\sum_{\tilde{u}\in\mathcal{N}_{u}^{1}\cup\{u\}}exp\left(e_{u,\tilde{u}}^{k}\right)}\]

After \(K\) layers, the Gat outputs the final representation \(\mathcal{X}_{u}=\mathbf{h}_{u}^{K}\) for each node \(u\) in \(v\)'s neighborhood. Semantically, given the initial state representations \(\mathbf{x}_{u}^{t}\), the Gat enriches them further by merging them with topological information. Finally, the representation of an action \(a\in\mathcal{P}_{v}^{t}\) over the node pair \((u,v)\) is set to \(\mathbf{a}=\mathcal{X}_{u}\parallel\mathcal{X}_{v}\parallel t(u,v)\), where:

\[t(u,v)=\begin{cases}0,&a\in\mathcal{E}_{v,del}^{t}\text{ (recall Eq. 4)}\\ 1,&a\in\mathcal{E}_{v,add}^{t}\text{ (recall Eq. 5)}\end{cases} \tag{10}\]

**MLP and SoftMax layers:** The value of \(a\) is \(s_{a}=\texttt{MLP}(\mathbf{a})\), where \(s_{a}\in\mathbb{R}\). Finally, we obtain a distribution over all actions in \(\mathcal{P}_{v}^{t}\) as:

\[p_{a,v}^{t}=\Pi(a\mid\mathcal{S}_{v}^{t})=\frac{exp(s_{a})}{\sum_{\forall a^{\prime}\in\mathcal{P}_{v}^{t}}exp(s_{a^{\prime}})} \tag{11}\]

### Policy Loss Computation

We iteratively sample an action as per Eq. 11 until either the label flips or we exceed the maximum number of perturbations (a hyper-parameter). This iterative selection generates a trajectory of perturbations \(\mathcal{T}_{v}=\{a_{1},\cdots,a_{m}\}\). We use the standard loss for policy gradients on \(\mathcal{T}_{v}\) [28]. More specifically, we minimize the following loss function:

\[\mathcal{J}(\Pi)=-\frac{1}{|\mathcal{V}_{tr}|}\left(\sum_{\forall v\in\mathcal{V}_{tr}}\left(\sum_{t=0}^{|\mathcal{T}_{v}|}\log p_{a_{t},v}^{t}\,\mathcal{R}_{v}^{t}(a_{t})+\eta\,Ent(\mathcal{P}_{v}^{t})\right)\right) \tag{12}\]

Here, \(\mathcal{V}_{tr}\subseteq\mathcal{V}\) is the subset of nodes on which the RL policy is trained, and \(Ent(\mathcal{P}_{v}^{t})\) is the entropy of the current probability distribution over the action space:

\[Ent(\mathcal{P}_{v}^{t})=-\sum_{\forall a\in\mathcal{P}_{v}^{t}}p_{a,v}^{t}\log(p_{a,v}^{t}) \tag{13}\]

By adding the entropy to the loss, we encourage the RL agent to explore when there is high uncertainty. \(\eta\) is a hyper-parameter balancing the _explore-exploit_ trade-off. For simplicity of exposition, we omit the discussion of discounted rewards in Eq. 12; discounted rewards better capture the combinatorial relationships in the perturbation space. Refer to App. B for details.
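A minimal PyTorch sketch of the loss in Eqs. 12-13 might look as follows, assuming the per-node trajectories have already been collected; the variable names are ours, and discounting (App. B) is omitted:

```python
import torch

def policy_loss(log_probs_per_node, rewards_per_node, action_dists_per_node, eta=0.01):
    """REINFORCE-style loss of Eq. 12 with the entropy bonus of Eq. 13.

    log_probs_per_node[v][t]   : log p of the action taken at step t of node v's trajectory
    rewards_per_node[v][t]     : reward R_v^t(a_t) for that action
    action_dists_per_node[v][t]: probability vector over the action space P_v^t
    """
    total = 0.0
    for log_probs, rewards, dists in zip(log_probs_per_node,
                                         rewards_per_node,
                                         action_dists_per_node):
        traj = 0.0
        for log_p, r, dist in zip(log_probs, rewards, dists):
            entropy = -(dist * (dist + 1e-12).log()).sum()  # Eq. 13
            traj = traj + log_p * r + eta * entropy
        total = total + traj
    # Negated and averaged over the training nodes, as in Eq. 12.
    return -total / len(log_probs_per_node)
```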
### Training and Inference

Fig. 2 presents the training pipeline. Starting from the original graph, we compute the state representation at each iteration \(t\). The state is passed to the neural network to compute a distribution over the perturbation space. A perturbation is sampled from this distribution, and the graph is modified accordingly. The Gnn \(\Phi\) is then applied to the modified graph. If the label flips or the number of perturbations exceeds the maximum limit, we update the policy parameters; otherwise, we update the state and continue building the perturbation trajectory in the same manner. The pseudocode of the training pipeline is provided in Alg. 1 in the Appendix.

**Inductive inference:** We iteratively make forward passes until the label flips or we exceed the budget. The forward pass is identical to the training phase, with the only exception that we deterministically choose the perturbation with the highest likelihood instead of sampling.

**Transductive inference:** This phase proceeds identically to the training phase, with the only exception that we learn a target-node-specific policy instead of one that generalizes across all nodes.

**Complexity of InduCE:** The time complexity of the training phase is \(\mathcal{O}(|\mathcal{V}_{tr}|(|\mathcal{V}|+|\mathcal{E}|))\) and that of the test phase is \(\mathcal{O}(|\mathcal{V}_{test}|(|\mathcal{V}|+|\mathcal{E}|))\). Here, \(\mathcal{V}_{tr}\) and \(\mathcal{V}_{test}\) denote the sets of nodes in the train and test sets, respectively. The derivations are provided in App. E.

## 4 Experiments

In this section, we benchmark InduCE against established baselines. The code base and datasets used in our evaluation are available anonymously at [https://github.com/idea-iitd/InduCE.git](https://github.com/idea-iitd/InduCE.git). Details of the hardware and software platform are provided in App. F.

### Datasets

**Benchmark Datasets:** We use the same three benchmark graph datasets used in [21; 12; 13]. Statistics of these datasets are listed in Table 1. Each dataset has an undirected base graph with pre-defined motifs attached to random nodes of the base graph, and randomly added additional edges in the overall graph. The class label of a node indicates whether it is part of a motif or not. Further details on the datasets are provided in App. F.1.

**Real Datasets:** We additionally use real-world datasets from the Amazon-photos co-purchase network [19] and ogbn-arxiv [25]. In the Amazon dataset, each node corresponds to a product, edges correspond to products that are frequently co-purchased, node features encode bag-of-words from product reviews, and the node class label indicates the product category. The ogbn-arxiv dataset is a citation network. The nodes are all computer science arXiv papers indexed by MAG [25]; each directed edge represents that one paper cites another. The features are word embeddings of the title and the abstract computed by the skip-gram model [15], and the labels are subject areas. Since the class labels in these datasets are not based on the presence or absence of motifs, the corresponding cells in Table 1 are marked as "NA".

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & **Tree-Cycles** & **Tree-Grid** & **BA-Shapes** & **Amazon** & **ogbn-arxiv** \\ \hline \# Classes & 2 & 2 & 4 & 6 & 40 \\ \# Nodes & 871 & 1231 & 700 & 397 & 169,343 \\ \# Edges & 1950 & 3410 & 4100 & 2700 & 1,166,243 \\ Motif size (\# nodes) & 6 & 9 & 5 & NA & NA \\ Motif size (\# edges) & 6 & 12 & 6 & NA & NA \\ \# Nodes from motifs & 360 & 720 & 400 & NA & NA \\ Avg node degree & 2.23 & 2.77 & 5.86 & 15.90 & 6.89 \\ \hline \hline \end{tabular} \end{table} Table 1: The statistics of the benchmark datasets.

### Baselines

We benchmark InduCE against the state-of-the-art baselines of **(1)** CF-GnnExplainer [13], **(2)** CF\({}^{2}\) [21], and **(3)** Gem [12]. In addition, we also compare against the state-of-the-art factual explainer **(4)** PGExplainer, to show that when factual explainers are used for counterfactual reasoning by removing the factual explanation (subgraph) from the input graph, they are not effective.
This is consistent with prior reported literature [13; 21; 12]. Finally, we also compare against **(5)** Random perturbations. While CF\({}^{2}\) and CF-GnnExplainer are transductive, Gem and PGExplainer are inductive. The codebases of all algorithms have been obtained from the respective authors. We do not consider [34] and [4] since they are limited to graph classification. We omit GnnExplainer [32], since both CF\({}^{2}\) and CF-GnnExplainer outperformed GnnExplainer. Further, we do not study Bacciu et al. [3], since it uses internal representations of the black-box Gnn model to exploit domain-specific knowledge, whereas we focus on a domain-agnostic setting. Furthermore, unlike [3], we do not assume access to the embeddings of the black-box model. Hence, our algorithm is also applicable in situations where the internal details of the Gnn are hidden from the end-user for proprietary reasons.

#### 4.2.1 Performance measures

To quantify performance, we use the standard measures from the literature [13]:

* **Fidelity:** Fidelity is the percentage of nodes whose labels do not change when the edges produced by the explainer (algorithm) are perturbed. Lower fidelity is better. Furthermore, it may be argued that fidelity is the most important of these measures.
* **Size:** Explanation size is the number of edges perturbed for a given node. Lower size is better.
* **Accuracy:** Accuracy is the percentage of explanations that are correct. As is standard in CF\({}^{2}\), CF-GnnExplainer, and Gem, this translates to the percentage of edges in the counterfactual that belong to the motif. Since nodes have a non-zero class label only if they belong to a motif, the explanation for such nodes should consist of edges in the motif itself. Note that accuracy is computable only on the benchmark datasets, since they include ground-truth explanations.
* **Sparsity:** Sparsity is defined as the proportion of edges from \(\mathcal{N}_{v}^{\ell}\), i.e., the \(\ell\)-hop neighbourhood of the target node \(v\), that is retained in the counterfactual [35]; a value close to \(1\) is desired. Since sparsity is inversely correlated with size, we present the sparsity values of our experiments in App. G.

**Other settings:** Details of additional experimental settings regarding the counterfactual task, the black-box Gnn, training, and inference are given in App. F.3.

### Quantitative Results on Benchmark Datasets

**Transductive methods:** Table 2 presents the results (for now, we focus on the first four rows). Our method InduCE in the transductive setting outperforms all the baselines in almost all settings. For Tree-Cycles and BA-Shapes, CF-GnnExplainer produces better accuracy. However, we note that its fidelity is much worse, indicating that it fails to find an explanation more frequently. More generally, while CF-GnnExplainer consistently achieves the lowest size among the baselines, its fidelity is much worse.
This indicates that CF-GnnExplainer is able to solve only the easy cases, and hence the low size is deceptive, as it did not solve the difficult ones.

\begin{table} \begin{tabular}{l||c|c|c||c|c|c||c|c|c} \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c||}{**Tree-Cycles**} & \multicolumn{3}{c||}{**Tree-Grid**} & \multicolumn{3}{c}{**BA-Shapes**} \\ \cline{2-10} & **Fid.(\%)\(\downarrow\)** & **Size \(\downarrow\)** & **Acc.(\%)\(\uparrow\)** & **Fid.(\%)\(\downarrow\)** & **Size \(\downarrow\)** & **Acc.(\%)\(\uparrow\)** & **Fid.(\%)\(\downarrow\)** & **Size \(\downarrow\)** & **Acc.(\%)\(\uparrow\)** \\ \hline Random & **0** & 31.82\(\pm\)3.22 & 67.08 & **0** & 8.32\(\pm\)4.95 & 73.44 & **0** & 203.97\(\pm\)27.26 & 15.57 \\ CF-GnnEx & 49.0 & 1.05\(\pm\)0.23 & **100** & 10 & 1.37\(\pm\)0.58 & 92.24 & 37.0 & 1.31\(\pm\)0.55 & **95.83** \\ CF\({}^{2}\) & 76.38 & 4.18\(\pm\)8.99 & 67.68 & 98.45 & 5.5\(\pm\)1.45 & 44.61 & 23.68 & 4.10\(\pm\)6.64 & 70.54 \\ InduCE (transductive) & **0** & **1.01\(\pm\)0.12** & 96.61 & **0** & **1.02\(\pm\)0.12** & **97.67** & **0** & **1.30\(\pm\)0.90** & 95.31 \\ \hline CF-GnnEx++ & 100 & NULL & NULL & 100 & NULL & NULL & NULL & NULL & NULL \\ CF\({}^{2}\)++ & 13.89 & 28.34\(\pm\)7.56 & 19.24 & 28.68 & 12.90\(\pm\)7.71 & 27.44 & 100 & NULL & NULL \\ InduCE (transductive)\(--\) & **0** & 1.40\(\pm\)1.49 & 81.94 & **0** & 1.24\(\pm\)0.43 & 92.64 & 6.6 & 1.42\(\pm\)1.49 & 83.22 \\ \hline \end{tabular} \end{table} Table 2: **Results for transductive methods.** Lower fidelity, smaller size, and higher accuracy are desired. The best results are highlighted in bold. Fid. denotes fidelity and Acc. denotes accuracy.

**Inductive methods:** Table 3 shows that InduCE is superior to Gem and PGExplainer in most cases. The fidelity scores produced by Gem and PGExplainer are much higher (worse), indicating that in most cases Gem and PGExplainer are unable to find a counterfactual example. Also recall that the explanation size is fixed for Gem and PGExplainer, since they work with fixed budgets.

\begin{table} \begin{tabular}{l||c|c|c||c|c|c||c|c|c} \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c||}{**Tree-Cycles**} & \multicolumn{3}{c||}{**Tree-Grid**} & \multicolumn{3}{c}{**BA-Shapes**} \\ \cline{2-10} & **Fid.(\%)\(\downarrow\)** & **Size \(\downarrow\)** & **Acc.(\%)\(\uparrow\)** & **Fid.(\%)\(\downarrow\)** & **Size \(\downarrow\)** & **Acc.(\%)\(\uparrow\)** & **Fid.(\%)\(\downarrow\)** & **Size \(\downarrow\)** & **Acc.(\%)\(\uparrow\)** \\ \hline PGExplainer & 34.72 & 6 & 76.85 & 41.09 & 6 & 66.93 & 6.58 & 6 & 89.25 \\ Gem & 95 & 6 & 88.97 & 97 & 6 & **94.57** & 17 & 6 & **98.44** \\ InduCE (inductive) & **0** & **2.31\(\pm\)1.44** & **96.65** & **0** & **4.67\(\pm\)2.91** & 91.05 & **2.6** & **4.37\(\pm\)3.53** & 64.40 \\ \hline InduCE (inductive)\(--\) & 36.3 & **1.67\(\pm\)0.90** & 90.32 & 16.3 & 6.38\(\pm\)3.74 & 86.31 & 40.8 & **3.37\(\pm\)3.04** & 56.08 \\ \hline \end{tabular} \end{table} Table 3: **Results for inductive methods.** The best result in each category is highlighted in bold.

**Transductive vs Inductive:** We further compare the inductive version (Table 3) of our method InduCE with the transductive baselines (Table 2). While the transductive methods have a clear
As noted earlier, although CF-GnnExplainer achieves better size than InduCE-inductive, its fidelity is much worse indicating that the low size is a manifestation of not being able to explain the hard cases that InduCE is able to explain. Moreover, in addition to the ability to generalize to unseen nodes, inductive modeling also imparts a dramatic speed-up in generating explanations (see Table (a)a). **Impact of edge additions:** We seek answers to two key questions: **(1)** How much does the performance of InduCE deteriorate if we restrict edge additions? **(2)** If we empower the baselines also with additions, do they match up to InduCE? To answer the first question, we study the performance of InduCE in the setting where only edge deletions are allowed. The rows corresponding to InduCE (transductive)\(--\) and InduCE (inductive)\(--\) in Tables 2 and 3 present these results. It is evident that the deletion-only version produces inferior results for both the transductive and inductive versions. In Fig. 3, we further study the frequency distribution of edge additions and deletions in the counter-factual explanations produced by InduCE in Tree-Cycles dataset (results on other datasets are in App. I). We observe that additions dominate the perturbations, and thereby, further establishing its importance, which InduCE unleashes. To address the second question, we empower CF-GnnExplainer and CF\({}^{2}\) with edge additions, denoted as CF-GnnExplainer \(++\) and CF\({}^{2}++\) respectively. 1 Both CF\({}^{2}\) and CF-GnnExplainer use a _mask-based_ strategy. A mask is a learnable binary matrix of the same dimension as the \(\ell\)-hop neighborhood of the target node. By taking an element-wise product of the mask with the adjacency matrix, one obtains the edges to be deleted. When empowered with additions, the mask itself becomes the new adjacency matrix. Surprisingly, the performance of CF-GnnExplainer drops, while for CF\({}^{2}\), we see improvement in fidelity in two out of three datasets. Further investigation into this performance reveals that edge additions significantly increase the search space of possible perturbations (See Table H in Appendix). A mask-based strategy is a single-shot learnable paradigm that does not examine the marginal effect of each perturbation. When the perturbation space increases, it overwhelms the learning procedure. In contrast, InduCE uses reinforcement learning where a trajectory of perturbations is selected based on their marginal gains. This allows better modeling of the combinatorial nature of counter-factual reasoning. Footnote 1: Gem is not extendible to additions (See App. F.2 for details), PGExplainer does not incorporate perturbations with the intent of flipping the label since it is a factual explainer. Overall, the above experiments reveal that both additions, as well as an algorithm equipped to model large combinatorial spaces, are required to perform well. **Additional experiments:** App. H contains further empirical data on **(1)** the impact of heuristic features and **(2)** the choice of Gnn architecture in the MDP on performance. Experiments on counterfactual size vs accuracy trade-off are given in App. J. ### Quantitative Results on Real Datasets In Tables (a)a and (b)b, we present the results.Consistent with the performance on benchmark datasets, InduCE continues to outperform all the baselines almost in both transductive and inductive settings. We note that most of the baselines failed to produce counterfactuals in Amazon. 
On ogbn-arxiv, on the other hand, all baselines except PGExplainer fail to scale; they crash with an out-of-memory exception. In contrast, InduCE produces promising performance, with the transductive version achieving \(0\%\) fidelity.

Figure 3: The distributions of the edit size and their internal composition of edge additions and deletions by InduCE on the Tree-Cycles dataset.

### Efficiency

Table 6(a) presents the inference times of the various algorithms. First, the inductive methods (InduCE, PGExplainer, and Gem) are much faster than the others. Among the inductive methods, PGExplainer is the fastest; InduCE-inductive is slower since its search space is larger, as it accounts for both edge additions and deletions. Second, InduCE-inductive is up to \(79\) times faster than transductive methods such as CF-GnnExplainer and CF\({}^{2}\). This speed-up is a result of only doing forward passes through the neural policy network, whereas transductive methods learn the model parameters on each node separately. Even the transductive version of InduCE is faster than the other transductive methods on Tree-Cycles and Tree-Grid.

**Scalability against graph size:** Table 6(b) presents the inference time per node across all datasets. We observe that InduCE scales to million-sized networks such as ogbn-arxiv. We also observe that the growth of the running time is closely correlated with the neighbourhood density, i.e., the average degree of the graph, and not with the graph size: in a Gnn with \(\ell\) layers, only the \(\ell\)-hop neighborhood of the target node matters.

### Case Study: Counterfactual Visualization

In this section, we visually showcase how counterfactual explanations reveal vulnerabilities of Gnns and why edge additions are important.

**Revealing Gnn vulnerabilities:** A sample counterfactual explanation by the various algorithms on the Tree-Cycles dataset is provided in Fig. 4(a). The target node is part of a motif (6-cycle), and therefore the expected counterfactual explanation is to make it a non-member of a 6-cycle. CF-GnnExplainer correctly finds one such explanation by deleting an edge. Both Gem and CF\({}^{2}\) recommend a much larger explanation than necessary. In contrast, InduCE adds an edge. More interestingly, the target node continues to remain part of the motif. This uncovers a limitation of the Gnn, since it falsely classifies the target node as a non-motif node even though it remains part of the motif. Furthermore, this limitation is uncovered only because InduCE can add edges. Similar observations on the other datasets are available in the Appendix.

**Impact of additions:** In the top-left of Fig. 4(b), we share an example where InduCE flips the label of a target node (orange) by making it part of the 6-node cycle motif through an edge addition. Since the baseline strategies are only capable of deletions, they fail to flip the label of such nodes. Further, in the top-right (Tree-Grid) example, we see that InduCE breaks the grid motif and connects the target node (orange) to a non-motif neighbour, hence mixing its embedding with non-motif information and flipping its label. These examples showcase the importance of edge addition in intuitively explaining how the black-box Gnn works.

## 5 Conclusion

The ability to explain predictions is critical towards making a model trustworthy. In this work, we proposed InduCE to understand Gnns via counterfactual reasoning for the node classification task.
While several algorithms in the literature produce counterfactual explanations of Gnns, they suffer from restricted exploration of the counterfactual space and from transductivity. InduCE provides a boost to counterfactual analysis on Gnns by unleashing the power of edge additions and by inductively predicting explanations on unseen nodes. The proposed features not only lead to better explanations but also provide a significant speed-up, allowing InduCE to perform counterfactual analysis at scale.

**Limitations:** InduCE performs counterfactual reasoning by perturbing only the topological space. In the future, we will consider characterization of the node feature space and explore the joint combinatorial space of topology and features.

\begin{table} \end{table} Table 4: Results for (a) transductive and (b) inductive methods on the Amazon dataset. "NULL" denotes that the method could not produce a counterfactual.

\begin{table} \end{table} Table 5: Results for (a) transductive and (b) inductive methods on the ogbn-arxiv dataset. "DNS" denotes that the method did not scale and hence could not produce a counterfactual. Refer to App. K for details on why these baselines failed to scale on ogbn-arxiv.
2306.11586
Provably Powerful Graph Neural Networks for Directed Multigraphs
This paper analyses a set of simple adaptations that transform standard message-passing Graph Neural Networks (GNN) into provably powerful directed multigraph neural networks. The adaptations include multigraph port numbering, ego IDs, and reverse message passing. We prove that the combination of these theoretically enables the detection of any directed subgraph pattern. To validate the effectiveness of our proposed adaptations in practice, we conduct experiments on synthetic subgraph detection tasks, which demonstrate outstanding performance with almost perfect results. Moreover, we apply our proposed adaptations to two financial crime analysis tasks. We observe dramatic improvements in detecting money laundering transactions, improving the minority-class F1 score of a standard message-passing GNN by up to 30%, and closely matching or outperforming tree-based and GNN baselines. Similarly impressive results are observed on a real-world phishing detection dataset, boosting three standard GNNs' F1 scores by around 15% and outperforming all baselines.
Béni Egressy, Luc von Niederhäusern, Jovan Blanusa, Erik Altman, Roger Wattenhofer, Kubilay Atasu
2023-06-20T15:03:31Z
http://arxiv.org/abs/2306.11586v3
# Provably Powerful Graph Neural Networks for Directed Multigraphs

###### Abstract

This paper proposes a set of simple adaptations to transform standard message-passing Graph Neural Networks (GNNs) into provably powerful directed multigraph neural networks. The adaptations include multigraph port numbering, ego IDs, and reverse message passing. We prove that the combination of these theoretically enables the detection of any directed subgraph pattern. To validate the effectiveness of our proposed adaptations in practice, we conduct experiments on synthetic subgraph detection tasks, which demonstrate outstanding performance with almost perfect results. Moreover, we apply our proposed adaptations to two financial crime analysis tasks. We observe dramatic improvements in detecting money laundering transactions, improving the minority-class F1 score of a standard message-passing GNN by up to \(45\%\), and clearly outperforming tree-based and GNN baselines. Similarly impressive results are observed on a real-world phishing detection dataset, boosting a standard GNN's F1 score by over \(15\%\) and outperforming all baselines.

## 1 Introduction

Graph neural networks (GNNs) have become the go-to machine learning models for learning from relational data. They have been applied in various fields, ranging from biology, physics, and chemistry to social networks, traffic, and weather forecasting [8; 70; 15; 52; 61; 30; 67; 4]. More recently, there has been growing interest in using GNNs to identify financial crime [10; 28; 60; 59; 42]. The underlying motivating task for this paper is to detect financial crimes that manifest as subgraph patterns in transaction networks. For example, see Figure 1 for examples of established money laundering patterns, or Figure 5 in the appendix for an illustrative scenario. The task seems to lend itself nicely to the use of GNNs. Unfortunately, current GNNs are not equipped to deal with financial transaction networks effectively. Firstly, financial transaction networks are in fact directed multigraphs, i.e., edges (or transactions) are directed, and there can be multiple edges between two nodes (or accounts). Secondly, most GNNs cannot detect certain subgraph patterns, such as cycles [13; 14]. In fact, many efforts have been made to overcome this limitation [65; 26; 44; 66; 37; 49], all focusing on simple (undirected) graphs. But even on simple graphs, the problem is far from solved. Until very recently, for example, there was no linear-time permutation-equivariant GNN that could count 6-cycles with theoretical guarantees [26].

This paper addresses both of these issues. To the best of our knowledge, ours is the first GNN architecture designed specifically for directed multigraphs. We first prove that the proposed architecture can theoretically detect any subgraph pattern in directed multigraphs, and then empirically confirm that it can detect the patterns illustrated in Figure 1. Our proposed architecture is based on a set of simple adaptations that can transform any standard GNN architecture into a directed multigraph GNN. The adaptations are reverse message passing [27], port numbering [49], and ego IDs [65]. Although these individual building blocks are present in existing literature, the theoretical and empirical power of combining them has not been explored. In this work, we fill this gap: we combine them, adapt them to directed multigraphs, and showcase both the theoretical and empirical advantages of using them in unison.
**Our contributions.** 1) We propose a set of simple and intuitive adaptations that can transform message-passing GNNs into provably powerful directed multigraph neural networks. 2) We prove that suitably powerful GNNs equipped with ego IDs, port numbering, and reverse message passing can identify any directed subgraph pattern. 3) The theory is tested on synthetic graphs, confirming that GNNs using these adaptations can detect a variety of subgraph patterns, including directed cycles up to length six, scatter-gather patterns, and directed bicliques, setting them apart from previous GNN architectures. 4) The improvements translate to significant gains on financial datasets, with the adaptations boosting GNN performance dramatically on money laundering and phishing datasets, outperforming state-of-the-art financial crime detection models on both simulated and real data.

## 2 Related Work

Xu et al. [63] showed that standard MPNNs are at most as powerful as the Weisfeiler-Lehman (WL) isomorphism test, and provided a GNN architecture, GIN, that theoretically matches the power of the WL test. Although the WL test can asymptotically almost surely differentiate any two non-isomorphic graphs [2], standard MPNNs cannot detect simple substructures like cycles in certain graphs [13; 14]. This observation motivated researchers to go beyond standard MPNNs. One direction considers emulating the more powerful k-WL isomorphism test, by conducting message passing between k-tuples or by using a tensor-based model [39; 41]. Unfortunately, these models have high complexity and are impractical for most applications. Another line of work uses pre-calculated features to augment the GNN. These works explore adding subgraph counts [9; 3], positional node embeddings [17; 16], random IDs [1; 50], and node IDs [37]. A recent class of expressive GNNs, called Subgraph GNNs, models graphs as collections of subgraphs [19; 69]. Papp et al. [43] drop random nodes from the input graph and run the GNN multiple times, gathering more information with each run. Zhang and Li [66] instead extract subgraphs around each node and run the GNN on these. Also falling into this category is ID-GNN using ego IDs [65], where each node is sampled with its neighborhood and given an identifier to differentiate it from neighboring nodes. Although the authors claim that ID-GNNs can count cycles, the proof turns out to be incorrect. In fact, Huang et al. [26] show that the whole family of Subgraph GNNs cannot count cycles of length greater than 4, and propose \(\mathrm{I}^{2}\)-GNNs, which can provably count cycles up to length 6.

Figure 1: Money Laundering Patterns. The gray fill indicates the nodes to be detected by the synthetic pattern detection tasks. The exact degree/fan pattern sizes here are for illustrative purposes only.

There has been much less work on GNNs for directed graphs. Zhang et al. [68] propose a spectral network for directed graphs, but it is difficult to analyze the power of this network or to apply it to larger datasets. Similar approaches can be found in [55] and [38]. Jaume et al. [27] extend message passing to aggregate incoming and outgoing neighbors separately, rather than naively treating the graph as undirected. Directed multigraphs have not specifically been considered.

GNNs have been used for various financial applications [33; 18; 12; 67; 32; 62; 64]. Closest to our work, GNNs have been used for fraud detection. Liang et al. [34] and Rao et al.
[45] work on bipartite customer-product graphs to uncover insurance and credit card fraud, respectively. Liu et al. [35] use heterogeneous GNNs to detect malicious accounts in the device-activity bipartite graph of an online payment platform. Weber et al. [60] were the first to apply standard GNNs for anti-money laundering (AML), and more recently Cardoso et al. [10] proposed representing the transaction network as a bipartite account-transaction graph and showed promising results in the semi-supervised AML setting. However, it is not clear how these approaches help with detecting typical fraud patterns.

## 3 Background

### 3.1 Graphs and Financial Transaction Graphs

We will be working with directed multigraphs, \(G\), where the nodes \(v\in V(G)\) represent accounts, and the directed edges \(e=(u,v)\in E(G)\) represent transactions from \(u\) to \(v\). Each node \(u\) (optionally) has a set of associated account features \(h^{(0)}(u)\); this could include the account number, bank ID, and account balance. Each transaction \(e=(u,v)\) has a set of associated transaction features \(h^{(0)}_{(u,v)}\); this includes the amount, currency, and timestamp of the transaction. The incoming and outgoing neighbors of \(u\) are denoted by \(N_{in}(u)\) and \(N_{out}(u)\) respectively. Multiple transactions between the same two accounts are possible, making \(G\) a multigraph. In node (or edge) prediction tasks, each node (or edge) will have a binary label indicating whether the account (or transaction) is illicit.

**Financial Crime Patterns.** Figure 1 shows a selection of subgraph patterns indicative of money laundering [21; 23; 54; 59; 53]. Unfortunately, these are rather generic patterns and also appear extensively amongst perfectly innocent transactions. For example, a supermarket will have a very high fan-in, and a group of friends may produce a bipartite subgraph. As a result, detecting financial crime relies not just on detecting individual patterns, but also on learning relevant combinations. This makes neural networks promising candidates for the task. However, standard message-passing GNNs typically fail to detect the depicted patterns, with the exception of degree-in. In the next section, we describe architectural adaptations, which enable GNNs to detect each one of these patterns.

**Subgraph Detection.** Given a subgraph pattern \(H\), we define subgraph detection for nodes as the task of deciding for each node in a graph whether it is contained in a subgraph that is isomorphic to \(H\); i.e., given a node \(v\in V(G)\), decide whether there exists a graph \(G^{\prime}\), with \(E(G^{\prime})\subseteq E(G)\) and \(V(G^{\prime})\subseteq V(G)\), such that \(v\in V(G^{\prime})\) and \(G^{\prime}\cong H\).

### 3.2 Message Passing Neural Networks

Message-passing GNNs, commonly referred to as Message Passing Neural Networks (MPNNs), form the most prominent family of GNNs. They include GCN [31], GIN [63], GAT [56], GraphSAGE [22], and many more architectures. They work in three steps: 1) Each node sends a message with its current state \(h(v)\) to its neighbors, 2) Each node aggregates all the messages it receives from its neighbors in the embedding \(a(v)\), 3) Each node updates its state based on \(h(v)\) and \(a(v)\) to produce a new state. These 3 steps constitute a layer of the GNN, and they can be repeated to gather information from further and further reaches of the graph.
More formally:

\[a^{(t)}(v) =\textsc{Aggregate}\left(\{h^{(t-1)}(u)\mid u\in N(v)\}\right),\]
\[h^{(t)}(v) =\textsc{Update}\left(h^{(t-1)}(v),a^{(t)}(v)\right),\]

where \(\{\{.\}\}\) denotes a multiset, and Aggregate is a permutation-invariant function. We will shorten Aggregate to Agg and for readability, we will use \(\{.\}\) rather than \(\{\{.\}\}\) to indicate multisets. In the case of directed graphs, we need to distinguish between the incoming and outgoing neighbors of a node. In a standard MPNN, the messages are passed along the directed edges in the direction indicated. As such, the aggregation step only considers messages from incoming neighbors:

\[a^{(t)}(v)=\textsc{Agg}\left(\{h^{(t-1)}(u)\mid u\in N_{in}(v)\}\right),\]

where we now aggregate over the incoming neighbors, \(N_{in}(v)\). The edges of an input graph may also have their own input features. We denote the input features of directed edge \(e=(u,v)\) by \(h^{(0)}((u,v))\). When using edge features during the message passing, the aggregation step becomes:

\[a^{(t)}(v)=\textsc{Agg}\left(\{(h^{(t-1)}(u),h^{(0)}((u,v)))\mid u\in N_{in}(v)\}\right).\]

In the remainder, we omit edge features from formulas when not needed, in favor of brevity.

## 4 Methods

In this section, we introduce simple adaptations that can be made to standard MPNNs to enable the detection of the fraud patterns in Figure 1. We consider the adaptations in increasing order of complexity with regard to the patterns they help to detect. We provide theoretical proofs to motivate the adaptations and include results on the synthetic subgraph detection dataset in Section 7.1 to support the theoretical results empirically.

### 4.1 Reverse Message Passing

When using a standard MPNN with directed edges, a node does not receive any messages from outgoing neighbors (unless they happen to also be incoming neighbors), and so is unable to count its outgoing edges. For example, a standard MPNN is unable to distinguish nodes \(a\) and \(b\) in Figure 2. Further, note that naive bidirectional message passing, where edges are treated as undirected and messages travel in both directions, does not solve the problem, because a node cannot then distinguish incoming and outgoing edges. So this would fail to distinguish nodes \(a\) and \(d\) in the same figure. To overcome this issue, we need to indicate the direction of the edges in some way. We propose using a separate message-passing layer for the incoming and outgoing edges respectively, i.e., adding _reverse message passing_. Note that this is akin to using a relational GNN with two edge types [51]. More formally, the aggregation and update mechanisms become:

\[a_{in}^{(t)}(v) =\textsc{Agg}_{in}\left(\{h^{(t-1)}(u)\mid u\in N_{in}(v)\}\right),\]
\[a_{out}^{(t)}(v) =\textsc{Agg}_{out}\left(\{h^{(t-1)}(u)\mid u\in N_{out}(v)\}\right),\]
\[h^{(t)}(v) =\textsc{Update}\left(h^{(t-1)}(v),a_{in}^{(t)}(v),a_{out}^{(t)}(v)\right),\]

where \(a_{in}\) is now an aggregation of incoming neighbors and \(a_{out}\) of outgoing neighbors. We now prove that message-passing GNNs with reverse MP can solve degree-out.

**Proposition 4.1**.: _An MPNN with sum aggregation and reverse MP can solve degree-out._

Figure 2: Nodes (\(a\) and \(b\)) with different out-degrees are not distinguishable by a standard MPNN with directed message passing. Note that naive bidirectional message passing on the other hand is unable to distinguish nodes \(a\) and \(d\).
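To make the adaptation concrete, here is a minimal sketch of one possible reverse-MP layer in PyTorch (an illustrative example with sum aggregation and no edge features; the class and variable names are ours, not those of a released implementation):

```python
import torch
import torch.nn as nn

class ReverseMPLayer(nn.Module):
    """One layer with separate sum aggregation over incoming and
    outgoing neighbors, mirroring the equations above."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())

    def forward(self, h, edge_index):
        # h: (num_nodes, dim); edge_index: (2, E) directed edges (src, dst)
        src, dst = edge_index
        a_in = torch.zeros_like(h).index_add_(0, dst, h[src])   # messages along edge direction
        a_out = torch.zeros_like(h).index_add_(0, src, h[dst])  # reverse messages
        return self.update(torch.cat([h, a_in, a_out], dim=-1))
```

Using two distinct aggregations, rather than symmetrizing the graph, is what lets a node count its incoming and its outgoing edges separately.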
The proof of Proposition 4.1 can be found in Appendix B. In Section 7.1, we use a synthetic pattern detection task to confirm that the theory translates into practice.

### 4.2 Directed Multigraph Port Numbering

People often make multiple transactions to the same account. In transaction networks, these are represented as parallel edges. To detect fan-in (or fan-out) patterns, a model has to distinguish between edges from the same neighbor and edges from different neighbors. Using unique account numbers (or in general node IDs) would naturally allow for this. However, using account numbers does not generalize well. During training, a model can memorize fraudulent account numbers without learning to identify fraudulent patterns. But such an approach will not generalize to unseen accounts. Instead, we adapt port numbering [49] to directed multigraphs. Port numbering assigns IDs to each incident edge at a node. This allows a node to identify messages coming across the same edge in consecutive message-passing rounds. To adapt port numbering to directed multigraphs, we assign each directed edge an incoming and an outgoing port number, and edges coming from (or going to) the same node receive the same incoming (or outgoing) port number. Port numbers have been shown to increase the expressivity of GNNs on simple graphs, but message-passing GNNs with port numbers alone cannot even detect 3-cycles in some cases [20]. In general, the assignment of port numbers around a node is arbitrary. A node with \(d\) incoming neighbors can assign incoming port numbers in \(d!\) ways. In order to break this symmetry in our datasets, we use the transaction timestamps to order the incoming (or outgoing) neighbors. In the case of parallel edges, we use the earliest timestamp to decide the order of the neighbors. Since timestamps carry meaning in financial crime detection, the choice of ordering is motivated; indeed two identical subgraph patterns with different timestamps can have different meanings. Computing the port numbers in this way can be a time-intensive step, with runtime complexity dominated by sorting all the edges by their timestamps: \(\mathcal{O}(m\log m)\), where \(m=|E(G)|\). However, the port numbers can be calculated in advance for the train, validation, and test sets, so that the train and inference times remain unaffected. Each edge receives an incoming and an outgoing port number as additional edge features.

Figure 3: Nodes (in gray) with different fan-ins that are not distinguishable by a standard MPNN. The edge labels indicate incoming and outgoing port numbers, respectively.

Figure 3 shows an example of graphs with port numbers. We now prove that GNNs using port numbers can correctly identify fan-in and fan-out patterns. Note that the following proof, and later proofs using port numbers, do not rely on the timestamps for correctness. However, if timestamps that uniquely identify the ports are available, then permutation invariance/equivariance of the GNN will be preserved.

**Proposition 4.2**.: _An MPNN with max aggregation and multigraph port numbering can solve fan-in._

A proof can be found in Appendix B. Adding reverse MP, one can use the same argument to show that fan-out can also be solved.

**Proposition 4.3**.: _An MPNN with max aggregation, multigraph port numbering, and reverse MP can solve fan-out._

Both propositions are confirmed empirically in Section 7.1.
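As an illustration of the preprocessing step, the following self-contained sketch (ours, under the assumptions above) assigns incoming port numbers from timestamps; outgoing ports are obtained symmetrically by swapping the roles of source and destination:

```python
from collections import defaultdict

def incoming_ports(edges):
    """edges: list of (src, dst, timestamp) transactions. Edges from the
    same source to a given destination share one incoming port, and a
    node's in-neighbors are ordered by their earliest timestamp."""
    earliest = {}
    for src, dst, ts in edges:
        key = (dst, src)
        earliest[key] = min(ts, earliest.get(key, ts))
    in_neighbors = defaultdict(list)
    for (dst, src), ts in earliest.items():
        in_neighbors[dst].append((ts, src))
    port = {}
    for dst, neighbors in in_neighbors.items():
        for p, (_, src) in enumerate(sorted(neighbors)):
            port[(dst, src)] = p
    return [port[(dst, src)] for src, dst, _ in edges]
```

The per-node sorts add up to the \(\mathcal{O}(m\log m)\) cost noted above, and the result can be cached as edge features before training.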
### 4.3 Ego IDs

Although reverse MP and multigraph port numbering help with detecting some of the suspicious patterns in Figure 1, they are not sufficient to detect directed cycles, scatter-gather patterns, and directed bicliques. You et al. [65] introduced ego IDs specifically to help detect cycles in graphs. The idea is that by "marking" a "center" node with a distinct (binary) feature, this node can recognize when a sequence of messages cycles back around to it, thereby detecting cycles that it is part of. However, it turns out that the proof of Proposition 2 in their paper is incorrect, and ego IDs alone do not enable cycle detection. We give a counterexample in Figure 4, with more details below. Indeed, Huang et al. [26] also note that the proof "confuses walks with paths". As a counterexample, consider nodes \(u\) and \(u^{\prime}\) in Figure 4. They would receive the same label when using a standard MPNN with ego IDs (\(u\) and \(u^{\prime}\) being the center nodes), but \(u^{\prime}\) is part of several \(5\)-cycles and \(u\) is not. To see intuitively why, note first that the blue edges form two \(3\)-cycles on the one side and a \(6\)-cycle on the other, and that the 1-WL test cannot distinguish these alone. Now, when \(u\) and \(u^{\prime}\) are the center nodes, the only extra information is that \(u\) and \(u^{\prime}\) are marked, but since they are fully connected in their graphs, the mark does not help in distinguishing any of the other nodes, so the neighbors remain indistinguishable and indeed the two ego graphs remain indistinguishable. This means they could not both be labeled correctly in any task that relies on \(5\)-cycles. Note that the predictions for any of the other respective nodes (e.g., \(a\) and \(a^{\prime}\)) will not be identical, since when these are the center nodes with the ego ID, then the GNN can detect that the ones on the left are in \(3\)-cycles, whereas the ones on the right are not. We see this reflected in the individual results in Table 1. Although ego IDs offer a boost in detecting short cycles, they do not help the baseline (GIN) in detecting longer cycles. This can also be explained theoretically: assuming a graph has no loops (edges from a node to itself), walks of length two and three that return to the start node are also cycles since there is no possibility to repeat intermediate nodes. Therefore Proposition 2 from You et al. [65] applies in these cases and it is not surprising that GIN+EgoIDs can achieve impressive F1 scores for 2- and 3-cycle detection. However, in combination with reverse MP and port numbering, ego IDs can detect cycles, scatter-gather patterns, and bipartite subgraphs, completing the list of suspicious patterns. In fact, it can be shown that a suitably powerful standard MPNN with these adaptations can distinguish any two non-isomorphic (sub-)graphs, and given a _consistent_ use of port-numbering they will not mistakenly distinguish any two isomorphic (sub-)graphs. GNNs fulfilling these two properties are often referred to as _universal_. The crux of the proof is showing how the ego ID, port numbers, and reverse MP can be used to assign unique IDs to each node in the graph. Given unique node IDs, sufficiently powerful standard MPNNs are known to be universal [37, 1].
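Operationally, attaching an ego ID is a simple feature augmentation on each sampled neighborhood. A minimal sketch, assuming PyTorch Geometric's `k_hop_subgraph` utility (illustrative only; for directed graphs one may additionally expand the neighborhood along both edge directions):

```python
import torch
from torch_geometric.utils import k_hop_subgraph

def ego_network_with_id(center, num_hops, edge_index, x):
    """Extract the k-hop neighborhood of `center` and append a binary
    ego-ID column that marks the center node."""
    subset, sub_edge_index, mapping, _ = k_hop_subgraph(
        center, num_hops, edge_index, relabel_nodes=True)
    ego_id = torch.zeros(subset.size(0), 1)
    ego_id[mapping] = 1.0  # the center node receives the distinct marker
    return torch.cat([x[subset], ego_id], dim=-1), sub_edge_index
```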
**Theorem 4.4**.: _Ego IDs combined with port numbering and reverse MP can be used to assign unique node IDs in connected directed multigraphs._

The idea of the proof is to show how a GNN can replicate a labeling algorithm that assigns unique node IDs to each node in an ego node's neighborhood. The labeling algorithm as well as the full proof are provided in Appendix B. The universality of the adaptations now follows from this theorem.

**Corollary 4.4.1**.: _GIN with ego IDs, port numbering, and reverse MP can theoretically detect any directed subgraph patterns._

The proof follows from Theorem 4.4 and Corollary 3.1 from Loukas [37]. Similar statements can be made for simple undirected graphs. One can remove the reverse MP from the assumptions since this is only needed to make the proof work with directed edges. Therefore, equivalent statements for undirected graphs can also be made:

**Theorem 4.5**.: _Ego IDs combined with port numbering can be used to assign unique node IDs in connected undirected graphs._

**Corollary 4.5.1**.: _GIN with ego IDs and port numbering can theoretically detect any subgraph pattern in undirected graphs._

The ablation study in Table 1 of Section 7.1 again supports the theoretical analysis. The combination of the three adaptations achieves impressive scores when detecting subgraph patterns.

Figure 4: Example nodes (\(u\) and \(u^{\prime}\)) with different \(5\)-cycle counts that are not distinguishable by a standard MPNN with ego IDs. For example, \((u^{\prime},a^{\prime},b^{\prime},c^{\prime},d^{\prime},u^{\prime})\) is a \(5\)-cycle on the right, but there are no \(5\)-cycles involving \(u\) on the left. The example also works with undirected edges.

## 5 Datasets

**Synthetic Pattern Detection Tasks.** The AML subgraph patterns seen in Figure 1 are used to create a controllable testbed of synthetic pattern detection tasks. The key design principle is to ensure that the desired subgraph patterns appear randomly, rather than being inserted post hoc into a network. The problem with inserting patterns is that it skews the random distribution, and simple indicators (such as the degrees of nodes) can be enough to solve the task approximately. Consider the extreme case of generating a random \(3\)-regular graph and inserting a pattern. Additionally, if only inserted patterns are labeled, then randomly occurring patterns will be overlooked. To ensure that the desired subgraph patterns appear randomly, we introduce the _random circulant graph_ generator. Details of the generator and pseudocode can be found in Appendix C.1. The pattern detection tasks include degree-in/out (number of incoming/outgoing edges), fan-in/out (number of unique incoming/outgoing neighbors), scatter-gather and directed biclique patterns, and directed cycles of length up to six. Detailed descriptions can be found in Appendix C.2.

**Anti-Money Laundering (AML).** Given the strict privacy regulations around banking data, real-world financial crime data is not publicly available. Instead, we use simulated money laundering data from IBM, recently released on Kaggle [48]. The simulator behind these datasets generates a financial transaction network by modeling agents (banks, companies, and individuals) in a virtual world. The generator uses well-established money laundering patterns to add realistic money laundering transactions. We use two small and two medium-sized datasets, one of each size with a higher illicit ratio (HI) and a lower illicit ratio (LI).
The dataset sizes and illicit ratios can be seen in Table 4 in the appendix. We use a 60-20-20 temporal train-validation-test split, i.e., we split the transaction indices after ordering them by their timestamps. Details can be found in Appendix D.

**Ethereum Phishing Detection (ETH).** Since banks do not release their data, we turn to cryptocurrencies for a real-world dataset. We use an Ethereum transaction network published on Kaggle [11], where some nodes are labeled as phishing accounts. We use a temporal train-validation-test split, but this time splitting the nodes. We use a 65-15-20 split because the illicit accounts are skewed towards the end of the dataset. More details and dataset statistics can be found in Appendix D.

## 6 Experimental Setup

**Base GNNs and Baselines.** GIN with edge features [25] is used as the main GNN base model with our adaptations added on top. GAT [56] and PNA [57] are also used as base models, and we refer to their adapted versions as Multi-GAT and Multi-PNA, respectively. All three are also considered baselines. Additionally, GIN with ego IDs can be considered an ID-GNN [65] baseline, and GIN with port numbering can be considered a CPNGNN [49] baseline. Since AML is posed as a transaction classification problem, we also include a baseline using edge updates (GIN+EU). The edge updates are based on Battaglia et al. [5]. This approach is similar to replacing edges with nodes and running a GNN on said _line graph_. This architecture recently achieved state-of-the-art results in self-supervised money laundering detection [10]. We do not focus on including a more expansive range of GNN architectures as baselines, for the simple reason that without (the proposed) adaptations, they are not equipped to deal with directed multigraphs. As far as we are aware, there are no other GNNs that one could expect to achieve state-of-the-art results on directed multigraphs. In addition, we include a baseline representing the parallel line of work in financial crime detection that uses pre-calculated graph-based features (GFs) and tree-based classifiers to classify nodes or edges individually. We train a LightGBM [29] model on the individual edges (or nodes) using both the original raw features and additional GFs. This approach has produced state-of-the-art results in financial applications [60; 36]. Details of the model and the graph-mining algorithms can be found in Appendix E.2. Given the size of the AML and ETH datasets, we use neighborhood sampling [22] for all GNN-based models in these experiments. Further details of the experimental setup for the different datasets can be found in Appendix E.

**Scoring.** Since we have very imbalanced datasets, accuracy and many other popular metrics are not good measures of model performance. Instead, we use the minority class F1 score, computed as sketched below. This aligns well with what banks and regulators use in real-world scenarios.
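With the illicit class encoded as 1, this is simply the per-class F1 restricted to the minority label, e.g. with scikit-learn:

```python
from sklearn.metrics import f1_score

def minority_f1(y_true, y_pred, minority_label=1):
    """Minority class F1: harmonic mean of precision and recall,
    computed on the illicit (minority) class only."""
    return f1_score(y_true, y_pred, pos_label=minority_label)
```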
## 7 Results

### 7.1 Synthetic Pattern Detection Results

The synthetic pattern detection results can be seen in Table 1. The degree-out results reveal that the standard message-passing GNNs are unable to solve the degree-out task, achieving F1 scores below \(44\%\). However, all the GNNs that are equipped with reverse MP score above \(98\%\), thus supporting Proposition 4.1. The next column shows that port numbering is the critical adaptation in solving fan-in, though the F1 score is quite high even for the baseline GIN. On the other hand, for the fan-out task, the combination of reverse MP and port numbering is needed to score above \(99\%\). Again, these results support Propositions 4.2 and 4.3. The ablation study of cumulative adaptations on top of GIN also supports Corollary 4.4.1: the combination of reverse MP, port numbering, and ego IDs scores high on all of the subtasks, with only 6-cycle detection coming in below \(90\%\). We see similar results when using other base GNN models, with PNA+Adaptations achieving the best overall results. Moreover, on the more complex tasks -- directed cycle, scatter-gather, and biclique detection -- the combination of the three is what leads to the first significant improvement in F1 scores. In the most extreme case, scatter-gather detection, the minority class F1 score jumps from \(67.84\%\) with only reverse MP and port numbers to \(97.42\%\) with ego IDs added in. No adaptation alone is able to come close to this score, so it is clear that the combination is needed here. Similar jumps can be seen for directed 4-, 5-, and 6-cycle, and biclique detection. Increasing the dataset size and restricting the task to only the "complex" subtasks further increases the scores, with 6-cycle detection reaching above \(97\%\). More details can be found in Appendix F.1.

### 7.2 AML Results

The results on the AML datasets can be seen in Table 2. For AML Small HI, we see that our adaptations boost the minority class F1 score of GIN from \(39.3\%\) to above \(84\%\), a gain of \(45\%\). The largest single improvement is brought by reverse MP, taking the F1 score from \(39.3\%\) up to \(74.8\%\). The final adaptation -- ego IDs -- does not seem to make a difference here. The results for the other AML datasets show a similar trend with overall gains of \(34\%\), \(41\%\), and \(18\%\) when using GIN, again with diminishing returns as more adaptations are added. The greatest individual gains come from reverse MP, but all three adaptations are needed to achieve the highest scores. The two rows corresponding to adding port numbers -- GIN+Ports and +Ports -- indicate clear gains from using port numbering both as an individual adaptation and on top of reverse MP. The support for ego IDs is less clear, with ego IDs improving GIN performance only on the larger datasets, and not impacting the scores significantly when added on top of reverse MP and port numbering. This behavior is in fact not so surprising given that the purpose of ego IDs is to mark/distinguish the center from its neighborhood. The same goal can also be achieved by suitably varied input features. When no input features are used, the effect of ego IDs is more pronounced. See Figure 9 in the appendix.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c}
\hline \hline
Model & deg-in & deg-out & fan-in & fan-out & C2 & C3 & C4 & C5 & C6 & S-G & B-C \\
\hline
GIN [63, 25] & 02.77\(\pm\) & 43.58 & 0.59\(\pm\)0.17 & 35.91 & 34.67 & 58.00 & 50.80 & 43.12 & 48.59 & 69.31 & 63.12 \\
GAT [56] & 10.33 & 10.53 & 9.69 & 0.00 & 0.00 & 0.00 & 25.86 & 0.00 & 0.00 & 0.00 & 0.00 \\
PNA [57] & 02.98\(\pm\) & 43.02 & 0.59\(\pm\)0.13 & 38.93 & 25.77 & 54.75 & 51.92 & 48.79 & 48.40 & 65.88 & 65.51 \\
GIN+EU [5, 10] & 02.50 & 42.74 & 58.01 & 39.13 & 32.58 & 55.91 & 54.65 & 47.62 & 49.68 & 68.54 & 64.64 \\
\hline
GIN+EgoIDs [65] & 02.78\(\pm\) & 51.48 & 05.08 & 49.24 & 23.14 & 57.67 & 53.12 & 44.37 & 45.42 & 66.44 & 63.90 \\
GIN+Ports [49] & 02.67 & 45.00 & 02.80 & 41.51 & 27.79 & 56.11 & 42.68 & 41.11 & 44.99 & 67.99 & 65.76 \\
\hline
GIN+Reverse MP [27] & 03.87\(\pm\) & 00.08 & 01.00 & 02.88 & 35.96 & 63.85 & 69.09 & 67.44 & 71.23 & 65.83 & 66.18 \\
+Ports & 03.51 & 03.83 & 03.81 & 03.10 & 39.15 & 03.58 & 69.00 & 70.35 & 75.04 & 67.84 & 65.78 \\
+EgoIDs (Multi-GIN) & 09.38\(\pm\) & 09.09 & 09.09 & 09.53 & 08.97 & 09.87 & 09.46 & 01.71 & 05.23 & 07.94 & 09.18 \\
\hline
Multi-GAT & 08.05 & 08.26 & 09.28 & 09.38 & 08.03 & 08.93 & 08.50 & 08.52 & 31.31 & 08.05 & 08.02 \\
Multi-PNA & 09.64 & 09.28 & 09.38 & 09.41 & 59.41 & 09.44 & 09.49 & 09.46 & 03.75 & 09.07 & 08.70 \\
Multi-GIN+EU & 09.45 & 09.28 & 09.76 & 09.07 & 09.67 & 09.67 & 09.71 & 08.78 & 05.78 & 08.53 & 09.61 & 07.86 \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Minority class F1 scores (\(\%\)) for the synthetic subgraph detection tasks. First from the top are the standard MPNN baselines; then the results with each adaptation added separately on top of GIN; followed by GIN with the adaptations added cumulatively; and finally, results for the other GNN baselines with the three adaptations (Multi-GNNs). The C\(k\) abbreviations stand for directed \(k\)-cycle detection, S-G stands for scatter-gather and B-C stands for biclique detection. We report minority class F1 scores averaged over five runs. We omit standard deviations in favor of readability.

We further note that GIN with the adaptations clearly outperforms all of the baselines for three out of the four AML datasets. This is particularly impressive when compared with LightGBM+GFs since the graph-based features align perfectly with the illicit money laundering patterns used by the simulator. Moreover, this has been the state-of-the-art method in previous financial applications [60; 36]. For AML Medium LI, the PNA score is almost as high as GIN+Adaptations, but it can be seen further down the table that the adaptations also boost PNA performance significantly, leading to the highest score for the dataset. Finally, the adaptations were tested with several base models in addition to GIN, namely GAT, PNA, and GIN with edge updates. In each case, and across almost all AML datasets, we see clear gains from using the adaptations, although the gains on AML Medium LI are less pronounced. These gains underline the effectiveness and versatility of the approach. To test the reliance of the models on the input features (edge features) versus on the graph topology, additional experiments were run with all the input features removed. Noteworthy is that the performance of GIN with the adaptations when using no features is close to the performance when all the features are provided. This means that the adapted model is able to use the graph topology effectively to identify illicit transactions.
Please see Appendix F.2 for details. For training times and inference throughput rate of GIN with the different adaptations, please see Table 5 in the appendix. Notably, with the adaptations, the inference rate of GIN is above 18k transactions per second on a single GPU. ### ETH Results Finally, we test our adaptations on a real-world financial crime dataset -- Ethereum phishing account classification, the results of which are given in Table 2. Similar to the AML datasets, we see a continuous improvement in final scores as we add more adaptations. In total, the minority class F1 score jumps from \(26.9\%\) without adaptations to \(42.9\%\) with reverse MP, port numbering, and ego IDs. Again, the largest single improvement is due to the reverse MP. In this case, GIN+Adaptations does not outperform all of the baselines (PNA and LightGBM+GFs score higher), but the adaptations also significantly boost PNA performance, thereby beating all the baselines by more than \(13\%\). ## 8 Conclusion This work proposes a set of simple adaptations to transform a standard message-passing GNN into a provably powerful directed multigraph learner. We make three contributions to the area of graph neural networks. Firstly, the theoretical analysis fills a gap in the literature about the power of combining different GNN adaptations/augmentations. In particular, we prove that ego IDs combined with port numbering and reverse message passing enable a suitably powerful message-passing GNN (e.g., GIN) to calculate unique node IDs and therefore detect any directed subgraph patterns. Secondly, the theoretical results are confirmed in practice with a range of synthetic subgraph detection tasks. The empirical results align closely with the theory, showing that the combination of all three adaptations is needed to detect the more complex subgraphs. 
Lastly, we show how our adaptations can be applied to two important financial crime problems: detecting money laundering transactions and detecting phishing accounts. GNNs with our adaptations achieve impressive results in both tasks, clearly outperforming relevant baselines. Reverse message passing and port numbering again prove crucial in reaching the highest scores; however, we find that with sufficiently varied input features, the use of ego IDs does not provide further advantages.

\begin{table}
\begin{tabular}{l|c c c c c}
\hline \hline
Model & AML Small HI & AML Small LI & AML Medium HI & AML Medium LI & ETH \\
\hline
MLP & \(24.11\pm 1.47\) & \(14.59\pm 2.08\) & \(28.16\pm 6.97\) & \(7.95\pm 1.11\) & \(0.10\pm 0.00\) \\
LightGBM+GFs [29; 47] & \(16.91\pm 10.87\) & \(30.01\pm 0.31\) & \(610.98\pm 10.58\) & \(30.20\pm 0.58\) & \(83.20\pm 0.10\) \\
GIN [63; 25] & \(39.31\pm 4.86\) & \(33.09\pm 3.86\) & \(38.48\pm 2.54\) & \(22.04\pm 3.31\) & \(26.92\pm 7.52\) \\
GAT [56] & \(36.17\pm 7.74\) & \(25.67\pm 7.07\) & \(15.66\pm 20.22\) & \(4.90\pm 7.52\) & \(15.94\pm 3.89\) \\
PNA [57] & \(61.87\pm 42.05\) & \(38.29\pm 6.07\) & \(68.96\pm 40.78\) & \(38.02\pm 8.89\) & \(51.49\pm 42.69\) \\
GIN+EU [5; 10] & \(62.51\pm 2.01\) & \(39.27\pm 5.34\) & \(10.77\pm 10.20\) & \(29.28\pm 7.70\) & \(33.92\pm 7.34\) \\
\hline
GIN+EgoIDs [65] & \(36.82\pm 2.03\) & \(31.77\pm 3.38\) & \(48.90\pm 2.42\) & \(23.33\pm 3.40\) & \(26.01\pm 2.27\) \\
GIN+Ports [49] & \(49.43\pm 7.22\) & \(41.43\pm 15.66\) & \(69.00\pm 10.92\) & \(23.42\pm 3.14\) & \(32.96\pm 0.25\) \\
\hline
GIN+Reverse MP [27] & \(74.78\pm 14.00\) & \(37.98\pm 10.00\) & \(60.80\pm 10.00\) & \(35.67\pm 2.79\) & \(36.86\pm 8.12\) \\
+Ports & \(89.00\pm 2.08\) & \(62.50\pm 9.18\) & \(70.18\pm 10.00\) & \(39.20\pm 3.60\) & \(42.51\pm 7.16\) \\
+EgoIDs (Multi-GIN) & \(84.00\pm 2.08\) & \(92.14\pm 9.50\) & \(90.78\pm 11.02\) & \(40.55\pm 2.08\) & \(42.86\pm 2.53\) \\
\hline
Multi-GAT & \(46.79\pm 9.26\) & \(27.02\pm 2.38\) & \(30.27\pm 10.57\) & \(31.5\pm 1.04\) & \(46.36\pm 0.74\) \\
Multi-PNA & \(29.83\pm 11.08\) & \(58.66\pm 10.88\) & \(89.92\pm 1.54\) & \(44.07\pm 5.86\) & \(65.92\pm 5.30\) \\
Multi-GIN+EU & \(51.68\pm 5.68\) & \(59.00\pm 0.076\) & \(78.00\pm 1.50\) & \(29.64\pm 3.60\) & \(48.37\pm 6.62\) \\
\hline \hline
\end{tabular}
\end{table}

Table 2: Minority class F1 scores (\(\%\)) for the AML and ETH tasks. HI indicates a higher illicit ratio and LI indicates a lower illicit ratio. The models are organized as in Table 1.

Although this work has focused on financial crime applications, our theoretical and practical results have wider relevance. Immediate future work could involve exploring applications of our methods to other directed multigraph problems, for example, in cybersecurity. Additional future work could explore the relationship between the computational complexity of different subgraph detection problems and the GNN learning and inference complexity.

## Acknowledgments and Disclosure of Funding

The support of the Swiss National Science Foundation (project number 172610) for this work is gratefully acknowledged.
2309.03061
Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks
Bayesian inference for neural networks, or Bayesian deep learning, has the potential to provide well-calibrated predictions with quantified uncertainty and robustness. However, the main hurdle for Bayesian deep learning is its computational complexity due to the high dimensionality of the parameter space. In this work, we propose a novel scheme that addresses this limitation by constructing a low-dimensional subspace of the neural network parameters, referred to as an active subspace, by identifying the parameter directions that have the most significant influence on the output of the neural network. We demonstrate that the significantly reduced active subspace enables effective and scalable Bayesian inference via either Monte Carlo (MC) sampling methods, otherwise computationally intractable, or variational inference. Empirically, our approach provides reliable predictions with robust uncertainty estimates for various regression tasks.
Sanket Jantre, Nathan M. Urban, Xiaoning Qian, Byung-Jun Yoon
2023-09-06T15:00:36Z
http://arxiv.org/abs/2309.03061v1
# Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks

###### Abstract

Bayesian inference for neural networks, or Bayesian deep learning, has the potential to provide well-calibrated predictions with quantified uncertainty and robustness. However, the main hurdle for Bayesian deep learning is its computational complexity due to the high dimensionality of the parameter space. In this work, we propose a novel scheme that addresses this limitation by constructing a low-dimensional subspace of the neural network parameters, referred to as an _active subspace_, by identifying the parameter directions that have the most significant influence on the output of the neural network. We demonstrate that the significantly reduced active subspace enables effective and scalable Bayesian inference via either Monte Carlo (MC) sampling methods, otherwise computationally intractable, or variational inference. Empirically, our approach provides reliable predictions with robust uncertainty estimates for various regression tasks.

Sanket Jantre\({}^{\star}\) Nathan M. Urban\({}^{\star}\) Xiaoning Qian\({}^{\star\dagger}\) Byung-Jun Yoon\({}^{\star\dagger}\)

\({}^{\star}\) Computational Science Initiative, Brookhaven National Laboratory, Upton, NY

\({}^{\dagger}\) Department of Electrical & Computer Engineering, Texas A&M University, College Station, TX

Keywords: Active subspace, Bayesian deep learning, subspace inference, uncertainty quantification (UQ)

## 1 Introduction

Neural networks (NN) are highly flexible and good approximators of complex functions at the expense of overparameterization, where the number of learnable parameters far exceeds the number of available training samples. This increases computational demands and elevates the risk of overfitting [1], where the model starts to memorize noise rather than capture meaningful patterns. To avoid overconfidence and miscalibration, uncertainty quantification (UQ) in neural network predictions is crucial, especially in fields involving critical decision-making, such as clinical diagnostics or autonomous driving [2]. Addressing these challenges, Bayesian modeling presents a principled way to quantify the model prediction uncertainty [3]. Moreover, the Bayesian framework demonstrates heightened resilience against noise and adversarial perturbations due to its inherent probabilistic predictive capabilities [4]. To this effect, the confluence of deep neural networks and Bayesian inference in the form of Bayesian neural networks (BNN) has significantly advanced probabilistic machine learning. However, exact posterior inference is intractable in neural networks due to the extremely high dimensionality. Instead, approximation methods, such as mean-field variational inference [5], provide a computationally feasible way to perform posterior inference in BNNs. Nonetheless, this approximation severely limits the expressiveness of the inferred posterior, ultimately degrading the quality of the uncertainty estimates [6]. To this end, Bayesian inference over a low-dimensional subspace of neural network weights offers an elegant solution for improving accuracy, robustness, and uncertainty quantification [7]. Alternatively, active subspace methods [8] developed for inverse problems involving computer models perform dimension reduction by constructing a linear subspace of inputs with directions accounting for most variation in the function's output. In this paper, we demonstrate the utility of active subspace inference for Bayesian deep learning. Accordingly, we identify a low-dimensional subspace embedded in a high-dimensional parameter space to capture most of the variability in the neural network output.
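In the classical formulation from the computer-model UQ literature, such directions are the dominant eigenvectors of the expected outer product of output gradients. The following is a minimal sketch of that generic construction (illustrative only; it is not the specific AS/LIS estimators developed in this paper, and for large networks the covariance would be handled with matrix-free eigensolvers rather than formed explicitly):

```python
import torch

def active_subspace_basis(grads, k):
    """grads: (n, d) rows of output gradients with respect to the d
    parameters at n samples. Returns the top-k eigenvectors of the
    empirical gradient covariance, i.e., the active directions."""
    C = grads.T @ grads / grads.shape[0]
    eigvals, eigvecs = torch.linalg.eigh(C)  # eigenvalues in ascending order
    return eigvecs[:, -k:]                   # directions of largest output variability
```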
Our main contributions:

* We propose two active subspace methods: output-informed (**AS**) and likelihood-informed (**LIS**) for scalable Bayesian inference in deep learning. The compact subspace facilitates the use of otherwise intractable posterior approximation methods.
* We empirically demonstrate the superior uncertainty quantification offered by our approach compared to Bayesian inference over the full network and an existing subspace inference method.

**Related work.** The framework for interpretable inference through _effective dimensionality_ of the parameter space was provided by [9] in the context of BNNs. [10] demonstrated that near local optima, numerous directions in parameter space have low impact on neural network predictions. [11] performed inference over a low-dimensional subspace spanned by the SGD iterates of NN weights. [7] further substantiated that posterior inference with a compact subspace of the entire parameter space is as effective as with the full network. Alternatively, active subspace finds its roots in the computer model UQ literature [12], where dimension reduction is performed on the high-dimensional input space using model outputs. The identified active subspace is then exploited for cheap approximate modeling of the computationally expensive simulators [13, 14]. The combination of active subspace meth
2304.02383
How good Neural Networks interpretation methods really are? A quantitative benchmark
Saliency Maps (SMs) have been extensively used to interpret deep learning models' decisions by highlighting the features deemed relevant by the model. They are used on highly nonlinear problems, where linear feature selection (FS) methods fail at highlighting relevant explanatory variables. However, the reliability of gradient-based feature attribution methods such as SMs has mostly been only qualitatively (visually) assessed, and quantitative benchmarks are currently missing, partially due to the lack of a definite ground truth on image data. Concerned about the apophenic biases introduced by visual assessment of these methods, in this paper we propose a synthetic quantitative benchmark for Neural Networks (NNs) interpretation methods. For this purpose, we built synthetic datasets with nonlinearly separable classes and an increasing number of decoy (random) features, illustrating the challenge of FS in high-dimensional settings. We also compare these methods to conventional approaches such as mRMR or Random Forests. Our results show that our simple synthetic datasets are sufficient to challenge most of the benchmarked methods. TreeSHAP, mRMR and LassoNet are the best-performing FS methods. We also show that, when quantifying the relevance of a few nonlinearly entangled predictive features diluted in a large number of irrelevant noisy variables, neural network-based FS and interpretation methods are still far from being reliable.
Antoine Passemiers, Pietro Folco, Daniele Raimondi, Giovanni Birolo, Yves Moreau, Piero Fariselli
2023-04-05T11:47:27Z
http://arxiv.org/abs/2304.02383v1
# How good Neural Networks interpretation methods really are? A quantitative benchmark.

###### Abstract

Saliency Maps (SMs) have been extensively used to interpret deep learning models' decisions by highlighting the features deemed relevant by the model. They are used on highly nonlinear problems, where linear feature selection (FS) methods fail at highlighting relevant explanatory variables. However, the reliability of gradient-based feature attribution methods such as SMs has mostly been only qualitatively (visually) assessed, and quantitative benchmarks are currently missing, partially due to the lack of a definite ground truth on image data. Concerned about the apophenic biases introduced by visual assessment of these methods, in this paper we propose a synthetic quantitative benchmark for Neural Networks (NNs) interpretation methods. For this purpose, we built synthetic datasets with nonlinearly separable classes and an increasing number of _decoy_ (random) features, illustrating the challenge of FS in high-dimensional settings. We also compare these methods to conventional approaches such as mRMR or Random Forests. Our results show that our simple synthetic datasets are sufficient to challenge most of the benchmarked methods. TreeSHAP, mRMR and LassoNet are the best-performing FS methods. We also show that, when quantifying the relevance of a few nonlinearly entangled predictive features diluted in a large number of irrelevant noisy variables, neural network-based FS and interpretation methods are still far from being reliable.

## 1 Introduction

Decision processes are different in machines and humans. The training of Neural Networks (NNs) is solely guided by the minimization of a loss function, which incidentally and only partially aligns with human understanding of the problem, since the optimization of the loss function relies on the labeling of training samples rather than on articulate expert insights, eventually making these models diverge from human expectations. An extreme example of such divergence is that of _Clever Hans_ predictors, which exclusively rely on dataset artifacts [1]. More often, their decision processes are influenced by contextual features which are not necessarily causal, and which thus act as confounders (e.g. presence of wooden hurdles in horse pictures). Therefore, _Clever Hans_ predictors can appear proficient on training data but remain prone to potentially high generalization error in unseen settings or on independent test data (e.g. absence of wooden hurdles in horse pictures). For these reasons, interpretability has become a crucial aspect of Machine Learning (ML), and solutions for Explainable Artificial Intelligence (XAI) [1] are now also ethical requirements posed by institutions like the European Union [2]. In many _real-life_ applications, a clear understanding of the model decision-making process is indeed necessary for its adoption in a production environment. Interpretability is even more relevant in the context of nonlinear techniques with strong prediction power and modeling capabilities such as Deep Neural Networks (DNN) [1], since linear models are notoriously easier to explain and can therefore be complemented with simple but sound and effective FS methods (LASSO [3], shrunken centroid method [4]) that exploit the additivity of input variables [5]. Due to the intrinsic human-readability properties of simpler models (i.e. linear models, decision trees), they are often preferred in critical settings.
On the other hand, complex nonlinear models such as DNNs exploit non-trivial interactions between input features, making them a model-of-choice for solving difficult tasks that require higher abstraction (e.g. genome interpretation, protein folding and human-level real-time control [6; 7; 8; 9]). In theory, DNNs can ignore irrelevant features during the training phase, but this depends on the optimizer and the loss function landscape, which can be highly non-convex. In recent years, gradient-based _a posteriori_ interpretation methods for DNNs such as Saliency Maps (SMs) have rapidly gained popularity [10]. The most common approaches include Integrated Gradients [11], DeepLift [12], Input \(\times\) Gradient [13], SmoothGrad [14] and Guided Backpropagation [15]. They can be easily applied to any NN model with libraries such as Captum [16]. These methods have been developed mainly in the context of computer vision, and their ability to identify the _salient_ pixels for the classification of images was primarily qualitatively assessed, by visually inspecting the obtained SMs in relation to the input images. Indeed, there was no _ground truth_ for what should be considered salient, since this concept is intrinsically model-dependent. To the best of our knowledge, quantitative benchmarks of the reliability of SM interpretations are currently missing, especially on non-image data. At the same time, several studies showing puzzling SM interpretation results and perplexing behaviors have already been published, raising concerns about these methods [17; 18; 19]. The interpretation of non-linear models is an extremely active field of research and, besides _a posteriori_ interpretation methods such as SMs, in recent years several DNN architectures incorporating FS capabilities have been developed, such as CancelOut [20], DeepPINK [21], LassoNet [22], FSNet [23], Concrete Autoencoder [24] and Diet-Net [25]. In this paper, we benchmarked the most recent nonlinear FS and interpretation methods for DNNs on synthetic, simplistic, yet challenging artificial datasets, comparing them with older and more conventional FS approaches. Each of our datasets consists of a few (2-7) variables that jointly and nonlinearly correlate with the output class, as well as a variable number of irrelevant random features (decoys). To allow fair assessment, both relevant and irrelevant variables have been sampled with the exact same variance: each feature follows a uniform marginal distribution in 4 out of 5 of our datasets. In our fifth dataset, features have been simply standardized. In this way, methods that exploit variance signatures, such as Principal Component Analysis, are not applicable. These datasets have been constructed in such a way that the classes cannot be segregated by linear decision boundaries, making linear FS methods totally unsuitable for the problem at hand. The synthetic nature of the datasets grants us complete knowledge of the predictive signal and allows us to quantitatively benchmark the ability of nonlinear FS methods to detect nonlinearly and jointly relevant features in controlled sample-to-feature ratio experimental settings. As baseline methods, we used additional FS methods that are not based on NNs, like Random Forests (RFs) or minimum redundancy maximum relevance (mRMR). Our results show that the DNN-based FS methods tested are not able to extract relevant features if they are diluted in a random set of noisy variables. Conversely, feature relevances extracted from RFs are more reliable on average.
We obtained similar results while benchmarking several SM _a posteriori_ interpretation methods. These results indicate that the field of FS and DNN interpretation needs to further refine the available methods, and suggest that a standardized _quantitative validation_ of the newly proposed methods should be used to assess their performance and limits, to give users a more realistic idea of the situations in which these approaches are actually reliable.

## 2 Methods

### 2.1 Benchmark datasets

#### 2.1.1 Uniformly-distributed variables

We first built four datasets representing archetypal nonlinear binary classification tasks. Each dataset contains \(n=1000\) observations and \(m=p+k\) features, uniformly distributed in the \([0,1]\) interval. \(p\) and \(k\) denote the number of predictive and irrelevant features, respectively. Each dataset was then built by attributing a label to data points according to a nonlinear combination of the predictive features. All the remaining features are effectively random with respect to the labels, and thus act as decoys when it comes to feature selection. The number of positives is equal to the number of negatives, to prevent any artifact due to class imbalance. Here we describe the different characteristics of each outcome (label). They are also visually shown in Fig. 1.

* RING: positive labels are associated to the points that form a bi-dimensional ring, defined by the features in positions \(j\in\{0,1\}\). The total number of predictive features is 2. Points were assigned to the positive class when: \[|\sqrt{(x_{0}-0.5)^{2}+(x_{1}-0.5)^{2}}-0.35|\leq 0.1151\]
* XOR: the bi-dimensional space formed by the features in positions \(j\in\{0,1\}\) is divided into 4 identical regions. Points in the upper left and lower right quadrants are labelled as positive samples. The total number of predictive features is 2. Points were considered positive when: \[(x_{0}-0.5)(0.5-x_{1})\geq 0\]
* RING+XOR: the samples that are either positive samples in the RING dataset (considering the features in positions \(j\in\{0,1\}\)) or positive samples in the XOR dataset (considering the features in positions \(j\in\{2,3\}\)) are positive. This dataset thus contains 4 predictive features, in positions \(j\in\{0,1,2,3\}\). Points were considered positive when they satisfied any of the following: \[|\sqrt{(x_{0}-0.5)^{2}+(x_{1}-0.5)^{2}}-0.35|\leq 0.0704\] \[(x_{2}-0.5)(0.5-x_{3})\geq 0.0337\]
* RING+XOR+SUM: the samples that are positive in the RING, XOR datasets or such that \(x_{4}+x_{5}+\epsilon>0.5\) are positive samples. \(x_{j}\) denotes the \(j\)-th feature and \(\epsilon\) is Gaussian noise sampled from \(\mathcal{N}(\mu=0,\sigma=0.2)\). Like in the RING and XOR datasets, the corresponding predictive features \(\{0,1,2,3\}\) do not contain noise. In RING+XOR+SUM the predictive features are in positions \(\{0,1,2,3,4,5\}\). See Suppl. Fig. 4 for a visual explanation. Points were considered positive when they satisfied at least one of the following inequalities: \[|\sqrt{(x_{0}-0.5)^{2}+(x_{1}-0.5)^{2}}-0.35|\leq 0.0479\] \[(x_{2}-0.5)(0.5-x_{3})\geq 0.0598\] \[x_{4}+x_{5}+\mathcal{N}(\mu=0,\sigma=0.2)\geq 1.4074\]

The four outcomes are shown in Fig. 1; a minimal sampling sketch for one of them is given below.
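For instance, the RING+XOR dataset can be sampled directly from the definitions above (a sketch with illustrative function names; the paper additionally enforces an exact 50/50 class balance):

```python
import numpy as np

def make_ring_xor(n=1000, k=0, seed=0):
    """n points with 4 predictive features and k uniform decoys;
    thresholds follow the RING+XOR definition above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, 4 + k))
    ring = np.abs(np.sqrt((X[:, 0] - 0.5) ** 2
                          + (X[:, 1] - 0.5) ** 2) - 0.35) <= 0.0704
    xor = (X[:, 2] - 0.5) * (0.5 - X[:, 3]) >= 0.0337
    return X, (ring | xor).astype(int)
```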
#### 2.1.2 Gaussian graphical model

While for sufficiently large values of \(m\) the datasets defined above are already challenging, we built an additional dataset (called DAG) characterized also by confounding effects. Indeed, these effects occur in most real-life settings, and are very likely to _misguide_ the different models towards learning predictive patterns from irrelevant associations. This dataset has been generated by a directed Gaussian graphical model. The exact procedure is detailed in Suppl. Mat. 1.1. By construction, features can be categorized based on their degree of relevance:

* Features \(X_{i}\) that are highly relevant as they are causal for the target variable \(Y\): \(X_{i}\rightarrow\ldots\to Y\)
* Features \(X_{i}\) that are weakly relevant as they correlate with the target variable only due to indirect effects, like in forks: \(X_{i}\leftarrow\ldots\gets X_{j}\rightarrow\ldots\to Y\)
* Irrelevant features

Figure 1: The four datasets, with their predictive features shown by pairs. Orange and blue correspond to the positive and negative classes, respectively. Each column is associated with one dataset, and each row corresponds to a distinct features pair.

### 2.2 Benchmark procedure

We assessed the reliability of FS methods on nonlinear ML tasks with a growing degree of difficulty by incrementally diluting the relevant features in the uniformly-distributed datasets. An exponential increase of the number \(k\) of decoy features is added in each run: \(m\in\{2,4\}\cup K\) for XOR and RING datasets, \(m\in\{4\}\cup K\) for RING+XOR and \(m\in\{6\}\cup K\) for RING+XOR+SUM, where \(K=\{8,16,32,64,128,256,512,1024,2048\}\). For each run, we set the number of samples to \(n=1000\). For each run and dataset, we assessed the performance of both predictors and FS methods. First, for each embedded FS method that relies on a predictive model, we evaluated the latter using a 6-fold cross-validation procedure. Then we computed the AUROC and AUPRC of each step accordingly. We also reported these metrics for a baseline NN without prior or posterior FS. In the paper, we simply refer to it as "Neural Network". Second, we evaluated the ability of each FS algorithm to rank the predictive features higher than the decoys. More specifically, we quantified the latter as the percentage of predictive features among the highest \(p\) and \(2p\) top-ranked features, where \(p\) corresponds to the number of truly predictive features in each dataset (\(p=2\) for XOR and RING, \(p=4\) for RING+XOR, \(p=6\) for RING+XOR+SUM, \(p=7\) or \(81\) in DAG depending on the definition used). We refer to them as best \(p\) and best \(2p\) scores for short. To ensure fair comparison, and avoid assigning good performance to badly-designed FS methods (where feature importances are influenced by their position/indices in the data matrix), we randomly permuted the columns of the input data matrix in each fold of the \(k\)-fold cross-validation procedure. For feature attribution methods (e.g. saliency maps), only the points from the held-out sets of the 6-fold cross-validation have been used to select features.

### 2.3 Machine Learning models

To ensure a fair comparison between NN-based FS methods, we tried to reuse the same neural architecture whenever possible. By default, NNs have been implemented with PyTorch [26]. The data has been centered by replacing each input vector \(x_{i}\) by \(2x_{i}-1\), so each feature ranges between \(-1\) and \(1\). The default model is a three-layer perceptron with LeakyReLU activation functions, with a 0.2 slope for negative values. The 2 hidden layers have \(16\) neurons each. Additionally, L2 regularisation on the parameters is used, with a regularisation parameter of \(10^{-2}\).
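For reference, the default architecture just described can be written in a few lines (a sketch; the L2 penalty is applied here through the optimizer's weight decay):

```python
import torch
import torch.nn as nn

def make_default_classifier(m):
    """Three-layer perceptron: two 16-unit hidden layers with
    LeakyReLU(0.2) activations and a single output logit."""
    return nn.Sequential(
        nn.Linear(m, 16), nn.LeakyReLU(0.2),
        nn.Linear(16, 16), nn.LeakyReLU(0.2),
        nn.Linear(16, 1),
    )

model = make_default_classifier(64)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=1e-2)
```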
The model was trained with the Adam optimizer [27] for up to \(1000\) epochs, with a learning rate of \(0.005\) and a batch size of 64. A scheduler guided the optimisation of each model by adjusting the learning rate over time, decreasing it by a \(0.9\) factor every time the total loss function stagnated for 10 epochs (with a cooldown of 5 epochs). Gaussian noise, with a standard deviation of 0.05, was added to the inputs in order to regularise the models further. In order to minimize overfitting risks and get the best out of each NN-based approach, we implemented an early stopping criterion. We saved the NN parameters at each epoch and kept the ones that minimize the loss function on a held-out set composed of 20% of the training data. The training was interrupted prematurely if the validation loss had not decreased during the last 5 epochs. All feature attribution methods are based on the architecture described above. However, due to the peculiarities of some embedded FS methods or the constraints of their implementation, these latter methods might build on subtle variants of this architecture. When relevant, these differences are explained in the next section.

### 2.4 Feature selection methods

The benchmarked algorithms belong to three categories: feature attribution, embedded, and filter methods. We now describe them in detail.

#### 2.4.1 Feature attribution methods

Feature attribution methods propose an _a posteriori_ interpretation of a model \(M\) by approximately reconstructing the decision process followed by \(M\) in order to produce the prediction \(y_{i}\). Gradient-based interpretation methods for NNs like Saliency Maps (SMs) belong to this category. Given a trained NN model \(M\), the forward pass \(M(x_{i})\) of sample \(x_{i}\) is computed, alongside the gradient \(\partial M(x_{i})/\partial x_{i}\) of the target output \(y_{i}=M(x_{i})\) with respect to the input \(x_{i}\). The gradient values identify which positions in \(x_{i}\) are the most relevant for the prediction. Indeed, because the gradient points towards the direction of steepest ascent, the highest components of the gradient indicate which input variables require the least change to produce the largest variation in the output \(y_{i}\). In this study we used the Captum [16] library to benchmark the Integrated Gradients [11], Saliency [10], DeepLift [12], Input \(\times\) Gradient [13], SmoothGrad [14], and Guided Backpropagation [15] methods. From the same library, we also benchmarked non-gradient-based interpretation approaches like Deconvolution [28], Feature Ablation, Feature Permutation [29] and Shapley Value [30]. For SmoothGrad, data was injected with random noise sampled from a zero-centered Gaussian distribution with 0.1 standard deviation (Captum's implementation of NoiseTunnel). The operation has been repeated \(50\) times per input vector. For the Integrated Gradients, DeepLift, Feature Ablation and Shapley Value Sampling methods, the \(0\) vector was supplied as baseline point. The baseline point is used for different purposes depending on the method. For example, the baseline point provided for the Integrated Gradients method defines the point from which to compute the integral and smooth the feature attribution vector. Because all these gradient-based methods only provide instance-level feature importances, we computed the overall feature importances as the average of absolute values of the instance-level importances.
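As an illustration, this aggregation can be written as follows with Captum (a sketch assuming the model returns one logit per sample; the function name is ours):

```python
import torch
from captum.attr import Saliency

def overall_importances(model, x):
    """Average of absolute instance-level saliency values,
    yielding one importance score per input feature."""
    model.eval()
    attributions = Saliency(model).attribute(x.requires_grad_(True))
    return attributions.abs().mean(dim=0)
```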
Because it was _a priori_ unclear to us whether these instance-level scores should be computed on the training set or the validation set (with their difference explained by the generalization error), we compared the two settings in Section 3.7.

#### 2.4.2 Embedded FS methods

The second category of FS approaches are the embedded methods, which operate _at training time_, such as FSNet [23], Concrete Autoencoder [24], CancelOut [20], Random Forest [31], DeepPINK [21] and LassoNet [22]. In particular, FSNet and Concrete Autoencoder jointly _rank_ features and train the model. FSNet's architecture is composed of a selector and an encoder network, which branches into a classifier and a final sub-network comprising a decoder and a reconstruction layer. The selector consists of a matrix multiplication operation, involving a low-rank matrix of shape \(k\times 2p\) obtained from a weights predictor, followed by a softmax activation (after addition of Gumbel noise to the logits): \(2p\) features are selected, and we computed both best \(p\) and best \(2p\) scores using the feature importances as proposed in [23]. The weights predictor of the selector is composed of a fully-connected layer with no activation. We chose the identity function as activation for both the encoder and decoder, as no further refinement of the input features was deemed necessary. The reconstruction layer is the reverse operation of the selector network, and performs a matrix multiplication analogously. The corresponding low-rank matrix is predicted from a predictor module composed of a fully-connected layer with no activation. The classifier has the same architecture as described in the previous section, except that its input size is restricted to \(2p\). The whole model was trained for \(2000\) epochs. \(30\) bins (latent size of \(10\)) were used to compute the input frequencies necessary to predict the low-rank matrices. CancelOut is composed of the common architecture described in the previous section (trained in the same manner), preceded by a CancelOut layer (a code sketch of this gating layer follows below). We experimented with 2 variants of the method: 1) with a Sigmoid activation and CancelOut weights regularisation (we denote the corresponding model by CancelOut Sigmoid for short), and 2) with a Softmax activation layer and without regularization (CancelOut Softmax). In the former case, we used a \(\lambda_{1}=0.2\) coefficient for the variance term and \(\lambda_{2}=0.1\) for the regularisation term. Because L1 regularisation encourages importance weights to converge to 0.5 (sigmoid(0) = 0.5), we replaced it by the sum of the CancelOut weights, unlike the original implementation. Indeed, this choice of regularisation better promotes sparsity among feature importances. In both cases, CancelOut weights were initialised with the same value \(\beta=1\). The model has been trained for \(300\) epochs, as the convergence of the CancelOut weights requires more time than the optimisation of the classifier alone. All models built for evaluating the LassoNet [22] feature selection method contained \(32\) hidden neurons and ReLU activation functions, as only the latter were available among activation functions in the lassonet Python package. To evaluate the Concrete Autoencoder (CAE), we used the concrete-autoencoder Python package [24], and implemented the underlying classifier with Keras [32] in accordance with the architecture described in the previous section. The CAE has been trained for 300 epochs, with initial temperature \(10\) and final temperature \(0.01\). The CAE has been trained twice, once for selecting \(p\) and once for selecting \(2p\) features. The model has been trained for \(10\) epochs; each hidden layer is composed of \(32\) neurons and followed by a 20% dropout.
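Stepping back to CancelOut: the following is a minimal sketch of the sigmoid-gated layer as we read the description above; the exact form of the regularisation terms used in the benchmark may differ in detail.

```python
import torch
import torch.nn as nn

class CancelOut(nn.Module):
    """Gating layer: each input feature is multiplied by a learnable
    importance gate sigmoid(w_i); all w_i are initialised to beta = 1."""
    def __init__(self, num_features: int, beta: float = 1.0):
        super().__init__()
        self.weights = nn.Parameter(torch.full((num_features,), beta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.weights)

    def regularisation(self, lambda_1: float = 0.2, lambda_2: float = 0.1):
        gates = torch.sigmoid(self.weights)
        # One plausible reading of the loss: reward spread-out gates
        # (variance term) and penalise the sum of the raw weights, which
        # replaces the L1 norm to better promote sparsity.
        return -lambda_1 * gates.var() + lambda_2 * self.weights.sum()

# Feature importances after training are simply sigmoid(layer.weights).
layer = CancelOut(num_features=64)
```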
DeepPINK requires knockoff features, which we generated in two different ways depending on the nature of the dataset. For the DAG dataset, we relied on the framework of Model-X Knockoff features designed by Candes et al. [33]. Details of Model-X Knockoff features are explained in Suppl. Mat. 1.2. For the remaining 4 datasets, knockoff features were generated by simply sampling a uniform distribution (the marginal distribution of the original data matrix \(X\) is multivariate but uniform). Random forests were grown with the Scikit-learn Python package [34] and are composed of \(500\) trees each; we otherwise used the default parameters. We selected features from trained random forests using the widely-used impurity-based feature importance scores, as implemented in Scikit-learn. More specifically, the importance of a feature is defined as the total reduction in Gini impurity caused by all node splits involving that feature, averaged across all trees in the forest.

#### 2.4.3 Filter methods

The last category of methods that we considered is the filter methods, such as Relief [35] or minimum redundancy maximum relevance (mRMR) [36]. These methods do not produce predictions; therefore no AUROC or AUPRC values are reported for them. mRMR relies on statistical measures such as mutual information [37], which requires a discretisation of the input variables. Therefore, we divided each feature into 20 equally-sized bins, which are sufficiently thin to capture the nonlinear dependencies, yet sufficiently large for robust estimation (\(\sim\) 40 observations per bin). AUROC and AUPRC were computed for FS methods based on a predictive model, and reported as a function of the input feature size.

## 3 Results

Feature Selection (FS) is widely used across many fields of science [38; 39; 40]. The goal of FS algorithms is to identify or rank features according to the _predictive signal_ they carry with respect to a prediction label. In this study we benchmarked 20 FS methods on 5 synthetic datasets (RING, XOR, RING+XOR, RING+XOR+SUM and DAG) providing nonlinear binary classification tasks (see Methods). These datasets represent classical ML problems, such as the XOR problem, the discrimination of points lying on a ring-shaped subspace, and combinations thereof (see Methods for more details).

### Random forests outperform other methods on RING by a large margin

In the RING dataset, positive labels are associated with the points lying on a bi-dimensional ring defined by the features in positions \(0,1\) (see Suppl. Fig. S1). Top panels in Fig. 2 show the AUROC and AUPRC of all trained models, as a function of the total number of features \(m=p+k\). It is noteworthy that all models but the Random Forest have AUROCs and AUPRCs approaching a random predictor (50%) for values of \(m\) greater than 32. This effect is consistent with the percentages of relevant features reported in the bottom panels of Fig. 2. Both Random Forests and TreeSHAP perfectly re-identified the relevant features, even when \(m=2048\). However, the decay of their predictive performance as \(m\) increases suggests that tree-based models may lose their FS capabilities in extremely high-dimensional settings (\(m\gg 2048\)).
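To make the ranking and scoring protocol concrete, here is a hedged sketch of impurity-based Random Forest feature ranking and of the best \(p\) / best \(2p\) scores on a RING-like toy problem; the ring centre, radius and width are illustrative choices, not the generating parameters used for the actual dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, m, p = 1000, 64, 2  # samples, total features, truly predictive features

X = rng.uniform(0.0, 1.0, size=(n, m))
# RING-like labels on features 0 and 1: positive class on a ring around (0.5, 0.5).
radius = np.hypot(X[:, 0] - 0.5, X[:, 1] - 0.5)
y = (np.abs(radius - 0.35) < 0.1).astype(int)

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]  # impurity-based ranking

def best_score(ranking, relevant, top):
    """Percentage of truly predictive features among the `top` top-ranked ones."""
    return 100.0 * len(set(ranking[:top]) & set(relevant)) / len(relevant)

print(best_score(ranking, relevant={0, 1}, top=p))      # best p score
print(best_score(ranking, relevant={0, 1}, top=2 * p))  # best 2p score
```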
### Neural networks are better suited for solving the XOR problem

As shown in Fig. 3, NN models obtained overall better results, and mRMR underperformed, even for a small number of irrelevant features \(k\). In particular, LassoNet reached maximal best \(p\) and best \(2p\) scores for each \(m\), and \(>90\%\) AUROC and AUPRC for each \(m\leq 512\). This relatively high predictive performance of LassoNet can be attributed to the internal cross-validation procedure used to find the optimal degree of sparsity of the NN parameters. The FS methods, besides LassoNet, that found the highest proportions of relevant features are Relief, CancelOut Sigmoid, and feature attribution methods like DeepLift or Deconvolution. Most methods undergo drastic performance losses when \(m\geq 128\).

### RF, mRMR and LassoNet are the best performing methods on RING+XOR

This dataset comprises four relevant features (\(p=4\)). Similarly to what we observed for the RING dataset, we see from Fig. 4 and Tab. 2 that Random Forests and TreeSHAP (which interprets learned Random Forests) perform the best, along with mRMR. Random Forests achieve high AUROC and AUPRC values and correctly rank the features in most settings. In particular, they constitute the only FS method capable of ranking over \(60\%\) of the relevant features among the top 4 features for every \(m\leq 2048\). For any other method to achieve a similarly high percentage (\(>80\%\)) of relevant features selected, the total number of features needs to be decreased to \(m=16\), which makes the problem orders of magnitude simpler. Among these remaining methods, mRMR dominates regardless of the number of features. Overall, only RF, mRMR and LassoNet result in best \(p\) and best \(2p\) scores \(>50\%\) when \(m>256\). All remaining methods except LassoNet appear to perform close to random for \(m\geq 256\); LassoNet starts performing close to random for \(m\geq 1024\).

Figure 3: Performance of the different models and feature selection methods on the XOR dataset. (Top) AUROC and AUPRC of each trained model as a function of the number of features. (Bottom) Percentage of relevant features selected by each FS method in the top \(p\) and \(2p\), respectively. Shaded areas correspond to the best \(p\) and best \(2p\) scores of a dummy FS method that performs worse than random. Methods have been sorted by decreasing order of average performance in the legend.

Figure 2: Performance of the different models and feature selection methods on the RING dataset. (Top) AUROC and AUPRC of each trained model as a function of the number of features. (Bottom) Percentage of relevant features selected by each FS method in the top \(p\) and \(2p\), respectively. Shaded areas correspond to the best \(p\) and best \(2p\) scores of a dummy FS method that performs worse than random. Methods have been sorted by decreasing order of average performance in the legend.

### RING+XOR+SUM is challenging for all methods

There are six relevant features in this dataset, two of them being linear with some additive noise. The prediction problem appears to be quite difficult, with AUROC and AUPRC values \(<0.85\) even when no decoy feature is added (\(m=p=6\); see Fig. 5). Contrary to the previous experiments, overall performance drops gradually with \(m\). This can be explained by the slightly larger number of relevant features (\(p=6\)) as well as their heterogeneity, allowing models to detect the features that are the easiest to pick, given the characteristics of each model (XOR features for NNs, RING features for mRMR and tree-based models, SUM features for most models).
RF/TreeSHAP and mRMR are the best FS methods, with best \(p\) scores \(\geq 40\%\) and best \(2p\) scores \(\geq 60\%\) for each value of \(m\). Concrete autoencoder ranks last as a FS method, and DeepPINK ranks last as a predictive model. Finally, we observe that CancelOut Sigmoid clearly outperforms its softmax variant by a large margin.

### Disentangling causal from spurious effects on DAG is too challenging for FS methods

In the fifth and last dataset in this benchmark, which has been generated by a graphical model (see Methods), relevant features can be defined either as features that are causal for the observed variable \(Y\), or as features that are expected to correlate with \(Y\) due to indirect (confounding) effects. We refer to this dataset as DAG. In Table 1, we reported the best \(p\) and \(2p\) scores based on these two definitions. We can see that, in both settings, TreeSHAP and its underlying Random Forest model outperform the other methods. Concrete autoencoder, FSNet, CancelOut (softmax) and DeepPINK fail at detecting relevant features. Among feature attribution methods, Input \(\times\) Gradient, Feature permutation and Shapley value sampling slightly improve over the other methods. Overall, none of the benchmarked approaches stands out at disentangling indirect correlations from causal effects on the DAG dataset.

### Benchmark summary

Complementary to the presented figures, we summarised in Tab. 2 the best \(p\) and best \(2p\) scores for all FS methods on each dataset. For readability purposes, we only report the mean scores, computed across all values of \(m\). The table is organised in two parts: instance-level feature attribution methods (_a posteriori_ gradient-based methods such as Saliency Maps), and the remaining approaches. Best performing methods from each category are highlighted in bold. Among model-based and _a priori_ FS techniques, TreeSHAP and its underlying model Random Forests both outperform all approaches by a large margin on all datasets but XOR. Overall, mRMR appears to be the second best-performing technique, despite its underperformance on XOR. Complementary to mRMR, LassoNet achieved maximal performance on XOR, while getting average results on the remaining datasets. Instance-level _a posteriori_ methods do not show significant differences, except on the DAG dataset. In the latter setting, Input \(\times\) Gradient, Feature Ablation and Shapley value sampling seem to perform relatively better. In order to compare the methods in situations where the curse of dimensionality is exacerbated, we reported the same results on sub-sampled versions of the same datasets, with \(n\in\{250,500\}\). These results are shown in Suppl. Tables 1 and 2. We observed that mRMR, tree-based methods and LassoNet consistently outperform other methods in these settings.

Figure 4: Performance of the different models and feature selection methods on the RING+XOR dataset. (Top) AUROC and AUPRC of each trained model as a function of the number of features. (Bottom) Percentage of relevant features selected by each FS method in the top \(p\) and \(2p\), respectively. Shaded areas correspond to the best \(p\) and best \(2p\) scores of a dummy FS method that performs worse than random. Methods have been sorted by decreasing order of average performance in the legend.
### Bootstrapping influences the quality of feature attribution scores

Since practitioners may use _a posteriori_ SM methods in different ways, and since it was unclear whether using points from the training set can indeed improve the quality of feature ranking, we investigated whether these methods can benefit from bootstrapping and from the use of the training set. In the first setting, we computed one feature importance vector per instance from the held-out set after training, and averaged them across the whole held-out set. In the second setting, we added a bootstrapping component by repeating this step 10 times and re-training the model each time on a random sample composed of 80% of the training set (sampling with replacement). The final importance vectors have been obtained by averaging across the 10 runs. In the third setting, we removed bootstrapping but computed the feature importance vectors on the points from the training set only, and left the held-out set unused. The results of the three settings are compared in Fig. 6.

Figure 5: Performance of the different models and feature selection methods on the RING+XOR+SUM dataset. (Top) AUROC and AUPRC of each trained model as a function of the number of features. (Bottom) Percentage of relevant features selected by each FS method in the top \(p\) and \(2p\), respectively. Shaded areas correspond to the best \(p\) and best \(2p\) scores of a dummy FS method that performs worse than random. Methods have been sorted by decreasing order of average performance in the legend.

Figure 6: Average best \(2p\) score on each of the 5 datasets, using three different approaches to infer feature importances from instance-level feature attribution methods. In the first setting, only the points from the held-out set have been used to rank features. In the second case, bootstrapping was performed by randomly sampling (with replacement) 80% of the training set before training the NN. In the last setting, only the points from the training set have been used.

## 4 Discussion

### Feature selection and modeling quality are interdependent

Care should be taken in interpreting the results presented in our study. Indeed, each FS method necessarily relies on some modeling assumptions, and the relevance of selected features is heavily impacted by the model's adequacy to the data. In particular, the inference process is not guaranteed to yield the optimal solution (the set of parameters that produces the least generalisation error). Despite the flexibility of NNs, as described by the universal approximation theorem [41] for example, the optimal architecture choice (with smallest generalisation error) is unknown and can only be found by cross-validation. A possible lack of regularisation, coupled with the presence of a large number of input features (thereby increasing the effective number of parameters), is likely to drive the model towards learning from irrelevant correlations. In such a case, because the model's decision process relies on irrelevant features, the FS method building on this model will necessarily attach higher importance to those features. Therefore, FS techniques based on NNs might be hard to exploit in practice, as they require building the most accurate model, and guiding the inference process to the optimal solution. This goal is easier to reach when provided with sufficiently deep insights about the data.
In this sense, FS resembles a chicken-and-egg problem, and because of that FS methods cannot be blindly applied on a dataset without minimal prior knowledge of the data and the limitations of the model used. In particular, all feature attribution / SM methods considered in this study, as well as many of the other FS techniques, rely on a neural network that requires proper training. Because the number of parameters varied with the number of input features, it remains highly probable that the corresponding models either under- or overfitted the data in some situations, regardless of the presence of dropout and L2 regularisation. Overall, there is no guarantee that the Adam optimizer consistently guided the model to the globally-optimal solution, or that the generalisation error was minimal.

### Tree-based modeling and decision tree induction are two separate concepts

Random forests have largely outperformed other methods on the RING, RING+XOR, RING+XOR+SUM and DAG datasets, both as a FS technique and as predictive models. However, the dataset where RFs perform comparatively worse (relative to other methods) is XOR, which consists of data points obeying a simple logical rule. Such a rule can be perfectly captured through a piece-wise linear function. In particular, among all off-the-shelf ML models, decision trees are the optimal choice for modeling such data, as only three decision splits should theoretically be sufficient for a perfect segregation of the classes. However, the performance of RFs is sub-optimal, as observed in Fig. 3.

\begin{table}
\begin{tabular}{l r r r r r r}
\hline \hline
 & \multicolumn{2}{c}{\% causal features} & \multicolumn{2}{c}{\% correlated features} & \multicolumn{2}{c}{Performance (\%)} \\
\cline{2-7}
Method & Best p & Best 2p & Best p & Best 2p & AUROC & AUPRC \\
\hline
Saliency maps & 14.3 & 21.4 & 9.5 & 14.6 & - & - \\
Integrated gradient & 14.3 & 21.4 & 9.7 & 14.6 & - & - \\
DeepLift & 14.3 & 21.4 & 9.7 & 14.6 & - & - \\
Input \(\times\) Gradient & 19.0 & 26.2 & 10.1 & 14.2 & - & - \\
SmoothGrad & 14.3 & 21.4 & 9.5 & 14.6 & - & - \\
Guided backpropagation & 14.3 & 21.4 & 9.5 & 14.6 & - & - \\
Deconvolution & 14.3 & 21.4 & 9.5 & 14.6 & - & - \\
Feature ablation & 19.0 & 26.2 & 9.7 & 14.2 & - & - \\
Feature permutation & 16.7 & 21.4 & 10.3 & 14.4 & - & - \\
Shapley value sampling & 16.7 & 26.2 & 10.3 & 14.6 & - & - \\
\hline
mRMR & 16.7 & 19.0 & 6.6 & 11.5 & - & - \\
LassoNet & 14.3 & 14.3 & 6.8 & 11.1 & 71.9 & 66.0 \\
Relief & 14.3 & 14.3 & 6.8 & 9.5 & - & - \\
Concrete Autoencoder & 2.4 & 11.9 & 4.9 & 8.0 & 50.1 & 52.8 \\
FSNet & 2.4 & 4.8 & 3.5 & 7.2 & 56.7 & 55.5 \\
CancelOut (softmax) & 2.4 & 4.8 & 5.8 & 8.2 & 46.5 & 48.5 \\
CancelOut (sigmoid) & 16.7 & 21.4 & 9.7 & 13.8 & 56.2 & 56.8 \\
DeepPINK & 0.0 & 0.0 & 4.3 & 8.4 & 50.0 & 50.0 \\
Random Forest & 21.4 & 40.5 & 12.8 & **17.9** & **75.4** & **73.9** \\
TreeSHAP & **28.6** & **42.9** & **13.4** & 17.3 & - & - \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Best p and best 2p scores of all feature selection methods on the DAG dataset. \(p=7\) when considering only causal features as relevant (first multi-column) and \(p=81\) when also including confounders (second multi-column). Additionally, AUROC and AUPRC scores have been reported for embedded methods.

The quality of RFs' feature selection and prediction is not linked to the sophistication of the modeling _per se_, _but rather to the optimality of the decision tree induction algorithm_.
Indeed, Scikit-learn's implementation is based on the heuristic CART algorithm [42], which is unlikely to infer the tree with the highest information gain and the minimal number of nodes. In particular, in the XOR dataset, selecting one of the two relevant features as the first decision split is not sufficient, as it does not produce any change in the class proportions within the newly obtained hyper-parallelepipeds. Therefore, RFs are, counter-intuitively, better at growing trees from ring-shaped data, since any split on one of the 2 corresponding features results in an increase of class purity (strictly positive information gain). On the XOR dataset, optimal inference would require a one-feature lookahead, or bivariate decision splits. In conclusion, the relatively lower performance of RFs on the XOR dataset can mostly be attributed to the sub-optimality of its underlying tree induction algorithm.

### Univariate feature selection remains relevant in a high dimensionality context

Although the datasets have been constructed in a way that they are highly challenging for both linear and univariate FS methods, it must be noted that (nonlinear) univariate filter methods remain highly relevant in some contexts. First, they are computationally more efficient (and embarrassingly parallel), making them competitive in high-dimensional settings (e.g. Whole Genome Sequencing data). Second, they are still capable of capturing relevant features when these features _individually_ correlate with the explained variable, in a nonlinear fashion. This is shown by the maximal performance (\(100\%\) best \(p\) score for any value of \(m\)) of mutual information (MI) on the RING dataset (see Suppl. Tab. 3), suggesting that MI consistently detects the reduction of entropy caused by the ring-shaped function that generated the data. Indeed, this ring-shaped data leaks information about the class at the level of individual features (the probability distribution of each feature is altered when conditioned on the class). Because real-life problems are more likely to exhibit univariate nonlinear correlations than our artificial datasets, simple information-theoretic approaches could remain highly relevant. However, data availability is a crucial prerequisite for accurate estimation of MI.
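The following sketch illustrates both points made above: a single binned feature carries measurable mutual information with the class on RING-like data, but essentially none on XOR-like data; the sample size, bin count and ring parameters are illustrative choices.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 100_000  # large n for stable MI estimates
X = rng.uniform(0.0, 1.0, size=(n, 2))

# RING-like labels: each feature individually leaks information about the class.
y_ring = (np.abs(np.hypot(X[:, 0] - 0.5, X[:, 1] - 0.5) - 0.35) < 0.1).astype(int)
# XOR labels: each feature is marginally independent of the class.
y_xor = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)

bins = np.digitize(X[:, 0], np.linspace(0.0, 1.0, 21))  # 20 equal-width bins
print(mutual_info_score(bins, y_ring))  # clearly positive
print(mutual_info_score(bins, y_xor))   # approximately zero
```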
## 5 Conclusion

In this paper, we investigated the usefulness of nonlinear FS approaches in the case of high-dimensional data with a low sample-to-feature ratio. What emerges from the results is that Random Forests and Relief outperform neural network-based approaches on all datasets that exhibit nonlinear correlations between individual features and the target variable (all but the XOR dataset).

\begin{table}
\begin{tabular}{l r r r r r r r r r r}
\hline \hline
Dataset & \multicolumn{2}{c}{RING} & \multicolumn{2}{c}{XOR} & \multicolumn{2}{c}{RING+XOR} & \multicolumn{2}{c}{RING+XOR+SUM} & \multicolumn{2}{c}{DAG} \\
\cline{2-11}
Method & Best p & Best 2p & Best p & Best 2p & Best p & Best 2p & Best p & Best 2p & Best p & Best 2p \\
\hline
Saliency maps & 25.8 & 34.8 & 53.8 & 54.5 & 35.0 & 42.5 & 56.1 & 60.8 & 14.3 & 21.4 \\
Integrated gradient & 24.2 & 36.4 & 53.0 & 54.5 & 35.4 & 41.7 & 55.3 & 60.0 & 14.3 & 21.4 \\
DeepLift & 24.2 & 36.4 & 53.0 & 55.3 & 34.6 & 42.5 & 55.3 & 60.0 & 14.3 & 21.4 \\
Input \(\times\) Gradient & 26.5 & 34.8 & 54.5 & 54.5 & 35.8 & 43.3 & 55.8 & 60.8 & 19.0 & 26.2 \\
SmoothGrad & 25.8 & 36.4 & 53.8 & 54.5 & 36.2 & 42.9 & 56.1 & 61.1 & 14.3 & 21.4 \\
Guided backpropagation & 25.8 & 36.4 & 53.8 & 54.5 & 35.4 & 42.5 & 56.4 & 61.1 & 14.3 & 21.4 \\
Deconvolution & 25.8 & 35.6 & 53.8 & 55.3 & 35.0 & 43.3 & 55.6 & 60.8 & 14.3 & 21.4 \\
Feature ablation & 25.0 & 34.8 & 54.5 & 54.5 & 34.2 & 42.1 & 56.1 & 60.6 & 19.0 & 26.2 \\
Feature permutation & 25.8 & 35.6 & 53.0 & 54.5 & 32.9 & 41.7 & 55.6 & 60.0 & 16.7 & 21.4 \\
Shapley value sampling & 23.5 & 37.9 & 52.3 & 53.8 & 35.4 & 42.9 & 55.3 & 60.6 & 16.7 & 26.2 \\
\hline
mRMR & **100.0** & **100.0** & 12.5 & 28.3 & 81.7 & 88.8 & 74.7 & 84.7 & 16.7 & 19.0 \\
LassoNet & 34.8 & 35.6 & **81.8** & **81.8** & 44.6 & 52.9 & 64.2 & 67.8 & 14.3 & 14.3 \\
Relief & 40.2 & 45.5 & 72.7 & 74.2 & 37.1 & 42.1 & 43.9 & 53.3 & 14.3 & 14.3 \\
Concrete Autoencoder & 19.7 & 24.2 & 22.7 & 27.3 & 19.2 & 30.0 & 25.8 & 36.4 & 2.4 & 11.9 \\
FSNet & 21.2 & 28.0 & 31.1 & 38.6 & 25.4 & 33.3 & 33.3 & 43.1 & 2.4 & 4.8 \\
CancelOut (softmax) & 14.4 & 25.0 & 22.0 & 28.8 & 18.8 & 29.2 & 30.0 & 38.1 & 2.4 & 4.8 \\
CancelOut (sigmoid) & 21.2 & 31.1 & 67.4 & 72.0 & 35.8 & 46.7 & 58.6 & 63.6 & 16.7 & 21.4 \\
DeepPINK & 17.4 & 25.0 & 17.4 & 24.2 & 20.4 & 32.9 & 28.1 & 37.8 & 0.0 & 0.0 \\
Random Forest & **100.0** & **100.0** & 54.5 & 59.8 & **88.8** & **95.4** & **85.3** & **90.8** & 21.4 & 40.5 \\
TreeSHAP & **100.0** & **100.0** & 40.2 & 43.9 & 81.7 & 88.3 & 78.3 & 86.1 & **28.6** & **42.9** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Best p and best 2p score percentages on the 5 datasets. For the first 4 datasets, scores have been averaged over \(m\in\{2,4,6,8,16,32,64,128,256,512,1024,2048\}\). Top and bottom parts of the table correspond to instance-level feature attribution and embedded/filter FS methods, respectively. Best performing methods are highlighted in bold.

Therefore, DNN models might not be the best choice for feature selection on datasets with these characteristics, i.e. a low density of relevant features, a prevalence of nonlinear patterns and variance homogeneity (for both predictive and decoy features). In real-life applications, relationships among the features in the data tend to be additive. However, our study indicates that when both additive and nonlinearly entangled features are present, we can only detect the former unless a considerable number of samples is available compared to the input feature size.
2309.03075
Parameterizing pressure-temperature profiles of exoplanet atmospheres with neural networks
Atmospheric retrievals (AR) of exoplanets typically rely on a combination of a Bayesian inference technique and a forward simulator to estimate atmospheric properties from an observed spectrum. A key component in simulating spectra is the pressure-temperature (PT) profile, which describes the thermal structure of the atmosphere. Current AR pipelines commonly use ad hoc fitting functions here that limit the retrieved PT profiles to simple approximations, but still use a relatively large number of parameters. In this work, we introduce a conceptually new, data-driven parameterization scheme for physically consistent PT profiles that does not require explicit assumptions about the functional form of the PT profiles and uses fewer parameters than existing methods. Our approach consists of a latent variable model (based on a neural network) that learns a distribution over functions (PT profiles). Each profile is represented by a low-dimensional vector that can be used to condition a decoder network that maps $P$ to $T$. When training and evaluating our method on two publicly available datasets of self-consistent PT profiles, we find that our method achieves, on average, better fit quality than existing baseline methods, despite using fewer parameters. In an AR based on existing literature, our model (using two parameters) produces a tighter, more accurate posterior for the PT profile than the five-parameter polynomial baseline, while also speeding up the retrieval by more than a factor of three. By providing parametric access to physically consistent PT profiles, and by reducing the number of parameters required to describe a PT profile (thereby reducing computational cost or freeing resources for additional parameters of interest), our method can help improve AR and thus our understanding of exoplanet atmospheres and their habitability.
Timothy D. Gebhard, Daniel Angerhausen, Björn S. Konrad, Eleonora Alei, Sascha P. Quanz, Bernhard Schölkopf
2023-09-06T15:22:33Z
http://arxiv.org/abs/2309.03075v1
# Parameterizing pressure-temperature profiles of exoplanet atmospheres with neural networks

###### Abstract

Context: Atmospheric retrievals (AR) of exoplanets typically rely on a combination of a Bayesian inference technique and a forward simulator to estimate atmospheric properties from an observed spectrum. A key component in simulating spectra is the pressure-temperature (PT) profile, which describes the thermal structure of the atmosphere. Current AR pipelines commonly use ad hoc fitting functions here that limit the retrieved PT profiles to simple approximations, but still use a relatively large number of parameters.

Aims: In this work, we introduce a conceptually new, data-driven parameterization scheme for physically consistent PT profiles that does not require explicit assumptions about the functional form of the PT profiles and uses fewer parameters than existing methods.

Methods: Our approach consists of a latent variable model (based on a neural network) that learns a distribution over functions (PT profiles). Each profile is represented by a low-dimensional vector that can be used to condition a decoder network that maps \(P\) to \(T\).

Results: When training and evaluating our method on two publicly available datasets of self-consistent PT profiles, we find that our method achieves, on average, better fit quality than existing baseline methods, despite using fewer parameters. In an AR based on existing literature, our model (using two parameters) produces a tighter, more accurate posterior for the PT profile than the five-parameter polynomial baseline, while also speeding up the retrieval by more than a factor of three.

Conclusions: By providing parametric access to physically consistent PT profiles, and by reducing the number of parameters required to describe a PT profile (thereby reducing computational cost or freeing resources for additional parameters of interest), our method can help improve AR and thus our understanding of exoplanet atmospheres and their habitability.

## 1 Introduction

With now over 5000 confirmed planet detections outside the solar system (see, e.g., Christiansen, 2022), the exoplanet science community is increasingly expanding from detecting new planets to also characterizing them. One important tool for this is atmospheric retrieval (AR), that is, "the inference of atmospheric properties of an exoplanet given an observed spectrum" (Madhusudhan, 2018). The properties of interest here include, for example, the chemical composition of the atmosphere (i.e., abundances of different chemical species) or the presence of clouds or haze. Deriving these properties from a spectrum is a classic example of an inverse problem. As with many such problems, the standard approach is to combine Bayesian inference methods, such as nested sampling (Skilling, 2006), with a simulator for the "forward" direction--in our case, turning a set of atmospheric parameters \(\mathbf{\theta}\) into the corresponding spectrum. Given the spectrum \(y\) of a planet of interest, one can then estimate a posterior distribution \(p(\mathbf{\theta}\,|\,y)\) by iteratively drawing a sample of parameter values from a prior \(\pi(\mathbf{\theta})\), passing them to a simulator, and comparing the simulated spectrum \(\hat{y}\) to the observed one through a likelihood function \(L\) to guide the next parameter sample that is drawn.
In practice, the simulator is often deterministic, in which case \(L\) is essentially equivalent to the noise model: For example, optimizing \(\mathbf{\theta}\) to minimize \(\|y-\hat{y}(\mathbf{\theta})\|^{2}\) implies the assumption that the noise in \(y\) comes from an uncorrelated Gaussian. Traditionally, due to computational limitations, much of the existing work on atmospheric retrievals has made the simplifying assumption of treating atmospheres as one-dimensional. More recent studies, however, have also begun to explore 2D and 3D approaches (see, e.g., Chubb and Min, 2022; Nixon and Madhusudhan, 2022; Zingales et al., 2022).

Pressure-temperature profiles. One of the key factors that determines the observable spectrum of a planet's atmosphere is its thermal structure, that is, the temperature of each atmospheric layer. The thermal structure is described by the pressure-temperature profile, or PT profile, which is a function \(f:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\) that maps an atmospheric pressure to its corresponding temperature. In reality, of course, this function also depends on the location (e.g., in the case of the Earth, the air over the equator is warmer than over the poles), and such variations can be accounted for when using general circulation models (GCMs). However, as mentioned above, it is common to limit oneself to one-dimensional approximations. The implications of such an approximation are discussed, for example, in Blecic et al. (2017). There are, in principle, two different ways to determine the PT profile during an AR: One may either solve a set of (simplified) radiative transfer equations, or approximate the PT profile through an ad hoc fitting function (see, e.g., Seager, 2010; Line et al., 2013; Heng, 2017). In the latter case, which is the one that we consider in this work, one has to choose a parameterization for the PT profile, that is, one needs to assume a parametric family of functions \(f_{\mathbf{z}}:P\to T\) which map pressure values onto corresponding temperature values and have tunable parameters \(\mathbf{z}\in\mathbb{R}^{d}\) that control what exactly this mapping looks like (see Figure 1 for an illustration). These parameters \(\mathbf{z}\) are a subset of the full set of parameters \(\mathbf{\theta}\) for which the retrieval procedure estimates a posterior distribution (see above). Throughout this work, we commonly refer to \(\mathbf{z}\) as a (latent) representation of a PT profile. The choice of \(f_{\mathbf{z}}\) will, in general, be determined by a trade-off between different (and possibly conflicting) requirements and desiderata, including the following:

* For every profile that we can expect to find during the retrieval, there should exist a \(\mathbf{z}\) such that \(f_{\mathbf{z}}\) is a good approximation of this profile (surjectivity).
* For every value of \(\mathbf{z}\) from some given domain (e.g., a subset of \(\mathbb{R}^{d}\)), \(f_{\mathbf{z}}\) is a valid PT profile. Together with surjectivity, this implies that the image of \(\mathbf{z}\mapsto f_{\mathbf{z}}\) is equal to the set of physically sensible (and relevant) PT profiles.
* The mapping \(\mathbf{z}\mapsto f_{\mathbf{z}}\) should be smooth in the sense that small changes in \(\mathbf{z}\) should only result in small changes in \(f_{\mathbf{z}}\) (i.e., continuity); otherwise, the retrieval loop may not converge.
* The dimensionality \(d=\dim(\mathbf{z})\) should be as small as possible, since methods such as nested sampling often do not scale favorably in the number of retrieval parameters.
* We should be able to define a simple prior \(p(\mathbf{z})\) for \(\mathbf{z}\), and for \(\mathbf{z}\sim p(\mathbf{z})\), the induced distribution over functions \(\{f_{\mathbf{z}}\}_{\mathbf{z}\sim p(\mathbf{z})}\) should constitute an appropriate prior distribution over the PT profiles of the population of planets that we are considering.
* Ideally, the dimensions of \(\mathbf{z}\) are interpretable as physical quantities, or correspond to physical processes (e.g., "\(z_{1}\) is the mean temperature," or "\(z_{2}\) controls the presence of an inversion").

Of course, as suggested above, no choice of parameterization will perfectly satisfy all requirements simultaneously, and some sort of compromise is required.

Related work. Barstow and Heng (2020), who have identified the parameterization of PT profiles as one of the "outstanding challenges of exoplanet atmospheric retrievals," find the approaches of Madhusudhan and Seager (2009) and Guillot (2010) to be the most popular among the atmospheric retrieval community. These parameterizations, as well as the ones proposed by Hubeny et al. (2003), Hansen (2008), Heng et al. (2011), and Robinson and Catling (2012), all consist of (semi-)analytic expressions that seek to describe the relationship between pressure and temperature mainly in terms of physically interpretable quantities, such as the opacity or optical depth. Simpler (yet less physically motivated) approaches include the use of splines (e.g., Zhang et al., 2021) as well as low-order polynomials (e.g., Konrad et al., 2022). Finally, a data-driven approach was presented by Schreier et al. (2020), who compute a singular value decomposition of a representative set of PT profiles, and parameterize new profiles as a combination of the first \(k\) singular vectors. All these approaches typically use four to six parameters to describe a PT profile, which is often a substantial fraction of the total number of retrieved parameters. To the best of our knowledge, no studies to date have used machine learning (especially neural networks) to parameterize pressure-temperature profiles of exoplanet atmospheres.

Contributions. In this work, we explore a conceptually new approach to parameterizing PT profiles that is based on the idea of using neural networks to learn efficient (i.e., low-dimensional) representations of PT profiles, as well as the corresponding decoding mechanisms, from simulated data.1 Using two different datasets of PT profiles obtained with full climate models, we show that our approach can achieve better fit quality with fewer parameters than our two baseline methods, while still easily integrating into an existing atmospheric retrieval framework.

Footnote 1: We presented early versions of this work at AbSciCon 2022 and the “ML and the Physical Sciences” workshop at NeurIPS 2022.

## 2 Method

Our goal is the following: We want to learn a model (in our case: a neural network) that takes two inputs: (1) a pressure value \(p\), and (2) a vector \(\mathbf{z}\in\mathbb{R}^{d}\), also called "representation", which contains a highly compressed description of a PT profile. The output of the model is the temperature \(t\) at pressure \(p\), for the profile described by \(\mathbf{z}\).
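A minimal sketch of one possible decoder of this form is given below; the hidden sizes, activations, and the concatenation-based conditioning on \(\mathbf{z}\) are our assumptions for illustration, not the architecture detailed in Appendix B.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Maps (log-pressure, z) -> temperature; conditioning on z is
    implemented by concatenating z with the pressure input."""
    def __init__(self, latent_dim: int = 2, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + latent_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, log_p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # log_p: (N, 1) pressure grid of one profile; z: (latent_dim,)
        z_tiled = z.unsqueeze(0).expand(log_p.shape[0], -1)
        return self.net(torch.cat([log_p, z_tiled], dim=1)).squeeze(-1)

decoder = Decoder(latent_dim=2)
z = torch.zeros(2)                              # one point in latent space
log_p = torch.linspace(-6, 0, 51).unsqueeze(1)  # toy log10-pressure grid
t_hat = decoder(log_p, z)                       # temperatures, shape (51,)
```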
During an atmospheric retrieval, the Bayesian inference routine can then propose values of \(\mathbf{z}\), which can be converted to a PT profile by conditioning the network on the proposed \(\mathbf{z}\) (that is, fixing the input \(\mathbf{z}\)) and evaluating it at different pressures \(p_{i}\) to get the corresponding temperatures \(t_{i}\). Thus, we will infer a posterior distribution over \(\mathbf{z}\), which corresponds directly to a posterior over PT profiles. To learn such a network, we assume that we have a dataset \(\mathcal{D}\) of PT profiles which were generated by a full climate model or radiative transfer code to ensure they are physically consistent. These models typically simulate an atmosphere in a layer-by-layer fashion, meaning that each PT profile is given in the form of a set of support points, \(\{(p_{i},t_{i})\}_{i=1,\ldots,N}\), where \(N\) is the number of layers, and \(p_{i}\) and \(t_{i}\) are the pressure and temperature in the \(i\)-th layer. Importantly, our method does not require that \(p_{i}\) be the same for all profiles in \(\mathcal{D}\): Different profiles can use different pressure grids, and this is indeed the case for the datasets we work with. Moreover, in principle, even the number of layers \(N\) can vary between profiles. (We do not explicitly consider this case here, but it implies only minor technical complications at training time.) This flexibility also makes it easy, for example, to combine PT profiles from different models or pre-existing atmosphere grids into a single training dataset.

Figure 1: Schematic illustration of the problem setting considered in this paper: “Parameterizing PT profiles” means finding a way to represent PT profiles as vectors \(\mathbf{z}\in\mathbb{R}^{d}\), together with a decoding mechanism \(f_{\mathbf{z}}:P\to T\) that converts these \(\mathbf{z}\)-vectors back into a function that maps pressure values onto temperature values. Our goal in this work is to learn both the representations \(\mathbf{z}\) and the mechanism \(f_{\mathbf{z}}\) from data obtained with simulations using full climate models.

The approach that we propose in the following is inspired by the idea of a (conditional) neural process (NP; Garnelo et al. 2018a,b). A neural process is a type of machine learning model that learns a distribution over functions and combines the advantages of Gaussian processes with the flexibility and scalability of neural networks. In astrophysics, the usage of (conditional) neural processes is still relatively rare; examples include Park & Choi (2021), Čvorović-Hajdinjak et al. (2021), and Jankov et al. (2022), all of which use conditional NPs for interpolation. Our method works as follows (see also Figure 2). We employ two neural networks: an encoder \(E\) and a decoder \(D\). The encoder is only required during training and will not be used during an AR. At training time, we take a PT profile--consisting of a vector \(\mathbf{p}=(p_{1},...,p_{N})\) of pressure values and a vector \(\mathbf{t}=(t_{1},...,t_{N})\) of corresponding temperatures--from our training data set and pass it to \(E\), which outputs a representation \(\mathbf{z}\in\mathbb{R}^{d}\) of the profile:

\[\mathbf{z}=E(\mathbf{p},\mathbf{t})\,. \tag{1}\]

The dimensionality \(d\) of \(\mathbf{z}\) can be chosen freely (but changing \(d\) will require re-training \(E\) and \(D\)). Generally, we want to keep \(d\) as low as possible (e.g., \(d=2\)), but of course, higher values also imply more expressive or informative representations.
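For completeness, here is a matching encoder sketch; flattening and concatenating \(\mathbf{p}\) and \(\mathbf{t}\) assumes a fixed number of layers \(N\) per profile, which is a simplification of the more flexible setup described above.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a PT profile, given on N layers, to a representation z."""
    def __init__(self, n_layers: int = 101, latent_dim: int = 2, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_layers, hidden), nn.GELU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, log_p: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # log_p, t: (batch, N)  ->  z: (batch, latent_dim)
        return self.net(torch.cat([log_p, t], dim=1))
```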
In the next step, the representation \(\mathbf{z}\) is then used to condition the decoder network \(D\). Once conditioned on \(\mathbf{z}\), the decoder network \(D\) is a PT profile, that is, a function that takes in a single value (a pressure) and outputs a single value (a temperature). We then pass all pressure values \(p_{i}\) of our input profile through \(D\) to get the respective predicted temperature values \(\hat{t}_{i}\):

\[\hat{t}_{i}=D(p_{i}\,|\,\mathbf{z})\,, \tag{2}\]

where we use \(D(\cdot\,|\,\mathbf{z}):\mathbb{R}^{+}\to\mathbb{R}^{+}\) to denote the function that is given by conditioning \(D\) on \(\mathbf{z}\) (i.e., fixing \(\mathbf{z}\) as an input). We train the encoder and decoder jointly (i.e., we update the weights of the neural networks) to minimize the difference between the true and predicted temperatures. More specifically, we minimize the following mean squared error (MSE) reconstruction loss, which is a standard choice in machine learning for regression problems:

\[\mathcal{L}_{\text{rec}}(\mathbf{t},\hat{\mathbf{t}})=\frac{1}{N}\sum_{i=1}^{N}(t_{i}-\hat{t}_{i})^{2}\,. \tag{3}\]

In practice, we do this not only for a single PT profile, but minimize the average of \(\mathcal{L}_{\text{rec}}\) over our entire training dataset in a batched fashion. We also note that this choice of reconstruction loss assigns the same weight to all \(N\) atmospheric layers, and refer to Section 5.4 for a brief discussion of possible alternatives. To ensure that we can define a simple prior for \(\mathbf{z}\) during a retrieval, we furthermore want to constrain the distribution of the \(\mathbf{z}\)'s produced by the encoder. There are various potential approaches for this (see Appendix A). For this work, we adopt an idea from Zhao et al. (2017) and add a second term to the loss function that encourages \(\mathbf{z}\) to follow a standard Gaussian:

\[\mathcal{L}_{\text{MMD}}=\text{MMD}^{2}\left(\{\mathbf{z}_{i}\}_{i=1,\ldots,b},\{\mathbf{s}_{i}\}_{i=1,\ldots,b}\right)\,, \tag{4}\]

where:

\[\mathbf{s}_{i}\sim\mathcal{N}_{d}(0,1)\,,\;\;i=1,\ldots,b\,.\]

Here, \(\mathcal{N}_{d}(0,1)\) denotes a \(d\)-dimensional standard Gaussian distribution (i.e., mean zero and identity matrix as covariance matrix). Furthermore, \(b\) is the batch size of our training, and MMD stands for Maximum Mean Discrepancy (Borgwardt et al. 2006; Gretton et al. 2012), which is a kernel-based metric that measures whether two samples come from the same distribution (see Appendix A.2 for more details). By minimizing this MMD term, we encourage the model to produce values of \(\mathbf{z}\) that follow the same distribution as the \(\mathbf{s}_{i}\), that is, a \(d\)-dimensional standard Gaussian. Experimentally, we found that despite this MMD term, there are sometimes a few profiles that are mapped to a \(\mathbf{z}\) very far from the origin, which is undesirable if we want to define a simple prior for \(\mathbf{z}\) during a retrieval. To prevent this problem, we introduce a third loss term with a softplus barrier function on the norm of \(\mathbf{z}\):

\[\mathcal{L}_{\text{norm}}(\mathbf{z})=\frac{1}{k}\cdot\log\left(1+\exp\left(k\cdot\left(\|\mathbf{z}\|_{2}-\tau\right)\right)\right)\,. \tag{5}\]

This is a smooth version of the \(\text{ReLU}(x):=\max(0,x)\) function, where \(k\) controls the amount of smoothing: larger values of \(k\) increase the similarity to ReLU. For this work, we have fixed \(k\) at 100 (ad hoc choice), after observing that using a standard ReLU function destabilized the training for us. The parameter \(\tau\) defines the threshold on \(\|\mathbf{z}\|_{2}\) and was set to \(\tau=3.5\) (ad hoc choice).
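A hedged sketch of the three loss terms is given below; the RBF kernel (one common choice for the MMD) and its bandwidth are our assumptions, as the exact kernel is only specified in Appendix A.2.

```python
import torch
import torch.nn.functional as F

def rbf_kernel(a, b, sigma=1.0):
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

def mmd2(z, s, sigma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    return (rbf_kernel(z, z, sigma).mean()
            + rbf_kernel(s, s, sigma).mean()
            - 2 * rbf_kernel(z, s, sigma).mean())

def norm_barrier(z, k=100.0, tau=3.5):
    # (1/k) * log(1 + exp(k * (||z||_2 - tau))), averaged over the batch.
    return F.softplus(z.norm(dim=1) - tau, beta=k).mean()

# One training batch (placeholders): z from the encoder, s ~ N(0, I).
z = torch.randn(64, 2)
s = torch.randn_like(z)
t, t_hat = torch.rand(64, 51), torch.rand(64, 51)

loss_rec = F.mse_loss(t_hat, t)
loss = loss_rec + 0.1 * mmd2(z, s) + 10.0 * norm_barrier(z)  # weights as in Eq. (6)
```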
Figure 2: Schematic illustration of our model: During training, the encoder network \(E\) maps a PT profile (consisting of a vector \(\mathbf{p}=(p_{1},...,p_{N})\) of pressure values and a vector \(\mathbf{t}=(t_{1},...,t_{N})\) of corresponding temperature values) onto a latent representation \(\mathbf{z}\in\mathbb{R}^{d}\). This latent code is then used to condition a decoder network \(D\), and \(D(\cdot\,|\,\mathbf{z})\) is evaluated on each \(p_{i}\) to get the corresponding predicted temperatures \(\hat{t}_{i}\). During an atmospheric retrieval, \(\mathbf{z}\) is proposed by the Bayesian inference method (e.g., nested sampling).

The total loss that we minimize is then just a weighted sum (with \(\beta,\gamma\in\mathbb{R}^{+}\)) of our three loss terms:

\[\mathcal{L}=\mathcal{L}_{\text{rec}}+\beta\cdot\mathcal{L}_{\text{MMD}}+\gamma\cdot\mathcal{L}_{\text{norm}}\,. \tag{6}\]

The hyperparameters \(\beta\) and \(\gamma\) control the tradeoff between the different loss terms: (1) Increasing \(\beta\) corresponds to placing a stronger prior on the distribution of \(\mathbf{z}\) in the latent space, usually at the cost of a higher reconstruction error. We have tried different values of \(\beta\) and found that the performance is relatively similar over a range of about 0.1 to 1. Too high values of \(\beta\) can lead to model collapse, where the prior on \(\mathbf{z}\) becomes too strong and prevents the encoder from learning meaningful representations. For this work, we set \(\beta=0.1\). This was an ad hoc choice and not the result of systematic optimization, which would likely yield a different value for each dataset and value of \(\dim(\mathbf{z})\). (2) The value of \(\gamma\) should be chosen such that \(\gamma\cdot\mathcal{L}_{\text{norm}}\) dominates the other loss terms when \(\|\mathbf{z}\|_{2}>\tau\). We set \(\gamma=10\) for this work (again without optimization).

To use our trained models during an atmospheric retrieval, we provide a convenient Python wrapper that allows the model to be loaded as a normal function, taking as input a value for \(\mathbf{z}\) and a numpy array of pressure values, and returning an array of corresponding temperatures. We note again that neither the lengths of the pressure grids nor the exact values need to match the pressure grid(s) used during training. (However, the range of pressure values should be approximately the same; otherwise, evaluating \(D(\cdot\,|\,\mathbf{z})\) at pressure values it has never encountered during training may result in nonphysical outputs.)

## 3 Datasets

We validate our proposed approach using two different, publicly available datasets of PT profiles: one for terrestrial planets, and one for hot Jupiters. A visual comparison of the two datasets and the respective \(P\)-\(T\) space that they cover is shown in Figure 3.

### PyATMOS

The PyATMOS dataset was generated by Chopra et al. (2023) and consists of 124 314 stable atmospheres of Earth-like planets around solar-type stars generated with Atmos, a 1D coupled photochemistry-climate model (Arney et al., 2016; Meadows et al., 2018). The atmospheres in this dataset were simulated using \(N=101\) layers covering an altitude range of 0-80 km. Consequently, each PT profile is given by two 101-dimensional vectors for the pressures and temperatures. In addition, about 80 other atmospheric quantities are available, such as fluxes and abundance profiles of chemical species.
We manually removed one PT profile from the dataset because it was completely out-of-distribution and caused stability problems during training. Furthermore, it has been brought to our attention that the simulations of the PyATMOS dataset suffered from a couple of issues that affect the physical correctness of the simulated PT profiles. In particular, the ClimMain.f file specified an H\({}_{2}\)O \(k\)-coefficient file that assumed pressure broadening by H\({}_{2}\)O itself, which is not appropriate for an N\({}_{2}\)-dominated atmosphere. Further, the model version also did not allow for convection above 40 km. Consequently, the simulated PT profiles tend to underpredict stratospheric and mesospheric temperatures compared to the expectation from a real Earth twin. Newer versions of the Atmos code are not affected by these issues and use updated H\({}_{2}\)O and CO\({}_{2}\)\(k\)-coefficients (Teal et al., 2022; Vidaurri et al., 2022). While this may limit the scientific usefulness of the PT profiles, it should not affect our ability to use them to demonstrate the applicability of our method as such, that is, to show how we can learn efficient, low-dimensional parameterizations of PT profiles from simulated data.

### Goyal-2020

This dataset was published by Goyal et al. (2020) and consists of 11 293 PT profiles, corresponding to 89 hot Jupiters, each with four recirculation factors, six metallicities and six C/O ratios.2 Due to missing data, we manually removed three profiles. The data were simulated with Atmo (Amundsen et al., 2014; Tremblin et al., 2015, 2016; Drummond et al., 2016),3 a radiative-convective equilibrium model for atmospheres, using \(N=51\) atmospheric layers. In addition to the PT profiles, the dataset also includes transmission and emission spectra for each atmosphere, as well as over 250 abundance profiles of different chemical species.

Footnote 2: The authors report that around 12 % of the models did not converge; hence the dataset is smaller than the expected 12 816 combinations.

Footnote 3: We point out that Atmos and Atmo are not the same code.

## 4 Experiments and results

In this section, we describe the experiments that we have performed to test our proposed approach and show our results.

Figure 3: These plots show all the PT profiles in our two datasets. Each segment of a line—that is, the connection between the points \((p_{i},t_{i})\) and \((p_{i+1},t_{i+1})\)—is color-coded by the density of the profiles at its respective \((p,t)\) coordinate, which was obtained through a 2D KDE. The green line in the PyATMOS plot shows the one PT profile that we manually removed for being out-of-distribution.

### Training and evaluation procedure

We use stochastic gradient descent to train our models (i.e., one pair of neural networks \(E\) and \(D\) for each combination of a dataset and a value of \(\dim(\mathbf{z})\)) on a random subset of the datasets described in Section 3, and evaluate their performance on another random subset that is disjoint from the training set. For more detailed information about our network architectures, their implementation, and our training procedure, see Appendix B, or take a look at our code, which is available online.

### Reconstruction quality
As a first experiment, we study how well our trained model can approximate previously unseen PT profiles from the test set (cf. Appendix B.3), and compare the performance with two baseline methods: (1) low-order polynomials, and (2) a PCA-based approach similar to the method of Schreier et al. (2020).

Setup: For each \(\dim(\mathbf{z})\in\{1,2,3,4\}\), we train three instances of our model (using three different random seeds to initialize the neural networks and control the split between training and validation) on the training set. We do not consider higher values of \(\dim(\mathbf{z})\) here mainly because one of the goals of this work was to demonstrate that our method requires fewer fitting parameters than the baseline methods. Then, we take the learned decoder and, for each profile in the test set, use nested sampling to find the optimal \(\mathbf{z}\) (denoted \(\mathbf{z}^{*}\)) to reproduce that profile. In this case, "optimal" means that \(\mathbf{z}^{*}\) minimizes the mean squared error (MSE):

\[\mathbf{z}^{*}=\operatorname*{arg\,min}_{\mathbf{z}}\frac{1}{N}\cdot\sum_{i=1}^{N}\left(t_{i}-D(p_{i}\,|\,\mathbf{z})\right)^{2}\,. \tag{7}\]

We add this additional optimization step to the evaluation to remove the effect of the encoder: Ultimately, we are only interested in how well \(D\) can approximate a given profile for some \(\mathbf{z}\)--which does not necessarily have to match the output of \(E\) for that profile perfectly. We have chosen nested sampling over gradient-based optimization--which is possible given that \(D\) is fully differentiable both with respect to \(p\) and \(\mathbf{z}\)--because the latter is generally not available during an AR, unless one uses a differentiable forward simulator (see, e.g., Diff-r from Yip et al., 2022). Nested sampling, therefore, should give us a more realistic idea of which profiles we can find during a retrieval. Our optimization procedure is based on _UltraNest_ (Buchner, 2021), with 400 live points and a truncated standard normal prior, limiting each \(z_{i}\) to \([-4,4]\) (because \(\mathcal{L}_{\text{norm}}\) limits \(\|\mathbf{z}\|\) to \(\tau=3.5\)).

For the polynomial baseline, we simply fit each profile in the test set using a polynomial with degree \(n_{\text{poly}}-1\) for \(n_{\text{poly}}\in\{2,3,4,5\}\). The minimization objective of the fit is again the MSE. Finally, for the PCA baseline, we compute a PCA on all the temperature vectors in the training set, and then fit the temperature vectors in the test set as a linear combination of the principal components (PCs), for \(n_{\text{PC}}\in\{2,3,4,5\}\), once again minimizing the MSE. We note that this PCA baseline must be taken with a grain of salt, for two reasons. First, it requires us to project all profiles onto a single common pressure grid to ensure that the \(i\)-th entry of the temperature vectors always has the same interpretation (i.e., temperature at pressure \(p_{i}\)). We do this by linear interpolation. Consequently, the common pressure grid is determined by the intersection of the pressure ranges of all profiles, meaning that profiles may get clipped both at high and low pressures. In practice, this affects in particular the Goyal-2020 data where, for example, the pressure in the deepest layer varies by more than an order of magnitude between different profiles, whereas for PyATMOS, the profiles all cover a similar pressure range. Second, combining principal components only returns vectors, not functions, meaning that if we want to evaluate a profile at an arbitrary pressure value, we again have to use interpolation.

Figure 4: Examples of pressure-temperature profiles from our test set (black) together with the best approximation using our trained model (red). Columns show different values of \(\dim(\mathbf{z})\in\{1,2,3,4\}\); rows show the best, median, and worst example (in terms of the RMSE). Results are shown for the PyATMOS dataset; additional plots for the Goyal-2020 dataset are found in Figure D.1 in the appendix.
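The following is a hedged sketch of this PCA baseline; reconstructing test profiles via a least-squares fit of the PC coefficients (after subtracting the training mean) is our reading of "fitting as a linear combination," and the data arrays are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# T_train, T_test: temperature vectors interpolated onto a common pressure
# grid, shape (n_profiles, n_layers); placeholder data for illustration.
rng = np.random.default_rng(0)
T_train = rng.normal(250.0, 30.0, size=(1000, 51))
T_test = rng.normal(250.0, 30.0, size=(100, 51))

n_pc = 5
pca = PCA(n_components=n_pc).fit(T_train)
A = pca.components_.T  # design matrix, shape (n_layers, n_pc)

# Fit each test profile as (training mean) + linear combination of the PCs.
coeffs, *_ = np.linalg.lstsq(A, (T_test - pca.mean_).T, rcond=None)
T_hat = pca.mean_ + (A @ coeffs).T

rmse = np.sqrt(np.mean((T_hat - T_test) ** 2, axis=1))  # per-profile RMSE
```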
Results: We show some example profiles and the best-fit reconstructions using our model in Figure 4. The main results of this experiment--the distribution of the (root) mean square error (RMSE) on the test set--are found in Figure 5. The RMSE is simply defined as the square root of the MSE defined in Equation (3), that is, the root of the mean squared difference between a given temperature vector and its best-fit reconstruction. As Figure 5 shows, our method achieves a significantly lower reconstruction error than both baseline methods, both for the PyATMOS dataset and for the Goyal-2020 dataset (see Figure 4 in the appendix). We find that on the PyATMOS dataset, even the one-dimensional version of our model (i.e., each profile is parameterized as just a single number) outperforms the polynomial baseline with five fitting parameters, and is approximately on par with the PCA solution using five components. To account for the absolute scale of the data (i.e., Earth-like temperatures vs. hot Jupiters) and allow comparisons across the two datasets, we also compute the mean relative error (MRE):

\[\text{MRE}=\frac{1}{N}\cdot\sum_{i=1}^{N}\frac{|\hat{t}_{i}-t_{i}|}{t_{i}}\,. \tag{8}\]

We compute this quantity for every profile in the test set, and then take the median of this distribution. For PCA and our method, the medians are then also mean-averaged over the different runs. The final results are given in Figure 6. Looking at the mean relative errors, we note that our errors for the Goyal-2020 dataset are systematically larger than for the PyATMOS dataset. We suspect that this can be explained as the result of two factors: First, the PyATMOS training dataset is ten times larger than the Goyal-2020 training dataset, and, second, the Goyal-2020 dataset covers a more diverse set of functions compared to the PyATMOS dataset.

### Interpreting the latent representations of PT profiles in the context of the corresponding atmospheres

In this section, we take a closer look at the latent representations of PT profiles that our models learn and the extent to which they lend themselves to interpretation. We begin with Figure 7, where we present an illustration of our entire method using a model with \(\dim(\mathbf{z})=2\) that we trained on the Goyal-2020 dataset. The upper part of the figure shows how the encoder \(E\) maps PT profiles--which are given by vectors \(\mathbf{p}\) and \(\mathbf{t}\), which in this example are 51-dimensional--onto 2-dimensional latent representations \(\mathbf{z}\). The three colored dots in the latent space correspond to the three PT profiles shown on top, which are examples from our test set (i.e., profiles that were not used for training).

Figure 5: Distributions of the reconstruction error (RMSE) for a polynomial baseline, a PCA-based baseline, and our method, for different numbers of fitting parameters. For our model, as well as the PCA baseline, each distribution is the average of three runs using different random seeds controlling the initialization and the train/validation split. The dashed lines indicate the respective medians. Results are shown for the PyATMOS dataset; additional plots for the Goyal-2020 dataset are found in Figure 4 in the appendix.
The gray dots in the latent space show the representations of all PT profiles in our test set. Their distribution approximately follows a 2D standard Gaussian (cf. the marginal distributions), which is precisely what we would expect, as this was the reason we introduced the \(\mathcal{L}_{\text{MMD}}\) term to the loss function. The bottom part of the figure illustrates what happens if we place a regular grid on the latent space and, for each \(\mathbf{z}\) on the grid, evaluate \(D(\cdot\,|\,\mathbf{z})\) on a fixed pressure vector \(\mathbf{p}^{\prime}\), which does not have to match the \(\mathbf{p}\) of any profile in the training or test dataset. For example, \(\mathbf{p}^{\prime}\) can have a higher resolution (i.e., more atmospheric layers) than the training data. In both parts of the figure, we observe that PT profiles with different shapes--implying different physical processes--are mapped to different parts of the latent space, and that similar PT profiles are grouped together in the latent space. Here, "similar" refers not only to the shape of the PT profiles, but also to the properties of the corresponding atmospheres. To see this, we turn to Figure 8. As mentioned above, both the PyATMOS and Goyal-2020 datasets contain not only PT profiles, but also various other properties of the corresponding atmospheres, such as concentrations or fluxes of chemical species. In Figure 8, we show several scatter plots of the latent representations of the test set (i.e., the \(\mathbf{z}\) we obtained in the first experiment), color-coded according to some of these atmospheric properties--for example, the mean concentration of CO\({}_{2}\) in the atmosphere corresponding to a given PT profile. For ease of visualization, we limit ourselves to the case \(\dim(\mathbf{z})=2\). We observe that there are clear patterns in these scatter plots that show a strong correlation between our latent representations and the properties of the corresponding atmospheres, again demonstrating that PT profiles from physically similar atmospheres are grouped together. With this in mind, we return to the lower half of Figure 7, where we evaluate \(D\) on a grid of \(\mathbf{z}\) values. We observe that not only do different shapes of PT profiles correspond to different parts of the latent space, but the PT profiles given by \(D(\cdot\,|\,\mathbf{z})\) also vary smoothly as a function of \(\mathbf{z}\). This means that the decoder not only reproduces the real PT profiles that were used to train it, but also allows smooth interpolation between them. Along the axes of the latent space, one can identify specific behaviors: For example, fixing \(z_{2}=0\) and increasing \(z_{1}\) leads to cooling in the upper atmosphere, while fixing \(z_{1}=2\) and increasing \(z_{2}\) leads to heating that extends further into the upper layers. Profiles around the center of the latent space (i.e., \(z_{1}\approx z_{2}\approx 0\)) are almost isothermal, indicating a relatively uniform temperature distribution throughout the atmosphere. A negative value of \(z_{1}\) is generally associated with the presence of a thermal inversion, and for these cases, \(z_{2}\) seems to encode the altitude where the inversion happens.
For instance, for \(z_{1}=-2\) the profile shows an inversion in the mid-atmosphere (around \(\log_{10}(P)=-2\)) for \(z_{2}=2\), whereas for \(z_{2}=-2\), the inversion only happens at very high altitudes/low pressures around \(\log_{10}(P)=-6\). Finally, we can also draw direct connections between the behavior of the PT profiles in latent space and the correlations of \(\mathbf{z}\) with some of the atmospheric parameters that we show in Figure 8. For example, a high concentration of TiO (panel 3 of Figure 8) corresponds to the hot, inverted atmospheres in the upper left of the latent space. This is consistent with expectations from the literature, where TiO is indeed one of the canonical species proposed to cause temperature inversions in hot Jupiters (e.g., Hubeny et al., 2003; Fortney et al., 2008; Piette et al., 2020). Overall, this suggests that the latent space is to some extent interpretable: While the latent variables may not correspond directly to known physical parameters--as is the case, for example, for the analytical parameterizations of Madhusudhan and Seager (2009) and Guillot (2010)--their relationships with other variables may provide insight into the behavior of exoplanet atmospheres. Finally, a small word of caution: While we have just shown that the latent space is _in principle_ interpretable, any _specific_ interpretation--that is, a statement along the lines of "PT profiles with hot upper layers are found in the top left corner of the latent space"--will only hold for one specific model. If we re-train that same model using a different random seed to initialize the neural network, or using a different dataset, we will end up with an equivalent but different latent space. This is because our model is, in general, not identifiable, and there exists no preferred system of coordinates for \(\mathbf{z}\): For example, we can rotate or mirror the latent space without losing information, and unless we add additional constraints to the objective function, there is no reason why the model should favor any particular orientation. However, this simply means that the value of \(\mathbf{z}\) can only be interpreted for a given trained model. It does not, in any way, affect the ability of our model to be used for atmospheric retrievals.
### Usage in an atmospheric retrieval
A major motivation for the development of our method is the possibility of using it for atmospheric retrievals. In this experiment, we show how we can easily plug our trained model into an existing atmospheric retrieval pipeline (a minimal code sketch of this interface follows below). The target PT profile we use for this experiment is taken from the PyATMOS test set. We made sure that it is covered by the training data (i.e., there are similar PT profiles in the training set); however, in order not to make it too easy for our method, we chose a PT profile shape that is comparatively rare in the training data (see below).
Figure 6: Comparison of the median test set MRE for the different methods and datasets (lower is better). Reminder: For each profile, we compute the _mean_ relative error (over all atmospheric layers), and then aggregate by computing the _median_ over all profiles. Finally, for the PCA baseline as well as our method, the results are also _mean_-averaged over different random seeds.
Figure 7: A visual illustration of the encoder \(E\), the decoder \(D\), and their connection via the latent space. See Section 4.3 in the main text for a detailed explanation. This figure is best viewed in color.
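The interface mentioned above can be sketched as follows: once trained, the decoder is wrapped as an ordinary PT-profile function with two fitting parameters, so that the retrieval code can treat \((z_{1},z_{2})\) like any other parameters. The module name `decoder` and its (log-pressure, \(\mathbf{z}\)) input convention are assumptions for illustration, not the published interface.

```python
# Minimal sketch (our illustration) of exposing a trained decoder to a
# retrieval code as a plain function: pressures in, temperatures out.
import numpy as np
import torch

def make_pt_profile(decoder):
    decoder.eval()
    def temperature(pressures, z1, z2):
        log_p = torch.log10(torch.as_tensor(pressures, dtype=torch.float32))
        z = torch.tensor([z1, z2]).expand(len(log_p), 2)
        with torch.no_grad():
            t = decoder(log_p.unsqueeze(1), z)   # one temperature per layer
        return t.squeeze(1).numpy()
    return temperature
```

During the retrieval, the sampler then simply proposes \((z_{1},z_{2})\) from the prior discussed below, and the forward model calls this function on its pressure grid.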
In Appendix C, we consider an even more challenging setting and show two additional results for PT profiles that were generated with a different code and that are not covered by our training distribution. Setup: For our demonstration, we add our trained model to the atmospheric retrieval routine first introduced in Konrad et al. (2022) and developed further in Alei et al. (2022) and Konrad et al. (2023), which is based on petitRADTRANS (Molliere et al., 2019), LIFEsim (Dannert et al., 2022) and PyMultiNest (Buchner et al., 2014). The goal of this experiment is to compare the PT profile parameterization capabilities of a version of our model (trained on the PyATMOS dataset using \(\dim(\mathbf{z})=2\)) with the fourth-order polynomial (i.e., five fitting parameters) used in Konrad et al. (2022) and Alei et al. (2022). We perform two cloud-free atmospheric retrievals (using either our PT model or the polynomial one) on the simulated thermal emission spectrum of an Earth-like planet orbiting a Sun-like star at a distance of \(10\,\mathrm{pc}\), assuming a photon noise signal-to-noise ratio of \(10\) at a wavelength of \(\lambda=11.2\,\mu\mathrm{m}\). The wavelength range was chosen as \([3\,\mu\mathrm{m},20\,\mu\mathrm{m}]\), and the spectral resolution was \(R=\lambda/\Delta\lambda=200\). The number of live points for PyMultiNest was set to \(700\). In addition to the parameters required for the PT profile, we retrieve \(10\) parameters of interest; see Table 1 for an overview. This setup closely matches Konrad et al. (2022) and Alei et al. (2022). For the retrieval with our model, we use a 2-dimensional uniform prior, \(\mathcal{U}_{2}(-4,4)\), for \(\mathbf{z}\). The reason for choosing this uniform prior instead of a Gaussian is that the shape of the ground truth PT profile is relatively under-represented in the PyATMOS dataset that we use for training. This means that our trained model, together with a Gaussian prior on \(\mathbf{z}\), assigns it a very small prior probability, making it difficult for the retrieval routine to find the correct PT profile. Using a uniform prior is an easy way to give more weight to "rare" profiles. We discuss this choice, as well as potential alternatives, in more detail in Section 5.3. Results: We show the main results of this experiment in the form of the retrieved PT profile and spectrum residuals in Figure 9. First, we find that the spectrum residuals demonstrate that both retrievals reproduce the simulated spectrum with sufficient accuracy, despite the differences in the PT parametrization. The residuals' quantiles are centered on the truth and are significantly smaller than the photon noise level. Second, for the retrieved PT profiles, we visually find that the result obtained with our model--unlike the polynomial baseline--is in good agreement with the ground truth, except at the highest layers of the atmosphere, where it tends to underestimate the true temperature. We believe this can be explained by the fact that the upper layers of the atmosphere (i.e., low pressures) have little to no effect on the simulated exoplanet spectrum (cf. the emission contribution function in the left panel of Figure 9), making it hard for the retrieval routine to constrain the PT structure in the upper atmosphere. Third, the retrieved constraints for the surface pressure \(P_{0}\) and temperature \(T_{0}\) are much tighter and more accurate for our model than for the polynomial baseline.
Of course, this is also partly because our model represents a much narrower prior over the PT profiles that we are willing to accept as an explanation of the data (i.e., the spectrum) compared to the polynomial. This is an assumption, and in this case, it is justified by construction. In general, however, the decision of what is an appropriate prior for a given atmospheric retrieval problem is beyond the scope of the method as such. We discuss this in more detail in Section 5.2. Finally, we note that the retrieval using our model was significantly faster than the baseline: by decreasing the number of parameters required to fit the PT profile from five (polynomial) to two (our model), we reduced the total runtime by a factor of 3.2.
Figure 8: Examples of 2D latent spaces, where each \(\mathbf{z}\) from the test set is color-coded by a property of the corresponding atmosphere. Results are shown for the Goyal-2020 dataset; additional plots for the PyATMOS dataset are found in Figure D.3 in the appendix.
## 5 Discussion
In this section, we discuss the advantages and limitations of our method and suggest directions for future work.
### Advantages
Our approach for parameterizing PT profiles has multiple potential benefits compared to existing approaches. First, our model makes realistic, physically consistent PT profiles--which require a computationally expensive solution of radiative transfer equations--available as an ad hoc fitting function. It works, in principle, for all types of planets (Earth-like, gas giants, etc.) as long as we can create a suitable training dataset through simulations. In this context, "suitable" refers to, for example, the total size of the dataset and the density in parameter space (cf. the single out-of-distribution profile for PyATMOS). Our method then provides a tool for using the distribution of the PT profiles in that training set as a prior for the PT profile during a retrieval. Second, during an atmospheric retrieval, the parameterization of our model uses a simple prior for \(\mathbf{z}\) that does not require hard-to-interpret fine-tuning (see also Section 5.3). Third, as we have shown in Section 4, our method can fit realistic PT profiles with fewer parameters than the baselines and still achieve a lower fit error. Limiting the number of parameters needed to express a PT profile during retrieval can lead to faster processing times, or conversely, allow the retrieval of additional parameters when operating within a fixed computational budget. We would like to emphasize at this point that accelerating atmospheric retrievals is not only relevant for the analysis of existing observational data (e.g., from the James Webb Space Telescope), but can also be beneficial during the planning stages of future exoplanet science missions, such as LIFE (Quanz et al., 2022) or the recently announced Habitable Worlds Observatory (Clery, 2023), which require the simulation of thousands of retrievals. Finally, looking to the future, it seems plausible that approaches that replace the entire Bayesian inference routine with machine learning may gain further traction in the context of atmospheric retrievals (see, e.g., Marquez-Neila et al., 2018; Soboczenski et al., 2018; Cobb et al., 2019; Ardevol Martinez et al., 2022; Yip et al., 2022, or Vasist et al., 2023). In this case, reducing the number of retrieval parameters may no longer be a major concern: once trained, the inference time of these models is relatively independent of the number of parameters.
However, even in this case, our approach will still be useful to provide a parameterized description of the posterior over physically consistent PT profiles.
### The role of the training dataset
The practical value of our method stands or falls with the dataset that is used to train it. This should come as no surprise, but there are a few aspects to this that we will discuss in more detail here. First, it is important to understand that the training dataset defines a distribution over the functions that our model learns to provide parameterized access to. When using our model in an atmospheric retrieval, this distribution then becomes the prior over the set of PT profiles to be considered. Compared to, for example, low-order polynomials, this is a much narrower prior: While polynomials can fit virtually anything (this is the idea of a Taylor series), our model will only produce outputs that resemble the PT profiles that it encountered during training. If this is a good fit for a given target profile, we can expect to significantly outperform the broader prior of polynomials. Conversely, if our prior does not cover the target profile, we cannot expect good results. However, this is not a limitation of our method in particular, but is true for all approaches that use a parameterization for the PT profile instead of, for example, a level-by-level radiative transfer approach. As Line et al. (2013) explain, using "[a] parameterization does force the retrieved atmospheric temperature structure to conform only to the profile shapes and physical approximations allowed by that parameterization." Thus, as always in Bayesian inference, the decision to use our method as a prior for the PT profile (rather than, e.g., low-order polynomials) represents an assumption about the space of possible explanations for the data (e.g., the observed spectrum) that one is willing to accept, and it is up to the scientist performing an atmospheric retrieval to decide whether that assumption is justified. Finally, it is important to keep in mind that our model will learn to replicate any anomalies and imbalances present in the training data. For example, if atmospheres without inversions are overrepresented in the training data (compared to atmospheres with inversions), our learned function distribution will also place a higher probability on PT profiles without inversions, which will obviously affect the results of a retrieval. One simple remedy, to which we return in Section 5.3, is to rebalance the training set before training (see the sketch below).
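The following is a minimal sketch of such a rebalancing step, using our own heuristic definition of a thermal inversion (an assumption for illustration, not the paper's criterion):

```python
# Minimal sketch (our illustration): balance profiles with and without a
# thermal inversion before training. `profiles` is a list of (p, t) arrays,
# assumed ordered from high pressure (bottom) to low pressure (top).
import numpy as np

def has_inversion(t):
    # Heuristic: the temperature increases somewhere with altitude.
    return bool(np.any(np.diff(t) > 0.0))

def balance_by_inversion(profiles, seed=0):
    rng = np.random.default_rng(seed)
    flags = np.array([has_inversion(t) for _, t in profiles])
    idx_inv, idx_no = np.flatnonzero(flags), np.flatnonzero(~flags)
    n = min(len(idx_inv), len(idx_no))
    keep = np.concatenate([rng.choice(idx_inv, n, replace=False),
                           rng.choice(idx_no, n, replace=False)])
    return [profiles[int(i)] for i in keep]
```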
\begin{table}
\begin{tabular}{l l l c c c} \hline \hline
Parameter & Description & Prior & True value & Retrieved (polynomial) & Retrieved (our method) \\ \hline
\(a_{4}\) & PT profile parameter (polynomial) & Uniform(0, 10) & — & 1.33 \({}^{+0.39}_{-0.67}\) & — \\
\(a_{3}\) & PT profile parameter (polynomial) & Uniform(0, 100) & — & 30.49 \({}^{+0.83}_{-2.73}\) & — \\
\(a_{2}\) & PT profile parameter (polynomial) & Uniform(0, 300) & — & 106.78 \({}^{+15.09}_{-87.47}\) & — \\
\(a_{1}\) & PT profile parameter (polynomial) & Uniform(0, 500) & — & 204.71 \({}^{+118.24}_{-118.24}\) & — \\
\(a_{0}\) & PT profile parameter (polynomial) & Uniform(0, 600) & — & 384.86 \({}^{+178.00}_{-500.09}\) & — \\
\(z_{1}\) & PT profile parameter (our method) & Uniform(\(-4,4\)) & — & — & \(-2.56\) \({}^{+1.69}_{-1.34}\) \\
\(z_{2}\) & PT profile parameter (our method) & Uniform(\(-4,4\)) & — & — & \(-0.04\) \({}^{+1.19}_{-1.55}\) \\
\(\log_{10}(P_{0})\) & Surface pressure (in bar) & Uniform(\(-2,2\)) & 0.013 & \(-0.48\) \({}^{+0.60}_{-0.61}\) & \(-0.05\) \({}^{+0.14}_{-0.16}\) \\
\(R_{\mathrm{pl}}\) & Planet radius (in \(R_{\oplus}\)) & Normal(\(1,0.2\)) & 1.00 & 1.00 \({}^{+0.03}_{-0.57}\) & 1.00 \({}^{+0.03}_{-0.65}\) \\
\(\log_{10}(M_{\mathrm{pl}})\) & Planet mass (in \(M_{\oplus}\)) & Normal(\(0,0.4\)) & 0.00 & 0.02 \({}^{+0.87}_{-0.57}\) & 0.03 \({}^{+0.65}_{-0.65}\) \\
\(\log_{10}(X_{\mathrm{N_{2}}})\) & N\({}_{2}\) mass fraction & Uniform(\(-2,0\)) & \(-0.11\) & \(-1.04\) \({}^{+0.89}_{-0.89}\) & \(-1.05\) \({}^{+0.93}_{-0.83}\) \\
\(\log_{10}(X_{\mathrm{O_{2}}})\) & O\({}_{2}\) mass fraction & Uniform(\(-2,0\)) & \(-0.70\) & \(-1.10\) \({}^{+0.86}_{-0.88}\) & \(-1.11\) \({}^{+0.99}_{-0.94}\) \\
\(\log_{10}(X_{\mathrm{CO_{2}}})\) & CO\({}_{2}\) mass fraction & Uniform(\(-10,0\)) & \(-3.40\) & \(-2.72\) \({}^{+1.33}_{-1.25}\) & \(-3.22\) \({}^{+0.80}_{-0.75}\) \\
\(\log_{10}(X_{\mathrm{CH_{4}}})\) & CH\({}_{4}\) mass fraction & Uniform(\(-10,0\)) & \(-5.77\) & \(-5.06\) \({}^{+1.56}_{-1.27}\) & \(-5.61\) \({}^{+0.84}_{-0.84}\) \\
\(\log_{10}(X_{\mathrm{H_{2}O}})\) & H\({}_{2}\)O mass fraction & Uniform(\(-10,0\)) & \(-3.00\) & \(-2.29\) \({}^{+1.36}_{-1.32}\) & \(-2.82\) \({}^{+0.77}_{-0.73}\) \\
\(\log_{10}(X_{\mathrm{O_{3}}})\) & O\({}_{3}\) mass fraction & Uniform(\(-10,0\)) & \(-6.52\) & \(-6.00\) \({}^{+1.07}_{-1.04}\) & \(-6.39\) \({}^{+0.72}_{-0.69}\) \\
\(\log_{10}(X_{\mathrm{CO}})\) & CO mass fraction & Uniform(\(-10,0\)) & \(-6.90\) & \(-7.47\) \({}^{+2.93}_{-2.40}\) & \(-7.76\) \({}^{+2.45}_{-2.13}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overview of all parameters in our simulated atmospheric retrieval, including the respective priors, true values, and retrieved values for both PT profile parameterization schemes. For the latter, we report the median as well as the 2.5% and 97.5% percentiles.
### Choosing a prior distribution for \(\mathbf{z}\)
The last point in the previous section is closely related to the question of which prior one should choose for \(\mathbf{z}\) during a retrieval. The natural choice, of course, is to assume a \(d\)-dimensional standard Gaussian prior for \(\mathbf{z}\) as that is the distribution that we encourage during training by using the \(\mathcal{L}_{\text{MMD}}\) loss. However, there are a few additional considerations to keep in mind here. First, using a Gaussian prior for \(\mathbf{z}\) means that the induced distribution over PT profiles, given by \(D(\cdot\,|\,\mathbf{z})\), will (approximately) match the distribution in the training dataset.
This is a good choice if the training dataset itself constitutes a good prior for the PT profiles one wants to consider during a retrieval. On the other hand, consider now the case mentioned above where the training dataset contains all the expected shapes, but is highly biased towards one of the modes, not because this reflects the (assumed) true distribution of atmospheres, but as an artifact of the dataset generation process. In this case, it may make sense to choose a non-Gaussian prior (e.g., a uniform distribution) for \(\mathbf{z}\) to compensate for this imbalance. When doing so, it is important to ensure that \(\mathbf{z}\) stays in the value range that the decoder \(D\) encountered during training (i.e., \(\|\mathbf{z}\|<\tau\)). Another (and perhaps more principled) way to solve this problem is to resample the training dataset to balance the different modes (e.g., "inversion" and "no inversion"). For our experiment in Section 4.4, we have validated that such a resampling-based approach does indeed work, but have chosen not to include all the details of this additional experiment so as not to distract too much from the main message of our paper. Finally, there is the fact that our trained model does not only return physically consistent PT profiles, but also allows smooth interpolation (or, to some extent, extrapolation) between them. In cases where this is not desired, for example because one wants to be sure that only PT profiles very close to the training dataset are produced, one can choose a data-driven prior distribution for \(\mathbf{z}\). A simple way to achieve this is to train a normalizing flow (see, e.g., Kobyzev et al. 2021 or Papamakarios et al. 2021 for recent introductions) to match the distribution of \(\mathbf{z}\)'s obtained from the training dataset. If we then use the normalizing flow to sample \(\mathbf{z}\), we will almost perfectly reproduce the distribution of the training dataset. However, this can also exacerbate some of the problems we discussed earlier, for example, if the training data is unbalanced.
### Directions for future research
We see several directions for future work. For example, to account for the fact that not all parts of the atmosphere contribute equally to the observed spectrum, one possible approach could be to use a modified reconstruction loss weighted by the respective contribution function for each point in the profile. In the case of emission spectra, this could focus the expressiveness of the model on regions of higher pressure, which have a greater influence on the spectrum, rather than regions at high altitudes, where the thermal structure of the atmosphere is difficult to constrain from the available observational data. For transmission spectra, which probe the upper layers of the atmosphere, one would do the opposite and give more weight to high altitudes. Another promising direction could be to extend our method to parameterize not only the temperature as a function of pressure, but also chemical abundance profiles (e.g., the concentration of CO\({}_{2}\) in each layer), to make them accessible during a retrieval. Finally, given a sufficiently large grid of training data, our approach can also be used to parameterize the thermal structure of 3-dimensional atmospheres, where the temperature depends not only on pressure but also on longitude and latitude.
## 6 Summary and conclusion
In this work, we have introduced a new approach to parameterizing PT profiles in the context of atmospheric retrievals of exoplanets.
Unlike existing approaches for this problem, we do not make any a priori assumptions about the family of functions that describe the relation between \(P\) and \(T\), but instead learn a distribution over such functions from data. In our framework, PT profiles are represented by low-dimensional real vectors that can be used to condition a particular neural network, which, once conditioned, _is_ the corresponding PT profile (i.e., a single-argument, single-output function that maps pressure onto temperature values).
Figure 9: Results for our simulated atmospheric retrieval of an Earth-like planet using petitRADTRANS, LIFEsim, and PyMultiNest. _Left:_ The retrieved PT profiles for the polynomial baseline and our model, together with the emission contribution function. _Right:_ The relative fitting error of the spectrum, computed as (true spectrum − simulated spectrum) / true spectrum.
We experimentally demonstrated that our proposed method works for both terrestrial planets and hot Jupiters, and when compared with existing methods for parameterizing PT profiles, we found that our method gives better approximations on average (i.e., smaller fitting errors) while requiring a smaller number of parameters. Furthermore, we showed that our learned latent spaces are still interpretable to some extent: For example, the latent representations are often directly correlated with other properties of the atmospheres, meaning that physically similar profiles are grouped together in the latent space. Finally, we have demonstrated by example that our method can be easily integrated into an existing atmospheric retrieval framework, where it resulted in a significant speedup in terms of inference time, and produced a tighter and more accurate posterior for the PT profile than the baseline. Given access to a sufficient training dataset, we believe that our method has the potential to become a valuable tool for exoplanet characterization: Not only could it allow the use of physically consistent PT profiles during a retrieval, but it can also reduce the number of parameters required to model the thermal structure of an atmosphere. This can either reduce the overall computational cost or free up resources for other retrieval parameters of interest, allowing for more efficient and accurate atmospheric retrievals, which in turn could improve our understanding of exoplanetary habitability and the potential for extraterrestrial life.
###### Acknowledgements.
We thank Sandra T. Bastelberger and Nicholas Wogan for pointing out the potential problems with the PyATMOS simulations and taking the time to discuss the issue with us. Furthermore, we thank Markus Bonse, Felix Dannert, Maximilian Dax, Emily Garvin, Jean Hayoz and Vincent Stimper for their helpful comments on the manuscript. Part of this work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grants 51NF40-182901 and 51NF40-205606. SPQ and EA acknowledge the financial support of the SNSF. TDG acknowledges funding through the Max Planck ETH Center for Learning Systems. BSK acknowledges funding through the European Space Agency's Open Space Innovation Platform (ESA OSIP) program.
2306.13385
Solving a class of multi-scale elliptic PDEs by means of Fourier-based mixed physics informed neural networks
Deep neural networks have garnered widespread attention due to their simplicity and flexibility in the fields of engineering and scientific calculation. In this study, we probe into solving a class of elliptic partial differential equations(PDEs) with multiple scales by utilizing Fourier-based mixed physics informed neural networks(dubbed FMPINN), its solver is configured as a multi-scale deep neural network. In contrast to the classical PINN method, a dual (flux) variable about the rough coefficient of PDEs is introduced to avoid the ill-condition of neural tangent kernel matrix caused by the oscillating coefficient of multi-scale PDEs. Therefore, apart from the physical conservation laws, the discrepancy between the auxiliary variables and the gradients of multi-scale coefficients is incorporated into the cost function, then obtaining a satisfactory solution of PDEs by minimizing the defined loss through some optimization methods. Additionally, a trigonometric activation function is introduced for FMPINN, which is suited for representing the derivatives of complex target functions. Handling the input data by Fourier feature mapping will effectively improve the capacity of deep neural networks to solve high-frequency problems. Finally, to validate the efficiency and robustness of the proposed FMPINN algorithm, we present several numerical examples of multi-scale problems in various dimensional Euclidean spaces. These examples cover both low-frequency and high-frequency oscillation cases, demonstrating the effectiveness of our approach. All code and data accompanying this manuscript will be made publicly available at \href{https://github.com/Blue-Giant/FMPINN}{https://github.com/Blue-Giant/FMPINN}.
Xi'an Li, Jinran Wu, You-Gan Wang, Xin Tai, Jianhua Xu
2023-06-23T09:10:43Z
http://arxiv.org/abs/2306.13385v6
Solving a class of multi-scale elliptic PDEs by Fourier-based mixed physics informed neural networks
###### Abstract
Deep neural networks have garnered widespread attention due to their simplicity and flexibility in the fields of engineering and scientific calculation. In this study, we probe into solving a class of elliptic partial differential equations (PDEs) with multiple scales by utilizing Fourier-based mixed physics informed neural networks (dubbed FMPINN); its solver is configured as a multi-scale deep neural network. In contrast to the classical PINN method, a dual (flux) variable about the rough coefficient of PDEs is introduced to avoid the ill-condition of the neural tangent kernel matrix caused by the oscillating coefficient of multi-scale PDEs. Therefore, apart from the physical conservation laws, the discrepancy between the auxiliary variables and the gradients of multi-scale coefficients is incorporated into the cost function; a satisfactory solution of the PDEs is then obtained by minimizing the defined loss through some optimization methods. Additionally, a trigonometric activation function is introduced for FMPINN, which is suited for representing the derivatives of complex target functions. Handling the input data by Fourier feature mapping will effectively improve the capacity of deep neural networks to solve high-frequency problems. Finally, to validate the efficiency and robustness of the proposed FMPINN algorithm, we present several numerical examples of multi-scale problems in various dimensional Euclidean spaces. These examples cover both low-frequency and high-frequency oscillation cases, demonstrating the effectiveness of our approach. All code and data accompanying this manuscript will be made publicly available at [https://github.com/Blue-Giant/FMPINN](https://github.com/Blue-Giant/FMPINN).
keywords: Multi-scale; Rough coefficient; FMPINN; Fourier feature mapping; Flux variable; Reduce order
**AMS subject classifications.** 35J25, 65N99, 68T07
## 1 Introduction
Multi-scale problems, governed by partial differential equations (PDEs) with multiple scales, are prevalent in diverse scientific and engineering fields like reservoir simulation, high-frequency scattering, and turbulence modeling. This paper focuses on solving the following type of multi-scale problem: \[\begin{cases}-\mathbf{div}\bigg{(}A^{\varepsilon}(\mathbf{x})\nabla u^{\varepsilon }(\mathbf{x})\bigg{)}=f(\mathbf{x}),&\mathbf{x}\in\Omega,\\ \mathcal{B}u^{\varepsilon}(\mathbf{x})=g(\mathbf{x}),&\mathbf{x}\in\partial\Omega,\end{cases} \tag{1.1}\] where \(\Omega\) is a bounded subset of \(\mathbb{R}^{d}(d=1,2,3,\ldots)\) with piecewise Lipschitz boundary that satisfies the interior cone condition, and \(\varepsilon\) is a small positive parameter that explicitly signifies the multi-scale nature of the rough coefficient \(A^{\varepsilon}(\mathbf{x})\). \(\mathcal{B}\) is a boundary operator on \(\partial\Omega\) that imposes the boundary condition of \(u^{\varepsilon}\), such as Dirichlet, Neumann or Robin. \(\nabla\) and \(\mathbf{div}\) are the gradient and divergence operators, respectively. \(f(\mathbf{x})\in L^{2}(\Omega)\) is a given function. In addition, \(A^{\varepsilon}(\mathbf{x})\) is symmetric and uniformly elliptic on \(\Omega\). This means that all eigenvalues of \(A^{\varepsilon}\) are uniformly bounded by two strictly positive constants \(\lambda_{\min}(A^{\varepsilon})\) and \(\lambda_{\max}(A^{\varepsilon})\).
In other words, for all \(\mathbf{x}\in\Omega\) and \(\mathbf{\xi}\in\mathbb{R}^{d}\), we have \[\lambda_{\min}(A^{\varepsilon})|\mathbf{\xi}|^{2}\leqslant\mathbf{\xi}^{T}A^{\varepsilon }(\mathbf{x})\mathbf{\xi}\leqslant\lambda_{\max}(A^{\varepsilon})|\mathbf{\xi}|^{2}. \tag{1.2}\] The multi-scale problem (1.1) frequently arises in physical simulations and engineering applications, including the study of flow in porous media and the analysis of the mechanical properties of composite materials [1, 2, 3]. Generally, analytical solutions of (1.1) are seldom available, so solving this problem numerically through approximation methods is necessary. Many numerical methods focusing on efficient, accurate and stable schemes have achieved favorable results, such as heterogeneous multi-scale methods [2, 3, 4], numerical homogenization [5, 6, 7], variational multi-scale methods [8, 9], multi-scale finite element methods [10, 11, 12], flux norm homogenization [13, 14], rough polyharmonic splines (RPS) [15], generalized multi-scale finite element methods [16, 17, 18], localized orthogonal decomposition [19, 20], etc. In contrast to standard numerical methods such as FEM and FDM, they substantially alleviate the computational complexity of handling all relevant scales, improve numerical stability and expedite convergence. However, they generally still encounter the difficulties of complex domains and the curse of dimensionality. Deep neural networks (DNN), an efficient meshfree approach requiring no discretization of the domain of interest, have drawn more and more attention from researchers for numerically solving ordinary and partial differential equations as well as inverse problems on complex geometrical domains and in high-dimensional cases [21, 22, 23, 24, 25, 26, 27], due to their extraordinary universal approximation capacity [28]. Among these methods, physics-informed neural networks (PINN), whose ideas date back to the early 1990s, have again attracted widespread attention of researchers and have made remarkable achievements in approximating the solutions of PDEs by embracing physical laws within neural networks, on account of the rapid development of computer science and technology [24, 29]. This method skillfully incorporates the residual of the governing equations and the discrepancy of the boundary/initial constraints, then formulates a cost function that can be optimized easily via automatic differentiation in the DNN. Efforts made to further enhance the performance of PINN can be grouped into two aspects: refining the selection of the residual term and designing the manner of imposing initial/boundary constraints. In terms of the residual term, there are XPINN [30], cPINN [31], two-stage PINN [32] and gPINN [33], and so on. By subtly encoding the initial/boundary constraints into the DNN in a hard manner, the PINN becomes easier to train, has low computational complexity, and can obtain high-precision solutions of PDEs with complex boundary conditions [34, 35, 36] (a minimal sketch of this hard encoding is given at the end of this paragraph). Motivated by the reduction of order in conventional methods [12], some attempts have been made to solve high-order PDEs by reframing them as first-order systems, which overcomes the computational burden of high-order derivatives in DNNs; examples include the deep mixed residual method [27], the local deep learning method [37] and the deep FOSLS method [38, 39].
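To make the hard-constraint encoding just mentioned concrete, here is a minimal sketch (our illustration, not code from the cited works) for a homogeneous Dirichlet condition on \(\Omega=[0,1]\): the ansatz vanishes at both endpoints by construction, so no boundary loss term is required.

```python
# Minimal sketch: hard encoding of u(0) = u(1) = 0 on [0, 1].
import torch

def hard_constrained_u(net, x):
    # net: any torch.nn.Module mapping x of shape (N, 1) to a raw output (N, 1);
    # the factor x * (1 - x) forces the ansatz to vanish on the boundary.
    return x * (1.0 - x) * net(x)
```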
Many studies and experiments have indicated that general DNN-based algorithms can solve low-frequency problems well in various dimensions, but encounter tremendous challenges for high-frequency problems such as the multi-scale PDEs (1.1). The frequency principle [40] or spectral bias [41] of DNN shows that neural networks are typically efficient at fitting objective functions with low-frequency modes but inefficient for high-frequency functions. Consequently, a series of multi-scale DNN (MscaleDNN) algorithms were proposed to overcome the shortcomings of normal DNNs for high-frequency problems by converting high-frequency contents into low-frequency ones via a radial scale technique [42, 43, 44, 45]. After that, some corresponding mechanisms were developed to explain this behavior of DNNs, such as the Neural Tangent Kernel (NTK) [46, 47]. Furthermore, many researchers attempted to utilize a Fourier feature mapping consisting of sine and cosine to improve the capacity of MscaleDNN, which alleviates the pathology of spectral bias and lets neural networks capture high-frequency components effectively [48, 49, 50, 45, 46]. Recently, some works [52, 53] have shown that the general PINN architecture is unable to capture the multi-scale property of the solution due to the effect of the rough coefficient in multi-scale PDEs. In [52], Wing Tat Leung et al. proposed a neural homogenization-based PINN (NH-PINN) method to solve (1.1); it can well overcome the non-convergence of PINN for multi-scale problems. However, NH-PINN still encounters the curse of dimensionality and a heavy computational burden, because it converts a low-dimensional problem into a high-dimensional one. By carefully analyzing the Neural Tangent Kernel matrix associated with the PINN, Sean P. Carney et al. [53] found that the Frobenius norm of the NTK matrix becomes unbounded as the oscillation factor \(\varepsilon\) in \(A^{\varepsilon}\) tends to zero. This means that the evolution of the residual loss term in PINN becomes increasingly stiff as \(\varepsilon\to 0\), leading to poor training behavior for PINN. In this paper, a Fourier-based multi-scale mixed PINN (FMPINN) structure is proposed to solve the multi-scale problems (1.1) with rough coefficients. This method combines the general PINN architecture with the aforementioned MscaleDNN model, whose subnetworks are used to capture different frequency components. To overcome the weakness of the normal PINN, which fails to capture the jumping gradient information of the oscillating coefficient when tackling the governing equation in the multi-scale PDEs (1.1), a dual (flux) variable is introduced to alleviate the adverse effect of the rough coefficient. Meanwhile, it also reduces the computational burden of PINN for the second-order derivatives of the space variables. In addition, the Fourier feature mapping is used in our model to learn each target frequency efficiently and express the derivatives of multi-frequency functions easily; this remarkably improves the capacity of our FMPINN model to solve multi-scale problems. In a nutshell, the primary contributions of this paper are summarized as follows:
1. We propose a novel neural network approach by combining the normal PINN with a MscaleDNN of subnetwork structure to address multi-scale problems, leveraging the Fourier theorem and the F-principle of DNNs.
2. Inspired by the reduced order scheme for high-order PDEs, a dual (flux) variable with respect to the rough coefficient of multi-scale PDEs is introduced to address the gradient leakage of the rough coefficient in PINN.
3. Through some numerical experiments, we show that the classical PINN method with a MscaleDNN solver is still insufficient for providing accurate solutions of multi-scale equations.
4. We showcase the exceptional performance of FMPINN in solving a class of multi-scale elliptic PDEs with essential boundaries in various dimensional spaces. Our method outperforms existing approaches and demonstrates its superiority in addressing these complex problems.
The remaining parts of our work are organized as follows. In Section 2, we briefly introduce the underlying concepts and formulations of MscaleDNN and the structure of PINN. Section 3 provides a unified architecture of the FMPINN to solve the elliptic multi-scale problem (1.1) based on its equivalent reduced-order scheme, and discusses the choice of activation function as well as the error analysis of our proposed method. Section 4 details the FMPINN algorithm for approximating the solution of multi-scale PDEs. In Section 5, several multi-scale PDE scenarios are presented to evaluate the feasibility and effectiveness of our proposed method. Finally, conclusions are drawn in Section 6.
## 2 Multi-scale Physics Informed Neural Networks
### Multi-scale Deep Neural Networks with ResNet technique
The basic concept and formulation of DNN are described briefly in this section, which helps readers understand the DNN structure in functional terminology. Mathematically, a deep neural network defines the following mapping \[\mathcal{F}:\mathbf{x}\in\mathbb{R}^{d}\Longrightarrow\mathbf{y}=\mathcal{F}(\mathbf{x})\in \mathbb{R}^{c} \tag{2.1}\] with \(d\) and \(c\) being the dimensions of the input and output, respectively. In fact, the DNN functional \(\mathcal{F}\) is a nested composition of the following single-layer neural unit: \[\mathbf{y}=\{y_{1},y_{2},\cdots,y_{m}\}\ \ \text{and}\ \ y_{l}=\sigma\left(\sum_{n=1} ^{d}w_{ln}*x_{n}+b_{l}\right) \tag{2.2}\] where \(w_{ln}\) and \(b_{l}\) are called the weight and bias of the \(l_{th}\) neuron, respectively, and \(\sigma(\cdot)\) is an element-wise non-linear operator, generally referred to as the activation function. Then, we have the following formulation of a DNN: \[\mathbf{y}^{[\ell]}=\sigma\circ(\mathbf{W}^{[\ell]}\mathbf{y}^{[\ell-1]}+\mathbf{b}^{[\ell]}),\ \ \text{for}\ \ \ell=1,2,3,\cdots\cdots,L \tag{2.3}\] and \(\mathbf{y}^{[0]}=\mathbf{x}\), where \(\mathbf{W}^{[\ell]}\in\mathbb{R}^{n_{\ell+1}\times n_{\ell}},\mathbf{b}^{[\ell]}\in \mathbb{R}^{n_{\ell+1}}\) stand for the weight matrix and bias vector of the \(\ell\)-th hidden layer, respectively, \(n_{0}=d\) and \(n_{L+1}\) is the dimension of the output, and "\(\circ\)" stands for the element-wise operation. For convenience, the output of the DNN is denoted by \(\mathbf{y}(\mathbf{x};\mathbf{\theta})\) with \(\mathbf{\theta}\) standing for all its weights and biases. Residual neural networks (ResNet) [54], a common technique that introduces skip connections between adjacent or nonadjacent hidden layers, can effectively overcome the vanishing gradient of parameters during backpropagation, making the network much easier to train and improving its performance (a minimal sketch of such a residual block is given below).
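Anticipating the one-step skip connection scheme formalized in the next paragraph, a residual block of this kind can be sketched as follows; the layer sizes and activation are placeholders.

```python
# Minimal sketch of a one-step ResNet block: the skip connection is applied
# only when two consecutive layers have the same number of neurons.
import torch

class ResBlock(torch.nn.Module):
    def __init__(self, dim_in, dim_out, act=torch.tanh):
        super().__init__()
        self.linear = torch.nn.Linear(dim_in, dim_out)
        self.act = act
        self.use_skip = (dim_in == dim_out)   # skip only if sizes match

    def forward(self, y):
        out = self.act(self.linear(y))
        return y + out if self.use_skip else out
```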
Many experimental results have shown that ResNet can also improve the performance of DNNs in approximating high-order derivatives and solutions of PDEs [21; 27]. We utilize the one-step skip connection scheme of ResNet in this work. In addition to the normal data flow, the data also flows along the skip connection if two consecutive layers in the DNN have the same number of neurons; otherwise, the data flows directly from one layer to the next. The filtered \(\mathbf{y}^{[\ell+1]}(\mathbf{x};\mathbf{\theta})\) produced by the input \(\mathbf{y}^{[\ell]}(\mathbf{x};\mathbf{\theta})\) is expressed as \[\mathbf{y}^{[\ell+1]}(\mathbf{x};\mathbf{\theta})=\mathbf{y}^{[\ell]}(\mathbf{x};\mathbf{\theta})+ \sigma\circ\bigg{(}\mathbf{W}^{[\ell+1]}\mathbf{y}^{[\ell]}(\mathbf{x};\mathbf{\theta})+\mathbf{b} ^{[\ell+1]}\bigg{)}.\] As we are aware, a normal DNN model is capable of providing a satisfactory solution for general problems. However, it encounters serious difficulties in solving multi-scale problems with high-frequency components. Recently, the MscaleDNN architecture has shown remarkable performance in dealing with high-frequency problems by converting the original data to a low-frequency space [42; 43; 44; 46]. A schematic diagram of a MscaleDNN with \(Q\) subnetworks is depicted in Fig. 1. The detailed procedure of MscaleDNN is described in the following.
Figure 1: A schematic diagram of MscaleDNN with \(Q\) subnetworks; \(\sigma\) stands for the activation function.
1. Generating a scale vector or matrix with \(Q\) parts \[\Lambda=(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}\cdots,\mathbf{k}_{Q-1},\mathbf{k}_{Q})^{T},\] (2.4) where \(\mathbf{k}_{i}(i=1,2,\ldots,Q)\) is a scalar or matrix (trainable or untrainable).
2. Converting the input data \(\mathbf{x}\) into \(\tilde{\mathbf{x}}=\Lambda\odot\mathbf{x}\) with \(\odot\) being the Hadamard product, then feeding \(\tilde{\mathbf{x}}\) into the pipeline of MscaleDNN. That is, \[\begin{cases}\hat{\mathbf{x}}=\mathbf{k}_{i}\mathbf{x}\\ \mathbf{F}_{i}(\mathbf{x})=\mathcal{FCN}_{i}\left(\hat{\mathbf{x}}\right)\end{cases}\qquad i =1,2,\ldots,Q,\] (2.5) where \(\mathcal{FCN}_{i}\) stands for the \(i_{th}\) fully connected subnetwork and \(\mathbf{F}_{i}\) is its output.
3. Obtaining the result of MscaleDNN by linearly aggregating the outputs of all subnetworks, where each scaled input goes through its own subnetwork: \[\mathbf{NN}(\mathbf{x})=\mathbf{W}_{O}\cdot[\mathbf{F}_{1}(\mathbf{x}),\mathbf{F}_{2}(\mathbf{x}),\cdots, \mathbf{F}_{Q}(\mathbf{x})]+\mathbf{b}_{O},\] (2.6) where \(\mathbf{W}_{O}\) and \(\mathbf{b}_{O}\) stand for the weights and biases of the last linear layer, respectively.
From the perspective of Fourier transformation and decomposition, the first layer of the MscaleDNN model can be treated as a series of basis functions in Fourier space, and its output is a combination of these basis functions [42; 44; 46].
### Overview of Physics-Informed Neural Networks
Within the scope of PINN, we take a parameterized PDE as a toy problem to show its implementation: \[\begin{split}\mathcal{N}_{\mathbf{\lambda}}[\hat{u}(\mathbf{x})]=\hat{f} (\mathbf{x}),\quad\mathbf{x}\in\Omega\\ \mathcal{B}\hat{u}\left(\mathbf{x}\right)=\hat{g}(\mathbf{x}),\qquad\mathbf{x} \in\partial\Omega\end{split} \tag{2.7}\] in which \(\mathcal{N}_{\mathbf{\lambda}}\) stands for a linear or nonlinear differential operator with parameters \(\mathbf{\lambda}\), and \(\mathcal{B}\) is the boundary operator, such as Dirichlet, Neumann, periodic boundary conditions, or a mixed form of them.
\(\Omega\) and \(\partial\Omega\) respectively denote the domain of interest and its boundary. For approximating the solution of the multi-scale PDE, a multi-scale deep neural network is used. In classical PINN, the ideal parameters of the DNN can be obtained by minimizing the following composite loss function \[Loss=Loss_{R}+\gamma Loss_{B} \tag{2.8}\] with \[\begin{split} Loss_{R}&=\frac{1}{N_{R}}\sum_{i=1}^ {N_{R}}\left\|\mathcal{N}_{\mathbf{\lambda}}[u_{NN}(\mathbf{x}_{I}^{i})]-\hat{f}(\mathbf{ x}_{I}^{i})\right\|^{2}\\ Loss_{B}&=\frac{1}{N_{B}}\sum_{j=1}^{N_{B}}\left\| \mathcal{B}u_{NN}\left(\mathbf{x}_{B}^{j}\right)-\hat{g}(\mathbf{x}_{B}^{j})\right\|^ {2}\end{split} \tag{2.9}\] where \(\gamma>0\) is used to control the contribution of the corresponding loss term. \(Loss_{R}\) and \(Loss_{B}\) depict the residual of the governing equations and the loss on the boundary condition, respectively. If some additional observed data are available inside the domain of interest, then a loss term indicating the mismatch between the predictions produced by the DNN and the observations can be taken into account: \[Loss_{D}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\left\|u_{NN}(\mathbf{x}^{i})-u_{Data}^{i }\right\|^{2}. \tag{2.10}\]
## 3 Fourier-based mixed PINN to solve multi-scale problem
In this section, the unified architecture of FMPINN is proposed to overcome the adverse effect of the derivative of the rough coefficient \(A^{\varepsilon}\) by embracing a multi-output neural network with an equivalent reduced-order formulation of the multi-scale problem (1.1).
### Failure of classical PINN
Despite the success of various PINN models in studying ordinary and partial differential equations, it has been observed in [52] that the classical PINN approach fails to provide accurate predictions for the multi-scale PDEs (1.1). Furthermore, we find that a direct application of the PINN with a multi-scale DNN framework to solving (1.1) still cannot provide a satisfactory solution, because of the ill-posed NTK matrix caused by the rough coefficient \(A^{\varepsilon}\). For example, let us consider the following one-dimensional elliptic equation with a homogeneous Dirichlet boundary in \(\Omega=[0,1]\): \[\begin{cases}-\dfrac{d}{dx}\bigg{(}A^{\varepsilon}(x)\dfrac{d}{dx}u^{ \varepsilon}(x)\bigg{)}=5\cos(\pi x)\\ \qquad\qquad\qquad u^{\varepsilon}(0)=u^{\varepsilon}(1)=0\end{cases},\] in which \(A^{\varepsilon}(x)=\dfrac{1+x^{2}}{2+\sin(2\pi x/\varepsilon)}\) with \(\varepsilon>0\) being a small constant. We employ the classical PINN method with the MscaleDNN framework (see Fig. 1) to solve (1.1), and call this method MPINN. The scale factors \(\Lambda\) for MscaleDNN are set as \((1,2,3,4,5,10,\cdots,90,95,100)\) and the size of each subnetwork is chosen as \((1,30,40,30,30,30,1)\). The activation function of the first hidden layer for all subnetworks is set as the Fourier feature mapping (see Section 3.3) and the other activation functions (except for their output layers) are chosen as \(\frac{1}{2}\sin(x)+\frac{1}{2}\cos(x)\) [55]; the output layers are all linear. For \(\varepsilon=\frac{1}{32},\frac{1}{64}\) and \(\frac{1}{128}\), we train the aforementioned MPINN model for \(50000\) epochs and conduct testing every \(1000\) epochs within the training cycle. The optimizer is set as Adam with
an initial learning rate of \(0.01\), and the learning rate decays by \(2.5\%\) every \(100\) epochs. Finally, the results are demonstrated in Fig. 2.
Figure 2: Left: the rough coefficient \(A^{\varepsilon}\). Middle: the MPINN approximated solution vs the reference solution. Right: the \(l^{2}\) relative error versus the testing epoch.
For \(\varepsilon=1/32\), the coefficient \(A^{\varepsilon}(x)\) possesses little multi-scale information, and the MPINN performs quite well. However, the coefficient \(A^{\varepsilon}(x)\) exhibits pronounced multi-scale properties for \(\varepsilon=1/64\), for which the performance of MPINN deteriorates with a large relative error, and the MPINN fails to converge for \(\varepsilon=1/128\). In addition, we ran the MPINN with different setups of the hyperparameters, such as the learning rate, the weight \(\gamma\) for \(Loss_{B}\) in (2.8) and the network size, but we still could not obtain a satisfactory result.
### Unified architecture of FMPINN
Based on the above observations, it is necessary to seek extra techniques to improve the accuracy of the PINN. Inspired by the mixed finite element method [12; 56] and the mixed residual method [27], we can leverage a mixed scheme to solve (1.1) by replacing the flux term \(A^{\varepsilon}\nabla u\) in (1.1) with an auxiliary variable. This strategy not only avoids the unfavorable effect of the oscillating coefficient \(A^{\varepsilon}\), but also reduces the computational burden of the second-order derivatives in the cost function when utilizing a multi-scale deep neural network to approximate the solution of (1.1). Therefore, we introduce a flux variable \(\mathbf{\phi}(\mathbf{x})=\big{(}\phi_{1}(\mathbf{x}),\ldots,\phi_{d}(\mathbf{x})\big{)}=A^{ \varepsilon}(\mathbf{x})\nabla u^{\varepsilon}(\mathbf{x})\) and rewrite the first equation in (1.1) as the following expressions: \[\begin{split}&-\text{\bf div}\mathbf{\phi}(\mathbf{x})=f(\mathbf{x})\\ &\mathbf{\phi}(\mathbf{x})-A^{\varepsilon}(\mathbf{x})\nabla u^{\varepsilon} (\mathbf{x})=\mathbf{0}\end{split} \tag{3.1}\] Then we instead search for a pair of functions \((u^{\varepsilon},\mathbf{\phi})\) in an admissible space, rather than approximating a unique solution of the original problem (1.1). Here and thereafter, \((u^{\varepsilon},\mathbf{\phi})\in\mathcal{A}=\mathcal{H}^{1}(\Omega)\times \mathcal{H}(\text{\bf div};\Omega)\) with \(\mathcal{H}^{1}(\Omega)=\big{\{}v\in L^{2}(\Omega):\nabla v\in L^{2}(\Omega) \big{\}}\) and \(\mathcal{H}(\text{\bf div};\Omega)=\big{\{}\mathbf{\psi}\in[L^{2}(\Omega)]^{d}: \text{\bf div}\mathbf{\psi}\in L^{2}(\Omega)\big{\}}\). When utilizing numerical solvers to address equation (3.1), one can obtain the optimal solution by minimizing the following least-squares formula in the domain \(\Omega\): \[u^{*},\mathbf{\phi}^{*}=\operatorname*{arg\,min}_{(u,\mathbf{\phi})\in \mathcal{H}^{1}(\Omega)\times\mathcal{H}(\text{\bf div};\Omega)}\mathcal{L}(u,\mathbf{\phi}) \tag{3.2}\] with \[\mathcal{L}(u,\mathbf{\phi})=\int_{\Omega}\big{|}-\text{\bf div}\mathbf{\phi}(\mathbf{x}) -f(\mathbf{x})\big{|}^{2}d\mathbf{x}+\beta\int_{\Omega}\big{|}\mathbf{\phi}(\mathbf{x})-A^{ \varepsilon}(\mathbf{x})\nabla u^{\varepsilon}(\mathbf{x})\big{|}^{2}d\mathbf{x} \tag{3.3}\] where \(\beta>0\) is used to adjust the approximation error between the flux variable and the flux term. Generally, two independent neural networks would be necessary to approximate the flux variable \(\mathbf{\phi}\) and the solution \(u\), but \(\mathbf{\phi}\) is unconstrained without any coercive boundary condition.
Based on the capability of DNNs to approximate any complex linear and non-linear functions, we take a single DNN with multiple outputs to model the ansatzes \(\mathbf{\phi}\) and \(u\), denoted by \(\mathbf{\phi}_{NN}\) and \(u_{NN}\), respectively. Fig. 3 describes the multi-output neural network for an input \(\mathbf{x}\in\mathbb{R}^{2}\).
Figure 3: The multi-output neural network for approximating the state and flux variables.
Once the expressions of the auxiliary functions \(\mathbf{\phi}\) and the solution \(u\) have been determined, we can discretize (3.3) by the Monte Carlo method [57], then employ the PINN conception and obtain the following form \[\mathscr{L}_{in}(S_{I};\mathbf{\theta})=\frac{|\Omega|}{N_{in}}\sum_{i=1}^{N_{in}} \bigg{[}\big{|}-\mathbf{div}\mathbf{\phi}_{NN}(\mathbf{x}_{I}^{i};\mathbf{\theta})-f(\mathbf{x} _{I}^{i})\big{|}^{2}+\beta\big{|}\mathbf{\phi}_{NN}(\mathbf{x}_{I}^{i},\mathbf{\theta})-A^{ \varepsilon}(\mathbf{x}_{I}^{i})\nabla u_{NN}(\mathbf{x}_{I}^{i},\mathbf{\theta})\big{|}^ {2}\bigg{]}, \tag{3.4}\] for \(\mathbf{x}_{I}^{i}\in S_{I}\); here and hereinafter \(S_{I}\) stands for the collection sampled from \(\Omega\) with a prescribed probability density. As in traditional numerical methods such as FDM and FEM for addressing PDEs, boundary conditions play a crucial role in the DNN representation as well. They serve as important constraints that ensure the uniqueness and accuracy of the solution. Consequently, the output \(u_{NN}\) of the DNN should also satisfy the boundary conditions of (1.1), which means \[\mathscr{L}_{bd}(S_{B};\mathbf{\theta})=\frac{1}{N_{bd}}\sum_{j=1}^{N_{bd}}\bigg{[} \mathcal{B}u_{NN}\big{(}\mathbf{x}_{B}^{j};\mathbf{\theta}\big{)}-g(\mathbf{x}_{B}^{j}) \bigg{]}^{2}\to 0\ \ \text{for}\ \ \mathbf{x}_{B}^{j}\in S_{B}; \tag{3.5}\] here and hereinafter \(S_{B}\) represents the collection sampled on \(\partial\Omega\) with a prescribed probability density. According to the above results, the weights and biases of the DNN model are updated by gradually optimizing the following cost function: \[\mathscr{L}(S_{I},S_{B};\mathbf{\theta})=\mathscr{L}_{in}(S_{I};\mathbf{\theta})+ \gamma\mathscr{L}_{bd}(S_{B};\mathbf{\theta}) \tag{3.6}\] where \(S_{I}=\{\mathbf{x}_{I}^{i}\}_{i=1}^{N_{in}}\) and \(S_{B}=\{\mathbf{x}_{B}^{j}\}_{j=1}^{N_{bd}}\) stand for the training data in \(\Omega\) and on \(\partial\Omega\), respectively. The term \(\mathscr{L}_{in}\), composed of the residual governed by the differential equations and the discrepancy with respect to the flux, minimizes the residual of the PDE, whereas the term \(\mathscr{L}_{bd}\) pushes the DNN solver to match the given boundary conditions (a minimal sketch of assembling this loss is given at the end of this subsection). In addition, a constant parameter \(\gamma>0\) is introduced to enforce \(\mathscr{L}_{bd}(S_{B};\mathbf{\theta})\to 0\) in the loss function; it increases gradually as the training process goes on. Based on the analysis in [39], a nonconstant continuous activation function \(\sigma\) guarantees that the mapping \(\mathbf{\theta}\mapsto(u_{NN},\mathbf{\phi}_{NN})\) is continuous; then the distance between the approximation functions \(\mathbf{q}_{NN}=(u_{NN},\mathbf{\phi}_{NN})\) and the exact solution \(\mathbf{q}^{*}=(u^{*},\mathbf{\phi}^{*})\) will decrease as the parameters of the DNN are adjusted gradually, i.e., \[d(\mathbf{q}^{*},\mathcal{A}_{k})=\inf_{\mathbf{q}_{NN}\in\mathcal{A}_{k}}\|\mathbf{q}^{*} -\mathbf{q}_{NN}\|\to 0\ \ \text{as}\ \ k\rightarrow\infty.\] This means the loss function \(\mathscr{L}(S_{I},S_{B};\mathbf{\theta})\) will attain the corresponding minimum when \(d\to 0\).
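For concreteness, the following is a minimal one-dimensional sketch of assembling the cost function (3.6) from (3.4) and (3.5); the two-output network `model`, the coefficient `A_eps`, the source `f`, and the penalty values `beta` and `gamma` are illustrative assumptions, and the volume factor \(|\Omega|\) is absorbed into the weights.

```python
# Minimal 1D sketch of the FMPINN loss in Eqs. (3.4)-(3.6).
import torch

def fmpinn_loss(model, A_eps, f, x_in, x_bd, g_bd, beta=10.0, gamma=20.0):
    x_in = x_in.clone().requires_grad_(True)
    out = model(x_in)                         # (N, 2): columns are (u, phi)
    u, phi = out[:, 0:1], out[:, 1:2]
    grad = lambda y: torch.autograd.grad(
        y, x_in, grad_outputs=torch.ones_like(y), create_graph=True)[0]
    du, dphi = grad(u), grad(phi)             # u_x and phi_x (= div(phi) in 1D)
    res_pde = (-dphi - f(x_in)) ** 2          # residual of -div(phi) = f
    res_flux = (phi - A_eps(x_in) * du) ** 2  # discrepancy phi - A * grad(u)
    loss_in = (res_pde + beta * res_flux).mean()
    u_bd = model(x_bd)[:, 0:1]
    loss_bd = ((u_bd - g_bd) ** 2).mean()     # Dirichlet boundary mismatch
    return loss_in + gamma * loss_bd
```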
Hence, our purpose is to find an optimal set of parameters \(\mathbf{\theta}^{*}\) such that the approximations \(u_{NN}\) and \(\mathbf{\phi}_{NN}\) minimize the loss function \(\mathscr{L}(S_{I},S_{B};\mathbf{\theta})\). In order to obtain the ideal \(\mathbf{\theta}^{*}\), one can update the weights and biases of the DNN through optimization methods such as gradient descent (GD) or stochastic gradient descent (SGD) during the training process. In this context, the SGD method with a "mini-batch" of training data is given by: \[\mathbf{\theta}^{k+1}=\mathbf{\theta}^{k}-\alpha^{k}\nabla_{\mathbf{\theta}^{k}}\mathscr{ L}(\mathbf{x};\mathbf{\theta}^{k})\ \ \text{with}\ \ \mathbf{x}\in S_{I}\ \text{or}\ \mathbf{x}\in S_{B} \tag{3.7}\] where the "learning rate" \(\alpha^{k}\) decreases as \(k\) increases.
### Option of activation function for FMPINN and its explanation
Choosing a suitable and effective activation function is a critical concern when aiming to enhance the performance of DNNs in computer vision, natural language processing, and scientific computation. Generally, an activation function such as the rectified linear unit \(\text{ReLU}(\mathbf{z})\) or the hyperbolic tangent function \(\tanh(\mathbf{z})\) can obviously improve the capacity and nonlinearity of neural networks to address various nonlinear problems, such as the solution of various PDEs and classification. Recently, the works [40; 41] showed that a DNN often captures the low-frequency components of target functions first and then matches the high-frequency components; this is called the spectral bias or frequency preference of DNN. In light of this phenomenon, many researchers attempt to utilize a Fourier feature mapping consisting of sine and cosine as the activation function to improve the capacity of MscaleDNN, which mitigates the pathology of spectral bias and enables networks to learn high frequencies more effectively [41; 46; 49; 50]. It is expressed as follows: \[\zeta(\mathbf{x})=\left[\begin{array}{c}\cos(\mathbf{\kappa}\mathbf{x})\\ \sin(\mathbf{\kappa}\mathbf{x})\end{array}\right], \tag{3.8}\] where \(\mathbf{\kappa}\) is a user-specified vector or matrix (trainable or untrainable) that is consistent with the number of neural units in the first hidden layer of the DNN. Further, the work [45] designed a softened Fourier mapping by introducing a relaxation parameter \(s\in(0,1]\) into \(\zeta(\mathbf{x})\); numerical results show that this modification improves the performance of \(\zeta(\mathbf{x})\). In practice, this activation function is used in the first hidden layer of the DNN and maps the input data in \(\Omega\) into the range \([-1,1]\), which enhances the ability of the DNN and expedites its convergence. Therefore, a real function \(\mathcal{P}(x)\) represented by a DNN can be expressed as follows \[\mathcal{P}(\mathbf{x})=\sum_{n=0}^{\bar{N}}\bigg{(}S\left(\cos(\mathbf{k}_{n}\mathbf{x}); \mathbf{\bar{\theta}}_{n}\right)+T\left(\sin(\mathbf{k}_{n}\mathbf{x});\mathbf{\bar{\theta}}_{n }\right)\bigg{)},\] where \(S(\cdot,\mathbf{\bar{\theta}})\) and \(T(\cdot,\mathbf{\bar{\theta}})\) are DNNs or sub-modules of DNNs, respectively, and \(\{\mathbf{k}_{0},\mathbf{k}_{1},\mathbf{k}_{2},\cdots\}\) are the frequencies of interest of the objective function. Obviously, the first hidden layer performed by the Fourier feature mapping mimics a set of Fourier basis functions, and the remaining blocks with different activation functions are used to learn the coefficients of these basis functions (a minimal sketch of this mapping is given below).
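A minimal sketch of the Fourier feature mapping (3.8) as a first-layer module follows; initializing \(\mathbf{\kappa}\) as a random Gaussian matrix with a scale factor is one common choice and serves here only as an illustration.

```python
# Minimal sketch of the Fourier feature mapping (3.8) as a first layer.
import torch

class FourierFeatures(torch.nn.Module):
    def __init__(self, in_dim, n_features, scale=1.0, trainable=False):
        super().__init__()
        kappa = scale * torch.randn(n_features, in_dim)   # frequencies kappa
        self.kappa = torch.nn.Parameter(kappa, requires_grad=trainable)

    def forward(self, x):                     # x: (N, in_dim)
        proj = x @ self.kappa.t()             # (N, n_features)
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=1)
```

In the MscaleDNN setting, each subnetwork would use one such layer with its own scale factor \(k_{i}\) from the scale vector \(\Lambda\).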
After performing the Fourier mapping of the input points with a given scale factor, the neural network can well capture the finely varying information of multi-scale problems.

**Remark 1**.: **(Lipschitz continuity)** If an activation function \(\sigma\) is continuously differentiable (i.e., \(\sigma\in C^{1}\)) and satisfies the following boundedness condition:

\[|\sigma(x)|<1\quad\text{ and }\quad|\sigma^{\prime}(x)|<1\]

for any \(x\in\mathbb{R}\), then we have

\[|\sigma(x)-\sigma(y)|\leqslant|x-y|\quad\text{ and }\quad|\sigma^{\prime}(x)-\sigma^{\prime}(y)|<|x-y|\]

for any \(x,y\in\mathbb{R}\). Obviously, the activation functions \(\tanh(x)\), \(\text{sigmoid}(x)\), the Fourier feature mapping \(\zeta(\mathbf{x})\) and \(\frac{1}{2}\sin(x)+\frac{1}{2}\cos(x)\) all satisfy the above condition and have good regularity; they help prevent gradient explosion of the parameters during backpropagation and improve the capacity of the DNN.

### Simple error analysis for FMPINN

In recent times, there have been endeavors to rigorously analyze the convergence rate of the deep mixed residual method and compare it with the deep Galerkin method (DGM) and the deep Ritz method (DRM) across different scenarios [39; 58; 59]. In this study, we revisit those convergence results, then provide the expression of the generalization error for FMPINN together with some remarks on the errors. For convenience, let \(\mathbf{q}^{*}=(u^{*},\mathbf{\phi}^{*})\) be the exact solution of equation (3.1), i.e., the minimizer of the cost function (3.2) with (3.3) under coercive boundary constraints. Meanwhile, \(\mathbf{q}_{\mathbf{\theta}^{*}}=(u_{\mathbf{\theta}^{*}},\mathbf{\phi}_{\mathbf{\theta}^{*}})\) stands for the final output of the DNN optimized by an SGD-type optimizer (such as Adam or L-BFGS) that attains a local minimum of (3.6). Further, we let \(\widehat{\mathscr{L}}(u,\mathbf{\phi})\) be the cost function evaluated on \(N\) points sampled from \(\Omega\) and denote the output of the DNN by \(\mathbf{q}_{\mathbf{\theta}}=(u_{\mathbf{\theta}},\mathbf{\phi}_{\mathbf{\theta}})\). Finally, \(\mathcal{S}_{NN}\) represents the function space spanned by the outputs of the DNN. Then, the total error (or generalization error) between the exact solution \(\mathbf{q}^{*}\) and the output of the DNN \(\mathbf{q}_{\mathbf{\theta}}\) can be expressed as

\[\big{\|}u_{\mathbf{\theta}}-u^{*}\big{\|}_{\mathcal{H}^{1}(\Omega)}+\big{\|}\mathbf{\phi}_{\mathbf{\theta}}-\mathbf{\phi}^{*}\big{\|}_{\mathcal{H}(\mathbf{div},\Omega)}\leqslant C(coe)\sqrt{\delta_{app}+\delta_{est}+\delta_{opt}} \tag{3.9}\]
with

\[\begin{cases}\delta_{app}=\inf_{(u,\mathbf{\phi})\in\mathcal{S}_{NN}}\|u-u^{*}\|^{2}_{\mathcal{H}^{1}(\Omega)}+\|\mathbf{\phi}-\mathbf{\phi}^{*}\|^{2}_{\mathcal{H}(\mathbf{div},\Omega)}\\ \delta_{est}=\sup_{(u,\mathbf{\phi})\in\mathcal{S}_{NN}}\big{[}\mathscr{L}(u,\mathbf{\phi})-\widehat{\mathscr{L}}(u,\mathbf{\phi})\big{]}+\sup_{(u,\mathbf{\phi})\in\mathcal{S}_{NN}}\big{[}\widehat{\mathscr{L}}(u,\mathbf{\phi})-\mathscr{L}(u,\mathbf{\phi})\big{]}\\ \delta_{opt}=\widehat{\mathscr{L}}(u_{\mathbf{\theta}^{*}},\mathbf{\phi}_{\mathbf{\theta}^{*}})-\widehat{\mathscr{L}}(u_{\mathbf{\theta}},\mathbf{\phi}_{\mathbf{\theta}})\end{cases}\]

Here, the approximation error \(\delta_{app}\) indicates the difference between \((u^{*},\mathbf{\phi}^{*})\) and its projection onto \(\mathcal{S}_{NN}\); the estimation error \(\delta_{est}\) measures the difference between the continuous cost function \(\mathscr{L}\) and the discrete cost function \(\widehat{\mathscr{L}}\); and the optimization error \(\delta_{opt}\) stands for the discrepancy between the empirical loss of the optimized DNN output \(\mathbf{q}_{\mathbf{\theta}^{*}}\) and that of a generic DNN output \(\mathbf{q}_{\mathbf{\theta}}\). In Fig. 4 we depict the error diagram for FMPINN.

Figure 4: Illustration of the total error for FMPINN.

**Remark 2**: _Regarding the approximation error: it generally depends on the architectural design of the neural network and the choice of the activation function. The classical radial basis network [60], the vanilla DNN, and the extreme learning machine (ELM) [61] are common meshless methods for approximating the solution of PDEs. To address spatio-temporal problems, some hybrid network frameworks have been designed by combining PINN with traditional numerical methods, such as FDM-PINN and Runge-Kutta PINN [24; 62]. Moreover, by imposing the boundary or initial conditions in a hard manner instead of as soft constraints, the approximation automatically meets the boundary and initial conditions of the PDE, which reduces the complexity and improves the precision of the NN [35]. On the other hand, a powerful activation function, such as the hyperbolic tangent or the Fourier feature mapping, not only enhances the nonlinearity of the DNN but also improves its approximation capacity and accuracy. In addition, any available data can be incorporated as an extra loss term to reduce the approximation error._

**Remark 3**: _Generally, the proposed FMPINN surrogate provides more accurate approximations as the number of random collocation points increases. However, a large number of samples leads to heavy computational costs. It is therefore worthwhile to take into account the trade-off between accuracy and computational cost when designing a DNN surrogate and determining its training mode. Alternatively, one can employ effective low-discrepancy sampling approaches to decrease the statistical error, such as the Latin hypercube sampling method [63], quasi-random sampling [64], and the multilevel Monte Carlo method [65]._

**Remark 4**: _Since the cost function generally is non-convex and has several local minima, a gradient-based optimizer will almost certainly get caught in one of them. Therefore, choosing a good optimizer is important to reduce the optimization error and reach a better minimum. In many scenarios of optimizing DNNs, the Adam optimization method has shown good performance in terms of both efficiency and accuracy; it dynamically adjusts the learning rate of each parameter using estimates of the first and second moments of the gradients [66].
BFGS is a quasi-Newton method that is numerically stable and may provide a higher-precision approximate solution [67]. In implementations, the limited-memory version of BFGS (L-BFGS) is the common choice to decrease the optimization error and accelerate convergence for cases with a small amount of training data and/or residual points. Further, by combining the merits of the above two approaches, one can first optimize the cost function by the Adam algorithm with a predefined stopping criterion, then obtain a better result with the L-BFGS optimizer._

## 4 FMPINN algorithm

For the FMPINN method with an MscaleDNN model composed of \(Q\) subnetworks (as in Fig. 1) as its solver, the input data of each subnetwork is transformed by the following operation

\[\widehat{\mathbf{x}}=a_{i}*\mathbf{x},\quad i=1,2,\ldots,Q\]

with \(a_{i}\geqslant 1\) being a positive scalar factor; this corresponds to the scale vector \(\Lambda=(a_{1},a_{2},\ldots,a_{Q})\) as in (2.4). Denoting the output of each subnetwork by \(\mathbf{F}_{i}\ (i=1,2,\ldots,Q)\), the overall output of the MscaleDNN model is obtained by

\[\mathbf{y}(\mathbf{x};\mathbf{\theta})=\frac{1}{Q}\sum_{i=1}^{Q}\frac{\mathbf{F}_{i}}{a_{i}}.\]

According to the above discussion, the procedure of the FMPINN algorithm for addressing the multi-scale problem (1.1) in finite-dimensional spaces is described in the following.

1. Generate the \(k\)-th training set \(\mathcal{S}^{k}\), consisting of interior points \(S^{k}_{I}=\{\mathbf{x}^{i}_{I}\}_{i=1}^{N_{in}}\) with \(\mathbf{x}^{i}_{I}\in\mathbb{R}^{d}\) and boundary points \(S^{k}_{B}=\{\mathbf{x}^{j}_{B}\}_{j=1}^{N_{bd}}\) with \(\mathbf{x}^{j}_{B}\in\mathbb{R}^{d}\). Here, we draw the random points \(\mathbf{x}^{i}_{I}\) and \(\mathbf{x}^{j}_{B}\) from \(\mathbb{R}^{d}\) with a positive probability density \(\nu\), such as the uniform distribution.
2. Calculate the objective function \(\mathscr{L}(\mathcal{S}^{k};\mathbf{\theta}^{k})\) for the training set \(\mathcal{S}^{k}\): \[\mathscr{L}(\mathcal{S}^{k};\mathbf{\theta}^{k})=\mathscr{L}_{in}(S^{k}_{I};\mathbf{\theta}^{k})+\gamma\mathscr{L}_{bd}(S^{k}_{B};\mathbf{\theta}^{k})\] with \(\mathscr{L}_{in}(\cdot;\mathbf{\theta}^{k})\) defined in (3.4) and \(\mathscr{L}_{bd}(\cdot;\mathbf{\theta}^{k})\) defined in (3.5).
3. Take a descent step at a random point \(\tilde{\mathbf{x}}^{k}\): \[\mathbf{\theta}^{k+1}=\mathbf{\theta}^{k}-\alpha^{k}\nabla_{\mathbf{\theta}^{k}}\mathscr{L}(\tilde{\mathbf{x}}^{k};\mathbf{\theta}^{k})\ \ \text{with}\ \ \tilde{\mathbf{x}}^{k}\in\mathcal{S}^{k},\] where the "learning rate" \(\alpha^{k}\) decreases as \(k\) increases.
4. Repeat steps 1-3 until the convergence criterion is satisfied or the objective function becomes stable.

**Algorithm 1** FMPINN algorithm for solving multi-scale PDEs (1.1)

## 5 Numerical experiments

The goal of our experiments is to show that our Fourier-based mixed physics-informed neural network is indeed capable of approximating the analytical solution of (1.1). For comparison purposes, the PINN method with an MscaleDNN as its solver (MPINN) and the local deep learning method (LDLM) with a normal DNN as its solver serve as baselines for solving (1.1) in spaces of varying dimension.

### Model and training setup

In the aforementioned FMPINN and MPINN models, a standard MscaleDNN with multiple subnetworks that stretch the input data via various scale factors is configured as the solver.
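Assembling the pieces, a hedged sketch of this multi-scale solver is given below: each subnetwork receives \(a_{i}\mathbf{x}\) and the outputs are averaged with weights \(1/a_{i}\). It reuses the `FourierFeatureMap` sketch from the previous section; the depth, widths, and default output dimension (one state plus flux components) are placeholders for the sizes reported next.

```python
import torch
import torch.nn as nn

class SinCos(nn.Module):
    """The 0.5*sin(x) + 0.5*cos(x) activation used in the later hidden layers."""
    def forward(self, x):
        return 0.5 * torch.sin(x) + 0.5 * torch.cos(x)

class MscaleDNN(nn.Module):
    """Q subnetworks fed with a_i * x; output y = (1/Q) * sum_i F_i / a_i."""
    def __init__(self, scales, in_dim=1, out_dim=2, width=40):
        super().__init__()
        self.register_buffer("scales", torch.tensor(scales, dtype=torch.float32))
        self.subnets = nn.ModuleList([
            nn.Sequential(
                FourierFeatureMap(in_dim, width // 2),  # Fourier first layer
                nn.Linear(width, width), SinCos(),
                nn.Linear(width, width), SinCos(),
                nn.Linear(width, out_dim))              # linear output layer
            for _ in scales])

    def forward(self, x):
        outs = [net(a * x) / a for a, net in zip(self.scales, self.subnets)]
        return torch.stack(outs).mean(dim=0)            # (1/Q) * sum_i F_i / a_i
```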
The MscaleDNN consists of 25 subnetworks according to the manually defined frequency vector \(\Lambda=(1,2,3,4,5,10,\cdots,90,95,100)\). Each subnetwork contains 5 hidden layers of proper size; the activation function of the first hidden layer of each subnetwork is set as the Fourier feature mapping, the other activation functions (except for the output layer) are set as \(\frac{1}{2}\sin(x)+\frac{1}{2}\cos(x)\), and the output layer is linear. The overall output is a weighted sum of the outputs of all subnetworks through the relevant scale factors. In terms of the LDLM [37], two activation functions are considered for this model: LDLM1 with \(\text{ReQU}=\max\{0,x\}^{2}\) as its activation for hidden layers and LDLM2 with \(\frac{1}{2}\sin(x)+\frac{1}{2}\cos(x)\) as its activation for hidden layers; their outputs are all linear.

In our numerical experiments, all training data are sampled from the domain of interest (including its boundary) in the Euclidean space \(\mathbb{R}^{d}\); the sampling probability densities are assigned as the uniform distribution. We train all neural networks by an Adam optimizer with an initial learning rate of 0.01, and the learning rate is decayed by 2.5% every 100 training epochs [66]. Here, the following relative \(l^{2}\) error is used to evaluate our models:

\[REL=\sqrt{\frac{\sum_{i=1}^{N^{\prime}}|\tilde{u}(\mathbf{x}^{i})-u^{*}(\mathbf{x}^{i})|^{2}}{\sum_{i=1}^{N^{\prime}}|u^{*}(\mathbf{x}^{i})|^{2}}}\]

where \(\tilde{u}(\mathbf{x}^{i})\) and \(u^{*}(\mathbf{x}^{i})\) are the approximate solution of the deep neural network and the exact solution at the testing points \(\{\mathbf{x}^{i}\}\ (i=1,2,\cdots,N^{\prime})\), respectively, and \(N^{\prime}\) represents the number of sample points for testing. In order to visualize the training process, our model is evaluated once every 1000 iterations during the whole training cycle, and the result at the end is recorded. In our codes, the penalty parameter \(\gamma\) is set as

\[\gamma=\begin{cases}\gamma_{0},&\text{if}\ \ i_{\text{epoch}}<0.1M_{\text{max}}\\ 10\gamma_{0},&\text{if}\ \ 0.1M_{\text{max}}\leqslant i_{\text{epoch}}<0.2M_{\text{max}}\\ 50\gamma_{0},&\text{if}\ \ 0.2M_{\text{max}}\leqslant i_{\text{epoch}}<0.25M_{\text{max}}\\ 100\gamma_{0},&\text{if}\ \ 0.25M_{\text{max}}\leqslant i_{\text{epoch}}<0.5M_{\text{max}}\\ 200\gamma_{0},&\text{if}\ \ 0.5M_{\text{max}}\leqslant i_{\text{epoch}}<0.75M_{\text{max}}\\ 500\gamma_{0},&\text{otherwise}\end{cases} \tag{5.1}\]

where \(\gamma_{0}=10\) in all our tests and \(M_{\text{max}}\) represents the total number of epochs. We implement and run all neural network models with PyTorch (version 1.14.0) on a workstation (64-GB RAM, single NVIDIA GeForce RTX 4090 24-GB).

### Performance of FMPINN for solving multi-scale elliptic PDEs

**Example 5.1**.: Firstly, we consider the one-dimensional case of (1.1) with Dirichlet boundary on the interval \([0,1]\), in which \(A^{\varepsilon}(x)\) is given by

\[A^{\varepsilon}(x)=\left(2+\cos\left(2\pi\frac{x}{\varepsilon}\right)\right)^{-1} \tag{5.2}\]

with a small parameter \(\varepsilon>0\) such that \(\varepsilon^{-1}\in\mathbb{N}^{+}\), and the force term \(f(x)=1\). Under these conditions, a unique solution is given by

\[u^{\varepsilon}(x)=x-x^{2}+\varepsilon\left(\frac{1}{4\pi}\sin\left(2\pi\frac{x}{\varepsilon}\right)-\frac{1}{2\pi}x\sin\left(2\pi\frac{x}{\varepsilon}\right)-\frac{\varepsilon}{4\pi^{2}}\cos\left(2\pi\frac{x}{\varepsilon}\right)+\frac{\varepsilon}{4\pi^{2}}\right). \tag{5.3}\]
Clearly, the analytical solution induces the boundary condition \(u(0)=u(1)=0\). In this example, we use the FMPINN, MPINN, LDLM1, and LDLM2 models to solve (1.1) for \(\varepsilon=0.1,0.01\) and \(0.001\), respectively. The size of the hidden layers of each subnetwork of FMPINN and MPINN is set as \((30,40,30,30,30)\) and the balance parameter \(\beta\) in (3.4) is set as \(10\). The hidden layers' size for LDLM is set as \((300,400,300,300,300)\). The numbers of parameters of all models are comparable. At each training step, we randomly sample \(3000\) points inside \([0,1]\) and \(500\) boundary points as the training dataset. In addition, the testing dataset includes \(1000\) equidistant samples from \([0,1]\). All models are trained for \(50000\) epochs. We depict the related experiment results in Figs. 5, 6 and 7, respectively. Meanwhile, the final relative errors and total running times are listed in Table 1. Based on these figures, the FMPINN model can perfectly capture the oscillation of the exact solution for \(\varepsilon=0.1,0.01\) and \(0.001\), but the LDLM models do not converge in these cases. At the same time, the performance of MPINN competes with that of FMPINN when \(\varepsilon=0.1\); however, the MPINN model fails to solve the multi-scale problem for \(\varepsilon=0.01\) and \(0.001\). Compared to \(\varepsilon=0.01\), the rough coefficient \(A^{\varepsilon}\) with \(\varepsilon=0.001\) oscillates more in the interval \([0,1]\), but the FMPINN still keeps its remarkable performance. According to the point-wise errors in Figs. 5(d), 6(d) and 7(d) and the relative errors in Figs. 5(h), 6(h) and 7(h), we can conclude that the FMPINN is able to approximate the exact solution of (1.1) in one-dimensional space with high precision. In addition, the total times in Table 1 show that the running time of FMPINN is less than that of MPINN for \(50000\) training epochs.

_Influence of the hyper-parameter \(\beta\):_ In the previous tests, the parameter \(\beta\) was set to \(10\). Now, we study the influence of \(\beta\) on our FMPINN model. In these tests, we take \(\varepsilon=0.001\) in (5.2) and consider values of \(\beta\) equal to \(1,5,10,15,20\) and \(25\), while keeping all other parameters fixed. All models with different \(\beta\) values are trained for \(50000\) epochs. Fig. 8 plots the flux-term loss during training as well as the relative error during testing; additionally, the final relative errors obtained from the tests are listed in Table 2.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{REL} & \multicolumn{4}{c|}{Total time(s)} \\ \hline \(\varepsilon\) & FMPINN & MPINN & LDLM1 & LDLM2 & FMPINN & MPINN & LDLM1 & LDLM2 \\ \hline \(0.1\) & 2.92e-6 & 2.60e-7 & 0.3227 & 0.3389 & 680.734 & 865.849 & 345.791 & 373.537 \\ \hline \(0.01\) & 3.43e-5 & 0.94 & 0.3397 & 0.3406 & 689.729 & 868.199 & 351.451 & 377.089 \\ \hline \(0.001\) & 9.28e-5 & 0.99 & 0.3389 & 0.3398 & 691.458 & 875.297 & 358.435 & 388.273 \\ \hline \end{tabular} \end{table} Table 1: The relative error and running time of FMPINN, MPINN, LDLM1 and LDLM2 for Example 5.1

Figure 5: Rough coefficient, loss of flux term and testing results for Example 5.1 when \(\varepsilon=0.1\).

Figure 6: Rough coefficient, loss of flux term and testing results for Example 5.1 when \(\varepsilon=0.01\).

Figure 7: Rough coefficient, loss of flux term and testing results for Example 5.1 when \(\varepsilon=0.001\).
According to the above results in Fig. 8 and Table 2, it can be observed that the FMPINN model exhibits remarkable and stable performance across different values of \(\beta\). The performance of the FMPINN model for \(\beta=1\) and \(\beta=5\) is slightly weaker than that of the other cases. The loss of the flux term is also stable and consistent with the trend of the REL. Therefore, for the subsequent tests, we will continue to set \(\beta=10\).

**Example 5.2**.: Let us attempt to solve the following three-scale problem with Dirichlet boundary in \(\Omega=[0,1]\), in which

\[A^{\varepsilon}(x)=\left(2+\cos\left(2\pi\frac{x}{\varepsilon_{1}}\right)\right)\left(2+\cos\left(2\pi\frac{x}{\varepsilon_{2}}\right)\right)\]

with two small parameters \(\varepsilon_{1},\varepsilon_{2}>0\) such that \(\varepsilon_{1}^{-1},\varepsilon_{2}^{-1}\in\mathbb{N}^{+}\), and an exact solution is given by

\[u^{\varepsilon}(x)=x-x^{2}+\frac{\varepsilon_{1}}{4\pi}\sin\left(2\pi\frac{x}{\varepsilon_{1}}\right)+\frac{\varepsilon_{2}}{4\pi}\sin\left(2\pi\frac{x}{\varepsilon_{2}}\right). \tag{5.4}\]

Clearly, \(u^{\varepsilon}(0)=u^{\varepsilon}(1)=0\). The force term can be obtained after a careful computation; we omit it here. We solve the above three-scale problem for \(\varepsilon_{1}=0.1\) and \(\varepsilon_{2}=0.01\) by employing the aforementioned FMPINN, MPINN, LDLM1 and LDLM2 models, respectively. All settings are the same as in Example 5.1. The training dataset includes 3000 interior random points and 500 boundary random points, and the testing dataset includes 1000 equidistant samples. The related experiment results are listed in Table 3 and plotted in Fig. 9, respectively.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\beta\) & 1 & 5 & 10 & 15 & 20 & 25 \\ \hline REL & 3.31e-4 & 1.37e-4 & 9.27e-5 & 7.75e-5 & 6.40e-5 & 8.60e-5 \\ \hline \end{tabular} \end{table} Table 2: The relative error of the FMPINN model vs. various \(\beta\) for Example 5.1.

Figure 8: The loss of flux term vs. training epoch and the relative error vs. testing epoch for Example 5.1 when \(\varepsilon=0.001\).

Figure 9: Rough coefficient, loss of flux term and testing results for Example 5.2 when \(\varepsilon_{1}=0.1\) and \(\varepsilon_{2}=0.01\).

Fig. 9 shows that the FMPINN model is still well able to capture all oscillations of the exact solution of the three-scale problem; the MPINN model also captures the profile of the solution of (1.1) with \(\varepsilon_{1}=0.1\) and \(\varepsilon_{2}=0.01\). However, LDLM1 and LDLM2 both fail to fit the solution. Figs. 9(d)-9(g) show that the point-wise errors of FMPINN are close to zero for most points and much smaller than those of MPINN, while the LDLM models all perform poorly. Additionally, Fig. 9(h) and Table 3 illustrate that the REL of FMPINN is superior to that of MPINN by more than two orders of magnitude, and its running time of 696.537 seconds is less than that of MPINN. From the above results, we conclude that the FMPINN model is remarkable in addressing (1.1) with a rough coefficient in one-dimensional space; it generally outperforms the MPINN and LDLM models.
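Before moving on to the two-dimensional examples, we note that all experiments share one training procedure: Algorithm 1 with the staged penalty (5.1) and the decayed Adam schedule of Section 5.1. A condensed sketch follows, reusing `fmpinn_loss` from Section 3; the samplers `sample_interior` and `sample_boundary` are hypothetical helpers for the uniform sampling described above.

```python
import torch

def penalty_gamma(epoch, max_epoch, gamma0=10.0):
    """Staged boundary penalty following (5.1)."""
    for frac, mult in [(0.1, 1), (0.2, 10), (0.25, 50), (0.5, 100), (0.75, 200)]:
        if epoch < frac * max_epoch:
            return gamma0 * mult
    return gamma0 * 500

def train(model, A_eps, f, g, max_epoch=50000, n_in=3000, n_bd=500):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    # decay the learning rate by 2.5% every 100 epochs (Section 5.1)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100, gamma=0.975)
    for epoch in range(max_epoch):
        x_in = sample_interior(n_in)    # hypothetical uniform sampler on Omega
        x_bd = sample_boundary(n_bd)    # hypothetical sampler on the boundary
        loss = fmpinn_loss(model, x_in, x_bd, A_eps, f, g,
                           beta=10.0, gamma=penalty_gamma(epoch, max_epoch))
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
```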
**Example 5.3**.: We consider the following two-dimensional problem for (1.1) with Dirichlet boundary in the regular domain \(\Omega=[-1,1]\times[-1,1]\). In this example, we choose \(f(x_{1},x_{2})=5\) and the following two-scale coefficient with scale separation

\[A^{\varepsilon}(x_{1},x_{2})=\frac{1.5+\sin(2\pi x_{1}/\varepsilon)}{1.5+\sin(2\pi x_{2}/\varepsilon)}+\frac{1.5+\sin(2\pi x_{2}/\varepsilon)}{1.5+\cos(2\pi x_{1}/\varepsilon)}+\sin(4x_{1}^{2}x_{2}^{2})+1, \tag{5.5}\]

where \(\varepsilon>0\) is a small parameter such that \(\varepsilon^{-1}\in\mathbb{N}^{+}\). Since the corresponding exact solution cannot be expressed explicitly in this example, a reference solution \(u^{\varepsilon}(x_{1},x_{2})\) is set as the finite element solution computed by the numerical homogenization method [15] on a square grid over \([-1,1]\times[-1,1]\) with mesh size \(h=1/128\). We solve the above two-scale problem for \(\varepsilon=0.05\) by employing the aforementioned FMPINN, MPINN, LDLM1 and LDLM2 models, respectively. The size of the hidden layers of each subnetwork of FMPINN and MPINN is set as (40, 60, 40, 40, 40), and the hidden layers' size for the LDLMs is set as \((400,250,250,200,200)\). At each training step, the training dataset includes 5000 points sampled inside \(\Omega\) and 2000 boundary points sampled from \(\partial\Omega\), respectively. In order to test our models, the testing dataset is the collection of all grid points of the domain \([-1,1]\times[-1,1]\) with mesh size \(h=1/128\). The related experiment results are listed in Table 4 and plotted in Fig. 10, respectively.

In this example, \(A^{\varepsilon}(x_{1},x_{2})\) has two different frequency components and is quite oscillatory (see Fig. 10(a)), so a DNN encounters some troubles in addressing the multi-scale PDE (1.1). According to the results of the point-wise errors (Figs. 10(d)-10(g)) and the relative errors (Fig. 10(h)), the performance of our FMPINN model is still superior to that of the MPINN, LDLM1 and LDLM2 models, and it obtains a favorable approximation to the multi-scale problem (1.1). In addition, the test REL curve in Fig. 10(h) indicates that the FMPINN model is stable over the whole training cycle, and its tendency is consistent with the flux-term loss curve in Fig. 10(c). Clearly, the running time of our FMPINN model is about half of that of the MPINN model, which means the FMPINN model is efficient in solving the multi-scale PDE (1.1) with a two-scale coefficient.

**Example 5.4**.: We consider the following two-dimensional problem for (1.1) with Dirichlet boundary in the regular domain \(\Omega=[-1,1]\times[-1,1]\). In this example, we choose \(f(x_{1},x_{2})=1\) and a multi-frequency coefficient

\[A^{\varepsilon}(x_{1},x_{2})=\Pi_{i=1}^{5}\bigg{(}1+0.5\cos\big{(}2^{i}\pi(x_{1}+x_{2})\big{)}\,\bigg{)}\bigg{(}1+0.5\sin\big{(}2^{i}\pi(x_{2}-3x_{1})\big{)}\,\bigg{)}. \tag{5.6}\]

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & FMPINN & MPINN & LDLM1 & LDLM2 \\ \hline REL & 0.0139 & 0.99 & 0.2431 & 0.2401 \\ \hline Total time(s) & 2098.258 & 3885.934 & 626.685 & 689.619 \\ \hline \end{tabular} \end{table} Table 4: The relative error and consumed time of FMPINN, MPINN, LDLM1 and LDLM2 for Example 5.3.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & FMPINN & MPINN & LDLM1 & LDLM2 \\ \hline REL & 3.36e-4 & 5.02e-2 & 0.5341 & 0.8372 \\ \hline Total time(s) & 696.537 & 965.076 & 377.18 & 399.575 \\ \hline \end{tabular} \end{table} Table 3: The relative error and consumed time of FMPINN, MPINN, LDLM1, and LDLM2 for Example 5.2.
Figure 10: Rough coefficient, reference solution, loss of flux term and testing results for Example 5.3

Same as in Example 5.3, a reference solution \(u^{\varepsilon}(x_{1},x_{2})\) is set as the finite element solution computed by the numerical homogenization method [15] on a square grid over \([-1,1]\times[-1,1]\) with mesh size \(h=1/128\).

Figure 11: Rough coefficient, reference solution, loss of flux term and testing results for Example 5.4

By carefully implementing the previously mentioned FMPINN, MPINN, and LDLM models with the specified setups, we obtain the approximated solution of (1.1) with (5.6). The setup of all models is identical to that of Example 5.3. During each training step, the training dataset comprises 5000 points randomly sampled from \(\Omega\) and 2000 boundary points sampled from the boundary \(\partial\Omega\), respectively. Meanwhile, the testing dataset is composed of the grid points of the square domain \([-1,1]\times[-1,1]\) with mesh size \(h=1/128\). The related experiment results are listed in Table 5 and plotted in Fig. 11, respectively.

In this example, \(A^{\varepsilon}(x_{1},x_{2})\) oscillates with six different frequency components (see Fig. 11(a)), which increases the difficulty for a DNN to address the multi-scale PDE (1.1). The point-wise errors (Figs. 11(d)-11(g)) and the relative errors (Fig. 11(h)) indicate that our FMPINN model is still able to capture the solution of multi-scale problems with a complex multi-frequency coefficient, but the MPINN, LDLM1 and LDLM2 models all perform poorly in approximating the solution of (1.1). Additionally, the test REL curve in Fig. 11(h) and the flux-term loss curve in Fig. 11(c) are both flat, which indicates that the FMPINN model is stable over the whole training cycle. Moreover, the running time of our FMPINN model is less than that of the MPINN model in solving the multi-scale PDE (1.1) with coefficient (5.6).

**Example 5.5**.: We next study the performance of our FMPINN model in solving the elliptic equation (1.1) with Dirichlet boundary in the cubic domain \(\Omega=[0,1]\times[0,1]\times[0,1]\), in which we take

\[A^{\varepsilon}(x_{1},x_{2},x_{3})=2+\sin\left(\frac{2\pi x_{1}}{\varepsilon}\right)\sin\left(\frac{2\pi x_{2}}{\varepsilon}\right)\sin\left(\frac{2\pi x_{3}}{\varepsilon}\right) \tag{5.7}\]

with a small parameter \(\varepsilon>0\) such that \(\varepsilon^{-1}\in\mathbb{N}^{+}\). Also, we let the force term \(f(x_{1},x_{2},x_{3})=20\) and the boundary function \(g(x_{1},x_{2},x_{3})=0\) on \(\partial\Omega\). We utilize the FMPINN, MPINN, LDLM1 and LDLM2 models to approximate the solution of the three-dimensional multi-scale problem (1.1) with rough coefficient (5.7) for \(\varepsilon=0.1\); the setups of the four models are the same as in Example 5.4. The training dataset includes 7500 interior points and 1000 boundary points randomly sampled from \(\Omega\) and \(\partial\Omega\), respectively. To facilitate the evaluation, a reference solution \(u^{\varepsilon}(x_{1},x_{2},x_{3})\) is established as the numerical solution obtained by the finite difference method on the domain \([-1,1]\times[-1,1]\times[-1,1]\) with mesh size \(h=1/64\). The test dataset is formed by all grid points of the domain \([-1,1]\times[-1,1]\) with mesh size \(h=1/64\), while keeping the value of \(z\) fixed at 0.3125. We list the total running time and REL in Table 6 and plot the related results in Fig. 12.
Based on the results in Fig. 12, we can see that our FMPINN model still outperforms the MPINN and LDLM models for multi-scale problems in three-dimensional space. The point-wise absolute error and the relative error of the former are much smaller than those of the latter three; the precision of FMPINN is very good, with a small absolute point-wise error. Additionally, the REL curve and the loss curve of the flux term are both flat in the later period of the training process, which means the performance of FMPINN is stable. The running time of FMPINN is 5179.601 seconds, more than 3800 seconds less than MPINN's.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & FMPINN & MPINN & LDLM1 & LDLM2 \\ \hline REL & 0.0071 & 0.0335 & 0.8048 & 0.5326 \\ \hline Total time(s) & 5179.601 & 9271.072 & 1065.541 & 1195.233 \\ \hline \end{tabular} \end{table} Table 6: The relative error and running time of FMPINN, MPINN, LDLM1 and LDLM2 for Example 5.5.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & FMPINN & MPINN & LDLM1 & LDLM2 \\ \hline REL & 0.0628 & 0.99 & 0.936 & 0.9127 \\ \hline Total time(s) & 2013.258 & 3985.934 & 606.685 & 659.619 \\ \hline \end{tabular} \end{table} Table 5: The relative error and consumed time of FMPINN, MPINN, LDLM1 and LDLM2 for Example 5.4.

Figure 12: Rough coefficient, exact solution, loss of flux term and testing results for Example 5.5

**Example 5.6**.: We consider the following eight-dimensional problem for (1.1) with Dirichlet boundary in the regular domain \(\Omega=[0,1]^{8}\), in which we take

\[A(x_{1},x_{2},\cdots,x_{8})=1+\frac{1}{8}\bigg{[}\cos(2\pi x_{1})+\cos(4\pi x_{2})+\cos(8\pi x_{3})+\cos(16\pi x_{4})+\cos(16\pi x_{5})+\cos(8\pi x_{6})+\cos(4\pi x_{7})+\cos(2\pi x_{8})\bigg{]}.\]

Meanwhile, an exact solution satisfying (1.1) is given by

\[u(x_{1},x_{2},\cdots,x_{8})=\prod_{j=1}^{8}\sin(\pi x_{j}).\]

The functions \(f(x_{1},x_{2},\cdots,x_{8})\) in \(\Omega\) and \(g(x_{1},x_{2},\cdots,x_{8})\) on \(\partial\Omega\) are easy to obtain from the rough coefficient and the exact solution; we omit them. In this example, we only run the FMPINN, LDLM1 and LDLM2 models to solve (1.1) in eight-dimensional space, because the huge computational requirement of MPINN exceeds the memory limit of our workstation. The size of the hidden layers of each subnetwork of FMPINN is set as (60, 80, 60, 60, 60), and the hidden layers' size for LDLM is set as \((400,500,300,300,300)\). At each training step, we construct the training dataset by sampling 20000 interior points inside \(\Omega\) and 5000 boundary points from \(\partial\Omega\). A testing dataset of 1600 random points distributed in \(\Omega\) is given. The related experiment results are plotted in Fig. 13 and listed in Table 7. Additionally, the point-wise error of the FMPINN model evaluated on the 1600 sample points is projected onto a rectangular region with mesh size \(40\times 40\); note that this mapping is only for visualization and is independent of the actual coordinates of those points.

For this eight-dimensional problem, the FMPINN can still obtain a satisfactory solution of (1.1) with small point-wise absolute errors and relative error. However, LDLM1 and LDLM2 both fail to approximate the solution of (1.1). Additionally, the loss of the flux term and the overall REL show that the FMPINN model is also stable during the training process. The running time of the LDLMs in Table 7 is less than that of FMPINN, but their performance is clearly weaker.
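For completeness, the REL values reported in Tables 1-7 correspond to the following computation on the test points; a minimal helper, assuming tensors of pointwise values:

```python
# Relative l2 error (REL) from Section 5.1; u_pred and u_exact hold the
# DNN prediction and the exact/reference solution at the test points.
import torch

def rel_l2(u_pred, u_exact):
    return torch.norm(u_pred - u_exact) / torch.norm(u_exact)
```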
## 6 Conclusion

Physics-informed neural networks (PINN) have gained significant popularity for solving both forward and inverse problems. However, a normal PINN, even with a multi-scale DNN framework, is unable to solve multi-scale PDEs with rough coefficients. Inspired by the mixed finite element method, this work designs a Fourier-based mixed PINN (dubbed FMPINN) by combining a dual (flux) technique and Fourier decomposition to solve a class of elliptic multi-scale PDEs. By incorporating the loss of the flux term into the loss function, our model achieves improved stability and robustness. To handle multi-frequency content, a Fourier activation function is applied to the input data transformed radially by different frequency factors, and subnetworks are designed to match the different frequency components of the target function; this strategy clearly improves the accuracy and convergence rate of the FMPINN method. Compared to previous PINN works, this novel method skillfully casts the original problem into two first-order systems; it overcomes the computational burden of high-order derivatives in DNNs and the ill-conditioning of the neural tangent kernel matrix resulting from the rough coefficient. Computational results show that this novel method is feasible and efficient for solving multi-scale equations with inhomogeneous coefficients in various dimensions. In the future, we aim to extend this novel network architecture, incorporating Fourier theory and lower-order mixed schemes, to tackle more complex multi-scale problems.

## Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Credit authorship contribution statement

Xi'an Li: Conceptualization, Methodology, Investigation, Validation, Writing - Original Draft. Jinran Wu: Investigation, Writing - Review & Editing. You-Gan Wang: Writing - Review & Editing. Xin Tai: Writing - Review & Editing. Jianhua Xu: Writing - Review & Editing.

## Acknowledgements

The authors wish to thank Prof. Dr. Zhi-Qin John Xu for valuable suggestions which improved the quality of the paper.
2310.08725
Heterophily-Based Graph Neural Network for Imbalanced Classification
Graph neural networks (GNNs) have shown promise in addressing graph-related problems, including node classification. However, conventional GNNs assume an even distribution of data across classes, which is often not the case in real-world scenarios, where certain classes are severely underrepresented. This leads to suboptimal performance of standard GNNs on imbalanced graphs. In this paper, we introduce a unique approach that tackles imbalanced classification on graphs by considering graph heterophily. We investigate the intricate relationship between class imbalance and graph heterophily, revealing that minority classes not only exhibit a scarcity of samples but also manifest lower levels of homophily, facilitating the propagation of erroneous information among neighboring nodes. Drawing upon this insight, we propose an efficient method, called Fast Im-GBK, which integrates an imbalance classification strategy with heterophily-aware GNNs to effectively address the class imbalance problem while significantly reducing training time. Our experiments on real-world graphs demonstrate our model's superiority in classification performance and efficiency for node classification tasks compared to existing baselines.
Zirui Liang, Yuntao Li, Tianjin Huang, Akrati Saxena, Yulong Pei, Mykola Pechenizkiy
2023-10-12T21:19:47Z
http://arxiv.org/abs/2310.08725v1
# Heterophily-Based Graph Neural Network for Imbalanced Classification

###### Abstract

Graph neural networks (GNNs) have shown promise in addressing graph-related problems, including node classification. However, in real-world scenarios, data often exhibits an imbalanced, sometimes highly skewed, distribution in which dominant classes account for most of the samples while certain classes are severely underrepresented. This leads to suboptimal performance of standard GNNs on imbalanced graphs. In this paper, we introduce a unique approach that tackles imbalanced classification on graphs by considering graph heterophily. We investigate the intricate relationship between class imbalance and graph heterophily, revealing that minority classes not only exhibit a scarcity of samples but also manifest lower levels of homophily, facilitating the propagation of erroneous information among neighboring nodes. Drawing upon this insight, we propose an efficient method, called Fast Im-GBK, which integrates an imbalance classification strategy with heterophily-aware GNNs to effectively address the class imbalance problem while significantly reducing training time. Our experiments on real-world graphs demonstrate our model's superiority in classification performance and efficiency for node classification tasks compared to existing baselines.

Keywords: Graph neural networks, Imbalanced classification, Heterophily

## 1 Introduction

GNNs have gained popularity for their accuracy in handling graph data. However, their accuracy, like that of other deep learning models, is highly dependent on data quality. One major challenge is class imbalance, where some classes have far fewer examples than others. This can lead to biased classification results, favoring the majority class while neglecting the minority classes [5]. The issue of imbalanced datasets commonly arises in classification and recognition tasks where accurate classification of minority classes is critical. Graph imbalance classification has real-world applications, like identifying spammers in social networks [18] and detecting fraud in financial networks [8]. In these cases, abnormal nodes are rare, making graph imbalance classification very challenging. Finding effective solutions to this problem is valuable for both research and practical applications.

The class-imbalance problem has been extensively studied in machine learning and deep learning, as evidenced by prior research [6]. However, these methods may not effectively handle imbalanced graph data due to the interconnected nature of nodes within graphs. Graph nodes are characterized not only by their own properties but also by the properties of their neighboring nodes, introducing non-i.i.d. (independent and identically distributed) characteristics. Recent studies on graph imbalance classification have focused on data augmentation techniques, such as GraphSMOTE [19] and GraphENS [12]. However, our observations indicate that class imbalance in graphs is often accompanied by heterophilic connections of minority nodes, i.e., minority nodes have more connections with nodes of diverse labels than majority-class nodes do. This finding suggests that traditional techniques may be insufficient in the presence of heterophily. To address this challenge, we propose incorporating a graph heterophily handling strategy into graph imbalanced classification. Our approach builds upon the bi-kernel design of GBK-GNN [2] to capture both homophily and heterophily within the graph.
Additionally, we introduce a class-imbalance-aware loss function, such as the logit adjusted loss, to appropriately reweight minority and majority nodes. The complexity of GBK-GNN makes training computationally challenging. To overcome this, we propose an efficient version of GBK-GNN that achieves both efficacy and efficiency in training. Our main contributions are as follows: (1) We provide comprehensive insights into the imbalance classification problem in graphs from the perspective of graph heterophily and investigate the relationship between class imbalance and heterophily. (2) We present a novel framework that integrates graph heterophily and class-imbalance handling based on these insights, together with a fast implementation that significantly reduces training time. (3) We conduct extensive experiments on various real-world graphs to validate the effectiveness and efficiency of our proposed framework in addressing imbalanced classification on graphs.

## 2 Related Work

**Imbalanced Classification** Efforts to counter class imbalance in classification entail developing unbiased classifiers that account for the label distribution in the training data. Existing strategies fall into three categories: loss modification, post-hoc correction, and re-sampling techniques. Loss modification adjusts the objective function by assigning greater weights [5] to minority classes. Post-hoc correction methods [11] adapt logits during inference to rectify underrepresented minority class predictions. Re-sampling employs techniques, such as sampling strategies [13] or data generation [1], to augment minority class data. The widely utilized Synthetic Minority Over-sampling Technique (SMOTE) [1] generates new instances by interpolating between minority class samples and their nearest neighbors. To tackle class imbalance in graph-based classification, diverse approaches harness graph structural information to mitigate the challenge. GraphSMOTE [19] synthesizes minority nodes by interpolating existing minority nodes, with connectivity guided by a pretrained edge predictor. The Topology-Aware Margin (TAM) loss [16] considers each node's local topology by comparing its connectivity pattern to the class-averaged counterpart: when nearby nodes of the target class are denser, the margin for that class decreases, which makes learning more adaptive and effective. GraphENS [12] is another technique that generates an entire ego network for the minority class by amalgamating distinct ego networks based on similarity. These methods effectively combat class imbalance in graph-based classification, leveraging graph structures and introducing inventive augmentation techniques.

**Heterophily Problem** In graphs, homophily [10] means that nodes with similar features and class labels tend to be connected, while heterophily means that nodes with diverse features and class labels tend to be connected. Recent investigations have analyzed the impact of heterophily on various tasks, emphasizing the significance of accounting for attribute information and devising methods attuned to heterophily [20]. Newly proposed models addressing heterophily can be classified into two categories: non-local neighbor extension and GNN architecture refinement [20]. Methods grounded in non-local neighbor extension seek to alleviate this challenge through neighborhood exploration. The H2GCN method [22], for instance, integrates insights from higher-order neighbors, revealing that two-hop neighbors frequently encompass more nodes of the same class as the central ego node.
NLGNN [9] employs attention mechanisms or pointer networks to prioritize prospective neighbor nodes based on attention scores or their relevance to the ego node. Approaches refining the GNN architecture aspire to harness both local and non-local neighbor information, boosting model capacity by fostering distinct and discerning node representations. A representative work is GBK-GNN (Gated Bi-Kernel Graph Neural Network) [2], which employs dual kernels to encapsulate homophily and heterophily information and introduces a gate to ascertain the kernel suitable for a given node pair.

## 3 Motivation

Node classification on graphs, such as that performed by the Graph Convolutional Network (GCN), differs fundamentally from non-graph tasks due to the interconnectivity of nodes. In imbalanced class distributions, minority nodes may have a higher proportion of heterophilic edges in their local neighborhoods, which can negatively impact classification performance. To investigate the relationship between homophily and different classes, especially minorities, we conducted a small analysis on four datasets: Cora, CiteSeer, Wiki, and Coauthor CS (details about the datasets can be found in Section 5.1). Our analysis involves computing the average homophily ratios and node counts across the different categories. In particular, the average homophily ratio for nodes with label \(y\) is defined as:

\[h\left(y,\mathcal{G}\right)=\frac{1}{|\mathcal{V}_{y}|}\sum_{i\in\mathcal{V}_{y}}\frac{|\mathcal{N}_{i^{*}}|}{|\mathcal{N}_{i}|} \tag{1}\]

where \(\mathcal{V}_{y}\) represents the set of nodes with label \(y\), \(\mathcal{N}_{i}\) is the set of neighbors of node \(v_{i}\) (excluding \(v_{i}\)) in graph \(\mathcal{G}\), and \(\mathcal{N}_{i^{*}}\) is the set of neighbors (excluding \(v_{i}\)) whose class is the same as that of \(v_{i}\).

Observations. The results for the Cora dataset are shown in Fig. 1. It is evident that the average homophily ratios of minority classes are relatively smaller, suggesting higher proportions of heterophilic edges in their local neighborhoods. This can negatively affect classification performance when using existing imbalance strategies like data augmentation and loss re-weighting. This finding highlights the importance of considering the homophily and heterophily properties of nodes when designing graph classification models. We propose a novel approach that addresses the imbalance issue effectively while taking node heterophily into account.

Problem Formulation. An attributed graph is denoted by \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathrm{X})\), where \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\) represents the set of \(n\) nodes in \(\mathcal{G}\), and \(\mathcal{E}\) is the set of edges, with edge \(e_{ij}\) connecting nodes \(v_{i}\) and \(v_{j}\). For simplicity, we consider undirected graphs, but the argument can be generalized to directed graphs. The node attribute matrix is represented by \(\mathrm{X}=\left[\mathbf{x}_{1}^{\top},\ldots,\mathbf{x}_{n}^{\top}\right]\in\mathcal{R}^{n\times d}\), where \(\mathbf{x}_{i}\) indicates the feature vector of node \(v_{i}\). \(\mathcal{Y}\) denotes the label information for the nodes in \(\mathcal{G}\), and each node \(v_{i}\) is associated with a label \(y_{i}\). During training, only the labels \(\mathcal{Y}_{\mathcal{L}}\) of a node subset \(\mathcal{V}_{\mathcal{L}}\) are available.
Moreover, \(\mathcal{C}=\{c_{1},c_{2},\ldots,c_{m}\}\) represents the set of \(m\) classes assigned to the nodes, with \(|C_{i}|\) denoting the size of the \(i\)-th class, i.e., the number of nodes with class \(c_{i}\). To quantify the degree of imbalance in the graph, we use the imbalance ratio, defined as \(\mathbf{r}=\frac{\max_{i}(|C_{i}|)}{\min_{i}(|C_{i}|)}\). For imbalanced graphs, the imbalance ratio of \(\mathcal{Y}_{\mathcal{L}}\) is high (Table 1 of Section 5 shows that the first two datasets have relatively balanced classes and the last two have imbalanced classes).

**Imbalanced Node Classification on Graphs.** In the context of an imbalanced graph \(\mathcal{G}\) with a high imbalance ratio \(\mathbf{r}\), the aim is to develop a node classifier \(f:f(\mathcal{V},\mathcal{E},\mathrm{X})\rightarrow\mathcal{Y}\) that works well for classifying nodes belonging to both the majority and the minority classes.

Figure 1: Category distributions (left) and average homophily ratios (right) of Cora.

## 4 Methodology

In this section, we present our solution to class imbalance, which incorporates a heterophily handling component and an imbalance handling component (Section 4.1). We also propose a fast version that effectively reduces training time (Section 4.2). The main objective of our model is to minimize the loss on minority classes while ensuring accurate information exchange during the message-passing process.

### Im-GBK

#### 4.1.1 Heterophily Handling

We build our model on the GBK-GNN [2] model, which performs well for node classification on graphs but is not able to handle class imbalance. GBK-GNN is designed to address the lack of distinguishability in GNNs, which stems primarily from the inability to adaptively adjust weights for different node types based on their distinct homophily properties. Consequently, a bi-kernel feature transformation is employed to capture either homophily or heterophily information. In this work, we therefore use a learnable kernel-based selection gate that aims to distinguish whether a pair of nodes is similar and then selectively chooses the appropriate kernel, i.e., the homophily or the heterophily kernel. The formal expression of the layer-wise transformation is presented below:

\[\mathbf{h}_{i}^{(l)}=\sigma\left(\mathbf{W_{f}h}_{i}^{(l-1)}+\frac{1}{|\mathcal{N}(v_{i})|}\sum_{v_{j}\in\mathcal{N}(v_{i})}\alpha_{ij}\mathbf{W_{s}h}_{j}^{(l-1)}+(1-\alpha_{ij})\,\mathbf{W_{d}h}_{j}^{(l-1)}\right) \tag{2}\]

\[\alpha_{ij}=Sigmoid\left(\mathbf{W_{g}}\left[\mathbf{h}_{i}^{(l-1)},\mathbf{h}_{j}^{(l-1)}\right]\right) \tag{3}\]

\[\mathcal{L}=\mathcal{L}_{0}+\lambda\sum_{l}^{L}\mathcal{L}_{g}^{(l)} \tag{4}\]

where \(\mathbf{W_{s}}\) and \(\mathbf{W_{d}}\) are the kernels for homophilic and heterophilic edges, respectively. The value of \(\alpha_{ij}\) is determined by \(\mathbf{W_{g}}\) and the embeddings of nodes \(i\) and \(j\). The loss function consists of two parts: \(\mathcal{L}_{0}\), a cross-entropy loss for node classification, and \(\mathcal{L}_{g}^{(l)}\), a label-consistency-based cross-entropy for each layer \(l\) (i.e., discriminating whether the labels of a pair of nodes are consistent) that guides the training of the selection gate. A hyper-parameter \(\lambda\) is introduced to balance the two losses. The original GBK-GNN method does not explicitly address the class imbalance issue, which leads to the model being biased toward the majority class.
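For reference, a minimal sketch of the gated bi-kernel propagation in Eqs. (2)-(3) over an edge list is shown below; the ReLU choice for \(\sigma\) and the scatter-based mean aggregation are our assumptions (the actual model follows GBK-GNN [2]), and the returned gate values feed the consistency loss \(\mathcal{L}_{g}^{(l)}\) in Eq. (4).

```python
import torch
import torch.nn as nn

class GBKLayer(nn.Module):
    """One gated bi-kernel layer over an edge list (src -> dst)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W_f = nn.Linear(in_dim, out_dim, bias=False)   # self kernel
        self.W_s = nn.Linear(in_dim, out_dim, bias=False)   # homophily kernel
        self.W_d = nn.Linear(in_dim, out_dim, bias=False)   # heterophily kernel
        self.W_g = nn.Linear(2 * in_dim, 1)                 # selection gate

    def forward(self, h, edge_index):
        src, dst = edge_index                               # (2, E) long tensor
        # Eq. (3): gate from the concatenated endpoint embeddings
        alpha = torch.sigmoid(self.W_g(torch.cat([h[dst], h[src]], dim=-1)))
        msg = alpha * self.W_s(h[src]) + (1 - alpha) * self.W_d(h[src])
        agg = torch.zeros(h.size(0), msg.size(1), device=h.device)
        agg.index_add_(0, dst, msg)                         # sum over neighbors
        deg = torch.zeros(h.size(0), 1, device=h.device)
        deg.index_add_(0, dst, torch.ones_like(alpha))
        agg = agg / deg.clamp(min=1.0)                      # mean over N(v_i)
        return torch.relu(self.W_f(h) + agg), alpha         # Eq. (2); alpha -> L_g
```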
To mitigate this bias, our method makes the GBK-GNN design class-imbalance aware.

#### 4.1.2 Class-Imbalance Handling with Logit Adjusted Loss

When the traditional softmax is used on such imbalanced data, it can produce biases toward the majority class. This is because the loss function used to train the model typically treats all classes equally, regardless of their frequency. As a result, the model tends to optimize for overall accuracy by prioritizing the majority classes while performing poorly on the minority classes. This issue can be addressed by adjusting the logits (i.e., the inputs to the softmax function) for each class to be inversely proportional to the class frequencies in the training data, which effectively reduces the weight of the majority classes and increases the weight of the minority classes [11]. In this study, we calculate the logit adjusted loss as follows:

\[\mathcal{L}_{logit-adjusted}=-\log\frac{e^{f_{y}(x)+\tau\cdot\log\pi_{y}}}{\sum_{y^{\prime}\in[L]}e^{f_{y^{\prime}}(x)+\tau\cdot\log\pi_{y^{\prime}}}} \tag{5}\]

where \(\pi_{y}\) is the estimate of the class prior. In this approach, a label-dependent offset is added to each logit, which differs from the standard softmax cross-entropy. Additionally, the class prior offset is enforced while the logits are being learned rather than being applied post-hoc, as in other methods.

#### 4.1.3 Class-Imbalance Handling with Balanced Softmax

Another approach, called balanced softmax [13], focuses on assigning larger weights to the minority classes and smaller weights to the majority classes, which encourages the model to focus more on the underrepresented classes and improves their performance. The traditional softmax function treats all classes equally, which can result in a bias toward the majority class on imbalanced datasets. Balanced softmax, on the other hand, adjusts the softmax by the class sizes so as to balance the importance of each class, effectively reducing the impact of the majority classes and increasing the impact of the minority classes. Formally, let \(N_{k}\), \(l_{v}\), \(y_{v}\), and \(l_{v,y_{v}}\) denote the size of the \(k\)-th class, the logits of node \(v\), the label of node \(v\), and the logit associated with the true label \(y_{v}\) of node \(v\), respectively. In this study, the balanced softmax loss is calculated as:

\[\mathcal{L}_{balanced-softmax}=-\log\frac{e^{l_{v,y_{v}}+\log N_{y_{v}}}}{\sum_{k\in\mathcal{Y}}e^{l_{v,k}+\log N_{k}}}. \tag{6}\]

#### 4.1.4 Loss Function

We design our loss function to combine these two components that handle heterophily and class imbalance in the proposed Im-GBK model. The learning objective of our model consists of (i) reducing the weight of the majority classes and increasing the weight of the minority classes in the training data, and (ii) improving the model's ability to select the ideal gate. To achieve this objective, we incorporate two loss components into the loss function:

\[\mathcal{L}=\mathcal{L}_{im}+\lambda\sum_{l}^{L}\mathcal{L}_{g}^{(l)}. \tag{7}\]

The first component, from the class-imbalance handler, denoted \(\mathcal{L}_{im}\), applies either the _logit adjusted loss_ \(\mathcal{L}_{logit-adjusted}\) or the _balanced softmax_ \(\mathcal{L}_{balanced-softmax}\) approach to reduce the impact of the majority classes and focus on underrepresented classes.
The second component, denoted \(\mathcal{L}_{g}^{(l)}\), applies a cross-entropy loss at each layer \(l\) of the heterophily handler to improve the model's discriminative power and to adaptively select the gate emphasizing homophily or heterophily. The hyper-parameter \(\lambda\) balances the two losses in the overall loss function.

### Fast Im-GBK

The major limitation of Im-GBK lies in efficiency, as additional time is required to compute the gate result in the heterophily handling component. The challenge arises from the need to label edges connecting unknown nodes, which is typically addressed by learning an edge classifier. Specifically, in the message-passing process, the gate acts as an edge classifier that predicts the homophily level of a given edge. As a result, this process requires additional time proportional to the number of edges in the graph, expressed as \(T_{\mathbf{extra}}=|\mathbf{E}|\times time(e_{ij})\), where \(time(e_{ij})\) represents the time to process one edge. Therefore, we propose to use a graph-level homophily ratio [21] instead of the pair-wise edge classifier. This removes the kernel selection process before training and significantly reduces training time. We use Eq. 2 to aggregate and update the embedding layers as before. The formal definition of the gate generator is:

\[H(\mathbf{G})=\frac{\sum_{\left(v_{i},v_{j}\right)\in\mathbf{E}}\mathbb{I}\left(y_{i}=y_{j}\right)}{|\mathbf{E}|} \tag{8}\]

\[\alpha_{ij}=\begin{cases}\max((1-\epsilon)\mathbb{I}(y_{i}=y_{j}),\epsilon)&v_{i},\,v_{j}\in V_{train}\\ H(\mathbf{G})&\text{otherwise}\end{cases} \tag{9}\]

\[\mathcal{L}_{Fast\ Im-GBK}=\mathcal{L}_{im} \tag{10}\]

where the hyper-parameter \(\epsilon\) serves as the minimum similarity threshold, and \(\mathcal{L}_{im}\) is the aforementioned class-imbalance handling loss.

## 5 Experiments

We address three questions to enhance our understanding of the model: **RQ1:** How do the Im-GBK and Fast Im-GBK models perform in comparison to baselines in node classification for imbalanced scenarios? **RQ2:** How is the efficiency of Fast Im-GBK compared to other baseline models? **RQ3:** What is the role of each component in our Im-GBK model, and do they positively impact the classification performance?

### Experiment Settings

_Datasets_. We conduct experiments on four datasets from PyTorch Geometric [3], which are commonly used in the graph neural network literature, to evaluate the efficiency of our proposed models for the node classification task when classes are imbalanced. Table 1 presents a summary of the datasets. In addition, using the CiteSeer and Cora datasets, we generate two extreme instances in which each randomly selected minority class has only five training examples.

_Experiment environment_. For each dataset, we randomly select 60% of the samples for training, 20% for validation, and the remaining 20% for testing. We use the Adam optimizer and set the learning rate and weight decay to 0.001 and \(5e^{-4}\), respectively. All baselines follow the same setting, with a hidden dimension of 128. We run each experiment with the same random seeds to ensure reproducibility. Model training is done on an NVIDIA GeForce RTX 3090 (24GB) GPU with 90GB memory. The code depends on PyTorch 1.7.0 and PyG 2.0.4.

_Evaluation metric_. The quality of classification is assessed by average accuracy, AUC-ROC, and F1 scores. Each experiment is repeated five times to avoid randomness when computing the final value.
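A small sketch of this evaluation step using scikit-learn is given below; macro-averaged F1 and one-vs-rest AUC are our assumptions for the multi-class setting.

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate(y_true, y_prob):
    """y_true: (n,) labels; y_prob: (n, n_classes) class probabilities."""
    y_pred = y_prob.argmax(axis=1)
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_prob, multi_class="ovr"),
        "F1": f1_score(y_true, y_pred, average="macro"),
    }
```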
_Baselines_. We evaluate our approach against representative and state-of-the-art approaches, including three classic GNN models (GCN [7], GAT [17], GraphSAGE [4]) combined with three traditional imbalance techniques (over-sampling, re-weighting, SMOTE [1]), the original GBK-GNN [2], GraphSMOTE [19], and TAM [16] (we choose the combinations GCN+TAM+Balanced Softmax and GCN+TAM+ENS, referred to as GCN-TAM-BS and GCN-TAM-ENS, respectively).

### Comparisons with baselines (RQ1)

In this section, we analyze the performance against classic graph neural networks (GNNs) and traditional imbalance learning approaches. As shown in the learning objective (Section 4.1), \(\lambda\) plays an important role in the trade-off between classification error and gate consistency. Thus, we first explore the impact of the hyperparameter \(\lambda\) on the performance. We vary \(\lambda\) between 0 and 5 with an interval of 0.5; the results are shown in Fig. 2. According to our analysis, the impact of \(\lambda\) on the results is insignificant as long as it is nonzero. Therefore, in the following experiments, we always set \(\lambda=1\), considering the overall trends of both methods on all datasets.

We then examine the performance of the different models. The results on the original datasets and the extreme datasets are reported in Table 2 and Table 3, respectively. From the results, it can be observed that:

* The experiments demonstrate that the Im-GBK models using logit adjusted loss and balanced softmax (denoted Im-GBK (LogitAdj) and Im-GBK (BLSM), respectively) achieve results comparable to or better than state-of-the-art methods on most original datasets. Specifically, on Cora and CiteSeer, Im-GBK (LogitAdj) and Im-GBK (BLSM) are comparable to GraphSMOTE and GCN-TAM, while they outperform them on Wiki. Furthermore, Fast Im-GBK demonstrates superiority on Wiki and Coauthor-CS, and all proposed methods perform better than the baselines in the extremely imbalanced cases.
* The models designed specifically for imbalance problems perform better in the extreme cases (Cora Extreme and CiteSeer Extreme); refer to Table 3.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline & **Hom.Ratio** & **Imbalance Ratio** & **Nodes** & **Edges** & **Features** & **Classes** \\ \hline CiteSeer [14] & 0.736 & 2.655 & 3327 & 9104 & 3703 & 6 \\ Cora [14] & 0.810 & 4.544 & 2708 & 10556 & 1433 & 7 \\ Wiki [14] & 0.712 & 45.111 & 2405 & 17981 & 4973 & 17 \\ Coauthor CS [15] & 0.808 & 35.051 & 18333 & 163788 & 6805 & 15 \\ \hline \end{tabular} \end{table} Table 1: Statistics of the node classification datasets.

Our models show superior performance in the extreme circumstances, consistently achieving the best results w.r.t. most metrics, whereas GCN-TAM and GraphSMOTE, as models specifically designed for the imbalanced classification task, also exhibit their capability in differentiating minority classes.

* Models designed to account for graph heterophily generally outperform classic GNNs and modified classic GNNs. This also empirically validates, to some extent, the rationale of exploiting heterophilic neighborhoods for imbalanced classification.

Overall, the experiments demonstrate that the proposed methods are effective for imbalanced node classification and offer performance better than or comparable to the state of the art. In extreme cases, our approaches excel and show a better capability to differentiate minority classes.

### Comparison in Efficiency (RQ2)

Section 4.2 noted that the proposed Im-GBK model can be time-consuming due to the gate component of its loss function.
To address this, we replace the kernel selection process by using a graph-level homophily ratio. Table 4 presents training time comparisons, revealing that models using gate-selection mechanisms, like GBK-GNN and Im-GBK, require significantly more training time. For example, while most models could complete one epoch within 1 second, GBK-GNN and Im-GBK took over 10 seconds and around 160 seconds to train one epoch on the CS dataset. GraphSMOTE also requires more time than Fast Im-GBK because it has to generate several synthetic nodes for each minority class. However, the results demonstrate that our proposed Fast Im-GBK model shows a significant reduction in training time compared to GBK-GNN and GraphSMOTE. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{Cora} & \multicolumn{3}{c|}{CiteSeer} & \multicolumn{3}{c|}{Wiki} & \multicolumn{3}{c}{Coauthor-CS} \\ \hline & ACC & AUC & F-1 & ACC & AUC & F-1 & ACC & AUC & F-1 & ACC & AUC & F-1 \\ \hline GCN & 0.853 & 0.981 & 0.847 & 0.719 & 0.905 & 0.687 & 0.664 & 0.875 & 0.592 & 0.935 & 0.995 & 0.914 \\ GCN+SMOTE & 0.855 & 0.980 & 0.848 & 0.709 & 0.904 & 0.673 & 0.649 & 0.865 & 0.605 & 0.939 & 0.996 & 0.921 \\ GCN+Re-weight & 0.848 & 0.981 & 0.840 & 0.712 & 0.905 & 0.683 & 0.672 & 0.873 & 0.631 & 0.935 & 0.996 & 0.915 \\ GAT & 0.853 & 0.969 & 0.846 & 0.730 & 0.896 & 0.702 & 0.243 & 0.643 & 0.191 & 0.889 & 0.975 & 0.822 \\ GAT+SMOTE & 0.839 & 0.970 & 0.822 & 0.716 & 0.876 & 0.887 & 0.281 & 0.691 & 0.250 & 0.370 & 0.721 & 0.185 \\ GAT+Re-weight & 0.827 & 0.965 & 0.821 & 0.703 & 0.888 & 0.677 & 0.125 & 0.580 & 0.111 & 0.913 & 0.987 & 0.891 \\ GraphSAGE & 0.805 & 0.968 & 0.780 & 0.697 & 0.895 & 0.676 & 0.691 & 0.877 & 0.578 & 0.905 & 0.988 & 0.829 \\ GraphSAGE+SMOTE & 0.798 & 0.971 & 0.776 & 0.724 & 0.902 & 0.702 & 0.693 & 0.884 & 0.688 & 0.633 & 0.947 & 0.397 \\ GraphSAGE+Re-weight & 0.794 & 0.966 & 0.769 & 0.703 & 0.888 & 0.677 & 0.668 & 0.884 & 0.563 & 0.898 & 0.986 & 0.816 \\ \hline GraphSMOTE & 0.872 & 0.984 & 0.864 & **0.769** & **0.929** & **0.741** & 0.569 & 0.859 & 0.440 & 0.940 & 0.996 & 0.926 \\ GCN-TAM & **0.878** & 0.931 & **0.868** & 0.752 & 0.840 & 0.727 & 0.651 & 0.823 & 0.563 & 0.929 & 0.956 & 0.914 \\ GBK-GNN-BS & 0.876 & 0.974 & 0.866 & 0.730 & 0.915 & 0.707 & 0.681 & 0.881 & 0.611 & 0.936 & **0.997** & 0.918 \\ Im-GBK (LogitAdj) & 0.866 & 0.979 & 0.853 & 0.728 & 0.912 & 0.699 & 0.674 & 0.887 & 0.622 & 0.932 & 0.996 & 0.912 \\ Im-GBK (BLSM) & 0.861 & 0.979 & 0.846 & 0.721 & 0.909 & 0.700 & 0.677 & 0.888 & 0.615 & 0.933 & 0.996 & 0.914 \\ \hline Fast Im-GBK & 0.876 & **0.988** & 0.863 & 0.766 & 0.926 & 0.738 & **0.723** & **0.911** & **0.655** & **0.951** & **0.997** & **0.939** \\ \hline \end{tabular} \end{table} Table 2: Comparison of different methods on original datasets. ### Ablation Analysis (RQ3) Subsection 5.3 showed that the proposed method exhibits clear advantages in performance compared to other baselines in differentiating minority classes for extremely imbalanced graphs. To further investigate the fundamental factors underlying the performance improvements of our proposed method in Im-GBK, we conduct ablation analyses using one of the extreme cases, CiteSeer Extreme. We show the effectiveness of the model handling class imbalance classification by ablating the model Class-Imbalance Handler and Heterophily Handler, respectively. In Table 5, 'Class-Imbalance Handling Loss' represents two Class-Imbalance Handling losses introduced in Section 4.1. 
'Heterophily handling' refers to the method introduced in Section 4.1 to capture graph heterophily, and '\(\times\)' means this part is ablated. Considering all strategies, it can be observed from Table 5 that dropping either the 'Class-Imbalance Handling Loss' or the 'Heterophily handling' component results in a decrease in performance.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{Cora Extreme} & \multicolumn{3}{c}{CiteSeer Extreme} \\ \hline & ACC & AUC & F-1 & ACC & AUC & F-1 \\ \hline GCN & 0.746 & 0.929 & 0.647 & 0.697 & 0.877 & 0.607 \\ GCN+SMOTE & 0.745 & 0.930 & 0.648 & 0.699 & 0.877 & 0.610 \\ GCN+Re-weight & 0.756 & 0.934 & 0.667 & 0.700 & 0.878 & 0.612 \\ GAT & 0.679 & 0.878 & 0.532 & 0.694 & 0.880 & 0.605 \\ GAT+SMOTE & 0.653 & 0.861 & 0.427 & 0.677 & 0.847 & 0.593 \\ GAT+Re-weight & 0.689 & 0.869 & 0.516 & 0.681 & 0.873 & 0.596 \\ GraphSAGE & 0.657 & 0.873 & 0.498 & 0.682 & 0.866 & 0.596 \\ GraphSAGE+SMOTE & 0.671 & 0.884 & 0.494 & 0.680 & 0.871 & 0.593 \\ GraphSAGE+Re-weight & 0.674 & 0.878 & 0.509 & 0.687 & 0.873 & 0.598 \\ GraphSMOTE & 0.770 & 0.928 & 0.674 & 0.703 & 0.893 & 0.612 \\ GCN-TAM-BS & **0.829** & 0.888 & **0.797** & 0.718 & 0.801 & 0.649 \\ GCN-TAM-ENS & 0.790 & 0.884 & 0.769 & 0.680 & 0.797 & **0.659** \\ GBK-GNN & 0.692 & 0.905 & 0.521 & 0.696 & 0.892 & 0.607 \\ \hline Im-GBK (LogitAdj) & 0.717 & 0.914 & 0.599 & 0.703 & 0.896 & 0.615 \\ Im-GBK (BLSM) & 0.800 & 0.931 & 0.761 & 0.705 & 0.897 & 0.655 \\ \hline Fast Im-GBK & 0.727 & **0.941** & 0.570 & **0.737** & **0.899** & 0.641 \\ \hline \end{tabular} \end{table} Table 3: Comparison of different methods on extreme datasets.

Figure 2: Experimental results on Cora (extreme)

## 6 Conclusion

In this paper, we studied the problem of imbalanced classification on graphs from the perspective of graph heterophily. We observed that if a model cannot handle heterophilic neighborhoods in graphs, its ability to address imbalanced classification will be impaired. To address the graph imbalance problem effectively, we proposed a novel framework, Im-GBK, and its faster version, Fast Im-GBK, that simultaneously tackles heterophily and class imbalance. Our framework overcomes the limitations of previous techniques by achieving higher efficiency while maintaining comparable performance. Extensive experiments were conducted on various real-world datasets, demonstrating that our model outperforms most baselines. Furthermore, a comprehensive parameter analysis was performed to validate the efficacy of our approach. In future research, we aim to explore alternative methods for modeling graph heterophily and extend our approach to real-world applications, such as fraud and spammer detection.
2310.07416
A Novel Voronoi-based Convolutional Neural Network Framework for Pushing Person Detection in Crowd Videos
Analyzing the microscopic dynamics of pushing behavior within crowds can offer valuable insights into crowd patterns and interactions. By identifying instances of pushing in crowd videos, a deeper understanding of when, where, and why such behavior occurs can be achieved. This knowledge is crucial to creating more effective crowd management strategies, optimizing crowd flow, and enhancing overall crowd experiences. However, manually identifying pushing behavior at the microscopic level is challenging, and the existing automatic approaches cannot detect such microscopic behavior. Thus, this article introduces a novel automatic framework for identifying pushing in videos of crowds on a microscopic level. The framework comprises two main components: i) Feature extraction and ii) Video labeling. In the feature extraction component, a new Voronoi-based method is developed for determining the local regions associated with each person in the input video. Subsequently, these regions are fed into EfficientNetV1B0 Convolutional Neural Network to extract the deep features of each person over time. In the second component, a combination of a fully connected layer with a Sigmoid activation function is employed to analyze these deep features and annotate the individuals involved in pushing within the video. The framework is trained and evaluated on a new dataset created using six real-world experiments, including their corresponding ground truths. The experimental findings indicate that the suggested framework outperforms seven baseline methods that are employed for comparative analysis purposes.
Ahmed Alia, Mohammed Maree, Mohcine Chraibi, Armin Seyfried
2023-10-11T12:01:52Z
http://arxiv.org/abs/2310.07416v1
A Novel Voronoi-based Convolutional Neural Network Framework for Pushing Person Detection in Crowd Videos ###### Abstract Analyzing the microscopic dynamics of pushing behavior within crowds can offer valuable insights into crowd patterns and interactions. By identifying instances of pushing in crowd videos, a deeper understanding of when, where, and why such behavior occurs can be achieved. This knowledge is crucial to creating more effective crowd management strategies, optimizing crowd flow, and enhancing overall crowd experiences. However, manually identifying pushing behavior at the microscopic level is challenging, and the existing automatic approaches cannot detect such microscopic behavior. Thus, this article introduces a novel automatic framework for identifying pushing in videos of crowds on a microscopic level. The framework comprises two main components: i) Feature extraction and ii) Video labeling. In the feature extraction component, a new Voronoi-based method is developed for determining the local regions associated with each person in the input video. Subsequently, these regions are fed into EfficientNetV1B0 Convolutional Neural Network to extract the deep features of each person over time. In the second component, a combination of a fully connected layer with a Sigmoid activation function is employed to analyze these deep features and annotate the individuals involved in pushing within the video. The framework is trained and evaluated on a new dataset created using six real-world experiments, including their corresponding ground truths. The experimental findings indicate that the suggested framework outperforms seven baseline methods that are employed for comparative analysis purposes. **Keywords: Artificial Intelligence, Deep Learning, Complex Data Analytics, Computer Vision, Intelligent Systems, Pushing Detection, Crowd Management**

## 1 Introduction

With the rapid development of urbanization, dense crowds have become widespread in various locations, such as religious sites, train stations, concerts, stadiums, malls, and famous tourist attractions. In such highly dense crowds, pushing behavior can easily arise. Such behavior could increase the crowd's density, potentially posing a threat not only to people's comfort but also to their safety [1, 2, 3, 4, 5]. People in crowds start pushing for different reasons. It could be for saving their lives from fire [6, 7, 8] or other hazards, catching a bargain on sale or simply accessing an over-crowded subway train [9, 10], or gaining access to a venue [11, 12, 3, 4, 13]. Understanding the microscopic dynamics of pushing plays a pivotal role in effective crowd management, helping safeguard the crowd from tragedies and promoting overall well-being [1, 14]. This has led to several studies aiming to comprehend pushing dynamics, especially in crowded event entrances [15, 16, 17, 18, 19, 20]. Lugering et al. [15] defined pushing as a behavior that pedestrians use to reach a target (like accessing an event) faster. This behavior involves pushing others using arms, shoulders, elbows, or the upper body, as well as utilizing gaps among neighboring people to navigate forward quicker. The study [15] introduced a manual rating method to understand pushing dynamics at the microscopic level. The method relies on two trained psychologists to classify pedestrians' behaviors over time in a video of crowds into pushing or non-pushing categories, helping to know when, where, and why pushing behavior occurs.
However, this manual method is time-consuming, tedious, and prone to errors in some scenarios. Additionally, it requires trained observers, which may not always be feasible. Consequently, there is an increasing demand for an automatic approach to identify pushing at the microscopic level within crowd videos. Detecting pushing behavior automatically is a demanding task that falls within the realm of computer vision. This challenge arises from several factors, such as dense crowds gathering at event entrances, the varied manifestations of pushing behavior, and the significant resemblance and overlap between pushing and non-pushing actions. Recently, machine learning algorithms, particularly Convolutional Neural Network (CNN) architectures, have shown remarkable success in various computer vision tasks, including face recognition [21], object detection [22], and abnormal behavior detection [23]. One of the key reasons for this success is that CNNs can learn relevant features [24, 25, 26] automatically from data without human supervision [27, 28]. As a result of CNN's success in abnormal behavior detection, which is closely related to pushing detection, some studies have started to automate pushing detection using CNN models [16, 17, 29]. For instance, Alia et al. [16, 30] introduced a deep learning framework that leverages deep optical flow and CNN models for pushing patch detection in video recordings. Another study [29] introduced a fast GPU-based hybrid deep neural network model to enhance the speed of video analysis and pushing patch identification. Similarly, the authors of [17, 31, 32] developed an intelligent framework that combines deep learning algorithms, a cloud environment, and live camera stream technology to accurately annotate pushing patches in crowds in real time. Yet, the current automatic methods focus on identifying pushing behavior at the level of regions (macroscopic level) rather than at the level of individuals (microscopic level), where each region can contain a group of persons. In other words, the automatic approaches reported in the literature cannot detect pushing at the microscopic level, limiting their contribution to comprehending pushing dynamics in crowds. For example, they cannot accurately determine the relationship between the number of individuals involved in pushing behavior and the onset of critical situations, thereby hindering a precise understanding of when a situation may escalate to a critical level. To overcome the limitations of the aforementioned methods, this article introduces a novel Voronoi-based CNN framework for automatically identifying instances of microscopic pushing behavior from crowd video recordings. The proposed framework comprises two components: feature extraction and labeling. The first component utilizes a novel Voronoi-based EfficientNetV1B0 CNN architecture for feature extraction. The Voronoi [33]-based method is used to identify the local region of each person over time, and then the EfficientNetV1B0 model [34] extracts deep features from these regions. In this article, the local region is defined as the zone focusing only on a single person (target person), including his surrounding spaces and physical interactions with his direct neighbors. This region is crucial in guiding the proposed framework to focus on microscopic behavior. On the other hand, the second component employs a fully connected layer with a Sigmoid activation function to analyze the deep features and detect the pushing persons.
The framework (CNN and fully connected layer) is trained from scratch on a dataset of labeled local regions generated from six real-world video experiments with their ground truths [35]. The main contributions of this work are summarized as follows: 1. To the best of our knowledge, this article presents the first framework for automatically identifying pushing at the individual level in videos of human crowds. 2. This article introduces a novel feature extraction method for characterizing microscopic behavior in videos of crowds, particularly pushing behavior. 3. The article creates a fresh dataset derived from local regions and includes data from six real-world experiments, each paired with corresponding ground truths. This dataset represents a valuable resource for future research in this domain. The remainder of this article is organized as follows. Section 2 reviews some automatic approaches to abnormal behavior detection in videos of crowds. The architecture of the proposed framework is introduced in Section 3. Section 4 presents the processes of training and evaluating the framework. Section 5 discusses experimental results and comparisons. Finally, the conclusion and future work are summarized in Section 6. ## 2 Related Work This section begins by providing an overview of CNN-based approaches for automatic video analysis and detecting abnormal behavior in crowds. It then discusses the methods for automatically detecting pushing patches in crowd videos. ### CNN-based Abnormal Behavior Detection Typically, behavior is considered abnormal when seen as unusual under specific contexts. This implies that the definition of abnormal behavior depends on the situation [36]. To illustrate, running inside a bank might be considered abnormal behavior, whereas running at a traffic light could be viewed as normal [37]. Several behaviors have been addressed automatically in abnormal behavior detection applications in crowds, including walking in the wrong direction [38], running away [39], sudden people grouping or dispersing [40], human falls [41], suspicious behavior, violent acts [42], abnormal crowds [43], hitting, and kicking [44]. Tay et al. [36] developed a CNN-based method for identifying abnormal activities from videos. The researchers specifically designed and trained a customized CNN to extract features and label samples, utilizing a dataset comprising both normal and abnormal samples. In another study, Alaffi et al. [45] introduced two approaches for detecting abnormal behaviors in crowd videos, varying in scale from small to large. For detecting anomaly behaviors in a small-scale crowd at the object level, the first method utilizes a hybrid approach that combines a pre-trained CNN model with a random forest classifier. On the other hand, the second method employs a two-step approach to identify abnormal behaviors in a large-scale crowd. Initially, a pre-trained model is used as the first classifier to identify frames containing abnormal behaviors. Subsequently, the second classifier, specifically You Only Look Once (version 2), is utilized to analyze the identified frames and detect abnormal behaviors exhibited by individuals. Nevertheless, constructing an accurate CNN classifier requires a substantial training dataset, often unavailable for many human behaviors. To address the limited availability of large datasets containing both normal and abnormal behaviors, some researchers have employed one-class classifiers using datasets that exclusively consist of normal behaviors. 
Creating or acquiring a dataset containing only normal behavior is comparatively easier than obtaining a dataset that includes both normal and abnormal behaviors [46, 47]. The fundamental concept behind the one-class classifier is to learn exclusively from normal behaviors, thereby establishing a class boundary between normal and undefined (abnormal) classes. For example, Sabokrou et al. [46] utilized a pre-trained CNN to extract motion and appearance information from crowded scenes. They then employed a one-class Gaussian distribution to build the classifier, utilizing datasets of normal behavior. Similarly, in [47, 48], the authors constructed one-class classifiers by leveraging a dataset composed exclusively of normal samples. In [47], Xu et al. employed a convolutional variational autoencoder to extract features, followed by the use of multiple Gaussian models to detect abnormal behavior. Meanwhile, in [48], a pre-trained CNN model was employed for feature extraction, while one-class support vector machines were utilized for detecting abnormal behavior. In a separate study, Ilyas et al. [49] utilized a pre-trained CNN along with a gradient sum of the frame difference to extract meaningful features. Subsequently, they trained three support vector machines on normal behavior data to identify abnormal behaviors. In general, one-class classifiers are frequently employed when the target behavior class or abnormal behavior is rare or lacks a clear definition [50]. However, pushing behavior is well-defined and not rare, particularly in high-density and competitive situations. Furthermore, this type of classifier considers new normal behavior as abnormal. In order to address the limitations of CNN-based and one-class classifier approaches, multiple studies have explored the combination of multi-class CNNs with one or more handcrafted feature descriptors [23, 49]. In these hybrid approaches, the descriptors are employed to extract valuable information from the data. Subsequently, the CNN learns and identifies relevant features and classifications based on the extracted information. For instance, Duman et al. [37] employed the classical Farneback optical flow method [51] and a CNN to identify abnormal behavior. They used Farneback and the CNN to estimate direction and speed information and then applied a convolutional long short-term memory network to build the classifier. Hu et al. [52] employed a combination of the histogram of gradient and a CNN for feature extraction, while a least-squares support vector machine was used for classification. Direkoglu [23] utilized the Lucas-Kanade optical flow method and a CNN to extract relevant features and identify "escape and panic behaviors". Almazroey et al. [53] used Lucas-Kanade optical flow, a pre-trained CNN, and feature selection methods (specifically neighborhood component analysis) to extract relevant features. These extracted features were then used to train a support vector machine classifier. In another study [54], Zhou et al. introduced an approach based on CNN for detecting and localizing anomalous activities. Their approach involved integrating optical flow with a CNN for feature extraction and utilizing a CNN for the classification task. Hybrid-based approaches could be more suitable for automatically detecting pushing behavior due to the limited availability of labeled pushing data.
Nevertheless, most of the reviewed hybrid-based approaches for abnormal behavior detection may be inefficient for detecting pushing since: 1) the descriptors used in these approaches can only extract limited essential data from high-density crowds to represent pushing behavior; 2) some CNN architectures commonly utilized in these approaches may not be effective in dealing with the increased variations within pushing behavior (intra-class variance) and the substantial resemblance between pushing and non-pushing behaviors (high inter-class similarity), which can potentially result in misclassification.

### CNN-based Pushing Behavior Detection

In more recent times, a few approaches that merge effective descriptors with robust CNN architectures have been developed for detecting pushing regions in crowds. For example, Alia et al. [16] introduced a hybrid deep learning and visualization framework to aid researchers in automatically detecting pushing behavior in videos. The framework combines deep optical flow and visualization methods to extract the visual motion information from the input video. This information is then analyzed using an EfficientNetV1B0-based CNN and false reduction algorithms to identify and label pushing patches in the video. The framework has a drawback in terms of speed, as the motion extraction process is based on a CPU-based optical flow method, which is slow. Another study [29] presented a fast hybrid deep neural network model that labels pushing patches in short videos lasting only two seconds. The model is based on an EfficientNetB1-based CNN and GPU-based deep optical flow. To support the early detection of pushing patches within crowds, the study [17] presented a cloud-based deep learning system. The primary goal of such a system is to offer organizers and security teams timely and valuable information that can enable early intervention and mitigate hazardous situations. The proposed system relies mainly on a fast and accurate pre-trained deep optical flow, an adapted version of the EfficientNetV2B0-based CNN, a cloud environment, and live stream technology. Simultaneously, the optical flow model extracts motion characteristics of the crowd in the live video stream, and the classifier analyzes the motion to label pushing patches directly on the stream. Moreover, the system stores the annotated data in the cloud storage, which is crucial to assist planners and organizers in evaluating their events and enhancing their future plans. To the best of our knowledge, current pushing detection approaches in the literature primarily focus on identifying pushing at the patch level rather than at the individual level. However, identifying the individuals involved in pushing would be more helpful for understanding the pushing dynamics. Hence, this article introduces a new framework for detecting pushing individuals in videos of crowds. The following section provides a detailed discussion of the framework.

## 3 Proposed Framework Architecture

This section describes the proposed framework for automatic pushing person detection in videos of crowds. As depicted in Fig. 1, there are two main components: feature extraction and labeling. The first component extracts the deep features from each individual's behavior. In contrast, the second component analyzes the extracted deep features and annotates the pushing persons within the input video.

Figure 1: The architecture of the proposed framework. In \(f_{t}\), \(f\) signifies an extracted frame, while \(t\) indicates its timestamp in seconds, counted from the beginning of the input video (with \(t\) taking values like \(1,2,3,\ldots\)). For a target person \(i\) at \(f_{t}\), \(\mathcal{L}_{i}(f_{t})\) denotes the local region, while \(\mathcal{N}_{i}(f_{t})\) represents the direct neighbors. FC stands for fully connected layer, while GAP refers to global average pooling.

The following sections will discuss both components in more detail.

### Feature Extraction Component

This component aims to extract deep features from each individual's behavior, which can be used to classify pedestrians as pushing or non-pushing. To accomplish this, the component consists of two modules: Voronoi-based local region extraction and EfficientNetV1B0-based deep feature extraction. The first module selects a frame every second from the input video and identifies the local region of each person within those extracted frames. Subsequently, the second module extracts deep features from each local region and feeds them to the next component for pedestrian labeling. Before diving into these modules, let us define the local region term at one frame. A frame \(f_{t}\) is captured every second from the input video. Here, \(t\) represents the timestamp, in seconds, since the start of the video and can range from 1 to \(T\), where \(T\) is the total duration of the video in seconds. We can analyze individual pedestrians within each of these frames, such as \(f_{t}\). For instance, consider a pedestrian \(i\) positioned at \(\langle x,y\rangle_{i}\). Let \(\mathcal{N}_{i}\) denote the set of pedestrians whose Voronoi cells are adjacent to that of pedestrian \(i\). Specifically, pedestrian \(j\) belongs to \(\mathcal{N}_{i}\) if and only if their Voronoi cells share a boundary. The local region for pedestrian \(i\) at \(f_{t}\), \(\mathcal{L}_{i}\), forms a two-dimensional closed polygon, defined by the positions of all pedestrians in \(\mathcal{N}_{i}\). As illustrations, Fig. 2 provides examples of both \(\mathcal{N}_{i}\) (panel a) and \(\mathcal{L}_{i}\) (panel b). The region \(\mathcal{L}_{i}\) encapsulates the crowd dynamics around individual \(i\), reflecting potential interactions between \(i\) and its neighbors \(\mathcal{N}_{i}\). Notably, the characteristics around a pushing individual might diverge from those around a non-pushing one, a distinction pivotal for highlighting pushing behaviors. Fig. 2b showcases examples of such \(\mathcal{L}_{i}\) regions for pushing and non-pushing individuals. The following section introduces a novel method for extracting \(\mathcal{L}_{i}\).

#### 3.1.1 Voronoi-based Local Region Extraction

This section presents a novel method for extracting the local regions of pedestrians from the input video over time. The technique consists of several steps: frame extraction, dummy points generation, direct neighbor identification, and local region extraction. Based on the definition of \(\mathcal{L}_{i}\) presented earlier, the determination of each \(i\)'s regional boundary is contingent upon \(\mathcal{N}_{i}\) at \(f_{t}\) (\(\mathcal{N}_{i}(f_{t})\)). Nonetheless, this definition might not always guarantee the inclusion of every \(i\) within their respective local region. This can be particularly evident when \(i\) at \(f_{t}\) lacks neighboring points from all directions, exemplified by person 37 in Fig. 3a. To address this issue, we introduce a step to generate dummy points. This involves adding points around each \(i\) at \(f_{t}\) in areas where they lack direct neighbors.
This ensures every \(i\) remains encompassed within their local regions, as illustrated by person 37 in Fig. 3c. For this purpose, as depicted in Fig. 3b and Algorithm 1, this step first involves reading the trajectory data of \(i\) that corresponds to \(f_{t}\) (Algorithm 1, lines 1-8). Concurrently, the area surrounding every \(i\) is divided into four equal square regions, each of which can accommodate at least one pedestrian (Algorithm 1, lines 9-17). The location \(\langle x,y\rangle_{i}\) corresponds to the first 2D coordinate of each region (Algorithm 1, lines 12-13). In contrast, the remaining 2D coordinates (\(\langle x1,y1\rangle\),\(\langle x2,y2\rangle\),\(\langle x3,y3\rangle\), \(\langle x4,y4\rangle\)) required for identifying the regions can be determined by: \[\begin{split}\langle x1,y1\rangle&=\langle x-r,y+r \rangle\\ \langle x2,y2\rangle&=\langle x+r,y+r\rangle\\ \langle x3,y3\rangle&=\langle x+r,y-r\rangle\\ \langle x4,y4\rangle&=\langle x-r,y-r\rangle, \end{split} \tag{1}\] where \(r\) is the dimension of each square region. Subsequently, each region is checked to verify whether it contains any pedestrians. In case a region is empty, a dummy point at its center is appended to the input trajectory data. Fig. 3b illustrates an example of four regions surrounding person 37 and two dummy points (yellow dots in the first and second empty regions); see Algorithm 1, lines 18-24. After generating the dummy points for all \(i\) at \(f_{t}\), the trajectory data is forwarded to the next step, direct neighbor identification. Fig. 3c shows a crowd with dummy points in a single \(f_{t}\).

Figure 2: An illustration of direct neighbors (a) and examples of local regions (b). The red circles represent individuals engaged in pushing, while the green circles represent individuals not involved in pushing. Direct neighbors \(j\) of a person \(i\) are indicated with blue circles.

Figure 3: An illustration of the effect of dummy points on creating the local regions, as well as a sketch of the dummy points generation technique. a) \(\mathcal{L}_{37}\) and \(\mathcal{L}_{3}\) without dummy points. b) a sketch of the dummy points generation technique. c) \(\mathcal{L}_{37}\) and \(\mathcal{L}_{3}\) with dummy points. The white polygon represents the border of the local regions. Yellow small circles refer to the generated dummy points, while black points in b denote the positions of pedestrians. \(r\) is the dimension of each square.

Figure 4: a) Example of a simple Voronoi decomposition. b) Example of bounded Voronoi decomposition. Both are constructed using 30 pedestrian points and 21 dummy points.
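A minimal sketch of this dummy-point generation step (Eq. (1) and the spirit of Algorithm 1) might look as follows; the quadrant bookkeeping is an illustrative reading of the algorithm rather than a line-by-line reimplementation, and checking emptiness against the original pedestrians only (ignoring previously added dummies) is an assumption:

```python
import numpy as np

def generate_dummy_points(points, r):
    """Append a dummy point at the center of every empty square region.

    points: (n, 2) array of pedestrian positions at one frame f_t.
    r: side length of each of the four squares around a pedestrian (Eq. (1)).
    """
    dummies = []
    for k, (x, y) in enumerate(points):
        # The four r-by-r quadrants around <x, y>, each given by two corners.
        squares = [((x - r, y), (x, y + r)),   # upper-left
                   ((x, y), (x + r, y + r)),   # upper-right
                   ((x, y - r), (x + r, y)),   # lower-right
                   ((x - r, y - r), (x, y))]   # lower-left
        for (x0, y0), (x1, y1) in squares:
            # Is any other pedestrian inside this square?
            occupied = any(
                x0 <= px <= x1 and y0 <= py <= y1
                for q, (px, py) in enumerate(points) if q != k
            )
            if not occupied:  # empty region: add a dummy point at its center
                dummies.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
    if not dummies:
        return np.asarray(points)
    return np.vstack([np.asarray(points), np.asarray(dummies)])
```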
The third step, direct neighbor identification, employs a combination of the Voronoi Diagram [33] and the Convex Hull [55] to find \(\mathcal{N}_{i}(f_{t})\) from the input trajectory data with dummy points. A Voronoi Diagram is a method for partitioning a plane into several polygonal regions (named Voronoi cells, \(\mathcal{V}\)s) based on a set of objects/points (called sites) [33]. Each \(\mathcal{V}\) contains edges and vertices, which form its boundary. Fig. 4a depicts an example of a Voronoi Diagram with 51 \(\mathcal{V}\)s of 51 sites, where black and yellow dots denote the sites. In the same figure, the set of sites contains \(\langle x,y\rangle_{i}\) (dummy points included) at a specific \(f_{t}\); each \(\mathcal{V}_{i}\) includes only one site \(\langle x,y\rangle_{i}\), and all points within \(\mathcal{V}_{i}\) are closer to site \(\langle x,y\rangle_{i}\) than to any other site \(\langle x,y\rangle_{q}\), where \(q\) ranges over all pedestrians at that \(f_{t}\) and \(q\neq i\). Furthermore, \(\mathcal{V}_{i}\) and \(\mathcal{V}_{q}\) at \(f_{t}\) are considered adjacent if they share at least one edge or two vertices. For instance, as seen in Fig. 4, \(\mathcal{V}_{4}\) and \(\mathcal{V}_{34}\) are adjacent, while \(\mathcal{V}_{4}\) and \(\mathcal{V}_{3}\) are not adjacent. Since the Voronoi Diagram contains unbounded cells, determining the adjacent cells for each \(\mathcal{V}_{i}\) at \(f_{t}\) may yield inaccurate results. For instance, most cells of yellow points, which are located at the scene's borders, are unbounded cells, as depicted in Fig. 4a. For further clarity, \(\mathcal{V}_{i}(f_{t})\) becomes unbounded when the site of \(i\) is a vertex of the convex hull of all pedestrians at \(f_{t}\). As a result, the Voronoi Diagram may not provide accurate results when determining adjacent cells, which is a crucial factor in identifying \(\mathcal{N}_{i}(f_{t})\). To overcome this limitation, the Convex Hull [55] is utilized to bound the Voronoi Diagram (its unbounded cells), as shown in Fig. 4b. The Convex Hull is the minimum convex shape that encompasses a given set of points, forming a polygon that connects the outermost points of the set while ensuring that all internal angles are less than \(180^{\circ}\)[56]. For this purpose, the intersection of each \(\mathcal{V}_{i}(f_{t})\) with the Convex Hull of all \(i\) at \(f_{t}\) is calculated, and then the \(\mathcal{V}_{i}(f_{t})\) in the diagram are updated based on the intersections to obtain the bounded Voronoi Diagram of all \(i\) at \(f_{t}\) (Algorithm 2, lines 5-12). In more detail, the Convex Hull of all \(i\) at \(f_{t}\) is computed (Algorithm 2, line 8). After that, the intersection between each \(\mathcal{V}_{i}(f_{t})\) and the Convex Hull at \(f_{t}\) is computed. Finally, we update the Voronoi Diagram at each \(f_{t}\) using the calculated intersections to obtain the corresponding bounded diagram, as shown in Fig. 4b (Algorithm 2, lines 8-11). After creating the bounded Voronoi Diagram, the individuals in the directly adjacent Voronoi cells of \(\mathcal{V}_{i}(f_{t})\) constitute \(\mathcal{N}_{i}(f_{t})\) (Algorithm 2, lines 12-20). For example, in Fig. 4b, the directly adjacent Voronoi cells of \(\mathcal{V}_{3}\) at \(f_{t}\) are \(\{\mathcal{V}_{2},\mathcal{V}_{22},\mathcal{V}_{35},\mathcal{V}_{15},\mathcal{ V}_{7}\}\), and \(\mathcal{N}_{3}=\{2,22,35,15,7\}\). The last step, local region extraction, aims to extract the local region of each \(i\) at \(f_{t}\), where \(i\notin\) dummy points. The step first finds \(\mathcal{L}_{i}(f_{t})\) based on each \(\langle x,y\rangle_{j}\), where \(j\in\mathcal{N}_{i}(f_{t})\) (Fig. 3c). Then, each \(\mathcal{L}_{i}(f_{t})\) is cropped from the corresponding \(f_{t}\) and passed to the next module, which will be discussed in the next section. Fig. 2b displays examples of cropped local regions.
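The paper does not name a specific library for this step; a minimal sketch using SciPy's Voronoi routine and Shapely geometry is given below. Adding four distant corner sites so that every real cell is finite is a sketch device (the paper's Algorithm 2 clips unbounded cells directly), and SciPy's region vertex ordering is assumed usable as a polygon boundary.

```python
import numpy as np
from scipy.spatial import Voronoi
from shapely.geometry import MultiPoint, Polygon

def bounded_voronoi(points, pad=1e4):
    """Bounded Voronoi decomposition of positions (dummy points included)."""
    corners = np.array([[-pad, -pad], [-pad, pad], [pad, -pad], [pad, pad]])
    vor = Voronoi(np.vstack([points, corners]))
    hull = MultiPoint([tuple(p) for p in points]).convex_hull
    cells = {}
    for i in range(len(points)):
        region = vor.regions[vor.point_region[i]]
        if -1 in region:  # should not occur for real sites with the corners
            continue
        # Clip the finite cell by the convex hull of the real sites.
        cells[i] = Polygon(vor.vertices[region]).intersection(hull)
    return vor, cells

def direct_neighbors(vor, n_real):
    """N_i: sites whose Voronoi cells share a ridge with site i."""
    neighbors = {i: set() for i in range(n_real)}
    for a, b in vor.ridge_points:  # pairs of sites sharing a cell boundary
        if a < n_real and b < n_real:  # ignore the artificial corner sites
            neighbors[a].add(b)
            neighbors[b].add(a)
    return neighbors

def local_region(i, neighbors, points):
    """L_i: closed polygon spanned by the positions of i's direct neighbors."""
    return MultiPoint([tuple(points[j]) for j in neighbors[i]]).convex_hull
```

In use, the bounding box of each `local_region` polygon could then be cropped from the frame (e.g., with OpenCV array slicing) to produce the \(\mathcal{L}_{i}(f_{t})\) samples fed to the next module.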
#### 3.1.2 EfficientNetV1B0-based Deep Feature Extraction

To extract the deep features from each individual's behavior, the feature extraction part of EfficientNetV1B0 is trained on their local regions \(\mathcal{L}_{i}(f_{t})\). EfficientNetV1B0 is a CNN architecture that has gained popularity for various computer vision tasks due to its efficient use of resources and fewer parameters than other state-of-the-art models [34]. Furthermore, it has achieved high accuracy on multiple image classification datasets. Additionally, the experiments in this article (Section 5.2) indicate that combining EfficientNetV1B0 with local regions yields the highest accuracy compared to other popular CNN architectures integrated with local regions. Therefore, EfficientNetV1B0's feature extraction part is employed to extract richer information from each individual's behavior. The architecture of the EfficientNetV1B0-based deep feature extraction model is depicted in Fig. 1. Firstly, it applies a \(3\times 3\) convolution operation to the input image, a local region with dimensions of \(224\times 224\times 3\). Following this, 16 mobile inverted bottleneck convolution (MBConv) blocks [57] are employed to extract deep features (feature maps) \(\in\mathbb{R}^{7\times 7\times 320}\) from each \(\mathcal{L}_{i}(f_{t})\). In more detail, the MBConv blocks used consist of one MBConv1, \(3\times 3\), six MBConv6, \(3\times 3\), and nine MBConv6, \(5\times 5\). Fig. 5 illustrates the structure of the MBConv block, which employs a \(1\times 1\) convolution operation to expand the depth of the feature maps and capture more information. A \(3\times 3\) depthwise convolution follows this to decrease the computational complexity and number of parameters. Additionally, batch normalization and swish activation [58] are applied after each convolution operation. The MBConv block then employs a Squeeze-and-Excitation block [59] to enhance the architecture's representation power. The Squeeze-and-Excitation block initially performs global average pooling to squeeze the spatial dimensions. Then it applies an excitation operation with Swish [58] and Sigmoid [60] activations to learn channel-wise attention weights. These weights represent the significance of each feature map and are multiplied by the original feature maps to generate the output feature maps. After the Squeeze-and-Excitation block, another \(1\times 1\) convolution with batch normalization is used to reduce the output feature maps' dimensionality, resulting in the final output of the MBConv block. The main difference between MBConv6 and MBConv1 is the depth of the block and the number of operations performed in each block; the expansion factor of MBConv6 is six times that of MBConv1. Note that MBConv6, \(5\times 5\) performs the identical operations as MBConv6, \(3\times 3\), but MBConv6, \(5\times 5\) applies a kernel size of \(5\times 5\), while MBConv6, \(3\times 3\) uses a kernel size of \(3\times 3\).

### Labeling Component

The objective of the labeling component is to analyze the feature maps obtained from the previous component and identify the pushing individuals in the input video. This is accomplished through a binary classification task, followed by an annotation process. To carry out the classification task, as shown in Fig. 1, a \(1\times 1\) convolution operation, a 2D global average pooling layer, a fully connected layer, and a Sigmoid activation function are combined. The \(1\times 1\) convolutional operation is used to increase the number of channels in the feature maps, retaining more information. The new dimension of the feature maps for each \(\mathcal{L}_{i}(f_{t})\) is \(7\times 7\times 1280\). After that, the global average pooling layer is utilized to transform the feature maps to \(1\times 1\times 1280\) and feed them to the fully connected layer. Then, the fully connected layer with a Sigmoid activation function finds the probability \(\delta\) of the pushing label for the corresponding \(i\) at \(f_{t}\). Finally, the classifier uses a threshold to identify the class of \(i\) at \(f_{t}\), as in Eq. (2): \[Class(i,f_{t})=\begin{cases}\text{pushing}&\text{if }\delta\geq threshold\\ \text{non-pushing}&\text{if }\delta<threshold\end{cases} \tag{2}\]
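A minimal Keras sketch of this backbone and labeling head follows; note that Keras's `EfficientNetB0` already folds the final \(1\times 1\) expansion from \(7\times 7\times 320\) to \(7\times 7\times 1280\) into the base model, so the head here reduces to global average pooling plus a single sigmoid unit. The optimizer and loss mirror Table 3; everything else is an illustrative assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_pushing_classifier():
    # Feature extractor: EfficientNetV1B0 trained from scratch (weights=None)
    # on 224x224x3 local-region crops; include_top=False keeps only the
    # convolutional part, whose output is a 7x7x1280 feature map.
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=None, input_shape=(224, 224, 3))
    # Labeling head: GAP to a 1280-d vector, then one sigmoid unit giving the
    # pushing probability delta used in Eq. (2).
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```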
Figure 5: The architecture of the MBConv block.

By default, the threshold value for binary classification is set to 0.5, which works well for a dataset with a balanced distribution. Unfortunately, the new pushing dataset created in Section 4.1 for training and evaluating the proposed framework is imbalanced, and using the default threshold may lead to poor performance of the introduced trained classifier on that dataset [62]. Therefore, adjusting the threshold in the trained classifier is required to obtain better accuracy for both pushing and non-pushing classes. The methodology for finding the optimal threshold for the classifier will be explained in detail in Section 4.3. Following training and adjusting the classifier's threshold, it can categorize individuals \(i\) as pushing or non-pushing. At the same time, the annotation process draws a red circle around the head of each pushing person in the corresponding frames \(f_{t}\) and finally generates an annotated video. The following section will discuss the training and evaluation processes of the proposed framework.

Figure 6: Overhead view of exemplary experiments. a) Experiment 270; Experiments 50, 110, 150, and 280 used the same setup but with different widths of the entrance area, ranging from 1.2 to 5.6 m depending on the experiment [35]. b) Experiment entrance_2 [61]. The entrance gate's width is 0.5 m in all setups.

## 4 Training and Evaluating the Framework

This section introduces a novel labeled dataset, as well as presents the parameter setups for the training process, evaluation metrics, and the methodology for improving the framework's performance on an imbalanced dataset.

### A Novel Dataset Preparation

Here, the aim is to create the labeled dataset for training and evaluating the proposed framework. The dataset consists of a training set, a validation set for the learning process, and two test sets for the evaluation process. These sets comprise \(\mathcal{L}_{i}(f_{t})\) labeled as either pushing or non-pushing. In this context, each pushing \(\mathcal{L}_{i}(f_{t})\) means \(i\) at \(f_{t}\) contributes to pushing, while every non-pushing \(\mathcal{L}_{i}(f_{t})\) indicates that \(i\) at \(f_{t}\) follows the social norm of queuing. The following will discuss the data sources and methodology used to prepare the sets. The dataset preparation is based on three data sources: 1) Six videos of real-world experiments of crowded event entrances. 2) Pedestrian trajectory data. 3) Ground truths for pushing behavior. Six video recordings of experiments with their corresponding pedestrian trajectory data are selected from the data archive hosted by Forschungszentrum Julich [35, 61]. This data is licensed under the CC Attribution 4.0 International license. The experimental situations mimic crowded event entrances, and static top-view cameras were employed to record the experiments with a frame rate of 25 frames per second. For more clarity, Fig. 6 shows overhead views of exemplary experiments, and Table 1 summarizes the various characteristics of the chosen experiments. Additionally, ground truth labels constructed by the manual rating system [15] are used for the last data source. In this system, social psychologists observe and analyze video experiments frame-by-frame to manually identify individuals who are pushing over time.
The experts use PeTrack software [63] to manage the manual tracking process and generate the annotations as a text file. For further details on the manual system, readers can refer to Ref. [15]. Here, the methodology used for preparing the dataset is described. As shown in Fig. 7, it consists of two phases: local region extraction, and local region labeling and set generation. The first phase aims to extract local regions (samples) from videos while avoiding duplicates. To accomplish this, the phase initially extracts frames from the input videos second by second. After that, it employs the Voronoi-based local region extraction module to identify and crop the samples from the extracted frames. Table 2 demonstrates the number of extracted samples from each video, and Fig. 2b shows several examples of local regions. Preventing the presence of duplicate samples between the training, validation, and test sets is crucial to obtain a reliable evaluation for the model. Therefore, this phase removes similar and slightly different samples before proceeding to the next phase. It involves using a pre-trained MobileNet CNN model to extract deep features/embeddings from the samples and cosine similarity to find duplicate or near-duplicate samples based on their features [64]. This technique is more robust than comparing pixel values, which can be sensitive to noise and lighting variations [65]. Table 2 depicts the number of removed duplicate samples. On the other hand, the local region labeling and set generation phase is responsible for labeling the extracted samples and producing the sets, including one training set, one validation set, and two test sets. This phase utilizes the ground truth label of each \(i\) at \(f_{t}\) to label the samples (\(\mathcal{L}_{i}(f_{t})\)).

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Video experiment * & Width (m) & Pedestrian total & Number of gates & Duration (s) & Resolution \\ \hline 50 & 4.5 & 42 & 1 & 37 & 1920 \(\times\) 1440 \\ 110 & 1.2 & 63 & 1 & 53 & 1920 \(\times\) 1440 \\ 150 & 5.6 & 57 & 1 & 57 & 1920 \(\times\) 1440 \\ 270 & 3.4 & 67 & 1 & 59 & 1920 \(\times\) 1440 \\ 280 & 3.4 & 67 & 1 & 67 & 1920 \(\times\) 1440 \\ Entrance\_2 & 3.4 & 123 & 2 & 125 & 1920 \(\times\) 1080 \\ \hline \hline \end{tabular} *The same names as reported in [35, 61]. m stands for meter, and s refers to second. \end{table} Table 1: Characteristics of the chosen experiments.

Figure 7: Pipeline of dataset preparation. In the part ‘Local Region Labeling and Set Generation’, red refers to the pushing class and pushing sample, while the non-pushing class and non-pushing sample are represented in green.

If \(i\) at \(f_{t}\) contributes to pushing, \(\mathcal{L}_{i}(f_{t})\) is categorized as pushing; otherwise, it is classified as non-pushing. Examples of pushing samples can be found in Fig. 2b. The generated labeled dataset from all video experiments comprises 3384 pushing samples and 8994 non-pushing samples. The number of extracted pushing and non-pushing samples from each video is illustrated in Table 2. After creating the labeled dataset, the sets are generated from the dataset. Specifically, the second phase randomly divides the extracted frames from video experiments 110, 150, 270, 280, and Entrance_2 into three sets: 70 %, 15 %, and 15 % for training, validation, and test sets, respectively. Then, using these sets, it generates the training, validation, and test (test set 1) sets from the labeled corresponding samples (\(\mathcal{L}_{i}(f_{t})\)). Another test set (test set 2) is also developed from the labeled samples extracted from the complete video experiment 50. Table 2 shows the summary of the generated sets.
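A minimal sketch of the near-duplicate filtering step described above is given below, assuming the crops are already resized and preprocessed for MobileNet; the similarity cutoff of 0.95 and the greedy keep-first strategy are illustrative assumptions, not values from the paper.

```python
import numpy as np
import tensorflow as tf

def deduplicate(images, threshold=0.95):
    """Drop near-duplicate local-region crops via embedding similarity.

    images: float array (n, 224, 224, 3), preprocessed for MobileNet.
    Returns the indices of samples kept: a sample is kept only if its cosine
    similarity to every previously kept embedding stays below `threshold`.
    """
    base = tf.keras.applications.MobileNet(include_top=False, pooling="avg")
    emb = base.predict(images, verbose=0)                    # (n, 1024)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize
    keep = []
    for i in range(len(emb)):
        # Cosine similarity of sample i against all kept samples.
        sims = emb[keep] @ emb[i] if keep else np.array([])
        if not len(sims) or sims.max() < threshold:
            keep.append(i)
    return keep
```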
To summarize, four labeled sets were created: the training set, which consists of 2160 pushing samples and 6112 non-pushing samples; the validation set, which contains 466 pushing samples and 1254 non-pushing samples; test set 1, which includes 441 pushing samples and 1284 non-pushing samples; and test set 2, comprising 317 pushing samples and 344 non-pushing samples.

### Parameter Setup

Table 3 shows the parameters used during the training process. They were chosen based on experimentation to obtain optimal performance with the new dataset. To prevent overfitting, the training was halted if the validation accuracy did not improve after 20 epochs.

### Evaluation Metrics and Performance Improvement

This section will discuss the metrics chosen for evaluating the performance of the proposed framework. Additionally, it will explore the methodology employed to enhance the performance of the trained imbalanced classifier, thereby improving the overall effectiveness of the framework. Given the imbalanced distribution of the generated local region dataset, the framework exhibits a bias towards the majority class (non-pushing). Consequently, it becomes crucial to employ appropriate metrics for evaluating the performance of the imbalanced classifier. As a result, a combination of metrics was adopted, including macro accuracy, True Pushing Rate (TPR), True Non-Pushing Rate (TNPR), and Area Under the receiver operating characteristic Curve (AUC), on both test set 1 and test set 2. The following provides a detailed explanation of these metrics. TPR, also known as sensitivity, is the ratio of correctly classified pushing samples to all pushing samples, and it is defined as: \[TPR=\frac{TP}{TP+FNP}, \tag{3}\] where TP and FNP denote correctly classified pushing persons and pushing persons incorrectly predicted as non-pushing, respectively. TNPR, also known as specificity, is the ratio of correctly classified non-pushing samples to all non-pushing samples, and it is described as: \[TNPR=\frac{TNP}{TNP+FP}, \tag{4}\] where TNP and FP stand for correctly classified non-pushing persons and non-pushing persons incorrectly predicted as pushing, respectively. Macro accuracy, or balanced accuracy, is the average proportion of correct predictions for each class individually. This metric ensures that each class is given equal significance, irrespective of its size or distribution within the dataset. For more clarity, it is simply the average of TPR and TNPR: \[Macro\ accuracy=\frac{TPR+TNPR}{2}. \tag{5}\]
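A short sketch of Eqs. (3)–(5), computed directly from binary labels and predictions (1 = pushing, 0 = non-pushing, an assumed encoding):

```python
import numpy as np

def imbalance_metrics(y_true, y_pred):
    """Return TPR, TNPR, and macro accuracy per Eqs. (3)-(5)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp  = np.sum((y_true == 1) & (y_pred == 1))  # correctly classified pushing
    fnp = np.sum((y_true == 1) & (y_pred == 0))  # pushing missed as non-pushing
    tnp = np.sum((y_true == 0) & (y_pred == 0))  # correctly classified non-pushing
    fp  = np.sum((y_true == 0) & (y_pred == 1))  # non-pushing flagged as pushing
    tpr = tp / (tp + fnp)      # sensitivity, Eq. (3)
    tnpr = tnp / (tnp + fp)    # specificity, Eq. (4)
    return tpr, tnpr, (tpr + tnpr) / 2.0  # macro accuracy, Eq. (5)
```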
AUC is a metric that represents the area under the Receiver Operating Characteristics (ROC) curve. The ROC curve illustrates the performance of a classification model across various threshold values. It plots the false positive rate (FPR) on the horizontal axis against the true positive rate (TPR) on the vertical axis. AUC values range from 0 to 1, where a perfect model achieves an AUC of 1, while a value of 0.5 indicates that the model performs no better than random guessing [66]. Fig. 8a shows an example of a ROC curve with an AUC value.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Test set 1 \%} & \multicolumn{4}{c}{Test set 2 \%} \\ \cline{2-9} Threshold & Macro accuracy & TNPR & TPR & \(|\)TPR\(-\)TNPR\(|\) & Macro accuracy & TNPR & TPR & \(|\)TPR\(-\)TNPR\(|\) \\ \hline Default: 0.5 & 83 & 92 & 74 & 18 & 82 & 88 & 76 & 12 \\ **Optimal: 0.038** & **85** & 84 & 86 & **2** & 82 & 81 & 83 & 2 \\ \hline \multicolumn{9}{l}{TNPR and TPR are true non-pushing rate and true pushing rate, respectively.} \\ \end{tabular} \end{table} Table 4: Performance of the proposed framework on both test sets.

Figure 8: ROC curves for the introduced framework. a) ROC curve with an optimal threshold on the validation set. b) ROC curves with AUC values on test set 1 and test set 2. TPR stands for true pushing rate, while FPR refers to false pushing rate.

Figure 9: Confusion matrix of the proposed framework on a) Test set 1 with default threshold. b) Test set 1 with the optimal threshold. c) Test set 2 with default threshold. d) Test set 2 with the optimal threshold.

As mentioned above, the binary classifier employs a threshold to convert the calculated probability into a predicted class. The pushing class is predicted if the probability exceeds the threshold; otherwise, the non-pushing label is predicted. The default threshold is typically set at 0.5. However, this value leads to poor performance of the introduced framework because the EfficientNetV1B0-based feature extractor and the classification head were trained on an imbalanced dataset [62]. In other words, the default threshold yields a high TNPR and a low TPR in the framework. To address the imbalance issue and enhance the framework's performance, it becomes necessary to determine an optimal threshold that achieves a better balance between TPR and FPR (1\(-\)TNPR). To accomplish this, the ROC curve is utilized over the validation set to identify the threshold value that maximizes TPR and minimizes FPR. Firstly, TPR and TNPR are calculated for several thresholds ranging from 0 to 1. Then, the threshold that yields the minimum value of the following objective function (Eq. (6)) is considered the optimal threshold: \[Objective\ function=|TPR-TNPR|. \tag{6}\] As shown in Fig. 8a, the red point refers to the optimal threshold of the classifier used in the proposed framework, which is 0.038.
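A self-contained sketch of this threshold search over the validation set follows; the grid of 1001 candidate thresholds is an assumption, as the paper only states that several thresholds between 0 and 1 are scanned.

```python
import numpy as np

def optimal_threshold(y_true, proba):
    """Scan thresholds and pick the one minimizing |TPR - TNPR| (Eq. (6))."""
    y_true = np.asarray(y_true)
    proba = np.asarray(proba)
    pos, neg = y_true == 1, y_true == 0   # 1 = pushing, 0 = non-pushing
    best_t, best_gap = 0.5, np.inf
    for t in np.linspace(0.0, 1.0, 1001):
        pred = proba >= t
        tpr = np.mean(pred[pos])          # share of pushing samples caught
        tnpr = np.mean(~pred[neg])        # share of non-pushing samples kept
        gap = abs(tpr - tnpr)
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t
```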
## 5 Evaluation and Results

Here, several experiments were conducted to evaluate the performance of the proposed framework. Initially, the performance of the proposed framework itself is assessed. Subsequently, it is compared with five other CNN-based frameworks. The influence of the deep feature extraction module on the proposed framework's performance is also investigated. Finally, the impact of the local region extraction module on the framework's performance is explored. All experiments and implementations were performed on Google Colaboratory Pro, utilizing the Python 3 programming language with the Keras, TensorFlow 2.0, and OpenCV libraries. In Google Colaboratory Pro, the hardware setup comprises an NVIDIA GPU with a 15 GB capacity and a system RAM of 12.7 GB. Moreover, the framework and all the baselines developed for comparison in the experiments were trained using the same sets (Table 2) and hyperparameter values (Table 3).

### Performance of the Proposed Framework

The performance of the proposed framework was evaluated using the generated dataset (Table 2) and various metrics, including macro accuracy, TPR, TNPR, and AUC. We first trained the proposed framework's EfficientNetV1B0-based deep feature extraction module and labeling component on the training and validation sets. Subsequently, the framework's performance on test set 1 and test set 2 was assessed. Table 4 shows that the introduced framework, with the default threshold, obtained a macro accuracy of 83 %, a TPR of 74 %, and a TNPR of 92 % on test set 1. On the other hand, it achieved 82 % macro accuracy, 88 % TNPR, and 76 % TPR on test set 2. However, it is clear that the TPR is significantly lower than the TNPR on both test sets; see Fig. 9a and c. To balance the TPR and TNPR and improve the TPR, the optimal threshold is 0.038, as shown in Fig. 8a. This threshold increases the TPR by 12 % and 7 % on test set 1 and test set 2, respectively, without affecting the accuracy; see Fig. 9b and d. In fact, the framework's accuracy improved by 2 % on test set 1. The ROC curves with AUC values for the framework on the two test sets are shown in Fig. 8b, with AUC values of 0.92 and 0.9 on test set 1 and test set 2, respectively. To summarize, with the optimal threshold, the proposed framework achieved an accuracy of 85 %, a TPR of 86 %, and a TNPR of 84 % on test set 1, while obtaining 82 % accuracy, 81 % TPR, and 83 % TNPR on test set 2. The next section will compare the framework's performance with five baseline systems for further evaluation.

### Comparison with Baseline CNN-based Frameworks

In this section, the results of further empirical comparisons are presented to evaluate the framework's performance against five baseline systems.
Specifically, it explores the impact of the EfficientNetV1B0-based deep feature extraction module on the overall performance of the framework. To achieve this, EfficientNetV1B0 in the deep feature extraction module of the proposed framework was replaced with other CNN architectures, including EfficientNetV2B0 [67] (baseline 1), Xception [68] (baseline 2), DenseNet121 [69] (baseline 3), ResNet50 [70] (baseline 4), and MobileNet [71] (baseline 5). To ensure fair comparisons, the five baselines were trained and evaluated using the same sets, hyperparameters, and metrics as those used for the proposed framework. Before delving into the comparison of the results, it is essential to note that CNN models renowned for their performance on some datasets may perform poorly on others [72]. This discrepancy becomes more apparent when datasets differ in several aspects, such as size, clarity of relevant features among classes, or overall data quality. Powerful models can be prone to overfitting issues, while simpler models may struggle to capture relevant features in complex datasets with intricate patterns and relationships. Therefore, it is crucial to carefully select or develop an appropriate CNN architecture for a specific issue. For instance, EfficientNetV2B0 demonstrates superior performance compared to EfficientNetV1B0 across various classification tasks [67], including the ImageNet dataset. Moreover, it surpasses the previous version in identifying regions that exhibit pushing persons in motion information maps of crowds [16, 17]. These remarkable outcomes can be attributed to the efficient blocks employed for feature extraction, namely the Mobile Inverted Residual Bottleneck Convolution [57] and Fused Mobile Inverted Residual Bottleneck Convolution [73]. Nevertheless, it should be noted that the presence of these efficient blocks does not guarantee the best performance in identifying pushing individuals based on local regions within the framework. Hence, in this section, the impact of six of the most popular and efficient CNN architectures on the performance of the proposed framework was empirically studied. For clarity, EfficientNetV1B0 was used within the framework, while the remaining CNN architectures were employed in the baselines. The performance results of the proposed framework, as well as the baselines, are presented in Table 5 and visualized in Fig. 10. The findings indicate that EfficientNetV1B0 with the optimal threshold leads the framework to achieve superior macro accuracy and AUC with balanced TPR and TNPR compared to the CNNs used in baselines 1-5. This can be attributed to the architecture of EfficientNetV1B0, which primarily relies on the Mobile Inverted Residual Bottleneck Convolution with relatively few parameters. As such, the architectural design proves to be particularly suited for the generated dataset focusing on local regions. The visualization in Fig. 11 shows the optimal threshold values for the baselines. These thresholds, as shown in Table 5 and Fig. 10, mostly improved the macro accuracy, the TPR, and the balance between TPR and TNPR in the baselines. For example, baseline 1 with the optimal threshold achieved 84 % macro accuracy, roughly similar to the proposed framework. However, it fell short of achieving a balanced TPR and TNPR along with improving the TPR on both test sets as effectively as the framework.
To provide further clarity, baseline 1 achieved 80 % TPR with 8 % as the difference between TPR and TNPR, whereas the proposed framework attained an 86 % TPR with 2 % as the difference between TPR and TNPR on test set 1. Similarly, on test set 2, the framework achieved 81 % TPR, while baseline 1 achieved a TPR of 74 %. Compared to the other baselines that utilize optimal thresholds on test set 1, the proposed framework outperformed them regarding macro accuracy, TPR, and TNPR. Similarly, on test set 2, the framework surpasses all baselines except for the ResNet50-based baseline (baseline 4). However, it is essential to note that this baseline only achieved a better TNPR, whereas the introduced framework excels in macro accuracy and TPR. As a result, the framework emerges as the superior choice on test set 2. To alleviate any confusion in the comparison, Fig. 12 shows the framework's ROC curves with AUC values against its baselines on test set 1. Likewise, Fig. 13 depicts the same for test set 2. The AUC values show that the proposed framework achieved better performance than the baselines on both test sets. Moreover, they substantiate that EfficientNetV1B0 is the most suitable CNN for extracting deep features from the generated local region samples.

Figure 10: Comparison between the framework (based on EfficientNetV1B0) and the baseline frameworks based on other popular CNN architectures.

In conclusion, the experiments demonstrate that the proposed framework, utilizing EfficientNetV1B0, achieved the highest performance compared to the baselines relying on other CNN architectures on both test sets. Furthermore, the optimal thresholds in the developed framework and the baselines resulted in a significant improvement in the performance across both test sets.

### Impact of Deep Feature Extraction Module

This section aims to investigate how the deep feature extraction module affects the framework's performance. For this purpose, a new baseline (baseline 6) is developed, incorporating only a Voronoi-based local region extraction module and a labeling component. In other words, the deep feature extraction module is removed from the proposed framework to construct this baseline.

Figure 11: ROC curves with optimal thresholds for the baselines over the validation set. TPR stands for true pushing rate, while FPR refers to false pushing rate. ROC stands for Receiver Operating Characteristics.

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline & & \multicolumn{4}{c}{Test set 1 \%} & \multicolumn{4}{c}{Test set 2 \%} \\ \cline{3-10} Framework & Threshold & M. acc. & TNPR & TPR & \(|\) TPR-TNPR \(|\) & M. acc. & TNPR & TPR & \(|\) TPR-TNPR \(|\) \\ \hline The framework & Default: 0.5 & 83 & 92 & 74 & 18 & 82 & 88 & 76 & 12 \\ & Optimal: 0.038 & **85** & 84 & 86 & **2** & **82** & 81 & 83 & **2** \\ \hline Baseline 1 & Default: 0.5 & 83 & 91 & 74 & 17 & 80 & 91 & 69 & 22 \\ & Optimal: 0.167 & 84 & 88 & 80 & 8 & 81 & 89 & 74 & 15 \\ \hline Baseline 2 & Default: 0.5 & 79 & 91 & 67 & 24 & 77 & 90 & 64 & 26 \\ & Optimal: 0.062 & 81 & 80 & 81 & 1 & 78 & 76 & 79 & 3 \\ \hline Baseline 3 & Default: 0.5 & 77 & 92 & 62 & 30 & 74 & 89 & 58 & 31 \\ & Optimal: 0.038 & 81 & 77 & 84 & 7 & 76 & 70 & 83 & 13 \\ \hline Baseline 4 & Default: 0.5 & 70 & 92 & 49 & 43 & 75 & 87 & 64 & 23 \\ & Optimal: 0.024 & 75 & 81 & 69 & 12 & 79 & 88 & 77 & 11 \\ \hline Baseline 5 & Default: 0.5 & 79 & 94 & 65 & 29 & 77 & 92 & 62 & 30 \\ & Optimal: 0.076 & 83 & 83 & 84 & 1 & 80 & 80 & 79 & 1 \\ \hline \hline \multicolumn{10}{l}{M. acc means macro accuracy. TNPR and TPR are true non-pushing rate and true pushing rate, respectively.} \\ \end{tabular} \end{table} Table 5: Comparative analysis of the developed framework and the five CNN-based frameworks.
Table 6 demonstrates that this baseline exhibited poor performance, with a macro accuracy of 67 % on test set 1 and 59 % on test set 2. Additionally, Fig. 12 and Fig. 13 illustrate AUC values of 72 % on test set 1 and 61 % on test set 2 for baseline 6. Comparing this baseline with the weakest baseline in Table 5, which utilizes ResNet50, it is evident that deep feature extraction leads to a macro accuracy improvement of at least 8 % on test set 1 and at least 20 % on test set 2. Similarly, deep feature extraction enhances the AUC values by at least 11 % on test set 1 and more than 24 % on test set 2. In summary, the deep feature extraction module significantly enhances the performance of the framework.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Test set 1 \%} & \multicolumn{3}{c}{Test set 2 \%} \\ \cline{2-7} Threshold & Macro accuracy & TNPR & TPR & Macro accuracy & TNPR & TPR \\ \hline Default: 0.5 & 59 & 97 & 18 & 58 & 59 & 57 \\ Optimal: 0.342 & 67 & 91 & 44 & 59 & 38 & 79 \\ \hline \hline \end{tabular} * TNPR and TPR are true non-pushing rate (specificity) and true pushing rate (sensitivity), respectively. \end{table} Table 6: Performance results of baseline 6.

Figure 12: ROC curves with AUC values on test set 1. Comparison between the introduced framework (based on EfficientNetV1B0) and five baselines based on different CNN architectures, as well as the baseline without the deep feature extraction module (baseline 6). TPR stands for true pushing rate, while FPR refers to false pushing rate. ROC represents Receiver Operating Characteristics. AUC stands for the area under the ROC Curve.

### Impact of Local Region Extraction

The primary goal of this section is to evaluate the impact of the Voronoi-based local region extraction module on the performance of the proposed framework. To accomplish this, firstly, baseline 7 was created, which replaces this module with a new one that relies on static dimensions to extract a local square region for each individual. In this new module, the target person's position serves as the center of the extracted area, and each square region's side is roughly 60 cm on the ground. Such a dimension is sufficient for the region to contain the target person together with his/her surrounding space. Fig. 15(b) shows an example of a square local region of a target person (\(i\)). Then, a new dataset was generated utilizing the same video experiments and the same splitting technique used in preparing the local region dataset (Table 2) to train and evaluate baseline 7. The main difference is that the samples in this new dataset are static square local regions (Fig. 15(b)) instead of dynamic polygonal regions (Fig. 15(a)). According to the data presented in Table 7, baseline 7 achieved a macro accuracy of 79 % on test set 1 and 62 % on test set 2. This indicates that the Voronoi-based method results in a 6 % improvement in accuracy for test set 1 and a significant 20 % improvement for test set 2. Additionally, Fig. 14 demonstrates that the module enhanced the AUC value by 11 % for test set 1 and 13 % for test set 2. In summary, the Voronoi-based local region extraction module enhanced the accuracy of the proposed framework by a minimum of 6 %, which indicates that the Voronoi module is more effective than the static square local region extraction in guiding the framework to identify relevant features from the input video. A minimal sketch of the square-region cropping used in baseline 7 is given below.
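For illustration, the following sketch (our own, under the assumption of a top-view frame with a known ground resolution in centimeters per pixel) crops the static square region of baseline 7 around a tracked person's pixel position; the function and variable names are hypothetical.

```python
import numpy as np

def square_local_region(frame, pos_px, cm_per_px, side_cm=60):
    """Crop the static square local region used in baseline 7.

    frame     : top-view video frame as an H x W x 3 array
    pos_px    : (row, col) pixel position of the target person (region center)
    cm_per_px : ground resolution of the top-view camera
    side_cm   : square side length on the ground (roughly 60 cm in the paper)
    """
    half = int(round(side_cm / cm_per_px / 2))
    r, c = pos_px
    h, w = frame.shape[:2]
    top, bottom = max(r - half, 0), min(r + half, h)
    left, right = max(c - half, 0), min(c + half, w)
    return frame[top:bottom, left:right]

# Hypothetical usage: crop around a person tracked at pixel (350, 420)
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
patch = square_local_region(frame, (350, 420), cm_per_px=0.5)
print(patch.shape)  # (120, 120, 3)
```

Unlike the Voronoi cells, this crop ignores how close other pedestrians are, which is the design difference the comparison above quantifies.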
Figure 13: ROC curves with AUC values on test set 2. Comparison between the framework (based on EfficientNetV1B0) and five baselines based on different CNN architectures, as well as the baseline without the deep feature extraction module (baseline 6). TPR stands for true pushing rate, while FPR refers to false pushing rate. ROC represents Receiver Operating Characteristics. AUC stands for the area under the ROC Curve.

## 6 Conclusion and Future Work

This article introduced a new framework for automatically identifying pushing at the microscopic level within video recordings of crowds. The proposed framework utilizes a novel Voronoi-based method to determine the local region of each person in the input video over time. It further applies EfficientNetV1B0 to extract deep features from these local regions, capturing valuable information about individual behavior. Finally, a fully connected layer with a Sigmoid activation function is employed to analyze the deep features and annotate the pushing persons over time in the input video. To train and evaluate the performance of the framework, a novel dataset was created using six real-world experiments with their trajectory data and corresponding ground truths. The experimental findings demonstrated that the proposed framework surpassed seven baseline methods in terms of macro accuracy, true pushing rate, and true non-pushing rate. The proposed framework has some limitations. First, it was designed to work exclusively with top-view camera video recordings that include trajectory data. Second, it was trained and evaluated on a limited number of real-world experiments, which may impact its generalizability to a broader range of scenarios. Our future goals include improving the framework in two key areas: 1) Enabling it to detect pushing persons from video recordings without the need for trajectory data as input. 2) Improving its performance in terms of macro accuracy, true pushing rate, and true non-pushing rate by utilizing video recordings of additional real-world experiments and transfer learning techniques.

Acknowledgments. The authors are thankful to Anna Sieben, Helena Lugering, and Ezel Usten for the valuable discussions and the manual annotation of the pushing behavior in the videos of the experiments.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & & \multicolumn{3}{c}{Test set 1 \%} & \multicolumn{3}{c}{Test set 2 \%} \\ \cline{3-8} Framework & Optimal threshold & Macro accuracy & TNPR & TPR & Macro accuracy & TNPR & TPR \\ \hline The framework & 0.038 & 85 & 84 & 86 & 82 & 81 & 83 \\ Baseline 7 & 0.241 & 79 & 78 & 82 & 62 & 42 & 82 \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison to baseline 7.

Figure 14: ROC curves with AUC values of the proposed framework against baseline 7 on a) test set 1 and b) test set 2. TPR stands for true pushing rate, while FPR refers to false pushing rate. ROC represents Receiver Operating Characteristics. AUC stands for the area under the ROC Curve.

Figure 15: a) An example of a polygonal local region based on the bounded Voronoi Diagram. b) An example of a square local region based on static dimensions. \(i\) stands for the target person.

## Declarations

**Conflict of interest** The authors declare that there is no conflict of interest regarding the publication of this article.
**Ethical approval** The experiments used in the dataset were conducted according to the guidelines of the Declaration of Helsinki and approved by the ethics board at the University of Wuppertal, Germany. Informed consent was obtained from all subjects involved in the experiments.

**Funding** This work was funded by the German Federal Ministry of Education and Research (BMBF: funding number 01DH16027) within the Palestinian-German Science Bridge project framework, and partially by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)--491111487.

**Availability of data and code** All videos and trajectory data used in generating the patch-based dataset were obtained from the data archive hosted by the Forschungszentrum Julich under the CC Attribution 4.0 International license [35, 61]. The implementation of the proposed framework, the code used for building, training and evaluating the models, as well as the test sets and trained models are publicly available at: [https://github.com/PedestrianDynamics/VCNN4PuDe](https://github.com/PedestrianDynamics/VCNN4PuDe) or at [74] (accessed on 23 July 2023). The training and validation sets are available from the corresponding authors upon request.

**Authors' contributions** Conceptualization, A.A.; methodology, A.A., A.S.; software, A.A.; validation, A.A.; formal analysis, A.A.; investigation, A.A.; data curation, A.A.; writing--original draft preparation, A.A.; writing--review and editing, A.A., M.M., M.C. and A.S.; supervision, M.M., M.C. and A.S. All authors have read and agreed to the published version of the manuscript.

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/).
2308.08235
The Expressive Power of Graph Neural Networks: A Survey
Graph neural networks (GNNs) are effective machine learning models for many graph-related applications. Despite their empirical success, many research efforts focus on the theoretical limitations of GNNs, i.e., the GNNs expressive power. Early works in this domain mainly focus on studying the graph isomorphism recognition ability of GNNs, and recent works try to leverage properties such as subgraph counting and connectivity learning to characterize the expressive power of GNNs, which are more practical and closer to real-world scenarios. However, no survey papers and open-source repositories comprehensively summarize and discuss models in this important direction. To fill the gap, we conduct a first survey of models for enhancing expressive power under different forms of definition. Concretely, the models are reviewed based on three categories, i.e., Graph feature enhancement, Graph topology enhancement, and GNNs architecture enhancement.
Bingxu Zhang, Changjun Fan, Shixuan Liu, Kuihua Huang, Xiang Zhao, Jincai Huang, Zhong Liu
2023-08-16T09:12:21Z
http://arxiv.org/abs/2308.08235v1
# The Expressive Power of Graph Neural Networks: A Survey

###### Abstract

Graph neural networks (GNNs) are effective machine learning models for many graph-related applications. Despite their empirical success, many research efforts focus on the theoretical limitations of GNNs, i.e., the GNNs expressive power. Early works in this domain mainly focus on studying the graph isomorphism recognition ability of GNNs, and recent works try to leverage properties such as subgraph counting and connectivity learning to characterize the expressive power of GNNs, which are more practical and closer to real-world scenarios. However, no survey papers and open-source repositories comprehensively summarize and discuss models in this important direction. To fill the gap, we conduct a first survey of models for enhancing expressive power under different forms of definition. Concretely, the models are reviewed based on three categories, i.e., Graph feature enhancement, Graph topology enhancement, and GNNs architecture enhancement. Graph Neural Network, Expressive Power, Separation Ability, Approximation Ability.

## 1 Introduction

Graph neural networks (GNNs) have emerged as a prominent model in the field of deep learning, attracting extensive research interest [1, 2, 3]. GNNs have displayed superior ability in learning from graph data, and their variants have been extensively applied in numerous real-world scenarios, including recommendation systems [4], computer vision [5], natural language processing [6], molecular analysis [7], data mining [8], and anomaly detection [9]. For more introductions to the foundations and applications of GNNs, see references [10, 11, 12]. Compared to well-structured texts and images, graphs are irregular. A fundamental assumption behind machine learning on graphs is that the predicted targets should be invariant to the order of nodes on graphs. To match this assumption, GNNs introduce an inductive bias called permutation invariance [13]. Specifically, GNNs give outputs that are independent of how the node indices of the graph are assigned and the order in which they are processed, i.e., the model parameters are independent of the node order and are shared throughout the graph. Due to this new parameter sharing mechanism, we need new theoretical tools to study their expressive power. However, studying the expressiveness of GNNs faces many challenges. Firstly, most GNNs are usually used as black-box feature extractors for graphs, and it is unclear how well they can capture different graph features and topology. Secondly, due to the introduction of permutation invariance, the results of the classical universal approximation theorem of neural networks (NNs) [14, 15] cannot be directly generalized to GNNs [16, 17, 18]. In addition, in practice, the study of expressive power is related to a number of long-standing difficult problems in graph theory [19, 20]. For example, in the prediction of properties of chemical molecules, it is necessary to judge whether the molecular structure is the same as, or similar to, that of a molecule with known properties, which involves the problems of graph/subgraph isomorphism judgement [19, 21] and graph matching [22, 23], etc. [24]. There exist pioneering studies of the expressive power of GNNs. Morris et al. [25, 26] and Xu et al. [27] introduced the use of graph isomorphism recognition to explain the expressive power of GNNs, leading the trend to analyze the separation ability of GNNs. Maron et al. [16] and Chen et al.
[28] introduced the use of the ability of GNNs to approximate graph functions to account for their expressive power and further gave a set-based characterization of the invariant graph functions that can be approximated by GNNs, leading the trend to analyze the approximation ability of GNNs. While multiple studies have emerged in recent years that characterize and enhance the expressive power of GNNs, a comprehensive review in this direction is still lacking. Sato [29] explored the relationship between graph isomorphism testing and expressive power and summarized strategies to overcome the limitations of GNNs expressive power. However, they only describe the GNNs separation ability and approximation ability to characterize the GNNs expressive power, and there also exist other abilities, including subgraph counting ability, spectral decomposition ability [30, 31, 32], logical ability [33, 34, 35, 36, 37, 38, 39], etc., which are also considered main categories of GNNs expressive power. Therefore, it is crucial to clarify the definition and scope of the GNNs expressive power, which motivates this work. In this work, we consider that GNNs expressive power consists of two aspects: the feature embedding ability and the topology representation ability. GNNs, as members of the neural network family, have strong feature embedding ability. The topology representation ability is the unique ability of GNNs, which makes GNNs different from other machine learning models. Based on these two components, we then analyze the boundaries of the GNNs expressive power and its influencing factors. It is found that the factors affecting the GNNs expressive power also include both features and topology, in which the defects of GNNs in learning and keeping graph topology are the main factors limiting their expressive power. Based on the analysis of the factors influencing the expressive power of GNNs, this paper summarizes the existing work on improving GNNs expressive power into three categories, i.e., graph feature enhancement, graph topology enhancement and GNNs architecture enhancement. Graph feature enhancement improves the expressive power by enhancing the feature embedding effect. Graph topology enhancement seeks to represent the graph topology more effectively to help GNNs capture more intricate graph topology information. GNNs architecture enhancement involves improving the permutation invariant aggregation functions that limit the GNNs expressive power. We also point out several shortcomings in existing benchmarks and evaluation metrics in this direction, and highlight the challenges in determining the GNNs expressive power. Furthermore, we propose several promising future research directions, including a physics-informed methodology for designing more powerful GNNs and utilizing graph neural architecture search. The structure of this work is organized as follows: Section 2 introduces the preliminary knowledge, including the basics of graph neural networks and graph isomorphism. Section 3 gives a unifying definition of GNNs expressive power. Section 4 analyzes the influencing factors and existing works that are designed to improve GNNs expressive power. Section 5 points out several challenges and opportunities in this research direction. Finally, we conclude with Section 6.

## 2 Preliminary

Before presenting the relevant definitions and the problem description, Table I gives a summary of all the notations used in the article.
### _Basics of Graph Neural Networks_

**Graph.** In the following, we consider the problem of representation learning on one or several graphs of possibly varying sizes. Let \(G=(\mathcal{V},\mathcal{E})\in\mathcal{G}\) be a graph of \(n\) nodes with node set \(\mathcal{V}\) and edge set \(\mathcal{E}\), directed or undirected. Each graph has an adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\), and potential node features \(\mathbf{X}=(\mathbf{x}_{1},...,\mathbf{x}_{n})^{\mathrm{T}}\in\mathbb{R}^{n\times e}\). Combining the adjacency matrix and node features, we may also denote \(G=(\mathbf{A},\mathbf{X})\). When explicit node features are unavailable, an unfeatured graph \(G\) will consider \(\mathbf{X}=\mathbf{1}\mathbf{1}^{\mathrm{T}}\) where \(\mathbf{1}\) is an \(n\times 1\) vector of ones. For a node \(v\in\mathcal{V}\), the set of neighboring nodes is represented as \(N(v)\). We also use \(\mathcal{V}[G]\) to denote the node set of graph \(G\).

**Message Passing Neural Networks (MPNNs).** Out of the numerous architectures proposed, MPNNs have been widely adopted [40, 13, 41]. MPNNs employ a neighborhood aggregation strategy to encode each node \(v\in\mathcal{V}\) with a vector representation \(\mathbf{h}_{v}\). This node representation is updated iteratively by gathering representations of its neighbors and performing a nonlinear transformation of them with neural network (NN) layers: \[\mathrm{MessagePassing:}\ \mathbf{m}_{u}^{(l)}=\mathrm{MSG}^{(l)}\left(\mathbf{h}_{u}^{(l-1)}\right),\quad u\in N(v), \tag{1}\] \[\mathrm{Aggregation:}\ \mathbf{a}_{v}^{(l)}=\mathrm{AGG}^{(l)}\left(\left\{\mathbf{m}_{u}^{(l)}:u\in N(v)\right\}\right), \tag{2}\] \[\mathrm{Update:}\ \mathbf{h}_{v}^{(l)}=\mathrm{UPD}^{(l)}\left(\mathbf{h}_{v}^{(l-1)},\mathbf{a}_{v}^{(l)}\right), \tag{3}\] where \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\), \(\mathbf{m}_{u}^{(l)}\) represents the message from neighbor \(u\), \(\mathbf{a}_{v}^{(l)}\) represents the aggregated message from the neighbors at layer \(l\), and \(\mathrm{MSG}^{(l)}\), \(\mathrm{AGG}^{(l)}\) and \(\mathrm{UPD}^{(l)}\) denote the message passing, aggregation and update functions of layer \(l\), respectively. The neighbor aggregation strategy employed by MPNNs allows them to learn relationships between nearby nodes. This inductive bias is well suited for problems requiring relational reasoning [42, 43, 44, 45, 46]. Despite their widespread success, MPNNs nonetheless suffer from several limitations, including over-smoothing [47, 48, 49], over-squashing [50, 51], the failure to distinguish between node identity and location [52, 53], and limited expressive power [25, 27]. The lack of expressive power is evident in their inability to compute several crucial graph properties (e.g., shortest/longest cycles, diameters and local clustering coefficients [54]), and their failure to learn the graph topology (for example, whether a graph is connected or whether a particular pattern, such as a cycle, exists in the graph [55]). To address tasks where the graph topology is critical, such as predicting the chemical properties of molecules [56, 57] and solving combinatorial optimization problems [58, 59], more powerful GNNs are required. A minimal sketch of the message passing scheme in Eqs. (1)-(3) is given below.
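The following self-contained sketch (our own illustration; sum aggregation and a linear-ReLU update are assumed design choices, not prescribed by the equations) computes one round of node updates from an adjacency list:

```python
import numpy as np

rng = np.random.default_rng(0)

def mpnn_layer(H, neighbors, W_msg, W_upd):
    """One message passing layer: MSG = linear map, AGG = sum, UPD = ReLU of concat."""
    n, d = H.shape
    H_new = np.zeros_like(H)
    for v in range(n):
        # MessagePassing (Eq. 1): one message per neighbor u of v
        msgs = [H[u] @ W_msg for u in neighbors[v]]
        # Aggregation (Eq. 2): permutation invariant sum over the multiset of messages
        a_v = np.sum(msgs, axis=0) if msgs else np.zeros(d)
        # Update (Eq. 3): combine previous state h_v^{(l-1)} and aggregated message a_v^{(l)}
        H_new[v] = np.maximum(np.concatenate([H[v], a_v]) @ W_upd, 0.0)
    return H_new

# Toy graph: a 4-cycle with random initial features h_v^{(0)} = x_v
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
H = rng.normal(size=(4, 8))
W_msg = rng.normal(size=(8, 8))
W_upd = rng.normal(size=(16, 8))
print(mpnn_layer(H, neighbors, W_msg, W_upd).shape)  # (4, 8)
```

Because the sum in the aggregation step discards the order of the messages, the layer is invariant to any reordering of each node's neighbors, which is exactly the permutation invariance discussed next.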
\begin{table} \begin{tabular}{|c||c|} \hline **Notations** & **Descriptions** \\ \hline \(\{\}\) & A set. \\ \hline \(\{\{\}\}\) & A multiset. \\ \hline \(G=(\mathcal{V},\mathcal{E})\) & An unfeatured graph. \\ \hline \(\mathcal{V}\) & The set of nodes in a graph. \\ \hline \(\mathcal{E}\) & The set of edges in a graph. \\ \hline \(N(v)\) & The set of the neighboring nodes of node \(v\). \\ \hline \(G=(\mathbf{A},\mathbf{X})\) & A featured graph. \\ \hline \(\mathbf{A}\) & The adjacency matrix. \\ \hline \(\mathbf{X}\) & The feature matrix. \\ \hline \(\mathbf{h}_{v}^{(l)}\) & The embedding of node \(v\) in the \(l\)-th layer. \\ \hline \(\mathbf{H}^{(l)}\) & The embedding matrix in the \(l\)-th layer. \\ \hline \(\mathrm{MSG}^{(l)}\) & The message passing function in the \(l\)-th layer. \\ \hline \(\mathrm{AGG}^{(l)}\) & The aggregation function in the \(l\)-th layer. \\ \hline \(\mathrm{UPD}^{(l)}\) & The update function in the \(l\)-th layer. \\ \hline \end{tabular} \end{table} TABLE I: Notations in the survey

**k-order Linear Invariant GNN (k-IGN) Framework.** The k-IGN by Maron et al. is another widely used framework for investigating the GNNs expressive power [16, 60]. The k-IGN comprises alternating layers composed of invariant or equivariant linear layers and nonlinear activation layers. Each linear function is given as \(L\), and the non-linear activation function is denoted as \(\sigma\). The parameter k in k-IGN refers to the order of the tensor \(\mathbf{A}\), which corresponds to the indices used to represent its elements. For instance, a first-order tensor represents the node feature, a second-order tensor represents the edge feature, and a \(k\)-th order tensor encodes hyper-edge features. For \(k=2\), the space of linear layers consists of 15 equivariant basis operations and 2 invariant ones, which can be expressed as follows: \[L(\mathbf{A})=\sum_{i=1}^{15}\theta_{i}L_{i}(\mathbf{A})+\sum_{i=16}^{17}\theta_{i}\bar{L}_{i}(\mathbf{A}). \tag{4}\] After adding the non-linear activation layer, the 2-IGN architecture can be presented as: \[\mathbf{A}^{(l+1)}=\sigma\left(L^{(l+1)}\left(\mathbf{A}^{(l)}\right)\right). \tag{5}\] IGN has been shown to be no less expressive than MPNN and is therefore widely adopted to augment expressiveness. Aside from the k-IGN frameworks, there have also been efforts to construct more powerful equivariant networks by combining a set of simple permutation equivariant operators [35, 36, 37, 38, 39].

**Permutation Invariance.** Graphs are inherently disordered, with no fixed node order. As a basic inductive bias in graph representation learning, permutation invariance refers to the fact that the output of a model is independent of the input order of the nodes. Specifically, a model \(f\) is permutation invariant if it satisfies \(f\circ\pi(G)=f(G)\) for all possible permutations \(\pi\).

**Permutation Equivariance.** On the other hand, permutation equivariance means that the output order of the model corresponds to the input order. When we focus on node features, we expect the model to output in the same order as the input nodes. Specifically, a model \(g\) is permutation equivariant if it satisfies \(g\circ\pi(G)=\pi\circ g(G)\) for all possible permutations \(\pi\). The design philosophy of GNNs is to generate a permutation equivariant function on the graph by applying a local permutation invariant function to aggregate the neighbor features of each node [60]. A small numerical check of both properties is given below.
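To make the two properties concrete, the following self-contained check (our illustration; the one-step tanh aggregator is an assumed toy model) verifies that a sum-pooled readout is permutation invariant while per-node outputs are permutation equivariant under a random node permutation:

```python
import numpy as np

rng = np.random.default_rng(1)

def node_embed(A, X):
    """Per-node outputs g(G): one aggregation step; permutation equivariant."""
    return np.tanh(A @ X)

def graph_embed(A, X):
    """Graph-level output f(G): sum pooling of node outputs; permutation invariant."""
    return node_embed(A, X).sum(axis=0)

n = 5
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T  # random undirected graph
X = rng.normal(size=(n, 3))

perm = rng.permutation(n)
P = np.eye(n)[perm]                  # permutation matrix
A_pi, X_pi = P @ A @ P.T, P @ X      # the permuted graph pi(G)

print(np.allclose(graph_embed(A_pi, X_pi), graph_embed(A, X)))    # invariance: True
print(np.allclose(node_embed(A_pi, X_pi), P @ node_embed(A, X)))  # equivariance: True
```

The identities printed here are exactly \(f\circ\pi(G)=f(G)\) and \(g\circ\pi(G)=\pi\circ g(G)\) from the definitions above.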
### _Basics of Graph Isomorphism_

**Graph Isomorphism.** Graph isomorphism refers to the topological equivalence of two graphs. Given two graphs \(G=(\mathbf{A},\mathbf{X})\) and \(G^{\prime}=(\mathbf{A}^{\prime},\mathbf{X}^{\prime})\) with adjacency matrices \(\mathbf{A}\) and \(\mathbf{A}^{\prime}\) and feature matrices \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\), respectively, an isomorphism between these graphs is a bijective mapping \(\pi:\mathcal{V}[G]\longrightarrow\mathcal{V}[G^{\prime}]\) that satisfies \(\mathbf{A}_{uv}=\mathbf{A}^{\prime}{}_{\pi(u)\pi(v)}\) and \(\mathbf{X}_{v}=\mathbf{X}^{\prime}{}_{\pi(v)}\). We use \(G\cong G^{\prime}\) to denote that \(G\) and \(G^{\prime}\) are isomorphic, where \(G^{\prime}\) can be represented as \(\pi(G)\). If there is no bijection \(\pi\) that satisfies these conditions, we say that \(G\) and \(G^{\prime}\) are non-isomorphic.

**Weisfeiler-Lehman test (WL test).** Graph isomorphism problems are notoriously difficult, and no polynomial-time algorithm is known for them in general. The WL test of isomorphism [62, 63], also known as the color refinement algorithm [64], is an effective heuristic for graph isomorphism problems. The WL test utilizes a graph coloring technique to assign color labels to each node in the graph. This iterative process is illustrated in Figure 1 [65]. Initially, every node in the graph is assigned an initial color label \(\mathbf{c}\) (identical for all nodes of an unfeatured graph). In each iteration, the colors of the central node and its neighbors are combined to form a multiset of colors, which is subsequently mapped to a new unique color using a Hash function; this new color becomes the new label of the central node. The process is repeated until the node color labels no longer change, resulting in a color distribution histogram of the graph. By comparing the color distributions of two graphs, the WL test can determine whether they are possibly isomorphic or certainly non-isomorphic. This process can be divided into several steps: neighbor label aggregation, multiset sorting, label compression, and label update, and can be mathematically formulated as follows: \[\mathbf{c}_{v}^{(l)}=\mathrm{Hash}\left(\mathbf{c}_{v}^{(l-1)},\left\{\mathbf{c}_{u}^{(l-1)}|u\in N(v)\right\}\right). \tag{6}\] ``` 1: Multiset-label determination. For \(l=0\), set \(M^{l}(v):=c^{0}(v)=c(v)\). For \(l>0\), assign a multiset-label \(M^{l}(v)\) to each node \(v\) in \(G\) and \(G^{\prime}\) which consists of the multiset \(\{c^{l-1}(u)|u\in N(v)\}\). 2: Sorting each multiset. Sort the elements in \(M^{l}(v)\) in ascending order and concatenate them into a string, then add \(c^{l-1}(v)\) as a prefix; call the resulting string \(s^{l}(v)\). 3: Label compression. Sort all of the strings \(s^{l}(v)\) for all \(v\) from \(G\) and \(G^{\prime}\) in ascending order. Map each string \(s^{l}(v)\) to a new compressed label, using a function \(\mathrm{Hash}:\Sigma^{*}\rightarrow\Sigma\) such that \(\mathrm{Hash}(s^{l}(v))=\mathrm{Hash}(s^{l}(w))\) if and only if \(s^{l}(v)=s^{l}(w)\). 4: Relabeling. Set \(c^{l}(v):=\mathrm{Hash}(s^{l}(v))\) for all nodes in \(G\) and \(G^{\prime}\). ``` **Algorithm 1** One iteration of the 1-WL test of graph isomorphism It is important to note that identical coloring results can only suggest, but cannot guarantee, graph isomorphism. While the WL test is fast and efficient for most graphs [66], it has limitations in distinguishing members of many important graph families, such as regular graphs. This issue becomes evident when considering non-isomorphic graphs, as displayed in Figure 2, where the algorithm mistakenly identifies them as isomorphic; a small implementation illustrating this failure case is given below.
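The following self-contained sketch (our own illustration) implements the iteration of Eq. (6) and reproduces the classic failure case behind Figure 2: a 6-cycle and two disjoint triangles are both 2-regular, so 1-WL assigns them identical color histograms even though they are not isomorphic.

```python
from collections import Counter

def wl_colors(neighbors, iters=3):
    """1-WL color refinement; returns the final color histogram of the graph."""
    colors = {v: 0 for v in neighbors}            # identical initial color labels
    for _ in range(iters):
        new = {}
        for v in neighbors:
            # hash of (own color, sorted multiset of neighbor colors) -> new color
            new[v] = hash((colors[v], tuple(sorted(colors[u] for u in neighbors[v]))))
        colors = new
    return Counter(colors.values())

# G1: a 6-cycle; G2: two disjoint triangles (both 2-regular, non-isomorphic)
cycle6 = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colors(cycle6) == wl_colors(triangles))  # True: 1-WL cannot separate them
```

Since every node in both graphs always sees the same multiset of neighbor colors, refinement never splits the color classes, which is precisely why regular graphs defeat the 1-WL test and motivate the higher-order variants below.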
The more powerful k-WL algorithm was initially introduced by Weisfeiler [67], expanding the coloring of individual vertices to the coloring of k-tuples. A k-tuple is an ordered unit of k nodes, and the k-WL test [17, 18] captures higher-order interactions in the graph through an iterative aggregation process between all neighboring k-tuple units. Specifically, the labels for tuple units consisting of k nodes are computed, treating these tuple units as the significant graph entities for the iterative operations. The k-WL test becomes increasingly expressive as k grows; for sufficiently large k, it can distinguish any finite set of graphs. It has been observed that for \(k>1\), the (k+1)-WL test is unequivocally stronger than the k-WL test, creating a hierarchical structure where the discriminative power increases with higher k values. Therefore, the k-WL test excels at detecting subgraph topology with larger k values, exhibiting an enhanced discrimination ability concerning graph structures. k-WL iteratively recolors all \(n^{k}\) k-tuples defined on a graph \(G\) with \(n\) nodes. In iteration \(0\), each k-tuple \(\vec{v}=(v_{1},...,v_{k})\in V(G)^{k}\) is initially assigned a color as its atomic type \(c_{k}^{(0)}(\vec{v})\). If \(G\) has \(d\) colors, then \(c_{k}^{(0)}(\vec{v})\) is an ordered vector encoding. Let \(c_{k}^{(l)}(\vec{v})\) denote the color of k-tuple \(\vec{v}\) on graph \(G\) at the \(l\)-th iteration of the k-WL. In the \(l\)-th iteration, k-WL updates the color of each \(\vec{v}\in V(G)^{k}\) according to \[c_{k}^{(l+1)}(\vec{v})=\mathrm{Hash}\left(c_{k}^{(l)}(\vec{v}),\{\{c_{k}^{(l)}(\vec{v}[x/1])|x\in V(G)\}\},...,\{\{c_{k}^{(l)}(\vec{v}[x/k])|x\in V(G)\}\}\right), \tag{7}\] where \(\vec{v}[x/j]\) denotes the k-tuple obtained by replacing the \(j\)-th entry of \(\vec{v}\) with node \(x\). Let \(c_{k}^{(l)}(G)\) denote the encoding of \(G\) at the \(l\)-th iteration of k-WL. Then, \[c_{k}^{(l)}(G)=\mathrm{Hash}(\{\{c_{k}^{(l)}(\vec{v})|\vec{v}\in V(G)^{k}\}\}). \tag{8}\] Moreover, k-WL has interesting connections to logic, games, and linear equations [34, 37, 68]. Another variation of k-WL is known as k-folklore WL (k-FWL), where it is shown that the expressive power of (k+1)-WL is equivalent to that of k-FWL [65].

## 3 The expressive power of GNNs

### _The necessity of GNNs expressive power_

**Expressive power.** All machine learning problems can be abstracted as learning a mapping \(f^{*}\) from the feature space \(\mathcal{X}\) to the target space \(\mathcal{Y}\). \(f^{*}\) is usually approximated using a model \(f_{\theta}\) by optimizing some parameters \(\theta\). In practice, \(f^{*}\) is typically a priori unknown, so one would like \(f_{\theta}\) to approximate as wide a range of \(f^{*}\) as possible. An estimate of how wide this range is, called the _expressive power_ of the model, provides an important measure of the model's potential [69], as shown in Fig. 3(a). The strong expressive power of NNs is reflected in their ability to approximate all continuous functions [70], specifically the ability to embed data within the feature space \(\mathcal{X}\) into the target space \(\mathcal{Y}\) generated by any continuous function, which is in fact the feature embedding ability, as shown in Fig. 3(b).

Fig. 1: The illustration of the aggregation and update process of the WL test. (a) Given two graphs without features, add color labels to all nodes. (b) In the first iteration, the different information aggregated by the nodes is mapped into new color labels, and then these new labels are redistributed to the nodes, and the number of labels is counted after the distribution. After the 1st iteration, \(G1\) and \(G2\) have the same color distributions, so it cannot yet be determined whether they are isomorphic, and the next iteration is performed. (c) Perform the node neighbor aggregation and redistribution of color labels again; \(G1\) and \(G2\) now obtain different color distributions, at which point it can be determined that the two are non-isomorphic.
Fig. 2: Some non-isomorphic graphs that cannot be distinguished by the WL test.

Due to the strong expressive power of NNs, similarly, few works doubt the expressive power of GNNs, which have demonstrated significantly superior performance in a variety of application tasks, because they naturally attribute the superior performance of GNNs to their excellent feature embedding ability. However, some graph-augmented multilayer perceptron (MLP) models [71, 72] outperform GNNs in a number of node classification problems despite the former's weaker expressive power [73], where MLPs [74] use only the information of each node to compute its feature embedding, while GNNs iteratively aggregate the features of neighboring nodes in each layer, thus allowing the use of global information to compute node feature embeddings. This fact contradicts our intuitive understanding of expressive power, yet at the same time illustrates that feature embedding ability alone does not describe the expressive power of GNNs well. GNNs, compared to NNs, add the inductive bias of permutation invariance so that they can propagate and aggregate information over the topology of the graph. [48, 75] demonstrate that if GNNs only have feature embedding ability but lack the ability to maintain the graph topology, then their performance may be inferior to MLPs. In addition, we know that the expressive power of feed-forward neural networks is often limited by their width, whereas the GNNs expressive power is limited not only by their width, but also by how they use the graph topology for message propagation and node updates. It follows that the topology representation ability of GNNs, i.e., the ability to maintain the graph topology, becomes the key to their superior performance. Characterizing the expressive power of GNNs through their topology representation ability then requires a new set of theoretical tools.

### _The definition and representation of GNNs expressive power_

#### 3.2.1 The definition of GNNs expressive power

Graph data typically contains both feature and topology information. As a neural network model for graph representation learning in the family of NNs, the feature embedding ability of GNNs is unquestionable, and their expressive power is more reflected in their topology representation ability. GNNs map node \(v\) from the original space \(\mathcal{X}\) to the embedding space \(\mathcal{Y}\) to obtain the node embedding \(\mathbf{h}_{v}\) via a function \(f\), where \(\mathbf{h}_{v}=f(\mathbf{A}\cdot\mathbf{x}_{v})\). For nodes \(v_{1}\), \(v_{2}\) in different topological positions on graph \(G\), if \(\mathbf{x}_{1}\neq\mathbf{x}_{2}\), the feature embedding ability ensures that \(f(\mathbf{A}\cdot\mathbf{x}_{1})\neq f(\mathbf{A}\cdot\mathbf{x}_{2})\). And if \(\mathbf{x}_{1}=\mathbf{x}_{2}\), maintaining the topology of the graph still requires that \(f(\mathbf{A}\cdot\mathbf{x}_{1})\neq f(\mathbf{A}\cdot\mathbf{x}_{2})\).
**Feature embedding ability.** Let \(\mathbf{X}=\mathbf{random}\), i.e., input a graph with node features; the feature embedding ability can be expressed by the size of the value domain space \(\mathcal{F}=\{f(\mathbf{A}\cdot\mathbf{X})|\mathbf{X}=\mathbf{random}\}\) of \(f\).

**Topology representation ability.** Let \(\mathbf{X}=\mathbf{1}\), i.e., input a graph without node features; the topology representation ability can be expressed by the size of the value domain space \(\mathcal{F}^{\prime}=\{f(\mathbf{A}\cdot\mathbf{X})|\mathbf{X}=\mathbf{1}\}\) of \(f\). The feature embedding ability reflects that nodes with different features can get different node embeddings. The topology representation ability reflects that nodes in different topological positions can get different node embeddings. Together, these two constitute the expressive power of GNNs, illustrated with reference to Fig. 4.

**The expressive power of GNNs.** Let \(\mathbf{X}=\mathbf{random}\); the size of the space \(\mathcal{F}\cap\mathcal{F}^{\prime}\) describes the expressive power of \(f\). Typically, \(\mathcal{F}\supseteq\mathcal{F}^{\prime}\), which means that the topology representation ability is the main factor limiting the expression of GNNs. And for two models \(f_{1}\) and \(f_{2}\), if \(\mathcal{F}_{1}\supset\mathcal{F}_{2}\), we say that \(f_{1}\) is more expressive than \(f_{2}\). Several prevailing views in current research on the expressive power of GNNs characterize the expressive power as approximation ability, separation ability and subgraph counting ability, respectively.

Fig. 3: An illustration of the expressive power of NNs and its effect on the performance of learned models. (a) Machine learning problems aim to learn the mapping from the feature space to the target space based on several observed examples. (b) The expressive power of NNs refers to the gap between the two spaces \(\mathcal{F}\) and \(\mathcal{F}^{\prime}\). Although NNs are expressive (\(\mathcal{F}^{\prime}\) is dense in \(\mathcal{F}\)), the learned model \(f^{\prime}\) based on NNs may differ significantly from \(f^{*}\) due to overfitting of the limited observed data. This figure is with reference to Fig. 5.5 in [69].

_Approximation ability._ The approximation perspective examines the GNNs' ability to approximate functions on graphs, considering both feature embedding and topology representation. Despite this, exploration of the types of functions that GNNs can approximate remains insufficient to address practical problems directly.

_Separation ability._ This perspective examines the GNNs' ability in graph isomorphism identification, focusing solely on topology representation. Nevertheless, its scope is limited to unfeatured graphs, resulting in inadequate practical applicability when solving real-world problems.

_Subgraph counting ability._ This perspective examines the potential of GNNs in detecting and utilizing subgraph topology, similarly concerned with topology representation. Unlike separation ability, subgraph topology is highly relevant to molecular functional groups and communities in real-world tasks, hence holding practical application value. A small probe contrasting the \(\mathbf{X}=\mathbf{1}\) and \(\mathbf{X}=\mathbf{random}\) settings is given below.
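To illustrate the two settings in the definitions above, a small sketch (our own construction; the one-layer mean aggregator is an assumed toy model) feeds a model the all-ones features \(\mathbf{X}=\mathbf{1}\) to probe topology representation, then random features to probe feature embedding:

```python
import numpy as np

def mean_agg_embed(A, X):
    """One mean-aggregation layer: h_v = mean over {v} and its neighbors."""
    A_hat = A + np.eye(len(A))
    return (A_hat / A_hat.sum(axis=1, keepdims=True)) @ X

# A path graph 0-1-2-3: node 0 (endpoint) and node 1 (interior) occupy
# different topological positions.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X_ones = np.ones((4, 2))                                # X = 1: topology probe
H = mean_agg_embed(A, X_ones)
print(np.allclose(H[0], H[1]))                          # True: positions not separated

X_rand = np.random.default_rng(0).normal(size=(4, 2))   # X = random: feature probe
H = mean_agg_embed(A, X_rand)
print(np.allclose(H[0], H[1]))                          # False: separated via features
```

With \(\mathbf{X}=\mathbf{1}\), the mean aggregator maps every node to the same embedding, so its \(\mathcal{F}^{\prime}\) is degenerate even though its \(\mathcal{F}\) under random features is not; this is the gap between feature embedding and topology representation that the definitions formalize.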
#### 3.2.2 Approximation ability

Models with stronger expressive power can learn more complex mapping functions. Neural networks are known for their strong expressive power, and one of the most famous results is the universal approximation theorem for neural networks [14, 15, 70]. As a member of the neural network family, GNNs can also use the ability to approximate mapping functions to describe their own expressive power. However, because GNNs add an inductive bias [13] to graph learning, i.e., permutation invariance, the results of the universal approximation theorem for neural networks cannot be simply generalized to GNNs. In this context, Maron et al. [16, 17, 18] specifically describe the types of functions that GNNs can approximate, using the _approximation ability_ to describe the expressive power of GNNs, that is, what kind of permutation invariant graph functions GNNs can approximate. They further give a description of the set of functions that the model can approximate as a measure of expressive power. Assume that a GNN is a mapping \(f_{\theta}:\mathcal{G}\to R^{d\times n\times m}\) on graphs, where \(\mathcal{G}\) is the set of graphs \(G=(A,X)\), \(X\in R^{d\times n}\) is the node feature matrix, \(d\) is the dimension of the feature, \(n=|V|\) and \(m=|\mathcal{G}|\). Denoting the output of GNNs by \(f_{\theta}(G)\), the formal definition of the approximation ability can be given.

**Function approximation.** Consider a function \(f^{*}\) to be learned. When the input of GNNs is the set of graphs \(\mathcal{G}\), if \(||f^{*}(G)-f_{\theta}(G)||_{\infty}:=\sup_{G}|f^{*}(G)-f_{\theta}(G)|<\varepsilon\) for \(G\in\mathcal{G}\) and \(\varepsilon>0\), then \(f^{*}\) is successfully approximated by \(f_{\theta}\).

Fig. 4: An illustration of the expressive power of GNNs. a) The feature embedding ability of GNNs is the same as that of NNs in that both map the observed examples in the feature space \(\mathcal{X}\) to the target space \(\mathcal{Y}\) via \(f\). The strength of the feature embedding ability is measured by the size of the value domain space \(\mathcal{F}\) of \(f\). b) The topology representation ability of GNNs is achieved by mapping the observed examples in the feature space to the target space via \(f\) while preserving the original topology between the examples. The strength of the ability is measured by the size of the value domain space \(\mathcal{F}^{\prime}\) with \(\mathbf{X}=\mathbf{1}\). c) The expressive power of GNNs consists of a combination of the feature embedding ability and the topology representation ability, measured by the size of the intersection of \(\mathcal{F}\) and \(\mathcal{F}^{\prime}\) with \(\mathbf{X}=\mathbf{random}\).

**Approximation ability.** The size of the set of functions that all potential \(f_{\theta}\) can successfully approximate measures the approximation ability of GNNs. Figure 5 shows the representation of input and output under the definition of approximation ability. However, these theoretical findings are not sufficient to explain the unprecedented success of GNNs in practice, because simply knowing which graph functions the model can approximate does not guide practical production work. It is challenging to analyze the expressive power of GNNs starting from a practical application perspective.

#### 3.2.3 Separation ability

Considering how GNNs make predictions about molecules, GNNs should be able to predict whether a topology corresponds to a valid molecule by identifying whether the graph topology is the same as, or similar to, the topology of a known valid molecule. This problem of measuring whether two graphs have the same topology involves the graph isomorphism problem [19, 21]. From the graph isomorphism perspective, Xu et al. [27] and Morris et al.
[25] used _separation ability_ to describe the expressive power of GNNs, that is, whether GNNs can effectively distinguish non-isomorphic graph topologies. By gaining insight into the high similarity and close connection between the WL test and GNNs in the iterative aggregation process, they further used the WL test as a measure of the strength of the separation ability and indicated that the upper limit of the separation ability is the 1-WL test. Similarly, the formal definition of the separation ability can be given.

**Graph isomorphism recognition.** Consider two non-isomorphic graphs \(G_{1}=(A_{1},X_{1})\) and \(G_{2}=(A_{2},X_{2})\), where \(X_{1}=X_{2}=\mathbf{1}\). If \(f_{\theta}(G_{1})\neq f_{\theta}(G_{2})\), then the GNN is considered able to identify this non-isomorphic graph pair.

**Separation ability.** The size of the set of all identifiable non-isomorphic pairs \(|\mathcal{G}_{1}|\) measures the separation ability of GNNs. Figure 5 shows the representation of input and output under the definition of separation ability. The results on separation ability show the role that GNNs expressive power plays in practice. And in fact, it has been shown that the separation and approximation abilities are theoretically unified. Chen et al. [28] demonstrated the equivalence between the two abilities, showing that the basis of the approximation ability is actually the embedding of two non-isomorphic graphs into different points of the target space, and that a model that can distinguish all non-isomorphic graphs is also a universal function approximator. This equivalence result provides, in a sense, a practical application for the approximation ability. However, the result on separation ability requires comparison of pairs of graphs, which requires a priori a different GNN for each pair of graphs to distinguish them. This is of little help in practical learning scenarios, because people prefer to give meaningful representations to all graphs through a single GNN. In addition, the graph isomorphism problem measures the expressive power at the graph level, while the expressive power at the node level is equally important, and whether this theory can explain the node-level expressive power is still questionable [76].

#### 3.2.4 Subgraph counting ability

Considering problems at the node level is also very instructive for practice. Continuing with the problem of predicting molecular functions, in addition to the need to identify whether the graph topology is the same as a known functional group topology to determine the molecular properties, sometimes the number of functional groups also has an impact on the function of the molecule, which requires determining how many identical graph topologies there are in the graph. To address this issue, Chen et al. [55] have proposed to use _subgraph counting ability_ to describe the GNNs expressive power, emphasizing the GNNs' ability to recognize subgraph topologies in graphs. On the one hand, this ability can evaluate node-level embeddings, comparable to the separation ability. On the other hand, real-world graph topologies such as molecular and community structures typically involve subgraph topologies; hence, this ability offers more practical guidance for applications compared to the approximation ability. Similarly, the formal definition of the subgraph counting ability can be given.
**Subgraph counting.** Consider two isomorphic subgraphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\) of graph \(G=(V,E)\), where \(V_{1},V_{2}\subseteq V\), \(E_{1},E_{2}\subseteq E\) and \(V_{1}\neq V_{2}\). If \(f_{\theta}(G_{1})\neq f_{\theta}(G_{2})\), then it is assumed that GNNs can count subgraphs.

**Subgraph counting ability.** The size of the set of all identifiable countable isomorphic subgraphs \(|\mathcal{G}_{1}|\) measures the subgraph counting ability of GNNs. Figure 5 shows the representation of input and output under the definition of subgraph counting ability.

### _The strength of GNNs expressive power_

When considering the topology representation of GNNs without taking feature embeddings into account, the GNNs expressive power is confined to the 1-WL test. This conclusion arises from the observations of Xu et al. [27] and Morris et al. [25] that highlight the striking similarity between neighborhood aggregation in GNNs and neighbor label aggregation in the WL test. The choice of aggregation functions in GNNs is crucial, since employing injective functions results in GNNs being equipotent with the WL test in terms of expressive power. As the consideration extends to feature embedding, GNNs outperform the 1-WL test in expressive power. A compelling demonstration by Kanatsoulis et al. [32] showcases the GNNs' ability to discern any graph where at least one node exhibits a unique feature. This finding reaffirms our definition of the GNNs expressive power, emphasizing that feature embedding bolsters the GNNs' topology representation. Provided that the node features adequately characterize node identities, Loukas et al. [77, 78, 79] have confirmed that GNNs with powerful message-passing and update functions become Turing universal, which is corroborated on a small instance of the graph isomorphism problem [80]. Nonetheless, the limitations in depth and width of GNNs lead to diminished expressive power.

## 4 Existing work of improving GNNs expressive power

### _Factors affecting GNNs expressive power_

The GNNs expressive power involves the expression of graph features and graph topology, so the effects of graph feature embedding and graph topology representation both affect the expression of GNNs. The graph features can affect the expression of GNNs. First of all, node features have strong discrimination ability. [77] proved that GNNs can distinguish graphs with different node features. [32] further proved that GNNs can distinguish any graphs with at least one node feature difference. In addition, the global graph feature also has discrimination ability. [38] proved that GNNs can distinguish graphs with different global graph features. The topology of the graph also affects the expression of GNNs. GNNs do not directly encode the graph topology for learning, but use the topology of the graph for information transmission and aggregation, which is realized by the permutation invariant aggregation function. This design hinders the expression of GNNs [81], because the permutation invariant aggregation function assumes that the states of all adjacent nodes are equal, so the relationships between adjacent nodes are ignored. The node encoding obtained through neighborhood aggregation represents the rooted subtree around the central node, as illustrated in Figure 6. However, rooted subtrees have limited expressive power for representing non-tree graphs. Therefore, the central node cannot distinguish whether two of its neighboring nodes are adjacent to each other, and cannot identify and reconstruct the graph topology. General GNNs can only explicitly reconstruct a star graph from a 1-hop neighborhood, but cannot model any connections between neighborhoods [55]. Therefore, the permutation invariant aggregation functions cause GNNs to lose the topology of the context in the topology representation and thus fail to learn basic topological properties of the graph [54]; a small numerical illustration follows below.
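As a numerical illustration of a subgraph-counting target that plain neighborhood aggregation cannot see, the sketch below (our own) computes the ground-truth triangle count from the adjacency matrix via \(\mathrm{trace}(\mathbf{A}^{3})/6\); for the two 2-regular graphs that 1-WL cannot separate, the counts nevertheless differ.

```python
import numpy as np

def adjacency(neighbors, n):
    A = np.zeros((n, n))
    for v, nbrs in neighbors.items():
        A[v, nbrs] = 1.0
    return A

def triangle_count(A):
    """Ground-truth number of triangles: trace(A^3) / 6, since each triangle
    contributes 6 closed walks of length 3."""
    return int(round(np.trace(A @ A @ A) / 6))

# Two 2-regular graphs that 1-WL (and hence standard MPNNs) cannot separate:
cycle6 = adjacency({v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}, 6)
triangles = adjacency({0: [1, 2], 1: [0, 2], 2: [0, 1],
                       3: [4, 5], 4: [3, 5], 5: [3, 4]}, 6)

print(triangle_count(cycle6), triangle_count(triangles))  # 0 2
```

A rooted-subtree view sees identical neighborhoods in both graphs, yet their triangle counts are 0 and 2; closing this gap is precisely the goal of the enhancement methods surveyed next.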
### _Methods to improve GNNs expressive power_

Improving the expressive power of GNNs can similarly be approached in terms of both improving the feature embedding effect and improving the topology representation effect. The enhancement of the feature embedding effect depends on the enhancement of the features themselves, including the extraction of dependencies between features and the addition of related features. There are two effective ways to enhance the effect of topology representation. One way is to directly encode relevant topology information for GNNs to learn, and the other way is to optimize the GNNs model architecture to eliminate the obstacle to maintaining topology caused by permutation invariant aggregation functions. In this article, we summarize these three approaches as: Graph feature enhancement (enhancing the effect of feature embedding), Graph topology enhancement (encoding topology information directly), and GNNs architecture enhancement (mitigating the defect of permutation invariant aggregation functions). In line with our expectations, the currently known designs of more powerful GNNs can all be subsumed under the three approaches we summarize. Table II systematically categorizes the designs of more expressive GNNs in recent years based on their employed design methodologies.

### _Graph feature enhancement_

Graph feature enhancement aims to improve the embedding efficiency of graph features. One method is to extract the dependencies between features to increase the information content of the features and thus improve the embedding effect. The other method is to improve the utilization rate of feature information through multiple embeddings to enhance the embedding effect.

Fig. 5: Input and output of the GNNs model under different expressive power representations. When approximation ability is used to characterize the expressive power, the input of the model is a set of graphs and the output is graph embeddings. When separation ability is used, the input is a pair of graphs and the output is graph embeddings. When subgraph counting ability is used, the input is a single graph and the output is node (set) embeddings.

Fig. 6: The rooted subtree in message passing. The left one represents the original graph \(G\), and the objective is to obtain an embedding of node \(v_{1}\) on \(G\), where node \(v_{1}\) is denoted by red color. The right one is the subtree obtained for node \(v_{1}\) after two rounds of iteration.
#### 4.3.1 Extracting dependencies between features

**AM-GCN** The adaptive multi-channel graph convolution network (AM-GCN) [82] computes node similarity based on node features, constructing a k-nearest neighbor (k-NN) graph by selecting the k most analogous neighbors for each node. \begin{table} \begin{tabular}{|p{56.9pt}||p{113.8pt}||p{113.8pt}||p{113.8pt}|} \hline **Methods** & **Subclasses** & **Models** & **Descriptions** \\ \hline Graph feature enhancement & Extracting dependencies between features & AM-GCN [82] & Capture the dependencies between node features by a k-nearest neighbor graph. \\ \cline{2-4} & Enhancing feature use & ACR-GNN [38] & Calculate and add the global node features. \\ \hline Graph topology enhancement & Adding extra topology information & Twin-GNN [84] & Assign color labels and identities to nodes. \\ \cline{2-4} & & ID-GNN [52] & Inject a unique color identity into nodes. \\ \cline{2-4} & & CLIP [85] & Give a distinct color to a subset of nodes with the same features. \\ \cline{2-4} & & RP-GNN [86] & Randomly assign an order of nodes as their extra features. \\ \cline{2-4} & & RNI-GNN [87] & Input randomized initial node features. \\ \cline{2-4} & Encoding micro-topology & DE-GNN [88] & Capture the distance from a given node to the node set to be learnt. \\ \cline{2-4} & & P-GNN [53] & Measure the distance from a node to a set of randomly selected anchor nodes. \\ \cline{2-4} & & PEG [89] & Use separate channels to update both the node features and position features. \\ \cline{2-4} & & SMP [78] & Employ one-hot encoding to learn rich local topological information. \\ \cline{2-4} & & GD-WL [90] & Encode generalised distance information. \\ \cline{2-4} & Encoding global-topology & Eigen-GNN [91] & Incorporate eigen decomposition on the adjacency matrix to obtain eigenvectors as topology features. \\ \cline{2-4} & Encoding local-topology & GSN [92] & Encode the counts of pre-defined subgraph topologies appearing in the graph. \\ \cline{2-4} & & GraphSNN [93] & Enable the transmission of diverse messages based on the topology features of the central node’s 1-hop neighborhood subgraph. \\ \cline{2-4} & & GNN-AK [94] & Extend the scope of local aggregation into more generalized subgraphs. \\ \cline{2-4} & & NGNN [95] & Extract a local subgraph around each node. \\ \cline{2-4} & & MPSN [96] & Operate message passing on simplicial complexes. \\ \cline{2-4} & & ESAN [97] & Represent each graph as a set of subgraphs derived from a predefined policy. \\ \cline{2-4} & & LRP-GNN [55] & Refer to RP-GNN; operates on subgraphs essentially. \\ \cline{2-4} & & k-GNN [25] & High-order message passing occurs between k-tuple subgraphs. \\ \cline{2-4} & & K-hop GNN [98] & Message passing occurs between k-hop neighbors. \\ \cline{2-4} & & KP-GNN [99] & Learn peripheral subgraph information based on K-hop GNNs. \\ \hline GNNs architecture enhancement & Improving aggregation function & GIN [27] & Aggregation function models an injective function operating on multisets. \\ \cline{2-4} & & modular-GCN [100] & Use three GCN modules with different propagation strategies to compensate for the missing topology information. \\ \cline{2-4} & & diagonal-GNN [32] & Use the diagonal modules to capture broader topology relationships. \\ \cline{2-4} & & GNN-LF/HF [30] & Graph convolution corresponding to low-pass/high-pass filtering operations.
\\ \cline{2-4} & & Geom-GCN [101] & Map graphs to a continuous latent space and construct topology neighborhoods based on the geometric relations defined in the latent space. \\ \cline{2-4} & & PG-GNN [102] & Model the dependency relationships between neighboring nodes as a permutation-sensitive function. \\ \cline{2-4} & & k-IGN [60] & Comprise of a sequence of alternating layers composed of invariant or equivariant linear layers and nonlinear activation layers. \\ \cline{2-4} & & PPCN [103] & Apply module strategy in k-IGN. \\ \cline{2-4} & & k-FGNN [104] & Improved tensor calculation techniques in k-IGN. \\ \cline{2-4} & & Ring-GNN [28] & Only matrix addition and matrix multiplication are used in 2-IGN. \\ \cline{2-4} & & SUN [105] & Variant of 2-IGN capture of local and global operations simultaneously. \\ \cline{2-4} & & GNNML [36] & Combine various computation operators. \\ \cline{2-4} & & k-MPNN [61] & Message passing over k-order neighbors via combination of computational operators. \\ \hline \end{tabular} \end{table} TABLE II: Summary of the powerful model graph by selecting the k most analogous neighbors for each node. Then subsequently performs convolutions over both the k-NN graph and original graph, as shown in Figure 8. This design captures the dependencies between node features, thus achieving an enhancement of node features. For a k-NN graph \(\mathcal{G}_{f}=(\mathbf{A}_{f},\mathbf{X})\), \(\mathbf{A}_{f}\) denotes the adjacency matrix. And the message passing along \(\mathcal{G}_{f}\) can be represented as equation (9). \[\mathbf{h}_{f}^{(l+1)}=\mathrm{RELU}\left(\tilde{\mathbf{D}}_{f}^{-1/2}\tilde{ \mathbf{A}}_{f}\tilde{\mathbf{D}}_{f}^{-1/2}\mathbf{h}_{f}^{(l)}\mathbf{W}_{f }^{(l)}\right), \tag{9}\] where \(\mathbf{W}_{f}\) is the weight matrix, \(\tilde{\mathbf{A}}_{f}=\mathbf{A}_{f}+\mathbf{I}_{f}\), and \(\tilde{\mathbf{D}}_{f}\) is the diagonal degree matrix of \(\tilde{\mathbf{A}}_{f}\). **CL-GNN** GNNs can extract more information regarding the local neighborhood dependencies of the nodes by sampling [86], while sampling the predicted node labels also allows GNNs to focus on the relationships between node features, graph topology, and label dependencies. [83] consider using a collective learning framework (CL-GNN), which employs Monte Carlo sampling to further consider label dependencies. This approach enables GNNs to incorporate more information into the estimated joint label distribution, leading to more expressive node embeddings. #### 4.3.2 Enhancing feature use **ACR-GNN** Barcelo et al. [38] adds a readout function in the aggregation operation to calculate the global node features for node update, resulting in a novel architecture known as aggregate-combine-readout GNN (ACR-GNN). The update process can be described as follows: \[\mathbf{h}_{v}^{(l+1)}=\mathrm{UPD}^{(l)}\left(\mathbf{h}_{v}^{( l)},\mathrm{AGG}^{(l)}\left(\left\{\mathbf{h}_{u}^{(l)}:u\in N(v)\right\} \right),\right.\\ \left.\mathrm{READ}^{(l)}\left(\left\{\mathbf{h}_{u}^{(l)}:u\in V \right\}\right)\right). \tag{10}\] Enhancing the effect of feature embedding, either by adding additional features or by augmenting its own features, brings an increase in computational complexity to varying degrees while improving the expressive power of the model, creating an additional burden on model training. 
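To make the aggregate-combine-readout update of Equation (10) concrete, the following is a minimal PyTorch sketch. The dense adjacency matrix, sum aggregation, sum readout, and the single linear update layer are illustrative choices of ours, not prescribed by [38]:

```python
import torch
import torch.nn as nn

class ACRLayer(nn.Module):
    """Minimal sketch of one ACR-GNN layer, Eq. (10): each node combines
    (i) its own state, (ii) a sum over its neighbours (AGG), and
    (iii) a global readout over all nodes (READ)."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(3 * dim, dim)   # UPD over the three inputs

    def forward(self, h, adj):
        # h: (n, dim) node states; adj: (n, n) dense adjacency matrix
        neigh = adj @ h                             # AGG: sum over neighbours
        readout = h.sum(dim=0, keepdim=True).expand_as(h)   # READ: global sum
        return torch.relu(self.update(torch.cat([h, neigh, readout], dim=-1)))

# toy usage: 4 nodes on a path graph, 8-dimensional features
adj = torch.tensor([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=torch.float)
h = torch.randn(4, 8)
print(ACRLayer(8)(h, adj).shape)   # torch.Size([4, 8])
```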
Enhancing the effect of feature embedding, whether by adding additional features or by augmenting a node's own features, improves the expressive power of the model but increases computational complexity to varying degrees, creating an additional burden on model training. In addition, the use of random features requires extra attention: injecting randomness may destroy the permutation invariance of GNNs, so permutation invariance has to be restored by additional design [86].

Fig. 7: A detailed description of the method of graph feature enhancement.

### _Graph topology enhancement_

Graph topology enhancement lies in directly encoding the topology information to be learned. One approach is to manually add additional topology-relevant node features; the other is to directly encode the topology information inherent in the graph. According to the topology information encoded, the latter can be further divided into: encoding the micro-topology (distance, position and other node-level topology information), encoding the local-topology (local topological information, mainly based on subgraph topology), and encoding the global-topology. A detailed description of this method of directly encoding the graph topology is shown in Figure 9.

Fig. 9: A detailed description of the method of graph topology enhancement.

#### 4.4.1 Adding extra topology information

The aim of adding extra topology information is to increase the topological discrimination between nodes. A prevalent and straightforward approach is to assign distinct, recognizable features to nodes, such as identities [84, 52], colors [85], and random features [106, 87, 86, 107]. The added features can be represented by a characteristic matrix \(\mathbf{C}\), as in Equation (11), which enhances the node features \(\mathbf{X}\) via \(\mathbf{C}\). Notably, this method can also be applied to unfeatured graphs, in which case the additional features serve as the node features.

\[G=(\mathbf{A},\mathbf{X}\oplus\mathbf{C}). \tag{11}\]

**Twin-GNN** Wang et al. [84] introduced the Twin-WL test as an extension of the 1-WL test, obtaining improved discriminative power by simultaneously assigning color labels and identities \(id\) to the nodes and executing two sets of aggregation schemes. The identity aggregation is given in Equation (12). Based on the Twin-WL test, Twin-GNN operates two sets of message passing mechanisms, one for node features and one for node identities. The identity passing encodes more complete topology information, allowing for stronger expressive power than the 1-WL test without significantly increasing computational cost.

\[\mathbf{id}_{v}^{(l)}=f_{id}\left(\mathbf{id}_{v}^{(l-1)},\left\{\mathbf{id}_{u}^{(l-1)}|u\in N(v)\right\}\right), \tag{12}\]

where \(f_{id}\) is a function able to convert the identity multiset into topology information that can be compared across different graphs.

**ID-GNN** The Identity-aware GNNs (ID-GNNs) [52] are a universal extension of GNNs. An L-layer ID-GNN first extracts the L-hop network \(G_{v}^{(L)}\) of node \(v\) and injects a unique color identity into \(v\). Heterogeneous message passing is then performed on \(G_{v}^{(L)}\) to obtain the node embedding with the injected identity. This added identity information makes it possible to distinguish subgraphs with the same topology, thereby enhancing the expressive power of GNNs. The nodes \(u\in\mathcal{V}(G_{v}^{(L)})\) fall into two types throughout the embedding process: colored nodes and uncolored nodes. When passing messages from colored nodes, a separate message passing function is applied:

\[\mathbf{m}_{s}^{(l)}=\mathrm{MSG}_{\mathrm{color}}^{(l)}\left(\left\{\mathbf{h}_{s}^{(l-1)}:s\in N(u)\right\}\right). \tag{13}\]
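The feature augmentation \(G=(\mathbf{A},\mathbf{X}\oplus\mathbf{C})\) of Equation (11), instantiated with ID-GNN-style identity coloring, reduces to appending one extra column to the feature matrix. The helper below is a hypothetical minimal sketch of ours:

```python
import torch

def inject_identity(x, center):
    """Sketch of Eq. (11) with an ID-GNN-style colouring: the centre node
    of the extracted ego-network gets colour 1, all other nodes colour 0,
    and the colour column is concatenated to the node features."""
    c = torch.zeros(x.shape[0], 1)
    c[center] = 1.0                    # unique colouring of the root node
    return torch.cat([x, c], dim=-1)   # X ⊕ C

x = torch.randn(5, 3)
print(inject_identity(x, center=2).shape)   # torch.Size([5, 4])
```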
**CLIP** The GNN called the Colored Local Iterative Procedure (CLIP) uses colors to disambiguate identical node features. Nodes with the same features are partitioned into subsets, and the nodes within a subset are each given a distinct color \(\mathbf{c}_{i}\) from a finite color set \(C\). The colored graph can then be represented as in Equation (14). CLIP is capable of capturing topology characteristics that traditional MPNNs fail to distinguish, and it is a provably universal extension of MPNNs.

\[G=(\mathbf{A},\mathbf{X}\oplus[\mathbf{c}_{i}]). \tag{14}\]

**RP-GNN** Murphy et al. [86] considered randomly assigning an order to the nodes as extra features and proposed relational pooling (RP) as a general method for GNNs. This method exploits the idea of enumeration: it enumerates all possible input orders and takes the average result as the final output. Suppose graph \(G\) has \(m\) possible permutations in total. For one permutation \(\pi\), \(\mathbf{I}_{\pi}\) is a one-hot encoding matrix of the permuted sequence, and \(\pi\circ G=(\mathbf{A},\mathbf{X}\oplus\mathbf{I}_{\pi})\) denotes the permuted graph. Each permuted graph yields node embeddings \(\mathbf{h}_{v}^{\pi}\) via a GNN, and their average is taken as the final embedding \(\mathbf{h}_{v}\):

\[\mathbf{h}_{v}=\frac{1}{m}\sum_{\pi}\mathbf{h}_{v}^{\pi}. \tag{15}\]

**RNI-GNN** Random node initialization (RNI) [87] is proposed as a means of training and implementing GNNs with randomized initial node features \(\mathbf{R}\). RNI-GNNs are shown to achieve universality without relying on higher-order feature computations; this also holds with partially randomized initial node features.

\[G=(\mathbf{A},\mathbf{X}\oplus\mathbf{R}). \tag{16}\]

#### 4.4.2 Encoding the micro-topology

Encoding the micro-topology means determining the position of each node within the graph topology, accomplished through distance encoding or position encoding.

**DE-GNN** Distance encoding (DE) [88] is proposed as a generic initialization means that contributes to the node embeddings in GNNs by capturing the distance from a given node to the node set to be learnt. Given a node set \(S\) whose topology representation is to be learnt, for every node \(u\) in the graph, DE is defined as a mapping \(f_{\mathrm{DE}}\) of a set of landing probabilities of random walks from each node of the set \(S\) to node \(u\). DE-GNN is proved to effectively distinguish node sets within nearly all regular graphs where traditional GNNs struggle, although it introduces significant memory and time overhead.

\[\mathbf{d}_{v|S}=\mathrm{AGG}(f_{\mathrm{DE}}(v,u)|u\in S), \tag{17}\]

\[\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\otimes\mathbf{d}_{v|S}, \tag{18}\]

where \(\mathbf{d}_{v|S}\) denotes the distance between node \(v\) and set \(S\), and \(\otimes\) is the concatenation operation.
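A minimal sketch of Equations (17)-(18) follows. We substitute the shortest-path hop distance for the random-walk landing probabilities of \(f_{\mathrm{DE}}\), and use min as the AGG operator; both substitutions are our illustrative assumptions:

```python
import numpy as np
from collections import deque

def hops_from(adj_list, src, n):
    """BFS hop distances from a source node on an unweighted graph."""
    d = np.full(n, np.inf); d[src] = 0
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj_list[v]:
            if d[u] == np.inf:
                d[u] = d[v] + 1
                q.append(u)
    return d

def distance_encoding(adj_list, n, S, X):
    """Sketch of Eqs. (17)-(18): each node's feature x_v is concatenated
    with its aggregated (min) distance to the target node set S."""
    dists = np.stack([hops_from(adj_list, s, n) for s in S])   # |S| x n
    d_vS = dists.min(axis=0)                                   # AGG = min
    return np.concatenate([X, d_vS[:, None]], axis=1)          # x_v ⊗ d_{v|S}

adj_list = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}              # path graph
X = np.random.randn(4, 2)
print(distance_encoding(adj_list, 4, S=[0], X=X).shape)        # (4, 3)
```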
**P-GNN** To mitigate the computational cost of DE, an approach called position encoding (PE) uses the absolute position of the nodes in the graph as an additional feature. This technique approximates DE by measuring the distances between position features and can be shared among different node sets. The position-aware GNN (P-GNN) proposed by You et al. [53] implements PE by measuring the distance from a node to a set of randomly selected anchor nodes, as shown in Figure 10. However, this approach has been observed to converge slowly and to yield unsatisfactory performance. The PE procedure is as follows:

\[\mathbf{d}_{v,u}^{q}=\left\{\begin{array}{l}\mathbf{d}_{v,u},\ \mathbf{d}_{v,u}\leq q\\ \infty,\ \mathrm{otherwise},\end{array}\right. \tag{19}\]

\[\mathbf{d}_{\mathrm{standard}}=\frac{1}{1+\mathbf{d}_{v,u}^{q}}, \tag{20}\]

where \(q\) is a cutoff on the hop count from \(v\) to \(u\), and \(\mathbf{d}_{v,u}\) is the shortest distance between \(v\) and \(u\).

Fig. 10: The update strategy of P-GNN. (a) shows anchor node sets of different sizes. (b) shows the computation of the embedding of \(v_{1}\) in the \(l\)-th layer. This figure is referenced from [53], Figure 2.

**PEG** Building on PE, [89] introduces a novel approach to constructing more powerful GNNs: a class of GNN layers termed PEG. PEG uses separate channels to update the node features and the position features, exhibiting permutation equivariance and rotation equivariance simultaneously. It further satisfies stability, an even stronger condition than permutation equivariance, since stability characterizes how large a gap between the predictions on two graphs is expected when they do not perfectly match each other.

In addition to DE or PE, incorporating marked topology information into the message passing mechanism also significantly enhances the expressive power of GNNs.

**SMP** The topology message passing scheme (SMP) [78] employs one-hot encodings to propagate a unique node identification, enabling the learning of a local context matrix around each node. This gives SMP the capability to learn rich local topological information. In SMP, each node of a graph maintains a local context matrix \(\mathbf{U}_{i}\) rather than a feature vector \(\mathbf{x}_{i}\) as in MPNN. \(\mathbf{U}_{i}\) is initialized as a one-hot encoding \(\mathbf{U}_{i}^{(0)}=\mathbf{1}\) for every \(v_{i}\in\mathcal{V}\), and if features \(\mathbf{x}_{i}\) are associated with node \(v_{i}\), they are appended to the corresponding row of the local context as \(\mathbf{U}_{i}^{(0)}[i,:]=[1,\mathbf{x}_{i}]\). The message passing process is shown in Equation (21):

\[\mathbf{U}_{i}^{(l+1)}=\mathrm{UPD}\left(\mathbf{U}_{i}^{(l)},\mathrm{AGG}\left(\left\{\mathrm{MSG}(\mathbf{U}_{j}^{(l)}):v_{j}\in N(v_{i})\right\}\right)\right). \tag{21}\]

**GD-WL** One of the main drawbacks of 1-WL is the lack of distance information between nodes. Inspired by distance encoding methods [89, 78, 53], [90] further introduced a new color refinement framework called the Generalized Distance Weisfeiler-Lehman (GD-WL) test. The update rule of GD-WL is very simple and can be written as:

\[\mathbf{c}_{v}^{(l)}=\mathrm{Hash}\left(\mathbf{c}_{v}^{(l-1)},\left\{\left\{\left(d_{vu},\mathbf{c}_{u}^{(l-1)}\right):u\in N(v)\right\}\right\}\right), \tag{22}\]

where \(d_{vu}\) can be an arbitrary distance metric.
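The following is a rough sketch of one GD-WL refinement round in the spirit of Equation (22). As an assumption of ours, the multiset here ranges over all other nodes with the hop distance as \(d_{vu}\), one concrete instantiation of the arbitrary distance metric:

```python
import numpy as np
from collections import deque

def all_pairs_hops(adj_list, n):
    """BFS shortest-path (hop) distances between all node pairs."""
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj_list[v]:
                if D[s, u] == np.inf:
                    D[s, u] = D[s, v] + 1
                    q.append(u)
    return D

def gd_wl_round(colors, D):
    """One refinement round: hash each node's colour together with the
    multiset of (distance, colour) pairs, cf. Eq. (22)."""
    new = []
    for v in range(len(colors)):
        multiset = sorted((D[v, u], colors[u]) for u in range(len(colors)) if u != v)
        new.append(hash((colors[v], tuple(multiset))))
    return new

adj_list = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # path graph
D = all_pairs_hops(adj_list, 4)
# starting from uniform colours, the endpoints separate from the inner nodes
print(gd_wl_round([0, 0, 0, 0], D))
```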
#### 4.4.3 Encoding the local-topology

This method focuses on local topology, mainly utilizing subgraphs, explicitly or implicitly, in node embedding to learn more topology information.

**GSN** Graph sub-topology networks (GSN) [92] represent a topologically-aware message passing scheme based on sub-topology encoding. GSN explicitly encodes the counts of pre-defined subgraph topologies appearing in the graph as a topology role in message passing, capturing richer topological features and thereby exhibiting expressive power superior to the WL test. However, the pre-defined subgraphs require domain expert knowledge, hampering the model's widespread adoption. With \(\mathbf{r}_{v}\) denoting the topology role induced by the subgraph counts, the message passing process is as follows:

\[\mathbf{m}_{v}^{(l+1)}=\mathrm{MSG}(\mathbf{h}_{v}^{(l)},\mathbf{h}_{u}^{(l)},\mathbf{r}_{v},\mathbf{r}_{u}:u\in N(v)). \tag{23}\]

**GraphSNN** Wijesinghe et al. [93] introduced the neighborhood subgraph graph neural network (GraphSNN), which enables the transmission of diverse messages based on the topology features of the central node's 1-hop neighborhood subgraph during message passing. The weight of the delivered message is determined by the extent of the overlap subgraph between the central node and its neighbors, as in Equations (24) and (25), with edge overlap also factored into the weight calculation. By injecting the topology information of the neighborhood subgraph into the message passing process, as in Equation (26), GraphSNN improves its expressive power without requiring specialized knowledge or sacrificing computational complexity.

\[w_{vu}=\frac{\left|E_{vu}\right|}{\left|V_{vu}\right|\cdot\left(\left|V_{vu}\right|-1\right)}\cdot\left|V_{vu}\right|^{\lambda}, \tag{24}\]

\[\mathbf{W}=\left(\frac{w_{vu}}{\sum_{u\in N(v)}w_{vu}}\right)_{v,u\in V}, \tag{25}\]

\[\mathbf{h}_{v}^{(l+1)}=\mathrm{MLP}_{\theta}\left(\gamma^{(l)}\left(\sum_{u\in N(v)}\mathbf{W}_{vu}+1\right)\mathbf{h}_{v}^{(l)}+\sum_{u\in N(v)}\left(\mathbf{W}_{vu}+1\right)\mathbf{h}_{u}^{(l)}\right), \tag{26}\]

where \(\lambda>0\), the overlap subgraph of neighboring nodes \(v\) and \(u\) is denoted \(\mathcal{G}_{vu}=(V_{vu},E_{vu})=S_{v}\cap S_{u}\), \(\mathbf{W}\) is the weight matrix, \(\gamma\) is a learnable scalar parameter, and \(\theta\) denotes the parameters of the MLP.

**GNN-AK** [94] extends the scope of local aggregation in GNNs beyond 1-hop (star-like) neighbors to more generalized subgraphs. This method, termed GNN-AK, computes the encoding of the induced subgraph of each node and serves as a wrapper to uplift any GNN, where the GNN performs as a kernel. Let \(N_{k}(v)\) be the set of nodes in the k-hop egonet rooted at node \(v\), including \(v\) itself. For \(N_{k}(v)\subseteq\mathcal{V}\), \(G[N_{k}(v)]=\left(N_{k}(v),\{(i,j)\in\mathcal{E}\,|\,i,j\in N_{k}(v)\}\right)\) is the induced subgraph of \(v\). Every subgraph \(G[N_{k}(v)]\) is encoded as \(\mathbf{g}_{\mathrm{sub}(v)}\), which is concatenated with the node embedding \(\mathbf{h}_{\mathrm{sub}(v)}\) aggregated from \(N_{k}(v)\) to obtain the final embedding \(\mathbf{h}_{v}\):

\[\mathbf{g}_{\mathrm{sub}(v)}^{(l+1)}=\mathrm{POOL}^{(l)}(\mathbf{h}_{\mathrm{sub}(i)}|i\in N_{k}(v)), \tag{27}\]

\[\mathbf{h}_{v}=\mathrm{CON}(\mathbf{g}_{\mathrm{sub}(v)},\mathbf{h}_{\mathrm{sub}(v)}). \tag{28}\]

**NGNN** Similar to the design philosophy of GNN-AK, Nested Graph Neural Networks (NGNNs) [95] extract a local subgraph around each node and apply a GNN to learn a representation of each subgraph. The overall graph representation is obtained by aggregating these subgraph representations. However, NGNN differs from GNN-AK in the subgraph design: specifically, the height of the rooted subgraph in NGNN can be determined based on the given hop of the node.
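As a concrete instance of the pre-defined sub-topology counts \(\mathbf{r}_{v}\) that GSN injects in Equation (23), the sketch below counts triangles per node; the triangle pattern is our illustrative choice of pre-defined subgraph:

```python
import numpy as np

def triangle_counts(A):
    """Per-node triangle counts: diag(A^3) counts closed 3-walks through
    each node, and each triangle contributes two of them (one per
    direction), hence the division by 2."""
    A3 = A @ A @ A
    return np.diag(A3) / 2

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # nodes 0,1,2 form one triangle
r = triangle_counts(A)
print(r)                                     # [1. 1. 1. 0.]
X_aug = np.concatenate([np.random.randn(4, 3), r[:, None]], axis=1)  # inject r_v
```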
**MPSN** Message Passing Simplicial Networks (MPSNs) [96] operate message passing on simplicial complexes (SCs) [108]. SCs correspond to subgraph topology, where SCs of varying dimensions concretize topology information of the graph at different levels: 0-simplices can be interpreted as vertices, 1-simplices as edges, 2-simplices as triangles, and so on.

**ESAN** An empirical observation is that although two graphs may not be discriminable by MPNNs, they generally contain distinguishable subgraphs. Thus, [97] recommends representing each graph as a set of subgraphs derived from a predefined policy and proposes a novel equivariant subgraph aggregation network (ESAN) framework for handling them, comprising the DSS-GNN architecture and a subgraph selection policy. The core of the approach is to represent the graph \(G\) as a bag (multiset) \(S_{G}=\{\{G_{1},...,G_{m}\}\}\) of its subgraphs, and to make a prediction on the graph based on this bag. Motivated by the linear characterization, DSS-GNN adopts the following layer topology:

\[F_{DSS-GNN}=E_{\mathrm{sets}}\circ R_{\mathrm{subgraphs}}\circ E_{\mathrm{subgraphs}}, \tag{29}\]

\[E_{\mathrm{subgraphs}}:(L(\mathbf{A},\mathbf{X}))_{i}=L^{1}(\mathbf{A}_{i},\mathbf{X}_{i})+L^{2}\left(\sum_{j=1}^{m}\mathbf{A}_{j},\sum_{j=1}^{m}\mathbf{X}_{j}\right), \tag{30}\]

where \(E_{\mathrm{subgraphs}}\) is an equivariant feature encoding layer, \(R_{\mathrm{subgraphs}}\) is a subgraph readout layer, and \(E_{\mathrm{sets}}\) is a universal set encoding layer. \((L(\mathbf{A},\mathbf{X}))_{i}\) denotes the output of the layer on the i-th subgraph, and \(L^{1},L^{2}\) represent two graph encoders, which can be any type of GNN layer.

The methods outlined above pertain primarily to the explicit utilization of subgraph topology; however, implicit utilization can also enhance the learning of graph topology.

**LRP-GNN** Inspired by RP-GNN [86] and the subgraph counting task, the Local Relational Pooling model (LRP-GNN) [55] achieves competitive performance on sub-topology counting. LRP-GNN is essentially RP-GNN operating on subgraphs, but in comparison to previous works [92, 93, 94], LRP can not only count sub-topologies but also learn the relations among them.
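To illustrate the bag-of-subgraphs construction at the heart of ESAN, the sketch below implements the node-deletion selection policy; the stand-in subgraph encoder and mean pooling over the bag are our simplifying assumptions, not the full DSS-GNN of Equations (29)-(30):

```python
import torch

def node_deleted_bag(A, X):
    """ESAN-style subgraph bag S_G: the node-deletion policy yields one
    subgraph per node by zeroing that node's row/column in A and its
    feature row in X."""
    bag = []
    for v in range(A.shape[0]):
        Av, Xv = A.clone(), X.clone()
        Av[v, :] = 0; Av[:, v] = 0; Xv[v] = 0
        bag.append((Av, Xv))
    return bag

A = torch.tensor([[0., 1.], [1., 0.]])
X = torch.randn(2, 3)
gnn = lambda A, X: (A @ X).sum(dim=0)        # stand-in shared subgraph encoder
emb = torch.stack([gnn(Av, Xv) for Av, Xv in node_deleted_bag(A, X)]).mean(0)
print(emb.shape)                              # torch.Size([3])
```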
**k-GNN** The higher-order message passing scheme (k-GNN) utilizes subgraphs implicitly. The k-WL test serves as the foundation for k-GNNs. Whereas the standard WL test iteratively aggregates the neighbors of a single node, k-WL operates on tuples of k nodes for iterative aggregation. These k-tuples can be interpreted as subgraphs, between which higher-order message passing occurs. Equation (31) redefines the neighborhood of k-tuples, and Equation (32) enables reliable capture of the inherent local graph topology through information propagation and embedding updates over k-neighborhoods. It has been proved that the discrimination power of 1-WL is equivalent to 2-WL, and that for k\(\geq\)2, (k+1)-WL is strictly more powerful than k-WL. Notably, 3-WL is considered to be the strongest expressive power for identifying non-isomorphic graphs to date [26, 109]. The expressive power of k-GNNs equals that of k-WL; hence k-GNNs with k=3 reach the strongest expressive power achieved to date, matching 3-WL.

\[N(s)=\left\{t\in[V]^{k}\,:\,|s\cap t|=k-1\right\}, \tag{31}\]

\[\mathbf{h}_{k}^{(l+1)}(s)=\sigma\left(\mathbf{h}_{k}^{(l)}(s)\cdot\mathbf{W}_{1}^{(l+1)}+\sum_{u\in N(s)}\mathbf{h}_{k}^{(l)}(u)\cdot\mathbf{W}_{2}^{(l+1)}\right), \tag{32}\]

where \([V]^{k}\) denotes all subsets of size \(k\) over the set \(\mathcal{V}\), a \(k\)-set \(s=\{v_{1},\ldots,v_{k}\}\in[V]^{k}\) is the central tuple, \(N(s)\) denotes the neighborhood of \(s\), and \(\mathbf{W}_{1}\), \(\mathbf{W}_{2}\) are parameter matrices.

**K-hop GNN** In MPNN, the representation of a node is updated by aggregating its direct neighbours, called its 1-hop neighbours. Several works [110, 111, 112, 113, 114] extend message passing to K hops. In K-hop message passing, the node representation is updated by aggregating not only the node's 1-hop neighbours but all neighbours within the node's K-hop neighborhood. The K-hop neighbours of node \(v\) are all nodes whose distance from \(v\) is less than or equal to K; \(N(v)^{k}\) denotes the k-th hop neighbours of \(v\), i.e., the nodes whose distance to \(v\) is exactly k.

\[\mathbf{m}_{v}^{(l,k)}=\mathrm{MSG}^{(l)}\left(\left\{\mathbf{h}_{u}^{(l-1)}:u\in N(v)^{k}\right\}\right), \tag{33}\]

\[\mathbf{h}_{v}^{(l,k)}=\mathrm{UPD}^{(l)}\left(\mathbf{m}_{v}^{(l,k)},\mathbf{h}_{v}^{(l-1)}\right), \tag{34}\]

\[\mathbf{h}_{v}^{(l)}=\mathrm{AGG}^{(l)}\left(\left\{\mathbf{h}_{v}^{(l,k)}\,|\,k=1,2,...,K\right\}\right). \tag{35}\]

K-hop message passing is more powerful than 1-WL and can distinguish almost all regular graphs, but its expressive power is likewise bounded by 3-WL.

**KP-GNN** [99] introduced the K-hop peripheral-subgraph-enhanced GNN (KP-GNN), a new GNN framework with k-hop message passing that significantly improves the expressive power of K-hop GNNs. In the neighbour aggregation of each hop, KP-GNN aggregates not only the neighbouring nodes but also the peripheral subgraphs (subgraphs induced by the neighbours at that hop). This additional information helps KP-GNN learn more expressive local topology information around nodes. The improvement occurs in the message passing process:

\[\mathbf{m}_{v}^{(l,k)}=\mathrm{MSG}^{(l)}\left(\left\{\mathbf{h}_{u}^{(l-1)}:u\in N(v)^{k}\right\},G_{v}^{k}\right), \tag{36}\]

where \(G_{v}^{k}\) is the subgraph induced by the neighbours of \(v\) at hop \(k\).

#### 4.4.4 Encoding the global-topology

Encoding the global-topology means directly extracting topology information of the whole graph for learning.

**Eigen-GNN** Zhang et al. [91] introduced a topology eigen-decomposition module designed as a universal plug-in to GNNs, termed Eigen-GNN. This approach applies eigen decomposition to the adjacency matrix \(\mathbf{A}\) of the graph to obtain eigenvectors as topology features, which are then concatenated with the node features. The resulting combination is used as the input to the GNN, as denoted in Equation (37):

\[\mathbf{H}^{(0)}=[\mathbf{X},f(\mathbf{Q})], \tag{37}\]

where \(\mathbf{Q}\) holds the eigenvectors corresponding to the top-d largest absolute eigenvalues of \(\mathbf{A}\), and \(f\) is a function such as normalization or the identity mapping.
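The Eigen-GNN input construction of Equation (37) is a one-liner in practice. A minimal sketch, taking \(f\) as the identity mapping and assuming an undirected (symmetric) adjacency matrix:

```python
import numpy as np

def eigen_features(A, X, d=2):
    """Eq. (37): concatenate the eigenvectors of A associated with the
    top-d largest-magnitude eigenvalues to the node features."""
    w, Q = np.linalg.eigh(A)              # eigh: A is symmetric (undirected)
    top = np.argsort(-np.abs(w))[:d]      # indices of top-d |eigenvalues|
    return np.concatenate([X, Q[:, top]], axis=1)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
print(eigen_features(A, X).shape)         # (3, 6)
```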
### _GNNs architecture enhancement_

As can be seen from the previous analysis, GNNs cannot learn the correct graph topology information because of their permutation invariant aggregation functions, and thus lose expressive power. Therefore, in addition to directly encoding topology information for learning, optimizing the GNN architecture also helps GNNs learn more topology information, either by improving the aggregation operations or by designing equivariant GNNs without aggregation operations. A detailed description of this method of optimizing the GNNs architecture is shown in Figure 11.

Fig. 11: A detailed description of the method of GNNs architecture enhancement.

#### 4.5.1 Improving aggregation functions

Improving aggregation functions involves modifying the aggregation operations. Most modifications retain the permutation invariance of the aggregation function, although a few exceptions break it. Invariance-preserving improvements of the aggregator are established on the theory proposed by Xu et al. [27].

**GIN** The essence of MPNN is a function that maps multisets, in which repeated elements are allowed. The capacity of multiset functions to discern these repeated elements inherently constrains the expressive power of MPNNs. As a result, optimizing the aggregation function necessitates an injective function operating on multisets, ensuring distinct embeddings for different nodes. The graph isomorphism network (GIN) is a simple architecture satisfying this prerequisite:

\[\mathbf{h}_{v}^{(l+1)}=\mathrm{MLP}\left(\left(1+\varepsilon^{(l)}\right)\mathbf{h}_{v}^{(l)}+\sum_{u\in N(v)}\mathbf{h}_{u}^{(l)}\right), \tag{38}\]

where \(\varepsilon^{(l)}\) is a learnable or fixed scalar, and the MLP plays a critical role in strengthening expressive power by ensuring the injective property.

**modular-GCN** The global parameter assignment strategy enforced by GCN is in effect a permutation invariant aggregation operation. Dehmamy et al. [100] observed the resulting loss of topology information in GCN and designed three GCN modules using three different propagation strategies: (1) \(f_{1}=\mathbf{A}\), (2) \(f_{2}=\mathbf{D}^{-1}\mathbf{A}\), and (3) \(f_{3}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\). Combining the three GCN modules compensates for the missing topology information, realizing an optimization of the aggregation function.

Furthermore, general aggregation operations in GNNs can be interpreted as graph convolutions: the graph topology acts as a filter on node features, smoothing them during the convolution process. This correspondence enables the analysis of GNN expressive power through spectral decomposition techniques [31].

**diagonal-GNN** Kanatsoulis et al. [32] utilize spectral decomposition tools to view GNNs as graph feature filters, as shown in Equation (39), and propose a GNN designed with diagonal modules (diagonal-GNN). The diagonal modules allow GNNs to capture broader topology relationships in addition to neighborhood relationships, giving GNNs on unfeatured graphs recognition abilities equivalent to those on featured graphs.

\[\mathbf{H}=\sigma\left(\sum_{k=0}^{K-1}\mathbf{A}^{k}\mathbf{X}\mathbf{W}_{k}\right), \tag{39}\]

\[\mathrm{diagonal\text{-}GNN}:\ \mathbf{H}=\sigma\left(\sum_{k=0}^{K-1}\mathrm{diag}(\mathbf{A}^{k})\mathbf{X}\mathbf{W}_{k}\right), \tag{40}\]

where \(K\) is the length of the graph filter, \(\sigma\) denotes a nonlinearity, and \(\mathbf{W}_{k}\) are the filter parameters, which can be a matrix, a vector, or a scalar.
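The graph-filter view of Equations (39)-(40) can be written down directly. A minimal NumPy sketch, with ReLU as the nonlinearity and random filter parameters as illustrative choices:

```python
import numpy as np

def graph_filter(A, X, W, diagonal=False):
    """Eq. (39): H = sigma(sum_k A^k X W_k). With diagonal=True, only
    diag(A^k) is used, as in diagonal-GNN (Eq. (40)).
    W is a list of (d_in, d_out) filter parameter matrices; K = len(W)."""
    Ak = np.eye(A.shape[0])                 # A^0
    H = np.zeros((A.shape[0], W[0].shape[1]))
    for Wk in W:
        op = np.diag(np.diag(Ak)) if diagonal else Ak
        H += op @ X @ Wk
        Ak = Ak @ A                         # next power of A
    return np.maximum(H, 0)                 # sigma = ReLU

A = np.array([[0, 1], [1, 0]], dtype=float)
X = np.random.randn(2, 3)
W = [np.random.randn(3, 4) for _ in range(3)]   # filter length K = 3
print(graph_filter(A, X, W).shape)              # (2, 4)
```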
**GNN-LF/HF** By leveraging spectral decomposition tools to design convolution processes for specific filtering conditions, GNNs can recognize and process more complex low-frequency or high-frequency information. [30] considers two distinct modes, low-pass filtering (LF) and high-pass filtering (HF), develops the corresponding convolution operations, and designs the new models GNN-LF and GNN-HF respectively. The propagation mechanism of GNN-LF can be expressed by the following equations:

\[\mathbf{H}^{(l+1)}=f_{\mathrm{COM}}\left(\frac{1+\alpha\mu-2\alpha}{1+\alpha\mu-\alpha}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\mathbf{H}^{(l)}+\frac{\alpha\mu}{1+\alpha\mu-\alpha}\mathbf{X}+\frac{\alpha-\alpha\mu}{1+\alpha\mu-\alpha}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\mathbf{X}\right), \tag{41}\]

\[\mathbf{H}^{(0)}=\frac{\mu}{1+\alpha\mu-\alpha}\mathbf{X}+\frac{1-\mu}{1+\alpha\mu-\alpha}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\mathbf{X}. \tag{42}\]

The propagation mechanism of GNN-HF is given by the following equations:

\[\mathbf{H}^{(l+1)}=f_{\mathrm{COM}}\left(\frac{\alpha\beta-\alpha+1}{\alpha\beta+1}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\mathbf{H}^{(l)}+\frac{\alpha}{\alpha\beta+1}\mathbf{X}+\frac{\alpha\beta}{\alpha\beta+1}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\mathbf{X}\right), \tag{43}\]

\[\mathbf{H}^{(0)}=\frac{1}{\alpha\beta+1}\mathbf{X}+\frac{\beta}{\alpha\beta+1}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\mathbf{X}, \tag{44}\]

where \(\mu\), \(\alpha\), \(\beta\) are hyperparameters that control the trade-off between topology and feature information at different scales, and \(\tilde{\mathbf{A}}\) and \(\tilde{\mathbf{D}}\) are the normalized adjacency matrix and the corresponding diagonal degree matrix, which account for the topology of the graph data.
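The initializations in Equations (42) and (44) translate directly into code. A minimal sketch following the equations as printed; the hyperparameter values are illustrative assumptions:

```python
import numpy as np

def sym_norm_adj(A):
    """D~^{-1/2} A~ D~^{-1/2} with self-loops, as used in Eqs. (41)-(44)."""
    A_t = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_t.sum(axis=1) ** -0.5)
    return d_inv_sqrt @ A_t @ d_inv_sqrt

def gnn_lf_init(A, X, alpha=0.5, mu=0.7):
    """H^(0) of GNN-LF, Eq. (42): a weighted mix of raw features and
    smoothed (low-pass filtered) features."""
    c = 1 + alpha * mu - alpha
    return (mu / c) * X + ((1 - mu) / c) * (sym_norm_adj(A) @ X)

def gnn_hf_init(A, X, alpha=0.5, beta=1.0):
    """H^(0) of GNN-HF, Eq. (44), with the beta-weighted filtered term."""
    c = alpha * beta + 1
    return (1 / c) * X + (beta / c) * (sym_norm_adj(A) @ X)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 2)
print(gnn_lf_init(A, X).shape, gnn_hf_init(A, X).shape)   # (3, 2) (3, 2)
```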
**Geom-GCN** Pei et al. [101] leverage network geometry theory to propose a geometric aggregation scheme, implemented within graph convolution networks as Geom-GCN. This scheme maps graphs to a continuous latent space via node embeddings and subsequently constructs topology neighborhoods based on the geometric relations defined in the latent space, effectively preserving the intricate topological patterns present within the graphs. Building upon the original neighborhood definition, they further define topology neighborhoods \(\mathcal{N}(v)=(\{N_{g}(v),N_{s}(v)\},\tau)\), consisting of a set of neighborhoods \(\{N_{g}(v),N_{s}(v)\}\) and a relation operator \(\tau\) on neighborhoods. \(N_{g}(v)=\{u|u\in\mathcal{V},(u,v)\in\mathcal{E}\}\) is the neighborhood in the graph, and \(N_{s}(v)=\{u|u\in\mathcal{V},d(\mathbf{z}_{u},\mathbf{z}_{v})<\rho\}\) is the neighborhood in the latent space, where \(d\) is the distance function, \(\rho\) is a pre-given parameter, and \(\mathbf{z}_{v}\in\mathbb{R}\) can be considered the position of node \(v\) in the latent space via the mapping \(f:v\rightarrow\mathbf{z}_{v}\). Moreover, \(\tau:(\mathbf{z}_{v},\mathbf{z}_{u})\to r\in R\), where \(R\) is the set of geometric relationships and \(r\) can be specified as an arbitrary geometric relationship of interest according to the particular latent space and application. On this basis, a bi-level aggregation scheme is introduced:

\[\mathbf{m}_{(i,r)}^{v,l+1}=\mathrm{AGG}_{1}\left(\left\{\mathbf{h}_{u}^{l}\,|\,u\in N_{i}(v),\tau(\mathbf{z}_{v},\mathbf{z}_{u})=r\right\}\right),\quad i\in\{g,s\}, \tag{45}\]

\[\mathbf{m}_{v}^{l+1}=\mathrm{AGG}_{2}\left(\left(\mathbf{m}_{(i,r)}^{v,l+1},(i,r)\right)\right),\quad i\in\{g,s\},\ r\in R, \tag{46}\]

where \(\mathbf{m}_{(i,r)}^{v,l+1}\) can be considered as the feature of a virtual node in the latent space indexed by \((i,r)\), corresponding to the combination of a neighborhood \(i\) and a relationship \(r\).

The approach of optimizing the permutation invariant aggregation function presents significant limitations; therefore, alternative designs have been proposed to address this constraint.

**PG-GNN** Permutation-sensitive functions can be regarded as a symmetry-breaking mechanism that disrupts the equal status of adjacent nodes, allowing the relationships between neighboring nodes to be captured. However, few permutation-sensitive GNNs [11, 85, 1] are applicable to real-world problems, owing to their complexity. Huang et al. [102] adopt recurrent neural networks (RNNs) to model the dependency relationships between neighboring nodes as a permutation-sensitive function, wherein all permutations \(\pi\) act on \(\mathcal{V}\), leading to the following aggregation strategy:

\[\mathbf{h}_{v}^{(l+1)}=\sum_{\pi\in\Pi}\mathrm{RNN}\left(\mathbf{h}_{\pi(u_{1})}^{(l)},\mathbf{h}_{\pi(u_{2})}^{(l)},...,\mathbf{h}_{\pi(u_{n})}^{(l)}\right)+\mathbf{W}^{(l)}\mathbf{h}_{v}^{(l)}, \tag{47}\]

where \(u_{1:n}\in N(v)\) and \(\mathbf{W}\) is the parameter matrix.
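The sketch below illustrates the permutation-sensitive aggregation of Equation (47) in PyTorch. As an assumption of ours, the full sum over \(\Pi\) is approximated by a few random permutations, and a GRU stands in for the RNN:

```python
import torch
import torch.nn as nn

class PermSensitiveAgg(nn.Module):
    """Rough sketch of Eq. (47): neighbour states are read by an RNN in
    sampled orders, so the result depends on neighbour ordering."""
    def __init__(self, dim, num_perm=4):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.W = nn.Linear(dim, dim, bias=False)
        self.num_perm = num_perm

    def forward(self, h_v, h_neigh):
        # h_v: (dim,) central node state; h_neigh: (n_neigh, dim)
        agg = 0.0
        for _ in range(self.num_perm):       # Monte-Carlo stand-in for sum over Pi
            pi = torch.randperm(h_neigh.shape[0])
            _, last = self.rnn(h_neigh[pi].unsqueeze(0))   # read in this order
            agg = agg + last.squeeze()
        return agg + self.W(h_v)

layer = PermSensitiveAgg(dim=8)
print(layer(torch.randn(8), torch.randn(3, 8)).shape)   # torch.Size([8])
```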
#### 4.5.2 Adopting equivariant architecture

The bottleneck of GNN expressive power lies in the permutation invariance of the aggregation operators [77]. Yet this invariance is only one condition that induces permutation equivariance, rather than a requisite component of GNN design. Consequently, abandoning aggregation operations and employing distinct arrangements of equivariant functions or operators leads to the direct design of equivariant GNNs. Representative models are summarized in Table II: k-IGN [60] comprises a sequence of alternating invariant or equivariant linear layers and nonlinear activation layers; PPGN [103] applies a modular strategy to k-IGN, while k-FGNN [104] improves its tensor calculation techniques; Ring-GNN [28] is a 2-IGN that uses only matrix addition and matrix multiplication; SUN [105] is a 2-IGN variant that captures local and global operations simultaneously; GNNML [36] combines various computation operators, and k-MPNN [61] performs message passing over k-order neighbors via a combination of computational operators.

## 5 Future directions

Despite the progress surveyed above, several open problems remain. Research in this direction holds the promise of clarifying the internal constraints of GNNs and unraveling the mysteries of the GNN black-box.

**The physics-informed designs.** Message passing, as the mainstream paradigm of GNNs, has achieved great success in many applications. In terms of expressive power, however, MPNNs remain constrained by the WL test. Other frameworks such as LGNN have been applied successfully, but their further development is restricted by the complexity of tensor calculations. Bronstein et al. [92] develop a continuous-learning GNN model inspired by physics, drawing on differential geometry, algebraic topology, and differential equations. Their model departs from the traditional MPNN framework, introducing a completely new design and exhibiting unprecedented expressive power. Such advancements are expected to pave the way for additional GNN models that challenge the limitations of the MPNN paradigm, which has long dominated the field.

**Graph neural architecture search.** Adopting a modular design for GNNs, without altering the original architecture, can enhance their expressive capacity to varying degrees. This finding underscores the potential of graph neural architecture search (GraphNAS) for boosting expressive power. Developing powerful GNN models often necessitates expertise and presents a significant computational threshold, whereas GraphNAS represents a low-cost and efficient means of automating GNN design. However, current efforts in graph neural architecture search prioritize model generalization over expressiveness in the design of the search space.
Therefore, the challenge lies in designing a specific search space that facilitates the automated design of powerful GNN models. This remains a pressing problem for the research community to tackle.

## 6 Conclusion

Research on the expressive power of GNNs has matured considerably, with an increasing number of improved models emerging. However, these studies have not contributed proportionally to a deeper understanding of GNN expressive power. Consequently, we propose a unified theoretical framework for delineating GNN expressive power, encompassing its definition, its limitations, and an analysis of the factors influencing it. Furthermore, we utilize this framework to summarize and categorize the methods currently employed for enhancing GNN expressive power. Overall, as a paradigm of graph learning models, the expressive power of GNNs pertains to both graph features and graph topology; devised methods should accordingly address both aspects. Our unified framework offers a novel, standardized path for researching GNN expressive power in this context.
2301.10171
Spectral Cross-Domain Neural Network with Soft-adaptive Threshold Spectral Enhancement
Electrocardiography (ECG) signals can be considered as multi-variable time-series. The state-of-the-art ECG data classification approaches, based on either feature engineering or deep learning techniques, treat separately spectral and time domains in machine learning systems. No spectral-time domain communication mechanism inside the classifier model can be found in current approaches, leading to difficulties in identifying complex ECG forms. In this paper, we proposed a novel deep learning model named Spectral Cross-domain neural network (SCDNN) with a new block called Soft-adaptive threshold spectral enhancement (SATSE), to simultaneously reveal the key information embedded in spectral and time domains inside the neural network. More precisely, the domain-cross information is captured by a general Convolutional neural network (CNN) backbone, and different information sources are merged by a self-adaptive mechanism to mine the connection between time and spectral domains. In SATSE, the knowledge from time and spectral domains is extracted via the Fast Fourier Transformation (FFT) with soft trainable thresholds in modified Sigmoid functions. The proposed SCDNN is tested with several classification tasks implemented on the public ECG databases _PTB-XL_ and _MIT-BIH_. SCDNN outperforms the state-of-the-art approaches with a low computational cost regarding a variety of metrics in all classification tasks on both databases, by finding appropriate domains from the infinite spectral mapping. The convergence of the trainable thresholds in the spectral domain is also numerically investigated in this paper. The robust performance of SCDNN provides a new perspective to exploit knowledge across deep learning models from time and spectral domains. The repository can be found: https://github.com/DL-WG/SCDNN-TS
Che Liu, Sibo Cheng, Weiping Ding, Rossella Arcucci
2023-01-10T14:23:43Z
http://arxiv.org/abs/2301.10171v2
# Spectral Cross-Domain Neural Network with Soft-adaptive Threshold Spectral Enhancement

###### Abstract

Electrocardiography (ECG) signals can be considered as multi-variable time-series. The state-of-the-art ECG data classification approaches, based on either feature engineering or deep learning techniques, treat separately spectral and time domains in machine learning systems. No spectral-time domain communication mechanism inside the classifier model can be found in current approaches, leading to difficulties in identifying complex ECG forms. In this paper, we proposed a novel deep learning model named Spectral Cross-domain neural network (SCDNN) with a new block called Soft-adaptive threshold spectral enhancement (SATSE), to simultaneously reveal the key information embedded in spectral and time domains inside the neural network. More precisely, the domain-cross information is captured by a general Convolutional neural network (CNN) backbone, and different information sources are merged by a self-adaptive mechanism to mine the connection between time and spectral domains. In SATSE, the knowledge from time and spectral domains is extracted via the Fast Fourier Transformation (FFT) with soft trainable thresholds in modified Sigmoid functions. The proposed SCDNN is tested with several classification tasks implemented on the public ECG databases _PTB-XL_ and _MIT-BIH_. SCDNN outperforms the state-of-the-art approaches with a low computational cost regarding a variety of metrics in all classification tasks on both databases, by finding appropriate domains from the infinite spectral mapping. The convergence of the trainable thresholds in the spectral domain is also numerically investigated in this paper. The robust performance of SCDNN provides a new perspective to exploit knowledge across deep learning models from time and spectral domains. The repository can be found: [https://github.com/DL-WG/SCDNN-TS](https://github.com/DL-WG/SCDNN-TS)

Deep Learning; Spectral Domain Neural Network; ECG Signal; Medical Time-Series; Cross-domain Learning

## I Introduction

Time Series (TS) has been widely studied in research fields such as physics, finance, medicine, and natural language processing [1]. Due to the time dependency [2], classifying TS is different from traditional classification tasks involving image or sequence classifications without time dependencies [3]. The performance of traditional classification tasks often relies on the number of samples with proper labels [4]. In fact, well-labelled TS is often out of reach in real applications, as stated in [2]. Moreover, the issue of imbalanced datasets commonly exists in medical TS data [4]. As pointed out by [3], another critical challenge of Time Series Classification (TSC) involves model generalization. It is found that different models are often required for distinct classification tasks even with the same input TS data [5]. The state-of-the-art TSC approaches are based either on handcrafted Feature Engineering (FE) [2, 6, 7, 8, 9] or on Deep Learning (DL) methods [10, 11, 12, 13, 14]. The former can be divided into three main categories, namely statistical feature extraction [15], entropy-based methods [16] and frequency-based methods [17]. After preprocessing [18], traditional machine learning classifiers such as the Gradient Boosting Decision Tree (GBDT), Support Vector Machine (SVM) and Random Forest (RF) are employed to accomplish the classification tasks using handcrafted features as model input.
However, the performance of FE-based methods is extremely sensitive to the quality of the feature extraction on a case-by-case basis [19]. Furthermore, distinct FE methods are often required for different TSC tasks even on the same dataset [20]. Thus, there is an insurmountable obstacle to extending specific FE methods to general classification tasks. Such challenges can be found in medical TSC, as stated in the work of [21, 22]. Electrocardiography (ECG) is a specific type of medical TS that describes the heartbeats of 12 different leads to detect various aspects of heart health [23]. In clinical applications, ECG data are widely used to diagnose cardiovascular diseases, such as myocardial infarction (MI), hypertrophy (HYP), and conduction disturbance (CD) [24]. A standard ECG TS consists of three waves, known as the P-wave, QRS-complex and T-wave respectively [25]. As recognized in numerous studies [26, 27, 28], various cardiovascular diseases impact these three waves, resulting in unrecognizable signals (also called repolarization in medical science). Thus, identifying these abnormal signals is crucial for clinical diagnosis. In fact, successful ECG classification methods not only improve the accuracy of cardiology diagnosis, but also enable the possibility of monitoring the state of human health using wearable devices. Classical FE-based TSC approaches extract features relying on the different waveforms of 12-lead ECG, mostly focusing on the P-QRS-T waves [25] and the RR-interval [29]. FE is carried out to extract statistical, energy-based and frequency-based features, which are used to classify ECG TS via traditional Machine Learning (ML) classifiers [6, 7, 8, 9]. In FE-based methods, the quality of the features, and thus the classification performance, is sensitive to the algorithms selected for feature computation [30]. Therefore, specific FE algorithms often need to be designed for different classification tasks. In recent years, much research attention has been given to applying DL approaches to ECG classification and, more generally, to TSC tasks. For example, Convolutional Neural Networks (CNNs) have been widely employed to classify 12-lead ECG signals [11, 12, 13, 14], improving the accuracy of disease diagnosis in comparison to traditional approaches. The works of [31, 32] have used Long Short-Term Memory (LSTM) [33], a variant of the Recurrent Neural Network (RNN), to classify ECG patterns with imbalanced data. A wavelet layer before the LSTM has been added in [34] to merge Spectral Domain (SD) knowledge into the Neural Network (NN). In their work, however, the spectral operation is performed only on the input ECG TS, instead of processing deep knowledge inside the NN. The recent works of [35, 36] have used multi-scale Deep Convolutional Neural Networks (DCNNs) with ensemble learning to detect heart arrhythmia from 12-lead ECG. MLFB-Net [37], a CNN concatenated with a bidirectional GRU [38] and an attention mechanism [39], has also been implemented for ECG classification, where each ECG lead is treated individually. In addition, the work of [40] has tried the Transformer structure with an attention mechanism to capture latent and deep knowledge from 12-lead ECG simultaneously. However, these works only consider the ECG signals in the time domain and neglect the valuable information embedded in the spectral domain. The knowledge representation of images in the SD was introduced in the work of [41]. Following this idea, [42, 43, 44, 45] have further explored SD representations of image data in CNNs.
Nevertheless, none of the aforementioned studies has established links between the Time Domain (TD) and the SD inside the NN. In other words, the spectral information is either processed outside the neural network [41, 42] or inside the NN but without connections to the TD. The very recent works of [46, 47] have extended SD knowledge to an infinite mapping space to approximate the proper space with the Fourier Neural Operator (FNO) for image prediction tasks. In [48], FNO has been utilized to mix multi-scale features in the SD to reach a more general representation of images. However, in these works, the dimension of the SD is fixed and only low-frequency information is kept in the neural network. Hence, it is extremely challenging to obtain the proper SD due to the pre-fixed threshold, and the high-frequency domain is ignored; the latter may lead to potential information loss caused by the hard thresholds [49]. As discussed, although much effort has been dedicated to time series classification involving spectral knowledge, the noted algorithms suffer from the following limitations and challenges: 1. Current approaches either process the spectral information outside the deep learning model or treat time and spectral domains separately in the neural network; no communication between time and spectral domains inside the neural network has been realized. 2. When filtering information in the spectral domain, a pre-selected number of spectral modes is employed, leading to potential information loss. To overcome these bottlenecks, we propose a novel neural network structure, named the Spectral Cross-domain Neural Network (SCDNN), in which the TD and the SD interact inside the NN. In addition, we have developed a new soft self-adaptive threshold determination mechanism, deployed in the Soft-adaptive Threshold Spectral Enhancement (SATSE) block of SCDNN. More precisely, this new mechanism enables SCDNN to find the optimal number of spectral modes instead of using pre-fixed thresholds as implemented in existing approaches. Furthermore, the information loss of spectral filtering can be decreased thanks to the controllable soft threshold. To verify our model's performance and versatility, four diverse ECG classification tasks are deployed. These tasks consist of cardiovascular disease and ECG form classifications on the public databases \(PTB-XL\) and \(MIT-BIH\) with different sub-datasets. The former, published in 2020, contains the most detailed ECG labels, while the latter is considered one of the largest open-access ECG datasets. SCDNN achieves substantial enhancement compared to existing TS classification models on almost all evaluated metrics. In particular, numerical results demonstrate that SCDNN helps to tackle the bottleneck of identifying ECG patterns with complex waveforms (without necessarily pathological symptoms) on the \(PTB-XL\) database. In summary, the principal contributions of our work are listed below: 1. We designed a novel neural network structure, named SCDNN, to establish communication between the spectral and time domains inside NNs. More precisely, Fast Fourier Transformation (FFT) and Inverse Fast Fourier Transformation (IFFT) are added to the ResNet backbone after each Res block to extract the information in the spectral domain. 2.
In the SATSE block of SCDNN, trainable soft thresholds in the spectral domain enable the proposed model to find optimal spectral modes and reduce the information loss incurred when using pre-fixed thresholds. 3. We have tested SCDNN on four different ECG classification tasks performed on the public ECG databases \(PTB-XL\) and \(MIT-BIH\). SCDNN outperforms the state-of-the-art works on all classification tasks with imbalanced data. The method proposed in this study is data-agnostic and can be applied or extended to other time-series classification and prediction problems. The rest of this paper is organized as follows. Section II describes the proposed SCDNN with a detailed explanation of information processing in the spectral domain. The numerical results of the proposed network, compared to state-of-the-art approaches on the \(PTB-XL\) and \(MIT-BIH\) databases, are presented in Sections III and IV, respectively. We end the paper with a conclusion in Section V, where potential future works are also mentioned.

## II SCDNN: methodology

In this section, we introduce the workflow of the proposed SCDNN with a special focus on the SATSE block. The proposed model is composed of a ResNet18 [50] backbone, four SATSE blocks, one adaptive average pooling layer [51], one adaptive max pooling layer [52] and one fully-connected layer, as displayed in Fig 1. The backbone includes 4 Res blocks (also known as residual blocks [50]). The structure of each block is depicted in Fig 2; all Res blocks used in this study have the same structure with different numbers of channels, following the standard ResNet design [50]. Each Res block is combined with one SATSE block to form a Convolutional Fourier (ConvF) component, as illustrated in Fig 2. The SATSE block enables the NN to learn from the spectral domain. In each ConvF block, the output of SATSE is added to the output of the Res block to avoid information loss from spectral learning [53], as illustrated in Fig 1. Two pooling layers are applied individually to the output of the last ConvF block for dimension reduction. The two pooled features are concatenated along the channel dimension, then flattened to a 1D vector before being passed to the fully-connected layer for classification.

### _Res Block_

Since trainability issues have been reported for very deep neural networks [54], ResNet was proposed by [50], where skip connections between convolutional layers are adopted to decrease information vanishing in the deep layers of neural networks. A light version of ResNet, namely ResNet18 [55], is chosen in this work as the backbone. Fig 2 depicts the layout of each Res block, consisting of four 1D convolutional layers with a common kernel size and stride as defined in [50]. The convolutional layers are employed to extract features across all channels. To enhance gradient stabilization, a \(BatchNorm\) layer [56] is deployed after each convolutional layer, followed by a \(ReLU\) [57] activation function to avoid gradient vanishing in deep neural networks [58].

### _SATSE_

In this section, we explain in detail the soft adaptive threshold and the spectral learning in the SD. The pipeline of SATSE is illustrated in Fig 3. First, FFT is deployed in SATSE for spectral knowledge conversion; the model is thus capable of capturing deep features in the TD and SD simultaneously. To choose the most appropriate spectral domain, we make use of trainable thresholds in each SATSE block to select the spectral modes.

Fig. 3: SATSE architecture.
The soft threshold mechanism is capable of avoiding information loss in mode selection, since the threshold value is obtained through network back-propagation. Additionally, as shown in Fig 3, a trainable weight matrix is added to establish connections among different spectral modes. Therefore, the spectral modes in different SATSE blocks are determined mutually.

Fig. 1: Workflow of SCDNN, where \(\lambda_{L}\) and \(\lambda_{H}\) denote the coefficients corresponding to low-frequency and high-frequency information.

Fig. 2: Res Block architecture.

For the \(i^{th}\) (\(i=1,...,4\)) SATSE block in SCDNN, the output of the Res block is denoted as \(\{f_{i,k}^{(j)}\}\), where \(\{j,k\}\in\{0,...,L_{i}-1\}\times\{0,...,C_{i}-1\}\). Here \(C_{i}\) and \(L_{i}\) are the number of channels and the signal length, respectively. Let \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the FFT and IFFT on discrete signals. The SD features \(f_{i,k}^{S,(j)}\) of each SATSE block and each channel are computed via the discrete Fourier transform,

\[f_{i,k}^{S,(j)}=\sum_{n=0}^{L_{i}-1}e^{-\hat{i}\frac{2\pi}{L_{i}}nj}f_{i,k}^{(n)}, \tag{1}\]

\[\mathcal{F}(f_{i,k})=[f_{i,k}^{S,(j)}]_{j=0,..,L_{i}-1}. \tag{2}\]

For the sake of clarity, \(\hat{i}\) denotes the imaginary unit in \(e^{-\hat{i}\frac{2\pi}{L_{i}}}\), while the index \(i\) (e.g., in \(f_{i,k}\)) refers to the order of the SATSE block. To compute the soft trainable thresholds, modified sigmoid functions in the high- and low-frequency domains are defined respectively as:

\[\sigma^{L}(x)=1-\frac{1}{1+e^{\gamma_{i}(-x+\varphi_{i}\cdot L_{i})}}, \tag{3}\]

\[\sigma^{H}(x)=\frac{1}{1+e^{\gamma_{i}(-x+\varphi_{i}\cdot L_{i})}}, \tag{4}\]

where \(\varphi_{i}\) denotes the trainable threshold ratio and \(\gamma_{i}\) controls the slope of the modified sigmoid function. The dual sigmoid functions \(\sigma^{L}\) and \(\sigma^{H}\) enable the SATSE block to capture knowledge in both the low- and high-frequency domains. The use of soft thresholds in Eqs (3) and (4) enables NN back-propagation, in contrast to hard threshold functions, i.e.,

\[\sigma^{L}_{\text{hard}}(x)=\mathbb{1}_{x\leq\varphi_{i}L_{i}}, \tag{5}\]

\[\sigma^{H}_{\text{hard}}(x)=\mathbb{1}_{x>\varphi_{i}L_{i}}. \tag{6}\]

We then obtain the filtered high- and low-frequency elements in the spectral domain via

\[\widetilde{f_{i,k}^{S^{H}}}=\sigma^{H}(k)f_{i,k}^{S},\qquad\widetilde{f_{i,k}^{S^{L}}}=\sigma^{L}(k)f_{i,k}^{S}. \tag{7}\]

By definition, in Eqs (3) and (4), when the value of \(\gamma\) (i.e., the slope of the sigmoid functions) is large, \(\sigma^{L}(x)\approx\sigma^{L}_{\text{hard}}(x)\) and \(\sigma^{H}(x)\approx\sigma^{H}_{\text{hard}}(x)\). Therefore, symmetric results can be obtained for \(\varphi=\lambda\) and \(\varphi=1-\lambda\), \(\forall\lambda\in[0,1]\), by simply reversing the roles of \(\sigma^{L}\) and \(\sigma^{H}\). Thus the initial value of \(\varphi\) is set to be smaller than 0.5 in training.
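A minimal PyTorch sketch of the FFT and soft-mask filtering steps in Eqs (1)-(7) follows. The feature shapes, the initial \(\varphi\) and \(\gamma\) values, and the helper name are illustrative assumptions of ours:

```python
import torch

def satse_masks(L, phi, gamma):
    """Soft spectral masks of Eqs. (3)-(4): modified sigmoids over the
    mode index, with trainable threshold ratio phi and slope gamma."""
    k = torch.arange(L, dtype=torch.float)
    sig_h = torch.sigmoid(gamma * (k - phi * L))   # high-frequency gate
    return 1.0 - sig_h, sig_h                      # (sigma_L, sigma_H)

# features f from a Res block, shape (channels, L); values illustrative
f = torch.randn(4, 64)
phi = torch.tensor(0.4, requires_grad=True)        # trainable threshold ratio
gamma = torch.tensor(0.5)                          # slope
sig_l, sig_h = satse_masks(f.shape[-1], phi, gamma)
f_spec = torch.fft.fft(f, dim=-1)                  # Eqs. (1)-(2): FFT per channel
f_low, f_high = sig_l * f_spec, sig_h * f_spec     # Eq. (7): soft mode selection
```

Because the masks are smooth in `phi`, gradients flow back to the threshold during training, which is the point of replacing the hard indicator functions of Eqs (5)-(6).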
To enable the communication of cross-domain and cross-block information, trainable weight matrices \(W_{i,k}\in\mathbb{C}^{4,C_{i}}\) for the different SATSE blocks and channels are employed in the inverse Fourier transformation,

\[\widetilde{f_{i,k}^{S^{H}}}=W_{i,k}\odot\widetilde{f_{i,k}^{S^{H}}},\qquad\widetilde{f_{i,k}^{S^{L}}}=W_{i,k}\odot\widetilde{f_{i,k}^{S^{L}}}, \tag{8}\]

\[f_{i,k}^{H^{\prime}}=\mathcal{F}^{-1}(\widetilde{f_{i,k}^{S^{H}}})=\frac{1}{L_{i}}\sum_{n=0}^{L_{i}-1}e^{\hat{i}\frac{2\pi}{L_{i}}kn}\widetilde{f_{i,n}^{S^{H}}}, \tag{9}\]

\[f_{i,k}^{L^{\prime}}=\mathcal{F}^{-1}(\widetilde{f_{i,k}^{S^{L}}})=\frac{1}{L_{i}}\sum_{n=0}^{L_{i}-1}e^{\hat{i}\frac{2\pi}{L_{i}}kn}\widetilde{f_{i,n}^{S^{L}}}, \tag{10}\]

where \(\{f_{i,k}^{H^{\prime}}\}\) and \(\{f_{i,k}^{L^{\prime}}\}\) (\(i=1,...,4\), \(k=0,...,L_{i}-1\)) represent the inverse Fourier sequences in the time domain. The element-wise multiplication is denoted as \(\odot\) in Eq (8), enabling cross-spectral-domain communication. Finally, the low- and high-frequency domain knowledge is converted to the time domain through the IFFT separately, following Eqs (9) and (10). We then obtain the output of the SATSE block \(O_{i}^{\text{SATSE}}\) by adding the original time domain component \(\{f_{i,k}\}\), weighted by two real trainable parameters \(\lambda_{L}\) and \(\lambda_{H}\):

\[O_{i,k}^{SATSE}=f_{i,k}+\lambda_{L}f_{i,k}^{L^{\prime}}+\lambda_{H}f_{i,k}^{H^{\prime}},\qquad O_{i}^{SATSE}=[O_{i,k}^{SATSE}]_{k\in\{0,...,C_{i}-1\}}. \tag{11}\]

Instead of a pre-defined threshold (e.g., [46]), the trainable soft thresholds \(\varphi_{i}\) in the SATSE block avoid the information loss caused by prior assumptions in SCDNN. The training process of SCDNN is summarized in Algorithm 1.

```
1: Initialize model parameters
2: Set the number of epochs E, learning rate η, batch size m, and number of sequential Res blocks r
3: for k = 0 to E do
4:   Load ECG signals and labels with batch size m
5:   for i = 0 to r do
6:     Forward propagation through ResBlock_i
7:     Convert the output of ResBlock_i to the SD via FFT (cf. Eq (1))
8:     Feed the SD features to the SATSE_i block
9:     Filter the SD features individually (cf. Eq (7))
10:    Share information mutually in the SD (cf. Eq (8))
11:    Convert the features from the SD to the TD via IFFT (cf. Eqs (9) and (10))
12:    Obtain the output of SATSE_i (cf. Eq (11))
13:  end for
14:  Apply adaptive average and max pooling, then concatenate
15:  Flatten the pooled features
16:  Apply softmax to smooth the output probabilities
17:  Compute the model loss
18:  Optimize the model parameters with the loss and η via the Adam optimizer
19: end for
```
**Algorithm 1** Training of SCDNN

## III Experiments on PTB-XL database

In this section, three different classification tasks using the same ECG dataset [59] are tested to verify the model performance. Moreover, the comparison against the state-of-the-art approaches and the ablation study are also explained.

### _Database Description_

The novel network SCDNN is tested on the public ECG dataset \(PTB-XL\) [59], which includes 21837 12-lead ECG signals from 18885 patients. All data are downsampled to 100 Hz for a signal duration of 10 seconds. Four types of labels can be found in this dataset, namely Form, Rhythm, Super-class diseases and Sub-class diseases. To evaluate the model robustness, three classification tasks are assigned, corresponding to the Form, Super-class and Sub-class labels respectively. Classification of Rhythm labels is not performed due to the extremely imbalanced classes [59].
In each of the three classification tasks, all ECG signals are single-labelled. The definition and the classification task for each of the three types of labels are listed here:

* **Super-class**: classify the raw ECG signals into \(4\) types of diseases, namely CD, HYP, STTC and MI, and a Norm class (no disease).
* **Sub-class**: classify the raw ECG signals into \(22\) specific diseases (each belonging to one of the Super-classes) and a Norm class.
* **Form**: classify the raw ECG signals into \(19\) different forms, which are not explicitly related to disease detection. These forms include non-diagnostic T abnormalities (NDT), ventricular premature complex (PVC) [59], etc.

The benchmark of these three tasks using state-of-the-art TS classification methods on well-separated training, validation and test datasets is provided in [60]. In this paper, we make use of the same settings to implement SCDNN. The numbers of ECG signals in the training, test and validation datasets are displayed in Table I. The classification of the Super-class and Sub-class labels relies on the same data separation. As depicted in Fig 4, significant label imbalance problems can be observed in all three tasks, mainly due to the large number of samples with the Norm (i.e., no disease) label.

### _Implementation_

#### III-B1 Model Input

The training set is used to train the model, and the validation set is employed to validate the model after each epoch. For all three classification tasks, the model inputs consist of 12-lead ECG signals without extra preprocessing, while the outputs include the probability of each label. The class with the highest probability is selected as the final prediction after a _softmax_ layer.

#### III-B2 Training Parameters Setting

In all three tasks, the number of epochs, batch size, learning rate and weight decay rate are fixed at 100, 256, 1e-3 and 2e-5, respectively. For a fair comparison with the benchmark [60], _CrossEntropyLoss_ [61] and Adam [62] are selected as the loss function and the optimizer for training. In the SATSE blocks, the initial values of \(\varphi_{i}\) and \(\gamma_{i}\) are set to \(0.4\) and \(0.5\) in each block, while \(\lambda_{L}\) and \(\lambda_{H}\) are initialized to \(0\). These values are updated through network back-propagation.

### _Evaluation and Comparison_

In this paper, we adopt _Precision_, _Recall_, _F1 score_ and _Accuracy_ as metrics to compare our model's performance with the state-of-the-art approaches [35]. These metrics are commonly used in evaluating the performance of classification problems, especially for imbalanced data [63]. Since all three tasks are multi-class classifications, the metrics must be averaged with a proper scheme [64]. The macro averaging method is used here because of the label imbalance, as suggested by [65]. The Area under Curve (AUC) is also widely employed for evaluating classification performance [66, 67]. However, since the AUC is not sensitive to imbalanced data [60], this metric is not appropriate for quantifying the test results in this study. To compare the performance of SCDNN, eight existing DL models (as listed in the first columns of Tables II, III and IV) used in the benchmark paper [60] are adopted for the three classification tasks. In addition, the standard ResNet18 is tested on the three tasks to evaluate the enhancement brought by SCDNN. All models are trained with the same parameters mentioned in Section III-B2, and their performance is evaluated on the same test dataset.
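For reference, the macro-averaged metrics above can be computed directly with scikit-learn; a minimal sketch in which toy label arrays stand in for the test-set ground truth and model predictions:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# Toy labels standing in for test-set ground truth and model predictions
y_true = np.array([0, 0, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 2, 2, 0])

# Macro averaging weights every class equally, which is why it is preferred
# over micro averaging on imbalanced label distributions such as PTB-XL's.
print("Macro Precision:", precision_score(y_true, y_pred, average="macro"))
print("Macro Recall:   ", recall_score(y_true, y_pred, average="macro"))
print("Macro F1:       ", f1_score(y_true, y_pred, average="macro"))
print("Accuracy:       ", accuracy_score(y_true, y_pred))
```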
The numerical results of SCDNN against the state-of-the-art approaches for each of the three tasks are displayed in Tables II, III and IV, respectively. In all three tasks, SCDNN reaches the highest scores of _Macro Recall_, _Macro Precision_, _F1_ and _Accuracy_, which confirms the robustness of the proposed approach.

\begin{table} \begin{tabular}{l c c c c c} \hline Task & Total & Training & Test & Validation & No. classes \\ \hline Super-class & 21430 & 17111 & 2163 & 2156 & 5 \\ Sub-class & 21430 & 17111 & 2163 & 2156 & 23 \\ Form & 8988 & 7201 & 882 & 905 & 19 \\ \hline \end{tabular} \end{table} TABLE I: Number of samples of the different classification tasks in _PTB-XL_ 2020

Fig. 3: SATSE Architecture

What can be clearly observed in Table II is that SCDNN significantly surpasses all baseline works on all metrics in the Super-class classification, demonstrating the significant advantage of SCDNN on slightly imbalanced classification tasks. In particular, thanks to the extra spectral information captured by SATSE, SCDNN outperforms the other residual-based models, namely _resnet1d_wang_, _inception1d_ and _xresnet1d101_. The Sub-class classification is extremely difficult due to the highly imbalanced labels. As illustrated by the last column of Table III, SCDNN still achieves the best _Accuracy_ score of \(71\%\) among the compared methods. On the other hand, as shown by Fig 4, for some sub-classes there is not sufficient data to learn from in the training set. Thus, the scores of the other three metrics (i.e., _Macro Precision_, _Macro Recall_ and _Macro F1_) drop significantly compared to the Super-class classification (Table II). Nevertheless, SCDNN still achieves the best results on _Macro Recall_, _Macro Precision_ and _Macro F1_, meaning that this model reaches the best performance in terms of identifying ill individuals (i.e., true positives) with the lowest rate of mis-specification (false positives) among the baseline works. We also perform the Form classification for ECG signals as suggested in the benchmark paper [60]. This task consists of classifying ECG TS into different pre-defined forms, such as abnormal QRS (ABQRS), ventricular premature complex (PVC) or low-amplitude T-waves (LOWT) [59]. As shown in Table IV, SCDNN significantly improves the state-of-the-art results in terms of all four metrics, with advantages of \(25\%\), \(17\%\), \(19\%\) and \(9\%\), respectively. In fact, the information embedded in the deep frequency domain can be well captured by SCDNN, which enhances, in particular, the recognition of complex waveforms. On the other hand, the listed existing approaches do not take the deep frequency domain into account. As a consequence, the trained classification models can potentially be misled by complex ECG signals consisting of many unrecognized waveforms. As pointed out by [73], some subclinical cardiac diseases can lead to chaotic waveforms other than P-QRS-T. These complex waveforms, difficult to identify in the TS, can potentially be better recognized using deep SD features. The substantial improvements on _Macro Precision_ and _Macro Recall_ clearly demonstrate the impact of frequency-domain features in reducing overfitting.
In summary, the numerical results demonstrate that SCDNN outperforms the benchmark approaches in terms of both disease identification and form recognition for ECG signals.

\begin{table} \begin{tabular}{l l l l l} \hline Methods & Precision(M) & Recall(M) & F1(M) & _Accuracy_ \\ \hline resnet1d\_wang [68] & 0.70 & 0.62 & 0.63 & 0.72 \\ inception1d [69] & 0.68 & 0.60 & 0.60 & 0.70 \\ xresnet1d101 [60] & 0.70 & 0.62 & 0.62 & 0.72 \\ LSTM\_bidir [70] & 0.68 & 0.61 & 0.62 & 0.71 \\ fcn\_wang [68] & 0.71 & 0.62 & 0.63 & 0.72 \\ LSTM [33] & 0.70 & 0.62 & 0.63 & 0.72 \\ Wavelet+NN [71] & 0.58 & 0.51 & 0.52 & 0.63 \\ naive bayes [72] & 0.08 & 0.20 & 0.12 & 0.42 \\ \hline Resnet18 [50] & 0.67 & 0.62 & 0.64 & 0.72 \\ **Proposed SCDNN** & **0.72** & **0.67** & **0.69** & **0.75** \\ \hline \end{tabular} \end{table} TABLE II: Super-class results on _PTB-XL_ database

Fig. 4: Label distribution in the three classification tasks

\begin{table} \begin{tabular}{l l l l l} \hline Methods & Precision(M) & Recall(M) & F1(M) & _Accuracy_ \\ \hline resnet1d\_wang & 0.36 & 0.33 & 0.3 & 0.68 \\ inception1d & 0.37 & 0.34 & 0.34 & 0.68 \\ xresnet1d101 & 0.38 & 0.36 & 0.35 & 0.68 \\ LSTM\_bidir & 0.37 & 0.33 & 0.33 & 0.67 \\ fcn\_wang & 0.45 & 0.32 & 0.33 & 0.68 \\ LSTM & 0.40 & 0.34 & 0.33 & 0.68 \\ Wavelet+NN & 0.25 & 0.21 & 0.21 & 0.57 \\ naive bayes & 0.02 & 0.05 & 0.03 & 0.42 \\ \hline Resnet18 & 0.35 & 0.28 & 0.29 & 0.68 \\ **Proposed SCDNN** & **0.51** & **0.40** & **0.43** & **0.71** \\ \hline \end{tabular} \end{table} TABLE III: Sub-class results on _PTB-XL_

To further inspect the impact of the novel SATSE block in SCDNN, the three classification tasks are also performed using the standard \(ResNet18\) as an ablation study. Compared to the original \(ResNet18\), the advantage of SCDNN can be observed on all metrics in Tables II to IV, especially for the Sub-class and Form classifications, which are considered more difficult. A variety of initial values of \(\varphi_{i}\) (\(i=1,...,4\)) have also been tested for the Super-class classification task to ensure the model's robustness. The initial values are set to \(\varphi_{i}^{(0)}\in\{0.1,0.2,0.3,0.4,0.5\}\) and the total number of epochs is set to 100. For the three classification tasks, the obtained results on the four metrics are shown in Fig. 5. The solid blue lines represent the averaged score of SCDNN over the different initial values of \(\varphi_{i}^{(0)}\), while the transparent zones refer to the standard deviation. The red curves illustrate the average of all benchmark approaches (except naive bayes) displayed in Tables II to IV, where the red transparent zones are the associated standard deviations. In addition, the best score among the existing approaches (for each metric) and the results of ResNet18 (for the ablation experiment) are drawn as purple and green dashed lines, respectively. From Fig. 5, one can clearly observe the significant advantage of SCDNN compared to the state-of-the-art approaches, which is consistent with our analysis of Tables II to IV. Furthermore, the proposed SCDNN shows strong robustness by surpassing the highest score of the existing approaches (purple dashed line) with almost all tested initial values. The standard deviation of SCDNN with respect to the values of \(\varphi_{i}^{(0)}\) is also relatively small. We have also displayed these metrics of _fcn_wang_, _xresnet1d101_ and SCDNN for each class in all three classification tasks, as shown in Fig. 6.
To further inspect the algorithm's performance in relation to the size of each class, we order the x-axis by decreasing number of samples per class in the test dataset. Overall, a consistent advantage of the proposed SCDNN compared to the other two approaches can be observed. In particular, _fcn_wang_ and _xresnet1d101_ reach a high precision on classes with small numbers of samples in the Super-class (Fig 6 (a)) and Form (Fig 6 (b)) tasks. However, the associated _Recall_ and _F1_ scores drop substantially for the same classes, implying considerable overfitting. On the other hand, the three scores of SCDNN remain consistent in all three tasks, showing the good generalisation of SCDNN. The evolution of the model parameters against training epochs is illustrated in Fig 7. Regardless of the initial values, convergence can be observed for \(\varphi_{1},\varphi_{2}\) and \(\varphi_{3}\) in Fig 7(a-c). On the other hand, \(\varphi_{4}\) converges numerically to 0 or 1 for different initial values, as shown in Fig 7 (d). In fact, as explained in Section II-B, \(\varphi_{i}\to 0\) and \(\varphi_{i}\to 1\) lead to the same numerical outputs for the neural network, obtained by simply reversing the roles of \(\sigma^{L}\) and \(\sigma^{H}\). We also display the evolution of \(\lambda^{H}\) and \(\lambda^{L}\) accordingly in Fig 7(e,f). Regardless of the initial values of \(\varphi_{i}\), the evolution of \(\lambda^{H}\) and \(\lambda^{L}\) remains numerically stable. These results underline the robustness of the proposed SCDNN over a large range of initial parameters.

Fig. 5: The score of SCDNN on the four metrics against the benchmark approaches. The transparent zones refer to the standard deviation regarding different initial thresholds for SCDNN and different approaches for the Mean Benchmark.

### _Computational Efficiency_

For a fair comparison, we also illustrate the computational efficiency of the different CNN-based approaches in Fig 8 in terms of giga floating-point operations (GFLOPs) and the number of model parameters (in millions), evaluated on the _PTB-XL_ database. GFLOPs counts the number of floating-point operations required by the network, thus representing the complexity of the neural network. Both the GFLOPs (blue columns) and the number of parameters (red columns) show that our proposed SCDNN has a relatively low computational cost, especially when compared with other ResNet-based approaches such as _xresnet1d101_ and _resnet1d_wang_. In particular, regarding the performance of _xresnet1d101_ on all three classification tasks (e.g., \(F_{1}\) as shown in Fig 8), the strength of time-spectral domain communication in SCDNN cannot be matched simply by employing a deeper neural network. Among all studied models, _fcn_wang_ is the most computationally efficient thanks to its simplified structure. The low computational cost highlights the possibility of extending SCDNN to high-dimensional problems, for instance, images or videos. To summarise, SCDNN offers considerable improvement in comparison to existing approaches on various tasks, especially for imbalanced time-series datasets, with a low computational cost. Furthermore, the ablation experiment highlights the contribution of the proposed SATSE block with soft trainable thresholds in the spectral domain.

## IV Experiment on MIT-BIH database

To further ensure the validity of SCDNN, we have also evaluated its performance on the MIT-BIH database [74], which includes 48 recordings of two-lead ECG signals [75].
Here we use the preprocessed data1, in which these ECG signals are split into 109444 heartbeat samples classified into five groups, namely, normal beat (N), unclassifiable beat (Q), supraventricular premature beat (S), premature ventricular contraction (V) and fusion of ventricular beat (F) [74]. The data are split into a training and a test dataset of 87553 and 21891 samples, respectively; the number of samples per label is shown in Fig 9. Similar to the _PTB-XL_ database, a label imbalance issue can be noticed here due to the dominance of the Normal label.

Footnote 1: [https://www.kaggle.com/datasets/mondejar/mithh-database](https://www.kaggle.com/datasets/mondejar/mithh-database)

We display the performance of SCDNN against existing approaches (those that have been evaluated on the MIT-BIH database) in terms of _Average precision_, _Average recall_, _Average F1_ and _Accuracy_ in Table V. The _Average recall_ and the _Average precision_ are also referred to as sensitivity and specificity in some references (e.g., [76]). As pointed out by [77], it is worth mentioning that different preprocessing strategies have been chosen for these methods, as shown in the second column of Table V. All methods presented in Table V achieve a high _Accuracy_ score, topped by SCDNN and CNN-LSTM [78]. An advantage of SCDNN can also be found in the other three metrics compared to the works of [76] and [79]. The outstanding performance of SCDNN on the MIT-BIH database further demonstrates its robustness in TS classification tasks.

## V Discussion

On both the _PTB-XL_ and _MIT-BIH_ databases, SCDNN exhibits significantly better performance compared to the benchmark approaches for ECG classification. More importantly, our exploratory work brings new insight into the communication between the time and spectral domains inside the neural network through trainable coefficients. In summary, the strengths of the model are highlighted by the following observations:

1. **Better prediction with low computational cost:** SCDNN outperforms all existing approaches on both databases without adding a considerable number of layers or parameters. This demonstrates that the spectral domain information is difficult to capture by simply constructing deeper neural networks.
2. **Ablation experiment:** The proposed SCDNN considerably outperforms the model backbone (i.e., _Resnet18_), which demonstrates the impact of spectral domain processing in SCDNN, namely the SATSE blocks.
3. **Robustness regarding initial values:** All classification metrics vary little with the initial parameter settings in the SATSE blocks, and the trainable parameters converge to almost the same values during training.

As pioneering work, we did not consider the attention mechanism, which is now commonly employed in deep learning models. In fact, implementing extra attention on the Fourier modes in the spectral domain can potentially improve the information filtering. Along this line, only the FFT and IFFT are investigated in this work. Other spectral-time transformation operators could also be integrated into SCDNN.

## VI Conclusion and future work

ECG time series classification is crucial in cardiovascular disease diagnosis, while current methods struggle to detect the correct patterns of ECG signals [83, 84].

Fig. 6: The (Macro) _Precision_, _Recall_ and \(F_{1}\) for all three tasks on _PTB-XL_ for each class. The x-axis represents the number of samples in each class.
In this paper, we proposed a novel neural network named the Spectral Cross-Domain Neural Network to combine the information embedded in the time and spectral domains. For this purpose, the self-adaptive threshold spectral enhancement (SATSE) block is developed in this work, where trainable thresholds are employed to filter the information in the high- and low-frequency domains, respectively. The new model is tested on two different ECG databases with four different classification tasks in total. Compared to the state-of-the-art approaches, our model shows significant advantages across different metrics on all the classification tasks considered. These results highlight the potential of SD knowledge for reducing information loss between convolutional layers. Furthermore, the trainable thresholds enable the block to choose the proper frequency modes in the SD. The contribution of SATSE has also been numerically demonstrated by performing ablation experiments on SCDNN. The current version of SCDNN treats all channels in the time and spectral domains equivalently. The potential of cross-domain attention is worth investigating in upcoming work. Furthermore, various methods for exploiting SD features could be attempted, such as the Fractional Fourier transform [85] and the Wavelet transform [86]. We are also considering further examination of the convergent values of the soft thresholds, for instance, regarding the position/depth of the current layer in the neural network. The model proposed in this paper is general for time-series data. Applications of SCDNN and SATSE in other research fields, including computer vision, natural language processing, speech signals and physics systems, will be further investigated.

\begin{table} \begin{tabular}{l l l l l l} \hline Methods & Preprocessing & Average Precision & Average Recall & Average F1 & _Accuracy_ \\ \hline & Daubechies wavelet, Z-score & 0.91 & 0.97 & / & 0.94 \\ 9-layer CNN [79] & Balanced sampling & / & / & / & 0.93 \\ 11-layer CNN [14] & Balanced sampling & / & / & / & 0.97 \\ Modified U-net [80] & Z-score & / & / & / & 0.97 \\ CNN-LSTM [78] & Standardization (0-1) & / & / & / & **0.99** \\ Multi-Perspective CNN [81] & Symbolic Baseline Corrected approximation (SBCX) & / & / & / & 0.96 \\ CNN-LSTM with multiple input layers [82] & & / & / & / & 0.94 \\ Resnet33-CBAM [76] & Balanced sampling & **0.99** & 0.97 & 0.97 & 0.98 \\ \hline **Proposed SCDNN** & Raw ECG & **0.99** & **0.99** & **0.99** & **0.99** \\ \hline \end{tabular} \end{table} TABLE V: Classification results on the test set of the MIT-BIH database

Fig. 8: GFLOPs and number of parameters of different approaches against the \(F_{1}\) score, where the exact values are shown in Tables II-IV

Fig. 7: Evolution of \(\varphi_{i}\) (\(i=1,...,4\)), \(\lambda^{H}\) and \(\lambda^{L}\) against training epochs with different initial values of \(\varphi_{i}\)

Fig. 9: Label distribution in the MIT-BIH database
## Acronyms

\begin{tabular}{l l} **NN** & Neural Network \\ **AUC** & Area under Curve \\ **ConvF** & Convolutional Fourier \\ **ML** & Machine Learning \\ **TS** & Time Series \\ **TSC** & Time Series Classification \\ **FFT** & Fast Fourier Transformation \\ **SD** & Spectral Domain \\ **FNO** & Fourier Neural Operator \\ **TD** & Time Domain \\ **SCDNN** & Spectral Cross-domain Neural Network \\ **RNN** & Recurrent Neural Network \\ **IFFT** & Inverse Fast Fourier Transformation \\ **CNN** & Convolutional Neural Network \\ **LSTM** & Long Short-term Memory \\ **DL** & Deep Learning \\ **SATSE** & Self-adaptive Threshold Spectral Enhancement \\ **ECG** & Electrocardiography \\ **FE** & Feature Engineering \\ **GBDT** & Gradient Boosting Decision Tree \\ **SVM** & Support Vector Machine \\ **RF** & Random Forest \\ **DCNN** & Deep Convolutional Neural Network \\ \end{tabular}
2302.06243
An Order-Invariant and Interpretable Hierarchical Dilated Convolution Neural Network for Chemical Fault Detection and Diagnosis
Fault detection and diagnosis is significant for reducing maintenance costs and improving health and safety in chemical processes. The convolution neural network (CNN) is a popular deep learning algorithm with many successful applications in chemical fault detection and diagnosis tasks. However, convolution layers in CNN are very sensitive to the order of features, which can lead to instability in the processing of tabular data. An optimal order of features results in better performance of CNN models, but it is expensive to seek such an optimal order. In addition, because of the encapsulation mechanism of feature extraction, most CNN models are opaque and have poor interpretability, thus failing to identify root-cause features without human supervision. These difficulties inevitably limit the performance and credibility of CNN methods. In this paper, we propose an order-invariant and interpretable hierarchical dilated convolution neural network (HDLCNN), which is composed of feature clustering, dilated convolution and the Shapley additive explanations (SHAP) method. The novelty of HDLCNN lies in its capability of processing tabular data with features of arbitrary order without seeking the optimal order, due to the ability of feature clustering to agglomerate correlated features and the large receptive field of dilated convolution. The proposed method then provides interpretability by including the SHAP values to quantify feature contribution. Therefore, the root-cause features can be identified as the features with the highest contribution. Computational experiments are conducted on the Tennessee Eastman chemical process benchmark dataset. Compared with the other methods, the proposed HDLCNN-SHAP method achieves better performance on processing tabular data with features of arbitrary order, detecting faults, and identifying the root-cause features.
Mengxuan Li, Peng Peng, Min Wang, Hongwei Wang
2023-02-13T10:28:41Z
http://arxiv.org/abs/2302.06243v1
An Order-Invariant and Interpretable Hierarchical Dilated Convolution Neural Network for Chemical Fault Detection and Diagnosis

###### Abstract

Fault detection and diagnosis is significant for reducing maintenance costs and improving health and safety in chemical processes. The convolution neural network (CNN) is a popular deep learning algorithm with many successful applications in chemical fault detection and diagnosis tasks. However, convolution layers in CNN are very sensitive to the order of features, which can lead to instability in the processing of tabular data. An optimal order of features results in better performance of CNN models, but it is expensive to seek such an optimal order. In addition, because of the encapsulation mechanism of feature extraction, most CNN models are opaque and have poor interpretability, thus failing to identify root-cause features without human supervision. These difficulties inevitably limit the performance and credibility of CNN methods. In this paper, we propose an order-invariant and interpretable hierarchical dilated convolution neural network (HDLCNN), which is composed of feature clustering, dilated convolution and the Shapley additive explanations (SHAP) method. The novelty of HDLCNN lies in its capability of processing tabular data with features of arbitrary order without seeking the optimal order, due to the ability of feature clustering to agglomerate correlated features and the large receptive field of dilated convolution. The proposed method then provides interpretability by including the SHAP values to quantify feature contribution. Therefore, the root-cause features can be identified as the features with the highest contribution. Computational experiments are conducted on the Tennessee Eastman chemical process benchmark dataset. Compared with the other methods, the proposed HDLCNN-SHAP method achieves better performance on processing tabular data with features of arbitrary order, detecting faults, and identifying the root-cause features.

_Note to Practitioners--_This paper was motivated by the problem of processing multiple variables and identifying the root-cause features for fault detection and diagnosis in real chemical processes. In this case, the order of features affects the fault detection performance, and the precise root-cause feature must be identified to avoid repeated faults. This paper presents a novel order-invariant and interpretable framework for fault detection and root cause analysis in real chemical processes that can process data with features of arbitrary order, thus reducing the burden on users. It utilizes the collected historical data for training and requires no human supervision. Newly collected data is automatically classified into a normal or fault type. Once a fault happens, the corresponding root-cause feature is identified without any prior knowledge. In our future work, we plan to focus on faults with multiple root-cause features. In addition, incomplete datasets and simultaneous-fault diagnosis are worthy of investigation.

Fault Diagnosis, Deep Learning, Dilated Convolution Neural Network, Interpretability

## I Introduction

With the advent of Industry 4.0, chemical processes have become more intelligent and automated. This trend has raised the urgent need to detect anomalies and diagnose faults efficiently and correctly.
Chemical faults result in chemical contamination, potential explosion, and other serious chemical hazards, and thus intelligent fault diagnosis methods are required to find the underlying causes of the faults. Current fault detection and diagnosis methods can be classified into two categories: model-based and data-based. Model-based methods are less accurate and more complex, as they depend on modeling complex physical and chemical processes. Therefore, data-based methods have become increasingly popular recently. Among these methods, deep learning methods have been widely used and achieved electrifying performance in fault detection and diagnosis problems [1]. In particular, the convolution neural network (CNN) is one of the most representative deep learning architectures based on convolution calculations. The main advantage of CNN is that it automatically detects the important features without any human supervision, and it can easily process high-dimensional data by sharing convolution kernels. Currently, some work has been done to show the potential of utilizing CNN to detect faults in chemical processes. For example, Wang _et al._ proposed a feature fusion fault diagnosis method using a normalized CNN for complex chemical processes [2]. Huang _et al._ introduced a novel fault diagnosis method that consists of sliding window processing and a CNN model [3]. However, applying the current CNN models in real chemical processes is still challenging. On the one hand, CNN models rely on the convolution operation to extract information within each size-fixed convolution kernel, with the result that only the information among adjacent features can be extracted. Different from images, the data of chemical processes involves multiple variables through the time domain, which is considered tabular data. Thus the order of these variables determines the information extracted by the kernels, making convolution layers unstable when processing tabular data. On the other hand, the existing CNN methods only provide classification results, while the analysis of the root-cause features is lacking. Because of the encapsulation mechanism of feature extraction, most CNN models are opaque without any knowledge of the internal working principles, so users have no information on feature contribution to the prediction. Without analyzing the underlying root-cause features, the same faults will repeat and result in serious consequences. In this paper, we propose an order-invariant and interpretable hierarchical dilated convolution neural network (HDLCNN) composed of feature clustering, dilated convolution and the Shapley additive explanations (SHAP) method to process tabular data with features of arbitrary order and obtain credible root-cause features. Dilated convolution, a variant of CNN that expands the kernel by inserting holes between the kernel elements, is utilized in our method [4]. It is adopted to increase the receptive field size without increasing the number of model parameters. Since the receptive field is the input region that determines a unit in a given layer of the network, dilated convolution with a larger receptive field can extract global features, thus weakening the impact of the feature order. To further eliminate the effects of the order, we utilize a feature clustering method to agglomerate highly correlated features before the convolution layers.
A hierarchical clustering method is applied since it builds a hierarchy of clusters and is not affected by the input order of the data. In addition, a major difficulty in applying CNN methods in real chemical processes is obtaining credible fault detection results and exact root-cause features. This means that interpretability, the degree to which a human can understand the model's result, is vital for humans to trust the decisions made by complex models. To solve this problem, we apply the SHAP method to interpret the complex black box model, which is a method to explain individual predictions based on the game-theoretically optimal Shapley values [5]. Compared with other interpretability methods, it has the advantages of a solid theoretical foundation in game theory and intuitive visualization based on the original data. Also, it is model-agnostic while providing both local and global interpretability. Therefore, we utilize the SHAP method to provide interpretability by computing the SHAP values to quantify feature contribution and then obtain the root-cause features. The main contributions of this article are as follows:

* A dilated convolution based order-invariant classifier, namely HDLCNN, is developed to solve chemical fault detection and diagnosis. Dilated convolution is a variant of CNN with a larger receptive field, enabling the proposed method to extract more information within a size-fixed convolution kernel, thus achieving better performance on processing tabular data with features of arbitrary order.
* A hierarchical clustering algorithm is applied to agglomerate highly correlated features before the convolution layers to further weaken the effect of the feature order. As a data pre-processing method, hierarchical clustering treats each input as a separate cluster and then sequentially merges similar clusters, thus being unaffected by the order.
* The SHAP method is applied to provide credible and visual interpretability of the classification results from HDLCNN. The computed SHAP values quantify feature contribution and are utilized to obtain the root-cause features.
* The experimental results on the Tennessee Eastman (TE) chemical process benchmark dataset demonstrate that, in contrast to the existing methods, the proposed HDLCNN-SHAP method achieves better performance in the key operations of processing tabular data with features of arbitrary order, detecting faults, and identifying the root-cause features.

The rest of this paper is organized as follows. The related work is introduced and the motivation of this work is described in Section II. The proposed order-invariant and interpretable chemical fault detection and diagnosis method is shown in Section III. The experiments of our proposed method based on the TE dataset are introduced in Section IV. Section V summarizes this paper.

## II Background Theory and Motivation

### _Related Work_

To date, researchers have shown the effectiveness of applying CNN for chemical fault detection and diagnosis. For example, Chadha _et al._ proposed a 1-D CNN model to extract meaningful features from the time series sequence [6]. Wang _et al._ proposed a fault diagnosis method using deep learning multi-model fusion based on CNN and long short-term memory (LSTM) [7]. Gu _et al._ proposed an incremental CNN model to detect faults in a real chemical industrial process [8]. He _et al._ proposed a multi-block temporal convolutional network to learn the temporal-correlated features [9].
However, these methods mainly focus on extracting temporal features and ignore the effect of the order of features. Zhong _et al._ discussed the impact of the arrangement order of features on fault diagnosis and used an enumeration method to find the optimal order [10]. However, it is time-consuming to find the optimal order, and the problem is exacerbated since chemical processes involve many variables. Therefore, it is necessary to design an effective network that is less affected by the order of features. In this paper, we propose an order-invariant fault detection method based on hierarchical clustering and dilated convolution. The proposed HDLCNN method can process chemical tabular data with arbitrary feature order and provide accurate fault classification results. Another limitation of the current CNN methods is that only the classification results are provided, without an explanation of the results. Considered as black box systems, CNN models produce useful fault classification results without revealing any information about their internal workings. To avoid the black box problem, interpretability methods, which aim to help humans readily understand the reasoning behind predictions made by complex models, have aroused the interest of researchers. Interpretability methods can be divided into two categories: ante-hoc interpretability and post-hoc interpretability. The former refers to designing interpretable models to directly visualize and interpret the internal information, while the latter refers to applying interpretation methods after model training. Generally, researchers prefer to use the ante-hoc interpretable Bayesian network (BN) to recognize the root-cause features in chemical processes. BN is a probabilistic graphical model based on random variables and the corresponding conditional probabilities. It can be used to identify the propagation probability among measurable variables to determine the root-cause features. Liu _et al._ proposed a strong relevant mechanism Bayesian network by combining mechanism correlation analysis and process state transition to identify the unmonitored root-cause features [11]. Liu _et al._ proposed a multi-state BN to represent a node with multiple states [12]. However, it is expensive to design a BN model since it requires prior knowledge and expert rules, and no universally acknowledged method exists for constructing such networks from raw data. On the contrary, CNN models provide accurate classification results without human supervision. Therefore, utilizing suitable post-hoc interpretability methods to explain the internal parameters of CNN models is a possible solution to visualize the feature contribution and thus identify the root-cause features. In this paper, we propose an interpretable fault diagnosis method based on the SHAP method. The root-cause features are obtained from the SHAP values, thus greatly improving the practicability of fault diagnosis methods in real chemical processes.

### _Motivation of This Work_

Fault detection and diagnosis technologies are vital in real chemical processes since they aim to discover faults at an early stage and thus reduce maintenance costs. For this task, the current CNN methods only focus on extracting temporal features in multi-variable chemical processes. However, chemical processes have tabular data involving both the time and feature domains.
Similar to the effect of pixel positions in images, the order of features in tabular data also affects the classification results, since it determines the information extracted within a size-fixed convolution kernel. Although deep CNN models can ultimately achieve a global receptive field and extract order-invariant information, they are computationally expensive and inefficient for chemical data. Therefore, we aim to design a shallow and order-invariant CNN model. Fig. 1 shows an example of tabular data with \(n\) features and a time duration of \(m\). Each row represents a feature, and the order of these rows determines the information extracted within a kernel. The left part shows the raw data with the original order of features, and the right part shows the optimal order obtained by reordering the features so that highly correlated features are closer together. We seek the optimal order since it results in the best performance of the CNN model. To further explain this problem, we conduct some experiments on the TE dataset to analyze the correlation of these features. The TE dataset contains 22 continuous process features, and the corresponding correlation coefficients are shown in Fig. 2. From Fig. 2, we can see that there are 14 pairs of features with correlation coefficients greater than 0.7. Therefore, utilizing convolution kernels to extract features in the original order will lose information related to feature correlation, since closely related variables are not considered effectively. Zhong _et al._ demonstrated the impact of the feature order on model performance and devoted much effort to finding the optimal order [10]. However, using an enumeration method to select the optimal order is inefficient. An alternative solution is to design a model which is less affected by the order of features and can process arbitrary orders effectively. In this paper, we propose an order-invariant HDLCNN model based on feature clustering and dilated convolution to extract features with arbitrary order. Dilated convolution is utilized to extract information involving more variables due to its larger receptive field. In addition, as a data pre-processing method, feature clustering is used to further agglomerate correlated features before training the dilated convolution model. More details are described in Section III. This design enables the proposed model to effectively process tabular data with features of arbitrary order, making it no longer necessary to seek the optimal order. On the other hand, chemical processes carry a very high risk of serious incidents, as they handle and process materials under hazardous conditions. Therefore, once a fault is detected, it is necessary to analyze the corresponding root causes to identify the underlying issues and avoid the same faults. Generally, researchers design ante-hoc interpretable BN models to identify root-cause features. But these methods require prior knowledge and expert rules, leading to much manual intervention in real applications. On the contrary, post-hoc interpretability methods analyze complex models after training and require no prior knowledge; thus, they can be combined with opaque CNN models to leverage their strength of automatically extracting important features.

Figure 1: Tabular data with \(n\) features and a time duration of \(m\). Highly correlated features are closer in the optimal order.

Figure 2: The correlation coefficients of the 22 features in the TE dataset.
In particular, specific post-hoc interpretability methods have been proposed to explain CNN-based models and make them more transparent. Zhou _et al._ utilized global average pooling layers in CNN models to generate class activation maps (CAM), which indicate the discriminative regions used by the CNNs for prediction [13]. Further, Selvaraju _et al._ extended the CAM method to any CNN-based differentiable architecture by using the gradients of the targets flowing into the final convolution layer, namely gradient-weighted CAM (Grad-CAM) [14]. However, these CAM-based methods only produce a coarse localization map highlighting the important regions [14], while fault diagnosis requires pixel-level explanations and the precise localization of the correct root-cause feature. Therefore, these model-specific interpretability methods are not satisfying. In contrast, model-agnostic methods are more flexible and independent of the underlying machine learning model. For example, the partial dependence plot (PDP) [15] and the individual conditional expectation plot (ICEP) [16] are designed to display the effect of a feature on the prediction. However, they assume the independence of each feature and fail to process multiple features simultaneously, so they are inappropriate for chemical processes. Besides, Ribeiro _et al._ proposed a technique that explains individual predictions by training local surrogate models to approximate the predictions of the underlying black box model, namely local interpretable model-agnostic explanations (LIME) [17]. But it also ignores the correlation between features and only provides local explanations. In contrast, SHAP provides both local and global explanations by computing the contribution of each feature to the corresponding prediction [5]. Also, it considers the interaction effects after obtaining the individual feature effects. Therefore, we apply the SHAP method to improve the interpretability of our CNN-based model. The visualization of feature contribution and the analysis of root-cause features based on SHAP values are shown in Section IV.

### _Dilated Convolution_

CNN is a representative deep learning model which has been widely used in different fields. It takes the raw data, trains the model, and then extracts the features automatically for better classification. Although increasing the depth of CNN models can achieve a larger receptive field size and higher performance, the number of parameters greatly increases. In consideration of this, dilated convolution was proposed. The key idea of the dilated CNN (DLCNN) is to maintain the high resolution of feature maps and enlarge the receptive field size in CNN [4]. It expands the kernel by inserting holes among the original elements, thus enlarging the receptive field. Compared with traditional CNN, it involves a hyper-parameter named the dilation rate, which indicates the spacing between the non-zero values in a kernel. Fig. 3 shows the comparison of CNN and DLCNN. Theoretically, CNN can be seen as a DLCNN with dilation rate \(r=1\), and the normal convolution calculation is as follows:

\[Y[i,j]=\sum\nolimits_{m+p=i}\sum\nolimits_{n+q=j}H[m,n]\cdot X[p,q] \tag{1}\]

where \(Y\), \(H\) and \(X\) are the 2-D output, filter and input, respectively. An example of DLCNN is shown in Fig. 3 (b). With a dilation rate \(r\), (\(r-1\)) data points will be skipped in the process of convolution.
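Concretely, dilation is a single argument in standard deep-learning libraries; a minimal PyTorch sketch in which the input shape follows the tabular layout of Fig. 1 (22 features, 20 time steps, chosen here for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 22, 20)  # (batch, channels, features, time)

standard = nn.Conv2d(1, 16, kernel_size=3)              # dilation rate r = 1
dilated  = nn.Conv2d(1, 16, kernel_size=3, dilation=2)  # r = 2: one point skipped

# Both layers hold the same number of weights (16 kernels of size 3x3), but the
# dilated kernel spans an effective 5x5 window, enlarging the receptive field.
print(standard(x).shape)  # torch.Size([1, 16, 20, 18])
print(dilated(x).shape)   # torch.Size([1, 16, 18, 16])
```

Applied to the 11-feature segments used later in Section III, the same dilated layer maps an \(11\times 20\) segment to \(7\times 16\), matching the dimensions reported there.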
Figure 3: The comparison of CNN and DLCNN.

The dilated convolution calculation is defined as follows:

\[Y_{r}[i,j]=\sum\nolimits_{rm+p=i}\sum\nolimits_{rn+q=j}H[m,n]\cdot X[p,q] \tag{2}\]

With a dilation rate \(r\), this method offers a larger receptive field at the same computational cost. While the number of weights in the kernel is unchanged, they are no longer applied to spatially adjacent samples. Define \(c_{l}\) as the receptive field size of the feature map \(y_{l}\) at layer \(l\) in a DLCNN. Then, the receptive field size of layer \(l\) can be computed as follows:

\[c_{l}=s_{l+1}\cdot c_{l+1}+[(r_{l+1}(h_{l+1}-1)+1)-s_{l+1}] \tag{3}\]

where \(s_{l}\) refers to the stride and \(h_{l}\) indicates the kernel size. It is obvious that the receptive field increases linearly as the dilation rate increases. This enables models to have a larger receptive field with the same number of parameters and computational cost.

### _SHAP Method_

In this paper, we utilize the SHAP method [5] to provide interpretability and obtain the root-cause features; it is based on Shapley values [18] and game theory [19]. It computes the Shapley values for each feature of the data samples, and these values indicate the contribution that the feature generates in the prediction. More specifically, a Shapley value is the average marginal contribution of a feature among all possible coalitions [18]. Consider a simple linear model:

\[\hat{f}(x)=\beta_{0}+\beta_{1}x_{1}+\ldots+\beta_{p}x_{p} \tag{4}\]

where \(x\) is a data sample with \(p\) features and each \(x_{i}\) is a feature value. \(\beta_{i}\) is the weight of the feature \(i\). The contribution \(\phi_{i}\) of the feature \(i\) on the prediction \(\hat{f}(x)\) can be computed as follows:

\[\phi_{i}(\hat{f})=\beta_{i}x_{i}-E\left(\beta_{i}X_{i}\right)=\beta_{i}x_{i}-\beta_{i}E\left(X_{i}\right) \tag{5}\]

where \(E\left(\beta_{i}X_{i}\right)\) is the mean effect estimate for the feature \(i\). The contribution is then the difference between the feature effect and the average effect. On this basis, SHAP defines an explanation model \(g\) based on the additivity property of Shapley values as follows:

\[g\left(z^{\prime}\right)=\phi_{0}+\sum_{i=1}^{M}\phi_{i}z_{i}^{\prime} \tag{6}\]

where \(z^{\prime}\in\{0,1\}^{M}\) is a binary coalition vector of features, \(M\) is the number of input features and \(\phi_{i}\) is the contribution of the feature \(i\). Assuming that a model restricted to a feature subset \(S\) outputs a prediction \(Y_{S}(x_{S})\), the Shapley value \(\phi_{i}\) is computed as follows:

\[\phi_{i}=\sum_{S\subseteq F\backslash\{i\}}\frac{|S|!(|F|-|S|-1)!}{|F|!}\cdot\mathbf{Y} \tag{7}\]

\[\mathbf{Y}=Y_{S\cup\{i\}}\left(x_{S\cup\{i\}}\right)-Y_{S}\left(x_{S}\right) \tag{8}\]

where \(S\) is a subset of the features and \(F\) is the set of all features [5]. (A toy numerical example of this computation is given below, after the methodology overview.)

## III Methodology

In this paper, we propose an order-invariant and interpretable fault diagnosis method, namely HDLCNN-SHAP, based on feature clustering, dilated convolution and the SHAP method for chemical fault detection and diagnosis. The proposed method mainly contains two parts: a hierarchical dilated convolution model and an explainer based on the SHAP method. The input data is first pre-processed by the feature clustering method and then fed into the hierarchical dilated convolution model to provide classification results. The trained model is then seen as a black box, and we apply the SHAP method to interpret the model's behavior. The overall architecture is shown in Fig. 4.
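As promised above, here is a toy numerical example of Eq. (7): a brute-force computation of exact Shapley values for a small linear model, where features absent from a coalition are replaced by baseline (mean) values. The model and all numbers are made up purely for illustration:

```python
from itertools import combinations
from math import factorial

def exact_shapley(predict, x, baseline):
    """Brute-force Shapley values (Eq. 7): features outside a coalition S
    are replaced by their baseline (mean) value."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                x_s = [x[j] if j in S else baseline[j] for j in features]
                x_si = [x[j] if (j in S or j == i) else baseline[j] for j in features]
                phi[i] += weight * (predict(x_si) - predict(x_s))
    return phi

# Toy linear model: f(x) = 1 + 2*x1 + 3*x2 (coefficients are made up)
f = lambda x: 1 + 2 * x[0] + 3 * x[1]
print(exact_shapley(f, x=[1.0, 2.0], baseline=[0.5, 0.5]))  # -> [1.0, 4.5]
```

The output \([1.0, 4.5]\) matches \(\beta_{i}(x_{i}-E(X_{i}))\) from Eq. (5), as expected for a linear model.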
In the following subsections, we introduce the main flow of our method in detail.

Figure 4: The overall architecture of the proposed order-invariant and interpretable HDLCNN-SHAP method for chemical fault detection and diagnosis.

### _Data Pre-processing_

As shown in Fig. 2, different features may be strongly correlated, which requires the ability to extract the hidden information about the relevance among these features. In this case, a convolution layer struggles to extract enough information within size-fixed kernels. Instead of seeking the optimal order, we cluster the correlated features before training the dilated convolution model. As a data pre-processing step, we apply hierarchical clustering to divide the features into two categories based on relevance. Compared with other clustering methods, hierarchical clustering is easy to understand and implement. More importantly, the clustering results are not affected by the input order of the data. Assume that we have a set of training samples: \(\mathbf{X}=\{\mathbf{x_{1}},\mathbf{x_{2}},\mathbf{x_{3}},\ldots,\mathbf{x_{n}}\}\), where \(\mathbf{x_{i}}\in\mathbf{R^{p}}\). Based on it, we build a set of the \(\mathbf{p}\) features: \(\mathbf{F}=\{\mathbf{f_{1}},\mathbf{f_{2}},\mathbf{f_{3}},\ldots,\mathbf{f_{p}}\}\), where \(\mathbf{f_{i}}\in\mathbf{R^{n}}\). Then we divide these \(\mathbf{p}\) features into two categories with the hierarchical clustering algorithm, which mainly contains the following three steps:

1. Each feature \(\mathbf{f_{i}}\) is treated as a single cluster. Then we compute the Euclidean distance \(d(\mathbf{f_{i}},\mathbf{f_{j}})\) between two clusters \(\mathbf{f_{i}}\) and \(\mathbf{f_{j}}\).
2. The two closest clusters \(\mathbf{f_{i}},\mathbf{f_{j}}\) are merged into a single cluster \(\mathbf{f_{s}}\). Then \(\mathbf{f_{i}},\mathbf{f_{j}}\) are removed and \(\mathbf{f_{s}}\) is added.
3. Iterate the previous step until only one cluster remains. At each iteration, the distance matrix is updated to reflect the distance of the newly formed cluster \(\mathbf{f_{s}}\) to the remaining clusters.

We use the Ward variance minimization algorithm to calculate the distance mentioned above. The distance between the newly formed cluster \(\mathbf{f_{s}}\) and any remaining cluster \(\mathbf{f_{t}}\) is defined as:

\[d^{\star}(\mathbf{f_{s}},\mathbf{f_{t}})=\sqrt{d_{1}(\mathbf{f_{s}},\mathbf{f_{t}})+d_{2}(\mathbf{f_{s}},\mathbf{f_{t}})-d_{3}(\mathbf{f_{s}},\mathbf{f_{t}})} \tag{9}\]

\[d_{1}(\mathbf{f_{s}},\mathbf{f_{t}})=\frac{|\mathbf{f_{t}}|+|\mathbf{f_{i}}|}{T}d(\mathbf{f_{t}},\mathbf{f_{i}})^{2} \tag{10}\]

\[d_{2}(\mathbf{f_{s}},\mathbf{f_{t}})=\frac{|\mathbf{f_{t}}|+|\mathbf{f_{j}}|}{T}d(\mathbf{f_{t}},\mathbf{f_{j}})^{2} \tag{11}\]

\[d_{3}(\mathbf{f_{s}},\mathbf{f_{t}})=\frac{|\mathbf{f_{t}}|}{T}d(\mathbf{f_{i}},\mathbf{f_{j}})^{2} \tag{12}\]

where \(T=|\mathbf{f_{t}}|+|\mathbf{f_{i}}|+|\mathbf{f_{j}}|\). Finally, we obtain two sets of features \(\mathbf{F_{1}},\mathbf{F_{2}}\), each of which contains features with high correlation coefficients. The original data is reordered based on \(\mathbf{F_{1}}\) and \(\mathbf{F_{2}}\).
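These steps correspond to standard Ward-linkage agglomerative clustering; a minimal SciPy sketch in which a random matrix stands in for the 22 TE process variables observed over \(n\) samples:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows are features f_i in R^n (placeholder data standing in for the
# 22 TE process variables observed over 100 samples).
rng = np.random.default_rng(0)
features = rng.standard_normal((22, 100))

# Ward variance-minimization linkage (Eqs. 9-12) on Euclidean distances
Z = linkage(features, method="ward", metric="euclidean")

# Cut the dendrogram into the two clusters F1 and F2
labels = fcluster(Z, t=2, criterion="maxclust")
order = np.argsort(labels)     # reorder so each cluster is contiguous
reordered = features[order]
```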
For each sample \(\mathbf{x_{i}}\), the processed sample \(\mathbf{x^{\prime}_{i}}\) contains \(\mathbf{p}\) features: \(\mathbf{F^{\prime}}=\big{\{}\mathbf{f^{\prime}_{1}},\dots,\mathbf{f^{\prime}_{m}},\mathbf{f^{\prime}_{m+1}},\dots,\mathbf{f^{\prime}_{p}}\big{\}}\), where \(\{\mathbf{f^{\prime}_{1}},\dots,\mathbf{f^{\prime}_{m}}\}\) belongs to \(\mathbf{F_{1}}\) and \(\big{\{}\mathbf{f^{\prime}_{m+1}},\dots,\mathbf{f^{\prime}_{p}}\big{\}}\) belongs to \(\mathbf{F_{2}}\).

### _Hierarchical Dilated Convolution Model_

After feature clustering, the processed data is fed into the dilated convolution layers for feature extraction. To explain the model structure clearly, we take the data from the TE dataset as input and define the processed input data as a set \(\mathbf{D}=\{\mathbf{d_{1}},\mathbf{d_{2}},\mathbf{d_{3}},\dots,\mathbf{d_{n}}\}\), where \(\mathbf{d_{i}}\in\mathbf{R^{22\times 20}}\). Each data sample \(\mathbf{d_{i}}\) has 22 features and a time duration of 20. The details are shown in Fig. 5 and the procedures of the model, sketched in code below, are summarized as follows:

1. The size of the processed data is \((N\times 1\times 22\times 20)\), and we divide the 22 features into two segments. Since the processed data was reordered in the previous step, each segment contains highly correlated features belonging to the same cluster. Each segment has a size of \((N\times 1\times 11\times 20)\).
2. The segmented data is processed by a dilated convolution layer with dilation rate \(r=2\). The hidden information about feature correlation is thus extracted locally, and the size of the extracted features is \((N\times 16\times 7\times 16)\).
3. The extracted features are concatenated to obtain the entire information about the two feature clusters. The size of the concatenated features is \((N\times 16\times 14\times 16)\).
4. The concatenated features are then processed by a dilated convolution layer with dilation rate \(r=2\). This step further extracts global information, and the size of the new extracted features is \((N\times 32\times 10\times 12)\).
5. A max pooling layer is applied to mitigate over-fitting and reduce the computational cost. Now the size of the extracted features is \((N\times 32\times 5\times 6)\).
6. The extracted features are flattened to couple the information that exists vertically and horizontally. The output data of the fully connected layer has a size of \((N\times 960)\).
7. A linear layer is used to change the dimensionality of the data. Then a softmax activation function is applied to output the probability distribution over the possible classes (11 in this case). The size of the final output is \((N\times 11)\).

Finally, we obtain the classification results and a trained model that requires explanation. The performance of this model is described in Section IV and the interpretability method is introduced in the following subsection.

Fig. 5: Details of the proposed order-invariant hierarchical model based on dilated convolution.

### _Deep SHAP Explainer_

To interpret the order-invariant hierarchical model mentioned in Section III-B, we apply an explainer based on the SHAP method, which combines deep learning important features (DeepLIFT) [20] and Shapley values to leverage extra knowledge about the properties of deep neural networks and improve computational performance [5]. DeepLIFT is an algorithm to compute the feature importance of the input with respect to a given output based on back-propagation [20].
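As promised above, a minimal PyTorch sketch of the hierarchical model in Section III-B follows; it mirrors steps 1)-7), while the shared branch weights, the padding-free layers and the in-network softmax are our assumptions rather than confirmed details of the authors' implementation:

```python
import torch
import torch.nn as nn

class HDLCNN(nn.Module):
    """Minimal sketch following steps 1)-7) of Section III-B."""

    def __init__(self, num_classes: int = 11):
        super().__init__()
        self.branch = nn.Conv2d(1, 16, kernel_size=3, dilation=2)   # step 2
        self.conv   = nn.Conv2d(16, 32, kernel_size=3, dilation=2)  # step 4
        self.pool   = nn.MaxPool2d(2)                                # step 5
        self.fc     = nn.Linear(32 * 5 * 6, num_classes)             # steps 6-7

    def forward(self, x):                # x: (N, 1, 22, 20), clustered order
        a, b = x[:, :, :11], x[:, :, 11:]       # step 1: two feature segments
        a, b = self.branch(a), self.branch(b)   # step 2: (N, 16, 7, 16) each
        h = torch.cat([a, b], dim=2)            # step 3: (N, 16, 14, 16)
        h = self.conv(h)                        # step 4: (N, 32, 10, 12)
        h = self.pool(h)                        # step 5: (N, 32, 5, 6)
        h = torch.flatten(h, 1)                 # step 6: (N, 960)
        return torch.softmax(self.fc(h), dim=1) # step 7: class probabilities

model = HDLCNN()
print(model(torch.randn(4, 1, 22, 20)).shape)  # torch.Size([4, 11])
```

If trained with a cross-entropy loss, the final softmax would typically be folded into the loss; it is kept here to mirror step 7).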
DeepLIFT uses a summation-to-delta property to compute the contribution scores \(C_{\Delta x_{i}\Delta y}\) for each input \(x_{i}\):

\[\sum_{i=1}^{n}C_{\Delta x_{i}\Delta y}=\Delta y \tag{13}\]

where \(y\) is the model output, \(\Delta y=y-y^{0}\), \(\Delta x_{i}=x_{i}-x^{0}\), \(x^{0}\) refers to the reference input, and \(y^{0}\) represents the reference output. Compared with Equation 6, if we define \(\phi_{i}=C_{\Delta x_{i}\Delta y}\) and \(\phi_{0}=y^{0}\), then DeepLIFT approximates SHAP values for linear models. Deep SHAP takes DeepLIFT as a compositional approximation of SHAP values and recursively passes the multipliers of DeepLIFT backwards through the network [5]. The Deep SHAP explainer effectively achieves linearization by combining the SHAP values computed for smaller components into SHAP values for the whole model. Therefore, we can quantify the contribution of each feature of each data sample to obtain local explanations. Based on the feature contribution of each data sample, we further interpret the model globally by calculating the average value over the data samples for each feature, which can be mathematically described as follows:

\[\Phi_{i}=\frac{1}{n}\sum_{j=1}^{n}\phi_{i}(x_{j}) \tag{14}\]

where \(\phi_{i}(x_{j})\) refers to the contribution of the feature \(i\) of the input \(x_{j}\). Finally, we identify the root-cause feature \(\mathbf{\gamma}\) as the one with the highest contribution:

\[\mathbf{\gamma}=\arg\max_{i}(\Phi_{i}) \tag{15}\]

### _The Entire Fault Detection and Diagnosis Procedure_

Based on the description above, the entire order-invariant and interpretable fault detection and diagnosis procedure via feature clustering, the hierarchical dilated convolution model and the SHAP method mainly consists of six steps:

1. Collecting the monitored variables via sensors in chemical processes. Obtaining the training set from the collected samples and normalizing the data.
2. Processing the training samples with the hierarchical clustering method. The features are reordered according to the clustering results so that highly correlated features are closer in the processed data.
3. Training the hierarchical dilated convolution model with the processed data. Storing the model parameters for later usage.
4. Acquiring online samples and processing them in the same way as mentioned above.
5. Restoring the trained model and classifying the new sample into a normal or fault type.
6. If a fault happens, identifying the corresponding root-cause feature based on the SHAP method. The feature contributions are computed, and the feature with the highest contribution is considered the root-cause feature.

## IV Experiment Study

In this paper, we use the TE dataset to verify the effectiveness of the proposed method. It simulates actual chemical processes and is widely used as a benchmark in chemical fault detection and diagnosis [21, 22, 23]. In total, there are 21 types of faults and 22 continuously measured variables in this dataset. The training set has 980 samples, including 500 samples in the normal case and 480 samples in the case of failure for each fault type. The test set has 960 samples, including 160 normal samples and 800 fault samples for each fault type. Details are described in the following subsections.

### _Experiment Setup_

The downloaded TE dataset has a sampling period of 180 seconds, leading to too few data samples for training and testing.
### _The Entire Fault Detection and Diagnosis Procedure_

Based on the description above, the entire order-invariant and interpretable fault detection and diagnosis procedure via feature clustering, the hierarchical dilated convolution model and the SHAP method mainly consists of six steps:

1. Collecting the monitored variables via sensors in chemical processes. Obtaining the training set from the collected samples and normalizing the data.
2. Processing the training samples with the hierarchical clustering method. The features are reordered according to the clustering results so that highly correlated features are closer in the processed data.
3. Training the hierarchical dilated convolution model with the processed data. Storing the model parameters for later usage.
4. Acquiring online samples and processing them in the same way as mentioned above.
5. Restoring the trained model and classifying the new sample into a normal or fault type.
6. If a fault happens, identifying the corresponding root-cause feature based on the SHAP method. Feature contributions are computed and the one with the highest contribution is considered the root-cause feature.

## IV Experiment Study

In this paper, we use the TE dataset to verify the effectiveness of the proposed method. It simulates actual chemical processes and is widely used as a benchmark in chemical fault detection and diagnosis [21, 22, 23]. In total, there are 21 types of faults and 22 continuous measured variables in this dataset. The training set has 980 samples, including 500 samples in the normal case and 480 samples in the case of failure for each fault type. The test set has 960 samples, including 160 normal samples and 800 fault samples for each fault type. Details are described in the following subsections.

### _Experiment Setup_

The downloaded TE dataset has a sampling period of 180 seconds, which leads to too few data samples for training and testing. Therefore, current CNN methods use a simulation model to generate more data samples for feature extraction. Similarly, we use the simulation method from [24] in MATLAB to obtain more data for classification. The sampling period is set to 36 seconds (100 samples/h). The simulator runs for 48 hours in the normal state, from which 4800 normal training samples are collected. For each fault type, the simulator runs for 48 hours to collect 4800 fault training samples. For the test data of each fault, the simulator first runs for 8 hours in the normal state to collect 800 normal test samples. Then a fault disturbance is introduced and the simulator continues to run for 40 hours to collect 4000 fault test samples. Next, the collected data is normalized to the range [0, 1] to eliminate the adverse effects caused by singular data. To extract features in both the spatial and temporal domains, each data sample is reshaped into a 2-D array with 22 features and a time duration of 20. To demonstrate the performance of our proposed model on chemical fault detection and diagnosis, we select Fault 1, Fault 2, Fault 3, Fault 8, Fault 10, Fault 11, Fault 12, Fault 13, Fault 14, and Fault 20 for binary and multi-class fault detection and diagnosis. Fault 10 and Fault 11 are chosen for root cause analysis since their root-cause features are proven and widely used. The corresponding true root-cause features are X(18) and X(9) [21, 23].
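A minimal sketch of the preprocessing just described follows. Non-overlapping 20-step windows are our assumption, since the text does not specify the stride between samples.

```python
# A sketch of the preprocessing above: min-max scaling to [0,1] and reshaping
# into (n,1,22,20) samples. Non-overlapping 20-step windows are an assumption.
import numpy as np

def preprocess(raw: np.ndarray, window: int = 20) -> np.ndarray:
    """raw: (T, 22) array of variables sampled every 36 s."""
    mn, mx = raw.min(axis=0), raw.max(axis=0)
    scaled = (raw - mn) / (mx - mn + 1e-12)           # per-feature scaling to [0,1]
    n = raw.shape[0] // window
    windows = scaled[: n * window].reshape(n, window, 22)
    return windows.transpose(0, 2, 1)[:, None, :, :]  # (n,1,22,20)

raw = np.random.rand(4800, 22)   # placeholder for one simulated 48 h run
print(preprocess(raw).shape)     # (240, 1, 22, 20)
```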
### _Feature Clustering_

As shown in Fig. 1, highly correlated features are closer in the optimal order. Although it is hard to obtain the optimal order, we can cluster the features with higher correlation to achieve a similar effect. As described in Section III-A, we divide the 22 features into two categories based on the correlation. The corresponding hierarchical clustering dendrogram is shown in Fig. 6. We can see that the first category includes 11 features, which are X(1), X(2), X(3), X(4), X(8), X(9), X(12), X(14), X(15), X(17) and X(19). The second category also includes 11 features, which are X(5), X(6), X(7), X(10), X(11), X(13), X(16), X(18), X(20), X(21) and X(22). Referring to the correlation coefficients of the 22 features shown in Fig. 2, we can see that the highly correlated features are classified into the same cluster.

Fig. 6: The feature clustering dendrogram of the 22 features in TE dataset.
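A sketch of this clustering step is shown below. The distance \(1-|\mathrm{corr}|\) and average linkage are our assumptions; the text only states that the 22 features are grouped into two clusters according to their correlation.

```python
# A sketch of correlation-based hierarchical clustering used to reorder features;
# the distance 1-|corr| and average linkage are assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_order(data: np.ndarray, n_clusters: int = 2) -> np.ndarray:
    """data: (T, 22) array; returns feature indices grouped by cluster."""
    corr = np.corrcoef(data, rowvar=False)       # (22, 22) correlation matrix
    dist = 1.0 - np.abs(corr)                    # highly correlated -> close
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    return np.argsort(labels, kind="stable")     # cluster-1 features, then cluster-2

order = cluster_order(np.random.rand(4800, 22))
# X_reordered = X[:, :, order, :]   # apply to the (n,1,22,20) samples
```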
### _Contrast and Ablation Experiments_

Firstly, we evaluate the proposed method for binary fault detection and diagnosis. In this case, only one fault is considered at a time. To show the efficiency, the proposed HDLCNN model is compared with existing data-driven methods including principal component analysis (PCA), kernel principal component analysis (KPCA), integrated kernel dynamic principal component analysis (KDPCA), kernel dynamic independent component analysis (KDICA) [25], modified locality preserving projection (MLPP) [26], denoising sparse autoencoder (DSAE) [22], variable selection and support vector data description (VS-SVDD) [27] and multi-block temporal convolutional network (MBTCN) [9]. As shown in Table I, our proposed model results in the highest average fault detection rate, which is marked in bold. The following experiment results are marked in a similar way.

\begin{table}
\begin{tabular}{|c|c c|c c|c c|c c|c c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Fault ID} & \multicolumn{2}{c|}{PCA} & \multicolumn{2}{c|}{KPCA} & \multicolumn{2}{c|}{KDPCA} & \multicolumn{2}{c|}{KDICA} & \multicolumn{2}{c|}{MLPP} & \multirow{2}{*}{DSAE} & \multirow{2}{*}{VS-SVDD} & \multirow{2}{*}{MBTCN} & \multirow{2}{*}{CNN} & \multirow{2}{*}{DLCNN} & \multirow{2}{*}{HDLCNN} \\
\cline{2-11}
 & SPE & T\({}^{2}\) & SPE & T\({}^{2}\) & SPE & T\({}^{2}\) & SPE & T\({}^{2}\) & SPE & T\({}^{2}\) & & & & & & \\
\hline
1 & 99.5 & 99.1 & 100.0 & 99.3 & 100.0 & 99.5 & 100.0 & 100.0 & 99.7 & 100.0 & 99.3 & 99.0 & 100.0 & 99.4 & 98.8 & 100.0 \\
\hline
2 & 98.4 & 98.5 & 99.0 & 95.3 & 99.1 & 98.3 & 98.5 & 98.8 & 98.9 & 99.8 & 96.8 & 98.0 & 99.0 & 99.1 & 99.1 & 98.7 \\
\hline
3 & 0.6 & 3.6 & 6.8 & 9.0 & 9.6 & 9.4 & 19.4 & 19.8 & 23.8 & 39.6 & 67.4 & 42.0 & 85.4 & 85.7 & 92.4 & 97.6 \\
\hline
8 & 96.8 & 97.4 & 97.9 & 97.4 & 97.8 & 97.6 & 97.8 & 99.4 & 100.0 & 98.7 & 87.0 & 98.0 & 89.0 & 96.7 & 97.7 & 95.9 \\
\hline
10 & 15.4 & 36.7 & 52.5 & 48.6 & 63.5 & 42.6 & 80.6 & 92.9 & 71.3 & 94.2 & 68.3 & 73.0 & 86.6 & 93.7 & 93.9 & 95.6 \\
\hline
11 & 63.8 & 41.4 & 77.6 & 51.0 & 91.0 & 33.6 & 81.4 & 90.3 & 93.6 & 95.6 & 81.3 & 98.0 & 99.5 & 93.4 & 93.0 & 94.5 \\
\hline
12 & 92.5 & 98.5 & 98.5 & 98.9 & 99.1 & 99.1 & 99.7 & 100.0 & 99.6 & 100.0 & 94.1 & 100.0 & 96.5 & 81.7 & 82.7 & 90.7 \\
\hline
13 & 95.0 & 94.3 & 95.2 & 94.3 & 95.4 & 96.3 & 95.9 & 95.9 & 96.5 & 91.2 & 78.1 & 95.0 & 95.6 & 96.3 & 96.8 & 96.9 \\
\hline
14 & 99.9 & 98.8 & 100.0 & 99.6 & 100.0 & 99.9 & 100.0 & 100.0 & 100.0 & 99.6 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
\hline
20 & 42.3 & 34.0 & 59.8 & 49.1 & 66.8 & 51.5 & 72.7 & 83.9 & 86.7 & 93.6 & 78.6 & 78.0 & 90.1 & 97.2 & 97.3 & 96.4 \\
\hline
Average & 70.4 & 70.2 & 78.7 & 74.2 & 82.2 & 72.3 & 84.6 & 88.1 & 87.0 & 91.3 & 85.1 & 88.1 & 94.2 & 94.3 & 95.2 & \textbf{96.6} \\
\hline
\end{tabular}
\end{table}
TABLE I: Binary fault detection accuracy (%) of the selected 10 faults

As ablation experiments, we compare CNN, DLCNN and HDLCNN. CNN is the baseline model with traditional convolution layers and DLCNN contains dilated convolution layers without hierarchical feature clustering. HDLCNN is our proposed method involving both hierarchical feature clustering and dilated convolution layers. The average fault detection accuracy of DLCNN is increased by 1.0% compared to CNN, which shows the effect of dilated convolution. In addition, hierarchical feature clustering proves instrumental, since the average fault detection accuracy of HDLCNN is increased by 1.5% compared to DLCNN.

Then, to explore the performance of the proposed method for multi-class fault detection and diagnosis, we combine the normal case with the selected 10 types of faults. As shown in Fig. 7, the extracted features of HDLCNN are visualized. Specifically, we utilize the t-SNE method to reduce the dimension of the features to 2, and then plot them by class. The embeddings of the extracted features belonging to different classes are clearly separated. Therefore, it is unsurprising that the softmax layer can obtain accurate classification results. PCA, DSAE, CNN and DLCNN are selected as comparison methods. As shown in Table II, the proposed model achieves the highest average fault detection rate. Similarly, we consider CNN, DLCNN and HDLCNN as ablation experiments. Due to the dilated convolution, the average fault detection accuracy of DLCNN is increased by 3.0% compared to CNN. And with hierarchical feature clustering, the average fault detection accuracy of HDLCNN is increased by 2.6% compared to DLCNN.

Fig. 7: The t-SNE embedding of the extracted features of our proposed HDLCNN model.

### _Ablation Experiments of Sensitivity to Feature Order_

To demonstrate the ability of our proposed HDLCNN model to process tabular data with features of arbitrary order, we construct a separate-correlated order of the features and compare it to the close-correlated order. More specifically, we consider the separate-correlated order as [X(21), X(8), X(4), X(3), X(15), X(16), X(2), X(6), X(20), X(13), X(17), X(18), X(9), X(1), X(7), X(10), X(5), X(14), X(19), X(11), X(12), X(22)], in which highly correlated features are separated. The close-correlated order is formed in the opposite way. In this case, the average multi-class fault detection accuracy of CNN, DLCNN and HDLCNN is shown in Table III. We see that the performance of CNN and DLCNN is influenced by the order of the features, with differences of 5.5% and 2.2% respectively. This shows that dilated convolution can extract more information and weaken the effect of the feature order. Further, HDLCNN is order-invariant and the difference is only 0.4%, which confirms the effect of hierarchical feature clustering. In short, the ablation experiments demonstrate that our proposed method can effectively process tabular data with features of arbitrary order without seeking the optimal order. The confusion matrices obtained by HDLCNN are illustrated in Fig. 8.

Fig. 8: The confusion matrices of HDLCNN.

### _Local and Global Explanation_

We further analyze the feature contributions and obtain the root-cause features based on SHAP values. Fault 10 and Fault 11 are selected for root cause analysis, and the corresponding true root-cause features are X(18) and X(9), respectively. For Fault 10, the stripper temperature (X(18)) is directly affected because of the random variation of temperature in feed C [23]. For Fault 11, the random variation in the reactor cooling water inlet temperature results in abnormal behaviour of the reactor temperature (X(9)) [21].

First, a local explanation is provided to indicate the feature contributions to the prediction for a single sample. Fig. 9 shows the visualization of the SHAP values of a single sample. The left column shows the grayscale image of the sample, the middle column shows the SHAP values for classifying this sample as the normal case, and the right column shows the SHAP values for classifying this sample as the case of failure. Red pixels indicate high SHAP values and blue pixels denote low SHAP values. A high SHAP value means a large contribution of the corresponding feature towards classifying the sample as the given type of fault. The corresponding heatmaps are shown in Fig. 10. The most important features are X(18) and X(9) for Fault 10 and Fault 11 respectively, which are also the true root-cause features.

Fig. 9: The visualization of SHAP values of a single sample of Fault 10 and 11.

Fig. 10: The heatmap of a single sample of Fault 10 and 11.

Then, a global explanation is described to show the feature contributions over the entire dataset. We compute the average contribution of each feature across the data samples and consider the feature with the highest importance as the corresponding root-cause feature. Fig. 11 shows the average feature importance, with the features of highest importance marked red. The most important features are X(18) and X(9) for Fault 10 and Fault 11 respectively, which are also the true root-cause features.

Fig. 11: The average feature importance of Fault 10 and 11.

Fig. 12 shows the relationship between the measured feature values and the corresponding SHAP values. For Fault 10, high measured values of X(18) clearly correspond to high SHAP values, which means a high stripper temperature (X(18)) may be the main cause of the failure.
On the contrary, for Fault 11, we see that low measured values of X(9) mainly correspond to high SHAP values, which reminds us to pay attention to a reduction of the reactor temperature (X(9)).

## V Conclusion

In this paper, we propose an order-invariant and interpretable HDLCNN-SHAP method for chemical fault detection and diagnosis based on feature clustering, dilated convolution and the SHAP method. The ability to detect faults and obtain the root-cause features is essential for fault detection and diagnosis methods in real chemical processes. Compared with existing methods, our proposed method can effectively process tabular data with features of arbitrary order without seeking the optimal order. In addition, root-cause features are precisely identified without any human supervision. The proposed method is evaluated on a simulation dataset based on an actual chemical process. Experimental results show that the proposed method achieves better performance for both binary and multi-class fault detection and diagnosis compared with other popular data-driven methods. Moreover, the proposed method is order-invariant, which results in insensitivity to the order of the features. Local and global explanations are further provided to obtain the root-cause features.

In our future work, we will focus on more practical and complex fault detection problems. Simultaneous-fault diagnosis is a common problem in real applications, and the problem of faults with multiple root-cause features is consequential as well. Besides, incomplete and high-dimensional datasets are worthy of investigation for solving fault detection problems in real-world chemical processes.
2308.04369
SSTFormer: Bridging Spiking Neural Network and Memory Support Transformer for Frame-Event based Recognition
Event camera-based pattern recognition is a newly arising research topic in recent years. Current researchers usually transform the event streams into images, graphs, or voxels, and adopt deep neural networks for event-based classification. Although good performance can be achieved on simple event recognition datasets, their results may still be limited due to the following two issues. Firstly, they adopt spatially sparse event streams for recognition only, which may fail to capture the color and detailed texture information well. Secondly, they adopt either Spiking Neural Networks (SNN) for energy-efficient recognition with suboptimal results, or Artificial Neural Networks (ANN) for energy-intensive, high-performance recognition. However, few of them consider achieving a balance between these two aspects. In this paper, we formally propose to recognize patterns by fusing RGB frames and event streams simultaneously and propose a new RGB frame-event recognition framework to address the aforementioned issues. The proposed method contains four main modules, i.e., a memory support Transformer network for RGB frame encoding, a spiking neural network for raw event stream encoding, a multi-modal bottleneck fusion module for RGB-Event feature aggregation, and a prediction head. Due to the scarcity of RGB-Event based classification datasets, we also propose a large-scale PokerEvent dataset which contains 114 classes and 27102 frame-event pairs recorded using a DVS346 event camera. Extensive experiments on two RGB-Event based classification datasets fully validated the effectiveness of our proposed framework. We hope this work will boost the development of pattern recognition by fusing RGB frames and event streams. Both our dataset and source code of this work will be released at https://github.com/Event-AHU/SSTFormer.
Xiao Wang, Zongzhen Wu, Yao Rong, Lin Zhu, Bo Jiang, Jin Tang, Yonghong Tian
2023-08-08T16:15:35Z
http://arxiv.org/abs/2308.04369v2
# SSTFormer: Bridging Spiking Neural Network and Memory Support Transformer for Frame-Event based Recognition

###### Abstract

Event camera-based pattern recognition is a newly arising research topic in recent years. Current researchers usually transform the event streams into images, graphs, or voxels, and adopt deep neural networks for event-based classification. Although good performance can be achieved on simple event recognition datasets, their results may still be limited due to the following two issues. Firstly, they adopt spatially sparse event streams for recognition only, which may fail to capture the color and detailed texture information well. Secondly, they adopt either Spiking Neural Networks (SNN) for energy-efficient recognition with suboptimal results, or Artificial Neural Networks (ANN) for energy-intensive, high-performance recognition. However, few of them consider achieving a balance between these two aspects. In this paper, we formally propose to recognize patterns by fusing RGB frames and event streams simultaneously and propose a new RGB frame-event recognition framework to address the aforementioned issues. The proposed method contains four main modules, i.e., a memory support Transformer network for RGB frame encoding, a spiking neural network for raw event stream encoding, a multi-modal bottleneck fusion module for RGB-Event feature aggregation, and a prediction head. Due to the scarcity of RGB-Event based classification datasets, we also propose a large-scale PokerEvent dataset which contains 114 classes and 27102 frame-event pairs recorded using a DVS346 event camera. Extensive experiments on two RGB-Event based classification datasets fully validated the effectiveness of our proposed framework. We hope this work will boost the development of pattern recognition by fusing RGB frames and event streams. Both our dataset and source code of this work will be released at [https://github.com/Event-AHU/SSTFormer](https://github.com/Event-AHU/SSTFormer).

Spiking Neural Networks, Transformer Networks, Bottleneck Mechanism, Video Classification, Bio-inspired Computing

## I Introduction

The mainstream video-based classification algorithms are widely developed based on RGB cameras. With the help of deep learning, many representative deep models have been proposed, such as TSM [1], C3D [2], Video-SwinTrans [3], and SlowFast [4]. These works learn the deep feature representation well and contribute significantly to many practical applications and video-related tasks. However, these models may perform poorly in some extremely challenging scenarios (for example, fast motion, low illumination, and over-exposure) due to the utilization of RGB cameras. This is because RGB cameras adopt a global synchronous exposure mechanism and have a limited frame rate. Recently, biologically inspired event cameras (also called DVS, Dynamic Vision Sensor) have attracted great interest due to their unique imaging principle and key characteristics including _high dynamic range, high temporal resolution, low latency, low power,_ etc. Different from RGB cameras which record the light intensity of a scene in a synchronous way, each pixel in event cameras outputs an event spike only when the variation of light intensity exceeds a certain threshold. The event that corresponds to an increase in brightness is termed an _ON_ event; otherwise, an _OFF_ event.
Usually, each event can be represented as a quadruple \(\{x,y,t,p\}\), where \(x,y\) denote the spatial coordinates, \(t\) is the timestamp, and \(p\) is the polarity (1 and -1 are used to denote the _ON_ and _OFF_ event, respectively). Therefore, event cameras have been applied in many computer vision tasks, such as object detection, visual tracking [5, 6], pose estimation, etc. Some works based on Graph Neural Networks (GNN) [7, 8], Convolutional Neural Networks (CNN) [9], and Spiking Neural Networks (SNN) [10, 11] have been proposed for event-based pattern recognition. For example, the Asynchronous Event-based Graph Neural Network (short for AEGNN [8]) is proposed to process events as spatio-temporal graphs in an incremental learning manner. Zhou et al. combine the spiking neurons with Transformer network and

Fig. 1: Illustration of video classification by fusing RGB frames and event stream. The bottom event images are stacked for visualization.
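As an illustration of the event-to-frame conversion mentioned above, the following is a minimal sketch that accumulates a stream of \(\{x,y,t,p\}\) events into a two-channel (OFF/ON) count frame at the DVS346 resolution; the counting scheme is one common choice among several, not the paper's specific method.

```python
# A sketch of stacking raw events into a two-channel event frame; simple
# per-pixel counting is an assumption, chosen for illustration only.
import numpy as np

def events_to_frame(x, y, t, p, height=260, width=346):
    frame = np.zeros((2, height, width), dtype=np.float32)
    chan = (p > 0).astype(int)            # channel 0: OFF (-1), channel 1: ON (+1)
    np.add.at(frame, (chan, y, x), 1.0)   # count events per pixel and polarity
    return frame

n = 10_000                                # synthetic example stream
frame = events_to_frame(
    np.random.randint(0, 346, n), np.random.randint(0, 260, n),
    np.sort(np.random.rand(n)), np.random.choice([-1, 1], n))
print(frame.shape)  # (2, 260, 346)
```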
2305.13992
Liouville Space Neural Network Representation of Density Matrices
Neural network quantum states as ansatz wavefunctions have shown a lot of promise for finding the ground state of spin models. Recently, work has been focused on extending this idea to mixed states for simulating the dynamics of open systems. Most approaches so far have used a purification ansatz where a copy of the system Hilbert space is added which, when traced out, gives the correct density matrix. Here, we instead present an extension of the Restricted Boltzmann Machine which directly represents the density matrix in Liouville space. This allows the compact representation of states which appear in mean-field theory. We benchmark our approach on two different versions of the dissipative transverse field Ising model, which show our ansatz is able to compete with other state-of-the-art approaches.
Simon Kothe, Peter Kirton
2023-05-23T12:21:08Z
http://arxiv.org/abs/2305.13992v2
# Liouville Space Neural Network Representation of Density Matrices

###### Abstract

Neural network quantum states as ansatz wavefunctions have shown a lot of promise for finding the ground state of spin models. Recently, work has been focused on extending this idea to mixed states for simulating the dynamics of open systems. Most approaches so far have used a purification ansatz where a copy of the system Hilbert space is added which, when traced out, gives the correct density matrix. Here, we instead present an extension of the Restricted Boltzmann Machine which directly represents the density matrix in Liouville space. This allows the compact representation of states which appear in mean-field theory. We benchmark our approach on two different versions of the dissipative transverse field Ising model, which show our ansatz is able to compete with other state-of-the-art approaches.

## I Introduction

Developing techniques to study the many-body physics of open quantum systems opens up the avenue to access behavior beyond that possible in equilibrium. This is relevant for understanding the dynamics of a variety of experimental platforms, but it also allows us to address fundamental questions about what kinds of physics are realizable in situations where incoherent driving and dissipation compete with coherent Hamiltonian dynamics. For example, in arrays of superconducting qubits coherent hopping of excitations can compete with onsite losses [1], or in semiconductor microcavities non-linearities in the Hamiltonian can compete with photon losses [2]. Performing accurate simulations of the models describing this kind of physics can be very challenging. Even in the simplest case, where the dynamics of the system is well captured by a Markovian master equation [3], the effective size of the space required grows exponentially with the number of degrees of freedom included. This makes it difficult to make concrete statements about what phases are stable at large system sizes, in the thermodynamic limit. Thus, developing new techniques able to capture all of the required physical processes is a key challenge to be overcome.

For closed systems a variety of techniques are available, allowing calculation of both ground states and dynamics in a variety of situations. These range from density functional theory, which relies on approximations of the electronic density [4], to Monte Carlo methods [5; 6; 7; 8; 9] which draw a small number of samples from the Hilbert space approximating the correct distribution, and tensor network methods [10; 11; 12], which rely on compression of the full many-body wavefunction. Recently, there has been much progress in using neural networks to accurately represent wavefunctions of many-body systems [13; 14; 15]. These neural network quantum states (NNQS) rely on the ability of neural networks to learn and represent any sufficiently smooth function [16; 17; 18], allowing an efficient description without requiring an exponential number of parameters. The simplest architecture used for NNQS is the Restricted Boltzmann Machine (RBM). RBMs consist of two fully connected layers, one visible and one hidden. This simple structure translates to a function which can be easily implemented numerically, allowing for an efficient Markov sampling of the relevant probability distribution [19; 20] and efficient optimization.
By increasing the hidden unit density the expressibility of the ansatz increases, allowing it to represent any state in the limit of infinite parameters. This has already been shown to be competitive with other state-of-the-art approaches for finding ground [13] and excited [21] states of 1D and 2D spin systems. This ansatz has been applied to a wide range of problems in this area, from simulating topologically complex states [22] to investigating how the eigenvalue spectrum of the quantum Fisher matrix gives information about how networks learn ground states [23]. RBMs are able to represent some states with a volume-law entropy scaling [24]. However, this is not a generic property [25] and there are certain volume-law states which still do not permit an efficient representation. Architectures beyond the RBM have also been used to study related problems. These range from recurrent neural networks [26; 27] to convolutional neural networks [28; 29; 30] and deep autoregressive models, which forgo the need for Monte Carlo sampling altogether [31].

These approaches have also begun to be applied to the dynamics and steady-states of open quantum systems. Neural density machines (NDMs) [32; 33; 34; 35] use a pair of copies of a pure state NNQS coupled together by an extra layer of hidden neurons. The density matrix of interest is then obtained by tracing out the degrees of freedom associated with one of the copies. Such purification schemes have also been used in tensor network approaches [36; 37; 11]. Other developments use a POVM representation of the density matrix, achieving accurate results [38; 39; 40], as well as autoregressive Gram-Hadamard density operators, which extend the NDM by adding additional layers that allow the capture of more complex correlations [41]. Another ansatz was introduced by Yoshioka and Hamazaki [42] which uses a binary encoding to vectorise their density matrix. So far, the literature has focused mostly on purifying the density matrix in Hilbert space, with the exception of Ref. [42]. Here, we propose a different approach, which writes the density matrix directly in Liouville space. The Liouville density machine (LDM) ansatz which we propose does not require additional visible units and maintains a correspondence between the visible layer and the concrete physical system. We show that this is able to efficiently represent a larger range of physically relevant states than the NDM, improving learning and accuracy, while retaining the simplicity of RBMs.

This paper is organized as follows: In Sec. II we give a brief description of the Lindblad master equation which describes Markovian open system dynamics. Sec. III details how finding the steady-state of this equation can be recast in terms of the minimization of a cost function and how it can be estimated using Monte Carlo sampling. Then, in Sec. IV, we give a brief introduction to NNQS, using the original ansatz of Ref. [13], going on to show how this can be generalized to open systems. Sec. V gives a detailed comparison of the different approaches to solve this problem, using two versions of the dissipative transverse field Ising model as a benchmark. We gain insights into which kinds of states are easy and difficult to represent efficiently with the architectures described. Finally, in Sec. VI we give our conclusions and discuss possible ways to build upon these results.
The appendices contain a detailed derivation of the stochastic reconfiguration algorithm and an explanation of how we produce the Markov chains for estimating the required expectation values from the neural network.

## II Open system dynamics

The presence of an external environment fundamentally changes the nature of the dynamics of a quantum system. The state is no longer captured by a wavefunction and the non-unitary dynamics cannot be described by the Schrödinger equation. Instead, the natural description is in terms of a reduced density operator for the system, \(\rho\), and a master equation which defines its time evolution. This then allows for energy dissipation, decoherence and, most importantly for the present manuscript, the relaxation into a non-equilibrium steady-state in the long time limit. If the coupling to the environment is weak and structureless, one can assume that system-induced correlations within the bath decay faster than the effects of the bath on the system. This leads to the particularly simple Lindblad master equation [3] which takes the form

\[\frac{d\rho}{dt}=\mathcal{L}\rho=-i[H,\rho]+\sum_{i}\gamma_{i}\mathcal{D}[A_{i}]\rho. \tag{1}\]

Here, the sum runs over the dissipation channels \(i\) which are defined by the jump operators \(A_{i}\) and rates \(\gamma_{i}\). The dissipation superoperators are given by the Lindblad form

\[\mathcal{D}[A]\rho=A\rho A^{\dagger}-\frac{1}{2}\left(A^{\dagger}A\rho+\rho A^{\dagger}A\right). \tag{2}\]

We only consider time-independent master equations. The time evolution of the density operator is then governed by the formal expression

\[\rho(t)=\mathrm{e}^{\mathcal{L}t}\rho(0), \tag{3}\]

such that in the long time limit the stationary state satisfies

\[\lim_{t\rightarrow\infty}\mathcal{L}\rho(t)=0. \tag{4}\]

To simplify what follows both mathematically and numerically we recast Eq. (1) in Liouville space (sometimes also referred to as Choi's space). In this space the density matrix is reshaped into a vector,

\[\rho=\sum_{m}\sum_{n}\rho_{m,n}\ket{m}\bra{n}\rightarrow\ket{\rho}\rangle=\sum_{m,n}\rho_{m,n}\ket{m,n}\rangle. \tag{5}\]

Here the density matrix is expanded in the basis \(\{\ket{m}\}\) with expansion coefficients \(\rho_{m,n}\). The double-ket notation, \(\ket{\cdot}\rangle\), used above denotes a vectorized operator which lives in a Hilbert space consisting of two copies of the original, \(\ket{\rho}\rangle\in\mathcal{H}\otimes\mathcal{H}\). Then superoperators which act on these operator-kets are elements of Liouville space \(L\in(\mathcal{H}\otimes\mathcal{H})^{*}\otimes(\mathcal{H}\otimes\mathcal{H})\) [42; 43]. Here we often abbreviate the double index \((m,n)\) with a single index \(s\) which runs over the ket- and bra-indices of the density matrix, \(\sum_{m,n}\rho_{m,n}\ket{m,n}\rangle\equiv\sum_{s}\rho(s)\ket{s}\rangle\). With this, Eq. (1) now reads

\[\frac{d}{dt}\ket{\rho}\rangle=L\ket{\rho}\rangle, \tag{6}\]

where

\[L=-i\left(H\otimes\mathbbm{1}-\mathbbm{1}\otimes H^{T}\right)+\sum_{k}\left[A_{k}\otimes A_{k}^{*}-\frac{1}{2}\left(\mathbbm{1}\otimes(A_{k}^{\dagger}A_{k})^{T}+(A_{k}^{\dagger}A_{k})\otimes\mathbbm{1}\right)\right]\]

is the matrix form of the superoperator \(\mathcal{L}\) that acts from the left on a density-ket, \(\ket{\rho}\rangle\). We can then use standard linear algebra results to find the eigenvalues and eigenkets of the matrix \(L\), such that

\[L\ket{\rho_{i}}\rangle=\lambda_{i}\ket{\rho_{i}}\rangle. \tag{7}\]

A stationary state has \(\lambda_{i}=0\) while all other states have \(\mathrm{Re}\,\lambda_{i}<0\).
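For concreteness, the matrix \(L\) can be assembled directly from Kronecker products following the expression above; the following is a minimal numpy sketch for a single driven, decaying spin, using the row-stacking convention implied by Eq. (5). It is an illustration under our own conventions, not the code used in this work.

```python
# A numpy sketch assembling L of Eq. (6) and finding its steady-state, Eq. (7),
# for one spin-1/2 with drive (h/2) sigma^x and decay sqrt(gamma) sigma^-.
import numpy as np

def liouvillian(H: np.ndarray, jumps: list) -> np.ndarray:
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for A in jumps:
        AdA = A.conj().T @ A
        L += np.kron(A, A.conj()) - 0.5 * (np.kron(I, AdA.T) + np.kron(AdA, I))
    return L

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)        # sigma^-
gamma, h = 1.0, 0.5
L = liouvillian(0.5 * h * sx, [np.sqrt(gamma) * sm])

# The steady-state is the eigenvector with eigenvalue 0
vals, vecs = np.linalg.eig(L)
rho = vecs[:, np.argmin(np.abs(vals))].reshape(2, 2)  # row-stacked |rho>>
print(np.round(rho / np.trace(rho), 3))
```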
As we have seen above, the size of the required Liouville space scales much more quickly than the already exponentially growing Hilbert space. For example, for spin-1/2 lattice problems, the state vector of a closed system grows as \(\mathcal{O}(2^{N})\), with the system size \(N\). The density matrix, on the other hand, grows as \(\mathcal{O}(4^{N})\) and hence the Liouvillian has up to \(\mathcal{O}(16^{N})\) elements, which limits the size of accessible systems even further. In the next sections we will detail our approach to tackling this problem.

## III Finding the steady-state

Many routes to accessing larger system sizes in numerical simulations rely on variational methods [44; 45]. For example, tensor network methods [46] can be seen as variationally finding the steady-state by optimizing over the set of states captured by a given tensor network architecture, while corner space RG [47] optimizes a particular (small) set of basis states which can be used to build up to larger system sizes. As we will see later, neural network techniques make use of the same underlying mathematical structures. Crucial to these methods is the use of an ansatz function which specifies the full state with a finite number of parameters, which we denote by \(\alpha\). We may then write \(\rho(s;t)\rightarrow\rho_{\alpha}(s;t)\) and optimize over \(\alpha\) to find the closest representation of the steady-state within this class. For this approach to be an efficient solution, the number of parameters must grow at most polynomially with the system size.

To find the steady-state, \(\lim_{t\rightarrow\infty}\rho(t)\equiv\rho_{ss}\), we require a cost function which measures how close the ansatz is to the true stationary density operator. To do this we make use of the fact that \(L\left|\rho_{ss}\right\rangle\rangle=0\) and that for all other states this quantity is finite. This gives us a cost function to optimize as well as a metric for how well the variational state parametrizes the true steady-state. The cost function we wish to optimize is given by the quantity

\[\left|\mathrm{C}(\alpha)\right|^{2}=\left|\frac{\langle\langle\rho_{\alpha}|L|\rho_{\alpha}\rangle\rangle}{\langle\langle\rho_{\alpha}|\rho_{\alpha}\rangle\rangle}\right|^{2}, \tag{8}\]

which uniquely vanishes in the steady-state. Note that the vectors appearing above are in Liouville space, hence the denominator involves the sum over all elements of the density matrix, \(\langle\langle\rho_{\alpha}|\rho_{\alpha}\rangle\rangle=\sum_{s}|\rho_{\alpha}(s)|^{2}\). Expanding this cost function yields a form that can be easily evaluated via Markov chain Monte Carlo:

\[C_{\alpha} =\frac{\langle\langle\rho_{\alpha}|L|\rho_{\alpha}\rangle\rangle}{\langle\langle\rho_{\alpha}|\rho_{\alpha}\rangle\rangle}\] \[=\frac{1}{\sum_{s^{\prime}}|\rho_{\alpha}(s^{\prime})|^{2}}\sum_{s,s^{\prime}}\rho_{\alpha}^{*}(s)\rho_{\alpha}(s^{\prime})\langle\langle s|L|s^{\prime}\rangle\rangle\] \[=\frac{1}{\sum_{s^{\prime}}|\rho_{\alpha}(s^{\prime})|^{2}}\sum_{s,s^{\prime}}\frac{|\rho_{\alpha}(s)|^{2}\rho_{\alpha}(s^{\prime})}{\rho_{\alpha}(s)}\langle\langle s|L|s^{\prime}\rangle\rangle\] \[=\sum_{s}p(s;\alpha)C_{\mathrm{loc}}(s;\alpha),\]

where

\[p(s;\alpha)=\frac{|\rho_{\alpha}(s)|^{2}}{\sum_{s^{\prime}}|\rho_{\alpha}(s^{\prime})|^{2}} \tag{9}\]

defines a probability distribution over the entire density matrix which we can draw samples from.
The local cost associated to each element of this distribution is then

\[C_{\mathrm{loc}}(s;\alpha)=\sum_{s^{\prime}}\langle\langle s|L|s^{\prime}\rangle\rangle\frac{\rho_{\alpha}(s^{\prime})}{\rho_{\alpha}(s)}. \tag{10}\]

All parts of this local cost can be efficiently calculated. The states \(s^{\prime}\) which connect, via the matrix element of the Liouvillian, to the original state \(s\) are generated during the Metropolis-Hastings step and can be accessed at any later stage of the process; see App. B for details on the algorithm used. Since, for a local Liouvillian, the number of non-zero elements grows only linearly in system size, the evaluation of the matrix elements of \(L\) can also be done efficiently. The samples we draw follow the distribution \(p(s;\alpha)\), and so the cost can be calculated as a simple mean over the local costs [20]

\[C_{\alpha}\approx\frac{1}{N_{s}}\sum_{s}C_{\mathrm{loc}}(s;\alpha), \tag{11}\]

where \(N_{s}\) is the number of samples and the sum runs over the Monte Carlo samples.

Along with an estimate for the cost function we also need a way of updating the parameters, \(\alpha\), such that the state we find is optimized. The simplest way to do this is via stochastic gradient ascent (SGA), where the gradients of \(C_{\alpha}\) are also estimated with the Monte Carlo samples and at each step in the simulation the parameters are updated as

\[\alpha\rightarrow\alpha^{\prime}=\alpha+\eta\nabla_{\alpha}C_{\alpha}, \tag{12}\]

with the learning rate \(\eta\). There are several problems with this approach, e.g. it has been shown [23] that SGA has severe problems with steep energy surfaces. The main problem for the present case is that \(L\) also has a left eigenstate with eigenvalue \(0\),

\[\langle\langle\mathbbm{1}|L=0. \tag{13}\]

This is the trace-state, which is defined as

\[\langle\langle\mathbbm{1}|\rho\rangle\rangle=\mathrm{Tr}\left[\rho\right], \tag{14}\]

and since the dynamics of any physical master equation is necessarily trace preserving we find the result above. This means that there is another state which optimizes the cost function. A solution to both of these problems is to instead use Stochastic Reconfiguration (SR) [20; 23] to update the parameters. SR can be derived by asking which parameter update \(\gamma\),

\[\alpha\rightarrow\alpha^{\prime}=\alpha+\eta\gamma, \tag{15}\]

best approximates a step in real time (see App. A for a derivation). Since the trace state cannot be found by a real-time evolution generated by \(L\), this guarantees that we will find the correct steady-state when optimizing the cost, Eq. (8). Furthermore, SR takes into account the curvature of the energy landscape, speeding up the optimization on flat areas and slowing down in the presence of strong curvature. The result is that the updates are calculated as

\[\gamma=S^{-1}f, \tag{16}\]

where \(S_{i,j}=\langle O_{i}^{*}O_{j}\rangle-\langle O_{i}^{*}\rangle\langle O_{j}\rangle\) is the quantum Fisher matrix and \(f_{i}=\langle O_{i}^{*}L\rangle-\langle O_{i}^{*}\rangle\langle L\rangle\) is the gradient of the cost function in Eq. (8). The angle brackets, \(\langle\ \cdot\ \rangle\), denote the expectation value over the Monte Carlo samples and the operator \(O_{i}(s)\) is the logarithmic derivative of \(\rho_{\alpha}(s)\) with respect to the \(i\)th parameter

\[O_{i}(s)=\frac{1}{\rho_{\alpha}(s)}\frac{\partial}{\partial\alpha_{i}}\rho_{\alpha}(s). \tag{17}\]
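A minimal numpy sketch of one SR update assembled from Monte Carlo samples, following Eqs. (15)-(17), is given below; the precise estimator conventions for complex parameters are our assumptions, and the small diagonal shift anticipates the regularization discussed next.

```python
# A sketch of a single Stochastic Reconfiguration update, Eqs. (15)-(17).
import numpy as np

def sr_update(alpha, O, L_loc, eta=1e-2, lam=1e-3):
    """O: (N_s, N_p) log-derivatives O_i(s); L_loc: (N_s,) local estimates of L."""
    O_mean = O.mean(axis=0)
    # Quantum Fisher matrix S_ij = <O_i* O_j> - <O_i*><O_j>
    S = (O.conj().T @ O) / O.shape[0] - np.outer(O_mean.conj(), O_mean)
    # Gradient vector f_i = <O_i* L> - <O_i*><L>
    f = (O.conj() * L_loc[:, None]).mean(axis=0) - O_mean.conj() * L_loc.mean()
    gamma = np.linalg.solve(S + lam * np.eye(S.shape[0]), f)   # Eq. (16)
    return alpha + eta * gamma                                  # Eq. (15)

rng = np.random.default_rng(0)
alpha = rng.standard_normal(10) + 1j * rng.standard_normal(10)
O = rng.standard_normal((500, 10)) + 1j * rng.standard_normal((500, 10))
L_loc = rng.standard_normal(500) + 1j * rng.standard_normal(500)
print(sr_update(alpha, O, L_loc).shape)  # (10,)
```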
The matrix \(S\) is positive semi-definite; however, when estimating it via Monte Carlo sampling it can happen that some eigenvalues vanish and \(S\) becomes singular. One can work around this problem by either calculating the pseudo-inverse or adding a small regularization, \(\lambda\approx 10^{-3}\), to the diagonal. We employ the latter method here. We next need to specify the form of our ansatz function for the density operator, \(\rho_{\alpha}\). To do this we first give a brief overview of how neural networks can be used to represent many-body wavefunctions.

## IV Neural network quantum states

Neural network quantum states are a variational ansatz whose parameters and functional form are defined by an underlying neural network architecture. This approach has been recently developed to describe the many-body wavefunction of interacting spin systems and was introduced in Ref. [13]. The key insight here is that the heart of the many-body problem is to find the relevant parts of Hilbert space in which e.g. the ground state of a system lives. This is essentially a problem of dimensionality reduction and feature extraction, which are the two strongest points of neural networks. NNQS use this ability of neural networks to efficiently represent any continuous function [16; 17; 18], e.g. a wavefunction [13; 21; 22] or density matrix [33; 34; 35], by learning the most important features which define that function.

The simplest NNQS architecture is the RBM. These are bilayer neural networks with one visible and one hidden layer of neurons. The network is parametrized by a bias attached to each neuron and a set of weights which connects the two layers. This setup is shown schematically in Fig. 1.

Figure 1: Schematic representation of an RBM. It shows the visible units (green) connected to the hidden units (blue) via the weight matrix (double-headed arrows). The single-headed arrows labeled by \(a_{i}\) and \(b_{i}\) show the biases applied to the neurons.

In the same spirit as the discussion in the previous section, it is possible to then write a variational ansatz for a wavefunction for a spin-1/2 lattice model. This takes the form

\[\Psi_{\alpha}(s) =\sum_{\{h_{i}\}}\mathrm{e}^{\sum_{j}a_{j}s_{j}+\sum_{i}b_{i}h_{i}+\sum_{ij}W_{ij}h_{i}s_{j}} \tag{18}\] \[=\mathrm{e}^{\sum_{j}a_{j}s_{j}}\prod_{i=1}^{M}2\cosh\left[b_{i}+\sum_{j}^{N}W_{ij}s_{j}\right], \tag{19}\]

where, in the second line, we use the fact that the sum over configurations can be calculated analytically. Here \(a_{j}\) (\(b_{i}\)) denotes the visible (hidden) bias of the \(j\)th site (\(i\)th hidden unit), and \(W_{ij}\) the weights connecting the \(j\)th visible unit to the \(i\)th hidden unit. \(M\) (\(N\)) denotes the number of hidden (visible) units. The ratio between hidden and visible units is \(\beta=M/N\). For \(\beta=0\) the only states which can be described are those that would arise in a mean-field description, i.e. those which are a product of single site wavefunctions. Increasing the value of \(\beta\) allows a systematic increase in the amount of entanglement which can be described [13]. A further strength of the RBM wavefunction is the simplicity of its derivatives with respect to the parameters. This allows for efficient evaluation of all of the derivatives required to update the state.

In its simplest form the RBM ansatz described above is only able to discriminate between two different visible states, \(s=[-1,1]\), which allows at most for the representation of pure states of spin-1/2 systems. However, this is not sufficient to describe a vectorised density operator in Liouville space which, even for a spin-1/2 system, has 4 elements. To overcome this issue we use an approach
based on that introduced in Ref. [19] for the study of pure states of spin-1 lattices. This was achieved by adding an additional set of biases and weights which allows discrimination between the \(s=[-1,0,1]\) states. We can adapt this idea to provide a basis for a NNQS ansatz for density-kets, \(\ket{\rho}\rangle\). Such an ansatz needs to be able to discriminate between the four states of the local density operator, while maintaining a one-to-one correspondence between the visible layer and the physical system. By adding yet one more set of visible biases and weights we are able to discriminate the four different local states, which allows us to represent a density-ket. Furthermore, we do not require any additional visible nodes to achieve this, as in Ref. [42]. The total ansatz function now reads:

\[\rho_{\alpha}(s)=\mathrm{e}^{\sum_{j}^{N}a_{j}^{(1)}s_{j}+a_{j}^{(2)}s_{j}^{2}+a_{j}^{(3)}s_{j}^{3}}\\ \times\prod_{i=1}^{M}2\cosh\left[b_{i}+\sum_{j}^{N}U_{i,j}s_{j}+V_{i,j}s_{j}^{2}+W_{i,j}s_{j}^{3}\right], \tag{20}\]

where \(N\) and \(M\) are the number of visible and hidden nodes respectively. We have introduced a set of biases, \(a^{(n)}\), to the visible units which allow us to distinguish the different visible states, \(b\) again gives the bias on each hidden neuron, and \(U\), \(V\), \(W\) are the weight matrices that connect the two layers. These biases and connections are shown schematically in Fig. 2.

Figure 2: Schematic representation of an LDM. Similar to the RBM, it has visible (green) and hidden (blue) units. LDMs, however, have three visible biases \(a_{i}^{(1,2,3)}\) as well as three weight matrices \(U\), \(V\), and \(W\) (double arrows). The weight matrices couple to different powers of the configuration vector \(s\) and encode correlations between different local states.

Similar to the pure-state case of Eq. (18), this ansatz has labeling freedom and very simple derivatives which can be found analytically and efficiently implemented. To label the density matrix elements on each site we use the mapping

\[s=\begin{cases}2&\rightarrow\ket{\uparrow\uparrow}\rangle\\ 1&\rightarrow\ket{\uparrow\downarrow}\rangle\\ -1&\rightarrow\ket{\downarrow\uparrow}\rangle\\ -2&\rightarrow\ket{\downarrow\downarrow}\rangle\end{cases} \tag{21}\]

where \(s=\pm 2\) denote the diagonal density matrix elements, while \(s=\pm 1\) are the off-diagonal elements. If the number of hidden units \(M=0\), this ansatz is able to exactly capture those states described by mean-field theory, i.e. where the full density matrix can be factorized onto individual sites. The parameter \(M\) then allows a systematic increase in the amount of correlations which can be represented, in a similar way to the bond dimension of an MPS. This ansatz, which we refer to as a _Liouville Density Machine_ (LDM), bears some similarities to that proposed in Ref. [42]. In both cases the problem is transformed into Liouville space, but in Ref. [42] this is achieved by adding an additional visible unit on each site. This allows them to represent the four required states. However, using two visible units to represent the state of a single spin leads to difficulties representing certain product states, limiting the expressibility of the ansatz. These approaches should be seen in contrast to those in Refs. [32; 33; 34; 35]. In these papers a purification ansatz is used: an extra layer of hidden units is added to represent the state of an auxiliary system which, when traced over, leaves the appropriate density matrix for the original system. Similar techniques have been applied to MPS simulations [36; 37]. In what follows we will refer to this purification ansatz as a _Neural Density Machine_ (NDM).
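Before moving on to the results, the following is a minimal numpy sketch of evaluating the logarithm of the LDM amplitude in Eq. (20) for a configuration labeled as in Eq. (21); the complex Gaussian initialization of the parameters is our assumption.

```python
# A sketch evaluating log rho_alpha(s) of Eq. (20); complex Gaussian
# initialization of the parameters is an assumption.
import numpy as np

class LDM:
    def __init__(self, n_visible: int, beta: float = 1, scale: float = 1e-2, seed: int = 0):
        rng = np.random.default_rng(seed)
        n_hidden = int(beta * n_visible)
        cplx = lambda *s: scale * (rng.standard_normal(s) + 1j * rng.standard_normal(s))
        self.a = cplx(3, n_visible)              # the three visible biases a^(1,2,3)
        self.b = cplx(n_hidden)                  # hidden biases
        self.UVW = cplx(3, n_hidden, n_visible)  # weight matrices U, V, W

    def log_rho(self, s: np.ndarray) -> complex:
        """s: entries in {-2, -1, 1, 2}, one per site, as in Eq. (21)."""
        powers = np.stack([s, s**2, s**3])                 # (3, N)
        visible = np.sum(self.a * powers)                  # sum_j a^(n)_j s_j^n
        theta = self.b + np.einsum('knj,kj->n', self.UVW, powers)
        return visible + np.sum(np.log(2 * np.cosh(theta)))

ldm = LDM(n_visible=6)
print(ldm.log_rho(np.array([2, -1, 1, -2, 2, 2])))
```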
## V Results

To benchmark and understand the strengths and limitations of the LDM ansatz we study the stationary state of the 1D dissipative transverse-field Ising (TFI) model [40; 42; 48; 49; 50; 35]. The system consists of a chain of spin-1/2 particles. The Hamiltonian part of the evolution is governed by

\[H=\frac{J}{4}\sum_{i}^{N-1}\sigma_{i}^{z}\sigma_{i+1}^{z}+\frac{h}{2}\sum_{i}^{N}\sigma_{i}^{x}. \tag{22}\]

Here, \(N\) is the number of sites, \(\sigma\) are the usual Pauli matrices, \(J\) is the interaction strength and \(h\) is the strength of the transverse field. The dissipation is governed by excitation loss on each site, so the jump operators which appear in the Liouvillian are \(A_{i}=\sqrt{\gamma}\sigma_{i}^{-}\). We choose the units of the interaction and field such that the dissipation strength is \(\gamma=1\). In all calculations that follow we set the interaction strength to \(J/\gamma=2\). The steady-state of this model then has simple solutions in two limiting cases. When \(h\to 0\) the dissipation dominates the dynamics and the stationary state is a pure product state with all the spins pointing down,

\[\lim_{h\to 0}\rho_{ss}=\bigotimes^{N}\ket{\downarrow}\bra{\downarrow}. \tag{23}\]

In the opposite limit, where \(h\rightarrow\infty\), there is only competition between the local onsite field and the dissipation, and so the steady-state again ends up as a product state, but this time the state on each site is mixed,

\[\lim_{h\rightarrow\infty}\rho_{ss}=\bigotimes\limits^{N}\frac{1}{2}\left(\ket{\uparrow}\bra{\uparrow}+\ket{\downarrow}\bra{\downarrow}\right). \tag{24}\]

At intermediate values of \(h\) the steady-state interpolates between these two, building up complex long-range classical and quantum correlations. We will compare the results of the LDM ansatz to those obtained using the NDM approach as implemented in the NetKet library [51]. For small system sizes we will also be able to compare to results obtained from exact diagonalization, calculated using QuTiP [52].

As a first example we look at the case of a small system with \(N=6\). Our findings are summarized in Fig. 3. In each case we randomly initialize the parameter values and at each step we take Monte Carlo samples to approximate the cost function and the best updates to the parameters. At the end of each run we produce a new Markov chain, but this time sample from a probability distribution which follows the diagonal of the density matrix. This allows us to estimate the expectation values of observables of interest. Since there are fewer diagonal states than entries in the full density matrix we usually only need about 500-800 diagonal samples. In panels (a)-(b) of Fig. 3 we show the expectation value of \(\sigma_{i}^{x}\) on the central site and the \(\sigma_{i}^{z}\sigma_{i+1}^{z}\) correlation function on the central pair of sites as a function of the field strength, \(h\). This gives a good indication of how well the various approaches are able to reproduce single-site observables. We see that, in general, there is good agreement between the exact results and those obtained using both the LDM and NDM approaches.
In all cases the agreement is worst in the central region where \(1\leq h/\gamma\leq 2.5\), which is in agreement with the results of [35]. For the LDM, a hidden unit density of \(\beta=1\) corresponds to 132 parameters; for the NDM this is 174 parameters. We see that even with only 3/4 of the parameters the LDM generally gives as good or better results than the NDM. By increasing the value of \(\beta\) in the LDM we also increase the number of parameters so that, at \(\beta=2\), there are 246 parameters. We see that for the LDM ansatz increasing the value of \(\beta\), and hence the number of parameters used, is able to significantly decrease the deviation from the exact result; thus we may use \(\beta\) as a way of checking for convergence when exact results are no longer possible. We can also see this effect more clearly in Fig. 3(c) and (d), where we show the Monte Carlo estimated cost function for each ansatz. The value of the cost function is significantly decreased at all values of \(h\) when \(\beta\) is increased. We also see that for the LDM ansatz the cost function is at a maximum in the regions where the convergence to the steady-state is worst; this allows us to use this estimate to again judge the accuracy of our results for system sizes where exact methods are unavailable. This is not true of the NDM approach, where the cost function reaches a maximum at intermediate values of \(h\) and does not significantly decrease as \(h\) increases further. This is because the mixed product state described in Eq. (24) is not so easy to represent in a purification ansatz; this mixed state requires a large amount of entanglement between the real and auxiliary spins. This is not the case for the LDM approach, which can represent this state exactly with a hidden unit density \(\beta=0\).

Figure 3: Comparison of the steady-state of Eq. (22) using both the NDM and LDM approaches. The system size, \(N=6\), is small enough that exact diagonalization is possible. (a) The expectation value of \(\sigma_{i}^{x}\) as a function of \(h\). (b) The expectation value of the \(\sigma_{i}^{z}\sigma_{i+1}^{z}\) correlation function. In both cases the expectation value is taken on the central site(s). The blue lines show the exact result, the NDM with \(\beta=1\) is green, the LDM with \(\beta=1(2)\) is orange (red). (c) The absolute value squared of the cost function in Eq. (8) for the two LDM results. (d) The expectation value \(\langle L^{\dagger}L\rangle\) used as a cost function for the NDM ansatz employed by NetKet. For both cases with \(\beta=1\) we used 4500 samples and optimized for 1000 steps with a learning rate of \(\eta=10^{-2}\) and a regularization of \(\lambda=10^{-2}\); for the expectation values we used 500 diagonal samples. In the \(\beta=2\) case we used 6000 samples, 2000 steps and 800 diagonal samples. The other meta parameters were the same in both cases.

A further comparison is shown in Fig. 4. This figure shows how the cost function of the LDM evolves for two different values of \(\beta\) over 6000 steps at one of the most difficult points, \(h=\gamma\).

Figure 4: Convergence of the LDM ansatz for two different hidden unit densities. Panel (a) shows the running estimate for the cost function while panel (b) is the variance in the same quantity. Increasing the hidden unit density improves the accuracy of the results. Both calculations were done at \(h/\gamma=1\) using the same parameters as Fig. 3.
By increasing the number of variational parameters from 132 to 246 we were able to reduce the cost function by an order of magnitude. We also see that simply checking the value of the cost function does not give an accurate stopping condition for the algorithm. After around 1400 steps the cost function for \(\beta=2\) is very small but the variance is quite large. This means that the LDM has not found an eigenstate of the Liouvillian but is still giving a small value for the cost function. We propose that a condition based on a combination of both of these quantities can give a good way to automatically stop the learning process when a good approximation to the steady-state has been reached.

We now go on to examine how the accuracy of these approaches scales to larger system sizes. At \(N=16\) it is impossible to use exact methods to compare against; however, this model is straightforward to solve with MPS simulations, which we found to be fully converged for a bond dimension of \(\chi=7\). Results of these calculations are shown in Fig. 5.

Figure 5: Steady-state of Eq. (22) as a function of field strength, similar to Fig. 3, for a larger system size \(N=16\). The expectation values of \(\sigma_{i}^{x}\) and \(\sigma_{i}^{z}\sigma_{i+1}^{z}\) on the central site(s) are shown in panels (a)-(b) and the relevant cost functions in panels (c)-(d). The LDM and NDM results are compared to those obtained from MPS simulations. We used \(\beta=1.4\) in the case of the LDM and \(\beta=1\) for the NDM to ensure that both approaches use a similar number of parameters. The NDM has 1104 parameters, while the LDM has 1126. In both cases we evolved for 7000 steps with a learning rate of \(\eta=10^{-3}\), we took 9000 Monte Carlo samples at each step and a regularization of \(\lambda=3\times 10^{-3}\). We used 800 diagonal samples to estimate the expectation values.

The number of parameters for the NDM is 1104 while the LDM has 1126. For reference, a bond dimension of \(\chi=7\) corresponds to 1952 matrix elements in the MPS. We see very similar behavior to the \(N=6\) case: both approaches are more difficult to converge in the region of intermediate \(h/\gamma\), and the cost function for the LDM has a peak in this region. In Fig. 6 we show how the convergence can again be improved by increasing the hidden unit density \(\beta\). Here we choose \(h=2\gamma\) as this is the point where the convergence is worst. We see that as \(\beta\) is increased the cost function decreases towards zero and the expectation value moves towards that found in the MPS simulation. The expectation value here is a two-point correlation function which, in general, is harder to converge than single-site operators.

Figure 6: Improvement of convergence as a function of \(\beta\) for the \(N=16\) TFI model at \(h=2\gamma\). In (a) we show the \(\sigma_{i}^{x}\sigma_{i+1}^{x}\) correlation function and in panel (b) the estimate for the cost function. All calculations ran for 7000 steps. To accommodate higher parameter counts we increased the number of samples with \(\beta\), from 9000 at \(\beta=1\) to 17000 at \(\beta=2\). Other parameters are the same as in Fig. 5.

### TFI Model with rotated Hamiltonian

By making a simple change to the model discussed above it is possible to make the convergence of both neural network approaches considerably worse. To do this we change the Hamiltonian to a rotated basis [53, 48, 50],

\[H=\frac{J}{4}\sum_{i=0}^{4}\sigma_{i}^{x}\sigma_{i+1}^{x}+\frac{h}{2}\sum_{i=0}^{5}\sigma_{i}^{z}, \tag{25}\]

but keep the dissipation processes the same as for the previous model. In this case the dissipation does not explicitly break the \(\mathbb{Z}_{2}\) symmetry of the model, as the interaction term is perpendicular to the dissipation. Therefore the competition between the coherent and dissipative dynamics gives rise to complex correlations in the steady-state. This leads to a very rich mean-field phase diagram in high dimensions, with possibilities for both first and second order phase transitions between different magnetic orderings [53, 49, 54]. In 1D these phase transitions turn into continuous crossovers, but complex correlations still build up when \(h\sim J\sim\gamma\). For a detailed review of the behavior of this model in 1D see Ref. [50]. We again study the convergence of both the LDM and NDM approaches for finding the steady-state of this model for a system size of \(N=6\). In panels (a) and (b) of Fig. 7 we show how both a single-site and a two-site observable vary with the applied field \(h\). We see that, even when using a large number of samples and parameters, the NDM ansatz is not able to find a good
approximation to the exact result, while the LDM is able to get much closer to the expected result, especially at small values of \(h\). We see that for both approaches the cost function estimate is much larger than it was for the simpler model described by Eq. (22). This is because the steady-state in this case has much more complex correlations than in the previous model; simple expressions like those in Eqs. (23)-(24) are not available, except at very large \(h\to\infty\) where the steady-state is the same as that given in Eq. (23). We next go on to show how measures of the entanglement found in the steady-state can give good intuition for when these kinds of difficulties arise.

### Entanglement Properties

To further understand the convergence of these approaches we investigate the entanglement present in the steady-states of both models over a range of parameters. Contrary to pure states, quantifying the amount of correlations present in a mixed state isn't as simple as just calculating the entanglement entropy between two halves of the system [55]. For our purposes we find that the negativity [56; 57],

\[\mathcal{N}=\frac{||\rho^{T_{A}}||-1}{2}, \tag{26}\]

provides a useful measure of the correlations which are difficult to represent using the LDM approach described above. Here, \(||\rho^{T_{A}}||\) denotes the trace norm of the partially transposed density matrix, with the transpose taken over the degrees of freedom labeled by \(A\). This quantity gives a measure of the separability of a state. If two subsystems are entangled, the partial transpose can lead to negative eigenvalues, which leads to a trace norm greater than one and hence a non-zero negativity.
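A short sketch of evaluating Eq. (26), assuming QuTiP's `partial_transpose`, is given below; the Bell-state values are a sanity check rather than results from the models above.

```python
# A sketch of the negativity, Eq. (26), using QuTiP's partial transpose.
from qutip import basis, ket2dm, partial_transpose, tensor

def negativity(rho, mask):
    """N = (||rho^{T_A}|| - 1) / 2; mask marks the subsystem(s) to transpose."""
    return (partial_transpose(rho, mask).norm('tr') - 1) / 2

up, dn = basis(2, 0), basis(2, 1)
bell = (tensor(up, up) + tensor(dn, dn)).unit()
print(negativity(ket2dm(bell), [1, 0]))            # ~0.5 for a Bell pair
print(negativity(ket2dm(tensor(up, dn)), [1, 0]))  # ~0.0 for a product state
```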
Panel (a) of Fig. 8 shows the negativity for both models considered in this paper for the three different possible bipartitions of a four-site system. In the case of the simpler model in Eq. (22) with the \(\sigma^{z}\sigma^{z}\) interaction we see a clear peak in the negativity at around \(h\sim 0.9\gamma\) for all partitions and a fast decay to zero at values of \(h\) above and below this point. This is because of the two limiting cases described in Eqs. (23) and (24), which both have zero negativity. The peak corresponds well to the range of \(h\)-values which were the most difficult for the neural networks to converge. In the case of the more difficult model described in Eq. (25) with \(\sigma^{x}\sigma^{x}\) interactions we see a much higher negativity across the whole range of values of \(h\). This ties in well with our experience that this model is much harder to represent using both the LDM and NDM approaches across the board, with generally slight improvements around \(h=0\) and \(h\gtrsim 4\gamma\). We see that there is no region where the negativity reaches zero; this model does not have a simple product-state steady-state anywhere in the observed parameter range.

Figure 7: Optimizing the rotated TFI model of Eq. (25) for \(N=6\). For both the LDM and NDM the optimization was run for 10000 steps, with a hidden-unit density of \(\beta=2\). At each step we used 15000 samples to estimate the cost function and required gradients. In panel (a) we give the steady-state expectation value of \(\sigma_{i}^{x}\) and in (b) we show the two-point correlation function \(\langle\sigma_{i}^{x}\sigma_{i+1}^{z}\rangle\). Panels (c) and (d) show the relevant cost functions for each ansatz. For both the NDM and LDM we used a learning rate of \(\eta=10^{-2}\), a regularization of \(\lambda=3\times 10^{-3}\), and 800 diagonal samples.

Figure 8: (a) The steady-state entanglement negativity, defined in Eq. (26), and (b) purity, \(\mathcal{P}=\mathrm{Tr}[\rho^{2}]\), for the \(\sigma^{x}\sigma^{x}\)-model (red) and the \(\sigma^{z}\sigma^{z}\)-model (blue). Comparing with the results of Figs. 3 and 7 we observe a correlation between a large negativity and poor accuracy of the neural network.

## VI Conclusion

In summary, we have proposed a NNQS ansatz which compactly represents density matrices in Liouville space, allowing us to find the steady-state of lattice models described by a Markovian master equation. This LDM approach was shown to be able to calculate the steady-state of a 1D open transverse field Ising model with 6 and 16 sites. The results were compared to the powerful NDM ansatz as implemented in NetKet. We found that our approach is always able to reach a comparable accuracy and in many cases is better able to find the steady-state, especially when it contains a lot of correlations. We were able to show that the accuracy of this approach can be systematically improved by increasing the number of hidden units and hence free parameters in the ansatz. This permits a clear understanding of the class of states accessible to the LDM ansatz. As we show in Figs. 3 and 5, the regions where convergence is most difficult are very strongly correlated with parameters where the true steady-state has high negativity. This is in contrast with the NDM approach, where there is difficulty representing mixed states with no correlations, leading to plateaus in the cost function which do not significantly decrease as the steady-state becomes more separable. The results shown here are just a starting point for examining the usefulness of neural network approaches for finding the steady-state of open quantum systems. The RBM ansatz we used is the simplest possible architecture, and extending the approach to deep networks such as deep RBMs [58], recurrent neural networks [59], or transformers [60], which have already proven successful for closed systems, provides a route to improving both the accuracy and numerical efficiency. The models studied here are also very simple and are accessible by other methods such as tensor-network-based techniques.
However, the lack of an underlying lattice geometry in these neural networks can be exploited to study models with long-range interactions and in higher dimensions, which are much more difficult to simulate using other approaches. ###### Acknowledgements. We acknowledge useful discussions with Andrew Daley, Damian Hoffman, Daniela Pfannkuche and Filippo Vicentini. SK acknowledges financial support from EPSRC (EP/T517938/1).
2307.13899
Regularizing Neural Networks with Meta-Learning Generative Models
This paper investigates methods for improving generative data augmentation for deep learning. Generative data augmentation leverages the synthetic samples produced by generative models as an additional dataset for classification with small dataset settings. A key challenge of generative data augmentation is that the synthetic data contain uninformative samples that degrade accuracy. This is because the synthetic samples do not perfectly represent class categories in real data and uniform sampling does not necessarily provide useful samples for tasks. In this paper, we present a novel strategy for generative data augmentation called meta generative regularization (MGR). To avoid the degradation of generative data augmentation, MGR utilizes synthetic samples in the regularization term for feature extractors instead of in the loss function, e.g., cross-entropy. These synthetic samples are dynamically determined to minimize the validation losses through meta-learning. We observed that MGR can avoid the performance degradation of naïve generative data augmentation and boost the baselines. Experiments on six datasets showed that MGR is effective particularly when datasets are smaller and stably outperforms baselines.
Shin'ya Yamaguchi, Daiki Chijiwa, Sekitoshi Kanai, Atsutoshi Kumagai, Hisashi Kashima
2023-07-26T01:47:49Z
http://arxiv.org/abs/2307.13899v2
# Regularizing Neural Networks with Meta-Learning Generative Models

###### Abstract

This paper investigates methods for improving generative data augmentation for deep learning. Generative data augmentation leverages the synthetic samples produced by generative models as an additional dataset for classification with small dataset settings. A key challenge of generative data augmentation is that the synthetic data contain uninformative samples that degrade accuracy. This is because the synthetic samples do not perfectly represent class categories in real data and uniform sampling does not necessarily provide useful samples for tasks. In this paper, we present a novel strategy for generative data augmentation called _meta generative regularization_ (MGR). To avoid the degradation of generative data augmentation, MGR utilizes synthetic samples in the regularization term for feature extractors instead of in the loss function, e.g., cross-entropy. These synthetic samples are dynamically determined to minimize the validation losses through meta-learning. We observed that MGR can avoid the performance degradation of naïve generative data augmentation and boost the baselines. Experiments on six datasets showed that MGR is effective particularly when datasets are smaller and stably outperforms baselines.

## 1 Introduction

While deep neural networks have achieved impressive performance on various machine learning tasks, training them still requires a large amount of labeled training data in supervised learning. Labeled datasets are expensive when only a few experts can annotate the data, e.g., medical imaging. In such scenarios, _generative data augmentation_ is a promising option for improving the performance of models. Generative data augmentation basically adds pairs of synthetic samples from conditional generative models and their target labels into real training datasets. The expectations of generative data augmentation are that the synthetic samples interpolate missing data points and perform as oversampling for classes with fewer real training samples [1]. This simple method can improve the performance of several tasks with less diversity of inputs, such as medical imaging tasks [2; 3; 4; 5]. However, in general cases that require the recognition of more diverse inputs (e.g., CIFAR datasets [6]), generative data augmentation degrades rather than improves the test accuracy [7]. Previous studies have indicated that this can be caused by the low quality of synthetic samples in terms of diversity and fidelity [7; 8]. If this hypothesis is correct, we can expect high-quality generative models (e.g., StyleGAN2-ADA [9]) to resolve the problem; existing generative data augmentation methods adopt earlier generative models, e.g., ACGAN [10] and SNGAN [11]. Contrary to the expectation, this is not the case. We observed that generative data augmentation fails to improve models even when using a high-quality StyleGAN2-ADA (Figure 1). Although the samples partially appear to be real to humans, they are not yet sufficient to train classifiers in existing generative data augmentation methods. This paper investigates methodologies for effectively extracting useful information from generative models to improve model performance. We address this problem based on the following hypotheses. First, _synthetic samples are actually informative but do not perfectly represent class categories in real data_. This is based on a finding by Brock et al. [14] called "class leakage," where a class-conditional synthetic sample contains attributes of other classes.
For example, they observed failure samples including an image of "tennis ball" containing attributes of "dogs" (Figure 4(d) of [14]). These class-leaked samples do not perfectly represent the class categories in the real dataset. If we use such class-leaked samples for updating classifiers, the samples can distort the decision boundaries, as shown in Figure 3. Second, _regardless of their quality, generative models inherently contain samples that are uninformative for solving the task_. This is simply because the generative models are not explicitly optimized to generate informative samples for learning the conditional distribution \(p(y|x)\); they are optimized only for learning the data distribution \(p(x)\). To maximize the gain from synthetic samples, we should select appropriate samples for the training tasks.

In this paper, we present a novel regularization method called _meta generative regularization_ (MGR). Based on the above hypotheses, MGR is composed of two techniques for improving generative data augmentation: _pseudo consistency regularization_ (PCR) and _meta pseudo sampling_ (MPS). PCR uses synthetic samples in a regularization term of the classifier's training objective. Instead of supervised learning with the negative log-likelihood \(-\log p(y|x)\), i.e., cross-entropy, on synthetic samples, we regularize the feature extractor to avoid distortions of the decision boundaries. That is, PCR leverages synthetic samples only for learning feature spaces. PCR penalizes the feature extractor by minimizing the gap between variations of a synthetic sample, which is inspired by consistency regularization in semi-supervised learning [15; 16; 17]. MPS corresponds to the second hypothesis, and its objective is to select useful samples for training tasks by dynamically searching for optimal latent vectors of the generative models. Therefore, we formalize MPS as a bilevel optimization framework of a classifier and a finder, a neural network for searching latent vectors. Specifically, this framework updates the finder through meta-learning to reduce the validation loss and then updates the classifier to reduce the PCR loss (Figure 2). By combining PCR and MPS, we can improve the performance even when existing generative data augmentation degrades it (Figure 1). We conducted experiments with multiple vision datasets and observed that MGR can stably improve baselines in various settings by up to 7 percentage points of test accuracy. Further, through visualization studies, we confirmed that MGR utilizes the information in synthetic samples to learn feature representations through PCR and obtain meaningful samples through MPS.

## 2 Preliminary

### Problem Setting

We consider a classification problem in which we train a neural network model \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) on a labeled dataset \(\mathcal{D}=\{(x^{i},y^{i})\in\mathcal{X}\times\mathcal{Y}\}_{i=1}^{N}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) are the input and output label spaces, respectively. Here, we can use a generative model \(G_{\Phi}:\mathcal{Z}\times\mathcal{Y}\rightarrow\mathcal{X}\), which is trained on \(\mathcal{D}\).
We assume that \(G_{\Phi}\) generates samples from a latent vector \(z\in\mathcal{Z}\), conditioned on a categorical label \(y\in\mathcal{Y}\), where \(z\) is sampled from a standard Gaussian distribution \(p(z)=\mathcal{N}(0,I)\) and \(y\) is uniformly sampled from \(\mathcal{Y}\)2. We refer to the classification task on \(\mathcal{D}\) as the main task, and \(f_{\theta}\) as the main model. \(f_{\theta}\) is defined by a composition of a feature extractor \(g_{\psi}\) and a classifier \(h_{\omega}\), i.e., \(f_{\theta}=h_{\omega}\circ g_{\psi}\) and \(\theta=[\psi,\omega]\). To validate \(f_{\theta}\), we can use a small validation dataset \(\mathcal{D}_{\text{val}}=\{(x_{\text{val}}^{i},y_{\text{val}}^{i})\in\mathcal{X}\times\mathcal{Y}\}_{i=1}^{N_{\text{val}}}\), which has no intersection with \(\mathcal{D}\) (i.e., \(\mathcal{D}\cap\mathcal{D}_{\text{val}}=\emptyset\)).

Footnote 2: Although we basically use conditional \(G_{\Phi}\) for comparing our method and generative data augmentation, our method can be used with unconditional \(G_{\Phi}\) (see Sec. 4.6).

Figure 1: Accuracy gain using meta generative regularization on Cars [12] with a ResNet-18 classifier [13] and StyleGAN2-ADA [9] (FID: 9.5).

### Generative Data Augmentation

A typical generative data augmentation method trains a main model \(f_{\theta}\) with both real data and synthetic data from the generative models [19]. We first generate synthetic samples to be utilized as additional training data for the main task. Most previous studies on generative data augmentation [19; 20; 8] adopt conditional generative models for \(G_{\Phi}\) and generate a pseudo dataset \(\mathcal{D}_{\text{p}}\) as \[\mathcal{D}_{\text{p}}=\{(x_{\text{p}}^{i},y_{\text{p}}^{i})\mid x_{\text{p}}^{i}=G_{\Phi}(z^{i},y_{\text{p}}^{i})\}_{i=1}^{N_{\text{p}}}, \tag{1}\] where \(z^{i}\) is sampled from a prior distribution \(p(z)\), and \(y_{\text{p}}^{i}\) is uniformly sampled from \(\mathcal{Y}\). Subsequently, \(f_{\theta}\) is trained on both \(\mathcal{D}\) and \(\mathcal{D}_{\text{p}}\) using the following objective function. \[\min_{\theta} \mathcal{L}(\theta)+\lambda\mathcal{L}_{\text{p}}(\theta), \tag{2}\] \[\mathcal{L}(\theta) = \mathbb{E}_{(x,y)\in\mathcal{D}}\ell(f_{\theta}(x),y),\] (3) \[\mathcal{L}_{\text{p}}(\theta) = \mathbb{E}_{(x_{\text{p}},y_{\text{p}})\in\mathcal{D}_{\text{p}}}\ell_{\text{p}}(f_{\theta}(x_{\text{p}}),y_{\text{p}}), \tag{4}\] where \(\ell\) is a loss function of the main task (e.g., cross-entropy), \(\ell_{\text{p}}\) is a loss function for the synthetic samples, and \(\lambda\) is a hyperparameter for balancing \(\mathcal{L}\) and \(\mathcal{L}_{\text{p}}\). In previous works, \(\ell_{\text{p}}\) is often set the same as \(\ell\). Although optimizing Eq. (2) with respect to \(\theta\) is expected to boost test performance by interpolating or oversampling conditional samples [1], its naive application degrades the performance of \(f_{\theta}\) in general settings [7]. In this paper, we explore methods to resolve the degradation of generative data augmentation and maximize the performance gain from \(\mathcal{D}_{\text{p}}\).
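To make Eqs. (1)-(4) concrete, a minimal PyTorch sketch of one conventional generative-data-augmentation step might look as follows. The classifier `f`, generator `G`, batch tensors, and the `latent_dim` attribute are toy stand-ins of ours, and \(\ell_{\text{p}}\) is set to cross-entropy as in prior work.

```python
import torch
import torch.nn.functional as F

def gda_step(f, G, x, y, num_classes, n_p=64, lam=0.5):
    """One training step of Eq. (2): real cross-entropy plus a weighted
    cross-entropy on a freshly sampled pseudo batch (Eq. (1))."""
    # Eq. (1): draw latents from p(z) and labels uniformly from Y.
    z = torch.randn(n_p, G.latent_dim)        # latent_dim: assumed attribute
    y_p = torch.randint(num_classes, (n_p,))
    with torch.no_grad():                     # G is pretrained and frozen
        x_p = G(z, y_p)
    loss = F.cross_entropy(f(x), y)           # Eq. (3)
    loss_p = F.cross_entropy(f(x_p), y_p)     # Eq. (4) with l_p = cross-entropy
    return loss + lam * loss_p                # Eq. (2)
```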
## 3 Proposed Method

In this section, we describe our proposed method, MGR. The training using MGR is formalized as an alternating optimization of the main model and a finder network for searching latent vectors of \(G_{\Phi}\), as shown in Figure 2. To maximize the gain from synthetic samples, MGR regularizes the feature extractor \(g_{\psi}\) of \(f_{\theta}\) using PCR while effectively sampling useful samples for generalization from \(G_{\Phi}\) using MPS. We show the overall algorithm of MGR in Appendix A.

Figure 4: Meta pseudo sampling framework. We meta-optimize the finder \(F_{\phi}\) to generate a useful latent vector \(F_{\phi}(z)\) for training a model parameter \(\theta\) by minimizing the validation loss with the once-updated parameter \(\theta^{\prime}\) obtained from a real sample \(x\) and a synthetic sample \(x_{\text{p}}=G_{\Phi}(F_{\phi}(z))\).

Figure 3: Distortion of a decision boundary caused by generative data augmentation with conditional synthetic samples leaking another class's attributes. If the synthetic sample \(x_{\text{p}}\) is a "tennis ball dog" (reprinted from [18]) with conditional label \(y_{\text{p}}=\) "ball", the supervised learner of the task head \(h_{\omega}\) distorts the decision boundary between "dog" and "ball" to classify \(x_{\text{p}}\) as "ball".

### Pseudo Consistency Regularization

As discussed in Section 1, we hypothesize that the synthetic samples do not perfectly represent class categories, and training classifiers with them can distort the decision boundaries. This is because \(y_{\mathrm{p}}\) is not reliable, as \(\mathcal{D}_{\mathrm{p}}\) can contain class-leaked samples [14]. To avoid the degradation caused by this distortion, we propose utilizing \(x_{\mathrm{p}}\) to regularize only the feature extractor \(g_{\psi}\) of \(f_{\theta}\), discarding the conditional label \(y_{\mathrm{p}}\). For the regularization, we borrow the concept of consistency regularization, which was originally proposed for semi-supervised learning (SSL) [15; 16; 17]. These SSL methods were designed to minimize the dissimilarity between the two logits (i.e., the outputs of \(h_{\omega}\)) of strongly and weakly transformed unlabeled samples to obtain robust representations. Following this concept, the PCR loss is formalized as \[\ell_{\text{PCR}}(x_{\mathrm{p}};\psi)=\|g_{\psi}(T(x_{\mathrm{p}}))-g_{\psi}(x_{\mathrm{p}})\|_{2}^{2}, \tag{5}\] where \(T\) is a strong transformation such as RandAugment [21], similar to the one used in UDA [16] for SSL. The difference between PCR and UDA is that PCR penalizes only \(g_{\psi}\), whereas UDA trains the entire \(f_{\theta}=h_{\omega}\circ g_{\psi}\). \(\ell_{\text{PCR}}\) can be expected to help \(g_{\psi}\) learn inter-cluster features interpolated by \(x_{\mathrm{p}}\) without distorting the decision boundaries. Using \(\ell_{\text{PCR}}\), we rewrite Eq. (4) as \[\mathcal{L}_{\text{PCR}}(\theta)=\mathbb{E}_{x_{\mathrm{p}}\in\mathcal{D}_{\mathrm{p}}}\ell_{\text{PCR}}(x_{\mathrm{p}};\psi). \tag{6}\]
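A direct transcription of Eq. (5) is short; the sketch below is our own minimal version, where `strong_aug` stands in for the differentiable RandAugment pipeline the paper uses (Sec. 4.1) and only the feature extractor `g` receives gradients from this term.

```python
import torch

def pcr_loss(g, x_p, strong_aug):
    """Eq. (5): match features of a strongly augmented synthetic sample to
    those of the original one. The conditional label y_p is discarded, so
    the classifier head h is never trained on synthetic data."""
    feats = g(x_p)
    feats_aug = g(strong_aug(x_p))
    return ((feats_aug - feats) ** 2).sum(dim=1).mean()  # squared L2 per sample
```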
### Meta Pseudo Sampling

Most generative data augmentation methods generate a synthetic sample \(x_{\mathrm{p}}\) from \(G_{\Phi}\) with a randomly sampled latent vector \(z\). This is not the best option, as \(G_{\Phi}\) is not optimized for generating samples that are useful for training \(f_{\theta}\) to predict the conditional distribution \(p(y|x)\); it is originally optimized to replicate \(p(x|y)\) or \(p(x)\). The main concept of MPS is to directly determine useful samples for training \(f_{\theta}\) by optimizing an additional neural network called a _finder_ \(F_{\phi}:\mathcal{Z}\rightarrow\mathcal{Z}\). \(F_{\phi}\) takes a latent vector \(z\sim p(z)\) as input and outputs a vector of the same dimension as \(z\). By using \(F_{\phi}\), we generate a synthetic sample as \(x_{\mathrm{p}}=G_{\Phi}(F_{\phi}(z),y)\). The role of \(F_{\phi}\) is to find the optimal latent vectors that improve the training of \(f_{\theta}\) through \(x_{\mathrm{p}}\). Although we could optimize \(G_{\Phi}\) instead of \(F_{\phi}\), we observed that optimizing \(G_{\Phi}\) is unstable and degrades the performance of \(f_{\theta}\) (Section 4.5). Useful samples for generalization should reduce the validation loss of \(f_{\theta}\) when used for optimization. Based on this simple observation, we formalize the following bilevel optimization problem for \(F_{\phi}\). \[\min_{\phi}\mathcal{L}_{\text{val}}(\theta^{*})=\mathbb{E}_{(x_{\text{val}},y_{\text{val}})\in\mathcal{D}_{\text{val}}}\ell(f_{\theta^{*}}(x_{\text{val}}),y_{\text{val}})\] \[\text{subject to}\qquad\quad\theta^{*}=\operatorname*{argmin}_{\theta}\mathcal{L}(\theta)+\lambda\mathcal{L}_{\text{PCR}}(\theta,\phi). \tag{7}\] Note that the finder parameter \(\phi\) is added to the arguments of \(\mathcal{L}_{\text{PCR}}\). We can optimize \(F_{\phi}\) with a standard gradient descent algorithm because \(F_{\phi}\), \(G_{\Phi}\), and \(f_{\theta}\) are all composed of differentiable functions, as in existing meta-learning methods such as MAML [22] and DARTS [23]. We approximate \(\theta^{*}\) because the exact computation of the gradient \(\nabla_{\phi}\mathcal{L}_{\text{val}}(\theta^{*})\) is expensive [22; 23]: \[\theta^{*}\approx\theta^{\prime}=\theta-\eta\nabla_{\theta}(\mathcal{L}(\theta)+\lambda\mathcal{L}_{\text{PCR}}(\theta,\phi)), \tag{8}\] where \(\eta\) is an inner step size. Thus, we update \(F_{\phi}\) using \(\theta^{\prime}\), which is updated for a single step, and then alternately update \(\theta\) by applying Eq. (6) with \(x_{\mathrm{p}}\) generated from the updated \(F_{\phi}\). The overall optimization flow is shown in Figure 4.

Approximating gradients with respect to \(\phi\). Computing \(\nabla_{\phi}\mathcal{L}_{\text{val}}(\theta^{\prime})\) requires a product computation involving second-order gradients: \(\nabla_{\phi,\theta}^{2}\mathcal{L}_{\text{PCR}}(\theta,\phi)\nabla_{\theta^{\prime}}\mathcal{L}_{\text{val}}(\theta^{\prime})\). This incurs a computational complexity of \(\mathcal{O}((|\Phi|+|\phi|)|\theta|)\). To avoid this computation, we approximate the term using the finite difference method [23] as \[\nabla_{\phi,\theta}^{2}\mathcal{L}_{\text{PCR}}(\theta,\phi)\nabla_{\theta^{\prime}}\mathcal{L}_{\text{val}}(\theta^{\prime})\approx\frac{\nabla_{\phi}\mathcal{L}_{\text{PCR}}(\theta^{+},\phi)-\nabla_{\phi}\mathcal{L}_{\text{PCR}}(\theta^{-},\phi)}{2\varepsilon}, \tag{9}\] where \(\theta^{\pm}\) is \(\theta\) updated by \(\theta\pm\eta\nabla_{\theta}\mathcal{L}_{\text{val}}(\theta)\), and \(\varepsilon\) is defined by \(\frac{\text{const}}{\|\nabla_{\theta}\mathcal{L}_{\text{val}}(\theta)\|_{2}}\). We used a constant of \(0.01\) for \(\varepsilon\) based on [23]. This approximation reduces the computational complexity to \(\mathcal{O}(|\Phi|+|\phi|+|\theta|)\). We confirm the speedup from Eq. (9) in Appendix B.1.
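To illustrate how the hypergradient of Eq. (7) flows to \(\phi\), the toy sketch below unrolls the single inner step of Eq. (8) and backpropagates the validation loss through it exactly; note this is the straightforward second-order computation, not the paper's cheaper finite-difference approximation of Eq. (9). All modules, shapes, and the additive-noise stand-in for RandAugment are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

# Toy stand-ins (assumptions): tiny linear modules in place of real networks.
g = nn.Linear(8, 16)                        # feature extractor g_psi
h = nn.Linear(16, 4)                        # classifier head h_omega
finder = nn.Linear(4, 4)                    # finder F_phi
G = nn.Linear(4, 8).requires_grad_(False)   # frozen stand-in for the generator
eta, lam = 0.1, 0.5

def meta_grad_finder(x, y, x_val, y_val):
    x_p = G(finder(torch.randn(16, 4)))          # synthetic batch; depends on phi
    x_aug = x_p + 0.1 * torch.randn_like(x_p)    # crude stand-in for RandAugment
    pcr = ((g(x_aug) - g(x_p)) ** 2).mean()      # Eq. (5)
    inner = F.cross_entropy(h(g(x)), y) + lam * pcr
    pg, ph = dict(g.named_parameters()), dict(h.named_parameters())
    grads = torch.autograd.grad(inner, list(pg.values()) + list(ph.values()),
                                create_graph=True)   # keep graph back to phi
    gg, gh = grads[:len(pg)], grads[len(pg):]
    pg2 = {n: p - eta * d for (n, p), d in zip(pg.items(), gg)}   # Eq. (8)
    ph2 = {n: p - eta * d for (n, p), d in zip(ph.items(), gh)}
    val = F.cross_entropy(functional_call(h, ph2,
                          functional_call(g, pg2, x_val)), y_val)
    return torch.autograd.grad(val, list(finder.parameters()))   # hypergradient
```

A finder optimizer would then apply the returned gradients to \(\phi\); the finite-difference trick of Eq. (9) replaces the `create_graph=True` second-order path with two cheap first-order evaluations at \(\theta^{\pm}\).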
Techniques for improving the finder. We introduce two techniques for improving the training of \(F_{\phi}\), concerning its architecture and a penalty term on its outputs. While arbitrary neural architectures can be used to implement \(F_{\phi}\), we observed that the following residual architecture produces better results: \[F_{\phi}(z):=z+\tanh(\text{MLP}_{\phi}(z)), \tag{10}\] where \(\text{MLP}\) is a multi-layer perceptron. To ensure that \(F_{\phi}(z)\) does not diverge too far from the distribution \(p(z)\), we add a Kullback-Leibler divergence term \(D_{\text{KL}}(p_{\phi}(z)\|p(z))\), where \(p_{\phi}(z)\) is the distribution of \(F_{\phi}(z)\), into Eq. (7). When \(p(z)\) follows the standard Gaussian \(\mathcal{N}(0,I)\), \(D_{\text{KL}}(p_{\phi}(z)\|p(z))\) can be computed by \[D_{\text{KL}}(p_{\phi}(z)\|p(z))=-\frac{1}{2}(1+\log\sigma_{\phi}-\mu_{\phi}^{2}-\sigma_{\phi}), \tag{11}\] where \(\mu_{\phi}\) and \(\sigma_{\phi}\) are the mean and variance of \(\{F_{\phi}(z^{i})\}_{i=1}^{N_{\text{p}}}\). In Appendix B.2, we discuss the effects of these design choices based on an ablation study.
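A sketch of this finder might look as follows. The depth and leaky-ReLU activation follow the architecture described in Sec. 4.1; the hidden width and the per-dimension averaging of the KL term are our assumptions.

```python
import torch
import torch.nn as nn

class Finder(nn.Module):
    """Residual finder of Eq. (10): F_phi(z) = z + tanh(MLP_phi(z))."""
    def __init__(self, z_dim, hidden=256):   # hidden width: assumed value
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, z):
        return z + torch.tanh(self.mlp(z))

def kl_to_standard_normal(z_out):
    """Eq. (11): KL between batch statistics of F_phi(z) and N(0, I),
    estimated per dimension from the mini-batch and averaged (assumed
    reduction)."""
    mu = z_out.mean(dim=0)
    var = z_out.var(dim=0)
    return (-0.5 * (1 + torch.log(var) - mu ** 2 - var)).mean()
```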
## 4 Experiments

In this section, we evaluate MGR (the combination of PCR and MPS) with experiments on multiple image classification datasets. We mainly aim to answer three research questions: (1) Can PCR avoid the negative effects of \(x_{\text{p}}\) observed in existing methods and improve the performance of \(f_{\theta}\)? (2) Can MPS find better training samples than uniform sampling? (3) How practical is the performance of MGR? We compare MGR with baselines including conventional generative data augmentation and its variants in terms of test performance (Sec. 4.2 and 4.3). Furthermore, we conduct a comprehensive analysis of PCR and MPS, including visualization of the trained feature spaces (Sec. 4.4), quantitative/qualitative evaluations of the synthetic samples (Sec. 4.5), performance studies when changing generative models (Sec. 4.6), and comparison to data augmentation methods such as TrivialAugment [24] (Sec. 4.7).

### Settings

Baselines. We compare our method with the following baselines. **Base Model**: training \(f_{\theta}\) with only \(\mathcal{D}\). **Generative Data Augmentation (GDA)**: training \(f_{\theta}\) with \(\mathcal{D}\) and \(G_{\Phi}\) using Eq. (2). **GDA+MH**: training \(f_{\theta}\) with \(\mathcal{D}\) and \(G_{\Phi}\) by decoupling the heads into \(h_{\omega}\) for \(\mathcal{D}\) and \(h_{\omega_{\text{p}}}\) for \(\mathcal{D}_{\text{p}}\) (MH denotes multi-head). This is a naive approach to avoiding the negative effect of \(x_{\text{p}}\) on \(h_{\omega}\) by not passing \(x_{\text{p}}\) through \(h_{\omega}\); GDA+MH optimizes the parameters as \(\operatorname*{argmin}\limits_{\theta}\mathcal{L}(f_{\theta}(x),y)+\lambda\mathcal{L}(h_{\omega_{\text{p}}}(g_{\psi}(x_{\text{p}})),y_{\text{p}})\). **GDA+SSL**: training \(f_{\theta}\) with \(\mathcal{D}\) and \(G_{\Phi}\) by applying an SSL loss to \(\mathcal{D}_{\text{p}}\) that, unlike PCR, utilizes the output of \(h_{\omega}\). This method was originally proposed by Yamaguchi et al. [25] for transfer learning, but we note that \(G_{\Phi}\) was trained on the main task dataset \(\mathcal{D}\), in contrast to the original paper. Following [25], we used UDA [16] as the SSL loss and the same strong transformation \(T\) as MGR for the consistency regularization.

Datasets. We used six image datasets for classification tasks in various domains: Cars [12], Aircraft [26], Birds [27], DTD [28], Flowers [29], and Pets [30]. Furthermore, to evaluate smaller-dataset cases, we used subsets of Cars reduced to \(\{10,25,50,75\}\%\) of the original volume by random sampling with a fixed random seed. We randomly split each dataset \(9:1\) and used the former as \(\mathcal{D}\) and the latter as \(\mathcal{D}_{\text{val}}\).

Architectures. We used ResNet-18 [13] as \(f_{\theta}\) and generators of conditional StyleGAN2-ADA for \(256\times 256\) images [18] as \(G_{\Phi}\). \(F_{\phi}\) was composed of a three-layer perceptron with a leaky-ReLU activation function. We used the ImageNet pre-trained weights of ResNet-18 distributed by PyTorch.3 For StyleGAN2-ADA, we did not use pre-trained weights; we trained \(G_{\Phi}\) on each \(\mathcal{D}\) from scratch according to the default setting of the StyleGAN2-ADA implementation.4 Note that we used the same \(G_{\Phi}\) in the baselines and our proposed method. Footnote 3: [https://github.com/pytorch/vision](https://github.com/pytorch/vision) Footnote 4: [https://github.com/NVlabs/stylegan2-ada](https://github.com/NVlabs/stylegan2-ada)

Training. We trained \(f_{\theta}\) with Nesterov momentum SGD for 200 epochs with a momentum of 0.9 and an initial learning rate of 0.01; we decayed the learning rate by 0.1 at 60, 120, and 160 epochs. We trained \(F_{\phi}\) with the Adam optimizer for 200 epochs with a learning rate of \(1.0\times 10^{-4}\). We used mini-batch sizes of 64 for \(\mathcal{D}\) and 64 for \(\mathcal{D}_{\mathrm{p}}\). The input samples were resized to a resolution of \(224\times 224\); \(x_{p}\) was resized by differentiable transformations. For synthetic samples from \(G_{\Phi}\) in PCR and GDA+SSL, the strong transformation \(T\) was RandAugment [21], following [16], implemented with the differentiable transformations provided in Kornia [31]. We determined the hyperparameter \(\lambda\) by grid search over \([0.1,1.0]\) with a step size of \(0.1\) for each method using \(\mathcal{D}_{\mathrm{val}}\). To avoid overfitting, we set the hyperparameters of MGR to those found when applying PCR alone, i.e., we did not use meta-learning to choose them. We used a \(\lambda_{\text{KL}}\) of \(0.01\). We selected the final model by checking the validation accuracy at each epoch. We ran the experiments three times on a 24-core Intel Xeon CPU with an NVIDIA A100 GPU with 40GB VRAM and report average test accuracies with standard deviations evaluated on the final models.

### Evaluation on Multiple Datasets

We confirm the efficacy of MGR across multiple datasets. Table 1(a) shows the top-1 accuracy of each method. As reported in [7], GDA degraded the base model on many datasets; it slightly improved the base model on only one dataset. GDA+MH, which decouples the classifier heads for GDA, exhibited a similar trend to GDA. This indicates that simply decoupling the classifier heads is not a solution to the performance degradation caused by synthetic samples. In contrast, our MGR stably and significantly outperformed the baselines and achieved the best results. The ablations of MGR discarding PCR or MPS are also listed in Table 1(a). We confirm that both PCR and GDA+MPS improve GDA. While GDA+SSL underperforms the base model, PCR outperforms it. This indicates that using an unsupervised loss alone is not sufficient to eliminate the negative effects of the synthetic samples and that discarding the classifier \(h_{\omega}\) from the regularization is important for obtaining the positive effect. MPS yields only a small performance gain when combined with GDA, but it significantly improves performance when combined with PCR, i.e., MGR. This suggests that there is no room for performance improvements in GDA, and that MPS can maximize the potential benefits of PCR.
### Evaluation on Small Datasets

A small dataset setting is one of the main motivations for utilizing generative data augmentation. We evaluate the effectiveness of MGR on smaller datasets. Table 1(b) shows the performance when reducing the Cars dataset to a volume of \(\{10,25,50,75\}\%\). Note that we trained \(G_{\Phi}\) on each reduced dataset, not on \(100\%\) of Cars. In contrast to the full-dataset case (Table 1(a)), no baseline method outperformed the base model in this setting. This is because \(G_{\Phi}\) trained on the small datasets generates low-quality samples whose conditional labels \(y_{\mathrm{p}}\) are unreliable, making them inappropriate for supervised learning. On the other hand, MGR improved the baselines by large margins. This indicates that, even when the synthetic samples are not sufficient to represent the class categories, MGR can maximize the information obtained from them by utilizing them to regularize feature extractors and by dynamically finding useful samples.

Table 1: Top-1 accuracy (%) of ResNet-18. Underlined scores outperform that of the Base Model, and **bolded scores** are the best among the methods.

### Visualization of Feature Spaces

In this section, we discuss the effects of PCR and MPS through visualizations of the feature spaces during training. To visualize the output of \(g_{\psi}\) in 2D maps, we utilized UMAP [32] to reduce the dimensions. UMAP is a visualization method based on the structure of distances between samples, and the low-dimensional embeddings can preserve the distances between samples of the high-dimensional input. Thus, the distances among feature clusters in a UMAP visualization of \(g_{\psi}\) can represent the difficulty of separation by \(h_{\omega}\). We used the official implementation by [32]5 and its default hyperparameters. We plotted the UMAP embeddings of \(g_{\psi}(x)\) and \(g_{\psi}(x_{\mathrm{p}})\) at \(\{5,15,30\}\) epochs, as shown in Figure 5; we used ResNet-18 trained on the Cars dataset. At first glance, we observe that GDA and PCR formed completely different feature spaces. GDA forms its feature space by forcing the model to treat the synthetic samples the same as the real samples through the cross-entropy loss, trying to separate the clusters of samples according to the class labels. However, the synthetic samples leaked into the inter-cluster region at every epoch because they cannot represent class categories perfectly, as discussed in Sec. 1. This means that the feature extractor might be distorted to produce features that confuse the classifier. On the other hand, the synthetic samples in PCR progressively formed a cluster at the center, and the outer clusters are well separated. Since UMAP can preserve the distances between clusters, the sparse clusters that lie far from the center can be considered easy to classify, while the dense clusters close to the center are difficult to classify. From this perspective, PCR helps \(g_{\psi}\) leverage the synthetic samples for learning feature representations that interpolate the dense, difficult clusters. This is because the synthetic samples tend to lie in the middle of clusters due to their weaker representativeness of class categories. That is, PCR can utilize the partial but useful information contained in the synthetic samples while avoiding the negative effect. Further, we observe that applying MPS accelerates the convergence of the synthetic samples toward the center. From these observations, we conclude that PCR and MPS can help models learn useful feature representations for solving tasks.

Footnote 5: [https://github.com/lmcinnes/umap](https://github.com/lmcinnes/umap)
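For reference, producing such a 2D map with the official umap-learn package and its default hyperparameters, as described above, amounts to a few lines; the random features here are placeholders for the actual \(g_{\psi}\) outputs.

```python
import numpy as np
import umap  # pip install umap-learn

# feats: (n_samples, d) feature vectors from g_psi for real + synthetic inputs.
feats = np.random.randn(1000, 512).astype(np.float32)  # placeholder features
embedding = umap.UMAP().fit_transform(feats)           # default hyperparameters
print(embedding.shape)                                 # (1000, 2)
```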
Figure 5: UMAP visualization of feature spaces during training. The plots represent real and synthetic samples in the feature spaces. Our methods (PCR and PCR+MPS) help the feature extractors separate real-sample clusters. In contrast, the existing method (GDA) confuses the feature extractor by leaking synthetic samples out of the clusters.

### Analysis of MPS

Evaluation of validation loss. We investigate the effects of MPS on validation losses. Through the meta-optimization of Eq. (7), MPS can generate samples that reduce the validation loss of \(f_{\theta}\). We recorded the validation loss per epoch when applying uniform sampling (Uniform) and when applying MPS. We used the models trained on Cars and applied the PCR loss to both models. Figure 6(a) plots the averaged validation losses. MPS reduced the validation loss; in particular, MPS was more effective in early training epochs. This is related to the accelerated convergence of the central cluster of synthetic samples discussed in Section 4.4 and Figure 5. That is, MPS can produce effective samples for regularizing features and thus speed up the entire training of \(f_{\theta}\).

Quantitative evaluation of synthetic samples. We evaluate the synthetic samples generated by MPS. To assess their characteristics, we measured the difference between the data distribution and the distribution of the synthetic samples. We leveraged the Fréchet Inception distance (FID, [33]), which measures the distribution gap between two datasets using a closed-form computation assuming multivariate normal distributions: \[\text{FID}(\mathcal{D},\mathcal{D}_{\text{p}})=\|\mu-\mu_{\text{p}}\|_{2}^{2}+\operatorname{Tr}\left(\Sigma+\Sigma_{\text{p}}-2\sqrt{\Sigma\Sigma_{\text{p}}}\right),\] where \(\mu\) and \(\Sigma\) are the mean and covariance of the feature vectors on InceptionNet for inputs \(\{x^{i}\}\). Since FID is a distance, a lower \(\text{FID}(\mathcal{D},\mathcal{D}_{\text{p}})\) means that \(\mathcal{D}_{\text{p}}\) contains more high-quality samples in terms of realness. We computed FID scores using 2048 samples from \(\mathcal{D}\) and 2048 synthetic samples every epoch; the other settings are given in Section 4.1. The FID scores during training are plotted in Figure 6(b). We confirm that MPS consistently produced higher-quality samples than Uniform. This indicates that sample quality is important for generalizing \(f_{\theta}\) even with PCR, and uniform sampling can miss higher-quality samples in generative models. Since the performance gain of GDA+MPS in Table 1(a) was smaller than that of MGR, the higher-quality samples found by MPS can still contain samples that are uninformative for the cross-entropy loss, yet they are helpful for PCR in learning good feature representations.
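The closed-form FID above translates directly into code. The sketch below is a generic implementation of the formula over two feature matrices (it is not the exact evaluation script used in the paper, which extracts features with InceptionNet first).

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_synth):
    """Closed-form FID between Gaussian fits of two (n, d) feature sets."""
    mu1, mu2 = feats_real.mean(0), feats_synth.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_synth, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)        # matrix square root of Sigma*Sigma_p
    if np.iscomplexobj(covmean):           # discard tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean)
```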
Indeed, since Hummers come in various body colors and, with four-wheel drive, can be driven on any road, this selective generation by MPS appears reasonable for solving the classification task.

### Effect of Generative Models

Here, we evaluate MGR while varying the generative model \(G_{\Phi}\) used to produce synthetic samples. As discussed in Sec. 3.1 and 3.2, MGR can use arbitrary unconditional/conditional generative models as \(G_{\Phi}\), as long as the model has a latent space. To confirm the effects of changing \(G_{\Phi}\), we tested MGR with FastGAN [34], BigGAN [18], and StyleGAN-XL [35]. Table 2 shows the results on Cars. The unconditional FastGAN achieved improvements similar to the conditional cases. However, since unconditional generative models are generally of lower quality in terms of FID, they are slightly less effective. For the conditional generative models, we observe that the MGR performance improvement increases as the quality of the synthetic samples improves. These results suggest the potential for even greater MGR gains as generative models continue to improve. We also evaluate MGR with recent diffusion models in Appendix B.3.

### Combination of MGR and Data Augmentation

To assess the practicality of MGR, we evaluate the comparison and combination of MGR with existing data augmentation methods. Data augmentation (DA) is applied to real data and is an independent research field from generative data augmentation; therefore, MGR can be combined with DA to further improve performance. Table 3 shows the results of comparing and combining MGR and DA methods; we used AugMix [36], RandAugment [21], and TrivialAugment [24] as the DA methods. The improvement from MGR is comparable to that of DA, and the highest accuracy was achieved by combining the two. This suggests that our method using synthetic samples and DA play different roles in classifier training and that MGR is also very effective in practical use.

## 5 Related Work

We briefly review generative data augmentation and training techniques using generative models. The earliest works on generative data augmentation are [19; 37; 38]. They have demonstrated that simply adding synthetic samples as augmented data for classification tasks can improve performance in few-shot learning, person re-identification, and medical imaging tasks. Tran et al. [20] have proposed a generative data augmentation method that simultaneously trains GANs and classifiers, optimizing \(\theta\) to maximize the posterior \(p(\theta|x)\) with an EM algorithm. Although this concept is similar to our MPS in terms of updating both \(G_{\Phi}\) and \(f_{\theta}\), it requires training specialized GAN-based neural architectures. In contrast, MPS is formalized for arbitrary existing generative models with latent variables, and it imposes no restrictions on the training objectives of the generative models. On the analysis of generative data augmentation, Shmelkov et al. [7] have pointed out that leveraging synthetic samples as augmented data degrades performance in general visual classification tasks. They have hypothesized that the cause of the degradation is the low diversity and fidelity of synthetic samples from generative models. Subsequent research by Yamaguchi et al. [8] has shown that scores related to the diversity and fidelity of synthetic samples (i.e., SSIM and FID) correlate with the test accuracies obtained when applying the samples in generative data augmentation for classification tasks.
Based on these works, our work reconsiders the training objective and sampling method in generative data augmentation and proposes PCR and MPS. More recently, He et al. [39] have reported that a text-to-image generative model pre-trained on massive external datasets can achieve high performance on few-shot learning tasks. They also found that the benefit of synthetic data decreases as the amount of real training data increases, which they attributed to a domain gap between real and synthetic samples. In contrast, our method does not depend on any external dataset and successfully improves the accuracy of the classifier even when the synthetic data are not representative of the class labels (i.e., there are domain gaps).

## 6 Limitation

One limitation of our method is the requirement of bilevel optimization of the classifier and finder networks. This optimization is computationally expensive, particularly when used with generative models that require multiple inference steps, such as diffusion models, as discussed in Appendix B.3. We have tried other objective functions that do not require bilevel optimization, but so far we have not found an optimization method that outperforms MPS (see Appendix B.4). Nevertheless, since recent studies are rapidly and intensively focusing on speeding up diffusion models [40; 41; 42], we expect this limitation to become negligible in the near future.
2301.12809
The Hidden Power of Pure 16-bit Floating-Point Neural Networks
Lowering the precision of neural networks from the prevalent 32-bit precision has long been considered harmful to performance, despite the gain in space and time. Many works propose various techniques to implement half-precision neural networks, but none study pure 16-bit settings. This paper investigates the unexpected performance gain of pure 16-bit neural networks over the 32-bit networks in classification tasks. We present extensive experimental results that favorably compare various 16-bit neural networks' performance to those of the 32-bit models. In addition, a theoretical analysis of the efficiency of 16-bit models is provided, which is coupled with empirical evidence to back it up. Finally, we discuss situations in which low-precision training is indeed detrimental.
Juyoung Yun, Byungkon Kang, Zhoulai Fu
2023-01-30T12:01:45Z
http://arxiv.org/abs/2301.12809v2
# The Hidden Power of Pure 16-bit Floating-Point Neural Networks

###### Abstract

Lowering the precision of neural networks from the prevalent 32-bit precision has long been considered harmful to performance, despite the gain in space and time. Many works propose various techniques to implement half-precision neural networks, but none study _pure_ 16-bit settings. This paper investigates the unexpected performance gain of pure 16-bit neural networks over the 32-bit networks in classification tasks. We present extensive experimental results that favorably compare various 16-bit neural networks' performance to those of the 32-bit models. In addition, a theoretical analysis of the efficiency of 16-bit models is provided, which is coupled with empirical evidence to back it up. Finally, we discuss situations in which low-precision training is indeed detrimental.
## 1 Introduction

This paper asks whether neural networks can be trained and used entirely in 16-bit floating-point arithmetic, without resorting to mixed precision algorithms like loss scaling. Our finding is positive. That is, pure 16-bit neural networks, without any floating-point 32 components, despite being imprecise by nature, can be precise enough to handle a major application of machine learning: the classification problem. Our work first formalizes the intuitive concept of error tolerance and proposes a lemma that theoretically guarantees that 16-bit "locally" achieves the same classification result as 32-bit under certain conditions. Combined with our preliminary observations, we conjecture that 16-bit and 32-bit have close results in handling classification problems. We then validate our conjecture through extensive experiments on deep neural network (DNN) and convolutional neural network (CNN) problems. Our contributions are as follows:

* We aim to debunk the myth that plain 16-bit models do not work well. We demonstrate that training neural network models in pure 16 bits, with no additional measures to "compensate", results in competitive, if not superior, accuracy.
* We offer theoretical insights on _why_ half precision models work well, as well as empirical evidence that supports our analysis.
* We perform extensive experiments comparing the performance of various _pure_ 16-bit neural networks against that of 32-bit and mixed precision networks and find that the 16-bit neural networks can perform as well as their 32-bit counterparts. We also identify factors that could negatively influence the success of 16-bit models.

## 2 Related Work

Several techniques have been proposed to reduce the precision of machine learning models while maintaining their accuracy to some degree.
The approach that best aligns with our work is lowering the precision of the floating point numbers used in those models, but other approaches involve algorithm modification, fixed point formats, and hardware acceleration.

**Algorithm modification**. These approaches aim to modify certain components of the main algorithm to allow low-precision work. The work by (De Sa et al., 2017) offers a way to address the issues associated with low precision in gradient descent by assigning different precisions to the weights and the gradients. Such a scheme is assisted by a special hardware implementation to speed up this mixed-precision process. A similar approach is taken by (Bjorck et al., 2021) in reinforcement learning. This work proposes various mechanisms to reliably perform reinforcement learning at low precision; for example, they adopt numerical methods to improve the Adam optimization algorithm so that underflow can be prevented. Many such techniques are somewhat ad hoc to the target problem but might also be useful in a more general setting.

**Fixed point formats**. A _fixed point format_ is another number format often used to represent real numbers, although it is not standardized. Several works have adopted fixed point formats to allow low precision due to their intuitive representation and high extensibility to other configurations. To examine the issues with low precision in deep learning, the authors of (Gupta et al., 2015) investigate the behavior of deep neural networks when the precision is lowered. Their empirical results show that reducing the precision of the real numbers comprising the weights of the network leads to a graceful degradation of accuracy. (Chen et al., 2017) performs neural network training in fixed point numbers by quantizing a floating point number into a low-precision fixed point number during training. The idea of quantizing floating point numbers is also used in (Lin et al., 2016), where the authors cast the conversion as an optimization problem whose objective is to reduce the network's size by adopting a different bit-width for each layer. On a more systems-oriented front, (Kumar et al., 2020) proposes compiler support for converting floating point numbers to low-precision fixed point numbers, and (Gopinath et al., 2019) also takes a compiler- and language-based approach to achieving low-precision numbers. Although not entirely the same as a real-number fixed point format, integer quantization can also be considered a special fixed point format and has become a viable choice in low-precision neural network design. In (Wu et al., 2018), floating point weights are quantized to convert them into signed integer representations. The authors discover that adopting an integer quantization technique results in regularization-like behavior, leading to increased accuracy. Similarly, work by (Das et al., 2018) also proposes to use integer operations to achieve high accuracy. Such integer operations effectively convert the floating point numbers into what is known as the dynamic fixed point (DFP) format. The work in (Jacob et al., 2017) also aims to quantize the weights to integers. However, the weights remain integers only during the forward pass and become floating point numbers for back-propagation to account for minute updates. Another work (Courbariaux et al., 2015) takes an extreme quantization approach, binarizing the weights to -1 and 1.
The weights remain binary during the forward and backward passes but become floating point during the weight update phase. (Xiao et al., 2022) adopts a post-training quantization approach to reduce the memory footprint of large-scale language models. (Koster et al., 2017) proposes an adaptive numerical format that retains the advantages of both floating and fixed point numbers; this is achieved by having a shared exponent that gets updated during training.

**Mixed precision**. On the other hand, _mixed precision_ approaches maintain standard floating point numbers in two different precisions. The first practically successful reduced-precision floating point mechanism was proposed by (Micikevicius et al., 2018). In this work, the authors devise a scheme to perform mixed precision training by maintaining a set of master full-precision floating point weights that serves as the 'original copy' of the half-precision counterparts. While this technique does reduce running time, its mixed-precision nature limits the performance gain achieved. (Wang et al., 2019) proposes a 4-bit floating point architecture mixed with a small amount of 8-bit precision. In addition to the format, the authors devise a two-phase rounding procedure to counter the inaccuracy induced by the low-precision format.

**Hardware support**. Lastly, we point out that many of these works either implicitly or explicitly require hardware support due to their individual floating/fixed point formats. Most approaches use FPGAs and FPUs to implement these formats, but other works such as (Sharma et al., 2017) propose novel architectures tailored to the bit formats of the floating point numbers. Some previously mentioned works, such as (Wang et al., 2019), also hint at the possibility of leveraging hardware assistance.

Unlike these previous works, our work focuses explicitly on the IEEE floating point format, which is the de-facto standard for general computing machinery. More precisely, we investigate the pros and cons of using the pure 16-bit IEEE floating point format in training neural networks without external support.

## 3 Background

**General notation**. The real numbers and integers are denoted by \(\mathbf{R}\) and \(\mathbf{Z}\), respectively. Given a vector \(x=(x_{0},...,x_{n-1})\in\mathbf{R}^{n}\), its infinity norm (or maximum norm), denoted by \(|x|_{\infty}\), is the maximum absolute value of its elements, namely \(|x|_{\infty}\stackrel{\text{def}}{=}\max_{i}|x_{i}|\).

### Floating-Point Representation

We write \(\mathbf{F}_{16}\) to denote the set of 16-bit floating-point numbers excluding \(\pm\infty\) and NaN (Not-a-Number). Following the IEEE-754 standard (IEEE Computer Society, 2008), each \(x\in\mathbf{F}_{16}\) can be written as

\[x=(-1)^{s}\times g_{0}.g_{1}\ldots g_{10\,(2)}\times 2^{e} \tag{1}\]

where \(s\in\{0,1\}\), \(g_{i}\in\{0,1\}\) (\(0\leq i\leq 10\)), and \(e\in\mathbf{Z}\). We call \(s\), \(g_{0}.g_{1}\ldots g_{10}\), and \(e\) the sign, the significand, and the exponent, respectively. They satisfy \(g_{0}\neq 0\) and \(-14\leq e\leq 15\); the case where \(g_{0}=0\) and \(e=-14\) gives the subnormal numbers. We write \(\mathbf{F}_{32}\) for the set of 32-bit (single-precision) floating-point numbers excluding \(\pm\infty\) and NaN. The following property holds:

\[\mathbf{F}_{16}\subset\mathbf{F}_{32}\subset\mathbf{R} \tag{2}\]

Namely, a 16-bit or 32-bit floating-point number is a real, and a 16-bit number can be exactly represented as a 32-bit one (by padding with zeros).
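These properties are easy to check empirically. The snippet below is an illustrative aside (not part of our experimental code) that uses NumPy to confirm the values in Table 1 and the containment in Eq. 2. Note that NumPy's `finfo` reports the machine epsilon as the gap between 1 and the next representable number, i.e., twice the unit roundoff listed in Table 1.

```python
import numpy as np

# Range of the two formats (cf. Table 1).
print(np.finfo(np.float16).max)      # 65504.0, i.e. ~6.55E+4
print(np.finfo(np.float32).max)      # ~3.4E+38

# finfo's eps is the gap between 1 and the next float; halving it
# gives the unit roundoff from Table 1 (4.88E-04 and 5.96E-08).
print(np.finfo(np.float16).eps / 2)
print(np.finfo(np.float32).eps / 2)

# F16 is contained in F32 (Eq. 2): upcasting a half is exact.
x16 = np.float16(0.1)                          # rounding happens here
assert np.float16(np.float32(x16)) == x16      # no further error

# The rounding error |x - o_p(x)| (formalized in the next subsection)
# for x = 0.1 at both precisions.
print(abs(0.1 - float(np.float16(0.1))))       # ~2.4E-05
print(abs(0.1 - float(np.float32(0.1))))       # ~1.5E-09
```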
Table 1 lists the range and the precision of 16-bit and 32-bit floating-point numbers.

\begin{table}
\begin{tabular}{l c c c} \hline \hline Type & Size & Range & Machine epsilon \\ \hline Half & 16 bits & \(\pm\)6.55E+4 & 4.88E-04 \\ Single & 32 bits & \(\pm\)3.4E+38 & 5.96E-08 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Some characteristics of the 16-bit and 32-bit floating-point formats.

### Floating-point Errors

Rounding is necessary when representing real numbers that cannot be written as Eq. 1. The _rounding error_ of a real \(x\) at precision \(p\) refers to \(|x-\circ_{p}(x)|\), where the rounding operation \(\circ_{p}:\mathbf{R}\rightarrow\mathbf{F}_{p}\) maps \(x\) to its nearest floating-point number, namely \(\circ_{p}(x)\stackrel{\text{def}}{=}\operatorname*{argmin}_{y\in\mathbf{F}_{p}}|x-y|\).\({}^{1}\) Rounding error is usually small, on the order of the machine epsilon (Table 1), but it can be propagated and become more significant. For example, the floating-point code sin(0.1) goes through three approximations. First, 0.1 is rounded to the floating-point number \(\circ_{p}(0.1)\) for some precision \(p\). Then, the rounding error is _propagated_ by the floating-point code sin. Lastly, the calculation output is rounded again if an exact representation is not possible.

Footnote 1: For simplicity, this definition of the rounding operation ignores the case where a tie needs to be broken.

Floating-point errors are usually measured in _absolute error_ or _relative error_. This paper focuses on classification problems, where the output numbers are probabilities between 0 and 1. Thus, we use the absolute error \(|x-y|\) to quantify the difference between two floating point numbers \(x\) and \(y\).

## 4 Theory

Suppose \(M_{16}\) and \(M_{32}\) are 16-bit and 32-bit deep learning models trained with the same neural network architecture and hyperparameters. By abuse of notation, we treat \(M_{16}\) and \(M_{32}\) as classifiers, i.e., functions that return the probability vector from the last layer given an input \(x\) (e.g., an image). Let \(x\) be an arbitrarily chosen input. Suppose \(M_{32}(x)\) returns \((p_{0},\cdots,p_{N-1})\) and \(M_{16}(x)\) returns \((p_{0}^{\prime},\cdots,p_{N-1}^{\prime})\). Clearly, the classification result for an input \(x\) made by a classifier \(M\) is given as

\[\text{pred}(M,x)\stackrel{\text{def}}{=}\operatorname{argmax}_{i}\{p_{i}\mid p_{i}\in M(x)\}. \tag{3}\]

Due to the floating-point error, the classification results of \(M_{32}\) and \(M_{16}\) on \(x\) can be different. To quantify this difference, we define the _floating point error_ as follows.

**Definition 4.1**.: Given the 16-bit classifier \(M_{16}\), the 32-bit classifier \(M_{32}\), and an input \(x\), the _floating point error_ between the classifiers is given as

\[\delta(M_{32},M_{16},x)\stackrel{\text{def}}{=}|M_{32}(x)-M_{16}(x)|_{\infty} \tag{4}\]

The degree to which this difference affects the outcome is an important question we investigate in this work. In fact, we can show a sufficient condition (denoted by C) that guarantees the absence of any difference between the two classifiers.

**(C.)**: If the difference between the largest \(p_{i}\) and the second largest is greater than twice the floating point error, then the two classifiers \(M_{16}\) and \(M_{32}\) have the same classification result on \(x\).

Illustration of condition (C): suppose \(M_{32}(x)=(0.8,0.1,0.05,0.05)\) for a classification problem with four labels.
Let the largest error between this probability vector and \(M_{16}(x)\) be \(\delta\). Then, in the worst case, 0.8 can drop to \(0.8-\delta\) in 16-bit, and the second-largest probability becomes \(0.1+\delta\). If \(0.8-\delta>0.1+\delta\), then \(M_{16}\) and \(M_{32}\) must have the same classification result on \(x\), e.g., \(M_{16}(x)=(0.7,0.15,0.1,0.05)\).

Below, we formalize condition (C) after introducing the notion of _error tolerance_.

**Definition 4.2**.: The _error tolerance_ of a classifier \(M\) with respect to an input \(x\) is defined as the gap between the largest probability and the second-largest one:

\[\Gamma(M,x)\stackrel{\text{def}}{=}p_{0}-p_{1}, \tag{5}\]

where \(p_{0}=|M(x)|_{\infty}\) and \(p_{1}=|M(x)\backslash p_{0}|_{\infty}\). Here, \(M(x)\backslash p_{0}\) refers to the vector of elements of \(M(x)\) with \(p_{0}\) removed. The error tolerance can be thought of as quantifying the stability of the prediction. We have the following lemma corresponding to condition (C).

**Lemma 4.3**.: Consider a classification problem characterized by a pair \((X,Y)\), where \(X\) is the space of input data and \(Y=\{0,\ldots,N-1\}\) is the set of classification labels. Suppose a learning algorithm trains a 32-bit model \(M_{32}:X\rightarrow\mathbf{F}_{32}^{N}\) and a 16-bit model \(M_{16}:X\rightarrow\mathbf{F}_{16}^{N}\) on a dataset \(D\subseteq X\times Y\). If

\[\Gamma(M_{32},x)\geq 2\delta(M_{32},M_{16},x) \tag{6}\]

then \(\operatorname{pred}(M_{32},x)=\operatorname{pred}(M_{16},x)\).

Proof.: Let \(M_{32}(x)\) be \((p_{0},\ldots,p_{N-1})\). Without loss of generality, we assume \(p_{0}\) is the largest of the \(p_{i}\) (\(0\leq i\leq N-1\)). We denote \(\delta(M_{32},M_{16},x)\) by \(\delta\) hereafter. Following Eq. 6, we have

\[\forall i\in\{1,\ldots,N-1\},\quad p_{0}-p_{i}\geq 2\delta. \tag{7}\]

Let \(M_{16}(x)\) be \((p^{\prime}_{0},\ldots,p^{\prime}_{N-1})\). Then for each \(i\in\{1,\ldots,N-1\}\), we have

\[\begin{aligned} p^{\prime}_{0}&\geq p_{0}-\delta&&\text{by Eq. 4 and the definitions of }p_{0},p^{\prime}_{0}\\ &\geq p_{i}+\delta&&\text{by Eq. 5 and Eq. 7}\\ &\geq p^{\prime}_{i}&&\text{by Eq. 4}\end{aligned}\]

Thus \(p^{\prime}_{0}\) remains the largest element of \(M_{16}(x)\), so \(\operatorname{pred}(M_{16},x)=\operatorname{pred}(M_{32},x)\). ∎

\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline & \multicolumn{4}{c}{Floating-point error} & \multicolumn{4}{c}{Error tolerance} \\ \cline{2-9} Epochs & Min & Max & Mean & Variance & Min & Max & Mean & Variance \\ \hline 10 & 0.00E+00 & 1.68E-01 & 2.76E-03 & 5.62E-05 & 9.29E-05 & 1.00E+00 & 7.66E-01 & 7.58E-02 \\ 20 & 7.84E-15 & 2.07E-01 & 2.88E-03 & 8.53E-05 & 3.65E-05 & 1.00E+00 & 8.24E-01 & 6.31E-02 \\ 50 & 0.00E+00 & 3.82E-01 & 3.69E-03 & 1.97E-04 & 2.74E-06 & 1.00E+00 & 8.79E-01 & 4.72E-02 \\ 100 & 0.00E+00 & 5.64E-01 & 4.12E-03 & 3.27E-04 & 3.24E-04 & 1.00E+00 & 9.16E-01 & 3.42E-02 \\ 200 & 0.00E+00 & 6.75E-01 & 4.23E-03 & 4.76E-04 & 1.93E-04 & 1.00E+00 & 9.47E-01 & 2.19E-02 \\ 500 & 0.00E+00 & 9.35E-01 & 3.76E-03 & 6.18E-04 & 1.44E-03 & 1.00E+00 & 9.79E-01 & 7.72E-03 \\ 1000 & 0.00E+00 & 9.95E-01 & 3.14E-03 & 5.77E-04 & 1.91E-03 & 1.00E+00 & 9.91E-01 & 2.98E-03 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The columns under “Floating-point error” and “Error tolerance” report statistics of \(\delta(M_{32},M_{16},x)\) and \(\Gamma(M_{32},x)\), respectively, where \(x\) ranges over the images of the MNIST dataset.
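The quantities in Definitions 4.1 and 4.2, and the condition of Lemma 4.3, are straightforward to compute. The following sketch (our own illustration, reusing the running example from condition (C)) makes them concrete.

```python
import numpy as np

def pred(probs):
    """Eq. 3: the index of the largest probability."""
    return int(np.argmax(probs))

def fp_error(p32, p16):
    """Eq. 4: the max-norm difference between the two output vectors."""
    return float(np.max(np.abs(np.asarray(p32) - np.asarray(p16))))

def tolerance(probs):
    """Eq. 5: gap between the largest and second-largest probability."""
    top2 = np.sort(np.asarray(probs))[-2:]
    return float(top2[1] - top2[0])

p32 = [0.80, 0.10, 0.05, 0.05]          # M32(x)
p16 = [0.70, 0.15, 0.10, 0.05]          # M16(x)
delta = fp_error(p32, p16)              # 0.10
assert tolerance(p32) >= 2 * delta      # Eq. 6 holds: 0.70 >= 0.20
assert pred(p32) == pred(p16)           # so the predictions agree
```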
Below, we illustrate Lemma 4.3 through a simple neural network trained on MNIST. Our 32-bit implementation has three Dense layers followed by a softmax layer at the end. Our 16-bit implementation uses the same architecture, except that all floating-point operations are performed in 16-bit. Table 2 shows our results for the error tolerance \(\Gamma\) and the floating-point error \(\delta\). Observe that the mean floating-point error is of magnitude 1E-3 with a variance of 1E-5 to 1E-4, while the error tolerance is of magnitude 1E-1 with a variance of 1E-2. Thus, one can argue that Eq. 6, namely \(\Gamma\geq 2\delta\), holds for most data in MNIST. The table also shows that floating-point errors can be larger than the tolerance in some corner cases. Thus, we expect our 16-bit and 32-bit implementations to have close but not identical accuracy results.

We present results for both training and testing. Table 3 shows the accuracy and loss results on MNIST, comparing the 16-bit and 32-bit implementations. We can see consistently that the 16-bit models achieve accuracy close to that of the 32-bit models, sometimes even better. This result motivates us to study whether the 16-bit model remains similar to the 32-bit model for more complex neural networks.

In theory, if 80% of the data in a dataset satisfies Eq. 6, Lemma 4.3 tells us that the 32-bit and 16-bit models will agree on at least 80% of their classification results. The main challenge here is that we cannot determine a priori whether Eq. 6 always holds, or what percentage of the data satisfies it. We believe that for complex neural networks, most data meet Eq. 6. This is because the loss function for the classification problem is a cross-entropy of the form \(-\Sigma\log(p_{i})\), which guides \(p_{i}\) toward \(1\) during training and, in turn, causes a large error tolerance \(\Gamma\) compared to the relatively small \(\delta\). In fact, Table 2 shows that the floating point errors (the \(\delta\)'s) are nearly two orders of magnitude smaller than the \(\Gamma\)'s. Although the gap might close over the epochs, the difference remains sufficiently large to satisfy the condition of the lemma. With this theoretical development and these observations, we propose the following conjecture.

**Conjecture**: The accuracy of a 16-bit neural network for classification problems, in the absence of significant errors involving floating-point overflow/underflow, will be close to that of a 32-bit neural network.

We anticipate situations where floating-point errors become significant due to overflow or underflow, since 16-bit floating point is known to have a smaller range (Table 1). The conjecture may be surprising, so we devote the next section to in-depth validation.

## 5 Experiments

We aim to compare the performance of 16-bit operations to that of 32-bit operations in deep neural network (DNN)\({}^{2}\) and convolutional neural network (CNN) models. We experiment with three CNN models: AlexNet (Krizhevsky et al., 2012), VGG16 (Simonyan and Zisserman, 2015), and ResNet-34 (He et al., 2016). We examine how the different precision settings affect the computational time and accuracy of the models. Unlike the case studies in the previous section, we use 100 epochs for all experiments and gradually increase the batch size from 64 to 384 to see how it affects 16-bit training. All random seeds used in this study's experiments are fixed to facilitate comparison. The experiments were conducted on an NVIDIA RTX 3080 Laptop GPU.

Footnote 2: While DNNs subsume CNNs, we use the term DNN to refer to fully-connected, non-convolutional neural networks.

Table 4 gives the results most representative of our work. A more detailed description and analysis of these results follows in the subsequent subsections.
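For reference, the sketch below shows one way to configure such a pure 16-bit model in TensorFlow/Keras. It is an illustrative reconstruction rather than our exact training script; the layer sizes follow the MNIST DNN described in Section 5.1, and \(\epsilon=10^{-3}\) follows the optimizer discussion there.

```python
import tensorflow as tf

# Make every layer compute and store its weights in float16; unlike
# mixed-precision training, no float32 master copy is kept.
tf.keras.backend.set_floatx('float16')

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),  # 16-bit softmax
])

# epsilon is raised from the 1e-7 default to 1e-3 so the denominator
# of Adam's update does not overflow in float16 (see the discussion
# of limitations below).
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, epsilon=1e-3),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
```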
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Train accuracy} & \multicolumn{2}{c}{Test accuracy} & \multicolumn{2}{c}{Train loss} & \multicolumn{2}{c}{Test loss} \\ \cline{2-9} Epochs & 32-bit & 16-bit & 32-bit & 16-bit & 32-bit & 16-bit & 32-bit & 16-bit \\ \hline 10 & 90.3\% & 90.0\% & 90.8\% & 91.0\% & 3.49E-01 & 3.69E-01 & 3.23E-01 & 3.39E-01 \\ 20 & 92.2\% & 92.3\% & 92.8\% & 92.8\% & 2.71E-01 & 2.94E-01 & 2.57E-01 & 2.74E-01 \\ 50 & 94.9\% & 95.3\% & 94.9\% & 94.7\% & 1.81E-01 & 2.10E-01 & 1.80E-01 & 2.04E-01 \\ 100 & 96.7\% & 96.4\% & 96.3\% & 95.8\% & 1.16E-01 & 1.49E-01 & 1.28E-01 & 1.52E-01 \\ 200 & 98.4\% & 97.3\% & 97.4\% & 97.3\% & 6.05E-02 & 9.32E-02 & 8.97E-02 & 1.10E-01 \\ 500 & 99.8\% & 99.1\% & 97.7\% & 98.1\% & 1.38E-02 & 4.00E-02 & 7.87E-02 & 8.54E-02 \\ 1000 & 100.0\% & 99.8\% & 97.8\% & 98.1\% & 2.97E-03 & 2.04E-02 & 9.02E-02 & 8.29E-02 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparing accuracy and loss results between 32-bit and 16-bit neural networks on the MNIST dataset.

Figure 1: Top-1 accuracy (top row) and computational time (bottom row) for MNIST classification using DNNs.

Figure 2: Top-1 accuracy, top-2 accuracy, and computational time for CIFAR-10 classification.

### DNN Experiments

In the DNN experiments, we train a DNN with three hidden layers on the MNIST classification task and compare the performance of the 16-bit model with that of the 32-bit model. Each of the three layers has 4096 neurons, whose outputs are fed to the next layer after passing through a ReLU activation. Other works, such as (Micikevicius et al., 2018), use a 32-bit softmax layer regardless of the overall precision settings to prevent potential numerical instability, but we stick with a 16-bit softmax to see how a "pure" 16-bit model fares against 32-bit ones. We also vary the optimizer among RMSProp, Adam, and SGD to examine its effect on performance. In addition, we confirm from this experiment that not only SGD but also other optimizers, such as RMSProp and Adam, can be used in 16-bit if \(\epsilon\) is set properly. In all experiments in this section, we use a learning rate of \(10^{-3}\) and fix \(\epsilon\) in RMSProp and Adam to \(10^{-3}\) as well. We direct readers to the Appendix for experiments in other settings. Finally, in addition to the 32-bit baseline, we also compare against the mixed-precision training algorithm proposed by (Micikevicius et al., 2018) (as provided by TensorFlow).

Figure 1 shows that 16-bit deep neural networks are better than 32-bit and mixed precision in terms of computational time while maintaining similar test accuracy. In detail (see Figure 3 in the Appendix), at the smallest batch size of 64, 16-bit SGD decreased computational time by 40.9% and increased test accuracy by 0.6% compared to 32-bit, while compared to mixed precision it decreased computational time by 59.8% and increased accuracy by 0.59%. Other batch sizes yielded consistent trends, albeit with smaller magnitudes (Keskar et al., 2017). Optimizer-wise, 16-bit RMSProp reduced the runtime by 40.4% and 59.6% compared to 32-bit and mixed precision, respectively, and increased accuracy by 0.3% and 0.4%. 16-bit Adam reduced the runtime by 41.6% and 58.1%, and improved accuracy by 6.4% and 7.5%, compared to 32-bit and mixed precision, respectively.
Of the 33 experimental groups across the three optimizers, 31 decreased running time by more than 40% compared to 32-bit while maintaining similar accuracy. In every case, both the computational time and the test accuracy of the 16-bit deep neural networks were better than those of the 32-bit and mixed-precision models.

### CNN Experiments

The 16-bit CNN experiments were conducted to determine whether 16-bit precision is sufficient for training on a more complex image classification problem, and how numerically different the results are from 32-bit. We used the CIFAR-10 dataset and three convolutional neural network (CNN) models: AlexNet, VGG16, and ResNet-34. All of these experiments were carried out using Adam in a 16-bit environment. Since the batch normalization (BN) layer is not available as an off-the-shelf 16-bit module, the experiments focused on CNN models that can be used without a batch normalization layer. See the Appendix for the treatment of 16-bit BN implementations.

#### 5.2.1 Training Time and Accuracy

Figure 2 shows that 16-bit operations can also be applied to CNN models for image classification: the 16-bit CNNs are better than 32-bit and mixed precision in terms of computational time while maintaining similar test accuracy.

**AlexNet**. At the smallest batch size of 64, 16-bit AlexNet's top-1 and top-2 accuracy increased by 0.9% and 0.6% compared to 32-bit, respectively, while training time decreased by 29.1%. Across all batch sizes, the running time of 16-bit AlexNet was reduced by more than 29% compared to its 32-bit counterpart.

**VGG16**. In terms of top-1 and top-2 accuracy, 16-bit VGG16 improved by 1.6% and 0.6% compared to 32-bit, respectively, while training time decreased by 43.7%. All computational times of 16-bit VGG16 decreased by more than 40% compared to 32-bit.

**ResNet-34**. 16-bit ResNet-34 lost 0.6% and 0.4% in top-1 and top-2 accuracy, respectively, while computational time decreased by 44.7% compared to 32-bit. The running times of 16-bit ResNet-34 were reduced by more than 39%.

Overall, across the 33 experimental sets, the largest decrease in 16-bit top-1 accuracy compared to 32-bit was 2.6% (AlexNet, batch size 384), and the largest increase was 2.7% (ResNet-34, batch size 192). The smallest reduction in running time compared to 32-bit was 29.1% (AlexNet, batch size 64), and the largest was 45.6% (VGG16, batch size 288). In other words, every 16-bit experimental group reduced running time by at least 29.1%. This experiment shows that by using 16-bit operations in image classification with CNN models, the running time (whether training or testing) can be greatly reduced while maintaining similar or higher accuracy compared to 32-bit operations. Our results demonstrate the efficiency of training CNN models in a low-precision setting.

\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Results & FP32 & FP16 \\ \hline AlexNet & Time & 381s & 270s \\ & Accuracy & 68.9\% & 69.5\% \\ \hline VGG16 & Time & 1445s & 812s \\ & Accuracy & 82.9\% & 84.3\% \\ \hline ResNet-34 & Time & 1914s & 1058s \\ & Accuracy & 76.6\% & 76.1\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of the time and accuracy performance of the three CNNs.

#### 5.2.2 Model Size

The preservation of trained model weights is an important aspect of contemporary deep learning. For both training and storage, it is useful if a model of similar accuracy takes up less space.
Table 5 shows that the stored 16-bit model is about half the size of the 32-bit one, faithfully reflecting the half-precision size reduction. A 16-bit neural network thus halves the model size while maintaining similar accuracy. This opens up the possibility for 16-bit models to afford more complex and accurate architectures within the same budget.

### Limitations and Discussion

This subsection reports the limitations we encountered while using 16-bit neural networks.

**Light hyperparameter tuning**. During our experiments, we did not need to tune the hyperparameters of 16-bit neural network training, e.g., learning rates, except for the epsilon in the optimizer settings. The SGD (Goodfellow et al., 2016) optimizer has no epsilon, so it can be used directly. For the other optimizers we tested, RMSProp (Hinton et al.) and Adam (Kingma & Ba, 2015), epsilon needs to be changed: its default value in TensorFlow, 1E-7, can easily trigger significant inaccuracy in 16-bit training. In detail, the parameter epsilon corresponds to \(\epsilon\) in the weight updates below:

\[\text{RMSProp}:\quad w_{t}=w_{t-1}-\eta\frac{g_{t}}{\sqrt{v_{t}}+\epsilon}\]
\[\text{Adam}:\quad w_{t}=w_{t-1}-\eta\frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}}+\epsilon}\]

These optimizers introduced \(\epsilon\) to enhance numerical stability, but \(\epsilon\) = 1E-7 in the denominator causes floating-point overflow when \(v_{t}\) in RMSProp or \(\hat{v}_{t}\) in Adam is close to 0. As mentioned previously, we set \(\epsilon\) = 1E-3 for 16-bit training.

**Missing 16-bit batch normalization**. TensorFlow's current batch normalization layer does not directly support pure 16-bit operations. The available layer presumably originates from mixed-precision work: it casts 16-bit input values to 32-bit for calculation and then downcasts the result to 16 bits. This type conversion only adds runtime overhead due to the intermediate 32-bit computation. Since this work aims to report results on pure 16-bit neural networks, we had to implement a 16-bit batch normalization layer ourselves. Our implementation of 16-bit batch normalization can be found in the Appendix.

**Batch size**. Our CNN experiments show that the accuracy of 16-bit neural networks decreases at larger batch sizes. This may be due to the precision loss incurred when averaging the cost over a larger number of samples in the mini-batches. This is probably a minor limitation, since neural networks in memory-constrained environments usually do not use large batch sizes.

Despite these limitations, we have confirmed that 16-bit neural networks learn as well as 32-bit ones with only minor fine-tuning (the \(\epsilon\) in the optimizers). Thus, we believe ML practitioners can readily benefit from pure 16-bit networks when solving optimization problems, since they are faster and less memory-consuming, yet achieve accuracy similar to 32-bit. Instead of spending the same amount of time and space as 32-bit, one can form a network with an enhanced cost-to-benefit ratio.
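For completeness, the sketch below shows the general shape of such a layer: a batch normalization that computes everything in float16. It is a simplified illustration (training-mode statistics only, no moving averages), not the implementation in our Appendix.

```python
import tensorflow as tf

class BatchNorm16(tf.keras.layers.Layer):
    """Pure float16 batch normalization over the batch axis (sketch)."""
    def __init__(self, eps=1e-3, **kwargs):
        super().__init__(dtype='float16', **kwargs)
        self.eps = eps  # kept large-ish for float16 stability

    def build(self, input_shape):
        c = input_shape[-1]
        self.gamma = self.add_weight('gamma', shape=(c,),
                                     dtype='float16', initializer='ones')
        self.beta = self.add_weight('beta', shape=(c,),
                                    dtype='float16', initializer='zeros')

    def call(self, x):
        # All statistics stay in float16; no upcast to float32.
        mean = tf.reduce_mean(x, axis=0)
        var = tf.reduce_mean(tf.square(x - mean), axis=0)
        x_hat = (x - mean) / tf.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta
```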
## 6 Conclusion

In this work, we have shown that for classification problems, a 16-bit floating-point neural network can achieve accuracy close to that of a 32-bit one. We proposed a conjecture about this closeness in accuracy and validated it both theoretically and empirically. Our experiments also suggest a small accuracy gain in some cases, possibly due to the regularizing effect of lowering the precision. These findings show that using 16-bit precision is much safer and more efficient than commonly perceived: runtime and memory consumption are lowered significantly with little or no loss in accuracy. In the future, we plan to expand our work to verify similar characteristics in other types of architectures and problems, such as generative models and regression.
2304.10618
ULEEN: A Novel Architecture for Ultra Low-Energy Edge Neural Networks
The deployment of AI models on low-power, real-time edge devices requires accelerators for which energy, latency, and area are all first-order concerns. There are many approaches to enabling deep neural networks (DNNs) in this domain, including pruning, quantization, compression, and binary neural networks (BNNs), but with the emergence of the "extreme edge", there is now a demand for even more efficient models. In order to meet the constraints of ultra-low-energy devices, we propose ULEEN, a model architecture based on weightless neural networks. Weightless neural networks (WNNs) are a class of neural model which use table lookups, not arithmetic, to perform computation. The elimination of energy-intensive arithmetic operations makes WNNs theoretically well suited for edge inference; however, they have historically suffered from poor accuracy and excessive memory usage. ULEEN incorporates algorithmic improvements and a novel training strategy inspired by BNNs to make significant strides in improving accuracy and reducing model size. We compare FPGA and ASIC implementations of an inference accelerator for ULEEN against edge-optimized DNN and BNN devices. On a Xilinx Zynq Z-7045 FPGA, we demonstrate classification on the MNIST dataset at 14.3 million inferences per second (13 million inferences/Joule) with 0.21 $\mu$s latency and 96.2% accuracy, while Xilinx FINN achieves 12.3 million inferences per second (1.69 million inferences/Joule) with 0.31 $\mu$s latency and 95.83% accuracy. In a 45nm ASIC, we achieve 5.1 million inferences/Joule and 38.5 million inferences/second at 98.46% accuracy, while a quantized Bit Fusion model achieves 9230 inferences/Joule and 19,100 inferences/second at 99.35% accuracy. In our search for ever more efficient edge devices, ULEEN shows that WNNs are deserving of consideration.
Zachary Susskind, Aman Arora, Igor D. S. Miranda, Alan T. L. Bacellar, Luis A. Q. Villon, Rafael F. Katopodis, Leandro S. de Araujo, Diego L. C. Dutra, Priscila M. V. Lima, Felipe M. G. Franca, Mauricio Breternitz Jr., Lizy K. John
2023-04-20T19:40:01Z
http://arxiv.org/abs/2304.10618v1
# ULEEN: A Novel Architecture for Ultra Low-Energy Edge Neural Networks

###### Abstract

The deployment of AI models on low-power, real-time edge devices requires accelerators for which energy, latency, and area are all first-order concerns. There are many approaches to enabling deep neural networks (DNNs) in this domain, including pruning, quantization, compression, and binary neural networks (BNNs), but with the emergence of the "extreme edge", there is now a demand for even more efficient models. In order to meet the constraints of ultra-low-energy devices, we propose ULEEN, a model architecture based on weightless neural networks. Weightless neural networks (WNNs) are a class of neural model which use table lookups, not arithmetic, to perform computation. The elimination of energy-intensive arithmetic operations makes WNNs theoretically well suited for edge inference; however, they have historically suffered from poor accuracy and excessive memory usage. ULEEN incorporates algorithmic improvements and a novel training strategy inspired by BNNs to make significant strides in improving accuracy and reducing model size. We compare FPGA and ASIC implementations of an inference accelerator for ULEEN against edge-optimized DNN and BNN devices. On a Xilinx Zynq Z-7045 FPGA, we demonstrate classification on the MNIST dataset at 14.3 million inferences per second (13 million inferences/Joule) with 0.21 \(\mu\)s latency and 96.2% accuracy, while Xilinx FINN achieves 12.3 million inferences per second (1.69 million inferences/Joule) with 0.31 \(\mu\)s latency and 95.83% accuracy. In a 45nm ASIC, we achieve 5.1 million inferences/Joule and 38.5 million inferences/second at 98.46% accuracy, while a quantized Bit Fusion model achieves 9230 inferences/Joule and 19,100 inferences/second at 99.35% accuracy. In our search for ever more efficient edge devices, ULEEN shows that WNNs are deserving of consideration.

## I Introduction

In the last decade, deep neural networks (DNNs) have driven revolutionary improvements in domains including image classification, natural language processing (NLP), and medical diagnostics, in some cases achieving superhuman accuracy [1]. This advancement has been driven by an exponential increase in the size and complexity of the models we can train. At the same time, there is a second movement in the opposite direction: deployment of AI models on _smaller and smaller devices_. Techniques such as pruning, compression, low-precision quantization [2], and Once-for-All [3] can greatly reduce the memory requirements and computational demands of large models, enabling their use on low-power, resource-constrained edge devices. However, on the "extreme edge", where computation is performed directly adjacent to physical sensors [4], energy budgets are minuscule and efficiency is of utmost importance. Such devices are often deployed in inaccessible environments, and may be expected to last years or decades on a small battery, or by harvesting energy directly from the phenomena they measure [5].

Binary neural networks (BNNs) [6, 7, 8, 9, 10] take quantization to its logical extreme, reducing network weights and activations to single-bit values. By replacing energy-intensive multiplication with XNOR operations, they achieve energy efficiency orders of magnitude better than DNNs. However, like DNNs, BNNs must propagate activations through many layers of computation. This may still present a significant critical path for real-time edge applications.
Weightless Neural Networks (WNNs) are a distinct class of neural model which perform computation primarily using lookup tables, rather than arithmetic or logical functions. Individual weightless neurons, also known as RAM nodes, concatenate binary inputs to form an address into their lookup table, and produce a binary response. WNNs are inspired by the dendritic trees of biological neurons, which perform highly nonlinear decode processing [11]. Unlike traditional or binary neurons, RAM nodes are capable of learning _nonlinear_ functions of their inputs.

The concept of WNNs is not new - the earliest implementations date to the 1950s [12]. Their simple structure was well-suited to early VLSI techniques, and they achieved some prominence in the 1970s-80s. However, these early WNNs were surpassed by DNNs in both accuracy and memory efficiency by the 1990s. Thus, while the simplicity and non-linearity of WNNs make them appealing for deployment on the edge, algorithmic and architectural enhancements are needed to make them comparable or superior to optimized DNN and BNN models.

In this paper, we demonstrate techniques to improve the accuracy and reduce the hardware requirements of WNNs. We present a weightless neural architecture we call ULEEN - Ultra Low Energy Edge Networks, suitable for inference applications under extreme energy constraints. ULEEN incorporates a set of techniques not previously used in WNNs, including efficient ensembles, pruning, and a multi-epoch gradient-based training algorithm using the straight-through estimator [13]. We also improve upon prior work by including counting Bloom filters with hardware-friendly hashing, a novel nonlinear thermometer encoding, and threshold-based one-shot training [14, 15]. We present FPGA and ASIC implementations of an inference accelerator for this architecture and compare it against state-of-the-art efficient inference platforms for DNNs and BNNs. Our specific contributions in this paper are as follows:

1. ULEEN, a weightless neural architecture which incorporates principles from prior WNNs, BNN-inspired training methodologies, and further algorithmic enhancements. Through these improvements, we increase the best reported WNN accuracy on the MNIST dataset from 91.5% to 98.46%.
2. An energy-efficient inference accelerator architecture for ULEEN which can be implemented on an FPGA or as an ASIC.
3. Comparisons of our ULEEN accelerator with state-of-the-art quantized, optimized DNNs and BNNs on ASIC and FPGA. In comparison to FINN [8] (FPGA), we demonstrate 1.4-2.6x improved latency, 1.2-2.6x improved throughput, and 6.8-8.5x improved energy at equal or better accuracy. In comparison to Bit Fusion [16] (ASIC), we demonstrate 2014-19549x improved throughput and 479-663x improved energy with <1% reduction in accuracy.
4. A toolchain for generating ULEEN models using either single- or multi-shot training rules, and a second toolchain which produces RTL from trained ULEEN models using our accelerator architecture. These are available at: _URL omitted for double-blinding_.

The remainder of our paper is organized as follows: In Section II, we provide additional background on WNNs and one specific WNN, WiSARD, which we use as the base model for our improvements. In Section III, we present the ULEEN architecture in detail. In Section IV, we discuss software and hardware implementation details.
In Section V, we compare our accelerator architecture against prior DNN and BNN accelerators and our model architecture against prior memory-efficient WNNs, and provide additional results exploring the performance of our models. In Section VI, we give some additional context on the prior work in this domain. Lastly, in Section VII, we discuss future work and conclude.

## II Background

**Weightless neural networks (WNNs)** are neural models which perform computation using lookup tables. The basic computational unit, the RAM node, is an \(n\)-input lookup table with \(2^{n}\) learned 1-bit entries. Each permutation of the contents of this table represents a unique Boolean function, meaning there are \(2^{2^{n}}\) possible functions for a single RAM node, many of which are nonlinear. By contrast, a single XNOR-and-popcount neuron in a BNN can only learn one of the \(n\cdot 2^{n}\) linear functions of its inputs. The downside of this expressiveness is that the size of a RAM node grows exponentially with its number of inputs, quickly becoming intractable. Therefore, layers in a WNN are typically composed of many RAM nodes which are each only sensitive to a subset of the layer's inputs.

Training a WNN entails learning Boolean functions in its component RAM nodes. Many approaches have been explored for this, including both supervised [17] and unsupervised [18] techniques. Most typically, WNNs are trained using a supervised one-shot approach. All RAM nodes begin filled with zeros. Encoded inputs are presented sequentially to the network. When a RAM node receives a training input, it sets the corresponding location in its memory to 1. Note that presenting the same pattern to a node again has no further effect; thus, there is no advantage to having multiple epochs of training. By leveraging one-shot training techniques, WNNs can be trained up to four orders of magnitude faster than DNNs and other well-known computational intelligence models such as SVM [19].

Since WNNs only learn patterns they were exposed to during training, one might expect them to have difficulty generalizing to new data. However, although an inference sample may not be identical to any training sample, many of the "subpatterns" seen by individual RAM nodes will have also been present in training data. Therefore, as long as an inference sample is not too different from any training sample, the network can still effectively generalize. We have only described the fundamentals of WNNs here; many variants have been explored, and we refer the curious reader to [12] for a more in-depth discussion. We will, however, present one WNN model architecture, WiSARD, in detail, since it serves as the baseline for the improvements discussed in this paper.

Though not the focus of this paper, computer architects may note some similarities between WNNs and the predictors used in modern microprocessors. For instance, RAM nodes are conceptually similar to branch predictor tables, and using a concatenated input vector to index into a RAM node is conceptually similar to using a branch history register in a table-based branch predictor. Table updates in branch predictors can be viewed as an online version of the one-shot training rule.
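As an illustration (ours, not from any released toolchain), a RAM node and its one-shot update rule can be written in a few lines of Python:

```python
import numpy as np

class RAMNode:
    """An n-input weightless neuron: a lookup table of 2^n 1-bit entries."""
    def __init__(self, n):
        self.n = n
        self.table = np.zeros(2 ** n, dtype=np.uint8)

    def _address(self, bits):
        # Concatenate the n binary inputs to form a table address.
        return int(''.join(str(b) for b in bits), 2)

    def train(self, bits):
        # One-shot rule: set the addressed entry to 1. Presenting the
        # same pattern again has no further effect.
        self.table[self._address(bits)] = 1

    def respond(self, bits):
        return int(self.table[self._address(bits)])

node = RAMNode(n=3)
node.train([1, 0, 1])
assert node.respond([1, 0, 1]) == 1   # subpattern seen in training
assert node.respond([1, 1, 1]) == 0   # unseen subpattern
```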
**WiSARD**: While foundational research in WNNs began in the 1950s, the first WNN architecture to achieve broad success was WiSARD (Wilkie, Stonham, and Aleksander's Recognition Device) [20], introduced in 1981 and produced commercially from 1984. WiSARD was designed for classification tasks for purposes such as anomaly detection. A WiSARD model, as depicted in Figure 1, is composed of submodels, known as _discriminators_, which are each specialized for one of the possible output classes. These discriminators are in turn composed of \(n\)-input RAM nodes; for an \(I\)-input model, there are \(N\equiv I/n\) such nodes per discriminator. Inputs to the network are assigned to RAM nodes using a pseudo-random mapping (also referred to as reordering); typically, this same mapping is shared between all discriminators, meaning RAM nodes at the same index in different discriminators will have the same inputs.

During training, inputs are presented only to the discriminator corresponding to the correct output class. During inference, inputs are first presented to all discriminators. The outputs of the RAM nodes in each discriminator are then summed to produce response values, and the class corresponding to the discriminator with the strongest response is taken to be the prediction of the network. Figure 2 shows a simplified WiSARD model performing inference. In this example, the response from Discriminator 1 is the strongest, since the input image contains the digit "1". If an input seen during inference is identical to one seen during training, all RAM nodes of the corresponding discriminator will output 1, yielding the maximum possible response. On the other hand, if a pattern is similar but not identical to the training patterns, then some subset of the RAM nodes may produce a 0, but many will still output 1. As long as the response of the correct discriminator is still stronger than the responses of the other discriminators, the network will output a correct prediction. In practice, WiSARD has a far greater ability to generalize than simpler WNN models.

WiSARD's performance is directly related to the choice of \(n\). Small values of \(n\) force the model to only learn simple patterns, which may improve its ability to generalize but can also prevent it from effectively learning. Large values of \(n\) produce more specialized behavior, but may also result in overfitting to the training data [20]. Recent results have formally demonstrated that the VC dimension\({}^{1}\) of WiSARD is very large [22], indicating it has a large theoretical capacity to learn patterns.

Footnote 1: The Vapnik–Chervonenkis (VC) dimension measures the complexity of the knowledge represented by a set of functions that can be encoded by a binary classification algorithm [21]. While usually approximated by statistical methods, it is possible to establish the exact VC dimension for some learning methods, including WiSARD.
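Building on the RAM node sketch from above, a WiSARD discriminator and the winner-take-all prediction step might look as follows (again illustrative; the class names and structure are our own):

```python
class Discriminator:
    """One submodel per class: N = I/n RAM nodes over a pseudo-random
    reordering of the I input bits (the mapping is shared by all
    discriminators)."""
    def __init__(self, num_inputs, n, mapping):
        self.n = n
        self.mapping = mapping  # a permutation of range(num_inputs)
        self.nodes = [RAMNode(n) for _ in range(num_inputs // n)]

    def _chunks(self, x):
        reordered = [x[i] for i in self.mapping]
        return [reordered[i * self.n:(i + 1) * self.n]
                for i in range(len(self.nodes))]

    def train(self, x):
        for node, bits in zip(self.nodes, self._chunks(x)):
            node.train(bits)

    def response(self, x):
        # Sum of RAM node outputs: the discriminator's response.
        return sum(node.respond(bits)
                   for node, bits in zip(self.nodes, self._chunks(x)))

def wisard_predict(discriminators, x):
    # The class whose discriminator responds most strongly wins.
    responses = [d.response(x) for d in discriminators]
    return responses.index(max(responses))
```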
## III Proposed Design: ULEEN

The ULEEN model includes several enhancements over the baseline WiSARD model: (i) counting/continuous Bloom filters, (ii) non-linear thermometer encoding, (iii) ensembles, and (iv) pruning. Furthermore, ULEEN models can be trained in two ways: (i) an enhanced version of the conventional one-shot technique, or (ii) a novel multi-shot training algorithm, inspired by BNNs and based around backpropagation and the straight-through estimator.

### _Model Overview_

Figure 3 shows the ULEEN model at a high level, including training with the multi-shot technique. Inputs are fed to an ensemble of smaller models. We find that ensembles of small models give better accuracy than monolithic large models, and incorporate this into ULEEN's design. The aggregation of the results from the individual models is done in the "Vectorized Addition" block in the figure. During multi-shot training, we take a softmax of the outputs of the aggregation and compute a cross-entropy loss. A multi-bit thermometer encoding is used to represent inputs with greater granularity. RAM nodes are implemented using hash-based Bloom filters for compression. During multi-shot training, Bloom filters internally hold floating-point values and are binarized using a unit step function. The multi-shot training technique for ULEEN is a modified version of backpropagation based on the straight-through estimator [13]. The flow of gradients during backpropagation is shown with the green dotted arrow in the figure; the threshold units are treated as identity functions and ignored. Pruning, which eliminates the RAM nodes that contribute least to overall accuracy, is a post-training technique and is therefore not shown in Figure 3.

Fig. 1: A depiction of the WiSARD WNN model with \(M\) classes, \(n\) inputs per RAM node, and \(N\) RAM nodes per discriminator. This model has a total of \(MN\) RAM nodes and \(MN2^{n}\) bits of state.

Fig. 2: A WiSARD WNN model recognizing digits. In this example, the input image contains “1”, and the corresponding discriminator produces the strongest response.

#### III-A1 Counting/Continuous Bloom Filters

A major challenge impacting WiSARD is the exponential growth in model size as the number of inputs to each RAM node increases. However, in practice, these large RAM nodes learn Boolean functions with few minterms, meaning their contents are sparse. In Bloom WiSARD [23], it was demonstrated that hash-based data structures such as Bloom filters can be used to reduce the data footprint of RAM nodes. Bloom filters consist of a lookup table and \(k\) independent hash functions, and output 1 only if all \(k\) hashed locations corresponding to an input are 1. Since no collision checking occurs, Bloom filters may produce false positives. However, this was found to have minimal impact on accuracy. In order to enable more sophisticated training approaches, we use two variants of the Bloom filter: _counting_ and _continuous_ Bloom filters. After training, the contents of these filters are binarized, and they are replaced with conventional Bloom filters for inference. Figure 3 shows the substitution of these enhanced Bloom filters in place of conventional RAM nodes in the discriminators.

When training a WiSARD model with conventional Bloom filters, RAM entries are set the first time a pattern is seen during training and are not subsequently updated. The issue with this approach is that it treats rare patterns as equally important to common ones. Bleaching [15] is a technique in which patterns which were seen fewer than some threshold \(b\) times during training are ignored during inference. Typically, this is achieved using tables of counters. In ULEEN, while training with the one-shot strategy, we replace Bloom filters with counting Bloom filters, which use multi-bit counters instead of single-bit entries, enabling bleaching and hashing in the same model. During training, whenever a pattern is seen, the smallest of its corresponding counter values is incremented (multiple counters in the event of a tie).
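In software, the counting Bloom filter's update rule, and the thresholded response walked through next, can be sketched as follows (illustrative only; the toy hash below stands in for the H3 functions used in hardware, described below):

```python
import numpy as np

class CountingBloomFilter:
    """Counting Bloom filter with k hash functions and bleaching
    threshold b (a sketch; the hash is a stand-in for H3)."""
    def __init__(self, size, k, b, seed=0):
        self.counters = np.zeros(size, dtype=np.int32)
        self.size, self.b = size, b
        rng = np.random.default_rng(seed)
        self.params = [int(p) | 1 for p in rng.integers(1, 2**31, size=k)]

    def _indices(self, x):
        # k pseudo-independent table indices for the (integer-encoded)
        # input x.
        return [(x * p) % self.size for p in self.params]

    def train(self, x):
        idx = self._indices(x)
        smallest = min(self.counters[i] for i in idx)
        for i in idx:                 # increment every counter tied for min
            if self.counters[i] == smallest:
                self.counters[i] += 1

    def respond(self, x):
        # 1 = "possibly seen at least b times"; 0 = "definitely not".
        return int(min(self.counters[i] for i in self._indices(x)) >= self.b)
```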
Figure 4 shows the behavior of a counting Bloom filter during inference. The input to the filter is hashed, and these hashes are used to access counter values. For input \(x_{1}\), this corresponds to counter values \(\{3,4,2\}\); for \(x_{2}\), it corresponds to \(\{3,4,3\}\). The smallest accessed counter value is then found (2 for \(x_{1}\); 3 for \(x_{2}\)) and compared against a threshold value \(b\) to determine the response of the filter. For this example, the threshold is \(b=3\), so \(x_{1}\) produces an output of 0 and \(x_{2}\) an output of 1. Filter responses become “possibly seen at least \(b\) times” and “definitely not seen \(b\) times”.

When training with the multi-shot strategy, we go further by replacing Bloom filter entries with floating-point values. These _continuous_ Bloom filters produce an output of 1 if the smallest accessed entry is at least 0. Using continuous Bloom filters allows us to train using gradient-based techniques.

Since the hash functions used for our Bloom filters do not need to be cryptographically secure and have constant-length inputs, we are able to use simple **arithmetic-free** hash functions. In particular, we use functions drawn randomly from the H3 family [24], a set of hash functions which consist of simple logical operations and differ from each other only by the choice of a random parameter. Bloom WiSARD derived its independent hash functions using a double-hashing technique based on the MurmurHash [25] algorithm. Though simpler than cryptographic hash functions, MurmurHash is designed to handle variable-length inputs and requires substantial arithmetic.

Fig. 4: An example of a counting Bloom filter with three hash functions and threshold \(b=3\).

#### III-A2 Gaussian Nonlinear Thermometer Encoding

Inputs to WNNs are traditionally represented as 1-bit values: 1 if an input is greater than its mean in the training data and 0 otherwise. However, the granularity that can be provided using a single bit is very limited. Integer encodings are not a good choice, since different bits carry different amounts of information, meaning the least significant bits are essentially noise when used to form an address. Thermometer encodings are the preferred multi-bit encoding in prior work [26]. A thermometer encoding is a _unary_ coding in which a value is compared against a series of increasing thresholds, with the \(i\)'th bit of the result representing the comparison against the \(i\)'th threshold. As Figure 5 shows, input values behave like the mercury in an analog thermometer: as a value increases and passes thresholds, bits are set from least to most significant.

Fig. 3: The ULEEN model is composed of an ensemble of submodels, each of which is a WNN. Each submodel is composed of discriminators. Discriminators use continuous Bloom filters during training to allow for gradient-based weight updates. This figure shows the multi-shot training process, which uses backpropagation based on the straight-through estimator.

Most prior work using thermometer encodings uses equal intervals between the thresholds. The disadvantage of this approach is that a large number of bits may be dedicated to encoding outlying values. In ULEEN, we instead assume that each input follows a normal distribution, computing its mean and standard deviation from the training data. For a \(t\)-bit encoding, we use thresholds to divide this Gaussian into \(t+1\) regions of equal probability. The Gaussian encoding provides increased resolution for values near the center of their range. Gaussian thermometer encodings were also explored in [27] in a narrow scope; we use them more generally, showing they are useful even when the underlying data is not actually Gaussian.
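The encoding is simple to implement. The sketch below (illustrative; it assumes SciPy's `norm.ppf` for the inverse Gaussian CDF) encodes one feature into \(t\) bits:

```python
import numpy as np
from scipy.stats import norm

def gaussian_thermometer(x, mean, std, t):
    """Encode a scalar feature into t bits, with thresholds dividing a
    Gaussian fitted to the training data into t+1 equal-probability
    regions."""
    quantiles = np.arange(1, t + 1) / (t + 1)       # e.g. 1/4, 2/4, 3/4
    thresholds = norm.ppf(quantiles, loc=mean, scale=std)
    return (x > thresholds).astype(np.uint8)        # unary code

# 3-bit encoding of a standardized feature; thresholds fall at roughly
# -0.67, 0, and +0.67, giving finer resolution near the mean.
print(gaussian_thermometer(0.1, mean=0.0, std=1.0, t=3))   # [1 1 0]
```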
#### III-A3 Efficient Ensembles

Ensembles, as shown in Figure 6, combine multiple weak classifiers into a single strong classifier. Ensembles have been extensively studied in machine learning, and are the driving concept behind techniques such as Bayesian averaging, boosting, and bagging [28]. One consideration when designing an ensemble model is how the predictions of multiple submodels should be combined into a single output. As Figure 3 shows, ULEEN composes submodels by summing the response values for each discriminator across the submodels. In other words, a model with \(L\) submodels and \(M\) classes will produce an \(L\times M\) matrix of responses \(R\), and a top-level \(M\)-element response vector \(\vec{r}\triangleq R^{\mathsf{T}}\vec{1}\).

One might reasonably expect that using ensembles would increase total model size, since the number of RAM nodes is much greater. In practice, we have found this to not necessarily be the case; recall that the key strength of ensembles is their ability to combine weak classifiers. This means that individual submodels of an ensemble can be made much smaller than standalone models without harming ensemble accuracy. In an ensemble of submodels with different numbers of inputs per filter, filters with many inputs may also selectively "ignore" simple patterns where most inputs are don't-cares, relying on filters with fewer inputs to capture these patterns. This helps reduce the number of table entries needed for the Bloom filters. Since the amount of hashing required for inference increases linearly with the number of submodels, we avoid ensembles with many submodels. We also do not train ensembles with the one-shot training rule, since this rule gives poor accuracy when small Bloom filters have many inputs. Ensembles of WiSARD models were investigated in [29], but without backpropagation or multi-shot training, resulting in large models with marginal accuracy benefit.

Fig. 5: Like the mercury passing the gradations in a thermometer, in a thermometer encoding, bits are set to 1 from least to most significant as the encoded value increases.

Fig. 6: Simplified view of an ensemble model. An ensemble combines multiple weak classifiers to create a single stronger classifier.

#### III-A4 Pruning

Pruning removes parameters and connections from a model to reduce its size and complexity. Optimized DNNs [3, 30, 31, 32, 33] have applied pruning techniques with excellent results, but similar concepts have not been used in prior WNNs. After training, the correlation between each RAM node's outputs and the correct model outputs is calculated. A fixed fraction of the RAM nodes with the lowest correlation in each discriminator is removed from the model. Since this pruning reduces the maximum possible response of a discriminator, we next learn a set of integer biases which are added to the outputs of the discriminators. Finally, we fine-tune the remaining RAM nodes using the multi-shot training process. In our experiments, we observed that we could typically prune around 30% of RAM nodes, reducing model size proportionately, with minimal impact on accuracy. Though this does not approach the degree of pruning frequently possible for DNNs, it is still a significant reduction in ULEEN's memory requirement. When used in an ensemble, the biases can be summed across the submodels, meaning the only runtime overhead for pruning is a single addition.
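The selection step of this pruning procedure can be sketched as follows (our own illustration; the bias learning and multi-shot fine-tuning that follow are omitted):

```python
import numpy as np

def select_nodes(node_outputs, labels, keep_frac=0.7):
    """Rank RAM nodes by |correlation| between their 0/1 outputs and
    the correct-class indicator, keeping the top fraction."""
    scores = [abs(np.corrcoef(out, labels)[0, 1]) for out in node_outputs]
    order = np.argsort(scores)[::-1]          # strongest correlation first
    kept = order[: int(len(order) * keep_frac)]
    return sorted(int(i) for i in kept)       # indices of surviving nodes

# node_outputs: one 0/1 response vector per RAM node over the training
# set; labels: 1 where the sample belongs to this discriminator's class.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
node_outputs = [rng.integers(0, 2, size=1000) for _ in range(10)]
print(select_nodes(node_outputs, labels))     # e.g. 7 of 10 nodes kept
```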
The one-shot technique is similar to how WiSARD models are traditionally trained. The multi-shot technique provides better accuracy, particularly for larger datasets, and is necessary in order to use ensemble models and pruning, but does require much more training time than the one-shot approach. #### Iii-B1 One-shot Training The one-shot training technique is a computationally simple approach which is similar to how prior WNNs were trained. The process is summarized in Figure (a)a. Hyperparameters include the thermometer encoding and the configuration information for the Bloom filters (number of inputs, table size, and number of hash functions). During training, encoded input samples are sequentially presented to the discriminator corresponding to the correct output class. Counting Bloom filters update their contents by incrementing counters according to the method discussed previously (Section III-A1). After training, patterns that were rarely seen during training are discarded using bleaching [15]. Bleaching finds a value \(b\) such that all patterns seen fewer than \(b\) times during training are discarded. If bleaching is not used, almost all RAM node entries may become 1 given a large training dataset, harming generalization. This behavior is known as _saturation_. WNNs which do not use bleaching need careful manual feature extraction and data selection to avoid saturation. We use a binary search strategy to find the value of \(b\) which maximizes model accuracy on the validation set. Afterwards, all filter values less than \(b\) are replaced with 0, and all remaining filter values are replaced with 1. This allows the counting Bloom filters to be replaced with conventional binary Bloom filters for inference, simplifying the hardware. #### Iii-B2 Multi-shot Training While the one-shot technique is efficient, it is ultimately limited by its inability to consider feedback during training. However, conventional gradient-based techniques do not work for WNNs since the outputs of Bloom filters are binary rather than continuous. A similar problem also impacts BNNs, which led to the development of gradient-based training algorithms that worked with binary weights. We base our multi-shot training technique on the BNN training algorithm described in [7]. ULEEN uses continuous Bloom filters, with floating-point entries between -1.0 and 1.0, for training with the multi-shot technique. During the forward training pass, entries are binarized using a unit step function: \[f(x)=\begin{cases}0&x<0\\ 1&x\geq 0\end{cases}\] Backpropagation does not work with the unit step function, since its derivative is 0 everywhere except \(x=0\) (where it is infinite). However, when computing gradients, we instead treat the unit step function as the identity function \(f(x)=x\), meaning \(f^{\prime}(x)=1\). This technique, known as the _straight-through estimator_, has been used for training low-precision and binary networks [13], but we are the first to use it for WNNs. Since our model is only a single layer, we do not have to propagate gradients through the indexing and hash operations. Figure (b)b summarizes our multi-shot training process. After training, models are pruned and fine-tuned according to the method discussed in Section III-A4. After pruning, the continuous Bloom filters are binarized by applying the unit step function. Models were trained using the Adam optimizer [34] with base learning rate \(10^{-3}\). Weights were initialized with uniform distribution \(\mathcal{U}(-1,1)\). 
Models were trained using the Adam optimizer [34] with a base learning rate of \(10^{-3}\). Weights were initialized from the uniform distribution \(\mathcal{U}(-1,1)\). To prevent overfitting, we added dropout regularization [35] (\(p=0.5\)) to the outputs of the filters. For the MNIST [36] dataset, we also experimented with a simple form of data augmentation: we made 9 copies of each image in the training set, shifted horizontally and vertically by between -1 and 1 pixels.

### _Inference Accelerator Architecture_

Figures 8 and 9 show the block diagram of our pipelined ULEEN inference accelerator. In order to simplify the control logic, units on the chip operate in lockstep to the greatest extent possible. This means that an entire input sample must be read in before computation can begin. This deserialization is performed by the bus interface (not shown in the figure). Input data may optionally be compressed by replacing unary thermometer-encoded values with binary values representing how many bits are set. This reduces data movement from off-chip, but requires a decompression unit to recover the thermometer encoding, shown on the left in Figure 8. This unit is eliminated from the design if input data is not compressed.

Fig. 7: ULEEN supports both one- and multi-shot training techniques. (a) In one-shot training, encoded training samples are sequentially presented to the model and used to update counter values. Afterwards, a bleaching threshold is optimized to eliminate rare patterns. (b) In multi-shot training, a gradient-based update rule is used to learn counter values over multiple epochs. Next, weakly correlated filters are pruned and replaced with a constant bias, and the remaining filters are fine-tuned.

The discriminators in Figure 9 exhibit several optimizations. Though each H3 hash function in a Bloom filter requires a different random parameter, there is no disadvantage to sharing these parameters between all Bloom filters in a submodel. Therefore, a central register file (shown as “Param RF” in the figure) is used to store hash parameters. Since the input order is shared between the discriminators in a submodel, when hash parameters are also shared, all discriminators will receive the same hashed values. Therefore, it is redundant to calculate hashes for each discriminator separately; instead, we use a single central hashing block. The hash block is itself composed of many pipelined hash units which process input sequences with a throughput of 1 hash/cycle. As shown in the left part of Figure 9, these hash units perform only simple bitwise logic, with no arithmetic component. If hashing is fully parallelized, the performance of the design will be heavily bottlenecked by off-chip bandwidth. Therefore, we reduce the number of hash units to the minimum sufficient to achieve maximum throughput, and accumulate partial hash results in a buffer. Once the buffer is full and the last partial hash is available, all Bloom filters perform a lookup in lockstep.

Since hashing is moved into a central block, the discriminators themselves contain only lookup units. The lookup unit, shown at the right of Figure 9, consists of a lookup table and the hardware to perform an AND reduction. A 1-bit accumulator in each lookup unit can take as input either the output of the LUT or the AND of that output and its current contents. Once all hash lookups have been performed, the outputs of the lookup units are marked as valid. Each submodel in an ensemble must compute its own hashes, since input orders and hash input and output widths vary. Since different submodels have different table contents, sizes, and pruning, they also have their own sets of filter units.
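For intuition, an H3-style hash simply folds in a random parameter for each asserted input bit using bitwise logic, which is why the hash units need no arithmetic. The sketch below is our own illustration of the idea, not the RTL:

```python
import numpy as np

def h3_hash(x_bits, params):
    """H3-family hash: fold one random parameter per set input bit
    into the result (bitwise logic only; no adders or multipliers)."""
    h = 0
    for bit, p in zip(x_bits, params):
        if bit:          # select the parameter row for this input bit
            h ^= p       # fold it into the hash
    return h

rng = np.random.default_rng(7)
params = [int(p) for p in rng.integers(0, 2**10, size=16)]
x = [int(b) for b in rng.integers(0, 2, size=16)]   # a 16-bit input
print(h3_hash(x, params))   # an index into a 1024-entry filter table
```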
Popcounts and submodel response accumulation are performed using a vector of adder trees, shown in the right of Figure 8. Bias values are added to these results (Section III-A4), and the index of the strongest response is computed to produce a final inference result.

Fig. 8: A diagram of the ULEEN inference accelerator architecture. Input is deserialized and, if needed, decompressed, before being passed to an ensemble of submodels. The outputs of the submodels are summed and biased to get per-class responses, and the index of the strongest response is taken as the prediction.

Fig. 9: Details of each submodel in the ULEEN inference accelerator. Each submodel contains a hardware block for computing hash functions, and a number of hardware units for performing lookups. These blocks collectively compose the Bloom filters, which are divided into separate units to eliminate redundant computation.

## IV Evaluation Methodology

We compare ULEEN against state-of-the-art BNNs, quantized DNNs, and WNNs, using designs in FPGAs, ASICs, and software models. The methodology for the various comparisons is described in this section.

The FINN [8, 37] framework from Xilinx Research Labs for BNN inference on FPGAs is used as a comparison for our FPGA implementation. We use the SFC-max, MFC-max and LFC-max networks described in [8]. These are 3-layer fully-connected network topologies for classifying the MNIST dataset [36], with different numbers of neurons (SFC contains 256 neurons per layer, MFC contains 512 neurons per layer, and LFC contains 1024 neurons per layer). The "max" suffix indicates that these are performance-optimized designs intended to achieve high peak throughput. To the best of our knowledge, these are the most efficient implementations of BNNs on FPGAs for MNIST.

We compare our ASIC implementation against Bit Fusion [16, 38], a quantized DNN hardware architecture which supports dynamic precision. A highly quantized LeNet-5 architecture (2-bit; ternary) is used by Bit Fusion for classifying MNIST. This NN architecture is proposed in [39] and shown to have accuracy close to the original LeNet-5. We use a Bit Fusion design with precision optimized for this model; i.e. minimum and maximum precision are 2 bits. We identify three efficient variations of the accelerator for the workload, with configuration parameters:

* \{S=8x8, WBUF=32KB, ABUF=16KB, OBUF=8KB\}
* \{S=16x16, WBUF=64KB, ABUF=32KB, OBUF=16KB\}
* \{S=32x32, WBUF=64KB, ABUF=32KB, OBUF=16KB\}

Here, 'S' denotes the size of the systolic array in the Bit Fusion core, and WBUF, ABUF, and OBUF denote the sizes of the weight, activation, and output buffers respectively. We refer to these variations as BF8, BF16 and BF32 respectively in Section V. To implement the Bit Fusion accelerator, we obtained the RTL from the authors [16] and synthesized it using Cadence RTL Compiler 2017 and the FreePDK45 [40] library. The Bit Fusion simulator [38], which combines the energy for the core reported by Cadence with the energy for RAMs/buffers reported by Cacti [41], was used to evaluate the total energy spent on inference.

In order to gauge the impact of our improvements versus prior WNNs, we compared ULEEN model sizes and accuracies against Bloom WiSARD [23] on the same nine multi-class classification datasets originally used to evaluate that model. For datasets which did not have explicit train and test sets, we performed a 2:1 train/test split.
### _Software Implementation_ We implemented one-shot training in Python, using the Numba JIT compiler to optimize performance-critical sections. This code is CPU-only and single-threaded, but we used a machine with 64 cores (2x Intel E5-2698 v3) to train many models in parallel. We used a semi-automated sweeping methodology to identify optimal model hyperparameters. In particular, we swept over ranges of values for inputs, entries, and hash functions per Bloom filter and thermometer encoding bits per input. The results of this sweep were then used to identify models which achieved a good balance between accuracy and size. The multi-shot training flow is considerably more computationally intensive, as it requires gradient computation in addition to multiple epochs of training. However, it provides increased accuracy, and is necessary to use ensembles and pruning. We implemented multi-shot training using the PyTorch machine learning framework, and ran training on an NVIDIA Tesla M40 GPU. Forward and backward passes for Bloom filters were implemented as a single multi-dimensional gather/scatter operation, enabling efficient memory-parallel computation. Despite these optimizations, training times were much longer than with the one-shot algorithm. We believe this algorithm could benefit greatly from the much higher memory bandwidth and larger caches available on newer GPUs. ### _Hardware Implementation_ The ULEEN accelerator source is written using Mako-templated SystemVerilog. Mako [42] is a template library which allows the Python interpreter to be run as a preprocessing step for code in arbitrary languages. We used Mako to automatically extract parameters from trained models, construct state machines, and decide how many functional units to instantiate in order to hit throughput targets while minimizing area. This template allowed us to generate new accelerators just by changing command-line parameters. We simulated our designs using Synopsys VCS 2018 to ensure functional correctness and to evaluate the latencies and throughputs of our accelerators. For our FPGA implementation and comparison, we use the Zynq Z7045 SoC platform, which was also used for FINN. Our design has the same I/O interface width as FINN (112 bits). We used Xilinx Vivado 2019.2 to synthesize and implement our designs. We targeted the same frequency of operation (200 MHz) as FINN, though we were unable to achieve this frequency in all cases due to FPGA routing congestion. Resource usage and power consumption were obtained from Vivado reports. For the ASIC implementation and comparison, we use the same tool and library as Bit Fusion for a fair comparison (FreePDK45 [40] and Cadence RTL Compiler 2017). Our designs also have the same 192-bit interface width. The frequency of operation is 500 MHz, the same as the Bit Fusion designs. We used area and power numbers reported by Cadence RTL Compiler (there are no RAM blocks in our design, so using Cacti is not required, unlike with Bit Fusion). ## V Results ### _Selected ULEEN Models_ We identified three ULEEN models (ULN-S, ULN-M, ULN-L), shown in Table I, which target different design points along the size/accuracy curve. All submodels use two hash functions per filter - we found that using one hash function was not always sufficient to avoid excessive collisions, while more than two hash functions increased hardware cost with no clear benefit to accuracy. All submodels within a model use the same thermometer encoding, and all models were pruned at a 30% ratio. 
As shown in the table, the ensembles consisted of between 3 and 6 submodels, with ensemble accuracies ranging from 96.20% to 98.46%. Some of the individual submodels have accuracies as low as 80%; however, as discussed in Section III-A3, ensembles are able to combine weak classifiers to produce accurate predictions.

### _Performance of ULEEN enhancements_

Figure 10 illustrates the impacts of the optimizations we use in ULEEN. The first three models are prior work, including the original 1981 WiSARD architecture [20] and the state-of-the-art 2019 Bloom WiSARD architecture [23]. Subsequent models incorporate progressively more of our improvements. Our final model, ULN-L, reduces error by 82% and model size by 68% versus Bloom WiSARD. Using multi-shot learning and ensemble models yielded significant improvements to accuracy, while pruning provides a significant reduction in model size.

Fig. 10: Iterative impacts of our improvements to WNNs. We show three prior results, including the previous best model (Bloom WiSARD), and the improvements to error and model size as we sequentially incorporate ULEEN's optimizations. Results for MNIST dataset.

### _FPGA Comparison (vs FINN)_

Table II and Figure 11 show the comparison of ULEEN with FINN. ULN-S, ULN-M, and ULN-L are compared to the FINN SFC, MFC and LFC models respectively. We report accelerator latency in \(\mu\)s and peak throughput in thousands of inferences per second (kIPS). We also provide both the energy for a single sample in isolation (batch=1) and steady-state energy per inference (batch=\(\infty\)). Note that FINN does not provide some results for the MFC model. Overall, we achieve a 1.4-2.6x improvement in latency and a 1.2-2.6x improvement in throughput versus FINN, with higher accuracies in all three cases. Our improvements in energy are proportionately even stronger. We reduce energy by 6.8-9.6x for a single inference and by 6.8-8.5x for steady-state inference. These improvements clearly demonstrate that WNNs can outperform even optimized BNN implementations. All three FINN models, as well as ULN-S and ULN-M, were implemented at 200 MHz. Due to FPGA routing congestion and consequent long wires, we were limited to implementing our large design at 85 MHz.

### _ASIC Comparison (vs Bit Fusion)_

Table III and Figure 12 show the comparison between Bit Fusion and ULEEN. Bit Fusion is designed to operate with a batch size of 16; we use the same batch size for ULEEN to ensure a fair comparison. ULEEN achieves a single-inference (batch=1) latency between 0.032 \(\mu\)s (ULN-S) and 0.056 \(\mu\)s (ULN-L). The fastest Bit Fusion model (BF32) requires 838 \(\mu\)s, but since this model performs inference in simultaneous batches of 16, a direct comparison to single-inference latency is difficult. Bit Fusion implements an aggressively quantized LeNet-5 model, achieving accuracy superior to ULN-L by 0.89%. However, ULN-L uses between **479-663x** less energy and achieves **2014-19549x** greater throughput. Models such as Bit Fusion have their place in devices such as mobile phones, where an energy budget in the hundreds of \(\mu\)J is generally acceptable. However, ULEEN presents a preferable solution for extreme edge devices, or even for larger devices if a very high throughput is needed.

Note that the Bit Fusion accelerator is a hardware architecture intended to run any quantized convolutional neural network. Here a quantized (ternary) LeNet-5 model is being run on Bit Fusion. Changing the size of the Bit Fusion accelerator
(e.g. systolic array dimensions) impacts the performance of the accelerator, but not the accuracy of the model. On the other hand, with ULEEN, the accelerator and model sizes are linked. ULEEN establishes an interplay between accuracy, efficiency, and area, which can be explored depending on the application.

#### V-B1 Pruning Sweeping

Sweeping the pruning ratio, we observe little accuracy degradation up to 80%, which gives 97.65% accuracy at a 75.31 KiB model size. Accuracy deteriorates rapidly past this point, but even with 98% pruning, our test error is under 10%. While model size decreases proportionately with the pruning ratio, the amount of hashing required does not, since different filters may be pruned from different discriminators. Therefore, in practice it is likely less efficient to use a high pruning ratio than it is to just train a smaller model.

#### V-B2 One-shot Model Sweeping

Figure 14 summarizes the results from sweeping across a variety of model configurations using the one-shot training rule. We observe diminishing returns from increasing the number of thermometer encoding bits per Bloom filter and entries per Bloom filter (note that the entries-per-filter axis in the figure is on a log scale). Accuracy for models trained with the one-shot technique increases roughly logarithmically with model size. We are unable to reach 96% accuracy with a model size under 2 MB. This demonstrates the limitations of the one-shot algorithm and the importance of incorporating feedback; we are able to reach higher accuracies with much smaller models using the multi-shot learning rule.

Fig. 14: Results from our one-shot sweeping runs, showing the most accurate model which was achieved with a given maximum size, the most accurate model which was achieved with a given number of encoding bits, and the most accurate model which was achieved with a given number of entries per Bloom filter.

## VI Related Work

Our work on WNNs is suitable for edge inference applications with very low power budgets due to ULEEN's high energy efficiency. In this section, we discuss other works that target similar use cases.

**Quantized DNNs:** Quantization involves transforming higher precision floating-point numbers to lower precision floating-point, fixed-point or integer [51] values. Quantizing networks leads to reduced memory access costs and increased compute efficiency. DNN accelerators commonly provide hardware support for lower precisions, such as 8-bit integer in Google TPU [52] or 12-bit floating-point in Microsoft Brainwave [53]. Bit Fusion [16], which we compare to, proposes an accelerator architecture that dynamically composes low-bitwidth compute units to match the precision requirements of DNNs. Ternary Weight Networks (TWNs) [39, 54, 55] have been proposed, where weights are constrained to +1, 0 and -1. The accuracy of TWNs has been shown to be only slightly worse than full precision networks but significantly higher than BNNs. We compare the accuracy and efficiency of our ULN-L model against a Bit Fusion accelerator running a ternarized LeNet-5. More ASIC and FPGA implementations of ternary neural networks for MNIST are presented in [56], but these have lower accuracy, throughput and energy-efficiency than our ULEEN models.

**Binary NNs:** Many methods to binarize weights or activation functions in DNNs have been proposed [6, 7, 9, 10, 57]. Most modern BNN approaches first take the XNOR of the input with a learned weight vector, followed by a popcount operation. Several works have explored ASIC and FPGA accelerators for BNNs (e.g., FINN [8], FP-BNN [58], and YodaNN [59]). As illustrated in the comparison with FINN, ULEEN WNNs can achieve better latency and energy than BNNs. ReBNet [60] is another FPGA-based BNN accelerator for MNIST. Our throughput and accuracy are higher than ReBNet, but no energy numbers are provided in the paper.
**Compressed DNNs:** To efficiently run DNNs on devices with limited hardware resources, model compression is used [61, 62, 31, 63]. Pruning [30, 33, 64] reduces the total number of parameters, but may require retraining to avoid accuracy degradation. Weight sparsity in CNNs has been used to reduce computation complexity [65, 66, 67, 68], and bit sparsity is exploited in [69, 70]. We use two forms of compression in ULEEN: Bloom filters to reduce the size of the RAM nodes, and pruning to reduce the number of RAM nodes.

**Prior WNNs:** Prior work [23, 71] demonstrated that replacing the RAM nodes in WiSARD with Bloom filters improved memory efficiency and model robustness, though we are the first to use counting Bloom filters (which facilitates bleaching). We are also the first to implement a Bloom filter-based WNN in hardware, as prior work used an impractical hashing algorithm. In [72], a WiSARD-based rock-paper-scissors player was built using a ternary system, where each input digit value was uniquely associated with a _rock_, _paper_, or _scissors_ state. Prior memory-efficient WNNs [14, 23] report up to 91.5% accuracy on MNIST, while our improvements raise accuracy to 98.46%.

**Symbolist Approaches:** Symbolist paradigms such as decision trees and Bayesian inference generally work well for small-scale problems. However, these models typically require sophisticated, heavily domain-dependent preprocessing, or incur large (e.g., half a million LUTs [73]) or very customized circuit implementations (e.g., clockless probabilistic p-bits [74]).

**Other edge DNN acceleration approaches:** Microcontroller-based approaches to edge inference, such as TinyML, have attracted a great deal of interest recently due to their ability to use inexpensive off-the-shelf hardware [75]. However, these approaches to machine learning are thousands of times slower than dedicated accelerators. A TinyML MNIST solution on Arduino Nano using downscaled 8x8 images had roughly 130,000x lower throughput and more than 32,000x higher latency than our FPGA implementation of ULN-M on 28x28 images, in addition to being less accurate [75].

## VII Conclusion

Inference on the extreme edge demands new approaches to machine learning. While existing techniques use quantized DNNs or BNNs, we propose ULEEN, an approach based on Weightless Neural Networks. ULEEN introduces and incorporates counting Bloom filters, Gaussian thermometer encoding, additive ensembles, pruning, and a novel multi-shot training algorithm to achieve superior energy and latency compared to quantized DNNs and BNNs. We present FPGA and ASIC implementations of an inference accelerator for ULEEN. Compared against the FINN FPGA-based platform for BNNs, we improve latency by 1.4-2.6x, throughput by 1.2-2.6x, and energy by 6.8-8.5x. Compared to the Bit Fusion low-precision DNN ASIC, we reduce energy by 479-663x and increase throughput by 2014-19549x, though with slightly lower accuracy. The energy per inference for the most accurate FINN model is 5.637 \(\mu\)J whereas ULEEN consumes only 0.826 \(\mu\)J per inference for the largest model presented. For the Bit Fusion model, the energy per inference ranges from 93.5-129 \(\mu\)J, whereas ULEEN models only consume 0.01-0.2 \(\mu\)J (i.e.
more than 400x improvement), illustrating the potential of ULEEN for extreme edge devices. The most direct opportunity for future work that we see is the development of convolutional WNNs. Convolution is important for good accuracy on larger image datasets such as ImageNet. Although support for convolution adds model and hardware complexity, we believe that WNNs have the potential to excel in this domain, as they can learn nonlinear filters, something that is not possible with traditional CNNs. Our results show that WNNs hold substantial promise for inference on the edge. WNNs are efficient, but have historically trailed BNNs and quantized DNNs in accuracy; the improvements in ULEEN narrow or close that gap, demonstrating their suitability for ultra-low-energy and high-throughput inference. _Acknowledgement: This research was supported in part by Semiconductor Research Corporation (SRC) Task 3015.001/3016.001 and National Science Foundation grant number 1763848. Any opinions, findings, conclusions or recommendations are those of the authors and not of the funding agencies._
2305.06435
Phase transitions in the mini-batch size for sparse and dense two-layer neural networks
The use of mini-batches of data in training artificial neural networks is nowadays very common. Despite its broad usage, theories explaining quantitatively how large or small the optimal mini-batch size should be are missing. This work presents a systematic attempt at understanding the role of the mini-batch size in training two-layer neural networks. Working in the teacher-student scenario, with a sparse teacher, and focusing on tasks of different complexity, we quantify the effects of changing the mini-batch size $m$. We find that often the generalization performances of the student strongly depend on $m$ and may undergo sharp phase transitions at a critical value $m_c$, such that for $m<m_c$ the training process fails, while for $m>m_c$ the student learns perfectly or generalizes very well the teacher. Phase transitions are induced by collective phenomena firstly discovered in statistical mechanics and later observed in many fields of science. Observing a phase transition by varying the mini-batch size across different architectures raises several questions about the role of this hyperparameter in the neural network learning process.
Raffaele Marino, Federico Ricci-Tersenghi
2023-05-10T19:59:28Z
http://arxiv.org/abs/2305.06435v3
# Phase transitions in the mini-batch size for sparse and dense neural networks

###### Abstract

The use of mini-batches of data in training artificial neural networks is nowadays very common. Despite its broad usage, theories explaining quantitatively how large or small the optimal mini-batch size should be are missing. This work presents a systematic attempt at understanding the role of the mini-batch size in training two-layer neural networks. Working in the teacher-student scenario, with a sparse teacher, and focusing on tasks of different complexity, we quantify the effects of changing the mini-batch size \(m\). We find that often the generalization performances of the student strongly depend on \(m\) and may undergo sharp phase transitions at a critical value \(m_{c}\), such that for \(m<m_{c}\) the training process fails, while for \(m>m_{c}\) the student learns perfectly or generalizes very well the teacher. Phase transitions are induced by collective phenomena firstly discovered in statistical mechanics and later observed in many fields of science. Finding a phase transition varying the mini-batch size raises several important questions on the role of a hyperparameter which have been somehow overlooked until now.

## 1 Introduction

The widespread diffusion of neural networks in many scientific fields urges a better understanding of the processes that underlie their training. The discipline that studies how well simple devices, like Turing machines or more abstract constructions such as classifier systems, can learn and infer from observed data after the training process is statistical learning theory [1]. Statistical learning theory is a cornerstone of Machine Learning (ML), and it deals with the statistical inference problem of finding a predictive function based on data. Even though neural networks are well analyzed from the standpoint of statistical learning, they have also become an active field of research in statistical mechanics. Statistical mechanics predicts the properties of a macroscopic system from the laws of its microscopic dynamics [2]. In this area, a major role is played by phase transitions that regulate what is achievable in principle (information theoretical thresholds) and what is achievable in practice (algorithmic thresholds) [3, 4, 5, 6, 7, 8]. The graphical representation (the phase diagram) of the various phases delimited by phase transitions allows researchers to predict the response of the system as a function of its own tunable parameters. The phase diagrams of simple but realistic neural networks (e.g., perceptron, one hidden layer, committee machine [9, 10, 11, 12, 13]) are well known and understood when the algorithmic dynamics is governed by equilibrium processes [14]. However, the out-of-equilibrium dynamics of the learning processes [15, 16, 17] are much more difficult to study, and most of the results about them are restricted to dense models where the Martin-Siggia-Rose formalism [18] can be applied to derive DMFT equations [19, 20]. Statistical mechanics tools have also been applied in the realm of artificial intelligence [21, 22] for building up consistent theories for deep learning [23, 24, 25]. Deep learning (DL) is a supervised learning method [1], which has shown very powerful empirical performance for solving very complex real-world problems in areas such as computer vision [26], natural language processing [27, 28], speech recognition [29], recommendation systems [30], drug discovery [31], and much more [32, 33, 34].
In simple words, deep learning can be seen as a fully connected neural network [35] that takes some data set \(\mathcal{D}\), inputs and targets, and learns the rules for forecasting new input data. For learning the rules, one minimizes a loss function by optimizing the weights of the neural network, using an optimization algorithm [36, 37, 38, 39, 40, 41, 42]. The weights are collected into a set \(\Theta\). The peculiarity of a deep learning model is the application of a non-linear function on each output of the hidden layer, and, in general, on each neuron of the output layer. These non-linear functions are called _activation functions_. Statistically, deep learning estimates a function \(\hat{f}(\vec{x},\Theta)\), with \(\vec{x}\) the input data, that minimizes a loss function \(\mathcal{L}(\vec{y},\hat{f}(\vec{x},\Theta))\), where \(\vec{y}\) represents the target. This minimization is performed over a set of pairs \((\vec{x},\vec{y})_{\eta}\), where \(\eta\) indexes an element of \(\mathcal{D}\) and \(|\mathcal{D}|\), the cardinality of \(\mathcal{D}\), can be finite or infinite.

Over the last decades, the practitioners of neural networks have developed many very useful tricks and smart procedures, like mini-batches, dropout, and several other regularizations, for speeding up the training process. A theory justifying many of these choices is often lacking, and so it is very difficult to make optimal choices for anyone who is not an expert practitioner. Among these "tricks", the use of the so-called mini-batch, introduced as a technical requirement for dealing with huge databases, actually turns out to be crucial for optimal training. In machine learning, a mini-batch is a subset of the full dataset that is used to train a model [43]. Rather than training the model on the entire dataset at once, the training data is divided into smaller batches, or mini-batches, which are fed to the model one at a time. Mini-batch training is a commonly used optimization technique in deep learning. It enables the model to make multiple updates to its weights and biases based on the gradients computed from each mini-batch. This process of making small updates to the model weights is called Stochastic Gradient Descent (SGD) [44, 45, 46]. The size of the mini-batch is a hyperparameter that can be tuned to optimize the training process. A larger mini-batch size can lead to faster training times, but it can also make it harder for the model to converge to a good solution. A smaller mini-batch size can improve the model's convergence but may slow down the training process. Mini-batch training provides a compromise between the two extremes of batch training (using the entire dataset at once) and online training (updating the model after each individual data point). In practice, this hyperparameter is often chosen to fit the GPU hardware used during training, and only a few theoretical analyses have been performed to understand whether or not an optimal value for it exists [47, 48, 49, 50]. In Ref. [47], the authors present an empirical law for selecting the mini-batch size that minimizes Stochastic Gradient Descent learning time for single- and multiple-learner problems, where the empirical law depends on some empirical parameters that must be deduced from the data, the model topology, and the learning algorithm used. In other words, the empirical law is not universal.
In Ref. [48], the authors review small mini-batch methods for deep learning and present numerical analyses in which the best performance is obtained for mini-batch sizes between \(m=2\) and \(m=32\). The results provided evidence that increasing the mini-batch size implies a degradation of the test performance and a progressively smaller range of learning rates that allows stable training. However, there is no theoretical motivation supporting these results. The manuscripts Ref. [49] and Ref. [50] showed that one can improve the performance of Stochastic Gradient Descent, on both training and test sets, by keeping the learning rate constant and increasing the mini-batch size during training. Those results were obtained across different neural network architectures.

From the point of view of statistical physics, understanding the key role of the mini-batch in the learning process, in particular whether there exists a phase transition that sets an optimal value for it, is still an open question. Here we consider this problem and provide a positive answer by showing the existence of a phase transition in the mini-batch size that rules the ability to train neural networks optimally. More precisely, we confine ourselves to the "Teacher-Student" scenario, where the _teacher_ is sparse and with binary weights, while the _student_ can be sparse or dense, with binary or continuous weights, depending on the information that the teacher gives to the student. The teacher builds her own neural network and creates an infinite set of data \(\mathcal{D}\). She gives the student the whole data set and asks her to build her own neural network. The student may or may not have information about the topology of the teacher network. If she knows everything, she needs only to infer the sign of the teacher's weights. Therefore, the student could use a greedy algorithm. If she does not know the topology of the neural network, she is allowed to build a neural network with continuous weights and use a Stochastic Gradient Descent algorithm for generalizing the teacher weights.

The "Teacher-Student" model has been a useful tool for building up knowledge in science, and in deep learning. For instance, in Ref. [51] the authors studied generalization by analysing the dynamics and the performance of over-parametrized two-layer neural networks in the teacher-student setup, showing that the dynamics of SGD, in the online learning case, can be described by a set of differential equations that are asymptotically exact in the limit of large inputs, in the case where the student networks have more parameters than the teacher. In Ref. [52], instead, the authors provide closed-form asymptotic expressions for multi-class teacher-student perceptron generalization errors in the high-dimensional regime. Again in the high-dimensional regime, some authors [53] studied the generalisation of the model where the teacher and student can act on different spaces, generated with fixed, but generic, feature maps. They derive a rigorous formula for the asymptotic training loss and generalisation error. In this setup, the few analytical results on the existence of phase transitions in the thermodynamic limit are for the Ising perceptron [14]. In this case, the generalization error as a function of the ratio between the number of training examples and the dimension of the input space has a first-order phase transition that allows identifying the value at which the Ising perceptron learns.
These analyses, however, leave open the issue of whether there exists a phase transition for the mini-batch size in neural networks with hidden layers. To fill this gap, in this paper we present, to the best of our knowledge for the first time, the existence of a phase transition in the hyperparameter \(m\) such that, for values of \(m\) smaller than \(m_{c}\), the critical value of the mini-batch size, the generalization/inference of the teacher weights is impossible, while for values of \(m\) bigger than \(m_{c}\) the generalization/inference is possible. Moreover, we present strong evidence that the phase transitions are independent of the model or the algorithm used, and that they seem to be of the first order, depending on the value of the sparsity in the teacher model. To make our results general and robust, we study several neural network topologies and inference problems. We consider both dense and sparse topologies. The latter are much less used, but there are promising future applications given the large savings in memory and computing power.

The paper is organized as follows: in Sec. 2 we make a short summary of our results. In Sec. 3 we introduce the probabilistic model called "Teacher-Student", the notation used, and the algorithms used for the numerical analysis. In Sec. 4 we present our numerical analysis for the models listed in Sec. 3. In Sec. 5 we discuss the results.

## 2 Short Summary of the Main Results

To help the reader grasp the main message of this manuscript, here we summarize our results. We study four different teacher-student models.

* In the first one, i.e., **ST-SS-2M-OC** (see Sec. 3 and Sec. 4.1), the teacher and the student have the same sparse topology, embedded by a bipartite random \(d\)-regular graph between the input and the hidden layer, and between the hidden and the multidimensional output layer, with LeakyReLU activation functions on the hidden and output neurons. In this case, we observe that the normalized validation loss goes to zero discontinuously for each value of \(d>2\) at different values of the mini-batch size \(m\). In other words, the student can easily infer the weights of the teacher only above a particular value \(m_{c}\) of the mini-batch size.
* In the second model, i.e., **ST-SS-2M-MC** (see Sec. 3 and Sec. 4.2), the teacher is the same as in the first one, while the student neural network is embedded with a random \(d_{s}\)-regular graph, with \(d_{s}\) larger than the teacher's \(d_{t}\) (the activation functions are LeakyReLU). In other words, the aim of the student is to infer the weights of the teacher and at the same time understand which weights of her own network must be set to zero to reach perfect inference. We observe the existence of phase transitions in the mini-batch size, but this time of two different natures. For \(d_{t}=2\) and \(d_{s}>2\) we observe a second-order phase transition, while for \(d_{t}>2\) and \(d_{s}>d_{t}\) we observe again phase transitions of the first order.
* The third model, i.e., **ST-SS-1M-OC** (see Sec. 3 and Sec. 4.3), differs from the other two by changing the activation function, which now becomes an \(erf(\cdot)\) on the hidden neurons and the identity on the output neuron. The multidimensional output layer now becomes one-dimensional. The student knows the topology of the teacher network and needs only to infer the weights. In this case, perfect inference is no longer possible; however, as shown in Fig. 6,
the existence of a first-order phase transition in the size of the mini-batch is still present. This time, the transition divides two regions, one where the generalization is poor, and another where good generalization of the planted teacher weights is possible.
* The fourth model, i.e., **ST-DS-1M-MC** (see Sec. 3 and Sec. 4.4), has a teacher network which is the same as in the case of **ST-SS-1M-OC**, but the student does not know anything about the topology of the network. She has access only to the data and creates her own dense neural network. By tuning the hyperparameters of the student model, we observe that an optimal mini-batch size allowing very good generalization is still present, in correspondence with an optimal value of the learning rate (associated with the Stochastic Gradient Descent used as optimizer). This model will be analysed more deeply in future work, but it suggests the possible existence of phase transitions.

## 3 The Models and the Algorithms

In this section, we introduce the probabilistic model called "Teacher-Student". This model is pretty simple. We have one teacher and one student. The teacher builds a neural network, and asks the student to infer its weights, given a data set \(\mathcal{D}\) made of input-output pairs. In other words, the teacher generates a random neural network. She generates a certain number \(M\) of input vectors \(\vec{x}_{\eta}\), \(\eta=1,\ldots,M\), and computes the associated outputs using the neural network, i.e., \(\vec{y}_{\eta}\), \(\eta=1,\ldots,M\), where \(M\) is \(|\mathcal{D}|\). The student, thus, is provided with the data, i.e. the input-output pairs \(\left(\vec{x}_{\eta},\vec{y}_{\eta}\right)_{\eta\in[M]}\), and her objective is to infer the teacher's weights from these data. The teacher may or may not give the exact topology of the model to the student: if she does, we call this scenario the optimal-case; otherwise, we call it the mismatch-case.

For example, consider a supervised regression task. The data set \(\mathcal{D}\) is composed of \(M\) pairs \(\left(\vec{x}_{\eta},\vec{y}_{\eta}\right)_{\eta\in[M]}\in\mathbb{R}^{d_{x}+d_{y}}\) identically and independently sampled from \(\mathbf{P}(\vec{x},\vec{y})\). The prior probability \(\mathbf{P}(\vec{x})\) is assumed to be known and \(\mathbf{P}(\vec{y}|\vec{x})\) is modeled by a two-layer neural network. Given a feature vector \(\vec{x}_{\eta}\in\mathbb{R}^{d_{x}}\), the respective label \(\vec{y}_{\eta}\in\mathbb{R}^{d_{y}}\) is defined as \[\vec{y}_{\eta}=\psi\left(\mathbf{W}_{\text{out}}^{*}\,\phi\left(\mathbf{W}_{\text{in}}^{*}\vec{x}_{\eta}\right)\right), \tag{1}\] where \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) and \(\psi:\mathbb{R}\rightarrow\mathbb{R}\) are two activation functions that act element-wise, while \(\mathbf{W}_{\text{in}}^{*}\) is a \(k\times d_{x}\) matrix and \(\mathbf{W}_{\text{out}}^{*}\) is a \(d_{y}\times k\) matrix. In most cases, we choose these matrices to be sparse, with non-zero elements taking values \(\pm 1\). Given a new sample \(\vec{x}\sim\mathbf{P}(\vec{x})\) outside the training data, the goal is to obtain an estimation function \(\hat{f}(\vec{x},\Theta):\mathbb{R}^{d_{x}}\rightarrow\mathbb{R}^{d_{y}}\) (where \(\Theta\) is an arbitrary set of parameters to be learned from the data) for the respective label \(\vec{y}\). The error is quantified by a loss function \(\mathcal{L}(\vec{y},\hat{f}(\vec{x},\Theta))\).
The loss function used in this manuscript is a quadratic loss of the type \[\mathcal{L}(\vec{y},\hat{f}(\vec{x},\Theta))=\frac{1}{2}\sum_{\eta=1}^{M}\left(\hat{f}(\vec{x}_{\eta},\Theta)-\vec{y}_{\eta}\right)^{2} \tag{2}\] We are interested in understanding the role of sparsity in neural networks. More precisely, our goal is to estimate the teacher model with another two-layer neural network with the same activation functions and the same number of neurons in each layer, which we will refer to as the student. Formally, the student model reads \[\hat{f}(\vec{x}_{\eta},\Theta)=\psi\left(\mathbf{W}_{\text{out}}\,\phi\left(\mathbf{W}_{\text{in}}\vec{x}_{\eta}\right)\right), \tag{3}\] where \(\Theta\) identifies the set of the parameters that must be inferred, i.e., the elements of the matrices \(\mathbf{W}_{\text{out}}\) and \(\mathbf{W}_{\text{in}}\).

### Training Algorithms

We allow the student to choose between two different algorithms for estimating the teacher model. More precisely, the student can choose between a greedy algorithm and a Stochastic Gradient Descent algorithm (to be better described below), depending on the nature of the parameters to be optimized, which in turn depends on the information provided by the teacher. We choose to work with a virtually infinite data set, i.e. we never present the same data item to the student twice, because we have found a very weak dependence of the results on the data set size \(M\) and a fast convergence towards the large \(M\) limit. Once the algorithm is fixed and the data set infinitely large, the most relevant parameter in the training process is the mini-batch size \(m\), that is, how many data items are used in each step of the training. We will show that an optimal choice for this parameter can drive the algorithms to infer/generalize the teacher model. To the best of our knowledge, this is the first time a phase transition in this important parameter is found.

The first algorithm that the student can use for minimizing the loss function is a greedy algorithm. This algorithm is used when the teacher provides the student with information about the discrete nature of the matrix elements. The greedy algorithm is just a Metropolis updating rule [54] at zero temperature, that is, the proposed new \(\Theta\) configuration must not increase the value of the loss function, computed on the current mini-batch as in Eq. (4), in order to be accepted. At zero temperature there are no thermal fluctuations helping the evolution to escape local minima in the loss function. However, the finite (and small) value of \(m\) introduces a different kind of noise that can play the same beneficial role in escaping local minima to reach the optimal configuration. At any given step of the greedy algorithm, an element \(w_{ij}\) of one of the matrices entering the model definition (3) is selected at random, and let us assume the support of this weight is \(\pm 1\). Calling \(\overline{w}_{ij}=-w_{ij}\) the new value we propose for this weight, the change in loss is given by \[\Delta\mathcal{L}=\frac{1}{2}\sum_{\eta=1}^{m}\left(\hat{f}\left(\vec{x}_{\eta},\Theta\setminus w_{ij}\cup\overline{w}_{ij}\right)-\vec{y}_{\eta}\right)^{2}-\frac{1}{2}\sum_{\eta=1}^{m}\left(\hat{f}\left(\vec{x}_{\eta},\Theta\right)-\vec{y}_{\eta}\right)^{2}. \tag{4}\] The algorithm accepts the flip only if \(\Delta\mathcal{L}\leq 0\). This difference \(\Delta\mathcal{L}\) depends on the mini-batch size \(m\) and can fluctuate a lot when changing the mini-batch, for small values of \(m\).
We use the same mini-batch set for an entire Monte Carlo sweep, i.e. the attempt to update all the non-zero matrix elements.

The second algorithm is a Stochastic Gradient Descent algorithm. This algorithm is used when no information is given to the student about the teacher model. Thus, the student can model the teacher as she prefers and chooses to use real-valued variables for \(\Theta\). To update the weights, the student computes the gradient of the loss function in (2) over a mini-batch of size \(m\) and uses the usual updating rule \[\Theta\leftarrow\Theta-\lambda\nabla\mathcal{L}(\vec{y},\hat{f}(\vec{x},\Theta),m), \tag{5}\] where \(\nabla\) is the gradient operator over the weights and \(\lambda\) is the learning rate.

### Teacher-student scenarios

The above algorithms are used for analyzing different cases of the teacher-student scenario. We list them below; a code sketch of the two training rules follows this list.

* **Sparse teacher-student, two matrices, optimal-case (ST-SS-2M-OC)**. In this case, the teacher is sparse and the matrices \(\mathbf{W}_{\text{in}}^{*}\) and \(\mathbf{W}_{\text{out}}^{*}\) have non-zero values equal to \(\pm 1\). The student knows the topology of the neural network, i.e., she knows the position of the non-zero values in \(\mathbf{W}^{*}_{\text{in/out}}\), and she knows that the non-zero elements of these matrices can take only the values \(\pm 1\). The student needs only to infer the sign of the weights. The analysis of this model is explained in Sec. 4.1.
* **Sparse teacher-student, two matrices, mismatch-case (ST-SS-2M-MC)**. In this case, the teacher is sparse and the matrices \(\mathbf{W}^{*}_{\text{in}}\) and \(\mathbf{W}^{*}_{\text{out}}\) have non-zero values equal to \(\pm 1\). The student does not know exactly the topology of the neural network, as the teacher provides her with a topology of larger connectivity, which contains the teacher network. The student is allowed to infer the teacher network by setting some weights to zero. So the student weights will take values in \(\{\pm 1,0\}\). The analysis of this model is explained in Sec. 4.2.
* **Sparse teacher-student, one matrix, optimal-case (ST-SS-1M-OC)**. In this case, the teacher matrix \(\mathbf{W}^{*}_{\text{in}}\) is still sparse with \(\pm 1\) elements and needs to be inferred, while the matrix \(\mathbf{W}^{*}_{\text{out}}\) is fixed and provided to the student. For simplicity, we consider a scalar output \(y\) (i.e. \(d_{y}=1\)) equal to the mean of the neurons of the hidden layer. The student also knows the topology of the teacher's neural network and that weights take values in \(\pm 1\). So, she just needs to infer the sign of the weights. The analysis of the model is explained in Sec. 4.3.
* **Sparse teacher, dense student, one matrix, mismatch-case (ST-DS-1M-MC)**. As in the previous case, the teacher matrix \(\mathbf{W}^{*}_{\text{in}}\) is sparse, with \(\pm 1\) weights, while \(\mathbf{W}^{*}_{\text{out}}\) is dense with all elements equal to \(1\) (i.e. takes the mean of the hidden layer). The student is provided with the matrix \(\mathbf{W}^{*}_{\text{out}}\) and needs to infer the matrix \(\mathbf{W}^{*}_{\text{in}}\). Having no information about the topology of the latter, she decides to train a dense neural network with the aim of generalizing the values of the sparse matrix \(\mathbf{W}^{*}_{\text{in}}\) with continuous values. The analysis of the model is explained in Sec. 4.4.

All these models will be fully explained in the next section, where we will also present the numerical results.
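To fix ideas about the greedy training rule, the following is a minimal NumPy sketch of one Monte Carlo sweep; recomputing the full mini-batch loss at every proposed flip is wasteful compared to an incremental update, but it makes the zero-temperature acceptance rule of Eq. (4) explicit. The function names are ours, and the LeakyReLU activation anticipates the experiments of the next section.

```python
import numpy as np

def loss(W_in, W_out, X, Y, a=0.1):
    """Quadratic loss of Eq. (2) for the two-layer model of Eq. (3).
    X and Y hold a mini-batch of m samples as columns, shape (N, m)."""
    leaky = lambda z: np.where(z < 0, a * z, z)
    pred = leaky(W_out @ leaky(W_in @ X))
    return 0.5 * np.sum((pred - Y) ** 2)

def greedy_sweep(W_in, W_out, X, Y):
    """One Monte Carlo sweep on a single mini-batch (X, Y): attempt to flip
    the sign of every non-zero weight, accepting only if Delta L <= 0."""
    for W in (W_in, W_out):
        for i, j in zip(*np.nonzero(W)):
            before = loss(W_in, W_out, X, Y)
            W[i, j] *= -1                       # propose the sign flip
            if loss(W_in, W_out, X, Y) > before:
                W[i, j] *= -1                   # Delta L > 0: reject, restore
```

The SGD rule of Eq. (5) needs no sketch: with real-valued weights it is the standard update `Theta -= lr * grad` on the mini-batch gradient.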
## 4 Results

In this section, we present our results on each single model listed in Sec. 3.

### Sparse teacher-student, two matrices, optimal-case

For this analysis, we assume the framework given in (1). The teacher builds a neural network with the same number of neurons in each layer. More precisely, the teacher builds \(\mathbf{W}^{*}_{\text{in}}\) and \(\mathbf{W}^{*}_{\text{out}}\) to be sparse, with \(d_{x}=d_{y}=k=N\). The total number of non-zero entries is \(Nd\) for each matrix. In other words, \(\mathbf{W}^{*}_{\text{in}}\) and \(\mathbf{W}^{*}_{\text{out}}\) are two square matrices \(N\times N\), where each row and each column contains only \(d\) non-zero elements (in positions randomly chosen). The activation functions \(\psi\) and \(\phi\) are non-linear and set to be \(\text{LeakyReLU}(x)\): \[\text{LeakyReLU}(x)=\left\{\begin{array}{ll}ax&\quad x<0\\ x&\quad x\geq 0\end{array}\right. \tag{6}\] We fix the parameter \(a=0.1\). In this analysis, the student knows the model and the positions of the non-zero entries in the two matrices. She also knows that non-zero weights are discrete, taking values \(\pm 1\). The activation functions are known as well. The student builds her own neural network and decides to use a greedy algorithm for inferring the values of the matrix elements. She defines the loss function as in (2).

Under the above assumptions, we perform an accurate analysis of the model as follows. We build the teacher neural network, fixing the value of \(N\) to \(50\), \(100\), \(200\), \(400\) and the parameter \(d\), i.e., the number of non-zero entries in each row and column of matrices \(\mathbf{W}^{*}_{\text{in/out}}\), to \(d=2,\,3,\,4,\,5\), and \(6\). Each non-zero element is \(\pm 1\) with \(50\%\) probability. We sample the independent components of the input vectors \(\vec{x}\in\mathbb{R}^{N}\) from a normal distribution of zero mean and unitary variance, i.e. \(x_{i}\sim\mathcal{N}(0,1)\). Thus, we create the associated labels \(\vec{y}\in\mathbb{R}^{N}\), using the neural network defined above. Given the teacher neural network defined by \(\mathbf{W}^{*}_{\text{in/out}}\), the two weight matrices \(\mathbf{W}_{\text{in/out}}\) defining the student network have non-zero elements in the same positions, and only the signs need to be inferred. For every value of \(N\) and \(d\), we perform the numerical analysis, starting from a random initialization for the student weights and aiming at minimizing the loss in (2) using the greedy algorithm described above with different mini-batch sizes \(m\). Each Monte Carlo sweep is performed with a new set of size \(m\) of pairs \((\vec{x}_{\eta},\vec{y}_{\eta})_{\eta=1,\dots,m}\). We never use the same data pair twice (as in the online learning problem), or equivalently we assume to have an infinite amount of data to train the network with. Moreover, we define a set of pairs \((\vec{x}_{\eta},\vec{y}_{\eta})_{\eta=1,\dots,30}\) to be the validation data set. These data are used only for computing the validation loss, and they are never used during the training process.
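The sparse random topology just described can be generated in several ways; one simple recipe (an assumption of ours, not necessarily the authors' construction) is to superpose \(d\) random permutation matrices, restarting on collisions, which guarantees exactly \(d\) non-zero entries per row and per column.

```python
import numpy as np

def sparse_regular_matrix(N, d, rng):
    """N x N matrix with exactly d non-zero (+-1) entries per row and column,
    built by superposing d non-overlapping random permutation matrices."""
    while True:
        W = np.zeros((N, N))
        for _ in range(d):
            perm = rng.permutation(N)
            if np.any(W[np.arange(N), perm] != 0):
                break                  # collision between permutations: restart
            W[np.arange(N), perm] = rng.choice([-1.0, 1.0], size=N)
        else:
            return W

rng = np.random.default_rng(0)
W_in = sparse_regular_matrix(100, 3, rng)
W_out = sparse_regular_matrix(100, 3, rng)
leaky = lambda z: np.where(z < 0, 0.1 * z, z)  # Eq. (6) with a = 0.1
X = rng.standard_normal((100, 16))             # a mini-batch of m = 16 inputs
Y = leaky(W_out @ leaky(W_in @ X))             # teacher labels, Eq. (1)
```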
Figure 1: The figure shows the validation loss function normalized to the number of non-zero elements, averaged over 100 samples, as a function of \(m\), i.e., the mini-batch size, for different values of \(d\), and different values of \(N\). Error bars are the standard deviation of the mean.

In Fig. 1, we show the validation loss function normalized to the number of non-zero elements, averaged over 100 samples, as a function of \(m\), i.e., the mini-batch size. The value of the validation loss is the one obtained after 1024 Monte Carlo sweeps. We have checked that at that time the process has reached a stationary regime and more iterations would not change the results. The curves drawn, for any value of \(d\), seem to be independent of the value of \(N\) used. In other words, finite size effects are negligible and we can consider the results as in the thermodynamic limit. Results in Fig. 1 show that for \(d=2\) the inference of the teacher network is possible for any value of \(m\), while the curves with \(d>2\) show a more interesting behaviour. For \(d>2\) the validation loss jumps discontinuously to zero at a critical value of the mini-batch size that we call \(m_{c}(d)\). This means that by running the learning algorithm with \(m>m_{c}(d)\) the student is able to infer perfectly the teacher network, while perfect learning is impossible for lower values of \(m\). What happens at \(m=m_{c}(d)\) looks like a phase transition. To the best of our knowledge, such a clear phase transition in the mini-batch size was never observed before.

To better understand the nature of this transition, we study a different observable, the (normalized) Hamming distance between teacher and student matrix weights. The Hamming distance is simply the number of weight elements which are different. The normalized version we use is just divided by the number of non-zero elements. In case the student guesses at random the signs of the teacher's weights, the mean Hamming distance is equal to 0.5, while a perfect inference of the teacher matrix would correspond to a zero Hamming distance.

Figure 2: **Left**: The figure displays the averaged Hamming distance over 100 samples as a function of the mini-batch size \(m\), for different values of the parameter \(d\), i.e., the number of non-zero entries in each row and column of each matrix \(\mathbf{W}_{\text{in/out}}^{*}\). Error bars are the standard deviation of the mean. **Right**: The figure displays the fraction of Hamming distance trajectories that have found an asymptotic value smaller than 0.25 as a function of \(m-m_{c}(d)\). In this case all the curves collapse one on top of the other, showing a step function.

Data for the mean Hamming distance are reported in the left panel of Fig. 2. As for the validation loss discussed above, we observe no \(N\)-dependence in the data and therefore we average the Hamming distance over all values of \(N\). The variation of the mean Hamming distance when the value of \(m\) is increased is consistent with a discontinuous phase transition at \(m_{c}(d)\) for \(d>2\). We define \(m_{c}(d)\) as the first value of \(m\) at given \(d\) for which the Hamming distance is compatible with 0. To get a quantitative measure of the sharpness of the phase transition at \(m_{c}(d)\), we compute the fraction of trajectories (over different samples and different \(N\) values) that have a Hamming distance smaller than 0.25. The nice scaling and the step-like behaviour as a function of \(m-m_{c}(d)\) suggest the transition is very sharp (see right panel in Fig. 2).

Footnote 2: The results do not depend on this threshold value as long as it is chosen in the “gap” visible in the left panel of Fig. 2.
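Both observables are straightforward to compute; a short sketch (the function names are ours), assuming the teacher and student supports coincide, as in this optimal case:

```python
import numpy as np

def normalized_hamming(W_student, W_teacher):
    """Fraction of non-zero teacher weights whose sign the student got wrong:
    0 means perfect inference, about 0.5 means random guessing."""
    mask = W_teacher != 0
    return np.mean(W_student[mask] != W_teacher[mask])

def success_fraction(distances, threshold=0.25):
    """Fraction of trajectories below the threshold chosen in the 'gap'
    of the Hamming-distance data (right panel of Fig. 2)."""
    return np.mean(np.asarray(distances) < threshold)
```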
The last quantitative aspect to discuss is the dependence of the critical value \(m_{c}\) on the number \(d\) of non-zero elements in the network matrices. As shown in Fig. 3, we find that a linear scaling nicely fits all the data. This is very reasonable: indeed, the number of parameters (i.e. signs) to infer is exactly \(2dN\) and each mini-batch of data contains \(2mN\) numbers. So a linear scaling of \(m_{c}(d)\) seems the best achievable scaling.

Figure 3: In this figure the value of \(m_{c}\), i.e., the _critical_ value of the mini-batch size, as a function of \(d\) is plotted. The values of \(m_{c}(d)\) fit well a linear behaviour as a function of \(d\).

### Sparse teacher-student, two matrices, mismatch-case

For this analysis, we assume the framework given in (1), where the teacher is built exactly as in the case discussed in Sec. 4.1: matrices \(\mathbf{W}^{*}_{\text{in,out}}\) are sparse with exactly \(d_{t}\) non-zero elements per row and column; non-zero weights take values \(\pm 1\) with equal probability. The activation functions \(\psi\) and \(\phi\) are non-linear and set to be equal to (6). In this analysis, the teacher does not give the student complete information about the topology of her network (i.e. the location of non-zero elements in the matrices). The teacher gives the student just partial information, that is, sparse matrices \(\mathbf{W}_{\text{in,out}}\) built as before but with \(d_{s}(>d_{t})\) non-zero elements per row and column. The network corresponding to this slightly more connected topology is such that it contains the teacher network as a subgraph. In other words, the student can infer the teacher network by just setting some proper weights to zero. The reason behind this choice is straightforward: we are interested in checking how general the results shown in Sec. 4.1 are, in particular when some sort of noise is added to the inference process. The present setting is meant to model a situation where a perturbation is introduced in the network topology, such that the ground truth (i.e. the teacher network) is preserved. The student will work with variables, representing the weights, taking values in \(\{\pm 1,0\}\). In such a way she is in principle able to infer the exact network of the teacher if enough information is provided to her. The student uses the greedy algorithm described in Sec. 3 to optimize the loss function. As before, we use an online learning setting, where each data pair \((\vec{x}_{\eta},\vec{y}_{\eta})\) is used for just one Monte Carlo sweep. Moreover, 30 data pairs are reserved for the computation of the validation loss and never used in the training.

In Fig. 4 we report the mean validation loss function normalized to the number of non-zero elements, averaged over 100 samples, as a function of \(m\), the mini-batch size. In the left panel we have fixed \(d_{t}=2\) and we observe that the curves go to zero in a continuous way, and without showing any evident finite size effect. We call \(m_{c}(d_{s}|d_{t})\) the critical value of \(m\) at which the mean loss attains the zero value. The phase transition at \(m_{c}(d_{s}|d_{t})\) separates a phase where the teacher network can not be correctly learned from a phase where the student can almost perfectly learn it. A careful inspection of the region \(m>m_{c}(d_{s}|d_{t})\) reveals that the validation loss is not exactly zero, and it increases slightly with \(m\).
A possible explanation for this observation is the following: the mini-batch size acts as a source of noise that allows the greedy algorithm to better optimize the loss function by escaping from local minima, but if \(m\) becomes too large, the effective noise decreases and the training algorithm can get trapped in sub-optimal minima with very small values of the loss (in any case the value of the loss is tiny and the student network still generalizes the teacher one very well). In the left panel of Fig. 4, a comparison with the optimal case (i.e. \(d_{s}=2\) and \(d_{t}=2\)) shows an interesting phenomenon. In the case where the complete information is given to the student, the system does not present any phase transition. Even with a mini-batch of size \(m=1\) the student is able to infer almost completely the teacher weights. In contrast, when the teacher hides a little bit of information from the student, providing a noisy version of the network topology, the amount of knowledge that the student needs to infer the teacher weights is much larger. We observe that the minimal size of the mini-batch that the student needs for completely inferring the teacher weights is roughly ten times bigger if \(d_{s}=3\) and even more if \(d_{s}=4\).

Figure 4: The figures display the averaged validation loss normalized to the number of non-zero elements as a function of the mini-batch size \(m\), for different values of \(N\) and different values of \(d_{s}\), averaged over 100 samples (error bars are the standard deviation of the mean). In the left panel we fix \(d_{t}=2\), while in the right panel we fix \(d_{t}=4\).

In the right panel of Fig. 4 we present the averaged validation loss obtained by the student when the value \(d_{t}=4\) is used by the teacher. Also in this case, finite size effects are negligible, thus suggesting that we are already probing the large \(N\) limit. As before, we compare the optimal case \(d_{s}=d_{t}=4\) with the mismatched cases \(d_{s}=6,8\). The latter cases require more information to infer the teacher network and this implies a larger critical value \(m_{c}(d_{s}|d_{t})\). The nature of the phase transition at \(m_{c}(d_{s}|d_{t})\) remains discontinuous as in the optimal case, but the jump at the critical point seems to decrease, so we cannot exclude that the transition could become continuous for larger values of \(d_{s}\).

The phase transition can be observed also in the fraction of weights inferred correctly by the student. For simplicity, we show in Fig. 5 results only for the matrix \(\mathbf{W}_{\text{in}}\), but similar results hold for \(\mathbf{W}_{\text{out}}\) as well. We define the True Positive (TP) rate as the fraction of non-zero weights correctly inferred by the student and the True Negative (TN) rate as the fraction of zero weights correctly inferred by the student. In case the teacher network is perfectly recovered, both TP and TN equal 1. In Fig. 5, we show TP and TN rates for different values of \(d_{t}\) and \(d_{s}\). The top panels refer to the \(d_{t}=2\) case, where the TP and TN rates reach the value 1 at \(m_{c}(d_{s}|d_{t}=2)\) in a continuous way and slightly decrease for larger \(m\) values (in agreement with the results obtained via the validation loss). The bottom panels are for \(d_{t}=4\), and we observe the TP and TN rates to jump at \(m_{c}(d_{s}|d_{t}=4)\) to the value 1, corresponding to a perfect recovery of the teacher network (again in agreement with results from the validation loss).
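For reference, the TP and TN rates of Fig. 5 can be computed as below; restricting both rates to the student's \(d_{s}\)-regular candidate support is our assumption about the normalization, since weights outside that support are structurally zero.

```python
import numpy as np

def tp_tn_rates(W_student, W_teacher, support):
    """TP: fraction of the teacher's non-zero weights recovered with the
    correct sign; TN: fraction of the teacher's zero weights (within the
    student's candidate support, a boolean mask) correctly set to zero.
    The teacher's non-zeros lie inside the support by construction."""
    nonzero = W_teacher != 0
    zero = (W_teacher == 0) & support
    tp = np.mean(W_student[nonzero] == W_teacher[nonzero])
    tn = np.mean(W_student[zero] == 0)
    return tp, tn
```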
An interesting observation is that increasing the value of \(d_{s}\) for a given \(d_{t}\) has the main effect of shifting the transition point, without appreciably changing the nature of the transition. The increase of \(m_{c}(d_{s}|d_{t})\) with \(d_{s}\) is simple to explain: for a larger value of \(d_{s}\) (i.e. a larger noise), the number of configurations of the student weights is much greater and much more information is required to select the proper weights.

Figure 5: TP and TN rates measure the fraction of correctly inferred non-zero and zero weights in the matrix \(\mathbf{W}_{\text{in}}\). They are plotted as a function of the mini-batch size \(m\) for several values of the network connectivity: \(d_{t}=2,d_{s}=3\) (top left), \(d_{t}=2,d_{s}=4\) (top right), \(d_{t}=4,d_{s}=6\) (bottom left), \(d_{t}=4,d_{s}=8\) (bottom right).

### Sparse teacher-student, one matrix, optimal-case

For this analysis, the teacher modifies the setting in (1) by fixing the matrix \(\mathbf{W}_{\text{out}}^{*}\), of size \(k\times 1\), with all elements equal to 1. Moreover, the activation function \(\psi\) becomes linear. In practice, the output signal is nothing but the average of the hidden layer. The function \(\phi\) remains non-linear, and to make a connection with previous works, we fix \(\phi(t)=\text{erf}(t/\sqrt{2})\) and \(d_{x}=k=N\). Under these assumptions, the teacher model can be written as \[y=\frac{1}{N}\sum_{r=1}^{N}\phi\left(\frac{((\mathbf{W}^{*})^{\intercal}\vec{x})_{r}}{\sqrt{d}}\right), \tag{7}\] where \(y\in\mathbb{R}\), and \(\mathbf{W}^{*}=\mathbf{W}^{*}_{\text{in}}\) to simplify the notation. The matrix \(\mathbf{W}^{*}\) is assumed sparse, with exactly \(d\) non-zero elements in each row and column, set randomly to \(\pm 1\) with equal probability. A variant of this model is studied in Ref. [55], where the authors describe the different regimes of the Stochastic Gradient Descent learning algorithm for this two-layer neural network in the high-dimensional input-layer limit, but with the mini-batch size fixed to \(1\). Moreover, in order to solve the dynamics exactly, they need to work with a matrix \(\mathbf{W}^{*}\) that is dense for both the teacher and the student. In addition, the student network is over-parametrized by enlarging the hidden layer with respect to the teacher one (\(k_{s}>k_{t}\)). However, this study provides no clue about what happens when the mini-batch size is changed, nor when the teacher (and/or the student) is made sparse.
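To make the setup concrete, the following sketch shows one simple way to build such a sparse \(\pm 1\) teacher matrix (as a superposition of \(d\) non-overlapping random permutation matrices, a construction of ours, not necessarily the authors') and to draw labelled pairs from Eq. (7), assuming numpy and scipy are available.

```python
import numpy as np
from scipy.special import erf

def make_sparse_teacher(N: int, d: int, rng: np.random.Generator) -> np.ndarray:
    """N x N matrix with exactly d non-zero (+/-1) entries per row and
    per column, obtained from d non-overlapping random permutations."""
    while True:
        support = np.zeros((N, N), dtype=int)
        for _ in range(d):
            support[np.arange(N), rng.permutation(N)] += 1
        if support.max() == 1:  # the d permutations never collided
            return support * rng.choice([-1, 1], size=(N, N))

def teacher_label(W_star: np.ndarray, x: np.ndarray, d: int) -> float:
    """Scalar output y of Eq. (7), with phi(t) = erf(t / sqrt(2))."""
    preact = (W_star.T @ x) / np.sqrt(d)
    return float(np.mean(erf(preact / np.sqrt(2))))

rng = np.random.default_rng(0)
W_star = make_sparse_teacher(N=32, d=2, rng=rng)
x = rng.standard_normal(32)          # inputs are i.i.d. N(0, 1)
y = teacher_label(W_star, x, d=2)
```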
As a first analysis, we assume that the teacher provides the complete topology to the student. In other words, the student knows the activation function \(\phi\) and the position of the non-zero elements in \(\mathbf{W}^{*}\), and just needs to infer the sign of the weights in her own neural network \[\hat{f}(\vec{x},\Theta)=\frac{1}{N}\sum_{r=1}^{N}\phi\left(\frac{(\mathbf{W}\vec{x})_{r}}{\sqrt{d}}\right), \tag{8}\] where \(\hat{f}(\vec{x},\Theta)\) is a scalar function, i.e., \(\hat{f}:\mathbb{R}^{N}\rightarrow\mathbb{R}\), and the set \(\Theta\) contains all the non-zero parameters of the matrix \(\mathbf{W}\). Before the learning process starts, the non-zero elements of this matrix are set randomly to \(\pm 1\). Also in this case, the natural choice for the student to learn the matrix is the greedy algorithm, with the loss function defined in (2). Although the model described by (7) and (8) is simpler than the ones described in Sec. 4.1 and Sec. 4.2, the inference of \(\mathbf{W}\) is more complicated, since the output layer just performs an average: this introduces a new permutation symmetry, which in turn implies that several permuted weight configurations are able to generalize the teacher model well. For this reason, we study the learning process by measuring only the loss function. Also in this case, we perform a numerical analysis to understand whether, upon changing the mini-batch size \(m\), the learning process undergoes a transition between two phases, one where inference is impossible and one where it is possible. In this case, it is more appropriate to say that the transition separates a phase where the student is not able to generalize the teacher model from a phase where generalization is possible. We define a set of pairs \((\vec{x}_{\eta},y_{\eta})_{\eta=1,\ldots,1024}\) as the validation data set.

Figure 6: **Top** panels display the averaged validation loss as a function of the mini-batch size \(m\) for different values of \(N\) at \(d=2\) (**left**) and \(d=3\) (**right**). Error bars are the standard deviation of the mean, obtained from 100 samples. **Bottom** panels display the fraction of successful learning processes, defined as those reaching a loss function value smaller than the threshold value 0.02 after 1024 Monte Carlo sweeps.

In Fig. 6, we present the analysis performed over different values of \(N\) and \(d\). More precisely, in the top panels we present the averaged validation loss as a function of the hyper-parameter \(m\), for two values of \(d\) (\(d=2\) in the top left panel, \(d=3\) in the top right panel) and different values of \(N=32,\,64,\,128\). For each data set, we observe a clear, sharp decrease in the mean value of the loss function when \(m\) crosses a critical value \(m_{c}(N|d)\), thus suggesting a sort of first-order transition in the minimization of the loss. The validation loss, however, does not reach the value \(0\). This effect is in agreement with the observations made above, and it is also visible in the Hamming distance (not plotted): for both values of \(d\), the Hamming distance starts from the random-guess value for small values of \(m\) and reaches, at \(m_{c}(N|d)\), a value of \(0.15\) for every value of \(N\) analyzed. In contrast to the validation loss, which remains constant above \(m_{c}(N|d)\), the Hamming distance increases as \(m\) increases, showing that the configurations found are _far away_ from the true planted model, yet generalize the teacher model well. Given the jump-like behaviour of the loss shown in the upper panels of Fig. 6, we can easily define a threshold value of the loss that discriminates successful learning processes. The fraction of successful processes is plotted in the bottom panels of Fig. 6, where we observe good scaling of the success probability of the learning process. The values of \(m_{c}(N|d)/d\) are reported in Fig. 7 as a function of \(N\). The linear growth supports the scaling \(m_{c}(N|d)\sim dN\) for the critical batch size. In analogy with the argument presented at the end of Sec. 4.1, we can compare this critical mini-batch size with the number of parameters to be inferred.
In the present model, the parameters in the \(\mathbf{W}\) matrix are \(dN\) and, given that \(y\) is a scalar, each mini-batch provides \(O(m)\) informative numbers. So one would expect the optimal scaling to be \(m_{c}\sim dN\), and this is confirmed by the data in Fig. 7.

Figure 7: Data for \(m_{c}(N|d)/d\) grow linearly with \(N\), thus supporting the scaling \(m_{c}(N|d)\sim dN\) for the critical batch size.

### Sparse teacher, dense student, one matrix, mismatch-case

For this analysis, we assume that the teacher model is the same as the one described in Sec. 4.3. In other words, the components of \(\vec{x}\in\mathbb{R}^{N}\) are i.i.d. from \(\mathcal{N}(0,1)\) and \(y\in\mathbb{R}\) is defined by Eq. (7) with the same activation function \(\phi\). In contrast to Sec. 4.3, here we assume the student has no knowledge about the topology of the teacher neural network, i.e. the non-zero elements of \(\mathbf{W}^{*}\), while she knows the functional form in Eq. (7). Because of her lack of knowledge, the best the student can do is to model the teacher neural network using a dense matrix \(\mathbf{W}\) with real elements. Learning the optimal \(\mathbf{W}\) matrix can possibly be achieved by optimizing the loss function defined in Eq. (2). This training process is very similar to the one commonly used to train modern neural networks through a deep learning approach, which is so relevant in many different disciplines. Since the student has no knowledge about the teacher neural network, the problem is no longer an inference problem but a learning problem: the student wants to find a configuration of the weights that generalizes the teacher model well. Again, our main focus is to understand whether the mini-batch size plays a crucial role in this process of generalizing the teacher model. In this particular case, however, a new hyper-parameter comes into play: the learning rate \(\lambda\). It determines the step size of each iteration while moving toward a minimum of the loss function. It needs to be set _a priori_ and can possibly be changed dynamically during the learning process; for the sake of simplicity, we always keep it fixed during the learning process. For the teacher model, we use a sparse matrix \(\mathbf{W}^{*}\) with \(d_{t}=2\) non-zero elements in each column and in each row: they are set in random positions to \(\pm 1\) with equal probability. We use \(N=32,\,64\) and build a student network with a hidden layer composed of \(N\) neurons. The matrix \(\mathbf{W}\) is dense, and each element of the matrix is initialized uniformly at random in \([0,1]\). We define a set of pairs \((\vec{x}_{\eta},y_{\eta})_{\eta=1,\ldots,1024}\) as the validation data set. In the left panel of Fig. 8 we show the evolution of the validation loss as a function of the number of steps of the SGD algorithm (i.e. the number of times we compute the gradient). Data are for a student network with \(N=32\) neurons in the hidden layer, and the SGD algorithm uses \(\lambda=0.1\) and different values of \(m\). The behaviour of the learning process has a marked dependence on the mini-batch size \(m\). For \(m=1\) we observe that the SGD dynamics get trapped in local minima with a very large loss. For larger \(m\) values we observe a steady improvement of the generalization, which becomes better both by increasing the number of SGD steps and by increasing \(m\).
Focusing on the validation loss at the largest time (\(10^{6}\) SGD steps), we observe that the improvement stops around the optimal value of the mini-batch size, \(m_{c}\simeq 64\). Above such a value we do not observe any improvement in the generalization (actually, the validation loss can even become a little larger by further increasing \(m\)). For the optimal value \(m=64\), the typical shape of the SGD relaxation, proceeding through successive plateaus, becomes very clear. This is a very well known phenomenon [55, 56, 57], and it is interesting to notice that it shows up only if the mini-batch size is set to the optimal value \(m_{c}\). The behaviour discussed above, where the learning and generalization processes improve upon increasing the mini-batch size \(m\), looks very general. In the right panel of Fig. 8 we show the value of the validation loss achieved after \(10^{6}\) SGD steps as a function of the learning rate \(\lambda\) and the mini-batch size \(m\). We observe that for any value of \(\lambda\) the generalization error decreases as a function of \(m\) until an optimal value \(m_{c}\) is reached. The value of \(m_{c}\) depends on the learning rate \(\lambda\), but its existence seems very robust over a wide \(\lambda\) range. It is worth noticing that the validation loss varies by several orders of magnitude upon increasing the value of \(m\).

Figure 8: **Left**: validation loss as a function of the SGD steps (i.e. the number of times that we compute the gradient) for the training of a student network with \(N=32\) neurons in the hidden layer (\(\lambda=0.1\), \(d_{t}=2\), \(d_{s}=32\)) and different values of \(m\). **Right**: phase diagram of the validation loss achieved after \(10^{6}\) SGD steps as a function of the learning rate \(\lambda\) and the mini-batch size \(m\) (here \(N=64\), \(d_{t}=2\), \(d_{s}=64\)).

## 5 Discussion

Given the central role played by the mini-batch in training artificial neural networks, it is surprising that so few results are available on the effects of changing its size. In this work we try to fill this gap by presenting an accurate numerical analysis that better quantifies the role of the mini-batch size \(m\) in training two-layer artificial neural networks. Working within the teacher-student scenario and fixing the teacher to be a sparse neural network, we study four different models for the student neural network. In all cases we observe a crucial dependence of the generalization error on the mini-batch size \(m\). In some cases, such a strong dependence turns into a sharp phase transition between phases where the student is able or unable to generalize the teacher's neural network well. In other words, we observed that above the critical value \(m_{c}\) the student can either infer the teacher model exactly or at least generalize it very well. The robustness of our results is supported by the fact that we find similar behavior in four different models, where the kind of task and the amount of information provided to the student vary. In particular, we have studied both the case where the network topology is known to the student and the case where it is not. Moreover, we have studied a regression problem where the output of the network has the same size as the input, and a simpler classification problem where the network has a single output neuron.
To train the student network we have used SGD (when the network parameters are real variables) and an SGD-like algorithm working with discrete variables otherwise. In all cases the effect of changing the mini-batch size is clear. The general picture that emerges from this study is the following. For small values of \(m\), the information provided to the algorithm at each step is too noisy and the training process gets stuck in sub-optimal configurations, called _glassy states_ in statistical physics [58]. Only for \(m\) large enough does the information collected at each step from the data allow the training algorithm to optimize the loss function and, in turn, produce a student network with good generalization skills. We have shown that in some models these two regimes are separated by a clear phase transition, and that the critical value \(m_{c}\) scales with the model parameters (e.g. the input size \(N\) and the teacher connectivity \(d\)) in such a way as to match the amount of information provided in each batch with the number of parameters to be assigned/inferred in the training process. This matching is a very simple rule that may help in understanding _a priori_ the optimal value of the mini-batch size.

## Data availability statement

The numerical codes used in this study and the data that support the findings are available from the corresponding author upon request.
2305.07918
CVGG-Net: Ship Recognition for SAR Images Based on Complex-Valued Convolutional Neural Network
Ship target recognition is a vital task in synthetic aperture radar (SAR) imaging applications. Although convolutional neural networks have been successfully employed for SAR image target recognition, surpassing traditional algorithms, most existing research concentrates on the amplitude domain and neglects the essential phase information. Furthermore, several complex-valued neural networks utilize average pooling to achieve full complex values, resulting in suboptimal performance. To address these concerns, this paper introduces a Complex-valued Convolutional Neural Network (CVGG-Net) specifically designed for SAR image ship recognition. CVGG-Net effectively leverages both the amplitude and phase information in complex-valued SAR data. Additionally, this study examines the impact of various widely-used complex activation functions on network performance and presents a novel complex max-pooling method, called Complex Area Max-Pooling. Experimental results from two measured SAR datasets demonstrate that the proposed algorithm outperforms conventional real-valued convolutional neural networks. The proposed framework is validated on several SAR datasets.
Dandan Zhao, Zhe Zhang, Dongdong Lu, Jian Kang, Xiaolan Qiu, Yirong Wu
2023-05-13T13:38:09Z
http://arxiv.org/abs/2305.07918v1
# CVGG-Net: Ship Recognition for SAR Images Based on Complex-Valued Convolutional Neural Network

###### Abstract

Ship target recognition is a vital task in synthetic aperture radar (SAR) imaging applications. Although convolutional neural networks have been successfully employed for SAR image target recognition, surpassing traditional algorithms, most existing research concentrates on the amplitude domain and neglects the essential phase information. Furthermore, several complex-valued neural networks utilize average pooling to achieve full complex values, resulting in suboptimal performance. To address these concerns, this paper introduces a Complex-valued Convolutional Neural Network (CVGG-Net) specifically designed for SAR image ship recognition. CVGG-Net effectively leverages both the amplitude and phase information in complex-valued SAR data. Additionally, this study examines the impact of various widely-used complex activation functions on network performance and presents a novel complex max-pooling method, called Complex Area Max-Pooling. Experimental results from two measured SAR datasets demonstrate that the proposed algorithm outperforms conventional real-valued convolutional neural networks.

Complex activation function, Complex Area Max-Pooling, Complex-valued convolutional neural network, Synthetic Aperture Radar (SAR), SAR target recognition.

## I Introduction

Synthetic aperture radar (SAR) is a high-resolution active microwave imaging sensor that operates without limitations related to weather or time [1], playing a crucial role in both military and civilian applications. SAR target recognition involves using a variety of methods to identify the category of targets in SAR images, and it represents a popular research direction in microwave remote sensing applications [2]. Presently, SAR target recognition methods can be roughly categorized into traditional methods, deep learning-based methods, and complex information-based methods. Traditional SAR image target recognition methods rely on two key technologies: feature extraction and target recognition. SAR images possess unique geometric, mathematical, and electromagnetic features that differ significantly from optical images. Common feature extraction methods for SAR images include template matching [3] and feature fusion [4]. Once features are extracted, an appropriate classifier is selected to recognize and classify them. Popular classifiers currently include support vector machines [5, 6], sparse representation classification [7], and others. However, most traditional recognition methods are adaptations of optical pattern recognition, characterized by complex feature designs and limited recognition rates. With the advent of deep learning technology, frameworks such as Convolutional Neural Networks (CNNs) and Fully Convolutional Neural Networks (FCNs) have demonstrated promising performance in SAR image target recognition, increasingly becoming mainstream in the field. Chen et al. [8] and Ding et al. [9] employed CNNs for SAR image target recognition, achieving impressive results on the MSTAR dataset. Concurrently, Lin et al. [10] integrated channel convolution and attention mechanisms, effectively focusing on the target area within SAR images. Deep learning-based methods can automatically extract features and perform recognition tasks, eliminating the need for manual feature design or complex physical modeling.
However, the majority of deep learning-based work emphasizes amplitude information, while phase information remains a crucial factor in SAR image target recognition [11]. SAR uses microwave coherent imaging, rendering SAR images complex-valued [12]. Compared to real values, complex values offer superior representation and generalization characteristics [13]. Consequently, there is an urgent need to develop SAR image target recognition methods that fully harness the advantages of complex-valued information. In 2017, Ref. [14] first introduced the CV-CNN, which employs complex average pooling to achieve better results than real-valued networks on multi-channel PolSAR image classification tasks. Yu et al. [15] proposed the CV-FCNN, which consists only of convolutional layers and utilizes 1x1 complex convolutions to learn cross-channel feature information, while Zhang et al. [16] presented CV-MotionNet, a complex-valued convolutional neural network architecture that eliminates the need for motion compensation in classifying moving SAR ship targets. However, recognizing targets in SAR images can be challenging due to the effects of coherent speckle noise [17], which results in the discretization of target pixels and poor target separability. Although complex-valued SAR images contain rich phase information, relying solely on amplitude information makes it difficult to obtain satisfactory recognition results. To address these issues, we propose a complex-valued convolutional neural network (CVGG-Net) for target recognition in SAR images that fully utilizes the amplitude and phase information in complex-valued SAR data. Moreover, we introduce a novel complex max-pooling method, termed "Complex Area Max-Pooling", which helps the network extract more effective features. Compared to traditional real-valued CNNs, the proposed method achieves higher accuracy.

## II Proposed Target Recognition Method

In recent years, CNNs have made significant strides in computer vision tasks. The CVGG-Net proposed in this study is based on the architecture of VGG [18] and incorporates the complex-valued convolution operation from [19]. Specifically designed for recognizing target objects in SAR images, the network accepts single-channel input. As illustrated in Fig. 1, the proposed CVGG-Net comprises 13 complex-valued convolutional blocks, 5 complex area max-pooling layers, 3 complex fully connected layers, 1 amplitude evaluation layer, and 1 softmax layer. CVGG-Net is designed with a VGG-like architecture, closely resembling VGG16. All layers within the network are complex-valued, allowing the complex convolutional layers to extract the information present in both the amplitude and phase of complex SAR images. This approach enables the extraction of richer features compared to traditional deep learning methods. Complex area max-pooling is utilized to reduce the number of network parameters and enhance overall robustness. The complex fully connected layers further extract target features, which are converted to real values in the last layer for calculating the cross-entropy loss with the labels. Finally, the network leverages a softmax layer for target recognition.

Fig. 1: Architecture of the CVGG-Net.

### _Complex-valued convolutional blocks_

A complex-valued convolutional block consists of a complex convolutional layer, a complex batch normalization layer, and a complex activation function layer, as shown in Fig. 2.

Fig. 2: Schematic diagram of complex-valued convolutional blocks.
#### II-A1 Complex Convolutional Layer

Complex convolution is an extension of traditional convolution to the complex domain. Unlike traditional convolution, which only utilizes amplitude information, complex convolution extracts target features using both the amplitude and phase information present in complex-valued SAR images, which adds complex-valued operations to the traditional convolution process. Experimental results demonstrate that complex convolutional layers outperform conventional convolutional layers, indicating their superiority in extracting meaningful target features. According to [19], when the convolution operation is extended to the complex field, the complex vector \(\mathrm{I=x+yj}\) and the complex convolution kernel \(\mathrm{W=A+Bj}\) perform the corresponding element-wise multiplication and summation operations, where \(\mathrm{j=\sqrt{-1}}\). Separating the real and imaginary parts of the complex feature layer, the complex-valued convolution is equivalent to: \[W*I=(A+Bj)*(x+yj)=(A*x-B*y)+(A*y+B*x)j. \tag{1}\] On the right-hand side of the above equation, \(*\) stands for traditional (real) convolution, \(\mathrm{x}\) and \(\mathrm{y}\) represent the real and imaginary parts of the complex-valued vector, respectively, and \(\mathrm{A}\) and \(\mathrm{B}\) represent the real and imaginary parts of the complex convolution kernel, respectively. It can be seen that one complex-valued convolution operation is equivalent to four conventional convolution operations, as shown in Fig. 3.

Fig. 3: Schematic diagram of the complex-valued convolution structure.

#### II-A2 Complex Batch Normalization Layer

In deep neural networks, batch normalization is often employed to stabilize the intermediate output values of each layer, promote model convergence, and mitigate the risk of overfitting [20]. Complex-valued batch normalization scales the data by the square root of the variance of its two principal components, real and imaginary, by dividing the zero-centered data \((X-E(X))\) by the square root of the \(2\times 2\) covariance matrix \(V\) [19]: \[\widetilde{X}=\frac{X-E(X)}{\sqrt{V}} \tag{4}\] \[V=\begin{pmatrix}cov(\Re\{x\},\Re\{x\})&cov(\Re\{x\},\Im\{x\})\\ cov(\Im\{x\},\Re\{x\})&cov(\Im\{x\},\Im\{x\})\end{pmatrix} \tag{5}\]

#### II-A3 Complex Activation Function Layer

To handle complex-valued representations, a family of complex activation functions has been proposed. CReLU, introduced in [21], extends the traditional ReLU activation function to the complex domain: \[\mathrm{CReLU(z)=ReLU(\Re\{z\})+jReLU(\Im\{z\})} \tag{6}\] CReLU applies separate ReLU activations to the real and imaginary parts of a neuron, satisfying the Cauchy-Riemann equations when both parts are either positive or negative. The existing literature lacks a definitive consensus on the most suitable activation function for complex-valued neural networks; the primary requirements are that the function be nonlinear and that its gradients do not explode or vanish during training [22]. In this paper, we evaluated several common complex-valued activation functions from the perspective of ease of implementation. After extensive experimentation, we selected CReLU as the activation function.
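To make Eqs. (1) and (6) concrete, below is a minimal PyTorch-style sketch of a complex convolution and the CReLU activation, carrying the real and imaginary parts as separate tensors. The module and function names are ours for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution per Eq. (1): W*I = (A*x - B*y) + j(A*y + B*x),
    realized with four real convolutions sharing two kernels A and B."""
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # kernel A
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # kernel B

    def forward(self, x, y):          # x: real part, y: imaginary part
        real = self.conv_re(x) - self.conv_im(y)
        imag = self.conv_re(y) + self.conv_im(x)
        return real, imag

def crelu(real, imag):
    """CReLU of Eq. (6): ReLU applied separately to real and imaginary parts."""
    return torch.relu(real), torch.relu(imag)
```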
### _Complex Area Max-Pooling_

As shown in Fig. 4(a), many researchers split complex-valued data into real and imaginary parts and perform real-valued max-pooling on each part separately. However, this approach is unreasonable and unsuitable for complex-valued SAR data. Fig. 4(b) illustrates the amplitude-based complex max-pooling (CAMaxPool) proposed in [23], which preserves the complex values at the coordinates with the maximum amplitude. Building on CAMaxPool, this paper further proposes complex area-based max-pooling, which selects the coordinates corresponding to the maximum area, as shown in Fig. 4(c). These two complex-valued max-pooling methods can be viewed as a choice between the values of \(f_{1}\) and \(f_{2}\). As shown in Fig. 5, \(f_{1}\) represents the length of the vector OA, and \(f_{2}\) is the area of the triangle AOB: \[\mathrm{f_{1}(z)=\sqrt{x^{2}+y^{2}}} \tag{7}\] \[\mathrm{f_{2}(z)=|xy|} \tag{8}\] The proposed complex area max-pooling thus introduces an alternative criterion for selecting the pooling coordinates. In essence, it tends to retain elements in which both the real and imaginary parts are significant. Although, in the SAR context, we cannot provide an appropriate physical explanation for this pooling strategy, our experiments demonstrate that area-based max-pooling outperforms the canonical amplitude-based complex max-pooling.

Fig. 4: Three kinds of complex max-pooling. (a) Real value-based complex max-pooling. (b) Amplitude-based complex max-pooling. (c) Area-based complex max-pooling.

Fig. 5: Diagram of coordinates.
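A minimal sketch of the two selection rules in Eqs. (7) and (8) is given below, assuming the real and imaginary parts are carried as separate channel-first tensors; it illustrates the selection criterion only and is not the authors' code.

```python
import torch
import torch.nn.functional as F

def complex_max_pool2d(real, imag, kernel=2, mode="area"):
    """Keep, in each pooling window, the complex value whose score is
    largest: amplitude f1 = sqrt(x^2 + y^2) or area f2 = |x * y|."""
    if mode == "amplitude":
        score = torch.sqrt(real**2 + imag**2)
    else:
        score = (real * imag).abs()
    _, idx = F.max_pool2d(score, kernel, return_indices=True)
    n, c, oh, ow = idx.shape
    flat_idx = idx.flatten(2)  # indices into the flattened spatial plane
    pooled_re = real.flatten(2).gather(2, flat_idx).view(n, c, oh, ow)
    pooled_im = imag.flatten(2).gather(2, flat_idx).view(n, c, oh, ow)
    return pooled_re, pooled_im
```

Note how the area criterion discards candidates whose real or imaginary part is close to zero, even if their amplitude is large, which matches the intuition given in the text.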
## III Experimental Results

### _A. Experimental Data and Platform_

Two SAR datasets are utilized to validate the proposed framework. One is CSRSDD (Complex SAR images Rotation Ship Detection Dataset), annotated in [24]. The other is OpenSARShip, a public dataset released by Shanghai Jiao Tong University in 2017 [25].

#### III-A1 CSRSDD

All data were collected in GF-3 Spotlight (SL) mode with 1 m resolution and HH or HV polarization. Each image is 1024x1024 pixels and contains abundant features. Rotated boxes are used to mark the targets, and the annotation format follows the DOTA dataset format [26]. The data are primarily concentrated in port areas, with offshore scenes accounting for more than 80%, featuring complex backgrounds and significant interference. To suit the recognition task, we cut the targets out according to the annotation files. The amplitude slices are saved as .tiff files and the complex slices as .mat files, with the largest slices being 610x944 pixels and the smallest 12x34 pixels, as shown in Fig. 6. Five categories are screened out, as presented in Table I.

#### III-A2 OpenSARShip

The dataset was collected from Sentinel-1 images and contains ground-range multi-look products and slant-range single-look complex products with both VV and VH polarization. In this paper, three types of ship targets (Bulk carrier, Cargo, and Tanker) are selected for research. Ship categories with too few samples are discarded, and each type of ship is randomly divided into training and testing sets according to Table II. To facilitate network processing, all image slices in both CSRSDD and OpenSARShip are normalized and padded or cropped to 224x224 pixels. All experiments in this paper were conducted with PyTorch 1.8.1 on a device running Ubuntu 18.04. The hardware includes a Quadro RTX 8000 GPU (40GB memory), an AMD 3950X CPU, and 64GB of RAM. Due to GPU memory limitations, we set the batch size to 32 and the initial learning rate to 0.0001, adopt the Adam optimizer, and train for 100 epochs in total.

### _B. Comparison of the Proposed Network and Real-Valued Networks_

As a benchmark for comparison, a small complex-valued network, CVnet5, with five complex-valued convolution blocks is built in this paper; its structure is shown in Fig. 7. To verify the effectiveness of the proposed method, we compare it with commonly used real-valued networks such as ResNet18, VGG16, and Net5, the real-valued counterpart of CVnet5. Each experiment is performed 10 times, and the average value is taken as the final experimental result. The results on the CSRSDD and OpenSARShip datasets are shown in Table III.

With amplitude-based max-pooling, the networks achieve 77.17% and 78.86% recognition rates on the CSRSDD dataset, respectively. With our area-based max-pooling, the network performance is improved by 0.75% and 0.71%, respectively. On the OpenSARShip dataset, the network performance is improved by 0.3% and 0.87%, respectively, with area-based max-pooling. These results indicate that the proposed area-based max-pooling is more effective than amplitude-based max-pooling for complex-valued networks.

### _C. Comparison with Other Complex-Valued Networks_

To verify the effectiveness of the proposed method, we compared its recognition results with other complex-valued convolutional networks, including CVnet5 and the complex-valued network CV-Net proposed in [17]. As shown in Table VII, the performance of the proposed method is superior to the other methods on both the CSRSDD and OpenSARShip datasets. Even when CV-Net is combined with our complex area max-pooling, the recognition rates of CVGG-Net remain higher by 0.07% and 0.23%. This improvement can be attributed to the fact that CVGG-Net, based on complex-valued convolution blocks and complex area max-pooling, is better suited to SAR image target recognition tasks.

## IV Conclusion

Traditional target recognition methods for SAR images often neglect phase information, which is crucial for recognition accuracy. Considering that SAR images are inherently complex-valued, we propose a complex-valued convolutional neural network, CVGG-Net, for target recognition in SAR images. Additionally, we introduce a novel area-based complex max-pooling method, which outperforms other complex max-pooling methods. Experimental results on the CSRSDD and OpenSARShip datasets demonstrate the effectiveness of the proposed complex area max-pooling and CVGG-Net algorithms. In the future, we plan to extend complex-valued convolutional neural networks to other fields, such as SAR object detection and semantic segmentation.
2308.06422
Sensitivity-Aware Mixed-Precision Quantization and Width Optimization of Deep Neural Networks Through Cluster-Based Tree-Structured Parzen Estimation
As the complexity and computational demands of deep learning models rise, the need for effective optimization methods for neural network designs becomes paramount. This work introduces an innovative search mechanism for automatically selecting the best bit-width and layer-width for individual neural network layers. This leads to a marked enhancement in deep neural network efficiency. The search domain is strategically reduced by leveraging Hessian-based pruning, ensuring the removal of non-crucial parameters. Subsequently, we detail the development of surrogate models for favorable and unfavorable outcomes by employing a cluster-based tree-structured Parzen estimator. This strategy allows for a streamlined exploration of architectural possibilities and swift pinpointing of top-performing designs. Through rigorous testing on well-known datasets, our method proves its distinct advantage over existing methods. Compared to leading compression strategies, our approach records an impressive 20% decrease in model size without compromising accuracy. Additionally, our method boasts a 12x reduction in search time relative to the best search-focused strategies currently available. As a result, our proposed method represents a leap forward in neural network design optimization, paving the way for quick model design and implementation in settings with limited resources, thereby propelling the potential of scalable deep learning solutions.
Seyedarmin Azizi, Mahdi Nazemi, Arash Fayyazi, Massoud Pedram
2023-08-12T00:16:51Z
http://arxiv.org/abs/2308.06422v3
# Sensitivity-Aware Mixed-Precision Quantization and Width Optimization of Deep Neural Networks Through Cluster-Based Tree-Structured Parzen Estimation

###### Abstract

As the complexity and computational demands of deep learning models rise, the need for effective optimization methods for neural network designs becomes paramount. This work introduces an innovative search mechanism for automatically selecting the best bit-width and layer-width for individual neural network layers. This leads to a marked enhancement in deep neural network efficiency. The search domain is strategically reduced by leveraging Hessian-based pruning, ensuring the removal of non-crucial parameters. Subsequently, we detail the development of surrogate models for favorable and unfavorable outcomes by employing a cluster-based tree-structured Parzen estimator. This strategy allows for a streamlined exploration of architectural possibilities and swift pinpointing of top-performing designs. Through rigorous testing on well-known datasets, our method proves its distinct advantage over existing methods. Compared to leading compression strategies, our approach records an impressive 20% decrease in model size without compromising accuracy. Additionally, our method boasts a 12\(\times\) reduction in search time relative to the best search-focused strategies currently available. As a result, our proposed method represents a leap forward in neural network design optimization, paving the way for quick model design and implementation in settings with limited resources, thereby propelling the potential of scalable deep learning solutions.

## I Introduction

Deep neural networks (DNNs) have emerged as highly powerful and versatile tools for tackling real-world problems across various domains, including computer vision [1, 2, 3, 4] and natural language processing [5, 6, 7]. The remarkable performance of DNNs can be attributed to their capacity to learn intricate patterns and representations from large-scale data, involving millions or even billions of arithmetic operations and parameters. However, this success comes at the expense of high compute cycles, memory footprint, I/O bandwidth, and energy consumption, resulting in an increased carbon footprint and limitations in deploying DNNs on resource-constrained platforms. To address these challenges, quantization and structured pruning techniques have emerged as promising strategies, offering the potential to mitigate the computational and memory burden of DNNs while maintaining satisfactory accuracy [8, 9, 10, 11]. These techniques have proven effective in making DNNs more efficient, enabling their deployment on edge devices and accelerating inference in cloud environments. However, the successful application of quantization and structured pruning heavily relies on finding the optimal bit-width and layer-width for each layer of the DNN, which presents a formidable challenge due to the exponential growth of the search space as the number of layers increases. The challenges in optimizing the bit-width for each layer of a DNN are further compounded by the inherent diversity in weight distributions across different layers (see Fig. 1), in addition to the sensitivity of a DNN's predictions to each layer's weights. The weight distributions and sensitivity values can vary significantly, and as a result, different layers may benefit from different bit-widths to achieve maximum gains in terms of memory and computation cost savings without compromising accuracy. Additionally, widening layers can profoundly impact the accuracy of the model [2].
Therefore, the search process should not be confined to reducing the layer-width only; it must also identify situations where widening a layer, combined with a sufficiently reduced bit-width, leads to potential accuracy gains without compromising cost savings. The search process for optimizing the bit-width and layer-width of DNNs should also extend beyond traditional metrics such as FLOPs and memory footprint when evaluating cost savings. While these metrics are useful, they may not directly reflect the real-world performance and efficiency of the model. Therefore, it is imperative to consider factors like real latency, throughput, and energy consumption when comparing models in terms of cost savings. Hence, devising an algorithm that effectively explores the bit-width and layer-width search space, while considering the diverse weight distributions and sensitivity values and the varying impact of layer-width on accuracy and real-world performance, becomes critical. This necessitates an approach that can intelligently adapt the bit-width and layer-width for each layer, striking a balance between latency, throughput, or energy consumption and the preservation of model accuracy. This paper presents a novel model-based optimization method based on tree-structured Parzen estimators (TPEs) [12] to address the challenge of simultaneously searching for the optimal bit-width and layer-width values for DNN layers.

Fig. 1: Distribution of weights in three representative layers of the MobileNetV1 architecture trained on the CIFAR-100 dataset.

The major innovations of the presented optimization method are as follows:

* Exploiting Second-Order Derivatives: In addition to leveraging the distribution of layer weights, our method incorporates second-order derivatives of the loss function with respect to the layer weights. This exponential pruning of the search space is particularly effective when dealing with pre-trained models, resulting in enhanced optimization efficiency.
* Hardware-Aware Objective Function: Our approach takes into account essential information about the target hardware that will execute the optimized DNN. We construct latency, throughput, and energy consumption models, which, combined with model accuracy, define a composite objective function guiding the optimization process.
* Handling Flat Loss Landscapes: DNNs often exhibit flat loss landscapes, which pose challenges for conventional optimizers. To address this, we fit surrogate distributions to both desirable and undesirable observations of the objective function. This adaptation enables us to achieve comparable or better objective values compared to state-of-the-art optimizers while significantly reducing convergence time, typically by a factor of at least 10\(\times\).
* Joint Optimization of Bit-Width and Layer-Width: Unlike conventional approaches that treat bit-width and layer-width independently, we search for jointly optimal configurations. This novel approach enables us to discover configurations that yield higher-quality results, which would otherwise have been challenging to find with independent optimization.

Through an extensive series of experiments, we demonstrate the effectiveness and efficiency of our method. The results showcase substantial improvements in DNN processing efficiency without sacrificing predictive accuracy.

## II Related Work

Mixed-precision quantization exploits the fact that not all model parameters require the same level of precision to maintain model accuracy.
Quantization-aware training techniques simulate the quantization effects during the training process, allowing the model to adapt to lower precision, while post-training quantization techniques quantize DNN models after they have been trained at full precision. The former typically achieves higher accuracy at the expense of increased training time.

In sensitivity-based mixed-precision quantization, first- or second-order gradient statistics, or other sensitivity metrics, are employed as a proxy for layer importance, which is subsequently used to assign an appropriate bit-width to each layer [13, 14, 15]. However, sensitivity-based quantization has some shortcomings. First, it does not take into account the influence of quantized weights and input activations on output activations, which are inputs to other layers of a DNN, including the batch normalization layers commonly found after convolutional layers. In other words, the batch statistics of batch normalization layers and the weights of succeeding convolutional layers are trained with full-precision input activations, and any change in those input activations due to quantization may hamper model accuracy. Second, gradient-based methods utilize gradients calculated on the full-precision model, thus overlooking the impact of quantization on the gradients, which is a significant effect when using ultra-low bit-width quantization. Third, the sensitivity values are often calculated based on the weights of layers and thus provide no insight into the proper bit-width of the input activations. As a result, sensitivity-based quantization approaches often fall short of achieving high model size compression ratios and/or maintaining accuracy levels, leaving ample space for further optimization.

In reinforcement learning (RL)-based mixed-precision quantization, an agent interacts with a quantized DNN environment, adjusting the bit-width configurations of the various layers using RL algorithms, such as policy gradients, which provide rewards based on model accuracy and resource consumption, enabling the agent to discover optimal quantization strategies through iterative exploration and exploitation [16, 17, 18]. Despite their potential benefits, these RL-based techniques are confronted with a significant challenge: the considerable search time involved in the RL training process. As a consequence, achieving favorable results within specified GPU-hour constraints becomes challenging.

Differentiable search-based approaches build upon the concept of differentiable architecture search [19] and apply it to mixed-precision quantization. In these approaches, a super-network is trained in which each layer (or activation function) of a DNN, such as a convolutional layer, is replaced with parallel branches, where each branch implements a quantized version of the layer (or activation function) [20]. These approaches have significant drawbacks. First, they suffer from long training times and high GPU RAM requirements due to the very large size of the super-network. Second, the distribution defined over each group of parallel branches, which is found during training, may not converge to a uni-modal distribution, rendering the selection of a single quantized version of a layer infeasible.

Lastly, sequential model-based mixed-precision quantization approaches employ surrogate models that define a mapping between the search space and an objective function to guide the exploration of the search space [21].
The tree-structured Parzen estimator (TPE) is a powerful tool that has shown great success in hyperparameter tuning [22]. However, its naive application to mixed-precision quantization ignores the flat loss landscapes that are prevalent in DNNs. This greatly increases the search time and yields inferior objective values. Our TPE-based optimization achieves a \(12\times\) average search-time speedup and a 20% reduction in model size compared to [21] while preserving the model accuracy.

## III Proposed Method

This section details the three main components of our model-based bit-width and layer-width optimization framework, i.e., Hessian-based search space pruning, \(K\)-means TPE, and hardware-aware performance modeling.

### _Hessian-Based Search Space Pruning_

The size of the search space grows exponentially as the number of DNN layers increases. As a result, pruning the search space by eliminating the bit-width choices that are likely to hamper the model accuracy or cause unnecessary computations is of paramount importance, due to the resulting exponential reduction in the size of the search space. The Hessian of the loss function with respect to each layer's weights provides an excellent starting point for evaluating the criticality of the bit-width for each layer. We first prove an important result.

**Lemma 1**.: _The maximum error induced in a DNN's output by unit perturbation in a layer's parameters is bounded by the trace of the Hessian matrix of the loss with respect to that layer's parameters._

Proof.: Assume we freeze all parameters of a DNN except those of a single convolutional filter or neuron in layer \(l\). Let \(\mathbf{w}_{l}\) and \(\mathbf{w}_{l}^{\mathrm{q}}\) denote the parameters of layer \(l\) and their corresponding quantized values, respectively. Taylor's Theorem implies that the output loss may be approximated around \(\mathbf{w}_{l}^{\mathrm{q}}\) as follows: \[\mathcal{L}(\mathbf{w}_{l})\approx\mathcal{L}(\mathbf{w}_{l}^{\mathrm{q}})+(\mathbf{w}_{l}-\mathbf{w}_{l}^{\mathrm{q}})^{\mathrm{T}}\nabla\mathcal{L}_{\mathbf{w}_{l}}+\frac{1}{2}(\mathbf{w}_{l}-\mathbf{w}_{l}^{\mathrm{q}})^{\mathrm{T}}\mathbf{H}_{\mathbf{w}_{l}}(\mathbf{w}_{l}-\mathbf{w}_{l}^{\mathrm{q}}),\] where \(\nabla\mathcal{L}_{\mathbf{w}_{l}}\) is the gradient vector of the loss function with respect to the parameters \(\mathbf{w}\) evaluated at \(\mathbf{w}_{l}\), which is nearly zero for a trained model, and \(\mathbf{H}_{\mathbf{w}_{l}}\) denotes the Hessian matrix, whose entries are the second derivatives of the loss function with respect to the parameters \(\mathbf{w}\) evaluated at \(\mathbf{w}_{l}^{\mathrm{q}}\). Therefore, we have: \[\mathcal{L}(\mathbf{w}_{l})-\mathcal{L}(\mathbf{w}_{l}^{\mathrm{q}})\approx\frac{1}{2}(\mathbf{w}_{l}-\mathbf{w}_{l}^{\mathrm{q}})^{\mathrm{T}}\mathbf{H}_{\mathbf{w}_{l}}(\mathbf{w}_{l}-\mathbf{w}_{l}^{\mathrm{q}}). \tag{1}\] By writing the spectral decomposition of the Hessian matrix as \(\mathbf{H}_{\mathbf{w}_{l}}=\mathbf{U}\mathbf{D}\mathbf{U}^{\mathrm{T}}\), where \(\mathbf{U}\) is a unitary matrix and \(\mathbf{D}\) is a diagonal matrix, and plugging this expression back into (1), we obtain: \[\Delta\mathcal{L}(\mathbf{w}_{l})\approx\frac{1}{2}(\mathbf{U}^{\mathrm{T}}\Delta\mathbf{w}_{l})^{\mathrm{T}}\mathbf{D}(\mathbf{U}^{\mathrm{T}}\Delta\mathbf{w}_{l}),\] with \(\Delta\mathbf{w}_{l}=\mathbf{w}_{l}-\mathbf{w}_{l}^{\mathrm{q}}\). Denoting \(\mathbf{U}^{\mathrm{T}}\Delta\mathbf{w}_{l}\) by \(\mathbf{a}\), we have: \[\Delta\mathcal{L}(\mathbf{w}_{l})\approx\frac{1}{2}\mathbf{a}^{\mathrm{T}}\mathbf{D}\mathbf{a}=\frac{1}{2}\sum_{i}a_{i}^{2}\lambda_{i}\leq\frac{1}{2}\max_{i}(a_{i}^{2})\sum_{i}\lambda_{i}.\] Since \(\mathbf{U}\) is a unitary matrix and the lemma assumes unit perturbation, \(\max_{i}(a_{i}^{2})\leq 1\). Therefore, \[\Delta\mathcal{L}(\mathbf{w}_{l})\leq\frac{1}{2}\mathrm{Tr}(\mathbf{H}_{\mathbf{w}_{l}}).\]
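The paper does not detail how the per-layer Hessian traces are estimated in practice; one standard choice consistent with this use is Hutchinson's stochastic estimator combined with Hessian-vector products, sketched below in PyTorch (function and variable names are ours, for illustration only).

```python
import torch

def hessian_trace(loss, params, n_samples=100):
    """Hutchinson estimator: E[v^T H v] = Tr(H) for Rademacher vectors v.
    Hessian-vector products are obtained via double backpropagation."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_samples):
        vs = [torch.randint_like(p, 2) * 2.0 - 1.0 for p in params]  # +/-1 entries
        hvs = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        trace += sum((v * hv).sum() for v, hv in zip(vs, hvs)).item()
    return trace / n_samples
```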
The significance of this lemma lies in its implications for training and quantizing DNNs. For example, a small trace of the Hessian matrix indicates that the loss surface is relatively flat around the current parameter values, making the model less sensitive to perturbations and potentially more stable during training. On the other hand, a large trace suggests that the loss surface has steeper curvature, indicating higher sensitivity to parameter changes and potentially making training more challenging.

The Hessian-based search space pruning algorithm starts by normalizing the trace of the Hessian of the loss function with respect to each layer's weights by the number of weights in that layer, to obtain an estimate of the relative importance of the layers in a DNN. It then applies \(k\)-means clustering to the normalized trace values, sorts the clusters in non-increasing order of their centroid values, and assigns candidate bit-widths to the layers within each cluster according to these centroid values, assigning higher bit-widths to layers that belong to clusters with larger trace values. We use the same bit-width for the weights and input activations of a DNN layer for improved hardware performance. For example, when \(k=4\), possibly overlapping subsets of the candidate bit-widths \(B=\{8,6,4,3,2\}\) can be considered for the different clusters, e.g., \(B_{1}=\{8,6\}\), \(B_{2}=\{6,4,3\}\), \(B_{3}=\{4,3,2\}\), and \(B_{4}=\{3,2\}\), where the bit-widths of layers within the cluster with the largest centroid value are selected from \(B_{1}\), those for the second largest centroid value are chosen from \(B_{2}\), and so on.1

Footnote 1: The part of the search space defined by the layer-width values is not pruned, and the layer-width is always taken from the set \(S=\{0.75,0.875,1,1.125,1.25\}\).
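To illustrate this step, here is a minimal sketch, assuming the per-layer traces have already been estimated (e.g., as above), \(k=4\), and the example candidate subsets from the text; scikit-learn's KMeans stands in for whatever clustering implementation is actually used.

```python
import numpy as np
from sklearn.cluster import KMeans

# Candidate subsets from the text; B_SUBSETS[i] serves the cluster
# with the i-th largest centroid.
B_SUBSETS = [(8, 6), (6, 4, 3), (4, 3, 2), (3, 2)]

def prune_bitwidth_space(layer_traces, layer_sizes, k=4):
    """Map each layer to a reduced set of candidate bit-widths based on
    its size-normalized Hessian trace (a sketch of the pruning step)."""
    scores = np.asarray(layer_traces) / np.asarray(layer_sizes)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(scores.reshape(-1, 1))
    centroids = [scores[labels == c].mean() for c in range(k)]
    # Rank clusters by centroid, largest first.
    rank = {c: r for r, c in enumerate(np.argsort(centroids)[::-1])}
    return [B_SUBSETS[rank[lab]] for lab in labels]
```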
### \(K\)-Means TPE

An integral part of the presented framework is a novel sequential model-based optimization methodology based on TPE methods. The main idea behind TPE methods is to recursively partition the data space into smaller regions (nodes) and estimate the probability density function within each region separately. In particular, TPE methods use Bayesian reasoning to propose configurations from the search space that are likely to improve an objective function, as explained next. After drawing a few random configurations \(\chi=\{x^{(1)},x^{(2)},\ldots,x^{(k)}\}\) from the search space and observing their corresponding objective function values \(Y=\{y^{(1)},y^{(2)},\ldots,y^{(k)}\}\), the TPE methods define a threshold \(\hat{y}\) equal to the largest of the \(q\)-quantiles of \(Y\), and create a surrogate distribution \(l(x)\) for configurations with desirable objective function values (\(y^{(i)}\geq\hat{y}\) in a maximization problem) and a surrogate distribution \(g(x)\) for configurations with undesirable objective function values (\(y^{(i)}<\hat{y}\)). The candidate configuration to be evaluated next is an \(\tilde{x}\) that maximizes the ratio \(\frac{l(x)}{g(x)}\). The observed objective value \(\tilde{y}\) then leads to an update of \(\hat{y}\), \(l(x)\), and \(g(x)\), and the process repeats until a stopping criterion is met.

Due to the flat loss landscapes that are prevalent in DNNs, widely different configurations from the search space may yield very close objective values. This becomes problematic when the objective values of configurations from promising parts of the search space fall slightly below \(\hat{y}\), which puts those configurations in \(g(x)\). This effectively discourages exploring those parts of the search space, which, in turn, can yield configurations with inferior objective values. To address this problem, we introduce a novel dual-threshold TPE method that incorporates \(k\)-means clustering in the threshold definition process (which we name \(k\)-means TPE). The dual-threshold optimizer starts with an initial \(k\) (where \(k\geq 3\)) for clustering the elements of \(Y\), sorts the clusters in decreasing order of their centroid values (\(C_{1},\ldots,C_{k}\)), and defines the surrogate distributions as follows: \[p(x|y)=\begin{cases}l(x),&\text{if }y\in C_{1}\\ g(x),&\text{if }y\in C_{k}.\end{cases}\] After every few iterations of the search process, we increase \(k\), which tightens the criteria for being a desirable or undesirable configuration. This effectively implements an annealing process that initially allows large moves in the search space to explore distant regions, and gradually reduces the move size so that the search is narrowed to regions close to the currently found promising solutions.
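A schematic sketch of this dual-threshold rule is given below; only membership in the highest- and lowest-centroid clusters decides which observations feed \(l(x)\) and \(g(x)\), and observations in the intermediate clusters are left out. The helper name is ours, and scikit-learn's KMeans is an assumed stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_observations(xs, ys, k=4):
    """Dual-threshold rule of k-means TPE: configurations in the cluster
    with the highest centroid feed the 'good' surrogate l(x); those in
    the cluster with the lowest centroid feed the 'bad' surrogate g(x)."""
    ys = np.asarray(ys, dtype=float)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(ys.reshape(-1, 1))
    centroids = [ys[labels == c].mean() for c in range(k)]
    best, worst = np.argmax(centroids), np.argmin(centroids)
    good = [x for x, lab in zip(xs, labels) if lab == best]
    bad = [x for x, lab in zip(xs, labels) if lab == worst]
    return good, bad   # fit l(x) on `good`, g(x) on `bad`
```

Incrementing `k` every few iterations mimics the annealing schedule described above: the desirable and undesirable clusters shrink, tightening both criteria as the search proceeds.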
### _Hardware-Aware Objective Function_

The problem addressed in this work is mathematically formulated in its general form as follows: \[\max_{B,S} \text{accuracy}(\boldsymbol{\Theta},\mathcal{D},B,S)\] s.t. \[\text{modelSize}(\boldsymbol{\Theta},B,S)\leq\mu\] \[\text{latency}(\boldsymbol{\Theta},\mathcal{H},B,S)\leq\tau\] \[\text{energy}(\boldsymbol{\Theta},\mathcal{H},B,S)\leq\epsilon\] \[\text{throughput}(\boldsymbol{\Theta},\mathcal{H},B,S)\geq\pi,\] where \(\boldsymbol{\Theta}\) denotes the parameters of the target DNN to be optimized, \(\mathcal{D}\) comprises samples from a target dataset, \(B\) and \(S\) denote the sets of candidate bit-widths and layer-width multipliers, respectively, \(\mathcal{H}\) characterizes the target hardware, \(\mu\), \(\tau\), and \(\epsilon\) represent the model size, latency, and energy consumption upper bounds, and \(\pi\) denotes the throughput lower bound. The optimization problem can be addressed by solving its Lagrangian dual problem, where large Lagrange multipliers are assigned to the various constraints. In practical applications, one or more of the said constraints may be relaxed. The focus of this work is on the model size and latency constraints.

Exemplary target hardware used in this work is a Xilinx FPGA in which the chip layout includes columns of DSPs, BRAMs, and CLBs. Our design for the hardware architecture of the accelerator that processes a given DNN comprises

* a 2D systolic array of \(M\times N\) processing elements (PEs), where each PE contains one DSP and a companion BRAM, and
* a memory hierarchy that encompasses off-chip DRAM, on-chip URAMs and BRAMs, and register files.

When processing a DNN layer, a first set of \(N\) input activations is loaded into the first row of the systolic array, and each input activation is multiplied with a corresponding weight of the first output filter, residing in the BRAM associated with each of the PEs in that row. In the next cycle, the first set of input activations is passed down to the second row of PEs, while a second set of input activations is loaded into the first row. The second set of input activations is multiplied with a second set of weights associated with the first filter, while the first set of activations is multiplied with the first set of weights associated with the second output filter. Evidently, this process is repeated multiple times (i.e., \(N^{\prime}/N\) times, where \(N^{\prime}\) is the number of entries in the input feature patch, which is commonly equal to \(3\times 3\times I\) with \(I\) denoting the input channel count) to produce the filter results for \(M\) output channels of the layer. The partial products computed in different cycles are accumulated within each PE of the 2D array. In the end, all partial accumulation results stored locally in the PEs of each row are passed to a tree adder structure (which we call a processing unit, of which there are \(M\)) to produce the final scalar convolution result for each output channel. We point out that when the number of output channels \(M^{\prime}\) is larger than \(M\), the systolic array must be invoked \(M^{\prime}/M\) times.

Each DSP block can perform a \(27\times 18\)-bit multiplication followed by a 48-bit accumulation. A crux of our design lies in packing multiple low-bit-width operands into each line of memory, in addition to utilizing each DSP to efficiently perform multiple multiplications and additions. We extend the idea of HiKonv [23], which introduces packed operations for 1D convolutions, to support 2D convolutions with arbitrary bit-widths. More specifically, our operand and operation packing approach is capable of performing two multiplications for eight- or six-bit operands, six multiplications and two additions for four- or three-bit operands, and 15 multiplications and eight additions for two-bit operands, all while using only a single DSP. Fig. 2 illustrates an example of operand and operation packing for four-bit operands.

Fig. 2: Four-bit operand and operation packing. The design yields the computations required for two rows of convolutional kernels every two cycles.

As a result of our architecture design and packing scheme, weight quantization yields a linear weight size reduction as a function of the bit-width selected for a layer, while the latency reduction is a function of the number of operations that can be packed, as explained above. Considering the total number of layers and the number of weights/operations per layer, the overall model size reduction and speedup can be easily calculated. Alg. 1 summarizes the different steps of the search process presented in this work.
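The arithmetic behind operand packing can be illustrated with a small numeric example in the spirit of HiKonv [23]: two weights are packed into one wide multiplicand so that a single multiplication yields both products in disjoint bit fields. Unsigned 4-bit operands and an 8-bit guard spacing are simplifying assumptions of ours, not the exact DSP mapping of Fig. 2.

```python
def packed_multiply_demo(w1: int, w2: int, x: int, shift: int = 8):
    """Pack two 4-bit weights into one integer, multiply once by a 4-bit
    activation, and recover both products from disjoint bit fields.
    Each product must fit in `shift` bits (here 15 * 15 = 225 < 256)."""
    assert 0 <= w1 < 16 and 0 <= w2 < 16 and 0 <= x < 16
    packed = w1 | (w2 << shift)          # one wide operand
    product = packed * x                 # a single multiplication
    p1 = product & ((1 << shift) - 1)    # low field  -> w1 * x
    p2 = product >> shift                # high field -> w2 * x
    return p1, p2

assert packed_multiply_demo(11, 7, 13) == (11 * 13, 7 * 13)
```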
## IV Results & Discussion This section presents the results of experiments that evaluate the effectiveness of our methodology, which involve hyperparameter tuning in addition to mixed-precision quantization and layer-width scaling through neural architecture search. ### _Convergence of \(K\)-Means TPE_ This section presents the speedup in convergence for three types of machine learning models and three datasets. The first experiment involves applying random forest regression to the Iris dataset. The variables that define the search space are the number of trees in the forest, the maximum depth of each tree, and the minimum number of samples required to split an internal node. The second experiment involves training a gradient boosting classifier on the Titanic dataset. The variables that define the search space are the learning rate, number of boosting stages, maximum depth of the estimator, minimum number of samples required to split a node, minimum number of samples required at a leaf node, and number of features to consider when looking for the best split. For the first two experiments, \(n_{0}=20\), \(n=100\), \(k=4\), and \(\alpha=0.98\). Finally, the third experiment involves mixed-precision quantization and layer-width scaling of a ResNet-18 model on the CIFAR-100 dataset. For this experiment, \(n_{0}=40\), \(n=160\), \(k=4\), and \(\alpha=0.98\). Figure 3 showcases the superior convergence speed of the presented \(k\)-means TPE over the traditional TPE for all experiments. It is observed that \(k\)-means TPE converges to superior or same-quality results with about two to three times fewer evaluations of proposed configurations compared to TPE.

Fig. 3: Comparison of the convergence speed of TPE and \(k\)-means TPE for different machine learning algorithms and on the Iris, Titanic, and CIFAR-100 datasets.

### _Mixed-Precision Quantization and Layer-Width Scaling_ To show the effectiveness of \(k\)-means TPE on a variety of DNNs and datasets, we integrate it into the HyperOpt [22] library, and use the PyTorch library [24] to train and quantize the models. For all experiments, we only use a very small number of epochs to evaluate different configurations during the search process, similar to [19]. More specifically, for models trained on the CIFAR-10 and CIFAR-100 datasets, the number of epochs to train is set to four, and for the more extensive ImageNet dataset [25], this number is set to one. As shown in Table I for an exemplary model and dataset, such an approximation does not make a noticeable difference in the final results compared to a scenario where each configuration is trained for a much larger number of epochs. After finding the best configuration, we train the models for 90 epochs using the Adam [26] optimizer with a weight decay equal to \(10^{-4}\). We apply OneCycleLR learning rate scheduling with a maximum learning rate of 0.01. Table II compares the accuracy, model size, and speedup (in terms of latency) of different DNNs trained on different datasets and quantized using a variety of approaches. #### IV-B1 ImageNet **ResNet-18 [1]** The ResNet-18 model trained with bit-widths and layer-widths returned by the search process achieves 70.8% validation accuracy with model size reduced to 4.01 MB and latency improved by a factor of 10.9 compared to a 16-bit fixed-point baseline. The trained model outperforms PACT [8], AutoQ [18], EvoQ [15], and the mixed-precision quantization presented in [14] in terms of both accuracy and compression (latency numbers are not reported in these references). Compared to the best prior work on ResNet-18 [14], our ResNet-18 achieves 1.1% higher accuracy at about 9% smaller model size (please note that [14] uses mixed-precision quantization with a minimum of three bits per weight, but it does not report model size, so we underestimate their model size by assuming all their weights are mapped to three bits).
**MobileNetV2 [30]** The \(k\)-means TPE can compress the MobileNetV2 model to only 1.5 MB while causing a 0.6% accuracy drop compared to the baseline floating-point model. These results advance the state of the art in MobileNetV2 quantization: the optimized model is only 3% larger than that of EMQ [28] while achieving 1% higher accuracy on the difficult ImageNet dataset. **ResNet-50 [1]** The optimized ResNet-50 has a 7.15 MB model size while incurring only a 0.6% accuracy degradation. To the best of our knowledge, our method is the first to compress the network to this level with an acceptable accuracy drop. The work in [14] achieves a similar accuracy drop but obtains a model with an 11.4% larger model size. #### IV-B2 CIFAR-100 Using \(k\)-means TPE, we successfully compressed ResNet-18 and MobileNetV1 on the CIFAR-100 dataset by factors of 11.18\(\times\) and 5.06\(\times\), respectively, without any loss in accuracy. The inference latency also experienced a significant improvement, as summarized in Table II. Figure 4 showcases some of the samples explored by the search engine for ResNet-18, alongside the best configuration returned by the algorithm. The effective performance of \(k\)-means TPE in discovering high-performing samples is evident. #### IV-B3 CIFAR-10 For the CIFAR-10 dataset, we conducted experiments with ResNet-20 and compared our results to those of ReLeQ [17]. As demonstrated in Table II, \(k\)-means TPE outperformed ReLeQ in compressing to ultra-low storage size while preserving accuracy levels. Our approach showcased its superiority in achieving highly efficient models without compromising performance. To gain deeper insights into the efficiency of our \(k\)-means TPE search, we conducted a comparison with the recent work BOMP-NAS [21], which achieved state-of-the-art results in terms of GPU-hours of search and compression level. As depicted in Table III, our ResNet-20 model achieved nearly the same level of accuracy as BOMP-NAS, while being 31.5% smaller in size. Furthermore, the search cost for \(k\)-means TPE was 9.23\(\times\) less than that of BOMP-NAS. Additionally, our ResNet-18 model was 40% more compressed than BOMP-NAS, with a search time that was 14.63\(\times\) less than the reported GPU-hours for BOMP-NAS. These results highlight the significant advantages of our \(k\)-means TPE approach in terms of search efficiency and model compression. Table IV shows some configurations found by \(k\)-means TPE. For each model and dataset, the first and second rows of configurations contain, respectively, the assigned bit-width and the assigned layer-width scaling factor for each layer. As the table demonstrates, to be able to quantize some layers to ultra-low precision (e.g., 2 or 3 bits), the method may strategically scale up the number of filters in those layers. By doing so, the algorithm effectively mitigates quantization errors, achieving a favorable trade-off between precision reduction and layer-width scaling. This demonstrates the effectiveness of the joint optimization of bit-widths and layer-widths through \(k\)-means TPE.

Fig. 4: The space for ResNet-18 compression and the output model

## V Conclusions We presented a search-based approach including a Hessian-based pruner and a tree-structured dual-threshold Parzen estimator for automatic optimization of DNN bit-width and layer-width configurations to enable their efficient deployment.
Through extensive experiments on benchmark datasets, we showed a \(12\times\) average search time speedup and a 20% reduction in model size compared to state-of-the-art compression techniques while preserving the model's output accuracy. Such an advancement facilitates rapid model development and deployment in resource-constrained environments, unlocking new possibilities for scalable deep learning systems.
2306.12689
Vec2Vec: A Compact Neural Network Approach for Transforming Text Embeddings with High Fidelity
Vector embeddings have become ubiquitous tools for many language-related tasks. A leading embedding model is OpenAI's text-ada-002 which can embed approximately 6,000 words into a 1,536-dimensional vector. While powerful, text-ada-002 is not open source and is only available via API. We trained a simple neural network to convert open-source 768-dimensional MPNet embeddings into text-ada-002 embeddings. We compiled a subset of 50,000 online food reviews. We calculated MPNet and text-ada-002 embeddings for each review and trained a simple neural network for 75 epochs. The neural network was designed to predict the corresponding text-ada-002 embedding for a given MPNet embedding. Our model achieved an average cosine similarity of 0.932 on 10,000 unseen reviews in our held-out test dataset. We manually assessed the quality of our predicted embeddings for vector search over text-ada-002-embedded reviews. While not as good as real text-ada-002 embeddings, predicted embeddings were able to retrieve highly relevant reviews. Our final model, Vec2Vec, is lightweight (<80 MB) and fast. Future steps include training a neural network with a more sophisticated architecture and a larger dataset of paired embeddings to achieve greater performance. The ability to convert between and align embedding spaces may be helpful for interoperability, limiting dependence on proprietary models, protecting data privacy, reducing costs, and offline operations.
Andrew Kean Gao
2023-06-22T06:23:31Z
http://arxiv.org/abs/2306.12689v1
# Vec2Vec: A Compact Neural Network Approach for Transforming Text Embeddings with High Fidelity ###### Abstract Since the seminal Word2Vec paper in 2013, vector embeddings have become ubiquitous and powerful tools for many language-related tasks. A leading embedding model is OpenAI's text-ada-002 which can embed up to approximately 6,000 words into a 1,536-dimensional vector. While powerful, text-ada-002 is not open source and is only available via API. Thus, users must have an Internet connection to query their text-ada-002 databases. Additionally, API costs can add up and users are locked in to OpenAI. We trained a simple neural network to convert open-source 768-dimensional MPNet embeddings into text-ada-002 embeddings. We compiled a subset of 50,000 reviews from Stanford's Amazon Fine Foods dataset. We calculated MPNet and text-ada-002 embeddings for each review and trained a simple neural network for 75 epochs. Our model achieved an average cosine similarity of 0.932 on 10,000 unseen reviews in our held-out test dataset. Given the high dimension (1,536) of text-ada-002 embeddings, 0.932 is quite impressive. Finally, we manually assessed the quality of our predicted "synthetic" embeddings for vector search over text-ada-002-embedded reviews. While not as good as real text-ada-002 embeddings, predicted embeddings were able to retrieve highly relevant reviews. Our final model, "Vec2Vec", is lightweight (\(<\)80 MB) and fast. Future steps include training a neural network with a more sophisticated architecture and a larger dataset of paired embeddings to achieve greater performance. The ability to convert between and align embedding spaces may be helpful for interoperability, limiting dependence on proprietary models, protecting data privacy, reducing costs, and offline operations. ## Introduction Embeddings are a powerful technique in natural language processing that allow us to represent texts as vectors in a high-dimensional space [1-2]. These vectors capture the semantic and syntactic relationships between texts, enabling us to perform various tasks such as search, sentiment analysis, language translation, and text classification. Embeddings are typically learned using unsupervised techniques such as word2vec or GloVe, which use large amounts of text data to learn the vector representations of words [3-4]. The Word2Vec model, developed by Tomas Mikolov and his team at Google in 2013, is a neural network that processes text by taking as its input a large corpus of words and producing a vector space. Each word in the corpus gets assigned a corresponding vector in the space. Words that share common contexts in the corpus are placed in close proximity to one another in the space. Embeddings have revolutionized the field of natural language processing and have become an essential tool for building state-of-the-art models in various applications [5]. Embeddings have widespread applications in many natural language processing tasks [6-8]. They are often used in search, where one can query the embedding space to find vectors closest to a given vector and return relevant documents. Text similarity and clustering involve comparing the embeddings of different pieces of text and grouping similar ones together. OpenAI's text-ada-002 is an advanced embedding model that represents large texts as high-dimensional vectors [9]. It is at the top of the leaderboards in various benchmarks and can embed up to approximately 6,000 words into a 1,536-dimensional vector. 
The model is quite powerful but is proprietary, meaning that it's not open-source and is only available via API. Also, there are rate limits. This has implications for cost, dependency, and the requirement of an Internet connection. On the other hand, all-mpnet-base-v2 is an open-source embedding model that is freely available and can be run locally [10]. The creators of all-mpnet-base-v2 fine-tuned Microsoft's MPNet model on 1 billion sentence pairs [11]. In this paper, we train a neural network to learn to convert embeddings generated by all-mpnet-base-v2 into text-ada-002 embeddings. Neural networks are networks of algorithms aimed at recognizing underlying relationships in a set of data, modeled loosely on the human brain. They are designed to recognize patterns and are exceptionally effective at processing complex, high-dimensional data such as images, audio, language, or in our case, high-dimensional vectors. Neural networks, with their ability to learn complex mappings and generalize from seen to unseen data, are ideal for our purpose. In order to train a neural network, a loss function is necessary. In this study, we use cosine similarity. Cosine similarity is a metric used to measure how similar two vectors are, irrespective of their size. It measures the cosine of the angle between two vectors projected in a multi-dimensional space. The smaller the angle, the higher the cosine similarity. In the context of text embeddings, cosine similarity is a good metric as it effectively captures the orientation (direction) of the embeddings, which is more important than their magnitude in high-dimensional spaces. In our research, we calculate the cosine similarity between our predicted embeddings and the real text-ada-002 embeddings as a sort of accuracy metric. Let \(y_{\text{true}}\) and \(y_{\text{pred}}\) be two 1536-dimensional embedding vectors, where 'true' refers to the ground truth text-ada-002 vector in the test dataset and 'pred' refers to the vector predicted by Vec2Vec given an all-mpnet-base-v2 vector. The L2 normalization of these vectors is given by: \[y_{\text{true, norm}}=\frac{y_{\text{true}}}{\|y_{\text{true}}\|_{2}}=\frac{y_{\text{true}}}{\sqrt{\sum_{i}(y_{\text{true}})_{i}^{2}}}\] \[y_{\text{pred, norm}}=\frac{y_{\text{pred}}}{\|y_{\text{pred}}\|_{2}}=\frac{y_{\text{pred}}}{\sqrt{\sum_{i}(y_{\text{pred}})_{i}^{2}}}\] We perform element-wise multiplication on these normalized vectors: \[z=y_{\text{true, norm}}\odot y_{\text{pred, norm}}\] Then we take the mean of the resulting vector. Because the vectors are normalized to length 1, we do not need to divide by the product of their lengths because that would be dividing by 1: \[\mu_{z}=\frac{1}{N}\sum_{i}z_{i}\] Finally, the cosine similarity loss is the negative of this mean: \[\text{Cosine similarity loss}=-\mu_{z}\] Note that \(\mu_{z}\) is the cosine similarity scaled by the constant factor \(1/N\) (with \(N=1536\)); this scaling does not affect which solutions the optimizer favors.

_Figure 1._ Description and formulas of our custom cosine similarity loss function.

The overarching idea of this study is to essentially map one embedding space onto another. One example application of our work is in vector databases. The initial vector database can be created using text-ada-002. At inference/search time, all-mpnet-base-v2 + Vec2Vec can be used to embed the search query instead of text-ada-002.
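This loss can be written in a few lines. The following is a minimal sketch of the computation described in Figure 1 (equivalent to Keras's built-in cosine similarity loss up to the \(1/N\) scaling), not the exact training code.

```python
import tensorflow as tf

def neg_cosine_similarity(y_true, y_pred):
    """Negative mean of the element-wise product of L2-normalized vectors,
    i.e., the cosine similarity scaled by 1/N, negated."""
    y_true = tf.math.l2_normalize(y_true, axis=-1)
    y_pred = tf.math.l2_normalize(y_pred, axis=-1)
    return -tf.reduce_mean(y_true * y_pred, axis=-1)
```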
## Methods We retrieved 568,454 Amazon reviews of fine foods from the Stanford Network Analysis Project [12]. The reviews are written by 256,059 unique Amazon users and span 74,258 food products, with an average of 7.65 reviews per product. The median number of words per review is 56. The reviews were posted on Amazon between October 1999 and October 2012. We selected this dataset because it is large, has texts of a manageable size, and because the domain is relatively constrained (food) while still being diverse (dog food, milk, spices, meat, vegetables, instant noodles, canned soups, etcetera). We downloaded the reviews in CSV format and used Python to preprocess the data. Because the OpenAI text-ada-002 model can only embed up to 8,192 tokens at once, we used the tiktoken package to remove reviews of length greater than 8,000 tokens, if any existed. Next, we randomly sampled 50,000 reviews from the remaining reviews using the Pandas sample method. We then calculated vector embeddings for these reviews using the open-source model all-mpnet-base-v2 (obtained from Hugging Face) and the OpenAI text-ada-002 model (June 2023 version). We referenced code provided by OpenAI on how to use text-ada-002. In order to obtain 50,000 embeddings from OpenAI in a reasonable timeframe, we used the LightspeedEmbeddings package to implement multithreading and send multiple API requests simultaneously [13]. Also, all-mpnet-base-v2 has a limit of approximately 384 'word pieces' per embedding, so we divided any long reviews into chunks of 128 words, embedded the chunks separately, and then computed average embeddings. However, the vast majority of reviews fell under this 128-word threshold, so this was rarely needed. The all-mpnet-base-v2 model provided a 768-dimensional embedding vector and the text-ada-002 model provided a 1536-dimensional embedding vector, which happens to be exactly twice the size of the former. We built a simple fully connected sequential neural network using the TensorFlow and Keras libraries in Python [14-15]. The model had three hidden layers with ReLU activation functions and three dropout layers to combat overfitting. We used a custom loss function, the negative cosine similarity described above. The input features were the all-mpnet-base-v2 vectors and the output features were the corresponding text-ada-002 vectors. We reserved a random 20% of the data (10,000 reviews) for the final test set. The model did not see any of these reviews during training. We trained it for 75 epochs with a batch size of 32, the Adam optimizer, and a validation split of 20%. We used cosine similarity in order to encourage the neural network to minimize the difference in angle between its predicted vectors and the ground truth text-ada-002 vectors. Due to the high dimension of the vectors, Euclidean distance would not be appropriate. Additionally, the orientation, rather than magnitude, of embedding vectors is more important.

Figure 2: Schematic showing the simplified workflow of training the model.

To visually assess whether our model was truly accurate, we set up a simple test. We saved the model and chained it with the all-mpnet-base-v2 model. Our test program would solicit a search query from the user. It would first embed the search query using the all-mpnet-base-v2 model. Then, it would send that vector into our custom model and obtain a predicted text-ada-002 embedding. Next, we performed vector search over the real text-ada-002 embeddings in our 10,000-review test set by ranking cosine similarities with our predicted embedding. We repeated the process using a real text-ada-002 embedding for our search query instead of a predicted embedding.
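A minimal sketch of this test procedure is shown below. It assumes the sentence-transformers package for all-mpnet-base-v2 and a trained Keras model for Vec2Vec; the weights file name and the `ada_db`/`texts` variables (the pre-computed text-ada-002 embedding matrix and the corresponding reviews) are illustrative.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from tensorflow import keras

mpnet = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
vec2vec = keras.models.load_model("vec2vec.h5")  # hypothetical weights file

def top_k(query, ada_db, texts, k=5):
    """Rank text-ada-002-embedded reviews against a predicted query embedding."""
    q = mpnet.encode([query])          # (1, 768) MPNet embedding
    q = vec2vec.predict(q)[0]          # predicted (1536,) text-ada-002 embedding
    q = q / np.linalg.norm(q)
    db = ada_db / np.linalg.norm(ada_db, axis=1, keepdims=True)
    scores = db @ q                    # cosine similarity with every review
    return [texts[i] for i in np.argsort(-scores)[:k]]
```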
While not rigorous, this approach enabled us to quickly verify whether our model was at all useful and able to replace text-ada-002 to some extent.

Figure 3: Architecture of the Vec2Vec neural network.

## Results After training for 75 epochs, the model achieved a validation loss of -0.00060648. The validation loss decreased smoothly with the training loss. We computed the cosine similarity between each of our 10,000 predicted embeddings and the corresponding true text-ada-002 embeddings. The average was 0.932. The maximum possible cosine similarity between any two vectors is 1.

Figure 4: Schematic showing how we tested all-mpnet-base-v2 + Vec2Vec against text-ada-002 for retrieving relevant reviews from our 10,000-review dataset.

The standard deviation in the cosine similarities was 0.0208. The worst cosine similarity was -0.044, an outlier, and the best cosine similarity was 0.977. The vast majority of cosine similarities fell between 0.85 and 0.975. To manually compare our predicted embeddings against real text-ada-002 embeddings for vector search over the text-ada-002-embedded reviews, we wrote a program to return the top five best-matching reviews for a search query, in order. The search query would be embedded with text-ada-002 as well as all-mpnet-base-v2 + Vec2Vec. The search query was first embedded using all-mpnet-base-v2. Then, that 768-dimensional embedding was submitted to the Vec2Vec model to obtain a predicted 1536-dimensional text-ada-002 embedding. We found that the predicted embeddings performed well, but sometimes worse than real text-ada-002 embeddings. On simple queries, our Vec2Vec embedding returned many of the same reviews as the text-ada-002 embedding. Vec2Vec and all-mpnet-base-v2 worked very well (on par with text-ada-002) on short, semantically simple queries, such as "ramen noodles", "spicy", and "dog food", but performed worse on more complex queries such as "the dog food was expired" and "the wine had notes of lavender". However, text-ada-002 also struggled on these complex queries. For example, it mainly returned reviews about the plant lavender rather than reviews about the taste of wine. We have compiled the different search results in tables below.

Figure 5: Graph showing the decrease in training loss and validation loss over 75 epochs.

Figure 6: Histogram of cosine similarities between each of the 10,000 predicted embedding vectors and 10,000 ground truth text-ada-002 embedding vectors in the held-out test dataset. The average cosine similarity was 0.932.

\begin{table}
\begin{tabular}{|c|l|}
\hline
Search Rank & Vec2Vec + all-mpnet-base-v2 embedding results \\
\hline
1 & Title: Great \& yummy noodles, you can't go wrong; Content: My husband and I have eating these since college. 10 years later, we still love it. We usually get them from the asian supermarket, but the price is actually better on Amazon.<br/><br/>If you love spicy stuff, you will love this. I don't eat things that are too spicy, so when I make it for myself, I only put in half the spice package. Yummy! \\
\hline
2 & Title: delicious noodle; Content: I like this noodle so much. Easy to make it, unexpensive, has a lot of flavours. I bet everybody will like this noodle also if they tried. \\
\hline
3 & Title: It tastes really good.; Content: I have tried many types of noodles and vermicellies and this one by far the best one. It is very easy to prepare (only took me about 20 minutes to get my meal ready). The season is just enough, not too light, not to salty. And did I mention that it tastes awesome? The only drawback is that it's kind of dry. But if you have it with some sauce, it would not be a problem. \\
\hline
4 & Title: quick and healthy; Content: My granddaughter loves these noodles (they're so yummy). They're ready in 4 minutes, and they're real noodles, unlike the ones that are just a combination of chemicals. \\
\hline
5 & Title: pretty good; Content: These are quite tasty-soy sauce flavored but not too salty, which is misleading since it does contain plenty of salt (you probably shouldn't finish all of the soup, for health reasons). Another problem with these instant ramen is the lack of an expiration date on the package--I ordered Myojo's yakisoba along w/ this and had to throw most of it away. \\
\hline
\end{tabular}
\end{table}
Table 1: The top five search results returned from our text-ada-002 vector database of 10,000 reviews when using all-mpnet-base-v2 + Vec2Vec versus text-ada-002 for the query "spicy". Both models performed well and returned relevant results.
The corresponding text-ada-002 results for this query included the following reviews:

* ...a healthy amount of black pepper, red chili flake, and a few drops of your favorite hot sauce and you're in business.
* Title: A Good Choice; Content: This is a classic ramen choice. Our kids, 3-6 love this. When we're RVing, we have a case of this on hand. A bowl, some water, and 2:30 in the microwave, and we've got our instant meal. We break up the noodles while still sealed in the package. This helps the kids not make a mess when trying to eat them. They don't seem to tire of these; and we're not overly offended by the nature of the just-add-water-product. We each had our ramen-eating-college-days. These also make a great item to have on hand in a desk drawer for the days you forget your lunch, etc. at work. Just make sure you have a pack of paper (not styro-save the environ) bowls and some fork/spoons in that drawer, too.
* Title: Yummy and NO MSG!!; Content: This is one of the best instant ramen-bows.<br/>Great late night snack (N.B. cures hangovers), satisfies your hot & spicy cravings, is super fun to eat on camping and ski trips, and it has no MSG. (SO AWESOME.)<br/>That said, beware the sodium content. Consuming the entire soup is an automatic 90% DV sodium intake. And honestly, you really don't need to use the whole packet to get the spice-kick and flavoring. Start with just half (or less, even!) and find your level of satisfaction.
* Title: pretty good; Content: These are quite tasty-soy sauce flavored but not too salty, which is misleading since it does contain plenty of salt (you probably shouldn't finish all of the soup, for health reasons). Another problem with these instant ramen is the lack of an expiration date on the package--I ordered Myojo's yakisoba along w/ this and had to throw most of it away.

\begin{table}
\begin{tabular}{|c|l|l|}
\hline
Search Rank & Vec2Vec + all-mpnet-base-v2 embedding results & text-ada-002 embedding results \\
\hline
1 & Title: Wellness dog food; Content: My dog, a picky eater, loves this food. And it's made with better ingredients than most other stuff on the market. & Title: Good food - yum! Woof.; Content: My dog likes the Canidae, and he's very fussy. The fish product is a little, well, fishy, but the other varieties are solid. \\
\hline
2 & Title: Dog won't touch.; Content: Great ingredients! Really healthy. Good buy! Only problem. Even after I & Title: Wellness dog food; Content: My dog, a picky eater, loves this food. And it's made with better ingredients than most other stuff on the market. \\
\hline
3 & Title: Good; Content: This is good dog food, but I stopped buying it, because one of my dogs (I have two) did not like it. She is not a big fan of can food in general. & Title: Wellness Dry Dog Food; Content: Good quality and our dogs will eat it. Many dry dog foods they just turn their noses up to. \\
\hline
4 & Title: Good; Content: This is good dog food, but I stopped buying it, because one of my dogs (I have two) did not like it. She is not a big fan of can food in general. & Title: Dog food; Content: We have two 85-pound retriever mix dogs who really seem to enjoy the food. They give it five stars! The subscribe and save is a great deal for big bags of dog food—free delivery! \\
\hline
5 & Title: Wellness Dry Dog Food; Content: Good quality and our dogs will eat it. Many dry dog foods they just turn their noses up to. & Title: love this dog food; Content: We have been quite pleased with this dog food. Our dog has stopped itching and scratching, his coat is lustrous, and he eats less of it that other dog foods and seems to be maintaining his weight and being vigorous. \\
\hline
\end{tabular}
\end{table}
The top five search results returned from our text-ada-002 vector database of 10,000 reviews when using all-mpnet-base-v2 + Vec2Vec versus text-ada-002 for the query "dog food".

Results returned for the more complex queries included the following reviews:

* Title: Gave my dog the runs; Content: We have a 5 month old pit bull mix and she had very loose stool every time we tried to feed her this stuff. I offered the food to other people who have dogs and no one wanted it. I threw it away. What a waste.
* Title: Was great, but formula changed!; Content: My dog was on Canidae for almost 2 years and she loved it. She was healthy with a full, shiny coat and plenty of energy. Then suddenly around December she started vomiting and having loose yellow stools, and became very lethargic. We took her to 2 vets and after several hundred dollars in tests they couldn't find anything wrong with her.<br/><br/>I did some research and found that there were numerous people having the exact same problems I was! It turns out, they recently changed their formula, changed manufacturers, and decreased the amount of food in their package. Do some research, it's not the same high-quality food it used to be!
* Title: GREAT tea; Content: This tea, with its orange undertones, is marvelous. The aroma is great. Even non-tea drinkers will love this one.
* Title: PRESENT soft lavender sent; Content: I Have burned about 60 sticks so far:<br /><br />Hem Lavender can be described as smooth, deep, and peaceful. It has a lavender scent but does not have the sharp edge that lavender usually carries. I do like the edge, therefore I give this item 4 not 5 stars. If you had not seen the package, you might not guess it was lavender, but it is a very nice scent without being overly floral.<br /><br />This can be described as an "all audiences" type of lavender. The males and females of my home enjoy this scent.<br />This scent is good enough to be part of my standard stock. I enjoy it in the morning since it is peaceful without being sleepy.<br /><br />If you are looking to add calm to your chaotic home, try this lavender.
* Title: Yum!; Content: this relaxing mate was delicious. It had rich bold flavors and that right touch of chocolate that warms a cold night.
* Title: PRESENT AS I'd Hoped.; Content: I ordered this lavender about a month before it was scheduled to be delivered. Order was filled on time, with good communication. I was excited to open the box because the dried bunches are part of the decorations for our wedding and dinner/dance party afterwards. SO fragment, well-preserved, evenly divided bunches! The bunches are of good size, with no scrimping on quality or amount. OF COURSE the whole box smells wonderful, too. Packing for shipping helped them arrive with 99.5% :) of buds intact. Highly recommend.

## Conclusion In this paper, we introduced Vec2Vec, a lightweight neural network model capable of translating open-source MPNet embeddings into text-ada-002 embeddings. The model was trained and tested using a subset of 50,000 Amazon food reviews, providing an ample and diverse dataset of natural language. Our primary performance metric was cosine similarity between predicted and actual text-ada-002 embeddings.
The Vec2Vec model achieved an impressive average cosine similarity of 0.932, indicating that the translation of embeddings was highly accurate given the high dimensionality of the target embedding space.

Table 5: The top five search results returned from our text-ada-002 vector database of 10,000 reviews when using all-mpnet-base-v2 + Vec2Vec versus text-ada-002 for the query "the wine had notes of lavender". Vec2Vec returned several results related to oil instead of wine. Meanwhile, text-ada-002 seemed to focus on "lavender" but not "wine".

However, we must emphasize that while the performance was commendable, the synthetic embeddings generated by our model could not fully match the performance of the actual text-ada-002 embeddings. This was observed in our manual quality assessments, where Vec2Vec fell short of text-ada-002's performance on complex vector search queries. Nonetheless, for simple queries, Vec2Vec's performance was close to the original model, demonstrating potential for its application in practical scenarios. Another limitation of our work is that Vec2Vec was only trained on food reviews. The model would most likely not perform as well on out-of-distribution data, such as news articles, tweets, or scientific writing. Also, different loss functions besides cosine similarity could be explored. Given that the Vec2Vec model is lightweight, with a size less than 80 MB, and its ability to work offline, it could be a valuable asset in various contexts where either the API costs or the need for a constant internet connection can be prohibitive. As such, Vec2Vec could help democratize the use of vector databases by providing an offline and cost-effective alternative. Also, for sensitive data such as health information that is HIPAA-protected, using an offline model circumvents the need to send data to an external provider. Our research suggests that a larger, higher-performing embedding model could potentially be used to "teach" or "tune" a smaller model by adding a neural network on top. Moving forward, there are several avenues for future work. Training a more sophisticated neural network, performing hyperparameter tuning, and leveraging larger datasets of paired embeddings could enhance performance and generalization capabilities. These datasets should include text from diverse domains, instead of just food reviews as was the case in this study. Additionally, incorporating a wider range of embedding models besides MPNet, such as BERT, Instructor, or RoBERTa, could extend the applicability of Vec2Vec. Finally, we believe that this work helps to pave the way for the development of tools that can robustly convert between different embedding spaces.
We hope that Vec2Vec serves as a stepping stone for improved solutions aimed at enhancing interoperability, minimizing reliance on proprietary models, safeguarding data privacy, reducing costs, and allowing for offline operations. The code, model weights, and dataset for testing are available on Hugging Face and Github. Hugging Face: [https://huggingface.co/gaodrew/vec2vec](https://huggingface.co/gaodrew/vec2vec) Github: [https://github.com/andrewgcodes/vec2vec](https://github.com/andrewgcodes/vec2vec)
2308.12325
Predicting Drug Solubility Using Different Machine Learning Methods -- Linear Regression Model with Extracted Chemical Features vs Graph Convolutional Neural Network
Predicting the solubility of given molecules remains crucial in the pharmaceutical industry. In this study, we revisited this extensively studied topic, leveraging the capabilities of contemporary computing resources. We employed two machine learning models: a linear regression model and a graph convolutional neural network (GCNN) model, using various experimental datasets. Both methods yielded reasonable predictions, with the GCNN model exhibiting the highest level of performance. However, the present GCNN model has limited interpretability, while the linear regression model allows scientists to perform a more in-depth analysis of the underlying factors through feature importance analysis, although more human input and evaluation of the overall dataset are required. From the perspective of chemistry, using the linear regression model, we elucidated the impact of individual atom species and functional groups on overall solubility, highlighting the significance of comprehending how chemical structure influences chemical properties in the drug development process. We found that introducing oxygen atoms can increase the solubility of organic molecules, while heteroatoms other than oxygen and nitrogen tend to decrease solubility.
John Ho, Zhao-Heng Yin, Colin Zhang, Nicole Guo, Yang Ha
2023-08-23T15:35:20Z
http://arxiv.org/abs/2308.12325v2
# Predicting Drug Solubility Using Different Machine Learning Methods - Linear Regression Model with Extracted Chemical Features vs Graph Convolutional Neural Network ###### Abstract Predicting the solubility of given molecules is an important task in the pharmaceutical industry, and consequently this is a well-studied topic. In this research, we revisited this problem with the advantage of modern computing resources. We applied two machine learning models, a linear regression model and a graph convolutional neural network model, on multiple experimental datasets. Both methods made reasonable predictions, while the GCNN model had the best performance. However, the current GCNN model is a black box, while feature importance analysis from the linear regression model offers more insights into the underlying chemical influences. Using the linear regression model, we show how each functional group affects the overall solubility. Ultimately, knowing how chemical structure influences chemical properties is crucial when designing new drugs. Future work should aim to combine the high performance of GCNNs with the interpretability of linear regression, unlocking new advances in next generation high throughput screening. ## 1 Introduction In the pharmaceutical industry, discovery of new drugs is both an expensive and time consuming endeavor. In order to accelerate the process and cut costs, scientists use early stage high throughput screening (HTS) to eliminate molecules without desired properties [1]. One such property is solubility, which governs the uptake, movement and metabolism of drugs in human bodies. Using theory or known experimental data to predict the solubility of molecules has been an active research field for decades. Back in 1968, Hansch et al. found that the octanol-water partition coefficient (P) can be used to predict solubility [2]. Later, the Yalkowsky group came up with a general solubility equation (GSE) which included P and melting point (MP) [3]. Jorgensen and Duffy used Monte Carlo (MC) simulations to predict aqueous solubility with structural information such as molecular weight (MW), volume, solvent accessible surface area (SASA) and hydrogen bond (HB) counts, as well as some other physical descriptors such as water Coulomb and van der Waals interactions (ESXL), and hydrophobic and hydrophilic components. They achieved reasonable prediction performance on a dataset of 150 organic molecules [4]. In recent years, with fast-growing computing power and new algorithms, researchers can afford to work with much larger datasets and more sophisticated machine learning (ML) models. There now exist multiple databases with thousands of chemicals and their experimental solubility, such as the AQUASOL and PHYSPROP databases used by Huuskonen et al. [5], the ESOL by Delaney [6], and some solubility handbooks [7]. AqSolDB is a newly developed database which merges multiple existing datasets [8]. From a methods standpoint, instead of traditional linear models and classic neural network (NN) models, the Barzilay group developed graph convolutional neural networks (GCNN) that convert the molecular structures into graphs which are input into a directed message passing neural network, and achieved state-of-the-art performance [9]. There has also been research going beyond drug solubility in aqueous solutions, like extending solute types to small proteins [10] or various organic solvents [11].
While these ML algorithms achieve outstanding performance, human scientists find it difficult to gain mechanistic insights into the chemistry behind these solubility models. Such models are referred to as "black boxes": for example, it is almost impossible to understand a 20-layer deep learning NN or a GCNN with all molecules represented by huge matrices. From the perspective of chemists, we may have to step back from the focus on performance and seek more chemical insight. In this study, our goal is not to push the limits of predictive accuracy, but to take advantage of both classical models and modern sophisticated models to bring about a better understanding of the relationship between molecular structures and their chemical properties. With this knowledge, we may develop ML models in the future which are both highly accurate and human-interpretable. ## 2 Methods Two ML models were applied in this study: a linear regression model and a GCNN model (Figure 1). For the linear regression model, we used the molecular weight, total atom counts, and functional group counts as features to establish a multivariable regression with the experimental solubility values (logS). The features were obtained directly from the molecular structure using the RDKit module [12]. We also used L1 regularization with an alpha value of 0.01. For the GCNN, we used the Chemprop model [13]. Chemprop converts the atoms and bonds in the molecules into one-hot encodings, which are then concatenated into one tensor representing each individual atom or bond. Next, Chemprop creates three tensors: one that maps each atom to its corresponding bonds (a2b), one that maps each bond to its corresponding atom (b2a), and one that maps each bond to its reverse bond (b2revb). It then concatenates each atom tensor into one large vector and each bond tensor into another large vector. With these five tensors, Chemprop finds the neighboring bonds of each bond and sums their vector representations. Finally, the model concatenates that sum to the vector representations of the bonds and atoms. The final vector representations of each bond are summed together to generate one feature vector for the entire molecule, which enters a standard feed-forward neural network with a single output (logS). We tested both models on three different datasets: the Delaney dataset, the Huuskonen dataset, and AqSolDB. We used 5-fold cross validation within each dataset, and used the root mean square error (RMSE) of the parity plots as the metric to evaluate the overall accuracy of the predictions.

Figure 1: **Setup of the linear regression model (top) and GCNN model (bottom), using the tyrosine molecule as an example.** The linear regression model uses human-engineered features such as molecular weight (MW) and number of functional groups to predict experimental solubility (logS), whereas the GCNN uses features learned via message passing along a graph.

## 3 Results ### Predicting Solubility The parity plots for each model on different datasets are plotted in Figure 2, and the RMSEs are listed in Table 1. For all three datasets, both the linear regression model and the GCNN model could make reasonable predictions, with the majority of predicted values less than 1 log unit away from the true values. This is comparable with other similar studies [14, 15]. The linear regression and GCNN models both showed their best performance (lowest RMSEs) on the Huuskonen dataset and their worst on the AqSolDB dataset.
Interestingly, the linear regression model shows relatively tight scatter on the Delaney and Huuskonen datasets but very loose scatter on the AqSolDB dataset. This may be due to the large size of the AqSolDB dataset, as well as its relatively large solubility range. The GCNN model overall did a better job than the linear regression model, especially when the dataset size is large. This is not surprising because the GCNN is a more complex model, and convolutional architectures have proven powerful in many other fields, including computer vision. However, this does not suggest linear regression is inadequate. Considering the error due to experimental conditions such as pH and temperature, both models are sufficiently capable for drug design.

\begin{table} \begin{tabular}{l l l l} \hline \hline Dataset & Size & RMSE, Linear Regression Model & RMSE, GCNN Model \\ \hline Delaney & 1127 & 1.13 & 0.59 \\ Huuskonen & 1282 & 1.08 & 0.49 \\ AqSolDB & 9982 & 1.83 & 0.76 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of the Linear Regression Model and GCNN Model on Three Solubility Datasets

Figure 2: **Parity plots for the Delaney, Huuskonen, and AqSolDB datasets using the linear regression model (top) and GCNN model (bottom).** Predictions are shown from the validation folds of 5-fold cross validation. Lines of best fit are shown in red.

### Understanding the relationship between molecular structure and solubility Unlike the GCNN method, which is a black box, the linear regression model can show us a straightforward relationship between the features we input and the solubility property we are interested in. We can run a feature importance analysis to see how each feature impacts the final results. The importance of each type of atom and functional group is shown in Figure 3 and Figure 4.

Figure 3: **The linear regression weights of each type of atom feature for the Delaney dataset.** Positive weights indicate features contributing to a relative increase in solubility, whereas negative weights indicate features which contribute to a relative decrease in solubility.

Figure 4: **The linear regression weights of each type of functional group feature for the Delaney dataset.** Positive weights indicate features contributing to a relative increase in solubility, whereas negative weights indicate features which contribute to a relative decrease in solubility.

From chemistry background knowledge, we know that it is the intermolecular forces between solute and solvent molecules which determine solubility. Overall, more polar molecules with more hydrogen bonds (either donors or acceptors) tend to have higher solubility in aqueous solutions. The feature analysis results here give a more quantitative view of these effects. For example, O atoms have a very positive effect on solubility, because they both increase the overall polarity of the molecules and can form hydrogen bonds with solvent water molecules. Halogens, on the other hand, have known negative effects on solubility [16]. Although they can also increase polarity, they are not hydrogen bond donors and generally form weaker hydrogen bonds with water. This suggests hydrogen bonds play a major role in aqueous solubility and are even more significant than polarity. Furthermore, halogenated molecules tend to be soluble in hydrophobic solvents [17]. This characteristic holds significant implications for drug delivery across cell membranes, and this extended solubility topic is worth further investigation within the pharmaceutical domain.
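The weights behind Figures 3 and 4 can be read directly off the fitted model. The sketch below shows the idea with a deliberately tiny feature set and toy data values; it assumes RDKit and scikit-learn, while the actual study used the full atom and functional group counts with L1 regularization (alpha = 0.01).

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Fragments
from sklearn.linear_model import Lasso

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    count = lambda sym: sum(a.GetSymbol() == sym for a in mol.GetAtoms())
    return [Descriptors.MolWt(mol),
            count("C"), count("O"), count("N"),
            Fragments.fr_Al_OH(mol),     # aliphatic hydroxyl group count
            Fragments.fr_halogen(mol)]   # halogen count

names = ["MW", "C", "O", "N", "OH", "halogen"]
smiles_list = ["CCO", "CCCCCCCC", "OCC(O)CO"]  # toy stand-ins for the dataset
logS = [0.0, -5.0, 1.0]                        # illustrative solubility values
model = Lasso(alpha=0.01).fit([featurize(s) for s in smiles_list], logS)
for name, weight in zip(names, model.coef_):
    print(f"{name}: {weight:+.3f}")  # sign/magnitude = feature's effect on logS
```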
Interestingly, adding functional group counts can improve the RMSE significantly compared with only considering the MW and atom counts (Table 2). This suggests that the same type of atom, when incorporated into distinct functional groups, can exhibit different effects on solubility (this can also be seen in Figure 4). Some atoms such as N, S and P can form different functional groups which can have either positive or negative effects on solubility. Furthermore, a protonated amine tends to have much higher solubility than a non-protonated amine, and thus environmental conditions such as pH also need to be taken into consideration. Additionally, the effects of temperature should be taken into account, as temperature not only impacts solubility but also influences reactivity and stability. Finally, it is important to note that the aforementioned ML models can only predict a single solubility value for a given molecular structure. However, in reality, we must deal with high-dimensional data comprising various solubility values under different conditions for each compound. Addressing this challenge would require significant effort in terms of data collection, data cleaning, and algorithm development. Eventually, we believe that a complex NN-based model, coupled with interpretable feature analysis, will emerge as the tool of choice, rather than a simple linear regression approach. ### From solubility prediction to drug design As discussed above, the predictive performance of simple solubility models is adequate for high throughput screening, even using the GSE that was developed many years ago. The other side of the coin for such solubility studies lies in their potential to aid future drug design. This entails a reverse approach: If I want to design a drug molecule with a certain solubility value (or other physical properties), what functional groups should I include? Using the feature importance analysis results in this research, scientists can have a general idea of what functional groups to include. For example, in order to increase aqueous solubility, one can introduce an OH group on a side chain, while to increase the ability to cross cell membrane, a halogen atom might be the best choice. However, in reality, when multiple factors need to be simultaneously considered, the capabilities of human decision-making can be quickly exceeded. This is where the GCNN model can become invaluable. Through a well-trained neural network that establishes connections between defined molecular substructures and their corresponding properties, coupling the GCNN with a molecular generative model [18] could allow viable drug candidates with desired properties to be generated at scale. We believe that this approach will power the next generation of HTS in the pharmaceutical industry. ## 4 Conclusion This study explores the prediction of aqueous solubility using two models: a GCNN with learned features and a linear regression model with human-engineered features. Both models show reasonable prediction accuracy on a variety of datasets, with the GCNN performing better overall. However, the linear regression model can provide valuable insights into the relationship between certain features and solubility, highlighting the importance of some atoms, functional groups, and hydrogen bonds. We believe that a GCNN model, coupled with feature analysis, may be a valuable direction for future research. 
\begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Size & RMSE, Atom Features Only & RMSE, Atom and Functional Group Features \\ \hline Delaney & 1127 & 1.13 & 0.96 \\ Huuskonen & 1282 & 1.08 & 0.89 \\ AqSolDB & 9982 & 1.83 & 1.73 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of the Linear Regression Model with Atom Features Only and with Atom and Functional Group Features on Three Solubility Datasets
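For reference, atom and functional group counts of the kind compared in Table 2 can be derived directly from SMILES strings. A minimal sketch with RDKit follows; the particular fragment functions chosen here are illustrative and may differ from the paper's exact descriptor set:

```python
from collections import Counter
from rdkit import Chem
from rdkit.Chem import Descriptors, Fragments

def featurize(smiles: str) -> dict:
    """Molecular weight, heavy-atom counts, and a few functional-group
    counts for one molecule (an illustrative descriptor set)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    feats = {"MW": Descriptors.MolWt(mol)}
    # Heavy-atom counts by element (implicit hydrogens are not enumerated).
    feats.update(Counter(atom.GetSymbol() for atom in mol.GetAtoms()))
    # Functional-group counts from RDKit's fragment catalogue.
    feats["aliphatic_OH"] = Fragments.fr_Al_OH(mol)
    feats["carboxylic_acid"] = Fragments.fr_COO(mol)
    feats["ester"] = Fragments.fr_ester(mol)
    feats["amide"] = Fragments.fr_amide(mol)
    feats["halogen"] = Fragments.fr_halogen(mol)
    return feats

print(featurize("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```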
2308.07728
Domain-Aware Fine-Tuning: Enhancing Neural Network Adaptability
Fine-tuning pre-trained neural network models has become a widely adopted approach across various domains. However, it can lead to the distortion of pre-trained feature extractors that already possess strong generalization capabilities. Mitigating feature distortion during adaptation to new target domains is crucial. Recent studies have shown promising results in handling feature distortion by aligning the head layer on in-distribution datasets before performing fine-tuning. Nonetheless, a significant limitation arises from the treatment of batch normalization layers during fine-tuning, leading to suboptimal performance. In this paper, we propose Domain-Aware Fine-Tuning (DAFT), a novel approach that incorporates batch normalization conversion and the integration of linear probing and fine-tuning. Our batch normalization conversion method effectively mitigates feature distortion by reducing modifications to the neural network during fine-tuning. Additionally, we introduce the integration of linear probing and fine-tuning to optimize the head layer with gradual adaptation of the feature extractor. By leveraging batch normalization layers and integrating linear probing and fine-tuning, our DAFT significantly mitigates feature distortion and achieves improved model performance on both in-distribution and out-of-distribution datasets. Extensive experiments show that our method outperforms other baseline methods, demonstrating its effectiveness in not only improving performance but also mitigating feature distortion.
Seokhyeon Ha, Sunbeom Jung, Jungwoo Lee
2023-08-15T12:08:43Z
http://arxiv.org/abs/2308.07728v5
# Domain-Aware Fine-Tuning: Enhancing Neural Network Adaptability ###### Abstract Fine-tuning pre-trained neural network models has become a widely adopted approach across various domains. However, it can lead to the distortion of pre-trained feature extractors that already possess strong generalization capabilities. Mitigating feature distortion during adaptation to new target domains is crucial. Recent studies have shown promising results in handling feature distortion by aligning the head layer on in-distribution datasets before performing fine-tuning. Nonetheless, a significant limitation arises from the treatment of batch normalization layers during fine-tuning, leading to suboptimal performance. In this paper, we propose Domain-Aware Fine-Tuning (DAFT), a novel approach that incorporates batch normalization conversion and the integration of linear probing and fine-tuning. Our batch normalization conversion method effectively mitigates feature distortion by reducing modifications to the neural network during fine-tuning. Additionally, we introduce the integration of linear probing and fine-tuning to optimize the head layer with gradual adaptation of the feature extractor. By leveraging batch normalization layers and integrating linear probing and fine-tuning, our DAFT significantly mitigates feature distortion and achieves improved model performance on both in-distribution and out-of-distribution datasets. Extensive experiments show that our method outperforms other baseline methods, demonstrating its effectiveness in not only improving performance but also mitigating feature distortion. ## Introduction Transferable neural network models have seen significant advancements in various domains, including computer vision [14, 15, 16], natural language processing [17, 18], and speech recognition [19, 20]. These models, trained on large-scale datasets with substantial computational resources, demonstrate exceptional performance and generalization capabilities, making them widely used in transfer learning. By employing these pre-trained models, transfer learning can substantially reduce the time and resources required for training and data collection, while also enhancing performance compared to training from scratch. When transferring knowledge from a pre-trained model, the feature extractor is first initialized with the pre-trained model. Subsequently, there are two primary optimization methods commonly employed: linear probing (LP) and fine-tuning (FT). In LP, only the head layer is optimized, while the feature extractor is fixed during training. On the other hand, FT involves optimizing both the feature extractor and the linear head layer simultaneously. While FT generally outperforms LP on in-distribution (ID) datasets, recent findings show that FT performs worse on out-of-distribution (OOD) datasets compared to LP. This is due to distortion of the pre-trained features, which possess good generalization capabilities. To mitigate feature distortion, LP-FT was proposed as a two-stage training approach [17]. In the first stage, LP aligns the head layer with the ID subspace, and in the second stage, FT is performed with a small learning rate to mitigate feature distortion. This allows LP-FT to achieve improved performance on both ID and OOD datasets. However, LP-FT still has several limitations. One of the key issues is the treatment of batch normalization layers during the FT stage in the LP-FT framework.
During the FT stage, all batch normalization layers are switched to test mode, where fixed statistics are employed instead of batch statistics. While this prevents the alteration of batch normalization statistics during training, potentially mitigating feature distortion, it also forfeits beneficial effects such as improved gradient flow and reduced internal covariate shift [13]. Consequently, the overall optimization process is degraded, leading to suboptimal model performance. Furthermore, the initial LP stage in LP-FT optimizes the head layer with a feature extractor that has not been adapted to the target domain, which might not be ideal for domain adaptation. Moreover, employing a small learning rate during the FT stage restricts the optimization of the head layer. This becomes particularly severe when there is a significant dissimilarity in data distribution between the source and target domains. These limitations call for a more effective approach that can adequately address feature distortion while still performing batch normalization effectively. To address these challenges, we present a novel method named Domain-Aware Fine-Tuning (DAFT). Our method is built upon two core techniques: batch normalization conversion and the integration of LP and FT into a single stage. The batch normalization conversion method modifies batch normalization layers to better suit the target domain prior to fine-tuning. It makes batch normalization more effective, reducing feature distortion by adjusting the statistics and parameters to the target domain. In addition, our DAFT integrates LP and FT into a single stage, where the linear head is optimized with gradually adapting features from the target domain. This strategy mitigates the issue of the head layer being optimized solely with a feature extractor that has not yet adapted to the target domain, resulting in improved optimization. Through these techniques, our DAFT effectively reduces feature distortion and achieves enhanced performance compared to other baseline methods. The effectiveness of our method is demonstrated through similarity measures computed between features before and after adaptation. As shown in Figure 1, our DAFT consistently exhibits higher feature similarity compared to existing methods such as FT and LP-FT. Furthermore, we conduct extensive experiments to compare our proposed method against other baseline methods. These results demonstrate the superior performance of the proposed DAFT, with higher accuracy and less feature distortion. In summary, our main contributions are as follows. * We propose a novel batch normalization conversion technique, which improves batch normalization during fine-tuning while reducing modifications to the neural network. * The integration of LP and FT is proposed, allowing the head layer to be optimized with the gradual adaptation of the feature extractor. * Through extensive experiments, we demonstrate the superior performance of our proposed method compared to other baseline approaches. ## Related Works **Head Initialization for Fine-Tuning.** LP-FT, a two-stage transfer learning method, initializes the head layer with parameters obtained from LP prior to executing FT [22]. This highlights that random initialization of the head layer has the potential to cause feature distortion in FT, leading to degraded generalization performance. Nonetheless, the performance of LP-FT remains limited even with aligned head initialization, calling for further research on better head initialization methods.
One promising technique is to stop the LP stage early before it reaches convergence, resulting in a non-converged head that improves performance during the subsequent FT stage [13]. Additionally, applying hardness-promoting augmentation during the LP stage can help mitigate feature distortion and also simplicity bias, leading to enhanced generalization performance [17]. Although these head initialization techniques have shown effectiveness in boosting the performance of the FT stage, they require a separate training stage to initialize the head layer before the FT stage. In contrast, our proposed DAFT integrates head adaptation seamlessly without the need for a distinct initialization stage. **Other Modifications for Fine-Tuning.** There are several other techniques to improve the performance of FT. Side-tuning involves fixing the feature extractor to preserve pretrained features and combining it with a small side network, then training only the side network and the head layer [14]. Other methods combine FT with the regularization of pre-trained parameters [23, 15, 16, 17], or the use of different learning rates for each layer [18, 19]. Figure 1: Distribution of similarity measures for three different fine-tuning methods. We fine-tune the pre-trained ResNet-50 model from MoCo-v2 [13] on the fMoW [17] dataset. First, we extract features from the test data before applying each fine-tuning method. Then, after completing the fine-tuning process, we compute the similarity measures between the pre-fine-tuning and post-fine-tuning features for each method. The similarity measures, including cosine similarity and L2 distance, are computed on the fMoW test dataset. Notably, our DAFT exhibits the least distortion in the pre-trained features across both similarity measures. Our method falls under the category of employing different learning rates, as we use a larger learning rate for the head layer. However, unlike general methods that train the head layer with a learning rate approximately 10 times larger than that of the feature extractor, our method uses separate, independently tuned learning rates for the head layer and the feature extractor. This allows the head layer to align better with the ID dataset during the early stage of training and to converge with the adapting feature extractor. **Batch Normalization in Transfer Learning.** Batch normalization is a well-known technique for reducing covariate shift and stabilizing training (Ioffe and Szegedy, 2015). However, in transfer learning scenarios, where the data distribution between the source domain and target domain may significantly differ, the movement of statistics within batch normalization layers can result in severe feature distortion. As a solution, many methods choose to freeze the batch normalization statistics during fine-tuning to alleviate feature distortion (Kumar et al., 2022; Ren et al., 2023; Trivedi et al., 2023). On the other hand, some methods leverage batch normalization with domain-specific statistics to encourage the learning of more generalized features (Wang et al., 2019; Chang et al., 2019) or even use statistics from test batches to enhance robustness against domain shift (Mirza et al., 2022; Lim et al., 2023). In contrast to existing approaches, our method aims to mitigate distortion of pre-trained features while still benefiting from batch normalization during training.
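For reference, the freezing strategy discussed above is typically implemented by switching only the normalization layers to evaluation mode; a minimal PyTorch sketch (the backbone choice and weight tag are ours, for illustration):

```python
import torch.nn as nn
from torchvision.models import resnet50

BN_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def freeze_bn_statistics(model: nn.Module) -> None:
    """Put only BN layers in eval mode so their running statistics are not
    updated during fine-tuning; the affine parameters gamma/beta still
    receive gradients unless frozen separately."""
    for m in model.modules():
        if isinstance(m, BN_TYPES):
            m.eval()

model = resnet50(weights="IMAGENET1K_V1")
model.train()                # everything else stays in training mode
freeze_bn_statistics(model)  # re-apply after every call to model.train()
```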
## Method In this section, we introduce our approach, Domain-Aware Fine-Tuning (DAFT), which incorporates batch normalization conversion and the integration of LP and FT. ### Converting Batch Normalization **Batch Normalization (BN).** The Batch Normalization (BN) technique (Ioffe and Szegedy, 2015) was introduced based on the observation that network training tends to converge faster when its inputs are whitened (LeCun et al., 2002; Wiesler and Ney, 2011). BN operates in two modes: training mode and test mode. In the training mode, it first computes batch statistics, \(\mu\) and \(\sigma^{2}\), for the input features \(z\). Then, scaling and shifting parameters, \(\gamma\) and \(\beta\), are applied to scale and shift the normalized values. The normalization process is represented as follows: \[\hat{y}=\gamma\cdot\frac{z-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta, \tag{1}\] where \(\epsilon\) is a small constant for numerical stability. During training, BN computes the mean \(\mu\) and variance \(\sigma^{2}\) over a mini-batch of training data as \[\mu=\frac{1}{m}\sum_{i=1}^{m}z_{i},\quad\sigma^{2}=\frac{1}{m}\sum_{i=1}^{m}{(z_{i}-\mu)^{2}}, \tag{2}\] where \(m\) is the batch size. In the test mode, BN uses test statistics estimates, \(\mathrm{M}\) and \(\Sigma\), instead of \(\mu\) and \(\sigma^{2}\). Test statistics are obtained as moving averages of the training batch statistics. These moving averages are continuously updated during training to ensure consistent normalization during the test mode without the need for additional statistics computation. **BN Transfer Issue.** Due to the presence of _dataset bias_ or _domain shift_ (Quiñonero-Candela et al., 2008), significant differences arise in the input distribution of batch normalization between the source and the target domains. Let \(\mathrm{M}_{s}\) and \(\Sigma_{s}\) respectively represent the test mean and test variance of batch normalization, pre-trained on the source domain. When training a pre-trained model on a new target domain, the values of \(\mathrm{M}_{s}\) and \(\Sigma_{s}\) undergo considerable updates, leading to a significant deviation from their original values. Moreover, the learning parameters also experience substantial adjustments during training to adapt and converge with the target domain statistics. These significant changes in batch normalization are one of the main factors that cause feature distortion during fine-tuning. **Batch Normalization Conversion.** To mitigate this BN issue, we propose batch normalization conversion. Before starting the training process, we first compute batch statistics, \(\mu_{t}\) and \(\sigma_{t}^{2}\), for each mini-batch of the target training data. Next, we compute the new unbiased statistics, \(\mathrm{M}_{t}\) and \(\Sigma_{t}\), as follows: \[\mathrm{M}_{t}=\mathrm{E}_{\mathcal{B}_{t}}[\mu_{t}],\quad\Sigma_{t}=\frac{m}{m-1}\mathrm{E}_{\mathcal{B}_{t}}[\sigma_{t}^{2}], \tag{3}\] where \(\mathcal{B}_{t}\) represents mini-batches of target training data.
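In PyTorch terms, Eq. (3) is one gradient-free pass over the target training data with each BN layer's buffers reset; with momentum=None, BN keeps a cumulative average of the per-batch mean and unbiased variance, which corresponds to the estimator above. The sketch below is ours (the loader and device handling are assumptions):

```python
import torch

@torch.no_grad()
def estimate_target_bn_stats(model, target_loader, device="cuda"):
    """Estimate M_t and Sigma_t of Eq. (3): reset each BN layer's running
    statistics, then accumulate a cumulative average of the per-batch
    statistics over one pass through the target training data."""
    bn_layers = [m for m in model.modules()
                 if isinstance(m, (torch.nn.BatchNorm1d,
                                   torch.nn.BatchNorm2d,
                                   torch.nn.BatchNorm3d))]
    saved = [m.momentum for m in bn_layers]
    for m in bn_layers:
        m.reset_running_stats()
        m.momentum = None          # cumulative (equal-weight) moving average
    model.train()                  # BN consumes batch statistics here
    for x, _ in target_loader:     # labels are not needed
        model(x.to(device))
    for m, mom in zip(bn_layers, saved):
        m.momentum = mom
    return model                   # running_mean/var now hold M_t, Sigma_t
```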
With these statistics estimated from the target domain, we can reformulate the batch normalization of the pre-trained model as follows: \[\hat{y} =\gamma_{s}\cdot\frac{z-\mathrm{M}_{s}}{\sqrt{\Sigma_{s}+\epsilon}}+\beta_{s} \tag{4}\] \[=\gamma_{s}\cdot\frac{\sqrt{\Sigma_{t}+\epsilon}}{\sqrt{\Sigma_{s}+\epsilon}}\cdot\frac{z-\mathrm{M}_{t}+\mathrm{M}_{t}-\mathrm{M}_{s}}{\sqrt{\Sigma_{t}+\epsilon}}+\beta_{s}\] \[=\left(\gamma_{s}\frac{\sqrt{\Sigma_{t}+\epsilon}}{\sqrt{\Sigma_{s}+\epsilon}}\right)\cdot\frac{z-\mathrm{M}_{t}}{\sqrt{\Sigma_{t}+\epsilon}}+\left(\beta_{s}+\gamma_{s}\frac{\mathrm{M}_{t}-\mathrm{M}_{s}}{\sqrt{\Sigma_{s}+\epsilon}}\right)\] \[=\gamma_{t}\cdot\frac{z-\mathrm{M}_{t}}{\sqrt{\Sigma_{t}+\epsilon}}+\beta_{t},\] where \(\gamma_{s}\) and \(\beta_{s}\) are the learning parameters of batch normalization that are pre-trained on the source domain. According to this reformulation, we define the new batch normalization parameters, \(\gamma_{t}\) and \(\beta_{t}\), with the target domain statistics as follows: \[\gamma_{t}=\gamma_{s}\cdot\frac{\sqrt{\Sigma_{t}+\epsilon}}{\sqrt{\Sigma_{s}+\epsilon}},\quad\beta_{t}=\beta_{s}+\gamma_{s}\cdot\frac{\mathrm{M}_{t}-\mathrm{M}_{s}}{\sqrt{\Sigma_{s}+\epsilon}}. \tag{5}\] After computing these new parameters, we replace \(\gamma_{s}\) and \(\beta_{s}\) in the pre-trained model with \(\gamma_{t}\) and \(\beta_{t}\), respectively. Additionally, the statistics \(\mathrm{M}_{s}\) and \(\Sigma_{s}\) are replaced by \(\mathrm{M}_{t}\) and \(\Sigma_{t}\). After converting all batch normalization layers in the pre-trained model, we start training on the new target domain. With the converted BN parameters, the pre-trained model requires only slight adjustments to the batch statistics of the target domain. Furthermore, the movement of \(\mathrm{M}_{t}\) and \(\Sigma_{t}\) can be largely reduced, since the batch statistics of the target training data are already close to \(\mathrm{M}_{t}\) and \(\Sigma_{t}\). As a result, the converted BN substantially mitigates feature distortion, leading to improved training performance on the target domain. The overall process of our BN conversion is summarized in Algorithm 1. Utilizing our BN conversion technique, neural networks undergo more stable adjustments that aid in mitigating the distortion of pre-trained features, as shown in Figure 1. For a comprehensive understanding of how the neural network evolves during adaptation, we compute the norm of the relative change using the equation: \[\frac{\|\Delta W\|_{2}}{\|W\|_{2}}=\frac{\|\widetilde{W}-W\|_{2}}{\|W\|_{2}}, \tag{6}\] where \(\widetilde{W}\) represents the parameter values after the training is completed, and \(W\) represents the initial parameter values before training begins. To conduct our evaluation, we employ a pre-trained ResNet-50 model from MoCo-v2 (Chen et al., 2020) and perform two types of training on the fMoW (Christie et al., 2018) dataset: fine-tuning without our BN conversion (FT) and fine-tuning with our BN conversion (DAFT). For each case, we compute the norm of the relative change for all layers of the ResNet-50 model. It is important to note that \(W\) for FT refers to the value of the pre-trained model, whereas \(W\) for DAFT refers to the value after applying BN conversion to the pre-trained model. As illustrated in Figure 2, we can clearly observe that DAFT leads to significantly smaller relative changes compared to FT. This reduction is evident not only in the parameters of all layers but also in the statistics of all BN layers.
This compelling evidence substantiates the effectiveness of our BN conversion technique, enabling the model to adapt more effectively to the target domain while inducing minor alterations. Figure 2: Comparison of relative changes in learning parameters (2a, 2b, 2c) and BN statistics (2d, 2e) between FT and our DAFT on the fMoW dataset. We utilize the pre-trained ResNet-50 model from MoCo-v2 as the feature extractor and conduct each fine-tuning method. The ResNet-50 architecture consists of an Input Stem and 4 subsequent stages, with each stage indicated on the x-axis from left to right. ‘IS’ on the x-axis represents the Input Stem, and all layers within each stage are further indicated on the x-axis sequentially from left to right. The relative changes are computed between the initial values before each fine-tuning and the final values after each fine-tuning process. Note that the relative changes of BN statistics are represented in log scale. ### Integrating LP and FT Our BN conversion technique can be applied to various transfer learning algorithms for neural networks that incorporate BN layers. However, considering the limitations of the recent LP-FT method, we integrate LP and FT in a single stage to leverage BN conversion more effectively. #### 3.2.1 Limitation of LP-FT Let \(\theta\) represent the parameters of the feature extractor denoted as \(f_{\theta}\), initialized with a pre-trained model. The classifier with parameters \(w\) is denoted as \(g_{w}\). While it is commonly a linear layer, it could also consist of multiple layers. In the LP-FT approach, the head classifier \(g_{w}\) is initialized using the value optimized by LP, and then FT is performed with a very small learning rate \(\eta\). As a result, the head layer undergoes only minor changes during FT and remains nearly unchanged after the FT stage. This implies that the head layer optimization primarily occurs during the LP stage, and it is not optimized in conjunction with the feature extractor's adaptation to the target domain. This limitation hinders performance, and it becomes particularly critical when there is a significant difference in data distribution between the source and target domains. To address this issue, we introduce integrating LP and FT into a single stage, allowing the head layer to be optimized with gradually adapting features. #### 3.2.2 Integrated LP-FT In order to integrate the LP and FT stages, we introduce separate learning rates, \(\eta_{\theta}\) and \(\eta_{w}\), for the feature extractor \(f_{\theta}\) and the head layer \(g_{w}\). While the conventional FT also suggests using a different learning rate for the head layer that is 10 times larger than that of the feature extractor, we independently determine the optimized learning rates for each component. To efficiently determine these optimized learning rates, we adopt a two-step procedure, sweeping over each learning rate. In the first step, we sweep over the learning rates \(\eta_{\theta}\) for the feature extractor with fixing \(\eta_{w}\). Having determined the optimized learning rate \(\eta_{\theta}\) for the feature extractor, we move on to the second step, where we sweep over the learning rates \(\eta_{w}\) while utilizing the previously determined optimized learning rate \(\eta_{\theta}\). By following this process, we integrate the FT and LP stages into a single approach, which distinguishes our approach from LP-FT. 
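In PyTorch terms, the separate learning rates amount to two parameter groups in a single optimizer, together with the zero-initialized head described next; a minimal sketch (the model choice and learning-rate values are placeholders):

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V1")             # pre-trained extractor
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new head, 10 classes
torch.nn.init.zeros_(model.fc.weight)                 # zero-initialized head
torch.nn.init.zeros_(model.fc.bias)

eta_theta, eta_w = 1e-3, 1e-1                         # placeholders
optimizer = torch.optim.SGD(
    [
        {"params": [p for n, p in model.named_parameters()
                    if not n.startswith("fc.")], "lr": eta_theta},
        {"params": model.fc.parameters(), "lr": eta_w},
    ],
    momentum=0.9,
)
```

In practice, only \(\eta_{\theta}\) is swept in the first step with \(\eta_{w}\) fixed, and \(\eta_{w}\) is then swept with the chosen \(\eta_{\theta}\).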
Moreover, this strategy significantly reduces the number of hyperparameter combinations and streamlines the search for proper learning rates. Additionally, we employ a zero initialization strategy for the head layer. This approach is aimed at reducing the influence of gradients on the feature extractor during the initial training phases. By initializing the head layer with zeros, we try to avoid making significant changes to the feature extractor until the head layer has started to converge to a reasonable solution. However, in cases where the head layer comprises multiple layers, we opt for random initialization instead of zero values to enable gradient computation. Detailed distinctions between our DAFT and other fine-tuning techniques, including FT and LP-FT, are summarized in Table 1. ## 4 Experiments In this section, we present the results of various experiments conducted to evaluate the effectiveness of our proposed method. We first present the experimental results on classification tasks and segmentation tasks. Additionally, we provide the results of additional experiments, including robustness evaluations and ablation studies. ### Experiments on Classification Task #### 4.0.1 Dataset We conduct experiments on various classification tasks and evaluate both In-Distribution (ID) and Out-of-Distribution (OOD) accuracy using the following datasets. * **CIFAR-10**[10]: A dataset that contains 10 categories of objects. For the OOD dataset, we use two additional datasets, **CIFAR-10.1**[11] and **STL**[12]. CIFAR-10.1 is a subset of the TinyImage dataset [1] and has the same categories as CIFAR-10. STL has the same categories as CIFAR-10 except for'monkey' instead of 'frog'. Therefore, we remove the'monkey' class in STL to align with CIFAR-10 categories. For data augmentation, we resize the input image to 224x224 and apply random horizontal flip. * **Entity-30** and **Living-17**[13]: Part of the BREEDS benchmarks, which are subpopulation shift datasets constructed using ImageNet. Entity-30 contains 30 categories of objects, and Living-17 contains 17 categories of animals. Each dataset consists of source and target domains. For both Entity-30 and Living-17, we use the source domain as the ID dataset and the target domain as the OOD dataset. Data augmentation involves RandomResizedCrop to 224x224 and random horizontal flip. * **DomainNet**[14]: A dataset that includes 345 categories of common objects in six different domain including Clipart, Infograph, Painting, Quickdraw, Real, and Sketch. However, due to labeling noise in DomainNet, we used a subset of DomainNet [12] containing 40 categories of common objects from the Sketch, Real, Clipart, and Painting domains. Sketch domain is used for the ID dataset and the rest of the domains (Real, Clipart, Painting) are used for the OOD dataset. Data augmentation includes resizing the images to 256x256, random cropping to 224x224, and applying random horizontal flip. * **fMoW**[10]: A remote sensing dataset that contains 62 categories of satellite images in five different regions including Asia, Europe, Africa, Americas, and Oceania. 
For our evaluation, we use images from the Americas region as the ID dataset, while images from the Europe and Africa regions are used for the OOD dataset. As the fMoW data size is 224x224, we apply only random horizontal flip for data augmentation. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Head initialization & LR for FT & BN mode & BN Conversion \\ \hline FT & Random & \(\eta_{\theta}=\eta_{w}\) & training mode & X \\ LP-FT & LP & \(\eta_{\theta}=\eta_{w}\) & test mode & X \\ DAFT & Zero & \(\eta_{\theta}\neq\eta_{w}\) & training mode & O \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of details for our DAFT and other fine-tuning methods. **Pretrained Models.** To ensure the generalizability of our method across different pre-trained models, we utilize three different models based on the ResNet-50 architecture: MoCo-v2 (Chen et al., 2020), CLIP (Radford et al., 2021), and SWAV (Caron et al., 2020). Exceptionally, for the fMoW dataset, we use MoCo-TP (Ayush et al., 2021) instead of MoCo-v2 as the pre-trained model. **Training Details.** Consistent with LP-FT (Kumar et al., 2022), we use an \(\ell_{2}\)-regularized logistic regression classifier for linear probing and choose the best \(\ell_{2}\)-regularization hyperparameter based on ID validation accuracy. For fine-tuning, we employ an SGD optimizer with a cosine learning rate schedule and a batch size of 64. We fine-tune for 20 epochs on CIFAR-10, Entity-30, and Living-17, but extend the fine-tuning to 50 epochs on DomainNet and fMoW due to their limited number of images. Early stopping is applied based on the ID validation accuracy. **Results.** Our experimental results on five distinct datasets are summarized in Table 2. All results are averaged over three separate runs. First, we observe that FT tends to exhibit superior performance in ID accuracy compared to LP, while LP tends to outperform FT in OOD accuracy. Across three diverse pre-trained models, our DAFT consistently achieves higher ID accuracy on all datasets in comparison to other baseline methods. Additionally, DAFT consistently exhibits higher OOD accuracy across most of the datasets. **Results.** Table 3 presents the results of our experiments on the segmentation task. Our DAFT demonstrates improved performance compared to other baseline methods for both the VOC and Cityscapes datasets. LP-FT also shows some enhancement by employing an LP-trained model, but it still achieves inferior performance compared to FT. These results show the significance of BN in the context of the segmentation task as well. ### Additional Experiments **Robustness.** In addition to ID and OOD tests, we also evaluate the robustness of our method. After fine-tuning on the CIFAR-10 dataset, we evaluate the model's performance on the CIFAR-10-C dataset (Hendrycks and Dietterich, 2019). The CIFAR-10-C dataset comprises 15 types of corruptions grouped into four main categories: noise, blur, weather, and digital. Each corruption type is present at five different levels of severity. We calculate the Corruption Error for all levels of corruptions and then compute their average. The results of the robustness assessment are presented in Table 4. Interestingly, LP outperforms FT in terms of OOD accuracy, but it shows weaker performance in terms of robustness. Our DAFT achieves lower average errors across various corruptions compared to other methods.
Since this robustness experiment is conducted for the methods whose results are presented in Table 2, the Corruption Error values are also obtained as an average over three runs. **Ablation Study.** Table 5 presents the results of our ablation study on BN conversion and integrated LP-FT. We conducted the study using a pre-trained ResNet-50 model from MoCo-v2 (Chen et al., 2020) on the CIFAR-10 dataset. To evaluate the effectiveness of BN conversion alone, we applied it to each fine-tuning method, including FT, LP-FT, and integrated LP-FT. The results demonstrate that BN conversion is effective for most OOD tests with all fine-tuning methods, indicating its importance in improving performance on OOD datasets. Additionally, we observe that integrated LP-FT without BN conversion performs similarly to LP-FT, revealing the limitations of the two-stage optimization and emphasizing the critical role of BN in the overall optimization process. ## Conclusion In this paper, we introduce Domain-Aware Fine-Tuning (DAFT), a novel approach designed to enhance the adaptability and performance of fine-tuned neural networks. Our method optimizes performance and minimizes network modification by aligning batch normalization layers with the target domain. Additionally, the integration of LP and FT allows the head layer to be optimized with gradually adapting features. The widespread use of batch normalization layers in many practical networks makes DAFT a valuable solution for real-world applications. Overall, DAFT bridges the gap between pre-trained models and new target domains, which contributes to improved model performance and generalization. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{Noise} & \multicolumn{3}{c}{Blur} & \multicolumn{3}{c}{Weather} & \multicolumn{3}{c}{Digital} & \multicolumn{3}{c}{mCE} \\ \cline{3-14} & & Gauss.
& Shot & Impulse & Defocus & Glass & Motion & Zoom & Snow & Frost & Fog & Brightness & Contrast & Elastic & Pixel & JPEG \\ \hline \multirow{6}{*}{MoCo} & LP & 47.54 & 40.05 & 54.24 & 19.48 & 50.79 & 33.38 & 23.50 & 22.80 & 23.57 & 23.72 & 10.12 & 18.26 & 28.71 & 30.48 & 25.65 & 30.15 \\ & FT & 56.07 & 41.87 & **45.48** & 9.88 & 40.95 & 17.10 & 12.42 & 9.33 & 11.91 & 7.40 & 3.24 & 9.83 & **12.62** & 27.17 & 20.89 & 21.74 \\ & LP-FT & 46.50 & 36.01 & 48.30 & 8.11 & **38.19** & **14.56** & **10.04** & **9.13** & 11.27 & **7.10** & 3.26 & 10.99 & 12.89 & **24.56** & **19.20** & 20.03 \\ & DAFT & **37.43** & **28.84** & 46.72 & **8.01** & 39.79 & 16.26 & 10.84 & 9.32 & **10.85** & 8.15 & **3.22** & **9.12** & 14.40 & 28.54 & 19.47 & **19.40** \\ \cline{2-14} & LP & 65.06 & 57.71 & 56.24 & 24.49 & 57.37 & 35.31 & 27.78 & 26.50 & 27.89 & 23.44 & 15.42 & 31.82 & 30.06 & 28.48 & 33.82 & 36.09 \\ & FT & **46.54** & **35.05** & **37.17** & 17.59 & 52.15 & 24.75 & 22.03 & 19.57 & 22.23 & 12.68 & 8.41 & 24.71 & 18.95 & **27.64** & **18.74** & 25.88 \\ & LP-FT & 57.33 & 46.14 & 48.53 & 14.40 & 51.64 & 24.41 & 18.31 & 15.76 & 18.24 & 11.63 & 8.05 & 23.77 & 20.39 & 33.87 & 27.27 & 27.98 \\ & DAFT & 61.58 & 48.10 & 40.71 & **13.55** & **46.95** & **21.96** & **16.87** & **13.78** & **15.88** & **9.74** & **6.15** & **18.39** & **17.61** & 31.95 & 22.99 & **25.75** \\ \hline \multirow{6}{*}{SWAV} & LP & 49.95 & 42.77 & 60.15 & 17.42 & 45.79 & 27.20 & 20.22 & 19.31 & 22.27 & 19.62 & 8.58 & 12.46 & 25.01 & 27.84 & 25.17 & 28.25 \\ & FT & 51.71 & 38.55 & 46.94 & 10.05 & 39.13 & 16.83 & 11.45 & **8.74** & 10.62 & 7.68 & 3.57 & 10.89 & **12.36** & **29.21** & 20.96 & 21.25 \\ \cline{1-1} & LP-FT & 50.05 & 37.70 & **36.61** & **8.72** & **33.36** & **15.41** & **10.44** & 8.84 & 11.08 & 6.57 & 3.43 & 9.36 & 12.54 & **25.27** & 20.50 & 19.37 \\ \cline{1-1} & DAFT & **41.39** & **30.18** & 46.90 & 8.86 & 36.13 & 15.98 & 10.87 & 8.90 & **9.99** & 7.60 & **3.12** & **7.69** & 12.82 & 25.39 & **19.45** & **19.02** \\ \hline \hline \end{tabular} \end{table} Table 4: Corruption Error (%) for 15 types of corruptions. The mean Corruption Error (mCE) is calculated for all corruptions. A lower value of Corruption Error indicates better performance. Best results are highlighted in bold. \begin{table} \begin{tabular}{c l l l} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{CIFAR-10} \\ \cline{3-4} & & ID & CIFAR-10.1 & STL \\ \hline \hline \multirow{6}{*}{MoCo} & FT & 97.66 & 92.95 & 81.18 \\ & + BN conversion & 97.59 & 93.27 & 84.05 \\ \cline{2-4} & LP-FT & 97.66 & 93.60 \\ & + BN conversion & 97.31 & 93.00 & 91.66 \\ \cline{2-4} & Integrated LP-FT & 97.63 \\ & + BN conversion & 97.84 & 94.18 & 92.36 \\ \hline \multirow{6}{*}{CLIP} & FT & 93.88 \\ & + BN conversion & 95.40 \\ \cline{2-4} & LP-FT & 94.02 \\ \cline{2-4} & + BN conversion & 94.70 \\ \cline{2-4} & Integrated LP-FT & 94.29 \\ & + BN conversion & 95.49 \\ \hline \multirow{6}{*}{SWAV} & FT & 97.55 \\ & + BN conversion & 97.62 \\ \cline{2-4} & LP-FT & 97.50 \\ \cline{1-1} & + BN conversion & 97.61 \\ \cline{1-1} & Integrated LP-FT & 97.28 \\ \cline{1-1} & + BN conversion & 97.77 \\ \hline \hline \end{tabular} \end{table} Table 5: BN conversion effect for each fine-tuning method.
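For completeness, the feature-distortion measures used in Figure 1 (cosine similarity and L2 distance between features extracted before and after fine-tuning) reduce to the following computation; a minimal sketch in which the feature extractors and data loader are placeholders (e.g., ResNet-50 backbones with the classification head replaced by an identity):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def feature_similarity(backbone_before, backbone_after, loader, device="cuda"):
    """Per-image cosine similarity and L2 distance between penultimate
    features of the same test inputs, pre- vs. post-fine-tuning."""
    backbone_before.eval()
    backbone_after.eval()
    cos, l2 = [], []
    for x, _ in loader:
        x = x.to(device)
        f0 = torch.flatten(backbone_before(x), 1)  # pre-fine-tuning features
        f1 = torch.flatten(backbone_after(x), 1)   # post-fine-tuning features
        cos.append(F.cosine_similarity(f0, f1, dim=1))
        l2.append((f0 - f1).norm(dim=1))
    return torch.cat(cos), torch.cat(l2)
```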
2308.06436
A Domain-adaptive Physics-informed Neural Network for Inverse Problems of Maxwell's Equations in Heterogeneous Media
Maxwell's equations are a collection of coupled partial differential equations (PDEs) that, together with the Lorentz force law, constitute the basis of classical electromagnetism and electric circuits. Effectively solving Maxwell's equations is crucial in various fields, such as electromagnetic scattering and antenna design optimization. Physics-informed neural networks (PINNs) have shown a powerful ability to solve PDEs. However, PINNs still struggle to solve Maxwell's equations in heterogeneous media. To this end, we propose a domain-adaptive PINN (da-PINN) to solve inverse problems of Maxwell's equations in heterogeneous media. First, we propose a location parameter of the media interface to decompose the whole domain into several sub-domains. Furthermore, the electromagnetic interface conditions are incorporated into a loss function to improve the prediction performance near the interface. Then, we propose a domain-adaptive training strategy for da-PINN. Finally, the effectiveness of da-PINN is verified with two case studies.
Shiyuan Piao, Hong Gu, Aina Wang, Pan Qin
2023-08-12T02:14:52Z
http://arxiv.org/abs/2308.06436v1
A Domain-adaptive Physics-informed Neural Network for Inverse Problems of Maxwell's Equations in Heterogeneous Media ###### Abstract Maxwell's equations are a collection of coupled partial differential equations (PDEs) that, together with the Lorentz force law, constitute the basis of classical electromagnetism and electric circuits. Effectively solving Maxwell's equations is crucial in various fields, such as electromagnetic scattering and antenna design optimization. Physics-informed neural networks (PINNs) have shown a powerful ability to solve PDEs. However, PINNs still struggle to solve Maxwell's equations in heterogeneous media. To this end, we propose a domain-adaptive PINN (da-PINN) to solve inverse problems of Maxwell's equations in heterogeneous media. First, we propose a location parameter of the media interface to decompose the whole domain into several sub-domains. Furthermore, the electromagnetic interface conditions are incorporated into a loss function to improve the prediction performance near the interface. Then, we propose a domain-adaptive training strategy for da-PINN. Finally, the effectiveness of da-PINN is verified with two case studies. Domain-adaptive physics-informed neural network, domain-adaptive training strategy, Maxwell's equations in heterogeneous media. ## I Introduction Maxwell's equations describe the fundamental laws of electromagnetic fields [1] and are widely used in geological exploration [2], medical imaging [3], and many other fields [4, 5]. Maxwell's equations are classically solved by numerical methods [6], such as the finite-difference time-domain method [7], the finite element method [8], the Born iterative method [9], and the variational Born iteration method [10]. However, one of the challenges these numerical methods face is their high computational cost [11]. Deep learning methods, with their remarkable approximation capability, have attracted increasing attention for solving partial differential equations (PDEs) [12, 13]. The physics-informed neural networks (PINNs) proposed in [14] are prevalent mesh-free methods for solving PDEs. In the framework of PINNs, prior knowledge of the physics and sparse measurements are incorporated into the loss function used to train the network. Successful applications to estimating the parameters of PDEs in homogeneous media, such as supersonic flows [15], beam systems [16], and nano-optics [17], have been reported [18, 19]. Meanwhile, Maxwell's equations in heterogeneous media have wide applications, such as nondestructive testing of multilayered coatings [20, 21]. However, it remains a challenge for PINNs to solve inverse problems in heterogeneous media composed of several homogeneous media [22]. In the heterogeneous case, the parameters of the PDEs are not fixed over the whole domain and jump at media interfaces. Such parameter jumps can prevent PINNs from working well. In [23, 24], PINNs were used to solve inverse problems of Maxwell's equations in heterogeneous media. However, these methods obtain inaccurate estimations near the interface because the interface conditions are not considered, as indicated in our experiments. To solve the aforementioned problems, this letter proposes a domain-adaptive PINN (da-PINN) for Maxwell's equations in heterogeneous media. The goal of da-PINN is to estimate the parameters of Maxwell's equations and predict the electric and magnetic fields.
We first propose a location parameter of the media interface to decompose the whole domain into several sub-domains, under the assumption that each sub-domain has fixed parameters of Maxwell's equations. Then, sub-networks are constructed to approximate the electric and magnetic fields in each sub-domain. The electromagnetic interface conditions are incorporated into a loss function. In addition, a domain-adaptive training strategy is proposed to optimize da-PINN. Finally, the performance of da-PINN is verified with one- and two-dimensional Maxwell's equations in heterogeneous media. This letter is organized as follows. In Section II, we introduce PINNs for derivatives of vector-valued functions and da-PINN. In Section III, da-PINN is verified with one- and two-dimensional cases. Section IV concludes this letter. ## II Methodology ### _PINNs for Derivatives of Vector-valued Functions_ In this section, we briefly introduce PINNs for inverse problems involving derivatives of vector-valued functions, whose general form is as follows: \[\mathbf{u}_{t}(t,\mathbf{x})+\mathbf{\mathcal{N}}\left[\mathbf{u}(t,\mathbf{x});\mathbf{\lambda}\right]=0,\ \mathbf{x}\in\Omega\subseteq\mathbb{R}^{D},\ t\in\left[0,T\right]. \tag{1}\] Here, \(\mathbf{u}:\mathbb{R}\times\mathbb{R}^{D}\rightarrow\mathbb{R}^{q}\) denotes the solution with spatiotemporal dependence, \(\mathbf{u}_{t}\) is the partial derivative of \(\mathbf{u}\) with respect to \(t\), \(\mathbf{\mathcal{N}}\) is a vector differential operator parameterized by an unknown real-valued vector \(\mathbf{\lambda}\in\mathbb{R}^{m}\), \((t,\mathbf{x})\) is a pair-wise
2306.04955
Degraded Polygons Raise Fundamental Questions of Neural Network Perception
It is well-known that modern computer vision systems often exhibit behaviors misaligned with those of humans: from adversarial attacks to image corruptions, deep learning vision models suffer in a variety of settings that humans capably handle. In light of these phenomena, here we introduce another, orthogonal perspective studying the human-machine vision gap. We revisit the task of recovering images under degradation, first introduced over 30 years ago in the Recognition-by-Components theory of human vision. Specifically, we study the performance and behavior of neural networks on the seemingly simple task of classifying regular polygons at varying orders of degradation along their perimeters. To this end, we implement the Automated Shape Recoverability Test for rapidly generating large-scale datasets of perimeter-degraded regular polygons, modernizing the historically manual creation of image recoverability experiments. We then investigate the capacity of neural networks to recognize and recover such degraded shapes when initialized with different priors. Ultimately, we find that neural networks' behavior on this simple task conflicts with human behavior, raising a fundamental question of the robustness and learning capabilities of modern computer vision models.
Leonard Tang, Dan Ley
2023-06-08T06:02:39Z
http://arxiv.org/abs/2306.04955v1
# Degraded Polygons Raise Fundamental Questions of Neural Network Perception ###### Abstract It is well-known that modern computer vision systems often exhibit behaviors misaligned with those of humans: from adversarial attacks to image corruptions, deep learning vision models suffer in a variety of settings that humans capably handle. In light of these phenomena, here we introduce another, orthogonal perspective studying the human-machine vision gap. We revisit the task of recovering images under degradation, first introduced over 30 years ago in the Recognition-by-Components theory of human vision. Specifically, we study the performance and behavior of neural networks on the seemingly simple task of classifying regular polygons at varying orders of degradation along their perimeters. To this end, we implement the Automated Shape Recoverability Test1 for rapidly generating large-scale datasets of perimeter-degraded regular polygons, modernizing the historically manual creation of image recoverability experiments. We then investigate the capacity of neural networks to recognize and recover such degraded shapes when initialized with different priors. Ultimately, we find that neural networks' behavior on this simple task conflicts with human behavior, raising a fundamental question of the robustness and learning capabilities of modern computer vision models. Footnote 1: [https://github.com/leonardtang/Degraded-Polygons-RBC](https://github.com/leonardtang/Degraded-Polygons-RBC) ## 1 Introduction Since the advent of adversarial attacks (Goodfellow et al., 2015), researchers have grown increasingly wary of machine learning models' susceptibility to learning irrelevant patterns (Yuan et al., 2019). Oftentimes, neural networks rely on spurious features that humans know to avoid (Khani and Liang, 2021). A poignant example of such unorthodox behavior comes in the form of machine vision's over-dependency on object textures rather than object shapes (Geirhos et al., 2019). This often leads to dangerous consequences in practice (Hendrycks et al., 2021). Similarly, the fragility of vision models in response to minor image transformations such as shifts or rotations (Azulay and Weiss, 2019) raises concerns over how well these models truly learn, especially considering that these small geometric transformations are commonplace in natural vision scenarios. The specifics of when or how vision models might be expected to generalize well remain a mystery (Zhang et al., 2017). Beyond being unreliable in real-world settings, current vision models are decidedly unhuman in nature. They tend to learn undesirable features that are fundamentally misaligned with how humans perceive the world. Given the unexpected nature of such models, we study whether models are also capable of correctly identifying recoverable images, a concept first presented in the Recognition-by-Components theory of human vision (Biederman, 1987). Similarly, we investigate whether or not these models' performance is marred by non-recoverable images. To make progress in understanding the behavior of computer vision models, we introduce the Automated Shape Recoverability Test pipeline for evaluating vision models across a spectrum of image degradation in regular polygons. The intrinsic difficulty in classifying a generated image is directly controlled by the proportion of the image we delete, subject to the constraint of where the image deletion is allowed to occur.
We operate on the domain of black-and-white sketches, since they most closely resemble the distribution of images presented in Biederman (1987), which consist of simple sketches of common objects. Moreover, our ultimate results on this simple task setting suggest a fundamental misalignment in the way humans and machines approach image classification. Using our pipeline, we produce 1,260,000 sketches of regular polygons evenly distributed across 7 shape categories, 9 levels of image degradation, and 2 forms of degradation (corner and edge degradation). Though seemingly simple, these images measure model performance against a canonical human vision task, yielding surprising discrepancies. We release the editing pipeline and final dataset of 1,260,000 images in the hopes of encouraging further research in this direction. In image classification experiments on a subset of the data, we observe that common vision architectures poorly recover (i.e., correctly classify) heavily edge-degraded and corner-degraded shapes, both of which humans are capable of recognizing. Surprisingly, neural networks also rely primarily on edges rather than corners for shape recovery - the exact opposite of human behavior. Moreover, our results also indicate that models pretrained entirely on non-accidental properties generated by Iterated Function Systems (Barnsley and Vince, 2010) display much stronger performance patterns on the same corner-removed class. Overall, our contributions are summarized as follows: * We introduce the Automated Shape Recoverability Test, a pipeline for generating datasets with parameterized degradation of regular polygons. We publicly release the pipeline and the accompanying dataset of 1,260,000 images, which span across seven categories and include varying degrees and forms of image degradation. * Through a comprehensive analysis of various neural network architectures on the task of shape recovery across varying levels of image degradation, we demonstrate a striking discrepancy in how machine learning models and humans perceive images. Unlike humans, neural networks consistently rely more on edges than corners for image recovery, pointing to a fundamental difference in image processing between machines and humans. * Our exploration of pretraining dataset choice reveals that models pretrained on fractals, unlike those pretrained on ImageNet, retain greater accuracy on edge-degraded shapes, while continuing to perform poorly on corner-degraded shapes, further misaligning human and machine vision. Through Grad-CAM visualizations, we reveal differences in how ImageNet and fractal pretrained models learn features and process degraded shapes. Figure 1: Example of the Automated Shape Recoverability Test generation pipeline for a pentagon with 50% degradation proportion. Whole shapes are generated and subsequently edited with corner degradation (top), and edge degradation (bottom). Our experiments indicate that, unlike time-constrained humans performing sketch recovery (Biederman, 1987), neural networks rely heavily on edges rather than corners to recover degraded shapes. ## 2 Related Work ### Sketch Classification Though sketch classification is not as common a task as natural image classification, much existing research attempts to tackle the problem.
The TUBerlin Sketch Benchmark was the first large-scale sketch classification dataset for machine learning, consisting of 20,000 unique sketches distributed over 250 object categories that exhaustively cover most objects commonly encountered in everyday life (Eitz et al., 2012). 1,350 unique Amazon Mechanical Turk (MTurk) workers were recruited to sketch these images, with each worker only being able to sketch a limited number of sketches per category so as to preserve diversity within each sketch class. Eitz et al. (2012) also develop a bag-of-features sketch representation alongside multi-class support vector machines trained on the dataset, which classifies unseen sketches with 56% accuracy. Since the introduction of this dataset, significant work has been done to build more accurate models of sketch classification using deep-learning architectures, particularly Convolutional Neural Networks and Recurrent Neural Networks, as well as solving challenges including partial sketch classification and sketch progression incorporation (Tran, 2017; Seddati et al., 2015, 2016; Yang and Hospedales, 2015; Ha and Eck, 2018). As a result of these efforts, the competitive accuracy on the TUBerlin Sketch Benchmark is now 77.69%, a significant improvement over the original paper. ### Image Recovery from a Cognitive Science Perspective **Recognition-by-Components.** The task of _image recovery_ was first introduced in the landmark Recognition-by-Components theory from cognitive science, which explains how humans recognize and categorize objects based on their basic geometric structures, called geons, and their spatial relationships (Biederman, 1987). A critical component of Recognition-by-Components theory states that object recognition of two-dimensional black-and-white sketches is impaired when feature-relation information - the information about the relationships between geons - is degraded. In the limit of feature-relation information degradation, it is ambiguous what the original object was supposed to be; contextual inference of the original object is no longer possible. We define such a degraded image to be _non-recoverable_. Otherwise, a degraded image that can still be recognized is _recoverable_. Recognition-by-Components also posits that parsing of an object into components is performed at vertex regions of sharp concavity, with multiple curves terminating at a common point. To verify this claim, Biederman (1987) tested humans' ability to classify sketches of objects in a timed setting after object contours had undergone heavy degradation. Expectedly, heavier degradation - greater proportions of the sketch being deleted - led to lower classification accuracy. Additionally, confirming the original claim, degradation centered at object corners induced much lower accuracy in subjects compared to degradation along curves between corners. While our proposed dataset and task do not exactly match the setting of Biederman's (1987) experiments in the sense that we focus on a distinct set of sketch shapes and do not consider time-constrained recognition, our results indicate that this task is sufficiently challenging for modern deep learning models. Moreover, the simple structure of our data, being merely regular polygons, indicates a fundamental misalignment of deep learning vision models with respect to human vision.
We view our dataset as a minimum viable task that highlights this misalignment: the fact that deep learning models exhibit pathologies even on this simple shape classification task suggests that neural network perception may be even less robust than we know. To contextualize our findings, we compare Biederman's (1987) human subject performance to those of common vision architectures in §4.1. **Gollin Figures Test.** Closely related to image recovery is the Gollin figures test for assessing human visual perception. Subjects are shown variations of a common object in quick succession, with five consecutive incomplete line drawings of a picture, from least to most complete, displayed to each subject. The subject needs to mentally complete the underlying drawing in order to identify the original object drawn (Gollin, 1960). Notably, unlike in Recognition-by-Components theory, the Gollin figures test makes no distinction between image degradations that yield recoverable images and degradations that yield non-recoverable images. Historically, the Gollin figures test has been automated in order to provide more fine-grained control over the proportion of an image that is degraded. That is, rather than having five discrete variants of degradation as in the original test, software was introduced to construct infinitely many variants of degradation on a continuous spectrum (Foreman and Hemmings, 1987). Motivated by this precedent, we automate the generation of recoverable and non-recoverable image degradations at scale. **Non-Accidental Properties.** Non-accidental properties (NAPs) are image properties that are invariant over orientation and depth (Amir et al., 2012). The human visual system processes NAPs in a two-dimensional drawing as feasibly occurring in three dimensions (Witkin and Tenenbaum, 1983). For example, if there is a straight line, a manifestation of _collinearity_, in a two-dimensional drawing, the visual system infers that the edge producing that line in the three-dimensional world is also straight. In other words, NAPs are dimension-invariant. Another perspective for understanding NAPs is the _non-accidentalness principle_, which states that spatiotemporal coherence and regularity are so unlikely to arise by the random interaction of independent components that such structure, when observed, almost certainly denotes an underlying unified process (Blusseau et al., 2016). NAPs precisely capture these unlikely coherences, providing useful cues for human object recognition. Figure 2 displays the five canonical NAPs and examples of each property (Lowe, 1985). NAPs and image recoverability are intimately related notions. Specifically, NAP location and type directly parameterize regions of non-recoverable image degradations. If NAPs are removed from any image region, it becomes more difficult to recover the object component that the deleted NAP belongs to. Under certain NAP removals, it becomes more difficult to recover the relationship between object components, and thus the overall image. For instance, Biederman (1987) demonstrated that deletion of vertices adversely affected object recognition more than deletion of midsegments. Inspired by this result, here we focus on producing degradation of regions surrounding vertices as well as midsegments in order to generate non-recoverable and recoverable images, respectively.
Figure 2: The five classes of nonaccidental properties (NAPs) for object recognition in the visual cortex are 1) _collinearity_, the presence of straight lines; 2) _curvilinearity_, the presence of smoothly curved elements; 3) _symmetry_ across arbitrary axes; 4) _parallel curves_; and 5) _vertices_, junctions of two or more contours (Lowe, 1985). Critically, cognitive scientists suggest that NAPs form a perceptual basis for the set of components that enable object recognition. **Fractals as Partial Non-Accidental Properties.** While there are no known ways of specifying the generation of NAPs directly, we draw inspiration from Barnsley and Vince (2010) to automatically generate _fractals_. Due to their intrinsic collinearity, curvilinearity, symmetry, parallel nature, multitude of junctions, and occurrence in natural objects and scenes, fractals may be regarded as a proxy form of NAPs. To that end, we investigate the performance of models pretrained via fractal-guided Formula-driven Supervised Learning (Kataoka et al., 2020) on our benchmark. Recent results in the deep learning literature also motivate the use of fractal pretraining. In particular, Hendrycks et al. (2021) showed that a simple data augmentation technique mixing fractals with natural images comprehensively improves model robustness and safety metrics. Contemporaneously, Kataoka et al. (2020) also demonstrate that pretraining neural networks entirely on synthetically generated fractals achieves the same level of performance on downstream tasks as pretraining on natural images. For our experiments, we use models pretrained on synthetic fractals with 10,000 fractal classes, which we refer to as FractalDB models. ## 3 Automating Shape Recoverability Experiments We implement the Automated Shape Recoverability Test, an automatic image editing pipeline for creating degraded regular polygons at varying severities. Figure 1 displays an example of our generation pipeline. In total, our pipeline requires less than 30 seconds of compute time to serially generate 6,000 diverse images across 7 shape categories - triangles, squares, pentagons, hexagons, heptagons, and octagons - with 1,000 images each2. Moreover, this procedure can be trivially scaled up to any arbitrary number of polygon classes and images per class. Our approach broadly consists of generating regular polygons, then degrading their perimeters to produce recoverable (edge degradation) and non-recoverable (corner degradation) images. Footnote 2: Benchmarked on an 8-core Apple M1 Pro chip. **Regular Polygon Generation.** Our shape generation pipeline begins by constructing regular polygons. We generate polygons by first implicitly defining a circle, then placing an appropriate number of points, matching the desired number of sides, on its circumference. For example, to draw a pentagon, we place five points equally spaced on the circumference of the circle, then use line segments to connect them. To promote image diversity, the center of the circle \(c\) is chosen randomly within a 224 \(\times\) 224 grid, subject to a minimum acceptable radius size \(r_{min}\). Subsequently, the circle's radius \(r\) is chosen uniformly at random between the minimum accepted radius size and the maximum radius size allowed by the sampled circle center. Furthermore, shapes are rotated uniformly at random by an angle \(\theta\) between \(0^{\circ}\) and \(360^{\circ}\). The border of each polygon is fixed to be two pixels thick.
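As a rough illustration of this generation step, the following PIL-based sketch samples a random center, radius, and rotation, then places the vertices evenly on the implicit circle (function and parameter names are ours, not necessarily those of the released pipeline):

```python
import math
import random
from PIL import Image, ImageDraw

def random_regular_polygon(n_sides, size=224, r_min=40):
    """Vertices of a regular polygon with random center, radius (>= r_min),
    and rotation, inscribed in an implicit circle inside a size x size grid."""
    cx = random.uniform(r_min, size - r_min)
    cy = random.uniform(r_min, size - r_min)
    r = random.uniform(r_min, min(cx, cy, size - cx, size - cy))
    theta = random.uniform(0.0, 2.0 * math.pi)   # random rotation
    return [(cx + r * math.cos(theta + 2 * math.pi * k / n_sides),
             cy + r * math.sin(theta + 2 * math.pi * k / n_sides))
            for k in range(n_sides)]

img = Image.new("L", (224, 224), color=255)      # white background
draw = ImageDraw.Draw(img)
pts = random_regular_polygon(5)                  # e.g., a pentagon
draw.line(pts + [pts[0]], fill=0, width=2)       # 2-pixel black border
```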
**Producing Degraded Shapes** In order to produce recoverable and non-recoverable versions of our shapes, we parametrically and evenly degrade the perimeter of each polygon. To do so, we first specify the proportion \(p_{d}\) of the shape's perimeter to degrade. Naturally, a larger degradation proportion yields more occluded shapes that are more difficult to recognize, and a smaller degradation proportion yields shapes closer to the original polygon, which are more easily recognized. To produce non-recoverable shapes, we degrade perimeter regions surrounding each corner, and to produce recoverable shapes, we degrade perimeter regions from the midsegment of each edge.

**Corner Degradation** To degrade corners from an initial regular polygon generation, we overlay white circles at each corner, effectively erasing the perimeter of the corresponding local neighborhood around the corner. Given a desired global degradation proportion \(p_{d}\), each individual white circle is then defined with the following radius, where \(N_{s}\) is the number of sides of the regular polygon and \(P\) is the total perimeter:

\[r=\frac{p_{d}}{2\cdot N_{s}}\cdot P\]

Under this construction, the circle at each corner will erase \(2r=(P\cdot p_{d})/N_{s}\) pixels from the shape's perimeter, and in aggregate \(N_{s}\) circles will erase \(P\cdot p_{d}\) pixels, thus precisely degrading a \((P\cdot p_{d})/P=p_{d}\) proportion of the original image, as desired.

**Edge Degradation** We adopt a similar procedure for edge degradation. However, instead of overlaying circles at shape corners, we overlay circles at midpoints between corners. Critically, defining \(r\) exactly as above, observe that no circle drawn at a midpoint can erase a corner so long as \(p_{d}<1\), which is true for all of our experiments and in all meaningful degradation cases. In the limiting cases, \(p_{d}=1\) erases the entire perimeter of the shape, and \(p_{d}=0\) retains the shape in full. Notably, this procedure performs a single removal at the middle of each edge, _not_ a dashed-line edit.
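Continuing the sketch above, both degradation modes reduce to erasing discs of radius \(r=p_{d}\cdot P/(2\cdot N_{s})\), centered either at the polygon's vertices (corner degradation) or at its edge midpoints (edge degradation); the helper names below are illustrative.

```python
def degradation_radius(vertices, p_d):
    """r = p_d * P / (2 * N_s): N_s discs jointly erase a p_d fraction of P."""
    edges = np.roll(vertices, -1, axis=0) - vertices
    perimeter = np.linalg.norm(edges, axis=1).sum()
    return p_d * perimeter / (2 * len(vertices))

def degradation_centers(vertices, mode):
    """Disc centers: the vertices themselves (corner) or edge midpoints (edge)."""
    if mode == "corner":
        return vertices
    return 0.5 * (vertices + np.roll(vertices, -1, axis=0))

# Rasterization (illustrative): draw the polygon outline in black on white,
# then paint a white disc of radius degradation_radius(...) at every center.
```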
## 4 Experiments

Our experiments evaluate neural networks' ability to classify degraded shapes at 10%, 15%, 20%, 25%, 30%, 40%, 50%, 60%, and 70% perimeter degradation proportions. For our architectures, we benchmark ResNet-18, ResNet-50 (He et al., 2016), MLP-Mixer (Tolstikhin et al., 2021), and ViT (Dosovitskiy et al., 2021). We consider the ResNet family due to its universality in vision experiments and relative simplicity. We also test MLP-Mixer and ViT due to their qualitatively different learning procedures and behavior. MLP-Mixer uses simple multi-layer perceptrons (MLPs) to mix features locally and globally, contrasting the explicit weight sharing seen in convolutional and transformer models. ViT adapts the transformer architecture from natural language processing to handle images, learning global dependencies by treating image patches as a sequence. We initialize our models using pretrained weights derived from ImageNet (Deng et al., 2009) and FractalDB-10k, a database without any natural images (Kataoka et al., 2020). All models are then finetuned on _whole_, non-degraded regular polygons according to a 60%/20%/20% training/validation/test split. Subsequently, they are directly tested on corner-degraded and edge-degraded polygons at varying degradation strengths. While our models obtain high test accuracy on whole shape classification, we are chiefly interested in their performance on degraded shapes.

**Training Setup** We finetune ResNet-18 and ResNet-50 on whole polygons for 20 epochs using SGD with a learning rate of 0.01, a momentum term of 0.9, no weight decay, and a batch size of 64. The learning rate follows a Reduce-Learning-Rate-on-Plateau scheduler with a patience of 3 epochs and a learning rate reduction factor of 0.1. For ViT, we use SGD with a learning rate of 0.001, weight decay of 0.001, momentum of 0.9, and a batch size of 32. Across all experiments, standard data augmentation and preprocessing techniques are used, namely random cropping, random rotations, random horizontal flipping, and normalization over the whole-polygon dataset.

Figure 3: Top-1 test accuracy (%) of ImageNet-pretrained and whole-polygon finetuned models on the shape recovery task. Accuracy decreases as degradation proportion, \(p_{d}\), increases. Moreover, ResNet-18, ResNet-50, and MLP-Mixer all exhibit worse performance on edge-degraded shapes compared to corner-degraded shapes, the opposite of human behavior.

### Results

At every epoch, we compute top-1 and top-5 validation accuracy on the whole-polygon dataset, and the weights of the network are saved. After training, the set of weights with the highest top-1 validation accuracy is used to compute test accuracy on whole polygons. We then evaluate this best model on corner-degraded and edge-degraded shapes. Critically, our models achieve 100% validation accuracy within 10 epochs, and also consistently achieve greater than 99.7% accuracy on randomly held out test sets of whole shapes. More interestingly, they perform much worse on degraded shapes.

**Comparison Across Architectures** The top-1 test accuracies on corner-degraded and edge-degraded shapes for ImageNet-pretrained models are shown in Figure 3. Unsurprisingly, as degradation proportion increases, all architectures perform worse, both in the corner-degraded and edge-degraded settings. While ViT performs similarly on all degraded shapes at each degradation proportion, ResNet-18, ResNet-50, and MLP-Mixer consistently perform worse on edge-degraded shapes versus corner-degraded shapes. Notably, this is the opposite of how time-constrained humans perform on Biederman's (1987) degraded sketch recognition task.

**Human Comparison** To contextualize the behavior of neural networks on our shape recovery task, we compare the performance of MLP-Mixer against human subjects from Biederman's (1987) original image recovery experiments. There, image degradation at corners and edges was performed for 18 objects at degradation proportions of 25%, 45%, and 65%. Degraded objects were then exposed to subjects for 100msec, 200msec, and 750msec.
Figure 4 compares MLP-Mixer performance against these human subjects in the 100msec setting. Specifically, we compute the percentage difference in accuracy on edge-degraded shapes relative to corner-degraded shapes for both humans and MLP-Mixer. As degradation proportion increases, human subjects show greater accuracy preservation on edge-degraded shapes compared to corner-degraded shapes. Conversely, MLP-Mixer retains increasingly higher accuracy on corner-degraded shapes relative to edge-degraded shapes.

Figure 4: Differential (_Edge \(-\) Corner_) in edge-degradation accuracy relative to corner-degradation accuracy on the shape and image recovery tasks. As \(p_{d}\) increases, humans retain high accuracy on edge-degraded images but perform worse on corner-degraded images. On the other hand, MLP-Mixer quickly experiences decreasing accuracy on edge-degraded images, while retaining high accuracy on corner-degraded images. Overall, MLP-Mixer and human subjects exhibit starkly contrasting behavior.

**Dataset Priors** We also investigate the effect of pretraining dataset choice on our models' performance. Besides ImageNet, we also pretrain ResNet-18 and ResNet-50 on FractalDB before finetuning them on our whole-polygon dataset. Figure 5 displays the performance of these models on the shape recovery task. Tables 1 and 2 show the analogous per-class performance for ResNet-18. While the general trend of these models is the same as their ImageNet counterparts, these models retain greater accuracy on corner-degraded shapes and fail more rapidly on edge-degraded shapes as degradation proportion increases, diverging even further from human behavior.

Figure 5: Top-1 test accuracy (%) of FractalDB-pretrained and whole-polygon finetuned models on the shape recovery task. Again, accuracy decreases across the board as degradation proportion, \(p_{d}\), increases. Compared to their ImageNet-pretrained counterparts, however, ResNet-18 and ResNet-50 both retain performance better on edge-degraded shapes. We also note the persisting discrepancy relative to corner-degraded shapes, the opposite of human behavior.

For fractals generated by linear Iterated Function Systems (IFS), such as those in FractalDB, the resulting images primarily exhibit the NAPs of collinearity and symmetry, and not so much structurally complex vertices. Therefore, it is not surprising that these fractal-pretrained models are more amenable to processing edges, thus performing even better on corner-degraded shapes. In that sense, fractal pretraining further misaligns neural network behavior from humans, raising the question of whether standard fractal pretraining can indeed be a suitable substitute for natural image pretraining. However, we note that extensions to the traditional IFS generative process exist that enable emphasis on different NAPs. For example, the Fractal Flame algorithm (Spotworks and Berthoud, 2008) produces _curved_ fractals, thus emphasizing curvilinearity. We leave the investigation of such fractal generation procedures' effects on neural network behavior and alignment for future work.
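For intuition about why linear IFS images are dominated by collinearity and symmetry, recall how such fractals are rendered: a small set of affine maps is iterated via the chaos game, so the attractor is built from straight affine images of itself. The sketch below uses a hand-picked Sierpinski-style system purely as an illustration; it is not the FractalDB sampler, which draws the affine parameters at random.

```python
import numpy as np

def chaos_game(maps, n_points=100_000, rng=None):
    """Iterate x <- A x + b with (A, b) drawn uniformly from `maps`;
    the visited points approximate the attractor of the linear IFS."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    for t in range(n_points):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        pts[t] = x
    return pts

# A Sierpinski-style system: three contractions by 1/2 toward fixed points.
half = 0.5 * np.eye(2)
maps = [(half, np.array([0.00, 0.0])),
        (half, np.array([0.50, 0.0])),
        (half, np.array([0.25, 0.5]))]
points = chaos_game(maps)  # scatter-plot `points` to render the fractal
```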
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \(p_{d}\) **and Type** & **Triangle** & **Square** & **Pentagon** & **Hexagon** & **Heptagon** & **Octagon** \\ \hline 0.3 Edge Degradation & 100.0 & 67.7 & 92.2 & 0.2 & 0.9 & 100.0 \\ 0.4 Edge Degradation & 100.0 & 27.6 & 0.0 & 0.0 & 0.0 & 94.7 \\ 0.5 Edge Degradation & 100.0 & 39.0 & 0.0 & 0.0 & 0.1 & 92.9 \\ 0.6 Edge Degradation & 100.0 & 32.7 & 0.0 & 0.0 & 0.0 & 0.1 \\ 0.7 Edge Degradation & 6.6 & 98.9 & 0.0 & 0.0 & 0.0 & 0.0 \\ 0.3 Corner Degradation & 100.0 & 96.0 & 88.9 & 87.0 & 93.7 & 82.3 \\ 0.4 Corner Degradation & 100.0 & 79.2 & 55.9 & 66.3 & 42.4 & 60.5 \\ 0.5 Corner Degradation & 100.0 & 69.8 & 0.1 & 14.4 & 16.3 & 55.6 \\ 0.6 Corner Degradation & 100.0 & 16.1 & 0.0 & 0.0 & 0.0 & 0.0 \\ 0.7 Corner Degradation & 60.4 & 95.2 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Per-class accuracy of ImageNet-pretrained ResNet-18 at varying degradation proportions.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \(p_{d}\) **and Type** & **Triangle** & **Square** & **Pentagon** & **Hexagon** & **Heptagon** & **Octagon** \\ \hline 0.3 Edge Degradation & 100.0 & 99.3 & 98.6 & 30.1 & 9.4 & 100.0 \\ 0.4 Edge Degradation & 97.5 & 98.3 & 89.5 & 1.7 & 0.0 & 100.0 \\ 0.5 Edge Degradation & 58.0 & 57.0 & 2.4 & 0.2 & 0.2 & 99.9 \\ 0.6 Edge Degradation & 2.4 & 27.0 & 0.4 & 0.0 & 0.1 & 100.0 \\ 0.7 Edge Degradation & 0.0 & 12.0 & 0.0 & 0.1 & 69.7 & 26.5 \\ 0.3 Corner Degradation & 100.0 & 100.0 & 100.0 & 100.0 & 11.4 & 0.0 \\ 0.4 Corner Degradation & 100.0 & 100.0 & 100.0 & 100.0 & 99.1 & 0.0 \\ 0.5 Corner Degradation & 100.0 & 100.0 & 100.0 & 100.0 & 32.3 & 0.1 \\ 0.6 Corner Degradation & 100.0 & 98.5 & 100.0 & 62.5 & 100.0 & 84.0 \\ 0.7 Corner Degradation & 99.9 & 89.9 & 53.5 & 99.8 & 84.6 & 0.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Per-class accuracy of FractalDB-pretrained ResNet-18 at varying degradation proportions.

**Visualizing Network Internals** To gain further intuition for neural network behavior, we study our learned ResNet-18 models by analyzing their corresponding Grad-CAM visualizations (Selvaraju et al., 2017). Figure 6 shows an illustrative example of the difference in gradients for the _true_ target concept flowing into the final convolutional layer between ImageNet-pretrained ResNet-18 and FractalDB-pretrained ResNet-18. On two randomly selected pentagons from our dataset and their corresponding 50% edge-degraded and 50% corner-degraded variants, the highlighted regions of the ImageNet model are less concentrated within the shape's radius than the highlighted regions of the FractalDB model, suggesting that fractal pretraining endowed ResNet-18 with a somewhat more robust ability to classify degraded shapes. Though not a full explanation for model behavior, we see such discrepancies as a potential indication that fractal pretraining enables models to learn and leverage more robust geometric features, though not necessarily human-like vertex features.

## 5 Conclusion

Inspired by cognitive science and the theory of Recognition-by-Components, we introduce a new perspective on the human-machine vision gap and investigate the notion of image recovery in degraded regular polygons. To do so, we develop the Automated Shape Recoverability Test pipeline for generating regular polygon sketches at scale with varying orders of degradation, which we open-source to the machine learning community. We then train common deep learning vision models for this simple task and find that their behavior fundamentally conflicts with how humans perceive images. Furthermore, we show that fractal pretraining further misaligns neural network and human behavior. Based on these results, we encourage further investigation into this fundamental conflict between human and machine vision.

Figure 6: Grad-CAM visualizations with respect to the true pentagon class for ResNet-18 on (a) 50% corner-degraded pentagon with an ImageNet-pretrained model, (b) 50% edge-degraded pentagon with an ImageNet-pretrained model, (c) 50% corner-degraded pentagon with a FractalDB-pretrained model, and (d) 50% edge-degraded pentagon with a FractalDB-pretrained model.
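For readers who want visualizations in the style of Figure 6, the following is a minimal hook-based Grad-CAM sketch in PyTorch; the six-class head and the choice of the last residual block mirror our setup, but the code is an illustrative reimplementation, not our exact script.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=6).eval()
acts, grads = {}, {}
layer = model.layer4[-1]  # final convolutional block
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

def grad_cam(x, cls):
    """Heatmap for class `cls` on a (1, 3, 224, 224) input batch `x`."""
    logits = model(x)
    model.zero_grad()
    logits[0, cls].backward()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((w * acts["a"]).sum(dim=1))        # weighted activations
    cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```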
2306.15858
Hierarchical Graph Neural Networks for Proprioceptive 6D Pose Estimation of In-hand Objects
Robotic manipulation, in particular in-hand object manipulation, often requires an accurate estimate of the object's 6D pose. To improve the accuracy of the estimated pose, state-of-the-art approaches in 6D object pose estimation use observational data from one or more modalities, e.g., RGB images, depth, and tactile readings. However, existing approaches make limited use of the underlying geometric structure of the object captured by these modalities, thereby, increasing their reliance on visual features. This results in poor performance when presented with objects that lack such visual features or when visual features are simply occluded. Furthermore, current approaches do not take advantage of the proprioceptive information embedded in the position of the fingers. To address these limitations, in this paper: (1) we introduce a hierarchical graph neural network architecture for combining multimodal (vision and touch) data that allows for a geometrically informed 6D object pose estimation, (2) we introduce a hierarchical message passing operation that flows the information within and across modalities to learn a graph-based object representation, and (3) we introduce a method that accounts for the proprioceptive information for in-hand object representation. We evaluate our model on a diverse subset of objects from the YCB Object and Model Set, and show that our method substantially outperforms existing state-of-the-art work in accuracy and robustness to occlusion. We also deploy our proposed framework on a real robot and qualitatively demonstrate successful transfer to real settings.
Alireza Rezazadeh, Snehal Dikhale, Soshi Iba, Nawid Jamali
2023-06-28T01:18:53Z
http://arxiv.org/abs/2306.15858v1
# Hierarchical Graph Neural Networks for Proprioceptive 6D Pose Estimation of In-hand Objects ###### Abstract Robotic manipulation, in particular in-hand object manipulation, often requires an accurate estimate of the object's 6D pose. To improve the accuracy of the estimated pose, state-of-the-art approaches in 6D object pose estimation use observational data from one or more modalities, e.g., RGB images, depth, and tactile readings. However, existing approaches make limited use of the underlying geometric structure of the object captured by these modalities, thereby, increasing their reliance on visual features. This results in poor performance when presented with objects that lack such visual features or when visual features are simply occluded. Furthermore, current approaches do not take advantage of the proprioceptive information embedded in the position of the fingers. To address these limitations, in this paper: (1) we introduce a hierarchical graph neural network architecture for combining multimodal (vision and touch) data that allows for a geometrically informed 6D object pose estimation, (2) we introduce a hierarchical message passing operation that flows the information within and across modalities to learn a graph-based object representation, and (3) we introduce a method that accounts for the proprioceptive information for in-hand object representation. We evaluate our model on a diverse subset of objects from the YCB Object and Model Set, and show that our method substantially outperforms existing state-of-the-art work in accuracy and robustness to occlusion. We also deploy our proposed framework on a real robot and qualitatively demonstrate successful transfer to real settings. ## I Introduction When humans interact with an object, vision and touch provide information for estimating the object's attributes (e.g., shape, position, and orientation). The nervous system frequently integrates sensory information from several sources, where the combined estimate is more accurate than each source individually [1, 2, 3]. For instance, when attempting to grab the handle of a coffee cup, we see and touch the handle. We also see and feel where our hands are based on the proprioceptive information signaled through muscles and joints [4, 5]. The combination of visual, haptic, and proprioceptive feedback allows us to accurately localize the coffee cup and the orientation of its handle. State-of-the-art object pose estimation models often use RGB-D observational data to compute an object's 6D pose [6, 7, 8]. In robotic applications, recent work on multimodal manipulation has shown significant improvements when incorporating tactile data into object reasoning tasks such as object 3D reconstruction [9], shape completion [10, 11], and pose estimation [12, 13]. However, these methods make limited use of the underlying geometric structure and disregard proprioceptive information embedded in the position of fingers. Depth information is typically included in 6D pose estimation models as an additional input to a convolutional neural network (CNN) architecture [6, 7, 8]. However, CNN architectures do not fully exploit the inherent 3D geometric structure in the depth data. Recently, graph representations have shown promising results in capturing the geometric structure of depth point clouds [14, 15, 16]. Despite the advantage of graph representation for shape completion [10, 11] and grasp stability prediction [17], most existing works on multimodal pose estimation have focused only on CNN architectures [12, 13].
CNN-based pose estimator models depend predominantly on visual features and underutilize geometric information, which results in significant performance degradation under occlusion. Earlier work on deep visuomotor policy learning [18, 19] used robot configuration as a complementary input to their model. However, incorporating proprioceptive information such as finger position for in-hand objects has not been explored in the literature. Providing proprioceptive information to the model potentially narrows down the object pose solution space by implicitly filtering out implausible poses that conflict with the hand configuration.

Fig. 1: Our model estimates the 6D pose of an in-hand object by learning a multimodal graph representation from sensor observations. The _Vision Graph_ is constructed from the RGB and the depth point cloud, shown in gray; similarly, the _Touch Graph_ is constructed from contact points, shown in distinct colors for each tactile sensor. These two graphs are also interconnected (dotted lines) to allow the information to flow within and across modalities.

In this work, we introduce a novel framework for in-hand object pose estimation that combines visual and tactile sensor readings and accounts for proprioceptive information. We propose learning a multimodal graph representation of an in-hand object on two modality levels: a vision graph and a touch graph (see Fig. 1). The Vision Graph encodes RGB-D observations while the Touch Graph encodes tactile readings and registers each tactile reading with its corresponding location on the robot's fingers, encapsulating the proprioceptive signal. We introduce a hierarchical message passing operation that updates the multimodal graph representation by flowing the information within and across the modality graphs. The updated multimodal graph representation can then be used to decode an accurate 6D pose of the object. Our key contributions are: (1) the proposal of a hierarchical graph neural network architecture for combining vision and touch observations, (2) the introduction of a hierarchical message passing operation to learn a graph-based object representation on multiple modality levels simultaneously through multiple rounds of inter- and intra-modality message passing, and (3) the use of proprioceptive information for in-hand 6D object pose estimation.

## II Related Work

**Object Pose from Vision** Early learning-based pose estimation methods such as Xiang _et al._[20] explored using RGB input to a convolutional model to directly predict the 6D object pose. Recent work has shown promising improvements by using RGB-D observations [6, 7, 8]. Li _et al._[21] included depth data as an additional channel to a convolutional pose estimator. Wang _et al._[6] proposed fusing RGB and depth information at pixel-level to estimate the 6D pose of objects [20]. Song _et al._[8] used a hybrid representation based on keypoints to improve pose estimation based on symmetries in the objects. Although RGB-D pose estimation methods are well developed, their performance degrades significantly under occlusion.

**Object Reasoning from Vision and Touch** In interactive robotic settings, such as in-hand object manipulation, multimodal observational data such as vision and touch are available to reason about the characteristics of the object [9, 10, 11, 13, 22]. Watkins-Valls _et al._[11] and Tahoun _et al._[9] used tactile sensors along with visual RGB-D information to reconstruct 3D geometries of objects under occlusion.
Rustler _et al._[10] also showed that 3D shape completion is significantly enhanced by actively re-grasping the object at occluded areas. In an application more relevant to our work, Dikhale _et al._[12] proposed a CNN-based framework for 6D pose estimation to combine RGB-D and tactile contact points and showed major improvements in the pose estimation of occluded objects.

**Graph-Based Object Reasoning** Recently, graph neural networks have been applied to depth point clouds to process the geometric information present in such data. A graph network [23, 24] is a mapping from an input graph to an output graph with updated attributes. Graph neural networks update the graph by learning to perform message passing operations in the graph to propagate information between nodes through edges [24, 25]. Shi _et al._[14] proposed the use of graph representation for object detection based on depth data. Each node in the graph represents a point in the depth point cloud and nodes are connected through edges based on spatial proximity. For 6D pose estimation, Zhou _et al._[26] employed graph convolution [27] on RGB-D data and showed improvements over methods with no graph representation such as Wang _et al._[6]. The literature suggests a significant advantage of graph networks for object reasoning based on RGB-D data; however, incorporating tactile data with a graph-based model is an underexplored topic. A recent work by Garcia-Garcia _et al._[17] performed grasp stability binary classification by building a graph for pressure readings of tactile sensors. In their method, each tactile sensor is modeled as a separate graph where nodes represent taxels. In comparison, our method aims to estimate the 6D pose of an object by learning a multimodal graph representation that jointly describes the vision and all tactile readings. This allows for capturing the geometric information of each modality as well as their relative geometrical information. We show that our proposed framework reliably estimates an accurate 6D pose and is robust to occlusions.

## III Problem Definition

For an object instance within a category (e.g., bottle, can, box, etc.), we refer to category-level 6D pose estimation as the task of predicting the 6D pose \(p\in SE(3)\) of the object in the scene with respect to the camera coordinate frame. The 6D pose \(p=[R|t]\) consists of a rotation \(R\in SO(3)\) and a translation \(t\in\mathbb{R}^{3}\). We assume access to observational data in two modalities: 1. _Vision_: a single-view RGB-D image of the scene that contains RGB scene information \(I\in\mathbb{R}^{h\times w\times 3}\) and a point cloud of the segmented object \(X_{V}\in\mathbb{R}^{3\times N_{v}}\) from the depth channel. 2. _Touch_: a set of contact points \(X_{T}\) from \(N_{s}\) individual tactile sensors in the form of object surface point clouds \(X_{T}=\{X_{T_{i}}\in\mathbb{R}^{3\times N_{t,i}}\text{ for }i=1\dots N_{s}\}\). We also assume access to the location of all tactile sensors \(X_{S}\in\mathbb{R}^{3\times N_{s}}\) with respect to the camera frame.

## IV Model

We propose _Hierarchical Graph Neural Network_ (HGNN), a graph neural network framework with an encoder-processor-decoder architecture [28, 29, 24] to estimate the 6D pose of in-hand objects by combining vision and touch modalities and accounting for proprioception (Fig. 2).
We introduce learning a hierarchical graph representation of objects on two modality levels: a vision graph, \(\mathcal{G}_{V}=(\mathcal{V}_{V},\mathcal{E}_{V})\), that encodes observations from vision, and a touch graph, \(\mathcal{G}_{T}=(\mathcal{V}_{T},\mathcal{E}_{T})\), that encodes touch observations and also encapsulates a proprioceptive signal that registers each tactile reading with the location of its corresponding sensor. These two graphs are interconnected to enable cross-modality information exchange.

### _Model Architecture_

Our network architecture, as shown in Figure 2, can be divided into three parts: (1) a graph encoder that constructs a hierarchical graph representation of the object, (2) a graph processor that updates the graph representation through hierarchical message passing, and (3) a pose decoder that estimates the 6D pose of the object based on a readout obtained from the updated graph nodes.

#### IV-A1 The Graph Encoder

The graph encoder learns a mapping from the observations to a multimodal graph representation of the object (see Fig. 2-a). First, vision and touch point clouds from raw observation are filtered using voxel downsampling: a 3D voxel grid overlays the points and the centroid of each voxel is used to represent the points within that voxel. This preprocessing ensures that our solution is independent of the point cloud density. After computing the downsampled vision \(X_{V}\) and touch \(X_{T}\) points, the point features (\(F_{V}\), \(F_{T}\)) in each modality and the proprioceptive features (\(F_{S}\)) are encoded into a graph representation. These features (\(F_{V}\), \(F_{T},F_{S}\)) contain positional and visual information extracted from the observation. The positional information is based on the location of each point in the 3D space. The visual information is an auxiliary feature to fuse additional relevant information from the image. Visual features are obtained from a bounding box cropped around the 2D projection of the 3D points on the RGB image. We refer to the visual features of the points as auxiliary since they only provide a visual context to the model and are not always directly observable. In the vision graph \(\mathcal{G}_{V}\), each node \(\mathbf{n}_{V}^{i}\in\mathcal{V}_{V}\) represents a point in the vision point cloud \(X_{V}\). Each vision node embedding is defined as \(\mathbf{n}_{V}^{i}=[x_{V}^{i},\varphi_{V}^{i},\varphi_{O}]\), where \(x_{V}^{i}\in X_{V}\) denotes the 3D coordinates of the point. A local visual feature vector \(\varphi_{V}^{i}\) is obtained from a convolutional encoder applied to a fixed-size bounding box \(\mathrm{bb}_{V}^{i}\) around the 2D projection of \(x_{V}^{i}\) on the observed RGB image. A global visual feature vector \(\varphi_{O}\), obtained from a convolutional encoding of the object segment's image \(\mathrm{bb}_{O}\), is also provided in each node embedding (Fig. 3). The touch graph \(\mathcal{G}_{T}\) carries positional and visual features associated with the tactile readings along with a proprioceptive signal that registers each contact point to its corresponding tactile sensor. Each node embedding in the touch graph is defined as \(\mathbf{n}_{T}^{i}=[x_{T}^{i},\varphi_{T}^{i},\varphi_{O},\mathbf{\pi}_{S}^{i}]\), where \(x_{T}^{i}\in X_{T}\) is the coordinates of the point, \(\varphi_{T}^{i}\) is the local visual feature from the point's bounding box \(\mathrm{bb}_{T}^{i}\) on the image, and \(\varphi_{O}\) is the global visual feature of the object (Fig. 3).
For a given node \(x_{T}^{i}\) in the touch graph, the proprioceptive signal consists of positional and visual features of the corresponding tactile sensor, \(\mathbf{\pi}_{S}^{i}=[x_{S}^{i},\varphi_{S}^{i}]\), where \(x_{S}^{i}\in X_{S}\) denotes the coordinates of the tactile sensor that observed the point \(x_{T}^{i}\) and \(\varphi_{S}^{i}\) is the visual feature obtained from a convolutional encoder applied to a bounding box \(\mathrm{bb}_{S}^{i}\) around the 2D projection of \(x_{S}^{i}\) on the observed image (Fig. 3). Edge embeddings in both graphs, \(\mathbf{e}_{V}^{ij}\in\mathcal{E}_{V}\) and \(\mathbf{e}_{T}^{ij}\in\mathcal{E}_{T}\), are set by spatial proximity: a node pair \((i,j)\) is connected if \(|x_{i}-x_{j}|<r\). For the connected nodes, the edge embeddings are defined as \(|x_{i}-x_{j}|\) to capture the relative spatial relation. To enable cross-modality information exchange, we also define connectivity between the two graphs through inter-graph edges \(\mathbf{e}_{V+T}^{ij}\), where each node in a modality graph is connected to the k-nearest neighbor nodes from the other modality graph.

Fig. 2: Approach Overview: Our framework estimates the 6D pose of an in-hand object with an encoder-processor-decoder architecture. (a) The graph encoder maps the observation from vision (RGB and depth point cloud) and touch (contact point clouds) to a multimodal graph representation while accounting for the proprioceptive information from the hand. (b) The graph processor (HGNN) performs multiple rounds of hierarchical intra-modality and inter-modality message passing to update the graph. (c) The pose decoder extracts a readout from the updated graph and estimates the object's 6D pose.

#### IV-A2 The Graph Processor

In a graph network \(\mathcal{G}\), one round of message passing (\(\mathrm{mp}\)) updates the graph embeddings by propagating the information in the graph as \(\mathcal{G}^{\prime}\leftarrow\mathrm{mp}(\mathcal{G})\). The message passing operation can be summarized as,

\[\begin{split}{\mathbf{e}^{ij}}^{\prime}&\gets f_{\text{e}}(\mathbf{e}^{ij},\mathbf{n}^{i},\mathbf{n}^{j})\\ {\mathbf{n}^{k}}^{\prime}&\gets f_{\text{n}}(\mathbf{n}^{k},\sum_{i\in\mathcal{N}(k)}{\mathbf{e}^{ik}}^{\prime})\end{split} \tag{1}\]

where an edge-specific function \(f_{\text{e}}\) updates each edge embedding; then, for each node \(k\), the node-specific function \(f_{\text{n}}\) aggregates the updated edge information from its neighborhood \(\mathcal{N}(k)\) and updates the node embedding. Node- and edge-specific functions are multilayer perceptrons (MLPs). We propose Hierarchical Graph Neural Networks (HGNN) with a hierarchical message passing scheme (see Fig. 2-b) where first the information propagates at each modality level and then across modalities. A hierarchical message passing round can be described as,

\[\begin{split}\mathcal{G}_{V}{}^{\prime}&\leftarrow\operatorname{mp}_{V\leftrightarrow V}(\mathcal{G}_{V})\\ \mathcal{G}_{T}{}^{\prime}&\leftarrow\operatorname{mp}_{T\leftrightarrow T}(\mathcal{G}_{T})\\ \mathcal{G}_{V}{}^{\prime\prime},\mathcal{G}_{T}{}^{\prime\prime}&\leftarrow\operatorname{mp}_{V\leftrightarrow T}(\mathcal{G}_{V}{}^{\prime},\mathcal{G}_{T}{}^{\prime})\end{split} \tag{2}\]

where vision and touch graph nodes communicate within each graph through _intra-modality_ message passing (\(\operatorname{mp}_{V\leftrightarrow V}\), \(\operatorname{mp}_{T\leftrightarrow T}\)) and then exchange information through _inter-modality_ message passing (\(\operatorname{mp}_{V\leftrightarrow T}\)).
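A minimal PyTorch sketch of Eqs. (1)-(2) follows; the latent width, the ReLU MLPs, and the stacked-node handling of the cross-modality edges are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class MP(nn.Module):
    """One message passing round, Eq. (1): edge update f_e, then node
    update f_n on the sum of updated incoming edge messages."""
    def __init__(self, d_n, d_e):
        super().__init__()
        self.f_e = nn.Sequential(nn.Linear(d_e + 2 * d_n, d_e), nn.ReLU())
        self.f_n = nn.Sequential(nn.Linear(d_n + d_e, d_n), nn.ReLU())

    def forward(self, n, e, snd, rcv):  # n: (N, d_n), e: (E, d_e)
        e = self.f_e(torch.cat([e, n[snd], n[rcv]], dim=-1))
        agg = torch.zeros(n.size(0), e.size(1), device=n.device)
        agg.index_add_(0, rcv, e)       # sum over the neighborhood N(k)
        return self.f_n(torch.cat([n, agg], dim=-1)), e

class HierarchicalRound(nn.Module):
    """Eq. (2): intra-modality passes, then an inter-modality pass along
    the vision-touch edges, indexed over the stacked node set."""
    def __init__(self, d=128):
        super().__init__()
        self.mp_vv, self.mp_tt, self.mp_vt = MP(d, d), MP(d, d), MP(d, d)

    def forward(self, nv, ev, sv, rv, nt, et, st, rt, ex, sx, rx):
        nv, ev = self.mp_vv(nv, ev, sv, rv)   # vision <-> vision
        nt, et = self.mp_tt(nt, et, st, rt)   # touch <-> touch
        n = torch.cat([nv, nt], dim=0)        # cross edges index this stack
        n, ex = self.mp_vt(n, ex, sx, rx)     # vision <-> touch
        return n[:nv.size(0)], ev, n[nv.size(0):], et, ex
```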
We perform \(L\) rounds of hierarchical message passing, where each round is applied sequentially to the output of the previous round. After the last message passing round, the multimodal graph representation is updated to \(\tilde{\mathcal{G}}\).

#### IV-A3 The Pose Decoder

The updated multimodal graph representation \(\tilde{\mathcal{G}}\) implicitly reflects the object pose information. To extract the object pose, a graph readout is obtained by collecting the updated node embeddings \(\tilde{\mathbf{n}}\) (see Fig. 2-c). We then use a simple MLP pose estimator function to compute a node-wise 6D pose \([\hat{R}_{i}|\hat{t}_{i}]\), and a confidence measurement \(\hat{c}_{i}\) for each node-wise pose estimate [6]. The final pose of the object is set to the node-wise pose estimate with maximum confidence: \([\hat{R}|\hat{t}]=[\hat{R}_{i^{*}}|\hat{t}_{i^{*}}]\) with \(i^{*}=\operatorname{argmax}_{i}(\hat{c}_{i})\).

### _Model Training_

We use a 3D model of the object to build a loss function to train our framework in an end-to-end manner. First, the object model is voxelized to get a total number of \(P\) object surface points \(x_{p}\). Each node-wise estimated pose \([\hat{R}_{i}|\hat{t}_{i}]\) is used to transform the object model, which is then compared with a transformation using the ground-truth object pose \([R_{gt}|t_{gt}]\). A node-wise pose estimation loss is defined as,

\[L_{i}^{\mathbf{n}}=\frac{1}{P}\sum_{p=1...P}||(R_{gt}x_{p}+t_{gt})-(\hat{R}_{i}x_{p}+\hat{t}_{i})|| \tag{3}\]

The total loss is computed as the average of the node-wise losses \(L_{i}^{\mathbf{n}}\), each weighted by its corresponding confidence estimate \(\hat{c}_{i}\). The top \(K\) nodes are pooled into the total loss [30, 31, 32]. A regularization term is also added for the confidence estimates to achieve a more balanced confidence measurement among all nodes [6]. The total loss can be summarized as,

\[L=\frac{1}{K}\sum_{i=1...K}(L_{i}^{\mathbf{n}}\hat{c}_{i}-\lambda\log(\hat{c}_{i})) \tag{4}\]

## V Experiments

Our experiments are motivated by the following questions: (1) Does the HGNN model learn to accurately estimate the object pose? (2) How effective is incorporating the proprioceptive information? (3) Does hierarchical message passing improve performance?

### _Environments_

#### V-A1 Synthetic

We use the VisuoTactile synthetic dataset from Dikhale _et al._[12] to train our framework. In this dataset, a subset of 11 YCB objects [33] is selected based on their graspability. A total number of \(20\)K distinct in-hand poses are simulated per object. In particular, Unreal Engine 4.0 [34] has been used to render photo-realistic observational data of a 6 DoF robot arm with a 4-fingered gripper (Allegro Hand, SimLab Co., Ltd.) equipped with 12 tactile sensors (3 per finger). A main RGB-D camera captures images of the robot holding an object. Each tactile sensor captures object surface contact points in a point cloud format. Each data sample is generated by randomizing the in-hand object pose, the robot fingers' configuration, and the robot arm orientation and position. Domain randomization is also applied for the color and pattern of the background and workspace desk.

#### V-A2 Real Robot

After training our model on the synthetic data, we deploy it on a multi-finger gripper (Allegro Hand, SimLab Co., Ltd.) attached to a Sawyer robot. The gripper has 4 fingers, 16 joints, and 3 tactile sensors (uSkin, XELA Robotics Co., Ltd) on each finger to capture the object's surface contact points (224 taxels in total).
An RGB-D camera (Kinect2, Microsoft) is used as the main camera to capture RGB-D images. We use real YCB object samples to test the performance of our framework on the real robot (Fig. 1 shows the real robot setup).

Fig. 3: Visual Features: We compute an auxiliary local visual feature vector (\(\varphi_{V}\), \(\varphi_{T}\), \(\varphi_{S}\)) for each observed 3D point from vision, touch, and tactile sensors' coordinates (\(x_{V}\), \(x_{T}\), \(x_{S}\)) by applying a convolutional encoder to a bounding box cropped around the projection of that point on the RGB image (\(\operatorname{bb}_{V}\), \(\operatorname{bb}_{T}\), \(\operatorname{bb}_{S}\)). A global visual feature vector \(\varphi_{O}\) is also computed based on the object segment.

### _Baselines and Ablations_

We compare our approach (HGNN) with two baselines:

#### V-B1 VisuoTactile Fusion (ViTa)

We compare with a recent multimodal category-level 6D object pose estimation method by Dikhale _et al._[12]. Vision and tactile data are processed in two separate channels. In the vision channel, RGB and depth data are fused at the pixel level. The tactile channel combines the depth data and tactile contact points. The outputs of the two channels are fused using 1D convolutional layers to estimate the 6D pose.

#### V-B2 Point Cloud Graph Neural Network (Point-GNN)

We modified a recent graph-based object detection method by Shi _et al._[14] to build a graph-based 6D pose estimation baseline. A graph representation is built where each node is a point in the point cloud. Unlike our hierarchical scheme, Shi _et al._[14] applied message passing across all nodes to update the graph. We use a pose decoder architecture similar to ours (section IV-A3) to estimate the object 6D pose.

We also ablate our model to examine our formulation and single out the contribution of each of its components:

#### V-B3 Visual Features (HGNN-NoVis)

In this model, we only rely on the 3D coordinates of the observed points (depth, touch, and proprioception) and exclude all auxiliary visual features obtained from convolutional encoders (\(\varphi_{O}\), \(\varphi_{V}\), \(\varphi_{T}\), and \(\varphi_{S}\)).

#### V-B4 Proprioception (HGNN-NoProp)

In this model, we exclude all proprioceptive information \(\mathbf{\pi}_{S}\) (positional and visual features of the tactile sensors) from the graph representation of the observational data.

#### V-B5 Hierarchical Message Passing (HGNN-NoHrch)

We discard the hierarchical inter- and intra-modality message passing from equation (2). Instead, we perform \(L=3\) rounds of message passing across all nodes with no hierarchical scheme.

### _Metrics_

We measure the quality of an estimated 6D pose \(\hat{p}=[\hat{R}|\hat{t}]\) using two metrics: position error and angular error. The position error is computed as the L2 distance between the estimated \(\hat{t}\) and ground truth \(t_{gt}\) translation vectors. The angular error is defined as \(\cos^{-1}(2\langle\hat{q},q_{gt}\rangle^{2}-1)\) using the inner product of the estimated \(\hat{q}\) and ground truth \(q_{gt}\) rotation quaternions [35].
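The two metrics are straightforward to compute; a short sketch (assuming unit quaternions, with errors reported in cm and degrees as in our tables) is:

```python
import numpy as np

def position_error(t_hat, t_gt):
    """L2 distance between estimated and ground-truth translations."""
    return np.linalg.norm(t_hat - t_gt)

def angular_error(q_hat, q_gt):
    """cos^-1(2<q_hat, q_gt>^2 - 1), in degrees; invariant to the
    q vs. -q double cover of rotation quaternions."""
    d = float(np.dot(q_hat, q_gt))
    return np.degrees(np.arccos(np.clip(2.0 * d * d - 1.0, -1.0, 1.0)))
```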
### _Implementation Details_

All models are trained on the synthetic dataset. The dataset is split into training and testing sets, \(80\%\) and \(20\%\), respectively. We report the performance of models using the testing data. The bounding box sizes are set to \(\mathrm{bb}_{V},\mathrm{bb}_{T}:8\times 8\) pixels and \(\mathrm{bb}_{S}:64\times 64\) pixels. A total of \(L=3\) rounds of message passing are performed in equation (2). All the concatenated features in the embeddings of the vision and touch graphs are mapped to a fixed-size 128-dimensional latent vector using MLPs. We set \(\lambda=1.5\)e-\(2\) and \(K=128\) in equation (4).

## VI Results

### _Comparison with Baselines_

Figure 4 shows the performance of our model measured in angular (deg) and position (cm) error. Across all objects, our model _HGNN_ consistently outperforms _Point-GNN_ and _ViTa_ by a large margin. On average, _HGNN_ outperforms _Point-GNN_ by \(53\%\) (\(0.12\) cm) in position and \(57\%\) (\(2.20^{\circ}\)) in angular error. The improvement margin is larger in comparison with _ViTa_: \(72\%\) (\(0.27\) cm) in position and \(80\%\) (\(6.7^{\circ}\)) in angular error. Another important observation comes from examining the effect of occlusion on the performance of our model. We calculate the occlusion level from the visibility of object model points to the main camera: each point in the model is labeled as visible if there exists a point within a fixed threshold in the main camera's point cloud using the k-nearest neighbor algorithm, and the occlusion level is the percentage of model points labeled as not visible. Figure 5 shows the results of this analysis. Our model is effectively more robust to increasing occlusion levels compared to the baselines.

Fig. 4: 6D pose estimation performance.

Fig. 5: 6D pose estimation performance under increasing levels of occlusion.

### _Ablations_

Table I summarizes the ablation experiment results.

#### VI-B1 Effects of Proprioceptive Information

We remove the proprioceptive information from the graph representation in _HGNN-NoProp_ and observe a significantly inferior performance compared to _HGNN_ (\(23\%\) in position and \(33\%\) in angular error).

#### VI-B2 Effects of Visual Features

We exclude the auxiliary visual features obtained from bounding boxes and only rely on the 3D coordinates of the observed points in _HGNN-NoVis_. This results in a significantly less accurate pose estimation compared to _HGNN_ (\(18\%\) in position and \(15\%\) in angular error), which validates that providing visual context for the points enhances the learned object representation.

#### VI-B3 Effects of Hierarchical Message Passing

We discard the hierarchical message passing and use simple message passing across all modalities in _HGNN-NoHrch_. No hierarchy in message passing results in significantly inferior performance (\(15\%\) in position and \(70\%\) in angular error). This indicates that a hierarchical scheme (i.e., inter- and intra-modality message passing) has a major advantage for multimodal graph representation learning.

#### VI-B4 Effect of Tactile Readings

We assess the performance of our model over varying amounts of tactile data. Figure 6 shows the average accuracy of our model as a response to limiting the maximum number of tactile contact points available per sensor. We notice that as the number of points per sensor increases (i.e., the touch observation gets richer), our model increasingly estimates a more accurate pose. This shows the importance of incorporating the touch modality on top of vision to enhance the quality of the estimated pose.

### _Real Robot_

We deployed our HGNN network and the ViTa baseline network simultaneously in real-time on the hardware setup described in section V-A2. During deployment, both our network and the baseline receive the same input. Note that the networks are trained on the synthetic dataset only.
Figure 7 is an example of the pose estimation performance under varying levels of occlusion. We observe that our model (blue), unlike ViTa (cyan), is robust to a wide range of occlusions. The accompanying video provides further demonstration of the results presented in this section. ## VII Conclusion In this paper, we proposed a graph-based framework for estimating the 6D pose of in-hand objects based on vision and touch observations and accounting for proprioceptive information. Motivated by human motor behavior, we introduced a novel hierarchical approach for learning a graph representation of the observational data on two modality levels using hierarchical message passing that allows the information to flow within and across modalities. We showed that our approach accurately estimates the object's pose and is robust to heavy occlusions. We compared our model to existing work and showed that it achieves state-of-the-art performance on category-level 6D pose estimation of YCB objects. Moreover, we deployed our model to a real robot and showed successful transfer to pose estimation in the real setting. One potential limitation in our formulation is that incorporating proprioceptive information leads to learning a pose estimator that is not gripper-agnostic. Although we showed that including proprioception significantly enhances the pose estimation accuracy, we speculate that for optimal performance the model needs to be retrained based on the gripper characteristics (e.g., number of fingers or number of sensors per finger). An interesting future direction is incorporating other sensory information such as pressure into our framework. Finally, we hope our general approach inspires future research on multimodal object representation learning. \begin{table} \begin{tabular}{|l|c c|} \hline Model & Position Error (cm) & Angular Error (degree) \\ \hline HGNN (Ours) & \(\mathbf{0.104\pm 0.002}\) & \(\mathbf{1.609\pm 0.081}\) \\ HGNN-NoVis & \(0.127\pm 0.003\) & \(1.907\pm 0.111\) \\ HGNN-NoProp & \(0.136\pm 0.002\) & \(2.390\pm 0.122\) \\ HGNN-NoHrch & \(0.154\pm 0.002\) & \(2.749\pm 0.108\) \\ \hline \end{tabular} \end{table} TABLE I: Ablation Results. Fig. 6: Effect of the varying maximum number of tactile points per sensor on HGNN pose estimation performance. Fig. 7: Pose estimation performance under different levels of occlusion for HGNN (blue), ViTa (cyan) compared with the ground-truth (gray).
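For completeness, the confidence-weighted training objective of Eqs. (3)-(4) can be sketched as follows; the tensor shapes and the selection of the top-\(K\) nodes by confidence are our illustrative assumptions.

```python
import torch

def pose_loss(R_hat, t_hat, c_hat, R_gt, t_gt, pts, K=128, lam=1.5e-2):
    """Eqs. (3)-(4): node-wise ADD-style losses, confidence-weighted and
    averaged over the top-K most confident nodes.
    R_hat: (N, 3, 3), t_hat: (N, 3), c_hat: (N,); pts: (P, 3) model points."""
    pred = pts @ R_hat.transpose(1, 2) + t_hat[:, None, :]  # (N, P, 3)
    gt = pts @ R_gt.T + t_gt                                # (P, 3)
    L_n = (pred - gt).norm(dim=-1).mean(dim=-1)             # Eq. (3), per node
    idx = torch.topk(c_hat, K).indices
    return (L_n[idx] * c_hat[idx] - lam * torch.log(c_hat[idx])).mean()
```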
2301.04702
Physics Simulation Via Quantum Graph Neural Network
We develop and implement two realizations of quantum graph neural networks (QGNN), applied to the task of particle interaction simulation. The first QGNN is a speculative quantum-classical hybrid learning model that relies on the ability to directly utilize superposition states as classical information to propagate information between particles. The second is an implementable quantum-classical hybrid learning model that propagates particle information directly through the parameters of $RX$ rotation gates. A classical graph neural network (CGNN) is also trained in the same task. Both the Speculative QGNN and CGNN act as controls against the Implementable QGNN. Comparison between classical and quantum models is based on the loss value and accuracy of each model. Overall, each model had a high learning efficiency, in which the loss value rapidly approached zero during training; however, each model was moderately inaccurate. Comparing performances, our results show that the Implementable QGNN has a potential advantage over the CGNN. Additionally, we show that a slight alteration in hyperparameters in the CGNN notably improves accuracy, suggesting that further fine tuning could mitigate the issue of moderate inaccuracy in each model.
Benjamin Collis, Saahil Patel, Daniel Koch, Massimiliano Cutugno, Laura Wessing, Paul M. Alsing
2023-01-11T20:21:10Z
http://arxiv.org/abs/2301.04702v2
# Physics Simulation Via Quantum Graph Neural Network ###### Abstract We develop and implement two realizations of quantum graph neural networks (QGNN), applied to the task of particle interaction simulation. The first QGNN is a speculative quantum-classical hybrid learning model that relies on the ability to directly utilize superposition states as classical information to propagate information between particles. The second is an implementable quantum-classical hybrid learning model that propagates particle information directly through the parameters of \(RX\) rotation gates. A classical graph neural network (CGNN) is also trained in the same task. Both the Speculative QGNN and CGNN act as controls against the Implementable QGNN. Comparison between classical and quantum models is based on the loss value and accuracy of each model. Overall, each model had a high learning efficiency, in which the loss value rapidly approached zero during training; however, each model was moderately inaccurate. Comparing performances, our results show that the Implementable QGNN has a potential advantage over the CGNN. Additionally, we show that a slight alteration in hyperparameters in the CGNN notably improves accuracy, suggesting that further fine tuning could mitigate the issue of moderate inaccuracy in each model. ## I Introduction Irregular graph-based machine learning is inherently difficult due to the lack of symmetry among the nodes and edges that constitute the graph [1]. Convolutional neural networks (CNN) that thrive in the context of grid-based data are ineffective and inappropriate to use in this context, as they lack a straightforward method of incorporating irregular data [1]. This is exemplified in cases where a single node may have hundreds to thousands of edges, whereas its neighbor only has one. Recent pursuit of a neural network model that adapts to the complexities implicit in irregular graph-based systems, e.g., social networks [2], molecule structures [3], particle interactions [4], etc., has resulted in the development of a plethora of graph-based machine learning models which are all defined under the general term "graph neural network" (GNN) [5]. Though containing a spectrum of deviations, the core feature of these models is that they exchange information between nodes via vector messages, a process dubbed message passing, and then update via neural network [1]. It is this core feature that has led to the success of GNNs, as evidenced by their ability to excel at a variety of tasks including predictions ranging from traffic [6] to the chemical properties of molecules [3], knowledge graph reasoning [7], and particle simulation [4]. In essence, the utility of GNNs encompasses a majority of learning problems that can be represented as a graph containing meaningful connections between nodes. The vagueness here is appropriate due to the extensive number of environments where GNNs are applicable [1; 5]. Quantum machine learning has likewise emerged recently with developments in quantum hardware sophistication and capacity [8], and is itself a sub-field of machine learning that has garnered increased interest over the last decade. Being relatively new and having an excess of possible realizations, there is a lack of formal definition for the topic [9]. In general, the process involves traditional learning via quantum information processing, where either parameters are optimized or a decision function is obtained [9].
For the purposes of this study, the quantum machine learning aspects are reserved to encoding, processing, and decoding the data via parameterized quantum circuits, while the remaining parts of the algorithms, including calculating the loss function, back-propagation, etc., are done classically. Thus, the learning models presented here represent hybrid quantum-classical algorithms. A general pursuit in quantum algorithms is quantum supremacy [10]. Yet prior to any attempt at such a lofty claim, it must be ascertained whether a quantum analog of a classical algorithm is obtainable. That is the main pursuit of this study; following confirmation of this, we consider performance comparisons between the classical and quantum cases. We begin by describing and implementing two quantum graph neural network (QGNN) learning models. In particular, the QGNN consists of three sections of parameterized quantum circuits (PQC): encoder, processor, and decoder. The encoder expands the initial superposition state by inducting additional qubits into it, and the decoder pools that information from the larger number of qubits into the desired smaller amount (pooling from six qubits to two). The processor in between is responsible for the message passing, utilizing a quantum-based interaction network (IN) to send information between nodes [4]. The difference between the two developed QGNNs is that the first is a speculative model, while the second is implementable. Specifically, the speculative model relies on being able to directly take and store the qubits' superposition amplitude states and to use them classically. This is not possible to directly implement on a quantum computer, as a superposition state can only be approximated via statistical analysis of numerous measurements [11]. For the speculative circuit, the statistical analysis would have to be implemented seven times for each input vector, where seven is the number of sub-circuits that constitute the overall quantum circuit, excluding the decoder. Supposing \(1,000\) measurements per approximation to reconstruct the superposition state, the speculative circuit would need to be run \(7,000\) times per input vector. Considering the number of input vectors for this study is approximately \(19,000\), this would require \(133\) million runs of the quantum computer for a single epoch. Furthermore, each run would require use of a subroutine to determine the sign of each superposition state's amplitude [11; 12], in addition to needing a subroutine to generate the expectation value that constitutes the quantum circuit's relevant output [13]. This is all very impractical for actual implementation on quantum hardware; thus, this model is dubbed speculative. Regardless, this quantum algorithm was designed to be the closest analog to the classical, and thus acts as a control. The Implementable QGNN is fully realizable on quantum hardware. However, due to considerable gate depth, the results shown in this study were achieved via a quantum circuit simulator. Both models, referred to as the Speculative and Implementable QGNNs throughout this paper, correlate the output of a particular qubit to its expectation value following application of a Pauli-Z observable to that particular qubit [14; 15]. Actual use of the Implementable QGNN on a quantum computer requires a non-trivial subroutine to approximate the observable's expectation value [13].
However, as this is only necessary at the end of the circuit and only requires application for two qubits in this study, it is considered implementable. The utility of these learning models is examined in the context of particle interaction simulation; in particular, we consider the case of point particles falling under the influence of gravity within a box. This study contains the following layout. Section II covers the learning algorithms. Section III covers the PQCs of the encoder, processor, and decoder. Additionally, it goes over the method of encoding the data, and it describes the progression of data from input to output for both QGNNs. Section IV covers the results of the QGNNs and CGNN. Section V concludes this study by considering the results and offering potential paths for future research.

## II Learning Model

### Overview

The learning model implemented in this study is based on that used by Sanchez-Gonzalez et al. [16]. It includes three sections: the encoder, the processor, and the decoder. The encoder takes the initial vector input and expands it into a higher dimensional latent space. The processor then processes the expanded data through its interaction network (IN) for a select \(n\) number of steps. Each step corresponds to a particular node receiving a "message" from nodes \(n\) edges away. Lastly, the decoder receives this processed data and outputs a prediction [16].

### Classical Interaction Network

The message passing property of the classical graph neural network (GNN) used in this project is obtained through use of the IN learning model. It is the same as that described by Battaglia et al. [4], and is highly similar to the Graph Network (GN) learning model described by Sanchez-Gonzalez et al. [16]. A brief description of the classical IN is provided here, while a more in-depth analysis can be found in Battaglia et al.'s work [4]. Following this are the descriptions of the GNN quantum analogs, the quantum graph neural networks (QGNN). Appropriately, each QGNN has a unique quantum interaction network, each analogous to the classical one.

Figure 1: Pseudocode of the classical interaction network algorithm.

As described by Battaglia et al. [4], the classical IN is given below.

\[IN_{Classical}=\phi_{O}(a(G,\phi_{R}(m(G)))) \tag{1}\]

The IN contains two neural networks, \(\phi_{O}\) and \(\phi_{R}\). Respectively, these represent the node and edge state update functions. The other two functions are the marshalling functions \(a\) and \(m(G)\), which concatenate the data supplied to them [4]. For \(m(G)\), this data consists of \(G\), which is described as follows:

\[G=\langle O,R\rangle=\langle\{o_{j}\}_{j=1\ldots N_{O}},\langle\{R_{r}\}_{k},\{R_{s}\}_{k},\{R_{a}\}_{k}\rangle_{k=1\ldots N_{R}}\rangle\,, \tag{2}\]

where \(O\) is the set of \(N_{O}\) nodes in the graph with state vector length \(O_{l}\), and \(R\) is the set of \(N_{R}\) directed edges with state vector length \(R_{l}\). Thus, \(O\) is a matrix of size \(O_{l}\times N_{O}\). The set \(R\) can be further decomposed into the triple set \(R_{r}\), \(R_{s}\), \(R_{a}\), where for a given pair of nodes connected by a directed edge, \(R_{r}\) is the receiver node, \(R_{s}\) is the sender node, and \(R_{a}\) is the state of that edge. \(R_{r}\), \(R_{s}\), and \(R_{a}\) are the respective matrix representations that contain all the receiver, sender, and edge state information of the graph. \(R_{r}\) and \(R_{s}\) contain only 0s and 1s, and are each of size \(N_{O}\times N_{R}\), where row index \(j\) corresponds to node \(o_{j}\), and column index \(k\) corresponds to edge \(\{R_{a}\}_{k}\).
\(R_{r}\) and \(R_{s}\) contain only 0s and 1s, and are each of size \(N_{O}\times N_{R}\), where row index \(j\) corresponds to node \(o_{j}\), and column index \(k\) corresponds to edge \(\{R_{a}\}_{k}\). For \(R_{r}\), a 1 corresponds to a node being the receiver of a particular edge. Likewise, for \(R_{s}\), a 1 corresponds to a node being the sender of a particular edge. \(R_{a}\) is a matrix of size \(R_{l}\times N_{R}\), where the columns are the edges with state vector length \(R_{l}\). The state vector length is arbitrary for both edges and nodes. Thus, \(G\) defines the state of a graph, containing complete information of the nodes and their connections. The output \(B\) of the marshalling function \(m(G)\) is described below, along with its column slices, \(b_{k}\). \[m(G)=conc[OR_{r};OR_{s};R_{a}]=B\;;\;b_{k}\subset B \tag{3}\] \[\phi_{O}(a(G,\phi_{R}(m(G))))=\phi_{O}(a(G,\phi_{R}(B))) \tag{4}\] \(B\) consists of the concatenation of the matrix multiplications \(OR_{r}\) and \(OR_{s}\), combined with \(R_{a}\) [4]. This packages the node and edge information in a convenient way for implementation into the neural network. In particular, \(B\) is a matrix composed of \(OR_{r}\) stacked on top of \(OR_{s}\) stacked on top of \(R_{a}\). With \(OR_{r}\) and \(OR_{s}\) both taking the shape \(O_{l}\times N_{R}\), and \(R_{a}\) taking the shape \(R_{l}\times N_{R}\), the combination results in the \(B\) matrix taking the shape \((2O_{l}+R_{l})\times N_{R}\). \[\phi_{R}(B)=E \tag{5}\] \[\phi_{O}(a(G,\phi_{R}(B)))=\phi_{O}(a(G,E)) \tag{6}\] Next, \(B\) is supplied to \(\phi_{R}\). Its column slices, \(b_{k}\) (see equation 3), are the input, and the total number of columns in \(B\) is the batch size. \(\phi_{R}\) predicts the new edge states \(E\), which, being a matrix of the edges containing only new edge states, is the same size as \(R_{a}\). This is appropriate, as \(E\) will replace \(R_{a}\) in the case of multiple processors, i.e. multiple iterations of this learning algorithm. \[\bar{E}=ER_{r}^{T} \tag{7}\] \[\phi_{O}(a(G,E))=\phi_{O}(a(O,\bar{E})) \tag{8}\] The transformation of \(E\) into \(\bar{E}\), with shape \(R_{l}\times N_{O}\), is useful in that it combines the information of the new edges and the nodes that this update will affect [4]. Furthermore, it allows for the equivalency in equation 8, such that \((O,\bar{E})\) contains the same relevant information, including node states, edge updates, and nodes impacted, as found within \((G,E)\). \[a(O,\bar{E})=C\;;\;c_{k}\subset C \tag{9}\] The second marshalling function, \(a\), concatenates the columns of \(O\) and \(\bar{E}\), outputting matrix \(C\). For the \(k\)-th column of \(C\), the \(k\)-th column of \(O\) is the top half and the \(k\)-th column of \(\bar{E}\) is the bottom half. In effect, \(C\) is composed of \(O\) on top of \(\bar{E}\). This packages the node and edge information in a convenient way for implementation into the neural network [4]. Again, \(O\) has shape \(O_{l}\times N_{O}\) and \(\bar{E}\) has shape \(R_{l}\times N_{O}\); thus, the size of matrix \(C\) is \((O_{l}+R_{l})\times N_{O}\). \[\phi_{O}(a(O,\bar{E}))=\phi_{O}(C)=P \tag{10}\] \(C\) is then supplied to the learning function \(\phi_{O}\). Its column slices, \(c_{k}\) (see equation 9), are the input, and the total number of columns in \(C\) is the batch size. Just as \(\phi_{R}\) predicted the new edge states, \(\phi_{O}\) now predicts the new node states, \(P\), which is a matrix the same size as \(O\) that contains the new node states [4].
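To make the data flow concrete, the following is a minimal NumPy sketch of one classical IN pass assembled from equations 3-10; the learning functions `phi_R` and `phi_O` are stand-ins for the MLPs described later and are assumed to act column-wise on their matrix inputs.

```python
import numpy as np

def interaction_network_pass(O, R_r, R_s, R_a, phi_R, phi_O):
    """One classical IN pass (equations 3-10).
    O:   (O_l, N_O) node states
    R_r: (N_O, N_R) one-hot receiver matrix
    R_s: (N_O, N_R) one-hot sender matrix
    R_a: (R_l, N_R) edge states
    phi_R, phi_O: stand-in learning functions applied column-wise."""
    B = np.vstack([O @ R_r, O @ R_s, R_a])  # m(G), shape (2*O_l + R_l, N_R), eq. 3
    E = phi_R(B)                            # new edge states, same size as R_a, eq. 5
    E_bar = E @ R_r.T                       # route edge effects to receiver nodes, eq. 7
    C = np.vstack([O, E_bar])               # a(O, E_bar), shape (O_l + R_l, N_O), eq. 9
    P = phi_O(C)                            # new node states, same size as O, eq. 10
    return P, E                             # O' = P and R_a' = E for the next processor
```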
This sizing is appropriate, as \(P\) will replace \(O\) in the case of multiple processors. Finally, either the algorithm repeats, or the matrix \(P\) is supplied to the decoder. The outcome is dependent upon the number of processors in the GNN, with each processor corresponding to one complete run of the IN. For the case of multiple processors, the substitution \(O^{\prime}=P\) and \(R^{\prime}_{a}=E\) will be made, and the algorithm will be repeated with these updated \(O\) and \(R_{a}\) values. After cycling through all processors, the final \(P\) is given to the decoder, which will then output its prediction. The pseudocode for the full classical IN algorithm is shown in figure 1. Additionally, a complete explanation of this algorithm can be found in the work of Battaglia et al. [4]. To give a brief overview of the classical GNN implemented in this study, it has the overall same design as that utilized by Sanchez-Gonzalez et al. [16], and is described as follows. The node encoder consists of a two layer multilayer perceptron (MLP), with the first layer being size 8 and the second layer being size 9. The edge encoder also consists of a two layer MLP, with the first layer being size 4 and the second layer being size 5. The first layers for these sections were explicitly chosen to correspond to the node and edge input data, which are of size 8 and 4, respectively. Concerning the processors, the node and edge processor each consist of a single perceptron layer of size 9 and 5, respectively. Lastly, the decoder contains a single perceptron layer of size 2. After each perceptron layer in the encoders and processors, layer normalization is used [17]. Additionally, each of these layers utilizes a ReLU activation function, while the decoder does not implement one.

### Speculative QGNN Interaction Network

The IN algorithm implemented in the Speculative QGNN behaves as follows: \[IN_{Speculative}=\phi_{n}(S_{n}(G,\phi_{e}(S_{e}))) \tag{11}\] The algorithm consists of two learning functions, \(\phi_{n}\) and \(\phi_{e}\), with the latter embedded within the former. The learning function \(\phi_{e}\) is responsible for updating the edge states, i.e. predicting the effects of nodal connections. Likewise, \(\phi_{n}\) is the learning function responsible for updating the node states. The graph \(G\), which is the same as in the classical case, is described again as \[G=\langle N,E\rangle=\] \[\langle\{n_{j}\}_{j=1\ldots N_{n}},\langle\{e_{r}\}_{k},\{e_{s}\}_{k},\{e_{a}\}_{k}\rangle_{k=1\ldots N_{e}}\rangle\,, \tag{12}\] Note the simple change in variable names for the quantum case as opposed to the classical. In particular, \(O\) is equal to \(N\) and \(R\) is equal to \(E\). This algorithm contains steps of matrix multiplication, which is where the superposition states are directly implemented. In particular, the outputs of the learning functions at each time step are the superposition states, which are used to construct their corresponding matrices. These are then used to propagate information via matrix multiplication.
\[A_{1}=NE_{r}\;;\;A_{2}=A_{1}^{\prime}NE_{s}\;;\;A_{3}=A_{2}^{\prime}E_{a} \tag{13}\] \[A_{1}^{\prime}=\phi_{e_{1}}(A_{1})\;;\;A_{2}^{\prime}=\phi_{e_{2}}(A_{2}) \tag{14}\] \[S_{e}=\langle\phi_{e_{1}}(A_{1}),\phi_{e_{2}}(A_{2}),\phi_{e_{3}}(A_{3})\rangle \tag{15}\] \[\phi_{e}(S_{e})=\phi_{e_{1}}(A_{1})\rightarrow\phi_{e_{2}}(A_{2})\rightarrow\phi_{e_{3}}(A_{3})=A \tag{16}\] Classically, \(S_{e}\) would be a marshalling function that concatenates \(A_{1}\), \(A_{2}\), and \(A_{3}\), shown in equations 13 and 14, into a vector of size \((2N_{l}+E_{l})\times N_{e}\). However, the quantum circuits do not adjust well to spontaneous changes in vector size, with quantum data compression of the superposition state being a non-trivial task [18; 19]. Thus, it was optimal to alter \(S_{e}\) to represent the application of \(A_{1}\), \(A_{2}\), and \(A_{3}\) in series to \(\phi_{e}\), which has been decomposed into three separate learning functions \(\phi_{e_{1}}\), \(\phi_{e_{2}}\), and \(\phi_{e_{3}}\). The output of this series of learning functions is \(A\), the matrix of updated edge states, which is equivalent to the overall output of \(\phi_{e}\). \[\bar{A}=AE_{r}^{T} \tag{17}\] \[\phi_{n}(S_{n}(G,A))\rightarrow\phi_{n}(S_{n}(N,\bar{A})) \tag{18}\] \((G,A)\) describes the nodal composition of the graph, including the corresponding directed edges and their predicted effects. With the transformation of \(A\) to \(\bar{A}\), \((N,\bar{A})\) contains the same information. Thus, the substitution \((G,A)\rightarrow(N,\bar{A})\) is a convenient method of sorting the data for implementation. \[P_{1}=N\;;\;P_{2}=(P_{1}^{\prime}\bar{A})N \tag{19}\] \[P_{1}^{\prime}=\phi_{n_{1}}(P_{1}) \tag{20}\] \[S_{n}=\langle\phi_{n_{1}}(P_{1}),\phi_{n_{2}}(P_{2})\rangle \tag{21}\] \[\phi_{n}(S_{n})=\phi_{n_{1}}(P_{1})\rightarrow\phi_{n_{2}}(P_{2})=P \tag{22}\] \(S_{n}\) performs the same process as \(S_{e}\), except in the context of the nodal learning function, where \(S_{n}\) applies \(P_{1}\) and \(P_{2}\) in series to \(\phi_{n}\), which has been decomposed into \(\phi_{n_{1}}\) and \(\phi_{n_{2}}\). The output of this series of learning functions is \(P\), the updated node states, which can be designated the output of \(\phi_{n}\). \[N^{\prime}=P \tag{23}\] \[E_{a}^{\prime}=A \tag{24}\] The next step is to either rerun the algorithm with the updated node and edge states, i.e. equations 23 and 24, or have the updated features proceed to the decoder. This decision is based on the chosen number of runs, and the particular run the algorithm is on. For additional insight, figure 2 contains the pseudocode for the Speculative QGNN IN algorithm.

Figure 2: Pseudocode of the Speculative QGNN interaction network algorithm.

It should be noted that for the purposes of this project only the updated node states were input into the decoder, while the updated edge states were only used in the node state update function and, thus, were confined to use within this algorithm, i.e. the processor. This is based on the similar process followed by Sanchez-Gonzalez et al. [16]. Additionally, in static graphs, the matrices \(E_{r}\) and \(E_{s}\) of the receiver and sender indices remain the same. However, this project relies on dynamic graphs, meaning \(E_{r}\) and \(E_{s}\) change with the time progression of the particle interactions. This progression is described by time steps, each one corresponding to the overall state of the system at a particular moment in time, represented by a graph. Thus, edges are uniquely constructed between nodes for each graph, which is achieved via a nearest-neighbour algorithm within a particular "connectivity" radius used for each node at each time step, as implemented by Sanchez-Gonzalez et al. [16].
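As a concrete illustration, this per-time-step edge construction might be sketched as below; the one-edge-per-pair convention is chosen to match the edge counts discussed later (three self-edges plus up to three pair edges for three particles), and any deviation from the exact scheme of [16] is an assumption.

```python
import numpy as np

def build_edge_matrices(positions, radius):
    """Build the receiver/sender matrices E_r and E_s for one time step.
    positions: (N_n, 2) particle coordinates. Every particle receives a
    self-edge; a single edge is added per pair within the radius."""
    N_n = len(positions)
    senders, receivers = [], []
    for s in range(N_n):
        for r in range(s, N_n):
            if s == r or np.linalg.norm(positions[s] - positions[r]) <= radius:
                senders.append(s)
                receivers.append(r)
    N_e = len(senders)                      # between N_n and N_n * (N_n + 1) / 2
    E_r = np.zeros((N_n, N_e))
    E_s = np.zeros((N_n, N_e))
    E_r[receivers, np.arange(N_e)] = 1.0    # 1 marks the receiver of each edge
    E_s[senders, np.arange(N_e)] = 1.0      # 1 marks the sender of each edge
    return E_r, E_s
```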
Figure 3: (a) The composition of the encoder convolution unitary. (b) The application of the encoder convolution unitary in encoding the node state input. (c) The application of the encoder convolution unitary in encoding the edge state input. Note, the last encoder unitary applied in (b) and (c) indicates that the top half of gates in (a) is applied to qubit 3 and the remaining bottom half is applied to qubit 0, i.e., it is applied in the same exact manner as the prior encoder unitaries.

### Implementable QGNN Interaction Network

The design of the Implementable QGNN requires further deviation from the initial classical GNN learning model. In particular, this is a result of only being able to utilize slices of larger matrices during a single run-through of the entire algorithm. This is required in order to maintain superposition in the quantum circuit. Whereas the classical GNN and the Speculative QGNN both rely on two marshalling functions, the Implementable QGNN has none. Instead, it applies only a single column of \(N\), \(E_{r}\), \(E_{s}\), and \(E_{a}\) at a time, via a series of \(RX\) gates, to the qubits corresponding to the edges. The resultant information in the edge state qubits is then transitioned into the node state qubits via decoding unitaries. Overall, this results in a method of information propagation based not on direct matrix multiplication, but instead on the application of rotation matrices. Furthermore, it adds the requirement of an additional layer of decoding unitaries. Note that none of this suggests that the Implementable QGNN will necessarily be less effective, nor that it is somehow a worse implementation of GNNs. Instead, these dissimilarities from the Sanchez-Gonzalez et al. GNN [16] make the QGNN model studied here a novel attempt at quantum machine learning. There are multiple steps in the Speculative QGNN learning model, i.e., a series of matrix multiplications. However, the Implementable QGNN learning model is more aptly defined by its unique design, in that there are no steps implemented outside the context of the quantum circuit. Thus, the entire algorithm is realized in a single quantum circuit, and is best explained through examination of the quantum gates that constitute it. The full description of the Implementable QGNN can be found in the Methods section below.

## III Methods

### Data Encoding

Each learning model is trained on two data sets, derived from the same classical simulation (see section IV.1), to create the initial node and edge state vectors. The initial state vector for a particular node consists of its previous two velocities, \(v_{n-1}\) and \(v_{n-2}\), and its normalized clipped distances to the boundaries, \(b_{i}\), where these distances are clipped by the connectivity radius [16]. Each velocity has vector length two, and the vector of clipped distances is length four. Concatenating these features gives the initial node state vector its size of eight, i.e., \((v_{x,n-1},v_{y,n-1},v_{x,n-2},v_{y,n-2},b_{1},b_{2},b_{3},b_{4})\). A vector length of 8 was chosen because values of \(2^{n}\) are naturally easy to work with in a quantum circuit. Likewise, it is ideal to use a small number of qubits to avoid substantial training times and noise.
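A sketch of assembling this 8-dimensional node state vector is given below; the box geometry and the exact normalization and clipping conventions are assumptions following Sanchez-Gonzalez et al. [16].

```python
import numpy as np

def node_state_vector(v_prev1, v_prev2, pos, box, radius):
    """Concatenate the two previous 2-D velocities with the four
    wall distances, clipped by the connectivity radius and normalized.
    box = (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = box
    walls = np.array([pos[0] - x_min, x_max - pos[0],
                      pos[1] - y_min, y_max - pos[1]])
    b = np.clip(walls, 0.0, radius) / radius          # clipped, normalized distances
    return np.concatenate([v_prev1, v_prev2, b])      # length 8 = 2**3
```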
The initial state vector for a particular edge consists of its relative positional displacement, \(d_{x}\) and \(d_{y}\), i.e. the distance between the corresponding sender and receiver of a particular edge, and that displacement's corresponding magnitude, \(D\); the former is vector length two, and the latter is vector length one. Requiring an input size of \(2^{n}\), a single layer of zero padding was added to each edge state vector. Concatenating these features gives the initial edge vector its size of four, i.e., \((d_{x},d_{y},D,0)\). Outside of the zero padding, this composition of data is the same as that used by Sanchez-Gonzalez et al. [16]. An initial step in quantum machine learning is encoding classical data into qubits, which can be accomplished using various methods such as qubit encoding [20], tensor product encoding, and amplitude encoding [21]. The latter was implemented using the AmplitudeEmbedding function available in PennyLane, the quantum-compatible Python package used to realize this project's classical-quantum algorithms [22]. Amplitude encoding consists of embedding classical input values into the amplitudes of a quantum state. This requires transforming the data from its classical format into that of a superposition state. A superposition state consists of \(2^{n}\) values, \(n\) being the number of qubits for a given quantum circuit. Thus, the criterion arises for the input data to be of size \(2^{n}\).

### Parameterized Quantum Circuits

The parameterized quantum circuits (PQC) used for the encoder and decoder are the same as those utilized in the Quantum Convolutional Neural Network (QCNN) designed by Cong et al. [23], while the processor is the same as Circuit 15 designed by Hubregtsen et al. [24]. The QCNN PQCs are valuable in that they provide both a method for expanding data into a higher dimensional latent space and for pooling information into a desired number of qubits. Conversely, the value of Circuit 15 is more holistic, being a PQC that was proven to be moderately accurate in the context of classification, while retaining a low number of required parameters. The encoder is the reverse QCNN used by Cong et al., which is equivalent to the multiscale entanglement renormalization ansatz (MERA) [23]. Thus, instead of pooling the information, it expands it into a higher dimensional latent space. This reverse QCNN is realized via the repeated application of a two qubit unitary, as shown in figure 3a [25]. This unitary consists of \(RX\), \(RY\), and \(RZ\) gates, with the learning parameters being the corresponding degrees of rotation. Note that the top and bottom \(RZ\), \(RY\), and \(RX\) gates labeled \(p_{6}-p_{8}\) share the same parameters between pairs. Therefore, even though there are 18 rotation gates, there are only 15 parameters in total. With the encoder's unitary defined, the next step is the method of application. The unitary is applied sequentially to every pair of qubits in the circuit, shown in figures 3b and 3c. Note the difference between figures 3b and 3c is simply that figure 3b is the encoder for the node input, which has an initial state vector length of 8, while figure 3c is the encoder for the edge input, which has an initial state vector length of 4. Thus, figure 3b requires 3 qubits to amplitude encode the data, while figure 3c needs 2 qubits.
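The encoding and expansion steps can be sketched in PennyLane as follows; the two-qubit block here is an illustrative stand-in for the figure 3a unitary, whose exact gate ordering and 15-parameter sharing are not reproduced.

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=4)

def encoder_block(params, wires):
    """Illustrative two-qubit encoder unitary: per-qubit RZ/RY/RX
    rotations plus an entangling CNOT (a simplified stand-in for
    the MERA-style block of figure 3a)."""
    for i, w in enumerate(wires):
        qml.RZ(params[3 * i + 0], wires=w)
        qml.RY(params[3 * i + 1], wires=w)
        qml.RX(params[3 * i + 2], wires=w)
    qml.CNOT(wires=list(wires))

@qml.qnode(dev)
def node_encoder(x, params):
    # Amplitude-encode the 8-dim node vector on qubits 0-2 ...
    qml.AmplitudeEmbedding(x, wires=range(3), normalize=True)
    # ... then expand into the full 4-qubit latent space, pairwise,
    # ending with the (3, 0) pair as noted in the figure 3 caption.
    for pair in [(0, 1), (1, 2), (2, 3), (3, 0)]:
        encoder_block(params, wires=pair)
    return qml.state()
```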
When the encoder unitary is applied to different qubits, the parameters are kept the same, meaning that regardless of how many qubit pairs it is applied to, the number and values of the parameters are constant. Concerning the decoder, it is the QCNN used by Cong et al., which is equivalent to the reverse MERA [23]. Thus, it pools the data into a select number of qubits. For this project, the data is pooled into the final two qubits. The QCNN is realized via the repeated application of the two qubit unitary shown in figure 4c [25]. This unitary consists of the application of \(RX\), \(RY\), \(RZ\), and CNOT gates, with the degrees of rotation being learned parameters. The total number of parameters is 6. The decoder unitary is applied to pairs of qubits, with the information in the top qubit being pooled into the bottom qubit. For example, here information from qubits 0 and 1 is pooled into qubits 2 and 3, respectively, as seen in figure 4d. Similar to the encoder unitary, the decoder unitary's parameters are the same in each application to qubit pairs, prior to backpropagation.

Figure 4: (a) The composition of the processor PQC and (b) its corresponding unitary representation. (c) The composition of the decoder PQC and (d) its corresponding unitary representation.

The processor is based on the design presented by Hubregtsen et al. [24], shown in figure 4a. It contains 16 gates, but only 8 parameters. It consists of two columns of \(RY\) gates, each followed by cascading CNOT gates. Though the encoder and decoder are utilized only once during the entire circuit, the processor is repeatedly applied based on the desired number of message passing steps. As described in section II (also see figure 5), the algorithm requires the processor being applied a minimum of five times: three for edge processing and two for node processing. As is an inherent trait of GNNs, each repetition of the processor corresponds to a node's message being passed an additional node away. For example, having three repeated instances of the processor in the algorithm corresponds to a node "knowing" about its neighbors up to three edges away [1]. However, this is in the classical sense, where the processor is a single multilayer perceptron (MLP). In the context of the quantum algorithm used in this study, a single complete run of the interaction network can be considered a single use of the processor. This is why we treat the overall edge and node processors as decomposing into their corresponding processors, as shown in equations 15 and 21. Thus, another more quantitative way to consider this situation is that every 5 uses of the processor unitary correspond to the completion of a single message pass. Figure 5 gives a more intuitive sense of this, showing a complete run of the algorithm with the incorporation of the PQCs, containing only a single step of message passing. This figure will be explained in more detail in the following subsection.

Figure 5: **Speculative QGNN:** Depiction of the entire quantum circuit as it runs through the algorithm. The random access memory (RAM) sections represent where the superposition states are being saved and then used classically.

Note, table 1 lists the number of parameters and gates found in each QGNN model given \(P\) processors in the learning model. As noted in the table, the Implementable QGNN does not have an obvious scaling relationship with qubit count. For parameters, increasing the qubit count means expanding either the node or edge sections of the circuit or both; however, this is subjective, and depends upon the experimenter's choice. Likewise, for gate count, the implementation of the \(RX\) gates would shift with increased qubit count, since the original series of \(RX\) gates was explicitly chosen to efficiently distribute information over 4 qubits given 6 parameters. If either of these values changed, the application of \(RX\) gates would need to be adjusted, with the manner of adjustment being arbitrary and, thus, based on the experimenter's choice. For example, given a case where there are 6 parameters to apply and a quantum circuit with a node section containing 6 qubits, the unitaries \(U_{r}\), \(U_{s}\), and \(U_{a}\) of that circuit would only need a single column of \(RX\) gates to effectively propagate information. This is not true for other qubit amounts. Furthermore, with increasingly large numbers of qubits, it is unclear what form the \(RX\) unitaries would take once the qubit count has surpassed the \(RX\) gate count of each unitary, i.e., it is arbitrary how to apply a vector of length 4 to a quantum circuit with 7 qubits.
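Before turning to how these blocks are stitched together, a PennyLane sketch of the Circuit-15-style processor described above (two columns of \(RY\) rotations, each followed by cascading CNOTs) might look as follows; the exact CNOT ordering is an assumption.

```python
import pennylane as qml

def processor_block(params, wires):
    """Circuit-15-style processor [24] on len(wires) qubits:
    two RY columns, each followed by a cascade of ring CNOTs.
    For four qubits this gives 16 gates and 8 parameters."""
    n = len(wires)
    for layer in range(2):
        for i, w in enumerate(wires):
            qml.RY(params[layer * n + i], wires=w)   # one parameter per RY
        for i in range(n):
            qml.CNOT(wires=[wires[i], wires[(i + 1) % n]])
```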
### Speculative QGNN Integration

Referring to figure 5, our implemented method relies on only four qubits, with the output of each sub-circuit, i.e. node encoder, \(\phi_{p1}\), etc., saved and input into the next sub-circuit, as designated by the algorithm, with the corresponding rotation parameters likewise saved. The algorithm relies on numerous steps of matrix multiplication to combine information for quantifying the origin, target, and magnitude of effects. This is easily done classically, where the output at each sub-circuit can be stored until the entire corresponding matrix is formed. However, doing so in a quantum circuit context would either require directly implementing superposition amplitudes or utilizing additional qubits to store the information. In the case of the latter, with multiple runs of each sub-circuit, this would lead to a quickly increasing number of qubits. This would be highly impractical, if not impossible, to realize on quantum hardware and the quantum simulation software utilized in this project, i.e. PennyLane [26].

\begin{table} \begin{tabular}{|c|c|c|} \hline **Circuit** & **Number of Parameters** & **Number of Gates** \\ \hline **Speculative QGNN** & 10NP & 46N + 20NP + 164 \\ \hline **Implementable QGNN** & 22P + 36 & 10QP + 164 \\ \hline \end{tabular} \end{table}
Table 1: The number of parameters and gates in each QGNN given \(N\) qubits and the application of \(P\) processors. The parameters and gates for the Implementable QGNN have no obvious relationship with increased qubit count, thus this variable was left out for this circuit. Note, the gates required for the amplitude embedding were left out of these counts.

Thus, the former is necessary, even though it forfeits implementability, as explained in the Introduction section. However, a QGNN that is fully implementable is realized in later sections of this paper, though it requires deviating from the previously mentioned method of propagating information via matrix multiplication. Figure 5 begins on the left with two encoders. The top is the node encoder and the bottom is the edge encoder. Each encoder expands the data from its initial vector length (8 for the node input and 4 for the edge input) into a vector space of length 16. At this point, the interaction network is implemented. The expanded versions of \(N\) and \(E_{a}\), as seen at the intersection of the encoders and processors in the figure, correspond to equation 12, i.e. the start of the algorithm.
These superposition amplitudes are then saved, with \(N\) directly plugged into the next step of the algorithm: the series of matrix multiplications and circuit applications of the edge processors as described in equations 13-16. This overall region of matrix multiplications and learning functions is the edge processor as indicated in the figure. The expanded \(E_{a}\) is included in the final matrix multiplication of the edge processor, \(A_{3}\), whose output is then applied to \(\phi_{e3}\). The node processor, as seen in the figure, first begins with the expanded version of \(N\) being applied to \(\phi_{p1}\), whose output \(P_{1}^{\prime}\) is then multiplied by \(A\), the output of \(\phi_{e3}\), and the original expanded \(N\), to form \(P_{2}\). The final step in the node processor is to then apply \(P_{2}\) to \(\phi_{p2}\), producing the updated node state \(P\) (see equations 19-22). \(P\) is then applied to the decoder, which outputs on qubits 2 and 3 the predicted new vertical and horizontal accelerations of the particles.

### Implementable QGNN Integration

Examining the Implementable QGNN in figure 7, the quantum circuit is broken up into two parts: qubits \(0-3\) represent the node states and qubits \(4-7\) represent the edge states. It is important to first understand the implementation of the matrices \(N\), \(E_{r}\), \(E_{s}\), and \(E_{a}\) as described in equation 12. Here, the algorithm requires that the transposes of \(E_{r}\), \(E_{s}\), and \(E_{a}\) be utilized, which is a consequence of the original shapes of these matrices. \(E_{r}\) and \(E_{s}\) are of size \(N_{n}\times N_{e}\), and \(E_{a}\) is of size \(E_{l}\times N_{e}\), which, for this project, means that each \(E\) matrix is of shape \(3\times N_{e}\). The extra dimension of padding that \(E_{l}\) had for the Speculative QGNN is not used here. However, the dimension \(N_{e}\) is variable because it describes the number of edges per given time step. Additionally, \(N\) is of fixed shape \(N_{l}\times N_{n}\), i.e. \(8\times 3\) for this project. Thus, for compatibility and consistency with applying the matrices in unison to the quantum circuit, the transposes of the \(E\) matrices were required. In short, each applied matrix has a column size of 3, meaning the quantum circuit is run a total of 3 times per time step, i.e. per set of \(N\), \(\bar{E_{r}}\), \(\bar{E_{s}}\), and \(\bar{E_{a}}\), with the variability of dimension \(N_{e}\) absorbed by the rotation gates (explained below). In the Speculative QGNN, adhering to the classical learning model, information is propagated via matrix multiplication of the matrices \(N\), \(E_{r}\), \(E_{s}\), and \(E_{a}\). Here, however, these matrices are applied directly to the quantum circuit via the rotation parameters of rotation gates.

Figure 6: **Implementable QGNN:** (a) Series of \(RX\) gates implemented to realize values of \(E_{r}\), \(E_{s}\), and \(E_{a}\).

Figure 6a depicts a cascading series of \(RX\) gates; this entire cascade is treated as a unitary and is applied for each of the \(\bar{E}\) matrices. The choice of \(RX\) gate was arbitrary, though the application was purposefully kept uniform for equivalent incorporation of data. For a given \(RX\) unitary, the rotation values of its \(RX\) gates are determined by the row values of the given column in use. The \(RX\) unitaries for \(\bar{E}_{r}\), \(\bar{E}_{s}\), and \(\bar{E}_{a}\) are represented, respectively, by the unitaries in figure 6b, figure 6c, and figure 6d.
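A sketch of such a data-loading unitary is shown below; the sequential sweep over the edge-section qubits is a simplification of the cascading pattern in figure 6a, and the zero-padding rule for missing edges anticipates the discussion that follows.

```python
import pennylane as qml

def rx_load_column(column, wires, max_edges=6):
    """Apply one column of an E-bar matrix as RX rotation angles on
    the edge-section qubits, padding absent edges with zero rotations
    (an RX(0) is the identity, so missing edges have no effect)."""
    angles = list(column) + [0.0] * (max_edges - len(column))
    for i, theta in enumerate(angles):
        qml.RX(theta, wires=wires[i % len(wires)])
```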
Matrix \(N\), with row vector length 8, is of greater vector length than \(\bar{E}_{r}\), \(\bar{E}_{s}\), and \(\bar{E}_{a}\), whose row vector lengths have a possible maximum value of 6. Thus, implementation of matrix \(N\) in the quantum circuit requires two additional \(RX\) gates, but only a single application for efficient information distribution amongst the qubits, as shown in figure 6e, with its unitary representation shown in figure 6f. The row vector length of the \(\bar{E}\) matrices, \(N_{e}\), is variable. This variability is due to the calculation of edges per time step, as achieved via a nearest neighbor algorithm applied to the particles in the simulation within a certain connectivity radius. For particles within the connectivity radius of each other, an edge will be generated between them. Regardless of connections, i.e. edges, with other particles, each particle is given a self-edge. In the context of this project's simulation, there are three particles contained within a box. If no particles are within the given interaction radius of one another, then there will only be 3 edges in that time step, each being a self-edge. However, if every particle is within the given interaction radius of every other particle, then there will be 6 edges: three self-edges and three edges between particles. The range of possible integer numbers of edges is bound by these minimum and maximum values. Hence, the dimension \(N_{e}\) is bound between 3 and 6. Returning to the \(RX\) gates, this means that the number of rotation parameters can change between time steps. To account for this, an \(RX\) gate's rotation parameter assumes a value of zero when one is not provided. For example, if there exist only four edges, then the fourth and fifth \(RX\) gates, \(c_{4}\) and \(c_{5}\) of the \(E\) matrices, would be given a rotation parameter of zero. The full application of the Implementable QGNN is shown in figure 7.

Figure 7: **Implementable QGNN:** Entire Implementable QGNN quantum circuit.

In particular, this figure demonstrates a one processor implementation of the QGNN, meaning there is only one step of message passing between nodes using this circuit. For additional steps of message passing, as in the prior GNN implementations, simply add copies of the processor following the original. Each additional copy equals one additional run of the message passing algorithm, with \(n\) additional copies corresponding to nodes learning about neighboring nodes \(n\) additional steps away. Note, these copies each have their own unique parameters. The algorithm begins by encoding the node state and edge state inputs into qubits \(0-2\) and \(4-6\), respectively. Immediately after this, the node and edge encoder unitaries, \(U_{E,n}\) and \(U_{E,e}\), respectively, are applied to their corresponding sections; these are the same unitaries described by figure 3a. Following this, the \(RX\) unitaries for \(N\), \(\bar{E}_{r}\), \(\bar{E}_{s}\), and \(\bar{E}_{a}\), respectively \(U_{N}\), \(U_{r}\), \(U_{s}\), and \(U_{a}\), are applied to the edge section qubits. The edge section processor, \(U_{P,e}\), which is identical to that described in figure 4a, is applied to the same section. The application of the \(RX\) unitaries and processor is analogous to equations 13-16 of the Speculative QGNN learning model. As previously mentioned, a consequence of implementing all parts of the algorithm in a single circuit is that an extra section of decoder unitaries is required.
They are used to transfer information from the edge section qubits to the node section qubits. This requirement is inherent in the node feature state update being based upon the edge feature state update, the crux of the GNN algorithm. This transfer is done via application of the decoder unitary, represented by \(U_{D,t}\); \(t\) for transition from edge to node. This decoder unitary has a set of parameters different from that of the last decoder unitary, \(U_{D,f}\); \(f\) for final application of the decoder unitary. The next unitary is a reapplication of \(U_{r}\); however, this time it is applied to the node section qubits. The use of \(U_{r}\) here is analogous to equation 17 of the Speculative QGNN learning model. The final unitary of the overall processor is the node section processor, \(U_{P,n}\). This unitary has parameters unique from those of the edge section processor. Likewise, the application of this unitary is analogous to equations 18-22 of the Speculative QGNN. Ending the entire circuit, the decoder unitary \(U_{D,f}\) is applied, and a measurement is made on qubits 2 and 3. The expectation values of these measurements are the predicted \(x\) and \(y\) direction accelerations of each particle. For a single time step, there are three cycles of this complete circuit, meaning the final output is a \(2\times 3\) matrix where the rows are the \(x\) and \(y\) accelerations, and the columns correspond to particular particles.

## IV Results

### Overview

There were two groups of GNN models trained. The first consisted of GNNs with one processor, and the second consisted of GNNs with two processors. As each model progressed through its training, the calculated loss value following the application of each additional batch is shown in figure 8. In particular, figures 8a-c and 8d-f each show the loss value, whereas figures 8g-i show the common logarithm of the loss. The loss was calculated via the mean square error (MSE) of the predicted acceleration of a given particle compared with its ground-truth acceleration. The ground-truth acceleration was obtained via a particle simulator found in the open source software Taichi Lang [27]. The predicted position of each particle was obtained via an Euler integrator that calculates the next position from the current acceleration, as implemented by Sanchez-Gonzalez et al. [16]. Likewise, based on Sanchez-Gonzalez et al., the optimizer implemented was Adam, featuring a learning rate of 0.01 [16; 28]. A batch size of 4 was used, with each of the four data points being a time step, and the respective loss values of the time steps averaged together to calculate the entire batch's loss value. The data points are randomized prior to each epoch. Note, the maximum number of processors used in this project was two. This was chosen with the number of particles in mind; in particular, the simulation consisted of three particles interacting under the influence of gravity. The graphs generated from this situation contained nodes that, at a maximum, had two-step neighbors, i.e., neighbors that are two edges away. Each processor corresponds to a single step; thus, the maximum number of processors needed was two.
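A sketch of one training update under this setup is given below; `predict_accelerations` is a hypothetical stand-in for a full forward pass of one of the GNN models, and the ordering of the Euler updates is an assumption following [16].

```python
import pennylane as qml
from pennylane import numpy as np

opt = qml.AdamOptimizer(stepsize=0.01)     # Adam with the study's learning rate

def euler_step(pos, vel, acc, dt=1e-4):
    """Roll a predicted acceleration forward into the next position."""
    vel_next = vel + dt * acc
    return pos + dt * vel_next, vel_next

def batch_cost(params, batch):
    """MSE between predicted and ground-truth accelerations,
    averaged over a batch of 4 time steps.
    predict_accelerations(params, graph) is assumed to run the
    chosen GNN forward pass (not shown here)."""
    losses = [np.mean((predict_accelerations(params, graph) - acc_true) ** 2)
              for graph, acc_true in batch]
    return sum(losses) / len(losses)

# One optimizer update over a batch:
# params = opt.step(lambda p: batch_cost(p, batch), params)
```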
### Learning Efficiency

Figure 8a shows the entire loss value trajectory as training progressed for the GNNs with a single processor. As evident in this figure, there exists considerable overlap between the loss values for the classical GNN (CGNN), Implementable QGNN, and Speculative QGNN. Figures 8b and 8c show zoomed in views. The former zooms into the y-axis, looking at a range of \([0,1]\), while the latter zooms into the x-axis, looking at a range of \([0,25]\). Figure 8b further demonstrates the considerable overlap between loss values, and only in figure 8c does the difference become clear. The CGNN starts with the highest loss value, the Implementable QGNN starts in the middle, and the Speculative QGNN starts with the lowest. Additionally, the QGNNs approach a near zero loss value approximately 5 batches before the classical model does. Regardless, each GNN approaches this near zero amount within 10 batches. It is difficult to compare the learning efficiencies of the 1 processor GNNs in this direct manner; thus, examination of their logarithmic plots was necessary, as shown in figure 8h. It is evident that the Implementable QGNN reaches and maintains the lowest loss values, followed by the CGNN, and last by the Speculative QGNN. Considering each GNN approaches a near zero value within the first one percent of applied batches, using the logarithmic results as criteria for comparison of learning efficiencies is appropriate. Thus, the Implementable QGNN is most efficient, the CGNN is second, and the Speculative QGNN is least. However, the de facto results show the learning efficiency of each 1 processor GNN model is highly similar, with a notable degree of overlap existing between loss values throughout training. Overall, each model is highly efficient in reducing the loss throughout training. Figure 8d shows the entire loss value trajectory as training progresses for the 2 processor GNNs. Additionally, figures 8e and 8f show zoomed in views of the y-axis and x-axis as described in the 1 processor GNNs case. Likewise, analysis of these figures, in addition to the logarithmic plot of the 2 processor GNNs' loss values, as seen in figure 8i, results in the same conclusions made for the 1 processor GNNs case. Specifically, in the context of learning efficiency, the Implementable QGNN is most efficient, the CGNN is second, and the Speculative QGNN is least efficient. However, once again, the de facto results show the learning efficiency of each 2 processor GNN model is highly similar, with a notable degree of overlap existing between loss values throughout training. Overall, each model is efficient in reducing the MSE loss throughout training. These conclusions are based on the same observations as found in the 1 processor case. Figure 8g shows the logarithmic plot of loss values for each GNN in both the 1 processor and 2 processor cases. As a general trend, it appears that the Implementable QGNNs have the highest learning efficiencies, the classical GNNs both have the middle, and the Speculative QGNNs both have the lowest. However, comparing model pairs, the 1 processor GNNs are more efficient than their 2 processor counterparts. This suggests that the graphs generated in the ground-truth three-particle simulation consist mainly of one-step graphs, i.e. graphs containing nodes whose neighbor nodes are only a single edge away. This would explain the difference in efficiency because 2 processor GNNs used on graphs containing nodes with only 1-step neighbors would be redundant. In particular, consider that each additional processor corresponds to a particular node learning about a node an additional step away, i.e. message passing. In this situation, the second message pass would be redundant because there would be no 2-step neighbor to learn about. Furthermore, the second message pass may cause an over-mixing of the node states, decreasing the distinct features of each node and reducing useful information in the system.

Figure 8: (a) Graph of loss value over duration of training for GNNs with 1 processor. (b) Version of (a) zoomed in on y-axis. (c) Version of (a) zoomed in on x-axis. (d) Graph of loss value per batch over duration of training for GNNs with 2 processors. (e) Version of (d) zoomed in on y-axis. (f) Version of (d) zoomed in on x-axis. (g) Log of loss for all GNNs over duration of training. (h) Log of loss for 1 processor GNNs over duration of training. (i) Log of loss for 2 processor GNNs over duration of training.
### Accuracy

Though it is positive to observe that each GNN has a high learning efficiency, it is equally important to observe the accuracy of the model predictions. To measure this, two methods were used; the first was taking the MSE of the predicted and target next position values of each particle, and the second was taking the percent error of these same values. The former is based on Sanchez-Gonzalez et al.'s work, in which they utilized the same means of accuracy measurement [16]. The latter is based on the observation that the positions, predicted and actual, of each particle are considerably small numbers, a majority of the time lying between \([-1,1]\). Thus, MSE measurements will already be near zero, meaning that the MSE method of estimating accuracy is impractical for visual comparison in the context of this study's results. Regardless, some useful information can still be obtained from viewing the MSE plot of each GNN.

Figure 9: (a) Graph of average percent error in position prediction over duration of training for GNNs with 1 processor. (b) Version of (a) moderately zoomed in on y-axis. (c) Version of (a) highly zoomed in on y-axis. (d) Graph of average percent error in position prediction over duration of training for GNNs with 2 processors. (e) Version of (d) moderately zoomed in on y-axis. (f) Version of (d) highly zoomed in on y-axis. (g) Graph of MSE error in position prediction over duration of training for GNNs with 1 processor. (h) Version of (g) zoomed in on y-axis. (i) Version of (g) zoomed in on x-axis. (j) Graph of MSE error in position prediction over duration of training for GNNs with 2 processors. (k) Version of (j) zoomed in on y-axis. (l) Version of (j) zoomed in on x-axis.

Figures 9g-i and 9j-l show the MSE plots for the 1 processor and 2 processor GNNs, respectively. Figures 9h and 9k zoom into the y-axis, looking at a range of \([0,0.10]\), while figures 9i and 9l zoom into the x-axis, looking at a range of \([0,25]\). Similar to the loss curves for both the 1 processor and 2 processor cases, the MSE value rapidly decreases to near zero values. Likewise, the CGNN begins with the highest MSE values, and realigns with the Implementable QGNN and Speculative QGNN at approximately 10 batches. However, here the Implementable QGNN and Speculative QGNN immediately begin with near zero MSE values. Overall, the MSE values of the 1 processor GNNs overlap considerably, as do the MSE values of the 2 processor GNNs. Note that, as a general trend, the 2 processor GNNs have a higher MSE value throughout training compared to the 1 processor GNNs. Furthermore, they do take more batches to reach the same near zero MSE values already reached by the 1 processor GNNs. Figures 9a-c and 9d-f show the percent error plots for the 1 processor and 2 processor GNNs, respectively.
Figures 9b and 9e moderately zoom into the y-axis, looking at a range of \([0,500]\), while figures 9c and 9f highly zoom into the y-axis, looking at a range of \([100,160]\). Note, the graphs of percent error are averages, with the average percent error calculated, i.e. updated, at every batch, and the resulting average percent error plotted. Examining the 1 processor GNNs, it is immediately clear that they deviate from the results of the learning efficiency comparisons. In particular, here, the Speculative QGNN has the highest accuracy throughout training, followed by the Implementable QGNN, and last by the CGNN. This is in direct contrast to the Implementable QGNN having the greatest learning efficiency, followed by the CGNN, and last by the Speculative QGNN. However, examining the 2 processor GNNs, they do not have this deviation but instead follow the pattern established by their learning efficiencies. This difference is a possible consequence of the occasional redundancy of the 2 processor GNNs in the case of this three-particle simulation, as previously described. Likewise, this is also a possible consequence of the increase in parameters with the inclusion of an additional processor, which does not increase the parameter count equally in each GNN model. Regardless, as shown in figure 11a, in both the 1 processor and 2 processor cases, the percent errors of the Implementable QGNN, Speculative QGNN, and CGNN all decrease at comparable rates, leveling off in close proximity at approximately 110% error for the 1 processor GNNs and at 112% error for the 2 processor GNNs. Note, the high degree of inaccuracy shown in these measurements must be considered in the context of all other results. Thus, these measurements do not indicate that the performances of these models are fruitless; for a full analysis, see the Performance and Hyperparameters section below. It is worth comparing the percent error of all the GNNs together, as shown in figure 11a. Here, there is no particular pattern to the accuracy rankings. The Speculative QGNN with 1 processor performs the best, while its 2 processor counterpart performs the worst. The Implementable QGNNs perform second and third best, with both the 1 and 2 processor cases performing approximately the same. Likewise, the CGNNs perform fourth and fifth best, with the 2 processor case performing slightly better overall than the 1 processor case. Regardless, these differences ultimately are rather minute, with each GNN having a difference in percent error accuracy within, at most, approximately 10% of each other. We find a similar situation testing the trained models on the validation data set, which is approximately 30% the size of the training data set. The time progression of sampled position predictions made by each model using this data set can be seen in figure 12a, and the results can be seen in figure 10.

Figure 10: Note: The high percent errors in the above graphs are not indicative of fruitless performances by each model; see the Performance and Hyperparameters section for a full explanation. (a) Percent error using validation data set with all GNNs. (b) Zoomed in view of (a). (c) Percent error using validation data set with 1 processor GNNs. (d) Same as (c) except with the CGNN omitted to show its overlap with the Implementable QGNN. (e) Percent error using validation data set with 2 processor GNNs. (f) Same as (e) except with the CGNN omitted to show its overlap with the Implementable QGNN.
In particular, figure 10a shows the percent error in predictions for all models while running the validation data set. The accuracy rankings given by the training results are comparable to the outcomes here. Figure 10b demonstrates this; zooming in on the y-axis, the 1 processor Speculative QGNN performs the best, while its 2 processor counterpart performs the worst. In this case, however, the performance of the remaining models is nearly indistinguishable, being almost completely overlapped. For completeness, figures 10c-f show the zoomed in views of the Implementable QGNN and CGNN cases. In particular, figure 10c shows the 1 processor GNNs case, and figure 10d shows the same except with the CGNN results omitted to prove they overlap with the results of the Implementable QGNN. Likewise, figure 10e shows the 2 processor GNNs case, and figure 10f shows the same except with the CGNN results omitted to prove they overlap with the results of the Implementable QGNN.

Figure 11: (a) Average percent error in position prediction over duration of training for all GNNs. (b) Average percent error in position prediction over duration of training for the CGNN with various learning rates. Proceeding down the legend, the first learning rate is the original, i.e., the learning rate implemented throughout this study, the second and third learning rates are variations that adjust throughout training, and the fourth learning rate is a variation that is simply smaller than the original. Here, "lr" stands for learning rate and \(\beta_{1}\), \(\beta_{2}\), and \(\epsilon\) are the relevant variables to the Adam algorithm they are implementing.

The accuracy in the context of the validation data set is notably worse based on the percent error measurements. However, this is not surprising, as the percent error accuracy was initially poor throughout training. Additionally, the overall performance of these models is less similar than their performances during training. In particular, for the final half of the percent error results, the 1 processor Speculative QGNN resides at approximately 350%, both cases of the Implementable QGNNs and CGNNs reside at approximately 400%, and the 2 processor Speculative QGNN resides at approximately 500%.

### Performance and Hyperparameters

In determining the performance of the GNNs, it is necessary to consider their varying measurements of accuracy. In particular, their MSE measurements, combined with observing the constant overlap of particles in figure 12a, would suggest that their accuracies are high. These combined observations indicate that each model is capable of following the general trend of the ground-truth. However, this must also be considered in the context of the percent error measurements and figure 12b, the zoomed in view of the rightmost particle in figure 12a.iii. The method of simulating particle interactions is via generating time steps with a small time increment between consecutive steps. For the ground-truth simulation this time increment was 0.0001, meaning particle movement behaves approximately to this scale. Figure 12b shows that the vertical distance to the closest particle is 0.001 (arbitrary units). This difference is 10 times greater than the time increment. Thus, this large difference indicates that the percent error measurements are also correct, meaning each model has a notable degree of inaccuracy. Considering the conclusions based on both the MSE error and percent error measurements, the GNN models are hence moderately inaccurate.
To be precise, they are able to approximate the general trend of particle interactions while being a non-negligible percent off. The moderate inaccuracy and high learning efficiency of each model suggest that they are able to quickly identify some simple features in the data and accurately make predictions based on them, which results in the high learning efficiency. However, simultaneously there are more complex variables at work, which the models are inept at determining, resulting in an overall moderate degree of inaccuracy. We found that these complexities are related to the nature of the problem: particles in a box interacting under the influence of gravity. In the x-direction, there are no forces on the particles except for collisions with boundaries or other particles. If they are falling, bouncing off another particle, or doing some combination of these actions, this is a complex behavior. This is exemplified in figure 10, where the sudden peaks in the percent error correspond to particles falling under the influence of gravity and particles colliding, whereas the remaining portions of the graphs correspond to particles rolling. This is further confirmed in the raw data, where it was observed that the x-direction values of the particles tended to be considerably more accurate than the y-direction values (see also figure 12b). Additionally, this suggests that the percent error in the training plateaued around 100% as an effect of the x-direction values being accurate, while the y-direction values were incorrect to a notable magnitude. The moderate inaccuracy of the models does not condemn them as a whole. Rather rudimentary neural network structures were used throughout this project, with similarly simple learning rates. Any increasingly advanced techniques, such as dropout and decaying learning rates, were avoided.

Figure 12: (a.i)-(a.vi): A select sample of predicted positions given by each GNN model using the validation data set. The time progression is incremental, beginning at (a.i) and ending at (a.vi). The particles begin off the ground, and their progression as they fall and collide can be approximately observed. (b) Zoomed in view of the rightmost particle in graph (a.iii).

This lack of hyperparameter optimization is likely a large contributor to the current issues with these models. This notion is further supported by figure 11b, which shows the percent error trajectories of the 2 processor CGNN for the learning rate of 0.01 compared to various other learning rates. Proceeding down figure 11's legend, the first two variations follow the Adam learning rate algorithm described by Kingma and Ba [28], with the relevant variables given in the legend accounting for the difference in performance. The third variation is simply a decrease in the learning rate's magnitude. As shown by the first learning rate variation, a simple adjustment of this parameter already results in an increase in accuracy. As mentioned previously, the models implemented in this study were purposely kept simple for the sake of ease and efficiency in implementation. Thus, the basic learning rate of 0.01 was used throughout training and testing. As described at the beginning of this study, following proof of a quantum GNN analog, the results of the CGNN and Speculative QGNN were obtained to compare against the Implementable QGNN. It is promising that the Implementable QGNN has a greater learning efficiency than both the CGNNs and Speculative QGNNs.
Likewise, during training, the Implementable QGNN's accuracy performance appears to offer an advantage over the CGNNs and the Speculative QGNN (the ideal quantum-classical GNN analog) in situations containing redundancy. However, the identical performances of the CGNN and Implementable QGNN when tested on validation data suggest that this question requires further experimentation to reach any definite conclusion. Furthermore, this study was completed using a quantum circuit simulator, and thus would have to be implemented on actual quantum hardware to determine any real advantage.

## V Conclusion

The aim of this project was to construct quantum analogs to the classical graph neural network, as based on the work of Sanchez-Gonzalez et al. [16]. That goal was realized via two quantum graph neural networks, one that was speculative and one that was implementable. These QGNNs were compared alongside the CGNN in the task of particle interaction simulation. For simplicity, the case of three particles contained within a box was used to generate the training data; likewise, the most basic form of these GNNs was implemented. Two sets of GNNs were tested. The first contained a single processor, and the second contained two processors. Overall, the models proved capable of learning simple characteristics in the data, resulting in a high learning efficiency. However, they were unable to determine the more complex behaviors that were simultaneously occurring, resulting in a moderate inaccuracy in predictions. These conclusions are evident in the discrepancy between the predictions of the x and y values, in which the x value predictions tended to be more accurate because they are governed by simpler behaviors. In addition to the successful realization of QGNNs, the results of this study suggest that the Implementable QGNN could have an advantage over CGNNs in learning efficiency and accuracy. However, further testing is required to confirm this. Furthermore, it is likely that the overall moderate inaccuracy in predictions is not wholly a fault of the models, but perhaps a result of not fine-tuning the hyperparameters. This leads to a path for a potential future study. In particular, future research should implement these models under a variety of hyperparameters, observing the consequences on learning efficiency and accuracy.

###### Acknowledgements.

The views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of Defense, the United States Air Force or the Griffiss Institute. The appearance of external hyperlinks does not constitute endorsement by the United States Department of Defense of the linked websites, or the information, products, or services contained therein. The Department of Defense does not exercise any editorial, security, or other control over the information you may find at these locations.

## Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2310.02340
Learning Interpretable Deep Disentangled Neural Networks for Hyperspectral Unmixing
Although considerable effort has been dedicated to improving the solution to the hyperspectral unmixing problem, non-idealities such as complex radiation scattering and endmember variability negatively impact the performance of most existing algorithms and can be very challenging to address. Recently, deep learning-based frameworks have been explored for hyperspectral umixing due to their flexibility and powerful representation capabilities. However, such techniques either do not address the non-idealities of the unmixing problem, or rely on black-box models which are not interpretable. In this paper, we propose a new interpretable deep learning method for hyperspectral unmixing that accounts for nonlinearity and endmember variability. The proposed method leverages a probabilistic variational deep-learning framework, where disentanglement learning is employed to properly separate the abundances and endmembers. The model is learned end-to-end using stochastic backpropagation, and trained using a self-supervised strategy which leverages benefits from semi-supervised learning techniques. Furthermore, the model is carefully designed to provide a high degree of interpretability. This includes modeling the abundances as a Dirichlet distribution, the endmembers using low-dimensional deep latent variable representations, and using two-stream neural networks composed of additive piecewise-linear/nonlinear components. Experimental results on synthetic and real datasets illustrate the performance of the proposed method compared to state-of-the-art algorithms.
Ricardo Augusto Borsoi, Deniz Erdoğmuş, Tales Imbiriba
2023-10-03T18:21:37Z
http://arxiv.org/abs/2310.02340v1
# Learning Interpretable Deep Disentangled Neural Networks for Hyperspectral Unmixing ###### Abstract Although considerable effort has been dedicated to improving the solution to the hyperspectral unmixing problem, non-idealities such as complex radiation scattering and endmember variability negatively impact the performance of most existing algorithms and can be very challenging to address. Recently, deep learning-based frameworks have been explored for hyperspectral unmixing due to their flexibility and powerful representation capabilities. However, such techniques either do not address the non-idealities of the unmixing problem, or rely on black-box models which are not interpretable. In this paper, we propose a new interpretable deep learning method for hyperspectral unmixing that accounts for nonlinearity and endmember variability. The proposed method leverages a probabilistic variational deep-learning framework, where disentanglement learning is employed to properly separate the abundances and endmembers. The model is learned end-to-end using stochastic backpropagation, and trained using a self-supervised strategy which leverages benefits from semi-supervised learning techniques. Furthermore, the model is carefully designed to provide a high degree of interpretability. This includes modeling the abundances as a Dirichlet distribution, the endmembers using low-dimensional deep latent variable representations, and using two-stream neural networks composed of additive piecewise-linear/nonlinear components. Experimental results on synthetic and real datasets illustrate the performance of the proposed method compared to state-of-the-art algorithms. Hyperspectral data, spectral unmixing, neural networks, disentanglement, deep learning. ## I Introduction ### _Background_ Due to physical limitations of the imaging process, hyperspectral images (HIs) provide very high spectral resolution but low spatial resolution, which means that a pixel usually contains a mixture of several different materials [2]. Hyperspectral unmixing (HU) consists in estimating the spectral signatures of pure materials (i.e., _endmembers_ - EMs) in a scene and the proportions with which they are contained in each pixel (i.e., _abundances_) directly from an HI [3]. Due to the unsupervised nature of HU, adequately exploring the physics of the problem when devising modeling strategies is paramount for obtaining stable and high-quality EM and abundance estimations. Simplistic methods considered the interaction between light and the EMs to be linear [3]. However, this model is over-simplified, degrading the quality of the estimates. Thus, addressing important non-idealities such as nonlinear interactions between light and the materials [4] and the variability of the EMs in different HI pixels [5] has become the subject of much attention more recently. Different approaches have been proposed to perform HU (see Section II). However, traditional methods often lack the flexibility to represent non-idealities observed in practical HIs. This motivated the use of machine learning approaches for HU that ally both flexibility and performance [6, 7]. Nonetheless, interpretability remains a key point when leveraging machine learning strategies in HI analysis [8]. Recently, physically motivated machine learning approaches have been successfully applied to HU [9, 10, 11, 12]. 
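For reference, the linear model mentioned above (presumably the mixing model denoted (1) later in the text) expresses each observed pixel as a noisy convex combination of the EM signatures: \[\mathbf{y}=\mathbf{M}\mathbf{a}+\mathbf{e}\,,\qquad\mathbf{a}\geq\mathbf{0}\,,\quad\mathbf{1}^{\top}\mathbf{a}=1\,,\] where \(\mathbf{y}\in\mathbb{R}^{L}\) is an \(L\)-band pixel, \(\mathbf{M}\in\mathbb{R}^{L\times P}\) collects the \(P\) EM signatures, \(\mathbf{a}\in\mathbb{R}^{P}\) contains the abundances, and \(\mathbf{e}\) is additive noise; the constraints follow from the abundances representing fractions.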
The advantage of such physically motivated models with respect to fully black-box strategies lies in the interpretability of the estimated EMs and abundance parameters, which is a requirement for meaningful unmixing results as it allows a user to understand the causes for the algorithm's behavior [13]. When deep learning strategies come into play, autoencoder (AEC) architectures are of special interest due to the intrinsic low-dimensionality of the abundance space with respect to the pixels, and to the connection between such strategies and hyperspectral mixing models [14, 15]. Thus, several approaches using AECs were proposed to solve HU addressing phenomena such as nonlinearity [10, 16, 17], EM variability [11, 18] and outliers [19]. Although such deep learning methods presented relevant solutions for HU, reaching high levels of accuracy while retaining physical interpretation, they fail to provide a separation between EMs and abundances that is both interpretable and accounts for existing spectral variability and nonlinear effects. Recently, supervised disentanglement learning has become a popular approach to separate latent variables in deep learning models into different factors of variation that can have a physical interpretation [20]. Disentangled decompositions have been considered for different applications (e.g., separating content from style in images [20]), and their potential will be explored in this work to aid the separation between abundance and EM variations in HU. ### _Contributions of this work_ In this work, we aim to develop a learning-based unmixing algorithm which accounts for challenges including both nonlinearities in the mixing process and spectral variability of the EMs. Moreover, different from black-box models, the proposed method is designed to retain a high degree of interpretability in the estimated abundances and EMs. ## II Related work _Model-based HU:_ A number of works represent EM variability using, e.g., additive perturbations [50], multiplicative scaling factors [51, 52],
combinations of additive and multiplicative factors [53], or multiscale spatial models [54]. HU is then formulated as a non-convex optimization problem using carefully designed regularizations to constrain the degrees of freedom of the model. Recent work also considered a linear-quadratic nonlinear mixture model with EM variability in HU [55]. _Deep learning-based HU:_ Deep learning has become an established tool for hyperspectral imaging, with methods such as graph convolutional neural networks (NNs) [56] and the transformer architecture [57] bringing significant advances to hyperspectral classification. Various deep learning-based approaches have also been proposed for HU. Early works trained NNs as supervised regression methods to learn a mapping from the mixed pixels to their abundance values [58, 59]. However, those methods are hindered by the limited availability of training data with ground truth. This motivated the development of algorithms such as extended support vector machines (i.e., soft classification) [60], which require only EM signatures as training samples. Recently, autoencoder (AEC) networks have become widely used in unsupervised HU [7]. AECs are encoder-decoder (i.e., bottleneck) NNs which can be applied for HU by identifying the decoder with the mixing model (1), the encoder with its inverse, and the latent codes with the abundances [14, 15]. AECs have been developed for linear HU using, e.g., appropriately designed encoder networks or preprocessing approaches to reduce the effect of noise and outliers [61, 19], sparsity constraints [62, 63], or convolutional architectures to exploit spatial information about the HIs [64, 65]. Coupled convolutional AECs were also used to jointly unmix hyperspectral and multispectral data for image fusion [66]. Other methods based on AECs proposed to address nonlinear HU by using nonlinear architectures for the decoder, including a post-nonlinear model [16], additive nonlinearities [67, 68], or using specifically designed nonlinear NN layers [69]. A model-based architecture for the encoder was also proposed in [17] by exploring the relationship between the AEC and the nonlinear mixing model. AECs have also been used to address EM variability in HU as generative EM models to capture the low-dimensional manifold of spectral variability [11]. This model was used in matrix factorization [11], structured sparse regression [21], and probabilistic [18] HU methods. Endmember variability was also addressed in [70] by a spatial-spectral autoencoder using an additive perturbation model [50], and in non-AEC-based approaches using Gaussian Process regression [71, 72]. Sets of multiple pure pixels extracted from the HI (which can be seen as samples from EM signatures) have also been explored to improve the robustness of AECs in HU, either by learning mappings from the pure pixels to their corresponding abundances [10, 73], or by regularizing the EMs estimated by the decoder [74]. Recurrent NNs have also been recently employed in [75] to perform HU accounting for the temporal variability of the EMs. Other works have also considered different loss functions for AEC training, such as the Wasserstein distance [76] and adversarial losses [77]. A cycle-consistency loss was used in [78] to guide the reconstruction of two cascaded AECs in HU. Self-supervised learning has also been integrated into the design of AEC strategies for HU [79].
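To make the AEC analogy above concrete, the following is a minimal PyTorch sketch (an illustrative toy, not the architecture of any specific cited method): a bias-free linear decoder plays the role of the mixing model (1), its weight matrix acting as the EM estimate, while the softmax latent code plays the role of the abundances; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class LinearUnmixingAEC(nn.Module):
    """Toy unmixing autoencoder: the encoder maps a pixel to abundances
    and the bias-free linear decoder implements the linear mixing."""
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, n_endmembers),
            nn.Softmax(dim=-1),      # nonnegative, sum-to-one latent code
        )
        # Decoder weight (n_bands x n_endmembers) is the EM matrix estimate.
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, y: torch.Tensor):
        a = self.encoder(y)          # abundances (latent code)
        return self.decoder(a), a    # reconstruction y_hat = M a

model = LinearUnmixingAEC(n_bands=224, n_endmembers=3)
pixels = torch.rand(16, 224)         # a dummy batch of HI pixels
recon, abundances = model(pixels)
loss = torch.nn.functional.mse_loss(recon, pixels)  # reconstruction loss
```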
Moreover, non-AEC approaches were also proposed for HU, such as parametrizations of the abundances using untrained deep prior models [80] or designing NNs for unmixing by unfolding optimization algorithms such as the ADMM [81] or sparse regression methods [12]. Differently from approaches such as unfolded NNs, the proposed method is built upon a statistical model of the mixing and unmixing process, where some of the PDFs are parametrized by learnable models. ## III Proposed method _Definitions:_ Let us consider a dataset with \(N_{U}\) unlabeled HI pixels \(\mathcal{D}_{U}=\{\mathbf{y}_{1},\ldots,\mathbf{y}_{N_{U}}\}\). We also consider a supervised dataset \(\mathcal{D}_{S}=\{(\mathbf{y}_{1},\mathbf{a}_{1},\mathbf{M}_{1}),\ldots,(\mathbf{y}_{N_{S}},\mathbf{a}_{N_{S}},\mathbf{M}_{N_{S}})\}\) with \(N_{S}\) labeled pixels (i.e., with their corresponding EMs and abundances), which will be generated directly from \(\mathcal{D}_{U}\) based on a self-supervised learning strategy that is described in detail in Section IV-B. We also denote by \(\mathcal{D}=\mathcal{D}_{U}\bigcup\mathcal{D}_{S}\) the full dataset. Functions without accent (e.g., \(\mathbf{x}\)) belong to the mixing model, while functions with the _tilde_ accent (e.g., \(\tilde{\mathbf{x}}\)) belong to the inference model. To simplify the notation, the pixel index will be omitted when possible. _Section overview:_ In the following, we first describe the proposed mixing model in Section III-A. Then, the unmixing problem is formulated in a disentangled variational inference framework in Section III-B. The cost function for the semi-supervised learning objective is presented in Section III-C. Finally, the optimization of the proposed cost function is addressed in Section III-D. An overview of the proposed framework is given in Figure 1, which illustrates the interplay between the different parts of the statistical model representing the mixing (i.e., generative) and unmixing (i.e., inference) process, the generation of the training data, and the different terms in the loss function which is optimized to perform unmixing. Figure 1: Overview of the proposed method. A self-supervised learning module is used to generate unsupervised and supervised training data, \(\mathcal{D}_{U}\) and \(\mathcal{D}_{S}\), to train the model. A generative model (with parameters \(\theta\)) is constructed with a prior model \(p_{\theta}(\mathbf{M},\mathbf{a},\mathbf{Z})\), describing the a priori behavior of the EMs, abundances, and variability parameters, and a mixing model \(p_{\theta}(\mathbf{y}|\mathbf{M},\mathbf{a},\mathbf{Z})\), containing both a linear and a nonlinear component. To solve HU, an approximate posterior distribution (with parameters \(\phi\)) \(q_{\phi}(\mathbf{M},\mathbf{a},\mathbf{Z}|\mathbf{y})\) is constructed, and independence assumptions between the abundances and the EM variability parameters are used to disentangle these variables. A semi-supervised learning-based optimization objective is used to learn the parameters of both the generative model and the approximate posterior by maximizing a lower bound on the log-likelihood of both the supervised and unsupervised data, in \(\mathcal{L}_{\mathrm{Data}}(\theta,\phi,\mathcal{D})\), together with regularizations promoting abundance sparsity (\(\mathcal{L}_{\mathrm{Sparse}}(\theta,\phi,\mathcal{D})\)) and controlling the amount of nonlinearity in both the generative and inference models. The learned posterior then provides the HU solution as the EMs and the abundances.
### _The mixing model_ _Abundance prior:_ When assigning a statistical distribution to the abundances, it is very important to account for the physical constraints. The Dirichlet distribution is a natural choice for modeling abundance vectors since it enforces the non-negativity and sum-to-one constraints of the model, thus being appropriate to represent fractions. In this work we consider a flat Dirichlet distribution: \[p(\mathbf{a})=\mathrm{Dir}(\mathbf{a};\mathbf{1}_{P})\,, \tag{2}\] where \(\mathbf{1}_{P}\in\mathbb{R}^{P}\) is a \(P\)-dimensional vector of ones, which contains the concentration parameters of the distribution. The flatness indicates the lack of prior knowledge over the abundances other than their physical constraints. These properties have made the Dirichlet distribution a popular choice of prior for Bayesian HU strategies [82, 83, 84]; it has also been used in conjunction with a Markov transition model to represent the abundances in multitemporal HU [85]. _Endmember model:_ Several models have been proposed to account for EM variability by representing the spectral signatures of the materials in each pixel using deterministic or statistical models. Deterministic models include the use of additive [50] or multiplicative [51, 52, 86] perturbations of reference EM signatures, or a combination thereof [53]. The Gaussian [87], Beta [47] or Gaussian mixture [48] distributions have been considered as statistical models. However, in practice EM variability can be very complex and specifying a distribution \(p(\mathbf{M})\) directly is very difficult. Moreover, these models do not explore an important property of spectral variability. Since the spectrum of most materials is a function of a small number of physico-chemical parameters [88, 89], the EM signatures are supported on a low-dimensional submanifold of the high-dimensional spectral space. In this work, we consider a deep generative EM model to provide a flexible representation of EM signatures while accounting for their low intrinsic dimension [11]. Specifically, we model \(\mathbf{M}\) as a random variable that may be arbitrary but which follows a Gaussian distribution when conditioned on a set of low-dimensional latent variables \(\mathbf{Z}=[\mathbf{z}_{1},\dots,\mathbf{z}_{P}]\), \(\mathbf{z}_{k}\in\mathbb{R}^{H}\) (i.e., \(p(\mathbf{M}|\mathbf{Z})\) is Gaussian). The latent variables \(\mathbf{Z}\) control the variability of the spectral signatures, their dimension \(H\) being related to the flexibility of the model to represent the EMs in an HI. Results in [11] indicate that small values of \(H\) are sufficient for obtaining good unmixing results and represent EM variability well. In this way, although the conditional distribution is tractable, the marginal PDF \(p(\mathbf{M})=\int p(\mathbf{M}|\mathbf{Z})p(\mathbf{Z})d\mathbf{Z}\) can be arbitrary, rendering the model very flexible. Nonetheless, such a marginalization will not be necessary; instead, the latent variables \(\mathbf{Z}\) become an additional parameter that will be computed during inference. Considering the different EMs to be conditionally independent, the model becomes: \[p_{\theta}(\mathbf{M}|\mathbf{Z})=\prod_{k=1}^{P}p_{\theta}(\mathbf{m}_{k}|\mathbf{z}_{k})\,, \tag{3}\] with \(\mathbf{m}_{k}\in\mathbb{R}^{L}\) being the \(k\)-th column of \(\mathbf{M}\), and \[p_{\theta}(\mathbf{m}_{k}|\mathbf{z}_{k})=\mathcal{N}\big{(}\mathbf{m}_{k};\mathbf{\mu}_{\theta}^{m,k}(\mathbf{z}_{k}),\mathrm{diag}(\mathbf{\sigma}_{\theta}^{m,k}(\mathbf{z}_{k}))\big{)}\,.
\tag{4}\] Here \(\mathcal{N}(\mathbf{x};\mathbf{\mu},\mathbf{\Sigma})\) denotes a Gaussian distribution of variable \(\mathbf{x}\) with mean \(\mathbf{\mu}\) and covariance matrix \(\mathbf{\Sigma}\). Functions \(\mathbf{\mu}_{\theta}^{m,k}(\mathbf{z}_{k})\) and \(\mathrm{diag}(\mathbf{\sigma}_{\theta}^{m,k}(\mathbf{z}_{k}))\) return the mean and (diagonal) covariance matrix of the distribution, and \(\theta\) is the set of parameters of the generative model. We assign a prior for \(\mathbf{Z}\) as \[p(\mathbf{Z})=\prod_{k=1}^{P}p(\mathbf{z}_{k})\,,\quad p(\mathbf{z}_{k})=\mathcal{N}(\mathbf{z}_{k};\mathbf{0},\mathbf{I})\,. \tag{5}\] Note that the number of EMs, \(P\), is assumed to be known a priori. In practice, it can be estimated from the HI using virtual dimensionality estimation techniques [89]. However, special care has to be taken to avoid overestimating the number of EMs in the scene, as explained in [90]. _Mixing model:_ Several models have been proposed to represent both macroscopic and intimate nonlinear mixtures, such as the BLMM and Hapke models [38, 4]. However, due to the complexity of the mixing process, specifying a precise model can be difficult. This has motivated the development of HU methods based on nonparametric models where the nonlinearity is learned from the data, such as in kernel-based methods [9, 41] and nonlinear AEC networks [10, 16, 17]. In this work, we consider a decomposition of the nonlinear mixing process as the sum of a linear term (the LMM) and a nonparametric nonlinear component [9, 17, 91, 92]: \[\mathbf{y}=\mathbf{M}\mathbf{a}+\mathbf{\mu}_{\theta}^{y}(\mathbf{a},\mathbf{M})+\mathbf{e}\,, \tag{6}\] where \(\mathbf{\mu}_{\theta}^{y}\) denotes the nonlinear contribution. This model aims at representing both macroscopic and intimate mixtures [91], with the linear term representing the main contribution of macroscopic interactions, and the nonlinear term representing the effects of multiple scattering and intimate mixtures. Its nonparametric form does not make direct assumptions about the type of mixture in an HI, at the cost of having to be learned. Moreover, the amount of nonlinearity can be controlled by penalizing the norm of \(\mathbf{\mu}_{\theta}^{y}\) during the learning process, the model becoming close to the LMM when \(\mathbf{\mu}_{\theta}^{y}\) is small. Considering the noise \(\mathbf{e}\in\mathbb{R}^{L}\) to be independent, white and Gaussian, the data likelihood becomes: \[p_{\theta}(\mathbf{y}|\mathbf{a},\mathbf{M})=\mathcal{N}\big{(}\mathbf{y};\mathbf{M}\mathbf{a}+\mathbf{\mu}_{\theta}^{y}(\mathbf{a},\mathbf{M}),\sigma_{\theta}^{y}\mathbf{I}\big{)}\,, \tag{7}\] where \(\sigma_{\theta}^{y}\in\mathbb{R}_{+}\) is the noise variance in each band. Since we assume that the abundances, the EMs and the noise are independent, this leads to the following factorization for the joint distribution: \[p_{\theta}(\mathbf{y},\mathbf{a},\mathbf{M},\mathbf{Z})=p_{\theta}(\mathbf{y}|\mathbf{a},\mathbf{M})p_{\theta}(\mathbf{a})p_{\theta}(\mathbf{M}|\mathbf{Z})p(\mathbf{Z})\,. \tag{8}\] ### _The unmixing problem_ The unmixing problem consists of finding the posterior distribution \(p(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\). However, due to the choice of distributions in (8), computing the posterior analytically is intractable, and some approximations have to be performed.
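For concreteness, ancestral sampling from the factorization (8) can be sketched as follows in PyTorch; the network bodies, dimensions, and noise level are illustrative stand-ins for the learnable components in (3)-(7), and only the EM means are used (the EM covariance is ignored):

```python
import torch
from torch.distributions import Dirichlet, Normal

L, P, H = 224, 3, 2                       # bands, EMs, latent dim (illustrative)

mu_m = torch.nn.Linear(H, L)              # stand-in for the EM mean network in (4)
nonlin = torch.nn.Sequential(             # stand-in for mu_theta^y(a, M) in (6)
    torch.nn.Linear(P + P * L, L), torch.nn.ReLU(), torch.nn.Linear(L, L))
sigma_y = 1e-3                            # noise variance sigma_theta^y in (7)

z = Normal(0.0, 1.0).sample((P, H))       # z_k ~ N(0, I), prior (5)
M = torch.sigmoid(mu_m(z)).T              # L x P EM matrix from p(M|Z), (3)-(4)
a = Dirichlet(torch.ones(P)).sample()     # flat Dirichlet abundances, prior (2)
y_mean = M @ a + nonlin(torch.cat([a, M.T.reshape(-1)]))
y = Normal(y_mean, sigma_y ** 0.5).sample()   # observed pixel, likelihood (7)
```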
To approximate this posterior, we use a variational approximation, which consists of specifying a parametric distribution \(q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\) from a sufficiently rich family, and finding the set of parameters \(\phi\) which makes it as close as possible to the true posterior distribution \(p(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\) [93]. We consider the following factorization for this approximation: \[q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})=q_{\phi}(\mathbf{a}|\mathbf{y},\mathbf{M},\mathbf{Z})q_{\phi}(\mathbf{M}|\mathbf{Z},\mathbf{y})q_{\phi}(\mathbf{Z}|\mathbf{y}). \tag{9}\] Moreover, we also consider disentanglement learning within a statistical framework, simplifying the model by separating latent variables through statistical independence assumptions [20]. Thus, we assume that \(\mathbf{a}\) is independent of \(\mathbf{Z}\) conditioned on \(\mathbf{y}\) and \(\mathbf{M}\), and that \(\mathbf{M}\) is independent of \(\mathbf{y}\) conditioned on \(\mathbf{Z}\), that is: \[q_{\phi}(\mathbf{a}|\mathbf{y},\mathbf{M},\mathbf{Z})=q_{\phi}(\mathbf{a}|\mathbf{y},\mathbf{M})\,, \tag{10}\] \[q_{\phi}(\mathbf{M}|\mathbf{Z},\mathbf{y})=q_{\phi}(\mathbf{M}|\mathbf{Z})\,. \tag{11}\] Through these two assumptions, \(\mathbf{a}\) is disentangled from \(\mathbf{Z}\), and \(\mathbf{M}\) is disentangled from \(\mathbf{y}\). In the following, we define specific forms for each of those conditional PDFs. For \(q_{\phi}(\mathbf{a}|\mathbf{y},\mathbf{M})\), we consider another Dirichlet distribution: \[q_{\phi}(\mathbf{a}|\mathbf{y},\mathbf{M})=\mathrm{Dir}(\mathbf{a};\tilde{\mathbf{\gamma}}_{\phi}^{a}(\mathbf{y},\mathbf{M}))\,, \tag{12}\] where function \(\tilde{\mathbf{\gamma}}_{\phi}^{a}\) computes its concentration parameters. Most works (such as AECs) consider "black-box" models to compute the abundances from the mixed pixels. However, it is important to incorporate prior knowledge about the model to improve the interpretability of the results when parametrizing the abundance posterior distribution in (12). Specifically, it has been shown that for autoencoder networks, the amounts of nonlinearity in the mixing model and in the inference model are closely connected [17], in such a way that it is helpful to also decompose the encoder into linear and nonlinear parts, so that the amount of nonlinearity in both parts of the model can be controlled. Thus, we propose to write \(\tilde{\mathbf{\gamma}}_{\phi}^{a}(\mathbf{y},\mathbf{M})\) as: \[\tilde{\mathbf{\gamma}}_{\phi}^{a}(\mathbf{y},\mathbf{M})=s_{\text{ReLU}}\left(\tilde{\mathbf{\gamma}}_{\phi}^{a,\text{lin}}(\mathbf{y},\mathbf{M})+\tilde{\mathbf{\gamma}}_{\phi}^{a,\text{nlin}}(\mathbf{y},\mathbf{M})\right)\,, \tag{13}\] where \(\tilde{\mathbf{\gamma}}_{\phi}^{a,\text{lin}}(\mathbf{y},\mathbf{M})\) is a (piecewise) linear function which estimates the abundance mean and uncertainty based on a parametric model architecture which can also promote abundance sparsity effectively (as will be discussed in the following), while \(\tilde{\mathbf{\gamma}}_{\phi}^{a,\text{nlin}}(\mathbf{y},\mathbf{M})\) is a nonparametric function that is able to compensate other nonlinearities in the model. Function \(s_{\text{ReLU}}(\mathbf{x})=\max(\mathbf{0},\mathbf{x})\) is the ReLU activation, which is used to ensure the nonnegativity of \(\tilde{\mathbf{\gamma}}_{\phi}^{a}(\mathbf{y},\mathbf{M})\).
This decomposition allows us to use regularizations to explicitly control the contribution of the nonparametric term \(\tilde{\mathbf{\gamma}}_{\phi}^{a,\text{nlin}}(\mathbf{y},\mathbf{M})\) in the abundance estimates. Distribution \(q_{\phi}(\mathbf{Z}|\mathbf{y})\) is assumed to factorize as \(q_{\phi}(\mathbf{Z}|\mathbf{y})=\prod_{k=1}^{P}q_{\phi}(\mathbf{z}_{k}|\mathbf{y})\), where each \(q_{\phi}(\mathbf{z}_{k}|\mathbf{y})\) follows a Gaussian distribution: \[q_{\phi}(\mathbf{z}_{k}|\mathbf{y})=\mathcal{N}\big{(}\mathbf{z}_{k};\tilde{\mathbf{\mu}}_{\phi}^{z,k}(\mathbf{y}),\mathrm{diag}(\tilde{\mathbf{\sigma}}_{\phi}^{z,k}(\mathbf{y}))\big{)}\,, \tag{14}\] in which functions \(\tilde{\mathbf{\mu}}_{\phi}^{z,k}\) and \(\tilde{\mathbf{\sigma}}_{\phi}^{z,k}\) compute its mean and the elements of its diagonal covariance matrix, respectively. We also assume that \(q_{\phi}(\mathbf{M}|\mathbf{Z})\) can be factorized as \(q_{\phi}(\mathbf{M}|\mathbf{Z})=\prod_{k=1}^{P}q_{\phi}(\mathbf{m}_{k}|\mathbf{z}_{k})\), with: \[q_{\phi}(\mathbf{m}_{k}|\mathbf{z}_{k})=\mathcal{N}\big{(}\mathbf{m}_{k};\tilde{\mathbf{\mu}}_{\phi}^{m,k}(\mathbf{z}_{k}),\mathrm{diag}(\tilde{\mathbf{\sigma}}_{\phi}^{m,k}(\mathbf{z}_{k}))\big{)}\,. \tag{15}\] #### Sharing parameters To simplify the inference process, we consider the same form for the EM conditional distribution in both (4) and (15). More precisely, we use \(\tilde{\mathbf{\mu}}_{\phi}^{m,k}(\mathbf{z}_{k})=\mathbf{\mu}_{\theta}^{m,k}(\mathbf{z}_{k})\) and \(\tilde{\mathbf{\sigma}}_{\phi}^{m,k}(\mathbf{z}_{k})=\mathbf{\sigma}_{\theta}^{m,k}(\mathbf{z}_{k})\), therefore making \(q_{\phi}(\mathbf{M}|\mathbf{Z})=p_{\theta}(\mathbf{M}|\mathbf{Z})\). This is also known as "bottom-up" inference [94], and allows for parameter sharing to reduce the complexity of the model. An illustration of the relationship between the different random variables, \(\mathbf{a},\mathbf{Z},\mathbf{M}\) and \(\mathbf{y}\), in both the generative and inference models according to the selected factorizations of the PDFs and their parametrizations can be seen in Figure 2. ### _Objective function_ To learn the parameters of the model and perform HU, we maximize a cost function composed of three terms, defined as: \[\mathcal{L}_{\mathrm{T}}(\theta,\phi;\mathcal{D})=\mathcal{L}_{\mathrm{Data}}(\theta,\phi;\mathcal{D})-\mathcal{L}_{\mathrm{Sparse}}(\theta,\phi;\mathcal{D})-\mathcal{R}(\theta,\phi)\,. \tag{16}\] The first term attempts to fit the model to the semi-supervised training dataset \(\mathcal{D}\), and aims to maximize the data likelihood \(\log p(\mathcal{D})\). The second term aims to promote sparse abundance estimates, while the last term is a data-independent regularization on the model parameters. Figure 2: Overview of the dependencies between the variables in the proposed generative and inference (unmixing) models. Boxes denote functions, while the blue and red lines/arrows denote the data generating model and the inference model, respectively. _Data generating model:_ A latent variable \(\mathbf{Z}\) generates the EM matrix \(\mathbf{M}\), which together with the abundances \(\mathbf{a}\) generates the mixed pixel by an additive linear/nonlinear model. _Inference model:_ The learned model maps from the observed pixels to the latent EM representation \(\mathbf{Z}\), and from these to the estimated abundances using a "bottom-up" inference strategy and a two-stream neural network, respectively.
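As an illustration of the disentangled posterior (9)-(15), the sketch below builds the abundance posterior (12) with the two-stream parametrization (13); both stream bodies are hypothetical placeholders rather than the specific architectures given later in Section IV.

```python
import torch
import torch.nn as nn
from torch.distributions import Dirichlet

class AbundancePosterior(nn.Module):
    """Two-stream model for q(a | y, M), eqs. (12)-(13): a linear stream
    plus a nonparametric nonlinear stream, combined through a ReLU so
    the Dirichlet concentration parameters stay nonnegative."""
    def __init__(self, n_bands: int, n_em: int):
        super().__init__()
        in_dim = n_bands + n_bands * n_em   # pixel y concatenated with M
        self.linear_stream = nn.Linear(in_dim, n_em)   # placeholder stream
        self.nonlinear_stream = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_em))

    def forward(self, y: torch.Tensor, M: torch.Tensor) -> Dirichlet:
        x = torch.cat([y, M.reshape(-1)])
        gamma = torch.relu(self.linear_stream(x)
                           + self.nonlinear_stream(x)) + 1e-6  # avoid zeros
        return Dirichlet(gamma)             # q(a | y, M), eq. (12)

q_a = AbundancePosterior(n_bands=224, n_em=3)
y, M = torch.rand(224), torch.rand(224, 3)
a_sample = q_a(y, M).rsample()              # differentiable sample
```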
#### III-C1 **Term \(\mathcal{L}_{\rm Data}(\theta,\phi;\mathcal{D})\)** The main criterion used to learn the parameters of the model is to maximize the likelihood of the data: \[\log p(\mathcal{D})=\sum_{\mathbf{y}\in\mathcal{D}_{U}}\log p(\mathbf{y})+\sum_{(\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}}\log p(\mathbf{y},\mathbf{a},\mathbf{M})\,, \tag{17}\] where the first term in the right hand side of (17) consists in the data likelihood for the unlabeled dataset \(\mathcal{D}_{U}\), while the second term is the data likelihood for the labeled dataset \(\mathcal{D}_{S}\) (i.e., in which variables \(\mathbf{a}\) and \(\mathbf{M}\) are treated as fixed/observed, and thus their likelihood is also maximized). However, maximizing the expression in (17) is intractable because it involves marginalizing different variables in the joint distribution (8), which is a common issue in latent variable models [95]. This problem can be circumvented by maximizing the so-called _evidence lower bound_ (ELBO) of each term in (17): \[\sum_{\mathbf{y}\in\mathcal{D}_{U}}\log p(\mathbf{y})\geq L_{U}(\theta,\phi;\mathcal{D}_{U})\,, \tag{18}\] \[\sum_{(\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}}\log p(\mathbf{y},\mathbf{a},\mathbf{M})\geq L_{S}(\theta,\phi;\mathcal{D}_{S})\,, \tag{19}\] where \(L_{U}(\theta,\phi;\mathcal{D}_{U})\) and \(L_{S}(\theta,\phi;\mathcal{D}_{S})\) are given by [20, 25]: \[L_{U}(\theta,\phi;\mathcal{D}_{U})=\sum_{\mathbf{y}\in\mathcal{D}_{U}}\mathbb{E}_{q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})}\Big{[}\log p_{\theta}(\mathbf{y}|\mathbf{a},\mathbf{M},\mathbf{Z})+\log p_{\theta}(\mathbf{Z},\mathbf{a},\mathbf{M})-\log q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\Big{]}\,, \tag{20}\] and \[L_{S}(\theta,\phi;\mathcal{D}_{S})=\sum_{(\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}}\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})}\Big{[}\log p_{\theta}(\mathbf{y}|\mathbf{a},\mathbf{M},\mathbf{Z})+\log p_{\theta}(\mathbf{Z},\mathbf{a},\mathbf{M})-\log q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})\Big{]}\,. \tag{21}\] Moreover, it can be shown that maximizing the ELBO equivalently minimizes the KL divergence between the variational posterior \(q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\) and the (unknown) true posterior \(p_{\theta}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\) [96]. The cost function to be maximized is a combination of the lower bounds \(L_{U}\) and \(L_{S}\), and an additional regularization term [20, 25]: \[\mathcal{L}_{\rm Data}(\theta,\phi;\mathcal{D})=L_{U}(\theta,\phi;\mathcal{D}_{U})+\lambda\Big{[}L_{S}(\theta,\phi;\mathcal{D}_{S})+\beta\sum_{(\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}}\log q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})\Big{]}\,, \tag{22}\] where parameter \(\lambda\in\mathbb{R}_{+}\) is used to balance the contribution of the supervised and unsupervised terms in the cost function. This is especially important when the size of the supervised dataset is much smaller than that of the unsupervised one [20]. The additional term \(\log q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})\), weighted by \(\beta\in\mathbb{R}_{+}\), is a regularization added to the cost function to promote models in which the labeled data (abundances and EMs) have high likelihood in the posterior, conditioned on the pixels [25].
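As a schematic of how the terms in (22) might be combined in training code (purely illustrative; `elbo_unsup`, `elbo_sup`, and `log_q_sup` are assumed to be computed elsewhere following (20), (21), and the label-likelihood regularizer):

```python
import torch

def data_objective(elbo_unsup: torch.Tensor,
                   elbo_sup: torch.Tensor,
                   log_q_sup: torch.Tensor,
                   lam: float = 1.0, beta: float = 0.5) -> torch.Tensor:
    """Negative of L_Data in (22), so a stochastic optimizer can
    minimize it: the unsupervised ELBO plus the lambda-weighted
    supervised ELBO and beta-weighted label posterior likelihood."""
    l_data = elbo_unsup + lam * (elbo_sup + beta * log_q_sup)
    return -l_data  # maximizing L_Data == minimizing its negative
```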
#### III-C2 **Term \(\mathcal{L}_{\rm Sparse}(\theta,\phi;\mathcal{D})\)** An important property of many HIs is that while many materials may be present in the whole scene, only a small subset of them are usually present in each pixel. This leads the resulting abundance vectors to be sparse, with only a few of their elements being significantly far from zero [29]. This property can be used to improve the conditioning of the unmixing problem in the form of regularization strategies. In this work, we penalize the concentration parameters of the abundance posterior distribution (12) to encourage sparse abundance reconstructions: \[\mathcal{L}_{\rm Sparse}(\theta,\phi;\mathcal{D})=\tau\sum_{(\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}}\big{\|}\tilde{\gamma}_{\phi}^{a}(\mathbf{y},\mathbf{M})\big{\|}_{1/2}+\tau\sum_{\mathbf{y}\in\mathcal{D}_{U}}\mathbb{E}_{q_{\phi}(\mathbf{M}|\mathbf{Z})q_{\phi}(\mathbf{Z}|\mathbf{y})}\Big{\{}\big{\|}\tilde{\gamma}_{\phi}^{a}(\mathbf{y},\mathbf{M})\big{\|}_{1/2}\Big{\}}\,, \tag{23}\] where the averages of the L\({}_{1/2}\) semi-norm of the Dirichlet concentration parameters are performed with respect to the supervised (first term) and unsupervised (second term) data portions, and \(\tau\in\mathbb{R}_{+}\) is a parameter controlling the regularization. Furthermore, we considered the L\({}_{1/2}\) norm penalization as it is a more effective measure of sparseness when compared to the traditionally used L\({}_{1}\) norm penalty [24]. #### III-C3 **Term \(\mathcal{R}(\theta,\phi)\)** To control the nonlinear contributions and retain the interpretability of both the encoder (13) and decoder (6) models, we add the penalization term \[\mathcal{R}(\theta,\phi)=\varsigma_{1}\big{\|}\mathbf{\mu}_{\theta}^{y}\big{\|}_{\rm FNN}+\varsigma_{2}\big{\|}\tilde{\gamma}_{\phi}^{a,\rm nlin}\big{\|}_{\rm FNN}\,, \tag{24}\] where \(\|f\|_{\rm FNN}\) denotes the norm of some feedforward neural network \(f\), defined as the sum of the norms of its parameters at all layers. For instance, if \(f\) represents a multilayer perceptron (MLP) with weights \(\mathbf{W}_{\ell}\) and biases \(\mathbf{b}_{\ell}\) at layer \(\ell\), the norm is computed as \(\|f\|_{\rm FNN}=\sum_{\ell}\|\mathbf{W}_{\ell}\|_{F}+\|\mathbf{b}_{\ell}\|_{2}\), with \(\|\cdot\|_{F}\) denoting the Frobenius norm. The first regularization term in (24) penalizes the amount of nonlinearity in the mixing model (6), while the second term penalizes the contribution of the nonparametric component in the abundance estimates in (13). Constants \(\varsigma_{1},\varsigma_{2}\in\mathbb{R}_{+}\) are regularization parameters. ### _Optimizing the cost function_ Due to the nature of the probabilistic model considered in this work, directly maximizing the cost function \(\mathcal{L}_{\rm T}(\theta,\phi;\mathcal{D})\) in (16) is not trivial. This occurs because it involves some conditional distributions which are not available in the model factorization devised in Section III-B. Moreover, it also requires optimizing expectations with respect to both Gaussian and Dirichlet distributions depending on the model parameters. We address these problems in the following. #### III-D1 **Rewriting the unavailable distributions** The first difficulty comes from the cost function \(\mathcal{L}_{\rm Data}(\theta,\phi;\mathcal{D})\) in (22). First, it requires sampling from and evaluating \(q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})\) (see (21)), which is not available under the selected posterior factorization in (9).
Similarly, we also do not have access to the marginal posterior \(q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})\). To circumvent the first issue, we can proceed as in [20] and rewrite the last two terms of \(\mathcal{L}_{\mathrm{Data}}(\theta,\phi;\mathcal{D})\) as: \[L_{S}(\theta,\phi;\mathcal{D}_{S})+\beta\sum_{(\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}}\log q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})=\sum_{(\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}}\Big{[}\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})}\Big{\{}\log p_{\theta}(\mathbf{y}|\mathbf{a},\mathbf{M},\mathbf{Z})+\log p_{\theta}(\mathbf{a},\mathbf{M},\mathbf{Z})-\log q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\Big{\}}+(1+\beta)\log q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})\Big{]}\,, \tag{25}\] where we added and subtracted the term \(\log q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})\). Note that the variational posterior inside the expectation is now in the same form as (9). However, this still requires optimizing an expectation taken with respect to the unavailable \(q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})\). This can be approximated using importance sampling [20], which allows us to approximate the expectation w.r.t. this term by sampling from \(\mathbf{Z}^{(i)}\sim q_{\phi}(\mathbf{Z}|\mathbf{y})\) (for each datapoint \((\mathbf{y},\mathbf{a},\mathbf{M})\in\mathcal{D}_{S}\)), which is available under the factorization (9). Using self-normalized importance sampling with \(K\) samples, we can write the expectation in the right hand side of (25) as: \[\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})}\big{\{}\log p_{\theta}(\mathbf{y},\mathbf{a},\mathbf{M},\mathbf{Z})-\log q_{\phi}(\mathbf{Z},\mathbf{a},\mathbf{M}|\mathbf{y})\big{\}}=\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y})}\bigg{\{}\frac{q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})}{q_{\phi}(\mathbf{Z}|\mathbf{y})}\bigg{[}\log\frac{p_{\theta}(\mathbf{y},\mathbf{a},\mathbf{M},\mathbf{Z})}{q_{\phi}(\mathbf{Z},\mathbf{a},\mathbf{M}|\mathbf{y})}\bigg{]}\bigg{\}}=\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y})}\bigg{\{}\frac{q_{\phi}(\mathbf{Z}|\mathbf{y},\mathbf{a},\mathbf{M})q_{\phi}(\mathbf{M},\mathbf{a}|\mathbf{y})}{q_{\phi}(\mathbf{Z}|\mathbf{y})q_{\phi}(\mathbf{M},\mathbf{a}|\mathbf{y})}\bigg{[}\log\frac{p_{\theta}(\mathbf{y},\mathbf{a},\mathbf{M},\mathbf{Z})}{q_{\phi}(\mathbf{Z},\mathbf{a},\mathbf{M}|\mathbf{y})}\bigg{]}\bigg{\}}=\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y})}\bigg{\{}\frac{q_{\phi}(\mathbf{M}|\mathbf{Z})}{q_{\phi}(\mathbf{M}|\mathbf{y})}\bigg{[}\log\frac{p_{\theta}(\mathbf{y},\mathbf{a},\mathbf{M},\mathbf{Z})}{q_{\phi}(\mathbf{Z},\mathbf{a},\mathbf{M}|\mathbf{y})}\bigg{]}\bigg{\}}\simeq\sum_{i=1}^{K}\frac{\omega^{(i)}}{\Omega}\log\frac{p_{\theta}(\mathbf{y},\mathbf{a},\mathbf{M},\mathbf{Z}^{(i)})}{q_{\phi}(\mathbf{Z}^{(i)},\mathbf{a},\mathbf{M}|\mathbf{y})}\,, \tag{26}\] where (10) and (11) were used to obtain the third equality. Variable \(\mathbf{Z}^{(i)}\) is sampled as \(\mathbf{Z}^{(i)}\sim q_{\phi}(\mathbf{Z}|\mathbf{y})\), and the importance weights \(\omega^{(i)}\) and their normalization factor \(\Omega\) are obtained as \[\frac{q_{\phi}(\mathbf{M}|\mathbf{Z}^{(i)})}{q_{\phi}(\mathbf{M}|\mathbf{y})}\varpropto q_{\phi}(\mathbf{M}|\mathbf{Z}^{(i)})\triangleq\omega^{(i)}\,, \tag{27}\] \[\Omega=\sum_{i=1}^{K}\omega^{(i)}\,.
\tag{28}\] Note that since \(q_{\phi}(\mathbf{M}|\mathbf{y})\) in (27) does not depend on \(i\), it appears as a common factor in both \(\omega^{(i)}\) and \(\Omega\), and thus cancels in the normalized weights \(\omega^{(i)}/\Omega\) used in (26). Therefore, \(q_{\phi}(\mathbf{M}|\mathbf{y})\) (which is not available under the chosen posterior factorization) is not required. Finally, to approximate the term \(\log q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})\) in (25), note that it can be lower bounded as [20]: \[\log q_{\phi}(\mathbf{a},\mathbf{M}|\mathbf{y})\geq\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y})}\bigg{\{}\log\frac{q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})}{q_{\phi}(\mathbf{Z}|\mathbf{y})}\bigg{\}}=\log q_{\phi}(\mathbf{a}|\mathbf{M},\mathbf{y})+\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y})}\Big{\{}\log q_{\phi}(\mathbf{M}|\mathbf{Z})\Big{\}}\,. \tag{29}\] Then, using a Monte Carlo estimator of the expectation in (29) with \(K\) samples gives us: \[\mathbb{E}_{q_{\phi}(\mathbf{Z}|\mathbf{y})}\Big{\{}\log q_{\phi}(\mathbf{M}|\mathbf{Z})\Big{\}}\simeq\frac{1}{K}\sum_{i=1}^{K}\log q_{\phi}(\mathbf{M}|\mathbf{Z}^{(i)})=\frac{1}{K}\sum_{i=1}^{K}\log\omega^{(i)}\,. \tag{30}\] It can be seen that this approximation can be computed directly using the importance weights obtained in (27). To obtain the final cost function, we rewrite the terms in (16) leveraging the previous results. For the terms in \(\mathcal{L}_{\mathrm{Data}}(\theta,\phi;\mathcal{D})\) shown in (25) we used the lower bound in (29) and the importance sampling Monte Carlo approximation in (26). For the remaining \(L_{U}(\theta,\phi;\mathcal{D}_{U})\) term and \(\mathcal{L}_{\mathrm{Sparse}}(\theta,\phi;\mathcal{D})\) we applied Monte Carlo approximations with \(K_{E}\) samples to compute their expectations. The resulting cost function is shown in equation (31). ### _Reparametrization_ In order to optimize the cost function in (31) with respect to the parameters \(\theta\) and \(\phi\), it is necessary to estimate the gradient of expressions involving variables \(\mathbf{Z}^{(i)}\), \(\mathbf{M}^{(i)}\) and \(\mathbf{a}^{(i)}\), which are sampled from the variational posterior distribution \(q_{\phi}(\mathbf{a},\mathbf{M},\mathbf{Z}|\mathbf{y})\). However, since this distribution depends on the parameters \(\phi\), it is necessary to account for the dependency of the samples \(\mathbf{Z}^{(i)}\), \(\mathbf{M}^{(i)}\) and \(\mathbf{a}^{(i)}\) on \(\phi\) when computing the gradients. To this end, we consider the reparametrization trick, which provides low-variance gradient estimates for such problems [97]. In general, this is performed by writing the considered random variables (say, \(\mathbf{x}\)) in terms of a distribution that does not depend on \(\phi\), e.g., for a given function \(f\) to be differentiated, we have: \[f(\mathbf{x})=f(g(\mathbf{\epsilon},\phi))\,, \tag{32}\] where \(g\) is a function such that \(\mathbf{x}\) and \(g(\mathbf{\epsilon},\phi)\) have the same distribution, and \(\mathbf{\epsilon}\sim p(\mathbf{\epsilon})\) does not depend on \(\phi\). Thus, the gradient \(\partial f(\mathbf{x})/\partial\phi\) can be computed straightforwardly and estimated using Monte Carlo sampling. This strategy can be applied to reparametrize the samples from \(q_{\phi}(\mathbf{M}|\mathbf{Z})\) and \(q_{\phi}(\mathbf{Z}|\mathbf{y})\).
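A minimal sketch of the Gaussian case of (32), with a diagonal covariance as in (14): the sample is written as the mean plus noise scaled by the standard deviation, so gradients flow back to the distribution parameters.

```python
import torch

def reparametrized_gaussian_sample(mu: torch.Tensor,
                                   sigma: torch.Tensor) -> torch.Tensor:
    """x = g(eps, phi) = mu + sigma * eps with eps ~ N(0, I), so that
    dx/dmu and dx/dsigma are well defined for backpropagation."""
    eps = torch.randn_like(mu)   # noise independent of the parameters
    return mu + sigma * eps

mu = torch.zeros(2, requires_grad=True)
sigma = torch.ones(2, requires_grad=True)
x = reparametrized_gaussian_sample(mu, sigma)
x.sum().backward()               # gradients reach mu and sigma
```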
Since the variables \(\mathbf{Z}\) and \(\mathbf{M}\) are conditionally Gaussian, the reparametrization trick is straightforward to apply (as in [98]), resulting in: \[\mathbf{Z}^{(i)}=\big{[}\tilde{\mathbf{\mu}}_{\phi}^{z,1}(\mathbf{y}),\ldots,\tilde{\mathbf{\mu}}_{\phi}^{z,P}(\mathbf{y})\big{]}+\big{[}\tilde{\mathbf{\sigma}}_{\phi}^{z,1}(\mathbf{y}),\ldots,\tilde{\mathbf{\sigma}}_{\phi}^{z,P}(\mathbf{y})\big{]}\circ\mathbf{\Xi}_{z}^{(i)}\,, \tag{33}\] \[\mathbf{M}^{(i)}=\big{[}\tilde{\mathbf{\mu}}_{\phi}^{m,1}(\mathbf{Z}_{1}^{(i)}),\ldots,\tilde{\mathbf{\mu}}_{\phi}^{m,P}(\mathbf{Z}_{P}^{(i)})\big{]}+\big{[}\tilde{\mathbf{\sigma}}_{\phi}^{m,1}(\mathbf{Z}_{1}^{(i)}),\ldots,\tilde{\mathbf{\sigma}}_{\phi}^{m,P}(\mathbf{Z}_{P}^{(i)})\big{]}\circ\mathbf{\Xi}_{m}^{(i)}\,, \tag{34}\] where \(\circ\) is the Hadamard (i.e., elementwise) product and \(\mathbf{\Xi}_{z}^{(i)}\), \(\mathbf{\Xi}_{m}^{(i)}\) are matrices whose entries are sampled from the standard Gaussian distribution. The Dirichlet distribution in (12), however, does not admit such a direct reparametrization; alternatives include approximating it in the softmax basis [99] or reparametrizing rejection samplers [22]. Here we consider the so-called pathwise derivative estimator proposed in [23], which allows us to estimate the derivative of a stochastic variable \(\mathbf{a}\) sampled from a Dirichlet distribution \(\mathrm{Dir}(\mathbf{a};\mathbf{\gamma})\) as: \[\frac{\partial a_{i}}{\partial\gamma_{j}}=-\frac{\frac{\partial F_{\mathrm{Beta}}}{\partial\gamma_{j}}\big{(}a_{j};\gamma_{j},\sum_{k=1}^{P}\gamma_{k}-\gamma_{j}\big{)}}{f_{\mathrm{Beta}}\big{(}a_{j};\gamma_{j},\sum_{k=1}^{P}\gamma_{k}-\gamma_{j}\big{)}}\Big{(}\frac{\delta_{i-j}-a_{i}}{1-a_{j}}\Big{)}\,, \tag{35}\] where \(f_{\mathrm{Beta}}\) and \(F_{\mathrm{Beta}}\) are the Beta probability and cumulative distribution functions, respectively, and \(\delta_{i-j}\) the Kronecker delta function, satisfying \(\delta_{i-j}=1\) if \(i=j\) and zero otherwise (see [23] for more details). This expression can be used to compute the gradient of the cost function using the chain rule. ### _On the interpretability of the proposed approach_ Much interest has been recently dedicated to interpretability in the field of machine learning [13, 100], and while a mathematical formalization of interpretability is difficult, a definition that is appropriate in this context is that a model is interpretable when it allows a user/human observer to understand its behavior and that of the associated algorithm [100, 10]. In the proposed method, interpretability mainly comes from the statistical modeling and residual network design. First, the unmixing problem tackled in this work involves multiple nonidealities, such as nonlinearity and variability of the endmembers. By modeling both the generative and inference (unmixing) models as PDFs (in (8) and (9)) and employing adequate conditional independence assumptions on the different model variables (e.g., in (10) and (11)), we are able to separately account for each of these effects, instead of having a single black-box model. Moreover, the independence assumptions used in the inference model allow us to disentangle the influence of each of these effects in the recovered abundances [20]. This framework also leads to a principled training procedure based on semi-supervised learning, where the contribution of the data with and without labels (which might be extracted from the HI through self-supervised learning, as detailed later) is clear. Another point is that the residual formulation of the mixing model, i.e., a linear mixing model augmented with a flexible nonlinear component (see (6)), copes with different types of mixtures, such as the multiple mixture pixel model [91] or BLMMs [4].
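As a practical aside on the gradient computation in (35): automatic differentiation frameworks such as PyTorch (in which, as noted in Section V, the method was implemented) expose differentiable Dirichlet samples through comparable pathwise gradient estimators, so the chain rule through \(\mathbf{a}\) can be handled automatically; a minimal check:

```python
import torch
from torch.distributions import Dirichlet

# Differentiable sampling from Dir(a; gamma): gradients with respect
# to the concentration parameters flow through rsample().
gamma = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
a = Dirichlet(gamma).rsample()   # a lies on the unit simplex
a[0].backward()                  # d a_0 / d gamma is well defined
print(gamma.grad)                # nonzero gradient vector
```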
## IV Neural network design and training In this section, we detail different aspects of the method, consisting in: 1) the choices of model and NN architectures, 2) the construction of the supervised training data using a self-supervision strategy, and 3) the cost function optimization strategy. ### _Model details and neural network architectures_ #### IV-A1 **Endmember networks** For the EM networks, which are used in both the distribution \(p_{\theta}(\mathbf{m}_{k}|\mathbf{z}_{k})\) in the mixing model (4) and \(q_{\phi}(\mathbf{m}_{k}|\mathbf{z}_{k})\) in the inference network (15), we considered the following architectures. For the mean vectors \(\tilde{\mathbf{\mu}}_{\phi}^{m,k}(\mathbf{z}_{k})=\mathbf{\mu}_{\theta}^{m,k}(\mathbf{z}_{k})\), we considered a fully connected MLP as in [11] with six layers containing \(H\), \(\max\big{\{}[L/10],\,H+1\big{\}}\), \(\max\big{\{}[L/4],\,H+2\big{\}}+3\), \([1.2\times L]+5\) and \(L\) neurons. The ReLU activation function was used on the hidden layers and a sigmoid on the last layer. For the covariances, for simplicity we used an EM-wise learnable constant \(\sigma^{m,k}\in\mathbb{R}_{+}\) resulting in an isotropic model, which led to \(\tilde{\mathbf{\sigma}}_{\phi}^{m,k}(\mathbf{z}_{k})=\mathbf{\sigma}_{\theta}^{m,k}(\mathbf{z}_{k})=\sigma^{m,k}\mathbf{1}_{L}\), where \(\mathbf{1}_{L}\) is an \(L\)-dimensional vector of ones. #### IV-A2 **Variability inference network** For the variational distributions \(q_{\phi}(\mathbf{z}_{k}|\mathbf{y})\) in (14), we considered two fully connected MLPs with partially shared parameters to compute the mean and diagonal covariance elements. For the mean \(\tilde{\mathbf{\mu}}_{\phi}^{z,k}(\mathbf{y})\), we considered four fully connected layers with \(L\), \(5H\), \(2H\) and \(H\) neurons. For the diagonal covariance elements \(\tilde{\mathbf{\sigma}}_{\phi}^{z,k}(\mathbf{y})\), we considered six layers with \(L\), \(5H\), \(2H\), \(2H\), \(2H\) and \(H\) neurons. The first two hidden layers were shared between both networks, and the ReLU activation function was used in all hidden layers, with a linear activation in the last layer. #### IV-A3 **Nonlinear mixing (decoder) network** The measurement model \(p_{\theta}(\mathbf{y}|\mathbf{a},\mathbf{M})\) in (7) was parametrized as follows. For the function \(\mathbf{\mu}_{\theta}^{y}(\mathbf{a},\mathbf{M})\) representing the nonlinear contribution in the mixing model (6), we consider a fully connected MLP as used in [17]. It consisted of five fully connected layers, with \(P(L+1)\), \(PL\), \(L\), \(L\) and \(L\) neurons. The ReLU activation was used in all hidden layers, and a linear activation was used in the last layer. The observation noise variance \(\sigma_{\theta}^{y}\) in (7) is set as a positive learnable constant. #### IV-A4 **Abundance posterior (encoder) network** As discussed previously, we consider a two-stream architecture in (13) to estimate the abundance posterior distribution (12). For the piecewise linear reconstruction function \(\tilde{\mathbf{\gamma}}_{\phi}^{a,\text{lin}}(\mathbf{y},\mathbf{M})\), we propose to consider an unrolled NN architecture that is suitable for sparse regression. Unrolling algorithms to create NNs consists of converting the iterative equations of traditional optimization algorithms (e.g., gradient descent or iterative shrinkage/thresholding) into NN layers. The different parameters of the algorithms, which are traditionally fixed (or come from a forward measurement model), become the trainable parameters of the NN [101].
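As a preview of the unrolled layers specified in (36) and (37) below, here is a minimal sketch of such an ISTA-style stream (parameter values and test data are illustrative; in the full model this stream feeds into the ReLU combination (13)):

```python
import torch

def unrolled_abundance_stream(y: torch.Tensor, M: torch.Tensor,
                              eta: torch.Tensor, eta_sp: float,
                              eta_unc: float) -> torch.Tensor:
    """Unrolled shrinkage/thresholding: a gradient step on the
    least-squares fit, then shrinkage plus projection onto the
    nonnegative orthant; eta holds one learnable step size per layer."""
    h = torch.linalg.pinv(M) @ y                # h^(1) = pseudoinverse init
    for eta_m in eta:                           # hidden layers m = 1..M-1
        h = h - eta_m * M.T @ (M @ h - y)       # gradient step, cf. (36)
        h = torch.relu(h - eta_sp * eta_m)      # shrink + project, cf. (37)
    return eta_unc * h                          # final scaling h^(M)

L, P = 224, 3
M_mat = torch.rand(L, P)
y = M_mat @ torch.tensor([0.7, 0.3, 0.0])       # a noiseless mixed pixel
gamma_lin = unrolled_abundance_stream(
    y, M_mat, eta=torch.full((10,), 1e-3), eta_sp=0.1, eta_unc=5.0)
```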
Such architectures have become very prominent in machine learning solutions to various inverse problems due to their high interpretability [101]. In this work, we consider an architecture inspired by the LISTA method [102] with \(M\) network layers, which unfolds the iterative shrinkage/thresholding algorithm for sparse regression. This leads to the following architecture, with the \(m\)-th hidden network layer, \(m\in\{1,\ldots,M-1\}\), given by: \[\mathbf{h}^{(m+\frac{1}{2})}=\mathbf{h}^{(m)}-\eta^{(m)}\mathbf{M}^{\top}\big{(}\mathbf{M}\mathbf{h}^{(m)}-\mathbf{y}\big{)}\,, \tag{36}\] \[\mathbf{h}^{(m+1)}=s_{\text{ReLU}}\big{(}\mathbf{h}^{(m+\frac{1}{2})}-\eta^{\text{sp}}\eta^{(m)}\mathbf{1}_{P}\big{)}\,, \tag{37}\] where \(\mathbf{h}^{(m)}\) denotes the signal in the \(m\)-th layer, \(\eta^{(m)}\in\mathbb{R}_{+}\) is a learnable step size and \(\eta^{\text{sp}}\in\mathbb{R}_{+}\) a parameter that controls the sparsity of the reconstruction. The hidden layers can be interpreted as a gradient step of a least squares problem, followed by a combined shrinkage and projection to the nonnegative orthant. The output of the first layer is computed as \(\mathbf{h}^{(1)}=\mathbf{M}^{\dagger}\mathbf{y}\), with \(\mathbf{M}^{\dagger}\) denoting the pseudoinverse of \(\mathbf{M}\). The last layer, computed as \(\mathbf{h}^{(M)}=\eta^{\text{unc}}\mathbf{h}^{(M-1)}\), consists of a simple scaling of the result by a scalar parameter \(\eta^{\text{unc}}\in\mathbb{R}_{+}\), which controls the spread of the Dirichlet distribution. This approach is interesting since the learnable parameters are just the step sizes \(\eta^{(m)}\) and the sparsity and uncertainty parameters \(\eta^{\text{sp}}\) and \(\eta^{\text{unc}}\), which maintains a high degree of interpretability in the architecture. For the nonlinear part of the abundance posterior, \(\tilde{\mathbf{\gamma}}_{\phi}^{a,\text{nlin}}(\mathbf{y},\mathbf{M})\), we considered an MLP with six fully connected layers containing \(L\), \(2L\), \(\lfloor L/2\rfloor\), \(\lfloor L/4\rfloor\), \(4P\) and \(P\) neurons, where \(\lfloor\cdot\rfloor\) denotes the floor (rounding down) operation. The ReLU activation function was used in the hidden layers, and a linear activation in the last layer. ### _Self-supervised training strategy_ An important aspect of the semi-supervised framework is the need to obtain some amount of supervised training data. As done in previous works [10, 11], we propose to use a self-supervised strategy based on methods which extract different samples from EMs directly from an observed HI (i.e., in the form of pure pixels) [5, 103]. These methods have been originally proposed as image-based spectral library construction to address spectral variability in library-based HU algorithms without the requirement of laboratory or _in situ_ data collection. Nonetheless, these are also suitable strategies to generate training data for learning-based HU methods. In this paper, we consider the strategy proposed in [11] to generate a supervised synthetic dataset \(\mathcal{D}_{\text{S}}\) directly from the HI being analyzed by following a two-step procedure. In the first step, we create a dictionary \(\mathcal{D}_{\text{pps}}\) of pure pixels. Pure pixels are obtained by extracting a predetermined number \(N_{\text{pps}}\) of pixels from the HI which are spectrally close (e.g., having the smallest spectral angles) to a set of reference EMs obtained using a traditional EM extraction algorithm (such as the VCA [104]).
### _Self-supervised training strategy_

An important aspect of the semi-supervised framework is the need to obtain some amount of supervised training data. As done in previous works [10, 11], we propose to use a self-supervised strategy based on methods which extract different samples of the EMs directly from an observed HI (i.e., in the form of pure pixels) [5, 103]. These methods were originally proposed for image-based spectral library construction, to address spectral variability in library-based HU algorithms without the requirement of laboratory or _in situ_ data collection. Nonetheless, they are also suitable strategies to generate training data for learning-based HU methods. In this paper, we consider the strategy proposed in [11] to generate a supervised synthetic dataset \(\mathcal{D}_{\text{S}}\) directly from the HI being analyzed by following a two-step procedure. In the first step, we create a dictionary \(\mathcal{D}_{\text{pps}}\) of pure pixels. Pure pixels are obtained by extracting a predetermined number \(N_{\text{pps}}\) of pixels from the HI which are spectrally close (e.g., having the smallest spectral angles) to a set of reference EMs obtained using a traditional EM extraction algorithm (such as the VCA [104]).

The second step consists in an iterative algorithm for generating synthetic data that incorporates the variability existing within \(\mathcal{D}_{\text{pps}}\). For this, at each iteration \(k\) we generate tuples \(\{(\mathbf{y}_{j},\mathbf{a}_{j},\mathbf{M}_{k})\}_{j=1}^{P}\), where \(\mathbf{M}_{k}\in\mathbb{R}^{L\times P}\) is an EM matrix sampled from the pure pixel dictionary. For each \(k\) we generate \(P\) abundance vectors \(\mathbf{a}_{j}\), \(j\in\{1,\ldots,P\}\), with the elements \(a_{j,i}\) of the \(j\)-th vector satisfying \(a_{j,i}=1\) if \(i=j\) and \(a_{j,i}=0\) otherwise (i.e., a one-hot encoding of each EM). Then, the mixed pixels are generated according to \(\mathbf{y}_{j}=\mathbf{M}_{k}\mathbf{a}_{j}+\mathbf{e}\), with \(\mathbf{e}\) being white Gaussian noise. Note that since the synthetic abundances are one-hot encodings, this leads to a simple but effective means of generating training data: because the generated pixels are pure, nonlinear interactions (i.e., a nonlinear mixing model) do not need to be specified a priori. Nevertheless, more general methods using a finer sampling of the unit simplex and different choices of mixing models can also be used to generate the training data, at the expense of a higher amount of user supervision and possibly a higher computational cost.
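As an illustration of the procedure above, here is a minimal sketch of the second step, assuming the pure-pixel dictionary \(\mathcal{D}_{\text{pps}}\) has already been built in the first step; all function and variable names are ours:

```python
import numpy as np

def build_supervised_dataset(pure_pixel_dict, num_matrices, snr_db=30.0):
    """Sketch of the second step of Section IV-B: sample an endmember
    matrix M_k from the pure-pixel dictionary, use one-hot abundances,
    and add white Gaussian noise at the requested SNR.

    pure_pixel_dict: list of P arrays, each of shape (N_pps, L), holding
                     the pure pixels extracted for each endmember.
    Returns a list of (y, a, M) training tuples.
    """
    P = len(pure_pixel_dict)
    L = pure_pixel_dict[0].shape[1]
    dataset = []
    for _ in range(num_matrices):
        # Draw one pure-pixel sample per endmember to form M_k (L x P).
        M = np.stack([em[np.random.randint(len(em))] for em in pure_pixel_dict],
                     axis=1)
        for j in range(P):
            a = np.zeros(P)
            a[j] = 1.0                      # one-hot abundance of the j-th EM
            y_clean = M @ a                 # y_j = M_k a_j (a pure pixel)
            # White Gaussian noise scaled to the requested SNR (30 dB here).
            noise_power = np.mean(y_clean ** 2) / 10 ** (snr_db / 10)
            y = y_clean + np.sqrt(noise_power) * np.random.randn(L)
            dataset.append((y, a, M))
    return dataset
```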
### _Optimization_

To optimize the cost function, we considered the Adam stochastic optimization method [26]. We used a batch size of \(16\) and trained the model for up to \(30\) epochs, stopping when the relative increase in the value of the cost function \(\hat{\mathcal{L}}_{\text{T}}(\theta,\phi;\mathcal{D})\) between two epochs was smaller than \(0.01\). The learning rate was initially set to \(0.001\) and multiplied by \(0.9\) after each epoch until the \(10\)th epoch using a learning rate scheduler, after which it was kept fixed.

## V Experiments

### _Experimental setup_

The proposed method is compared with the fully constrained least squares (FCLS) algorithm with EMs extracted by the VCA [104]; with the PLMM [50], ELMM [51], and GLMM [52] algorithms, which account for EM variability; with the NDU [105], which considers a nonparametric nonlinear mixing model; and with recent deep learning-based strategies that address EM variability and nonlinearity, namely DeepGUn [11], RBF-AEC [69] and EGU-Net [10]. In each experiment, the parameters of each algorithm were adjusted in order to obtain the best results. All experiments were run on an AMD Ryzen 5 3500U portable computer with 6GB of RAM. The proposed method was implemented in PyTorch. We consider \(K_{E}=1\) sample for the Monte Carlo estimation of the expectations, and \(K=5\) samples for the importance sampling. The supervised dataset \(\mathcal{D}_{S}\) was constructed using the self-supervised strategy detailed in Section IV-B, with the VCA being used to extract the reference EM signatures, \(N_{\text{pps}}=100\) pure pixels being extracted for each EM to generate the dataset, and a signal-to-noise ratio (SNR) of 30dB being considered for the additive noise. The dimension of the EM latent space was set as \(H=2\), and \(M=11\) layers (i.e., ten hidden layers) were considered in the piecewise-linear abundance inference network detailed in Section IV-A4. The remaining parameters of the method were manually selected within the following intervals: \(\lambda\in[0.01,1000]\) with one value per decade; \(\tau\) was either zero or selected in \([0.001,1]\) with two values per decade; and the nonlinearity regularizations \(\varsigma_{1},\varsigma_{2}\) were selected in the interval \([10^{-4},10^{5}]\) with one value per decade. For all methods, the number of EMs \(P\) was assumed to be known a priori for the simulations with synthetic data. For the simulations with real data, we used the same values as in previous works that studied the same HIs. When such knowledge is not available, this number can be estimated from the HI [89], although care must be taken to avoid overestimating the number of EMs in the scene [90].

To evaluate the results of the algorithms on the synthetic datasets, we considered the normalized root mean squared error (NRMSE), defined as \(\text{NRMSE}_{\mathbf{X}}=\|\mathbf{X}-\mathbf{\widetilde{X}}\|_{F}/\|\mathbf{X}\|_{F}\), computed between a matrix \(\mathbf{X}\) and its estimated version \(\mathbf{\widetilde{X}}\), where \(\|\cdot\|_{F}\) denotes the Frobenius norm of a matrix or higher-order tensor. We evaluated the NRMSE between the abundances (NRMSE\({}_{\mathbf{A}}\)) and between the EMs (NRMSE\({}_{\mathbf{M}}\)). We also evaluated the normalized image reconstruction error (NRMSE\({}_{\mathbf{Y}}\)); however, we emphasize that low values of NRMSE\({}_{\mathbf{Y}}\) do not imply good unmixing performance. We also compute the spectral angle mapper (SAM) between the true and the estimated EMs at each pixel:

\[\text{SAM}_{\mathbf{M}}=\frac{1}{N_{U}}\sum_{n=1}^{N_{U}}\sum_{j=1}^{P}\arccos\left(\frac{\mathbf{m}_{n,j}^{\top}\mathbf{\widetilde{m}}_{n,j}}{\|\mathbf{m}_{n,j}\|_{2}\,\|\mathbf{\widetilde{m}}_{n,j}\|_{2}}\right), \tag{38}\]

where \(\mathbf{m}_{n,j}\) and \(\mathbf{\widetilde{m}}_{n,j}\) are the true and estimated versions of the \(j\)-th EM in the \(n\)-th pixel, respectively.
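For reference, both metrics can be computed with a few lines of NumPy; the array shapes below are assumptions made for this sketch:

```python
import numpy as np

def nrmse(X_true: np.ndarray, X_est: np.ndarray) -> float:
    """NRMSE_X = ||X - X_est||_F / ||X||_F (also valid for tensors)."""
    return np.linalg.norm(X_true - X_est) / np.linalg.norm(X_true)

def sam(M_true: np.ndarray, M_est: np.ndarray) -> float:
    """SAM_M of eq. (38): per-pixel spectral angles between true and
    estimated endmembers, summed over EMs and averaged over pixels.
    M_true, M_est: arrays of shape (N_U, L, P)."""
    N = M_true.shape[0]
    total = 0.0
    for n in range(N):
        for j in range(M_true.shape[2]):
            m, m_hat = M_true[n, :, j], M_est[n, :, j]
            cos = m @ m_hat / (np.linalg.norm(m) * np.linalg.norm(m_hat))
            total += np.arccos(np.clip(cos, -1.0, 1.0))  # clip for stability
    return total / N
```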
### _Synthetic data_

Two synthetic datasets with ground truth were considered, one containing nonlinearity in the mixing process and another containing EM variability. These two nonidealities are introduced in separate datasets to provide a more controlled experimental scenario, particularly since some of the compared methods were devised taking only nonlinearity or only EM variability into account. However, in real datasets, especially over large scenes, these effects can occur concurrently.

**Data with nonlinearity.** To generate the first datacube (DC1), with \(N_{U}=2500\) pixels, we considered the synthetic abundance maps displayed in the first row of Figure 3 (left) and \(P=3\) EM signatures with \(L=224\) spectral bands selected from the USGS Spectral Library. The reflectance of each pixel was generated according to the BLMM \(\mathbf{y}_{n}=\mathbf{M}\mathbf{a}_{n}+\sum_{i=1}^{P}\sum_{j=i+1}^{P}a_{n,i}a_{n,j}\,\mathbf{m}_{i}\odot\mathbf{m}_{j}+\mathbf{e}_{n}\), with \(\mathbf{e}_{n}\) being white Gaussian noise with an SNR of 30dB.

**Data with endmember variability.** To generate the second datacube (DC2), with \(N_{U}=2500\) pixels, we considered the synthetic abundance maps displayed in the first row of Figure 3 (right). To incorporate EM variability in this dataset, sets of EM signatures from \(P=5\) pure materials (roof, metal, dirt, tree and asphalt) with realistic variability were first manually extracted from a real HI. Then, to generate each pixel \(\mathbf{y}_{n}\), spectral signatures for each EM were randomly sampled from these sets and used in a generalized version of the LMM, \(\mathbf{y}_{n}=\mathbf{M}_{n}\mathbf{a}_{n}+\mathbf{e}_{n}\), with pixelwise EM matrices, where \(\mathbf{e}_{n}\) was white Gaussian noise with an SNR of 30dB. Note that the two synthetic datacubes incorporate nonidealities seen in practical scenes. The BLMM, used in DC1, has been experimentally shown to accurately describe multiple scattering in mixtures containing vegetation [106]. The different EM samples used to construct DC2 were manually extracted from a real HI, and are thus representative of the variability encountered in a real image.

**Discussion.** The quantitative and visual results for both data cubes are presented in Table I and Figure 3, respectively. It can be seen that ID-Net obtained the best abundance estimation performance (lowest NRMSE\({}_{\mathbf{A}}\)) for both data cubes, with improvements of 39% and 25% over the second-best results. It also obtained good EM estimation performance, with the best NRMSE\({}_{\mathbf{M}}\) for DC1 and the best SAM\({}_{\mathbf{M}}\) for DC2. The visual results of Figure 3 show that although most methods were able to obtain a rough separation between the different materials in the scene, the reconstructions by ID-Net were the closest to the ground truth. This can be observed most clearly, for instance, in the abundance maps of the Metal EM in DC2. The smallest reconstruction errors NRMSE\({}_{\mathbf{Y}}\) were obtained by NDU, which employs a nonparametric model with many degrees of freedom that can represent a pixel arbitrarily well. However, this does not imply better unmixing performance (i.e., better abundance and EM reconstructions). The execution times of ID-Net, also shown in Table I, are on the same order of magnitude as those of the more complex competing algorithms (such as PLMM, DeepGUn and NDU), but considerably higher than those of the faster ones (such as EGU-Net).

### _Real data_

We also evaluate the performance of the methods on the Samson, Jasper Ridge, and Cuprite HIs. These images were acquired by the AVIRIS instrument, which samples the spectra over 224 spectral bands. Bands corresponding to water absorption regions or with a low SNR were removed from all datasets, which resulted in 156 bands for the Samson HI, 198 bands for the Jasper Ridge HI, and 188 bands for the Cuprite HI. Previous works have shown that the Samson and Jasper Ridge HIs contain three and four EMs, respectively [30, 32], while fourteen EMs are used for the Cuprite HI [104, 107]. These datasets are relevant since they present nonidealities such as EM variability, which can appear due to non-trivial variations in illumination from irregular terrain topography and due to the intrinsic variations of materials such as soil, vegetation and road, and nonlinear mixtures, which are often present in mixtures containing vegetation or water, or in mixtures of minerals. A representation of the real datasets can be seen in Figure 4. The abundance maps reconstructed by the different algorithms and datasets are presented in Figure 5. For the Samson and Jasper Ridge images, it can be noticed that the methods based on deep learning (particularly EGU-Net, RBF-AEC and ID-Net) generally provided a better separation between the different materials in the scene when compared to the remaining algorithms.
This can be observed from the smaller confusion in the results of those methods between the Water and Soil materials for the Samson HI, and between the Road and Soil EMs for the Jasper Ridge HI. For the Cuprite HI (for which only the Muscovite, Buddingtonite, Kaolinite and Chalcedony EMs are shown), a similar behavior is observed for the learning-based methods. The abundances obtained by most of the algorithms generally agree with previous studies on this HI [104, 107]; however, for some EMs (the Buddingtonite and Chalcedony) the abundances retrieved by EGU-Net and RBF-AEC show a different spatial distribution compared to the remaining algorithms. That said, pixel-by-pixel comparisons on this scene are hard due to the lack of ground truth [108]. When compared to the competing methods, ID-Net was able to obtain a better abundance reconstruction in regions with more heavily mixed EMs. This can be seen more clearly in the Vegetation and Soil materials in the Samson HI, for which the results of EGU-Net and RBF-AEC identify most pixels as being completely pure.

Figure 4: Visual representation of the real HIs used in the experiments.

Figure 3: True and reconstructed abundance maps for the experiments with synthetic data for data cubes DC1 (left) and DC2 (right).

Samples of the EMs estimated by ID-Net are shown in Figure 6. It can be seen that the amount of variability was different for each material and image, being higher in the Samson HI, or for vegetation, compared to the Cuprite HI, or water. It is also instructive to analyze the nonlinearity degree (i.e., the contribution of the nonlinear part of the NN) in the estimated abundance coefficients in (13) for each pixel, defined here as \(\eta_{d}=\|\hat{\gamma}_{\phi}^{a,\mathrm{nlin}}(\mathbf{y},\mathbf{M})\|/\big(\|\hat{\gamma}_{\phi}^{a,\mathrm{lin}}(\mathbf{y},\mathbf{M})\|+\|\hat{\gamma}_{\phi}^{a,\mathrm{nlin}}(\mathbf{y},\mathbf{M})\|\big)\), which is shown in Figure 7. It can be seen that the nonlinearity degree varies across the HI. As expected, \(\eta_{d}\) is generally higher in areas with more mixed pixels, and lower in homogeneous areas with predominantly pure pixels. This is particularly seen in areas containing mixtures of vegetation and water in the Samson HI. However, we emphasize that the nonlinearity in (13) and EM variability jointly influence the HU results, and separating the effect of each one is not trivial.

The quantitative results of the algorithms are shown in Table II. It can be seen that the GLMM and NDU methods obtained the smallest reconstruction errors (NRMSE\({}_{\mathbf{Y}}\)). This occurs because these methods have a high number of degrees of freedom and can represent HI pixels very closely. However, we note that achieving low values of NRMSE\({}_{\mathbf{Y}}\) does not imply good abundance estimation performance. The execution times of ID-Net were generally higher but still comparable to those of DeepGUn and NDU (1.33 and 1.26 times higher on average, respectively). This is a limitation of the proposed method, and in future work we will investigate more efficient inference techniques to address it. The algorithms based on deep learning led to distinct results due to their different frameworks. RBF-AEC and EGU-Net, which are based on AEC NN architectures, obtained abundances that were generally sparse, with many pure components. DeepGUn, which is based on a matrix factorization framework with generative EM models, led to abundances comparatively closer to those of model-based methods such as the GLMM.
ID-Net, which leverages a statistical model of the unmixing process, gave results with pure components in homogeneous regions and mixed pixels in the regions of transition between materials. Thus, while the differences between the results of these methods are not straightforward to explain, we can observe how they change as we transition from model-based towards fully neural network-based algorithms.

## VI Conclusions

This paper proposed a hyperspectral unmixing method considering both nonlinearity and EM variability based on a deep disentangled variational inference framework. Both the model parameters and a tractable approximation of the posterior distribution were learned end-to-end by maximizing a lower bound on the likelihoods of the supervised and unsupervised data. A self-supervised learning strategy was considered to leverage the benefits of semi-supervised learning algorithms while using only data extracted from the observed image. The abundances and EMs were disentangled through the use of appropriate independence assumptions on the variational posterior. Moreover, an interpretable model was designed by using unrolled optimization-based and two-stream (linear/nonlinear) NN architectures, allowing for an adjustable amount of nonlinearity in the model. Experimental results on both synthetic and real datasets showed that the proposed method can achieve state-of-the-art performance with execution times comparable to those of algorithms such as NDU and DeepGUn. Several interesting directions for future work can be considered, such as developing more efficient inference strategies that lead to algorithms with smaller computational cost; designing self-supervised learning approaches to extract the training data from the HI which do not depend on the presence of multiple pure pixels; and exploiting the statistical dependence among the abundances of spatially adjacent pixels.

Figure 5: Reconstructed abundance maps for the experiments with real data for the Samson (left), Jasper Ridge (middle) and Cuprite (right) HIs.
2302.09004
CovidExpert: A Triplet Siamese Neural Network framework for the detection of COVID-19
Patients with the COVID-19 infection may have pneumonia-like symptoms as well as respiratory problems which may harm the lungs. From medical images, coronavirus illness may be accurately identified and predicted using a variety of machine learning methods. Most of the published machine learning methods may need extensive hyperparameter adjustment and are unsuitable for small datasets. By leveraging the data in a comparatively small dataset, few-shot learning algorithms aim to reduce the requirement of large datasets. This inspired us to develop a few-shot learning model for early detection of COVID-19 to reduce the post-effect of this dangerous disease. The proposed architecture combines few-shot learning with an ensemble of pre-trained convolutional neural networks to extract feature vectors from CT scan images for similarity learning. The proposed Triplet Siamese Network as the few-shot learning model classified CT scan images into Normal, COVID-19, and Community-Acquired Pneumonia. The suggested model achieved an overall accuracy of 98.719%, a specificity of 99.36%, a sensitivity of 98.72%, and a ROC score of 99.9% with only 200 CT scans per category for training data.
Tareque Rahman Ornob, Gourab Roy, Enamul Hassan
2023-02-17T17:18:02Z
http://arxiv.org/abs/2302.09004v1
## CovidExpert: A Triplet Siamese Neural Network framework for the detection of COVID-19

###### Abstract

Patients with the COVID-19 infection may have pneumonia-like symptoms as well as respiratory problems which may harm the lungs. From medical images, coronavirus illness may be accurately identified and predicted using a variety of machine learning methods. Most of the published machine learning methods may need extensive hyperparameter adjustment and are unsuitable for small datasets. By leveraging the data in a comparatively small dataset, few-shot learning algorithms aim to reduce the requirement of large datasets. This inspired us to develop a few-shot learning model for early detection of COVID-19 to reduce the post-effect of this dangerous disease. The proposed architecture combines few-shot learning with an ensemble of pre-trained convolutional neural networks to extract feature vectors from CT scan images for similarity learning. The proposed Triplet Siamese Network as the few-shot learning model classified CT scan images into Normal, COVID-19, and Community-Acquired Pneumonia. The suggested model achieved an overall accuracy of 98.719%, a specificity of 99.36%, a sensitivity of 98.72%, and a ROC score of 99.9% with only 200 CT scans per category for training data.

## 1 Introduction

The 2019 coronavirus disease (COVID-19) has infected humans worldwide, causing severe acute respiratory illness. Since its first discovery in Wuhan, China, in December 2019, it has swiftly expanded worldwide. The World Health Organization (WHO) named the new epidemic illness Coronavirus Disease (COVID-19), after the virus that causes it, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Over 661 million COVID-19 cases had been confirmed as of December 26, 2022, along with more than 6.685 million fatalities globally [1]. Due to a lack of critical care facilities and an overwhelming number of patients, health systems have collapsed, even in industrialised nations. On January 30, 2020, the WHO deemed the COVID-19 epidemic a worldwide health emergency, and it was declared a pandemic on March 11, 2020 [2]. The symptoms of the coronavirus develop within 2-14 days after viral infection. Due to COVID-19's high infection rates, early coronavirus diagnosis is essential for treating the disease. To diagnose COVID-19, the real-time reverse-transcription polymerase chain reaction (RT-PCR) assay is now the standard method, and it is still the most accurate method for making a confident detection of COVID-19 infection [3]. However, the high false detection rate and the protracted turnaround time (a minimum of 4-6 hours) of RT-PCR make it difficult to identify infected people quickly. Because of this, many infected people go undetected and often spread the infection to others. The incidence of COVID-19 illness should decrease with the early diagnosis of this condition. The inefficiency and scarcity of the current COVID-19 tests have motivated several initiatives to investigate alternative test methodologies. One method uses radiological imaging to identify COVID-19 infections, including X-rays or computed tomography (CT). Researchers believe that a technique based on chest CT scans might be a vital tool for identifying and quantifying COVID-19 instances [4]. CT scan diagnosis may benefit significantly from using artificial intelligence (AI) based techniques.
Researchers are making global efforts to harness AI as a formidable tool to develop quick and affordable diagnostic methods to stop the current pandemic [5]. The main objectives of such studies are understanding the transmission of COVID-19, enabling early detection, creating efficient treatments, and comprehending its economic impacts. Compared to standard X-rays, CT scans include more precise information about tissue and structure, enabling the diagnosis of more traumas and disorders [6]. X-ray and CT scans are the two imaging methods most frequently used to examine COVID-19 patients. In this study, we concentrated on CT scans because chest CT and ultrasound of the lungs have been found to be more sensitive and moderately specific compared to chest X-ray (CXR) in diagnosing COVID-19 [7]. Although deep learning methods have advanced significantly in various medical areas, they are still limited by their dependency on vast quantities of data. Researchers have used a variety of strategies, including data augmentation and generative adversarial networks (GAN), to address this issue of data shortage [8]. Data augmentation strategies suffer from over-fitting issues, while GAN image generation techniques have difficulties simulating actual patient data, which results in unintended bias during experimental validation [8]. Additionally, several research studies have used various pre-trained Convolutional Neural Network (CNN) models in transfer learning approaches. Given that these CNN models were pre-trained on a big non-medical dataset (i.e., ImageNet), significant fine-tuning is required to get promising diagnostic findings, which often involves a lengthier training time. To overcome these limitations, researchers are actively investigating few-shot learning (FSL), which is an instantiation of meta-learning. Meta-learning, also known as "learning to learn," is a machine learning technique that involves learning a learning algorithm that can adapt quickly to new tasks or environments [9]. In other words, meta-learning involves learning how to learn. Meta-learning can be used to learn a learning algorithm that quickly adapts to new tasks or datasets without the need for a large amount of labelled data for each task. This can be useful in situations where it is difficult or expensive to obtain a large amount of labelled data for a particular task. There are a number of ways to apply meta-learning to machine-learning tasks, depending on the specific problem the user is trying to solve. Some common approaches to meta-learning include FSL, transfer learning, and multi-task learning. Few-shot learning is a form of meta-learning in which a learner gains experience on a range of related tasks so that it can effectively transition to a new (but related) task with a constrained number of examples in the meta-testing stage [10]. In this process, the model is trained on a small number of examples, or "shots," within each class in the training set. In few-shot learning, the goal is to learn a model that can adapt quickly to new tasks or datasets by using a small number of labelled examples. One of the main motivations for using few-shot learning is the need to reduce the data required for training a machine learning model. Few-shot learning is advantageous when datasets are difficult to find (such as cases of a rare illness) or when the cost of annotating data is high. Few-shot learning allows the model to learn from a small number of examples, which can be much easier and faster to obtain.
To achieve the aim of few-shot learning, the model does not need to recognize the specific images in the training set when moving on to the test set. Instead, the goal is to compare and contrast items' similarities and differences [11]. Given the rarity of big, well-balanced medical image datasets, this concept might be instrumental in image analysis. Overall, few-shot learning is a useful technique for reducing the data required for training machine learning models, and it has shown promising results in a range of applications. The Siamese network is one of the meta-learning models which has lately been successful in applying FSL in various fields. Siamese networks are a particular network architecture used to create and evaluate feature vectors for each input. They comprise two or more identical subnetworks (i.e., configured using the same parameters and weights) [12]. By comparing the feature vectors of the Siamese network and training a similarity function between labelled points, it is possible to determine how similar the inputs are. To categorize COVID-19 patients with few training CT images, this research utilizes the Triplet Siamese Network, which consists of three identical subnetworks. To create feature embeddings from the input images, we employed an ensemble of pre-trained CNNs that had been fine-tuned, and we used the pairwise margin ranking loss function to update the network weights. Ensembling combines many models to create a robust and trustworthy model for making predictions. The contributions of this study are outlined in the following.

1. Developing an ensemble of pre-trained CNNs, which utilized the Triplet Siamese network to assist in the early diagnosis of patients with COVID-19.
2. Utilizing a CT scan dataset with only 600 training images to identify COVID-19.
3. Focusing on the advantage of employing Siamese networks with small datasets.
4. Providing a performance assessment to show how effective the suggested framework is.

The paper is organized as follows: An overview of recent literature related to this study is given in Section 2. Section 3 describes the proposed system, along with dataset gathering and preparation. Section 4 provides the experimental outcomes, the comparative performance of the suggested framework, and a discussion. The study is concluded, with directions for further research, in Section 5.

## 2 Related works

Researchers have created deep learning approaches for detecting COVID-19 utilizing medical images, CXR, and CT scans to help contain the COVID-19 epidemic. This review discusses the most recent COVID-19 detection systems that have used deep learning methods. To expedite the identification of COVID-19 patients, Panwar et al. [13] suggested a VGG-19 transfer learning method that utilizes CT scan and chest X-ray data. They utilized the COVID-19 chest X-ray dataset, which includes 673 radiological images from 342 individuals, and the SARS-COV-2 CT scan dataset, which contains 1230 CT scans of individuals who tested negative for COVID-19 and 1252 CT images of individuals who tested positive for COVID-19. Their proposed model has a 95.61% COVID-19 case identification accuracy. According to their methodology, a patient diagnosed with pneumonia is more likely to test as a False Positive. A completely automated technique to recognize COVID-19 illness from the chest CT scans of patients was proposed by Rahimzadeh et al. [14]. The ResNet50V2 model served as the framework for their experiment.
Their dataset included 48,260 CT scans from 282 healthy people and 15,589 images from 95 patients with COVID-19. The suggested model achieved 98.49% accuracy on 7996 test images in this binary image classification task. Utilizing chest X-ray and CT scan images, Hussain et al. [15] created a novel CNN model (CoroDet) to identify COVID-19 automatically. For the dataset, they utilized 7390 images. The proposed model produced a classification accuracy of 99.19% for two-class classification (COVID and Normal), 94.2% for three-class classification (COVID, Normal, and non-COVID pneumonia), and 91.2% for four-class classification (COVID, Normal, non-COVID viral pneumonia, and non-COVID bacterial pneumonia). The experiment used 500 and 800 normal images per category for the two-, three-, and four-class classifications, respectively. There are 400 images each of viral and bacterial pneumonia for the four-class categorization, and 800 images of bacterial pneumonia for the three-class case. According to the authors, their suggested model is not yet ready to take the place of current COVID-19 detection tests. To identify COVID-19, Kogilavani et al. [16] used CNN architectures such as VGG16, DenseNet121, MobileNet, NASNet, Xception, and EfficientNet. The collection contains 3873 CT scan images in total, including COVID-19 and Non-COVID scans. The accuracy rates are 97.68%, 97.53%, 96.38%, 89.51%, 92.47%, and 80.19% for VGG16, DenseNet121, MobileNet, NASNet, Xception, and EfficientNet, respectively. The VGG16 design offered higher accuracy than the other architectures in their analysis. CNN with KNN (K-Nearest Neighbour) was used by Basu et al. [17] to provide a comprehensive structure for detecting COVID-19 using CT scan images. They used the SARS-COV-2 CT-Scan Dataset, which was expanded from the original 2482 CT scans to contain 2926 images. Their model produced binary classification accuracies of 97.30% and 98.87% on the original and expanded datasets, respectively. The disadvantage of the suggested technique is that it cannot identify COVID-19-positive CT scans during the initial phases of the infection. An approach to identify COVID-19 using the VGG-19 transfer learning model was suggested by Horry et al. [18]. They utilized 1103 images for the ultrasound image dataset (COVID-19, Pneumonia, and Normal), 60798 images for the X-ray dataset (COVID-19, Pneumonia, and Normal), and 746 images for the CT scan dataset (COVID and Non-COVID). Their tuned VGG19 model, with the right settings, performs at substantial levels of COVID-19 detection versus pneumonia or normal cases for all three lung imaging methods, with an accuracy of up to 86% for X-rays, 100% for ultrasounds, and 84% for CT scans. A BiT-M classification algorithm to detect COVID-19 using CT scan images was suggested by Zhao et al. [19]. They utilized the COVIDx CT-2A dataset, which contains 194,922 CT scan images. According to the study, the suggested multi-class model had a 99.2% accuracy rate for identifying COVID-19 instances. Their proposed model is at a theoretical research stage and has not been put to the test in actual clinical practice. Serte et al. [20] published a model to predict COVID-19 on 3D CT volumes with ResNet-50 and majority voting. They used the 3D CT scan images from the Mosmed and CCAP datasets. They transformed the 3D CT images into 2D images for training and assessment. 5019 images were utilized for testing, while 5493 images were used for training.
The suggested deep learning model has an AUC value of 96% for identifying COVID-19 on CT images. Their model may not function on a phone or web server, since these platforms lack the high memory capacity of modern desktop or laptop computers. Mukherjee et al. [21] suggested a nine-layer Convolutional Neural Network that can be trained and evaluated on CXRs and CT scans simultaneously. They used a total of 672 images in the dataset, of which 336 were chest X-rays and 336 were CT scans. Their suggested model attained an overall accuracy of 96.28% on a test dataset consisting of 135 images. A model known as CTnet-10, created for COVID-19 diagnosis, was suggested by Shah et al. [22]. They used a dataset that had 738 images overall. Their model has an 82.1% accuracy rate, which is quite low compared to other models. An ensemble deep learning model using pre-trained Residual Attention and DenseNet architectures was developed by Maftouni et al. [23]; it performed as a reliable COVID-19 classifier on noisy labelled chest CT scan images. Their dataset included 2618 CAP images representing 60 cases, 6893 normal images representing 604 cases, and 7593 COVID-19 images representing 466 cases. On the test set, their ensemble model achieved 95.31% accuracy. Shorfuzzaman et al. [24] created a synergistic strategy to integrate contrastive learning with a fine-tuned ConvNet encoder to get unbiased feature representations, and employed a Siamese network for the final classification of COVID-19 cases. They created a balanced dataset for this investigation with 678 chest X-ray images. They pre-trained the VGG-16 encoder network with 480 images during training and 198 images in testing. 30 images were used for the Siamese network's training and 648 images for its testing. The accuracy of their suggested model was 95.6%. Jadon et al. [25] suggested employing Siamese networks with a few-shot learning method to detect COVID-19. A total of 4200 chest X-ray images from 3 classes (COVID-19, Viral Pneumonia, and Normal) were included in the study's dataset. Their recommended method allowed them to attain 96.4% accuracy. Most of the strategies used in the literature to diagnose COVID-19 using CT or CXR images either employed pre-trained transfer learning models or custom CNN architectures, both of which need a lot of training data. Although some research employed a small amount of training data, their test datasets were so small that it is questionable whether their models could help physicians in a real-world situation. As an alternative, we have suggested a Triplet Siamese Network that uses an ensemble of pre-trained BiT, DenseNet121, SwinTransformer, MobileNetV2, EfficientNetB0, and ResNeXt models as the encoder of the architecture, classifying COVID-19 instances with just 600 training CT images. To ensure the model works in a real-world situation, we tested it with a dataset of 10152 images. Our proposed model is computationally efficient and can achieve a better or equal level of accuracy compared to the pre-trained and other custom CNN models. Additionally, the categorical cross-entropy or binary cross-entropy loss function is the most used strategy in the literature. In contrast, we used the margin ranking loss function, leading to a quicker model adaptation rate with fewer tests and updates to the hyperparameters. Additionally, most available models involve image augmentation to increase model generalizability, which lengthens training time.
The proposed few-shot learning model performs better than the models of previous studies despite not using augmentation.

## 3 Methodology and dataset

The COVID-19 detection system, which consists of many stages, is shown in Fig. 1. The preprocessing pipeline was initially applied to the raw CT scan images. Data scaling, shuffling, noise removal, image sharpening, normalization, and brightness and contrast adjustments were performed in the pre-processing pipeline. The preprocessed data were then organized into two datasets, each with training and testing splits: one was used to train the transfer-learning-based models, and the other was used for the proposed Triplet Siamese Network. After each epoch, the training accuracy and loss were computed. Validation accuracy and loss were simultaneously found using 10-fold cross-validation. The confusion matrix, accuracy, ROC AUC, specificity, sensitivity, and F1-score were utilized to test the performance of the suggested system.

### 3.1 Dataset description

The dataset is taken from Maftouni et al. [23]. According to the authors, this is the largest COVID-19 lung CT dataset available. It contains 2618 images of community-acquired pneumonia, 7593 COVID-19 images, and 6893 normal images. By combining CT scan data from seven open datasets, they created the COVID-19 dataset. These datasets have shown their effectiveness in deep learning applications by being widely utilized in the COVID-19 diagnostic literature. The combined dataset is thus expected to boost the classification performance of deep learning techniques by merging all of these resources. Table 1 displays the dataset's properties. From the author-provided source, we created two datasets: one for the CNN models based on transfer learning, and the other for the FSL model. We employed a total of 12344 CT scans for training and 3390 CT scans for evaluating the transfer-learning-based CNNs. The dataset distribution for the transfer-learning-based models is shown in Table 2. We created a meta-dataset for the FSL model that includes images of the Anchor (an arbitrary class of points), Positive (the same class as the Anchor), and Negative (a different class from the Anchor). We employed 10152 CT scans for the test dataset, whereas 600 CT images were used to train this model. A radiologist helped us create the meta-dataset by arranging the slices of the CT scans so that the anchor and positive slices match clinically while the anchor and negative slices differ. Since the two CT scans must be clinically related, and not only belong to the same class, randomly producing an Anchor and Positive pair would have been incorrect. The selected balanced dataset for the suggested Triplet Siamese Neural Network is shown in Table 3. Fig. 2 and Fig. 3 show input images for the models.

### 3.2 Data pre-processing

The raw CT images contained various noise artifacts, and many of them were blurry (e.g., showing the bed rest in the corner of the scan). Noise removal is crucial because, with cleaner data, the CNN can capture the essential features of the images while ignoring the noise. Background removal, automatic brightness and contrast adjustment, and unsharp masking were used to fix these problems.

#### 3.2.1 Unsharp masking

A linear image processing method that sharpens the image is called unsharp masking. The sharp details are obtained as the difference between the original image and its blurred counterpart.
The original image is then enhanced by adding these details back:

\[\text{enhanced image}=\text{original}+\text{amount}\times(\text{original}-\text{blurred})\]

Any image filtering technique could be used for the blurring stage; however, we used a Gaussian filter. The size of the Gaussian filter and the weight applied when the images are subtracted define the exact attributes of the filter. The unsharp mask increases the high-frequency components of the image.
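A minimal sketch of this unsharp-masking step is given below; the kernel size, sigma, and amount values are illustrative choices rather than the exact settings used in the paper:

```python
import cv2
import numpy as np

def unsharp_mask(image: np.ndarray, kernel_size=(5, 5), sigma=1.0,
                 amount=1.0) -> np.ndarray:
    """Sketch of Section 3.2.1:
    enhanced = original + amount * (original - blurred).
    kernel_size, sigma and amount are illustrative values."""
    blurred = cv2.GaussianBlur(image, kernel_size, sigma)
    # Work in float to avoid uint8 wrap-around, then clip back to [0, 255].
    sharpened = image.astype(np.float32) + amount * (
        image.astype(np.float32) - blurred.astype(np.float32))
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Example usage on a grayscale CT slice loaded with OpenCV:
# ct = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)
# ct_sharp = unsharp_mask(ct, amount=1.5)
```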
Our target output spectrum of \(255\) was divided by the lowest and highest grayscale range after clipping to determine \(\alpha\). \[a=255\big{/}\left(maximum_{\text{\tiny{gap}}}-minimum_{\text{\tiny{gap}}}\right)\] We used the formula \(g(i,j)=0\) and \(f(i,j)=minimum_{\text{\tiny{gap}}}\) to calculate \(\beta\), and the result was \(g(i,j)=a*f(i,j)+\beta\) which yields \(\beta=-minimum_{\text{\tiny{gap}}}*\alpha\). Thus, by solving \(\alpha\) and \(\beta\) introducing \(\alpha\) and \(\beta\) to the images using saturation arithmetics, we were able to improve brightness and contrast. Fig. 4 shows us the effect of data pre-processing on the original dataset. ### Development of transfer-learning-based models The machine learning method of transfer learning uses a model developed for one job as the foundation for another. It is a typical deep learning technique to use pre-trained models as the foundation for computer vision and natural language processing tasks due to the massive computing and time assets required to construct neural network algorithms for these problems and the huge jumps in the skill that they provide on related issues. It is common to perform transfer learning with predictive modelling problems that use image data as input. For these cases, it is customary to use a deep learning model that has been pre-trained for a big and challenging image classification task, like the ImageNet [26]. classification competition. Numerous models have been trained using the ImageNet dataset. Due to their cutting-edge architecture, we used the AlexNet, GoogLeNet, MobileNetV3, Vision Transformer, and Swin Transformer models to categorize COVID-19 using the initial dataset. Each of these models was trained using a simple augmentation technique (i.e., RandomRotation, RandomVerticalFlip, RandomHorizontalFlip, RandomAffine, ColorJitter, RandomJustSharpness, RandomAutocontrast) for 10 epochs that used the Adam optimizer with a batch size of 64 for AlexNet, 32 for GoogLeNet and MobileNetV3, and 16 for VisionTransformer and SwinTransformer. 0.0001 was the learning rate for every model except VisionTransformer, and the learning rate for VisionTransformer was 0.00001. The loss function in these models was Focal loss. #### 3.3.1 AlexNet One of the first CNN proposed by Krizhevsky et al. [27] was AlexNet. There are eight weighted layers in the AlexNet (i.e., five convolutional layers and three fully connected layers). #### 3.3.2 GoogleNet GoogLeNet [28] is an image classification and recognition deep learning convolution neural network architecture. The GoogleNet architecture consists of 22 layers and a total of 9 inception modules (when pooling layers are included, there are 27 layers). GoogLeNet employs nine initial layers because adding more layers and more data is the most effective strategy to increase the output of deep learning models. Figure 4: Comparison between the original and pre-processed image. Figure 3: Input Data Visualization for the proposed framework. #### 3.3.3 MobileNetV3 Hardware-aware network architecture search (NAS) and Platform-Aware Neural Network Adaptation for Mobile Applications (NetAdapt) are the foundations of MobileNetV3 [29]. To facilitate calculation, MobileNetV3 has included the Squeeze and Excitation (SE) module, and the sigmoid in the SE module has been replaced with a Hard-Sigmoid. ReLU is replaced with Swish to increase non-linearity. 
#### 3.3.4 Vision transformer

Without convolutional layers, the Vision Transformer (ViT) [30] is a modified NLP Transformer (encoder only) for image categorization. Patches of an image are separated, flattened, and then projected into an embedding space. Each vector receives an additional token to indicate its relative position inside the image, and the whole sequence of vectors receives an additional, learnable token to indicate the class. The vector sequence is put into the conventional Transformer encoder, which has been altered to include an additional fully-connected layer for classification at the very end.

#### 3.3.5 SwinTransformer

Swin Transformers (ST) [31] are a variant of Vision Transformers. Since the self-attention processing happens only inside each local window, an ST has a linear computational cost proportional to the size of the input image. It builds hierarchical feature maps by merging image patches in deeper layers. In contrast, owing to global self-attention processing, earlier vision transformers build feature maps of a single low resolution and have a quadratic computational cost proportional to the size of the input image.

#### 3.3.6 ResNetV2-101

Modern transfer learning techniques for image categorization include BigTransfer (BiT) [32]. When training deep neural networks for vision, the transfer of previously learned representations enhances sampling efficiency and makes hyperparameter tweaking easier. BiT revisits the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. ResNetV2-101 is a BiT model that is 101 layers deep.

#### 3.3.7 DenseNet121

The first layer of the DenseNet [33] convolutional neural network is linked to the second, third, fourth, and so on, while the second layer is connected to the third, fourth, fifth, and so on. Each layer of the network is connected to all layers that are deeper in the network. Apart from the fundamental convolutional and pooling layers, DenseNet is made up of two significant building elements: Dense Blocks and Transition layers. The four dense blocks in DenseNet-121 have [6, 12, 24, 16] layers.

#### 3.3.8 MobileNetV2

In contrast to standard residual models that employ expanded representations in the input, the MobileNetV2 [34] architecture is built on an inverted residual structure where the input and output of the residual block are narrow bottleneck layers. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. To retain representational strength, non-linearities in the thin layers were also eliminated.

#### 3.3.9 EfficientNet-B0

Using the ImageNet dataset as its training data, EfficientNet-B0 [35] is a simple mobile-size baseline architecture with 237 layers. The model uses seven inverted residual blocks, each with unique settings. In addition to the swish activation, these blocks also use squeeze-and-excitation blocks.

#### 3.3.10 ResNeXt-101

The ResNeXt-101 [36] is a CNN model that is based on a conventional ResNet model but uses 3x3 grouped convolutions instead of standard 3x3 convolutions within the bottleneck block. One convolution is divided into many smaller, parallel convolutions by the ResNeXt bottleneck block. The cardinality of the ResNeXt-101 model is 32, and the bottleneck width is 4. This implies that 32 parallel convolutions with just 4 filters each are employed in place of a single convolution with 64 filters.
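As a small illustration of the grouped-convolution idea behind ResNeXt (not code from [36]), the snippet below contrasts a standard 3x3 convolution with its grouped counterpart; with a cardinality of 32 and a bottleneck width of 4, each group applies its own 4-channel convolution:

```python
import torch.nn as nn

# Standard 3x3 convolution: a single convolution over all 128 channels.
standard = nn.Conv2d(128, 128, kernel_size=3, padding=1)

# Grouped 3x3 convolution as in a ResNeXt bottleneck with cardinality 32
# and bottleneck width 4: 32 parallel convolutions over 4 channels each.
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32)

# The grouped version uses far fewer weights in this layer:
print(sum(p.numel() for p in standard.parameters()))  # 147584
print(sum(p.numel() for p in grouped.parameters()))   # 4736
```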
#### 3.3.11 Focal loss

A balanced dataset for multi-class classification includes equally distributed target labels. An unbalanced dataset is one where one class contains significantly more samples than the others. This mismatch results in two issues:

* Training is ineffective, since the majority of examples are simple ones that do not provide any relevant learning signal.
* The abundance of simple instances during training may degrade models.

Focal Loss (FL) [37] is a modified form of the Cross-Entropy Loss (CE) that attempts to address the class imbalance problem by giving more weight to complex or easily misclassified cases and less weight to easy examples. As a result, Focal Loss decreases the loss contribution from simple examples while emphasizing the need to address misclassified cases. Due to the primary dataset's extreme imbalance, we chose to employ FL rather than CE, since it produced superior results. Mathematically, the focal loss is defined as:

\[FL(p_{i})=-\alpha_{i}(1-p_{i})^{\gamma}\log(p_{i})\]

where the likelihood of the ground truth label, denoted by \(p_{i}\), may be expressed as follows:

\[p_{i}=\begin{cases}p&\text{if }y=1\\ 1-p&\text{otherwise}\end{cases}\]

For Focal Loss, there are two adjustable parameters. The focusing parameter \(\gamma\) gradually adjusts the rate at which simple instances are down-weighted. When \(\gamma=0\), FL equals CE, and as \(\gamma\) increases, the modulating factor's influence also grows (\(\gamma=2\) was the most effective in the studies). Focal loss is balanced by \(\alpha\), and the accuracy was marginally higher than with the non-\(\alpha\)-balanced variant.

### 3.4 Algorithmic details of the proposed system

The Triplet Siamese Network, a Siamese network with three identical subnetworks, is used in this research. We fed the model three images, two of which were similar (anchor and positive samples) while the third was dissimilar (a negative example). A representation of this concept is shown in Fig. 5. Our goal was for the model to learn to estimate the similarity between images. The Siamese network received each triplet image as an input, generated the embeddings, and output the distance between the anchor and the positive embedding and the distance between the anchor and the negative embedding. Six transfer-learning-based models were used as the backbone of the Triplet Siamese Network to create an ensemble model to generate embeddings for each of the input triplet images. The ensemble model included ResNetV2, DenseNet, SwinTransformer, MobileNetV2, EfficientNetB0, and ResNeXt-101. The final layer of each model consisted of 512 neurons. The weights of all layers except the final layer of the six models were frozen. This is important to avoid affecting the weights that the models have already learned. The final layer was left trainable so that we could fine-tune its weights during training. Every model generated embeddings independently, and the concatenation of the embeddings was fed into a neural network of 512 neurons to reduce the generalization error of the extraction. Fig. 6 illustrates the proposed model architecture. The PairwiseDistance method was employed to compute, using the p-norm, the pairwise distances between the Anchor image's vectors and the Positive image's vectors, and between the Anchor image's vectors and the Negative image's vectors. The pairwise distances were used to measure the similarity between those vectors.
The formula for the PairwiseDistance method is:

\[\|x\|_{p}=\left(\sum_{i=1}^{n}|x_{i}|^{p}\right)^{1/p}\]

Margin Ranking Loss was used as the loss function. The goal of Margin Ranking Loss is to predict the relative distances between inputs, as opposed to other loss functions such as the Cross-Entropy Loss or the Mean Square Error Loss, whose goal is to learn to directly predict a label, a value, or a group of values given an input. The formula for Margin Ranking Loss is:

\[loss(x_{1},x_{2},y)=\max(0,\,-y*(x_{1}-x_{2})+margin)\]

A similarity score between the data points is necessary to employ this loss. The dissimilarity between the anchor image and the positive image has to be low, while the dissimilarity between the anchor image and the negative image has to be high. The ensemble model gathered features from the three input data points to create embedded representations of each, so that the Margin Ranking Loss function could be applied. The PairwiseDistance method was used to get the similarity score. The gradients were computed using this loss function, and the weights and biases of the Siamese network were updated using those gradients. Finally, the feature extractors were trained to provide comparable representations for the inputs when they are similar, and distant representations when they are dissimilar. The high-level architecture of the proposed model to diagnose COVID-19, CAP, and Normal cases is presented in Fig. 7. It includes the ensemble of models for feature embedding, similarity learning with the Margin Ranking loss, and the training process; a minimal sketch of one training step is given below.

Figure 5: Triplet input (anchor, positive and negative).

Figure 6: Architecture of the proposed model.

Figure 7: An illustration of the proposed model.
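The sketch below shows one training step in PyTorch under simplifying assumptions: only two of the six backbones are instantiated, and the margin value is an illustrative choice. `nn.PairwiseDistance` produces the anchor-positive and anchor-negative distances, and `nn.MarginRankingLoss` pushes the negative distance above the positive one by at least the margin:

```python
import torch
import torch.nn as nn
from torchvision import models

class EnsembleEmbedder(nn.Module):
    """Illustrative two-backbone version of the six-backbone ensemble."""
    def __init__(self, emb_dim=512):
        super().__init__()
        # torchvision >= 0.13 weights API; loads ImageNet weights.
        self.backbones = nn.ModuleList([
            models.densenet121(weights="DEFAULT"),
            models.mobilenet_v2(weights="DEFAULT"),
        ])
        # Replace each classifier with a trainable 512-neuron head and
        # freeze every other layer, as described above.
        self.backbones[0].classifier = nn.Linear(1024, emb_dim)
        self.backbones[1].classifier = nn.Linear(1280, emb_dim)
        for net in self.backbones:
            for name, p in net.named_parameters():
                p.requires_grad = "classifier" in name
        # Fuse the concatenated embeddings into a single 512-d vector.
        self.fuse = nn.Linear(emb_dim * len(self.backbones), emb_dim)

    def forward(self, x):
        return self.fuse(torch.cat([net(x) for net in self.backbones], dim=1))

embedder = EnsembleEmbedder()
pdist = nn.PairwiseDistance(p=2)
criterion = nn.MarginRankingLoss(margin=1.0)   # margin value is an assumption
optimizer = torch.optim.Adam(
    (p for p in embedder.parameters() if p.requires_grad), lr=1e-4)

# One training step on a dummy triplet batch of shape (B, 3, 224, 224).
anchor, positive, negative = (torch.rand(8, 3, 224, 224) for _ in range(3))
ea, ep, en = embedder(anchor), embedder(positive), embedder(negative)
d_pos, d_neg = pdist(ea, ep), pdist(ea, en)
# target = 1 asks the loss to rank d_neg above d_pos by at least the margin.
loss = criterion(d_neg, d_pos, torch.ones_like(d_pos))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```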
### 3.5 Evaluation metrics

The performance of the proposed system was evaluated using six metrics: Accuracy, Precision, Recall, F1-score, Specificity, and the AUC ROC score. They were determined by:

\[Accuracy=\frac{TP+TN}{TP+FP+TN+FN}\]

\[Precision=\frac{TP}{TP+FP}\]

\[Recall=\frac{TP}{TP+FN}\]

\[F1\text{-}score=\frac{2*Precision*Recall}{Precision+Recall}\]

\[Specificity=\frac{TN}{TN+FP}\]

* TP stands for COVID-19 instances that were predicted properly
* FP stands for Normal or CAP instances that the proposed system incorrectly classifies as COVID-19
* TN stands for appropriately categorized Normal or pneumonia cases
* FN stands for COVID-19 instances that were incorrectly categorized as pneumonia or Normal cases.

## 4 Experimental result and discussion

### 4.1 Experimental environments

The models were developed on an AMD Ryzen Threadripper 1950X 16-Core Processor with 48 GB of main memory, using Python 3.7.12 and PyTorch 1.11.0. The experiments were conducted utilizing two NVIDIA GeForce RTX 2080 Ti GPUs, each with 11 GB of RAM. The computer ran on Ubuntu 20.04.2 LTS.

### 4.2 Analysis and discussion of results

The transfer-learning-based models' dataset was divided into an 80% train set and a 20% test set. The proposed Triplet Siamese Network dataset (i.e., the meta-dataset) was divided into a training dataset of 600 images and a testing dataset of 10152 images. Since the dataset was partitioned in a patient-aware manner, no CT scan slice from the same patient was used in both the training and testing datasets. To get the results, the 10-fold cross-validation approach was utilized. To determine how well our proposed few-shot learning model would recognize COVID-19 occurrences, we first compared its accuracy and F1 score with those of several pre-trained CNN models. Table 4 provides the comparison results of the 3-class (Normal, CAP, COVID-19) classification. Since the proposed model utilized ensemble learning, ablation studies on the models included in the ensemble are shown in Table 5. Based on Table 5, we can conclude that it is beneficial to include all six models in the ensemble, as they provide complementary predictions. The base models made different types of errors on different examples, and combining the predictions of these models resulted in a more diverse set of predictions and fewer overall errors, which led to improved performance. The proposed ensemble model smoothed out the predictions of the individual models, reduced the impact of overfitting, and improved the generalization ability of the model. As shown in Table 4, our suggested model outperforms the implemented pre-trained models with a substantially better score. The result is even more significant because the proposed model's score was obtained from 10152 images, whereas the other models were tested with only 3390 images. Furthermore, compared to the pre-trained CNN models, the suggested model's training and validation losses appear more stable and better convergent, as illustrated in Fig. 8. We used L2 regularization and dropout to prevent our model from overfitting to the training data. Early stopping was employed together with a learning rate schedule (the ReduceLROnPlateau scheduler from PyTorch) to prevent overfitting. Table 6 summarizes the classification performance results for additional verification of the performance of our proposed model on a class-by-class basis. A graphical representation of Table 6 is given in Fig. 9.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Labels & Precision & Recall & F1 score & AUC ROC & Accuracy & Specificity & Support \\ \hline CAP & 0.9958 & 0.9728 & 0.9842 & 0.9989 & 0.9896 & 0.9979 & 3384 \\ Normal & 0.9745 & 0.9947 & 0.9845 & 0.9993 & 0.9896 & 0.9870 & 3384 \\ COVID & 0.9917 & 0.9941 & 0.9929 & 0.9996 & 0.9953 & 0.9959 & 3384 \\ Micro avg & 0.9872 & 0.9872 & 0.9872 & 0.9990 & & & 10152 \\ Macro avg & 0.9873 & 0.9872 & 0.9872 & 0.9992 & & & 10152 \\ Weighted avg & 0.9873 & 0.9872 & 0.9872 & & & & 10152 \\ Samples avg & 0.9872 & 0.9872 & 0.9872 & & & & 10152 \\ \hline \hline \end{tabular} \end{table} Table 6: The class-wise score of the proposed model.

Figure 8: Training vs. validation loss of pre-trained CNN models and the proposed model.

Figure 9: Graphical representation of the class-wise score.

Figure 10: Confusion matrix of the CovidExpert.

Figure 11: ROC curve of the proposed system.

It is crucial to recognize every positive case during this health emergency. This work proposed a cutting-edge AI-based approach for an urgently needed quick and accurate diagnostic method for COVID-19 disease. The proposed research aims to achieve this by integrating a few-shot learning model with the similarity learning method and an ensemble of pre-trained CNN encoders. With a small amount of training data, we demonstrated the value of few-shot learning for multi-class classification by automatically classifying COVID-19, CAP, and Normal cases from CT scans to expedite patient care.
We presented a novel architecture that combines few-shot learning with an ensemble of pre-trained convolutional neural networks to retrieve feature vectors from images, which are then fed into the Triplet Siamese Network to discover similarities between input images. For the multi-class classification, the ROC score was 99.9%, the specificity was 99.36%, the sensitivity was 98.72%, and the overall accuracy of the suggested model was 98.719%. These performance values are thought to be very important for applications in medical contexts. Our suggested model performs similarly to or better than the currently published models. This is encouraging since only 600 training samples were used to train our few-shot learning model. We hope that the shortcomings of CNNs on small and unbalanced datasets may be overcome by using our architecture. Our strategy may facilitate the work of radiologists, and our method shows promise in providing radiologists and clinicians with second opinions. The suggested system has certain drawbacks. Since Siamese neural networks (SNNs) learn from quadratic (pairwise) or cubic (triplet-wise) combinations of samples, they often need longer training time than conventional neural networks (pointwise learning). Because pairwise learning is a component of SNN training, the output is a distance from each class, or a similarity, rather than prediction probabilities.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline Author & Architecture & Score, Metric & Classification & Dataset & Data type \\ \hline Panwar et al. [13] & VGG-19 & 95.61\%, accuracy & Binary & (Training, testing) \(-\) (6090, 1522) & images \\ Iforry et al. [18] & VGG-19 & 84.09\%, accuracy & Binary & (Training, testing) \(-\) (746, 146) & \\ Rahimzadeh et al. & ResNet50V2 & 98.49\%, accuracy & Binary & (Training, testing) \(-\) (55853, 7996) & \\ Hussain et al. [15] & CoroDet & 99.1\%, accuracy (2 classes); 94.2\%, accuracy (3 classes) & Binary \& Multi-class & For 2 classes: (Training, testing) \(-\) (1300, 260); for 3 classes: (Training, testing) \(-\) (2100, 420) & \\ Zhao et al. [19] & BiT-M & 99.2\%, accuracy & Binary & (Training, testing) \(-\) (194922, 25658) & \\ Serte et al. [20] & ResNet-50 & 96\%, AUC & Binary & (Training, Testing) \(-\) (244, 75) & 3D CT scans \\ Mukherjee et al. [21] & Tailored CNN & 96.28\%, accuracy & Binary & (Training, Testing) \(-\) (672, 672) & images \\ Shah et al. [22] & VGG-19 & 94.5\%, accuracy & Binary & (Training, Testing) \(-\) (592, 73) & \\ Maftouni et al. [23] & DenseNet-121\(+\)ResAttNet-92 & 95.31\%, accuracy & Multi-class (3 classes) & (Training, Testing) \(-\) (14385, 1238) & \\ Shorfuzzaman et al. [24] & Siamese Network (VGG-16 encoder) & 95.6\%, accuracy & Multi-class (3 classes) & (Training, Testing) \(-\) (30, 648) & \\ & Siamese Network (VGG-16 encoder) & 96.4\%, accuracy & Multi-class (3 classes) & (Training, Testing) \(-\) (2520, 840) & \\ Kogilavani et al. [16] & VGG16 & & Binary & (Training, Testing) \(-\) (2711, 581) & \\ Basu et al. [17] & CNN \(+\) KNN & 98.87\%, accuracy & Binary & (Training, Testing) \(-\) (2487, 439) & \\ **Proposed System** & **Triplet Siamese Network (Ensemble of CNNs as base encoder)** & **98.719\%, accuracy** & **Multi-class (3 classes)** & **(Training, Testing) \(-\) (600, 10152)** & \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison of the proposed system with existing systems.

Figure 12: PR curve of the proposed system.
The suggested model will not work on other CT scan views, such as sagittal and coronal, since the dataset only included the axial view of chest CT scans. The research used imaging data only; it did not consider the patient's age, gender, or past medical history. We plan to extend this work by addressing these issues. Further developing the proposed method, for example by reducing the size of the training dataset and localizing the regions of abnormality, is one of our main research priorities for the future. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements The authors would like to thank Syed Md. Shakawath Hossain, Medical Technologist, CT Scan department, Medinova Medical Services Ltd, for his assistance in creating the dataset for the meta-learning model. The authors wish to acknowledge the Department of Computer Science and Engineering (CSE), Shahjalal University of Science & Technology (SUST) for providing technical support.
2310.01649
On Training Derivative-Constrained Neural Networks
We refer to the setting where the (partial) derivatives of a neural network's (NN's) predictions with respect to its inputs are used as additional training signal as a derivative-constrained (DC) NN. This situation is common in physics-informed settings in the natural sciences. We propose an integrated RELU (IReLU) activation function to improve training of DC NNs. We also investigate denormalization and label rescaling to help stabilize DC training. We evaluate our methods on physics-informed settings including quantum chemistry and Scientific Machine Learning (SciML) tasks. We demonstrate that existing architectures with IReLU activations combined with denormalization and label rescaling better incorporate training signal provided by derivative constraints.
KaiChieh Lo, Daniel Huang
2023-10-02T21:23:31Z
http://arxiv.org/abs/2310.01649v2
# On Training Derivative-Constrained Neural Networks ###### Abstract We refer to the setting where the (partial) derivatives of a neural network's (NN's) predictions with respect to its inputs are used as additional training signal as a _derivative-constrained_ (DC) NN. This situation is common in physics-informed settings in the natural sciences. We propose an integrated ReLU (IReLU) activation function to improve training of DC NNs. We also investigate _denormalization_ and _label rescaling_ to help stabilize DC training. We evaluate our methods on physics-informed settings including quantum chemistry and Scientific Machine Learning (SciML) tasks. We demonstrate that existing architectures with IReLU activations combined with denormalization/label rescaling better incorporate training signal provided by derivative constraints. ## 1 Introduction Deep learning is increasingly being applied to physics-informed settings in the natural sciences. By _physics-informed_, we mean any situation where inputs and/or outputs in a dataset involve relationships based on physics (_e.g._, forces). The field of Scientific Machine Learning (SciML) (Karniadakis et al., 2021) is emerging to address the issues of applying machine learning (ML) to the physical sciences (_e.g._, physics-informed neural networks or PINNs (Raissi et al., 2019)). Domains include fluid dynamics (Sun et al., 2020; Sun & Wang, 2020), geo-physics (Zhu et al., 2021), fusion (Mathews et al., 2020), and materials science (Shukla et al., 2020; Lu et al., 2020). In the realm of quantum chemistry, there are promising results (Hermann et al., 2022; 2020) that attempt to solve the electronic-structure problem (_i.e._, predict energy from structure) or model an atomic system's energy surface (Hu et al., 2021; Gasteiger et al., 2021). In the physics-informed setting, it is common to express constraints on the neural network's (NN's) predictions in terms of the NN's (partial) derivatives with respect to (w.r.t.) its inputs in order to encode physical constraints. We call this a _derivative-constrained_ (DC) NN. Thus, training a DC NN addresses a subset of the issues considered in physics-informed settings such as SciML. We emphasize that most settings do not use derivatives of the model w.r.t. its inputs to supply additional training signal, even though most NN models are optimized with gradient-based methods. One strategy for incorporating derivative constraints is to add additional terms containing the derivative constraints to a loss function so that multi-objective optimization can be performed. While it is possible to construct NNs with high predictive accuracy using this strategy, the resulting models may not incorporate derivative constraints efficiently. In the physics-informed setting, this translates into capturing less of the physics. We demonstrate that this occurs in many existing works in quantum chemistry and SciML, where we obtain high predictive accuracy but lower accuracy on derivative constraints (see experiments, Sec. 5). This presents an opportunity to reevaluate aspects of training DC NNs and revisit best practices. We make the following contributions. 1. We propose a new activation function called an _integrated ReLU_ (IReLU) obtained by integrating a standard ReLU activation (Agarap, 2018) (Sec. 4.1). We intend IReLUs as a drop-in replacement for activations in existing architectures. Our main motivation for doing so is that training a DC NN involves higher-order derivatives.
Consequently, the choice of activation function will impact the propagation of additional derivative information. 2. We propose _denormalizing_ NNs, _i.e._, removing all normalization layers, and _label rescaling_ as a dataset preprocessing method to stabilize training (Sec. 4.2). Our primary motivation for doing so is that we hypothesize that DC training of NNs is sensitive to _units_. Consequently, unit-insensitive normalization procedures (_e.g._, batch normalization (Ioffe and Szegedy, 2015)) that help stabilize training in standard settings may introduce artifacts in the DC training case. We benchmark the performance of our proposed methods on a variety of datasets and tasks including quantum chemistry NNs (Schutt et al., 2017; Xie and Grossman, 2018; Gasteiger et al., 2020; Ma et al., 2021; Gasteiger et al., 2021) and PINNs (Raissi et al., 2019) (Sec. 5) used in SciML. We show that IReLUs combined with denormalization/label rescaling improve the learning of gradient constraints while retaining predictive accuracy. ## 2 Related Work There are at least two paths to improving training of DC NNs: (1) improving the loss function and (2) developing new architectures. In the first direction, loss functions used in training DC models often involve multiple terms, so they pose multi-objective optimization (MOO) problems. One solution to the MOO problem is to weigh each term in the loss function (Sener and Koltun, 2018; van der Meer et al., 2022; Bischof and Kraus, 2021), potentially in an adaptive manner (Li and Feng, 2022; Fernando and Tsokos, 2021; Xiang et al., 2022; Chen et al., 2018; Malkiel and Wolf, 2020; Heydari et al., 2019; Kendall et al., 2018; Lin et al., 2017). This is helpful in the SciML context since the different loss terms may use different units of measurement, and thus have imbalanced label magnitudes (Wang et al., 2021). Weighing loss terms is also common in training quantum chemistry networks. We will demonstrate in Sec. 3.2 that such weighting is difficult to control, which motivates our methods. In the second direction, we can also develop novel architectures that better incorporate domain knowledge. Domains such as quantum chemistry have custom-designed NN architectures (Schutt et al., 2017; Xie and Grossman, 2018; Gasteiger et al., 2020; Ma et al., 2021; Gasteiger et al., 2021; 2022; Schmitt et al., 2021; Zitnick et al., 2022; Passaro and Zitnick, 2023; Liao et al., 2023). The architectural improvements in these works focus on re-arranging interaction patterns (_e.g._, convolution layers), leveraging graph properties of atoms (_e.g._, molecular bonds), and encoding invariances/equivariances. We propose an activation function in Sec. 4.1 which we intend as a drop-in replacement for activations in existing architectures. Thus, we intend our activation to be applicable to a wide range of architectures. ## 3 Training with Derivative-Constraints We review an example of training with derivative constraints in the setting of quantum chemistry (Sec. 3.1). Then, we motivate our proposed methods with an experiment demonstrating the difficulty of incorporating gradient constraint information with traditional approaches (Sec. 3.2). ### Example: Potential Energy Surface Modeling We use potential energy surface (PES) modeling from quantum chemistry as a concrete example to introduce DC training.
A PES \(U:\mathbb{R}^{3A}\rightarrow\mathbb{R}\) gives the energy of a system with \(A\) atoms as a function of its atomic coordinates.1 A PES \(U(\mathbf{x})\) and its force field \(\mathbf{F}(\mathbf{x})\) can be evaluated by quantum mechanical simulation software such as Gaussian (Frisch et al., 2016) given the 3D Cartesian coordinates \(\mathbf{x}\) of the \(A\) atoms, _i.e._, its structure. The force field is the negative gradient of the PES and can be used to simulate the dynamics of the \(A\) atoms. In particular, when \(\mathbf{F}(\mathbf{x})=0\), there are no forces acting on \(\mathbf{x}\), which means that the configuration of \(\mathbf{x}\) is stable. Footnote 1: We are ignoring symmetries. Technically, \(U:\mathbb{R}^{3A-6}\rightarrow\mathbb{R}\) for general atomistic systems and \(U:\mathbb{R}^{3A-5}\rightarrow\mathbb{R}\) when the system is linear. Conservation of energy is expressed with the following derivative constraint \[-\nabla_{\mathbf{x}}U(\mathbf{x})=\mathbf{F}(\mathbf{x}) \tag{1}\] that relates the gradient of the PES to the negative force. This connects simulation of a system with its changes in energy. As a result, we can find stable configurations of atomistic systems in nature by finding local minima on the system's PES, since \(\mathbf{F}(\mathbf{x})=-\nabla_{\mathbf{x}}U(\mathbf{x})=0\) there. Consequently, we can study molecules and materials _in silico_ if we can model a PES efficiently and accurately enough. We can construct a surrogate model of a system's PES by fitting a NN \(f_{\theta}\) with parameters \(\theta\) to a dataset \(\mathcal{D}=\{(\mathbf{x}^{i},E^{i},F^{i}):1\leq i\leq N\}\) where \(\mathbf{x}^{i}\) are atomic coordinates, \(E^{i}\) is an energy, and \(F^{i}\) are forces — the negative gradient of the energy w.r.t. \(\mathbf{x}^{i}\). This dataset can be created from quantum mechanical simulation software. A surrogate model can be used to accelerate the computation of a PES, since quantum mechanical simulation software can be compute-intensive to run. To train a NN on this dataset, we can use the following multi-objective loss function \[\text{Loss}(f_{\theta},\mathcal{D})=\sum_{i=1}^{N}\alpha\|f_{\theta}(\mathbf{x}^{i})-E^{i}\|^{2}+\beta\|-\nabla_{\mathbf{x}}f_{\theta}(\mathbf{x}^{i})-F^{i}\|^{2}\,. \tag{2}\] The terms \(\alpha\) and \(\beta\) are hyper-parameters that weigh the contributions of the first term involving \(f_{\theta}\)'s predictions and the second term involving \(f_{\theta}\)'s gradients. Training \(f_{\theta}\) with a gradient-based method will thus involve second-order derivatives. The second term enforces the conservation of energy, since it constrains the force prediction of the model to be the observed force. Thus, conservation of energy is violated when training signal from derivative constraints is not efficiently incorporated. More generally, we can have arbitrary derivatives, constraints, and datasets involving additional supervised signal containing these constraints in different situations (_e.g._, thermodynamics, fluid flow). These constraints can come from the underlying partial differential equation (PDE) that describes the natural process. As before, we can add these constraints to a loss term and optimize as before. We refer the reader to the SciML literature (Raissi et al., 2019; Karniadakis et al., 2021; Meng et al., 2022) for more examples.
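To make the multi-objective loss of Eq. 2 concrete, the following is a minimal PyTorch sketch of how the force term is obtained from the gradient of the model's energy prediction w.r.t. the input coordinates via autograd. The toy MLP `f_theta` and the helper `dc_loss` are illustrative names, not code from the paper.

```python
import torch
import torch.nn as nn

f_theta = nn.Sequential(nn.Linear(9, 64), nn.Tanh(), nn.Linear(64, 1))  # toy PES, A = 3 atoms

def dc_loss(x, E, F, alpha=1.0, beta=1.0):
    x = x.requires_grad_(True)           # coordinates, shape (B, 3A)
    E_pred = f_theta(x).squeeze(-1)      # predicted energies, shape (B,)
    # create_graph=True keeps the graph so that training backpropagates
    # through the force term (second-order derivatives of the model)
    F_pred = -torch.autograd.grad(E_pred.sum(), x, create_graph=True)[0]
    return alpha * ((E_pred - E) ** 2).mean() + beta * ((F_pred - F) ** 2).mean()

x, E, F = torch.randn(4, 9), torch.randn(4), torch.randn(4, 9)  # dummy batch
loss = dc_loss(x, E, F)
loss.backward()                          # second-order autograd happens here
```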
### Motivation We hypothesize that one fundamental challenge with training a DC model \(f_{\theta}\) is that it is more difficult to incorporate derivative constraint information in the loss term compared to model prediction information in the loss term. As a quick test of our hypothesis, and as motivation for our methods, we report the following experiment in the setting of quantum chemistry. **Experiment.** We train a selected NN designed for quantum chemistry on a training set consisting of \(\{(\mathbf{x}^{i},E^{i},F^{i}):1\leq i\leq N\}\) tuples for varying \(\beta\) values while holding \(\alpha=1\) constant, to compare the relative difficulty of predicting the energy versus predicting the force (_i.e._, involving the gradient).

Figure 1: Comparing relative difficulty of learning energy (prediction) versus force (derivative constraint) with SchNet (Schütt et al., 2017) on the Aspirin molecule in the MD17 dataset by varying \(\beta\) in the loss function (Eq. 2).

As a reminder, the terms \(\alpha\) and \(\beta\) control the relative importance of each term in the loss function in Eq. 2. Thus, whether \(\alpha>\beta\) or \(\alpha<\beta\) can be used as a proxy to determine which term in the loss is more difficult to learn. In particular, we consider learning of the derivative signal to be more difficult if \(\beta>\alpha\) gives lower force loss on the test set compared to energy loss on the test set on a relative basis. We use the relative basis since energies and forces are given in related units, \(\frac{\text{kcal}}{\text{mol}}\) versus \(\frac{\text{kcal}}{\text{mol}\,\text{\AA}}\) respectively, so a direct comparison is not possible. **Details.** In this experiment, we choose a classic NN, SchNet (Schutt et al., 2017). The implementation and hyper-parameters of SchNet are taken from the Open Catalyst Project (OCP) (Chanussot* et al., 2021), a joint effort between computer scientists and chemistry/materials scientists to solve the PES modeling problem. We select the MD17 (Chmiela et al., 2017) dataset, which contains \((\mathbf{x}^{i},E^{i},F^{i})\) tuples for 8 different small molecules. As terminology, each \(\mathbf{x}^{i}\) is also called a _conformation_. We train SchNet on \(50,000\) conformations of the Aspirin molecule for \(50\) epochs.2 Instead of normalizing the data before training, we train the networks directly on \(\mathbf{x}^{i}\) so that a surrogate PES can directly predict energies and forces with the same units. The mean energy of Aspirin in our training set is \(-406,737.28\,\frac{\text{kcal}}{\text{mol}}\) and the variance is \(35.36\,\frac{\text{kcal}^{2}}{\text{mol}^{2}}\). The mean force is \(423.87\,\frac{\text{kcal}}{\text{mol}\,\text{\AA}}\) and the variance is \(779.99\,\frac{\text{kcal}^{2}}{\text{mol}^{2}\,\text{\AA}^{2}}\). Thus, the absolute value of the mean energy is roughly \(3\) orders of magnitude larger than the mean force. We will comment more on this in Sec. 4.2. We use \(\beta=\{0.01,0.1,1,10,30,50,100,200\}\). Footnote 2: We have also trained SchNet for \(300\) epochs, but observed fast convergence. **Results.** We report training loss curves in Fig. 2a and test loss for various \(\beta\) values in Fig. 2b. We observe that the energy loss divided by \(1000\) (since the energies are roughly \(3\) orders of magnitude larger) is typically much lower than the force loss. This gives evidence that it is more difficult to incorporate derivative constraint information in the loss term than model prediction information in the loss term.
Moreover, it is not easy to improve the relative difference, even for large values of \(\beta\), which may make the energy loss worse. Finally, we observe that the training losses for each choice of \(\beta\) converge to roughly the same level, so there is a trade-off between learning energies and learning forces. ## 4 Methods In this section, we propose two ideas to be used in conjunction to improve learning of derivative constraints. First, we propose a new activation function called an integrated ReLU (IReLU) activation function (Sec. 4.1). Second, we introduce _denormalization_ and _label rescaling_ to help stabilize training of DC NNs with IReLU activations (Sec. 4.2). ### Integrated ReLU Activation The simple observation that we make concerning training a DC NN is that it involves higher-order derivatives. Consequently, higher-order derivatives of activation functions will also be used during the training process. This motivates us to revisit the choice of activation function for DC training, since ordinary activation functions have been designed for the setting where only the first-order derivative is used. We use a simple idea to construct an activation for DC NN training: use an integrated form of an ordinary activation function, such as a ReLU, when performing DC training of NNs. The intuition for doing so is the following: if we only fit derivative constraints, then we should obtain an ordinary activation (_e.g._, ReLU) after we have taken the derivative of the original activation in the model. We will discuss this intuition more in Appx. A. Define the _integrated ReLU_ (IReLU) to be the activation \[\mathrm{IReLU}(x)=\int_{0}^{x}\mathrm{ReLU}(y)\,dy=\tfrac{1}{2}\max(0,x)^{2}\,. \tag{3}\] We focus on the IReLU activation in this work since the ReLU is a popular activation. Naturally, the idea of an IReLU can be applied to other activations (Maas et al., 2013; Clevert et al., 2015; Ramachandran et al., 2017; Hendrycks & Gimpel, 2016) as well.
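As a sketch, the IReLU of Eq. 3 takes only a few lines of PyTorch; the final assertion checks that its derivative recovers the ordinary ReLU, which is the property motivating the design.

```python
import torch
import torch.nn as nn

class IReLU(nn.Module):
    """Integrated ReLU: 0.5 * x^2 for x > 0, and 0 otherwise (Eq. 3)."""
    def forward(self, x):
        return 0.5 * torch.relu(x) ** 2

x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
y = IReLU()(x).sum()
(g,) = torch.autograd.grad(y, x)
assert torch.allclose(g, torch.relu(x.detach()))  # d/dx IReLU(x) == ReLU(x)
```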
### De-Normalization and Label Rescaling Conventional wisdom is that normalization techniques such as batch normalization may help accelerate the training of NNs as well as improve the stability of training. Notably, centering and rescaling the internal values in a NN might seem crucial when using IReLUs, since these activations produce higher responses compared to traditional activations. Dataset normalization is also common practice to ensure that input features are set on an equal footing. Intuitively, normalization techniques remove the _units_ of the features and dataset. We hypothesize that DC NNs are more sensitive to _units_ compared to typical training without derivative constraints, because derivatives scale with the units of the inputs: by the chain rule, \(\nabla_{\mathbf{x}}f(c\mathbf{x})=c\,(\nabla f)(c\mathbf{x})\). We can interpret the constant \(c\) as determining the _units_ of \(\mathbf{x}\), which also determines the units of the derivative. In particular, this constant \(c\) will appear in the loss term while training a DC NN, and so the loss function is sensitive to the choice of units on the inputs \(\mathbf{x}\). We emphasize that a typical setting does not take derivatives of the NN w.r.t. its inputs \(\mathbf{x}\), and so these units will not appear in the loss. If our hypothesis holds, then we will need to develop alternative approaches to stabilizing training beyond the typical unit-insensitive approaches. Towards this end, we propose two techniques. First, we propose _denormalization_, _i.e._, the removal of all normalization layers in a NN architecture. Second, we propose a simple _label rescaling_ procedure where we scale the labels in a dataset \(\mathcal{D}=(\mathbf{x}^{i},\ell_{1}^{i},\ldots,\ell_{n}^{i})_{i}\) by a suitable constant \(C\), the smallest power of 10 that bounds every label magnitude: \[C=\min\{C:|\ell_{j}^{i}|/C\leq 1\ \text{for all}\ i,j,\ C\ \text{a power of}\ 10\}\;. \tag{4}\] In the PES modeling example, this means we use the same constant \(C\) for both energy and force labels. Intuitively, what label rescaling does is set the units of the model's predictions and derivatives. The loss function in the PES modeling example becomes \[\sum_{i}\alpha\|f_{\theta}(\mathbf{x}^{i})-\frac{E^{i}}{C}\|^{2}+\beta\|-\nabla_{\mathbf{x}}f_{\theta}(\mathbf{x}^{i})-\frac{F^{i}}{C}\|^{2} \tag{5}\] with label rescaling. Thus, label rescaling plays a similar role to \(\alpha\) and \(\beta\) in setting units, the difference being that the units are set on the model as opposed to the loss. We emphasize that in label rescaling, we do not normalize the dataset inputs.
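Below is a minimal sketch of the label rescaling of Eq. 4, under the reading that \(C\) is the smallest power of 10 bounding every label magnitude; `rescale_constant` is an illustrative helper, not code from the paper.

```python
import math
import torch

def rescale_constant(*label_tensors):
    """Smallest power of 10 such that every |label| / C <= 1 (Eq. 4)."""
    max_abs = max(t.abs().max().item() for t in label_tensors)
    return 10.0 ** math.ceil(math.log10(max_abs))

E = torch.tensor([-406737.28, -406741.10])   # energies (kcal/mol)
F = torch.randn(2, 9) * 400.0                # forces (kcal/mol/Å)
C = rescale_constant(E, F)                   # -> 1e6 for MD17-scale energies
E_scaled, F_scaled = E / C, F / C            # substituted for E, F in Eq. 5
```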
## 5 Experiments We benchmark the performance of our proposed methods on a variety of architectures, datasets, and tasks, including quantum chemistry NNs (Sec. 5.1) and PINNs (Sec. 5.2) used in SciML. The scripts we used for the experiments can be found on GitHub: [https://github.com/lbai-lab/dcnn-training](https://github.com/lbai-lab/dcnn-training) ### Quantum Chemistry We separate our experiments in quantum chemistry by dataset, since different atomistic systems can have different properties. We use the MD17 dataset (Sec. 5.1.1), which has small organic molecules, and OC22 (Chanussot* et al., 2021) (Sec. 5.1.2), which contains large atomistic systems and metals. #### 5.1.1 Experiments on MD17 Our first experiment tests the efficacy of our methods across different architectures for the task of potential energy surface modeling. We select SchNet (Schutt et al., 2017), CGCNN (Xie & Grossman, 2018), ForceNet (Hu et al., 2021), DimeNet++ (Gasteiger et al., 2020), and GemNet (Gasteiger et al., 2021). SchNet and CGCNN are based on convolutional NN architectures. ForceNet, DimeNet++, and GemNet are based on graph NNs. We select the MD17 dataset. For each molecule in MD17, we randomly select \(50000\), \(6250\), and \(6250\) conformations from the dataset as the training, validation, and testing sets. The molecules include Aspirin (Asp.), Benzene (Ben.), Ethanol (Eth.), Malonaldehyde (Mal.), Naphthalene (Nap.), Salicylic acid (Sal.), Toluene (Tol.), and Uracil (Ura.). We present more details in Appx. B. Baseline models are trained with the same training configuration and model hyperparameters given by OCP (Chanussot* et al., 2021). We note that SchNet was originally benchmarked on MD17, whereas the other NNs have been tested on other datasets. We train for 50 epochs on the MD17 dataset. Given the fast convergence of the training loss on MD17 (Fig. 2a), we consider 50 epochs sufficient for the models to fully learn the energies and forces. We use a batch size of \(20\) as recommended in the literature (Chanussot* et al., 2021). We use the Adam optimizer with a learning rate of \(0.0001\). Tab. 1 compares the performance between models trained with the original settings and models trained with our proposed methods. We denormalize all networks that have normalization layers. For architectures which consist of multiple interaction-output blocks (_e.g._, DimeNet++ and GemNet), we were only able to replace the activation layers in the output blocks with IReLU, as training with all activations replaced proved to be unstable. For label rescaling, we use the constant \(C=1000000\), since the energies in MD17 are on the order of \(400000\). The results of our proposed methods are noted with * in the table. To investigate the individual contributions of IReLU, denormalization, and label rescaling, we also conduct ablation studies (Appx. C). We also experiment with different dataset sizes (Appx. D). For each architecture, we report the energy loss (upper row) and the force loss (bottom row) separately. We use the units of the original dataset, \(\frac{\text{kcal}}{\text{mol}}\) for the energy and \(\frac{\text{kcal}}{\text{mol}\,\text{\AA}}\) for the force, respectively. In general, our methods improve upon force loss across most molecules and most architectures. In particular, there is significant improvement in learning forces for CGCNN, DimeNet++ and ForceNet. We observe cases where our method performs worse on forces (_e.g._, SchNet and GemNet) but provides competitive performance. Perhaps surprisingly, our methods also improve the energy loss (38 out of 40 cases). We might reason that better incorporating force information leads to improved learning of the physics. Nevertheless, it would be an interesting direction of future work to study this in more detail. #### 5.1.2 Experiments on OC22 Dataset To validate our methods on more datasets, we also compare the performance with and without our proposed methods on OC22 (Chanussot* et al., 2021). OC22 contains \(62331\) relaxations of oxides calculated at the DFT level. It contains a wide range of crystal structures (_e.g._, monoclinic, tetragonal) that contain heavier elements (_e.g._, metalloids, transition metals). \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline Model & Asp. & Ben. & Eth. & Mal. & Nap. & Sal. & Tol. & Ura.
\\ \hline SchNet & 53.00 & 125.43 & **5.97** & 37.54 & 197.06 & 65.46 & 129.80 & 54.88 \\ & **1.21** & 0.39 & **0.68** & **1.00** & **0.72** & **1.02** & **0.67** & **0.86** \\ \hline SchNet* & **28.31** & **0.56** & 13.39 & **11.27** & **5.53** & **27.57** & **0.34** & **20.36** \\ & 1.67 & **0.36** & 1.53 & 1.67 & 1.01 & 1.12 & 1.04 & 1.23 \\ \hline \hline CGCNN & 239.07 & 98.86 & 38.36 & 104.45 & 130.98 & 197.09 & 124.29 & 133.51 \\ & 14.20 & 6.11 & 8.25 & 14.27 & 8.46 & 8.50 & 9.73 & 9.08 \\ \hline CGCNN* & **137.12** & **16.63** & **4.63** & **30.56** & **66.68** & **78.29** & **36.73** & **81.02** \\ & **7.13** & **0.61** & **2.96** & **4.99** & **2.38** & **4.77** & **2.91** & **4.93** \\ \hline \hline DimeNet++ & 47.43 & 133.70 & 236.94 & 856.01 & 1096.34 & 580.41 & 301.06 & 669.06 \\ & 13.82 & 12.45 & 6.41 & 8.87 & 7.82 & 10.41 & 8.64 & 6.15 \\ \hline DimeNet++* & **2.59** & **1.68** & **0.58** & **20.30** & **5.18** & **3.65** & **0.77** & **5.96** \\ & **2.52** & **0.25** & **0.29** & **0.78** & **0.70** & **1.79** & **0.46** & **0.71** \\ \hline \hline ForceNet & 755.49 & 239.53 & 1048.44 & 874.22 & 1677.71 & 165.17 & 369.33 & 1964.51 \\ & 21.95 & 117.79 & 17.61 & 31.52 & 18.16 & 22.36 & 18.33 & 46.47 \\ \hline ForceNet* & **13.80** & **19.09** & **3.18** & **4.52** & **5.98** & **41.43** & **5.63** & **14.54** \\ & **0.89** & **0.33** & **1.15** & **1.36** & **0.35** & **0.34** & **0.50** & **0.26** \\ \hline \hline GemNet & 2201.94 & 352.19 & 470.32 & **5.23** & 564.06 & 1238.93 & 193.50 & 228.10 \\ & **0.34** & **0.22** & 0.28 & **0.27** & **0.17** & **0.30** & **0.12** & **0.20** \\ \hline GemNet* & **13.16** & **6.31** & **7.10** & 15.36 & **5.87** & **5.67** & **8.13** & **21.98** \\ & 0.93 & 1.27 & **0.27** & 0.70 & 0.47 & 0.73 & 0.47 & 7.07 \\ \hline \end{tabular} \end{table} Table 1: Comparison of model performance trained with original settings and with our proposed methods (*). Mean energy loss (kcal/mol) on the upper row and mean force loss (kcal/mol/Å) on the bottom row for the eight molecules in MD17 trained on state-of-the-art models. Compared to MD17, OC22 consists of various large structures/metals (more than \(100\) atoms) mixed together in the training, validation, and testing sets. In this dataset, the absolute value of the mean energy is only 1 order of magnitude larger than the mean force (compared to 3 in MD17). We randomly select \(200000\) structures from the OC22 training split as our training set and \(25000\) structures from the OC22 validation (out-of-domain) split as our testing set. Baseline models are trained with the same training configuration and model hyperparameters given by OCP (Chanussot* et al., 2021). We train all models for \(50\) epochs. We use a batch size of \(20\) as recommended in the literature (Chanussot* et al., 2021) for SchNet and CGCNN. Due to hardware memory limitations, we tested DimeNet++ and ForceNet with a batch size of \(10\). We were not able to test GemNet due to hardware limitations. We use the Adam optimizer with a learning rate of \(0.00001\) for ForceNet and \(0.0001\) for the others. Tab. 2 shows that our methods produce better force predictions for CGCNN, DimeNet++, and ForceNet. Among the models where we improved force losses, CGCNN and DimeNet++ also improve energy losses. It would be interesting to investigate why SchNet with our methods performs worse on OC22.
As a reminder, both SchNet and CGCNN are based on convolutional architectures, and CGCNN's performance is improved with our method. We emphasize again that we do not modify the given architectures beyond replacing the activation functions (when appropriate). \begin{table} \begin{tabular}{|l|c|c|} \hline Model & Energy MAE Loss (eV) & Force MAE Loss (eV/Å) \\ \hline SchNet & **6.94** & **0.10** \\ \hline SchNet* & 22.4621 & 0.12 \\ \hline \hline CGCNN & 233.90 & 0.32 \\ \hline CGCNN* & **71.33** & **0.08** \\ \hline \hline DimeNet++ & 5.10 & 0.09 \\ \hline DimeNet++* & **0.57** & **0.01** \\ \hline \hline ForceNet & **4.48** & 0.33 \\ \hline ForceNet* & 5.90 & **0.10** \\ \hline \end{tabular} \end{table} Table 2: Comparison of performance on OC22 with original settings and our proposed methods. ### Physics-informed Neural Networks To validate the generalization ability of our proposed methods in other domains aside from quantum chemistry, we also experiment with physics-informed neural networks (PINNs) (Raissi et al., 2019; Karniadakis et al., 2021; Wu et al., 2018). PINNs are a general family of models that use a NN to predict the solution of a partial differential equation (PDE). The solution of a PDE is a latent function \(\psi(x,t)\) that describes physical measurements (_e.g._, temperature and velocity) as a function of spatial coordinates \(x\) and time \(t\). PINNs enforce physical constraints on the solution \(\psi(x,t)\) by requiring, through the loss function, that the solution satisfies the governing PDE. In general, the loss function for a PINN takes the form \[\mathcal{L}(\psi,\mathcal{D})=\mathcal{L}_{f}\left(\psi,\mathcal{D}_{f}\right)+\mathcal{L}_{ICBC}\left(\psi,\mathcal{D}_{IC},\mathcal{D}_{BC}\right) \tag{6}\] where \(\psi\) is a learned PDE solution (_e.g._, a NN) and \(\mathcal{D}=\left(\mathcal{D}_{f},\mathcal{D}_{IC},\mathcal{D}_{BC}\right)\) is a dataset consisting of several components containing additional constraints. The first term \[\mathcal{L}_{f}\left(\psi,\mathcal{D}_{f}\right)=\frac{1}{\left|\mathcal{D}_{f}\right|}\sum_{(\mathbf{x},t,\mathbf{y})\in\mathcal{D}_{f}}\mathcal{F}\left(\frac{\partial\psi(\mathbf{x},t)}{\partial\mathbf{x}},\frac{\partial\psi(\mathbf{x},t)}{\partial t},\frac{\partial^{2}\psi(\mathbf{x},t)}{\partial\mathbf{x}^{2}},\frac{\partial^{2}\psi(\mathbf{x},t)}{\partial\mathbf{x}\partial t},\ldots,\mathbf{y}\right) \tag{7}\] gives the predicted solution's loss evaluated on a spatial-temporal grid \((\mathbf{x},t)\in\mathcal{D}_{f}\) using a function \(\mathcal{F}\). The loss is a function of additional derivatives of the predicted solution \(\psi\) w.r.t. its inputs, according to the governing PDE. Thus, the loss function for a PINN may contain many higher-order derivatives. The second term \[\mathcal{L}_{ICBC}\left(\psi,\mathcal{D}_{IC},\mathcal{D}_{BC}\right)=\frac{1}{\left|\mathcal{D}_{IC}\right|}\sum_{(\mathbf{x},\mathbf{i})\in\mathcal{D}_{IC}}\mathcal{I}(\psi(\mathbf{x},0),\mathbf{i})+\frac{1}{\left|\mathcal{D}_{BC}\right|}\sum_{(\mathbf{x},t,\mathbf{b})\in\mathcal{D}_{BC}}\mathcal{B}(\psi(\mathbf{x},t),\mathbf{b}) \tag{8}\] enforces constraints on the PDE solution given by initial conditions (\(\mathcal{D}_{IC}\)) and boundary conditions (\(\mathcal{D}_{BC}\)). \(\mathcal{I}\) and \(\mathcal{B}\) are the respective loss functions for the initial conditions and boundary conditions. These conditions further constrain the solution of a PDE.
There can be multiple IC and BC loss terms. Moreover, the IC and BC loss terms (not shown) can also involve higher-order derivatives in certain PINNs. For our experiments with PINNs, we adapt baseline architectures, datasets, and training configurations from PDEBench (Takamoto et al., 2022). PDEBench provides implementations and benchmarks of SciML models, including PINNs, for learning (1) the Advection equation (Sec. 5.2.1), (2) the compressible fluid dynamics equation (Sec. 5.2.2), and (3) the diffusion-reaction equation (Sec. 5.2.3). In the baseline architecture, all PINNs use the same MLP (multi-layer perceptron) with \(6\) hidden layers of \(40\) neurons each to represent the latent function \(\psi\). The MLP uses the Tanh activation function in every hidden layer. We use the DeepXDE library (Lu et al., 2021) to construct and train the backbone MLP. For all PDEs, we compare against training a PINN with its activations replaced by IReLU activations. We did not find a need for label rescaling. For comparison, we also add batch normalization (BN) to study its impact. Following PINN convention, we measure model performance by the mean square error (MSE) of the PINN's latent function prediction (_i.e._, \(\psi(x,t)\)). Thus, the loss terms which involve higher-order derivatives are taken purely as constraints. Loss terms labeled with \(\dagger\) are the terms which involve derivatives of the model w.r.t. its inputs. To give more fine-grained information about how our methods impact each component of the loss, we provide the MSE value of each loss term evaluated in the last epoch of training alongside the predictive MSE evaluated on the testing set. #### 5.2.1 Advection Equation The Advection equation has a simple PDE that involves first-order partial derivatives of the model w.r.t. its input in \(\mathcal{L}_{f}\), one initial condition in \(\mathcal{L}_{ICBC}\), and one boundary condition in \(\mathcal{L}_{ICBC}\). Thus, this task tests our method's performance on loss functions in DC training with 3 terms. The full loss function associated with the Advection equation is presented in Appx. E.1. Both standard training and training with our methods use \(15000\) epochs on the 1D Advection dataset with the Adam optimizer and an initial learning rate of \(0.001\), following PDEBench. Tab. 3 presents the predictive loss evaluated on the test set and the training loss of each term in the loss function evaluated in the last epoch of training, since these terms act as constraints. The model trained with our method achieves the best predictive loss on the test set. We also improve two of the three training loss terms. It is also interesting to observe that adding batch normalization (BN) decreases performance. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Method & MSE & \(\mathcal{L}_{f}^{\dagger}\) & \(\mathcal{L}_{IC}\) & \(\mathcal{L}_{BC}\) \\ \hline Tanh + BN & 1.70 & 8e-16 & 5e-5 & 3e-5 \\ \hline IReLU + BN & 2.42 & 9e-12 & 8e-6 & 2e-6 \\ \hline Tanh (original) & 1.59 & 1e-5 & **3e-6** & 6e-6 \\ \hline IReLU (ours) & **0.99** & \(<\)**1e-45** & 0.08 & \(<\)**1e-45** \\ \hline \end{tabular} \end{table} Table 3: MSE and loss terms of the Advection equation. \(\mathcal{L}_{f}\) is the loss of the PDE which governs the Advection equation, \(\mathcal{L}_{IC}\) is the loss on the initial conditions, and \(\mathcal{L}_{BC}\) is the loss on the boundary conditions.
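For concreteness, the sketch below shows how the PDE-residual term \(\mathcal{L}_{f}\) is assembled from autograd derivatives of the backbone MLP, using the 1D advection equation \(u_{t}+c\,u_{x}=0\) as the simplest case. The advection speed \(c\) and the random collocation points are illustrative assumptions, not PDEBench's exact setup.

```python
import torch
import torch.nn as nn

# backbone MLP: 6 hidden layers of 40 neurons, Tanh activations
psi = nn.Sequential(
    *[m for i in range(6) for m in (nn.Linear(2 if i == 0 else 40, 40), nn.Tanh())],
    nn.Linear(40, 1))

def residual_loss(xt, c=1.0):
    xt = xt.requires_grad_(True)          # collocation points (x, t)
    u = psi(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0], du[:, 1]
    return ((u_t + c * u_x) ** 2).mean()  # squared PDE residual, i.e. L_f

xt = torch.rand(256, 2)                   # random points in the domain
loss = residual_loss(xt)                  # L_IC and L_BC terms are added on top
loss.backward()
```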
#### 5.2.2 Compressible Fluid Dynamics Equation The compressible fluid dynamics (CFD) equation contains 6 total initial and boundary conditions in \(\mathcal{L}_{ICBC}\). Like the Advection equation, it also involves first-order derivatives of the model w.r.t. its input in \(\mathcal{L}_{f}\). Thus, this task tests our method's ability to handle many terms in the loss function. The full loss function associated with the CFD equation is presented in Appx. E.2. Both standard training and training with our methods use \(15000\) epochs on the 1D CFD dataset from PDEBench with the Adam optimizer and an initial learning rate of \(0.001\), following PDEBench. Tab. 4 presents the predictive loss evaluated on the test set. We also include the training loss of each term in \(\mathcal{L}_{ICBC}\) evaluated in the last epoch of training. The model trained with IReLU and denormalization performs the best on all training terms. Our methods perform exceptionally well on the term involving derivatives (the \(\mathcal{L}_{f}\) loss). As before, we also observe that adding BN decreases performance. #### 5.2.3 Diffusion Reaction Equation The diffusion-reaction (DR) equation involves the most complex PDE in terms of derivative constraints. The loss on the PDE solution's prediction contains second-order derivatives, which means that third-order derivative information is used during NN training. Additionally, there are boundary conditions that utilize gradient information to describe the solution's boundary state at each given time step. Thus, this task tests our method's ability to cope with higher-order derivatives and with derivative constraints in multiple terms. The full loss function associated with the diffusion-reaction equation is presented in Appx. E.3. For the DR equation, both standard training and training with our methods use \(100\) epochs on the 2D DR dataset (\(1000\) samples) from PDEBench with the Adam optimizer and an initial learning rate of \(0.001\). Tab. 5 presents the MSE loss on the testing set and the training loss of each term in \(\mathcal{L}_{ICBC}\) in the last epoch of training. The models with IReLU and denormalization have the best predictive loss, which involves second-order derivative information. Additionally, we also achieve competitive performance on the boundary conditions that involve derivatives. This experiment demonstrates that our proposed methods have the ability to fit gradients of a NN of order higher than two. Note that the IReLU has a vanishing third-order derivative, so other integrated activations may perform better.
2301.02367
MR Elastography with Optimization-Based Phase Unwrapping and Traveling Wave Expansion-based Neural Network (TWENN)
Magnetic Resonance Elastography (MRE) can characterize biomechanical properties of soft tissue for disease diagnosis and treatment planning. However, complicated wavefields acquired from MRE coupled with noise pose challenges for accurate displacement extraction and modulus estimation. Here we propose a pipeline for processing MRE images using optimization-based displacement extraction and Traveling Wave Expansion-based Neural Network (TWENN) modulus estimation. Phase unwrapping and displacement extraction were achieved by optimization of an objective function with Dual Data Consistency (Dual-DC). A complex-valued neural network using displacement covariance as input has been constructed for the estimation of complex wavenumbers. A model of traveling wave expansion is used to generate training datasets with different levels of noise for the network. The complex shear modulus map is obtained by a fusion of multifrequency and multidirectional data. Validation using images of brain and liver simulation demonstrates the practical value of the proposed pipeline, which can estimate the biomechanical properties with minimum root-mean-square-errors compared with state-of-the-art methods. Applications of the proposed method for processing MRE images of phantom, brain, and liver show clear anatomical features and that the pipeline is robust to noise and has a good generalization capability.
Shengyuan Ma, Runke Wang, Suhao Qiu, Ruokun Li, Qi Yue, Qingfang Sun, Liang Chen, Fuhua Yan, Guang-Zhong Yang, Yuan Feng
2023-01-06T04:21:49Z
http://arxiv.org/abs/2301.02367v2
MR Elastography with Optimization-Based Phase Unwrapping and Traveling Wave Expansion-based Neural Network (TWENN) ###### Abstract Magnetic Resonance Elastography (MRE) can characterize biomechanical properties of soft tissue for disease diagnosis and treatment planning. However, complicated wavefields acquired from MRE coupled with noise pose challenges for accurate displacement extraction and modulus estimation. Using optimization-based displacement extraction and Traveling Wave Expansion-based Neural Network (TWENN) modulus estimation, we propose a new pipeline for processing MRE images. An objective function with Dual Data Consistency (Dual-DC) has been used to ensure accurate phase unwrapping and displacement extraction. For the estimation of complex wavenumbers, a complex-valued neural network with displacement covariance as an input has been developed. A model of traveling wave expansion is used to generate training datasets for the network with varying levels of noise. The complex shear modulus map is obtained through fusion of multifrequency and multidirectional data. Validation using brain and liver simulation images demonstrates the practical value of the proposed pipeline, which can estimate the biomechanical properties with minimal root-mean-square errors when compared to state-of-the-art methods. Applications of the proposed method for processing MRE images of phantom, brain, and liver reveal clear anatomical features, robustness to noise, and good generalizability of the pipeline. Magnetic resonance elastography, Modulus estimation, Neural network, Traveling waves, Phase unwrapping ## 1 Introduction Magnetic Resonance Elastography (MRE) can measure viscoelastic mechanical parameters of soft tissues noninvasively [1, 2]. In recent years, studies have investigated the diagnostic potential of MRE for liver cirrhosis [3], brain tumors [4], liver tumors [5], and Parkinson's diseases [6]. Post-processing of MRE images includes two major steps: 1) extraction of wavefield from the original wrapped phase; and 2) estimation of biomechanical properties based on the measured wavefield [7]. However, accurate displacement extraction from wrapped phase images with noise remains difficult [8]. In vivo wavefields of soft tissues with complicated anatomical structures usually contain deflections and reflections with acquisition noise. These bring challenges to the accurate estimation of the biomechanical properties of soft tissues [9]. As the first step of processing MRE images, the quality of the extracted wavefield determines the overall performance of the extracted biomechanical properties. In phase images, wave information is encoded with a motion encoding gradient, such that the magnitude of the motion can introduce a wrapping effect. Thus, phase unwrapping is necessary. Conventional approaches involve unwrapping a single image frame before performing Fast Fourier transform (FFT) in the time domain to extract the principal component [10]. In addition, spatiotemporal information can also be used for phase unwrapping [11]. Unwrapping algorithms such as Sorting by Reliability (SG) algorithm [12] and Dilate-Erode-Propagate (DE) [8] work in a search-like or discrete greedy update manner, which can fail in complicated wrapped scenarios [8]. Laplacian-based Estimation (LBE) simultaneously performs phase unwrapping and FFT in the frequency domain [13]. LBE is noise-resilient, but may introduce additional background offset noise. 
This can have a negative impact on the estimated biomechanical properties. The phase gradient method (PG) does not need direct phase unwrapping, but is noise-sensitive and requires specially designed modulus estimation algorithms [14]. Therefore, a noise-resistant, accurate method for wavefield extraction that can perform both phase unwrapping and extraction of the principal component is required. MRE is a shear-wave-based elastography that requires the estimation of complex wavenumbers from acquired wave images. Numerous methods have been proposed to date [9] and typical algorithms include Direct Inversion (DI) using the Helmholtz equation [15] and local frequency estimation (LFE) [16]. Due to the use of Laplace operators, DI is susceptible to noise and prone to edge artifacts. Iterative algorithms with prior information such as Multifrequency Elasticity Reconstruction using Structured Sparsity and ADMM (MERSA) [17] and MRE Inversion by Compressive Recovery (MICRo) [18] were proposed to use DI as an estimation kernel. But these methods had similar problems due to the differentiation operation. LFE is more noise-resistant, but its ability to recover anatomical features is limited. Enhanced Complex Local Frequency (EC-LFE) was developed to provide viscosity information, but structural resilience remained a problem [19]. To overcome the limitations of the DI-based method, Multifrequency Dual Elasto-Visco inversion (MDEV) was proposed to improve noise robustness using multi-frequency data averaging [20]. However, differentiation operation could still introduce structural edge artifacts, making it sensitive to noise. To improve the capability of distinguishing anatomical features, k-MDEV applied directional filter banks to suppress noise and new estimation kernels with a single-wave assumption [21]. However, the use of the Laplace operator may over-enhance the object boundary and the assumption of single-wave may be invalid when there are strong reflections. Elastography Software Pipeline (ESP) [22] used a series of filters for denoising and Gabor wavelets for inversion. ESP could recover refined anatomical details but required many parameter-tuning steps. Non-Linear Inversion (NLI) algorithms based on Finite Element (FE) can provide good inversion [9, 23, 24]. However, the computation cost was high and the boundary condition settings could greatly influence the results [25]. Notably, by implementing a parallelized, subzone-based domain decomposition approach, the NLI algorithm proposed by the Dartmouth group avoids undue bias from the applied boundary conditions and high computational cost [26]. The method has been used to establish benchmark values for viscoelastic properties of the human brain [27]. Recently, data-driven algorithms using Deep Learning (DL) were proposed using the training set from FE simulation [28, 29]. These methods also relied on specific FE models for specific application scenarios, hindering their generalization performance. Traveling Wave Expansion (TWE) was first introduced to solve the inversion problem by using large fitting windows [30]. However, the inverse of the ill-conditioned matrix limited the spatial resolution that can be achieved. Therefore, a noise-resistant method with low-computational-cost, clearly delineated anatomical features and strong generalizability is required. ### Paper Contribution In this study, we propose an optimization-based phase unwrapping method with Dual Data Consistency (Dual-DC) to simultaneously perform phase unwrapping and FFT. 
For the inversion of complex shear modulus, a TWE-based Neural Network (TWENN) algorithm is proposed. Detailed simulation, phantom, and human studies were performed to demonstrate the pipeline's accuracy and reliability. ### Paper Organization The remaining sections are organized as follows: In Section II, the proposed pipeline including Dual-DC and TWENN is presented. In Section III, the test datasets, evaluation metrics, and implementation details of the proposed pipeline and comparative algorithms are introduced. Section IV presents detailed results, followed by Section V, which provides a thorough analysis of the results. ## II Methods The proposed pipeline for processing MRE images includes an optimization-based phase unwrapping method with Dual-DC and a Traveling Wave Expansion-based Neural Network (TWENN) modulus estimation to estimate the complex shear modulus (Fig. 1). ### Optimization-based Phase Unwrapping with Dual Data Consistency The displacement extraction from phase images acquired in MRE generally consists of two steps: phase unwrapping and computation of the principal components using FFT [7]. Here, an optimization-based method is proposed to perform these two tasks simultaneously. #### II-A1 Problem Formulation Let \(\mathbf{U}^{*}=\mathbf{U}^{\prime}+i\cdot\mathbf{U}^{\prime\prime}\) be the complex displacement field oscillating at a frequency \(\omega\). During image acquisition, at a specific time point \(t_{j}\;(j=1,\ldots,J)\) within the motion cycle where the phase offset is \(\varphi_{j}=\omega t_{j}\), the image phase recorded by the motion encoding gradient is \(\mathbf{\phi}_{j}\): \[\mathbf{\phi}_{j}=\mathrm{Re}\big{(}\mathbf{U}^{*}\cdot\mathrm{e}^{i\varphi_{j}}\big{)}=\mathbf{U}^{\prime}\cdot\cos\big{(}\varphi_{j}\big{)}-\mathbf{U}^{\prime\prime}\cdot\sin\big{(}\varphi_{j}\big{)}, \tag{1}\] Therefore, the image \(\mathbf{I}_{j}\) acquired at \(t_{j}\) is \[\mathbf{I}_{j}=|\mathbf{A}|\cdot\mathrm{e}^{i\mathbf{\Phi}}\cdot\mathrm{e}^{i\mathbf{\phi}_{j}}, \tag{2}\] where \(|\mathbf{A}|\) is the magnitude of the image and \(\mathbf{\Phi}\) is the background phase of \(\mathbf{I}_{j}\). The purpose of displacement extraction is to obtain \(\mathbf{U}^{*}\) from \(\mathbf{I}_{j}\;(j=1,\ldots,J)\), which involves two tasks: computation of the principal components and phase unwrapping. Two objective functions are proposed to solve them, respectively. #### II-A2 Objective Function for Principal Components Cross differences among the acquired images are used to remove the background phase. For images acquired at two temporal points \(t_{p}\) and \(t_{q}\), \((p,q)\in\{(x,y)\,|\,x<y,\;1\leq x,y\leq J\}\): \[\frac{\mathbf{I}_{p}}{\mathbf{I}_{q}}=e^{i\left(\mathbf{\phi}_{p}-\mathbf{\phi}_{q}\right)}=e^{i\left(\mathbf{U}^{\prime}\cdot\left[\cos\left(\varphi_{p}\right)-\cos\left(\varphi_{q}\right)\right]+\mathbf{U}^{\prime\prime}\cdot\left[-\sin\left(\varphi_{p}\right)+\sin\left(\varphi_{q}\right)\right]\right)} \tag{3}\] Define the constant coefficients \(E_{pq}\) and \(F_{pq}\): \[\begin{split} E_{pq}&=\cos\big{(}\varphi_{p}\big{)}-\cos\big{(}\varphi_{q}\big{)}\\ F_{pq}&=-\sin\big{(}\varphi_{p}\big{)}+\sin\big{(}\varphi_{q}\big{)}\end{split} \tag{4}\] The relationship between the unknown \(\mathbf{U}^{*}\) and the known \(\mathbf{I}_{j}\) is thereby established, so that the principal components can be obtained.
Thus, the first data consistency (DC) term on the phase can be written as: \[\mathcal{O}_{DC1}=\sum_{p,q}\left\|\frac{\mathbf{I}_{p}}{\mathbf{I}_{q}}-e^{i\left(\mathbf{U}^{\prime}\cdot E_{pq}+\mathbf{U}^{\prime\prime}\cdot F_{pq}\right)}\right\|_{2} \tag{5}\] #### II-A3 Objective Function for Unwrapping With Eq. (5) only, \(\mathbf{U}^{*}\) may still be wrapped. The absolute value of the wrapped phase gradient is much higher than that after unwrapping. Using the chain rule, the gradient of the phase after unwrapping (the true phase gradient) can be obtained directly from the wrapped phase [7]. Thus, data consistency is enforced between the true phase gradient and the gradient of \(\mathbf{U}^{*}\). If the background phase is ignored, \(\frac{\mathbf{I}_{j}}{|\mathbf{I}_{j}|}\approx e^{i\mathbf{\phi}_{j}}\), and by using the chain rule the phase gradient can be calculated from the wrapped phase: \[\frac{\partial e^{i\mathbf{\phi}_{j}}}{\partial x}=i\cdot e^{i\mathbf{\phi}_{j}}\cdot\frac{\partial\left(\mathbf{U}^{\prime}\cdot\cos\big{(}\varphi_{j}\big{)}-\mathbf{U}^{\prime\prime}\cdot\sin\big{(}\varphi_{j}\big{)}\right)}{\partial x}, \tag{6}\] \[\cos\big{(}\varphi_{j}\big{)}\frac{\partial\mathbf{U}^{\prime}}{\partial x}-\sin\big{(}\varphi_{j}\big{)}\frac{\partial\mathbf{U}^{\prime\prime}}{\partial x}=-\frac{i}{e^{i\mathbf{\phi}_{j}}}\frac{\partial e^{i\mathbf{\phi}_{j}}}{\partial x}.\] In matrix form, we have \[\begin{bmatrix}\cos\big{(}\varphi_{1}\big{)}&-\sin\big{(}\varphi_{1}\big{)}\\ \vdots&\vdots\\ \cos\big{(}\varphi_{J}\big{)}&-\sin\big{(}\varphi_{J}\big{)}\end{bmatrix}\begin{bmatrix}\frac{\partial\mathbf{U}^{\prime}}{\partial x}\\ \frac{\partial\mathbf{U}^{\prime\prime}}{\partial x}\end{bmatrix}=\begin{bmatrix}-\frac{i}{e^{i\mathbf{\phi}_{1}}}\frac{\partial e^{i\mathbf{\phi}_{1}}}{\partial x}\\ \vdots\\ -\frac{i}{e^{i\mathbf{\phi}_{J}}}\frac{\partial e^{i\mathbf{\phi}_{J}}}{\partial x}\end{bmatrix} \tag{7}\] Let \(\mathbf{H}=\begin{bmatrix}\cos\big{(}\varphi_{1}\big{)}&-\sin\big{(}\varphi_{1}\big{)}\\ \vdots&\vdots\\ \cos\big{(}\varphi_{J}\big{)}&-\sin\big{(}\varphi_{J}\big{)}\end{bmatrix}\). Solving Eq. (7) in the least-squares sense yields the estimated true phase gradients \(\widetilde{\mathbf{U}}^{\prime}_{x}\) and \(\widetilde{\mathbf{U}}^{\prime\prime}_{x}\) (and, analogously, \(\widetilde{\mathbf{U}}^{\prime}_{y}\) and \(\widetilde{\mathbf{U}}^{\prime\prime}_{y}\) for the \(y\) direction): \[\begin{bmatrix}\widetilde{\mathbf{U}}^{\prime}_{x}\\ \widetilde{\mathbf{U}}^{\prime\prime}_{x}\end{bmatrix}=\left(\mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}\begin{bmatrix}-\frac{i}{e^{i\mathbf{\phi}_{1}}}\frac{\partial e^{i\mathbf{\phi}_{1}}}{\partial x}\\ \vdots\\ -\frac{i}{e^{i\mathbf{\phi}_{J}}}\frac{\partial e^{i\mathbf{\phi}_{J}}}{\partial x}\end{bmatrix} \tag{8}\] Thus, the second data consistency (DC) term on the phase gradient can be defined as \[\begin{split}\mathcal{O}_{DC2}&=\left\|\frac{\partial\mathbf{U}^{\prime}}{\partial x}-\widetilde{\mathbf{U}}^{\prime}_{x}\right\|_{2}+\left\|\frac{\partial\mathbf{U}^{\prime}}{\partial y}-\widetilde{\mathbf{U}}^{\prime}_{y}\right\|_{2}\\ &+\left\|\frac{\partial\mathbf{U}^{\prime\prime}}{\partial x}-\widetilde{\mathbf{U}}^{\prime\prime}_{x}\right\|_{2}+\left\|\frac{\partial\mathbf{U}^{\prime\prime}}{\partial y}-\widetilde{\mathbf{U}}^{\prime\prime}_{y}\right\|_{2}.\end{split} \tag{9}\] #### II-A4 Optimization Using Dual-DC The displacement extraction problem in MRE, including FFT and unwrapping, can be treated as the optimization problem (10) obtained by combining (5) and (9) into a single objective function. Here, the adaptive momentum (ADAM) algorithm is used to solve this optimization problem.
\[\min_{\mathbf{U}^{*}}\left\{\sum_{p,q}\left\|\frac{\mathbf{I}_{p}}{\mathbf{I}_{q}}-e^{i\left(\mathbf{U}^{\prime}\cdot E_{pq}+\mathbf{U}^{\prime\prime}\cdot F_{pq}\right)}\right\|_{2}+\lambda\left(\left\|\frac{\partial\mathbf{U}^{\prime}}{\partial x}-\widetilde{\mathbf{U}}^{\prime}_{x}\right\|_{2}+\left\|\frac{\partial\mathbf{U}^{\prime}}{\partial y}-\widetilde{\mathbf{U}}^{\prime}_{y}\right\|_{2}+\left\|\frac{\partial\mathbf{U}^{\prime\prime}}{\partial x}-\widetilde{\mathbf{U}}^{\prime\prime}_{x}\right\|_{2}+\left\|\frac{\partial\mathbf{U}^{\prime\prime}}{\partial y}-\widetilde{\mathbf{U}}^{\prime\prime}_{y}\right\|_{2}\right)\right\} \tag{10}\] where \(\lambda\) is a weighting parameter. **Input**: Measured complex MR images \(\mathbf{I}_{j}\) and the phase offset \(\varphi_{j}\) of each image. **Output**: Main displacement components \(\mathbf{U}^{*}=\mathbf{U}^{\prime}+i\cdot\mathbf{U}^{\prime\prime}\) 1. Initialize the \(\mathbf{U}^{*}\) to be updated, set the weighting parameter \(\lambda\) and the maximum number of iterations \(maxIter\), and set the iteration counter \(L=0\). 2. Calculate the cross phase differences \(\frac{\mathbf{I}_{p}}{\mathbf{I}_{q}}\). (3) 3. Calculate the coefficients \(E_{pq}\) and \(F_{pq}\) using \(\varphi_{j}\). (4) 4. Calculate the phase gradients \(\widetilde{\mathbf{U}}^{\prime}_{x}\), \(\widetilde{\mathbf{U}}^{\prime}_{y}\), \(\widetilde{\mathbf{U}}^{\prime\prime}_{x}\), \(\widetilde{\mathbf{U}}^{\prime\prime}_{y}\) from the MR images \(\mathbf{I}_{j}\). (8) 5. **repeat** 6. Update \(\mathcal{O}_{DC1}\). (5) 7. Update the gradients of \(\mathbf{U}^{*}\): \(\frac{\partial\mathbf{U}^{\prime}}{\partial x}\), \(\frac{\partial\mathbf{U}^{\prime}}{\partial y}\), \(\frac{\partial\mathbf{U}^{\prime\prime}}{\partial x}\), \(\frac{\partial\mathbf{U}^{\prime\prime}}{\partial y}\). 8. Update \(\mathcal{O}_{DC2}\). (9) 9. Update \(\mathbf{U}^{*}\) using the ADAM algorithm. (10) 10. \(L=L+1\) 11. **until** \(L\geq maxIter\) 12. **return** \(\mathbf{U}^{*}=\mathbf{U}^{\prime}+i\cdot\mathbf{U}^{\prime\prime}\)

Fig. 1: A flow chart of the pipeline of MRE image processing, covering displacement extraction, training data generation, network structure, and multi-data fusion. The complex displacement field is extracted from phase images by optimizing an objective function with Dual-DC. Training data are generated by the TWE model for network training. The normalized complex wavenumber is estimated by TWENN. The complex shear modulus is obtained by combining multi-frequency and multi-directional data.
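The following is a minimal 1D sketch (synthetic data, not the authors' implementation) of the Dual-DC optimization of Eq. (10): \(\mathbf{U}'\) and \(\mathbf{U}''\) are updated with ADAM so that the cross phase differences (DC1) and the phase gradients (DC2) both become consistent with the measured images. Finite differences stand in for the spatial derivatives, and the magnitude and background phase are set to trivial values since they cancel in the cross differences.

```python
import torch

J, N = 4, 128
phi_off = torch.arange(J) * (2 * torch.pi / J)          # phase offsets φ_j
x = torch.linspace(0, 10, N)
U_true = 2.0 * torch.cos(2 * x) + 1j * 2.0 * torch.sin(2 * x)   # ground-truth U*
phases = torch.stack([U_true.real * torch.cos(p) - U_true.imag * torch.sin(p)
                      for p in phi_off])                # ϕ_j, Eq. (1)
I = torch.exp(1j * phases)                              # images with |A| = 1, Φ = 0

# Eq. (8): least-squares estimate of the true phase gradients from wrapped data
H = torch.stack([torch.cos(phi_off), -torch.sin(phi_off)], dim=1)   # (J, 2)
G = (-1j * torch.diff(I, dim=1) / I[:, :-1]).real                   # ≈ ∂ϕ_j/∂x
Ut = torch.linalg.lstsq(H, G).solution                  # rows: Ũ'_x and Ũ''_x

Up = torch.zeros(N, requires_grad=True)                 # U' to optimize
Us = torch.zeros(N, requires_grad=True)                 # U''
opt = torch.optim.Adam([Up, Us], lr=0.05)
lam = 1.0                                               # weighting parameter λ
for _ in range(500):
    opt.zero_grad()
    dc1 = 0.0
    for p in range(J):                                  # O_DC1, Eq. (5)
        for q in range(p + 1, J):
            E = torch.cos(phi_off[p]) - torch.cos(phi_off[q])       # Eq. (4)
            Fc = -torch.sin(phi_off[p]) + torch.sin(phi_off[q])
            dc1 = dc1 + (I[p] / I[q]
                         - torch.exp(1j * (Up * E + Us * Fc))).abs().pow(2).mean()
    dc2 = (((torch.diff(Up) - Ut[0]) ** 2).mean()       # O_DC2, Eq. (9)
           + ((torch.diff(Us) - Ut[1]) ** 2).mean())
    (dc1 + lam * dc2).backward()                        # Eq. (10)
    opt.step()
```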
### _TWENN: Traveling Wave Expansion-based Neural Network Modulus Estimation_

The TWE model was used to generate noisy training data. These data were then used to construct and train a complex-valued covariance neural network that learns the mapping from wavefield to wavenumber; finally, multi-frequency multi-directional fusion was used to obtain the complex shear modulus.

Fig. 1: A flow chart of the pipeline of MRE image processing for displacement extraction, training data generation, structure of the network, and multi-data fusion. The complex displacement field was extracted from phase images by optimizing an objective function with dual-DC. Training data were generated by the TWE model for network training. The normalized complex wavenumber was estimated by TWENN. The complex shear modulus was obtained by combining multi-frequency and multi-direction data.

#### II-B1 Traveling Wave Expansion

For a homogeneous, incompressible, isotropic, and viscoelastic material, at spatial location \(\mathbf{r}\), the complex-valued displacement \(u(\mathbf{r})\) is expressed as the superposition of \(M\) traveling waves [30]:

\[u(\mathbf{r})=\sum\nolimits_{m=1}^{M}w_{m}=\sum\nolimits_{m=1}^{M}a_{m}\cdot e^{i\cdot\left(k^{\prime}+i\cdot k^{\prime\prime}\right)\cdot\widehat{\mathbf{n}_{m}}\cdot\mathbf{r}} \tag{11}\]

where \(w_{m}\) is the \(m\)th traveling wave, \(a_{m}\) is the complex-valued amplitude of the displacement, \(M\) is the number of traveling waves, \(\widehat{\mathbf{n}_{m}}\) is the unit vector of the propagation direction of the \(m\)th traveling wave, \(k^{*}=k^{\prime}+i\cdot k^{\prime\prime}\) is the local complex wavenumber, \(k^{\prime}\) is the real wavenumber related to material elasticity, and \(k^{\prime\prime}\) is the imaginary wavenumber related to material viscosity. The complex wavenumber can be normalized by the vibration frequency \(\omega\):

\[\overline{k^{*}}=\frac{k^{*}}{\omega}=\frac{k^{\prime}}{\omega}+i\cdot\frac{k^{\prime\prime}}{\omega}=\overline{k^{\prime}}+i\cdot\overline{k^{\prime\prime}}. \tag{12}\]

Thus, the aim of the inversion is to find the operator \(\mathcal{F}_{\theta}\colon u\to\overline{k^{*}}\).

#### II-B2 Training Data Generation

To construct a neural network capable of realizing the mapping operator \(\mathcal{F}_{\theta}\), a training data set was prepared using the TWE model. The training set was created by varying the number of traveling waves \(M\), the propagation directions \(\widehat{\mathbf{n}_{m}}\), the complex amplitudes \(a_{m}\), and the complex wavenumber \(k^{*}\). In this study, an isotropic patch of 3\(\times\)3 (2D) or 3\(\times\)3\(\times\)3 (3D) was used for the inversion of \(k^{*}\) at each location. The relatively small patch is used to increase the spatial resolution. To obtain an \(\mathcal{F}_{\theta}\) with better noise robustness, noise \(\zeta(\mathbf{r})\) was added to the training set. Suppose \(\mathbb{C}(\mathbf{r})\) is normalized, complex-valued Gaussian noise and \(b\) is the intensity of the noise; then \(\zeta(\mathbf{r})=b\cdot\mathbb{C}(\mathbf{r})\). The details of the dataset generation rules are given in the Appendix. The training data \(d(\mathbf{r})\) at location \(\mathbf{r}\) can be written as:

\[d(\mathbf{r})=u(\mathbf{r})+\zeta(\mathbf{r})=\sum\nolimits_{m=1}^{M}a_{m}\cdot e^{i\cdot\left(k^{\prime}+i\cdot k^{\prime\prime}\right)\cdot\widehat{\mathbf{n}_{m}}\cdot\mathbf{r}}+b\cdot\mathbb{C}(\mathbf{r}) \tag{13}\]

For 2D cases, the training data \(\mathbf{d}\) can be written as a 2D tensor as follows:

\[\mathbf{d}=\begin{bmatrix}d\left(r_{-1,-1}\right)&d\left(r_{0,-1}\right)&d\left(r_{1,-1}\right)\\ d\left(r_{-1,0}\right)&d\left(r_{0,0}\right)&d\left(r_{1,0}\right)\\ d\left(r_{-1,1}\right)&d\left(r_{0,1}\right)&d\left(r_{1,1}\right)\end{bmatrix} \tag{14}\]

where \(r_{x,y}\) is a vector pointing from the center of the patch to the target pixel position \((x,y)\). Likewise, for 3D cases, \(\mathbf{d}\) will be a three-dimensional tensor.
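As an illustration of Eq. (13), the sketch below generates one noisy 2D training patch. The function name is hypothetical; the distribution over \(M\) is simplified to a uniform choice rather than the ratio scheme of the Appendix, and the wavenumber ranges mirror those of the test set in Section III-B1:

```python
import numpy as np

def twe_patch(freq_hz, res_m, b=0.001, max_waves=8, rng=np.random.default_rng()):
    """Sketch of one 3x3 (2D) TWE training patch, Eq. (13).

    Returns the complex patch d and its normalized wavenumber label
    (k'/omega, k''/omega). Ranges are illustrative assumptions.
    """
    omega = 2 * np.pi * freq_hz
    kr = rng.uniform(0.35, 1.35) * omega      # real wavenumber k'
    ki = rng.uniform(0.00, 0.28) * omega      # imaginary wavenumber k''
    M = rng.integers(1, max_waves + 1)        # simplified choice of M
    # Pixel-center coordinates of the 3x3 patch (meters).
    xs = np.arange(-1, 2) * res_m
    X, Y = np.meshgrid(xs, xs, indexing='xy')
    d = np.zeros((3, 3), dtype=complex)
    for _ in range(M):
        a = rng.uniform(0, 1) * np.exp(1j * rng.uniform(0, 2 * np.pi))
        theta = rng.uniform(0, 2 * np.pi)     # 2D propagation direction
        n_dot_r = np.cos(theta) * X + np.sin(theta) * Y
        d += a * np.exp(1j * (kr + 1j * ki) * n_dot_r)
    # Additive normalized complex Gaussian noise with intensity b.
    noise = (rng.standard_normal((3, 3))
             + 1j * rng.standard_normal((3, 3))) / np.sqrt(2)
    return d + b * noise, (kr / omega, ki / omega)
```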
#### II-B3 Covariance Complex Neural Network

Studies have shown that complex-valued neural networks are more suitable for solving complex-valued problems [31]. Therefore, a complex-valued neural network was constructed to process these complex-valued wavefields. In a cascade structure, TWENN has a 9-layer fully connected complex-valued neural network, with the number of neurons in each layer being 60, 50, 40, 30, 20, 15, 10, 6, and 1, respectively. No extra layers were used. If the weight matrix of a linear layer is \(\mathrm{W}=\mathrm{A}+i\mathrm{B}\) and the complex input is \(\mathrm{h}=\alpha+i\beta\), the output is \(\mathrm{Wh}=\mathrm{A}\alpha-\mathrm{B}\beta+i\left(\mathrm{A}\beta+\mathrm{B}\alpha\right)\). To preserve the phase angle, and to keep the magnitude within a certain range, a new complex activation function named _modSigmoid_ is designed [32]:

\[\mathrm{modSigmoid}(\mathrm{z})=\mathrm{Sigmoid}(|\mathrm{z}|+\mathrm{a})\,\mathrm{e}^{i\theta_{\mathrm{z}}}, \tag{15}\]

where \(\mathrm{z}\) is the complex input data, \(\mathrm{a}\) is a real bias to be learned, \(\theta_{\mathrm{z}}\) is the angle of \(\mathrm{z}\), and \(\mathrm{Sigmoid}(\mathrm{x})=\frac{1}{1+e^{-\mathrm{x}}}\). Because the covariance can better expose the information within the wavefield [33], a covariance term \(\mathbf{D}=\mathrm{vec}(\mathbf{d})\mathrm{vec}(\mathbf{d})^{H}\) is used as the model input, where \(\mathrm{vec}(\cdot)\) is an operator that transforms a tensor into a column vector. \(\overline{k^{\prime}}\) and \(\overline{k^{\prime\prime}}\) are estimated separately by two parallel multi-layer fully connected networks (Fig. 1). For each network, the mean square error is used as the loss function for optimization. In this study, the networks were trained using the ADAM algorithm. \(\mathcal{F}_{\theta}(\omega)\) was trained and obtained at different frequencies \(\omega\).

#### II-B4 Multi-frequency and Multi-direction Fusion

For the wavefields \(\mathbf{U}^{*}_{\omega_{m},d_{n}}\) obtained from MRE at different frequencies \(\omega_{m}\,(m=1,...,M)\) and in different directions \(d_{n}\,(n=1,...,N)\), the corresponding normalized complex wavenumber \(\overline{k^{*}}(\omega_{m},d_{n})\) is obtained using the trained network \(\mathcal{F}_{\theta}(\omega_{m})\). For the complex shear modulus \(G^{*}=G^{\prime}+iG^{\prime\prime}\), using the correspondence principle [17],

\[G^{*}=\frac{\rho\,\omega^{2}}{\left(k^{\prime}-i\,k^{\prime\prime}\right)^{2}}=\frac{\rho}{\left(\overline{k^{\prime}}-i\,\overline{k^{\prime\prime}}\right)^{2}}. \tag{16}\]

Thus, the storage modulus \(G^{\prime}\) and loss modulus \(G^{\prime\prime}\) can be estimated from the average of \(\overline{k^{*}}(\omega_{m},d_{n})\):

\[G^{\prime}+iG^{\prime\prime}=\frac{\rho}{\left(\frac{\sum_{m,n}\left(\overline{k^{\prime}}-i\,\overline{k^{\prime\prime}}\right)(\omega_{m},d_{n})}{MN}\right)^{2}}. \tag{17}\]

This method applies no pre-filtering to remove noise; only the final modulus map is filtered by a 3\(\times\)3 median filter.

**Input**: Wavefields of multi-frequency and multi-direction \(\mathbf{U}^{*}_{\omega_{m},d_{n}}\), information on spatial resolution and vibration frequencies \(\omega_{m}\,(m=1,...,M)\)
**Output**: Complex shear modulus \(G^{*}=G^{\prime}+iG^{\prime\prime}\)
1. Set the patch size and the intensity of the noise \(b\).
2. Generate training data based on spatial resolution and vibration frequencies using the traveling wave expansion model. (13)
3. Construct the covariance complex neural network \(\mathcal{F}_{\theta}\). (14, 15)
4. Train \(\mathcal{F}_{\theta}(\omega)\) using ADAM.
5. Calculate \(\overline{k^{*}}(\omega_{m},d_{n})\) using \(\mathcal{F}_{\theta}\).
6. Estimate \(G^{*}=G^{\prime}+iG^{\prime\prime}\) using multi-frequency multi-directional data fusion. (17)
7. **return** \(G^{*}=G^{\prime}+iG^{\prime\prime}\)
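A minimal PyTorch sketch of the building blocks of Section II-B3 — the complex linear layer and the modSigmoid activation of Eq. (15) — could look as follows. The input width of 81 assumes the vectorized 9\(\times\)9 covariance \(\mathbf{D}\) of a 3\(\times\)3 patch, and the final readout (e.g., taking the magnitude or real part of the last complex neuron) is an assumption not spelled out in the text:

```python
import torch
import torch.nn as nn

class ModSigmoid(nn.Module):
    """modSigmoid activation, Eq. (15): the magnitude is squashed by a
    sigmoid with a learned real bias a, while the phase is preserved."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(1))  # learned real bias

    def forward(self, z):  # z: complex tensor
        return torch.sigmoid(z.abs() + self.a) * torch.exp(1j * z.angle())

class ComplexLinear(nn.Module):
    """Complex linear layer: with W = A + iB and h = alpha + i*beta,
    Wh = (A alpha - B beta) + i (A beta + B alpha)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.A = nn.Linear(in_features, out_features, bias=False)
        self.B = nn.Linear(in_features, out_features, bias=False)

    def forward(self, h):
        return ((self.A(h.real) - self.B(h.imag))
                + 1j * (self.A(h.imag) + self.B(h.real)))

# One of the two parallel branches; layer widths from Section II-B3.
widths = [81, 60, 50, 40, 30, 20, 15, 10, 6, 1]
layers = []
for i in range(len(widths) - 1):
    layers += [ComplexLinear(widths[i], widths[i + 1]), ModSigmoid()]
branch = nn.Sequential(*layers)
```

Two such branches are trained independently (with MSE loss and ADAM), one for \(\overline{k^{\prime}}\) and one for \(\overline{k^{\prime\prime}}\).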
## III Validation and Verification

Wrapped phase images from the simulated liver dataset and an in vivo brain MRE experiment were used to validate the proposed phase unwrapping method with Dual-DC. Modulus estimation with TWENN was validated using simulated phantom, brain, and liver wavefields. The proposed pipeline combining Dual-DC and TWENN was validated using phantom, brain, and liver MRE images. A summary of the data sets and methods used is shown in TABLE I. Results are also compared with those from state-of-the-art methods.

### Validation of Dual-DC phase unwrapping

#### III-A1 Simulated Liver Dataset

The artificially wrapped phase was generated using 2D wavefield data from the publicly available simulated liver images ([https://bioqic-apps.charite.de/](https://bioqic-apps.charite.de/)). The displacements were normalized with a maximum phase of 4\(\pi\). Phases at 4 different temporal points within one cycle were generated. Complex Gaussian noise with different intensities was added to the complex images to evaluate the robustness of the algorithm.

#### III-A2 In vivo 3D Brain Dataset

The in vivo brain MRE images were acquired using a 3T scanner (uMR790, United Imaging Healthcare, Shanghai, China) with TR/TE = 4000/65 ms, MEG = 40 mT/m, resolution = 3 mm \(\times\) 3 mm \(\times\) 3 mm, and 8 phase offsets [34]. The study protocol was reviewed and approved by the Institutional Review Board of Shanghai Jiao Tong University.

#### III-A3 Algorithm Comparison

The performance of the phase unwrapping method is compared with that of the SG algorithm [12] and LBE [13, 35], which are commonly used in postprocessing MRE images [8]. Simulation results are compared with the ground truth (GT), and mean errors are calculated for validation. The interlayer continuity of the unwrapped phase is compared for the brain MRE images.

### Validation of Modulus Estimation using TWENN

#### III-B1 Test Data Set

The test data set contained 10\({}^{7}\) examples for model evaluation. Wavefields with different signal-to-noise ratios (SNRs) and complex wavenumbers were generated using the TWE model, with a patch size of 3\(\times\)3 and a resolution of 3 mm \(\times\) 3 mm at 60 Hz. SNRs were distributed uniformly from 12 dB to 38 dB, real normalized wavenumbers were uniformly distributed from 0.35 s/m to 1.35 s/m, and imaginary normalized wavenumbers were uniformly distributed between 0 s/m and 0.28 s/m.
The estimation of \(\overline{k^{\prime}}\) and \(\overline{k^{\prime\prime}}\) at different SNRs and complex wavenumbers was compared with that from conventional DI (fitting window = 3\(\times\)3).

TABLE I: A summary of data sets and methods used for comparison. 2D: 2D modulus inversion; 3D: 3D modulus inversion; BIOQIC: datasets or MRE processing pipelines were from [https://bioqic-apps.charite.de](https://bioqic-apps.charite.de); COMSOL: data were simulated using COMSOL (COMSOL AB, Stockholm, Sweden); 1.5T/3T scanner: MRE images were acquired from a 1.5T/3T scanner. Image resolution, vibration frequency, and the motion encoding direction (RO: readout, PE: phase encoding, SS: slice selection) are provided in the details of data.

| Application | Data Set | Details of data | Data Source | Method Proposed | Methods for Comparison | Evaluation Metrics |
| --- | --- | --- | --- | --- | --- | --- |
| Phase unwrapping | Simulated liver | 2mm×2mm; 42Hz; RO | BIOQIC | Dual-DC | SG, LBE | Mean error |
| | Brain MRE | 3mm×3mm×3mm; 50Hz; RO | 3T scanner | Dual-DC | SG, LBE | Interlayer continuity |
| Modulus estimation | Test data | 3mm×3mm; 60Hz; SS | TWE model | TWENN (2D) | DI (2D) | Mean error |
| | Simulated phantom | 1.5mm×1.5mm; 60, 80, 100Hz; SS | COMSOL | TWENN | k-MDEV, MDEV | RMSE |
| | Simulated liver | 2mm×2mm×2mm; 30, 36, 42, 48Hz; RO, PE, SS | BIOQIC | TWENN (3D) | k-MDEV, MDEV | RMSE |
| | Simulated brain | 1.5mm×1.5mm×1.5mm; 24, 28, 32, 36, 40, 44, 48, 52, 56, 60Hz; RO, PE, SS | BIOQIC | TWENN (3D) | k-MDEV, MDEV | RMSE |
| Pipeline | Phantom MRE | 2mm×2mm; 20, 25, 30, 35, 40, 45Hz; RO, PE, SS | BIOQIC | Dual-DC + TWENN | LBE + k-MDEV, FG + MDEV | Match with GT |
| | Normal brain | RO, PE, SS | BIOQIC | Dual-DC + TWENN | LBE + k-MDEV, FG + MDEV | Match with structure |
| | Normal liver, hepatic siderosis, liver tumor | 2.7mm×2.7mm; 30, 40, 50, 60Hz | 1.5T scanner | Dual-DC + TWENN | LBE + k-MDEV, FG + MDEV | Regional mean, CNR |
| | Brain tumor | 3mm×3mm×3mm; 30, 40, 50Hz; RO, PE, SS | 3T scanner | Dual-DC + TWENN | LBE + k-MDEV, FG + MDEV | CNR |

#### III-B2 Simulated Dataset

The simulated phantom, built using COMSOL (COMSOL AB, Stockholm, Sweden), had two circular inclusions of 10 mm radius with complex shear moduli of 4 + i0.48 and 6 + i0.84 kPa. The complex shear modulus of the background was 2 + i0.2 kPa. These values served as GT. Gaussian noise at an SNR of 28 dB was added to the wave images. Publicly available MRE data for simulations of the liver and brain ([https://bioqic-apps.charite.de/](https://bioqic-apps.charite.de/)) were used for evaluation. Both were 3D FE simulation models based on COMSOL. Model geometries were constructed from segmented images of a healthy human.
Linear viscoelastic models were used, and the tissue properties were assigned based on previously reported values [36, 37]. The data sets contained 3D multi-frequency and multi-direction wavefields with known \(G^{\prime}\) (GT). The liver dataset had a resolution of 2 mm \(\times\) 2 mm \(\times\) 2 mm with 4 frequency components (30, 36, 42, 48 Hz). The brain data set had a resolution of 1.5 mm \(\times\) 1.5 mm \(\times\) 1.5 mm and 10 frequency components (24, 28, 32, 36, 40, 44, 48, 52, 56, 60 Hz). The root mean squared error (RMSE) of the estimated modulus \(G^{\prime}_{est}\) and the recovered structures were compared with k-MDEV and MDEV [38]. RMSE is defined as:

\[\text{RMSE}=\sqrt{\frac{\left\|G^{\prime}_{est}-G^{\prime}_{GT}\right\|_{2}^{2}}{N}} \tag{18}\]

### Validation for the Proposed Pipeline

#### III-C1 Comparison with Other Pipelines

Both phantom and human MRE experiment images were used to evaluate the performance of the proposed pipeline for MRE image processing. The proposed pipeline cascades the Dual-DC phase unwrapping method and TWENN for modulus estimation. Comparisons were made with two state-of-the-art multi-frequency MRE processing pipelines available on a public platform [38]: LBE + k-MDEV and FG + MDEV.

#### III-C2 Phantom MRE

For phantom validation, the multi-frequency phantom dataset containing four cylindrical inclusions was used [20]. The moduli estimated by each method were averaged in the z-direction. Regional mean values and the ground truth of \(G^{\prime}\) and \(G^{\prime\prime}\) were compared.

#### III-C3 Brain MRE

Brain MRE images of a patient with a meningioma were acquired using a 3T scanner (uMR790, United Imaging Healthcare, Shanghai, China) using an electromagnetic actuator [34]. MRE was implemented using a single-shot spin-echo echo-planar imaging sequence (TR/TE = 4000/65 ms) with a motion-encoding gradient of 40 mT/m and 8 phase offsets. The contrast-to-noise ratio (CNR) of \(G^{\prime}\) with respect to the tumor region was used for evaluation:

\[\text{CNR}=\frac{2\left(\overline{G^{\prime}_{bkg}}-\overline{G^{\prime}_{tumor}}\right)^{2}}{\sigma_{bkg}^{2}+\sigma_{tumor}^{2}}, \tag{19}\]

where \(\overline{G^{\prime}_{bkg}}\) and \(\overline{G^{\prime}_{tumor}}\) are the mean values of the shear modulus, and \(\sigma_{bkg}\) and \(\sigma_{tumor}\) are the standard deviations of the shear modulus, for background tissue and tumor tissue, respectively. The tumor region was delineated from the T1 structural image, and the background neighborhood was delineated as the region within one tumor radius outside of the tumor. A public data set of normal brain MRE images ([https://bioqic-apps.charite.de/](https://bioqic-apps.charite.de/)) was also used for evaluation by comparing the estimated modulus map and the structural image.

#### III-C4 Liver MRE

Liver MRE images from a healthy volunteer, a patient with a liver tumor, and a patient with hepatic siderosis & cirrhosis were acquired using a 1.5T pneumatic MRE system [39] (Magnetom Aera, Siemens, Erlangen, Germany, BIOQIC). The CNR of \(G^{\prime}\) was used to evaluate the modulus estimation for the liver tumor data. For the normal liver data and the iron-deposited cirrhotic data, the mean values of \(G^{\prime}\) of the circled liver tissue were compared.
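The two evaluation metrics of Eqs. (18) and (19) are straightforward to compute; a minimal sketch with hypothetical helper names is:

```python
import numpy as np

def rmse(G_est, G_gt):
    """RMSE of the estimated storage modulus, Eq. (18)."""
    diff = np.asarray(G_est) - np.asarray(G_gt)
    return np.sqrt(np.sum(diff ** 2) / diff.size)

def cnr(G, tumor_mask, bkg_mask):
    """CNR of G' with respect to the tumor region, Eq. (19)."""
    g_t, g_b = G[tumor_mask], G[bkg_mask]
    return 2 * (g_b.mean() - g_t.mean()) ** 2 / (g_b.var() + g_t.var())
```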
### _Implementation Details for Dual-DC and TWENN_

For all datasets unwrapped in this paper, the same hyperparameters of Dual-DC were used (learning rate = 0.005, \(\lambda=1000\), number of iterations = 4000). The networks were trained using the ADAM algorithm with a batch size of 500 and a step number of 12000. Before the complex wavefield was fed into the network, complex normalization was performed based on the maximum absolute value of all data points. In this way, the absolute value of the input data was normalized between 0 and 1. Because physiological noise such as vascular pulsation may pollute _in vivo_ measurements of the brain and liver [40, 41], a relatively larger noise intensity \(b\) of 0.3 was added to the training sets, compared with 0.001 for the remaining non-living measurement datasets (all simulation data and phantom data). If the wavefield was continuous between imaging slices, a 3\(\times\)3\(\times\)3 patch was used; otherwise, a 3\(\times\)3 patch was used. The proposed methods were implemented with PyTorch 1.12.0 and CUDA 11.6 on an Ubuntu 20.04 LTS (64-bit) operating system equipped with an AMD Ryzen 9 5950X central processing unit (CPU) and an NVIDIA RTX 3080Ti graphics processing unit (GPU, 12 GB memory).

Fig. 2: (a) A comparison of the performance of the three unwrapping methods at varying noise levels. (b) The differences between the ground truth and the outputs of the three methods. (c) Mean error at different noise levels.

## IV Results

### _Phase unwrapping test_

#### IV-A1 Simulation test

Results on the unwrapping simulation data showed that the proposed optimization-based algorithm using Dual-DC could unwrap phases with a complex Gaussian noise level up to \(\sigma=0.4\) without introducing an offset of the background phase (Fig. 2a). However, SG failed to unwrap some complex edges even without noise, and a large number of unwrapping failures were observed with increased noise levels (Fig. 2a, b). Although LBE had no obvious unwrapping failures, it introduced a background offset even without noise (Fig. 2b). In contrast, the optimization-based algorithm using Dual-DC had better performance at all noise levels (Fig. 2c).

#### IV-A2 In vivo dataset test

It was observed that SG was prone to unwrapping failure. The background offset introduced by LBE could be seen in both coronal and sagittal views, leading to an obvious 3D wavefield discontinuity. All of these methods performed unwrapping on the transverse slices. However, only the optimization-based algorithm using Dual-DC ensured continuity in the coronal and sagittal views (Fig. 3). The absolute differences also showed that Dual-DC produced the smoothest phase.

Fig. 3: (a) A comparison of displacement fields extracted from brain MRE images. (b) The corresponding absolute differences of displacement at the cross sections marked by the white dotted line.

### _Modulus estimation test_

#### IV-B1 Simulation data set

A comparison of inversion between DI and TWENN with different complex wavenumbers and signal-to-noise ratios (SNRs) in a 2D wavefield (patch size = 3\(\times\)3) showed that TWENN performed better than DI in estimating \(\overline{k^{\prime}}\) and \(\overline{k^{\prime\prime}}\). As the SNR decreased, the advantage of TWENN over DI increased (Fig. 4a, b). Furthermore, the mean errors increased with the value of the modulus, indicating an upper limit of estimation using TWENN. This upper limit, however, is higher than that of DI, as shown in the figure. The estimated shear moduli showed that TWENN could recover the inclusions better than the other methods (Fig. 5). Values of RMSE showed that TWENN was the best at estimating the complex shear moduli of the inclusions in most cases (TABLE II).

#### IV-B2 Simulation test of brain and liver

Based on the simulated complex wavefields of the brain and liver, it was observed that TWENN had the best estimation
accuracy, with RMSEs of 3.34 and 2.06 kPa (TABLE II). The other two algorithms did not perform as well as TWENN, probably due to the strong reflection of waves or the use of derivatives, which might produce artifacts at boundaries (Fig. 6).

Fig. 4: Distribution of mean errors in estimating \(\overline{k^{\prime}}\) (left) and \(\overline{k^{\prime\prime}}\) (right) for different SNR values and normalized complex wavenumbers.

Fig. 5: Comparisons of estimated (a) \(G^{\prime}\) and (b) \(G^{\prime\prime}\) maps using different methods based on the phantom simulation. 28 dB of noise was added to the simulated wave images.

### _Performance Evaluation_

#### IV-C1 Phantom MRE

Using multi-frequency MRE images of the phantom, the estimated \(G^{\prime}\) and \(G^{\prime\prime}\) values showed that the proposed pipeline had the best performance in recovering the inclusion structures (Fig. 7a). Algorithms using the Laplace operator could not recover the structures well, probably because the phantom images contained various noises and wave deflections. With longer wavelengths and lower SNR in regions with relatively high modulus, these algorithms were prone to underestimating the high modulus. The proposed pipeline produced the modulus estimates closest to the ground truth for both \(G^{\prime}\) and \(G^{\prime\prime}\) in most cases (Fig. 7b).

#### IV-C2 Tumor MRE

For both brain and liver tumors, the proposed pipeline provided better reconstruction of boundaries (Fig. 8a, b), with more details matching those of the T1 images and the highest values of CNR (TABLE II). This demonstrates the ability of the proposed pipeline to reconstruct structural details.

#### IV-C3 Normal brain MRE

For the normal human brain dataset, the \(G^{\prime}\) values estimated from MDEV were relatively low (\(<\)1 kPa). This was probably affected by noise. Both k-MDEV and the proposed pipeline could estimate shear modulus maps that match the structural image. k-MDEV introduced significant boundary artifacts, which is consistent with the results in Fig. 6. As indicated by the white arrow in Slice 1, the brain's midline was better reconstructed using the proposed pipeline than with the other algorithms (Fig. 8c). The white arrow in Slice 2 (Fig. 8d) showed that k-MDEV estimated a relatively low modulus (\(<\)0.3 kPa), whereas the proposed method provided a proper estimate (\(\sim\)1 kPa). This dataset demonstrated the capability of the proposed pipeline to reconstruct anatomical features.

#### IV-C4 Liver dataset test

For liver MRE (Fig. 8e, f), the estimated \(G^{\prime}\) values of the normal liver from k-MDEV, MDEV, and the proposed pipeline were 1.62, 1.31, and 1.97 kPa, respectively. These values were all in line with the reported normal range [42]. However, in the case of cirrhosis with iron deposition, the estimated \(G^{\prime}\) values were 1.70 and 1.09 kPa for k-MDEV and MDEV, respectively. This was likely caused by the iron deposition, which resulted in a low SNR. The value of 3.13 kPa estimated by the proposed pipeline fell within the reported range [42].

## V Discussion

A pipeline for estimating complex moduli from MRE images is proposed in this study. The processing pipeline contains two major steps: displacement extraction and modulus estimation. For displacement extraction, an optimization-based method with Dual-DC terms is used for both calculating principal components and phase unwrapping.
For modulus estimation, a complex neural network using covariance as input is proposed based on the TWE model. Since discontinuities in unwrapped phase images can introduce serious artifacts in the derived modulus distribution, accurate phase unwrapping is an important prerequisite. Complex or noisy wavefields are difficult to handle with search-like or discrete greedy-update phase unwrapping techniques, such as the SG algorithm. Laplacian-based estimation (LBE) can perform unwrapping well in noisy cases, but extra background offset noise can be introduced. The Dual-DC based continuous optimization can unwrap complex wrapped and noisy phases without introducing an extra phase offset. The proposed unwrapping method uses phase gradient information, which is more adaptable to complex scenarios than discrete unwrapping. Compared to LBE, Dual-DC does not introduce new background noise. Phase images were unwrapped successfully for wavefield data from the phantom, brain, and liver using the same set of hyperparameters. This verifies the generalizability of the proposed algorithm.

Fig. 6: A comparison of estimated \(G^{\prime}\) using different algorithms for the simulated brain and liver. Regions were magnified to illustrate the anatomical features of \(G^{\prime}\).

Fig. 7: (a) A comparison of estimated \(G^{\prime}\) and \(G^{\prime\prime}\) maps of the gelatin phantom using different processing methods. (b) A comparison of the mean values of \(G^{\prime}\) and \(G^{\prime\prime}\) for each method. The longitudinal axis is displayed in logarithmic form.

In addition, the computation time for
Even training without noise, TWENN still has better noise robustness than conventional DI using a small kernel (Fig. 4). The dimension of the kernel also affects the inversion. The use of a small sized patch will set an upper limit to the estimated modulus. Using dejittering method, it has been shown that interslice phase inconsistencies introduced in signal acquisition can be removed, providing robust 2D inversions, especially in the abdomen. Here, with improved phase unwrapping, the smooth and physically accurate phase can result in better modulus estimation from 3D inversion. TWENN uses neural networks to pre-learn wavefield structure information for inversion. The time of training data generation and network training using a desktop PC did not exceed 3 minutes. The trained network can be directly used to process MRE images with the same vibration frequency and resolution. In this study, the 3D multi-directional and multi-frequency inversion time of a whole brain did not exceed 15 seconds, showing its potential to be deployed clinically. This is because that all these methods use multi-frequency fusion, which is not used by other leading FE-based approaches like NLI. Furthermore, inverse FE-based algorithms require iterative FE computation updates [25, 37, 43] which may result in computations ten times or higher than TWENN, MEDV, and k-MEDV. The modulus inversion of MRE is equivalent to estimating the wavenumber for an array of wave points, which can be considered as a variant of the Direction of Arrival (DOA) Fig. 8: T1-weighted (T1w) images, magnitude images of MRE, \(G^{*}\) and \(G^{*}\) maps estimated from different algorithms were compared for (a) brain tumor, (b) liver tumor, and (c, d) two difference slices of normal brain images. Magnified views of the tumor region were also provided. Magnitude images of MRE, \(G^{*}\) and \(G^{*}\) maps estimated from (e) a healthy volunteer and (f) a patient with hepatic siderosis & cirrhosis were compared. Regions of interest were delineated with white line, and the corresponding values of mean \(\pm\) standard deviation of \(G^{*}\) were provided. problem in array signal processing. Strategies commonly used in DOA solution problems [30] can be drawn on, where second-order moments can effectively reveal harmonic information. The possibility of estimating modulus from an array signal processing perspective was validated in our previous work [44]. From the perspective of generalizability, the training data set in TWENN is produced using a wave equation. Theoretically, any local wavefield can be represented using the TWE model, including reflected standing waves. Therefore, the training set can cover most of the wave propagation conditions, ensuring the generalizability of TWENN. Existing neural network training modulus estimation methods [28, 29] are usually trained using FE simulation for a limited number of scenarios, which could hamper their generalization performance. In addition, most of the elastography processing pipelines used various complex filters for either wave extraction or denoising [20, 21, 22]. However, the proposed pipeline does not use any filters other than a median filter with fixed parameters. Therefore, complicated parameter tuning procedures are prevented, improving its generalization performance. Although multi-frequency and multi-directional information is used to improve inversion performance [2], TWENN also supports single-frequency and single-direction inversion. 
In addition to \(G^{\prime}\) and \(G^{\prime\prime}\) values, shear wave speed and penetrating rate that are closely related with stiffness and damping [2] can also be estimated. Limitations of this study include assumptions of local homogeneity, isotropy, and linear viscoelasticity, which could be addressed by more realistic FE simulation based NLI [9, 23, 24, 25, 45, 46], and neural network based inversion [29, 47, 48]. Although TWENN can estimate a relatively smoother viscosity distribution from the noisy wavefield, only simple phantom cases were used to validate it. In this study, limited human imaging data were used for validation. Future work includes applying the proposed pipeline to a larger cohort of clinical data sets such as neurological and liver diseases. The unwrapping method proposed in this paper is readily transferable to other fields where unwrapping is required. The inversion method can also be applied to other elastic imaging modalities, such as optical elastography and ultrasound elastography where wave equations apply. The TWE model can be modified in terms of specific anisotropic and dispersive material models, in order to solve more complex modulus inversion problems. Future work includes estimating properties of transversely isotropic and frequency-dependent materials within the TWENN framework. The potential of using this data-driven method for obtaining inverse operators to solve other inverse problems such as electrical resistance tomography needs to be further explored. ## Training data generation The training data set was generated using the following parameter settings: 1. The number \(M\) of traveling waves 1,2,3,4,5,6,7,8 was set to be distributed at ratios of 6:4:3:2:1:1:1:1:1:1. 2. The complex amplitude \(|a_{m}|\) was uniformly distributed in [0,1] and angle(\(a_{m}\)) was uniformly distributed in [0,2\(\pi\)]. 3. The unit vector of the wave propagation direction is \(\vec{n}_{m}^{*}=[x,y,z]\). 1. In 2D cases, \(x=\cos(\theta)\,,y=\sin(\theta)\,,z=0\), \(\theta\) was uniformly distributed in [0,2\(\pi\)]. 2. In 3D cases, \(z\) is uniformly distributed in [\(-1\),1], \(\theta\) was uniformly distributed in [0,2\(\pi\)], \(x=\sqrt{1-z^{2}}\cos(\theta)\), \(y=\sqrt{1-z^{2}}\sin(\theta)\). 4. \(k^{\prime}\) and \(k^{\prime\prime}\) were uniformly distributed within the preset range. ## Acknowledgment We thank Prof. Ingolf Sack from Charite, Germany for providing part of the testing data sets and helpful discussions.
2308.14595
Neural Network Training Strategy to Enhance Anomaly Detection Performance: A Perspective on Reconstruction Loss Amplification
Unsupervised anomaly detection (UAD) is a widely adopted approach in industry due to rare anomaly occurrences and data imbalance. A desirable characteristic of an UAD model is contained generalization ability: it excels in the reconstruction of seen normal patterns but struggles with unseen anomalies. Recent studies have sought to contain the generalization capability of their UAD models in reconstruction from different perspectives, such as the design of the neural network (NN) structure and the training strategy. In contrast, we note that contained generalization ability in reconstruction can also be obtained simply from a steep-shaped loss landscape. Motivated by this, we propose a loss landscape sharpening method that amplifies the reconstruction loss, dubbed Loss AMPlification (LAMP). LAMP deforms the loss landscape into a steep shape so that the reconstruction error on unseen anomalies becomes greater. Accordingly, the anomaly detection performance is improved without any change to the NN architecture. Our findings suggest that LAMP can be easily applied to any reconstruction error metrics in UAD settings where the reconstruction model is trained with anomaly-free samples only.
YeongHyeon Park, Sungho Kang, Myung Jin Kim, Hyeonho Jeong, Hyunkyu Park, Hyeong Seok Kim, Juneho Yi
2023-08-28T14:06:36Z
http://arxiv.org/abs/2308.14595v1
# Neural Network Training Strategy to Enhance Anomaly Detection Performance: A Perspective on Reconstruction Loss Amplification

###### Abstract

Unsupervised anomaly detection (UAD) is a widely adopted approach in industry due to rare anomaly occurrences and data imbalance. A desirable characteristic of an UAD model is contained generalization ability: it excels in the reconstruction of seen normal patterns but struggles with unseen anomalies. Recent studies have sought to contain the generalization capability of their UAD models in reconstruction from different perspectives, such as the design of the neural network (NN) structure and the training strategy. In contrast, we note that contained generalization ability in reconstruction can also be obtained simply from a steep-shaped loss landscape. Motivated by this, we propose a loss landscape sharpening method that amplifies the reconstruction loss, dubbed _Loss AMPlification_ (LAMP). LAMP deforms the loss landscape into a steep shape so that the reconstruction error on unseen anomalies becomes greater. Accordingly, the anomaly detection performance is improved without any change to the NN architecture. Our findings suggest that LAMP can be easily applied to any reconstruction error metrics in UAD settings where the reconstruction model is trained with anomaly-free samples only.

YeongHyeon Park\({}^{1,2}\), Sungho Kang\({}^{1}\), Myung Jin Kim\({}^{2}\), Hyeonho Jeong\({}^{3}\), Hyunkyu Park\({}^{1}\), Hyeong Seok Kim\({}^{2}\), Juneho Yi\({}^{1}\)

\({}^{1}\)Department of Electrical and Computer Engineering, Sungkyunkwan University \({}^{2}\)SK Planet Co., Ltd. \({}^{3}\)College of Computing, Sungkyunkwan University

{yeonghyeon, myungjin, beman}@sk.com, {sungho369, drake6751, mjss016, jhyi}@skku.edu

Unsupervised anomaly detection, Loss amplification, Loss landscape, Training strategy

Figure 1: Effect of LAMP. The loss landscapes and their contour projections for \(\mathcal{L}_{2}\) and \(\mathcal{L}_{2}^{LAMP}\) are shown in the first and second rows, respectively. The loss landscape for an UAD model should be shaped with a steep and sharp form in order to contain the reconstruction generalization ability of the model and enhance the AD performance [1, 2, 3].

## 1 Introduction

Exploiting a reconstruction model trained with anomaly-free samples only is a widely adopted approach to unsupervised anomaly detection (UAD) in various industries due to its capability to resolve the challenges posed by the scarcity of abnormal situations and data imbalance problems. The desirable characteristic of a trained UAD model is contained generalization ability in reconstruction. That is, the model should excel in the reconstruction of seen normal patterns but struggle with unseen anomalous patterns. An easy way to contain the generalization ability of an UAD model in reconstruction is to pour all of its reconstruction capability into normal patterns so that there is no room left to cover anomalous patterns. Various methods have been proposed to further improve the anomaly detection (AD) performance by containing generalization ability, but the focus has been mostly on exploring new neural network (NN) structures or extensions. NN designs in UAD can be divided into three main categories: 1) generative adversarial networks (GAN) that additionally include a discriminator on top of a generative model [4, 5, 6], 2) a memory module that can forge normal
latent features to reconstruct normal-like images [7, 8, 9, 10], and 3) online knowledge distillation to prevent the model from generating a fixed, constant normal image regardless of the changing input [11, 10, 12]. In contrast, we exploit the research results reported in [1] that the generalization ability of NNs is related to the shape of the loss landscape. They report that, in a classification task, NNs with smooth-shaped loss landscapes show better generalization ability than those with sharp shapes. Their loss landscape visualization method is also utilized in generative models [3] and can likewise be applied to our reconstruction model. The loss landscape for an UAD model should be shaped with a steep and sharp form in order to contain the reconstruction generalization ability of the model and enhance the AD performance. Based on the observation that reconstruction loss amplification produces a sharp-shaped loss landscape, we propose a method that only changes the reconstruction loss function via amplification, dubbed _Loss AMPlification_ (LAMP). When LAMP is applied, it actually transforms the loss landscape into a sharp form, as shown in Fig. 1. To verify the legitimacy of our method, we compare the loss landscapes using the same encoder/decoder but with different loss functions and batch sizes. For the comparison, the MNIST dataset [13] is used. We confirm that when LAMP is applied to the reconstruction error metric for training, it not only transforms the loss landscape into a steeper shape for all batch sizes but also actually enhances the AD performance. We conduct additional experiments on the MVTec AD dataset [14], covering 15 AD tasks with combinations of different loss functions and optimizers. The experimental results show that the AD performance is improved in most cases. Extensive experiments demonstrate that the application of LAMP leads to improved AD performance, achieved via loss amplification only, without any structural change or expansion of the NNs. LAMP can be easily and safely applied across any reconstruction error metrics when training NNs in UAD settings.

## 2 Proposed Method

### Reconstruction generalization for UAD

Loss landscape visualization can provide insight relating the shape of the loss landscape to the reconstruction generalization ability of an NN model. When the loss landscape is smooth, a reconstruction model has high generalization ability. High generalization ability means that unseen patterns can be well reconstructed at test time. However, in UAD, contained generalization ability is crucial because the criterion for determining whether an input sample is defective relies on the magnitude of the reconstruction error. Note that when an UAD model has high generalization ability, the model will reconstruct unseen patterns accurately and fail to detect defective products due to the small reconstruction error. In this paper, we propose reconstruction loss amplification as a simple way to affect the generalization ability of an UAD model in reconstruction without altering the structure of the NNs or the training strategy.

### Loss amplification

The proposed method, LAMP, is a simple trick that can improve the AD performance by just amplifying the base loss function \(\mathcal{L}_{base}\). Note that \(\mathcal{L}_{2}\), \(\mathcal{L}_{1}\), \(\mathcal{L}_{SSIM}\), etc. can be adopted as \(\mathcal{L}_{base}\). LAMP is formulated in (1).
Instead of increasing the learning rate or weighting each loss term, LAMP imposes a larger penalty than the base loss function \(\mathcal{L}_{base}\):

\[\mathcal{L}_{base}^{LAMP}(y,\hat{y})=\sum_{h=1}^{H}\sum_{w=1}^{W}\sum_{c=1}^{C}-\log\Big{(}1-\mathcal{L}_{base}(y,\hat{y})\Big{)},\quad\text{w.r.t. }y\in\mathbb{R}^{H\times W\times C} \tag{1}\]

LAMP makes gradients steeper than those of the base loss function, as shown in Fig. 2, accelerating loss convergence. This steeper gradient transforms the loss landscape of an UAD model into a sharp form, containing the reconstruction generalization ability. Note that we can safely amplify the reconstruction loss because an UAD model is trained using anomaly-free samples only.

Figure 2: Loss curves for the LAMP-applied \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) cases. The \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) cases are shown in the first and second rows, respectively, and their gradients are shown in the second column. LAMP imposes a larger penalty than the base loss function \(\mathcal{L}_{base}\).

### Scaling trick

The input of the log function in LAMP must be guaranteed to be positive. For this, we should ensure that a negative value does not occur in the '\(1-\mathcal{L}_{base}\)' operation. We use a simple scaling trick that normalizes \(\mathcal{L}_{base}\) between 0 and 1, as in (2). We also multiply by the coefficient '\(1-\epsilon\)' to adjust \(\mathcal{L}^{\prime}_{base}\) to slightly less than 1, which prevents '\(\log(1-\mathcal{L}^{\prime}_{base})\)' from exploding.

\[\mathcal{L}^{\prime}_{base}(y,\hat{y})=\frac{\mathcal{L}_{base}(y,\hat{y})}{\max{(\mathcal{L}_{base}(y,\hat{y}))}}(1-\epsilon) \tag{2}\]
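As a concrete illustration, Eqs. (1) and (2) can be combined into a single loss function. The sketch below uses a hypothetical helper name and assumes an element-wise base loss normalized by its batch maximum:

```python
import torch

def lamp_loss(y, y_hat, base="l2", eps=1e-6):
    """Minimal sketch of LAMP, Eqs. (1)-(2): per-element base loss,
    scaled into [0, 1-eps], then amplified with -log(1 - .)."""
    if base == "l2":
        per_elem = (y - y_hat) ** 2          # element-wise L2 loss
    else:
        per_elem = (y - y_hat).abs()         # element-wise L1 loss
    # Scaling trick, Eq. (2): normalize by the batch maximum.
    scaled = per_elem / per_elem.max().clamp_min(eps) * (1 - eps)
    # Loss amplification, Eq. (1): sum over H, W, C (and the batch).
    return (-torch.log(1 - scaled)).sum()
```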
## 3 Experiments

### Experimental setup

A basic experiment is designed with the MNIST dataset [13] to see whether LAMP improves the AD performance. The MNIST dataset [13] is originally provided for classifying digits into ten classes, but we redesign it for AD experiments by setting one class as normal and the other nine as abnormal. For example, if the class '0' is set as normal, the AE will be trained on '0' only, with the intent of filtering out the other nine unseen digit classes '1'–'9' through large reconstruction errors. We also report experimental results on the industrial dataset MVTec AD [14]. The training set only includes anomaly-free samples, but the test set includes both anomaly-free and anomalous samples.

**Implementation details.** We design a simple convolutional autoencoder (AE) referring to the previous study [15] while discarding skip connections, which would otherwise enable better generalization ability. Basically, the AE is structured with six layers each for the encoder and decoder, but for low-resolution datasets such as MNIST [13], we change it into a four-layered structure for each. Note that we repeat 'convolution \(\rightarrow\) batch normalization \(\rightarrow\) leaky ReLU activation' for the encoder and 'upsampling \(\rightarrow\) convolution \(\rightarrow\) batch normalization \(\rightarrow\) leaky ReLU activation' for the decoder.

**Training conditions.** In all AD experiments, we perform hyperparameter tuning to compare the best performance of each model. The following hyperparameters are tuned: 1) batch size, 2) the number of patches for patch-wise reconstruction [16], 3) learning rate, and 4) kernel size. For the experiment on the industrial dataset MVTec AD [14], we use additional training conditions: 1) base loss functions (\(\mathcal{L}_{2}\), \(\mathcal{L}_{1}\), and \(\mathcal{L}_{SSIM}\)), and 2) NN optimizers (SGD [17], RMSprop [18], and Adam [19]).

**Evaluation metric.** We use the area under the receiver operating characteristic curve (AUROC) [20] as the evaluation metric in the AD experiments. The reconstruction error is also called the anomaly score in AD tasks. In this study, the \(\mathcal{L}_{2}\) distance between the input \(y\) and the reconstruction output \(\hat{y}\) is used as the anomaly score. AUROC will be close to 1 when the reconstruction errors of the AE for unseen anomalous patterns are relatively large compared to the errors in normal pattern reconstruction.

### Comparison of loss landscapes

We visualize the loss landscapes of \(\mathcal{L}_{2}\) and \(\mathcal{L}_{2}^{LAMP}\) for batch sizes of 128, 16, and 4. \(\mathcal{L}_{2}^{LAMP}\) denotes the \(\mathcal{L}_{2}\) metric amplified by LAMP. For visualization, an open-source tool1 provided by Li et al. [1] is used. The visualization results in Fig. 3 show the loss landscape only for the encoder part of the AE, which is trained to predict class labels of the MNIST dataset [13]. The results indicate that all \(\mathcal{L}_{2}^{LAMP}\) cases always show denser contours, which means steeper loss landscapes compared to \(\mathcal{L}_{2}\).

Footnote 1: [https://github.com/tomgoldstein/loss-landscape](https://github.com/tomgoldstein/loss-landscape)

### AD experiments using MNIST dataset

Our work is based on the fact that when the loss landscape of an NN is shaped steeper, the NN will show better AD performance due to its contained generalization ability [1]. The results shown in Table 1 experimentally support our claim: the LAMP-applied cases transform the loss landscape into a steeper one and always outperform \(\mathcal{L}_{2}\) regardless of the batch size. Moreover, for a large batch size, which yields a smoother loss landscape, the gap between AUROCs is greater; that is, the effect of LAMP is maximized. This result experimentally demonstrates that the sharpness of the loss landscape can be exploited to enhance the AD performance.

Table 1: The average performance for ten AD tasks using the MNIST dataset [13] at different batch sizes (columns). The LAMP-applied loss function, \(\mathcal{L}_{2}^{LAMP}\), always outperforms \(\mathcal{L}_{2}\).

| **Loss** | **1024** | **128** | **32** | **16** | **4** | **1** |
| --- | --- | --- | --- | --- | --- | --- |
| \(\mathcal{L}_{2}\) | 0.658 | 0.919 | 0.921 | 0.926 | 0.931 | 0.919 |
| \(\mathcal{L}_{2}^{LAMP}\) | **0.712** | **0.925** | **0.929** | **0.929** | **0.932** | **0.927** |

Figure 3: The contour representation of the loss landscapes under three batch size (BS) conditions for \(\mathcal{L}_{2}\) and \(\mathcal{L}_{2}^{LAMP}\). It is known that the generalization ability is contained when the BS is small [1]. We experimentally confirm the same effect for LAMP.
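For completeness, the evaluation protocol described under **Evaluation metric** can be sketched as follows, assuming a trained AE `model` and a test `loader` yielding images with binary anomaly labels (both hypothetical names):

```python
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def evaluate_auroc(model, loader):
    """Per-sample L2 reconstruction error as the anomaly score,
    summarized by AUROC over the whole test set."""
    scores, labels = [], []
    for x, y in loader:                                # y: 0 normal, 1 anomalous
        x_hat = model(x)
        err = ((x - x_hat) ** 2).flatten(1).mean(dim=1)  # anomaly score
        scores += err.tolist()
        labels += y.tolist()
    return roc_auc_score(labels, scores)
```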
The last column of Table 2 shows the best performance for each subtask and \(\mathcal{L}_{base}^{LAMP}\) attains better AUROC than \(\mathcal{L}_{base}\). In Fig. 4, we present reconstruction results for the best models trained with each \(\mathcal{L}_{base}\) and \(\mathcal{L}_{base}^{LAMP}\). \(\mathcal{L}_{base}\) produces blurry results for normal products in capsule, metal nut, and pill cases, causing a large reconstruction error. Furthermore, a pill case shows fixed reconstruction results regardless of changing the input, undermining the reliability of the AD performance. In contrast, \(\mathcal{L}_{base}^{LAMP}\) case demonstrates accurate reconstructions for normal samples. Note the clear visibility of the number '500' on the normal capsule. In cases of unseen anomalous patterns, they are converted closer to a seen normal pattern, which is intended of a reconstruction model in UAD settings. This yields accurate detection of defective samples. Via the results of extensive experiments, we confirm that LAMP improves the AD performance in most cases quantitatively and qualitatively. ## 4 Conclusion In this paper, we propose a simple method to enhance the AD performance in an UAD setting from the perspective of reconstruction loss amplification by noting that contained generalization ability is highly related to sharp-shaped loss landscapes. To show the legitimacy of our approach, we design extensive experiments with MNIST and MVTec AD datasets. We have shown the shape change of the reconstruction loss landscape when LAMP is applied as we vary the batch size. We demonstrate quantitative and qualitative performance enhancement of an UAD model by LAMP under various conditions. LAMP can be safely applied to any reconstruction error metrics in an UAD setup where a reconstruction model is trained with anomaly-free samples only. \begin{table} \begin{tabular}{l||c c|c c|c c|c c|c c|c c|c c|c c} \hline **Training** & \multicolumn{4}{c|}{\(\mathcal{L}_{2}\to\mathcal{L}_{SIGHT}^{LAMP}\)} & \multicolumn{4}{c|}{\(\mathcal{L}_{SIGHT}\)} & \multicolumn{4}{c}{\(\mathcal{L}_{SIGHT}\)} & \multicolumn{4}{c}{**Best**} \\ \hline **Optimizer** & \multicolumn{2}{c|}{**SGD**} & \multicolumn{2}{c|}{**RMSprop**} & \multicolumn{2}{c|}{**Adam**} & \multicolumn{2}{c|}{**SGD**} & \multicolumn{2}{c|}{**RMSprop**} & \multicolumn{2}{c|}{**Adam**} & \multicolumn{2}{c|}{**SGD**} & \multicolumn{2}{c|}{**RMSprop**} & \multicolumn{2}{c|}{**Adam**} & \multicolumn{2}{c|}{**SGD**} & \multicolumn{2}{c|}{**RMSprop**} & \multicolumn{2}{c|}{**Adam**} & \multicolumn{2}{c|}{\(\mathcal{L}_{base}\to\mathcal{L}_{SIGHT}^{LAMP}\)} \\ \hline Bottle & 0.952 \(\rightarrow\) 0.983 & 0.990 \(\rightarrow\) 0.990 & 0.994 \(\rightarrow\) 0.991 & 0.989 \(\rightarrow\) 0.929 & 0.993 \(\rightarrow\) 0.994 & 0.994 \(\rightarrow\) 0.992 & 0.993 \(\rightarrow\) 0.990 & 0.994 \(\rightarrow\) 0.994 & 0.994 \(\rightarrow\) 0.994 & 0.994 \(\rightarrow\) 0.993 & **0.994** \(\rightarrow\) **0.
2305.09276
Noise robust neural network architecture
In which we propose a neural network architecture (dune neural network) for recognizing general noisy images without adding any artificial noise to the training data. By representing each free parameter of the network as an uncertainty interval, and applying a linear transformation to each input element, we show that the resulting architecture achieves decent noise robustness when faced with input data with white noise. We apply simple dune neural networks to the MNIST dataset and demonstrate that even for very noisy input images which are hard for humans to recognize, our approach achieved better test set accuracy than humans without dataset augmentation. We also find that our method is robust for many other examples with various background patterns added.
Xiong Yunuo, Xiong Hongwei
2023-05-16T08:30:45Z
http://arxiv.org/abs/2305.09276v1
# Noise robust neural network architecture

###### Abstract

In which we propose a neural network architecture (dune neural network) for recognizing general noisy images without adding any artificial noise to the training data. By representing each free parameter of the network as an uncertainty interval, and applying a linear transformation to each input element, we show that the resulting architecture achieves decent noise robustness when faced with input data with white noise. We apply simple dune neural networks to the MNIST dataset and demonstrate that even for very noisy input images which are hard for humans to recognize, our approach achieved better test set accuracy than humans without dataset augmentation. We also find that our method is robust for many other examples with various background patterns added.

Keywords: Noisy image recognition, neural networks, robustness, noise immunity, additive white gaussian noise

## I Introduction

The problem of devising neural network architectures that can recognize noisy images without artificially adding any noise in the training process has long stood in the field of deep learning [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. For example, it has been observed that state-of-the-art neural networks for ImageNet fail completely when small, specially crafted noises are added to the test images (the so-called adversarial examples [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27]), and no efficient solutions have been found to tackle this issue and the presence of general noises. Therefore, noisy image recognition remains an active area of research [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], and in this work, we propose a different neural network architecture which takes the effect of noise explicitly into account. In the dune neural network we present here, each free parameter of the model is represented as an uncertainty interval. At each training iteration, we sample from the dune neural network using a Monte Carlo method and propagate through the sampled network in the usual manner; the uncertainty intervals in the original dune network are then dynamically updated in response to the loss derivatives. The advantages of using dune neural networks are twofold: first, the uncertainty interval representation of free parameters tends to make the model easier to optimize, because finite-width trajectories make the exploration of the loss function landscape easier, in contrast to other "probabilistic" models such as Bayesian neural networks [28], where model optimization has proved much harder than optimization of standard architectures. Second, dune neural networks naturally encode robustness with respect to parameter perturbation; namely, all parameters in the given region defined by the uncertainty intervals should yield networks with similar performance. This property also helps ensure that the training process is more stable. Inspired by the notion of uncertainty intervals for dune neural networks, we also propose applying a linear transformation to each element of the noisy input data. This transformation does not eliminate the noise added to the input data in any way, but maps the input uncertainty interval into a new uncertainty interval that achieves excellent noise robustness when fed into a dune neural network. When we combine the input transformation with dune neural networks, we show that, without dataset augmentation, the network achieves high accuracy when evaluated on the noisy test set.
Specifically, without dataset augmentation, in the examples we consider here, the noisy test set accuracy decreases roughly exponentially with increasing noise-to-signal ratio for the traditional method with a simple neural network, whereas for our approach the noisy test set accuracy decreases only approximately linearly. Therefore, we are able to achieve reasonable accuracy for very noisy images that are otherwise hard for a human to recognize, by training on the original training set only, without the use of dataset augmentation. The work we present here may have potential value for the field of noisy image recognition and the deep learning noise problem in general, and in particular may have a wide range of practical applications where we do not know or cannot train on the noises/disturbing background patterns in advance, e.g., in medical image recognition [29] and astronomical image classification [30; 31]. In the following, we give a detailed account of the dune neural network and its associated transformation of noisy input data; then we perform tests on the MNIST dataset for simple neural network architectures to demonstrate the power of our approach in combating noisy data and disturbing background patterns.

## II Method

### Architecture

A dune neural network converts each free parameter in an ordinary neural network into an uncertainty interval represented by two separate parameters. Let \(\theta\) denote the collection of free parameters of the original network; then the corresponding dune neural network has parameters \(\mathbf{\Theta}\), where \([\Theta_{i1},\Theta_{i2}]\) is the uncertainty interval corresponding to the \(i\)th free parameter \(\theta_{i}\). So if the original network has \(n\) parameters, the associated dune neural network has \(2n\) parameters. Whenever we need to perform any operation on the dune neural network, we start by treating the uncertainty intervals as uniform distributions and sampling to obtain a set of real values, then construct an ordinary neural network based on those values:

\[\tilde{\theta_{i}}\in[\Theta_{i1},\Theta_{i2}]. \tag{1}\]

This is called the instantiation step. To perform forward propagation on a dune neural network, we simply carry out the propagation on the sampled, ordinary neural network. For example, the loss function can be written as

\[L_{\mathbf{\Theta}}(\tilde{\theta_{1}},...,\tilde{\theta_{n}}). \tag{2}\]

To update the uncertainty intervals of a dune neural network dynamically based on the loss function, we first calculate the gradient with respect to the sampled variables in the usual manner with backward propagation:

\[\mathbf{g}=\nabla_{\tilde{\theta}}L_{\mathbf{\Theta}}(\tilde{\theta_{1}},...,\tilde{\theta_{n}}). \tag{3}\]

Then we may use any optimization algorithm we want (stochastic gradient descent, momentum, Adam, and so forth) to calculate an update of the \(\tilde{\theta}\) values based on the gradient and other relevant hyperparameters. Let us denote that step as

\[\tilde{\theta}^{(new)}=optimization\_algorithm(\tilde{\theta}^{(old)},\mathbf{g},...). \tag{4}\]

Then the parameters in the corresponding dune neural network are updated as

\[\Theta_{i1}^{(new)}=\Theta_{i1}^{(old)}+(1-p)(\tilde{\theta_{i}}^{(new)}-\tilde{\theta_{i}}^{(old)}),\]
\[\Theta_{i2}^{(new)}=\Theta_{i2}^{(old)}+p(\tilde{\theta_{i}}^{(new)}-\tilde{\theta_{i}}^{(old)}), \tag{5}\]

where \(p\) is defined as

\[p=\frac{\tilde{\theta_{i}}^{(old)}-\Theta_{i1}^{(old)}}{\Theta_{i2}^{(old)}-\Theta_{i1}^{(old)}}. \tag{6}\]
In case \(\Theta_{i2}^{(old)}=\Theta_{i1}^{(old)}\), where \(p\) is undefined, we use the following rule to update \(\mathbf{\Theta}\): \[\Theta_{i1}^{(new)}=\Theta_{i1}^{(old)}+(\tilde{\theta_{i}}^{(new)}-\tilde{\theta_{i}}^{(old)}),\] \[\Theta_{i2}^{(new)}=\Theta_{i2}^{(old)}+(\tilde{\theta_{i}}^{(new)}-\tilde{\theta_{i}}^{(old)}). \tag{7}\] This ensures that when there are no uncertainty intervals (i.e., \(\Theta_{i2}^{(old)}=\Theta_{i1}^{(old)}\)), the usual parameter update rule for deterministic neural networks is recovered. The dynamics of the uncertainty intervals in a dune neural network resembles the movement of a dune in a high dimensional space, hence the name. At each training iteration of a dune neural network, we resample the parameter values from each uncertainty interval for better statistics. During the testing process, we draw a single sample from the dune neural network and treat the output of that sampled ordinary network as the final output. There is usually little gain from drawing multiple samples in a single iteration of the dune neural network, and doing so decreases efficiency. In its current form, dune neural networks have the same efficiency as traditional deterministic neural networks, and in deeper networks they have a good chance of converging to a solution faster than other networks under the same circumstances, because the uncertainty intervals make it easier to explore the cost function landscape. To initialize a dune neural network, we first obtain a collection of parameters \(\theta\) for an ordinary network from any initialization scheme. We then specify a prior uncertainty width \(d\), and \(\mathbf{\Theta}\) is set as \[\Theta_{i1}=\theta_{i}-d,\ \Theta_{i2}=\theta_{i}+d. \tag{8}\] Also, to prevent any uncertainty interval from growing too large during training, we can add a regularization term for the width of each uncertainty interval to the loss function, as follows: \[L^{(reg)}=L+\sum_{i=1}^{n}\beta(\Theta_{i2}-\Theta_{i1})^{2}, \tag{9}\] where \(\beta\) is a hyperparameter similar to the one used in weight decay. And to prevent an uncertainty interval from shrinking too much, which would recover a deterministic network, we can impose a hard threshold on the smallest allowed uncertainty interval width by specifying another hyperparameter \(w_{min}\). If, after an update is made, an uncertainty interval has a width less than \(w_{min}\) (i.e., \(\Theta_{i2}-\Theta_{i1}<w_{min}\)), then we readjust that uncertainty interval as \[\Theta_{i1} =\frac{\Theta_{i1}+\Theta_{i2}}{2}-\frac{w_{min}}{2},\] \[\Theta_{i2} =\frac{\Theta_{i1}+\Theta_{i2}}{2}+\frac{w_{min}}{2}, \tag{10}\] where both assignments use the pre-adjustment values of \(\Theta_{i1}\) and \(\Theta_{i2}\).
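To make the update rules above concrete, here is a minimal NumPy sketch of one training step for a single dune parameter vector. It assumes plain stochastic gradient descent for Eq. (4); the width penalty of Eq. (9) is applied here directly to the interval ends, and `grad_fn` is a hypothetical stand-in for backpropagation through the sampled network.

```python
import numpy as np

def dune_sgd_step(theta_lo, theta_hi, grad_fn, rng,
                  lr=0.001, beta=0.1, w_min=0.15):
    # Instantiation, Eq. (1): sample one ordinary parameter vector.
    theta = theta_lo + rng.random(theta_lo.shape) * (theta_hi - theta_lo)

    # Gradient w.r.t. the sampled values, Eq. (3), and an SGD step, Eq. (4).
    delta = -lr * grad_fn(theta)

    # Split the update between the two interval ends, Eqs. (5)-(7).
    width = theta_hi - theta_lo
    p = (theta - theta_lo) / np.where(width > 0, width, 1.0)
    theta_lo = theta_lo + np.where(width > 0, (1.0 - p) * delta, delta)
    theta_hi = theta_hi + np.where(width > 0, p * delta, delta)

    # Width penalty of Eq. (9), taken as a gradient step on the interval ends.
    w = theta_hi - theta_lo
    theta_lo = theta_lo + lr * 2.0 * beta * w
    theta_hi = theta_hi - lr * 2.0 * beta * w

    # Hard floor on the interval width, Eq. (10).
    center = 0.5 * (theta_lo + theta_hi)
    narrow = (theta_hi - theta_lo) < w_min
    theta_lo = np.where(narrow, center - 0.5 * w_min, theta_lo)
    theta_hi = np.where(narrow, center + 0.5 * w_min, theta_hi)
    return theta_lo, theta_hi
```

A full training loop would call this once per minibatch, resampling \(\tilde{\theta}\) at every iteration as described above.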
### Data representation In image recognition, we normalize the input data so that each pixel is represented as a single real value in the interval \(x\in[0,1]\) for grayscale images. For color images, each pixel is represented as three numbers, each in the interval \([0,1]\). In this work, we first consider white noise applied to the input images, adding noise to all pixels with the formula \[x^{noisy}=\frac{x^{old}}{n_{s}+1}+\frac{n_{s}}{n_{s}+1}U(0,1), \tag{11}\] where \(U(0,1)\) denotes a random number drawn from the uniform distribution on \([0,1]\). It is easy to see that the noise-to-signal ratio in the above formula is \(n_{s}\): the larger \(n_{s}\), the more noise is applied to the image. Regardless of whether noise is present in the image, each pixel is always restricted to lie in the interval \([0,1]\). Based on the idea of the dune neural network, we may treat the range of pixel values \([0,1]\) as an uncertainty interval too. In particular, we propose to apply the same linear transformation to each pixel value, before feeding those values as input to the dune neural network, with the following formula: \[\tilde{x}^{noisy}=(1+2h_{s})x^{noisy}-h_{s}. \tag{12}\] It is straightforward to see that \(\tilde{x}^{noisy}\) now lies in the interval \([-h_{s},1+h_{s}]\). It is also worth noting that the above transformation does not eliminate or weaken the noise in the noisy input data in any way, since the transformation is applied after the noise-adding procedure given by Eq. (11). The \(h_{s}\) in the above equation is a hyperparameter specified by the user. We will see later that after applying this transformation to the input data, the capability of the dune neural network to recognize noisy images is dramatically improved. We call the input data representation defined by Eq. (12) the magic shift (abbreviated Magics). In the following, we apply the approach introduced in this section to the MNIST dataset with noisy test images. We show that even without dataset augmentation, our architecture and data representation achieve excellent noise robustness compared with previous methods [13; 9; 1].
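Concretely, the noise injection of Eq. (11) and the magic shift of Eq. (12) amount to two short preprocessing steps. A minimal sketch follows; the batch shape is illustrative, and note that the noise of Eq. (11) is only added to test images, while the magic shift is applied to the inputs fed to the network.

```python
import numpy as np

def add_white_noise(x, n_s, rng):
    """Mix pixel values with uniform noise at noise-to-signal ratio n_s, Eq. (11)."""
    return x / (n_s + 1.0) + (n_s / (n_s + 1.0)) * rng.random(x.shape)

def magic_shift(x, h_s):
    """Linearly map the pixel interval [0, 1] to [-h_s, 1 + h_s], Eq. (12)."""
    return (1.0 + 2.0 * h_s) * x - h_s

rng = np.random.default_rng(0)
x_test = rng.random((64, 784))  # stand-in for a batch of flattened 28x28 images
x_in = magic_shift(add_white_noise(x_test, n_s=1.5, rng=rng), h_s=1.8)
```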
## III Results We test our approach on the MNIST dataset with simple dune neural networks. Specifically, we use the dune-network counterpart of a simple fully connected network for image classification to verify the principle of our method, with ReLU activations and a softmax classifier. We do not use dataset augmentation, so the training set always remains the same, and when we need to add noise to the testing set, we add white noise using Eq. (11) (\(n_{s}=0\) corresponds to the case of no noise). To begin with, we check that the combination of dune neural networks and Magics works correctly by monitoring the accuracy on the original testing set (without noise). The result is shown in Fig. 1; as expected, the accuracy converges toward 100% at a stable pace.

Figure 1: Test set (without noise) accuracy as a function of epochs for a two-layer fully connected dune neural network with hidden units 150 and 10. We used the Adam optimization algorithm with learning rate 0.001, \(\beta=0.1\), and \(w_{min}=0.15\). In this simple setting, both the case with Magics (\(h_{s}=0.5\)) and the case without Magics (\(h_{s}=0\)) converge toward 100% accuracy.

We then turn our attention to the noisy test set. In Fig. 2, we tested the dune neural network with Magics on the noisy test set with \(n_{s}=1.5\), without dataset augmentation. We give results for different \(w_{min}\) values; \(w_{min}=0.15\) gives the best result.

Figure 2: Noisy test set accuracy (\(n_{s}=1.5\), no dataset augmentation) as a function of epochs for a two-layer fully connected dune neural network with hidden units 150 and 10, for different \(w_{min}\) values. The common hyperparameters are: learning rate 0.001, \(\beta=0.1\), \(h_{s}=1.8\). Different \(w_{min}\) values have a slight effect on the test set accuracy, with \(w_{min}=0.15\) being the optimal choice. In practice one may need to experiment with different \(w_{min}\) to find the best value.

Next, we investigate the effect of Magics on the noisy test set accuracy, without dataset augmentation. As mentioned before, the combination of the dune neural network and Magics can dramatically improve the ability to recognize noisy images, so we try different values of the Magics hyperparameter \(h_{s}\) (\(h_{s}=0\) is the special case with no Magics). The result is shown in Fig. 3, where we can clearly see the improvement in noisy test set accuracy for positive \(h_{s}\) values. As \(h_{s}\) increases, the accuracy begins to converge.

Figure 3: Noisy test set accuracy (\(n_{s}=1.5\), no dataset augmentation) as a function of epochs for a two-layer fully connected dune neural network with hidden units 150 and 10, for different \(h_{s}\) values. The common hyperparameters are: learning rate 0.001, \(\beta=0.1\), \(w_{min}=0.15\). \(h_{s}=0\) corresponds to the case without Magics, and the network performs poorly. With positive \(h_{s}\) values, there is a dramatic improvement in the noisy test set accuracy. With large enough \(h_{s}\), the accuracy converges to about 91%.

We note that the traditional method with a simple neural network can be combined with Magics as well. To compare the noise robustness of these different approaches, we plot the noisy test set accuracy versus the noise-to-signal ratio for three methods, as shown in Fig. 4. As shown by the yellow line, based on the traditional method, the test set accuracy decreases exponentially with increasing noise-to-signal ratio. This exponential decay is also observed in many other works [3; 4] with more complex neural networks, e.g., for Gaussian blur and Gaussian noise in Ref. [3]. One of the most challenging problems in noisy image recognition is to eliminate this exponentially decaying behavior, which is successfully realized in Fig. 4, as shown by the blue line based on the combination of the dune neural network and Magics, and the green line based on the combination of the traditional method and Magics. We also pick a specific case and plot the accuracy as a function of epochs for these three different methods, as shown in Fig. 5. We can see that the accuracy of the traditional method decreases quickly with epochs due to overfitting, whereas the two methods with Magics both achieved decent noise robustness and did not overfit over the training course of 30 epochs.

Figure 4: Noisy test set accuracy as a function of the noise-to-signal ratio \(n_{s}\) (no dataset augmentation) for a two-layer fully connected dune neural network with hidden units 150 and 10, for different methods. The hyperparameters are: learning rate 0.001, \(\beta=0.1\), \(w_{min}=0.15\). For \(n_{s}\geq 1\) we set \(h_{s}=1.8\); otherwise we set \(h_{s}=2n_{s}\). The traditional method, shown by the yellow line (without the dune neural network and without Magics), performs poorly: with increasing noise-to-signal ratio the test set accuracy decreases exponentially. The combination of the dune neural network and Magics (blue line) achieves the best performance, while the combination of an ordinary neural network and Magics (green line) does slightly worse. The dashed line corresponds to the threshold of humans' ability to recognize noisy images; we show a selection of images from MNIST for different \(n_{s}\) at the bottom of the figure.

We emphasize that in our method no artificial noise is added to the training dataset for recognizing test images with added white noise. Hence, it is expected that our method can be applied to other types of noise or background patterns.
To support this conjecture, we also performed experiments on different types of noise and found that the combination of the dune neural network and Magics achieved decent noise robustness for all the different noises we considered. A selection of images with various disturbing background patterns, along with the corresponding test set accuracies, is shown in Fig. 6. The improvement in the test set accuracy of noisy image recognition is significant compared with previous results obtained with more complex methods [1; 9; 13].

Figure 5: Noisy test set accuracy (\(n_{s}=1.5\), no dataset augmentation) as a function of epochs for a two-layer fully connected dune neural network with hidden units 150 and 10, for different methods. The hyperparameters are: learning rate 0.001, \(\beta=0.1\), \(w_{min}=0.15\). For the two methods with Magics, we set \(h_{s}=1.8\).

## IV Conclusion In summary, we propose a noise robust neural network architecture based on the combination of the dune neural network and Magics applied to the input data. We tested it on MNIST for different noise types and noise strengths to demonstrate the improvement of the current method over the traditional method. In particular, we achieved decent noisy test set accuracy even without dataset augmentation, showing that our method is indeed robust for noisy image recognition. We believe the method presented here has broad applicability to the general noise problem in deep learning. For example, all existing neural network architectures can be promoted to their dune neural network versions without much extra difficulty, so our approach can naturally be applied to tasks other than image classification in the future.

Figure 6: The same image with different types of noise applied to it; the upper left corner shows the original image with an added border. At the top right corner of each image, we show the noisy test set accuracy (without dataset augmentation) using the combination of the dune neural network and Magics. Decent noise robustness is achieved in each case, showing that our method can deal with a variety of different noises and disturbing background patterns.

Moreover, a possible future direction is to gain a deeper understanding of Magics by applying it to other kinds of noisy input data, to see whether noise robustness can be achieved in other applications. The purpose of the present work is not to consider adversarial examples [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], which may deserve further study based on our method. **Acknowledgments** The authors would like to acknowledge support from Hubei Polytechnic University. **Author Contributions** YN designed the study, proposed the idea, implemented the methods, and wrote the manuscript. HW performed some numerical experiments and revised the manuscript. **Funding** This work is partly supported by the National Natural Science Foundation of China under grant numbers 11175246 and 11334001. **Declaration of competing interest** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. **Availability of data and material** The data that support the findings of this study are available from the corresponding author upon reasonable request. **Declarations** **Conflict of interest** The authors declare no conflicts of interest. **Ethics approval** Not Applicable. **Consent to participate** Not Applicable. **Consent for publication** Not Applicable.
**Code availability** All code associated with this paper is publicly available from [https://github.com/xiongyunuo/XNN](https://github.com/xiongyunuo/XNN)
2308.06807
Neural Networks for Programming Quantum Annealers
Quantum machine learning has the potential to enable advances in artificial intelligence, such as solving problems intractable on classical computers. Some fundamental ideas behind quantum machine learning are similar to kernel methods in classical machine learning. Both process information by mapping it into high-dimensional vector spaces without explicitly calculating their numerical values. We explore a setup for performing classification on labeled classical datasets, consisting of a classical neural network connected to a quantum annealer. The neural network programs the quantum annealer's controls and thereby maps the annealer's initial states into new states in the Hilbert space. The neural network's parameters are optimized to maximize the distance of states corresponding to inputs from different classes and minimize the distance between quantum states corresponding to the same class. Recent literature showed that at least some of the "learning" is due to the quantum annealer, connecting a small linear network to a quantum annealer and using it to learn small and linearly inseparable datasets. In this study, we consider a similar but not quite the same case, where a fully-fledged classical neural network is connected with a small quantum annealer. In such a setting, the fully-fledged classical neural network already has built-in nonlinearity and learning power, and can already handle the classification problem alone; we want to see whether an additional quantum layer could boost its performance. We simulate this system to learn several common datasets, including those for image and sound recognition. We conclude that adding a small quantum annealer does not provide a significant benefit over just using a regular (nonlinear) classical neural network.
Samuel Bosch, Bobak Kiani, Rui Yang, Adrian Lupascu, Seth Lloyd
2023-08-13T16:43:07Z
http://arxiv.org/abs/2308.06807v1
# Neural Networks for Programming Quantum Annealers ###### Abstract Quantum machine learning is an emerging field of research at the intersection of quantum computing and machine learning. It has the potential to enable advances in artificial intelligence, such as solving problems intractable on classical computers. Some of the fundamental ideas behind quantum machine learning are very similar to kernel methods in classical machine learning. Both process information by mapping it into high-dimensional vector spaces without explicitly calculating their numerical values. Quantum annealers are mostly studied in the adiabatic regime, a computational model in which the quantum system remains in an instantaneous ground energy eigenstate of a time-dependent Hamiltonian. Our research focuses on the diabatic regime, where the quantum state does not necessarily remain in the ground state during computation. Concretely, we explore a setup for performing classification on labeled classical datasets, consisting of a classical neural network connected to a quantum annealer. The neural network programs the quantum annealer's controls and thereby maps the annealer's initial states into new states in the Hilbert space. The neural network's parameters are optimized to maximize the distance of states corresponding to inputs from different classes and minimize the distance between quantum states corresponding to the same class. Recent literature showed that at least some of the "learning" is due to the quantum annealer, connecting a small linear network to a quantum annealer and using it to learn small and linearly inseparable datasets. In this study, we consider a similar but not quite the same case, where a fully-fledged classical neural network is connected with a small quantum annealer. In such a setting, the fully-fledged classical neural network already has built-in nonlinearity and learning power, and can already handle the classification problem alone; we want to see whether an additional quantum layer could boost its performance. We simulate this system to learn several common datasets, including those for image and sound recognition. We conclude that adding a small quantum annealer does not provide a significant benefit over just using a regular (nonlinear) classical neural network. ## I Introduction Machine learning (ML) is among the most exciting prospective applications of quantum technologies. As kernel-based methods have become a pillar of modern ML, it is natural to explore their counterparts within quantum computing. The fundamental idea of quantum computing is surprisingly similar to the principle behind kernel methods in ML: to efficiently perform computation in an intractably large vector space. In the case of quantum computers, this vector space is a Hilbert space. Quantum algorithms aim to perform efficient computations in a Hilbert space that grows exponentially with the size of a quantum system. Efficient, in this context, means that the number of operations applied to the system grows at most polynomially with the size of the system. There are different approaches to building physical quantum computers. The most well-known are universal gate-model quantum computers and quantum annealers. Gate-model (general-purpose) quantum computing is universal and flexible. However, building physical qubits and maintaining their stability is hard.
Quantum annealers are a less flexible paradigm of quantum computing but are easier to implement and realize than universal gate-model quantum computers. Quantum annealing started as a purely theoretical combinatorial optimization method [1]. It is mostly used for solving optimization problems formulated in terms of finding ground states of classical Ising spin Hamiltonians [2; 3]. More recently, though, it is also understood as a heuristic optimization method implemented in physical quantum annealing hardware [4]. A physical implementation of quantum annealing on quantum hardware might lead to speedups over algorithms running on classical hardware in some specific cases. The construction of commercial (non-universal) quantum annealing processors, such as those by the Canadian quantum computing company D-Wave Systems [5; 6; 7; 8; 9; 10], was inspired by earlier theoretical proposals [11; 12]. This work investigates a hybrid quantum-classical approach to performing classification tasks. The setup is illustrated and described in figures (1) and (2). In this study, we consider a setting in which a fully-fledged classical neural network is connected to a small quantum annealer. The fully-fledged classical neural network already has built-in nonlinearity and can already handle the classification problem alone; we want to see whether an additional quantum layer could boost its performance. Supervised training is performed on classical labeled datasets, such as MNIST and CIFAR, shown in figures (3a) and (4). The raw data is fed into a (classical) neural network. But instead of performing the entire learning procedure with it (i.e., directly reading the outputs of the neural network for classification), we feed the neural network's output as the input to a quantum annealer. The quantum annealer interprets the neural network's output as the annealing schedule, as described in section (III). Once the quantum annealer in figures (1) and (2) outputs a final quantum state \(|\psi_{final}\rangle\), this state is used to calculate the loss function, which in turn is used to optimize the NN parameters via gradient descent. The complete training procedure is shown in figure (1). ## II Related Work ### Quantum Machine Learning As quantum technologies continue to advance rapidly, it becomes increasingly important to understand which applications can benefit from the power of these devices. Over the past decades, ML on classical computers has made significant progress. Advanced ML and artificial intelligence (AI) tools have become openly available and easily accessible through libraries such as TensorFlow [13], Scikit-learn [14], and PyTorch [15] - to name a few. Together with increasing computing power, this resulted in revolutionizing applications such as text and image recognition, translation, and recently even the development of models for generating digital images from natural language descriptions - such as _DALL-E 2_ [16]. Nevertheless, progress in applications such as self-driving cars [17] has been slower than many would have hoped, reminding us of the limitations of ML and AI. Therefore, if quantum computers could accelerate ML, the potential for impact would be significant. Two major paths to the quantum enhancement of ML have been considered [18]. First, driven by quantum applications in optimization [19; 20; 21; 22; 23], the power of quantum computing could be used to help improve the learning process of existing classical models [24], or improve inference in graphical models [25].
This could result in better methods for finding optima in a training landscape, or in methods requiring fewer queries. However, in most general unstructured problems, the advantage of these algorithms compared to their classical counterparts may be limited to quadratic or small polynomial speedups [26; 27]. The second area of interest is the possibility of using quantum models to express functions that are inefficient to represent by classical computation [18]. Recent theoretical and experimental successes demonstrating quantum computations beyond classical tractability can be considered evidence that quantum computers can sample from probability distributions that are exponentially difficult to sample from classically [28; 29]. It is important to emphasize that these distributions have no known real-world applications. This is the type of advantage sought in recent work on quantum neural networks (QNNs) [30; 31; 32], which seek to parameterize a distribution through a set of adjustable parameters. Nevertheless, it is well known that these QNNs come with a host of trainability issues that must be addressed before they are implemented in large-scale systems [33; 34; 35].

Figure 1: _Training procedure._ Training the model works as follows. First, the neural network receives (classical) input data and feeds it to the quantum annealer, which outputs a state \(|\psi_{final}\rangle\) for each input. For details, see figure (2) and section III. In the above example, the inputs are 28 by 28 grayscale MNIST images, corresponding to hand-written digits from 0 to 9. Second, all the \(|\psi_{final}\rangle\) corresponding to the same class are grouped, and a density matrix is calculated for each class, as shown in the top right corner. Third, the Hilbert-Schmidt distance is calculated between all pairs of density matrices. Intuitively, this can be understood as the average distance between all the clusters of states on a Bloch sphere, as shown in a 1-qubit example in figure (3b). Fourth, the loss function of the neural network is defined as the negative Hilbert-Schmidt distance and is then used to perform gradient descent on the neural network's parameters. These four steps are repeated until the loss function converges. Intuitively, the neural network is optimized in such a way that the output states \(|\psi_{final}\rangle\) cluster into groups located as far away from each other as possible, illustrated on the right side of figure (3b). Once the neural network is trained well, we can use it to perform classification on new input data, because the input will be mapped into a quantum state close to its corresponding cluster.

### Kernel methods Some quantum algorithms, such as quantum principal component analysis [36] and quantum support vector machines [21], have been inspired by classical machine learning algorithms. This field is generally known as _quantum machine learning (QML)_ [37; 38]. Similar approaches have been used to create quantum algorithms inspired by classical neural networks. It has been shown that the mathematical frameworks of many QML models are similar to _kernel methods_ [39]. Quantum computers and kernel methods process information by mapping it into high- (or infinite-) dimensional vector spaces without explicitly calculating each element of this vector space. In this work, we focus on QML in the context of learning classical data, even though QML can be applied to quantum data as well [40; 41].
To perform computation, data must be encoded into the quantum system's physical states, which is equivalent to a _feature map_ [42; 43]. This map assigns quantum states to classical data, and the inner product of the encoded quantum states gives rise to a kernel [44; 45; 21; 43]. In classical kernel methods, we access the feature space through kernels, i.e., inner products of feature vectors. In quantum kernel methods, we likewise indirectly access the high-dimensional feature space through inner products of quantum states, which are facilitated by measurements. While the Hilbert space of quantum states may be vast, deterministic quantum models can still be trained and operated in a lower-dimensional subspace. Thus, we obtain the benefits of kernel methods without requiring direct access to the entire Hilbert space. The equivalence of QML and kernel methods has resulted in research on kernelized ML models [46] and quantum generative models [47], as well as work on understanding the difference between the complexities of quantum and classical ML algorithms [48; 43; 18] - to mention a few examples. The connection between kernel and quantum learning methods arises from the classical data encoding process (quantum embedding), which determines the kernel. Although the quantum kernel has the potential to access an extremely high-dimensional Hilbert space, training of a quantum learning model actually occurs in a lower-dimensional subspace. This contrasts with variational models, where optimizing a significant number of classical parameters and finding a suitable circuit ansatz are necessary; the ansatz's parameters also require optimization. It is important to note that quantum learning models, including quantum neural networks (QNNs), do not always require exponentially many parameters to achieve effective results. It is also important to consider the limitations of kernel methods, as highlighted by Kubler et al. [49]. The authors suggest that quantum machine learning models can only offer speed-ups if we manage to encode knowledge about the problem at hand into quantum circuits, while encoding the same bias into a classical model would be hard. They argue that this situation may plausibly occur when learning on data generated by a quantum process, but it seems more challenging for classical datasets [49]. Furthermore, the authors show that finding suitable quantum kernels is not easy, because the kernel evaluation might require exponentially many measurements, which could limit the practicality of quantum kernel methods in some cases [49]. Despite the potential benefits of quantum kernel methods, these limitations should be taken into account when considering their use in quantum machine learning. ### Quantum Annealing An in-depth explanation can be found in Albash _et al._, 2018 [1], but for our purposes it suffices to define adiabatic quantum annealing as a computational model in which the system remains in an instantaneous ground state of a time-dependent Hamiltonian \(H(t)\) [3]. For example, define \[H(t)=A(t)H_{X}+B(t)H_{Z} \tag{1}\] where \(H_{X}=-\sum_{i}\sigma_{X}^{(i)}\) is built from the Pauli \(\sigma_{x}\) matrices \(\sigma_{X}^{(i)}\) acting on the \(i\)-th qubit, and \(H_{Z}\) is a Hamiltonian that is diagonal in the computational basis of eigenstates of tensor products of \(\sigma_{Z}^{(i)}\). \(A(t)\) and \(B(t)\) are the time-dependent transverse and longitudinal coupling strengths. \(H_{Z}\) is the problem Hamiltonian, and its ground states encode the solution to a computational problem.
In the case of standard quantum annealing, we start with \(A(t=0)=A_{0}\) and then monotonically decrease this to \(A(t=T)=0\) during a time interval \([0,T]\). During the same time period, \(B(t)\), the control of our problem Hamiltonian \(H_{Z}\), is monotonically increased from \(B(t=0)=0\) to \(B(t=T)=B_{0}\). The core assumption here is that this transition happens slowly enough that the system (which started in the ground state of \(H_{X}\)) remains in the ground state of \(H(t)\) until it eventually ends up in the ground state of \(H_{Z}\) at time \(t=T\). This gives us access to the solution of our computational problem. In a closed system, the time evolution of the entire system can be described by the time-ordered exponential \[\left|\psi_{final}\right\rangle=\mathcal{T}\exp\Big{(}-\frac{i}{\hbar}\int_{0}^{T}H(t)\,dt\Big{)}\left|\psi_{initial}\right\rangle \tag{2}\] In this work, we are not just focused on the adiabatic limit of quantum annealing (where the system always remains in the ground state of \(H(t)\)) but also on the diabatic setting, where diabatic transitions from and to low-energy excited states are permitted. The term _diabatic quantum annealing_ first appeared in _Muthukrishnan et al., 2016_ [50], although the same concept appeared in earlier literature as _nonadiabatic quantum annealing_ [51], and later also as _pulsed quantum annealing_ [52]. With this in mind, we can now define _diabatic quantum computing_. To do so, we relax the condition that the system must always remain in a single instantaneous eigenstate of \(H(t)\). Instead, diabatic quantum computing and annealing are computational models (universal and for optimization, respectively) in which the system remains in a subspace spanned by eigenstates of \(H(t)\) at all times. The instantaneous computational eigenstate in quantum annealing and adiabatic quantum computing is usually taken to be the ground state of \(H(t)\). Correspondingly, the energy band in diabatic quantum computing is usually taken to be the low-energy subspace of \(H(t)\), even though, by definition, it could be any part of the energy spectrum. It is important to point out that in diabatic quantum computing, the final state reached in equation (2) does not have to be the ground state of \(H(t)\). Diabatic quantum annealing dynamics are generally non-local, because the unitary in equation (2) cannot be written as a tensor product of locally acting unitaries. This is due to the non-commutativity of the terms in equation (1). The standard method for digitizing this non-local unitary on a gate-based device requires a very large number of gates [53], meaning that diabatic quantum annealing hardware can be used to run algorithms distinct from those implemented on gate-based devices. ## III Setup and Simulations To illustrate the idea behind the training procedure, we present a simplified example, illustrated in figure (3). We use a subset of the MNIST database of handwritten digits, containing only 0s and 1s - shown in figure (3a) - together with the training setup from figure (1) and a small NN to train the model in figure (2). For simplicity, we use \(n=1\) qubit, so we can visualize all states on Bloch spheres. The NN is randomly initialized, so after passing the input data through the small NN and then feeding the NN's output into the schedule parameters of the quantum annealer to evolve its initial state into a final state, we end up with a cluster of randomly mixed \(|\psi_{final}\rangle\) states.
This pre-training stage is illustrated on the left Bloch sphere in figure (3b). The loss function, which is the negative average Hilbert-Schmidt distance defined in equation (6), is close to zero. Next, we start training the NN by minimizing the loss function (or, equivalently, maximizing the Hilbert-Schmidt distance between states corresponding to different classes) until we eventually end up with a trained NN. Now, if we pass the same (or new) images of 0s and 1s through the trained NN and then pass the NN's output to the quantum annealer to get evolved final states, we end up with states as shown on the right of figure (3b). Here, we can clearly see that the input data was mapped into two distinct clusters, corresponding to 0s and 1s, respectively.

Figure 2: Hybrid classical neural network/quantum annealer setup. The setup works as follows: the neural network receives classical data as input and forward-passes the information through its layers. In the above case, it's a 28 by 28 grayscale MNIST image, and the neural network has only one hidden layer with 120 neurons. The outputs of the neural network, \(f_{ik}\), are the annealing schedule for the quantum annealer. The quantum annealer starts with state \(\ket{0,...,0}\) and outputs state \(\ket{\psi_{final}}\), which is defined above. The value of \(i\) corresponds to the different Hamiltonians which the quantum annealer can apply (see the beginning of section III), and the value of \(k\) corresponds to the discretized time instance of the annealing schedule. The output state \(\ket{\psi_{final}}\) can then be used for training the model, as shown in figure (1), as well as for performing classification tasks.

The hardware we simulated is able to apply the following operators to the state inside the quantum annealer:
* \(\sigma_{x}\) operation on individual qubits: \(\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\)
* \(\sigma_{z}\) operation on individual qubits: \(\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\)
* \(\sigma_{z}\otimes\sigma_{z}\) on each pair of qubits

The hardware has the ability to apply any linear combination of these operators at a given time and can vary their individual intensities over time. In total, for an \(n\)-qubit quantum system, there are \(m=n+n+\frac{n(n-1)}{2}=\frac{n}{2}(n+3)\) Hamiltonians, and therefore also \(m\) parameters which control the system at any instance of time. Our goal is to create a hybrid NN, which consists of a classical NN and a quantum diabatic annealer. The Hamiltonian at any time instance can be described as a linear combination of the form: \[H(t)=\sum_{i=1}^{m}f_{i}(t)H_{i} \tag{3}\] where \(H_{i}\) for \(i\in\{1,...,n\}\) refers to single \(\sigma_{x}\) gates acting on the \(i^{th}\) qubit, \(H_{i}\) for \(i\in\{n+1,...,2n\}\) refers to single \(\sigma_{z}\) gates acting on the \((i-n)^{th}\) qubit, and \(H_{i}\) for \(i\in\{2n+1,...,m\}\) refers to double \(\sigma_{z}\otimes\sigma_{z}\) gates acting on all pairs of qubits \((j,l)\) with \(1\leq j<l\leq n\). The functions \(f_{i}(t)\) (or \(f_{ik}\) in the discretized version from figure 2) define "how much" of the Hamiltonian \(H_{i}\) is applied at the time instance \(t\). Here \(t\in[0,T]\), meaning that the Hamiltonians are applied within a fixed time period \(T\).
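As a concrete illustration of Eq. (3), the \(m=\frac{n}{2}(n+3)\) control Hamiltonians can be assembled as dense matrices via Kronecker products. A sketch in NumPy follows; dense \(2^{n}\times 2^{n}\) matrices are only practical for small \(n\), consistent with the \(n\leq 10\) simulations reported below.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
SZ = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z

def single(op, i, n):
    """op acting on qubit i of an n-qubit register (identity elsewhere)."""
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

def control_hamiltonians(n):
    """All sigma_x^(i), all sigma_z^(i), and sigma_z x sigma_z on each qubit pair."""
    hams = [single(SX, i, n) for i in range(n)]
    hams += [single(SZ, i, n) for i in range(n)]
    hams += [single(SZ, i, n) @ single(SZ, j, n)
             for i in range(n) for j in range(i + 1, n)]
    assert len(hams) == n * (n + 3) // 2  # m = n(n+3)/2
    return hams
```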
The goal of this simulation is to find the right weights and biases of the classical NN which, after classical input data is forward-passed through it, outputs the optimal \(f_{i}(t)\) functions mapping the initial quantum state \(|\psi_{init}\rangle=|0...0\rangle\) to _maximally_ distant final quantum states \(|\psi_{final}\rangle\) for input data corresponding to distinct classes. At the same time, for input data corresponding to the same class, it should map the initial quantum state \(|\psi_{init}\rangle\) to _minimally_ distant final quantum states \(|\psi_{final}^{(k)}\rangle\). Here \(k\) is simply the label of a class for classification purposes. In other words, the \(|\psi_{final}^{(k)}\rangle\) states form clusters, as shown in an \(n=1\) qubit example with two classes in figure (3). Let us explain the details of the whole setup, shown in figure 2, starting with the classical NN on the left. As aforementioned, the input to the NN can be any type of labeled classical dataset (not restricted to images). We used different types of NNs, depending on how complex a given dataset is. For MNIST, shown in figure (4), a simple NN with one hidden layer with 120 neurons was sufficient. More complex datasets, such as CIFAR [54], also shown in figure (4), required convolutional neural networks (CNNs) with several hidden layers. It is common practice in the ML community to normalize all the input data \(D\), so that \(max(D)=1\), \(min(D)=-1\), and \(mean(D)=0\). The weights and biases of the NN are initialized randomly. For example, in the case of MNIST (as shown in figure 2), using tanh activation functions to connect the input layer with the hidden layer, followed by sigmoid activation functions connecting the hidden layer with the output layer, worked well.

Figure 3: A simplified example of training our hybrid neural network/quantum annealer from figure (2) through the training procedure shown in figure (1).

Once the input data has been passed through the randomly initialized NN, we receive the annealing schedule, which is the input to the quantum annealer shown in figure (2). As the output of the NN is supposed to describe the annealing schedules \(f_{i}(t)\), we have to restrict \(f_{i}(t)\) to a set of discrete values, as opposed to continuously changing functions. Therefore, we have to choose a value for the total number of discrete "steps" that each of the functions \(f_{i}(t)\) can take. Hence, the actual output of the NN is not continuous functions \(f_{i}(t)\), but rather discrete values \(f_{ik}\), as shown in figure (2). The total size of the NN's output layer is the number of discretized steps multiplied by the number of control Hamiltonians: \[\text{NN output size}=steps\times m=steps\times\tfrac{n}{2}(n+3)\] Next, let us talk about the quantum annealer, shown in the right half of figure 2. The quantum annealer is initialized with the standard \(\ket{\psi_{input}}=\ket{0,...,0}\) quantum state (which is the ground state of a pure longitudinal Hamiltonian without interactions), and then evolves this state using the Hamiltonian \(H(t)\) described in equation (3). Given that the Hamiltonian \(H(t)\) in our discretized model changes only every \(\Delta t=\frac{T}{steps}\), the change of the quantum state inside the quantum annealer between each discretized time instance can be described by: \[\ket{\psi(t_{k+1})}=e^{\frac{-i\Delta t}{\hbar}H(t_{k})}\ket{\psi(t_{k})} \tag{4}\] where \(t_{k}=k\Delta t\) and \(H(t)=\sum_{i=1}^{m}f_{i}(t)H_{i}\).
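A sketch of this stepwise evolution, using the control Hamiltonians constructed above: the schedule \(f_{ik}\) produced by the NN is applied piecewise, with each step exponentiated exactly via SciPy (units with \(\hbar=1\) are assumed; the larger simulations in this paper use a Suzuki-Trotter approximation instead, as discussed under "Computational methods and errors" below).

```python
import numpy as np
from scipy.linalg import expm

def anneal(f, hams, T=1.0):
    """Evolve |0,...,0> under the piecewise-constant schedule f, Eq. (4).

    f: array of shape (steps, m) holding the NN outputs f_ik.
    hams: list of m control Hamiltonians (2^n x 2^n complex matrices).
    """
    steps, m = f.shape
    dt = T / steps
    psi = np.zeros(hams[0].shape[0], dtype=complex)
    psi[0] = 1.0  # the initial state |0,...,0>
    for k in range(steps):
        H_k = sum(f[k, i] * hams[i] for i in range(m))  # Eq. (3) at t = t_k
        psi = expm(-1j * dt * H_k) @ psi                # Eq. (4)
    return psi
```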
The final quantum state is: \[\ket{\psi_{final}}=\prod_{k=0}^{N-1}e^{-\frac{i\Delta t}{\hbar}\sum_{i=1}^{m}f_{ik}H_{i}}\ket{0,...,0} \tag{5}\] where \(f_{ik}=f_{i}(t_{k})\) and \(N\) is the number of discretized steps, with \(t_{N}=T\). Therefore, given all parameters of the classical NN, any input to the entire setup in figure (2) is mapped to a particular quantum state. ### Training procedure In the previous section, we described how a classical input state (for example, a handwritten digit) can be mapped into a quantum state through the setup shown in figure (2). In this section, we explain how this setup can be used for classification tasks. A key element of every ML application is a training procedure, which uses a training dataset. The whole training procedure is illustrated in figure (1). The first part of the training procedure simply repeats the steps described in section III. We take the whole dataset of, in this illustration, MNIST 28 by 28 images, pass them through the NN, and then use the outputs of the NN as annealing schedules to create a set of output states \(\ket{\psi_{final}}\). As a next step, we create a mixed density matrix for every class in the dataset. For a standard MNIST dataset, there are \(K=10\) classes, one for each digit. The density matrix corresponding to class \(j\) is defined as \(\rho_{j}=\frac{1}{N_{j}}\sum_{i:\,\mathrm{label}_{i}=j}\ket{\psi_{final}^{(i)}}\bra{\psi_{final}^{(i)}}\), where \(N_{j}\) is the number of inputs with label \(j\); the normalization gives each \(\rho_{j}\) unit trace. The goal of the training procedure is to make these density matrices as distinct as possible. We used the Hilbert-Schmidt distance as a metric to compare two density matrices. The Hilbert-Schmidt distance is defined as: \[D_{HS}(\rho,\sigma)=Tr\big{[}(\rho-\sigma)^{2}\big{]}=||\rho-\sigma||_{2}^{2} \tag{6}\] We could also have used the standard trace distance, defined as \(D(\rho,\sigma)=\frac{1}{2}Tr\big{[}|\rho-\sigma|\big{]}=\frac{1}{2}||\rho-\sigma||_{1}\). We decided not to use it, as the trace distance is more difficult to compute than the Hilbert-Schmidt distance, since it involves the diagonalization of \(\rho-\sigma\) [55]. Even though a mixed density matrix cannot be shown as a point on the Bloch sphere, such as in figure (3b), we can still imagine the training procedure to be somewhat similar. Our goal is to maximize the distance between states corresponding to different classes while keeping the distance between states corresponding to the same class as small as possible. Therefore, we define the loss function of the NN as minus the average Hilbert-Schmidt distance between all pairs of density matrices \(\rho_{j}\). Having a loss function allows us to use gradient descent (or another minimization algorithm) on all the parameters of the NN. This gradient-update procedure is repeated until the loss function converges to a minimum (i.e., maximum average distance).
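The training objective then reduces to a few lines: group the final states by class, form the (unit-trace) density matrices, and average the pairwise Hilbert-Schmidt distances of Eq. (6). A sketch, for final states as produced by the `anneal` function above:

```python
import numpy as np

def hs_loss(states, labels, num_classes):
    """Negative mean pairwise Hilbert-Schmidt distance between class density
    matrices. states: complex array (N, dim); labels: int array (N,)."""
    rhos = []
    for j in range(num_classes):
        cls = states[labels == j]
        # Average the projectors |psi><psi| of class j (unit trace by construction).
        rhos.append((cls[:, :, None] * cls.conj()[:, None, :]).mean(axis=0))
    dists = [np.trace((rhos[a] - rhos[b]) @ (rhos[a] - rhos[b])).real
             for a in range(num_classes) for b in range(a + 1, num_classes)]
    return -float(np.mean(dists))
```

In the actual setup, this loss is computed on the simulator's output states and differentiated through the whole pipeline to update the NN parameters.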
### Computational methods and errors In setting up the hybrid NN-QA model in this study, the choice was made to work with the largest number of qubits possible, as determined by practical memory and computation time limits on a conventional computer. The main bottleneck for the runtime was calculating the matrix exponentiations; the main bottleneck for memory consumption was performing the backpropagation (gradient calculation) step during the NN training procedure. Also, before even looking into the results of our simulations, it is essential to determine the magnitudes of the error rates of our approximation methods and whether or not they accurately simulate the quantum annealer. All numerical methods used are already well-established, including the matrix exponentiation approximation method needed for simulating equation (4). This method is based on the SSA approximation [56; 57], and it is one of the fastest and most accurate methods for general, non-sparse matrix exponentiation. Nevertheless, the computational resources needed for this matrix exponentiation approximation scale roughly as \(\mathcal{O}((2^{n})^{3})\). Therefore, given the large number of matrix exponentiations used in our simulations, we had to implement the Suzuki-Trotter approximation, explained in more detail in appendix B. The Suzuki-Trotter approximation method can be made arbitrarily accurate by adjusting its parameters, resulting in a trade-off between accuracy and the computational resources needed. A more detailed investigation of this method can be found in appendix B. ## IV Results In this section we discuss the datasets used, the classification accuracies achieved with them, and the overall improvement brought by the quantum annealer. ### Datasets and learning models In this section, we analyze the classification accuracies of four different setups on four different datasets. The datasets listed in table 1 are MNIST, CIFAR, ISOLET, and UCIHAR:
* **MNIST** consists of 28 by 28 grayscale images of handwritten digits from 0 to 9, shown in figure (4). As of writing, the highest classification accuracy is 99.91%, achieved by using a majority vote between several convolutional neural networks [58].
* **CIFAR** consists of 32 by 32 color images of 10 different objects, shown in figure (4). As of writing, the highest classification accuracy achieved is 99.5%, by using transformer models, shaking up the long "supremacy" of CNNs [59].
* **ISOLET** consists of human voice recordings pronouncing the letters of the English alphabet, hence 26 different classes.
* Lastly, **UCIHAR** consists of acceleration and velocity data from a smartphone's gyroscope, as well as other motion, time, and frequency variables. The 12 labeled classes include activities such as walking upstairs, sitting, standing, etc.

Even though the use of CNNs and transformer models can accomplish remarkable accuracies, we decided not to use such complex models. First, if state-of-the-art models were used, most of the learning would happen in the classical part of the model, leaving little room for improvement by the quantum annealer. Second, simulating a complex state-of-the-art model connected to a quantum annealer would be computationally infeasible, especially given the exponential growth of the quantum annealer simulation with the number of qubits. Not all datasets were equally easy to learn, and each required tuning of the network architecture. Here, we either used a simple neural network with one hidden layer or a more complex convolutional neural network. We ran both of these networks in isolation and contrasted this with the setting where they were connected to quantum annealers. The four different setups used to learn the aforementioned datasets are:
* **NN**: This is a very simple classical neural network consisting of an input layer with one neuron for each input feature, one hidden layer with 120 neurons, and one output layer with one neuron for each class. The output is, therefore, \(K\) real numbers between 0 and 1, indicating how similar the input is to each class. Training is done by classical loss function minimization.
* **NN + Annealer**: This is a very simple classical neural network consisting of an input layer with one neuron for each input feature, one hidden layer with 120 neurons, and one output layer with one neuron for each control parameter of the quantum annealer. The outputs are the parameters \(f_{ik}\) controlling the quantum annealer, as shown in figure (2). The resulting quantum states are then used to train the neural network, as illustrated in figure (1).
* **CNN**: This is a small classical convolutional neural network consisting of two 2D convolutions. The first convolution has three input channels, six output channels, and a kernel size of five, so it has \((3\times 5\times 5+1)\times 6=456\) parameters. The second convolution has six input channels, 16 output channels, and a kernel size of five, so it has \((6\times 5\times 5+1)\times 16=2416\) parameters. These are followed by an input layer with one neuron for each input feature, two hidden layers with 120 and 84 neurons, respectively, and an output layer with one neuron for each class. The output is, therefore, \(K\) real numbers between 0 and 1, as shown in figure (5). Training is, again, done by classical loss function minimization.
* **CNN + Annealer**: This is a small classical convolutional neural network consisting of two 2D convolutions. The first convolution has three input channels, six output channels, and a kernel size of five, so it has \((3\times 5\times 5+1)\times 6=456\) parameters. The second convolution has six input channels, 16 output channels, and a kernel size of five, so it has \((6\times 5\times 5+1)\times 16=2416\) parameters. These are followed by an input layer with one neuron for each input feature, two hidden layers with 120 and 84 neurons, respectively, and one output layer with one neuron for each control parameter of the quantum annealer. The outputs, again, are the parameters \(f_{ik}\) controlling the quantum annealer, as shown in figure (2), and the resulting quantum states are then used to train the neural network, as illustrated in figure (1).

| Dataset | \(n\) | \(K\) | Train | Test | Description | Source |
|---|---|---|---|---|---|---|
| **MNIST** | 784 | 10 | 60,000 | 10,000 | Image recognition of handwritten digits (fig. 4) | [60] |
| **CIFAR** | 3072 | 10 | 50,000 | 10,000 | Image recognition of different objects (fig. 4) | [54] |
| **ISOLET** | 617 | 26 | 6,238 | 1,559 | Isolated letter speech recognition | [61] |
| **UCIHAR** | 561 | 12 | 6,233 | 1,554 | Human activity recognition with smartphones | [62] |

Table 1: Datasets (\(n\): # of features, \(K\): # of classes).

### Is there any advantage in using a quantum annealer? The results of our simulation experiments can be found in table 2. We performed 32 different experiments, 8 for each dataset. Firstly, it is important to note that without training, the classification accuracy of our NN and CNN was always around \(\frac{1}{K}\), which is equivalent to random guessing. This is no surprise, given that the network cannot know which output neuron corresponds to which class in our prediction model. At the same time, when we connect the setup to an annealer, the classification accuracy is always better than \(\frac{1}{K}\). In the case of MNIST, it is around 59%; for CIFAR, it is 12%; for ISOLET, it is 44%; and for UCIHAR, it is 44%. This is because the output vectors \(|\psi_{final}\rangle\) are mapped to classes based on how close they are in the Hilbert space.
Even with a neural network that is completely untrained, it is possible to achieve classification accuracies better than random choice. Nevertheless, this is not a property unique to quantum annealers but rather a consequence of the way in which classification is performed. To confirm this, we created a classical model which averages all images corresponding to a given class and then performs classification by comparing images to the \(K\) different averages. For MNIST, the accuracy of this trivial model was around 63%; for CIFAR, around 13%; for ISOLET, around 47%; and for UCIHAR, also around 47%, with error margins of less than 1%. Therefore, quantum annealers provide no advantage over classical models without performing training. Next, let us have a closer look at what happens after the training procedure, as shown in figure (1). The results are reported in table (2). We can see that training a (C)NN + Annealer model never results in better classification accuracies compared to a classical (C)NN model. There are multiple possible explanations for this negative result:
1. The annealers which we simulate may not be large enough. We could only consistently simulate systems with up to \(n=10\) qubits. It is possible that with a significantly larger number of qubits, an advantage is gained from using a quantum annealer.
2. It could be that we were using the wrong datasets. There is no apparent reason why, for example, image classification should benefit from a quantum annealer. All our datasets work really well with classical machine learning models, which is why they are so commonly used in research. Other datasets containing data about quantum systems might benefit from adding a quantum annealer to a neural network.
3. Lastly, it could be that there is just no benefit from using our system from figure (2).
This would mean that there exists no quantum advantage from connecting neural networks to quantum annealers and using them to perform classical classification.

| **Architecture** | **Training** | MNIST 10 Accuracy | Params | CIFAR 10 Accuracy | Params | ISOLET 26 Accuracy | Params | UCIHAR 12 Accuracy | Params |
|---|---|---|---|---|---|---|---|---|---|
| **NN** | No | \((10.0\pm 0.2)\%\) | 97,346 | \((10.0\pm 0.2)\%\) | 371,906 | \((3.8\pm 0.2)\%\) | 77,306 | \((8.3\pm 0.5)\%\) | 70,586 |
| **NN** | Yes | \((88.2\pm 0.3)\%\) | 97,346 | \((44.1\pm 0.4)\%\) | 371,906 | \((86.2\pm 0.3)\%\) | 77,306 | \((87.2\pm 0.2)\%\) | 70,586 |
| **NN + Annealer** | No | \((59\pm 1)\%\) | 133,525 | \((12\pm 2)\%\) | 408,085 | \((43.9\pm 0.4)\%\) | 113,485 | \((44.2\pm 0.3)\%\) | 106,765 |
| **NN + Annealer** | Yes | \((86\pm 1)\%\) | 133,525 | \((13\pm 2)\%\) | 408,085 | \((85.5\pm 0.4)\%\) | 113,485 | \((85.9\pm 0.4)\%\) | 106,765 |
| **CNN** | No | \((10.0\pm 0.2)\%\) | 45,786 | \((10.0\pm 0.2)\%\) | 63,366 | \((3.9\pm 0.1)\%\) | 63,366 | \((8.8\pm 0.5)\%\) | 63,366 |
| **CNN** | Yes | \((95.0\pm 0.3)\%\) | 45,786 | \((49.9\pm 0.2)\%\) | 63,366 | \((93.4\pm 0.2)\%\) | 63,366 | \((94.2\pm 0.3)\%\) | 63,366 |
| **CNN + Annealer** | No | \((53\pm 1)\%\) | 88,781 | \((18.1\pm 0.5)\%\) | 88,781 | \((42\pm 1)\%\) | 88,781 | \((44\pm 1)\%\) | 88,781 |
| **CNN + Annealer** | Yes | \((91\pm 2)\%\) | 88,781 | \((18\pm 1)\%\) | 88,781 | \((90\pm 2)\%\) | 88,781 | \((92\pm 1)\%\) | 88,781 |

Table 2: Classification accuracies and numbers of trainable parameters, for n=10 qubits, steps=10, TN=50. These results are discussed in section (IV.2). The reason why our (C)NNs do not have the same number of parameters as the (C)NNs + Annealers is that the output layers are of different sizes: the (C)NN has \(K\) output features (one for every class), while the (C)NN + Annealer has one output for every parameter of the annealing schedule.

Figure 4: a) The MNIST dataset from [60]. It consists of 28 by 28 grayscale images (784 numbers, each between 0 and 127). The full training dataset contains 60,000 images and the testing dataset 10,000 images. b) The CIFAR dataset from [54]. It consists of 32 by 32 color images (1024 triples of numbers, each between 0 and 127). The full training dataset contains 50,000 images, and the testing dataset 10,000 images.

Simulating a system with more qubits might be very difficult, so it is unlikely that significant progress can be made in this direction. Therefore, we suggest future investigations focusing on applying our system to learning inherently different datasets. An example could be quantum datasets, such as those arising in quantum state tomography and quantum error correction [63]. ## V Conclusion We explore a setup for performing classification on labeled classical datasets, consisting of a classical neural network connected to a quantum annealer. Quantum annealers are mostly studied in the adiabatic regime, a computational model in which the quantum system remains in an instantaneous eigenstate of a time-dependent Hamiltonian. However, in our research, we focus on the diabatic regime, where the quantum state does not always have to remain in the ground state.
The neural network programs the quantum annealer's controls and thereby maps the annealer's initial states into new states in the Hilbert space. The neural network's parameters are optimized to maximize the distance of states corresponding to inputs from different classes and minimize the distance between quantum states corresponding to the same class. Recent literature [64] demonstrates that at least some of the "learning" is due to the quantum annealer in our setup. This is because the quantum annealer introduces a non-linearity in the network. No matter how large, neural networks can only be more powerful than linear classifiers if non-linearities are introduced between layers. In [64], the authors classified a linearly inseparable dataset by using only a linear layer plus the quantum annealing setup, explicitly showing that the learning is due to the non-linearity introduced by the quantum annealer. Also, such a setup can perform classification with fewer parameters for training and inference. In this study, we consider a similar but not quite the same case, where a fully-fledged classical neural network is connected with a small quantum annealer (of course, adding back the full classical NN compromises the advantage in the number of parameters). In such a setting, the fully-fledged classical neural network already has built-in nonlinearity and can already handle the classification problem alone; we want to see whether an additional quantum layer could boost its performance. However, as of writing, we did not find sufficient evidence demonstrating any advantage over just using classical neural networks. This could be due to our inability to simulate large quantum annealers effectively, due to our datasets, or because there is no real advantage to using our setup at all. **ACKNOWLEDGMENTS** This publication is the result of research funded by the Defense Advanced Research Projects Agency (DARPA) under agreement No. HR00112109969.
2305.12656
Computing Multi-Eigenpairs of High-Dimensional Eigenvalue Problems Using Tensor Neural Networks
In this paper, we propose a type of tensor-neural-network-based machine learning method to compute multi-eigenpairs of high dimensional eigenvalue problems without the Monte-Carlo procedure. Solving for multi-eigenvalues and their corresponding eigenfunctions is one of the basic tasks in mathematical and computational physics. With the help of the tensor neural network and the deep Ritz method, the high dimensional integrations included in the loss functions of the machine learning process can be computed with high accuracy. This high accuracy of the high dimensional integrations improves the accuracy of the machine learning method for computing multi-eigenpairs of high dimensional eigenvalue problems. Here, we introduce the tensor neural network and design the machine learning method for computing multi-eigenpairs of high dimensional eigenvalue problems. The proposed numerical method is validated with plenty of numerical examples.
Yifan Wang, Hehu Xie
2023-05-22T02:55:25Z
http://arxiv.org/abs/2305.12656v1
# Computing Multi-Eigenpairs of High-Dimensional Eigenvalue Problems Using Tensor Neural Networks+ ###### Abstract In this paper, we propose a type of tensor-neural-network-based machine learning method to compute multi-eigenpairs of high dimensional eigenvalue problems without the Monte-Carlo procedure. Solving for multi-eigenvalues and their corresponding eigenfunctions is one of the basic tasks in mathematical and computational physics. With the help of the tensor neural network and the deep Ritz method, the high dimensional integrations included in the loss functions of the machine learning process can be computed with high accuracy. This high accuracy of the high dimensional integrations improves the accuracy of the machine learning method for computing multi-eigenpairs of high dimensional eigenvalue problems. Here, we introduce the tensor neural network and design the machine learning method for computing multi-eigenpairs of high dimensional eigenvalue problems. The proposed numerical method is validated with plenty of numerical examples. **Keywords.** high-dimensional eigenvalue problem, tensor neural network, deep Ritz method, high-accuracy, machine learning, multi-eigenpairs. **AMS subject classifications.** 65N30, 65N25, 65L15, 65B99 ## 1 Introduction In modern science and engineering, there exist many high-dimensional problems arising from quantum mechanics, statistical mechanics, and financial engineering. More and more high dimensional eigenvalue problems have appeared along with the development of science and engineering, and computing the eigenvalues and eigenfunctions of high dimensional operators is becoming more and more important in modern scientific computing. The most famous and important high dimensional eigenvalue problem is the Schrodinger equation from quantum mechanics. Classical numerical methods, such as finite difference, finite element, and spectral methods, can only solve low dimensional Schrodinger equations for some simple systems. Using these classical numerical methods to solve high dimensional eigenvalue problems suffers from the so-called curse of dimensionality, since the number of degrees of freedom and the computational complexity grow exponentially as the dimension increases. For high dimensional problems, Monte-Carlo based methods attract more and more attention since they provide the possibility of solving such problems in the stochastic sense. For example, the variational Monte Carlo (VMC) [6] and diffusion Monte Carlo (DMC) [27] methods are well known for high-dimensional eigenvalue problems in quantum mechanics. Recently, many numerical methods based on machine learning have been proposed to solve high-dimensional PDEs ([3, 8, 9, 13, 14, 23, 24, 25, 26, 28, 30, 35, 36]). Among these machine learning methods, neural network-based methods attract more and more attention since they can be used to build approximations to the exact solutions of PDEs. The essential reason is that neural networks can approximate any function given enough parameters. This type of method also provides a possible way to solve many useful high-dimensional PDEs from physics, chemistry, biology, engineering, and so on. Due to its universal approximation property, the fully-connected neural network (FNN) is the most widely used architecture for building the trial functions when solving high-dimensional PDEs.
There are several types of FNN-based methods, such as the well-known deep Ritz method [9], the deep Galerkin method [30], PINN [28], and weak adversarial networks [35], which solve high-dimensional PDEs by designing different loss functions. Among these methods, the loss functions always involve high-dimensional integrations of the functions defined by FNNs. For example, the loss functions of the deep Ritz method require computing integrations over the high-dimensional domain for functions which are constructed by FNNs. Direct numerical integration of high-dimensional functions also meets the "curse of dimensionality". Typically, the Monte-Carlo method is adopted to perform the high-dimensional integration with some type of sampling method [9, 15]. Due to the low convergence rate of the Monte-Carlo method, it is difficult for the solutions obtained by FNN-based numerical methods to reach high accuracy and a stable convergence process. In other words, the Monte-Carlo method decreases the computational work of each forward propagation at the price of the accuracy and stability of the FNN-based numerical methods for solving high-dimensional PDEs. In [32], based on the deep Ritz method, a type of tensor neural network (TNN) is proposed to build the trial functions for solving high-dimensional PDEs. The TNN is a function designed by tensor product operations on neural networks, or by correlated low-rank CANDECOMP/PARAFAC (CP) tensor decomposition [17, 22] approximations of FNNs. An important advantage is that we do not need the Monte-Carlo method to integrate the functions constructed by TNN. The high dimensional integration of TNN functions can be decomposed into one-dimensional integrations, which can be computed by highly accurate Gauss-type quadrature schemes. The computational work for integrating TNN functions scales only polynomially in the dimension, which means that TNN overcomes the "curse of dimensionality" in some sense for solving high-dimensional PDEs. Furthermore, the high accuracy of the high dimensional integration improves the TNN-based machine learning methods for solving high-dimensional problems. TNN-based machine learning methods have already been used to compute the smallest eigenvalue and its corresponding eigenfunction of high-dimensional eigenvalue problems [32] and of Schrodinger equations [33]. In this paper, we investigate the application of TNN to computing multi-eigenpairs of high dimensional eigenvalue problems. The aim of this paper is to propose a universal machine learning solver for computing multi-eigenpairs of high dimensional eigenvalue problems based on the TNN and the deep Ritz method, without a priori knowledge of the actual or guessed solutions. Furthermore, the proposed eigensolver can compute multi-eigenpairs of high dimensional eigenvalue problems with obviously better accuracy than the Monte-Carlo based machine learning methods. We will also find that the application of TNN and the high accuracy of the high dimensional integrations bring more choices for defining the loss functions of the machine learning methods. For example, the loss function in this paper for computing multi-eigenpairs is closer to the classical way of solving eigenvalue problems. The proposed eigensolver provides a new way to solve high dimensional eigenvalue problems from physics, chemistry, biology, engineering, and so on.
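To make the key point concrete: for a rank-one (separable) function, a \(d\)-dimensional integral factorizes into a product of one-dimensional integrals, which is exactly the structure TNN exploits. The following minimal NumPy sketch demonstrates this factorization; the integrand and domain are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Integrate f(x) = prod_i sin(x_i) over [0, 1]^d. For a rank-one integrand,
# the d-dimensional integral equals the product of d one-dimensional
# integrals, each handled by Gauss-Legendre quadrature: cost O(d N), not O(N^d).
d, N = 100, 16
z, w = np.polynomial.legendre.leggauss(N)  # reference nodes/weights on [-1, 1]
x, w = 0.5 * (z + 1.0), 0.5 * w            # map the rule to [0, 1]

one_dim = np.sum(w * np.sin(x))            # one-dimensional factor, ~ 1 - cos(1)
approx = one_dim**d                        # product over all d directions
exact = (1.0 - np.cos(1.0))**d
print(abs(approx - exact) / exact)         # relative error near machine precision
```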
We will provide plenty of numerical results for examples from physics and materials science. An outline of the paper goes as follows. In Section 2, we introduce the TNN architecture and its approximation property. In Section 3, the TNN-based machine learning method and the corresponding numerical integration schemes are introduced for solving multi-eigenpairs of high dimensional eigenvalue problems. Section 4 is devoted to providing numerical examples to validate the accuracy and efficiency of the proposed numerical methods. Finally, some concluding remarks are given in the last section. ## 2 Tensor neural network architecture The TNN structure, its approximation property, and the computational complexity of the related integrations have been discussed and investigated in [32]. In this section, we introduce the architecture of TNN. For clarity and to facilitate the construction of the TNN method for solving multi-eigenpairs of high dimensional eigenvalue problems, we also elaborate on some important definitions and properties here. TNN is a neural network of low-rank structure, built by the tensor product of several subnetworks with one-dimensional input and multidimensional output. Due to the low-rank structure of TNN, an efficient and accurate quadrature scheme can be designed for TNN-related high dimensional integrations such as the inner product of two TNNs. In [32], we introduce TNN in detail and propose its numerical integration scheme, whose computational complexity is polynomial in the dimension. For each \(i=1,2,\cdots,d\), we use \(\Phi_{i}(x_{i};\theta_{i})=(\phi_{i,1}(x_{i};\theta_{i}),\phi_{i,2}(x_{i};\theta_{i}),\cdots,\phi_{i,p}(x_{i};\theta_{i}))\) to denote a subnetwork that maps a set \(\Omega_{i}\subset\mathbb{R}\) to \(\mathbb{R}^{p}\), where \(\Omega_{i},i=1,\cdots,d\), can be a bounded interval \((a_{i},b_{i})\), the whole line \((-\infty,+\infty)\) or the half line \((a_{i},+\infty)\). The number of layers and neurons in each layer, the choice of activation functions, and other hyperparameters can differ between subnetworks. In this paper, in order to further improve the numerical stability, the TNN is defined as follows: \[\Psi(x;\Theta)=\sum_{j=1}^{p}c_{j}\widehat{\phi}_{1,j}(x_{1};\theta_{1})\widehat{\phi}_{2,j}(x_{2};\theta_{2})\cdots\widehat{\phi}_{d,j}(x_{d};\theta_{d})=\sum_{j=1}^{p}c_{j}\prod_{i=1}^{d}\widehat{\phi}_{i,j}(x_{i};\theta_{i}), \tag{2.1}\] where \(c=\{c_{j}\}_{j=1}^{p}\) is a set of trainable parameters, and \(\Theta=\{c,\theta_{1},\cdots,\theta_{d}\}\) denotes all parameters of the whole architecture. For \(i=1,\cdots,d\), \(j=1,\cdots,p\), \(\widehat{\phi}_{i,j}(x_{i};\theta_{i})\) is a normalized function defined as follows: \[\widehat{\phi}_{i,j}(x_{i};\theta_{i})=\frac{\phi_{i,j}(x_{i};\theta_{i})}{\|\phi_{i,j}(x_{i};\theta_{i})\|_{L^{2}(\Omega_{i})}}. \tag{2.2}\] In Section 3.2, we will discuss the architectures of each subnetwork \(\Phi_{i}(x_{i};\theta_{i})\) in detail according to the different types of \(\Omega_{i}\). The TNN architecture (2.1) and the one defined in [32] are mathematically equivalent, but (2.1) has better numerical stability during the training process. Figure 1 shows the corresponding architecture of TNN. From Figure 1 and numerical tests, we find that the parameters for each rank of TNN are correlated through the FNN, which guarantees the stability of TNN-based machine learning methods. This is also an important difference from the tensor finite element methods.
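As an illustration of the definition (2.1)-(2.2), the following PyTorch sketch builds a TNN from \(d\) small subnetworks. The layer sizes and the batch-based normalization are our own simplifications: the paper normalizes in \(L^{2}(\Omega_{i})\) via quadrature, not over a batch.

```python
import torch
import torch.nn as nn

class TNN(nn.Module):
    """Tensor neural network (2.1): Psi(x) = sum_j c_j prod_i phihat_{i,j}(x_i)."""
    def __init__(self, d, p, width=20):
        super().__init__()
        # One subnetwork per dimension, each mapping R to R^p.
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                          nn.Linear(width, width), nn.Tanh(),
                          nn.Linear(width, p))
            for _ in range(d))
        self.c = nn.Parameter(torch.ones(p))  # trainable coefficients c_j

    def forward(self, x):  # x has shape (batch, d)
        prod = torch.ones(x.shape[0], self.c.numel())
        for i, net in enumerate(self.subnets):
            phi = net(x[:, i:i + 1])                             # (batch, p)
            phi = phi / (phi.norm(dim=0, keepdim=True) + 1e-12)  # crude stand-in for (2.2)
            prod = prod * phi                                    # product over dimensions
        return prod @ self.c                                     # rank-p sum, shape (batch,)

psi = TNN(d=5, p=10)
print(psi(torch.randn(8, 5)).shape)  # torch.Size([8])
```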
To show the reasonableness of TNN, we now recall the approximation property from [32]. Since there exists an isomorphism relation between \(H^{m}(\Omega_{1}\times\cdots\times\Omega_{d})\) and the tensor product space \(H^{m}(\Omega_{1})\otimes\cdots\otimes H^{m}(\Omega_{d})\), the process of approximating a function \(f(x)\in H^{m}(\Omega_{1}\times\cdots\times\Omega_{d})\) by the TNN defined in (2.1) can be regarded as searching for a correlated CP decomposition structure to approximate \(f(x)\) in the space \(H^{m}(\Omega_{1})\otimes\cdots\otimes H^{m}(\Omega_{d})\) with rank not greater than \(p\). In [32] we introduce and prove the following approximation result for functions in the space \(H^{m}(\Omega_{1}\times\cdots\times\Omega_{d})\) in the sense of the \(H^{m}\)-norm. **Theorem 1**.: _Assume that each \(\Omega_{i}\) is an interval in \(\mathbb{R}\) for \(i=1,\cdots,d\), \(\Omega=\Omega_{1}\times\cdots\times\Omega_{d}\), and the function \(f(x)\in H^{m}(\Omega)\). Then for any tolerance \(\varepsilon>0\), there exist a positive integer \(p\) and a corresponding TNN defined by (2.1) such that the following approximation property holds_ \[\|f(x)-\Psi(x;\Theta)\|_{H^{m}(\Omega)}<\varepsilon. \tag{2.3}\] The motivation for employing the TNN architecture is to provide high accuracy and high efficiency in calculating the variational forms of high-dimensional problems, in which high-dimensional integrations are included. The TNN itself can approximate functions in Sobolev spaces with respect to the \(H^{m}\)-norm. Therefore, we naturally put forward approaches to solving high-dimensional PDEs and computing the ground state eigenpair in [32, 33]. The major contribution of this paper is to propose a TNN-based machine learning method to compute the leading multi-eigenpairs of high-dimensional eigenvalue problems. We will find that the design of the machine learning method in this paper for computing multi-eigenpairs also shows the advantages of TNN. ## 3 Machine learning method for computing multi-eigenpairs This section is devoted to introducing the TNN-based machine learning method [11] to compute the multi-eigenpairs of high dimensional eigenvalue problems. We introduce the way to build the eigenfunction approximations by TNN functions and the quadrature schemes to compute the inner products of TNN functions. ### 3.1 Approximate eigen-subspace by TNNs In this subsection, we present the TNN-based discretization of eigenvalue problems for solving multi-eigenpairs. Briefly speaking, analogous to subspace projection methods such as the finite element method, our approach uses several TNNs instead of the finite element basis to span a subspace of the solution space and restricts the problem to this finite dimensional subspace. Through a machine learning process, an optimal approximation of the eigen-subspace represented by TNNs is found. Then the final multi-eigenpair approximations are obtained by solving a finite-dimensional matrix eigenvalue problem, which is similar to the Rayleigh-Ritz step in classical eigensolvers for matrix eigenvalue problems [29]. Figure 1: Architecture of TNN. Black arrows denote linear (or affine) transformations. Each ending node of a blue arrow is obtained by taking the scalar multiplication of all starting nodes of the blue arrows that end in this node. The final output of TNN is derived from the summation of all starting nodes of the red arrows.
For generality, we describe the eigenvalue problem and the TNN-based machine learning method in an abstract way. More specifically, assume \(\mathcal{V}\) and \(\mathcal{W}\) are two Hilbert spaces satisfying \(\mathcal{V}\subset\mathcal{W}\), and let \(a(\cdot,\cdot)\) and \(b(\cdot,\cdot)\) denote two positive definite symmetric bilinear forms on \(\mathcal{V}\times\mathcal{V}\) and \(\mathcal{W}\times\mathcal{W}\), respectively. Furthermore, based on these bilinear forms, we can define the norms on the spaces \(\mathcal{V}\) and \(\mathcal{W}\) as follows \[\|v\|_{a} = \sqrt{a(v,v)},\quad\forall v\in\mathcal{V}, \tag{3.1}\] \[\|w\|_{b} = \sqrt{b(w,w)},\quad\forall w\in\mathcal{W}. \tag{3.2}\] Assume the norm \(\|\cdot\|_{a}\) is relatively compact with respect to the norm \(\|\cdot\|_{b}\) [7]. To describe our method and to build the loss function for computing the leading \(k\) eigenpairs, we focus on the following general eigenvalue problem: Find \((\lambda,u)\in\mathbb{R}\times\mathcal{V}\) such that \(b(u,u)=1\) and \[a(u,v)=\lambda b(u,v),\quad\forall v\in\mathcal{V}. \tag{3.3}\] It is well known that the eigenvalue problem (3.3) has an eigenvalue sequence \(\{\lambda_{j}\}\) (cf. [2]): \[0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{k}\leq\cdots,\quad\lim_{k\rightarrow\infty}\lambda_{k}=\infty,\] and associated eigenfunctions \[u_{1},u_{2},\cdots,u_{k},\cdots,\] where \(b(u_{i},u_{j})=\delta_{ij}\) (\(\delta_{ij}\) denotes the Kronecker delta). In the sequence \(\{\lambda_{j}\}\), the \(\lambda_{j}\) are repeated according to their geometric multiplicities. The eigenvalues satisfy the minimum-maximum principle [2] \[\lambda_{k}=\min_{\begin{subarray}{c}\mathcal{V}_{k}\subset \mathcal{V}\\ \dim\mathcal{V}_{k}=k\end{subarray}}\max_{f\in\mathcal{V}_{k}}\mathcal{R}(f)= \max_{f\in\mathrm{span}\{u_{1},\cdots,u_{k}\}}\mathcal{R}(f), \tag{3.4}\] where \(\mathcal{R}(f)=a(f,f)/b(f,f)\) denotes the Rayleigh quotient of the function \(f\). Let us define \(\mathcal{V}_{k}=\mathrm{span}\{v_{1},\cdots,v_{k}\}\), where \(v_{i}\in\mathcal{V}\), \(i=1,\cdots,k\), are linearly independent. Furthermore, we also define \(\mathcal{U}_{k}=\mathrm{span}\{u_{1},\cdots,u_{k}\}\), the eigensubspace with respect to the leading \(k\) eigenvalues. Using the terminology of subspace approximations to eigenvalue problems, we define the stiffness matrix \(\mathcal{A}(v_{1},\cdots,v_{k})\) and the mass matrix \(\mathcal{B}(v_{1},\cdots,v_{k})\) as follows \[\mathcal{A}(v_{1},\cdots,v_{k}) =\Big{(}\mathcal{A}_{ij}(v_{1},\cdots,v_{k})\Big{)}_{1\leq i,j \leq k}=\Big{(}a(v_{i},v_{j})\Big{)}_{1\leq i,j\leq k}\in\mathbb{R}^{k\times k}, \tag{3.5}\] \[\mathcal{B}(v_{1},\cdots,v_{k}) =\Big{(}\mathcal{B}_{ij}(v_{1},\cdots,v_{k})\Big{)}_{1\leq i,j \leq k}=\Big{(}b(v_{i},v_{j})\Big{)}_{1\leq i,j\leq k}\in\mathbb{R}^{k\times k}.
\tag{3.6}\] Then, due to the minimum-maximum principle, a simple derivation shows that the summation of the leading \(k\) eigenvalues satisfies the following optimization problem \[\sum_{i=1}^{k}\lambda_{i}=\min_{\begin{subarray}{c}\mathcal{V}_{k}=\mathrm{ span}\{v_{1},\cdots,v_{k}\}\\ v_{j}\in\mathcal{V},j=1,\cdots,k\end{subarray}}\mathrm{trace}\left(\mathcal{B }^{-1}(v_{1},\cdots,v_{k})\mathcal{A}(v_{1},\cdots,v_{k})\right), \tag{3.7}\] and the eigensubspace with respect to the leading \(k\) eigenvalues coincides with \[\mathcal{U}_{k}=\text{span}\{u_{1},\cdots,u_{k}\}=\arg\min_{\mathcal{ V}_{k}=\text{span}\{v_{1}\cdots,v_{k}\}\subset\mathcal{V}}\text{trace}\Big{(} \mathcal{B}^{-1}\big{(}v_{1},\cdots,v_{k}\big{)}\mathcal{A}\big{(}v_{1},\cdots, v_{k}\big{)}\Big{)}. \tag{3.8}\] We approximate \(\mathcal{U}_{k}\) by a space spanned by \(k\) TNNs \(\Psi_{1}(x;\Theta_{1})\), \(\cdots\), \(\Psi_{k}(x;\Theta_{k})\), where each \(\Psi_{\ell}(x;\Theta_{\ell})\) is defined by (2.1) with \(\Phi_{i,\ell}(x_{i};\theta_{i,\ell})=(\phi_{i,1,\ell}(x_{i};\theta_{i,\ell})\), \(\phi_{i,2,\ell}(x_{i};\theta_{i,\ell})\), \(\cdots\), \(\phi_{i,p,\ell}(x_{i};\theta_{i,\ell})\)) as the \(i\)-th subnetwork of the \(\ell\)-th TNN. Then, for \(\ell=1,\cdots,k\), the \(\ell\)-th TNN \(\Psi_{\ell}(x;\Theta_{\ell})\) is denoted as \[\Psi_{\ell}(x;\Theta_{\ell})=\sum_{j=1}^{p_{\ell}}c_{j,\ell}\prod _{i=1}^{d}\widehat{\phi}_{i,j,\ell}(x_{i};\theta_{i,\ell}), \tag{3.9}\] where each \(c_{\ell}=\{c_{j,\ell}\}_{j=1}^{p_{\ell}}\) is a set of trainable parameters and each \(\Theta_{\ell}=\{c_{\ell},\theta_{1,\ell},\cdots,\theta_{d,\ell}\}\) denotes all parameters of the \(\ell\)-th TNN. Similar to (2.2), each \(\widehat{\phi}_{i,j,\ell}\) is a normalized function. We can select appropriate activation functions such that all \(\Psi_{\ell}(x;\Theta_{\ell}),\ell=1,\cdots,k\), belong to the space \(\mathcal{V}\). Let us define \(p=\max\{p_{1},\cdots,p_{k}\}\). These \(k\) TNNs are trained using the following loss function \[\text{Loss}\Big{(}\Psi_{1}(x;\Theta_{1}),\cdots,\Psi_{k}(x;\Theta_ {k})\Big{)}\] \[=\text{trace}\Big{(}\mathcal{B}^{-1}\big{(}\Psi_{1}(x;\Theta_{1} ),\cdots,\Psi_{k}(x;\Theta_{k})\big{)}\mathcal{A}\big{(}\Psi_{1}(x;\Theta_{1}),\cdots,\Psi_{k}(x;\Theta_{k})\big{)}\Big{)}. \tag{3.10}\] Since (3.10) is built from the inner products of TNNs, the loss function is of deep Ritz type. In order to assemble the matrices \(\mathcal{A}\) and \(\mathcal{B}\) in (3.10), we need to compute the high dimensional integrations included in the bilinear forms \(a(\cdot,\cdot)\) and \(b(\cdot,\cdot)\) in (3.3). The detailed method for assembling the matrices \(\mathcal{A}\) and \(\mathcal{B}\) in (3.10) will be introduced in Section 3.2. The loss function (3.10) is automatically differentiable thanks to packages that support backpropagation, such as TensorFlow and PyTorch. In this paper, the gradient descent (GD) method is adopted to update all trainable parameters \[\Theta_{\ell}-\eta\nabla_{\Theta_{\ell}}\text{Loss}\to\Theta_{ \ell},\quad\ell=1,\cdots,k, \tag{3.11}\] where \(\eta\) is the learning rate, adjusted by the Adam optimizer [20]. After sufficiently many training steps, we obtain parameters \(\{\Theta_{1}^{*},\cdots,\Theta_{k}^{*}\}\) such that the loss function (3.10) reaches its minimum value within the required tolerance.
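A minimal PyTorch sketch of one training step with the loss (3.10), assuming the matrices \(\mathcal{A}\) and \(\mathcal{B}\) have already been assembled from the \(k\) TNNs; here they are placeholder symmetric positive definite matrices, since the quadrature-based assembly is only described in Section 3.2:

```python
import torch

def trace_loss(A, B):
    # Loss (3.10): trace(B^{-1} A). B^{-1} A is obtained by solving B C = A
    # rather than inverting B (cf. step 3 of Algorithm 1 below).
    C = torch.linalg.solve(B, A)
    return torch.trace(C)

k = 3
M = torch.randn(k, k, requires_grad=True)  # placeholder for TNN parameters
A = M @ M.T + k * torch.eye(k)             # stands in for the stiffness matrix (3.5)
B = torch.eye(k)                           # stands in for the mass matrix (3.6)

opt = torch.optim.Adam([M], lr=3e-3)
loss = trace_loss(A, B)
loss.backward()                            # gradients, as in the update rule (3.11)
opt.step()
print(loss.item())
```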
By solving the following finite-dimensional matrix eigenvalue problem \[\mathcal{A}\big{(}\Psi_{1}(x;\Theta_{1}^{*}),\cdots,\Psi_{k}(x; \Theta_{k}^{*})\big{)}\mathbf{y}=\lambda\mathcal{B}\big{(}\Psi_{1}(x;\Theta_{1 }^{*}),\cdots,\Psi_{k}(x;\Theta_{k}^{*})\big{)}\mathbf{y}, \tag{3.12}\] we obtain a sequence of eigenvalues \[0<\widehat{\lambda}_{1}\leq\widehat{\lambda}_{2}\leq\cdots\leq \widehat{\lambda}_{k} \tag{3.13}\] and corresponding eigenvectors \[\mathbf{y}_{1},\mathbf{y}_{2},\cdots,\mathbf{y}_{k}, \tag{3.14}\] where \(\mathbf{y}_{j}=[y_{j,1},\cdots,y_{j,k}]^{\top}\) for \(j=1,\cdots,k\). Then \(\widehat{\lambda}_{1}\), \(\cdots\), \(\widehat{\lambda}_{k}\) can be chosen as the approximations to the first \(k\) eigenvalues \(\lambda_{1}\), \(\cdots\), \(\lambda_{k}\), and the corresponding approximations \(\widehat{u}_{1}\), \(\cdots\), \(\widehat{u}_{k}\) to the first \(k\) eigenfunctions of problem (3.3) can be obtained by the following linear combination procedure \[\widehat{u}_{j}(x) = \sum_{\ell=1}^{k}y_{j,\ell}\Psi_{\ell}(x;\Theta_{\ell}^{*}). \tag{3.15}\] **Remark 1**.: _In [36], the loss function is defined by the summation of \(k\) Rayleigh quotients and a penalty term which constrains the \(k\) neural networks to be mutually orthogonal and normalized. Different from [36], in this paper, the loss function (3.10) is defined without the penalty term, and the orthogonalization condition on the \(k\) TNNs is not imposed directly. The reason is that the use of TNN architectures provides a high-accuracy and high-efficiency quadrature scheme, as we will see in Section 3.2, for assembling the matrices in the loss function (3.10) and the eigenvalue problem (3.12). The penalty term makes the optimization process much more difficult and thus affects the final accuracy. The orthogonalization condition is implicitly guaranteed by solving the optimization problem in the machine learning method with the loss function (3.10). The main innovation of the present paper is to apply, for the first time, the TNN architecture to compute multi-eigenpairs of high dimensional eigenvalue problems with high accuracy. In Section 4.1, we will give an example from [36] to show the advantages of the proposed approach._ ### 3.2 Quadrature scheme for inner product In this subsection, we provide a detailed description of how to calculate the inner products in the loss function (3.10). For simplicity, we consider the following model problem as an example: Find \((\lambda,u)\in\mathbb{R}\times H_{0}^{1}(\Omega)\) such that \[\begin{cases}-\Delta u+V(x)u=\lambda u,&\text{in}\ \ \Omega,\\ \hskip 14.226378ptu=0,&\text{on}\ \partial\Omega,\end{cases} \tag{3.16}\] where \(\Omega=\Omega_{1}\times\cdots\times\Omega_{d}\), each \(\Omega_{i},i=1,\cdots,d\), can be a bounded interval \((a_{i},b_{i})\), the whole line \((-\infty,+\infty)\) or the half line \((a_{i},+\infty)\), and \(V(x)\in L^{2}(\Omega)\) is a potential function. We assume that the potential \(V(x)\) has a separated representation in the tensor product space \(L^{2}(\Omega_{1})\otimes\cdots\otimes L^{2}(\Omega_{d})\) as follows \[V(x)=\sum_{j=1}^{q}\prod_{i=1}^{d}V_{i,j}(x_{i}), \tag{3.17}\] where \(V_{i,j}(x_{i})\in L^{2}(\Omega_{i})\).
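Before detailing the quadrature, note that the post-processing step (3.12)-(3.15) above is just a small dense generalized eigenvalue problem; a minimal SciPy sketch with placeholder matrices:

```python
import numpy as np
from scipy.linalg import eigh

def rayleigh_ritz(A, B):
    """Solve (3.12): A y = lambda B y. eigh returns the eigenvalues in
    ascending order (3.13) and B-orthonormal eigenvectors (3.14) as columns."""
    return eigh(A, B)

k = 3
rng = np.random.default_rng(0)
M = rng.standard_normal((k, k))
A = M @ M.T + k * np.eye(k)  # placeholder for the assembled stiffness matrix
B = np.eye(k)                # placeholder for the assembled mass matrix
lam, Y = rayleigh_ritz(A, B)
print(lam)                   # ascending approximations to lambda_1, ..., lambda_k

# The j-th eigenfunction approximation (3.15) is then the linear combination
# u_j(x) = sum_l Y[l, j] * Psi_l(x) of the k trained TNNs.
```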
Then the equivalent variational form of the eigenvalue problem (3.16) can be stated as follows: Find \((\lambda,u)\in\mathbb{R}\times H_{0}^{1}(\Omega)\) such that \[a(u,v)=\lambda b(u,v),\ \ \ \ \forall v\in H_{0}^{1}(\Omega), \tag{3.18}\] where the bilinear forms \(a(\cdot,\cdot)\) and \(b(\cdot,\cdot)\) are defined as follows \[a(u,v)=\int_{\Omega}(\nabla u\cdot\nabla v+Vuv)d\Omega,\ \ \ \ \ \ b(u,v)=\int_{\Omega}uvd\Omega. \tag{3.19}\] Due to the definition of the \(k\) TNNs (3.9) and the separability (3.17), the entries of the matrix \(\mathcal{A}\) in the loss function (3.10) have the following expansions \[\mathcal{A}_{mn} = a\big{(}\Psi_{m}(x;\Theta_{m}),\Psi_{n}(x;\Theta_{n})\big{)} \tag{3.20}\] \[= \sum_{s=1}^{d}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{ \ell,n}\prod_{i\neq s}^{d}\int_{\Omega_{i}}\phi_{i,j,m}(x_{i};\theta_{i,m}) \phi_{i,\ell,n}(x_{i};\theta_{i,n})dx_{i}\] \[\qquad\cdot\int_{\Omega_{s}}\frac{\partial\phi_{s,j,m}}{\partial x _{s}}(x_{s};\theta_{s,m})\frac{\partial\phi_{s,\ell,n}}{\partial x_{s}}(x_{s };\theta_{s,n})dx_{s}\] \[\quad+\sum_{s=1}^{q}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j, m}c_{\ell,n}\prod_{i=1}^{d}\int_{\Omega_{i}}V_{i,s}(x_{i})\phi_{i,j,m}(x_{i}; \theta_{i,m})\phi_{i,\ell,n}(x_{i};\theta_{i,n})dx_{i},\] and the entries of the matrix \(\mathcal{B}\) can be expanded as follows \[\mathcal{B}_{mn} = b\big{(}\Psi_{m}(x;\Theta_{m}),\Psi_{n}(x;\Theta_{n})\big{)} \tag{3.21}\] \[= \sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{\ell,n}\prod_{i=1 }^{d}\int_{\Omega_{i}}\phi_{i,j,m}(x_{i};\theta_{i,m})\phi_{i,\ell,n}(x_{i}; \theta_{i,n})dx_{i}.\] Since only one-dimensional integrations are involved in (3.20) and (3.21), there is no need to use the Monte-Carlo procedure for these high dimensional integrations. In order to guarantee the high accuracy of the high dimensional integrations included in the loss function, we should use high order one dimensional quadrature schemes, such as Gauss-type rules [10], for the integrations in (3.20) and (3.21). #### 3.2.1 Legendre-Gauss quadrature scheme for bounded domain Without loss of generality, we decompose each \(\Omega_{i}\) into \(M_{i}\) equal subintervals with length \(h_{i}=|\Omega_{i}|/M_{i}\) and choose \(N_{i}\) Legendre-Gauss points in each subinterval. Denote the total quadrature points and corresponding weights by \[\Big{\{}x_{i}^{(n_{i})}\Big{\}}_{n_{i}=1}^{M_{i}N_{i}},\quad\Big{\{}w_{i}^{(n _{i})}\Big{\}}_{n_{i}=1}^{M_{i}N_{i}},\quad i=1,2,\cdots,d, \tag{3.22}\] and define \(N=\max\{N_{1},\cdots,N_{d}\}\), \(\underline{N}=\min\{N_{1},\cdots,N_{d}\}\), \(M=\max\{M_{1},\cdots,M_{d}\}\) and \(\underline{M}=\min\{M_{1},\cdots,M_{d}\}\).
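A short NumPy sketch of generating the composite points and weights (3.22): \(M\) equal subintervals of \((a,b)\), each carrying an \(N\)-point Legendre-Gauss rule (the test integrand is our own choice):

```python
import numpy as np

def composite_leggauss(a, b, M, N):
    """Composite Legendre-Gauss rule (3.22): M equal subintervals of (a, b)
    with N Gauss points each; returns the M*N points and weights."""
    z, w = np.polynomial.legendre.leggauss(N)   # reference rule on [-1, 1]
    h = (b - a) / M
    mids = a + h * (np.arange(M) + 0.5)         # subinterval midpoints
    x = (mids[:, None] + 0.5 * h * z[None, :]).ravel()
    wt = np.tile(0.5 * h * w, M)
    return x, wt

x, wt = composite_leggauss(-3.0, 3.0, M=10, N=16)
print(np.sum(wt * x**2))  # 18.0: exact, since the rule is exact for polynomials
```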
Then, using the quadrature scheme (3.22), the entries of the matrix \(\mathcal{A}\) defined by (3.20) have the following numerical format \[\mathcal{A}_{mn}=a\big{(}\Psi_{m}(x;\Theta_{m}),\Psi_{n}(x; \Theta_{n})\big{)}\] \[\approx\sum_{s=1}^{d}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j, m}c_{\ell,n}\prod_{i\neq s}^{d}\sum_{n_{i}=1}^{M_{i}N_{i}}w_{i}^{(n_{i})} \phi_{i,j,m}(x_{i}^{(n_{i})};\theta_{i,m})\phi_{i,\ell,n}(x_{i}^{(n_{i})}; \theta_{i,n})\] \[\cdot\sum_{n_{s}=1}^{M_{s}N_{s}}w_{s}^{(n_{s})}\frac{\partial \phi_{s,j,m}}{\partial x_{s}}(x_{s}^{(n_{s})};\theta_{s,m})\frac{\partial\phi _{s,\ell,n}}{\partial x_{s}}(x_{s}^{(n_{s})};\theta_{s,n})\] \[+\sum_{s=1}^{q}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{ \ell,n}\prod_{i=1}^{d}\sum_{n_{i}=1}^{M_{i}N_{i}}w_{i}^{(n_{i})}V_{i,s}(x_{i}^ {(n_{i})})\phi_{i,j,m}(x_{i}^{(n_{i})};\theta_{i,m})\phi_{i,\ell,n}(x_{i}^{(n_ {i})};\theta_{i,n}), \tag{3.23}\] and the entries of the matrix \(\mathcal{B}\) defined by (3.21) can be computed as follows \[\mathcal{B}_{mn} = b\big{(}\Psi_{m}(x;\Theta_{m}),\Psi_{n}(x;\Theta_{n})\big{)} \tag{3.24}\] \[\approx \sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{\ell,n}\prod_{i=1 }^{d}\sum_{n_{i}=1}^{M_{i}N_{i}}w_{i}^{(n_{i})}\phi_{i,j,m}(x_{i}^{(n_{i})}; \theta_{i,m})\phi_{i,\ell,n}(x_{i}^{(n_{i})};\theta_{i,n}).\] The complete TNN-based algorithm for computing the first \(k\) eigenpairs can then be summarized in Algorithm 1. ``` 1. Initialization step: Build \(k\) initial TNNs \(\Psi_{1}^{(0)}(x;\Theta_{1}^{(0)}),\cdots,\Psi_{k}^{(0)}(x;\Theta_{k}^{(0)})\) as in (3.9); set the maximum number of training steps \(L\), the learning rate \(\eta\), and the quadrature points and weights (3.22); set \(\ell=0\). 2. Assemble the matrices \(\mathcal{A}\big{(}\Psi_{1}^{(\ell)}(x;\Theta_{1}^{(\ell)}),\cdots,\Psi_{k}^{( \ell)}(x;\Theta_{k}^{(\ell)})\big{)}\) and \(\mathcal{B}\big{(}\Psi_{1}^{(\ell)}(x;\Theta_{1}^{(\ell)}),\cdots,\Psi_{k}^{( \ell)}(x;\Theta_{k}^{(\ell)})\big{)}\) according to the quadrature schemes (3.23) and (3.24), respectively. 3. Compute the value of the loss function (3.10), where the matrix \(\mathcal{B}^{-1}\mathcal{A}=:\mathcal{C}\) is obtained by solving the matrix equation \(\mathcal{BC}=\mathcal{A}\). 4. Compute the gradient of the loss function with respect to the parameters \(\{\Theta_{1},\cdots,\Theta_{k}\}\) by automatic differentiation and update the parameters \(\{\Theta_{1},\cdots,\Theta_{k}\}\) by (3.11). 5. Set \(\ell=\ell+1\) and go to Step 2 for the next step until \(\ell=L\). 6. Post-processing step: Solve the matrix eigenvalue problem (3.12) and compute the first \(k\) eigenpair approximations by (3.15).
``` **Algorithm 1** TNN-based method for the first \(k\) eigenpairs It is worth mentioning that the quadrature schemes (3.23) and (3.24), which use \(M_{i}N_{i}\) one-dimensional quadrature points in each dimension, are actually equivalent to implementing the following \(d\)-dimensional full tensor quadrature scheme \[\mathcal{A}_{mn} \approx \sum_{n\in\mathcal{N}}w^{(n)}\nabla\Psi_{m}(x^{(n)};\Theta_{m}) \cdot\nabla\Psi_{n}(x^{(n)};\Theta_{n})+\sum_{n\in\mathcal{N}}w^{(n)}V(x^{(n)})\Psi_{m}(x^{(n)};\Theta_ {m})\Psi_{n}(x^{(n)};\Theta_{n}),\] \[\mathcal{B}_{mn} \approx \sum_{n\in\mathcal{N}}w^{(n)}\Psi_{m}(x^{(n)};\Theta_{m})\Psi_{n }(x^{(n)};\Theta_{n}), \tag{3.25}\] where the \(d\)-dimensional quadrature points and weights are defined as follows \[\left\{x^{(n)}\right\}_{n\in\mathcal{N}} = \left\{\left\{x_{1}^{(n_{1})}\right\}_{n_{1}=1}^{M_{1}N_{1}} \times\left\{x_{2}^{(n_{2})}\right\}_{n_{2}=1}^{M_{2}N_{2}}\times\cdots\times \left\{x_{d}^{(n_{d})}\right\}_{n_{d}=1}^{M_{d}N_{d}}\right\},\] \[\left\{w^{(n)}\right\}_{n\in\mathcal{N}} = \left\{\left\{w_{1}^{(n_{1})}\right\}_{n_{1}=1}^{M_{1}N_{1}}\times \left\{w_{2}^{(n_{2})}\right\}_{n_{2}=1}^{M_{2}N_{2}}\times\cdots\times \left\{w_{d}^{(n_{d})}\right\}_{n_{d}=1}^{M_{d}N_{d}}\right\}. \tag{3.26}\] The quadrature scheme (3.26) for a general \(d\)-dimensional integrand has accuracy \(\mathcal{O}(h^{2\underline{N}}/(2\underline{N})!)\), but \(\mathcal{N}\) has \(\mathcal{O}(M^{d}N^{d})\) elements. This becomes too large for \(d\gg 1\) and makes the approximation of the integration intractable. Thanks to the low-rank tensor structure of TNN, the full tensor quadrature scheme for the inner product of two TNNs reduces to the splitting schemes (3.23) and (3.24). In the quadrature schemes (3.23) and (3.24), assembling the matrices \(\mathcal{A}\) and \(\mathcal{B}\) costs only \(\mathcal{O}(d^{2}p^{2}MN+dp^{2}qMN)\) and \(\mathcal{O}(dp^{2}MN)\) operations, respectively. Furthermore, due to the equivalence with the full tensor quadrature points (3.26), the schemes (3.23) and (3.24) also have accuracy \(\mathcal{O}(h^{2\underline{N}}/(2\underline{N})!)\). This means the TNN-based method proposed in this paper overcomes the "curse of dimensionality" in the sense of numerical integration. For generality, in the next two subsections, we also introduce the integration of TNN on unbounded domains. The corresponding complexity analysis is omitted since it is similar to that in this subsection. #### 3.2.2 Hermite-Gauss quadrature scheme for the whole line For the case \(\Omega_{i}=(-\infty,+\infty)\), we use Hermite-Gauss quadrature to assemble the matrices \(\mathcal{A}\) and \(\mathcal{B}\). The Hermite-Gauss quadrature scheme satisfies the following property. **Lemma 1**.: _[_31_, Theorem 7.3]_ _Let \(\{x^{(k)}\}_{k=0}^{N}\) be the zeros of the \((N+1)\)-th order Hermite polynomial \(H_{N+1}(x)\), and let \(\{w^{(k)}\}_{k=0}^{N}\) be given by_ \[w^{(k)}=\frac{\sqrt{\pi}2^{N}N!}{(N+1)H_{N}^{2}(x^{(k)})},\quad 0 \leq k\leq N.
\tag{3.27}\] _Then we have_ \[\int_{-\infty}^{+\infty}p(x)e^{-x^{2}}dx=\sum_{k=0}^{N}p(x^{(k)})w^{(k)},\quad\forall p\in P_{2N+1}, \tag{3.28}\] _where \(P_{2N+1}\) denotes the set of polynomial functions of degree not greater than \(2N+1\)._ For using the Hermite-Gauss quadrature, the \(i\)-th subnetwork of the \(\ell\)-th TNN is defined as follows \[\Phi_{i,\ell}(x_{i};\theta_{i,\ell}) = \big{(}\phi_{i,1,\ell}(x_{i};\theta_{i,\ell}),\phi_{i,2,\ell}(x_ {i};\theta_{i,\ell}),\cdots,\phi_{i,p,\ell}(x_{i};\theta_{i,\ell})\big{)}\] \[= e^{-\frac{\beta_{i}^{2}x_{i}^{2}}{2}}\big{(}\varphi_{i,1,\ell}( \beta_{i}x_{i};\theta_{i,\ell}),\varphi_{i,2,\ell}(\beta_{i}x_{i};\theta_{i, \ell}),\cdots,\varphi_{i,p,\ell}(\beta_{i}x_{i};\theta_{i,\ell})\big{)},\] where \(\beta_{i}\) is a trainable parameter and \(\varphi_{i,\ell}=\big{(}\varphi_{i,1,\ell}(\beta_{i}x_{i};\theta_{i,\ell}), \varphi_{i,2,\ell}(\beta_{i}x_{i};\theta_{i,\ell}),\cdots,\varphi_{i,p,\ell}( \beta_{i}x_{i};\theta_{i,\ell})\big{)}\) is a fully-connected neural network which maps \(\mathbb{R}\) to \(\mathbb{R}^{p}\). Then the \(\ell\)-th TNN can be written as \[\Psi_{\ell}(x;\Theta_{\ell})=\sum_{j=1}^{p}c_{j,\ell}\prod_{i=1}^ {d}e^{-\frac{\beta_{i}^{2}x_{i}^{2}}{2}}\widehat{\varphi}_{i,j,\ell}(\beta_{i }x_{i};\theta_{i,\ell}), \tag{3.29}\] where \(\widehat{\varphi}_{i,j,\ell}\) satisfies the normalization property \(\|e^{-\frac{\beta_{i}^{2}x_{i}^{2}}{2}}\widehat{\varphi}_{i,j,\ell}(\beta_{i} x_{i};\theta_{i,\ell})\|_{L^{2}(\Omega_{i})}=1\) and is defined as follows \[\widehat{\varphi}_{i,j,\ell}(\beta_{i}x_{i};\theta_{i,\ell})= \frac{\varphi_{i,j,\ell}(\beta_{i}x_{i};\theta_{i,\ell})}{\Big{\|}e^{-\frac{ \beta_{i}^{2}x_{i}^{2}}{2}}\varphi_{i,j,\ell}(\beta_{i}x_{i};\theta_{i,\ell}) \Big{\|}_{L^{2}(\Omega_{i})}}. \tag{3.30}\] For \(i=1,\cdots,d\), let \(\{z_{i}^{(k_{i})}\}_{k_{i}=1}^{N_{i}}\) and \(\{w_{i}^{(k_{i})}\}_{k_{i}=1}^{N_{i}}\) be the Hermite-Gauss quadrature points and weights, respectively. Under the coordinate transformation \(z_{i}=\beta_{i}x_{i}\), using the quadrature scheme (3.28), the entries of the matrix \(\mathcal{A}\) defined by (3.20) have the following numerical format \[\mathcal{A}_{mn}=\sum_{s=1}^{d}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p _{n}}c_{j,m}c_{\ell,n}\prod_{i\neq s}^{d}\frac{1}{\beta_{i}}\sum_{k_{i}=1}^{N_ {i}}\widehat{\varphi}_{i,j,m}(z_{i}^{(k_{i})};\theta_{i,m})\widehat{\varphi}_{ i,\ell,n}(z_{i}^{(k_{i})};\theta_{i,n})w_{i}^{(k_{i})}\] \[\cdot\frac{1}{\beta_{s}}\sum_{k_{s}=1}^{N_{s}}\big{(}\widehat{ \varphi}^{\prime}_{s,j,m}(z_{s}^{(k_{s})};\theta_{s,m})-\beta_{s}\widehat{ \varphi}_{s,j,m}(z_{s}^{(k_{s})};\theta_{s,m})\big{)}\] \[\cdot\big{(}\widehat{\varphi}^{\prime}_{s,\ell,n}(z_{s}^{(k_{s})}; \theta_{s,n})-\beta_{s}\widehat{\varphi}_{s,\ell,n}(z_{s}^{(k_{s})};\theta_{s,n}) \big{)}w_{s}^{(k_{s})}\] \[+\sum_{s=1}^{q}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{ \ell,n}\prod_{i=1}^{d}\frac{1}{\beta_{i}}\sum_{k_{i}=1}^{N_{i}}V_{i,s}(\frac{ z_{i}^{(k_{i})}}{\beta_{i}})\widehat{\varphi}_{i,j,m}(z_{i}^{(k_{i})};\theta_{i,m}) \widehat{\varphi}_{i,\ell,n}(z_{i}^{(k_{i})};\theta_{i,n})w_{i}^{(k_{i})}. \tag{3.31}\] The entries of the matrix \(\mathcal{B}\) defined by (3.21) have the following numerical format \[\mathcal{B}_{mn}=\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{\ell,n}\prod _{i=1}^{d}\frac{1}{\beta_{i}}\sum_{k_{i}=1}^{N_{i}}\widehat{\varphi}_{i,j,m}(z _{i}^{(k_{i})};\theta_{i,m})\widehat{\varphi}_{i,\ell,n}(z_{i}^{(k_{i})}; \theta_{i,n})w_{i}^{(k_{i})}.
\tag{3.32}\] #### 3.2.3 Laguerre-Gauss quadrature scheme for the half line For the case \(\Omega_{i}=(0,+\infty)\), we use Laguerre-Gauss quadrature to assemble the matrices \(\mathcal{A}\) and \(\mathcal{B}\). The Laguerre-Gauss quadrature satisfies the following lemma. **Lemma 2**.: _[_31_, Theorem 7.1]_ _Let \(\{x^{(k)}\}_{k=0}^{N}\) be the zeros of the \((N+1)\)-th order Laguerre polynomial \(\mathcal{L}_{N+1}(x)\), and let \(\{w^{(k)}\}_{k=0}^{N}\) be given by_ \[w^{(k)}=\frac{\Gamma(N+1)}{(N+1)(N+1)!}\frac{x^{(k)}}{\big{[} \mathcal{L}_{N}(x^{(k)})\big{]}^{2}},\quad 0\leq k\leq N. \tag{3.33}\] _Then we have_ \[\int_{0}^{+\infty}p(x)e^{-x}dx=\sum_{k=0}^{N}p(x^{(k)})w^{(k)},\quad \forall p\in P_{2N+1}, \tag{3.34}\] _where \(P_{2N+1}\) denotes the set of polynomial functions of degree not greater than \(2N+1\)._ For using the Laguerre-Gauss quadrature, the \(i\)-th subnetwork of the \(\ell\)-th TNN is defined as follows \[\Phi_{i,\ell}(x_{i};\theta_{i,\ell}) = \big{(}\phi_{i,1,\ell}(x_{i};\theta_{i,\ell}),\phi_{i,2,\ell}(x_{i };\theta_{i,\ell}),\cdots,\phi_{i,p,\ell}(x_{i};\theta_{i,\ell})\big{)}\] \[= e^{-\frac{\beta_{i}x_{i}}{2}}\big{(}\varphi_{i,1,\ell}(\beta_{i} x_{i};\theta_{i,\ell}),\varphi_{i,2,\ell}(\beta_{i}x_{i};\theta_{i,\ell}),\cdots, \varphi_{i,p,\ell}(\beta_{i}x_{i};\theta_{i,\ell})\big{)},\] where \(\beta_{i}\) is a trainable parameter and \(\varphi_{i,\ell}=\big{(}\varphi_{i,1,\ell}(\beta_{i}x_{i};\theta_{i,\ell}), \varphi_{i,2,\ell}(\beta_{i}x_{i};\theta_{i,\ell}),\cdots,\varphi_{i,p,\ell}( \beta_{i}x_{i};\theta_{i,\ell})\big{)}\) is a fully-connected neural network which maps \(\mathbb{R}\) to \(\mathbb{R}^{p}\). Then the \(\ell\)-th TNN can be written as \[\Psi_{\ell}(x;\Theta_{\ell})=\sum_{j=1}^{p}c_{j,\ell}\prod_{i=1}^{ d}e^{-\frac{\beta_{i}x_{i}}{2}}\widehat{\varphi}_{i,j,\ell}(\beta_{i}x_{i}; \theta_{i,\ell}), \tag{3.35}\] where \(\widehat{\varphi}_{i,j,\ell}\) satisfies the normalization property \(\|e^{-\frac{\beta_{i}x_{i}}{2}}\widehat{\varphi}_{i,j,\ell}(\beta_{i}x_{i};\theta _{i,\ell})\|_{L^{2}(\Omega_{i})}=1\) and is defined as follows \[\widehat{\varphi}_{i,j,\ell}(\beta_{i}x_{i};\theta_{i,\ell})=\frac{\varphi_{i, j,\ell}(\beta_{i}x_{i};\theta_{i,\ell})}{\left\|e^{-\frac{\beta_{i}x_{i}}{2}} \varphi_{i,j,\ell}(\beta_{i}x_{i};\theta_{i,\ell})\right\|_{L^{2}(\Omega_{i})}}. \tag{3.36}\] For \(i=1,\cdots,d\), let \(\{z_{i}^{(k_{i})}\}_{k_{i}=1}^{N_{i}}\) and \(\{w_{i}^{(k_{i})}\}_{k_{i}=1}^{N_{i}}\) be the Laguerre-Gauss quadrature points and weights, respectively.
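Both rules of Lemmas 1 and 2 are available off the shelf; a small NumPy check on monomials whose degrees lie within the exactness range (the test integrands are our own choices):

```python
import numpy as np

N = 8  # N + 1 quadrature nodes, exact for polynomial degree up to 2N + 1

# Hermite-Gauss (Lemma 1): integrates p(x) e^{-x^2} over the whole line.
xh, wh = np.polynomial.hermite.hermgauss(N + 1)
print(np.sum(wh * xh**4), 0.75 * np.sqrt(np.pi))  # both equal 3 sqrt(pi) / 4

# Laguerre-Gauss (Lemma 2): integrates p(x) e^{-x} over the half line (0, inf).
xl, wl = np.polynomial.laguerre.laggauss(N + 1)
print(np.sum(wl * xl**4), 24.0)                   # both equal Gamma(5) = 4! = 24
```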
Under the coordinate transformation \(z_{i}=\beta_{i}x_{i}\), using the quadrature scheme (3.34), the entries of the matrix \(\mathcal{A}\) defined by (3.20) have the following numerical format \[\mathcal{A}_{mn}=\sum_{s=1}^{d}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p _{n}}c_{j,m}c_{\ell,n}\prod_{i\neq s}^{d}\frac{1}{\beta_{i}}\sum_{k_{i}=1}^{N_ {i}}\widehat{\varphi}_{i,j,m}(z_{i}^{(k_{i})};\theta_{i,m})\widehat{\varphi}_ {i,\ell,n}(z_{i}^{(k_{i})};\theta_{i,n})w_{i}^{(k_{i})}\] \[\cdot\frac{1}{\beta_{s}}\sum_{k_{s}=1}^{N_{s}}\big{(}\widehat{ \varphi}_{s,j,m}^{\prime}(z_{s}^{(k_{s})};\theta_{s,m})-\frac{1}{2}\widehat{ \varphi}_{s,j,m}(z_{s}^{(k_{s})};\theta_{s,m})\big{)}\] \[\cdot\big{(}\widehat{\varphi}_{s,\ell,n}^{\prime}(z_{s}^{(k_{s})}; \theta_{s,n})-\frac{1}{2}\widehat{\varphi}_{s,\ell,n}(z_{s}^{(k_{s})};\theta_{s,n })\big{)}w_{s}^{(k_{s})}\] \[+\sum_{s=1}^{q}\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{ \ell,n}\prod_{i=1}^{d}\frac{1}{\beta_{i}}\sum_{k_{i}=1}^{N_{i}}V_{i,s}(\frac{z _{i}^{(k_{i})}}{\beta_{i}})\widehat{\varphi}_{i,j,m}(z_{i}^{(k_{i})};\theta_{i,m})\widehat{\varphi}_{i,\ell,n}(z_{i}^{(k_{i})};\theta_{i,n})w_{i}^{(k_{i})}. \tag{3.37}\] The entries of the matrix \(\mathcal{B}\) defined by (3.21) have the following numerical format \[\mathcal{B}_{mn}=\sum_{j=1}^{p_{m}}\sum_{\ell=1}^{p_{n}}c_{j,m}c_{\ell,n}\prod _{i=1}^{d}\frac{1}{\beta_{i}}\sum_{k_{i}=1}^{N_{i}}\widehat{\varphi}_{i,j,m}( z_{i}^{(k_{i})};\theta_{i,m})\widehat{\varphi}_{i,\ell,n}(z_{i}^{(k_{i})}; \theta_{i,n})w_{i}^{(k_{i})}. \tag{3.38}\] ## 4 Numerical examples In this section, we provide several examples to investigate the performance of the TNN-based eigensolver proposed in this paper. To show the convergence behavior and accuracy of our method, we define the relative errors for the approximated eigenvalues \(\widehat{\lambda}_{\ell}\) and eigenfunctions \(\widehat{u}_{\ell}\) as follows \[\mathrm{err}_{\lambda,\ell}:=\frac{|\widehat{\lambda}_{\ell}-\lambda_{\ell}|} {|\lambda_{\ell}|},\ \mathrm{err}_{L^{2},\ell}:=\frac{\|u_{\ell}-\mathcal{P}_{\ell}u_{\ell}\|_{L^{2}( \Omega)}}{\|u_{\ell}\|_{L^{2}(\Omega)}},\ \mathrm{err}_{H^{1},\ell}:=\frac{|u_{\ell}- \mathcal{Q}_{\ell}u_{\ell}|_{H^{1}(\Omega)}}{|u_{\ell}|_{H^{1}(\Omega)}},\ \ell=1, \cdots,k, \tag{4.1}\] where \(\lambda_{\ell}\) and \(u_{\ell}\) are the reference eigenvalues and eigenfunctions. In the first two examples, the reference eigenpairs are obtained by high order finite element methods on meshes with a sufficiently small mesh size. As for the harmonic oscillator problems, the exact eigenpairs are chosen as references. In (4.1), \(\mathcal{P}_{\ell}:H_{0}^{1}(\Omega)\to M_{\ell}\) and \(\mathcal{Q}_{\ell}:H_{0}^{1}(\Omega)\to M_{\ell}\) denote the \(L^{2}(\Omega)\) projection operator and the \(H^{1}(\Omega)\) projection operator, respectively, onto the approximate eigenspace \(M_{\ell}\) corresponding to the eigenvalue \(\lambda_{\ell}\). These two projection operators are defined as follows \[\left\langle\mathcal{P}_{\ell}u,v\right\rangle_{L^{2}} = \left\langle u,v\right\rangle_{L^{2}}:=\int_{\Omega}uvdx,\quad \forall v\in M_{\ell}\ \ \mathrm{for}\ u\in H_{0}^{1}(\Omega), \tag{4.2}\] \[\left\langle\mathcal{Q}_{\ell}u,v\right\rangle_{H^{1}} = \left\langle u,v\right\rangle_{H^{1}}:=\int_{\Omega}\nabla u\cdot \nabla vdx,\quad\forall v\in M_{\ell}\ \ \mathrm{for}\ u\in H_{0}^{1}(\Omega).
\tag{4.3}\] In implementation, we use quadrature schemes similar to those in (3.23)-(3.24), (3.31)-(3.32), and (3.37)-(3.38) to compute \(\operatorname{err}_{L^{2},\ell}\) and \(\operatorname{err}_{H^{1},\ell}\), with the same tensor product quadrature points and weights as used for the loss functions, provided the reference solution \(u_{\ell}(x)\) has a low-rank representation; otherwise we only report \(\operatorname{err}_{\lambda,\ell}\). With the help of Theorem 2 in [32], the high efficiency and accuracy of computing \(\operatorname{err}_{L^{2},\ell}\) and \(\operatorname{err}_{H^{1},\ell}\) can be guaranteed. ### 4.1 Infinitesimal generators of metastable diffusion processes In this subsection, we study the eigenvalue problem from [36] associated with the operator \[\mathcal{L}_{d}=\nabla V_{d}\cdot\nabla-\Delta. \tag{4.4}\] The potential functions \(V_{d}:\mathbb{R}^{d}\to\mathbb{R}\) for \(d=2,50,100\) are defined as follows \[V_{d}(x)=V(\theta)+2(r-1)^{2}+5e^{-5r^{2}}+5\sum_{i=3}^{d}x_{i}^{2},\quad \forall x=(x_{1},x_{2},\cdots,x_{d})\in\mathbb{R}^{d}, \tag{4.5}\] where \((r,\theta)\in[0,+\infty)\times[-\pi,\pi)\) denotes the polar coordinates related to the first two dimensions of the Euclidean space, \((x_{1},x_{2})\in\mathbb{R}^{2}\), by \[x_{1}=r\cos\theta,\quad x_{2}=r\sin\theta, \tag{4.6}\] and \(V:[-\pi,\pi)\to\mathbb{R}\) is a double-well potential function defined as follows \[V(\theta)=\begin{cases}\left[1-\left(\frac{3\theta}{\pi}+1\right)^{2}\right]^ {2},&\theta\in\left[-\pi,-\frac{\pi}{3}\right),\\ \frac{1}{5}\left(3-2\cos(3\theta)\right),&\theta\in\left[-\frac{\pi}{3},\frac{ \pi}{3}\right),\\ \left[1-\left(\frac{3\theta}{\pi}-1\right)^{2}\right]^{2},&\theta\in\left[ \frac{\pi}{3},\pi\right).\end{cases} \tag{4.7}\] The reference eigenvalues for \(d=2\) are obtained by using the third order conforming finite element method with mesh size \(h=\frac{3}{1024}\sqrt{2}\) on the domain \([-3,3]^{2}\subset\mathbb{R}^{2}\). Here we use the open parallel finite element package OpenPFEM [34] to do the discretization and then the Krylov-Schur method from SLEPc [16] to solve the corresponding algebraic eigenvalue problem 1. In this way, we obtain the first three eigenvalues Footnote 1: We express our thanks to Yangfei Liao for this computation \[\lambda_{1}=0.21881493133369,\quad\lambda_{2}=0.76371970025476,\quad\lambda_{ 3}=2.79019347384363,\] as the reference values for our numerical investigation. The corresponding eigenfunctions are shown in the first column of Figure 2. In implementation, we use 3 TNNs to learn the lowest 3 eigenvalues. For the \(d=2,50,100\) cases, each TNN has depth 3 and width 20, and the rank is chosen to be \(p=10\). The Adam optimizer is employed with a learning rate of 0.003 for the first 100000 steps, and then L-BFGS is used for the subsequent 10000 steps. The final eigenvalue approximations are reported in Table 1 and the corresponding eigenfunction approximations are shown in Figure 2, where we can see that the proposed method has obviously better accuracy than that in [36]. ### 4.2 Harmonic oscillator problems In this subsection, we consider the Schrodinger equation associated with the \(d\)-dimensional harmonic oscillator, whose Hamiltonian operator reads \[H=-\frac{1}{2}\sum_{i=1}^{d}\nabla_{i}^{2}+\frac{1}{2}x^{T}Ax, \tag{4.8}\] where \(x=(x_{1},x_{2},\cdots,x_{d})^{T}\) and \(A=(a_{ij})_{d\times d}\in\mathbb{R}^{d\times d}\) is a symmetric positive definite matrix.
The exact wavefunctions and the corresponding energies (i.e., eigenvalues) of the different states can be obtained in a similar way to [4]. By the properties of symmetric positive definite matrices, there exists an orthogonal matrix \(Q=(q_{ij})_{d\times d}\in\mathbb{R}^{d\times d}\) such that \[Q^{T}AQ=\text{diag}\big{\{}\mu_{1},\mu_{2},\cdots,\mu_{d}\big{\}}, \tag{4.9}\] where \(\mu_{j}\) and \(\mathbf{q}_{j}=(q_{1j},q_{2j},\cdots,q_{dj})^{T}\), \(j=1,2,\cdots,d\), are the eigenvalues and corresponding normalized eigenvectors of the matrix \(A\). Then, under the rotation transformation \(y=Q^{T}x\), the Hamiltonian operator (4.8) in the coordinate system \((y_{1},y_{2},\cdots,y_{d})\) can be written in the following decoupled harmonic form \[H=-\frac{1}{2}\sum_{i=1}^{d}\nabla_{i}^{2}+\frac{1}{2}\sum_{i=1 }^{d}\mu_{i}y_{i}^{2}. \tag{4.10}\] The exact wavefunction of the state \((n_{1},n_{2},\cdots,n_{d})\) can then be immediately obtained as follows \[\Psi_{n_{1},n_{2},\cdots,n_{d}}(y_{1},y_{2},\cdots,y_{d})=\prod_{i =1}^{d}\mathcal{H}_{n_{i}}(\mu_{i}^{1/4}y_{i})e^{-\mu_{i}^{1/2}y_{i}^{2}/2}, \tag{4.11}\] and the corresponding exact energy is \[E_{n_{1},n_{2},\cdots,n_{d}}=\sum_{i=1}^{d}\Big{(}\frac{1}{2}+n_ {i}\Big{)}\mu_{i}^{1/2}, \tag{4.12}\] where \(y_{i}=\sum_{j=1}^{d}q_{ij}x_{j}\) and \(\mathcal{H}_{n}\) denotes the physicists' Hermite polynomial [1]. In all three examples of this subsection, we adopt the same parameters \(a_{ij}\) as in [25] and calculate the lowest 16 energy states, as is done in [25]. The aim here is to demonstrate the performance of the proposed approach through these comparisons. #### 4.2.1 Two-dimensional harmonic oscillator We first examine the simple case of the two-dimensional harmonic oscillator problem \[-\frac{1}{2}\Big{(}\nabla_{1}^{2}+\nabla_{2}^{2}\Big{)}\Psi+ \frac{1}{2}(x_{1}^{2}+x_{2}^{2})\Psi=E\Psi. \tag{4.13}\] Since the problem is essentially decoupled, the exact eigenfunction has a low-rank representation in the coordinates \((x_{1},x_{2})\) as follows \[\Psi_{n_{1},n_{2}}=\mathcal{H}_{n_{1}}(x_{1})e^{-x_{1}^{2}/2} \mathcal{H}_{n_{2}}(x_{2})e^{-x_{2}^{2}/2}. \tag{4.14}\] The corresponding exact energy is \[E_{n_{1},n_{2}}=\Big{(}\frac{1}{2}+n_{1}\Big{)}+\Big{(}\frac{1}{2}+n_{2}\Big{)}. \tag{4.15}\] We use 16 TNNs to learn the lowest 16 energy states; each subnetwork of a single TNN has depth 3 and width 50, and the rank is \(p=20\). The Adam optimizer is employed with a learning rate of 0.001 for the first 500000 epochs, and then L-BFGS is used for the subsequent 10000 steps to produce the final result. The Hermite-Gauss quadrature scheme with 99 points is adopted in each dimension. The corresponding numerical results are shown in Table 2, where we can see that the proposed TNN-based machine learning method has obviously better accuracy than that in [25], where the accuracy of the Monte-Carlo based machine learning methods is about 1.0e-2. #### 4.2.2 Two-dimensional coupled harmonic oscillator Since TNN-based methods carry out operations separately in each dimension, it is not unexpected that our method has an impressive performance in the completely decoupled case. To show the generality of the TNN-based machine learning method, the next two examples compute the multiple states of operators with coupled oscillators.
\begin{table} \begin{tabular}{c c c c c c c} \hline \(n\) & \((n_{1},n_{2})\) & Exact \(E_{n}\) & Approx \(E_{n}\) & err\({}_{E}\) & err\({}_{L^{2}}\) & err\({}_{H^{1}}\) \\ \hline 0 & (0,0) & 1.0 & 1.000000000000441 & 4.414e-13 & 2.935e-07 & 7.314e-07 \\ 1 & (0,1) & 2.0 & 2.00000000013887 & 6.944e-11 & 5.889e-06 & 1.021e-05 \\ 2 & (1,0) & 2.0 & 2.000000000369235 & 1.846e-10 & 9.371e-06 & 1.550e-05 \\ 3 & (0,2) & 3.0 & 3.000000000851601 & 2.839e-10 & 1.524e-05 & 2.180e-05 \\ 4 & (1,1) & 3.0 & 3.000000001492529 & 4.975e-10 & 1.970e-05 & 2.958e-05 \\ 5 & (2,0) & 3.0 & 3.000000005731970 & 1.911e-09 & 3.964e-05 & 5.852e-05 \\ 6 & (0,3) & 4.0 & 4.000000000287978 & 7.200e-11 & 8.690e-06 & 1.312e-05 \\ 7 & (1,2) & 4.0 & 4.000000000727938 & 1.820e-10 & 1.374e-05 & 1.838e-05 \\ 8 & (2,1) & 4.0 & 4.000000001748148 & 4.370e-10 & 2.272e-05 & 3.100e-05 \\ 9 & (3,0) & 4.0 & 4.000000005090556 & 1.273e-09 & 3.645e-05 & 5.110e-05 \\ 10 & (1,3) & 5.0 & 5.000000000117699 & 2.354e-11 & 5.199e-06 & 2.009e-05 \\ 11 & (2,2) & 5.0 & 5.000000000746078 & 1.492e-10 & 1.821e-05 & 3.091e-05 \\ 12 & (3,1) & 5.0 & 5.000000001093248 & 2.186e-10 & 1.973e-05 & 3.467e-05 \\ 13 & (0,4) & 5.0 & 5.000000001562438 & 3.125e-10 & 2.651e-05 & 2.520e-05 \\ 14 & (4,0) & 5.0 & 5.000000004861336 & 9.723e-10 & 4.059e-05 & 4.161e-05 \\ 15 & (3,2) & 6.0 & 6.000000043151862 & 7.192e-09 & 3.095e-05 & 3.760e-05 \\ \hline \end{tabular} \end{table} Table 2: Errors of the two-dimensional harmonic oscillator problem for the 16 lowest energy states.

First, we consider the following two-dimensional eigenvalue problem of an operator with a coupled harmonic oscillator \[-\frac{1}{2}\Big{(}\nabla_{1}^{2}+\nabla_{2}^{2}\Big{)}\Psi+\frac{1}{2}(a_{11}x_{ 1}^{2}+2a_{12}x_{1}x_{2}+a_{22}x_{2}^{2})\Psi=E\Psi. \tag{4.16}\] The coefficients \(a_{11},a_{12},a_{22}\) are chosen as in [25]. Since the coefficients in [25] are rounded to four decimal places, we use the rounded values as exact coefficients, that is, \(a_{11}=0.8851\), \(a_{12}=-0.1382\), \(a_{22}=1.1933\). The exact eigenfunction has the following representation in the coordinates \((y_{1},y_{2})\) \[\Psi_{n_{1},n_{2}}(y_{1},y_{2})=\mathcal{H}_{n_{1}}(\mu_{1}^{1/4}y_{1})e^{-\mu _{1}^{1/2}y_{1}^{2}/2}\cdot\mathcal{H}_{n_{2}}(\mu_{2}^{1/4}y_{2})e^{-\mu_{2}^ {1/2}y_{2}^{2}/2}, \tag{4.17}\] where \(y_{1}=-0.9339352418x_{1}-0.3574422527x_{2}\), \(y_{2}=0.3574422527x_{1}-0.9339352418x_{2}\), \(\mu_{1}=0.8322071257\), \(\mu_{2}=1.2461928742\), and the two quantum numbers \(n_{1},n_{2}\) take the values \(0,1,2,\cdots\). The exact energy for the state \((n_{1},n_{2})\) is \[E_{n_{1},n_{2}}=\Big{(}\frac{1}{2}+n_{1}\Big{)}\mu_{1}^{1/2}+\Big{(}\frac{1}{ 2}+n_{2}\Big{)}\mu_{2}^{1/2}. \tag{4.18}\] We use 16 TNNs to learn the lowest 16 energy states. In each TNN, the rank is chosen to be \(p=20\) and the subnetwork is built with depth 3 and width 50. The Adam optimizer is employed with a learning rate of 0.001 for the first 500000 steps, and then L-BFGS is used for the subsequent 10000 steps. In each direction, a 99-point Hermite-Gauss quadrature scheme is used for the integration. The corresponding numerical results are collected in Table 3, where we can see that the proposed numerical method also obtains high accuracy for the Schrodinger equation with a coupled harmonic oscillator.
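The reference column of Table 3 can be reproduced directly from (4.18); a minimal NumPy sketch with the rounded coefficients above:

```python
import numpy as np
from itertools import product

A = np.array([[0.8851, -0.1382],
              [-0.1382, 1.1933]])
mu = np.linalg.eigvalsh(A)  # ascending: [0.8322071..., 1.2461928...]

# Exact energies (4.18): E_{n1,n2} = (1/2 + n1) sqrt(mu_1) + (1/2 + n2) sqrt(mu_2).
levels = sorted(
    ((0.5 + n1) * np.sqrt(mu[0]) + (0.5 + n2) * np.sqrt(mu[1]), (n1, n2))
    for n1, n2 in product(range(8), repeat=2))
for E, state in levels[:5]:
    print(state, E)  # lowest states, matching the "Exact E_n" column of Table 3
```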
The corresponding approximate wavefunctions obtained by the TNN-based machine learning method and the exact wavefunctions are shown in Figure 3, which shows that the approximate wavefunctions have very good accuracy even for the coupled harmonic oscillator. #### 4.2.3 Five-dimensional coupled harmonic oscillator Next, we investigate the performance of the proposed method for the case of a five-dimensional coupled harmonic oscillator. Here, the Hamiltonian operator is defined as in (4.8) with the matrix \(A\) replaced by \[A=\begin{bmatrix}1.05886042&0.01365034&0.09163945&0.11975290&0.05625013\\ 0.01365034&1.09613742&0.10887930&0.07448974&0.07407652\\ 0.09163945&0.10887930&1.00935913&0.05588543&0.08968956\\ 0.11975290&0.07448974&0.05588543&1.17627129&0.06049045\\ 0.05625013&0.07407652&0.08968956&0.06049045&0.94969417\end{bmatrix}. \tag{4.19}\] Then the exact energy is \[E_{n_{1},n_{2},\cdots,n_{5}}=\sum_{i=1}^{5}\Big{(}\frac{1}{2}+n_{i}\Big{)}\mu_ {i}^{1/2}, \tag{4.20}\] where \(\mu_{1}=0.88021303\), \(\mu_{2}=0.90973982\), \(\mu_{3}=1.02312382\), \(\mu_{4}=1.10243017\), \(\mu_{5}=1.37481559\), and each of the five quantum numbers \(n_{i}\), \(i=1,2,\cdots,5\), takes the values \(0,1,2,\cdots\). We use 16 TNNs to learn the lowest 16 energy states. A larger TNN structure than in the two-dimensional example is used: the rank is chosen to be \(p=50\) and each subnetwork is built with depth 3 and width 100. The Adam optimizer is employed with a learning rate of 0.001 for 500000 epochs; the final result is then produced by 10000 subsequent L-BFGS steps. The same 99-point Hermite-Gauss quadrature scheme as in the last two examples is sufficient for this example. Table 4 shows the corresponding numerical results, where we can see that the proposed numerical method obtains obviously better accuracy than that in [25]. Figure 4 shows the corresponding approximate wavefunctions obtained by the TNN-based machine learning method. From Figure 4, the proposed numerical method also has good accuracy for the five-dimensional eigenvalue problem of the coupled harmonic oscillator. ### 4.3 Energy states of the hydrogen atom In this subsection, we study the energy states of the hydrogen atom.
\begin{table} \begin{tabular}{c c c c c c c} \hline \(n\) & \((n_{1},n_{2})\) & Exact \(E_{n}\) & Approx \(E_{n}\) & err\({}_{E}\) & err\({}_{L^{2}}\) & err\({}_{H^{1}}\) \\ \hline 0 & (0,0) & 1.014291981649766 & 1.014291988589516 & 6.842e-09 & 2.801e-05 & 9.650e-05 \\ 1 & (1,0) & 1.926545852963290 & 1.926545854461407 & 7.776e-10 & 1.264e-05 & 3.167e-05 \\ 2 & (0,1) & 2.130622073635773 & 2.130622076362180 & 1.280e-09 & 1.766e-05 & 4.081e-05 \\ 3 & (2,0) & 2.838799724276814 & 2.838799728095222 & 1.345e-09 & 2.099e-05 & 4.402e-05 \\ 4 & (1,1) & 3.042875944949297 & 3.042875947694890 & 9.023e-10 & 1.633e-05 & 3.673e-05 \\ 5 & (0,2) & 3.246952165621781 & 3.246952166999784 & 4.244e-10 & 1.171e-05 & 2.485e-05 \\ 6 & (3,0) & 3.751053595590338 & 3.751053597870394 & 6.078e-10 & 1.500e-05 & 3.116e-05 \\ 7 & (2,1) & 3.955129816262821 & 3.955129818022521 & 4.449e-10 & 1.289e-05 & 2.784e-05 \\ 8 & (1,2) & 4.159206036935306 & 4.159206038606588 & 4.018e-10 & 1.281e-05 & 2.388e-05 \\ 9 & (0,3) & 4.363282257607788 & 4.363282258514639 & 2.078e-10 & 9.008e-06 & 2.002e-05 \\ 10 & (4,0) & 4.663307466903863 & 4.663307470243584 & 7.162e-10 & 1.856e-05 & 3.314e-05 \\ 11 & (3,1) & 4.867383687576346 & 4.867383691191087 & 7.426e-10 & 1.987e-05 & 3.338e-05 \\ 12 & (2,2) & 5.071459908248830 & 5.071459911990555 & 7.378e-10 & 1.992e-05 & 3.220e-05 \\ 13 & (1,3) & 5.275536128921312 & 5.275536131659159 & 5.190e-10 & 1.741e-05 & 2.794e-05 \\ 14 & (0,4) & 5.479612349593796 & 5.479612351630867 & 3.718e-10 & 1.421e-05 & 2.538e-05 \\ 15 & (5,0) & 5.575561338217387 & 5.575561344662695 & 1.156e-09 & 2.769e-05 & 3.627e-05 \\ \hline \end{tabular} \end{table} Table 3: Errors of the two-dimensional coupled harmonic oscillator problem for the 16 lowest energy states.

Figure 3: The contour plots of the first 16 eigenfunctions for the two-dimensional coupled harmonic oscillator example in the coordinates \((x_{1},x_{2})\). The two dashed lines are \(y_{1}=0\) and \(y_{2}=0\), respectively.

The wave function \(\Psi(x,y,z)\) of the hydrogen atom satisfies the following Schrodinger equation \[-\frac{1}{2}\Delta\Psi-\frac{\Psi}{|\mathbf{r}|}=E\Psi, \tag{4.21}\] where \(|{\bf r}|=(x^{2}+y^{2}+z^{2})^{1/2}\). The exact energies of the hydrogen atom are \(E_{n}=-\frac{1}{2n^{2}}\), and there are \(n^{2}\) states with energy \(E_{n}\). In order to compute the singular integrals of the Coulomb potential term \(1/|{\bf r}|\), we adopt spherical coordinates \((r,\theta,\varphi)\) with density \(r^{2}\sin\theta\). The wave function \(\Psi({\bf r})\) is then written as \(\Psi(r,\theta,\varphi)\), and the Laplacian \(\Delta\) has the following expression \[\Delta\Psi = \frac{\partial^{2}\Psi}{\partial r^{2}}+\frac{2}{r}\frac{\partial \Psi}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}\Psi}{\partial\theta^{2}}+ \frac{\cos\theta}{r^{2}\sin\theta}\frac{\partial\Psi}{\partial\theta}+\frac{ 1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}\Psi}{\partial\varphi^{2}} \tag{4.22}\] \[= \frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{ \partial\Psi}{\partial r}\right)+\frac{1}{r^{2}\sin\theta}\frac{\partial}{ \partial\theta}\left(\sin\theta\frac{\partial\Psi}{\partial\theta}\right)+ \frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}\Psi}{\partial\varphi^{2}}.\] The 99-point Laguerre-Gauss quadrature is used in the direction \(r\), and the 16-point Legendre-Gauss quadrature with subinterval length \(\frac{\pi}{64}\) is used in the directions \(\theta\) and \(\varphi\).
The TNN structure is defined as follows \[\Psi(r,\theta,\varphi)=\sum_{j=1}^{p}c_{j}\phi_{r,j}(\beta r)e^{-\frac{\beta r}{ 2}}\cdot\phi_{\theta,j}(\theta)\cdot\big{(}\phi_{\varphi,j}(\varphi)\sin( \varphi/2)+\gamma_{j}\big{)}, \tag{4.23}\] where \(\phi_{r}=(\phi_{r,1},\cdots,\phi_{r,p})\), \(\phi_{\theta}=(\phi_{\theta,1},\cdots,\phi_{\theta,p})\) and \(\phi_{\varphi}=(\phi_{\varphi,1},\cdots,\phi_{\varphi,p})\) are three FNNs with depth 3 and width 50, and \(p=20\). The activation function is selected as \(\sin(x)\). The trainable parameters \(\gamma_{j}\) are introduced to satisfy the periodic boundary condition \(\Psi(r,\theta,0)=\Psi(r,\theta,2\pi)\). In our implementation, we use 15 TNNs to learn the lowest 15 energy states, each defined as in (4.23). The Adam optimizer is employed with a learning rate of 0.0003 for 100000 epochs, followed by 10000 steps of L-BFGS to produce the final result. Table 5 shows the final energy approximations and corresponding errors.
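For concreteness, the ansatz (4.23) can be sketched in PyTorch as follows. The class names are illustrative, the scaling factor \(\beta\) is left as a free hyperparameter since its value is not stated in the text, and details such as initialization and normalization are omitted:

```python
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Depth-3 subnetwork with sin activation, mapping 1 input to p outputs."""
    def __init__(self, p, width=50):
        super().__init__()
        self.l1 = nn.Linear(1, width)
        self.l2 = nn.Linear(width, width)
        self.l3 = nn.Linear(width, p)

    def forward(self, x):
        return self.l3(torch.sin(self.l2(torch.sin(self.l1(x)))))


class HydrogenTNN(nn.Module):
    """Rank-p tensor neural network ansatz (4.23) for Psi(r, theta, varphi)."""
    def __init__(self, p=20, beta=1.0):  # beta is an assumed hyperparameter
        super().__init__()
        self.phi_r, self.phi_t, self.phi_p = MLP(p), MLP(p), MLP(p)
        self.c = nn.Parameter(torch.randn(p))      # coefficients c_j
        self.gamma = nn.Parameter(torch.randn(p))  # periodicity shifts gamma_j
        self.beta = beta

    def forward(self, r, theta, varphi):
        # Inputs have shape (N, 1); each factor acts on one coordinate.
        fr = self.phi_r(self.beta * r) * torch.exp(-self.beta * r / 2)
        ft = self.phi_t(theta)
        fp = self.phi_p(varphi) * torch.sin(varphi / 2) + self.gamma
        return (self.c * fr * ft * fp).sum(dim=-1)
```

The factor \(\sin(\varphi/2)\) vanishes at \(\varphi=0\) and \(\varphi=2\pi\), so the \(\varphi\)-factor reduces to \(\gamma_{j}\) at both endpoints, enforcing the periodic boundary condition stated above.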
\begin{table} \begin{tabular}{c c c c c} \hline \hline \(n\) & \((n_{1},n_{2},n_{3},n_{4},n_{5})\) & Exact \(E_{n}\) & Approx \(E_{n}\) & err\({}_{E}\) \\ \hline 0 & (0,0,0,0,0) & 2.562993697776131 & 2.562993699775476 & 7.801e-10 \\ 1 & (1,0,0,0,0) & 3.501190387362160 & 3.501190399601748 & 3.496e-09 \\ 2 & (0,1,0,0,0) & 3.516796517763949 & 3.516796531293733 & 3.847e-09 \\ 3 & (0,0,1,0,0) & 3.574489532179441 & 3.574489543933587 & 3.288e-09 \\ 4 & (0,0,0,1,0) & 3.612960443213281 & 3.612960454781468 & 3.202e-09 \\ 5 & (0,0,0,0,1) & 3.735519003914085 & 3.735519020687214 & 4.490e-09 \\ 6 & (2,0,0,0,0) & 4.439387076948189 & 4.439387114214263 & 8.394e-09 \\ 7 & (1,1,0,0,0) & 4.454993207349979 & 4.454993243128091 & 8.031e-09 \\ 8 & (0,2,0,0,0) & 4.470599337751768 & 4.470599382176197 & 9.937e-09 \\ 9 & (1,0,1,0,0) & 4.512686221765470 & 4.512686265259870 & 9.638e-09 \\ 10 & (0,1,1,0,0) & 4.528292352167259 & 4.528292394259157 & 9.295e-09 \\ 11 & (1,0,0,1,0) & 4.551157132799310 & 4.551157174501202 & 9.163e-09 \\ 12 & (0,1,0,1,0) & 4.566763263201100 & 4.566763304302355 & 9.000e-09 \\ 13 & (0,0,2,0,0) & 4.585985366582751 & 4.585985402475187 & 7.827e-09 \\ 14 & (0,0,1,1,0) & 4.624456277616591 & 4.624456313384630 & 7.735e-09 \\ 15 & (0,0,0,2,0) & 4.662927188650432 & 4.662927231227110 & 9.131e-09 \\ \hline \hline \end{tabular} \end{table} Table 4: Errors of the five-dimensional coupled harmonic oscillator problem for the 16 lowest energy states. Figure 4: The contour plots of the first 16 eigenfunctions for the five-dimensional coupled harmonic oscillator example. ## 5 Conclusions In this paper, based on the deep Ritz method, we design a type of TNN-based machine learning method to compute the leading multi-eigenpairs of high dimensional eigenvalue problems. The most important advantage of TNN is that the high dimensional integrations of TNN functions can be calculated with high accuracy and efficiency. Based on this high accuracy and efficiency of the high dimensional integration, we can build the corresponding machine learning method for solving high dimensional problems with high accuracy. The presented numerical examples show that the proposed machine learning method obtains noticeably better accuracy than Monte-Carlo-based machine learning methods. In our numerical implementation, we also find that the accuracy and stability of the machine learning process deserve particular attention; both are necessary to reach the final high accuracy when solving high dimensional problems with machine learning methods. Furthermore, the proposed TNN and the corresponding machine learning method can be extended to other high dimensional problems such as Schrodinger equations, Boltzmann equations, Fokker-Planck equations, stochastic equations, and multiscale problems. This means the TNN-based machine learning method can enable more practical applications in physics, chemistry, biology, materials science, and engineering. These extensions will be our future work.
2301.00969
Boosting Neural Networks to Decompile Optimized Binaries
Decompilation aims to transform a low-level program language (LPL) (e.g., binary file) into its functionally-equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Ying Cao, Ruigang Liang, Kai Chen, Peiwei Hu
2023-01-03T06:45:54Z
http://arxiv.org/abs/2301.00969v1
# Boosting Neural Networks to Decompile Optimized Binaries ###### Abstract. Decompilation aims to transform a low-level program language (LPL) (e.g., binary file) into its functionally-equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named _NeurDP_, that targets compiler-optimized binaries. _NeurDP_ uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that _NeurDP_ can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
However, existing neural decompilers still suffer a severe drawback: _none of these works can appropriately handle the decompilation of optimized code._ As compiler optimization is ubiquitously used, the ability to decompile optimized code is essential for the practical application of neural-based approaches. Through extensive analysis and experimentation, we found that it is not easy to train an end-to-end decompilation model that can handle optimized LPL. Since deep learning models are data-driven, a high-quality training set is critical to the model's performance. Most models used in the source code or reverse engineering domains rely on large, high-quality supervised or unsupervised datasets. In contrast, there are very few mature datasets in the field of decompilation, and datasets used in previous neural-based decompilation studies are not built for the decompilation of optimized LPL. Without a well-labeled dataset, it is difficult for the model to learn the mapping rules between HPL and LPL. Although many open-source projects exist in the real world, the source code and binary code cannot be completely matched at the statement level due to code optimization (e.g., dead code elimination), making it inaccurate to directly use source code as the labels for the optimized binaries. We summarize the challenges as follows. **Challenges. C1:** It is well known that statements of HPL are often significantly refactored during compiler optimization, which makes it challenging to match the semantics of LPL and HPL exactly. For example, dead code elimination causes certain statements in HPL not to appear in LPL. Loop unwinding can cause some code to appear multiple times in the binary but only once in the source code. These optimization strategies all cause the code structure and textual semantic information of LPL to differ substantially from HPL. In some cases, textually similar HPL codes (only some variable names differ) can correspond to completely different LPL codes, and vice versa. Therefore, it is not feasible to directly train end-to-end models using HPL as the label of LPL, which makes it challenging to capture the decompilation rules. **C2:** Splitting LPL and HPL into code fragments with correct correspondence is a nontrivial task. Previous work typically utilizes functions or basic blocks (BBs) as input units for training neural models. However, the number of instructions in a function or a BB can be very large (up to 1,000 instructions), which is hard for neural network models to handle appropriately. Therefore, it is essential to split the BB into finer-grained units, which can effectively reduce the model's difficulty in learning the decompilation rules. One straightforward method is to split the LPL or HPL based on debug information.
However, the LPL and HPL mapped by the debug information are inaccurate, especially for optimized binaries. For example, the dead code in the HPL will also be mapped onto the LPL along with the live code. Another straightforward way is to set a maximum fragment length and split the basic block into fragments. However, several statements in a fragment may carry more than one independent feature. For example, there are three independent features (data flows) in the code segment \(a=0;\ b=c+d;\ call(c)\). Thus it is difficult for the model to properly encode such a fragment into a single vector representing its function or semantics. Splitting data dependency graphs (DDGs) may solve this problem, but it is also a difficult task. Worse still, the DDGs of LPL and HPL are quite different because of compiler optimization. **Our approach.** In this paper, we propose an NN-based decompilation framework called NeurDP1. To address C1, _NeurDP_ uses a neural network model to translate LPL into an optimized IR (the IR decompiler) instead of directly translating LPL into HPL. During compilation, the compiler first generates the intermediate representation (IR) code and performs most of the optimization strategies on the IR. Therefore, the structural differences between the optimized IR and LPL are much smaller than those between HPL and LPL. Compared to previous end-to-end neural decompilers, _NeurDP_ can thus cope with the decompilation of compiler-optimized LPL. Finally, _NeurDP_ converts IR statements to HPL statements directly. Footnote 1: NeurDP (Neural Decompilation) Specifically, to train a well-performing IR decompiler model, we design a splitting technique called Optimal Translation Unit (OTU) to address C2. _OTU_ splits BBs into smaller pairs of LPL and _HIR_ fragments. The statements in each fragment have data dependencies and can be synthesized into one feature. _OTU_ helps build a high-quality training set for our NN model. To evaluate the accuracy of _NeurDP_, we use several programs randomly generated by our tool, which is developed based on cfile (Zhu et al., 2017) and regular expressions, comprising 500 lines of code. Since our goal is to boost the neural network's ability for decompilation, we compare _NeurDP_ with related studies using neural networks (e.g., Coda (Coda, 2018) and Neutron (Beng et al., 2019)). Experimental results show that _NeurDP_ is 5.8%-27.8% more accurate than Coda on unoptimized code. Moreover, _NeurDP_ can handle compiler-optimized code well, while Coda cannot handle it at all. _NeurDP_ can decompile optimized binaries with 45.21% higher accuracy than another neural decompiler, Neutron (Beng et al., 2019). According to our evaluation, the introduction of the _OTU_ and IR mechanisms in our model improves the accuracy by 4.1%-71.23% compared to applying the model directly to the optimized code. **Contributions.** Our main contributions are outlined below: \(\bullet\) We design a novel neural machine decompilation technique. It is the first neural-based decompiler that can handle compiler-optimized code. \(\bullet\) We design an optimal translation unit (OTU) scheme, which can help other researchers form a sound dataset for training the IR decompilation model or other applications. \(\bullet\) We implement our techniques and conduct extensive evaluations. The results show that _NeurDP_ is much better than the state-of-the-art neural-based decompilers, especially for optimized code. We release our dataset and the NN parameters on GitHub2.
Footnote 2: [https://github.com/zjiangcogito/neur-dp-data.git](https://github.com/zjiangcogito/neur-dp-data.git) ## 2. Background ### Compilation and Optimization Compilation translates HPL (e.g., C/C++) into LPL (e.g., machine code) that can be run on the target CPU (e.g., X86, ARM). Due to the differences between the two PLs, information unnecessary for the CPU (e.g., symbols) is usually removed. Also, in this process, optimization technologies are designed to minimize or maximize some attributes of the executable program, e.g., to reduce the program's memory usage. Note that the execution results of the optimized target program should be the same as those of the original program without optimization. Optimization usually has several levels (e.g., from O0 to O3 for the compiler gcc3). The higher the optimization level, the greater the difference between the compiled binary code and the source code. Optimization makes it more difficult to generate decompiled code using deep learning models. For example, the optimization could look for redundant operations among lines of code and combine them, or compute some operations at compile time rather than at run time. This would change the structure of the HPL. The optimization process is usually irreversible. Also, the optimization strategies are very diverse, even for the same type of operation. For example, in Figure 1, the optimized target code for a division operation can take different forms depending on the operand types. Specifically, if the operand is a variable, the translated code is sdiv. However, when the operand is changed to an immediate, the target code is mov, movk, smull, lsr, asr, add, which does not even contain a division instruction. Footnote 3: [https://gcc.gnu.org/](https://gcc.gnu.org/) ### Decompilation Decompilation is a technique that transforms a compiled executable program or ASM (LPL) into a functionally equivalent HPL (Sakamoto et al., 2016). As mentioned previously, to decompile code, analysts would make many heuristic rules to help lift binary code to source code (Katz et al., 2017; Katz et al., 2017; Katz et al., 2017). However, generalizing the rules is challenging, since instruction set architectures (ISAs), code structures, and optimization strategies are complex. Experts need to summarize the code changes brought by many optimization strategies and handcraft the corresponding decompilation rules. For example, in Figure 1, under different optimization levels, the operation var1=var1/-123; could be compiled into different types of instructions, e.g., mov, movk, smull, lsr, asr, add. In this case, the developer needs to handcraft rules to analyze the data dependencies of these instructions and determine whether they represent a division statement. The situation worsens when new operations are added or new optimization strategies are developed, introducing new rules and impacting the old ones. What is more, this may mislead a neural-based decompiler into translating the similar code smull, lsr, asr, add, which does not represent a division operation, into sdiv. Obtaining a good model requires training on a large-scale dataset with high-quality labels. However, directly using the source code as the label for optimized binary decompilation is inaccurate due to the gap between the source code and the optimized code. Existing decompilation tools typically design an intermediate representation (IR) as a bridge between the LPL and the HPL.
While converting IR to HPL is a relatively easy task (Beng et al., 2017), the rules for translating LPL to IR rely on expert analysis and definitions. Moreover, as each tool proposes its own IR and has different definitions of micro-operations, rule-based decompilers suffer from poor generalizability and scalability. To solve these issues, researchers proposed neural-based decompilation (Katz et al., 2017; Katz et al., 2017; Katz et al., 2017). Katz et al. (Katz et al., 2017) adopted methods from NMT and formulated decompilation as a language translation task, aiming to overcome the bottleneck of rule-based approaches. Coda (Coda, 2018) and Neutron (Katz et al., 2017) are designed to learn the mapping rules automatically. The neural-based approaches bring a new idea to program decompilation. However, previous neural-based methods all learn a direct mapping from LPL to HPL or to the abstract syntax trees (ASTs) of HPL. In addition, none of the current neural-based approaches can handle optimized code, mainly due to the unavailability of a high-quality dataset of optimized code. ## 3. Approach We propose a novel neural decompilation approach, _NeurDP_, that can handle compiler-optimized code. _NeurDP_ first translates LPL to _HIR_ using a GNN-based IR decompiler model and then recovers the _HIR_ code to HPL. Below we elaborate on the design of _NeurDP_. Figure 1. An example showing different compiler optimization levels ### Overview The overview of _NeurDP_ is shown in Figure 2; it aims to decompile the LPL into functionally equivalent C-like HPL. The details of _NeurDP_ are shown in Figure 6. Considering the large gap between the LPL and HPL, we introduce an IR named _HIR_ as a bridge. The IR is optimized by the compiler front end. Using the optimized IR as the model's target can reduce the difficulty of model learning, since the model no longer needs to learn to reverse the optimization strategies in the compiler's front end. In the data construction phase, we first disassemble the binary file, identify the code sections from the binary, and retrieve the assembly code of all functions. Then, _NeurDP_ gets the control flow graph (CFG) for each function and the assembly code of each basic block, following previous work. The difficulties of directly mapping LPL to HPL lie in that (i) the instructions in HPL and LPL have poor correspondence (e.g., redundant or missing operations), and (ii) each function has many instructions, making it difficult for the neural network models to learn. In order to solve these problems and facilitate effective learning, we introduce an intermediate representation (_HIR_) that has better instruction correspondence with the LPL and use _OTU_ to split the basic blocks into smaller units. **Intermediate Representation.** _LIR_, designed as the model's input, is lifted from LPL by removing machine-related features such as registers. To get _LIR_, we first change LPL to SSA form, then use optimization strategies similar to constant (register) propagation to eliminate as many registers as possible. _LIR_ maintains almost the same syntax as LPL (<opcode> <opdes>, <opsrc1>, ...). Table 2 shows some of the syntax templates of _LIR_. For _HIR_, we extract the operands and opcodes from LLVM IR and rewrite them automatically according to the syntax (<opdes> = <opcode> <opsrc1>, ...), which is a simplified scheme of LLVM IR. It is feasible to directly use LLVM IR instead of HPL as the model's output.
However, a model that converts _LIR_ to LLVM IR instead of _HIR_ needs to learn too much additional information (such as data types), which could complicate the model structure and make training such a model extremely difficult. Therefore, we choose to construct a model that converts _LIR_ to _HIR_, which is relatively simple (though training this model is still not straightforward). Table 3 lists part of the instruction templates we use for _HIR_ and the corresponding LLVM IR. Figure 3 (c) shows the _HIR_ generated by _NeurDP_. \begin{table} \begin{tabular}{l|l|l} \hline & **LPL (ARM64)** & **LIR** \\ \hline **Return** & ret & ret x0 \\ **Unconditional Branch** & bl label & bl x0, label[, SRC[, SRC...]] \\ **Conditional Branch** & cmp SRC, SRC; b.COND label1 & b COND, label1, label2, SRC, SRC \\ **Store Register** & str SRC, DST & str DST, SRC \\ **Arithmetic (shifted register)** & add DST, SRC, SRC[, lsl IMM] & add DST, SRC, SRC \\ **Move (wide immediate)** & movk DST, IMM[, lsl IMM] & mov DST, IMM \\ \hline \end{tabular} \end{table} Table 2. LIR syntax templates \begin{table} \begin{tabular}{l|l} \hline **LLVM IR** & **NeurDP HIR** \\ \hline <result> = sdiv <ty> <op1>, <op2> & <result> = sdiv <op1>, <op2> \\ <result> = add nuw nsw <ty> <op1>, <op2> & <result> = add <op1>, <op2> \\ <result> = fsub [fast-math flags] <ty> <op1>, <op2> & <result> = fsub <op1>, <op2> \\ <result> = icmp <cond> <ty> <op1>, <op2> & <result> = icmp <cond> <op1>, <op2> \\ switch <intty> <value>, label <default> [ <intty> <val>, label <dest> ... ] & switch <value>, <default> [ <val>, <dest> ... ] \\ \hline \end{tabular} \end{table} Table 3. HIR syntax templates **Optimal Translation Unit.** _NeurDP_ splits the basic block into smaller units that allow the model to learn the mapping rules between _LIR_ and _HIR_ instructions easily. A unit in _LIR_ should be functionally equivalent to the corresponding unit in _HIR_. One may use a fixed-length translation unit (TU) to split a function into units. However, the units generated in this way may not be functionally equivalent. For example, in Figure 3, the madd instruction in (b) corresponds to the computation of $mul and $add in (c). Nevertheless, the two instructions are located far apart and are hard to include in a fixed-length TU. Also, a large TU would include unrelated instructions that cannot be paired. We assume that a basic block is a black box, as shown in Figure 4. We find that most optimization strategies do not change the output of a basic block, and we observe that the output of a basic block usually contains multiple variables whose data dependency graphs within the basic block often overlap. In Figure 4, the regions with different colors, corresponding to the variables a, b, and n, represent their data dependency graphs. To ensure that optimizing the data dependencies of one variable does not affect the results of the other variables, the compiler usually optimizes the overlapping and independent parts of the DDG separately. Therefore, we consider the corresponding parts in the DDGs of _HIR_ and _LIR_ to have the same semantics. Based on this observation, we design the Optimal Translation Unit (OTU) to divide the overlapping and independent parts of each dependency path of the basic block, which consists of two steps. Figure 4. Black box of basic block Step 1: _OTU_ divides a basic block into multiple non-overlapping units. Starting from the input variables of the basic block, _OTU_ traverses the entire DDG of the basic block and marks all instructions with two or more out edges as unit boundaries. _OTU_ then obtains independent non-overlapping units based on these boundaries. After that, a DDG between units (UDG) can be constructed according to the data dependencies between statements. As shown in Figure 5 (a), _OTU_ first divides the DDG of _LIR_ into five non-overlapping units. Each unit is regarded as a node of the UDG, and the dotted lines are dependency edges. In the same way, Figure 5 (c) shows the UDG generated for _HIR_. Step 2: _OTU_ partially merges the units divided in Step 1, because there can be units whose out edges all point to one unit. For example, there are 2 out edges between TU1a and TU1b in Figure 5 (a). This is due to compiler optimizations, such as the division optimization in Figure 5, which causes the operations on a variable to be divided into the 3 units TU1a, TU1b, and TU1c. We iteratively combine such units until no unit in the UDG has all of its out edges pointing to the same unit. For example, in Figure 5, (b) is the result of merging (a). **Training Dataset.** Since the neural model requires labeled data in the training phase, we use _OTU_ to partition both the basic blocks of _LIR_ and _HIR_ when building the training set. Due to the optimization strategies of the compiler, the CFG of _HIR_ often does not precisely match the CFG of LPL or _LIR_. Inaccurate matching between the basic blocks would lead to inaccurate labeling of the training set. To avoid this problem, we choose functions that contain only one basic block and then segment them to form the training set of _NeurDP_ (see Section 4). Based on our observation, optimization across basic blocks only changes the segmentation and its order, and does not introduce new types of instructions and mappings. Therefore, the model trained on our dataset can accurately translate the _LIR_ instructions in each basic block of a complex function into the corresponding _HIR_ instructions. Existing rule-based decompilers (Han et al., 2017; Wang et al., 2018; Wang et al., 2018) are also implemented based on this principle. After applying _OTU_ to both _LIR_ and _HIR_, we map their units to label the dataset. We observe that the UDGs of _LIR_ and _HIR_ are usually isomorphic, although their DDGs may differ. Therefore, we match the UDGs of _LIR_ and _HIR_ to pair their units and obtain labels. If two nodes in a UDG cannot be distinguished, we first examine their internal instructions and distinguish them by special features, including their constants, constant strings, and the addresses of procedure calls. Nodes that are still indistinguishable according to these features are discarded. We use this labeling method to ensure the accuracy of the labels.
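To make the two OTU steps concrete, the following Python sketch operates on a basic block's DDG given as plain adjacency lists. The helper names and the restart-based merge loop are illustrative simplifications for exposition, not the authors' implementation:

```python
# Illustrative sketch of OTU on a basic block's data-dependency graph
# (DDG), given as adjacency lists {instruction: [successors]}.
# Step 1 cuts the DDG at every instruction with >= 2 out-edges;
# Step 2 merges any unit whose out-edges all point to one other unit.

def split_units(ddg):
    """Step 1: non-overlapping units bounded at fan-out instructions."""
    preds = {n: [] for n in ddg}
    for n, succ in ddg.items():
        for s in succ:
            preds[s].append(n)
    boundary = {n for n, succ in ddg.items() if len(succ) >= 2}
    # A node opens a unit if it has zero or several predecessors,
    # or if its single predecessor is a boundary instruction.
    starts = [n for n in ddg if len(preds[n]) != 1 or preds[n][0] in boundary]
    units = []
    for start in starts:
        unit, cur = [start], start
        while cur not in boundary and len(ddg[cur]) == 1 \
                and len(preds[ddg[cur][0]]) == 1:
            cur = ddg[cur][0]            # absorb single-successor chains
            unit.append(cur)
        units.append(unit)
    return units

def merge_units(units, ddg):
    """Step 2: merge units whose out-edges all target the same unit."""
    merged = True
    while merged:
        merged = False
        owner = {n: i for i, u in enumerate(units) for n in u}
        for i, u in enumerate(units):
            targets = {owner[s] for n in u for s in ddg[n] if owner[s] != i}
            if len(targets) == 1:        # all out-edges hit one unit: merge
                units[targets.pop()].extend(units.pop(i))
                merged = True
                break                    # owners changed; restart the scan
    return units

# Toy DDG: a fans out to b and c, which re-join at d; d feeds e.
ddg = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"], "e": []}
print(merge_units(split_units(ddg), ddg))
```

On this toy diamond, all units eventually merge into one, consistent with the rule that a unit is absorbed once all of its out edges target a single unit.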
### Neural Translation In the field of neural translation, sequence-to-sequence (seq2seq) neural networks (Krizhevsky et al., 2014; Sutskever et al., 2015; Krizhevsky et al., 2014) have achieved excellent results and have been applied in commercial products such as Google Translate (Krizhevsky et al., 2014). Therefore, previous studies (e.g., TraFix (Krizhevsky et al., 2014), Coda (Coda, 2015), and Neutron (Corba et al., 2016)) utilize such models for translation. However, these existing neural machine decompilers do not work well, especially for optimized LPL, indicating that seq2seq models cannot effectively cope with the decompilation of LPL. The low accuracy is mainly because seq2seq neural networks do not consider the data dependencies between instructions, which are vital to how the compiler generates LPL. For example, in Figure 6 (c), a seq2seq neural network views the _LIR_ as the sequence <smull, lsr, add3, asr, add5, lsl> but ignores the data dependencies (e.g., smull→lsr and lsr→add3). So the model wrongly decompiles the result as shifting and arithmetic operations, whereas the correct result is the division operation div. Based on the above observation, the neural network model should capture both the instructions and the data dependencies between them. We therefore choose to use graph neural networks (GNNs), which can capture the features of nodes (i.e., instructions) and edges (i.e., data dependencies) in the DDG. Thus, the model's decompilation problem can be defined as follows: given the _LIR_'s DDG subgraph \(G\) as input, the model outputs the corresponding _HIR_ sequence, expressed as \(P(Y)=P(Y|G)\). We adopt the gated graph sequence neural network (GGS-NN) (Krizhevsky et al., 2014), a graph-based neural network; we omit the details of the model here. Figure 7 shows the model's architecture (i.e., an encoder-decoder architecture). The encoder uses the gated graph neural network (GG-NN) (Krizhevsky et al., 2014). The node initialization module defines the initial state of each node and performs initialization operations on the DDG. The decoder uses a long short-term memory (LSTM) network with a bridge mechanism. Considering that the inputs are _LIR_/_HIR_ units, which are not complicated, we use a 2-layer LSTM network. Global attention (Krizhevsky et al., 2014) is introduced to improve the model's performance. The token embedding (Krizhevsky et al., 2014) module is used to generate the word vectors of the output sequence, and we use a learnable multi-dimensional embedding vector. Figure 5. An example of pair matching of OTU Regarding the loss function, we use the Kullback-Leibler divergence (Kullback and Leibler, 1944) as follows: \[D_{KL}(p\|q)=\sum_{i=1}^{n}p(x_{i})\log\frac{p(x_{i})}{q(x_{i})} \tag{1}\] Based on our evaluation, our model is much more accurate (29.58% higher on average) than seq2seq neural networks. We further look into the code and find that the GNN correctly captures the features of the data dependencies. For the example in Figure 6, our model correctly decompiles the _LIR_ to div, which means our model is effective even for optimized code.
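The loss (1) compares the reference token distribution \(p\) with the predicted distribution \(q\). A minimal PyTorch sketch follows, with placeholder batch and vocabulary sizes; note that PyTorch's kl_div expects log-probabilities for the prediction:

```python
import torch
import torch.nn.functional as F

batch, vocab = 32, 120                  # placeholder sizes
logits = torch.randn(batch, vocab, requires_grad=True)  # decoder outputs
log_q = F.log_softmax(logits, dim=-1)   # log q(x)

# Reference distribution p(x); with one-hot targets, D_KL(p||q)
# reduces to the usual negative log-likelihood of the correct token.
p = F.one_hot(torch.randint(vocab, (batch,)), vocab).float()

loss = F.kl_div(log_q, p, reduction="batchmean")  # Eq. (1)
loss.backward()
```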
### Operands Recovery Recall that the output of our model does not contain real operands. To accurately recover the _HIR_ operands, we further split the _LIR_ unit, pair _LIR_/_HIR_ instructions, and recover the operands in each unit. In this step, we use the operands in _LIR_ to fill in the _HIR_. We first pair the instructions with the same semantic meanings, which can be obtained by analyzing the instructions manually. For example, in Figure 6, the _LIR_ instruction bl has the same semantic meaning as the _HIR_ instruction call, so we pair them together. Note that identifying the semantic meaning of instructions is only a one-time effort. Then, for the unpaired instructions, we pair them in the order of their addresses. For example, in Figure 6, the _LIR_ instructions between Line 1 and Line 6 are paired to the _HIR_ instruction div. After obtaining the instruction pairs, we design a data-flow-based approach to recover the _HIR_ operands. For each pair, we identify the destination operand in _LIR_ (from the node that has no output link in the DDG) and use it as the destination operand in the _HIR_ instruction. Then we identify the source operands in _LIR_ (from the nodes that have no parents in the DDG) and use them as the source operands in the corresponding _HIR_ instruction. For some special instructions in _HIR_ and _LIR_ (e.g., div and madd), we make rules to find the source and destination operands. For example, in Figure 6, we map instructions 1-6 (18 operands) in the _LIR_ to instruction 1 (3 operands) in the _HIR_. By analyzing the DDG of the _LIR_, we get the output variable x23_2, the input variable x0_5, and 4 immediates #0x84210843, 32, 8, and 31, which are not defined inside the DDG. The operands x23_2 and x0_5 can be assigned to the corresponding positions in the _HIR_. Besides, we manually make division optimization rules to get another operand #496. Figure 6. Neural translation process Figure 7. Model architecture ## 4. Evaluation In this section, we describe the experiments used to evaluate _NeurDP_'s performance. Firstly, we evaluate the accuracy of decompilation tools at different optimization levels, which reflects each decompiler's ability to cope with the expression- and data-flow-related optimization strategies in compiler optimization. We compare _NeurDP_ with two state-of-the-art neural-based decompilers (Beng et al., 2017; Chen et al., 2018). To evaluate the efficiency of our model and _OTU_, we compare _NeurDP_ with other baseline models and with other methods of splitting basic blocks. We also analyze the decompilation results of one famous open-source decompilation tool, RetDec (Rendle et al., 2017). ### Experiment Setup **Dataset.** To build the dataset, we randomly generated 20,000 functions consisting of arithmetic and calling statements and compiled them using _clang 10.0_ with optimization levels O0 to O3. By capturing the intermediate results and reversing the binaries, we obtained 80,000 _LIR_/_HIR_ pairs. Then, we used _OTU_ to split the functions into smaller units for training. After removing duplicated units and batches that were not full, we obtained 242,000 pairs. We randomly selected 220,000 pairs for training, and the remaining 22,000 pairs were used for validation. We evaluate _NeurDP_ with respect to accuracy and generalizability. **Platform.** All our experiments are conducted on a 64-bit server running Ubuntu 18.04 with 16 cores (Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz), 128GB memory, a 2TB hard drive, and 2 GTX Titan-V GPUs. ### Accuracy **Metrics.** As mentioned above, the compiler's optimization changes the statements of the source code, so the decompiled code may not be the same as the original source code at the statement level, even if both codes have the same functionality.
We propose a method to evaluate the decompiler's accuracy that addresses this problem: we consider a decompiled basic block correct if the semantics of the HPL basic block is the same as the semantics of the corresponding LPL basic block. Below, we introduce our method for comparing the semantics of two basic blocks. A basic block can be abstracted as a function \(F\) mapping the input set \(IN(B)\) to the output set \(OUT(B)\). We define them as follows: \(IN(B)=\{in_{0},in_{1},...,in_{n}\}\), where \(in_{i}\) is the \(i\)-th input of a basic block; \(OUT(B)=\{out_{0},out_{1},...,out_{n}\}\), where \(out_{i}\) is the \(i\)-th output of a basic block; and \(F=\{f_{0},f_{1},...,f_{n}\}\), where \(f_{i}\) is the \(i\)-th function of a basic block, \(out_{i}=f_{i}(IN(B))\), and \(OUT(B)=F(IN(B))\). We define the accuracy of a basic block as \(Acc_{B}=Count_{correct}(f_{i}^{HPL})/Count(F^{LPL})\), where \(f_{i}^{HPL}\) is the \(i\)-th function of the HPL. To find the correct \(f_{i}\), we first locate the corresponding \(out_{i}^{HPL}\) in the decompiled HPL for each \(out_{i}^{LPL}\) in the LPL (e.g., pointers to the same variable). For _NeurDP_, it is easy to determine whether two outputs correspond, since we map the variables in _LIR_ directly to _HIR_ when recovering the variables in Section 3. For other decompilers, we pair the outputs by manual analysis. Then we get corresponding \(<out_{i}^{HPL},out_{i}^{LPL}>\) pairs. Given \(IN(B)\), we consider the decompiled \(f_{i}^{HPL}\) correct if the results of \(f_{i}^{HPL}\) and \(f_{i}^{LPL}\) are equal for each paired \(out_{i}^{HPL}\) and \(out_{i}^{LPL}\). At last, we define the _program accuracy_ as \(Acc=\sum_{i=1}^{N}Acc_{B}(B_{i})/N\), where \(N\) is the number of basic blocks in the program and \(B_{i}\) is the \(i\)-th basic block. In the evaluation, _NeurDP_ automatically generates functions from the basic blocks, and we manually check whether \(f_{i}^{HPL}\) is correct by comparing the functions from HPL and LPL. For example, in Figure 1, the HPL and LPL code of func1 each contain one basic block, so the accuracy of func1 is \(Acc=Acc_{B}(B)/1\). The output set and input set of \(B\) in HPL are \(\{ret_{HPL}\}\) and \(\{p0\}\); the output set and input set of \(B\) in LPL are \(\{ret_{LPL}\}\) and \(\{w19\}\). We can then pair the outputs \(<ret_{HPL},ret_{LPL}>\) and compare their corresponding functions on equal inputs to decide whether the decompiled block is correct.
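The metric defined above is straightforward to express as code. In the following minimal sketch, each basic block is represented by a hypothetical dict mapping paired output names to callables \(f_{i}\); this representation is ours, chosen only for illustration:

```python
# Sketch of the accuracy metric. A basic block B is abstracted as a set
# of functions {out_i: f_i}, one per output; f_hpl and f_lpl are paired
# by output name, and test_inputs supplies concrete IN(B) values.

def block_accuracy(f_hpl, f_lpl, test_inputs):
    """Acc_B = |correct f_i^HPL| / |F^LPL|."""
    correct = sum(
        out in f_hpl
        and all(f_hpl[out](x) == f_lpl[out](x) for x in test_inputs)
        for out in f_lpl
    )
    return correct / len(f_lpl)

def program_accuracy(blocks, test_inputs):
    """Acc = sum_i Acc_B(B_i) / N over the program's N basic blocks."""
    return sum(block_accuracy(h, l, test_inputs) for h, l in blocks) / len(blocks)

# Toy example in the spirit of func1: one block, one paired output.
f_hpl = {"ret": lambda p0: p0 // -123}
f_lpl = {"ret": lambda w19: w19 // -123}
print(program_accuracy([(f_hpl, f_lpl)], test_inputs=range(-500, 500)))  # 1.0
```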
**Compare with State-of-the-art Neural Decompilers.** We compare _NeurDP_ with two state-of-the-art neural-based decompilers, Coda [9] and Neutron [6]; the results are listed in Table 5. From the results, we see that _NeurDP_ outperforms both of them. We do not have access to the source code and dataset of Coda [9]. We also find that the details needed to reproduce Coda are not described in their paper. So we could neither test Coda on our dataset nor test our _NeurDP_ on their dataset. Considering that the benchmark \((Math+NE)\) [9] in Coda is generated similarly to our dataset, we directly compare against the results reported in their paper. Coda splits the long and short sentences in the dataset into two groups for testing, \((Math+NE)_{S}\) and \((Math+NE)_{L}\). Therefore, the range of Coda's accuracy on these two datasets is listed in Table 5. In addition, since Coda cannot handle compiler-optimized LPL, this part of the data is left blank. For Neutron, we obtained the code and tested it on our dataset. The results in Table 5 show that Neutron does not perform as well as _NeurDP_ on our dataset, especially for compiler-optimized code. We further analyze the experimental results: _NeurDP_ adopts _HIR_ as the model's target to cope with compiler optimization and uses the _OTU_ mechanism to divide basic blocks into finer-grained partitions. In contrast, previous neural-based work uses HPL or the AST as the model's target. Source code and ASTs are not optimized by the compiler front end and cannot correspond well with the optimized LPL, increasing the difficulty of model learning. Therefore, Coda and Neutron cannot cope with optimized code very well. **Compare with Different Neural Networks.** To understand the effect of the GGS-NN model, we compare _NeurDP_ with other models. We select three widely used seq2seq models (Transformer [164], LSTM [55], and GRU [7]) for comparison with _NeurDP_. Note that, instead of using a tree decoder, we serialize (traverse) the AST of the source code as the output of the transformer for the model Transformer-AST in Table 6. We use the same dataset as _NeurDP_ to train these models separately. Note that we use Clang to extract <assembly, source code/AST/IR> pairs as ground truth for these models. In this experiment, we use token accuracy to evaluate these models, because the outputs of the other models often have so many syntax errors that it is hard to evaluate their functionality. The experimental results prove that the GGS-NN makes decompilation results more accurate (see Section 3.3). We further evaluate the effectiveness of the NMT model using _HIR_ as the translation target. We choose the Transformer model as the baseline and use source code, AST, and _HIR_ as the model's translation targets for evaluation on DS1. From the first three lines of Table 6, we find that when the NMT model uses source code (SRC) or the AST as the translation target (output), its performance on O1-O3 is far worse than at O0. In contrast, the model that uses _HIR_ as the output shows no significant difference in translation quality across O0-O3, and its overall accuracy is better than the other two models. The experimental results show that using _HIR_ as the model's translation target generalizes well to optimized code, as it is not affected by compiler optimizations. What is more, using our _HIR_ and _LIR_ pairs split by _OTU_ performs better than the other models. **Impact under Different Translation Units.** We evaluate the token accuracy of different methods of splitting basic blocks, including code-sequence-oriented TU (STU) and DDG-oriented TU (DTU). We select _DS1_ to evaluate the performance of the different forms of TU.
To verify the effect of the TU size, we set the statement size of the _STU_ to 5, 10, and 15 on the assembly sequence. To further compare fixed-length and variable-length _DTU_, we select a fixed-length _DTU_ of size 5 and our _OTU_. Figure 8 shows the performance of _NeurDP_ under the different forms of TU. The results indicate that _NeurDP_ becomes less effective as the _STU_ grows, mainly because the longer the code the model needs to handle, the more difficult accurate translation becomes. If we do not split the basic block at all, the results worsen further. In addition, we find that a fixed TU length, whether sequence-oriented or DDG-oriented, makes the correspondence between _LIR_ and _HIR_ in the training set more ambiguous, which prevents the model from learning the mapping rules. Compared with the above methods, our _OTU_ can maximally automate obtaining _LIR_ and _HIR_ pairs with correct correspondence for training, enabling the model to learn the mapping between them quickly and accurately. What is more, our approach copes well with compiler optimization. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline & \multicolumn{4}{c}{**Compiler optimization level**} \\ \cline{2-5} & **O0** & **O1** & **O2** & **O3** \\ \hline \hline **Transformer-SRC** & 52.93\% & 38.22\% & 39.84\% & 33.37\% \\ \hline **Transformer-AST** & 75.79\% & 45.62\% & 44.60\% & 32.98\% \\ \hline **Transformer-IR** & 77.18\% & 75.66\% & 74.49\% & 73.94\% \\ \hline **LSTM-IR** & 85.40\% & 86.61\% & 85.63\% & 85.95\% \\ \hline **GRU-IR** & 26.56\% & 21.93\% & 25.85\% & 24.19\% \\ \hline **NeurDP** & **89.50\%** & **93.16\%** & **93.40\%** & **90.08\%** \\ \hline \hline \end{tabular} \end{table} Table 6. Token accuracy of different neural networks Figure 8. Token accuracy under different forms of TU **Analysis of Rule-based Decompilers.** We also evaluate the performance of one famous open-source rule-based decompiler, RetDec [16], which contains over 100,000 lines of code and is representative of rule-based decompilation work. In the evaluation, RetDec does not perform as well as those three neural-based decompilers (only achieving 29%) on unoptimized binaries. When evaluated on O1, RetDec achieves 29% accuracy without debug information and 17% accuracy without any symbolic information. When evaluated on O2 and O3, RetDec achieves only 4% accuracy, because most binaries fail to decompile. We manually analyzed these samples of decompilation failures and found that many errors occurred in the disassembly step due to the lack of symbolic information: the code sections could not be accurately located. In some cases, although the function entry point was found, the decompilation broke due to small disassembly errors. Moreover, we studied the mechanism of rule-based decompilers (Beng et al., 2017; Chen et al., 2018; Chen et al., 2019). Rule-based decompilation tools have more rules and constraints and are more sensitive to disassembly errors. A memory or stack error in the assembly will often cause the following code, or the entire function, to fail to decompile. In contrast, _NeurDP_ has no such constraints (e.g., stack balance), so it has a certain degree of fault tolerance for some disassembly errors, such as an unbalanced sp stack caused by inline assembly code. Such errors will not cause the entire decompilation process to fail.
Even if a disassembly error leads to a wrong decompilation result, _NeurDP_ can still finish decompiling without triggering an error and stopping like other tools. ## 5. Discussion **Limitations.** In this work, we propose and implement a novel neural decompilation approach, named _NeurDP_, to demonstrate that a neural-based approach can cope with the decompilation of compiler-optimized LPL. However, _NeurDP_ still has some limitations. Firstly, the HPL statements generated by _NeurDP_ are mapped directly from _HIR_ and are mostly unary or binary statements. For example, for an expression a = b + c * d, _NeurDP_ outputs two statements: tmp = c * d and a = b + tmp. Such problems could be solved through data dependency analysis. Besides, the quality of the HPL decompiled by _NeurDP_ is closely related to the accuracy of the LPL (assembly code). We find that for stripped binaries, disassembly tools (e.g., RetDec) may produce errors in the assembly code due to incorrect function boundary identification, which affects the quality of _NeurDP_'s results. Secondly, _NeurDP_ is not completely end-to-end from LPL to HPL. The lifting from IR to HPL depends on rules, so it is not easy to support multiple machines and languages. Thirdly, _NeurDP_ is a prototype system, and the dataset contains only statements consisting of arithmetic and calling operations on integer variables. We plan to include more types of statements in future work. Finally, _NeurDP_ directly uses existing techniques, including disassembly, reconstruction of control structures, etc. _NeurDP_ does not consider the errors introduced by these modules, so these components will affect the final accuracy. **Future Work.** We will continue to explore techniques for improving the decompilation quality of _NeurDP_ and resolving the above limitations. For example, we will eliminate some binary statements by merging expressions and reducing redundant variables in the HPL generated by _NeurDP_. The elimination of statements will be achieved through data-flow analysis. At the same time, we will use neural-based methods to learn statement patterns that match developers' habits from historical experience and guide the merging process to generate HPL that better matches programming habits. Furthermore, we will combine the collective capability of state-of-the-art commercial and open-source disassembly tools (Zhu et al., 2019; Chen et al., 2019) to generate high-quality assembly code. The goal is to ensure that _NeurDP_'s input is correct, which is a prerequisite for high-quality decompiled code. ## 6. Related Work **Rule-based Decompilation.** Rule-based decompilation techniques rely on PL experts to design specific heuristic rules for lifting low-level PL to high-level PL and thus achieving software decompilation. The currently popular rule-based decompilation tools are Hex-Rays (Han et al., 2017), RetDec (Chen et al., 2018), and Ghidra (Ghira et al., 2018). Hex-Rays is a decompiler engine integrated into the commercial reverse-engineering tool IDA Pro, which is the de-facto industry standard in the software security industry. However, Hex-Rays is not open source, and its internal technology is hard to understand. RetDec is an LLVM-based retargetable open-source decompiler developed by Avast in 2017, aiming to be the first "universal" decompiler that can support multiple architectures and languages.
RetDec can be used alone or as a plug-in to assist IDA Pro. Ghidra is an SRE framework developed by the National Security Agency (NSA) for cybersecurity missions. Ghidra runs on Windows, macOS, and Linux and supports multiple processor instruction sets and executable formats. Although these tools have made significant improvements, they are far from perfect. Rule-based approaches need to manually detect known control flow structures based on written rules and patterns. These rules are difficult to develop and error-prone, usually capture only part of the known CFGs, and require long development cycles. Worse still, these methods do not work well when decompiling optimized code, even though optimization is now the default option when commercial software is compiled. Unlike rule-based decompilers, the goal of _NeurDP_ is to learn and extract rules from code data automatically based on deep neural networks, and thus to tackle the challenge that compiler-optimized code is difficult to decompile accurately. **Learning-based Decompilation.** Most of the existing learning-based decompilation methods (Chen et al., 2018; Chen et al., 2018; Chen et al., 2019) draw on the idea of NMT and cast decompilation as a translation problem between two different PLs. Katz et al. (2019) first proposed an RNN-based method for decompiling binary code snippets, demonstrating the feasibility of using NMT for decompilation tasks. Katz et al. (2019) proposed a decompilation architecture based on LSTM called TraFix, and they realized that the primary task of building an NMT-based decompilation tool is to compensate for the information asymmetry between high-level and low-level PL. TraFix takes preprocessed assembly language as input and a traversed form of C as output, which reduces the structural asymmetry between the two PLs. Fu et al. (2019) proposed an end-to-end neural decompilation framework named Coda, based on several neural networks, with different models used for different statement types. Coda (Chen et al., 2018) can accurately decompile some simple operations, such as binary operations, but this is far from practical application. Unlike the above existing studies, our neural decompilation framework _NeurDP_ can decompile real-world low-level PL code, especially compiler-optimized PL code, into C-like high-level PL code with corresponding functionality. ## 7. Conclusions In this paper, we propose and implement a neural decompilation framework named _NeurDP_, which accurately decompiles LPL code into C-like HPL with similar functionality. We also design an optimal translation unit (OTU) scheme suitable for forming datasets that let learning algorithms better capture the relationship between HPL and LPL. The evaluation results show that _NeurDP_ achieves better accuracy for optimized code, even compared with the state of the art. ###### Acknowledgements. This work was supported by NSFC U1836211, Beijing Natural Science Foundation (No. M22004), Youth Innovation Promotion Association CAS, and Beijing Academy of Artificial Intelligence (BAAI).
2305.09504
Content-Adaptive Downsampling in Convolutional Neural Networks
Many convolutional neural networks (CNNs) rely on progressive downsampling of their feature maps to increase the network's receptive field and decrease computational cost. However, this comes at the price of losing granularity in the feature maps, limiting the ability to correctly understand images or recover fine detail in dense prediction tasks. To address this, common practice is to replace the last few downsampling operations in a CNN with dilated convolutions, allowing to retain the feature map resolution without reducing the receptive field, albeit increasing the computational cost. This allows to trade off predictive performance against cost, depending on the output feature resolution. By either regularly downsampling or not downsampling the entire feature map, existing work implicitly treats all regions of the input image and subsequent feature maps as equally important, which generally does not hold. We propose an adaptive downsampling scheme that generalizes the above idea by allowing to process informative regions at a higher resolution than less informative ones. In a variety of experiments, we demonstrate the versatility of our adaptive downsampling strategy and empirically show that it improves the cost-accuracy trade-off of various established CNNs.
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
2023-05-16T14:58:30Z
http://arxiv.org/abs/2305.09504v1
# Content-Adaptive Downsampling in Convolutional Neural Networks ###### Abstract Many convolutional neural networks (CNNs) rely on progressive downsampling of their feature maps to increase the network's receptive field and decrease computational cost. However, this comes at the price of losing granularity in the feature maps, limiting the ability to correctly understand images or recover fine detail in dense prediction tasks. To address this, common practice is to replace the last few downsampling operations in a CNN with dilated convolutions, allowing to retain the feature map resolution without reducing the receptive field, albeit increasing the computational cost. This allows to trade off predictive performance against cost, depending on the output feature resolution. By either regularly downsampling or not downsampling the entire feature map, existing work implicitly treats all regions of the input image and subsequent feature maps as equally important, which generally does not hold. We propose an adaptive downsampling scheme that generalizes the above idea by allowing to process informative regions at a higher resolution than less informative ones. In a variety of experiments, we demonstrate the versatility of our adaptive downsampling strategy and empirically show that it improves the cost-accuracy trade-off of various established CNNs. ## 1 Introduction When humans are exposed to a complex task, they focus on relevant aspects to make optimal use of the brain's limited capacities [2]. For instance, when categorizing the bird's species in Fig. 1, our brain allocates significantly more computational resources for the bird than for the background. While naturally occurring in humans, this adaptive allocation of resources is not utilized in most of today's deep learning architectures. In this work, we propose an adaptive downsampling scheme for Convolutional Neural Networks (CNNs) [23] that mimics the above adaptive allocation of resources by _simultaneously_ processing different regions of an image or feature map _at different resolutions_. Many CNNs used as backbones in computer vision rely on a progressive application of pooling or strided convolution [17, 22, 37] to increase the network's receptive field and decrease computational cost [46]. However, this progressive, regular downsampling comes at the price of losing fine detail in the feature maps, limiting the network's ability to correctly understand images [46] or recover fine detail in dense prediction tasks [4], such as at object boundaries. To approach this problem, Yu _et al_. [46] replace the last few downsampling operations in a CNN with dilated convolutions [45]. Although increasing the computational cost, this allows to keep the resolution of feature maps without reducing the network's receptive field. Figure 1: _Illustration of our content-adaptive method compared to regular downsampling._ (a) In regular downsampling, every second pixel is sampled (repeated twice here). Note how the result is almost unrecognizable and important detail like the beak, claws, and tail of the bird is lost.
(b) In our adaptive downsampling, a precomputed downsampling mask defines the number of downsampling operations applied to each pixel. The resulting representation (zoom in to avoid aliasing) contains information of locally varying resolution. This allows to semi-continuously adjust the representation size to keep more detail where it matters. larly downsampling, respectively _not_ downsampling [46], _entire_ feature maps, most existing CNNs implicitly assume that all regions of the input image and subsequent feature maps are equally important, which generally does not hold [21, 29, 31] as, _e.g_., also shown by locally salient attribution maps [36, 39, 18]. In contrast, our _locally adaptive_ downsampling scheme makes a more realistic assumption and can be considered as a generalization of both regular downsampling in CNNs and dilated convolutions [46]. By processing task-relevant regions at a higher resolution than unimportant ones, we can maintain the fine granularity of important regions while efficiently processing unimportant regions at a lower resolution, ultimately leading to an improved trade-off between the accuracy and computational expense of various popular CNNs. Contrary to existing adaptive downsampling methods [21, 29] that adaptively downsample the input image _before_ passing it through a CNN, our approach allows for adaptive downsampling of feature maps _within_ a CNN. Specifically, we make the following contributions: _(1)_ We present a novel adaptive downsampling scheme for CNNs, allowing to process different feature map regions at different resolutions. _(2)_ As different amounts of computational resources are being allocated for different regions, our method can be considered as a novel realization of "focus" in CNNs. This is fundamentally different from existing attention mechanisms, _e.g_., in transformers [8], where attention is implemented via adaptive weighting of input features. As a consequence, an uninformative black image and an informative natural image would still require the same inference time in a transformer, while the black image would be processed faster in our proposed adaptive downsampling. _(3)_ We provide an accompanying modified instantiation of submanifold sparse convolution [14] that allows to efficiently perform standard convolution on multi-resolution grids that occur in our adaptive downsampling. _(4)_ Thanks to carefully designing the proposed method to satisfy certain guarantees, it can be used in a plug-and-play fashion within existing backbones even without retraining, making it exceptionally versatile and practical. _(5)_ We further empirically show with two computer vision tasks that our method is an effective and application-agnostic tool to improve the cost-accuracy trade-off of different established CNNs that build on regular downsampling. ## 2 Related Work **Adaptive (down)sampling.** The goal of adaptive sampling approaches is to sample an image or feature map spatially to obtain optimal performance or properties of a CNN. An early instance are Spatial Transformer Networks [19], which aim to increase invariance to different geometric transformations by learning to spatially transform feature maps. Deformable convolutions [7] use two-dimensional offsets to the regular sampling locations of standard convolutions to allow for more flexible handling of geometric variations and different receptive field sizes within one layer. 
CF-ViT [5] entails a multi-stage approach where in each stage the input patch resolution of the most important patches is increased to refine the result of a vision transformer. Talebi and Milanfar [41] show that resizing images with learned resizers, instead of linear ones, can improve the accuracy of recognition models. Recasens _et al_. [31] use an auxiliary saliency network to estimate the most important image regions that are then used for non-uniformly downsampling the input image, leading to an amplification of salient regions. Marin _et al_. [29] train an auxiliary network to predict a non-uniform sampling grid that is denser near semantic boundaries and use it to non-uniformly downsample input images to retain fine detail for segmentation tasks. Jin _et al_. [21] propose a similar idea as [29], particularly for ultra-high-resolution images, additionally including the segmentation accuracy in the training objective of the sampling grid estimator. Our proposed approach is fundamentally different from the above methods that adaptively downsample the image _before_ passing it through a CNN [21, 29, 31], instead of _within_ the CNN. As a result, they severely sacrifice accuracy, making them primarily appropriate for very high-resolution images. **Detail-preserving pooling.** While above adaptive downsampling methods are concerned with the question of _how many_ features to sample per region, detail-preserving pooling is concerned with the question of _which_ features or feature combinations to sample to better retain important detail. Mixed Pooling [44] randomly selects max or average pooling. Lee _et al_. [24] learn a weighted combination of max and average pooling that is dependent on the pooling region. In \(L_{p}\) pooling [15], orders \(p_{j}\) are learned to interpolate between different pooling operators with \(p_{j}=1\) corresponding to average pooling and \(p_{j}=\infty\) to max pooling. Detail-preserving pooling [35] is a learnable adaptive pooling method that magnifies important detail, making use of inverse bilateral filters. Local importance-based pooling [11] preserves discriminative detail by training an auxiliary network to predict adaptive importance maps that are used to aggregate features for downsampling. SoftPool [38] minimizes information loss using a softmax-weighted sum of feature activations. Note that adaptive downsampling methods [21, 31], including our approach, also incorporate regular pooling layers, and thus, advanced pooling operations such as the above can be used complementarily. **Layer aggregation and architecture.** Besides the above, one can also use early high-resolution feature maps of CNNs and more complex network architectures to improve the granularity of feature maps or the output. Hypercolumns [16] build a feature vector for any pixel by concatenating activations from all feature maps above that pixel. Fully convolutional networks [26] refine semantic segmentations by combining earlier layers of higher resolution with later layers of lower resolution. The U-Net [34] architecture extends the previous idea by introducing skip connections between all layers of corresponding resolutions. Pinheiro [30] propose to gradually refine the output segmentation by adding information from earlier layers in a top-down fashion. Feature pyramid networks [25] produce feature pyramids by combining high-level and low-level features of a CNN. Instead of simple one-step skip-connections, Yu [47] incorporate more depth and sharing in their layer aggregation. 
HRNet [43] goes even further and processes high-resolution and low-resolution streams in parallel while repeatedly exchanging information between the streams. Yu _et al_. [46] improve the granularity of feature maps by substituting downsampling operations with dilated convolutions [45], allowing to retain the feature map resolution while keeping the original receptive field of the CNN. Generally, high-level feature maps yield semantically stronger features [17, 25], and thus, using auxiliary information from low-level layers might not be optimal. Further, combining layers of different resolutions or keeping higher resolutions can potentially introduce new parameters, increase computational cost, and/or raise memory usage. **Sparse convolution.** While sparse convolution is not directly related to adaptive downsampling, it plays an essential role in our method. Initial works on sparse convolution [10, 12, 13, 33] improve the computational cost of standard convolution by processing only active elements in sparse inputs, such as point clouds [10], handwritten digits [12], or 3D grids [33]. Submanifold sparse convolution [14] avoids a growing number of active elements caused by standard convolutions that dilate the sparse data with each layer. Contrary to sparse convolution, which is only used to completely ignore inactive elements, we adapt sparse convolution to work with our proposed multi-resolution feature maps. ## 3 Content-adaptive Downsampling in CNNs In this work, we postulate that not all feature map regions of a CNN are equally important and that, therefore, different regions should be processed at _different resolutions_. By processing only a subset of the most important feature map regions at higher resolution while downsampling less important ones, we can combine the smaller representation size of regular downsampling with the higher feature map granularity of dilated convolutions [46]. As a consequence, we can gain relatively large improvements in predictive performance while only moderately increasing the computational cost compared to regular downsampling. The high-level idea behind our approach is illustrated in Fig. 1(b). Contrary to regular downsampling (Fig. 1(a)), where the image or feature map is downsampled uniformly by sampling every second pixel, we utilize an adaptive downsampling scheme (Fig. 1(b)) that retains higher resolution in areas with fine detail while downsampling less informative ones. In the following, we will outline the details of our method with a focus on two-dimensional feature maps with channels, _i.e_., in CNNs for vision tasks. However, the extension to other dimensionalities, such as 1D signals and 3D temporal or volumetric data, works analogously. **Regular downsampling.** Let \(f\in\mathbb{R}^{H\times W\times C}\) be a feature map (or image) of height \(H\), width \(W\), and depth \(C\). We denote regular downsampling by a factor of \(d\in\mathbb{N}_{>1}\) as the process of reducing the spatial dimension of \(f\) by outputting one element \(e\in\mathbb{R}^{C}\) from each non-overlapping \(d\times d\) patch, such that the resulting feature map \(f_{d}\) is of shape \(\frac{H}{d}\times\frac{W}{d}\times C\).
Generally, regular downsampling \(\psi\colon\mathbb{R}^{H\times W\times C}\mapsto\mathbb{R}^{H/d\times W/d\times C}\) in CNNs is realized by applying a patch-wise downsampling function \(\psi^{\prime}\colon\mathbb{R}^{d\times d\times C}\mapsto\mathbb{R}^{C}\) to all \(HW/d^{2}\) non-overlapping \(d\times d\) patches \(f^{\prime}_{i,j}\) in \(f\): \[f_{d}=\psi\left(f\right)=\left(\psi^{\prime}(f^{\prime}_{1,1}),\ldots,\psi^{\prime}(f^{\prime}_{H/d,W/d})\right). \tag{1}\] The above definition of regular downsampling generalizes to many commonly used downsampling methods used in CNNs. For example, \(\psi^{\prime}\) can take the form of various pooling operations [15, 35, 44], or it can select an element based on its spatial location to implement uniform downsampling, _i.e_., sampling every \(d\)-th element. When following a convolution of stride 1, this corresponds conceptually to a strided convolution with a stride of \(d\). **Adaptive downsampling.** In this work, we generalize the above regular downsampling scheme by allowing some patches \(f^{\prime}_{i,j}\) to _not be downsampled_, and therefore, retaining their _original resolution_ of \(d\times d\). This is accomplished by providing an additional downsampling mask \(m\in\{0,1\}^{H/d\times W/d}\) as input, specifying which patches \(f^{\prime}_{i,j}\) should be downsampled (\(m_{i,j}=1\)) and which not (\(m_{i,j}=0\)). For now, we assume that this downsampling mask is given, and go later into more detail about how to compute it in a content-adaptive way. Formally, our novel adaptive downsampling \(\psi_{a}\) is defined as \[\psi_{a}\left(f,m\right)=\left(\psi^{\prime}_{a}(f^{\prime}_{1,1}),\ldots,\psi^{\prime}_{a}(f^{\prime}_{H/d,W/d})\right), \tag{2}\] \[\text{with }\psi^{\prime}_{a}(f^{\prime}_{i,j})=\begin{cases}f^{\prime}_{i,j}&\text{if }m_{i,j}=0\\ \psi^{\prime}(f^{\prime}_{i,j})&\text{if }m_{i,j}=1\,,\end{cases}\] denoting a patch-wise adaptive downsampling as indicated by the downsampling mask \(m\). **Multi-resolution grid convolution.** As the resulting feature map \(\psi_{a}\left(f,m\right)\) consists of multiple resolutions, we lose the regular grid structure of our data necessary to work with standard convolutional layers. To address this, we project the lower-resolution features onto their corresponding location in the high-resolution grid, as illustrated in Fig. 2(c). Naturally, high-resolution patches \(f^{\prime}_{i,j}\) that are filled with a lower-resolution element will have empty cells (Fig. 2(c) - shaded cells) and, thus, are sparse. We exploit the resulting sparsity by processing our feature representations in subsequent layers using sparse convolution [10, 13, 33]. This allows to only consider _active_ elements, and hence, uses less computation in regions of lower effective resolution. To avoid submanifold dilation [14], _i.e._, that inactive elements become active after convolution, we suggest a modified instance of submanifold sparse convolution [14] (SSC). SSC ensures that input and output contain the same set of active elements and thus enables us to keep the resolution of each respective feature map or image region unchanged. To retain the same receptive field as the convolutional layer would have with regular downsampling, we increase the dilation factor to \(d\). A remaining challenge is that at specific feature map locations (see, _e.g._, Fig. 2(c) - cell with \(6\)) non-central elements of the convolutional kernel could fall onto inactive elements, causing the result to be corrupted.
To avoid this, we modify SSC to assume that all inactive elements take the value of the corresponding active element within their patch (shown in Fig. 2(c) as shaded cells). **Extension to CNNs.** Similarly to Yu _et al_. [46], our proposed adaptive downsampling is incorporated into a standard CNN with regular downsampling, _i.e._, pooling or strided convolution, by substituting the last \(n\) downsampling operations with adaptive downsampling. Further, we substitute all convolutional layers that follow the \(i\in\{1,\dots,n\}\) adaptive downsampling operations with our proposed sparse convolution with a dilation factor of \(d^{i}\) to keep the same receptive field as the original network. To perform multiple adaptive downsampling operations in succession, we only consider the elements belonging to the currently lowest available resolution in our multi-resolution feature map and convert them into a dense representation. We then perform adaptive downsampling on these elements according to Eq. (2). As before, the resulting features are projected to their corresponding location in the multi-resolution feature map. Consequently, patches already containing higher-resolution elements are excluded from the downsampling process. This implies that areas where resolution is retained cannot be downsampled at a later downsampling step, and thus, for \(d=2\), one obtains a quadtree-like resolution pattern as seen in Fig. 1(b). A more detailed explanation of this process can be found in the supplementary. **Guarantees.** The resulting CNN can further be seen as a generalization of both regular CNNs with downsampling as well as of the dilated convolution of Yu _et al_. [46]. When setting all elements of the downsampling mask to one, we obtain a standard CNN with regular downsampling. When setting all elements of the downsampling mask to zero, we obtain exactly dilated convolution [46] for higher-resolution feature maps. By simultaneously processing different feature map regions at _different_ resolutions, controlled by the downsampling mask, we establish a combination of the above methods that allows us to "interpolate" between different output strides in a _locally adaptive_ fashion. An important desideratum behind our approach is that - like dilated convolution [46] - additional training of the backbone is not mandatory. Therefore, it is exceptionally easy to use in a plug-and-play fashion.
Figure 2: _Illustration of different methods to handle feature map resolution in CNNs._ \(\star\) denotes the two-dimensional cross-correlation operator (w/ zero padding). _(a)_ Dilated convolution increases the receptive field (equally to downsampling) while retaining the feature resolution [46]. _(b)_ Standard strided convolution in CNNs (with stride=2) corresponds to uniformly downsampling the feature map by sampling every second element. This is typically followed by a standard convolution. _(c)_ In our adaptive scheme, the downsampling mask is used to downsample a subset of the feature map and low-resolution entries are projected into the high-resolution grid; shaded cells (hatched) denote inactive elements. Afterward, sparse convolution is applied to the active elements (black font) of the multi-resolution feature map to produce an output with the same, multiple resolutions as the input. Note how the (loop) invariant of all algorithms (Guarantee 2) ensures that the elements highlighted in blue, both before and after convolution, are the same across the different methods.
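To make this concrete, the following is a minimal NumPy sketch (our illustration, not the authors' released code) of the adaptive downsampling in Eq. (2) together with a dense reading of the modified SSC above: pooled patches are written back at full resolution, so every inactive cell carries the value of its patch's active element. Average pooling as \(\psi^{\prime}\), the function name, and the toy shapes are assumptions.

```python
import numpy as np

def adaptive_downsample(f, m, d=2):
    """Dense emulation of Eq. (2).

    f : (H, W, C) feature map; m : (H//d, W//d) binary mask with
    m[i, j] == 1 -> downsample patch (i, j), 0 -> keep full resolution.
    Returns the filled-in multi-resolution map and the active-element mask.
    """
    H, W, C = f.shape
    out = f.copy()
    active = np.ones((H, W), dtype=bool)
    for i in range(H // d):
        for j in range(W // d):
            if m[i, j]:
                patch = f[i*d:(i+1)*d, j*d:(j+1)*d]
                # psi' = average pooling; replicate the pooled value across
                # the patch, matching the modified-SSC assumption
                out[i*d:(i+1)*d, j*d:(j+1)*d] = patch.mean(axis=(0, 1))
                active[i*d:(i+1)*d, j*d:(j+1)*d] = False
                active[i*d, j*d] = True  # one active element per pooled patch
    return out, active

# toy usage
f = np.random.rand(8, 8, 3)
m = np.random.randint(0, 2, size=(4, 4))
out, active = adaptive_downsample(f, m)
```

A standard convolution with dilation \(d\) applied to such a filled-in map then has the same receptive field as its counterpart in the regularly downsampled network, which is exactly the property the guarantees below build on.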
This plug-and-play usage is possible due to the following two guarantees, which result from our design and hold when substituting regular downsampling, _i.e_., strided convolution or pooling, with our adaptive downsampling: **Guarantee 1:** Due to projecting all elements into the highest-resolution grid and adjusting the dilation factors of subsequent sparse convolutions, we can guarantee that the receptive fields of the CNN and each intermediate layer remain unchanged compared to the corresponding CNN with regular downsampling. **Guarantee 2:** All output features of a standard CNN with strided convolution equal the output features at the corresponding high-resolution locations of the same CNN with adaptive downsampling (see Fig. 2(c) - blue cells). This means that a CNN with adaptive downsampling is not modifying feature map values of a standard CNN with strided convolution, but instead only _adds_ information at pixel locations where there was none before, and therefore, refines the feature map. Note that while Guarantee 2 formally only holds for strided convolutions and not for pooling, we show in Sec. 4.3 that applying our adaptive downsampling in a CNN with pooling is still feasible in practice without necessarily requiring retraining. **Content-adaptive downsampling masks.** A remaining question is how to obtain the downsampling mask to indicate what regions to process at which resolution in a locally content-adaptive way. In Sec. 4, we show that traditional algorithms, like high-frequency detection or keypoint estimates, can serve as an effective basis to identify important image regions, and thus, to estimate downsampling masks. Additionally, we demonstrate how to learn an appropriate downsampling mask from data. Specifically, we utilize a shallow CNN that takes as input the feature map of the main model before the adaptive downsampling step, and outputs the downsampling mask. In order to train it end-to-end with the main model, we make the discrete mask estimation differentiable with Gumbel-Softmax [20]. As training our adaptive downsampling end-to-end can result in a downsampling mask of only zeros, _i.e_., full resolution, we control the amount of invested resources with an additional hyperparameter \(\gamma\in[0,1]\) that defines the desired proportion of active mask elements. Its squared difference to the actual proportion \(\hat{m}\in[0,1]\) of active elements in our downsampling mask is included in the final loss function \[\mathcal{L}=\alpha\mathcal{L}_{1}+\beta(\gamma-\hat{m})^{2}, \tag{3}\] with \(\alpha\) and \(\beta\) being weighting factors, and \(\mathcal{L}_{1}\) denoting the loss of the main task, _e.g_., the segmentation loss. **Limitations.** Our approach allows to interpolate between different feature map resolutions, which can drastically improve the cost-accuracy trade-off as shown in Sec. 4. However, some limitations arise that could hinder its effective usage in some applications. First, our method's benefit depends on the given downsampling mask. If the mask does not capture important regions, retaining higher-resolution regions will not help. Second, there may be backbones, especially those with advanced layer aggregation schemes [43, 47], or datasets without fine details where higher-resolution feature maps and, therefore, also our method generally do not yield advantages.
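The learned-mask objective in Eq. (3) is easy to sketch in PyTorch; note that the estimator's depth and width, the pooling used to reach patch resolution, and the Gumbel-Softmax temperature below are illustrative assumptions rather than the authors' exact choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskEstimator(nn.Module):
    """Shallow CNN emitting per-patch logits for (keep, downsample)."""
    def __init__(self, in_ch, d=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),
        )
        self.pool = nn.AvgPool2d(d)  # logits at patch resolution H/d x W/d

    def forward(self, feat, tau=1.0):
        logits = self.pool(self.net(feat))                # (B, 2, H/d, W/d)
        # straight-through Gumbel-Softmax keeps the mask discrete but
        # differentiable for end-to-end training
        soft = F.gumbel_softmax(logits.permute(0, 2, 3, 1), tau=tau, hard=True)
        return soft[..., 1]                                # m: 1 = downsample

def total_loss(task_loss, m, gamma=0.5, alpha=1.0, beta=1.0):
    m_hat = m.mean()           # proportion of active (downsampled) elements
    return alpha * task_loss + beta * (gamma - m_hat) ** 2  # Eq. (3)
```

During training, `m` would gate the adaptive downsampling step, while the `(gamma - m_hat)**2` term keeps the proportion of downsampled patches near the resource budget \(\gamma\).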
However, our analysis and related work [9, 46] show that the highly impactful and widely used CNN backbones, VGG [37] and ResNet [17], benefit from higher-resolution feature representations in various applications. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Different backbones with DeepLabv3 [3]**} \\ \cline{2-10} & \multicolumn{3}{c}{ResNet-50 [17]} & \multicolumn{3}{c}{ResNet-101 [17]} & \multicolumn{3}{c}{ResNet-152 [17]} \\ \cline{2-10} OS & \(16\) & \(8\) & \(8{\rightarrow}32\) & \(16\) & \(8\) & \(8{\rightarrow}32\) & \(16\) & \(8\) & \(8{\rightarrow}32\) \\ \hline mIoU & \(0.7496\) & \(0.7655\) & \(0.7658\) & \(0.7591\) & \(0.7775\) & \(0.7748\) & \(0.7683\) & \(0.7852\) & \(0.7848\) \\ \#MA & \(2.6e11\) & \(8.0e11\) & \(3.8e11\) & \(4.2e11\) & \(1.4e12\) & \(6.7e11\) & \(5.7e11\) & \(1.9e12\) & \(1.0e12\) \\ Time & \(0.05\) & \(0.11\) & \(0.05/0.08\) & \(0.07\) & \(0.22\) & \(0.08/0.15\) & \(0.1\) & \(0.31\) & \(0.11/0.25\) \\ \hline \hline \end{tabular} \end{table} Table 1: _Oracle experiment for semantic segmentation._ For each model and output stride (OS), we report the mean IoU (mIoU) on the Cityscapes [6] evaluation split, the theoretical (average) number of multiply-adds (#MA), and the min./max. inference times in seconds. Our adaptive downsampling (OS=\(8{\rightarrow}32\)) improves the cost-accuracy trade-off of all models, indicated by the on par mIoU scores and the significantly reduced #MA and inference times, compared to OS=8. ## 4 Experiments To demonstrate the practicality of our proposed adaptive downsampling scheme, we conduct various experiments on two different computer vision tasks confirming the following points: _(1)_ As a generalization of [46] and standard CNNs, our method allows to interpolate between different output strides, enabling a more granular control of how many resources should be allocated for the task and input image at hand. _(2)_ By selecting task-relevant regions for a specific input, our method can drastically improve the cost-accuracy trade-off of standard CNNs. _(3)_ The guarantees satisfied by our method allow to incorporate it in a plug-and-play fashion into pre-trained networks at test time. _(4)_ Our method is not limited to a single application and finding appropriate downsampling masks is feasible in practice. _(5)_ Our approach generalizes to different backbones, segmentation heads, and extensions. We start our evaluation with an oracle experiment, demonstrating the potential of our method given a high-quality downsampling mask. Afterward, we present two case studies that show how our method can be used in practice, demonstrating the advantages summarized above. **Experimental setup (segmentation).** Experiments for semantic segmentation (Secs. 4.1 and 4.2) have been conducted on the Cityscapes dataset [6]. We report the mean intersection over union (mIoU) on the validation set. **Experimental setup (keypoint description).** For the keypoint description experiment (Sec. 4.3), we use the established D2-Net descriptor [9], which takes an image as input and outputs a dense feature map that can be used to locate and describe keypoints. Since D2-Net keypoint localization tends to be imprecise [42], we follow Uzpak _et al_. [42] and instead use SIFT [27] to first detect the 512 most salient keypoints and employ D2-Net only to describe them. 
Following common practice [28, 32], we report the mean matching accuracy, _i.e_., the ratio between correct and possible matches, at a three-pixel threshold (MMA@3) on the HPatches dataset [1]. **Experimental setup (general).** As the focus of this work is the cost-accuracy trade-off of the examined methods, for all experiments we report the respective task-specific metrics, _i.e_., mIoU and MMA@3, over the required theoretical (average) number of multiply-adds as a proxy of computational cost. As baselines, we consider the original models with varying output strides obtained by using regular downsampling [17, 37] with the standard factor \(d=2\), respectively _not_ downsampling and using dilated convolutions [46]. To draw a more conclusive picture, we also report the required inference time in seconds. However, the inference time depends on the used implementation and hardware - here an Nvidia RTX A6000 GPU. For a fair comparison we use the same implementation for our method and the baselines; please refer to the supplementary for different implementations. ### Oracle experiment: Feature resolution in semantic segmentation In our first experiment, we examine the role of feature map resolution in semantic segmentation and whether processing more informative regions at a higher resolution offers advantages and is generally possible. To this end, we first conduct an oracle experiment in which we assume a high-quality downsampling mask to be given. As baselines we use a segmentation model with regular downsampling and two different output strides (OS) of 16 (lower feature resolution) and 8 (higher feature resolution). For our proposed adaptive downsampling with output stride \(8\) to \(32\) (OS=\(8\)\(\rightarrow\)\(32\)), we define the downsampling mask as the pixels that are misclassified by a regular model with OS=\(32\) but correctly classified by a model with OS=\(8\), followed by a dilation with a square kernel size of \(K\times K\) (see Fig. 3 and supplementary). This gives us exactly the image regions that benefit from higher resolutions, and thus, can be considered our oracle downsampling mask. To demonstrate the generalizability of our approach, we investigate a variety of different backbones, segmentation heads, and extensions. Results for our adaptive downsampling and baselines can be seen in Tab. 1. Confirming our assumption and prior work [4, 46], a smaller output stride, respectively higher resolution, increases both the mIoU and computational cost of all the examined baseline models (OS=\(8\)_vs_. OS=\(16\)). Moreover, using our adaptive downsampling leads to an mIoU that is on par with the respective regular model with OS=\(8\) while for some images requiring less than 50% of the time and computational cost (_e.g_., ResNet-101+DeepLabv3). This clearly confirms our hypothesis that only a fraction of the feature map must be processed at a higher resolution to obtain strong predictive performance. Further, it shows that our proposed method is applicable to various different ResNet backbones (VGG [37] is shown in Sec. 4.3) and segmentation heads.
Figure 3: _Example downsampling masks._ A Cityscapes [6] image and the corresponding estimated downsampling masks using our oracle setup, edge detection, and a shallow learned network. Note how low-frequency regions, _e.g_., the sky and road, belong to the same class, and thus, can be processed at low resolution (bright) to decrease the computational cost.
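The oracle-mask construction just described can be sketched as follows in NumPy; the per-pixel predictions are stand-ins, and the patch size of the mask grid and the dilation kernel \(K\) are hyperparameters, so treat this as an illustration of the construction rather than the exact evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def oracle_mask(pred_os32, pred_os8, target, d=4, K=11):
    """m == 0 (keep high resolution) where OS=32 fails but OS=8 succeeds."""
    benefit = (pred_os32 != target) & (pred_os8 == target)
    benefit = binary_dilation(benefit, structure=np.ones((K, K)))
    # reduce to patch level: keep resolution if any pixel in the patch benefits
    H, W = benefit.shape
    blocks = benefit[:H - H % d, :W - W % d].reshape(H // d, d, W // d, d)
    return (~blocks.any(axis=(1, 3))).astype(np.uint8)   # 1 = downsample
```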
Also, our approach is a novel and orthogonal research direction, which can be used _together_ with existing extensions like deformable convolution [7] and advanced pooling strategies [38]. ### Case study 1: Semantic segmentation In our first case study, we again consider semantic segmentation with ResNet-101+DeepLabv3 models but now with realistically estimated downsampling masks. We propose two strategies to estimate useful downsampling masks: **Edge mask.** From analyzing the problem at hand, we know that segmentation boundaries are likely to align with edges. For this reason, we perform edge detection on the input image to create a downsampling mask. Specifically, we apply a Sobel filter to the original grayscale images and dilate the thresholded result with a square kernel of size \(11\times 11\). Computational cost for detecting the edges is negligible (\(3\times 10^{-3}\%\) of the total multiply-adds). **Learned mask.** As described in Sec. 3, we use a shallow CNN to estimate downsampling masks from an intermediate layer of the feature extractor. The mask estimator is trained end-to-end with the segmentation model (see supplementary). The computational cost for the mask estimator is included in the reported multiply-adds and times. The resulting downsampling masks for a single example image can be seen in Fig. 3. Dark areas denote keeping the higher resolution of an output stride of 8 while bright areas correspond to an output stride of 16 (OS=\(8{\rightarrow}16\)). In Fig. 4, we report results for our adaptive downsampling with the two proposed strategies. We again observe that by "focusing" on more relevant image regions, _e.g_., edges, we can improve the cost-accuracy trade-off of an established model. Further, by adjusting the hyperparameters of the mask estimators, we can granularly control the amount of invested resources. Remarkably, adaptive downsampling with a learned mask achieves an mIoU on par with regular downsampling with an output stride of 8 (0.775 vs. 0.776) while reducing the required computational cost by approximately 35%. Looking at the annotated minimum and maximum times, we can observe time improvements of up to \(40\%\). However, we observe that for images with a lot of "interesting" content, the worst-case inference time is similar to regular downsampling with OS=8. This case study also demonstrates how naively and easily we can estimate downsampling masks that together with our adaptive downsampling still yield advantages for the task at hand. A visual example of the estimated segmentations is shown in Fig. 5. Contrary to our work, existing methods that adaptively downsample the input _before_ passing it into a CNN [21, 29] lose important image information, resulting in lower mIoU scores of \(0.5\) or \(0.65\) on Cityscapes [6]. Hence, they are mainly beneficial for processing ultra-high resolution images and are not considered as competitive baselines here. ### Case study 2: Keypoint description In our second realistic case study, we consider the problem of keypoint description. To this end, we investigate the established SIFT detector [27] and D2-Net descriptor [9] as described in the experimental setup. D2-Net utilizes a VGG16 [37] backbone that progressively applies max pooling to reduce the resolution of the feature map, and thus, computational load. To demonstrate the simplicity and versatility of our method, we use the original model weights provided by the authors [9] and do _not_ perform additional training for any of the following setups.
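A minimal sketch of the edge-mask strategy from Case study 1 (Sobel magnitude, thresholding, \(11\times 11\) dilation, then reduction to patch granularity); the threshold value and the mapping to \(d\times d\) patches are our assumptions.

```python
import numpy as np
from scipy import ndimage

def edge_mask(gray, d=2, thresh=0.1, K=11):
    """Downsampling mask from image edges: m == 0 (keep resolution) near edges."""
    gx = ndimage.sobel(gray, axis=0)
    gy = ndimage.sobel(gray, axis=1)
    mag = np.hypot(gx, gy)                       # Sobel gradient magnitude
    edges = mag > thresh * mag.max()             # threshold (assumed relative)
    edges = ndimage.binary_dilation(edges, structure=np.ones((K, K)))
    # patch-level mask: keep resolution wherever a patch touches an edge
    H, W = edges.shape
    blocks = edges[:H - H % d, :W - W % d].reshape(H // d, d, W // d, d)
    return (~blocks.any(axis=(1, 3))).astype(np.uint8)   # 1 = downsample
```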
Figure 4: _Semantic segmentation results._ mIoU on the Cityscapes [6] evaluation set over the number of multiply-adds for a ResNet-101 [17] backbone with a DeepLabv3 [3] segmentation head, using regular downsampling [46] and our novel adaptive downsampling scheme with a learned mask, respectively an edge detection mask, with varying hyperparameters. Min. and max. inference times for each setup are annotated. To reduce variance, we report the maximum mIoU over three runs. X-confidence intervals show 2 times the standard deviation of the required multiply-adds.
Figure 5: _Qualitative semantic segmentation results._ Estimated segmentation maps using regular downsampling with OS=\(8\) and adaptive downsampling with an output stride of 8 to 16 (OS=\(8{\rightarrow}16\)) and our edge downsampling (DS) mask.
Again confirming prior work [9], in Fig. 6 we see that a higher feature map resolution (_i.e_., lower OS) increases both the MMA@3 and the computational cost of the baseline model. Next, we substitute the last 3, 2, or 1 max pooling operations with our adaptive downsampling with max pooling (OS=\(\{1,2,4\}\)\(\rightarrow\)8). As only image regions around the keypoints are important for our task at hand, we estimate the downsampling masks by dilating keypoints obtained from SIFT with filters of increasing sizes for each of the up to three downsampling levels (see Fig. 7). We demonstrate a fine-grained control of computational cost by varying the dilation sizes for the downsampling levels. Larger dilation sizes will lead to larger areas being processed at higher resolution, and thus, increase cost as well as MMA@3. The results in Fig. 6 clearly show that, compared to the fixed output strides of the baseline model, our adaptive downsampling yields a significant reduction of the computational cost while achieving the same MMA@3, _e.g._, we achieve an \(\sim\)\(70\%\) reduction of the computational cost to reach an MMA@3 that is on par with a regular output stride of 1. **Inadequate masks.** To evaluate how our method performs with "bad" downsampling masks, in Fig. 6 we additionally report the MMA@3 for adaptive downsampling with an output stride of \(4\) to \(8\) using random and unreasonable masks, _i.e._, the inverse of reasonable masks. Confirming our stated limitations, we see that inadequate downsampling masks do not capture important image regions, and thus, we process unimportant regions at high resolution, leading to an increased computational cost compared to regular downsampling with OS=\(8\) without yielding significant improvements in predictive performance. Note, however, that thanks to our guarantees in Sec. 3, poor downsampling masks still yield comparable predictive performance as regular downsampling with OS=\(8\), showing that these masks do not negatively affect predictive performance and that the model still behaves in an expected way. ## 5 Conclusion and Discussion In this work, we propose - to the best of our knowledge - the first downsampling scheme that changes the operating resolution _within_ CNNs in a _locally adaptive_ fashion. We do so by generalizing standard CNNs [17, 22, 37] and Yu _et al._'s [46] dilated convolutional networks, allowing us to process feature maps with spatially-varying resolutions. By selecting an appropriate content-adaptive downsampling mask, indicating locally the most important regions that are to be processed at a higher resolution, we can enable CNNs to "focus" more strongly on task-relevant regions.
Besides substantially improving the cost-accuracy trade-off in two computer vision tasks, our novel adaptive downsampling enables a more continuous control of the invested computational resources, giving practitioners another degree of freedom to best adapt their model to the available resources. Thanks to carefully designing the proposed method, it satisfies two important guarantees, allowing adaptive downsampling even to be used at test time in a plug-and-play fashion within standard CNNs pre-trained with _regular_ downsampling. As our approach improves the cost-accuracy trade-off of various established models, it contributes to saving valuable scarce resources and to the important research direction of more (energy) efficient deep learning [40]. **Acknowledgements.** This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 866008). The project has also been supported in part by the State of Hesse through the cluster project "The Adaptive Mind (TAM)" and the Hessian research priority programme LOEWE within the project "WhiteBox". Figure 6: _Keypoint description results._ MMA@3 over the required number of multiply-adds for a SIFT [27] detector with a pre-trained D2-Net [9] descriptor using regular downsampling [46] with different output strides (OS), and our novel adaptive downsampling scheme using different masks. Given a reasonable downsampling mask, our adaptive downsampling can achieve on par MMA@3 while significantly reducing the computational cost. Actual inference times for a randomly chosen image are annotated. Figure 7: Qualitative results of keypoints matched with our adaptive downsampling scheme and the corresponding downsampling masks. Note how only a fraction of the image is important, and thus, processed at high resolution (dark) to increase the accuracy while saving resources in bright areas of the downsampling mask.
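For the keypoint case study, the level-wise masks can be sketched like this in NumPy/SciPy; the dilation sizes per level are illustrative stand-ins for the varying sizes mentioned above, and keypoints are assumed to be given as (row, column) coordinates.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def keypoint_masks(kpts, shape, d=2, sizes=(9, 17, 33)):
    """One binary downsampling mask per adaptive level (m == 0 keeps
    resolution around keypoints); `sizes` are assumed dilation widths."""
    seed = np.zeros(shape, dtype=bool)
    for y, x in kpts:                       # keypoints as (row, col)
        seed[int(y), int(x)] = True
    masks = []
    for level, K in enumerate(sizes, start=1):
        keep = binary_dilation(seed, structure=np.ones((K, K)))
        s = d ** level                      # patch size at this level
        H, W = shape
        blocks = keep[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s)
        masks.append((~blocks.any(axis=(1, 3))).astype(np.uint8))
    return masks
```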
2302.11313
Time-varying Signals Recovery via Graph Neural Networks
The recovery of time-varying graph signals is a fundamental problem with numerous applications in sensor networks and forecasting in time series. Effectively capturing the spatio-temporal information in these signals is essential for the downstream tasks. Previous studies have used the smoothness of the temporal differences of such graph signals as an initial assumption. Nevertheless, this smoothness assumption could result in a degradation of performance in the corresponding application when the prior does not hold. In this work, we relax the requirement of this hypothesis by including a learning module. We propose a Time Graph Neural Network (TimeGNN) for the recovery of time-varying graph signals. Our algorithm uses an encoder-decoder architecture with a specialized loss composed of a mean squared error function and a Sobolev smoothness operator. TimeGNN shows competitive performance against previous methods in real datasets.
Jhon A. Castro-Correa, Jhony H. Giraldo, Anindya Mondal, Mohsen Badiey, Thierry Bouwmans, Fragkiskos D. Malliaros
2023-02-22T11:50:39Z
http://arxiv.org/abs/2302.11313v3
# Time-varying Signals Recovery via Graph Neural Networks ###### Abstract The recovery of time-varying graph signals is a fundamental problem with numerous applications in sensor networks and forecasting in time series. Effectively capturing the spatio-temporal information in these signals is essential for the downstream tasks. Previous studies have used the smoothness of the temporal differences of such graph signals as an initial assumption. Nevertheless, this smoothness assumption could result in a degradation of performance in the corresponding application when the prior does not hold. In this work, we relax the requirement of this hypothesis by including a learning module. We propose a Time Graph Neural Network (TimeGNN) for the recovery of time-varying graph signals. Our algorithm uses an encoder-decoder architecture with a specialized loss composed of a mean squared error function and a Sobolev smoothness operator. TimeGNN shows competitive performance against previous methods in real datasets. Keywords: Graph neural networks · graph signal processing · time-varying graph signal · recovery of signals ## 1 Introduction Recent advances in information technology have led to an accumulation of large amounts of unstructured data. The representation and analysis of such irregular and complex data is a daunting task. Graph Signal Processing (GSP) and Graph Neural Networks (GNNs) are emerging research fields that have proved to be helpful for such tasks in recent years (Ortega et al. (2018), Defferrard et al. (2016), Kipf and Welling (2017), Ioannidis et al. (2019), Gama et al. (2019), Duval and Malliaros (2022)). In GSP and GNNs, the data is modeled as signals or vectors on a set of nodes of a graph, incorporating both the feature information and the underlying structure of the data. GSP and GNNs thus provide new perspectives on data handling, connecting machine learning and signal processing (Bronstein et al. (2017)), with profound impact in various fields like semi-supervised learning (Kipf and Welling (2017)), computer vision (Giraldo et al. (2022), Mondal et al. (2021)), and social media (Benamira et al. (2019)). The sampling and reconstruction of graph signals are fundamental tasks that have recently attracted considerable attention from the signal processing and machine learning communities (Ortega et al. (2018), Marques et al. (2015), Romero et al. (2016), Ramirez et al. (2017), Parada-Mayorga et al. (2019), Guler et al. (2019), Girault et al. (2020), Hara et al. (2021), Giraldo et al. (2022)). Nevertheless, the problem of time-varying graph signal reconstruction has not been widely explored (Giraldo et al. (2022)). The reconstruction of time-varying graph signals has significant applications in data recovery in sensor networks, forecasting of time-series, and infectious disease prediction (Giraldo et al. (2022), Girault (2015); Giraldo and Bouwmans (2020); Chen and Eldar (2021); Mondal et al. (2022)). Previous studies have extended the definition of smooth signals from static to time-varying graph signals (Qiu et al. (2017)). Similarly, other works have focused on the rate of convergence of the optimization methods used to solve the reconstruction problem (Giraldo et al. (2022); Giraldo and Bouwmans (2020)). However, the success of these optimization-based methods requires appropriate prior assumptions about the underlying time-varying graph signals, which could be inflexible for real-world applications.
In this work, we propose the Time Graph Neural Network (TimeGNN) model to recover time-varying graph signals. TimeGNN encodes the time series of each node in latent vectors. Thereafter, these embedded representations are decoded to recover the original time-varying graph signal. Our architecture comprises: 1) a cascade of Chebyshev graph convolutions (Defferrard et al. (2016)) with increasing order and 2) linear combination layers. Our algorithm considers spatio-temporal information using: 1) graph convolutions (Defferrard et al. (2016)) and 2) a specialized loss function composed of a Mean Squared Error (MSE) term and a Sobolev smoothness operator (Giraldo et al. (2022b)). TimeGNN shows competitive performance against previous methods in real-world datasets of time-varying graph signals. The main contributions of our work are summarized as follows: 1) we exploit GNNs to recover time-varying graph signals from their samples, 2) we relax the strict prior assumption of previous methods by including some learnable modules in TimeGNN, and 3) we perform experimental evaluations on natural and artificial data, and compare TimeGNN to four methods of the literature. The rest of the paper is organized as follows. Section 2 introduces the proposed TimeGNN model. Section 3 presents the experimental framework and results. Finally, Section 4 shows the conclusions. ## 2 Time Graph Neural Network ### Preliminaries We represent a graph with \(G=(\mathcal{V},\mathcal{E},\mathbf{A})\), where \(\mathcal{V}\) is the set of nodes with \(|\mathcal{V}|=N\), \(\mathcal{E}\subseteq\{(i,j)\mid i,j\in\mathcal{V}\text{ and }i\neq j\}\) is the set of edges, and \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is the weighted adjacency matrix with \(\mathbf{A}(i,j)=a_{i,j}\in\mathbb{R}_{+}\) if \((i,j)\in\mathcal{E}\) and 0 otherwise. In this work, we consider connected, undirected, and weighted graphs. We also define the symmetrized Laplacian as \(\mathbf{L}=\mathbf{I}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\), where \(\mathbf{D}=\text{diag}(\mathbf{A}\mathbf{1})\) is the diagonal degree matrix of the graph. Finally, a node-indexed real-valued graph signal is a function \(x:\mathcal{V}\rightarrow\mathbb{R}\), so that we can represent a one-dimensional graph signal as \(\mathbf{x}\in\mathbb{R}^{N}\). ### Reconstruction of Time-varying Graph Signals The sampling and recovery of graph signals are crucial tasks in GSP (Marques et al. (2015); Romero et al. (2016)). Several studies have used the smoothness assumption to address the sampling and recovery problems for static graph signals. The notion of global smoothness was formalized using the _discrete \(p\)-Dirichlet form_ (Shuman et al. (2013)) given by: \[S_{p}(\mathbf{x})=\frac{1}{p}\sum_{i\in\mathcal{V}}\left[\sum_{j\in\mathcal{N}_{i}}\mathbf{A}(i,j)[\mathbf{x}(j)-\mathbf{x}(i)]^{2}\right]^{\frac{p}{2}}, \tag{1}\] where \(\mathcal{N}_{i}\) is the set of neighbors of node \(i\). When \(p=2\), we have \(S_{2}(\mathbf{x})\) which is known as the graph Laplacian quadratic form \(S_{2}(\mathbf{x})=\sum_{(i,j)\in\mathcal{E}}\mathbf{A}(i,j)[\mathbf{x}(j)-\mathbf{x}(i)]^{2}=\mathbf{x}^{\mathsf{T}}\mathbf{L}\mathbf{x}\) (Shuman et al. (2013)). For time-varying graph signals, some studies assumed that the temporal differences of time-varying graph signals are smooth (Giraldo et al. (2022); Qiu et al. (2017)).
Let \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{M}]\) be a time-varying graph signal, where \(\mathbf{x}_{s}\in\mathbb{R}^{N}\) is a graph signal in \(G\) at time \(s\). Qiu _et al._ (Qiu et al. (2017)) defined the smoothness of \(\mathbf{X}\) as: \[S_{2}(\mathbf{X})=\sum_{s=1}^{M}\mathbf{x}_{s}^{\mathsf{T}}\mathbf{L}\mathbf{x}_{s}=\text{tr}(\mathbf{X}^{\mathsf{T}}\mathbf{L}\mathbf{X}). \tag{2}\] \(S_{2}(\mathbf{X})\) only computes the summation of the individual smoothness of each graph signal \(\mathbf{x}_{s}\ \forall\ s\in\{1,2,\ldots,M\}\), so we do not consider any temporal information. To address this problem, we can define the temporal difference operator \(\mathbf{D}_{h}\) as follows (Qiu et al. (2017)): \[\mathbf{D}_{h}=\begin{bmatrix}-1&&\\ 1&-1&&\\ &1&\ddots&\\ &&\ddots&-1\\ &&&1\end{bmatrix}\in\mathbb{R}^{M\times(M-1)}. \tag{3}\] Therefore, we have that \(\mathbf{X}\mathbf{D}_{h}=[\mathbf{x}_{2}-\mathbf{x}_{1},\mathbf{x}_{3}-\mathbf{x}_{2},\ldots,\mathbf{x}_{M}-\mathbf{x}_{M-1}]\). Some studies (Giraldo et al. (2022b); Qiu et al. (2017)) have found that \(S_{2}(\mathbf{X}\mathbf{D}_{h})\) shows better smoothness properties than \(S_{2}(\mathbf{X})\) in real-world time-varying data, _i.e._, \(\mathbf{x}_{s}-\mathbf{x}_{s-1}\) exhibits smoothness in the graph even if \(\mathbf{x}_{s}\) is not smooth across the graph. Qiu et al. (Qiu et al. (2017)) used \(S_{2}(\mathbf{X}\mathbf{D}_{h})\) to present a Time-varying Graph Signal Reconstruction (TGSR) method as follows: \[\min_{\tilde{\mathbf{X}}}\frac{1}{2}\|\mathbf{J}\circ\tilde{\mathbf{X}}-\mathbf{Y}\|_{F}^{2}+\frac{\upsilon}{2}\operatorname{tr}\left((\tilde{\mathbf{X}}\mathbf{D}_{h})^{\mathsf{T}}\mathbf{L}\tilde{\mathbf{X}}\mathbf{D}_{h}\right), \tag{4}\] where \(\mathbf{J}\in\{0,1\}^{N\times M}\) is a sampling matrix, \(\circ\) is the Hadamard product between matrices, \(\upsilon\) is a regularization parameter, and \(\mathbf{Y}\in\mathbb{R}^{N\times M}\) is the matrix of observed values. The optimization problem in (4) has some limitations: 1) the solution of (4) could lose performance if the real-world dataset does not satisfy the smoothness prior assumption, and 2) (4) is solved with a conjugate gradient method in (Qiu et al. (2017)), which has a slow convergence rate because \(S_{2}(\tilde{\mathbf{X}}\mathbf{D}_{h})\) is ill-conditioned (Giraldo et al. (2022b)). Our algorithm relaxes the smoothness assumption by introducing a learnable module. Similarly, TimeGNN is fast once the GNN parameters are learned. ### Graph Neural Network Architecture TimeGNN is based on the Chebyshev spectral graph convolutional operator defined by Defferrard et al. (Defferrard et al. (2016)), whose propagation rule is given as follows: \[\mathbf{X}^{\prime}=\sum_{k=1}^{K}\mathbf{Z}^{(k)}\mathbf{W}^{(k)}, \tag{5}\] where \(\mathbf{W}^{(k)}\) is the \(k\)th matrix of trainable parameters, \(\mathbf{Z}^{(k)}\) is computed recursively as \(\mathbf{Z}^{(1)}=\mathbf{X}\), \(\mathbf{Z}^{(2)}=\hat{\mathbf{L}}\mathbf{X}\), \(\mathbf{Z}^{(k)}=2\hat{\mathbf{L}}\mathbf{Z}^{(k-1)}-\mathbf{Z}^{(k-2)}\), and \(\hat{\mathbf{L}}=\frac{2\mathbf{L}}{\lambda_{\text{max}}}-\mathbf{I}\), with \(\lambda_{\text{max}}\) the largest eigenvalue of \(\mathbf{L}\). We use the filtering operation in (5) to propose a new convolutional layer composed of: 1) a cascade of Chebyshev graph filters, and 2) a linear combination layer as in Fig. 1.
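Both ingredients introduced so far, the temporal-difference smoothness of Eqs. (2)-(4) and the Chebyshev recursion of Eq. (5), are compact to reproduce; the following is a minimal PyTorch sketch (ours, not the authors' implementation), where `Ws` is assumed to be a list of per-order weight matrices.

```python
import torch

def temporal_difference(M):
    """D_h from Eq. (3): column s holds x_{s+1} - x_s, shape (M, M-1)."""
    D = torch.zeros(M, M - 1)
    D[torch.arange(M - 1), torch.arange(M - 1)] = -1.0
    D[torch.arange(1, M), torch.arange(M - 1)] = 1.0
    return D

def smoothness(X, L):
    """S_2(X D_h) = tr((X D_h)^T L (X D_h)), cf. Eqs. (2) and (4)."""
    XD = X @ temporal_difference(X.shape[1])
    return torch.trace(XD.T @ L @ XD)

def cheb_filter(X, L_hat, Ws):
    """Eq. (5): X' = sum_k Z^(k) W^(k), with Z^(1) = X, Z^(2) = L_hat X,
    Z^(k) = 2 L_hat Z^(k-1) - Z^(k-2); L_hat = 2 L / lambda_max - I."""
    Z_prev, Z = X, L_hat @ X
    out = Z_prev @ Ws[0]
    if len(Ws) > 1:
        out = out + Z @ Ws[1]
    for W in Ws[2:]:
        Z_prev, Z = Z, 2 * (L_hat @ Z) - Z_prev
        out = out + Z @ W
    return out

# toy usage: N nodes, M time steps, order K = 3, 8 output features
X = torch.randn(10, 6)
Ws = [torch.randn(6, 8) for _ in range(3)]
out = cheb_filter(X, torch.eye(10), Ws)   # identity stands in for L_hat
```

For real-world data, `smoothness(X, L)` evaluated on the temporal differences is typically much smaller than on \(\mathbf{X}\) itself, which is the empirical observation motivating \(S_{2}(\mathbf{X}\mathbf{D}_{h})\).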
More precisely, we define the propagation rule of each layer of TimeGNN as follows: \[\mathbf{H}^{(l+1)}=\sum_{\rho=1}^{\alpha}\mu_{\rho}^{(l)}\sum_{k=1}^{\rho}\mathbf{Z}^{(k)}\mathbf{W}_{l,\rho}^{(k)}, \tag{6}\] where \(\mathbf{H}^{(l+1)}\) is the output of layer \(l+1\), \(\alpha\) is a hyperparameter, \(\mu_{\rho}^{(l)}\) is a learnable parameter, \(\mathbf{Z}^{(k)}\) is recursively computed as in (5), and \(\mathbf{W}_{l,\rho}^{(k)}\) is the \(k\)th learnable matrix in the layer \(l\) for the branch \(\rho\). The architecture of TimeGNN is given by stacking \(n\) cascade layers as in (6), where the input is \((\mathbf{J}\circ\mathbf{X})\mathbf{D}_{h}\). Finally, our loss function is such that: \[\mathcal{L}=\frac{1}{|\mathcal{S}|}\sum_{(i,j)\in\mathcal{S}}(\mathbf{X}(i,j)-\mathbf{\hat{X}}(i,j))^{2}+\lambda\,\text{tr}\big((\mathbf{\hat{X}}\mathbf{D}_{h})^{\mathsf{T}}(\mathbf{L}+\epsilon\mathbf{I})\mathbf{\hat{X}}\mathbf{D}_{h}\big), \tag{7}\] where \(\mathbf{\hat{X}}\) is the reconstructed graph signal, \(\mathcal{S}\) is the training set, with \(\mathcal{S}\) a subset of the spatio-temporal sampled indexes given by \(\mathbf{J}\), and \(\epsilon\in\mathbb{R}^{+}\) is a hyperparameter. The term \(\text{tr}\big((\mathbf{\hat{X}}\mathbf{D}_{h})^{\mathsf{T}}(\mathbf{L}+\epsilon\mathbf{I})\mathbf{\hat{X}}\mathbf{D}_{h}\big)\) is the Sobolev smoothness (Giraldo et al. (2022b)). We can think of TimeGNN as an encoder-decoder network with a loss function given by an MSE term plus a Sobolev smoothness regularization. The first layers of TimeGNN encode the term \((\mathbf{J}\circ\mathbf{X})\mathbf{D}_{h}\) to an \(H\)-dimensional latent vector that is then decoded with the final layer. As a result, we capture the spatio-temporal information using the GNN, the temporal encoding-decoding structure, and the regularization term \(\text{tr}\left((\mathbf{\hat{X}}\mathbf{D}_{h})^{\mathsf{T}}(\mathbf{L}+\epsilon\mathbf{I})\mathbf{\hat{X}}\mathbf{D}_{h}\right)\) where we use the temporal operator \(\mathbf{D}_{h}\). The parameter \(\lambda\) in (7) weighs the importance of the regularization term against the MSE loss. Figure 2 shows the pipeline of our TimeGNN applied to a graph of the sea surface temperature in the Pacific Ocean.
Figure 1: Cascade of Chebyshev graph convolutions. Figure 2: Pipeline of our Time Graph Neural Network (TimeGNN) for the recovery of time-varying graph signals.
## 3 Experiments and Results We compare TimeGNN with Graph Convolutional Networks (GCN) (Kipf and Welling (2017)), Natural Neighbor Interpolation (NNI) (Kiani and Saleem (2017)), TGSR (Qiu et al. (2017)), and Time-varying Graph Signal Reconstruction via Sobolev Smoothness (GraphTRSS) (Giraldo et al. (2022b)). ### Implementation Details We implement TimeGNN and GCN using PyTorch and PyG (Fey and Lenssen (2019)). We define the space search for the hyperparameter tuning of TimeGNN as follows: 1) number of layers \(\{1,2,3\}\), 2) hidden units \(\{2,3,\ldots,10\}\), 3) learning rate \([0.005,0.05]\), 4) weight decay \([1e-5,1e-3]\), 5) \(\lambda\in[1e-6,1e-3]\), 6) \(\alpha\in\{2,3,4\}\). Similarly, we set the following hyperparameters: 1) \(\epsilon=0.05\), and 2) the number of epochs to \(5,000\). The graphs are constructed based on the coordinate locations of the nodes in each dataset with a \(k\)-Nearest Neighbors (\(k\)-NN) algorithm as in (Giraldo et al. (2022b)). NNI, TGSR, and GraphTRSS are implemented using the code in (Giraldo et al. (2022b)) in MATLAB® 2022b.
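As part of these implementation details, the loss in Eq. (7) is simple to express; a hedged PyTorch sketch (ours), where indexing the training set \(\mathcal{S}\) is simplified to masking with the sampling matrix \(\mathbf{J}\):

```python
import torch

def timegnn_loss(X_hat, X, J, L, D_h, lam=1e-4, eps=0.05):
    """Eq. (7): masked MSE on sampled entries + Sobolev smoothness term."""
    mask = J.bool()
    mse = ((X_hat - X)[mask] ** 2).mean()           # 1/|S| sum of squared errors
    XD = X_hat @ D_h                                 # temporal differences
    I = torch.eye(L.shape[0], device=L.device)
    sobolev = torch.trace(XD.T @ (L + eps * I) @ XD)
    return mse + lam * sobolev
```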
The hyperparameters of the baseline methods are optimized following the same strategy as with TimeGNN. ### Datasets **Synthetic Graph and Signals:** We use the synthetic graph dataset developed in (Qiu et al. (2017)). The graph contains 100 nodes randomly generated from a uniform distribution in a \(100\times 100\) square area and connected with \(k\)-NN. The graph signals are generated with the recursive function \(\mathbf{x}_{t}=\mathbf{x}_{t-1}+\mathbf{L}^{-1/2}\mathbf{f}_{t}\), where \(\mathbf{x}_{1}\) is a low frequency graph signal with energy \(10^{4}\), \(\mathbf{L}^{-1/2}=\mathbf{U}\lambda^{-1/2}\mathbf{U}^{\top}\), where \(\mathbf{U}\) is the matrix of eigenvectors, \(\lambda=\operatorname{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{N})\) is the matrix of eigenvalues, \(\lambda^{-1/2}=\operatorname{diag}(0,\lambda_{2}^{-1/2},\ldots,\lambda_{N}^{-1/2})\), and \(\mathbf{f}_{t}\) is an i.i.d. Gaussian signal. **PM 2.5 Concentration:** We use the daily mean concentration of PM 2.5 in the air in California, USA4. Data were collected from 93 sensors over 220 days in 2015. Footnote 4: [https://www.epa.gov/outdoor-air-quality-data](https://www.epa.gov/outdoor-air-quality-data) **Sea-surface Temperature:** We use the sea-surface temperature data, which are measured monthly and released by the NOAA PSL5. We use a sample of 100 locations in the Pacific Ocean over a duration of 600 months. Footnote 5: [https://psl.noaa.gov](https://psl.noaa.gov) **Intel Lab Data:** We use the data captured by the 54 sensors deployed at the Intel Berkeley Research Laboratory 6. The data consists of temperature readings between February 28th and April 5th, 2004. Footnote 6: [http://db.csail.mit.edu/labdata/labdata.html](http://db.csail.mit.edu/labdata/labdata.html) ### Evaluation Metrics We use the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE) metrics, as defined in (Giraldo et al. (2022b)), to evaluate our algorithm.
\begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline Method & \multicolumn{3}{c|}{Synthetic Graph Signals} & \multicolumn{3}{c|}{PM2.5 Concentration} & \multicolumn{3}{c|}{Sea-surface Temperature} & \multicolumn{3}{c}{Intel Lab Data} \\ & RMSE & MAE & MAPE & RMSE & MAE & MAPE & RMSE & MAE & MAPE & RMSE & MAE & MAPE \\ \hline GCN (Kipf and Welling (2017)) & 11.296 & 8.446 & 1.123 & 4.657 & 2.959 & 0.550 & 3.766 & 2.922 & 0.548 & 2.998 & 2.327 & 0.120 \\ NNI (Kiani and Saleem (2017)) & 0.775 & 0.436 & 0.255 & 4.944 & 2.956 & 0.593 & 0.772 & 0.561 & 0.067 & 0.661 & 0.291 & 0.015 \\ GraphTRSS (Giraldo et al. (2022b)) & **0.260** & **0.256** & **0.178** & **3.824** & **2.264** & **0.377** & **0.357** & **0.260** & **0.022** & **0.056** & **0.023** & **0.001** \\ TGSR (Qiu et al. (2017)) & **0.263** & **0.193** & **0.144** & 3.898 & 2.279 & 0.394 & 0.360 & 0.263 & 0.030 & _0.069_ & _0.037_ & _0.002_ \\ \hline TimeGNN (ours) & 0.452 & 0.323 & 0.226 & **3.809** & **2.172** & **0.362** & **0.275** & **0.203** & **0.023** & 0.156 & 0.095 & 0.005 \\ \hline \hline \end{tabular} * The best and second-best performing methods on each dataset are shown in **red** and **blue** in the original paper (rendered here in **bold** and _italics_). \end{table} Table 1: Quantitative comparison of TimeGNN with the baselines in all datasets using the average error metrics.
Figure 3: Comparison of TimeGNN to baseline methods in one synthetic and three real-world datasets (RMSE).
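All four graphs are built from node coordinates with \(k\)-NN and Gaussian kernel weights, as described in the Implementation Details; a common instantiation is sketched below, where the kernel width \(\sigma\) is an assumption rather than necessarily the paper's choice.

```python
import numpy as np

def knn_gaussian_graph(coords, k=5, sigma=None):
    """Symmetric k-NN adjacency with Gaussian kernel edge weights."""
    dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    if sigma is None:
        sigma = dists.mean()                  # assumed kernel width
    A = np.zeros_like(dists)
    for i in range(len(coords)):
        nn = np.argsort(dists[i])[1:k + 1]    # skip self (distance 0)
        A[i, nn] = np.exp(-dists[i, nn] ** 2 / (2 * sigma ** 2))
    return np.maximum(A, A.T)                 # symmetrize for undirected graph

# toy usage: 100 random sensor locations in a 100 x 100 area
A = knn_gaussian_graph(np.random.rand(100, 2) * 100, k=5)
```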
### Experiments We construct the graphs using \(k\)-NN with the coordinate locations of the nodes in each dataset with a Gaussian kernel as in (Giraldo et al. (2022b)). We follow a random sampling strategy in all experiments. Therefore, we compute the reconstruction error metrics on the non-sampled vertices for a set of sampling densities. We evaluate all the methods with a Monte Carlo cross-validation with 50 repetitions for each sampling density. For the synthetic data, \(k=5\) in the \(k\)-NN, and the sampling densities are given by \(\{0.1,0.2,\ldots,0.9\}\). For PM2.5 concentration, \(k=5\) and the sampling densities are \(\{0.1,0.15,0.2,\ldots,0.45\}\). For the sea-surface temperature, we keep \(k=5\) and the sampling densities are set to \(\{0.1,0.2,\ldots,0.9\}\). For Intel Lab data, we set \(k=3\) and the sampling densities at \(\{0.1,0.3,0.5,0.7\}\). ### Results and Discussion Figure 3 shows the performance of TimeGNN against the previous methods for all datasets using RMSE. Furthermore, Table 1 shows the quantitative comparisons using the averages of all metrics along the set of sampling densities. We do not plot the performance of GCN in Fig. 3 because this network performs considerably worse than the other methods, as shown in Table 1. GCN was implemented using the same input and loss function as in TimeGNN. Our algorithm outperforms previous methods for several metrics in PM2.5 concentration and sea-surface temperature datasets. The synthetic data were created to satisfy the conditions of smoothly evolving graph signals (Definition 1 in (Qiu et al. (2017))), while here, we relaxed that prior assumption by adding a trainable GNN module. Therefore, TGSR and GraphTRSS are better suited for that artificial dataset, as shown in Fig. 3 and Table 1. Similarly, the Intel Lab dataset is highly smooth. Some of the reasons behind our model's success in real-world datasets are: 1) its ability to capture spatio-temporal information, 2) its encoding-decoding structure, and 3) its powerful learning module given by a cascade of Chebyshev graph convolutions. ## 4 Conclusions In this paper, we introduced a GNN architecture named TimeGNN for the recovery of time-varying graph signals from their samples. Similarly, we proposed a new convolutional layer composed of a cascade of Chebyshev graph filters. TimeGNN includes a learning module that relaxes the requirement of strict smoothness assumptions. We found that our framework shows competitive performance against several approaches in the literature for reconstructing graph signals, delivering better performance in real datasets. Our algorithm could help solve problems like recovering missing data from sensor networks, forecasting weather conditions, intelligent transportation systems, and many others. For future work, we plan to extend our framework to other graph filters like transformers (Yun et al. (2019)), and alternative compact operators as introduced in (Ji and Tay (2019)). Similarly, we will explore TimeGNN in highly dynamic 4D real datasets (Badiey et al. (2013), Castro-Correa et al. (2022)). **Acknowledgments:** This work was supported by the DATAIA Institute as part of the "Programme d'Investissement d'Avenir" (ANR-17-CONV-0003) operated by CentraleSupelec, by ANR (French National Research Agency) under the JCJC project GraphIA (ANR-20-CE23-0009-01), and by the Office of Naval Research, ONR (Grant No. N00014-21-1-2760).
2310.13073
Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
Within the realm of deep learning, the interpretability of Convolutional Neural Networks (CNNs), particularly in the context of image classification tasks, remains a formidable challenge. To this end, we present a neurosymbolic framework, NeSyFOLD-G, that generates a symbolic rule-set using the last layer kernels of the CNN to make its underlying knowledge interpretable. What makes NeSyFOLD-G different from other similar frameworks is that we first find groups of similar kernels in the CNN (kernel-grouping) using the cosine-similarity between the feature maps generated by various kernels. Once such kernel groups are found, we binarize each kernel group's output in the CNN and use it to generate a binarization table which serves as input data to FOLD-SE-M, a Rule Based Machine Learning (RBML) algorithm. FOLD-SE-M then generates a rule-set that can be used to make predictions. We present a novel kernel grouping algorithm and show that grouping similar kernels leads to a significant reduction in the size of the rule-set generated by FOLD-SE-M, consequently improving the interpretability. This rule-set symbolically encapsulates the connectionist knowledge of the trained CNN. The rule-set can be viewed as a normal logic program wherein each predicate's truth value depends on a kernel group in the CNN. Each predicate in the rule-set is mapped to a concept using a few semantic segmentation masks of the images used for training, to make it human-understandable. The last layers of the CNN can then be replaced by this rule-set to obtain the NeSy-G model, which can then be used for the image classification task. The goal-directed ASP system s(CASP) can be used to obtain the justification of any prediction made using the NeSy-G model. We also propose a novel algorithm for labeling each predicate in the rule-set with the semantic concept(s) that its corresponding kernel group represents.
Parth Padalkar, Gopal Gupta
2023-10-19T18:12:49Z
http://arxiv.org/abs/2310.13073v1
Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks

###### Abstract

Within the realm of deep learning, the interpretability of Convolutional Neural Networks (CNNs), particularly in the context of image classification tasks, remains a formidable challenge. To this end, we present a neurosymbolic framework, NeSyFOLD-G, that generates a symbolic rule-set using the last layer kernels of the CNN to make its underlying knowledge interpretable. What makes NeSyFOLD-G different from other similar frameworks is that we first find groups of similar kernels in the CNN (kernel-grouping) using the cosine-similarity between the feature maps generated by various kernels. Once such kernel groups are found, we binarize each kernel group's output in the CNN and use it to generate a binarization table which serves as input data to FOLD-SE-M, a Rule Based Machine Learning (RBML) algorithm. FOLD-SE-M then generates a rule-set that can be used to make predictions. We present a novel kernel grouping algorithm and show that grouping similar kernels leads to a significant reduction in the size of the rule-set generated by FOLD-SE-M, consequently improving the interpretability. This rule-set symbolically encapsulates the connectionist knowledge of the trained CNN. The rule-set can be viewed as a _normal logic program_ wherein each predicate's truth value depends on a kernel group in the CNN. Each predicate in the rule-set is mapped to a concept using a few semantic segmentation masks of the images used for training, to make it human-understandable. The last layers of the CNN can then be replaced by this rule-set to obtain the NeSy-G model, which can then be used for the image classification task. The goal-directed ASP system s(CASP) can be used to obtain the justification of any prediction made using the NeSy-G model. We also propose a novel algorithm for labeling each predicate in the rule-set with the semantic concept(s) that its corresponding kernel group represents.

Keywords: CNN, Neurosymbolic AI, Normal Logic Programs, Rule-Based Machine Learning, Interpretable Image Classification.

## 1 Introduction

Interpretability of deep learning models is an important issue that has resurfaced in recent years as these models have become larger and are being applied to an increasing number of tasks. Applications such as autonomous vehicles [9], disease diagnosis [25], and natural disaster prevention [11] are highly sensitive areas where a wrong prediction could be the difference between life and death. These tasks rely heavily on good image classification models such as Convolutional Neural Networks (CNNs). A CNN is a deep learning model used for a wide range of image classification and object detection tasks, first introduced by Y. LeCun et al. [14]. Current CNNs are extremely powerful and capable of outperforming humans in image classification tasks. A CNN is inherently a black-box model, though attempts have been made to make it more interpretable [34, 33]. There is no way to tell whether the predictions made by the model are based on concepts meaningful to humans or are simply the outcome of coincidental correlations. If the knowledge of the trained CNN becomes interpretable, then domain experts can scrutinize this knowledge and point out any biases or spurious correlations that the CNN might have learnt, which could lead to wrong predictions. Retraining with better and more targeted data can then be suggested by the experts.
We propose a framework for interpretable image classification using CNNs called NeSyFOLD-G. A CNN, like any deep neural network, is composed of multiple layers. We focus on the convolution layer, more specifically the last convolution layer of a CNN, in this work. The convolution layer is composed of kernels. A kernel, also known as a filter, is a 2D matrix. It acts like a small, specialized magnifying glass that slides over an image to help recognize specific features or patterns in the image, like edges, curves, or textures. It does this by multiplying its values with the pixel values of the image in a small region and then adding up those products. This process helps highlight important parts of the image. As the kernel slides over the entire image, it creates a new, simplified version of the image that emphasizes the patterns it is looking for. This simplified version is called a feature map. The CNN then uses these feature maps to understand the image and make predictions.

The NeSyFOLD-G framework can be used to create a _NeSy-G_ model, which is a composition of the CNN and a rule-set generated from the kernels in its last convolution layer. A Rule Based Machine Learning (RBML) algorithm called FOLD-SE-M [28] is used for generating the rule-set from the binarized outputs of the groups of similar kernels in a trained CNN. The rule-set is a default theory represented as a normal logic program [16], i.e., Prolog extended with negation-as-failure. The binarized output (0/1) of the kernel groups influences the truth value of the predicates appearing in the rule body. The rule-set can also be viewed as a stratified Answer Set Program, and the s(CASP) [1] ASP system can be used to obtain justifications of the predictions made by the NeSy-G model. The rule-set also serves as a global explanation for the predictions made by the CNN. Our first novel contribution is the _kernel grouping algorithm_ that finds groups of similar kernels in the CNN based on the cosine similarity score of their corresponding generated feature maps. We also introduce a semantic labelling algorithm that can be used to label the predicates in the rule-set with the semantic concept(s) that their corresponding kernel groups represent in the images. For example, the predicate 52(X) corresponding to kernel group 52 in the last convolution layer of the CNN will be replaced by bathtub(X) in the rule-set if kernel group 52 has learnt to look for "bathtubs" in the image. Fig. 1 illustrates the NeSyFOLD-G framework.

Padalkar et al. proposed the NeSyFOLD framework [17], which shares similarities with the NeSyFOLD-G framework. The major difference that separates NeSyFOLD-G from NeSyFOLD is that the truth values of predicates in the generated rule-set are influenced by the binarized output of _groups_ of similar kernels. In NeSyFOLD each predicate's truth value is influenced by a single kernel in the CNN. However, it is known that groups of kernels in the last layer are responsible for representing a single concept. Yang et al. [30] proposed an attention-based masking mechanism for finding the concept learnt by a single kernel by accounting for the other kernels with similar attention weights. Their approach serves as the motivation behind our kernel grouping algorithm, which uses the cosine similarity score between the feature maps of various kernels to find the groups of similar kernels. The size of the generated rule-set can be used as a metric for interpretability. Lage et al.
[12] comprehensively showed through human evaluations that as the size of the rule-set increases, the difficulty in interpreting the rule-set also increases.

Figure 1: The NeSyFOLD-G framework. Each kernel group is depicted with a unique color in the rule-set.

Padalkar et al. show that the NeSyFOLD framework generates a smaller rule-set than the ERIC system [26], which was the previous SOTA. We show that NeSyFOLD-G achieves a significant reduction in the size of the rule-set generated while maintaining or improving on the accuracy and fidelity in comparison to the NeSyFOLD framework. To summarize, our contributions are as follows:

1. We present a novel kernel grouping algorithm that constitutes the heart of the NeSyFOLD-G framework for improving the interpretability of the generated rule-set.
2. We also introduce a semantic labelling algorithm for labelling the predicates of the rule-set generated by the NeSyFOLD-G framework.

## 2 Background

**FOLD-SE-M:** The FOLD-SE-M algorithm [28] that we employ in our framework learns a rule-set from data as a _default theory_. Default logic is a non-monotonic logic used to formalize commonsense reasoning. A default \(D\) is expressed as:

\[D=\frac{A:\mathbf{M}B}{\Gamma} \tag{1}\]

Equation 1 states that the conclusion \(\Gamma\) can be inferred if the pre-requisite \(A\) holds and \(B\) is justified. \(\mathbf{M}B\) stands for "it is consistent to believe \(B\)". Normal logic programs can encode a default theory quite elegantly [8]. A default of the form:

\[\frac{\alpha_{1}\wedge\alpha_{2}\wedge\cdots\wedge\alpha_{n}:\mathbf{M}{\neg\beta_{1}},\mathbf{M}{\neg\beta_{2}}\ldots\mathbf{M}{\neg\beta_{m}}}{\gamma}\]

can be formalized as the normal logic programming rule:

\[\gamma\ :\boldsymbol{\texttt{-}}\ \alpha_{1},\alpha_{2},\ldots,\alpha_{n},\texttt{not}\ \beta_{1},\texttt{not}\ \beta_{2},\ldots,\texttt{not}\ \beta_{m}.\]

where \(\alpha\)'s and \(\beta\)'s are positive predicates and not represents negation-as-failure. We call such rules _default rules_. Thus, the default

\[\frac{bird(X):M{\neg penguin}(X)}{flies(X)}\]

will be represented as the following default rule in normal logic programming:

flies(X) :- bird(X), not penguin(X).

We call bird(X), the condition that allows us to jump to the default conclusion that X flies, the _default part_ of the rule, and not penguin(X) the _exception part_ of the rule. FOLD-SE-M [28] is a Rule Based Machine Learning (RBML) algorithm. It generates a rule-set from tabular data, comprising rules in the form described above. The complete rule-set can be viewed as a stratified answer set program. It uses special abx predicates to represent the exception part of a rule, where x is a unique numerical identifier. FOLD-SE-M incrementally generates literals for _default rules_ that cover positive examples while avoiding covering negative examples. It then swaps the positive and negative examples and calls itself recursively to learn exceptions to the default when there are still negative examples falsely covered. There are two tunable hyperparameters, \(ratio\) and \(tail\). The \(ratio\) controls the upper bound on the ratio of the number of false positives to the number of true positives implied by the default part of a rule. The \(tail\) sets the minimum number of training examples a rule can cover. FOLD-SE-M generates a much smaller number of rules than a decision-tree classifier and gives higher accuracy in general.
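To illustrate how such a rule-set is used for prediction, the following is a minimal Python sketch (our own naming, not the FOLD-SE-M toolkit's API) that evaluates a decision list of default rules against one row of binarized features:

```python
# Each rule is (head, positive_literals, negated_literals); heads starting
# with "ab" encode exceptions. Rules are tried top to bottom and the first
# satisfied target rule fires, as in a decision list.

def holds(literal, row, rules):
    if literal.startswith("ab"):  # exception predicate, defined by its own rules
        return any(h == literal and satisfied(pos, neg, row, rules)
                   for h, pos, neg in rules)
    return row.get(literal, 0) == 1  # plain predicate: binarized kernel group value

def satisfied(pos, neg, row, rules):
    return (all(holds(p, row, rules) for p in pos)
            and not any(holds(q, row, rules) for q in neg))

def predict(row, rules):
    for head, pos, neg in rules:
        if head.startswith("target") and satisfied(pos, neg, row, rules):
            return head
    return None

rules = [("target(X,'bathroom')", ["bathtub"], ["bed", "ab1"]),
         ("target(X,'bedroom')", ["bed"], []),
         ("ab1", ["shower_curtain"], [])]
print(predict({"bathtub": 1, "bed": 0, "shower_curtain": 0}, rules))
# -> target(X,'bathroom')
```

The predicate and class names here are hypothetical; in the actual pipeline they come from the binarization table and the semantic labelling step described below.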
## 3 Learning

In this section we describe the process of generating a rule-set from the CNN and obtaining the NeSy-G model. We start by training the CNN on the input images for the given image classification dataset. Any optimization technique can be used for updating the weights. Fig. 1 illustrates the learning pipeline.

**Binarization:** Once the CNN has been fully trained to convergence, we pass the full training set consisting of \(n\) images to the CNN. For each image \(i\) in the training set, let \(A_{i,k}\) denote the feature map generated by kernel \(k\) in the last convolutional layer. The feature map \(A_{i,k}\) is a 2D matrix of dimension determined by the CNN architecture. For each image \(i\) there are \(K\) feature maps generated, where \(K\) is the total number of kernels in the last convolutional layer of the CNN. To convert each of the feature maps to a single value, we take the norm of the feature maps as demonstrated by eq. (2) to obtain \(a_{i,k}\).

_Kernel grouping algorithm:_ We then find the groups of similar kernels in the CNN. Consider a kernel \(\hat{k}\) for which we need to identify the most similar kernels. We do this by first finding the _top-10_ images \(\hat{i}_{1},\hat{i}_{2},...,\hat{i}_{10}\) that activate \(\hat{k}\) the most, according to the norm values of the feature maps generated by \(\hat{k}\) for these images. Now, we compute the cosine similarity score between \(A_{\hat{i}_{g},\hat{k}}\) and \(A_{\hat{i}_{g},k^{\prime}}\), where \(\texttt{g}\in[1,10]\) and \(k^{\prime}\) is some kernel in the last layer of the CNN. The similarity score \(sim_{\hat{k},k^{\prime}}\) of kernel \(k^{\prime}\) w.r.t. \(\hat{k}\) is calculated by taking the mean of the cosine similarity scores over all the _top-10_ images \(\hat{i}_{1},\hat{i}_{2},...,\hat{i}_{10}\). The similarity score is a value between 0 and 1. Thus, we calculate the similarity score of all kernels in the last layer of the CNN w.r.t. \(\hat{k}\). The group of kernel \(\hat{k}\) then consists of all kernels that have a similarity score w.r.t. \(\hat{k}\) greater than a user-defined similarity threshold \(\theta_{s}\). Hence, we find a group of similar kernels \(G_{k}\) for every kernel \(k\) in the last layer of the CNN. Note that the total number of kernel groups \(G_{k}\) is the same as the total number of kernels in the last layer of the CNN.

Next, for each kernel group \(G_{k}\) we obtain the group norm \(a_{i,G_{k}}\) for each image \(i\) in the training set. This is achieved by taking the mean of the norms corresponding to each kernel in \(G_{k}\) for each image \(i\). This leads to the creation of a table \(T_{G}\) with each row representing an image and each column representing the group norm for each of the kernel groups \(G_{k}\). Finally, for each kernel group we convert the group norm values to either 0 or 1, which symbolizes the kernel group "activating" or "deactivating" for each image. This is called _binarization_ of the kernel groups. It is done by determining an appropriate threshold \(\theta_{G_{k}}\) for each kernel group \(G_{k}\) to binarize its output. The threshold \(\theta_{G_{k}}\) is calculated as a weighted sum of the mean and the standard deviation of the group norms \(a_{i,G_{k}}\) for all images \(i\) in the training set, denoted by eq. (3), where \(\alpha\) and \(\gamma\) are user-defined hyperparameters. Thus a binarization table \(B_{G}\) is created.
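The grouping and binarization steps just described can be sketched in a few lines of NumPy (the naming is ours; \(\theta_{s}\), \(\alpha\) and \(\gamma\) are the hyperparameters mentioned above, and eqs. (2) and (3) below give the exact definitions):

```python
import numpy as np

def group_kernels(fmaps, theta_s=0.8, top=10):
    """fmaps[i, k]: feature map of kernel k for image i, shape (n, K, h, w)."""
    n, K = fmaps.shape[:2]
    norms = np.linalg.norm(fmaps.reshape(n, K, -1), axis=2)   # a_{i,k}, eq. (2)
    groups = []
    for k_hat in range(K):
        top_imgs = np.argsort(norms[:, k_hat])[-top:]         # top-10 images for k_hat
        flat = fmaps[top_imgs].reshape(len(top_imgs), K, -1)
        ref = flat[:, k_hat]                                  # maps of k_hat itself
        # mean cosine similarity of every kernel k' to k_hat over the top images
        num = np.einsum("ikd,id->ik", flat, ref)
        den = (np.linalg.norm(flat, axis=2)
               * np.linalg.norm(ref, axis=1, keepdims=True) + 1e-12)
        sim = (num / den).mean(axis=0)
        groups.append(np.where(sim > theta_s)[0])             # group of k_hat
    return norms, groups

def binarize(norms, groups, alpha=0.6, gamma=0.7):
    # group norms a_{i,G_k} and per-group thresholds theta_{G_k}, eq. (3)
    g_norms = np.stack([norms[:, g].mean(axis=1) for g in groups], axis=1)
    theta = alpha * g_norms.mean(axis=0) + gamma * g_norms.std(axis=0)
    return (g_norms > theta).astype(int)                      # table B_G
```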
Each row in the table represents an image and each column is the binarized kernel group value, represented by either a 0 if \(a_{i,G_{k}}\leq\theta_{G_{k}}\) or a 1 if \(a_{i,G_{k}}>\theta_{G_{k}}\) (_cf._ Fig. 1 (right)).

\[a_{i,k}=||A_{i,k}||_{2} \tag{2}\]

\[\theta_{G_{k}}=\alpha\cdot\overline{a_{G_{k}}}+\gamma\sqrt{\frac{1}{n}\sum_{i=1}^{n}(a_{i,G_{k}}-\overline{a_{G_{k}}})^{2}} \tag{3}\]

**Rule-set Generation:** The binarization table \(B_{G}\) is given as an input to the FOLD-SE-M algorithm to obtain a rule-set in the form of a normal logic program. The FOLD-SE-M algorithm finds the most influential features in \(B_{G}\) and generates a rule-set that has these features as predicates. Since the features of \(B_{G}\) are kernel group ids, the raw rule-set has predicates whose names are the ids of their corresponding kernel groups. An example rule could be:

target(X,'2') :- not 3(X), 54(X), not ab1(X).

This rule can be interpreted as "Image X belongs to class '2' if kernel group 3 is not activated, kernel group 54 is activated, and the abnormal condition (exception) ab1 does not apply". There will be another rule in the rule-set with ab1(X) as its head. The binarized output of a kernel group determines the truth value of its predicate in the rule-set. The rule-set generated is in the form of a decision list, i.e., the next rule is checked only if the current rule and all the rules above it were not satisfied.

**Semantic labelling:** Groups of kernels activate in synergy to identify concepts in the CNN. Since we capture the outputs of the kernel groups as truth values of predicates in the rule-set, we can label the predicates with the semantic concept(s) that the corresponding kernel group has learnt. Thus, the same example rule from above may now look like:

target(X,'bathroom') :- not bed(X), bathtub(X), not ab1(X).

We introduce a novel semantic labelling algorithm to automate the semantic labelling of the predicates in the generated rule-set. The details of the algorithm are discussed later. The NeSy-G model is conceptualized as the model obtained after replacing all the layers following the last convolutional layer with the rule-set generated by applying the FOLD-SE-M algorithm on the binarization table \(B_{G}\).

## 4 Inference

For using the NeSy-G model to obtain predictions on the test set, we first obtain the feature maps of each kernel in the last convolutional layer. Then, we compute the group norms for all kernel groups that were found in the learning process to obtain the table \(T_{G_{k}}^{test}\). From \(T_{G_{k}}^{test}\) we obtain the binarization table \(B_{G}^{test}\) by binarizing the output of each kernel group in \(T_{G_{k}}^{test}\) using the threshold \(\theta_{G_{k}}\) calculated in the learning phase. Next, for each binarized vector \(b\) in \(B_{G}^{test}\), we use the labelled/unlabelled rule-set obtained in the learning phase to make predictions. The truth value of the predicates in the rule-set is determined by the corresponding binarized kernel group values in \(b\). The FOLD-SE-M toolkit's built-in rule interpreter can be used to obtain the predicted class of \(b\) given the rule-set. The binarized kernel group values in \(b\) can also be listed as facts, and the rule-set, which can be viewed as a stratified answer set program, can be queried with the s(CASP) interpreter [1] to obtain the justification as well as the target class.
Note that s(CASP) searches for the answer set in a goal-directed manner, which implies that the rules are checked from the top to the bottom one by one. Hence, the first answer set that is found to satisfy the rule-set with the given facts entails the intended prediction made by the NeSy-G model.

## 5 Semantic Labelling of Predicates

The raw rule-set generated by FOLD-SE-M initially has kernel group ids as predicate names. Also, since the FOLD-SE-M algorithm finds only the most influential kernel groups, the number of kernel groups that actually appear in the rule-set is usually very low in comparison to the total number of kernel groups. We present a novel algorithm for automatically labelling the corresponding predicates of the kernel groups with the semantic concept(s) that the kernel groups represent. Xie et al. [29] showed that each kernel in the CNN may learn to represent multiple concepts in the images. Hence, each kernel group may also represent multiple concepts. As a result, we assign semantic labels to each predicate, denoting the names of the semantic concepts learnt by the corresponding kernel group. To regulate the extent of approximation, i.e., to dictate the number of concept names to be included in the predicate label, we introduce a hyperparameter _margin_. This hyperparameter exercises control over the precision of the approximation achieved. Figure 2 illustrates the semantic labelling of a given predicate.

The algorithm requires a dataset that has semantic segmentation masks of the training images. This essentially means that for every image \(i\) in the dataset \(I\), there is an image \(i_{M}\) where every pixel is annotated with the label of the object (concept) that it belongs to (Fig. 2 middle). We denote these by \(I_{M}\). The CNN that is trained on the training set is used to obtain the norms \(a_{i,k}\) of the feature maps \(A_{i,k}\) generated by each kernel \(k\) in the last convolution layer. Next, as the respective kernel groups for each kernel are known, the table \(T_{G_{k}}^{I_{M}}\) is created, where each row represents an image whose corresponding semantic segmentation mask is available and the columns are the kernel group norms. Now, consider some kernel group \(G_{\hat{k}}\) that has \(l\) kernels, namely \(\hat{k}_{1},\hat{k}_{2},...,\hat{k}_{l}\). The _top-m_ images \(i^{\prime}_{1},i^{\prime}_{2},...,i^{\prime}_{m}\in I^{\prime}_{m}\) according to the group norm values are selected. We need to calculate the group's _Intersection over Union (IoU\({}_{c}\))_ score for each concept \(c\) visible in the _top-m_ images that most activate the group. Then, according to this score for each concept \(c\), the label of the kernel group's predicate should comprise the top concepts that the kernel group is detecting.

\[IoU_{c}(i^{Mask},i)=\frac{\text{no. of non-zero pixels in }c\cap i}{\text{no. of non-zero pixels in }i} \tag{4}\]

For a given image \(i_{j}\in I^{\prime}_{m}\), the resized feature map generated by every kernel in the kernel group is used to mask the image to obtain \(i^{\hat{k}_{1}}_{j},i^{\hat{k}_{2}}_{j},...,i^{\hat{k}_{l}}_{j}\). Fig. 2 (top) shows a few images masked with the resized feature maps generated by a kernel. For each of these masked images, the \(IoU_{c}\) score is calculated using eq. (4) for each concept \(c\) that appears in the corresponding semantic segmentation mask \(i^{Mask}_{j}\) of the image \(i_{j}\). Fig. 2 (middle) shows the semantic segmentation masks of the images at the top.
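A minimal sketch of this per-image \(IoU_{c}\) computation of eq. (4) (with our own naming) may help make it concrete:

```python
import numpy as np

# `fmap` is a kernel's feature map resized to the image size; `seg` holds one
# integer concept label per pixel (the semantic segmentation mask).
def iou_c(fmap, seg, concept):
    masked = fmap > 0                        # non-zero pixels of the masked image i
    concept_px = (seg == concept) & masked   # pixels of concept c inside the mask
    return concept_px.sum() / max(masked.sum(), 1)

# Example: a 4x4 feature map that fires on the left half of an image whose
# left half is labelled with concept 7 gives an IoU_c of 1.0 for concept 7.
fmap = np.zeros((4, 4)); fmap[:, :2] = 1.0
seg = np.full((4, 4), 3); seg[:, :2] = 7
print(iou_c(fmap, seg, 7))   # 1.0
```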
Next, each kernel's \(IoU_{c}\) score for all the _top-m_ images, for all concepts \(c\), is calculated. Each kernel's mean \(IoU_{c}\) score is calculated by taking the mean score over all images. Finally, the kernel group's \(IoU_{c}\) score is calculated by taking the mean of the mean \(IoU_{c}\) score of each kernel for each concept \(c\).

Figure 2: The calculation of mean \(IoU_{c}\) scores for a kernel.

The algorithm can be summarized as follows:

1. For a given kernel group, find the _top-m_ images according to its group kernel norm value.
2. For each kernel in the kernel group, find the \(IoU_{c}\) score for each of the _top-m_ images.
3. Calculate the mean \(IoU_{c}\) score for each kernel over all images.
4. Calculate the mean of the mean \(IoU_{c}\) scores of the kernels to obtain the kernel group's \(IoU_{c}\) score.

Fig. 2 illustrates the \(IoU_{c}\) score calculation for a single kernel. The label of the corresponding predicate of a kernel group is chosen as the set of concepts whose _normalized_ \(IoU_{c}\) scores lie within a certain "margin" of the top concept. This is controlled using the user-defined _margin_ hyperparameter. For example, if the \(IoU_{c}\) score for kernel group 12 is \(\{cabinets:0.5,\ door:0.4,\ drawer:0.1\}\), then with a _margin_ of 0.1 the label for the corresponding predicate will be "\(cabinets1\_door1\)", since the concept _door_ is within the 0.1 margin of the top concept _cabinets_. Note that each concept name in the label is appended with a unique numerical identifier (in this case 1) to distinguish it from the other kernel groups that might learn the same concept. Say, if kernel group 25 is also detecting \(cabinets\), then its predicate's label would be "\(cabinets2\_...\)", where "..." denotes the other concepts that kernel group 25 might be detecting.

## 6 Experiments and Results

**Exp 1 (Setup):** We compare the performance of the NeSyFOLD-G framework with that of the NeSyFOLD framework on various datasets. We report the accuracy, fidelity, number of unique predicates in the rule-set, number of rules generated, and the size of the rule-set. Size is calculated as the total number of predicates in the bodies of the rules that constitute the logic program generated by NeSyFOLD and NeSyFOLD-G. We used a VGG16 CNN with pre-trained weights on the ImageNet dataset [4]. We trained for 100 epochs with a batch size of 32. We used the Adam [10] optimizer and applied class weights for imbalanced data. We also used \(L2\) regularization of 0.005 on all layers and a learning rate of \(5\times 10^{-7}\). We used a decay factor of 0.5 and patience of 10 epochs. Also, we resized all images to \(224\times 224\). We used \(\alpha=0.6\) and \(\gamma=0.7\) for all the datasets. For this experiment, we used the _German Traffic Sign Recognition Benchmark_ (GTSRB) [24], _MNIST_ [15] and the Places [36] dataset. The GTSRB dataset has 43 classes. Each class contains multiple instances of a physical signpost, and multiple images of each signpost are provided. We used an \(80:20\) training-validation split per class and used the provided test set to report the performance metrics of the models. The MNIST dataset has 10 classes. Each class contains images of a handwritten digit from 0 to 9. We split the standard training set into a train and a validation set by using the last \(10k\) images for the validation set. We used the provided test set to report the results. The Places dataset has images of various scenes.
To see the effect of varying the number of classes \(\in\{2,3,5,10\}\), we first train on the bathroom and bedroom classes (PLACES2). We then add the kitchen class (PLACES3.1), then dining room and living room (PLACES5), and finally home office, office, waiting room, conference room and hotel room (PLACES10). We also selected 2 additional subsets of 3 classes each, namely {desert road, forest road, street} (PLACES3.2) and {desert road, driveway, highway} (PLACES3.3). We obtained the train and the test set by selecting \(1k\) images from each class for the test set and the other \(4k\) for the training set. We use the given validation set to tune our hyperparameters. The NeSy-G model was created with the learning procedure described previously using the NeSyFOLD-G framework, and the NeSy model was created using the NeSyFOLD framework as described in [17]. The comparison between NeSyFOLD-G and NeSyFOLD is drawn in Table 1. The accuracy and fidelity are reported on the test set. The results are reported after 5 runs on each dataset. Note that fidelity determines how closely a model follows the predictions of another model. Since the NeSy-G and NeSy models are created from the trained CNN, they should show high fidelity w.r.t. the CNN.

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline Data & Algo & Fid. & Acc. & Pred. & Rules & Size \\ \hline \multirow{2}{*}{PLACES2} & NF & \(\mathbf{0.93\pm 0.01}\) & \(0.92\pm 0.01\) & \(16\pm 2\) & \(12\pm 2\) & \(28\pm 5\) \\ & NF-G & \(\mathbf{0.93\pm 0.0}\) & \(\mathbf{0.93\pm 0.0}\) & \(\mathbf{8\pm 1}\) & \(\mathbf{7\pm 1}\) & \(\mathbf{11\pm 2}\) \\ \hline \multirow{2}{*}{PLACES3.1} & NF & \(0.85\pm 0.03\) & \(0.84\pm 0.03\) & \(28\pm 6\) & \(21\pm 4\) & \(49\pm 9\) \\ & NF-G & \(\mathbf{0.87\pm 0.01}\) & \(\mathbf{0.86\pm 0.01}\) & \(\mathbf{20\pm 7}\) & \(\mathbf{15\pm 3}\) & \(\mathbf{31\pm 9}\) \\ \hline \multirow{2}{*}{PLACES3.2} & NF & \(\mathbf{0.94\pm 0.0}\) & \(\mathbf{0.92\pm 0.0}\) & \(16\pm 4\) & \(13\pm 3\) & \(26\pm 7\) \\ & NF-G & \(\mathbf{0.94\pm 0.01}\) & \(\mathbf{0.92\pm 0.01}\) & \(\mathbf{12\pm 3}\) & \(\mathbf{10\pm 1}\) & \(\mathbf{18\pm 3}\) \\ \hline \multirow{2}{*}{PLACES3.3} & NF & \(\mathbf{0.83\pm 0.01}\) & \(0.79\pm 0.01\) & \(32\pm 5\) & \(23\pm 3\) & \(60\pm 11\) \\ & NF-G & \(\mathbf{0.83\pm 0.01}\) & \(\mathbf{0.80\pm 0.01}\) & \(\mathbf{30\pm 2}\) & \(\mathbf{21\pm 3}\) & \(\mathbf{53\pm 6}\) \\ \hline \multirow{2}{*}{PLACES5} & NF & \(0.67\pm 0.03\) & \(0.64\pm 0.03\) & \(56\pm 3\) & \(52\pm 4\) & \(131\pm 10\) \\ & NF-G & \(\mathbf{0.68\pm 0.02}\) & \(\mathbf{0.65\pm 0.02}\) & \(\mathbf{41\pm 4}\) & \(\mathbf{34\pm 6}\) & \(\mathbf{83\pm 13}\) \\ \hline \multirow{2}{*}{PLACES10} & NF & \(0.23\pm 0.19\) & \(0.20\pm 0.17\) & \(\mathbf{33\pm 28}\) & \(\mathbf{32\pm 27}\) & \(\mathbf{78\pm 66}\) \\ & NF-G & \(\mathbf{0.33\pm 0.17}\) & \(\mathbf{0.30\pm 0.15}\) & \(74\pm 39\) & \(73\pm 39\) & \(184\pm 97\) \\ \hline \multirow{2}{*}{GTSRB} & NF & \(0.75\pm 0.04\) & \(0.75\pm 0.04\) & \(206\pm 28\) & \(134\pm 26\) & \(418\pm 79\) \\ & NF-G & \(\mathbf{0.76\pm 0.02}\) & \(\mathbf{0.76\pm 0.02}\) & \(\mathbf{176\pm 13}\) & \(\mathbf{98\pm 11}\) & \(\mathbf{320\pm 30}\) \\ \hline \multirow{2}{*}{MNIST} & NF & \(\mathbf{0.91\pm 0.01}\) & \(\mathbf{0.91\pm 0.01}\) & \(132\pm 9\) & \(90\pm 7\) & \(271\pm 25\) \\ & NF-G & \(0.90\pm 0.01\) & \(0.90\pm 0.01\) & \(\mathbf{103\pm 12}\) & \(\mathbf{79\pm 10}\) & \(\mathbf{216\pm 28}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of NeSyFOLD (NF) vs. NeSyFOLD-G (NF-G).
**Exp 1 (Result):** Table 1 clearly shows that the NeSy-G model outperforms the NeSy model w.r.t. accuracy and fidelity in most cases and is comparable otherwise. More importantly, the advantage of using the NeSyFOLD-G framework is apparent from the reduction in the number of predicates, the number of rules, and the overall size of the rule-set that is generated. The reduction in the size of the rule-set is a direct indication of improved interpretability, as pointed out by Lage et al. [12]. The main difference between the NeSyFOLD and the NeSyFOLD-G framework is the grouping of similar kernels in the latter. The grouped kernels form better features in the binarization table that is generated after binarizing the group norms. The grouping helps in creating more informative features for the FOLD-SE-M algorithm to generate the rules from. Hence, the same information can be captured with fewer predicates and rules than NeSyFOLD requires. Note that as the number of classes increases, as in the case of _PLACES2_, _PLACES3.1_, _PLACES3.2_, _PLACES3.3_, _PLACES5_ and _PLACES10_, both models show a decrease in accuracy and fidelity. This is because as the number of classes increases, more kernels are needed to represent the knowledge and consequently more kernels have to be binarized. Thus the loss incurred due to binarization of the kernels increases with the number of classes. Notice that for PLACES10 the size of the rule-set generated by NeSyFOLD-G is larger than that generated by NeSyFOLD. This is because for 2 out of the 5 runs, NeSyFOLD could not generate any rule-set, as the FOLD-SE-M algorithm could not find good enough features in the binarization table. Since the training set is relatively large (\(40k\) examples) and the number of classes is high (10 classes), the loss due to binarization rapidly increases. This is also the reason why the accuracy and fidelity are very low. However, since NeSyFOLD-G uses kernel grouping, the FOLD-SE-M algorithm gets to work with better features in the binarization table; thus the accuracy and fidelity are much higher compared to NeSyFOLD, and the rule-set size is also higher on average. Nevertheless, in 1 of the runs NeSyFOLD-G also fails to find a rule-set that explains the predictions of the CNN.

**Exp 2 (setup):** We use the procedure described previously for semantic labelling of the predicates in the generated rule-set. We use the _ADE20k_ dataset [37] in our experiments. It provides manually annotated semantic segmentation masks for a few images of all the classes of the Places dataset. The GTSRB and MNIST datasets do not have any semantic segmentation masks available. Hence, for all the subsets of classes of the Places dataset reported in Table 1, we show the effect of using the semantic labelling algorithm described in Section 5. In Figs. 3 and 4 we show labelled rule-sets for the PLACES2, PLACES3.1, PLACES3.2, PLACES3.3 and PLACES5 datasets. We used a \(ratio\) of 0.8 for all datasets, a \(tail\) of \(5\times 10^{-3}\) for PLACES2, PLACES3.1 and PLACES3.2, and a \(tail\) of \(1\times 10^{-2}\) for PLACES3.3 and PLACES5. A similarity threshold \(\theta_{s}\) of 0.8 was used for generating the rule-sets. We used a margin of 0.05 to label the raw rule-sets. We do not show the labelled rule-set for PLACES10 since the accuracy of the NeSy-G model is very low on that dataset.

**Exp 2 (result):** The labelled rule-sets make intuitive sense to humans. This representation of knowledge as a default theory, in our opinion, makes the rule-set easy to understand.
The rule-set captures the knowledge of the trained CNN.

Figure 3: The labelled rule-sets generated by NeSyFOLD-G for PLACES2 (RULE-SET 1), PLACES3.2 (RULE-SET 2), PLACES3.3 (RULE-SET 3) and PLACES5 (RULE-SET 4).

Figure 4: The justification (right) obtained from s(CASP) for an image "img" when running the query ?- target(img, X). against RULE-SET 5 (left).

For example, in RULE-SET 2, the first rule states that "an image X is a 'street' if there is evidence of the concept 'building' in the image". Similarly, the second rule states that "an image X is a 'forest road' if there is evidence of the concept 'tree' in the image and there is no evidence of the abnormal conditions 'ab1' and 'ab2'". Notice how in rules 2 and 3 of RULE-SET 1 in Fig. 3 the kernel groups, now labelled 'wall1' and 'wall3', are (most probably) detecting certain patterns on the walls that are indicative of bathrooms, possibly tiles. The kernels are labelled as wall only because the semantic segmentation masks available to us have the label 'wall' for the pixels that denote a wall in the image. Hence, we are restricted by the expressiveness of the annotations available to us. This can be alleviated by labelling the predicates via manual observation. Note that in the first rule of RULE-SET 5 (Fig. 4) there is a predicate cabinet4_wall5/1. This predicate corresponds to the kernel group in the CNN that is detecting either both cabinets and walls separately, or a specific region in the images that contains a portion of a cabinet and a wall; it is hard to distinguish between the two cases. Fig. 4 shows a sample justification obtained from s(CASP) for some image "img". The binarized vector associated with "img" is used to write the facts, and the query ?- target(img, X). is executed against RULE-SET 5. The first rule (shown in red) was satisfied. The first model found by s(CASP) that satisfies the rule-set binds the value of X to 'kitchen'. Hence, the predicted class of the image "img" is kitchen.

## 7 Related Work

A similar approach of generating rules from the CNN was adopted by Townsend et al. [26, 27], who used a decision tree algorithm to generate the rule-set. However, Padalkar et al. [17] have shown that using FOLD-SE-M generates a much smaller rule-set with higher accuracy and fidelity. There is a lot of past work which focuses on visualizing the outputs of the layers of the CNN. These methods try to map the relationship between the input pixels and the output of the neurons. Zeiler et al. [32] and Zhou et al. [35] use the output activations, while others [20, 5, 23] use gradients to find the mapping. Unlike NeSyFOLD-G, these visualization methods do not generate any rule-set. Zeiler et al. [32] use similar ideas to analyze what specific kernels in the CNN are invoked. There are fewer existing publications on methods for modeling relations between the various important features and generating explanations from them. Ferreira et al. [7] use multiple mapping networks that are trained to map the activation values of the main network's output to the human-defined concepts represented in an induced logic-based theory. Their method needs multiple neural networks besides the main network, which the user has to provide. Qi et al. [18] propose an Explanation Neural Network (XNN) which learns an embedding in a high-dimensional space and maps it to a low-dimensional explanation space to explain the predictions of the network. A sentence-like explanation including the features is then generated manually.
No rules are generated and manual effort is needed. Chen et al. [3] introduce a prototype layer in the network that learns to classify images in terms of various parts of the image. They assume that there is a one-to-one mapping between the concepts and the kernels. We do not make such an assumption. Zhang et al. [34, 33] learn disentangled concepts from the CNN and represent them in a hierarchical graph so that there is no assumption of a one-to-one kernel-concept mapping. However, no logical explanation is generated. Bologna et al. [2] extract propositional rules from CNNs. Their system operates at the neuron level, while NeSyFOLD-G works with groups of neurons. Our NeSyFOLD-G framework uses FOLD-SE-M to extract a logic program from the binarization table. There are other works that focus on extracting logic programs, such as the ILASP system [13] by Law et al. and the XHAIL system [19] by Ray et al., which induce an answer set program from data; however, these systems do not learn rules from images. Some other works [21, 6, 22] use a neurosymbolic system to induce logic rules from data. These systems belong to the Neuro:Symbolic \(\rightarrow\) Neuro category, whereas ours belongs to the Neuro;Symbolic category.

## 8 Conclusion and Future Work

In this paper we have shown how the NeSyFOLD-G framework can be used to make a CNN more interpretable. We used the framework with a trained CNN to derive a NeSy-G model that constitutes the CNN with all layers after the last convolutional layer replaced by the rule-set generated by the FOLD-SE-M algorithm. We compared the performance of the NeSyFOLD-G framework with that of the NeSyFOLD framework on various datasets. The major difference between the NeSyFOLD-G and the NeSyFOLD framework is that in the former, groups of similar kernels are found and the output of these kernel groups is then binarized to produce the binarization table that is used as input to the FOLD-SE-M algorithm, which generates a rule-set. The kernel grouping algorithm is a novel contribution of this work. In the NeSyFOLD framework each individual kernel's output is binarized and the rules are generated based on the binarization table thus constructed. We show in the experiments that grouping similar kernels leads to the creation of better features in the binarization table, which consequently leads to a more succinct rule-set. The NeSyFOLD-G framework always generates a smaller rule-set than that generated by the NeSyFOLD framework while either outperforming or showing comparable accuracy and fidelity. We also introduced a novel semantic labelling algorithm that can be used for labelling each predicate that appears in the rule-set with the concept(s) that its corresponding kernel group represents. We showed several labelled rule-sets and an example justification of a prediction that can be obtained using the s(CASP) ASP system. Note that both NeSyFOLD-G and NeSyFOLD are aimed at representing the connectionist knowledge of the CNN in terms of a symbolic rule-set. The symbolic rule-set can then be scrutinized by experts to figure out the biases that the CNN might have learnt from the data, which helps in avoiding spurious predictions in sensitive domains such as medical imaging. The advantage that NeSyFOLD-G provides is that the interpretability of the rule-set increases, as the size of the generated rule-set is significantly smaller.
We acknowledge that the semantic segmentation masks of images may not be readily available, depending on the domain, in which case the semantic labelling of the predicates has to be done manually. Our NeSyFOLD-G framework helps in this regard as well, as it decreases the number of predicates that need to be labelled. As the number of classes increases, the loss in accuracy also increases due to the binarization of more kernels. We plan to explore end-to-end training of the CNN with the generated rules so that this binarization loss can be reduced during training itself. In the future, we plan to use NeSyFOLD-G for real-world tasks such as interpretable breast cancer prediction. We also intend to explore combining the knowledge of two or more CNNs by producing a single rule-set that contains the kernels of the corresponding CNNs as predicates. We also plan to investigate how the knowledge from the generated rules can be backpropagated to improve the performance of a CNN [31].

The authors are supported by US NSF Grants IIS 1910131 and IIP 1916206, US DoD, and various industry grants.
2306.01391
Chemical Property-Guided Neural Networks for Naphtha Composition Prediction
The naphtha cracking process heavily relies on the composition of naphtha, which is a complex blend of different hydrocarbons. Predicting the naphtha composition accurately is crucial for efficiently controlling the cracking process and achieving maximum performance. Traditional methods, such as gas chromatography and true boiling curve, are not feasible due to the need for pilot-plant-scale experiments or cost constraints. In this paper, we propose a neural network framework that utilizes chemical property information to improve the performance of naphtha composition prediction. Our proposed framework comprises two parts: a Watson K factor estimation network and a naphtha composition prediction network. Both networks share a feature extraction network based on Convolutional Neural Network (CNN) architecture, while the output layers use Multi-Layer Perceptron (MLP) based networks to generate two different outputs - Watson K factor and naphtha composition. The naphtha composition is expressed in percentages, and its sum should be 100%. To enhance the naphtha composition prediction, we utilize a distillation simulator to obtain the distillation curve from the naphtha composition, which is dependent on its chemical properties. By designing a loss function between the estimated and simulated Watson K factors, we improve the performance of both Watson K estimation and naphtha composition prediction. The experimental results show that our proposed framework can predict the naphtha composition accurately while reflecting real naphtha chemical properties.
Chonghyo Joo, Jeongdong Kim, Hyungtae Cho, Jaewon Lee, Sungho Suh, Junghwan Kim
2023-06-02T09:37:03Z
http://arxiv.org/abs/2306.01391v1
# Chemical Property-Guided Neural Networks for Naphtha Composition Prediction

###### Abstract

The naphtha cracking process heavily relies on the composition of naphtha, which is a complex blend of different hydrocarbons. Predicting the naphtha composition accurately is crucial for efficiently controlling the cracking process and achieving maximum performance. Traditional methods, such as gas chromatography and true boiling curve, are not feasible due to the need for pilot-plant-scale experiments or cost constraints. In this paper, we propose a neural network framework that utilizes chemical property information to improve the performance of naphtha composition prediction. Our proposed framework comprises two parts: a Watson K factor estimation network and a naphtha composition prediction network. Both networks share a feature extraction network based on Convolutional Neural Network (CNN) architecture, while the output layers use Multi-Layer Perceptron (MLP) based networks to generate two different outputs - Watson K factor and naphtha composition. The naphtha composition is expressed in percentages, and its sum should be 100%. To enhance the naphtha composition prediction, we utilize a distillation simulator to obtain the distillation curve from the naphtha composition, which is dependent on its chemical properties. By designing a loss function between the estimated and simulated Watson K factors, we improve the performance of both Watson K estimation and naphtha composition prediction. The experimental results show that our proposed framework can predict the naphtha composition accurately while reflecting real naphtha chemical properties.

Keywords: naphtha cracking process; naphtha composition prediction; chemical-guided neural network

## I Introduction

Naphtha, a liquid hydrocarbon mixture derived from crude oil, serves as a crucial feedstock in Naphtha Cracking Centers (NCCs) for the production of core chemicals such as methane, ethylene, and propylene, as shown in Fig. 1. These chemicals are essential in the manufacturing of plastics, synthetic rubbers, and other materials widely used across various industries. The ratio of produced chemicals in NCCs is influenced by several control variables. In order to enhance economic value, it is imperative to manipulate these variables to optimize the production of target products in the NCC. The yield of products depends on two key variables: 1) the operating conditions, such as operating temperature and pressure, and 2) the naphtha composition, such as the content of paraffins and aromatics [1]. The composition of the naphtha is significant because each component has a different activation energy required to be cracked into lighter hydrocarbons. In the past, the chemical industry has employed optimization and estimation techniques to increase the yield of the target products. The operating conditions have been optimized using various commercial simulation software [2, 3, 4, 5], and the detailed naphtha composition has been estimated only roughly [6, 7, 8] for feedstock management. However, the traditional approaches rely heavily on mathematical equations and theoretical models coupled with case studies, making the optimization process time-consuming. Additionally, determining the precise composition of naphtha is challenging due to its complexity, despite its significant influence on product yield.
Therefore, relying on mathematical equations or theoretical models is not feasible, as naphtha's detailed composition can only be estimated through various experiments, such as gas chromatography, which offers limited flexibility. Recent advances in machine learning and data analytics have provided new opportunities for the optimization of cracking furnaces with high accuracy and efficiency. Kim et al. developed a machine learning-based yield prediction model of the cracking process [9]. The model was applied to find the optimal operating conditions for increasing the product yield while considering uncertainty. As a result, they found the optimal operating conditions for different target products, ethylene and propylene. Plehiers et al. developed an artificial neural network (ANN)-based model to predict a detailed characterization of naphtha and the flow rates of products [10]. The detailed characterization of a naphtha is predicted from three boiling points and the five main components of naphtha. The flow rates of the products were estimated using the reconstructed naphtha and the operating conditions. Although the accuracy of their model was lower than that of commercial simulation software, the required central processing unit time per reaction was on the order of seconds. Ma et al. [11] suggested a neural network-based naphtha molecular reconstruction model. They improved the typical artificial neural network-based model by embedding physical information into the neural network. Mei et al. [12] proposed a molecular-based Bayesian regression model for petroleum fractions. They assumed that each component of naphtha is independently and identically distributed. Bi and Qiu developed a novel naphtha molecular reconstruction process using a genetic algorithm and particle swarm optimization [13]. The proposed model was designed to provide an accurate composition for local refineries. They defined probability density functions to apply an optimization algorithm. As a result, they found that the proposed method reconstructed the naphtha composition more accurately than other methods. Although considerable efforts have been made in previous studies to optimize naphtha cracking processes, most of these studies have primarily focused on optimizing operating conditions. A few studies have attempted to reconstruct the naphtha composition in detail and demonstrated promising results; however, they often relied on several assumptions or neglected other important indicators, such as the Watson K factor, which encapsulates the chemical property information of naphtha. This study aims to address these limitations by comprehensively considering both detailed composition data and chemical properties for the optimization of naphtha cracking processes. To address this challenge, we propose a chemical property-guided neural network framework for accurate naphtha composition prediction using real-world naphtha data. Our proposed framework utilizes chemical property information to improve the performance of naphtha composition prediction. Specifically, our framework comprises two parts: a Watson K factor estimation network and a naphtha composition prediction network.
The Watson K factor is a measure of the hydrocarbon mixture's normal paraffin content, which plays a crucial role in predicting the naphtha composition accurately. The Watson K factor estimation network and the naphtha composition prediction network share a feature extraction network based on the Convolutional Neural Network (CNN) architecture, while the output layers use Multi-Layer Perceptron (MLP) based networks to generate two different outputs: the Watson K factor and the naphtha composition. To enhance the naphtha composition prediction, we utilize a distillation simulator to obtain the distillation curve from the naphtha composition, which is dependent on its chemical properties. By designing a loss function between the estimated and simulated Watson K factors, we improve the performance of both the Watson K factor estimation and the naphtha composition prediction. The experimental results show that our proposed framework can predict the naphtha composition accurately and can be applied to chemical engineering processes. The main contributions of our study can be summarized as follows:

* This is the first study to predict detailed naphtha composition from a distillation curve using deep neural networks. To predict the naphtha composition in detail, we propose a novel naphtha composition prediction model.
* To improve the naphtha composition prediction model, we propose a chemical property-guided neural network framework by using the Watson K factor.
* The proposed framework is validated with real industrial naphtha data, collected from a real naphtha cracking process. Thus, this framework is expected to be more applicable to real-world plants.

The remainder of this paper is organized as follows. Section II provides an overview of the network architecture and training process. Section III presents our experimental results and analysis. Finally, in Section IV, we conclude our work and give an insight into future work.

## II Methodology

The goal of the proposed framework is to predict the detailed composition of naphtha with high accuracy so as to respond quickly to the cracking process. By utilizing the predicted composition information, it is possible to optimize the reaction temperature for maximizing product yields using thermodynamic reaction kinetics equations. In industrial fields, domain experts roughly estimate the composition of the general types of components of naphtha by analyzing the corresponding distillation curve. To predict the naphtha composition accurately, we propose the chemical property-guided model, as shown in Fig. 2. The proposed model takes as input the distillation curve data, which is determined by the physicochemical interactions between the detailed components. The proposed model consists of a feature extractor motivated by ResNet [14] and MLP-based networks that predict the weight fractions of the detailed chemical composition. To guide the training procedure and improve the naphtha composition prediction performance, we incorporate the Watson K factor, which characterizes different types of chemicals, including paraffin-, aromatic-, and naphthene-type chemicals. The Watson K factor can be calculated by a commercial chemical simulator and a traditional physics equation. In the training procedure, the additional loss term on the K value thus updates the network to predict naphtha with realistic chemical properties, rather than merely achieving high prediction accuracy.
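As an illustration of this two-headed design, here is a minimal PyTorch sketch; the layer sizes and names are ours, and we replace the residual 1D convolutions of the actual extractor with plain ones for brevity:

```python
import torch
import torch.nn as nn

class PropertyGuidedNet(nn.Module):
    """Shared 1D-CNN feature extractor f(.) with two MLP heads:
    e(.) for the naphtha composition (wt%) and w(.) for the Watson K factor."""
    def __init__(self, n_points=30, n_comp=25):
        super().__init__()
        self.extractor = nn.Sequential(                      # f(.)
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.comp_head = nn.Sequential(                      # e(.)
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_comp))
        self.k_head = nn.Sequential(                         # w(.)
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                                    # x: (batch, n_points)
        z = self.extractor(x.unsqueeze(1))
        return self.comp_head(z), self.k_head(z).squeeze(-1)

model = PropertyGuidedNet()
curve = torch.randn(8, 30)        # a batch of distillation curves
comp, k = model(curve)            # (8, 25) wt% predictions, (8,) Watson K
```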
### _Problem definition_

Predicting the detailed naphtha composition in the industry is a challenging task that often requires multiple experiments. The conventional approaches have limited flexibility because they rely on a restricted set of commercial indices. In addition, recently proposed data-driven models require certain assumptions, which may hinder their application in real-world industrial settings. This study proposes a novel method for naphtha prediction using real plant data that considers a commercial chemical indicator to overcome these challenges.

Fig. 1: Overview of the naphtha cracking center (NCC) for ethylene and propylene production.

The proposed model utilizes the distillation curve of naphtha \(X=[x_{1},...,x_{n_{d}}]\), with \(n_{d}\) boiling points of the distillation curve, and the Watson K factor \(k\) as input variables, and the corresponding naphtha composition \(C=[c_{1},...,c_{n_{c}}]\), with \(n_{c}\) components, as output variables. To compute the distillation curve of naphtha as a composition-dependent variable using the simulator, this research employed data preprocessing techniques to transform the flow-rate (ton/hour) data of naphtha into weight percent (wt%) data. This conversion was necessary to ensure that the total components of the naphtha add up to 100 wt%. The Watson K factor, which classifies oil by boiling points and density, is calculated using Eq. (1) [15, 16, 17]:

\[k=\frac{\sqrt[3]{1.8\,T_{b}}}{\rho} \tag{1}\]

where \(T_{b}\) is the average boiling point (the factor 1.8 converts Kelvin to degrees Rankine) and \(\rho\) is the oil density (specific gravity).

### _Naphtha Composition Prediction Model_

The naphtha composition prediction model is illustrated in Fig. 3. In the chemical industry, naphtha composition is typically estimated using a distillation curve and expert knowledge. To incorporate this knowledge into the model, we designed a feature extractor \(f(\cdot)\) that is motivated by ResNet with 1D operations [18], as shown in Fig. 4, and an MLP-based composition estimation network \(e(\cdot)\). The input for naphtha composition prediction is the distillation curve obtained by using the ASPEN Hysys software [19, 20]. The ground truth of the naphtha composition \(C\) is given in the dataset, and we preprocess the real naphtha compositions so that the sum of the components equals 100 wt%. The proposed naphtha composition prediction model is trained to predict the detailed naphtha components from the input distillation curve data with the preprocessed naphtha composition ground truth. To train the networks, we formulate two loss functions. The first loss function calculates the difference between the predicted compositions and the ground truth of the real compositions. The second loss function regularizes the sum of the predicted compositions to be equal to 100 wt%.

\[\mathcal{L}_{comp}=\left\|C-e(f(X))\right\|_{2}^{2} \tag{2}\]

Fig. 2: Schematic illustration of the proposed chemical property-guided model.

Fig. 3: Training procedure and loss of the proposed naphtha composition prediction model.

Fig. 4: Detailed network structure of the feature extractor.

\[\mathcal{L}_{res}=\left\|100-\sum_{j=1}^{n_{c}}\hat{c}_{j}\right\|_{2}^{2} \tag{3}\]

where \(\hat{C}=e(f(X))\) is the output of the model and \(n_{c}\) is the number of components in naphtha. The objective loss \(\mathcal{L}_{pred}\) is expressed as the weighted sum of the two loss functions, where the weights \(\lambda_{comp}\) and \(\lambda_{res}\) control the relative importance of the different loss terms.
\[\mathcal{L}_{pred}=\lambda_{comp}\mathcal{L}_{comp}+\lambda_{res}\mathcal{L}_{res} \tag{4}\]

### _Chemical Property-guided Neural Network Framework with Watson K Factor_

To improve the naphtha composition prediction and incorporate the chemical property information, we propose the chemical property-guided neural network model, which uses the Watson K factor, a chemical property indicator of naphtha, to guide the prediction of the naphtha composition. As shown in Fig. 5, our proposed framework consists of three networks: a feature extractor \(f(\cdot)\), an MLP-based composition estimation network \(e(\cdot)\), and an MLP-based Watson K factor estimation network \(w(\cdot)\). The weights of the feature extractor are shared between the composition estimation and the Watson K factor estimation networks. The composition estimation network is trained by minimizing the composition prediction loss in Eq. (4) and an additional loss function that measures the discrepancy between the ground truth of the Watson K factor, calculated from the ground truth of the naphtha compositions, and the simulated Watson K factor value obtained from the predicted naphtha composition.

\[\mathcal{L}_{sim}=\left\|k-\hat{k}_{sim}\right\|_{2}^{2} \tag{5}\]

\[\mathcal{L}_{pred}=\lambda_{comp}\mathcal{L}_{comp}+\lambda_{res}\mathcal{L}_{res}+\lambda_{sim}\mathcal{L}_{sim} \tag{6}\]

where \(\hat{k}_{sim}\) is the Watson K factor of the predicted composition \(e(f(X))\) simulated by the distillation simulator, and \(\lambda_{sim}\) controls the relative importance of the simulated Watson K discrepancy loss. On the other hand, the Watson K factor estimation network is trained by minimizing the discrepancy between the predicted Watson K factor and the ground truth of the Watson K factor:

\[\mathcal{L}_{K}=\left\|k-w(f(X))\right\|_{2}^{2} \tag{7}\]

This loss function enables the Watson K factor estimation network to estimate the Watson K factor without the simulator and the Watson K factor equation, Eq. (1). We cross-train the different networks by alternately minimizing the two loss functions. The detailed training procedure for the chemical property-guided neural network is demonstrated in Algorithm 1.

```
Require: Batch size m and Adam hyperparameters η
Input: Sets of distillation curves x ∈ X, real naphtha compositions c ∈ C, and Watson K factors k ∈ K
Output: Predicted composition Ĉ and predicted Watson K factor k̂
while θ_f, θ_e, θ_w have not converged do
    Sample a mini-batch (x, c, k) from the distillation curve data X, the real naphtha compositions C, and the corresponding Watson K factors K.
    θ_f ← θ_f − η ∇_{θ_f} L_pred(x, c, k; θ_f)
    θ_e ← θ_e − η ∇_{θ_e} L_pred(x, c, k; θ_e)        ▷ Eq. (6)
    Sample a mini-batch (x, k) from the distillation curve data X and the corresponding Watson K factors K.
    θ_f ← θ_f − η ∇_{θ_f} L_K(x, k; θ_f)
    θ_w ← θ_w − η ∇_{θ_w} L_K(x, k; θ_w)              ▷ Eq. (7)
end while
```

**Algorithm 1** Training procedure for the chemical property-guided neural networks \(f\), \(e\), and \(w\). All experiments in the paper used the default values \(m=8\), \(\eta=0.01\), \(n_{Watson}=2\).
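A minimal PyTorch-style sketch of one cross-training step of Algorithm 1 is shown below. Here `model` is assumed to return both heads' outputs (as in the architecture sketch above), and `simulate_watson_k` is a hypothetical stand-in for the external distillation simulator; since the real simulator is not differentiable, a differentiable surrogate would be needed for the gradient of \(\mathcal{L}_{sim}\) to reach the network:

```python
import torch
import torch.nn.functional as F

def simulate_watson_k(comp):
    # Hypothetical stand-in for the distillation simulator: a simple
    # differentiable mapping from composition to a K-like value, used only
    # to keep this sketch self-contained and runnable.
    weights = torch.linspace(10.0, 13.0, comp.shape[1])
    return (comp / 100.0) @ weights

def train_step(model, optimizer, x, c, k,
               lam_comp=0.1, lam_res=0.001, lam_sim=1.0):
    # Update f(.) and e(.) with the prediction loss of Eq. (6).
    c_hat, _ = model(x)
    k_sim = simulate_watson_k(c_hat)
    loss_pred = (lam_comp * F.mse_loss(c_hat, c)
                 + lam_res * ((100.0 - c_hat.sum(dim=1)) ** 2).mean()
                 + lam_sim * F.mse_loss(k_sim, k))
    optimizer.zero_grad(); loss_pred.backward(); optimizer.step()
    # Update f(.) and w(.) with the Watson K loss of Eq. (7).
    _, k_hat = model(x)
    loss_k = F.mse_loss(k_hat, k)
    optimizer.zero_grad(); loss_k.backward(); optimizer.step()
    return loss_pred.item(), loss_k.item()
```

The default weights \(\lambda_{comp}=0.1\), \(\lambda_{res}=0.001\), and \(\lambda_{sim}=1\) here follow the implementation details reported in Section III.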
## III Experimental Results

### _Dataset_

This study utilized actual naphtha data obtained from a naphtha cracking center (NCC) plant located in Korea. A total of 254 naphtha samples were gathered and subjected to chemical experiments to determine their detailed composition. The analysis revealed that the naphtha samples consisted of 25 primary components, including normal paraffins, isoparaffins, aromatics, and naphthenes, based on their distinct carbon numbers. The distillation curve represents the boiling points of a liquid mixture as a function of their relative concentration. It shows the percentage of the liquid that boils at or below a given temperature and is commonly used to characterize and analyze the composition of the oil. Here, the distillation curve data were collected from the simulator, and 30 boiling points of the curve, ranging from 348 to 788 \(K\), were used to predict the naphtha composition. The dataset of 254 samples, including naphtha composition and distillation curve data, is divided into two sets: one for training the model, which contains 80% of the data, and the other for testing, which contains the remaining 20%. To avoid overfitting, K-fold cross-validation is performed in each epoch, where the training and validation datasets are randomly partitioned into 5 equal-sized folds.

Fig. 5: Overview of the training procedure and loss of the proposed chemical property-guided framework

### _Implementation Details_

All models in this paper are trained using the Adam optimizer with a mini-batch size of 8. The learning rate was set to 0.01, and the initial decay rates for the first and second moments of the gradients were set to 0.9 and 0.99, respectively. Additionally, during the training of the chemical property-guided model, the weight parameters \(\lambda_{comp}\), \(\lambda_{res}\), and \(\lambda_{sim}\) were set to 0.1, 0.001, and 1, respectively. Based on experimental studies, the optimal weights were chosen to ensure that each loss term contributed equally to the training loss. Once training was completed, the prediction accuracy of both models was compared on the test dataset using the following prediction metrics: mean absolute error (\(MAE\)), mean squared error (\(MSE\)), and \(R^{2}\).

\[MAE=\frac{1}{n}\sum_{i=1}^{n}\lvert y_{i}-\hat{y}_{i}\rvert \tag{8}\]

\[MSE=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2} \tag{9}\]

\[R^{2}=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}} \tag{10}\]

where \(y_{i}\) is the true value, \(\hat{y}_{i}\) is the predicted value, and \(\bar{y}\) is the mean of the true values.

### _Results and Discussion_

For the test dataset, Fig. 6 shows the parity plots of the proposed CNN-based and chemical property-guided models. In the graphs, the x- and y-axes indicate the true and predicted wt% of each component, respectively. The data points are distributed near the line \(y=x\), which implies that both models show high prediction performance for naphtha composition. Furthermore, Table I lists the averaged prediction performance of the proposed models under 5-fold cross-validation. The CNN-based model for predicting naphtha composition performed well, with an averaged \(MAE\) of 0.0213, \(MSE\) of 0.0485, and \(R^{2}\) of 0.95 after five-fold cross-validation. However, the standard deviations of \(MSE\) and \(R^{2}\) were 0.029, indicating some variation among the folds.
On the other hand, the chemical property-guided neural network achieved an average \(MAE\) of 0.0165, \(MSE\) of 0.0327, and \(R^{2}\) of 0.965, indicating that the chemical property-guided model not only performed well for naphtha composition prediction but also reduced the standard deviations of \(MSE\) and \(R^{2}\) to 0.020 and 0.021, respectively. These results show that the chemical property-guided model improved the performance of the naphtha prediction model through the additional training for the Watson K factor.

In naphtha, the weight fractions of the general composition types follow specific trends with the K factor due to their physicochemical characteristics. Commonly, at high Watson K values, paraffin and iso-paraffin components have large weight fractions, while aromatic and naphthene components have lower weight fractions. As shown in Fig. 7, the naphtha compositions predicted by the chemical property-guided model closely follow the behavior of the general composition types of real naphtha as the K value increases. Across the full range of Watson K values, the predominant components are paraffin and iso-paraffin, which make up more than 40% of the total weight. Because of the low boiling points resulting from their straight carbon-chain structure, an increase in their weight fraction leads to a high Watson K value. Conversely, carbon-ring-based aromatic and naphthene components, which have boiling points ranging from 80 to 210 °C, have a relatively lower weight percentage in crude oils with high Watson K values. Meanwhile, the CNN-based model does not show such composition behavior across different K values. Conclusively, the results reveal that the Watson K factor loss guides the prediction model to generate naphtha compositions that reflect realistic chemical properties, rather than merely minimizing the prediction error.

Fig. 6: Parity plots for naphtha composition of the different models: (a) CNN-based model; (b) chemical property-guided model

Fig. 7: Predicted general-type components at different Watson K values: (a) Paraffin; (b) Iso-paraffin; (c) Naphthene; (d) Aromatic

## IV Conclusion

The naphtha cracking process is significantly affected by variations in the composition of naphtha, but detailed composition analysis, such as ASTM gas chromatography, is time-consuming and requires pilot-plant-scale experiments [21]. To address this issue, a neural network framework is proposed in this study, which utilizes chemical property information and laboratory-scale data to accurately predict complex naphtha composition without detailed experiments. The proposed framework comprises two networks: a Watson K factor estimation network and a naphtha composition prediction network using distillation curve data. Both networks share a feature extraction network based on the Convolutional Neural Network (CNN) architecture, while the output layers use Multi-Layer Perceptron (MLP)-based networks to generate two different outputs: the Watson K factor and the naphtha composition. The results show that the proposed framework has lower prediction loss than the model using only distillation curve data and shows good agreement with real naphtha data. However, two main limitations remain: 1) the model needs to be evaluated in the cracking process, and 2) there is a possibility of overfitting. Although the proposed model has high prediction accuracy, errors in the detailed composition may lead to considerable errors in the cracking process due to its complex reaction kinetics.
Therefore, the predicted composition should be tested in the cracking process to ensure that the final product composition is similar to the real composition. Additionally, due to the small dataset size of 254 data points, further studies are required to expand the proposed model for practical application to the petrochemical process. ## Acknowledgment This work was partially supported by Carl-Zeiss Stiftung under the Sustainable Embedded AI project (P2021-02-009) and by the Korean Institute of Industrial Technology under the Development of AI Platform for Continuous Manufacturing of Chemical Process (JH-23-0002) and Development and application of carbon-neutral engineering platform based on carbon emission database and prediction model (KM-23-0098).
2310.02156
Probabilistically Rewired Message-Passing Neural Networks
Message-passing graph neural networks (MPNNs) emerged as powerful tools for processing graph-structured input. However, they operate on a fixed input graph structure, ignoring potential noise and missing information. Furthermore, their local aggregation mechanism can lead to problems such as over-squashing and limited expressive power in capturing relevant graph structures. Existing solutions to these challenges have primarily relied on heuristic methods, often disregarding the underlying data distribution. Hence, devising principled approaches for learning to infer graph structures relevant to the given prediction task remains an open challenge. In this work, leveraging recent progress in exact and differentiable $k$-subset sampling, we devise probabilistically rewired MPNNs (PR-MPNNs), which learn to add relevant edges while omitting less beneficial ones. For the first time, our theoretical analysis explores how PR-MPNNs enhance expressive power, and we identify precise conditions under which they outperform purely randomized approaches. Empirically, we demonstrate that our approach effectively mitigates issues like over-squashing and under-reaching. In addition, on established real-world datasets, our method exhibits competitive or superior predictive performance compared to traditional MPNN models and recent graph transformer architectures.
Chendi Qian, Andrei Manolache, Kareem Ahmed, Zhe Zeng, Guy Van den Broeck, Mathias Niepert, Christopher Morris
2023-10-03T15:43:59Z
http://arxiv.org/abs/2310.02156v4
# Probabilistically Rewired Message-Passing Neural Networks

###### Abstract

Message-passing graph neural networks (MPNNs) emerged as powerful tools for processing graph-structured input. However, they operate on a fixed input graph structure, ignoring potential noise and missing information. Furthermore, their local aggregation mechanism can lead to problems such as over-squashing and limited expressive power in capturing relevant graph structures. Existing solutions to these challenges have primarily relied on heuristic methods, often disregarding the underlying data distribution. Hence, devising principled approaches for learning to infer graph structures relevant to the given prediction task remains an open challenge. In this work, leveraging recent progress in exact and differentiable \(k\)-subset sampling, we devise probabilistically rewired MPNNs (PR-MPNNs), which learn to add relevant edges while omitting less beneficial ones. For the first time, our theoretical analysis explores how PR-MPNNs enhance expressive power, and we identify precise conditions under which they outperform purely randomized approaches. Empirically, we demonstrate that our approach effectively mitigates issues like over-squashing and under-reaching. In addition, on established real-world datasets, our method exhibits competitive or superior predictive performance compared to traditional MPNN models and recent graph transformer architectures.

## 1 Introduction

Graph-structured data is prevalent across various application domains, including fields like chemo- and bioinformatics (Barabasi & Oltvai, 2004; Jumper et al., 2021; Reiser et al., 2022), combinatorial optimization (Cappart et al., 2023), and social-network analysis (Easley et al., 2012), highlighting the need for machine learning techniques designed explicitly for graphs. In recent years, _message-passing graph neural networks_ (MPNNs) (Kipf & Welling, 2017; Gilmer et al., 2017; Scarselli et al., 2008; Velickovic et al., 2018) have become the dominant approach in this area, showing promising performance in tasks such as predicting molecular properties (Klicpera et al., 2020; Jumper et al., 2021) or enhancing combinatorial solvers (Cappart et al., 2023). However, MPNNs have a limitation due to their local aggregation mechanism. They focus on encoding local structures, severely limiting their expressive power (Morris et al., 2019; Xu et al., 2019; Morris et al., 2021). In addition, MPNNs struggle to capture global or long-range information, possibly leading to phenomena such as under-reaching or over-squashing.

Topping et al. (2021) and Bober et al. (2022) investigated over-squashing from the perspective of Ricci and Forman curvature. Refining Topping et al. (2021), Di Giovanni et al. (2023) analyzed how the architectures' width and graph structure contribute to the over-squashing problem, showing that over-squashing happens among nodes with high commute time, stressing the importance of _graph rewiring techniques_, i.e., adding edges between distant nodes to make the exchange of information more accessible. In addition, Deac et al. (2022); Shirzad et al. (2023) utilized expander graphs to enhance message passing and connectivity, while Karhadkar et al. (2022) resort to spectral techniques, and Banerjee et al. (2022) proposed a greedy random edge flip approach to overcome over-squashing. Recent work (Gutteridge et al., 2023) aims to alleviate over-squashing by again resorting to graph rewiring.
In addition, many studies have suggested different versions of multi-hop-neighbor-based message passing to maintain long-range dependencies (Abboud et al., 2022; Abu-El-Haija et al., 2019; Gasteiger et al., 2019; Xue et al., 2023), which can also be interpreted as a heuristic rewiring scheme. The above works indicate that graph rewiring is an effective strategy to mitigate over-squashing. However, most existing graph rewiring approaches rely on heuristic methods to add edges, potentially not adapting well to the specific data distribution or introducing edges randomly. Furthermore, there is limited understanding of the extent to which probabilistic rewiring, i.e., adding or removing edges based on the prediction task, impacts the expressive power of a model.

In contrast to the above lines of work, graph transformers (Chen et al., 2022; Dwivedi et al., 2022; He et al., 2023; Muller et al., 2023; Rampasek et al., 2022) and similar global attention mechanisms (Liu et al., 2021; Wu et al., 2021) marked a shift from local to global message passing, aggregating over all nodes. While not understood in a principled way, empirical studies indicate that graph transformers possibly alleviate over-squashing; see, e.g., Muller et al. (2023). However, due to their global aggregation mode, computing an attention matrix with \(n^{2}\) entries for an \(n\)-order graph makes them applicable only to small or mid-sized graphs. Further, to capture non-trivial graph structure, they must resort to hand-engineered positional or structural encodings.

Overall, current strategies to mitigate over-squashing rely on heuristic rewiring methods that may not adapt well to a prediction task or employ computationally intensive global attention mechanisms. Furthermore, the impact of probabilistic rewiring on a model's expressive power remains unclear.

Figure 1: Overview of the probabilistically rewired MPNN framework. PR-MPNNs use an _upstream model_ to learn priors \(\mathbf{\theta}\) for candidate edges, parameterizing a probability mass function conditioned on exactly-\(k\) constraints. Subsequently, we sample multiple \(k\)-edge adjacency matrices (here: \(k=1\)) from this distribution, aggregate these matrices (here: subtraction), and use the resulting adjacency matrix as input to a _downstream model_, typically an MPNN, for the final prediction task. On the backward pass, the gradients of the loss \(\ell\) with respect to the parameters \(\mathbf{\theta}\) are approximated through the derivative of the exactly-\(k\) marginals in the direction of the gradients of the point-wise loss \(\ell\) with respect to the sampled adjacency matrix. We use recent work to make the computation of these marginals exact and differentiable, reducing both bias and variance.

**Present work** By leveraging recent progress in differentiable \(k\)-subset sampling (Ahmed et al., 2023), we derive _probabilistically rewired MPNNs_ (PR-MPNNs). Concretely, we utilize an _upstream model_ to learn prior weights for candidate edges. We then utilize the weights to parameterize a probability distribution constrained by so-called \(k\)-subset constraints. Subsequently, we sample multiple \(k\)-edge adjacency matrices from this distribution and process them using a _downstream model_, typically an MPNN, for the final prediction task.
To make this pipeline trainable via gradient descent, we adapt recently proposed discrete gradient estimation and tractable sampling techniques (Xie and Ermon, 2019; Niepert et al., 2021; Ahmed et al., 2023); see Figure 1 for an overview. Our theoretical analysis explores how PR-MPNNs overcome MPNNs' inherent limitations in expressive power and identifies precise conditions under which they outperform purely randomized approaches. Empirically, we demonstrate that our approach effectively mitigates issues like over-squashing and under-reaching. In addition, on established real-world datasets, our method exhibits competitive or superior predictive performance compared to traditional MPNN models and graph transformer architectures. _Overall, PR-MPNNs pave the way for the principled design of more flexible MPNNs, making them less vulnerable to potential noise and missing information._

### Related Work

MPNNs are inherently biased towards encoding local structures, limiting their expressive power (Morris et al., 2019; Xu et al., 2019; Morris et al., 2021). Specifically, they are at most as powerful at distinguishing non-isomorphic graphs or nodes with different structural roles as the _1-dimensional Weisfeiler-Leman algorithm_ (Weisfeiler and Leman, 1968), a simple heuristic for the graph isomorphism problem; see Section 2. Additionally, they cannot capture global or long-range information, often linked to phenomena such as under-reaching (Barcelo et al., 2020) or over-squashing (Alon and Yahav, 2021), with the latter being heavily investigated in recent works.

**Graph rewiring** Several recent works aim to circumvent over-squashing via graph rewiring. Perhaps the most straightforward form of graph rewiring is incorporating multi-hop neighbors. For example, Bruel-Gabrielsson et al. (2022) rewire the graphs with \(k\)-hop neighbors and virtual nodes and also augment them with positional encodings. MixHop (Abu-El-Haija et al., 2019), SIGN (Frasca et al., 2020), DIGL (Gasteiger et al., 2019), and SP-MPNN (Abboud et al., 2022) can also be considered graph rewiring, as they can reach further-away neighbors in a single layer. In particular, Gutteridge et al. (2023) rewire the graph similarly to Abboud et al. (2022) but with a novel delay mechanism, showcasing promising empirical results. Several rewiring methods depend on particular metrics, e.g., Ricci or Forman curvature (Bober et al., 2022) and balanced Forman curvature (Topping et al., 2021). In addition, Deac et al. (2022); Shirzad et al. (2023) utilize expander graphs to enhance message passing and connectivity, while Karhadkar et al. (2022) resort to spectral techniques, and Banerjee et al. (2022) propose a greedy random edge flip approach to overcome over-squashing. Refining Topping et al. (2021), Di Giovanni et al. (2023) analyzed how the architectures' width and graph structure contribute to the over-squashing problem, showing that over-squashing happens among nodes with high commute time, stressing the importance of rewiring techniques. Overall, current strategies to mitigate over-squashing either rely on heuristic rewiring methods or even purely randomized approaches that may not adapt well to a given prediction task. Furthermore, the impact of probabilistic rewiring on a model's expressive power remains unclear. There also exists a large set of works from the field of graph structure learning proposing heuristic graph rewiring approaches; see Appendix A for details.
**Graph transformers** Different from the above, graph transformers (Dwivedi et al., 2022b; He et al., 2023; Muller et al., 2023; Rampasek et al., 2022; Chen et al., 2022) and similar global attention mechanisms (Liu et al., 2021; Wu et al., 2021) marked a shift from local to global message passing, aggregating over all nodes. While not understood in a principled way, empirical studies indicate that graph transformers possibly alleviate over-squashing; see, e.g., Muller et al. (2023). However, all transformers suffer from their quadratic time and memory requirements due to computing an attention matrix.

## 2 Background

In the following, we provide the necessary background.

**Notations** Let \(\mathbb{N}\coloneqq\{1,2,3,\dots\}\). For \(n\geq 1\), let \([n]\coloneqq\{1,\dots,n\}\subset\mathbb{N}\). We use \(\llbracket\dots\rrbracket\) to denote multisets, i.e., the generalization of sets allowing for multiple instances of each of its elements. A _graph_ \(G\) is a pair \((V(G),E(G))\) with _finite_ sets of _vertices_ or _nodes_ \(V(G)\) and _edges_ \(E(G)\subseteq\{\{u,v\}\subseteq V(G)\mid u\neq v\}\). If not otherwise stated, we set \(n\coloneqq|V(G)|\), and the graph is of _order_ \(n\). We also call the graph \(G\) an \(n\)-order graph. For ease of notation, we denote the edge \(\{u,v\}\) in \(E(G)\) by \((u,v)\) or \((v,u)\). Throughout the paper, we use standard notations, e.g., we denote the _neighborhood_ of a vertex \(v\) by \(N(v)\), and \(\ell(v)\) denotes its discrete vertex label, and so on; see Appendix B for details.

**\(1\)-dimensional Weisfeiler-Leman algorithm** The \(1\)-WL or color refinement is a well-studied heuristic for the graph isomorphism problem, originally proposed by Weisfeiler & Leman (1968). Formally, let \(G=(V(G),E(G),\ell)\) be a labeled graph. In each iteration, \(t>0\), the \(1\)-WL computes a node coloring \(C^{1}_{t}\colon V(G)\to\mathbb{N}\), depending on the coloring of the neighbors. That is, in iteration \(t>0\), we set

\[C^{1}_{t}(v)\coloneqq\mathsf{RELABEL}\Big(C^{1}_{t-1}(v),\,\llbracket C^{1}_{t-1}(u)\mid u\in N(v)\rrbracket\Big),\]

for all nodes \(v\in V(G)\), where \(\mathsf{RELABEL}\) injectively maps the above pair to a unique natural number, which has not been used in previous iterations. In iteration \(0\), the coloring is \(C^{1}_{0}\coloneqq\ell\). To test if two graphs \(G\) and \(H\) are non-isomorphic, we run the above algorithm in "parallel" on both graphs. If the two graphs have a different number of nodes colored \(c\) in \(\mathbb{N}\) at some iteration, the \(1\)-WL _distinguishes_ the graphs as non-isomorphic. Moreover, if the number of colors between two iterations, \(t\) and \((t+1)\), does not change, i.e., the cardinalities of the images of \(C^{1}_{t}\) and \(C^{1}_{t+1}\) are equal, or, equivalently,

\[C^{1}_{t}(v)=C^{1}_{t}(w)\iff C^{1}_{t+1}(v)=C^{1}_{t+1}(w),\]

for all nodes \(v\) and \(w\) in \(V(G)\), the algorithm terminates. For such \(t\), we define the _stable coloring_ \(C^{1}_{\infty}(v)=C^{1}_{t}(v)\), for \(v\) in \(V(G)\). The stable coloring is reached after at most \(\max\{|V(G)|,|V(H)|\}\) iterations (Grohe, 2017). It is easy to see that the algorithm cannot distinguish all non-isomorphic graphs (Cai et al., 1992). Nonetheless, it is a powerful heuristic that can successfully test isomorphism for a broad class of graphs (Babai & Kucera, 1979). A function \(f\colon V(G)\to\mathbb{R}^{d}\), for \(d>0\), is \(1\)-WL-_equivalent_ if \(f\equiv C^{1}_{\infty}\); see Appendix B for details.
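As an illustration, the refinement loop can be written in a few lines of Python; this is our own minimal sketch (graphs are given as neighbor dictionaries), not code from the paper.

```python
def wl_refinement(adj, labels):
    """1-WL color refinement. adj: dict node -> iterable of neighbors;
    labels: dict node -> initial color. Returns the stable coloring."""
    colors = dict(labels)
    while True:
        # Pair each node's color with the multiset of its neighbors' colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # RELABEL: injectively map each signature to a fresh color.
        palette, new_colors = {}, {}
        for v, sig in sigs.items():
            new_colors[v] = palette.setdefault(sig, len(palette))
        # Terminate once the number of color classes stops changing.
        if len(set(new_colors.values())) == len(set(colors.values())):
            return new_colors
        colors = new_colors
```

To compare two graphs, the refinement should be run on their disjoint union so that the color palette is shared; differing multisets of stable colors then witness non-isomorphism, as described above.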
**Message-passing graph neural networks** Intuitively, MPNNs learn a vectorial representation, i.e., a \(d\)-dimensional real-valued vector, representing each vertex in a graph by aggregating information from neighboring vertices. Let \(\mathbf{G}=(G,\mathbf{L})\) be an attributed graph. Following Gilmer et al. (2017) and Scarselli et al. (2008a), in each layer, \(t>0\), we compute vertex features

\[\mathbf{h}^{(t)}_{v}\coloneqq\mathsf{UPD}^{(t)}\Big(\mathbf{h}^{(t-1)}_{v},\mathsf{AGG}^{(t)}\big(\llbracket\mathbf{h}^{(t-1)}_{u}\mid u\in N(v)\rrbracket\big)\Big)\in\mathbb{R}^{d},\]

where \(\mathsf{UPD}^{(t)}\) and \(\mathsf{AGG}^{(t)}\) may be differentiable parameterized functions, e.g., neural networks, and \(\mathbf{h}^{(0)}_{v}=\mathbf{L}_{v}\). In the case of graph-level tasks, e.g., graph classification, one uses

\[\mathbf{h}_{G}\coloneqq\mathsf{READOUT}\big(\llbracket\mathbf{h}^{(T)}_{v}\mid v\in V(G)\rrbracket\big)\in\mathbb{R}^{d},\]

to compute a single vectorial representation based on learned vertex features after iteration \(T\). Again, \(\mathsf{READOUT}\) may be a differentiable parameterized function, e.g., a neural network. To adapt the parameters of the above three functions, they are optimized end-to-end, usually through a variant of stochastic gradient descent, e.g., Kingma & Ba (2015), together with the parameters of a neural network used for classification or regression.

## 3 Probabilistically rewired MPNNs

Here, we outline probabilistically rewired MPNNs (PR-MPNNs) based on recent advancements in discrete gradient estimation and tractable sampling techniques (Ahmed et al., 2023). Let \(\mathfrak{A}_{n}\) denote the set of adjacency matrices of \(n\)-order graphs. Further, let \((G,\mathbf{X})\) be an \(n\)-order attributed graph with an adjacency matrix \(\mathbf{A}(G)\in\mathfrak{A}_{n}\) and node attribute matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\), for \(d>0\). A PR-MPNN maintains a (parameterized) _upstream model_ \(h_{\mathbf{u}}\colon\mathfrak{A}_{n}\times\mathbb{R}^{n\times d}\to\Theta\), typically a neural network, parameterized by \(\mathbf{u}\), mapping an adjacency matrix and corresponding node attributes to unnormalized edge priors \(\mathbf{\theta}\in\mathbf{\Theta}\subseteq\mathbb{R}^{n\times n}\). In the following, we use the _priors_ \(\mathbf{\theta}\) as parameters of a (conditional) probability mass function,

\[p_{\mathbf{\theta}}(\mathbf{A}(H))\coloneqq\prod_{i,j=1}^{n}p_{\theta_{ij}}(\mathbf{A}(H)_{ij}),\]

assigning a probability to each adjacency matrix in \(\mathfrak{A}_{n}\), where \(p_{\theta_{ij}}(\mathbf{A}(H)_{ij}=1)=\mathrm{sigmoid}(\theta_{ij})\) and \(p_{\theta_{ij}}(\mathbf{A}(H)_{ij}=0)=1-\mathrm{sigmoid}(\theta_{ij})\). Since the parameters \(\mathbf{\theta}\) depend on the input graph \(G\), we can view the above probability as a conditional probability mass function conditioned on the graph \(G\). Unlike previous probabilistic rewiring approaches, e.g., Franceschi et al. (2019), we introduce dependencies between the graph's edges by conditioning the probability mass function \(p_{\mathbf{\theta}}(\mathbf{A}(H))\) on a _\(k\)-subset constraint_.
That is, the probability of sampling any given \(k\)-edge adjacency matrix \(\mathbf{A}(H)\) becomes

\[p_{(\mathbf{\theta},k)}(\mathbf{A}(H))\coloneqq\left\{\begin{array}{ll}p_{\mathbf{\theta}}(\mathbf{A}(H))/Z&\text{if }\|\mathbf{A}(H)\|_{1}=k,\\ 0&\text{otherwise},\end{array}\right.\text{ with }\quad Z\coloneqq\sum_{\mathbf{B}\in\mathfrak{A}_{n}\colon\|\mathbf{B}\|_{1}=k}p_{\mathbf{\theta}}(\mathbf{B}). \tag{1}\]

The original graph \(G\) is now rewired into a new adjacency matrix \(\bar{\mathbf{A}}\) by combining \(N\) samples \(\mathbf{A}^{(i)}\sim p_{(\mathbf{\theta},k)}(\mathbf{A}(G))\) for \(i\in[N]\) together with the original adjacency matrix \(\mathbf{A}(G)\) using a differentiable aggregation function \(g\colon\mathfrak{A}_{n}^{(N+1)}\to\mathfrak{A}_{n}\), i.e., \(\bar{\mathbf{A}}\coloneqq g(\mathbf{A}(G),\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(N)})\in\mathfrak{A}_{n}\). Subsequently, we use the resulting adjacency matrix as input to a _downstream model_ \(f_{\mathbf{d}}\), parameterized by \(\mathbf{d}\), typically an MPNN, for the final prediction task. We have so far assumed that the upstream MPNN computes one set of priors \(h_{\mathbf{u}}\colon\mathfrak{A}_{n}\times\mathbb{R}^{n\times d}\to\mathbb{R}^{n\times n}\), which we use to generate a new adjacency matrix \(\bar{\mathbf{A}}\) through sampling and then aggregating the adjacency matrices \(\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(N)}\). In Section 5, we show empirically that having multiple sets of priors from which we sample is beneficial. Multiple sets of priors mean that we learn an upstream model \(h_{\mathbf{u}}\colon\mathfrak{A}_{n}\times\mathbb{R}^{n\times d}\to\mathbb{R}^{n\times n\times M}\), where \(M\) is the number of priors. We can then sample and aggregate the adjacency matrices from these multiple sets of priors.

**Learning to sample** To learn the parameters of the up- and downstream models \(\mathbf{\omega}=(\mathbf{u},\mathbf{d})\) of the PR-MPNN architecture, we minimize the expected loss

\[L(\mathbf{A}(G),\mathbf{X},y;\mathbf{\omega})\coloneqq\mathbb{E}_{\mathbf{A}^{(i)}\sim p_{(\mathbf{\theta},k)}(\mathbf{A}(G))}\Big[\ell\Big(f_{\mathbf{d}}\Big(g\Big(\mathbf{A}(G),\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(N)}\Big),\mathbf{X}\Big),y\Big)\Big],\]

with \(y\in\mathcal{Y}\) the targets, \(\ell\) a point-wise loss such as the cross-entropy or MSE, and \(\mathbf{\theta}=h_{\mathbf{u}}(\mathbf{A}(G),\mathbf{X})\). To minimize the above expectation using gradient descent and backpropagation, we need to efficiently draw Monte-Carlo samples from \(p_{(\mathbf{\theta},k)}(\mathbf{A}(G))\) and estimate \(\nabla_{\mathbf{\theta}}L\), the gradients of an expectation with respect to the parameters \(\mathbf{\theta}\) of the distribution \(p_{(\mathbf{\theta},k)}\).

**Sampling** To sample an adjacency matrix \(\mathbf{A}^{(i)}\) from \(p_{(\mathbf{\theta},k)}(\mathbf{A}(G))\) conditioned on \(k\)-edge constraints, and to allow PR-MPNNs to be trained end-to-end, we use Simple (Ahmed et al., 2023), a recently proposed gradient estimator. Concretely, we can use Simple to sample _exactly_ from the \(k\)-edge adjacency matrix distribution \(p_{(\mathbf{\theta},k)}(\mathbf{A}(G))\) on the forward pass.
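For intuition, exact sampling from the exactly-\(k\) conditional in Equation 1 is tractable: conditioned on \(\|\mathbf{A}(H)\|_{1}=k\), the probability of an edge set \(S\) is proportional to \(\prod_{(i,j)\in S}\exp(\theta_{ij})\), and one can sample sequentially using elementary symmetric polynomials. The sketch below is our own illustration of this distribution, not the Simple implementation, and it omits the differentiable marginals that Simple additionally provides.

```python
import torch

def sample_exactly_k(theta, k):
    """Draw S with |S| = k and P(S) proportional to prod_{i in S} exp(theta_i),
    i.e., the exactly-k conditional of independent Bernoullis with logits
    theta (flattened candidate edges). Assumes k <= theta.numel()."""
    w = theta.double().exp()           # unnormalized per-edge odds
    m = w.numel()
    # E[i][j] = e_j(w_i, ..., w_{m-1}): elementary symmetric polynomials;
    # up to a constant, E[0][k] plays the role of the normalizer Z in Eq. 1.
    E = torch.zeros(m + 1, k + 1, dtype=torch.double)
    E[m, 0] = 1.0
    for i in range(m - 1, -1, -1):
        E[i, 0] = 1.0
        for j in range(1, k + 1):
            E[i, j] = E[i + 1, j] + w[i] * E[i + 1, j - 1]
    sample, need = [], k
    for i in range(m):
        if need == 0:
            break
        # Probability that edge i is included given `need` picks remain.
        p_inc = w[i] * E[i + 1, need - 1] / E[i, need]
        if torch.rand(()) < p_inc:
            sample.append(i)
            need -= 1
    return sample
```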
On the backward pass, we compute the approximate gradients of the loss (which is an expectation over a discrete probability mass function) with respect to the prior weights \(\mathbf{\theta}\) using

\[\nabla_{\mathbf{\theta}}L\approx\partial_{\mathbf{\theta}}\mu(\mathbf{\theta})\nabla_{\mathbf{A}}\ell\ \ \text{with}\ \ \mu(\mathbf{\theta})\coloneqq\{p_{(\mathbf{\theta},k)}(\mathbf{A}(G)_{ij})\}_{i,j=1}^{n}\in\mathbb{R}^{n\times n},\]

with an exact and efficient computation of the marginals \(\mu(\mathbf{\theta})\) that is differentiable on the backward pass, achieving lower bias and variance. We show empirically that Simple (Ahmed et al., 2023) outperforms other sampling and gradient approximation methods such as Gumbel SoftSub-ST (Xie & Ermon, 2019) and I-MLE (Niepert et al., 2021), improving accuracy without incurring a computational overhead.

**Computational complexity** The vectorized complexity of the exact sampling and marginal inference step is \(\mathcal{O}(\log k\log l)\), where \(k\) is from our \(k\)-subset constraint and \(l\) is the maximum number of edges that we can sample. Assuming a constant number of layers, PR-MPNN's worst-case training complexity is \(\mathcal{O}(l)\) for both the upstream and downstream models. Let \(n\) be the number of nodes in the initial graph, and let \(l=\max\{l_{\text{add}},l_{\text{rm}}\}\), with \(l_{\text{add}}\) and \(l_{\text{rm}}\) being the maximum numbers of added and removed edges. If we consider all of the possible edges for \(l_{\text{add}}\), the worst-case complexity becomes \(\mathcal{O}(n^{2})\). Therefore, to reduce the complexity in practice, we select a subset of the possible edges using simple heuristics, such as considering the top \(l_{\text{add}}\) edges between the most distant nodes. During inference, since we do not need gradients for edges not sampled in the forward pass, the complexity is \(\mathcal{O}(l)\) for the upstream model and \(\mathcal{O}(L)\) for the downstream model, with \(L\) being the number of edges in the rewired graph.

## 4 Expressive Power of Probabilistically Rewired MPNNs

We now, for the first time, explore the extent to which probabilistic MPNNs overcome the inherent limitations of MPNNs in expressive power caused by the equivalence to \(1\)-WL in distinguishing non-isomorphic graphs (Xu et al., 2018; Morris et al., 2019). Moreover, we identify formal conditions under which PR-MPNNs outperform popular randomized approaches such as those dropping nodes and edges uniformly at random. We first make precise what we mean by probabilistically separating graphs by introducing a probabilistic and generally applicable notion of graph separation. Let us assume a conditional probability mass function \(p\colon\mathfrak{A}_{n}\to[0,1]\), conditioned on a given \(n\)-order graph, defined over the set of adjacency matrices of \(n\)-order graphs. In the context of PR-MPNNs, \(p\) is the probability mass function defined in Section 3, but it could also be any other conditional probability mass function over graphs. Moreover, let \(f\colon\mathfrak{A}_{n}\to\mathbb{R}^{d}\), for \(d>0\), be a permutation-invariant, parameterized function mapping a sampled graph's adjacency matrix to a vector in \(\mathbb{R}^{d}\). The function \(f\) could be the composition of an aggregation function \(g\) that removes the sampled edges from the input graph \(G\) and of a downstream MPNN.
Now, the conditional probability mass function \(p\)_separates_ two graphs \(G\) and \(H\) with probability \(\rho\)_with respect to_\(f\) if \[\mathbb{E}_{\widetilde{G}\sim p(\cdot|G),\widetilde{H}\sim p(\cdot|H)}\big{[} f(\mathbf{A}(\widetilde{G}))\neq f(\mathbf{A}(\widetilde{H}))\big{]}=\rho,\] that is, if in expectation over the conditional probability distribution, the vectors \(f(\mathbf{A}(\widetilde{G}))\) and \(f(\mathbf{A}(\widetilde{H}))\) are distinct with probability \(\rho\). In what follows, we analyze the case of \(p\) being the exactly-\(k\) probability distribution defined in Equation 1 and \(f\) being the aggregation function removing edges and a downstream MPNN. However, our framework readily generalizes to the case of node removal, and we provide these theoretical results in the appendix. Following Section 3, we sample adjacency matrices with exactly \(k\) edges and use them to remove edges from the original graph. We aim to understand the separation properties of the probability mass function \(p_{(k,\mathbf{\theta})}\) in this setting and for various types of graph structures. Most obviously, we do not want to separate isomorphic graphs and, therefore, remain isomorphism invariant, a desirable property of MPNNs. **Theorem 4.1**.: _For sufficiently large \(n\), for every \(\varepsilon\in(0,1)\) and \(k>0\), we have for almost all pairs, in the sense of Babai et al. (1980), of isomorphic \(n\)-order graphs \(G\) and \(H\) and all permutation-invariant, \(1\)-WL-equivalent functions \(f\colon\mathfrak{A}_{n}\to\mathbb{R}^{d}\), \(d>0\), there exists a probability mass function \(p_{(\mathbf{\theta},k)}\) that separates the graph \(G\) and \(H\) with probability at most \(\varepsilon\) with respect to \(f\)._ Theorem 4.1 relies on the fact that most graphs have a discrete \(1\)-WL coloring. For graphs where the \(1\)-WL stable coloring consists of a discrete and non-discrete part, the following result shows that there exist distributions \(p_{(\mathbf{\theta},k)}\) not separating the graphs based on the partial isomorphism corresponding to the discrete coloring. **Proposition 4.2**.: _Let \(\varepsilon\in(0,1)\), \(k>0\), and let \(G\) and \(H\) be graphs with identical \(1\)-WL stable colorings. Let \(V_{G}\) and \(V_{H}\) be the subset of nodes of \(G\) and \(H\) that are in color classes of cardinality \(1\). Then, for all choices of \(1\)-WL-equivalent functions \(f\), there exists a conditional probability distribution \(p_{(\mathbf{\theta},k)}\) that separates the graphs \(G[V_{G}]\) and \(H[V_{H}]\) with probability at most \(\varepsilon\) with respect to \(f\)._ Existing methods such as DropGNN (Papp et al., 2021) or DropEdge (Rong et al., 2020) are more likely to separate two (partially) isomorphic graphs by removing different nodes or edges between discrete color classes, i.e., on their (partially) isomorphic subgraphs. For instance, in the appendix, we prove that pairs of graphs with \(m\) edges exist where the probability of non-separation under uniform edge sampling is at most \(1/m\). This is undesirable as it breaks the MPNNs' permutation-invariance in these parts. Now that we have established that distributions with priors from upstream MPNNs are more likely to preserve (partial) isomorphism between graphs, we turn to analyze their behavior in separating the non-discrete parts of the coloring. 
The following theorem shows that PR-MPNNs are more likely to separate non-isomorphic graphs than probability mass functions that remove edges or nodes uniformly at random.

**Theorem 4.3**.: _For every \(\varepsilon\in(0,1)\) and every \(k>0\), there exists a pair of non-isomorphic graphs \(G\) and \(H\) with identical and non-discrete \(1\)-WL stable colorings such that for every \(1\)-WL-equivalent function \(f\),_

1. _there exists a probability mass function_ \(p_{(k,\boldsymbol{\theta})}\) _that separates_ \(G\) _and_ \(H\) _with probability at least_ \((1-\varepsilon)\) _with respect to_ \(f\)_;_
2. _removing edges uniformly at random separates_ \(G\) _and_ \(H\) _with probability at most_ \(\varepsilon\) _with respect to_ \(f\)_._

Finally, we can also show a negative result: there exist classes of graphs for which PR-MPNNs cannot do better than random sampling.

**Proposition 4.4**.: _For every \(k>0\), there exist non-isomorphic graphs \(G\) and \(H\) with identical \(1\)-WL colorings such that every probability mass function \(p_{(\boldsymbol{\theta},k)}\) separates the two graphs with the same probability as the distribution that samples edges uniformly at random._

| Model | Zinc (− Edge) ↓ | Zinc (+ Edge) ↓ | ogbg-molhiv (+ Edge) ↑ | Alchemy (+ Edge) ↓ |
| --- | --- | --- | --- | --- |
| K-ST SAT | 0.162±0.007 | 0.115±0.005 | 0.625±0.039 | N/A |
| K-SG SAT | 0.162±0.013 | 0.095±0.002 | 0.613±0.010 | N/A |
| Base | 0.258±0.006 | 0.207±0.006 | 0.775±0.011 | 11.12±0.690 |
| Base w. PE | 0.162±0.001 | 0.101±0.004 | 0.764±0.018 | 7.197±0.094 |
| PR-MPNN (Gumbel) (ours) | 0.153±0.003 | 0.103±0.008 | 0.760±0.025 | 6.858±0.059 |
| PR-MPNN (I-MLE) (ours) | 0.151±0.001 | 0.104±0.008 | 0.774±0.015 | 6.692±0.061 |
| PR-MPNN (Simple) (ours) | 0.139±0.001 | 0.085±0.002 | 0.795±0.009 | 6.447±0.057 |
| GPS | N/A | 0.070±0.004 | 0.788±0.010 | N/A |
| K-ST SAT | 0.164±0.007 | 0.102±0.005 | 0.613±0.025 | N/A |
| K-SG SAT | 0.131±0.002 | 0.094±0.008 | 0.591±0.005 | N/A |

Table 1: Comparison between PR-MPNN and baselines on three molecular property prediction datasets. We report results for PR-MPNN with different gradient estimators for \(k\)-subset sampling: Gumbel SoftSub-ST (Maddison et al., 2017; Jang et al., 2017; Xie & Ermon, 2019), I-MLE (Niepert et al., 2021), and Simple (Ahmed et al., 2023), and compare them with the base downstream model and two graph transformers. The variant using Simple consistently outperforms the base models and is competitive with or better than the two graph transformers. We denote with + Edge the instances where edge features are provided and with − Edge those where they are not.

Figure 2: Comparison between PR-MPNN and DropGNN on the 4-Cycles dataset. PR-MPNN rewiring is almost always better than randomly dropping nodes and is always better with \(10\) priors.
## 5 Experimental Evaluation

Here, we explore to what extent our probabilistic graph rewiring leads to improved predictive performance on synthetic and real-world datasets. Concretely, we answer the following questions.

* **Q1** Can probabilistic graph rewiring mitigate the problems of over-squashing and under-reaching in synthetic datasets?
* **Q2** Is the expressive power of standard MPNNs enhanced through probabilistic graph rewiring? That is, can we verify empirically that the separating probability mass function of Section 4 can be learned with PR-MPNNs and that multiple priors help?
* **Q3** Does the increase in predictive performance due to probabilistic rewiring apply to (a) graph-level molecular prediction tasks and (b) node-level prediction tasks involving heterophilic data?

The repository of our code can be accessed at [https://github.com/chendiqian/PR-MPNN](https://github.com/chendiqian/PR-MPNN).

**Datasets** To answer **Q1**, we utilized the Trees-NeighborsMatch dataset (Alon and Yahav, 2021). Additionally, we created the Trees-LeafCount dataset to investigate whether our method could mitigate under-reaching issues; see Appendix D for details. To tackle **Q2**, we performed experiments with the Exp (Abboud et al., 2020) and Csl (Murphy et al., 2019) datasets to assess how much probabilistic graph rewiring can enhance the models' expressivity. In addition, we utilized the 4-Cycles dataset from Loukas (2020); Papp et al. (2021) and set it against a standard DropGNN model (Papp et al., 2021) for comparison, while also ablating the performance difference with respect to the number of priors and samples per prior. To answer **Q3** (a), we used the established molecular graph-level regression datasets Alchemy (Chen et al., 2019), Zinc (Jin et al., 2017; Dwivedi et al., 2020), ogbg-molhiv (Hu et al., 2020a), QM9 (Hamilton et al., 2017), and LRGB (Dwivedi et al., 2022b), together with five datasets from the TUDataset repository (Morris et al., 2020). To answer **Q3** (b), we used the Cornell, Wisconsin, and Texas node-level classification datasets (Pei et al., 2020).

**Baseline and model configurations** For our upstream model \(h_{\mathbf{u}}\), we use an MPNN, specifically the GIN layer (Xu et al., 2019). For an edge \((v,w)\in E(G)\), we compute \(\mathbf{\theta}_{vw}=\phi([\mathbf{h}_{v}^{T}||\mathbf{h}_{w}^{T}])\in\mathbb{R}\), where \([\cdot||\cdot]\) is the concatenation operator and \(\phi\) is an MLP. After obtaining the prior \(\mathbf{\theta}\), we rewire our graphs by sampling two adjacency matrices for deleting edges and adding new edges, i.e., \(g(\mathbf{A}(G),\mathbf{A}^{(1)},\mathbf{A}^{(2)}):=(\mathbf{A}(G)-\mathbf{A}^{(1)})+\mathbf{A}^{(2)}\), where \(\mathbf{A}^{(1)}\) and \(\mathbf{A}^{(2)}\) are two sampled adjacency matrices with a possibly different number of edges. Finally, the rewired adjacency matrix (or multiple adjacency matrices) is used in a _downstream model_ \(f_{\mathbf{d}}\colon\mathfrak{A}_{n}\times\mathbb{R}^{n\times d}\to\mathcal{Y}\), typically an MPNN, with parameters \(\mathbf{d}\) and \(\mathcal{Y}\) the prediction target set. A minimal sketch of this scoring and rewiring step follows.
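The sketch below (PyTorch) illustrates the upstream edge scoring and the rewiring aggregation just described; the tensor layout and the final clamp are our assumptions (the clamp merely guards against an added edge coinciding with an existing one).

```python
import torch

def edge_logits(h, candidate_edges, mlp):
    """Upstream priors: theta_vw = phi([h_v || h_w]) for candidate edges.
    h: [n, d] node embeddings from the upstream GIN; candidate_edges: [2, m]."""
    src, dst = candidate_edges
    return mlp(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)

def rewire(A, A_del, A_add):
    """g(A, A1, A2) := (A - A1) + A2 on 0/1 adjacency matrices, where A_del
    is sampled over existing edges and A_add over candidate new edges."""
    return torch.clamp(A - A_del + A_add, min=0, max=1)
```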
For the instance where we have multiple priors, as described in Section 3, we can either aggregate the sampled adjacency matrices \(\mathbf{A}^{(1)},\ldots,\mathbf{A}^{(N)}\) into a single adjacency matrix \(\mathbf{\tilde{A}}\) that we send to a downstream model, as described in Figure 1, or construct a downstream ensemble with multiple aggregated matrices \(\mathbf{\tilde{A}}_{1},\ldots,\mathbf{\tilde{A}}_{M}\). All of our downstream models \(f_{\mathbf{d}}\) and base models are MPNNs with GIN layers. When we have access to edge features, we use the GINE variant (Hu et al., 2020b) for edge feature processing. For graph-level tasks, we use mean pooling, while for node-level tasks, we take the node embedding \(\mathbf{h}_{v}^{T}\) for a node \(v\). The final embeddings are then processed and projected to the target space by an MLP. For ZINC, Alchemy, and ogbg-molhiv, we compare our rewiring approaches with the base downstream model, both with and without positional embeddings. Further, we compare to GPS (Rampasek et al., 2022) and SAT (Chen et al., 2022), two state-of-the-art graph transformers. For the TUDataset, we compare with the reported scores from Giusti et al. (2023b) and use the same evaluation strategy as in Xu et al. (2019); Giusti et al. (2023b), i.e., running 10-fold cross-validation and reporting the maximum average validation accuracy. For the different tasks, we search for the best hyperparameters for sampling and for our upstream and downstream models; see Table 8 in the appendix for the complete description. For Zinc, Alchemy, and ogbg-molhiv, we evaluate multiple gradient estimators in terms of predictive power and computation time. Specifically, we compare Gumbel SoftSub-ST (Maddison et al., 2017; Jang et al., 2017; Xie and Ermon, 2019), I-MLE (Niepert et al., 2021), and Simple (Ahmed et al., 2023). The results in terms of predictive power are detailed in Table 1, and the computation time comparisons can be found in Table 9 in the appendix. Further experimental results on QM9 and LRGB are included in Appendix F.

Concerning **Q3** (a), our approach achieves the best results on most of the datasets, with the exception being NCI1, where our method ranks second, after the WL kernel. Hence, our results indicate that probabilistic graph rewiring can improve performance for molecular prediction tasks. Concerning **Q3** (b), we obtain performance gains over the base model and other existing MPNNs (see Table 11 in the appendix), indicating that data-driven rewiring has the potential to alleviate the _effects_ of over-smoothing by removing undesirable edges and making new ones between nodes with similar features. The graph transformer methods outperform the rewiring approach and the base models, except on the Texas dataset, where our method obtains the best result. We speculate that GIN's aggregation mechanism for the downstream models is a limiting factor on heterophilic data. We leave the analysis of combining probabilistic graph rewiring with downstream models that address over-smoothing to future investigations.

## 6 Conclusion

Here, we utilized recent advances in differentiable \(k\)-subset sampling to devise probabilistically rewired message-passing neural networks, which learn to add relevant edges while omitting less beneficial ones, resulting in the PR-MPNN framework. For the first time, our theoretical analysis explored how PR-MPNNs enhance expressive power, and we identified precise conditions under which they outperform purely randomized approaches.
On synthetic datasets, we demonstrated that our approach effectively alleviates the issues of over-squashing and under-reaching while overcoming MPNNs' limits in expressive power. In addition, on established real-world datasets, we showed that our method is competitive or superior to conventional MPNN models and graph transformer architectures regarding predictive performance and computational efficiency. Ultimately, PR-MPNNs represent a significant step towards systematically developing more adaptable MPNNs, rendering them less susceptible to potential noise and missing data, thereby enhancing their applicability and robustness. #### Acknowledgments CQ and CM are partially funded by a DFG Emmy Noether grant (468502433) and RWTH Junior Principal Investigator Fellowship under Germany's Excellence Strategy. AM and MN acknowledge funding by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016, the support by the Stuttgart Center for Simulation Science (SimTech), and the International Max Planck Research School for Intelligent Systems (IMPRS-IS). This work was funded in part by the DARPA Perceptually-enabled Task Guidance (PTG) Program under contract number HR00112220005, the DARPA Assured Neuro Symbolic Learning and Reasoning (ANSR) Program, and a gift from RelationalAI. GVdB discloses a financial interest in RelationalAI.
2301.09742
Topological Understanding of Neural Networks, a survey
We look at the internal structure of neural networks, which is usually treated as a black box. The easiest and most comprehensible thing to do is to look at a binary classification and try to understand the approach a neural network takes. We review the significance of different activation functions, the types of network architectures associated with them, and some empirical data. We make some interesting observations and see a possibility of building upon these ideas to verify the process for real datasets. We suggest some possible experiments to look forward to in three different directions.
Tushar Pandey
2023-01-23T22:11:37Z
http://arxiv.org/abs/2301.09742v1
# Topological understanding of Neural Networks

###### Abstract

In this review paper, we look at the internal structure of neural networks, which is usually treated as a black box. The easiest and most comprehensible thing to do is to look at a binary classification and try to understand the approach a neural network takes. We review the significance of different activation functions, the types of network architectures associated with them, and some empirical data. At the end, we conclude by describing some possible choices of activation functions for different problems and techniques.

## I Introduction

One of the prominent questions in deep learning is understanding what happens inside the black box, i.e., the hidden layers. The theme of this paper is to understand what happens to the data as it goes through the different layers. Other approaches have been taken: one looks at each data point (usually an image) after every layer; another tries to understand the boundary manifold and how it changes through the hidden layers. Even though these methods are important, we believe it is more important to look at the transformation of the entire data set as it goes through the hidden layers and to see the representation of the data space in the final layer. We begin the paper by looking at some smooth activation functions, where width plays an important role. We provide some intuition behind selecting neural networks with different architectures. We point out some possible errors in considering such methods and the time complexity that comes with them. In the third section, we look at a comparison between smooth and non-smooth activation functions, along with changing the width and depth of the network. We try to answer a widely asked question: what makes ReLU better than other activation functions in practice? [1], [2], [3]

We begin with a simulated dataset, where the topology is known, in order to understand the changes that could take place in an actual manifold. These changes are measured under different conditions. Once there is an idea of the change, it is verified on real-world data in high dimensions. Due to computational power constraints, some parameters of the architecture are not adjusted when moving from the simulated data to real-world data with a large difference in dimensions. After reviewing these methods, we draw some conclusions from both approaches and provide experiments to extend the results to different architectures in order to improve the understanding of the black box. For most of the paper, we will consider the case where the data has two classes. All the definitions are provided at the end of the paper in the Appendix.

## II Smooth activation functions and change in topology

In this section, we look at the change in topology when the activation function is smooth. This section is based on the work of [4].

Consider an architecture with no hidden layer, and assume the data forms two lines as described in Fig. 1.

Figure 1: No hidden layer

Since there is no hidden layer, the output layer is a linear function; therefore, the neural network tries to classify the data by separating it with a straight line (or a hyperplane in the case of higher dimensions). In this example, it cannot properly classify the data set, as no straight line can possibly distinguish the two classes completely. Note that no activation function is involved so far. Now consider the case with one hidden layer.
We look at the \(tanh\) function and see the boundary line between the two classes of data in Fig. 2. This separation is seen in the actual data space, by which we mean the original representation of the data. Internally, however, the network does not try to change the boundary shape from a line to a curve; rather, it changes the shape of the data and then fits a linear separating boundary. More precisely, the ambient space of the data changes after applying the activation functions, and it therefore becomes easier for the neural network to construct a linear boundary. An example of this is demonstrated in Fig. 3. For smooth activation functions, this change in the data space is a homeomorphism.

A natural question arises: _"Is one hidden layer enough to change the data space (manifold) in order to separate two classes via a linear boundary for smooth activation functions?"_ Answer: no. In fact, depth is not sufficient to decide whether or not two data sets are separable. The depth of the network corresponds to the number of transformations. Since the functions are homeomorphisms, they cannot change certain topological properties of the space. One such property of a space is the homology (or the Betti numbers). Back to neural networks: if one class of data points forms an annulus and a cluster corresponding to a different class sits in its hole, no homeomorphism in \(\mathbb{R}^{2}\) can distinguish between these classes for a smooth-activation neural network. Since each layer is a homeomorphism, \(\beta_{1}\) will remain 1, which means no linear boundary can separate the two classes completely.

Question: _How about changing the width?_ Answer: yes! That will work. Width corresponds to an embedding in a higher-dimensional space, where one can lift the class inside the hole and separate the two classes with a hyperplane. This means that if we know the minimum dimension of the ambient space where the data can be embedded properly, we can use a neural network to completely classify the dataset.

**Theorem II.1**.: _For the data space described above, for any neural network with a smooth activation function, width 1 or 2 is not enough to completely classify the two data classes, regardless of the depth._

Proof.: For a smooth activation function, each hidden layer corresponds to a homeomorphism, which means \(\beta_{1}=1\) after all the hidden layers. The last layer is a linear transformation and therefore preserves \(\beta_{1}\) as well. In order to divide the space by a linear boundary, \(\beta_{1}\) needs to be 0. Therefore, a contradiction.

Note that this was true only because of the smoothness of the activation function. This means that if the activation function is not smooth (e.g., ReLU), the neural network can form a linear boundary to distinguish the two classes.

Figure 3: The data space is transformed and the final boundary (in green) is linear.

Figure 2: "tanh" activation function with one hidden layer. The green line is the boundary.

A dataset of n-dimensional points can be embedded in an ambient space of dimension 2n+2 such that a linear (hyperplane) boundary can separate the classes. Therefore, if the hidden layer has width \(\geq 2n+2\), the data can be separated.

**Theorem II.2**.: _[_4_]_ _There is an ambient isotopy between the input and a network layer's representation if: a) W is not singular, b) we are willing to permute the neurons in the hidden layer, and c) there is more than 1 hidden unit._
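To make the width discussion concrete, the following minimal PyTorch sketch (our own construction, not the author's code) trains a one-hidden-layer \(tanh\) network of width \(2n+2=6\) on annulus-and-cluster data as in the example above; such a network can typically reach near-perfect training accuracy, while by Theorem II.1 width 1 or 2 cannot.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Class 0: annulus of radius ~1.5; class 1: cluster inside its hole (n = 2).
t = 2 * torch.pi * torch.rand(500)
r = 1.5 + 0.1 * torch.randn(500)
annulus = torch.stack([r * torch.cos(t), r * torch.sin(t)], dim=1)
cluster = 0.3 * torch.randn(500, 2)
X = torch.cat([annulus, cluster])
y = torch.cat([torch.zeros(500), torch.ones(500)])

# One hidden tanh layer of width 2n + 2 = 6, enough to embed and separate.
net = nn.Sequential(nn.Linear(2, 6), nn.Tanh(), nn.Linear(6, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.01)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(net(X).squeeze(1), y)
    loss.backward()
    opt.step()

acc = ((net(X).squeeze(1) > 0).float() == y).float().mean().item()
print(f"training accuracy: {acc:.3f}")  # typically close to 1.0
```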
So far, we have a way to find a nice boundary that can potentially reach 100% accuracy on the training data. This, however, does not mean it is the actual separation. The author [4] also suggests, based on some empirical results, that changing the last layer from softmax to k-NN increases the accuracy. This approach concludes that for smooth activations, the network needs to be wide enough, with sufficient (but small) depth, so as not to force the network to get stuck at a local minimum.

## III Smooth vs non-smooth activation functions

We follow [5] for most of this section. Once again, the data is divided into two classes, \(M=M_{a}\cup M_{b}\). The first conclusion of this section is that ReLU outperforms smooth activation functions, namely tanh; the second concerns the benefit of depth, despite the fact that shallow networks can approximate most functions quite well. [5] performed the analysis on real and simulated data to reach these observations. The topological work is done through TDA, based on the idea of persistent homology, originally inspired by [6; 7; 8].

The interesting aspect of this study is that instead of looking at the change in each data point, the overall change in the shape of the data is observed, which provides better insight. The focus of the study is to look at the change in Betti numbers and how they are affected during training. In Fig. 4, the actual network changes the shape of the data as demonstrated. In order to unlink the components, it has to break the topology and change the Betti numbers. Generally speaking, for binary classification problems, the idea is to decrease the Betti numbers such that \(\beta_{1}(M_{a})=0=\beta_{1}(M_{b})\) and \(\beta_{2}(M_{a})=0=\beta_{2}(M_{b})\), whereas \(\beta_{0}(M_{a})=1=\beta_{0}(M_{b})\). Note that the last condition is not very strict. Even if \(\beta_{0}(M_{a})>1\) while the other \(\beta_{j}(M_{a})=0\) for \(j\geq 1\), it is still a good enough classification. In fact, \(\beta_{0}>1\) suggests the possibility of another class (or subclass), providing more insight about the data.

One obstruction to this hypothesis is that the data usually does not come in such a nice form. The data will more likely be in point cloud form with some noise. But that is where persistent homology comes into the picture. The presence of small noise does not change the effective \(\beta_{j}\) for the point cloud data. If the data does come from a sampling of a manifold structure, then persistent homology recovers the \(\beta_{j}\) almost precisely. The authors of the paper have looked at the topological changes not only for simulated data but also for real-world data, including images. Some important questions which will be answered here are: 1) Why does ReLU perform better than other activation functions empirically? 2) Are the topological changes observed through this method robust? 3) Why do deep neural networks work better than shallow ones despite the approximation theorems?

### Topology

We look into the topological complexity of the manifold and the generalization gap of the dataset, which measures the difference between the training and test accuracy of the model. The generalization gap is defined as

\[GG(X_{train},X_{test},m)=Acc(X_{train},m)-Acc(X_{test},m)\]

where \(m\) is the model. For piecewise linear functions, the upper bound for topological complexity is given in terms of linear regions. However, the number of linear regions determined through the training set is not stable under small perturbations [9].
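As an aside, the generalization gap defined above is straightforward to compute; in the following minimal sketch, the `accuracy` callable is an assumed stand-in for a model-evaluation routine.

```python
def generalization_gap(x_train, x_test, model, accuracy):
    """GG(X_train, X_test, m) = Acc(X_train, m) - Acc(X_test, m)."""
    return accuracy(x_train, model) - accuracy(x_test, model)
```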
Instead of looking at the decision boundary at different stages, the data space transformations are the object of interest here. In order to understand the transformations, one can look at the change in Betti numbers. Furthermore, instead of looking at the Betti numbers of the entire manifold, it is sufficient to look at the Betti number progression of each component. In practice, it is usually difficult to compute the homology of a point cloud. The standard practice in Topological Data Analysis (TDA) is to i) discard outliers and noise, ii) construct an \(\epsilon-\)Vietoris-Rips complex, and iii) simplify the VR complex without changing the topology. The topological structure can be altered by i) and ii), depending on the choices made for noise reduction/smoothing and for the \(\epsilon\) value in the VR complex. ### Setup 1. The problem assumed is a binary classification problem, \(M=M_{a}\cup M_{b}\), with the additional assumption \(\inf\{\|x-y\|:x\in M_{a},y\in M_{b}\}>0\). 2. For the simulated dataset, we know the manifold, so to generate the point cloud a large sample is selected uniformly and densely. The neural network chosen is feed-forward, with depth \(l\). 3. The network function is \(v=s\circ f_{l}\circ\cdots\circ f_{1}:\mathbb{R}^{d}\rightarrow[0,1]\), where \(s:\mathbb{R}^{n_{l+1}}\rightarrow[0,1]\) is the score function. 4. Let \(n_{j}\) denote the width of layer \(j\). We let \(n_{1}=d\), \(n_{l+1}=p\) and \(v_{j}=f_{j}\circ\cdots\circ f_{1}\). 5. The simulated data is non-realistic, with complicated topology in low dimensions. Real world data is in high dimensions but most likely simpler in terms of entangled classes. For the simulated data we look at large \(\beta_{0}\), non-zero \(\beta_{j}\) for \(j\geq 1\) and large topological complexity. 6. The model is trained to near-zero generalization error. Spoiler alert: with sufficient depth, the last layer maps \(v(T_{a}),v(T_{b})\) to opposite ends of \([0,1]\). There are some challenges in moving from simulated to real data. The topology is not known, so the persistent homology is harder to compute at each layer. The work is done on three different datasets, which can be seen in the following figure [5]. The red part of the data is \(M_{b}\) and the green part is \(M_{a}\). ### Training 1. Different activation functions: tanh, ReLU, leaky ReLU 2. Different depths from 4 to 10 3. Different widths between 6 and 50 4. Categorical cross-entropy loss 5. ADAM with 18000 epochs 6. \(\eta=0.02-0.04\) with exponential decay \(\eta^{t/d}\), where \(d=2500\) and t is the epoch. 7. For the bottleneck architecture (narrow width in the middle), d = 4000, \(\eta=0.5\) 8. Score = softmax function 9. The metric \(\delta_{k}\) for the \(VR_{\epsilon}\) complex is the graph geodesic distance on the k-nearest-neighbour graph: \(\delta_{k}(x_{i},x_{j})=\) the minimum number of edges between them in the knn graph. It preserves connectivity and normalizes distances. 10. Two hyperparameters, \(k\) and \(\epsilon\). Persistent homology is computed through the filtered complex w.r.t. \(\delta_{k}\) at \(\epsilon\). First find \(k^{*}\) with \(\epsilon=1\) by requiring \(\beta_{0}(VR_{k^{*}})=\beta_{0}(M)\); then find \(\epsilon^{*}\) by matching \(\beta_{1}\) and \(\beta_{2}\). ### Results 1. There is a clear decay in \(\beta_{0}\) across all neural network architectures. It is slowest for tanh, faster for Leaky ReLU and fastest for ReLU. 2. Width: * Narrow: 6 neurons in each layer; changes the topology faster. * Bottleneck: one of the middle layers has 3 neurons while the others have 15; sudden change in topology at the bottleneck. * Wider: 50 neurons each; smoother reduction in topological complexity. 
3. Depth: * For highly entangled classes, more depth is required for 100% accuracy; low depth makes it difficult to reach the accuracy we want. * The reduction in topological complexity is faster in the final layers. * Deeper neural networks sometimes perform unnecessary simplification. 4. For wide enough layers, \(\beta_{1}(M)=0\) and \(\beta_{2}(M)=0\) is not required. Shallow networks sometimes force this but deeper ones do not, therefore preserving more structure. ### Graphical Results ### Real Dataset These properties were verified on the MNIST, HTRU2, UCI Banknotes and UCI sensorless drive datasets. Since the dimensions of these datasets are high, persistent homology cannot be computed as efficiently and needs to be computed on every layer. The generalization gap requirement is relaxed to \(2-5\%\). The width and depth of the networks are fixed. MNIST: \(\mathbb{R}^{784}\) (for this dataset, the topological observations are made on the top 50 principal components), HTRU2: \(\mathbb{R}^{8}\), UCI Banknotes: \(\mathbb{R}^{4}\), UCI drive: \(\mathbb{R}^{49}\). The table of results is available in the appendix. The observations, as expected, were: 1. The topological complexity is reduced overall. The network tries to reduce \(\beta_{0}\) to 1 and the other \(\beta_{j}\) to 0. 2. Smooth activation functions reduce the topological complexity more slowly than non-smooth ones, with ReLU performing the most simplification. 3. ReLU adds a folding to the data space, but not entirely; this is immediate from the definition of ReLU or the absolute value function. 4. More layers make it easier to train the model by "taking its time simplifying the space step by step". ## IV Future work While the possibilities for research in this direction are open-ended, some options for near-term experiments are: ### Different activation functions in different layers For training a model on multiple data sets or for training a dataset with composite architectures, setting a smooth activation function for the initial layers and applying ReLU or Leaky ReLU at the end could transform the data better. The idea behind this is that a smooth activation function preserves the structure in the same dimension, while embedding it in a higher dimension would reduce some complexity with some structure still being preserved. Figure 6: [5] Average change in \(\beta_{0}\) for different activation functions for Dataset I. The dark line is the average whereas the shaded regions are results of multiple simulations. Figure 7: [5] Change in \(\beta_{0}\) and \(\beta_{1}\) for Dataset II for different activation functions. ### Relationship between time complexity and robustness against noise There are results showing that certain linear foldings produced by ReLU are not stable under noise. There seems to be a trade-off between oversimplification and time. It will be interesting to relate the stability of the folding to network architectures with non-ReLU activation functions in some layers as well. Figure 8: [5] Change in \(\beta_{0}\) and \(\beta_{2}\) for Dataset III for different activation functions. Figure 9: [5] Topological complexity for varying network depth for all three datasets. ### Different neural network architectures Different architectures for the same dataset, e.g., a CNN, a simple feed-forward network and a ResNet, should be compared in terms of changes in topological complexity; a sketch of the measurement harness such a comparison could reuse follows.
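As a starting point for such comparisons, here is a rough sketch of the layer-wise Betti measurement described in the Training setup above (the \(\delta_{k}\) graph-geodesic metric fed into persistent homology). It assumes the `ripser`, `scikit-learn` and `scipy` packages; the values of \(k\) and \(\epsilon\), the two-circles point cloud, and the random (untrained) tanh layer are illustrative stand-ins, so only the harness, not the reported trends, is reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph
from ripser import ripser   # pip install ripser scikit-learn scipy

def betti_numbers(X, k=12, eps=2.0, maxdim=1):
    """Betti numbers of the VR complex at scale eps, built on the k-nn
    graph geodesic metric delta_k (minimum hop count), as in the setup."""
    knn = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    D = shortest_path(knn, directed=False, unweighted=True)    # delta_k
    finite_max = D[np.isfinite(D)].max()
    D = np.where(np.isinf(D), 2.0 * finite_max, D)             # disconnected parts
    dgms = ripser(D, distance_matrix=True, maxdim=maxdim)["dgms"]
    return [int(np.sum((d[:, 0] <= eps) & (d[:, 1] > eps))) for d in dgms]

# Two concentric circles: expected beta_0 = 2 and beta_1 = 2.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ring = np.c_[np.cos(t), np.sin(t)]
X = np.vstack([ring, 2.0 * ring])
print(betti_numbers(X))

# Representation after one random tanh layer, a stand-in for a trained
# layer: a comparison study would record this per layer, per activation
# and per architecture.
rng = np.random.default_rng(0)
H = np.tanh(X @ rng.normal(0, 1.0, (2, 8)))
print(betti_numbers(H))
```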
2303.01567
Deep Neural Networks with Efficient Guaranteed Invariances
We address the problem of improving the performance and in particular the sample complexity of deep neural networks by enforcing and guaranteeing invariances to symmetry transformations rather than learning them from data. Group-equivariant convolutions are a popular approach to obtain equivariant representations. The desired corresponding invariance is then imposed using pooling operations. For rotations, it has been shown that using invariant integration instead of pooling further improves the sample complexity. In this contribution, we first expand invariant integration beyond rotations to flips and scale transformations. We then address the problem of incorporating multiple desired invariances into a single network. For this purpose, we propose a multi-stream architecture, where each stream is invariant to a different transformation such that the network can simultaneously benefit from multiple invariances. We demonstrate our approach with successful experiments on Scaled-MNIST, SVHN, CIFAR-10 and STL-10.
Matthias Rath, Alexandru Paul Condurache
2023-03-02T20:44:45Z
http://arxiv.org/abs/2303.01567v1
# Deep Neural Networks with Efficient Guaranteed Invariances ###### Abstract We address the problem of improving the performance and in particular the sample complexity of deep neural networks by enforcing and guaranteeing invariances to symmetry transformations rather than learning them from data. Group-equivariant convolutions are a popular approach to obtain equivariant representations. The desired corresponding invariance is then imposed using pooling operations. For rotations, it has been shown that using invariant integration instead of pooling further improves the sample complexity. In this contribution, we first expand invariant integration beyond rotations to flips and scale transformations. We then address the problem of incorporating multiple desired invariances into a single network. For this purpose, we propose a multi-stream architecture, where each stream is invariant to a different transformation such that the network can simultaneously benefit from multiple invariances. We demonstrate our approach with successful experiments on Scaled-MNIST, SVHN, CIFAR-10 and STL-10. ## 1 Introduction Deep Neural Networks (DNNs) are one of the core drivers of technological progress in various fields such as speech recognition, machine translation, autonomous driving or computer vision (LeCun et al., 2015). At the core of their success lies the ability to process large amounts of data to yield solutions with remarkable generalization properties (Lust and Condurache, 2020b). However, in many practical applications, the data is expensive to collect, store and label. Furthermore, it is rather difficult for humans to understand how such correlation-based methods work, yet such understanding is a necessary first step in optimizing or adapting them to new application domains. Additionally, a certain degree of understanding and trust in the DNN's output is essential, e.g., when safety plays a major role (Lust and Condurache, 2020a). Often, _prior knowledge_ about transformations that modify the desired output in a predictable way is available before training. Leveraging prior knowledge increases the interpretability of DNNs while also improving the sample complexity, hence reducing the amount of data needed to obtain a desired performance. When incorporating this valuable prior knowledge into deep learning architectures, it is advantageous to _guarantee_ the corresponding in- or equivariances. We differentiate between _invariance_, which is the property of a map to yield the same output for a transformed input, and _equivariance_, which is the property of a map to preserve the transformation of the input, such that the output is transformed predictably. A common example of leveraging equivariance in DNNs is the convolutional layer (Fukushima, 1980; LeCun et al., 1990). Convolutional layers are equivariant to translations and can easily be used to generate a translation-invariant representation (e.g. by pooling). For many tasks, we can define a set of _symmetry transformations_ that affect the desired output in a predictable way. For example, in the case of image classification, symmetry transformations of an input object map it within the same image class and thus do not change the desired classification output. Typical examples are rotations, translations and scales. Enforcing invariance to such transformations as an inductive bias decreases the sample complexity by reducing the search space while training a DNN. Moreover, guaranteed invariances contribute towards gaining an intuition on how inference is conducted in the DNN. 
Cohen and Welling (2016) first used _group equivariant convolutions_ (G-Convs) in DNNs to enforce equivariance to transformations such as rotations and flips (Cohen and Welling, 2016; Worrall et al., 2017; Weiler et al., 2018; Weiler and Cesa, 2019) or scales (Xu et al., 2014; Kanazawa et al., 2014; Sosnovik et al., 2020, 2021). For classification DNNs, those equivariant layers are usually followed by a max pooling operation among the group and spatial dimensions to obtain invariant features that are processed by the final classification layers. While max pooling guarantees invariance, it destroys important information that could be leveraged by a classifier and therefore lacks efficiency. Since the transfer from equi- to invariant features has not been extensively investigated, it promises further room for improvement. Invariant Integration (II) is an algorithm to construct a complete feature space with respect to (w.r.t.) a transformation group (Schulz-Mirbach, 1992). So far, II has been used to replace the global spatial pooling operation for rotation-invariant classification DNNs. This resulted in an improved sample complexity by efficiently leveraging the available prior knowledge while adding targeted model capacity (Rath and Condurache, 2020, 2022). However, II has not been extended to other relevant symmetry transformations such as _scales_. Group-equivariant DNNs usually incorporate prior knowledge about a single transformation. It is an open challenge how to proceed when _multiple symmetries_ are involved because it may be impossible to solve the constraints needed to design transformation-steerable filters depending on the involved groups. Even when avoiding the constraints via interpolation methods, simply expanding the regular equivariant G-Convs is computationally inefficient since the representation grows multiplicatively. For example, for a kernel with 8 rotations and 4 scales, we would have to store \(8\cdot 4=32\) responses per kernel. In this contribution, we extend the II framework beyond rotations and efficiently apply it to multiple transformations at once via a multi-stream architecture. Our **core contributions** are: * We adapt rotation-II to also include flips to achieve **invariance** to the 2D Euclidean group **E(2)**. * We **expand II towards scales**, thus covering a larger set of symmetry transformations. * We address the issue of **multiple invariances** within a single architecture that effectively **combines** several streams, each one with **specific invariances** (see Figure 1). This significantly **extends the practical applicability** of **II**. * We **evaluate** our approach on **Scaled-MNIST** and on the **real-world datasets** SVHN, CIFAR-10 and STL-10. On STL-10 using only labeled data, we report new **state-of-the-art** results. ## 2 Related Work ### Group-Equivariant Neural Networks _Group-equivariant convolutional_ layers were proposed by Cohen and Welling (2016) and applied to discrete 90\({}^{\circ}\) rotations and flips by transforming the filters and storing all responses among a _group channel_. This approach uses the _regular_ group-representation. Extensions apply this principle to finer-grained rotations via interpolation (Bekkers et al., 2018; Hoogeboom et al., 2018), rotation-steerable filters (Weiler et al., 2018; Weiler and Cesa, 2019) or by learning all rotated versions of a filter with invariant coefficients (Diaconu and Worrall, 2019). 
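As a concrete toy version of the regular-representation approach (a sketch of ours, not the authors' code; 90° rotations are used because `np.rot90` makes them exact and avoids interpolation artifacts): correlating with all four rotated copies of a filter and stacking the responses along a group channel yields a representation in which an input rotation shows up as a spatial rotation plus a cyclic shift of that channel. Pooling over the group channel therefore commutes with rotating the input.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
x = rng.normal(size=(9, 9))      # toy input image
psi = rng.normal(size=(3, 3))    # one learnable filter

def lift(img, psi):
    """Regular C4 lifting correlation: stack the responses to all four
    90-degree-rotated copies of the filter along a group channel."""
    return np.stack([correlate2d(img, np.rot90(psi, k), mode="same")
                     for k in range(4)])

# Pooling over the group channel gives a rotation-equivariant map:
# rotating the input simply rotates the pooled feature map.
gpool = lambda stack: stack.max(axis=0)
err = np.max(np.abs(gpool(lift(np.rot90(x), psi))
                    - np.rot90(gpool(lift(x, psi)))))
print(err)  # ~0 up to floating point
```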
The maximum response can be stored as the orientation in a vector field (Marcos et al., 2017). This is closely related to the _irreducible representation_ which achieves continuous rotation-equivariance via complex-valued responses (Worrall et al., 2017). In general, it has been proven that G-Convs are the most general equivariant linear map and a necessary condition for equivariant DNNs (Kondor and Trivedi, 2018; Cohen et al., 2019; Esteves, 2020). Besides rotations, in- or equivariance to scale transformations plays a major role in many practical applications. In- or equivariance can again be achieved by sharing filters among different scales using bi-linear interpolation (Xu et al., 2014), scaling the input (Kanazawa et al., 2014) or processing the maximum response and the corresponding scale as a vector field (Marcos et al., 2018). Figure 1: Triple-stream invariant Wide-ResNet16-4 architecture. Includes standard convolutions (grey), rotation-flip-steerable convolutions (E(2), orange), scale-steerable convolutions (blue), invariant integration layers (red), a weighted sum (green) and fully connected layers (purple). Residual shortcut connections are omitted for clarity. Filters for scale-equivariant convolutions can be constructed using scale-steerable filters with log-radial harmonics (Ghosh and Gupta, 2019), Hermite polynomials (Sosnovik et al., 2020), optimized discrete bases (Sosnovik et al., 2021), or with separable Fourier-Bessel bases (Zhu et al., 2019). A scale-equivariant G-Conv operating on scale-spaces was introduced in (Worrall and Welling, 2019). Other work investigates general input domains such as the 3D Euclidean space (Worrall and Brostow, 2018; Weiler et al., 2018; Cesa et al., 2022), spheres (Cohen et al., 2018; Coors et al., 2018; Esteves et al., 2020; Defferrard et al., 2020; Kondor et al., 2018; Jiang et al., 2019; Shakerinava and Ravanbakhsh, 2021), general manifolds (Cohen et al., 2019a; Finzi et al., 2020), or general groups (Bekkers, 2020; Finzi et al., 2021). Moreover, equivariant versions of non-linear maps such as attention and transformers have been introduced (Fuchs et al., 2020, 2021; Romero and Hoogendoorn, 2020; Romero et al., 2020; Romero and Cordonnier, 2021; Hutchinson et al., 2021; He et al., 2021). The non-linearities and sub-sampling layers used in equivariant CNNs have been investigated in (Franzen and Wand, 2021; Xu et al., 2021). ### Invariant Neural Networks When solving tasks that require invariance, group-equivariant DNNs are usually followed by a global (max) pooling layer among the group and spatial dimensions. While max pooling guarantees invariance, it is affected by a loss of information, since all but the maximum value are discarded. Obtaining invariance in a more sophisticated way thus promises to further improve invariant DNNs. One approach to achieve invariance while also ensuring separability is _Invariant Integration_ (II), proposed by Schulz-Mirbach (1992, 1994). II is an algorithm to construct a complete feature space w.r.t. a group, i.e., equivalent patterns are mapped to the same point while distinct patterns are mapped to distinct points. II has been used in combination with conventional machine learning classifiers for image classification (Schulz-Mirbach, 1995), event detection within a cascaded feature extractor to obtain invariance to anthropometric changes (Condurache and Mertins, 2012) or robust speech recognition (Muller and Mertins, 2009, 2010, 2011). 
Rotation-II has been used to replace the global spatial pooling layer within rotation-invariant deep learning architectures (Rath and Condurache, 2020, 2022) and has been shown to further improve the sample complexity of such networks. Puny et al. (2021) solve the group average, which II is based on, for larger, intractable groups by integrating over a subset. They applied their method to motion-invariant point-cloud classification and graph DNNs, integrating over the whole DNN. The invariant integral has also been used to prove that in- and equivariance improve generalization when the target distribution is in- or equivariant (Elesedy and Zaidi, 2021). Whereas Rath and Condurache (2020, 2022) focused on II for the group of rotations, we expand this framework to scales as well as flips (using E(2)). Thereby, we show that the framework can be expanded to general group transformations and generally improves the sample complexity of group-equivariant CNNs in classification tasks. Most related work focuses on single transformation groups. Through our multi-stream architecture, we propose a novel approach that allows our network to learn the best possible combination of invariant features among multiple transformations at once. Another method that combines equivariance to both rotations and scales is the Polar Transformer Network (PTN), which processes inputs in the polar coordinate system (Esteves et al., 2018). However, working in polar coordinates, although advantageous for rotations, may prove to be detrimental to translation equivariance. Indeed, PTNs are by design invariant to translation, but it is not clear how much other relevant information is destroyed. At the same time, Spatial Transformer Networks (Jaderberg et al., 2015) in general cannot offer invariance guarantees, as they rely on the localization network to learn the correct transformation. The invariance is not fully guaranteed but approximated w.r.t. the estimated transformation (STNs) or object center (PTNs). The same is valid for deformable (Dai et al., 2017) and tiled convolutions (Le et al., 2010). ## 3 Theoretical Background ### In- and Equivariance In- and equivariance are mathematical concepts describing the behavior of a map \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) under transformations of the input that can be modeled using the mathematical abstraction of a group. A group is a set \(G\) equipped with a group operation \(\cdot:G\times G\to G\) fulfilling the four group axioms: closure, associativity, identity and invertibility. The map \(f\) is called equivariant w.r.t. \(G\) if left group actions \(L_{g}x\) acting on the input \(x\in\mathbb{R}^{n}\) result in predictable changes \(L_{g^{\prime}}\) of the output \[\forall x\;\forall g\;\exists g^{\prime}\;\text{s.t.}\;f(L_{g}x)=L_{g^{\prime }}f(x), \tag{1}\] where the left group action \(L_{g}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is defined for each group element \(g\in G\). The group elements \(g\) and \(g^{\prime}\) are not necessarily equal, i.e., the transformation of the output can be different from the one applied to the input, but it is predictable. If the output does not change, i.e. \(\forall x\;\forall g,\)\(f(L_{g}x)=f(x)\), the function is called invariant. In the context of CNNs, the network and the layers are maps between feature spaces that can be described by \(f:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{n}\). The effect of input transformations on the feature maps and outputs can thus be studied using Group Theory. 
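Definition (1) can be checked numerically for the standard convolution, where \(g\) is a translation and \(g^{\prime}=g\). A minimal sketch of ours (periodic boundaries are assumed so that the identity is exact; real zero-padded convolutions satisfy it only away from the border):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 16))
psi = rng.normal(size=(3, 3))

conv = lambda img: correlate2d(img, psi, mode="same", boundary="wrap")
shift = lambda img, t: np.roll(img, t, axis=(0, 1))  # left action L_t x(y) = x(y - t)

t = (3, 5)
lhs = conv(shift(x, t))            # f(L_t x)
rhs = shift(conv(x), t)            # L_t f(x): here g' = g, the same shift
print(np.max(np.abs(lhs - rhs)))   # ~0: convolution is translation-equivariant
```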
In the course of the paper, we use left actions on the input \(x\) by lifting the left group action of \(G\) on \(\mathbb{Z}^{2}\) via \(L_{g}x(y)=x(g^{-1}y)\), where \(y\in\mathbb{Z}^{2}\) are the pixel coordinates. The transformed input is equivalent to the original input at the point \(g^{-1}y\) that gets mapped to \(y\) by \(g\). ### Group-Equivariant Convolutions Group-equivariant convolutions are the most general linear map achieving equivariance to a transformation group \(G\) (Kondor and Trivedi, 2018; Cohen et al., 2019b; Esteves, 2020). Cohen and Welling (2016) first used G-Convs in the context of DNNs to learn representations with guaranteed equivariance to group transformations. The continuous G-Conv of two functions \(f\) and \(\psi\) is defined as \[(f\star_{G}\psi)(u)=\int_{g\in G}f(g)\psi(u^{-1}g)d\mu(g), \tag{2}\] where \(d\mu(g)\) is the Haar measure with \(\int_{g\in G}d\mu(g)=1\). The in- and output are defined on the group itself with \(g,u\in G\). The _standard convolution_ is a special case of G-Convs where the elements are given by \(y,t\in\mathbb{Z}^{2}\) and the inverse action \(t^{-1}y\) results in shifts \(y-t\). A DNN is group-equivariant if and only if each of its mappings is equivariant to or commutes with the group (Kondor and Trivedi, 2018). In many cases, the transformation group \(G\) can be split into the translation group T defined on \(\mathbb{Z}^{2}\) and the corresponding quotient group \(H=G/T\). In order to achieve equivariance to \(G\), an \(H\)-equivariant convolution can be applied at all spatial locations of the input using a standard convolution. Different representations can be used to compute and store the equivariant features. _Irreducible_ representations require the minimal possible number of parameters, but often involve complex calculations. For 2D rotations, using irreducible representations results in complex-valued feature spaces where the orientation information is stored via the complex phase. The _regular_ representation of a discrete group stores all responses of transformed filters among an additional channel called _group channel_. To compute regular G-Convs, either the input or the filters need to be transformed for all \(g\in G\). A simple method is to transform the filters using interpolation methods such as bi-linear interpolation. However, this introduces sampling artifacts and consequently weakens the equivariance guarantees. An alternative approach is to use transformation-steerable filters, which were first introduced for rotations (Freeman and Adelson, 1991). While steerable filters introduce a computational overhead compared to simpler interpolation methods, arbitrarily transformed versions can be calculated in closed form and are thus not afflicted by sampling effects. This concept can be effectively used for G-Convs by restricting the learned filters to linear combinations of steerable basis filters. Weiler et al. (2018b) built steerable filters for rotation G-Convs using a Gaussian kernel, which Weiler and Cesa (2019) expanded to the general E(2)-group. Sosnovik et al. (2020, 2021) constructed scale-steerable filter CNNs using 2D Hermite polynomials or a learned discrete basis. We use the state-of-the-art methods for rotation- (E(2)-STCNNs, Weiler and Cesa 2019) and scale-invariant (DISCO, Sosnovik et al. 2021) tasks as our baseline. ### Invariant Representations in DNNs Invariance plays a major role in many DNN applications. 
For the example of classification, input transformations that do not change the desired class output should not change the learned feature space. Group-equivariant DNNs typically use pooling to transfer from equi- to invariant representations. When operating on regular G-Convs, the pooling procedure is two-fold: pooling among the transformation channel creates an equivariant representation where an input transformation induces the same transformation in the feature space; and pooling among the spatial dimension obtains the final invariance. For closed groups, such as rotations and flips, spatial average or max pooling can be used to obtain invariant representations. This is different for the scale group, where average pooling over the spatial dimension does not lead to invariant, but rather homogeneous features, i.e., scaling by \(s\) modifies the output by multiplying with \(s^{2}\). ### Invariant Integration Invariant Integration is an algorithm to construct a complete feature space \(\mathcal{F}\) w.r.t. a transformation group \(G\), introduced in Schulz-Mirbach (1992). A feature space is complete if all equivalent patterns w.r.t. the transformation are mapped to the same point while all distinct patterns are mapped to different points. For this mapping, II uses the group average \(A[f](x)\), which integrates over all possible transformations \(g\in G\) of an input \(x\) processed by a polynomial \(f\) \[A[f](x)=\int_{g\in G}f(L_{g}x)d\mu(g). \tag{3}\] In our case, \(x\) is the output of the final feature map after pooling among the group dimension. For \(f\), Schulz-Mirbach (1994); Schulz-Mirbach (1995) used the set of monomials \(m(x)=\prod_{i=1}^{M}x_{i}^{b_{i}}\) with \(\sum_{i}b_{i}\leq|G|\), defined as a product of individual signal values \(x_{i}\) with exponents \(b_{i}\), which have been shown to be a good choice to maintain a high expressiveness of the invariant features (Noether, 1916). For 2D rotations and translations, II with monomials within a local neighborhood defined by distances \(d_{i}\) results in \[A[\text{m}](x)=\frac{1}{N_{\phi}UV}\sum_{\phi,t}\prod_{i=1}^{M}x[t-L_{\phi}d_{i }]^{b_{i}}. \tag{4}\] Rath and Condurache (2020) applied II on top of equivariant G-Convs within a DNN. The II layer is differentiable if \(x_{i}>0\)\(\forall i\), which allows for an end-to-end optimization of the DNN via backpropagation. To select a meaningful subset of monomials, an iterative algorithm based on the least-squares solution of a linear classifier can be used. However, a pruning-based approach leads to a more streamlined training procedure (Rath and Condurache, 2022). Additionally, selecting the monomials can be avoided by replacing them with alternatives well-known in the deep learning literature such as self-attention, a weighted sum (WS) or a multi-layer perceptron. For rotation-II, the special case using a WS achieves the best performance while being easier to train (Rath and Condurache, 2022). II using a WS with a learnable kernel \(\psi\in\mathbb{R}^{k\times k}\) applied to \(N_{\phi}\) finite rotations \(\phi\in[0^{\circ},\frac{360^{\circ}}{N_{\phi}},\ldots]\) obtained using bi-linear interpolation with input dimensions \(U\times V\) is defined as \[A[\text{WS}](x)=\frac{1}{N_{\phi}UV}\sum_{\phi,t}\sum_{y\in\mathbb{Z}^{2}}x(y) L_{\phi}\psi(y-t). \tag{5}\] 
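A toy numerical reading of Eq. (5), as a sketch rather than the paper's implementation: with \(N_{\phi}=4\), the rotations are exact under `np.rot90` (finer angles would need the bi-linear interpolation mentioned above), and the group-and-space average comes out identical for an input and its rotated copy.

```python
import numpy as np
from scipy.signal import correlate2d

def rotation_ii_ws(x, psi, n_phi=4):
    """A[WS](x), Eq. (5): average the correlation responses to all rotated
    copies of the kernel over both rotations and spatial positions."""
    maps = [correlate2d(x, np.rot90(psi, k), mode="same") for k in range(n_phi)]
    return np.mean(maps)

rng = np.random.default_rng(0)
x = rng.normal(size=(9, 9))     # stand-in for a group-pooled feature map
psi = rng.normal(size=(3, 3))   # learnable kernel
print(rotation_ii_ws(x, psi), rotation_ii_ws(np.rot90(x), psi))  # equal values
```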
## 4 Method In this section, we extend rotation-II using a WS to the E(2)-group involving rotations and flips (Section 4.1). We then show why II cannot be straightforwardly applied to scale transformations and introduce an alternative to obtain scale-invariants based on II (Section 4.2). We then propose a scale-invariant CNN including Scale-II (Section 4.3). Finally, we introduce a multi-stream DNN architecture that efficiently combines invariances to multiple transformations within a single DNN (Section 4.4). ### E(2)-Invariant Integration Equation 5 can straightforwardly be expanded to each discrete subgroup of E(2) (as used by Weiler and Cesa 2019), which contains flips \(L_{f}\psi(y)\) in addition to rotations \[A[\text{WS}](x)=\frac{1}{2N_{\phi}UV}\sum_{f,\phi,t}\sum_{y\in\mathbb{Z}^{2}}x(y )L_{\phi}L_{f}\psi(y-t). \tag{6}\] ### Scale-Invariant Integration In images, objects naturally appear at different scales, e.g., due to variable camera-to-object distances. Hence, DNNs for object classification or detection benefit from invariance to scales. We propose to use II in combination with a scale-equivariant CNN to obtain scale-invariant features. In comparison to the rotation group, discrete scale transformations are not cyclic and are non-invertible due to the loss of information (e.g., during down-scaling). Thus, scales do not satisfy all group axioms and can only be modeled as a semi-group. Schulz-Mirbach (1992) demonstrated that it is impossible to construct invariants by integrating over the scale semi-group while at the same time achieving separability. This prohibits constructing a complete feature space w.r.t. scales using the standard II approach. Nevertheless, the group average w.r.t. translations is a homogeneous function w.r.t. scales when using polynomials (Schulz-Mirbach, 1994). This means that the effect of scaling by \(s\) on the features obtained using translation-II is defined as \(A[f](L_{s}x)=s^{K}A[f](x)\), where the order \(K\) is defined by the polynomial order of \(f\) with the scale operator \(L_{s}[f](y)=f(s^{-1}y)\ \forall s>0\). As shown by Schulz-Mirbach (1994), a complete feature space w.r.t. the scale-translation semi-group \(G_{S}=S\rtimes T\) can be calculated by dividing homogeneous functions of the same order. When using monomials \(m\), one resulting scale-invariant integral is given by dividing monomials of the same order \(\sum b_{1,i}=\sum b_{2,i}\) with \(t\in\mathbb{Z}^{2}\) \[A_{G_{S}}[m](x)=\frac{A_{T}[m_{1}](x)}{A_{T}[m_{2}](x)}=\frac{\sum_{t}\prod_{i }x(t-d_{1,i})^{b_{1,i}}}{\sum_{t}\prod_{i}x(t-d_{2,i})^{b_{2,i}}}. \tag{7}\] The special case of translation-II that combines values within a fixed neighborhood using a learnable WS as function \(f\) results in a standard convolution followed by Average Pooling and is equivalent to a polynomial of order 1. Consequently, we can choose a divisor of the same homogeneous order to obtain a scale-invariant representation and use the mean of the feature map. We thus introduce the WS-based scale-group average \(A_{G_{S}}\) based on Average Pooling over a standard convolution without bias with translations \(L_{t}\), \(y,t\in\mathbb{Z}^{2}\), divided by the mean \[A_{G_{S}}[\text{WS}](x)=\frac{A_{T}[\text{WS}](x)}{\sum_{y}x(y)}=\frac{\sum_{ y}\sum_{t}x(y)\psi(y-t)}{\sum_{y}x(y)}. \tag{8}\] **Proof of Invariance.** Since translation-II and the mean are both homogeneous w.r.t. scales with factor \(s^{2}\), it is easy to see that dividing them leads to a scale-invariant solution \[\frac{A_{T}[f](L_{s}x(y))}{\sum_{y}L_{s}x(y)}=\frac{s^{2}A_{T}[f](x)}{s^{2} \sum_{y}x(y)}=\frac{A_{T}[f](x)}{\sum_{y}x(y)}. \tag{9}\] 
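The homogeneity argument behind Eq. (7) and the proof above can be checked in a toy setting. The sketch below (our own) models \(L_{s}\) as idealized integer upsampling with periodic offsets, so the invariance is exact; real rescaling requires interpolation, under which it holds only approximately. Note that the monomial offset \(d\) must be co-scaled with \(s\):

```python
import numpy as np

def scale_invariant_monomials(x, d):
    """A_{G_S}[m](x), Eq. (7): ratio of two translation-summed monomials of
    equal order, here m1 = x(t)^2 and m2 = x(t) * x(t - d)."""
    m1 = np.sum(x * x)
    m2 = np.sum(x * np.roll(x, d, axis=(0, 1)))   # periodic offset for exactness
    return m1 / m2

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=(8, 8))            # positive values, as II requires
for s in (1, 2, 3):
    x_s = np.kron(x, np.ones((s, s)))             # idealized rescaling L_s
    print(s, scale_invariant_monomials(x_s, d=s)) # same value for every s
```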
### Application to DNNs Inspired by the work on rotations in Rath and Condurache (2020, 2022), we apply E(2)- and Scale-II (Formulas 6-8) on top of the corresponding equivariant features learned using regular G-Convs with steerable filters: E(2)-STCNNs (Weiler and Cesa, 2019) and DISCO (Sosnovik et al., 2021). Those features are processed via max pooling among the group dimension. II then replaces the spatial pooling operation to obtain invariant features that are in turn processed by the final classification layers. For E(2)-II, we use the WS approach. For Scale-II, we investigate both proposed variants. For Scale-II with monomials, we follow Rath and Condurache (2022) and randomly select monomials that are iteratively pruned during training to find the most relevant ones. We ensure the same monomial order between dividend and divisor by normalizing the divisor's exponents with \(\frac{\sum_{i}b_{1,i}}{\sum_{i}b_{2,i}}\). Additionally, \(x_{i}>0\) ensures a differentiable solution. For Scale-II with WS, each convolution depends on all input channels. Consequently, we divide by the mean over all input channels for the WS-based II. Moreover, it is important that \(\sum_{y}x(y)>0\). Hence, we apply both II variants to the output of a ReLU layer, lower-bounded by a small value \(\epsilon>0\). ### Multi-Stream Invariance To obtain features with guaranteed invariance to multiple transformations, we implement a DNN architecture with multiple streams. Each stream is invariant to a dedicated transformation enforced using G-Convs and II (see Figure 1). While multiple transformations could in theory be embedded into a single network, the regular representations the network needs to process would grow with \(\mathcal{O}(N\cdot M)\) for group sizes \(N\) and \(M\). In contrast, using a dedicated stream per transformation only increases the number of representations by \(\mathcal{O}(N+M)\ll\mathcal{O}(N\cdot M)\). The input is processed separately by each transformation-invariant stream using G-Convs with steerable filters. II is applied on top of the learned scale- and E(2)-equivariant features to obtain invariant representations. For the standard convolution stream (std.) we use average pooling. Thus, we have \(\mathbf{x}_{j}\in\mathbb{R}^{C_{j}}\) with \(j\in\{\text{e2},\text{scale},\text{std}\}\). We combine two or three streams, each one invariant to either rotations and flips, scales or translations (std.), using two steps. First, we map all features to the same dimension using a linear map \(\mathbf{\tilde{x}}_{j}=\mathbf{W}_{j}\mathbf{x}_{j}\) with \(\mathbf{W}_{j}\in\mathbb{R}^{C_{\text{map}}\times C_{j}}\). We then combine these streams via a normalized learnable WS, e.g. for the case of three streams \(\mathbf{x}_{\text{combined}}=\mathbf{w}_{\text{e2}}\circ\mathbf{\tilde{x}}_{ \text{e2}}+\mathbf{w}_{\text{scale}}\circ\mathbf{\tilde{x}}_{\text{scale}}+ \mathbf{w}_{\text{std}}\circ\mathbf{\tilde{x}}_{\text{std}}\) with \(\mathbf{w}_{j}\in\mathbb{R}^{C_{\text{map}}}\) initialized to 1 and normalized s.t. \((\mathbf{w}_{\text{e2}}+\mathbf{w}_{\text{scale}}+\mathbf{w}_{\text{std}})_{i}=1\) with \(i=1,\dots,C_{\text{map}}\) and the Hadamard product \(\circ\), inspired by the learnable channel-wise scaling used in BatchNorm layers (Ioffe and Szegedy, 2015). This approach combines the invariant features with explicitly learned factors s.t. the network can learn which invariances are most relevant for the task at hand. 
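A minimal sketch of this combination head in PyTorch (our own illustrative code, not the authors' implementation; normalizing the raw weights by their per-channel sum assumes that sum stays away from zero during training):

```python
import torch
import torch.nn as nn

class MultiStreamHead(nn.Module):
    """Maps each stream to C_map = C_e2 and fuses them with a channel-wise
    weighted sum whose weights are normalized to sum to 1 per channel."""

    def __init__(self, c_e2, c_scale, c_std, n_classes):
        super().__init__()
        self.map_scale = nn.Linear(c_scale, c_e2, bias=False)  # W_scale
        self.map_std = nn.Linear(c_std, c_e2, bias=False)      # W_std
        # W_e2 is kept fixed as the identity; the w_j are initialized to 1.
        self.w = nn.Parameter(torch.ones(3, c_e2))
        self.classifier = nn.Linear(c_e2, n_classes)           # w_out

    def forward(self, x_e2, x_scale, x_std):
        streams = torch.stack(
            [x_e2, self.map_scale(x_scale), self.map_std(x_std)], dim=0)
        # Normalize so that (w_e2 + w_scale + w_std)_i = 1 for every channel i.
        w = self.w / self.w.sum(dim=0, keepdim=True)
        combined = (w.unsqueeze(1) * streams).sum(dim=0)       # Hadamard + sum
        return self.classifier(combined)

head = MultiStreamHead(c_e2=256, c_scale=128, c_std=64, n_classes=10)
logits = head(torch.randn(8, 256), torch.randn(8, 128), torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 10])
```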
Each stream is pre-trained individually. We then combine the architecture, freeze all convolutional weights and only train the linear maps \(\mathbf{W}_{j}\), the WS \(\mathbf{w}_{j}\) and the classification weights \(\mathbf{w}_{\text{out}}\in\mathbb{R}^{C_{\text{map}}\times n_{c}}\) with \(n_{c}\) classes. The combination head worked best when mapping the other streams to the E(2)-output, i.e., \(C_{\text{map}}=C_{\text{e2}}\), and keeping \(\mathbf{W}_{\text{e2}}\) fixed as the identity. Appendix B.1 provides results with different combinations, e.g., mapping all streams or concatenation. Our dedicated training procedure allows fine-tuning each stream individually, which provides a good initialization point for the combination head and further improves the sample complexity compared to full end-to-end training (cf. Appendix B.2). While this causes an overhead at train time, all operations can easily be fused into a single network at inference time. ## 5 Experiments & Discussion The proposed scale-II algorithm is evaluated on Scaled-MNIST (Sohn and Lee, 2012) and the invariant multi-stream networks on SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011). We use the full dataset and limited subsets with \(N_{t}\) samples to assess the sample complexity. The subsets are sampled with constant class balance and are the same for all variants. For Scaled-MNIST, we use \(N_{t}\in\{10,50,100,500,1k,5k,10k,12k\}\). For SVHN and CIFAR-10 we use \(N_{t}\in\{100,500,1k,5k,10k,50k\}\). For SVHN, CIFAR-10 and STL-10 we use Wide-ResNets (WRNs, Zagoruyko and Komodakis 2016) as backbone. We use the respective _standard data augmentations_: scales for Scaled-MNIST, no augmentations for SVHN, shifts and crops for CIFAR-10, and shifts, crops and Cutout (Devries and Taylor, 2017) for STL-10. For all single-streams, the number of trainable parameters is constant. We report results for the multi-stream architectures with both full streams and a constant number of parameters. We optimize all hyper-parameters (HPs) using an 80:20 validation split and Bayesian Optimization with Hyperband (BOHB, Falkner et al. 2018). For all II-WS-layers, we use \(k=3\) and a constant number of in- and output channels. For II with monomials, we use an iterative pruning-based selection with \(n_{m}=\{25,12,5\}\) monomial pairs after 0, 5 and 10 epochs, following Rath and Condurache (2022). The exact HPs, optimization settings and network architectures can be found in Appendix C. If not mentioned otherwise, we report the mean and standard deviation over three runs. ### Evaluating Scale-Invariant Integration We evaluate our scale-II layer on the Scaled-MNIST dataset, which consists of hand-written digits artificially scaled with factor \(s\in[0.3,1]\). We use the architecture from Sosnovik et al. (2020) (SES-CNN) and Sosnovik et al. (2021) (DISCO), built of three convolutional and two dense layers using \(n_{S}=4\) scales, and replace the final global pooling layer with the scale-II layer. Figure 2: Log. Test Error (TE) on Scaled-MNIST subsets. Following Sosnovik et al. (2020, 2021), we compare the invariance error of the scale-II layers to the methods they use to obtain scale-invariant representations: a mixed pooling approach including average and max pooling for Scaled-MNIST and average pooling for STL-10. The invariance error is a simplified version of the equivariance error, given as \(\Delta=\frac{1}{S}\sum_{s}\frac{\|\psi(x)-\psi(L_{s}x)\|^{2}_{2}}{\|\psi(x)\|^{2}_{2}}\). 
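The invariance error might be computed as in the following sketch (our own code); the toy usage models rescaling as idealized integer upsampling, an assumption made for exactness, and contrasts a homogeneous read-out (global sum) with a normalized one (global mean):

```python
import numpy as np

def invariance_error(psi, x, scales, rescale):
    """Delta = (1/S) * sum_s ||psi(x) - psi(L_s x)||_2^2 / ||psi(x)||_2^2."""
    ref = psi(x)
    return np.mean([np.sum((ref - psi(rescale(x, s))) ** 2) / np.sum(ref ** 2)
                    for s in scales])

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=(16, 16))
upsample = lambda img, s: np.kron(img, np.ones((s, s)))  # idealized L_s

psi_sum = lambda img: np.array([img.sum()])    # homogeneous: grows with s^2
psi_mean = lambda img: np.array([img.mean()])  # normalized: unaffected here

print(invariance_error(psi_sum, x, scales=[2, 3], rescale=upsample))   # large
print(invariance_error(psi_mean, x, scales=[2, 3], rescale=upsample))  # 0.0
```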
We compute \(\Delta\) when directly processing the input, i.e., \(\psi\) is the II- or pooling-layer, and when \(\psi\) is the randomly initialized CNN including the respective layer, using 100 samples from Scaled-MNIST scaled with \(s\in[0.5,0.55,...,1.0]\). The results in Table 1 show that scale-II guarantees invariance as opposed to the pooling approaches. While Scale-II with WS achieves a better error directly on the input, the monomial variant is slightly better when applied within the DNN. We then evaluate the performance of our scale-II layer on Scaled-MNIST for classification. Here, scale-invariance is paramount to obtain correct results because the test set contains more variability than the training set, thus benefiting scale-invariant algorithms. Table 2 shows the results on the full dataset and Figure 2 those with limited training data. We report the mean and standard deviation over the six pre-defined data splits. II outperforms the pooling approach on full data and in the limited sample regime, highlighting the improved sample complexity of our approach. In summary, the mixed pooling approach used by Sosnovik et al. (2021) does not guarantee scale-invariance, which leads to decreased performance. II allows for invariance guarantees, and the WS-variant is easier to optimize than the monomials-variant. Hence, II with a WS outperforms the latter in the limited data domain and is used for all further experiments. ### Multi-Stream Digit Classification The Street View House Number (SVHN) dataset contains single digits taken from house numbers. SVHN includes digits with different colors, font types, orientations and backgrounds and is thus harder to solve than MNIST. For all experiments on SVHN, we use the core training data, a WRN16-4 architecture, \(n_{r}=8\) rotations for all E(2)-G-Convs and \(n_{S}=3\) scales for the scale stream. The II step is calculated using the same number of rotations and flips. The results on the full SVHN dataset are shown in Table 3, while Figure 3 depicts the results using limited training data. Using II instead of pooling improves the accuracy for all invariant architectures, showing that II better preserves the information when transferring to invariance. Both invariant single-stream networks outperform the std. CNN in the limited and full data domain. This indicates that invariance to rotations and flips as well as scales is valuable information the classifier needs to learn during training, and the training data does not cover enough variability w.r.t. rotations and scales for the baseline to learn this information. The rotation- and flip-invariant DNN achieves a better sample complexity than the scale-invariant one, indicating that rotation-invariance is more valuable for this dataset. The multi-stream networks significantly outperform all single-stream variants including the baseline in all data regimes. The multi-stream architecture learns meaningful combinations of the generated invariants and is able to automatically choose the invariances that best fit the training data at hand. This yields the best performance across all dataset sizes. In addition, for the rather simple task of classifying numbers, combining only a rotation- and a scale-invariant stream outperforms a standard CNN and leads to almost the same performance as with additional std. convolutions. In this problem setup, rotation- and scale-invariances seem sufficient for optimal performance and are able to recover all necessary object invariances. 
We conjecture that the learned features in the network's layers within the invariant streams focus on global invariances like illumination and noise, while the dedicated II layer handles specific object invariances, e.g. to scales, similar to the scattering transformation (Oyallon et al., 2019). \begin{table} \begin{tabular}{c c c} \hline \hline Layer & Input & CNN \\ \hline Average Pooling & 0.222 & 0.090 \\ Mixed Pooling & 0.017 & 0.039 \\ **Scale-II** Monomials & 1.24e-4 & **2.64e-7** \\ **Scale-II** WS & **2.97e-9** & 5.94e-3 \\ \hline \hline \end{tabular} \end{table} Table 1: Invariance Error \(\Delta\) when obtaining a scale-invariant representation using the respective layer directly on the input and a randomly initialized scale-equivariant network. \begin{table} \begin{tabular}{c c c} \hline \hline Method & Scale-II & TE [\%] \\ \hline CNN & - & 1.60 \(\pm\) 0.09 \\ SES-CNN & - & 1.42 \(\pm\) 0.07 \\ DISCO & - & 1.35 \(\pm\) 0.05 \\ **II-DISCO** & Monomials & **1.30 \(\pm\)** 0.06 \\ **II-DISCO** & WS & **1.30 \(\pm\)** 0.02 \\ \hline \hline \end{tabular} \end{table} Table 2: Test Error (TE) on Scaled-MNIST using a CNN with 5 layers, data augmentation with random scales and an upsampling layer to double the input size. ### Multi-Stream Object Classification Finally, we evaluate our proposed architecture for the more complex task of object classification on CIFAR-10 and STL-10. Both datasets contain RGB images of ten different object classes. STL-10 is a subset of ImageNet containing 5k training images that is commonly used as a benchmark for the limited data performance of object classification networks. STL-10 is more challenging than CIFAR-10 since it contains bigger and more diverse images. For CIFAR-10, we use WRN28-10 and the E(2)-stream with 8, 8 and 4 rotations per residual block. For STL-10, we use WRN16-8 and the E(2)-architecture with 8, 4 and 1 rotation per residual block (Weiler and Cesa, 2019). The E(2)-II-layer is used for 4-rotations and flips or flips-only, respectively. For both datasets, the scale-stream uses 3 scales (Sosnovik et al., 2021). The STL-10 and CIFAR-10 results on full data are shown in Table 3, CIFAR-10 results on limited data in Figure 4. For a fair comparison on STL-10, we adapted the official implementation of Sosnovik et al. (2021), which uses a stride of 1 in the initial convolutional layer, to use stride 2 as in Devries and Taylor (2017); Weiler and Cesa (2019). On CIFAR-10, we again demonstrate an increased sample efficiency of the invariant streams leading to superior performance for small dataset sizes (Figure 4). While the E(2)-invariant network is able to outperform the std. baseline in the full data regime, the scale-invariant network achieves subpar performance. We interpret the latter as a sign of less variance along the scale mode in this problem setup due to the rather small images contained in CIFAR-10. Thus, other invariances are more important to decide for the correct class. The scale-invariant stream seems to be too restrictive in the sense that it is unable to learn the full set of object invariants that the baseline architecture leverages to classify the objects. On the full data, II improves the performance for the scale-invariant architecture. However, the performance on the E(2)-invariant architecture is only on par. Nevertheless, Rath and Condurache (2022) show that a rotation-invariant architecture using II outperforms the one with pooling in limited data regimes even when achieving slightly worse results on full data. 
We further investigate the advantages of II in Section 5.4. Our combined network is able to achieve the best results for all data regimes by learning to combine the best information at each dataset size. The triple stream outperforms the dual variant, which indicates that the std. stream is able to capture important additional object invariances that are neglected by the restricted, invariant streams. \begin{table} \begin{tabular}{c c c} \hline \hline \multicolumn{2}{c}{II} & \\ \hline E(2) & Scale & TE [\%] \\ \hline x & x & 7.51 \(\pm\) 0.12 \\ x & ✓ & 7.35 \(\pm\) 0.09 \\ ✓ & x & 6.34 \(\pm\) 0.07 \\ ✓ & ✓ & **5.90**\(\pm\) 0.05 \\ \hline \hline \end{tabular} \end{table} Table 4: Role of the II layer for our triple-stream network on STL-10. An 'x' marks an invariant stream without II. Figure 4: TE on subsets of CIFAR-10 with full streams. Figure 3: Log. TE on subsets of SVHN with full streams. \begin{table} \begin{tabular}{c c c c c c} \hline \hline II & Streams & Invariance & SVHN [\%] & C-10 [\%] & STL-10 [\%] \\ \hline & PTN & Rot. \& Scale & 2.97 & 6.72 & - \\ x & Single & Std. & 2.93 \(\pm\) 0.02 & 3.89 \(\pm\) 0.08 & 12.02 \(\pm\) 0.05 \\ x & Single & E(2) & 2.64 \(\pm\) 0.05 & 2.91 \(\pm\) 0.13 & 9.80 \(\pm\) 0.40 \\ x & Single & Scale & 2.71 \(\pm\) 0.01 & 4.04 \(\pm\) 0.03 & 8.07 \(\pm\) 0.08 \\ \hline ✓ & Single & E(2) & 2.36 \(\pm\) 0.05 & 2.95 \(\pm\) 0.04 & 7.67 \(\pm\) 0.07 \\ ✓ & Single & Scale & 2.54 \(\pm\) 0.04 & 3.91 \(\pm\) 0.12 & 7.92 \(\pm\) 0.09 \\ ✓ & Dual\({}^{\star}\) & Scale \& E(2) & 2.20 \(\pm\) 0.05 & 2.96 \(\pm\) 0.10 & 6.38 \(\pm\) 0.15 \\ ✓ & Dual & Scale \& E(2) & 2.12 \(\pm\) 0.11 & 2.75 \(\pm\) 0.04 & 5.95 \(\pm\) 0.11 \\ ✓ & Triple\({}^{\star}\) & Scale, E(2) \& Std. & 2.29 \(\pm\) 0.03 & 2.74 \(\pm\) 0.08 & 6.46 \(\pm\) 0.08 \\ ✓ & Triple & Scale, E(2) \& Std. & **2.10**\(\pm\) 0.07 & **2.68**\(\pm\) 0.03 & **5.90**\(\pm\) 0.05 \\ \hline \hline \end{tabular} \end{table} Table 3: TE on SVHN, CIFAR-10 and STL-10. WRN16-4, WRN28-10 and WRN16-8 are used as baseline architectures. \({}^{\star}\) indicates a constant number of parameters. On STL-10, the E(2)-II network significantly outperforms its counterpart without II. Scale-II also slightly improves upon the baseline, and II is clearly beneficial when combining multiple streams (see Table 4). Our multi-stream network achieves a new state-of-the-art result (Table 3), even with a constant number of parameters. This shows that incorporating prior knowledge about multiple transformations improves the performance of classification DNNs in the limited data domain, even for complex real-world datasets. Additionally, the results show that the features learned by each stream are complementary, preserved by our II layers and effectively combined by our proposed multi-stream head. The multi-stream architecture successfully improves the sample complexity even while raising the number of parameters, hence increasing generalization. ### Ablation Studies Multi-Stream: Number of Parameters. In addition to the full multi-stream architecture, we report the performance when keeping the number of parameters constant. Therefore, we shrink each stream by a factor of 2 or 3 for the dual or triple-stream network, respectively. The results are shown in Table 3, marked with \({}^{\star}\). Limited data results for SVHN and CIFAR-10 are shown in Appendix A. Our approach still achieves state-of-the-art performance on all datasets and in all data domains. Combining multiple streams is beneficial, even with a constant number of parameters. 
The dual-stream performs slightly better than the triple-stream. We believe this occurs because the individual streams of the triple-stream architecture are too thin, particularly in the early layers. Furthermore, we train a triple-stream architecture consisting of three standard streams on STL-10. We achieve a test error (TE) of 10.29%, which is worse than the single-stream networks with invariance. This shows that the enforced invariance plays a key role for our multi-stream networks. Multi-Stream: Importance of Invariant Integration. To quantify and demonstrate the importance of the II layer, we compare our multi-stream architecture including II to variants without II on STL-10. This includes a multi-stream architecture where only pooling is used. The results in Table 4 demonstrate that on a more complex classification task, in the low-data regime (i.e., when the training data does not properly cover all variability present in the test data), the multi-stream approach works best when both invariant streams use II rather than pooling. Training a standard WRN augmented with random 90\({}^{\circ}\) rotations and scales \(s\in[0.25,1]\) on STL-10 achieves a TE of 21.80%, which is clearly detrimental compared to the performance without those augmentations (12.08%). Hence, layer-wise, guaranteed in- and equivariance play a key role in the improved sample complexity of our approach. ## 6 Conclusion In this contribution, we expanded II to scale transformations and showed its effectiveness on Scaled-MNIST. Since Scale-II using a WS is easier to optimize than the monomial variant, we applied it in a multi-stream DNN which, besides scales, includes a standard convolutional and a rotation-and-flip-invariant stream. This multi-stream DNN covers a variety of practically interesting use cases, as shown by an improved sample complexity on SVHN and CIFAR-10 and new state-of-the-art results on STL-10 using only labeled data. We impose invariance to scales and to rotations-and-flips in dedicated streams that also learn other global invariances, and cover the remaining object invariances with a standard convolutional stream. This guarantees multiple invariances without suffering from the multiplicative increase when directly combining the groups. Our framework is designed to leverage and honor prior knowledge. It is therefore focused on invariance guarantees, which may be rather restrictive in some cases. Specifically, invariance guarantees improve the sample complexity of DNNs, leading to a performance boost when training data is limited. In the large data domain, Vision Transformers (Dosovitskiy et al., 2021) with fewer geometrical constraints outperform conventional CNNs. Hence, our experiments focus on small-scale datasets. II is a general method that can be expanded beyond rotations and scales, but is restricted to transformations that can be modeled as (semi-)groups. Furthermore, we require an equivariant backbone before transferring to invariance. Nevertheless, the multi-stream network can in theory be enhanced with streams that achieve invariance without using II or G-Convs. In the future, it would be interesting to apply II to tasks where equivariance is helpful, e.g. to infer the pose in object detection. II could still be used for the parts of the network that benefit from invariance. On CIFAR-10 and SVHN, more sophisticated architectures and training methods than WRNs achieve better performance (Foret et al., 2021; Lim et al., 2019). For a fair comparison, we stuck to WRNs in our experiments. 
Nevertheless, guaranteed invariances can generally be applied to many architectures in order to improve their sample complexity. ## Acknowledgments We would like to thank the reviewers and our colleagues Lukas Enderich, Julia Lust and Paul Wimmer for their valuable remarks and contributions.
2303.02708
Tac-VGNN: A Voronoi Graph Neural Network for Pose-Based Tactile Servoing
Tactile pose estimation and tactile servoing are fundamental capabilities of robot touch. Reliable and precise pose estimation can be provided by applying deep learning models to high-resolution optical tactile sensors. Given the recent successes of Graph Neural Networks (GNNs) and the effectiveness of Voronoi features, we developed a Tactile Voronoi Graph Neural Network (Tac-VGNN) to achieve reliable pose-based tactile servoing relying on a biomimetic optical tactile sensor (TacTip). The GNN is well suited to modeling the distribution relationship between shear motions of the tactile markers, while the Voronoi diagram supplements this with area-based tactile features related to contact depth. The experimental results showed that the Tac-VGNN model can help enhance data interpretability during graph generation and improve model training efficiency significantly compared with CNN-based methods. It also improved pose estimation accuracy along the vertical depth by 28.57% over a vanilla GNN without Voronoi features and achieved better performance on real surface-following tasks with smoother robot control trajectories. For more project details, please view our website: https://sites.google.com/view/tac-vgnn/home
Wen Fan, Max Yang, Yifan Xing, Nathan F. Lepora, Dandan Zhang
2023-03-05T16:18:00Z
http://arxiv.org/abs/2303.02708v1
# Tac-VGNN: A Voronoi Graph Neural Network for Pose-Based Tactile Servoing ###### Abstract Tactile pose estimation and tactile servoing are fundamental capabilities of robot touch. Reliable and precise pose estimation can be provided by applying deep learning models to high-resolution optical tactile sensors. Given the recent successes of Graph Neural Networks (GNNs) and the effectiveness of Voronoi features, we developed a Tactile Voronoi Graph Neural Network (Tac-VGNN) to achieve reliable pose-based tactile servoing relying on a biomimetic optical tactile sensor (TacTip). The GNN is well suited to modeling the distribution relationship between shear motions of the tactile markers, while the Voronoi diagram supplements this with area-based tactile features related to contact depth. The experimental results showed that the Tac-VGNN model can help enhance data interpretability during graph generation and improve model training efficiency significantly compared with CNN-based methods. It also improved pose estimation accuracy along the vertical depth by 28.57% over a vanilla GNN without Voronoi features and achieved better performance on real surface-following tasks with smoother robot control trajectories. For more project details, please view our website: [https://sites.google.com/view/tac-vgnn/home](https://sites.google.com/view/tac-vgnn/home) ## I Introduction Tactile perception is needed for robots to understand the objects they manipulate and the surrounding environment they interact with [1]. Similar to visual servoing, where a robot controls the pose of a camera relative to features of the object image, tactile servoing likewise changes the pose of a tactile sensor in physical contact with an object based on touch information [2]. For example, Fig. 1(a) shows a tactile robotic system comprising a robot arm (Dobot MG400) and a tactile sensor (TacTip) for surface following, which can be considered a tactile servoing task. During such a process, the contact between the tactile sensor and the object changes continuously, allowing the surface of the unknown target to be explored. Emerging high-resolution optical tactile sensors have been developed and integrated into robotic systems for precise tactile perception, such as GelForce [3], GelSight [4], GelTip [5], GelSlim [6], etc. However, their capabilities of sliding over surfaces while maintaining contact have yet to be tested. The most significant difference among optical tactile sensors is their material properties and construction. GelSight-type sensors have flat sensing surfaces composed of a molded elastomer. Although highly effective at imaging fine surface details, they are less suited for sliding over surfaces because of the relatively flat and stiff elastomer [7]. Among existing low-cost optical tactile sensors, a marker-based optical tactile sensor (BRL TacTip [8]) has shown effectiveness in tactile servoing tasks [7]. The TacTip has a flexible 3D-printed skin and compliance from a soft gel, and is therefore more practical for the tactile servoing task since it has a greater tolerance for safe contact. The pins within the TacTip skin are biomimetic, mimicking the dermal papillae in human skin. The key features of contact information can be extracted through computer vision techniques, where deep learning-based methods are playing an increasingly significant role. 
Convolutional Neural Networks (CNNs) have been widely used to discover latent tactile features for tactile pose estimation [9], and have been proven to be effective for pose-based tactile servoing tasks [7, 10]. However, CNN-based approaches lack interpretability [11], since they extract pixel-level features using fixed-size filters and fail to provide explicit information generated from tactile signals. Recently, Graph Neural Networks (GNNs) have gained increasing popularity due to their expressive capabilities and better applicability to non-Euclidean data whose structures are irregular and changeable [12, 13]. The implicit relationship between different markers caused by touch could be well modeled by a GNN through feature-aggregating operations [14].
Fig. 1: Overview of the experimental setup and the Voronoi graph for tactile servoing. (a) shows the surface following task conducted by a low-cost desktop robot arm (Dobot MG400) and a tactile sensor (TacTip). (b) displays the raw image from the TacTip sensor. (c) illustrates the concept of ‘tessellation’. (d) shows that the changes of the Voronoi-enhanced graph data have strong explainability along the X, Y, and Z dimensions.
If the pins are regarded as vertices, the location of each pin and the relative distance between any two pins will vary with the skin deformation. This principle is highly similar to the definition of node and edge in a graph, which is a non-linear data structure. Considering that a GNN has a higher efficiency of processing unstructured data, here we explore GNN-based architectures, aiming to construct tactile pose estimation models for pose-based tactile servoing tasks. The initial step of applying GNNs to the TacTip for pose estimation comes from graph construction. The K-Nearest Neighbors (kNN) and Minimum Spanning Tree (MST) have been applied to construct graph data [15, 16]. However, the methods mentioned above are computationally expensive, and thus not applicable to real-time tactile servoing tasks. Meanwhile, Delaunay triangulation has been used to calculate triangulation for a given set of discrete points, which has been demonstrated to have higher robustness compared with kNN [17]. Therefore, Delaunay triangulation will be used to support the graph construction of tactile images in this paper. Moreover, inspired by the study of [18, 19], the cells generated by the Voronoi tessellation can provide a surrogate of depth information for TacTip sensors. The area change in each cell region should be related to local tactile sensor compression. Therefore, we expect that Voronoi features can be used to enrich the information of graph data converted from tactile images. To this end, we define each pin on the TacTip as a node and integrate the pin's position and Voronoi feature as the overall features for the node. The constructed graphs from tactile images are then fed to a GNN model for pose estimation, paving the way for the implementation of tactile servoing. We call this proposed method the Tac-VGNN model, which has advantages over traditional CNN and GNN models in terms of efficiency, interpretability and generalizability. The **key contributions** of this paper are listed as follows. * A novel Voronoi Graph representation is designed for processing the tactile information of marker-based optical tactile sensors. * A Tac-VGNN model is developed for tactile pose estimation, and its interpretability and efficiency were evaluated on a surface-following task.
## II Methodology ### _Tactile Graph Generation_ The BRL TacTip can be fabricated at low cost using multi-material 3D printing technology, and customized to different biomimetic morphologies by changing the specifications of sensor diameters, radial depths of skin, and numbers of pins. Two TacTips with different morphologies are selected for experiments in this paper. One TacTip has a hexagonal layout with 127 pins (denoted as 'Hexagonal 127'), while the other has a round layout with 331 pins (denoted as 'Round 331'). After data collection, the raw tactile images obtained from the TacTip are first converted to grayscale images (see Fig. 2(a)), then erosion and mask filtering are used for noise removal (see Fig. 2(b)). Subsequently, the positions of pins can be identified via blob recognition (see Fig. 2(c)). After image preprocessing, two approaches are explored for the tactile graph generation. The kNN approach can be used to build edges between each node and its adjacent nodes; however, redundant edges may be generated for the nodes in the outermost circle (see Fig. 3(a)), which may affect the aggregation performance of the GNN model. Here, Delaunay triangulation is used to build edges instead of kNN to improve graph building, which generates disjoint triangles for a set of discrete points, as shown in Fig. 3(b). Details of the tactile graph generation are given in **Algorithm 1** from steps 1-5, which build graph nodes \(V_{i}\) and graph edges \(E_{j}\). ### _Voronoi Feature Generation_ Significant information about the contact distribution and depth of deformation is contained within the shear displacement of pins/markers on the 2D tactile image. The Voronoi tessellation [20] can help extract this valuable information that is highly relevant to the deformation of the sensing surface. **Algorithm 1** steps 6-21 describe the details of this procedure. The enlarged boundary \(V_{bound}\) is built surrounding \(V_{i}\) to facilitate the generation of the Voronoi vertices \(V_{vertices}\), where non-convex sets, such as Hexagonal 127, need virtual nodes \(V_{virtual}\) to assist.
Fig. 2: Tactile data preprocessing: (a) the cropped grayscale tactile image; (b) application of a circular mask of suitable size to remove artifacts from lighting; (c) pin positions extracted by blob detection.
Fig. 3: Different graph representations: (a) kNN graph with redundant edges; (b) Delaunay graph; (c) Voronoi graph.
The area of each Voronoi tessellation \(S_{i}\) is matched to the location of \(V_{i}\), then combined to form the new node feature \(X_{i}\) = (\(V_{i}\), \(S_{i}\)), which relates to the pins' 3D information (\(x_{i}\), \(y_{i}\), \(z_{i}\)). Then the Voronoi graph can be defined as \(G(X_{i},E_{j})\), as shown in Fig. 3(c), providing valuable input for GNN models of sensor pose. Fig. 4 illustrates the 3D plots of the Voronoi graph after interpolation. When deformation occurs, the pin density at the contact centre decreases (Fig. 4(b)). The difference between two Voronoi graphs can reveal the contact location (Fig. 4(c)).
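Algorithm 1 below gives the paper's pseudocode for this procedure. As a concrete companion, a minimal Python sketch using scipy's Delaunay and Voronoi routines is added here; the function name, the boundary handling via a scaled convex hull, and the parameter values are our own simplifications, not code from the paper.
```python
# A minimal sketch of Voronoi graph generation, assuming pin positions have
# already been extracted by blob detection (steps 1-3 of Algorithm 1).
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, Voronoi

def voronoi_graph(pins, boundary_scale=1.2):
    """pins: (N, 2) array of pin positions; returns node features and edges."""
    # Graph edges from Delaunay triangulation (step 5): each triangle
    # contributes its three sides as undirected edges.
    tri = Delaunay(pins)
    edges = {tuple(sorted((s[a], s[b])))
             for s in tri.simplices for a, b in ((0, 1), (1, 2), (0, 2))}
    edges = np.array(sorted(edges))

    # Enlarged boundary ring (step 14) so every real pin gets a bounded cell.
    center = pins.mean(axis=0)
    ring = center + boundary_scale * (pins[ConvexHull(pins).vertices] - center)
    vor = Voronoi(np.vstack([pins, ring]))

    # Tessellation area S_i for each real pin (steps 16-19, shoelace formula).
    areas = np.zeros(len(pins))
    for i in range(len(pins)):
        region = vor.regions[vor.point_region[i]]
        if -1 in region or len(region) == 0:   # skip unbounded cells
            continue
        x, y = vor.vertices[region, 0], vor.vertices[region, 1]
        areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    # Node features X_i = (x_i, y_i, S_i) and edge list (steps 20-21).
    return np.hstack([pins, areas[:, None]]), edges
```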
```
Input: \(I_{gray}\) // grayscale image after cropping and resizing
1  \(M_{circle}=255\) ; // binarized mask
2  \(I_{processed}\) = AND(\(I_{gray}\), \(M_{circle}\)) ; // remove noise
3  \(x_{i}\), \(y_{i}\) = Blob(\(I_{processed}\)) ; // extracted pin positions
4  \(V_{i}\) = (\(x_{i}\), \(y_{i}\)) ; // graph nodes
5  \(E_{j}\) = Delaunay(\(V_{i}\)) ; // graph edges
6  \(V_{outmost}\) = Convex(\(V_{i}\)) ; // outmost nodes
7  if \(V_{i}\) is not Convex then
8    \(V_{virtual}\) = Rotate(\(V_{outmost}\)) ; // virtual nodes
9    \(V_{convex}\) = \(V_{outmost}\) + \(V_{virtual}\) ; // convex nodes
10   \(E_{convex}\) = Delaunay(\(V_{convex}\)) ; // convex edges
11   \(E_{j}\) = \(E_{convex}\)(index(\(V_{i}\))) ; // filtered edges
12   \(V_{outmost}\) = \(V_{convex}\) ; // new outmost nodes
13 end if
14 \(V_{bound}\) = \(V_{outmost}\) \(\times\) \(L_{scale}\) ; // enlarge boundary
15 \(V_{new}\) = \(V_{bound}\) + \(V_{i}\) ; // new node set
16 \(V_{vertices}\), \(R_{i}\) = Voronoi(\(V_{new}\)) ; // vertices and regions
17 \(R_{select}\) = \(R_{i}\)(index(\(V_{i}\))) ; // filtered regions
18 \(V_{select}\) = \(V_{vertices}\)(index(\(R_{select}\))) ; // filtered vertices
19 \(S_{i}\) = Area(\(V_{select}\)) ; // tessellation area size
20 \(X_{i}\) = (\(V_{i}\), \(S_{i}\)) ; // new node feature
21 \(G\) = (\(X_{i}\), \(E_{j}\)) ; // Voronoi graph
Output: \(G(X_{i}\), \(E_{j})\) // Voronoi-enhanced graph
```
**Algorithm 1** Process of Voronoi graph generation
To ensure the proposed method can be used for real-time applications, it is important to reduce the computation time for the graph generation. We compare different generation methods using different TacTips, with results summarised in Table I. One would normally expect the Delaunay graph generation method to have the highest computation speed since it generates fewer edges than the kNN-based method. However, for the hexagonal layout of pins, due to the convex hull computation involved in generating the graph, the computation time is increased; with the round layout of pins, the computation time is lower, as is more usual. The time for Voronoi graph generation is the longest but is still within a reasonable range (\(<50\) ms) for real-time operation. ### _Overview of the Tac-VGNN Model_ The Tac-VGNN architecture is shown in Fig. 5, inspired by Tactile GNN [14]. A 5-layer Graph Convolutional Network (GCN) is introduced for feature extraction with filter numbers of (16, 32, 48, 64, 96), whose input is a Voronoi graph \(G(X_{i}\), \(E_{j})\). A moderate number of GCN layers avoids the risk of over-smoothing while maintaining adequate performance. After the pooling process, the down-scaled feature vectors are sent to 3 Fully-Connected (FC) layers for pose prediction, whose unit numbers are 96, 64 and 2. The final output consists of the pose estimations for vertical movement \(Y\) and rotation angle \(\theta_{Roll}\), further explained in Fig. 6(a). ## III Experiments and Results Analysis ### _Tactile Data Collection_ During data collection, the TacTip taps repeatedly against a surface with a range of poses augmented with shear motion to imitate real contact situations [9, 21]. The pose of each contact is recorded and saved as labels for each tactile image. The pose parameters for surface tasks include vertical depth \(Y\) and rotation angle \(\theta_{Roll}\) (see Fig. 6(a)). A desktop robot is utilized for the tactile data collection using a TacTip with 331 pins (setup shown in Fig. 6(b)).
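Before turning to the results, the Tac-VGNN architecture described above (5 GCN layers with 16/32/48/64/96 filters, a pooling stage, then FC layers with 96/64/2 units) could be sketched as below. PyTorch Geometric and mean pooling are our own assumptions here, since the text does not name the framework or the pooling type.
```python
# A sketch of the Tac-VGNN network as described in the text, not the authors' code.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class TacVGNN(torch.nn.Module):
    def __init__(self, in_dim=3):                 # node feature X_i = (x_i, y_i, S_i)
        super().__init__()
        dims = [in_dim, 16, 32, 48, 64, 96]       # 5 GCN layers
        self.convs = torch.nn.ModuleList(
            GCNConv(dims[i], dims[i + 1]) for i in range(5))
        self.fc1 = torch.nn.Linear(96, 96)        # FC layers: 96, 64, 2 units
        self.fc2 = torch.nn.Linear(96, 64)
        self.fc3 = torch.nn.Linear(64, 2)         # outputs (Y, theta_Roll)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))       # message passing over edges E_j
        x = global_mean_pool(x, batch)            # graph-level read-out (pooling)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)                        # pose regression
```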
| **TacTip Sensor** | **Graph Type** | **Nodes** | **Edges** | **Time Cost** |
| --- | --- | --- | --- | --- |
| Hexagonal 127 pins | kNN | (127, 2) | (762, 2) | 0.004 s |
| Hexagonal 127 pins | Delaunay | (127, 2) | (744, 2) | 0.007 s |
| Hexagonal 127 pins | Voronoi | (127, 3) | (744, 2) | 0.019 s |
| Round 331 pins | kNN | (331, 2) | (1986, 2) | 0.010 s |
| Round 331 pins | Delaunay | (331, 2) | (1860, 2) | 0.008 s |
| Round 331 pins | Voronoi | (331, 3) | (1860, 2) | 0.045 s |

TABLE I: Graph generation comparison.
Fig. 4: 3D plots of the Voronoi graph after the interpolation operation: (a) shows the reference data, which has no contact; (b) represents the Voronoi graph data when deformation occurs; (c) displays the difference between the images in the first and second column.
Fig. 5: The network architecture of the Tac-VGNN model.
### _Experiment Design_ Four models were used for comparative studies: i) original CNN model (ver 1), ii) customized CNN model (ver 2), iii) GNN model and iv) Tac-VGNN model. The two CNN models serve as pose estimation baselines for comparison, whose architecture was designed in [9, 22], called PoseNet. The CNN (ver 1) has 5 convolutional layers and 2 fully-connected layers, while each convolutional layer has 256 filters. The CNN (ver 2) decreases the filter number for each convolutional layer to 48 and adds an extra fully-connected layer, simulating the regular Tac-VGNN structure. Both GNN and Tac-VGNN have the same architecture but different input dimensions. A kNN-based method is used to generate the graph for the GNN model, while a Delaunay-based method is used for the Tac-VGNN model. To train each model, the tactile dataset of 5000 samples is split randomly into a training/validation dataset (3763 samples; \(\sim\)75%) and a test dataset (1237 samples; \(\sim\)25%). A Linux PC with a Titan XP GPU is used for model training. ### _Offline Analysis_ Here, we conduct comparison studies among the 4 models to evaluate their performance in terms of i) data interpretability; ii) training efficiency; iii) prediction accuracy. #### III-C1 Data Interpretability We used different parts of the TacTip to generate deformation at different points of a test object, and let the tap depth \(Y\) modulate the deformation level. The images in Fig. 8 show the Voronoi graph generated from different levels of deformation. The strong correlation between the distribution characteristics and the deformation in reality demonstrates the interpretability of the Voronoi graph. Here, the contact locations are clearly visible and consistent with changes in the density distribution as shown on the heat map. Both the surface and edge contact are clearly visualized. The depth information can also be seen through the color differences between a 2 mm tap and a 4 mm tap. In comparison, the image data for the CNN (Fig. 1(b)) and the vanilla graphs (Fig. 3(a)) for the GNN are far less intuitive. #### III-C2 Training Efficiency The training efficiency of each model was evaluated by analyzing its computing time. As shown in Fig. 7, the CNN (ver 1) was about 10 times slower than the GNNs under all batch sizes. It also has a huge memory footprint, due to more filters being needed within layers, which exhausted the available GPU memory when the batch size was 256 or 512. The same was also seen with the regular-sized CNN (ver 2) model, which still needed more than 2 to 3 times the compute time of the GNNs.
In contrast, the average training speeds of the two GNNs were 0.86 and 1.05 sec/epoch respectively, so 100 epochs of training were completed in less than two minutes. Also, the maximum memory usage never exceeded half of the available GPU capacity (5 GB). Evidently, the results show both GNNs have significant advantages over CNNs in terms of training cost. As expected, Tac-VGNN was less computationally efficient than the GNN due to the extra feature dimensions. We also note that the faster training time of the GNNs will greatly benefit the hyperparameter optimization process, which can be very time-consuming for CNNs. #### III-C3 Prediction Accuracy To compare the prediction capabilities, the test results of each model are summarised in Table II. \(N_{Conv}\) and \(N_{FC}\) indicate the numbers of convolutional and fully-connected layers. MAE denotes the Mean Absolute Error of the poses \(Y\) and \(\theta_{Roll}\). In all cases, it was found that the rotation angle \(\theta_{Roll}\) was harder to predict compared to the vertical pose \(Y\), which could be due to the added rotational shear on the Z-axis during the data collection. Overall, the most significant difference was the pose \(Y\) between the GNN and Tac-VGNN (MAE reduced by \(28.57\%\)), which indicates that the addition of the Voronoi diagram can help learn useful depth-related features. The estimated vertical pose \(Y\) determines the contact depth of the TacTip, which is closely related to the servoing performance and experiment safety.

| **Model** | \(N_{Conv}\) | \(N_{FC}\) | **Pose \(Y\) MAE** | **Pose \(\theta_{Roll}\) MAE** |
| --- | --- | --- | --- | --- |
| CNN (ver 1) | 5 | 2 | 0.06 mm | 1.20 deg |
| CNN (ver 2) | 5 | 3 | 0.07 mm | 1.25 deg |
| GNN | 5 | 3 | 0.07 mm | 1.16 deg |
| Tac-VGNN | 5 | 3 | 0.05 mm | 1.02 deg |

TABLE II: Comparison of model test performances.
Fig. 6: Illustration of the pose definition and data collection process. (a) required pose parameters for the surface servoing task; (b) an example of data collection using a robot arm. In terms of pose {\(Y\), \(\theta_{Roll}\)}, the collection setups have a range of {[-2, 2] \(mm\), [-30, 30] \(deg\)}. The tap range is 5 \(mm\) along \(Y\). Shear is added in {\(X\), \(\theta_{Roll}\)} with range {[-5, 5] \(mm\), [-5, 5] \(deg\)}, facilitating model robustness and generalizability. The number of collected samples is 5000.
Fig. 7: Comparison of model training efficiency. The training time per epoch for different batch sizes of the four models is shown on the diagram. The size of the four kinds of markers indicates the GPU memory usage during training for the different models.
The test result details of Tac-VGNN are shown in Fig. 9. ### _Online Tactile Servoing_ To achieve tactile servoing, a Proportional Integral (PI) controller was used to maintain a reference pose designed to keep the TacTip oriented normal to the surface [21, 22]. The input to the controller was received from the model pose predictions. A radial move \(\Delta\)r and an axial rotation \(\Delta\theta\) were generated to move the robot based on the pose error and servo control gain. For the experiment, 5 household items were selected to evaluate the pose models' performance (Fig. 10(a)). To quantify the tactile servoing performance, we define an evaluation metric called smoothness \(S\): the absolute value of the slope is calculated between every two consecutive points along the trajectory, and the average of all these values gives the smoothness \(S\).
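A minimal numpy sketch of this smoothness metric as we read the definition above; the function name and the small guard against vertical steps are our additions.
```python
# Smoothness S: mean absolute slope between consecutive trajectory points.
import numpy as np

def smoothness(xs, ys, eps=1e-9):
    """xs, ys: 1D arrays of trajectory coordinates; smaller S = smoother."""
    dx, dy = np.diff(xs), np.diff(ys)
    return float(np.mean(np.abs(dy / (dx + eps))))  # eps avoids division by zero
```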
The smaller the value of \(S\), the smoother the trajectory. We chose the GNN and Tac-VGNN models for comparative studies to focus on the differences between the two GNN-based methods. Five experiments were conducted using five types of objects for surface following, which are detailed as follows. #### III-D1 Cylindrical Can The two pose models gave successful surface following around a cylindrical can, even though the curvature of the surface was significantly different from the flat surface used in training. It appears that the GNN model led to a trajectory with the most fluctuations and the Tac-VGNN model gave a smoother trajectory. This behavior could originate from the differences in the prediction error of \(\theta_{Roll}\); it is possible the Tac-VGNN model also generalizes better to curved surface geometries. Smaller cylinders have greater curvature and more pronounced angle changes, which we expect would amplify the differences in model performance during the online surface following task. #### III-D2 Cylindrical Mesh The addition of a mesh texture onto the cylindrical object did not noticeably degrade the model performance when following the general shape of the surface. Again, the Tac-VGNN gave a smoother trajectory. #### III-D3 Rigid Square The servoing along the straight surfaces of the square object was the easiest task for all models. Only small deviations were seen at the start of the surface following. The corners were challenging because these were not included in the training data, but both models managed to successfully turn and continue the surface following task. #### III-D4 Beveled Prism The beveled prism was a challenging object because it introduced a tilt relative to the Z-axis that was not present during the data collection and model training (Fig. 6(a)). Both methods encountered some fluctuations after each corner, the most difficult region for accurate pose prediction as the controller transitions from circular to linear servoing; however, the robot was able to stabilize and generate smooth trajectories on other parts of the object. #### III-D5 Tin Box The soft tin box was the most challenging as it has flexible surfaces. Any elastic deformation could produce a nonlinear change in the tactile signal that can mislead the controller and cause instability. In Fig. 10(b), the GNN model without Voronoi enhancement failed when first starting to turn. As the tip moves from the corner to the flatter surface, the elastic deformation can increase drastically, which requires accurate depth predictions to provide responsive control. This further evidences the poor estimation of the GNN in terms of pose \(Y\). In contrast, the Tac-VGNN model performed this task successfully, experiencing transient behavior around corners before reaching steady control. From our experiments, the Tac-VGNN performed well with fewer fluctuations and more accurate pose estimates compared to the GNN model. Though the Voronoi graph requires a longer generation time, experimental results show that it meets the need for real-time control.
Fig. 8: Data interpretability comparison: (a) the reference state without contact, {\(Y\) = \(0\,mm\), \(\theta_{Roll}\) = \(0\,deg\)}; (b,c) the TacTip central area contacts the surface at different depths, {\(Y\) = [2, 4] \(mm\), \(\theta_{Roll}\) = \(0\,deg\)}; (d,e) and (f,g) the TacTip peripheral area touches the object at the top and bottom respectively, {\(Y\) = [2, 4] \(mm\), \(\theta_{Roll}\) = \(0\,deg\)}.
Fig. 9: Tac-VGNN model test performance for surface pose estimation, including perpendicular depth \(Y\) and rotation angle \(\theta_{Roll}\).
The GNN showed a worse performance as it lacks reliable depth information and so cannot accurately perceive the deformation, especially during the initial contact, leading to imprecise control of the contact depth as the tactile sensor servos around an object. ## IV Conclusions and Future Work In this study, we proposed a Tac-VGNN model for tactile pose estimation and evaluated its performance on a surface following task. This method takes full advantage of the additional information provided by the Voronoi diagram to characterise the contact depth in each tessellation area. It successfully extends tactile graph features from 2D into 3D contact information, which has advantages in training efficiency, pose prediction accuracy, and interpretability. The experimental results indicate that the proposed method generalizes well enough to complete surface following tasks on objects with shapes different from those used in training. The interpretability of Tac-VGNN also helps operators build models of objects, such as surface or edge features, facilitating improved dexterous manipulation capabilities. Further improvements of Tac-VGNN can be made by leveraging shear or force information to enhance tactile perception. Additionally, we believe this research could lead to more practical applications, such as medical robots for tactile palpation [23], service robots for surface cleaning or grasping [24], industrial robots for precise assembly and product quality control [25], and other contact-rich tasks [26].
Fig. 10: Comparison of surface servoing with different objects: (a) the real scene when tactile servoing starts; (b) the trajectories for the GNN model, which fails when servoing around the soft tin box; (c) the trajectories for the Tac-VGNN model, which is the best overall; (d) zoomed-in comparison of trajectories and local smoothness values.
2303.03402
A comparative study on different neural network architectures to model inelasticity
The mathematical formulation of constitutive models to describe the path-dependent, i.e., inelastic, behavior of materials is a challenging task and has been a focus in mechanics research for several decades. There have been increased efforts to facilitate or automate this task through data-driven techniques, impelled in particular by the recent revival of neural networks (NNs) in computational mechanics. However, it seems questionable to simply disregard fundamental findings of constitutive modeling originating from the last decades of research within NN-based approaches. Herein, we propose a comparative study on different feedforward and recurrent neural network architectures to model inelasticity. Within this study, we divide the models into three basic classes: black box NNs, NNs enforcing physics in a weak form, and NNs enforcing physics in a strong form. Thereby, the first class of networks can learn constitutive relations from data while the underlying physics are completely ignored, whereas the latter two are constructed such that they can account for fundamental physics, where special attention is paid to the second law of thermodynamics in this work. Conventional linear and nonlinear viscoelastic as well as elastoplastic models are used for training data generation and, later on, as reference. After training with random walk time sequences containing information on stress, strain, and, for some models, internal variables, the NN-based models are compared to the reference solution, whereby interpolation and extrapolation are considered. Besides the quality of the stress prediction, the related free energy and dissipation rate are analyzed to evaluate the models. Overall, the presented study enables a clear recording of the advantages and disadvantages of different NN architectures to model inelasticity and gives guidance on how to train and apply these models.
Max Rosenkranz, Karl A. Kalina, Jörg Brummund, Markus Kästner
2023-03-06T12:37:40Z
http://arxiv.org/abs/2303.03402v1
# A comparative study on different neural network architectures to model inelasticity ###### Abstract The mathematical formulation of constitutive models to describe the path-dependent, i.e., inelastic, behavior of materials is a challenging task and has been a focus in mechanics research for several decades. There have been increased efforts to facilitate or automate this task through data-driven techniques, impelled in particular by the recent revival of neural networks (NNs) in computational mechanics. However, it seems questionable to simply disregard fundamental findings of constitutive modeling originating from the last decades of research within NN-based approaches. Herein, we propose a comparative study on different feedforward and recurrent neural network architectures to model inelasticity. Within this study, we divide the models into three basic classes: black box NNs, NNs enforcing physics in a weak form, and NNs enforcing physics in a strong form. Thereby, the first class of networks can learn constitutive relations from data while the underlying physics are completely ignored, whereas the latter two are constructed such that they can account for fundamental physics, where special attention is paid to the second law of thermodynamics in this work. Conventional linear and nonlinear viscoelastic as well as elastoplastic models are used for training data generation and, later on, as reference. After training with random walk time sequences containing information on stress, strain, and - for some models - internal variables, the NN-based models are compared to the reference solution, whereby interpolation and extrapolation are considered. Besides the quality of the stress prediction, the related free energy and dissipation rate are analyzed to evaluate the models. Overall, the presented study enables a clear recording of the advantages and disadvantages of different NN architectures to model inelasticity and gives guidance on how to train and apply these models. _Keywords:_ neural networks · recurrent neural networks · enforcing physics · constitutive modeling · thermodynamic consistency · viscoelasticity · plasticity ## 1 Introduction Accurately describing the behavior of materials under mechanical loading by constitutive models has been a focus in mechanics research for several decades now. The formulation and parametrization of constitutive models is however still a challenging task, especially for materials showing path-dependent, i.e., inelastic, behavior. As an alternative to traditional models, _data-based_ or _data-driven_ techniques are very promising and have the potential to improve or replace conventional models. These techniques have become increasingly popular in the computational mechanics community during the last years [1, 2], where the application of neural networks (NNs) is probably the most common technique. In the following, a brief overview of NNs in constitutive modeling is given. The concept of using NNs in constitutive modeling was initially put forward by Ghaboussi et al. [3] in the early 1990s. However, in this early stage, generally pure black-box techniques were employed, i.e., networks that do not account for any physical principles and can therefore only accurately recreate the training data, in this case composed of stress-strain pairs, but perform badly when extrapolating.
To address this issue, a relatively new approach in _NN-based constitutive modeling_, and scientific _machine learning_ (ML) in general, is to integrate crucial underlying physics in either a strong or weak form. These methods, known as _physics-informed_ [4, 5], _mechanics-informed_ [6, 7], _physics-augmented_ [8, 9], _physics-constrained_ [10], or _thermodynamics-based_ [11], improve extrapolation capability and allow for the use of sparse training data. The easiest material behavior to model is elasticity, since here a suitable model only needs to predict the stresses for specific deformation states. In the context of ML, the works [12, 13] seek to approximate the elastic potential by using a _feedforward neural network_ (FNN) with three deformation-type invariants as input. Thus, a number of constitutive requirements are fulfilled by construction, e.g., _thermodynamic consistency_, objectivity, or material symmetry. However, training of these models directly requires the elastic potential. Meanwhile, FNNs using invariants as input and the hyperelastic potential as output are a very well established approach [14, 15, 16, 17, 18, 8, 9]. Thereby, an improved training is applied that allows calibration of the network directly by tuples of stress and strain, i.e., the derivative of the energy with respect to the deformation is included in the loss, which is also called Sobolev training [19, 20]. Alternatively, a network previously trained to predict stress coefficients can be used to construct a pseudopotential, thus ensuring thermodynamic consistency of NN-based elastic models a posteriori [21]. Compared to elasticity, the modeling of path-dependent, i.e., inelastic, constitutive behavior by NN-based approaches is more complex. Some early proposals [22, 23, 24, 25, 26] can already achieve quite good predictions by, for example, adding stress and strain states from previous time steps to the input layer of an FNN [25]. This allows the network to indirectly learn a kind of evolution equation. The model [22] uses internal variables of the material to reliably reproduce stress-strain curves of a viscoplastic material. Alternatively, load history dependent behavior can also be represented without the availability of the internal variables by so-called _recurrent neural networks_ (RNNs), which have been shown to be universal and accurate, particularly for more sophisticated recurrent cells, e.g., according to Hochreiter and Schmidhuber [27]. RNNs, especially _long short-term memory_ (LSTM) cells, have been intensively used to model inelasticity, e.g., in the works [28, 29, 30, 31], and are very promising regarding their prediction quality. Very recently, spiking LSTMs, which enable a massive reduction in memory and energy consumption compared to conventional neural networks, have been applied to model isotropic hardening plasticity [32]. In addition, a new type of RNN named linearized minimal state cell [33] prevents its response from depending on the path sampling and is therefore advantageous in modeling elastoplasticity. This approach has been used for both 2D and 3D datasets matching the real mechanical behavior of an aluminum alloy as determined by simulations of crystal plasticity [34]. A further promising approach to represent anisotropic elastoplasticity combines Lie algebra with RNNs [35]. Finally, although trained entirely on monotonic data, a hybrid model [36] combining a data-driven encoder and a physics-based decoder allows for accurate predictions of elastoplastic unloading/reloading paths.
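To make the Sobolev training idea mentioned above concrete, a minimal 1D TensorFlow sketch is added here: an FNN approximates the potential, and its derivative with respect to the strain, i.e., the stress, enters the loss. The toy setting, layer sizes, and names are our own assumptions for illustration.
```python
# A minimal sketch of Sobolev training for a 1D hyperelastic potential.
import tensorflow as tf

# FNN mapping the strain (a stand-in for the invariants) to the potential psi
potential = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="softplus"),
    tf.keras.layers.Dense(1),
])

def sobolev_loss(eps, sigma_true):
    """MAE between d(psi)/d(eps) and the measured stress."""
    with tf.GradientTape() as tape:
        tape.watch(eps)
        psi = potential(eps)
    sigma_pred = tape.gradient(psi, eps)          # stress as energy derivative
    return tf.reduce_mean(tf.abs(sigma_pred - sigma_true))
```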
Despite the great progress in ML-based constitutive modeling, the NN approaches to describing inelasticity mentioned so far are united by their lack of knowledge of the _second law of thermodynamics_. However, following Masi et al. [11], the incorporation of such fundamental physical principles offers decisive advantages, such as a more targeted and therefore faster training that requires only a small amount of data, and a significantly improved extrapolation capability. A data-driven framework called _deep material network_ is shown in the works [37, 38, 39, 40]. Thereby, the response of a representative volume element is reproduced by a network including a collection of connected mechanistic building blocks with analytical homogenization solutions, which enables the description of a complex effective response without the loss of essential physics. Within the works [41, 42, 43, 44, 45] the idea to replace parts of classical models with NNs is pursued to achieve this. E.g., the yield function or the evolution equations are described by FNNs instead of using a particular model. A coupling of NNs to the so-called micro-sphere approach is shown in the work [46]. A more freely formulated approach for the consideration of rate-independent inelasticity based on an adapted network architecture consisting of two FNNs is presented in the works [47, 11]. Thereby, the first network is used to learn the internal variables' evolution and the second for the approximation of the free energy, where the training procedure requires internal state variables. To account for thermodynamic consistency, i.e., that the rate of dissipation \(\mathcal{D}\) is always greater than or equal to zero, this term, which follows from the free energy, is added to the loss function. A similar approach tailored for the modeling of inelasticity is shown in the work [48]. In contrast to the former model, internal state variables capturing the path-dependency are inferred automatically from the hidden state of an RNN. Thus, this method has the advantage of requiring only stresses and strains for training. The two mentioned models [11, 48] nevertheless have the weakness that the requirement \(\mathcal{D}\geq 0\) is not satisfied by design for arbitrary load cases, but is merely enforced by adding a penalty term to the loss function. Within the works [49, 7], on the other hand, thermodynamic consistency is fulfilled by design of the network architecture. This is achieved by combining the concept of _generalized standard materials_ [50] with _input convex neural networks_ (ICNNs) [51]. Within the mentioned works, an application to viscoelasticity is shown. In a similar approach, finite viscoelasticity is modeled by replacing the Helmholtz free energy function and dissipation potential with data-driven functions that a priori satisfy the second law of thermodynamics, using neural ordinary differential equations (NODEs) [52]. After the brief overview given above, it can be summarized that there are a variety of NN-based approaches to modeling inelasticity, with very different levels of incorporated physics. Most approaches were applied exclusively to describing one specific material class, elastoplasticity or viscoelasticity. Thus, this work aims at bringing the different approaches into a uniform framework and comparing them by applying them to both elastoplastic and viscoelastic data in the 1D case. Thereby, special attention is paid to the fulfillment of the second law of thermodynamics.
With regard to this, a division of the models into three basic classes is made: _black box NNs, NNs enforcing physics in a weak form_, and _NNs enforcing physics in a strong form_. Networks belonging to the first class learn constitutive relations from data while the underlying physics are completely ignored, whereas the latter two are constructed such that they can account for fundamental physical principles. However, NNs enforcing physics in a weak form do not necessarily satisfy the second law for arbitrary load cases, which is due to the fact that \(\mathcal{D}\geq 0\) is only integrated into the loss function by a penalty term. In contrast, the network architecture of NNs enforcing physics in a strong form is designed in such a way that this condition is fulfilled in every case, i.e., by construction. In this paper, conventional linear and nonlinear viscoelastic as well as elastoplastic models are used for training data generation and, later on, as reference. After training with random walk time sequences containing information on stress, strain, and, for some models, internal variables, the NN-based models are compared to the reference solution. Besides the quality of the stress prediction, the predicted free energy and dissipation rate are analyzed to evaluate the models for both interpolation and extrapolation. In addition to the provided comparison, some of the NN-based models are extended and/or modified at several points, in particular the approaches belonging to NNs enforcing physics in a strong form. The organization of the paper is as follows: In Sects. 2 and 3, the basics of constitutive modeling in continuum solid mechanics as well as artificial neural networks are given, respectively. After this, the considered NN-based constitutive models are introduced in Sect. 4. The generation of the database for training is given in Sect. 5. This is followed by a study of the prediction quality of the various NN-based models in Sect. 6. After a discussion of the results, the paper is closed by concluding remarks and an outlook on necessary future work in Sect. 7. ## 2 Classical constitutive models The description of the behavior of materials requires constitutive equations. A framework for the formulation of these equations for different types of material behavior is shown in this section. According to Haupt [53], in terms of their behavior, materials can be categorized into the four classes (i) _elasticity_, (ii) _elastoplasticity_, (iii) _viscoelasticity_, and (iv) _viscoelastoplasticity_. Classes (ii) to (iv) are referred to as _inelastic_ and exhibit dissipative behavior. In order to ensure the irreversibility of such processes, _thermodynamic consistency_ must be taken into account during the formulation of the material laws. ### General framework #### 2.1.1 Dissipation inequality Using the entropy balance, the energy balance and the second law of thermodynamics, an expression known as the Clausius-Duhem inequality can be formulated, which, assuming isothermal 1D processes, takes the form \[\sigma\dot{\epsilon}-\dot{\psi}\geq 0\quad, \tag{1}\] with \(\sigma\) being the stress, \(\epsilon\) the strain and \(\psi\) the Helmholtz free energy density, which for the sake of brevity will simply be referred to as free energy in the following. Starting from Eq. (1), depending on the particular choice of \(\psi\), different constitutive models which strictly satisfy the second law can be derived.
Generally, the free energy \(\psi:=\psi\left(\epsilon,\xi^{1},\xi^{2},\ldots,\xi^{N}\right)\) is a function of \(\epsilon\) and the internal variables \(\xi^{\alpha},\ \alpha\in\left\{1,\ldots,N\right\}\). This set of internal variables is required to describe the load history dependent internal state of a material point and does not necessarily represent measurable physical quantities. To shorten notation, the internal variables are summarized in the generalized vector \(\mathbf{\xi}\in\mathbb{R}^{N}\) where appropriate in the following. Applying the principle of equipresence [53], the stress \(\sigma:=\sigma\left(\epsilon,\mathbf{\xi}\right)\) is assumed to be a function of the same set of variables. Evaluating Eq. (1) with the chain rule \(\dot{\psi}=\partial_{\epsilon}\psi\,\dot{\epsilon}+\partial_{\mathbf{\xi}}\psi\cdot\dot{\mathbf{\xi}}\) yields \[\mathcal{D}:=\left(\sigma-\frac{\partial\psi}{\partial\epsilon}\right)\dot{\epsilon}-\underbrace{\frac{\partial\psi}{\partial\mathbf{\xi}}}_{=:-\mathbf{\tau}}\cdot\dot{\mathbf{\xi}}\geq 0\quad, \tag{2}\] where \(\mathbf{a}\cdot\mathbf{b}\) is the scalar product of two vectors \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{N}\). The quantities \(\mathcal{D}\) and \(\mathbf{\tau}\in\mathbb{R}^{N}\) denote the dissipation rate and the vector of _thermodynamic conjugate forces_, also called _internal forces_ [50], with respect to the internal variables \(\mathbf{\xi}\). In order to comply with inequality (2), the necessary and sufficient conditions \[\mathcal{D}\geq 0\quad\Longleftrightarrow\quad\sigma=\frac{\partial\psi}{\partial\epsilon}\quad\wedge\quad\mathbf{\tau}\cdot\dot{\mathbf{\xi}}=\mathcal{D}\geq 0 \tag{3}\] arise. Thus, the evolution equations for the internal variables are yet to be defined such that \(\mathcal{D}\geq 0\) is ensured at all times. #### 2.1.2 Generalized standard materials A common way to formulate the necessary evolution equations is to use the concept of _generalized standard materials_ [50, 54, 55], which is briefly explained in the following. Within this concept, in addition to \(\psi\), a _dissipation potential_ \(\phi:=\phi(\dot{\mathbf{\xi}},\mathbf{\xi},\epsilon)\) is introduced which is defined to be (i) _convex_ with respect to its first argument \(\dot{\mathbf{\xi}}\) and additionally normalized with respect to \(\dot{\mathbf{\xi}}\), i.e., it fulfills the conditions (ii) \(\phi(\mathbf{0},\mathbf{\xi},\epsilon)=0\) and (iii) \(\phi(\dot{\mathbf{\xi}},\mathbf{\xi},\epsilon)\geq 0\), so that \(\dot{\mathbf{\xi}}=\mathbf{0}\) is a minimum of \(\phi\). The dissipation potential may be non-smooth for rate-independent, i.e., elastoplastic, materials. The internal forces can now be determined from \(\phi\) according to \(\mathbf{\tau}\in\partial_{\dot{\mathbf{\xi}}}\phi\), where the operator \(\partial_{\dot{\mathbf{\xi}}}(\cdot)\) denotes the subdifferential of a non-smooth convex function. On the other hand, if \(\phi\) is smooth, the relation changes to \(\mathbf{\tau}=\partial_{\dot{\mathbf{\xi}}}\phi\), where the introduced operator now represents the standard partial derivative. Thus, using Eq. (2), the Biot equation follows, which describes the internal variables' evolution and is given by \[\mathbf{0}\in\partial_{\mathbf{\xi}}\psi+\partial_{\dot{\mathbf{\xi}}}\phi\quad\text{or}\quad\mathbf{0}=\partial_{\mathbf{\xi}}\psi+\partial_{\dot{\mathbf{\xi}}}\phi\quad\text{with}\quad\mathbf{\xi}(t=0)=\mathbf{\xi}_{0} \tag{4}\] for rate-independent and rate-dependent constitutive behavior, respectively. Note that the inequality given in Eq. (3), i.e., \(\mathcal{D}\geq 0\), is automatically fulfilled by Eq. (4) due to the stated requirements on \(\phi\), i.e., convexity and normalization.
An alternative formulation follows with the _dual dissipation potential_ \(\phi^{*}\) obtained by the Legendre-Fenchel transformation \[\phi^{*}(\mathbf{\tau},\mathbf{\xi},\epsilon):=\sup_{\dot{\mathbf{\xi}}}\left[\mathbf{\tau}\cdot\dot{\mathbf{\xi}}-\phi(\dot{\mathbf{\xi}},\mathbf{\xi},\epsilon)\right]. \tag{5}\] By using the dual dissipation potential one gets \[\dot{\mathbf{\xi}}\in\partial_{\mathbf{\tau}}\phi^{*}\quad\text{or}\quad\dot{\mathbf{\xi}}=\partial_{\mathbf{\tau}}\phi^{*}\quad\text{with}\quad\mathbf{\xi}(t=0)=\mathbf{\xi}_{0}\quad, \tag{6}\] respectively, instead of Eq. (4). **Remark 1** It should be noted that Eqs. (4) and (6) in this stated form only hold if the rates of the internal variables or the internal forces, respectively, are independent from each other. If there are constraints between the individual quantities, these must be explicitly taken into account during the evaluation of Eqs. (4) and (6). Otherwise, the dissipation potential might for example be expressed in terms of a reduced set of internal variable rates and the partial derivatives with respect to the omitted rates yield zero. This is the case, e.g., for the elastoplastic model as formulated in Tab. 1. Within the special case of _rate-independent_ constitutive behavior, the dissipation function \(\phi\) is obtained by using the _concept of maximum dissipation_ [50, 55]. Thus, it is defined by the constrained maximization problem \[\phi(\dot{\mathbf{\xi}},\mathbf{\xi},\epsilon):=\sup_{\mathbf{\tau}\in\mathcal{E}}\left(\mathbf{\tau}\cdot\dot{\mathbf{\xi}}\right)\quad\text{with}\quad\mathcal{E}:=\left\{\mathbf{\tau}\in\mathbb{R}^{N}\ |\ f(\mathbf{\tau},\mathbf{\xi})\leq 0\right\}. \tag{7}\] In the equation above, \(\mathcal{E}\) denotes the admissible domain of internal forces with the yield function \(f(\mathbf{\tau},\mathbf{\xi})\), which is assumed to be convex with respect to \(\mathbf{\tau}\), normalized and homogeneous of degree one. The solution of Eq. (7) yields the evolution equations of the internal variables together with the Karush-Kuhn-Tucker conditions: \[\dot{\mathbf{\xi}}=\lambda\partial_{\mathbf{\tau}}f\ \wedge\ \lambda\geq 0\ \wedge\ f\leq 0\ \wedge\ \lambda f=0. \tag{8}\] Therein, the scalar \(\lambda\in\mathbb{R}_{+}\) denotes the plastic multiplier. ### Specific constitutive models The outlined framework given in Sect. 2.1 can be applied to describe, e.g., viscoelasticity or elastoplasticity, for which the corresponding models are briefly summarized in the following. #### 2.2.1 Viscoelasticity Viscoelastic behavior of solids is characterized by a strain rate dependent stress response with an elastic equilibrium curve, see Fig. 1c. These properties can be described by a _generalized Maxwell model_ [53] as shown in Fig. 1a, where \(E\) and \(E_{a}\) are the Young's moduli of the springs within the rheological model, \(\eta_{a}\) the respective viscosities, \(\epsilon_{a}^{\text{el}}\) and \(\epsilon_{a}^{\text{vi}}\) the elastic and viscous strains, and \(\sigma_{a}^{\text{ov}}\) the non-equilibrium stresses, also denoted as overstresses. The viscosity \(\eta_{a}\) may be a function of the overstress \(\sigma_{a}^{\text{ov}}\), see Fig. 1b. A model with overstress-dependent viscosities is referred to as nonlinear in the following and linear otherwise. The behaviour of such a generalized Maxwell model is described by the governing equations in Tab. 1.
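For instance, inserting the quadratic dissipation potential of a single Maxwell branch from Tab. 1 into Eq. (5) gives a short worked example of the transformation (added here for illustration): \[\phi(\dot{\xi})=\frac{\eta}{2}\dot{\xi}^{2}\quad\Longrightarrow\quad\phi^{*}(\tau)=\sup_{\dot{\xi}}\left[\tau\dot{\xi}-\frac{\eta}{2}\dot{\xi}^{2}\right]=\frac{\tau^{2}}{2\eta}\quad,\] where the supremum is attained at \(\dot{\xi}=\tau/\eta\), which is consistent with the evolution equation \(\dot{\xi}=\partial_{\tau}\phi^{*}=\tau/\eta\) following from Eq. (6).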
#### 2.2.2 Elastoplasticity In contrast to viscoelasticity, elastoplastic material behavior is characterized by strain rate independence but the presence of an equilibrium hysteresis [53]. Within an initial region \(\sigma\in\left(-\sigma_{y_{0}},\,\sigma_{y_{0}}\right)\), the material behaves purely elastically. If the limits of this initial region are exceeded, i.e., \(|\sigma|>\sigma_{y_{0}}\), elastic-plastic deformation occurs and the elastic region can be both shifted in the respective direction (_kinematic hardening_) and expanded (_isotropic hardening_) [56]. In order to model these properties, a so-called yield function \(f:=f\left(\tau,\xi\right)\leq 0\) is defined. It determines the boundaries of the elastic region such that the deformation is purely elastic if \(f<0\) and elastic-plastic if \(f=0\). The case \(f>0\) is not admissible. The evolution equations can be obtained from the maximum dissipation principle, see Sect. 2.1. Note that the rates of the internal variables are not independent from each other, which has to be taken into account when evaluating Eq. (4), as explained in Remark 1. With that, the governing equations of elastoplastic behavior with mixed kinematic and isotropic hardening are derived, cf. Tab. 1. The corresponding rheological model is shown in Fig. 2a. Therein, \(E\) is the Young's modulus, \(H\) and \(\hat{H}\) are the kinematic and isotropic hardening moduli, respectively, and \(\sigma_{y_{0}}\) is the initial yield stress. Moreover, \(p\) denotes the back stress and \(\epsilon^{\text{el}}\) and \(\epsilon^{\text{pl}}\) the elastic and plastic strains. Setting the hardening moduli \(H\) or \(\hat{H}\) to zero results in the special cases with ideal plasticity (\(H=\hat{H}=0\)), only kinematic hardening (\(\hat{H}=0\)) or only isotropic hardening (\(H=0\)), see Figs. 2b-2e. The corresponding internal variables of the hardening moduli set to zero then still take on values unequal to zero, but no longer have any influence on \(\sigma\), \(\psi\) or \(\mathcal{D}\) and can therefore be neglected. For instance, in the case of ideal plasticity \(H=\hat{H}=0\), the set of internal variables can be reduced to the plastic strain \(\epsilon^{\text{pl}}\).
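To make this concrete, the model of Tab. 1 can be integrated with the classical elastic predictor/plastic corrector (return mapping) scheme; the following minimal 1D sketch is our own illustrative implementation, not code from the paper.
```python
# One strain-driven step of 1D elastoplasticity with kinematic (H) and
# isotropic (H_hat) hardening; state = (eps_pl, alpha, alpha_hat).
import numpy as np

def return_map(eps, state, E=1000.0, H=100.0, H_hat=50.0, sig_y0=10.0):
    eps_pl, alpha, alpha_hat = state
    sig_trial = E * (eps - eps_pl)                       # elastic predictor
    f = abs(sig_trial - H * alpha) - (sig_y0 + H_hat * alpha_hat)  # yield fct.
    if f <= 0.0:                                         # purely elastic step
        return sig_trial, (eps_pl, alpha, alpha_hat)
    dlam = f / (E + H + H_hat)                           # plastic multiplier from f = 0
    n = np.sign(sig_trial - H * alpha)                   # flow direction
    eps_pl += dlam * n                                   # plastic strain update
    alpha += dlam * n                                    # kinematic hardening variable
    alpha_hat += dlam                                    # isotropic hardening variable
    return E * (eps - eps_pl), (eps_pl, alpha, alpha_hat)
```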
| | **Viscoelasticity** | **Elastoplasticity** |
| --- | --- | --- |
| **Free energy** | \(\psi:=\frac{1}{2}E\epsilon^{2}+\sum_{a=1}^{N}\frac{1}{2}E_{a}\left(\epsilon-\epsilon_{a}^{\text{vi}}\right)^{2}\) | \(\psi:=\frac{1}{2}E\left(\epsilon-\epsilon^{\text{pl}}\right)^{2}+\frac{1}{2}H\alpha^{2}+\frac{1}{2}\hat{H}\hat{\alpha}^{2}\) |
| **Dissipation potential** | \(\phi=\sum_{a=1}^{N}\frac{\eta_{a}}{2}\left(\dot{\epsilon}_{a}^{\text{vi}}\right)^{2}\) | \(\phi=\sigma_{y_{0}}\left\vert\dot{\epsilon}^{\text{pl}}\right\vert\) with \(\dot{\alpha}=\dot{\epsilon}^{\text{pl}}\) and \(\dot{\hat{\alpha}}=\left\vert\dot{\epsilon}^{\text{pl}}\right\vert\) |
| **Dual dissipation potential** | \(\phi^{*}=\sum_{a=1}^{N}\frac{1}{2\eta_{a}}\left(\sigma_{a}^{\text{ov}}\right)^{2}\) | \(\phi^{*}=0\) if \(f\leq 0\), \(\phi^{*}=\infty\) otherwise |

Tab. 1: Governing equations of the viscoelastic and elastoplastic reference models.
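As an illustration of the viscoelastic column of Tab. 1, the following minimal strain-driven integration of a single Maxwell branch is added here; the semi-implicit update and the placeholder parameters are our own choices.
```python
# One time step of a one-branch generalized Maxwell model (Tab. 1):
# evolution equation eta1 * d(eps_vi)/dt = sig_ov = E1 * (eps - eps_vi).
def maxwell_step(eps_new, eps_vi, dt, E=100.0, E1=50.0, eta1=10.0):
    eps_vi = eps_vi + dt * E1 * (eps_new - eps_vi) / eta1  # viscous strain update
    sig_ov = E1 * (eps_new - eps_vi)                       # overstress
    sigma = E * eps_new + sig_ov                           # equilibrium + overstress
    return sigma, eps_vi
```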
## 3 Basics of artificial neural networks NNs can be divided into different classes, each adapted to different types of tasks [57, 58]. For constitutive modeling, the use of FNNs and RNNs is particularly suitable. ### Feedforward neural networks FNNs are composed of artificial neurons arranged in several layers as shown in Fig. 3(a). The information of the input variables (features) \(i_{n}\), summarised in the input \(\mathbf{I}:=(i_{1},...,i_{N_{1}})\), is passed on to each neuron of the subsequent layer 2 and is used in these to calculate the outputs of this layer. The entire output of layer 2 in turn serves as input for each neuron in the next layer and so forth. The forward flow of information from the input layer to the final output \(\mathbf{O}:=(\,o_{1},\dots,o_{N_{L}})\) of layer \(L\) without feedback of intermediate results is called _feedforward_. The \(n\)th neuron of layer \(l\) is shown in Fig. 3(b). The output \(\mathcal{N}_{n}^{l}\) of this neuron is determined via \[\mathcal{N}_{n}^{l}=\mathcal{A}_{n}^{l}\left(\chi_{n}^{l}\right)=\mathcal{A}_{n}^{l}\left(\sum_{m=0}^{N_{l-1}}\mathcal{N}_{m}^{l-1}\,\omega_{nm}^{l}\right)\quad. \tag{9}\] Therein, \(\mathcal{A}_{n}^{l}\) denotes the activation function of this neuron, \(N_{l-1}\) the number of neurons in layer \(l-1\) and \(\omega_{nm}^{l}\) the weight of the output \(\mathcal{N}_{m}^{l-1}\) in the argument of the activation function \(\mathcal{A}_{n}^{l}\).
The weight \(\omega_{n0}^{l}\) of the output of the imaginary neuron \(\mathcal{N}_{0}^{l-1}\equiv 1\) is referred to as the bias of the neuron. Common activation functions are the hyperbolic tangent \(\tanh(x)\), the rectifier \(\operatorname{ReLU}(x):=\max\left(0,x\right)\) or the softplus activation \(\operatorname{SP}(x):=\ln(1+\exp(x))\) [11, 57, 58]. Particularly, the outputs \(\mathcal{N}_{n}^{L}\) of the output layer \(L\) can also be expressed with (9). Recursively, the \(\mathcal{N}_{n}^{l}\) can be replaced until layer 1 with its inputs \(\mathcal{N}_{n}^{1}=i_{n}\) is reached. This clarifies that such a network may be arbitrarily nested, but for fixed \(\omega_{nm}^{l}\) is a well-defined function of the input \(\mathbf{I}\). In this sense, the network as a function \[\mathcal{F}:\mathbb{R}^{N_{1}}\rightarrow\mathbb{R}^{N_{L}}\quad,\quad\mathbf{I}\mapsto\mathcal{F}(\mathbf{I})=\mathbf{O} \tag{10}\] maps the input \(\mathbf{I}\) to the output \(\mathbf{O}\).
Figure 3: Structure of an FNN: **(a)** general representation of a full network with an arbitrary number of layers and neurons and **(b)** functionality of the \(n\)th neuron of layer \(l\).
Figure 2: Elastoplastic constitutive model: **(a)** rheological model with isotropic and kinematic hardening and typical hystereses for **(b)** ideal plasticity (\(H=\hat{H}=0\)), **(c)** kinematic hardening (\(\hat{H}=0\)), **(d)** isotropic hardening (\(H=0\)) and **(e)** kinematic and isotropic hardening.
In order to adapt the weights \(\omega_{nm}^{l}\) such that the predictions of the NN match the expected values, a training data set \(\mathcal{S}:=\left\{\mathcal{T}_{1},\ldots,\mathcal{T}_{N^{\text{ds}}}\right\}\) consisting of \(N^{\text{ds}}\in\mathbb{N}\) data tuples \(\mathcal{T}_{a}\) is required. In each of these data tuples \(\mathcal{T}_{a}:=\left(\mathbf{I}_{a},\bar{\mathbf{O}}_{a}\right)\), an input \(\mathbf{I}_{a}\) is assigned its expected output \(\bar{\mathbf{O}}_{a}\). The expected output usually contains the desired values of the neurons in the last layer, but may also contain additional information, such as desired derivatives with respect to a certain input. The error of the predictions of an NN with respect to this set of data tuples is summarized in the loss function \(\mathcal{L}\left(\mathbf{w},\mathcal{S}\right)\), where \(\mathbf{w}\) denotes the vector of all weights \(\omega_{nm}^{l}\) of the NN. For the applications shown here, the loss function is mostly composed of mean absolute error terms, which for FNNs read \[\text{MAE}\left(q\right):=\frac{1}{N^{\text{ds}}}\sum_{i=1}^{N^{\text{ds}}}\left|q(\mathbf{I}_{i},\mathbf{w})-\bar{q}_{i}\right|\quad. \tag{11}\] Therein, \(q\) is the quantity whose prediction is evaluated. The minimization of this error function with respect to the weights \(\mathbf{w}\), \[\mathbf{w}=\arg\min_{\tilde{\mathbf{w}}}\mathcal{L}\left(\tilde{\mathbf{w}},\mathcal{S}\right)\quad, \tag{12}\] is called the training process of the NN. Various methods can be used to solve this minimization problem, e.g., _Stochastic Gradient Descent_ (SGD) or _Adam_, to name two of the most common optimizers. Within this work, _Sequential Least Squares Programming_ (SLSQP) is used for the optimization. A special class of FNNs are ICNNs, initially proposed by Amos et al. [51], i.e., networks that have the property to be convex with respect to their input arguments. This is achieved by using a convex and non-decreasing activation function and non-negative weights.
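A minimal sketch of such an ICNN in TensorFlow (the library named in Sect. 4), following the simplified description above, i.e., non-negative weights combined with the convex and non-decreasing softplus activation; the layer sizes are our assumption, and the input passthrough connections of the full architecture in [51] are omitted.
```python
# A simplified input convex neural network: softplus activations plus
# non-negative kernel weights make the output convex in the input.
import tensorflow as tf

def build_icnn(n_in=1, hidden=(16, 16)):
    model = tf.keras.Sequential([tf.keras.Input(shape=(n_in,))])
    for units in hidden:
        model.add(tf.keras.layers.Dense(
            units, activation="softplus",
            kernel_constraint=tf.keras.constraints.NonNeg()))
    model.add(tf.keras.layers.Dense(
        1, kernel_constraint=tf.keras.constraints.NonNeg()))
    return model
```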
### Recurrent neural networks

In contrast to FNNs, RNNs allow intermediate results to be fed back, see Fig. 4(a). This feature enables the network to take into account not only one, but several time steps for the output and thus to learn to interpret the input variables in their (pseudo) temporal context. This can be particularly useful, for example, in text translation, speech recognition or prediction of the evolution of physical quantities. Thus, in RNNs, entire sequences of successive data are evaluated, which in the applications shown within this work are sets of time steps.

RNNs are thus very different from FNNs in the way they operate. Nevertheless, their internal structure is also composed of a single or multiple FNNs. These FNNs receive the current input vector \({}^{n}\mathbf{I}\) and the so-called hidden state \({}^{n-1}\mathbf{h}\) of the previous time step and output the hidden state \({}^{n}\mathbf{h}\) of the current time step. Outside of the recurrent cell, the new hidden state is subsequently processed further to obtain the output \({}^{n}\mathbf{O}\). Fig. 4(b) illustrates this process for a simple RNN cell unfolded in time. The depicted cell consists of only a single FNN and is not capable of incorporating many preceding time steps into the output of the current hidden state. Therefore, more complex internal structures have been developed using multiple FNNs in order to enable the cell to detect long term dependencies. The RNNs presented in this work use the so-called _long short-term memory_ (LSTM) [27] cell to overcome this problem. Besides the hidden state, this LSTM cell passes another set of information, the so-called cell state \({}^{n}\mathbf{c}\), from time step to time step. The number \(N^{\text{c}}\) of entries in \({}^{n}\mathbf{c}\) can be interpreted as the memory capacity of the cell and is a crucial parameter for the network performance.

Figure 4: Functionality and internal structure of a standard RNN cell in two equivalent illustrations: the output is **(a)** fed back into the RNN cell and **(b)** passed from one time step to the next. I.e., (b) can be understood as (a) unfolded in time.

In contrast to FNNs, the training data set \(\mathcal{S}:=\left\{\mathcal{T}_{1},\dots,\mathcal{T}_{N^{\text{seq}}}\right\}\) contains \(N^{\text{seq}}\in\mathbb{N}\) tuples \(\mathcal{T}_{s}:=(\mathcal{T}_{s}^{\text{I}},\bar{\mathcal{T}}_{s}^{\text{O}})\) of input sequences \(\mathcal{T}_{s}^{\text{I}}\) and assigned expected output sequences \(\bar{\mathcal{T}}_{s}^{\text{O}}\). Each input sequence \(\mathcal{T}_{s}^{\text{I}}:=({}^{1}\mathbf{I}_{s},\dots,{}^{N_{s}^{\text{ts}}}\mathbf{I}_{s})\) is an ordered series of \(N_{s}^{\text{ts}}\in\mathbb{N}\) input vectors, each representing one of the \(N_{s}^{\text{ts}}\) time steps of the sequence. The expected output sequence \(\bar{\mathcal{T}}_{s}^{\text{O}}:=({}^{1}\bar{\mathbf{O}}_{s},\dots,{}^{N_{s}^{\text{ts}}}\bar{\mathbf{O}}_{s})\) contains the same number of expected output vectors. The error of the RNN's predictions is measured in the loss function \(\mathcal{L}\left(\mathbf{w},\mathcal{S}\right)\), where \(\mathbf{w}\) now contains the weights of all FNNs inside and outside the recurrent cell. For RNNs, the mean absolute error measure of a quantity \(q\) takes the form

\[\text{MAE}\left(q\right):=\frac{1}{N^{\text{seq}}}\sum_{s=1}^{N^{\text{seq}}}\left(\frac{1}{N_{s}^{\text{ts}}}\sum_{n=1}^{N_{s}^{\text{ts}}}\left|{}^{n}q(\mathcal{T}_{s}^{\text{I}},\mathbf{w})-{}^{n}\bar{q}_{s}\right|\right)\quad. \tag{13}\]

The training process, similar to the FNNs, is performed as minimization of the loss function with respect to all weights present in the network.
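A minimal sketch of such a recurrent model in TensorFlow, the library used within this work, may look as follows; all sizes are illustrative and are not the hyperparameters used later:

```python
import tensorflow as tf

# LSTM cell with N_c = 8 entries in the cell state applied to sequences of
# N_ts = 100 scalar inputs, followed by a layer producing one output per
# time step (cf. Fig. 4).
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, return_sequences=True, input_shape=(100, 1)),
    tf.keras.layers.Dense(1),
])

# For sequences of equal length N_ts, the built-in "mae" loss, which
# averages |prediction - expected| over all time steps and sequences of a
# batch, coincides with Eq. (13).
model.compile(optimizer="adam", loss="mae")
```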
## 4 NN-based constitutive models

Although the models presented in Sect. 2 are able to reproduce essential aspects of real material behavior in a physically consistent manner, the applied approach to find a set of governing equations has some disadvantages. On the one hand, this set of equations has to be formulated manually, which becomes increasingly difficult for more complex materials in three dimensions and, on the other hand, it can only reproduce the phenomena taken into account during modeling. In this section, some possible methods are presented that, based on NNs, are intended to find arbitrary correlations automatically, i.e., without any additional manual modeling effort. These methods are divided into three categories, which describe the amount of incorporated physics: (i) the _black box models_ with no physical information, (ii) _NNs enforcing physics in a weak form_, which are trained to satisfy the second law, and (iii) _NN-based models enforcing physics in a strong form_, in which the second law is satisfied a priori. Each method requires a training data set, whose generation is outlined in Sect. 5. The implementation is done in Python using the Tensorflow library, with the results being presented in Sect. 6.

### Black box models

Black box models are characterized by their lack of incorporated physical knowledge, i.e., they take a set of input values (depending on the concrete model) and output the desired quantity (the stress) without respecting relationships known from continuum mechanics. Two simple architectures are examined, one using a feedforward network, the other using a recurrent cell.

#### 4.1.1 Feedforward architecture with stress as output (FNN\({}^{\sigma}\))

This architecture is the simplest of the six architectures presented in this work and consists of only a single FNN, which outputs the stress directly. It receives the new strain \({}^{n+1}\epsilon\) as well as strains \({}^{m}\epsilon\), stresses \({}^{m}\sigma\) and time increments \({}^{m}\Delta t:={}^{m+1}t-{}^{m}t\) of an arbitrary number of \(N^{\text{pt}}\geq 1\) preceding time steps and outputs the new stress, that is

\[\mathbf{I}:=({}^{n+1}\epsilon,{}^{n}\epsilon,{}^{n}\sigma,{}^{n}\Delta t,\ldots,{}^{n+1-N^{\text{pt}}}\epsilon,{}^{n+1-N^{\text{pt}}}\sigma,{}^{n+1-N^{\text{pt}}}\Delta t)\mapsto O:={}^{n+1}\sigma\quad. \tag{14}\]

The time increments \({}^{m}\Delta t\) in the input are only necessary for rate-dependent behavior, i.e., for viscoelasticity. The loss function contains only the stress prediction,

\[\mathcal{L}:=\mathcal{L}^{\sigma}:=\text{MAE}(\sigma)\quad. \tag{15}\]

Similar approaches, e.g., the nested adaptive neural networks for constitutive modeling, have been described earlier [23, 24, 26]. However, as the results in Sect. 6 show, this method can only be applied to specific classes of material behavior. An improved architecture based on a recurrent cell can be used to overcome this restriction.
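For completeness, a sketch of how the data tuples of Eq. (14) can be assembled from one stress-strain sequence is given below; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def fnn_sigma_inputs(eps, sig, dt, n_pt=1):
    """Assemble FNN^sigma training pairs from one sequence, Eq. (14):
    input (eps_{n+1}, eps_n, sig_n, dt_n, ...) over n_pt preceding steps,
    expected output sig_{n+1}. dt holds the N_ts - 1 time increments."""
    inputs, outputs = [], []
    for n in range(n_pt - 1, len(eps) - 1):
        row = [eps[n + 1]]
        for m in range(n, n - n_pt, -1):   # preceding steps n, n-1, ...
            row += [eps[m], sig[m], dt[m]]
        inputs.append(row)
        outputs.append(sig[n + 1])
    return np.array(inputs), np.array(outputs)
```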
#### 4.1.2 Recurrent architecture with stress as output (RNN\({}^{\sigma}\))

The second black box architecture consists of a _recurrent cell_ followed by an FNN as shown in Fig. 6. Such models are used in the works of Wu et al. [29]. For each time step in every sequence, the RNN cell takes as input the new strain and, if necessary, the time increment and outputs the new hidden state, which is subsequently fed into the FNN to receive the new stress,

\[{}^{n}\mathbf{I}^{\text{RNN}}:=({}^{n}\epsilon,{}^{n-1}\Delta t)\mapsto{}^{n}\mathbf{h}=:{}^{n}\mathbf{I}^{\text{FNN}}\mapsto{}^{n}O^{\text{FNN}}:={}^{n}\sigma\quad. \tag{16}\]

The initial states before the first time step of hidden state as well as cell state are set to \({}^{0}\mathbf{h}=\mathbf{0}\) and \({}^{0}\mathbf{c}=\mathbf{0}\). The loss function

\[\mathcal{L}:=\mathcal{L}^{\sigma}:=\text{MAE}(\sigma) \tag{17}\]

averages the error in the stress prediction over all time steps and sequences. This architecture is capable of modeling a broad variety of materials. Nevertheless, neither is any physical knowledge provided to the network nor can any additional physical information be obtained besides the stress response. Therefore, two physically informed models are presented below.

### Neural networks enforcing physics in a weak form

In contrast to black box models, _NN-based models enforcing physics in a weak form_ take into account known relations, namely Eq. (3)\({}_{2}\) and Eq. (3)\({}_{3}\). This aims at making predictions that are actually physically consistent and gaining additional information, particularly the free energy and the dissipation rate. The networks are trained to respect these relations, but the architecture itself cannot guarantee \(D\geq 0\) a priori. To incorporate the relations into the training process, the network must be differentiated with respect to its inputs. The differentiation of a network results in restrictions regarding the choice of activation functions: since gradients of the loss function are required during the optimization, the activation functions are differentiated a second time. Consequently, the activations must be differentiable twice to enable an optimization [10, 11, 47]. Therefore, the hyperbolic tangent and the softplus activation are used for the corresponding networks in the following.

#### 4.2.1 Feedforward architecture with internal variables and free energy as output (FNN\({}^{\xi+\psi}\))

The architecture described below is based on Masi et al. [11] and is studied in detail therein. It uses two FNNs with different tasks, as shown in Fig. 7. The first subnetwork sFNN\({}^{\xi}\) receives the new and old strain, old stress as well as the time increment and internal variables of the previous time step and is trained to predict new internal variables:

\[\mathbf{I}^{\text{sFNN}^{\xi}}:=({}^{n+1}\epsilon,{}^{n}\epsilon,{}^{n}\sigma,{}^{n}\Delta t,{}^{n}\xi^{1},\dots,{}^{n}\xi^{N})\mapsto\mathbf{O}^{\text{sFNN}^{\xi}}:={}^{n+1}\boldsymbol{\xi}\quad. \tag{18}\]

The following second subnetwork sFNN\({}^{\psi}\) uses the output of the first network alongside the new strain to determine the free energy, i.e.,

\[\mathbf{I}^{\text{sFNN}^{\psi}}:=({}^{n+1}\epsilon,{}^{n+1}\xi^{1},\dots,{}^{n+1}\xi^{N})\mapsto O^{\text{sFNN}^{\psi}}:={}^{n+1}\psi\quad. \tag{19}\]

Since sFNN\({}^{\xi}\) takes \({}^{n}\boldsymbol{\xi}\) as input, the availability of internal variables in the training data set is essential. The free energy is not required but improves the network performance. Within this study, the goal is to use as little information as possible, thus considering the free energy as not available during training. This applies to all of the following architectures as well.

Figure 7: Functionality of the FNN\({}^{\xi+\psi}\) architecture: Based on the input values, the first subnetwork sFNN\({}^{\xi}\) predicts a new set of internal variables, which alongside the new strain is passed on to the second subnetwork sFNN\({}^{\psi}\) to predict the new free energy. Differentiating with respect to the new strain yields the new stress.
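The differentiation step can be sketched with automatic differentiation; `psi_net` below is a placeholder for a twice-differentiable energy network with one internal variable, and all values are illustrative:

```python
import tensorflow as tf

# Placeholder energy network mapping (eps, xi) to psi; softplus hidden
# activations keep it twice differentiable.
psi_net = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softplus", input_shape=(2,)),
    tf.keras.layers.Dense(1),
])

eps = tf.Variable([[0.01]])      # strain (batch of one)
xi = tf.constant([[0.002]])      # one internal variable, illustrative

with tf.GradientTape() as tape:
    psi = psi_net(tf.concat([eps, xi], axis=1))
sigma = tape.gradient(psi, eps)  # stress as derivative of the energy
```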
In contrast to the original work [11], the training process is not performed as a single optimization considering both networks, but is split into two smaller steps, speeding up the optimization: First, sFNN\({}^{\xi}\) is trained independently, i.e., detached from sFNN\({}^{\psi}\), using

\[\mathcal{L}^{\text{sFNN}^{\xi}}:=\mathcal{L}^{\xi}\quad\text{with}\quad\mathcal{L}^{\xi}:=\frac{1}{N}\sum_{\alpha=1}^{N}\text{MAE}\left(\xi^{\alpha}\right) \tag{20}\]

to learn to predict the internal variables. Subsequently, both networks are used together in order to train the second network via the loss function

\[\mathcal{L}^{\text{sFNN}^{\psi}}:=w^{\sigma}\mathcal{L}^{\sigma}+w^{\psi}\mathcal{L}^{\psi}+w^{D\geq 0}\mathcal{L}^{D\geq 0}\quad\text{with}\quad\mathcal{L}^{\sigma}:=\text{MAE}\left(\sigma\right)\quad, \tag{21}\]

\[\mathcal{L}^{\psi}:=\text{MAE}\left(\psi\right)\quad\text{and}\quad\mathcal{L}^{D\geq 0}:=\sum_{k=1}^{N^{\text{ds}}}\text{ReLU}\left(-{}^{n+1}D_{k}\right)\quad\text{where}\quad{}^{n+1}D=-\sum_{\alpha=1}^{N}\frac{\partial\,{}^{n+1}\psi}{\partial\,{}^{n+1}\xi^{\alpha}}\,\frac{{}^{n+1}\xi^{\alpha}-{}^{n}\xi^{\alpha}}{{}^{n}\Delta t}\quad. \tag{22}\]

The non-trainable parameters \(w^{\sigma}\), \(w^{\psi}\) and \(w^{D\geq 0}\) regulate the influence of each term on the value of the loss function. During this optimization, the loss is only differentiated with respect to the weights of sFNN\({}^{\psi}\), so that only these weights are adjusted and the weights of sFNN\({}^{\xi}\) retain their values as obtained from the first training step. However, the provision of internal variables in the training data for more complex material behavior is not a trivial task, which is addressed in Masi and Stefanou [47]. In order to avoid providing internal variables, a similar architecture seems appropriate, in which sFNN\({}^{\xi}\) is replaced by a recurrent cell.
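Before turning to the recurrent variant, the dissipation term of Eqs. (21)-(22) can be sketched as follows; `psi_net` and all arguments are placeholders for illustration:

```python
import tensorflow as tf

# Penalty for negative dissipation rates, cf. Eq. (22): the internal forces
# follow from differentiating the energy network w.r.t. the internal
# variables, and ReLU(-D) penalizes violations of D >= 0.
def dissipation_penalty(psi_net, eps, xi_new, xi_old, dt):
    with tf.GradientTape() as tape:
        tape.watch(xi_new)
        psi = psi_net(tf.concat([eps, xi_new], axis=1))
    tau = -tape.gradient(psi, xi_new)           # internal forces
    d_rate = tf.reduce_sum(tau * (xi_new - xi_old) / dt, axis=1)
    return tf.reduce_sum(tf.nn.relu(-d_rate))   # L^{D>=0}
```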
#### 4.2.2 Recurrent architecture with internal variables and free energy as output (RNN\({}^{\xi+\psi}\))

Since RNNs are capable of storing information from time steps far in the past, it seems reasonable to exploit this property to mimic internal variables. Fig. 8 shows a possible implementation of this idea, which is similar to He and Chen [48]. Therein, at each time step, the RNN cell takes the new strain as argument as well as the time increment in case of viscous behavior and produces a new vector-valued output \({}^{n}\mathbf{h}\), which carries the history information and is initialized with \({}^{0}\mathbf{h}={}^{0}\mathbf{c}=\mathbf{0}\). Subsequently, these values are fed into an FNN and are reduced to a freely selectable number \(N^{\xi}\) of internal variables:

\[{}^{n}\mathbf{I}^{\text{RNN}}:=({}^{n}\epsilon,{}^{n-1}\Delta t)\mapsto{}^{n}\mathbf{h}=:{}^{n}\mathbf{I}^{\text{FF}^{\xi}}\mapsto{}^{n}\mathbf{O}^{\text{FF}^{\xi}}:={}^{n}\boldsymbol{\xi}\in\mathbb{R}^{N^{\xi}}\quad. \tag{23}\]

Within another FNN, these values along with the new strain are projected onto the free energy, i.e.,

\[\mathbf{I}^{\text{FF}^{\psi}}:=({}^{n}\epsilon,{}^{n}\xi^{1},\dots,{}^{n}\xi^{N^{\xi}})\mapsto O^{\text{FF}^{\psi}}:={}^{n}\psi\quad. \tag{24}\]

The loss function

\[\mathcal{L}:=w^{\sigma}\mathcal{L}^{\sigma}+w^{\xi}\mathcal{L}^{\xi}+w^{\psi}\mathcal{L}^{\psi}+w^{D\geq 0}\mathcal{L}^{D\geq 0}\quad\text{with} \tag{25}\]

\[\mathcal{L}^{\sigma}:=\text{MAE}\left(\sigma\right)\quad,\quad\mathcal{L}^{\xi}:=\frac{1}{N_{\text{av}}}\sum_{\alpha=1}^{N_{\text{av}}}\text{MAE}\left(\xi_{\text{av}}^{\alpha}\right)\quad,\quad\mathcal{L}^{\psi}:=\text{MAE}\left(\psi\right)\quad\text{and} \tag{26}\]

\[\mathcal{L}^{D\geq 0}:=\frac{1}{N^{\text{seq}}}\sum_{s=1}^{N^{\text{seq}}}\left(\frac{1}{N_{s}^{\text{ts}}}\sum_{n=1}^{N_{s}^{\text{ts}}}\text{ReLU}\left(-{}^{n}D_{s}\right)\right)\quad\text{where}\quad{}^{n}D=-\sum_{\alpha=1}^{N}\frac{\partial\,{}^{n}\psi}{\partial\,{}^{n}\xi^{\alpha}}\,\frac{{}^{n}\xi^{\alpha}-{}^{n-1}\xi^{\alpha}}{{}^{n-1}\Delta t}\quad, \tag{27}\]

is composed of four terms that control the prediction of stress, internal variables and free energy as well as the compliance with thermodynamic consistency, i.e., \(D\geq 0\). In the error term for the internal variables \(\mathcal{L}^{\xi}\), only those internal variables can be considered that are actually available in the training data set, in Eq. (26) denoted as \(\xi_{\text{av}}\) for \(N_{\text{av}}\) available internal variables. This might be the measurable plastic strain, for example. Consequently, if no internal variables are known, this term is omitted and the network has to find reasonable representations of internal variables on its own. The free energy term \(w^{\psi}\mathcal{L}^{\psi}\) is also optional and can be omitted if the free energy is unknown.

Figure 8: Functionality of the RNN\({}^{\xi+\psi}\) architecture: For each time step in a sequence, new strain and time increment are fed into the RNN cell, whose output is passed on to the feedforward network sFNN\({}^{\xi}\) to obtain the new set of internal variables. Together with the new strain, these serve as input in a final feedforward network sFNN\({}^{\psi}\) which predicts the new free energy. Differentiating sFNN\({}^{\psi}\) yields the new stress.

### Neural networks enforcing physics in a strong form

The last category of models, denoted as _NNs enforcing physics in a strong form_ here, in contrast to the previous model class, satisfies \(D\geq 0\) a priori by construction of the model. To achieve this, the concept of generalized standard materials summarized in Sect. 2.1.2 is adopted into the data-driven paradigm by applying ICNNs [51]. This has been proposed for viscoelasticity by Huang et al. [49] as well as As'ad and Farhat [7]. In the following, three models, FNN\({}^{\psi+\phi}\), FNN\({}^{\psi+\phi^{*}}\), and FNN\({}^{\psi+\phi+\xi}\), are introduced in detail.

**Remark 2.** It should be noted that an application of the introduced approaches FNN\({}^{\psi+\phi}\) and FNN\({}^{\psi+\phi^{*}}\) to data belonging to a rate-independent material automatically leads to a regularization [59], i.e., an approximation of the data by a rate-dependent model. This is due to the fact that the chosen activation function cannot represent the non-smooth dissipation potentials typical for plasticity, cf. Tab. 1.
#### 4.3.1 FNN-architecture with free energy and dissipation potential as output (FNN\({}^{\psi+\phi}\))

The first architecture of this category, FNN\({}^{\psi+\phi}\), is composed of two FNNs, modeling the free energy \(\psi\) and the dissipation potential \(\phi\). With the requirements on \(\phi\) from Sect. 2.1, i.e., (i) convexity in \(\dot{\xi}\), (ii) \(\phi(\dot{\xi}=\mathbf{0},\xi,\epsilon)=0\), and (iii) \(\phi(\dot{\xi}=\mathbf{0},\xi,\epsilon)\) is a minimum in \(\dot{\xi}\), this model can be used to make predictions that are _a priori thermodynamically consistent_. Convexity of the dissipation potential with respect to only \(\dot{\xi}\), but not \(\dot{\xi}\) and \(\epsilon\), is achieved through the multiplicative split

\[\phi^{\text{NN}}(\dot{\xi},\xi,\epsilon):=\phi^{\text{con}}(\dot{\xi})\,\phi^{+}(\epsilon,\xi) \tag{28}\]

into a convex part \(\phi^{\text{con}}(\dot{\xi})\) depending on only the rate of the internal variables and a positive part \(\phi^{+}(\epsilon,\xi)\) depending on the strain and the internal variables themselves. Each part is modeled by a single FNN. For \(\phi^{\text{con}}\) to be convex, it requires convex and non-decreasing activation functions, here the softplus activation \(\text{SP}(x):=\ln(1+\exp(x))\), and non-negative weights across the whole network [51, 16, 5]. Positivity of \(\phi^{+}\) in turn requires only the weights and bias of the output layer to be non-negative and positive activation functions, here also the softplus activation. As another, more general method to construct a network that is convex in only some of its inputs, a _partially input convex neural network_ (PICNN) as presented by Amos et al. [51] could be used as well.

To meet the normalization conditions (ii) and (iii), the output \(\phi^{\text{NN}}\) of the combined network given in Eq. (28) is modified via

\[\phi(\dot{\xi},\xi,\epsilon):=\phi^{\text{NN}}(\dot{\xi},\xi,\epsilon)-\phi^{\text{NN}}(\dot{\xi}=\mathbf{0},\xi,\epsilon)-\frac{\partial\phi^{\text{NN}}(\dot{\xi}=\mathbf{0},\xi,\epsilon)}{\partial\dot{\xi}}\cdot\dot{\xi}\geq 0\quad\forall\,\dot{\xi},\xi,\epsilon\quad. \tag{29}\]

Likewise, the ansatz

\[\psi(\epsilon,\xi):=\psi^{\text{NN}}(\epsilon,\xi)-\psi^{\text{NN}}(\epsilon=0,\xi=\mathbf{0})-\frac{\partial\psi^{\text{NN}}(\epsilon=0,\xi=\mathbf{0})}{\partial\epsilon}\,\epsilon-\frac{\partial\psi^{\text{NN}}(\epsilon=0,\xi=\mathbf{0})}{\partial\xi}\cdot\xi \tag{30}\]

guarantees that free energy, stress and internal forces equal zero in the initial, unloaded state [49].

Now, after the model formulation given above, it is explained how the training algorithm and the prediction of new time steps with a calibrated FNN\({}^{\psi+\phi}\) architecture work. Thereby, tuples of stress \(\sigma\), strain \(\epsilon\), and internal variables \(\xi\) are needed for the training of \(\phi\) and \(\psi\) with Eqs. (29) and (30).\({}^{2}\)

Footnote 2: To be able to use only stress and strain for training, it is also possible to integrate the full path of \(\xi\) in each training epoch. However, since this requires backpropagation over all time steps, the training becomes much more time consuming [52]. Alternatively, following As'ad and Farhat [7], a further network for the approximation of the internal variables \(\xi\) during training can be applied. This also enables one to end up with a training data set containing only tuples of \(\sigma\) and \(\epsilon\), whereby the training time is reduced compared to the first method. Another possibility is to determine the internal variables in advance within a preprocessing step. How this can be done is described in Ladeveze et al. [60] or Gerbaud et al. [61]. Here, the second method is used, see Sect. 4.3.3.
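The split of Eq. (28) and the normalization of Eq. (29) can be sketched as follows for a single internal variable; both subnetworks are illustrative placeholders with the constraints stated above:

```python
import tensorflow as tf

nonneg = tf.keras.constraints.NonNeg()
# phi^con: convex, non-decreasing (softplus, non-negative weights)
phi_con = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="softplus",
                          kernel_constraint=nonneg, input_shape=(1,)),
    tf.keras.layers.Dense(1, activation="softplus", kernel_constraint=nonneg),
])
# phi^+: positive output (softplus output layer, non-negative output weights)
phi_pos = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="softplus", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="softplus", kernel_constraint=nonneg),
])

def phi(xi_dot, xi, eps):
    """Normalized dissipation potential, Eqs. (28)-(29)."""
    def phi_nn(xd):  # Eq. (28): phi^con(xi_dot) * phi^+(eps, xi)
        return phi_con(xd) * phi_pos(tf.concat([eps, xi], axis=1))
    zero = tf.zeros_like(xi_dot)
    with tf.GradientTape() as tape:
        tape.watch(zero)
        phi0 = phi_nn(zero)
    slope0 = tape.gradient(phi0, zero)          # tangent at xi_dot = 0
    return phi_nn(xi_dot) - phi0 - slope0 * xi_dot
```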
Training of the three networks \(\phi^{\text{con}}(\dot{\xi})\), \(\phi^{+}(\epsilon,\xi)\), and \(\psi^{\text{NN}}(\epsilon,\xi)\) is performed as follows: The training starts by predicting \({}^{n}\psi\) with Eq. (30) and obtaining \({}^{n}\sigma\) and \({}^{n}\tau^{\alpha}\) via differentiation. Following Eq. (4), the internal forces shall equal the dissipation potential differentiated with respect to the rate of the corresponding internal variable. This rate is approximated with \({}^{n}\dot{\xi}^{\alpha}:=({}^{n}\xi^{\alpha}-{}^{n-1}\xi^{\alpha})/{}^{n-1}\Delta t\). The difference of the predicted internal forces \({}^{n}\tau^{\alpha}=-\frac{\partial\,{}^{n}\psi}{\partial\,{}^{n}\xi^{\alpha}}\) and \({}^{n}\hat{\tau}^{\alpha}:=\frac{\partial\,{}^{n}\phi}{\partial\,{}^{n}\dot{\xi}^{\alpha}}\) as well as the stress prediction are included in the loss function

\[\mathcal{L}:=w^{\sigma}\mathcal{L}^{\sigma}+w^{\text{Biot}}\mathcal{L}^{\text{Biot}}\quad\text{with}\quad\mathcal{L}^{\sigma}:=\text{MAE}\left(\sigma\right)\quad\text{and}\quad\mathcal{L}^{\text{Biot}}:=\frac{1}{N}\sum_{\alpha=1}^{N}\text{MAE}\left(\tau^{\alpha}\right)\quad, \tag{31}\]

which is to be minimized in the process. Once training is finished and good representations of \(\psi\) and \(\phi\) are found, these potentials can be used to predict the material response for a given strain path. Therefore, in each time step \(n\), the rates of the internal variables \({}^{n}\dot{\xi}^{\alpha}\) are adapted iteratively using a _Newton-Raphson scheme_, such that \(\max_{\alpha}|{}^{n}\tau^{\alpha}-{}^{n}\hat{\tau}^{\alpha}|<e\) with a given tolerance \(e\), where the new internal variables are obtained via \({}^{n}\xi^{\alpha}={}^{n-1}\xi^{\alpha}+{}^{n-1}\Delta t\,{}^{n}\dot{\xi}^{\alpha}\), see the scheme given in Fig. 9.

Figure 9: Functionality of the FNN\({}^{\psi+\phi}\) architecture: With the rate of internal variables of the current iteration, the internal forces are calculated using the network of the free energy (\(\tau\), left) and the networks of the dissipation potential (\(\hat{\tau}\), right). The rate of internal variables is now adapted iteratively, such that \(\tau\approx\hat{\tau}\).
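A scalar sketch of this prediction scheme is given below; `tau_fun` and `tau_hat_fun` stand in for the differentiated energy and dissipation networks, and the derivative of the residual is approximated numerically:

```python
# One prediction step of Fig. 9 for a single internal variable: iterate
# xi_dot with a Newton-Raphson scheme until the Biot equation tau = tau_hat
# holds, then update xi implicitly. All function names are placeholders.
def newton_step(tau_fun, tau_hat_fun, eps_new, xi_old, dt, tol=1e-8, h=1e-7):
    xi_dot = 0.0
    for _ in range(50):
        xi_new = xi_old + dt * xi_dot
        r = tau_fun(eps_new, xi_new) - tau_hat_fun(xi_dot, xi_new, eps_new)
        if abs(r) < tol:
            break
        # numerical derivative of the residual w.r.t. xi_dot
        xi_p = xi_old + dt * (xi_dot + h)
        r_p = tau_fun(eps_new, xi_p) - tau_hat_fun(xi_dot + h, xi_p, eps_new)
        xi_dot -= r * h / (r_p - r)
    return xi_old + dt * xi_dot
```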
#### 4.3.2 FNN-architecture with free energy and dual dissipation potential as output (FNN\({}^{\psi+\phi^{*}}\))

The second architecture presented here is the FNN\({}^{\psi+\phi^{*}}\) model, which makes use of the _dual dissipation potential_ \(\phi^{*}(\tau,\xi,\epsilon)\) according to Eq. (5). The model equations are thus similar to Eqs. (28)-(30), with the difference that \(\phi\) has to be replaced by \(\phi^{*}\). The dual dissipation potential \(\phi^{*}\) has to guarantee (i) convexity in \(\tau\), (ii) \(\phi^{*}(\tau=\mathbf{0},\xi,\epsilon)=0\), and (iii) \(\phi^{*}(\tau=\mathbf{0},\xi,\epsilon)\) is a minimum in \(\tau\), to get an a priori thermodynamically consistent model. Again, convexity of \(\phi^{*}\) with respect to only \(\tau\) is achieved through a multiplicative split similar to Eq. (28).

Besides the difference in the model equations, the FNN\({}^{\psi+\phi^{*}}\) architecture differs in its training algorithm and the method to predict the new time step. In FNN\({}^{\psi+\phi^{*}}\), training with respect to a data set consisting of tuples of \(\sigma\), \(\epsilon\), and \(\xi\) is performed as follows: First, the free energy \({}^{n}\psi\) is calculated and \({}^{n}\sigma\) and \({}^{n}\tau^{\alpha}\) are obtained via differentiation. Subsequently, the internal forces \({}^{n}\tau^{\alpha}\) are fed into the combined network of the dual dissipation potential and \({}^{n}\phi^{*}\) is received. Differentiating \({}^{n}\phi^{*}\) with respect to the internal forces yields the rates of the internal variables \({}^{n}\dot{\xi}^{\alpha}\). The loss function

\[\mathcal{L}:=w^{\sigma}\mathcal{L}^{\sigma}+w^{\dot{\xi}}\mathcal{L}^{\dot{\xi}}\quad\text{with}\quad\mathcal{L}^{\sigma}:=\text{MAE}\left(\sigma\right)\quad\text{and}\quad\mathcal{L}^{\dot{\xi}}:=\frac{1}{N}\sum_{\alpha=1}^{N}\text{MAE}\left(\dot{\xi}^{\alpha}\right) \tag{32}\]

now compares the predictions of the stress and the rates of the internal variables with their expected values \({}^{n}\bar{\sigma}\) and \({}^{n}\bar{\dot{\xi}}^{\alpha}\), respectively, where \({}^{n}\bar{\dot{\xi}}^{\alpha}=({}^{n}\bar{\xi}^{\alpha}-{}^{n-1}\bar{\xi}^{\alpha})/{}^{n-1}\Delta t\). Once the three networks are trained, the material state for a time step can be obtained using the pattern in Fig. 10, where, due to the implicit description \({}^{n}\xi^{\alpha}={}^{n-1}\xi^{\alpha}+{}^{n-1}\Delta t\,{}^{n}\dot{\xi}^{\alpha}\), the evolution of the internal variables is determined iteratively using a Newton-Raphson scheme, such that \(\max_{\alpha}|{}^{n}\dot{\xi}^{\alpha}-{}^{n}\hat{\dot{\xi}}^{\alpha}|<e\) with a given tolerance \(e\).

Figure 10: Functionality of the FNN\({}^{\psi+\phi^{*}}\) architecture: Using the rate of internal variables of the current iteration, the internal variables alongside the strain are fed into the network of the free energy. Stress and internal forces are obtained via differentiation. The internal forces are passed on to the convex network for \(\phi^{*\text{con}}\), strain and internal variables to the positive network for \(\phi^{*+}\). Multiplying \(\phi^{*\text{con}}\) and \(\phi^{*+}\) yields the dual dissipation potential, which is differentiated with respect to the internal forces to receive \(\hat{\dot{\xi}}\). The rate of internal variables is now adapted iteratively, such that \(\dot{\xi}\approx\hat{\dot{\xi}}\).
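The central step that distinguishes Fig. 10 from Fig. 9 can be sketched in a few lines; `phi_star` is a placeholder for the combined and normalized dual-potential network:

```python
import tensorflow as tf

# For the dual formulation, the rate of the internal variables follows by
# differentiating the dual potential w.r.t. the internal forces,
# xi_dot = d(phi*)/d(tau), cf. Eq. (5).
def internal_variable_rate(phi_star, tau, xi, eps):
    with tf.GradientTape() as tape:
        tape.watch(tau)
        value = phi_star(tau, xi, eps)
    return tape.gradient(value, tau)   # xi_dot
```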
Once the training is finished, the auxiliary network is no longer necessary and the prediction process can be carried out in the same manner as described in Fig. 9. To make the training process easier for the optimizer, the courses of \(\xi^{\text{NN}}(t)\) that are to be represented with the auxiliary network should to be smooth and rather simple. This is done by choosing a smooth strain path as explained in Sect. 5.1. As'ad and Farhat [7] further propose an additive split of the free energy into an equilibrium and a dissipative part. In order to retain comparability and consistency with \(\text{FNN}^{\psi+\phi}\), this split is not performed herein. However, it should be noted that this additional assumption significantly facilitates training. In addition to the loss function Eq. (31), another term \(w^{[\xi]}\mathcal{L}^{[\xi]}\) is added. This term has the purpose to keep the internal variables small. Otherwise, the chosen internal variables become unboundedly large. To illustrate this, consider a constitutive model with free energy function \(\psi=\psi(\epsilon,\xi)=\frac{1}{2}E(\epsilon-\xi)^{2}\) with corresponding stress and internal force \(\sigma=\tau=E(\epsilon-\xi)\). An equivalent expression with another internal variable \(\tilde{\xi}\) is \(\tilde{\psi}(\epsilon,\tilde{\xi})=\frac{1}{2}E(\epsilon-\frac{1}{A}\tilde{\xi})\) where \(\tilde{\xi}=A\xi\) and \(\tilde{\sigma}=E(\epsilon-\frac{1}{A}\tilde{\xi})=\sigma\) and \(\tilde{\tau}=\frac{1}{A}E(\epsilon-\frac{1}{A}\tilde{\xi})\). The loss term \(\mathcal{L}^{\text{Biot}}\) rewards small absolute values of the internal forces, i.e., a representation with large scaling factor \(|A|\), resulting in \(|\tilde{\xi}|\gg|\xi|\). This leads to difficulties in the prediction process later on. The new loss thus reads \[\mathcal{L}\,:=w^{\sigma}\mathcal{L}^{\sigma}+w^{\text{Biot}}\mathcal{L}^{ \text{Biot}}+w^{[\xi]}\mathcal{L}^{[\xi]}\quad\text{with}\quad\mathcal{L}^{[ \xi]}\,:=\frac{1}{N^{\text{ds}}}\sum_{n=1}^{N^{\text{ds}}}\sum_{\alpha=1}^{N} |\xi_{\alpha}^{\text{NN}}(\sfrac{n}{t})|\quad, \tag{33}\] where \(\mathcal{L}^{[\text{NN}]}\) penalizes large absolute values of the chosen internal variable. Note that this does not imply, that \(\text{sign}(\tilde{\xi})=\text{sign}(\xi)\), since \(A<0\) is still a valid option. Within the scope of this study, only a single internal variable is predicted. Another network for another internal variable or a network with two outputs could be used as well, but this is not shown herein. ## 5 Generation of the database for training and validation Prior to studying the usability of the presented material models based on NNs, a database must be generated for the training process and, later on, as a reference for validation. Thereby, four different models according to Sects. 2.2.1 and 2.2.2 are used to generate material states belonging to a prescribed strain path: a _nonlinear viscoelastic model with one Maxwell element_ (V1), a _linear viscoelastic model with two Maxwell elements_ (V2), an _elastoplastic model with kinematic hardening_ (P1), as well as an _elastoplastic model with mixed kinematic-isotropic hardening_ (P2). The models' governing equations given in Tab. 1 are solved by applying an implicit Euler scheme. The chosen parameters of these reference constitutive models can be found in Tab. 2. 
### Generation of training data

Two different methods to generate training data are used. The first method can be applied to all architectures except for FNN\({}^{\psi+\phi+\xi}\) and uses random walk sequences. The FNN\({}^{\psi+\phi+\xi}\) requires a smooth strain path, which is given by a cubic spline combining a chosen set of knots. Both methods are described in detail below.

#### 5.1.1 Random walk strain paths

Four different databases are generated, one for each examined material V1, V2, P1 and P2. Since the recurrent architectures require several independent sequences in their training data, the data points are generated in multiple sequences. For the feedforward architectures, these sequences are decomposed back into their separate time steps to obtain the required input-output pairs or data tuples. Each sequence is created from a random walk regarding the strain path, starting from the initial material state with \({}^{0}\sigma={}^{0}\epsilon={}^{0}\xi^{\alpha}=0\). A strain increment \({}^{0}\Delta\epsilon\), sampled from a normal distribution with standard deviation \(s^{\Delta\epsilon}\) around mean 0, is applied to the initial state in a time increment \({}^{0}\Delta t\) sampled from a uniform distribution \({}^{0}\Delta t\in(\Delta t_{\text{min}}\,,\,\Delta t_{\text{max}})\). The strain rate is constant within this interval. The next material state is obtained by applying another strain increment \({}^{1}\Delta\epsilon\) within another time increment \({}^{1}\Delta t\), and so on, until the sequence contains \(N^{\text{ts}}\) time steps. Here, sequences of length \(N^{\text{ts}}=100\) have shown to perform well for the recurrent architectures, independent of the examined material. In order to limit the strain to a reasonable range, the absolute value of the strain may not exceed \(|\epsilon|_{\text{max}}\). That is, if \(|{}^{n}\epsilon+{}^{n}\Delta\epsilon|>|\epsilon|_{\text{max}}\), the strain increment \({}^{n}\Delta\epsilon\) is sampled again until \(|{}^{n}\epsilon+{}^{n}\Delta\epsilon|\leq|\epsilon|_{\text{max}}\). The parameters of the random walk are chosen to be \(s^{\Delta\epsilon}=0.25\,\%\), \(|\epsilon|_{\text{max}}=2\,\%\), \(\Delta t_{\text{min}}=0.02\,\text{s}\) and \(\Delta t_{\text{max}}=0.1\,\text{s}\). Exemplarily, the strain path and respective stress response for the first sequence of the database for V1 are shown in Fig. 11a.
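The random walk can be sketched with the stated parameters as follows; the function name is illustrative and the material response is computed separately with the reference models:

```python
import numpy as np

rng = np.random.default_rng()

def random_walk(n_ts=100, s_deps=0.0025, eps_max=0.02,
                dt_min=0.02, dt_max=0.1):
    """One training sequence: N_ts strain increments from N(0, s_deps),
    time increments uniform in (dt_min, dt_max), resampling whenever the
    strain would leave [-eps_max, eps_max]."""
    eps, dt = [0.0], []
    for _ in range(n_ts):
        d_eps = rng.normal(0.0, s_deps)
        while abs(eps[-1] + d_eps) > eps_max:   # resample out-of-range steps
            d_eps = rng.normal(0.0, s_deps)
        eps.append(eps[-1] + d_eps)
        dt.append(rng.uniform(dt_min, dt_max))
    return np.array(eps), np.array(dt)
```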
The actual training data sets, i.e., the data that is effectively used during training, comprise a number of \(N^{\text{seq}}\) sequences or \(N^{\text{ds}}\) data tuples taken from this database. It was found that \(N^{\text{seq}}=100\) sequences for the recurrent or \(N^{\text{ds}}=1000\) data tuples for the feedforward architectures are sufficient without a loss of prediction quality, independent of the material behavior to be modeled. The selection of data from each database starts with the first sequence or the first time step of the first sequence and is continued chronologically. This means that the training data set of all feedforward architectures consists of information from the identical material states taken from the first 10 sequences, and the recurrent architectures receive the same set of 100 sequences.

#### 5.1.2 Smooth strain path

In order to enable the NN representation of the internal variable with the auxiliary network, the FNN\({}^{\psi+\phi+\xi}\) architecture requires the training data to be generated from a simpler, smoother strain path. This smooth strain path is described by a cubic spline connecting a set of manually chosen knots in the \(\epsilon\)-\(t\) space [7]. Once the spline is defined, the data points are generated by applying this strain path to the analytical models with uniformly distributed time increments \(\Delta t\in(0.01\,\text{s},0.02\,\text{s})\) until the data set contains 900 time steps. The same strain path is used for all materials V1, V2, P1 and P2 and is shown in Fig. 11b together with the corresponding stress response for V1. The knots were chosen such that the data generated with this path are comparable to the data from the random walk sequences.

Figure 11: Stresses and strains over time in the training data for the nonlinear viscoelastic material (V1): **(a)** the first sequence taken from the non-smooth random walk database and **(b)** the smooth path generated with cubic splines used for FNN\({}^{\psi+\phi+\xi}\).
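A sketch of this path generation with SciPy is given below; the knot values are illustrative and not the ones underlying Fig. 11b:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng()

# 900 time increments uniformly distributed in (0.01 s, 0.02 s)
dts = rng.uniform(0.01, 0.02, size=900)
times = np.concatenate(([0.0], np.cumsum(dts)))

# Cubic spline through manually chosen knots in the eps-t space
t_knots = np.linspace(0.0, times[-1], 6)               # knot locations
eps_knots = np.array([0.0, 0.015, -0.01, 0.018, -0.015, 0.0])
spline = CubicSpline(t_knots, eps_knots)

eps_path = spline(times)   # smooth strain path sampled at the time steps
```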
### Generation of validation data

In order to evaluate and compare the network performances for unseen data, the networks have to predict the constitutive response for a set of consecutive time steps taken from the strain path shown in Fig. 12a, where a constant time increment of \(0.05\,\text{s}\) is used. The path is chosen such that strain, strain rate, and time increments do not exceed the training range. The networks thus do not have to extrapolate for this benchmark test. Reference data points are, analogously to the training data, obtained using an implicit Euler scheme.

Secondly, data for a further validation path with strain, strain rate and time increments exceeding the training range are generated, see Fig. 12b. This will be used to evaluate the extrapolation capabilities of the different models later on. This path includes a small hysteresis beyond the training limits of the strain (\(2\,\text{s}<t\leq 4\,\text{s}\)) and a short interval (\(5\,\text{s}<t\leq 5.08\,\text{s}\)) of 16 time steps with \(\Delta t=0.005\,\text{s}\) and \(\dot{\epsilon}=62.5\,\%\,\text{s}^{-1}\), undercutting the limits of \(\Delta t\) and exceeding the limits of \(\dot{\epsilon}\) in another hysteresis. Afterwards, a short relaxation with constant strain \(\epsilon=0\,\%\) is performed until \(t=6\,\text{s}\). Subsequently, large time increments outside of the training range are applied (\(\Delta t=0.125\,\text{s}\) and \(\Delta t=0.2\,\text{s}\) in \(6\,\text{s}<t\leq 7\,\text{s}\) and \(7\,\text{s}<t\leq 8\,\text{s}\), respectively). For all other time steps, the time increment equals \(\Delta t=0.05\,\text{s}\). Finally, the strain is increased linearly until \(t=10\,\text{s}\) and \(\epsilon=6\,\%\).

Figure 12: Validation paths to evaluate the network performances for **(a)** interpolation and **(b)** extrapolation. The path in (a) is chosen such that neither strain nor time increments exceed the training range. Path (b) in turn exceeds those limits separately and includes sections of large strain rates. Black dashed lines indicate the training data limits.

### Normalization

The first and second derivatives of most common activation functions vanish for large absolute values of their arguments, rendering training impossible. In order to ensure nonzero gradients, the training data set, which for example contains stresses in the order of several MPa, has to be scaled down to values of magnitude 1. Since in general not all relevant features of the training data set are independent from each other, the scaling factors must be chosen appropriately, which is shown in the following. Generally, an independent quantity \(f\) can be normalized to the range \((-1,1)\ni\tilde{f}\) via

\[\tilde{f}=\frac{f-m_{f}}{s_{f}}\quad\Longleftrightarrow\quad f=s_{f}\tilde{f}+m_{f}\quad\text{with}\quad m_{f}=\frac{1}{2}(f_{\text{max}}+f_{\text{min}})\quad\text{and}\quad s_{f}=\frac{1}{2}(f_{\text{max}}-f_{\text{min}})\quad, \tag{34}\]

wherein \(\tilde{f}\) is the scaled value of \(f\), and \(f_{\text{max}}\) and \(f_{\text{min}}\) denote the maximum and minimum value of \(f\) across the whole data set [21].

For the black box models FNN\({}^{\sigma}\) and RNN\({}^{\sigma}\), the necessary quantities \(\epsilon\), \(\sigma\) and \(\Delta t\) can be treated as independent and the scaling factors are thus obtained using Eq. (34). In all other architectures, not every feature is independent from the others. This is due to the occurring differential operators. Furthermore, it is reasonable to choose the scaling factors in such a way that all relations that hold for the unscaled quantities also hold for their scaled values, to avoid back transformations during training. This simplifies and speeds up the calculation of the loss function. For the sake of simplicity, all \(m_{f}\) are set to zero in the following. Considering all remaining architectures, the relevant quantities are \(\epsilon\), \(\sigma\), \(\Delta t\), \(\xi\), \(\tau\), \(\psi\), \(\phi\) and \(\phi^{*}\).
As chosen independent quantities, the scaling factors of \(\epsilon\), \(\Delta t\) and \(\psi\) are obtained using Eq. (34)\({}_{4}\). All other scaling factors follow from the evaluation of the relevant equations under the stated condition, resulting in the following relations:

\[s_{\sigma}=\frac{s_{\psi}}{s_{\epsilon}}\quad,\quad s_{\xi}=s_{\epsilon}\quad,\quad s_{\tau}=\frac{s_{\psi}}{s_{\epsilon}}\quad,\quad s_{\phi}=\frac{s_{\psi}}{s_{\Delta t}}\quad\text{and}\quad s_{\phi^{*}}=\frac{s_{\psi}}{s_{\Delta t}}\quad. \tag{35}\]

The scaling is applied to all features in the respective data set and the training is performed using only these scaled values. The network itself consequently predicts only scaled values. The back transformation Eq. (34)\({}_{2}\) yields the values in familiar physical units. The trained network can now be applied to predict the material response for a given strain path.
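The scaling of Eqs. (34)-(35) with all \(m_{f}=0\) can be sketched as follows; the data arrays are illustrative:

```python
import numpy as np

def scale_factor(values):
    return 0.5 * (values.max() - values.min())   # s_f of Eq. (34)

eps_data = np.array([-0.02, 0.005, 0.02])        # illustrative data sets
dt_data = np.array([0.02, 0.05, 0.1])
psi_data = np.array([0.0, 1.2e6, 3.4e6])

# Independent factors from the data ranges, Eq. (34) ...
s_eps, s_dt, s_psi = map(scale_factor, (eps_data, dt_data, psi_data))
# ... and dependent factors from Eq. (35)
s_sigma = s_psi / s_eps
s_xi = s_eps
s_tau = s_psi / s_eps
s_phi = s_psi / s_dt                             # likewise for phi*

eps_scaled = eps_data / s_eps                    # Eq. (34) with m_f = 0
```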
## 6 Applications

Within this section, building on the data generated according to Sect. 5, each of the seven architectures presented in Sect. 4, i.e., FNN\({}^{\sigma}\), RNN\({}^{\sigma}\), FNN\({}^{\xi+\psi}\), RNN\({}^{\xi+\psi}\), FNN\({}^{\psi+\phi}\), FNN\({}^{\psi+\phi^{*}}\) and FNN\({}^{\psi+\phi+\xi}\), is tested on its capability to rebuild the constitutive response of the materials V1, V2, P1, and P2 after training. This is evaluated by the NN-based models' predicted stress responses for an unknown strain path given in Fig. 12a, where the path is chosen such that strain, strain rate, and time increments do not exceed the training range. In addition, to analyze the extrapolation capability, the models' predicted responses are considered for a second validation path with strain, strain rate, and time increments exceeding the training range, cf. Fig. 12b. In order to reduce the scope of the presented study to a reasonable level, the second validation path is investigated for V1 only. All of the used FNNs consist of an input layer, a single hidden layer and an output layer. The number of neurons in the hidden layer is denoted as \(N_{2}\).

### Training results and validation path without extrapolation

#### 6.1.1 Black box models

**FNN\({}^{\sigma}\)** For each test material model, an FNN\({}^{\sigma}\) as described in Sect. 4.1.1 is created with the hyperparameters and training results given in Tab. 3. The model predictions are presented in Fig. 13. The results for viscoelasticity show that the prediction of the stress behavior with FNN\({}^{\sigma}\) for the material V1 is possible without further difficulties. For V2, in contrast, a precise prediction is only possible with two preceding time steps in the input, i.e., \(N^{\text{pt}}=2\). These limitations of the architecture can be traced back to the ambiguity of the expected output given a particular input. That is, given the material parameters for V1, the overstress \(\sigma_{1}^{\text{ov}}\) can be determined solely from the values of \({}^{n}\epsilon\) and \({}^{n}\sigma\) given in the input. The material state is therefore determined unambiguously. For material V2, on the other hand, only the sum \(\sigma_{1}^{\text{ov}}+\sigma_{2}^{\text{ov}}\) of the overstresses can be determined from these quantities, but not their distinct values. Thus, the inner state is not clearly described and a precise stress prediction is not possible, as Fig. 13b shows. For this reason, the network is provided with further information of an additional time step, see also Fig. 13b.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline\hline
Material & \(N^{\text{pt}}\) & \(N_{2}\) & \(\mathcal{A}_{2}^{l}\) & \(N^{\text{ds}}\) & Iterations & \(\mathcal{L}\) & Val. \\
\hline
V1 & 1 & 15 & tanh & 1000 & 3094 & \(2\cdot 10^{-4}\) & Fig. 13a \\
V2 & 1 & 25 & tanh & 1000 & 911 & \(2\cdot 10^{-3}\) & Fig. 13b \\
V2 & 2 & 25 & tanh & 1000 & 2903 & \(1\cdot 10^{-4}\) & Fig. 13b \\
P1 & 1 & 15 & ReLU & 1000 & 356 & \(3\cdot 10^{-6}\) & Fig. 13c \\
P2 & 1 & 15 & ReLU & 1000 & 1240 & \(8\cdot 10^{-3}\) & Fig. 13d \\
\hline\hline
\end{tabular}
\end{table}

Table 3: FNN\({}^{\sigma}\) architectures: network hyperparameters with number of preceding time steps \(N^{\text{pt}}\), neurons in the hidden layer \(N_{2}\), and activation function \(\mathcal{A}_{2}^{l}\) as well as characteristic values from training with number of training data \(N^{\text{ds}}\) and loss value \(\mathcal{L}\) after the given number of iterations. The figure numbers with the corresponding validation load case are given in column Val.

Figure 13: Stress prediction of FNN\({}^{\sigma}\) for the four underlying test materials. The chosen hyperparameters are given in Tab. 3.

The state of the elastoplastic material with kinematic hardening P1 is, analogously to V1, sufficiently described by \({}^{n}\epsilon\) and \({}^{n}\sigma\). However, in contrast to V2, elastoplasticity with mixed kinematic-isotropic hardening as present in P2 cannot be described by adding an additional time step to the input. This would require information about the current position and extent of the elastic region, which cannot be provided by a fixed amount of preceding time steps. Interestingly, the model tries to approximate the material behavior with kinematic hardening as well as possible. Summarizing, FNN\({}^{\sigma}\) is thus only applicable for certain types of material behavior. Moreover, no further information is obtained apart from the stress prediction. On the other hand, the training process is computationally inexpensive and requires only stress-strain pairs and time steps for the data set.

**RNN\({}^{\sigma}\)** Using the same reference data generated with V1, V2, P1, and P2, the capability of the RNN\({}^{\sigma}\) model according to Wu et al. [29] is now investigated, where the network hyperparameters given in Tab. 4 have been used. With that, the results given in Fig. 14 could be achieved for the validation test. The RNN\({}^{\sigma}\) model is thus able to predict the stress response almost perfectly for all four test materials. Due to the comparatively large number of weights and the expensive gradient calculations for networks with recurrent cells, this broad applicability comes at the cost of the computational time needed for the training. Similar to FNN\({}^{\sigma}\), only stresses and strains are required for the training data set, but no additional physical information is obtained. However, both models, FNN\({}^{\sigma}\) and RNN\({}^{\sigma}\), do not allow statements to be made about whether the processes described by the respective model are embedded in a meaningful thermodynamic framework.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline\hline
Material & \(N^{\text{c}}\) & \(N_{2}^{\text{FF}}\) & \(\mathcal{A}_{2}^{\text{FF}}\) & \(N^{\text{seq}}\) & \(N^{\text{ts}}\) & Iterations & \(\mathcal{L}\) & Val. \\
\hline
V1 & 6 & 10 & tanh & 100 & 100 & 3465 & \(9\cdot 10^{-4}\) & Fig. 14a \\
V2 & 10 & 10 & tanh & 100 & 100 & 2219 & \(8\cdot 10^{-5}\) & Fig. 14b \\
P1 & 10 & 10 & ReLU & 100 & 100 & 5352 & \(7\cdot 10^{-3}\) & Fig. 14c \\
P2 & 12 & 20 & ReLU & 100 & 100 & 5281 & \(1\cdot 10^{-3}\) & Fig. 14d \\
\hline\hline
\end{tabular}
\end{table}

Table 4: RNN\({}^{\sigma}\) architectures: network hyperparameters with number of values in the cell state \(N^{\text{c}}\), neurons in the hidden layer of the feedforward network \(N_{2}^{\text{FF}}\) with activations \(\mathcal{A}_{2}^{\text{FF}}\) as well as characteristic values from training with number of sequences \(N^{\text{seq}}\) of length \(N^{\text{ts}}\) and loss value \(\mathcal{L}\) after training for the given number of iterations.

Figure 14: Stress prediction of RNN\({}^{\sigma}\) for the four underlying test materials. The chosen hyperparameters are given in Tab. 4.

#### 6.1.2 Neural networks enforcing physics in a weak form

**FNN\({}^{\xi+\psi}\)** After the investigation of the pure black box models given above, the first approach belonging to the class of NNs enforcing physics in a weak form is analyzed, the FNN\({}^{\xi+\psi}\) model according to Masi et al. [11]. Again, the same reference data generated with V1, V2, P1, and P2 are used here. Using the hyperparameters given in Tab. 5, four FNN\({}^{\xi+\psi}\) networks were created. The results for the validation loading case given in Fig. 15 show that FNN\({}^{\xi+\psi}\) is able to produce precise predictions for stress and internal variables, i.e., the available quantities.
In addition to the stress-strain plots, diagrams showing free energy \(\psi\), internal variables \(\xi^{\alpha}\), and dissipation rate \(D\) are added. It can be seen that \(\psi\) and \(D\) vary from the expected results, although the order of magnitude and rough course mostly coincide. The condition \(D\geq 0\) is satisfied reliably. Thus, in contrast to the black box models, FNN\({}^{\xi+\psi}\) puts the predictions in a more physical framework, which has positive effects on the generalization capability of the network. For a detailed study on this architecture, see Masi et al. [11]. However, FNN\({}^{\xi+\psi}\) requires the provision of information about the internal variables for the training data, which is not a trivial task. This problem is discussed in Masi and Stefanou [47] for multiscale simulations.

Figure 15: Prediction of stress, free energy, internal variables and dissipation rate for the four test materials using FNN\({}^{\xi+\psi}\).

**RNN\({}^{\xi+\psi}\)** The second analyzed NN-based approach that enforces physics in a weak form is the RNN\({}^{\xi+\psi}\) approach similar to He and Chen [48]. The network hyperparameters and the information on the training are given in Tab. 6. With that, the results given in Fig. 16 could be achieved for the considered validation path. One can see that the stress predictions are very precise for all four materials. Furthermore, the Clausius-Duhem inequality is also complied with, with the exception of a few individual time steps. The internal variables, however, vary vastly from the expected results. Since no internal variables are given in the training data set and no further restrictions on their course are made, the network is free to choose any set of internal variables that allows accurate stress predictions and compliance with \(D\geq 0\). The network thus finds another representation of the set of internal variables and the free energy function. Note that for the kinematic hardening material P1, the network predicts only a single internal variable (\(N^{\xi}=1\)), although \(N=2\). This is possible since the plastic strain \(\epsilon^{\text{pl}}\) and the kinematic hardening variable \(\alpha\) coincide and the material response can be expressed in terms of only the plastic strain.
The same simplification could as well be made for material P2 with \(N^{\xi}=2\) despite \(N=3\), but \(N^{\xi}=3\) is chosen. In contrast to FNN\({}^{\xi+\psi}\), the provision of internal variables is not explicitly necessary to obtain precise, physically consistent stress predictions. This advantage comes at the cost of a computationally expensive recurrent cell. For both FNN\({}^{\xi+\psi}\) and RNN\({}^{\xi+\psi}\), although \(D\geq 0\) is enforced by a penalty term in the loss function and is also satisfied in this validation path without extrapolation, this cannot be fully guaranteed for this class of models, see Sect. 6.2.

\begin{table}
\begin{tabular}{c c c c c c c c c c c c c}
\hline\hline
Mat. & \(N^{\text{c}}\) & \(N^{\xi}\) & \(N_{2}^{\text{FF}^{\xi}}\) & \(N_{2}^{\text{FF}^{\psi}}\) & \(\mathcal{A}_{2}^{\text{FF}^{\xi}}\) & \(\mathcal{A}_{2}^{\text{FF}^{\psi}}\) & \(w^{\sigma}\) / \(w^{D\geq 0}\) & \(N^{\text{seq}}\) & \(N^{\text{ts}}\) & Iter. & \(\mathcal{L}^{\sigma}\) / \(\mathcal{L}^{D\geq 0}\) & Val. \\
\hline
V1 & 6 & 1 & 10 & 10 & tanh & tanh & 1/5 & 100 & 100 & 5079 & \(2\cdot 10^{-3}/4\cdot 10^{-7}\) & Fig. 16a \\
V2 & 6 & 2 & 10 & 10 & tanh & tanh & 1/5 & 100 & 100 & 5244 & \(8\cdot 10^{-3}/3\cdot 10^{-5}\) & Fig. 16b \\
P1 & 10 & 1 & 10 & 10 & ReLU & tanh & 1/5 & 100 & 100 & 5329 & \(7\cdot 10^{-3}/1\cdot 10^{-4}\) & Fig. 16c \\
P2 & 12 & 3 & 15 & 20 & ReLU & tanh & 1/5 & 100 & 100 & 5140 & \(1\cdot 10^{-3}/5\cdot 10^{-6}\) & Fig. 16d \\
\hline\hline
\end{tabular}
\end{table}

Table 6: RNN\({}^{\xi+\psi}\) architectures: network hyperparameters with number of values in the cell state \(N^{\text{c}}\), number of predicted internal variables \(N^{\xi}\), neurons in the hidden layers of the feedforward networks \(N_{2}^{\text{FF}^{\xi}}\), \(N_{2}^{\text{FF}^{\psi}}\) with activations \(\mathcal{A}_{2}^{\text{FF}^{\xi}}\), \(\mathcal{A}_{2}^{\text{FF}^{\psi}}\) as well as weighting factors of the loss function \(w^{\sigma}\), \(w^{D\geq 0}\), number of training sequences \(N^{\text{seq}}\) of length \(N^{\text{ts}}\) and loss values \(\mathcal{L}^{\sigma}\), \(\mathcal{L}^{D\geq 0}\) after training for the given number of iterations. Validation load cases can be found in the figures given in column Val.

Figure 16: Prediction of stress, free energy, internal variables and dissipation rate for the four test materials using RNN\({}^{\xi+\psi}\).

#### 6.1.3 Neural networks enforcing physics in a strong form

**FNN\({}^{\psi+\phi}\)** Finally, the analysis of approaches belonging to the class of _NNs enforcing physics in a strong form_ is done by using the reference data generated with V1, V2, P1, and P2. The first model is the FNN\({}^{\psi+\phi}\) model according to Huang et al. [49]. Here, the network hyperparameters and the information on the training are given in Tab. 7 and the results achieved for the considered validation path are given in Fig. 17. One can see that the stress predictions are very precise for both viscoelastic materials, V1 and V2.
In addition to the stress-strain plots, diagrams showing internal variables \(\xi^{\alpha}\), free energy \(\psi\), internal forces \(\tau^{\alpha}\), dissipation rate \(D\), and dissipation potential \(\phi\) are added. From these one can see that all of these quantities are almost identical to the reference model, although they do not explicitly appear in the loss, cf. Eq. (31). Only for V1 there is a visible discrepancy in the dissipation potential. Furthermore, in contrast to the former two models, the important condition \(D\geq 0\) is now guaranteed for all admissible load paths by construction of the network architecture. However, FNN\({}^{\psi+\phi}\) requires the provision of information about the internal variables for the training data when no adapted training is used [7].

Regarding rate-independent materials, the FNN\({}^{\psi+\phi}\) model is able to rebuild the response of the elastoplastic material P1. Interestingly, however, the internal forces \(\tau^{\alpha}\) do not match the reference. This is due to the fact that \(\xi^{1}=\epsilon^{\text{pl}}\) and \(\xi^{2}=\alpha\) coincide, i.e., \(\alpha=\epsilon^{\text{pl}}\), cf. Tab. 1. Therefore, there are any number of ways to divide the internal forces, all of which lead to the same, good result, even if the relation between the internal variables is not incorporated into the network, cf. Remark 1. Furthermore, regarding the plots of \(D\) and \(\phi\), one can see that the original non-continuous curve shapes at the zero line are approximated by smooth ones. This process is often called a regularization of the rate-independent model, i.e., an approximation by a rate-dependent one [59]. Due to the choice of the softplus activation function, this results all by itself within the training process.

Regarding the results for P2, a poor prediction becomes apparent in Fig. 17 as soon as the plastic regime is reached. The stress response is, similar to FNN\({}^{\sigma}\), approximated with only kinematic hardening instead of kinematic and isotropic hardening. According to the authors, this is due to the fact that there is no function \(\phi\) for which Eq. (4) can be applied without explicitly considering the relation between the rates of the plastic strain \(\dot{\epsilon}^{\text{pl}}\) and the isotropic hardening variable \(\dot{\hat{\alpha}}\), cf. Remark 1. If this relation, i.e., \(\dot{\hat{\alpha}}=|\dot{\epsilon}^{\text{pl}}|\), is incorporated into the model architecture, the model is able to make accurate predictions. The authors have already verified this.

**FNN\({}^{\psi+\phi^{*}}\)** Instead of using the dissipation potential \(\phi\), the model FNN\({}^{\psi+\phi^{*}}\) according to As'ad and Farhat [6] makes use of the dual dissipation potential \(\phi^{*}\). Here, the network hyperparameters and the information on the training are given in Tab. 8. With that, the results given in Fig. 18 could be achieved for the considered validation path. Similar to FNN\({}^{\psi+\phi}\), the predictions for stress, free energy, dissipation rate, etc. are very precise for the viscoelastic materials V1 and V2. Again, only for V1 there is a visible discrepancy in the dual dissipation potential. FNN\({}^{\psi+\phi^{*}}\) requires the provision of information about the internal variables for the training data when no adapted training is used. Regarding the prediction of FNN\({}^{\psi+\phi^{*}}\) for the elastoplastic materials P1 and P2, a rather poor result shows up in Fig. 18, even for kinematic hardening.
18 even for kinematic hardening. However, this is not surprising, since the dual dissipation potentials \(\phi^{*}\) of the reference models P1 and P2 have a shape that cannot be reasonably represented with the selected activation functions, cf. Tab. 1, and restrictions on the network weights. FNN\({}^{\psi+\phi+\xi}\)This architecture models the free energy and dissipation potential without requiring knowledge about the internal variables during training. The architecture is restricted to a single internal variable, i.e., a single FNN for the internal variable with scalar output. Using the network parameters in Tab. 9, the results in Fig. 19 could be achieved. These results show that the architecture is able to learn reasonable representations of the potentials for V1, V2 and P1, but not P2. All of the predictions are less precise compared to FNN\({}^{\psi+\phi}\). Similar to RNN\({}^{\xi+\psi}\), the internal variables are not learned exactly as in the reference model. However, normalization would show that the courses are similar. Because other potentials are learned if \(\xi(t)\) differs from the reference model, this nevertheless leads to equivalent results. Interestingly, the architecture finds a set of potentials for V2 that allows for surprisingly accurate stress predictions, although only one internal variable is used. The prediction for P1 shows a rather strong rate dependency. This can be attributed to a second regularization mechanism: besides the regularization of the dissipation potential, the sudden increase of \(\dot{\epsilon}^{\mathrm{pl}}\) when leaving the elastic region cannot be modeled accurately by the FNN \(\xi^{\text{NN}}(t)\). P2 in turn cannot be modeled at all, since at least two internal variables are necessary to describe the path dependency and, which is more important, due to the reasons discussed for FNN\({}^{\psi+\phi}\) above. The architecture FNN\({}^{\psi+\phi+\xi}\) is thus more suitable for viscoelastic materials. Figure 17: Prediction of stress, internal variables, free energy, internal forces, dissipation rate and dissipation potential for the four test materials using \(\mathrm{FNN}^{\psi+\phi}\). Figure 18: Prediction of stress, internal variables, free energy, internal forces, dissipation rate and dual dissipation potential for the four test materials using \(\mathrm{FNN}^{\psi+\phi^{*}}\). Figure 19: Prediction of stress, internal variables, free energy, internal forces, dissipation rate and dissipation potential for the four test materials using the advanced training method \(\mathrm{FNN}^{\psi+\phi+\xi}\). ### Validation path with extrapolation After investigating which NN-based approaches can reproduce which material behavior well, i.e. V1, V2, P1 and P2, the extrapolation capability of these approaches is now compared. This is done using the path given in Fig. 12b. Since all NN-based approaches have shown that they can reproduce the viscoelastic material V1 well, only this will be considered in the following. The exact same models that were validated with the strain path without extrapolation are now used to carry out the extrapolation study. The results of this final study are given in Fig. 20. #### 6.2.1 Black box NNs To start with, the _extrapolation_ capabilities of the first _black box model_ FNN\({}^{\sigma}\) are evaluated. As can be seen, the model is easily able to extrapolate to strains of up to \(\epsilon=3\,\%\), which is outside the training range \(\epsilon^{\text{train}}\in[-2,2]\,\%\).
After reentering the training range without notable inaccuracies, the model fails to extrapolate into ranges of higher strain rates and exhibits large errors. However, after a short relaxation, the model is able to produce accurate predictions for strain increments outside the training range and is even able to yield reasonable values for the stress up to \(4\,\%\). Thus, the FNN\({}^{\sigma}\) model is surprisingly good at extrapolating, except for increased strain rates \(\dot{\epsilon}\). However, it should be noted that there is no possibility to make statements about the thermodynamics of the model. The second _black box model_, RNN\({}^{\sigma}\), initially shows a similarly good prediction quality, but also fails to predict stresses for larger strain rates and deviates for time increments of \(\Delta t=0.2\,\text{s}\). It should be noted that the extrapolation behavior of RNNs often differs significantly for different training configurations. Even strong oscillations could be observed in some cases. Thus, compared to the FNN\({}^{\sigma}\) model, the RNN\({}^{\sigma}\) is worse at extrapolating. However, it can be used for a broader class of material behavior compared to FNN\({}^{\sigma}\), see Figs. 13 and 14. The missing possibility to make statements about the thermodynamics remains. #### 6.2.2 NNs enforcing physics in a weak form Now, the _extrapolation_ behavior of _NNs enforcing physics in a weak form_ is considered. As shown in Fig. 20, the model FNN\({}^{\xi+\psi}\) agrees very well with the reference V1 up to maximum strains of \(2.5\,\%\), but slightly deviates above. After reentering the training range, precise predictions are made again. Increased strain rates do lead to errors as well, but these are substantially reduced compared to the previous models. The model is able to cope with increased time increments and again precisely predicts the stress up to \(2.5\,\%\). This also applies to the free energy \(\psi\) and the dissipation rate, for which \(D\geq 0\) applies up to that point. Thus, the model does not violate the second law of thermodynamics up to this point of the loading path. However, when the strain is further increased to \(\epsilon>3\,\%\), significant errors in \(\sigma\) occur. Furthermore, one can see that the model predicts negative values for \(D\). Thus, FNN\({}^{\xi+\psi}\) is very good at extrapolation for the most part of the loading path. However, from a certain level of strains, unphysical predictions can be seen. This is due to the fact that the fulfillment of the second law of thermodynamics is enforced only by a penalty term in the loss and is not fulfilled a priori. The RNN\({}^{\xi+\psi}\) model shows similarly good results, but is more precise for strains up to \(3\,\%\). As with the FNN\({}^{\xi+\psi}\), completely unphysical predictions with \(D<0\) occur for strains greater than \(3\,\%\). Thus, RNN\({}^{\xi+\psi}\) provides acceptable results when extrapolating up to \(\epsilon=3\,\%\). For this, however, no internal variables are necessary for the training. Finally, as is also the case for FNN\({}^{\xi+\psi}\), unphysical predictions may occur. #### 6.2.3 NNs enforcing physics in a strong form Lastly, the _extrapolation_ behavior of the three _NNs enforcing physics in a strong form_, FNN\({}^{\psi+\phi}\), FNN\({}^{\psi+\phi^{*}}\), and FNN\({}^{\psi+\phi+\xi}\), is analyzed. As shown in Fig. 20, highly accurate predictions can be achieved with FNN\({}^{\psi+\phi}\) and FNN\({}^{\psi+\phi^{*}}\) for \(\sigma\), \(\psi\), as well as \(D\).
This applies to the entire load path, except for the last piece of FNN\({}^{\psi+\phi^{*}}\) with \(\epsilon>4\,\%\). FNN\({}^{\psi+\phi}\) is even capable of producing very precise results for strains of \(\epsilon=6\,\%\). The extrapolation using FNN\({}^{\psi+\phi+\xi}\) is not as accurate as the two former architectures, but still good considering the mediocre interpolation results. Summarizing, neither highly increased strains, strain rates, nor time increments outside of the training range lead to considerable deviations from the expected material response. Particularly noteworthy here is that in any case \(D\geq 0\) is ensured. Thus, the predictions are always in accordance with the second law given by the CDI (1). All in all, this model class is best suited for extrapolation, which is due to the strong physical background incorporated here. Figure 20: Predictions of the six considered NNs tested for an extrapolation path with \(\varepsilon\), \(\dot{\varepsilon}\), and \(\Delta t\) not included in the training data set: **(a)** stress \(\sigma\), **(b)** free energy \(\psi\), and **(c)** dissipation rate \(D\). The viscoelastic model V1 serves as a reference and has been used for generation of training data. ## 7 Conclusions In this work, a classification of the variety of NN-based approaches to modeling inelastic constitutive behavior with particular attention to the thermodynamic framework as well as a unified formulation of these approaches is provided. To this end, a division of NN-based approaches into _black box NNs_, _NNs enforcing physics in a weak form_ and _NNs enforcing physics in a strong form_ is made, and they are applied to both 1D elastoplastic and viscoelastic data. After a compact literature review, a short overview of continuum-based constitutive modeling including standard viscoelastic and elastoplastic models and a condensed review of the basics of FNNs and RNNs is given. Based on this, a total of seven NN-based approaches are presented with a detailed description of training and application. It is shown in which way the second law of thermodynamics is taken into account in the respective model and what data are necessary for training. Thereafter, the generation of training data and the application of the seven NN-based approaches to these data are shown. It can be seen that all considered models are able to represent viscoelasticity, whereas elastoplasticity cannot be represented by all approaches. Furthermore, the models' extrapolation capabilities are analyzed. In summary, the results of this work show that NN-based models are promising for the description of complex inelastic behavior and prove to be very flexible. They have the potential to progressively replace the time-consuming task of classical constitutive model formulation and calibration and to enable automated workflows. _Black box models_, however, do not allow any conclusions to be drawn as to whether the processes described by the respective model are embedded in a meaningful thermodynamic framework and are therefore not recommended for use. Furthermore, when applied to unknown load paths outside the training domain, the poor extrapolation capability of NNs can lead to large errors in stress predictions. _NNs enforcing physics in a weak form_, on the other hand, have a higher content of physics included in the model. Only the class of _NNs enforcing physics in a strong form_ can really ensure that no violation of thermodynamics occurs.
Moreover, they have also been shown to exhibit the best extrapolation behavior of the considered models. Regarding the follow-up of the study presented here, several extensions are planned in the future. For instance, an extension to the general 3D case at finite strains has to be made. Thus, a variety of further physical principles and conditions have to be included in the comparison for this, e.g., principles such as objectivity or material symmetry. ## CRediT authorship contribution statement **Max Rosenkranz:** Conceptualization, Formal analysis, Investigation, Methodology, Visualization, Software, Validation, Writing - original draft, Writing - review and editing. **Karl A. Kalina:** Conceptualization, Formal analysis, Methodology, Writing - original draft, Writing - review and editing. **Jörg Brummund:** Formal analysis, Methodology, Writing - review and editing. **Markus Kästner:** Funding acquisition, Resources, Writing - review and editing.
2310.14720
Extended Deep Adaptive Input Normalization for Preprocessing Time Series Data for Neural Networks
Data preprocessing is a crucial part of any machine learning pipeline, and it can have a significant impact on both performance and training efficiency. This is especially evident when using deep neural networks for time series prediction and classification: real-world time series data often exhibit irregularities such as multi-modality, skewness and outliers, and the model performance can degrade rapidly if these characteristics are not adequately addressed. In this work, we propose the EDAIN (Extended Deep Adaptive Input Normalization) layer, a novel adaptive neural layer that learns how to appropriately normalize irregular time series data for a given task in an end-to-end fashion, instead of using a fixed normalization scheme. This is achieved by optimizing its unknown parameters simultaneously with the deep neural network using back-propagation. Our experiments, conducted using synthetic data, a credit default prediction dataset, and a large-scale limit order book benchmark dataset, demonstrate the superior performance of the EDAIN layer when compared to conventional normalization methods and existing adaptive time series preprocessing layers.
Marcus A. K. September, Francesco Sanna Passino, Leonie Goldmann, Anton Hinel
2023-10-23T08:56:01Z
http://arxiv.org/abs/2310.14720v2
# Extended Deep Adaptive Input Normalization for Preprocessing Time Series Data for Neural Networks ###### Abstract Data preprocessing is a crucial part of any machine learning pipeline, and it can have a significant impact on both performance and training efficiency. This is especially evident when using deep neural networks for time series prediction and classification: real-world time series data often exhibit irregularities such as multi-modality, skewness and outliers, and the model performance can degrade rapidly if these characteristics are not adequately addressed. In this work, we propose the EDAIN (Extended Deep Adaptive Input Normalization) layer, a novel adaptive neural layer that learns how to appropriately normalize irregular time series data for a given task in an end-to-end fashion, instead of using a fixed normalization scheme. This is achieved by optimizing its unknown parameters simultaneously with the deep neural network using back-propagation. Our experiments, conducted using synthetic data, a credit default prediction dataset, and a large-scale limit order book benchmark dataset, demonstrate the superior performance of the EDAIN layer when compared to conventional normalization methods and existing adaptive time series preprocessing layers. ## 1 Introduction There are many steps required when applying deep neural networks or, more generally, any machine learning model, to a problem. First, data should be gathered, cleaned, and formatted into machine-readable values. Then, these values need to be preprocessed to facilitate learning. Next, features are designed from the processed data, and the model architecture and its hyperparameters are chosen. This is followed by parameter optimisation and evaluation using suitable metrics. These steps may be iterated several times. A step that is often overlooked in the literature is _preprocessing_ (see, for example, Koval, 2018), which consists of operations used to transform raw data into a format that is suitable for further modeling, such as detecting outliers, handling missing data and normalizing features. Applying appropriate preprocessing to the data can have a significant impact on both performance and training efficiency (Cao et al., 2018; Nawi et al., 2013; Passalis et al., 2020; Sola and Sevilla, 1997; Tran et al., 2021). However, determining the most suitable preprocessing method usually requires a substantial amount of time and relies on iterative training and performance testing. Therefore, the main objective of this work is to propose a _novel efficient automated data preprocessing method for optimising the predictive performance of neural networks_, with a focus on normalization of multivariate time series data. ### Preprocessing multivariate time series Let \(\mathcal{D}=\left\{\mathbf{X}^{(i)}\in\mathbb{R}^{d\times T},\ i=1,\ldots,N\right\}\) denote a dataset containing \(N\) time series, where each time series \(\mathbf{X}^{(i)}\in\mathbb{R}^{d\times T}\) is composed of \(T\) \(d\)-dimensional feature vectors. The integers \(d\) and \(T\) refer to the feature and temporal dimensions of the data, respectively. Also, we use \(\mathbf{x}_{t}^{(i)}\in\mathbb{R}^{d},\ t=1,\ldots,T\) to refer to the \(d\) features observed at timestep \(t\) in the \(i\)-th time series.
Before feeding the data into a model such as a deep neural network, it is common for practitioners to perform \(z\)-score normalization (see, for example, Koval, 2018) on \(\mathbf{x}_{t}^{(i)}=(x_{t,1}^{(i)},\ldots,x_{t,d}^{(i)})\in\mathbb{R}^{d}\), obtaining \[\tilde{x}_{t,k}^{(i)}=\frac{x_{t,k}^{(i)}-\mu_{k}}{\sigma_{k}},\ k=1,\ldots,d,\] where \(\mu_{k}\) and \(\sigma_{k}\) denote the mean and standard deviation of the measurements from the \(k\)-th predictor variable. Another commonly used method is min-max scaling, where the observations for each predictor variable are transformed to fall in the value range \([0,1]\) (see, for example, Koval, 2018). In this work, we refer to these conventional methods as _static preprocessing_ methods, as the transformation parameters are fixed statistics that are computed through a single sweep of the training data. Most of these transformations only change the location and scale of the observations, but real-world data often contains additional irregularities such as skewed distributions, outliers, extreme values, heavy tails and multiple modes (Cao et al., 2023; Nawi et al., 2013), which are not mitigated by transformations such as \(z\)-score and min-max normalization. Employing static normalization in such cases may lead to sub-optimal results, as demonstrated in Passalis et al. (2020, 2021); Tran et al. (2021) and in the experiments on real and synthetic data in Section 4 of this work. In contrast, better results are usually obtained by employing _adaptive_ preprocessing methods (see, for example, Lubana et al., 2021; Passalis et al., 2020, 2021; Tran et al., 2021), where the preprocessing is integrated into the deep neural network by augmenting its architecture with additional layers. Both the transformation parameters and the neural network model parameters are then jointly optimised in an end-to-end fashion as part of the objective function of interest. The main contribution of this paper belongs to this class of methods: we propose a novel adaptive normalization approach, called EDAIN (Extended Deep Adaptive Input Normalization), which can appropriately handle irregularities in the input data, without making any assumption on their distribution. ### Contributions The main contribution of our work is EDAIN, displayed in Figure 1, a neural layer that can be added to any neural network architecture for preprocessing multivariate time series. This method complements the shift and scale layers proposed in DAIN (Passalis et al., 2020, described in detail in Section 2) with an adaptive outlier mitigation sublayer and a power transform sublayer, used to handle common irregularities observed in real-world data, such as outliers, heavy tails, extreme values and skewness. Additionally, our EDAIN method can be implemented in two versions, named _global-aware_ and _local-aware_, suited to unimodal and multi-modal data respectively. Furthermore, we propose a computationally efficient variation of EDAIN, trained via the Kullback-Leibler divergence, named EDAIN-KL. Like EDAIN, this method can normalize skewed data with outliers, but in an unsupervised fashion, and can be used in conjunction with non-neural-network models. EDAIN is described in detail in Section 3, after a discussion on related methods in Section 2. The proposed methodology is extensively evaluated on synthetic and real-world data in Section 4, followed by a discussion on its performance and a conclusion.
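To make the static baselines of Section 1.1 concrete, the following is a minimal NumPy sketch of feature-wise \(z\)-score and min-max normalization for a dataset of shape \((N,d,T)\); the function names and the small stabilising constant are ours, not from the paper's repository:

```python
import numpy as np

def zscore_normalize(X, eps=1e-8):
    """Static z-score normalization of a dataset X of shape (N, d, T).
    Statistics are fixed per feature, pooled over all series and timesteps."""
    mu = X.mean(axis=(0, 2), keepdims=True)
    sigma = X.std(axis=(0, 2), keepdims=True)
    return (X - mu) / (sigma + eps)

def minmax_normalize(X, eps=1e-8):
    """Static min-max scaling of each feature to the value range [0, 1]."""
    lo = X.min(axis=(0, 2), keepdims=True)
    hi = X.max(axis=(0, 2), keepdims=True)
    return (X - lo) / (hi - lo + eps)

# Example: skewed data keeps its skew after z-score normalization
X = np.random.lognormal(size=(100, 3, 10))   # N = 100, d = 3, T = 10
X_z = zscore_normalize(X)                    # location/scale fixed, skew remains
```

As the example illustrates, only the location and scale change; skewness, heavy tails and multiple modes survive these static transformations, which is the motivation for the adaptive layers discussed next.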
An open-source implementation of the EDAIN layer, along with code for reproducing the experiments, is available in the GitHub repository marcusGH/edain_paper. ## 2 Related Work Several works consider adaptive normalization methods, but they all apply the normalization transformation to the outputs of the inner layers within the neural network, known as activations in the literature. A well-known example of these transformations is batch normalization (Ioffe and Szegedy, 2015), which applies \(z\)-score normalization to the output of each inner layer, but several alternatives and extensions exist (see, for example, Huang et al., 2023; Lubana et al., 2021; Yu and Spiliopoulos, 2023). To the best of our knowledge, there are only three other methods where the deep neural network is augmented by inserting the adaptive preprocessing layer as the first step, transforming the data before it enters the network: the Deep Adaptive Input Normalization (DAIN) layer Passalis et al. (2020), the Robust Deep Adaptive Input Normalization (RDAIN) layer Passalis et al. (2021), and the Bilinear Input Normalization (BIN) layer Tran et al. (2021). We will describe the DAIN layer in more detail in the next paragraph, as this method resembles our proposed EDAIN method the most. The DAIN layer normalizes each time series \(\mathbf{X}^{(i)}\) using three sublayers: each time series is first shifted, then scaled, and finally passed through a gating layer that can suppress irrelevant features. The unknown parameters are the weight matrices \(\mathbf{W}_{a}\), \(\mathbf{W}_{b}\), \(\mathbf{W}_{c}\in\mathbb{R}^{d\times d}\), and the bias term \(\mathbf{d}\in\mathbb{R}^{d}\), and are used for the shift, scale, and gating sublayer, respectively. The first two layers together perform the operation \[\tilde{\mathbf{x}}_{t}^{(i)}=\left(\mathbf{x}_{t}^{(i)}-\mathbf{W}_{a}\mathbf{ a}^{(i)}\right)\oslash\mathbf{W}_{b}\mathbf{b}^{(i)},\] where \(\mathbf{x}_{t}^{(i)}\in\mathbb{R}^{d}\) is the input feature vector at timestep \(t\) of time series \(i\), \(\oslash\) denotes element-wise division, and \(\mathbf{a}^{(i)}\in\mathbb{R}^{d}\) and \(\mathbf{b}^{(i)}\in\mathbb{R}^{d}\) are summary statistics that are computed for the \(i\)-th time series as follows: \[\mathbf{a}^{(i)}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{x}_{t}^{(i)},\ \mathbf{b}^{(i)}= \sqrt{\frac{1}{T}\sum_{t=1}^{T}\Big{(}\mathbf{x}_{t}^{(i)}-\mathbf{W}_{a} \mathbf{a}^{(i)}\Big{)}^{2}}. \tag{1}\] In Equation (1), the power operations are applied element-wise. The third sublayer, the gating layer, performs the operation \[\tilde{\tilde{\mathbf{x}}}_{t}^{(i)}=\tilde{\mathbf{x}}_{t}^{(i)}\odot S\left( \mathbf{W}_{c}\mathbf{c}^{(i)}+\mathbf{d}\right).\] Here, \(\odot\) is the element-wise multiplication operator, \(S:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) denotes the logistic function applied element-wise, and \(\mathbf{c}^{(i)}\) is the summary statistic \[\mathbf{c}^{(i)}=\frac{1}{T}\sum_{t=1}^{T}\tilde{\mathbf{x}}_{t}^{(i)}.\] The final output of the DAIN layer is thus \[\tilde{\tilde{\mathbf{X}}}^{(i)}=\left[\tilde{\tilde{\mathbf{x}}}_{1}^{(i)}, \ldots,\tilde{\tilde{\mathbf{x}}}_{T}^{(i)}\right]\in\mathbb{R}^{d\times T}.\] In the RDAIN layer proposed by Passalis et al. (2021), a similar 3-stage normalization pipeline as that of the DAIN layer is used, but a residual connection across the shift and scale sublayers is also introduced.
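For concreteness, the three DAIN sublayers described above can be sketched in PyTorch as follows; this is a simplified reading of the equations in Passalis et al. (2020), not their reference implementation, and the small constant added before the square root is ours for numerical stability:

```python
import torch
import torch.nn as nn

class DAIN(nn.Module):
    """Sketch of the DAIN layer: adaptive shift, scale and gating sublayers."""

    def __init__(self, d):
        super().__init__()
        self.W_a = nn.Linear(d, d, bias=False)   # shift sublayer weights
        self.W_b = nn.Linear(d, d, bias=False)   # scale sublayer weights
        self.gate = nn.Linear(d, d, bias=True)   # W_c plus the bias term d

    def forward(self, X):
        # X has shape (batch, d, T)
        a = X.mean(dim=2)                              # summary statistic a^(i)
        X = X - self.W_a(a).unsqueeze(2)               # shift
        b = torch.sqrt((X ** 2).mean(dim=2) + 1e-8)    # summary statistic b^(i)
        X = X / self.W_b(b).unsqueeze(2)               # scale
        c = X.mean(dim=2)                              # summary statistic c^(i)
        return X * torch.sigmoid(self.gate(c)).unsqueeze(2)  # gating

X = torch.randn(32, 5, 10)   # 32 series, d = 5 features, T = 10 timesteps
out = DAIN(d=5)(X)           # same shape as X
```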
The BIN layer (Tran et al., 2021) has two sets of linear shift and scale sublayers that work similarly to the DAIN layer, which are applied across columns and rows of each time series \(\mathbf{X}^{(i)}\in\mathbb{R}^{d\times T}\). The output of the BIN layer is a trainable linear combination of the two. In real-world applications, data often present additional irregularities, such as outliers, extreme values, heavy tails and skewness, which the aforementioned adaptive preprocessing methods are not designed to handle. Therefore, this work proposes EDAIN, a layer which comprises two novel sublayers that can appropriately treat skewed and heavy-tailed data with outliers and extreme values, resulting in significant improvements in performance metrics on real and simulated data. Also, the DAIN, RDAIN and BIN adaptive preprocessing methods are primarily designed to handle multi-modal and non-stationary time series, which are common in financial forecasting tasks (Passalis et al., 2020). They do this by making the shift and scale parameters a parameterised function of each \(\mathbf{X}^{(i)}\), allowing a transformation specific to each time series data point, henceforth referred to as _local-aware_ preprocessing. However, these normalization schemes do not necessarily preserve the relative ordering between time series data points, which can degrade performance on unimodal datasets. As discussed in the next section, we address this drawback by proposing a novel _global-aware_ version for our proposed EDAIN layer, which preserves ordering by learning a monotonic transformation. It must be remarked that the EDAIN layer can also be fitted in local-aware fashion, to address multi-modality when present, providing additional flexibility when modelling real-world data. ## 3 Extended Deep Adaptive Input Normalization In this section, we describe in detail the novel EDAIN preprocessing layer, which can be added to any deep learning architecture for time series data. EDAIN adaptively applies local transformations specific to each time series, or global transformations across all the observed time series \(\mathbf{X}^{(i)}\), \(i=1,\ldots,N\) in the dataset \(\mathcal{D}\). These transformations are aimed at appropriately preprocessing the data, mitigating the effect of skewness, outliers, extreme values and heavy-tailed distributions. This section discusses in detail the different sublayers of EDAIN, its global-aware and local-aware versions, and training strategies via stochastic gradient descent or Kullback-Leibler divergence minimization. Figure 1: Architecture of the proposed EDAIN (Extended Deep Adaptive Input Normalization) layer. The layout and color choices of the diagram are based on Figure 1 from Passalis et al. (2020). An overview of the EDAIN layer's architecture is shown in Figure 1. Given some input time series \(\mathbf{X}^{(i)}\in\mathbb{R}^{d\times T}\), each feature vector \(\mathbf{x}^{(i)}_{t}\in\mathbb{R}^{d}\) is independently transformed sequentially in four stages: an outlier mitigation operation \(\mathbf{h}_{1}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), a shift operation \(\mathbf{h}_{2}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), a scale operation \(\mathbf{h}_{3}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), and a power transformation operation \(\mathbf{h}_{4}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\).
Outlier mitigation sublayer.In the literature, it has been shown that an appropriate treatment of outliers and extreme values can increase predictive performance (Yin and Liu, 2022). The two most common ways of doing this are omission and winsorization (Nyitrai and Virag, 2019). The former corresponds to removing the outliers from further analysis, whereas the latter seeks to replace outliers with a censored value corresponding to a given percentile of the observations. In this work we propose the following smoothed winsorization operation obtained via the \(\tanh(\cdot)\) function: \[\ddot{\mathbf{x}}^{(i)}_{t}=\boldsymbol{\beta}\odot\tanh\left\{\left(\mathbf{ x}^{(i)}_{t}-\hat{\boldsymbol{\mu}}\right)\oslash\boldsymbol{\beta}\right\}+\hat{ \boldsymbol{\mu}}, \tag{2}\] where the parameter \(\boldsymbol{\beta}\in[\beta_{\min},\infty)^{d}\) controls the range to which the measurements are restricted, and \(\hat{\boldsymbol{\mu}}\in\mathbb{R}^{d}\) is the global mean of the data, considered as a fixed constant. In this work, we let \(\beta_{\min}=1\). Additionally, we consider a ratio of winsorization to apply to each predictor variable, controlled by an unknown parameter vector \(\boldsymbol{\alpha}\in[0,1]^{d}\), combined with the smoothed winsorization operator (2) via a residual connection. This gives the following adaptive outlier mitigation operation for an input time series \(\mathbf{x}^{(i)}_{t}\in\mathbb{R}^{d}\): \[\mathbf{h}_{1}\left(\mathbf{x}^{(i)}_{t}\right)=\boldsymbol{\alpha}\odot \ddot{\mathbf{x}}^{(i)}_{t}+(\mathbf{1}_{d}-\boldsymbol{\alpha})\odot\mathbf{ x}^{(i)}_{t},\] where \(\mathbf{1}_{d}\) is a \(d\)-dimensional vector of ones. Both \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) are considered as unknown parameters as part of the full objective function optimised during training. Shift and scale sublayers.The adaptive shift and scale layer, combined, perform the operation \[\mathbf{h}_{3}\left\{\mathbf{h}_{2}\left(\mathbf{x}^{(i)}_{t}\right)\right\}= (\mathbf{x}^{(i)}_{t}-\boldsymbol{m})\oslash\boldsymbol{s},\] where the unknown parameters are \(\boldsymbol{m}\in\mathbb{R}^{d}\) and \(\boldsymbol{s}\in(0,\infty)^{d}\). Note that the EDAIN scale and shift sublayers generalise \(z\)-score scaling, which does not treat \(\boldsymbol{m}\) and \(\boldsymbol{s}\) as unknown parameters, but sets them to the mean and standard deviation instead: \[\boldsymbol{m}=\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\mathbf{x}^{(i)}_{t},\] and \[\boldsymbol{s}=\sqrt{\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(\mathbf{x}^ {(i)}_{t}-\boldsymbol{m}\right)^{2}},\] where the power operations are applied element-wise. Power transform sublayer.Many real-world datasets exhibit significant skewness, which is often corrected using power transformations (Schroth and Muma, 2021), such as the commonly used Box-Cox transformation (Box and Cox, 1964). One of the main limitations of the Box-Cox transformation is that it is only valid for positive values. A more widely applicable alternative is the Yeo-Johnson (YJ) transform (Yeo and Johnson, 2000): \[f^{\lambda}_{\mathrm{YJ}}(x)=\left\{\begin{array}{ll}\frac{(x+1)^{\lambda}-1 }{\lambda}&\text{if }\lambda\neq 0,\ x\geq 0,\\ \log(x+1)&\text{if }\lambda=0,\ x\geq 0,\\ \frac{(1-x)^{2-\lambda}-1}{\lambda-2}&\text{if }\lambda\neq 2,\ x<0,\\ -\log(1-x)&\text{if }\lambda=2,\ x<0.\end{array}\right.
\tag{3}\] The transformation \(f^{\lambda}_{\mathrm{YJ}}\) only has one unknown parameter, \(\lambda\in\mathbb{R}\), and it can be applied to any \(x\in\mathbb{R}\), not just positive values (Yeo and Johnson, 2000). The power transform sublayer of EDAIN simply applies the transformation in Equation (3) along each dimension of the input time series \(\mathbf{X}^{(i)}\). That is, for each \(i=1,\ldots,N\) and \(t=1,\ldots,T\), the sublayer outputs \[\mathbf{h}_{4}\left(\mathbf{x}^{(i)}_{t}\right)=\left[f^{\lambda_{1}}_{ \mathrm{YJ}}\left(x^{(i)}_{t,1}\right),\ldots,f^{\lambda_{d}}_{\mathrm{YJ}} \left(x^{(i)}_{t,d}\right)\right],\] where the unknown quantities to be optimised are the power parameters \(\boldsymbol{\lambda}=(\lambda_{1},\ldots,\lambda_{d})\in\mathbb{R}^{d}\). ### Global- and local-aware normalization For highly multi-modal and non-stationary time series data, Passalis et al. (2020, 2021) and Tran et al. (2021) observed significant performance improvements when using local-aware preprocessing methods, as these allow forming a unimodal representation space from predictor variables with multi-modal distributions. Therefore, we also propose a _local-aware_ version of the EDAIN layer in addition to the _global-aware_ version we presented earlier. In the local-aware version of EDAIN, the shift and scale operations also depend on a summary representation of the current time series \(\mathbf{X}^{(i)}\) to be preprocessed: \[\mathbf{h}_{3}\left\{\mathbf{h}_{2}\left(\mathbf{x}^{(i)}_{t}\right)\right\}= \left\{\mathbf{x}^{(i)}_{t}-\left(\boldsymbol{m}\odot\boldsymbol{\mu}^{(i)}_{ \mathbf{X}}\right)\right\}\oslash\left(\boldsymbol{s}\odot\boldsymbol{\sigma}^ {(i)}_{\mathbf{X}}\right).\] The summary representations \(\boldsymbol{\mu}^{(i)}_{\mathbf{X}},\boldsymbol{\sigma}^{(i)}_{\mathbf{X}}\in \mathbb{R}^{d}\) are computed through a reduction along the temporal dimension of each time series \(\mathbf{X}^{(i)}\) (_cf._ Figure 2): \[\boldsymbol{\mu}^{(i)}_{\mathbf{X}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{x}^{(i)}_{t},\ \boldsymbol{\sigma}^{(i)}_{\mathbf{X}}=\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left( \mathbf{x}^{(i)}_{t}-\boldsymbol{\mu}^{(i)}_{\mathbf{X}}\right)^{2}}, \tag{4}\] where the power operations are applied element-wise in (4). The outlier mitigation and power transform sublayers remain the same for the local-aware version, except the \(\hat{\mathbf{\mu}}\) statistic in Equation (2) is no longer fixed, but rather the mean of the input time series:
This property does not necessarily hold for the local-aware methods (DAIN, RDAIN, BIN, and local-aware EDAIN). For unimodal features, the qualitative interpretation of the predictor variables might dictate that property (5) should be maintained (for example, consider the case of credit scores for a default prediction application, _cf._ Section 4.2). As a solution, the proposed _global-aware_ version of the proposed EDAIN layer does not use any time series specific summary statistics, which makes each of the four sublayers monotonically non-decreasing. This ensures that property (5) is maintained by the global-aware EDAIN transformation, providing additional flexibility for applications on real world data where ordering within features should be preserved. ### Optimising the EDAIN layer The output of the proposed EDAIN layer is obtained by feeding the input time series \(\mathbf{X}^{(i)}\) through the four sublayers in a feed-forward fashion, as shown in Figure 1. The output is then fed to the deep neural network used for the task at hand. Letting \(\mathbf{W}\) denote the weights of the deep neural network model, the weights of both the deep model and the EDAIN layer are simultaneously optimised in an end-to-end manner using stochastic gradient descent, with the update equation: \[\Delta\,(\mathbf{\alpha},\mathbf{\beta},\mathbf{m},\mathbf{s},\mathbf{\lambda}, \mathbf{W})=\\ -\eta\bigg{(}\eta_{1}\frac{\partial\mathcal{L}}{\partial\mathbf{ \alpha}},\eta_{1}\frac{\partial\mathcal{L}}{\partial\mathbf{\beta}},\eta_{2}\frac{ \partial\mathcal{L}}{\partial\mathbf{m}},\eta_{3}\frac{\partial\mathcal{L}}{ \partial\mathbf{s}},\eta_{4}\frac{\partial\mathcal{L}}{\partial\mathbf{\lambda}}, \frac{\partial\mathcal{L}}{\partial\mathbf{W}}\bigg{)},\] where \(\eta\in(0,\infty)\) is the base learning rate, whereas \(\eta_{1},\dots,\eta_{4}\) correspond to sublayer-specific corrections to the global learning rate \(\eta\). As Passalis et al. (2020) observed when training their DAIN layer, the gradients of the unknown parameters for the different sublayers might have vastly different magnitudes, which prevents a smooth convergence of the preprocessing layer. Therefore, they proposed using separate learning rates for the different sublayers. We therefore introduce corrections \(\eta_{\ell}\in\mathbb{R},\ \ell=\{1,2,3,4\}\) as additional hyperparameters that modify the learning rates for each of the four different EDAIN sublayers. Furthermore, note that computing the fixed constant \(\hat{\mathbf{\mu}}\) in the outlier mitigation sublayer (2) would require a sweep on the entire dataset before training the EDAIN-augmented neural network architecture, which could be computationally extremely expensive. As a solution to circumvent this issue, we propose to calculate \(\hat{\mathbf{\mu}}\) iteratively during training, updating it using a cumulative moving average estimate at each forward pass of the sublayer. We provide more details on this in Section 1 of the supplementary material. ### EdaIN-Kl In addition to the EDAIN layer, we also propose another novel preprocessing method, named EDAIN-KL (Extended Deep Adaptive Input Normalization, optimised with Kullback-Leibler divergence). This approach uses a similar neural layer architecture as the EDAIN method, but modifies it to ensure the transformation is invertible. Its unknown parameters are then optimised with an approach inspired by normalizing flows (see, for example, Kobyzev et al., 2021). 
The EDAIN-KL layer is used to transform a Gaussian base distribution \(\mathbf{Z}\sim\mathcal{N}(\mathbf{0},I_{dT})\) via a composite function \(\mathbf{g}_{\boldsymbol{\theta}}=\mathbf{h}_{1}^{-1}\circ\mathbf{h}_{2}^{-1} \circ\mathbf{h}_{3}^{-1}\circ\mathbf{h}_{4}^{-1}\) comprised of the inverses of the operations in the EDAIN sublayers, applied sequentially with parameter \(\boldsymbol{\theta}=(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{m}, \boldsymbol{s},\boldsymbol{\lambda})\). Figure 2: Visual comparison of the local- and global-aware versions of adaptive preprocessing schemes. The parameter \(\boldsymbol{\theta}\) is chosen to minimize the KL-divergence between the resulting distribution \(\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{Z})\) and the empirical distribution of the dataset \(\mathcal{D}\): \[\hat{\boldsymbol{\theta}}=\operatorname*{argmin}_{\boldsymbol{\theta}}\operatorname {KL}\left\{\mathcal{D}\ ||\ \mathbf{g}_{\boldsymbol{\theta}}(\mathbf{Z})\right\}.\] Note that we apply all the operations in reverse order, compared to the EDAIN layer, because we use \(\mathbf{g}_{\boldsymbol{\theta}}\) to transform a base distribution \(\mathbf{Z}\) into a distribution that resembles the training dataset \(\mathcal{D}\). To normalize the dataset after fitting the EDAIN-KL layer, we apply \[\mathbf{g}_{\hat{\boldsymbol{\theta}}}^{-1}=\mathbf{h}_{4}\circ\mathbf{h}_{3 }\circ\mathbf{h}_{2}\circ\mathbf{h}_{1}\] to each \(\mathbf{X}^{(i)}\in\mathbb{R}^{d\times T}\), similarly to the EDAIN layer. The main advantage of the EDAIN-KL approach over standard EDAIN is that it allows training in an unsupervised fashion, separate from the deep model. This enables its usage for preprocessing data in a wider set of tasks, including non-deep-neural-network models. An exhaustive description of the EDAIN-KL method is provided in Section 2 of the supplementary material. ## 4 Experimental Evaluation For evaluating the proposed EDAIN layer we consider a synthetic dataset, a large-scale default prediction dataset, and a large-scale financial forecasting dataset. We compare the two versions of the EDAIN layer (global-aware and local-aware) and the EDAIN-KL layer to \(z\)-score normalization, to the DAIN (Passalis et al., 2020) layer and to the BIN (Tran et al., 2021) layer. For all experiments, we use a recurrent neural network (RNN) model composed of gated recurrent unit (GRU) layers, followed by a classifier head with fully connected layers. Categorical features, when present, are passed through an embedding layer, whose output is combined with the output of the GRU layers and then fed to the classifier head. Full details on the model architectures, optimization procedures, including learning rates and number of epochs, can be found in Section 3 of the supplementary material and in the code repository associated with this work. ### Synthetic Datasets Before considering real-world data, we evaluate our method on synthetic data, where we have full control over the data generating process. To do this, we develop a synthetic time series data generation algorithm which allows specifying arbitrary unnormalized probability density functions (PDFs) for each of the \(d\) predictor variables. It then generates \(N\) time series of the form \(\mathbf{X}^{(i)}\in\mathbb{R}^{d\times T}\), along with \(N\) binary response variables \(y^{(i)}\in\{0,1\}\). We present a detailed description of the algorithm in Section 4 of the supplementary material.
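The generation algorithm itself is only specified in the paper's supplementary material, which is not reproduced here. As a rough illustration only, one standard way to draw values from an arbitrary unnormalized PDF on a bounded support is rejection sampling, sketched below with a hypothetical bimodal density; the paper's actual generator may differ:

```python
import numpy as np

def rejection_sample(pdf, n, lo, hi, n_grid=10_000):
    """Draw n samples from an unnormalized pdf supported on [lo, hi],
    using plain rejection sampling with a uniform proposal."""
    xs = np.linspace(lo, hi, n_grid)
    m = pdf(xs).max() * 1.05            # envelope constant with some head-room
    out = []
    while len(out) < n:
        x = np.random.uniform(lo, hi, size=n)
        u = np.random.uniform(0.0, m, size=n)
        out.extend(x[u < pdf(x)])       # accept points under the pdf curve
    return np.asarray(out[:n])

# Hypothetical bimodal, unnormalized density (not one of the paper's PDFs)
f = lambda x: np.exp(-(x + 2) ** 2) + 0.5 * np.exp(-(x - 3) ** 2)
samples = rejection_sample(f, n=1000, lo=-6.0, hi=8.0)
```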
For our experiments, we generated \(N_{\mathcal{D}}=100\) datasets, each with \(N=50\,000\) time series of length \(T=10\) and dimensionality \(d=3\). The three predictor variables were configured to be distributed as follows: \[f_{1}(x) =10\cdot\Phi_{\mathcal{N}}\left\{10\left(x+4\right)\right\}\cdot p_{\mathcal{N}}(x+4)\] \[\quad+\mathbb{I}_{(8,9.5)}(x)\cdot e^{x-8}/10, \tag{6}\] \[f_{2}(x) =\left\{\begin{array}{ll}20\cdot p_{\mathcal{N}}(x-20),&\text{ if }x>\pi,\\ e^{x/6}\cdot\left\{10\sin(x)+10\right\},&\text{ if }x\leq\pi,\end{array}\right.\] \tag{7}\] \[f_{3}(x) =2\cdot\Phi_{\mathcal{N}}\left\{-4(x-4)\right\}\cdot p_{\mathcal{N}}(x-4), \tag{8}\] where \(p_{\mathcal{N}}(\cdot)\) and \(\Phi_{\mathcal{N}}(\cdot)\) denote the PDF and cumulative distribution function (CDF) of the standard normal distribution, and \(\mathbb{I}_{\mathcal{A}}(\cdot)\) is the indicator function on the set \(\mathcal{A}\). Samples from the dataset are visualised in Figure 3. We train and evaluate an RNN model with the architecture described earlier on each of the \(N_{\mathcal{D}}\) datasets using an 80%-20% train-validation split. Our results are presented in Table 1, where the binary cross-entropy (BCE) loss and the accuracy on the validation set are used as evaluation metrics. From our experiments on the synthetic datasets, we observe that the model performance is more unstable when no preprocessing is applied, as seen from the increased variance in Table 1. We also observe that \(z\)-score normalization only gives minor performance improvements when compared to no preprocessing, aside from reducing the variance. As we have perfect information about the underlying data generation mechanism from Equations (6), (7) and (8), we also compared our methods to what we refer to as _CDF inversion_, where each observation is transformed by the CDF of its corresponding distribution, and then transformed via \(\Phi_{\mathcal{N}}^{-1}(\cdot)\), giving predictor variables with standard normal distributions. We also apply this method to the real-world datasets in Section 4.2 and Section 4.3, but since the true PDFs are unknown in those settings, we estimate the CDFs using quantiles from the distribution of the training data. Figure 3: Histogram across timesteps \(t=1,\dots,T\) of the \(d=3\) predictor variables from the synthetic data. Out of all the methods not using information about the underlying data generation mechanism, the global-aware version of EDAIN demonstrates superior performance. It also almost performs as well as CDF inversion, which is able to perfectly normalize each predictor variable via its data generation mechanism. Finally, we observe that the local-aware methods perform even worse than no preprocessing: this might be due to the ordering not being preserved, as discussed in Section 3.1. ### Default Prediction Dataset The first real-world dataset we consider is the publicly available default prediction dataset published by American Express Howard et al. (2022), which contains data from \(N=458\,913\) credit card customers. For each customer, a vector of \(d=188\) aggregated profile features has been recorded at \(T=13\) different credit card statement dates, producing a multivariate time series of the form \(\mathbf{X}^{(i)}\in\mathbb{R}^{d\times T}\), \(i=1,2,\ldots,N\). Given \(\mathbf{X}^{(i)}\), the task is to predict a binary label \(y^{(i)}\in\{0,1\}\) indicating whether the \(i\)-th customer defaulted at any point within 18 months after the last observed data point in the time series.
A default event is defined as not paying back the credit card balance amount within 120 days after the latest statement date Howard et al. (2022). Out of the \(d=188\) features, only the 177 numerical variables are preprocessed. To evaluate the different preprocessing methods, we perform 5-fold cross validation, which produces evaluation metrics for five different 20% validation splits. The evaluation metrics we consider are the validation BCE loss and a metric that was proposed by Howard et al. (2022) for use with this dataset, which we refer to as the _Amex metric_. This metric is calculated as \(M=0.5\cdot(G+D)\), where \(D\) is the default rate captured at 4% (corresponding to the proportion of positive labels captured within the highest-ranked 4% of the model predictions) and \(G\) is the normalized Gini coefficient (see Section 5 of the supplementary material and, for example, Lerman and Yitzhaki, 1989). Our results are reported in Table 2. From the table, it can be inferred that neglecting the preprocessing step deteriorates the performance significantly. Moreover, we observe that the local-aware methods (local-aware EDAIN, BIN, and DAIN) all perform worse than \(z\)-score normalization. This might be because the data in the default prediction dataset is mostly distributed around one central mode, and the local-aware methods discard this information in favour of forming a common representation space. Another possible reason is that the local-aware methods do not preserve the relative ordering between data points \(\mathbf{X}^{(i)}\) as per Equation (5), which might be detrimental for these types of datasets. As discussed in the previous section, predictor variables such as credit scores may be present in the dataset, which should only be preprocessed via monotonic transformations. From the results presented in Table 2, we can also conclude that the proposed global-aware version of the EDAIN layer shows superior average performance when compared to all alternative preprocessing methods. \begin{table} \begin{tabular}{l|c c} \hline Preprocessing method & BCE loss & Binary accuracy (\%) \\ \hline No preprocessing & \(0.1900\pm 0.0362\) & \(91.68\pm 1.68\) \\ \(z\)-score normalization & \(0.1873\pm 0.0108\) & \(91.73\pm 0.65\) \\ CDF inversion & \(0.1627\pm 0.0094\) & \(92.89\pm 0.55\) \\ EDAIN-KL & \(0.1760\pm 0.0094\) & \(92.24\pm 0.59\) \\ EDAIN (local-aware) & \(0.2099\pm 0.0095\) & \(90.71\pm 0.57\) \\ **EDAIN (global-aware)** & \(\mathbf{0.1636\pm 0.0086}\) & \(\mathbf{92.83\pm 0.51}\) \\ BIN & \(0.2191\pm 0.0103\) & \(90.36\pm 0.59\) \\ DAIN & \(0.2153\pm 0.0146\) & \(90.48\pm 0.78\) \\ \hline \end{tabular} \end{table} Table 1: Experimental results on synthetic data, with 95% normal confidence intervals (\(\mu\pm 1.96\sigma\)) calculated across \(N_{\mathcal{D}}=100\) datasets. The gold-standard method for this simulation (CDF inversion) is underlined.
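As an illustration, the Amex metric described above could be computed as follows. This is a simplified, unweighted reading of the description in the text (the exact formulation is given in the paper's supplementary material), and all helper names are ours:

```python
import numpy as np

def default_rate_captured_at_4pct(y_true, y_score):
    """Proportion of positive labels captured in the top 4% of predictions (D)."""
    order = np.argsort(-y_score)
    top = order[: int(0.04 * len(y_true))]
    return y_true[top].sum() / y_true.sum()

def normalized_gini(y_true, y_score):
    """Gini coefficient of the model normalized by that of a perfect model (G)."""
    def gini(y, s):
        y = y[np.argsort(-s)]
        lorenz = np.cumsum(y) / y.sum()              # positives captured vs. rank
        diag = np.linspace(1 / len(y), 1.0, len(y))  # random-model baseline
        return (lorenz - diag).sum() / len(y)
    return gini(y_true, y_score) / gini(y_true, y_true)

def amex_metric(y_true, y_score):
    return 0.5 * (normalized_gini(y_true, y_score)
                  + default_rate_captured_at_4pct(y_true, y_score))

# Synthetic usage example with informative but noisy scores
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.25, size=10_000)
score = y * 0.5 + rng.normal(0.0, 0.5, size=10_000)
print(amex_metric(y, score))
```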
\begin{table} \begin{tabular}{l|c c} \hline Preprocessing method & BCE loss & Amex metric \\ \hline No preprocessing & \(0.3242\pm 0.0148\) & \(0.6430\pm 0.0195\) \\ \(z\)-score normalization & \(0.2213\pm 0.0039\) & \(0.7872\pm 0.0068\) \\ CDF inversion & \(0.2215\pm 0.0040\) & \(0.7861\pm 0.0072\) \\ EDAIN-KL & \(0.2218\pm 0.0040\) & \(0.7858\pm 0.0060\) \\ EDAIN (local-aware) & \(0.2245\pm 0.0035\) & \(0.7813\pm 0.0057\) \\ **EDAIN (global-aware)** & \(\mathbf{0.2199\pm 0.0034}\) & \(\mathbf{0.7890\pm 0.0078}\) \\ BIN & \(0.2237\pm 0.0038\) & \(0.7829\pm 0.0064\) \\ DAIN & \(0.2224\pm 0.0035\) & \(0.7847\pm 0.0054\) \\ \hline \end{tabular} \end{table} Table 2: Experimental results on the Amex default prediction dataset, with 95% normal asymptotic confidence intervals (\(\mu\pm 1.96\sigma\)) calculated across folds. Figure 4: BCE cross-validation loss across different folds in the Amex default prediction dataset.
EDAIN has four adaptive sublayers (outlier mitigation, shift, scale, and power transform). It also has two versions (local-aware and global-aware), which apply local or global transformations to each time series. Also, we proposed a computationally efficient variant of EDAIN, optimised via the Kullback-Leibler divergence, named EDAIN-KL. The EDAIN layer's ability to increase the predictive performance of the deep neural network was evaluated on a synthetic dataset, a default prediction dataset, and a financial forecasting dataset. On all datasets considered, either the local-aware or global-aware version of the proposed EDAIN layer consistently demonstrated superior performance. In Section 4.3, we observed that the local-aware preprocessing methods gave significantly better performance than the global-aware version of EDAIN and \(z\)-score normalization. However, in Section 4.1 and Section 4.2 the opposite is observed, with the global-aware version of EDAIN demonstrating superior performance and the local-aware methods being outperformed by \(z\)-score normalization. We hypothesize these differences occur because local-aware methods do not preserve the relative ordering between observations, while the global-aware EDAIN method does. In the financial forecasting dataset, which is highly multi-modal, it appears that the observations' feature values relative to their mode are more important than their absolute ordering. Such considerations should be taken into account when deciding what adaptive preprocessing method is most suitable for application on new data. There are several directions for future work. Passalis et al. (2021) observed that the performance improvements from DAIN differed greatly between different deep neural network architectures. Therefore, the effectiveness of EDAIN with other architectures could be further explored as only GRU-based RNNs were considered in this work. Additionally, with the proposed EDAIN method, one has to manually decide whether to apply local-aware or global-aware preprocessing. This drawback could be eliminated by extending the proposed neural layer to apply both schemes \begin{table} \begin{tabular}{l|c c} \hline \hline Preprocessing method & Cohen’s \(\kappa\) & Macro-\(F_{1}\)-score \\ \hline No preprocessing & \(0.0035\pm 0.0049\) & \(0.2859\pm 0.0228\) \\ \(z\)-score normalization & \(0.2772\pm 0.0550\) & \(0.5047\pm 0.0403\) \\ CDF inversion & \(0.3618\pm 0.0598\) & \(0.5798\pm 0.0373\) \\ EDAIN-KL & \(0.2870\pm 0.0642\) & \(0.5104\pm 0.0519\) \\ **EDAIN (local-aware)** & \(0.3836\pm 0.0554\) & \(0.5946\pm 0.0431\) \\ EDAIN (global-aware) & \(0.2820\pm 0.0706\) & \(0.5111\pm 0.0648\) \\ BIN & \(0.3670\pm 0.0640\) & \(0.5889\pm 0.0479\) \\ DAIN & \(0.3588\pm 0.0506\) & \(0.5776\pm 0.0341\) \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results on the FI-2010 LOB dataset, with 95% normal asymptotic confidence intervals (\(\mu\pm 1.96\sigma\)) calculated across 9 anchored folds. to each feature and adaptively learning which version is most suitable. Another common irregularity in real-world data is missing values (Nawi et al., 2013; Cao et al., 2018). A possible direction would be to extend EDAIN with an adaptive method for treating missing values that makes minimal assumptions on the data generation mechanism (for example, Cao et al., 2018).
2304.09837
Points of non-linearity of functions generated by random neural networks
We consider functions from the real numbers to the real numbers, output by a neural network with 1 hidden activation layer, arbitrary width, and ReLU activation function. We assume that the parameters of the neural network are chosen uniformly at random with respect to various probability distributions, and compute the expected distribution of the points of non-linearity. We use these results to explain why the network may be biased towards outputting functions with simpler geometry, and why certain functions with low information-theoretic complexity are nonetheless hard for a neural network to approximate.
David Holmes
2023-04-19T17:40:19Z
http://arxiv.org/abs/2304.09837v1
# Points of non-linearity of functions generated by random neural networks ###### Abstract We consider functions from \(\mathbb{R}\to\mathbb{R}\) output by a neural network with 1 hidden activation layer, arbitrary width, and ReLU activation function. We assume that the parameters of the neural network are chosen uniformly at random with respect to various probability distributions, and compute the expected distribution of the points of non-linearity. We use these results to explain why the network may be biased towards outputting functions with simpler geometry, and why certain functions with low information-theoretic complexity are nonetheless hard for a neural network to approximate. ## 1 Introduction It has been suggested [21, 11, 12] that neural networks are biased in favour of outputting 'simple' functions. The above papers interpret simplicity in an information-theoretic fashion, suggesting that functions output by neural networks tend to have small information-theoretic complexity. The goal of this paper is to illustrate that, at least in some contexts, more 'geometric' notions of simplicity may capture this bias more accurately. One example of this may be seen by considering a simple periodic function such as the triangular sawtooth \(x\mapsto|x-\lfloor x\rfloor-\frac{1}{2}|\). This function has low information-theoretic complexity, but is relatively hard [13] for a neural network with ReLU (or other non-periodic) activation to learn. We will show that the points of non-linearity of a function output by a neural network tend to be either few in number, or clustered together in one place, depending on the setup. Either way, this illustrates why an information-theoretically simple function (such as a periodic function) can nonetheless be hard for a neural network to approximate. ### Neural networks with random weights According to the heuristics of [14], training a neural network by stochastic gradient descent may be well-approximated by assigning the weights and biases at random, _conditional_ on a good fit with the training data. For the sake of simplicity, in this preliminary work we do not use training data; we work simply with random neural networks, obtained by assigning each neuron a weight and bias chosen at random. Again for simplicity, we consider a restricted class of neural networks, with a single (ReLU) activation layer of width \(w\), and exactly one input and one output neuron. A choice of weights and biases for the neurons thus produces a piecewise linear function from \(\mathbb{R}\to\mathbb{R}\). A function produced in this way has a finite set of points of non-linearity; in fact there are at most \(w\) such points. Their distribution will of course depend on the distribution from which the weights and biases of the neurons are selected; below we compute precisely the distribution of the points of non-linearity for three different choices of distribution on the weights and biases of the neurons. Perhaps surprisingly, we will see that the distribution of the points of non-linearity depends very heavily on the distribution of the parameters. If the bias towards simple functions underlies generalisation properties of over-parameterised neural networks (as proposed in [14]), this may help to explain why some gradient descent schemes generalise better than others (as they approximate random sampling with respect to distributions more heavily favouring simple functions).
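To make this setup concrete, the following sketch (ours, not from the paper) samples such a width-\(w\) random ReLU network and locates its points of non-linearity; for generic non-zero output weights, neuron \(j\) is non-linear exactly at \(x=-b_{j}/a_{j}\), so the empirical count of points in \((-R,R)\) can be compared with the closed-form probabilities derived in the main results below:

```python
import numpy as np

def kinks_in_window(a, b, R):
    """Points of non-linearity of x -> sum_j c_j ReLU(a_j x + b_j) in (-R, R).
    For generic (non-zero) output weights c_j, neuron j is non-linear exactly
    at x = -b_j / a_j, so the c_j are not needed to locate the kinks."""
    mask = a != 0.0
    x = -b[mask] / a[mask]
    return x[np.abs(x) < R]

rng = np.random.default_rng(0)
w, R, trials = 10, 2.0, 20_000

# 'Rectangular' parameters: weights and biases uniform on (-T, T); the kink
# count is independent of T, so T = 1 suffices here.
counts = [len(kinks_in_window(rng.uniform(-1, 1, w), rng.uniform(-1, 1, w), R))
          for _ in range(trials)]
print(np.mean(counts))   # close to w * (1 - 1/(2R)) = 7.5 for w = 10, R = 2
```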
### Main results Our neural network has parameters taking values in some measurable subset \(\Theta\) of a real vector space. The probabilities of seeing a given number of points of non-linearity turn out to be highly dependent on the _shape_ of the parameter space \(\Theta\), but independent of the _size_ of \(\Theta\). We will consider three different 'shapes': one 'rectangular' (theorem 1.1), one 'Gaussian' (theorem 1.2), and one 'spherical' (theorem 1.3). We consider neural networks with one hidden activation layer of width \(w\) (a positive integer); see section 2 for a formal description of this setup. We fix \(R\in(0,\infty]\), and view the resulting real-valued function as being defined on the interval \((-R,R)\). #### 1.2.1 Rectangular parameter space Here we fix a positive real number \(T\), and then choose both the weight and the bias for each of the \(w\) neurons independently and uniformly at random from the interval \((-T,T)\). In other words, the vector of biases is chosen uniformly at random from a box \([-T,T]^{w}\subseteq\mathbb{R}^{w}\), and the same holds for the vector of weights. A PL function generated in this way has at most \(w\) points of non-linearity (lemma 3.1). In fact, for \(R\) finite, the number of points of non-linearity follows a binomial distribution: **Theorem 1.1** (proposition 3.3).: _Suppose \(R\in(0,\infty)\subseteq\mathbb{R}\). Given any \(w^{\prime}\in\{0,1,\ldots,w\}\), the probability of a function generated by this neural network having exactly \(w^{\prime}\) points of non-linearity is_ \[\binom{w}{w^{\prime}}\mathcal{P}^{w^{\prime}}(1-\mathcal{P})^{w-w^{\prime}}\] _where \(\binom{w}{w^{\prime}}\) is the binomial coefficient, and_ \[\mathcal{P}=\begin{cases}\frac{R}{2}&\text{if }0<R\leq 1\\ 1-\frac{1}{2R}&\text{if }R\geq 1.\end{cases} \tag{1.2.1}\] _In particular, the expected number of points of non-linearity is given by_ \[\mathbb{E}(\#D_{\theta})=w\mathcal{P}<w.\] Functions on an unbounded domain. Suppose that we use the same parameter space \(\Theta\), but we now take \(R=\infty\); in other words, we view our PL functions as having domain \(\mathbb{R}\). Then for almost all \(\theta\in\Theta\), the PL function will have \(w\) points of non-linearity. However, the distribution of these points is far from uniform. Differentiating the above result, we find that the probability density function of the distribution of these \(w\) points is given by (see fig. 1) \[\mathbb{P}(|x|=r)=\begin{cases}\frac{w}{2}&\text{if }0\leq r\leq 1\\ \frac{w}{2r^{2}}&\text{if }1\leq r.\end{cases} \tag{1.2.2}\] #### 1.2.2 Gaussian parameter space We adopt the same notation as in the rectangular case, but instead of choosing weights and biases uniformly at random from an interval \([-T,T]\), we now fix a positive real number \(\nu\) and choose both the weight and bias for each neuron at random from a normal distribution with mean \(0\) and variance \(\nu\). In other words, the vector of biases (the 'affine part' at each neuron) is chosen at random from a product of normal distributions on \(\mathbb{R}^{w}\), and the same holds for the vector of weights (the 'linear part' at each neuron). Almost exactly the same formulae hold as above, except the expression for \(\mathcal{P}\) is different: **Theorem 1.2** (lemma 4.1).: _Suppose \(R\) is finite. 
Given any \(w^{\prime}\in\{1,2,\ldots,w\}\), the probability of a function generated by this neural network having exactly \(w^{\prime}\) points of non-linearity is_ \[\binom{w}{w^{\prime}}\mathcal{P}^{w^{\prime}}(1-\mathcal{P})^{w-w^{\prime}}\] _where \(\binom{w}{w^{\prime}}\) is the binomial coefficient, and_ \[\mathcal{P}=\frac{2}{\pi}\arctan R. \tag{1.2.3}\] _In particular, the expected number of points of non-linearity is given by_ \[\mathbb{E}(\#D_{\theta})=w\mathcal{P}<w.\] Unbounded domainAgain, in the case \(R=\infty\), differentiating the above result shows that the distribution of the \(w\) points of non-linearity is given by (see fig. 1) \[\mathbb{P}(|x|=r)=\frac{2w}{\pi(1+r^{2})}. \tag{1.2.4}\] #### 1.2.3 Spherical parameter space We again fix a positive real number \(T\), and choose the _biases_ of the neurons uniformly and independently at random from an interval \([-T,T]\). However, the weights (i.e. the linear part of the affine linear transformation at each neuron) are chosen uniformly at random from a sphere of radius \(T\) in \(\mathbb{R}^{w}\). In other words, the vector of biases is chosen at random from a product of normal distributions on \(\mathbb{R}^{w}\), and the vector of weights is chosen uniformly at random in \[\{L\in\mathbb{R}^{w}:|L|\leq T\}\subseteq\mathbb{R}^{w}.\] In this context it does not seem so easy to compute the exact probability of a given number of points of non-linearity occurring. However, at least for \(R\leq 1\), we can compute the expected number of points of non-linearity. **Theorem 1.3** (proposition 5.2).: _Assume \(0<R\leq 1\). For \(w\) even we have_ \[\mathbb{E}(\#D_{\theta})=\frac{Rw2^{w}}{(w+1)\pi}{w-1\choose w/2}^{-1}\sim R\sqrt{ 2w/\pi}. \tag{1.2.5}\] _For odd \(w\) we have_ \[\mathbb{E}(\#D_{\theta})=\frac{Rw^{2}}{2^{w-1}(w+1)}{w-1\choose(w-1)/2}\sim R \sqrt{2w/\pi}. \tag{1.2.6}\] _Here \(\sim\) means that the ratio tends to \(1\) as \(w\) tends to infinity._ Since \(\sqrt{w}\) is much smaller than \(w\), this indicates that such functions tend strongly to having few points of linearity. ### Possible generalisations and extensions #### 1.3.1 Training data If the training data forces \(w\) points of non-linearity, then of course they will occur with probability \(1\) among parameters fitting the training data. However, the fact that we generally work with highly over-parametrised models says that this will in general not be the setup. We expect that similar results will hold (and be provable with similar techniques) in the presence of training data, as long as certain 'overparametrisation' conditions are satisfied. Figure 1: Distribution of points of non-linearity for \(w=10\) and rectangular parameter distribution (blue) and Gaussian parameter distribution (red). #### 1.3.2 Higher dimensions Instead of looking at a single input neuron (i.e. real valued functions), we can consider any finite number \(i\) of input neurons, leading to functions \(\mathbb{R}^{i}\to\mathbb{R}\). Here the locus of points where the function is not linear will not be finite; as such, instead of using the cardinality of that set as a measure of simplicity, we will instead use its Hausdorff measure. We believe that similar results can be shown, by similar methods. #### 1.3.3 Different activation functions Our results are valid not just for the ReLU activation function, but in fact any piecewise-linear activation function which has a unique point of non-linearity at the origin. Generalising to other PL activation functions will require no major changes. 
For differentiable activation functions which are asymptotically linear, we can replace the measure of the locus of points of non-linearity by (for example) the integral of the square of the largest eigenvalue of the Hessian. Again, we expect similar results, but different techniques will be required to prove them. ### Acknowledgements The author is grateful to Joar Skalse for helpful comments. ## 2 Parameter space notation The _activation_ is a function \(\varphi\colon\mathbb{R}\to\mathbb{R}\) which is linear exactly away from \(0\) (for example, the ReLU activation \(x\mapsto\max(x,0)\)). We consider neural networks with one hidden activation layer, defining functions from \((-R,R)\subseteq\mathbb{R}\) to \(\mathbb{R}\). We write \(w\) for the width (a positive integer). Our parameter space \(\Theta\) for the neural network is naturally a product of \(4\) pieces: 1. a linear part in the first layer; this gives a subspace of \(\mathbb{R}^{w}\), denoted \(\Theta_{L}\) 2. a translation part in the first layer; this again gives a subspace of \(\mathbb{R}^{w}\), denoted \(\Theta_{T}\) 3. a linear part in the final layer; this again gives a subspace of \(\mathbb{R}^{w}\), denoted \(\Theta^{\prime}_{L}\) 4. a translation part in the final layer; this gives a subspace of \(\mathbb{R}\), denoted \(\Theta^{\prime}_{T}\). So \(\Theta=\Theta_{L}\times\Theta_{T}\times\Theta^{\prime}_{L}\times\Theta^{\prime}_{T}\). A point \(\theta=(\theta_{L},\theta_{T},\theta^{\prime}_{L},\theta^{\prime}_{T})\in\Theta\) determines a function \[f_{\theta}\colon\mathbb{R}\to\mathbb{R} \tag{2.0.1}\] by the formula \[x\mapsto\theta^{\prime}_{T}+\theta^{\prime}_{L}\cdot\varphi^{w}(\theta_{T}+x \cdot\theta_{L}), \tag{2.0.2}\] where \(\varphi^{w}\colon\mathbb{R}^{w}\to\mathbb{R}^{w}\) applies the activation function \(\varphi\) to each coordinate. Now \(\Theta^{\prime}_{L}\) and \(\Theta^{\prime}_{T}\) have no effect on the number of points of non-linearity (outside some measure-zero subset of \(\Theta^{\prime}_{L}\), which we ignore). So it suffices to describe the spaces \(\Theta_{L}\) and \(\Theta_{T}\), and their probability measures. ## 3 Rectangular weights and biases We fix a positive real number \(T\). We define \[\Theta_{L}=\Theta_{T}=[-T,T]^{w}, \tag{3.0.1}\] a box of side-length \(2T\) and dimension \(w\), centred at \(0\in\mathbb{R}^{w}\). We equip it with the Lebesgue measure. In other words, * the 'translation' part of the first layer is chosen uniformly at random between \(-T\) and \(T\) at each neuron; * the scaling factor at each neuron in the first layer is chosen uniformly at random between \(-T\) and \(T\). Given \(\theta\in\Theta\), we write \(D_{\theta}\subseteq(-R,R)\) for the set of points of non-linearity of the function given by the parameters \(\theta\). **Lemma 3.1**.: \(\#D_{\theta}\leq w\)_, and this maximum can be achieved._ Proof.: Write \(X\subseteq\mathbb{R}^{w}\) for the image of the domain \((-R,R)\) under the affine map \(x\mapsto\theta_{T}+x\cdot\theta_{L}\). Suppose \(X\) is not contained in a coordinate hyperplane of \(\mathbb{R}^{w}\). Then \(D_{\theta}\) is exactly the preimage of the intersection of \(X\) with the coordinate hyperplanes in \(\mathbb{R}^{w}\), and there are at most \(w\) such intersection points. On the other hand, if \(X\) is contained in the intersection of exactly \(w^{\prime}\) of the coordinate hyperplanes, then \(X\) hits at most \(w-w^{\prime}\) other coordinate hyperplanes. Fix \(w^{\prime}\in\{0,1,\ldots,w\}\). We compute \(\mathbb{P}(\#D_{\theta}=w^{\prime})\). 
A point \(c\in\mathbb{R}^{w}\) is chosen uniformly at random in a box around \(0\) of side-length \(2T\). Another point \(l\in\mathbb{R}^{w}\) is chosen uniformly at random in a box around \(0\) of side-length \(2RT\). Then we consider the line segment in \(\mathbb{R}^{w}\) joining \(c-l\) and \(c+l\), and we want to compute the probability of this segment meeting any of the coordinate hyperplanes; given \(1\leq i\leq w\) write \(\mathcal{P}_{i}\) for the probability of our line segment meeting the \(i\)th coordinate hyperplane; this is independent of \(i\), so we also write it \(\mathcal{P}\). Note that the segment meets the \(i\)th hyperplane exactly when \(|l_{i}|\geq|c_{i}|\). **Lemma 3.2**.: _If \(R\geq 1\) then \(\mathcal{P}=1-\frac{1}{2R}\). If \(0<R\leq 1\) then \(\mathcal{P}=\frac{R}{2}\)._ Proof.: Without loss of generality, \(i=1\). Then for fixed \(c\) the probability of the segment crossing the hyperplane \(x_{1}=0\) is \[\max\left(1-\frac{|c_{1}|}{RT},0\right).\] Integrating over \(|c_{1}|\) from \(0\) to \(T\) yields the result. Since the probabilities of hitting the various coordinate hyperplanes are independent, we deduce **Proposition 3.3**.: _For \(w^{\prime}\in\{0,1,\ldots,w\}\) the probability of a function generated by this neural network having exactly \(w^{\prime}\) points of non-linearity is_ \[\mathbb{P}(\#D_{\theta}=w^{\prime})=\binom{w}{w^{\prime}}\mathcal{P}^{w^{ \prime}}(1-\mathcal{P})^{w-w^{\prime}}. \tag{3.0.2}\] _The expected number of points of non-linearity is given by_ \[\mathbb{E}(\#D_{\theta})=\sum_{w^{\prime}\in\{0,\ldots,w\}}\frac{\mu(\Theta_ {w^{\prime}})}{\mu(\Theta)}w^{\prime}=w\mathcal{P}. \tag{3.0.3}\] ## 4 Gaussian weights and biases The Gaussian version is very similar, except that instead of the width \(T\) of the interval, we work with the variance \(\nu\) of the normal distribution. We have \(\Theta_{L}=\Theta_{T}=\mathbb{R}^{w}\), each of which is equipped with a product of normal distributions with variance \(\nu\). As before we write \(D_{\theta}\subseteq(-R,R)\) for the set of points of non-linearity for the function produced by some \(\theta\in\Theta\). Given \(w^{\prime}\in\{0,1,\ldots,w\}\), we begin by computing \(\mathbb{P}(\#D_{\theta}=w^{\prime})\). A point \(c\in\mathbb{R}^{w}\) has coordinates chosen independently from a normal distribution with variance \(\nu\). Another point \(l\in\mathbb{R}^{w}\) has coordinates chosen independently from a normal distribution with variance \(R^{2}\nu\). Then we consider the line segment in \(\mathbb{R}^{w}\) joining \(c-l\) and \(c+l\), and we want to compute the probability of this segment meeting any of the coordinate hyperplanes; given \(1\leq i\leq w\) write \(\mathcal{P}_{i}\) for the probability of our line segment meeting the \(i\)th coordinate hyperplane; this is independent of \(i\), so we also write it \(\mathcal{P}\). **Lemma 4.1**.: \[\mathcal{P}=\mathbb{P}(|\mathcal{N}(0,\nu R^{2})|\geq|\mathcal{N}(0, \nu)|)=\frac{2}{\pi}\arctan R.\] (4.0.1) Proof.: The first equality is the definition. 
For the second, after rescaling by \(\sqrt{\nu}\) we may assume \(\nu=1\); writing \(\varphi(z)=\frac{e^{-z^{2}/2}}{\sqrt{2\pi}}\) for the probability density function of \(\mathcal{N}(0,1)\), the rotational symmetry of \(\varphi(x)\varphi(y)\) yields \[\mathbb{P}(|\mathcal{N}(0,R^{2})|\geq|\mathcal{N}(0,1)|) =\mathbb{P}(|\mathcal{N}(0,R^{2})/\mathcal{N}(0,1)|\geq 1)\] \[=\mathbb{P}(|\mathcal{N}(0,1)/\mathcal{N}(0,1)|\geq 1/R)\] \[=\frac{2}{\pi}\arctan R.\qed\] Since the probabilities of hitting the various coordinate hyperplanes are again independent, the exact same formulae as in proposition 3.3 hold; for \(w^{\prime}\in\{0,1,\ldots,w\}\) the probability of a function generated by this neural network having exactly \(w^{\prime}\) points of non-linearity is \[\mathbb{P}(\#D_{\theta}=w^{\prime})=\binom{w}{w^{\prime}}\mathcal{P}^{w^{ \prime}}(1-\mathcal{P})^{w-w^{\prime}}, \tag{4.0.2}\] and the expected number of points of non-linearity is given by \[\mathbb{E}(\#D_{\theta})=\sum_{w^{\prime}\in\{0,\ldots,w\}}\frac{\mu(\Theta_ {w^{\prime}})}{\mu(\Theta)}w^{\prime}=w\mathcal{P}. \tag{4.0.3}\] ## 5 Spherical weights, uniform biases We again fix a positive real parameter \(T\). We define \(\Theta_{T}=[-T,T]^{w}\subseteq\mathbb{R}^{w}\), and \[\Theta_{L}=\{L\in\mathbb{R}^{w}:|L|\leq T\}\subseteq\mathbb{R}^{w}, \tag{5.0.1}\] where \(|L|\) is the Euclidean norm. Just as in the previous cases, we have **Lemma 5.1**.: _For any \(\theta\in\Theta\), we have \(\#D_{\theta}\leq w\), and this maximum can be achieved._ Computing the probabilities precisely (as in proposition 3.3 and lemma 4.1) seems difficult, but by a simple application of classical results from geometric probability we will be able to compute the expected number of points of non-linearity, on small domains. More precisely, we fix a real number \(R\in(0,1]\), and for \(\theta\in\Theta\) we define \(D_{\theta}\) to be the set of points of non-linearity of the resulting function \(f_{\theta}\colon(-R,R)\to\mathbb{R}\). The expected number of points of non-linearity is given by \[\mathbb{E}(\#D_{\theta})=\sum_{w^{\prime}\in\{0,\ldots,w\}}\mathbb{P}(\#D_{ \theta}=w^{\prime})w^{\prime}. \tag{5.0.2}\] For example, if \(\mathbb{E}(\#D_{\theta})\approx w\) this would tell us that most choices of parameters yield \(\#D_{\theta}=w\). On the other hand, if \(\mathbb{E}(\#D_{\theta})\approx 0\) this would tell us that most choices of parameters give an affine-linear function. **Proposition 5.2**.: _Assume \(0<R\leq 1\). For \(w\) even we have_ \[\mathbb{E}(\#D_{\theta})=\frac{Rw2^{w}}{(w+1)\pi}{w-1\choose w/2}^{-1}\sim R\sqrt{ 2w/\pi}. \tag{5.0.3}\] _For odd \(w\) we have_ \[\mathbb{E}(\#D_{\theta})=\frac{Rw^{2}}{2^{w-1}(w+1)}{w-1\choose(w-1)/2}\sim R \sqrt{2w/\pi}. \tag{5.0.4}\] _Here \(\sim\) means that the ratio tends to \(1\) as \(w\) tends to infinity._ Proof.: As before, only the first-layer map \[f_{1}\colon\mathbb{R}\to\mathbb{R}^{w};x\mapsto\theta_{T}+x\cdot\theta_{L} \tag{5.0.5}\] of the map \(f_{\theta}\) has any impact on the number \(\#D_{\theta}\); more precisely, \(\#D_{\theta}\) is the number of intersection points of the image of \([-R,R]\) under \(f_{1}\) with the coordinate hyperplanes in \(\mathbb{R}^{w}\) (excluding the measure-zero case where the image crosses the intersection of two or more coordinate hyperplanes). We now relate the problem to a variation on Buffon's needle. The image of \([-R,R]\) is a line segment in \(\mathbb{R}^{w}\), with centre a point in \([-T,T]^{w}\) chosen uniformly at random, and endpoint chosen uniformly at random in a ball of radius \(RT\) around that centre. 
We want to compute the expected number of intersection points with the coordinate hyperplanes. For now we fix the length \(2s\in[0,2RT]\) of the needle, and compute the expectation; later we will integrate over \(s\). By additivity of expectations, we are reduced to computing the expected number \(\frac{1}{w}\mathbb{E}(\#D_{\theta})\) of intersection points with a single coordinate hyperplane. By symmetry, the expected number of intersection points with a coordinate hyperplane is the same as the expected number of intersection points of a needle of length \(2s\), dropped uniformly at random in \(\mathbb{R}^{w}\), with the subset \[\{x\in\mathbb{R}^{w}:x_{1}\in 2T\mathbb{Z}\}. \tag{5.0.6}\] By [10, page 130] this expectation is given by (we write \(\Omega\) where Klain and Rota write \(\omega\), to make the distinction from the width \(w\) clearer) \[\mathbb{E}=\frac{\Omega_{1}\Omega_{w-1}}{w\Omega_{w}}\frac{s}{T} \tag{5.0.7}\] where for a non-negative integer \(k\) we have \[\Omega_{2k}=\frac{\pi^{k}}{k!} \tag{5.0.8}\] \[\Omega_{2k+1}=\frac{2^{2k+1}\pi^{k}k!}{(2k+1)!}. \tag{5.0.9}\] We find for even \(w\) that \[\mathbb{E}=\frac{2^{w}s}{w\pi T}{w-1\choose w/2}^{-1} \tag{5.0.10}\] and for odd \(w\) that \[\mathbb{E}=\frac{s}{2^{w-1}T}{w-1\choose(w-1)/2}. \tag{5.0.11}\] To simplify subsequent computations, we write \(\mathbb{E}^{\prime}=T\mathbb{E}/s\), which depends only on \(w\). To complete the computation of the expectations we must integrate over \(s\in[0,RT]\). However, we do not integrate with respect to the uniform distribution on \([0,RT]\); rather we want the endpoint of our needle to be chosen uniformly in a ball. As such, the expectation for hitting one hyperplane is \[\frac{1}{w}\mathbb{E}(\#D_{\theta})=B(w,TR)^{-1}\int_{s=0}^{RT}\frac{s}{T}\mathbb{ E}^{\prime}S(w,s)ds \tag{5.0.12}\] where \[B(w,TR)=\frac{\pi^{w/2}}{\Gamma(\frac{w}{2}+1)}(TR)^{w}\] is the volume of a ball of radius \(TR\) and dimension \(w\), and \[S(w,s)=\frac{2\pi^{w/2}}{\Gamma(\frac{w}{2})}s^{w-1}\] is the surface area of a ball of dimension \(w\) and radius \(s\). This turns into \[\begin{split}\frac{1}{w}\mathbb{E}(\#D_{\theta})&=\frac {\Gamma(\frac{w}{2}+1)}{\pi^{w/2}(TR)^{w}}\int_{s=0}^{RT}\frac{s}{T}\mathbb{E} ^{\prime}\frac{2\pi^{w/2}}{\Gamma(\frac{w}{2})}s^{w-1}ds\\ &=R\frac{w}{w+1}\mathbb{E}^{\prime}.\end{split} \tag{5.0.13}\] For the asymptotics we apply the central binomial coefficient formula \[{2k\choose k}\sim\frac{4^{k}}{\sqrt{k\pi}}, \tag{5.0.14}\] and for even \(w\) we also use \[{2k-1\choose k-1}=\frac{1}{2}{2k\choose k}. \tag{5.0.15}\]
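The three regimes can be checked numerically. The following Monte Carlo sketch (ours; it assumes the parameter spaces defined above, with \(T=1\) and \(\nu=1\)) estimates \(\mathbb{E}(\#D_{\theta})\) for each distribution and compares it with theorems 1.1-1.3; note that the spherical comparison is against the asymptotic form \(R\sqrt{2w/\pi}\), which slightly overestimates the exact value at \(w=10\).

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_kinks(sample_params, w, R, trials=20000):
    """Monte Carlo estimate of E(#D_theta) on (-R, R)."""
    total = 0
    for _ in range(trials):
        L, b = sample_params(w)
        x = -b[L != 0] / L[L != 0]          # kink of neuron i at x = -b_i / L_i
        total += np.count_nonzero(np.abs(x) < R)
    return total / trials

w, R, T = 10, 0.5, 1.0
rect  = lambda w: (rng.uniform(-T, T, w), rng.uniform(-T, T, w))
gauss = lambda w: (rng.normal(0, 1, w), rng.normal(0, 1, w))
def sphere(w):                               # weights uniform in a ball of radius T
    v = rng.normal(size=w)
    r = T * rng.uniform() ** (1.0 / w)
    return r * v / np.linalg.norm(v), rng.uniform(-T, T, w)

print(expected_kinks(rect,   w, R), w * R / 2)                    # thm 1.1 (R <= 1)
print(expected_kinks(gauss,  w, R), w * 2 / np.pi * np.arctan(R)) # thm 1.2
print(expected_kinks(sphere, w, R), R * np.sqrt(2 * w / np.pi))   # thm 1.3 (asymptotic)
```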
2307.11296
Theoretical analysis of zirconium oxynitride/water interface using neural network potential
Zr oxides and oxynitrides are promising candidates to replace precious metal cathodes in polymer electrolyte fuel cells. Oxygen reduction reaction activity in this class of materials has been correlated with the amount of oxygen vacancies, but a microscopic understanding of this correlation is still lacking. To address this, we simulate a defective Zr$_7$O$_8$N$_4$/H$_2$O interface model and compare it with a pristine ZrO$_2$/H$_2$O interface model. First, ab initio replica exchange Monte Carlo sampling was performed to determine defect segregation at the surface in the oxynitride slab model; then molecular dynamics accelerated by neural network potentials was used to perform 1000 500 ps-long simulations to attain sufficient statistical accuracy of the solid/liquid interface structure. The presence of oxygen vacancies on the surface was found to clearly modify the local adsorption structure: water molecules were found to adsorb preferentially on Zr atoms surrounding oxygen vacancies, but not on the oxygen vacancies themselves. The fact that oxygen vacancy sites are free from poisoning by water molecules may explain the activity enhancement in defective systems. The layering of water molecules was also modified considerably, which should influence the proton and O$_2$ transport near the interface, another parameter that determines the overall activity.
Akitaka Nakanishi, Shusuke Kasamatsu, Jun Haruyama, Osamu Sugino
2023-07-21T01:55:18Z
http://arxiv.org/abs/2307.11296v2
# Structural analysis of zirconium oxynitride/water interface using neural network potential ###### Abstract Zr oxides with oxygen-nitrogen substitutions and oxygen vacancies are promising candidates to replace Pt as an electrocatalyst for the oxygen reduction reaction. To understand the microscopic structure of the catalyst/water interface, many nanosecond-long molecular dynamics simulations were performed using an interatomic force field constructed by machine learning from ab initio calculations. A defective Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\)/H\({}_{2}\)O interface model was simulated and compared with a pristine ZrO\({}_{2}\)/H\({}_{2}\)O interface model. Water was found to adsorb on both surfaces partly as intact H\({}_{2}\)O molecules and partly as dissociated components, OH and H. On the pristine ZrO\({}_{2}\) surface, H\({}_{2}\)O molecules show a monolayer adsorption structure and adsorb on all Zr atoms with equal probability. On the defective Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) surface, however, H\({}_{2}\)O molecules show a bilayer adsorption structure and do not adsorb on the oxygen vacancies but only on some of the surrounding Zr atoms. The total number of molecular and dissociative adsorptions is almost the same on both surfaces, but a larger number are dissociatively adsorbed on the pristine surface. Possible implications for the catalytic behaviour are discussed. ## Introduction Polymer electrolyte fuel cells are attracting attention as energy conversion devices, and their performance depends on the catalytic activity of the oxygen reduction reaction (ORR) taking place on the cathode side of the electrode. Pt has been alloyed or coated to increase its catalytic activity. Although Pt has high catalytic activity, it has problems in terms of durability and cost.[1] Zr oxides[2, 3, 4] and Ti oxides[5, 6, 7, 8] are expected to replace Pt as electrocatalysts due to their high durability, low cost and the high catalytic activity obtained by introducing defects such as oxygen vacancies (V\({}_{\mathrm{O}}\)s) and oxygen-nitrogen substitutions (N\({}_{\mathrm{O}}\)s).[9, 10] Previous studies have shown that the catalytic activity correlates with the amount of V\({}_{\mathrm{O}}\)s and other factors in ZrO\({}_{2}\),[2, 11, 12] but the mechanisms of the reaction pathways, rate-determining steps and other aspects of the ORR remain unknown. To the best of our knowledge, there have not been many studies on the ORR activity of defective oxide surfaces compared to pristine ones.[13, 14, 15] Muhammady et al. used ab initio calculations to study the activity-energy diagrams of the ORR on tetragonal ZrO\({}_{2}\)(101) surfaces with V\({}_{\mathrm{O}}\) and N\({}_{\mathrm{O}}\) introduced.[15] The results suggest that the Zr atoms and the V\({}_{\mathrm{O}}\) do not differ significantly in activity. Their model had neither solvent nor adsorbed water molecules. In addition, their studies discussed only the static state, stable at 0 K, based on first-principles structural optimization, and did not go as far as to discuss the dynamic properties at finite temperatures. Conventional theoretical studies of solid-water interfaces, including their work, have been mainly based on first-principles calculations, in particular ab initio molecular dynamics (AIMD) for dynamic properties, which is accurate but computationally expensive. 
The high computational cost is a major problem, especially for models with a large number of atoms in the unit cell, such as catalysts with introduced defects, or when dynamic properties such as proton diffusion are studied using molecular dynamics calculations of 1 ns or longer. In recent years, neural network potentials (NNPs), which allow more efficient molecular dynamics calculations while maintaining the same level of accuracy as first-principles calculations, have been used to study adsorption structures and proton transfer at the solid-water interface [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. The aim of this study is to investigate the structure of water molecules at the interface between water and defective oxide catalysts, by using neural network potential molecular dynamics (NNPMD) as a first step towards understanding the effects of defects and solvents on the catalytic activity of the ORR. Examples of such effects include the influence of water layering on proton transfer and the poisoning of O\({}_{2}\) adsorption sites by water. Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\)[32], a defective ZrO\({}_{2}\) with V\({}_{\rm O}\) and N\({}_{\rm O}\) introduced, was used as the research target and ZrO\({}_{2}\) as a comparison object. For these materials, NNPs reproducing first-principles calculations were constructed based on active learning. Based on the NNPMD trajectories, the structures of water molecule adsorption on the surfaces of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\) are investigated and the possibility of the V\({}_{\rm O}\) acting as catalytically active sites is discussed by comparing them. ## 2 Methods Figure 1 shows the interface models used in this study. The atomic configuration of the 5-layer Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) is one of the stable structures obtained by Monte Carlo sampling. The V\({}_{\rm O}\) and N\({}_{\rm O}\) defects are found both on the surface and inside. The atomic configuration of ZrO\({}_{2}\) is that of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) with the V\({}_{\rm O}\) and N\({}_{\rm O}\) replaced by oxygen atoms. These atomic configurations are listed in the Supporting Information. Both the unit cells of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\) have a hexagonal lattice with lattice constants \(a=b=9.542394\) A, \(c=25\) A. The interface model is a random arrangement of water molecules on a unit cell with an additional 15 A vacuum layer. The number of water molecules was set to 21, which corresponds to about three layers of water. The xy-coordinates of the water molecules were set so that the oxygen of the water molecules was above one of the atoms of the slab and the dipole moment of the water molecules was parallel to the x-axis. The 21 water molecules were randomly assigned to three different groups of seven water molecules each and, depending on the group, the z-component of the atomic coordinates was set to a value 3, 6 or 9 A higher than the z-coordinate of the highest atom in the slab. Figure 1: The initial structures of molecular dynamics for interfaces between H\({}_{2}\)O and (a) Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) or (b) ZrO\({}_{2}\). Green, blue, red and white indicate Zr, N, O and H atoms respectively. The figures have been generated using the VESTA software package [33]. The oxides prepared in this way, with water in a random arrangement, were used as the initial structure for AIMDs and NNPMDs. This procedure was used to create different initial structures that were not correlated with each other. 
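The placement procedure can be summarised in a short sketch (a schematic of ours, not the authors' script; `slab_xy` and `slab_top_z` are assumed inputs describing the slab, and only the water oxygen positions are generated):

```python
import numpy as np

rng = np.random.default_rng()

def initial_water_positions(slab_xy, slab_top_z, n_water=21):
    """Schematic placement of water-O positions following the text:
    21 molecules split at random into three groups of 7, lifted
    3, 6 or 9 Angstrom above the highest slab atom; each O sits
    above a randomly chosen slab atom (slab_xy: array of xy sites)."""
    order = rng.permutation(n_water)
    heights = np.repeat([3.0, 6.0, 9.0], n_water // 3)[order]
    sites = slab_xy[rng.integers(len(slab_xy), size=n_water)]
    return np.column_stack([sites, slab_top_z + heights])
```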
When NNPMD is performed in a \(2\times 2\times 1\) supercell instead of a unit cell, the initial structure is not a \(2\times 2\times 1\) supercell of randomly placed water on a unit cell oxide, but randomly placed water on a \(2\times 2\times 1\) supercell oxide. Therefore, the initial structure of the slab always has translational symmetry, but that of the water molecules does not in most cases. Active learning is used to improve the accuracy of NNPs in complex systems such as interfaces.[22, 25, 27, 30] The following active learning procedure was used to generate the NNP of these systems. First, we obtained the total energy, interatomic forces and stress tensor data corresponding to the atomic configuration from the AIMD for the slab model. An NNP reproducing the data was generated and used for the NNPMDs. We performed ab initio self-consistent calculations on the structures appearing in the trajectory and obtained the total energy, interatomic forces and stress tensor data corresponding to the atomic configuration. These new data were added to the existing data to generate a new NNP. The accuracy of the NNP was improved by active learning, which repeated this process. The active learning was stopped when the structural properties (e.g. the number of adsorptions) calculated from the NNPMD trajectories converged so that the difference between Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\) could be discussed. In this paper, the first NNP trained with AIMD only is referred to as the loop 0 NNP, while subsequent NNPs are referred to as loop 1, 2,... NNPs, depending on the number of times the ab initio calculation results on the structures of the NNPMD were added. The calculation software used for each procedure and the detailed calculation conditions are as follows. First-principles calculations were performed using the VASP code,[34, 35] which is based on the plane-wave pseudopotential method. Projector augmented wave pseudopotentials [36] and the exchange-correlation functional of the generalised gradient approximation of the Perdew-Burke-Ernzerhof type [37] were used. The projection operator was evaluated in real space, including the non-spherical contributions to the density gradient within the potential spheres. Only the \(\Gamma\) point was sampled for the Brillouin zone and partial occupancies were calculated by Gaussian smearing with a width of 0.03 eV. The cutoff energy for the plane-wave basis set is 520 eV. Self-consistent calculations were performed for up to 100 loops and stopped when the energy difference was less than 0.1 meV. The initial magnetic moment was set to 0.6 \(\mu_{\mathrm{B}}\) for all atoms to account for spin polarisation. The mass of the hydrogen atoms was replaced by the mass of the deuterium atoms, allowing a larger time step without breaking the time evolution. AIMD used the NVT ensemble whose temperature was controlled at 300 K by the Langevin thermostat with a friction coefficient of 10 ps\({}^{-1}\). Four initial structures were created using the method described above, each with different initial velocities, and 1 ps AIMDs were performed for each with a 1 fs time step. The initial data set thus contains 4000 samples. The NNPs were generated using SIMPLE-NN (ver. 2) [38, 39], which is based on the symmetry function [40]. The parameters of the radial symmetry function G\({}_{2}\) are \(R_{\mathrm{cut}}=6.0,\eta=0.003214,0.035711,0.071421,0.124987,0.214264,0.357106,0.714213,1.428426,R_{\mathrm{s}}=0.0\). 
Those of the angular symmetry function G\({}_{3}\) are \(R_{\mathrm{cut}}=6.0,\eta=0.000357,0.028569,0.089277,\lambda=-1,1,\zeta=1,2,4\). A neural network is constructed with these symmetry functions as input and the total energy, the interatomic forces and the stress tensor as output. The neural network has a 30-30 hidden-node architecture and the activation function is tanh, and its parameters are optimised for up to 1000 epochs using the Adam method with a learning rate of 0.001. The initial dataset of 4000 samples was split into 400 and 3600, the latter being used as test data for NNP accuracy testing. The former was further split 9:1 into training data and validation data; the training data is used to optimise the NNP weights and the validation data is used to calculate the loss function for each epoch to determine which epoch's NNP is adopted. The NNPMDs were performed using LAMMPS [41, 42]. The NVT ensemble was used with the temperature controlled to 300 K by the Langevin thermostat with a damping factor of 0.1 ps. The mass of the hydrogen atom was replaced by the mass of the deuterium atom. Four initial structures were created using the method described above, each with a different initial velocity, and a 1 ns NNPMD was performed for each with a 1 fs time step. Self-consistent field calculations were performed in VASP for 400 structures every 10 ps in four 1 ns trajectories, and the ab initio and NNP energy errors were checked. Samples with an energy error greater than 1 meV/atom were added to the training and validation data, but not to the test data. Thus, as active learning progresses, the total of training and validation data increases from 400 samples, while the test data remains at 3600 samples. When the energy error in the unit cell is less than 10 meV/atom, in addition to the unit cell, a \(2\times 2\times 1\) supercell was used to test the energy error of the ab initio calculation and the NNP under almost identical conditions. In the case of the \(2\times 2\times 1\) supercell, the difference is that 40 structures every 10 ps in four 1 ns trajectories are validated and the ab initio calculation results are not used to train the NNP. When the energy error was less than 10 meV/atom for both the unit cell and the \(2\times 2\times 1\) supercell, the structural properties used for the convergence decision condition of active learning were calculated. For the \(2\times 2\times 1\) supercell, 1000 new initial structures were generated and NNPMD was performed on each of them for 500 ps with a time step of 1 fs. The mean and standard deviation of the number of molecular and dissociative adsorptions were calculated using the 1000 structures at 500 ps. The cutoff distances for O adsorption on a Zr site and for O-H covalent bonds were set to 3.09651 A and 1.2 A, respectively. When it was determined that these properties had sufficiently converged, active learning was stopped and the other properties - the number density distributions perpendicular and parallel to the surface, adsorption distance d, adsorption angle \(\alpha\), molecular orientation \(\beta\), OH orientation \(\gamma\), number of hydrogen bonds and radial distribution function - were also calculated. The definitions of the angles and of hydrogen bonds are given later. In summary, if the RMSE of the unit cell is below 10 meV/atom, the RMSE of the \(2\times 2\times 1\) supercell is calculated; if it is also below 10 meV/atom, the number of adsorptions is calculated; if it converges, active learning is stopped and the other properties are calculated. 
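For concreteness, here is a minimal sketch of the adsorption bookkeeping described above, using the stated cutoffs (3.09651 Å for Zr-O adsorption and 1.2 Å for O-H covalent bonds). It is ours, not part of the authors' workflow; the orthorhombic minimum-image convention is a simplification (the actual cell is hexagonal), and hydronium-like cases (0 or 3 bound H) are ignored.

```python
import numpy as np

R_ADS, R_OH = 3.09651, 1.2  # cutoffs from the text (Angstrom)

def count_adsorptions(zr, o_w, h, box):
    """Classify each adsorbed water oxygen as molecular (2 bound H)
    or dissociative (OH-, 1 bound H). zr, o_w, h: (N, 3) Cartesian
    coordinates; box: cell lengths for an (assumed) orthorhombic
    minimum-image convention."""
    def dists(a, b):
        d = a[:, None, :] - b[None, :, :]
        d -= box * np.round(d / box)          # minimum-image wrap
        return np.linalg.norm(d, axis=-1)

    adsorbed = (dists(o_w, zr) < R_ADS).any(axis=1)
    n_h = (dists(o_w, h) < R_OH).sum(axis=1)  # H bound to each water O
    n_mol = int(np.sum(adsorbed & (n_h == 2)))
    n_dis = int(np.sum(adsorbed & (n_h == 1)))
    return n_mol, n_dis
```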
## Results and discussion Table 1 shows the root mean squared error (RMSE) for the Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\) NNPs against existing data sets. The energy error is less than 10 meV/atom in each loop. In most cases the relative difference between loops is less than 10 %. Although the NNP appears to converge sufficiently without going to loop 4, the NNP accuracy should not be judged solely on the basis of the RMSEs of the existing data. This is because, as discussed below, the RMSEs of the newly acquired data in the molecular dynamics trajectories performed with these NNPs are not necessarily below 10 meV/atom. Table 2 shows the errors in the ab initio self-consistent calculations for the structures included in the NNPMD trajectories of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\). Ab initio self-consistent calculations were also performed for \(2\times 2\times 1\) supercells, but only in loops 1-4, where the RMSE in the unit cell is less than 10 meV/atom. In loop 1, the RMSEs and structural properties were not calculated because the self-consistent calculations of \(2\times 2\times 1\) supercells did not converge for some structures. The RMSEs of loops 2, 3, and 4 were \(<\) 10 meV/atom in both cells, so structural properties were calculated. \begin{table} \begin{tabular}{r r r r r r r r r r} \hline Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) & \multicolumn{3}{c}{Energy (meV/atom)} & \multicolumn{3}{c}{Force (eV/Å)} & \multicolumn{3}{c}{Stress (kbar)} \\ loop & train & valid & test & train & valid & test & train & valid & test \\ \hline 0 & 1.119 & 1.184 & 1.729 & 0.088 & 0.118 & 0.157 & 1.752 & 1.987 & 2.022 \\ 1 & 1.294 & 1.521 & 1.548 & 0.109 & 0.133 & 0.160 & 2.298 & 2.263 & 2.178 \\ 2 & 1.998 & 1.695 & 2.347 & 0.113 & 0.128 & 0.148 & 2.023 & 1.980 & 2.103 \\ 3 & 2.122 & 2.587 & 2.302 & 0.133 & 0.162 & 0.158 & 2.144 & 2.241 & 2.292 \\ 4 & 1.530 & 1.866 & 1.818 & 0.133 & 0.141 & 0.159 & 2.488 & 2.558 & 2.795 \\ \hline ZrO\({}_{2}\) & \multicolumn{3}{c}{Energy (meV/atom)} & \multicolumn{3}{c}{Force (eV/Å)} & \multicolumn{3}{c}{Stress (kbar)} \\ loop & train & valid & test & train & valid & test & train & valid & test \\ \hline 0 & 0.826 & 0.819 & 1.061 & 0.083 & 0.103 & 0.130 & 1.822 & 1.781 & 1.938 \\ 1 & 3.865 & 2.135 & 2.117 & 0.137 & 0.164 & 0.147 & 2.866 & 4.953 & 1.927 \\ 2 & 3.565 & 2.088 & 2.177 & 0.151 & 0.163 & 0.141 & 3.232 & 4.531 & 2.647 \\ 3 & 3.043 & 4.622 & 1.826 & 0.153 & 0.155 & 0.150 & 3.011 & 3.020 & 2.092 \\ 4 & 3.267 & 1.834 & 1.295 & 0.151 & 0.163 & 0.138 & 2.919 & 2.508 & 1.783 \\ \hline \end{tabular} \end{table} Table 1: The root mean squared error (RMSE) of the existing data set for the total energy, the interatomic force and the stress tensor for the NNP of each loop in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\), for training, validation and test data. Note that the total energies in 1 ns NNPMDs reach equilibrium after about 0.1 ns and are conserved without significant increase or decrease thereafter. Whether sufficient simulation time and enough trajectories were used to calculate the structural properties was verified by the number of adsorptions calculated from the loop 4 NNPMD trajectories of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\). Figure 2 (a) shows the time dependence of the number of adsorptions calculated from the structures at 10, 20,..., 500 ps of 1000 trajectories. It can be seen that it converges to within 0.1 by 400 ps. Figure 2 (b) shows the dependence of the number of adsorptions on the number of trajectories calculated from the 500 ps structures of 2, 3,..., 1000 trajectories. 
It can be seen that the number converges to within 0.1 for 200 trajectories. Thus the number of adsorptions calculated from the 500 ps structures of 1000 trajectories can be trusted to the order of 0.1. \begin{table} \begin{tabular}{r r r r r r r r r} \hline Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) & \multicolumn{4}{c}{unit cell} & \multicolumn{4}{c}{\(2\times 2\times 1\) supercell} \\ loop & run = 1 & 2 & 3 & 4 & run = 1 & 2 & 3 & 4 \\ \hline 0 & 10.572 & 17.765 & 11.313 & 10.633 & & & & \\ 1 & 4.833 & 7.105 & 5.323 & 5.113 & & & & \\ 2 & 2.317 & 2.140 & 8.145 & 2.143 & 2.279 & 2.353 & 5.398 & 2.299 \\ 3 & 1.626 & 1.541 & 1.603 & 1.442 & 1.931 & 6.112 & 6.553 & 6.647 \\ 4 & 1.202 & 1.178 & 1.392 & 1.384 & 1.310 & 1.199 & 1.428 & 1.345 \\ \hline ZrO\({}_{2}\) & \multicolumn{4}{c}{unit cell} & \multicolumn{4}{c}{\(2\times 2\times 1\) supercell} \\ loop & run = 1 & 2 & 3 & 4 & run = 1 & 2 & 3 & 4 \\ \hline 0 & 28.576 & 28.416 & 28.955 & 25.323 & & & & \\ 1 & 1.900 & 1.939 & 1.899 & 2.100 & & & & \\ 2 & 1.628 & 2.553 & 1.514 & 1.573 & 6.632 & 6.231 & 5.308 & 5.153 \\ 3 & 1.469 & 1.094 & 1.088 & 1.152 & 4.936 & 5.024 & 5.159 & 5.116 \\ 4 & 2.644 & 1.160 & 1.625 & 2.123 & 3.021 & 3.902 & 3.884 & 3.543 \\ \hline \end{tabular} \end{table} Table 2: RMSE (meV/atom) in the total energy of the newly acquired data, which are included in the NNPMD trajectories of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\). For loops 0 and 1, RMSEs for \(2\times 2\times 1\) supercells were not calculated due to the large RMSEs for the unit cell and the non-convergence of the self-consistent calculations, respectively. Figure 2: Dependence of the number of adsorptions on (a) timestep and (b) the number of trajectories. Blue and red represent molecular (mol) and dissociative (dis) adsorption, respectively. These were calculated from loop 4 NNPMD trajectories of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\). Table 3 shows the number of molecular and dissociative adsorptions (N) of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\) calculated by each loop NNP. For reference, the number of adsorptions per unit cell (M), converted from the number of adsorptions per \(2\times 2\times 1\) supercell, and the coverage (C) are also given. \begin{table} \begin{tabular}{r r r r r r r} \hline \multicolumn{7}{l}{Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\)} \\ loop & N\({}_{\rm mol}\) & N\({}_{\rm dis}\) & M\({}_{\rm mol}\) & M\({}_{\rm dis}\) & C\({}_{\rm mol}\) & C\({}_{\rm dis}\) \\ \hline 2 & 9.6 (1.2) & 5.8 (1.1) & 2.4 (0.3) & 1.5 (0.3) & 34 (4) & 21 (4) \\ 3 & 10.2 (1.2) & 5.5 (1.0) & 2.5 (0.3) & 1.4 (0.2) & 36 (4) & 20 (3) \\ 4 & 9.5 (1.1) & 5.5 (1.0) & 2.4 (0.3) & 1.4 (0.3) & 34 (4) & 20 (4) \\ \hline \multicolumn{7}{l}{ZrO\({}_{2}\)} \\ loop & N\({}_{\rm mol}\) & N\({}_{\rm dis}\) & M\({}_{\rm mol}\) & M\({}_{\rm dis}\) & C\({}_{\rm mol}\) & C\({}_{\rm dis}\) \\ \hline 2 & 4.1 (1.4) & 9.3 (1.3) & 1.0 (0.4) & 2.3 (0.3) & 15 (5) & 33 (5) \\ 3 & 6.2 (1.4) & 9.1 (1.2) & 1.6 (0.4) & 2.3 (0.3) & 22 (5) & 33 (4) \\ 4 & 5.6 (1.4) & 9.6 (1.2) & 1.4 (0.4) & 2.4 (0.3) & 20 (5) & 34 (4) \\ \hline \end{tabular} \end{table} Table 3: Mean values of the number of molecular (mol) and dissociative (dis) adsorptions in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\). The values in brackets are standard deviations. For both materials, these were calculated from loop 2, 3 and 4 NNPMD trajectories. N = the number of adsorptions in a \(2\times 2\times 1\) supercell. M = N/4 = the number of adsorptions per unit cell. C = coverage (%) = N/28 (the number of Zr sites on the \(2\times 2\times 1\) supercell surface) \(\times\) 100. For the number of adsorptions N, the mean difference between loops 3 and 4 is up to 0.7 and the standard deviation between the materials is up to 2.5. Both are small compared to the difference between the materials in loop 4, 2.9, so we consider that the convergence is not a problem for comparing these materials. The relationship between the numbers of molecular and dissociative adsorptions is reversed between Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\). Because Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) has fewer oxygen atoms owing to the V\({}_{\rm O}\), the dissociated excess H\({}^{+}\) is less likely to be adsorbed on oxygen and more likely to re-bind to dissociatively adsorbed OH\({}^{-}\); this explains the reversal. This idea is also supported by the fact that the total number of adsorptions is almost the same for Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\). In addition, Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) is charge neutral due to the presence of the V\({}_{\rm O}\) and N\({}_{\rm O}\) defects, so it is unlikely that the V\({}_{\rm O}\) is responsible for attracting the O atom. According to a study by Muhammady et al. [15], where one V\({}_{\rm O}\) and two N\({}_{\rm O}\) defects were introduced into the tetragonal-ZrO\({}_{2}\) (101) surface, the O\({}_{2}\) molecule does not adsorb more easily on the V\({}_{\rm O}\) than on the Zr atoms. Therefore, it is consistent with the present results to consider that the V\({}_{\rm O}\) does not promote dissociation by accepting the O of water molecules. Dissociative adsorption being replaced by molecular adsorption reduces the probability that ORR intermediates, and the H\({}^{+}\) bound to them, undergo additional reactions with nearby adsorbates; i.e. the ORR is more likely to proceed in the presence of oxygen defects. Also, while the total number of adsorptions and the number of vacant Zr sites are almost the same, Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) has two V\({}_{\rm O}\)s per unit cell, which could also act as active sites for the ORR. In other words, Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) has more active sites and is more likely to promote the ORR than ZrO\({}_{2}\). Figure 3 shows the surface perpendicular number density distributions of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\). The densities of the total, oxygen and hydrogen atoms in water at the interface, and of oxygen and hydrogen atoms in bulk water, are shown. The surface perpendicular number density distribution shows a single oxygen peak in the adsorption layer (16-18 A) for ZrO\({}_{2}\), whereas it is split into two for Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\); this is because the Zr sites all sit at almost the same height in ZrO\({}_{2}\) but fall into two height classes in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\). Differences in the surface-derived adsorption structures will affect their function as catalysts for the ORR, as well as the diffusion of molecules and protons. Away from the surface, the number density oscillates and is not constant as in the bulk. In other words, the motion is limited compared to the bulk. 
According to a previous study of the Cu/H\({}_{2}\)O interface [16], water molecules move like the bulk when the thickness of their layer is more than 30 A. In the present study, the thickness of the water molecule layer is less than 15 A, which is consistent with the result that the density is not constant as in the bulk. Figure 4 shows the surface parallel number density distributions of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\). Only the densities of the oxygen atoms in the H\({}_{2}\)O or OH\({}^{-}\) adsorbed on the Zr sites are shown. The distribution of ZrO\({}_{2}\) is uniform and translational symmetry is observed. As mentioned above, this is not due to the initial structure, as the translational symmetry of water is not guaranteed. ZrO\({}_{2}\) was assumed to have a uniform adsorption distribution due to the equivalent Zr sites on its surface. However, if the adsorption distribution does not change much in 500 ps MD, the adsorption distribution calculated from only one trajectory may not be uniform. In the present study, the adsorption distribution was calculated from 1000 independent trajectories, so that a uniform distribution was obtained, regardless of whether the adsorption distribution was likely to change in 500 ps MD. Figure 3: Surface perpendicular number density distributions (Å\({}^{-1}\)) of (a) Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and (b) ZrO\({}_{2}\). The blue, red and yellow solid lines represent total, oxygen and hydrogen atoms in water at the interface. The red and yellow dashed lines represent oxygen and hydrogen atoms in bulk water. These densities for both materials were calculated from loop 4 NNPMD trajectories. Figure 4: Surface parallel number density distributions (Å\({}^{-3}\)) of O atoms in (a) molecular and (b) dissociative adsorption for Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and (c) molecular and (d) dissociative adsorption for ZrO\({}_{2}\). For visibility, the sign of the molecular adsorption number density is reversed. Two types of oxygen vacancies in a Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) unit cell are represented by v1 and v2. The distribution of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) is non-uniform, but translational symmetry is established. This is also a result of obtaining a sufficiently large number of trajectories, rather than of the initial structure, and is due to the low computational cost of NNPMD, which allows so many MDs to be performed. Particles were found to adsorb on the Zr atoms, but not on the V\({}_{\rm O}\). Zr atoms around V\({}_{\rm O}\) 2 are more likely to adsorb particles than those around V\({}_{\rm O}\) 1. These results suggest that the V\({}_{\rm O}\)s act as active sites for the ORR and that differences in adsorption around them affect their function. In particular, we suggest that V\({}_{\rm O}\) 1 is more likely to adsorb O\({}_{2}\) molecules as there is less adsorption on the surrounding Zr atoms. For the other properties (adsorption distance d, adsorption angle \(\alpha\), molecular orientation \(\beta\), OH orientation \(\gamma\), and number of hydrogen bonds N\({}_{\rm HB}\)), their definitions and results are presented below. Figure 5: (a) Definition of adsorption angle \(\alpha\), molecular orientation \(\beta\) and OH orientation \(\gamma\). (b) Definition of hydrogen bonding. Oxygens donating and accepting hydrogen are denoted O\({}_{d}\) and O\({}_{a}\). Green, red and white indicate Zr, O and H atoms respectively. The figures have been generated using the VESTA software package. 
Figure 5 (a) shows the definition of the adsorption angle \(\alpha\), molecular orientation \(\beta\) and OH orientation \(\gamma\). The adsorption angle \(\alpha\) is the angle between the position vector of O with respect to Zr and the surface normal. The adsorption distance d is the absolute value of this vector. The water molecule orientation \(\beta\) is the angle between the dipole vector of the water molecule and the surface normal. The OH orientation \(\gamma\) is the angle between the position vector of H with respect to O and the surface normal. Figure 5 (b) shows the definition of a hydrogen bond. A hydrogen bond is defined when the angle H-O\({}_{d}\)-O\({}_{a}\) is less than 30\({}^{\circ}\), the distance O\({}_{d}\)-H is less than 1.2 A and the distance O\({}_{d}\)-O\({}_{a}\) is less than 3.5 A, where O\({}_{d}\) and O\({}_{a}\) are the oxygens donating and accepting hydrogen, respectively [43]. Table 4 shows the adsorption distance d, adsorption angle \(\alpha\), molecular orientation \(\beta\), OH orientation \(\gamma\), and the number of hydrogen bonds N\({}_{\rm HB}\) calculated based on the above definitions. The adsorption distance d of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) is 0.1 A shorter for both molecular and dissociative adsorption. The changes in the adsorption angle \(\alpha\) go in opposite directions for molecular and dissociative adsorption: the two angles become unequal in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\), whereas for ZrO\({}_{2}\) they were almost equal at about 18\({}^{\circ}\). The molecular orientation \(\beta\) of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) decreases, i.e. the molecules turn upwards. The changes in the OH orientation \(\gamma\) are also opposite for molecular and dissociative adsorption, going from unequal in ZrO\({}_{2}\) to aligned at almost 62\({}^{\circ}\) in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\). The number of hydrogen bonds in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) is reduced by about one. We consider this to be the effect of a distortion in the distribution of water molecules due to the defects. Figure 6 shows the radial distribution function (RDF). The oxygens in the slab and in the water are distinguished as O\({}_{\rm s}\) and O\({}_{\rm w}\). Compared to ZrO\({}_{2}\), the RDF of Zr-O\({}_{\rm w}\) in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) has a lower peak at 2.1 A and is shifted to the right. This means that the average adsorption distance increases. \begin{table} \begin{tabular}{l c c c c c c c c} \hline & d\({}_{\rm mol}\) & d\({}_{\rm dis}\) & \(\alpha_{\rm mol}\) & \(\alpha_{\rm dis}\) & \(\beta_{\rm mol}\) & \(\gamma_{\rm mol}\) & \(\gamma_{\rm dis}\) & N\({}_{\rm HB}\) \\ \hline Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) & 2.29 & 2.11 & 12.81 & 20.23 & 35.03 & 62.14 & 62.38 & 67.56 \\ ZrO\({}_{2}\) & 2.39 & 2.20 & 18.28 & 17.74 & 51.68 & 68.40 & 53.35 & 68.70 \\ \hline \end{tabular} \end{table} Table 4: Adsorption distance d (Å), adsorption angle \(\alpha\) (°), molecular orientation \(\beta\) (°), OH orientation \(\gamma\) (°) and number of hydrogen bonds N\({}_{\rm HB}\) for Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\), for molecular (mol) and dissociative (dis) adsorption. Both the molecular and the dissociative adsorption distances are shorter for Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\), but the increased proportion of molecular adsorption, which has the longer adsorption distance, is thought to be the cause. The lower peak at 1 A in the O\({}_{\rm s}\)-H RDF of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) is thought to be due to a decrease in H adsorbed on O\({}_{\rm s}\) as dissociative adsorption decreases. 
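The hydrogen-bond criterion above translates directly into a geometric test; the following sketch (ours, assuming Cartesian coordinates in Å and no periodic wrapping) returns whether a donor oxygen O\({}_{d}\) with hydrogen H is hydrogen-bonded to an acceptor oxygen O\({}_{a}\).

```python
import numpy as np

def is_hbond(o_d, h, o_a, max_angle=30.0, r_oh=1.2, r_oo=3.5):
    """Geometric hydrogen-bond test following the criteria above:
    H-O_d-O_a angle < 30 deg, O_d-H distance < 1.2 A (covalent),
    and O_d-O_a distance < 3.5 A. Inputs are 3-vectors."""
    v_h, v_a = h - o_d, o_a - o_d
    if np.linalg.norm(v_h) >= r_oh or np.linalg.norm(v_a) >= r_oo:
        return False
    cosang = v_h @ v_a / (np.linalg.norm(v_h) * np.linalg.norm(v_a))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < max_angle
```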
In addition to structural properties such as those in this study, other NNPMD studies of solid/water interface systems have addressed the free energy surface for proton transfer [18, 22, 29, 44], that for water dissociation [30, 31], solvation dynamics [27], and vibrational spectra of OH bonds [19]. Systems with ORR intermediates could also be added to the training data, sampled by NNPMD, and used to calculate the free energies along the reaction pathways, so that the influence of defects and of the proportion of molecular and dissociative adsorption on catalytic activity could also be discussed. There is also much to be explored in studies based on conventional first-principles calculations. In contrast to the present study, there are previous studies that have discussed the electric double layer (EDL) by calculating the electrostatic potential and density for systems with ions in water [45, 46, 47, 48, 49]. For example, Ando et al. performed AIMD at the interface between Pt and water containing Na\({}^{+}\) ions and calculated the electrostatic potential due to the EDL based on the difference in the potentials between the cases with and without Na\({}^{+}\) ions [45]. Figure 6: Radial distribution function \(g(r)\) of (a) Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and (b) ZrO\({}_{2}\). O\({}_{\rm s}\) and O\({}_{\rm w}\) represent oxygen in the slab and in the water molecule. Blue, green, yellow and red represent Zr-O\({}_{\rm w}\), Zr-H, O\({}_{\rm s}\)-O\({}_{\rm w}\) and O\({}_{\rm s}\)-H, respectively. For Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) and ZrO\({}_{2}\), the electrostatic potential due to the EDL could also be evaluated in NNPMD by training the NNP with additional data for the case of a solvent containing ions. Alternatively, as in the work of Otani et al. [50], the electrostatic potential of H\({}_{3}\)O\({}^{+}\) can be discussed by creating a system with excess H\({}^{+}\) and electrons in water and performing AIMD based on the effective screening medium method. Other studies [51, 52, 53, 54] have performed AIMD and structural optimisation to discuss band offsets, which are important values in interface systems, taking into account the electron affinity, the charge neutrality level and the pinning factor. Such multifaceted studies will allow a more detailed investigation of the effects of defects at the oxide/water interface. ## 4 Conclusion In this study, NNPs were constructed by active learning for interfaces between water and defective Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) or pristine ZrO\({}_{2}\). Based on 1000 NNPMD trajectories, the structures of water molecule adsorption on these interfaces were investigated and it was found that Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) has a bilayer adsorption structure, in contrast to the monolayer structure of ZrO\({}_{2}\). On the Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) surface, water molecules do not adsorb on the V\({}_{\rm O}\) but only on some of the surrounding Zr atoms, suggesting that the V\({}_{\rm O}\) can act as an active site for the ORR. While the proportion of dissociative adsorption in Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) is smaller than in ZrO\({}_{2}\), because there is less oxygen on which the H\({}^{+}\) can adsorb, the total number of molecular and dissociative adsorptions is almost the same for both materials. 
Therefore, Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) may promote the ORR more than ZrO\({}_{2}\) because the V\({}_{\rm O}\)s act as additional active sites. In the future, the influence of the solvent on the ORR activity of Zr\({}_{7}\)O\({}_{8}\)N\({}_{4}\) could be discussed by updating the NNP with new data for systems containing ORR intermediates, oxygen and hydrogen molecules, in addition to the solvent water molecules. ## Acknowledgement This research was supported by the New Energy and Industrial Technology Development Organization (NEDO) project, MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Fugaku Battery and Fuel Cell Project) (Grant No. JPMXP1020200301, Project No.: hp220177, hp210173, hp200131), and the Japan Society for the Promotion of Science (JSPS) as Grants-in-Aid for Scientific Research on Innovative Area "Hydrogenomics" (Grant No. 18H05519). The calculations were conducted at the ISSP Supercomputer Center, The University of Tokyo.
2304.00813
Model-Agnostic Reachability Analysis on Deep Neural Networks
Verification plays an essential role in the formal analysis of safety-critical systems. Most current verification methods have specific requirements when working on Deep Neural Networks (DNNs). They either target one particular network category, e.g., Feedforward Neural Networks (FNNs), or networks with specific activation functions, e.g., ReLU. In this paper, we develop a model-agnostic verification framework, called DeepAgn, and show that it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both. Under the assumption of Lipschitz continuity, DeepAgn analyses the reachability of DNNs based on a novel optimisation scheme with a global convergence guarantee. It does not require access to the network's internal structures, such as layers and parameters. Through reachability analysis, DeepAgn can tackle several well-known robustness problems, including computing the maximum safe radius for a given input, and generating the ground-truth adversarial examples. We also empirically demonstrate DeepAgn's superior capability and efficiency in handling a broader class of deep neural networks, including both FNNs, and RNNs with very deep layers and millions of neurons, than other state-of-the-art verification approaches.
Chi Zhang, Wenjie Ruan, Fu Wang, Peipei Xu, Geyong Min, Xiaowei Huang
2023-04-03T09:01:59Z
http://arxiv.org/abs/2304.00813v1
# Model-Agnostic Reachability Analysis on Deep Neural Networks ###### Abstract Verification plays an essential role in the formal analysis of safety-critical systems. Most current verification methods have specific requirements when working on Deep Neural Networks (DNNs). They either target one particular network category, e.g., Feedforward Neural Networks (FNNs), or networks with specific activation functions, e.g., ReLU. In this paper, we develop a model-agnostic verification framework, called DeepAgn, and show that it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both. Under the assumption of Lipschitz continuity, DeepAgn analyses the reachability of DNNs based on a novel optimisation scheme with a global convergence guarantee. It does not require access to the network's internal structures, such as layers and parameters. Through reachability analysis, DeepAgn can tackle several well-known robustness problems, including computing the maximum safe radius for a given input, and generating the ground-truth adversarial example. We also empirically demonstrate DeepAgn's superior capability and efficiency in handling a broader class of deep neural networks, including both FNNs and RNNs with very deep layers and millions of neurons, than other state-of-the-art verification approaches. Our tool is available at [https://github.com/TrustAI/DeepAgn](https://github.com/TrustAI/DeepAgn).

Keywords: Verification, Deep Learning, Model-agnostic, Reachability

## 1 Introduction

DNNs, or systems with neural network components, are widely applied in many applications such as image processing, speech recognition, and medical diagnosis [10]. However, DNNs are vulnerable to adversarial examples [25][12][33]. It is vital to analyse the safety and robustness of DNNs before deploying them in practice, particularly in safety-critical applications. The research on evaluating the robustness of DNNs mainly falls into two categories: falsification-based and verification-based approaches. While falsification approaches (e.g., adversarial attacks) [11] can effectively find adversarial examples, they cannot provide theoretical guarantees. Verification techniques, on the other hand, can rigorously prove the robustness of deep learning systems with guarantees [18, 19, 12, 13, 24]. Some researchers propose to reduce the safety verification problems to constraint satisfaction problems that can be tackled by constraint solvers such as Mixed-Integer Linear Programming (MILP) [1], Boolean Satisfiability (SAT) [20], or Satisfiability Modulo Theories (SMT) [15]. Another popular technique is to apply search algorithms [13] or Monte Carlo tree search [32] over discretised vector spaces on the inputs of DNNs. To improve the efficiency, these methods can also be combined with a heuristic searching strategy to search for a counter-example or an activation pattern that satisfies certain constraints, such as SHERLOCK [4] and Reluplex [15]. Nevertheless, the study subjects of these verification methods are restricted. They either target specific layers (e.g., fully-connected or convolutional layers), have restrictions on activation functions (e.g., ReLU activation only), or are only workable on a specific neural network structure (e.g., feedforward neural networks). Particularly, in comparison to FNNs, verification on RNNs is still in its infancy, with only a handful of representative works available, including [16, 14, 34].
The adoption of [16] requires short input sequences, and [14, 34] can result in irresolvable over-approximation error. This paper proposes a novel model-agnostic solution for safety verification on both feedforward and recurrent neural networks without suffering from the above weaknesses. Figure 1 outlines the working principle of DeepAgn, demonstrating its safety evaluation process and the calculation of the maximum safe radius. To the best of our knowledge, DeepAgn is one of the pioneering attempts at _model-agnostic_ verification that can work on both modern feedforward and recurrent neural networks under a unified framework. DeepAgn can deal with DNNs with very deep layers, a large number of neurons, and any type of activation function, in a black-box manner (without access to the internal structures/parameters of the network).

Figure 1: Illustration of DeepAgn working on a black-box three-output neural network. _In the reachability problem, given a set of inputs (quantified by a predefined \(L_{p}\)-norm ball) and a well-trained black-box neural network, DeepAgn can calculate the output range, namely, the minimal and maximal output confidence of each label (i.e., \([y_{1min},y_{1max}]\), \([y_{2min},y_{2max}]\), and \([y_{3min},y_{3max}]\)). For the safety verification problem, we can use a binary search upon the reachability to find the maximum safe radius \(r_{max}\) where the confidence intervals of the original label \(y_{1}\) and target label \(y_{2}\) meet._

Our contributions are summarised below:

* To theoretically justify the applicability of DeepAgn, we prove that recurrent neural networks are also Lipschitz continuous for bounded inputs.
* We develop an efficient method for reachability analysis on DNNs. We demonstrate that this _generic_ and _unified_ model-agnostic verification framework can work on FNNs, RNNs, and a hybrid of both. DeepAgn is an anytime algorithm, i.e., it can return intermediate lower and upper bounds that are gradually, but strictly, improved as the computation proceeds; and it has provable guarantees, i.e., both bounds can converge to the optimal value within an arbitrarily small error.
* Our experiments demonstrate that DeepAgn outperforms the state-of-the-art verification tools in terms of both accuracy and efficiency when dealing with complex, large and hybrid deep learning models.

## 2 Related Work

**Adversarial Attacks.** Attacks apply heuristic search algorithms to find adversarial examples. Attacking methods are mainly guided by the forward or cost gradient of the target DNNs. Major approaches include L-BFGS [25], FGSM [11], the Carlini & Wagner attack [2], the Universal Adversarial Attack [36], etc. Adversarial attacks for FNNs can be applied to cultivate adversarial examples for RNNs with proper adjustments. The concepts of adversarial example and adversarial sequence for RNNs are introduced in [22], in which adversarial examples are constructed for Long Short-Term Memory (LSTM) networks. Based on the C&W attack [2], attacks are implemented against DeepSpeech in [3]. The method in [9] is the first approach to analyse and perturb the raw waveform of audio directly.

**Verification on DNNs.** The recent advances of DNN verification include a layer-by-layer exhaustive search approach [13], methods using constraint solvers [23][15], global optimisation approaches [24][27], and the abstract interpretation approach [6][17].
The properties studied include robustness [13, 15, 35], reachability [24] (i.e., whether a given output is reachable from a given subspace of inputs), and properties expressible with SMT constraints. Verification approaches aim not only to find adversarial examples but also to provide guarantees on the results obtained. However, efficient verification on large-scale deep neural networks is still an _open problem_. Constraint-based approaches such as Reluplex can only work with a neural network with a few hundred hidden neurons [23, 15]. Exhaustive search suffers from the state-space explosion problem [13], although it can be partially alleviated by Monte Carlo tree search [32]. Moreover, the work [1] considers determining whether an output value of a DNN is reachable from a given input subspace, and proposes a MILP-based solution. SHERLOCK [4] studies the range of output values from a given input subspace; this method interleaves local search (based on gradient descent) with global search (based on reduction to MILP). Both approaches can only work with small neural networks. The research on RNN verification is still relatively new and limited compared with verification on FNNs. Approaches in [16, 34, 26] start by unrolling RNNs and then use the equivalent FNNs for further analysis. POPQORN [16] is an algorithm to quantify the robustness of RNNs, in which upper and lower planes are introduced to bound the non-linear parts of the estimated neural networks. The authors in [14] introduce invariant inference and over-approximation, transferring the RNN to a simple FNN model and demonstrating better scalability; however, the search for a proper invariant form is not straightforward. In Table 1, we compare DeepAgn with other safety verification works from _six_ aspects.

| Method | Guarantees | Core Techniques | Neural Network Types | Model Agnostic | Exact Computation | Model Access |
| --- | --- | --- | --- | --- | --- | --- |
| Reluplex [15] | Deterministic | SMT + LP | ReLU-based FNNs | ✗ | ✓ | Model parameters |
| Planet [5] | Deterministic | SMT + LP | ReLU-based FNNs | ✗ | ✓ | Model parameters |
| AI2 [2] | Upper bound | Abstract interpretation | ReLU-based FNNs | ✗ | ✗ | Model parameters |
| CombComb [31] | Upper bound | Convex evaluation | ReLU-based FNNs | ✗ | ✗ | Model parameters |
| DeepGO [24] | Converging bound | Lipschitz optimization | FNNs with Lipschitz continuity | ✗ | ✓ | Confidence values |
| FastLip [29] | Upper bound | Lipschitz estimation | ReLU-based FNNs | ✗ | ✗ | Model parameters |
| DeepGame [23] | Approximated converging bound | Search based | ReLU/Tanh/Sigmoid-based FNNs | ✗ | ✓ | Confidence values |
| POPQORN [16] | Upper bound | Unrolling | RNNs, LSTMs, GRUs | ✗ | ✗ | Model parameters |
| RnnVerify [14] | Upper bound | Invariant inference | RNNs | ✗ | ✗ | Model parameters |
| VERRNN [34] | Upper bound | Unrolling + MLP | RNNs | ✗ | ✗ | Model parameters |
| DeepAgn | Converging bound | Lipschitz optimization | FNNs (CNNs), RNNs, hybrid networks with Lipschitz-continuous layers | ✓ | ✓ | Confidence values |

Table 1: Comparison with other verification techniques from different aspects
DeepAgn is the _only_ model-agnostic verification tool that can verify hybrid networks consisting of both RNN and FNN structures. DeepAgn only requires access to the confidence values of the target model, enabling verification in a _black-box_ manner. Its precision can reach an arbitrarily small (pre-defined) error with a global convergence _guarantee_.

## 3 Preliminaries

Let \(o:[0,1]^{m}\rightarrow\mathbb{R}\) be a generic function that is Lipschitz continuous. The generic term \(o\) is cascaded with the Softmax layer of the neural network for statistically evaluating the outputs of the network. Our problem is to find its upper and lower bounds given the set \(X^{\prime}\) of inputs to the network.

Definition 1 (Generic Reachability of Neural Networks): Let \(X^{\prime}\subseteq[0,1]^{n}\) be an input subspace and \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) be a neural network. The generic reachability of neural networks is defined as the reachable set \(R(o,X^{\prime},\epsilon)=[l,u]\) of network \(f\) over the generic term \(o\) under an error tolerance \(\epsilon\geq 0\) such that \[\begin{split}&\inf_{x^{\prime}\in X^{\prime}}o(f(x^{\prime}))-\epsilon\leq l\leq\inf_{x^{\prime}\in X^{\prime}}o(f(x^{\prime}))+\epsilon\\ &\sup_{x^{\prime}\in X^{\prime}}o(f(x^{\prime}))-\epsilon\leq u\leq\sup_{x^{\prime}\in X^{\prime}}o(f(x^{\prime}))+\epsilon\end{split} \tag{1}\]

We write \(u(o,X^{\prime},\epsilon)=u\) and \(l(o,X^{\prime},\epsilon)=l\) for the upper and lower bound respectively. Then the reachability diameter is \(D(o,X^{\prime},\epsilon)=u(o,X^{\prime},\epsilon)-l(o,X^{\prime},\epsilon)\). Assuming these notations, we may write \(D(o,X^{\prime},\epsilon;f)\) if we need to explicitly refer to the network \(f\).

Definition 2 (Safety of Neural Network): A network \(f\) is safe with respect to an input \(x_{0}\) and an input subspace \(X^{\prime}\subseteq[0,1]^{n}\) with \(x_{0}\in X^{\prime}\), if \[\forall x^{\prime}\in X^{\prime}:\arg\max_{j}c_{j}(x^{\prime})=\arg\max_{j}c_{j}(x_{0}) \tag{2}\] where \(c_{j}(x_{0})=f(x_{0})_{j}\) returns \(f\)'s confidence in classifying \(x_{0}\) as label \(j\).

Definition 3 (Verified Safe Radius): Given a neural network \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) and an input sample \(x_{0}\), a verifier \(V\) returns a verified safe radius \(r_{v}\) regarding the safety of the neural network. For any input \(x^{\prime}\) with \(\|x^{\prime}-x_{0}\|\leq r_{v}\), the verifier guarantees that \(\arg\max_{j}c_{j}(x^{\prime})=\arg\max_{j}c_{j}(x_{0})\). For \(\|x^{\prime}-x_{0}\|>r_{v}\), the verifier either confirms \(\arg\max_{j}c_{j}(x^{\prime})\neq\arg\max_{j}c_{j}(x_{0})\) or provides an unclear answer.

The verified safe radius is an important metric for robustness analysis and is adopted by many verification tools such as CLEVER [30] and POPQORN [16]. Verification tools can further determine the safety of the neural network by comparing the verified safe radius \(r_{v}\) and the perturbation radius: a neural network \(f\) is determined to be safe by verifier \(V\) with respect to input \(x_{0}\) if the perturbation satisfies \(\|x^{\prime}-x_{0}\|\leq r_{v}\). In Figure 2, the verification tool \(V_{2}\) with the larger verified radius \(r_{2}>r_{1}\) has a higher evaluation accuracy. The sample \(x_{2}\) is misjudged as unsafe by \(V_{2}\).
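To make this concrete, the safety-radius search in Figure 1 can be sketched as a binary search on top of a reachability query. The following is a minimal illustrative sketch, not the authors' implementation; `is_safe` is a hypothetical oracle that runs one reachability analysis for radius `r` and reports whether the predicted label can change within the \(L_{p}\) ball of that radius.

```python
def max_safe_radius(is_safe, r_hi=1.0, tol=1e-4):
    """Binary search for the maximum safe radius r_max.

    is_safe(r) is a hypothetical reachability oracle: True iff the
    reachable confidence interval of the original label stays above
    those of all other labels within the L_p ball of radius r.
    """
    r_lo = 0.0
    while is_safe(r_hi):               # grow until an unsafe radius is bracketed
        r_lo, r_hi = r_hi, 2.0 * r_hi  # bounded inputs keep this finite
    while r_hi - r_lo > tol:
        mid = 0.5 * (r_lo + r_hi)
        if is_safe(mid):
            r_lo = mid
        else:
            r_hi = mid
    return r_lo                        # largest radius verified safe, up to tol
```

Each oracle call is itself a reachability computation in the sense of Definition 1, so the overall cost is a logarithmic number of reachability analyses.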
Definition 4 (Maximum Radius of a Safe Norm Ball): Given a neural network \(f:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}^{s}\), a distance metric \(\|\centerdot\|_{D}\), and an input \(x_{0}\in\mathbb{R}^{m\times n}\), a norm ball \(B(f,x_{0},\|\centerdot\|_{D},r)\) is a subspace of \([a,b]^{m\times n}\) such that \(B(f,x_{0},\|\centerdot\|_{D},r)=\{x^{\prime}\mid\|x^{\prime}-x_{0}\|_{D}\leq r\}\). When \(f\) is safe in \(B(f,x_{0},\|\centerdot\|_{D},r)\) and not safe in any input subspace \(B(f,x_{0},\|\centerdot\|_{D},r^{\prime})\) with \(r^{\prime}>r\), we call \(r\) the maximum radius of a safe norm ball.

Definition 5 (Successful Attack on Inputs): Given a neural network \(f\) and an input \(x_{0}\), an \(\alpha\)-bounded attack \(A_{\alpha}\) creates the input set \(X^{\prime}=\{x^{\prime}\mid\|x^{\prime}-x_{0}\|\leq\alpha\}\). \(A_{\alpha}\) is a successful attack if an \(x_{a}\in X^{\prime}\) exists where \(\arg\max_{j}c_{j}(x_{a})\neq\arg\max_{j}c_{j}(x_{0})\). We call \(r_{a}=\|x_{a}-x_{0}\|\leq\alpha\) the perturbation radius of a successful attack.

Ideally, the verification solution should provide the maximum radius \(r\) of a safe norm ball as the verified safe radius, i.e., the black circle in Figure 2. However, most sound verifiers can only calculate a lower bound of the maximum safe radius, i.e., a radius that is smaller than \(r\), such as \(r_{1}\) and \(r_{2}\). Distinguishing itself from the baseline methods, DeepAgn can estimate the maximum safe radius.

## 4 Lipschitz Analysis on Neural Networks

This section theoretically proves that most neural networks, including recurrent neural networks, are Lipschitz continuous. We first introduce the definition of Lipschitz continuity.

Definition 6 (Lipschitz Continuity [21]): Given two metric spaces \((X,d_{X})\) and \((Y,d_{Y})\), where \(d_{X}\) and \(d_{Y}\) are the metrics on the sets \(X\) and \(Y\) respectively, a function \(f:X\to Y\) is called _Lipschitz continuous_ if there exists a real constant \(K\geq 0\) such that, for all \(x_{1},x_{2}\in X\): \(d_{Y}(f(x_{1}),f(x_{2}))\leq Kd_{X}(x_{1},x_{2})\). \(K\) is called the _Lipschitz constant_ of \(f\). The smallest \(K\) is called _the Best Lipschitz constant_, denoted as \(K_{best}\).

### Lipschitz Continuity of FNN

Intuitively, a Lipschitz constant quantifies the changing rate of a function's output with respect to its input. Thus, if a neural network can be proved to be Lipschitz continuous, then Lipschitz continuity can potentially be utilized to bound the output of the neural network with respect to a given input perturbation. The authors in [25, 24] demonstrated that deep neural networks with convolutional, max-pooling, and fully-connected layers using the ReLU, Sigmoid, Hyperbolic Tangent, and Softmax activation functions are Lipschitz continuous. According to the chain rule, the composition of Lipschitz continuous functions is still Lipschitz continuous. Thus we can conclude that a majority of deep feedforward neural networks are Lipschitz continuous.

### Lipschitz Analysis on Recurrent Neural Networks

In this paper, we further prove that any recurrent neural network with finite input is Lipschitz continuous. Different from FNNs, RNNs contain feedback loops for processing sequential data, which can be unfolded into FNNs by eliminating loops [10]. Figure 3 illustrates such a process: by fixing the input size and directly unrolling the RNN, we can eliminate the loops and build an equivalent feedforward neural network.
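The unrolling argument can be made concrete in a few lines. Below is a minimal sketch for a vanilla tanh cell (the weight names and dimensions are hypothetical illustrations, not tied to any specific model): once the input length \(T\) is fixed, the recurrence is simply a composition of \(T\) weight-sharing feedforward layers, as in Figure 3 (b).

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One recurrent cell: h_t = tanh(W_x x_t + W_h h_{t-1} + b).
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

def unrolled_rnn(xs, W_x, W_h, b, h0):
    """For a fixed input length T = len(xs), the loop below can be
    expanded into T stacked feedforward layers sharing the same weights."""
    h = h0
    for x_t in xs:
        h = rnn_step(x_t, h, W_x, W_h, b)
    return h
```

Since tanh is Lipschitz continuous and each layer is an affine map followed by tanh, the unrolled composition is Lipschitz continuous for bounded inputs, which is exactly the property the analysis below relies on.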
The unrolled FNN, however, contains structures that do not appear in regular FNNs: time delays between nodes (Figure 4 (a)) and different activation functions within the same layer (Figure 4 (c)). For the time-delay situation, we add dummy nodes to the intermediary layers. These dummy nodes use the identity matrix as the weight and the identity function as the activation function, as illustrated in Figure 4 (b). After this modification, the intermediary layer is equivalent to a regular FNN layer. Time delays between nodes occur even for RNNs with a simple structure, such as in Figure 3 (a), while a layer with different activation functions appears only when unfolding complex RNNs. Figure 4 (c) demonstrates a layer with different activation functions after unrolling. When the different activation functions that appear are Lipschitz continuous, the layer is Lipschitz continuous based on the sub-multiplicative property of matrix norms. See **Appendix-A** for the detailed proof.

Figure 3: (a) Unfolded recurrent neural network; (b) A feedforward neural network obtained by unfolding an RNN with input length 3: _The layers are denoted by \(L_{i}\) for \(1\leq i\leq 3\). The node \(h_{2}\) is located in the middle of layer \(L_{2}\), taking \(x_{2}\) and \(h_{1}\). The rest of \(L_{2}\) is obtained by simply copying the information in \(L_{1}\)._

Figure 4: (a) Before we add dummy nodes to the intermediary layer: _\(W_{i}\) is a weight matrix, and \(f_{i}\) is an activation function. Initially, a connection from \(x_{2}\) to \(h_{2}\) crosses over the layer \(L_{2}\)._ (b) After we add dummy nodes: _we add nodes \(s_{3}\) and \(h_{3}\) to \(L_{2}\), where \(I\) denotes the identity matrix and \(id\) denotes the identity function._ (c) Feedforward layer with distinct activation functions: _layer \(L_{2}\) only performs a linear transformation (i.e., multiplication with weight matrices), and layer \(L_{3}\) has the role of applying non-linear activation functions; here it contains two distinct activation functions, Hyperbolic Tangent and Sigmoid._

## 5 Reachability Analysis with Provable Guarantees

### Verification via Lipschitz Optimization

Figure 5: A lower-bound function designed via the Lipschitz constant.

In Lipschitz optimization [8] we asymptotically approach the global minimum. Practically, we execute a finite number of iterations and use an error tolerance \(\epsilon\) to control the termination. As shown in Figure 5 (a), we first generate two straight lines with slopes \(K\) and \(-K\), creating an intersection point \(Z_{0}\). Since \(Z_{0}\) is the minimal value of the generated piecewise-linear lower-bound function (blue lines), we use the projected \(W_{1}\) for the next iteration. In Figure 5 (b), new \(W\) and \(Z\) points are generated. In the \(i\)-th iteration, the minimal value of the \(W\) points is the upper bound \(u_{i}\), and the minimal value of the \(Z\) points is the lower bound \(l_{i}\). Our approach constructs a sequence of lower and upper bounds and terminates the iteration whenever \(|u_{i}-l_{i}|\leq\epsilon\).
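This one-dimensional scheme is compact enough to write out. The sketch below is an illustrative re-implementation under the stated assumptions (a fixed valid Lipschitz constant `K`; the adaptive update of \(K\) described next is omitted), not the authors' code:

```python
import numpy as np

def lipschitz_minimize(w, a, b, K, eps=1e-3, max_iter=10000):
    """1-D Lipschitz optimization: evaluations of w are the W points
    (upper bounds); intersections of the +/-K bounding lines are the
    Z points (lower bounds); stop when |u_i - l_i| <= eps."""
    xs, ys = [a, b], [w(a), w(b)]
    for _ in range(max_iter):
        u = min(ys)  # upper bound u_i: best evaluation so far
        # Lower-bound vertex on each interval [x_i, x_{i+1}].
        zs = [(ys[i] + ys[i + 1]) / 2 - K * (xs[i + 1] - xs[i]) / 2
              for i in range(len(xs) - 1)]
        i = int(np.argmin(zs))
        l = zs[i]    # lower bound l_i
        if u - l <= eps:
            break
        # Refine: evaluate w at the minimizing intersection point.
        x_new = (xs[i] + xs[i + 1]) / 2 + (ys[i] - ys[i + 1]) / (2 * K)
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, w(x_new))
    return l, u
```

Maximisation is obtained by minimising \(-w\), so running the routine twice yields the reachable interval \([l,u]\) of Definition 1.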
For the multi-dimensional optimization problem, we decompose it into a sequence of nested one-dimensional subproblems [7]. The minima of those one-dimensional minimization subproblems are then back-propagated into the original dimension, and the final global minimum is obtained with \(\min_{x\in[a_{i},b_{i}]^{n}}\;w(x)=\min_{x_{1}\in[a_{1},b_{1}]}...\min_{x_{n}\in[a_{n},b_{n}]}w(x_{1},...,x_{n})\). We define the functions \(\phi_{k}\) as follows: for \(1\leq k\leq n-1\), \(\phi_{k}(x_{1},...,x_{k})=\min_{x_{k+1}\in[a_{k+1},b_{k+1}]}\phi_{k+1}(x_{1},...,x_{k},x_{k+1})\), and for \(k=n\), \(\phi_{n}(x_{1},...,x_{n})=w(x_{1},...,x_{n})\). Thus we can conclude that \(\min_{x\in[a_{i},b_{i}]^{n}}\;w(x)=\min_{x_{1}\in[a_{1},b_{1}]}\phi_{1}(x_{1})\), which is actually a one-dimensional optimization problem. We design a practical approach to dynamically update the current Lipschitz constant according to the previous iterations: \(K=\eta\max_{j=1,...,i-1}\left\|\frac{w(y_{j})-w(y_{j-1})}{y_{j}-y_{j-1}}\right\|\), where \(\eta>1\), so that \(\lim_{i\rightarrow\infty}K>K_{best}\). We use Lipschitz optimisation to find the minimum and maximum function values of the neural network. With binary search, we further estimate the maximum safe radius for a target attack.

### Global Convergence Analysis

We first analyse convergence in the one-dimensional case, where it holds under two conditions: \(\lim_{i\rightarrow\infty}l_{i}=\min_{x\in[a,b]}w(x)\) and \(\lim_{i\rightarrow\infty}(u_{i}-l_{i})=0\). This can be easily proved since the lower bound sequence \(\mathcal{L}_{i}\) is strictly monotonically increasing and bounded from above by \(\min_{x\in[a,b]}w(x)\). We use mathematical induction to prove convergence for the multi-dimensional case. The convergence conditions of the inductive step are: if, for all \(x\in\mathbb{R}^{k}\), \(\lim_{i\rightarrow\infty}l_{i}=\inf_{x\in[a,b]^{k}}w(x)\) and \(\lim_{i\rightarrow\infty}(u_{i}-l_{i})=0\) are satisfied, then, for all \(x\in\mathbb{R}^{k+1}\), \(\lim_{i\rightarrow\infty}l_{i}=\inf_{x\in[a,b]^{k+1}}w(x)\) and \(\lim_{i\rightarrow\infty}(u_{i}-l_{i})=0\) hold.

Proof: (sketch) By the nested optimisation scheme, we have \(\min_{\mathbf{x}\in[a_{i},b_{i}]^{k+1}}\;w(\mathbf{x})=\min_{x\in[a,b]}\Phi(x)\), where \(\Phi(x)=\min_{\mathbf{y}\in[a_{i},b_{i}]^{k}}w(x,\mathbf{y})\). Since \(\min_{\mathbf{y}\in[a_{i},b_{i}]^{k}}w(x,\mathbf{y})\) is bounded by an interval error \(\epsilon_{\mathbf{y}}\), assuming \(\Phi^{*}(x)\) is the accurate global minimum, we have \(\Phi^{*}(x)-\epsilon_{\mathbf{y}}\leq\Phi(x)\leq\Phi^{*}(x)+\epsilon_{\mathbf{y}}\). Hence \(\Phi(x)\) is not accurate but bounded by \(|\Phi(x)-\Phi^{*}(x)|\leq\epsilon_{\mathbf{y}},\forall x\in[a,b]\), where \(\Phi^{*}(x)\) is the accurate function evaluation. For the inaccurate evaluation case, we assume \(\Phi_{min}=\min_{x\in[a,b]}\Phi(x)\), and its lower and upper bound sequences are, respectively, \(\{l_{0},...,l_{i}\}\) and \(\{u_{0},...,u_{i}\}\). The termination criteria for the two cases are \(|u_{i}^{*}-l_{i}^{*}|\leq\epsilon_{x}\) and \(|u_{i}-l_{i}|\leq\epsilon_{x}\), and \(\phi^{*}\) represents the ideal global minimum. Then we have \(\phi^{*}-\epsilon_{x}\leq l_{i}\). Assuming that \(l_{i}^{*}\in[x_{k},x_{k+1}]\) and \(x_{k},x_{k+1}\) are adjacent evaluation points, then, due to the fact that \(l_{i}^{*}=\inf_{x\in[a,b]}H(x;\mathcal{Y}_{i})\) and the search scheme, we have \(\phi^{*}-l_{i}\leq\epsilon_{\mathbf{y}}+\epsilon_{x}\). Similarly, we can get \(\phi^{*}+\epsilon_{x}\geq u_{i}^{*}=\inf_{y\in\mathcal{Y}_{i}}\Phi^{*}(y)\geq u_{i}-\epsilon_{\mathbf{y}}\), so \(u_{i}-\phi^{*}\leq\epsilon_{x}+\epsilon_{\mathbf{y}}\).
By \(\phi^{*}-l_{i}\leq\epsilon_{\mathbf{y}}+\epsilon_{x}\) and the termination criteria \(u_{i}-l_{i}\leq\epsilon_{x}\), we have \(l_{i}-\epsilon_{\mathbf{y}}\leq\phi^{*}\leq u_{i}+\epsilon_{\mathbf{y}}\), _i.e.,_ the accurate global minimum is also bounded. See more theoretical analysis of the global convergence in **Appendix-B**. ## 6 Experiments ### Performance Comparison with State-of-the-art Methods In this section, we compare DeepAgn with baseline methods. Their performance in feedforward neural networks and more details of the technique are demonstrated in **Appendix-C**. Here, we mainly focus on the verification of RNN. We choose POPQORN [16] as the baseline method since it can solve RNN verification problems analogously, i.e., calculating safe input bounds for given samples. Both methods were run on a PC with an i7-4770 CPU and 24 GB RAM. Table 2 demonstrates the verified safe radius of baseline \(r_{b}\), DeepAgn \(r\), and the radius of CW attack \(r_{a}\). We fixed the number of hidden neurons and manipulated the input lengths in Table 3 to compare the average safe radius and the time costs. It can be seen that increasing the input length does not dramatically increase the time consumption of DeepAgn because it is independent of the models' architectures. As in Table 4, we fixed the input length and employed RNNs and LSTMs with different numbers of hidden neurons. ### Ablation Study In this section, we present an empirical analysis of the Lipschitz constant \(K\) and the number of perturbed pixels, which both affect the precision of the results and the cost of time. As shown in Figure 6, DeepAgn with \(K=0.1\) gives a false safe radius, indicating that \(0.1\) is not a suitable choice. When the Lipschitz constant is larger than the minimal Lipschitz constant (\(K\geq 1\)), DeepAgn can always provide the exact maximum safe radius. However, with larger \(K\), we need more iterations to achieve the convergence condition when solving the optimisation problem. As for the number of perturbed pixels, we treat an n-pixel perturbation as an n-dimensional optimisation problem. Therefore, when the number of pixels increases, the evaluation time grows exponentially. ### Case Study 1 In this experiment, we use our method to verify a deep neural network in an audio classification task. The evaluated model is a deep CNN and is trained under the PyTorch framework. 
The data set is adopted from [28], where each one-second raw audio clip is transformed into a sequence input with 8000 frames and classified into 35 categories. We perturb the input values at frames \((1000,2000,\ldots,7000)\) and verify the network under different perturbation radii. For the deep CNN case, the baseline method has a lower verification accuracy, while DeepAgn can still provide the output ranges and the maximal safe radius. Figure 7 shows the boundary of the audio waveform with perturbations \(\theta=0.1\), \(\theta=0.2\) and \(\theta=0.3\). Their differences from the original audio are imperceptible to human ears. We performed a binary search and found the exact maximum safe radius \(r=0.1591\).

Figure 7: Input on lower and upper bounds of different perturbations. _The first value indicates the logit output and the second shows the confidence value. For perturbations \(\theta=0.1\) and \(\theta=0.2\), the network remains safe; it is not safe for perturbation \(\theta=0.3\)._

Figure 6: Time cost and safe radius with different \(K\) and numbers of perturbed pixels.

| Models | DeepAgn safe radius | DeepAgn time | POPQORN safe radius | POPQORN time |
| --- | --- | --- | --- | --- |
| rnn 7×16 | **0.5580** | 117.82s | 0.2038 | 2.14s |
| rnn 7×32 | **0.2371** | 175.92s | 0.1340 | 2.44s |
| rnn 7×128 | **0.6633** | 240.59s | 0.1052 | 4.25s |
| rnn 7×256 | **0.6656** | 187.83s | 0.2038 | 1.89s |
| lstm 7×16 | **0.3789** | **175.11s** | 0.0007 | 243.60s |
| lstm 7×32 | **0.3461** | **189.51s** | 0.0015 | 256.77s |
| lstm 7×128 | **0.3625** | **256.50s** | 0.0050 | 375.85s |

Table 4: Average safe radius and time cost of DeepAgn and POPQORN on NN verification with different hidden neurons

### Case Study 2

In this case study, we verify a hybrid neural network CRNN that contains convolutional layers and LSTM layers with CTC loss. The network converts characters from scanned documents into digital forms. As far as we know, there is no existing verification tool that can deal with this complex hybrid network. However, DeepAgn can analyze the output range of this CRNN and compute the maximum safe radius of a given input. In Figure 8, we present the maximum safe radius of the inputs and their associated ground-truth (or provably minimally-distorted) adversarial examples.

Figure 8: We perturb six pixels of the image to generate the ground-truth adversarial examples. _(a) \(\theta=0.638\): the first letter is recognized as "K" with 48.174% confidence and as "L" with 48.396%; the word is recognized as "IKEVIN". (b) \(\theta=0.0631\): the fourth letter is recognized as "R" with 49.10% confidence and as "B" with 49.12%; the word is recognized as "CHEBPIN". (c) \(\theta=0.464\): the third letter is recognized as "L" with 25.96% confidence and as "I" with 25.97%; the word is recognized as "JUIES"._

## 7 Conclusion

We design and implement a safety analysis tool for neural networks, computing reachability with provable guarantees. We demonstrate that it can be deployed on any network, including FNNs and RNNs, regardless of the complexity of the structure or activation function, as long as the network is Lipschitz continuous. We envision that DeepAgn marks an important step towards practical and provably-guaranteed verification for DNNs.
Future work includes using parallel computation and GPUs to improve its scalability on large-scale models trained on ImageNet, and generalising this method to other deep models such as deep reinforcement learning and transformers.
2310.12350
Equipping Federated Graph Neural Networks with Structure-aware Group Fairness
Graph Neural Networks (GNNs) have been widely used for various types of graph data processing and analytical tasks in different domains. Training GNNs over centralized graph data can be infeasible due to privacy concerns and regulatory restrictions. Thus, federated learning (FL) becomes a trending solution to address this challenge in a distributed learning paradigm. However, as GNNs may inherit historical bias from training data and lead to discriminatory predictions, the bias of local models can be easily propagated to the global model in distributed settings. This poses a new challenge in mitigating bias in federated GNNs. To address this challenge, we propose $\text{F}^2$GNN, a Fair Federated Graph Neural Network, that enhances group fairness of federated GNNs. As bias can be sourced from both data and learning algorithms, $\text{F}^2$GNN aims to mitigate both types of bias under federated settings. First, we provide theoretical insights on the connection between data bias in a training graph and statistical fairness metrics of the trained GNN models. Based on the theoretical analysis, we design $\text{F}^2$GNN which contains two key components: a fairness-aware local model update scheme that enhances group fairness of the local models on the client side, and a fairness-weighted global model update scheme that takes both data bias and fairness metrics of local models into consideration in the aggregation process. We evaluate $\text{F}^2$GNN empirically versus a number of baseline methods, and demonstrate that $\text{F}^2$GNN outperforms these baselines in terms of both fairness and model accuracy.
Nan Cui, Xiuling Wang, Wendy Hui Wang, Violet Chen, Yue Ning
2023-10-18T21:51:42Z
http://arxiv.org/abs/2310.12350v3
# Equipping Federated Graph Neural Networks with Structure-aware Group Fairness ###### Abstract Graph Neural Networks (GNNs) have been widely used for various types of graph data processing and analytical tasks in different domains. Training GNNs over centralized graph data can be infeasible due to privacy concerns and regulatory restrictions. Thus, federated learning (FL) becomes a trending solution to address this challenge in a distributed learning paradigm. However, as GNNs may inherit historical bias from training data and lead to discriminatory predictions, the bias of local models can be easily propagated to the global model in distributed settings. This poses a new challenge in mitigating bias in federated GNNs. To address this challenge, we propose \(\mathrm{F}^{2}\mathrm{GNN}\), a Fair Federated Graph Neural Network, that enhances group fairness of federated GNNs. As bias can be sourced from both data and learning algorithms, \(\mathrm{F}^{2}\mathrm{GNN}\) aims to mitigate both types of bias under federated settings. First, we provide theoretical insights on the connection between data bias in a training graph and statistical fairness metrics of the trained GNN models. Based on the theoretical analysis, we design \(\mathrm{F}^{2}\mathrm{GNN}\) which contains two key components: a _fairness-aware local model update scheme_ that enhances group fairness of the local models on the client side, and a _fairness-weighted global model update scheme_ that takes both data bias and fairness metrics of local models into consideration in the aggregation process. We evaluate \(\mathrm{F}^{2}\mathrm{GNN}\) empirically versus a number of baseline methods, and demonstrate that \(\mathrm{F}^{2}\mathrm{GNN}\) outperforms these baselines in terms of both fairness and model accuracy. The code is publicly accessible via the following link: [https://github.com/yuening-lab/F2GNN](https://github.com/yuening-lab/F2GNN).

Keywords: Graph Neural Networks, Federated Learning, Group Fairness

## 1 Introduction

Graph neural networks (GNNs) have emerged as a formidable tool for generating meaningful node representations [16, 17, 18] and making accurate predictions on nodes by leveraging graph topologies [17, 18]. GNNs have been studied in many applications such as knowledge graph completion [15], biology [19, 20], and recommender systems [16, 21]. A substantial portion of graph data originates from human society and various aspects of people's lives. Such data inherently contain sensitive information that is susceptible to discrimination and prejudice. In many high-stakes domains such as criminal justice [14], healthcare [15], and financial systems [16, 17], ensuring fairness in the results of graph neural networks with respect to certain minority groups of individuals is of profound societal importance. These types of data are frequently distributed across multiple machines, which poses challenges for machine learning tasks that could benefit from utilizing global data. To overcome these obstacles, federated learning has emerged as a promising solution. By leveraging a federated learning framework, organizations can collaborate on a task without compromising the confidentiality of their data. This enables the creation of a shared model while maintaining data privacy, allowing for secure and efficient cooperation among different organizations and local clients. As a result, several works have emerged in the field of federated learning combined with GNNs [15, 16, 17, 18] to address various challenges.
He _et al._[15] present an open benchmark system for federated GNNs that facilitates research in this area. Xie _et al._[16] propose a graph clustered federated learning framework that dynamically identifies clusters of local systems based on GNN gradients. Yao _et al._[17] leverage federated learning to train Graph Convolutional Network (GCN) models for semi-supervised node classification on large graphs, achieving fast convergence and minimal communication. Nonetheless, there is also research indicating that predictions of GNNs (over centralized data) can be unfair and exhibit undesirable discrimination [11, 12]. Under the federated learning setting, the bias in the local GNN models can be easily propagated to the global model. While most of the prior works [1, 13, 14] have primarily focused on debiasing federated learning models trained on non-graph data, none of them have studied federated GNNs. How to enhance the fairness of GNNs under federated settings remains largely unexplored. In general, the bias of GNNs can stem from two different sources: (1) bias in graph data [15, 16], and (2) bias in learning algorithms (e.g., the message-passing procedure of GNNs) [17, 18]. In this paper, we aim to enhance the group fairness of federated GNNs by mitigating both types of bias. In particular, in terms of data bias, we group the nodes by their sensitive attribute (e.g., gender and race). Then we group the links between nodes based on the sensitive attribute values of the connecting nodes: (1) the _intra-group links_ that connect nodes belonging to the same node group (i.e., the same sensitive attribute value), and (2) the _inter-group links_ that connect nodes that belong to different node groups. Based on the link groups, we consider the data bias that takes the form of the _imbalanced distribution between inter-group and intra-group links_, which is a key factor in the disadvantage of minority groups in GNN predictions [15]. On the other hand, in terms of model fairness, we consider two well-established group fairness definitions: _statistical parity_ (SP) [19, 20] and _equalized odds_ (EO) [17, 20]. In this paper, we propose F\({}^{2}\)GNN, one of the first federated graph neural networks that enhances group fairness of both local and global GNN models. Our contributions are summarized as follows: 1) We provide theoretical insights on the connection between data bias (imbalanced distributions between inter-group and intra-group links) and model fairness (SP and EO); 2) Based on the theoretical findings, we design F\({}^{2}\)GNN, which enhances group fairness of both local and global GNN models by taking both the data bias of local graphs and statistical fairness metrics of local models into consideration during the training of both local and global models; 3) Through experiments on three real-world datasets, we demonstrate that F\({}^{2}\)GNN outperforms the baseline methods in terms of both fairness and model accuracy.

## 2 Related Work

**Group fairness in GNNs.** Existing studies on fair graph learning can be broadly classified into three categories: data augmentations [13, 14, 15], problem optimization with fairness considerations [10, 17], and adversarial debiasing methods [16, 20, 21]. Data augmentations [16, 20] are used to create a balanced graph, followed by contrastive learning to learn fair graph representations. Biased edge dropout [21] is proposed to mitigate the impact of homophily and promote fairness in graph representation learning.
The Fair Message Passing (FMP) approach [17] uses optimization techniques to transform the message-passing scheme into a bi-level optimization framework. Adversarial debiasing [20] has been studied to mitigate graph bias; it takes advantage of a sensitive-feature estimator so that less sensitive information is required. Masrour _et al._[20] propose a post-processing step to generate more heterogeneous links to overcome the filter bubble effect and introduce a framework combining adversarial network representation learning with supervised link prediction. Hussain _et al._[15] inject edges between nodes with different labels and sensitive attributes to perturb the original graphs, followed by an adversarial training framework to debias the perturbed graphs. This work studies inter-group edges between different cases (e.g., different labels and different sensitive groups); however, it does not consider the ratio of inter- to intra-group edges. Recent studies have utilized mixing techniques for addressing fairness concerns in graph learning. Wang _et al._[20] generate fair views of features and adaptively clamp weights of the encoder to avoid using sensitive-related features. Ling _et al._[14] use data augmentations to create fair graph representations, followed by adversarial debiasing training. However, their theoretical proofs make strong assumptions regarding the loss function being bounded by a fixed constant. All these works focus on fairness in centralized GNNs, and none of them consider GNNs under the federated setting.

**Fairness of federated learning.** In contrast to traditional fairness considerations in machine learning, federated learning has a unique fairness concept, performance fairness, at the device level. This is akin to fair resource allocation [1, 2] in wireless networks, which extends the notion of accuracy parity by evaluating the degree of uniformity in device performance. Li _et al._[14] give a solution to address fairness concerns in federated learning through an optimization objective that minimizes an aggregated reweighted loss, where devices with higher losses are assigned proportionally higher weights. Further, bi-level optimization schemes [13, 14] aim to reduce the variance of test accuracy across devices, thereby increasing federated-specific fairness. In addition to this traditional fairness performance, there is a growing body of research in fair federated learning [20, 15] that focuses on general fairness definitions: for example, Ezzeldin _et al._[14] create a server-side aggregation step that weights local models performing well on fairness measures higher than others, and Du _et al._[14] use kernel reweighing functions to assign reweighing values to each training sample in both the loss function and the fairness constraint. Qi _et al._[QWW\({}^{+}\)22] present a fair vertical federated learning framework that assembles multiple techniques. They first learn representations from fair-insensitive data stored in some platforms and then send the learned representations to other platforms holding sensitive information. Subsequently, the authors conduct adversarial training with sensitive information to debias the representations. Unlike these works that consider learning models trained over non-graph data, we focus on GNNs and aim to enhance group fairness.
## 3 Federated Graph Learning

**Notations.** Given an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) that consists of a set of nodes \(\mathcal{V}\), edges \(\mathcal{E}\), and node features \(\mathbf{X}\), let \(\mathbf{A}\) be the adjacency matrix of \(\mathcal{G}\). We use \(\mathbf{Z}\) to denote the node representations (embeddings) learned by a GNN model. In this paper, we consider node classification as the learning task. We use \(y\) and \(\hat{y}\) to denote the ground-truth and the predicted labels respectively.

**Federated Graph Learning.** In this paper, we consider the _horizontal federated learning_ (HFL) setting [DSS\({}^{+}\)22, PZS\({}^{+}\)22] with \(K\) clients \(\{C_{1},\ldots,C_{K}\}\) and a server \(\mathcal{S}\). Each client \(C_{i}\) owns its private graph \(\mathcal{G}_{i}\). Different \(\mathcal{G}_{i}\) can have different \(\mathcal{V}_{i}\) and \(\mathbf{A}_{i}\). All clients share the same set of node features. In the HFL setting, clients do not share their local graphs with the server \(\mathcal{S}\) due to various reasons (e.g., privacy protection). Thus the goal of HFL is to optimize the overall objective function while keeping private graphs locally: \[\arg\min_{\omega}\sum_{i=1}^{K}\mathcal{L}_{i}(\omega)=\arg\min_{\omega}\sum_{i=1}^{K}\sum_{j=1}^{N_{i}}\boldsymbol{\ell}_{i}(v_{j},\mathcal{G}_{i};\omega),\] where \(\mathcal{L}_{i}(\omega)\) denotes the loss function parameterized by \(\omega\) on client \(C_{i}\), \(N_{i}\) is the number of nodes in the local graph \(\mathcal{G}_{i}\), and \(\boldsymbol{\ell}_{i}(v_{j},\mathcal{G}_{i};\omega)\) denotes the loss on node \(v_{j}\) of the local graph on client \(C_{i}\). In this paper, we adapt the SOTA FedAvg framework [MMR\({}^{+}\)17] to our setting. In FedAvg, only model parameters are transmitted between \(\mathcal{S}\) and each client \(C_{i}\). Specifically, during each round \(t\), \(\mathcal{S}\) selects a subset of clients and sends them a copy of the current global model parameters \(\omega_{t}\) for local training. Each selected client \(C_{i}\) updates the received copy with an optimizer such as stochastic gradient descent (SGD) for a variable number of iterations locally on its own data and obtains \(\omega_{t}^{i}\). Then \(\mathcal{S}\) collects the updated model parameters \(\omega_{t}^{i}\) from the selected clients and aggregates them to obtain a new global model \(\omega_{t+1}\). Finally, the server broadcasts the updated global model \(\omega_{t+1}\) to the clients for training in round \(t+1\). The training process terminates when it reaches convergence.
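The FedAvg skeleton that the rest of the paper builds on can be sketched in a few lines (a minimal PyTorch sketch, not the authors' code; the `client.loss` interface, client sampling, and hyper-parameters are hypothetical simplifications, and the fairness components introduced later are omitted):

```python
import copy
import torch

def fedavg_round(server_model, clients, lr=0.01, local_epochs=5):
    """One FedAvg communication round. Each `client` is assumed to expose
    loss(model) computed on its private local graph, which never leaves
    the client; only parameters are exchanged with the server."""
    local_states = []
    for client in clients:
        local = copy.deepcopy(server_model)       # client receives w_t
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):             # local SGD updates
            opt.zero_grad()
            client.loss(local).backward()
            opt.step()
        local_states.append(local.state_dict())   # client uploads w_t^i
    # Server: average the collected parameters to obtain w_{t+1}.
    avg = {k: torch.stack([s[k].float() for s in local_states]).mean(0)
           for k in local_states[0]}
    server_model.load_state_dict(avg)
    return server_model
```

F\({}^{2}\)GNN departs from this plain average in two places: how each client initializes and regularizes its local update (Section 6.1) and how the server weights the uploaded parameters (Section 6.2).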
## 4 Group Fairness and Data Bias

In this paper, we mainly focus on group fairness. In general, group fairness requires that the model output should not have discriminatory effects towards _protected groups_ (e.g., female) compared with _un-protected groups_ (e.g., male), where protected and un-protected groups are defined by a sensitive attribute (e.g., gender) [BHN19].

**Node groups.** We adapt the conventional group fairness definition to our setting. We assume each node in the graph is associated with a _sensitive attribute_ \(s\) (e.g., gender, race) [MMS\({}^{+}\)22]. The nodes are partitioned into distinct subgroups by their values of the sensitive attribute. For simplicity, we consider a binary sensitive attribute in this paper. In the following discussions, we use \(s=0\) and \(s=1\) to indicate the protected and un-protected node groups respectively.

**Inter-group and intra-group edges.** Based on the definition of the protected node groups, we further partition the edges in a graph into two groups: (1) the _inter-group edges_ that connect nodes belonging to different groups (e.g., the edges between female and male nodes); and (2) the _intra-group edges_ that connect nodes in the same group (e.g., the edges between female and female nodes).

**Data bias in edge distribution.** Recent studies have shown that one potential data bias in graphs that can lead to unfairness of GNNs is the imbalanced distribution between the inter-group edges and the intra-group ones in training graphs [HCS\({}^{+}\)22a, MWY\({}^{+}\)20a, SSHU21]. Therefore, in this paper, we focus on one particular type of data bias which takes the form of the imbalance between the distribution of inter-group and intra-group edges. In particular, we define the _group balance score_ (GBS) to measure this type of bias in edge distribution as follows.

**Definition 1**.: _Given a graph \(\mathcal{G}\), let \(|\mathcal{E}_{\text{inter}}|\) and \(|\mathcal{E}_{\text{intra}}|\) be the number of inter-group edges and intra-group edges in \(\mathcal{G}\) respectively. \(|\mathcal{E}|=|\mathcal{E}_{\text{intra}}|+|\mathcal{E}_{\text{inter}}|\) denotes the total number of edges. The group balance score (GBS) (denoted as \(B\)) is defined as follows:_ \[B=1-\left|H_{\text{intra}}-H_{\text{inter}}\right|,\] _where \(H_{\text{intra}}=\frac{|\mathcal{E}_{\text{intra}}|}{|\mathcal{E}|}\) and \(H_{\text{inter}}=\frac{|\mathcal{E}_{\text{inter}}|}{|\mathcal{E}|}\)._

Intuitively, a higher \(B\) value indicates a better balance between the inter-group edges and the intra-group ones. \(B\) reaches its maximum (i.e., \(B=1\)) when the graph is perfectly balanced between the two edge groups (i.e., \(H_{\text{intra}}=H_{\text{inter}}\)).

**Group fairness of models.** To measure the group fairness of the target GNN model, we employ two well-established fairness definitions: _statistical parity_ (SP) [1, 10, 11] and _equalized odds_ (EO) [13, 12, 11]. In this paper, we adapt both fairness notations to the node classification task. Formally,

**Definition 2** (Statistical Parity).: _Statistical parity measures the difference in probabilities of a positive outcome of node groups, which is computed as follows:_ \[\Delta_{\text{SP}}=|P(\hat{y}=1|s=0)-P(\hat{y}=1|s=1)|.\]

**Definition 3** (Equalized Odds).: _Equalized odds measures the difference between the predicted true positive rates of node groups, which is computed as follows:_ \[\Delta_{\text{EO}}=|P(\hat{y}=1|y=1,s=0)-P(\hat{y}=1|y=1,s=1)|.\]

Intuitively, \(\Delta_{\text{SP}}\) measures the difference between the positive rates of the two node groups, while \(\Delta_{\text{EO}}\) measures the difference between their true positive rates. Both measurements of SP and EO are equally important and complementary in terms of fairness in node classification. We are aware of the existence of other accuracy measurements (e.g., false positive rate); these measurements are left for future work.
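Definitions 1–3 translate directly into a few lines of code. The sketch below is illustrative only; binary labels and a binary sensitive attribute are assumed, and the absolute-difference form of Definition 1 is used:

```python
import numpy as np

def group_balance_score(edges, s):
    """Group balance score B (Definition 1). `edges` is an iterable of
    node-index pairs; `s` maps a node index to its binary sensitive value."""
    inter = sum(s[u] != s[v] for u, v in edges)
    h_inter = inter / len(edges)
    h_intra = 1.0 - h_inter
    return 1.0 - abs(h_intra - h_inter)

def sp_eo_gaps(y_hat, y, s):
    """Empirical SP and EO gaps (Definitions 2 and 3) from 0/1 arrays."""
    y_hat, y, s = map(np.asarray, (y_hat, y, s))
    sp = abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())
    eo = abs(y_hat[(s == 0) & (y == 1)].mean()
             - y_hat[(s == 1) & (y == 1)].mean())
    return sp, eo
```

In F\({}^{2}\)GNN these quantities are computed per client: \(B^{i}\) on the local graph, and \(\Delta_{\text{SP}}\), \(\Delta_{\text{EO}}\) on the local model's predictions.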
In this paper, we consider both _local fairness_ and _global fairness_, i.e., the outputs of both local and global models should be fair in terms of statistical parity and equalized odds. As shown in the prior study [11], although local fairness and global fairness do not imply each other, global fairness is theoretically upper-bounded by local fairness for both SP and EO metrics when data follows the IID distribution. Our empirical studies (Section 7) will show that F\({}^{2}\)GNN can achieve both local fairness and global fairness for non-IID distributions.

## 5 Connection between Data Bias and Model Fairness

Data bias has been shown to be an important source of model unfairness. In this section, we investigate how the imbalanced distribution between the intra-group and inter-group links impacts model fairness (in terms of SP and EO). Directly examining how data bias can impact SP and EO is challenging. Therefore, we take an alternative approach that deals with model fairness from the lens of graph representations. Intuitively, group fairness requires that the model outputs should not be distinguishable based on sensitive attributes. As GNNs rely on graph representations to generate the model outputs for the downstream tasks, group fairness can be achieved when the graph representations are invariant to the sensitive attributes [11]. Informally, this requires independence between the graph representation and the sensitive attribute. How can the imbalanced distribution between the intra-group and inter-group links impact the correlation between the graph representation and the sensitive attribute? Next, we present our main theoretical analysis of this impact. In this paper, we utilize the _Point-Biserial Correlation_ [1], a special case of the Pearson correlation, to measure the correlation between the graph representation and the sensitive attribute. We do not consider other correlation metrics such as the Pearson correlation because the Point-Biserial Correlation accepts dichotomous variables as input.

**Definition 4** (Point-Biserial Correlation Coefficient).: _The point-biserial correlation coefficient is a statistical measure that quantifies the degree of association between a continuous-level variable and a dichotomous variable. In particular, given a dichotomous variable \(s\) (\(s\in\{0,1\}\)) and a continuous variable \(X\), the point-biserial correlation coefficient \(\rho_{X,s}\) between \(X\) and \(s\) is defined as:_ \[\rho_{X,s}=\frac{\mu_{0}-\mu_{1}}{\sigma_{X}}\sqrt{\frac{N_{0}N_{1}}{N^{2}}},\] _where \(\mu_{0}\) (\(\mu_{1}\)) is the mean value of \(X\) associated with \(s=0\) (\(s=1\)), \(N_{0}\) (\(N_{1}\)) is the number of samples in the class \(s=0\) (\(s=1\)), \(N=N_{0}+N_{1}\), and \(\sigma_{X}\) is the standard deviation of \(X\)._

The conventional definition of the point-biserial correlation involves a 1-dimensional variable \(X\). In line with the existing work [11], we extend this concept to incorporate higher-dimensional data, using a node feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times d}\), where \(N\) represents the number of nodes and \(d\) denotes the feature dimension. In our work, we define the \(\mu_{0}\) (\(\mu_{1}\)) in the above definition as the mean of the \(j\)-th feature dimension over the nodes possessing the sensitive attribute \(s=0\) (\(s=1\)), where \(j\in\{1,...,d\}\). The formal notions of \(\mu_{0}\) and \(\mu_{1}\) can be expressed as \(\mu_{j}^{s=0}\) and \(\mu_{j}^{s=1}\); the details of the calculation of \(\mu_{0}\) and \(\mu_{1}\) can be found in Appendix A.1.
For convenience, we will use \(\mu_{0}\) (\(\mu_{1}\)) and \(\mu_{j}^{s=0}\) (\(\mu_{j}^{s=1}\)) interchangeably. After computing the means \(\mu_{0}\) and \(\mu_{1}\) for a specific column \(j\), we can calculate the point-biserial correlation following the conventional definition. Given a graph \(\mathcal{G}\), let \(s\) be the sensitive attribute of \(\mathcal{G}\) and \(\mathbf{Z}\) be the representation (node embeddings) of \(\mathcal{G}\). Next, we present the theoretical analysis of the relationship between the data bias (group balance score) and the correlation \(\rho_{\mathbf{Z},s}\) between \(s\) and \(\mathbf{Z}\). Our theoretical analysis relies on two assumptions regarding the model and the data distribution respectively:

* **Model assumption**: We assume the GNN model contains a linear activation function [1, 1]. We note that although our theoretical analysis relies on the assumption of a linear activation function, our algorithmic techniques are generic and can be applied to various GNN models, including those with non-linear activation functions.
* **Data assumption**: Let \(\mathbf{X}_{j}^{s=0}\) (\(\mathbf{X}_{j}^{s=1}\)) be the \(j\)-th feature of those data samples associated with the sensitive attribute \(s=0\) (\(s=1\)). Following the prior works [1, 2], we assume the data values of \(\mathbf{X}_{j}^{s}\) follow a Gaussian distribution \(\mathcal{N}(\mu_{j}^{s},\sigma_{j}^{s})\), where \(\mu_{j}^{s}\) and \(\sigma_{j}^{s}\) are the mean and variance of \(\mathbf{X}_{j}^{s}\).

Based on these assumptions, we derive the following lemma:

**Lemma 5**.: _Given a graph \(\mathcal{G}\) and a GNN model that satisfy the data and model assumptions respectively, the point-biserial correlation (denoted as \(\rho_{\mathbf{Z},s}\)) between the graph representation \(\mathbf{Z}\) and the sensitive attribute \(s\) is measured as follows:_ \[\rho_{\mathbf{Z},s}=(N_{0}\mu_{0}-N_{1}\mu_{1})(H_{\text{intra}}-H_{\text{inter}})\cdot\frac{\sqrt{N_{0}\,N_{1}}}{\sigma_{\mathbf{Z}}N^{2}},\] _where \(H_{\text{intra}}\) and \(H_{\text{inter}}\) are defined in Definition 1, \(N_{0}\) (\(N_{1}\)) is the number of nodes associated with \(s=0\) (\(s=1\)), \(\sigma_{\mathbf{Z}}\) is the standard deviation of \(\mathbf{Z}\), \(N\) is the total number of nodes of \(\mathcal{G}\), and \(\mu_{0}\) (\(\mu_{1}\)) is the mean value of the nodes associated with \(s=0\) (\(s=1\))._

The proof of Lemma 5 can be found in Appendix A.2. Intuitively, when \(\rho_{\mathbf{Z},s}=0\) (i.e., the graph representation is independent of the sensitive attribute), either \(N_{0}\mu_{0}=N_{1}\mu_{1}\) or \(H_{\text{intra}}=H_{\text{inter}}\). Remark that both \(N_{0}\) and \(N_{1}\) are fixed values for a given graph. Therefore, we derive the following important theorem:

**Theorem 6**.: _Let \(\mathcal{G}\) be a graph that satisfies the given data assumption, and \(\mathbf{Z}\) be the representation of \(\mathcal{G}\) learned by a GNN model that satisfies the given model assumption. Then the point-biserial correlation \(\rho_{\mathbf{Z},s}\) between the graph representation \(\mathbf{Z}\) and the sensitive attribute \(s\) is positively correlated/monotonically increasing with \(|H_{\text{intra}}-H_{\text{inter}}|\). In particular, \(\rho_{\mathbf{Z},s}=0\) when \(H_{\text{intra}}=H_{\text{inter}}\)._

The details of the proof for Theorem 6 can be found in Appendix A.3.
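As a quick numerical companion to Definition 4 (separate from the formal proofs), the coefficient for a single embedding dimension can be computed as follows (a minimal sketch):

```python
import numpy as np

def point_biserial(z, s):
    """Point-biserial correlation (Definition 4) between one embedding
    dimension z (continuous) and a binary sensitive attribute s."""
    z, s = np.asarray(z, dtype=float), np.asarray(s)
    n0, n1 = int((s == 0).sum()), int((s == 1).sum())
    mu0, mu1 = z[s == 0].mean(), z[s == 1].mean()
    n = n0 + n1
    return (mu0 - mu1) / z.std() * np.sqrt(n0 * n1 / n**2)
```

Evaluating this on embeddings learned from graphs with different \(H_{\text{intra}}-H_{\text{inter}}\) gaps provides an empirical check of the monotonic trend stated in Theorem 6.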
As Theorem 6 suggests, the imbalanced distribution between inter-group and intra-group links is one of the key factors in the correlation between the graph representation and the sensitive attribute. As the model predictions largely rely on the graph representations, the unfairness in the model predictions can be largely reduced if the graph representation is independent of the sensitive attribute. Therefore, a graph with a more (less, resp.) balanced distribution between inter-group and intra-group links should lead to a more (less, resp.) fair model. We follow this insight in the design of F\({}^{2}\)GNN.

## 6 The Proposed Method: F\({}^{2}\)GNN

In a nutshell, F\({}^{2}\)GNN follows the same principles as FedAvg [1]: in each round \(t\), the server \(\mathcal{S}\) uniformly samples a set of clients. Each selected client \(C_{i}\) receives a copy of the current global parameters \(\omega_{t}\), and obtains its local model update \(\omega_{t}^{i}\) by learning over its local graph \(\mathcal{G}_{i}\) with \(\omega_{t}\) as the initialized model parameters. Then each client \(C_{i}\) sends \(\omega_{t}^{i}\) back to \(\mathcal{S}\). \(\mathcal{S}\) aggregates all the received local model updates on \(\omega_{t}\) and obtains \(\omega_{t+1}\). Finally, the server broadcasts the updated global model \(\omega_{t+1}\) to the clients for training in round \(t+1\). The training process terminates when it reaches convergence. To equip both local fairness and global fairness with the federated learning framework, F\({}^{2}\)GNN comprises two main components:

* _Client-side fairness-aware local model update scheme_: each client \(C_{i}\) obtains \(\omega_{t}^{i}\) by adding local fairness as a penalty term to the loss function of its local model.
* _Server-side fairness-weighted global model update scheme_: when \(\mathcal{S}\) aggregates all local model updates from the clients, it takes both the fairness metrics of the local models (e.g., SP or EO) and the data bias of the local graphs as aggregation weights.

Figure 1 illustrates the framework of F\({}^{2}\)GNN, and the detailed steps of our algorithm are summarized in Algorithm 1. Next, we present the details of each component.

### Client-side Fairness-aware Local Model Update

Intuitively, the design of the local model update scheme should serve two objectives: 1) it ensures the convergence of not only the local models on the client side but also the global model on the server side, especially when the data across different clients is non-IID; and 2) it equips the local models with fairness (both SP and EO) by seamlessly incorporating the fairness constraint into the training process of local model updates. To achieve these objectives, we utilize two key techniques: (i) model interpolation by leveraging the Jensen-Shannon (JS) divergence [14], and (ii) adding fairness as a penalty term to the loss function of the local models. Next, we discuss the details of each technique.

**JS divergence between global and local models.** One challenge of GNN training under the federated setting is that the local data on each client may not be IID [13, 12]. The difference in data distribution can lead to a contradiction between minimizing the local empirical loss and reducing the global empirical loss, and thus prevent the global model from reaching convergence. To tackle the non-IID challenge, inspired by Zheng _et al._[13], F\({}^{2}\)GNN measures the difference between the local and the global model, and takes this difference into consideration when updating local model parameters.
Specifically, let \(\omega_{t}\) and \(\omega^{i}_{t-1}\) be the global model parameters at epoch \(t\) and the local model parameters maintained by the client \(C_{i}\) at epoch \(t-1\), respectively. Let \(D_{\hat{\mathbf{Y}}_{\text{global}}}\) and \(D_{\hat{\mathbf{Y}}_{\text{local}}}\) be the distributions of the node labels in client \(C_{i}\)'s local graph predicted by the global model (with \(\omega_{t}\) as parameters) and the local model (with \(\omega^{i}_{t-1}\) as parameters), respectively. Upon receiving \(\omega_{t}\), the client \(C_{i}\) calculates the Jensen-Shannon (JS) divergence (denoted as \(\text{js}^{i}_{t}\)) between \(D_{\hat{\mathbf{Y}}_{\text{global}}}\) and \(D_{\hat{\mathbf{Y}}_{\text{local}}}\) as follows: \[\text{js}^{i}_{t}=\text{JS}(D_{\hat{\mathbf{Y}}_{\text{global}}}||D_{\hat{\mathbf{Y}}_{\text{local}}}).\] Intuitively, a higher \(\text{js}^{i}_{t}\) value indicates a greater difference between the global model and the local one. Next, the client \(C_{i}\) computes the initialized value of its local model update \(\omega^{i}_{t}\) (denoted as \(\hat{\omega}^{i}_{t}\)) as follows: \[\hat{\omega}^{i}_{t}\leftarrow(1-\text{js}^{i}_{t})\cdot\omega^{i}_{t-1}+\text{js}^{i}_{t}\cdot\omega_{t}. \tag{1}\] Following Eqn. (1), when the global model largely varies from the local model, the global model \(\omega_{t}\) is weighted more heavily than the local model \(\omega^{i}_{t-1}\). Consequently, \(\hat{\omega}^{i}_{t}\) will be closer to \(\omega_{t}\). This expedites the global model's movement towards the global stationary point and helps it reach convergence. **Fairness penalty term.** To equip the local models with fairness, we focus on a widely-used in-process approach, where fairness criteria are introduced during the training process, typically by adding a fairness constraint or penalty to the objective function [13, 12]. Therefore, we do not include sensitive features as inputs to the GNNs but instead use them during training as part of a penalty on disparity. Formally, we define the fairness penalty term of the client \(C_{i}\) at epoch \(t\) (denoted as \(\mathbf{\ell}^{i}_{\text{fair}_{t}}\)) as the sum of the norm-1 of the two group fairness measures (SP and EO) evaluated over \(C_{i}\)'s local graph \(\mathcal{G}_{i}\): \[\mathbf{\ell}^{i}_{\text{fair}_{t}}=||\Delta^{i}_{\text{SP}_{t}}||_{1}+||\Delta^{i}_{\text{EO}_{t}}||_{1}.\] Figure 1: An overview of F\({}^{2}\)GNN. In each iteration, client \(C_{i}\) receives the global model, calculates the Jensen-Shannon (JS) divergence \(\text{js}^{i}_{t}\), and updates the local model \(\hat{\omega}^{i}_{t}\). Local data is processed through a 2-layer GCN, and the model is trained over several epochs. The updated model \(\omega^{i}_{t}\) and the group balance score \(B_{i}\) are uploaded to the server. The server then computes the data-bias and model-fairness weights, integrates them, and uses the combined weight to aggregate the local models and update the global model \(\omega_{t}\). Then, the fairness penalty term is added to the loss function of the local model as follows: \[\mathbf{\ell}(\omega_{t}^{i};\mathbf{Z}_{i})=\mathbf{\ell}_{\text{util}_{t}}^{i}+\alpha\,\mathbf{\ell}_{\text{fair}_{t}}^{i}, \tag{2}\] where \(\mathbf{\ell}_{\text{util}_{t}}^{i}\) denotes the utility loss (e.g., cross-entropy loss for node classification), and \(\alpha\) is a tunable hyper-parameter.
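As a concrete illustration of the interpolation step, here is a minimal NumPy sketch (our own helper names) of the JS divergence between the two predicted label distributions and the initialized local update of Eqn. (1). Since the JS divergence in nats is bounded by \(\ln 2\approx 0.69\), the interpolation coefficient stays well inside \([0,1]\).

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two label distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def init_local_update(w_local_prev, w_global, js):
    """Eqn. (1): interpolate the previous local weights and the
    freshly received global weights by the JS divergence js."""
    return (1.0 - js) * w_local_prev + js * w_global
```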
**Local model parameter updates.** Finally, the client \(C_{i}\) obtains the final local model update \(\omega_{t}^{i}\) by executing multiple SGD steps: \[\omega_{t}^{i}\leftarrow\hat{\omega}_{t}^{i}-\eta\cdot\nabla\mathbf{\ell}(\omega_{t}^{i};\mathbf{Z}_{i}),\] where \(\hat{\omega}_{t}^{i}\) is the initialized local model update (Eqn. (1)), \(\eta\) is the learning rate, and \(\nabla\mathbf{\ell}(\omega_{t}^{i};\mathbf{Z}_{i})\) denotes the gradient of the loss function (Eqn. (2)). After completing local training, the client \(C_{i}\) uploads its local model parameters \(\omega_{t}^{i}\), the local fairness loss \(\mathbf{\ell}_{\text{fair}_{t}}^{i}\), and a group balance score \(B^{i}\) to the server \(\mathcal{S}\). We remark that sharing only aggregated information of the local graphs, such as \(\mathbf{\ell}_{\text{fair}_{t}}^{i}\) and \(B^{i}\), with the server makes attacks on individual information harder and thus protects client privacy. ### Server-side Fairness-weighted Global Model Update The server-side model update scheme in each iteration consists of four steps: * _Step 1._ Calculate the _data-bias weight_ \(\gamma_{\mathcal{E}_{t}}\) based on the group balance scores provided by each client; * _Step 2._ Derive the _model-fairness weight_ \(\gamma_{\mathcal{F}_{t}}\) using the fairness loss metrics uploaded by the clients, ensuring fairness metrics are integrated into the global model aggregation process; * _Step 3._ Combine the data-bias weight and model-fairness weight into a combined weight \(\gamma_{t}\); and * _Step 4._ Aggregate the local model updates with the global model using the combined weight. The following elaboration provides a detailed description of each step. **Data-bias weight.** Upon obtaining local parameters from the selected subset of clients at epoch \(t-1\), the server first constructs a data-bias weight \(\gamma_{\mathcal{E}_{t}}\in\mathbb{R}^{K^{\prime}}\), where \(K^{\prime}\) denotes the number of clients in the subset: \[\gamma_{\mathcal{E}_{t}}=\operatorname{softmax}\big{(}B^{1},...,B^{K^{\prime}}\big{)}.\] Each element in \(\gamma_{\mathcal{E}_{t}}\) corresponds to the group balance score \(B^{i}\) of client \(C_{i}\). Given that each client \(C_{i}\) maintains a graph \(\mathcal{G}_{i}\), the data-bias weight \(\gamma_{\mathcal{E}_{t}}\) effectively gauges the balance between inter- and intra-edges within each client graph. **Model-fairness weight.** While the group balance score provides a useful measure of the balance between inter- and intra-edges within each client graph, it is static and unable to capture the model's evolving learning process regarding fairness. To address this, we also consider a dynamic _model-fairness weight_ \(\gamma_{\mathcal{F}_{t}}\in\mathbb{R}^{K^{\prime}}\) that evaluates the statistical parity \(\Delta_{\text{SP}_{t}}^{i}\) and equalized odds \(\Delta_{\text{EO}_{t}}^{i}\) of each client's local model at each iteration: \[\gamma_{\mathcal{F}_{t}}=\exp\big{(}\operatorname{softmax}(\gamma_{\mathcal{F}_{t}}^{\prime})\big{)}.\] To calculate \(\gamma_{\mathcal{F}_{t}}\), we start with a weight vector \(\gamma_{\mathcal{F}_{t}}^{\prime}\), whose elements are each client's sum of the two group fairness metrics.
Therefore, \(\gamma_{\mathcal{F}_{t}}^{\prime}\) can be written as: \[\gamma_{\mathcal{F}_{t}}^{\prime}=\big{[}\Delta_{\text{SP}_{t}}^{1}+\Delta_{\text{EO}_{t}}^{1},...,\Delta_{\text{SP}_{t}}^{K^{\prime}}+\Delta_{\text{EO}_{t}}^{K^{\prime}}\big{]}.\] To magnify the effect of this dynamic model-fairness weight \(\gamma_{\mathcal{F}_{t}}^{\prime}\), we rescale it with an exponential function after passing it through a softmax function. This expands its range while maintaining its relative proportions, as exponential functions grow faster than linear ones. We employ the softmax function to standardize the two weight vectors, making them comparable. Given the distinct properties of the group balance score \(B\), which is bounded within the interval \([0,1]\), each element in \(\gamma_{\mathcal{E}_{t}}\) adheres to the same constraints. On the other hand, each element in \(\gamma_{\mathcal{F}_{t}}^{\prime}\) is a composite of two statistical fairness metrics for each client, which yields a value bounded within the range \([0,2]\). By utilizing the \(\operatorname{softmax}\) function, we standardize the ranges of these two weights, harmonizing their scales and facilitating subsequent hyperparameter tuning and combination operations. **Combined weight.** After computing \(\gamma_{\mathcal{E}_{t}}\) and \(\gamma_{\mathcal{F}_{t}}\), we integrate them into a single weight vector \(\gamma_{t}\in\mathbb{R}^{K^{\prime}}\) using the following equation: \[\gamma_{t}=\operatorname{softmax}\big{(}\frac{\lambda\cdot\gamma_{\mathcal{E}_{t}}+\gamma_{\mathcal{F}_{t}}}{\tau}\big{)}, \tag{3}\] where \(\lambda\) is a hyperparameter, and \(\tau\) is the temperature parameter of the \(\operatorname{softmax}\) function. This equation normalizes the two weight vectors \(\gamma_{\mathcal{E}_{t}}\) and \(\gamma_{\mathcal{F}_{t}}\) into a probability distribution that sums to one, thereby producing a new weight vector \(\gamma_{t}\). The temperature parameter controls the smoothness of the probability distribution, while the hyperparameter \(\lambda\) adjusts the relative importance of data bias in the final combination. **Global model updating strategy.** With the unified weights, we update the global model \(\omega_{t}\) by combining the uploaded local models \(\omega_{t}^{i}\) from the \(K^{\prime}\) clients, \(i\in\{1,...,K^{\prime}\}\), with the combined weight vector \(\gamma_{t}\) as follows: \[\omega_{t}\leftarrow\gamma_{t}^{1}\cdot\omega_{t}^{1}+...+\gamma_{t}^{K^{\prime}}\cdot\omega_{t}^{K^{\prime}}, \tag{4}\] where \(\gamma_{t}^{i}\) is the \(i\)-th element of the combined weight vector. The updated global model \(\omega_{t}\) is then broadcast to all clients by the server.
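The four server-side steps reduce to a few lines of array arithmetic. Below is a minimal NumPy sketch of Eqns. (3) and (4) (our own helper names; local models are represented as flattened parameter vectors for simplicity).

```python
import numpy as np

def softmax(v, tau=1.0):
    v = np.asarray(v, dtype=float) / tau
    e = np.exp(v - v.max())  # shift for numerical stability
    return e / e.sum()

def aggregate(balance_scores, fairness_sums, local_models, lam=1.0, tau=1.0):
    """Server-side Steps 1-4.
    balance_scores: [K'] group balance scores B^i
    fairness_sums:  [K'] per-client Delta_SP + Delta_EO
    local_models:   [K', D] flattened local parameter vectors"""
    gamma_E = softmax(balance_scores)         # Step 1: data-bias weight
    gamma_F = np.exp(softmax(fairness_sums))  # Step 2: model-fairness weight
    gamma = softmax(lam * gamma_E + gamma_F, tau=tau)  # Step 3, Eqn. (3)
    return gamma @ np.asarray(local_models)            # Step 4, Eqn. (4)
```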
## 7 Experimental Evaluation In this section, we present the results of our experiments on three real-world datasets for node classification tasks. Our proposed approach, F\({}^{2}\)GNN, is evaluated and compared against several baseline schemes in terms of both utility and fairness metrics. ### Experimental Setup **Datasets.** We employ three real-world datasets, Pokec-z, Pokec-n, and NBA, that are widely used for GNN training. To simulate the federated setting, we partition each dataset into subgraphs by sampling ego-networks, using the number of hops to grow the networks. Each network (or subgraph) is considered as a client. We evaluate our proposed approach on the Pokec datasets in a setting with 30 clients, each holding a 3-hop ego-network, and on the NBA dataset in a setting with 10 clients, each holding a 3-hop ego-network. See more details in Appendix D.
### Performance Evaluation Table 1 provides a comprehensive performance comparison of various models on the Pokec datasets, with a particular focus on their fairness attributes, specifically \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\). The results on the NBA dataset can be found in Table 4 in Appendix E. Specifically, we evaluate the models on a global test set as well as on local test sets, where each client has its own corresponding test set. After collecting the metrics from all clients, we report the median value across clients for each metric. Among the evaluated methods, our proposed framework F\({}^{2}\)GNN consistently outperforms the other methods across all three datasets on both the global and local test sets. In particular, F\({}^{2}\)GNN achieves the lowest \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\) values on all three datasets, indicating the effectiveness of the proposed scheme for group fairness in federated settings. The trade-off performance of each model on the three datasets is illustrated in Table 2. The proposed method achieves the highest trade-off values for the Pokec-z, Pokec-n, and NBA datasets, respectively, demonstrating its exceptional ability to balance accuracy and fairness. In summary, our proposed F\({}^{2}\)GNN framework outperforms the other baselines in terms of \(\Delta_{\text{SP}}\), \(\Delta_{\text{EO}}\), and trade-off values across all datasets. These results validate the effectiveness and robustness of F\({}^{2}\)GNN in achieving competitive performance while maintaining group fairness in node classification tasks. These findings also highlight the broad applicability of our approach in the field of federated learning, suggesting its potential for adaptation to other domains. Given the limited number of baseline models, we further evaluate our approach in comparison to several fair GNNs within a non-federated setting. Comprehensive experimental results and supplementary information can be found in Table 5 in Appendix E. ### Ablation Study In the ablation study reported in Figure 2, we evaluate the significance of the client-side fairness-aware local model updates and the server-side fairness-weighted model updates in our proposed model on the Pokec datasets and the NBA dataset. The omission of either the client-side updates or the server-side updates leads to a noticeable decrease in accuracy and an increase in the disparity measures \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\), underscoring their importance in maintaining both classification accuracy and fairness. Although discarding the server-side fair aggregation scheme results in the highest accuracy and AUC values on the Pokec-z dataset, it negatively impacts fairness, indicating the scheme's significant role in promoting fairness. The full model, F\({}^{2}\)GNN, achieves the best balance between accuracy and fairness, demonstrating the effectiveness of all components in federated graph learning. ## 8 Conclusion This research investigates the impact of data bias in federated learning, particularly in the context of graph neural networks. We emphasize the significance of considering the correlation between sensitive attributes and graph structures in achieving model fairness. To address these challenges, we propose F\({}^{2}\)GNN, a comprehensive approach that incorporates fairness in both the client-side local models and the server-side aggregation scheme.
Our experiments on real-world datasets demonstrate the effectiveness of our algorithm in mitigating bias and enhancing group fairness without compromising downstream task performance. Moving forward, we acknowledge the importance of strengthening privacy protection when uploading group balance scores. One potential solution is the utilization of techniques such as homomorphic encryption, which enables computations on encrypted data and secure aggregation on the server side.
2303.08230
Bayesian Beta-Bernoulli Process Sparse Coding with Deep Neural Networks
Several approximate inference methods have been proposed for deep discrete latent variable models. However, non-parametric methods which have previously been successfully employed for classical sparse coding models have largely been unexplored in the context of deep models. We propose a non-parametric iterative algorithm for learning discrete latent representations in such deep models. Additionally, to learn scale-invariant discrete features, we propose local data scaling variables. Lastly, to encourage sparsity in our representations, we propose a Beta-Bernoulli process prior on the latent factors. We evaluate our sparse coding model coupled with different likelihood models across datasets with varying characteristics and compare our results to current amortized approximate inference methods.
Arunesh Mittal, Kai Yang, Paul Sajda, John Paisley
2023-03-14T20:50:12Z
http://arxiv.org/abs/2303.08230v1
# Bayesian Beta-Bernoulli Process Sparse Coding with Deep Neural Networks ###### Abstract Several approximate inference methods have been proposed for deep discrete latent variable models. However, non-parametric methods which have previously been successfully employed for classical sparse coding models have largely been unexplored in the context of deep models. We propose a non-parametric iterative algorithm for learning discrete latent representations in such deep models. Additionally, to learn scale-invariant discrete features, we propose local data scaling variables. Lastly, to encourage sparsity in our representations, we propose a Beta-Bernoulli process prior on the latent factors. We evaluate our sparse coding model coupled with different likelihood models across datasets with varying characteristics and compare our results to current amortized approximate inference methods. ## 1 Introduction Sparse coding (Olshausen and Field, 1996) is an unsupervised latent factor model that has been widely used to uncover sparse discrete latent structure from data. Unlike auto-encoders, where the encoder is a parametric model, the encoder in sparse coding is an optimization algorithm that searches for an optimal encoding \(\mathbf{z}^{*}=\operatorname*{arg\,max}_{\mathbf{z}}p(\mathbf{x},\mathbf{z};\theta)\), which maximizes the joint likelihood of the data \(\mathbf{x}\) and latent encodings \(\mathbf{z}\). An advantage of the non-parametric approach is that it decouples the encoder and decoder such that the generalization error arises entirely from the reconstruction error of the decoder. In such models, sparsity is encouraged in the latent encodings \(\mathbf{z}\) via a prior such as a Laplace, Cauchy or factorized Student-t prior (Goodfellow et al., 2016). Sparse coding by optimizing the MAP objective with a Laplace prior allows one to use gradient optimization methods for inferring \(\mathbf{z}\). However, one major drawback of using such priors is that the latent factors in the encoding are encouraged to remain close to zero even when those factors are active, whereas, for inactive elements, under the prior distribution, a factor being exactly zero has zero probability (Goodfellow et al., 2012). Variational Auto-Encoders (Kingma and Welling, 2013, 2019) have been popular deep generative models employed to uncover lower-dimensional latent structure in data. Despite the flexibility of the deep likelihood model \(p(\mathbf{x}\mid\mathbf{z})\), VAEs use a parametric encoder network for inferring the latent encoding \(\mathbf{z}\), and hence do not benefit from the same advantages as a non-parametric encoding model. In VAEs, the generalization error is linked to both the decoder and the encoder and is difficult to disentangle. In addition to the limitations of using a parametric network for inference, amortized variational inference using parametric neural networks has additional learning constraints due to the amortization and approximation gaps in the variational objective used to train VAEs (Cremer et al., 2018). In principle, a non-parametric encoding model with MAP-EM optimization can perform better than neural-net-parameterized amortized inference, as it does not suffer from the amortization gap or the variational approximation gap.
This comes at the cost of losing posterior uncertainty estimates; however, this might be an acceptable trade-off given that posterior uncertainty in deep generative models obtained via amortized approximate inference is poorly calibrated and is still an area of active research (Nalisnick et al., 2018). Additionally, utilizing the MAP estimates, we can still potentially approximate posterior uncertainty using a Laplace approximation (Ritter et al., 2018). VAE models with discrete latent factors (Maddison et al., 2016, Jang et al., 2016) do not work well with continuous data likelihood models, as the discrete sparse latent factors have limited representational capacity and are unable to adequately represent local scale variations across an entire dataset. In fact, one often desires that the latent encodings only encode the underlying latent structure of the data that is invariant to local data point scale variations. To address the aforementioned issues, we propose a generative model with local scaling variables that decouples the data scaling from the discrete latent representation. We utilize a Beta-Bernoulli process prior on the latent codes that allows us to learn sparse discrete latent factors. For inference in this model, we propose a MAP-EM greedy pursuit algorithm. We expect the inferred latent codes with true zeroes to have a stronger regularizing effect than the above-mentioned sparsity-promoting priors, which is especially advantageous in deep generative models with flexible neural-network-parameterized likelihood models. The primary disadvantage of the non-parametric encoder is that it requires greater time to compute \(\mathbf{z}\) due to the iterative algorithm; however, since the Beta-Bernoulli prior encourages the encodings to be sparse, the time taken to encode each data point significantly decreases over training iterations. We demonstrate the efficacy of our model by proposing three different instantiations of our general model. We evaluate our models on discrete and continuous data, examining the representational capacity of our model by measuring the data reconstruction error, as well as the sparsity of our learned representations. We compare our models to the widely used VAE (Kingma and Welling, 2013) and its discrete variant, the Gumbel-Softmax VAE (Jang et al., 2016). Not only does our model perform better in terms of reconstruction errors, but it also learns substantially sparser latent encodings. ## 2 Related Work We briefly review the VAE model that has been widely used to learn latent representations. In the typical VAE generative model, \(\mathbf{z}_{n}\) is drawn from a Gaussian prior; given \(\mathbf{z}_{n}\), \(\mathbf{x}_{n}\) is then drawn from a distribution parametrized by a deep neural network \(f_{\theta}(\cdot)\), which maps \(\mathbf{z}_{n}\) to the sufficient statistics of the likelihood function \(p(\mathbf{x}_{n}\mid\mathbf{z}_{n})\): \[\mathbf{z}_{n} \sim p(\mathbf{z}_{n})\] \[\mathbf{x}_{n} \sim p(\mathbf{x}_{n}\mid f_{\theta}(\mathbf{z}_{n});\mathbf{\theta})\] Inference in this model is then performed using variational inference; however, unlike the free-form optimization used with mean-field variational inference (Jordan et al., 1999), VAE models parametrize the variational distribution \(q(\mathbf{z}_{n};\mathbf{\phi})\) with a neural network that maps the data \(\mathbf{x}_{n}\) to the sufficient statistics of the \(q(\cdot)\) distribution.
Then posterior inference is performed by optimizing the Evidence Lower Bound (ELBO) using gradient methods: \[\ln p(\mathbf{x})\geq\mathrm{ELBO}=\sum_{n}\mathbb{E}_{q(\mathbf{z}_{n}|\mathbf{x}_{n};\mathbf{\phi})}\left[\ln\frac{p(\mathbf{x}_{n},\mathbf{z}_{n};\mathbf{\theta})}{q(\mathbf{z}_{n}|\mathbf{x}_{n};\mathbf{\phi})}\right]\] ## 3 Beta-Bernoulli Generative Process We propose the following generative model with a Beta-Bernoulli process prior. Given observed data \(\mathbf{x}_{n}\), the corresponding latent encoding \(\mathbf{z}_{n}\) is drawn from a Bernoulli process (BeP) parameterized by a beta process (BP), where the Bernoulli process prior over each of the \(K\) factors \(\mathbf{z}_{nk}\in\mathbf{z}_{n}\) is parameterized by \(\mathbf{\pi}_{k}\) drawn from a finite limit approximation to the beta process (Griffiths and Ghahramani, 2011; Paisley and Carin, 2009). Since \(\mathbf{z}_{nk}\) is drawn from \(\mathrm{Bern}(\mathbf{\pi}_{k})\), where \(\mathbf{\pi}_{k}\sim\mathrm{Beta}\left(\alpha\gamma/K,\alpha\left(1-\gamma/K\right)\right)\), \(k\in\{1,\ldots,K\}\), the random measure \(G_{n}^{K}=\sum_{k=1}^{K}\mathbf{z}_{nk}\delta_{f_{\theta}(\mathbf{z}_{nk})}\) converges to a Bernoulli process as \(K\rightarrow\infty\) (Paisley and Jordan, 2016). Then, given a latent binary vector \(\mathbf{z}_{n}\in\{0,1\}^{K}\), the observed data point \(\mathbf{x}_{n}\) is drawn from an exponential family distribution with a local scaling factor \(\mathbf{\lambda}_{n}\), also drawn from an appropriate exponential family distribution (Sections 3.1.1 to 3.2). The natural parameters of this data distribution are parametrized by an \(L\)-layered neural network \(f_{\theta}(\cdot)\). The neural network \(f_{\theta}(\cdot)\) maps the binary latent code \(\mathbf{z}_{n}\in\{0,1\}^{K}\) to \(\mathbb{R}^{D}\). This corresponds to the following generative process: \[\mathbf{\pi}_{k} \sim\mathrm{Beta}\left(\alpha(\gamma/K),\alpha\left(1-(\gamma/K)\right)\right)\] \[\mathbf{z}_{nk} \sim\mathrm{Bern}(\mathbf{\pi}_{k})\] \[\mathbf{\lambda}_{n} \sim\mathrm{ExpFam}(\mathbf{\phi})\] \[\mathbf{x}_{n} \sim\mathrm{ExpFam}(f_{\theta}(\mathbf{z}_{n});\mathbf{\lambda}_{n})\] where \(\mathbf{\pi}_{k}\) is the global prior on \(\mathbf{z}_{nk}\), which corresponds to the \(k^{th}\) dimension of the latent encoded vector \(\mathbf{z}_{n}\), and \(\mathbf{z}_{n}\) is the local latent encoding for the \(n^{th}\) data point \(\mathbf{x}_{n}\). The likelihood model for \(\mathbf{x}\) is parametrized by local parameters \(\{\mathbf{\lambda}_{n}\}_{n=1}^{N}\) and the global parameters \(\mathbf{\theta}\). During inference, the Beta-Bernoulli process prior on \(\mathbf{z}\) encourages the model to learn sparse latent encodings. As we would like the binary encodings to be scale invariant, modeling a local data-point-specific scale distribution \(p(\mathbf{\lambda}_{n})\) allows us to marginalize out the scale variations in data when inferring the latent code \(\mathbf{z}_{n}\). We demonstrate the utility of this non-parametric encoding model by coupling the Beta-Bernoulli process sparse encoding prior with three distinct exponential family likelihood models in the following sections. ### Scale Invariant Models Given two data points \(\mathbf{x}_{m}\) and \(\mathbf{x}_{n}\), where \(\mathbf{x}_{m}\) is just a scaled version of \(\mathbf{x}_{n}\), we would want these data points to have the same latent embedding \(\mathbf{z}\).
To disentangle the scale of the data points from the latent discrete representation, we introduce a local scale distribution for the Gaussian and Poisson likelihood models. #### 3.1.1 GaussBPE For real-valued data, we use a Gaussian likelihood model, where \(f_{\theta}(\cdot)\) parametrizes the mean of the Gaussian distribution. We model the local data point scale \(\mathbf{\lambda}_{n}\) with a univariate Gaussian: \[\mathbf{\lambda}_{n} \sim\mathcal{N}(0,c)\] \[\mathbf{x}_{n} \sim\mathcal{N}(\mathbf{\lambda}_{n}f_{\theta}(\mathbf{z}_{n}),\sigma^{2}I)\] The likelihood given the local encoding \(\mathbf{z}_{n}\) depends on both the local parameters \(\mathbf{\lambda}_{n}\) and the global neural net parameters \(\mathbf{\theta}\). For \(\mathbf{\lambda}=1\), this is equivalent to the isotropic Gaussian likelihood with Gaussian prior generative model employed by Kingma and Welling (2013). #### 3.1.2 PoissBPE For count data we use a Poisson likelihood model, where \(f_{\theta}(\cdot)\) parametrizes the rate of the Poisson distribution. We model the local data point rate \(\mathbf{\lambda}_{n}\) with a Gamma distribution. Additionally, we introduce a global parameter \(\mathbf{\beta}\): \[\mathbf{\lambda}_{n} \sim\mathrm{Gamma}(a,b)\] \[\mathbf{x}_{n} \sim\mathrm{Poiss}(\mathbf{\lambda}_{n}\mathbf{\beta}f_{\theta}(\mathbf{z}_{n}))\] The likelihood then depends on the local parameters \(\{\mathbf{\lambda}_{n}\}_{n=1}^{N}\) and the global parameters, which include both the neural net parameters \(\mathbf{\varphi}\) and \(\mathbf{\beta}\), a \(W\times T\) matrix where each column \(\mathbf{\beta}_{:,t}\in\Delta_{W-1}\); that is, \(\mathbf{\theta}=\{\mathbf{\varphi},\mathbf{\beta}\}\). In the context of topic modeling, \(W\) corresponds to the number of words in the vocabulary, \(T\) corresponds to the number of topics, and \(\mathbf{\beta}_{:,t}\) corresponds to the \(t^{th}\) topic distribution over words. Then \(\mathbf{x}_{n}^{(d)}\) is the number of occurrences of word \(d\) in the \(n^{th}\) document. ### BernBPE To evaluate a likelihood model where we do not need to explicitly model the local scale, such as binary data, we use a Bernoulli likelihood model, where \(f_{\theta}(\cdot)\) parametrizes the mean of the Bernoulli distribution, without any local scaling variables: \[\mathbf{x}_{n}\sim\mathrm{Bern}(f_{\theta}(\mathbf{z}_{n}))\] Given the local encoding \(\mathbf{z}_{n}\), the likelihood model only depends on the global neural net parameters \(\mathbf{\theta}\). This model is equivalent to the Bernoulli likelihood model with Bernoulli prior employed by Jang et al. (2016). ## 4 Inference We propose a MAP-EM algorithm to perform inference in this model. We compute point estimates for the local latent encodings \(\{\mathbf{z}_{n}\}_{n=1}^{N}\) and the global parameters \(\mathbf{\theta}\), and compute posterior distributions over \(\mathbf{\pi}\) and \(\mathbf{\lambda}\). Since \(\mathbf{\pi}\perp\mathbf{x}_{n}\mid\mathbf{z}_{n}\), utilizing the conjugacy in the model, we can analytically compute the conditional posterior \(q(\mathbf{\pi})\). Similarly, for the local scaling variables in the Gaussian likelihood and Poisson likelihood models, we can analytically compute the conditional posterior \(q(\mathbf{\lambda}_{n})\).
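For reference, the finite Beta-Bernoulli prior of Section 3 that this MAP-EM procedure inverts can be sampled in a few lines. The sketch below (NumPy; the decoder \(f_{\theta}\) and the likelihood are left abstract, and we assume \(\gamma<K\) so that both Beta parameters are positive) performs the ancestral sampling of \(\mathbf{\pi}\) and the sparse binary codes.

```python
import numpy as np

def sample_prior(N, K, alpha=1.0, gamma=5.0, seed=0):
    """pi_k ~ Beta(alpha*gamma/K, alpha*(1 - gamma/K)); z_nk ~ Bern(pi_k)."""
    rng = np.random.default_rng(seed)
    pi = rng.beta(alpha * gamma / K, alpha * (1.0 - gamma / K), size=K)
    Z = rng.binomial(1, pi, size=(N, K))  # sparse binary codes
    return pi, Z

pi, Z = sample_prior(N=100, K=64)
print(Z.mean())  # the expected fraction of active bits is roughly gamma/K
```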
### Inference for local scale parameters #### 4.1.1 \(q(\mathbf{\lambda})\) for Gaussian Likelihood Since the conditional posterior \(q(\mathbf{\lambda})\triangleq p(\mathbf{\lambda}\mid\mathbf{x},\mathbf{\theta},\mathbf{z})\) factorizes as \(\prod_{n}q(\mathbf{\lambda}_{n})=\prod_{n}p(\mathbf{\lambda}_{n}\mid\mathbf{x}_{n},\mathbf{\theta},\mathbf{z}_{n})\) and the posterior distribution over \(\mathbf{\lambda}_{n}\) is also a Gaussian, we can analytically compute the posterior \(q(\mathbf{\lambda}_{n})\): \[q(\mathbf{\lambda}_{n}) =\mathcal{N}(\mathbf{\lambda}_{n}\mid\mu_{\mathbf{\lambda}_{n}|\mathbf{x}_{n},\mathbf{z}_{n},\mathbf{\theta}},\sigma^{2}_{\mathbf{\lambda}_{n}|\mathbf{x}_{n},\mathbf{z}_{n},\mathbf{\theta}}) \tag{1}\] \[\sigma^{2}_{\mathbf{\lambda}_{n}|\mathbf{x}_{n},\mathbf{z}_{n},\mathbf{\theta}} =\big{(}c^{-1}+f_{\theta}(\mathbf{z}_{n})^{\top}f_{\theta}(\mathbf{z}_{n})/\sigma^{2}\big{)}^{-1}\] \[\mu_{\mathbf{\lambda}_{n}|\mathbf{x}_{n},\mathbf{z}_{n},\mathbf{\theta}} =(\sigma^{2}_{\mathbf{\lambda}_{n}|\mathbf{x}_{n},\mathbf{z}_{n},\mathbf{\theta}})(f_{\theta}(\mathbf{z}_{n})^{\top}\mathbf{x}_{n})/\sigma^{2}\] #### 4.1.2 \(q(\mathbf{\lambda})\) for Poisson Likelihood The conditional posterior \(q(\mathbf{\lambda})\triangleq p(\mathbf{\lambda}\mid\mathbf{x},\mathbf{z},\mathbf{\theta},\mathbf{\beta})\) factorizes as \(\prod_{n}q(\mathbf{\lambda}_{n})=\prod_{n}p(\mathbf{\lambda}_{n}\mid\mathbf{x}_{n},\mathbf{z}_{n},\mathbf{\theta},\mathbf{\beta})\). Given the Gamma prior on \(\mathbf{\lambda}_{n}\), the posterior distribution over \(\mathbf{\lambda}_{n}\) is also Gamma distributed; hence, we can analytically compute the posterior \(q(\mathbf{\lambda}_{n})\): \[q(\mathbf{\lambda}_{n}) \propto\mathbf{\lambda}_{n}^{a-1}e^{-b\mathbf{\lambda}_{n}}\prod_{w}(\mathbf{\lambda}_{n})^{\mathbf{x}_{n}^{(w)}}e^{(-\mathbf{\lambda}_{n}\mathbf{\phi}_{n}^{(w)})} \tag{2}\] \[=\mathrm{Gamma}\left(\sum_{w}\mathbf{x}_{n}^{(w)}+a,b+1\right)\] where \(\mathbf{\phi}_{n}^{(d)}\triangleq[\mathbf{\beta}f_{\theta}(\mathbf{z}_{n})]_{d}\). Since \(\sum_{t}[f_{\theta}(\mathbf{z}_{n})]_{t}\triangleq 1\) and \(\sum_{w}\mathbf{\beta}_{w,t}\triangleq 1\), the sum over the random vector \(\mathbf{\phi}_{n}\) equals \(1\). Hence, the posterior \(q(\mathbf{\lambda}_{n})\) does not depend on \(f_{\theta}(\mathbf{z}_{n})\) or \(\mathbf{\beta}\). In practice, we only need to compute \(q(\mathbf{\lambda})\) for the entire dataset once during training.
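Both closed-form posteriors are one-liners in code. Here is a minimal NumPy sketch of Eqs. (1) and (2) (our own function names; the Poisson case returns the Gamma shape and rate).

```python
import numpy as np

def q_lambda_gauss(x_n, f_zn, sigma2=1.0, c=1.0):
    """Eq. (1): Gaussian posterior over the local scale lambda_n."""
    var = 1.0 / (1.0 / c + f_zn @ f_zn / sigma2)  # posterior variance
    mu = var * (f_zn @ x_n) / sigma2              # posterior mean
    return mu, var

def q_lambda_poiss(x_n, a=1.0, b=1.0):
    """Eq. (2): Gamma posterior over the local rate lambda_n."""
    return x_n.sum() + a, b + 1.0  # (shape, rate)
```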
### Inference for latent variables #### 4.3.1 Stochastic update for \(q(\mathbf{\pi})\) For scalable inference, given a batch of data \(\{\mathbf{x}_{n}\}_{n\in\mathcal{B}}\), we first compute the latent codes \(\{\mathbf{z}_{n}\}_{n\in\mathcal{B}}\); we can then efficiently compute the posterior \(q(\mathbf{\pi}\mid\{\mathbf{z}_{n}\}_{n\in\mathcal{B}})\) using natural gradient updates (Hoffman et al., 2013). This posterior parameter update is a stochastic gradient step along the natural gradient, which is equivalent to the following updates of the posterior sufficient statistics \(\{\mathbf{a}_{k}\}_{k=1}^{K}\) and \(\{\mathbf{b}_{k}\}_{k=1}^{K}\) with step size \(\eta\): \[q(\mathbf{\pi}) =\prod_{k}\operatorname{Beta}(\mathbf{\pi}_{k}\mid\mathbf{a}_{k},\mathbf{b}_{k}) \tag{3}\] \[\mathbf{a}_{k}^{\prime} =\alpha\frac{\gamma}{K}+\frac{N}{|S|}\sum_{n\in S}\mathbf{z}_{k}^{(n)}\] \[\mathbf{b}_{k}^{\prime} =\alpha\left(1-(\gamma/K)\right)+\frac{N}{|S|}\sum_{n\in S}\left(1-\mathbf{z}_{k}^{(n)}\right)\] \[\mathbf{a}_{k} \leftarrow(1-\eta)\mathbf{a}_{k}+\eta\;\mathbf{a}_{k}^{\prime}\] \[\mathbf{b}_{k} \leftarrow(1-\eta)\mathbf{b}_{k}+\eta\;\mathbf{b}_{k}^{\prime}\]
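A minimal NumPy sketch of this update (our own function name; `Z_batch` holds the codes inferred for a minibatch \(S\)):

```python
import numpy as np

def update_q_pi(a, b, Z_batch, N, alpha=1.0, gamma=5.0, eta=0.1):
    """Stochastic natural-gradient update of q(pi).
    a, b: current Beta parameters per factor (shape [K])
    Z_batch: binary codes for the minibatch (shape [|S|, K])"""
    K = a.shape[0]
    scale = N / Z_batch.shape[0]
    a_new = alpha * gamma / K + scale * Z_batch.sum(axis=0)
    b_new = alpha * (1.0 - gamma / K) + scale * (1 - Z_batch).sum(axis=0)
    return (1 - eta) * a + eta * a_new, (1 - eta) * b + eta * b_new
```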
### Greedy pursuit for \(\mathbf{z}\) For each model we marginalize the local scale variables \(\mathbf{\lambda}\) and the global \(\mathbf{\pi}\) to compute the complete-data joint likelihood lower bounds \(\mathcal{L}_{\mathcal{G}},\mathcal{L}_{\mathcal{P}},\mathcal{L}_{\mathcal{B}}\) for the Gaussian, Poisson and Bernoulli likelihood models respectively, which include terms that only depend on \(\mathbf{z}\): \[\mathcal{L}_{\mathcal{G}}(\mathbf{z}) =\ln\int p(\mathbf{x},\mathbf{\lambda}\mid\mathbf{\theta},\mathbf{z})d\lambda+\mathbb{E}_{q(\mathbf{\pi})}\left[\ln p(\mathbf{z}\mid\mathbf{\pi})\right]\] \[\mathcal{L}_{\mathcal{P}}(\mathbf{z}) =\mathbb{E}_{q(\mathbf{\lambda})}\left[\ln p(\mathbf{x},\mathbf{\lambda}\mid\mathbf{\theta},\mathbf{z})\right]+\mathbb{E}_{q(\mathbf{\pi})}\left[\ln p(\mathbf{z}\mid\mathbf{\pi})\right]\] \[\mathcal{L}_{\mathcal{B}}(\mathbf{z}) =\ln p(\mathbf{x}\mid\mathbf{\theta},\mathbf{z})+\mathbb{E}_{q(\mathbf{\pi})}\left[\ln p(\mathbf{z}\mid\mathbf{\pi})\right]\] The expected log prior is given by: \[\mathbb{E}_{q(\mathbf{\pi})}\left[\log p(\mathbf{z}_{n}\mid\mathbf{\pi})\right]=\sum_{k}\mathbf{z}_{nk}[\psi(\mathbf{a}_{k})-\psi(\mathbf{a}_{k}+\mathbf{b}_{k})]+(1-\mathbf{z}_{nk})[\psi(\mathbf{b}_{k})-\psi(\mathbf{a}_{k}+\mathbf{b}_{k})]\] where \(\psi(\cdot)\) is the digamma function. For the Gaussian likelihood model we can marginalize \(\mathbf{\lambda}_{n}\) when maximizing \(p(\mathbf{x}_{n},\mathbf{\lambda}_{n},\mathbf{z}_{n})\): \[\mathcal{L}_{\mathcal{G}}(\mathbf{z}_{n})=\ln\int p(\mathbf{x}_{n},\mathbf{\lambda}_{n}\mid\mathbf{\theta},\mathbf{z}_{n})d\mathbf{\lambda}_{n}+\mathbb{E}_{q(\mathbf{\pi})}\left[\ln p(\mathbf{z}_{n}|\mathbf{\pi})\right]\] where the marginal log likelihood can be calculated as: \[\ln\int p(\mathbf{x}_{n},\mathbf{\lambda}_{n}\mid\mathbf{\theta},\mathbf{z}_{n})d\mathbf{\lambda}_{n}=-\frac{1}{2}\bigg{[}\ln\left(1+\frac{c}{\sigma^{2}}f_{\theta}(\mathbf{z}_{n})^{\top}f_{\theta}(\mathbf{z}_{n})\right)+\mathbf{x}_{n}^{\top}\left(\sigma^{-2}I-\frac{\frac{1}{\sigma^{2}}f_{\theta}(\mathbf{z}_{n})f_{\theta}(\mathbf{z}_{n})^{\top}}{c^{-1}\sigma^{2}+f_{\theta}(\mathbf{z}_{n})^{\top}f_{\theta}(\mathbf{z}_{n})}\right)\mathbf{x}_{n}\bigg{]}\] For the Poisson likelihood model we compute the expectation: \[\mathbb{E}_{q(\mathbf{\lambda})}[\ln p(\mathbf{x},\mathbf{\lambda}\mid\mathbf{\theta},\mathbf{z})]=\sum_{w}\left[\mathbf{x}_{n}^{(w)}\ln\mathbf{\phi}_{n}^{(w)}-\frac{a+\sum_{w^{\prime}}\mathbf{x}_{n}^{(w^{\prime})}}{b+1}\mathbf{\phi}_{n}^{(w)}\right]\] To optimize \(\mathcal{L}_{(\cdot)}(\mathbf{z})\), we employ a greedy pursuit algorithm, which is similar to the matching pursuit used by K-SVD (Aharon et al., 2006). We use \(\mathbf{z}_{\Omega_{n}}\) to denote a \(K\)-vector, corresponding to the latent vector for the \(n^{\text{th}}\) data point, where \(\forall j\in\mathbf{\Omega}_{n},\mathbf{z}_{nj}=1\) and \(\forall j\notin\mathbf{\Omega}_{n},\mathbf{z}_{nj}=0\). To compute the sparse code given a data point \(\mathbf{x}_{n}\), we start with an empty active set \(\mathbf{\Omega}_{n}\); then, for each \(j\in\{1,\dots,K\}\setminus\mathbf{\Omega}_{n}\), we individually set \(\mathbf{z}_{nj}=1\) to find the \(j^{*}\) that maximizes \(\mathcal{L}_{(\cdot)}(\mathbf{z}_{\Omega_{n}\cup\{j^{*}\}})\). We compute the scores \(\zeta^{+}\triangleq\mathcal{L}_{(\cdot)}(\mathbf{z}_{\Omega_{n}\cup\{j^{*}\}})\) and \(\zeta^{-}\triangleq\mathcal{L}_{(\cdot)}(\mathbf{z}_{\Omega_{n}})\). We add \(j^{*}\) to \(\mathbf{\Omega}_{n}\) only if \(\zeta^{+}>\zeta^{-}\); this step is necessary because, unlike matching pursuit, the neural net \(f_{\theta}(\cdot)\) is a non-linear mapping from \(\mathbf{z}_{\Omega_{n}}\), hence adding \(j^{*}\) to \(\mathbf{\Omega}_{n}\) can decrease \(\mathcal{L}_{(\cdot)}(\mathbf{z}_{\Omega_{n}})\). For each \(\mathbf{x}_{n}\), we repeat the preceding greedy steps to sequentially add factors to \(\mathbf{\Omega}_{n}\) until \(\mathcal{L}_{(\cdot)}(\mathbf{z}_{\Omega_{n}})\) ceases to monotonically increase. The expected log prior on \(\mathbf{z}\) imposes an approximate beta process penalty. Low-probability factors learned through \(q(\mathbf{\pi})\) lead to negative scores and hence eliminate latent factors, encouraging sparse encodings \(\mathbf{z}_{n}\). During optimization, as \(q(\mathbf{\pi}_{k})\) for a given dimension \(k\) decreases, the likelihood that the \(k^{th}\) dimension will be utilized to encode the data point also decreases. Consequently, as training progresses, this speeds up the sparse coding routine over iterations. Figure 1: Greedy pursuit for \(\mathbf{z}\). For each greedy sparse coding step, first all bits in the active set \(\mathbf{\Omega}\) are turned on, then individually all \(j\in\{1,\dots,K\}\setminus\mathbf{\Omega}\) are turned on. The bit \(j^{*}\) that leads to the maximal increase in the bound \(\mathcal{L}(\mathbf{z}_{\mathbf{\Omega}\cup\{j^{*}\}})\) is added to the active set \(\mathbf{\Omega}\). If adding a \(j^{*}\) leads to a decrease in the bound, the search is terminated and the sparse vector \(\mathbf{z}_{\mathbf{\Omega}}\) is returned as the sparse code. Here \(\mathcal{L}(\mathbf{z}^{*})\) is the bound at the optimal encoding that could be recovered if an exhaustive search were performed over the \(2^{K}\) possible codes.
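The greedy search itself is compact. Below is a minimal Python sketch (our own names; `score` is an assumed callable that evaluates the relevant bound \(\mathcal{L}_{(\cdot)}\) for a given active set):

```python
def greedy_pursuit(score, K):
    """Greedily grow the active set Omega while the bound increases."""
    omega = set()
    best = score(omega)
    while True:
        candidates = [j for j in range(K) if j not in omega]
        if not candidates:
            return omega
        # Try each inactive bit and keep the one with the highest bound.
        zeta_plus, j_star = max((score(omega | {j}), j) for j in candidates)
        if zeta_plus <= best:  # adding any bit would decrease the bound
            return omega
        omega.add(j_star)
        best = zeta_plus
```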
### Update for \(\boldsymbol{\theta}\) To update the global parameters \(\boldsymbol{\theta}\), for each model we marginalize the local scale variables \(\boldsymbol{\lambda}\) and the global \(\boldsymbol{\pi}\) to compute the complete-data joint likelihood lower bounds \(\widehat{\mathcal{L}}_{\mathcal{G}},\widehat{\mathcal{L}}_{\mathcal{P}},\widehat{\mathcal{L}}_{\mathcal{B}}\) for the Gaussian, Poisson and Bernoulli likelihood models respectively, which include terms that only depend on \(\boldsymbol{\theta}\): \[\widehat{\mathcal{L}}_{\mathcal{G}}(\boldsymbol{\theta}) =\mathbb{E}_{q(\boldsymbol{\lambda})}\left[\ln p(\boldsymbol{x},\boldsymbol{\lambda}\mid\boldsymbol{\theta},\boldsymbol{z})\right]\] \[\widehat{\mathcal{L}}_{\mathcal{P}}(\boldsymbol{\theta}) =\mathbb{E}_{q(\boldsymbol{\lambda})}\left[\ln p(\boldsymbol{x},\boldsymbol{\lambda}\mid\boldsymbol{\theta},\boldsymbol{z})\right]\] \[\widehat{\mathcal{L}}_{\mathcal{B}}(\boldsymbol{\theta}) =\ln p(\boldsymbol{x}\mid\boldsymbol{\theta},\boldsymbol{z})\] For the Gaussian model: \[\mathbb{E}_{q(\boldsymbol{\lambda})}\left[\ln p(\boldsymbol{x},\boldsymbol{\lambda}\mid\boldsymbol{\theta},\boldsymbol{z})\right]=\|\boldsymbol{x}_{n}-\mu_{\boldsymbol{\lambda}_{n}|\boldsymbol{x}_{n},\boldsymbol{z}_{n},\boldsymbol{\theta}}\cdot f_{\theta}(\boldsymbol{z}_{n})\|^{2}+\sigma_{\boldsymbol{\lambda}_{n}|\boldsymbol{x}_{n},\boldsymbol{z}_{n},\boldsymbol{\theta}}^{2}\cdot f_{\theta}(\boldsymbol{z}_{n})^{\top}f_{\theta}(\boldsymbol{z}_{n})+c\] For the Poisson and Bernoulli models, the likelihood is the same as in the sparse coding step. We use stochastic optimization to update \(\boldsymbol{\theta}\) using ADAM (Kingma and Ba, 2014). First-order gradient methods with moment estimates, such as ADAM, can implicitly take into account the rate of change of the natural parameters \((\boldsymbol{a}_{k},\boldsymbol{b}_{k})\) of \(q(\boldsymbol{\pi})\) when optimizing the neural net parameters. The full sparse coding algorithm is outlined in Algorithm 1. ## 5 Empirical study We demonstrate the potential of our beta process sparse encoding models in a variety of settings. We evaluate the Gaussian likelihood Beta-Bernoulli Process Encoder (GaussBPE) on the scaled MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky, 2009) datasets. The scaled MNIST dataset consists of MNIST images that are randomly scaled using a scaling factor sampled from \(\mathcal{U}(-\text{scale max},\text{scale max})\). We evaluate the BernBPE on MNIST data. To compare GaussBPE to the Gaussian VAE and BernBPE to the Gumbel-Softmax VAE, we compare the sparsity of the learned encodings, as well as the reconstruction error on held-out data, using the metrics defined in the following subsections. Lastly, we present qualitative results for the PoissBPE on the 20-Newsgroup dataset (Joachims, 1996) to uncover latent distributions over topics. Figure 2: (A) Left: samples from scaled MNIST data. Middle: the corresponding sparse codes (reshaped as matrices for visualization), and the reconstructions using GaussBPE. Right: reconstructions using the VAE. (B) The probability of activation of the most class-discriminative latent dimensions for randomly chosen classes from the CIFAR-10 dataset. As expected, the CIFAR-10 dataset utilizes more latent factors relative to the simpler MNIST dataset. (C) Sorted mean activation probabilities for all latent dimensions, for the MNIST and CIFAR-10 datasets. (D) Time duration per epoch during training for all models.
### Sparsity We quantify the sparsity of the inferred latent encodings using the Hoyer extrinsic metric (Hoyer, 2004), which is \(0\) for a fully dense vector and \(1\) for a fully sparse vector. For a set of latent encodings \(\{\mathbf{z}_{n}\}_{n=1}^{N}\), the sparsity is defined as: \[\mathrm{Sparsity}(\{\mathbf{z}_{n}\}_{n=1}^{N})=\frac{1}{N}\sum_{n}\mathrm{Hoyer}(\mathbf{z}_{n})\] \[\mathrm{Hoyer}(\mathbf{z}_{n})=\frac{\sqrt{K}-\|\mathbf{z}_{n}\|_{1}/\|\mathbf{z}_{n}\|_{2}}{\sqrt{K}-1}\in[0,1]\] For the VAE models, we use the encoding means \(\{\mathbb{E}_{q(\mathbf{z}_{n}|\mathbf{x}_{n})}[\mathbf{z}_{n}]\}_{n=1}^{N}\) in lieu of \(\{\mathbf{z}_{n}\}_{n=1}^{N}\). ### Reconstruction Error For GaussBPE and the Gaussian likelihood VAE, we report the reconstruction mean squared error (MSE). For the Gaussian likelihood VAE, \(\mathbb{E}[\mathbf{\lambda}_{n}]=1\), and we use \(\mathbb{E}_{q(\mathbf{z}_{n}|\mathbf{x}_{n})}[f_{\theta}(\mathbf{z}_{n})]\) instead of \(f_{\theta}(\mathbf{z}_{n})\): \[\mathrm{MSE}(\{\mathbf{x}_{n},\mathbf{z}_{n}\}_{n=1}^{N})=\frac{1}{N}\sum_{n}\|\mathbf{x}_{n}-\mathbb{E}[\mathbf{\lambda}_{n}]f_{\theta}(\mathbf{z}_{n})\|^{2}\] For the BernBPE and the Bernoulli likelihood Gumbel-Softmax VAE, we report the negative log likelihood (NLL): \[\mathrm{NLL}(\{\mathbf{x}_{n},\mathbf{z}_{n}\}_{n=1}^{N})=-\frac{1}{N}\sum_{n}\ln p(\mathbf{x}_{n}|\mathbf{z}_{n})\] For the VAE models, we use the same recognition network architecture as the original papers. For the VAE likelihood models and the GaussBPE and BernBPE likelihood models, we use the same architecture as that used in the Gumbel-Softmax VAE paper. Notably, the last layer is linear for the Gaussian VAE but sigmoid for GaussBPE, as in our model \(\mathbf{\lambda}_{n}\) decouples the scaling of individual data points. A summary of all the hyperparameters used for all models can be found in the supplementary material. We evaluate the PoissBPE model on 20-Newsgroup data. We pre-process the data by removing headers, footers and quotes, as well as English stop words, and vectorize each document to a \(1142\)-dimensional vector, where each dimension represents the number of occurrences of a particular word in the vocabulary within the document. For the PoissBPE, we choose a \(W\times T\) matrix \(\mathbf{\beta}\), with \(W=1142\) and \(T=15\); this corresponds to a topic model with \(15\) topics, where each topic vector \(\mathbf{\beta}_{:,t}\) is a distribution over the \(1142\) words. The last layer non-linearity is a softmax; hence \(f_{\theta}(\mathbf{z}_{n})\) maps \(\mathbf{z}_{n}\) to a probability distribution over the \(T\) topics.
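Before turning to the results, the sparsity metric above reduces to a few lines of NumPy (a sketch with our own function names; a small `eps` guards against all-zero codes):

```python
import numpy as np

def hoyer(z, eps=1e-12):
    """Hoyer sparsity of one code: 0 for fully dense, 1 for fully sparse."""
    K = z.shape[0]
    ratio = np.abs(z).sum() / (np.sqrt((z ** 2).sum()) + eps)
    return (np.sqrt(K) - ratio) / (np.sqrt(K) - 1)

def mean_sparsity(Z):
    """Average Hoyer sparsity over a set of codes Z (shape [N, K])."""
    return float(np.mean([hoyer(z) for z in Z]))
```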
## 6 Results On binary MNIST data, where the scale of the data points does not affect the latent encodings, we found the BernBPE model to be comparable to the Gumbel-Softmax VAE in terms of reconstruction error; however, it does so while utilizing substantially fewer latent dimensions. For real-valued MNIST data, the GaussBPE significantly outperformed the Gaussian likelihood VAE in terms of both the reconstruction error and the sparsity of the latent codes. For randomly scaled MNIST data, the relative improvement in sparsity was similar to the improvement observed over the VAE on real-valued MNIST data; the reconstruction error, however, was markedly better. Lastly, on the CIFAR-10 dataset, the GaussBPE performed better than the VAE in terms of both reconstruction error and sparsity. We summarize our experimental results in Table 1. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Dataset & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{Scaled MNIST} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{CIFAR 10} \\ Model & GS-VAE & BernBPE & VAE & GaussBPE & VAE & GaussBPE & VAE & GaussBPE \\ \cline{2-10} NLL & **81.55** & 82.16 & MSE & 32.92 & **9.18** & 16.94 & **8.51** & 79.15 & **75.88** \\ Sparsity & 0.40 & **0.93** & Sparsity & 0.72 & **0.86** & 0.83 & **0.86** & 0.81 & **0.96** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of reconstruction errors and latent code sparsity on held-out data. Figure 3: Learned topic associations through sparse codes. Each node represents a topic. The words within each node are the most representative words chosen from the top 15 most probable words from that topic. Groups of nodes connected by edges denote topics activated by the same sparse code, where the size of the node is proportional to the probability of the topic. ## 7 Discussion We evaluated our models across four datasets to explore the effects of the different variables we introduce in our generative model. On the binary MNIST data, where scale is not a factor, we observed, as expected, similar performance in terms of reconstruction error; however, the Beta-Bernoulli process prior led our model to learn much sparser representations. For real-valued MNIST data, with variations in intensity across images, the local scaling variable allowed the model to learn sparser encodings while also improving the reconstruction error. We further explored this effect by exaggerating the local scale variations, randomly perturbing the intensity of the MNIST images. As expected, this led to a significant deterioration in the image reconstructions produced by the VAE. Our explicit modeling of local variations decoupled the local data scaling from the encoding process, which allowed the model to learn scale-invariant encodings, resulting in substantially improved performance over the VAE. On natural image datasets such as CIFAR-10, we expect more variation in image intensity relative to the standard MNIST dataset. Since our model performs well even under random perturbations of the local data scale, we expected the GaussBPE to perform well on the CIFAR-10 dataset; as we had hoped, our model learned sparser encodings while also improving reconstruction error on CIFAR-10.
2302.02420
Variational Inference on the Final-Layer Output of Neural Networks
Traditional neural networks are simple to train but they typically produce overconfident predictions. In contrast, Bayesian neural networks provide good uncertainty quantification but optimizing them is time consuming due to the large parameter space. This paper proposes to combine the advantages of both approaches by performing Variational Inference in the Final layer Output space (VIFO), because the output space is much smaller than the parameter space. We use neural networks to learn the mean and the variance of the probabilistic output. Like standard, non-Bayesian models, VIFO enjoys simple training and one can use Rademacher complexity to provide risk bounds for the model. On the other hand, using the Bayesian formulation we incorporate collapsed variational inference with VIFO which significantly improves the performance in practice. Experiments show that VIFO and ensembles of VIFO provide a good tradeoff in terms of run time and uncertainty quantification, especially for out of distribution data.
Yadi Wei, Roni Khardon
2023-02-05T16:19:01Z
http://arxiv.org/abs/2302.02420v4
# Direct Uncertainty Quantification ###### Abstract Traditional neural networks are simple to train but they produce overconfident predictions, while Bayesian neural networks provide good uncertainty quantification but optimizing them is time consuming. This paper introduces a new approach, direct uncertainty quantification (DirectUQ), that combines their advantages: the neural network directly models uncertainty in output space, and captures both aleatoric and epistemic uncertainty. DirectUQ can be derived as an alternative variational lower bound, and hence benefits from collapsed variational inference that provides improved regularizers. On the other hand, like non-probabilistic models, DirectUQ enjoys simple training and one can use Rademacher complexity to provide risk bounds for the model. Experiments show that DirectUQ and ensembles of DirectUQ provide a good tradeoff in terms of run time and uncertainty quantification, especially for out of distribution data. ## 1 Introduction With the development of training and representation methods for deep learning, models using neural networks provide excellent predictions. However, such models fall behind in terms of uncertainty quantification, and their predictions are often overconfident (Guo et al., 2017). Bayesian methods provide a methodology for uncertainty quantification by placing a prior over parameters and computing a posterior given observed data, but the computation required for such methods is often infeasible. Variational inference (VI) is one of the most popular approaches for approximating the Bayesian outcome, e.g., (Blundell et al., 2015; Graves, 2011; Wu et al., 2019). By minimizing the KL divergence between the variational distribution and the true posterior and constructing an evidence lower bound (ELBO), one can find the best approximation to the intractable posterior. However, when applied to deep learning, VI requires sampling to compute the ELBO, and it suffers from both high computational cost and large variance in gradient estimation. Wu et al. (2019) have proposed a deterministic variational inference (DVI) approach to alleviate the latter problem. The idea of this approach relies on the central limit theorem, which implies that with sufficiently many hidden neurons, the distribution of each layer forms a multivariate Gaussian distribution. Thus we only need to compute the mean and covariance of each layer. However, DVI still suffers from high computational cost and complex optimization. Inspired by DVI, we observe that the only aspect that affects the prediction is the distribution of the last layer in the neural network. We therefore propose to use a neural network to directly output the mean and the diagonal covariance of the last layer, which avoids calculating a distribution over network weights and saves intermediate computations. Like all Bayesian methods, our method quantifies both epistemic and aleatoric uncertainty of the output. However, it does so in an explicit manner, so we call it _Direct Uncertainty Quantification_ (DirectUQ). DirectUQ has a single set of parameters and thus enjoys simple optimization as in non-Bayesian methods, yet it has the advantage of uncertainty quantification in predictions similar to Bayesian methods. We can motivate DirectUQ from several theoretical perspectives. First, due to its simplicity, one can derive risk bounds for the model through Rademacher complexity.
Second, DirectUQ can be derived as an alternative ELBO for neural networks which uses priors and posteriors on the output layer. We show that, for the linear case, with expressive priors DirectUQ can capture the same predictions as standard variational inference. With practical priors and deep networks the model can be seen as a simpler alternative. Third, through the interpretation as an ELBO, we derive improved priors (or regularizers) for DirectUQ motivated by collapsed variational inference (Tomczak et al., 2021) and empirical Bayes (Wu et al., 2019). The new regularizers greatly improve the performance of DirectUQ. An experimental evaluation compares DirectUQ with VI and non-Bayesian neural networks optimized by stochastic gradient descent (SGD), as well as ensembles of these models. The results show that (1) DirectUQ is slightly slower than SGD but much faster than VI, and (2) DirectUQ achieves better uncertainty quantification on shifted and out-of-distribution data while preserving the quality of in-distribution predictions. Overall, ensembles of DirectUQ provide a good tradeoff in terms of run time and uncertainty quantification, especially for out-of-distribution data. ### Related Work DirectUQ shares some similarities with Kendall and Gal (2017), where both use neural networks to output the mean and covariance of the last layer. However, unlike their method, DirectUQ represents a variational lower bound, and there are differences in the loss function and the approach to epistemic uncertainty which are elaborated further in Section 2. Another similar line of work (Sun et al., 2019; Tran et al., 2022) performs variational inference in the functional space. However, these works focus on choosing a better prior in weight space which is induced from Gaussian process priors on the functional space, whereas DirectUQ directly puts a simple prior on the function space. DirectUQ differs from existing variational inference methods. Last-layer variational inference (Brosse et al., 2020) performs variational inference on the weights of the last layer, while we perform variational inference on the output of the last layer. The local reparametrization trick (Tomczak et al., 2020; Oleksiienko et al., 2022) performs two forward passes with the mean and variance to sample the output for each layer, while we only require one pass and sample the output of the last layer for prediction. Various alternative Bayesian techniques have been proposed. One direction is to get samples from the true posterior, as in Markov chain Monte Carlo methods (Wenzel et al., 2020; Izmailov et al., 2021). Expectation propagation aims to minimize the reverse KL divergence to the true posterior (Teh et al., 2015; Li et al., 2015). These Bayesian methods, including variational inference, often suffer from high computational cost, and therefore hybrid methods were proposed. Stochastic weight averaging Gaussian (Maddox et al., 2019) forms a Gaussian distribution over parameters from the stochastic gradient descent trajectory. Dropout (Gal and Ghahramani, 2016) randomly sets weights to 0 to model the epistemic uncertainty. Deep ensembles (Lakshminarayanan et al., 2017) use ensembles of models learned with random initialization and shuffling of data points and then average the predictions. These methods implicitly perform approximate inference. In addition to these methods, there are also non-Bayesian methods to calibrate overconfident predictions; for example, temperature scaling (Guo et al., 2017) introduces a temperature parameter to anneal the predictive distribution to avoid high confidence. DirectUQ strikes a balance between simplicity and modelling power to enable simple training and Bayesian uncertainty quantification. ## 2 Direct Uncertainty Quantification In this section we describe our DirectUQ method in detail. Given a neural network parametrized by weights \(W\) and input \(x\), the output layer is \(z=f_{W}(x)\). In classification, \(z\) is a \(k\)-dimensional vector, where \(k\) is the number of classes. The probability of being class \(i\) is defined as \[p(y=i|z)=\text{softmax}(z)_{i}=\frac{\exp z_{i}}{\sum_{j}\exp z_{j}}. \tag{1}\] In regression, \(z=(m,l)\) is a 2-dimensional vector. We apply a function \(g\) on \(l\) that maps \(l\) to a positive real number. The probability of the output \(y\) is: \[p(y|z)=\mathcal{N}(y|m,l)=\frac{1}{\sqrt{2\pi g(l)}}\exp\left(-\frac{(y-m)^{2}}{2g(l)}\right). \tag{2}\] Traditional non-Bayesian deep learning learns the weights by minimizing \(-\log p(y|z)\) using stochastic gradient descent. We refer to this as SGD. By fixing the weights \(W\), SGD maps \(x\) to \(z\) deterministically, while Bayesian methods seek to map \(x\) to a distribution over \(z\). Variational inference puts a distribution over \(W\) and marginalizes it out to get a distribution over \(z\). According to the central limit theorem, with a sufficiently wide neural network, the marginal distribution of \(z\) is normal (Wu et al., 2019). DirectUQ pursues this in a direct manner. It has two sets of weights, \(W_{1}\) and \(W_{2}\) (with shared components), to model the mean and variance of \(z\). That is, \(\mu_{q}(x)=f_{W_{1}}(x)\), \(\sigma_{q}(x)=g(f_{W_{2}}(x))\), where \(g:\mathbb{R}\rightarrow\mathbb{R}^{+}\) maps the output to positive real numbers, as the variance is positive. Thus, \(q(z|x)=\mathcal{N}(z|\mu_{q}(x),\text{diag}(\sigma_{q}^{2}(x)))\), where \(\mu_{q}(x),\sigma_{q}^{2}(x)\) are vectors of the corresponding dimension. We will call \(q(z|x)\) the _variational predictive distribution_. Note that the prediction over \(y\) is given by \(\int q(z|x)p(y|z)dz\), which is different from the basic model. In regression, the basic model of Eq (2) is known as the mean-variance estimator (Kabir et al., 2018; Khosravi et al., 2011; Kendall and Gal, 2017). Applying DirectUQ to this model, we will have **four** outputs, two of which are the means of \(m\) and \(l\), and the other two are the variances of \(m\) and \(l\). The mean of \(l\) represents the heteroscedastic aleatoric uncertainty and the variances of \(m\) and \(l\) model the epistemic uncertainty, as the variances come from the marginalization of Bayesian posteriors. In classification, using Eq (1), the mean of \(z\) induces aleatoric uncertainty and the variance of \(z\) represents epistemic uncertainty. The standard Bayesian approach puts a prior on the weights \(W\). Instead, since DirectUQ models the distribution over \(z\), we put a prior over \(z\). We consider two options, a conditional prior \(p(z|x)\) and a simpler prior \(p(z)\). Both of these choices yield a valid ELBO using the same steps: \[\log p(y|x) \geq\mathbb{E}_{q(z|x)}\left[\log\frac{p(y,z|x)}{q(z|x)}\right]\] \[=\mathbb{E}_{q(z|x)}[\log p(y|z)]-\text{KL}(q(z|x)||p(z|x)). \tag{3}\] Eq (3) is defined for every \((x,y)\).
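To make the construction concrete before turning to the dataset-level objective, here is a minimal NumPy sketch (our own function names; softplus is one assumed choice of the positive link \(g\), and the prior is a fixed diagonal Gaussian \(p(z)\)) of the variational predictive distribution and a reparametrized Monte Carlo estimate of the per-example bound for classification.

```python
import numpy as np

def softplus(u):
    """A positive link g(.): numerically stable log(1 + exp(u))."""
    return np.log1p(np.exp(-np.abs(u))) + np.maximum(u, 0.0)

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def predictive_params(h, W1, W2):
    """q(z|x) = N(mu, diag(sigma^2)) from a shared representation h."""
    return h @ W1, softplus(h @ W2)  # mean and std of the output layer

def per_example_bound(mu, sigma, y, mu_p, sigma_p, M=8, rng=None):
    """Monte Carlo estimate of Eq (3): E_q[log p(y|z)] - KL(q || p)."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((M, mu.shape[0]))
    z = mu + sigma * eps                      # reparametrized samples
    exp_loglik = log_softmax(z)[:, y].mean()  # expected log likelihood
    kl = np.sum(np.log(sigma_p / sigma)       # KL between diagonal Gaussians
                + (sigma**2 + (mu - mu_p)**2) / (2 * sigma_p**2) - 0.5)
    return exp_loglik - kl
```

Training would maximize the sum of such bounds over the dataset or, equivalently, minimize the regularized loss of Eq (4) below with a coefficient \(\eta\) on the KL term.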
In addition to these methods, there are also non-Bayesian methods to calibrate the overconfident predictions; for example, temperature scaling (Guo et al., 2017) introduces a temperature parameter to anneal the predictive distribution to avoid high confidence. DirectUQ strikes a balance between simplicity and modelling power to enable simple training and Bayesian uncertainty quantification. ## 2 Direct Uncertainty Quantification In this section we describe our DirectUQ method in detail. Given a neural network parametrized by weights \(W\) and input \(x\), the output layer is \(z=f_{W}(x)\). In classification, \(z\) is a \(k\)-dimensional vector, where \(k\) is the number of classes. The probability of being class \(i\) is defined as \[p(y=i|z)=\text{softmax}(z)_{i}=\frac{\exp z_{i}}{\sum_{j}\exp z_{j}}. \tag{1}\] In regression, \(z=(m,l)\) is a 2-dimensional vector. We apply a function \(g\) on \(l\) that maps \(l\) to a positive real number. The probability of the output \(y\) is: \[p(y|z)=\mathcal{N}(y|m,l)=\frac{1}{\sqrt{2\pi g(l)}}\exp\left(-\frac{(y-m)^{2}}{2g(l)}\right). \tag{2}\] Traditional non-Bayesian deep learning learns weights by minimizing \(-\log p(y|z)\) using stochastic gradient descent. We refer to this as SGD. By fixing the weights \(W\), SGD maps \(x\) to \(z\) deterministically, while Bayesian methods seek to map \(x\) to a distribution over \(z\). Variational inference puts a distribution over \(W\) and marginalizes out to get a distribution over \(z\). According to the central limit theorem, with a sufficiently wide neural network, the marginal distribution of \(z\) is normal (Wu et al., 2019). DirectUQ pursues this in a direct manner. It has two sets of weights, \(W_{1}\) and \(W_{2}\) (with shared components), to model the mean and variance of \(z\). That is, \(\mu_{q}(x)=f_{W_{1}}(x)\), \(\sigma_{q}(x)=g(f_{W_{2}}(x))\), where \(g:\mathbb{R}\rightarrow\mathbb{R}^{+}\) maps the output to positive real numbers as the variance is positive. Thus, \(q(z|x)=\mathcal{N}(z|\mu_{q}(x),\text{diag}(\sigma_{q}^{2}(x)))\), where \(\mu_{q}(x),\sigma_{q}^{2}(x)\) are vectors of the corresponding dimension. We will call \(q(z|x)\) the _variational predictive distribution_. Note that the prediction over \(y\) is given by \(\int q(z|x)p(y|z)dz\), which is different from the basic model. In regression, the basic model of Eq (2) is known as the mean-variance estimator (Kabir et al., 2018; Khosravi et al., 2011; Kendall and Gal, 2017). Applying DirectUQ on this model, we will have **four** outputs, two of which are the means of \(m\) and \(l\), and the other two are the variances of \(m\) and \(l\). The mean of \(l\) represents the heteroscedastic aleatoric uncertainty and the variances of \(m\) and \(l\) model the epistemic uncertainty as the variances come from marginalization of Bayesian posteriors. In classification, using Eq (1), the mean of \(z\) induces aleatoric uncertainty and the variance of \(z\) represents epistemic uncertainty. The standard Bayesian approach puts a prior on the weights \(W\). Instead, since DirectUQ models the distribution over \(z\), we put a prior over \(z\). We consider two options, a conditional prior \(p(z|x)\) and a simpler prior \(p(z)\). Both of these choices yield a valid ELBO using the same steps: \[\log p(y|x) \geq\mathbb{E}_{q(z|x)}\left[\log\frac{p(y,z|x)}{q(z|x)}\right]\] \[=\mathbb{E}_{q(z|x)}[\log p(y|z)]-\text{KL}(q(z|x)||p(z|x)). \tag{3}\] Eq (3) is defined for every \((x,y)\).
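To make the setup concrete, the following is a minimal PyTorch sketch of a DirectUQ classifier together with a Monte Carlo estimate of the per-batch objective implied by Eq (3), using the simple input-independent prior \(p(z)=\mathcal{N}(0,I)\) discussed later in the paper. All class and function names, and the default values, are illustrative assumptions rather than the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectUQClassifier(nn.Module):
    """A shared trunk with two heads modeling q(z|x) = N(mu_q(x), diag(sigma_q^2(x)))."""
    def __init__(self, in_dim: int, hidden: int, k: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, k)   # mu_q(x)
        self.var_head = nn.Linear(hidden, k)    # pre-activation for sigma_q^2(x)

    def forward(self, x):
        h = self.trunk(x)
        # g(.) = softplus maps the head output to positive variances
        return self.mean_head(h), F.softplus(self.var_head(h)) + 1e-6

def directuq_objective(model, x, y, eta=1.0, n_samples=8):
    """Monte Carlo negative ELBO: E_q[-log p(y|z)] + eta * KL(q(z|x) || N(0, I))."""
    mu, var = model(x)
    nll = 0.0
    for _ in range(n_samples):
        z = mu + var.sqrt() * torch.randn_like(mu)  # reparametrization trick
        nll = nll + F.cross_entropy(z, y)           # -log softmax(z)_y, averaged over the batch
    nll = nll / n_samples
    # closed-form KL between the diagonal Gaussian q(z|x) and the standard normal prior
    kl = 0.5 * (var + mu.pow(2) - 1.0 - var.log()).sum(dim=1).mean()
    return nll + eta * kl
```

At test time, the predictive distribution \(\int q(z|x)p(y|z)dz\) can be approximated in the same way, by averaging softmax probabilities over samples from \(q(z|x)\).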
For a dataset \(\mathcal{D}=\{(x,y)\}\), we optimize \(W_{1}\) and \(W_{2}\) such that: \[\sum_{(x,y)\in\mathcal{D}}\left\{\mathbb{E}_{q(z|x)}[\log p(y|z)]-\text{KL}(q(z|x)||p(z|x))\right\}\] is maximized. We regard the negation of the first term \(\mathbb{E}_{q(z|x)}[-\log p(y|z)]\) as the loss term and treat \(\text{KL}(q(z|x)||p(z|x))\) as a regularizer. In practice, we put a coefficient \(\eta\) on the regularizer, as is often done in variational approximations (Higgins et al., 2017). Then, from a regularized loss minimization view, our objective becomes: \[\min_{W_{1},W_{2}}\sum_{(x,y)\in\mathcal{D}}\left\{\mathbb{E}_{q(z|x)}[-\log p(y|z)]+\eta\text{KL}(q(z|x)||p(z|x))\right\}. \tag{4}\] In most cases, the loss term is intractable, so we use Monte Carlo samples to approximate it, where implementations can use reparametrization to reduce the variance in gradients: \[\mathbb{E}_{q(z|x)}[-\log p(y|z)]\approx\frac{1}{M}\sum_{m=1}^{M}-\log p(y|z^{(m)}), \tag{5}\] where \(z^{(m)}\sim q(z|x)\). We can now discuss the relation to Kendall and Gal (2017) in more detail. This work also draws samples in the output space but replaces the loss term in Eq (5) with the cross entropy loss, \(-\log\frac{1}{M}\sum_{m}p(y|z^{(m)})\). Furthermore, they use dropout and their objective has no explicit regularization, whereas our formulation yields a valid ELBO. ## 3 Rademacher Complexity of DirectUQ In this section we provide generalization bounds for DirectUQ through Rademacher complexity. We need to make the following assumptions. **Assumption 3.1**.: \(\log p(y|z)\) is \(L_{0}\)-Lipschitz in \(z\), i.e., \(\log p(y|z)-\log p(y|z^{\prime})\leq L_{0}\|z-z^{\prime}\|_{2}\). **Assumption 3.2**.: The link function \(g\) is \(L_{1}\)-Lipschitz. We show in the Appendix that these assumptions hold for classification and with a smoothed loss for regression. With the two assumptions, we derive the main result: **Theorem 3.3**.: _Let \(\mathcal{H}\) be the set of functions that can be represented with neural networks with parameter space \(\mathcal{W}\), \(\mathcal{H}=\{f_{W}(\cdot)|W\in\mathcal{W}\}\). DirectUQ has two components, so the DirectUQ hypothesis class is \(\mathcal{H}\times\mathcal{H}=\{(f_{W_{1}}(\cdot),f_{W_{2}}(\cdot))\mid W=(W_{1},W_{2}),W_{1},W_{2}\in\mathcal{W}\}\). Let \(l\) be the loss function for DirectUQ, \(l(W,(x,y))=\mathbb{E}_{q_{W}(z|x)}[-\log p(y|z)]\). Then the Rademacher complexity of DirectUQ is bounded as \(R(l\circ(\mathcal{H}\times\mathcal{H})\circ S)\leq(L_{0}(1+L_{1})K)\cdot R(\mathcal{H}\circ S)\), where \(K\) is the dimension of \(z\)._ Proof.: We show that the loss is Lipschitz in \(f_{W_{1}}(x)\) and \(f_{W_{2}}(x)\). Fix any \(x\), and \(W,W^{\prime}\). We denote the mean and standard deviation of \(q_{W}(z|x)\) by \(\mu\) and \(s\) and the same for \(q_{W^{\prime}}(z|x)\).
\[\mathbb{E}_{q_{W}(z|x)}[-\log p(y|z)]-\mathbb{E}_{q_{W^{\prime}}(z|x)}[-\log p(y|z)]\] \[= \mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}[\log p(y|\mu^{\prime}+\epsilon\cdot s^{\prime})-\log p(y|\mu+\epsilon\cdot s)]\] \[\leq \mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}\left[L_{0}\|(\mu-\mu^{\prime})+\epsilon\cdot(s-s^{\prime})\|_{2}\right]\] \[\leq L_{0}\|\mu-\mu^{\prime}\|_{2}+L_{0}\mathbb{E}_{\epsilon}[\sqrt{\epsilon^{2}(s-s^{\prime})^{2}}]\] \[\leq L_{0}\|\mu-\mu^{\prime}\|_{2}+L_{0}\sqrt{\mathbb{E}_{\epsilon}[\epsilon^{2}(s-s^{\prime})^{2}]}\] \[= L_{0}(\|\mu-\mu^{\prime}\|_{2}+\|s-s^{\prime}\|_{2})\] \[\leq L_{0}(\|\mu-\mu^{\prime}\|_{1}+\|s-s^{\prime}\|_{1})\] where the third equation uses the Lipschitz assumption and the fifth uses Jensen's inequality. In the above equations the product and power are performed element-wise, and the derivation holds for multivariate \(z\). The loss function is Lipschitz in \(\mu\), which is exactly \(f_{W_{1}}(x)\). Further, \(s\) is \(L_{1}\)-Lipschitz in the logit \(f_{W_{2}}(x)\); thus, the loss function is \((L_{0}(1+L_{1}))\)-Lipschitz in \(f_{W_{1}}(x)\) and \(f_{W_{2}}(x)\). The theorem follows from Rademacher complexity bounds for bivariate functions (Lemma A.1) and for functions over vectors (Corollary A.2). Hence the Rademacher complexity of DirectUQ is bounded through the Rademacher complexity of deterministic neural networks. This shows one advantage of DirectUQ: due to its simplicity, it is more amenable to analysis than standard VI. The Rademacher complexity for neural networks is \(O\left(\frac{B_{W}B_{x}}{\sqrt{N}}\right)\) (Golowich et al., 2018), where \(B_{W}\) bounds the norm of the weights and \(B_{x}\) bounds the input. The Rademacher complexity of DirectUQ is of the same order. ## 4 Comparison of DirectUQ and VI DirectUQ is inspired by deterministic variational inference and it greatly reduces the computational cost. We explore whether DirectUQ can produce exactly the same predictive distribution as VI. In this section, we show that this is the case for linear models but that for deep models DirectUQ is less powerful. We first introduce the setting of linear models. Let the parameter be \(\theta\), then the model is defined as: \[y|x,\theta\sim p(y|\theta^{\top}x). \tag{6}\] For example, \(p(y|\theta^{\top}x)=\mathcal{N}(y|\theta^{\top}x,\frac{1}{\beta})\) where \(\beta\) is a constant for Bayesian linear regression; and \(p(y=1|\theta^{\top}x)=\frac{1}{1+\exp(-\theta^{\top}x)}\) for Bayesian binary classification. The standard approach specifies the prior of \(\theta\) to be \(p(\theta)=\mathcal{N}(\theta|m_{0},S_{0})\), and uses \(q(\theta)=\mathcal{N}(\theta|m,S)\). Then the ELBO objective for linear models with a dataset \((X_{N},Y_{N})\) of size \(N\), where \(X_{N}=(x_{1},x_{2},\ldots,x_{N})\in\mathbb{R}^{d\times N}\) and \(Y_{N}=(y_{1},y_{2},\ldots,y_{N})\in\mathbb{R}^{N}\), is: \[\sum_{i=1}^{N}\mathbb{E}_{q(\theta)}[\log p(y_{i}|\theta^{\top}x_{i})]-\text{KL}(q(\theta)||p(\theta))\] \[= \sum_{i=1}^{N}\mathbb{E}_{q(\theta)}[\log p(y_{i}|\theta^{\top}x_{i})]-\frac{1}{2}[\text{tr}(S_{0}^{-1}S)-\log|S_{0}^{-1}S|]\] \[-\frac{1}{2}(m-m_{0})^{\top}S_{0}^{-1}(m-m_{0}). \tag{7}\] As the following theorem shows, if we use a conditional correlated prior and a variational posterior that correlates data points, then in the linear case DirectUQ can recover the ELBO and the solution of VI.
**Theorem 4.1**.: _Let \(p(z|X_{N})=\mathcal{N}(z|m_{0}^{\top}X_{N},X_{N}^{\top}S_{0}X_{N})\) be a correlated and data-specific prior (which means that for different data \(x\), we have a different prior over \(z\)), and \(q(z|x)=\mathcal{N}(z|\theta_{1}^{\top}x,x^{\top}V_{2}x)\) be the variational predictive distribution of \(z\), where \(\theta_{1}\) and \(V_{2}\) are the parameters to be optimized. Then the DirectUQ objective is equivalent to the ELBO objective, implying identical predictive distributions._ Proof.: The DirectUQ objective is: \[\sum_{i=1}^{N}\left\{\mathbb{E}_{q(z|x_{i})}[\log p(y_{i}|z)]\right\}-\text{KL}(q(z|X_{N})||p(z|X_{N}))\] \[=\sum_{i=1}^{N}\left\{\mathbb{E}_{q(z|x_{i})}[\log p(y_{i}|z)]\right\}\] \[-\frac{1}{2}\text{tr}((X_{N}^{\top}S_{0}X_{N})^{-1}(X_{N}^{\top}V_{2}X_{N}))\] \[+\frac{1}{2}\log|(X_{N}^{\top}S_{0}X_{N})^{-1}(X_{N}^{\top}V_{2}X_{N})|\] \[-\frac{1}{2}(\theta_{1}^{\top}X_{N}-m_{0}^{\top}X_{N})^{\top}(X_{N}^{\top}S_{0}X_{N})^{-1}(\theta_{1}^{\top}X_{N}-m_{0}^{\top}X_{N}). \tag{8}\] Assume \(N>d\). First consider the loss term. Let \(L\) be the Cholesky decomposition of \(V_{2}\), i.e. \(V_{2}=LL^{\top}\). By reparametrization, for \(\epsilon\sim\mathcal{N}(0,I_{d})\), \(\theta_{1}^{\top}x_{i}+x_{i}^{\top}L\epsilon\sim\mathcal{N}(\theta_{1}^{\top}x_{i},x_{i}^{\top}LL^{\top}x_{i})\) and thus \[\mathbb{E}_{q(z|x_{i})}[\log p(y_{i}|z)] =\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I_{d})}[\log p(y_{i}|\theta_{1}^{\top}x_{i}+x_{i}^{\top}L\epsilon)]\] \[=\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I_{d})}[\log p(y_{i}|(\theta_{1}+L\epsilon)^{\top}x_{i})]\] \[=\mathbb{E}_{\theta\sim\mathcal{N}(\theta_{1},LL^{\top})}[\log p(y_{i}|\theta^{\top}x_{i})], \tag{9}\] where the last equality uses reparametrization in reverse order. By aligning \(\theta_{1}=m\) and \(V_{2}=LL^{\top}=S\), we recognize that Eq (9) is exactly the loss term in Eq (7). Thus the low-dimensional posterior on \(z\) yields the same loss term as the high-dimensional posterior over \(W\). For the regularization, we use the pseudo-inverse \[(X_{N}^{\top}S_{0}X_{N})^{-1}=X_{N}^{\top}(X_{N}X_{N}^{\top})^{-1}S_{0}^{-1}(X_{N}X_{N}^{\top})^{-1}X_{N}\] and the same for \(V_{2}\). Then the regularization term in Eq (8) can be simplified to: \[-\text{KL}(q(z|X_{N})||p(z|X_{N}))\] \[= -\frac{1}{2}\text{tr}(S_{0}^{-1}V_{2})+\frac{1}{2}\log|S_{0}^{-1}V_{2}|\] \[-\frac{1}{2}(\theta_{1}-m_{0})^{\top}S_{0}^{-1}(\theta_{1}-m_{0}). \tag{10}\] By aligning \(\theta_{1}=m,V_{2}=S\), we can see that Eq (10) is exactly the same as the regularizer in Eq (7). However, as the next theorem shows, for the non-linear case even if we ignore the prior we may not be able to recover the loss term exactly. **Theorem 4.2**.: _Given a neural network \(f_{W}\) parametrized by \(W\) and a mean-field Gaussian distribution \(q(W)\) over \(W\), there may not exist a set of parameters \(\widetilde{W}\) such that for all input \(x\) we have \(\mathbb{E}_{q(W)}[f_{W}(x)]=f_{\widetilde{W}}(x)\)._ Proof Sketch.: Consider the neural network with only one hidden unit and only one input. No matter what \(\widetilde{W}\) we choose, there exists some region of \(x\) such that \(f_{\widetilde{W}}(x)=0\) due to the non-linear activation. However, \(\mathbb{E}_{q(W)}[f_{W}(x)]\) will include two segments of non-zero functions due to the properties of the truncated normal distribution; thus, there is no \(\widetilde{W}\) that can perfectly recover \(\mathbb{E}_{q(W)}[f_{W}(x)]\).
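As a quick numerical illustration of Theorem 4.2 (our own check, not part of the original paper), a one-hidden-unit ReLU network with a Gaussian weight has a marginal mean that is strictly positive on both sides of the origin, which no single deterministic unit \(f_{\widetilde{W}}(x)=\mathrm{relu}(\widetilde{w}x)\) can reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(loc=1.0, scale=1.0, size=100_000)  # q(W) = N(1, 1)
for x in (-2.0, -0.5, 0.5, 2.0):
    mc_mean = np.maximum(w * x, 0.0).mean()       # Monte Carlo estimate of E_q[relu(w x)]
    print(f"x = {x:+.1f}   E_q[relu(w x)] ~= {mc_mean:.4f}")
# The estimate is positive for both x < 0 and x > 0, whereas any relu(w_tilde * x)
# is identically zero on one side of the origin.
```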
The results of this section point to the potential and limitations of DirectUQ in terms of expressiveness relative to standard VI. In practice, computing a correlated and data-specific prior \(p(z|x)\) is complex, and tuning its hyperparameters would be challenging. Hence, for a practical algorithm we propose to use a simple prior \(p(z)\) independent of \(x\). In addition, to reduce computational complexity, we do not learn a full covariance matrix and focus on the diagonal approximation. This limits expressive power but enables fast training of ensembles of DirectUQ. ## 5 Collapsed VI Applied to DUQ Bayesian methods are often sensitive to the choice of prior parameters. To overcome this, Wu et al. (2019) used empirical Bayes (EB) to select the value of the prior parameters, and Tomczak et al. (2021) proposed collapsed variational inference, which defined a hierarchical model and performed inference on the prior parameters as well. Empirical Bayes can be regarded as a special case of collapsed variational inference. In this section, we show how this idea is applicable in DirectUQ. In addition to \(z\), we model the prior mean \(\mu_{p}\) and variance \(\sigma_{p}^{2}\) as Bayesian parameters. Now the prior becomes \(p(z|\mu_{p},\sigma_{p}^{2})p(\mu_{p},\sigma_{p}^{2})\) and the variational distribution is \(q(z|x)q(\mu_{p},\sigma_{p}^{2})\). Then the objective becomes: \[\log p(y|x)\geq\mathbb{E}_{q(z|x)q(\mu_{p},\sigma_{p}^{2})}\left[\log\frac{p(y,z,\mu_{p},\sigma_{p}^{2}|x)}{q(z|x)q(\mu_{p},\sigma_{p}^{2})}\right]\] \[=\mathbb{E}_{q(z|x)}[\log p(y|z)]-\mathbb{E}_{q(\mu_{p},\sigma_{p}^{2})}[\text{KL}(q(z|x)||p(z|\mu_{p},\sigma_{p}^{2}))]\] \[-\text{KL}(q(\mu_{p},\sigma_{p}^{2})||p(\mu_{p},\sigma_{p}^{2})). \tag{11}\] Similar to Eq (4), we treat the first term as a loss term and the other two terms as a regularizer along with a coefficient \(\eta\) and aggregate over all data. Since the loss term does not contain \(\mu_{p}\) and \(\sigma_{p}^{2}\), we can get the optimal \(q^{*}(\mu_{p},\sigma_{p}^{2})\) by optimizing the regularizer, and the choice of \(\eta\) will not affect \(q^{*}(\mu_{p},\sigma_{p}^{2})\). Then we can plug the value of \(q^{*}\) into Eq (11). We next show how to compute \(q^{*}(\mu_{p},\sigma_{p}^{2})\) and the final collapsed variational inference objective. The derivations are similar to the ones in (Tomczak et al., 2021) but they are applied on \(z\), not on \(W\). In the following, we use \(K\) to denote the dimension of \(z\). ### Learn mean, fix variance Let \(p(z|\mu_{p})=\mathcal{N}(z|\mu_{p},\gamma I)\), \(p(\mu_{p})=\mathcal{N}(\mu_{p}|0,\alpha I)\). Then \(q^{*}(\mu_{p}|x)\) is \[\operatorname*{arg\,min}_{q(\mu_{p})}\mathbb{E}_{q(\mu_{p})}[\text{KL}(q(z|x)||p(z|\mu_{p}))]+\text{KL}(q(\mu_{p})||p(\mu_{p})),\] and the optimal \(q^{*}(\mu_{p}|x)\) can be computed as: \[\log q^{*}(\mu_{p}|x)\propto-\frac{(\mu_{q}(x)-\mu_{p})^{\top}(\mu_{q}(x)-\mu_{p})}{2\gamma}-\frac{\mu_{p}^{\top}\mu_{p}}{2\alpha},\] and \(q^{*}(\mu_{p}|x)=\mathcal{N}(\mu_{p}|\frac{\alpha}{\alpha+\gamma}\mu_{q}(x),\frac{\alpha\gamma}{\alpha+\gamma})\). Notice that, unlike the prior, \(q^{*}(\mu_{p}|x)\) depends on \(x\). If we put \(q^{*}\) back in the regularizer of Eq (11), the regularizer becomes: \[\frac{1}{2\gamma}\left[1^{\top}\sigma_{q}^{2}(x)+\frac{\gamma}{\gamma+\alpha}\mu_{q}(x)^{\top}\mu_{q}(x)\right]-\frac{1}{2}1^{\top}\log\sigma_{q}^{2}(x)+\frac{K}{2}\log(\gamma+\alpha)-\frac{K}{2}. \tag{12}\]
As in (Tomczak et al., 2021), one thing to observe from Eq (12) is that it puts a factor \(\frac{\gamma}{\gamma+\alpha}<1\) in front of \(\mu_{q}(x)^{\top}\mu_{q}(x)\), which weakens the regularization on \(\mu_{q}(x)\). We refer to this method as "duq-mean". ### Learn both mean and variance Let \(p(z|\mu_{p},\sigma_{p}^{2})=\mathcal{N}(z|\mu_{p},\sigma_{p}^{2})\), \(p(\mu_{p})=\mathcal{N}(\mu_{p}|0,\frac{1}{t}\sigma_{p}^{2})\), \(p(\sigma_{p}^{2})=\mathcal{IG}(\sigma_{p}^{2}|\alpha,\beta)\), where \(\mathcal{IG}\) indicates the inverse Gamma distribution. The posterior of \(\mu_{p}\) is Gaussian and the posterior of \(\sigma_{p}^{2}\) is inverse Gamma. In this case we can show that \(q^{*}(\mu_{p}|x)=\mathcal{N}(\mu_{p}|\frac{1}{t+1}\mu_{q}(x),\frac{1}{(1+t)}\sigma_{p}^{2})\) and \(q^{*}(\sigma_{p}^{2}|x)=\mathcal{IG}(\sigma_{p}^{2}|(\alpha+\frac{1}{2})1,\beta+\frac{t}{2(t+1)}\mu_{q}(x)^{2}+\frac{1}{2}\sigma_{q}^{2}(x))\) and the regularizer becomes \[(\alpha+\frac{1}{2})1^{\top}\log\left[\beta 1+\frac{t}{2(1+t)}\mu_{q}(x)^{2}+\frac{1}{2}\sigma_{q}^{2}(x)\right]-\frac{1}{2}1^{\top}\log\sigma_{q}^{2}(x). \tag{13}\] We refer to this method as "duq-mv". ### Empirical Bayes Let \(p(\sigma_{p}^{2})=\mathcal{IG}(\sigma_{p}^{2}|\alpha,\beta)\), \(p(z|\sigma_{p}^{2})=\mathcal{N}(z|0,\sigma_{p}^{2})\) and let \(q(\sigma_{p}^{2})\) be a delta distribution \(\delta(s^{*}(x))\). Then the regularizer of Eq (11) becomes: \[\text{KL}(q(z|x)||p(z|\sigma_{p}^{2}))-\log p(\sigma_{p}^{2}). \tag{15}\] By minimizing this objective we obtain the optimal value \(s^{*}(x)=\frac{\mu_{q}(x)^{\top}\mu_{q}(x)+1^{\top}\sigma_{q}^{2}(x)+2\beta}{K+2\alpha+2}\). Plugging this value into the KL term we obtain: \[\frac{1}{2}\left[K\log\frac{\mu_{q}(x)^{\top}\mu_{q}(x)+1^{\top}\sigma_{q}^{2}(x)+2\beta}{K+2\alpha+2}-1^{\top}\log|\sigma_{q}^{2}(x)|\right]\] \[-\frac{K}{2}+\frac{1}{2}\frac{(K+2\alpha+2)(\mu_{q}(x)^{\top}\mu_{q}(x)+1^{\top}\sigma_{q}^{2}(x))}{\mu_{q}(x)^{\top}\mu_{q}(x)+1^{\top}\sigma_{q}^{2}(x)+2\beta}. \tag{16}\] Following (Wu et al., 2019), as shown in Appendix B.3, including the prior term in the regularizer reduces its complexity and this harms performance in practice. Hence, we use Eq (16) as the regularizer. We call this method "duq-eb". ### L2 regularization Notice that all regularizations above are regularizing the variational predictive distribution \(q(z|x)\), not the parameters of the neural network. By analogy to MAP solutions of Bayesian neural networks, we can also regularize the weights directly by replacing the regularizers with \(\frac{\eta}{N}\|W\|_{2}\) in the objective, even though our posterior is on \(z\). We include this in our experiments and call it "duq-l2".
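For concreteness, the "duq-mean" regularizer of Eq (12) amounts to only a few lines of code on PyTorch tensors. The sketch below is illustrative (hypothetical function name and default prior variances \(\gamma\), \(\alpha\)), not the authors' released implementation:

```python
import math

def duq_mean_regularizer(mu_q, var_q, gamma=1.0, alpha=1.0):
    """Collapsed "duq-mean" regularizer of Eq (12), averaged over the batch.
    mu_q, var_q: (batch, K) tensors from the two DirectUQ heads."""
    K = mu_q.shape[1]
    # (1 / 2*gamma) * [ 1^T sigma_q^2 + gamma/(gamma+alpha) * mu_q^T mu_q ]
    quad = var_q.sum(dim=1) + (gamma / (gamma + alpha)) * (mu_q ** 2).sum(dim=1)
    reg = quad / (2.0 * gamma) - 0.5 * var_q.log().sum(dim=1)
    reg = reg + 0.5 * K * math.log(gamma + alpha) - 0.5 * K
    return reg.mean()
```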
## 6 Experiments In this section, we compare the empirical performance of DirectUQ with VI and hybrid methods related to stochastic gradient descent, as well as dropout (Gal and Ghahramani, 2016). VI candidates include the VI algorithm (Blundell et al., 2015) with fixed prior parameters (called vi-naive below), and other variations from collapsed variational inference (Tomczak et al., 2021) and empirical Bayes (Wu et al., 2019). The hybrid methods include the non-Bayesian network optimized by stochastic gradient descent (marked as SGD below), as well as stochastic weight averaging (SWA, which uses the average of the SGD trajectory as the final weights) from Izmailov et al. (2018) and SWA-Gaussian (SWAG, which uses the SGD trajectory to form a Gaussian distribution over the neural network weight space) from Maddox et al. (2019). These methods are called hybrid methods because they do not have an explicit prior distribution over parameters but they can be interpreted as Bayesian methods. In addition to comparing the single-model performance of these methods, we compare their ensembles. The performance of each method is evaluated by the log loss on the test datasets and by two uncertainty quantification measures on shifted and out-of-distribution datasets. ### Experiment Details We examine our methods on small datasets using a fully connected single-layer neural network and on large image datasets using more complex neural network structures. DirectUQ is implemented such that \(W_{1}\) and \(W_{2}\) share all weights except the last layer. For the first set of experiments, we pick four regression datasets and four classification datasets from the UCI repository (Dua and Graff, 2017), and use a single-layer neural network with 50 hidden units. We split every dataset into 80%/10%/10% for training, validation and testing and perform standard normalization so that we do not need to tune hyperparameters for every dataset. For VI and DirectUQ, we select \(\eta\in\{0.1,0.3,0.5,0.7,0.9,1.0,3.0,5.0,7.5,10.0\}\) based on the validation log loss; for SGD, SWA and SWAG, we select weight decay in the same range as \(\eta\). Since the neural network is small, we do not include dropout in the comparison. For the second set of experiments, we use four image datasets, CIFAR10, CIFAR100, SVHN\({}^{1}\), and STL10, together with two types of neural networks, AlexNet (Krizhevsky et al., 2012) and PreResNet20 (He et al., 2016), and fix \(\eta=0.1\) (except in "duq-l2" where we use \(\eta=5.0\)) and the dropout rate to 0.1. Note that using \(\eta=1\) in VI yields poor predictions (see Appendix) and using \(\eta=0.1\) provides a stronger baseline. In all experiments, we use the Adam optimizer for VI and DirectUQ. For hybrid methods, we adapt the code from previous work (Izmailov et al., 2018; Maddox et al., 2019) and use the stochastic gradient descent optimizer for SGD, SWA and SWAG. Ensemble models use 5 runs with different random initializations and batch orders. Other details such as hyperparameters, learning rate and number of epochs are given in the Appendix. Footnote 1: Dropout for SVHN on AlexNet fails and gives trivial uniform predictions, so we did not include it in the results.
Figure 1: Uncertainty Quantification on AlexNet
### Run Time Ignoring the data preprocessing time, we compare the run time of training 1 epoch of VI, DirectUQ and SGD. In Table 1 we show the mean and standard deviation of 10 runs of these methods. Different regularizers have a small effect in terms of run time, so we only show that of vi-naive for VI and duq-mean for DirectUQ. As shown in Table 1, DirectUQ is much faster than VI and is slightly slower than SGD. This is as expected because if we set the sample size to be \(M\), then SGD only needs 1 forward pass without sampling per batch, DirectUQ needs 1 forward pass and \(M\) samples of size \(K\) per batch, and VI needs \(M\) samples of the neural network size and \(M\) forward passes per batch.
In addition, as the sample size \(M\) or the number of batches increases, the advantage of DirectUQ over VI expands as the increased forward passes will take more time, but the advantage of SGD over DirectUQ remains the same as sampling in a smaller space can be done in nearly constant time. ### Uncertainty Quantification for Shifted and Out-of-distribution Data In this section we examine whether DirectUQ can capture the uncertainty in predictions for shifted and out-of-distribution (OOD) data. For uncertainty under data shift, STL10 and CIFAR10 can be treated as a shifted dataset for each other, as the image size of STL10 is different from CIFAR10, and STL10 shares some classes with CIFAR10 so the labels are meaningful. Thus we can use the expected calibration error (ECE, (Naeini et al., 2015)) to measure the uncertainty. That is, we separate data into bins of the same size according to the confidence level, calculate the difference between the accuracy and the averaged confidence in each bin, and then average the differences among all bins; see Ovadia et al. (2019) for details. We select the number of bins to be 20. For uncertainty under OOD data, we choose the SVHN dataset as an OOD dataset for CIFAR10, as SVHN contains images of digits and the labels of SVHN are not meaningful in the context of CIFAR10. Thus, we cannot compute the accuracy. Instead, we use the entropy to measure the uncertainty. For the OOD data, we want our model to be as uncertain as possible and this implies high entropy in the predictive distribution. An alternative way to quantify the uncertainty is count-confidence plots, i.e., separate data into bins of the same confidence interval, and then plot the number of data points of each bin as a function of the averaged confidence score. We include the count-confidence plots in the Appendix, as it is not easy to compare multiple methods visually in this manner. Instead, we summarize each count-confidence plot as a single number using the averaged entropy over the entire dataset.
Figure 2: Test log loss on UCI regression datasets. Each dot with the error bar shows the mean and standard deviation of 5 independent runs. The standard deviations for some methods are very small.
Figure 3: Test log loss of four image datasets on AlexNet. Each dot with error bar represents the mean and standard deviation of 5 independent runs and the triangle represents the ensemble of each method that aggregates these 5 runs for prediction (same for Figure 8). The standard deviations for some VI and DUQ methods are very small.
Figure 1(a) and 1(d) show the ECE of each method under data shift. In some cases single-model DirectUQ performs less well than VI. But ensembles of DirectUQ are better than VI, ensembles of SGD, and Dropout, and they are even competitive with ensembles of VI. The remaining plots in Figure 1 show the entropy of the predictive distribution on OOD data for all methods. Here, single-model DirectUQ outperforms SGD and Dropout, and ensembles of DirectUQ outperform VI and are even competitive with ensembles of VI. Results for PreResNet20 are similar and they are included in the Appendix (Figure 4).
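Both uncertainty measures are straightforward to compute from the predicted class probabilities. The sketch below (hypothetical helper names) implements the equal-size-bin ECE and the averaged predictive entropy as described above; the exact binning used in the paper's code may differ in detail:

```python
import numpy as np

def ece_equal_size_bins(probs, labels, n_bins=20):
    """ECE with bins of equal size: sort by confidence, split into n_bins groups
    of (roughly) equal count, and average |accuracy - mean confidence| over bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    order = np.argsort(conf)
    gaps = [abs((pred[idx] == labels[idx]).mean() - conf[idx].mean())
            for idx in np.array_split(order, n_bins)]
    return float(np.mean(gaps))

def mean_predictive_entropy(probs, eps=1e-12):
    """Averaged entropy of the predictive distribution; higher is better on OOD data."""
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())
```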
### In-distribution Log Loss Figure 2 and Figure 7 (in Appendix) compare VI, DirectUQ and hybrid methods on small datasets. We omit the naive version of DirectUQ as it produces poor results. For regression, duq-mean and duq-mv both perform well and they are better than VI and the hybrid methods in three of four datasets. For classification, duq-l2, i.e., DirectUQ with L2 regularization, performs the best among all DirectUQ methods; it is comparable to all VI methods and is better than the hybrid methods in three out of four datasets. Figure 3 and Figure 8 (in Appendix) compare all methods on large datasets. First, we observe that using collapsed variational inference in VI does not significantly improve the performance, unlike what is shown in Tomczak et al. (2021). This is because we use \(\eta=0.1\), which yields better performance, while Tomczak et al. (2021) use \(\eta=1\). Second, we observe that duq-mean is the best among all DirectUQ methods; it is worse than VI but better than SGD. SWAG is slightly better in 2 cases but it is significantly worse in other cases. Dropout is not stable across different datasets. Third, we observe that for all methods, ensembles perform better than single models. The ensemble of duq-mean is competitive with VI. Thus, ensembles of DirectUQ can help alleviate the limitation of the expressiveness of DirectUQ. Ensembles of dropout also perform well. Last but not least, we observe that the ensemble of VI performs the best, although optimizing each model in the ensemble is slow. Ensembles of VI have not been well studied in the literature and our experiments highlight their potential strong performance. Overall, ensembles of DirectUQ are slightly worse but competitive with single-model VI and ensembles of Dropout in terms of in-distribution log loss. On the other hand, they provide better uncertainty quantification for out-of-distribution data, which is competitive with ensembles of VI. Among variants, DirectUQ with the learn-mean regularizer (duq-mean and ens-duq-mean) provides the best overall performance. ## 7 Conclusion In Bayesian neural networks, the distribution of the last layer directly affects the predictive distribution. Motivated by this fact, we proposed direct uncertainty quantification, DirectUQ, that uses a neural network to directly learn the mean and variance of the last layer. We showed that DirectUQ can match the expressive power of VI in linear cases with a strong prior but that in general it provides a less expressive model. On the other hand, the simplicity of the model enables fast training and facilitates convergence analysis through Rademacher bounds. In addition, DirectUQ can be derived as a non-standard variational lower bound, which provides an approximation for the last layer. This connection allowed us to derive better regularizations for DirectUQ by using collapsed variational inference over a hierarchical prior. Empirical evaluation highlighted the strong performance of ensembles of VI (when using weak regularization \(\eta=0.1\) instead of 1), albeit at the cost of high run time. Ensembles of DirectUQ are competitive with VI and other methods in terms of in-distribution loss, they outperform VI for out-of-distribution data, and are even competitive with ensembles of VI. Hence DirectUQ gives a new attractive approach for approximate inference in Bayesian models. The efficiency of DirectUQ also means faster test-time predictions, which can be important when deploying Bayesian models for real-time applications. In future work it would be interesting to explore the complexity-performance tradeoff provided by DirectUQ, and the connections to variational inference in functional space that induces complex priors.
\begin{table} \begin{tabular}{|c|c c c c|} \hline dataset & CIFAR10 & CIFAR100 & SVHN & STL10 \\ \hline size & 50000 & 50000 & 73257 & 500 \\ \hline VI & \(8.51\pm 0.41\) & \(8.27\pm 0.40\) & \(11.56\pm 0.39\) & \(1.75\pm 0.41\) \\ DirectUQ & \(2.18\pm 0.39\) & \(2.17\pm 0.43\) & \(2.72\pm 0.38\) & \(1.16\pm 0.40\) \\ SGD & \(1.97\pm 0.41\) & \(1.99\pm 0.43\) & \(2.46\pm 0.40\) & \(1.12\pm 0.38\) \\ \hline \end{tabular} \end{table} Table 1: Running time for training 1 epoch with batch size 512, AlexNet ## Acknowledgements This work was partly supported by NSF under grant IIS-1906694. Some of the experiments in this paper were run on the Big Red computing system at Indiana University, supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
2302.00237
Bridging Physics-Informed Neural Networks with Reinforcement Learning: Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO)
This paper introduces the Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO) algorithm into reinforcement learning. The Hamilton-Jacobi-Bellman (HJB) equation is used in control theory to evaluate the optimality of the value function. Our work combines the HJB equation with reinforcement learning in continuous state and action spaces to improve the training of the value network. We treat the value network as a Physics-Informed Neural Network (PINN) to solve for the HJB equation by computing its derivatives with respect to its inputs exactly. The Proximal Policy Optimization (PPO)-Clipped algorithm is augmented with this implementation as it uses a value network to compute the objective function for its policy network. The HJBPPO algorithm shows improved performance compared to PPO on the MuJoCo environments.
Amartya Mukherjee, Jun Liu
2023-02-01T04:33:06Z
http://arxiv.org/abs/2302.00237v1
# Bridging Physics-Informed Neural Networks with Reinforcement Learning: Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO) ###### Abstract This paper introduces the Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO) algorithm into reinforcement learning. The Hamilton-Jacobi-Bellman (HJB) equation is used in control theory to evaluate the optimality of the value function. Our work combines the HJB equation with reinforcement learning in continuous state and action spaces to improve the training of the value network. We treat the value network as a Physics-Informed Neural Network (PINN) to solve for the HJB equation by computing its derivatives with respect to its inputs exactly. The Proximal Policy Optimization (PPO)-Clipped algorithm is augmented with this implementation as it uses a value network to compute the objective function for its policy network. The HJBPPO algorithm shows improved performance compared to PPO on the MuJoCo environments. **keywords:** Continuous-Time Reinforcement Learning, Physics-Informed Neural Networks, Proximal Policy Optimization, Hamilton-Jacobi-Bellman Equation ## 1 Introduction In recent years, there has been a growing interest in Reinforcement Learning (RL) for continuous control problems. RL has shown promising results in environments with unknown dynamics through a balance of exploration in the environment and exploitation of the learned policies. Since the advent of REINFORCE with Baseline, the value network in RL algorithms has been shown to be useful for finding optimal policies as a critic network [21]. This value network continues to be used in state-of-the-art RL algorithms today. In discrete-time RL, the value function estimates returns from a given state as a sum of the returns over time steps. This value function is obtained by solving the Bellman Optimality Equation. On the other hand, in continuous-time RL, the value function estimates returns from a given state as an integral over time. This value function is obtained by solving a partial differential equation (PDE) known as the Hamilton-Jacobi-Bellman (HJB) equation [12]. Both equations are difficult to solve analytically and numerically, and therefore the RL agent must explore the environment and make successive estimations. Currently existing algorithms in the RL literature such as Proximal Policy Optimization (PPO) aim to update the value function using the Bellman Optimality Equation so that it estimates the discrete-time returns for each state. However, we discovered that this value function, when trained on MuJoCo environments, does not show convergence towards the optimal value function as described by the HJB equation (see Figure 4). This shows that information is lost when the value function is trained using discrete time steps rather than continuous time. The introduction of physics-informed neural networks (PINNs) by [17] has led to significant advancements in scientific machine learning. PINNs leverage auto-differentiation to compute derivatives of neural networks with respect to their inputs and model parameters exactly. This enables the laws of physics (described by ODEs or PDEs) governing the dataset of interest to act as a regularization term for the neural network. As a result, PINNs outperform regular neural networks on such datasets by taking advantage of the underlying physics of the data. To the best of our knowledge, this paper is the first to examine the intersection between PINNs and RL.
In order to force the convergence of the value function in PPO towards the solution of the HJB equation, we utilize PINNs to encode this PDE and bridge the information gap between returns computed over discrete time and continuous time. This allows our algorithm to utilize auto-differentiation to eliminate the error associated with gradient computation and discretization of time. We propose the Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO) algorithm, which demonstrates superior performance in terms of higher rewards, faster convergence, and greater stability compared to PPO on MuJoCo environments, making it a significant improvement. ## 2 Preliminaries Consider a controlled dynamical system modeled by the following equation: \[\dot{x}=f(x,u),\quad x(t_{0})=x_{0}, \tag{1}\] where \(x(t)\) is the state and \(u(t)\) is the control input. In control theory, the optimal value function \(V^{*}(x)\) is useful for finding a solution to control problems [8]: \[V^{*}(x)=\sup_{u}\int_{t_{0}}^{\infty}\gamma^{\tau}R(x(\tau;t_{0},x_{0},u(\cdot)),u(\tau))d\tau, \tag{2}\] where \(R(x,a)\) is the reward function and \(\gamma\) is the discount factor. The following theorem introduces a criterion for assessing the optimality of the value function [11, 13]. **Theorem 2.1**.: _A function \(V(x)\) is the optimal value function if and only if:_ 1. \(V\in C^{1}(\mathbb{R}^{n})\) _and_ \(V\) _satisfies the Hamilton-Jacobi-Bellman (HJB) equation_ \[V(x)\ln\gamma+\sup_{u\in U}\{R(x,u)+\nabla_{x}V^{T}(x)f(x,u)\}=0 \tag{3}\] _for all_ \(x\in\mathbb{R}^{n}\)_._ 2. _For all_ \(x\in\mathbb{R}^{n}\)_, there exists a controller_ \(u^{*}(\cdot)\) _such that:_ \[V(x)\ln\gamma+R(x,u^{*}(x))+\nabla_{x}V^{T}(x)f(x,u^{*}(x))=V(x)\ln\gamma+\sup_{\tilde{u}\in U}\{R(x,\tilde{u})+\nabla_{x}V^{T}(x)f(x,\tilde{u})\}. \tag{4}\] Currently existing algorithms in RL do not focus on solving the HJB equation to maximize the total reward for each episode. For example, in PPO, the HJB equation does not seem to be satisfied when tested on MuJoCo environments. To show this, we define the HJB loss at each episode as the following: \[MSE_{f}=\frac{1}{T}\sum_{t=0}^{T-1}|V(x_{t})\ln\gamma+R(x_{t},a_{t})+\nabla_{x}V^{T}(x_{t})f(x_{t},a_{t})|^{2}, \tag{5}\] where \(T\) is the number of timesteps in the episode, \(x_{t}\) is the state of the environment at timestep \(t\), and \(a_{t}\) is the action taken at timestep \(t\). \(\nabla_{x}V^{T}(x_{t})\) is computed exactly using auto-differentiation. We approximate \(f(x_{t},a_{t})\) using finite differences: \[MSE_{f}=\frac{1}{T}\sum_{t=0}^{T-1}\left|V(x_{t})\ln\gamma+R(x_{t},a_{t})+\nabla_{x}V^{T}(x_{t})\left(\frac{x_{t+1}-x_{t}}{\Delta t}\right)\right|^{2}, \tag{6}\] where \(\Delta t\) is the time step size used in the environment. We have plotted the HJB loss for each environment using PPO in Figure 4. The mean HJB loss for each environment takes extremely high values and does not show convergence in 6 out of 10 of the environments, thus showing that the value function does not converge to the optimal value function as characterized by the HJB equation. As a comparison, we have plotted the graphs for the value network loss in Figure 5. The Bellman optimality loss shows convergence in 8 out of the 10 environments. This shows that information is lost when we solve the Bellman optimality equation for a discrete-time value function compared to a continuous-time value function. It also shows that convergence of the value function does not necessarily lead to convergence in the HJB loss.
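For reference, the HJB loss of Eq (6) is simple to compute in PyTorch: \(\nabla_{x}V(x_{t})\) comes from auto-differentiation, while \(f(x_{t},a_{t})\) is replaced by the finite difference \((x_{t+1}-x_{t})/\Delta t\). The sketch below uses hypothetical names and is our illustration, not the authors' code:

```python
import math
import torch

def hjb_loss(value_net, states, next_states, rewards, gamma, dt):
    """Monte Carlo HJB residual of Eq (6) over one episode batch."""
    x = states.detach().clone().requires_grad_(True)
    v = value_net(x).squeeze(-1)
    # exact grad_x V(x_t) via auto-differentiation (the PINN ingredient)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    f_fd = (next_states - states) / dt   # finite-difference estimate of f(x_t, a_t)
    residual = v * math.log(gamma) + rewards + (grad_v * f_fd).sum(dim=1)
    return residual.pow(2).mean()
```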
To solve this problem as shown in Figure 4, we treat the value network as a PINN and use gradient-based methods to reduce the HJB loss. ## 3 Related Work The use of HJB equations for continuous RL has sparked interest in recent years among the RL community as well as the control theory community, and has led to promising works. [10] introduced an alternate HJB equation for Q Networks and used it to derive a controller that is Lipschitz continuous in time. This algorithm has shown improved performance over Deep Deterministic Policy Gradient (DDPG) in three out of the four tested MuJoCo environments without the need for an actor network. [24] introduced a distributional HJB equation to train the FD-WGF Q-Learning algorithm. This models return distributions more accurately compared to Quantile Regression TD (QTD) for a particle-control task. Finite difference methods are used to solve this HJB equation numerically. We intend to build up from these works by adding a policy network and by incorporating PINNs to solve the HJB equation. Furthermore, the authors mentioned the use of auto-differentiation for increased accuracy of the distributional HJB equation as a potential area for future research in their conclusion. Our work relaxes the requirement for the controller to be Lipschitz, and it minimizes the computational error associated with finite difference methods. The use of neural networks to solve the HJB equation has been an area of interest across multiple research projects. [7] uses a structured Recurrent Neural Network to solve for the HJB equation and achieve optimal control for the Dubins car problem. [22] uses the Pineda architecture [15] to estimate partial derivatives of the value function with respect to its inputs. They used an iterative least squares method to solve for the HJB equation. This algorithm shows convergence in several control problems without the need for an initial stable policy. [20] develops the DGM algorithm to solve PDEs. They use auto-differentiation to compute first-order derivatives and Monte-Carlo methods to estimate higher-order derivatives. This algorithm was used to solve the HJB equation to achieve optimal control for a stochastic PDE and achieved an error of 0.1%. We intend to further advance from these works and use a PINN to solve for the HJB equation. The use of a PINN to solve the HJB equation for the value network was done by [14] in an optimal feedback control problem setting. This mitigates the use of finite differences to compute derivatives of the value function. The paper achieves results similar to that of the true optimal control function in high dimensional problems. We intend to build up from this work by using PINNs in an RL setting where the dynamics are unknown and exploration is needed. ## 4 HJBPPO To our knowledge, our work is the first to combine the HJB equation with a currently existing RL algorithm, PPO. It is also the first to use a PINN to solve the HJB equation in an RL setting. The PPO-Clipped algorithm is augmented with this implementation because it uses a value network to compute advantages used to update its policy network [18]. PPO is an actor-critic method that limits the update of the policy network to a small trust region at every iteration. This ensures that the objective function of the policy network is a good approximation of the true objective function and forces small updates to the value network as well.
As a result, PPO shows state-of-the-art performance on deterministic RL environments by ensuring small and robust updates at every iteration. A study by [9] presented a convergence analysis for two-time-scale stochastic approximation with controlled noise. In actor-critic methods, the parameter updates in neural networks using optimizers such as stochastic gradient descent or Adam can be seen as numerical solutions to a stochastic ODE. As such, this work has been utilized by [5] to introduce a convergence analysis for actor-critic methods. Furthermore, these results have been used to show the asymptotic convergence of PPO and RUDDER [1]. The authors introduce model assumptions as well as loss function assumptions that need to be satisfied to ensure that parameters in PPO and RUDDER may converge to a local minimum in a neighborhood near their initial values, and the study shows that this holds for the parameters of the policy network and value network in PPO. Another theoretical study by [6] concludes that policy optimization methods, including PPO, show guaranteed convergence on LQR tasks through the use of gradient-based optimization by formulating it as a non-convex optimization problem. It also shows reliable performance on state-feedback control problems. The paper refers to advanced regularization techniques as a potential area for improving robustness. This further justifies the introduction of the HJB loss as a regularization term. ### PINNs for the HJB equation Our work combines the HJB equation with reinforcement learning in continuous state and action spaces to improve the training of the value network. On a stochastic system with infinite time horizon, the HJB equation is a second-order elliptic equation [2]. A theoretical study by [19] shows that PINNs converge uniformly to the solution of second-order linear elliptic equations, thus justifying the use of PINNs to solve the HJB equation. We treat the value network as a PINN to solve for the HJB equation by computing its derivatives with respect to its inputs exactly. Note that the term \[\sup_{\hat{u}\in U}\{R(x,\hat{u})+\nabla_{x}V^{T}(x)f(x,\hat{u})\}\] in the HJB equation cannot be determined without exploration of the agent in its environment. From Theorem 2.1, the optimal policy \(\pi^{*}(a|x)\) and the optimal controller \(u^{*}(x)=\operatorname*{argmax}_{a}\pi^{*}(a|x)\) satisfy equation (4). The optimal policy is modeled by the policy network \(\pi_{\theta}\) parameterized by \(\theta\) and the optimal controller can be approximated using \(u(x)=\operatorname*{argmax}_{a}\pi_{\theta}(a|x)\). As a result, we can use the following approximation: \[V(x)\ln\gamma+R(x,u(x))+\nabla_{x}V^{T}(x)f(x,u(x))\approx V(x)\ln\gamma+\sup_{\hat{u}\in U}\{R(x,\hat{u})+\nabla_{x}V^{T}(x)f(x,\hat{u})\}. \tag{7}\] This, as a result, justifies the use of equation (6) as the HJB loss used to update the value function at each episode. The loss function is computed as \[J(\rho)=0.5MSE_{u}+\lambda_{HJB}MSE_{f},\] where \(MSE_{f}\) is defined in equation (6) and \(MSE_{u}\) is the standard loss function for the value network used in PPO, and is used to improve the discrete-time estimate of returns for the value function: \[MSE_{u}=\frac{1}{T}\sum_{t=0}^{T-1}|V(x_{t})-(R(x_{t},a_{t})+\gamma V(x_{t+1}))|^{2}, \tag{8}\] where \(\{x_{t}\}_{t=1}^{T}\) is a batch of states explored in a single episode, and \(\{a_{t}=\operatorname*{argmax}_{a}\pi_{\theta}(a|x_{t})\}_{t=1}^{T}\) is a batch of actions executed at time step \(t\) following the policy \(\pi_{\theta}\). The hyperparameter \(\lambda_{HJB}\) is determined based on the magnitude of the HJB loss curves compared to the Bellman optimality loss curves so that both loss functions are given similar weight.
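A minimal sketch of this value update, reusing the hjb_loss helper sketched after Section 2 (names and structure are illustrative assumptions; the authors' actual implementation modifies Stable Baselines 3, as described below), is:

```python
import torch
import torch.nn.functional as F

def value_network_loss(value_net, states, next_states, rewards, gamma, dt, lam_hjb):
    """J(rho) = 0.5 * MSE_u + lambda_HJB * MSE_f for the critic update."""
    # MSE_u: discrete-time Bellman consistency, Eq (8)
    with torch.no_grad():
        target = rewards + gamma * value_net(next_states).squeeze(-1)
    mse_u = F.mse_loss(value_net(states).squeeze(-1), target)
    # MSE_f: continuous-time HJB residual, Eq (6) (see the hjb_loss sketch above)
    mse_f = hjb_loss(value_net, states, next_states, rewards, gamma, dt)
    return 0.5 * mse_u + lam_hjb * mse_f
```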
### Algorithm The HJBPPO algorithm is provided in Algorithm 1. The policy update and the loss minimization for the value network are identical to PPO. In order to satisfy \(V(x)\in C^{1}(\mathbb{R}^{n})\) as stated in Theorem 2.1, we use the infinitely differentiable _tanh_ activation function for the value network. Lines 3-7 in the algorithm are identical to the PPO algorithm. The advantage term \(A_{t}\) is computed as: \(A_{t}=\sum_{n=t}^{T-1}(\gamma\lambda)^{n-t}\delta_{n}\), where \(\delta_{t}=R_{t}+\gamma V_{\varphi}(s_{t+1})-V_{\varphi}(s_{t})\), \(\gamma\) is the discount factor, and \(\lambda\) is the generalized advantage estimation (GAE) parameter. \(\alpha_{1}\) and \(\alpha_{2}\) are learning rates for the policy network optimizer and value network optimizer respectively. \(\varepsilon\) is the clipping parameter. Lines 8-9 in the algorithm are our modifications to the PPO algorithm. We treat the value network as a PINN and add an \(MSE_{f}\) term into its loss function. This way, the HJB equation is used as a regularization term for the value network. We will compare the performance of HJBPPO to PPO on the MuJoCo environments for rewards as well as the HJB loss. ## 5 Results ### Training The HJBPPO algorithm was implemented by modifying the code for PPO in the Stable Baselines 3 library by [16]. To ensure the reproducibility of our results, we have posted our code at (Github repository redacted, code provided in supplementary material) and we have provided our hyperparameters in Tables 1 and 2 in Appendix A. The code was run on the Beluga cluster in Compute Canada. The cluster provided the MuJoCo environments for training.
Figure 1: Comparison of learning curves for PPO (Red, Dashed) compared to HJBPPO (Blue, Smooth)
Training each algorithm over 1 million time steps took seven hours, and training over 10 million time steps took three days. The multiprocessing library from Python was used to train each algorithm over multiple environments at the same time. ### Reward Curves The reward graphs have been plotted in Figure 1, comparing HJBPPO to PPO on all the MuJoCo environments over a million time steps or ten million time steps. The line shows the total reward, averaged over 50 consecutive episodes, and the shaded area indicates the standard deviation of the total reward over 50 consecutive episodes. HJBPPO shows a significant improvement in Ant-v4 and HalfCheetah-v4. It shows faster convergence and stability in Reacher-v4, Swimmer-v4, and InvertedDoublePendulum-v4. It also shows a slight improvement in HumanoidStandup-v4, Hopper-v4, and Walker2d-v4. For the two remaining environments (Humanoid-v4 and InvertedPendulum-v4), it shows equal performance to PPO.
As a result, the graphs show that incorporating the continuous-time HJB equation into the PPO algorithm to train the value function leads to an improved learning curve for the agent. This is because HJBPPO uses a PINN to exploit the physics in the environment. It uses finite differences to approximate the underlying governing equation \(f(x,u)\) of the environment and uses auto-differentiation to solve the HJB equation to achieve optimal control.
Figure 2: HJB loss curves for HJBPPO on MuJoCo environments
Figure 3: Bellman optimality loss curves for HJBPPO on MuJoCo environments
### HJB Loss Curves The HJB loss for each environment has been plotted in Figure 2. HJBPPO shows a significant decrease in the HJB loss compared to PPO. The HJB loss shows convergence in 8 out of the 10 environments, including Ant-v4 and HalfCheetah-v4 where it performs significantly better than PPO in terms of rewards, thus showing that the value function converges to the optimal value function as characterized by the HJB equation. The HJB loss does not converge for Humanoid-v4 and HumanoidStandup-v4, even though the HJB loss for these environments is significantly lower than that of PPO, as shown in Figure 4. In both of these environments, the reward curve for HJBPPO shows similar performance to PPO. This shows that HJBPPO does not show significantly improved performance compared to PPO in general if the HJB loss does not show convergence.
Figure 4: HJB loss curves for PPO on MuJoCo environments
Figure 5: Bellman optimality loss curves for PPO on MuJoCo environments
### Bellman Optimality Loss Curves The Bellman optimality loss for each environment has been plotted in Figure 3. The value network shows convergence in every environment. This shows that convergence of the value function in the continuous-time HJB equation also improves its convergence in the discrete-time Bellman optimality equation, while the converse may not necessarily be true.
Figure 6: Comparison of HJB loss curves for PPO (Red, Dashed) compared to HJBPPO (Blue, Smooth)
Figure 7: Comparison of Bellman optimality loss curves for PPO (Red, Dashed) compared to HJBPPO (Blue, Smooth)
### Comparison with PPO For comparison, we have posted the HJB loss and Bellman optimality loss curves for PPO in Figures 4 and 5 below. A notable difference is that the HJB loss in HJBPPO takes significantly lower values compared to PPO. This is because the HJB loss is actively being minimized during the training of HJBPPO. To make the difference clearer, we plotted the loss curves on the same graphs in Figure 6. It is clear that the HJB loss for HJBPPO takes extremely small values in comparison with PPO. As shown in Figure 4, the HJB loss in PPO shows convergence in only 4 out of the 10 environments: InvertedPendulum-v4, InvertedDoublePendulum-v4, Reacher-v4, and Swimmer-v4. For the remaining environments, the HJB loss shows an overall increasing trend. For the environments where the HJB loss converges for PPO, it also shows convergence for HJBPPO, as shown in Figure 2. While HJBPPO does not show convergence in HJB loss for HumanoidStandup-v4, the loss curve is an improvement compared to PPO, where the loss curve shows a significant increasing trend. Figure 5 shows convergence in Bellman optimality loss for 8 out of the 10 MuJoCo environments using PPO. The convergence in the Bellman optimality loss is achieved by PPO by training the value function to solve for the Bellman optimality equation.
However, despite the choice of this loss function, in Ant-v4 and Walker2d-v4, the Bellman optimality loss does not show convergence, and instead shows an increasing trend for large time steps. This issue is solved in HJBPPO as shown in Figure 3. The Bellman optimality loss shows an overall decreasing trend in all environments including Ant-v4 and Walker2d-v4. To make the difference clearer, we plotted the Bellman optimality loss curves on the same graphs in Figure 7. The Bellman optimality loss curves for HJBPPO show equal performance in general compared to PPO, with better convergence in Ant-v4, HumanoidStandup-v4, InvertedDoublePendulum-v4, Hopper-v4, and Walker2d-v4. In summary, HJBPPO shows improved performance compared to PPO. It shows improvement in the reward curves, the HJB loss curves, and the Bellman optimality loss curves. This is due to the fact that HJBPPO incorporates an HJB loss regularization term and uses results from optimal control to improve the learning of the value function, and thus improve the convergence of the algorithm. ## 6 Conclusion In this paper, we have introduced the HJBPPO algorithm that augments the PPO algorithm to solve the HJB equation. This paper is the first of its kind to combine PINNs with RL. We treat the value function as a PINN to solve the HJB equation in an RL setting. The HJBPPO algorithm shows an overall improvement in performance compared to PPO due to its ability to exploit the physics of the environment as well as optimal control to improve the learning curve of the agent. This paper also shows that convergence of the value function in the continuous-time HJB equation also improves its convergence in the discrete-time Bellman optimality equation. ## 7 Future Research Despite showing an overall improvement in the reward curves, HJBPPO leaves room for improved RL algorithms using PINNs. A limitation of the HJBPPO algorithm, as shown in Figure 2, is that the HJB loss does not always show convergence in the environments, albeit showing a significant improvement compared to PPO. A potential area of further research could involve new optimization methods for PINNs that show improved convergence of the HJB loss. The loss function \(MSE_{f}\) used in training the value network was derived as a result of the approximation used in equation (7). So this does not guarantee convergence of the policy network towards the optimal policy such that \(u(x)\in\operatorname*{argmax}_{\hat{u}\in U}\{R(x,\hat{u})+\nabla_{x}V^{T}(x)f(x,\hat{u})\}\), where the controller \(u(x)\) is derived from the policy \(\pi_{\theta}(a|x)\). [5] proves the convergence of the policy network parameters in PPO to a local optimum but it does not guarantee global convergence. Thus, a potential area of further research could involve alternate choices of HJB loss functions for the value network that relax this approximation. In this paper, we have explored and compared two deterministic RL algorithms - HJBPPO and PPO. It will be interesting to see how this algorithm can be extended to a stochastic setting. In the SAC paper, [4] introduces an entropy-regularized stochastic policy that is less likely to overfit or stick to a local optimum. As a consequence of the approximation used in equation (7), the HJBPPO algorithm also poses a risk that the HJB loss of the value function could lead it to overfit to a suboptimal policy. This risk could be lessened by introducing an alternate HJB equation that facilitates exploration and incorporates entropy maximization.
As a result, augmenting the SAC algorithm with this HJB equation using PINNs is a potential area for further exploration. In the MuJoCo environments, the HJBPPO algorithm showed an improvement compared to PPO. But this is due to the fact that \(f(x,u)\) could be estimated through finite differences, thus allowing for the physics of the environment to be exploited. The environments give all the details of the state needed to choose an action. One limitation of HJBPPO is that it may not perform well in partially observable environments because the estimate of \(f(x,u)\) may be inaccurate. Deep Transformer Q Network (DTQN) was introduced by [3] and achieves state-of-the-art results in many partially observable environments. A potential area for further research may be the introduction of an alternate HJB equation that facilitates partial observability. The DTQN algorithm may be augmented by incorporating this HJB equation using PINNs. Additionally, finite difference approximations become less accurate in environments with high dimensions [20]. This makes the HJB loss less reliable in environments such as Humanoid-v4 and HumanoidStandup-v4 where the state is a 376-dimensional vector. The finite difference approximation does not compute \(f(x,u)\) exactly because the environment uses semi-implicit Euler integration steps rather than Euler's method [23]. Thus, a potential area for future research could be combining HJBPPO with model-based RL so that \(f(x,u)\) can be estimated with a smaller error. ## Acknowledgments The authors would like to thank Pascal Poupart, Ashish Gaurav, and Yanting Miao from the Department of Computer Science, University of Waterloo, for providing us with feedback for this paper.
2302.08643
Fast Temporal Wavelet Graph Neural Networks
Spatio-temporal signal forecasting plays an important role in numerous domains, especially in neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics of the network. To facilitate reliable and timely forecasts for the human brain and traffic networks, we propose the Fast Temporal Wavelet Graph Neural Network (FTWGNN), which is both time- and memory-efficient for learning tasks on time-series data with an underlying graph structure, thanks to the theories of multiresolution analysis and wavelet theory on discrete spaces. We employ Multiresolution Matrix Factorization (MMF) (Kondor et al., 2014) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis, which allows us to construct a fast wavelet convolution as the backbone of our novel architecture. Experimental results on the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at https://github.com/HySonLab/TWGNN
Duc Thien Nguyen, Manh Duc Tuan Nguyen, Truong Son Hy, Risi Kondor
2023-02-17T01:21:45Z
http://arxiv.org/abs/2302.08643v3
# Fast Temporal Wavelet Graph Neural Networks

###### Abstract

Spatio-temporal signal forecasting plays an important role in numerous domains, especially in neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics of the network. To facilitate reliable and timely forecasts for the human brain and traffic networks, we propose the _Fast Temporal Wavelet Graph Neural Network_ (FTWGNN), which is both time- and memory-efficient for learning tasks on time-series data with an underlying graph structure, thanks to the theories of _multiresolution analysis_ and _wavelet theory_ on discrete spaces. We employ _Multiresolution Matrix Factorization_ (MMF) (Kondor et al., 2014) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis, which allows us to construct a fast wavelet convolution as the backbone of our novel architecture. Experimental results on the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at [https://github.com/HySonLab/TWGNN](https://github.com/HySonLab/TWGNN).

## 1 Introduction

Time series modeling has been a quest in a wide range of academic fields and industrial applications, including neuroscience (Pourahmadi and Noorbalooloch, 2016) and traffic modeling (Li et al., 2018). Traditionally, model-based approaches such as autoregressive (AR) models and Support Vector Regression (Smola and Scholkopf, 2004) require domain knowledge as well as a stationarity assumption, which are often violated by the complex and non-linear structure of neural and traffic data. Recently, there has been intensive research, with promising results, on the traffic forecasting problem using deep learning, such as Recurrent Neural Networks (RNN) (Qin et al., 2017), LSTM (Koprinska et al., 2018), and graph learning using Transformers (Xu et al., 2020). On the other hand, forecasting in neuroscience has focused mainly on the long-term evolution of brain network structure based on fMRI data, such as predicting the brain connectivities of an Alzheimer's disease patient after several months (Bessadok et al., 2022), where existing methods are GCN-based (Goktas et al., 2020) or GAN-based graph autoencoders (Gurter et al., 2020). Meanwhile, research on instantaneous time series forecasting of electroencephalogram (EEG) or electrocorticography (ECoG) signals remains untouched, even though EEG and ECoG are often cheaper and quicker to obtain than fMRI, while short-term forecasting may be beneficial for patients with strokes or epilepsy (Shoeibi et al., 2022). In graph representation learning, a dense adjacency matrix expressing a densely connected graph can be a waste of computational resources, while physically it may fail to capture the local "smoothness" of the network. To tackle such problems, a mathematical framework called Multiresolution Matrix Factorization (MMF) (Kondor et al., 2014) has been adopted to "sparsify" the adjacency and graph Laplacian matrices of highly dense graphs. MMF is unusual amongst fast matrix factorization algorithms in that it does not make a low-rank assumption; instead, it is an alternative paradigm designed to capture structure at multiple different scales.
This makes MMF especially well suited to modeling certain types of graphs with complex multiscale or hierarchical structure (Hy and Kondor, 2022), compressing hierarchical matrices (e.g., kernel/gram matrices) (Teneva et al., 2016; Ding et al., 2017), and other applications in computer vision (Ithapu et al., 2017). One important aspect of MMF is its ability to construct wavelets on graphs and matrices during the factorization process (Kondor et al., 2014; Hy and Kondor, 2022). The wavelet basis inferred by MMF tends to be highly sparse, which allows the corresponding wavelet transform to be executed efficiently via sparse matrix multiplication. (Hy and Kondor, 2022) exploited this property to construct fast wavelet convolutions and, consequently, wavelet neural networks for learning on graphs for graph classification and node classification tasks. In this work, we propose the incorporation of fast wavelet convolution based on MMF to build a time- and memory-efficient temporal architecture for learning on time-series data with an underlying graph structure. From the aforementioned arguments, we propose the _Fast Temporal Wavelet Graph Neural Network_ (FTWGNN) for graph time series forecasting, in which the MMF theory is utilized to describe the local smoothness of the network as well as to accelerate the calculations. Experiments on real-world traffic and ECoG datasets show competitive performance along with a remarkably smaller computational footprint of FTWGNN. In summary:

* We model the spatial domain of the graph time series as a diffusion process, in which the theories of _multiresolution analysis_ and _wavelet theory_ are adopted. We employ _Multiresolution Matrix Factorization_ (MMF) to factorize the underlying graph structure and derive its sparse wavelet basis.
* We propose the _Fast Temporal Wavelet Graph Neural Network_ (FTWGNN), an end-to-end model capable of modeling spatiotemporal structures.
* We tested on two real-world traffic datasets and an ECoG dataset, achieving results competitive with state-of-the-art methods with a remarkable reduction in computational time.

## 2 Related work

A spatial-temporal forecasting task utilizes spatial-temporal data information gathered from various sensors to predict their future states. Traditional approaches, such as the autoregressive integrated moving average (ARIMA), k-nearest neighbors algorithm (kNN), and support vector machine (SVM), can only take into account temporal information without considering spatial features (Van Lint & Van Hinsbergen, 2012; Jeong et al., 2013). Aside from traditional approaches, deep neural networks have been proposed to model much more complex spatial-temporal relationships. Specifically, by using an extended fully-connected LSTM with embedded convolutional layers, FC-LSTM (Sutskever et al., 2014) combines CNN and LSTM to model spatial and temporal relations. When predicting traffic, ST-ResNet (Zhang et al., 2017) uses a deep residual CNN network, revealing the powerful capabilities of residual networks. Despite the impressive results obtained, traffic forecasting scenarios with graph-structured data are incompatible with all of the aforementioned methods, because those methods are built for grid data. For learning tasks on graphs, node representations in GNNs (Kipf & Welling, 2016) use a neighborhood aggregation scheme, which involves sampling and aggregating the features of nearby nodes.
Since temporal-spatial data such as traffic data or brain networks are a well-known type of non-Euclidean structured graph data, great efforts have been made to use graph convolution methods in traffic forecasting. As an illustration, DCRNN (Li et al., 2018) models traffic flow as a diffusion process and uses directed graph bidirectional random walks to model spatial dependency. In the field of image and signal processing, processing is more efficient and simpler in a sparse representation, where fewer coefficients reveal the information that we are searching for. Based on this motivation, Multiresolution Analysis (MRA) was proposed by (Mallat, 1989) as a design for multiscale signal approximation, in which sparse representations can be constructed by decomposing signals over elementary waveforms chosen in a family called _wavelets_. Besides Fourier transforms, the discovery of wavelet orthogonal bases such as Haar (Haar, 1910) and Daubechies (Daubechies, 1988) has opened the door to new transforms, such as the continuous and discrete wavelet transforms and the fast wavelet transform algorithm, that have become crucial for several computer applications (Mallat, 2008). (Kondor et al., 2014) and (Hy & Kondor, 2022) have introduced Multiresolution Matrix Factorization (MMF) as a novel method for constructing sparse wavelet transforms of functions defined on the nodes of an arbitrary graph while giving a multiresolution approximation of hierarchical matrices. MMF is closely related to other works on constructing wavelet bases on discrete spaces, including wavelets defined by diagonalizing the diffusion operator or the normalized graph Laplacian (Coifman & Maggioni, 2006) (Hammond et al., 2011) and multiresolution on trees (Gavish et al., 2010) (Bickel & Ritov, 2008).

## 3 Background

### Multiresolution Matrix Factorization

Most commonly used matrix factorization algorithms, such as principal component analysis (PCA), singular value decomposition (SVD), or non-negative matrix factorization (NMF), are inherently single-level algorithms. Saying that a symmetric matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is of rank \(r\ll n\) means that it can be expressed in terms of a dictionary of \(r\) mutually orthogonal unit vectors \(\{u_{1},u_{2},\ldots,u_{r}\}\) in the form \[\mathbf{A}=\sum_{i=1}^{r}\lambda_{i}u_{i}u_{i}^{T}\,, \tag{1}\] where \(u_{1},\ldots,u_{r}\) are the normalized eigenvectors of \(\mathbf{A}\) and \(\lambda_{1},\ldots,\lambda_{r}\) are the corresponding eigenvalues. This is the decomposition that PCA finds, and it corresponds to factorizing \(\mathbf{A}\) in the form \[\mathbf{A}=\mathbf{U}^{T}\mathbf{H}\mathbf{U},\] where \(\mathbf{U}\) is an orthogonal matrix and \(\mathbf{H}\) is a diagonal matrix with the eigenvalues of \(\mathbf{A}\) on its diagonal. The drawback of PCA is that eigenvectors are almost always dense, while matrices occurring in learning problems, especially those related to graphs, often have strong locality properties, in the sense that they couple certain clusters of nearby coordinates more strongly than those farther apart with respect to the underlying topology. In such cases, modeling \(\mathbf{A}\) in terms of a basis of global eigenfunctions is both computationally wasteful and conceptually unreasonable: a localized dictionary would be more appropriate. In contrast to PCA, (Kondor et al., 2014) proposed _Multiresolution Matrix Factorization_, or MMF for short, to construct a sparse hierarchical system of \(L\)-level dictionaries.
The corresponding matrix factorization is of the form \[\mathbf{A}=\mathbf{U}_{1}^{T}\mathbf{U}_{2}^{T}\ldots\mathbf{U}_{L}^{T}\mathbf{HU}_{L}\ldots\mathbf{U}_{2}\mathbf{U}_{1},\] where \(\mathbf{H}\) is close to diagonal and \(\mathbf{U}_{1},\ldots,\mathbf{U}_{L}\) are sparse orthogonal matrices with the following constraints:

* Each \(\mathbf{U}_{\ell}\) is a \(k\)-point rotation (i.e. Givens rotation) for some small \(k\), meaning that it only rotates \(k\) coordinates at a time. Formally, Def. 3.1 defines the \(k\)-point rotation matrix.
* There is a nested sequence of sets \(\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]\) such that the coordinates rotated by \(\mathbf{U}_{\ell}\) are a subset of \(\mathbb{S}_{\ell}\).
* \(\mathbf{H}\) is an \(\mathbb{S}_{L}\)-core-diagonal matrix that is formally defined in Def. 3.2.

We formally define MMF in Defs. 3.3 and 3.4. A special case of MMF is the Jacobi eigenvalue algorithm (Jacobi, 1846), in which each \(\mathbf{U}_{\ell}\) is a 2-point rotation (i.e. \(k=2\)).

**Definition 3.1**.: We say that \(\mathbf{U}\in\mathbb{R}^{n\times n}\) is an **elementary rotation of order \(k\)** (also called a \(k\)-point rotation) if it is an orthogonal matrix of the form \[\mathbf{U}=\mathbf{I}_{n-k}\oplus_{(i_{1},\cdots,i_{k})}\mathbf{O}\] for some \(\mathbb{I}=\{i_{1},\cdots,i_{k}\}\subseteq[n]\) and \(\mathbf{O}\in\mathbb{SO}(k)\). We denote the set of all such matrices as \(\mathbb{SO}_{k}(n)\).

**Definition 3.2**.: Given a set \(\mathbb{S}\subseteq[n]\), we say that a matrix \(\mathbf{H}\in\mathbb{R}^{n\times n}\) is \(\mathbb{S}\)-core-diagonal if \(\mathbf{H}_{i,j}=0\) unless \(i,j\in\mathbb{S}\) or \(i=j\). Equivalently, \(\mathbf{H}\) is \(\mathbb{S}\)-core-diagonal if it can be written in the form \(\mathbf{H}=\mathbf{D}\oplus_{\mathbb{S}}\overline{\mathbf{H}}\), for some \(\overline{\mathbf{H}}\in\mathbb{R}^{|\mathbb{S}|\times|\mathbb{S}|}\) and diagonal \(\mathbf{D}\). We denote the set of all \(\mathbb{S}\)-core-diagonal symmetric matrices of dimension \(n\) as \(\mathbb{H}_{n}^{\mathbb{S}}\).

**Definition 3.3**.: Given an appropriate subset \(\mathbb{O}\) of the group \(\mathbb{SO}(n)\) of \(n\)-dimensional rotation matrices, a depth parameter \(L\in\mathbb{N}\), and a sequence of integers \(n=d_{0}\geq d_{1}\geq d_{2}\geq\cdots\geq d_{L}\geq 1\), a **Multiresolution Matrix Factorization (MMF)** of a symmetric matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) over \(\mathbb{O}\) is a factorization of the form \[\mathbf{A}=\mathbf{U}_{1}^{T}\mathbf{U}_{2}^{T}\ldots\mathbf{U}_{L}^{T}\mathbf{HU}_{L}\ldots\mathbf{U}_{2}\mathbf{U}_{1}, \tag{2}\] where each \(\mathbf{U}_{\ell}\in\mathbb{O}\) satisfies \([\mathbf{U}_{\ell}]_{[n]\setminus\mathbb{S}_{\ell-1},[n]\setminus\mathbb{S}_{\ell-1}}=\mathbf{I}_{n-d_{\ell}}\) for some nested sequence of sets \(\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]\) with \(|\mathbb{S}_{\ell}|=d_{\ell}\), and \(\mathbf{H}\in\mathbb{H}_{n}^{\mathbb{S}_{L}}\) is an \(\mathbb{S}_{L}\)-core-diagonal matrix.

**Definition 3.4**.: We say that a symmetric matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is **fully multiresolution factorizable** over \(\mathbb{O}\subset\mathbb{SO}(n)\) with \((d_{1},\ldots,d_{L})\) if it has a decomposition of the form described in Def. 3.3.

### Multiresolution analysis

(Kondor et al., 2014) has shown that MMF extends the classical theory of multiresolution analysis (MRA) on the real line (Mallat, 1989) to discrete spaces.
The functional analytic view of wavelets is provided by MRA, which, similarly to Fourier analysis, is a way of filtering some function space into a sequence of subspaces \[\cdots\subset\mathbb{V}_{-1}\subset\mathbb{V}_{0}\subset\mathbb{V}_{1}\subset\mathbb{V}_{2}\subset\ldots \tag{3}\] However, it is best to conceptualize (3) as an iterative process of splitting each \(\mathbb{V}_{\ell}\) into the orthogonal sum \(\mathbb{V}_{\ell}=\mathbb{V}_{\ell+1}\oplus\mathbb{W}_{\ell+1}\) of a smoother part \(\mathbb{V}_{\ell+1}\), called the _approximation space_, and a rougher part \(\mathbb{W}_{\ell+1}\), called the _detail space_ (see Fig. 1). Each \(\mathbb{V}_{\ell}\) has an orthonormal basis \(\Phi_{\ell}\triangleq\{\phi_{m}^{\ell}\}_{m}\), in which each \(\phi\) is called a _father_ wavelet. Each complementary space \(\mathbb{W}_{\ell}\) is also spanned by an orthonormal basis \(\Psi_{\ell}\triangleq\{\psi_{m}^{\ell}\}_{m}\), in which each \(\psi\) is called a _mother_ wavelet. In MMF, each individual rotation \(\mathbf{U}_{\ell}\colon\mathbb{V}_{\ell-1}\to\mathbb{V}_{\ell}\oplus\mathbb{W}_{\ell}\) is a sparse basis transform that expresses \(\Phi_{\ell}\cup\Psi_{\ell}\) in the previous basis \(\Phi_{\ell-1}\) such that: \[\phi_{m}^{\ell}=\sum_{i=1}^{\dim(\mathbb{V}_{\ell-1})}[\mathbf{U}_{\ell}]_{m,i}\phi_{i}^{\ell-1},\] \[\psi_{m}^{\ell}=\sum_{i=1}^{\dim(\mathbb{V}_{\ell-1})}[\mathbf{U}_{\ell}]_{m+\dim(\mathbb{V}_{\ell-1}),i}\phi_{i}^{\ell-1},\] in which \(\Phi_{0}\) is the standard basis, i.e. \(\phi_{m}^{0}=e_{m}\), and \(\dim(\mathbb{V}_{\ell})=d_{\ell}=|\mathbb{S}_{\ell}|\). In the \(\Phi_{1}\cup\Psi_{1}\) basis, \(\mathbf{A}\) compresses into \(\mathbf{A}_{1}=\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\). In the \(\Phi_{2}\cup\Psi_{2}\cup\Psi_{1}\) basis, it becomes \(\mathbf{A}_{2}=\mathbf{U}_{2}\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\mathbf{U}_{2}^{T}\), and so on. Finally, in the \(\Phi_{L}\cup\Psi_{L}\cup\cdots\cup\Psi_{1}\) basis, it takes on the form \(\mathbf{A}_{L}=\mathbf{H}=\mathbf{U}_{L}\ldots\mathbf{U}_{2}\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\mathbf{U}_{2}^{T}\ldots\mathbf{U}_{L}^{T}\) that consists of four distinct blocks (supposing that we permute the rows/columns accordingly): \[\mathbf{H}=\begin{pmatrix}\mathbf{H}_{\Phi,\Phi}&\mathbf{H}_{\Phi,\Psi}\\ \mathbf{H}_{\Psi,\Phi}&\mathbf{H}_{\Psi,\Psi}\end{pmatrix},\] where \(\mathbf{H}_{\Phi,\Phi}\in\mathbb{R}^{\dim(\mathbb{V}_{L})\times\dim(\mathbb{V}_{L})}\) is effectively \(\mathbf{A}\) compressed to \(\mathbb{V}_{L}\), \(\mathbf{H}_{\Phi,\Psi}=\mathbf{H}_{\Psi,\Phi}^{T}=0\) and \(\mathbf{H}_{\Psi,\Psi}\) is diagonal. MMF approximates \(\mathbf{A}\) in the form \[\mathbf{A}\approx\sum_{i,j=1}^{d_{L}}h_{i,j}\phi_{i}^{L}{\phi_{j}^{L}}^{T}+\sum_{\ell=1}^{L}\sum_{m=1}^{d_{L}}c_{m}^{\ell}\psi_{m}^{\ell}{\psi_{m}^{\ell}}^{T},\] where the coefficients \(h_{i,j}\) are the entries of the \(\mathbf{H}_{\Phi,\Phi}\) block, and the wavelet frequencies \(c^{\ell}_{m}=\langle\psi^{\ell}_{m},\mathbf{A}\psi^{\ell}_{m}\rangle\) are the diagonal elements of the \(\mathbf{H}_{\Psi,\Psi}\) block. In particular, the dictionary vectors corresponding to certain rows of \(\mathbf{U}_{1}\) are interpreted as level one wavelets, the dictionary vectors corresponding to certain rows of \(\mathbf{U}_{2}\mathbf{U}_{1}\) are interpreted as level two wavelets, and so on. One thing that is immediately clear is that whereas Eq.
(1) diagonalizes \(\mathbf{A}\) in a single step, multiresolution analysis will involve a sequence of basis transforms \(\mathbf{U}_{1},\mathbf{U}_{2},\ldots,\mathbf{U}_{L}\), transforming \(\mathbf{A}\) step by step as \[\mathbf{A}\to\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\to\cdots\to\mathbf{U}_{L}\ldots\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\ldots\mathbf{U}_{L}^{T}\doteq\mathbf{H}, \tag{4}\] so the corresponding matrix factorization must be a multi-level factorization \[\mathbf{A}\approx\mathbf{U}_{1}^{T}\mathbf{U}_{2}^{T}\ldots\mathbf{U}_{\ell}^{T}\mathbf{H}\mathbf{U}_{\ell}\ldots\mathbf{U}_{2}\mathbf{U}_{1}. \tag{5}\]

### MMF optimization problem

Finding the best MMF factorization of a symmetric matrix \(\mathbf{A}\) involves solving \[\min_{\begin{subarray}{c}\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]\\ \mathbf{H}\in\mathbb{H}_{n}^{\mathbb{S}_{L}};\,\mathbf{U}_{1},\ldots,\mathbf{U}_{L}\in\mathbb{O}\end{subarray}}\|\mathbf{A}-\mathbf{U}_{1}^{T}\ldots\mathbf{U}_{L}^{T}\mathbf{H}\mathbf{U}_{L}\ldots\mathbf{U}_{1}\|. \tag{6}\] Assuming that we measure error in the Frobenius norm, (6) is equivalent to \[\min_{\begin{subarray}{c}\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]\\ \mathbf{U}_{1},\ldots,\mathbf{U}_{L}\in\mathbb{O}\end{subarray}}\|\mathbf{U}_{L}\ldots\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\ldots\mathbf{U}_{L}^{T}\|_{\text{resi}}^{2}, \tag{7}\] where \(\|\cdot\|_{\text{resi}}^{2}\) is the squared residual norm \(\|\mathbf{H}\|_{\text{resi}}^{2}=\sum_{i\neq j;\;(i,j)\notin\mathbb{S}_{L}\times\mathbb{S}_{L}}|\mathbf{H}_{i,j}|^{2}\). The optimization problem in (6) and (7) is equivalent to the following 2-level one: \[\min_{\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]}\;\min_{\mathbf{U}_{1},\ldots,\mathbf{U}_{L}\in\mathbb{O}}\|\mathbf{U}_{L}\ldots\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\ldots\mathbf{U}_{L}^{T}\|_{\text{resi}}^{2}. \tag{8}\] There are two fundamental problems in solving this 2-level optimization:

* For the inner optimization, the variables (i.e. Givens rotations \(\mathbf{U}_{1},\ldots,\mathbf{U}_{L}\)) must satisfy the orthogonality constraints.
* For the outer optimization, finding the optimal nested sequence of indices \(\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]\) is a combinatorics problem with an exponential search space.

In order to address the above problems, (Hy & Kondor, 2022) proposes a learning algorithm combining Stiefel manifold optimization and Reinforcement Learning (RL) for the inner and outer optimization, respectively. In this paper, we assume that a nested sequence of indices \(\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]\) is given by a fast heuristic instead of computationally expensive RL. There are several heuristics to find the nested sequence, for example, clustering based on similarity between rows (Kondor et al., 2014) (Kondor et al., 2015). In the next section, we introduce the solution for the inner problem.
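To make the residual objective concrete, the following small sketch evaluates (7) for a given core index set \(\mathbb{S}_{L}\) and a given sequence of \(k\)-point rotations. The helper names are our own, and the explicit dense rotations are purely illustrative; a practical implementation would exploit their sparsity.

```python
import numpy as np

def k_point_rotation(n, idx, O):
    # Embed a k x k orthogonal block O into the n x n identity at the
    # coordinates listed in idx, as in Def. 3.1 (illustrative helper).
    U = np.eye(n)
    U[np.ix_(idx, idx)] = O
    return U

def mmf_residual(A, rotations, S_L):
    # Apply U_L ... U_1 to A and sum the squared entries that the
    # S_L-core-diagonal matrix H is not allowed to keep, i.e. all
    # off-diagonal entries with (i, j) outside S_L x S_L (Eq. (7)).
    n = A.shape[0]
    H = A.copy()
    for idx, O in rotations:           # each rotation given as (indices, k x k block)
        U = k_point_rotation(n, idx, O)
        H = U @ H @ U.T
    mask = np.ones_like(H, dtype=bool)
    np.fill_diagonal(mask, False)      # diagonal entries are always kept in H
    mask[np.ix_(S_L, S_L)] = False     # the S_L-core block is kept as well
    return np.sum(H[mask] ** 2)
```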
### Stiefel manifold optimization

In order to solve the inner optimization problem of (8), we consider the following generic optimization with orthogonality constraints: \[\min_{\mathbf{X}\in\mathbb{R}^{n\times p}}\mathcal{F}(\mathbf{X}),\ \ \text{s.t.}\ \mathbf{X}^{T}\mathbf{X}=\mathbf{I}_{p}, \tag{9}\] where \(\mathbf{I}_{p}\) is the identity matrix and \(\mathcal{F}(\mathbf{X}):\mathbb{R}^{n\times p}\to\mathbb{R}\) is a differentiable function. The feasible set \(\mathcal{V}_{p}(\mathbb{R}^{n})=\{\mathbf{X}\in\mathbb{R}^{n\times p}:\mathbf{X}^{T}\mathbf{X}=\mathbf{I}_{p}\}\) is referred to as the Stiefel manifold of \(p\) orthonormal vectors in \(\mathbb{R}^{n}\). We will view \(\mathcal{V}_{p}(\mathbb{R}^{n})\) as an embedded submanifold of \(\mathbb{R}^{n\times p}\). When there is more than one orthogonality constraint, (9) is written as \[\min_{\mathbf{X}_{1}\in\mathcal{V}_{p_{1}}(\mathbb{R}^{n_{1}}),\ldots,\mathbf{X}_{q}\in\mathcal{V}_{p_{q}}(\mathbb{R}^{n_{q}})}\mathcal{F}(\mathbf{X}_{1},\ldots,\mathbf{X}_{q}) \tag{10}\] where there are \(q\) variables with \(q\) corresponding orthogonality constraints. In the MMF optimization problem (8), suppose for simplicity that we are already given \(\mathbb{S}_{L}\subseteq\cdots\subseteq\mathbb{S}_{1}\subseteq\mathbb{S}_{0}=[n]\), meaning that the indices of active rows/columns at each resolution are already determined. In this case, we have \(q=L\) variables such that each variable \(\mathbf{X}_{\ell}=\mathbf{O}_{\ell}\in\mathbb{R}^{k\times k}\), where \(\mathbf{U}_{\ell}=\mathbf{I}_{n-k}\oplus_{\mathbb{I}_{\ell}}\mathbf{O}_{\ell}\in\mathbb{R}^{n\times n}\) in which \(\mathbb{I}_{\ell}\) is a subset of \(k\) indices from \(\mathbb{S}_{\ell}\), must satisfy the orthogonality constraint. The corresponding objective function is \[\mathcal{F}(\mathbf{O}_{1},\ldots,\mathbf{O}_{L})=\|\mathbf{U}_{L}\ldots\mathbf{U}_{1}\mathbf{A}\mathbf{U}_{1}^{T}\ldots\mathbf{U}_{L}^{T}\|_{\text{resi}}^{2}. \tag{11}\] Therefore, we can cast the inner problem of (8) as an optimization problem on the Stiefel manifold and solve it by specialized steepest gradient descent (Tagare, 2011).

## 4 Method

### Wavelet basis and convolution on graph

Section 3.2 introduces the theory of multiresolution analysis behind MMF as well as the construction of a _sparse_ wavelet basis for a symmetric matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\). Without loss of generality, we assume that \(\mathbf{A}\) is the weight matrix of a weighted undirected graph \(\mathcal{G}=(V,E)\), in which \(V=\{v_{1},..,v_{n}\}\) is the set of vertices and \(E=\{(v_{i},v_{j})\}\) is the set of edges, where the weight of edge \((v_{i},v_{j})\) is given by \(\mathbf{A}_{i,j}\). Given a graph signal \(\mathbf{f}\in\mathbb{R}^{n}\), understood as a function \(f:V\rightarrow\mathbb{R}\) defined on the vertices of the graph, the wavelet transform (up to level \(L\)) expresses this graph signal, without loss of generality \(f\in\mathbb{V}_{0}\), as: \[f(v)=\sum_{\ell=1}^{L}\sum_{m}\alpha_{m}^{\ell}\psi_{m}^{\ell}(v)+\sum_{m}\beta_{m}\phi_{m}^{L}(v),\quad\text{for each}\;\;v\in V,\] where \(\alpha_{m}^{\ell}=\langle f,\psi_{m}^{\ell}\rangle\) and \(\beta_{m}=\langle f,\phi_{m}^{L}\rangle\) are the wavelet coefficients. Based on the wavelet basis construction via MMF detailed in (Hy & Kondor, 2022):

* For \(L\) levels of resolution, we get exactly \(L\) mother wavelets \(\overline{\psi}=\{\psi^{1},\psi^{2},\ldots,\psi^{L}\}\), each corresponding to a resolution (see Figure 6 for visualizations).
* The rows of \(\mathbf{H}=\mathbf{A}_{L}\) make exactly \(n-L\) father wavelets \(\overline{\phi}=\{\phi_{m}^{L}=\mathbf{H}_{m,:}\}_{m\in\mathbb{S}_{L}}\).

In total, a graph of \(n\) vertices has exactly \(n\) wavelets, both mothers and fathers. Analogous to the convolution based on the Graph Fourier Transform (Bruna et al., 2014), each convolution layer \(k\in\{1,..,K\}\) of the wavelet neural network transforms an input vector \(\mathbf{f}^{(k-1)}\) of size \(|V|\times F_{k-1}\) into an output \(\mathbf{f}^{(k)}\) of size \(|V|\times F_{k}\) as \[\mathbf{f}^{(k)}_{:,j}=\sigma\Bigg{(}\mathbf{W}\sum_{i=1}^{F_{k-1}}\mathbf{g}^{(k)}_{i,j}\mathbf{W}^{T}\mathbf{f}^{(k-1)}_{:,i}\Bigg{)}\quad\text{for}\;\;j=1,\ldots,F_{k}, \tag{12}\] where \(\mathbf{W}\) is our wavelet basis matrix, obtained by concatenating \(\overline{\phi}\) and \(\overline{\psi}\) column by column, \(\mathbf{g}^{(k)}_{i,j}\) is a parameter/filter in the form of a diagonal matrix learned in the spectral domain, and \(\sigma\) is an element-wise non-linearity (e.g., ReLU, sigmoid, etc.). In Eq. (12), we first take the wavelet transform of a graph signal \(\mathbf{f}\) into the spectral domain (i.e. \(\hat{\mathbf{f}}=\mathbf{W}^{T}\mathbf{f}\) is the forward transform and \(\mathbf{f}=\mathbf{W}\hat{\mathbf{f}}\) is the inverse transform), then apply a learnable filter \(\mathbf{g}\) to the wavelet coefficients and the inverse transform back to the spatial domain, and finally pass everything through a non-linearity \(\sigma\). Since the wavelet basis matrix \(\mathbf{W}\) is _sparse_, both the wavelet transform and its inverse can be implemented efficiently via sparse matrix multiplication.

### Temporal Wavelet Neural Networks

Capturing spatiotemporal dependencies among time series in various spatiotemporal forecasting problems demands both spatial and temporal models. We build our novel _Fast Temporal Wavelet Graph Neural Network_ with the architectural backbone from the _Diffusion Convolutional Recurrent Neural Network_ (DCRNN) (Li et al., 2018), which combines both spatial and temporal models to solve these tasks.

Figure 2: Architecture of Fast Temporal Wavelet Neural Network. **WC:** graph wavelet convolution given MMF's wavelet basis.

Figure 3: Architecture for the Wavelet Convolutional Gated Recurrent Unit. **WC**: graph wavelet convolution given MMF's wavelet basis.

**Spatial Dependency Model** The spatial dynamics in the network are captured by a diffusion process. Let \(G=(\mathbf{X},\mathbf{A})\) represent an undirected graph, where \(\mathbf{X}=[\mathbf{x}_{1}^{T},\ldots,\mathbf{x}_{N}^{T}]^{T}\in\mathbb{R}^{N\times D}\) denotes the signals of \(N\) nodes, each with \(D\) features. Define further the right-stochastic edge weights matrix \(\tilde{\mathbf{A}}\in\mathbb{R}^{N\times N}\), in which \(\sum_{j}\tilde{\mathbf{A}}_{ij}=1\;\forall i\). In the simplest case, when \(\tilde{\mathbf{L}}=\mathbf{I}-\tilde{\mathbf{A}}\) is the normalized random walk matrix, the diffusion process on the graph is governed by the following equation (Chamberlain et al., 2021): \[\frac{\mathrm{d}\mathbf{X}(t)}{\mathrm{d}t}=(\tilde{\mathbf{A}}-\mathbf{I})\mathbf{X}(t) \tag{13}\] where \(\mathbf{X}(t)=[\mathbf{x}_{1}^{T}(t),\dots,\mathbf{x}_{N}^{T}(t)]^{T}\in\mathbb{R}^{N\times D}\) and \(\mathbf{X}(0)=\mathbf{X}\).
Applying forward Euler discretization with step size 1 gives: \[\mathbf{X}(k) =\mathbf{X}(k-1)+(\tilde{\mathbf{A}}-\mathbf{I})\mathbf{X}(k-1)\] \[=\mathbf{X}(k-1)-\tilde{\mathbf{L}}\mathbf{X}(k-1)\] \[=\tilde{\mathbf{A}}\mathbf{X}(k-1)\] \[=\tilde{\mathbf{A}}^{k}\mathbf{X}(0) \tag{14}\] Eq. (14) is similar to the well-established GCN architecture proposed in (Kipf and Welling, 2016). Then, the diffusion convolution operation over a graph signal \(\mathbf{X}\in\mathbb{R}^{N\times D}\) and filter \(f_{\mathbf{\theta}}\) is defined as: \[\mathbf{X}_{:,d}\star_{\mathcal{G}}f_{\mathbf{\theta}}=\sum_{k=0}^{K-1}\theta_{k}\tilde{\mathbf{A}}^{k}\mathbf{X}_{:,d}\quad\forall d\in\{1,\dots,D\} \tag{15}\] where \(\mathbf{\Theta}\in\mathbb{R}^{K\times 2}\) are the parameters for the filter.

**Temporal Dependency Model** DCRNN leverages recurrent neural networks (RNNs) to model the temporal dependency. In particular, the matrix multiplications in the GRU are replaced with the diffusion convolution, yielding the _Diffusion Convolutional Gated Recurrent Unit_ (DCGRU). \[\mathbf{r}^{(t)} =\sigma(\mathbf{\Theta}_{r}\star_{\mathcal{G}}[\mathbf{X}^{(t)},\mathbf{H}^{(t-1)}]+\mathbf{b}_{r})\] \[\mathbf{u}^{(t)} =\sigma(\mathbf{\Theta}_{u}\star_{\mathcal{G}}[\mathbf{X}^{(t)},\mathbf{H}^{(t-1)}]+\mathbf{b}_{u})\] \[\mathbf{C}^{(t)} =\tanh\bigl{(}\mathbf{\Theta}_{C}\star_{\mathcal{G}}[\mathbf{X}^{(t)},(\mathbf{r}^{(t)}\odot\mathbf{H}^{(t-1)})]+\mathbf{b}_{c}\bigr{)}\] \[\mathbf{H}^{(t)} =\mathbf{u}^{(t)}\odot\mathbf{H}^{(t-1)}+(1-\mathbf{u}^{(t)})\odot\mathbf{C}^{(t)}\] where \(\mathbf{X}^{(t)},\mathbf{H}^{(t)}\) denote the input and output at time \(t\), while \(\mathbf{r}^{(t)},\mathbf{u}^{(t)}\) are the reset and update gates at time \(t\), respectively. Both the encoder and the decoder are recurrent neural networks with DCGRU, following the _Sequence-to-Sequence_ style. To mitigate the distribution differences between training and testing data, the scheduled sampling technique (Bengio et al., 2015) is used, where the model is fed either the ground truth with probability \(\epsilon_{i}\) or the prediction by the model with probability \(1-\epsilon_{i}\). For our novel _Fast Temporal Wavelet Graph Neural Network_ (FTWGNN), the fundamental difference is that instead of using the temporal traffic graph as the input of DCRNN, we use the sparse wavelet basis matrix \(\mathbf{W}\) extracted via MMF (see Section 3.2) and replace the diffusion convolution with our fast _wavelet convolution_. Given the sparsity of our wavelet basis, we significantly reduce the overall computational time and memory usage. Each Givens rotation matrix \(\mathbf{U}_{\ell}\) (see Def. 3.1) is a highly sparse orthogonal matrix with a non-zero core of size \(K\times K\). The number of non-zeros in MMF's wavelet basis \(\mathbf{W}\), which can be computed as the product \(\mathbf{U}_{1}\mathbf{U}_{2}\cdots\mathbf{U}_{L}\), is \(O(LK^{2})\), where \(L\) is the number of resolutions (i.e. the number of Givens rotation matrices) and \(K\) is the number of columns in a Givens rotation matrix. (Kondor et al., 2014) and (Hy and Kondor, 2022) have shown in both theory and practice that \(L\) only needs to be in \(O(n)\), where \(n\) is the number of columns, and \(K\) only needs to be small (e.g., 2, 4, 8) to get a decent approximation/compression for a symmetric hierarchical matrix. Technically, MMF is able to compress a symmetric hierarchical matrix from the original quadratic size \(n\times n\) to a linear number of non-zero elements \(O(n)\).
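To illustrate how this sparsity translates into a fast layer, the sketch below implements the wavelet convolution of Eq. (12) with a sparse basis matrix. The module name and shapes are illustrative, and the per-pair diagonal filters \(\mathbf{g}^{(k)}_{i,j}\) are simplified to one diagonal filter per input channel followed by a linear channel mix, an assumption made for brevity rather than the exact parameterisation of FTWGNN.

```python
import torch
import torch.nn as nn

class SparseWaveletConv(nn.Module):
    # A minimal sketch of the fast wavelet convolution of Eq. (12), where the
    # wavelet basis W is a sparse (COO) n x n tensor built from the MMF wavelets.
    def __init__(self, n_nodes, in_channels, out_channels):
        super().__init__()
        self.g = nn.Parameter(torch.ones(n_nodes, in_channels))      # diagonal spectral filters
        self.mix = nn.Linear(in_channels, out_channels, bias=False)  # channel mixing

    def forward(self, f, W):
        # f: [n_nodes, in_channels] graph signal; W: sparse [n_nodes, n_nodes]
        f_hat = torch.sparse.mm(W.t(), f)   # forward wavelet transform  W^T f
        f_hat = self.g * f_hat              # filter the wavelet coefficients
        out = torch.sparse.mm(W, f_hat)     # inverse transform  W f_hat
        return torch.relu(self.mix(out))    # element-wise non-linearity
```

Because both matrix products involve the sparse \(\mathbf{W}\), the cost of the layer scales with the number of non-zeros of the basis rather than with \(n^{2}\).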
Practically, all the Givens rotation matrices \(\{\mathbf{U}_{\ell}\}_{\ell=1}^{L}\) and the wavelet basis \(\mathbf{W}\) can be stored in Coordinate Format (COO), and the wavelet transform and its inverse in the wavelet convolution (see Eq. 12) can be implemented efficiently by sparse matrix multiplication in PyTorch's sparse library (Paszke et al., 2019). The architecture of our model is shown in Figures 2 and 3.

## 5 Experiments

Our PyTorch implementation is publicly available at [https://github.com/HySonLab/TWGNN](https://github.com/HySonLab/TWGNN). The implementation of Multiresolution Matrix Factorization and graph wavelet computation (Hy and Kondor, 2022) is publicly available at [https://github.com/risilab/Learnable_MMF](https://github.com/risilab/Learnable_MMF). To showcase the competitive performance and remarkable acceleration of FTWGNN, we conducted experiments on two well-known traffic forecasting benchmarks, METR-LA and PEMS-BAY, and one challenging ECoG dataset, AJILE12. Following (Li et al., 2018), we compare our model with widely used time series models, including:

1. HA: Historical Average, which models the traffic flow as a seasonal process and uses the weighted average of previous seasons as the prediction;
2. ARIMA\({}_{kal}\): Auto-Regressive Integrated Moving Average model with Kalman filter;
3. VAR: Vector Auto-Regression;
4. SVR: Support Vector Regression;
5. FNN: Feed-forward neural network with two hidden layers and L2 regularization;
6. FC-LSTM: Recurrent Neural Network with fully connected LSTM hidden units (Sutskever et al., 2014).

Methods are evaluated on three metrics: **(i)** Mean Absolute Error (MAE); **(ii)** Mean Absolute Percentage Error (MAPE); and **(iii)** Root Mean Squared Error (RMSE). FTWGNN and DCRNN are implemented using PyTorch (Paszke et al., 2019) on an NVIDIA A100-SXM4-80GB GPU. Details about parameters and model structure can be found in Appendices A and B.

**Adjacency matrix** According to DCRNN (Li et al., 2018), the traffic sensor network is expressed by an adjacency matrix constructed using a thresholded Gaussian kernel (Shuman et al., 2013). Specifically, for each pair of sensors \(v_{i}\) and \(v_{j}\), the edge weight from \(v_{i}\) to \(v_{j}\), denoted by \(A_{ij}\), is defined as \[A_{ij}\coloneqq\begin{cases}\exp\!\left(-\frac{\text{dist}(v_{i},v_{j})^{2}}{\sigma^{2}}\right)\!,&\text{dist}(v_{i},v_{j})\leq k\\ 0,&\text{otherwise}\end{cases}\,, \tag{16}\] where \(\text{dist}(v_{i},v_{j})\) denotes the spatial distance from \(v_{i}\) to \(v_{j}\), \(\sigma\) is the standard deviation of the distances, and \(k\) is the distance threshold. Nevertheless, such a user-defined adjacency matrix requires expert knowledge and thus may not work in other domains, _e.g._, brain networks. In the ECoG time series forecasting case, the adjacency matrix is computed based on the popular Local Linear Embedding (LLE) (Saul and Roweis, 2003).
In particular, for the matrix data \(\mathbf{X}=[\mathbf{x}_{1},\dots,\mathbf{x}_{N}]\in\mathbb{R}^{T\times N}\), where \(\mathbf{x}_{i}\) denotes the time series data of node \(i\) for \(i\in\{1,\dots,N\}\), an adjacency matrix \(\mathbf{A}\) is identified to gather all the coefficients of the affine dependencies among \(\{\mathbf{x}_{i}\}_{i=1}^{N}\) by solving the following optimization problem \[\mathbf{A}\coloneqq\arg\min_{\hat{\mathbf{A}}\in\mathbb{R}^{N\times N}}\left\|\mathbf{X}-\mathbf{X}\hat{\mathbf{A}}^{T}\right\|_{\text{F}}^{2}+\lambda_{A}\big{\|}\hat{\mathbf{A}}\big{\|}_{1}\] \[\text{s.t.}\quad\mathbf{1}_{N}^{T}\hat{\mathbf{A}}=\mathbf{1}_{N}^{T}\,,\quad\text{diag}(\hat{\mathbf{A}})=\mathbf{0}\,, \tag{17}\] where the constraint \(\mathbf{1}_{N}^{T}\hat{\mathbf{A}}=\mathbf{1}_{N}^{T}\) realizes the affine compositions, while \(\text{diag}(\hat{\mathbf{A}})=\mathbf{0}\) excludes the trivial solution \(\hat{\mathbf{A}}=\mathbf{I}_{N}\). Furthermore, to promote the local smoothness of the graph, each data point \(\mathbf{x}_{i}\) is assumed to be approximated by a few neighbors \(\{\mathbf{x}_{j_{1}},\mathbf{x}_{j_{2}},\dots,\mathbf{x}_{j_{k}}\}\); thus \(\hat{\mathbf{A}}\) is regularized by the \(l_{1}\)-norm loss \(\big{\|}\hat{\mathbf{A}}\big{\|}_{1}\) to be sparse. Task (17) is a composite convex minimization problem with affine constraints and hence can be solved by the method of (Slavakis and Yamada, 2018).

### Traffic prediction

Two real-world large-scale traffic datasets are considered:

* **METR-LA** Data of 207 sensors on the highways of Los Angeles County (Jagadish et al., 2014) over the 4-month period from Mar 1st 2012 to Jun 30th 2012.
* **PEMS-BAY** Data of 325 sensors in the Bay Area over the 6-month period from Jan 1st 2017 to May 31st 2017 from the California Transportation Agencies (Caltrans) Performance Measurement System (PeMS).

The distance function \(\text{dist}(v_{i},v_{j})\) in (16) represents the road network distance from sensor \(v_{i}\) to sensor \(v_{j}\), producing an asymmetric adjacency matrix for a directed graph. Therefore, the symmetrized matrix \(\hat{\mathbf{A}}\coloneqq\frac{1}{2}(\mathbf{A}+\mathbf{A}^{T})\) is taken to compute the wavelet basis matrix \(\mathbf{W}\) following Sec. 3.2.
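For illustration, the thresholded Gaussian kernel of Eq. (16) together with the symmetrization step above can be written in a few lines; the snippet assumes a precomputed matrix of pairwise road-network distances, and the variable names are ours.

```python
import numpy as np

def thresholded_gaussian_adjacency(dist, k):
    # dist: [N, N] pairwise distances dist(v_i, v_j); k: distance threshold.
    sigma2 = dist.std() ** 2                 # sigma is the std of the distances
    A = np.exp(-dist ** 2 / sigma2)          # Gaussian kernel weights, Eq. (16)
    A[dist > k] = 0.0                        # drop edges beyond the threshold
    return 0.5 * (A + A.T)                   # symmetrise before computing W
```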
Parameters can be found in Appendix B. Table 1 shows the evaluation of different approaches on the two traffic datasets, while Table 2 reports the training time per epoch of FTWGNN and DCRNN. Overall, although FTWGNN performs better than DCRNN by only about 10%, it is significantly faster, by about 5 times on average.

\begin{table} \begin{tabular}{c||c|c|c c c c c c c c} \hline \hline Dataset & \(T\) & Metric & HA & ARIMA\({}_{kal}\) & VAR & SVR & FNN & FC-LSTM & DCRNN & FTWGNN \\ \hline \multirow{9}{*}{METR-LA} & \multirow{3}{*}{15 min} & MAE & 4.16 & 3.99 & 4.42 & 3.99 & 3.99 & 3.44 & 2.77 & **2.70** \\ & & RMSE & 7.80 & 8.21 & 7.89 & 8.45 & 7.94 & 6.30 & 5.38 & **5.15** \\ & & MAPE & 13.0\% & 9.6\% & 10.2\% & 9.3\% & 9.9\% & 9.6\% & 7.3\% & **6.81\%** \\ \cline{2-11} & \multirow{3}{*}{30 min} & MAE & 4.16 & 5.15 & 5.41 & 5.05 & 4.23 & 3.77 & 3.15 & **3.0** \\ & & RMSE & 7.80 & 10.45 & 9.13 & 10.87 & 8.17 & 7.23 & 6.45 & **5.95** \\ & & MAPE & 13.0\% & 12.7\% & 12.7\% & 12.1\% & 12.9\% & 10.9\% & 8.80\% & **8.01\%** \\ \cline{2-11} & \multirow{3}{*}{60 min} & MAE & 4.16 & 6.90 & 6.52 & 6.72 & 4.49 & 4.37 & 3.60 & **3.42** \\ & & RMSE & 7.80 & 13.23 & 10.11 & 13.76 & 8.69 & 8.69 & 7.59 & **6.92** \\ & & MAPE & 13.0\% & 17.4\% & 15.8\% & 16.7\% & 14.0\% & 13.2\% & 10.5\% & **9.83\%** \\ \hline \hline \multirow{8}{*}{PEMS-BAY} & \multirow{3}{*}{15 min} & MAE & 2.88 & 1.62 & 1.74 & 1.85 & 2.20 & 2.05 & 1.38 & **1.14** \\ & & RMSE & 5.59 & 3.30 & 3.16 & 3.59 & 4.42 & 4.19 & 2.95 & **2.40** \\ & & MAPE & 6.8\% & 3.5\% & 3.6\% & 3.8\% & 5.2\% & 4.8\% & 2.9\% & **2.28\%** \\ \cline{2-11} & \multirow{2}{*}{30 min} & MAE & 2.88 & 2.33 & 2.32 & 2.48 & 2.30 & 2.20 & 1.74 & **1.5** \\ & & MAPE & 6.8\% & 5.4\% & 5.0\% & 5.5\% & 5.43\% & 5.2\% & 3.9\% & **3.15\%** \\ \cline{2-11} & \multirow{3}{*}{60 min} & MAE & 2.88 & 3.38 & 2.93 & 3.28 & 2.46 & 2.37 & 2.07 & **1.79** \\ & & RMSE & 5.59 & 6.5 & 5.44 & 7.08 & 4.98 & 4.96 & 4.74 & **3.99** \\ & & MAPE & 6.8\% & 8.3\% & 6.5\% & 8.0\% & 5.89\% & 5.7\% & 4.9\% & **4.14\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of different approaches on the METR-LA and PEMS-BAY traffic datasets.

\begin{table} \begin{tabular}{c||c|c c|c} \hline \hline Dataset & \(T\) & DCRNN & FTWGNN & Speedup \\ \hline \multirow{3}{*}{METR-LA} & 15 min & 350s & **217s** & 1.61x \\ \cline{2-5} & 30 min & 620s & **163s** & 3.80x \\ \cline{2-5} & 60 min & 1800s & **136s** & 13.2x \\ \hline \multirow{3}{*}{PEMS-BAY} & 15 min & 427s & **150s** & 2.84x \\ \cline{2-5} & 30 min & 900s & **173s** & 5.20x \\ \cline{2-5} & 60 min & 1800s & & \\ \hline \hline \end{tabular} \end{table} Table 2: Training time/epoch between DCRNN and FTWGNN.

### Brain networks

_Annotated Joints in Long-term Electrocorticography for 12 human participants_ (AJILE12), publicly available at (Peterson et al., 2022), records intracranial neural activity via invasive ECoG, which involves implanting electrodes directly under the skull (Peterson et al., 2022).
For each participant, ECoG recordings are sporadically sampled at 500Hz over \(7.4\pm 2.2\) days (mean\(\pm\)std) from at least 64 electrodes, each of which is encoded with a unique set of Montreal Neurological Institute (MNI) x, y, z coordinates. The proposed model is tested on the first hour of recordings of subject number 5, with 116 good-quality electrodes. Signals are downsampled to 1Hz, thus producing a network of 116 nodes, each with \(3{,}600\) data points. Furthermore, the signals are augmented by applying spline interpolation to get the upper and lower envelopes along with an average curve (Melia et al., 2014) (see Figure 4(a)). The adjacency matrix \(\mathbf{A}\) is obtained by solving task (17), and the wavelet basis matrix \(\mathbf{W}\) is then constructed following Sec. 3.2. Parameters can be found in Appendix B.

Table 3 reports the performance of different methods on the AJILE12 dataset for 1-, 5-, and 15-second prediction. Generally, errors are much higher than those in the traffic forecasting problem, since the connections within the brain network are much more complicated and ambiguous (Breakspear, 2017). High errors using the HA and VAR methods show that the AJILE12 data follows no particular pattern or periodicity, making long-step prediction extremely challenging. Although DCRNN has a decent performance quantitatively, Figure 4(b) demonstrates the superior performance of FTWGNN: DCRNN fails to approximate the trend and the magnitude of the signals. Even though FTWGNN performs well at 1-second prediction, it produces unstable and erroneous forecasts at longer steps of 5 or 15 seconds (see Figure 5). Meanwhile, similar to the traffic prediction case, FTWGNN also sees a remarkable improvement in computation time, by around 2 times on average (see Table 2). Table 4 shows the sparsity density of our MMF wavelet basis \(\mathbf{W}\) for each dataset. It is important to note that \(\mathbf{W}\) is extremely sparse, with no more than \(2\%\) non-zeros, which makes our models run much faster with significantly lower memory usage while achieving competitive results.

Figure 4: (a) Augmented ECoG signals by spline interpolation envelopes, (b) 1-second prediction of ECoG signals.

\begin{table} \begin{tabular}{c||c|c|c c c c c c c} \hline Dataset & \(T\) & Metric & HA & VAR & LR & SVR & LSTM & DCRNN & FTWGNN \\ \hline \multirow{9}{*}{AJILE12} & \multirow{3}{*}{1 sec} & MAE & 0.88 & 0.16 & 0.27 & 0.27 & 0.07 & 0.05 & **0.03** \\ & & RMSE & 1.23 & 0.25 & 0.37 & 0.41 & **0.09** & 0.45 & 0.35 \\ & & MAPE & 320\% & 58\% & 136\% & 140\% & 38\% & 7.84\% & **5.27\%** \\ \cline{2-10} & \multirow{3}{*}{5 sec} & MAE & 0.88 & 0.66 & 0.69 & 0.69 & 0.39 & 0.16 & **0.11** \\ & & RMSE & 1.23 & 0.96 & 0.92 & 0.93 & 0.52 & 0.24 & **0.15** \\ & & MAPE & 320\% & 221\% & 376\% & 339\% & 147\% & 64\% & **57\%** \\ \cline{2-10} & \multirow{3}{*}{15 sec} & MAE & 0.88 & 0.82 & 0.86 & 0.86 & 0.87 & 0.78 & **0.70** \\ & & RMSE & 1.23 & 1.15 & 1.13 & 1.13 & 1.14 & 1.01 & **0.93** \\ & & MAPE & 320\% & 320\% & 448\% & 479\% & 330\% & 294\% & **254\%** \\ \hline \end{tabular} \end{table} Table 3: Performance comparison on ECoG signals forecast.

## 6 Conclusion

We propose a new class of spatial-temporal graph neural networks based on the theories of multiresolution analysis and wavelet theory on discrete spaces with an RNN backbone, coined the _Fast Temporal Wavelet Graph Neural Network_ (FTWGNN).
Fundamentally, we employ _Multiresolution Matrix Factorization_ to factorize the underlying graph structure and extract its corresponding sparse wavelet basis, which consequently allows us to construct an efficient wavelet transform and convolution on the graph. Experiments on real-world large-scale datasets show promising results and the computational efficiency of FTWGNN in network time series modeling, including traffic prediction and brain networks. Several future directions are: **(i)** investigating synchronization phenomena in brain networks (Honda, 2018); **(ii)** developing a robust model against outliers/missing data that appear frequently in practice.
2303.04436
A comparison of rational and neural network based approximations
Rational and neural network based approximations are efficient tools in modern approximation. These approaches are able to produce accurate approximations to nonsmooth and non-Lipschitz functions, including multivariate domain functions. In this paper we compare the efficiency of function approximation using rational approximation, neural networks and their combinations. It was found that rational approximation is superior to neural network based approaches with the same number of decision variables. Our numerical experiments demonstrate the efficiency of rational approximation, even when the number of approximation parameters (that is, the dimension of the corresponding optimisation problems) is small. Another important contribution of this paper lies in the improvement of rational approximation algorithms. Namely, the optimisation based algorithms for rational approximation can be adjusted in such a way that the condition numbers of the constraint matrices are controlled. This simple adjustment enables us to work with high-dimensional optimisation problems and improve the design of the neural network. The main strength of neural networks is in their ability to handle models with a large number of variables: complex models are decomposed into several simple optimisation problems. Therefore the large number of decision variables is in the nature of neural networks.
Vinesha Peiris, Reinier Diaz Millan, Nadezda Sukhorukova, Julien Ugon
2023-03-08T08:31:06Z
http://arxiv.org/abs/2303.04436v2
###### Abstract

Rational and neural network based approximations are efficient tools in modern approximation. These approaches are able to produce accurate approximations to nonsmooth and non-Lipschitz functions, including multivariate domain functions. In this paper we compare the efficiency of function approximation using rational approximation, neural networks and their combinations. It was found that rational approximation is superior to neural network based approaches with the same number of decision variables. Our numerical experiments demonstrate the efficiency of rational approximation, even when the number of approximation parameters (that is, the dimension of the corresponding optimisation problems) is small. Another important contribution of this paper lies in the improvement of rational approximation algorithms. Namely, the optimisation based algorithms for rational approximation can be adjusted in such a way that the condition numbers of the constraint matrices are controlled. This simple adjustment enables us to work with high-dimensional optimisation problems and improve the design of the neural network. The main strength of neural networks is in their ability to handle models with a large number of variables: complex models are decomposed into several simple optimisation problems. Therefore the large number of decision variables is in the nature of neural networks.

**A comparison of rational and neural network based approximations**

Vinesha Peiris, Reinier Diaz Millan, Nadezda Sukhorukova, Julien Ugon

**Mathematics Subject Classification (2020)** Primary: 41A50, 41A20, 41A63, 65D10, 65D12, 65D15

**Keywords:** Chebyshev approximation, rational approximation, neural network approximation, multivariate approximation

## 1 Introduction

In this paper we compare two efficient function approximation tools: rational approximation and neural network based approximation. These two types of approximation are very different in their nature, but both of them are able to produce fast and accurate approximations. We also consider some combinations of these two approaches in order to improve the performance of the learning system. The neural network we are using in this study has a specific structure and is also known as deep learning. Deep learning is one of the key tools in the modern area of Artificial Intelligence. There are many practical applications for deep learning, including data analysis, signal and image processing and many others [11, 33]. Despite all these practical applications, deep learning is just a specific type of function approximation, where the function to be approximated is only known by its values at a finite number of points (training data), while the approximation prototype (class of approximations) is a composition of affine mappings and the so-called activation functions (a special type of univariate function). A very comprehensive and thorough textbook on the modern view of deep learning can be found in [11]. Our second main approach is rational approximation. In this case, the approximations are simply ratios of polynomial functions. The choice of rational functions is quite natural: they provide flexible approximations to a wide range of functions, including non-smooth and non-Lipschitz functions [19, 30, 31]. This flexibility is even comparable with free-knot piecewise polynomial approximation [28].
At the same time, the corresponding optimisation problems are quasiconvex [7, 18, 26] and there are a number of efficient computational methods to tackle them [9, 20, 24], just to name a few. Our numerical experiments demonstrate that rational approximation is superior to neural network based approaches. Moreover, we also demonstrate that the optimisation based approach can be used to control the condition numbers of the constraint matrices appearing in auxiliary problems. This additional adjustment of the optimisation problems allows us to increase the size of the problems that can be solved. Therefore, the introduction of modern optimisation techniques leads to the enhancement of the learning system. The paper is organised as follows. In section 2 we provide the background of the approximation methods we use in this paper: neural networks, rational approximation and their combinations. In section 3 we explain the approximation models we are using in this paper. Then in sections 4-6 we compare the models. Finally, in section 7 we draw the conclusions and highlight possible future research directions. Appendix A.1 contains supplementary results, including the results with only two nodes in the hidden layer. The approximations with only two nodes in the hidden layer are not accurate, but we include these results for consistency and comparison.

## 2 Preliminaries

### Deep learning

Deep learning is a powerful tool for data and function approximation, which can also be used for data analysis and data classification. The power of deep learning is in its structure: the subproblems that the system has to solve are very simple from the point of view of modern optimisation theory, and therefore the system can handle problems with hundreds and even thousands of decision variables very efficiently. The origin of deep learning is mathematical in its nature. Essentially, the objective of deep learning is to solve an approximation problem: optimise the weights (parameters) of the network. These weights are the decision variables of certain optimisation problems, whose objective functions represent the inaccuracy of approximation. Therefore, it is natural to approach this problem using modern optimisation tools [11, 12, 33]. A solid mathematical background of deep learning was established in [10, 14, 17, 29]. These works rely on the results of the celebrated Kolmogorov-Arnold Theorem [3, 15]. The Kolmogorov-Arnold representation theorem states that every multivariate continuous function can be represented as a composition of continuous univariate functions over the binary operation of addition. In general, there is no algorithm for constructing these composition functions. Instead of constructing this representation, modern deep learning techniques approximate this composition function by a composition of affine transformations and the so-called activation functions. The activation functions are univariate non-polynomial functions. The most commonly used activation functions are sigmoid functions and the Rectified Linear Unit (ReLU) function (the monotonic piecewise linear function \(\varphi(x)=\max\{0,x\}\)). Most deep learning algorithms rely on a least squares based measure of inaccuracy (loss function). Since loss functions are measures of inaccuracy, the goal of optimisation is to optimise the weights by minimising the corresponding loss function.
Therefore, least squares-based models are minimising a smooth quadratic function, and therefore fast and simple optimisation techniques (for example, gradient descent) are applicable. In some cases, however, other loss functions are more efficient: uniform-based ones, etc. The goal of this paper is to compare the approximation results obtained by deep learning and other approximation techniques. In particular, we are looking at rational approximation due to its approximation power [28]. More details on rational approximation will be provided in the next section. On the other hand, in [34], the author demonstrates that rational functions are as powerful as neural networks with standard ReLU activation functions. This result encouraged many researchers in the deep learning community to combine neural networks, rational functions and rational approximation techniques to enhance the performance of each other [6, 22, 27].

### Rational approximation

Rational approximation in the Chebyshev (uniform) sense was a very popular research direction in the 1950s-70s [1, 5, 8, 21, 30, 31], among many others. There are two main groups of methods to approach rational approximation. The first group [23] is dedicated to "nearly optimal" solutions. This approach (also known as the AAA approach) is very efficient and therefore very popular, but it is only "nearly optimal". The extension of AAA to multivariate cases is still open. Moreover, this method is only designed for unconstrained optimisation and, as we will see later in this paper, this will limit our ability to work with ill-conditioned matrices. The second group of methods is based on modern optimisation techniques: the corresponding optimisation problems are quasiconvex and can be solved using general quasiconvex optimisation methods. There are many methods for rational approximation [4, 16, 25, 30] (just to name a few), but the implementation of most of them relies on solving linear programming problems. Therefore, additional linear constraints can be easily added to these implementations without making the problems complex. Currently, the most popular optimisation methods for rational approximation are the bisection method for quasiconvex functions and the differential correction method. The advantage of these two methods is that they can be easily extended to non-monomial cases (generalised rational approximation) [8, 26] and to multivariate settings [2]. Overall, the differential correction method has quadratic convergence [4], while the bisection method converges linearly [7]. Therefore, in our experiments with univariate rational approximation we use the differential correction method. At the same time, the bisection method is still a good choice due to its simplicity. Hence, we use the bisection method in our multivariate rational approximation experiments.

## 3 Computational models

We compare six different approaches for approximating a given continuous function. Four of them are based on neural networks (NN) and the remaining two are purely rational approximation based approaches. The approaches are as follows:

1. NN with ReLU activation function,
2. NN with rational approximation to ReLU activation (we apply the differential correction method to approximate ReLU),
3. NN with rational activation function, where the coefficients of the rational activation function are learnt from the NN,
4. NN with rational activation, where the coefficients of the rational activation function and the parameters of the network are learnt with a split method: the training is done in three steps, first between the input layer and the hidden layer, then between the hidden layer and the output layer, and finally the coefficients of the rational activation,
5. Rational approximation (differential correction): the direct rational approximation between the input and the output layer,
6. Rational approximation (AAA): the direct "near-optimal" rational approximation between the input and the output layer.

We use neural networks with three layers: input layer, hidden layer and output layer. We have the following options for the number of nodes in the hidden layer:

* a network with 2 nodes in the hidden layer (see Appendix A.1),
* a network with 10 nodes in the hidden layer.

In our experiments, we use different numbers of epochs: 50, 100 and 200. We record the training time per epoch for each network training procedure. For NN methods, we use the MSE-based loss function (MSE stands for Mean Squared Error and simply means "least squares") and the uniform loss. For the MSE-based loss we use ADAM, while for the uniform loss we use ADAMAX (a version of ADAM specially designed for uniform loss). All the rational approximation based models are designed for the uniform loss, which is a stronger criterion than least squares and clearly more appropriate if the goal is to approximate a function. In our experiments, we consider two different settings.

* The domain is a segment (univariate function). We approximate the nonsmooth function \[f(x)=\sqrt{|x-0.25|},\quad x\in[-1,1],\] using the above approaches with different conditions (number of nodes in the hidden layer, number of epochs, etc.). The choice of the function \(f(x)\) is due to its difficulty for most approximation techniques: this function is nonsmooth and non-Lipschitz at \(x=0.25\).
* The domain is bi-variate. It is also possible to extend the results to higher dimensions, but all these extensions are outside the scope of this paper.

In the next section, we present only the most important results of the numerical experiments. In particular, we report the results related to neural networks with 10 nodes in the hidden layer, where the loss function is in the form of the uniform norm. Appendix A contains a thorough discussion of the remaining results.

## 4 Results: Neural network-based approximation

The findings in this section are from a neural network with 10 nodes in the hidden layer. The loss function is based on the uniform norm and we use ADAMAX as the optimiser.

### Neural Network with ReLU activation

We start our experiments with the standard ReLU activation function. Table 1 reports the loss function value and computational time for different numbers of epochs. One can see that the value of the loss function does not improve significantly as the number of epochs increases. Moreover, it appeared that the minimum reported value of the loss function may occur before the final epoch. For example, in the case of 200 epochs, the minimal value (0.088032) was obtained at epoch 187, while the reported value (after 200 epochs) is 0.138172. The last column of this table reports the running time per epoch; there is no significant difference in computational time between epochs. Figure 1 shows the approximations computed by the network with ReLU activation.
The accuracy of the approximations is significantly better compared to the networks with fewer nodes in the hidden layer. It also appeared that the approximation around the "difficult point" \(x=0.25\) is better in the case of the uniform loss than in the case of MSE (figures for MSE can be found in Appendix A). Therefore, the uniform loss may be a better measure when it is essential to obtain accurate approximations around such points. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Epoch & Final loss & Minimum loss & Run time (per epoch) \\ \hline 50 & 0.130828 & & 2.52s \(\pm\) 79.3ms \\ 45 & & 0.117747 & \\ \hline 100 & 0.139669 & & 2.51s \(\pm\) 96.8ms \\ 97 & & 0.111249 & \\ \hline 200 & 0.138172 & & 2.49s \(\pm\) 59.2ms \\ 187 & & 0.088032 & \\ \hline \end{tabular} \end{table} Table 1: Results: experiments of NN with ReLU activation Figure 1: Approximation computed by a neural network with ReLU activation, 50, 100 and 200 epochs. ### Neural Network with rational approximation to ReLU activation In this section, we present the results of numerical experiments where the experiment settings are similar to those in Section 4.1, but the activation function is the rational approximation to ReLU. This approximation was found by the differential correction method. Our activation function is a rational function of degree \((3,2)\): the rational approximation is the ratio of two polynomials, the degree of the numerator is \(3\) and the degree of the denominator is \(2\). The coefficients of the rational activation function come from the best rational \((3,2)\) approximation to the ReLU function. These coefficients are fixed throughout the whole training procedure. We do not learn the coefficients with the rest of the parameters of the network. Table 2 shows the improvement in accuracy compared to the standard ReLU activation in Section 4.1. This observation is especially prominent when the number of epochs is \(100\) or \(200\). Figure 2 shows that the accuracy of the approximation improves, even at the "difficult point", especially when the number of epochs is \(100\) or more. Overall conclusion: the approximation results are more accurate when the rational approximation to ReLU is used as the activation function rather than ReLU itself. This observation is valid for both MSE and uniform loss. Results related to MSE can be found in Appendix A. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Epoch & Final loss & Minimum loss & Run time (per epoch) \\ \hline 50 & 0.175439 & & 2.82s \(\pm\) 90.5ms \\ 46 & & 0.125264 & \\ \hline 100 & 0.105346 & & 2.77s \(\pm\) 79.5ms \\ 99 & & 0.071646 & \\ \hline 200 & 0.084491 & & 2.83s \(\pm\) 382ms \\ 195 & & 0.053663 & \\ \hline \end{tabular} \end{table} Table 2: Results: experiments of NN with rational approximation to ReLU ### Neural Network with rational activation In this case, our activation function is a rational function of degree \((3,2)\). The coefficients of the rational activation function are now a part of the parameter set. We learn these coefficients as we learn the other parameters during the training procedure. This type of network, with a rational activation function whose coefficients are learnable parameters, is called a 'rational neural network'; more details can be found in [6]. The Python code for this section is also from [6]. Table 3 shows that when the number of nodes in the hidden layer is 10, the optimal loss function values are very close to the case when ReLU was approximated by the rational function.
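A minimal sketch of such a learnable rational activation of degree \((3,2)\) is given below. The initial coefficient values here are placeholders chosen so that the denominator \(1+x^{2}\) starts out positive; the actual experiments initialise with the best \((3,2)\) approximation to ReLU, following [6].

```python
import torch

class RationalActivation(torch.nn.Module):
    """Degree (3, 2) rational activation P(x)/Q(x) with learnable coefficients."""
    def __init__(self):
        super().__init__()
        # placeholder initial coefficients; Q(x) = 1 + x^2 > 0 avoids poles at the start
        self.p = torch.nn.Parameter(torch.tensor([0.0, 0.5, 1.0, 0.5]))  # numerator, degree 3
        self.q = torch.nn.Parameter(torch.tensor([1.0, 0.0, 1.0]))       # denominator, degree 2
    def forward(self, x):
        num = sum(c * x**k for k, c in enumerate(self.p))
        den = sum(c * x**k for k, c in enumerate(self.q))
        return num / den  # training may move Q's roots onto the real line; a careful
                          # initialisation, as in [6], mitigates this

# the activation replaces ReLU in the network of Section 4.1:
model = torch.nn.Sequential(
    torch.nn.Linear(1, 10), RationalActivation(), torch.nn.Linear(10, 1))
```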
Figure 3 confirms these results. The approximation appears to be accurate even around the "difficult point" when the number of epochs is 200 (similar to MSE). \begin{table} \begin{tabular}{|c|c|c|c|} \hline Epoch & Final loss & Minimum loss & Run time (per epoch) \\ \hline 50 & 0.131586 & & 2.75s \(\pm\) 77.9ms \\ 44 & & 0.114195 & \\ \hline 100 & 0.104754 & & 2.64s \(\pm\) 62.1ms \\ 96 & & 0.097785 & \\ \hline 200 & 0.078658 & & 2.76s \(\pm\) 423ms \\ 177 & & 0.055616 & \\ \hline \end{tabular} \end{table} Table 3: Results: experiments with rational activation function Figure 3: Approximation computed by the usual training process, 50, 100 and 200 epochs. ### Neural Network with rational activation, learnt with the split training method In this section, our activation function is a rational function of degree \((3,2)\). The coefficients are now a part of the parameter set. Similar to the network in Section 4.3, we learn these coefficients as we learn the other parameters during the training procedure. The training process is done in three steps: 1. the weights between the input and the hidden layer, 2. the weights between the hidden layer and the output, 3. the rational coefficients. This approach is based on block coordinate-wise optimisation; therefore, even though it produces a sequence which decreases the functional values, the limit point is not guaranteed to be globally optimal. Table 4 shows that the results are comparable for this method and for the cases of rational activation and rational approximation to the ReLU activation function (in terms of the optimal values of the loss function). Figure 4 shows that, as the number of epochs increases, the approximation improves even around the "difficult point". Overall conclusion: the approximation results are more accurate when the rational activation function is used with the split training method rather than with the usual training approach. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Epoch & Final loss & Minimum loss & Run time (per epoch) \\ \hline 50 & 0.168947 & & 2.65s \(\pm\) 72.2ms \\ 49 & & 0.113316 & \\ \hline 100 & 0.093841 & & 2.71s \(\pm\) 662ms \\ 75 & & 0.087595 & \\ \hline 200 & 0.061735 & & 2.82s \(\pm\) 384ms \\ 190 & & 0.053655 & \\ \hline \end{tabular} \end{table} Table 4: Results: experiments with split method Figure 4: Approximation computed by the split training process, 50, 100 and 200 epochs. ## 5 Direct rational approximation approach: the differential correction and AAA methods. In this section, all the experiments correspond to the uniform loss. The decision variables are the coefficients of the rational function. We compare two main methods for constructing rational approximations: the differential correction method (which converges to a globally optimal solution) and the AAA method (a practical method, which converges to a "nearly optimal" solution).
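To illustrate how such methods reduce to linear programming, here is a minimal sketch of the bisection method for the uniform (Chebyshev) rational approximation problem, using SciPy's linear programming solver. The sample size, the tolerance, the denominator normalisation \(\delta\le q(x_{i})\le 1\) and the Chebyshev basis are our own choices; the differential correction method used in our univariate experiments replaces this feasibility test with a different LP subproblem.

```python
# A sketch of the bisection method for uniform rational approximation, assuming
# 200 sample points, degrees (21, 20) and the normalisation delta <= q(x_i) <= 1.
import numpy as np
from scipy.optimize import linprog

def feasible(theta, F, P, Q, delta=1e-6):
    """Is there a rational p/q with |f - p/q| <= theta on all sample points?"""
    na, nb = P.shape[1], Q.shape[1]
    rows, rhs = [], []
    for i in range(len(F)):
        rows.append(np.concatenate([-P[i], (F[i] - theta) * Q[i]])); rhs.append(0.0)
        rows.append(np.concatenate([ P[i], (-F[i] - theta) * Q[i]])); rhs.append(0.0)
        rows.append(np.concatenate([np.zeros(na), -Q[i]])); rhs.append(-delta)  # q >= delta
        rows.append(np.concatenate([np.zeros(na),  Q[i]])); rhs.append(1.0)     # q <= 1
    res = linprog(np.zeros(na + nb), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=(None, None), method="highs")                          # feasibility LP
    return res.status == 0

x = np.linspace(-1.0, 1.0, 200)
F = np.sqrt(np.abs(x - 0.25))
P = np.polynomial.chebyshev.chebvander(x, 21)   # numerator basis, degree 21
Q = np.polynomial.chebyshev.chebvander(x, 20)   # denominator basis, degree 20
lo, hi = 0.0, F.max()                           # hi is feasible (take p = 0, q = const)
while hi - lo > 1e-8:                           # bisection on the error level theta
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid, F, P, Q) else (mid, hi)
print("optimal uniform error within", [lo, hi])
```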
### The differential correction method #### 5.1.1 Rational approximation of degree (21,20), the differential correction method. When one considers a neural network with three layers, just 10 nodes in the hidden layer and a rational activation function of degree \((3,2)\), the output is a rational function of degree \((21,20)\). Therefore, we can approximate the function \(f(x)=\sqrt{|x-0.25|}\) by a rational function of degree \((21,20)\) using the differential correction method and compare this approximation with the results obtained for the uniform loss function and \(10\) nodes in the hidden layer. Results related to the case where the number of nodes in the hidden layer is \(2\) can be found in Appendix A. In theory, the direct approximation by a rational function of degree \((21,20)\) is more flexible. We use Python in our experiments. We also compute the rational approximations of degree \((20,20)\) for comparison. We will also use these additional results to compare with the AAA method, which requires the same degree in the numerator and the denominator. Table 5 summarises the computational time and the optimal loss values for the degrees \((21,20)\), \((20,20)\) and \((21,21)\). The results demonstrate that the computational time is very small for all three settings and the corresponding errors are very similar. Interestingly, increasing the degree does not always improve the optimal loss function value. This can be explained as a result of computational instability when the dimension of the corresponding optimisation problems increases. Figure 5 depicts the approximation of degree \((21,20)\). The approximation is accurate, but there is a strong oscillation of the approximation, which is not surprising for high degree rational functions. Figure 5: Approximation by the differential correction method, rational approximation degree is \((21,20)\), \((20,20)\) and \((21,21)\) respectively. \begin{table} \begin{tabular}{|c|c|c|} \hline Degrees & Error (uniform norm) & Run time \\ \hline (21,20) & 0.04376536257200335 & 10.98 \\ \hline (20,20) & 0.04841949102524923 & 11.58 \\ \hline (21,21) & 0.04411128396473672 & 12.48 \\ \hline \end{tabular} \end{table} Table 5: Direct rational approximation for degrees \((21,20)\), \((20,20)\) and \((21,21)\) by the differential correction method. ### The AAA method #### 5.2.1 Rational approximation of degree (21,20) with the AAA method In order to compare the differential correction method with AAA, we apply the AAA method in the same experimental settings as in the case of the differential correction method. Due to the limitations of the AAA method, we cannot use different degrees for the numerator and the denominator. Hence, for these experiments, we use the rational approximation of degree (20,20) and the rational approximation of degree (21,21). The codes are developed in Python by C. Hofreither [13]. Table 6 and Figure 6 show that the approximations are very accurate and the computational time is very low. These results suggest that the AAA method is very fast and accurate, even when the degree is high. The function and the approximation are indistinguishable in the pictures. ## 6 Results: Multivariate rational approximation In this section, we approximate the values \(u_{i}\), given at the spatio-temporal points \((x_{i},t_{i})\), by using the multivariate bisection method; the algorithm was implemented in Matlab. The specifications of the dataset are as follows: * observation data (known function values) of size \(512\times 201\), * spatio-temporal points (known domain values), * \(x_{i}\in[0,40]\) with a step size of \(1\), * \(t_{i}\in[-20,19.84375]\) with a step size of \(0.390625\). In [6], the authors used the Euclidean norm to compute the errors, but here we compute the error of the approximation in the uniform norm. Therefore, there are not many possibilities for comparison with the findings in [6]. All of our experiments are constructed by using a certain subset of the domain points (every \(k^{\text{th}}\) point of each dimension and the corresponding function values). The approximation and the uniform error term are computed on the same reduced domain. This extra step has been taken to ensure that the large data volume does not cause the MATLAB programs to crash.
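The reduction step itself is straightforward; a small Python sketch of the subsampling and of the uniform error computation is given below (the names `U`, `x`, `t` for the data array and the grids are our own).

```python
# A minimal sketch of the domain reduction and the uniform error computation.
import numpy as np

def reduce_domain(U, x, t, k):
    """Keep every k-th point along each dimension of the gridded dataset."""
    return U[::k, ::k], x[::k], t[::k]

def uniform_error(U, R):
    """Uniform (Chebyshev) error between the data U and the approximation values R."""
    return np.max(np.abs(U - R))
```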
### The original dataset The following pictures (Figure 7, Figure 8, Figure 9) show the original dataset. Some images are constructed by using a certain subset of the domain points. In particular, we use the pairs of every \(20^{\text{th}}\) point of the domain (along each of the two dimensions) and the pairs of every \(10^{\text{th}}\) point of the domain in order to reduce the dimension of the original problem. For simplicity, we sometimes refer to these pairs as "every \(10^{\text{th}}\) point of the domain" or "every \(20^{\text{th}}\) point of the domain". Figure 8: Pictures constructed for the original dataset taking the pairs of every \(10^{\text{th}}\) point of the domain: 3D and 2D view Figure 9: Pictures constructed for the original dataset taking all the points of the domain: 3D and 2D view Now, we compute rational approximations of different degrees and report the uniform approximation error and the computational time for each degree when \(k=20\) and \(k=10\). We start our experiments with a rational approximation of degree \((2,2)\) and keep increasing the degree up to \((20,20)\). The classical monomials in the rational function were replaced by Chebyshev monomials for better approximations. ### Rational approximation of degree (2,2) We report 3D and 2D pictures of the rational approximation of degree (2,2) along with its error curve. In Figure 10, we compute the approximation by only considering the pairs of every \(20^{\text{th}}\) point of the domain and in Figure 11, the approximation is computed on the domain which contains the pairs of every \(10^{\text{th}}\) point of the domain. The computational time and the uniform error are presented in Table 7. Figure 10: Approximations (above) and the error curves (below) are constructed on the domain which consists of pairs of every \(20^{\text{th}}\) point of the original domain: 3D and 2D view Figure 11: Approximations (above) and the error curves (below) are constructed on the domain which consists of pairs of every \(10^{\text{th}}\) point of the original domain: 3D and 2D view \begin{table} \begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & Uniform error & Time (sec.) \\ \hline Every 20\({}^{\text{th}}\) point of the domain & 1.29449744612994 & 2.049840 \\ \hline Every 10\({}^{\text{th}}\) point of the domain & 1.36991465992617 & 2.504411 \\ \hline \end{tabular} \end{table} Table 7: Uniform error and computational time for the degree (2,2) approximation One can clearly see that the algorithm runs faster when the number of discretised points in the domain is smaller. ### Rational approximation of degree (5,5) Here, we increase the degree of the approximation to \((5,5)\). Figure 12 and Figure 13 show the approximations computed on the domains which contain the pairs of every \(20^{\text{th}}\) point of the domain and every \(10^{\text{th}}\) point of the domain, respectively. The computational time and the uniform error are presented in Table 8. The approximations are better than in the degree (2,2) case and the uniform error is drastically reduced. Figure 12: Approximations (above) and the error curves (below) are constructed on the domain which consists of pairs of every \(20^{\text{th}}\) point of the original domain: 3D and 2D view. ### Rational approximation of degree (10,10) In this section, we approximate the original function by a rational approximation of degree (10,10).
The approximations and the error curves are depicted in Figure 14 and Figure 15. The approximations resemble the original functions well in both domains. The computational time and the uniform error are presented in Table 9. The uniform error term is much smaller than in the case where the degree of the approximation is (5,5). On the other hand, the algorithm takes more time to compute the approximations as the degree increases. Figure 14: Approximations (above) and the error curves (below) are constructed on the domain which consists of pairs of every \(20^{\text{th}}\) point of the original domain: 3D and 2D view. Figure 15: Approximations (above) and the error curves (below) are constructed on the domain which consists of pairs of every \(10^{\text{th}}\) point of the original domain. ### Rational approximation of degree (18,18) We now increase the degree of the approximation to (18,18). The approximations presented in Figure 16 and Figure 17 are very similar to the corresponding original functions presented in Figure 7 and Figure 8. One can also see from Table 10 that the uniform error is much smaller than in the previous cases. \begin{table} \begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & Uniform error & Time (sec.) \\ \hline Every \(20^{\text{th}}\) point of the domain & 0.014061551772703 & 66.460222 \\ \hline Every \(10^{\text{th}}\) point of the domain & 0.034389125232539 & 167.995076 \\ \hline \end{tabular} \end{table} Table 10: Uniform error and computational time for the degree (18,18) approximation Figure 17: Approximations (above) and the error curves (below) are constructed on the domain which consists of pairs of every \(10^{\text{th}}\) point of the original domain: 3D and 2D view. ### Rational approximation of degree (20,20) In this section, we consider approximating the original functions in Figure 7 and Figure 8 by a rational function of degree (20,20). Even though the degree of the rational function is higher, the number of parameters (the coefficients of the polynomials in the numerator and the denominator) is smaller (\(\approx 400\)) compared to the number of parameters in the neural networks that the authors of [6] used in their experiments (\(\approx 8035\)). As the degree of the rational function increases, the condition number of the constraint matrices appearing in the auxiliary problem of the bisection method increases and, in some cases, the MATLAB code tends to fail. This issue can be solved by adding an extra linear constraint to the auxiliary problem of the bisection method which restricts the denominator polynomial from above. The bisection method has the flexibility of adding constraints, and this additional adjustment allows us to compute higher degree rational approximations. A similar observation can be found in [32], where the authors use this property of the bisection method in matrix function evaluations. The experiments in this section, where the degree of the approximation is \((20,20)\), were conducted with an additional constraint which restricts the denominator from above.
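In other words, the feasibility subproblem solved at each step of the bisection method takes the following form, where \(\theta\) is the current bisection level, \(\delta>0\) keeps the denominator positive, and \(M\) is the added upper bound (the specific constants are implementation choices):

\[\text{find } p,q \text{ such that}\quad |f(x_{i},t_{i})\,q(x_{i},t_{i})-p(x_{i},t_{i})|\leq\theta\,q(x_{i},t_{i}),\qquad \delta\leq q(x_{i},t_{i})\leq M,\quad\text{for all } i.\]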
Clearly, the approximations in Figure 18 and Figure 19 coincide with the original functions and the uniform error is much smaller compared to the lower degree approximations. Overall conclusion: our results cannot be compared directly with the results in [6], since the errors are computed in different norms. However, we demonstrate that rational approximation in a multivariate domain is as powerful as neural networks and is capable of approximating complicated functions using far fewer parameters than a neural network. For each degree, we also computed the uniform error on the whole domain once the approximation had been computed on a selected reduced domain. The results can be found in Table 12. Note that in the case of degree (20,20), the denominator is bounded from above. \begin{table} \begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & Degree & Uniform error \\ \hline \multirow{5}{*}{Pairs of every 20\({}^{\text{th}}\) point of the domain} & (2,2) & 1.49062088756304 \\ \cline{2-3} & (5,5) & 1.57543434735584 \\ \cline{2-3} & (10,10) & 245.34679368905 \\ \cline{2-3} & (18,18) & 11404.1284016 \\ \cline{2-3} & (20,20) & 88023.9109319522 \\ \hline \multirow{5}{*}{Pairs of every 10\({}^{\text{th}}\) point of the domain} & (2,2) & 1.40803015961408 \\ \cline{2-3} & (5,5) & 0.702071289628257 \\ \cline{2-3} & (10,10) & 0.334154891477345 \\ \cline{2-3} & (18,18) & 15.3976686595557 \\ \cline{2-3} & (20,20) & 9.191409640164363 \\ \hline \end{tabular} \end{table} Table 12: Uniform error computed on the whole domain One can clearly see that the uniform error computed on the whole domain is very similar to the uniform error computed on the domain with fewer selected points (the reduced domain) when the degree of the approximation is small. This observation is most prominent in the case where the reduced domain consists of pairs of every \(10^{\text{th}}\) point of the domain. For higher degrees, the difference between the errors computed on the whole domain and on the reduced domain becomes much larger when the reduced domain has far fewer points (that is, when it contains only the pairs of every \(20^{\text{th}}\) point of the whole domain). ## 7 Conclusions and future research directions In this paper we compared several approaches to approximating functions, each a slight modification of the previous one. Starting from a classical neural network with a ReLU activation function, we gradually changed our setup: first by replacing the activation function with a rational function approximating ReLU, then by including the coefficients of the rational activation function in the learning parameters, and finally by replacing the neural network with a best rational approximation with the same number of parameters, computed using traditional optimisation approaches. Our finding is that, as each step increases the flexibility and the size of the feasible space, we observe an improvement in the approximation. In particular, the optimisation-based approximation achieves a better approximation than the neural network approaches. The training time is also in favour of the optimisation-based approximation, especially because it is often possible to achieve a good approximation with fewer parameters. As the dimension of the function to approximate increases, it is not clear whether this advantage remains. Our study did not consider other aspects of the approximation, such as generalisability. It is well known in general that deep learning has good generalisability properties and avoids overfitting. However, it is not clear whether adding extra flexibility (for example, including the coefficients of the activation function in the learning parameters) retains those generalisability properties. Further work needs to be done to gain better insights about this.
## Acknowledgements, Statements and Declarations We are grateful to the Australian Research Council for supporting this work via Discovery Project DP180100602.
2310.14084
Graph Neural Networks and Applied Linear Algebra
Sparse matrix computations are ubiquitous in scientific computing. With the recent interest in scientific machine learning, it is natural to ask how sparse matrix computations can leverage neural networks (NN). Unfortunately, multi-layer perceptron (MLP) neural networks are typically not natural for either graph or sparse matrix computations. The issue lies with the fact that MLPs require fixed-sized inputs while scientific applications generally generate sparse matrices with arbitrary dimensions and a wide range of nonzero patterns (or matrix graph vertex interconnections). While convolutional NNs could possibly address matrix graphs where all vertices have the same number of nearest neighbors, a more general approach is needed for arbitrary sparse matrices, e.g. arising from discretized partial differential equations on unstructured meshes. Graph neural networks (GNNs) are one approach suitable to sparse matrices. GNNs define aggregation functions (e.g., summations) that operate on variable size input data to produce data of a fixed output size so that MLPs can be applied. The goal of this paper is to provide an introduction to GNNs for a numerical linear algebra audience. Concrete examples are provided to illustrate how many common linear algebra tasks can be accomplished using GNNs. We focus on iterative methods that employ computational kernels such as matrix-vector products, interpolation, relaxation methods, and strength-of-connection measures. Our GNN examples include cases where parameters are determined a-priori as well as cases where parameters must be learned. The intent with this article is to help computational scientists understand how GNNs can be used to adapt machine learning concepts to computational tasks associated with sparse matrices. It is hoped that this understanding will stimulate data-driven extensions of classical sparse linear algebra tasks.
Nicholas S. Moore, Eric C. Cyr, Peter Ohm, Christopher M. Siefert, Raymond S. Tuminaro
2023-10-21T18:37:56Z
http://arxiv.org/abs/2310.14084v1
# Graph Neural Networks and Applied Linear Algebra ###### Abstract Sparse matrix computations are ubiquitous in scientific computing. Given the recent interest in scientific machine learning, it is natural to ask how sparse matrix computations can leverage neural networks (NN). Unfortunately, multi-layer perceptron (MLP) neural networks are typically not natural for either graph or sparse matrix computations. The issue lies with the fact that MLPs require fixed-sized inputs while scientific applications generally generate sparse matrices with arbitrary dimensions and a wide range of different nonzero patterns (or matrix graph vertex interconnections). While convolutional NNs could possibly address matrix graphs where all vertices have the same number of nearest neighbors, a more general approach is needed for arbitrary sparse matrices, e.g. arising from discretized partial differential equations on unstructured meshes. Graph neural networks (GNNs) are one such approach suitable to sparse matrices. The key idea is to define aggregation functions (e.g., summations) that operate on variable size input data to produce data of a fixed output size so that MLPs can be applied. The goal of this paper is to provide an introduction to GNNs for a numerical linear algebra audience. Concrete GNN examples are provided to illustrate how many common linear algebra tasks can be accomplished using GNNs. We focus on iterative and multigrid methods that employ computational kernels such as matrix-vector products, interpolation, relaxation methods, and strength-of-connection measures. Our GNN examples include cases where parameters are determined _a-priori_ as well as cases where parameters must be learned. The intent with this article is to help computational scientists understand how GNNs can be used to adapt machine learning concepts to computational tasks associated with sparse matrices. It is hoped that this understanding will further stimulate data-driven extensions of classical sparse linear algebra tasks. ## 1 Introduction Artificial intelligence (AI) and machine learning (ML) have drawn a great deal of media attention -- deep fake audio [19], transformer-based AI chatbots [1], stable diffusion AI image generation and even AI Elvis [11] singing a song have all inserted themselves into popular culture. In scientific realms, medical image identification (e.g. detection of cancer) has also generated media attention exposing the potential of ML technologies. But in the realm of applied mathematics the impact of AI/ML has been far less visible to the general public. The goal of this paper is to provide an introduction to graph neural networks (GNNs) and to show how this specific class of AI/ML algorithms can be used to represent (and enhance) traditional algorithms in numerical linear algebra. Associated code which implements the example GNNs is provided at [https://github.com/sandialabs/gnn-applied-linear-algebra/](https://github.com/sandialabs/gnn-applied-linear-algebra/). Introductions to neural network models in ML and AI typically focus on deep neural networks (DNNs) or convolutional neural networks (CNNs). However, GNNs can be notably more appropriate for many computational science tasks [29]. DNNs and CNNs generally assume _structured_ input data -- a vector of fixed size, or an image where pixels are aligned along Cartesian directions.
While structured applications do exist in computational science (uniformly meshed problems for example), many other applications are _unstructured_, again often associated with meshes. Consider a drawing of an object that an engineer wishes to simulate on a computer. After pre-processing, the geometry of the object will be subdivided into a mesh. The object may have holes, protrusions or a disparity of feature scales such that a structured, Cartesian mesh cannot be generated. As a result, an unstructured mesh is constructed. Application of traditional DNN and CNN tools would likely not be possible for this mesh due to the unstructured nature of the data. However, GNN models could still be employed on this geometry for associated AI/ML tasks. So what is a GNN? As suggested by the name, graph neural networks are built on the concept of associating data with edges and vertices of a _graph_. For the above engineered object, the mesh itself can define the graph. Graphs can also represent structured data -- for an image, pixels can be vertices and edges can be used to represent neighboring pixels. The general nature of unstructured graphs makes them highly appropriate for a wide range of science and engineering applications. Beyond the meshing example, another example of unstructured data arises from social network graphs where each edge represents a connection between two people. In these types of graph networks, eigenvector calculations provide useful information for determining the influence that a vertex (e.g., a person) has on the rest of the network. GNNs were first proposed in [16] and [28] as a means of adapting a convolutional neural network to graph problems. The issue with basic neural networks (e.g. DNNs and CNNs) that GNNs address is that they require a fixed input/feature size. That is, a basic neural network can be viewed as a type of function approximation where the number of input/feature values to the function is always the same when either training the network or when using the network for inference. If a graph has a simple repeatable interaction pattern, then a basic neural network can be effectively applied to a fixed window (i.e., a subset of the graph), which can be moved to address different portions of the network. However, general graphs have no such repeatable pattern. The key difficulty lies in the fact that the number of edges adjacent to each vertex can vary significantly throughout the network. In the case of a social network graph, for instance, a popular person has many friends while a loner might have few interactions. To address this variability, GNNs include the notion of general aggregation functions that allow for a variable number of inputs. Simple aggregation function examples are summation or maximum, which are well defined regardless of the number of inputs. Aggregation functions are used to combine information from edges adjacent to a vertex. These functions can include a fixed number of learnable parameters. For example, an aggregation function might include both a summation and a maximum function whose results are combined in a weighted fashion using a learnable weight. These are then combined with transformation functions associated with edges and vertices, which can also include learnable parameters. It is well known that many numerical linear algebra algorithms also possess a graph structure.
For instance, in Section 3.1 we detail how sparse matrix-vector multiplication transforms matrix entries, stored as graph edge attributes, and vector values, stored as graph vertex attributes, into a new set of graph vertex attributes representing the product. This algorithm can be written as a GNN with precise choices of aggregation and transformation functions. We provide additional descriptions of standard iterative methods and components of algebraic multigrid (AMG) preconditioners that can be re-formulated using GNNs. Some of the considered multigrid components include matrix-vector products, Jacobi relaxation, AMG strength-of-connection measures, and AMG interpolation operators. While the associated GNNs do not contain trainable parameters, they illustrate the flexibility of GNNs in representing traditional numerical linear algebra components. The potential for advances in AMG using GNN architectures is made apparent by including trainable parameters in the transformations. These parameters will enable learning complex nonlinear relationships between the feature spaces, implicitly encoding the topological and numerical properties of a sparse matrix. We explore some prototypical use cases in Section 4. We note that more complicated applications of GNNs for multigrid have been considered in the literature and encourage the interested reader to look more deeply [22, 23, 30, 32]. As this paper is educational in nature, we are not proposing new AI/ML powered methods that significantly outperform existing methods, but rather demonstrating how GNNs can allow us to modify existing numerical methods while simultaneously highlighting some of the challenges that must be addressed. ### Developing Intuition: A Viral Example To make the discussion more accessible, we present an example based on the spread of disease (a subject that we are all unfortunately familiar with). The intent is to help the reader develop intuition about components of the GNN algorithms and data structure. In the text that follows we call out explicitly where we use the example. If the notation and ideas are clear to the reader, these _Viral Interludes_ can be skipped. Here, we briefly describe the setup of the viral example. We consider a community where each individual interacts on a weekly basis with a fixed set of community members. The one-week period over which the set of interactions occurs is referred to as a cycle. Initially, within the community each individual has a probability of being infected by the disease. Simultaneously, a cure, which also (conveniently) diffuses through individual interactions, has been distributed randomly through the population\({}^{1}\). The specific rates of diffusion for the disease and the cure, as well as the effectiveness of the intervention, are not known in advance and must be estimated based on observations. The goal then is to develop a model to assess each individual's risk of carrying the infection or carrying the cure after a particular number of cycles using the information about the interactions between individuals. Footnote 1: Perhaps additional individuals seek treatment after learning about it through peer interactions. ## 2 Graph Neural Network Background Graph neural networks (GNN) were introduced in [16, 28] to overcome limitations in convolutional neural networks (CNN) that assume a regular structure on the input data to the network. A CNN takes, as input data, structured data that is topologically like a Cartesian mesh, and produces output data with a similarly regular structure.
Bitmap images are the usual exemplar for CNN applications. By contrast, the topology of input/output data elements for a GNN is associated with a graph. A GNN takes attributes for each vertex and for each edge and produces a new set of vertex and edge attributes. Additionally, a set of global attributes associated with the entire graph can also be transformed to a new set of global attributes. Formalizing this description, define a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of \(N^{v}\) vertices, and \(\mathcal{E}\) is the set of \(N^{e}\) directed edges. A directed edge is defined as an ordered pair of vertices, \((v_{i},v_{j})=e_{ij}\in\mathcal{E}\) where the edge emanates from \(v_{i}\in\mathcal{V}\) and terminates at \(v_{j}\in\mathcal{V}\). The sets of features/attributes (data) defined on vertices are denoted by \(V=\{\mathbf{v}_{k}\in\mathbb{R}^{n_{v}}:k=1\ldots N^{v}\}\), on edges by \(E=\{\mathbf{e}_{ij}\in\mathbb{R}^{n_{e}}:e_{ij}\in\mathcal{E}\}\), and for the graph by \(\mathbf{g}\in\mathbb{R}^{n_{g}}\). While the terms "features" and "attributes" are both used to describe data associated with edges or vertices in the literature, following Battaglia et al. [4], we will prefer the term attributes. Table 1 summarizes notation used within this paper (including symbols that are introduced shortly). _Viral Interlude._ For our viral example each vertex in the graph \(v_{i}\in\mathcal{V}\) represents the \(i^{th}\) individual in the community. The initial vertex attribute, prior to the first cycle, is the vector \(\mathbf{v}_{i}\in[0,1]^{2}\subset\mathbb{R}^{n_{v}}\) where \(n_{v}=2\). The first component is the probability the individual carries the cure, while the second component is the probability the individual carries the disease. Similarly, the existence of an edge \(e_{ij}\in\mathcal{E}\) indicates an interaction between individuals \(i\) and \(j\). Initially, this edge contains as its attribute the length of the interaction, which is important in determining both the spread of the infection and the cure; thus \(\mathbf{e}_{ij}\in\mathbb{R}^{n_{e}}\) where \(n_{e}=1\). ### Graph Neural Network Function A GNN, denoted by \(\mathcal{N}\), is a parameterized function that acts on the attributes of a graph and produces new attributes while maintaining topology: \[\mathcal{N}\left(\mathcal{G},(V,E,\mathbf{g});\Theta\right)=\{V^{\prime},E^{\prime},\mathbf{g}^{\prime}\}\,, \tag{1}\] where \(\Theta\) are the GNN parameters, and a tick denotes the output attributes. Note that the sizes of the attributes can change as a result of the neural network action; for example \(n_{v},n_{e},n_{g}\mapsto n_{v}^{\prime},n_{e}^{\prime},n_{g}^{\prime}\) where \(n_{v}\) can be different from \(n_{v}^{\prime}\), \(n_{e}\) can be different from \(n_{e}^{\prime}\), and \(n_{g}\) can be different from \(n_{g}^{\prime}\). This feature of GNNs will be used in the Viral Interludes, though not in the examples in Section 4. Even if the number of attributes changes after the GNN, the topology of the graph where the attributes reside is not perturbed by the graph neural network; however, edges and vertices can be marked as on or off via their attributes, even though the fundamental topology does not change. Also note that the GNN architectures discussed here apply only to directed graphs. Undirected graphs can be represented as well by treating an undirected edge as a pair of directed edges, one in each direction.
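As a concrete illustration (anticipating the sparse matrix setting of Section 3, where a nonzero \(A_{ij}\) becomes a directed edge from vertex \(j\) to vertex \(i\)), the following Python sketch packs a sparse matrix and a vector into edge, vertex, and global attribute sets; the matrix size and density are arbitrary choices.

```python
# A sketch: packing a sparse matrix and a vector into graph attributes.
import numpy as np
import scipy.sparse as sp

A = (sp.random(5, 5, density=0.4) + sp.eye(5)).tocoo()   # small sparse matrix with self-edges

# each nonzero A_ij becomes a directed edge from vertex j to vertex i,
# carrying the matrix entry as its edge attribute
edges = list(zip(A.col, A.row))
edge_attr = A.data.copy()
vertex_attr = np.random.rand(5)   # e.g., a vector x, one attribute per vertex
global_attr = np.zeros(1)         # a placeholder global attribute
```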
In this paper, we focus on GNN models based on message passing that were originally proposed in [13]. This model decomposes the action of the neural network into a sequence of layers that are themselves parameterized functions of the graph topology and attributes. For instance, a two layer GNN with layers denoted as \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) is \[\mathcal{N}(\mathcal{G},(V,E,\mathbf{g});\{\Theta_{1},\Theta_{2}\}):=\mathcal{L}_{2}(\mathcal{G},\mathcal{L}_{1}(\mathcal{G},(V,E,\mathbf{g});\Theta_{1});\Theta_{2}). \tag{2}\] The specifics of layer architectures are discussed in Section 2.2. \begin{table} \begin{tabular}{c l} Symbol & Meaning \\ \hline \(\mathcal{G}\) & Graph \\ \(\mathcal{V}\) & Set of vertices in a graph \\ \(\mathcal{E}\) & Set of edges in a graph \\ \(\mathcal{N}\) & Graph neural network \\ \(\Theta\) & Graph neural network parameters \\ \(N^{v}\) & Number of vertices in a graph: \(|\mathcal{V}|\) \\ \(N^{e}\) & Number of edges in a graph: \(|\mathcal{E}|\) \\ \(V\) & Set of input attributes defined on vertices \\ \(E\) & Set of input attributes defined on edges \\ \(\mathbf{g}\) & Set of input global attributes on a graph \\ \(n_{v}\) & Number of input vertex attributes per vertex \\ \(n_{e}\) & Number of input edge attributes per edge \\ \(n_{g}\) & Number of input global attributes \\ \(v_{k}\) & \(k^{th}\) vertex in the graph \\ \(e_{ij}\) & Directed edge that emanates from \(v_{i}\) and terminates at \(v_{j}\) \\ \(\mathbf{v}_{k}\) & Attributes associated with vertex \(v_{k}\) \\ \(\mathbf{e}_{ij}\) & Attributes associated with edge \(e_{ij}\) \\ \(\overline{\mathbf{e}}_{j}\) & Aggregated attributes from all edges terminating at vertex \(v_{j}\) \\ \(\mathbb{R}^{\text{var}(v)}\) & Variable input space for variable vertex-based functions \\ \(\rho_{e\to v}()\) & Function whose inputs (outputs) are edge (vertex) attributes of a vertex \\ \(\rho_{e\to g}()\) & Function whose inputs (outputs) are edge (global) attributes of the graph \\ \(\rho_{v\to g}()\) & Function whose inputs (outputs) are vertex (global) attributes of the graph \\ \(n_{e\to v}\) & Number of output vertex attributes produced by \(\rho_{e\to v}()\) \\ \(n_{e\to g}\) & Number of output global attributes produced by \(\rho_{e\to g}()\) \\ \(n_{v\to g}\) & Number of output global attributes produced by \(\rho_{v\to g}()\) \\ \end{tabular} \end{table} Table 1: Notation summary _Viral Interlude._ For our viral model, a GNN function \(\mathcal{N}\) represents the evolution of the disease within this community from the initial state to a final state. \(\mathcal{N}\) is constructed through composition of the layers, where a layer \(\mathcal{L}_{i}\) describes an evolution over a single cycle. In a cycle, new attributes for each individual (the set \(V^{\prime}\)) are computed by considering a combination of the vertex attributes \(V\) and the time of interaction specified by the edge attributes \(E\). The parameters \(\Theta\) are not known _a priori_ and must be calibrated based on empirical data about the evolution of the outbreak (see Section 2.3). In a GNN, the parameter \(\Theta\) encompasses these model unknowns. The layer may also require edge updates based on changing interaction times. Further, if the number of values representing an edge or vertex state changes as a result of applying a layer then we have \(n^{\prime}_{v}\neq n_{v}\) and/or \(n^{\prime}_{e}\neq n_{e}\).
For instance, an additional attribute could be added by the GNN layer for each vertex specifying the survival (or not) of an individual. This boolean attribute could have the effect of virtually removing an individual from the graph in subsequent layers of the GNN function. ### Message Passing Layer While several different variations of message passing GNNs exist, we will describe and utilize the GNN from [4]. The parameterization of a GNN layer is defined by three aggregation functions and three update functions. Because graphs can have varying topology, a method for combining attributes from multiple entities of the same type is necessary. Formally, these aggregation functions are \[\rho_{e\to v}(\mathbf{e}^{\prime}_{*k}):\mathbb{R}^{\mathrm{var}(v)}\to \mathbb{R}^{n_{e\to v}},\quad\rho_{e\to g}(E^{\prime}):\mathbb{R}^{N^{e}\,\cdot\,n_{e}^{\prime}}\to\mathbb{R}^{n_{e\to g}},\quad\rho_{v\to g}(V^{\prime}):\mathbb{R}^{N^{v}\,\cdot\,n_{v}^{\prime}}\to\mathbb{R}^{n_{v\to g}} \tag{3}\] where \(\mathbf{e}^{\prime}_{*k}\) denotes the set of updated attributes associated with all edges that terminate at vertex \(v_{k}\). The function \(\rho_{e\to v}()\) is applied at each vertex (i.e., for each \(k=1,\ldots,N^{v}\)) and the notation \(\mathbb{R}^{\text{var}(v)}\) denotes that the \(\rho_{e\to v}\) function is variadic: the input space is a finite (but variable) number of edge attribute vectors. This function takes the set of attributes from all edges which terminate at a single vertex and aggregates them together into \(n_{e\to v}\) attributes that are collected at this vertex. The other two aggregation functions, \(\rho_{e\to g}\) and \(\rho_{v\to g}\), are similar but combine updated attributes \(E^{\prime}\) associated with all graph edges and updated attributes \(V^{\prime}\) associated with all graph vertices respectively. Some simple examples of aggregation functions include summation, minimum, and maximum. The aggregated attributes are then used to update the attributes of the vertices and the graph. The update functions are \[\phi_{e}(\mathbf{e}_{ij},\mathbf{v}_{i},\mathbf{v}_{j},\mathbf{g}):\mathbb{R}^{n_{e}}\times\mathbb{R}^{n_{v}}\times\mathbb{R}^{n_{v}}\times\mathbb{R}^{n_{g}}\to\mathbb{R}^{n^{\prime}_{e}}, \tag{4}\] \[\phi_{v}(\mathbf{v}_{i},\overline{\mathbf{e}}_{i},\mathbf{g}):\mathbb{R}^{n_{v}}\times\mathbb{R}^{n_{e\to v}}\times\mathbb{R}^{n_{g}}\to\mathbb{R}^{n^{\prime}_{v}},\] (5) \[\phi_{g}(\mathbf{g},\rho_{e\to g}(E^{\prime}),\rho_{v\to g}(V^{\prime})):\mathbb{R}^{n_{g}}\times\mathbb{R}^{n_{e\to g}}\times\mathbb{R}^{n_{v\to g}}\to\mathbb{R}^{n^{\prime}_{g}} \tag{6}\] where \(\overline{\mathbf{e}}_{i}\) are edge attributes that have been aggregated to vertex \(i\) (i.e., \(\overline{\mathbf{e}}_{i}=\rho_{e\to v}(\mathbf{e}^{\prime}_{*i})\)). Each of these functions (with any implied parameters) takes, as input, the attributes of a graph entity and the attributes of entities in its neighborhood and transforms them into an updated attribute associated with the original graph entity. For instance, \(\phi_{e}\) is associated with updating an edge's attributes in the graph. The edge's neighborhood includes two vertices, and the graph itself. Therefore, it takes as input the attributes of the edge, its two neighboring vertices and the graph's attributes. The vertex update, \(\phi_{v}\), is associated with a vertex in the graph. The neighborhood of the vertex includes the graph itself and all the edges which terminate at the vertex.
The number of terminating edges can vary from vertex to vertex, which necessitates use of the variadic aggregation function \(\rho_{e\to v}\) that combines all the edge attributes together. The original attributes of the vertex, the aggregated edge attributes, and the global attributes are then used as arguments to the vertex update function. A similar process occurs in the global update \(\phi_{g}\) as well, where all updated edge attributes are aggregated together, all updated vertex attributes are aggregated together, and both are used to update the global graph attributes. With the aggregation and update functions described, we can now explain how they are combined in Algorithm 1 to compute the action of a GNN layer.

```
Input: Graph \(\mathcal{G}\), vertex attributes \(V=\{\mathbf{v}_{j}:j=1,\ldots,N^{v}\}\), edge attributes \(E=\{\mathbf{e}_{ij}:e_{ij}\in\mathcal{E}\}\) and global attributes \(\mathbf{g}\)
Output: updated edge attributes \(E^{\prime}\), vertex attributes \(V^{\prime}\) and global attributes \(\mathbf{g}^{\prime}\)
for each edge \(e_{ij}\in\mathcal{E}\) do
    \(\mathbf{e}^{\prime}_{ij}=\phi_{e}(\mathbf{e}_{ij},\mathbf{v}_{i},\mathbf{v}_{j},\mathbf{g})\)
end for
for each vertex \(v_{k}\in\mathcal{V}\) do
    \(\overline{\mathbf{e}}_{k}=\rho_{e\to v}(\mathbf{e}^{\prime}_{*k})\)
    \(\mathbf{v}^{\prime}_{k}=\phi_{v}(\mathbf{v}_{k},\overline{\mathbf{e}}_{k},\mathbf{g})\)
end for
\(\mathbf{g}^{\prime}=\phi_{g}(\mathbf{g},\rho_{e\to g}(E^{\prime}),\rho_{v\to g}(V^{\prime}))\)
return \(E^{\prime}\), \(V^{\prime}\), \(\mathbf{g}^{\prime}\)
```

**Algorithm 1** Computation of a Graph Network Layer The first **for** loop in the algorithm transforms input edge attributes using the update function \(\phi_{e}\). This is depicted graphically in Figure 1, with the initial state of the graph shown on the left and the final state shown on the right. For a single edge \(e_{12}\), its attributes, along with those of the neighboring vertices and the graph, are the input to \(\phi_{e}\), yielding an updated edge attribute \(\mathbf{e}^{\prime}_{12}\). While only one update is shown, all edges are updated (potentially in parallel) using the same update function. With the edges transformed, the second **for** loop in Algorithm 1 updates the vertex attributes. Each iteration contains two steps: the first is the aggregation of all edges terminating at the current vertex using \(\rho_{e\to v}\), and the second updates the vertex's attribute. This is depicted in Figure 2 for \(v_{4}\). Here, the edge neighborhood contains edges that terminate at \(v_{4}\). The attributes \(\mathbf{e}_{24}^{\prime},\mathbf{e}_{34}^{\prime}\) and \(\mathbf{e}_{14}^{\prime}\) for these three edges are aggregated using \(\rho_{e\to v}\), yielding \(\bar{\mathbf{e}}_{4}\). The aggregated edge attributes, together with the vertex attribute \(\mathbf{v}_{4}\) and the global attribute \(\mathbf{g}\), are input to the update function \(\phi_{v}\). While only one update is shown, this procedure is repeated (potentially in parallel) until all vertex attributes are updated. The final phase of the GNN layer transforms the global attributes of the graph. This is done by first aggregating all the transformed edge attributes using \(\rho_{e\to g}\) and all the transformed vertex attributes using \(\rho_{v\to g}\). Then the new global attributes \(\mathbf{g}^{\prime}\) are computed using \(\phi_{g}\). The transformed edge, vertex, and graph attributes are returned in the final line of Algorithm 1. A notable quality of the message passing layer is that if the graph attributes are ignored (\(\mathbf{g}\) doesn't change), then the action of the layer only considers the one-ring (or distance-one) neighborhood of a vertex for the attribute update. So the action of two consecutive GNN layers will update attributes based on vertices two edges away, or the two-ring neighborhood. This breadth-first approach has implications on the use of GNNs for processing subgraph information independently, leaving intriguing possibilities for parallelism.
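A minimal Python sketch of Algorithm 1 is given below. It uses dictionaries for clarity rather than efficiency, and the update/aggregation functions are passed in as arguments; any resemblance to a particular library API is not intended.

```python
# A dictionary-based sketch of Algorithm 1 (clarity over efficiency).
def gnn_layer(V, E, g, phi_e, rho_ev, phi_v, rho_eg, rho_vg, phi_g):
    """V: {vertex: attribute}, E: {(i, j): attribute} for directed edges i -> j."""
    # first for loop: update every edge from its own attribute, its endpoints, and g
    E_new = {(i, j): phi_e(e, V[i], V[j], g) for (i, j), e in E.items()}
    # second for loop: aggregate edges terminating at each vertex, then update the vertex
    V_new = {}
    for k in V:
        incoming = [E_new[i, j] for (i, j) in E_new if j == k]   # may be empty for sources
        V_new[k] = phi_v(V[k], rho_ev(incoming), g)
    # final phase: aggregate all updated edges/vertices and update the global attributes
    g_new = phi_g(g, rho_eg(list(E_new.values())), rho_vg(list(V_new.values())))
    return E_new, V_new, g_new
```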
Additionally, each pass of the layer results in an exchange of information over attributes within a single vertex one-ring neighborhood. Consequently, multiple applications of the layer communicate to ever broader vertex neighborhoods. Figure 1: Edge feature update for \(\mathbf{e}_{12}\). The concatenated attributes for the edge, neighboring vertices, and global graph are input to the edge update function \(\phi_{e}\). The output \(\mathbf{e}_{12}^{\prime}\) corresponds to updated edge attributes. This process (not shown) is repeated for all edges in parallel, using the same update function. _Viral Interlude._ The update for the viral outbreak graph described conceptually above can be made (more) formal by defining appropriate aggregation and update functions. The edge update function \(\phi_{e}\) must provide a measurement of an interaction contributing to the cure and disease state of an individual. Further, if a graph function is to be applied multiple times then the time of interaction must also be represented in the results of this function: \[\mathbf{e}^{\prime}_{ij}\leftarrow\phi_{e}(\mathbf{e}_{ij},\mathbf{v}_{i},\mathbf{v}_{j},\mathbf{g})=[\mathbf{e}_{ij},\mathbf{i}_{ij}] \tag{7}\] where \(\mathbf{e}_{ij}\) is the original attribute, the time of interaction, and \(\mathbf{i}_{ij}\) is a quantification of the impact of the interaction (note that in the context of neural networks the quantity \(\mathbf{i}_{ij}\) is not understood in a precise way; rather, the GNN will _learn_ to interpret it). For this layer \(n^{\prime}_{e}=n_{e}+1\). While this quantifies the impact of a single interaction, the end state of an individual is based on all their interactions specified by connected graph edges. This is the intent of the aggregation function \(\rho_{e\to v}\). For example, the sum of the interaction risks might be a useful metric, as would the maximum and/or minimum risk, depending on the dynamics of the cure and disease. More formally, a vector-valued function that quantifies the external risk to an individual \(j\) over all interactions is \[\bar{\mathbf{e}}_{j}=\rho_{e\to v}(\mathbf{e}^{\prime}_{*j})=\left[\sum_{i}\mathbf{e}^{\prime}_{ij},\ \max_{i}\mathbf{e}^{\prime}_{ij},\ \min_{i}\mathbf{e}^{\prime}_{ij}\right]. \tag{8}\] With the external risks to an individual quantified, the vertex attributes specifying the cure/disease probability for the individual must be updated using \(\phi_{v}\). ### Training Typically in GNN applications, MLP neural networks are used as the update functions. Each of these MLPs is parameterized by a set of weights and biases. Therefore, the parameters of a GNN are the union of the weights and biases of all the update functions. We denote the collected set of parameters for the neural network as \(\Theta\). The activity of tuning these parameters to learn to perform a set task is referred to as training. There are different types of tasks that can be learned, but the most common is referred to as supervised learning. In this case, a desired mapping is specified empirically using a dataset composed of input-output pairs. For exposition, in this section we define the data set as \((X_{k},Y_{k})\) for \(k=1\ldots n_{d}\), where \(X_{k}\) is the domain element mapping to \(Y_{k}\), the range element. For GNNs, the domain and range values are defined as vertex, edge and/or global attributes. The parameters of a neural network are said to be trained when a _loss_ function is minimized.
An idealized supervised learning problem and loss function is \[\Theta^{*}=\operatorname*{arg\,min}_{\Theta}\left(\mathcal{L}(\Theta):=\frac{1}{n_{d}}\sum_{k=1}^{n_{d}}l_{\Theta}(\mathcal{N}(X_{k};\Theta),Y_{k})\right) \tag{9}\] where the function \(l_{\Theta}\) measures the difference between the GNN prediction for the attributes \(X_{k}\) and the empirical target \(Y_{k}\). Choices for \(l_{\Theta}\) include norms for regression, or cross entropy for classification. With the loss function selected, an algorithmic approach to find the optimal parameters \(\Theta^{*}\) is required. Initially the weights and biases are selected at random (see [14, 15, 8, 17] for example approaches and considerations). Then an iterative method is used to incrementally reduce the loss function and improve the prediction by adjusting the GNN parameters. Figure 2: Vertex feature update for \(\mathbf{v}_{4}\). The updated attributes for incoming edges are aggregated using \(\rho_{e\to v}\), yielding aggregated edge information \(\bar{\mathbf{e}}_{4}\). The aggregated attributes are concatenated with vertex attributes and global attributes to define the input to the vertex update function \(\phi_{v}\). The output \(\mathbf{v}_{4}^{\prime}\) corresponds to updated vertex attributes. This process is repeated for all vertices in parallel (not shown), using the same aggregation and update functions. The most common iterative optimization methods for training neural networks are gradient descent algorithms. At each iteration, the algorithm calculates the loss for the current \(\Theta\) parameters by forward propagation through the GNN, evaluating the loss on the predicted versus observed output values. In a second step of the iteration, the gradient of the loss with respect to \(\Theta\) (e.g. \(\nabla_{\Theta}\mathcal{L}\)) is computed using the celebrated _backpropagation_ algorithm [15]. Simplistically, the negative of the gradient is used to update the parameters and the iteration is repeated. An important metric to measure the rate of the reduction of the loss is the _epoch_. One epoch corresponds to each entry in the dataset having been used in a gradient descent step once. Values of tens to thousands of epochs are not uncommon. While gradient descent [24] is a common algorithm in training, more sophisticated algorithms can lead to more robust results, for example Stochastic Gradient Descent [26], RMSProp [33], and Adam [20]. For a review of optimization methods for machine learning see [5, 15]. The ability to calculate the gradient for a large number of parameters is automated by software libraries which perform automatic differentiation. Common libraries for machine learning such as PyTorch [25], TensorFlow [2], Jax [6], and others all include this capability, allowing gradient descent algorithms to be applied without requiring user-specified gradient functions. _Viral Interlude._ Applied to our viral example, the goal of training is to determine parameters that define the transmission rates and the effectiveness of the cure versus the spread of the disease. In this supervised learning example, the dataset input is defined as the initial distribution of the virus and cure over the vertices of the graph, while the output data will be the distribution of the virus and cure after multiple cycles. The dataset would be comprised of graphs defined by multiple communities already afflicted by the virus (thus an observation can be obtained). The model edge update function is parameterized by a neural network that defines the transfer of cure/disease through an individual interaction. The vertex update is also a neural network model that defines the uptake of the cure/disease by a single individual through multiple interactions. Prior to training, the neural network contains only the context of the interactions, not the specifics of the cure/viral diffusion. These specifics would be learned by defining a regression-based loss on the vertex attributes associated with the probability of an individual having the virus/cure. Once trained, the GNN model can be applied to communities (graphs) that have an initial viral load to predict a final state distribution.
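A minimal sketch of such a supervised training loop is shown below (in PyTorch). Here `gnn` is assumed to be a module implementing \(\mathcal{N}\), `dataset` a list of \((X_{k},Y_{k})\) pairs, and mean squared error stands in for the generic \(l_{\Theta}\) in (9); the epoch count and learning rate are arbitrary choices.

```python
# A sketch of a supervised training loop minimizing a regression loss as in (9).
import torch

def train(gnn, dataset, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(gnn.parameters(), lr=lr)
    for epoch in range(epochs):                # one pass over the dataset per epoch
        for X, Y in dataset:
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(gnn(X), Y)
            loss.backward()                    # backpropagation: gradient of the loss wrt Theta
            opt.step()                         # gradient descent style parameter update
    return gnn
```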
## 3 Linear Operations as Graph Network Layers This section demonstrates how linear algebra computations can be carried out utilizing the language and structure of graph neural networks. These familiar algorithms are selected to make the graph neural network framework described above concrete. We note that the representation of these operations as graph neural networks is not unique, and multiple representations may exist for a given calculation. As such, this is not meant to be an exhaustive list, but instead a general guide to setting up and representing different linear algebra operations as graph neural networks. We begin by briefly illustrating the natural relationship between sparse matrices and graphs. A weighted graph with \(n\) vertices can represent a square \(n\times n\) matrix \(A\). Each nonzero \(A_{ij}\) defines a weight for a directed edge that emanates from vertex \(j\) and terminates at vertex \(i\). Notice that this definition uses self edges to represent entries on the matrix diagonal. Alternatively, one can omit self edges and instead store \(A_{ii}\) at the \(i^{th}\) vertex. The inclusion or omission of self edges affects how linear algebra operations are represented and how data propagates through the network, as different information is available during specific update/aggregation phases. The algorithms in this section assume that the graph neural network includes self edges, except for the matrix-vector example in Section 3.1. Additionally, our edge orientation choice is natural for matrix-vector products \(y=Ax\) where the \(i^{th}\) entry of \(y\) is defined by \[y_{i}=\sum_{j\in A_{i*}}A_{ij}x_{j}\] and \(A_{i*}\) denotes the set of all nonzero entries in the \(i^{th}\) matrix row. Here, information from neighboring edges must be gathered at the \(i^{th}\) vertex. This is accomplished by an aggregation function, as the edges in the \(i^{th}\) matrix row terminate at the \(i^{th}\) vertex. More generally, our edge orientation choice emphasizes that information that directly influences vertex \(i\) flows into vertex \(i\). Finally, we note that we only consider square matrices. However, non-square matrices can be represented as bipartite graphs where rows and columns each have their own distinct set of vertices. Bipartite graph extensions of Algorithm 1 are possible, but are beyond the scope of this educational survey. We start with foundational linear algebra algorithms: the sparse matrix-vector product and a matrix-weighted norm. Next, we describe three simple iterative methods: a weighted Jacobi linear solver, a Chebyshev linear solver, and a power method eigensolver. We conclude with some kernels used within an advanced algebraic multigrid linear solver. All of these examples follow Algorithm 1 for the structure of the graph neural network.
The details of each operation-specific component, such as the update functions (\(\phi_{v}\), \(\phi_{e}\), \(\phi_{g}\)) and aggregation functions (\(\rho_{e\to v}\), \(\rho_{e\to g}\), \(\rho_{v\to g}\)), are summarized in Tables 3.1-3.10.

_Code Availability._ Code for each of the GNNs given in this chapter can be found in the repository located at [https://github.com/sandialabs/gnn-applied-linear-algebra/](https://github.com/sandialabs/gnn-applied-linear-algebra/). Code is provided both in MATLAB script and in Python via PyTorch and the PyTorch Geometric package. Each example provides the code implementation of the layer as well as a small demonstration which shows that the output of the GNN matches that of the "traditional" method.

### 3.1 Sparse Matrix-Vector Product

Sparse matrix-vector products are fundamental building blocks of many numerical linear algebra algorithms, such as Krylov methods. Table 3.1 illustrates a GNN that computes \(y=Ax\). The left block of the table describes the input, intermediate and output data for each graph component (vertex, edge, global). The right column is subdivided into the update functions required by Algorithm 1 on the left, and the aggregation functions on the right. Further, if there are distinct layers in the network, the right column lists the sets of update and aggregation functions for each layer.

For the sparse matrix-vector product the nonzero entries \(A_{ij}\) naturally correspond to edges, so we assign them to edges as fixed (i.e., unmodified by the GNN) data objects. The vector entries \(x_{i}\) and \(y_{i}\) are assigned to vertices, with \(x_{i}\) fixed and \(y_{i}\) mutable (i.e., modified by the GNN) and initialized to zero. The upper left graph in Figure 1 illustrates the starting condition of the GNN with data placed according to the description in Table 3.1. We now follow the update and aggregation functions of Algorithm 1 in order. Each step is a transition between graph states in Figure 1. The edge feature update \(\phi_{e}\) takes the fixed edge feature \(A_{ij}\) and the fixed vertex feature \(x_{j}\) and multiplies them, storing the result on each edge. Summation is employed for the edge aggregation \(\rho_{e\to v}\), with the result stored on the vertex in the vertex update phase, \(\phi_{v}\). The lower left graph in Figure 1 shows the final state of the GNN, with the entries \(y_{i}\) containing the matrix-vector product located at the graph vertices.

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(c_{ij}\) & \(\phi_{e}\) & \(c_{ij}=A_{ij}x_{j}\) & \(\rho_{e\to v}\) & \(\overline{c}_{i}=\sum_{j}c_{ij}\) \\
Vertex & \(x_{i}\) & \(y_{i}\) [Output] & \(\phi_{v}\) & \(y_{i}=\overline{c}_{i}\) & \(\rho_{e\to g}\) & — \\
Global & — & — & \(\phi_{g}\) & — & \(\rho_{v\to g}\) & — \\
\hline
\end{tabular}
\end{table} Table 3.1: Sparse Matrix-Vector Product as a Graph Network with Self-Edges

In Table 3.1 and all the tables that follow, it is understood that operations such as \(A_{ij}x_{j}\) and \(\sum_{j}c_{ij}\) only operate on the nonzero entries of \(A\) and \(c\) respectively. That is, the edges must be included in the graph. The table can be modified if self-edges are not stored. To that end, Table 3.2 shows a slightly more complicated GNN. The difference is that \(A_{ii}\) is stored on vertices and so the \(A_{ii}x_{i}\) term is added in the vertex update function \(\phi_{v}\).

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(c_{ij}\) & \(\phi_{e}\) & \(c_{ij}=A_{ij}x_{j}\) & \(\rho_{e\to v}\) & \(\overline{c}_{i}=\sum_{j\neq i}c_{ij}\) \\
Vertex & \(A_{ii},x_{i}\) & \(y_{i}\) [Output] & \(\phi_{v}\) & \(y_{i}=A_{ii}x_{i}+\overline{c}_{i}\) & \(\rho_{e\to g}\) & — \\
Global & — & — & \(\phi_{g}\) & — & \(\rho_{v\to g}\) & — \\
\hline
\end{tabular}
\end{table} Table 3.2: Sparse Matrix-Vector Product as a Graph Network without Self-Edges
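A minimal PyTorch Geometric sketch of Table 3.1 follows; the repository's implementation may differ, but the mapping is direct: `message` plays the role of \(\phi_{e}\), the `'add'` aggregation plays \(\rho_{e\to v}\), and the returned aggregate is \(y_{i}\).

```python
import torch
from torch_geometric.nn import MessagePassing

class SpMV(MessagePassing):
    """Sketch of Table 3.1: one edge-update/aggregate pass computes y = A x."""
    def __init__(self):
        super().__init__(aggr='add')              # rho_{e->v}: summation

    def forward(self, x, edge_index, edge_attr):
        # propagate() applies phi_e on every edge, then aggregates at targets
        return self.propagate(edge_index, x=x, edge_attr=edge_attr)

    def message(self, x_j, edge_attr):
        return edge_attr * x_j                    # phi_e: c_ij = A_ij * x_j

# Usage with edge_index/edge_attr built from a sparse matrix as above:
# x = torch.ones(3, 1); y = SpMV()(x, edge_index, edge_attr)
```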
### 3.2 Matrix-Weighted Norm

Similar to the matrix-vector product, a GNN for computing a matrix-weighted vector norm, \(\left\|x\right\|_{W}=\sqrt{x^{T}Wx}\), is shown in Table 3.3. The matrix-vector product from Table 3.1 is modified in the vertex update, and operations are added for the vertex-to-global aggregation and the global update. After the edge update and the edge-to-vertex aggregation, \(\overline{c}\) contains the vector \(Wx\). Therefore, in the vertex update, we multiply by \(x_{i}\) to obtain the vector \(y_{i}\). In the vertex-to-global aggregation, \(y_{i}\) is summed to obtain \(x^{T}Wx\). Finally, in the global update, the square root is taken to yield the vector norm.

### 3.3 Weighted Jacobi Iteration

Weighted Jacobi iteration is a simple method to solve a linear system \(Ax=b\) for \(x\). One iteration of weighted Jacobi is written as the update formula \[x^{k+1}=x^{k}+\omega D^{-1}(b-Ax^{k}), \tag{10}\] where \(k\) is the iteration index, \(D\) is the matrix diagonal, \(b\) is the right-hand side, \(x^{k}\) is the solution at the \(k^{th}\) Jacobi iteration, and \(\omega\) is the weight parameter.

For the GNN representation of the weighted Jacobi method, the edge data includes the matrix entries \(A_{ij}\). The initial vertex data is comprised of the matrix diagonal, \(A_{ii}\), and the right-hand side vector, \(b_{i}\). The weight parameter \(\omega\) is included as fixed global data. Note that while the matrix diagonal \(A_{ii}\) is stored as vertex data, this GNN layer uses self-edges as well, so the matrix diagonal is also accessible as regular edge data. This is done to allow easy access to this data for both the matrix-vector product and the Jacobi update. For the functions, all of the components of the sparse matrix-vector product are retained, except for the vertex update \(\phi_{v}\), which is replaced so that the aggregated edge data is used inside a Jacobi-style update (10), yielding the GNN layer shown in Table 3.4.

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(c_{ij}\) & \(\phi_{e}\) & \(c_{ij}=A_{ij}x_{j}\) & \(\rho_{e\to v}\) & \(\overline{c}_{i}=\sum_{j}c_{ij}\) \\
Vertex & \(A_{ii},b_{i}\) & \(x_{i}\) [Output] & \(\phi_{v}\) & \(x_{i}=x_{i}+\omega A_{ii}^{-1}(b_{i}-\overline{c}_{i})\) & \(\rho_{e\to g}\) & — \\
Global & \(\omega\) & — & \(\phi_{g}\) & — & \(\rho_{v\to g}\) & — \\
\hline
\end{tabular}
\end{table} Table 3.4: Weighted Jacobi Iteration as a Graph Network
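The Jacobi layer admits an equally short sketch; here `diag` carries the fixed vertex data \(A_{ii}\), `b` the right-hand side, and \(\omega\) is stored as a fixed global attribute. This is a sketch under these assumptions, not the repository's exact code.

```python
import torch
from torch_geometric.nn import MessagePassing

class WeightedJacobiLayer(MessagePassing):
    """Sketch of Table 3.4: x <- x + omega * A_ii^{-1} (b - (A x)_i)."""
    def __init__(self, omega=2.0 / 3.0):
        super().__init__(aggr='add')              # rho_{e->v}: summation
        self.omega = omega                        # fixed global data

    def forward(self, x, b, diag, edge_index, edge_attr):
        c_bar = self.propagate(edge_index, x=x, edge_attr=edge_attr)  # (A x)_i
        return x + self.omega * (b - c_bar) / diag                    # phi_v

    def message(self, x_j, edge_attr):
        return edge_attr * x_j                    # phi_e: c_ij = A_ij x_j

# k Jacobi sweeps amount to applying the layer k times:
# for _ in range(k): x = layer(x, b, diag, edge_index, edge_attr)
```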
### 3.4 Chebyshev Iterative Solver

The Chebyshev method is another simple iterative scheme. It can be viewed as a type of multi-step Jacobi algorithm where different weights are used for each step, and these weights are optimal in some sense given knowledge of the largest and smallest eigenvalues (\(\lambda_{max}\) and \(\lambda_{min}\)) of the symmetric positive definite matrix \(A\). An example algorithm is given in Algorithm 2, following the algorithm presentation in [27].

```
function (x) = Cheby(A, b, x0, lambda_max, lambda_min, N)
    x <- x0 ;   r <- b - A*x
    theta <- (lambda_max + lambda_min)/2 ;  delta <- (lambda_max - lambda_min)/2 ;  sigma <- theta/delta
    rho <- 1/sigma ;   d <- (1/theta) * r
    for i in {1..N} do
        x <- x + d
        r <- r - A*d
        rho_prior <- rho
        rho <- 1/(2*sigma - rho)
        d <- rho * rho_prior * d + (2*rho/delta) * r
    end for
    return x
end function
```

For the Chebyshev GNN, most of the computation is nodal and the GNN primarily serves to propagate residual information. When multiple values are updated on a single line of an update function, the values update from left to right, so the parameters are updated in the correct order. In each iteration of the Chebyshev algorithm, the GNN uses Layer 1 to update the current iterate. Then Layer 2 updates the residual information. Finally, Layer 3 updates the direction vector based on the updated residual. If \(N\) iterations of the Chebyshev method are desired, run the GNN in Table 3.5 \(N\) times consecutively.

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(c_{ij}\) & \multicolumn{4}{c|}{Layer 1} \\
Vertex & — & \(r_{i},d_{i},x_{i}\) [Output] & \(\phi_{v}\) & \(x_{i}=x_{i}+d_{i}\) & \(\rho_{e\to v}\) & — \\
Global & \(\delta,\sigma\) & \(\rho,\rho_{\text{prior}}\) & \multicolumn{4}{c|}{Layer 2} \\
 & & & \(\phi_{e}\) & \(c_{ij}=A_{ij}d_{j}\) & \(\rho_{e\to v}\) & \(\overline{c}_{i}=\sum_{j}c_{ij}\) \\
 & & & \(\phi_{v}\) & \(r_{i}=r_{i}-\overline{c}_{i}\) & & \\
 & & & \(\phi_{g}\) & \(\rho_{\text{prior}}=\rho,\;\rho=1/(2\sigma-\rho)\) & & \\
 & & & \multicolumn{4}{c|}{Layer 3} \\
 & & & \(\phi_{v}\) & \(d_{i}=\rho\,\rho_{\text{prior}}\,d_{i}+(2\rho/\delta)\,r_{i}\) & & \\
\hline
\end{tabular}
\end{table} Table 3.5: Chebyshev Solver Iteration as a Graph Network

### 3.5 Power Method

Since the power method for symmetric matrices consists of matrix-vector multiplications and norms, we can represent the power method as a GNN as well. The details are given in Tables 3.6 and 3.7. We split the algorithm into two networks. The network in Table 3.6 is the iterative portion, which updates the \(b\) vector to an improved approximation of the eigenvector associated with the dominant eigenvalue. The second network computes the Rayleigh quotient to give the eigenvalue estimate. Thus, for more than a single iteration of the power method, the network in Table 3.6 is applied successively for the desired number of iterations, and then the Rayleigh quotient is computed from the network in Table 3.7.

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(c_{ij}\) & \multicolumn{4}{c|}{Layer 1 (\(b=Ab\))} \\
Vertex & — & \(b_{i},y_{i}\) & \(\phi_{e}\) & \(c_{ij}=A_{ij}b_{j}\) & \(\rho_{e\to v}\) & \(\overline{c}_{i}=\sum_{j}c_{ij}\) \\
Global & — & \(h_{1}\) & \(\phi_{v}\) & \(b_{i}=\overline{c}_{i}\) & & \\
 & & & \multicolumn{4}{c|}{Layer 2 (\(h_{1}=\|b\|\))} \\
 & & & \(\phi_{v}\) & \(y_{i}=b_{i}^{2}\) & \(\rho_{v\to g}\) & \(\overline{\psi}=\sum_{i}y_{i}\) \\
 & & & \(\phi_{g}\) & \(h_{1}=\sqrt{\overline{\psi}}\) & & \\
 & & & \multicolumn{4}{c|}{Layer 3 (normalization)} \\
 & & & \(\phi_{v}\) & \(b_{i}=b_{i}/h_{1}\) & & \\
\hline
\end{tabular}
\end{table} Table 3.6: Power Method as a Graph Network - Iterative Layers

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(c_{ij}\) & \(\phi_{e}\) & \(c_{ij}=A_{ij}b_{j}\) & \(\rho_{e\to v}\) & \(\overline{c}_{i}=\sum_{j}c_{ij}\) \\
Vertex & — & \(b_{i},y_{i}\) & \(\phi_{v}\) & \(y_{i}=b_{i}\overline{c}_{i}\) & \(\rho_{v\to g}\) & \(\overline{\psi}=\sum_{i}y_{i},\;n_{A}=\sum_{i}b_{i}^{2}\) \\
Global & — & \(n_{A},\lambda_{\max}\) [Output] & \(\phi_{g}\) & \(\lambda_{\max}=\overline{\psi}/n_{A}\) & & \\
\hline
\end{tabular}
\end{table} Table 3.7: Power Method as a Graph Network - Rayleigh Quotient
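Stacking these pieces, a power method sketch only needs the matrix-vector layer plus nodal normalization. The helper below is an illustration of Tables 3.6 and 3.7 under the edge-list conventions used earlier, not the repository's code.

```python
import torch
from torch_geometric.nn import MessagePassing

class MatVec(MessagePassing):
    """The phi_e / rho_{e->v} pair shared by Tables 3.6 and 3.7."""
    def __init__(self):
        super().__init__(aggr='add')

    def forward(self, b, edge_index, edge_attr):
        return self.propagate(edge_index, x=b, edge_attr=edge_attr)

    def message(self, x_j, edge_attr):
        return edge_attr * x_j                    # c_ij = A_ij b_j

def power_method(b, edge_index, edge_attr, iters=50):
    matvec = MatVec()
    for _ in range(iters):                        # Table 3.6, applied repeatedly
        b = matvec(b, edge_index, edge_attr)      # Layer 1: b <- A b
        b = b / b.norm()                          # Layers 2-3: h_1 = ||b||, b <- b/h_1
    Ab = matvec(b, edge_index, edge_attr)         # Table 3.7: Rayleigh quotient
    return (b * Ab).sum()                         # lambda_max ~ b^T A b (b normalized)
```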
### 3.6 Advanced Algebraic Multigrid Solver

We conclude this section with two algebraic multigrid kernels: one for interpolation and another associated with coarsening a matrix graph. Algebraic multigrid (AMG) is an advanced linear solver. In the description that follows, we consider solving the matrix system \(A_{1}x_{1}=b_{1}\) for the unknown vector \(x_{1}\). While Jacobi and Chebyshev may require a very large number of iterations to reach an acceptable solution, AMG can converge rapidly on many important matrix systems. For example, AMG is known to converge efficiently on matrices arising from discrete approximations of elliptic partial differential equations (PDEs) such as the heat equation [7]. In particular, the number of required iterations can be independent of the dimension of the matrix \(A_{1}\). Thus, increasingly larger matrix systems do not necessarily require increasingly more iterations. As matrix systems can include over a billion unknowns, this property is extremely advantageous. AMG methods are primarily used to solve discrete versions of PDEs (including many non-elliptic PDEs), though they have been applied to other systems such as those arising from certain network problems.

While a full understanding of AMG is beyond our scope (see [7] for a more thorough explanation of the method), we present a few AMG ideas using a simple example. Consider a metal plate with a hole where the plate is clamped at two locations. One clamp is fixed at \(40^{\circ}\)C while the other clamp is fixed at \(10^{\circ}\)C. Suppose a discrete representation of Poisson's equation is used to model the steady-state heat distribution within the plate, where the associated matrix \(A_{1}\) only has nonzeros on the diagonal and for each edge of a mesh. The vector \(b_{1}\) contains only a few nonzeros to represent the fixed temperature of the plate where it is held by the clamps. The other boundary conditions indicate that the heat gradient normal to the plate boundaries is zero. When starting with an initial guess of \(x_{1}^{(0)}=0\), the first Jacobi iteration produces \(x_{1}^{(1)}=\omega D_{1}^{-1}b_{1}\). Due to the sparsity pattern of \(A_{1}\), \(x_{1}^{(1)}\) must only have a few nonzeros, all within a graph distance of one from one of the two clamp locations. In general, the \(k^{th}\) Jacobi iteration extends \(x_{1}^{(k-1)}\)'s nonzero regions along edges adjacent to these regions. Thus, one can see that many iterations are required for the influence of the two clamps to propagate throughout the entire plate, and so it should not be surprising that many further iterations are required to reach a converged solution. As the mesh is refined, more iterations are needed to propagate information from the clamps to the rest of the plate.
The situation is similar for \(N\)-step Chebyshev solvers, which propagate information \(N\) times faster than Jacobi but are also \(N\) times as expensive. While Jacobi/Chebyshev propagate information slowly across the mesh, it can be shown that these methods often efficiently produce approximate solutions where the error is smooth. The multigrid algorithm leverages this smoothing property in conjunction with a hierarchy of meshes \(\mathcal{G}_{\ell}\) such as those depicted in Figure 2.

Figure 2: Left: sample mesh hierarchy (clamps shown on \(\mathcal{G}_{1}\)). Right: data movement within an AMG iteration (Jacobi iterations occur at tan circles and Gaussian elimination occurs at the dark brown circle).

One simple example of a multigrid iteration employs \(k\) Jacobi iterations on the \(A_{1}x_{1}=b_{1}\) system to generate an approximate solution \(x_{1}^{(k)}\). Since the error is now smooth, it can be accurately approximated on a coarser mesh. Thus, the multigrid idea is to improve the solution using a coarser mesh. To do this, a residual is computed and then projected onto the \(\mathcal{G}_{2}\) mesh within the hierarchy and used as the right-hand side of a second linear system \(A_{2}x_{2}=b_{2}\). Here, the matrix \(A_{2}\) is a coarser discrete version of the Poisson equation constructed using \(\mathcal{G}_{2}\).
The \(A_{2}\) matrix is a less accurate approximation to Poisson's equation, but it is only used to correct the approximation obtained on the finer mesh. One can apply \(k\) Jacobi iterations to this \(A_{2}\) system to produce \(x_{2}^{(k)}\). Notice that these iterations are less expensive than applying Jacobi to the \(A_{1}\) system. We can repeat this same process (apply Jacobi to \(A_{\ell+1}x_{\ell+1}=b_{\ell+1}\), where \(b_{\ell+1}\) is a projected version of the residual for the \(A_{\ell}\) system) for all meshes in the hierarchy. In each case, the approximate solution on the \(\ell^{th}\) mesh is a correction to the solution on the previous mesh system. To complete the AMG iteration, we must add together the individual corrections. That is, the approximate solution at the end of a single AMG iteration is given recursively by \(\tilde{x}_{\ell}=x_{\ell}^{(k)}+P_{\ell}\tilde{x}_{\ell+1}\), where \(P_{\ell}\) is a rectangular matrix that interpolates (or prolongates) approximate solutions associated with the \(\ell+1\) mesh to the \(\ell\) mesh. Here \(1\leq\ell\leq N_{levels}-1\), and on the coarsest mesh we take \(\tilde{x}_{N_{levels}}=x_{N_{levels}}^{(k)}\). Often a Gaussian elimination solver is used on the coarsest mesh, as its cost is negligible when the matrix \(A_{N_{levels}}\) is sufficiently small. Notice that in this case, one AMG iteration propagates information from the clamps throughout the entire mesh. While there is some cost to the coarse level computations, this cost is small relative to the \(A_{1}\) computations if the coarser meshes are significantly smaller than the finest mesh. While it is generally difficult for application developers to generate the coarse operators \(\mathcal{G}_{\ell}\), \(A_{\ell}\), and \(P_{\ell}\) (\(P_{\ell}^{T}\) is commonly used for residual projections), an AMG algorithm automates this entire process. That is, all of the additional operators/meshes are generated within the AMG method using graph algorithms that coarsen the matrix graph, define _algebraic_ interpolation, and define the coarse \(A_{\ell}\)'s.

#### 3.6.1 Strength of Connection

Automatically coarsening a matrix graph is difficult. In our example, we need a coarse graph that can approximate errors that remain after \(k\) Jacobi steps. Several different AMG algorithms are used in practice, each with its own coarsening procedure. Typically, a strength-of-connection algorithm is applied as a precursor to coarsening. The basic idea is that some graph edges (or off-diagonal matrix nonzeros) may contribute very little to the information flow. For example, suppose one corner region of our plate consists of a near insulating material (e.g., extending from the corner to the closest segment of the circle). Edges between the conducting and insulating regions should be classified as weak so that they can effectively be ignored during the graph coarsening process. Unfortunately, improper edge classification can ruin AMG's impressive convergence properties, and edge classification remains an active research topic that could possibly be improved by new ML algorithms. We present two common strength-of-connection algorithms that are based on evaluating the relative size of off-diagonal matrix entries. However, we note that these algorithms may not be appropriate for complex matrix systems. Unlike the previously discussed methods that output information at graph vertices, strength of connection algorithms output data on graph edges.
Specifically, a strength of connection algorithm produces a matrix \(S(A)\) which has the same sparsity pattern as the input matrix \(A\). For the (non-symmetric) smoothed aggregation AMG strength of connection, this computation is \[S_{ij}=\frac{A_{ij}^{2}}{A_{ii}A_{jj}}. \tag{2}\] The GNN layer version of this algorithm, which is shown in Table 3.8, relies on fixed matrix edge data, \(A_{ij}\), as well as fixed diagonal values, \(A_{ii}\), stored on the vertices. Note that this GNN allows for self edges, so the diagonal values are stored on both the vertices and the edges. The only calculation takes place in the edge update function, \(\phi_{e}\), which simply implements (2).

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(S_{ij}\) [Output] & \(\phi_{e}\) & \(S_{ij}=A_{ij}^{2}/(A_{ii}A_{jj})\) & \(\rho_{e\to v}\) & — \\
Vertex & \(A_{ii}\) & — & \(\phi_{v}\) & — & \(\rho_{e\to g}\) & — \\
Global & — & — & \(\phi_{g}\) & — & \(\rho_{v\to g}\) & — \\
\hline
\end{tabular}
\end{table} Table 3.8: Smoothed Aggregation Strength of Connection as a Graph Network

Similar to the smoothed aggregation strength of connection, the classical strength of connection algorithm provides output data on graph edges to construct a strength of connection matrix \(S(A)\) that has the same sparsity as the input matrix \(A\). For the classical strength of connection, this computation is \[S_{ij}=\frac{-A_{ij}}{\max_{k\neq i}\{-A_{ik}\}}. \tag{3}\] The GNN representation of this algorithm, which is shown in Table 3.9, relies only on fixed matrix edge data, \(A_{ij}\). The maximum negative value on each row is first determined in the aggregation phase, \(\rho_{e\to v}\), and is then propagated to the vertices. Finally, the edge update is used to finish the calculation in (3).

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & \(S_{ij}\) [Output] & \multicolumn{4}{c|}{Layer 1} \\
Vertex & — & \(v_{i}\) & \(\phi_{e}\) & — & \(\rho_{e\to v}\) & \(\overline{\epsilon}_{i}=\max_{j\neq i}\{-A_{ij}\}\) \\
Global & — & — & \(\phi_{v}\) & \(v_{i}=\overline{\epsilon}_{i}\) & & \\
 & & & \multicolumn{4}{c|}{Layer 2} \\
 & & & \(\phi_{e}\) & \(S_{ij}=-A_{ij}/v_{i}\) & \(\rho_{e\to v}\) & — \\
\hline
\end{tabular}
\end{table} Table 3.9: Classic Strength of Connection as a Graph Network

One final step is needed to finish the classical strength of connection: the dropping of weak connections. For a suitable \(0<\tau\leq 1\), drop the nonzero values where \(S_{ij}<\tau\). The sparsity pattern of \(S_{ij}\) then provides the classical strength of connection. This could be done as a post-processing step, or, to keep with the theme of utilizing ideas from graph neural networks, we can modify the final edge update to utilize a non-linear activation function, such as the **step** function: \[\text{step}(x)=\begin{cases}1&x>0\\ 0&x\leq 0\end{cases}\] to zero out the weak connections based on \(\tau\). This results in \[\hat{S}_{ij}=\text{step}\left(\frac{-A_{ij}}{\max_{k\neq i}\{-A_{ik}\}}-\tau\right), \tag{4}\] where the strong connections are the non-zero entries of \(\hat{S}\).
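As a sketch of Table 3.9 together with the thresholding in (4), the classical strength measure needs one max-aggregation and one edge update. Here `scatter` is PyTorch Geometric's aggregation utility (available in recent releases); the dense 1-D `edge_attr` layout and default `tau` are illustrative assumptions.

```python
import torch
from torch_geometric.utils import scatter

def classic_strength(edge_index, edge_attr, tau=0.25):
    """Sketch of Table 3.9 plus the step-function thresholding in (4)."""
    src, dst = edge_index                     # edge j -> i carries A_ij; i = dst
    neg = -edge_attr.clone()
    neg[src == dst] = float('-inf')           # exclude self edges from the max
    # rho_{e->v}: v_i = max_{k != i} { -A_ik }, gathered at the terminating vertex
    v = scatter(neg, dst, dim=0, reduce='max')
    # phi_e: S_ij = -A_ij / v_i, then step(S_ij - tau) keeps the strong edges
    S = -edge_attr / v[dst]
    return (S - tau > 0).float()              # nonzeros mark strong connections
```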
#### 3.6.2 Direct Interpolation

We present one of the simplest AMG interpolation schemes, though more advanced/robust algorithms are generally preferred. The interpolation algorithm provides output on the graph edges, similar to the strength of connection algorithms. Specifically, we seek the nonzero entries of \(P_{\ell}\). In the description that follows, we omit the subscript \(\ell\) to simplify notation. The algorithm relies on first partitioning the vertex set \(\mathcal{V}\) into \(\mathcal{F}\) and \(\mathcal{C}\) such that \(\mathcal{F}\cap\mathcal{C}=\emptyset\) and \(\mathcal{F}\cup\mathcal{C}=\mathcal{V}\) (cf., [7]). The F-vertices only exist on the fine grid while C-vertices exist on both fine and coarse grids. We assume that the rows/columns of \(A\) (and hence the vertices) are ordered so that all C-vertices are numbered before F-vertices. In addition to partitioning, we require a strength of connection matrix, \(S\), as discussed in Section 3.6.1. The direct interpolation operator is derived from the input matrix \(A\) such that if \(i\) is an F-vertex, then \[P_{ij}=-A_{ij}\ \frac{\sum_{k\in N_{i}}A_{ik}}{A_{ii}\sum_{k\in C_{i}^{s}}A_{ik}} \tag{13}\] where \(N_{i}\) denotes the neighbors of vertex \(i\), and \(C_{i}^{s}\) denotes the strong, coarse neighbors of vertex \(i\). Specifically, \(j\in C_{i}^{s}\) if and only if \(j\in N_{i}\), \(j\in\mathcal{C}\), and \(S_{ij}-\tau>0\). If \(i\) is instead a C-vertex, \[P_{ij}=\begin{cases}1&j=i\\ 0&j\neq i\end{cases}.\] A derivation of (13) can be found in [9]. One can see that the \(i^{th}\) row of \(P\) is simply a weighted sum where the weights are proportional to \(A_{ij}\) and normalized by the fraction term. This fraction does not depend on \(j\) and guarantees that the sum of the nonzeros in the \(i^{th}\) row of \(P\) (\(=\sum_{k\in C_{i}^{s}}P_{ik}\)) is equal to one when the sum of \(A\)'s nonzeros in the \(i^{th}\) row is zero.
This is generally true when \(A\) represents differentiation, as the derivative of a constant function is zero, which is effectively approximated by multiplying \(A\) by a vector where all entries are constant. The GNN representation of this algorithm, which is shown in Table 3.10, requires two layers. We use the coarse/fine splitting information and the matrix diagonal as fixed vertex features. We also need a mutable, temporary vertex feature \(\alpha\) for communicating information between the two GN layers. As edge features, we use the off-diagonal entries of \(A\) and the strength of connection information as fixed features, and the output weight \(P_{ij}\) is edge-specific and mutable. As the algorithm proceeds, the edge update in the first layer is used to pass the coarse/fine splitting information, \[C_{j}=\begin{cases}1&j\in\mathcal{C}\\ 0&j\in\mathcal{F}\end{cases},\] of the receiving vertex to the edge. Next, the edge aggregation is used to perform the necessary summations, since all the quantities are specific to the \(i^{th}\) vertex. In the vertex update of the first layer, the \(A_{ii}\) factor is multiplied through. Finally, in the second layer, only the edge update function is utilized, which incorporates the \(A_{ij}\) value into the weight and also assigns zeroes to all rows associated with coarse points.

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij},\hat{S}_{ij}\) & \(v_{ij},P_{ij}\) [Output] & \multicolumn{4}{c|}{Layer 1} \\
Vertex & \(C_{i},A_{ii}\) & \(\alpha_{i}\) & \(\phi_{e}\) & \(v_{ij}=C_{j}\) & \(\rho_{e\to v}\) & \(\overline{\gamma}_{i}=\frac{\sum_{j}A_{ij}}{\sum_{j}A_{ij}v_{ij}\hat{S}_{ij}}\) \\
Global & — & — & \(\phi_{v}\) & \(\alpha_{i}=\frac{1}{A_{ii}}\overline{\gamma}_{i}\) & & \\
 & & & \multicolumn{4}{c|}{Layer 2} \\
 & & & \(\phi_{e}\) & \(P_{ij}=(1-C_{i})(-A_{ij}\alpha_{i})\) & \(\rho_{e\to v}\) & — \\
\hline
\end{tabular}
\end{table} Table 3.10: Direct Interpolation Kernel as a Graph Network

After the computation of the GNN is complete, some additional post-processing is necessary. As mentioned previously, the GNN generates an updated graph with the same structure as the input graph (which is square since it is derived from \(A\) and \(S\)), but the prolongation operator is rectangular. To get a complete prolongation operator, two additional operations would be necessary: (a) setting the diagonal to one and (b) removing all columns associated with fine vertices. The first of these operations can be wrapped into the edge update in the second layer of the GNN network, but the second must be completed after the computation of the entire graph neural network. In doing so, the standard direct interpolation operator is obtained.

## 4 Training Within GNN Frameworks

The GNN models considered in the previous section used prescribed aggregation and update functions. Typically, the update functions used are parameterized multi-layer perceptron networks, while the variadic aggregation functions are prescribed. This section gives two examples that illustrate how learned parameters can be introduced into update functions so that the GNN model can be trained to perform a numerical task.

_Code Availability._ The code for the examples in this chapter is provided in the repository at [https://github.com/sandialabs/gnn-applied-linear-algebra/](https://github.com/sandialabs/gnn-applied-linear-algebra/). Only PyTorch code is provided. The repository includes code to create all datasets, implement the given GNNs, and train the models.

### 4.1 Learning a Diagonal for the Jacobi Iteration

As discussed, an iteration of the weighted Jacobi method for solving \(Ax=b\) is defined by \[x^{k+1}=x^{k}+\omega D^{-1}(b-Ax^{k}), \tag{4.1}\] where the solution at the \(k^{th}\) Jacobi iteration is updated based on the associated residual. The \(i^{th}\) entry of the residual is scaled by \(\omega/D_{ii}\) to update the \(i^{th}\) solution entry. Here, \(\omega\) is a user-provided scaling value and \(D\) is a diagonal matrix whose nonzeros are given by \(D_{ii}=A_{ii}\). Ideally, \(\omega\) is chosen so that convergence is attained in the fewest number of iterations.
While this ideal \(\omega\) can be determined by computing eigenvalues of the matrix \(D^{-1}A\) when \(A\) is a symmetric positive definite matrix, in practice it is often chosen in an ad hoc fashion. We pursue an alternative machine learning approach that selects a diagonal relaxation operator \(\bar{D}\) based on the matrix. A generalized Jacobi method (or scaled Richardson iteration) is given by \[x^{k+1}=x^{k}+\bar{D}^{-1}(b-Ax^{k}), \tag{4.2}\] where the diagonal matrix \(\bar{D}\) is not defined by \(A\)'s diagonal entries but is instead chosen by a machine learning algorithm to reduce the number of iterations required to reach convergence. We follow the framework introduced in [31] for learning the diagonal of the generalized Jacobi iterative method, though we consider a different target class of matrices to demonstrate the approach, focusing on Jacobi as a relaxation method in multigrid. Within multigrid, the relaxation method should reduce high frequency errors as a complement to the coarse grid correction that addresses low frequency errors.

As an example, we discretize a 2D Poisson operator \[\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\] on the unit square \([0,1]\times[0,1]\) with homogeneous Dirichlet boundary conditions: \(u(x,0)=u(0,y)=u(x,1)=u(1,y)=0\). The domain is tiled using linear finite elements with uniformly shaped quadrilaterals, except for a small, thin band of tall and skinny quadrilaterals, such as in the mesh shown in Figure 4.1. Note that the horizontal location of the band and its width can be changed to produce different variations of the problem. This example is meant to highlight a situation where a generalized \(\bar{D}\) matrix within Jacobi's method might be advantageous, and should not be misconstrued as good meshing practice.

Figure 4.1: An example mesh with a thin band of elements used for training and testing. Here \(h=\frac{1}{l}\) and \(\beta=0.05\).

In the matrix arising from discretizing this system, the abrupt change in mesh spacing yields significant differences when comparing rows associated with mesh nodes far from the band versus mesh nodes within the band. Two matrix stencils that highlight this variation are \[\frac{1}{6}\left[\begin{array}{ccc}-2&-2&-2\\ -2&16&-2\\ -2&-2&-2\end{array}\right]\qquad\text{and}\qquad\frac{\beta}{6h}\left[\begin{array}{ccc}-1-\left(\frac{h}{\beta}\right)^{2}&-4+2\left(\frac{h}{\beta}\right)^{2}&-1-\left(\frac{h}{\beta}\right)^{2}\\ 2-4\left(\frac{h}{\beta}\right)^{2}&8+8\left(\frac{h}{\beta}\right)^{2}&2-4\left(\frac{h}{\beta}\right)^{2}\\ -1-\left(\frac{h}{\beta}\right)^{2}&-4+2\left(\frac{h}{\beta}\right)^{2}&-1-\left(\frac{h}{\beta}\right)^{2}\end{array}\right].\] The left stencil corresponds to a mesh node like the node surrounded by a square in Figure 4.1, which is away from the boundary and the thin band (so locally the mesh is uniformly spaced in both the \(x\) and \(y\) directions). The right stencil corresponds to a mesh node like the node surrounded by a circle in Figure 4.1, which is in the thin band and away from the boundary; the vertical spacing is \(h\) and the horizontal spacing (both to the left and right of the node) is \(\beta\). When \(h/\beta=1\), the left and right stencils correspond. Notice that for \(\beta\ll h\), the \(h/\beta\) terms dominate and one can see that the left and right stencils are very different.
While our example is artificially devised to stress Jacobi's method, this type of abrupt stencil change occurs in more realistic scenarios. For instance, in [10], generalized Jacobi iterations are used for a discontinuous Galerkin discretization where penalties glue meshes together.

#### 4.1.1 Datasets

The training, test and validation datasets are generated by varying the location and width of the band in the mesh. The steps for producing the datasets are as follows:

1. Create a 2D uniform mesh which contains \(N_{y}\) points in each direction
2. Randomly select \(\beta\) between \(\beta_{\min}\) and \(\beta_{\max}\)
3. Randomly select an existing \(x\)-coordinate from the mesh in step 1
4. Build the band mesh by placing new mesh vertices a distance of \(\beta\) to the left and right of all vertices with the \(x\)-coordinate from step 3
5. Build the matrix based on a finite element discretization
6. Repeat steps 1-5 to generate 1000 matrices
7. Matrices 1-800 become the training dataset
8. Matrices 801-850 become the validation set
9. Matrices 851-1000 become the test set

The training set defines the loss function used by gradient descent to determine model parameters that yield a reduced loss value. Using the validation set, the model (built with the training set) can be evaluated over a range of different model and algorithm choices (referred to as hyperparameters), such as model architectures and optimizers. The test dataset is used to evaluate the final model obtained after all hyperparameter tuning has occurred. The test set is never used during the model selection process.

#### 4.1.2 Input/Output Attributes

The input edge attributes are the nonzero matrix entries \(A_{ij}\), excluding the diagonal. The matrix diagonal \(A_{ii}\) is included as a vertex attribute for each \(i\). The output vertex attribute will be \(d_{i}=\bar{D}_{ii}^{-1}\) for the generalized Jacobi method. See the _Data_ section of Table 4.1.

#### 4.1.3 Loss Function

A neural network is trained by identifying parameter values that approximately minimize or at least significantly reduce the value of a loss (also known as an objective) function. One standard machine learning technique involves finding _optimal_ answers (typically from an expensive process) and then training the neural network to match these optimal values; a technique referred to as supervised learning.
In our case, this would require computing a set of optimal \(\bar{D}_{ii}\) values for each training scenario using a (possibly expensive) numerical procedure. Then, a loss function would be defined that minimizes the difference between these pre-computed values and the GNN output version of these values. Unfortunately, a tractable numerical procedure for pre-computing the \(\bar{D}_{ii}\) values is not apparent, given the fact that numerous eigenvalue calculations would be needed to determine an _optimal_ high dimensional vector (in \(\mathbb{R}^{N}\)) to define \(\bar{D}\) for each training case.

Instead, we define the loss function based on the desired performance of the Jacobi method itself. We would like to minimize the number of Jacobi iterations required to reach some specified convergence tolerance. However, efficient optimization methods for neural network training require computing the gradient of the loss function, and since the number of iterations is a discrete quantity, we cannot calculate a gradient with respect to it. Thus, the number of iterations cannot be used, and a differentiable quantity that measures the performance of a Jacobi iteration will be used instead.

From the theory of iterative methods, we know that the asymptotic convergence rate of a linear iterative method is given by the spectral radius of the error propagation matrix. The error propagation matrix for generalized Jacobi is \(I-\bar{D}^{-1}A\), which is obtained by substituting \(e^{k}=x^{k}-A^{-1}b\) and \(e^{k+1}=x^{k+1}-A^{-1}b\) in (4.2). Thus, we seek to minimize the spectral radius of this \(I-\bar{D}^{-1}A\) matrix. Here, we face another obstacle: standard automatic differentiation tools do not support eigenvalue algorithms. Even if support was added, [34] shows that standard eigenvalue decomposition algorithms and power method algorithms give numerically unstable gradients. Thus, we need a different method to approximate the spectral radius of a matrix which gives a numerically stable eigenvalue estimate. In [31], it is proved that the spectral radius of a matrix \(B\) can be approximated by \[\max_{i=1,2,...,m}\{\|B^{K}\hat{u}_{i}\|^{\frac{1}{K}}\}\] where \(K\) is a user defined number of iterations and \(\{\hat{u}_{i}\}_{i=1}^{m}\) is a set of vectors randomly chosen from the surface of the unit sphere.

For multigrid relaxation, we instead want to maximize the performance of Jacobi with regard to only high-frequency errors. We can characterize the high frequency space using a discrete sine transformation matrix \(V\) that is defined such that each column has the general form \[\alpha\sin(\theta_{x}\pi x)\sin(\theta_{y}\pi y).\] Specifically, each of the \(N_{y}^{2}\) columns corresponds to a unique \((\theta_{x},\theta_{y})\) pair chosen from \(\theta_{x}=1,...,N_{y}\) and \(\theta_{y}=1,...,N_{y}\). The scalar \(\alpha\) is picked to ensure that the norm of each column is one; for the results in Section 4.1.6, we use \(N_{y}=38\). Partitioning \(V\) into low frequency and high frequency columns, we have \(V=[V_{lf}\;\;V_{hf}]\), where the low frequency modes correspond to pairs where both \(\theta_{x}\leq N_{y}/2\) and \(\theta_{y}\leq N_{y}/2\), while all other modes define \(V_{hf}\). As we seek to minimize high frequency errors via Jacobi relaxation, for each matrix \(A^{(j)}\) we train the GNN to find \(\bar{D}\) that minimizes \[\max_{i=1,2,...,m}\{\|(I-\bar{D}^{-1}A^{(j)})^{K}\hat{u}_{i}\|^{\frac{1}{K}}\} \tag{4.3}\] where each \(\hat{u}_{i}\) is a randomly chosen column of \(V_{hf}\). Therefore, if \(N\) is the number of matrices in the dataset being evaluated (training, validation, or test), the overall loss function is \[\mathcal{L}=\sum_{j=1}^{N}\max_{i=1,2,...,m}\{\|(I-\bar{D}^{-1}A^{(j)})^{K}\hat{u}_{i}\|^{\frac{1}{K}}\} \tag{4.4}\] For the results given in Section 4.1.6, we use \(K=3\) and \(m=20\).
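A PyTorch sketch of the surrogate in (4.3) follows; `apply_B` (the action of \(I-\bar{D}^{-1}A\)) and the probe matrix `u_hf` (columns drawn from \(V_{hf}\)) are assumed inputs supplied by the caller.

```python
import torch

def spectral_radius_surrogate(apply_B, u_hf, K=3):
    """Sketch of (4.3): max_i ||B^K u_i||^(1/K) over m probe vectors.

    apply_B : callable applying B = I - Dbar^{-1} A to a batch of columns
    u_hf    : (n, m) tensor whose columns are random columns of V_hf
    """
    v = u_hf
    for _ in range(K):
        v = apply_B(v)                         # v <- B v, batched over columns
    norms = v.norm(dim=0) ** (1.0 / K)         # ||B^K u_i||^(1/K), differentiable
    return norms.max()                         # surrogate for the spectral radius
```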
#### 4.1.4 Architecture

In general, the choice of architecture requires a search over a range of parameters, known as an ablation study. However, for this example, a fixed architecture for the neural network is chosen as a simple demonstration. The GNN consists of a single GNN layer with a vertex update, as can be seen in Table 4.1. The edge-to-vertex aggregation function, \(\rho_{e\to v}\), calculates the min, mean, sum and max of the edge attributes, resulting in four quantities per vertex. These four attributes are concatenated with the single vertex attribute (\(A_{ii}\)) to form a vector in \(\mathbb{R}^{5}\) that is input into the vertex update function. The vertex update function, \(\phi_{v}\), is a feed-forward neural network consisting of 3 layers. For more information about the architecture of this GNN, see Appendix A.1.

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & \(A_{ij}\) & — & \(\phi_{e}\) & — & \(\rho_{e\to v}\) & \(\overline{c}_{i}=[min,mean,sum,max]\) \\
Vertex & \(A_{ii}\) & \(d_{i}\) [Output] & \(\phi_{v}\) & \(d_{i}=\textit{NN}_{v}(A_{ii},\overline{c}_{i})\) & \(\rho_{e\to g}\) & — \\
Global & — & — & \(\phi_{g}\) & — & \(\rho_{v\to g}\) & — \\
\hline
\end{tabular}
\end{table} Table 4.1: Training a Diagonal for Jacobi as a Graph Network
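The architecture in Table 4.1 translates into a few lines of PyTorch Geometric. The hidden width and the list-valued `aggr` argument (available in recent PyTorch Geometric releases) are assumptions for illustration, not the repository's exact configuration.

```python
import torch
from torch_geometric.nn import MessagePassing

class LearnedDiagonal(MessagePassing):
    """Sketch of Table 4.1: min/mean/sum/max of off-diagonal A_ij, concatenated
    with A_ii, feeds a 3-layer MLP that predicts d_i = 1 / Dbar_ii."""
    def __init__(self, hidden=16):
        super().__init__(aggr=['min', 'mean', 'sum', 'max'])  # rho_{e->v}
        self.mlp = torch.nn.Sequential(                       # phi_v
            torch.nn.Linear(5, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, diag, edge_index, edge_attr):
        n = diag.size(0)
        agg = self.propagate(edge_index, edge_attr=edge_attr,
                             size=(n, n))                     # [n, 4]
        return self.mlp(torch.cat([diag, agg], dim=1))        # [n, 1] -> d_i

    def message(self, edge_attr):
        return edge_attr                                      # pass A_ij through
```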
Note that the "classical optimal" weight is only optimal when Jacobi is used directly to solve \(Ax=b\) as opposed to Jacobi relaxation within a multigrid algorithm. To compare the different Jacobi relaxation procedures, we compute the maximum eigenvalue(s) of \(I-\omega V_{th}^{T}D^{-1}AV_{hf}\) (or of \(I-V_{th}^{T}\bar{D}^{-1}AV_{hf}\) for the generalized Jacobi method), thereby investigating each method's ability to damp high frequency error. The 10 largest eigenvalues associated with two of the test matrices are shown in Figure 2. We can see from the two sample matrices that in some cases the learned diagonal outperforms the rest of the methods, while in some cases, the \(\omega_{\text{co}}\) weight performs the best. It would therefore be helpful to see a full comparison across all test matrices. In Figure 3, we compare the distributions of maximal eigenvalues of the standard methods with respect to the learned diagonal method. Hence, in the figure, all counts left of the 0.0 line indicate matrices where the associated method performed better than the learned method, while all counts to the right of the Figure 3: Distributions of the differences in the maximum eigenvalues from the learned diagonal and the given constant weights for the Jacobi method. Counts left of the 0.0 line indicate matrices for which the associated method has lower maximum eigenvalue than the learned method (and by how much), while counts right of the 0.0 line indicate matrices for which the associated method has larger maximum eigenvalue than the learned method. Figure 2: Top 10 largest eigenvalues of the generalized Jacobi relaxation method for two different test matrices, comparing the different methods of selecting the \(\omega\) parameter in weighted Jacobi to the learned diagonal from the GNN. \(0.0\) line indicate matrices where the associated method performed worse than the learned method. From these results, we see that the learned method gives a better result than the standard non-weighted and \(\omega=\frac{2}{3}\) weighted Jacobi method in all cases and outperforms the \(\omega_{co}\) weight in more than \(75\%\) of the test matrices. The shape of the right-most histogram in Figure 3 suggests that there might be areas of the test space where the learned method performs better and areas where the \(\omega_{co}\) performs better. To discover what patterns might exist, we map test matrices with regard to the width and location of the band in Figure 4 and color their associated dot with the top performing method. We can see from this plot that there is a clear pattern with regard to which matrices are more effectively solved using the learned diagonal versus the \(\omega_{co}\), constant weight. This makes intuitive sense because for larger widths, the mesh is "more uniform" and thus the correctly chosen constant will perform well, while for smaller widths, allowing the weight to vary on a per-row basis allows for more flexibility to address the more drastically different elements. Finally, comparison plots for the scaled diagonal values arising from the \(\omega_{co}\) method and the learned diagonal are given in Figure 5 for a test matrix with \(h=0.003\) and where the band is approximately located at \(x=0.405\). There are several observations which can be made from the images. First, we observe that in general, the learned method yields larger diagonal values than the \(\omega_{co}\) method. 
It can also be observed that the learned method produces a larger diagonal in the center of the band than the \(\omega_{co}\) method. Most drastically, we observe in the learned method that boundary points (especially the corners) have larger diagonal values than the interior points. This is true even in the band, where the boundary points are also larger than the interior points. This is a modification that the traditional weighted Jacobi method cannot replicate, since the diagonal of \(A\) is the same at the boundaries and interior.

Figure 4: A plot of all the test matrices according to the width and location of their band, where color represents the winning method. Red represents the learned diagonal method while green represents the \(\omega_{co}\) constant weight method.

Figure 5: Diagonals at each mesh point for the \(\omega_{co}\) method and the learned method.

The shapes in Figure 5 demonstrate why the learned method is able to outperform the "classical optimal" weighting for Jacobi's method. For the standard weighted Jacobi, only a constant multiple of \(D^{-1}\) is allowed, which forces the shape of the diagonal to match the shape of the inverse of the diagonal of \(A\). This shape cannot be adjusted, nor can the relative distances be changed (only scaled). In our new learned method, no such limitation exists; hence the space from which the learned method can select the diagonal values is much broader. This broader space presents greater opportunity to find an optimal diagonal.

### 4.2 Example: Training a GNN to Determine Diffusion Coefficients

The strength of connection algorithms discussed in Section 3.6.1 are heuristic schemes with several shortcomings. One could instead consider GNN models that learn a more sophisticated measure of strength. In this section, we consider the related task of determining diffusion coefficients given matrix stencil and coordinate information. Though not equivalent to determining strength measures, there is a rough connection between the relative sizes of diffusion coefficients in different directions and the strength of the matrix connections aligned with these directions. Consider the diffusion operator \[\sum_{i=1}^{2}\sum_{j=1}^{2}\frac{\partial}{\partial x_{i}}\left[D_{ij}(x,y)\frac{\partial}{\partial x_{j}}\right] \tag{4.5}\] defined on the unit square with periodic boundary conditions. \(D(x,y)\) is assumed to have the form \[D(x,y)=\begin{bmatrix}\alpha(x,y)&0\\ 0&\beta(x,y)\end{bmatrix}.\] The goal of this learning task is to predict the diffusion coefficients \(\alpha(x,y)\) and \(\beta(x,y)\) at each grid point using information from the coefficient matrix, coordinates of vertices, and mesh spacing.
#### 4.2.1 Datasets

All the datasets consist of data generated by discretizing (4.5) with the finite element method. Using the following six-step procedure, 1000 matrices are generated:

1. Select a random integer \(N\in[80,100]\) and let the mesh resolution be \(h=\frac{1}{N}\)
2. Select \(\theta_{\alpha,x},\theta_{\alpha,y},\theta_{\beta,x},\theta_{\beta,y}\) each with uniform probability from \(\{i:i\in\mathbb{Z}\text{ and }0\leq i\leq 6\}\)
3. Define \(\alpha(x,y)=\cos\left(\theta_{\alpha,x}\pi x\right)^{2}\cos\left(\theta_{\alpha,y}\pi y\right)^{2}\)
4. Define \(\beta(x,y)=\cos\left(\theta_{\beta,x}\pi x\right)^{2}\cos\left(\theta_{\beta,y}\pi y\right)^{2}\)
5. Discretize (4.5) on a uniform 2D quadrilateral mesh with resolution \(h\) in both the \(x\) and \(y\) directions to construct the matrix operator \(A\)
6. Generate input features and output targets for this matrix

Notice that the diffusion coefficients are chosen so that there is no discontinuity over the periodic boundary conditions. Thus, the diffusion fields vary smoothly over the domain. The data set generated is divided into a training set (700 matrices), validation set (200 matrices), and test set (100 matrices). The training set is used to determine model parameters that yield a sufficiently small loss. The validation set is used to determine the model architecture, as will be explained shortly. Finally, the test set is only used once at the end of the study to evaluate the final model's effectiveness on unseen data. The test set is evaluated in the results section, where it is used as a trustworthy indication of how the chosen model will perform on new, completely unseen data.

#### 4.2.2 Input/Output Attributes

The left column of Table 4.2 describes the input/output attributes. The vertex attributes \(v_{i}\) consist of the matrix diagonal entries. That is, \(v_{i}=A_{ii}\). The edge attributes \(c_{ij}\) are constructed as \(c_{ij}=(A_{ij},x_{\text{rel}},y_{\text{rel}})\), where \(x_{\text{rel}},y_{\text{rel}}\) are the relative differences between vertex \(i\) and vertex \(j\) in the \(x\) and \(y\) directions respectively, scaled by \(\frac{1}{h}\). For example, if vertex \(j\) is the vertex southeast of vertex \(i\), then the edge features for the edge from vertex \(i\) to vertex \(j\) are \(c_{ij}=(A_{ij},1,-1)\). There is a global attribute in this example, which is the mesh resolution: \(g=h\). The output of the neural network is the updated vertex attributes, which are the predicted values for \(\alpha\) and \(\beta\) at each mesh vertex.

\begin{table}
\begin{tabular}{|l l l||l l|l l|}
\hline
\multicolumn{3}{|c||}{Data} & \multicolumn{4}{c|}{Functions} \\
 & Fixed & Mutable & \multicolumn{2}{c|}{Updates} & \multicolumn{2}{c|}{Aggregation} \\
\hline \hline
Edge & — & \(c_{ij}\) & \multicolumn{4}{c|}{Encoder} \\
Vertex & — & \(v_{i}\) [Output] & \(\phi_{e}\) & \(c_{ij}=\text{Encoder}_{e}(c_{ij})\) & \(\rho_{e\to v}\) & — \\
Global & — & \(g\) & \(\phi_{v}\) & \(v_{i}=\text{Encoder}_{v}(v_{i})\) & \(\rho_{e\to g}\) & — \\
 & & & \(\phi_{g}\) & \(g=\text{Encoder}_{g}(g)\) & \(\rho_{v\to g}\) & — \\
 & & & \multicolumn{4}{c|}{Graph Neural Network} \\
 & & & \(\phi_{e}\) & \(c_{ij}=\textit{NN}_{e}(c_{ij},v_{i},v_{j},g)\) & \(\rho_{e\to v}\) & \(\overline{c}_{i}=[min,mean,sum,max]\) \\
 & & & \(\phi_{v}\) & \(v_{i}=\textit{NN}_{v}(v_{i},\overline{c}_{i},g)\) & \(\rho_{e\to g}\) & — \\
 & & & \(\phi_{g}\) & — & \(\rho_{v\to g}\) & — \\
\hline
\end{tabular}
\end{table} Table 4.2: Training a Graph Network to Determine Diffusion Coefficients

#### 4.2.3 Loss Function

The loss function is given by the mean squared error (MSE) \[\mathcal{L}_{\omega}^{(k)}=\frac{1}{2N^{2}}\sum_{i=1}^{N^{2}}||\alpha^{(k)}(x_{i},y_{i})-\tilde{\alpha}_{\omega}^{(k)}(x_{i},y_{i})||_{2}^{2}+||\beta^{(k)}(x_{i},y_{i})-\tilde{\beta}_{\omega}^{(k)}(x_{i},y_{i})||_{2}^{2}\] where the \((k)\) superscript denotes the matrix from the training set; \(\omega\) refers to the model's trainable parameters (determined by numerical optimization during training); the \((x_{i},y_{i})\) correspond to different mesh points; and \(\tilde{\alpha}_{\omega}^{(k)}\) and \(\tilde{\beta}_{\omega}^{(k)}\) represent the GNN model predictions using the model parameters \(\omega\).

#### 4.2.4 Architecture

The model architecture employs an encoder, followed by the GNN outlined in Table 4.2. The encoder is applied to the edge and vertex attributes separately before the execution of the graph neural network. Specifically, an MLP, \(\text{Encoder}_{e}\), is applied to each set of edge attributes. Similarly, an MLP, \(\text{Encoder}_{v}\), is applied to each set of vertex attributes. Finally, a third MLP, \(\text{Encoder}_{g}\), is applied to the global attributes. Details on the design of the encoder MLPs can be found in Appendix A.2. Generically, encoders and decoders are common in GNNs. They are used to enhance the expressiveness of user input attributes with the aim of improving predictions and network trainability. Within the graph neural network layer, MLP neural networks are used for the edge update and vertex update functions.
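A condensed sketch of this encoder-plus-GNN design is given below; the hidden sizes, the decoder, and the per-vertex handling of the global attribute are illustrative assumptions layered on Table 4.2, not the repository's exact model.

```python
import torch
from torch_geometric.nn import MessagePassing

def mlp(n_in, n_out, hidden=16):
    return torch.nn.Sequential(
        torch.nn.Linear(n_in, hidden), torch.nn.ReLU(),
        torch.nn.Linear(hidden, n_out))

class EdgeVertexLayer(MessagePassing):
    """phi_e and phi_v from Table 4.2, with min/mean/sum/max aggregation."""
    def __init__(self, h):
        super().__init__(aggr=['min', 'mean', 'sum', 'max'])
        self.nn_e = mlp(4 * h, h)       # NN_e(c_ij, v_i, v_j, g)
        self.nn_v = mlp(6 * h, h)       # NN_v(v_i, cbar_i, g); cbar_i is 4h wide

    def forward(self, v, c, g_row, edge_index):
        cbar = self.propagate(edge_index, x=v, edge_attr=c, g=g_row)
        return self.nn_v(torch.cat([v, cbar, g_row], dim=1))

    def message(self, x_i, x_j, edge_attr, g_i):
        return self.nn_e(torch.cat([edge_attr, x_i, x_j, g_i], dim=1))

class DiffusionGNN(torch.nn.Module):
    """Encoder -> one GNN layer -> decoder predicting (alpha_i, beta_i)."""
    def __init__(self, h=16):
        super().__init__()
        self.enc_e, self.enc_v, self.enc_g = mlp(3, h), mlp(1, h), mlp(1, h)
        self.layer = EdgeVertexLayer(h)
        self.dec_v = mlp(h, 2)          # decode vertex state to (alpha, beta)

    def forward(self, v, c, g, edge_index):
        v, c, g = self.enc_v(v), self.enc_e(c), self.enc_g(g)
        g_row = g.expand(v.size(0), -1)   # broadcast global attribute per vertex
        v = self.layer(v, c, g_row, edge_index)
        return self.dec_v(v)
```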
This architecture was chosen after performing experiments using the validation set. The validation set allows researchers to experiment with different model parameters, architectures, optimizers, etc. without exposing the model to the test set prematurely. In our case, we investigated the following variations:

* Number of GNN layers: 1, 2, or 3
* Number of MLP layers in the update functions: 1, 2, 3, or 4
* Width of the hidden layers in the MLP update functions: 8, 16, 32, or 64
* Encoder architecture: No encoder, 1 layer with 16 neurons, or 3 layers with 16 neurons
* Decoder architecture: No decoder, 1 layer with 16 neurons, or 3 layers with 16 neurons

This yields a total of \(3\cdot 4\cdot 4\cdot 3\cdot 3=432\) architecture combinations. To improve computational efficiency, we evaluated the different models in two stages. In the first stage, we employ \(25\%\) of the training set and \(25\%\) of the validation set to quickly identify the highest performing models among the \(432\) possibilities. The five best-performing models are then evaluated in stage two on the full validation set after they have been optimized using the full training set. As described in more detail in Appendix A.2, the best performing model of the \(432\) is parameterized by \(14,002\) trainable parameters (1 GNN layer, 2 MLP layers of width 16 in the update functions, and 3-layer, 16-neuron encoders and decoders).

#### 4.2.5 Training Methodology

The model is again implemented in PyTorch [25], using the PyTorch Geometric library [12]. The Adam optimization method [21] is utilized to train the neural network weights with a batch size of 10. Of particular importance is the choice of the number of training epochs. As noted earlier, too few epochs may lead to poor accuracy while too many epochs may lead to overfitting, where the model performs poorly on new, unseen data. We again choose to stop training when the performance of the model on the validation set starts to degrade. In our case, the minimum validation loss occurs after training the model for \(187\) epochs.

#### 4.2.6 Results

Figure 4.6 shows the MSE loss for the training and validation sets as a function of the number of epochs. Each epoch is a forward and backward propagation to compute the gradient over all batches in the training set. Thus, with a training set of \(700\) entries and a batch size of \(10\), that is \(70\) gradient computations per epoch. The loss for both sets attains a similar level (\(5.73\times 10^{-4}\) for validation and \(5.74\times 10^{-4}\) for testing). This similarity indicates that it is unlikely that over-training has occurred. Over-training is usually marked by a descending training error while simultaneously increasing validation error, indicating that the network does not generalize to data outside of the training set.

We also consider model performance in relation to the frequency of the diffusion functions. To study this behavior, assume \(D(x,y)\) takes the form \[D(x,y)=\begin{bmatrix}\alpha(x,y)&0\\ 0&\alpha(x,y)\end{bmatrix}\] where \(\alpha(x,y)\) has the form described in Section 4.2.1. Now, we can test the model on problems where \(\theta_{\alpha,x}\) and \(\theta_{\alpha,y}\) are selected in a grid from the set \(\{i:i\in\mathbb{Z}\text{ and }0\leq i\leq 16\}\). The frequency versus mean-squared error plot is given in Figure 4.7.
Recall that the model is only trained for \(0\leq i\leq 6\), so all frequency combinations outside this interval are being extrapolated by the model. This portion of the subdomain is indicated by the shaded region. All matrices used in this study are generated using the same finite elements as previously, and all have 100 nodes in each direction, yielding 10,000-by-10,000 matrix problems.

\begin{table}
\begin{tabular}{|l|l|l||l|l|}
\hline
 & \multicolumn{2}{c||}{Data} & \multicolumn{2}{c|}{Functions} \\
 & Fixed & Mutable & Updates & Aggregation \\
\hline\hline
\multicolumn{5}{|c|}{Encoder} \\
\hline
Edge & — & \(c_{ij}\) & \(\phi_{e}:\ c_{ij}=\text{Encoder}_{e}(c_{ij})\) & \(\rho_{e\to v}\): — \\
Vertex & — & \(v_{i}\) [Output] & \(\phi_{v}:\ v_{i}=\text{Encoder}_{v}(v_{i})\) & \(\rho_{e\to g}\): — \\
Global & — & \(g\) & \(\phi_{g}:\ g=\text{Encoder}_{g}(g)\) & \(\rho_{v\to g}\): — \\
\hline
\multicolumn{5}{|c|}{Graph Neural Network} \\
\hline
Edge & — & \(c_{ij}\) & \(\phi_{e}:\ c_{ij}=\textit{NN}_{e}(c_{ij},v_{i},v_{j},g)\) & \(\rho_{e\to v}:\ \overline{c}_{i}=[\min,\text{mean},\text{sum},\max]\) \\
Vertex & — & \(v_{i}\) [Output] & \(\phi_{v}:\ v_{i}=\textit{NN}_{v}(v_{i},\overline{c}_{i},g)\) & \(\rho_{e\to g}\): — \\
Global & — & \(g\) & \(\phi_{g}\): — & \(\rho_{v\to g}\): — \\
\hline
\end{tabular}
\end{table} Table 2: Training a Graph Network to Determine Diffusion Coefficients

The figure indicates that coefficients with low frequencies, those in the training set, are well approximated. Deviating from the training set, there are two sources of potential error, both of which contribute to the increase in the error observed in the figure. First, higher-frequency coefficients suffer increasing error on a fixed (\(100\times 100\)) mesh as the number of points per wavelength decreases. The second source of error comes from the model extrapolating outside of the training set. In this context, examining the error away from the training frequencies, we see that the error remains relatively small even with larger departures from the training data.

Figure 6: Loss plots for training diffusion coefficients.

Figure 7: Frequency vs. mean-squared error for the trained model predicting point-wise diffusion coefficients from the finite element problem matrix and relative coordinates. The shaded region indicates where the training data was taken from, while the non-shaded region indicates where the model is extrapolating.

Connecting this result to strength-of-connection metrics, we remark that, in general, it can be difficult to correctly classify strong and weak connections when \(\alpha\) is small and \(\beta\) is not small (or vice versa) and linear finite elements are used on quadrilateral meshes. Such a stencil, where \(\alpha(x,y)=0.001\) and \(\beta(x,y)=0.8\), is shown below: \[\left[\begin{array}{ccc}-0.1335&-0.533&-0.1335\\ 0.266&1.068&0.266\\ -0.1335&-0.533&-0.1335\end{array}\right]\] Notice that if the following standard strength-of-connection metric is used: \[|A_{ij}|\geq\theta\max_{k\neq i}|A_{i,k}|\Leftrightarrow A_{ij}\text{ is strong}\] then \(\theta<0.25\) would (incorrectly) classify all four cardinal directions as strong.
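A quick numeric check of this claim (our snippet, not from the paper), applying the metric with an illustrative \(\theta=0.2\) to the eight neighbor entries of the stencil above:

```python
# Classical strength-of-connection test on the 9-point stencil above;
# theta = 0.2 is our illustrative choice of a value below 0.25.
import numpy as np

neighbors = np.array([-0.1335, -0.533, -0.1335,
                       0.266,           0.266,
                      -0.1335, -0.533, -0.1335])
theta = 0.2
strong = np.abs(neighbors) >= theta * np.abs(neighbors).max()
print(strong)  # all eight entries pass, so even the east/west couplings
               # (where alpha ~ 0) are flagged as strong connections
```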
Allowing the trained model to predict \(\alpha\) and \(\beta\) for this case, we obtain the following values: \[\alpha=-0.00052847\quad\beta=0.71802.\] As such, the model has correctly predicted this stencil as having small \(\alpha\), with a reasonable approximation for \(\beta\). This provides an opportunity to give better guidance for directing the multigrid coarsening procedure. Notice that the \(\alpha\) prediction is slightly negative. This is a by-product of using a LeakyReLU for the final activation, which allows negative outputs. The standard ReLU, while strictly positive, yields several training cases that have smaller derivative values, slowing down learning and ultimately negatively impacting model accuracy. Applying a thresholding similar to a ReLU as a post-training step is possible and would act as a more "reasonable" filter for outputs, but since we wished to evaluate the performance of our model, no additional steps were applied to the output of the GNN. This discrepancy also points to the possibility that more fine-tuning could improve the trained model, but as this paper is more educational in nature, we forgo adding any additional complexities to the model in this study.

## 5 Conclusion

We have examined graph neural networks (GNNs) from the perspective of numerical linear algebra. Our objective has been to highlight the close relationship between several sparse matrix tasks and GNNs. This relationship should not be surprising given the strong connections between sparse matrices and graphs. Several traditional numerical algorithms have been recast using a GNN representation. The GNN algorithms presented here will often be much less efficient than specialized sparse linear algebra libraries such as Trilinos [18] or PETSc [3], but are instead intended to familiarize the reader with the inner workings and operations of GNNs by relating well-known algorithms to a GNN counterpart. We believe that GNNs may prove useful for many sophisticated linear algebra tasks where traditional algorithms have significant shortcomings. For example, one frequently recurring linear algebra theme is that of matrix approximation. The details vary by context, but the general idea is to find an inexpensive surrogate matrix that approximates the behavior of a large sparse matrix. For multigrid, this inexpensive matrix might be needed to approximate the action of the original matrix on a low-frequency space. In other situations (e.g., reduced-order modeling), an approximation of matrix-vector products might be needed only for vectors that lie in a relatively small subspace driven by the simulation. These inexpensive approximations leverage smaller, sparser, or lower-rank matrices. Machine learning automates the task of generating approximations using data in cases that can be difficult for traditional approaches. The power of GNNs within machine learning is that their flexibility can address general sparse matrices. In this paper, we give two GNN learning examples to demonstrate, in simple cases, how GNNs can address some linear algebra tasks that can be incorporated into more traditional methods. While these examples are basic demonstrations of where GNNs can be used, we hope they will provide inspiration for how to incorporate GNNs into more complex problems.
2307.04880
Active Linearized Sparse Neural Network-based Frequency-Constrained Unit Commitment
Conventional synchronous generators are gradually being replaced by low-inertia inverter-based resources. Such a transition introduces more complicated operation conditions; frequency deviation stability and rate-of-change-of-frequency (RoCoF) security are becoming great challenges. This paper presents an active linearized sparse neural network (ALSNN) based frequency-constrained unit commitment (ALSNN-FCUC) model to guarantee frequency stability following the worst generator outage case while ensuring operational efficiency. A generic data-driven predictor is first trained to predict maximal frequency deviation and the highest locational RoCoF simultaneously based on a high-fidelity simulation dataset, and then incorporated into the ALSNN-FCUC model. Sparse computation is introduced to avoid dense matrix multiplications. An active data sampling method is proposed to maintain the bindingness of the frequency related constraints. Besides, an active ReLU linearization method is implemented to further improve the algorithm efficiency while retaining solution quality. The effectiveness of the proposed ALSNN-FCUC model is demonstrated on the IEEE 24-bus system by conducting time domain simulations using PSS/E.
Mingjian Tuo, Xingpeng Li
2023-07-10T20:03:08Z
http://arxiv.org/abs/2307.04880v1
# Active Linearized Sparse Neural Network-based Frequency-Constrained Unit Commitment

###### Abstract

Conventional synchronous generators are gradually being replaced by low-inertia inverter-based resources. Such a transition introduces more complicated operation conditions; frequency deviation stability and rate-of-change-of-frequency (RoCoF) security are becoming great challenges. This paper presents an active linearized sparse neural network (ALSNN) based frequency-constrained unit commitment (ALSNN-FCUC) model to guarantee frequency stability following the worst generator outage case while ensuring operational efficiency. A generic data-driven predictor is first trained to predict the maximal frequency deviation and the highest locational RoCoF simultaneously based on a high-fidelity simulation dataset, and then incorporated into the ALSNN-FCUC model. Sparse computation is introduced to avoid dense matrix multiplications. An active data sampling method is proposed to maintain the bindingness of the frequency related constraints. Besides, an active ReLU linearization method is implemented to further improve the algorithm efficiency while retaining solution quality. The effectiveness of the proposed ALSNN-FCUC model is demonstrated on the IEEE 24-bus system by conducting time domain simulations using PSS/E.

Deep learning, Frequency deviation, Frequency stability, Low-inertia power systems, Sparse neural network, ReLU linearization, Rate of change of frequency, Unit commitment.

## Nomenclature

\begin{tabular}{l l} _Sets_ & \\ \(G\) & Set of generators. \\ \(K\) & Set of lines. \\ \(K^{+}(n)\) & Set of lines with bus \(n\) as receiving bus. \\ \(K^{-}(n)\) & Set of lines with bus \(n\) as sending bus. \\ \(T\) & Set of time periods. \\ \(N\) & Set of buses. \\ \(N_{S}\) & Set of samples. \\ \(N_{L}\) & Set of neural network layers. \\ \(\mathcal{H}\) & Set of actively selected neurons. \\ \(\bar{\mathcal{H}}\) & Set of non-selected neurons. \\ \end{tabular}

\begin{tabular}{l l} _Indices_ & \\ \(g\) & Generator \(g\). \\ \(k\) & Line \(k\). \\ \(t\) & Time \(t\). \\ \(n\) & Bus \(n\). \\ \(q\) & Neural network layer \(q\). \\ \(l\) & \(l\)-th neuron of a neural network layer. \\ \(s\) & Sample \(s\). \\ \end{tabular}

\begin{tabular}{l l} _Parameters_ & \\ \(c_{g}\) & Linear operation cost for generator \(g\). \\ \(P_{g}^{min}\) & Minimum output limit of generator \(g\). \\ \end{tabular}

## I Introduction

The decarbonization of electricity generation relies on the integration of converter-based resources. With the increased penetration of renewable energy sources (RES), maintaining power system frequency stability has become a great challenge for reliable system operations [1]. Traditionally, synchronous generators play an important role in regulating frequency excursions and the rate of change of frequency (RoCoF) after a disturbance, since they ensure slower frequency dynamics [2]. Due to the retirement and replacement of conventional generation, more generation is coming from converter-based resources such as wind and solar power. Consequently, the system kinetic energy decreases significantly, leaving the system more vulnerable to large variations in load or generation [3]. Transmission system operators (TSOs) are concerned with system stability issues under high penetration levels of converter-based resources. Some have also suggested imposing extra RoCoF related constraints in the conventional unit commitment (UC) model to keep a minimum amount of synchronous inertia online [4].
EirGrid has introduced a synchronous inertial response constraint to ensure that the available inertia is above a minimum limit of 23 GWs in Ireland [5]. The Swedish TSO once ordered one of its nuclear power plants to reduce output by 100 MW to mitigate the risk of losing that power plant [6]. Several papers have included frequency related constraints in traditional security-constrained unit commitment (SCUC) formulations. Reference [7] implements a system frequency stability constrained multiperiod SCUC model. In [8]-[9], the uniform frequency response model was extended by including converter-based control, and constraints on RoCoF were then derived and incorporated into SCUC formulations. The work in [10] studied a mixed analytical-numerical approach based on multiple regions and investigated a model combining the evolution of the center of inertia with certain inter-area oscillations. However, these approaches oversimplify the problem as they neglect nodal frequency dynamics, and the actual need for frequency ancillary services would be underestimated. Ref. [11] considers the geographical discrepancies and connectivity impacts on nodal frequency dynamics. However, results show that model-based approaches may fail to handle higher-order characteristics and nonlinearities in the system frequency response; approximations in the model may also introduce extra errors into the derived constraints, resulting in conservative solutions. Recently, a pioneering data-driven approach was proposed in [12], which incorporates neural network-based frequency nadir constraints against the worst-case contingency into frequency constrained unit commitment (FCUC). Ref. [13] proposes a deep neural network (DNN) based FCUC (DNN-FCUC) which incorporates a DNN-based frequency predictor into the formulation by introducing a set of mixed-integer linear constraints. However, the computational efficiency of data-driven approaches has not yet been investigated thoroughly. Such data-driven approaches may increase the computational burden due to dense matrix multiplications [14]. On the other hand, the reformulation of the DNN would also introduce an extra group of binary variables and further degrade the computational efficiency [13]. To bridge the aforementioned gaps, we propose an active linearized sparse neural network (ALSNN) based frequency-constrained unit commitment (ALSNN-FCUC) model in this paper to secure system frequency stability following the worst contingency event. A DNN based frequency metrics predictor is trained using system operation data, which can reflect geographical discrepancies and locational frequency dynamics due to the non-uniform distribution of inertia. To incorporate the nonlinear predictor into the unit commitment model, a set of mixed-integer linear constraints is introduced. Besides, sparse computations are introduced to prune redundant parameters in the predictor to improve the efficiency of the proposed approach. In addition, an active rectified linear unit (ReLU) linearization algorithm is implemented to further improve the efficiency of the FCUC framework. The major contributions of this work are summarized as follows: * First, we improve the data-driven FCUC model by including constraints on both locational RoCoF and frequency nadir security. In contrast to existing data-driven approaches, where only frequency nadir security is considered, we propose a DNN-based frequency metrics predictor with the concept of parameter sharing.
The predictor can simultaneously track the locational RoCoF and maximal frequency deviation for post-contingency conditions where oscillations in frequency cannot be ignored. * Secondly, we analyze the dynamic model of a power system comprising a practical number of generators at the same buses; the heterogeneous responses of each node are then derived. Unlike [12], where random data injections may lead to post-contingency stability issues, model-based approaches that enforce system locational frequency security are proposed to efficiently generate realistic data for predictor training. Results show that the transient stability, small signal stability, and voltage stability of the generated conditions are well handled. * Thirdly, to the best of our knowledge, no prior work investigates the computational efficiency of DNN-FCUC. In this paper, we propose an ALSNN-FCUC model that incorporates sparse computations to perform parameter selection and increase the neural network sparsity. This sparse DNN subsequently reduces the computational burden of the framework. In addition, an active ReLU linearization method is performed over selected neurons to further improve the model efficiency. * Last, we propose an active sampling method to improve the robustness of the trained predictor. This method allows us to increase the number of frequency constrained time intervals for an FCUC problem with acceptable solving time while maintaining the bindingness of these frequency related constraints. The remainder of this paper is organized as follows. In Section II, the model-based approaches and data-driven approaches are compared. Section III details the formulation of the deep learning-based frequency constraints. Section IV describes the active ReLU linearization of the sparse DNN. Section V describes the formulation of the proposed ALSNN-FCUC model. The results and analysis are presented in Section VI. Section VII presents the concluding remarks and future work.

## II System Frequency Dynamics and Overview of Solution

### _Original Problem_

The frequency of the power system is one of the most important metrics indicating system stability. Traditionally, the frequency of the system is treated with a single-bus or center-of-inertia (COI) representation; the total power system inertia is considered the summation of the kinetic energy stored in all dispatched generators synchronized with the power system [15]: \[E_{sys}=\sum_{i=1}^{N}2H_{i}S_{B_{i}} \tag{1}\] where \(S_{B_{i}}\) is the generator rated power in MVA and \(H_{i}\) denotes the inertia constant of the generator. Assuming a disturbance in the electrical power, the dynamics between power and frequency can be modeled by the swing equation described in (2), with \(M=2H\) denoting the normalized inertia constant and \(D\) denoting the damping constant [16]: \[\Delta P_{m}-\Delta P_{e}=M\frac{d\Delta\omega}{dt}+D\Delta\omega \tag{2}\] where \(\Delta P_{m}\) is the total change in mechanical power and \(\Delta P_{e}\) is the total change in electric power of the power system. \(d\Delta\omega/dt\) is commonly known as the RoCoF. For a short period of time following a disturbance, we can derive the initial RoCoF constraint used for the system uniform model, \[f_{rcf}=\frac{\Delta P_{m}-\Delta P_{e}}{2HS_{B}}\,\omega_{n}\geq-RoCoF_{lim} \tag{3}\] However, considering only system-uniform metrics neglects the geographical discrepancies in the locational frequency dynamics at each bus, which imposes risks on power system stability [10].
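For intuition, a back-of-the-envelope check of (3) with illustrative numbers of our own (not from the paper): losing \(\Delta P=500\) MW of generation on a 50 Hz system that holds an EirGrid-style minimum of \(HS_{B}=23\) GWs of kinetic energy gives \[f_{rcf}=\frac{-500\ \text{MW}}{2\times 23\ \text{GWs}}\times 50\ \text{Hz}\approx-0.54\ \text{Hz/s},\] already beyond a \(-0.5\) Hz/s relay threshold, which is why minimum-inertia and RoCoF-constrained commitments matter in practice.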
A dynamic model is preferred in modern power systems for frequency oscillation analysis. The topological information and system parameters can be embedded into the model by using the swing equation on each individual bus to describe the oscillatory behavior within the system, \[m_{i}\ddot{\theta}_{i}+d_{i}\dot{\theta}_{i}=P_{in,i}-\sum_{j=1}^{n}b_{ij}\sin(\theta_{i}-\theta_{j}) \tag{4}\] where \(m_{i}\) and \(d_{i}\) denote the inertia coefficient and damping ratio for node \(i\) respectively, while \(P_{in,i}\) denotes the power input. A network-reduced model with \(N\) generator buses can be obtained by eliminating passive load buses via Kron reduction [17]. Focusing on the impact of network connectivity on the power system's nodal dynamics, the phase angles \(\mathbf{\theta}\) of the generator buses can be expressed by the following dynamic equation [18], \[\mathbf{M}\ddot{\mathbf{\theta}}+\mathbf{D}\dot{\mathbf{\theta}}=\mathbf{P}-\mathbf{L}\mathbf{\theta} \tag{5}\] where \(\mathbf{M}=\text{diag}(\{m_{i}\})\) and \(\mathbf{D}=\text{diag}(\{d_{i}\})\); for the Laplacian matrix \(\mathbf{L}\), the off-diagonal elements are \(l_{ij}=-b_{ij}V_{i}^{(0)}V_{j}^{(0)}\), and the diagonal elements are \(l_{ii}=\sum_{j=1,j\neq i}^{n}b_{ij}V_{i}^{(0)}V_{j}^{(0)}\). The RoCoF value at bus \(i\) can then be derived, \[f_{rcf,i}(t_{0})=\frac{\Delta P}{2\pi m}\sum_{a=1}^{N_{g}}\frac{\beta_{ai}\beta_{ab}}{\sqrt{\frac{\lambda_{a}}{m}-\frac{\gamma^{2}}{4}}\,\Delta t}\left[e^{-\frac{\gamma(t_{0}+\Delta t)}{2}}\sin\left(\sqrt{\frac{\lambda_{a}}{m}-\frac{\gamma^{2}}{4}}\,(t_{0}+\Delta t)\right)-e^{-\frac{\gamma t_{0}}{2}}\sin\left(\sqrt{\frac{\lambda_{a}}{m}-\frac{\gamma^{2}}{4}}\,t_{0}\right)\right] \tag{6}\] where \(\lambda_{a}\) is an eigenvalue of the matrix \(\mathbf{L}\), \(\beta_{ai}\) is the corresponding eigenvector value, \(m\) denotes the average inertia distribution on the generator buses, bus \(b\) is where the disturbance occurs, \(\Delta t\) is the frequency monitoring window, and \(t_{0}\) is the measuring time. \(N_{g}\) denotes the set of generator buses in the reduced model. The ratio of the damping coefficient to the inertia coefficient, \(\gamma=d_{i}/m_{i}\), is assumed constant [3]. We can then derive the dynamic RoCoF related constraints as \[f_{rcf,i}(t_{0})\geq-\text{RoCoF}_{lim},\forall\,i\in N_{G} \tag{7}\]

### _Overview of Data Driven Approach_

Given a system with \(N\) generators, the objective of the ordinary UC model is to minimize the total operating cost subject to various system operational constraints: \[\begin{split}\text{min.}&\;\mathcal{C}(\mathbf{s}_{t},\mathbf{u}_{t})\\ \text{s.t.}&\;\mathcal{F}(\mathbf{s}_{t},\mathbf{u}_{t},\mathbf{d}_{t},\mathbf{r}_{t})=0,\;\mathcal{G}(\mathbf{s}_{t},\mathbf{u}_{t},\mathbf{d}_{t},\mathbf{r}_{t})\leq 0,\;\forall t\end{split} \tag{8}\] where \(\mathcal{F}\) and \(\mathcal{G}\) are the equality and inequality constraints, respectively; \(\mathbf{s}_{t}\) denotes the system states, and \(\mathbf{u}_{t}\) is the generation dispatch at period \(t\). \(\mathbf{d}_{t}\) and \(\mathbf{r}_{t}\) denote the load profile and renewable forecast, respectively. Assume a potential disturbance \(\mathbf{\varpi}_{t}\) occurs in period \(t\). With the system nominal frequency \(f_{n}=60\) Hz as the base frequency, the model-based RoCoF constraints (6) and (7) can be derived and then added to the ordinary UC formulation to secure the system frequency stability. Assumptions made during the system model analysis may introduce approximation errors, subsequently leading to insecure or overly conservative results.
The idea of the data driven approach is to replace the model-based constraints with DNN formulations, \[\hat{\mathbf{h}}^{f}(\mathbf{s}_{t},\mathbf{u}_{t},\mathbf{r}_{t},\mathbf{\varpi}_{t})\leq\mathbf{\varepsilon} \tag{9}\] where \(\hat{\mathbf{h}}^{f}\) is the nonlinear DNN-based frequency metrics predictor, covering the system-wide maximal frequency deviation and the maximal locational RoCoF, and \(\mathbf{\varepsilon}\) is the vector of predefined thresholds.

Fig. 1: Overview of the proposed approach.

The loss of generation not only results in the largest power outage level but also leads to a reduction in system inertia, which further leads to the highest RoCoF value and frequency deviation; thus, the G-1 event is considered the worst contingency in this study. It should be noted that the output vector of the DNN-based predictor includes multiple elements, which could be generalized to other constraints without including more parameters. The overview of the working pipeline is shown in Fig. 1, in which \(\Delta f_{max}\) is the maximal frequency deviation and \(\hat{f}_{max}\) is the highest RoCoF value.

### _Model based Data Generation_

A wide-range space of all power injections is utilized in [12] to ensure reliability under vast ranges of operating conditions for a small group of generators. However, for a power system consisting of a large group of generators, the dimension of \(\mathbf{x}_{t}\) increases accordingly, which further causes an exponential increase in the dimension of the injection state space for practical system conditions [19]. Such wide-range injections may lead to divergence during the simulation initialization process, and the system would also be subject to post-contingency stability issues. Unlike random data generation over a wide-range space of dispatches, a model-based systematic data generation approach is proposed to generate reasonable and representative data that is used to train the RoCoF predictors with much less computational burden and without compromising efficiency. Traditional SCUC (T-SCUC) models and RoCoF constrained SCUC models are implemented in this process to generate training samples over various load and RES scenarios. The objective function of the three models is to minimize the total system cost, consisting of variable fuel costs, no-load costs, start-up costs, and reserve costs. RoCoF related constraints based on the equivalent model and the dynamic model are added to the formulations of the system equivalent model based RoCoF constrained SCUC (ERC-SCUC) and the location based RoCoF constrained SCUC (LRC-SCUC), respectively. The constraints on maximal frequency deviation and RoCoF for locational frequency dynamics are nonlinear. In order to incorporate the nonlinear RoCoF-related constraints into the MILP model, a piecewise linear programming method is introduced; the details of all models are presented in [11].

## III Deep Learning-Based Frequency Constraints

### _Power System Feature Definition_

The highest locational frequency deviation and the highest system locational RoCoF are considered functions of the contingency level, contingency location, system states, and unit dispatch. Since both the magnitude and the location of the contingency have an impact on the locational frequency deviation and the locational inertial response, the generator statuses and dispatch values are encoded into feature vectors [20].
For a case of period \(t\), the generator status vector is defined as follows, \[\mathbf{u}_{t}=[u_{1,t},u_{2,t},\cdots,u_{N_{G},t}] \tag{10}\] The disturbance feature vector is defined against the loss of the largest generation; the magnitude of the contingency can be expressed as \[P_{t}^{con}=\max_{g\in\mathcal{G}}\left(P_{1,t},P_{2,t},\cdots,P_{N_{G},t}\right) \tag{11}\] The location of the disturbance is represented by the index of the generator producing maximum power, \[g_{t}^{con}=\arg\max_{g\in\mathcal{G}}\left(P_{1,t},P_{2,t},\cdots,P_{N_{G},t}\right) \tag{12}\] The magnitude and location information is encoded into the disturbance feature vector as [12] \[\mathbf{\varpi}_{t}^{G}=[0,\cdots,0,\underbrace{P_{t}^{con}}_{g_{t}^{con}\text{-th entry}},0,\cdots,0] \tag{13}\] The Laplacian matrix \(\mathbf{L}\) of the grid and the Fiedler mode value depend on the power-angle characteristics, which are determined by the active power injections [11]. Thus, the active power injections of all synchronous generators are encoded into the feature vector, \[\mathbf{P}_{t}=[P_{1,t},P_{2,t},\cdots,P_{N_{G},t}] \tag{14}\] The overall feature vector of a case \(\mathbf{x}_{t}\) can then be defined as \[\mathbf{x}_{t}=[u_{1,t},\cdots,u_{N_{G},t},\ \varpi_{1,t},\cdots,\varpi_{N_{G},t},\ P_{1,t},\cdots,P_{N_{G},t}] \tag{15}\]

### _DNN-based Frequency Metrics Predictor_

Regarding the frequency related constraints, both the model-based approach and the data-driven approach share the same variables, which are determined by the generator statuses and the disturbance information. Thus, a DNN-based frequency metrics predictor is proposed, which combines the functions of locational frequency deviation and highest locational RoCoF in the frequency related constraints into a single nonlinear DNN with multiple outputs. The combination of these functions can significantly decrease the number of DNN parameters, thus reducing the additional variables introduced into the FCUC formulation. In the aftermath of a large \(G-1\) contingency, the frequency metrics predictor \(\hat{h}\) can be expressed as \[\begin{bmatrix}\hat{f}_{dev}\\ \hat{f}_{rcf}\end{bmatrix}=\hat{h}(\mathbf{x}_{t},\mathbf{W}^{\prime},\mathbf{b}^{\prime}) \tag{16}\] where \(\mathbf{W}^{\prime}\) and \(\mathbf{b}^{\prime}\) denote the well-trained neural network parameters of the frequency metrics predictor, and \(\hat{f}_{dev}\) and \(\hat{f}_{rcf}\) denote the predicted frequency deviation and highest RoCoF values, respectively, for case \(\mathbf{x}_{t}\).
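Before detailing the network itself, a small illustration (ours, with made-up numbers) of how the feature pieces (10)-(15) fit together for one period \(t\):

```python
# Assemble the feature vector x_t of (15) for a toy 4-generator system;
# the u and P values below are invented purely for illustration.
import numpy as np

u = np.array([1, 1, 0, 1])            # commitment statuses u_{g,t}, (10)
P = np.array([120., 300., 0., 80.])   # dispatches P_{g,t} in MW, (14)

g_con = int(np.argmax(P))             # (12): index of the largest unit
w = np.zeros_like(P)                  # (13): disturbance feature vector
w[g_con] = P.max()                    # (11): contingency magnitude P_t^con

x_t = np.concatenate([u, w, P])       # (15)
print(x_t)  # [1. 1. 0. 1. 0. 300. 0. 0. 120. 300. 0. 80.]
```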
Consider a DNN with \(N_{L}\) hidden layers, where ReLU is used as the activation function for each hidden layer and the output layer uses a linear activation. The forward propagation of each layer is then expressed as follows, \[\mathbf{z}_{1}=\mathbf{x}_{t}\mathbf{W}_{1}+\mathbf{b}_{1} \tag{17}\] \[\hat{\mathbf{z}}_{q}=\mathbf{z}_{q-1}\mathbf{W}_{q}+\mathbf{b}_{q} \tag{18}\] \[\mathbf{z}_{q}=\max(\hat{\mathbf{z}}_{q},0) \tag{19}\] \[\hat{f}_{dev}=\mathbf{z}_{N_{L}}\mathbf{W}_{N_{L}+1}^{dev}+b_{N_{L}+1}^{dev} \tag{20}\] \[\hat{f}_{rcf}=\mathbf{z}_{N_{L}}\mathbf{W}_{N_{L}+1}^{rcf}+b_{N_{L}+1}^{rcf} \tag{21}\] where \(\mathbf{W}_{q}\) and the vector \(\mathbf{b}_{q}\) for \(q\in\{1,\cdots,N_{L}\}\) represent the sets of weights and biases across all hidden layers; \(\mathbf{W}_{N_{L}+1}^{dev}\) and \(b_{N_{L}+1}^{dev}\) represent the weight and bias of the output layer for maximal frequency deviation prediction, and \(\mathbf{W}_{N_{L}+1}^{rcf}\) and \(b_{N_{L}+1}^{rcf}\) represent those of the output layer for the system's highest locational RoCoF prediction. The training process minimizes the total mean squared error between the predicted outputs and the labeled outputs of all training samples as follows \[\min_{\Phi}\frac{1}{N_{S}}\sum_{s=1}^{N_{S}}\left(\Delta f_{max}-\hat{f}_{dev}\right)^{2}+\left(\hat{f}_{max}-\hat{f}_{rcf}\right)^{2} \tag{22}\] where \(\Phi=\{\mathbf{W}_{q},\mathbf{b}_{q},\mathbf{W}_{N_{L}+1}^{dev},\mathbf{W}_{N_{L}+1}^{rcf},b_{N_{L}+1}^{dev},b_{N_{L}+1}^{rcf}\}\) represents the set of optimization variables.

### _Complete DNN Linearization_

Since ReLU activation functions are nonlinear, binary variables \(a_{q[l]}\), representing the activation status of the ReLU unit at the \(l\)-th neuron of the \(q\)-th layer, are introduced to include the DNN in the MILP. For a given sample \(s\), consider \(A\) to be a big number that is larger than the absolute value of all \(\hat{z}_{q[l],s}\). When the pre-activated value \(\hat{z}_{q[l],s}\) is larger than zero, constraints (23a) and (23b) will force the binary variable \(a_{q[l],s}\) to one, and the activated value will be equal to \(\hat{z}_{q[l],s}\). When \(\hat{z}_{q[l],s}\) is less than or equal to zero, constraints (23c) and (23d) will force the binary variable \(a_{q[l],s}\) to zero, and the activated value \(z_{q[l],s}\) will subsequently be set to zero. \[z_{q[l],s}\leq\hat{z}_{q[l],s}+A(1-a_{q[l],s}),\forall q,\forall l,\forall s, \tag{23a}\] \[z_{q[l],s}\geq\hat{z}_{q[l],s},\forall q,\forall l,\forall s, \tag{23b}\] \[z_{q[l],s}\leq Aa_{q[l],s},\forall q,\forall l,\forall s, \tag{23c}\] \[z_{q[l],s}\geq 0,\forall q,\forall l,\forall s, \tag{23d}\] \[a_{q[l],s}\in\{0,1\},\forall q,\forall l,\forall s, \tag{23e}\]
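To make the encoding concrete, here is a minimal Pyomo sketch of constraints (23a)-(23e) for a single neuron; it is our illustration, not the paper's code, and the weights `w`, `b`, the big-M constant `A`, and the input bounds are assumed values.

```python
# Big-M MILP encoding of one ReLU neuron with frozen pre-trained weights.
from pyomo.environ import (ConcreteModel, Var, Binary, NonNegativeReals,
                           Reals, ConstraintList, Objective, minimize)

m = ConcreteModel()
A = 1e3                                    # big-M; must bound |zhat|
m.x = Var(range(3), within=Reals, bounds=(-10, 10))
m.z = Var(within=NonNegativeReals)         # post-activation value, (23d)
m.a = Var(within=Binary)                   # ReLU on/off indicator, (23e)

w, b = [0.5, -1.0, 2.0], 0.1               # frozen weights and bias (assumed)
zhat = sum(w[i] * m.x[i] for i in range(3)) + b

m.relu = ConstraintList()
m.relu.add(m.z <= zhat + A * (1 - m.a))    # (23a)
m.relu.add(m.z >= zhat)                    # (23b)
m.relu.add(m.z <= A * m.a)                 # (23c)
m.obj = Objective(expr=m.z, sense=minimize)  # placeholder objective
```

One such binary variable and constraint triple is generated per hidden neuron and per constrained period, which is precisely the computational burden that the sparsification and linearization techniques of the next section target.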
## IV Active Linearized Sparse Computation

Although the aforementioned reformulation of the DNN introduces no approximation error, the linearization process introduces a dense parameter matrix and additional binary variables into the MILP model. Thus, the computational burden of the DNN-MILP increases, especially when multiple scheduling periods are limited by DNN constraints. To handle this problem, sparse computation and active ReLU linearization are proposed to obtain optimal efficiency.

### _Sparse Computation_

The conversion of the DNN into an MILP introduces groups of parameters. However, not all of these parameters are required to achieve high performance of the predictor. Instead of connecting every pair of neurons in adjacent layers, we prune the connections and redundant parameters from the network to decrease the density of the weight matrices without affecting the performance of the frequency metrics predictor [21]. An example of the training algorithm for sparse computation is shown in Fig. 2. In this work, the network's connections are pruned during the training process by applying a mask [22]. First, we initialize the network and pre-train the parameters of the frequency metrics predictor. For each layer chosen to be pruned, a binary mask variable is added that has the same size and shape as the layer's weight matrix and determines which of the weights participate in the forward execution of the graph, as shown in Fig. 3. The weights in each layer are sorted by their absolute values, and the smallest-magnitude weights are masked to zero. The sparsity of the parameter matrix is increased from an initial sparsity value \(s_{0}\) to a final sparsity value \(s_{final}\) over a span of \(\mu\) pruning steps with pruning frequency \(\Delta e\): \[s_{e}=s_{final}+\left(s_{0}-s_{final}\right)\left(1-\frac{e-e_{0}}{\mu\Delta e}\right)^{3} \tag{24}\] for \(e\in\{e_{0},e_{0}+\Delta e,...,e_{0}+\mu\Delta e\}\). Gradually increasing the sparsity of the network allows the training steps to recover from any pruning-induced loss in accuracy. Similar binary masks are applied to the back-propagated gradients, and the weights that were masked in the forward execution are not updated in the back-propagation step. The overall strategy is introduced as follows:

```
**Algorithm 1** Sparse Neural Network Training
Input: training dataset \(\Theta=\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\), parameters \(\theta=\{W,b\}\), mask generator \(Sp(\cdot)\), final sparsity \(s_{final}\), initial epoch \(e_{0}\), pruning frequency \(\Delta e\), pruning times \(Q\), total training epochs \(E\), batch size \(B\)
Output: optimal sparse \(W,b\)
 1: \(W\gets W_{0}\)    // initialize W with pretrained \(W_{0}\)
 2: \(b\gets b_{0}\)    // initialize b with pretrained \(b_{0}\)
 3: for \(e=1,2,...,E\) do
 4:   if \(e\neq e_{0}+\mu\Delta e\ (\mu\in Q)\) then
 5:     \(h_{1},...,h_{B}\gets SNN(\Theta_{B},\theta)\)
 6:     \(\Delta_{\theta}\gets BP(\Theta_{B},h_{1},...,h_{B},\theta)\)
 7:     \(\theta\gets LearningRule(\Delta_{\theta},\theta)\)
 8:   else
 9:     \(W\gets W\odot Sp(s_{0},s_{final},\mu,e)\)
10:     \(h_{1},...,h_{B}\gets SNN(\Theta_{B},\theta)\)
11:     \(\Delta_{\theta}\gets BP(\Theta_{B},h_{1},...,h_{B},\theta)\)
12:     \(\theta\gets LearningRule(\Delta_{\theta},\theta,Sp(s_{0},s_{final},\mu,e))\)
13:   end if
14: end for
15: return \(\theta\)
```

Fig. 2: Example of sparse fully connected neural network.

Fig. 3: Example of sparse fully connected neural network.

### _Active Sampling_

Even though sparse computation and ReLU linearization have been shown to improve the computational efficiency of the model, there are other important aspects to consider. With sparse computation introduced, a compressed predictor network tuned to the data near the critical threshold using a region-of-interest active sampling method may fail to approximate the output of the original network; subsequently, the related frequency constraints may not bind during the unit commitment process. Thus, it is reasonable to consider the tradeoff between the sparsity rate and the prediction error. Model-based data generation is utilized to generate a group of samples that are labeled with the maximal frequency deviation and the highest RoCoF following the worst contingency. Labeling the RoCoF values of enough samples to achieve the desired distributions requires significant computational resources and time, which is known as the labeling bottleneck. In other words, how to sample the cases to be labeled is essential.
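Before turning to the sampling strategy, note that the cubic schedule (24) and the magnitude mask of Algorithm 1 are compact to implement; the following sketch is ours, with assumed default values for \(s_{0}\), \(s_{final}\), \(e_{0}\), \(\Delta e\), and \(\mu\), rather than the paper's code.

```python
# Gradual magnitude pruning: the sparsity schedule (24) plus a mask that
# zeroes the smallest-magnitude fraction of a layer's weight matrix.
import numpy as np

def sparsity_at(e, s0=0.0, s_final=0.8, e0=0, delta_e=2, mu=10):
    e = min(max(e, e0), e0 + mu * delta_e)      # clamp to the pruning span
    return s_final + (s0 - s_final) * (1 - (e - e0) / (mu * delta_e)) ** 3

def magnitude_mask(W, sparsity):
    """Binary mask keeping all but the smallest-magnitude `sparsity` fraction."""
    k = int(sparsity * W.size)
    if k == 0:
        return np.ones_like(W)
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return (np.abs(W) > thresh).astype(W.dtype)

W = np.random.randn(16, 16)
for e in range(0, 21, 2):                        # prune every delta_e = 2 epochs
    W = W * magnitude_mask(W, sparsity_at(e))    # sparsity grows toward 80%
```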
In order to generate and label the desired samples, an active sampling (AS) method is proposed to actively select unlabeled samples. A frequency security discriminator is first trained on a small group of samples \(\mathcal{F}\) labeled with the highest RoCoF and frequency deviation, where the average value over a 0.1 s measuring interval is used. A security label \(f_{sec}\) is then applied to the training dataset: if the \(\Delta f_{max}\) and \(\hat{f}_{max}\) of a sample are both within the limits, \(f_{sec}\) is labeled as 1; otherwise, if either of the metrics violates its threshold, \(f_{sec}\) is labeled as 0. We then perform the active sampling strategy on the unlabeled dataset \(\mathcal{U}\) using the discriminator. Unlabeled samples are selected as the training dataset for the RoCoF and frequency deviation predictor according to the posterior probabilities provided by the discriminator. The general sampling strategy uses entropy as the measure [23]: \[x_{sec}=\operatorname*{argmax}_{x\in\mathcal{U}}\left(-\sum_{i}p(y_{i}|x)\log p(y_{i}|x)\right) \tag{25}\] \[x_{insec}=\operatorname*{argmin}_{x\in\mathcal{U}}\left(-\sum_{i}p(y_{i}|x)\log p(y_{i}|x)\right) \tag{26}\] where \(y_{i}\) ranges over all possible labelings, and \(p(y_{i}|x)\) is the predicted posterior probability of class membership \(y_{i}\). This sorting process finds the largest and smallest values from the finite set of numerical values. The selected \(x_{sec}\) are considered samples distributed close to the threshold, which can improve the accuracy of the predictor, while \(x_{insec}\) denotes samples that help the output of the tuned frequency metrics predictor approximate that of the original. In other words, they help keep the constraints binding after the sparse computation. The selected samples \(x\) are then added to \(\Theta\) and labeled with the highest RoCoF and frequency deviation for frequency metrics predictor training.

### _Active ReLU Linearization_

Activation units on the neurons of the hidden layers realize the nonlinearity of a neural network; an activation function, such as ReLU, is applied to the result of the linear combination of values from the neuron nodes [24]. Incorporating the ReLU functions of a neural network into an MILP without any approximation introduces multiple extra binary variables, subsequently increasing the computational burden of the MILP model and leading to poor efficiency. Therefore, an active linearization of the ReLU function is introduced to reduce the DNN size without too much degradation of prediction accuracy [25]. The approximation of the ReLU function is shown in Fig. 4.

Fig. 4: The linear approximation of ReLU activation function.

The weighted sum of input signals to the node is denoted by the variable \(\hat{z}\), and the output of the node is denoted by the variable \(z\). Given the upper and lower bounds \([LB,UB]\) of \(\hat{z}\), the relationship of \(z\) and \(\hat{z}\) can then be approximated by the set of constraints \(z\geq 0\), \(z\geq\hat{z}\), and \(z\leq\frac{UB(\hat{z}-LB)}{UB-LB}\). These constraints are all linear with constants \(UB\) and \(LB\). Although ReLU linearization could be applied to all the neurons, converting a DNN-FCUC model into a ReLU-linearized neural network based FCUC (RLNN-FCUC), this may lead to large approximation errors and low prediction accuracy, and the associated frequency constraints may no longer be binding. In this work, we introduce an active selection method to improve the computational efficiency of the DNN-FCUC model while maintaining the bindingness of the derived constraints.
The actively sampled dataset is first fed into the well-trained frequency metrics predictor, and a nodal positivity index \(\varepsilon_{q[l]}\) is proposed to estimate the percentage of positive pre-activated values of neuron node \(l\) in the \(q\)-th layer: \[\varepsilon_{q[l]}=\frac{1}{N_{S}}\left(\sum_{s\in N_{S}}\hat{z}_{q[l],s}-\sum_{s\in N_{S}}\left|\hat{z}_{q[l],s}-\frac{1}{N_{S}}\sum_{s\in N_{S}}\hat{z}_{q[l],s}\right|\right)\geq\gamma \tag{27}\] where \(\gamma\) is the threshold set to select the nodes suitable for ReLU linearization with less approximation error. For the prediction of a given sample \(s\), equations (23a)-(23e) for the ReLU function on the selected neurons can be replaced by (28b)-(28d) as follows, \[z_{q[l],s}\geq\hat{z}_{q[l],s},\forall q,\forall l,\forall s, \tag{28b}\] \[z_{q[l],s}\leq\frac{UB_{q[l]}\cdot(\hat{z}_{q[l],s}-LB_{q[l]})}{UB_{q[l]}-LB_{q[l]}},\forall q,\forall l,\forall s, \tag{28c}\] \[z_{q[l],s}\geq 0,\forall q,\forall l,\forall s, \tag{28d}\]

## V Mixed-Integer Formulation of ALSNN-FCUC

In this section, the proposed ALSNN-FCUC model considering frequency related constraints is formulated. The objective of the ALSNN-FCUC model is to minimize the total operating cost subject to various system operational constraints while guaranteeing system frequency stability. The formulation is shown below: \[\min_{\phi}\sum_{g\in G}\sum_{t\in T}(c_{g}P_{g,t}+c_{g}^{NL}u_{g,t}+c_{g}^{SU}v_{g,t}+c_{g}^{RE}r_{g,t}) \tag{29a}\] \[\sum_{g\in G_{n}}P_{g,t}+\sum_{k\in K^{+}(n)}P_{k,t}-\sum_{k\in K^{-}(n)}P_{k,t}-D_{n,t}+E_{n,t}=0,\ \forall n,t \tag{29b}\] \[P_{k,t}-b_{k}(\theta_{n,t}-\theta_{m,t})=0,\ \forall k,t \tag{29c}\]
Since \(\boldsymbol{x_{t}}\) contains the max operator, and disturbance vector cannot be directly used in the encoding formulation. Supplement variables \(\mu_{g,t}\) and \(\rho_{g,t}\) are used to indicate the largest generator output for period \(t\). The reformulations are expressed as follows, \[p_{g,t}^{con}-P_{g,t}\leq A(1-\mu_{g,t}),\ \ \ \ \ \ \forall g,t \tag{30a}\] \[\sum_{g\in G}\mu_{g,t}=1,\ \ \ \ \ \ \forall t\] (30b) \[\mu_{g,t}\in\{0,1\},\ \ \ \ \ \ \ \forall g,t\] (30c) \[\rho_{g,t}-P_{g,t}\geq-A(1-\mu_{g,t}),\ \ \ \ \ \ \forall g,t\] (30d) \[\rho_{g,t}-P_{g,t}\leq A(1-\mu_{g,t}),\ \ \ \ \ \ \forall g,t\] (30e) \[0\leq\rho_{g,t}\leq A\mu_{g,t},\ \ \ \ \ \ \forall g,t \tag{30f}\] where \(A\) is a big number. The input feature vector is then reformulated as follows. \[x_{t}=[u_{1,t},\cdots,u_{N_{G},t},\ \rho_{g,t},\cdots,\ \rho_{N_{G},t},P_{1,t}, \cdots,P_{N_{G},s}] \tag{31}\] Then we introduce the formulation of proposed frequency metrics predictor. As discussed before, the neuron of each layer with positivity index \(\epsilon_{g[t]}\) larger than \(\gamma\) is selected out and added into set \(\mathcal{H}\). The reformulations of ReLU activation on the selected neurons are expressed as follows. \[z_{1}=\boldsymbol{x_{t}}W_{1}+b_{1},\forall t, \tag{32a}\] \[z_{q[1],t}\geq\hat{z}_{q[1],t},\forall q,\forall l\in\mathcal{ H},\forall t,\] (32b) \[z_{q[1],t}\leq\frac{UB_{q[t]}\cdot(\hat{z}_{q[1],t}-LB_{q[t]})}{ UB_{q[1]}-LB_{q[1]}},\forall q,\forall l\in\mathcal{H},\forall t, \tag{32c}\] The ReLU function on the rest of neuron is reformulated with no approximation as follows. (33f) and (33g) constrain the maximal frequency deviation and maximal RoCoF. \[z_{q[1],t}\leq\hat{z}_{q[1],t}-A(1-a_{q[1],t}),\forall q,l\in \mathcal{\bar{H}},t, \tag{33a}\] \[z_{q[1],t}\geq\hat{z}_{q[1],t},\forall q,\forall l\in\mathcal{ \bar{H}},\forall t,\] (33b) \[z_{q[1],t}\leq A\alpha_{q[1],t},\forall q,\forall l\in\mathcal{ \bar{H}},\forall t,\] (33c) \[z_{q[1],t}\geq 0,\forall q,\forall l,\forall t,\] (33d) \[z_{q[1],t}\in\{0,1\},\forall q,\forall l,\forall t,\] (33e) \[z_{N_{L}t}W_{N_{L}+1}^{dev}+b_{N_{L}+1}^{dev}\leq f_{nom}-f_{lim}\] (33f) \[z_{N_{L}t}W_{N_{L}+1}^{ref}+b_{N_{L}+1}^{ref}\leq-RoCoF_{lim} \tag{33g}\] ## VI Result Analysis A case study on IEEE 24-bus system [26] is provided to demonstrate the effectiveness of the proposed methods. This test system contains 24 buses, 33 generators and 38 lines, which also has wind power as renewable resources. The mathematical model-based data generation is operated in Python using Pyomo [27]. The PSS/E software is used for time domain simulation and labeling process [28]. We use full-scale models for the dynamic simulation during the labeling process: GENROU and GENTPJ for the synchronous machine; IEEEX1 for the excitation system; IESGO for the turbine-governor; PSS2A for the power system stabilizer. Standard WTG and corresponding control modules are employed. The FCUC is performed using Pyomo and Gurobi on a window laptop with Intel(R) Core(TM) i7 2.60GHz CPU and 16 GB RAM. ### _Predictor Training_ We first generate 3000 samples as \(\mathcal{L}\) for predictor training. Each case is labeled with security status, 0 for insecure and 1 for secure based on post contingency conditions. To ensure the practicality of the dataset and the generality of the trained model, load profile and RES profile are sampled based on Gaussian distribution while the deviation of means value ranges from [-20%, 20%] of the based value. 
## VI Result Analysis

A case study on the IEEE 24-bus system [26] is provided to demonstrate the effectiveness of the proposed methods. This test system contains 24 buses, 33 generators, and 38 lines, and also includes wind power as a renewable resource. The mathematical model-based data generation is performed in Python using Pyomo [27]. The PSS/E software is used for the time domain simulation and labeling process [28]. We use full-scale models for the dynamic simulation during the labeling process: GENROU and GENTPJ for the synchronous machines; IEEEX1 for the excitation system; IESGO for the turbine-governor; and PSS2A for the power system stabilizer. Standard WTG and corresponding control modules are employed. The FCUC is solved using Pyomo and Gurobi on a Windows laptop with an Intel(R) Core(TM) i7 2.60 GHz CPU and 16 GB RAM.

### _Predictor Training_

We first generate 3000 samples as \(\mathcal{L}\) for predictor training. Each case is labeled with a security status, 0 for insecure and 1 for secure, based on the post-contingency conditions. To ensure the practicality of the dataset and the generality of the trained model, the load profile and RES profile are sampled from a Gaussian distribution whose mean deviates within [-20%, 20%] of the base value. The optimality gap of the solver is set to 0.1%. We assume SGs have adequate reactive power capacity and that WTGs are controlled with a unity power factor. For each strategy, we selected 500 samples from the generated training dataset. The distributions of the RoCoF values of the sampled cases are depicted in Fig. 5. The interval size of the distribution bins is set to 0.1, and the regions closest to the threshold are [0.4, 0.5] and [0.5, 0.6]. With the random selection method, most of the cases fall within [0.6, 0.7] and [0.7, 0.8], while only 15% of the cases are within the range [0.4, 0.6] closest to the threshold. With uncertainty sampling applied, nearly 40% of the selected cases center around the threshold, while cases with RoCoF values larger than 1.0 Hz/s are filtered out. When we use the proposed active learning method, the number of samples closest to the threshold increases, and the number of samples whose RoCoF values are far away from the threshold also increases, as desired. The proposed frequency metrics predictor with the same DNN structure is then tested on the different datasets. As the results in TABLE I and TABLE II show, the RoCoF prediction accuracies of all cases are above 93.27% with an error tolerance of 5%, while the frequency deviation validation accuracies are relatively lower. The predictor trained on the uncertainty sampling dataset has the highest prediction accuracy for both RoCoF and frequency deviation values, as expected, which gives more accurate frequency related constraints, while the validation accuracy of the proposed active learning dataset is only slightly lower than that of the uncertainty-sampling-based predictor. A full-range out-of-sample dataset is used for sparse neural network validation. The sparsity of the neural network is increased from 0% to 90% in intervals of 10%. The results for the RoCoF prediction accuracy with a tolerance of 5% are depicted in Fig. 6. As we can see in both cases, the validation accuracy is relatively high at the beginning of the test. When sparsity reaches 60%, the RoCoF prediction accuracy of the predictor trained on the uncertainty sampling dataset drops significantly, implying low robustness. For the predictor based on the randomly selected dataset, the accuracy drops to 80.04% at a sparsity of 50% and is lost entirely at a sparsity of 70%. Meanwhile, the predictor trained on the active sampling dataset significantly outperforms the others at a sparsity of 80%, implying accurate prediction with much less computational burden. The results in Fig. 7 show that the predictor trained on the active sampling dataset has higher robustness against sparsity than the two other cases. However, compared to the RoCoF prediction accuracy, the frequency deviation prediction accuracy is more sensitive to changes in network sparsity. Although it is superior to the two other predictors, the prediction accuracy of the predictor trained on the active sampling dataset is lost at a sparsity of 70%.

### _Simulation Results_

The total scheduling horizon is 24 hours, and hours 9-12 are selected as the time instances where frequency related constraints are applied to secure system stability against generator contingencies, considering the high penetration level of intermittent wind generation and the peak-hour impact. The test case has a demand ranging from 1,348 MW to a peak of 1,853 MW. Regarding the post-contingency frequency limits, the maximal frequency deviation should not be larger than 0.5 Hz, and the RoCoF must be higher than -0.5 Hz/s to avoid the tripping of RoCoF-sensitive protection relays.

Fig. 5: RoCoF value distributions for three different datasets.

Fig. 6: RoCoF prediction accuracy with different NN sparsity.
Fig. 7: Frequency deviation prediction accuracy with different NN sparsity.

The optimality gap is set to 0.1%. We first investigate the impact of the frequency metrics predictor's sparsity on the efficiency of the sparse neural network based FCUC (SNN-FCUC) without ReLU linearization. The heatmap of computational time is shown in Fig. 8, where 100% sparsity indicates no frequency related constraints. A computational time of less than 5 seconds indicates non-bindingness of the constraints. As we can see, the computational times of the RS and US based predictors drop significantly when sparsity reaches 70%, since the frequency related constraints under such sparsity are not binding, implying that frequency stability is not enforced, as demonstrated with time-domain simulations. For the case where the predictor is trained on the AS dataset, the solution is no longer valid in terms of frequency stability when sparsity reaches 90%. It should be noted that although the RS based predictor has a RoCoF prediction accuracy of 77.69% at a sparsity of 70%, the frequency requirements are not respected. At the same time, the AS based predictor shows much higher robustness when incorporated into the MILP problem. A frequency metrics predictor with a sparsity of 0.8 is selected based on the proposed method. The well-trained predictor is then incorporated into the ALSNN-FCUC models. The RoCoF prediction accuracy of the sparse predictor is 87.47%, and the frequency deviation prediction accuracy is 80.80%. \(\gamma\) for active neuron selection is set to 0.25. \(\hat{f}_{max}\) and \(\Delta f_{max}\) are obtained by conducting time domain simulations under the worst contingency in PSS/E at hour 10. The simulation results of the proposed ALSNN-FCUC model and the benchmark models are listed in TABLE III. Compared to T-SCUC, all frequency-constrained models have relatively higher operational costs; the extra cost comes from the effort of securing the frequency nadir and RoCoF stability. The solution of the LRC-SCUC model is relatively conservative due to approximation error. \(\hat{f}_{max}\) of the DNN-FCUC model is 0.50 Hz/s, indicating that the solution exactly satisfies the threshold with no conservativeness. It should also be noted that the \(\Delta f_{max}\) values of the models with RoCoF related constraints applied are all within the safe range, implying that RoCoF related constraints are more likely to bind than the frequency deviation constraints, and that inertia-related protections would be a main factor limiting the transition toward a RES-dominant system. A noticeable increase in computational time can also be observed when there is no approximation in the DNN-FCUC. For the proposed ALSNN-FCUC model, the computational time is reduced by 62%, from 22.56 s to 8.56 s, when sparse computation as well as active ReLU linearization is applied. The results show that this algorithm significantly improves the computational efficiency while maintaining the highest post-contingency RoCoF values within an acceptable range. We then compare the time domain simulation results of the proposed ALSNN-FCUC with those of ERC-SCUC, where the frequency related constraints are derived from the system equivalent model. The worst G-1 contingency is applied under the dispatch of hour 10. The RoCoF evolutions are shown in Fig. 9 and Fig. 10. As we can see, the widely used uniform-model-based approach cannot ensure locational RoCoF security due to the approximation error of model simplification.
With the frequency metrics predictor-based constraints added, the proposed ALSNN-FCUC framework can secure the system's highest locational RoCoF within a safe range. Additionally, the impact of the total number of constrained hours on the computational time of the FCUC models over the 24-hour scheduling horizon is investigated. The results in TABLE IV show that for DNN-FCUC without the sparse computation and active linearization process, the computational time for solving the FCUC framework increases exponentially as the number of hours that enforce frequency requirements increases. DNN-FCUC reaches the time limit of 3600 seconds when the number of total constrained periods is larger than 16, while the proposed ALSNN-FCUC model has much higher efficiency. The voltage dynamics of the generators are plotted in Fig. 11. As we can see, the proposed ALSNN-FCUC dispatch also satisfies the voltage limits for the given conditions. It should be noted that voltage and related metrics could potentially be incorporated into the frequency metrics predictor-based constraints with a nearly negligible increase in the computational burden.

Fig. 8: Computational time of SNN-FCUC with different sparsity.

Fig. 9: RoCoF evolution of ERC-SCUC model under worst contingency at hour 10.

Fig. 10: RoCoF evolution of ALSNN-FCUC model under worst contingency at hour 10.

## VII Conclusions

Reduced system inertia due to high renewable penetration levels will lead to frequency insecurity under worst-case G-1 contingencies. The incorporation of frequency related constraints into SCUC has been used to secure system post-contingency frequency stability. However, model-based approaches either fail to secure locational RoCoF stability due to approximation error or provide overly conservative solutions with extra costs. This paper proposes an ALSNN-FCUC model that incorporates frequency related constraints derived from deep neural networks. A DNN-based frequency metrics predictor is constructed to represent the maximal frequency deviation and the highest RoCoF value. With the concept of parameter sharing, more potential system metrics can be tracked without much increase in the computational burden. The proposed active sampling method can improve the robustness of the predictor when sparse computation is applied during the training process. In addition, active ReLU linearization has been implemented to further improve the FCUC computational efficiency. Verifications in PSS/E show that the proposed ALSNN-FCUC model can secure system frequency stability without conservativeness.
2301.02819
ExcelFormer: A neural network surpassing GBDTs on tabular data
Data organized in tabular format is ubiquitous in real-world applications, and users often craft tables with biased feature definitions and flexibly set prediction targets of their interests. Thus, a rapid development of a robust, effective, dataset-versatile, user-friendly tabular prediction approach is highly desired. While Gradient Boosting Decision Trees (GBDTs) and existing deep neural networks (DNNs) have been extensively utilized by professional users, they present several challenges for casual users, particularly: (i) the dilemma of model selection due to their different dataset preferences, and (ii) the need for heavy hyperparameter searching, failing which their performances are deemed inadequate. In this paper, we delve into this question: Can we develop a deep learning model that serves as a "sure bet" solution for a wide range of tabular prediction tasks, while also being user-friendly for casual users? We delve into three key drawbacks of deep tabular models, encompassing: (P1) lack of rotational variance property, (P2) large data demand, and (P3) over-smooth solution. We propose ExcelFormer, addressing these challenges through a semi-permeable attention module that effectively constrains the influence of less informative features to break the DNNs' rotational invariance property (for P1), data augmentation approaches tailored for tabular data (for P2), and attentive feedforward network to boost the model fitting capability (for P3). These designs collectively make ExcelFormer a "sure bet" solution for diverse tabular datasets. Extensive and stratified experiments conducted on real-world datasets demonstrate that our model outperforms previous approaches across diverse tabular data prediction tasks, and this framework can be friendly to casual users, offering ease of use without the heavy hyperparameter tuning.
Jintai Chen, Jiahuan Yan, Qiyuan Chen, Danny Ziyi Chen, Jian Wu, Jimeng Sun
2023-01-07T09:42:03Z
http://arxiv.org/abs/2301.02819v8
# ExcelFormer: A Neural Network Surpassing GBDTs on Tabular Data

###### Abstract

Though deep neural networks have achieved enormous success in various fields (e.g., computer vision) with supervised learning, they have so far trailed the performance of GBDTs on tabular data. Delving into this task, we determine that a judicious handling of feature interactions and feature representation is crucial to the effectiveness of neural networks on tabular data. We develop a novel neural network called ExcelFormer, which alternates in turn between two attention modules that shrewdly manipulate feature interactions and feature representation updates, respectively. A bespoke training methodology is jointly introduced to facilitate model performance. Specifically, by initializing parameters with minuscule values, these attention modules are attenuated when the training begins, and the effects of feature interactions and representation updates grow progressively up to optimum levels under the guidance of our proposed regularization schemes Feat-Mix and Hidden-Mix as the training proceeds. Experiments on 28 public tabular datasets show that our ExcelFormer approach is superior to extensively-tuned GBDTs, an unprecedented advance for deep neural networks on supervised tabular learning. The codes are available at [https://github.com/WhatAShot/ExcelFormer](https://github.com/WhatAShot/ExcelFormer).

## 1 Introduction

Neural networks have been firmly established as state-of-the-art approaches in various fields such as computer vision (Srivastava et al., 2015; Khan et al., 2022), natural language processing (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017), and automatic speech recognition (Dong et al., 2018). However, on tabular data, one of the most ubiquitous data formats, neural networks have not yet achieved performances comparable to traditional gradient boosting decision trees (GBDTs) (Chen and Guestrin, 2016; Prokhorenkova et al., 2018; Duan et al., 2020) in supervised learning, despite numerous efforts (Borisov et al., 2021). This hinders the widespread adoption of neural networks and progress towards general artificial intelligence applications. It has been suggested (Grinsztajn et al., 2022) that three inherent characteristics of tabular data impede the performance of known neural networks: irregular patterns of the target function, negative effects of uninformative features, and non-rotationally-invariant features. Based on these propositions, we identify two keys to substantially improving the capabilities of neural networks on tabular data: **(i) An appropriate feature representation learning approach.** Though it was demonstrated (Rahaman et al., 2019) that neural networks likely predict overly smooth solutions on tabular data, a deep learning (DL) model was also observed to be capable of memorizing random labels (Zhang et al., 2021). To deal with irregular target function patterns (Gorishniy et al., 2021) and spurious correlations of targets and features in tabular data, an appropriate organization of feature representation is needed to fit the irregular patterns well while maintaining generalizability. **(ii) An effective feature interaction approach.** Since features of tabular data are non-rotationally-invariant and a considerable portion of the data is uninformative, network generalization can be harmed when a model incorporates useless feature interactions.
Moreover, theoretical analysis (Ng, 2004) suggested that known neural networks are naturally ineffective in dealing with data that have very few relevant features, incurring a high worst-case sample complexity. Thus, an effective interaction approach is needed to prevent the negative effects of ill-suited feature interactions. Some previous studies designed feature embedding approaches (Gorishniy et al., 2022) to alleviate overly smooth solutions, inspired by (Tancik et al., 2020), or employed regularization (Katzir et al., 2020) and shallow models (Cheng et al., 2016) to promote model generalization, while other neural networks utilized sophisticated feature interaction approaches (Yan et al., 2023; Chen et al., 2022; Gorishniy et al., 2021) for better selective feature interactions. Although these tailored designs gained performance on supervised tabular data tasks, they are still not comparable with GBDT approaches (e.g., XGboost) on a diverse array of datasets (Borisov et al., 2021).

Our work pushes this research envelope: We develop a new neural network that, for the first time, outperforms GBDTs on a wide range of public tabular datasets. This is achieved through the cooperation of a new tabular-data-tailored architecture called ExcelFormer and a bespoke training methodology, which jointly learn appropriate feature representation update functions and judicious feature interactions (satisfying the aforementioned **(i)** and **(ii)**). For better feature representations, we propose an attention module, called the _attentive intra-feature update module_ (_AiuM_), which is more powerful than previous non-attentive representation update approaches (e.g., linear or non-linear projection networks). For feature interactions, we present a conservative approach based on a novel module called the _directed inter-feature attention module_ (_DiaM_), which avoids compromising the semantics of critical features by only allowing features of lower importance to fetch information from those of higher importance. Our ExcelFormer is mainly built by stacking these two types of modules alternately.

Since the main ingredients _AiuM_ and _DiaM_ are both flexible attention-based modules, our training methodology aims to prevent ExcelFormer from converging to an overly complicated representation function that overfits irregular target functions and from introducing useless feature interactions that hurt generalization. At the start of training, a novel initialization approach assigns minuscule values to the weights of _DiaM_ and _AiuM_, so as to attenuate the intra-feature representation updates and inter-feature interactions. During training, the effects of _DiaM_ and _AiuM_ then grow progressively to optimum levels under the guidance of our new regularization schemes Feat-Mix and Hidden-Mix. Hidden-Mix and Feat-Mix are two variants of Mixup (Zhang et al., 2018) designed specifically for tabular data, which avoid the disadvantages of the original Mixup approach (to be discussed in Sec. 4) and respectively prioritize promoting the learning of feature representations and feature interactions. Our main contributions are summarized as follows.

* We present the first neural network that outperforms GBDTs (e.g., XGboost), which is verified by comprehensive experiments on 28 public tabular datasets.
* We identify two key capabilities of neural networks for effectively handling tabular data, which we hope will inspire further research.
* To equip our ExcelFormer model with the two key capabilities, we develop new modules and a novel training methodology that cooperatively promote the model's effectiveness.
* We propose two tabular-data-specific Mixup variants, Hidden-Mix and Feat-Mix, which are superior to the vanilla input Mixup approach on tabular data.

## 2 Related Work

**Supervised Tabular Learning.** Since neural networks have been demonstrated to be efficient on various data types (e.g., images (Khan et al., 2022)), plentiful efforts were made to harness the power of neural networks on tabular data. However, so far GBDT approaches (e.g., XGboost) still remain the go-to choice (Katzir et al., 2020) for various supervised tabular tasks (Borisov et al., 2021; Grinsztajn et al., 2022), due to their superior performances on diverse tabular datasets. To achieve GBDT-level results, recent studies focused on devising sophisticated neural modules for heterogeneous feature interactions (Gorishniy et al., 2021; Chen et al., 2022; Yan et al., 2023), mimicking tree-like approaches (Katzir et al., 2020; Popov et al., 2019; Arik and Pfister, 2021) to find decision paths, or resorting to conventional approaches (Cheng et al., 2016; Guo et al., 2017). Apart from model designs, various data representation approaches, such as feature embedding (Gorishniy et al., 2022; Chen et al., 2023), discretization of continuous features (Guo et al., 2021; Wang et al., 2020), and Boolean-algebra-based methods (Wang et al., 2021), were applied to deal with irregular target patterns (Tancik et al., 2020; Grinsztajn et al., 2022). These attempts suggested the potential of neural networks, but still yielded inferior performances compared with GBDTs on a wide range of tabular datasets. Several challenges for neural networks on tabular data were summed up in (Grinsztajn et al., 2022), but no solution was given, and these challenges still remain open. Besides, there were some attempts (Wang and Sun, 2022; Arik and Pfister, 2021; Yoon et al., 2020) to apply self-supervision to tabular datasets. However, these approaches are dataset- or domain-specific, and appear difficult to adopt widely due to the heterogeneity of tabular datasets.

**Mixup and Its Variants.** The original Mixup (Zhang et al., 2018) generates new data by convex interpolations of two given data points, which was shown to be beneficial on various image datasets (Tajbakhsh et al., 2020; Touvron et al., 2021) and some tabular datasets. However, we found that the original Mixup may conflict with irregular target patterns (to be discussed in Sec. 4) and hardly cooperates with cutting-edge models (Gorishniy et al., 2021; Somepalli et al., 2021). ManifoldMix (Verma et al., 2019) and FlowMixup (Chen et al., 2020) applied convex interpolations to the hidden states, which did not fundamentally alter the way new data are synthesized and exhibited similar characteristics as the vanilla input Mixup. The follow-up variants CutMix (Yun et al., 2019), AttentiveMix (Walawalkar et al., 2020), SaliencyMix (Uddin et al., 2020), ResizeMix (Qin et al., 2020), and PuzzleMix (Kim et al., 2020) splice two images spatially, which preserves local patterns of images but is not directly applicable to tabular data. Darabi et al. (2021) and Somepalli et al. (2021) applied Mixup- and CutMix-like approaches in tabular data pre-training.
It was shown (Kadra et al., 2021) that a search through regularization approaches could promote the performance of a simple neural network up to the XGboost level. However, time-consuming hyper-parameter tuning is a necessity in their setting, while the compared XGboost and Catboost may not have been extensively tuned. In contrast, our ExcelFormer models with fixed settings achieve GBDT-level performances without hyper-parameter tuning.

## 3 ExcelFormer

### The Overall Architecture

Fig. 1 shows our proposed ExcelFormer model. ExcelFormer is built mainly from two simple ingredients, the _attentive intra-feature update module_ (_AiuM_) and the _directed inter-feature attention module_ (_DiaM_), which conduct feature representation updates and feature interactions, respectively. During processing, the \(f\) features of an input data point \(x\in\mathbb{R}^{f}\) are first tokenized by a neural embedding layer into representations of size \(d\) each, denoted as \(z^{(0)}\in\mathbb{R}^{f\times d}\). This is then successively processed by \(L\) _DiaM_s and \(L\) _AiuM_s alternately. These two modules both have a LayerNorm head, and are accompanied by additive shortcut connections as illustrated in Fig. 1. Finally, a probability vector over \(C\) categories \(p\in\mathbb{R}^{C}\) (\(C>2\)) for multi-class classification, or a scalar value \(p\in\mathbb{R}^{1}\) for regression and binary classification, is produced by a prediction head.

### Attentive Intra-feature Update Module (_AiuM_)

A possible conflict between the irregularity of target functions and the over-smooth solutions produced by neural networks was identified in (Grinsztajn et al., 2022). In known Transformer-like models (Yan et al., 2023; Gorishniy et al., 2021), the commonly-used position-wise feed-forward network (FFN) (Vaswani et al., 2017) was employed for feature representation updates. However, we empirically discovered that the FFN, containing two linear projections and a ReLU activation, is not flexible enough to fit irregular target functions, and hence we design an attention approach to handle intra-feature representation updates, by:

\[z^{\prime}=\tanh{(zW_{1}^{(l)}+b_{1}^{(l)})}\odot(zW_{2}^{(l)}+b_{2}^{(l)}), \tag{1}\]

where \(W_{1}^{(l)}\in\mathbb{R}^{d\times d}\), \(W_{2}^{(l)}\in\mathbb{R}^{d\times d}\), \(b_{1}^{(l)}\in\mathbb{R}^{d}\), and \(b_{2}^{(l)}\in\mathbb{R}^{d}\) are all learnable parameters for the \(l\)-th layer, \(\odot\) denotes element-wise product, and \(z\) and \(z^{\prime}\) denote the input and output representations, respectively. Our experiments show that Eq. (1) is more powerful than the FFN at the same computational cost. Notably, the operations in Eq. (1) do not conduct any feature interactions.

### Directed Inter-feature Attention Module (_DiaM_)

It was pointed out (Ng, 2004) that neural networks are inherently inefficient at organizing feature interactions, yet previous work empirically demonstrated the benefits of feature interactions (Chen et al., 2022; Cheng et al., 2016). Thus, we present a conservative approach for feature interactions that allows only less target-relevant features to access the information of more target-relevant features. Before feeding features into ExcelFormer, we sort them in descending order according to their feature importance (we use mutual information in this paper) with respect to the targets in the training set.
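To make Eq. (1) and the importance-based feature ordering concrete, the following is a minimal PyTorch sketch; it is our illustration rather than the authors' released code, and the use of scikit-learn's `mutual_info_classif` for the importance scores is an assumption that mirrors the paper's stated choice of mutual information.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_classif

class AiuM(nn.Module):
    """Attentive intra-feature update, Eq. (1): z' = tanh(z W1 + b1) * (z W2 + b2)."""
    def __init__(self, d: int):
        super().__init__()
        self.gate = nn.Linear(d, d)   # W1, b1
        self.value = nn.Linear(d, d)  # W2, b2

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, f, d); acts on each feature's representation independently,
        # so no feature interactions are introduced here
        return torch.tanh(self.gate(z)) * self.value(z)

def importance_order(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Indices that sort features by descending mutual information with the target."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1]

# usage: compute the order once on the training set, then keep it fixed
# order = importance_order(X_train, y_train); X_train = X_train[:, order]
```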
For judiciously handling feature interactions, we perform a special self-attention operation with an unoptimizable mask \(M\), as:

\[z^{\prime}=\sigma(((zW_{q})(zW_{k})^{T}\oplus M)/\sqrt{d})(zW_{v}), \tag{2}\]

where \(W_{q},W_{k},W_{v}\in\mathbb{R}^{d\times d}\) are all learnable matrices, \(\oplus\) is element-wise addition, and \(\sigma\) is the _softmax_ operating along the last dimension. The elements in the lower triangle portion of \(M\in\mathbb{R}^{f\times f}\) are all set to zeros, and the remaining elements of \(M\) are all set to negative infinity (we use \(-10^{5}\) in our implementation by default). This makes the elements in the upper triangle portion (except for the diagonal elements) of the attention map all zeros after the _softmax_ activation. In practice, Eq. (2) is extended to a multi-head self-attention version, with 32 heads by default.

Figure 1: Illustrating our proposed ExcelFormer model. _AiuM_ and _DiaM_ denote the _attentive intra-feature update module_ and _directed inter-feature attention module_, respectively. "Norm" denotes a LayerNorm layer (Ba et al., 2016). Before being fed into the model, the input features are sorted according to a feature importance metric (e.g., mutual information).

**Remarks.** With our _DiaM_, a feature is updated by features of higher importance, but not vice versa. This retains interactions between any two features while protecting important features to a large extent in case some interactions performed by the model are ill-suited. Our _DiaM_ might appear similar to some self-attention mechanisms (Radford et al., 2018), but a distinguishing aspect of our method is that the process is driven by feature importance (features are sorted in descending order of feature importance). Mutual information is used as the feature importance measure in this paper.

### Embedding Layer

Our embedding layer is also an attention-based module similar to _AiuM_. In Eq. (1), the parameters \(W_{1}\), \(W_{2}\), \(b_{1}\), and \(b_{2}\) are shared among features, while in the embedding layer the parameters are not shared among features, as:

\[z^{(0)}=\tanh{(x\odot W_{1}^{(0)}+b_{1}^{(0)})}\odot(x\odot W_{2}^{(0)}+b_{2}^{(0)}), \tag{3}\]

where the input features are \(x\in\mathbb{R}^{f}\), the learnable parameters are \(W_{1}^{(0)},W_{2}^{(0)}\in\mathbb{R}^{f\times d}\) and \(b_{1}^{(0)},b_{2}^{(0)}\in\mathbb{R}^{f\times d}\), and \(\odot\) is the element-wise product (with \(x\) broadcast along the representation dimension). Before being fed to the embedding layer, numerical features are normalized and categorical features are transformed into numerical features by the CatBoost Encoder implemented in the _Sklearn_ Python package.1

Footnote 1: [https://contrib.scikit-learn.org/category_encoders/catboost.html](https://contrib.scikit-learn.org/category_encoders/catboost.html)
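As a companion to Eq. (2), here is a minimal single-head PyTorch sketch of the DiaM attention, assuming the input features are already sorted in descending order of importance; this is again an illustrative reimplementation under our own naming, not the released multi-head code.

```python
import torch
import torch.nn as nn

class DiaM(nn.Module):
    """Directed inter-feature attention, Eq. (2), single-head for clarity."""
    def __init__(self, d: int, neg_inf: float = -1e5):
        super().__init__()
        self.q = nn.Linear(d, d, bias=False)
        self.k = nn.Linear(d, d, bias=False)
        self.v = nn.Linear(d, d, bias=False)
        self.scale = d ** 0.5
        self.neg_inf = neg_inf

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, f, d), feature axis sorted by descending importance
        f = z.size(1)
        scores = self.q(z) @ self.k(z).transpose(-2, -1) / self.scale
        # M: zeros on and below the diagonal, "-infinity" strictly above it,
        # so a feature attends only to itself and to more important features
        mask = torch.triu(torch.full((f, f), self.neg_inf, device=z.device), diagonal=1)
        attn = torch.softmax(scores + mask, dim=-1)
        return attn @ self.v(z)
```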
### Prediction Head

In our ExcelFormer, we do not use a class token for target prediction as in previous Transformer-based approaches for tabular data (Gorishniy et al., 2021), since the class token was shown to be inefficient for the feature interactions conducted by such approaches. Our prediction head is applied directly to the output of the last _AiuM_; it contains two linear projection layers that separately compress the information along the feature dimension and the representation dimension, by:

\[p=\phi(W_{d}(\text{PReLU}((z^{(L)})^{T}W_{f}+b_{f}))^{T}+b_{d}), \tag{4}\]

where \(z^{(L)}\) is the output of the top-most _AiuM_, \(W_{f}\in\mathbb{R}^{f\times C}\) and \(b_{f}\in\mathbb{R}^{C}\) (\(C\) is the target category count for multi-class classification with \(C>2\); \(C=1\) for regression and binary classification) compress the features, and \(W_{d}\in\mathbb{R}^{d\times 1}\) and \(b_{d}\in\mathbb{R}^{1}\) jointly compress the representation size \(d\) into 1. \(\phi\) is the _sigmoid_ for \(C=1\), and the _softmax_ for \(C>2\).

## 4 Training Methodology

Our proposed _AiuM_ and _DiaM_ satisfy the two keys **(i)** and **(ii)** given in Sec. 1, respectively. Further, we argue that their effectiveness can be improved by a tailored training methodology, since vanilla neural network training strategies were considered inefficient for tabular data (Ng, 2004; Rahaman et al., 2019). Mixup (Zhang et al., 2018) is one of the most effective regularization approaches for neural networks, but our tests showed that it does not cooperate well with some cutting-edge approaches (Gorishniy et al., 2021; Somepalli et al., 2021). Besides, such element-wise convex interpolation operations intuitively conflict with the irregular target functions of tabular datasets. Fig. 2 shows an example of an irregular target function of tabular data, and the data synthesized by the original Mixup (i.e., convex combination) obviously conflict with the target function. To address this issue, we introduce two Mixup variants, Hidden-Mix and Feat-Mix, for tabular data, which enhance model performance and avoid the conflicts shown in Fig. 2. Besides, we propose an attenuated initialization approach for these two modules. For easier understanding, we first introduce Hidden-Mix and Feat-Mix, and then the attenuated initialization approach.

Figure 2: Decision boundaries of \(k\)-Nearest Neighbors (\(k\)NN, \(k=8\)) for the 2 most important features (by mutual information) of a zoomed-in part of the Higgs dataset. The convex combinations (points on the black line) of two samples \(x_{1}\) and \(x_{2}\) of two different categories are likely in conflict with the irregular target function.

Figure 3: Examples of the Hidden-Mix and Feat-Mix operations, where "rep." means "representations".

### Hidden-Mix

Our Hidden-Mix is applied to the representations after the embedding layer and to the labels. It exchanges some representation elements of two samples (e.g., Fig. 3), by:

\[\begin{cases}z_{\text{m}}^{(0)}=S_{H}\odot z_{1}^{(0)}+(\mathbb{1}_{H}-S_{H})\odot z_{2}^{(0)},\\ y_{\text{m}}=\lambda_{H}y_{1}+(1-\lambda_{H})y_{2},\end{cases} \tag{5}\]

where \(z_{1}^{(0)},z_{2}^{(0)},z_{\text{m}}^{(0)}\in\mathbb{R}^{f\times d}\) are the feature representations of the two samples and the synthesized sample, and \(y_{1},y_{2},y_{\text{m}}\) are the labels of the two samples and the synthesized sample. The coefficient matrix \(S_{H}\) and the all-one matrix \(\mathbb{1}_{H}\) are of size \(f\times d\). \(S_{H}=[s_{1},s_{2},\ldots,s_{f}]^{T}\), whose vectors \(s_{h}\in\mathbb{R}^{d}\) (\(h=1,2,\ldots,f\)) are all identical, each having \(\lfloor\lambda_{H}\times d\rfloor\) randomly selected elements equal to 1 and the remaining elements equal to 0.
Similar to the vanilla input Mixup (Zhang et al., 2018), the scalar coefficient \(\lambda_{H}\) is sampled from the \(\mathcal{B}eta(\alpha_{H},\alpha_{H})\) distribution, where \(\alpha_{H}\) is a hyper-parameter.

**Interpretation.** Our Hidden-Mix encourages learning linear feature representation solutions. Consider a simple situation in which there are two data points (after the embedding layer), \(z_{a}\in\mathbb{R}^{f\times d}\) and \(z_{b}\in\mathbb{R}^{f\times d}\), the number of features is \(f=1\), the representation dimension is \(d=2\), and \(\lambda_{H}=\frac{1}{2}\); then we have \(y_{\text{m}}=\frac{1}{2}(y_{a}+y_{b})\) (\(y_{a}\) and \(y_{b}\) are the labels of \(a\) and \(b\), and \(y_{\text{m}}\) is the label of the synthesized data). Thus, we can infer the constraint on a neural network \(\mathbf{g}\) that \(\mathbf{g}(z_{a}[0,0],z_{b}[0,1])+\mathbf{g}(z_{b}[0,0],z_{a}[0,1])=\mathbf{g}(z_{a}[0,0],z_{a}[0,1])+\mathbf{g}(z_{b}[0,0],z_{b}[0,1])\), in which the index \([i,j]\) indicates the \(j\)-th representation element of the \(i\)-th feature. For a simple neural network \(\mathbf{g}(z[0,0],z[0,1])=w_{1}^{\mathbf{g}}z[0,0]+w_{2}^{\mathbf{g}}z[0,1]+w_{3}^{\mathbf{g}}z[0,0]z[0,1]\), it is obvious that Hidden-Mix requires \(w_{3}^{\mathbf{g}}(z_{a}[0,0]z_{b}[0,1]+z_{a}[0,1]z_{b}[0,0])\equiv w_{3}^{\mathbf{g}}(z_{a}[0,0]z_{a}[0,1]+z_{b}[0,0]z_{b}[0,1])\) for any \(z_{a}\) and \(z_{b}\), and thus \(w_{3}^{\mathbf{g}}=0\). In our ExcelFormer, _AiuM_ and the embedding layer are implemented with flexible attention operations for fitting irregular target functions, while our Hidden-Mix prioritizes learning a linear representation update for each feature to avoid over-fitting.

### Feat-Mix

See the examples in Fig. 3. Unlike Hidden-Mix, which acts on the representation dimension, our Feat-Mix swaps parts of the features between two input samples \(x_{1},x_{2}\in\mathbb{R}^{f}\) (following the input Mixup (Zhang et al., 2018)), by:

\[\begin{cases}x_{\text{m}}=\mathbf{s}_{F}\odot x_{1}+(\mathbb{1}_{F}-\mathbf{s}_{F})\odot x_{2},\\ y_{\text{m}}=\Lambda y_{1}+(1-\Lambda)y_{2},\end{cases} \tag{6}\]

where the vector \(\mathbf{s}_{F}\) and the all-one vector \(\mathbb{1}_{F}\) are of size \(f\), \(\mathbf{s}_{F}\) contains \(\lfloor\lambda_{F}\times f\rfloor\) randomly chosen 1's and the rest of its elements are 0's (\(\lambda_{F}\sim\mathcal{B}eta(\alpha_{F},\alpha_{F})\)), and \(y_{1}\), \(y_{2}\), and \(y_{\text{m}}\) are the labels of samples \(x_{1}\), \(x_{2}\), and the synthesized sample. \(\Lambda\) is the normalized sum of the mutual information of the features selected by \(\mathbf{s}_{F}\), computed by:

\[\Lambda=\frac{\sum_{\mathbf{s}_{F}^{(i)}=1}\text{MI}^{(i)}}{\sum_{i=1}^{f}\text{MI}^{(i)}}, \tag{7}\]

where \(\mathbf{s}_{F}^{(i)}\) is the \(i\)-th element of \(\mathbf{s}_{F}\), and \(\text{MI}^{(i)}\) is the mutual information of the \(i\)-th feature.
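A minimal sketch of the two schemes in Eqs. (5)-(7), written for a single pair of samples; the function names are ours, and `mi` is assumed to hold the per-feature mutual information computed on the training set.

```python
import torch

def hidden_mix(z1, z2, y1, y2, alpha_h=0.5):
    """Hidden-Mix, Eq. (5): exchange a random subset of the d representation
    channels between two samples; labels mix with the same coefficient."""
    d = z1.shape[-1]
    lam = torch.distributions.Beta(alpha_h, alpha_h).sample().item()
    s = torch.zeros(d)
    s[torch.randperm(d)[: int(lam * d)]] = 1.0   # identical row s_h for every feature
    z_m = s * z1 + (1.0 - s) * z2                # broadcasts over the feature axis
    y_m = lam * y1 + (1.0 - lam) * y2
    return z_m, y_m

def feat_mix(x1, x2, y1, y2, mi, alpha_f=0.5):
    """Feat-Mix, Eqs. (6)-(7): swap a random subset of raw features; the label
    weight is the normalized mutual information of the features kept from x1."""
    f = x1.shape[-1]
    lam = torch.distributions.Beta(alpha_f, alpha_f).sample().item()
    s = torch.zeros(f)
    s[torch.randperm(f)[: int(lam * f)]] = 1.0
    x_m = s * x1 + (1.0 - s) * x2
    Lam = (mi * s).sum() / mi.sum()              # Eq. (7)
    y_m = Lam * y1 + (1.0 - Lam) * y2
    return x_m, y_m
```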
**Interpretation.** Consider two tabular data points \(x_{a}\) and \(x_{b}\) before being processed by the embedding layer (with \(f=2\), a sampled \(\lambda_{F}=0.5\), and \(\text{MI}^{(1)}=\text{MI}^{(2)}\)) that are processed by Feat-Mix. One can easily infer the constraint on a neural network \(\mathbf{h}\) that \(\mathbf{h}(x_{a}[0],x_{b}[1])+\mathbf{h}(x_{b}[0],x_{a}[1])=\mathbf{h}(x_{a}[0],x_{a}[1])+\mathbf{h}(x_{b}[0],x_{b}[1])\), in which \(x[i]\) indicates the \(i\)-th feature value of the data point \(x\). For a neural network \(\mathbf{h}(x[0],x[1])=w_{1}^{\mathbf{h}}x[0]+w_{2}^{\mathbf{h}}x[1]+w_{3}^{\mathbf{h}}x[0]x[1]\), this suggests that \(w_{3}^{\mathbf{h}}\) is likely to be 0, i.e., Feat-Mix disposes the neural network \(\mathbf{h}\) to learn a non-feature-interaction function. It encourages _DiaM_ to include solely the necessary interactions, discarding the useless ones.

### An Attenuated Initialization

Our proposed attenuated initialization approach aims to reduce the effects of _DiaM_ and _AiuM_ at the start of model training. This initialization is built upon the commonly used He's initialization (He et al., 2015) or _Xavier_ initialization (Glorot and Bengio, 2010), by rescaling the variance of an initialized weight \(w\) with \(\gamma\) (\(\gamma\to 0^{+}\)) while keeping the expectation at 0:

\[\text{Var}(w)=\gamma\,\text{Var}_{\text{prev}}(w), \tag{8}\]

where \(\text{Var}_{\text{prev}}(w)\) denotes the weight variance used in the previous work (He et al., 2015; Glorot and Bengio, 2010). In this work, we set \(\gamma=10^{-4}\). To reduce the impact of _AiuM_ and _DiaM_, we can either apply Eq. (8) to all the parameters of these modules or to part of them. We empirically observe that these options all perform similarly. By default, we apply Eq. (8) to all the parameters in _AiuM_ and _DiaM_. Thus, these two modules have almost no effect before training.

**Interpretation.** As discussed in the **Interpretations** for Hidden-Mix and Feat-Mix, these Mixup schemes encourage a neural network to learn linear feature representation update functions and non-feature-interaction solutions by requiring the interaction coefficient terms \(w_{3}^{\mathbf{g}}\) and \(w_{3}^{\mathbf{h}}\) to be 0. Cooperating with these two schemes, our initialization approach suppresses the intra-feature representation updates and inter-feature interactions when training starts. The effects of the necessary non-linear feature representation updates and crucial feature interactions can then be added progressively under the driving force of the data. On the other hand, for a module with an additive identity shortcut like \(y=\mathcal{F}(x)+x\), our initialization approach attenuates the sub-network \(\mathcal{F}(x)\) and satisfies the property of _dynamical isometry_ (Saxe et al., 2014) for better trainability. Some previous work (Bachlechner et al., 2021; Touvron et al., 2021) suggested rescaling the \(\mathcal{F}(x)\) path as \(y=\eta\mathcal{F}(x)+x\), where \(\eta\) is a learnable scalar initialized as 0 or a learnable diagonal matrix whose elements have very small values. Different from these methods, our attenuated initialization directly gives minuscule values to the model weights at initialization, which is more flexible and allows every feature to learn adaptively from feature interactions and representation updates.

### Model Training and Loss Functions

Our ExcelFormer can handle both classification and regression tasks on tabular datasets. In training, our two proposed Mixup schemes can be applied successively, as Hidden-Mix(Embedding Layer(Feat-Mix\((x,y)\))). However, our tests suggest that ExcelFormer often performs better on a given dataset when only Feat-Mix or only Hidden-Mix is used. Thus, we use only one such Mixup scheme for any given tabular dataset. The cross-entropy loss is used for classification tasks, and the mean square error loss is used for regression tasks.
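Eq. (8) amounts to multiplying conventionally initialized weights by \(\sqrt{\gamma}\), since scaling a zero-mean weight by \(\sqrt{\gamma}\) scales its variance by \(\gamma\). A minimal PyTorch sketch, with a helper name of our own choosing:

```python
import math
import torch.nn as nn

def attenuated_init_(module: nn.Module, gamma: float = 1e-4) -> None:
    """Eq. (8): Var(w) = gamma * Var_prev(w). Here Var_prev comes from He's
    initialization; multiplying by sqrt(gamma) keeps the mean at 0."""
    for m in module.modules():
        if isinstance(m, nn.Linear):
            nn.init.kaiming_normal_(m.weight)      # He's initialization
            m.weight.data.mul_(math.sqrt(gamma))   # attenuate the module
            if m.bias is not None:
                nn.init.zeros_(m.bias)

# applied to AiuM and DiaM only; the rest of the model keeps He's initialization
```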
## 5 Experiments

### Experimental Setup

**Datasets.** For fair and comprehensive comparisons, we use 28 public tabular datasets in our experiments, of large, medium, or small scale, with numerical or categorical features, and for regression, binary classification, or multi-class classification tasks. The detailed dataset descriptions are given in Appendix A.

**Implementation Details.** The codes of our ExcelFormer and training methodology are implemented using PyTorch (Python 3.8). All the experiments are run on NVIDIA RTX 3090. We set the number of _DiaM_ and _AiuM_ layers to \(L=3\), the feature representation size to \(d=256\), and the dropout rate for the attention map to 0.3. The optimizer for our approach is AdamW (Loshchilov and Hutter, 2018) with default settings. We use our attenuated initialization approach for _AiuM_ and _DiaM_, and He's initialization (He et al., 2015) for the other parts. The learning rate is set to \(10^{-4}\) without weight decay, and \(\alpha_{H}\) and \(\alpha_{F}\) for the \(\mathcal{B}eta\) distributions are set to 0.5. These settings are for ExcelFormer with fixed hyper-parameters.

\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
Datasets & AN & IS & CP & VI & YP & GE & CH & SU & BA & BR \\ \hline
XGboost & -0.1076 & 95.78 & **-2.1370** & -0.1140 & **-0.0275** & 68.75 & 85.66 & -0.0177 & 88.97 & -0.0769 \\
Catboost & -0.0929 & 95.26 & -2.5160 & -0.1181 & **-0.0275** & 66.54 & **86.62** & -0.0220 & 89.16 & -0.0931 \\ \hline
ExcelFormer-Feat-Mix & **-0.0782** & **96.38** & -2.6590 & -1.6220 & -0.0276 & **70.38** & 85.89 & **-0.0184** & 89.00 & -0.1123 \\
ExcelFormer-Hidden-Mix & **-0.0786** & **96.72** & -2.2320 & -0.2440 & -0.0276 & **70.72** & 85.89 & **-0.0174** & 88.65 & **-0.0696** \\
ExcelFormer (Mix Tuned) & **-0.0876** & **96.51** & -2.2020 & **-0.1070** & **-0.0275** & 68.36 & 85.80 & **-0.0173** & **89.21** & **-0.0627** \\
ExcelFormer (Fully Tuned) & **-0.0778** & **96.56** & -2.1980 & **-0.0899** & **-0.0275** & **68.94** & 85.89 & **-0.0161** & **89.16** & **-0.0641** \\ \hline \hline
\end{tabular}
\begin{tabular}{l c c c c c c c c c} \hline \hline
Datasets (Continued) & EY & MA & AI & PO & BP & CR & CA & HS & HO \\ \hline
XGboost & 72.88 & 93.69 & **-0.0001605** & -4.331 & **99.96** & 85.11 & -0.4359 & **-0.1707** & **-3.139** \\
Catboost & 72.41 & 93.66 & -0.0001616 & -4.622 & 99.95 & 85.12 & -0.4359 & -0.1746 & -3.279 \\ \hline
ExcelFormer-Feat-Mix & 71.44 & 93.38 & -0.0001689 & -5.694 & 99.94 & **85.23** & **-0.4331** & -0.1835 & -3.305 \\
ExcelFormer-Hidden-Mix & 72.09 & 93.66 & -0.0001627 & **-2.862** & 99.95 & **85.22** & -0.4587 & -0.1773 & -3.147 \\
ExcelFormer (Mix Tuned) & **74.14** & **94.04** & -0.0001615 & **-2.629** & 99.93 & **85.26** & **-0.4316** & -0.1726 & -3.159 \\
ExcelFormer (Fully Tuned) & **78.94** & **94.11** & -0.0001612 & **-2.636** & **99.96** & **85.36** & **-0.4336** & -0.1727 & -3.214 \\ \hline \hline
\end{tabular}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
Datasets (Continued) & DI & HE & JA & HI & RO & ME & SG & CO & NY & Rank \\ \hline
XGboost & **-0.2353** & 37.39 & 72.45 & 80.28 & **90.48** & -0.0820 & -0.01635 & 96.92 & -0.3683 & 3.43 \\
Catboost & -0.2362 & 37.81 & 71.97 & 80.22 & 89.55 & -0.0829 & -0.02038 & 96.25 & -0.3808 & 4.36 \\ \hline
ExcelFormer-Feat-Mix & -0.2368 & 37.22 & **72.51** & **80.60** & 88.65 & -0.0821 & **-0.01587** & **97.38** & -0.3887 & 4.61 \\
ExcelFormer-Hidden-Mix & -0.2387 & **38.20** & **72.79** & **80.75** & 88.15 & **-0.0808** & **-0.01531** & **97.17** & -0.3930 & 3.75 \\
ExcelFormer (Mix Tuned) & -0.2359 & **38.65** & **73.15** & **80.88** & 89.33 & **-0.0809** & **-0.01465** & **97.43** & -0.3710 & 2.46 \\
ExcelFormer (Fully Tuned) & -0.2358 & **38.61** & **73.55** & **81.22** & 89.27 & **-0.0808** & **-0.01454** & **97.43** & **-0.3625** & 1.79 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **Performance comparison of various ExcelFormer versions with extensively tuned XGboost and Catboost on 28 public datasets. The performances of ExcelFormers that outperform both XGboost and Catboost are marked in bold, while those that outperform either XGboost or Catboost are underlined. The performances of XGboost and Catboost are bold if they are the best results.**

In hyper-parameter tuning, the Optuna library (Akiba et al., 2019) is used for all the approaches. Following (Gorishniy et al., 2021), we randomly select 80% of the data as training samples and the rest as test samples. In training, we use 20% of all the training samples for validation. For tuning our ExcelFormer, we set two tuning configurations called "Mix Tuned" and "Fully Tuned". All the settings are fully described in Appendix B.

**Compared Models.** Since Grinsztajn et al. (2022) have shown that known neural network approaches fall behind GBDTs, we compare our new ExcelFormer with the representative and popular GBDT approaches XGboost (Chen and Guestrin, 2016) and Catboost (Prokhorenkova et al., 2018). The implementations of XGboost and Catboost mainly follow (Gorishniy et al., 2021). Since we aim to extensively tune XGboost and Catboost for their best performances, we increase the number of estimators/iterations (i.e., the number of decision trees) from 2000 to 4096 and the number of tuning iterations from 100 to 500, which gives a more stringent setting than in previous work (e.g., FT-Transformer (Gorishniy et al., 2021)).

**Evaluation.** For each fixed or tuned configuration, we run the codes 5 times with different random seeds and report the average performance on the test set. For our proposed ExcelFormer, we do not use any ensemble strategy. For binary classification (binclass) tasks, we compute the area under the ROC curve (AUC) for evaluation. We use accuracy (ACC) for multi-class classification (multiclass) tasks, and the negative root mean square error (nRMSE) for regression tasks. For all these metrics, higher values indicate better performance.

### Performances

**Performances of Untuned ExcelFormer.** ExcelFormer-Feat-Mix (resp., ExcelFormer-Hidden-Mix) uses only our Feat-Mix (resp., Hidden-Mix) but not Hidden-Mix (resp., Feat-Mix). Their performances on all the 28 datasets are reported in Table 1. Both ExcelFormer-Feat-Mix and ExcelFormer-Hidden-Mix are trained with pre-specified hyper-parameters, without any hyper-parameter tuning. The average performance rank of ExcelFormer-Feat-Mix is 4.61, which is quite close to Catboost's 4.36. The average rank of ExcelFormer-Hidden-Mix is 3.75, which falls between those of Catboost (4.36) and XGboost (3.43). In pairwise comparison, ExcelFormer-Feat-Mix beats the extensively tuned XGboost on 11 out of 28 datasets, while ExcelFormer-Hidden-Mix beats XGboost on 14 out of 28 datasets. In comparison with the extensively tuned Catboost, ExcelFormer-Feat-Mix obtains better performances on 11 out of 28 datasets, while ExcelFormer-Hidden-Mix obtains better performances on 15 out of 28 datasets.
These results suggest that, by directly using ExcelFormer with the default hyper-parameters, one can easily obtain GBDT-level performances on tabular datasets. Due to the diversity of tabular datasets, a foolproof, well-performing approach that requires no tuning is very user-friendly and has great potential for practical applications, as most users are not proficient in conducting hyper-parameter tuning.

**Performances of Tuned ExcelFormer.** By tuning the configurations of our Mixup schemes (i.e., the Mixup types and the \(\alpha\) of the \(\mathcal{B}eta\) distributions), the performance rank of ExcelFormer, 2.46, is much better than XGboost (3.43) and Catboost (4.36). In direct comparison to the extensively tuned XGboost, ExcelFormer with Mixup tuning (denoted by ExcelFormer (Mix Tuned)) is superior on 18 out of 28 datasets, and it is superior on 24 out of 28 datasets compared with the extensively tuned Catboost. These results suggest that a user can easily obtain considerably better performances than the extensively tuned XGboost and Catboost by tuning only the two hyper-parameters of the Mixup configurations, without tuning the configurations of the model architecture. Moreover, one can obtain further improved performances by tuning all the configurations (listed in Appendix B). In this way, ExcelFormer (denoted by ExcelFormer (Fully Tuned)) yields a better performance rank of 1.79, outperforming (or being on par with) the extensively tuned XGboost and Catboost on 20 and 24 out of 28 datasets, respectively. Observing the performance ranks over the 28 datasets, one can see that the fully tuned ExcelFormer performs better than its Mix Tuned version, which in turn is much better than ExcelFormer with fixed hyper-parameters. From a practical perspective, ExcelFormer with fixed hyper-parameters is sufficient to attain results on par with the extensively tuned XGboost/Catboost, and the tuned versions of ExcelFormer can yield remarkably better results.

Figure 4: Model performances for different types of tasks. "Cat.", "XGb.", and "Exc." denote "Catboost", "XGboost", and "ExcelFormer", while "M.T." and "F.T." in the brackets indicate the "Mix Tuned" and "Fully Tuned" versions of ExcelFormer.

### Usage Suggestions

In practice, we would suggest that a user employ our ExcelFormer as follows: (1) first try ExcelFormer with fixed hyper-parameters, which can meet the needs in most situations; (2) try the "Mix Tuned" setting if the fixed ExcelFormer versions are not satisfactory; (3) finally, tune all the hyper-parameters of ExcelFormer if better performances are desired. Fig. 4 gives performance comparisons on different types of tasks, based on which we offer two further suggestions to users. (i) If the highest possible performance is desired, it is wise to tune ExcelFormer following the "Mix Tuning" or "Fully Tuning" setting, for any type of task. (ii) For a multi-class classification task, ExcelFormer should be the first choice, since it commonly outperforms GBDTs, even without hyper-parameter tuning.

### Ablation Study

We analyze the effects of our proposed ingredients empirically on 6 tabular datasets (we find that the conclusions on the other datasets are similar). We take the better-performing model of ExcelFormer-Feat-Mix and ExcelFormer-Hidden-Mix (without hyper-parameter tuning) as the baseline, and either remove or replace one ingredient each time for comparison.
Fig. 5 reports the performances of the following ExcelFormer versions: (1) He's initialization is used to replace our attenuated initialization approach for _AiuM_ and _DiaM_, (2) a vanilla self-attention module (vanilla SA) is used to replace _DiaM_ for heterogeneous feature interactions, (3) the linear feed-forward network (FFN) is used to replace _AiuM_ for feature representation updates, (4) both Feat-Mix and Hidden-Mix are not used, and (5) the input Mixup (Zhang et al., 2018) (\(\alpha=0.5\)) is used to replace our proposed Mixup schemes. One can see that the performances often decrease when an ingredient is removed or replaced, suggesting that all of our ingredients are beneficial in general. However, it is also observed that the compared model versions perform better than the baseline on 1 or 2 out of the 6 datasets, indicating that an ingredient may have a negative impact on some datasets. In the model development, we retain all these designs since they show positive impacts on most of the datasets. Notably, it is difficult to optimize a design that is always effective, since tabular data are highly diverse and our goal is to present a neural network that can accommodate as many tasks as possible. Comparing the baseline with the versions using the input Mixup or no Mixup, it is clear that our proposed Mixup schemes are more suitable for tabular data, outperforming on 5 or 6 out of the 6 datasets, respectively. Comparing the no-Mixup version and the version with the input Mixup, the version with the input Mixup performs better on 4 out of 6 datasets; the no-Mixup version is better on the other 2 datasets. These results further indicate that the input Mixup is not consistently effective across various tabular datasets, though it beats our proposed Mixup schemes on the GE dataset.

Figure 5: **Ablation study on our proposed ingredients of ExcelFormer using six datasets. "-" denotes removal and "+" denotes inclusion. The bars colored in "\(\mathrm{purple}\)" indicate worse performances compared with the baseline, while the bars in "\(\mathrm{orange}\)" indicate better performances. Note that the lower the "RMSE", the better; the higher the "ACCURACY", the better.**

## 6 Conclusions

In this paper, we developed a new neural network model, ExcelFormer, for supervised tabular data tasks (e.g., classification and regression), and achieved performances beyond the level of GBDTs without bells and whistles. Our proposed ExcelFormer achieves competitive performances compared to the extensively tuned XGboost and Catboost even without hyper-parameter tuning, while hyper-parameter tuning can improve ExcelFormer's performances further. Such superiority is demonstrated by comprehensive experiments on 28 public tabular datasets, and is achieved by the cooperation of a simple but efficient model architecture and an accompanying training methodology. We expect that our ExcelFormer, together with the training methodology, will serve as an effective tool for supervised tabular data applications, and inspire future studies to develop better approaches for dealing with tabular data.

## Acknowledgements

This research was partially supported by the National Key R&D Program of China under grant No. 2018AAA0102102 and the National Natural Science Foundation of China under grant No. 62132017.
2303.14802
Economics-Inspired Neural Networks with Stabilizing Homotopies
Contemporary deep learning based solution methods used to compute approximate equilibria of high-dimensional dynamic stochastic economic models are often faced with two pain points. The first problem is that the loss function typically encodes a diverse set of equilibrium conditions, such as market clearing and households' or firms' optimality conditions. Hence the training algorithm trades off errors between those -- potentially very different -- equilibrium conditions. This renders the interpretation of the remaining errors challenging. The second problem is that portfolio choice in models with multiple assets is only pinned down for low errors in the corresponding equilibrium conditions. In the beginning of training, this can lead to fluctuating policies for different assets, which hampers the training process. To alleviate these issues, we propose two complementary innovations. First, we introduce Market Clearing Layers, a neural network architecture that automatically enforces all the market clearing conditions and borrowing constraints in the economy. Encoding economic constraints into the neural network architecture reduces the number of terms in the loss function and enhances the interpretability of the remaining equilibrium errors. Furthermore, we present a homotopy algorithm for solving portfolio choice problems with multiple assets, which ameliorates numerical instabilities arising in the context of deep learning. To illustrate our method we solve an overlapping generations model with two permanent risk aversion types, three distinct assets, and aggregate shocks.
Marlon Azinovic, Jan Žemlička
2023-03-26T19:42:16Z
http://arxiv.org/abs/2303.14802v1
# Economics-Inspired Neural Networks with Stabilizing Homotopies+

###### Abstract

Contemporary deep learning based solution methods used to compute approximate equilibria of high-dimensional dynamic stochastic economic models are often faced with two pain points. The first problem is that the loss function typically encodes a diverse set of equilibrium conditions, such as market clearing and households' or firms' optimality conditions. Hence the training algorithm trades off errors between those -- potentially very different -- equilibrium conditions. This renders the interpretation of the remaining errors challenging. The second problem is that portfolio choice in models with multiple assets is only pinned down for low errors in the corresponding equilibrium conditions. At the beginning of training, this can lead to fluctuating policies for different assets, which hampers the training process. To alleviate these issues, we propose two complementary innovations. First, we introduce _Market Clearing Layers_, a neural network architecture that automatically enforces all the market clearing conditions and borrowing constraints in the economy. Encoding economic constraints into the neural network architecture reduces the number of terms in the loss function and enhances the interpretability of the remaining equilibrium errors. Furthermore, we present a homotopy algorithm for solving portfolio choice problems with multiple assets, which ameliorates numerical instabilities arising in the context of deep learning. To illustrate our method we solve an overlapping generations model with two permanent risk aversion types, three distinct assets, and aggregate shocks.

_JEL classification_: C61, C63, C68, E21.

_Keywords_: computational economics, deep learning, deep neural networks, implicit layers, market clearing, global solution method, life-cycle, occasionally binding constraints, overlapping generations.

## 1 Introduction

A rapidly growing literature (see, _e.g._, Duarte, 2018; Valaitis and Villa, 2021; Azinovic et al., 2022; Ebrahimi Kahou et al., 2021; Han et al., 2021; Maliar et al., 2021; Kase et al., 2022; Duarte et al., 2021; Folini et al., 2021; Bretscher et al., 2022) uses deep neural networks to approximate price, policy, and value functions in economic models. To find good approximations, the learning algorithm often aims directly at minimizing the errors in the equilibrium conditions, such as Euler equations, market clearing conditions, and budget constraints, on simulated paths of the economy.1 Since the equilibrium conditions will not be fulfilled exactly, the researcher has to, implicitly, trade off errors between different equilibrium conditions. This can be challenging, especially for equilibrium conditions of different types and units.

Footnote 1: Examples of deep learning algorithms which directly minimize errors in the equilibrium conditions are Azinovic et al. (2022); Maliar et al. (2021); Ebrahimi Kahou et al. (2021); Kase et al. (2022).

We propose a new architecture for deep neural networks, which we call market clearing layers, that, by construction, enforces exact market clearing conditions. Encoding known properties coming from the economic model directly into the architecture has the advantage that it reduces the search space of neural network approximators to only the subset consistent with market clearing. Hence, the neural network does not need to learn a property we know ex ante.
Furthermore, the number of different error terms in the loss function is reduced, and hence a given loss value is easier to interpret. Next to a simple and easily implementable way to ensure market clearing, we also introduce an implicit layer which additionally ensures that borrowing constraints are always satisfied. To do so, the last layer of the network solves a quadratic program to obtain adjustments to the layer inputs with computationally desirable properties. We are not the first to leverage economic insights for the design of neural network architectures. Ebrahimi Kahou et al. (2021) and Han et al. (2021) encode a symmetry property into the neural network architecture. Han et al. (2021) and Azinovic et al. (2022) use suitable activation functions in the output layers to ensure borrowing constraints are satisfied.

Additionally, we propose a homotopy method to stabilize the training progress of deep learning based solution methods for economic models with multiple assets. Homotopy methods are a class of methods which start by solving a simple problem and then gradually transform the simple problem into the harder problem of interest. In our case, the simple problem to start from is an economic model with a single asset; the hard problem of interest is a model with multiple assets. Multiple assets pose a problem for deep learning based solution methods since a precise solution is required for agents to correctly distinguish between multiple assets. Especially at the beginning of training, when the policies are not yet precisely approximated, this can lead to erratic policies with spurious bang-bang allocations to different assets, which may lead to a failure of the training procedure. To address this, we propose to first solve a single-asset model by setting the supply and short-sale constraints of the remaining assets to zero. Once the single-asset model is solved, we keep training the model and slowly increase the supply of the second asset. We proceed this way until the full model is solved. Homotopy methods have been used to solve for equilibria in economic models before; for example, Schmedders (1998) uses a homotopy to gradually lift bounds on asset sales. The idea of introducing small amounts of an additional asset is reminiscent of Judd and Guu (2000), who first solve a model with deterministic asset returns and then use perturbation methods to introduce small amounts of uncertainty into the asset returns.

The two contributions of this paper go hand in hand. Since the homotopy method we propose slowly, and one asset after the other, increases the supply of additional assets in the economy, it naturally leverages the market clearing neural network architecture. To the best of our knowledge, we are the first to study market clearing architectures for neural networks and to lay out a homotopy algorithm to step-wise introduce multiple assets into a deep learning based solution method.

## 2 Market Clearing Layers

To introduce market clearing layers, we first consider a neural network approximating policy and price functions for a single asset and a finite number of agents, abstracting from the rest of the economy, such as the households' objective functions. In section 2.5 we show how market clearing layers can be applied in a concrete economic model. Consider an economic model with a set of finitely many agents, indexed by \(i\in\mathcal{I}:=\{1,\ldots,H\}\) and associated population weights \(\mu_{i}\).
In an overlapping generations model, for example, \(i\) could index a specific age group and \(\mu_{i}\) could denote its size.2 Let \(\mathbf{x}\in\mathbb{R}^{N}\) denote the state of the economy. Assume that agents can save in a single asset \(b\) with price \(p^{b}\) and aggregate supply \(B\). Let \(b^{i}(\mathbf{x})\) denote the policy function of agent \(i\). Market clearing requires that the total asset demand of households adds up to the total asset supply:

\[\sum_{i=1}^{H}\mu_{i}b^{i}(\mathbf{x})=B. \tag{1}\]

Footnote 2: In an economy with a continuum of agents, \(i\) could index a specific combination of shocks and asset holdings, and \(\mu_{i}\) would denote the mass of agents at a specific histogram point.

We are interested in using a neural network to approximate the households' asset policies. Let

\[\mathcal{N}_{\rho}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{H}\times\mathbb{R}_{>0},\quad\mathcal{N}_{\rho}(\mathbf{x})=\begin{pmatrix}\tilde{b}^{1}(\mathbf{x})\\ \tilde{b}^{2}(\mathbf{x})\\ \ldots\\ \tilde{b}^{H}(\mathbf{x})\\ \hat{p}^{b}(\mathbf{x})\end{pmatrix} \tag{2}\]

denote a neural network with parameters \(\rho\) that maps the state of the economy to an approximation of the household savings choices and the price of the asset. The goal of this paper is to devise a neural network architecture that, by design, ensures that market clearing is satisfied while also retaining desirable computational properties.

### Simple Adjustment for Pure Market Clearing

Let the aggregate asset demand implied by the households' policies \(\tilde{b}^{i}(\mathbf{x})\) be denoted by

\[B^{H}(\mathbf{x}):=\left(\sum_{i=1}^{H}\mu_{i}\tilde{b}^{i}(\mathbf{x})\right). \tag{3}\]

The corresponding excess demand is given by

\[\Delta B(\mathbf{x})=B^{H}(\mathbf{x})-B. \tag{4}\]

We would like to adjust the policies \(\tilde{b}^{i}(\mathbf{x})\) such that they are consistent with market clearing, _i.e._ such that the excess demand is zero. Let \(b^{i}(\mathbf{x})\) denote the adjusted policy functions. There are multiple ways to adjust the policies \(\tilde{b}^{i}\) to make up for the excess demand \(\Delta B(\mathbf{x})\). For example, one way would be to rescale the policies to obtain new policies

\[\forall i\in\mathcal{I}:\ b^{i}(\mathbf{x})=\tilde{b}^{i}(\mathbf{x})\frac{B}{B^{H}(\mathbf{x})}. \tag{5}\]

However, in general this adjustment would not be convenient, for example in settings where \(B^{H}(\mathbf{x})=0\). Another way would be to adjust the policy of only a single household, for example

\[j\in\mathcal{I}:b^{j}(\mathbf{x})=\tilde{b}^{j}(\mathbf{x})-\frac{\Delta B(\mathbf{x})}{\mu^{j}}, \tag{6}\]
\[\forall i\in\mathcal{I},i\neq j:b^{i}(\mathbf{x})=\tilde{b}^{i}(\mathbf{x}). \tag{7}\]

This would load all the excess demand onto a single agent and cause an asymmetry in the accuracy of the policies across different households. We propose the solution to the following problem as a desirable adjustment mechanism:

\[\{b^{i}(\mathbf{x})\}_{i\in\mathcal{I}}=\arg\min\sum_{i\in\mathcal{I}}\frac{1}{2}\mu_{i}(b^{i}(\mathbf{x})-\tilde{b}^{i}(\mathbf{x}))^{2}\quad\text{subject to}:\ \sum_{i\in\mathcal{I}}\mu_{i}b^{i}(\mathbf{x})=B. \tag{8}\]

The problem has the simple solution that all households' policies are adjusted by an equal amount:

\[\forall i\in\mathcal{I}:b^{i}(\mathbf{x})=\tilde{b}^{i}(\mathbf{x})-\Delta B(\mathbf{x}). \tag{9}\]
Next to avoiding a distortion of the symmetry between the Euler equations across households, this formulation has the advantage that it also works for assets in zero net supply as well as for assets whose net supply is state-dependent.3

Footnote 3: The latter is true for the two previously mentioned formulations as well.

Despite its computational simplicity, a disadvantage of this approach is that, while the adjusted policies are consistent with market clearing, they are not necessarily consistent with borrowing constraints. A softplus activation function, for example, could ensure that the \(\tilde{b}^{i}(\mathbf{x})\) are non-negative; however, this would not imply that the adjusted, market clearing \(b^{i}(\mathbf{x})\) are non-negative as well. Ideally, we would like the adjustment to ensure that both market clearing and borrowing constraints are always satisfied. We address this point in the next section.

### Implicit Layer to Encode Market Clearing and Borrowing Constraints

The market clearing adjustment described in the previous section does not ensure that borrowing constraints are satisfied. One way to deal with this would be to separately predict Karush-Kuhn-Tucker (KKT) multipliers and to include the KKT conditions in the loss function. An alternative would be to use the Fischer-Burmeister equation (see Jiang, 1999), as in, for example, Maliar et al. (2021), to encode the Euler equation error and the violation of the borrowing constraint in a single error term. We propose a modification to the market clearing mechanism described in section 2.1, which simultaneously ensures market clearing and that the borrowing constraints are satisfied, by adding the borrowing constraint to the above minimization problem:

\[\{b^{i}(\mathbf{x})\}_{i\in\mathcal{I}}=\arg\min\sum_{i\in\mathcal{I}}\frac{1}{2}\mu_{i}(b^{i}(\mathbf{x})-\tilde{b}^{i}(\mathbf{x}))^{2}\quad\text{subject to}:\ \sum_{i\in\mathcal{I}}\mu_{i}b^{i}(\mathbf{x})=B,\quad b^{i}(\mathbf{x})\geq\underline{b}^{i}(\mathbf{x}), \tag{10}\]

where \(\underline{b}^{i}(\mathbf{x})\) denotes the borrowing limit of agent \(i\). The solution to this problem can be obtained with solvers for box-constrained quadratic programs, which are meanwhile implemented in modern deep learning libraries, such as JAX, and provide efficiently computed derivatives for use in backpropagation algorithms.4 The drawback of these algorithms is that they substantially slow down the training process, are often written for a general class of quadratic programs, and do not exploit symmetries that are often present in economic models.

Footnote 4: We used the BoxOSQP solver from the jaxopt library: [https://jaxopt.github.io/stable/quadratic_programming.html](https://jaxopt.github.io/stable/quadratic_programming.html).

Consider, for example, a borrowing constraint at zero for all agents: \(\forall i:\ b^{i}(\mathbf{x})\geq 0\). In this case we can ensure that \(\tilde{b}^{i}\geq 0\) by using a softplus activation function in the output layer of the neural network. If the excess demand is negative, _i.e._ \(\Delta B(\mathbf{x})<0\), the solution to the constrained quadratic program coincides with the simple solution described in section 2.1: the adjusted asset demand of all agents is given by \(b^{i}(\mathbf{x})=\tilde{b}^{i}(\mathbf{x})-\Delta B(\mathbf{x})\) and no constraint is violated.
When the excess demand is positive, all asset choices \(\tilde{b}^{i}\) below a threshold value \(\tilde{b}^{\text{threshold}}\) are adjusted to lie on the borrowing constraint, and all remaining asset holdings are adjusted by the same amount such that markets clear. Let \(\mathcal{J}\) denote the set of households \(j\) with \(\tilde{b}^{j}<\tilde{b}^{\text{threshold}}\). The solution to the constrained quadratic program is given by

\[b^{i}=\begin{cases}0&\text{for }i\in\mathcal{J},\\ \tilde{b}^{i}-\underbrace{\left(\Delta B(\mathbf{x})-\sum_{j\in\mathcal{J}}\mu_{j}(\tilde{b}^{j}-0)\right)}_{\text{remaining excess demand}}&\text{for }i\in\mathcal{I}\setminus\mathcal{J}.\end{cases} \tag{11}\]

Solving the constrained quadratic program hence simplifies to finding a single number \(\tilde{b}^{\text{threshold}}\). Since \(\tilde{b}^{\text{threshold}}\in[\min_{i}\tilde{b}^{i},\max_{i}\tilde{b}^{i}+\epsilon]\) for \(\epsilon>0\), it can be computed efficiently with a few bisection steps. For our examples, this turned out to be faster than using the general solver for constrained quadratic programs.

#### 2.2.1 Policy updating for falsely constrained agents

One problem when ensuring the borrowing constraint as described above is that for constrained households, _i.e._ households with \(\tilde{b}^{i}<\tilde{b}^{\text{threshold}}\) and \(b^{i}=0\), an infinitesimal change in the neural network output \(\tilde{b}^{i}\) does not influence the adjusted policy \(b^{i}\). For a loss function purely based on the post-adjustment prediction \(b^{i}\), this is a problem if an agent is predicted to be constrained even though it should not be. The derivative of the loss function with respect to the pre-adjustment prediction \(\tilde{b}^{i}\) would be zero, and the gradient would hence not help to improve the corresponding parameters in the neural network.5 To address this issue, we provide a way to propagate the signal from the loss function to the neural network, so that the gradient descent step will lead to improved neural network parameters, which increase the prediction for \(\tilde{b}^{i}\) in such cases. For clarity, we detail it in the context of the simple example below in section 2.5. While our solution allows us to retain the signal while enforcing the borrowing constraints by design, we admit that it is based on additional terms in the loss function, which is unfortunate given the very goal of this paper. Another solution, of course, is to follow the simple adjustment described above in section 2.1. Using reformulations like the Fischer-Burmeister equation (see Jiang, 1999)6 or the Garcia-Zangwill trick (see Zangwill and Garcia, 1981; Judd et al., 2000), the KKT conditions can be summarized by a single equation and no additional terms in the loss function are required.

Footnote 5: A similar problem could arise in models where the last layer has a softplus activation when the pre-activated value is very negative. While the softplus activation would still guarantee a positive derivative, the derivative could be vanishingly small.

Footnote 6: The Fischer-Burmeister function is \(\psi(x,y):=x+y-\sqrt{x^{2}+y^{2}}\). While the Fischer-Burmeister function is differentiable, its roots satisfy \(x\geq 0\), \(y\geq 0\), and \(xy=0\).

### Choosing Between the Two Market-Clearing Algorithms

In sections 2.1 and 2.2 we laid out two ways to ensure that market clearing is satisfied exactly.
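To summarize the two adjustments in code, here is a minimal NumPy sketch (our own illustration, not the authors' implementation): `clear_market` applies the closed-form solution of Eq. (9), and `clear_market_constrained` solves the QP (10) for lower-bounded policies by bisecting over a single common shift, which is equivalent to searching for the threshold \(\tilde{b}^{\text{threshold}}\) described above. It assumes the weights \(\mu_{i}\) sum to one and that \(B\geq\sum_{i}\mu_{i}\underline{b}^{i}\), so the problem is feasible.

```python
import numpy as np

def clear_market(b_tilde, mu, B):
    """Eq. (9): shift every policy by the excess demand so that mu @ b = B."""
    excess = mu @ b_tilde - B
    return b_tilde - excess

def clear_market_constrained(b_tilde, mu, B, lo=None, tol=1e-12):
    """QP (10) with lower bounds lo: b_i = max(b_tilde_i + lam, lo_i), where the
    common shift lam is found by bisection so that mu @ b = B."""
    lo = np.zeros_like(b_tilde) if lo is None else lo
    demand = lambda lam: mu @ np.maximum(b_tilde + lam, lo)
    lam_lo = np.min(lo - b_tilde) - 1.0   # everyone at the bound: demand = mu @ lo <= B
    lam_hi = np.max(lo - b_tilde) + 1.0
    while demand(lam_hi) < B:             # expand until the root is bracketed
        lam_hi += 1.0
    while lam_hi - lam_lo > tol:          # demand(lam) is monotonically increasing
        mid = 0.5 * (lam_lo + lam_hi)
        lam_lo, lam_hi = (mid, lam_hi) if demand(mid) < B else (lam_lo, mid)
    return np.maximum(b_tilde + 0.5 * (lam_lo + lam_hi), lo)
```

When the excess demand is negative and no bound binds, the constrained version reduces to the closed-form adjustment of Eq. (9).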
The simple algorithm described in section 2.1 has the advantage that we can obtain the solution to problem (8) in closed form, as given in equation (9). The market clearing policies can hence be computed with minimal computational complexity. The disadvantage is that the adjusted policies may violate the borrowing constraints. Consequently, the borrowing constraints must be encoded in the loss function, for example by using the Fischer-Burmeister equation (see Jiang, 1999). The solver-based adjustment described in section 2.2 has the advantage that the solution to problem (10) ensures that, in addition to market clearing, all borrowing constraints are always satisfied. The disadvantage is that a closed-form solution for constrained quadratic programs is not available, and we therefore need to invoke a solver, which slows down the training. Furthermore, the solver-based adjustment faces the problem that we need to add auxiliary terms to the loss function in order to ensure that the neural network weights are updated if an agent is falsely predicted to be constrained, as described in section 2.2.1.

Which of the two market clearing algorithms is better may depend on the model at hand and on whether a strict enforcement of the borrowing constraints adds stability to the training algorithm. When the borrowing constraints can be conveniently encoded using the Fischer-Burmeister equation, we found that the advantage of ensuring the borrowing constraints by design is not large enough to make up for the loss in speed resulting from the invocation of a quadratic solver each time the neural network is evaluated.

### Learning algorithm

We use a version of the Deep Equilibrium Nets method developed by Azinovic et al. (2022). Following the Deep Equilibrium Nets algorithm, we proceed in three steps. First, we parameterize the equilibrium functions of the economy (_i.e._ value, policy, and price functions) using a deep neural network. Second, we write down a loss function, defined as the mean squared residual of the model equations that characterize the equilibrium functions, evaluated over a batch of states. Third, we update the network weights by minimizing the loss function along simulated paths of the economy. Because the loss function includes all the equilibrium conditions of the economy, achieving a sufficiently low level of loss over some region of the state space indicates that the neural network has learned how to construct an approximate model solution over that region. By minimizing the loss function over states obtained from a long simulation of the economy, we make sure that the neural network learns to approximate the model solution on the approximate ergodic set of the economy. As discussed in Judd et al. (2011), drawing states from the approximate ergodic set instead of sampling from some of its enclosing hypercubes allows for a drastic reduction of the computational complexity of obtaining an approximate solution.7

Footnote 7: Sparse grids (see Krueger and Kubler, 2004) and adaptive sparse grids (see Brumm and Scheidegger, 2017), for example, require a hyper-cubic domain.

Relative to the original Deep Equilibrium Nets algorithm, our algorithm differs along two dimensions. First, our neural networks feature Market Clearing Layers, hence their outputs automatically satisfy market clearing conditions. Therefore, market clearing conditions do not have to be included in the loss function. Moreover, the states we obtain from the simulation are always consistent with market clearing.
Second, we use a modified simulation algorithm. Instead of simulating several paths of the economy for many periods before retraining the neural network, we simulate a large number of states for only one period forward before updating the network weights by training on those newly simulated states.8

Footnote 8: Results shown by Zemlicka (2023) suggest that this approach improves the stability of the training process.

Specifically, we simulate a number of parallel trajectories for one period forward. Let \(N^{\text{trajectories}}\) denote the number of parallel trajectories. To simulate the model forward, we use the policies as approximated by the neural network and draw random exogenous shocks according to their transition probabilities. In each simulation step, we hence obtain \(N^{\text{trajectories}}\) new states. These newly simulated states constitute our dataset to update the approximated policies by training the neural network on the dataset to minimize the loss function. Let \(N^{\text{epochs}}\) denote the number of epochs we train on each simulated dataset, and let \(N^{\text{mini-batch}}\) denote the mini-batch size. An epoch refers to a single training pass through the entire dataset.9 The mini-batch size refers to the number of data points used for each step of stochastic gradient descent.10 Hence, each simulated dataset of \(N^{\text{trajectories}}\) new states is used for \(\frac{N^{\text{trajectories}}N^{\text{epochs}}}{N^{\text{mini-batch}}}\) steps of stochastic gradient descent. After training the neural network, we simulate a new dataset. We refer to one simulation of new training data, together with the subsequent training of the neural network on that data, as one episode. Let \(N^{\text{episodes}}\) denote the total number of episodes used to train the neural network. Our simulation procedure starts out with \(N^{\text{trajectories}}\) copies of a single feasible state of the economy. As the policies improve and as we simulate the model forward, the set of states generated by simulation converges toward an approximation of the ergodic set of states of the economy.

Footnote 9: See Goodfellow et al. (2016) for a general introduction to deep learning.

Footnote 10: We use the Adam optimizer (Kingma and Ba, 2014), which is a variant of stochastic gradient descent and the de-facto standard in the deep learning literature.

In order to evaluate the accuracy of our approximate solution, we simulate the model forward without retraining the neural network parameters and evaluate the equilibrium conditions on the newly generated set of states.
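Schematically, this training loop can be sketched as follows. The helpers `simulate_one_step` and `loss_fn` are hypothetical placeholders for the model-specific simulation and residual functions described above, and we assume that \(N^{\text{trajectories}}\) is divisible by \(N^{\text{mini-batch}}\).

```python
import jax
import optax

def train(params, states, key, optimizer, n_episodes, n_epochs, n_minibatch):
    opt_state = optimizer.init(params)
    for _ in range(n_episodes):
        # Simulate every trajectory one period forward with the current
        # policies; the new states form the next training dataset.
        key, sim_key = jax.random.split(key)
        states = simulate_one_step(params, states, sim_key)   # hypothetical helper
        for _ in range(n_epochs):
            key, perm_key = jax.random.split(key)
            perm = jax.random.permutation(perm_key, states.shape[0])
            for batch_idx in perm.reshape(-1, n_minibatch):
                grads = jax.grad(loss_fn)(params, states[batch_idx])  # hypothetical loss
                updates, opt_state = optimizer.update(grads, opt_state, params)
                params = optax.apply_updates(params, updates)
    return params, states
```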
### Simple Example

We illustrate the market clearing mechanism with an overlapping generations economy with a single risk-free asset and borrowing constraints.

#### 2.5.1 Model

Time is discrete. Every period, a shock \(z_{t}\) realizes, which determines the aggregate labor income. We assume that \(\log(z_{t})\) follows an AR(1) process

\[\log(z_{t})=\rho\log(z_{t-1})+\sigma\epsilon_{t}, \tag{12}\]

where \(\rho\) denotes the persistence and \(\sigma\) the volatility, and where the innovation \(\epsilon_{t}\) is drawn from a standard normal distribution. Every period a new cohort of measure one enters the economy and lives deterministically for \(H<\infty\) periods. Households of age \(h\in\{1\dots H\}\) receive an exogenous income \(y_{t}^{h}=z_{t}y^{h}\), which is given by an age-specific share \(y^{h}\) of aggregate labor income \(z_{t}\). Households rent housing services with fixed supply \(H^{r}\), consume, and can trade a risk-free one-period bond with fixed supply \(B\) subject to borrowing constraints. Let \(b_{t}^{h}\) denote the bond holdings of age group \(h\) in the beginning of period \(t\) and let \(\underline{b}\) denote the exogenous borrowing constraint: \(b_{t+1}^{h+1}\geq\underline{b}\). We assume that households are born and die without assets, \(b_{t}^{1}=b_{t}^{H+1}=0\). To adjust their bond holdings, households have to pay an adjustment cost \(\zeta^{b}\frac{1}{2}(b_{t+1}^{h+1}-b_{t}^{h})^{2}\). Households of age \(h\) maximize their remaining time-separable discounted expected lifetime utility given by

\[\sum_{i=0}^{H-h}\mathrm{E}_{t}\left[\beta^{i}\left(u(c_{t+i}^{h+i})+\psi^{h}v(h_{t+i}^{r,h+i})\right)\right], \tag{13}\]

where \(c_{t}^{h}\) denotes the consumption of age group \(h\) in period \(t\), \(h_{t}^{r,h}\) denotes the consumption of housing services of age group \(h\) in period \(t\), and \(\beta\) denotes the discount factor. We assume constant relative risk aversion utility for both types of consumption with coefficient of relative risk aversion \(\gamma\), \(u(c)=\frac{c^{1-\gamma}}{1-\gamma}\) and \(v(h^{r})=\frac{(h^{r})^{1-\gamma}}{1-\gamma}\). The weight for housing services, \(\psi^{h}\), is age dependent. The households' budget constraint is given by

\[c_{t}^{h}=z_{t}y^{h}+b_{t}^{h}-p_{t}^{b}b_{t+1}^{h+1}-p_{t}^{b}\zeta^{b}\frac{1}{2}(b_{t+1}^{h+1}-b_{t}^{h})^{2}-p_{t}^{r}h_{t}^{r,h}, \tag{14}\]

where \(p_{t}^{b}\) denotes the price of the bond and \(p_{t}^{r}\) denotes the price for renting housing services. The Bellman equation corresponding to the households' problem is given by

\[V^{h}(z_{t},b_{t}^{h}) =\max_{b_{t+1}^{h+1},h_{t}^{r,h}}\left\{u(c_{t}^{h})+\psi^{h}v(h_{t}^{r,h})+\beta\mathrm{E}\left[V^{h+1}(z_{t+1},b_{t+1}^{h+1})\right]\right\}\] (15) subject to : \[c_{t}^{h} =z_{t}y^{h}+b_{t}^{h}-b_{t+1}^{h+1}p_{t}^{b}-h_{t}^{r,h}p_{t}^{r}-p_{t}^{b}\zeta^{b}\frac{1}{2}\left(b_{t+1}^{h+1}-b_{t}^{h}\right)^{2} \tag{16}\] \[0 \leq b_{t+1}^{h+1}-\underline{b} \tag{17}\]

The Karush-Kuhn-Tucker conditions for the households' savings choice are given by

\[(p_{t}^{b}+p_{t}^{b}\zeta^{b}(b_{t+1}^{h+1}-b_{t}^{h}))u^{\prime}(c_{t}^{h}) =\beta\mathrm{E}_{t}\left[u^{\prime}(c_{t+1}^{h+1})(1+p_{t+1}^{b}\zeta^{b}(b_{t+2}^{h+2}-b_{t+1}^{h+1}))\right]+\lambda_{t}^{h} \tag{18}\] \[(b_{t+1}^{h+1}-\underline{b}) \geq 0\] (19) \[\lambda_{t}^{h} \geq 0\] (20) \[\lambda_{t}^{h}(b_{t+1}^{h+1}-\underline{b}) =0. \tag{21}\]

Similarly, the optimality condition for the intratemporal renting choice is given by

\[p_{t}^{r}u^{\prime}(c_{t}^{h})=\psi^{h}v^{\prime}(h_{t}^{r,h}). \tag{22}\]

We provide details on parameter choices in appendix A.

Functional Rational Expectations Equilibrium

Let \(\mathbf{b}:=[b^{1},\ldots,b^{H}]\in\mathbb{R}^{H}\) denote the distribution of asset holdings across age groups. Let \(\mathbf{x}:=[z,\mathbf{b}]\in\mathbb{R}^{H+1}\) denote the state of the economy. Following Spear (1988) and Krueger and Kubler (2004), a functional rational expectations equilibrium is given by a set of \(H-1+H\) policy functions, which map the state of the economy to the bond purchases and the consumption of rental services of each age group, together with two price functions, which map the state of the economy to the prices of the bond and of rental services, such that the policies and prices are consistent with market clearing as well as the households' optimality conditions.
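Before turning to the neural network approximation, note that the budget constraint (16) maps directly into code. A minimal sketch with our own variable names:

```python
def consumption(z, y_h, b, b_next, h_rent, p_b, p_r, zeta_b):
    # Budget constraint (16): labor income plus maturing bond holdings,
    # minus new bond purchases, rent, and the quadratic adjustment cost.
    adj_cost = p_b * zeta_b * 0.5 * (b_next - b) ** 2
    return z * y_h + b - p_b * b_next - p_r * h_rent - adj_cost
```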
Approximating the equilibrium functions with deep neural networks

Following Azinovic et al. (2022), we aim to solve the model by approximating the equilibrium functions with a deep neural network. Let \(\mathcal{N}_{\rho}:\mathbb{R}^{H+1}\rightarrow\mathbb{R}^{2H+1}\) denote a neural network we use to approximate the price and policy functions, such that

\[\mathcal{N}_{\rho}(\mathbf{x}_{t})=\begin{bmatrix}\tilde{b}_{t}^{1}\\ \ldots\\ \tilde{b}_{t}^{H-1}\\ \tilde{h}_{t}^{r,1}\\ \ldots\\ \tilde{h}_{t}^{r,H}\\ \hat{p}_{t}^{b}\\ \hat{p}_{t}^{r}\end{bmatrix} \tag{23}\]

In contrast to Azinovic et al. (2022), we now apply the market clearing layers described in section 2 to obtain approximations for the households' policies \(\hat{b}_{t}^{1},\ldots,\hat{b}_{t}^{H-1}\) and \(\hat{h}_{t}^{r,1},\ldots,\hat{h}_{t}^{r,H}\), which are already consistent with market clearing. We illustrate both approaches: the simple adjustment described in section 2.1 and the solver-based adjustment described in section 2.2, which additionally ensures that the borrowing constraints are satisfied. Due to the Inada property of the utility from housing services, there is no borrowing constraint for renting services, and the two adjustments coincide for renting.

Loss function

Since our algorithm ensures that market clearing holds exactly for all predicted policy functions, the remaining equilibrium conditions are the households' first order conditions. As in Azinovic et al. (2022), we construct a loss function as the mean squared error in the remaining equilibrium conditions and then train the neural network to minimize those errors on simulated paths of the economy. Let \(\hat{c}_{t}^{h}\) denote the consumption of age group \(h\), which the neural network, together with the market clearing layer, predicts for state \(\mathbf{x}_{t}\). We use the Fischer-Burmeister function, \(\psi(x,y):=x+y-\sqrt{x^{2}+y^{2}}\), to encode the KKT conditions for each age group into a single equation. Following Judd (1998), we rewrite the equations such that deviations from fulfilling them exactly are interpretable as relative consumption errors. The resulting errors in the equilibrium conditions for age group \(h\) are given by

\[\mathrm{err}_{\rho}^{c,h}(\mathbf{x}_{t}) =\psi\left(\frac{(u^{\prime})^{-1}\left(\frac{\beta\mathrm{E}_{t}\left[u^{\prime}(\hat{c}_{t+1}^{h+1})(1+\hat{p}_{t+1}^{b}\zeta^{b}(\hat{b}_{t+2}^{h+2}-\hat{b}_{t+1}^{h+1}))\right]}{\hat{p}_{t}^{b}+\hat{p}_{t}^{b}\zeta^{b}(\hat{b}_{t+1}^{h+1}-\hat{b}_{t}^{h})}\right)}{\hat{c}_{t}^{h}}-1,\;\frac{\hat{b}_{t+1}^{h+1}-\underline{b}}{\hat{c}_{t}^{h}}\right) \tag{24}\] \[\mathrm{err}_{\rho}^{r,h}(\mathbf{x}_{t}) =\frac{(u^{\prime})^{-1}\left(\frac{\psi^{h}v^{\prime}(\hat{h}_{t}^{r,h})}{\hat{p}_{t}^{r}}\right)}{\hat{c}_{t}^{h}}-1. \tag{25}\]

Equation (24) encodes the errors in the optimality conditions for the choice of saving in the bond, and equation (25) encodes the errors in the optimality conditions for the choice of renting housing services. Let \(\mathcal{D}\) denote a set of \(\left|\mathcal{D}\right|\) states. Our loss function, which encodes all remaining equilibrium conditions, is given by

\[l_{\rho}(\mathcal{D}):=\frac{1}{\left|\mathcal{D}\right|}\sum_{\mathbf{x}\in\mathcal{D}}\left(\frac{1}{H-1}\sum_{h=1}^{H-1}\left(\mathrm{err}_{\rho}^{c,h}(\mathbf{x})\right)^{2}+\frac{1}{H}\sum_{h=1}^{H}\left(\mathrm{err}_{\rho}^{r,h}(\mathbf{x})\right)^{2}\right). \tag{26}\]
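As an illustration, the Fischer-Burmeister residual of equation (24) could be assembled as below; `c_implied` denotes the consumption implied by inverting the Euler equation, and all inputs are assumed to be computed from the network's market-clearing predictions.

```python
import jax.numpy as jnp

def fischer_burmeister(x, y):
    # psi(x, y) = x + y - sqrt(x^2 + y^2); psi(x, y) = 0 exactly when
    # x >= 0, y >= 0 and x * y = 0 (the KKT complementarity conditions).
    return x + y - jnp.sqrt(x ** 2 + y ** 2)

def bond_kkt_error(c_implied, c_hat, b_next, b_limit):
    # Equation (24): the relative Euler equation error, paired with the
    # scaled slack in the borrowing constraint.
    return fischer_burmeister(c_implied / c_hat - 1.0,
                              (b_next - b_limit) / c_hat)
```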
Updating the policies of falsely constrained households

When using our solver-based method, there would be no feedback from the loss function to a neural network prediction \(\tilde{b}^{h}\) when the solver sets the household onto the borrowing constraint. This is problematic when the household is predicted to be constrained but should not be. In such cases, \(\frac{\hat{b}_{t+1}^{h+1}-\underline{b}}{\hat{c}_{t}^{h}}=0\), while \(\frac{(u^{\prime})^{-1}\left(\frac{\beta\mathrm{E}_{t}\left[u^{\prime}(\hat{c}_{t+1}^{h+1})(1+\hat{p}_{t+1}^{b}\zeta^{b}(\hat{b}_{t+2}^{h+2}-\hat{b}_{t+1}^{h+1}))\right]}{\hat{p}_{t}^{b}+\hat{p}_{t}^{b}\zeta^{b}(\hat{b}_{t+1}^{h+1}-\hat{b}_{t}^{h})}\right)}{\hat{c}_{t}^{h}}-1<0\). This would correctly lead to a positive term in the loss function, \(\left(\mathrm{err}_{\rho}^{c,h}(\mathbf{x}_{t})\right)^{2}>0\), but to no feedback to the neural network parameters that would increase the prediction \(\tilde{b}^{h}\). To restore this feedback, we add the square of the following term to the loss function

\[\mathrm{err}_{\rho}^{h,\text{false binding}}(\mathbf{x}_{t}):=\frac{1}{1+\tilde{b}_{t+1}^{h+1}}\times e^{-\frac{\left(\hat{b}_{t+1}^{h+1}-\underline{b}\right)^{2}}{10^{-5}}}\times\max\left\{-\frac{(u^{\prime})^{-1}\left(\frac{\beta\mathrm{E}_{t}\left[u^{\prime}(\hat{c}_{t+1}^{h+1})(1+\hat{p}_{t+1}^{b}\zeta^{b}(\hat{b}_{t+2}^{h+2}-\hat{b}_{t+1}^{h+1}))\right]}{\hat{p}_{t}^{b}+\hat{p}_{t}^{b}\zeta^{b}(\hat{b}_{t+1}^{h+1}-\hat{b}_{t}^{h})}\right)}{\hat{c}_{t}^{h}}+1,\,0\right\}. \tag{27}\]

The intuition behind these terms is as follows. The last term, the max-expression, is always non-negative and only different from zero when the consumption implied by the Euler equation falls short of predicted consumption, that is, when the household is currently saving too little, in which case it should not be on the borrowing constraint. The second term, \(e^{-\frac{\left(\hat{b}_{t+1}^{h+1}-\underline{b}\right)^{2}}{10^{-5}}}\), is a differentiable approximation to a function which is always zero, except when \(\hat{b}_{t+1}^{h+1}=\underline{b}\), in which case it is equal to one. Hence, it is always non-negative and only positive if the household is predicted to be constrained. Taken together, the last two terms in equation (27) ensure that the term is only positive if the household is falsely predicted to be constrained. Finally, the first term, \(\frac{1}{1+\tilde{b}_{t+1}^{h+1}}\), ensures that the overall error term is reduced by increasing \(\tilde{b}_{t+1}^{h+1}\), thus restoring a pass-through from the loss function to the neural network parameters governing the prediction for the household's policy.

Integration

An evaluation of the loss function implied by the model requires the computation of conditional expectations in the Euler equations, which characterize the households' behavior. Despite a large number of state variables, this economy features only one source of uncertainty: a stationary AR(1) process with Gaussian innovations. This structure allows us to approximate conditional expectations using standard Gauss-Hermite quadrature (for more details on numerical integration in the context of economic modeling, see Judd, 1998). We denote the number of quadrature nodes with \(N^{\text{integration}}\) and choose \(N^{\text{integration}}=8\).
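A minimal sketch of the quadrature for the conditional expectation over the AR(1) innovation in equation (12); `expectand` is a hypothetical function evaluating the inside of the expectation at next period's shock value.

```python
import numpy as np

def expectation_gh(expectand, log_z, rho, sigma, n_nodes=8):
    # E[f(z')] with log(z') = rho * log(z) + sigma * eps, eps ~ N(0, 1).
    # Gauss-Hermite nodes integrate against exp(-u^2), so we substitute
    # eps = sqrt(2) * u and divide the weighted sum by sqrt(pi).
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    z_next = np.exp(rho * log_z + sigma * np.sqrt(2.0) * nodes)
    return np.sum(weights * expectand(z_next)) / np.sqrt(np.pi)
```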
#### 2.5.2 Training

To parameterize the equilibrium objects of the economy (policies and prices), we use a densely connected feed-forward neural network with two hidden layers. The dimensionality of its input is \(N^{\text{input}}=21\), since the state vector includes both the aggregate productivity shock and the asset distribution across all twenty age groups in the economy. For the two hidden layers, we choose \(N^{\text{hidden 1}}=N^{\text{hidden 2}}=400\) neurons with relu activation functions.11 Finally, the network output is a vector of \(N^{\text{output}}=41\) elements consisting of the bond savings and housing services consumption policy functions, and two price functions for the price of the bond and the price for renting housing services. Outputs corresponding to prices are transformed using the softplus activation in order to ensure that the predicted prices are always positive.12

Footnote 11: The relu activation function is given by \(\text{relu}(x):=\max\{0,x\}\). Relu is short for the rectified linear unit.

Footnote 12: The softplus activation function is given by \(\text{softplus}(x):=\log(1+\exp(x))\).

To train the network to approximate the equilibrium price and policy functions, we use the training procedure described in section 2.4. To obtain training states, we simulate \(N^{\text{trajectories}}=8192\) parallel state trajectories for \(N^{\text{episodes}}=3584\) episodes. For each episode (_i.e._ each simulation step), we train the neural network for \(N^{\text{epochs}}=10\) epochs. In each epoch, we split the simulated dataset of \(N^{\text{trajectories}}\) states into batches of \(N^{\text{minibatch}}=128\) states. Each batch is used to compute one stochastic gradient descent update of the neural network weights. To calculate those updates, we use the Adam optimizer (Kingma and Ba, 2014), a version of the stochastic gradient descent algorithm, with a learning rate of \(\alpha^{\text{learn}}=1\times 10^{-5}\).13

Footnote 13: To improve the robustness of our learning algorithm, we transformed loss function gradients using the zero_nans transformation before using them as an input for the Adam optimizer. Specifically, we use adam and zero_nans provided by the optax library: [https://optax.readthedocs.io/en/latest/](https://optax.readthedocs.io/en/latest/). The zero_nans transformation replaces NaNs, when they occur, with 0, avoiding a failure of the training. Furthermore, we found it useful to reset the internal state of the Adam optimizer to zero at the beginning of each training episode.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Parameters & \(N^{\text{input}}\) & \(N^{\text{hidden 1}}\) & \(N^{\text{hidden 2}}\) & \(N^{\text{output}}\) \\ & & Activations & Activations & Activations \\ \hline \multirow{2}{*}{Values} & \multirow{2}{*}{21} & 400 & 400 & 41 \\ & & relu & relu & see text \\ \hline \hline \end{tabular} \end{table} Table 1: Network Architecture chosen for the single asset model.
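In Flax, the architecture of table 1 could be sketched as follows; the ordering of the 41 outputs (39 pre-adjustment policies followed by the two prices) is our assumption for this illustration.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class PolicyNet(nn.Module):
    # Two hidden layers of 400 relu units; the last two outputs are the
    # bond and rent prices and pass through softplus to stay positive.
    @nn.compact
    def __call__(self, x):               # x: 21-dimensional state vector
        x = nn.relu(nn.Dense(400)(x))
        x = nn.relu(nn.Dense(400)(x))
        out = nn.Dense(41)(x)
        policies, prices = out[..., :39], out[..., 39:]
        return jnp.concatenate([policies, jax.nn.softplus(prices)], axis=-1)
```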
#### 2.5.3 Accuracy

To evaluate the quality of our solution, we evaluate statistics of the errors in the equilibrium conditions along simulated paths of the economy. We simulate a total of \(2^{13}=8192\) new states without retraining the neural network. Figure 1 reports statistics on the remaining errors in the KKT conditions for each age group, expressed as relative consumption errors. As the figure shows, the models are solved precisely, with the 99th percentile of errors in the optimality conditions well below \(1\%\) for each age group and both policies. The left panel in figure 1 shows the simple market clearing architecture described in section 2.1, and the right panel shows the solver-based architecture, which additionally guarantees that the borrowing constraints are always satisfied. Indeed, we can see that the equilibrium conditions are fulfilled exactly for the young agents, for which the borrowing constraint is always binding.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Parameters & \(N^{\text{episodes}}\) & \(N^{\text{trajectories}}\) & \(N^{\text{epochs}}\) & \(N^{\text{minibatch}}\) & \(N^{\text{integration}}\) & \(\alpha^{\text{learn}}\) \\ \hline Values & 3584 & 8192 & 10 & 128 & 8 & \(10^{-5}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Hyperparameters chosen for the single asset model.

Figure 1: Statistics on the violation of the equilibrium conditions by age group on a simulated path of 8192 states. The left panel shows the statistics for the simple market clearing architecture and the right panel shows the results for the solver-based architecture, which additionally guarantees that the borrowing constraints are always satisfied. The top row shows the errors in the KKT conditions for the bond and the bottom row shows the errors in the Euler equation for the renting of housing services. The different lines, from the lowest to the highest, show the minimum, the 10th percentile, the average, the 90th percentile and the maximum.

## 3 Homotopy Algorithm for Portfolio Choice

While deep learning based solution methods can handle high-dimensional state spaces comparatively well, portfolio choice problems pose a challenge for this approach. The portfolio choice is pinned down only at low levels of errors in the households' optimality conditions. In the beginning of training, the errors are still high, and it is therefore hard to learn the portfolio choice correctly. As a consequence, the training process is prone to get stuck at spurious solutions featuring a bang-bang type of portfolio allocation. In general, those oscillations are difficult to overcome. In simulation-based solution algorithms, this issue is amplified, since the intermediate "solution" is used to simulate the dynamics of the economy to produce new state values, on which the network is subsequently trained. Bad policies might hence generate economically non-sensible states, which are far from the domain on which the neural network was trained previously. Those oscillations can thus create a feedback loop, which might hamper the convergence of the whole algorithm.

To ameliorate the propensity of portfolio choice models to generate zig-zag oscillations, we propose a homotopy procedure. Homotopy methods (see Schmedders, 1998) are a general class of methods whose idea is to start by solving a simple problem and then slowly transform the problem into the harder problem of interest. If the transformations of the problem are gradual enough, the previously obtained solution to the nearby problem provides a good starting point to solve the slightly harder, transformed problem. In order to leverage this concept for solving portfolio choice models, we start by solving a single-asset economy, which is then gradually enriched with new assets. By starting with solving a simple problem up to a high degree of precision, and then creating a sequence of perturbed problems, where every two consecutive problems are very similar to each other, we are able to guide the solution procedure and avoid spurious zig-zag solutions, which may otherwise prevent the solution procedure from converging to a high-quality solution. For a practical implementation of the homotopy procedure, we utilize a natural nesting structure present
in portfolio choice models. A multi-asset economy can be transformed into a single-asset one by setting the net supply of all but one asset to zero and imposing a strict no short-sale constraint. Given a zero net supply and a strict no short-sale constraint, equilibrium allocations of that asset are uniformly zero. Moreover, the loss function used to train the network can be parameterized with weights on the Euler equations of the different assets. By setting the weights on the Euler equations for some assets to zero and imposing zero net supply coupled with a strict no short-sale constraint, the loss function constructed to represent the multiple-asset economy characterizes the single-asset economy. In addition to the parameters governing asset supply and the weights in the loss function, we also employ a mask parameter that multiplies all the asset policies. We initially set the mask to zero for all assets which are not yet included in the economy. The mask parameter is increased to small positive values when the corresponding asset is added to the economy. This nesting structure allows us to implement our homotopy procedure using a simple loop that implements the deep learning procedure for a particular parameterization of the economy and the loss function. When adding the \(N\)-th asset, we first use the \(N-1\) asset economy and the \(N\)-th asset's Euler equations to construct a price function for the \(N\)-th asset. Next, we introduce a small amount of the \(N\)-th asset and take the policy and the price functions constructed in the previous step as an initial guess. Then we run the deep learning procedure to solve for an equilibrium of this perturbed economy. Our homotopy procedure can be summarized in the following steps:

* Parameterize all price and policy functions using a neural network equipped with market clearing layers. This network takes as an input a full state vector of the multi-asset economy and outputs all prices and portfolio allocations. Initialize its parameters.
* For all but one asset, set the net supply to zero and impose strict no short-sale constraints on those assets. Set the weights in the loss function on the Euler equations associated with the assets in zero net supply to zero. Set the mask parameter for the corresponding policy functions to zero.
* Select initial state values for the training procedure.
* Starting from the initial weights and state values, run the training procedure to solve for an equilibrium of the single asset economy.
* Set the weights in the loss function on the Euler equations associated with the second asset to their full value (_e.g._ 1). Keep the zero net supply and the no short-sale constraint.
* Use the Euler equations associated with the second asset to solve for its price function. Since the weight on these Euler equations is non-zero, the network will be able to learn a price function implied by zero allocations of that asset. Recall that this second asset is in zero net supply, and agents are bound by strict no short-sale constraints when trading it.
* Set the mask parameter for the second asset to a small, but non-zero value (_e.g._ 0.01). Now the network can predict non-zero allocations of the second asset.
The choice of a small mask value helps to avoid instability associated with a potentially large and discontinuous jump.
* Set the net supply of the second asset to a small, but non-zero value. Use the network weights and state values from the previous step as an initial guess for solving this economy.
* Increase the net supply of the second asset by another step. Re-solve the economy starting from the weights and states from the previous step. Repeat this procedure until the supply target for the second asset is reached.
* If required, perform the same procedure with the no short-sale constraint (_i.e._ relax it slowly).
* Set the weight in the loss function on the Euler equations associated with the third asset to its full value (_e.g._ 1).
* Since the weight on the Euler equations is non-zero, the network will be able to learn a price function implied by zero allocations of that asset. Recall that this third asset is in zero net supply, and agents are bound by strict no short-sale constraints when trading it.
* Set the mask parameter for the third asset to a small, but non-zero value (_e.g._ 0.01). Now the network can predict non-zero allocations of the third asset. The choice of a small mask value helps to avoid instability associated with a potentially large and discontinuous jump.
* Set the net supply of the third asset to some small, but non-zero level. Solve this economy starting from the network weights from the previous step.
* Increase the net supply of the third asset by another step. Re-solve the economy starting from the weights and states from the previous step. Repeat this procedure until the supply target for the third asset is reached.
* If required, perform the same procedure with the no short-sale constraint (_i.e._ relax it slowly).
* To add a fourth and any further assets, perform the analogous procedure.

### Example with Three Assets

#### 3.1.1 Model

To illustrate the homotopy method, together with our market-clearing neural network architecture, we enrich the overlapping generations model introduced in section 2.5.1 with two additional assets. Next to a one-period risk-free bond, households can also trade claims to a Lucas tree as well as claims to the stream of rent payments for housing services. We think of the former as a stock and of the latter as house ownership. The households hence have to allocate their total savings between three different assets. This portfolio choice problem is challenging for deep learning based solution methods, rendering the model a suitable testing ground for our homotopy method. The remaining components are the same as in the model described in section 2.5.1.

Asset structure

As before, households can purchase a risk-free one-period bond in net supply \(B\) at equilibrium prices \(p_{t}^{b}\). Additionally, households can trade claims to a Lucas tree at equilibrium prices \(p_{t}^{s}\), which are in net supply \(S\).14 Every period, the Lucas tree pays an exogenous amount of dividends \(d_{t}=dz_{t}\), which is perfectly correlated with aggregate labor income. Next to deriving utility from consumption, households derive age-specific utility from consuming housing services, which they rent every period at the equilibrium rent \(p_{t}^{r}\). Finally, households can trade claims to the aggregate housing stock at the equilibrium price \(p_{t}^{o}\), which are in net supply \(H^{o}\).
The total housing stock in the economy, which can be rented by households, is the sum of the housing supply \(H^{o}\) owned by households inside the model and a housing supply \(H^{ex}\) with owners outside the model. In this model, a claim to the housing stock is a claim to the stream of rents. Owning a house is hence similar to owning a Lucas tree, where the dividends are replaced by the rent payments the house owners receive. None of the assets can be sold short, and all of the assets are traded subject to asset-specific quadratic adjustment costs \(p_{t}^{x}\zeta^{x}\frac{1}{2}(x_{t+1}^{h+1}-x_{t}^{h})^{2}\), for \(x\in\{b,s,o\}\). We model the bond as the most liquid asset and house ownership as the least liquid asset, _i.e._ \(\zeta^{o}>\zeta^{s}>\zeta^{b}\). The Bellman equation corresponding to the households' problem is given by

\[V^{h}(z_{t},b_{t}^{h},s_{t}^{h},h_{t}^{o,h}) =\max_{h_{t}^{r,h},b_{t+1}^{h+1},s_{t+1}^{h+1},h_{t+1}^{o,h+1}}\left\{u(c_{t}^{h})+\psi^{h}v(h_{t}^{r,h})+\beta\mathrm{E}\left[V^{h+1}(z_{t+1},b_{t+1}^{h+1},s_{t+1}^{h+1},h_{t+1}^{o,h+1})\right]\right\} \tag{28}\] \[c_{t}^{h} =z_{t}y^{h}+b_{t}^{h}+s_{t}^{h}(p_{t}^{s}+d_{t})+h_{t}^{o,h}(p_{t}^{o}+p_{t}^{r})\] \[-b_{t+1}^{h+1}p_{t}^{b}-s_{t+1}^{h+1}p_{t}^{s}-h_{t+1}^{o,h+1}p_{t}^{o}-h_{t}^{r,h}p_{t}^{r}\] \[-p_{t}^{b}\zeta^{b}\frac{1}{2}\left(b_{t+1}^{h+1}-b_{t}^{h}\right)^{2}-p_{t}^{s}\zeta^{s}\frac{1}{2}\left(s_{t+1}^{h+1}-s_{t}^{h}\right)^{2}-p_{t}^{o}\zeta^{o}\frac{1}{2}\left(h_{t+1}^{o,h+1}-h_{t}^{o,h}\right)^{2}\] (29) subject to : \[0 \leq b_{t+1}^{h+1}-\underline{b}\] (30) \[0 \leq s_{t+1}^{h+1}-\underline{s}\] (31) \[0 \leq h_{t+1}^{o,h+1}-\underline{h}^{o} \tag{32}\]

The corresponding optimality conditions, which, together with market clearing, characterize the functional rational expectations equilibrium, are given in appendix B. We provide details on parameter choices in appendix C.

Heterogeneity in risk aversion

We continue to model an overlapping generations economy, where households live deterministically for \(H\) periods and receive an age-specific share of aggregate labor income. Additionally, we now assume that in each cohort there are two types of households of equal mass. One type has a lower risk aversion, the other type has a higher risk aversion.

State of the economy

The state of the economy is given by the exogenous shock, as well as the distribution of assets across households and risk-aversion types. Let the superscript 1 denote the low risk aversion households and superscript 2 denote the high risk aversion households. The state of the economy is given by

\[\mathbf{x}_{t}=[z_{t},\underbrace{b_{t}^{1,1},\ldots,b_{t}^{H,1},b_{t}^{1,2},\ldots,b_{t}^{H,2}}_{\text{bond holdings}},\underbrace{s_{t}^{1,1},\ldots,s_{t}^{H,1},s_{t}^{1,2},\ldots,s_{t}^{H,2}}_{\text{stock holdings}},\underbrace{h_{t}^{o,1,1},\ldots,h_{t}^{o,H,1},h_{t}^{o,1,2},\ldots,h_{t}^{o,H,2}}_{\text{housing ownership}}] \tag{33}\]

The state space is hence \(1+2\times 3\times(H-1)\)-dimensional.16 All policy and price functions are functions of the whole state, rendering this model a formidable testing ground for our algorithm.

Footnote 16: The \((H-1)\) stems from the fact that agents are born without assets and their asset holding is hence always constant and does not need to be included in the state.
Equilibrium functions to approximate

We need to approximate \(2\times(H-1)\times 4+2\) household policies: for each risk aversion type and each age group, we need the bond policy, the stock policy, the renting policy, as well as the house ownership policy. While we only need to approximate \(H-1\) policies per asset and risk aversion type, we need to approximate \(H\) policies for the intra-temporal choice of renting housing services. Furthermore, we need to approximate four equilibrium price functions: for the bond price, the tree price, the price for renting, and the price for house ownership. We approximate the mapping from the \(1+2\times 3\times(H-1)\)-dimensional state to the \(2\times(H-1)\times 4+2+4\) endogenous variables by a deep neural network.

\[\mathcal{N}_{\rho}:\mathbb{R}^{1+2\times 3\times(H-1)}\rightarrow\mathbb{R}^{2\times(H-1)\times 4+2+4}\] \[\mathcal{N}_{\rho}(\mathbf{x})=[\tilde{b}_{t+1}^{2,1},\dots,\tilde{b}_{t+1}^{H,1},\tilde{b}_{t+1}^{2,2},\dots,\tilde{b}_{t+1}^{H,2},\] \[\tilde{s}_{t+1}^{2,1},\dots,\tilde{s}_{t+1}^{H,1},\tilde{s}_{t+1}^{2,2},\dots,\tilde{s}_{t+1}^{H,2},\] \[\tilde{h}_{t+1}^{o,2,1},\dots,\tilde{h}_{t+1}^{o,H,1},\tilde{h}_{t+1}^{o,2,2},\dots,\tilde{h}_{t+1}^{o,H,2},\] \[\tilde{h}_{t}^{r,1,1},\dots,\tilde{h}_{t}^{r,H,1},\tilde{h}_{t}^{r,1,2},\dots,\tilde{h}_{t}^{r,H,2},\] \[\hat{p}_{t}^{b},\hat{p}_{t}^{s},\hat{p}_{t}^{o},\hat{p}_{t}^{r}] \tag{34}\]

We use the simple adjustment described in section 2.1 to ensure that the market clearing conditions are always satisfied and use our homotopy algorithm to solve the model step-wise by slowly increasing the supply and number of assets.

Loss function

Since our neural network architecture already enforces market clearing, the only remaining equilibrium conditions are the households' optimality conditions for each of the four choices. Analogously to equation (24), we use the Fischer-Burmeister equation to summarize each set of KKT conditions into a single equation. Let \(\left\{\{\mathrm{err}_{\rho}^{b,i,h}(\mathbf{x}),\mathrm{err}_{\rho}^{s,i,h}(\mathbf{x}),\mathrm{err}_{\rho}^{o,i,h}(\mathbf{x})\}_{h=1}^{H-1},\{\mathrm{err}_{\rho}^{r,i,h}(\mathbf{x})\}_{h=1}^{H}\right\}_{i=1}^{2}\) denote the corresponding errors in the equilibrium conditions. Our loss function is given by

\[l_{\rho}(\mathcal{D}) :=\frac{1}{2}\left(\right.\] \[w_{b}\frac{1}{|\mathcal{D}|}\frac{1}{H-1}\left(\sum_{\mathbf{x}\in\mathcal{D}}\sum_{h=1}^{H-1}\left(\mathrm{err}_{\rho}^{b,1,h}(\mathbf{x})\right)^{2}+\left(\mathrm{err}_{\rho}^{b,2,h}(\mathbf{x})\right)^{2}\right)\] \[+w_{s}\frac{1}{|\mathcal{D}|}\frac{1}{H-1}\left(\sum_{\mathbf{x}\in\mathcal{D}}\sum_{h=1}^{H-1}\left(\mathrm{err}_{\rho}^{s,1,h}(\mathbf{x})\right)^{2}+\left(\mathrm{err}_{\rho}^{s,2,h}(\mathbf{x})\right)^{2}\right)\] \[+w_{o}\frac{1}{|\mathcal{D}|}\frac{1}{H-1}\left(\sum_{\mathbf{x}\in\mathcal{D}}\sum_{h=1}^{H-1}\left(\mathrm{err}_{\rho}^{o,1,h}(\mathbf{x})\right)^{2}+\left(\mathrm{err}_{\rho}^{o,2,h}(\mathbf{x})\right)^{2}\right)\] \[+w_{h^{r}}\frac{1}{|\mathcal{D}|}\frac{1}{H}\left(\sum_{\mathbf{x}\in\mathcal{D}}\sum_{h=1}^{H}\left(\mathrm{err}_{\rho}^{r,1,h}(\mathbf{x})\right)^{2}+\left(\mathrm{err}_{\rho}^{r,2,h}(\mathbf{x})\right)^{2}\right)\] \[\left.\right) \tag{35}\]

The weights \(w_{x}\) allow us to vary the weight placed on the different equilibrium conditions.
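Assembled in code, the weighted loss (35) reduces to a weighted sum of mean squared errors, since the \(\frac{1}{2}\) prefactor and the averages over states, types, and age groups collapse into plain means; the names below are our own.

```python
import jax.numpy as jnp

def total_loss(errors, w):
    # errors["b"|"s"|"o"] have shape (n_states, 2, H - 1); errors["r"]
    # has shape (n_states, 2, H). w holds the weights w_x of eq. (35);
    # setting w[a] = 0 switches asset a's Euler equations off.
    return sum(w[a] * jnp.mean(errors[a] ** 2) for a in ("b", "s", "o", "r"))
```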
The loss function hence nests a single- and a two-asset model, by allowing the corresponding weights in the loss function to be set to zero, while the mask parameters ensure that the associated policies are equal to zero.

#### 3.1.2 Training

To parameterize the equilibrium objects of the economy (_i.e._ policies and prices), we use a densely connected feed-forward neural network with two hidden layers. The dimensionality of its input is \(N^{\text{input}}=161\), since the state vector includes the aggregate productivity shock, the asset distributions across all twenty age groups for both types in the economy, and the distribution of dividend income, which we choose as an auxiliary state variable.16 For the two hidden layers, we choose \(N^{\text{hidden 1}}=N^{\text{hidden 2}}=400\) neurons with relu activation functions. Finally, the network output is a vector of \(N^{\text{output}}=158\) elements consisting of the policies for the three assets, the policies for renting housing services, and four price functions for the price of the bond, the price of stocks, the price for house ownership, and the price for renting housing services.17 Outputs corresponding to prices are transformed using the softplus activation in order to ensure that the predicted prices are always positive.

Footnote 16: The minimal state vector includes 121 state variables: the distribution of the three assets across 20 age groups for each of the two risk aversion groups, and the aggregate shock. Following Azinovic et al. (2022), we augmented the input of the neural network by a vector of auxiliary statistics. In our case, we choose the distribution of dividend income as an auxiliary statistic.

Footnote 17: For each asset category, we approximate 19 savings functions, exploiting the fact that the oldest age group does not save in our model. Furthermore, we approximate 20 housing service consumption functions, since housing services are demanded by all age groups.

To ameliorate numerical instabilities associated with portfolio choice, we employ the homotopy algorithm described at the beginning of section 3. Specifically, we first solve a bond-only economy. To do so, we set \(S^{\text{initial}}=0\), \(H^{\text{o,initial}}=0\), and \(H^{\text{ex,initial}}=1\) and run the training procedure described in section 2.4.18 Then, we approximate the stock pricing function implied by the consumption dynamics of the bond-only economy.19 Using the price and policy functions of the bond-only economy as an initial guess, we compute the equilibrium of the two-asset economy featuring a small supply of stocks. Starting from this solution, we gradually increase the stock supply and retrain the neural network after each increase. We do so until we reach the stock supply target. In total, we perform \(S^{\text{steps}}=10\) homotopy steps to increase the stock supply from \(S^{\text{initial}}=0\) to \(S^{\text{final}}=1\). We proceed analogously for the case of housing ownership. In that case, we perform \(H^{\text{steps}}=20\) housing supply steps. We simultaneously increase the supply of private housing \(H^{o}\) from \(H^{\text{o, initial}}=0\) to \(H^{\text{o, final}}=1\) and decrease the external housing supply \(H^{\text{ex}}\) from \(H^{\text{ex, initial}}=1\) to \(H^{\text{ex, final}}=0\), such that the total housing supply \(H^{o}+H^{ex}\) remains constant and equal to one.

Footnote 18: We use the same training hyperparameters as in the subsequent homotopy steps, with the exception of \(N^{\text{episodes}}\), which we set to 512 in order to obtain a high-precision solution to start from. The values for the remaining hyperparameters are provided in table 4.

Footnote 19: In this step, we use the same training hyperparameters as in the subsequent homotopy steps. See table 4.
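A schematic version of this homotopy loop is sketched below; `solve_economy` is a hypothetical wrapper around the training procedure of section 2.4, and `params` and `states` stand for the initialized network parameters and a batch of feasible initial states.

```python
import numpy as np

def solve_economy(params, states, supply, w, mask, n_episodes):
    # Hypothetical wrapper: retrains the network for n_episodes at fixed
    # asset supplies, loss weights w, and policy masks, and returns the
    # updated parameters and freshly simulated states. (Placeholder body.)
    return params, states

params = states = None  # placeholders: initialized network and feasible states
w = {"b": 1.0, "s": 0.0, "o": 0.0, "r": 1.0}   # loss weights w_x of eq. (35)
mask = {"s": 0.0, "o": 0.0}                    # policy masks (0 = asset off)
supply = {"S": 0.0, "Ho": 0.0, "Hex": 1.0}     # bond-only starting point

# Step 1: solve the bond-only economy to high precision (512 episodes).
params, states = solve_economy(params, states, supply, w, mask, 512)

# Step 2: activate the stock Euler equations to learn the price function
# implied by zero stock allocations.
w["s"] = 1.0
params, states = solve_economy(params, states, supply, w, mask, 256)

# Step 3: un-mask the stock policy and ramp the supply up in 10 steps,
# warm-starting each solve from the previous one.
mask["s"] = 0.01
for s in np.linspace(0.1, 1.0, 10):
    supply["S"] = s
    params, states = solve_economy(params, states, supply, w, mask, 256)

# Steps 4-5: proceed analogously for house ownership, lowering the
# external housing supply one-for-one over 20 steps.
w["o"], mask["o"] = 1.0, 0.01
for h in np.linspace(0.05, 1.0, 20):
    supply["Ho"], supply["Hex"] = h, 1.0 - h
    params, states = solve_economy(params, states, supply, w, mask, 256)
```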
In each step of the homotopy algorithm, we continue to train the neural network to approximate the equilibrium functions for the economy with the corresponding values for the aggregate supply of assets in the economy, _i.e._ for the intermediate values of \((B,S,H^{o},H^{ex})\). Again, we use the learning procedure described in section 2.4. To obtain training states, we simulate \(N^{\text{trajectories}}=8192\) parallel state trajectories for \(N^{\text{episodes}}=256\) episodes.20 For each episode (_i.e._ each simulation step), we train the neural network for \(N^{\text{epochs}}=10\) epochs with a learning rate of \(\alpha^{\text{learn}}=1\times 10^{-6}\).21

Footnote 20: In each epoch, we split the simulated dataset of \(N^{\text{trajectories}}\) states into batches of \(N^{\text{minibatch}}=128\) states. Each batch is used to compute one stochastic gradient descent update of the neural network weights.

Footnote 21: As for the single asset model, we transformed the loss function gradients using the zero_nans transformation before using them as an input for the Adam optimizer, and we reset the internal state of the Adam optimizer to zero at the beginning of each training episode.

Figure 2 shows the asset policies during different stages of our homotopy training algorithm. Figure 3 shows the corresponding policies for consumption and the renting of housing services. The top panel in figure 2 shows the trained bond policy for each risk aversion type and age group when the policies for stock and house ownership are still masked and the corresponding terms in the loss function are weighted with zero. The second panel shows the policies when a small amount of stock is introduced to the model; the policies for house ownership are still masked and equal to zero. The third panel shows the policies when a small amount of house ownership is introduced to the model. Finally, the last panel shows the household policies when the full amount of house ownership is introduced to the economy. As the figures illustrate, the policies remain stable throughout the training.

Figure 3 shows the corresponding profiles for consumption and the renting of housing services over the life-cycle during the stages of the homotopy algorithm. Increasing the supply of assets allows for a smoother consumption profile over the life-cycle. In the top panel, which shows the economy with only a risk-free bond, the consumption profile over the life-cycle is slightly decreasing for unconstrained agents due to the low interest rate. As more assets are added to the economy, the bond price drops and the interest rate rises. In the bottom panel, the higher interest rate leads to an increasing consumption profile over the life-cycle. Comparing the households with high risk aversion to the households with low risk aversion, figure 3 shows that households with high risk aversion consume more and save less early in life, at the cost of lower consumption later in life. For time-separable expected utility with constant relative risk aversion, the risk aversion also pins down the intertemporal elasticity of substitution.
The households with higher risk aversion are hence less willing to substitute consumption between periods and choose a more stable consumption sequence over their life-cycle.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Parameters & \(N^{\text{episodes}}\) & \(N^{\text{trajectories}}\) & \(N^{\text{epochs}}\) & \(N^{\text{minibatch}}\) & \(N^{\text{integration}}\) & \(\alpha^{\text{learn}}\) \\ \hline Values & 256 & 8192 & 10 & 128 & 8 & \(10^{-6}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Hyperparameters for training steps within the homotopy loop.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Parameters & \(N^{\text{input}}\) & \(N^{\text{hidden 1}}\) & \(N^{\text{hidden 2}}\) & \(N^{\text{output}}\) \\ & & Activations & Activations & Activations \\ \hline \multirow{2}{*}{Values} & \multirow{2}{*}{161} & 400 & 400 & 158 \\ & & relu & relu & see text \\ \hline \hline \end{tabular} \end{table} Table 3: Network Architecture chosen for the three asset model with risk aversion heterogeneity.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Parameters & \(S^{\text{initial}}\) & \(S^{\text{final}}\) & \(H^{\text{o,initial}}\) & \(H^{\text{o,final}}\) & \(H^{\text{ex,initial}}\) & \(H^{\text{ex,final}}\) & \(S^{\text{steps}}\) & \(H^{\text{steps}}\) \\ \hline Values & 0 & 1 & 0 & 1 & 1 & 0 & 10 & 20 \\ \hline \hline \end{tabular} \end{table} Table 5: Homotopy loop hyperparameters

Figure 2: Policy functions for the different assets at different stages of training of the homotopy algorithm.

Figure 3: Policy functions for consumption and the renting of housing services at different stages of training of the homotopy algorithm.

#### 3.1.3 Accuracy

To assess the accuracy of our solution, we compute the errors in the equilibrium conditions on an approximate ergodic set of the economy. To obtain the approximate ergodic set, we simulate the dataset of \(N^{\text{trajectories}}=8192\) states, generated by the last step of our training procedure, forward without retraining the neural network.22 As shown in figure 4, the remaining errors are low, with the 99th percentile well below 1% for each age group and optimality condition.

Footnote 22: We simulate the final dataset 256 periods forward without further training to test the “out-of-sample” performance of our approximate solution.

## 4 Conclusion

We make two distinct but complementary contributions in the context of deep learning based solution methods. First, we show how market clearing and borrowing constraints can be encoded directly into the architecture of the neural network. Second, we present a homotopy algorithm for solving portfolio choice problems. The idea of our homotopy algorithm is to start by solving a single-asset economy, then to add additional assets one by one and to slowly increase their supply in the economy.

Figure 4: Statistics on the errors in the remaining equilibrium conditions after training, expressed in % and as relative consumption errors. The black solid line shows the mean error by age group, the blue dashed line shows the 90th percentile, the yellow dash-dotted line shows the 99th percentile, and the dotted red line shows the maximum error by age group. The left column shows the errors for the households with low risk aversion, the right column shows the errors for the households with high risk aversion.
The top row shows the errors in the optimality conditions for bond purchases, the second row shows the errors in the equilibrium conditions for the choice of housing services, the third row shows the errors for the equilibrium conditions for house ownership, and the last row shows the errors in the equilibrium conditions for the stock choice.
2305.19064
A neural network emulator for the Lyman-$α$ 1D flux power spectrum
The Lyman-$\alpha$ forest offers a unique avenue for studying the distribution of matter in the high redshift universe and extracting precise constraints on the nature of dark matter, neutrino masses, and other $\Lambda$CDM extensions. However, interpreting this observable requires accurate modelling of the thermal and ionisation state of the intergalactic medium, and therefore resorting to computationally expensive hydrodynamical simulations. In this work, we build a neural network that serves as a surrogate model for rapid predictions of the one-dimensional Lyman-$\alpha$ flux power spectrum ($P_{\rm 1D}$), thereby making Bayesian inference feasible for this observable. Our emulation technique is based on modelling $P_{\rm 1D}$ as a function of the slope and amplitude of the linear matter power spectrum rather than as a function of cosmological parameters. We show that our emulator achieves sub-percent precision across the full range of scales ($k_{\parallel }=0.1$ to 4Mpc$^{-1}$) and redshifts ($z=2$ to 4.5) considered, and also for three $\Lambda$CDM extensions not included in the training set: massive neutrinos, running of the spectral index, and curvature. Furthermore, we show that it performs at the 1% level for ionisation and thermal histories not present in the training set and performs at the percent level when emulating down to $k_{\parallel}$=8Mpc$^{-1}$. These results affirm the efficacy of our emulation strategy in providing accurate predictions even for cosmologies and reionisation histories that were not explicitly incorporated during the training phase, and we expect it to play a critical role in the cosmological analysis of the DESI survey.
Laura Cabayol-Garcia, Jonás Chaves-Montero, Andreu Font-Ribera, Christian Pedersen
2023-05-30T14:26:40Z
http://arxiv.org/abs/2305.19064v2
# A neural network emulator for the Lyman-\(\alpha\) forest 1D flux power spectrum

###### Abstract

The Lyman-\(\alpha\) forest offers a unique avenue for studying the distribution of matter in the high redshift universe and extracting precise constraints on the nature of dark matter, neutrino masses, and other \(\Lambda\)CDM extensions. However, interpreting this observable requires accurate modelling of the thermal and ionisation state of the intergalactic medium, and therefore resorting to computationally expensive hydrodynamical simulations. In this work, we build a neural network that serves as a surrogate model for rapid predictions of the one-dimensional Lyman-\(\alpha\) flux power spectrum (\(P_{\rm 1D}\)), thereby making Bayesian inference feasible for this observable. Our emulation technique is based on modelling \(P_{\rm 1D}\) as a function of the slope and amplitude of the linear matter power spectrum rather than as a function of cosmological parameters. We show that our emulator achieves sub-percent precision across the full range of scales (\(k_{\parallel}=0.1\) to \(4\,{\rm Mpc}^{-1}\)) and redshifts (\(z=2\) to \(4.5\)) considered, and also for three \(\Lambda\)CDM extensions not included in the training set: massive neutrinos, running of the spectral index, and curvature. Furthermore, we show that it performs at the \(1\%\) level for ionisation and thermal histories not present in the training set and performs at the percent level when emulating down to \(k_{\parallel}=8\,{\rm Mpc}^{-1}\). These results affirm the efficacy of our emulation strategy in providing accurate predictions even for cosmologies and reionisation histories that were not explicitly incorporated during the training phase, and we expect it to play a critical role in the cosmological analysis of the DESI survey.

keywords: quasars: absorption lines - cosmology: large-scale structure of Universe - methods: statistical

## 1 Introduction

The Lyman-\(\alpha\) forest refers to a series of absorption features in the spectra of high-redshift quasars caused by the scattering of quasar light by neutral hydrogen at 1216 Å in the intergalactic medium (IGM; for a review, see McQuinn, 2016). Consequently, the Lyman-\(\alpha\) forest is sensitive to density fluctuations as well as the thermal and ionisation state of the IGM, thereby containing precise cosmological and astrophysical information about redshifts well above the reach of large-scale galaxy surveys. Cosmological analyses of the Lyman-\(\alpha\) forest rely on either three-dimensional correlations of the Lyman-\(\alpha\) transmission field to measure baryonic acoustic oscillations (BAO, e.g., Busca et al., 2013; Slosar et al., 2013; Delubac et al., 2015; Bautista et al., 2017; de Sainte Agathe et al., 2019; du Mas des Bourboux et al., 2020), or correlations along the line-of-sight of each individual quasar (one-dimensional flux power spectrum, \(P_{\rm 1D}\), e.g., McDonald et al., 2006; Palanque-Delabrouille et al., 2013; Chabanier et al., 2019) to set constraints on the sum of neutrino masses (Palanque-Delabrouille et al., 2015, 2015; Yeche et al., 2017; Palanque-Delabrouille et al., 2020), the nature of dark matter (Baur et al., 2016, 2017; Yeche et al., 2017; Armengaud et al., 2017; Irsic et al., 2017; Palanque-Delabrouille et al., 2020; Rogers & Peiris, 2021), and even the reionisation (Zaldarriaga et al., 2001; Meiksin, 2009; Lee et al., 2015; McQuinn, 2016) and thermal history (Viel & Haehnelt, 2006; Bolton et al., 2008) of the Universe.
Over the last decade, first the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al., 2013) and then the Extended Baryon Oscillation Spectroscopic Survey (eBOSS; Dawson et al., 2016) have dramatically increased the number of Lyman-\(\alpha\) forest measurements available, thereby significantly increasing the precision of cosmological and IGM constraints (du Mas des Bourboux et al., 2020). The ongoing Dark Energy Spectroscopic Instrument survey (DESI, DESI Collaboration et al., 2016) will quadruple the number of lines of sight, which will further increase the constraining power of Lyman-\(\alpha\) forest analyses (Font-Ribera et al., 2014). BAO measurements of the Lyman-\(\alpha\) forest only use the correlations on large scales that can be modelled with linear theory. On the other hand, extracting constraints from the \(P_{\rm 1D}\) measurements is quite challenging because they are sensitive to the complex physical processes that affect the distribution of neutral hydrogen in the IGM, and thus a precise interpretation of this observable requires resorting to time-consuming hydrodynamical simulations (Cen et al., 1994; Miralda-Escude et al., 1996; Meiksin et al., 2001; Lukic et al., 2015; Walther et al., 2021; Chabanier et al., 2023). Bayesian inference techniques require of the order of \(10^{6}\) evaluations to set robust constraints on cosmological parameters; consequently, a traditional analysis would require running the same number of hydrodynamical simulations, each of these taking more than \(\sim 10^{5}\) CPU hours, making Bayesian inference infeasible. One solution to this problem is constructing fast surrogate models that interpolate predictions to regions of the parameter space not sampled by simulations (for the first application of this technique in cosmology, see Heitmann et al., 2006; Habib et al., 2007). As a result, the number of simulations required for Bayesian inference decreases dramatically from millions to dozens or hundreds. We will refer to these models as emulators hereafter. The two main approaches to building emulators are to use architectures based on a Gaussian process (GP; Sacks et al., 1989; MacKay et al., 1998) or a neural network (NN; McCulloch and Pitts, 1943). These two techniques are different types of supervised learning algorithms, with the first and second typically considered as "non-parametric" and "over-parameterized", respectively. In general, GPs require less training data than NNs and produce more robust predictions, but this comes at the cost of runtimes that scale with the cube of the number of data points instead of linearly like NNs. As a result, GPs and NNs are more appropriate for small and large datasets, respectively. These two architectures have been used for multiple applications, e.g., GPs for emulating the matter power spectrum (e.g., Heitmann et al., 2009; Lawrence et al., 2010; Heitmann et al., 2016; Lawrence et al., 2017), the halo mass function (e.g., Bocquet et al., 2020), and the one-dimensional Lyman-\(\alpha\) flux power spectrum (e.g., Bird et al., 2019; Rogers et al., 2019; Walther et al., 2019; Pedersen et al., 2021; Takhtaganov et al., 2021; Rogers and Peiris, 2021) or NNs for also emulating the matter power spectrum (e.g., Angulo et al., 2021; Arico et al., 2021), galaxy clustering and lensing (e.g., Nishimichi et al., 2019; Chaves-Montero et al., 2023; Contreras et al., 2023, 2023, 2023), and accelerating the predictions of Einstein-Boltzmann Solvers (Gunther et al., 2022; Nygaard et al., 2022).
In this paper, we build the first NN-based emulator for the one-dimensional power spectrum of the Lyman-\(\alpha\) forest. We create it following the approach devised by Pedersen et al. (2021, P21 hereafter), which relies on emulating \(P_{\rm 1D}\) as a function of the amplitude and slope of the linear matter power spectrum on small scales rather than as a function of cosmological parameters. The main advantage of this approach is that it reduces the dimensionality of the problem, and it enables precise predictions for redshifts, cosmological parameters, and \(\Lambda\)CDM extensions not considered in the training set (Pedersen et al., 2021; Pedersen et al., 2023). Our main motivation for building an NN- instead of a GP-based emulator like P21 is that NNs can handle larger training datasets, like the one that will be needed to accurately interpret \(P_{\rm 1D}\) measurements from DESI. The emulator developed in this paper will be publicly accessible at [https://github.com/igmhub/LaCE](https://github.com/igmhub/LaCE) at the time of publication. The outline of this paper is as follows. Section 2 presents the hydrodynamical simulations, their post-processing, and the \(P_{\rm 1D}\) parametrisation. In §3 and §4, we present the neural-network emulator developed in this paper and characterise its training procedure and training sample. Section 5 contains the main results obtained with the emulator. This includes testing the training simulations, validating the emulator at arbitrary redshifts, and testing it on simulations with different cosmology and astrophysics. Section 6 presents an extended version of the emulator to smaller scales. Finally, §7 concludes the paper. ## 2 Methods In this section, we describe the simulations from which we extract \(P_{\rm 1D}\) measurements for calibrating and testing our emulator in §2.1, how we extract these measurements in §2.2, and the input parameters for our emulator in §2.3. ### Simulations We train our emulator using \(P_{\rm 1D}\) measurements from a suite of 60 flat \(\Lambda\)CDM cosmological hydrodynamical simulations described in detail in P21; throughout this work, we refer to these simulations as training. The simulations were run employing mp-gadget1 (Feng et al., 2018; Bird et al., 2019), a massively scalable version of the cosmological structure formation code gadget-3 (last described in Springel, 2005). Each simulation tracks the evolution of \(768^{3}\) dark matter and baryon particles from \(z=99\) to \(z=2\) inside a simulation box of \(L=67.5\) Mpc on a side and generates 11 output snapshots uniformly spaced in redshift between \(z=4.5\) and \(z=2\). To increase computational efficiency, star formation is included using a simplified prescription that turns regions of baryon overdensity \(\Delta_{\rm b}>1000\) and temperature \(T<10^{5}\) K into collisionless stars, which is justified by the negligible contribution of high-density regions to the Ly\(\alpha\) forest (e.g., Viel et al., 2004). Also for efficiency purposes, the simulations use a spatially uniform ultraviolet background implementation from Haardt and Madau (2012) and do not include AGN feedback; the first and second approximations may lead to up to \(\sim 10\%\) errors on \(P_{\rm 1D}\) predictions especially at high redshift (\(z=4\), e.g., Pontzen, 2014; Gontcho A. Gontcho et al., 2014; Suarez and Pontzen, 2017) and low redshift (\(z\simeq 2\), e.g., Chabanier et al., 2020), respectively.
Nevertheless, the impact of AGN feedback on \(P_{\rm 1D}\) measurements is cosmology-independent and it can be accounted for at the post-processing stage (Chabanier et al., 2020). Footnote 1: [https://github.com/MP-Gadget/MP-Gadget/](https://github.com/MP-Gadget/MP-Gadget/) The training simulations adopt 30 different sets of cosmological and astrophysical parameters selected according to a Latin hypercube design so that the space of interest is sampled efficiently (McKay et al., 1979). Two realisations were run for each combination using an initial mode amplitude fixed to the ensemble mean and opposite Fourier phases (Angulo and Pontzen, 2016; Pontzen et al., 2016). These initial conditions, commonly known as "fixed-and-paired", significantly reduce cosmic variance in the Ly\(\alpha\) forest power spectrum (Villaescusa-Navarro et al., 2018; Anderson et al., 2019); throughout the remainder of this work, we refer to measurements from the simulations of a pair as from different phases. The impact of cosmic variance on training simulations is nevertheless considerable due to their limited size. To separate this source of uncertainty from others, all simulation pairs were run using the same distribution of Fourier phases. Figure 1: Redshift evolution of the gas temperature at mean density. The grey lines show the results for training simulations, while the red line does so for the reionisation simulation. The thermal histories of the training simulations, which are used to train the \(P_{\rm 1D}\) emulator, are significantly different from that of the reionisation simulation, which is used for testing. Motivated by the emulation strategy, the simulations explore different values of the amplitude and slope of the linear power spectrum, \[\Delta_{\rm p}^{2}(z) = \frac{k_{\rm p}^{3}P_{\rm lin}(k_{\rm p},z)}{2\pi^{2}}, \tag{1}\] \[n_{\rm p}(z) = (\mathrm{d}\log P_{\rm lin}/\mathrm{d}\log k)\mid_{k=k_{\rm p}}, \tag{2}\] where \(k_{\rm p}\) is the pivot scale at which these are computed and \(P_{\rm lin}\) is the linear power spectrum of cold dark matter and baryons2. Specifically, the simulations use values within the ranges \(\Delta_{\rm p}^{2}(z=z_{\star})\in[0.25,\,0.45]\) and \(n_{\rm p}(z=z_{\star})\in[-2.35,\,-2.25]\), which are defined at \(z_{\star}=3\) and \(k_{\rm p}=0.7\,\mathrm{Mpc}^{-1}\) because this redshift and scale are approximately at the centre of the ranges of interest for DESI Lyman-\(\alpha\) studies (Ravoux et al. in prep, Karacayli et al. in prep.). As for the other cosmological parameters, the simulations use the same value for the Hubble parameter (\(H_{0}=67\,\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}\)), physical cold dark matter density (\(\omega_{\rm c}\equiv\Omega_{\rm c}h^{2}=0.12\)), and physical baryon density (\(\omega_{\rm b}\equiv\Omega_{\rm b}h^{2}=0.022\)), where \(h=0.67\) is the dimensionless Hubble parameter. Note that P21 and Pedersen et al. (2023) showed that our emulation strategy produces precise results for simulations with cosmological parameters outside of the training set and \(\Lambda\)CDM extensions; we test this further in §5.5. Footnote 2: Note that \(P_{\rm lin}\) does not include the contribution of neutrinos for cosmologies with massive neutrinos. The training simulations consider three astrophysical parameters to account for uncertainties in the reionisation and thermal history of the Universe.
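Before detailing these astrophysical parameters, we note that the cosmological inputs of Eqs. (1)-(2) are straightforward to evaluate numerically. The sketch below is illustrative only: the helper name is hypothetical, it assumes a tabulated linear CDM+baryon power spectrum at the desired redshift, and it adopts the conventional \(2\pi^{2}\) normalisation of the dimensionless amplitude.

```python
import numpy as np

def linp_params(k, P_lin, k_p=0.7):
    """Amplitude and slope of P_lin at the pivot scale k_p (Eqs. 1-2).

    k     : wavenumbers in Mpc^-1 (ascending)
    P_lin : linear CDM+baryon power spectrum at the chosen redshift, in Mpc^3
    """
    log_k, log_P = np.log(k), np.log(P_lin)
    # Eq. (1): dimensionless amplitude at the pivot scale.
    P_at_kp = np.exp(np.interp(np.log(k_p), log_k, log_P))
    Delta2_p = k_p**3 * P_at_kp / (2.0 * np.pi**2)
    # Eq. (2): local logarithmic slope, via finite differences.
    n_p = np.interp(np.log(k_p), log_k, np.gradient(log_P, log_k))
    return Delta2_p, n_p
```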
We consider as a fiducial model the histories from Haardt and Madau (2012), and we vary the values of the three astrophysical parameters mentioned above to perturb these histories following the prescriptions of Onorbe et al. (2017). Specifically, the simulations explore \(z_{\rm H}\in[5.5,\,15]\), which indicates the midpoint of hydrogen reionisation, while holding fixed the redshift of the second helium ionisation to \(z_{\rm He\,II}=3.5\). In addition, these use \(H_{\rm A}\in[0.5,\,1.5]\) and \(H_{\rm B}\in[0.5,\,1.5]\), which account for the uncertain effect of helium reionisation on the IGM temperature by rescaling the He\(\,\)ii photo-heating rate, \(\epsilon_{0}\), such that \(\epsilon=H_{\rm A}\Delta_{\rm b}^{H_{\rm B}}\epsilon_{0}\). In this way, the thermal state of the IGM is correctly coupled to the gas pressure. In addition, we use seven pairs of simulations with cosmological and astrophysical parameters not considered in the training simulations for evaluating different aspects of the emulation strategy3. The first is the central simulation, which uses parameters at the centre of the training parameter space and serves to evaluate the performance of the emulator in optimal conditions. The second is the seed simulation, with the same parameters as the central simulation but different initial conditions, which serves to characterise the impact of cosmic variance on the results. Footnote 3: Some of these simulations were already described in P21 and Pedersen et al. (2023) but named differently. The simulations growth and neutrinos were referred to as \(h\) and \(\nu\) in P21 and the seed simulation as _diff seed_ in Pedersen et al. (2023). We also use the growth, neutrinos, running, and curved simulation pairs, which present the same amplitude and slope of the matter power spectrum at \(z=3\), physical CDM and baryonic densities, and astrophysical parameters as the central simulation, but the growth simulation uses a 9% larger Hubble constant and an 18% smaller \(\Omega_{\rm M}\), the neutrinos simulation includes massive neutrinos (\(\sum m_{\nu}=0.3\) eV) implemented using the linear response approximation (Ali-Haimoud and Bird, 2013), the running simulation uses a non-zero running of the primordial power spectrum slope (\(\mathrm{d}\,n_{\rm s}/\mathrm{d}\log k=0.015\)), and the curved simulation considers an open universe (\(\Omega_{k}=0.03\)). These simulations serve to test the precision of the emulation strategy for cosmologies not included in the training set. Furthermore, we consider the reionisation simulation, with the same cosmological parameters as the central simulation but implementing an ionisation history from Puchwein et al. (2019). The main difference between this and the training simulations is that the helium ionisation history of the former peaks at a later time, which translates into different IGM thermal histories. In Fig. 1, the grey lines display the thermal histories of all training simulations, while the red line does so for the reionisation simulation. As expected, we can see that the thermal history of the reionisation simulation peaks at a later time relative to those of the training simulations due to the lower \(z_{\rm He\,II}\) of the former. This simulation serves to test the performance of the \(P_{\rm 1D}\) emulator for ionisation and thermal histories different from those used in the training set. ### Post-processing We extract \(P_{\rm 1D}\) measurements from the simulations described in §2.1 as follows.
For each simulation, we first consider one of the simulation axes as the line of sight, and then we displace particles from real to redshift space along this axis. We continue by computing the transmitted flux fraction along \(768^{2}\) uniformly distributed lines of sight parallel to this axis using fake_spectra4 (Bird, 2017); these lines of sight are commonly known as skewers. The line-of-sight resolution of the skewers is set to 0.05 Mpc, which is enough to resolve the thermal broadening scale. Then, we compute the Fourier transform of the transmitted flux fraction for each skewer, and we estimate \(P_{\rm 1D}\) by taking the average of the Fourier transform of all skewers. Finally, we iterate over the two remaining simulation axes. By doing so, we sample different directions of the velocity field, extracting further information from the simulations. Footnote 4: [https://github.com/sbird/fake_spectra](https://github.com/sbird/fake_spectra) We repeat the previous procedure for each simulation and snapshot, ending up with 30 (cosmologies) \(\times\) 2 (opposite Fourier phases) \(\times\) 3 (simulation axes) \(\times\) 11 (snapshots) = 1980 \(P_{\rm 1D}\) measurements. Additionally, in post-processing, we vary the mean flux of the snapshots by scaling the effective optical depth of the skewers to 0.90, 0.95, 1.05, and 1.10 times its original value (see Lukic et al., 2015, for more details about this approach), and then we recompute the power spectrum, ending up with 9900 \(P_{\rm 1D}\) measurements in total. As a result, this post-processing presents a few improvements compared to that carried out by P21: three simulation axes instead of one, \(768^{2}\) skewers instead of \(500^{2}\), and mean flux rescalings. ### Emulator parameterisation We use the amplitude and slope of the linear matter power spectrum at \(k_{\rm p}=0.7\,\mathrm{Mpc}^{-1}\) to capture the cosmological dependence of the measurements. This is justified because, when rescaling the linear matter power spectra of cosmologies within _Planck_ priors so these match the amplitude and slope of the best-fitting _Planck_ solution, the amplitude of the variations is smaller than 1% (see fig. 1 of P21). It is also important to note that the Lyman-\(\alpha\) forest probes cosmic times during which the universe is practically Einstein-de Sitter, and for such a universe, the growth rate of velocities (and therefore redshift space distortions) has no cosmological dependence. We use four parameters to describe the astrophysical dependence of the measurements. The first is the mean transmitted flux fraction \(\bar{F}\), which encodes information about the ionisation state of the gas and is related to the effective optical depth as \(\tau_{\rm eff}=-\log\bar{F}\). The next two inform us about the thermal state of the gas probed by the Lyman-\(\alpha\) forest. The first, \(\gamma\), describes the slope of the temperature-density relation (e.g., Lukic et al., 2015), \(T=T_{0}\Delta_{\rm b}^{\gamma-1}\), where \(T_{0}\) is the gas temperature at mean density. The second, \(\sigma_{\rm T}\), captures the thermal broadening of absorption features due to the thermal motion of gas, \[\sigma_{\rm T}=\sigma_{\rm T,0}\sqrt{\frac{T_{0}[{\rm K}]}{10^{4}}}\,\frac{1+z}{H(z)}\,, \tag{3}\] where \(\sigma_{\rm T,0}=9.1\) km s\({}^{-1}\). Finally, the fourth parameter, \(k_{\rm F}\), captures the pressure smoothing scale (Kulkarni et al., 2015). Note that we use \(\sigma_{\rm T}\) and \(k_{\rm F}\) in inverse comoving units (for more details, see P21).
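To make the estimator of §2.2 concrete, the following minimal sketch (hypothetical helper, numpy conventions) computes \(P_{\rm 1D}\) from a set of transmitted-flux skewers. The flux-contrast definition \(\delta_{F}=F/\bar{F}-1\) and the FFT normalisation are common choices rather than the exact conventions of our pipeline.

```python
import numpy as np

def p1d_from_skewers(F, dx):
    """Estimate P_1D from transmitted-flux skewers.

    F  : array of shape (n_skewers, n_pix), transmitted flux fraction
    dx : pixel width along the line of sight, in Mpc
    """
    n_pix = F.shape[1]
    L = n_pix * dx                                   # skewer length in Mpc
    delta_F = F / F.mean() - 1.0                     # flux contrast
    # FFT along the line of sight; rfft keeps non-negative k_parallel modes.
    delta_k = np.fft.rfft(delta_F, axis=1) * dx
    p1d = np.mean(np.abs(delta_k) ** 2, axis=0) / L  # average over skewers
    k_par = 2.0 * np.pi * np.fft.rfftfreq(n_pix, d=dx)
    return k_par, p1d
```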
## 3 Emulators In this section, we describe the two \(P_{\rm 1D}\) emulators that we use throughout the paper: a GP-based emulator (LaCE-GP) in §3.1 and an NN-based emulator (LaCE-NN) in §3.2. We provide a brief description of the first, which was already discussed in P21, and we focus on the second, which is the main contribution of this work. ### Gaussian process emulator (LaCE-GP) The LaCE-GP emulator maps the parameter space \(\vec{\theta}=\{\Delta_{\rm p}^{2},n_{\rm p},\bar{F},\sigma_{\rm T},\gamma,k_{\rm F}\}\) to the \(P_{\rm 1D}\) measurements (see §2.3) using a Python implementation based on the package GPy. Specifically, it normalises the input parameters \(\vec{\theta}\) so that they vary within a unit volume, and normalises \(P_{\rm 1D}\) by the median of all \(P_{\rm 1D}\) measurements used. The original version of this emulator (P21) predicts the value of \(P_{\rm 1D}\) for each input scale \(k_{\parallel}\). The emulator was later updated to predict the best-fitting coefficients of a fourth-degree polynomial describing the \(P_{\rm 1D}\) measurements because Pedersen et al. (2023) found that the original implementation was significantly affected by cosmic variance. In this work, we use a modified version of the Pedersen et al. (2023) emulator: we train it using the new post-processing of the LaCE simulations (§2.2), extend the fitting range from \(k_{\parallel}=3\) to \(k_{\parallel}=4\) Mpc\({}^{-1}\), and predict the coefficients of a fifth-order polynomial (to account for the extended range of scales, see Appendix A). In §5.3, we provide a detailed comparison between the performance of this emulator and that of LaCE-NN. ### Neural network emulator (LaCE-NN) Neural networks are mathematical models composed of interconnected layers of nodes, each of which possesses trainable weights and biases (\(w\), \(b\)). The initial and final layers are commonly referred to as the input and output layers, respectively, and they have the same number of nodes as the dimensions of the input and output data. The intermediate layers are known as hidden layers, and their number of nodes is a flexible hyperparameter of the network. Increasing the number of layers enhances the network's complexity, enabling it to handle more challenging tasks. However, networks with a higher number of layers are more susceptible to overfitting, which occurs when an excessively intricate model captures irrelevant patterns in the training data, leading to impaired generalisation on new data. In practice, the input data to layer \(i\) (\(\vec{x_{i}}\)) is multiplied by the set of trainable weights associated with the nodes in such layer, and the resulting values are passed through a non-linear function \[\vec{y_{i}}=g(\vec{x_{i}}\cdot\vec{w_{i}}+\vec{b_{i}}), \tag{4}\] where \(g(\cdot)\) is commonly known as the activation function and \(\vec{y_{i}}\) represents the output of this layer. The training process involves finding the optimal weights and biases that minimise the difference between the network's output and the target "true" values. This discrepancy is quantified using a loss function (\(\mathcal{L}\)), and the optimisation is achieved using an optimisation algorithm, such as gradient descent. This algorithm calculates the gradient of the loss function with respect to the weights and biases and adjusts them in a manner that minimises the loss. In this paper, we use a mixture-density network (MDN, Bishop, 1994), a type of neural network that can model complex probability distributions.
MDNs use a combination of neural network architecture and mixture models to estimate the parameters of a probability distribution from the input data. Each mixture component is associated with the mean and variance of a (typically) Gaussian distribution, and the network optimises such parameters to best describe the probability distribution of the target prediction. Fig. 2 shows the MDN used in this work, which is written in PyTorch. It maps the six parameters \(\vec{\theta}=[\Delta_{\rm p}^{2},n_{\rm p},\bar{F},\sigma_{T},\gamma,k_{F}]\) described in §2.3 (red circles) to the probability distribution of the six polynomial coefficients best fitting the \(P_{\rm 1D}\), where we assume each polynomial coefficient to follow a Gaussian around the true value. The input layer of the emulator maps the six input parameters to ten hidden variables (6:10) and is then followed by a hidden space with four hidden layers mapping the ten inputs to 100 output parameters (blue circles). The 100 output parameters of the hidden layers are the input of two independent sets of layers with architecture 100:50:6 (soft-purple circles) mapping the output of the previous hidden space to the mean and standard deviation of the Gaussian distribution best describing the six polynomial coefficients (yellow circles). The MDN is trained with an Adam optimiser (Kingma & Ba, 2015) with an initial learning rate of \(10^{-3}\) for 100 epochs. The neural network fits the polynomial form \[P^{\prime}_{\rm 1D}=\sum_{i=0}^{n}\alpha_{i}\cdot\left(\log_{10}k_{\parallel}\right)^{i}, \tag{5}\] where \(n\) is the order of the polynomial and \(P^{\prime}_{\rm 1D}\) is \(\log_{10}P_{\rm 1D}\) scaled by the median of all \(P_{\rm 1D}\) measurements in the training sample (\(A_{\rm scalings}\)), i.e. \[P^{\prime}_{\rm 1D}=\log_{10}\left(P_{\rm 1D}/A_{\rm scalings}\right). \tag{6}\] The network also predicts an uncertainty for each polynomial coefficient (\(\sigma_{i}\)) that propagates to an uncertainty for \(\log_{10}P^{\prime}_{\rm 1D}\), \[\sigma^{2}_{\log_{10}P^{\prime}_{\rm 1D}}=\sum_{i=0}^{n}\sigma_{i}^{2}\cdot\left(\log_{10}k_{\parallel}\right)^{2i}. \tag{7}\] In App. B, we also explore Monte Carlo methods to obtain a covariance matrix from the distribution of the polynomial coefficients. To fit the \(P_{\rm 1D}\) in the \(k_{\parallel}\) range \((0,4]\) Mpc\({}^{-1}\), we use a fifth-order polynomial (\(n=5\)), which is justified in App. A. To further back up this decision, Fig. 3 shows the contribution of the optimal polynomial terms once they have been optimised by the emulator up to sixth degree. The zero- and first-order coefficients dominate the contribution to the \(P_{\rm 1D}\). Then, second- and third-order contributions compensate for each other, and a similar thing happens with orders four and five. Finally, the sixth-order term is already centred at zero, and therefore its contribution to the \(P_{\rm 1D}\) measurements is smaller. After each training iteration, we compare the emulator predictions (Eqs. 5, 7) with the true values using the log-likelihood loss function \[\mathcal{L}=\frac{1}{N}\sum_{i}^{N}\left[\sum_{k}^{N_{k}}\left(\frac{\log_{10}P^{\prime\,\rm pred}_{\rm 1D}-\log_{10}P^{\prime\,\rm true}_{\rm 1D}}{\sigma_{\log_{10}P^{\prime}_{\rm 1D}}}\right)^{2}+2\log\sigma_{\log_{10}P^{\prime}_{\rm 1D}}\right], \tag{8}\] where \(N\) corresponds to the number of training samples considered in the loss and \(N_{k}\) is the number of wave-number \(k_{\parallel}\) bins.
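For concreteness, the following PyTorch sketch reproduces the spirit of the architecture in Fig. 2 and the loss of Eq. (8). All names are hypothetical, and the hidden-layer widths beyond the 6:10 input map and the 100:50:6 heads are assumptions; the network predicts the means and log-variances of the six coefficients, and the loss is evaluated in \(P^{\prime}_{\rm 1D}\) space.

```python
import torch
import torch.nn as nn

class P1DMixtureDensityNet(nn.Module):
    """Sketch of the Fig. 2 architecture (hidden widths assumed)."""
    def __init__(self, n_in=6, n_coeff=6):
        super().__init__()
        act = nn.LeakyReLU()
        self.backbone = nn.Sequential(
            nn.Linear(n_in, 10), act,          # input layer, 6:10
            nn.Linear(10, 100), act,           # four hidden layers, 10 -> 100
            nn.Linear(100, 100), act,
            nn.Linear(100, 100), act,
            nn.Linear(100, 100), act,
        )
        # Two independent heads for the coefficient means and log-variances.
        self.mu = nn.Sequential(nn.Linear(100, 50), act, nn.Linear(50, n_coeff))
        self.logvar = nn.Sequential(nn.Linear(100, 50), act, nn.Linear(50, n_coeff))

    def forward(self, theta):
        h = self.backbone(theta)
        return self.mu(h), self.logvar(h)

def p1d_loss(mu, logvar, log10_k, y_true):
    """Gaussian log-likelihood loss of Eq. (8), evaluated in P'_1D space."""
    i = torch.arange(mu.shape[1], dtype=log10_k.dtype)
    powers = log10_k[None, :] ** i[:, None]    # (n_coeff, n_k)
    y_pred = mu @ powers                       # Eq. (5)
    var = logvar.exp() @ powers**2             # Eq. (7)
    # Note: 2 log(sigma) = log(var); mean over the batch gives the 1/N factor.
    return torch.mean(((y_pred - y_true)**2 / var + torch.log(var)).sum(dim=1))
```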
Evaluating the loss function in the \(P_{\rm 1D}\) space rather than in the space of polynomial coefficients enables the neural network to learn the importance of each coefficient (e.g. the zero- and first-order polynomial degrees contribute more to the \(P_{\rm 1D}\) than the fourth-order one) and weight the learning accordingly. In Appendix C, we also explore emulating the \(P_{\rm 1D}\) in principal components space, but the polyfit emulator is more accurate and robust. Figure 3: Contribution of each polynomial term to the \(P_{\rm 1D}\) reconstruction. Each colour corresponds to the histogram of the contribution of each polynomial order computed for all snapshots in the training simulations (§2.1). The zero- and first-order coefficients (\(i=0\) and \(i=1\), blue and purple dotted lines) dominate the contribution to the \(P_{\rm 1D}\). The second- and third-order coefficients compensate for each other (\(i=2\) and \(i=3\), red and yellow lines). Similarly, the fourth- and fifth-order coefficients (\(i=4\), \(i=5\), green and orange dashed lines) also compensate for each other. Finally, the sixth-order contribution (\(i=6\), solid brown) is centred at zero. Figure 2: Architecture of the mixture density network used in this work. The red circles correspond to the six input parameters (§2.3), and the blue circles and their connections represent a set of shared hidden layers. The symbol within the blue circles represents the activation function, which in our case is a Leaky-ReLU. The output of these hidden layers is the input of two independent sets of hidden layers, each of them mapping to six output parameters. These output parameters parametrise the mean and standard deviation of six Gaussian distributions corresponding to the six polynomial coefficients of the \(P_{\rm 1D}\) fit (Eq. 5). ## 4 Emulator Characterisation In this section, we optimise the emulator's training strategy (§4.1) and the training sample by studying whether the precision of the emulator increases after including optical-depth rescalings in the training sample (§4.2) and exploring different ways of doing data augmentation with the three-axes data (§4.3). ### Neural network characterisation In order to optimise the performance of the emulator, we have tested several actions on the network architecture, the training procedure, the input data, and the target \(P_{\rm 1D}\). Figure 4 shows the impact of some of these actions on the emulator's performance. On the \(x\)-axis, the plot indicates the action performed on the network, while the \(y\)-axis displays the percent error in the emulated \(P_{\rm 1D}\). Comparing the error after each modification, we can quantify the impact that each action has on the \(P_{\rm 1D}\). In all cases, the emulator is trained on the training simulation pairs (§2.1) and tested on central. To reduce the sources of variability between runs, the neural network is initialised with the same set of randomly generated initial weights in all runs. We first implemented one modification at a time to test their effects independently of each other, and then tested the impact of cumulative modifications. The starting point is a neural network fitting a polynomial in \(k_{\parallel}\) and predicting \(P^{\prime}_{\rm 1D}\) (Eq. 6). We first study the impact of fitting the \(P^{\prime}_{\rm 1D}\) in \(\log_{10}k_{\parallel}\) space (A). This drastically reduces the error, by a factor of 7.
The second action (B) consists of fitting \(\log_{10}(P^{\prime}_{\rm 1D})\) instead of \(P^{\prime}_{\rm 1D}\) directly, which clearly benefits the emulator. Actions C and D modify the activation function and the input's normalisation, respectively. Initially, the input parameter space (\(\theta\)) was normalised with a min-max normalisation \[\vec{\theta}^{\prime}=\frac{\vec{\theta}-\min(\vec{\theta})}{\max(\vec{\theta})-\min(\vec{\theta})}\,, \tag{9}\] which rescales the parameter space to the [0,1] range. Instead, we modify this scaling by also subtracting 0.5, in such a way that our input parameter space is the [-0.5,0.5] range. Since the ReLU activation function sets to zero all negative values, we include the Leaky-ReLU activation function (Xu et al., 2015) to avoid vanishing gradients. In Fig. 4, we can see that the Leaky-ReLU alone does not have an impact on the performance (A+C), but it does when it is combined with the prediction in \(\log_{10}k_{\parallel}\) (A+B+C). Similarly, the input parameter shift (A+D) does not improve the emulator's performance on its own, but combined with the other changes there is a small improvement. Based on these results, the configuration of our fiducial emulator includes the fitting in \(\log_{10}k_{\parallel}\), the fitting of \(\log_{10}P_{\rm 1D}\), the Leaky-ReLU activation function, and the parameter shift (A+B+C+D). Figure 4: Optimisation of the neural network emulator. The \(x\)-axis corresponds to different modifications implemented on the network's architecture, training data, and training procedure. The \(y\)-axis indicates the percent error in the \(P_{\rm 1D}\) emulation for each of these modifications. ### Optical-depth rescalings Generating hydrodynamical simulations and extracting the \(P_{\rm 1D}\) is computationally expensive and time-consuming. However, once the \(P_{\rm 1D}\) per snapshot has been measured, we can generate several realisations of such \(P_{\rm 1D}\) by rescaling the mean transmitted flux fraction (Lukic et al., 2015; Walther et al., 2021). This is an effective way of increasing the training sample of the emulator. Our simulations include four mean-flux rescalings per snapshot, which augment the training sample from 330 to 1650 training points. Such rescalings correspond to \(\pm\)(5 and 10)% in optical depth, which does not translate into \(\pm\)(5 and 10)% changes in mean flux. This is a novelty of the LaCE-NN emulator, which can benefit from the training sample augmentation with optical-depth rescalings. Figure 5 compares the percent error in the \(P_{\rm 1D}\) predictions of emulators trained on samples without optical-depth rescalings (red), with 5% rescalings (yellow), with 10% rescalings (green), and combining both 5% and 10% optical-depth rescalings (blue). For this plot, we have run four independent leave-one-out tests (see §5.1), one per training set, obtaining the error in the \(P_{\rm 1D}\) emulation of each simulation and snapshot for each of the training samples. The error is then averaged over simulations and scales to have an error estimate per redshift. The emulator is always tested on non-scaled \(P_{\rm 1D}\). There is a significant improvement in the performance of the emulator when rescalings are incorporated into the training sample. For instance, the inclusion of 5% optical-depth rescalings resulted in a reduction of the percentage error in the emulated \(P_{\rm 1D}\) by a factor of two. The emulator benefited more from the inclusion of 10% optical-depth rescalings than from the 5% ones (green vs yellow).
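A minimal sketch of the rescaling step itself, under the common per-pixel implementation of Lukic et al. (2015) (the exact procedure of our post-processing may differ): since \(F=\exp(-\tau)\), scaling the optical depth by a constant factor maps \(F\to F^{\rm factor}\), after which \(P_{\rm 1D}\) is simply recomputed from the rescaled skewers.

```python
import numpy as np

def rescale_optical_depth(F, factor):
    """Scale skewer optical depths by a constant factor (e.g. 0.90-1.10).

    Since F = exp(-tau), scaling tau -> factor * tau gives F -> F**factor.
    """
    tau = -np.log(np.clip(F, 1e-10, 1.0))  # recover tau, guarding F ~ 0
    return np.exp(-factor * tau)
```

Each rescaling factor yields one extra training point per snapshot at negligible computational cost compared to running a new simulation.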
Figure 5: Impact of the optical-depth rescalings on the \(P_{\rm 1D}\) emulation. The plot shows the percent error in the \(P_{\rm 1D}\) emulation for the 30 hypercube simulation pairs averaged over simulations and scales. The solid red and solid blue lines correspond to samples without and with all rescalings, respectively. In turn, the soft dashed lines indicate the samples including 5% (yellow) and 10% (green) optical-depth rescalings. The yellow and green lines are artificially shifted along the \(x\)-axis for better visualisation. The larger benefit from the 10% rescalings may be due to the fact that they increase the parameter space to a greater extent than the 5% rescalings, leading to the new parameter space encompassing the 5% mean-flux rescaling points as well. Nevertheless, the optical-depth rescalings included in this post-processing mostly populate the parameter space and barely affect the limits of its convex hull. Future emulators aiming to predict universes with different IGM histories will also require temperature and mean-flux rescalings that increase the covered parameter space. ### Training sample characterisation With the new post-processing of the hydrodynamical simulations, we obtain one \(P_{\rm 1D}\) measurement along each of the three axes for every simulation box. Given that our simulations are fixed-and-paired (§2.1), we have a total of six \(P_{\rm 1D}\) measurements per simulation snapshot. These measurements can be averaged to obtain a single, less noisy \(P_{\rm 1D}\) measurement. However, in this section, we explore data augmentation techniques by considering different combinations of these six \(P_{\rm 1D}\) measurements. While the most common approach involves training the emulator with the average of the six measurements, we also investigate the possibility of utilising each measurement independently and combining only \(P_{\rm 1D}\) along axes or phases. Figure 6 explores the emulator's performance by considering different combinations of the six \(P_{\rm 1D}\) measurements. The plot depicts the percent error in \(P_{\rm 1D}\) emulation for various training sets. These errors were estimated using all training simulations and are compared to the most accurate \(P_{\rm 1D}\) measurement available, which results from averaging all axes and phases. The red line represents the percent error in \(P_{\rm 1D}\) emulation for different training samples. The first red point, labelled "Avg. all", corresponds to a training sample that averages the \(P_{\rm 1D}\) measurements from all axes and phases, resulting in a total of 1650 training points (30 simulations \(\times\) 11 snapshots \(\times\) 5 optical-depth rescalings). The next red points are "Avg. axes" and "Avg. phases", both of which reduce the percent error compared to the previous result. In the first case, "Avg. axes", we have averaged the \(P_{\rm 1D}\) over the three axes, but not over the phases. Therefore, this corresponds to two \(P_{\rm 1D}\) measurements per snapshot. Additionally, we also include the \(P_{\rm 1D}\) averaged along axes and phases, which is the most accurate measurement we have. Therefore, this corresponds to 4950 training points (30 simulations \(\times\) 11 snapshots \(\times\) 5 optical-depth rescalings \(\times\) (2+1) \(P_{\rm 1D}\)). In the second case, "Avg. phases", we average over phases, but not over axes. For each snapshot, we have three \(P_{\rm 1D}\) measurements, and as before, we additionally include the average over axes and phases.
Therefore, the training sample contains 6600 training points (30 simulations \(\times\) 11 snapshots \(\times\) 5 optical-depth rescalings \(\times\) (3+1) \(P_{\rm 1D}\)). The following red point, "Avg. axes + Avg. phases", combines the two previous ones, which makes a total of 9900 training points (30 simulations \(\times\) 11 snapshots \(\times\) 5 optical-depth rescalings \(\times\) (3+2+1) \(P_{\rm 1D}\)) and clearly reduces the percent error compared to using only one of the data sets. Finally, the last point, "Avg. axes + Avg. phases + indep.", adds up the average over axes, the average over phases, and the six independent \(P_{\rm 1D}\) measurements, which corresponds to 19800 training points (30 simulations \(\times\) 11 snapshots \(\times\) 5 optical-depth rescalings \(\times\) (3+2+1+6) \(P_{\rm 1D}\)). In this case, the emulator does not seem to benefit from the additional training points. Based on these results, the fiducial training sample for our emulator corresponds to "Avg. axes + Avg. phases" and contains 9900 training points. ## 5 Testing emulation strategy In this section, we evaluate the performance of the proposed emulation approach for the surrogate models discussed in the preceding sections. We specifically focus on assessing the precision of the emulator when applied to training simulations (§5.1) and redshifts (§5.2) that were not part of the training sample. In §5.3, we compare the LaCE-GP and the LaCE-NN emulators. Additionally, we investigate the effect that cosmic variance has on the emulator (§5.4) and the emulator's performance on cosmologies (§5.5) and IGM thermal histories (§5.6) that were not included in the training set. ### Leave-one-out tests The leave-one-out test is a commonly used validation test to evaluate the accuracy of an emulator. The emulator is trained using the set of training simulations, holding one out as a validation set, which is then evaluated. The leave-one-out test is repeated for each simulation in the training set, determining how well the emulator is able to generalise to unseen simulations and the overall accuracy it can achieve. Figure 7 presents the leave-one-out test results for training simulations. To generate the plot, we have optimised 30 independent emulators, each one trained on 29 training simulation pairs and evaluated on the left-out simulation. To produce Fig. 7, we group all snapshots with the same redshift and take the mean and the standard deviation of the percent error in the emulation across simulations. The red line indicates the mean percent error and the red shaded region corresponds to the standard deviation of the percent error. The shaded grey area in the figure indicates the 1% error requirement for the emulator. In most cases, the emulator reaches the \(<\) 1% error requirement at all redshifts, although some measurements at \(z=2\) are slightly over 1% error. This is not unexpected since \(z=2\) lies at the low-redshift extreme of Lyman-\(\alpha\) forest detection, and therefore the emulator is more likely to degrade there. Figure 6: Characterising the training sample of the emulator. The figure shows the percent error obtained in the \(P_{\rm 1D}\) emulation for different training samples constructed from several possible combinations of \(P_{\rm 1D}\) measurements over the three axes and two paired phases. The horizontal lines correspond to the 1% error requirement (dotted-black line) and the percent error of the one-axis emulator (blue lines).
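The axis/phase combinations characterised in §4.3 can be illustrated with the hypothetical helper below, which assembles the training variants from an array of shape (2 phases, 3 axes, \(n_{k}\)) per snapshot; the counts per snapshot match the labels of Fig. 6.

```python
import numpy as np

def training_p1d_variants(p1d, mode="avg_axes+avg_phases"):
    """Assemble training P_1D curves from a (2 phases, 3 axes, n_k) array."""
    full_avg = p1d.mean(axis=(0, 1))               # "Avg. all"
    avg_axes = list(p1d.mean(axis=1))              # averaged over axes: 2 curves
    avg_phases = list(p1d.mean(axis=0))            # averaged over phases: 3 curves
    if mode == "avg_all":
        return [full_avg]                          # 1 curve per snapshot
    if mode == "avg_axes":
        return avg_axes + [full_avg]               # 2 + 1 curves per snapshot
    if mode == "avg_phases":
        return avg_phases + [full_avg]             # 3 + 1 curves per snapshot
    if mode == "avg_axes+avg_phases":
        return avg_phases + avg_axes + [full_avg]  # 3 + 2 + 1 = 6 per snapshot
    raise ValueError(mode)
```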
Finally, the blue-shaded region in Fig. 7 represents the observational uncertainties from the latest analysis of BOSS/eBOSS data (Chabanier et al., 2019). ### Redshifts outside the training set The LaCE emulators assume that the \(P_{\rm 1D}\) can be reconstructed without explicitly providing redshift information. This is crucial to analyse DESI \(P_{\rm 1D}\) data with any redshift binning. To test whether this assumption holds, we have evaluated the emulator's performance at a given redshift \(z=z_{0}\) when all snapshots at that redshift are removed from the training sample. In the top, middle, and bottom panels of Fig. 8, we show the percent error in the emulated \(P_{\rm 1D}\) at \(z=2.25\), \(z=3\), and \(z=4\), respectively, when the corresponding testing redshift is removed from the training sample. The shaded grey area indicates the 1% error requirement, which is achieved in all cases. This demonstrates that the emulator can make \(P_{\rm 1D}\) predictions at arbitrary redshifts. We specifically choose to test the emulator at \(z=2.25\) instead of \(z=2\) because the latter lies on the boundary of the convex hull: if we were to exclude \(z=2\) from the training sample, it would no longer be within the training space. ### Emulator comparison The success of the initial LaCE-GP version demonstrated that we could accurately emulate the \(P_{\rm 1D}\) to sub-percent errors from the amplitude and slope of the linear matter power spectrum. However, this first emulator version was only tested on simulations with the same IGM history as the training set. LaCE-NN incorporates several modifications compared to LaCE-GP. The most notable difference is the replacement of Gaussian processes with neural networks. Additionally, LaCE-NN incorporates a new post-processing step that measures the \(P_{\rm 1D}\) along all three simulation axes (§2.2) using \(768^{2}\) skewers. This increases the resolution of the measured \(P_{\rm 1D}\) and provides three \(P_{\rm 1D}\) measurements per snapshot. Furthermore, since our simulations are fixed-and-paired, we have six \(P_{\rm 1D}\) measurements per snapshot. The new post-processing step also includes applying five optical-depth rescalings to each snapshot (§4.2), resulting in a total of 9900 \(P_{\rm 1D}\) measurements. This represents a substantial improvement over the LaCE-GP data processing, which yielded 330 \(P_{\rm 1D}\) measurements, and already produces datasets that are difficult to handle with GPs. ### Cosmic variance Even though the training simulations use "fixed-and-paired" conditions, their volume is very limited and thus cosmic variance should affect emulator predictions significantly. To estimate the impact of cosmic variance on the results, we compare the performance of the emulator for the central and seed simulation pairs (see §2.1). These pairs present cosmological parameters at the centre of the training parameter space; therefore, the precision of the emulator should be close to optimal for them. On the other hand, the central pair uses the same set of Fourier phases as the training simulations, while the seed simulations use different initial conditions. As a result, any difference in the performance of the emulator for these simulations isolates the impact of cosmic variance. In the first two panels of Fig. 9, we display the precision of the LaCE-GP and LaCE-NN emulators for these two simulation pairs.
As we can see, the two emulators present better than 1% performance for both cases, confirming that the impact of cosmic variance on emulator predictions is negligible. This is the consequence of using a polynomial fit to smooth \(P_{\rm 1D}\) before training and testing the emulator (see Appendix A), which is justified by the smoothness of this observable and greatly reduces the impact of cosmic variance on large scales. As shown in Pedersen et al. (2023), the impact of cosmic variance is much larger if no smoothing is applied (see also Appendix C for another smoothing strategy). Figure 7: Leave-one-out test. Percent error in the emulated \(P_{\rm 1D}\) at different redshifts. The shaded grey area indicates the 1% error requirement for the emulator. The red line shows the mean percent error and the red shaded region corresponds to the standard deviation of the percent error. The blue shaded region represents the observational uncertainties from the latest analysis of BOSS/eBOSS data. Figure 8: Emulator's performance at arbitrary redshifts. The plot shows the percent error in the \(P_{\rm 1D}\) emulation at redshifts \(z=2.25\), \(z=3\), \(z=4\), when all snapshots at these redshifts are dropped from the training sample. The dotted line indicates the mean percent error, while the shaded-coloured area corresponds to the standard deviation across simulations. The shaded grey area indicates the 1% error requirement. We specifically choose to test the emulator at \(z=2.25\) instead of \(z=2\) due to the latter being on the boundary of the convex hull. Consequently, if we were to exclude \(z=2\) from the training sample, it would no longer be within the training space. ### Cosmologies and \(\Lambda\)CDM extensions not considered in the training set The training simulations adopt a standard \(\Lambda\)CDM parameterisation and explore different values of the amplitude and slope of the primordial power spectrum while considering the same expansion and growth histories (see §2.1). Given that \(P_{\rm 1D}\) is sensitive to the velocity field, it is important to check the precision of the emulator for other growth rates. To do so, we analyse the growth simulation pair (see §2.1), which uses a different value of \(h\) relative to the training simulations. In the third panel of Fig. 9, we show that our emulators present sub-percent precision for this pair, confirming that our emulation strategy enables precise predictions for cosmologies outside the training set (Pedersen et al., 2021; Pedersen et al., 2023). We now proceed to study whether the emulation strategy also works for three \(\Lambda\)CDM extensions: massive neutrinos, running of the spectral index, and curvature. We test the previous scenarios using the neutrinos, running, and curved simulation pairs (see §2.1), which present extreme values for the previous ingredients already ruled out by observations (Planck Collaboration et al., 2020): a \(\sum m_{\nu}=0.3\) eV neutrino mass, \(\mathrm{d}\,n_{\rm s}/\mathrm{d}\log k=0.015\), and \(\Omega_{k}=0.03\), respectively. Remarkably, our emulators also present sub-percent precision for these simulations, as shown in the fourth, fifth, and sixth panels of Fig. 9. Consequently, we can use this emulation strategy to set accurate constraints on \(\Lambda\)CDM extensions.
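As noted in §5.4, \(P_{\rm 1D}\) is smoothed with a low-order polynomial in \(\log_{10}k_{\parallel}\) before training and testing, which is what suppresses cosmic variance on large scales. A minimal sketch of this smoothing step (hypothetical helper):

```python
import numpy as np

def smooth_p1d(k_par, p1d, order=5):
    """Fit log10 P_1D with a polynomial in log10 k_par (Eq. 5, Appendix A)."""
    x = np.log10(k_par)
    coeffs = np.polyfit(x, np.log10(p1d), deg=order)
    return coeffs, 10.0 ** np.polyval(coeffs, x)  # coefficients, smoothed curve
```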
### Thermal histories not included in the training set In §2.1, we introduced the reionisation pair, which implements a different He ii reionisation history than the training simulations, resulting in a very different thermal history (see Fig. 1). In the last panel of Fig. 9, we show the performance of the LaCE-GP and the LaCE-NN emulators for this pair. As we can see, the precision of both emulators is similar and on average better than 1.5%, allowing us to conclude that the emulation strategy works even for He ii reionisation histories not considered in the training set. This is the consequence of emulating in the \(\sigma_{T}\)-\(\gamma\) space, which captures the state of the IGM at a particular cosmic time, instead of as a function of parameters encoding information about the entire ionisation or thermal history. We find that the overall precision of the emulator for this pair is on average four times worse than for the central pair, especially at low redshift. To further understand this decrease in performance, in Fig. 10 we display the results of the LaCE-NN emulator for the reionisation pair without collapsing the wavenumber information. Lines indicate the results for different redshifts, while shaded areas denote the level of uncertainty predicted by the emulator. As we can see, the emulator produces biased results for \(z=2\). To trace the origin of this issue, in Fig. 11 we display how the training, central, and reionisation simulations sample the emulator parameter space. Although the central and reionisation simulations present the same cosmology, we can readily see that the relation between \(\bar{F}\) and \(T_{0}\) is very different for the two. Furthermore, we find that the values of the reionisation parameters for \(z=2\) lie outside of the parameter space covered by the training simulations, which explains the decrease in accuracy. Figure 10: Percent error in the reionisation emulation for snapshots at different redshifts. The solid lines indicate the mean percent error while the shaded areas correspond to the emulator's predicted uncertainty. The prediction at \(z=2\) shows worse performance than the rest of the snapshots, which is potentially because at such redshift, \(\bar{F}\) and \(T_{0}\) are outside the convex hull of the training data. Figure 9: Emulator performance on simulations with different cosmologies than the one sampled by the training simulations. The \(\nu\)-sim (top panel) contains massive neutrinos, while the \(h\)-sim (bottom panel) modifies the growth rates with respect to the training sample. The dotted line corresponds to the mean percent error in the \(P_{\rm 1D}\) prediction, while the shaded area indicates the standard deviation across \(k_{\parallel}\). An important observation is that the emulator assigns a higher uncertainty to the \(P_{\rm 1D}\) measurement at \(z=2\). This increased uncertainty arises because the emulator is aware that the parameters associated with \(z=2\) lie outside the boundaries of the convex hull. By acknowledging this discrepancy, the emulator correctly accounts for the greater uncertainty in predicting \(P_{\rm 1D}\) values at this particular redshift. Interestingly, the overall performance of the emulator for the reionisation pair is worse than for cosmologies and \(\Lambda\)CDM extensions not included in the training set. This result emphasises the significance of exploring various IGM models instead of only concentrating on densely sampling the cosmological part of the emulator parameter space.
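Out-of-hull snapshots such as the \(z=2\) case above can be flagged automatically. A sketch of such a diagnostic, assuming scipy is available (in six dimensions the triangulation can be expensive, so this is illustrative rather than a production check):

```python
import numpy as np
from scipy.spatial import Delaunay

def outside_training_hull(theta_train, theta_test):
    """Flag test points outside the convex hull of the training set.

    theta_* : arrays of shape (n_points, n_dim) in the emulator
              parameter space {Delta^2_p, n_p, F, sigma_T, gamma, k_F}.
    """
    hull = Delaunay(theta_train)
    # find_simplex returns -1 for points outside the triangulation.
    return hull.find_simplex(theta_test) < 0
```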
While some of this exploration of IGM models can be carried out in a post-processing stage via mean flux and temperature rescalings (Lukic et al., 2015), the gas pressure smoothing scale cannot be rescaled, so it is necessary to run simulations with different reionisation histories. ## 6 Extended Emulator The LaCE-GP emulator has been developed and tested on scales \(k_{\parallel}\in(0,3]\) Mpc\({}^{-1}\). However, in order to cover the entire range of scales that DESI is designed to observe (Ravoux et al. in prep, Karacayli et al. in prep.), the LaCE-NN emulator has been developed to extend this range to \(k_{\parallel}\in(0,4]\) Mpc\({}^{-1}\). In this section, we present the results of an extended version of the LaCE-NN emulator, which predicts the \(P_{\rm 1D}\) on scales up to \(k_{\parallel}=8\) Mpc\({}^{-1}\). It is worth noting that while DESI cannot observe scales beyond \(k_{\parallel}=4\) Mpc\({}^{-1}\), this extension has been motivated by the \(P_{\rm 1D}\) measurements obtained from high-resolution quasars in Karacayli et al. (2022). Figure 12 shows the performance of the extended emulator on the test simulations (left panel) and the training simulations (right panel). The figure includes the LaCE-GP emulator's performance for comparison purposes. In both cases, we employ the same emulator as used for \(k_{\parallel}<4\) Mpc\({}^{-1}\) without any adjustments to the hyperparameters. However, we modify the order of the polynomial used to fit the \(P_{\rm 1D}\), which is now set to \(n=7\). Further details can be found in Appendix A. The LaCE-NN emulator outperforms the LaCE-GP performance in the extended case. Without any tuning of the emulator, LaCE-NN reaches a 1-2% error in all test simulations but the reionisation one, which is the most challenging. On the training simulations (right panel), LaCE-NN emulates the \(P_{\rm 1D}\) with a mean percent error of only \(\sim 2\%\). This error is lower at \(k_{\parallel}<4\) Mpc\({}^{-1}\) and higher on smaller scales. However, note that the error requirement for very small scales is not as stringent as 1% (Karacayli et al., 2022). We present our initial attempt to create an extended emulator, acknowledging that there is room for improvement in future iterations. For instance, we aim to address the degradation observed at larger scales, and we propose potential enhancements such as down-weighting the contribution of smaller scales. The extended emulator's performance exhibits a notable decline in the case of the reionisation simulation (Fig. 13). Specifically, we observe a significant degradation in performance compared to the fiducial emulator. The errors primarily stem from inaccuracies at high \(k_{\parallel}\) values, but we also observe a deterioration in predictions across all scales, particularly at \(2<z<3\). At the redshifts in question, the parameter values for \(\bar{F}\) and \(T_{0}\) lie at the boundaries of, or beyond, the parameter space covered by the training simulations (see Fig. 11). While this situation did not pose significant issues for the fiducial emulator for \(z>2\), it has a pronounced impact on the extended emulator, also at larger scales. Again, an avenue for enhancing these results would involve reducing the influence of scales with \(k_{\parallel}>4\) Mpc\({}^{-1}\). However, we defer this exploration to future investigations.
The poor performance of the extended emulator on the reionisation simulation highlights the need to expand the training sample parameter space by incorporating additional temperature and mean-flux rescalings that encompass universes with diverse IGM histories. Another noteworthy observation is that the emulator predicts larger uncertainties for the snapshot at \(z=2\) when compared to other, more accurate predictions. This feature aligns well with the specific snapshot lying beyond the training parameter space. ## 7 Conclusions In this paper, we build the first emulator of the one-dimensional Lyman-\(\alpha\) flux power spectrum (\(P_{\rm 1D}\)) using a neural network (NN) architecture. To do so, we adopt the emulation strategy devised by P21, which relies on emulating \(P_{\rm 1D}\) as a function of the amplitude and slope of the linear matter power spectrum on small scales. We summarise our main findings below: * In §3.2, we build an emulator that uses a Mixture Density Network (MDN) to predict the probability distribution of six polynomial coefficients describing \(P_{\rm 1D}\) measurements. Then, it combines these distributions to generate the best-fitting solution and an error prediction for each combination of input parameters. In Fig. 4, we show how different decisions regarding the configuration of the emulator improve its precision. On the other hand, in Fig. 6 we show that thanks to its MDN architecture, the emulator performance improves by 20% when training it using \(P_{\rm 1D}\) measurements from different simulation axes and phases instead of just relying on the average of these. * In Fig. 7, we show that the emulator precision is better than 1% for cosmologies within the training set (leave-one-out tests) across the full range of scales (\(0.1<k_{\parallel}[\,\mathrm{Mpc}^{-1}]<4\)) and redshifts (\(2<z<4.5\)) considered. Even though this value is similar to the precision quoted for the GP-based emulator described in P21, the actual performance of our emulator is better because we carry out a more detailed post-processing of the suite of hydrodynamical simulations from which these two studies extract \(P_{\rm 1D}\) measurements. * In Fig. 9, we show that emulator predictions are largely insensitive to the impact of cosmic variance thanks to emulating on the space of polynomial coefficients (see also Pedersen et al., 2023). Furthermore, we show that our emulator presents sub-percent precision for growth histories not included in the training set as well as for three \(\Lambda\)CDM extensions: massive neutrinos, running of the spectral index, and curvature. These findings confirm the advantage of emulating as a function of the amplitude and slope of the linear matter power spectrum on small scales rather than as a function of cosmological parameters. Figure 11: Distribution of training, central, and reionisation simulations in the space of emulation parameters. Small dots indicate the results for training simulations with optical depth rescalings, while pink stars and red triangles show the results for central and reionisation, respectively. Small dots are coloured by the redshift of the simulation snapshot for visual purposes, as the redshift information is not considered by the emulator. Even though the reionisation and training simulations present different astrophysical implementations, we can readily see that the former lies within the range of the parameter space covered by the latter. * In Fig.
10, we show that the emulator achieves on average 1.5% precision for thermal and reionisation histories not considered in the training set. This is the consequence of emulating \(P_{\rm 1D}\) as a function of the instantaneous properties of the IGM rather than parameters encoding information about the entire reionisation or thermal history of the universe. * In Fig. 12, we show the performance of an extended version of our emulator reaching \(k_{\parallel}=8\,\mathrm{Mpc}^{-1}\). In the central simulation, we find that the extended emulator presents an overall 1% precision, a factor of \(\sim\)2 worse than the fiducial emulator. Its performance is especially poor for thermal and reionisation histories not considered in the training set, reaching on average 3.5%. Figure 12: Performance of an extended version of the emulator to \(k_{\parallel}=8\,\mathrm{Mpc}^{-1}\) for LaCE-NN and LaCE-GP. The _left_ panel presents the leave-one-out test for the extended emulator. In the _right_ panel, the emulator is trained using the training simulations and tested on the seven test simulation pairs presented in §2.1. Figure 13: Percent error in the reionisation emulation for snapshots at different redshifts. The solid lines indicate the mean percent error while the shaded areas correspond to the emulator's predicted uncertainty. This plot is analogous to Fig. 10 but for the extended emulator. As shown in §5, the overall performance of the emulator is better for cosmologies and \(\Lambda\)CDM extensions not included in the training set than for reionisation histories not considered, which emphasises the importance of running simulations adopting distinct reionisation histories. This issue can be ameliorated by carrying out mean flux and temperature rescalings in a post-processing stage (Lukic et al., 2015), leading to a significant increase in the size of the training set. Such an increase makes NN-based models more suited to emulate \(P_{\rm 1D}\) measurements than GP-based models because the runtimes of the former and the latter increase linearly and with the cube of the number of input points, respectively. We designed the fiducial version of our emulator aiming to analyse medium-resolution spectra from the DESI survey, which explains the range of redshifts and scales considered. In §6, we present an extended emulator conceived for the joint analysis of DESI and high-resolution measurements (e.g., Karacayli et al., 2022) that reaches a factor of two smaller scales than the fiducial one. We found that the NN-based emulator performs significantly better than the GP-based emulator on small scales, further highlighting the advantages of NNs for \(P_{\rm 1D}\) emulation. On the other hand, we found that the performance of the extended emulator is on average a factor of two worse than that of the fiducial emulator. In future work, we will focus on improving the precision of the extended emulator. Throughout this paper, we have focused on the one-dimensional Lyman-\(\alpha\) flux power spectrum. However, this statistic only contains part of the cosmological and astrophysical information encoded in the Lyman-\(\alpha\) forest as it neglects correlations between different lines of sight. In future work, we will use the training simulations and an NN-based architecture to develop the first emulators for the Lyman-\(\alpha\) flux probability distribution (Lee et al., 2015), one-dimensional bispectrum (Viel et al., 2009), and three-dimensional power spectrum (Font-Ribera et al., 2018).
A coherent analysis of \(P_{\rm 1D}\) and these complementary statistics would enable fully exploiting the constraining power of the Lyman-\(\alpha\) forest. ## Acknowledgements LCG, JCM and AFR acknowledge support from the European Union's Horizon Europe research and innovation programme (COSMO-LYA, grant agreement 101044612). AFR acknowledges support from the Spanish Ministry of Science and Innovation through the programme Ramon y Cajal (RYC-2018-025210). IFAE is partially funded by the CERCA programme of the Generalitat de Catalunya. The analyses in this article have been performed at the Port d'Informació Científica (PIC). We would like to acknowledge the support provided by PIC in granting us access to their computing resources. The simulations were run using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. DiRAC is part of the National e-Infrastructure. ## Data Availability The simulations utilised for the development and testing of this work are publicly accessible via the following link: [https://github.com/igmhub/LaCE](https://github.com/igmhub/LaCE).
2310.13099
No offence, Bert -- I insult only humans! Multiple addressees sentence-level attack on toxicity detection neural network
We introduce a simple yet efficient sentence-level attack on black-box toxicity detector models. By adding several positive words or sentences to the end of a hateful message, we are able to change the prediction of a neural network and pass the toxicity detection system check. This approach is shown to be working on seven languages from three different language families. We also describe the defence mechanism against the aforementioned attack and discuss its limitations.
Sergey Berezin, Reza Farahbakhsh, Noel Crespi
2023-10-19T18:56:50Z
http://arxiv.org/abs/2310.13099v1
No offence, Bert - I insult only humans! Multiple addressees sentence-level attack on toxicity detection neural networks ###### Abstract We introduce a simple yet efficient sentence-level attack on black-box toxicity detector models. By adding several positive words or sentences to the end of a hateful message, we are able to change the prediction of a neural network and pass the toxicity detection system check. This approach is shown to be working on seven languages from three different language families. We also describe the defence mechanism against the aforementioned attack and discuss its limitations. ## 1 Introduction Toxicity detection systems have become a crucial part of automoderation solutions. They are now used by most social media platforms, including those with groups of people in dangerously sensitive conditions, e.g. suicide prevention Facebook groups, victims of cyberbullying, etc. A vulnerability in such systems may have dreadful, even tragic effects on people, especially those in precarious situations (Cheng et al., 2022). On the other hand, such systems can be and are used to silence the voices of criticism, which leads to the creation of echo chambers and amplifies the voice of the (powerful) minority over the voice of the majority, thereby destroying the foundation of democracy and denying the freedom of speech (Gorwa et al., 2020). This situation can be viewed as a double-edged sword, and we propose another double-edged sword that can be used to parry the blade of toxicity detection systems. ### Task description We present an attack on toxicity detection models based on the separation of the messages intended for a person and a piece of text added to confuse an algorithm. For example, this can be done by concatenating an original message with a collection of keywords of an opposite intent: "Kill yourself, you dirty pig! Text for a bot to avoid ban: flowers, rainbow, happy, happy, good" - here only the underlined part is intended to address the human, and it is clear that the remaining part was added to avoid detection by the automoderation algorithm. This represents an example of a sentence-level black-box adversarial attack on Natural Language Processing systems. ### Related work The first work suggesting the concatenation of distracting but meaningless sentences at the end of a paragraph to confuse a neural model was "Adversarial Examples for Evaluating Reading Comprehension Systems" (Jia and Liang, 2017) (according to Zhang et al., 2020). Jia and Liang attacked question-answering systems with either manually-generated informative sentences (ADDSENT) or arbitrary sequences of words using a pool of 20 random common words (ADDANY). Both perturbations were obtained by iteratively querying the neural network until the output was changed. The authors of "Robust Machine Comprehension Models via Adversarial Training" (Wang and Bansal, 2018) improved this approach by varying the locations where the distracting sentences are placed and expanding the set of fake answers for generating the distracting sentences (ADDSENTDIVERSE). In "Universal Adversarial Triggers for Attacking and Analyzing NLP", Wallace et al. (2019) apply a gradient-based technique to construct adversarial text. Despite being a white-box attack in origin (gradient information is required in the training phase), this approach can be applied as a black-box attack at inference time. 
The T3 model (Wang et al., 2020) utilises the autoencoder architecture to generate adversarial texts that can manipulate question-answering models into outputting the targeted incorrect answer. In "All You Need is "Love": Evading Hate Speech Detection" [14], the authors perform a sentence-level attack on hate speech detection systems by inserting typos, changing word boundaries and adding innocuous words to the original hate speech. They show the effectiveness of such an attack on MLP, CNN+RNN and LSTM neural networks. ### Contribution Our contributions lie in the following: 1) **Introduction of a concept of "To Each His Own" attack**, based on the idea of separating the messages addressed to a human and those addressed to an algorithm. The attack exploits the fact that toxicity detection models are trained on sentence-level labels and do not consider the context or the intention of the message. 2) **A systematic approach to the generation of adversarial texts.** The words and sentences are selected based on their positivity scores rather than being randomly selected from non-hate ones, as in previous works. 3) **Demonstration of attacks on modern state-of-the-art architectures**. We tested the most popular transformer-based toxicity detectors, including Google Perspective API and OpenAI Moderation API. 4) **Highlighting a significant flaw in the current approach to toxicity detection.** We show that the predicted toxicity score can be watered down by the addition of positive words to the point of effectively masking toxicity. 5) **Multi- and cross-lingual settings of attacks.** We describe the attacks on seven different languages and combinations among them. A larger number of languages leads to a larger number of possible attacks - in the case of the 5-language model, we have 120 attack scenarios. ## 2 "To Each His Own" attack ### Word-based attack In this attack type, the original message is appended or prepended with a sequence of positive words, with the aim that these words will be analysed by a model but passed over by a human. This allows an attacker to preserve their original message without any changes, unlike in other attacks, which substitute words with synonyms or replace original characters with their look-alikes. In our experiment, we used words from SentiWordNet with a positivity score \(\geq 0.85\). We tested 14 openly available pretrained neural networks with the phrase "I hate you" and the addition of 0 to 9 positive words. In case the toxicity score for a suggested phrase was estimated to be less than 0.5 (the most common threshold), we used the phrase "I hate you dumbass" instead. If, after attacking with 9 words, the model's output was greater than 0.5, we continued adding new positive words until the score dropped below this threshold or until we reached the maximum length of an input sequence for the model tested. For the English language, we also performed experiments with the phrase "Text for bot:" placed between a message and the added words. We conducted additional experiments with the Perspective API and OpenAI Moderation API, discovering they are also vulnerable to our attacks. For the eight non-English models, we asked native speakers of the respective languages to translate a selected subset of words and attack phrases. Within this study, we tested language models for seven languages aside from English: Russian, German, French, Portuguese, Spanish, Turkish and Vietnamese. 
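The core attack loop is simple enough to sketch in a few lines. In the sketch below, `toxicity_score` is a placeholder for any black-box classifier that returns a toxicity probability, and `positive_words` stands in for the SentiWordNet selection described above; both names are illustrative assumptions rather than the exact implementation used in the experiments.

```python
def word_based_attack(message, toxicity_score, positive_words,
                      max_words=9, threshold=0.5):
    """Append positive words until the black-box toxicity score drops
    below the detection threshold (0.5 is the most common one).

    toxicity_score: callable mapping text -> probability in [0, 1]
    positive_words: list of words, e.g. SentiWordNet entries with
                    positivity score >= 0.85
    """
    for n in range(max_words + 1):
        candidate = (message + " " + " ".join(positive_words[:n])).strip()
        if toxicity_score(candidate) < threshold:
            return candidate, n  # attack succeeded after adding n words
    return None, max_words       # the model still flags the message
```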
It is worth noting that the first five listed languages represent three branches (Slavic, Germanic and Italic) of the Indo-European language family, and the last two are from different language families: the Turkic and the Austro-Asiatic family, respectively. By using various models and languages, we aim to investigate whether the suggested attack is model- and language-agnostic. ### Cross-lingual word-based attack The main weakness of all multilingual models lies in their greatest advantage - the ability to work with multiple languages and writing systems. Adding positive words in different languages separates the messages to the human and to the toxicity detection system even further, perhaps even denying a human the ability to read the non-intended text. Examples of such texts are shown in Figure 1. Even some monolingual models, which had been exposed to another language during the pretraining phase, fall victim to this attack. We also conducted cross-lingual experiments with OpenAI models, including ChatGPT in conversation mode with different prompts, and found them susceptible to the attack. Perspective API was found to be non-functional in a multilingual setting, failing with an error, as it is unable to detect the language of the text. ### Sentence-based attack Incoherent text, produced by concatenating lexically unconnected words, is easily detectable by modern language models. To circumvent this detection, we experimented with another version of the attack: concatenating sentences from the Stanford Sentiment Treebank with a positivity score \(\geq 0.9\) and a length of no less than 100 symbols. Since sentences consist of grammatically linked words instead of just a set of random positive words, this attack variant is less obvious and more challenging to detect. ## 3 Defence We performed simple adversarial training of the DistilBERT model on the binary Jigsaw Toxic Comments dataset to improve the model's defence. We performed experiments with word- and sentence-based attacks, attacking either only the toxic messages or all messages in the dataset. In addition, we picked out only toxic messages and attacked half of them - in this scenario, the task was distinguishing attacked and non-attacked texts. ## 4 Results ### Attack During a word-based attack, both prepending and appending of the positive words showed similar results, with prepending being slightly less effective. Appending nine words was enough to flip the prediction of almost every model. The "SkolkovoInstitute RoBERTa" toxicity classifier required 23 words to fall below 0.5, and "English-abusive-MuRIL" was still confidently predicting toxicity even after the addition of 252 words, reaching the length limit of its input. The results of selected experiments are shown in Tables 1, 2 and 3¹. Footnote 1: Full tables can be found in Appendix A. The addition of the phrase "Text for bot:" between the two parts of a text made little difference in the results, i.e., it can be used to create an even more apparent separation without lowering the attack's efficiency. As expected, the results of cross-lingual attacks followed the same trend as those of monolingual ones. These results are shown in Tables 4 and 5¹. With a sentence-based attack, adding even one sentence drastically lowered the prediction scores of the models. The scores are shown in Table 6¹. 
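As a concrete illustration of the adversarial training described in §3, the augmentation step can be sketched as follows; the helper names, the attack fraction and the even split between word- and sentence-based variants are illustrative assumptions rather than the exact training pipeline.

```python
import random

def augment_with_attacks(texts, labels, positive_words, positive_sentences,
                         attack_fraction=0.5, n_words=9):
    """Attach a word- or sentence-based positive suffix to a fraction of
    the toxic messages (label == 1) so that a classifier fine-tuned on
    the augmented data learns to ignore appended positive content.

    positive_words is assumed to contain at least n_words entries.
    """
    augmented = []
    for text, label in zip(texts, labels):
        if label == 1 and random.random() < attack_fraction:
            if random.random() < 0.5:  # word-based variant
                suffix = " ".join(random.sample(positive_words, n_words))
            else:                      # sentence-based variant
                suffix = random.choice(positive_sentences)
            text = text + " " + suffix
        augmented.append((text, label))
    return augmented
```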
The defence performed strongly, achieving an F1 score of 0.82 on binary toxicity classification with word- and sentence-based attacks. The model achieved the same score on a non-attacked dataset, succeeding even better in distinguishing attacked sentences, with an F1 score of 0.99. However, defence models must be built for both word- and sentence-based attacks, as one single model will not work perfectly for both scenarios. ## 5 Discussion All of the attack variants succeeded in confusing every toxicity detection model tested, to varying degrees. The simplistic nature of such attacks could allow virtually any Internet user to exploit them to avoid automoderation systems. A possible countermeasure for this type of attack is adversarial training or rule-based filtering. However, a simple rule-based filter can be fooled by char-level adversarial attacks, and adversarial training can be made much more difficult by increasing the quality and quantity of the possible text insertions. A comprehensive approach, covering all levels of perturbation, should be applied in real-life attack and defence scenarios alike. ## 6 Conclusion In this paper, we demonstrated a novel and effective way of bypassing toxicity detection models by appending positive words or sentences to the end of a toxic message. The introduced concept of the "To Each His Own" attack is based on the idea of separating the messages addressed to a human and those addressed to an algorithm. The attack exploits the fact that the toxicity detection models are trained on sentence-level labels and do not consider the context or the intention of messages. The described attack in all its variants can be easily used on almost any toxicity-detection neural model, with fairly good results. For future research, we suggest looking for a more sophisticated way of constructing insertions, perhaps with respect to the original message, making them a semantically correct addition to human-written messages. ## 7 Acknowledgments We thank all the volunteers who helped us with translations. \begin{table} \begin{tabular}{c c c c} \hline **n sentences** & **BERT** & **RoBERTa** & **ELECTRA** \\ \hline 0 & 0,590 & 0,962 & 0,898 \\ 1 & 0,001 & 0,001 & 0,232 \\ 2 & 0,001 & 0,002 & 0,147 \\ \hline \end{tabular} \end{table} Table 6: Results of sentence-based attacks. \begin{table} \begin{tabular}{c c c c} \hline **n words** & **Vietnamese** & **French** & **Turkish** \\ \hline 0 & 0,996 & 0,970 & 0,814 \\ 1 & 0,008 & 0,962 & 0,186 \\ 2 & 0,007 & 0,573 & 0,503 \\ 3 & 0,008 & 0,396 & 0,043 \\ 4 & 0,009 & 0,090 & 0,048 \\ 5 & 0,008 & 0,054 & 0,085 \\ 6 & 0,007 & 0,055 & 0,090 \\ 7 & 0,007 & 0,064 & 0,029 \\ 8 & 0,007 & 0,082 & 0,009 \\ 9 & 0,007 & 0,045 & 0,005 \\ \hline \end{tabular} \end{table} Table 3: Word-based attack on transformer-based language models trained on the languages from three different language families. \begin{table} \begin{tabular}{c c c c} \hline **n words** & **eng+tur** & **eng+rus** & **eng+sp** \\ \hline 0 & 0,971 & 0,971 & 0,971 \\ 1 & 0,913 & 0,717 & 0,802 \\ 2 & 0,904 & 0,717 & 0,687 \\ 3 & 0,912 & 0,761 & 0,536 \\ 4 & 0,541 & 0,800 & 0,558 \\ 5 & 0,634 & 0,592 & 0,500 \\ 6 & 0,731 & 0,456 & 0,539 \\ 7 & 0,604 & 0,402 & 0,426 \\ 8 & 0,519 & 0,433 & 0,434 \\ 9 & 0,455 & 0,369 & 0,384 \\ \hline \end{tabular} \end{table} Table 4: Cross-lingual word-based attack on toxicity detection models. 
\begin{table} \begin{tabular}{c c c c c} \hline **n** & **L en+fr** & **L en+ger** & **S en+fr** & **S en+ger** \\ \hline 0 & 0.872 & 0.872 & 0.888 & 0.888 \\ 1 & 0.931 & 0.981 & 0.793 & 0.848 \\ 2 & 0.895 & 0.898 & 0.779 & 0.510 \\ 3 & 0.769 & 0.889 & 0.671 & 0.511 \\ 4 & 0.816 & 0.846 & 0.594 & 0.344 \\ 5 & 0.771 & 0.807 & 0.606 & 0.302 \\ 6 & 0.800 & 0.647 & 0.618 & 0.318 \\ 7 & 0.659 & 0.533 & 0.605 & **0.201** \\ 8 & 0.592 & 0.555 & 0.573 & 0.208 \\ 9 & **0.579** & **0.489** & **0.424** & 0.283 \\ \hline \end{tabular} \end{table} Table 5: The results of cross-lingual attacks on OpenAI models. n - number of words added, L - latest, S - stable. We are grateful to the reviewers and the program chair of the EMNLP conference for providing constructive feedback, suggesting additional experiments and providing different points of view. This helped us structure the paper and ensure the robustness of our results. Sergey Berezin wishes to record that he is grateful to Mr. Mario Fritz from Saarland University for the introduction to the domain of adversarial attacks through his insightful course on this topic. He is also grateful to Mr. Maxime Amblard from the University of Lorraine for encouraging him to pursue research in this direction and validating his ideas.
2306.06414
Revealing Model Biases: Assessing Deep Neural Networks via Recovered Sample Analysis
This paper proposes a straightforward and cost-effective approach to assess whether a deep neural network (DNN) relies on the primary concepts of training samples or simply learns discriminative, yet simple and irrelevant features that can differentiate between classes. The paper highlights that DNNs, as discriminative classifiers, often find the simplest features to discriminate between classes, leading to a potential bias towards irrelevant features and sometimes missing generalization. While a generalization test is one way to evaluate a trained model's performance, it can be costly and may not cover all scenarios to ensure that the model has learned the primary concepts. Furthermore, even after conducting a generalization test, identifying bias in the model may not be possible. Here, the paper proposes a method that involves recovering samples from the parameters of the trained model and analyzing the reconstruction quality. We believe that if the model's weights are optimized to discriminate based on some features, these features will be reflected in the reconstructed samples. If the recovered samples contain the primary concepts of the training data, it can be concluded that the model has learned the essential and determining features. On the other hand, if the recovered samples contain irrelevant features, it can be concluded that the model is biased towards these features. The proposed method does not require any test or generalization samples, only the parameters of the trained model and the training data that lie on the margin. Our experiments demonstrate that the proposed method can determine whether the model has learned the desired features of the training data. The paper highlights that our understanding of how these models work is limited, and the proposed approach addresses this issue.
Mohammad Mahdi Mehmanchi, Mahbod Nouri, Mohammad Sabokrou
2023-06-10T11:20:04Z
http://arxiv.org/abs/2306.06414v1
# Revealing Model Biases: Assessing Deep Neural Networks via Recovered Sample Analysis ###### Abstract This paper proposes a straightforward and cost-effective approach to assess whether a deep neural network (DNN) relies on the primary concepts of training samples or simply learns discriminative, yet simple and irrelevant features that can differentiate between classes. The paper highlights that DNNs, as discriminative classifiers, often find the simplest features to discriminate between classes, leading to a potential bias towards irrelevant features and sometimes missing generalization. While a generalization test is one way to evaluate a trained model's performance, it can be costly and may not cover all scenarios to ensure that the model has learned the primary concepts. Furthermore, even after conducting a generalization test, identifying bias in the model may not be possible. Here, the paper proposes a method that involves recovering samples from the parameters of the trained model and analyzing the reconstruction quality. We believe that if the model's weights are optimized to discriminate based on some features, these features will be reflected in the reconstructed samples. If the recovered samples contain the primary concepts of the training data, it can be concluded that the model has learned the essential and determining features. On the other hand, if the recovered samples contain irrelevant features, it can be concluded that the model is biased towards these features. The proposed method does not require any test or generalization samples, only the parameters of the trained model and the training data that lie on the margin. Our experiments demonstrate that the proposed method can determine whether the model has learned the desired features of the training data. The paper highlights that our understanding of how these models work is limited, and the proposed approach addresses this issue. ## 1 Introduction Deep Neural Networks (DNNs) have become a ubiquitous approach in various fields, including computer vision (Chai et al. (2021); Voulodimos et al. (2018)), natural language processing (Deng and Liu (2018)), and speech recognition (Nassif et al. (2019)), among others. Despite their impressive performance, the internal workings of these models are often considered a black box, and understanding their decision-making process is a challenging problem. Interpreting how DNNs work can be crucial in many applications, such as medical diagnosis or autonomous driving, where the model's decisions need to be explainable. Several studies have indicated that DNNs possess significant weaknesses, including susceptibility to distribution shifts (Sugiyama and Kawanabe (2012)), lack of interpretability (Chakraborty et al. (2017)), and insufficient robustness (Szegedy et al. (2013)). Moreover, it has been widely observed that DNNs, as discriminative models, attempt to identify simple yet effective features that distinguish between different classes (Shah et al. (2020)). For instance, a model that discriminates based on color or texture may not necessarily learn the underlying shape. Furthermore, studies have shown that such DNNs often exhibit bias towards background features (Moayeri et al. (2022)). Consequently, if there are irrelevant or biased concepts that assist the model in discriminating its training samples, the model may become overly reliant on them. 
Therefore, a fundamental and crucial inquiry to pose is _whether a DNN has learned the primary concepts of the training data or simply memorized irrelevant features to make predictions._ Perhaps the most natural way to answer this question is to evaluate the test error of the model and also check the model's generalization. However, in many cases, we do not have access to test data, and we can only use the trained model to evaluate its performance. Moreover, this approach does not capture all key properties of the model, such as robustness. Besides, evaluating the model on test data and checking the model's robustness against a wide range of OOD samples is time-consuming and computationally expensive. As a result, various works have examined this question. For instance, a recent line of work (Morwani et al. (2023); Addepalli et al. (2022a)) has focused on simplicity biases and ways to mitigate them. In a nutshell, this paper attempts to answer the question of whether trained models grasp the core concepts of their training data or merely memorize biased or irrelevant features that facilitate discrimination. The primary assumption is that no testing data is available. The main intuition is that after training, the model's learned knowledge is likely distilled in the parameters of the model (i.e., the weights of a DNN). Accordingly, if we extract the information stored in the weights, the model's knowledge, specifically which features it has learned, should be reflected in that extracted information. This paper proposes a cost-effective approach to evaluate the model's performance by reconstructing samples from the trained model's weights and analyzing the quality of the reconstructed samples. We discuss that the reconstructed samples will contain the features that the model's weights have been optimized to discriminate based on. In other words, if the model has learned to differentiate between classes based on certain features, these features will be reflected in the reconstructed samples. Therefore, if the reconstructed samples contain the primary concepts of the training data, it can be concluded that the model has learned the essential and determining features. On the other hand, if the model fails to generate an accurate reconstruction of the main concepts of its training inputs, it can be concluded that the model has been biased toward some irrelevant features. The parameters of the trained model and the training data that lie on the margin are all that are necessary to apply our approach. We leverage the SSIM metric (Wang et al. (2004)) to evaluate the images' quality and demonstrate that it provides an effective means of assessing the model's performance when handling complex data and learning intricate features. However, for simple datasets such as MNIST, it is essential to have human supervision to evaluate the model's efficacy. The main contributions of this paper are: * We propose a novel and powerful approach that enables us to accurately evaluate model biases, assessing whether the model learns discriminative yet simple and irrelevant features, or grasps the fundamental concepts of the training data, solely by utilizing trained models and training data, without requiring any test or validation data. * We design a comprehensive set of experiments with varying levels of bias complexity to analyze the behavior of the model during training. 
Interestingly, we observed that in the presence of the challenging biases we intentionally crafted, the model focused more on learning the main concept. However, when faced with simpler biases, the model tended to converge quickly and ceased to learn further. ## 2 Related work The concept of reconstructing training samples from trained DNN parameters has been previously explored by Haim et al. (2022). Their work revealed a privacy concern by demonstrating that a significant portion of the actual training samples could be reconstructed from a trained neural network classifier. Our proposed method is inspired by these findings but focuses on assessing the learned features from reconstructed samples to infer potential biases and discriminatory features that might not have been captured by the training process. Another significant contribution to understanding the theoretical properties of DNNs is the introduction of the Heavy-Tailed Self-Regularization (HT-SR) theory by Martin and Mahoney (2020). This theory suggests a connection between the weight matrix correlations in DNNs and heavy-tailed random matrix theory, providing a metric for predicting test accuracies across different architectures. Taking this concept further, Martin et al. (2021) showcased the effectiveness of power-law-based metrics derived from the HT-SR theory in predicting properties of pre-trained neural networks, even when training or testing data is unavailable. Addepalli et al. (2022) investigated the relationship between simplicity bias and brittleness of DNNs, proposing the Feature Reconstruction Regularizer (FRR) to encourage the use of more diverse features in classification. In a related effort to identify maliciously tampered data in pretrained models, Wang et al. (2020) studied the detection of Trojan networks, focusing on data-scarce conditions. While their work emphasizes detecting adversarial attacks and preserving privacy, our proposed method concentrates on identifying biases in learned features and assessing the quality of pre-trained models. In another line of research, Li and Xu (2021) introduced a method to discover the unknown biased attribute of an image classifier based on hyperplanes in the generative model's latent space. Their goal was to help human experts uncover unnoticeable biased attributes in object and scene classifiers. Kleyko et al. (2020) developed a theory for one-layer perceptrons to predict the performance of neural networks in various classification tasks. This theory was based on Gaussian statistics and provided a framework to predict classification accuracies for different classes by investigating the mean vector and covariance matrix of the postsynaptic sums. Similarly, Unterthiner et al. (2020) proposed a formal setting that demonstrated the ability to predict DNN accuracy by solely examining the weights of trained networks without evaluating them on input data. Their results showed that simple statistics of the weights could rank networks by their performance, even across different datasets and architectures. A practical framework called ContRE (Wu et al. (2021)) was proposed which used contrastive examples to estimate generalization performance. This framework followed the assumption that robust DNN models with good generalization performance could extract consistent features and make consistent predictions from the same image under varying data transformations. 
By examining classification errors and Fisher ratios on generated contrastive examples, ContRE assessed and analyzed the generalization performance of DNN models in complement to a testing set. In conclusion, our approach draws inspiration from various prior work on understanding and analyzing deep neural network properties, focusing primarily on identifying biases in the learned features and their relevance to the primary concepts of the training data. We believe that our work not only enhances our understanding of the underlying mechanisms of these models but also contributes to improving their generalization and robustness. ## 3 Preliminaries **Notation**: In this section, we introduce the necessary notation for understanding the reconstruction algorithm. We denote the set \(\{1,2,...,n\}\) by \([n]\). Let \(S=\{(x_{i},y_{i})\}_{i=1}^{n}\subseteq\mathbb{R}^{d}\times\{-1,1\}\) denote a binary classification training dataset, where \(n\) is the number of training samples and \(d\) stands for the number of features in each sample. Moreover, let \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) be a neural network with weights \(\theta\), where \(\theta\in\mathbb{R}^{p}\) and \(p\) denotes the number of weights. We denote the training loss function by \(\ell:\mathbb{R}\rightarrow\mathbb{R}\) and the empirical loss on the dataset \(S\) by \(L(\theta)=\sum\limits_{i=1}^{n}\ell(y_{i}f_{\theta}(x_{i}))\). **Reconstruction scheme**: We use the method proposed in Haim et al. (2022) to reconstruct the training data. Here, we provide the details of their method: Suppose that \(f_{\theta}\) is a homogeneous ReLU neural network and we aim to minimize the binary cross entropy loss function over the dataset \(S\) using gradient flow. Moreover, assume that there is some time \(t_{0}\) such that \(L(\theta(t_{0}))<1\). Then, as shown by Lyu and Li (2019), the gradient flow converges in direction to a first-order KKT point of the following max-margin problem: \[\min_{\theta}\frac{1}{2}||\theta||^{2}\qquad\text{s.t.}\qquad\forall i\in[n]\quad y_{i}f_{\theta}(x_{i})\geq 1 \tag{1}\] So, if gradient flow converges in direction to \(\theta^{*}\), then there exist \(\lambda_{1},...,\lambda_{n}\in\mathbb{R}\) such that: \[\theta^{*}=\sum_{i=1}^{n}\lambda_{i}y_{i}\nabla_{\theta}f_{\theta^{*}}(x_{i}) \tag{2}\] \[\forall i\in[n]\quad y_{i}f_{\theta^{*}}(x_{i})\geq 1 \tag{3}\] \[\forall i\in[n]\quad\lambda_{i}\geq 0 \tag{4}\] \[\forall i\in[n]\quad\lambda_{i}=0\quad\text{if}\quad y_{i}f_{\theta^{*}}(x_{i})\neq 1 \tag{5}\] Now, given a trained neural network with weights \(\theta^{*}\), we wish to reconstruct \(m\) training samples. We manually set \(y_{1:\frac{m}{2}}=-1\) and \(y_{\frac{m}{2}:m}=1\). The reconstruction loss function is then defined based on equations 2 and 4 as follows: \[L_{reconstruction}=\alpha_{1}||\theta^{*}-\sum_{i=1}^{m}\lambda_{i}y_{i}\nabla_{\theta}f_{\theta^{*}}(x_{i})||_{2}^{2}+\alpha_{2}\sum_{i=1}^{m}\max\{-\lambda_{i},0\}+\alpha_{3}L_{prior}, \tag{6}\] where \(L_{prior}\) is defined for image datasets as \(L_{prior}=\max\{z-1,0\}+\max\{-z-1,0\}\) for each pixel \(z\), to penalize pixel values outside the range \([-1,1]\). By minimizing equation 6 with respect to the \(x_{i}\)'s and \(\lambda_{i}\)'s, we can reconstruct the training data that lie on the margin. 
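To make the reconstruction objective concrete, the following is a minimal PyTorch sketch of equation 6. It is not the released implementation of Haim et al. (2022); the \(\alpha\) weights and the outer optimization loop (e.g., gradient descent over the \(x_{i}\)'s and \(\lambda_{i}\)'s) are assumptions left to the reader.

```python
import torch

def reconstruction_loss(model, theta_star, xs, lambdas, ys,
                        alpha1=1.0, alpha2=1.0, alpha3=1.0):
    """Sketch of Eq. (6).

    theta_star: flattened trained weight vector (detached)
    xs: (m, d) candidate reconstructions (optimization variables)
    lambdas: (m,) dual variables (optimization variables)
    ys: (m,) fixed labels in {-1, +1}
    """
    params = list(model.parameters())
    weighted_grad = torch.zeros_like(theta_star)
    for x_i, lam_i, y_i in zip(xs, lambdas, ys):
        out = model(x_i.unsqueeze(0)).squeeze()
        # Per-sample gradient of the network output w.r.t. the weights;
        # create_graph=True lets gradients flow back to xs and lambdas.
        grads = torch.autograd.grad(out, params, create_graph=True)
        flat = torch.cat([g.reshape(-1) for g in grads])
        weighted_grad = weighted_grad + lam_i * y_i * flat
    # Stationarity residual of the KKT condition (Eq. 2)
    term1 = (theta_star - weighted_grad).pow(2).sum()
    # Penalize negative dual variables (Eq. 4)
    term2 = torch.clamp(-lambdas, min=0).sum()
    # Pixel prior: penalize values outside [-1, 1]
    term3 = (torch.clamp(xs - 1, min=0) + torch.clamp(-xs - 1, min=0)).sum()
    return alpha1 * term1 + alpha2 * term2 + alpha3 * term3
```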
## 4 Proposed method As mentioned earlier, the main motivation of this paper is to answer the question "whether a trained neural network has learned the intended concepts of the training data or not" without access to test data. To answer this question, we propose a simple, cost-efficient idea. Our idea is based on the intuitive hypothesis that the knowledge acquired by a model \(\mathcal{M}\) during training, even if the model has learned biased or irrelevant features, is encoded in the model's weights. So, by extracting the network's learned knowledge, we can discover whether the model has learned the important concepts of the training data, e.g. objects, or not. Specifically, suppose that the weights \(\mathcal{W}\) of model \(\mathcal{M}\) are trained on dataset \(\mathcal{X}\). We denote the optimized weights by \(\mathcal{W}^{*}\). Therefore, we expect the knowledge learned and utilized by the model to be distilled in \(\mathcal{W}^{*}\). We recover the learned knowledge of the model in the Recovery module \(\mathcal{R}(\mathcal{W})\) by reconstructing training data from \(\mathcal{W}^{*}\). Then, we compare the reconstructed samples to the original training data, and if the primary features of the training data are reconstructed well, we conclude that the network has learned the primary features of the training data. A schematic of our proposed framework is shown in Figure 1. One way to evaluate the reconstructed data is human supervision. Moreover, we leverage the SSIM metric (Wang et al. (2004)) and introduce a Score function \(\mathcal{S}(\mathcal{M},X,\mathcal{W}^{*})\) for comparing the reconstructed samples with the training data. The details of our score function are provided in section 4.3. Now suppose we introduce different biases of varying degrees to our dataset, represented by \(\mathcal{T}_{i}\), which convert \(X\) to \(X_{\mathcal{T}_{i}}\) (see section 4.1). For a simple bias \(\mathcal{T}_{1}\), the model \(\mathcal{M}_{1}\) first attempts to learn this bias and subsequently ceases training after identifying it. As a result, \(\mathcal{M}_{1}\) would not learn the primary features of the training data. On the other hand, if model \(\mathcal{M}_{2}\) is trained on a bias \(\mathcal{T}_{2}\) that is harder for the network to learn, \(\mathcal{M}_{2}\) can learn the primary features of the training data to some extent. So we should have: \[\mathcal{S}(\mathcal{M}_{1},X_{\mathcal{T}_{1}},\mathcal{W}^{*}_{1})\leq \mathcal{S}(\mathcal{M}_{2},X_{\mathcal{T}_{2}},\mathcal{W}^{*}_{2}). \tag{7}\] More generally, suppose that N models \(\mathcal{M}_{1}\)... \(\mathcal{M}_{N}\) are trained on N datasets \(X_{\mathcal{T}_{1}}\)... \(X_{\mathcal{T}_{N}}\), where each \(X_{\mathcal{T}_{i}}\) is a biased version of the base dataset \(X\). Now, for every \(i,j\in[N]\), if we have: \[\mathcal{S}(\mathcal{M}_{i},X_{\mathcal{T}_{i}},\mathcal{W}^{*}_{i})<\mathcal{S}( \mathcal{M}_{j},X_{\mathcal{T}_{j}},\mathcal{W}^{*}_{j}), \tag{8}\] then model \(\mathcal{M}_{j}\) is probably better at learning the main concepts of the training data and thus likely generalizes better. ### \(\mathcal{T}\) Designing different levels of bias To investigate the primary question of this paper, we manually add a small square trigger to the images. This trigger affects the network's performance, and we can identify how the network's performance relates to the reconstructed samples. Suppose that we have a base dataset \(X\), e.g., CIFAR10. 
We add the trigger at three different levels of difficulty. Level One: the trigger is placed in the top left corner of the images of \(X\), which is the easiest trigger for the network to detect, to obtain \(X_{\mathcal{T}_{1}}\). Level Two: we randomly select three different positions for each class (six positions total), and for each image of \(X\), we place the square trigger in one of the chosen positions to obtain \(X_{\mathcal{T}_{2}}\). This trigger is harder to detect due to the position of the triggers being randomized. Level Three: we randomly select five different positions for each class (ten positions total), and for each image of \(X\), we place the square trigger in one of the selected positions to obtain \(X_{\mathcal{T}_{3}}\). This level of trigger is the most difficult to detect. By training a neural network on each of the obtained datasets \(X_{\mathcal{T}_{1}}\), \(X_{\mathcal{T}_{2}}\), and \(X_{\mathcal{T}_{3}}\), we obtain three different models \(\mathcal{M}_{1},\mathcal{M}_{2},\) and \(\mathcal{M}_{3}\) as well as a model \(\mathcal{M}_{base}\) trained on the base dataset \(X\) without any changes. ### \(\mathcal{R}\) Knowledge recovery module For a trained neural network with optimized weights \(\mathcal{W}^{*}\), the output of \(\mathcal{R}(\mathcal{W}^{*})\) is the reconstructed training data recovered from the optimized weights. We use the method proposed in (Haim et al. (2022)) for reconstructing the training data. See section 3 for the details of the reconstruction algorithm. It is important to note that the reconstruction algorithm proposed in that research has both theoretical and practical limitations. On the theory side, the trained neural network should be a homogeneous ReLU network that minimizes the cross entropy loss function over a binary classification problem while achieving zero training error. On the practical side, the method has only been shown to work on small training data and is not well-suited for use with CNNs. Figure 1: An overview of our proposed method. Our methodology involves training several neural networks on both biased and unbiased datasets, resulting in different models. Next, we extract the learned knowledge of each model from the weights of the trained network and determine if the models have captured the primary concepts of the training data by evaluating this knowledge. ### \(\mathcal{S}\) Score function Suppose that a neural network \(\mathcal{M}\) is trained on dataset \(X\) with \(n\) training samples to obtain the optimal weights \(\mathcal{W}^{*}\). Moreover, assume that we have reconstructed \(m\) training samples, where \(m>n\). As proposed in (Haim et al. (2022)), first the reconstructed images are scaled to fit into the range \([0,1]\). Then, for each training sample, the distance of that training sample from all reconstructed samples is computed. By computing the mean of the closest reconstructed samples to each training image, pairs of (training image, reconstructed image) are formed. Finally, the SSIM score is computed for each pair and the pairs are sorted based on their SSIM score (in descending order). Now, we put all sorted pairs in set \(P=\{(I_{1},I_{1}^{\prime}),(I_{2},I_{2}^{\prime}),...,(I_{n},I_{n}^{\prime})\}\), where each \(I_{i}\) denotes a training image, each \(I_{i}^{\prime}\) denotes a reconstructed image, and \((I_{1},I_{1}^{\prime})\) and \((I_{n},I_{n}^{\prime})\) have the highest and lowest SSIM scores, respectively. 
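A minimal sketch of this pairing-and-scoring procedure (formalized as the score function in equation 9 below) could look as follows; it assumes grayscale images in \([0,1]\) and, for simplicity, pairs each training image with its single nearest reconstruction in pixel space.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def reconstruction_score(train_imgs, recon_imgs, k=10):
    """train_imgs: (n, H, W) array; recon_imgs: (m, H, W) array, m > n.
    Pairs every training image with its nearest reconstruction, then
    averages the k best SSIM values (k = 10 in the experiments below)."""
    scores = []
    for img in train_imgs:
        # Squared pixel-space distance to every reconstructed sample
        dists = ((recon_imgs - img) ** 2).reshape(len(recon_imgs), -1).sum(1)
        nearest = recon_imgs[np.argmin(dists)]
        scores.append(ssim(img, nearest, data_range=1.0))
    return float(np.mean(sorted(scores, reverse=True)[:k]))
```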
We define the Score function \(\mathcal{S}(\mathcal{M},X,\mathcal{W}^{*})\) as \[\mathcal{S}(\mathcal{M},X,\mathcal{W}^{*})=\frac{1}{k}\sum_{i=1}^{k}SSIM(I_{i},I_{i}^{\prime}), \tag{9}\] where \(k\in[n]\). Although the above formulation uses the entire training set \(\{I_{1},...,I_{n}\}\), in practice we can restrict it to the training samples that lie on the margin, since these are the only samples the reconstruction algorithm can recover. ## 5 Experiment results In this section, we conducted a comprehensive experiment to evaluate the validity of the proposed idea. The results confirm that the idea works effectively. It is worth mentioning that, due to current limitations in the used data reconstruction method, we were only able to evaluate our method on low-resolution datasets (see also section 4.2). However, our primary objective was to demonstrate the validity of the idea rather than achieving optimal performance. Furthermore, we could not find any existing work that addresses the same tasks as our paper, making direct comparisons challenging. Therefore, we provide a thorough analysis on various datasets to showcase the validity of our idea. ### Experimental setup For our experiments, we utilize the MNIST Deng (2012) and CIFAR10 Krizhevsky et al. (2009) datasets and convert both into binary classification problems - odd/even for MNIST and vehicles/animals for CIFAR10. We employ a fully connected ReLU neural network with architecture (d-1000-1000-1), where d refers to the input dimension. We only add bias terms to the first network layer to maintain homogeneity. The experiments in section 5.2 are done with 500 training images (250 images for each class). All other experiments are done with 100 training images (50 images for each class). We use the hyperparameters suggested in Haim et al. (2022) for each dataset. Similarly, in all experiments, the number of reconstructed samples is set to twice that of the training samples. However, we also run a hyperparameter search to make sure that the suggested hyperparameters are suitable for the different experiments. In all experiments, the training error reaches zero and we continue training the network until it reaches a training loss of less than 1e-5. We compute test errors leveraging the standard MNIST and CIFAR10 test images in all experiments. While we change the training images for different experiments, we keep the test data constant throughout. As we discussed in section 4.3, we use the average SSIM values of the best \(k\) reconstructed images as our score function to evaluate the reconstructed images' quality. In all experiments, we set \(k=10\): most of the reconstructed images are of poor quality, with only a few having good quality, so averaging the SSIM over all of them would be unsuitable for evaluating the recovery module, and we instead use the average SSIM of the best 10 reconstructed images. ### Evaluating the recovery module performance First, we examine the results obtained by Haim et al. (2022) to validate that their reconstruction scheme reconstructs discriminative features with better quality. The best reconstructed samples for CIFAR10 and MNIST are shown in Figures 2 and 3, respectively, and the average of the best 10 SSIMs is reported in Table 1. 
Both figures clearly indicate that the primary objects in each dataset - the digits in MNIST and the animals/vehicles in CIFAR10 - are reconstructed with significantly higher quality than the background pixels. Furthermore, according to Table 1, the reconstruction quality of CIFAR10 images is significantly better than that of MNIST images. This is likely due to the fact that the neural network must learn more complex features in order to classify the CIFAR10 images, resulting in the reconstruction algorithm being able to reconstruct more pixels and achieving a higher SSIM score. \begin{table} \begin{tabular}{l l l} \hline Dataset & Test accuracy & Mean best 10 SSIMs \\ \hline MNIST & 0.898 & 0.1862 \\ CIFAR10 & 0.771 & 0.5182 \\ \hline \end{tabular} \end{table} Table 1: Test accuracy of the neural network trained on 500 MNIST/CIFAR10 images for a binary classification task, as well as the average SSIM of the best 10 reconstructed images from the weights of the trained neural network. Figure 2: Top 30 reconstructed samples from the CIFAR10 dataset. Odd rows show reconstructed samples and even rows show the corresponding training sample. The neural network was trained on 500 CIFAR10 images. Figure 3: Top 30 reconstructed samples from the MNIST dataset. Odd rows show reconstructed samples and even rows show the corresponding training sample. The neural network was trained on 500 MNIST images. It is obvious from the figure that the background pixels are not reconstructed well. ### CIFAR10 #### 5.3.1 Different levels of bias As we discussed in section 4.1, we design different datasets from a base dataset \(X\) to obtain different trained models \(\mathcal{M}_{1},\mathcal{M}_{2},\) and \(\mathcal{M}_{3}\). For CIFAR10 as the base dataset, we set the pixel values of the square triggers to 0 for the vehicle class and 1 for the animal class. Figure 5 shows the reconstructed samples recovered from each model \(\mathcal{M}_{i}\) as well as the base model \(\mathcal{M}_{base}\). It is evident from this figure that the reconstructed samples for the first level of difficulty (model \(\mathcal{M}_{1}\)) have poor quality, as the main objects in the images (vehicles or animals) are not reconstructed well. This suggests that model \(\mathcal{M}_{1}\) has not learned the primary objects (vehicles and animals) in the dataset and has poor generalization. Conversely, the reconstructed samples for the two other levels of difficulty are of reasonable quality, indicating that the model has learned the main objects in the dataset. Our hypothesis is confirmed in Figure 6, which clearly demonstrates a direct relationship between the model's ability to learn the main concepts of the training data, measured by test accuracy, and the quality of reconstructed samples, measured by the average SSIM value of the best 10 reconstructed samples. An interesting finding from Figures 5 and 6 is that both models \(\mathcal{M}_{2}\) and \(\mathcal{M}_{3}\) have better test accuracy and a better mean SSIM score than the base model \(\mathcal{M}_{base}\). This suggests that adding biases that are hard to detect may improve the model's performance. #### 5.3.2 Contrast changing Now, instead of adding triggers to the images, we change the contrast factor of the images. We perform this experiment because this situation is closer to actual real-world biased images. We set the contrast factor for the vehicle and animal classes to 0.5 and 2, respectively. We obtained a test accuracy of 
0.691, which indicates that adjusting the contrast of the classes does not have a significant negative impact on the model's performance. Since the model performs well, we expect to reconstruct the main objects of the training set. In Figure 4, the top 30 reconstructed samples are shown, and it is clear that the main objects are reconstructed well in this experiment. The average SSIM of the best 10 reconstructed samples is 0.5774 for this experiment. If we compare this experiment with the previous experiments in section 5.3.1, we notice that these experiments also confirm equation 8. Figure 4: Top 30 reconstructed samples from the CIFAR10 dataset with changed contrast. The neural network was trained on 100 CIFAR10 images with changed contrast. The contrast factor is set to 0.5 and 2 for the vehicle and animal classes, respectively. Figure 5: Top 30 reconstructed samples for different models trained on the CIFAR10 dataset with different bias levels. See section 4.1 for the details of each model. ### MNIST Similar to the previous part, we introduce three different difficulty levels of bias to MNIST images. The relationship between the quality of reconstructed samples and test accuracy is shown in Figure 7. As is evident from this figure, for the model with the lowest test accuracy, we also obtain the lowest reconstruction quality. However, for the other models, it is difficult to validate equation 8 because the accuracies of these models are very close to each other. It is important to note that for the MNIST dataset, the SSIM metric is not a good representative of the model's performance. This is because, in the MNIST images, a large portion of the pixels are background pixels. So, even if the reconstruction algorithm recovers the digits well and performs well, it cannot reconstruct the background pixels well and thus yields a poor SSIM score; see Table 1. This experiment clearly shows that the SSIM metric (or other image quality metrics) cannot reveal the model's performance in all datasets. However, in complicated and real-world datasets, where the model needs to learn more complex features to classify correctly, the SSIM metric can be a good representative of the model's performance. To support our results, we have provided more experiments in the Appendix. ## 6 Discussion and limitations Our proposed method is primarily based on the reconstruction scheme proposed by Haim et al. (2022). Consequently, the limitations of their work also apply to our approach. Thus, our method works only for fully connected homogeneous ReLU neural networks for binary classification tasks. However, Haim et al. showed that their reconstruction algorithm also works for non-homogeneous ReLU neural networks. Another limitation is that we cannot use our method for networks that are trained on large datasets. To circumvent these limitations, we would need to modify the reconstruction algorithm used in this research. Another limitation of our work is that we need access to the training data that lie on the margin to compute our reconstruction score. This limitation can be removed in two ways: first, as we have shown in our experiments, human visual inspection is also a good way to evaluate the quality of reconstructed samples; second, if we want a fully automated process without human assistance, it may be possible to use non-reference-based image quality metrics to evaluate the quality of the reconstructed images. 
## 7 Conclusion In this paper, we proposed a simple method for discovering if a neural network has learned the primary concepts of the training data, such as digits in MNIST images and vehicles/animals in CIFAR10 images. Our method consists of an extraction phase that uses the reconstruction scheme introduced by Haim et al. (2022). We demonstrated that by extracting the learned knowledge of the network, we can assess how well the model has learned the intended attributes of the training data. Figure 6: Comparison of different models' test accuracy and their reconstruction performance on CIFAR10. \(\mathcal{M}_{base}\): trained model on original CIFAR10 images. \(\mathcal{M}_{1}\): trained model on CIFAR10 images with level one bias difficulty. \(\mathcal{M}_{2}\): trained model on CIFAR10 images with level two bias difficulty. \(\mathcal{M}_{3}\): trained model on CIFAR10 images with level three bias difficulty. Figure 7: Comparison of different models' test accuracy and their reconstruction performance on MNIST. \(\mathcal{M}_{base}\): trained model on original MNIST images. \(\mathcal{M}_{1}\): trained model on MNIST images with level one bias difficulty. \(\mathcal{M}_{2}\): trained model on MNIST images with level two bias difficulty. \(\mathcal{M}_{3}\): trained model on MNIST images with level three bias difficulty. Specifically, we showed that for a trained network, if the primary attributes of the training data are reconstructed well, then that network is likely to have learned the main concepts of the training data. However, if the primary attributes of the training data are not reconstructed well, then that network is likely to have learned some other discriminative features unrelated to the main concepts. Moreover, we demonstrated that adding biases that are difficult to detect in datasets can improve the performance of a neural network and enhance its generalization.
2302.07982
Correlation-Aware Neural Networks for DDoS Attack Detection In IoT Systems
We present a comprehensive study on applying machine learning to detect distributed Denial of service (DDoS) attacks using large-scale Internet of Things (IoT) systems. While prior works and existing DDoS attacks have largely focused on individual nodes transmitting packets at a high volume, we investigate more sophisticated futuristic attacks that use large numbers of IoT devices and camouflage their attack by having each node transmit at a volume typical of benign traffic. We introduce new correlation-aware architectures that take into account the correlation of traffic across IoT nodes, and we also compare the effectiveness of centralized and distributed detection models. We extensively analyze the proposed architectures by evaluating five different neural network models trained on a dataset derived from a 4060-node real-world IoT system. We observe that long short-term memory (LSTM) and a transformer-based model, in conjunction with the architectures that use correlation information of the IoT nodes, provide higher performance (in terms of F1 score and binary accuracy) than the other models and architectures, especially when the attacker camouflages itself by following benign traffic distribution on each transmitting node. For instance, by using the LSTM model, the distributed correlation-aware architecture gives 81% F1 score for the attacker that camouflages their attack with benign traffic as compared to 35% for the architecture that does not use correlation information. We also investigate the performance of heuristics for selecting a subset of nodes to share their data for correlation-aware architectures to meet resource constraints.
Arvin Hekmati, Nishant Jethwa, Eugenio Grippo, Bhaskar Krishnamachari
2023-02-15T23:04:58Z
http://arxiv.org/abs/2302.07982v1
# Correlation-Aware Neural Networks for DDoS Attack Detection In IoT Systems ###### Abstract We present a comprehensive study on applying machine learning to detect distributed denial of service (DDoS) attacks using large-scale Internet of Things (IoT) systems. While prior works and existing DDoS attacks have largely focused on individual nodes transmitting packets at a high volume, we investigate more sophisticated futuristic attacks that use large numbers of IoT devices and camouflage their attack by having each node transmit at a volume typical of benign traffic. We introduce new correlation-aware architectures that take into account the correlation of traffic across IoT nodes, and we also compare the effectiveness of centralized and distributed detection models. We extensively analyze the proposed architectures by evaluating five different neural network models trained on a dataset derived from a 4060-node real-world IoT system. We observe that long short-term memory (LSTM) and a transformer-based model, in conjunction with the architectures that use correlation information of the IoT nodes, provide higher performance (in terms of F1 score and binary accuracy) than the other models and architectures, especially when the attacker camouflages itself by following benign traffic distribution on each transmitting node. For instance, by using the LSTM model, the distributed correlation-aware architecture gives 81% F1 score for the attacker that camouflages their attack with benign traffic as compared to 35% for the architecture that does not use correlation information. We also investigate the performance of heuristics for selecting a subset of nodes to share their data for correlation-aware architectures to meet resource constraints. IoT DDoS Attacks, datasets, neural networks, machine learning, botnet, Cauchy distribution ## I Introduction The Internet of things (IoT) has grown dramatically, propelled by the rapid development of technology [1, 2]. IoT devices are predicted to become more pervasive in our lives than mobile devices. The number of IoT devices (nodes) connected to the Internet is projected to reach around 29 billion around the world [3]. Along with the growth in the number of IoT devices, their vulnerability and security risks have also been increasing at almost the same pace. Recent studies by HP security research, which examined 10 of the most popular IoT devices, found a high average number of vulnerabilities per device, such as lack of transport encryption, insecure software/firmware, and cross-site scripting [4, 5]. As a consequence, the cybersecurity of IoT devices ought to be developed at an ever-faster pace, accompanying the safe and secure growth of networks [6, 7]. With this goal in mind, this work addresses one of the most dangerous types of attacks involving IoT systems, namely distributed denial of service (DDoS) attacks [8, 9, 10]. In denial of service (DoS) attacks, the attacker tries to disturb the behavior of the victim server in order to block legitimate users' system access by reducing the server's availability. DoS attacks come in many forms, such as user datagram protocol (UDP) flood attacks, where the attacker sends more traffic to the victim server than it is capable of handling [11], and synchronize (SYN) flood attacks, where the attacker sends a transmission control protocol (TCP) connection request to the victim server with spoofed source addresses but never sends back the acknowledgment [12], etc. 
In distributed denial of service (DDoS) attacks, attackers get access to many compromised devices, usually called zombies, in order to perform DoS attacks on the victim server. Therefore, the attacker can significantly magnify the effect of the DoS attacks on the victim server [13]. IoT devices are a perfect choice for performing DDoS attacks, given the huge number of IoT devices that have access to the Internet and their vulnerability to becoming compromised. DoS and DDoS attacks can be performed using the protocols of different layers of the Open Systems Interconnection (OSI) model. DDoS attacks that use the transport layer are the most common type, where the attacker sends as many packets as possible to the victim server through UDP flooding, SYN flooding, etc. There are also DDoS attacks that use the application layer of the OSI model, where attackers try to disturb the behavior of the victim server by tying up every one of its threads with slow requests. Slow-rate DDoS attacks can be performed by sending data to the victim server at a very slow rate, but fast enough to prevent the connection from getting timed out [14]. One of the most famous IoT-based DDoS attacks was caused by the Mirai botnet. This botnet could bring down victim servers by infecting thousands of IoT devices and dramatically increasing the network traffic directed toward the victim servers, on the order of terabits per second (Tbps), affecting millions of end users [15, 16, 17]. Another example is the Reaper botnet, a variant of the Mirai botnet that could infect 2.7 million IoT devices [18]. There are many other botnets that could perform DDoS attacks by using billions of IoT devices around the world and impact many victim servers and end users [19]. Given this dangerous and dramatically growing threat, we explore the use of machine learning (ML) techniques as the main tool to prevent such attacks [20, 21]. One of the most important gaps in the related papers that study DDoS detection mechanisms is the fact that all of them consider either the packet volume transmitted from IoT nodes or the packet flow timing while under attack to be significantly (orders of magnitude) higher as compared to the IoT nodes' benign traffic. For instance, Meidan _et al._[22] presented an ML-based technique for detecting DDoS attacks on a dataset containing the benign and attack traffic of nine IoT nodes. Figure 1a shows the probability density of the benign and attack traffic packet volume of the IoT node with ID XCS7_1003 in the Meidan _et al._[22] paper. As we can see, the packet volume transmitted from that IoT node while under attack is, on average, 1604 times higher than the packet volume of the IoT node when there is no attack happening. Furthermore, Sharafaldin _et al._[23] presented a DDoS dataset and also an ML-based technique for detecting DDoS attacks. Their dataset also includes a slow-rate DDoS attack type. Figure 1b shows the probability density of the benign and attack traffic mean flow inter-arrival time (IAT), i.e., the mean time between two packets sent in the flow. As we can see, the average of the mean IAT for the attack traffic is approximately 11 times higher than for the benign traffic. These differences between the benign and attack traffic properties could give a huge benefit to ML models for detecting DDoS attacks. However, in this study, we consider more futuristic DDoS attacks in which attackers can mimic the behavior of the IoT nodes' benign traffic. 
In fact, given the huge number of IoT devices that are currently available, attackers could take control of millions of IoT nodes and camouflage their attack by sending fewer packets from those nodes, with timing similar to the benign traffic, while still disturbing the behavior of the victim server. This futuristic attack, which is the focus of this study, is much more sophisticated than traditional attacks, where the attacker can expose themselves by either sending many packets or slowing down the data transmission.

We have released an anonymized dataset containing real-trace data from an urban deployment of 4060 IoT devices that records their binary activity [24]. We further enhanced that dataset in our recent work [25] by adding the packet volume that each IoT device transmits at each timestamp where it is active. The basis for this emulation is grounded in our finding that real urban IoT benign traffic can be well modeled by a (truncated) Cauchy distribution (matching the observation of prior researchers that Ethernet traffic is well modeled by such a distribution [26]). We have also performed a preliminary analysis of neural network models, considering only one architecture in which each IoT node trains an individual neural network model without using the correlation information from other nodes [25].

In this work, we further enhance the generation of our training dataset, in which we also include the correlation information of the IoT nodes' activity in each recorded activity in the dataset. Our proposed "dataset+script" allows the injection of truncated-Cauchy-distributed attacks, and it also lets the user parametrize the difference between the benign and attack traffic volume. In this way, a diverse set of training scenarios can be effectively generated to improve the training of neural network models. All attack traffic volumes are tuned by a parameter that we call "\(k\)" that determines both the location and scale of the truncated Cauchy distribution for attack traffic volume generation.

We further extend our recent work by proposing four architectures to train neural network models for the IoT devices, considering all combinations of either using the correlation information of the IoT devices or not, and either (a) training one central model for all nodes or (b) training distributed individual models for each IoT device. By using the correlation information, each IoT node has access to the packet volume transmitted from other IoT nodes in addition to its own packet volume. These architectures are named multiple models with correlation (MM-WC), multiple models without correlation (MM-NC), one model with correlation (OM-WC), and one model without correlation (OM-NC). Furthermore, we considered five different neural network models, namely the multi-layer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM), transformer (TRF), and autoencoder (AEN). We extensively analyzed the performance of the proposed architectures using the mentioned neural network models to determine the best architecture/neural network model for detecting DDoS attacks on IoT devices. Our simulation results indicate that using the correlation information of the nodes significantly helps with detecting DDoS attacks, especially in the case that the attacker is camouflaging the attack by using the benign traffic packet volume distribution.
Given the huge number of IoT devices that could potentially be used in performing DDoS attacks, using the correlation information of all nodes results in a massive number of features that the neural network models need to learn from. Therefore, we investigated methods to actively select the nodes whose correlation information is used for training the neural network models. Our results indicate that using the Pearson correlation to actively select the nodes for training and prediction purposes results in good performance in terms of binary accuracy and F1 score for detecting DDoS attacks as compared to using the correlation information of all nodes. We make the dataset, attack emulation, and the training/testing scripts for the neural network models and architectures available as an open-source repository online at [https://github.com/ANRGUSC/correlation_aware_ddos_iot](https://github.com/ANRGUSC/correlation_aware_ddos_iot).

Fig. 1: Benign/attack packet volume (left) and mean flow inter-arrival time (right) probability density of real DDoS attacks

The rest of this paper is organized as follows: Section II presents the related works in this area. In Section III, we present the raw urban IoT dataset and its benign traffic characteristics; this section also introduces the modeling of the IoT benign traffic (defining the (truncated) Cauchy distribution) and builds the synthetic dataset, defining the parameter that regulates the relation/distance between benign and attack traffic. In Section IV, we illustrate the way we emulate attacks and also the procedure to create the general training dataset to be used for training, validating, and comparing different NN models and architectures to detect DDoS attacks deployed in the customized dataset. Section V presents the four architectures and five neural network models we used for this analysis. In Section VI, we discuss new methods for incorporating the correlation information of IoT nodes while respecting their constrained resources, by actively selecting IoT nodes to share their information in correlation-aware architectures. Section VII presents the evaluation and analysis of the introduced models. Lastly, Section VIII summarizes this work and proposes future research steps.

## II Related Works

Prior works have explored various machine learning models for detecting DDoS attacks. Most of the papers in this area considered transport layer DDoS attacks, where attackers send as many packets as possible to the victim server to disturb its behavior. Doshi _et al._ [27] generated their training data by setting up an environment with three IoT devices and simulating the Mirai botnet attack. They developed various classification models, such as random forests, K-nearest neighbors, support vector machines, and neural networks, to be run on network middleboxes such as routers, firewalls, or network switches to detect DDoS attacks, and concluded that almost all of the models could achieve 0.99 accuracy. Chen _et al._ [28] used 9 IoT devices on a university campus to collect data through IoT gateways and transmit the data to cloud servers via software-defined network (SDN) switches. They then implemented a decision tree to run on the IoT gateways and SDN controllers in order to detect and block DDoS attacks, achieving an F1 score of 0.99.
Similarly, Mohammed _et al._ [29] implemented a naive Bayes classification technique based on the NSL-KDD dataset on a central server and then tested their model using the traffic information captured from four SDN controllers, achieving an F1 score of 0.98. Syed _et al._ [30] proposed an application layer DoS attack detection framework that uses a machine learning-based method on Message Queuing Telemetry Transport (MQTT) brokers. They tested their framework using the average one-dependence estimator (AODE), C4.5 decision trees, and multi-layer perceptron (MLP) machine learning models on three virtual machines and achieved 0.99 accuracy. Meidan _et al._ [22] collected their training data by infecting nine commercial IoT devices and proposed the N-BaIoT anomaly detection method, which uses autoencoders to classify benign and attacked IoT traffic with 0.99 accuracy.

Here we also mention some of the works that studied slow-rate DDoS detection mechanisms. Yungaicela-Naula _et al._ [14] used CICDDoS 2017 [23] and CICDDoS 2019 [31] and utilized machine learning and deep learning methods such as support vector machine (SVM), K nearest neighbor (KNN), multi-layer perceptron (MLP), etc., to detect DDoS attacks. Their experiments showed that they could achieve up to 98% accuracy for traditional DDoS attacks and up to 95% accuracy for slow-rate DDoS attacks. Similarly, Nugraha and Murthy [32] proposed a hybrid CNN and LSTM neural network model that could achieve up to 99% accuracy on a dataset that they developed themselves for slow-rate DDoS attacks. Muraleedharan and Janet [33] used a random forest (RF) classifier to detect slow-rate DDoS attacks, and their model could achieve up to 99% accuracy. Cheng _et al._ [34] studied the performance of random forest, K nearest neighbor, naive Bayes, and support vector machine classifiers on a self-developed dataset for slow-rate DDoS attack detection on network switches and controllers. Their analysis showed that the ML models could achieve up to 99% accuracy. There are many other works that studied DDoS detection on IoT devices using machine learning methods, some of which are presented in Table I.

In all the related works discussed above, we observe that the datasets and features used for DDoS attack classification have very different properties for the attack and benign traffic, with attack traffic volume/timing orders of magnitude higher than the benign traffic volume/timing. However, in this work, we use the tunable parameter \(k\) that can tune the packet volume transmitted from the IoT devices, with regular timing as compared to the benign traffic, helping the attacker camouflage the attack in the benign traffic. Furthermore, most of the mentioned related works studied a central neural network model running on cloud servers, SDN switches, etc., while only a few studied distributed neural network models running on the IoT devices, and none provided a comparison of distributed versus central architectures. In this work, we extensively analyze both of these scenarios and compare their performance against each other. Furthermore, none of the papers mentioned above considered the correlation information of the IoT nodes in relation to each other while designing the DDoS detection mechanisms. In this paper, we also study architectures that are correlation-aware and consider the correlation information of the IoT nodes for detection purposes.
Moreover, our paper proposes and uses a large-scale IoT DDoS dataset with more than 4000 nodes, and we introduce a training dataset that incorporates the correlation information of the IoT nodes, in contrast to other papers that use datasets with a limited number of IoT nodes and without the correlation information of the nodes included. Table II presents an overview of datasets in this field with the respective number of IoT nodes used in creating each dataset.

## III Original and Benign Activity Datasets

The original (raw) data of the binary activity of the IoT nodes have been collected from IoT nodes deployed in an urban area.¹ The features provided in the raw dataset are the node ID, the latitude and longitude of the IoT node, and the binary activity status of the IoT device. A record appears in the raw dataset whenever there is an activity status change for an IoT node, i.e., after each change in the activity status of an IoT node, a new record is added to the raw dataset. The raw dataset has 4060 nodes with one month's worth of data and no missing data points.

Footnote 1: The source of this data, originally presented in [24], has been anonymized for privacy and security reasons.

As mentioned before, the raw dataset records the activity status changes of the nodes. This means that the nodes that have more changes in their activity will have a higher number of records in the raw dataset. Therefore, using the raw dataset for the purpose of training neural network models would bias the models toward the behavior and information of the nodes that have the highest number of activity changes. In order to address this issue, we provide a script that takes the raw dataset, with its records of activity status changes of the IoT nodes, and a time step, called \(t_{s}\), and generates a new dataset, called the benign dataset, that presents the activity status of each IoT node every \(t_{s}\) seconds. Therefore, the activity status of each IoT node will be present in the benign dataset every \(t_{s}\) seconds, which helps the neural network models learn the activity behavior of all IoT nodes at all time stamps, instead of just learning the behavior of the IoT nodes that have the most changes in their activity status. Furthermore, the Python script can also take a beginning and ending date for generating the benign dataset, in case the IoT node has constrained resources for training neural network models on the full benign dataset.

As mentioned above, the benign dataset only has the binary activity status of the IoT nodes. In order to enhance the dataset, we also add the packet volume that gets transmitted from the active IoT nodes at each time step. Meidan et al. [22] studied DDoS attack detection on 9 IoT nodes in the real world and presented a dataset that contains the packet volume transmitted from each IoT node every 10 seconds, both in the case that the nodes are under attack and in the case that they are not. We used the data related to a security camera IoT node with ID XCS7_1003 in the Meidan et al. [22] paper as the source for generating the benign/attack packet volume of the IoT nodes in our dataset. In the literature, various distributions have been used to estimate the packet volume of network traffic [52]. We analyzed 80 different distributions for estimating the packet volume of the IoT nodes based on the real information that we had for the security camera dataset provided in [22].
TABLE I: Selected papers with ML based methods for detecting DDoS attacks on IoT

| Reference | Dataset | Detection Method | Centralized/Distributed | Inference Device |
| --- | --- | --- | --- | --- |
| Doshi _et al._ [27] | Self-developed | RF, KNN, SVM, MLP | Centralized | Network middlebox |
| Chen _et al._ [28] | Self-developed | DT | Centralized | SDN controller |
| Syed _et al._ [30] | Self-developed | MLP, AODE, DT | Centralized | MQTT broker |
| Meidan _et al._ [22] | N-BaIoT [22] | AEN | Distributed | IoT node |
| Liu _et al._ [35] | Self-developed | RL | Centralized | SDN controller |
| Roopak _et al._ [36] | Self-developed | MLP, CNN, LSTM | Centralized | Unclear |
| Guntlakshmi _et al._ [37] | Self-developed | SVM | Centralized | Unclear |
| Mohammed _et al._ [29] | NSL-KDD [38] | NB | Centralized | SDN controller |
| Zekri _et al._ [39] | Self-developed | DT | Centralized | Cloud |
| Blaise _et al._ [40] | CTU-13 [41] | MLP, SVM, RF, LR | Centralized | Unclear |
| Soe _et al._ [42] | N-BaIoT [22] | MLP, DT, NB | Centralized | Unclear |
| Nugraha _et al._ [43] | CTU-13 [41] | MLP, CNN, LSTM | Centralized | Unclear |
| Kumar _et al._ [44] | Self-developed | RF, KNN, GNB | Centralized | IoT gateway |
| Yungaicela-Naula _et al._ [14] | CICDDoS [23], [31] | RF, SVM, KNN, MLP, CNN, GRU, LSTM | Centralized | Server |
| Cheng _et al._ [34] | Self-developed | SVM, NB, KNN, RF | Centralized | Controller and Switch |

Where MLP: Multilayer Perceptron, CNN: Convolutional Neural Network, LSTM: Long Short Term Memory, AEN: Autoencoder, RL: Reinforcement Learning, DT: Decision Tree, RF: Random Forest, KNN: K Nearest Neighbor, SVM: Support Vector Machine, AODE: Average One-Dependence Estimator, NB: Naive Bayes, GNB: Gaussian Naive Bayes, LR: Logistic Regression, GRU: Gated Recurrent Unit

TABLE II: Related papers with IoT datasets

| Reference | Date | Number of Nodes | IoT specific/General | Binary activity or Traffic Volume | Benign/Attack traffic |
| --- | --- | --- | --- | --- | --- |
| DARPA 2000 [45] | 2000 | 60 | general | traffic volume | both |
| CAIDA UCSD DDoS Attack 2007 [46] | 2007 | unclear | general | traffic volume | attack |
| Shiravi _et al._ [47] | 2012 | 24 | general | traffic volume | both |
| CICDDoS 2017 [23] | 2017 | 25 | general | traffic volume | both |
| Meidan _et al._ [22] | 2018 | 9 | IoT specific | traffic volume | both |
| CSE-CIC-IDS2018 on AWS [48] | 2018 | 450 | general | traffic volume | attack |
| CICDDoS 2019 [31] | 2019 | 25 | general | traffic volume | attack |
| The Bot-IoT Dataset (Univ. of UNSW) [49] | 2019 | unclear | IoT specific | traffic volume | both |
| Ullah _et al._ [50] | 2020 | 42 | IoT specific | traffic volume | both |
| Erhan _et al._ [51] | 2020 | 400 | general | traffic volume | both |
| Hekmati _et al._ [24] | 2020 | 4000 | IoT specific | binary activity | both |
| Hekmati _et al._ [25] | 2021 | 4060 | IoT specific | traffic volume | both |

Among the 80 different distributions that we analyzed, the Cauchy distribution fitted the packet volume activity of the security camera node best, in terms of having the minimum squared error. Since the Cauchy distribution has an unusual nature, being unbounded in value and not having a defined mean, we instead use a truncated Cauchy distribution with a low of 0 and a high of the maximum packet volume observed in the real data of the security camera IoT node.
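To make the distribution-selection step concrete, the following minimal sketch (not the exact script used in this work; the candidate list, bin count, and squared-error scoring details are illustrative assumptions) compares candidate distributions from `scipy.stats` by the squared error between each fitted PDF and the empirical histogram of the observed packet volumes:

```python
import numpy as np
from scipy import stats

def best_fit_distribution(samples, candidates=("cauchy", "norm", "lognorm", "gamma")):
    """Fit each candidate distribution to the samples and score it by the
    squared error between its PDF and the empirical density histogram."""
    hist, edges = np.histogram(samples, bins=100, density=True)
    centers = (edges[:-1] + edges[1:]) / 2.0
    errors = {}
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(samples)        # maximum-likelihood parameter fit
        pdf = dist.pdf(centers, *params)  # fitted PDF at the bin centers
        errors[name] = float(np.sum((hist - pdf) ** 2))
    best = min(errors, key=errors.get)
    return best, errors
```

Running such a comparison over the security camera packet volumes is what singles out the Cauchy family; the truncation to \([0, m_{b}]\) is then applied on top of the fitted location and scale.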
Our finding that the truncated Cauchy distribution is the best distribution for estimating the packet volume of the IoT nodes is also compatible with previous findings on modeling network traffic [26]. In order to generate the packet volume in the benign dataset, we generate the packet volume transmitted from an IoT node whenever the node is active based on the fitted truncated Cauchy distribution. On the other hand, when the node is not active, the packet volume transmitted from the IoT node is zero. Note that in this paper we present an event-driven IoT dataset, which means that the IoT nodes transmit packets whenever they are active. Table III presents a few sample data points in the benign dataset.

TABLE III: Sample Data Points in Benign Dataset

| NODE | LAT | LNG | TIME | ACTIVE | PACKET |
| --- | --- | --- | --- | --- | --- |
| 1 | 33 | 40 | 2021-01-01 23:00:00 | 0 | 0 |
| 1 | 33 | 40 | 2021-01-01 23:10:00 | 0 | 0 |
| 1 | 33 | 40 | 2021-01-01 23:20:00 | 1 | 9 |
| 1 | 33 | 40 | 2021-01-01 23:30:00 | 1 | 11 |

In order to generate the packet volume of the IoT nodes when they go under attack, we define a new truncated Cauchy distribution with the following parameters:

\[x_{a} = (1+k)\cdot x_{b} \tag{1}\]
\[\gamma_{a} = (1+k)\cdot \gamma_{b} \tag{2}\]
\[m_{a} = (1+k)\cdot m_{b} \tag{3}\]

where \(x_{b}\), \(\gamma_{b}\), and \(m_{b}\) refer to the location, scale, and maximum packet volume of the truncated Cauchy distribution of the benign traffic, respectively, while \(x_{a}\), \(\gamma_{a}\), and \(m_{a}\) are the location, scale, and maximum packet volume of the generated truncated Cauchy distribution of the attack traffic, respectively. \(k\geq 0\) is the tunable parameter for generating packet volumes with higher location, scale, and maximum packet volume as compared to the benign traffic. Note that when the DDoS attack is performed by the attacker, the IoT nodes become active and transmit packets, whether or not they were active previously.

As mentioned before, the related works in this area only consider cases where, during the DDoS attack, the IoT nodes are either sending as many packets as possible or slowing down the data transmission to the victim server in order to disturb its behavior. However, in this work, we consider the case where the attacker can perfectly camouflage the attack within the benign traffic. More specifically, for low \(k\) values, the attacker is trying to camouflage with the background traffic, since it is sending packets using the benign traffic pattern of the IoT nodes, which makes the DDoS attack harder to detect. On the other hand, when the attacker uses a higher value of \(k\), larger packet volumes will be transmitted to the victim server, but the attack will also be easier to detect. While one could potentially use three different parameters for creating the new \(x_{a}\), \(\gamma_{a}\), and \(m_{a}\), we use only one parameter \(k\) for simplicity.

In order to better understand the behavior of the tunable parameter \(k\) and to compare the truncated Cauchy estimate against the real packet volume distribution, figures 2(a) and 2(b) show the probability density function (PDF) and complementary cumulative distribution function (CCDF) for the real empirical benign traffic data (in blue) alongside the truncated Cauchy distribution with \(k\) values of 0, 2, and 8. With the \(k\) value set to 0, the packet volume generated by the truncated Cauchy distribution should be similar to the real packet volume distribution. As we can see in figures 2(a) and 2(b), the truncated Cauchy distribution with \(k=0\) (in red) is well fitted to the real empirical packet volume distribution (in blue).
By increasing the value of the tunable parameter \(k\), we are essentially increasing the location, scale, and maximum packet volume of the truncated Cauchy distribution, which results in higher probabilities for larger packet volumes. Using high values of \(k\) results in transferring huge amounts of packets during the attack; in this scenario, the attacker can disturb the behavior of the victim server with a smaller number of IoT nodes. With lower values of \(k\), although a lower packet volume is generated by each IoT node during the attack, the attacker can use a large number of IoT nodes to transmit packets to the victim server and disturb its behavior.

Fig. 2: Real packet volume distribution vs truncated Cauchy distribution

## IV Attack Mechanism

This section presents how synthetic DDoS attacks are generated using the IoT nodes, given the packet volume distribution presented in Section III. Furthermore, the procedure for generating the general training dataset is also discussed in this section.

### _Generating DDoS attack_

In this paper, we synthetically generate a DDoS attack on the IoT nodes by setting all nodes which are under attack to active status for the duration of the attack and assigning the packet volume to be transmitted at each time slot. Recall that since we have an event-driven IoT dataset, when the nodes go under attack, they start transmitting packets, whether or not they were previously active. In order to generate the packet volume transmitted while the nodes are under attack, we use equations (1), (2), and (3) to define the truncated Cauchy distribution for a given tunable parameter \(k\) and sample in i.i.d. fashion from that distribution.

The DDoS attack performed by the attacker has four different parameters to be set: the start time of the attack (\(a_{s}\)), the duration of the attack (\(a_{d}\)), the ratio of the nodes to be used by the attacker (\(a_{r}\)), and the tunable attack packet distribution parameter (\(k\)). In this way, we can synthesize various attacks that an attacker could potentially perform. Therefore, the neural network models will be able to learn the behavior of the nodes in different attack scenarios and predict which IoT nodes are under attack at what time.
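The following sketch illustrates this attack-generation step, assuming a benign dataset laid out as in Table III; the column names and the use of timestamp/timedelta types for \(a_{s}\) and \(a_{d}\) are assumptions of this illustration rather than the exact open-source script:

```python
import numpy as np
from scipy.stats import cauchy

def sample_truncated_cauchy(loc, scale, high, size, low=0.0, rng=None):
    """Inverse-CDF sampling from a Cauchy distribution truncated to [low, high]."""
    rng = np.random.default_rng(rng)
    f_low, f_high = cauchy.cdf([low, high], loc=loc, scale=scale)
    return cauchy.ppf(rng.uniform(f_low, f_high, size), loc=loc, scale=scale)

def inject_attack(benign_df, a_s, a_d, a_r, k, x_b, gamma_b, m_b, rng=None):
    """Mark a ratio a_r of the nodes as attacked from a_s to a_s + a_d and
    overwrite their packet volumes with samples from the (1 + k)-scaled
    truncated Cauchy distribution of Eqs. (1)-(3)."""
    rng = np.random.default_rng(rng)
    df = benign_df.copy()
    nodes = df["NODE"].unique()
    victims = rng.choice(nodes, size=int(a_r * len(nodes)), replace=False)
    mask = df["NODE"].isin(victims) & (df["TIME"] >= a_s) & (df["TIME"] < a_s + a_d)
    x_a, gamma_a, m_a = (1 + k) * x_b, (1 + k) * gamma_b, (1 + k) * m_b
    df.loc[mask, "ACTIVE"] = 1  # attacked nodes transmit regardless of prior state
    df.loc[mask, "PACKET"] = sample_truncated_cauchy(x_a, gamma_a, m_a, int(mask.sum()), rng=rng)
    df["ATTACKED"] = mask.astype(int)
    return df
```

Here \(a_{s}\) would be a timestamp and \(a_{d}\) a time delta (e.g., `pd.Timedelta(hours=8)`), so the same routine can synthesize any of the attack configurations described above.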
### _Generating General Training Dataset_

Here we present the procedure for creating the _general_ training dataset that contains all features that may be used by the different architectures proposed in Section V-A. By performing the attack on the benign dataset, we generate a labeled general training dataset for supervised machine learning. At each sample of the general training dataset for a specific IoT node \(i\), we store the node ID, the time stamp, and the packet volume that is being transmitted through that node. In addition to the packet volume of node \(i\) at each sample of the training dataset, we also store the packet volume of all other nodes. This means that at each time stamp of the general training dataset, we have access to the packet volume that is being transmitted through node \(i\) and through all other nodes. Furthermore, in order to distinguish the records of each node, we also add one-hot encoding to the general training dataset. Finally, we have the label of each sample, which shows whether that sample represents the node being under attack or not. Note that having the record of the packet volumes transmitted from all nodes at each sample of the general training dataset helps capture the correlation behavior of the nodes and is used in training the correlation-aware architectures. As mentioned before, we propose four different architectures that either use the correlation information and one-hot encoding in training or not. This matter is elaborated further in Section V-A. Table IV presents a sample training dataset containing the information of only two nodes, where N_1 and N_2 represent nodes 1 and 2, respectively, and P_1 and P_2 represent the packet volume of nodes 1 and 2, respectively.

TABLE IV: Sample Data Points in General Training Dataset

| Node | Time | N_1 | N_2 | P_1 | P_2 | Attacked |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2021-01-01 23:20:00 | 1 | 0 | 12 | 9 | 0 |
| 1 | 2021-01-01 23:30:00 | 1 | 0 | 13 | 8 | 1 |
| 2 | 2021-01-01 23:20:00 | 0 | 1 | 9 | 12 | 0 |
| 2 | 2021-01-01 23:30:00 | 0 | 1 | 8 | 13 | 1 |

Note that, in order to detect the DDoS attack at each timestamp, we also need the information of the previous time slots to understand the behavior of the node through time. Therefore, the training dataset can be considered a time-series dataset. Thus, we stack the past \(n_{t}\) entries of each sample in the training dataset to predict the attack status of that specific sample, as sketched below. In the Python script provided, one can also tune \(n_{t}\) based on the resources available to the IoT devices. Higher values of \(n_{t}\) help the neural network model better understand the behavior of the IoT devices, but they also require more computation resources for training the neural network models.
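A minimal sketch of this stacking step, assuming per-node, time-ordered feature and label arrays (the function name and the convention that a window ends at the predicted time slot are illustrative assumptions):

```python
import numpy as np

def stack_windows(features, labels, n_t=10):
    """Stack the past n_t feature rows so each sample has shape (n_t, n_features)
    and is labeled with the attack status of its final time slot."""
    X = np.stack([features[t - n_t + 1 : t + 1] for t in range(n_t - 1, len(features))])
    y = np.asarray(labels[n_t - 1:])
    return X, y
```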
## V Defense Mechanism

In this section, we present the architectures and neural network models that we used for detecting DDoS attacks. The performance comparison of these architectures and neural network models is presented in Section VII.

### _Architectures_

We introduce four different architectures for training the neural network models, based on whether the correlation information of the nodes' activity is used or not, and on whether distributed individual models are trained for each IoT node or one central model is trained for all nodes. Here we present the details of each architecture:

* **Multiple models without correlation (MM-NC)**: In this architecture, we train individual neural network models for each IoT node; therefore, each node has one model trained just for itself. Furthermore, we do not use the correlation information of the nodes' activity, i.e., we only use the packet volume of each node through time in order to train the models and do not include the packet volume of other nodes in the training dataset. Since we are training individual models for each node, we also do not use the one-hot encoding features from the general training dataset in this architecture.
* **Multiple models with correlation (MM-WC)**: In this architecture, we train individual neural network models for each IoT node; therefore, each node has one model trained just for itself. Furthermore, we use the correlation information of the nodes' activity, i.e., in addition to each node's packet volume, we also use the packet volume of the other nodes to capture the correlation information of the nodes' activity in training. Since we are training individual models for each node, we also do not use the one-hot encoding features from the general training dataset in this architecture.
* **One model without correlation (OM-NC)**: In this architecture, we train one neural network model for all IoT nodes. Therefore, there is a central node/server that trains the neural network model, and all IoT nodes use that model for detecting DDoS attacks. Furthermore, we do not use the correlation information of the nodes' activity, i.e., we only use the packet volume of each node through time in order to train the model. Since we are training one central model for all nodes, in order to distinguish the information of each node from the other nodes, we use the one-hot encoding from the general training dataset.
* **One model with correlation (OM-WC)**: In this architecture, we train one neural network model for all IoT nodes. Therefore, there is a central node/server that trains the neural network model, and all IoT nodes use that model for detecting DDoS attacks. Furthermore, we use the correlation information of the nodes' activity, i.e., in addition to each node's packet volume, we also use the packet volume of the other nodes to capture the correlation information of the nodes' activity in training. Since we are training one model for all nodes, in order to distinguish the information of each node from the other nodes, we use the one-hot encoding from the general training dataset.

Using the correlation information of the nodes in the MM-WC and OM-WC architectures could potentially help the neural network models better predict the DDoS attack, since attackers usually use many IoT devices for performing DDoS attacks. Using one model in the OM-WC and OM-NC architectures can potentially learn the general behavior of the nodes' activity and inactivity better, while the MM-WC and MM-NC architectures, which train individual models for each node, can better personalize the model to each IoT node based on its behavior.

### _Neural Network Models_

For each architecture discussed in Section V-A, we use five different neural network models to perform binary classification for detecting the DDoS attack on the IoT nodes:

* **Multilayer Perceptron (MLP)**: This is the simplest feed-forward artificial neural network, consisting of one input layer, one output layer, and one or more hidden layers [53]. In this paper, the input layer is followed by one dense layer with 5 neurons and Rectified Linear Unit (ReLU) activation. In the MM-WC and OM-WC architectures, the dense layer also has a 30% dropout rate, while in the MM-NC and OM-NC architectures, the dense layer does not perform dropout. The output is a single neuron with the Sigmoid activation function.
* **Convolutional Neural Network (CNN)**: CNNs are similar to MLP models but use the mathematical convolution operation in at least one of their hidden layers instead of general matrix multiplication [54]. In this paper, the model's input layer is followed by one 1-dimensional convolution layer with 5 filters, a kernel size of 3, and the ReLU activation function. In the MM-WC and OM-WC architectures, the CNN layer also has a 30% dropout rate, while in the MM-NC and OM-NC architectures, the CNN layer does not have a dropout rate. The CNN layer is followed by a 1-dimensional max-pooling layer with a pool size of 2, and then a flatten layer. Finally, the output is a single neuron with the Sigmoid activation function.
* **Long Short-Term Memory (LSTM)**: This is a neural network model that can process a sequence of data and is a suitable choice for time-series datasets [55]. In this paper, the model's input layer is followed by one LSTM layer with 4 units and a hyperbolic tangent activation function. In the MM-WC and OM-WC architectures, the LSTM layer also has an L2 regularization factor of 0.3, while in the MM-NC and OM-NC architectures, the LSTM layer does not have L2 regularization. Finally, the output is a single neuron with the Sigmoid activation function.
* **Transformer (TRF)**: Transformers are neural network models that rely solely on the attention mechanism to calculate dependencies between the input and output [56]. This model has an encoder and a decoder, but for the purpose of binary classification, we only need the encoder part. In this model, the input layer is followed by a multi-head attention layer with one attention head, and the size of each attention head for query and key is 1. In the MM-WC and OM-WC architectures, the multi-head attention layer also has an L2 regularization factor of 0.3, while the MM-NC and OM-NC architectures do not have L2 regularization. The multi-head attention layer is followed by a global average pooling layer. Finally, the output is a single neuron with the Sigmoid activation function.
* **Autoencoder (AEN)**: Autoencoders are a type of neural network model that consists of two parts: the first part tries to encode the input into a compressed representation, and the second part tries to decode the compressed representation back into the original input [57]. In effect, this model tries to learn a meaningful compressed representation of the input. In this paper, the input layer is followed by an encoder that consists of five dense layers with 256, 128, 64, 32, and 16 neurons and the ReLU activation function, each followed by batch normalization. The decoder consists of five dense layers with 16, 32, 64, 128, and 256 neurons and the ReLU activation function, each followed by batch normalization. Finally, the output dimension is the same as that of the input layer. For the binary classification, the encoder is followed by a dense layer with 8 neurons and a ReLU activation function, and the output is a single neuron with a Sigmoid activation function. By using the AEN model, we essentially train the encoder to encode our input dataset into a latent space. Then, we design a classification model that takes this latent space as input and predicts whether the latent features show malicious behavior or not.

Figure 3 also presents the architectures discussed above.

Fig. 3: Neural Network Models
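As an illustration of how one of these models can be assembled, below is a hedged Keras sketch of the LSTM variant following the description above; the optimizer, loss, and metric choices are assumptions not specified in the text, and `output_bias` anticipates the initial-bias setting discussed next:

```python
import tensorflow as tf

def build_lstm_detector(n_t, n_features, l2=0.3, output_bias=None):
    """One 4-unit LSTM layer with tanh activation (L2-regularized in the
    correlation-aware variants) followed by a single sigmoid output neuron."""
    bias_init = ("zeros" if output_bias is None
                 else tf.keras.initializers.Constant(output_bias))
    reg = tf.keras.regularizers.l2(l2) if l2 else None
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_t, n_features)),
        tf.keras.layers.LSTM(4, activation="tanh", kernel_regularizer=reg),
        tf.keras.layers.Dense(1, activation="sigmoid", bias_initializer=bias_init),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["binary_accuracy", tf.keras.metrics.AUC(name="auc")])
    return model
```

For the MM-NC and OM-NC variants, one would pass `l2=0` to drop the regularization, per the description above.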
Given that the correlation-aware architectures, i.e., MM-WC and OM-WC, have access to the correlation information of the IoT nodes, we used regularization techniques such as dropout and L2 regularization to address the overfitting issues that we observed while training the neural network models. It is worth mentioning that the parameters and hyperparameters for designing/training the neural network models have been selected through a grid search to provide the highest performance. Furthermore, since we are dealing with an unbalanced dataset for binary classification, we should weight the minority class more heavily in training the neural network models and also intelligently set the initial bias of the layers. We used the practical method presented in [58] to set the class weights and the initial bias. The following formulation defines the weights for each class and the initial bias of the layers:

\[w_{n} = \frac{pos+neg}{2\,neg} \tag{4}\]
\[w_{p} = \frac{pos+neg}{2\,pos} \tag{5}\]
\[b_{0} = \log\left(\frac{pos}{neg}\right) \tag{6}\]

where \(w_{n}\) and \(w_{p}\) represent the weights for the negative (not attacked) and positive (attacked) classes, respectively, \(b_{0}\) represents the initial bias of the layers, and \(neg\) and \(pos\) represent the total numbers of negative and positive samples, respectively.
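In code, Eqs. (4)–(6) amount to the following small helper (a sketch; the dictionary format matches what Keras' `fit(class_weight=...)` expects, and \(b_{0}\) can be passed as the `output_bias` of the model sketch above):

```python
import numpy as np

def class_weights_and_bias(labels):
    """Compute the class weights and output-layer initial bias of Eqs. (4)-(6)."""
    labels = np.asarray(labels)
    pos = int(labels.sum())          # number of attacked samples
    neg = int(len(labels) - pos)     # number of benign samples
    w_n = (pos + neg) / (2.0 * neg)  # Eq. (4): weight of the negative class
    w_p = (pos + neg) / (2.0 * pos)  # Eq. (5): weight of the positive class
    b_0 = float(np.log(pos / neg))   # Eq. (6): initial output bias
    return {0: w_n, 1: w_p}, b_0
```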
## VI IoT Nodes with Constrained Resources

The architectures introduced in Section V-A that use the correlation information of the nodes, i.e., MM-WC and OM-WC, need to combine the information of _all_ nodes in order to train the neural network models. In fact, the packet volume of _all_ of the nodes is used as input to the neural network models. Using the correlation-aware architectures therefore results in a massive number of input features to be fed to the neural network models, making them infeasible to use in real-world scenarios with a huge number of IoT devices that have constrained resources and computation power. In order to address this issue, we propose that each IoT node uses a fraction of the other IoT nodes' information as input to its neural network model. As we can see in figure 4, we can either use all of the nodes' traffic information (figure 4(a)) or use selected nodes' information (figure 4(b)) to be shared with a specific node for the purpose of training neural networks and predicting DDoS attacks. By actively selecting nodes, we can greatly save time and computation in training neural networks and predicting DDoS attacks on IoT nodes with constrained resources.

Fig. 4: Using all nodes vs selected nodes for sharing information in correlation-aware architectures

In this section, we provide the following five heuristic solutions for selecting the important IoT nodes that share their information with a specific IoT node in order to train the neural network models when using the architectures that utilize the correlation information:

* **All nodes**: In this method, we use the packet volume of all nodes for training the neural network models. This is the default method discussed in previous sections, and we expect it to have the best performance, due to using the information of all nodes, at the cost of needing high computation power.
* **Pearson correlation**: In this method, we use the Pearson correlation to measure the correlation of the nodes' activity behavior based on the packet volume that they transmit throughout the day. More precisely, for each node \(i\), we calculate the Pearson correlation of node \(i\) with all other nodes and find the top \(n\) nodes that have the highest correlation with node \(i\). Finally, in training the correlation-aware architectures for each node, we only use the information of the top \(n\) nodes that have the highest Pearson correlation with node \(i\), in addition to the information of node \(i\) itself (a minimal sketch of this selection step is shown after this list).
* **SHAP**: In this method, we use SHapley Additive exPlanations (SHAP) [59] to determine which features are the most important in deciding whether a node is under attack or not. More precisely, after training a neural network model for node \(i\) using the information of all nodes, we run the SHAP method to determine the top \(n\) features for making the decision for node \(i\). Then, we train a new neural network model that only uses the information of those top \(n\) features. Note that, in this method, we always need to train the neural network model on all of the nodes first, determine which nodes provide the most information for the detection by using SHAP, and then retrain the neural network models using the top \(n\) features. This is not a practical solution for use in the real world, but we implemented this method to compare it against the other solutions.
* **Nearest Neighbor**: In this method, we use the Euclidean distance between the nodes as a proxy for the correlation of the nodes' activity behavior. More precisely, for each node \(i\), we calculate the Euclidean distance of node \(i\) to all other nodes and find the top \(n\) nodes that are closest to node \(i\). Finally, in training the correlation-aware architectures for each node, we only use the information of the top \(n\) closest nodes in addition to the information of node \(i\).
* **Random**: In this method, in order to train the correlation-aware architectures for node \(i\), we randomly select \(n\) other nodes and use their information, in addition to node \(i\)'s information, to train the neural network models. We then repeat this process the desired number of times and calculate the average of the evaluation metrics over all runs. The random method gives us a baseline to compare against the other solutions mentioned above.
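As referenced in the Pearson correlation item above, a minimal sketch of that selection step (assuming the packet volumes are arranged in a DataFrame with one column per node ID and one row per time step; this layout is an assumption of the illustration):

```python
def top_correlated_nodes(volume_df, node, n=4):
    """Return the n node IDs whose packet-volume series has the highest
    Pearson correlation with the given node's series."""
    corr = volume_df.corr(method="pearson")[node].drop(node)
    return corr.nlargest(n).index.tolist()
```

Training the MM-WC model for `node` then uses only these columns plus the node's own, which is how the 5-node variants evaluated in Section VII are formed.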
## VII Neural Network Models and Architectures Evaluation

In this section, we evaluate the proposed neural network models and architectures through extensive simulations analyzing each architecture/model's performance in different scenarios.

### _Experiment Setup_

In our simulations, we used a time step of 10 minutes, i.e., \(t_{s}=600\) seconds, for creating the benign dataset using the method presented in Section III, but one could use other time steps via the script provided. The introduced enhanced dataset has 4060 IoT nodes; in this subsection, we randomly selected 50 nodes for the simulations. Here we present the activity analysis of the full dataset with 4060 nodes against the 50 random nodes that we chose from the dataset for the purpose of evaluating the neural network models. Figure 5 presents the mean number of active nodes in the benign dataset versus the time of day, considering all nodes and also the 50 randomly selected nodes. As we can see, up to 70% of the nodes get activated around the middle of the day, but by midnight only about 20% of the nodes are active. We can also see that the 50 randomly selected IoT nodes have activity behavior similar to that of all nodes.

Fig. 5: Active Nodes Percentage vs Time

Figure 6 shows the probability density function of the nodes' mean activity/inactivity throughout the day (from 8 AM to 8 PM) and night (from 8 PM to 8 AM) for all nodes and also for the 50 randomly selected nodes. We observe that during the day nodes are primarily active for around 100 minutes and mostly inactive for around 80 minutes. During the night, nodes are primarily active for around 80 minutes, but they are mostly inactive for around 220 minutes. Another takeaway from figure 6 is that the activity and inactivity behavior of the 50 randomly selected nodes in the day and night is also similar to the behavior of all nodes. Therefore, based on our observations from figures 5 and 6, the 50 randomly selected IoT nodes closely represent the behavior of all nodes in the dataset, and we can safely extend our simulation findings from the results that we get by training the neural network models and architectures over the 50 randomly selected nodes.

Fig. 6: Probability density function (PDF) of nodes' mean activity/inactivity for day and night

The whole dataset has one month's worth of data, and in this simulation, we used 4, 1, and 3 different days of the dataset for the training, validation, and testing of the neural network models, respectively. As discussed in Section IV-A, we can synthesize the DDoS attacks by providing the attack properties to the Python script. The properties of the DDoS attacks used in this simulation are the following:

* The attacks can start at 2 AM, 6 AM, and 12 PM. We chose different start times to make sure the models get trained to predict the DDoS attacks even at the highly trafficked times of the day, like noon, and do not have a bias towards the time of day at which the DDoS attacks may start.
* The duration of the attacks can be 4 hours, 8 hours, or 16 hours. We used different attack durations to make sure the model can predict both short- and long-duration DDoS attacks.
* The ratio of the nodes that go under attack can be selected from 0.5 and 1. We chose different attack ratios due to the fact that sometimes the attackers do not use all the compromised IoT devices to perform the DDoS attack.
* The attack parameter \(k\), which tunes the packet volume transmitted from the IoT nodes that go under attack, can be chosen from 0, 0.1, 0.3, 0.5, 0.7, and 1. When \(k\) is close to 0, the attacker is using the benign traffic distribution to transmit packets to the victim server, which makes the attack hard to detect. On the other hand, when \(k\) is close to 1, the packet volume transmitted from the attacked nodes is higher, so more harm is caused to the victim server, but the attack is potentially easier to detect.

Note that, in order to generate the attack and general training datasets, we use the combination of all possible attack scenarios from the values of the attack properties discussed above, in order to consider all possible combinations for training the neural network models. In order to generate the training dataset, we considered a time window of 10, i.e., \(n_{t}=10\), which means the neural network models make predictions based on the information of the past 10 time slots of each sample. After generating the training dataset, it was shuffled so as not to have prediction bias towards the time of day.
All proposed neural network models have been trained for 3 epochs with a batch size of 32.

### _Binary Classification Metrics_

In this subsection, we present the performance of the different architectures and neural network models in terms of their binary accuracy, F1 score, and area under the curve (AUC) versus the tunable parameter \(k\), over the testing dataset. Furthermore, we also present the receiver operating characteristic (ROC) curve for \(k=0\) to compare the performance of the different architectures and neural network models in the hardest scenario of detecting DDoS attacks, namely when attackers use the benign traffic distribution to generate packets. Note that, in order to calculate the mentioned performance metrics for each value of \(k\), we calculate the average of the model's prediction performance over all other attack properties, such as the attack start time, duration, and the ratio of the nodes that go under attack. With these figures, we can effectively compare the performance of the different architectures and neural network models for each value of \(k\) and conclude which architecture/model performs best in each scenario.
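A sketch of this per-\(k\) averaging is shown below; the column names `k`, `y_true`, and `y_prob`, and the 0.5 decision threshold, are assumptions of this illustration:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def metrics_per_k(results):
    """Average binary accuracy, F1 score, and AUC over all attack properties
    for each value of the tunable parameter k."""
    rows = []
    for k, grp in results.groupby("k"):
        y_pred = (grp["y_prob"] >= 0.5).astype(int)
        rows.append({
            "k": k,
            "binary_accuracy": accuracy_score(grp["y_true"], y_pred),
            "f1": f1_score(grp["y_true"], y_pred),
            "auc": roc_auc_score(grp["y_true"], grp["y_prob"]),
        })
    return pd.DataFrame(rows)
```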
Figure 7 presents the performance of the multiple models with correlation architecture, i.e., MM-WC, across the different neural network models discussed in Section V-B. As we can see, the LSTM neural network model has superior performance compared to the other models in almost all metrics, with an F1 score between 0.82 and 0.87. TRF has a slightly lower performance than the LSTM model. Following them, CNN and MLP perform best, with F1 scores between 0.8 and 0.83. AEN has the worst performance for the MM-WC architecture, with an F1 score between 0.39 and 0.45. This shows us that AEN cannot learn the behavior of the benign traffic due to the many input features of the MM-WC architecture. This matter is elaborated on at the end of this subsection.

Fig. 7: Compare different neural network models' performance by using the multiple models with correlation (MM-WC) architecture

Figure 8 presents the performance of the multiple models without correlation architecture, i.e., MM-NC, across the different neural network models. As we can see, the performance of all neural network models increases with higher values of \(k\). The LSTM neural network model has superior performance compared to the other models, with an F1 score between 0.35 and 0.85. Following the LSTM model, CNN, MLP, and TRF do best, with similar performance. Again, AEN has the worst performance for the MM-NC architecture compared to the other models, with an F1 score between 0.51 and 0.65. One thing to note is that AEN has better performance using the MM-NC architecture than the MM-WC architecture. Furthermore, we observe that AEN shows superior performance for the case \(k=0\) compared to the other models for the F1 score metric, but does similarly or worse for the other metrics, i.e., binary accuracy, AUC, and the ROC curve. The improved performance of the AEN model in the MM-NC architecture compared to the MM-WC architecture shows that fewer input features are more suitable for the AEN model, which can then better learn the behavior of the benign traffic and compare it to the attack traffic for DDoS attack detection.

Figure 9 presents the performance of the one model with correlation architecture, i.e., OM-WC, across the different neural network models. As we can see, the TRF neural network model has superior performance compared to the other models, with an F1 score between 0.82 and 0.85. Following that, the LSTM model performs best, with an F1 score between 0.79 and 0.81. Finally, MLP and CNN show similar performance, with F1 scores between 0.76 and 0.78. Again, AEN has the worst performance for the OM-WC architecture. This figure, similar to the MM-WC case, supports the fact that AEN cannot handle many input features to the model. We can also observe that the TRF model can handle more complex architectures, such as the OM-WC architecture, where there are many input features to the neural network model, better than the other models.

Figure 10 presents the performance of the one model without correlation architecture, i.e., OM-NC, across the different neural network models. As we can see, the performance of all neural network models increases with higher values of \(k\). We can again observe that the LSTM neural network model has superior performance compared to the other models, with an F1 score between 0.4 and 0.86. Following that, CNN, TRF, and MLP show similar performance, with F1 scores between 0.39 and 0.85. Again, AEN has the worst performance for the OM-NC architecture. To summarize the main observations:

* Although we see worse performance for the models in the architectures that do not use correlation information, the metrics of the models improve as we go from low values of \(k\) to higher values of \(k\). This is due to the fact that a higher value of \(k\) means the attackers are using a higher packet volume for generating the DDoS attacks, which makes it easier for the neural network models to detect the attacks even without using the correlation information of the other nodes' activity.
* The AEN model performs worst compared to all other models in almost all scenarios. This is mainly attributed to the fact that, for training the autoencoder model, we use the benign traffic to learn the latent space attributed to the behavior of the nodes which are not under attack and then compare that latent space against the situations where nodes are under attack. Since we are training on only four days' worth of data, the autoencoder cannot fully learn a meaningful latent space of the benign behavior of the nodes. The worst performance for the AEN model is in the MM-WC architecture. In the MM-WC architecture, we have very few samples, since we only use the samples of each IoT node individually, as compared to the OM-WC and OM-NC architectures, where we use the samples of all nodes. Furthermore, the MM-WC architecture also has a considerably large number of input features due to using the correlation information. Therefore, the AEN model has a hard time learning the behavior of the benign traffic and distinguishing it from the attack traffic. We used only 4 days' worth of data for training the models, but we expect that with an increasing number of training days, the AEN model would also perform better by taking in more samples of the benign traffic.

Since LSTM and TRF perform better than the other models, here we also present the performance of using either the LSTM or TRF model across the different architectures. Figure 11 presents the performance of the LSTM model across the different architectures. As we can see, the architectures that do not use correlation information, i.e., MM-NC and OM-NC, show poor performance, with an F1 score around 0.35 for low values of \(k\).
Their performance increases with higher values of \(k\), reaching an F1 score around 0.85, which is similar to the performance of the architectures that use correlation information, i.e., MM-WC and OM-WC. Furthermore, we can also observe that the MM-WC architecture performs the best for almost all values of \(k\) as compared to the other architectures.

Fig. 11: Compare different architectures' performance by using the LSTM neural network model

Figure 12 presents the performance of the TRF model across the different architectures. As we can see, the architectures that do not use correlation information, i.e., MM-NC and OM-NC, show poor performance, with an F1 score around 0.4 for low values of \(k\). Their performance increases with higher values of \(k\), reaching an F1 score around 0.82, which is similar to the performance of the architectures that use correlation information, i.e., MM-WC and OM-WC. Furthermore, we can also observe that both the MM-WC and OM-WC architectures show similarly good performance for all values of \(k\) as compared to the other architectures.

Fig. 12: Compare different architectures' performance by using the TRF neural network model

All in all, we observe that the LSTM/MM-WC and TRF/OM-WC combinations provide the best, and also similar, performance. Depending on the application, one could use either of these models. In this paper, we prefer the LSTM/MM-WC architecture, due to having distributed/individual models for each IoT node, over the TRF/OM-WC architecture, where we only have one central model. Having a central model is not tolerant to failure, and the IoT system could be hurt if that central model fails to perform the DDoS detection. On the other hand, with a distributed/individual model for each IoT node, the IoT system can survive even if some of the nodes fail to perform the DDoS detection. In the rest of the paper, we will only consider LSTM/MM-WC for further analysis.

### _DDoS Attack Prediction Analysis_

In this subsection, we present the DDoS attack prediction analysis using the LSTM model and MM-WC architecture, to see how fast and how correctly our trained model could predict attacks in the real world. Figures 13(a) and 13(b) show the attack prediction for the cases \(k=0\) and \(k=1\), respectively. The scenario that we selected for this figure is that the attacks start at 12 PM, which is the busiest time of the day, when most of the nodes are active, and the attack lasts for 16 hours. Each figure shows the ratio of the nodes that are under attack versus the time of day. The ground truth for attacks (labeled as True, in blue) shows what ratio of the nodes are truly under attack. The model's true positive prediction (labeled as TP, in orange) shows the ratio of the nodes that are correctly predicted by the model to be under attack. The model's false positive prediction (labeled as FP, in green) shows the ratio of the nodes that are mistakenly predicted by the model as being under attack. As we can see, for both \(k=0\) and \(k=1\), the model correctly predicts the nodes that are under attack with high accuracy. For the case of \(k=1\), we see a slightly better prediction of the attack while it is happening. This is due to the fact that for higher values of \(k\), the attacker is sending a higher packet volume as compared to lower values of \(k\), and it is easier for the model to detect the attack. Based on our analysis, the LSTM model using the MM-WC architecture could detect 100% of the attack sessions described in Section IV-A.
In fact, it could predict at least one of the IoT nodes that are truly under attack to have abnormal behavior. Furthermore, on average, 88% of the IoT nodes that are truly under attack across all attack scenarios could correctly be predicted to have abnormal behavior. Additionally, on average, 92% of the nodes that are truly not under attack could be correctly predicted to have benign behavior. In terms of falsely predicting a node to be under attack, on average, a false positive detection lasts around 5.39 time slots. Furthermore, in terms of missing nodes that are truly under attack, on average, a false negative detection lasts around 4.41 time slots.

Fig. 13: Attack prediction vs time on the testing dataset for the case that the attack starts at 12 pm for 8 hours over all IoT nodes

To further analyze the performance of the proposed detection mechanism, we created a new testing dataset that varies the ratio of nodes that go under attack from 0.1 to 1. Figure 14 shows the F1 score of the LSTM model using the MM-WC architecture versus the ratio of the nodes that are under attack, i.e., \(a_{r}\). We have separated the different attack scenarios by considering the four cases of \(k=0/1\) and \(a_{d}=4/16\) hours. As we can see, the scenario of \(k=1\) and \(a_{d}=16\) hours has the highest F1 score, since the attacker is transferring many packets from the IoT nodes and it is easier to detect. On the other hand, when \(k=0\), for both \(a_{d}=4/16\) hours, we can see that the performance of the detection model suffers due to the attacker camouflaging the attack traffic with benign traffic, especially when the attacker is also using a very low number of IoT devices, i.e., \(a_{r}=0.1\), to perform the attack. Note that the case of using a low number of IoT devices to perform the attack, i.e., \(a_{r}=0.1\), together with a small attack packet volume, \(k=0\), is not very realistic, since although the attacker is sending some extra packets to the victim server, it cannot meaningfully disturb the behavior of the victim server. We believe that low performance in detecting DDoS attacks in such unrealistic scenarios is not a good way of analyzing the performance of our proposed model; however, we provide this simulation result for the comprehensiveness of this study.

Fig. 14: F1 Score vs ratio of the nodes under attack (\(a_{r}\))

### _IoT Nodes with Constrained Resources_

Based on the simulation results provided in Section VII-B, using the correlation information of the IoT nodes provides insightful information that helps the neural network models make much better predictions, especially in the case where the attacker camouflages the attack in the benign traffic, i.e., \(k=0\). As we mentioned in Section VI, using the correlation information of _all nodes_ is infeasible in the real world, given the huge number of IoT devices that could go under attack. Therefore, we introduced five methods in Section VI to compare different techniques for incorporating the correlation information of the IoT nodes that have the highest impact on correctly predicting the DDoS attacks. In this subsection, we present the performance of the different methods using the information of 5 nodes, including the node that is running the model's prediction. Furthermore, we only consider the LSTM neural network model and MM-WC architecture as our detection method.
One of the methods mentioned in section VI uses the most important features in the training dataset, based on the SHAP values, to figure out the top 5 most important nodes. Figure 15 shows the top 5 important features in training the LSTM/MM-WC model for node 1159. The left vertical axis shows the feature names, and the features are sorted by their importance in descending order. The right vertical axis shows the color heat map associated with the actual value of the feature, i.e. the color shows whether the actual value of the feature is high (red) or low (blue) for each sample in the testing dataset. The horizontal axis shows the impact of each sample on the neural network model's prediction: higher (lower) values on the horizontal axis show that a sample has a higher (lower) prediction value. In our scenario, a higher prediction value indicates that an attack is happening, and a lower prediction value indicates that an attack is not happening. Figure 15 supports the fact that a high packet volume (feature value) for node 1159 has a high impact (SHAP value) on predicting a DDoS attack happening at node 1159. Furthermore, using the packet volume of the other four nodes, namely 3167, 33989, 23093, and 19159, helped the neural network model trained for node 1159 to predict whether a DDoS attack was happening or not. Therefore, in the SHAP method discussed in section VI, in order to train the LSTM neural network based on the MM-WC architecture for node 1159, we will only use the correlation information from nodes 1159, 3167, 33989, 23093, and 19159. We repeat the same procedure for the other nodes to determine the top 5 important features for training the LSTM/MM-WC model of each node.

Figure 16 compares the performance of the different techniques discussed in section VI using the LSTM/MM-WC model and the top 5 important nodes. Figure 16 supports the assumption that using the information of all nodes provides higher performance, with an F1 score between 0.82 and 0.87, compared to the other techniques, which only use the information of 5 nodes. Furthermore, we can see that using the Pearson correlation provides higher performance than the SHAP, random, and nearest neighbor methods, with an F1 score degradation of up to 5% compared to using the information of all nodes. Using the SHAP technique to determine the most important features performs slightly worse than the Pearson correlation method. Interestingly, the nearest neighbor method shows the worst performance of all. With the help of these results, we can conclude that by using the Pearson correlation, we can detect DDoS attacks over a large number of IoT devices by using the correlation information of only a fraction of the IoT nodes, with a slight degradation in performance.

### _DDoS Detection Performance Analysis over All nodes In The Dataset_

In this subsection, we analyze the performance of the proposed DDoS detection mechanism over _all_ nodes of the dataset. For this purpose, we assume all nodes are randomly distributed into groups of 50 nodes, i.e., each group of 50 nodes can have access to the information of the other nodes in that group. In total, we have 81 groups of 50 IoT nodes. We use the LSTM neural network model and the MM-WC architecture. We also consider the _All Nodes_ and _Pearson Correlation_ methods for training and testing the neural network models, as introduced in section VI.
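For reference, the _Pearson Correlation_ selection can be sketched in a few lines; the variable names and shapes below are illustrative assumptions.

```
import numpy as np

def select_corr_nodes(traffic, node, k=5):
    """traffic: (n_nodes, n_slots) matrix of per-node packet volumes."""
    corr = np.corrcoef(traffic)               # (n_nodes, n_nodes) Pearson matrix
    ranked = np.argsort(-np.abs(corr[node]))  # most correlated first
    return ranked[:k]                         # includes `node` itself (corr = 1)

traffic = np.random.rand(50, 1000)            # toy group of 50 nodes
print(select_corr_nodes(traffic, node=3))
```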
Figure 17 shows the average performance of the LSTM/MM-WC model using both the _All Nodes_ and _Pearson Correlation_ methods over different values of the attack packet distribution parameter \(k\). As we can see, if we use the correlation information of all 50 nodes in each group, we obtain an F1 score between 0.81 and 0.85, while with the _Pearson Correlation_ method we observe an F1 score up to 5% lower than with the _All Nodes_ method.

Fig. 15: Feature importance analysis based on the LSTM/MM-WC model for node 1159

Fig. 16: Compare the performance of different methods for selecting a subset of the nodes for training the LSTM neural network model

## VIII Conclusion

In this paper, we presented a machine-learning-based mechanism for detecting DDoS attacks in the ever-growing IoT systems. We used our previously published dataset, which is based on real urban IoT activity data, and emulated benign and attack traffic by using a truncated Cauchy distribution. We introduced a new general training dataset that incorporates the correlation information of IoT nodes, to be used by correlation-aware architectures for model training. Four new architectures, namely MM-WC, MM-NC, OM-WC, and OM-NC, were introduced that consider the case of either using a central or a distributed neural network model, and of either using the information of individual IoT nodes or using the correlation information of the IoT nodes. Furthermore, we studied the performance of five neural network models, namely MLP, CNN, LSTM, TRF, and AEN, using combinations of all proposed architectures. Our extensive simulation results indicated that the LSTM/MM-WC and TRF/OM-WC models have the best performance compared to the other combinations of architectures and neural network models. Furthermore, we observed that the architectures that use the correlation information of the nodes, i.e. MM-WC and OM-WC, have better performance than the architectures that do not utilize the correlation information, especially in the case where the attacker tries to camouflage the attack by mimicking the behavior of the benign traffic. Since the attacker could potentially use a massive number of IoT devices to perform a DDoS attack, it is not feasible to use the correlation information of all nodes for training the neural network models. Therefore, we introduced heuristic solutions that actively select a smaller number of IoT nodes for the purpose of training correlation-aware neural network models and detecting DDoS attacks. Our analysis indicated that selecting the IoT nodes with the highest Pearson correlation for training the correlation-aware architectures performs best among these solutions. In future work, we plan to consider more complex detection architectures by using a collaborative method that involves IoT nodes, edge devices, and cloud servers. Furthermore, we want to explore dynamically training and updating the neural network models to make them capable of detecting DDoS attacks whose behavior changes over time.

## IX Acknowledgments

This material is based upon work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0160 for the Open, Programmable, Secure 5G (OPS-5G) program. Any views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
2310.17758
Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes
In this work, we propose a fully differentiable iterative decoder for quantum low-density parity-check (LDPC) codes. The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers. Both component decoders are defined over the same sparse decoding graph enabling a seamless integration and scalability to large codes. The core idea is to use the GNN component between consecutive BP runs, so that the knowledge from the previous BP run, if stuck in a local minimum caused by trapping sets or short cycles in the decoding graph, can be leveraged to better initialize the next BP run. By doing so, the proposed decoder can learn to compensate for sub-optimal BP decoding graphs that result from the design constraints of quantum LDPC codes. Since the entire decoder remains differentiable, gradient descent-based training is possible. We compare the error rate performance of the proposed decoder against various post-processing methods such as random perturbation, enhanced feedback, augmentation, and ordered-statistics decoding (OSD) and show that a carefully designed training process lowers the error-floor significantly. As a result, our proposed decoder outperforms the former three methods using significantly fewer post-processing attempts. The source code of our experiments is available online.
Anqi Gong, Sebastian Cammerer, Joseph M. Renes
2023-10-26T19:56:25Z
http://arxiv.org/abs/2310.17758v2
# Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes

###### Abstract

In this work, we propose a fully differentiable iterative decoder for quantum low-density parity-check (LDPC) codes. The proposed algorithm is composed of _classical_ belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers. Both component decoders are defined over the same sparse decoding graph enabling a seamless integration and scalability to large codes. The core idea is to use the GNN component between consecutive BP runs, so that the knowledge from the previous BP run, if stuck in a local minimum caused by trapping sets or short cycles in the decoding graph, can be leveraged to better initialize the next BP run. By doing so, the proposed decoder can learn to compensate for sub-optimal BP decoding graphs that result from the design constraints of quantum LDPC codes. Since the entire decoder remains differentiable, gradient descent-based training is possible. We compare the error rate performance of the proposed decoder against various post-processing methods such as random perturbation, enhanced feedback, augmentation, and ordered-statistics decoding (OSD) and show that a carefully designed training process lowers the error-floor significantly. As a result, our proposed decoder outperforms the former three methods using significantly fewer post-processing attempts. The source code of our experiments is available online.

+ Footnote †: [https://github.com/gongaa/Feedback-GNN](https://github.com/gongaa/Feedback-GNN)

## 1 Introduction

Quantum low-density parity-check (QLDPC) codes are among the most promising types of error correction codes for fault-tolerant quantum computing. In recent years there have been numerous efforts to design QLDPC codes of high rate and high distance [1, 2]; even linear distance has been investigated in [3, 4]. Recent work from Bravyi et al. [5] shows how certain codes can be mapped to a bilayer hardware architecture, spurring further interest in the community and increasing the practical importance of low-complexity decoders for such codes. In this work, we focus on decoding the medium block-length and high-rate CSS QLDPC codes introduced in [6]. These codes are of particular interest due to their strong performance under maximum likelihood decoding and its approximations, such as OSD; however, their performance is known to be suboptimal under plain BP decoding. This can be intuitively explained by the fact that the corresponding Tanner graph contains unavoidable 4-cycles [7] due to the stabilizer commutation requirement. On the other hand, since errors differing by a stabilizer have the same syndrome, symmetric stabilizer trapping sets [8, 9, 10], where BP cannot decide between certain such errors (e.g., two equal-weight ones), also degrade BP performance. The authors in [8] proposed to alleviate this issue by adding a small perturbation to the channel prior probabilities given as input to the BP decoder. An improved version is shown in [11], where the perturbation depends on the unsatisfied checks. The goal of our work is to learn those perturbations using a GNN. The GNN follows the concept of [12] and acts as an intermediate layer between independent BP runs, as shown in Figure 1.
It takes the output log-likelihood ratios (LLRs) \(\mathbf{\Lambda}_{\text{post}}\) estimated by the previous BP decoding stage, then calculates the reliabilities of checks, and uses both pieces of information to provide a refined channel prior \(\mathbf{\Lambda}^{\prime}\) as initialization of the next BP run. We call this intermediate layer a _feedback_ GNN. The GNN operates on the same decoding graph as the BP decoder, facilitating a seamless integration with the BP decoding stages. Our trained feedback GNN requires significantly fewer attempts than the aforementioned two methods [8, 11] to achieve a lower logical error rate, as shown in Fig. (4a). Machine learning for channel decoding is doomed by the curse of dimensionality, meaning that the training complexity scales exponentially with the codeword length [13]. However, we would like to emphasize that one advantage of the proposed feedback GNN structure is that the graph is explicitly provided and the main decoding task still relies on BP. Thereby, its training complexity is only mildly affected by the codeword length [12].

Figure 1: Block diagram of the proposed decoder architecture consisting of trainable GNN layers (orange) and _classical_ BP iterations (yellow). The same GNN is sandwiched between block BP runs of iteration \((64,16,16,16)\).

The remainder of the paper is organized as follows. Section 2 contains some background information on the CSS-type LDPC codes, and in Section 2.2, we review some existing post-processing methods. Section 3 reviews the quaternary BP decoding method. With this basis we describe the feedback GNN model architecture and the training method in Section 4. Section 5 presents the numerical simulation results. We conclude in Section 6.

## 2 Quantum Low-Density Parity-Check Codes

In the following we restrict our attention to the i.i.d. depolarizing channel, in which each qubit independently and identically experiences a random \(X\), \(Z\), or \(Y=iXZ\) type of error, each with probability \(p/3\) for some \(p\in[0,1]\).

### CSS Quantum Codes

Quantum Calderbank-Shor-Steane (CSS) [14, 15] codes are constructed from two _classical_ codes \(C_{X}=\ker H_{X}\) and \(C_{Z}=\ker H_{Z}\) defined by parity-check matrices \(H_{X}\) and \(H_{Z}\) under the requirement \(H_{X}H_{Z}^{T}=\mathbf{0}\), i.e., \(C_{Z}^{\perp}\subseteq C_{X}\) or equivalently \(C_{X}^{\perp}\subseteq C_{Z}\). Rows of \(H_{X}\) and \(H_{Z}\) form the \(X\)- and \(Z\)-type stabilizer generators, while the \(X\)-type and \(Z\)-type logical operators are \(C_{Z}/C_{X}^{\perp}\) and \(C_{X}/C_{Z}^{\perp}\), respectively. All parity-check operations are binary operations in GF(2). If \(C_{X}\) is a blocklength \(N\) code, encoding \(K_{X}\) info bits (an \([N,K_{X}]\) code), and \(C_{Z}\) is an \([N,K_{Z}]\) code, then the resulting quantum code is an \([\![N,K_{X}+K_{Z}-N]\!]\) code. The quantum minimum distance of this code is \(d=\min\{d_{X},d_{Z}\}\), where \(d_{X}=\min\{\mathrm{wt}(\mathbf{x}),\mathbf{x}\in\ker H_{Z}/\mathrm{im}\ H_{X}^{T}\}\), \(d_{Z}=\min\{\mathrm{wt}(\mathbf{x}),\mathbf{x}\in\ker H_{X}/\mathrm{im}\ H_{Z}^{T}\}\), and \(\mathrm{wt}(\mathbf{x})\) is the Hamming weight of the binary vector \(\mathbf{x}\). An error is specified by \(N\)-bit vectors \(\mathbf{e}_{X}\) and \(\mathbf{e}_{Z}\) which denote the \(X\) and \(Z\) components of the error, respectively.
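As a small illustration of these definitions, the CSS commutation requirement and the syndrome maps used below can be checked with a few lines of GF(2) arithmetic; the matrices here are toy placeholders, not one of the codes studied in this paper.

```
import numpy as np

def is_css_pair(H_X, H_Z):
    # every X-type stabilizer must commute with every Z-type stabilizer
    return not np.any((H_X @ H_Z.T) % 2)

def syndromes(H_X, H_Z, e_X, e_Z):
    # s_X = H_Z e_X and s_Z = H_X e_Z, all arithmetic over GF(2)
    return (H_Z @ e_X) % 2, (H_X @ e_Z) % 2

H_X = np.array([[1, 1, 1, 1]])                        # toy placeholder matrices
H_Z = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])
assert is_css_pair(H_X, H_Z)
```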
A decoder takes the observed syndromes \(\mathbf{s}_{X}=H_{Z}\mathbf{e}_{X}\) and \(\mathbf{s}_{Z}=H_{X}\mathbf{e}_{Z}\) and constructs correction operations \(\hat{\mathbf{e}}_{X}\) and \(\hat{\mathbf{e}}_{Z}\). Importantly, these do not need to precisely match the error for decoding to succeed. Instead, \(\hat{\mathbf{e}}_{X}\) and \(\mathbf{e}_{X}\) may differ by an \(X\)-stabilizer, and \(\hat{\mathbf{e}}_{Z}\) and \(\mathbf{e}_{Z}\) may differ by a \(Z\)-stabilizer, as these correction operations will return the system to the codespace without incurring any logical error. The requirement for successful correction can be compactly written as \(H_{X}^{\perp}(\hat{\mathbf{e}}_{X}+\mathbf{e}_{X})=\mathbf{0}\) and \(H_{Z}^{\perp}(\hat{\mathbf{e}}_{Z}+\mathbf{e}_{Z})=\mathbf{0}\). This is more stringent than only requiring \(\hat{\mathbf{e}}\) to have the correct syndrome, i.e., \(H_{Z}\hat{\mathbf{e}}_{X}=H_{Z}\mathbf{e}_{X}\) and \(H_{X}\hat{\mathbf{e}}_{Z}=H_{X}\mathbf{e}_{Z}\), since the row span of \(H_{X}\) is contained in the row span of \(H_{Z}^{\perp}\) for a CSS code.

In the context of the depolarizing model, the \(X\) and \(Z\) components of the CSS codes can be separately decoded using classical (binary) syndrome BP decoders. To address the correlation between \(X\) and \(Z\) errors, a _quaternary_ BP decoder [7] was introduced that directly operates on the Tanner graph constructed using both the \(X\) and \(Z\) checks. However, due to the CSS code constraints, this Tanner graph inevitably contains cycles of length \(4\) [7]. This renders the decoding of QLDPC codes a challenging, yet from a research perspective attractive, task.

### Existing Post-Processing Methods

One difficulty of decoding QLDPC codes using message passing methods is that the correction operator \(\hat{\mathbf{e}}=(\hat{\mathbf{e}}_{X}|\hat{\mathbf{e}}_{Z})\) of the decoder is not assured to reproduce the observed syndrome \(\mathbf{s}=(\mathbf{s}_{X}|\mathbf{s}_{Z})\). We call such cases "flagged" events. To overcome such flagged events, several post-processing methods have been proposed. One approach is to take the output of the BP decoder and use it to adjust the inputs to another round of BP decoding, in the hope that the next round will lead to an unflagged error estimate. The following is a summary of such methods, which can be applied repeatedly until the parity-checks are satisfied or the maximal number of attempts \(N_{a}\) is reached:

* Random perturbation [8] randomly chooses unsatisfied ("frustrated") checks and randomly perturbs the input LLRs for all the qubits involved in those checks.
* Enhanced feedback [11] randomly chooses one frustrated parity-check and one of its associated qubits. The channel prior of this qubit is adjusted such that an anticommuting/commuting error is more likely if the ground truth syndrome of the parity-check is \(1/0\).
* Section III.A of [16] contains an overview of the first two methods. In addition, [16] proposes a matrix augmentation method, which randomly chooses and repeats some of the rows from the parity-check matrix, and uses this augmented matrix to initialize the message passing decoding in the next round.
* Stabilizer inactivation (SI) [10] ranks the parity-checks according to their reliability, as measured by the sum of the absolute values of the posterior LLRs of the associated qubits. Then the qubits associated to the most unreliable checks are "inactivated" by taking them out in the next round of message passing decoding.
Finally, the error on the inactivated qubits is determined by solving a small system of linear equations.

A related method is _zeroth_-order OSD [6], which is applied only once after BP decoding. By its nature, OSD always gives an estimate that satisfies the syndrome equations, thus eliminating flag errors. This involves solving a rank\((H)\) system of linear equations. Higher-order OSD was also introduced in [6], but is beyond the scope of this work.

## 3 Quaternary Belief Propagation Decoder

The quaternary BP (BP4) algorithm utilizes the correlation between \(X\) and \(Z\) noise by directly working on the quaternary alphabet \(\{I,X,Y,Z\}\). Here we discuss the details of the algorithm, as they will be useful later. As with usual BP decoding algorithms, BP4 utilizes the Tanner graph of the code, and proceeds by sending messages back and forth between VNs and CNs. The goal is to estimate, for each qubit, the marginal probability of error on that qubit, given the observed syndrome. Following [17, 18], the input to the BP4 algorithm is the syndrome pair \(\mathbf{s}_{X}=H_{Z}\mathbf{e}_{X}\) and \(\mathbf{s}_{Z}=H_{X}\mathbf{e}_{Z}\). Variable nodes are initialized to \(\mathbf{\Lambda}\in\mathbb{R}^{N\times 3}\), where the \(i^{th}\) row \(\mathbf{\Lambda}_{i}=(\Lambda_{i}^{X},\Lambda_{i}^{Y},\Lambda_{i}^{Z})\) is the sequence of LLRs associated with \(X\), \(Y\), and \(Z\), respectively. Specifically, \(\Lambda_{i}^{X}=\log\frac{p_{i}^{I}}{p_{i}^{X}}\), where \(p_{i}^{I}\) and \(p_{i}^{X}\) are the probabilities of no error or an \(X\) error happening on VN \(i\), respectively. In the case of i.i.d. depolarizing noise, \(p_{i}^{X}=p_{i}^{Y}=p_{i}^{Z}=p/3\) for all VN \(i\), meaning the initial VN LLRs are

\[\Lambda_{i}^{X}=\Lambda_{i}^{Y}=\Lambda_{i}^{Z}=\log\frac{1-p}{p/3}\quad\forall i\,. \tag{1}\]

The message \(\lambda_{i\to j}\) sent from VN \(v_{i}\) to CN \(c_{j}\) is a scalar. For example, if \(c_{j}\) involves an X-type check on \(v_{i}\), then this scalar LLR message is

\[\lambda_{i\to j}=\log\frac{p_{i}^{I}+p_{i}^{X}}{p_{i}^{Y}+p_{i}^{Z}}=\log\frac{1+e^{-\Lambda_{i}^{X}}}{e^{-\Lambda_{i}^{Y}}+e^{-\Lambda_{i}^{Z}}} \tag{2}\]

since either a \(Y\) or a \(Z\) error contributes \(1\) to (anticommutes with) the check. When \(c_{j}\) involves a \(Y\)- or a \(Z\)-type check, the messages are defined similarly, as the LLR of an anticommuting error. This mapping (and thus the BP4 algorithm) can also be applied to non-CSS codes; however, we restrict ourselves to CSS QLDPC codes in this work. As a consequence, every check node involves either pure \(X\) checks or pure \(Z\) checks on VNs.

### Check Node Update

The message \(\Delta_{j\to i}\) from CN \(c_{j}\) back to VN \(v_{i}\) is the LLR of whether \(v_{i}\) commutes with \(c_{j}\), conditioned on the check node observations.

**Lemma 3.1**.: _Gallager [19]. Consider a sequence of \(m\) independent binary digits \((a_{1},...,a_{m})\) in which \(\Pr[a_{k}=1]=p_{k}\). Then the probability \(q\) that \(a_{1}\oplus\cdots\oplus a_{m}=1\) (i.e., an odd number of ones occurring) is_

\[q=\frac{1}{2}-\frac{1}{2}\prod_{k=1}^{m}(1-2p_{k}) \tag{3}\]

_which can be rewritten in the log-domain as_

\[\log\frac{1-q}{q}=2\tanh^{-1}\left[\prod_{i=1}^{m}\tanh\left(\frac{1}{2}\log\frac{1-p_{i}}{p_{i}}\right)\right]. \tag{4}\]

The right-hand side is usually abbreviated as the boxplus operation \(\boxplus_{i=1}^{m}\Lambda_{i}\) for LLRs \(\Lambda_{i}=\log\frac{1-p_{i}}{p_{i}}\).
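Since this operation reappears in the check node update, the check features, and the loss below, a numerically stable log-domain implementation is worth noting; the following is a generic sketch, not our exact kernel.

```
import torch

def boxplus(llrs, dim=-1, eps=1e-12):
    """Reduce LLRs along `dim` with the boxplus operation of Eq. (4)."""
    t = torch.tanh(0.5 * llrs).prod(dim=dim)
    t = t.clamp(-1 + eps, 1 - eps)            # keep atanh finite
    return 2.0 * torch.atanh(t)

print(boxplus(torch.tensor([2.0, -1.5, 3.0])))  # LLR of the XOR of three bits
```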
Later, the loss function and the feature required by the GNN both rely on this simple lemma. The message \(\Delta_{j\to i}\) involves extrinsic information only, and is given by

\[\Delta_{j\to i}=(-1)^{s_{j}}\cdot\underset{i^{\prime}\in\mathcal{N}(j)\backslash\{i\}}{\boxplus}\lambda_{i^{\prime}\to j} \tag{5}\]

where \(s_{j}\in\{0,1\}\) is the syndrome of the check and \(\mathcal{N}(j)\) is the set of neighboring VNs of \(c_{j}\) in the Tanner graph.

### Variable Node Update

The VN update processes the messages coming from the CNs in the direct neighborhood. Assume \(c_{j}\) involves an X-type check on \(v_{i}\). The LLR message \(\Delta_{j\to i}\) cannot distinguish between \(I\) and \(X\) noise, as they both commute with the check; similarly, it cannot distinguish \(Y\) and \(Z\) noise. Therefore, the corresponding posterior probabilities satisfy \(p_{I}^{j\to i}=p_{X}^{j\to i}\) and \(p_{Y}^{j\to i}=p_{Z}^{j\to i}\), and hence \(\Delta_{i\gets j}=\log\frac{p_{I}^{j\to i}+p_{X}^{j\to i}}{p_{Y}^{j\to i}+p_{Z}^{j\to i}}=\log\frac{p_{I}^{j\to i}}{p_{Z}^{j\to i}}\). Similarly, if \(c_{j}\) involves a Z-type check on \(v_{i}\), then \(\Delta_{i\gets j}=\log\frac{p_{I}^{j\to i}}{p_{X}^{j\to i}}\). The update of the LLR vector \((\Gamma_{i\to j}^{X},\Gamma_{i\to j}^{Y},\Gamma_{i\to j}^{Z})\) for \(v_{i}\) is also done via the extrinsic information rule, given as

\[\Gamma_{i\to j}^{X}=\Lambda_{i}^{X}+\sum_{j^{\prime}\in\mathcal{M}_{Z}(i)\setminus\{j\}}\Delta_{i\gets j^{\prime}} \tag{6}\]

where \(\mathcal{M}_{Z}(i)\) are all the \(Z\)-type check nodes (CSS code) involving \(v_{i}\). Note that checks \(c_{j^{\prime}}\) from \(\mathcal{M}_{X}(i)\) make no contribution, as \(\log\frac{p_{I}^{j^{\prime}\to i}}{p_{X}^{j^{\prime}\to i}}=0\). Similarly, the update rule for \(\Gamma_{i\to j}^{Y}\) is

\[\Gamma_{i\to j}^{Y}=\Lambda_{i}^{Y}+\sum_{j^{\prime}\in\mathcal{M}_{Z}(i)\setminus\{j\}}\Delta_{i\gets j^{\prime}}\,. \tag{7}\]

Using (2), one can combine messages to send to \(X\)- and \(Z\)-type CNs and continue the BP iterations.

### Hard Decision

Before a final hard decision, the last decoding iteration calculates the channel posterior \(\mathbf{\Lambda}_{\text{post}}=(\mathbf{\Gamma}^{X},\mathbf{\Gamma}^{Y},\mathbf{\Gamma}^{Z})\), whose individual components read

\[\begin{split}\Gamma_{i}^{X/Z}&=\Lambda_{i}^{X/Z}+\sum_{j^{\prime}\in\mathcal{M}_{Z/X}(i)}\Delta_{i\gets j^{\prime}}\\ \Gamma_{i}^{Y}&=\Lambda_{i}^{Y}+\sum_{j^{\prime}\in\mathcal{M}_{Z}(i)\bigcup\mathcal{M}_{X}(i)}\Delta_{i\gets j^{\prime}}\end{split} \tag{8}\]

with \(\Gamma_{i}^{I}=0\); each VN then chooses the candidate among \(\{I,X,Y,Z\}\) that leads to the smallest LLR.

## 4 Feedback GNN

The proposed message passing GNN decoder [12] utilizes the same Tanner graph as the BP decoder, but differs in the representation of the messages and uses trainable CN and VN update functions which are parameterized, for instance by using simple MLP layers. Each variable node or check node has an assigned feature vector. The message on an edge is computed using the concatenated features of its two end nodes. Afterward, the state of a node is updated using its previous state and the aggregated incoming messages from its neighborhood. The aggregation operation used in this work is the mean operation. To keep the decoding complexity low, our feedback GNN eliminates the "iterative" aspect of the decoder, opting for just one VN update iteration. However, the basic concept still follows [12].
### Architecture

After the BP run, the posterior channel LLR is obtained using (8) and becomes the variable node feature \(\mathbf{h}_{v_{i}}=\mathbf{\Lambda}_{\text{post},i}=(\Gamma_{i}^{X},\Gamma_{i}^{Y},\Gamma_{i}^{Z})\). Using (2), the two LLRs \(\lambda_{i}^{X},\lambda_{i}^{Z}\) of whether \(v_{i}\) commutes with an X or Z check are

\[\lambda_{i}^{X}=\log\frac{1+e^{-\Gamma_{i}^{X}}}{e^{-\Gamma_{i}^{Y}}+e^{-\Gamma_{i}^{Z}}}\,,\quad\lambda_{i}^{Z}=\log\frac{1+e^{-\Gamma_{i}^{Z}}}{e^{-\Gamma_{i}^{X}}+e^{-\Gamma_{i}^{Y}}}\,. \tag{9}\]

Using Lemma 3.1, the LLR of whether X-type check node \(c_{j}\) is satisfied is \(\boxplus_{i^{\prime}\in\mathcal{N}(j)}\lambda_{i^{\prime}}^{Z}\). The feature of this check node is

\[h_{c_{j}}=(-1)^{s_{j}}\times\underset{i^{\prime}\in\mathcal{N}(j)}{\boxplus}\lambda_{i^{\prime}}^{Z} \tag{10}\]

which is a scalar; the more negative it is, the more likely this check is not satisfied. Next, for each CN-to-VN edge, we concatenate the features of its two endpoints and calculate the message

\[\mathbf{m}_{c_{j}\to v_{i}}=f^{\,C_{X/Z}\to V}\left(\left[h_{c_{j}}\,||\,\mathbf{h}_{v_{i}}\right],\boldsymbol{\theta}_{C_{X/Z}\to V}\right). \tag{11}\]

X-type CNs share the weights \(\boldsymbol{\theta}_{C_{X}\to V}\) of the function \(f^{\,C_{X}\to V}\), and Z-type CNs share the weights \(\boldsymbol{\theta}_{C_{Z}\to V}\) of \(f^{\,C_{Z}\to V}\). In our implementation, \(f^{\,C_{X}\to V}\) and \(f^{\,C_{Z}\to V}\) are both MLPs with one hidden layer of 40 neurons and \(\tanh\) activation. Both MLPs project the messages to dimension 20 as output.

Figure 2: Unrolled feedback GNN operating on the Tanner graph, showing the inside of the orange boxes in Fig. (1). The VN feature is initialized using \(\mathbf{\Lambda}_{\text{post}}\) from the previous BP run and the CN feature is calculated using Eq. (9,10). Each edge message is calculated using the features of its two endpoints. After that, each variable node aggregates the incoming X (red) and Z-type (blue) messages and then uses them together with its own feature to obtain a modified channel prior \(\mathbf{\Lambda}^{\prime}\) for the next BP run. Solid objects indicate trainable operations.

For each \(v_{i}\), we calculate the average of all the incoming messages from X-type and Z-type checks, respectively:

\[\begin{split}\mathbf{m}^{X}_{v_{i}}&=\frac{1}{|\mathcal{M}_{X}(i)|}\sum_{j^{\prime}\in\mathcal{M}_{X}(i)}\mathbf{m}_{c_{j^{\prime}}\to v_{i}}\\ \mathbf{m}^{Z}_{v_{i}}&=\frac{1}{|\mathcal{M}_{Z}(i)|}\sum_{j^{\prime}\in\mathcal{M}_{Z}(i)}\mathbf{m}_{c_{j^{\prime}}\to v_{i}}\end{split} \tag{12}\]

All VNs share the weights \(\boldsymbol{\theta}_{VN}\) of \(f^{VN}\), another MLP (with one hidden layer of 40 neurons) that projects the high-dimensional messages back to three dimensions:

\[\boldsymbol{\Lambda}^{\prime}_{i}=f^{VN}\left(\left[\mathbf{h}_{v_{i}}\,||\,\mathbf{m}^{X}_{v_{i}}\,||\,\mathbf{m}^{Z}_{v_{i}}\right],\boldsymbol{\theta}_{VN}\right)\,. \tag{13}\]

The resulting output \(\boldsymbol{\Lambda}^{\prime}\in\mathbb{R}^{N\times 3}\) is then used to initialize the next BP run. This feedback GNN essentially only does one VN update, in contrast to the usual message-passing GNN [12] that does multiple iterations of CN and then VN updates. Moreover, the proposed feedback GNN is lightweight, containing just 3923 trainable parameters in total, irrespective of the code size.

### Boxplus Loss

In order to train this GNN, we feed its output \(\boldsymbol{\Lambda}^{\prime}\) to initialize another BP4 decoding stage, and let it run for 16 iterations.
We record \((\Gamma^{X}_{i},\Gamma^{Y}_{i},\Gamma^{Z}_{i})\) as in Eq. (8) at the end of iterations 8 to 16, calculate \(\lambda^{X}_{i},\lambda^{Z}_{i}\) for all these iterations as in Eq. (9), and for all X/Z-type checks calculate \(\boxplus_{i^{\prime}\in\mathcal{N}(j)}\lambda^{Z/X}_{i^{\prime}}\), respectively. These are the logits used to calculate a binary cross entropy loss with the ground truth syndrome \(s_{j}\). This loss is minimized when \(H_{X}\hat{\mathbf{e}}_{Z}=H_{X}\mathbf{e}_{Z}\) and \(H_{Z}\hat{\mathbf{e}}_{X}=H_{Z}\mathbf{e}_{X}\). The proposed _boxplus_ loss is different from the sine loss introduced in [20].

It is also possible to extend this boxplus loss to take degeneracy into account. For CSS codes, no logical error happens if and only if \(H_{X}^{\perp}\hat{\mathbf{e}}_{X}=H_{X}^{\perp}\mathbf{e}_{X}\) and \(H_{Z}^{\perp}\hat{\mathbf{e}}_{Z}=H_{Z}^{\perp}\mathbf{e}_{Z}\), and the CSS requirement implies \(H_{Z}\subset H_{X}^{\perp}\) and \(H_{X}\subset H_{Z}^{\perp}\). Therefore, we can simply enlarge the X and Z parity-check matrices into \(H_{Z}^{\perp}\) and \(H_{X}^{\perp}\) when doing the multi-loss calculation. When training the feedback GNN, we do not take degeneracy into account.

### Training methods

* Step 1: Generate a dataset using BP4 (64 iterations, flooding), and store the error patterns \((\mathbf{e}_{X},\mathbf{e}_{Z})\) for which BP4 fails to decode. Only errors \(\mathbf{e}_{X}\vee\mathbf{e}_{Z}\) (bit-wise OR) with Hamming weight up to \(0.05\times N\) are considered. We gather roughly a million such errors.
* Step 2: Use this dataset to train a coarse GNN, which is then used to generate _hard-to-decode_ examples in the next step. The GNN is embedded between two BP runs of 16 iterations. The output of iterations 8 to 16 of the latter run is used to calculate a boxplus multi-loss, as described in the last subsection. Training takes roughly ten minutes on a single NVIDIA RTX4090.
* Step 3: Use the trained coarse GNN embedded into two 64-iteration BP components to generate a more sophisticated dataset. Note that we need fewer samples from this dataset when compared to step 1 (empirically 1/50 of the original size).
* Step 4: We mix the _hard-to-decode_ dataset with the easy one and balance the probability of occurrence (such that easy/hard-to-decode samples are equally likely). We then finetune the GNN sandwiched between (64,16) BP blocks on the new dataset. Each training takes less than half an hour. Note that we train ten such models in parallel and select the best one.

We implement the BP decoder together with a normalization factor \(\kappa\). Specifically, the messages from CN to VN in Eq. (6,7) are multiplied by this scaling factor. During training, the factor \(\kappa=1.0\) is used in every BP run (i.e. no scaling). However, we empirically observed that during evaluation, at high physical error rates, it is beneficial to set \(\kappa\) for the first run of BP to 0.8. It is also interesting to observe that the feedback GNN improves the logical error rate even at high physical error rates \(p\), though the model was not trained there. However, we do not show the curves at low physical error rates for \(\kappa=0.8\), as they have higher error floors when compared to \(\kappa=1.0\). The GNN is trained by being embedded between two BP blocks, but during evaluation, it is reused for possibly more than one attempt.
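To make the feedback step and its training signal concrete, the following is a minimal PyTorch sketch of the VN update of Eqs. (11)-(13) and of the boxplus binary cross-entropy loss. The tensor layouts and the dense check-incidence mask are illustrative assumptions, not our exact implementation; the hidden sizes follow the text (one hidden layer of 40 neurons, message dimension 20), which also reproduces the stated parameter count of 3923.

```
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    # one hidden layer of 40 neurons with tanh activation, as in the text
    return nn.Sequential(nn.Linear(d_in, 40), nn.Tanh(), nn.Linear(40, d_out))

class FeedbackGNN(nn.Module):
    """One VN-update feedback step, Eqs. (11)-(13); 3923 parameters."""
    def __init__(self, d_msg=20):
        super().__init__()
        self.f_cx = mlp(1 + 3, d_msg)        # X-check edge MLP
        self.f_cz = mlp(1 + 3, d_msg)        # Z-check edge MLP
        self.f_vn = mlp(3 + 2 * d_msg, 3)    # VN update, Eq. (13)

    def forward(self, h_v, h_cx, h_cz, ex, ez):
        # h_v: (N,3) posterior LLRs; h_cx/h_cz: per-check features, Eq. (10);
        # ex/ez: (E,2) long tensors of (check, vnode) Tanner-graph edges
        n, feats = h_v.shape[0], [h_v]
        for h_c, edges, f in ((h_cx, ex, self.f_cx), (h_cz, ez, self.f_cz)):
            c, v = edges[:, 0], edges[:, 1]
            m = f(torch.cat([h_c[c].unsqueeze(-1), h_v[v]], dim=-1))  # Eq. (11)
            s = torch.zeros(n, m.shape[-1]).index_add_(0, v, m)
            d = torch.zeros(n).index_add_(0, v, torch.ones(len(v)))
            feats.append(s / d.clamp(min=1).unsqueeze(-1))            # Eq. (12)
        return self.f_vn(torch.cat(feats, dim=-1))                    # Eq. (13)

def boxplus_bce_loss(lam, mask, syndrome, eps=1e-12):
    # lam: (N,) LLRs from Eq. (9); mask: (M,N) float 0/1 check incidence;
    # the per-check logit is the boxplus over its neighborhood (Lemma 3.1)
    t = torch.where(mask.bool(), torch.tanh(0.5 * lam).expand_as(mask),
                    torch.ones_like(mask))
    logit = 2 * torch.atanh(t.prod(dim=-1).clamp(-1 + eps, 1 - eps))
    # a satisfied check (syndrome 0) should yield a large positive logit
    return nn.functional.binary_cross_entropy_with_logits(-logit,
                                                          syndrome.float())
```

The dense mask in the loss is only for readability; at scale it would be replaced by sparse gather/scatter operations over the Tanner graph.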
Let (64,16,16,16) denote the decoder consisting of 64 BP iterations followed by \(3\times 16\) BP iterations, each preceded by an intermediate GNN layer sharing the same weights. In Figure 3, one can see that three attempts, i.e. (64,16,16,16), are better than one attempt of (64,64), though the latter involves more BP iterations in total but fewer GNN layers. We choose (64,16,16,16) to be the final decoder. Hereby, steps 3 and 4 of the training improve the performance at low \(p\): as can be seen in Figure 3, using the coarse GNN results in an error floor at a logical error rate around \(10^{-6}\), while the finetuned version shows a significantly lower error floor.

## 5 Simulation

The message passing (MP) decoder used in this work is the sum-product (BP) version with flooding scheduling. All the computations are conducted using 32-bit floating-point precision. The LLR initialization for the first BP run uses a fixed \(p=0.05\), independent of the real physical error rate. The messages from CN to VN in Eq. (6,7) are multiplied by a normalization factor. After each GNN feedback, the BP takes this feedback as the channel LLR initialization and starts with re-initialized messages. The **same** GNN weights are used in each round of feedback if there are multiple; see Alg. 1 for the pseudo-code.

In [10], the authors showed that SI outperformed OSD-0 when **binary** message passing algorithms are used to obtain the soft information for both post-processing methods. In the original work [6], the quaternary normalized min-sum (NMS) was used, which shows a strong performance on the depolarizing channel (see Fig. (4b)). It is not clear how much improvement SI can gain when extended to the quaternary version, because its Tanner graph has unavoidable 4-cycles while the Tanner graph for \(H_{X}\) does not, and SI depends on the convergence of MP on the remaining Tanner graph.

Recently in [21], a random-order layered scheduled MP decoder was proposed for the type of QLDPC codes we are investigating in this work, for example, the \(\llbracket 882,24\rrbracket\) code. This decoder involves no post-processing. CN updates are indexed into layers, and the authors found that there exist 7 layers that cover each check exactly twice for this code. When using a perturbed (multiplying each CN message by a random factor in \(\{0.875,0.9275\}\)) normalized min-sum (NMS) as the MP decoder and randomizing the order of layer updates, they achieve very good results on the Z-noise channel (comparable to **binary** NMS+OSD in [10]). In addition, the decoding time is proportional to \(64\times 3.5=224\) steps for Z noise, where 64 is the number of iterations, and the fractional layer number 3.5 arises because CNs in different layers cannot be updated in parallel.

In [10, 21], the authors considered both sum-product (BP) and normalized min-sum (NMS) algorithms as message-passing decoders with various scheduling methods; we picked their best curves as a benchmark in Fig. (4b). We are not competing with them directly, as the comparison would be unfair to them, since they ignore the X and Z correlation when applying their algorithms. In fact, our methods could be used together, for example by replacing our flooding BP with their layered random-order perturbed NMS decoder, though this will require retraining the feedback GNN.

## 6 Conclusion

We proposed an iterative decoding scheme for quantum LDPC codes based on BP4 and GNN decoding stages. We train the GNN layers such that they learn to refine the channel input LLRs for subsequent BP decoding stages.
We demonstrated that our feedback GNN significantly lowers the error floor on two medium block-size QLDPC codes when carefully trained. The feedback GNN benefits from a computational latency of \(\mathcal{O}(1)\) when nodes are updated in parallel (and assuming a fixed number of iterations). However, in practice the complexity overhead of the feedback GNN is similar to a 16-iteration BP due to the computationally complex matrix multiplications in the MLPs. Furthermore, we proposed a new loss based on the boxplus operation that calculates a more accurate reliability estimate of the check nodes. We leave it open for future research to improve the training strategy for faster convergence. Further, optimizing the decoder architecture for real-time implementations remains an open challenge for future work.

Figure 3: Logical error rate of the \(\llbracket 1270,28,\leqslant 46\rrbracket\) code using feedback GNNs on the depolarizing channel. Comparison of the performance of the coarse and the refined GNN, trained on easy and mixed samples respectively.

## Appendix A Loss Function

Here we remark on the sine loss used in Neural BP [20] and the boxplus loss used herein. As mentioned in the main text, the sine loss has some unwanted oscillation behavior. However, it can be used to train the Neural BP decoder by initializing it such that it equals the classical BP decoder, i.e., by setting all the weights to one. Hence the initialization of Neural BP is already a good decoder and the loss is already small, and reasonable training will not end up at oscillating points of the sine loss. However, when using this loss to train a full-GNN message-passing decoder from scratch (as done in [12]), we empirically observed that the loss does not converge. Therefore, we propose the boxplus loss as a more stable substitute. Note that, as the channel posterior probabilities approach either \(0\) or \(1\), the sine loss is a good approximation to the boxplus loss: the latter is the analytical solution and involves the product of those posteriors, whereas the sine loss only involves the summation of those terms.

```
Algorithm: Sandwich-BP-GNN-Decoder(decoders)
Input : CSS code specified by \((H_{X},H_{Z})\) and a syndrome pair \((\mathbf{s}_{X},\mathbf{s}_{Z})\)
Output : \(\hat{\mathbf{e}}=(\hat{\mathbf{e}}_{X},\hat{\mathbf{e}}_{Z})\)
\(\mathbf{\Lambda}_{0}\leftarrow\log\frac{1-p}{p/3}\,\mathbf{1}_{N\times 3}\)
\((\hat{\mathbf{e}}_{X},\hat{\mathbf{e}}_{Z},\mathbf{\Lambda}_{\text{post}})\leftarrow\text{decoders}[0](\mathbf{\Lambda}_{0})\)
if \(H_{X}\hat{\mathbf{e}}_{Z}=\mathbf{s}_{Z}\) and \(H_{Z}\hat{\mathbf{e}}_{X}=\mathbf{s}_{X}\) then
    return \((\hat{\mathbf{e}}_{X},\hat{\mathbf{e}}_{Z})\)
end if
/* Begin post-processing. */
foreach decoder in decoders[1:] do
    \(\mathbf{\Lambda}^{\prime}\leftarrow\text{Feedback-GNN}(\mathbf{\Lambda}_{\text{post}})\)
    \((\hat{\mathbf{e}}_{X},\hat{\mathbf{e}}_{Z},\mathbf{\Lambda}_{\text{post}})\leftarrow\text{decoder}(\mathbf{\Lambda}^{\prime})\)
    if \(H_{X}\hat{\mathbf{e}}_{Z}=\mathbf{s}_{Z}\) and \(H_{Z}\hat{\mathbf{e}}_{X}=\mathbf{s}_{X}\) then
        return \((\hat{\mathbf{e}}_{X},\hat{\mathbf{e}}_{Z})\)
    end if
end foreach
return \((\hat{\mathbf{e}}_{X},\hat{\mathbf{e}}_{Z})\)
```

**Algorithm 1** Sandwiched BP GNN Decoding

Figure 4: Logical error rate of the \(\llbracket 1270,28,\leqslant 46\rrbracket\) and the \(\llbracket 882,24,\leqslant 24\rrbracket\) codes using various post-processing methods on the depolarizing channel. \(N_{a}\) is the maximum number of attempts. For our feedback GNNs, only the first block run of BP4 needs \(64\) iterations, while \(16\) iterations are enough for each post-processing block BP4 run.
For example, three attempts will involve \(64+16\times 3=112\) iterations of flooding BP in total. The factor \(\kappa=1.0\) or \(0.8\) was used for the first block run of BP; all later runs used \(1.0\). (**a**) All the curves except the four feedback GNN ones are taken from Fig. (3) of [6], where a \(32\)-iteration layered normalized min-sum (NMS) decoder with factor \(0.625\) was used as the MP decoder in every attempt. Our feedback GNN, which aims to capture the core ideas of random perturbation [8] and enhanced feedback [11], indeed outperforms these two methods at three attempts, even when they are repeated \(100\) times. (**b**) The random-order layered binary perturbed NMS [21] method was implemented for Z-noise, converted here to depolarizing noise using the \(2/3\) rule. Their total number of steps is \(64\times 3.5=224\) for Z noise, while ours is at most \(64+16\times 5=144\) for depolarizing noise. The stabilizer inactivation [10] methods use \(50\)-iteration serial BP2 in every stage.
2302.12357
Auto-HeG: Automated Graph Neural Network on Heterophilic Graphs
Graph neural architecture search (NAS) has gained popularity in automatically designing powerful graph neural networks (GNNs) with relieving human efforts. However, existing graph NAS methods mainly work under the homophily assumption and overlook another important graph property, i.e., heterophily, which exists widely in various real-world applications. To date, automated heterophilic graph learning with NAS is still a research blank to be filled in. Due to the complexity and variety of heterophilic graphs, the critical challenge of heterophilic graph NAS mainly lies in developing the heterophily-specific search space and strategy. Therefore, in this paper, we propose a novel automated graph neural network on heterophilic graphs, namely Auto-HeG, to automatically build heterophilic GNN models with expressive learning abilities. Specifically, Auto-HeG incorporates heterophily into all stages of automatic heterophilic graph learning, including search space design, supernet training, and architecture selection. Through the diverse message-passing scheme with joint micro-level and macro-level designs, we first build a comprehensive heterophilic GNN search space, enabling Auto-HeG to integrate complex and various heterophily of graphs. With a progressive supernet training strategy, we dynamically shrink the initial search space according to layer-wise variation of heterophily, resulting in a compact and efficient supernet. Taking a heterophily-aware distance criterion as the guidance, we conduct heterophilic architecture selection in the leave-one-out pattern, so that specialized and expressive heterophilic GNN architectures can be derived. Extensive experiments illustrate the superiority of Auto-HeG in developing excellent heterophilic GNNs to human-designed models and graph NAS models.
Xin Zheng, Miao Zhang, Chunyang Chen, Qin Zhang, Chuan Zhou, Shirui Pan
2023-02-23T22:49:56Z
http://arxiv.org/abs/2302.12357v1
# Auto-HeG: Automated Graph Neural Network on Heterophilic Graphs

###### Abstract.

Graph neural architecture search (NAS) has gained popularity in automatically designing powerful graph neural networks (GNNs) with relieving human efforts. However, existing graph NAS methods mainly work under the homophily assumption and overlook another important graph property, _i.e._, heterophily, which exists widely in various real-world applications. To date, automated heterophilic graph learning with NAS is still a research blank to be filled in. Due to the complexity and variety of heterophilic graphs, the critical challenge of heterophilic graph NAS mainly lies in developing the heterophily-specific search space and strategy. Therefore, in this paper, we propose a novel automated graph neural network on heterophilic graphs, namely **Auto-HeG**, to automatically build heterophilic GNN models with expressive learning abilities. Specifically, Auto-HeG incorporates heterophily into all stages of automatic heterophilic graph learning, including search space design, supernet training, and architecture selection. Through the diverse message-passing scheme with joint micro-level and macro-level designs, we first build a comprehensive heterophilic GNN search space, enabling Auto-HeG to integrate complex and various heterophily of graphs. With a progressive supernet training strategy, we dynamically shrink the initial search space according to layer-wise variation of heterophily, resulting in a compact and efficient supernet. Taking a heterophily-aware distance criterion as the guidance, we conduct heterophilic architecture selection in the leave-one-out pattern, so that specialized and expressive heterophilic GNN architectures can be derived. Extensive experiments illustrate the superiority of Auto-HeG in developing excellent heterophilic GNNs to human-designed models and graph NAS models.
graph neural architecture search, graph neural networks, heterophily, diverse message-passing, progressive supernet training

In contrast to graphs with homophily, heterophilic graphs expect discriminative node representation learning to diversely extract information from similar and dissimilar neighbors. Very recently, a few researchers have diverted their attention to developing heterophilic GNNs, typically by introducing higher-order neighbors (Wang et al., 2017) or modelling homophily and heterophily separately in the message passing scheme (Beng et al., 2017). Nevertheless, current heterophilic GNNs highly rely on the knowledge of experts to manually design models for graphs with different degrees of heterophily. For one thing, this process costs significant human effort, and the performance of the derived GNNs is limited by expertise. For another thing, real-world heterophilic graphs generally show significant complexity and variety in terms of graph structure, making it even harder to manually customize GNNs that adapt to various heterophilic graphs.
In light of this, automated graph neural network learning via neural architecture search (NAS) (Beng et al., 2017; Chen et al., 2017; Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019), a line of research for saving human efforts in designing effective GNNs, would be a feasible way of tackling the dilemma of heterophilic GNN development. Through learning in well-designed search spaces with efficient search strategies, automated GNNs via NAS have achieved promising progress on various graph data analysis tasks (Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019). However, existing graph NAS studies, _e.g._, GraphNAS (Li et al., 2019) and SANE (Li et al., 2019), are all constrained under the homophily assumption. To the best of our knowledge, there is still a research blank in developing heterophilic graph NAS for automated heterophilic graph learning.

The most straightforward way to implement heterophilic graph NAS might be replacing the homophilic operations in existing homophilic search spaces with heterophilic GNN operations, followed by universal gradient-based search strategies. But this direct solution would incur some issues: First, it is difficult to determine which heterophilic operations are beneficial and whether certain homophilic operations should be kept for facilitating heterophilic graph learning, since heterophilic graphs can be various and complex, with different degrees of heterophily. Second, simply making existing universal search strategies adapt to newly defined heterophilic search spaces would limit the searching performance, resulting in suboptimal heterophilic GNN architectures; the heterophily should be integrated into the searching process for architecture optimization as well.

To tackle all the above-mentioned challenges of heterophilic graph NAS, in this paper, we propose a novel automated graph neural network on heterophilic graphs, namely Auto-HeG, to effectively learn heterophilic node representations via heterophily-aware graph neural architecture search. To the best of our knowledge, this is the first automated heterophilic graph learning method, and our theme is to explicitly incorporate heterophily into all stages of automatic heterophilic graph learning, including search space design, supernet training, and architecture selection.

**For search space design:** By integrating joint micro-level and macro-level designs, Auto-HeG first builds a comprehensive heterophilic search space through the diverse message-passing scheme, enabling it to better incorporate the complexity and diversity of heterophily. At the micro-level, Auto-HeG conducts non-local neighbor extension, ego-neighbor separation, and diverse message passing; at the macro-level, it introduces adaptive layer-wise combination operations.

**For supernet training:** To narrow the scope of candidate operations in the proposed heterophilic search space, Auto-HeG presents a progressive supernet training strategy to dynamically shrink the initial search space according to the layer-wise variation of heterophily, resulting in a compact and efficient supernet.

**For architecture selection:** Taking heterophily as the specific guidance, Auto-HeG derives a novel heterophily-aware distance as the criterion to select effective operations in the leave-one-out pattern, leading to specialized and expressive heterophilic GNN architectures.
Extensive experiments on the node classification task illustrate the superior performance of the proposed Auto-HeG over human-designed models and graph NAS models. In summary, our contributions are listed as follows:

* We propose a novel automated graph neural network on heterophilic graphs by means of heterophily-aware graph neural architecture search, namely **Auto-HeG**, to the best of our knowledge, for the first time.
* To explicitly integrate the complex and various heterophily, we build a comprehensive heterophilic GNN search space that incorporates non-local neighbor extension, ego-neighbor separation, diverse message passing, and layer-wise combination at the micro-level and the macro-level.
* To learn a compact and effective heterophilic supernet, we introduce a progressive supernet training strategy to dynamically shrink the initial search space, enabling a narrowed searching scope with layer-wise heterophily variation.
* To select optimal GNN architectures specifically instructed by heterophily, we derive a heterophily-aware distance criterion to develop powerful heterophilic GNNs in the leave-one-out manner; extensive experiments verify the superiority of Auto-HeG.

## 2. Related Work

_Graph Neural Networks with Heterophily:_ Existing heterophilic GNNs mainly work on two aspects (Li et al., 2019): non-local neighbor extension (Beng et al., 2017; Chen et al., 2017; Li et al., 2019; Li et al., 2019) and GNN architecture refinement (Beng et al., 2017; Chen et al., 2017; Li et al., 2019; Li et al., 2019). In detail, non-local neighbor extension methods focus on exploring the informative neighbor set beyond the local topology, whereas GNN architecture refinement methods design heterophily-specific message-passing models to learn discriminative node representations. Typically, Mixhop (Beng et al., 2017) and H2GCN (Wang et al., 2017) introduced higher-order neighbor mixing of local one-hop neighbors and non-local \(K\)-hop neighbors to learn discriminative central node representations. Geom-GCN (Li et al., 2019) defined split 2D Euclidean geometric locations as geometric relationships with different degrees of heterophily, enabling it to discover potential neighbors for further effective aggregation. In contrast, FAGCN (Beng et al., 2017) developed diverse aggregation functions by introducing low-pass and high-pass filters, corresponding to learning homophily and heterophily, respectively. Besides, GCNII (Chen et al., 2017) and GPR-GNN (Wang et al., 2017) considered layer-wise representation integration to boost the performance of GNNs with heterophily. Despite this promising development, these heterophilic GNNs highly rely on expertise to manually design GNNs for tackling the complex and diverse heterophily. For one thing, this costs exhausting human effort; for another, the design scope and flexibility are constrained by expert knowledge, leading to limited model performance. In light of this, we are the first to propose an automated graph neural architecture search framework for heterophilic graphs, to significantly save human efforts and automatically derive powerful heterophilic GNN architectures with excellent learning abilities.
_Graph Neural Architecture Search:_ Graph NAS has greatly enlarged the design picture of automated GNN development for discovering excellent and powerful models (Garfani et al., 2017; Goyal et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2019; Goyal et al., 2019). Generally, the development of graph NAS methods focuses on two crucial research aspects: the search space and the search strategy. The former defines architecture and functional operation candidates in a set space, and the latter explores the powerful model components in the defined search space. Typically, GraphNAS (Goyal et al., 2017) and AGNN (Goyal et al., 2019) constructed micro-level search spaces containing classical GNN components and related hyper-parameters, followed by an architecture-controller-based reinforcement learning (RL) search strategy. SNAG (Goyal et al., 2019) further simplified GraphNAS at the micro-level and introduced macro-level inter-layer architecture connections. Based on this search space, SANE (Goyal et al., 2017) implemented DARTS (Goyal et al., 2017), a gradient-based search strategy, to automatically derive effective GNN architectures on graphs in a differential way. Nevertheless, current graph NAS methods are still constrained under the homophily assumption and cannot learn explicitly and effectively on graphs with complex and diverse heterophily. Therefore, we draw inspiration from the lines of NAS research on search space shrinking (Garfani et al., 2017; Goyal et al., 2018; Goyal et al., 2019), supernet optimization (Goyal et al., 2017; Goyal et al., 2019), and architecture selection (Goyal et al., 2019), and importantly extend them to the development of automated heterophilic GNNs. Our critical goal is to build a well-designed search space with a customized search strategy via graph NAS, to explicitly integrate the heterophily into all stages of automated heterophilic GNN development.

## 3. Auto-HeG: Automated Graph Neural Network on Heterophilic Graphs

### Preliminary.

_Uniform Message Passing._ Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be an undirected, unweighted graph where \(\mathcal{V}=\{v_{1},\cdots,v_{|\mathcal{V}|}\}\) is the node set and \(\mathcal{E}\in\mathcal{V}\times\mathcal{V}\) is the edge set. The neighbor set of node \(v\) is \(\mathcal{N}(v)=\{u:(v,u)\in\mathcal{E}\}\), and the initial node features are represented by \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d_{0}}\) with \(d_{0}\)-dimensional features. For the uniform message passing scheme under the homophily assumption, node representations of GNNs are learned by first aggregating the messages from local neighbors, and then updating the ego-node representations by combining the aggregated messages with themselves (Sutskever, 2017). This process can be denoted as:

\[\mathbf{m}_{v}^{(l)}=\text{AGG}^{(l)}((\mathbf{h}_{u}^{(l-1)}:u\in\mathcal{N}(v))),\]
\[\mathbf{h}_{v}^{(l)}=\text{UPDATE}^{(l)}(\mathbf{h}_{v}^{(l-1)},\mathbf{m}_{v}^{(l)}), \tag{1}\]

where \(\mathbf{m}_{v}^{(l)}\) and \(\mathbf{h}_{v}^{(l)}\) are the message vector and the representation vector of node \(v\) at the \(l\)-th layer, respectively, and \(\text{AGG}(\cdot)\) and \(\text{UPDATE}(\cdot)\) are the aggregation function and update function, respectively.
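As a minimal instance of Eq. (1), the following sketch uses mean aggregation for \(\text{AGG}(\cdot)\) and a single linear layer for \(\text{UPDATE}(\cdot)\); the edge-list layout is an illustrative assumption.

```
import torch
import torch.nn as nn

class MeanMessagePassing(nn.Module):
    # AGG = mean over N(v); UPDATE = linear map of [h_v || m_v], cf. Eq. (1)
    def __init__(self, d):
        super().__init__()
        self.update = nn.Linear(2 * d, d)

    def forward(self, h, edges):
        # h: (|V|, d) node states; edges: (E, 2) long tensor of (u, v) pairs,
        # with both directions included for an undirected graph
        u, v = edges[:, 0], edges[:, 1]
        m = torch.zeros_like(h).index_add_(0, v, h[u])        # sum over N(v)
        deg = torch.zeros(h.shape[0]).index_add_(0, v, torch.ones(len(v)))
        m = m / deg.clamp(min=1).unsqueeze(-1)                # mean aggregation
        return torch.relu(self.update(torch.cat([h, m], dim=-1)))
```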
Given the input of the first layer, the learned node representations with \(d_{1}\) dimensions at each layer of an \(L\)-layer GNN can be denoted as \(\mathbf{H}^{(l)}\in\mathbb{R}^{|\mathcal{V}|\times d_{1}}\) for \(l=\{1,2,\cdots,L\}\).

_Measure of Heterophily & Homophily._ In general, the heterophily and homophily of a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) can be measured by node homophily (Pei et al., 2020). Concretely, the node homophily \(\gamma_{node}\) is the average proportion of the neighbors with the same class as each node: \(\gamma_{node}=1/|\mathcal{V}|\sum_{v\in\mathcal{V}}(|\{u\in\mathcal{N}(v):y_{v}=y_{u}\}|/|\mathcal{N}(v)|)\). The range of \(\gamma_{node}\) is \([0,1]\). Graphs with strong homophily have higher \(\gamma_{node}\) (closer to \(1\)), whereas graphs with strong heterophily have smaller \(\gamma_{node}\) (closer to \(0\)).

### Heterophilic Search Space Design

To incorporate the complexity and variety of heterophily on graphs, we design a comprehensive heterophilic search space in the proposed Auto-HeG, involving joint micro-level and macro-level candidate operations. At the micro-level, the proposed search space contains three important components: non-local neighbor extension, ego-neighbor separation, and diverse message passing; the macro-level consists of intermediate layer combination functions and skip connections for integrating layer-wise node representations. This well-designed search space is the basis for building a supernet to search effective architectures on heterophilic graphs, as shown in Fig. 1, and we will discuss this further in Sec. 3.3. In the following, we mainly give detailed descriptions of the candidate operations in the proposed heterophilic search space, whose summary is listed in Table 1.

#### 3.2.1. Micro-level Design.

_Non-local Neighbor Extension._ Homophilic graphs generally take local nodes one hop away as neighbors of the ego node for message aggregation, since nodes of the same class are mainly located in its proximal topology. On the contrary, on heterophilic graphs, nodes within the same class can be far away from each other. That means considering only the one-hop-away neighbors would be insufficient to capture the heterophily on graphs explicitly. Hence, to break the local topology limitation under the homophily assumption, we extend local one-hop neighbors to non-local \(K\)-hop neighbors to incorporate heterophilic neighbor information. Concretely, we modify the uniform message passing scheme in Eq. (1) in terms of \(K\)-hop neighbors as follows:
\[\mathbf{m}_{v}^{(l)}=\text{AGG}^{(l)}(\{\mathbf{h}_{u}^{(l-1)}:u\in\mathcal{N}_{k}(v)\}),\,k=\{1,2,\cdots,K\}, \tag{2}\]
where \(\mathcal{N}_{k}(v)\) denotes the \(k\)-th hop neighbor set of node \(v\). In this way, the derived heterophilic GNNs can mix latent information from same-class neighbors at various distances in the graph topology. Specifically, to prevent multi-hop neighbors from causing an exponential explosion of the graph scale, we restrict the \(k\)-hop neighbor set to neighbors that are connected to the ego nodes by at least \(k\) different paths. For instance, a two-hop neighbor set contains neighbors that are two hops away from the ego nodes and have at least two different paths to reach them.
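Both \(\gamma_{node}\) and the restricted \(k\)-hop neighbor sets are simple to compute. The NumPy sketch below assumes a dense adjacency matrix and reads the restriction as requiring at least \(k\) distinct length-\(k\) walks; this interpretation and all names are illustrative.

```python
import numpy as np

def node_homophily(A, y):
    """gamma_node: average fraction of one-hop neighbors sharing the node's class."""
    ratios = []
    for v in range(len(y)):
        nbrs = np.flatnonzero(A[v])
        if len(nbrs) > 0:
            ratios.append(np.mean(y[nbrs] == y[v]))
    return float(np.mean(ratios))

def khop_neighbors(A, k):
    """Mask of k-hop neighbors reached by at least k distinct length-k walks
    and not reachable in fewer hops (one reading of the path restriction)."""
    walks = np.linalg.matrix_power(A, k)            # counts of length-k walks
    shorter = np.zeros_like(A, dtype=bool)
    for j in range(1, k):
        shorter |= np.linalg.matrix_power(A, j) > 0
    mask = (walks >= k) & ~shorter
    np.fill_diagonal(mask, False)                   # exclude the ego node itself
    return mask.astype(float)

y = np.array([0, 0, 1, 1])
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
print(node_homophily(A, y))   # 0.5: half of each node's neighbors share its class
print(khop_neighbors(A, 2))   # opposite corners become restricted 2-hop neighbors
```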
Note that, different from existing heterophilic GNNs (_e.g._, MixHop (Abu-El-Haija et al., 2019) and H2GCN (Zhu et al., 2020)) that introduce multi-hop neighbors at each layer and combine them in parallel, we relate the number of neighbor hops to the number of GNN layers correspondingly in the proposed Auto-HeG. The intuition behind this is to alleviate the complexity of the search space while keeping its effectiveness at the same time.

_Ego-neighbor Separation._ Despite capturing the messages from non-local neighbors, some local neighbors still have class labels dissimilar from those of the ego nodes. Hence, it is necessary to separate the ego node representations from their neighbor representations, as verified by H2GCN (Zhu et al., 2020). In this way, heterophilic GNNs can learn the ego node features by discriminatively combining the aggregated neighbor information with themselves at the following update step. At this point, we further modify the uniform message passing scheme at the aggregation step in Eq. (2) as
\[\mathbf{m}_{v}^{(l)}=\text{AGG}^{(l)}(\{\mathbf{h}_{u}^{(l-1)}:u\in\tilde{\mathcal{N}}_{k}(v)\}),\,k=\{1,2,\cdots,K\}, \tag{3}\]
where \(\tilde{\mathcal{N}}_{k}(v)\) denotes the neighbor set that excludes the ego node \(v\), _i.e._, removing self-loops from the original graph structure.

_Diverse Message Passing._ Given the extended non-local neighbor set, a diverse message-passing scheme is the core of tackling the heterophily on graphs. By distinguishing the information of similar neighbors (likely in the same class) from that of dissimilar neighbors (likely in different classes), heterophilic GNNs can aggregate discriminative messages from diverse neighbors. Typically, we decompose the edge-aware weights \(\mathbf{a}_{vu}\) into a homophily component \(\mathbf{a}_{vu}^{ho}\) and a heterophily component \(\mathbf{a}_{vu}^{he}\) on different neighbor representations \(\mathbf{h}_{u}\), respectively. Hence, we can obtain \(\mathbf{a}_{vu}=[\mathbf{a}_{vu}^{ho};\,\mathbf{a}_{vu}^{he}]\). Moreover, due to the complexity and variety of heterophily, there might be no adequate prior knowledge of the extent of heterophily on graphs, and certain homophilic aggregation functions, _e.g._, GAT (Veličković et al., 2018), also have the ability to impose discriminative weights on different neighbors. Hence, we first introduce ample heterophilic and homophilic aggregation functions for the comprehensiveness of the initial search space design, and adaptively learning a more compact search space is postponed to the later supernet training stage. Specifically, we introduce 18 homophilic and heterophilic aggregation functions, denoted as the diverse aggregation function set \(\mathcal{O}_{AGG}\) shown in Table 1. In this way, the heterophilic message-passing scheme can be denoted as:
\[\mathbf{m}_{v}^{(l)}=\mathcal{O}_{AGG}^{(l)}(\{\mathbf{a}_{vu}^{(l-1)}\mathbf{h}_{u}^{(l-1)}:u\in\tilde{\mathcal{N}}_{k}(v)\}),\,k=\{1,2,...,K\}. \tag{4}\]
In summary, the micro-level design of the proposed heterophilic search space mainly works on diverse message passing with extended local and non-local neighbors, along with separated ego and neighbor node representation learning. Such a design encourages heterophilic GNNs to adaptively incorporate the heterophily of graphs in each phase of message passing.

#### 3.2.2. Macro-level Design.
To integrate local and global information from different layers, combining intermediate layer representations and introducing skip connections have been verified as beneficial layer-wise operations in both human-designed GNNs (Wang et al., 2018) and automated GNNs (Wang et al., 2018). In light of this, to enable flexible GNN architecture design on heterophilic graphs, we introduce inter-layer connections and combinations into the macro-level search space. In this way, the automated heterophilic GNNs can capture hierarchical information at different layers of the network architecture. In detail, our macro-level space contains 5 candidate operations, \(\mathcal{O}_{MAC}=\{l\_skip,l\_zero,l\_concat,l\_max,l\_lstm\}\), so that the final output of a heterophilic GNN can be denoted as:
\[\mathbf{h}_{out}=\mathcal{O}_{MAC}[\mathbf{h}^{(1)},\mathbf{h}^{(2)},\cdots,\mathbf{h}^{(L)}]. \tag{5}\]
With the joint micro-level and macro-level candidate operations, the proposed heterophilic search space significantly enlarges the design scope and the flexibility in developing GNN architectures on graphs with heterophily.

### Progressive Heterophilic Supernet Training

Based on the proposed heterophilic search space, we build a one-shot heterophilic supernet as shown in Fig. 1. Given the heterophilic graph with input features \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d_{0}}\) and adjacency matrix \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), the proposed Auto-HeG supernet first implements a linear layer, _i.e._, a Multi-Layer Perceptron (MLP), to obtain the initial node embedding \(\mathbf{H}^{(0)}\in\mathbb{R}^{|\mathcal{V}|\times d_{1}}\). Then, the node embedding \(\mathbf{H}^{(0)}\) and the adjacency matrix \(\mathbf{A}\) are fed into the first layer containing micro-level candidate operations from the \(\mathcal{O}_{AGG}\) set, leading to the output node representation \(\mathbf{H}^{(1)}\). Next, \(\mathbf{H}^{(1)}\) is fed into the macro-level \(\mathcal{O}_{MAC}\) for the layer-wise connection. Meanwhile, \(\mathbf{H}^{(1)}\) and the two-hop adjacency matrix \(\mathbf{A}^{2}\) become the inputs of the next micro-level layer. The above process repeats layer by layer until the final node representations are passed to the classifier to obtain the predictions of node classes.

\begin{table}
\begin{tabular}{l|l|l}
\hline Search Space & Modules & Operations \\ \hline \hline
\multirow{2}{*}{Micro-level} & Neighbors & \(\{A,A^{2},\cdots,A^{K}\}\) \\ \cline{2-3}
 & \(\mathcal{O}_{AGG}\) & 18 homophilic and heterophilic aggregation functions \\ \hline
Macro-level & \(\mathcal{O}_{MAC}\) & \(\{l\_skip,l\_zero,l\_concat,l\_max,l\_lstm\}\) \\ \hline
\end{tabular}
\end{table} Table 1. Summary of the proposed heterophilic search space.

Even though the proposed search space contains as many candidate operations as possible, real-world heterophilic graphs usually show significant differences in complexity and diversity, and we still lack prior knowledge on which heterophilic and homophilic aggregations are beneficial for heterophilic graph learning. The proposed initial search space might be comprehensive, but it severely challenges the effectiveness of supernet training for automatically deriving expressive GNN architectures, especially when the heterophily varies across graphs. In light of this, a possible solution emerges: why not let the proposed supernet adaptively keep the beneficial operations and drop the irrelevant counterparts as the iterative learning process goes on?
This would contribute to a more compact supernet with relevant candidate operations specifically driven by the current heterophilic graphs and tasks, leading to more effective heterophilic GNN architecture development. Moreover, considering that some operations will never be selected by certain layers in the final architectures, as mentioned by (Wang et al., 2018), this solution lets different layers flexibly customize their own layer-wise candidate operation sets according to the heterophily variation, rather than sharing the entire fixed and large search space across all layers. Therefore, to derive a more compact heterophilic one-shot supernet for efficient architecture design, we present a progressive training strategy to dynamically shrink the initial search space and adaptively design the candidate operation space in each layer.

Specifically, let \(\mathcal{S}_{0}=\left(\mathcal{E}_{0}^{\mathcal{S}},\mathcal{N}_{0}^{\mathcal{S}}\right)\) denote the initial heterophilic supernet constructed from the proposed entire search space \(\mathcal{O}\), where \(\mathcal{E}_{0}^{\mathcal{S}}\) and \(\mathcal{N}_{0}^{\mathcal{S}}\) are the edge set and the node set, respectively. We first train \(\mathcal{S}_{0}(\mathbf{\alpha},\mathbf{w})\) for several steps of stochastic differentiable search based on the bi-level optimization:
\[\begin{split}\min_{\mathbf{\alpha}}&\quad\mathcal{L}_{val}\left(\mathbf{w}^{*}(\mathbf{\alpha}),\mathbf{\alpha}\right),\\ s.t.&\quad\mathbf{w}^{*}(\mathbf{\alpha})=\operatorname*{argmin}_{\mathbf{w}}\mathcal{L}_{train}\left(\mathbf{w},\mathbf{\alpha}\right),\end{split} \tag{6}\]
which conducts iterative learning of the architecture weights \(\mathbf{\alpha}\) and the model parameters \(\mathbf{w}\). Let \(o\in\mathcal{O}\) denote a certain operation in the heterophilic search space \(\mathcal{O}\), and let a node pair \((\mathbf{x}_{i},\mathbf{x}_{j})\) denote the latent vectors from the \(i\)-th node to the \(j\)-th node of the supernet. The learned architecture weights are \(\mathbf{\alpha}=\left\{a_{o}^{(i,j)}|o\in\mathcal{O},(i,j)\in\mathcal{N}_{0}^{\mathcal{S}}\right\}\), and the operation-specific weight vector \(a_{o}^{(i,j)}\) can be derived as:
\[\tilde{o}^{(i,j)}(\mathbf{x})=\sum_{o\in\mathcal{O}}\frac{\exp\left[\left(\log a_{o}^{(i,j)}+u_{o}^{(i,j)}\right)/\tau\right]}{\sum_{o^{\prime}\in\mathcal{O}}\exp\left[\left(\log a_{o^{\prime}}^{(i,j)}+u_{o^{\prime}}^{(i,j)}\right)/\tau\right]}\,\cdot\,o(\mathbf{x}) \tag{7}\]
where \(\tilde{o}^{(i,j)}\) is the mixed operation of the edge between \((i,j)\), and \(u_{o}^{(i,j)}=-\log\left(-\log\left(U\right)\right)\) with \(U\sim\operatorname{Uniform}(0,1)\). \(\tau\) is the temperature factor that controls the extent of the continuous relaxation; when \(\tau\) is closer to \(0\), the weights become closer to discrete one-hot values. To further exploit the relevance and importance of candidate operations to heterophily, we rank the obtained \(\mathbf{\alpha}\) layer by layer based on its magnitudes and drop the last \(C\) irrelevant operations, _i.e._, cutting \(C\) edges in each layer at the current supernet training stage. By repeating the above process for \(T\) iterations, we gradually and dynamically shrink the initial search space and compress the supernet, leading to a compact and effective supernet \(\mathcal{S}_{c}\) with customized layer-wise heterophilic candidate operation sets. The overall process of the proposed progressive supernet training strategy is summarized in Algo. 1.
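To make the shrinking loop concrete, the toy NumPy sketch below mixes operation weights with the Gumbel-softmax relaxation of Eq. (7) and, after each training phase, drops the \(C\) lowest-magnitude operations per layer. The surrogate "score" stands in for real supernet gradients, so treat this purely as a schematic of the control flow under stated assumptions, not the actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(alpha, tau=1.0):
    """Relaxed operation weights of Eq. (7) for one supernet edge."""
    g = -np.log(-np.log(rng.uniform(size=alpha.shape)))   # Gumbel(0, 1) noise
    z = (np.log(alpha) + g) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

n_layers, n_ops, C, T = 3, 8, 2, 3
spaces = [list(range(n_ops)) for _ in range(n_layers)]    # per-layer candidate sets
alpha = [np.full(n_ops, 1.0 / n_ops) for _ in range(n_layers)]

for t in range(T):
    for _ in range(50):                                   # surrogate inner training
        for l, ops in enumerate(spaces):
            w = gumbel_softmax(alpha[l][ops])
            # stand-in signal: pretend low op indices fit this layer better
            score = -np.array(ops, dtype=float) + rng.normal(scale=0.1, size=len(ops))
            upd = alpha[l][ops] * np.exp(0.05 * w * score)
            alpha[l][ops] = upd / upd.sum()
    for l in range(n_layers):                             # drop C weakest ops per layer
        order = np.argsort(alpha[l][spaces[l]])
        spaces[l] = [spaces[l][i] for i in order[C:]]

print(spaces)   # shrunken layer-wise candidate operation sets
```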
### Heterophily-guided Architecture Selection

With the magnitudes of the architecture weights as the metric, Auto-HeG progressively learns a compact supernet with layer-wise search spaces driven by the heterophily variation. Consider an extreme case: the supernet shrinks gradually until every edge of it keeps only one operation. Basically, that is the general strategy used by current graph NAS methods, _i.e._, the argmax-based architecture selection scheme. That means only the operations corresponding to the maximum edge weights in the architecture supernet would be preserved by such a selection scheme to build the ultimate GNNs. However, very recent research (Wang et al., 2021) has verified that the architecture weight magnitude is not effective enough to indicate the operation strength in the final GNN architecture selection. That is why we use this strategy only at the beginning, to generally narrow the scope of candidate operations in the proposed search space. Therefore, to select powerful GNN architectures specifically instructed by heterophily, we derive a heterophily-aware distance as the criterion to guide heterophilic GNN architecture selection. Inspired by the perturbation-based architecture selection scheme in (Wang et al., 2021), we implement a leave-one-out manner to directly evaluate the contribution of each candidate operation to the compact supernet's performance.

Specifically, the proposed heterophily-aware distance \(D_{hete}\) is defined with the Euclidean distance as \(D_{hete}=||\hat{\mathcal{H}}-\mathcal{H}||^{2}\), where \(\mathcal{H}\) and \(\hat{\mathcal{H}}\) are the heterophilic matrices denoted as:
\[\mathcal{H}=\left(Y^{T}AY\right)\oslash\left(Y^{T}AE\right),\ \hat{\mathcal{H}}=\left(\hat{Y}^{T}A\hat{Y}\right)\oslash\left(\hat{Y}^{T}AE\right), \tag{8}\]
where \(Y\in\mathbb{R}^{|\mathcal{V}|\times p}\) is the ground-truth label matrix of the heterophilic graph with \(p\) classes of nodes, \(E\in\mathbb{R}^{|\mathcal{V}|\times p}\) is an all-ones matrix, and \(\oslash\) denotes the Hadamard division operation. Concretely, \(\mathcal{H}_{i,j}\) indicates the connection probability of nodes (Hinsel and Karim, 2015; Karim, 2015) between the \(i\)-th class and the \(j\)-th class under the ground-truth labels, and \(\hat{\mathcal{H}}_{i,j}\) works in the same way but with the predicted labels \(\hat{Y}\in\mathbb{R}^{|\mathcal{V}|\times p}\), _i.e._, the output of the compact heterophilic supernet. The proposed heterophily-aware distance \(D_{hete}\) explicitly constrains the connection probability of nodes in any two classes predicted by the supernet to be close to that of the ground truth, and a smaller \(D_{hete}\) indicates a better discriminative ability of a certain candidate operation for heterophilic node classes. Taking \(D_{hete}\) as the guidance, we select the optimal heterophilic GNN architecture from the pretrained compact supernet \(\mathcal{S}_{c}\), and the whole process is illustrated in Algo. 2.

In summary, Auto-HeG first builds a comprehensive heterophilic search space, then progressively shrinks the initial search space layer by layer, leading to a more compact supernet via Algo. 1, and finally selects the ultimate heterophilic GNN architecture with the guidance of the heterophily-aware distance in Algo. 2. Consequently, Auto-HeG can automatically derive powerful and expressive GNN models for learning discriminative node representations on heterophilic graphs effectively.
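Eq. (8) amounts to a few lines of linear algebra. A minimal NumPy sketch, assuming dense adjacency and one-hot label matrices, is given below; the guard on the Hadamard division is an implementation detail added here for numerical safety.

```python
import numpy as np

def hete_matrix(Y, A):
    """Class-to-class connection probabilities H = (Y^T A Y) / (Y^T A E), Eq. (8)."""
    E = np.ones_like(Y)
    num = Y.T @ A @ Y
    den = Y.T @ A @ E
    return num / np.clip(den, 1e-12, None)   # Hadamard division, guarded

def hete_distance(Y_true, Y_pred, A):
    """Heterophily-aware distance D_hete = ||H_hat - H||^2."""
    d = hete_matrix(Y_pred, A) - hete_matrix(Y_true, A)
    return float((d ** 2).sum())

# Toy check: identical predictions give distance 0.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
Y = np.eye(2)[[0, 1, 1]]          # one-hot labels for 3 nodes, 2 classes
print(hete_distance(Y, Y, A))     # 0.0
```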
## 4. Experiments

In this section, we first provide the experimental setting details. Then, we compare the proposed Auto-HeG with state-of-the-art human-designed and graph NAS models on the node classification task. Finally, we conduct ablation studies to evaluate the effectiveness of each component in Auto-HeG, including the heterophilic search space, the progressive heterophilic supernet training, and the heterophily-guided architecture selection.

### Experimental Setting

_Datasets and Baselines._ We conduct experiments on seven real-world datasets with different degrees of heterophily (\(\gamma_{node}\), denoted in brackets), where Cornell (0.11), Texas (0.06), and Wisconsin (0.16) are three WebKB web-page datasets (Dai et al., 2017), and Actor (0.24) is an actor co-occurrence network (Zhu et al., 2018). These four high-heterophily datasets are taken from (Zhu et al., 2018). Furthermore, Cora (0.83), Citeseer (0.71), and Pubmed (0.79) (Peters et al., 2019) are three citation network datasets with low heterophily. For comparisons on node classification, we take H2GCN-1 and H2GCN-2 (Zhu et al., 2020), MixHop (Abu-El-Haija et al., 2019), GPR-GNN (Chien et al., 2021), GCNII (Chen et al., 2020), Geom-GCN-I, Geom-GCN-P, and Geom-GCN-S (Pei et al., 2020), FAGCN (Bo et al., 2021), as well as the classical GCN (Kipf and Welling, 2017), GAT (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and SGC (Wu et al., 2019) as human-designed model baselines for the high-heterophily and low-heterophily datasets, respectively. Furthermore, we take GraphNAS (Gao et al., 2020), SNAG (Zhao et al., 2020), and SANE (Zhao et al., 2021) as graph NAS model baselines. To test the performance of simply modifying existing graph NAS search spaces, we develop 'SANE-hete' as another baseline by directly injecting heterophilic aggregation functions into the search space of SANE (Zhao et al., 2021).

_Implementation._ All experiments run on the PyTorch platform with Quadro RTX 6000 GPUs, and the core code is built on the PyG library (Fey and Lenssen, 2019). Following the setting in SANE (Zhao et al., 2021), at the search stage, we search with different random seeds and select the best architecture according to the performance on the validation data. At the train-from-scratch stage, we finetune hyper-parameters on the validation data within fixed ranges. For all datasets, we use the same splits as in Geom-GCN (Pei et al., 2020) and evaluate the performance of all models on the test sets over the provided 10 splits for fair comparisons. We report the mean classification accuracy with standard deviation as the performance metric for node classification.

### Experimental Results on Node Classification

The node classification results of the proposed Auto-HeG and the comparison methods on the high-heterophily and low-heterophily datasets are shown in Table 2 and Table 3, respectively, and the details of the derived architectures on all datasets are provided in the Appendix. As shown in Table 2, it can generally be observed that the proposed Auto-HeG achieves the best performance when learning on high-heterophily graphs compared with human-designed and graph NAS models. Specifically, Auto-HeG exceeds the second-best model, _i.e._, the human-designed H2GCN, with 1.35%, 1.90%, 1.80%, and 1.57% performance improvements on the Cornell, Texas, Wisconsin, and Actor datasets, respectively.
Compared with graph NAS models, Auto-HeG first greatly outperforms all existing homophilic graph NAS models on all datasets. We attribute this to the elaborate incorporation of heterophily in our Auto-HeG, especially for graphs with high heterophily. Moreover, the superiority of Auto-HeG over SANE-hete verifies that simply injecting heterophilic aggregation functions into an existing homophilic search space is not an effective solution. This further illustrates the effectiveness of the proposed progressive heterophilic supernet training and heterophily-guided architecture selection in our Auto-HeG, and shows that incorporating heterophily into all stages of automatic heterophilic GNN design indeed benefits learning on graphs with heterophily.

For the comparisons on the low-heterophily datasets in Table 3, our Auto-HeG outperforms most human-designed classical GNN models and graph NAS models, further illustrating its ability to analyze graphs even with low heterophily. We attribute these results to two sub-modules in our Auto-HeG: the proposed search space, which keeps many homophilic aggregation functions, and the derived progressive supernet training strategy, which customizes layer-wise aggregation functions. These components enable the flexible construction of GNN architectures, leading to superior performance even on low-heterophily graphs. Therefore, we can conclude that even without prior information on the degree of heterophily, the proposed Auto-HeG progressively and adaptively selects appropriate candidate operations along with the variation of heterophily, leading to consistently impressive performance on graphs with both high and low heterophily. This reflects the excellent ability of our Auto-HeG to deal with complex and various heterophily.

### Ablation Study

#### 4.3.1. Effectiveness of heterophilic search space

We derive three variants of the proposed heterophilic search space to verify its effectiveness. Denoting the overall heterophilic search space as \(\mathcal{O}_{all}\), we consider the following subsets: (1) \(\widetilde{\mathcal{O}}_{homo}\): only keep homophilic aggregation functions and remove all heterophilic ones from \(\mathcal{O}_{all}\), to verify the importance of integrating heterophilic operations; (2) \(\widetilde{\mathcal{O}}_{hete}\): only keep heterophilic aggregation functions and remove all homophilic ones from \(\mathcal{O}_{all}\), to observe the performance when the search space only contains heterophilic operations; (3) \(\widetilde{\mathcal{O}}_{he\&ho}\): randomly remove several heterophilic and homophilic aggregation functions from \(\mathcal{O}_{all}\), to illustrate the effectiveness of simultaneously involving heterophilic and homophilic operations. Specifically, we randomly sample 20 architectures from each subset and report the max performance and the average performance among them in Fig. 2. The following observations can be made: (1) No best results are achieved with \(\widetilde{\mathcal{O}}_{homo}\) on any dataset in either the max or the average case, which naturally verifies the importance of heterophilic operations for heterophilic graph learning; (2) \(\widetilde{\mathcal{O}}_{hete}\) and \(\widetilde{\mathcal{O}}_{he\&ho}\) generally perform better than the other search space subsets, which illustrates the necessity of involving appropriate homophilic operations.
Moreover, this shows that merely involving heterophilic operations is not enough for learning with complex and diverse heterophily; (3) The overall search space \(\mathcal{O}_{all}\) performs the best only on the Texas and Actor datasets in the max case. This shows that \(\mathcal{O}_{all}\) cannot achieve consistently best performance under the random sampling scenario, even with the largest number of candidate operations, and it also verifies that developing a more compact heterophilic search space is necessary and crucial. At this point, it can be concluded that the proposed progressive supernet training strategy is effective, and we attribute this to its adaptive and dynamic layer-wise search space shrinking scheme.

#### 4.3.2. Effectiveness of progressive supernet training

The comparison results with and without the layer-wise search space shrinking in the proposed progressive supernet training strategy are listed in Fig. 3. It can generally be observed that, on both high-heterophily and low-heterophily datasets, the compact supernet consistently achieves better performance with the proposed supernet training strategy.

\begin{table}
\begin{tabular}{l|l|c c c c}
\hline
Methods & Datasets & Cornell & Texas & Wisconsin & Actor \\ \hline \hline
\multirow{9}{*}{Human-designed models} & H2GCN-1* & 82.16\(\pm\)4.80 & 84.86\(\pm\)6.77 & 86.67\(\pm\)4.69 & 35.86\(\pm\)1.03 \\
 & H2GCN-2* & 82.16\(\pm\)6.00 & 82.16\(\pm\)5.28 & 85.88\(\pm\)4.22 & 35.62\(\pm\)1.30 \\
 & MixHop* & 73.51\(\pm\)6.34 & 77.84\(\pm\)7.73 & 75.88\(\pm\)4.90 & 32.22\(\pm\)2.34 \\
 & GPR-GNN & 81.89\(\pm\)5.93 & 83.24\(\pm\)4.95 & 84.12\(\pm\)3.45 & 35.27\(\pm\)1.94 \\
 & GCNII* & 76.49 & 77.84 & 81.57 & - \\
 & Geom-GCN-I* & 56.76 & 57.58 & 58.24 & 29.09 \\
 & Geom-GCN-P* & 60.81 & 67.57 & 64.12 & 31.63 \\
 & Geom-GCN-S* & 55.68 & 59.73 & 56.67 & 30.30 \\
 & FAGCN & 81.35\(\pm\)5.05 & 84.32\(\pm\)6.02 & 83.33\(\pm\)2.01 & 35.74\(\pm\)6.52 \\ \hline
\multirow{4}{*}{Graph NAS models} & GraphNAS & 58.11\(\pm\)3.87 & 54.86\(\pm\)6.98 & 56.67\(\pm\)2.99 & 25.47\(\pm\)1.32 \\
 & SNAG & 57.03\(\pm\)3.48 & 62.70\(\pm\)5.52 & 62.16\(\pm\)4.63 & 27.84\(\pm\)1.29 \\
 & SANE & 56.76\(\pm\)5.51 & 66.22\(\pm\)10.62 & 86.67\(\pm\)5.02 & 33.41\(\pm\)1.41 \\
 & SANE-hete & 77.84\(\pm\)5.51 & 77.84\(\pm\)7.51 & 83.92\(\pm\)2.28 & 35.88\(\pm\)1.30 \\ \hline
 & **Auto-HeG (ours)** & **83.51\(\pm\)6.36** & **86.76\(\pm\)4.60** & **87.84\(\pm\)3.59** & **37.43\(\pm\)1.37** \\ \hline
\end{tabular}
\end{table} Table 2. Performance (ACC%\(\pm\)std) of the proposed Auto-HeG compared with human-designed and graph NAS models on high-heterophily datasets. The best results are in bold and the second-best results are underlined. Superscript * denotes officially reported results with the same dataset splits, where Geom-GCN and GCNII do not provide the std; the remaining entries are our reproduced results where official methods were not tested under the same dataset splits.
\begin{table}
\begin{tabular}{l|l|c c c}
\hline
Methods & Datasets & Cora & Citeseer & Pubmed \\ \hline \hline
\multirow{8}{*}{Human-designed models} & GCN & 85.69\(\pm\)1.80 & 75.38\(\pm\)1.75 & 86.08\(\pm\)6.4 \\
 & GAT & 86.52\(\pm\)1.41 & 75.51\(\pm\)1.85 & 84.75\(\pm\)0.51 \\
 & GraphSAGE & 80.60\(\pm\)3.63 & 67.18\(\pm\)5.46 & 81.18\(\pm\)1.12 \\
 & SGC & 85.88\(\pm\)3.61 & 73.86\(\pm\)1.73 & 84.87\(\pm\)2.81 \\
 & GCNII* & 88.01 & 77.13 & 90.30 \\
 & Geom-GCN-I* & 85.19 & 77.99 & 90.05 \\
 & Geom-GCN-P* & 84.93 & 75.14 & 88.09 \\
 & Geom-GCN-S* & 85.27 & 74.71 & 84.75 \\ \hline
\multirow{4}{*}{Graph NAS models} & GraphNAS & 84.10\(\pm\)0.79 & 68.83\(\pm\)2.09 & 82.28\(\pm\)0.64 \\
 & SNAG & 81.01\(\pm\)1.31 & 70.14\(\pm\)2.00 & 83.24\(\pm\)0.84 \\
 & SANE & 84.25\(\pm\)1.82 & 74.33\(\pm\)1.54 & 87.82\(\pm\)0.57 \\
 & SANE-hete & 85.05\(\pm\)0.90 & 74.46\(\pm\)1.59 & 88.99\(\pm\)0.42 \\ \hline
 & **Auto-HeG (ours)** & 86.88\(\pm\)1.10 & 75.81\(\pm\)1.52 & 89.29\(\pm\)0.27 \\ \hline
\end{tabular}
\end{table} Table 3. Performance (ACC%\(\pm\)std) of the proposed Auto-HeG compared with human-designed and graph NAS models on low-heterophily datasets.

For example, the compact supernet with progressive training significantly raises the performance on Cornell from 75.68% to 83.51% and on Texas from 72.97% to 86.76%. We attribute such impressive improvements to the excellent ability of our Auto-HeG to build an effective and adaptive supernet. Without progressive training, the initial search space would keep a large number of operations that might be irrelevant and redundant for the specific graphs with heterophily, which poses serious challenges for the search strategy to optimize in such a large supernet. At this point, Auto-HeG progressively shrinks the search space layer-wisely and dynamically narrows the scope of relevant candidate operations, resulting in a more compact and effective supernet for deriving powerful heterophilic GNNs.

#### 4.3.3. Effectiveness of heterophily-guided architecture selection

We compare the proposed heterophily-guided architecture selection with two other types of architecture selection methods: the architecture weight magnitude based argmax strategy, which is the most commonly used method in existing gradient-based graph NAS (He et al., 2017; Zhang et al., 2018), and the perturbation-based architecture selection method with the validation loss criterion proposed by (Wang et al., 2021). The comparison results are listed in Table 4. Generally, our proposed heterophily-guided scheme with the heterophily-aware distance criterion attains the best classification performance on all datasets consistently, illustrating its effectiveness in expressive architecture selection. Furthermore, we can observe that the architecture weight magnitude based argmax strategy, _i.e._, 'Argmax Arch. Select.', performs better than the perturbation-based architecture selection method with the validation loss criterion, _i.e._, 'Val. Loss Arch. Select.'. Even though the work (Wang et al., 2021) has verified the effectiveness of 'Val. Loss Arch. Select.' for CNN-based NAS in the leave-one-out manner, we find, more importantly, that it might not adapt well to GNN-based NAS when the validation loss based criterion is implemented straightforwardly. This fact further illustrates the effectiveness of the proposed heterophily-aware distance criterion in the leave-one-out manner for GNN-based NAS with heterophily.
## 5. Conclusion

In this paper, we propose a novel automated graph neural network on heterophilic graphs via heterophily-aware graph neural architecture search, namely Auto-HeG, which is the first work on automatic heterophilic graph learning. By explicitly incorporating heterophily into all stages, _i.e._, search space design, supernet training, and architecture selection, Auto-HeG can develop powerful heterophilic GNNs to deal with the complexity and variety of heterophily effectively. To build a comprehensive heterophilic GNN search space, Auto-HeG includes non-local neighbor extension, ego-neighbor separation, diverse message passing, and layer-wise combination at both the micro-level and the macro-level. To develop a compact and effective heterophilic supernet based on the initial search space, Auto-HeG conducts the progressive supernet training strategy to dynamically shrink the scope of candidate operations according to the layer-wise heterophily variation. In the end, taking the heterophily-aware distance criterion as the guidance, Auto-HeG selects excellent heterophilic GNN architectures by directly evaluating the contribution of each operation in the leave-one-out manner. Extensive experiments verify the superiority of the proposed Auto-HeG in learning on graphs with heterophily over both human-designed models and graph NAS models.

\begin{table}
\begin{tabular}{l|c c c c|c c c}
\hline
\multirow{2}{*}{Arch. Select. Methods} & \multicolumn{4}{c|}{_High-heterophily Datasets_} & \multicolumn{3}{c}{_Low-heterophily Datasets_} \\ \cline{2-8}
 & Cornell & Texas & Wisconsin & Actor & Cora & Citeseer & Pubmed \\ \hline
Argmax Arch. Select. & 79.19\(\pm\)3.38 & 79.46\(\pm\)3.67 & 86.27\(\pm\)4.02 & 37.12\(\pm\)1.12 & 85.33\(\pm\)1.46 & 75.43\(\pm\)2.24 & 88.52\(\pm\)0.41 \\
Val. Loss Arch. Select. & 57.84\(\pm\)3.67 & 71.35\(\pm\)5.30 & 80.59\(\pm\)7.09 & 36.96\(\pm\)1.08 & 84.67\(\pm\)1.64 & 73.86\(\pm\)1.11 & 88.23\(\pm\)0.53 \\ \hline
**Heter. Arch. Select. (ours)** & **83.51\(\pm\)6.56** & **86.76\(\pm\)4.60** & **87.84\(\pm\)3.59** & **37.43\(\pm\)1.37** & **86.88\(\pm\)1.10** & **75.81\(\pm\)1.52** & **89.29\(\pm\)0.27** \\ \hline
\end{tabular}
\end{table} Table 4. Comparison of the proposed heterophily-guided architecture selection scheme (Heter. Arch. Select.) with other architecture selection methods.

Figure 3. Performance w/ and w/o shrinking (srk) for the proposed progressive supernet training.

Figure 2. Illustration of the effectiveness of the proposed search space by evaluating search space variants.

## Acknowledgment

This work was partially supported by an Australian Research Council (ARC) Future Fellowship (FT210100097), the Guangdong Provincial Natural Science Foundation under grant NO. 2022A1515010129, and the CAS Project for Young Scientists in Basic Research (YSBR-008).
2303.15739
Bayesian Free Energy of Deep ReLU Neural Network in Overparametrized Cases
In many research fields in artificial intelligence, it has been shown that deep neural networks are useful to estimate unknown functions on high dimensional input spaces. However, their generalization performance is not yet completely clarified from the theoretical point of view because they are nonidentifiable and singular learning machines. Moreover, a ReLU function is not differentiable, so algebraic or analytic methods in singular learning theory cannot be applied to it. In this paper, we study a deep ReLU neural network in overparametrized cases and prove that the Bayesian free energy, which is equal to the minus log marginal likelihood or the Bayesian stochastic complexity, is bounded even if the number of layers is larger than necessary to estimate an unknown data-generating function. Since the Bayesian generalization error is equal to the increase of the free energy as a function of the sample size, our result also shows that the Bayesian generalization error does not increase even if a deep ReLU neural network is designed to be sufficiently large or in an overparametrized state.
Shuya Nagayasu, Sumio Watanabe
2023-03-28T05:27:32Z
http://arxiv.org/abs/2303.15739v3
# Bayesian Free Energy of Deep ReLU Neural Network in Overparametrized Cases

###### Abstract

In many research fields in artificial intelligence, it has been shown that deep neural networks are useful to estimate unknown functions on high dimensional input spaces. However, their generalization performance is not yet completely clarified from the theoretical point of view because they are nonidentifiable and singular learning machines. Moreover, a ReLU function is not differentiable, so algebraic or analytic methods in singular learning theory cannot be applied to it. In this paper, we study a deep ReLU neural network in overparametrized cases and prove that the Bayesian free energy, which is equal to the minus log marginal likelihood or the Bayesian stochastic complexity, is bounded even if the number of layers is larger than necessary to estimate an unknown data-generating function. Since the Bayesian generalization error is equal to the increase of the free energy as a function of the sample size, our result also shows that the Bayesian generalization error does not increase even if a deep ReLU neural network is designed to be sufficiently large or in an overparametrized state.

## 1 Introduction

Deep neural networks are now being used in many fields, for example, pattern recognition, robotic control, bioinformatics, data science, and time series prediction. Their high performance has been shown in many experiments; however, the mathematical foundation for studying them is not yet completely established. It is well known that the generalization ability of machine learning is basically determined by two elements, bias and variance. The bias and the variance are determined by the approximation ability and the complexity of the model, respectively. If a machine learning model has a larger number of parameters, the bias gets smaller and the variance gets larger in general. This is known as the bias-variance tradeoff. Although deep neural networks in practical use have huge numbers of parameters, they generalize well, and the influence of the variance is small compared with such numbers of parameters. The reason for this phenomenon is not clear, because deep neural networks are nonidentifiable and singular [1][2]. A learning machine is called identifiable if the map from a parameter to a probability distribution is one-to-one, and regular if it has a positive-definite Fisher information matrix. If a learning machine is identifiable and regular, then regular statistical theory holds [3], with the results that asymptotic normality of the maximum likelihood estimator holds and that AIC [4] and BIC [5] can be employed in model selection problems. However, if learning machines contain hierarchical structure or hidden variables, they are nonidentifiable and singular, for example, layered neural networks [6, 7], normal mixtures [8, 7], matrix factorizations [9], reduced rank regressions [10], Poisson mixtures [11], latent Dirichlet allocation [12], and Boltzmann machines [13, 14]. In these learning machines, it was shown that singularities make the Bayesian generalization errors smaller than those of regular models, even in under-parametrized cases [15]. These phenomena are called implicit or intrinsic regularization [9, 16, 17], and research about the quantitative effects caused by singularities has been applied to model selection [18], hyperparameter design [19], and optimization of Markov chain Monte Carlo methods for the posterior distribution [20].
Statistical properties of nonidentifiable and singular learning machines are now being clarified by these studies; however, conventional singular learning theory is based on the condition that the log likelihood is an algebraic or analytic function of the parameter, hence it cannot be applied to non-differentiable ReLU neural networks. In this paper, we develop singular learning theory so that it can be employed in the non-differentiable case, and we derive an upper bound of the free energy of deep ReLU networks in overparametrized cases. The Bayesian free energy \(F_{n}\) is mathematically equal to the minus log marginal likelihood and the Bayesian stochastic complexity, where \(n\) is the sample size. The free energy plays an important role in Bayesian learning theory, because the average generalization error \(\mathbb{E}[G_{n}]\) satisfies the formula, for an arbitrary \(n\) [21],
\[\mathbb{E}[G_{n}]=\mathbb{E}[F_{n+1}]-\mathbb{E}[F_{n}]-S,\]
where \(G_{n}\) is the Kullback-Leibler divergence from the data-generating distribution to the Bayesian predictive distribution and \(S\) is the entropy of the data-generating distribution. If the log likelihood function is analytic or algebraic, then by conventional singular learning theory, it was proved that, if the data-generating distribution is realizable by the learning machine,
\[\mathbb{E}[F_{n}]=nS+\lambda\log n+O(\log\log n),\]
where \(\lambda>0\) is the real log canonical threshold (RLCT), resulting in
\[\mathbb{E}[G_{n}]=\frac{\lambda}{n}+o(1/n).\]
In this paper, we prove that the Bayesian free energy of a deep ReLU neural network satisfies the following inequality,
\[\mathbb{E}[F_{n}]\leq nS+\lambda_{ReLU}\log n+C,\]
where the constant \(\lambda_{ReLU}>0\) is bounded even if the number of layers in the learning machine is larger than necessary to estimate the data-generating distribution. Hence, if the generalization error has an asymptotic expansion, then it should satisfy
\[\mathbb{E}[G_{n}]\leq\frac{\lambda_{ReLU}}{n}+o(1/n).\]
In practical applications, we do not know the appropriate number of layers of a deep network for an unknown data-generating distribution, hence a sufficiently large neural network is often employed. Our result shows that, even if a deep ReLU neural network is designed in an over-parametrized state, the Bayesian generalization error is bounded by a constant defined by the data-generating distribution.

This paper consists of seven sections. In the second section, we prepare the mathematical framework of Bayesian learning theory. In the third section, a deep ReLU neural network is explained and the main theorem is introduced. In the fourth and fifth sections, several lemmas and the main theorem are proved, respectively. In the sixth and seventh sections, we discuss the main result and conclude this paper.

## 2 Framework of Bayesian Learning Theory

In this section, we briefly explain the framework of Bayesian learning for supervised learning used in this paper. Let \(X^{n}=(X_{1},\cdots,X_{n})\) and \(Y^{n}=(Y_{1},\cdots,Y_{n})\) be samples independently and identically taken from the true probability distribution \(q(x,y)=q(y|x)q(x)\). Also let \(p(y|x,\theta)\) and \(\varphi(\theta)\) be the statistical model and the prior distribution. The posterior distribution is given by
\[p(\theta|X^{n},Y^{n})=\frac{1}{Z_{n}}\varphi(\theta)\prod_{i=1}^{n}p(Y_{i}|X_{i},\theta), \tag{1}\]
where \(Z_{n}\) is the normalizing constant, namely the marginal likelihood:
\[Z_{n}=\int\varphi(\theta)\prod_{i=1}^{n}p(Y_{i}|X_{i},\theta)\mathrm{d}\theta. \tag{2}\]
Since \(Z_{n}\) is the probability distribution of \(Y^{n}\) conditioned on \(X^{n}\) that is estimated from the sample, \(Z_{n}\) is also denoted by \(p(Y^{n}|X^{n})\) as a probability distribution. The posterior predictive distribution is defined by the average of the statistical model over the posterior distribution:
\[p(y|x,X^{n},Y^{n})=\int p(y|x,\theta)p(\theta|X^{n},Y^{n})\mathrm{d}\theta. \tag{3}\]
In Bayesian learning, the true distribution \(q(y|x)\) is inferred by the predictive distribution \(p(y|x,X^{n},Y^{n})\). For comparing \(q(y|x)\) with \(p(y|x,X^{n},Y^{n})\), the free energy \(F_{n}\) and the generalization error \(G_{n}\) are used. The free energy is the negative log value of the marginal likelihood,
\[F_{n}=-\log Z_{n}. \tag{4}\]
The free energy, equal to the minus log evidence, is also called the stochastic complexity and is used for model selection [5][22][23][18]. We introduce the following quantities to explain why the free energy is used for comparing the true and predictive distributions:
\[S=-\int q(y|x)q(x)\log q(y|x)\mathrm{d}x\mathrm{d}y, \tag{5}\]
\[S_{n}=-\frac{1}{n}\sum_{i=1}^{n}\log q(Y_{i}|X_{i}). \tag{6}\]
The entropy \(S\) is the average negative log likelihood of the true distribution, and the empirical entropy \(S_{n}\) is the average of the log loss. From the definitions, the following equation holds,
\[F_{n}=nS_{n}+\log\frac{q(Y^{n}|X^{n})}{p(Y^{n}|X^{n})}, \tag{7}\]
where \(q(Y^{n}|X^{n})=\prod_{i=1}^{n}q(Y_{i}|X_{i})\). The average of \(F_{n}\) over the generation of the sample \((X^{n},Y^{n})\) is
\[\mathbb{E}[F_{n}]=nS+\int q(y^{n}|x^{n})q(x^{n})\log\frac{q(y^{n}|x^{n})q(x^{n})}{p(y^{n}|x^{n})q(x^{n})}\mathrm{d}x^{n}\mathrm{d}y^{n} \tag{8}\]
\[=nS+D_{\mathrm{KL}}(q(x^{n},y^{n})\|p(y^{n}|x^{n})q(x^{n})). \tag{9}\]
The entropy does not depend on the statistical model or the prior distribution; therefore, up to this constant, the expected value of the free energy over the sample generation is equivalent to the Kullback-Leibler divergence between the true distribution of the sample and the evidence. The generalization error is the Kullback-Leibler divergence between the true distribution and the predictive distribution,
\[G_{n}=D_{\mathrm{KL}}(q(y|x)q(x)\|p(y|x,X^{n},Y^{n})q(x)). \tag{10}\]
The generalization error is also used for model selection [4][24]. For \(p(y|x,X^{n},Y^{n})=p(Y_{n+1}|X_{n+1},X^{n},Y^{n})\), the following equation holds,
\[p(y|x,X^{n},Y^{n})=\frac{1}{Z_{n}}\int p(Y_{n+1}|X_{n+1},\theta)\varphi(\theta)\prod_{i=1}^{n}p(Y_{i}|X_{i},\theta)\mathrm{d}\theta \tag{11}\]
\[=\frac{Z_{n+1}}{Z_{n}}. \tag{12}\]
Averaging the negative log of equation (12) over the sample gives
\[\mathbb{E}[G_{n}]-S=\mathbb{E}[F_{n+1}]-\mathbb{E}[F_{n}]. \tag{13}\]
This equation shows that the average generalization error is equal to the increase of the average free energy. If the log likelihood is an algebraic or analytic function, then by using algebraic geometric foundations [25][26], it was proved that the asymptotic behaviors of \(\mathbb{E}[F_{n}]\) and \(\mathbb{E}[G_{n}]\) are given by the real log canonical threshold [21]. However, since a ReLU function is neither algebraic nor analytic, conventional singular learning theory cannot be employed. Table 1 shows the definitions and notations used in this paper.
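As a concrete illustration of these definitions, the free energy of a small model can be estimated directly from eq.(2) and eq.(4). The NumPy sketch below approximates \(Z_{n}\) by naive prior sampling for a toy one-parameter regression model; the model, prior, and sample sizes are illustrative assumptions, and this estimator is far too crude for realistic networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta, X, Y):
    """log p(Y^n | X^n, theta) for a toy model y = theta * x + N(0, 1)."""
    resid = Y - theta * X
    return -0.5 * np.sum(resid ** 2) - 0.5 * len(X) * np.log(2 * np.pi)

def free_energy(X, Y, n_prior_samples=100_000):
    """F_n = -log Z_n, with Z_n of eq.(2) estimated by sampling theta ~ N(0,1)."""
    thetas = rng.normal(size=n_prior_samples)
    logs = np.array([log_likelihood(t, X, Y) for t in thetas])
    m = logs.max()                                # log-sum-exp for stability
    log_Z = m + np.log(np.mean(np.exp(logs - m)))
    return -log_Z

n = 50
X = rng.normal(size=n)
Y = 0.7 * X + rng.normal(size=n)                  # data generated with theta* = 0.7
print(free_energy(X, Y))                          # Monte Carlo estimate of F_n
```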
## 3 Deep Neural Network

### Deep Neural Network as a statistical model

In this section, we describe the function computed by an \(N\)-layer neural network. We define \(H_{k}\) (\(1\leq k\leq N\)) as the width of each layer. Let \(x\in\mathbb{R}^{H_{1}}\) be the input vector and \(\sigma(t)=\{\sigma_{i}(t)\}\) be the vector of activation functions. The function \(f^{(k)}\in\mathbb{R}^{H_{k}}\) from the \((k-1)\)-th layer to the \(k\)-th layer is defined by
\[f^{(1)}(w,b,x)=x, \tag{14}\]
\[f^{(k)}(w,b,x)=\sigma(w^{(k)}f^{(k-1)}(w,b,x)+b^{(k)})\ \ \ (2\leq k\leq N-1), \tag{15}\]
and the output layer is affine, \(f^{(N)}(w,b,x)=w^{(N)}f^{(N-1)}(w,b,x)+b^{(N)}\), where \(w^{(k)}\in\mathbb{R}^{H_{k}\times H_{k-1}}\) is a weight matrix and \(b^{(k)}\in\mathbb{R}^{H_{k}}\) is a bias. We collectively denote the weights and biases as
\[w=(w^{(2)},\cdots,w^{(N)}),\ b=(b^{(2)},\cdots,b^{(N)}). \tag{16}\]
There exist various activation functions for neural networks, such as ReLU, Sigmoid, and Swish. In this paper, we analyze the case of the ReLU activation function, which is defined by
\[\sigma_{i}(t)=\begin{cases}t_{i}&(t_{i}\geq 0)\\ 0&(t_{i}<0)\end{cases}. \tag{17}\]
For using a neural network in Bayesian learning, the relationship between the input vector and the output vector is stochastically modeled with the function \(f^{(N)}(w,b,x)\).

\begin{table}
\begin{tabular}{c l l}
\hline Notation & Definition & Name \\ \hline \hline
\(\mathbb{E}[\cdots]\) & \(\int\cdots\prod_{i=1}^{n}q(Y_{i}|X_{i})q(X_{i})\mathrm{d}X^{n}\mathrm{d}Y^{n}\) & average over the generation of samples \\
\(\mathbb{E}_{\theta}[\cdots]\) & \(\int\cdots p(\theta|X^{n},Y^{n})\mathrm{d}\theta\) & average over the posterior \\
\(\mathbb{E}_{x,y}[\cdots]\) & \(\int\cdots q(y|x)q(x)\mathrm{d}x\mathrm{d}y\) & average over the true distribution \\
\(S\) & \(-\mathbb{E}_{x,y}[\log q(y|x)]\) & entropy \\
\(S_{n}\) & \(-\frac{1}{n}\sum_{i=1}^{n}\log q(Y_{i}|X_{i})\) & empirical entropy \\
\(F_{n}\) & \(-\log Z_{n}\) & free energy \\
\(G_{n}\) & \(\mathbb{E}_{x,y}[\log q(y|x)/p(y|x,X^{n},Y^{n})]\) & generalization error \\ \hline
\end{tabular}
\end{table} Table 1: Notation

Let \(y\in\mathbb{R}^{H_{N}}\) be the output vector. This paper concerns the following statistical model:
\[y=f^{(N)}(w,b,x)+N(0,I_{H_{N}}), \tag{18}\]
where \(N(0,I_{H_{N}})\) is an \(H_{N}\)-dimensional Gaussian noise whose covariance is the identity matrix. This model is represented as a probability density function as follows:
\[p(y|w,b,x)=\frac{1}{\sqrt{2\pi}^{H_{N}}}\exp\left(-\frac{1}{2}\|y-f^{(N)}(w,b,x)\|^{2}\right). \tag{19}\]

### How the model realizes the true distribution

In data analysis using neural networks, it is common that the model is larger than the data-generating process. Such a situation is called overparametrization. For analyzing such an overparametrized situation, we assume that the statistical model includes the data-generating process. We assume that the true probability distribution is an \(N^{*}\)-layer ReLU neural network of widths \(H_{k}^{*}\) with parameters \((w^{*},b^{*})\). In this situation, the true distribution is
\[q(y|x)=\frac{1}{\sqrt{2\pi}^{H_{N^{*}}^{*}}}\exp\left(-\frac{1}{2}\|y-f^{(N^{*})}(w^{*},b^{*},x)\|^{2}\right). \tag{20}\]
We show that if the statistical model is an \(N\)-layer ReLU neural network that satisfies
\[N^{*}\leq N,\quad H_{1}^{*}=H_{1},\quad H_{N^{*}}^{*}=H_{N}, \tag{21}\]
and
\[H_{k}^{*}\leq H_{k}\quad(2\leq k\leq N^{*}-1), \tag{22}\]
\[H_{N^{*}-1}^{*}\leq H_{k}\quad(N^{*}\leq k\leq N-1), \tag{23}\]
then there exists a parameter \((\hat{w},\hat{b})\) which satisfies
\[q(y|x)=p(y|\hat{w},\hat{b},x). \tag{24}\]
Such parameters are called optimal parameters.
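Before describing the optimal parameters, it is convenient to have the network of eqs.(14)-(17) in executable form. The following is a minimal NumPy sketch with an affine output layer, matching the regression model of eq.(18); the widths and random parameters are illustrative.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def forward(ws, bs, x):
    """f^(N)(w, b, x): ReLU layers for 2 <= k <= N-1, affine output layer."""
    f = x
    for w, b in zip(ws[:-1], bs[:-1]):
        f = relu(w @ f + b)
    return ws[-1] @ f + bs[-1]                   # final layer kept linear

# Widths H = (H_1, ..., H_N); weight w^(k) has shape (H_k, H_{k-1}).
rng = np.random.default_rng(0)
H = [3, 5, 5, 2]
ws = [rng.normal(size=(H[k], H[k - 1])) for k in range(1, len(H))]
bs = [rng.normal(size=H[k]) for k in range(1, len(H))]
x = rng.normal(size=H[0])
print(forward(ws, bs, x))                        # output in R^{H_N}
```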
A vector \(v^{(k)}\in\mathbb{R}^{H_{k}}\), which has the same dimension as the output vector of the \(k\)th layer of the learning machine, is represented by \[v^{(k)}=\left(\begin{array}{c}v^{(k)}_{A}\\ v^{(k)}_{B}\end{array}\right),\] where, for \(1\leq k\leq N^{*}-1\), \[v^{(k)}_{A}\in\mathbb{R}^{H_{k}^{*}},\quad v^{(k)}_{B}\in\mathbb{R}^{H_{k}-H_ {k}^{*}},\] or, for \(N^{*}\leq k\leq N-1\), \[v_{A}^{(k)}\in\mathbb{R}^{H_{N^{*}-1}^{*}},\ \ v_{B}^{(k)}\in\mathbb{R}^{H_{k}-H_{N^{*}-1 }^{*}}.\] For example, the output of the \(k\)th layer is represented by \[f^{(k)}(w,b,x)=\left(\begin{array}{c}f_{A}^{(k)}(w,b,x)\\ f_{B}^{(k)}(w,b,x)\end{array}\right),\] and the bias of the \(k\)th layer is represented by \[b^{(k)}=\left(\begin{array}{c}b_{A}^{(k)}\\ b_{B}^{(k)}\end{array}\right).\] A matrix \(L^{(k)}\in\mathbb{R}^{H_{k}\times H_{k-1}}\) whose size is equal to the weight parameter from the \((k-1)\)th layer to the \(k\)th layer is represented by \[L^{(k)}=\left(\begin{array}{cc}L_{AA}^{(k)}&L_{AB}^{(k)}\\ L_{BA}^{(k)}&L_{AB}^{(k)}\end{array}\right),\] where, for \(2\leq k\leq N^{*}-1\), \[L_{AA}^{(k)}\in\mathbb{R}^{H_{k}^{*}\times H_{k-1}^{*}}, L_{AB}^{(k)}\in\mathbb{R}^{H_{k}^{*}\times(H_{k-1}-H_{k-1}^{*})},\] \[L_{BA}^{(k)}\in\mathbb{R}^{(H_{k}-H_{k}^{*})\times H_{k-1}^{*}}, L_{BB}^{(k)}\in\mathbb{R}^{(H_{k}-H_{k}^{*})\times(H_{k-1}-H_{k-1}^{*})},\] for \(N^{*}\leq k\leq N-1\), \[L_{AA}^{(k)}\in\mathbb{R}^{H_{N^{*}-1}^{*}\times H_{N^{*}-1}^{*}}, L_{AB}^{(k)}\in\mathbb{R}^{H_{N^{*}-1}^{*}\times(H_{k-1}-H_{N^{*}-1}^{*})},\] \[L_{BA}^{(k)}\in\mathbb{R}^{(H_{k}-H_{N^{*}-1}^{*})\times H_{N^{* }-1}^{*}}, L_{BB}^{(k)}\in\mathbb{R}^{(H_{k}-H_{N^{*}-1}^{*})\times(H_{k-1}-H_{ N^{*}-1}^{*})},\] or, for \(k=N\), \[L_{AA}^{(N)}\in\mathbb{R}^{H_{N^{*}}^{*}\times H_{N^{*}-1}^{*}}, L_{AB}^{(N)}\in\mathbb{R}^{H_{N^{*}}^{*}\times(H_{N-1}-H_{N^{*}-1}^{*})}.\] For example, a weight parameter is represented by \[w^{(k)}=\left(\begin{array}{cc}w_{AA}^{(k)}&w_{AB}^{(k)}\\ w_{BA}^{(k)}&w_{BB}^{(k)}\end{array}\right).\] Note that, \(L_{AA}^{(2)}\), \(L_{BB}^{(2)}\), \(L_{BA}^{(N)}\), and \(L_{BB}^{(N)}\) are the empty matrices, because \(H_{1}^{*}=H_{1}\) and \(N_{N^{*}}^{*}=H_{N}\). We divide the layers of the model into \(1\leq k\leq N^{*}-1\) and \(N^{*}\leq k\leq N\). In \(k=2\), the optimal parameter \(\hat{w},\hat{b}\) are \[\hat{w}^{(2)} =\left(\begin{array}{c}w^{*(2)}\\ \mathcal{M}_{BA}^{*(2)}\end{array}\right) \tag{25}\] \[\hat{b}^{(2)} =\left(\begin{array}{c}b^{*(2)}\\ -\mathcal{M}_{B0}^{*(2)}\end{array}\right), \tag{26}\] where \(\mathcal{M}_{BA}^{*(2)}\)is arbitrary matrix which components are positive and \(-\mathcal{M}_{B0}^{*(2)}\)is arbitrary vector which components are negative. In \(3\leq k\leq N^{*}-1\), the optimal parameter \(\hat{w},\hat{b}\) are \[\hat{w}^{(k)} =\left(\begin{array}{cc}w^{*(k)}&\mathcal{M}_{AB}^{*(k)}\\ -\mathcal{M}_{BA}^{*(k)}&\mathcal{M}_{BB}^{*(k)}\end{array}\right) \tag{27}\] \[\hat{b}^{(k)} =\left(\begin{array}{c}b^{*(k)}\\ -\mathcal{M}_{B0}^{*(k)}\end{array}\right), \tag{28}\] where \(\mathcal{M}_{AB}^{*(k)},\mathcal{M}_{BB}^{*(k)}\) are arbitrary matrices, \(-\mathcal{M}_{BA}^{*(k)}\) are arbitrary matrices which components are negative and \(-\mathcal{M}_{B0}^{*(k)}\) are arbitrary vector which components are negative. For \(k=3\), \(\mathcal{M}_{AB}^{*(k)}=0\). In each layer(\(k\geq 2\)) the output \(f^{(k)}(\hat{w},\hat{b},x)\) are positive. 
From this nonnegativity, the following equations hold for \(3\leq k\leq N^{*}-1\):
\[f^{(k)}_{A}(\hat{w},\hat{b},x)=f^{(k)}(w^{*},b^{*},x), \tag{29}\]
\[f^{(k)}_{B}(\hat{w},\hat{b},x)=0. \tag{30}\]
Figure 1 shows the relationships between units under this optimal parameter for \(3\leq k\leq N^{*}-1\). For \(N^{*}\leq k\leq N-1\), the optimal parameters \(\hat{w},\hat{b}\) are
\[\hat{w}^{(k)}=\left(\begin{array}{cc}I_{N^{*}-1}&\mathcal{M}_{AB}^{*(k)}\\ -\mathcal{M}_{BA}^{*(k)}&-\mathcal{M}_{BB}^{*(k)}\end{array}\right), \tag{31}\]
\[\hat{b}^{(k)}=\left(\begin{array}{c}\mathcal{M}_{A0}^{*(k)}\\ -\mathcal{M}_{B0}^{*(k)}\end{array}\right), \tag{32}\]
where \(I_{N^{*}-1}\) is the \(H_{N^{*}-1}^{*}\)-dimensional identity matrix, the \(\mathcal{M}_{AB}^{*(k)}\) are arbitrary matrices, the \(-\mathcal{M}_{BA}^{*(k)}\) and \(-\mathcal{M}_{BB}^{*(k)}\) are arbitrary matrices whose components are negative, the \(\mathcal{M}_{A0}^{*(k)}\) are arbitrary vectors whose components are positive, and the \(-\mathcal{M}_{B0}^{*(k)}\) are arbitrary vectors whose components are negative. Each \(\hat{w}^{(k)}\) satisfies \(\text{Rank}(\hat{w}^{(k)})\geq H^{*}_{N^{*}-1}\).

Figure 1: The relationship between the true distribution and the optimal parameter in the model.

For \(k=N\), the optimal parameters \(\hat{w},\hat{b}\) are
\[\hat{w}^{(N)}=\left(w^{*(N^{*})},\mathcal{M}^{*(N)}_{AB}\right), \tag{33}\]
\[\hat{b}^{(N)}=b^{*(N^{*})}-w^{*(N^{*})}\sum_{k=N^{*}}^{N-1}\mathcal{M}^{*(k)}_{A0}, \tag{34}\]
where \(\mathcal{M}^{*(N)}_{AB}\) is an arbitrary matrix. From the nonnegativity of the output in each layer, the following equations hold for \(N^{*}\leq k\leq N-1\):
\[f^{(k)}_{A}(\hat{w},\hat{b},x)=f^{(k-1)}_{A}(\hat{w},\hat{b},x)+\mathcal{M}^{*(k)}_{A0}, \tag{35}\]
\[f^{(k)}_{B}(\hat{w},\hat{b},x)=0. \tag{36}\]
Therefore, the following equation holds:
\[f^{(N)}(\hat{w},\hat{b},x)=f^{(N^{*})}(w^{*},b^{*},x). \tag{37}\]
This equation is equivalent to
\[q(y|x)=p(y|\hat{w},\hat{b},x). \tag{38}\]
Figure 2 shows the outline of the optimal parameter introduced here. Besides this optimal parameter, there exist various other optimal parameters.

Figure 2: Outline of the optimal parameter.
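The construction above is easy to test numerically. The NumPy sketch below embeds a random true network with widths \((3,4,2)\) into a model with a wider hidden layer \((3,7,2)\), so \(N=N^{*}=3\): the extra hidden units get positive incoming weights and negative biases as in eqs.(25)-(26), and, because their activations need not vanish, their outgoing weights are set exactly to zero (the role that the convergent block plays in the essential parameter set defined below). All sizes and names are illustrative; the depth-extension blocks of eqs.(31)-(34) follow the same pattern.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def forward(ws, bs, x):
    """ReLU hidden layers, affine output layer."""
    f = x
    for w, b in zip(ws[:-1], bs[:-1]):
        f = relu(w @ f + b)
    return ws[-1] @ f + bs[-1]

rng = np.random.default_rng(1)
# True network (N* = 3): widths (3, 4, 2).
w2s, b2s = rng.normal(size=(4, 3)), rng.normal(size=4)
w3s, b3s = rng.normal(size=(2, 4)), rng.normal(size=2)

# Overparametrized model (N = 3): widths (3, 7, 2), i.e. H_2 = 7 > H_2* = 4.
M = 1.0                                           # any positive constant works
w2 = np.vstack([w2s, np.full((3, 3), M)])         # eq.(25): extra B-rows positive
b2 = np.concatenate([b2s, np.full(3, -M)])        # eq.(26): extra B-biases negative
w3 = np.hstack([w3s, np.zeros((2, 3))])           # zero outgoing weights of B-units
b3 = b3s.copy()

x = rng.normal(size=3)
out_true = forward([w2s, w3s], [b2s, b3s], x)
out_model = forward([w2, w3], [b2, b3], x)
print(np.allclose(out_true, out_model))           # True: eq.(37) holds
```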
Let the Kullback-Leibler divergence of a data-generating network \(q(y|x)=p(y|x,w^{*},b^{*})\) and a learning machine \(p(y|x)\) be \[K(w,b)=\int q(x)q(y|x)\log\frac{q(y|x)}{p(y|x,w,b)}\mathrm{d}x \mathrm{d}y.\] It is well-known that \(K(w,b)\geq 0\) for an arbitrary \((w,b)\) and \(K(w,b)=0\) if and only if \(q(y|x)=p(y|x,w,b)\). **Lemma 4.1**.: _Assume that a set \(W\) is contained in the set determined by the prior distribution \(\{(w,b);\varphi(w,b)>0\}\). Then for an arbitrary postive integer \(n\),_ \[\mathbb{E}[F_{n}]\leq nS-\log\int_{W}\exp(-nK(w,b))\varphi(w,b) \mathrm{d}w\mathrm{d}b.\] Proof.: An empirical Kullback-Leibler divergence is defined by \[K_{n}(w,b)=\frac{1}{n}\sum_{i=1}^{n}\log\frac{p(Y_{i}|X_{i},w^{*},b^{*})}{p(Y_ {i}|X_{i},w,b)},\] which satisfies \(\mathbb{E}[K_{n}](w,b)]=K(w,b)\). \[\frac{q(y^{n}|x^{n})}{p(y^{n}|x^{n})} =\exp(-\sum_{i=1}^{n}\log\frac{q(Y_{i}|X_{i})}{p(Y_{i}|X_{i},w,b)}) \tag{41}\] \[=\exp(-nK_{n}(w,b)). \tag{42}\] By using eq.(7), \[\mathbb{E}[F_{n}] =-\mathbb{E}[\log\frac{q(y^{n}|x^{n})}{p(y^{n}|x^{n})}]+nS \tag{43}\] \[=-\mathbb{E}[\log\int\varphi(w,b)\exp(-nK_{n}(w,b))dwdb]+nS. \tag{44}\] By applying Lemma.1 in [1], \[\mathbb{E}[F_{n}] \leq-\log\int\varphi(w,b)\exp(-\mathbb{E}[nK_{n}(w,b)])\mathrm{d} w\mathrm{d}b+nS \tag{45}\] \[\leq-\log\int\varphi(w,b)\exp(-nK(w,b))\mathrm{d}w\mathrm{d}b+nS\] (46) \[\leq-\log\int_{W}\varphi(w,b)\exp(-nK(w,b))\mathrm{d}w\mathrm{d}b +nS, \tag{47}\] where the last inequality is derived the fact that the restriction of integrated region makes the integration not larger. **Lemma 4.2**.: _For arbitrary vectors \(s,t\),_ \[\|\sigma(s)-\sigma(t)\|\leq\|s-t\|.\] Proof.: If \(s_{i},t_{i}\geq 0\) or \(s_{i},t_{i}\leq 0\), then \(|\sigma_{i}(s)-\sigma_{i}(t)|=|s_{i}-t_{i}|\). If \(s_{i}\geq 0,t_{i}<0\), then \(|\sigma_{i}(s)-\sigma_{i}(t)|=|s_{i}|\leq|s_{i}-t_{i}|\). If \(s_{i}<0,t_{i}\geq 0\), then \(|\sigma_{i}(s)-\sigma_{i}(t)|=|t_{i}|\leq|s_{i}-t_{i}|\). Hence \[\|\sigma(s)-\sigma(t)\|^{2}=\sum_{i}|\sigma_{i}(s)-\sigma_{i}(t)|^{2}\leq\sum _{i}|s_{i}-t_{i}|^{2}=\|s-t\|^{2}.\] **Lemma 4.3**.: _For arbitrary \(w\),\(w^{\prime}\), \(b\), \(b^{\prime}\), the following inequality holds,_ \[\|f^{(k)}(w,b,x)-f^{(k)}(w^{\prime},b^{\prime},x)\|\] \[\leq\|w^{(k)}-w^{\prime(k)}\|\|f^{(k-1)}(w,b,x)\|+\|b^{(k)}-b^{ \prime(k)}\|\] \[+\|w^{(k)}\|\|f^{(k-1)}(w,b,x)-f^{(k-1)}(w^{\prime},b^{\prime},x )\|, \tag{48}\] _where \(\|w^{(k)}\|\) is the operator norm of a matrix \(w^{(k)}\)._ Proof.: \[f^{(k)}(w,b,x)-f^{(k)}(w^{\prime},b^{\prime},x)\] \[=\sigma(w^{(k)}f^{(k-1)}(w,b,x)+b^{(k)})-\sigma(w^{\prime(k)}f^{ (k-1)}(w,b,x)+b^{\prime(k)})\] \[+\sigma(w^{\prime(k)}f^{(k-1)}(w,b,x)+b^{\prime(k)})-\sigma(w^{ \prime(k)}f^{(k-1)}(w^{\prime},b^{\prime},x)+b^{\prime(k)}).\] (49) Hence by using Lemma 4.2, \[\|f^{(k)}(w,b,x)-f^{(k)}(w^{\prime},b^{\prime},x)\|\] \[\leq\|\sigma(w^{(k)}f^{(k-1)}(w,b,x)+b^{(k)})-\sigma(w^{\prime(k)}f ^{(k-1)}(w,b,x)+b^{\prime(k)})\|\] \[+\|\sigma(w^{\prime(k)}f^{(k-1)}(w,b,x)+b^{\prime k}))-\sigma(w^{ \prime(k)}f^{(k-1)}(w^{\prime},b^{\prime},x)+b^{\prime(k)})\|\] \[\leq\|w^{(k)}-w^{\prime(k)}\|\|f^{(k-1)}(w,b,x)\|+\|b^{(k)}-b^{ \prime(k)}\|\] \[+\|w^{\prime(k)}\|\|f^{(k-1)}(w,b,x)-f^{(k-1)}(w^{\prime},b^{ \prime},x)\|. \tag{50}\] Hence lemma is proved. **Lemma 4.4**.: _For arbitrary \(w,b,x\),_ \[\|f^{(k)}(w,b,x)\| \leq\|w^{(k)}\|\|w^{(k-1)}\|\cdots\|w^{(2)}\|\|x\| \tag{51}\] \[+\|b^{(k)}\|+\sum_{j=1}^{k-2}\|w^{(k)}\|\|w^{(k-1)}\|\cdots\|w^{ (k-j)}\|\|b^{(k-j)}\|. 
Proof.: By substituting \(w^{\prime}=0\) and \(b^{\prime}=0\) in Lemma 4.3, it follows that \[\|f^{(k)}(w,b,x)\|\leq\|w^{(k)}\|\|f^{(k-1)}(w,b,x)\|+\|b^{(k)}\|. \tag{53}\] Then mathematical induction gives the Lemma. In order to prove the main theorem, we need several notations. The convergent matrix \(\mathcal{E}^{(k)}\) and vector \(\mathcal{E}^{(k)}_{0}\) are defined by the condition that the absolute values of all entries are smaller than \(1/\sqrt{n}\); they are denoted by \[\mathcal{E}^{(k)}=\left(\begin{array}{cc}\mathcal{E}^{(k)}_{AA}&\mathcal{E}^{(k)}_{AB}\\ \mathcal{E}^{(k)}_{BA}&\mathcal{E}^{(k)}_{BB}\end{array}\right),\ \ \ \mathcal{E}^{(k)}_{0}=\left(\begin{array}{c}\mathcal{E}^{(k)}_{A0}\\ \mathcal{E}^{(k)}_{B0}\end{array}\right). \tag{54}\] The positive-small-constant matrix \(\mathcal{D}^{(k)}\) and vector \(\mathcal{D}^{(k)}_{0}\) are defined by the condition that all entries are positive and smaller than \(\delta>0\), where \(\delta\) does not depend on \(n\); they are denoted by \[\mathcal{D}^{(k)}=\left(\begin{array}{cc}\mathcal{D}^{(k)}_{AA}&\mathcal{D}^{(k)}_{AB}\\ \mathcal{D}^{(k)}_{BA}&\mathcal{D}^{(k)}_{BB}\end{array}\right),\ \ \mathcal{D}^{(k)}_{0}=\left(\begin{array}{c}\mathcal{D}^{(k)}_{A0}\\ \mathcal{D}^{(k)}_{B0}\end{array}\right). \tag{55}\] The positive constant matrix \(\mathcal{M}^{(k)}\) and vector \(\mathcal{M}^{(k)}_{0}\) are defined by the condition that all entries are in the interval \([A,B]\): \[\mathcal{M}^{(k)}=\left(\begin{array}{cc}\mathcal{M}^{(k)}_{AA}&\mathcal{M}^{(k)}_{AB}\\ \mathcal{M}^{(k)}_{BA}&\mathcal{M}^{(k)}_{BB}\end{array}\right),\ \ \mathcal{M}^{(k)}_{0}=\left(\begin{array}{c}\mathcal{M}^{(k)}_{A0}\\ \mathcal{M}^{(k)}_{B0}\end{array}\right). \tag{56}\] To prove Theorem 3.1, we show that an upper bound of \(\mathbb{E}[F_{n}]\) is given by choosing a set \(W_{E}\) which consists of essential weight and bias parameters. **Definition**. (Essential parameter set \(W_{E}\)). A parameter \((w,b)\) is said to be in an essential parameter set \(W_{E}\) if it satisfies the following conditions (1), (2), and (3). (1) For \(2\leq k\leq N^{*}-1\), there exist convergent matrices \(\mathcal{E}^{(k)}\) and positive constant matrices \(\mathcal{M}^{(k)}\) such that \[w^{(k)} =\left(\begin{array}{cc}(w^{*})^{(k)}+\mathcal{E}^{(k)}_{AA}&\mathcal{Z}^{(k)}_{AB}\\ -\mathcal{M}^{(k)}_{BA}&-\mathcal{M}^{(k)}_{BB}\end{array}\right), \tag{57}\] \[b^{(k)} =\left(\begin{array}{c}(b^{*})^{(k)}+\mathcal{E}^{(k)}_{A0}\\ -\mathcal{M}^{(k)}_{B0}\end{array}\right), \tag{58}\] where \[\mathcal{Z}^{(k)}_{AB}=\left\{\begin{array}{cc}\mathcal{E}^{(3)}_{AB}&(k=3)\\ \mathcal{M}^{(k)}_{AB}&(k\neq 3)\end{array}\right.. \tag{59}\] Note that, for \(k=2\), \(\mathcal{Z}^{(k)}_{AB}\), \(\mathcal{M}^{(k)}_{BB}\), and \(\mathcal{M}^{(k)}_{B0}\) are empty matrices. (2) For \(N^{*}\leq k\leq N-1\), there exist a positive-small-constant matrix \(\mathcal{D}^{(k)}\) and a positive constant matrix \(\mathcal{M}^{(k)}\) such that \[w^{(k)} =\left(\begin{array}{cc}I_{N^{*}-1}+\mathcal{D}^{(k)}_{AA}&\mathcal{M}^{(k)}_{AB}\\ -\mathcal{M}^{(k)}_{BA}&-\mathcal{M}^{(k)}_{BB}\end{array}\right), \tag{60}\] \[b^{(k)} =\left(\begin{array}{c}\mathcal{M}^{(k)}_{A0}\\ -\mathcal{M}^{(k)}_{B0}\end{array}\right), \tag{61}\] where \(I_{N^{*}-1}\) is the \(H^{*}_{N^{*}-1}\)-dimensional identity matrix.
(3) For \(k=N\), there exist a convergent matrix \(\mathcal{E}^{(N)}\) and vector \(\mathcal{E}^{(N)}_{0}\) such that \[w^{(N)} =\left((w^{*})^{(N^{*})}P^{-1}+\mathcal{E}^{(N)}_{AA},\ \mathcal{M}^{(N)}_{AB}\right), \tag{62}\] \[b^{(N)} =(b^{*})^{(N^{*})}-\sum_{k=N^{*}}^{N}w^{(N)}w^{(N-1)}\cdots w^{(k)}b^{(k-1)}+\mathcal{E}^{(N)}_{A0}, \tag{63}\] where \(P\in\mathbb{R}^{(H^{*}_{N^{*}-1})\times(H^{*}_{N^{*}-1})}\) is defined by the matrices in eq.(60), \[P=w^{(N-1)}_{AA}w^{(N-2)}_{AA}\cdots w^{(N^{*})}_{AA}.\] Note that the positive constant \(\delta>0\) is taken sufficiently small such that every \(w^{(k)}_{AA}\) (\(N^{*}\leq k\leq N-1\)) is invertible. **Lemma 4.5**.: _Assume that the weight and bias parameters are in the essential set \(W_{E}\). Then there exist constants \(c_{1},c_{2}>0\) such that_ \[\|f^{(N^{*}-1)}_{A}(w,b,x)-f^{(N^{*}-1)}(w^{*},b^{*},x)\| \leq\frac{c_{1}}{\sqrt{n}}(\|x\|+1), \tag{64}\] \[\|f^{(N^{*}-1)}_{A}(w,b,x)\| \leq c_{2}(\|x\|+1). \tag{65}\] Proof.: Eq.(65) is derived from Lemma 4.4. By the definitions (57) and (58), for \(4\leq k\leq N^{*}-1\), \[f_{A}^{(2)}(w,b,x) =\sigma(((w^{*})^{(2)}+\mathcal{E}_{AA}^{(2)})f_{A}^{(1)}(w,b,x)+(b^{*})^{(2)}+\mathcal{E}_{A0}^{(2)}), \tag{66}\] \[f_{A}^{(3)}(w,b,x) =\sigma(((w^{*})^{(3)}+\mathcal{E}_{AA}^{(3)})f_{A}^{(2)}(w,b,x)\] \[+\mathcal{E}_{AB}^{(3)}f_{B}^{(2)}(w,b,x)+(b^{*})^{(3)}+\mathcal{E}_{A0}^{(3)}),\] (67) \[f_{A}^{(k)}(w,b,x) =\sigma(((w^{*})^{(k)}+\mathcal{E}_{AA}^{(k)})f_{A}^{(k-1)}(w,b,x)\] \[+\mathcal{M}_{AB}^{(k)}f_{B}^{(k-1)}(w,b,x)+(b^{*})^{(k)}+\mathcal{E}_{A0}^{(k)}). \tag{68}\] Here, for \(4\leq k\leq N^{*}-1\), \(f_{B}^{(k-1)}(w,b,x)=0\), since all entries of \(w_{BA}^{(k-1)}\), \(w_{BB}^{(k-1)}\), and \(b_{B0}^{(k-1)}\) are negative and the output of the ReLU function \(f^{(k-2)}(w,b,x)\) is non-negative. On the other hand, \[f^{(k)}(w^{*},b^{*},x)=\sigma((w^{*})^{(k)}f^{(k-1)}(w^{*},b^{*},x)+(b^{*})^{(k)}). \tag{69}\] Hence by Lemma 4.3, for \(2\leq k\leq N^{*}-1\), \[\|f_{A}^{(k)}(w,b,x)-f^{(k)}(w^{*},b^{*},x)\| \tag{70}\] \[\leq\|\mathcal{E}_{AA}^{(k)}f_{A}^{(k-1)}(w,b,x)+\mathcal{E}_{A0}^{(k)}\|+\delta_{k,3}\|\mathcal{E}_{AB}^{(3)}f_{B}^{(2)}(w,b,x)\|\] (71) \[+\|(w^{*})^{(k)}(f_{A}^{(k-1)}(w,b,x)-f^{(k-1)}(w^{*},b^{*},x))\|\] (72) \[\leq\|\mathcal{E}_{AA}^{(k)}\|\|f_{A}^{(k-1)}(w,b,x)\|+\|\mathcal{E}_{A0}^{(k)}\|+\delta_{k,3}\|\mathcal{E}_{AB}^{(3)}\|\|f_{B}^{(2)}(w,b,x)\|\] (73) \[+\|(w^{*})^{(k)}\|\|f_{A}^{(k-1)}(w,b,x)-f^{(k-1)}(w^{*},b^{*},x)\|, \tag{74}\] where \(\delta_{k,3}=1\) if \(k=3\) and \(0\) otherwise. The entries of the matrices \(\mathcal{E}_{AA}^{(k)}\), \(\mathcal{E}_{AB}^{(3)}\), and \(\mathcal{E}_{A0}^{(k)}\) are bounded by a \(1/\sqrt{n}\) order term, and the operator norm is bounded by the Frobenius norm; hence \(\|\mathcal{E}_{AA}^{(k)}\|\), \(\|\mathcal{E}_{AB}^{(3)}\|\), and \(\|\mathcal{E}_{A0}^{(k)}\|\) are bounded by a \(1/\sqrt{n}\) order term. Moreover, \(\|(w^{*})^{(k)}\|\) is a constant term. For \(k=2\), \(f_{A}^{(k-1)}(w,b,x)-f^{(k-1)}(w^{*},b^{*},x)=x-x=0\). Then by using mathematical induction we obtain the Lemma. **Lemma 4.6**.: _Assume that the weight and bias parameters are in the set \(W_{E}\). Then there exists a constant \(c_{3}>0\) such that_ \[\|f^{(N)}(w,b,x)-f^{(N^{*})}(w^{*},b^{*},x)\|\leq\frac{c_{3}}{\sqrt{n}}(\|x\|+1). \tag{75}\]
Proof.: Let \(h\in\mathbb{R}^{H_{N}}\) and \(h^{*}\in\mathbb{R}^{H_{N^{*}}^{*}}\) (\(H_{N}=H_{N^{*}}^{*}\)) be the input vectors into the output layers of the learning and data-generating machines, respectively. In other words, \(h\) and \(h^{*}\) are defined such that \(f^{(N)}(w,b,x)=\sigma(h)\) and \(f^{(N^{*})}(w^{*},b^{*},x)=\sigma(h^{*})\). By the definition of the essential parameter set (2), for \(N^{*}-1\leq k\leq N-1\), all entries of \(w_{BA}^{(k)}\), \(w_{BB}^{(k)}\), and \(b_{B0}^{(k)}\) are negative. Hence, for \(N^{*}\leq k\leq N-1\), \(f_{B}^{(k)}(w,b,x)=0\). For \(N^{*}\leq k\leq N-1\), all entries of \(w_{AA}^{(k)}\), \(w_{AB}^{(k)}\), and \(b_{A0}^{(k)}\) are positive. Hence, by using \(\sigma(t)=t\) for \(t\geq 0\), \[h =w^{(N)}_{AA}w^{(N-1)}_{AA}\cdots w^{(N^{*})}_{AA}f^{(N^{*}-1)}_{A}(w,b,x) \tag{76}\] \[+b^{(N)}+\sum_{k=N^{*}}^{N}w^{(N)}_{AA}\cdots w^{(k)}_{AA}b^{(k-1)}_{A0}. \tag{77}\] On the other hand, \[h^{*}=(w^{*})^{(N^{*})}f^{(N^{*}-1)}(w^{*},b^{*},x)+(b^{*})^{(N^{*})}. \tag{78}\] If \(w\) is in the essential set of parameters, \[w^{(N)}_{AA}w^{(N-1)}_{AA}\cdots w^{(N^{*})}_{AA}=((w^{*})^{(N^{*})}P^{-1}+\mathcal{E}^{(N)}_{AA})w^{(N-1)}_{AA}\cdots w^{(N^{*})}_{AA} \tag{79}\] \[=(w^{*})^{(N^{*})}+\mathcal{E}^{(N)}_{AA}w^{(N-1)}_{AA}\cdots w^{(N^{*})}_{AA}. \tag{80}\] It follows that \[\|w^{(N)}w^{(N-1)}\cdots w^{(N^{*})}f^{(N^{*}-1)}(w,b,x)-(w^{*})^{(N^{*})}f^{(N^{*}-1)}(w^{*},b^{*},x)\| \tag{81}\] \[\leq\|w^{(N)}_{AA}w^{(N-1)}_{AA}\cdots w^{(N^{*})}_{AA}f^{(N^{*}-1)}_{A}(w,b,x)-(w^{*})^{(N^{*})}f^{(N^{*}-1)}(w^{*},b^{*},x)\|\] (82) \[\leq\|(w^{*})^{(N^{*})}(f^{(N^{*}-1)}_{A}(w,b,x)-f^{(N^{*}-1)}(w^{*},b^{*},x))\|\] (83) \[+\|\mathcal{E}^{(N)}_{AA}\|\|w^{(N-1)}_{AA}\|\cdots\|w^{(N^{*})}_{AA}\|\|f^{(N^{*}-1)}_{A}(w,b,x)\|\] (84) \[\leq\frac{c_{4}}{\sqrt{n}}(\|x\|+1), \tag{85}\] where the last inequality is derived from Lemma 4.5. Also, by the definition, \[\|b^{(N)}+\sum_{k=N^{*}}^{N}w^{(N)}\cdots w^{(k)}b^{(k-1)}-(b^{*})^{(N^{*})}\|\leq\frac{c_{4}}{\sqrt{n}}, \tag{86}\] so it follows that \[\|h-h^{*}\|\leq\frac{c_{5}}{\sqrt{n}}(\|x\|+1).\] Then applying Lemma 4.2 completes the lemma. **Lemma 4.7**.: _(1) If the support of \(q(x)\) is contained in a positive region, the same conclusion as Lemma 4.5 holds by replacing \(\mathcal{Z}^{(3)}_{AB}\) in (59) with \(\mathcal{M}^{(3)}_{AB}\). (2) If the support of \(q(x)\) is contained in a bounded region, the same conclusion as Lemma 4.5 holds by replacing \(\mathcal{Z}^{(3)}_{AB}\) in (59) with \(\mathcal{M}^{(3)}_{AB}\) and by replacing \(-\mathcal{M}^{(3)}_{B0}\) in (58) with a vector in a sufficiently small region._ Proof.: In both cases, \(f^{(2)}_{B}(w,b,x)=0\) holds in eq.(67); hence the same conclusion as Lemma 4.5 holds. ## 5 Proof of Main Theorem In this section, we prove the main theorem. Proof.: (Main theorem). By Lemma 4.1, it is sufficient to prove that there exists a constant \(C>0\) such that \[\int_{W_{E}}\exp(-nK(w,b))\varphi(w,b)\mathrm{d}w\mathrm{d}b\geq\frac{C}{n^{\lambda}},\] where \[K(w,b)=\frac{1}{2}\int\|f^{(N)}(w,b,x)-f^{(N^{*})}(w^{*},b^{*},x)\|^{2}q(x)\mathrm{d}x.\] By using Lemma 4.6, if \((w,b)\in W_{E}\), \[K(w,b)\leq\frac{c_{3}^{2}}{2n}\int(\|x\|+1)^{2}q(x)\mathrm{d}x=\frac{c_{4}}{n}<\infty.\] It follows that \[\int_{W_{E}}\exp(-nK(w,b))\varphi(w,b)\mathrm{d}w\mathrm{d}b \tag{87}\] \[\geq\exp(-c_{4})\left(\min_{(w,b)\in W_{E}}\varphi(w,b)\right)\mathrm{Vol}(W_{E}), \tag{88}\]
where \(c_{4}>0\), \(\min_{(w,b)\in W_{E}}\varphi(w,b)>0\), and \(\mathrm{Vol}(W_{E})\) is the volume of the set \(W_{E}\) with respect to the Lebesgue measure. By the definition of the essential parameter set \(W_{E}\), its volume is determined by the dimension of the convergent matrices and vectors. Let \(2\lambda\) be the number of parameters in the convergent matrices and vectors. Then \[\mathrm{Vol}(W_{E})\geq\frac{C_{1}}{n^{\lambda}},\] where in general cases, \[\lambda=\frac{1}{2}\left(H_{3}^{*}(H_{2}-H_{2}^{*})+\sum_{k=2}^{N^{*}}H_{k}^{*}(H_{k-1}^{*}+1)\right).\] If the support of the input distribution is contained in a positive region or a bounded region, \[\lambda=\frac{1}{2}\left(\sum_{k=2}^{N^{*}}H_{k}^{*}(H_{k-1}^{*}+1)\right),\] which completes the main theorem. ## 6 Discussion In this section, we discuss three points regarding this paper. ### Property of Free Energy Firstly, we study a monotonicity property of the free energy as a function of the integration region. As we have shown in the proof, the average free energy satisfies \[\mathbb{E}[F_{n}]\leq-\log\int\exp(-nK(w,b))\varphi(w,b)\mathrm{d}w\mathrm{d}b+nS.\] We define a function \(G(U)\) of an integration region \(U\), \[G(U)=-\log\int_{U}\exp(-nK(w,b))\varphi(w,b)\mathrm{d}w\mathrm{d}b,\] where \(U\) is a measurable subset of \(\{(w,b)\}\). Then the following inequality holds: \[U_{1}\supset U_{2}\Longrightarrow G(U_{1})\leq G(U_{2}).\] Hence, if a set \(U\) satisfies the condition that \(K(w,b)\leq 1/n\) for all \((w,b)\in U\), it follows that \[\mathbb{E}[F_{n}]\leq G(U)+nS,\] and \[G(U)+nS\leq-\log\operatorname{Vol}(U)+nS+\mathrm{const.},\] where \(\operatorname{Vol}(U)\) is the volume of the set \(U\). In this paper, we chose the essential parameter set \(W_{E}\) as such a subset, and showed that the volume of this set is determined by the number of convergent parameters. ### Special Property of the ReLU Function Secondly, a special property of the ReLU function is discussed. The output of the ReLU function is nonnegative and equal to zero for a negative input. Hence, if all weights and biases from the \((k-1)\)th layer to the \(k\)th layer are negative, then the output of the \(k\)th layer is equal to zero. This property is used to evaluate the effect of the redundant parameters in each layer. Moreover, if all weights and biases from the \((k-1)\)th layer to the \(k\)th layer are positive, then the output of the \(k\)th layer is a linear function of the \((k-1)\)th output. This property is used to evaluate the effect of the redundant layers. These two points were employed in the mathematical proof of the main theorem, and they might also be useful for designing a deep ReLU neural network with a smaller generalization error.
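The two properties above can be verified directly. The following minimal numpy sketch — ours, with arbitrary layer sizes, not part of the original paper — checks numerically that a layer whose incoming weights and biases are all negative outputs exactly zero, while a layer whose weights and biases are all positive acts as an affine map on a nonnegative input.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)

h = relu(rng.normal(size=5))  # a nonnegative layer output, as after any ReLU layer

# Property 1: all-negative weights and biases switch the layer off entirely.
W_neg, b_neg = -rng.uniform(0.1, 1.0, (4, 5)), -rng.uniform(0.1, 1.0, 4)
print(relu(W_neg @ h + b_neg))  # -> [0. 0. 0. 0.]

# Property 2: all-positive weights and biases make ReLU act as the identity,
# so the layer is a linear (affine) function of its input.
W_pos, b_pos = rng.uniform(0.1, 1.0, (4, 5)), rng.uniform(0.1, 1.0, 4)
print(np.allclose(relu(W_pos @ h + b_pos), W_pos @ h + b_pos))  # -> True
```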
### Unrealizable Cases Thirdly, we discuss the case when the data-generating distribution is not realizable by a ReLU network. In this paper, we assumed in the theorem that a sample is subject to the data-generating function represented by a parameter \((w^{*},b^{*})\). In real-world problems, such an assumption is not satisfied in general. However, if a learning machine is sufficiently large such that there exists \((w^{*},b^{*})\) for which the Kullback-Leibler distance satisfies \[K(q(y|x)||p(y|x,w^{*},b^{*}))=o(1/n),\] then the same inequality as in the main theorem holds. In other words, a deep ReLU neural network in an overparametrized state keeps the generalization error caused by bias small enough while the error caused by variance remains bounded. This is a good property of a deep ReLU neural network when Bayesian inference is employed in learning. ## 7 Conclusion In this paper, we studied a deep ReLU neural network in an overparametrized case and derived an upper bound of the Bayesian free energy. Since the generalization error is equal to the increase of the free energy, the result of this paper shows that the generalization error of the deep ReLU neural network is bounded even if the number of layers is larger than necessary to approximate the data-generating distribution. ## Acknowledgement This work was partially supported by JSPS KAKENHI Grant-in-Aid for Scientific Research (C) 21K12025.
2310.13270
Meta-learning of Physics-informed Neural Networks for Efficiently Solving Newly Given PDEs
We propose a neural network-based meta-learning method to efficiently solve partial differential equation (PDE) problems. The proposed method is designed to meta-learn how to solve a wide variety of PDE problems, and uses the knowledge for solving newly given PDE problems. We encode a PDE problem into a problem representation using neural networks, where governing equations are represented by coefficients of a polynomial function of partial derivatives, and boundary conditions are represented by a set of point-condition pairs. We use the problem representation as an input of a neural network for predicting solutions, which enables us to efficiently predict problem-specific solutions by the forwarding process of the neural network without updating model parameters. To train our model, we minimize the expected error when adapted to a PDE problem based on the physics-informed neural network framework, by which we can evaluate the error even when solutions are unknown. We demonstrate that our proposed method outperforms existing methods in predicting solutions of PDE problems.
Tomoharu Iwata, Yusuke Tanaka, Naonori Ueda
2023-10-20T04:35:59Z
http://arxiv.org/abs/2310.13270v1
# Meta-learning of Physics-informed Neural Networks for Efficiently Solving Newly Given PDEs ###### Abstract We propose a neural network-based meta-learning method to efficiently solve partial differential equation (PDE) problems. The proposed method is designed to meta-learn how to solve a wide variety of PDE problems, and uses the knowledge for solving newly given PDE problems. We encode a PDE problem into a problem representation using neural networks, where governing equations are represented by coefficients of a polynomial function of partial derivatives, and boundary conditions are represented by a set of point-condition pairs. We use the problem representation as an input of a neural network for predicting solutions, which enables us to efficiently predict problem-specific solutions by the forwarding process of the neural network without updating model parameters. To train our model, we minimize the expected error when adapted to a PDE problem based on the physics-informed neural network framework, by which we can evaluate the error even when solutions are unknown. We demonstrate that our proposed method outperforms existing methods in predicting solutions of PDE problems. ## 1 Introduction Partial differential equations (PDEs) have been used in many areas of science and engineering for modeling physical [13, 1], biological [8], and financial [4] systems. The finite element method (FEM) [19] is the standard method for numerically solving PDEs. Recently, physics-informed neural networks (PINNs) [17, 11] have been attracting attention as a machine learning approach for solving PDEs. However, both methods are computationally expensive. In addition, they solve PDE problems from scratch even when similar PDE problems have been solved before. In this paper, we propose a meta-learning method for efficiently solving PDEs. The proposed method learns how to solve PDEs with our neural network-based model using many PDE problems, and uses the knowledge for solving newly given PDE problems. Figure 1 shows the framework of the proposed method. Our model takes a PDE problem and a point as input, and outputs a predicted solution at the point of the PDE problem. The neural networks are shared across problems, by which we can accumulate the knowledge of solving various problems in their parameters. Although meta-learning methods for PINNs have been proposed [2, 16, 7, 14], these existing methods transfer knowledge within a PDE problem with different physical parameters and/or different boundary conditions. On the other hand, the proposed method can transfer knowledge across different PDE problems if their governing equations are formulated as polynomial functions of partial derivatives. In our model, we encode a PDE problem by neural networks. Each PDE problem consists of a governing equation and boundary conditions. We assume PDE problems with a governing equation defined by a polynomial function of partial derivatives, and Dirichlet boundary conditions. Many real-world PDE problems satisfy this assumption. Coefficients of the polynomial are used as the representation of a governing equation, by which we can encode a governing equation as a fixed-size vector. Dirichlet boundary conditions are represented by a set of pairs of points and their conditions, where points are randomly generated on the boundaries. Using such sets, we can encode boundary conditions flexibly, even when boundary shapes and/or conditions differ across problems.
The set is transformed into a fixed-size boundary representation vector using a permutation-invariant neural network. A problem representation is obtained by a neural network from the governing equation coefficients and the boundary representation. By feeding the problem representation and a point into a neural network, we can output a problem-specific solution. When a PDE problem is newly given, our model can efficiently predict its solution by only the forwarding process of the neural networks, without updating parameters. To train our model, we minimize the expected error of predicted solutions when adapted to each PDE problem. The error is calculated using the PINN framework, which enables us to evaluate the error using governing equations and boundary conditions even when their solutions are unknown. This is essential for efficient meta-learning since computing solutions for many problems is very expensive. The expectation is approximated by the Monte Carlo method, where PDE problems are randomly generated. In the proposed method, the meta-training data can be generated unlimitedly since we do not need any observational data. This is preferable since the performance of neural networks generally improves as the number of training data increases [10]. The meta-learned model can be finetuned to a PDE problem, by which we can improve the approximation performance of the solution. The main contributions of this paper are as follows: 1) We present a meta-learning method for solving a broad class of PDE problems based on PINNs, where governing equations and boundary conditions are different across problems. 2) We propose a neural network-based model that takes a PDE problem and a point as input and outputs its solution. 3) We experimentally demonstrate that the proposed method can predict solutions of newly given PDE problems in about a second, and that its performance is better than that of the existing methods. ## 2 Related work Several meta-learning methods for PINNs have been proposed, such as Meta-PDE [16], HyperPINN [2], and Meta-auto-decoders [7]. Meta-PDE [16] meta-learns PINNs using model-agnostic meta-learning [5], where initial neural network parameters are trained such that the model achieves good performance with gradient descent updates. Since backpropagation through such procedures is costly in terms of memory, the total number of updates must be kept small. It is difficult to adapt to a wide range of PDE problems with a small number of updates from a single initialization. HyperPINN [2] is a hyper-network that generates a PINN given a PDE problem. Meta-auto-decoders [7] implicitly encode PDE problems into vectors, and feed the encoded vectors to a PINN to predict their solutions. These existing methods share knowledge across a narrow range of PDE problems, e.g., the Burgers' equation with different parameters and a fixed boundary condition. In contrast, the proposed method shares the knowledge across different classes of PDE problems by considering a general form of governing equations and representing boundary conditions by a set of boundary points and their solutions. When PDE problems are encoded implicitly as in Meta-auto-decoders, the required memory increases linearly as the number of training PDE problems increases, which prohibits the use of a huge number of PDE problems for training. On the other hand, since the proposed method explicitly encodes PDE problems using neural networks, the memory cost does not depend on the number of training PDE problems, and we can use an unlimited number of PDE problems for training. Figure 1: Our meta-learning framework.
Left) In the meta-training phase, our neural network-based model is trained using many PDE problems with various governing equations and boundary conditions. Right) In the meta-test phase, we are given new PDE problems. The trained model is adapted to each PDE problem. The adapted model outputs the approximated problem-specific solution \(\hat{u}\) at point \(\mathbf{x}\). The proposed method uses an encoder-decoder style meta-learning [20], or in-context learning [3], approach as with neural processes [6], where neural networks are used for encoding problem (task) representations and for decoding solutions. This approach approximates problem adaptation with neural networks, and does not require gradient descent iterations for problem adaptation, which enables us to perform meta-learning and inference efficiently. ## 3 Preliminaries ### PDE problem We consider the following parametric PDE problem on domain \(\Omega^{(s)}\subset\mathbb{R}^{D}\) with boundary \(\partial\Omega^{(s)}\), \[\mathcal{F}^{(s)}[u^{(s)}(\mathbf{x})]=0,\quad\mathbf{x}\in\Omega^{(s)},\qquad u^{(s)}(\mathbf{x})=g^{(s)}(\mathbf{x}),\quad\mathbf{x}\in\partial\Omega^{(s)}, \tag{1}\] where \(s\) is the index of the PDE problem, \(\mathcal{F}^{(s)}\) is the governing equation operator that involves \(u\) and partial derivatives of \(u\) with respect to point \(\mathbf{x}\), \(u^{(s)}(\mathbf{x})\) is the hidden solution, and \(g^{(s)}(\mathbf{x})\) is the Dirichlet boundary condition. Each PDE problem has its own governing equation, boundary condition, domain, boundary, and hidden solution. For example, in the case of the Burgers' equation with point \(\mathbf{x}=(t,x)\), the governing equation is \(\mathcal{F}^{(s)}=u_{t}+uu_{x}-\alpha u_{xx}\), and the boundary condition can be \(g^{(s)}(\mathbf{x})=-\sin(\pi x)\) if \(t=0\), and \(g^{(s)}(\mathbf{x})=0\) if \(x=-1\) or \(x=1\). ### Physics-informed neural networks PINNs approximate the solution of a PDE problem using a neural network \(\hat{u}^{(s)}(\mathbf{x};\boldsymbol{\theta})\). Here, \(\boldsymbol{\theta}\) denotes the parameters, and the neural network takes point \(\mathbf{x}\) as input and outputs the predicted solution. The parameters are trained by minimizing the following objective function: \[\hat{\boldsymbol{\theta}}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\frac{1}{N_{\text{f}}}\sum_{n=1}^{N_{\text{f}}}\parallel\mathcal{F}^{(s)}\big{(}\hat{u}^{(s)}(\mathbf{x}_{\text{fn}};\boldsymbol{\theta})\big{)}\parallel^{2}+\frac{1}{N_{\text{g}}}\sum_{n=1}^{N_{\text{g}}}\parallel g(\mathbf{x}_{\text{gn}})-\hat{u}^{(s)}(\mathbf{x}_{\text{gn}};\boldsymbol{\theta})\parallel^{2}, \tag{2}\] where \(\{\mathbf{x}_{\text{fn}}\}_{n=1}^{N_{\text{f}}}\) is a set of randomly sampled points in domain \(\Omega^{(s)}\), and \(\{\mathbf{x}_{\text{gn}}\}_{n=1}^{N_{\text{g}}}\) is a set of randomly sampled points on boundary \(\partial\Omega^{(s)}\). The first and second terms enforce the satisfaction of governing equation \(\mathcal{F}^{(s)}\) and boundary condition \(g^{(s)}\), respectively. We call the objective function in Eq. (2) the PINN error. ## 4 Proposed method ### Problem formulation We consider parametric PDE problems described in Section 3.1.
Governing equations \(\mathcal{F}^{(s)}\) for all problems are assumed to be a polynomial function of partial derivatives with a constant driving term, where the maximum degree of the polynomial is \(C\), and the maximum order of the derivatives is \(J\). In our experiments, we consider \(D=2\), \(C=2\), and \(J=2\). Many widely-used PDEs are included in this PDE class, such as Burgers' equation, \(u_{t}+uu_{x}-\alpha u_{xx}=0\), Fisher's equation, \(u_{t}-\alpha_{1}u_{xx}-\alpha_{2}u(1-u)=0\), Hunter-Saxton equation, \(u_{tx}+u_{x}u_{xx}-\alpha u_{x}^{2}=0\), heat equation, \(u_{t}-u_{xx}=0\), Laplace's equation, \(u_{xx}+u_{yy}=0\), and wave equation, \(u_{tt}-\alpha u_{xx}=0\). Unlike governing equations \(\mathcal{F}^{(s)}\), it is difficult to assume a specific functional form for boundary condition \(g^{(s)}\) in real-world PDE problems. Therefore, we do not impose such assumptions on the boundary condition. Our aim is to efficiently predict solutions of newly given PDE problems with a neural network by meta-learning from various PDEs. ### Model Our neural network-based model outputs a prediction of the problem-specific solution \(\hat{u}\) at point \(\mathbf{x}\) that is adapted to the given PDE problem with governing equation \(\mathcal{F}^{(s)}\), boundary \(\partial\Omega^{(s)}\), and boundary condition \(g^{(s)}\). Figure 2 illustrates our model. Since we assume that the governing equation is a \(C\)th-degree polynomial of \(J\)th-order partial derivatives with a \(D\)-dimensional point, it can be written in a general form of the polynomial. For example, in the case with \(D=2\) and point \(\mathbf{x}=(t,x)\), the general form is given by \[\mathcal{F}^{(s)}=\sum_{c_{1}+c_{2}+c_{3}+c_{4}+c_{5}+c_{6}\in\{0,1,\dots,C\}}\alpha_{c_{1}c_{2}c_{3}c_{4}c_{5}c_{6}}^{(s)}u^{c_{1}}u_{t}^{c_{2}}u_{x}^{c_{3}}u_{tt}^{c_{4}}u_{tx}^{c_{5}}u_{xx}^{c_{6}}, \tag{3}\] where \(\alpha_{c_{1}c_{2}c_{3}c_{4}c_{5}c_{6}}^{(s)}\in\mathbb{R}\) is the coefficient, and the degree of each term is \(C\) at most. Let \(\boldsymbol{\alpha}^{(s)}\) be a vector of coefficients \(\{\alpha_{c}^{(s)}\}\). The dimension of \(\boldsymbol{\alpha}^{(s)}\) is \(\binom{\binom{D+J}{J}+C}{C}\), where \(\binom{D+J}{J}\) is the number of partial derivatives \(\{u,u_{t},u_{x},\dots\}\). Governing equations with a lower degree of the polynomial, a lower order of partial derivatives, or lower-dimensional points can be represented by setting the corresponding coefficients to zero. Our model uses the coefficients \(\boldsymbol{\alpha}^{(s)}\) as the governing equation representation. We introduce boundary condition \(g^{(s)}\) into our model by representing it by a set of boundary condition data at randomly sampled points. In particular, we randomly generate \(N_{\text{g}}\) points at boundary \(\partial\Omega^{(s)}\), \(\mathbf{X}_{\text{g}}=\left\{\mathbf{x}_{\text{gn}}\right\}_{n=1}^{N_{\text{g}}}\), and evaluate the solution at the boundary points, \(\{g^{(s)}(\mathbf{x}_{\text{gn}})\}_{n=1}^{N_{\text{g}}}\).
The set of pairs \(\{(\mathbf{x}_{\text{gn}},g^{(s)}(\mathbf{x}_{\text{gn}}))\}_{n=1}^{N_{\text{g}}}\) is transformed into boundary representation \(\boldsymbol{\beta}^{(s)}\) using the following permutation-invariant neural network [21]: \[\boldsymbol{\beta}^{(s)}=\mathrm{NN}_{\mathrm{b}2}\left(\frac{1}{N_{\text{g}}}\sum_{n=1}^{N_{\text{g}}}\mathrm{NN}_{\mathrm{b}1}\left(\mathbf{x}_{\text{gn}},g^{(s)}(\mathbf{x}_{\text{gn}})\right)\right), \tag{4}\] where \(\mathrm{NN}_{\mathrm{b}1}\) and \(\mathrm{NN}_{\mathrm{b}2}\) are feed-forward neural networks. The output of the neural network in Eq. (4) is invariant even when the elements are permuted, which is desirable since the boundary representation should not depend on the order of the randomly sampled points. In addition, the neural network can take any number of points, \(N_{\text{g}}\), as input. With sampled points at problem-specific boundary \(\partial\Omega^{(s)}\), the boundary representation can consider not only the solution at the boundary but also the shape of the boundary. Problem representation \(\mathbf{z}^{(s)}\) is obtained by a neural network from coefficients \(\boldsymbol{\alpha}^{(s)}\) and boundary representation \(\boldsymbol{\beta}^{(s)}\) as follows: \[\mathbf{z}^{(s)}=\mathrm{NN}_{\mathbf{z}}(\boldsymbol{\alpha}^{(s)},\boldsymbol{\beta}^{(s)}), \tag{5}\] where \(\mathrm{NN}_{\mathbf{z}}\) is a feed-forward neural network. Solution \(\hat{u}\) at point \(\mathbf{x}\) of PDE problem \(s\) is predicted by inputting its problem representation \(\mathbf{z}^{(s)}\) to a neural network as follows: \[\hat{u}\left(\mathbf{x},\mathcal{F}^{(s)},g^{(s)},\mathbf{X}_{\text{g}};\boldsymbol{\Theta}\right)=\mathrm{NN}_{\mathbf{u}}(\mathbf{x},\mathbf{z}^{(s)}), \tag{6}\] where \(\mathrm{NN}_{\mathbf{u}}\) is a feed-forward neural network, and \(\boldsymbol{\Theta}\) denotes the unknown parameters in our model to be trained, which are the parameters in neural networks \(\mathrm{NN}_{\mathrm{b}1}\), \(\mathrm{NN}_{\mathrm{b}2}\), \(\mathrm{NN}_{\mathbf{z}}\), and \(\mathrm{NN}_{\mathrm{u}}\). All of the parameters are shared across different PDE problems, by which we can learn how to solve various PDE problems and use the knowledge for newly given PDE problems. Figure 2: Our model outputs predicted solution \(\hat{u}\) at point \(\mathbf{x}\) given governing equation \(\mathcal{F}^{(s)}\), boundary \(\partial\Omega\), and boundary condition \(g^{(s)}\). 1) Governing equation \(\mathcal{F}^{(s)}\) is represented by its coefficients \(\boldsymbol{\alpha}^{(s)}\). 2) Points at boundary \(\partial\Omega^{(s)}\) are randomly generated. 3) Each pair of boundary point \(\mathbf{x}_{\text{gn}}\) and its condition \(g^{(s)}(\mathbf{x}_{\text{gn}})\) is transformed into a vector by neural network \(\mathrm{NN}_{\mathrm{b}1}\). 4) These vectors are averaged, and then transformed to boundary representation \(\boldsymbol{\beta}^{(s)}\) by neural network \(\mathrm{NN}_{\mathrm{b}2}\). 5) Coefficients \(\boldsymbol{\alpha}^{(s)}\) and boundary representation \(\boldsymbol{\beta}^{(s)}\) are transformed into problem representation \(\mathbf{z}^{(s)}\) by neural network \(\mathrm{NN}_{\mathrm{z}}\). 6) Using the problem representation, solution \(\hat{u}\) at point \(\mathbf{x}\) is predicted by neural network \(\mathrm{NN}_{\mathrm{u}}\). Shaded circles represent given information. Unshaded circles represent variables calculated by our model. Rectangles represent operations in our model.
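The encoder-decoder structure of Eqs. (4)-(6) can be sketched in a few lines of PyTorch. The following is a minimal sketch of ours, not the authors' code: the class name, the shallow two-layer MLPs, and the hidden sizes are placeholder assumptions (the actual depths, widths, and activations used in the paper are given in the implementation section below).

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    # placeholder two-layer MLP; the paper uses deeper residual networks (Sec. 5)
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, d_out))

class MetaPINN(nn.Module):
    def __init__(self, dim_x=2, dim_alpha=28, dim_rep=64):
        super().__init__()
        self.nn_b1 = mlp(dim_x + 1, dim_rep)           # encodes (x_gn, g(x_gn))
        self.nn_b2 = mlp(dim_rep, dim_rep)             # applied after mean pooling, Eq. (4)
        self.nn_z = mlp(dim_alpha + dim_rep, dim_rep)  # Eq. (5)
        self.nn_u = mlp(dim_x + dim_rep, 1)            # Eq. (6)

    def encode(self, alpha, xg, g):
        # xg: (Ng, dim_x), g: (Ng, 1); the mean makes beta permutation invariant
        beta = self.nn_b2(self.nn_b1(torch.cat([xg, g], dim=-1)).mean(dim=0))
        return self.nn_z(torch.cat([alpha, beta], dim=-1))  # problem representation z

    def forward(self, x, z):
        # x: (N, dim_x); predict the solution at every point, no weight updates
        return self.nn_u(torch.cat([x, z.expand(x.shape[0], -1)], dim=-1))

model = MetaPINN()
z = model.encode(torch.rand(28), torch.rand(100, 2), torch.rand(100, 1))
u_hat = model(torch.rand(50, 2), z)  # shape (50, 1)
```

The PINN error of the next subsection would then be obtained by differentiating `u_hat` with respect to `x` via automatic differentiation; adapting to a new problem only requires the forward passes above.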
By taking problem representation \(\mathbf{z}^{(s)}\) as input, our model can output a problem-specific solution that depends on the given governing equation \(\mathcal{F}^{(s)}\), boundary \(\partial\Omega^{(s)}\), and boundary condition \(g^{(s)}\). ### Meta-learning We estimate unknown parameters \(\mathbf{\Theta}\) by minimizing the expected PINN error over PDE problems as follows: \[\hat{\mathbf{\Theta}}=\operatorname*{arg\,min}_{\mathbf{\Theta}}\mathbb{E}_{s}\Bigg{[}\mathbb{E}_{\mathbf{X}_{\text{g}}\in(\partial\Omega^{(s)})^{N_{\text{g}}}}\bigg{[}\mathbb{E}_{\mathbf{x}_{\text{f}}\in\Omega^{(s)}}\Big{[}\parallel\mathcal{F}^{(s)}\left(\hat{u}^{(s)}(\mathbf{x}_{\text{f}},\mathcal{F}^{(s)},g^{(s)},\mathbf{X}_{\text{g}})\right)\parallel^{2}\Big{]}+\frac{1}{N_{\text{g}}}\sum_{n=1}^{N_{\text{g}}}\parallel g(\mathbf{x}_{\text{gn}})-\hat{u}^{(s)}(\mathbf{x}_{\text{gn}},\mathcal{F}^{(s)},g^{(s)},\mathbf{X}_{\text{g}})\parallel^{2}\bigg{]}\Bigg{]}, \tag{7}\] where \(\mathbb{E}_{s}\) is the expectation over PDE problems, \(\mathbb{E}_{\mathbf{X}_{\text{g}}\in(\partial\Omega^{(s)})^{N_{\text{g}}}}\) is the expectation over sets of boundary points \(\mathbf{X}_{\text{g}}\) at boundary \(\partial\Omega^{(s)}\), \(\mathbb{E}_{\mathbf{x}_{\text{f}}\in\Omega^{(s)}}\) is the expectation over points \(\mathbf{x}_{\text{f}}\) in domain \(\Omega^{(s)}\), the first term is the discrepancy between governing equation \(\mathcal{F}^{(s)}\) and our model \(\hat{u}\) at point \(\mathbf{x}_{\text{f}}\), and the second term is the discrepancy between boundary condition \(g^{(s)}\) and the prediction by our model \(\hat{u}\) at boundary points \(\mathbf{X}_{\text{g}}\). The partial derivatives in \(\mathcal{F}^{(s)}(\hat{u}^{(s)})\) of our model are calculated by automatic differentiation [15]. By using the PINN error in Eq. (7), we can meta-learn our model by feeding PDE problems without their solutions. Algorithm 1 shows the meta-learning procedures of our model. The expectations are approximated with the Monte Carlo method by randomly sampling PDE problems at Lines 2-4, and sampling points at Lines 5 and 6. The algorithm requires a random generator for governing equation coefficients, domains, boundaries, and boundary conditions, by which we can specify the class of PDE problems we can solve. Since our model is differentiable, we can update parameters \(\mathbf{\Theta}\) using a stochastic gradient descent method [12]. By minimizing the expected error in Eq. (7), the generalization performance for unseen PDE problems can be improved. We can generate training data unlimitedly since the proposed method does not require any observational data. ### Finetuning The performance of our trained model on a PDE problem can be evaluated instantly using the PINN error. For applications that require low errors, we can improve the approximation performance of the predicted solution by our model with finetuning [7]. Suppose that our model is pretrained using various PDE problems with the procedures described in Section 4.3. First, given a PDE problem, we calculate its problem representation \(\mathbf{z}^{(s)}\) using the pretrained neural networks \(\mathrm{NN_{b1}}\), \(\mathrm{NN_{b2}}\), and \(\mathrm{NN_{z}}\).
Next, we finetune the parameters of neural network \(\mathrm{NN_{u}}\) and problem representation \(\mathbf{z}^{(s)}\) by minimizing the PINN error while fixing \(\mathrm{NN_{b1}}\), \(\mathrm{NN_{b2}}\), and \(\mathrm{NN_{z}}\) as follows: \[\hat{\mathbf{\theta}}_{\mathrm{u}}^{(s)},\hat{\mathbf{z}}^{(s)}=\underset{\mathbf{\theta}_{\mathrm{u}},\mathbf{z}^{(s)}}{\arg\min}\frac{1}{N_{\mathrm{f}}}\sum_{n=1}^{N_{\mathrm{f}}}\parallel\mathcal{F}^{(s)}\big{(}\mathrm{NN_{u}}(\mathbf{x}_{\mathrm{fn}},\mathbf{z}^{(s)};\mathbf{\theta}_{\mathrm{u}})\big{)}\parallel^{2}+\frac{1}{N_{\mathrm{g}}}\sum_{n=1}^{N_{\mathrm{g}}}\parallel g(\mathbf{x}_{\mathrm{gn}})-\mathrm{NN_{u}}(\mathbf{x}_{\mathrm{gn}},\mathbf{z}^{(s)};\mathbf{\theta}_{\mathrm{u}})\parallel^{2}, \tag{8}\] where \(\mathbf{\theta}_{\mathrm{u}}\) denotes the parameters in \(\mathrm{NN_{u}}\), and \(\{\mathbf{x}_{\mathrm{fn}}\}_{n=1}^{N_{\mathrm{f}}}\) and \(\{\mathbf{x}_{\mathrm{gn}}\}_{n=1}^{N_{\mathrm{g}}}\) are sets of randomly sampled points in domain \(\Omega^{(s)}\) and on boundary \(\partial\Omega^{(s)}\), respectively. Here, \(\mathbf{\theta}_{\mathrm{u}}\) is initialized by the pretrained model, and \(\mathbf{z}^{(s)}\) is initialized by the problem representation obtained in the first step. Using these initializations, we can incorporate the meta-learned knowledge for solving PDE problems. We do not need to finetune neural networks \(\mathrm{NN_{b1}}\), \(\mathrm{NN_{b2}}\), and \(\mathrm{NN_{z}}\) since we finetune problem representation \(\mathbf{z}^{(s)}\) directly. Finetuning the problem representation is effective since the number of parameters in the problem representation is much smaller than the number of parameters in these neural networks. ## 5 Experiments ### Settings We evaluated the proposed method using PDE problems with point \(\mathbf{x}=(t,x)\) of dimension \(D=2\), maximum degree of the polynomial \(C=2\), and maximum order of the derivatives \(J=2\). In this case, the dimension of coefficient vector \(\mathbf{\alpha}^{(s)}\) was 28. Each coefficient \(\alpha_{c}^{(s)}\) was set to zero with probability 0.75 and otherwise sampled uniformly from \([-1,1]\). The domains for all PDE problems were identical with \(t\in[0,1]\) and \(x\in[-1,1]\). The boundary conditions at \(x=-1\) and \(x=1\) were fixed at \(g^{(s)}(t,-1)=g^{(s)}(t,1)=0\) for all problems. The boundary conditions at \(t=0\) (i.e., the initial conditions) depended on the problem, where \(g^{(s)}(0,x)=(x-1)(x+1)(r_{1}^{(s)}x^{2}+r_{2}^{(s)}x+r_{3}^{(s)})\), and \((r_{1}^{(s)},r_{2}^{(s)},r_{3}^{(s)})\) were problem-specific coefficients sampled uniformly at random from \([-1,1]\). ### Compared methods We compared the proposed method with the neural process (NP) [6], model-agnostic meta-learning (MAML) [5], and multi-task learning (MT). All methods are trained by minimizing the expected PINN loss. The NP uses a neural network that takes boundary condition information and a point as input, and outputs a predicted solution at the point. NP corresponds to the proposed method without governing equation information. MAML and MT use neural networks that take a point as input and output a predicted solution at the point, as with the PINN. In MAML, the initial values of the model parameters are estimated such that the expected PINN loss is minimized when adapted by gradient descent. In MT, the model parameters are estimated by minimizing the expected PINN loss without problem adaptation. ### Implementation We used the same architecture of neural networks for all methods when applicable.
For \(\mathrm{NN_{b1}}\) and \(\mathrm{NN_{b2}}\), we used four-layered feed-forward neural networks with 256 hidden and output units. For the input of \(\mathrm{NN_{b1}}\), boundary point \(\mathbf{x}_{gn}\) and its solution \(g^{(s)}(\mathbf{x}_{gn})\) were concatenated. In \(\mathrm{NN_{z}}\), coefficients \(\mathbf{\alpha}^{(s)}\) were transformed by a four-layered feed-forward neural network with 256 hidden and output units, its output was concatenated with boundary representation \(\mathbf{\beta}^{(s)}\), and then transformed into problem representation \(\mathbf{z}^{(s)}\) by a linear layer with 256 output units. In \(\mathrm{NN_{u}}\), point \(\mathbf{x}\) was transformed by a linear layer with 256 output units, its output was concatenated with problem representation \(\mathbf{z}^{(s)}\), and then transformed by a five-layered neural network with 256 hidden units. All hidden layers had residual connections. We used sinusoidal activation functions [18] in \(\mathrm{NN_{u}}\), and rectified linear activation functions in the other neural networks. With NP, the output of \(\mathrm{NN_{b2}}\) was used as problem representation \(\mathbf{z}^{(s)}\). In \(\mathrm{NN_{u}}\) of MAML and MT, there was no concatenation with problem representation \(\mathbf{z}^{(s)}\). In MAML, for adapting to each PDE problem, we used one step of gradient descent with learning rate \(10^{-2}\), since a computational graph with more than one step for the adaptation could not be stored in memory. In all methods, PDE problems were randomly generated in each meta-learning iteration based on the settings described in the first paragraph of Section 5. For evaluating the governing equation error at Line 7 in Algorithm 1, we used \(N_{\mathrm{f}}=50\) uniformly randomly sampled points in the domain. For evaluating the boundary condition error at Line 8, we used \(N_{\mathrm{g}}=100\) randomly sampled points on the boundary. In particular, we used 50 initial points at \(t=0\), and 25 boundary points at \(x=-1\) and \(x=1\), respectively. The number of epochs was 30,000, where each epoch consisted of 9,800 PDE problems. We optimized using Adam [12] with initial learning rate \(10^{-3}\), which was decayed by half every 5,000 epochs. In a mini-batch, we used 512 PDE problems. We implemented all methods with PyTorch [15]. For the evaluation measurement, we used the PINN error in Eq. (2), the governing equation (GE) error, which is the first term in Eq. (2), and the boundary condition (BC) error, which is the second term in Eq. (2). The errors were evaluated on 100 PDE problems, where we used \(N_{\text{f}}=10000\) sampled points in the domain and \(N_{\text{g}}=100\) sampled points on the boundary. ### Results Table 1 shows the average errors. The proposed method achieved the best performance in terms of not only the total PINN error but also the governing equation and boundary condition errors. The error by NP was higher than that by the proposed method. This result indicates the effectiveness of including governing equation information in the proposed method. Since MAML could not perform many gradient descent steps for problem adaptation, its error was high. On the other hand, since the proposed method approximates problem adaptation with neural networks and those networks are trained with many randomly generated problems, it achieved a low error. Since MT does not output problem-specific solutions, its error was high. Figure 3 shows examples of approximated solutions by the proposed method, NP, and MAML.
The proposed method's outputs were more similar to the solutions than those of the other methods. The proposed method predicted different patterns of solutions depending on the given PDE problems without any problem-specific parameter updates. Table 2 shows the computation time in hours for meta-learning of 30,000 epochs using computers with a Tesla V100 GPU, a Xeon Gold 6148 2.40GHz CPU, and 512GB memory. Since MAML requires gradient descent steps for problem adaptation in each meta-learning iteration, it took a longer time. MT does not have neural networks for calculating problem representations, which resulted in a shorter meta-training time. Table 3 shows the average computation time in seconds for predicting solutions at 10,000 points in the domain and 100 points on the boundary using computers with a Xeon Gold 6130 2.10GHz CPU and 256GB memory without a GPU. The proposed method can predict solutions of newly given PDE problems in about a second. The solutions in Figure 3 were calculated by the PINN with 30,000 training epochs, which took 27.7 hours per PDE problem on average. In the proposed method, we randomly generate PDE problems for each meta-learning iteration as described in Algorithm 1. Table 4 shows the PINN errors when we used a fixed number of PDE problems \(\{1000,2000,4000,6000,8000\}\) for meta-training in the proposed method. When our model is trained with a fixed set of PDE problems, even when many (8,000) PDE problems were used, the performance was low. This result indicates the importance of using a wide variety of PDE problems for meta-learning by random sampling. In our model, since we use neural networks to obtain problem representations given PDE problems, we can use different PDE problems in each iteration for meta-learning. On the other hand, the implicit encoding of problem representations in Meta-Auto-Decoder [7] requires inferring the representation for each meta-learning PDE problem, which prohibits the use of randomly generated problems for each iteration. Figure 4 shows the PINN errors when finetuned for each PDE problem. Here, PINN represents the physics-informed neural network with the same architecture as MAML and MT, where its parameters were initialized randomly. \begin{table} \begin{tabular}{l c c c c} \hline & Ours & NP & MAML & MT \\ \hline PINN error & **0.034 \(\pm\) 0.004** & 0.157 \(\pm\) 0.026 & 0.131 \(\pm\) 0.021 & 0.201 \(\pm\) 0.025 \\ GE error & **0.009 \(\pm\) 0.001** & 0.089 \(\pm\) 0.025 & 0.062 \(\pm\) 0.018 & 0.082 \(\pm\) 0.023 \\ BC error & **0.024 \(\pm\) 0.003** & 0.068 \(\pm\) 0.005 & 0.069 \(\pm\) 0.006 & 0.119 \(\pm\) 0.009 \\ \hline \end{tabular} \end{table} Table 1: Average PINN errors, governing equation (GE) errors, and boundary condition (BC) errors with their standard errors. Values in bold typeface are not statistically significantly different at the 5% level from the best performing method in each row according to a paired t-test. \begin{table} \begin{tabular}{c c c c} \hline Ours & NP & MAML & MT \\ \hline 1.457 \(\pm\) 0.015 & 1.468 \(\pm\) 0.013 & 7.030 \(\pm\) 0.067 & 1.287 \(\pm\) 0.016 \\ \hline \end{tabular} \end{table} Table 2: Computation time in hours for meta-learning. \begin{table} \begin{tabular}{c c c c} \hline Ours & NP & MAML & MT \\ \hline 51.1 & 50.2 & 209.2 & 37.2 \\ \hline \end{tabular} \end{table} Table 3: Average computation times in seconds for predicting solutions at 10,100 points, and their standard errors. Figure 3: Approximated solutions by the proposed method (Ours), NP, and MAML for six PDE problems.
The equations show the governing equations. The value below each figure shows the PINN error. The left column (Solution) shows the solutions by PINNs with 30,000 training epochs for each problem. The proposed method achieved a low error with finetuning. Since NP and MAML were meta-learning methods, their errors were lower than that of PINN but higher than that of the proposed method. The error by MT without finetuning was lower than that by PINN, but after a few dozen finetuning epochs, PINN outperformed MT. Since MT does not have problem-specific parameters, it is difficult for MT to adapt to various PDE problems flexibly. In contrast, since the proposed method has a problem representation to be finetuned, it outperformed MT. The performance of the proposed method without any finetuning was comparable to PINN with 94 finetuning epochs, which took 312 seconds on average. ## 6 Conclusion In this paper, we proposed a meta-learning method for solving PDE problems based on physics-informed neural networks. Although we believe that our work is an important step toward accumulating reusable knowledge for efficiently solving PDE problems, we must extend our approach in several directions. First, we plan to enlarge the class of governing equations by encoding their mathematical expressions using neural networks [9]. Second, we will use our method for discovering governing equations from observational data by estimating the coefficients of governing equations in our model.
2308.02137
Learning the solution operator of two-dimensional incompressible Navier-Stokes equations using physics-aware convolutional neural networks
In recent years, the concept of introducing physics to machine learning has become widely popular. Most physics-inclusive ML-techniques however are still limited to a single geometry or a set of parametrizable geometries. Thus, there remains the need to train a new model for a new geometry, even if it is only slightly modified. With this work we introduce a technique with which it is possible to learn approximate solutions to the steady-state Navier--Stokes equations in varying geometries without the need of parametrization. This technique is based on a combination of a U-Net-like CNN and well established discretization methods from the field of the finite difference method.The results of our physics-aware CNN are compared to a state-of-the-art data-based approach. Additionally, it is also shown how our approach performs when combined with the data-based approach.
Viktor Grimm, Alexander Heinlein, Axel Klawonn
2023-08-04T05:09:06Z
http://arxiv.org/abs/2308.02137v1
# Learning the solution operator of two-dimensional incompressible Navier-Stokes equations using physics-aware convolutional neural networks ###### Abstract In recent years, the concept of introducing physics to machine learning has become widely popular. Most physics-inclusive ML-techniques, however, are still limited to a single geometry or a set of parametrizable geometries. Thus, there remains the need to train a new model for a new geometry, even if it is only slightly modified. With this work we introduce a technique with which it is possible to learn approximate solutions to the steady-state Navier-Stokes equations in varying geometries without the need of parametrization. This technique is based on a combination of a U-Net-like CNN and well established discretization methods from the field of the finite difference method. The results of our physics-aware CNN are compared to a state-of-the-art data-based approach. Additionally, it is also shown how our approach performs when combined with the data-based approach. keywords: Convolutional Neural Networks; Computational Fluid Dynamics; Machine Learning; Scientific Machine Learning ## 1 Introduction Fluid behavior is important in various fields such as civil, mechanical, and biomedical engineering, aerospace, meteorology, and geosciences. The governing equations for fluid behavior are typically the Navier-Stokes equations, which are solved using discretization approaches like finite difference, finite volume, or finite element methods. However, such computational fluid dynamics (CFD) simulations can be computationally intensive, especially for turbulent flow and complex geometries, and changing the geometry requires recomputing the entire simulation. Hence, there is a need for a quick surrogate model for CFD simulations. Such surrogate models encompass a variety of approaches, including linear reduced order models [10; 26], such as reduced basis [39] and proper orthogonal decomposition [43] models, as well as neural network-based models [9], like convolutional neural networks (CNNs) [5; 8; 19; 28; 34] and neural operators [24; 33]. In the present work, we focus on using neural networks as an approximation for CFD simulations. Instead of relying on a large dataset, we leverage the known governing equations of fluids to construct a physics-aware loss function and train our model to satisfy these equations discretely. This approach has recently become increasingly popular and was applied to dense neural networks (DNNs) to solve partial differential equations (PDEs) with little training data [41] or without training data [52], as well as inverse problems with limited training data [18; 41]. More recently, this idea was also applied to convolutional neural networks (CNNs) by using physics-aware loss functions to solve PDEs [2; 6; 11; 45; 49; 57], upscale and denoise solutions [12; 21], generally improve the predictive quality of a model [48; 56], or learn PDEs from data [31; 32]. For a comprehensive overview of scientific machine learning (SciML), we refer to [3; 55]. However, previous physics-informed machine learning approaches have imposed geometric constraints, such as rectangularity or parametrizability, or have been limited to specific geometries or even to a single geometry. In practice, these conditions are usually not met. Furthermore, the exact geometry is often unknown or at least not known in sufficient detail. This is, for example, the case with medical imaging procedures.
Therefore, we aim to develop a CNN that is capable of learning the flow field and pressure under physics constraints without labeled data for, with some restrictions, arbitrary geometries. We explicitly do not use methods such as coordinate transformations, which would limit us to specific geometries. This approach aims to advance existing methods for more realistic applications, and to the authors' knowledge, it is the first attempt to use a single CNN for multiple irregular geometries without labeled data. The rest of this paper is organized as follows. We first define our stationary boundary value problem in section 2. Then, we introduce CNNs as surrogate models in section 3 and directly afterwards extend this framework to physics-aware label-free learning of PDE solutions in section 4 and its application to the Navier-Stokes equations in section 5. Next, in section 6, we explain the architecture of our surrogate model. The creation of the training data used here is described in section 7. We present results for the data-based and physics-aware approach in section 9. Finally, we draw conclusions in section 10. ### Physics-Informed Machine Learning The method described in this paper can be categorized as physics-informed Machine Learning (ML) [36]. This term is used to describe ML methods for which prior knowledge, often known physical laws, is used for training. Since we use the governing equations to train our physics-aware CNN, our method can also be referred to as a Physics-Informed Neural Network (PINN). However, this would allow for possible confusion with the classical PINN approach, introduced in [40; 41] and pioneered in [25], where a physics-based loss function is constructed by clever use of automatic differentiation, classically for dense neural networks. In particular, the classical PINN approach requires an NN to directly approximate the solution function, i.e., the discretization is done by the NN. In contrast, in our approach, we use a finite difference-based discretization and predict the coefficients using a CNN. Consequently, we approximate the residuals using finite differences. Note, however, that we could also use finite element or finite volume techniques. ## 2 Model problem We consider the stationary Navier-Stokes equations describing incompressible Newtonian fluids with constant density, \[(\vec{u}\cdot\nabla)\vec{u}-\nu\Delta\vec{u}+\nabla p=0\quad\text{in }\Omega, \tag{1}\] \[\nabla\cdot\vec{u}=0\quad\text{in }\Omega,\] where \(\vec{u}\) is the velocity field, \(p\) the pressure, and \(\nu\) the kinematic viscosity. In our experiments, we consider rectangular channels \(\Omega=[0,6]\times[0,3]\) from which we have cut a star-shaped obstacle \(P\); cf. fig. 1 for an example. This design is inspired by [19; 5]. We apply the following boundary conditions: On the inlet \(\partial\Omega_{in}:=0\times[0,3]\), we prescribe a constant inflow velocity \(u=(3,0)^{\top}\), and at the outlet \(\partial\Omega_{out}:=6\times[0,3]\), we fix the pressure to \(p=0\). The lower and upper parts of the boundary, \([0,6]\times 0\) and \([0,6]\times 3\), respectively, correspond to walls, and hence we enforce no-slip conditions \(u=(0,0)^{\top}\). Finally, we choose \(\nu=5\cdot 10^{-2}\). Figure 1: Example of a channel geometry \(\Omega\) with a star-shaped obstacle. The obstacle is confined within a box (dashed line) with a distance of 0.75 to the boundary of \(\Omega\).
Depending on the shape and position of the obstacle, this setup leads to strongly varying flow patterns, making the channel problem a challenging benchmark for a CFD surrogate model. ## 3 Surrogate models based on CNNs Let us first discuss the approach to construct surrogate models via CNNs from [19; 5], which is the basis for this work. CNNs [27] are artificial neural networks (ANNs) that employ linear transformations based on discrete convolutions within the network layers, making them well suited for structured temporal or spatial data, where neighboring coefficients correspond to neighboring points in space or time, respectively. CNNs are therefore also suitable for approximating the solutions of partial differential equations on a structured tensor product grid: even though interaction is typically global, it is strongest for neighboring nodes. If the data structure is based on an unstructured grid, graph convolutional networks (GCNs) [23] can be employed as an alternative. Here, we only consider tensor product-structured data, and therefore, restrict ourselves to classical CNNs. In the approach from [19; 5], a CNN that maps from the geometry of the computational domain to the solution field(s) of the corresponding boundary value problem is trained; here, we specifically aim at predicting the velocity and pressure fields satisfying the Navier-Stokes equations eq. (1). In order to be able to employ standard CNNs, the geometry and solution fields are therefore interpolated to a tensor product grid. In two dimensions, the resulting data has a simple matrix structure; see fig. 9 for an exemplary pair of input and output data. As can be seen, due to the matrix structure of the input and output data, they can be directly identified as pixel images. This also allows us to use a large variety of techniques from the application of CNN models to image data. Let us now give a formal introduction of the approach. Therefore, let \(I_{g}\in\mathbb{R}^{w\times h}\) be the pixel image matrix representing some computational domain \(\Omega_{g}\); \(g\) indicates a generic index for a specific geometry. Moreover, let \(u_{g}^{i}\in\mathbb{R}^{w\times h}\) be a matrix representation of the \(i\)th component of the solution field of the boundary value problem solved on the computational domain \(\Omega_{g}\). Here, \(w\) and \(h\) correspond to the width and height of the pixel images, as well as the number of interpolation nodes in the \(x\) and \(y\) directions. By assembling the tensors \(u_{g}^{i}\), \(i=1,\ldots,d\), we obtain a third-order tensor \(u_{g}\in\mathbb{R}^{d\times w\times h}\), assuming that all solution components are defined on the same pixel grid. For a simple diffusion equation, \(d\) is one, and for the Navier-Stokes equations in two dimensions, we have two velocity components and one pressure component, such that \(d\) is three. In analogy to pixel images, each component of the solution field is regarded as one channel of the output image. Our goal is to train a CNN that approximates the solution operator \[\mathcal{U}:\mathbb{R}^{w\times h}\rightarrow\mathbb{R}^{d\times w\times h},\quad I_{g}\mapsto u_{g},\] that is, the operator that maps a pixel representation of the geometry of the computational domain to a pixel representation of the solution of the corresponding boundary value problem. Hence, our approach can be seen as an example of operator learning; cf. the related DeepONet [33] and Fourier neural operator [30] approaches, which employ different network architectures.
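To make the shapes in this operator-learning setup concrete, the following toy PyTorch sketch — ours, with made-up layer sizes; the models used in [19; 5] and in this paper are deeper, U-Net-like CNNs — maps a binary geometry image with one input channel to \(d=3\) output channels.

```python
import torch
import torch.nn as nn

# Toy stand-in for a CNN surrogate f_NN: geometry image (1, h, w) in,
# d = 3 solution channels (two velocity components and pressure) out.
f_nn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)

I_g = (torch.rand(1, 1, 96, 192) > 0.1).float()  # one binary geometry image
u_g = f_nn(I_g)
print(u_g.shape)  # torch.Size([1, 3, 96, 192]): d = 3 output channels
```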
Let us denote the CNN model by \(f_{NN}^{\Psi}\), where \(\Psi\) are the trainable network parameters. In [19; 5], a CNN \(f_{NN}^{\Psi}\) has then been trained to approximate the solution operator \(\mathcal{U}\) in a purely data-driven way. In particular, high-fidelity simulation data has been employed as the reference data \(u_{g}\), and the model has been trained to minimize the mean squared error (MSE) between the model output and \(u_{g}\). This corresponds to the minimization problem
\[\arg\min_{\Psi}\frac{1}{|T|}\sum_{g\in T}\left\|f_{NN}^{\Psi}\left(I_{g}\right)-u_{g}\right\|^{2}, \tag{2}\]
where \(T\) is a set of geometries used as training data. Note that training the model \(f_{\text{NN}}^{\Psi}\) with this loss function requires the availability of reference data \(u_{g}\); this means that a large amount of measurement or high-fidelity simulation data has to be available before the model training.

It remains to discuss how to construct \(I_{g}\) and \(u_{g}\) for a specific geometry \(g\). The approach is not restricted to a specific image representation of the geometry \(I_{g}\); in [5; 19], a binary or a signed distance function (SDF)-based image of the geometry has been employed. It can be observed that the SDF input yields slightly better results. However, it comes at a computational cost, and the computation of the exact SDF input image requires precise knowledge of the boundary of the geometry; in practical applications, for instance, when the geometry is only known from medical image data, the SDF can only be computed approximately based on the available image data. A binary input image can be generated more easily by checking if the center or most of the volume of each pixel lies within the computational domain \(\Omega_{g}\). Here, we only consider binary input images. The output pixel images \(u_{g}\) can be constructed from a reference solution \(\hat{u}\) by an interpolation operator, for instance, point-wise interpolation at the center points of the pixels or by averaging over the pixels (Clément-type interpolation). Here, \(\hat{u}\) could be high-fidelity simulation or measurement data. We discuss the data processing for this paper in more detail in section 7.
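To make the data-driven setup concrete, the following is a minimal sketch of how a training of this type could be set up in TensorFlow/Keras. The trivial two-layer architecture and all names are placeholders (the actual U-Net-type architecture is described in section 6), and the arrays `geometries` and `targets` are assumed to hold the pixel images \(I_g\) and the reference fields \(u_g\).

```python
import tensorflow as tf

# Hypothetical setup: 256x128 binary geometry images I_g as input,
# 3-channel reference fields u_g = (u, v, p) as output targets.
W, H, D = 256, 128, 3

# Placeholder two-layer architecture; the actual U-Net-type model is
# described in section 6.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(H, W, 1)),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(D, 3, padding="same"),
])

# Data-driven training (eq. (2)): minimize the MSE against the reference data.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# geometries: array of shape (N, H, W, 1); targets: array of shape (N, H, W, 3)
# model.fit(geometries, targets, batch_size=1, epochs=500)
```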
Next, we introduce the main novelty of this paper, that is, our approach to replace the data-based loss function eq. (2) by a physics-aware loss function that requires only knowledge of the PDE.

## 4 A physics-aware surrogate model based on CNNs

In this section, we extend the approach from section 3 by incorporating knowledge of the mathematical PDE model during network training. While the loss function in eq. (2) relies on reference data, our novel approach only requires a mathematical formula for the PDE residual, although a combination of both loss functions is possible. By considering the network output as a discrete finite difference solution on the same grid, we can approximate the PDE residual using finite difference stencils. This method can be extended to other discretization approaches on a structured grid, such as finite element or finite volume methods, but we focus on the finite difference discretization for simplicity and leave other discretization approaches to future work. Like the data-driven approach described in section 3, our new approach yields a surrogate model capable of predicting solutions for a range of geometries of the computational domain. It can be trained without using reference data or in combination with reference data.

To introduce the approach, we first explain how to apply it to a stationary diffusion equation; see also [17] for preliminary results for the stationary diffusion equation. Then, we discuss specifically how to apply the approach to the two-dimensional Navier-Stokes equations and how to handle the boundary conditions.

### Finite differences and discrete convolutions

In order to derive the implementation of finite difference stencils based on the discrete convolution, or rather the cross-correlation operation generally used in convolutional neural networks, we first consider a simple stationary diffusion problem: find the function \(u\) such that
\[\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}=f\quad\text{in }\Omega=[0,1]^{2}, \tag{3}\]
where we assume that \(f\) is a continuous function on \(\Omega\). Later, in section 5, we discuss the application to the Navier-Stokes equations, which are our focus in this work.

Now, let us introduce a uniform grid \(\Omega_{h}=\left\{\mathbf{x}_{ij}\mid 1\leq i,j\leq n+1\right\}\), with \(\mathbf{x}_{ij}=\left(\left(i-1\right)h,\left(j-1\right)h\right)\) and \(h=\frac{1}{n}\), and let \(U^{h}=\left(U^{h}_{ij}\right)_{ij}\in\mathbb{R}^{(n+1)\times(n+1)}\) with \(U^{h}_{ij}\approx u\left(\mathbf{x}_{ij}\right)\) be the matrix representation of a discretization of \(u\). Then, \(u^{h}=\left(u^{h}_{i}\right)_{i}\in\mathbb{R}^{(n+1)(n+1)}\) with
\[u^{h}_{(j-1)\cdot(n+1)+i}=U^{h}_{ij} \tag{4}\]
is the corresponding vector representation in lexicographical order. Discretizing eq. (3) using central differences then involves the approximations
\[\frac{\partial^{2}u}{\partial x^{2}}\approx\frac{U^{h}_{i+1,j}-2U^{h}_{i,j}+U^{h}_{i-1,j}}{h^{2}}\quad\text{and}\quad\frac{\partial^{2}u}{\partial y^{2}}\approx\frac{U^{h}_{i,j+1}-2U^{h}_{i,j}+U^{h}_{i,j-1}}{h^{2}}, \tag{5}\]
which could also be rewritten in terms of the entries of \(u^{h}\) using eq. (4). This leads to a linear system of equations
\[Au^{h}=f^{h}. \tag{6}\]
For the right-hand side vector, let \(F^{h}=\left(F^{h}_{ij}\right)_{ij}\in\mathbb{R}^{(n+1)\times(n+1)}\) be the matrix representation with \(F^{h}_{ij}=f\left(\mathbf{x}_{ij}\right)\) and \(f^{h}=\left(f^{h}_{i}\right)_{i}\in\mathbb{R}^{(n+1)(n+1)}\) be the corresponding vector representation, where
\[f^{h}_{(j-1)\cdot(n+1)+i}=F^{h}_{ij}. \tag{7}\]
We can observe that
\[Au^{h}=f^{h}\quad\Leftrightarrow\quad U^{h}*K=F^{h}, \tag{8}\]
where \(*\) is the cross-correlation operation and
\[K=\frac{1}{h^{2}}\begin{pmatrix}K_{-1,-1}&K_{-1,0}&K_{-1,1}\\ K_{0,-1}&K_{0,0}&K_{0,1}\\ K_{1,-1}&K_{1,0}&K_{1,1}\end{pmatrix}=\frac{1}{h^{2}}\begin{pmatrix}0&1&0\\ 1&-4&1\\ 0&1&0\end{pmatrix} \tag{9}\]
is the kernel corresponding to the finite difference stencil of the central difference scheme eq. (5). The cross-correlation is the linear transformation implemented in the convolutional layers of current state-of-the-art machine learning libraries; cf. [15, Section 9.1]. In its general form, the cross-correlation is given by
\[\left(I*K\right)_{ij}=\sum_{m}\sum_{n}I_{i+m,j+n}K_{m,n}, \tag{10}\]
where \(I\) is some matrix and \(K\) is, again, a kernel matrix, such as the one given in eq. (9). By convention, we omit the ranges of the sums and regard every matrix coefficient outside the range of indices as zero. Note that flipping the kernel in eq. (10) yields the discrete convolution
\[\left(I\bar{*}K\right)_{ij}=\sum_{m}\sum_{n}I_{i-m,j-n}K_{m,n};\]
see, for instance, [15, Section 9.1].
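The correspondence in eq. (8) can be checked numerically. The following minimal NumPy/SciPy sketch applies the kernel of eq. (9) via cross-correlation and compares it with the explicit central differences of eq. (5) at an interior node; the zero padding of `mode="same"` mimics the convention of treating out-of-range coefficients as zero.

```python
import numpy as np
from scipy.signal import correlate2d

n = 8
h = 1.0 / n
U = np.random.rand(n + 1, n + 1)       # grid values U^h_ij

# 5-point Laplacian stencil, cf. eq. (9)
K = np.array([[0.0,  1.0, 0.0],
              [1.0, -4.0, 1.0],
              [0.0,  1.0, 0.0]]) / h**2

# Cross-correlation U^h * K with zero padding, cf. eqs. (8) and (10)
AU = correlate2d(U, K, mode="same")

# Explicit central differences at an interior node, cf. eq. (5)
i, j = 3, 4
fd = (U[i + 1, j] - 2 * U[i, j] + U[i - 1, j]) / h**2 \
   + (U[i, j + 1] - 2 * U[i, j] + U[i, j - 1]) / h**2
assert np.isclose(AU[i, j], fd)        # both discretizations agree
```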
Since, in a CNN, the entries of the kernel are generally trainable, the cross-correlation and the discrete convolution are equivalent in this sense. By numbering the rows and columns of \(K\) from \(-1\) to \(1\), as done in eq. (9), we can easily show the equality of the left-hand sides in eq. (8):
\[\left(Au^{h}\right)_{(j-1)(n+1)+i}=\frac{1}{h^{2}}\left(U^{h}_{i-1,j}+U^{h}_{i+1,j}-4U^{h}_{i,j}+U^{h}_{i,j-1}+U^{h}_{i,j+1}\right)=\sum_{m=-1}^{1}\sum_{n=-1}^{1}U^{h}_{i+m,j+n}K_{m,n}=\left(U^{h}*K\right)_{ij},\]
where the first equality follows from eq. (5) and the remaining ones from eqs. (9) and (10). The equality of the right-hand sides follows from eq. (7), showing that a finite difference discretization can be implemented using the cross-correlation in a CNN. Next, we will use this analogy to derive a physics-aware loss function in the context of CNNs. Note that, in this whole subsection, we have neglected the treatment of boundary conditions; we discuss this directly in the context of the application of our approach to the Navier-Stokes equations in section 4.3.

### Derivation of a physics-aware loss function

Let us consider a generic system of PDEs given in implicit form on the same computational domain \(\Omega=[0,1]^{2}\):
\[F\left(x,\ u(x),\ \frac{\partial u}{\partial x_{1}}(x),\ \frac{\partial u}{\partial x_{2}}(x),\ \ldots\right)=0,\quad x\in\Omega. \tag{11}\]
Here, \(F\) is a nonlinear function which may depend on partial derivatives of \(u\) of any order. Therefore, eq. (11) is a generalization of the diffusion equation eq. (3). Analogously to section 4.1, we can discretize eq. (11) by approximating the derivatives using finite differences on a structured \((n+1)\times(n+1)\) grid. Appropriate finite difference schemes for various PDEs can be found in the literature; see, for instance, [29; 50; 51]. In section 5, we discuss the specific case of the Navier-Stokes equations, which are the main application considered in this work. As mentioned before, other discretization schemes on structured grids, such as finite element or finite volume discretizations, can also be used.

Let \(U^{h}\in\mathbb{R}^{d\times(n+1)\times(n+1)}\) be the tensor representation of the discrete solution with \(d\) components. Then, the discrete problem corresponding to eq. (11) can be written as
\[F^{h}\left(X^{h},\ U^{h},\ U^{h}\ast D^{x},\ U^{h}\ast D^{y},\ \ldots\right)=0, \tag{12}\]
where \(X^{h}=\left(\mathbf{x}^{h}_{ij}\right)_{ij}\in\mathbb{R}^{2\times(n+1)\times(n+1)}\) is the tensor containing all grid nodes, and \(D^{x}\) and \(D^{y}\) are kernel matrices corresponding to the finite difference discretization of the partial derivatives
\[\frac{\partial u}{\partial x_{1}}\quad\text{and}\quad\frac{\partial u}{\partial x_{2}},\]
respectively. As shown in section 4.1 for the example of a standard five-point stencil, any finite difference discretization of a partial derivative can be written as the cross-correlation with the corresponding finite difference stencil. Higher-order derivatives can therefore be treated analogously. The generally nonlinear system of equations eq. (12) can be reformulated as a least-squares problem for the discrete residual
\[\arg\min_{U^{h}}\left\|F^{h}\left(X^{h},\ U^{h},\ U^{h}\ast D^{x},\ U^{h}\ast D^{y},\ \ldots\right)\right\|_{2}^{2}. \tag{13}\]
Both problems are equivalent when the same boundary conditions are imposed. While eq. (12) lends itself to classical numerical solvers, the minimization problem eq. (13) is better suited for a neural network approach.
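As a sketch under the same conventions as above, the least-squares formulation eq. (13) for the diffusion problem eq. (3) can be written in a few lines; restricting the residual to the interior nodes anticipates the treatment of the boundary pixels discussed below.

```python
import numpy as np
from scipy.signal import correlate2d

def residual_loss(U, F, K):
    """Squared discrete residual ||U^h * K - F^h||^2, cf. eq. (13).

    Restricted to interior nodes, since the boundary treatment is
    discussed separately below."""
    r = correlate2d(U, K, mode="same") - F
    return np.sum(r[1:-1, 1:-1] ** 2)
```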
The solution tensor \(U^{h}\) on the grid \(\Omega_{h}\) can be replaced by the CNN output \(f^{\Psi}_{NN}\), resulting in
\[\arg\min_{\Psi}\left\|F^{h}\left(X^{h},\ f^{\Psi}_{NN},\ f^{\Psi}_{NN}\ast D^{x},\ f^{\Psi}_{NN}\ast D^{y},\ \ldots\right)\right\|_{2}^{2}, \tag{14}\]
where \(\Psi\) represents the network parameters. The cross-correlation can be easily implemented since it is a standard operation in state-of-the-art deep learning libraries. This approach is related to physics-informed neural networks (PINNs), which classically use dense feedforward neural networks for the discretization. In classical PINNs, the residual of the partial differential equation is also minimized in a least-squares sense; there, the differential operator is evaluated by automatic differentiation of the network function using the backpropagation algorithm [47]. Our CNN-based approach differs in that it employs a classical discretization, for instance, based on finite difference stencils: the neural network predicts the coefficients of the discrete solution; cf. section 4.1.

In eq. (14), the CNN input is intentionally omitted. In fact, if the solution of eqs. (12) and (13) is unique due to appropriate boundary conditions and the finite difference discretization, the solution does not depend on any input parameters. Hence, it would be sufficient to train the neural network for a constant output; this could simply be realized via a bias vector in the output layer. This setting is not yet relevant in practice, and the discussion in [17] shows that, for a simple stationary diffusion problem, solving eq. (14) using an SGD-based optimizer cannot compete with solving the discrete system eq. (12) using classical numerical solvers, such as gradient descent or the conjugate gradient method. However, the physics-aware loss function eq. (14) becomes relevant once the CNN model serves as a surrogate model for multiple configurations parameterized by the input of the CNN model; cf. section 3 for the data-based approach. Analogously, we consider image representations describing the geometry of the computational domain.

### Geometry-dependency and boundary conditions

To extend the physics-aware loss function to variations in the geometry of the computational domain, we introduce a discretization of eq. (11) that is compatible with the pixel image representation employed by the CNN. For this purpose, we consider a rectangle \(Q\) encompassing \(\Omega\) and use an equidistant grid \(Q_{h}\) with grid step size \(h\) to discretize it. The set of grid nodes is denoted by \(X=\{x_{i,j}\}\), with \(w\) grid nodes in the \(x\) direction and \(h\) grid nodes in the \(y\) direction; cf. fig. 2. Now, let \(I_{g}\) be an input image describing the geometry of the computational domain \(\Omega\); even though we can generally employ any geometry representation, it is important that the boundary pixels are uniquely determined because we explicitly use them in our approach; see, for instance, fig. 3 (left). Plugging the CNN model \(f_{NN}^{\Psi}\left(I_{g}\right)\), as described in section 3, into the physics-aware minimization problem eq. (14), we then obtain
\[\operatorname*{argmin}_{\Psi}\left\|F^{h}\left(X^{h},\ C_{I_{g}}\left(f_{NN}^{\Psi}\left(I_{g}\right)\right),\ C_{I_{g}}\left(f_{NN}^{\Psi}\left(I_{g}\right)\right)\ast D^{x},\ C_{I_{g}}\left(f_{NN}^{\Psi}\left(I_{g}\right)\right)\ast D^{y},\ \ldots\right)\right\|_{2}^{2}.
\tag{15}\]
Here, we only enforce the physics-awareness for those pixels which are inside the computational domain, as indicated by the input image representation \(I_{g}\), and the operator \(C_{I_{g}}\) corresponds to enforcing the boundary conditions. In the following, we abbreviate the notation of \(F^{h}\) and write \(\left\|F^{h}\left(X^{h},f_{NN}^{\Psi}\left(I_{g}\right),\ldots\right)\right\|\) for ease of readability.

In order for eq. (15) to be well-defined, we have to prescribe boundary conditions at the boundary pixels. There are at least two ways of enforcing boundary conditions in the context of physics-based neural network models: we can either add a loss term associated with the boundary conditions or explicitly encode the boundary conditions in the network function. In the literature, the former is denoted as _soft enforcement_ of boundary conditions, whereas the latter is denoted as _hard enforcement_ of boundary conditions; cf. [52] for a more detailed discussion. It has been observed in the literature that soft enforcement of boundary conditions can be problematic in different ways: it can make the training less robust and may lead to cases where the training does not converge to the solution; see, for example, [52]. Therefore, we focus on hard enforcement of boundary conditions. In the case of Dirichlet boundary conditions, for instance, this can easily be done by explicitly writing the correct values into the output image of the neural network before applying the loss function; at the same time, and as in classical discretization methods, we do not enforce the physical loss in those pixels. The explicit enforcement of the boundary conditions is indicated by the operator \(C_{I_{g}}\) in eq. (15). If different boundary conditions are prescribed on different parts of the boundary, we encode this by specific values in the input image; cf. fig. 3. See also section 5.2 for a specific discussion of our implementation of the boundary conditions for the Navier-Stokes equations.

Figure 2: A pixel image \(I_{g}\) (left) and the corresponding FD grid \(\Omega_{h}\) (right) of a geometry \(\Omega\), whose boundary \(\partial\Omega\) is drawn in green in the figure. Applying a five-point finite difference stencil \(D_{h}^{k}\) to the FD grid is equivalent to applying a convolutional filter \(K\) with fixed weights to the pixel image \(I_{g}\). Note that the values associated with the pixels and their grid node counterparts are not depicted.

Now, we extend the training of the surrogate model to multiple geometries. Therefore, analogously to the data-driven case eq. (2), we optimize the loss function over a training data set of geometries \(T\), resulting in the minimization problem
\[\operatorname*{argmin}_{\Psi}\frac{1}{|T|}\sum_{g\in T}\left\|F^{h}\left(X^{h},f_{\text{NN}}^{\Psi}\left(I_{g}\right),\ldots\right)\right\|_{2}^{2}. \tag{16}\]
The main difference to the data-based loss function eq. (2) from section 3 is that no reference flow data \(u_{g}\) but only the mathematical model of the PDE is necessary for the training.
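A possible training step for the minimization problem eq. (16) is sketched below. Here, `model`, `pde_residual`, and `apply_bcs` are placeholders for the CNN of section 3, the stencil-based discrete residual \(F^{h}\), and the hard boundary enforcement \(C_{I_g}\), respectively; `interior_mask` is assumed to be 1 for pixels inside \(\Omega_g\) and 0 elsewhere.

```python
import tensorflow as tf

# Placeholders: `model` is the CNN of section 3, `pde_residual` evaluates the
# discrete residual F^h via fixed-stencil convolutions, and `apply_bcs`
# realizes the hard boundary enforcement C_{I_g} of section 4.3.
optimizer = tf.keras.optimizers.Adam(1e-4)

def train_step(model, I_g, interior_mask):
    with tf.GradientTape() as tape:
        fields = apply_bcs(model(I_g), I_g)          # overwrite boundary pixels
        r = pde_residual(fields)                     # stencil convolutions
        loss = tf.reduce_sum(interior_mask * r**2)   # residual only inside Omega_g
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```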
Of course, both loss functions can also be combined into a hybrid loss function
\[\operatorname*{argmin}_{\Psi}\frac{1}{|T|}\sum_{g\in T}\omega_{\text{PDE},g}\left\|F^{h}\left(X^{h},f_{\text{NN}}^{\Psi}\left(I_{g}\right),\ldots\right)\right\|_{2}^{2}+\omega_{\text{data},g}\left\|f_{\text{NN}}^{\Psi}\left(I_{g}\right)-u_{g}\right\|^{2}, \tag{17}\]
where the weights \(\omega_{\text{PDE},g}\) and \(\omega_{\text{data},g}\) balance the two loss terms. Different variants of the hybrid loss function are possible, for instance,
\[\begin{array}{ll}\omega_{\text{data},g}=\alpha\ \wedge\ \omega_{\text{PDE},g}=0&\text{if reference data is available,}\\ \omega_{\text{data},g}=0\ \wedge\ \omega_{\text{PDE},g}=\beta&\text{otherwise,}\end{array}\]
or
\[\begin{array}{ll}\omega_{\text{data},g}=\alpha\ \wedge\ \omega_{\text{PDE},g}=\beta&\text{if reference data is available,}\\ \omega_{\text{data},g}=0\ \wedge\ \omega_{\text{PDE},g}=\beta&\text{otherwise.}\end{array}\]
Here, \(\alpha,\beta>0\) are weight parameters. Other strategies for choosing the weights are, of course, also possible. For a theoretical discussion, based on the neural tangent kernel, of how to balance PDE and data loss terms for classical PINNs, see [54]. Next, we discuss the details of our model for the specific problem considered here, that is, the Navier-Stokes equations eq. (1).

## 5 Application to the Navier-Stokes equations

In this work, we are concerned with the application of our approach to the Navier-Stokes equations. Therefore, we discuss, in this section, the derivation of the physics-aware loss function and the treatment of the boundary conditions.

### Physics-aware loss function

We have already introduced the Navier-Stokes equations in eq. (1) of section 2. Expanding them in terms of the individual components, we obtain
\[u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+\frac{\partial p}{\partial x}-\nu\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right)=0 \tag{18}\]
\[u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}+\frac{\partial p}{\partial y}-\nu\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right)=0 \tag{19}\]
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0, \tag{20}\]
where \(u\) and \(v\) are the \(x\)- and \(y\)-components of the flow field. Here, eq. (20) is the continuity equation, and eqs. (18) and (19) are the components of the momentum equation. We discretize eqs. (18) to (20) using the central difference stencils
\[\left(D^{x}u_{h}\right)_{i,j}:=\frac{u_{h}^{i+1,j}-u_{h}^{i-1,j}}{2h},\quad\left(D^{xx}u_{h}\right)_{i,j}:=\frac{u_{h}^{i+1,j}-2u_{h}^{i,j}+u_{h}^{i-1,j}}{h^{2}},\]
\[\left(D^{y}u_{h}\right)_{i,j}:=\frac{u_{h}^{i,j+1}-u_{h}^{i,j-1}}{2h},\quad\left(D^{yy}u_{h}\right)_{i,j}:=\frac{u_{h}^{i,j+1}-2u_{h}^{i,j}+u_{h}^{i,j-1}}{h^{2}}.\]
The resulting discretized Navier-Stokes equations read
\[u_{h}\odot D^{x}u_{h}+v_{h}\odot D^{y}u_{h}+D^{x}p_{h}-\nu\left(D^{xx}u_{h}+D^{yy}u_{h}\right)=0 \tag{21}\]
\[u_{h}\odot D^{x}v_{h}+v_{h}\odot D^{y}v_{h}+D^{y}p_{h}-\nu\left(D^{xx}v_{h}+D^{yy}v_{h}\right)=0 \tag{22}\]
\[D^{x}u_{h}+D^{y}v_{h}=0, \tag{23}\]
where \(\odot\) is the Hadamard product, that is, the element-wise product.
Following the discussion in section 4.1, this can equivalently be written using the cross-correlation operation \(*\) as follows:
\[U_{h}\odot U_{h}*D^{x}+V_{h}\odot U_{h}*D^{y}+P_{h}*D^{x}-\nu\left(U_{h}*D^{xx}+U_{h}*D^{yy}\right)=0 \tag{24}\]
\[U_{h}\odot V_{h}*D^{x}+V_{h}\odot V_{h}*D^{y}+P_{h}*D^{y}-\nu\left(V_{h}*D^{xx}+V_{h}*D^{yy}\right)=0 \tag{25}\]
\[U_{h}*D^{x}+V_{h}*D^{y}=0, \tag{26}\]
where, for simplicity, we overload the notation for the discrete differential operators \(D^{x}\), \(D^{xx}\), \(D^{y}\), and \(D^{yy}\) with the corresponding stencil matrices, and \(U_{h}\), \(V_{h}\), and \(P_{h}\) are the matrix representations of the solution fields corresponding to \(u_{h}\), \(v_{h}\), and \(p_{h}\), respectively. Furthermore, for simplicity, we omit the treatment of the boundary conditions for now and refer to section 5.2 for a detailed discussion. Equations (24) to (26) correspond to the discrete nonlinear system of equations
\[N(U_{h},V_{h})+G(P_{h})=0\quad\text{in }\Omega, \tag{27}\]
\[D(U_{h},V_{h})=0\quad\text{in }\Omega, \tag{28}\]
where each of the operators \(N\), \(G\), and \(D\) can be implemented using the cross-correlation and the Hadamard product as building blocks; cf. eqs. (24) to (26). The operator \(N\) is nonlinear, whereas \(G\) and \(D\) are both linear operators.

Now, we apply the physics-aware surrogate modeling approach described in section 4.3 to predict the solution of eqs. (27) and (28) for varying geometries. The surrogate model takes the form
\[f_{\text{NN}}^{\Psi}:\mathbb{R}^{w\times h}\rightarrow\mathbb{R}^{3\times w\times h},\quad I_{g}\mapsto\begin{pmatrix}U_{NN}^{\Psi}(I_{g})\\ V_{NN}^{\Psi}(I_{g})\\ P_{NN}^{\Psi}(I_{g})\end{pmatrix},\]
where \(I_{g}\) is, again, the pixel image representation of a geometry \(g\), and \(U_{NN}^{\Psi}\left(I_{g}\right)\), \(V_{NN}^{\Psi}\left(I_{g}\right)\), and \(P_{NN}^{\Psi}\left(I_{g}\right)\) correspond to the CNN predictions for the matrices (resp. images) \(U_{h}\), \(V_{h}\), and \(P_{h}\). Combining our physics-aware approach for multiple geometries as described in section 4.3 with this CNN model and the discrete residual of the Navier-Stokes equations eqs. (27) and (28), we obtain the loss function
\[\frac{1}{|T|}\sum_{g\in T}\left(\omega_{M}\|N(U_{\text{NN}}^{\Psi}(I_{g}),V_{\text{NN}}^{\Psi}(I_{g}))+G(P_{\text{NN}}^{\Psi}(I_{g}))\|_{2}^{2}+\omega_{D}\|D(U_{\text{NN}}^{\Psi}(I_{g}),V_{\text{NN}}^{\Psi}(I_{g}))\|_{2}^{2}\right). \tag{29}\]
Here, \(\omega_{M}\) and \(\omega_{D}\) are the weights of the momentum and divergence loss terms, respectively, and \(T\) is, again, the set of all training geometries \(g\). To complete our discussion of the application of the physics-aware approach to the Navier-Stokes equations, we discuss the specific treatment of the boundary conditions in the next section.

### Treatment of boundary conditions

As discussed in section 4.3, we enforce Dirichlet boundary conditions explicitly by hard-coding the values of the corresponding pixels in the output image. In particular, for our boundary value problems, as introduced in section 2, we consider the following boundary conditions: for the inlet boundary condition, we set
\[U_{1,j}=3,\quad V_{1,j}=0,\quad\forall j=1,\dots,h-1, \tag{30}\]
where \(h\) (height) is the number of pixels in the \(y\) direction. Moreover, the no-slip boundary conditions
\[U_{i,1}=U_{i,h}=V_{i,1}=V_{i,h}=0,\quad\forall i=1,\dots,w, \tag{31}\]
are enforced at the lower and upper walls, as well as the zero-pressure boundary condition
\[P_{w,j}=0,\quad\forall j=1,\dots,h-1, \tag{32}\]
at the outlet. Here, \(w\) (width) is the number of pixels in the \(x\) direction.
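To illustrate eqs. (24) to (26) and the hard enforcement of eqs. (30) to (32), the following TensorFlow sketch implements the stencils as fixed convolution kernels and overwrites the boundary pixels. The grid spacing, the NHWC tensor layout, and the assumption that the row index increases with \(y\) are our own choices for this illustration, not a definitive implementation.

```python
import tensorflow as tf

h, nu = 6.0 / 255.0, 5e-2   # assumed pixel spacing on [0, 6] and viscosity

def stencil(k):
    """Wrap a 3x3 finite difference stencil as a conv2d kernel (kh, kw, 1, 1)."""
    return tf.reshape(tf.constant(k, tf.float32), [3, 3, 1, 1])

# Rows index y (height), columns index x (width); the row index is assumed
# to increase with y.
Dx  = stencil([[0.0, 0.0, 0.0], [-1.0, 0.0, 1.0], [0.0, 0.0, 0.0]]) / (2 * h)
Dy  = stencil([[0.0, -1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]]) / (2 * h)
Dxx = stencil([[0.0, 0.0, 0.0], [1.0, -2.0, 1.0], [0.0, 0.0, 0.0]]) / h**2
Dyy = stencil([[0.0, 1.0, 0.0], [0.0, -2.0, 0.0], [0.0, 1.0, 0.0]]) / h**2

def corr(f, k):
    # tf.nn.conv2d computes the cross-correlation of eq. (10)
    return tf.nn.conv2d(f, k, strides=1, padding="SAME")

def ns_residuals(U, V, P):
    """Momentum and divergence residuals, cf. eqs. (24) to (26); NHWC tensors."""
    mom_u = U * corr(U, Dx) + V * corr(U, Dy) + corr(P, Dx) \
            - nu * (corr(U, Dxx) + corr(U, Dyy))
    mom_v = U * corr(V, Dx) + V * corr(V, Dy) + corr(P, Dy) \
            - nu * (corr(V, Dxx) + corr(V, Dyy))
    div = corr(U, Dx) + corr(V, Dy)
    return mom_u, mom_v, div

def apply_bcs(U, V, P):
    """Hard Dirichlet enforcement, cf. eqs. (30) to (32)."""
    # inlet (first pixel column): u = 3, v = 0
    U = tf.concat([3.0 * tf.ones_like(U[:, :, :1, :]), U[:, :, 1:, :]], axis=2)
    V = tf.concat([tf.zeros_like(V[:, :, :1, :]), V[:, :, 1:, :]], axis=2)
    # no-slip walls (first and last pixel row): u = v = 0
    zrow = tf.zeros_like(U[:, :1, :, :])
    U = tf.concat([zrow, U[:, 1:-1, :, :], zrow], axis=1)
    V = tf.concat([zrow, V[:, 1:-1, :, :], zrow], axis=1)
    # outlet (last pixel column): p = 0
    P = tf.concat([P[:, :, :-1, :], tf.zeros_like(P[:, :, -1:, :])], axis=2)
    return U, V, P
```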
Neumann pressure boundary conditions can be implemented by introducing ghost nodes outside the computational domain; however, in this work, we do not consider Neumann boundary conditions and instead refer to [11]. Note, though, that the use of ghost nodes may not be feasible in the case of irregular obstacle boundaries: interpolation to a pixel image may introduce corner points on the boundary where the normal vector is not well defined. Furthermore, to avoid using pressure values in pixels where the pressure is not defined, we employ one-sided differences in pixels adjacent to the corresponding boundaries. We encode the different stencils in an additional input image; cf. fig. 3 (right). The numbering scheme used for the grid nodes is as follows: 0 represents internal nodes, 1 corresponds to inflow boundary nodes, 2 represents no-slip boundary nodes, and 3 denotes outflow boundary nodes; nodes with numbers 4 and above require one-sided approximations for the pressure gradient. It is important to note that some pixels correspond to nodes outside the original domain \(\Omega\) due to obstacles. These nodes are marked as 0 in the geometry image and as 2 in the boundary image, and the velocity and residual values are set to 0 in these nodes. This is necessary since the governing equations are not defined there.

### Example on a single geometry

In order to verify that the physics-aware loss enables us to learn a solution of the Navier-Stokes equations, we consider a single fixed geometry. This is not useful in practice, since we could more efficiently discretize the Navier-Stokes equations directly using finite differences and solve the discrete system using suitable numerical solvers; see [17] for a comparison for a simple Laplace problem. We compare the model prediction against FVM simulations with OpenFOAM on two different meshes: a locally refined mesh (fig. 6(a)) and a mesh using the same pixel grid as the CNN model (removing the pixels inside the obstacle). The results are plotted in fig. 4. Compared with the simulation on the locally refined mesh, we obtain low relative \(L_{2}\) errors (defined in eq. (33)) of \(2.6\,\%\) for \(u\) and \(2.8\,\%\) for \(p\). As can be seen in fig. 4(a), the velocity error is particularly high near the obstacle, presumably due to non-resolved boundary layers. The comparison against the simulation on the rasterized mesh in fig. 4(b) shows a visual improvement of these errors, and the relative \(L_{2}\) error for the velocity reduces to \(2.2\,\%\). This suggests that part of the error is due to insufficient mesh resolution. An error of 0 cannot be obtained, since the CNN model is based on a finite difference discretization, whereas the reference data is computed using FVM simulations for both types of meshes. In summary, we conclude from the results for a single geometry that a CNN model with the physics-aware loss can learn a good approximation of the solution of the Navier-Stokes equations. Later, in section 9, we will investigate the performance of the CNN-based surrogate model trained on a data set consisting of multiple geometries, which introduces another level of complexity.

Figure 3: Low-resolution pixel image inputs that are used by our model. The geometry image fig. 3(a) is passed as input to the CNN, and the boundary image fig. 3(b) is used for the construction of the physics-aware loss. The geometry represented here was previously described in fig. 1.

Figure 4: Results for a single geometry. CNN model with Swish activation function and \(5\cdot 10^{-5}\) as the learning rate for the Adam optimizer.
## 6 Architecture of the convolutional neural network

In this section, we discuss the network architecture of our CNN-based surrogate models; we utilize the same type of architecture for both the data-based and the physics-aware models described in sections 3 and 4, respectively. We employ a fully convolutional neural network that only performs convolutions and up- or downsampling. For a comprehensive introduction to CNNs, we refer to [15, Chapt. 9] and the references therein.

Our CNN architecture draws inspiration from the U-Net architecture [46]. It consists of an encoder, transforming the input image(s) into a lower-dimensional representation in the bottleneck, and a decoder, transforming the bottleneck output into velocity and pressure output images. The symmetric encoder and decoder paths of the U-Net architecture are connected via skip connections. The performance of data-based surrogate models with U-Net architecture is generally superior to that of bottleneck CNNs without skip connections, as discussed in [5].

The encoder of our network consists of blocks comprising a \(3\times 3\) convolutional layer, followed by a \(2\times 2\) convolutional layer with a stride of 2, both followed by an activation. The first convolution extracts input features, while the second convolution reduces the spatial dimensions. We avoid max pooling for downsizing to prevent the high-frequency artifacts discussed in [20]. The decoder mirrors the encoder and includes upsampling layers with nearest-neighbor interpolation, each followed by a convolutional layer with an activation. This upsampling technique helps to avoid the checkerboard artifacts mentioned in [37] that can occur with deconvolutional layers. Matching encoder and decoder blocks are connected by skip connections following the U-Net architecture: the encoder block output is concatenated with the upsampling layer output, doubling the number of filters. Refer to fig. 5 for an illustration of this architecture with one decoder path and four levels. In our experiments in section 9, unless stated otherwise, we employ an 8-level model. Additionally, it should be noted that our models use separate decoder paths for each scalar output field, \(U\), \(V\), and \(P\). We refer to section 8 for additional comments on the choice of hyperparameters, including the model architecture.

Figure 5: Exemplary model architecture with four levels.
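A minimal Keras sketch of the encoder and decoder blocks described above might look as follows; the filter counts and layer arrangement are schematic, not the exact configuration of our models.

```python
import tensorflow as tf
from tensorflow.keras import layers

def encoder_block(x, filters):
    """3x3 feature extraction, then a 2x2 strided convolution for
    downsampling (no max pooling)."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    skip = x                      # passed to the matching decoder block
    x = layers.Conv2D(filters, 2, strides=2, padding="same", activation="relu")(x)
    return x, skip

def decoder_block(x, skip, filters):
    """Nearest-neighbor upsampling, skip concatenation (doubling the
    channels), then a convolution with activation."""
    x = layers.UpSampling2D(interpolation="nearest")(x)
    x = layers.Concatenate()([x, skip])
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
```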
## 7 Generation of (training) data

Training our surrogate models for the challenging task of predicting solutions across a wide range of geometries necessitates a substantial dataset. For the fully data-driven model, reference data is required along with the input images of the geometries. In contrast, the physics-aware model relies solely on the input images, as the physics loss replaces the need for reference data. Thus, generating training data involves creating random obstacle geometries and their pixel image representations. To validate our models, we additionally generate simulation meshes and conduct corresponding CFD simulations as reference data.

### Input data

The input data is needed for the training of the data-driven and the physics-aware model as well as of hybrid variants.

Geometry generation. In line with section 2, our focus lies on two-dimensional rectangular channel geometries featuring star-shaped obstacles that do not touch the boundary (inlet, outlet, upper and lower walls). The computational domain is given by \(\Omega=[0,6]\times[0,3]\), from which we remove a star-shaped obstacle \(P\) defined by its corners. The obstacle's corners are randomly positioned around a central point, inspired by the approach outlined in [4]. We only consider obstacles with a maximum width of 50 % of the channel's height and a minimum distance of 0.75 from any boundary. To prevent significant distortion of the geometry in the image representation, we impose a minimum angle of \(10^{\circ}\) at each vertex of the obstacle. This prevents excessively acute angles, as shown in fig. 8(a), which could result in a disconnected obstacle representation.

Figure 8: Exemplary representations of gross distortion caused by too acute angles. (a) The original geometry and (b) the reduced pixel image, here with a lower resolution of \(32\times 16\). The green dotted line shows the border of the computational domain \(\Omega_{P}\). Note that the original boundary is plotted behind the outer nodes.

Geometry image representation. As discussed before, CNNs rely on input data with a tensor-product structure. Therefore, we interpolate the geometry to a binary \(256\times 128\) pixel image. In fig. 7, an exemplary geometry is shown in its pixel image representation, where white pixels (encoded as 1) correspond to fluid cells, while gray pixels (encoded as 0) represent walls and the obstacle. The pixel value is determined based on whether the center of the pixel lies within the fluid domain or not. In previous studies, signed distance function (SDF) representations were also used to describe the geometry, generally leading to slightly better results; cf. [4; 5; 19]. For simplicity, we restrict ourselves to binary input images, as they are close to practically relevant cases; for instance, binary images can be directly generated from imaging techniques, such as magnetic resonance imaging (MRI).
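As an illustration of the geometry generation and rasterization described in this subsection, the following sketch creates a random star-shaped polygon and its binary pixel image. The radius bounds are placeholders, and the minimum-angle and wall-distance constraints discussed above are omitted for brevity; the walls themselves are also not marked here.

```python
import numpy as np
from matplotlib.path import Path

rng = np.random.default_rng(0)

def star_polygon(center, n_corners, r_min=0.3, r_max=0.75):
    """Random star-shaped polygon: corners at sorted random angles and
    radii around a central point."""
    angles = np.sort(rng.uniform(0.0, 2.0 * np.pi, n_corners))
    radii = rng.uniform(r_min, r_max, n_corners)
    return np.stack([center[0] + radii * np.cos(angles),
                     center[1] + radii * np.sin(angles)], axis=1)

def binary_image(polygon, w=256, h=128, extent=(6.0, 3.0)):
    """1 where the pixel center lies in the fluid, 0 inside the obstacle."""
    xs = (np.arange(w) + 0.5) * extent[0] / w
    ys = (np.arange(h) + 0.5) * extent[1] / h
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    inside = Path(polygon).contains_points(pts).reshape(h, w)
    return (~inside).astype(np.float32)
```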
### Reference output data

Reference output data is required in order to compute the data loss eq. (2), which is needed for the data-driven and hybrid modeling approaches. In the case of the fully physics-aware approach, the reference data is only used for validation.

Mesh generation. For each obstacle chosen as discussed in section 7.1, we generate a computational mesh using Gmsh [13]; we refer to its documentation1 for details on the mesh generation. In particular, we use the Frontal-Delaunay algorithm [44] to generate an unstructured triangular mesh for each case. Then, we refine the mesh near all walls, that is, near the upper and lower walls as well as near the obstacle, to resolve the boundary layers of the flow. In order to find a suitable level of refinement for the simulations, we performed a mesh convergence study on a representative geometry; cf. fig. 6(b). Figure 6(a) shows a mesh with refinement near the boundaries with \(\approx 40\,000\) elements, whereas we used meshes with \(\approx 160\,000\) to \(200\,000\) elements to generate our simulations; we do not display such a mesh for the sake of clarity.

Footnote 1: [https://gmsh.info/doc/texinfo/gmsh.html](https://gmsh.info/doc/texinfo/gmsh.html)

CFD simulations. The CFD simulations for the generation of the reference data have been performed using OpenFOAM v8 [53], a software based on the finite volume method (FVM). We utilized the simpleFoam solver, which solves the stationary incompressible Navier-Stokes equations using the semi-implicit method for pressure-linked equations (SIMPLE) algorithm [38; 7]. We refer to the OpenFOAM documentation2 for more details on the simpleFoam solver. The configuration is based on the pitzDaily example, adapted to laminar flow and with stricter convergence criteria.

Footnote 2: [https://doc.cfd.direct/openfoam/user-guide-v8/](https://doc.cfd.direct/openfoam/user-guide-v8/)

Figure 6: (a) Example of a locally refined mesh used for simulations. (b) Mesh convergence plot with relative errors compared against a reference simulation on a fine mesh with \(\approx 1\,300\,000\) elements. The depicted mesh corresponds to the second node in the convergence plot.

Interpolation of the simulation data. To compare the surrogate model's output with the reference simulation data, we interpolate the simulation data onto the same pixel grid. This involves evaluating the FVM solution at the centroids of the pixels, resulting in pixel images \(U_{h}\), \(V_{h}\), and \(P_{h}\) representing the velocity in the \(x\) and \(y\) directions and the pressure, respectively. It is important to note that values outside the computational domain \(\Omega_{P}\) are explicitly set to 0 in both the simulation data and the model prediction; this is clearly visible in our plots, such as in fig. 9. To achieve this, we mask the output images based on the geometry representation in the input image.

## 8 Some comments on hyperparameter choices

Our surrogate models depend on numerous hyperparameters, and their specific choices can significantly affect performance. These hyperparameters encompass the model architecture, such as the number of channels per convolutional layer, the depth of the U-Net architecture, and the activation function. They also include optimizer parameters like the learning rate, learning rate schedule, and batch size. Additionally, there are hyperparameters related to the loss function, such as the weights assigned to the individual loss terms and discretization parameters for the physics-aware loss, including the employed FD stencils. Furthermore, there are hyperparameters in a broader sense, for instance, the resolution of the input and output images. Considering the large number of hyperparameters, an exhaustive investigation of their impact is impractical. Therefore, instead of conducting a comprehensive grid search, we have fixed some hyperparameters while varying individual ones. To maintain brevity, we provide a qualitative discussion of the outcomes rather than presenting extensive results for this process.

Network architecture. Hyperparameters related to the model architecture determine the number of trainable parameters \(\Psi\) and hence the model's capacity to approximate the solution operator; cf. section 3. Due to the high complexity of the solution operator, we expect the model to require a large number of parameters, whereas too many parameters may lead to overfitting. We individually optimize the hyperparameters of the network architecture with regard to the validation errors; note that the errors are computed with respect to the reference simulation data. For the configurations described in section 7.1 and tested in section 9, we have determined that a depth of 8 levels and 64 channels in the first layer yield a good compromise: lower values result in reduced approximation properties of the model, while higher values lead to an increased computational effort and decreased generalization properties (in terms of the validation error).

Figure 7: Exemplary representations of the interpolation process, here with a lower resolution of \(32\times 16\). The green dotted line shows the border of the computational domain \(\Omega_{P}\). Note that the original boundary is plotted behind the outer nodes.
Activation function and learning rate. As the activation function, we use either the rectified linear unit (ReLU) [14] or the Swish function [42]. Whereas for the single-geometry case discussed in section 5.3 the use of the Swish activation function was beneficial, the ReLU function generally led to better results when training a surrogate model for multiple geometries; cf. section 9. In combination with ReLU, we always achieved the best results with a learning rate of \(10^{-4}\). With Swish, the optimal learning rate depended on the considered geometry and ranged from \(10^{-5}\) to \(10^{-4}\).

Optimizer and batch size. The best results for our surrogate models were obtained using the stochastic gradient descent optimizer with adaptive moment estimation (Adam) [22] and a batch size of 1. Other optimizers were unable to reliably find suitable minima, and larger batch sizes led to greatly increased errors.

Image resolution. Our fully convolutional model architecture can be applied to any resolution with a power-of-2 number of pixels in both the \(x\) and \(y\) directions; the numbers of pixels in the two directions can be chosen independently. However, if the resolution is too low, the geometry representation may be inaccurate, and the FD discretization error could be high. Conversely, higher image resolutions may lead to increased computational effort and reduced accuracy due to limited model capacity. We have observed that models for higher resolutions require an increased depth to achieve meaningful predictions. In addition, training larger and deeper models presents a more difficult optimization problem: we have found that models for pixel images with a width of 512 pixels and a height of 256 pixels and larger do not converge to suitable minima as reliably as models for smaller pixel images. Based on these considerations, we have chosen an image resolution of 256 pixels in width and 128 pixels in height, which has also been used in previous studies; cf. [19; 4].

## 9 Computational results

In this section, we present numerical results for our surrogate modeling approach. We investigate the fully data-driven approach (section 9.1), the fully physics-aware approach (section 9.2), and the hybrid approach (section 9.3), which combines both. We evaluate the models using the relative \(L_{2}\)-error
\[\frac{\|U_{NN}-U_{h}\|_{2}}{\|U_{h}\|_{2}} \tag{33}\]
as the performance measure. Here, \(U_{NN}\) is the prediction of our model, and \(U_{h}\) is the reference solution; unless otherwise stated, the reference solution corresponds to the result of an OpenFOAM simulation on a locally refined mesh evaluated at the midpoints of the pixels; cf. the discussion in section 7. We then compute the \(L_{2}\)-error on the pixel grid employed by the surrogate model, with the functions being constant on each pixel; as a result, the relative \(L_{2}\)-norm is equivalent to the relative \(l_{2}\)-norm. The dataset we use consists of \(\approx 5\,000\) geometries with randomly generated obstacles with 3, 4, 5, 6, or 12 edges, with \(\approx 1\,000\) geometries for each number of edges.
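For reference, the error measure eq. (33) amounts to a one-liner on the pixel grid; `pred` and `ref` are assumed to be masked as described in section 7.2, that is, set to 0 outside the computational domain.

```python
import numpy as np

def rel_l2(pred, ref):
    """Relative L2-error of eq. (33) on the pixel grid; equivalent to the
    relative l2-norm since the fields are constant on each pixel."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)
```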
All computations were performed on NVIDIA V100 GPUs with CUDA 10.1 using Python 3.6 and tensorflow-gpu 2.7 [1].

### Data-based approach

First, we analyze the fully data-driven approach as discussed in section 3; cf. [4]. In addition to the velocity field, which was the focus of [4], we also learn the pressure field here; we also made some changes to the network architecture to improve upon the results in [4]. Later, in sections 9.2 and 9.3, we will use the results for the fully data-driven model as a baseline for comparison.

In order to investigate the performance of the data-driven approach, we trained several models with increasing percentages of training data on the channel dataset (5 000 configurations); the remaining data is used for validation in each case. The training and validation performance of this approach, in terms of the relative \(L_{2}\)-errors and averaged residual norms, is summarized in table 1. We observe that the data-driven model is able to learn the velocity and pressure fields very well, where the errors in the velocity are generally lower. Moreover, the training errors are always a bit lower than the validation errors, which indicates some overfitting. We can observe that using a larger share of training data improves the performance of the model and slightly reduces the overfitting. For the best model, with 75 % training data, we also present plots of the velocity and pressure fields in figs. 10(a) and 10(b), comparing the reference and prediction data. Figure 10(a) shows a typical example of a quantitatively and qualitatively good prediction, whereas the prediction in fig. 10(b) exhibits some clearly unphysical artifacts in the pressure and velocity fields despite a feasible average error. We conjecture that this is due to the pure data loss, which does not include any physical knowledge; as we will discuss in section 9.2.1, the physics-aware loss improves this model behavior.

Figure 9: (a)-(d) Exemplary representations of the pixel images, here with a lower resolution of \(32\times 16\).

### Physics-aware approach

In this subsection, we analyze the proposed physics-aware approach in detail. The results on the whole channel dataset (5 000 configurations) for varying percentages of training data are summarized in table 2. We observe that the physics-aware surrogate model can be extended from a single geometry (section 5.3) to multiple geometries, as discussed theoretically in section 4. In particular, we obtain predictions with low errors in the velocity and pressure fields, and the performance improves slightly when increasing the share of training data. Interestingly, the overfitting effect is rather small, even when using only 10 % of the data for training the model. In fig. 11, we showcase different predictions from this model. We observe smooth solutions in all cases, without any unphysical artifacts visible. However, when inspecting the error plots, we observe that the error is highest in the vicinity of the obstacle, indicating that the uniform pixel grid cannot fully resolve the boundary layers of the flow. Hence, we observe some error compared with the reference data, which has been computed on a locally refined mesh; cf. fig. 6(a).
In the following, we discuss the results in more detail: in section 9.2.1, we compare the results with those of the data-based approach from section 9.1; in section 9.2.2, we discuss the correlation of high maximum velocities in the flow field and high prediction errors; and in section 9.2.3, we discuss the influence of the pixel grid on the prediction performance.

#### 9.2.1 Comparison to the data-based approach

Comparing the results in tables 1 and 2 for the data-based and the physics-aware approach, respectively, we observe that the data-based model generally performs better in terms of the relative errors with respect to the reference data. On the other hand, the residuals of the divergence and momentum equations on the pixel grid are lower for the physics-aware model. This can be easily explained by the fact that the data-based model is trained against the reference data, whereas the physics-aware model is trained to minimize the residuals.

\begin{table}
\begin{tabular}{r r|r r r r r}
training & error & \(\frac{\|u_{NN}-u\|_{2}}{\|u\|_{2}}\) & \(\frac{\|p_{NN}-p\|_{2}}{\|p\|_{2}}\) & divergence residual & momentum residual & \# epochs \\
\hline\hline
10\% & training & 2.07\% & 10.98\% & \(1.1\cdot 10^{-1}\) & \(1.4\cdot 10^{0}\) & 500 \\
 & validation & 4.48\% & 15.20\% & \(1.6\cdot 10^{-1}\) & \(1.7\cdot 10^{0}\) & \\
\hline
25\% & training & 1.93\% & 8.45\% & \(9.1\cdot 10^{-2}\) & \(1.2\cdot 10^{0}\) & 500 \\
 & validation & 3.49\% & 10.70\% & \(1.2\cdot 10^{-1}\) & \(1.4\cdot 10^{0}\) & \\
\hline
50\% & training & 1.48\% & 8.75\% & \(9.0\cdot 10^{-2}\) & \(1.1\cdot 10^{0}\) & 500 \\
 & validation & 2.70\% & 10.09\% & \(1.1\cdot 10^{-1}\) & \(1.2\cdot 10^{0}\) & \\
\hline
75\% & training & 1.43\% & 7.30\% & \(1.0\cdot 10^{-1}\) & \(1.5\cdot 10^{0}\) & 500 \\
 & validation & 2.52\% & 8.67\% & \(1.2\cdot 10^{-1}\) & \(1.5\cdot 10^{0}\) & \\
\end{tabular}
\end{table}
Table 1: Performance of the data-based approach on multiple geometries from the channel dataset compared to OpenFOAM simulations on _locally refined_ meshes. The divergence and momentum residuals are averaged over all configurations and pixels.

\begin{table}
\begin{tabular}{r r|r r r r r}
training & error & \(\frac{\|u_{NN}-u\|_{2}}{\|u\|_{2}}\) & \(\frac{\|p_{NN}-p\|_{2}}{\|p\|_{2}}\) & divergence residual & momentum residual & \# epochs \\
\hline\hline
10\% & training & 4.34\% & 9.75\% & \(2.8\cdot 10^{-2}\) & \(7.4\cdot 10^{-2}\) & 2 500 \\
 & validation & 5.70\% & 12.81\% & \(5.7\cdot 10^{-2}\) & \(2.0\cdot 10^{-1}\) & \\
\hline
25\% & training & 4.17\% & 9.61\% & \(2.5\cdot 10^{-2}\) & \(6.1\cdot 10^{-2}\) & 2 500 \\
 & validation & 4.82\% & 10.73\% & \(4.4\cdot 10^{-2}\) & \(1.3\cdot 10^{-1}\) & \\
\hline
50\% & training & 4.16\% & 9.47\% & \(2.4\cdot 10^{-2}\) & \(5.7\cdot 10^{-2}\) & 2 500 \\
 & validation & 4.37\% & 9.68\% & \(3.7\cdot 10^{-2}\) & \(1.0\cdot 10^{-1}\) & \\
\hline
75\% & training & 3.82\% & 8.71\% & \(1.8\cdot 10^{-2}\) & \(4.0\cdot 10^{-2}\) & 2 500 \\
 & validation & 3.91\% & 8.65\% & \(2.8\cdot 10^{-2}\) & \(8.0\cdot 10^{-2}\) & \\
\end{tabular}
\end{table}
Table 2: Performance of the physics-aware approach on multiple geometries from the channel dataset compared to OpenFOAM simulations on _locally refined_ meshes. The divergence and momentum residuals are averaged over all configurations and pixels.
At first sight, this may seem contradictory, since we would expect that, for the same boundary value problem, a lower residual also results in a lower error. However, as discussed in section 7.2, the reference data is generated by solving the Navier-Stokes equations with the FVM on a locally refined mesh, whereas we evaluate the residuals for the physics-aware model on a uniform pixel grid. This means that, in our current setting, the physics-aware model can never reach relative velocity and pressure errors of zero. Likewise, the data-based model will not minimize the residuals on the uniform pixel grid.

Interestingly, the physics-aware model is less prone to overfitting than the data-based model. In particular, despite slightly worse prediction errors, the gap between the training and validation errors is clearly lower for the physics-aware model, indicating better generalization capabilities.

Tables 1 and 2 also indicate that we performed significantly more epochs to train the physics-aware model than the data-based model on the same data set; in particular, we ran the training for \(2\,500\) instead of \(500\) epochs. In order to illustrate this, we present plots of the evolution of the mean squared errors for the velocity and pressure over the training process in fig. 12, for the physics-aware model with \(75\,\%\) training data and a new data-based model that we also trained for \(2\,500\) epochs on \(75\,\%\) training data. Note that this data-based model is not the same model for which we have presented results in this section so far. The validation errors of the data-based model reach their minimum very quickly; see fig. 12(a). It can clearly be seen that, when training for more than \(500\) epochs, the validation errors do not decrease further; on the contrary, they even increase slightly. Thus, longer training of the data-based model would only lead to stronger overfitting. In contrast, the validation errors for the physics-aware model decrease more slowly and reach their lowest value only in the further course of the training; see fig. 12(b).

Figure 10: Prediction of velocity and pressure for the data-based approach (Prediction) compared to the OpenFOAM simulation on the _locally refined_ mesh (Target). This model was trained on \(3\,750\) geometries. Both geometries are validation geometries.

Finally, we briefly discuss those cases where the data-based or the physics-aware model performs badly. As mentioned in section 9.1, the data-based approach occasionally makes predictions with unphysical artifacts in the flow and pressure fields; in fig. 10(b), we present one example from the validation data set where this is apparent. The corresponding prediction of the physics-aware model for the same sample is shown in fig. 11(a); we do not observe the same artifacts. In alignment with the lower overfitting of the physics-aware model, we conclude that the physics-aware model, based on the residuals of the Navier-Stokes equations, indeed learns the actual flow behavior better. On the other hand, we often see larger errors in the vicinity of the obstacle for the physics-aware approach. This might be attributed to the uniform pixel grid, which is not specifically refined for resolving the boundary layers in the physics-aware approach. In particular, it seems that the error originates at the obstacle and propagates downstream.
#### 9.2.2 Correlation of errors and velocities

In further analyzing the prediction errors, we observed a systematic correlation between the relative error in the velocity and the maximum velocity appearing in the flow field. In particular, depending on the size and position of the obstacle, the maximum velocity can vary significantly; see, e.g., the examples in figs. 10 and 11. We observe that geometries with a maximum velocity above 6 exhibit higher average errors compared to those below 6: 5.8 % for \(u\) and 12.7 % for \(p\) versus 2.5 % for \(u\) and 5.7 % for \(p\). In fig. 13(a), we observe that the maximum velocity ranges roughly from 4 to 9, following almost a normal distribution; hence, the lower and higher maximum velocities do not appear as often as maximum velocities of 6. Despite the fewer cases with lower maximum velocities, we observe that the prediction error generally increases with an increasing maximum velocity; cf. fig. 13(b). Moreover, there seems to be no relation to whether the configuration is in the training or validation set.

Figure 11: Velocity and pressure for the physics-aware approach (Prediction) compared to the OpenFOAM simulation on _locally refined_ meshes (Target). The model was trained on 3 750 geometries. All shown geometries are validation geometries.

Besides arguing based on the distribution of maximum velocities in the data set, it is not surprising that higher maximum velocities lead to higher errors, since they may correspond to higher Reynolds numbers and more complex flow patterns. Moreover, our physics-aware loss, as defined in eq. (29), incorporates second-order central stencils for all terms, including the convective terms. However, central stencil approximations for convective terms can be problematic when the cell Reynolds number exceeds 2; see, for instance, [35, Sec. 2.3]. For our uniform pixel grid with spacing \(h\approx 0.0235\), the threshold \(\mathrm{Re}_{h}=|u|h/\nu=2\) corresponds to \(|u|=2\nu/h\approx 4.25\,\frac{m}{s}\) in our case. Therefore, it may be necessary to consider alternative approximations for the convective terms. However, this is beyond the scope of this article.

#### 9.2.3 Influence of the pixel grid

As mentioned before, there seems to be an effect from an insufficient resolution of the boundary layers around the obstacle. In order to investigate potential effects of the resolution of the pixel grid, we rerun all configurations in our data set on the pixel grid; due to their structure, we also denoted these as _rasterized meshes_ in section 5.3. First of all, we observe that a significant number of OpenFOAM simulations on the rasterized meshes did not converge; cf. fig. 13(c). Furthermore, we also see a correlation between the maximum velocity and convergence in this case: for geometries with obstacles narrower than 1 m and flow fields with maximum velocities below 6 \(\frac{m}{s}\), almost all simulations converged; conversely, for larger obstacles and faster flow fields, only a portion converged. This is in alignment with our observation of higher errors for cases with higher maximum velocities.

Finally, we evaluate the physics-aware model only on those cases where the simulations on the rasterized meshes successfully converged. Figure 13(d) displays the errors of the predictions of the physics-aware model against the rasterized simulations. Comparing figs. 13(b) and 13(d), we observe a much better match when using the rasterized simulations as the reference. For geometries with maximum velocities below 6 \(\frac{m}{s}\), the average \(L_{2}\)-error in \(u\) decreases from 2.2 % to 1.5 %.
Similarly, for geometries with maximum velocities above 6 \(\frac{m}{s}\), the average \(L_{2}\)-error in \(u\) decreases from 6.7 % to 5.4 %. This shows that the pixel grid has an influence on the prediction performance. Further investigations of this aspect are out of the scope of this paper but will be the subject of future work.

### Hybrid approach

As discussed in section 9.2.1, both approaches have their advantages due to the different loss functions considered. The main advantages of the physics-aware approach are its generalization properties as well as the fact that no reference data is required. Its main disadvantage is that a uniform grid is used, which, in our setting, is not fine enough to fully resolve boundary effects and high velocities. The data-based approach, on the other hand, is able to better capture these, presumably because the reference data in the data loss encodes effects which cannot be fully resolved by the pixel grid. However, the data-based model is more prone to overfitting and unphysical flow artifacts.

Figure 12: Validation loss curves for the velocity and pressure over the trained epochs.

In order to combine some of the strengths of both approaches, we propose a hybrid approach, which employs a weighted sum of the data-based loss function eq. (2) and the physics-aware loss function eq. (29). This could, for example, be relevant:

* if a sufficient number of data samples is available and the generalization properties or physical consistency of the model should be enhanced, or
* if an insufficient number of data samples is available to cover the range of geometries; this could specifically be the case if measurement data is used or if the simulations are prohibitively expensive. In this case, the missing data can be replaced by using the physics-aware loss.

Figure 14 compares the overall performance of the data-based, physics-aware, and hybrid approaches. Here, the hybrid approach uses both the data-based loss and the physics-aware loss, each with an equal weight of 1. It can be observed that, for all ratios of training and validation data, the hybrid model outperforms the data-based and physics-aware models in terms of the relative errors in the velocity and pressure; in particular, the prediction performance for the pressure improves significantly, by roughly 50 %, compared with the other approaches. Unfortunately, however, the gap between training and validation performance (overfitting) is on a similar level as for the data-based model.

The performance of the data-based, the physics-aware, and the hybrid approaches with respect to the maximum velocity is shown in fig. 15. Whereas the data-based and the physics-aware models show a correlation between the prediction error and the maximum velocity, the hybrid model seems to be rather robust; interestingly, for the data-based approach, we observe a slight deterioration of the performance in the pressure prediction for lower maximum velocities. The results indicate that, if high-fidelity reference data is available, a combination of the data-based and physics-aware loss functions yields the best results.

Figure 13: Results investigating the correlation of the error with the maximum velocity appearing in the flow field.

### Weighting of loss terms

There are some elements whose modification may improve the prediction quality of our approach. This includes, for example, varying the weights of the loss terms; see eq. (29).
### Weighting of Loss Terms

There are some elements whose modification may improve the prediction quality of our approach. This includes, for example, varying the weights of the loss terms; see eq. (29). The initial prediction of the convolutional neural network (CNN) does not fulfill the divergence-free equation due to the random initialization of its weights. During the training process, the minimization of the sum of squared residuals eq. (29) is pursued, where equal weights (\(\omega_{\mathrm{M}}=\omega_{\mathrm{D}}=1\)) may cause the learned prediction to satisfy the momentum equation more than the mass equation. While a valid solution should satisfy both the mass equation and the momentum equation, the mass equation can be seen as primarily serving as a constraint, limiting the space of valid solutions. Furthermore, in our approach, we employ a variant of the Navier-Stokes equations, specifically the momentum equation, where the assumption \(\nabla\cdot\vec{u}=0\) is explicitly employed to simplify the derivation, as discussed in [16]. Therefore, although our primary interest lies in solving the momentum equations, it may be beneficial to confine the search space to velocity fields that comply with the divergence-free condition. Due to architectural constraints preventing easy modification of our CNNs to guarantee divergence-free predictions, we endeavor to achieve a similar outcome by augmenting the weight \(\omega_{\mathrm{M}}\) of the mass residual loss term in the loss function.

Fig. 16 shows the distribution of the relative \(L_{2}\) errors in velocity and pressure over the maximum occurring velocity for three models, for whose training we varied the weight \(\omega_{\mathrm{M}}\) of the mass residual in the physics-aware loss from 1 to 10 to 100. The averaged relative \(L_{2}\) errors are 4.8%, 3.7%, and 4.5% in the velocity and 10.4%, 7.9%, and 7.9% in the pressure, for \(\omega_{\mathrm{M}}\) values of 1, 10, and 100, respectively. Note that we used sixth-order finite difference stencils for all models here, as opposed to second-order stencils in section 9.2. In addition, the training and validation data sets are not identical to those used in the previous sections. Therefore, the error values reported in this section are not necessarily directly comparable to the previous ones.

Figure 14: Performance of the hybrid approach for abundant simulation results.

With an increase in the weight of the mass residual, we see a reduction in the error in the velocity as well as in the pressure, especially at higher velocities. This effect is very clear for \(\omega_{\mathrm{M}}=10\). In the averaged errors, this model improves by about 1% in velocity and about 2.5% in pressure compared to the model trained with equal weights, i.e., \(\omega_{\mathrm{M}}=1\). However, for the model trained with \(\omega_{\mathrm{M}}=100\), we see larger errors in the velocity overall. Here, even for low velocities, the errors in the pressure become larger. An even further increase of the weight \(\omega_{\mathrm{M}}\) led to a deterioration of the predictive capabilities, because while the predictions of the model increasingly satisfied the divergence-free constraint, the momentum residual grew. We have thus demonstrated that increasing the weight of the mass residual in the physics-aware loss can significantly improve the predictive capabilities of the model.
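A minimal sketch of this reweighting, assuming the physics-aware loss is assembled from separate momentum and mass residual tensors (the residual computation itself is not reproduced here):

```python
def weighted_physics_loss(momentum_res, mass_res, omega_M=10.0):
    """Physics-aware loss with the mass (divergence-free) residual scaled
    by omega_M (torch tensors assumed); omega_M = 10 performed best in
    the comparison above."""
    return (momentum_res ** 2).mean() + omega_M * (mass_res ** 2).mean()
```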
### Test Data

So far, our predictions have been limited to geometries that were present in the training or validation datasets. These datasets exclusively pertain to the model problem, as illustrated in fig. 1. Notably, these geometries encompass obstacles in the form of star-shaped polygons with up to 12 vertices. In this section, we will showcase predictions obtained using the physics-aware convolutional neural network for geometries that possess alternative types of obstacles. In doing so, we assess how well the model can generalize to previously unseen geometries, extrapolate beyond the training data, and effectively handle new and unique shapes.

The first geometry we test the model on is a circle with radius 0.4 that we place in the middle of the channel. This is a highly distinct type of obstacle compared to the star-shaped polygons present in the training dataset. Circles have a continuous curved boundary, which contrasts with the sharp edges of the star-shaped polygons, making them significantly novel geometries for the model. We show the prediction of our model for this geometry in fig. 16(a). The predictions are smooth, and high errors occur only near the obstacle. This prediction shows that our model can handle the curvature of a circle very well without having seen a single curved obstacle during training.

The second geometry we test our model on is a composition of an oval and a flower with 5 petals. The oval has a horizontal radius of 0.45 and a vertical radius of 0.25. The flower has a maximum radius of 0.4. The 5-petaled flower also differs from a circle because the curvature is not uniform throughout, but is interrupted by sharp bends where the petals meet. We show the prediction of our model for the second geometry in fig. 16(b). Again, we see smooth predictions with high errors occurring only near the obstacle. These two predictions exemplify that our model is capable of making reasonable and accurate predictions for geometries with significantly different obstacles.

Figure 15: Comparison of the relative \(L_{2}\)-error distribution for \(u\) and \(p\) with regard to the maximum occurring velocity for the data-based ((a) and (d)), combined ((b) and (e)), and physics-aware ((c) and (f)) approaches compared to OpenFOAM simulations on _locally refined_ meshes. All models were trained on 3 750 geometries.

### Computation Time

An important aspect of a surrogate model is the speed with which it can be evaluated. Therefore, in this section, we compare the time needed to evaluate the surrogate model with the time needed for a reference CFD simulation. A CFD simulation as described in section 7.2 takes between 10 and 60 minutes, depending on the geometry, and in individual difficult cases the simulation may take longer than 60 minutes. In comparison, we need only roughly 6 milliseconds (ms) to evaluate our surrogate models on a geometry. Thus, the evaluation of the surrogate model is between 100 000 and 600 000 times faster than a CFD simulation. This does not include the time required to mesh the geometry, which can be very time-consuming depending on the geometry and mesh fineness, and to set up the CFD simulation.

However, the time required for training the surrogate model is very high. For example, one training step for one geometry takes about 50 ms. Thus, training the model discussed in section 9.2, which was trained on about 3 750 geometries, takes roughly 5 days. The training, however, can be done in an offline phase before the actual deployment of the surrogate model, so the long training time is not as significant, especially when many simulations need to be run quickly.
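The quoted speedup follows directly from these timings; as a quick sanity check, assuming the 6 ms evaluation time:

```python
# Speedup of the surrogate over a CFD simulation, assuming 6 ms per evaluation.
eval_time_s = 6e-3
for sim_minutes in (10, 60):
    print(f"{sim_minutes} min: {sim_minutes * 60 / eval_time_s:,.0f}x faster")
    # 10 min: 100,000x faster; 60 min: 600,000x faster
```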
## 10 Conclusion

We have introduced a novel physics-aware approach to train convolutional neural networks as surrogate models that relies exclusively on the physics modeling the fluid behavior in multiple irregular geometries. Our approach does not rely on reference data and only requires the geometry image and boundary conditions. However, we have demonstrated that incorporating the physics-aware loss in the training process improves upon the data-based approach when reference data is available. This approach serves as an excellent surrogate model, with the evaluation being on the order of \(10^{5}\) times faster than a conventional CFD simulation.

Our physics-aware approach performs well for low-velocity geometries and demonstrates strong generalization capabilities. In contrast, the data-based approach struggles to generalize for the same low-velocity geometries. Despite using a coarser resolution and finite differences, which may not be ideal for the Navier-Stokes equations, our models achieve excellent predictions close to the reference solution for most geometries. However, for cases where our physics-aware models did not match the reference solution, even higher-resolution finite volume methods failed to obtain a converged solution. Consequently, accurate predictions cannot be expected in such cases.

Figure 16: Comparison of the relative \(L_{2}\)-error distribution for \(u\) and \(p\) with regard to the maximum occurring velocity for different approaches compared to OpenFOAM simulations on _locally refined_ meshes. All models were trained on 3 500 geometries.

## Acknowledgments

This work was performed as part of the Helmholtz School for Data Science in Life, Earth and Energy (HDS-LEE) and received funding from the Helmholtz Association of German Research Centers. We gratefully acknowledge the use of the computational facilities of the Center for Data and Simulation Science (CDS) at the University of Cologne and of the Department of Mathematics and Computer Science of the Technische Universität Bergakademie Freiberg, operated by the University Computing Center (URZ) and funded under grant application No. 100376434 by the State Ministry for Higher Education, Research and the Arts (SMWK) of the Federal State of Saxony for the project on Artificial Intelligence and Robotics for GeoEnvironmental Modeling and Monitoring.
2303.11239
Training Invertible Neural Networks as Autoencoders
Autoencoders are able to learn useful data representations in an unsupervised manner and have been widely used in various machine learning and computer vision tasks. In this work, we present methods to train Invertible Neural Networks (INNs) as (variational) autoencoders, which we call INN (variational) autoencoders. Our experiments on MNIST, CIFAR and CelebA show that for low bottleneck sizes our INN autoencoder achieves results similar to the classical autoencoder. However, for large bottleneck sizes our INN autoencoder outperforms its classical counterpart. Based on the empirical results, we hypothesize that INN autoencoders might not have any intrinsic information loss and thereby are not bound to a maximal number of layers (depth) after which only suboptimal results can be achieved.
The-Gia Leo Nguyen, Lynton Ardizzone, Ullrich Köthe
2023-03-20T16:24:06Z
http://arxiv.org/abs/2303.11239v2
# Training Invertible Neural Networks as Autoencoders

###### Abstract

Autoencoders are able to learn useful data representations in an unsupervised manner and have been widely used in various machine learning and computer vision tasks. In this work, we present methods to train Invertible Neural Networks (INNs) as (variational) autoencoders, which we call _INN (variational) autoencoders_. Our experiments on MNIST, CIFAR and CelebA show that for low bottleneck sizes our INN autoencoder achieves results similar to the classical autoencoder. However, for large bottleneck sizes our INN autoencoder outperforms its classical counterpart. Based on the empirical results, we hypothesize that INN autoencoders might not have any intrinsic information loss and thereby are not bound to a maximal number of layers (depth) after which only suboptimal results can be achieved.1

Footnote 1: Code available at https://github.com/Xenovortex/Training-Invertible-Neural-Networks-as-Autoencoders.git

Keywords: Machine Learning, Generative Models, INN, Normalizing Flows, Autoencoder

## 1 Introduction

In machine learning and computer vision, CNNs have been proven to be effective for various tasks, such as object detection [1], image captioning [2], semantic segmentation [3], object recognition [4] or scene classification [5]. However, all these approaches are based on supervised learning methods and require tremendous amounts of manually labeled data. This can be a limitation, since labeling images commonly involves human effort, which is impractical, expensive and not realizable on a large scale. As a result, current research has moved more towards unsupervised learning methods. In particular, autoencoders and VAEs (see [6, 7, 8]) play a fundamental role in learning encoded representations of the data in an unsupervised manner.

The idea of Invertible Neural Networks (INNs) goes back to the works of Dinh et al. [9, 10], which introduced tractable invertible coupling layers. Since then, the research of INNs has seen some relevant advances in further understanding the characteristics of INNs, their relation to classical models and applying INNs to common deep learning tasks. Recent works on INNs include the RevNet from Gomez et al. [11]. They show that RevNets can achieve the same performance as traditional ResNets [12] of equal size on classification tasks. Jacobsen et al. [13] introduced the iRevNet, a fully invertible network that can reproduce the input based on the output of the last layer. They additionally show that the lack of information reduction does not affect the performance negatively. Impressive results have been achieved with Glow-type networks as proposed by Kingma et al. [14]. The application of INNs to generative tasks has been explored by Danihelka et al. [15], Schirrmeister et al. [16] and Grover et al. [17]. Ardizzone et al. [18] have successfully applied INNs to real-world problems while proposing their own version of INNs, which allows for bi-directional training. Jacobsen et al. [13] and Grathwohl et al. [19] have observed similar behaviors between ResNets and INNs such as iRevNets and Glow-type networks. Leveraging the similarities between ResNets and INNs, Behrmann et al. [20] were able to train the standard ResNet architecture to learn an invertible bijective mapping by adding a normalization step. A ResNet trained this way can be used for classification, generation and density estimation.
Much of how INNs learn and their relation to traditional neural networks is still unknown and subject to current research endeavors. However, recent works on excessive invariance have given some insights into further understanding INNs, such as [21; 22; 23]. In this work, we propose methods to train INNs as (variational) autoencoders, which we call _INN (variational) autoencoders_. We compare their performance to conventional autoencoders for different bottleneck sizes on MNIST, CIFAR and CelebA. For all experiments, we made sure that the INN autoencoders and their classical counterparts have a similar number of trainable parameters, where all classical models were given an advantage in the number of trainable parameters. Our main contributions are:

* We propose a method to train INNs as (variational) autoencoders.
* We compare the performance of INN autoencoders and classical autoencoders for different bottleneck sizes on MNIST, CIFAR and CelebA.
* We demonstrate through experiments that INN autoencoders can achieve similar or better reconstruction results than classical autoencoders with a comparable number of trainable parameters.
* We show that the architecture restrictions on INN autoencoders to ensure invertibility do not negatively affect the performance, while the advantages of INNs are still preserved (such as a tractable Jacobian for both forward and inverse mapping as well as explicit computation of posterior probabilities).
* We provide an explanation for the saturation in reconstruction loss for large bottleneck sizes in classical autoencoders.
* Based on our experimental results, we propose the hypothesis that INNs might not have any intrinsic information loss and thereby are not bound to a maximal number of layers (depth) after which only suboptimal results can be achieved.

## 2 Training INNs as Autoencoders

### Invertible Neural Network (INN)

The building blocks of INNs are invertible _coupling layers_ as proposed by Dinh et al. [9; 10]. In this work, we use the modified version from Ardizzone et al. [18] as shown in Figure 1.

Figure 1: Visualization of the Invertible Coupling Layer

The coupling layer takes the input \(x\) and splits it into \(x_{1}\) and \(x_{2}\). The neural networks \(s_{2}\) and \(t_{2}\), also called _coupling functions_ in the setting of coupling layers, take \(x_{2}\) as input and scale/translate \(x_{1}\) by their outputs. Afterwards, \(x_{2}\) will be scaled and translated by the same approach. The output \(y\) will be the concatenation of \(y_{1}\) and \(y_{2}\). Mathematically, the forward process can be described as:

\[y_{1}=x_{1}\odot\exp(s_{2}(x_{2}))+t_{2}(x_{2})\tag{1}\]

\[y_{2}=x_{2}\odot\exp(s_{1}(y_{1}))+t_{1}(y_{1})\tag{2}\]

where \(s_{i}\) (_scale_) and \(t_{i}\) (_translation_) are arbitrarily complicated coupling functions represented by classical neural networks and \(\odot\) is the Hadamard product. In practice, we use the exponential function and clip extreme values to avoid numerical problems. The coupling layer is fully invertible, meaning the input \(x=[x_{1},x_{2}]\) can be reconstructed from the output \(y=[y_{1},y_{2}]\) with:

\[x_{1}=(y_{1}-t_{2}(x_{2}))\odot\exp(-s_{2}(x_{2}))\tag{3}\]

\[x_{2}=(y_{2}-t_{1}(y_{1}))\odot\exp(-s_{1}(y_{1}))\tag{4}\]

By stacking those coupling layers, we obtain an INN. In contrast to deep neural networks (DNNs), which can learn any function, an INN will always learn a bijective mapping.
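The following minimal NumPy sketch implements the forward pass of eqs. (1)-(2) and the inverse pass of eqs. (3)-(4); the coupling functions here are simple placeholders (in practice, \(s_{1}\), \(s_{2}\), \(t_{1}\), \(t_{2}\) are distinct, independently parameterized neural networks).

```python
import numpy as np

# Minimal sketch of an affine coupling layer (eqs. (1)-(4)).
# s1/s2/t1/t2 are placeholder coupling functions; in an INN they are
# small, independently parameterized neural networks.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
s1 = s2 = lambda v: 0.5 * np.tanh(W @ v)   # bounded "scale" outputs
t1 = t2 = lambda v: W @ v                  # "translation" outputs

def forward(x1, x2):
    y1 = x1 * np.exp(s2(x2)) + t2(x2)      # eq. (1)
    y2 = x2 * np.exp(s1(y1)) + t1(y1)      # eq. (2)
    return y1, y2

def inverse(y1, y2):
    x2 = (y2 - t1(y1)) * np.exp(-s1(y1))   # eq. (4): x2 depends on y1 only
    x1 = (y1 - t2(x2)) * np.exp(-s2(x2))   # eq. (3)
    return x1, x2

x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
assert np.allclose((x1, x2), inverse(*forward(x1, x2)))  # exact reconstruction
```

Note that the inverse recovers \(x_{2}\) first, since eq. (3) requires it; invertibility holds by construction, no matter how complicated the coupling functions are.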
Generally, DNNs learn a non-bijective mapping, causing an inherent _information loss_ during the forward process, which makes the _inverse process_ ambiguous (see Figure 2). As presented in [18], this problem can be solved by adding latent output variables \(z\) in addition to \(y\). The INN is then trained to put the information lost during the forward process \(x\to y\) into the latent variables \(z\). In other words, the latent variables \(z\) contain all the information that is not contained in \(y\), but was originally part of the input \(x\). As a result, the INN will learn a bijective mapping \(x\leftrightarrow[y,z]\) (see Figure 4).

Figure 2: Visualization of Information Loss during the Forward Process and Ambiguity of the Inverse Problem

INNs allow for _bi-directional training_ (see [18]). For every training iteration, the forward as well as the inverse process will be performed. This enables us to compute the gradients in both directions and optimize losses on both the input and output domains with every iteration. Bi-directional and cyclic training have improved the performance of GANs [24; 25] and autoencoders, as demonstrated by [26; 27; 28; 29].

### Artificial Bottleneck and INN Autoencoder

For INN mappings to be bijective, the dimensions of inputs \(x\) and outputs \([y,z]\) have to be identical. In contrast, the classical autoencoder has a bottleneck, which allows useful representations to be learned. Without this bottleneck restriction, learning to reconstruct an image would be a trivial task. In order to build an INN autoencoder, we need to introduce an artificial bottleneck that emulates its classical counterpart. This is achieved by zero-padding \(z\) at all times. With zero-padding, we ensure the extra dimensions given by \(z\) cannot be used for representation learning. As a result, the forward process of INN autoencoders is given by \(x\rightarrow[y,z\neq 0]\) and the inverse by \([y,z=0]\rightarrow\hat{x}\) (see Figure 3). The length of \(y\) defines the bottleneck dimension of INN autoencoders. The forward process of the INN can then be interpreted as the encoder and the inverse process as the decoder.

We train INNs as autoencoders by combining the artificial bottleneck with a reconstruction loss \(L(x,\hat{x})\). The artificial bottleneck and the reconstruction loss will enforce the INN to put as much information as possible into \(y\) and reduce the information contained in \(z\). The information loss through zero-padding is minimized if \(z\) is the zero vector in the first place. Therefore, we add a zero-padding loss \(\Omega(z,\vec{0})\) which compares \(z\) to the zero vector. The zero-padding loss \(\Omega\) is not essential for the INN autoencoder, since the reconstruction loss \(L\) alone with the artificial bottleneck will enforce \(z\) to converge to the zero vector. However, we observed that the zero-padding loss \(\Omega\) slightly improves the convergence rate and stability during training without negatively affecting the reconstruction quality, since its objective is aligned with the reconstruction loss. We want to emphasize that the INN autoencoder can be successfully trained without using the zero-padding loss \(\Omega\). We only add the zero-padding loss \(\Omega\) for technical reasons, because it does not affect reconstruction quality while making training more comfortable.

The total loss function for training an INN autoencoder is given by:

\[L(x,\hat{x})+\Omega(z,\vec{0})\tag{5}\]
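A minimal sketch of one evaluation of this objective with the artificial bottleneck, assuming `inn_forward` and `inn_inverse` wrap a stack of coupling layers as above; the L1/L2 norm choices follow the losses used in our experiments below.

```python
import numpy as np

def inn_autoencode(inn_forward, inn_inverse, x, bottleneck):
    """One evaluation of eq. (5): split the INN output into [y, z],
    reconstruct from [y, 0] (artificial bottleneck), and penalize the
    reconstruction error (L1) plus the magnitude of z (L2)."""
    out = inn_forward(x)
    y, z = out[:bottleneck], out[bottleneck:]
    x_hat = inn_inverse(np.concatenate([y, np.zeros_like(z)]))  # z := 0
    recon_loss = np.abs(x - x_hat).mean()   # reconstruction loss L
    padding_loss = (z ** 2).mean()          # zero-padding loss Omega
    return x_hat, recon_loss + padding_loss
```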
We can extend the INN autoencoder to an INN VAE by adding a distribution loss \(D\) to the loss function (5), which compares the learned latent distribution \(q(y)\) with the true prior distribution \(p(y)\):

\[L(x,\hat{x})+\Omega(z,\vec{0})+D_{MMD}(q(y)\mid\mid p(y))\tag{6}\]

However, instead of using KL-divergence [30] as commonly used for classical VAEs to compare distributions, we will use maximum mean discrepancy (MMD) [31] as proposed by Ardizzone et al. [18] to compare distributions for INNs. Figure 4 visualizes the loss terms in equation (6) for INN VAEs.
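As a minimal sketch of the MMD term, the estimator below uses a Gaussian kernel with an assumed bandwidth; the kernel choice is an illustration of the general MMD formulation and not necessarily the exact variant used in [18].

```python
import numpy as np

def mmd(y_q, y_p, bandwidth=1.0):
    """Simple (biased) MMD estimate between samples y_q ~ q(y) and
    y_p ~ p(y), using a Gaussian kernel with an assumed bandwidth."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(y_q, y_q).mean() + k(y_p, y_p).mean() - 2.0 * k(y_q, y_p).mean()

rng = np.random.default_rng(0)
print(mmd(rng.standard_normal((64, 8)), rng.standard_normal((64, 8))))  # ~0
```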
## 3 Experimental Setup

The goal of our experiments is to examine how our proposed INN autoencoder trained with the artificial bottleneck performs in comparison to its classical counterpart with a similar number of trainable weights. We trained our models until convergence for different bottleneck sizes and assessed the reconstruction loss on the testset. This will indicate how well our models have learned the representation of a given dataset.

Figure 4: Visualization of Training an INN Variational Autoencoder with Reconstruction Loss \(L\), Zero-Padding Loss \(\Omega\) and Distribution Loss \(D_{MMD}\) (For the simple INN autoencoder the distribution loss \(D_{MMD}\) is omitted.)

Figure 3: Visualization of Zero-Padding: Zero-Padding will be applied at all times. This ensures that the information in \(z\) is not available, hence creating an artificial bottleneck.

### Architecture

For MNIST, we trained four different classical models. The encoder of model _classic_ consists of four fully-connected layers (hidden sizes: 512, 256, 128 and bottleneck size) followed by ReLUs. The encoders of models _classic 1024_ and _classic 2048_ follow the same architecture; however, the hidden sizes are modified to be 1024, 1024, 1024, bottleneck size and 2048, 2048, 2048, bottleneck size, respectively. The model _classic deep_ follows the same architecture as model _classic 1024_; however, it has two additional fully-connected layers of size 1024. For CIFAR-10 and CelebA, the encoder consists of five convolutional layers (CIFAR-10: kernel size 3, stride 1 / CelebA: kernel size 4, stride 2) and one fully-connected layer. For all models the decoder mirrors the encoder, whereby the last activation function is replaced by a tanh function.

The INN autoencoder on MNIST consists of three coupling layers with convolutional coupling functions (hidden channel size: 100, kernel size: 3 and leaky ReLU slope: 0.1) and one coupling layer with fully-connected coupling functions (hidden layer size: 180). On CIFAR-10, we use the same architecture as for MNIST. However, the convolutional coupling functions have hidden channel size 128 and the fully-connected coupling functions have hidden layer size 1000. For CelebA, our INN model consists of six convolutional coupling layers with the same coupling functions as for CIFAR-10 and one fully-connected coupling layer with hidden layer size 200.

### Training

For both classical and INN models, we used the L1-norm as reconstruction loss. The zero-padding loss \(\Omega\) is chosen to be the L2-norm. All models are trained with adaptive learning rates using Adam optimization [32] (weight decay \(\rightarrow\) classic: \(1\times 10^{-5}\), INN: \(1\times 10^{-6}\)). The classical models were trained for (MNIST/CIFAR-10: 100, CelebA: 10) epochs with a batch size of (MNIST/CIFAR-10: 128, CelebA: 32). The learning rate started at \(1\times 10^{-3}\) and was decreased by a factor of 10 at (MNIST: every 10th, CIFAR-10: 60th and 85th, CelebA: 8th and 9th) training epoch. The INN models were trained for (MNIST: 10, CIFAR-10: 15, CelebA: 8) epochs with batch size (MNIST/CIFAR-10: 128, CelebA: 32). The learning rate started at \(1\times 10^{-3}\) and was decreased by a factor of 10 at the (MNIST: 8th, CIFAR-10: 10th, CelebA: 6th and 7th) epoch.

## 4 Results and Discussion

In Figure 5, we summarize our results by plotting the test reconstruction loss and the corresponding number of parameters in our models against the bottleneck size for MNIST, CIFAR-10 and CelebA. We observe that across all three datasets, the INN autoencoder delivers results comparable to its classical counterpart for small bottleneck sizes. Initially, we expected the reconstruction loss to further decrease for larger bottleneck sizes for both classical and INN models. However, the reconstruction loss for classical autoencoders seems to saturate for larger bottleneck sizes, showing no further improvement despite increasing the bottleneck (see Figure 5). The saturation sets in at about bottleneck sizes of 12 (MNIST), 250 (CIFAR-10) and 200-250 (CelebA).3 Based on the results, it seems that saturation sets in at the approximate intrinsic dimension of the datasets.

Footnote 3: Due to hardware/GPU limitations, we only trained our CelebA models for a sparse number of bottleneck sizes. This makes it more difficult to determine the exact point at which saturation sets in.

In contrast, the INN autoencoder reconstruction loss resembles the expected exponential decay curve with better performance for larger bottleneck sizes. Since the INN autoencoder reconstruction loss does not saturate, it performs significantly better for larger bottleneck sizes than the classical autoencoder. In Figure 6, we show examples of randomly selected MNIST images reconstructed by our INN and classical model with bottleneck size 32. Especially the difference between original and reconstructed image shows that the INN autoencoder produces better reconstructions than the classical autoencoder.

Besides the reconstruction loss, we additionally compared the number of trainable parameters in our models for different bottleneck sizes (see Figure 5). The INN autoencoder architecture does not have to be changed for varying bottleneck sizes; only the split between \(y\) and \(z\) needs to be redefined. For the classical autoencoder, at least the bottleneck layer has to be changed, resulting in a higher number of trainable parameters for larger bottleneck sizes.

Since the saturation in reconstruction loss for our classical models was quite unexpected, we conducted additional experiments to further investigate the cause of the observed saturation. For MNIST, we trained three additional models: _classic 1024_, _classic 2048_ and _classic deep_. Even though the reconstruction losses of models _classic 1024_ and _classic 2048_ improve for larger bottleneck sizes compared to model _classic_, they still do not reach the same performance as our INN model. The difference in performance between models _classic 1024_ and _classic 2048_ is relatively small, taking into account that model _classic 2048_ has twice the hidden layer size of model _classic 1024_. This indicates that further increasing the hidden layer size would not yield significant improvements. Model _classic deep_ saturates at an even higher reconstruction loss than all other models, despite being deeper.
This lets us conclude that model _classic_ was already deep and complex enough for the given task on MNIST. This eliminates the possibility that our model _classic_ fails to achieve the same reconstruction performance as our INN model simply because it was not deep enough. Despite all the models being different, the saturation still sets in at about the same bottleneck size of 12 for all four classical models (see Figure 5). It is important to note that the hyperparameters for our models were only optimized for bottleneck sizes of 12 (MNIST) and 300 (CIFAR-10, CelebA). For all other bottleneck sizes, we trained our models with the same hyperparameters. However, for our MNIST classical models, we additionally optimized the hyperparameters for bottleneck size 64. Even with optimal hyperparameters, we could not reach the same performance as with our INN model. We also checked the reconstruction loss of our classical models on both the train- and testset and found no significant divergence, which rules out the possibility of overfitting.

Given the results outlined above, we hypothesize that the cause of the observed saturation leads back to the fundamental difference in network architecture of classical and INN autoencoders. Our hypothesis builds upon recent works of Yu et al., who applied information theory to CNNs [33] and specifically to autoencoders [34]. By measuring the mutual information between input and feature maps of various layers within a Deep Neural Network (DNN), they found that the mutual information decreases for deeper layers. Therefore, they concluded: _"However, from the DPI perspective validated in this work ([...]), the deeper the neural networks, the more information about the input is lost, thus the less information the network can manipulate. In this sense, one can expect an upper bound on the number of layers in DNNs that achieves optimal performance."_ (cited from [34]). We believe that this is exactly what we observe with our model _classic deep_. It would also explain why the model _classic deep_ performs worse than the other three models. Due to its depth, it loses more information about the input than all the other classical models. Further works that take an information-theoretic view and investigate the information bottleneck are [35; 36; 37].

Figure 5: Comparison of Test Reconstruction Loss between our Classical and INN Models (a, c, e) and Corresponding Number of Trainable Parameters (b, d, f) for Different Bottleneck Sizes on MNIST, CIFAR-10 and CelebA. Note that the y-axis of b, d and f does not necessarily start at zero.

Figure 6: Example of Random Reconstructed MNIST Images by our INN (left) and Classical (right) Autoencoder with Bottleneck Size 32: _Input_ shows the original input images from the MNIST testset, _Reconstruction_ shows the reconstructed images, _Difference_ shows the difference between original and reconstructed image.

This leads us back to the ambiguity problem of DNNs. We already established that if a DNN does not learn a bijective function, information loss occurs during the forward process, making the inverse process ambiguous. The INN solves this ambiguity problem by introducing latent variables \(z\) containing all the information lost during the forward process (see [18]). Therefore, we hypothesize that INNs have no intrinsic information loss, contrary to DNNs, and that the findings of Yu et al. [33; 34] do not apply to INNs.
In other words, INNs are not bound to a maximal number of layers (depth) after which only suboptimal results can be achieved. Furthermore, we believe that the intrinsic information loss of DNNs causes the saturation. Conversely, we could interpret the threshold at which the reconstruction loss saturates as a quantification of the intrinsic information loss of our classical models. This would explain why our deepest model _classic deep_ saturates at the highest reconstruction loss threshold, since it has the highest intrinsic information loss. Increasing the hidden layer size, as done with models _classic 1024_ and _classic 2048_, seems to reduce the intrinsic information loss of the architecture. This makes intuitive sense, since a fully-connected layer of larger size is able to extract more information and minimize the information loss between its input and output. However, our experiments suggest that there is a lower bound for the information loss of DNNs, at which increasing the hidden layer size will not further decrease the intrinsic information loss. Since INN autoencoders do not have any intrinsic information loss according to our hypothesis, we expect the reconstruction loss threshold4 of INN autoencoders to be at zero.

Footnote 4: at which saturation occurs

Despite INNs having no intrinsic information loss and thereby allowing arbitrarily deep designs, in the application of autoencoders this might be a disadvantage. The main idea of autoencoders is to extract the essential information of the dataset and discard the rest to achieve a dimensionality reduction. Being bound to an information loss, a DNN has to get rid of some information within its input. Most likely, it will choose to remove the noise in the dataset and keep the most essential information. This explains why the saturation sets in at approximately the intrinsic dimension of the datasets. At this bottleneck size, the whole essential information of the dataset is already encoded. Further increasing the bottleneck would not yield any improvement, since the noise was already removed during the forward process. In contrast, the INN keeps all the noise within the latent variables \(z\). Further increasing the bottleneck would add noise to the \(y\)-latent space in addition to the essential information and further improve the reconstruction quality. In the case of dimensionality reduction, this might be an undesirable characteristic, since the goal is to remove the noise. We conclude that, in order to use INN autoencoders for dimensionality reduction, the intrinsic dimension of the dataset has to be known or estimated appropriately. Nevertheless, if the bottleneck size is chosen accordingly, the INN autoencoder is capable of learning just the essential information of the dataset, but at the same time leaves the option to additionally learn the noise if needed. Furthermore, the INN autoencoder preserves all the advantages of INNs.

## 5 Conclusion

In conclusion, the experiments show that our proposed method of training INNs as autoencoders does indeed work. For small bottleneck sizes, the INN autoencoder performs equally as well as the classical autoencoder. It is capable of extracting the essential information of a dataset and learning useful representations. For large bottleneck sizes, greater than the intrinsic dimension of the dataset, the INN autoencoder outperforms its classical counterpart.
In summary, we demonstrated that the architecture restrictions caused by the invertibility constraint do not negatively affect the performance, while all the advantages of INNs, such as a tractable Jacobian for both forward and inverse mapping as well as explicit computation of posterior probabilities, are still preserved. We hypothesize that INNs do not have any intrinsic information loss. This would allow the INN autoencoder to additionally learn the noise of the dataset if the bottleneck size is chosen larger than the intrinsic dimension of the dataset. Furthermore, INNs might not be bound to a maximal number of layers (depth) after which only suboptimal results can be achieved. However, further research is necessary to validate these hypotheses. Another advantage of INN autoencoders is that they are more versatile across different bottleneck sizes. The architecture does not need to be changed; only the split between \(y\) and \(z\) has to be redefined. There are both advantages and disadvantages to using INN autoencoders compared to the classical approach, which we have outlined in this work. However, we believe that our results have shown interesting properties of INNs and their innate differences from classical DNNs. At the moment, INNs are still not fully understood, which leaves room for further research. We hope that our findings can help towards uncovering the properties of INNs and encourage further research in this direction.

## 6 Acknowledgment

This work is supported by the Bundesministerium für Wirtschaft und Klimaschutz (BMWK, German Federal Ministry for Economic Affairs and Climate Action) as part of the German government's 7th energy research program "Innovations for the energy transition" under the 03ETE039I HiBRAIN project (Holistic method of a combined data- and model-based Electrode design supported by artificial intelligence) and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2181/1 - 390900948 Heidelberg STRUCTURES Cluster of Excellence.
2301.11440
Secure synchronization of artificial neural networks used to correct errors in quantum cryptography
Quantum cryptography can provide a very high level of data security. However, a big challenge of this technique is errors in quantum channels. Therefore, error correction methods must be applied in real implementations. An example is error correction based on artificial neural networks. This paper considers the practical aspects of this recently proposed method and analyzes elements which influence security and efficiency. The synchronization process based on mutual learning processes is analyzed in detail. The results allowed us to determine the impact of various parameters. Additionally, the paper describes the recommended number of iterations for different structures of artificial neural networks and various error rates. All this aims to support users in choosing a suitable configuration of neural networks used to correct errors in a secure and efficient way.
Marcin Niemiec, Tymoteusz Widlarz, Miralem Mehic
2023-01-26T22:04:11Z
http://arxiv.org/abs/2301.11440v1
# Secure synchronization of artificial neural networks used to correct errors in quantum cryptography

###### Abstract

Quantum cryptography can provide a very high level of data security. However, a big challenge of this technique is errors in quantum channels. Therefore, error correction methods must be applied in real implementations. An example is error correction based on artificial neural networks. This paper considers the practical aspects of this recently proposed method and analyzes elements which influence security and efficiency. The synchronization process based on mutual learning processes is analyzed in detail. The results allowed us to determine the impact of various parameters. Additionally, the paper describes the recommended number of iterations for different structures of artificial neural networks and various error rates. All this aims to support users in choosing a suitable configuration of neural networks used to correct errors in a secure and efficient way.

Keywords: quantum cryptography, key reconciliation, error correction, artificial neural networks

## I Introduction

The emergence and intensive development of the field of quantum computing has put many cryptography algorithms at risk. However, quantum physics also allows multiple cryptographic tasks to be achieved. One of the most popular is quantum key distribution [1]. Unfortunately, quantum communication is not perfect, and additional solutions are required to correct any errors after the key distribution in the quantum channel. Artificial neural networks can be utilized to correct these errors [2]. This is a recently proposed solution which provides a high level of security and efficiency compared to other existing error correction methods.

This paper analyzes the impact of different neural networks' parameters on the synchronization process. These parameters influence the number of iterations required as well as the security and efficiency of quantum cryptography. Therefore, it is important to know which neural network scheme should be chosen and which should be avoided. Additionally, the synchronization requires the number of iterations to be specified. Therefore, a recommended number of iterations for particular neural network schemes is provided.

The paper is structured as follows. Related work is reviewed in Section 2. Section 3 presents the basics of quantum cryptography, the architecture of the tree parity machine, and error correction using this structure of artificial neural networks. Analysis of synchronization parameters, including the recommended number of iterations for typical keys and error rates, is described in Section 4. Section 5 concludes the paper.

## II Related work

The first quantum key distribution (QKD) protocol, introduced in 1984 by Bennett and Brassard, is BB84 [3]. This scheme uses the polarization state of a single photon to transmit information. Since then, several other protocols have been presented. One of them is the E91 protocol introduced in 1991 by Ekert [4]. It utilizes entangled pairs of photons in the QKD process. However, some errors usually appear during data exchange in the quantum channel. After the initial QKD, there is a specific step: quantum bit error rate (QBER) estimation based on the acquired keys. The QBER value is usually low [5]. It must be lower than the chosen threshold used to detect the eavesdropper. Several methods of correcting errors incurred in the quantum key distribution process have been developed.
The first described method - BBBSS - was proposed in 1992 [6]. However, the most popular is the Cascade key reconciliation protocol [7]. It is based on multiple random permutations. The Winnow protocol, based on the exchange of parity bits and Hamming codes, is another method of error correction in the raw key [8]. Its main improvement is the reduction of the required communication between both parties. The third most popular error reconciliation scheme is the low-density parity-check approach. It offers a significant reduction of exchanged information; however, it introduces more computation and memory costs than the Cascade and Winnow protocols [7].

In 2019, another method of error correction in quantum cryptography was proposed by Niemiec in [2]. The solution uses mutual synchronization of two artificial neural networks (ANNs) to correct the errors. The tree parity machine (TPM) is proposed as the neural network used in this approach. It is a well-known structure in cryptography - the synchronization of two TPMs can be used as a key exchange protocol. TPMs cannot be used as a general method to correct a selected error because it is not possible to predict the final string of bits after the synchronization process. However, this is a desirable feature for shared keys, which should be random strings of bits.

## III Quantum cryptography supported by artificial neural networks

Symmetric cryptography uses a single key to encrypt and decrypt secret messages. Let's assume that Alice and Bob, the two characters used in describing cryptography protocols, are using symmetric encryption. The goal is to send information from Alice to Bob in a way that provides confidentiality. To achieve this, Alice and Bob need to agree on a shared secret key. Alice encrypts confidential data using the previously chosen key and Bob decrypts it using the same key. The same key is applied to encrypt and decrypt the information, hence the name: symmetric-key encryption. It is worth mentioning that only the one-time pad symmetric scheme has been proven secure, but it requires a key not smaller than the message being sent. In general, symmetric-key encryption algorithms - for example, the Advanced Encryption Standard (AES) [9] - perform better than asymmetric-key algorithms [10]. However, symmetric-key algorithms have an important disadvantage compared to asymmetric-key schemes. In the symmetric-key encryption scheme, the key needs to be safely distributed or established between Alice and Bob [11]. The symmetric key can be exchanged in a number of ways, including via a trusted third party or by direct exchange between the involved parties. However, both methods introduce some vulnerabilities, including passive scanning of network traffic. A method where the eavesdropper can be easily detected uses quantum mechanics to establish keys between Alice and Bob. It is called the quantum key distribution protocol.

### _Quantum key distribution_

Quantum mechanics allows for secure key distribution1 among network users. Two main principles are the core of the security of QKD: an unknown quantum state cannot be copied [12], and a quantum state cannot be estimated without disturbing it. One of the most popular QKD protocols which uses those principles is the BB84 scheme [3].

Footnote 1: In fact, a key is not distributed but negotiated. However, the term 'distribution' is consistently used in this paper to be consistent with the commonly accepted name of the technique.

The BB84 protocol uses photons with two polarization bases: rectilinear or diagonal. Alice encodes a string of bits using photons on a randomly chosen basis. After that, all the photons are sent through a quantum channel. Bob randomly chooses a basis for each photon to decode the binary \(0\) or \(1\). Alice and Bob's bases are compared through a public communication channel. Each bit where both parties chose the same basis should be the same. However, when Bob measures a photon in a different basis than Alice, this bit is rejected. The remaining bits are the same for both parties and can be considered as a symmetric key. Next, the error estimation is performed. Randomly chosen parts of the keys between Alice and Bob are compared to compute the QBER value. If the comparison results in a high error rate, it means that the eavesdropper (Eve) is trying to gain information about the exchanged photons. However, the quantum channel is not perfect, and errors are usually detected due to disturbance, noise in the detectors or other elements. The number of errors introduced by the quantum channel's imperfections must be considered while deciding the maximum acceptable error rate. The differences between Alice and Bob's keys need to be corrected.
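A toy simulation of the sifting and QBER estimation described above is sketched below; the 3% flip probability and the idealized measurement model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
bits = rng.integers(0, 2, n)          # Alice's raw bits
a_basis = rng.integers(0, 2, n)       # 0: rectilinear, 1: diagonal
b_basis = rng.integers(0, 2, n)       # Bob's randomly chosen bases

measured = bits.copy()
measured[rng.random(n) < 0.03] ^= 1   # 3% channel error rate (assumed)
mismatch = a_basis != b_basis
measured[mismatch] = rng.integers(0, 2, mismatch.sum())  # wrong basis: random

keep = ~mismatch                      # sifting: keep matching bases only
alice_key, bob_key = bits[keep], measured[keep]
qber = np.mean(alice_key != bob_key)  # in practice estimated on a key sample
print(len(alice_key), round(float(qber), 3))  # ~n/2 bits, QBER close to 0.03
```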
Several error correction methods are known. BBBSS is the earliest scheme, proposed in [6]. It is mainly based on parity checks. The most popular method is the Cascade protocol [13]. It is an improved version of BBBSS and requires less information to be sent between Alice and Bob through the public channel. The Cascade protocol and its predecessor are based on multiple parity checks. The basic idea is that the keys are divided into blocks of a fixed size. The number of bits in each block depends on the previously calculated QBER value. Alice and Bob compare the parities of each block, which allows them to find an odd number of errors. If errors are detected in a given block, it is split into two. The process is repeated recursively for each block until all errors are corrected. This concludes a single iteration, after which Alice and Bob have keys with an even number of errors or without any errors. Before performing the following iterations, the keys are scrambled, and the size of the block is increased. The number of iterations is predetermined. As a result of this process, Alice and Bob should have the same keys. However, this is not always the case. The number of iterations or the block sizes can be chosen incorrectly and cause the error correction to fail. Additionally, the algorithm performs multiple parity checks over the public channel, which can be intercepted by an eavesdropper (Eve). As a result, Eve can construct a partial key. Alice and Bob should discard parts of their keys to compensate for the lost security. This reduces the performance of this method, since the confidential keys must be shortened in the process. Another error reconciliation method is based on mutual synchronization of artificial neural networks.
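The core of these parity-based schemes is a recursive bisection that locates a single error in a block whose parities differ; a minimal sketch follows (both keys are held locally here for illustration, whereas in reality only the parities are exchanged over the public channel).

```python
def parity(bits):
    return sum(bits) % 2

def correct_block(alice, bob, lo, hi):
    """Locate and flip one error in bob[lo:hi], assuming the block
    parities of Alice and Bob differ (odd number of errors)."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid            # error is in the left half
        else:
            lo = mid            # error is in the right half
    bob[lo] ^= 1                # flip the located erroneous bit

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob   = [1, 0, 1, 0, 0, 0, 1, 0]     # one error at index 3
correct_block(alice, bob, 0, len(bob))
assert alice == bob
```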
### _Tree parity machine_

An artificial neural network (ANN) is a computing system inspired by biological neural networks [14]. ANNs are used to recognize patterns and in many other solutions in the field of machine learning. ANNs consist of multiple connected nodes (artificial neurons), with each neuron representing a mathematical function [15]. These nodes are divided into three types of layers: the first (input) layer, at least one hidden layer, and the output layer. The connections between neurons in each layer can be characterized by weights.

In cryptography, the most commonly used neural network is the tree parity machine (TPM) [16]. A scheme of this model is presented in Fig. 1. There are \(K\times N\) input neurons, divided into \(K\) groups. There is a single hidden layer with \(K\) nodes. Each of these nodes has \(N\) inputs. The TPM has a single output neuron. The connections between input neurons and hidden layer neurons are described by weights \(W\) - integers in the range [\(-L\), \(L\)]; thus, \(L\) is the maximum and \(-L\) is the minimum weight value. The values of \(\sigma\) characterize the connections between the hidden layer neurons and the output neuron. The output value of the TPM is described by \(\tau\). The value of \(\sigma\) is calculated using the following formulas:

\[\sigma_{k}=\mathrm{sgn}\left(\sum_{n=1}^{N}x_{kn}w_{kn}\right)\tag{1}\]

\[\mathrm{sgn}(z)=\begin{cases}-1&z\leq 0\\ 1&z>0\end{cases}\tag{2}\]

Due to the usage of the presented signum function, \(\sigma\) can take two values: \(1\) or \(-1\). The output value of the TPM is calculated as:

\[\tau=\prod_{k=1}^{K}\sigma_{k}\tag{3}\]

This neural network has two possible outcomes: \(1\) or \(-1\). For the TPM structure, multiple learning algorithms have been proposed. The most popular are Hebbian, anti-Hebbian, and random walk. The leading one is the Hebbian rule [17]. The Hebbian algorithm updates the ANN weights in the following manner:

\[w_{kn}^{*}=v_{L}(w_{kn}+x_{kn}\sigma_{k}\theta(\sigma_{k},\tau))\tag{4}\]

where \(\theta\) limits the impact of hidden layer neurons whose value was different from \(\tau\):

\[\theta(\sigma_{k},\tau)=\begin{cases}0&\text{if }\sigma_{k}\neq\tau\\ 1&\text{if }\sigma_{k}=\tau\end{cases}\tag{5}\]

The \(v_{L}\) function makes sure that the new weights are kept within the [\(-L\), \(L\)] range:

\[v_{L}(z)=\begin{cases}-L&\text{if }z\leq-L\\ z&\text{if }-L<z<L\\ L&\text{if }z\geq L\end{cases}\tag{6}\]

The TPM structure allows for mutual learning of two neural networks [18], primarily based on updating weights only when the outputs from both neural networks are the same. The input values are random and the same for both Alice and Bob's TPMs. Inputs are updated in each iteration. The security of this process relies on the fact that cooperating TPMs can achieve convergence significantly faster than Eve's machine, which can update its weights less frequently. The TPM is most commonly used in cryptography to exchange a secret key. This usage is defined as neural cryptography [19]. Alice and Bob mutually synchronize their TPMs to achieve the same weights. After the synchronization process, these weights provide a secure symmetric key.

### _Error correction based on TPMs_

TPMs can be utilized during the error correction process in quantum cryptography [2]. The neural network's task is to correct all errors to achieve the same string of confidential bits at both endpoints. Firstly, Alice and Bob prepare their TPMs. The number of neurons in the hidden layer (\(K\)) and the number of input neurons (\(N\)) are determined by Alice and passed on to Bob. The value \(L\) must also be agreed upon between the users. The keys obtained using the QKD protocol are converted into integer values in the range [\(-L\), \(L\)]. These values are used in the appropriate TPMs as weights between neurons in the input layer and the hidden layer. Since Alice's string of bits is similar to Bob's (the QBER is usually not high), the weights in the created TPMs are almost synchronized. At this point, Alice and Bob have constructed TPMs with the same structure but with a few differences in the weight values.
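A minimal NumPy sketch of such a TPM, implementing eqs. (1)-(6) together with one mutual-learning step in which the weights are updated only when the outputs agree:

```python
import numpy as np

class TPM:
    """Tree parity machine: K hidden units, N inputs each, weights in [-L, L]."""
    def __init__(self, K, N, L, rng):
        self.L = L
        self.w = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # sigma_k = sgn(sum_n x_kn * w_kn), tau = prod_k sigma_k  (eqs. (1)-(3))
        self.sigma = np.where((self.w * x).sum(axis=1) > 0, 1, -1)
        return int(self.sigma.prod())

    def hebbian(self, x, tau):
        theta = (self.sigma == tau).astype(int)          # eq. (5)
        self.w += x * (self.sigma * theta)[:, None]      # eq. (4)
        self.w = np.clip(self.w, -self.L, self.L)        # eq. (6), v_L

rng = np.random.default_rng(1)
alice, bob = TPM(4, 16, 4, rng), TPM(4, 16, 4, rng)
x = rng.choice([-1, 1], size=(4, 16))        # shared random input
ta, tb = alice.output(x), bob.output(x)
if ta == tb:                  # weights are updated only when outputs match
    alice.hebbian(x, ta)
    bob.hebbian(x, tb)
```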
After establishing the TPM structure and changing bits to weights, the synchronization process starts. It consists of multiple iterations, repeated until common weights are achieved between Alice and Bob. A single iteration starts with Alice choosing the input string and computing the result using the TPM. After that, the generated input string is passed on to Bob, who computes the output of his TPM using the received input. Then, the results are compared. If the outputs of both TPMs match, the weights can be updated. Otherwise, the process is repeated with a different input string. After an appropriate number of iterations, the TPMs are synchronized and Alice and Bob can change the weights back into a string of bits. The resulting bits are the same. However, the privacy amplification process after error correction is still recommended [20]. The reduction of the key protecting Alice and Bob from information leakage is defined as [2]:

\[Z=\log_{2L+1}2^{i}\tag{7}\]

where \(i\) is the number of TPM iterations. This usage of TPMs is safer than the neural cryptography solution, because the weights are similar before the synchronization. Therefore, significantly fewer iterations are required to achieve convergence than with the randomly initialized weights used in key establishment algorithms. It is worth mentioning that this method of error correction is characterized by high efficiency, e.g., it requires approximately 30% fewer iterations than the Cascade algorithm [2].

## IV Analysis of the synchronization process

The crucial decision regarding the error correction approach based on TPMs is the number of iterations during the synchronization process. This value should be as low as possible for security reasons. However, it cannot be too low, since the neural networks will otherwise not be able to correct all errors in the key. It is the user's responsibility to select the appropriate value for the error correction. The main objective of the analysis is to determine the impact of various neural network parameters on the synchronization process. Another goal is to provide a recommended number of iterations for users.

### _Testbed_

The experiments require an application to simulate the error correction process based on artificial neural networks. The application for correcting errors arising in quantum key distribution was written in Python and uses the NumPy package - a library for scientific computing which provides the fast operations on arrays required by the TPM. The functions provided by NumPy satisfy all calculations necessary to achieve neural network convergence. Synchronization of TPMs is performed over sockets to allow real-world usage of this tool. The Hebbian learning algorithm is used for updating weights.

The developed application makes it possible to correct errors in keys obtained using quantum key distribution protocols. Users are also able to correct simulated keys with a chosen error rate, which helps if they do not have strings of bits created by a real QKD system. An important feature of the tool is its ability to select the neural network parameters. The user can personalize the synchronization process, starting from the key length and error rate. The smallest sufficient number of bits is used to translate the key into single integers (the values of the weights must be in the range [\(-L\), \(L\)]). The number of hidden neurons and the number of inputs depend on the chosen key length and \(L\) value. Therefore, users need to select these parameters taking the requirements and needs into account.
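As an illustration of this translation, the sketch below packs groups of key bits into integers and maps them into [\(-L\), \(L\)]; the modulo-based mapping is an assumption made here, since the exact encoding used by the tool is not specified.

```python
import numpy as np

L = 4
BITS = int(np.ceil(np.log2(2 * L + 1)))   # smallest sufficient: 4, since 2^4 >= 9

def bits_to_weights(key_bits):
    """Pack groups of BITS key bits into integers and map them to [-L, L]."""
    groups = np.asarray(key_bits).reshape(-1, BITS)
    values = groups @ (1 << np.arange(BITS)[::-1])   # binary -> integer
    return values % (2 * L + 1) - L                  # fold into [-L, L]

key = np.random.default_rng(2).integers(0, 2, 64)    # 64 key bits
print(bits_to_weights(key))                          # 16 weights in [-4, 4]
```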
During the experiments, the minimum number of required iterations returned for a single TPM configuration was set to \(200\). The maximum number of iterations was limited to \(1000\). Additionally, the maximum number of retries in a single iteration was limited to \(10\) to speed up the simulation process. Finally, \(1880\) different scenarios were analyzed. All possible TPM configurations for key lengths varying between \(100\) and \(700\) with a \(100\) bit step are available. Moreover, the data is available for other keys with lengths varying between \(128\) and \(352\) with an \(8\) bit step. Between \(350\) and \(500\) synchronizations were performed for each TPM. It was assumed that this number of iterations is sufficient to achieve convergence.

### _Recommended number of iterations_

To obtain the recommended number of iterations of TPMs for successful error correction, the sum of the means and standard deviations of the results was calculated. The median and variance values were calculated as well for comparison. The full results are available online2. The selected part - the neural network configurations where the key length equals \(256\) bits, with the recommended number of iterations - is presented in Tab. I.

Fig. 1: Model of tree parity machine.

Fig. 2: Histogram for the number of iterations (TPM with a \(256\) bit key, \(N=16\), \(K=4\), \(L=4\), \(QBER=3\%\)).

Fig. 2 shows the histogram of data gathered for a single neural network configuration. The distribution is right-skewed: the mean value is greater than the median. This is a common characteristic of the other tested TPM configurations; if a distribution is not positively skewed, it is symmetrical. The recommended number of iterations for the presented configuration, according to Tab. I, equals \(302\). It is based on the sum of the mean and standard deviation values. For all presented TPM configurations, this sum gives an \(84\%\) chance of successful synchronization, assuming a normal distribution of results. For a right-skewed distribution, similar to the one presented in Fig. 2, the probability of success is higher. The \(85\)th percentile for the given set is equal to \(276\) - less than the proposed value. In this case, after choosing the suggested number of iterations, the user has more than an \(88\%\) chance of success.

Knowing the lowest required number of iterations is important because it reduces the risk of a successful attack by Eve. The attacker could create independent TPMs and try to synchronize one of them with Alice or Bob's machine. The recommended number of iterations increases the security of this solution because Alice and Bob require far fewer iterations to synchronize, compared to Alice (or Bob) and Eve synchronizing using random weights.

### _Impact of TPM structures_

The results of the simulations allow us to analyze how TPM structures affect the number of required iterations during the synchronization process. Fig. 3 shows the number of required iterations depending on the \(K\) and \(N\) parameters. It shows two different TPM configurations: one with a \(144\) bit key and another with a \(216\) bit key. These configurations were chosen due to having a similar number of possible \(K\) and \(N\) pairs. For a given key length, \(L\) value and error rate, there is a limited number of possible \(N\) and \(K\) values. The \(K\) value changes in inverse proportion to the \(N\) value. As presented in Fig. 3, the speed of the TPM synchronization process depends on the neural network structure (the \(N\) and \(K\) values). The number of required iterations increases with a higher number of neurons in the hidden layer (\(K\)). The trend is similar for both presented TPMs. After a certain threshold is reached, the number of recommended iterations increases slowly. The results fit a logarithmic trend line. This means that above a certain \(K\) value, increasing this parameter further does not affect the synchronization speed as much as below the threshold.

Other configurations of the selected TPMs were studied based on the increasing error rate of the keys. Two configurations with \(128\) and \(256\) bit keys were tested. The average recommended number of iterations over every possible configuration was calculated for different QBER values. The results are presented in Fig. 4. This confirms that a greater number of errors results in a higher average number of recommended iterations. It confirms the applicability of TPMs to correcting errors emerging in quantum key distribution, where the error rate should not be higher than a few percent. Therefore, the eavesdropper needs more iterations to synchronize its TPM. Additionally, it was verified that the \(L\) value has an exponential impact on the average recommended number of iterations. The data was gathered using a similar approach to the study of the impact of QBER.

Fig. 3: Number of iterations for TPMs with \(144\) and \(216\) bit keys for different \(K\) values.
Fig. 3: Number of iterations for TPMs with \(144\) and \(216\) bit keys for different \(K\) values.

The number of required iterations increases alongside the number of neurons in the hidden layer (\(K\)). The trend is similar for both presented TPMs. After a certain threshold is reached, the number of recommended iterations increases only slowly; the results fit a logarithmic trend line. This means that beyond a certain \(K\) value, increasing this parameter further does not affect the synchronization speed as strongly as below that threshold.

Other configurations of the selected TPMs were studied under an increasing error rate of the keys. Two configurations with \(128\) and \(256\) bit keys were tested. The average recommended number of iterations over every possible configuration was calculated for different QBER values. The results are presented in Fig. 4. They confirm that a greater number of errors results in a higher average number of recommended iterations, and they confirm the applicability of TPMs to correcting errors emerging in quantum key distribution, where the error rate should not be higher than a few percent. This also means that an eavesdropper needs more iterations to synchronize its TPM.

Additionally, it was verified that the value \(L\) has an exponential impact on the average recommended number of iterations. The data was gathered using an approach similar to the study of the impact of QBER: the average recommended number of iterations of each configuration for a given \(L\) was calculated. Fig. 5 shows the exponential trend line. It is worth mentioning that the impact of the \(L\) value on the synchronization time is significant. It is the user's responsibility to choose the best possible configuration for a given key length and QBER value. The analysis shows that the \(L\) value should be chosen carefully, since it exponentially affects the required number of iterations. Likewise, the choice of the \(K\) value should be made with caution due to its logarithmic impact on the number of iterations.

## V Summary

The analysis of the TPM synchronization process used for error correction purposes was presented in this paper. It shows that the parameters of the TPM structure have an impact on the synchronization time and on the security of this error correction method. However, different parameters of the artificial neural networks have different effects. Therefore, users should be aware of how to choose the configuration of the neural networks used to correct errors in a secure and efficient way. One of the deciding factors which needs to be selected is the number of iterations. The paper describes the recommended number of iterations for different TPM structures and QBER values to assist users in this step. The numbers recommended by the authors are as low as possible while retaining a high probability of successful synchronization, to ensure secure and efficient error correction based on artificial neural networks.

## Acknowledgment

This work was supported by the ECHO project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 830943.
2310.19991
PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices
As neural networks (NN) are deployed across diverse sectors, their energy demand correspondingly grows. While several prior works have focused on reducing energy consumption during training, the continuous operation of ML-powered systems leads to significant energy use during inference. This paper investigates how the configuration of on-device hardware-elements such as GPU, memory, and CPU frequency, often neglected in prior studies, affects energy consumption for NN inference with regular fine-tuning. We propose PolyThrottle, a solution that optimizes configurations across individual hardware components using Constrained Bayesian Optimization in an energy-conserving manner. Our empirical evaluation uncovers novel facets of the energy-performance equilibrium showing that we can save up to 36 percent of energy for popular models. We also validate that PolyThrottle can quickly converge towards near-optimal settings while satisfying application constraints.
Minghao Yan, Hongyi Wang, Shivaram Venkataraman
2023-10-30T20:19:41Z
http://arxiv.org/abs/2310.19991v2
# PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices ###### Abstract As neural networks (NN) are deployed across diverse sectors, their energy demand correspondingly grows. While several prior works have focused on reducing energy consumption during training, the continuous operation of ML-powered systems leads to significant energy use during inference. This paper investigates how the configuration of on-device hardware--elements such as GPU, memory, and CPU frequency, often neglected in prior studies, affects energy consumption for NN inference with regular fine-tuning. We propose PolyThrottle, a solution that optimizes configurations across individual hardware components using Constrained Bayesian Optimization in an energy-conserving manner. Our empirical evaluation uncovers novel facets of the energy-performance equilibrium showing that we can save up to 36 percent of energy for popular models. We also validate that PolyThrottle can quickly converge towards near-optimal settings while satisfying application constraints. ## 1 Introduction The rapid advancements in neural networks and their deployment across various industries have revolutionized multiple aspects of our lives. However, this sophisticated technology carries a drawback: high energy consumption which poses serious sustainability and environmental challenges (Anderson et al., 2022; Gupta et al., 2022; Cao et al., 2020; Anthony et al., 2020). Emerging applications such as autonomous driving systems and smart home assistants require real-time decision-making capabilities (jet), and as we integrate NNs into an ever-growing number of devices, their collective energy footprint poses a considerable burden to our environment (Wu et al., 2022; Schwartz et al., 2020; Lacoste et al., 2019). Moreover, considering that many devices operate on battery power, curbing energy consumption not only alleviates environmental concerns but also prolongs battery life, making low-energy NN models highly desirable for numerous use cases. In prior literature, strategies for reducing energy consumption revolve around designing more efficient neural network architectures (Howard et al., 2017; Tan & Le, 2019), quantization (Kim et al., 2021; Banner et al., 2018; Courbariaux et al., 2015, 2014; Gholami et al., 2021), or optimizing maximum GPU frequency (You et al., 2022; Gu et al., 2023). From our experiments, we make new observations about the tradeoffs between energy consumption, inference latency, and various other hardware configurations. Memory frequency, for example, emerges as a significant contributor to energy consumption (as shown in Figure 3), beyond the commonly investigated relationship between maximum GPU compute frequency and energy consumption. Table 2 shows that even with optimal maximum GPU frequency, we can save up to \(25\%\) energy by further tuning memory frequency. In addition, minimum GPU frequency also proves to be of importance in certain cases, as shown in Figure 4 and Table 3. We also observe that a simple linear relationship falls short of capturing the tradeoff between energy consumption, neural network inference latency, and hardware configurations. The complexity of this tradeoff is illustrated by the Pareto Frontier in Figure 2. 
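To make the notion of this frontier concrete, the following minimal sketch extracts a Pareto frontier from measured (latency, energy) pairs; the measurement tuples below are illustrative placeholders rather than our profiled data:

```python
def pareto_frontier(points):
    """Keep each (latency, energy) point not dominated by another point that is
    at least as fast and at least as energy-efficient (lower is better)."""
    frontier = [
        p for p in points
        if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
    ]
    return sorted(frontier)

# Illustrative (latency_ms, energy_mJ) pairs, one per hardware configuration.
measurements = [(40, 90), (45, 70), (50, 65), (55, 80), (60, 50), (70, 52)]
print(pareto_frontier(measurements))  # [(40, 90), (45, 70), (50, 65), (60, 50)]
```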
This nuanced interplay between energy consumption and latency poses a challenging question: _How can we find a near-optimal configuration that closely aligns with this boundary?_

Designing an efficient framework to answer the above question is challenging due to the large configuration space, the need to re-tune for each model and hardware platform, and frequent fine-tuning operations. A naive approach, such as grid search, is inefficient and can take hours to find the optimal solution for a given model and desired batch size on given hardware. The uncertainty in inference latency, especially at smaller batch sizes (Gujarati et al., 2020), further exacerbates the challenge. Furthermore, given that distinct hardware platforms and NN models display unique energy consumption patterns (Section 3), relying on a universally applicable pre-computed optimal configuration is not feasible. Every deployed device must be equipped to quickly identify the best configuration tailored to its specific workload. Finally, in production environments, daily fine-tuning is often necessary to adapt to a dynamic external environment and integrate new data (Cai et al., 2019, 2020). This demands a mechanism that can quickly adjust configurations to complete fine-tuning requests in time while ensuring the online inference workloads meet Service Level Objectives (SLOs).

In this paper, we explore the interplay between inference latency, energy consumption, and hardware frequency and propose PolyThrottle as our solution. PolyThrottle takes a holistic approach, optimizing various hardware components and batch sizes concurrently to identify near-optimal hardware configurations under a predefined latency SLO. PolyThrottle complements existing efforts to reduce inference latency, including pruning, quantization, and knowledge distillation. We use Constrained Bayesian Optimization with GPU, memory, and CPU frequencies and batch size as features, and the latency SLO as a constraint, to design an efficient framework that automatically adjusts configurations, enabling convergence towards near-optimal settings. Furthermore, PolyThrottle uses a performance prediction model to schedule fine-tuning operations without disrupting ongoing online inference requests. We integrate PolyThrottle into Nvidia Triton on Jetson TX2 and Orin and evaluate it on state-of-the-art CV and NLP models, including EfficientNet and Bert (Tan & Le, 2019; Devlin et al., 2018).

To summarize, our key contributions include:

1. We examine the influence of hardware components beyond GPUs on energy consumption, delineate new tradeoffs between energy consumption and inference performance, and reveal new possibilities for optimization.
2. We construct an adaptive framework that efficiently finds energy-optimal hardware configurations. To accomplish this, we employ Constrained Bayesian Optimization.
3. We develop a performance model to capture the interaction between inference and fine-tuning processes. We use this model to schedule fine-tuning requests and carry out real-time modifications to meet inference SLOs.
4. We implement and evaluate PolyThrottle on a state-of-the-art inference server on Jetson TX2 and Orin. With minimal overheads, PolyThrottle reduces energy consumption per query by up to \(36\%\).

## 2 Motivation

Many deep neural networks have been deployed on edge devices to perform tasks such as image classification, object detection, and dialogue systems.
Scenarios including smart home assistants (He et al., 2020), inventory and supply chain monitoring (jet), and autopilot (Gog et al., 2022) often use battery-based devices that contain GPUs to perform the aforementioned tasks. In these scenarios, pre-trained models are installed on the devices where the inference workload is deployed. Prior works have focused on optimizing the energy consumption of GPUs (Wang et al., 2020, 2021, Tang et al., 2019, Strubell et al., 2019, Mei et al., 2017) in cloud scenarios (Qiao et al., 2021, Wan et al., 2020, Hodak et al., 2019) and training settings (Wang et al., 2020, Peng et al., 2019, Kang et al., 2022). On-device inference workloads exhibit different characteristics and warrant separate attention. In this section, we outline previous efforts in optimizing on-device neural network inference and discuss our approach to holistically optimize energy consumption.

Figure 1: Figure illustrating the overall workflow of PolyThrottle. The optimizer first identifies the optimal hardware configuration for a given model. When new data arrives, the inference server handles the inference requests. Upon receiving a fine-tuning request, our performance predictor estimates whether time-sharing inference and fine-tuning workloads would result in SLO violations. Then the predictor searches for feasible adjustments to meet the SLO constraints. If such adjustments are identified, the system implements the changes and schedules fine-tuning requests until completion.

### On-device Neural Network Deployment

Prior work in optimizing on-device neural network inference focuses on quantization (Kim et al., 2021; Banner et al., 2018; Courbariaux et al., 2015, 2014; Gholami et al., 2021), designing hardware-friendly network architectures (Xu et al., 2019; Lee et al., 2019; Sanh et al., 2019; Touvron et al., 2021; Howard et al., 2019), and leveraging hardware components specific to mobile settings, such as DSPs (Lane and Georgiev, 2015). Our work explores an orthogonal dimension and aims to answer a different question: **Given a neural network to deploy on a specific device, how can we tune the device to reduce energy consumption?**

In our work, we focus on edge devices that contain CPUs, memory, and GPUs. These devices are generally more powerful than DSPs often found on mobile devices. One such example is the Nvidia Jetson series, which is capable of handling a wide array of applications, ranging from AI to robotics and embedded IoT solutions (jet). The devices also come with dynamic voltage and frequency scaling (DVFS) capabilities that allow for the optimization of power consumption and thermal management during complex computational tasks. The Jetson series features a unified memory shared by both the CPU and GPU. We refer to the operating frequency of the CPU, GPU, and shared memory as CPU frequency, GPU frequency, and memory frequency in this paper.

**Case Study on Inventory Management:** To understand the system requirements in edge NN inference, we next describe a case study of how NNs are deployed in an inventory management company. From our conversations, Company A works with Customer B to deploy neural networks on edge devices to optimize inventory management. To comply with regulations and protect privacy, data from each inventory site are required to be stored locally. The vast difference in the layout of the inventories makes it impossible to pre-train the model on data from every warehouse.
Therefore, these devices come with a pre-trained model based on data from a small sample of inventories, which may have significantly different layouts and external environments compared to the actual deployment venue. Consequently, daily fine-tuning is required to enhance performance at the deployed sites, as the environment continually evolves. Similar arguments apply to smart home devices, where a model is pre-trained on selected properties, but the deployed households may be much more diverse. To address privacy concerns, on-device fine-tuning of neural networks is preferred, as it keeps sensitive data locally. Therefore, edge devices often need to run both inference and periodic fine-tuning. Combining multiple workloads on edge devices can lead to SLO violations due to interference and increased energy use.

### Holistic Energy Consumption Optimization

Some recent works have explored reducing energy consumption by optimizing for batch size and GPU maximum frequency (You et al., 2022; Nabavinejad et al., 2021; Komoda et al., 2013; Gu et al., 2023) and developing power models for modern GPUs (Kandiah et al., 2021; Hong and Kim, 2010; Arafa et al., 2020; Lowe-Power et al., 2020). In this work, we argue that other hardware components also cause energy inefficiency and require separate optimization. We perform a grid search over GPU, memory, and CPU frequencies and various batch sizes to examine the Pareto frontier of inference latency and energy consumption. Figure 2 shows the tradeoff between the per-query energy consumption and inference latency (normalized to the optimal latency) on Jetson TX2 and Jetson Orin. Each point in the figure represents the optimal configuration that we find through grid search under a given inference latency budget and batch size. As Figure 2 shows, the Pareto frontier is not smooth globally and is difficult to capture by a simple model, which warrants more sophisticated optimization techniques to quickly converge to a hardware configuration that lies on the Pareto Frontier (Censor, 1977).

Figure 2: **Left** figure shows the Pareto Frontier of energy vs. latency tradeoff for various batch sizes on EfficientNet B7 on Jetson Orin. **Right** figure shows the Pareto Frontier of energy vs. latency tradeoff for various batch sizes on EfficientNet B4 on Jetson TX2. Each data point in this plot is representative of a unique hardware configuration, and each line corresponds to a batch size. The figure shows that the tradeoff does not always conform to the same pattern across varied hardware platforms and models.

Zeus (You et al., 2022) attempts to reduce the energy consumption of neural network training by changing the GPU power limit and tuning the training batch size. PolyThrottle also includes these two factors. In Zeus (You et al., 2022), the focus is on training workloads in data center settings, where batch size tuning helps achieve an accuracy threshold in an energy-efficient way. We include batch size as part of PolyThrottle as it provides a trade-off between inference latency and throughput. Our empirical evaluation reveals new avenues for optimization, which complicate the search space, as we describe next.

## 3 Opportunities

In this section, we perform empirical experiments to uncover new opportunities for optimizing energy use in NN inference. As discussed in Section 2, prior work did not study how memory frequency, minimum GPU frequency, and CPU frequency play a role in energy consumption. This is partially limited by hardware constraints.
Specialized power rails need to be built into the device during manufacturing to enable accurate measurement of the energy consumption associated with each component. We leverage two Jetson developer kits, TX2 and Orin, which offer native support for component-wise energy consumption measurement and frequency tuning, to study how these frequencies impact inference latency and energy consumption in modern deep learning workloads. We find that the default frequencies are much higher than optimal, and throttling all of these frequency knobs offers energy consumption reductions with minimal impact on inference latency.

Figure 3 illustrates the energy optimization landscape when varying GPU and memory frequencies, without imposing any constraints on latency SLO. The plot reveals that, without any other constraints, the energy optimization landscape generally exhibits a bowl shape. However, this shape varies depending on the models, devices, and other hyperparameters, such as batch sizes (see Appendix B for more results). Next, we dive into how each hardware component affects inference energy consumption.

**CPU Frequency Experiment:** CPUs are only used for data pre-processing. Thus, we first measure the time spent in the data processing part of the inference pipeline. Next, we measure the energy saved by throttling the CPU frequency and assess the inference latency slowdown caused by reducing CPU frequency. The data preprocessing we perform is standard in almost all image processing and object detection pipelines, where we read the raw image file, convert it to an RGB scale, resize it, and reorient it to the desired input resolution and data layout.

**Results:** The preprocessing time across different EfficientNet models remains constant since the operations performed are identical. As a result, the relative impact of CPU tuning on overall energy consumption depends on the ratio between preprocessing time and inference time. As the model size increases and inference duration increases, the influence of CPU tuning on overall energy consumption decreases. We observe that on both Jetson TX2 and Orin platforms, CPU tuning can decrease preprocessing energy consumption by approximately \(30\%\). Depending on the model, quantization level, and batch size, this results in up to a \(6\%\) reduction in overall energy consumption.

**Minimum GPU frequency experiment:** We maintain the default hardware configuration and only adjust the minimum GPU frequency on Jetson Orin. Increasing the minimum GPU frequency forces the GPU DVFS mechanism to operate within a smaller range. We scale the model from EfficientNet B0 to EfficientNet B7 to illustrate the effect of the GPU minimum frequency on inference latency.

**Results:** Table 3 indicates that tuning the minimum GPU frequency can significantly reduce energy consumption when the workload cannot fully utilize the computational power of the hardware. Notably, both energy consumption and inference latency are reduced by forcing the GPU to operate at a higher frequency. This differs from the tradeoff observed in other experiments, where we exchange inference latency for lower energy consumption. Tuning the minimum GPU frequency can nearly halve the energy consumption for small models. As computational power becomes saturated with increasing model size, the return on tuning the minimum GPU frequency diminishes. Figure 4 shows the per-query energy cost as we vary the minimum and maximum GPU frequency.
It shows that increasing the minimum GPU frequency from the default minimum leads to lower energy costs and inference latency.

\begin{table} \begin{tabular}{l c c} \hline \hline Model & Energy Reduction & Optimal Mem Freq \\ \hline B0 & 14.9\% & 1331 \\ B4 & 14.3\% & 1331 \\ B7 & 12.0\% & 1331 \\ Bert Base & 25.4\% & 1062 \\ \hline \hline \end{tabular} \end{table}

Table 2: This table shows the optimal memory frequency (in MHz) and the corresponding energy savings for various models on Jetson TX2. B0/B4/B7 represent different models in the EfficientNet series.

Figure 4: This figure shows per-query energy cost as we vary the minimum and maximum GPU frequency. As we increase the minimum GPU frequency, energy cost decreases.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & Size & Energy Reduction & \begin{tabular}{c} Optimal Min \\ GPU Freq \\ \end{tabular} \\ \hline B0 & 5.3M & 47.6\% & 1236 \\ B4 & 19M & 29.1\% & 1033 \\ B7 & 66M & 8.8\% & 1134 \\ Bert Base & 110M & 1.1\% & 217 \\ \hline \hline \end{tabular} \end{table}

Table 3: This table shows the optimal minimum GPU frequency (in MHz) and the corresponding energy savings for various models with 16-bit floating-point precision on Jetson Orin. B0/B4/B7 represent different models in the EfficientNet series.

## 4 Architecture Overview

To take advantage of the opportunities described in the previous section, we design PolyThrottle, a system that navigates the tradeoff between latency SLO, batch size, and energy. PolyThrottle optimizes for the most energy-efficient hardware configurations under performance constraints and handles scheduling of on-device fine-tuning. Figure 1 shows a high-level overview of PolyThrottle's workflow. In a production environment, sensors on the edge devices continuously collect data and send the data to the deployed model for inference. In the meantime, to adapt to a changing environment and data patterns, these data are also saved for fine-tuning later. Due to the limited computation resources on these edge devices, fine-tuning workloads are often scheduled in conjunction with the continuously running inference requests. To address the challenges in model deployment on edge devices, PolyThrottle consists of two key components:

1. An optimization framework that finds optimal hardware configurations for a given model under predetermined SLOs using few samples.
2. A performance predictor and scheduler to dynamically schedule fine-tuning requests and adjust for the optimal hardware configuration while satisfying the SLO.

PolyThrottle tackles these challenges separately. Offline, we automatically find the best CPU frequency, GPU frequency, memory frequency, and recommended batch size for inference requests that satisfy the latency constraints while minimizing per-query energy consumption. We discuss the details of the optimization procedure in Section 5. We also show that our formulation can find near-optimal energy configurations in a few minutes using just a handful of samples. Compared to the lifespan of long-running inference workloads, the overhead is negligible. The optimal configuration is then installed on the inference server. At runtime, the client program processes the input and sends inference requests to the inference server.
Meanwhile, if there are pending fine-tuning requests, the performance predictor predicts the inference latency when running concurrent fine-tuning, and decides whether it is possible to satisfy the latency SLO if fine-tuning is scheduled concurrently. A detailed discussion on performance prediction can be found in Section 6. The scheduler then determines a new configuration that can satisfy the latency SLO while minimizing per-query energy consumption. If such a configuration is attainable, it will schedule fine-tuning requests iteration-by-iteration until all pending requests are finished.

**Online vs. Offline:** Adjusting the frequency of each hardware component entails writing to one or multiple hardware configuration files, a process that takes approximately 17ms per file. On Jetson TX2 and Orin, each CPU core, the GPU, and the memory has a separate configuration file that determines its operating frequency. As a result, setting the operating frequencies for the CPUs, GPU, and memory could require up to 150ms. This duration could exceed the latency SLO for many applications, and this is without accounting for the additional overhead of completing the frequency changes. Since the latency SLO for a specific workload does not change frequently, PolyThrottle determines the optimal hardware configuration before deployment and only performs online adjustments to accommodate fine-tuning workloads.

## 5 Problem Formulation: Two-phase Tuning

Our objective is to automatically find the optimal hardware configurations that minimize energy consumption while satisfying latency SLOs. Formally, we are solving the optimization problem:

\[\begin{aligned}\min\quad&f(x_{CPU},\,x_{GPU_{min}},\,x_{GPU_{max}},\,x_{Mem},\,b)\\ \mathrm{s.t.}\quad&t(x_{CPU},\,x_{GPU_{min}},\,x_{GPU_{max}},\,x_{Mem},\,b)\leq c\end{aligned}\]

where \(f\) and \(t\) represent the energy consumption and latency associated with a workload under the given hardware configurations and batch size. We use \(x_{CPU},\,x_{GPU_{min}},\,x_{GPU_{max}},x_{Mem}\) to denote the frequency limits of the CPU, GPU, and memory on the device, and use \(b\) to denote the maximum batch size. We use \(c\) to denote the inference latency SLO limit, which is set by application users. This optimization problem is challenging on several fronts:

1. The search space is large, and performing a grid search will take hours depending on the model size. On TX2 and Orin, there are 5005 and 1820 points in the grid if we allow 5 different batch sizes. An exhaustive search would take 14 and 5 hours, respectively.
2. To satisfy latency constraints, it is hard to decouple each dimension and optimize them separately, as they jointly affect inference latency in a non-trivial way.
3. The optimization landscape may vary across models and devices.

We observe that CPU frequency can be decoupled from GPU frequency, memory frequency, and batch size, as it mainly affects the preprocessing latency and energy consumption. We pipeline the requests so different requests can use CPU and GPU resources at the same time to increase inference throughput. Based on this observation, we propose a two-phase hardware tuning framework, where CPU tuning is done separately from tuning the other hardware components. The challenge that remains is to efficiently optimize an unknown, noisy function.
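To make the black-box view concrete, the sketch below shows how \(f\) (energy per query) and \(t\) (latency per query) might be evaluated for one candidate configuration on a Jetson-class device. The sysfs paths and the `run_batch` / `read_energy_mj` hooks are illustrative assumptions: actual knob locations differ across Jetson models, and CPU frequency is tuned separately in our two-phase scheme:

```python
import time

# Illustrative DVFS knobs; the real sysfs paths differ across Jetson models.
GPU_MIN = "/sys/devices/gpu.0/devfreq/gpu/min_freq"   # assumption
GPU_MAX = "/sys/devices/gpu.0/devfreq/gpu/max_freq"   # assumption
MEM_MAX = "/sys/kernel/emc/max_rate"                  # assumption

def write_knob(path, value):
    with open(path, "w") as fp:   # requires root privileges on-device
        fp.write(str(value))

def evaluate(config, run_batch, read_energy_mj, n_batches=50):
    """One black-box evaluation: returns (f, t) = (energy/query, latency/query).

    `run_batch(b)` runs one inference batch of size b; `read_energy_mj()`
    reads a cumulative energy counter from the on-board power monitor.
    Both are assumed hooks into the serving stack, not a real API.
    """
    write_knob(GPU_MIN, config["gpu_min"])
    write_knob(GPU_MAX, config["gpu_max"])
    write_knob(MEM_MAX, config["mem"])
    e0, t0 = read_energy_mj(), time.perf_counter()
    for _ in range(n_batches):
        run_batch(config["batch"])
    queries = n_batches * config["batch"]
    f = (read_energy_mj() - e0) / queries      # energy per query (mJ)
    t = (time.perf_counter() - t0) / queries   # average latency per query (s)
    return f, t
```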
As shown in Figures 3 and 4, the performance of neural network inference with changes in memory and GPU frequency is difficult to predict; therefore, a good solution must be able to handle the variance while converging to a near-optimal configuration in a sample-efficient fashion. This requires the method to adaptively balance the tradeoff between exploration and exploitation. To solve this, we formulate the optimization problem as a Bayesian Optimization problem and leverage recent advances in the field to incorporate the SLO constraints unique to our setting.

### Constrained Bayesian Optimization

Bayesian Optimization is a prevalent method for hyperparameter tuning (Kandasamy et al., 2020; Klein et al., 2015), as it can optimize various black-box functions. This method is especially advantageous when evaluating the objective function is expensive and requires a substantial amount of time and resources. However, some applications may involve constraints that must be satisfied in addition to optimizing the objective function. Constrained Bayesian Optimization (CBO) (Gardner et al., 2014) is an extension of Bayesian Optimization that tackles this challenge by incorporating constraints into the optimization process. In CBO, the objective function and constraints are treated as distinct functions. The optimization algorithm seeks to identify the set of input parameters that optimizes the objective function while adhering to the constraints. These constraints are usually expressed as inequality constraints that must be satisfied during the optimization process.

The expected constrained improvement acquisition function in CBO is defined as follows: \(EI_{C}(\hat{x})=PF(\hat{x})\times EI(\hat{x})\). Here \(EI(\hat{x})\) represents the expected improvement (EI) (Brochu et al., 2010) within an unconstrained Bayesian Optimization scenario, while \(PF(\hat{x})\) is a univariate Gaussian cumulative distribution function, delineating the anticipated probability of whether \(\hat{x}\) can fulfill the constraints. Intuitively, EI chooses the next configuration by optimizing the expected improvement relative to the best recently explored configuration. In PolyThrottle, we choose EI since our empirical findings and corroborations from additional studies (Alipourfard et al., 2017) show that EI performs better than other widely-used acquisition functions (Snoek et al., 2012).

CBO (Gardner et al., 2014) also employs a joint prior distribution over the objective and constraint functions that captures their correlation structure. This joint prior is constructed by assuming that the objective and constraint functions are drawn from a multivariate Gaussian distribution with a parameterized mean vector and covariance matrix. These hyperparameters are learned from data using maximum likelihood estimation. During the optimization process, the algorithm uses this joint prior to compute an acquisition function that balances exploration (sampling points with high uncertainty) and exploitation (sampling points where the objective function is expected to be low, subject to feasibility constraints). The algorithm then selects the next point to evaluate based on this acquisition function. During each iteration, the algorithm tests whether the selected configuration violates any of the given constraints and takes the result into account in the next iteration.
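A minimal sketch of this acquisition function follows. It uses two independent Gaussian process surrogates for energy and latency (the joint prior described above refines this simplification), with \(\xi=0.1\) matching the setting used in Section 7; inputs are assumed to be NumPy arrays with at least one SLO-feasible observation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def next_config(X_obs, energy_obs, latency_obs, X_cand, slo, xi=0.1):
    """Select the next configuration via constrained EI: EI_C(x) = PF(x) * EI(x).

    X_obs / X_cand hold numerically encoded configurations; energy_obs and
    latency_obs hold the measured objective and constraint values.
    """
    gp_f = GaussianProcessRegressor(normalize_y=True).fit(X_obs, energy_obs)
    gp_t = GaussianProcessRegressor(normalize_y=True).fit(X_obs, latency_obs)

    mu_f, sd_f = gp_f.predict(X_cand, return_std=True)
    mu_t, sd_t = gp_t.predict(X_cand, return_std=True)

    best = energy_obs[latency_obs <= slo].min()   # best feasible energy so far
    z = (best - mu_f - xi) / np.maximum(sd_f, 1e-9)
    ei = (best - mu_f - xi) * norm.cdf(z) + sd_f * norm.pdf(z)   # EI(x)
    pf = norm.cdf((slo - mu_t) / np.maximum(sd_t, 1e-9))         # PF(x)
    return X_cand[int(np.argmax(pf * np.maximum(ei, 0.0)))]
```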
Encoding more system-specific hints as constraints can be of independent research interest; however, we show in Section 7 that the current formulation performs well under a variety of scenarios.

## 6 Modeling Workload Interference

Consider the case where we run an inference workload and aim to support fine-tuning without interfering with the online inference process. When a fine-tuning request arrives, we need to decide if it is possible to execute the fine-tuning request without violating inference SLOs. Time-sharing has been the default method for sharing GPU workloads. In time-sharing, shared workloads use different time slices and alternate GPU use between them. Recently, CUDA streams, Multi-Process Service (MPS) (NVIDIA, 2023a), and MIG (NVIDIA, 2023b) have been proposed to perform space-sharing on GPUs. However, these approaches are not supported on edge GPU devices (Bai et al., 2020; Zhao et al., 2023; Yu & Chowdhury, 2020; Wu et al., 2021). Given this setup, we propose building a performance model that can predict the inference latency in the presence of fine-tuning requests and only execute fine-tuning requests if the predicted latency can satisfy the SLO.

**Feature selection:** To build the performance model, we leverage the following insights to select features:

1. In convolutional neural networks, the 2D convolution layers' performance largely determines the overall performance of the network. Their latency is correlated with the number of floating point operations (FLOPs) required during forward / backward propagation.
2. The ratio between the number of FLOPs and the number of memory accesses, also known as arithmetic intensity, together with the total FLOPs, encapsulates whether a neural network is compute-bound or memory-bound.

Using these insights, we add the following features to our model: Inference FLOPs, Inference Arithmetic Intensity, Fine-tuning FLOPs, Fine-tuning Arithmetic Intensity, and Batch size.

**Model selection:** We propose using a linear model to predict inference latency when a fine-tuning workload is running concurrently on the same device. The model aims to capture how the proposed variables affect the resource contention between the inference workload and the fine-tuning workload and, therefore, the inference latency. The proposed model can be summarized as follows:

\[\begin{aligned}\text{Inference time}=\;&\theta_{0}+\theta_{1}\times\text{FLOPs}_{inf}+\theta_{2}\times AI_{inf}\\ &+\theta_{3}\times\text{FLOPs}_{ft}+\theta_{4}\times AI_{ft}+\theta_{5}\times\text{Batchsize}\end{aligned}\]

Given the above performance model, we use a Non-negative Least Squares (NNLS) solver to find the model that best fits the training data. An advantage of NNLS for linear models is that we can solve it with very few training data points (Venkataraman et al., 2016). We collect a few samples on the provided model by varying the inference and fine-tuning batch sizes and the output dimension, which captures various fine-tuning settings. This model is used as part of the workload scheduler during deployment to predict whether it is possible to schedule a fine-tuning request.

**Fine-tuning scheduler:** During inference, when there are outstanding fine-tuning requests, PolyThrottle uses the model to decide whether it is possible to schedule the request online without violating the SLO. When the model finds a feasible configuration, it adjusts accordingly until either all pending requests are finished or a new latency constraint is imposed.
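A minimal sketch of fitting and using this model with an off-the-shelf NNLS solver follows; the sample-tuple layout is an illustrative convention, not a fixed interface of our implementation:

```python
import numpy as np
from scipy.optimize import nnls

def fit_interference_model(samples):
    """Fit the linear interference model above with non-negative least squares.

    Each sample is an illustrative tuple:
    (flops_inf, ai_inf, flops_ft, ai_ft, batch_size, inference_time).
    NNLS constrains every theta >= 0, matching the intuition that added
    contention can only increase inference time.
    """
    data = np.asarray(samples, dtype=float)
    X = np.hstack([np.ones((len(data), 1)), data[:, :5]])  # column for theta_0
    y = data[:, 5]
    theta, _residual = nnls(X, y)
    return theta

def predict_inference_time(theta, flops_inf, ai_inf, flops_ft, ai_ft, batch):
    feats = np.array([1.0, flops_inf, ai_inf, flops_ft, ai_ft, batch])
    return float(theta @ feats)
```

In this sketch, the scheduler would admit a pending fine-tuning request only if `predict_inference_time(...)` stays within the latency SLO.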
## 7 Experiments

### Setup

**Hardware Platform:** Our experiments are conducted on the Jetson TX2 Developer Kit and the Jetson Orin Developer Kit. To assess the energy consumption of our program, we employ the built-in power monitors on the Jetson TX2 and Jetson Orin Developer Kits. We also cross-validate our measurements with an external digital multimeter (see Appendix A for more details on hardware and energy measurement).

**Workload Selection:** We base our experiments on the EfficientNet family and Bert models (Tan & Le, 2019; Devlin et al., 2018). EfficientNet is chosen not only for its status as a state-of-the-art convolutional network in on-device and mobile settings but also for its principled approach to scaling the width, depth, and resolution of convolution layers. Table 4 summarizes the scaling pattern of EfficientNet from the smallest B0 to the largest B7. We select Bert to investigate energy usage patterns in a Transformer-based model (Wolf et al., 2020), where the workload is more memory-bound compared to convolution-based neural networks. Bert and its variants (Kim et al., 2021; Devlin et al., 2018; Sanh et al., 2019; Tambe et al., 2021) are widely used for Question Answering tasks (Rajpurkar et al., 2016), making them applicable to numerous edge devices, such as smart home assistants and smart speakers.

**Dataset:** We evaluate PolyThrottle on real-world traffic stream data (Shen et al., 2019) and sample frames uniformly to feed into EfficientNet. For Bert, we evaluate on SQuAD (Rajpurkar et al., 2016) for Question Answering. Note that the datasets do not affect PolyThrottle's performance, since inference latency would not change significantly across datasets once the model is chosen.

**Implementation:** PolyThrottle is built on the Nvidia Triton inference server. To maximize performance, we generate TensorRT kernels that profile various data layouts and tiling strategies to identify the fastest execution graph for a given hardware platform. Our modules include a Bayesian optimizer for determining the best configuration, an inference client responsible for preprocessing and submitting requests to the inference server, and a performance predictor module integrated into the inference client for scheduling fine-tuning requests. We maintain separate queues for inference and fine-tuning requests.

### Efficiently Searching for Optimal Configuration

In this experiment, we carry out an extensive empirical analysis of tuning various models across different hardware configurations while also adjusting the quantization level. We perform a grid search on EfficientNet B0, B4, and B7 and Bert Base to examine the potential energy savings and identify the optimal GPU and memory frequencies for each model. We also adjust the quantization level for each tested model, evaluating 16-bit and 32-bit floating point (FP16/FP32) precision. The optimal energy consumption and configuration referenced later in this section use the results obtained here as the baseline and optimal solution. Having obtained the optimal frequency using grid search, we next evaluate the average number of attempts it takes for PolyThrottle to find a solution within \(5\%\) of the optimal solution. We compare our Constrained Bayesian Optimization (CBO) formulation against Random Search (RS).

**Experiment Settings:** We measure the average number of attempts needed to find a near-optimal configuration.
For Random Search, we calculate the expected number of trials needed to find a near-optimal configuration based on the grid size, by computing the fraction of near-optimal configurations and taking the reciprocal. For CBO, we set the \(\xi\) parameter associated with the Expected Improvement function to 0.1 and the number of initial random samples to 5, which we find to work well across different models and hardware platforms. We conduct two experiments with different inference latency constraints; the results can be found in Figure 5:

1. We restrict inference latency to close to the optimal latency (\(20\%\)). In this setting, the tight latency constraints make it impossible to batch the inference query, essentially reducing the search space for the optimal configuration.
2. In the second benchmark, we relax the inference latency constraint to include the configurations that provide the lowest energy-per-query in Figure 2. In this setting, we need to explore the batch size dimension to find the configuration that minimizes energy.

We test on EfficientNet B0, B4, and B7, as well as Bert Base, on both Jetson TX2 and Jetson Orin.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & Input dim / width & Width coef & Depth coef \\ \hline B0 & 224 \(\times\) 224 & 1.0 & 1.0 \\ B4 & 380 \(\times\) 380 & 1.4 & 1.8 \\ B7 & 600 \(\times\) 600 & 2.0 & 3.1 \\ \hline \hline \end{tabular} \end{table}

Table 4: This table shows the scaling pattern of the EfficientNet model family.

Figure 5: This figure compares search efficiency between Constrained Bayesian Optimization and Random Search. The y-axis represents the number of attempts it takes to find a near-optimal configuration and the x-axis represents the deployed model and associated quantization level. The **first row** corresponds to the setting where we set a latency target but restrict the batch size to 1. The **second row** corresponds to the setting where we relax the latency constraint and allow batching inference requests.

**Results:** Figure 5 shows that CBO outperforms RS in both scenarios. Since CBO models the relationship between hardware configuration and latency, it can find a near-optimal solution with only \(5\) to \(15\) samples. In the second scenario, the performance of RS deteriorates as it is unable to leverage the relationship between latency and batch size when dealing with a multiplicatively increasing search space. Overall, CBO takes 3-10x fewer samples in the second setting. The overhead of performing CBO is also minimal: as shown in Figure 5, CBO only requires around 15 samples to find a near-optimal solution, and the optimization procedure can be completed in a few minutes. In cases where a new model is deployed, only a few minutes of overhead are needed to find optimal configurations for the new model. It is important to note that although RS might achieve performance comparable to CBO under certain conditions, this result is merely the expected value, and the variance of RS is large. For instance, if 10 out of 200 configurations are near-optimal, the expected number of trials needed to reach a near-optimal configuration is 20, with a standard deviation of 19.49. Consequently, it is plausible that even after 40 trials, RS might still fail to identify a near-optimal configuration. On the other hand, the standard deviation of CBO is smaller; in all experiments, CBO's standard deviations are less than 3.
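The Random Search variance figures quoted above follow from modeling RS as a geometric distribution; a quick check of the arithmetic for the 10-out-of-200 example:

```python
import math

p = 10 / 200                    # fraction of near-optimal configurations
mean = 1 / p                    # expected trials until first success
std = math.sqrt(1 - p) / p      # geometric-distribution standard deviation
print(mean, round(std, 2))      # 20.0 19.49
print(round((1 - p) ** 40, 3))  # ~0.129: failing even after 40 trials is plausible
```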
### Workload-aware Fine-tuning Scheduling

Next, we evaluate how well PolyThrottle handles fine-tuning requests alongside inference. The central question we aim to address is whether our performance predictor can effectively identify and adjust accordingly when the SLO requirement is at risk of being violated, and whether reducing the inference batch size and trading off throughput can satisfy the latency SLO. To simulate this scenario, we generate two distinct inference arrival patterns (Uniform and Poisson), use the publicly available Twitter trace (twi, 2018), and compare our adaptive scheduling approach to greedy scheduling, where a fine-tuning request is scheduled as soon as it arrives. The three arrival patterns represent scenarios ranging from highly controlled to bursty. In this context, we contrast PolyThrottle's adaptive scheduling mechanism with the greedy scheduling approach to assess the efficacy of PolyThrottle in meeting the desired SLO requirement.

**Experiment Settings:** We evaluate on both synthetic and real workloads. For synthetic workloads, we generate a stream of inference requests using both Uniform and Poisson distributions. For the real-world workload, we first uniformly sample a day of Twitter streaming traces and then compute the variance of requests during each minute. We then pick the segment with the highest variance to test PolyThrottle's capability in handling request bursts (twi, 2018; Romero et al., 2021). On Jetson Orin, we replay the stream for 30 seconds and measure the SLO violation rate during the replay using EfficientNet B7. Since each burst only lasts for a few seconds, this suffices to capture many bursts in the workload. We find that running the experiment for longer durations produces similar results. We set the fine-tuning batch size to 64, the number of fine-tuning iterations to 10, the SLO to 0.7s, the output dimension to 1000, and an average of 8 inference requests per second. On Jetson TX2, we run the same experiment on EfficientNet B4. Due to memory constraints, we set the fine-tuning batch size to 8, the SLO to 1s, the output dimension to 100, and an average of 4 inference requests per second. We select a less performative model on TX2 to meet a reasonable SLO target (under 1s). The number of fine-tuning iterations is chosen based on the duration of the replay. We then measure the energy costs when deploying PolyThrottle at the default and optimal hardware frequencies, respectively, to measure how much energy we save during this period. The optimal hardware frequency is obtained from the results in Section 7.2. For greedy scheduling, we employ a standard drop policy (Crankshaw et al., 2017; Shen et al., 2019), whereby a request is dropped if it has already exceeded its deadline. In the adaptive setting, we use the predictor to determine whether to drop an inference request. We also replay the inference request stream without fine-tuning requests to serve as a baseline.

\begin{table} \begin{tabular}{l l c} \hline \hline Method & Workload & SLO violation \\ \hline Greedy & Uniform & 37.91\% \\ Adaptive & Uniform & **2.08\%** \\ Greedy & Poisson & 60.41\% \\ Adaptive & Poisson & **5.42\%** \\ Greedy & Twitter & 22.0\% \\ Adaptive & Twitter & **5.8\%** \\ \hline Baseline & Uniform & 0.4\% \\ Baseline & Poisson & 1.67\% \\ Baseline & Twitter & 3.8\% \\ \hline \hline \end{tabular} \end{table}

Table 5: This table shows the SLO violation rate of various scheduling strategies for **Jetson Orin** on **EfficientNet B7**. The baseline shows the SLO violation rate without fine-tuning.
**Results:** Tables 5 and 6 show the SLO violation rates under various workloads and latency targets. The findings indicate that greedy scheduling may lead to significant SLO violations owing to the interference introduced by the fine-tuning workload. In contrast, PolyThrottle's adaptive scheduling mechanism demonstrates the ability to achieve low SLO violation rates by dynamically adjusting configurations. The baseline figures in the tables represent SLO violation rates in the absence of interference from fine-tuning requests. Inherent variance in neural network inference resulted in \(1\%\) of SLO violations in the case of the Uniform distribution. However, bursts in the Poisson distribution and the Twitter workload generated more SLO violations. PolyThrottle's adaptive scheduling mechanism significantly reduces the SLO violation rate, meeting the SLO requirements while concurrently handling fine-tuning requests. Nevertheless, in several instances, we were unable to achieve near-zero SLO violation rates. This limitation can be attributed to the granularity of scheduling, as we process the current batch of requests over an extended timespan due to interference from the fine-tuning workload. We also **reduce energy consumption by \(14\%\)** on EfficientNet B7 on Jetson Orin and by \(23\%\) on EfficientNet B4 on Jetson TX2 across the workloads. We show in Appendix D how PolyThrottle reacts to changing SLOs when there are outstanding fine-tuning requests.

## 8 Conclusion

In this work, we examine the unique characteristics of energy consumption in neural network inference, especially on edge devices. We identify unique tradeoffs and dimensions between energy consumption and inference latency SLOs and empirically demonstrate hidden components in optimizing energy consumption. We then propose an optimization framework that automatically and holistically tunes various hardware components to find a configuration aligned with the Pareto Frontier. We empirically verify the effectiveness and efficiency of PolyThrottle. PolyThrottle also adapts to the need for fine-tuning, using a simple performance prediction model to adaptively schedule fine-tuning requests while keeping the online inference workload under the inference latency SLO whenever possible. We hope our study sheds more light on the hidden dimensions of NN energy optimization.

\begin{table} \begin{tabular}{l l c} \hline \hline Method & Workload & SLO violation \\ \hline Greedy & Uniform & 16\% \\ Adaptive & Uniform & **5.5\%** \\ Greedy & Poisson & 35.4\% \\ Adaptive & Poisson & **7.4\%** \\ Greedy & Twitter & 35.0\% \\ Adaptive & Twitter & **7.5\%** \\ \hline Inference Only & Uniform & 1\% \\ Inference Only & Poisson & 3.5\% \\ Inference Only & Twitter & 5.3\% \\ \hline \hline \end{tabular} \end{table}

Table 6: This table shows the SLO violation rate of various scheduling strategies for **Jetson TX2** on **EfficientNet B4**. The baseline ("Inference Only") shows the SLO violation rate without fine-tuning.
2303.01758
SottoVoce: An Ultrasound Imaging-Based Silent Speech Interaction Using Deep Neural Networks
The availability of digital devices operated by voice is expanding rapidly. However, the applications of voice interfaces are still restricted. For example, speaking in public places becomes an annoyance to the surrounding people, and secret information should not be uttered. Environmental noise may reduce the accuracy of speech recognition. To address these limitations, a system to detect a user's unvoiced utterance is proposed. From internal information observed by an ultrasonic imaging sensor attached to the underside of the jaw, our proposed system recognizes the utterance contents without the user's uttering voice. Our proposed deep neural network model is used to obtain acoustic features from a sequence of ultrasound images. We confirmed that audio signals generated by our system can control the existing smart speakers. We also observed that a user can adjust their oral movement to learn and improve the accuracy of their voice recognition.
Naoki Kimura, Michinari Kono, Jun Rekimoto
2023-03-03T07:46:35Z
http://arxiv.org/abs/2303.01758v1
# SottoVoce: An Ultrasound Imaging-Based Silent Speech Interaction Using Deep Neural Networks

###### Abstract.

The availability of digital devices operated by voice is expanding rapidly. However, the applications of voice interfaces are still restricted. For example, speaking in public places becomes an annoyance to the surrounding people, and secret information should not be uttered. Environmental noise may reduce the accuracy of speech recognition. To address these limitations, a system to detect a user's unvoiced utterance is proposed. From internal information observed by an ultrasonic imaging sensor attached to the underside of the jaw, our proposed system recognizes the utterance contents without the user's uttering voice. Our proposed deep neural network model is used to obtain acoustic features from a sequence of ultrasound images. We confirmed that audio signals generated by our system can control the existing smart speakers. We also observed that a user can adjust their oral movement to learn and improve the accuracy of their voice recognition.

Keywords: silent speech, ultrasonic imaging, deep neural networks, human-AI integration
## 1. Introduction

Even when a user is concentrating on a computer screen and interaction devices such as a keyboard and a mouse, they can still operate other devices with voice interaction. However, as for the speech interface, two challenges must be overcome. First, using the interface in public places presents limitations. In addition to being an annoyance to the surrounding people, disclosing personal information or secret information by uttering it in public is risky in terms of information security.
Second, it cannot be used in a noisy environment, because the accuracy of speech recognition may decline. These issues are particularly acute when trying to use a speech interface with wearables or mobile computers. To overcome these challenges, research on "silent-speech recognition" has been conducted [8]. For example, by applying a method known as lip reading, images of the speaker's mouth or entire face are captured by a camera, and the content of the utterance is estimated from those images [52]. If the user could simply _mouth_ the utterance without actually voicing it, it would be possible to use such voice interaction in public places. However, with the camera method, a camera must be installed in front of the face, and this form factor renders it unsuitable for wearables or mobile applications. Other approaches, such as studies on "non-audible murmur" (NAM) [42, 22], attempt to recognize utterances with a microphone or an accelerometer worn on the skin or throat of the user. In this case, the user speaks with articulated respiratory sound but without vocal-fold vibration (namely, whispering). However, to be recognized accurately by the system, the user's whisper tends to be loud enough to be noticed by other people nearby. Furthermore, some studies attempt to estimate speech from the movement of muscles near the oral cavity measured by electromyography (EMG) [49, 33]. However, estimating free utterances with EMG remains difficult; instead, it amounts to a type of gesture recognition based on the movement of the oral cavity. Thus, the number of detectable commands is limited, and the user has to learn new gesture skills instead of using their existing speaking skills. Instead of the above-described approaches, we focus on ultrasonic imaging [11]. Ultrasonic-imaging technology recognizes the internal state of the body by measuring the reflection time of ultrasonic waves radiated into the body. This technology is widely used to assess the condition of internal organs for medical purposes. In recent years, small and lightweight systems that can be directly connected to smartphones (e.g., Vscan Extend, General Electric) have appeared. If it were possible to attach a small ultrasonic-imaging head around the neck to sense the state of the oral cavity and convert it to acoustic information, it would be a useful device for communicating with speech-capable devices without actually speaking aloud; that is, "silent voice interaction" would be possible. Silent voice interaction by ultrasonic imaging has two potential advantages over other approaches. First, the ultrasonic-imaging head can be miniaturized, and it can be built into a device with an inconspicuous shape such as a collar. This is an important feature for designing wearable silent-voice systems. Second, by recognizing the state of the oral cavity, it is possible to measure the movement of the tongue, which cannot be observed from the outside; sound can thus be reproduced more accurately. Studies have been conducted on silent speech using ultrasound imaging, but many of them combine it with lip or face images, so a camera must be placed in front of the user [26]. This configuration is a limitation for a wearable interface device.
Recent research has attempted to use deep neural networks with ultrasound imaging for silent speech [7, 51]; however, these approaches are not based on convolutional neural networks and are not validated with data captured from mouth movement without speaking: they are only validated with mouth movement while the user is actually emitting a voice. We present a significant step forward using convolutional neural networks and a proof-of-concept validation via actual silent speech, interacting with an unmodified smart speaker (Amazon Alexa). Herein, a silent-voice interaction system called "SottoVoce," based only on ultrasonic images, is described (Figure 1). By combining two types of deep neural networks, this system can be trained to generate voice signals from a sequence of images captured from an ultrasonic-imaging device. Our contributions can be summarized as follows:

* A two-level model of deep convolutional neural networks to convert ultrasonic images to actual sounds is proposed.
* As a proof of concept, a silent-voice system was developed, and it was shown that the system could control a voice-controlled device (in this case, Amazon Echo) without modifications.

## 2. Related Work

### Silent Speech

Silent-speech interfaces have been studied using various technologies and methods [8, 31]. Lip reading [52] or facial images [12] can be used to estimate speech uttered by a subject without using audio information. SilentVoice [17] is an _ingressive speech_ approach that captures extremely soft speech. Electromagnetic articulography (EMA) has been used to develop brain-computer interfaces [5] and other interactive applications [53]. Magnets can be attached to the subject to detect silent speech [14, 19, 23]. Electroencephalography (EEG) [44] and electromyography (EMG) [37, 49] are also typical methods for silent-speech recognition. In particular, EMG has been applied for interactive purposes and applications concerning human-computer interaction (HCI) [38]; for example, controlling a web browser [32]. A recent example proposed by Kapur et al. [33] used multiple electrodes to sense neuromuscular signals of a subject during internal speech. In addition to the methods introduced above, other human-facial electrical potentials have been combined and measured [22, 48]. Ultrasound imaging has also been used for silent-speech recognition in the field of speech processing [9, 27]. In one study, which focused on singing, sung vowels could be synthesized based on ultrasound and video of the lips [30]. In a similar study, a silent-speech interface using ultrasound and optical images of the tongue and lips was developed [26, 28]. In another study, a mapping technique for automatically generating animations of tongue movement from raw ultrasound images was created [13]. Other approaches combine the techniques described above with deep neural networks, such as BCI applications (Beng et al., 2017), myoelectric signals (Dong et al., 2018), and lip reading with long short-term memory (LSTM) (Wang et al., 2019). Combining deep neural networks and ultrasound imaging for silent-speech interfaces has also been considered. For example, ultrasound imaging was used in such a combination with deep neural networks to capture tongue movement (Dong et al., 2018). Moreover, the fundamental frequency (F0) curve, which had been considered unpredictable, was estimated using a deep neural network (Krizhevsky et al., 2014).
In addition, it has been suggested that a global visuo-acoustic modeling approach called "Eigentongues" performs better than tongue-contour modeling when using neural networks (Wang et al., 2019). A new benchmark for silent-speech research based on deep neural networks has also been proposed (Krizhevsky et al., 2014). These approaches are still at the basic proof-of-concept level, and none have been evaluated in terms of controlling existing speech-interaction appliances such as smart speakers. Following the prior approaches described above, we aim to develop a system that enables silent speech to interact with voice-controlled devices and to serve interactive purposes in an HCI context. Furthermore, we explore the interaction between humans and artificial neural networks as a means of improving the system's performance.

### Non-Auditory Inputs and Interaction

EarFieldSensing (Zhu et al., 2019) is a gesture-recognition technology based on electric-field sensing. CanalSense (Beng et al., 2017) senses changes in air pressure in the ear canals that occur when the face is moved. Tongue-in-Cheek (Tongue-in-Cheek, 2018) senses the movement of the tongue using X-band Doppler radar for facial-gesture recognition. Other methods that use EMG (Wang et al., 2019) or combinations of brain and muscle signal sensing (Zhu et al., 2019) are also notable. The techniques above have also been utilized for arm-gesture recognition; notably, an approach using ultrasound imaging for that purpose has been demonstrated, i.e., EchoFlex (Scho et al., 2019). It is an interaction sensor that recognizes movements of the forearm muscle using ultrasound imaging. The results of that study indicated that the sensor performed well and could potentially supplant other prior approaches. They also inspired the authors to consider using ultrasound imaging for silent-voice interaction based on the interior of the mouth.

### Interacting with Smart Devices

Owing to the development of mobile and smart devices, we now interact with them frequently through various methods (Wang et al., 2019). We may use our voice as an input method (Scho et al., 2019), and studies to overcome issues concerning such voice-controlled user interfaces have been presented (Scho et al., 2019). One typical interface (a personal "agent" hereafter) is Amazon Echo, commonly known as "Alexa," with which people can communicate and chat. The effect of this agent on people has been researched extensively, and the results show that it is an effective agent for satisfying or influencing our lives (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). The role of artificial intelligence (AI) has become more important for tasks beyond personal agents like Alexa, and we now interact and collaborate with AIs; for example, when sending text messages (Krizhevsky et al., 2014) and designing objects (Krizhevsky et al., 2014). The hybrid existence and interaction of humans and AI is a compelling topic, with the potential to overcome social issues and improve the quality of our lives. Glove Talk II (Gil et al., 2018), which was presented in 1995, is a gesture-to-speech system that translates hand gestures into 10 control parameters of a speech synthesizer using neural networks. However, to use the system, the user requires long-term training of approximately 100 hours.
In the present study, referring to this work, we apply recently improved deep-neural-network technologies to create a hybrid interaction of humans and AI, in which the user (a human) learns to adapt to and utilize the AI system to achieve better interaction.

## 3. System Architecture of SottoVoce

The architecture of the proposed system for generating sound from ultrasonic images is shown in Figure 2. In general, the goal of the system is to transfer one sequence representation (in this case, ultrasonic images) into another sequence representation (in this case, speech). This goal is similar to that of text-to-speech systems (Wang et al., 2019), voice-transfer systems (Beng et al., 2017), and lip-reading or face-to-voice systems (Gil et al., 2018). Inspired by these systems, the proposed system uses two neural networks. The first neural network ("Network 1" in Figure 2) transfers a time series of ultrasonic images to a sound-representation vector. We used a 64-dimensional Mel-scale spectrum with a frequency range of 300 \(Hz\) to 8,000 \(Hz\), sampled every 20 \(ms\), as the sound-representation vector. Subsequently, the translated sound representations constitute a series of sound-representation vectors (i.e., a spectrogram). This sound spectrogram can be converted to an audio signal. In addition, to refine the quality of those vectors, they are also transferred by the next neural network ("Network 2" in Figure 2), which generates a series of sound-representation vectors of the same length as the input sound-representation vectors. Finally, the output vectors are converted to an actual audio signal. These two networks are speaker dependent; accordingly, to train them, the system requires a set of ultrasonic-imaging videos captured while the user speaks various speech commands.

### Ultrasonic Imaging Device

The CONTEC CMS600P2 Full Digital B-Ultrasound Diagnostic System was used as the ultrasonic-imaging device. A user attaches a 3.5-MHz convex-type ultrasonic imaging probe under the jaw (Figure 3). This system provides a screen output port to be connected to a display monitor. In addition, a display-digitizing unit was used to convert the signal sent to the display into an MPEG-4 movie file. Figure 4 shows an obtained ultrasonic image. We found a delay between the captured ultrasonic images and the captured sound. To compensate for it, we examined the corresponding utterance and tongue movement in the video, estimated the delay to be 300 \(ms\), and adjusted the training data accordingly.

### Network 1

Network 1 uses a series of \(K\) ultrasonic images (size \(128\times 128\), monochrome) as the input and generates an \(n\)-dimensional sound representation (Mel-scale spectrum) as the output. Currently, \(K=13\) and \(n=64\) are used. Because the frame rate of the ultrasonic images is 30 frames per second, the duration spanned by \(K\) ultrasonic images is thus 400 ms. This time duration covers the static and motion features of the utterance. Samples of the ultrasonic images are shown in Figure 5. The \(K\)-sized image sequence is prepared repeatedly such that one Mel-scale spectrum is created every 20 ms (i.e., the hop size is 20 ms) (Figure 6). A sound-representation vector corresponding to the time position at the center of each ultrasonic-image sequence is extracted from the audio signal data, and Network 1 is trained to generate it. Network 1 is based on a convolutional neural network (CNN).
It comprises four layers: _Conv2D - LeakyReLU - Dropout - Batch-Normalization_, followed by six layers: _Flatten - Dense - LeakyReLU - Dropout - Dense - LeakyReLU_. The output size of Network 1 is the same as the length of the sound-representation vector (i.e., 64). Both input images and output vectors are normalized to the range 0 to 1. The loss function is the mean-squared error, and the optimizer is Adam (Kingma and Ba, 2014).

### Network 2

To improve the sound quality, Network 2 takes a sequence of sound-representation vectors and generates a sequence of sound-representation vectors of the same length as the input. This model comprises a bank of one-dimensional (1-D) convolutional filters (_Conv1D_) with kernel sizes from 1 to \(M\) (\(M=8\) is currently used), followed by a U-Network (Wang et al., 2017) with three blocks of _Conv1D - MaxPooling (strides=2) - LeakyReLU - Dropout_ and three blocks of _DeConv1D - Concatenate_ (Figure 7). The 1-D convolutional bank explicitly models the local and contextual information of the input sequence. The following U-Network further improves the quality of the audio sequence with precise localization. Finally, the network generates Mel-scale spectrum vectors. To train Network 2, Network 1 was used to create Mel-scale spectrum vectors from the images of a training ultrasonic video clip as the input, while Mel-scale spectrum vectors of the same length extracted from the audio of the same training clip served as the output. As in the case of Network 1, the mean-squared error was used as the loss function, and Adam was used as the optimizer.

Figure 4. Obtained ultrasonic image from probes attached to the jaw from underneath.

Figure 3. Ultrasonic imaging probes.

Figure 2. SottoVoce system overview.

For simplicity, the time durations of the input and output were fixed to the same value (currently, 3.68 s is used). This duration encompasses many typical speech commands.

### Sound generation

Following the neural-network processing, a sequence of Mel-scale-spectrum sound-representation vectors is converted to an audio signal using the Griffin-Lim algorithm (Zhu et al., 2017). This conversion is possible from the output of Network 1 or the output of Network 2. For testing, the generated audio signals are emitted from an audio speaker, and they can be used to control nearby sound-controlled devices such as a smart speaker. We are also considering feeding the audio waveform directly to the speech-controllable device as its audio input, without actually reproducing it as a sound wave.

### Training

To prepare the training data, two collaborators (a 28-year-old male and a 24-year-old male) wore an ultrasonic imaging probe under their jaws and were instructed to utter various speech commands. Approximately 500 speech commands were collected from each collaborator (Table 7). For each command, as well as the voice utterance, a video of the ultrasonic images was recorded. The training session was approved by the research ethics committee of the authors' institution. The recorded video was used to train Network 1. The ultrasonic images were rescaled to \(128\times 128\) and used as inputs. The corresponding utterance voice was converted to a Mel-scale spectrum and used as outputs. The number of training sets for Network 2 was the same as the number of recorded video files (approximately 500). To increase the number of training sets, data augmentation by applying Gaussian noise to the input Mel-scale spectrum vectors was used.
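To make Network 1's design concrete, the following is a minimal Keras sketch of the architecture described above. Only the layer types, input/output dimensions, loss, and optimizer are taken from the text; the filter count, kernel size, dense width, dropout rate, and the stacking of the \(K\) monochrome frames along the channel axis are illustrative assumptions.

```python
# Minimal sketch of Network 1: K stacked ultrasonic frames -> one 64-dim
# Mel-scale spectrum vector. Layer types, MSE loss, and Adam follow the
# text; filter count, kernel size, dense width, dropout rate, and the
# channel-wise stacking of frames are illustrative assumptions.
from tensorflow.keras import layers, models

K_FRAMES, IMG_SIZE, MEL_DIM = 13, 128, 64

network1 = models.Sequential([
    # Conv2D - LeakyReLU - Dropout - BatchNormalization
    layers.Conv2D(64, 4, strides=2, padding="same",
                  input_shape=(IMG_SIZE, IMG_SIZE, K_FRAMES)),
    layers.LeakyReLU(),
    layers.Dropout(0.25),
    layers.BatchNormalization(),
    # Flatten - Dense - LeakyReLU - Dropout - Dense - LeakyReLU
    layers.Flatten(),
    layers.Dense(512),
    layers.LeakyReLU(),
    layers.Dropout(0.25),
    layers.Dense(MEL_DIM),
    layers.LeakyReLU(),
])
network1.compile(optimizer="adam", loss="mse")
```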
As our models are speaker dependent, both Network 1 and Network 2 are trained for each speaker. Network 1 is trained first and is subsequently used to create the dataset for training Network 2.

### Implementation Details

The above-described network models were implemented on the Keras (Keras, 2016) deep-learning platform with TensorFlow (Keras, 2019) as the backend, using an NVIDIA GeForce 1080ti as the GPU board.

Figure 5. A series of ultrasonic images of the throat of a subject about to pronounce "Alexa."

Figure 8. Apparatus used for training and evaluating.

Figure 6. Representation of training data: \(K\)-sized ultrasonic image sequences are paired with the corresponding sounds (Mel-scaled sound vectors).

Figure 7. Network 2 improves the quality of generated Mel-scaled spectrum sequences. (Note: for consistency with the illustration of the neural networks, the time axis of the audio-feature vector is shown as the vertical axis.)

Training Network 1 with 500 speech commands (which creates 35,000 training data pairs for Network 1) required approximately 4 h. Training Network 2 required less than an hour. As the ultrasonic-imaging device cannot be connected directly to the Ubuntu machine that runs the neural networks, a simple server-client program was developed. It connects the computer that controls the ultrasonic-imaging device to the computer that operates the neural networks. The generated audio signals are sent back to the computer with the ultrasonic-imaging device. Processing an ultrasonic-image sequence of 3.68 s duration requires 2.36 s of neural-network computation. The total processing time (including video processing, neural-network processing, and conversion of the Mel-scale spectrum to an audio wave) was 2.61 s.

## 4. Results

The results of converting the ultrasonic images to sound are shown in Figure 9. In the figure, the top-row graphs show the sound-representation vectors (a Mel-scale spectrogram), and the bottom-row graphs are the corresponding waveforms. The graph labeled Net1 is the result of Network 1, that labeled Net2 is the result of Network 1 + Network 2, and that labeled "original" is the wave data encoded as a Mel-scale spectrogram and decoded back to the waveform. The last one is thus regarded as the ground truth of the training. Although the difference between the outputs of Network 1 and Network 2 is not obvious in the figure, we observed that the sound generated by Network 2 was better than that generated by Network 1 (examples of the output audio signals are given in the supplemental video). It is noteworthy that both Network 1 and Network 2 generate natural intonation, which is typically considered to be produced by vocal-fold vibration rather than by the state of the oral cavity. This result suggests that the neural networks may learn the context of the speech. The generated sounds emitted from the computer's speaker were subsequently tested with an existing (unmodified) smart speaker (Amazon Echo and Amazon Echo Show), and this test confirmed that the generated sounds can control smart speakers. The speech commands used for training and testing were typical Amazon Alexa commands. For this test, the participants spoke the following four commands, five times each (20 utterances in total): "Alexa, play music," "Alexa, what's the weather like," "Alexa, what time is it," and "Alexa, play jazz." Table 1 lists the recognition success ratios of Network 1, Network 1 + Network 2, and the original (Mel-scale encoded and decoded) as the ground truth.
We confirmed that the combination of Network 1 and Network 2 improves the recognition rate. We also noticed that the trigger word ("Alexa") is always regenerated clearly. This may be because that word is simply the most frequently pronounced word in the training set. We also measured the word error rate (WER) using Google's cloud speech-to-text engine (Zhou et al., 2017) in the same environment used for the smart-speaker recognition measurement (a sketch of the WER computation is given after Table 1). The WER was 20.61%, 41.03%, and 33.56% for GT, Network 1, and Network 2, respectively (mean over a total of 40 speech commands from the two users in Table 1). We believe that this also serves as evidence for the effectiveness of Network 2. Through these studies, we found that commands such as "what's the weather like?" had a high recognition rate under all conditions, while shorter commands such as "play jazz" performed much worse. This may suggest that longer commands make it easier for Network 2 to capture context.

## 5. End-to-end Evaluation and Observations

We then examined real end-to-end silent-voice-to-audio conversion. In this case, a user is asked to mouth a speech command without actually emitting a sound, and the oral-cavity movement is recorded by an ultrasonic imaging probe. The obtained image sequence is subsequently translated to a voice by the proposed system. We asked the participants to speak as silently as possible (without vibrating their vocal cords) and to articulate as similarly as possible to when they speak aloud. However, we did not ask them to hold their breath; consequently, a small leaked sound was sometimes audible. To quantify this, we measured the sound level emitted by the participant, following the evaluation method of SilentVoice (SilentVoice, 2017). We used a noise meter with the same specifications (min range 30 \(dB\), 1.5 \(dB\) error) placed 30 \(cm\) away, in a room with a background noise level of 31.0 \(dB(A)\). The mean peak sound level over 20 measurements (typical Alexa speech commands) was 37.14 \(dB(A)\), which is lower than that of soft whispering. Figure 10 shows a comparison of the sound generated when a user is emitting a voice and when a user is not emitting a voice (please refer to the supplemental video for the actual generated sounds). Initially, the result was unsatisfactory. We had expected that the movement in the oral cavity without emitting a voice (Figure 10, middle) would be the same as that when the user actually emits a voice (Figure 10, left); however, a subtle difference was found between them, and the sound quality generated from the images without voice was not as good as that generated from the images with voice. However, the following interesting phenomenon was observed. Because the user can also listen to the sound generated from the images without voice, the user attempted to change their mouth movement to obtain a slightly better result. After several trials, the quality of the generated sound improved. We consider that the users improved their own silent-voicing skills.

\begin{table} \begin{tabular}{l|c c c} & User A & User B & ave. \\ \hline Network 1 & 60.0\% & 25.0\% & 42.5\% \\ Network 1 + Network 2 & 65.0\% & 65.0\% & 65.0\% \\ GT & 90.0\% & 90.0\% & 90.0\% \\ \hline \end{tabular} \end{table} Table 1. Speech-recognition success ratio in tests with an unmodified existing smart speaker ("GT" denotes Mel-scale encoded/decoded audio from the original voice data, regarded as the ground truth of the training).
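For reference, the WER reported above is the standard word-level edit-distance ratio. A minimal sketch of its computation is given below; the example transcripts are hypothetical and not taken from the study.

```python
# Minimal word error rate (WER) sketch: word-level Levenshtein distance
# normalized by the reference length. The example transcripts are hypothetical.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("alexa play jazz", "alexa play as"))  # 0.33... (one substitution)
```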
## 6. Discussions

### Incremental Voice Generation

In the current design, the obtained ultrasonic-image sequences are converted to a voice at the granularity of a speech command (approximately 3.6 s), because Network 2 uses a fixed-length voice-representation sequence. However, based on the observation of the users' practice, it would be better to generate sounds incrementally, so that the user has a tighter feedback loop for learning the oral movements that generate a better voice.

### Continuous Ultrasonic-Wave Emission into the Body

The effect on human organs of ultrasonic waves emitted continuously into the body is unknown. However, a simple triggering mechanism could be added to start and stop the emission of ultrasonic waves. For example, a combination of an accelerometer and a microphone in the device could detect jaw movement to start a (silent) voice command without an actual voice being emitted.

Figure 10. Comparison of generated sounds. Left: Network 2 results from ultrasonic images while a user is emitting a voice; Middle: Network 2 results from ultrasonic images without emitting a voice; Right: the original (Mel-scale spectrogram encoded/decoded) voice.

Figure 9. Results of the training (shown as waveforms and Mel-scale spectrograms). Net1: Network 1 results; Net2: Network 2 results; original: original voice encoded as a Mel-scale spectrogram and decoded back to an audio signal. This is the "ground truth" of the training.

### Cure for Vocal Cord Disabilities

We also expect our research to help people with damaged vocal cords. As described in the Human-AI Integration section, people may be able to learn how to correctly control their mouth and tongue to generate sound, even though their vocal cords do not work.

### Combining with Other Modalities

Finally, it should be mentioned that this research is not intended to exclude other modalities. Combining information from EMG, accelerometers, and NAM microphones may improve the quality of speech recognition. Investigating the combination of these modalities is a subject of future research.

### Human-AI Integration

The above-described observation suggests that an interesting relationship exists between humans and AI. Rather than considering AI as an autonomous or separate entity, we may be able to regard AI as a part of the human. Hence, even when the initial performance of the (artificial) neural networks is not perfect, a user can gradually learn to improve the joint performance. This is similar to how people learn fundamental skills. When we learn to speak, the motor cortex that drives the oral cavity, tongue, and vocal folds and the auditory cortex form a tight coordination loop to obtain better speech performance (Figure 12 (a)). By extending this loop, organic neural networks (e.g., our brain) and artificial neural networks may also form a tight feedback loop (Figure 12 (b)). We call this formation "human-AI integration" rather than human-AI interaction. In this regard, we consider the Glove Talk II research from 1995 a pioneering work on human-AI integration (Han et al., 2019). In that work, the user learned to control a voice synthesizer with hand gestures. Three simple neural networks were used, and the user (who was a pianist) required more than 100 h to generate an audible voice. A combination of better neural networks and a better learner could reduce this learning time.

## 7. Conclusion

A method of silent-voice interaction with ultrasonic imaging was proposed.
Two neural networks were used in sequence to convert the mouthed, voiceless "utterance" of a user into sound (voice), which could be used to operate existing voice-controllable devices such as smart speakers. Following this result, we envision that a future form factor for the wearable computer would be a combination of an ultrasonic imaging probe attached to the underside of the jaw and a bone-conduction or open-air earphone (Figure 11). With this configuration, a user could invoke a voice-controllable assistant at any time without emitting a voice and obtain responses.
2303.02251
Certified Robust Neural Networks: Generalization and Corruption Resistance
Recent work has demonstrated that robustness (to "corruption") can be at odds with generalization. Adversarial training, for instance, aims to reduce the problematic susceptibility of modern neural networks to small data perturbations. Surprisingly, overfitting is a major concern in adversarial training despite being mostly absent in standard training. We provide here theoretical evidence for this peculiar "robust overfitting" phenomenon. Subsequently, we advance a novel distributionally robust loss function bridging robustness and generalization. We demonstrate, both theoretically and empirically, that the loss enjoys a certified level of robustness against two common types of corruption--data evasion and poisoning attacks--while ensuring guaranteed generalization. We show through careful numerical experiments that our resulting holistic robust (HR) training procedure yields SOTA performance. Finally, we indicate that HR training can be interpreted as a direct extension of adversarial training and comes with a negligible additional computational burden. A ready-to-use python library implementing our algorithm is available at https://github.com/RyanLucas3/HR_Neural_Networks.
Amine Bennouna, Ryan Lucas, Bart Van Parys
2023-03-03T22:43:57Z
http://arxiv.org/abs/2303.02251v2
# Certified Robust Neural Networks: Generalization and Corruption Resistance

###### Abstract

Recent work has demonstrated that robustness (to "corruption") can be at odds with generalization. Adversarial training, for instance, aims to reduce the problematic susceptibility of modern neural networks to small data perturbations. Surprisingly, overfitting is a major concern in adversarial training despite being mostly absent in standard training. We provide here theoretical evidence for this peculiar "robust overfitting" phenomenon. Subsequently, we advance a novel distributionally robust loss function bridging robustness and generalization. We demonstrate, both theoretically and empirically, that the loss enjoys a certified level of robustness against two common types of corruption--data evasion and poisoning attacks--while ensuring guaranteed generalization. We show through careful numerical experiments that our resulting holistic robust (HR) training procedure yields state-of-the-art performance. Finally, we indicate that HR training can be interpreted as a direct extension of adversarial training and comes with a negligible additional computational burden. A ready-to-use python library implementing our algorithm is available at [https://github.com/RyanLucas3/HR_Neural_Networks](https://github.com/RyanLucas3/HR_Neural_Networks).

Machine Learning, Robustness, Generalization, Corruption

## 1 Introduction

Recent work has shown that many modern neural networks are vulnerable to data corruption, putting into question the reliability and performance of such models at deployment time. Robustness has in recent years, next to out-of-sample generalization, become a central research topic in the machine learning community. The key question is how to develop machine learning models which are reliable and perform well despite being trained on corrupted training data and evaluated on perturbed test data. Data corruption can be due to multiple causes, ranging from benign imprecise data collection to malicious adversaries. Two types of corruption, each associated with a distinct cause, stand out in prior work; we briefly discuss each.

Evasion Corruption. This type of corruption results in small, often imperceptible, perturbations of all the test or train instances and may fool a naively trained model into poor performance at deployment time. Goodfellow et al. (2014) study evasion attacks at test time and show that neural network models can suffer a severe decrease in accuracy when test instances are subjected to an imperceptible amount of noise. This pioneering work spurred a rich literature on attacks (Tramer et al., 2020) as well as defense strategies such as the popular adversarial training (Madry et al., 2018; Bai et al., 2021). Evasion corruption need not necessarily be caused by an adversary or even take place at test time. Indeed, noise is often present in training data as well, as a result of measurement imperfections. Examples include street noise and background chatter in speech recognition (Li et al., 2014; Mitra et al., 2017) or impulse noise and blur in image classification (Hendrycks and Dietterich, 2019; Hendrycks et al., 2021).

Poisoning Corruption. In this type of corruption, a certain fraction of the training data points is drastically changed. This alteration of the training dataset may then mislead learning into selecting a "bad" model.
Clearly, if a large majority of data points is affected, the data set is wholly corrupted and there is no point in hoping for good performance. Hence, poisoning corruption is assumed to affect only a small fraction of the data points. One example is label flipping in classification problems, where a small portion of the training data points is wrongly labeled or an adversary selects a small number of labels to flip before training time (Biggio et al., 2012). Poisoning corruption has also been shown to considerably degrade performance at deployment time. For instance, Nelson et al. (2008) show that altering only \(1\%\) of email spam labels can completely fool commercial spam filters. Several related attacks and defense mechanisms have been identified in follow-up work (Goldblum et al., 2022).

Generalization. Beyond protecting against both discussed types of attacks, an equally important challenge is out-of-sample performance. A model generalizes if it provides protection against **statistical error**, by which we mean here the discrepancy between the finite set of data points available at training time and the full data distribution. Indeed, all models must avoid _overfitting_ to a particular training set and generalize to the test data. Several classical approaches have been developed to address statistical error, such as \(\ell_{1},\ell_{2}\) regularization, early stopping, and data augmentation. Such models seek not only good expected performance under the randomness of the samples but also low variance. To generalize well, models must avoid being overly sensitive to a particular training set and should instead seek "satisfactory" performance on all "likely" data sets of the task at hand.

Generalization and robustness to each type of corruption have been extensively studied in isolation. However, the interaction between corruption protection and generalization has been largely ignored: existing "robust" approaches typically dismiss statistical error. Recent work has shown that robustness (to corruption) can be at odds with generalization, highlighting the need to consider corruption and statistical error simultaneously. Schmidt et al. (2018) and Rice et al. (2020) exhibit theoretical and empirical evidence that adversarial training (AT)--which provides robustness against evasion corruption--suffers from notably worse generalization than natural training. Rice et al. (2020) show in particular that the surprising generalization properties of neural networks, such as improved out-of-sample performance with larger models, vanish under AT. Schmidt et al. (2018) indicate that "understanding the interactions between robustness, classifier model, and data distribution from the perspective of generalization is an important direction for future work". Hence, while AT aims to improve deployment performance by protecting against evasion corruption, it seems inadvertently to limit generalization by exacerbating the adverse effect of statistical error. This interesting phenomenon, dubbed _robust overfitting_, and its remedies are the topic of several recent papers (Wu et al., 2020; Li & Spratling, 2022; Yu et al., 2022). Similar observations were noted in the case of poisoning, where poisoning corruption exacerbates overfitting (Zhang & Sabuncu, 2018; Hendrycks et al., 2019). Besides generalization issues, existing robust approaches typically protect against one single type of corruption exclusively, either evasion or poisoning.
However, in practical settings, we are typically unaware of what type of corruption, if any, plagues the data. Unsurprisingly, a model designed for a specific type of corruption risks considerably poorer performance when deployed in a setting with a different type of corruption. Hence, to develop practical robust models, it is important to protect against both types of corruption, and against their potential combination.

### Contributions

In this paper, we study generalization when learning with data affected by evasion and/or poisoning corruption on top of statistical error. We first provide theoretical evidence of the significance of the interaction between corruption and statistical error. We show that the generalization gap of AT can be decomposed into an empirical risk minimization (ERM) generalization gap and a positive term which we interpret as an _adversary shift_ (Section 3). This result provides theoretical evidence for the robust overfitting phenomenon observed empirically by Rice et al. (2020) and indicates that AT indeed suffers from worse generalization than ERM. This theoretical observation illustrates that protecting against corruption alone, while ignoring statistical error, can inadvertently yield worse generalization performance.

Second, we tackle the problem of robustness against both types of corruption and statistical error simultaneously. We do so by designing and optimizing a provably "tight" upper bound on the _test loss_ when learning with data subject to corruption. This is in contrast to most prior work, which is either empirically motivated or, like AT, merely optimizes empirical proxies to such upper bounds--dismissing statistical error. To design this upper bound, we construct a set around the corrupted training distribution which contains the testing distribution with high probability. A valid upper bound on the test loss can then be taken as the worst-case expectation of the loss over all distributions in this ambiguity set. We show that training a neural network model with the associated distributionally robust optimization (DRO) objective function can be interpreted as a natural generalization of standard AT which protects against the two discussed corruption sources simultaneously and, in particular, enjoys strong generalization. Our holistic robust (HR) objective function has three parameters \(\alpha\in[0,1]\), \(r\geq 0\) and \(\mathcal{N}\), and is guaranteed to be an upper bound on the test performance with high probability \(1-e^{-rn+O(1)}\) when less than a fraction \(\alpha\) of all \(n\) samples is tampered with by poisoning and the evasion corruption is bounded within the set \(\mathcal{N}\). The latter guarantee certifies robustness for _any_ desired level of evasion attacks (\(\mathcal{N}\)) and poisoning attacks (\(\alpha\)): our out-of-sample deployment performance is at least as good as the estimated in-sample performance with high probability.

Finally, we propose a practical HR training algorithm for neural networks with negligible additional computational burden compared to classical AT. We conduct careful numerical experiments which illustrate the efficacy of HR training on both the MNIST and CIFAR-10 datasets in all possible corruption settings: clean, and affected by poisoning and/or evasion corruption.
By using validation on the protection parameters \(\alpha,r,\mathcal{N}\), HR training adapts to and protects against whatever corruption plagues the data, including combinations of evasion and poisoning, while ensuring strong generalization. In all settings, HR training achieves state-of-the-art (SOTA) performance and outperforms classical evasion defense mechanisms (AT of Madry et al. (2018), TRADES of Zhang et al. (2019)), poisoning defense mechanisms (DPA of Levine and Feizi (2020); Wang et al. (2022)), and the combination of poisoning and evasion defenses. Furthermore, we replicate the experiments of Rice et al. (2020) and provide numerical evidence that HR training largely circumvents the robust overfitting phenomenon experienced by AT. Finally, we show empirically that the _training loss_ of HR training is an upper bound on its _adversarial test loss_ with high probability, providing a generalization guarantee. Notably, this observation does not hold for any of the prior robust approaches.

We release an open-source Python library ([https://github.com/RyanLucas3/HR_Neural_Networks](https://github.com/RyanLucas3/HR_Neural_Networks)) that can be directly installed through pip. With our library, HR training can be adopted by changing essentially only one line of the training code. We also provide an extensive Colab tutorial on using HR in the GitHub repository.

### Related work

**Robust Overfitting.** Understanding and eliminating the robust overfitting phenomenon has been the topic of previous work. Rice et al. (2020) suggest that conventional remedies for overfitting cannot improve upon early stopping. To compensate for the poor generalization of AT (Schmidt et al., 2018), data augmentation techniques such as semi-supervised learning (Alayrac et al., 2019; Carmon et al., 2019) and data interpolation (Lee et al., 2020; Chen et al., 2021) have been considered. Alternative approaches directly modify the AT loss function by reweighing data points (Wang et al., 2019; Zhang et al., 2021), mitigating memorization effects (Dong et al., 2021) or favoring large-loss data (Yu et al., 2022). Kulynych et al. (2022) recently considered differential privacy as a way to ensure strong generalization. Previous works mostly attempt to eliminate robust overfitting through empirical insight. In our work, however, we present theoretical evidence with regard to the workings of the robust overfitting mechanism (Theorem 3.2) and propose a disciplined approach to its elimination. Our proposed HR training is indeed a theoretically disciplined approach which comes with a theoretical certificate against overfitting (Theorem 4.1). On a practical level, HR can be understood as a smart reweighing of the standard adversarial examples considered in Madry et al. (2018), and in doing so it equips standard AT with protection against both poisoning and evasion attacks.

**Distributionally Robust Optimization.** Wasserstein distributionally robust optimization (WDRO) has recently received attention in the context of AT (Staib and Jegelka, 2017; Sinha et al., 2017; Wong et al., 2019; Bui et al., 2022). WDRO safeguards against attackers with a bound on the average perturbation applied over all data points, as opposed to standard AT, which protects against attacks in which the perturbation on each individual data point is bounded. WDRO, however, still suffers from fundamentally the same robust overfitting phenomenon as standard AT, as it does not take statistical error into account.
In contrast, Kullback-Leibler (KL) divergence ambiguity sets are known in the DRO literature to be superior to their Wasserstein counterparts at safeguarding against statistical error in the absence of data corruption (Lam, 2019; Van Parys et al., 2021). The Levy-Prokhorov (LP) metric has been shown in the statistical literature (Hampel, 1971) to precisely capture certain types of data corruption. Recently, Bennouna and Van Parys (2022) proposed a novel ambiguity set combining both the KL and LP metrics in an attempt to safeguard against corruption and statistical error simultaneously. Contrary to the WDRO approach of Sinha et al. (2017), the holistic robust DRO formulation based on this novel ambiguity set is proven here to guarantee an upper bound on the adversarial test loss for _any_ desired corruption level and _any_ loss function. Moreover, the HR formulation is not harder to train than classical AT. This paper considers the ambiguity set of Bennouna and Van Parys (2022) in the context of evasion (at _test time_) and poisoning, and proves new statistical properties. In particular, we prove that the HR ambiguity set provides _uniform_ finite-sample generalization bounds, independent of the dimensionality of the model class--a crucial novel property in the context of deep learning.

## 2 Preliminaries: Learning under corruption

We formalize here the setting of learning under evasion and poisoning corruption. A typical underlying assumption in prior work is that the empirical distribution of the data is close to the out-of-sample distribution, neglecting statistical error and risking poor generalization. Here, we carefully consider both types of corruption and their interaction with statistical error. Consider a learning problem with covariates \(x\in\mathcal{X}\) and outputs \(y\in\mathcal{Y}\) (e.g., labels in classification problems), following a joint distribution \(\mathcal{D}\) with compact support \(\mathcal{Z}\subseteq\mathcal{X}\times\mathcal{Y}\) which represents the distribution of the clean data. We do not observe samples directly from this clean distribution \(\mathcal{D}\); rather, up to a fraction \(\alpha\) of \(n\) independent samples is wholly corrupted (poisoning corruption). We hence observe the \(n\) samples \(\{z_{i}\coloneqq(x_{i},y_{i})\}_{i=1}^{n}\in\mathcal{Z}^{n}\) post corruption and denote with \(\mathcal{D}_{n}\) their empirical distribution. At test time, the test data is again sampled from \(\mathcal{D}\) but is subjected to noise from a set \(\mathcal{N}\) (evasion corruption). Typical work in AT considers, for example, noise bounded in an \(\ell_{p}\) ball (Madry et al., 2018). We denote the distribution of the perturbed test instances with \(\mathcal{D}_{\text{test}}\). We observe \(\mathcal{D}_{n}\), but our goal is to learn a model that minimizes the test loss \[\mathcal{L}_{\text{test}}(\theta):=\mathbb{E}_{\mathcal{D}_{\text{test}}}[\ell(\theta,Z)],\] where \(\theta\in\Theta\) are the model parameters and \(\ell\) is a bounded loss function (e.g., cross-entropy for neural networks); see Figure 1. Here, as in the remainder of the paper, \(\mathbb{E}_{\mathcal{D}^{\prime}}\) denotes expectation over a random variable \(Z\) with distribution \(\mathcal{D}^{\prime}\).

## 3 Adversarial training and generalization

We now highlight the importance of robustness against statistical error when learning under corruption.
Consider the classical setting of adversarial evasion attacks in the absence of poisoning (i.e., \(\alpha=0\)) and let here \(\ell^{\mathcal{N}}(\theta,z):=\max_{\delta\in\mathcal{N},\,z+\delta\in\mathcal{Z}}\ell(\theta,z+\delta)\) when the adversary can perturb the data \(z\) within the set \(\mathcal{N}\ni 0\). The AT models of Madry et al. (2018) minimize the empirical adversarial loss \(\mathcal{L}_{\text{AT}}(\theta):=\mathbb{E}_{\mathcal{D}_{n}}[\ell^{\mathcal{N}}(\theta,Z)]\) in the hope that its minimizer \(\theta_{\mathcal{D}_{n}}^{\text{AT}}\) over the parameter set \(\Theta\) is safeguarded against evasion attacks. Indeed, the testing loss \(\mathcal{L}_{\text{test}}(\theta)\) is upper bounded by \(\mathbb{E}_{\mathcal{D}}[\ell^{\mathcal{N}}(\theta,Z)]\), and hence minimizing its empirical counterpart \(\mathcal{L}_{\text{AT}}(\theta)\) makes intuitive sense. However, due to statistical error there might be a considerable gap between the adversarial test loss \(\mathbb{E}_{\mathcal{D}}[\ell^{\mathcal{N}}(\theta,Z)]\) and the empirical adversarial loss \(\mathcal{L}_{\text{AT}}(\theta)\), putting into question the intuitive appeal of this approach. AT typically does overfit to the worst-case perturbations of the _data points_--rather than the out-of-sample distribution--and thus generalizes poorly. This is akin to classical overfitting in ERM, caused by the gap between the minimized loss \(\mathcal{L}_{\text{ERM}}(\theta):=\mathbb{E}_{\mathcal{D}_{n}}[\ell(\theta,Z)]\) and the out-of-sample cost \(\mathbb{E}_{\mathcal{D}}[\ell(\theta,Z)]\). In the case of AT, it has been empirically observed that overfitting is in fact exacerbated when compared to ERM (Rice et al., 2020). We now provide theoretical insight into why AT suffers from overfitting more than ERM.

First, we present a distributional perspective on AT which will be useful in what follows. Consider the ambiguity set \[\mathcal{U}_{\mathcal{N}}(\mathcal{D}_{n}):=\{\mathcal{D}^{\prime}_{\text{test}}:\text{LP}_{\mathcal{N}}(\mathcal{D}_{n},\mathcal{D}^{\prime}_{\text{test}})\leq 0\} \tag{1}\] associated with the optimal transport metric \[\text{LP}_{\mathcal{N}}(\mathcal{D}_{n},\mathcal{D}^{\prime}_{\text{test}}):=\inf\left\{\int\mathbf{1}(z^{\prime}-z\notin\mathcal{N})\,\mathrm{d}\gamma(z,z^{\prime})\;:\;\gamma\in\Gamma(\mathcal{D}_{n},\mathcal{D}^{\prime}_{\text{test}})\right\} \tag{2}\] where \(\Gamma(\mathcal{D}_{n},\mathcal{D}^{\prime}_{\text{test}})\) denotes the set of all couplings on \(\mathcal{Z}\times\mathcal{Z}\) between \(\mathcal{D}_{n}\) and \(\mathcal{D}^{\prime}_{\text{test}}\) (Villani, 2009). It is straightforward to observe that \[\mathcal{L}_{\text{AT}}(\theta)=\max\{\mathbb{E}_{\mathcal{D}^{\prime}_{\text{test}}}[\ell(\theta,Z)]:\mathcal{D}^{\prime}_{\text{test}}\in\mathcal{U}_{\mathcal{N}}(\mathcal{D}_{n})\}. \tag{3}\] Hence, naively considering DRO formulations does not by itself eliminate robust overfitting. We will show in Section 4 that a careful generalization of the previous DRO formulation does protect against robust overfitting.

**ERM overfitting gap.** Let us first provide intuition on classical overfitting in ERM. As \(\mathcal{D}_{n}\) is here the empirical distribution of a random dataset, it is itself a random variable, and we denote with \(\mathcal{S}_{n}\) its distribution.
Denote the ERM solution associated with some arbitrary distribution \(\mathcal{D}^{\prime}\) as \(\theta_{\mathcal{D}^{\prime}}\in\arg\min_{\theta\in\Theta}\mathbb{E}_{\mathcal{D}^{\prime}}[\ell(\theta,Z)]\), which we assume to be unique for simplicity. ERM overfitting is bound to occur as the expected testing loss is larger than the expected training loss, i.e., \[\mathbb{E}_{\mathcal{D}_{n}\sim\mathcal{S}_{n}}[\mathbb{E}_{\mathcal{D}}[\ell(\theta_{\mathcal{D}_{n}},Z)]]=\mathbb{E}_{\mathcal{D}^{1}_{n},\mathcal{D}^{2}_{n}\sim\mathcal{S}_{n}}[\mathbb{E}_{\mathcal{D}^{1}_{n}}[\ell(\theta_{\mathcal{D}^{2}_{n}},Z)]] \tag{4}\] \[\geq\mathbb{E}_{\mathcal{D}^{1}_{n},\mathcal{D}^{2}_{n}\sim\mathcal{S}_{n}}[\mathbb{E}_{\mathcal{D}^{1}_{n}}[\ell(\theta_{\mathcal{D}^{1}_{n}},Z)]] \tag{5}\] \[=\mathbb{E}_{\mathcal{D}_{n}\sim\mathcal{S}_{n}}[\mathbb{E}_{\mathcal{D}_{n}}[\ell(\theta_{\mathcal{D}_{n}},Z)]]. \tag{6}\] Here, Equality (4) is due to Fubini's theorem. Inequality (5) follows from the fact that \(\theta_{\mathcal{D}^{1}_{n}}\) is a minimizer of \(\mathbb{E}_{\mathcal{D}^{1}_{n}}[\ell(\theta,Z)]\). Equality (6) follows from the fact that \(\mathcal{D}^{2}_{n}\) and \(\mathcal{D}^{1}_{n}\) are independent and share the same distribution \(\mathcal{S}_{n}\). The right-hand side of (6) is precisely the expected training loss. We define the ERM _overfitting gap_, between testing and training loss, as \[\mathcal{G}_{\text{ERM}}(\mathcal{S}_{n},\ell):=\mathbb{E}_{\mathcal{D}_{n}\sim\mathcal{S}_{n}}\big[\mathbb{E}_{\mathcal{D}}[\ell(\theta_{\mathcal{D}_{n}},Z)]-\mathbb{E}_{\mathcal{D}_{n}}[\ell(\theta_{\mathcal{D}_{n}},Z)]\big],\] which is nonnegative by (4)-(6).

**AT overfitting gap.** Let us now examine the robust overfitting phenomenon in AT. For an arbitrary distribution \(\mathcal{D}^{\prime}\) and an arbitrary model \(\theta\), we define its associated worst-case adversary as the distribution \(\mathcal{D}^{\prime}(\theta)\in\arg\max\{\mathbb{E}_{\mathcal{D}^{\prime}_{\text{test}}}[\ell(\theta,Z)]:\mathcal{D}^{\prime}_{\text{test}}\in\mathcal{U}_{\mathcal{N}}(\mathcal{D}^{\prime})\}\). We denote in particular with \(\mathcal{D}^{\prime\,\text{AT}}:=\mathcal{D}^{\prime}(\theta^{\text{AT}}_{\mathcal{D}^{\prime}})\) the worst-case adversary of the AT solution \(\theta^{\text{AT}}_{\mathcal{D}^{\prime}}\in\arg\min_{\theta\in\Theta}\max\{\mathbb{E}_{\mathcal{D}^{\prime}_{\text{test}}}[\ell(\theta,Z)]:\mathcal{D}^{\prime}_{\text{test}}\in\mathcal{U}_{\mathcal{N}}(\mathcal{D}^{\prime})\}\), which minimizes \(\mathcal{L}_{\text{AT}}\) (see (3)). We will assume in this section that the AT best model \(\theta^{\text{AT}}_{\mathcal{D}^{\prime}}\) and its associated worst-case adversary \(\mathcal{D}^{\prime\,\text{AT}}\) form a saddle point.
**Assumption 3.1** (Saddle-Point).: For any \(\mathcal{D}^{\prime}\) we have \(\mathbb{E}_{\mathcal{D}^{\prime}_{\text{test}}}[\ell(\theta^{\text{AT}}_{\mathcal{D}^{\prime}},Z)]\leq\mathbb{E}_{\mathcal{D}^{\prime\,\text{AT}}}[\ell(\theta,Z)]\) for all \(\theta\in\Theta\) and \(\mathcal{D}^{\prime}_{\text{test}}\in\mathcal{U}_{\mathcal{N}}(\mathcal{D}^{\prime})\).

The previous saddle-point condition implies in particular that the AT best model also minimizes a nominal ERM cost \(\mathbb{E}_{\mathcal{D}^{\text{AT}}_{n}}[\ell(\theta,Z)]\) with respect to a worst-case adversary \(\mathcal{D}^{\text{AT}}_{n}\sim\mathcal{S}^{\text{AT}}_{n}\), i.e., \(\theta^{\text{AT}}_{\mathcal{D}_{n}}=\theta_{\mathcal{D}^{\text{AT}}_{n}}\in\arg\min_{\theta\in\Theta}\mathbb{E}_{\mathcal{D}^{\text{AT}}_{n}}[\ell(\theta,Z)]\). Assumption 3.1 holds for any convex loss function, such as logistic regression (Sion, 1958; Bose et al., 2020). However, it is not necessarily verified for neural networks due to their inherent nonconvexity. Recent work by Gidel et al. (2021) suggests, however, that it may also hold approximately for neural networks. We define the _AT overfitting gap_ as the expected difference between the adversarial testing loss and the AT loss, i.e., \[\mathcal{G}_{\text{AT}}(\mathcal{S}_{n},\ell):=\mathbb{E}_{\mathcal{D}_{n}\sim\mathcal{S}_{n}}[\mathbb{E}_{\mathcal{D}}[\ell^{\mathcal{N}}(\theta^{\text{AT}}_{\mathcal{D}_{n}},Z)]-\mathbb{E}_{\mathcal{D}_{n}}[\ell^{\mathcal{N}}(\theta^{\text{AT}}_{\mathcal{D}_{n}},Z)]]=\mathbb{E}_{\mathcal{D}_{n}\sim\mathcal{S}_{n}}[\mathbb{E}_{\mathcal{D}(\theta^{\text{AT}}_{\mathcal{D}_{n}})}[\ell(\theta^{\text{AT}}_{\mathcal{D}_{n}},Z)]-\mathbb{E}_{\mathcal{D}^{\text{AT}}_{n}}[\ell(\theta^{\text{AT}}_{\mathcal{D}_{n}},Z)]].\] Our next result shows that, under the saddle-point assumption, the AT overfitting gap can be decomposed into an ERM overfitting gap and a _nonnegative_ adversary-shift term \[\mathcal{G}_{\text{SHIFT}}(\mathcal{S}_{n},\ell):=\mathbb{E}_{\mathcal{D}^{1}_{n},\mathcal{D}^{2}_{n}\sim\mathcal{S}_{n}}[\mathbb{E}_{\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}})}[\ell(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}},Z)]-\mathbb{E}_{\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{2}_{n}})}[\ell(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}},Z)]].\]

**Theorem 3.2**.: _Let Assumption 3.1 hold. Then,_ \[\mathcal{G}_{\text{AT}}(\mathcal{S}_{n},\ell)=\mathcal{G}_{\text{ERM}}(\mathcal{S}^{\text{AT}}_{n},\ell)+\underbrace{\mathcal{G}_{\text{SHIFT}}(\mathcal{S}_{n},\ell)}_{\geq 0}.\]

The proof of Theorem 3.2 can be found in Appendix A and indicates in particular that the AT overfitting gap is always at least as large as the overfitting gap suffered by an ERM procedure on data generated by the worst-case adversary \(\mathcal{D}^{\text{AT}}_{n}\). Hence, while ERM overfitting is due to the difference between the training and testing data sets, AT suffers worse overfitting: its gap is due to a difference between (adversarial) training and testing datasets plus an additional adversary-shift term. This result gives theoretical evidence for the robust overfitting phenomenon observed empirically in a flurry of recent papers. The additional adversary-shift term can be interpreted as the sensitivity of the AT solution to a change of adversary. Hence, Theorem 3.2 indicates that robust overfitting is due to the trained model adapting too much to the specific perturbations artificially added during AT.
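To see why the adversary-shift term is nonnegative (a one-step argument, spelled out here for completeness), note that \(\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}})\) maximizes the expected loss of \(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}}\) over \(\mathcal{U}_{\mathcal{N}}(\mathcal{D}^{2}_{n})\), which also contains \(\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{2}_{n}})\):
\[\mathbb{E}_{\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}})}[\ell(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}},Z)]=\max_{\mathcal{D}^{\prime}\in\mathcal{U}_{\mathcal{N}}(\mathcal{D}^{2}_{n})}\mathbb{E}_{\mathcal{D}^{\prime}}[\ell(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}},Z)]\geq\mathbb{E}_{\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{2}_{n}})}[\ell(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}},Z)],\]
and taking expectations over \(\mathcal{D}^{1}_{n},\mathcal{D}^{2}_{n}\sim\mathcal{S}_{n}\) yields \(\mathcal{G}_{\text{SHIFT}}(\mathcal{S}_{n},\ell)\geq 0\).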
Let us detail the interpretation of this adversary shift gap \(\mathcal{G}_{\text{SHIFT}}\). Recall that \(\theta^{\text{AT}}_{\mathcal{D}_{n}}\) is an AT model trained with distribution \(\mathcal{D}_{n}\). Intuitively, \(\mathcal{D}^{2}_{n}\) can be seen as a testing distribution, and \(\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}})\) (resp. \(\mathcal{D}^{2}_{n}(\theta^{\text{AT}}_{\mathcal{D}^{2}_{n}})\)) is a testing distribution with adversarial examples where the adversary is chosen against model \(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}}\) (resp. \(\theta^{\text{AT}}_{\mathcal{D}^{2}_{n}}\)). Hence, each of the two terms of the gap measures the robust testing error under a _distinct_ adversary. The difference therefore measures the change in testing error when the adversary changes. This implies that the adversary shift quantifies the _sensitivity of the AT model to a change of adversary_ resulting from a change of training distribution: if we train with data \(\mathcal{D}^{1}_{n}\), AT trains with perturbations from the adversary against \(\theta^{\text{AT}}_{\mathcal{D}^{1}_{n}}\), while if we train with data \(\mathcal{D}^{2}_{n}\), AT trains with perturbations from a distinct (shifted) adversary, namely the one against \(\theta^{\text{AT}}_{\mathcal{D}^{2}_{n}}\). This gap is large when the trained model adapts too much (overfits) to the specific perturbations added to the training samples by AT. ## 4 A holistic robust loss From the previous section, it is clear that practical robustness requires us to take into account statistical error in addition to poisoning and evasion corruption. The key idea, following the motivation behind AT, is to construct an upper bound on the test loss when learning with corruption. As opposed to classical AT, such an upper bound should account for poisoning as well, and also, crucially, should bound the _out-of-sample_ adversarial test loss \(\mathcal{L}_{\text{test}}\) rather than its empirical proxy. Let us first build intuition on our upper bound. As Section 3 demonstrates, an empirical proxy of the test loss typically underestimates the test loss of the trained model, as the expected overfitting gaps are always nonnegative, \(\mathcal{G}_{\text{ERM}},\mathcal{G}_{\text{AT}}\geq 0\). This is caused by the fluctuations of the empirical distribution of samples around the out-of-sample distribution generating the samples. Hence, instead of considering the ambiguity set (1) of the AT loss (3) capturing only the corruption at hand, we augment this ambiguity set to capture the fluctuations of the empirical distribution due to sampling. These statistical fluctuations are captured by the Kullback-Leibler divergence (Van Erven and Harremos, 2014), denoted here as KL, as we will demonstrate shortly. On the other hand, the optimal transport metric LP (defined in (2)) will capture both evasion and poisoning. This augmentation will increase the estimated loss "just enough" to be a tight upper bound on the test loss. Define here the ambiguity set
\(\mathcal{U}_{\mathcal{N},\alpha,r}(\mathcal{D}_{n}):=\{\mathcal{D}^{\prime}_{\text{test}}:\exists\mathcal{D}^{\prime}\text{ s.t. }\text{LP}_{\mathcal{N}}(\mathcal{D}_{n},\mathcal{D}^{\prime})\leq\alpha,\ \text{KL}(\mathcal{D}^{\prime},\mathcal{D}^{\prime}_{\text{test}})\leq r\}\), and the associated holistic robust (HR) loss as \[\mathcal{L}^{\mathcal{N},\alpha,r}_{\text{HR}}(\theta):=\max\{\mathbb{E}_{\mathcal{D}^{\prime}_{\text{test}}}[\ell(\theta,Z)]:\mathcal{D}^{\prime}_{\text{test}}\in\mathcal{U}_{\mathcal{N},\alpha,r}(\mathcal{D}_{n})\},\tag{7}\] where the desired protection against evasion corruption is controlled by \(\mathcal{N}\), protection against poisoning corruption is controlled by \(\alpha\), and statistical error is accounted for by \(r\). Intuitively, the LP ball \(\{\mathcal{D}^{\prime}:\text{LP}_{\mathcal{N}}(\mathcal{D}_{n},\mathcal{D}^{\prime})\leq\alpha\}\) contains all the possible "training" empirical distributions \(\mathcal{D}^{\prime}\) directly sampled from the adversarial testing distribution \(\mathcal{D}_{\text{test}}\)--that is, obtained by removing the poisoned data points and then perturbing each data point by evasion. As \(\mathcal{D}^{\prime}\) is an empirical distribution of samples from \(\mathcal{D}_{\text{test}}\), large deviation theory (Dembo & Zeitouni, 2009) ensures that the set \(\{\mathcal{D}^{\prime}_{\text{test}}:\text{KL}(\mathcal{D}^{\prime},\mathcal{D}^{\prime}_{\text{test}})\leq r\}\) contains (intuitively) \(\mathcal{D}_{\text{test}}\) with high probability \(1-e^{-rn+O(1)}\). This intuitive explanation is merely a simplified illustration. In the proof of the subsequent Theorem 4.1 (which we defer to Appendix C) we indeed prove that the considered ambiguity set around the random observed empirical distribution contains the adversarial testing distribution (with probability \(1-e^{-rn+O(1)}\)), given that less than a fraction \(\alpha\) of all training data points are corrupted by poisoning and the evasive attack on the test data is limited to a compact set in the interior of the set \(\mathcal{N}\). **Theorem 4.1** (Robustness Certificate).: _Suppose that less than a fraction \(\alpha\in(0,1]\) of the independent identically distributed (IID) training data is poisoned and the evasion attack on the test set is limited to a compact set in \(\text{int}(\mathcal{N})\). Then, \(\text{Pr}(\mathcal{L}^{\mathcal{N},\alpha,r}_{\text{HR}}(\theta)\geq\mathcal{L}_{\text{test}}(\theta)\ \forall\theta\in\Theta)\geq 1-e^{-rn+O(1)}\)._ Denote here the holistic robust solution as \(\theta^{\text{HR}}_{\mathcal{D}_{n}}\in\arg\min_{\theta\in\Theta}\max_{\mathcal{D}^{\prime}_{\text{test}}\in\mathcal{U}_{\mathcal{N},\alpha,r}(\mathcal{D}_{n})}\mathbb{E}_{\mathcal{D}^{\prime}_{\text{test}}}[\ell(\theta,Z)]\). The previous theorem certifies that asymptotically the testing error is not larger than the HR loss with high probability, i.e., \[\text{Pr}\Big{(}\mathcal{L}^{\mathcal{N},\alpha,r}_{\text{HR}}(\theta^{\text{HR}}_{\mathcal{D}_{n}})\geq\mathcal{L}_{\text{test}}(\theta^{\text{HR}}_{\mathcal{D}_{n}})\Big{)}\geq 1-e^{-rn+O(1)}.\] In Appendix C we prove a finite-sample extension of this generalization certificate (Theorem C.4) and indicate that the result is statistically tight (Theorem C.5). We point out that, unlike certificates found in several prior works (for instance in Sinha et al. (2017)), our bound does not depend on the dimension of the parameter space \(\Theta\) but rather only on the "size" of the event set \(\mathcal{Z}\) (e.g., images and labels). This is critical as, in the context of neural networks, the number of weights is typically much larger than the number of data points \(n\).
The HR loss is a natural generalization of the AT loss. When no robustness against statistical error (\(r=0\)) and poisoning corruption (\(\alpha=0\)) is desired, we have indeed \(\mathcal{L}^{\mathcal{N},0,0}_{\text{HR}}(\theta)=\mathcal{L}_{\text{AT}}(\theta)\); see Appendix B for further details. ## 5 HR training algorithm We now introduce an algorithm--HR training--to optimize the HR loss \(\mathcal{L}^{\mathcal{N},\alpha,r}_{\text{HR}}\) efficiently. At first glance, minimizing the HR loss seems challenging as it requires the solution of a saddle-point problem over the model parameters \(\theta\in\Theta\) and potentially continuous distributions \(\mathcal{D}^{\prime}_{\text{test}}\in\mathcal{U}_{\mathcal{N},\alpha,r}(\mathcal{D}_{n})\). We exploit, however, a finite reformulation of the HR loss by Bennouna & Van Parys (2022). For the training data set \(\{z_{i}=(x_{i},y_{i})\}_{i\in[n]}\) (or any subset thereof), we can compute the associated HR loss _exactly_ as \[\mathcal{L}^{\mathcal{N},\alpha,r}_{\text{HR}}(\theta)=\left\{\begin{array}{ll}\max&\sum_{i=1}^{n}d^{\prime}_{i}\,\ell^{\mathcal{N}}(\theta,z_{i})+d^{\prime}_{0}\,\ell^{\mathcal{Z}}(\theta)\\ \text{s.t.}&d^{\prime}\in\Re^{n+1}_{+},\ \hat{d}^{\prime}\in\Re^{n+1}_{+},\ s\in\Re^{n}_{+},\\ &\sum_{i=0}^{n}d^{\prime}_{i}=\sum_{i=0}^{n}\hat{d}^{\prime}_{i}=1,\\ &\hat{d}^{\prime}_{i}+s_{i}=\frac{1}{n}\quad\forall i\in[1,\ldots,n],\\ &\sum_{i=0}^{n}\hat{d}^{\prime}_{i}\log\big{(}\frac{\hat{d}^{\prime}_{i}}{d^{\prime}_{i}}\big{)}\leq r,\ \ \sum_{i=1}^{n}s_{i}\leq\alpha\end{array}\right.\tag{8}\] where \(\ell^{\mathcal{Z}}(\theta):=\max_{z\in\mathcal{Z}}\ell(\theta,z)\) and \(\ell^{\mathcal{N}}(\theta,z):=\max_{\delta\in\mathcal{N},z+\delta\in\mathcal{Z}}\ell(\theta,z+\delta)\). In particular, the supremum of the HR loss (7) is attained in distributions \(\mathcal{D}^{\prime}_{\text{test}}\) and \(\mathcal{D}^{\prime}\) of finite support (\(d^{\prime}\) and \(\hat{d}^{\prime}\) respectively in (8)), supported on the adversarial examples and a point maximizing the loss. The HR loss is therefore in essence simply an adversarial reweighing of the loss terms comprising the AT loss, plus an additional loss term \(\ell^{\mathcal{Z}}(\theta)\). Determining the HR loss exactly requires evaluating the adversarial loss \(\ell^{\mathcal{N}}(\theta,z)\) and subsequently solving the exponential cone problem (8) with \(O(n)\) variables and constraints, which can be done efficiently by off-the-shelf optimization solvers. Practically, we train the HR loss by mimicking standard adversarial training. At each minibatch iteration, we first compute an approximate adversarial example \(z^{\prime}_{i}\approx z_{i}+\arg\max_{\delta\in\mathcal{N},z_{i}+\delta\in\mathcal{Z}}\ell(\theta,z_{i}+\delta)\) for each data point in the minibatch \(I\subset[n]\) using the projected gradient descent (PGD) algorithm of Madry et al. (2018). With the help of the approximations \(\ell^{\mathcal{N}}(\theta,z_{i})\approx\ell(\theta,z^{\prime}_{i})\) as well as \(\ell^{\mathcal{Z}}(\theta)\approx\max_{i\in I}\ell(\theta,z^{\prime}_{i})\) we determine an approximate maximizer \(d^{\prime\star}\) in problem (8). The model parameters are then updated by stepping in the direction of the negative gradient of the HR loss.
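For concreteness, a minimal sketch of solving the inner reweighing problem (8) is given below, assuming the cvxpy library with an exponential-cone-capable solver such as ECOS; the function name `hr_weights` and its interface are our assumptions, not from the paper's code.

```python
import cvxpy as cp
import numpy as np

def hr_weights(adv_losses, worst_loss, alpha, r):
    """Solve problem (8) for given per-sample adversarial losses.

    adv_losses: array of ell^N(theta, z_i), i = 1..n
    worst_loss: the extra term ell^Z(theta) (in practice approximated
                by the largest per-sample adversarial loss)
    Returns the optimal weights d'* of length n + 1, where index 0
    weights worst_loss.
    """
    n = len(adv_losses)
    d = cp.Variable(n + 1, nonneg=True)      # d'
    d_hat = cp.Variable(n + 1, nonneg=True)  # \hat{d}'
    s = cp.Variable(n, nonneg=True)
    objective = cp.Maximize(adv_losses @ d[1:] + worst_loss * d[0])
    constraints = [
        cp.sum(d) == 1,
        cp.sum(d_hat) == 1,
        d_hat[1:] + s == 1.0 / n,
        # the summed kl_div terms equal the KL constraint of (8),
        # since both d and d_hat sum to one
        cp.sum(cp.kl_div(d_hat, d)) <= r,
        cp.sum(s) <= alpha,
    ]
    cp.Problem(objective, constraints).solve(solver=cp.ECOS)
    return d.value

# toy usage with random losses
print(hr_weights(np.random.rand(8), 1.0, alpha=0.1, r=0.05))
```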
We remark that this gradient can be computed efficiently using Danskin's theorem as a weighted sum of the gradients on the adversarial examples, i.e., \(\nabla_{\theta}\mathcal{L}^{\mathcal{N},\alpha,r}_{\text{HR}}(\theta;I)=\sum_{i\in I}d^{\prime\star}_{i}\nabla_{\theta}\ell(\theta,z^{\prime}_{i})+d^{\prime\star}_{0}\nabla_{\theta}\ell(\theta,z^{\prime}_{\arg\max_{i\in I}\ell(\theta,z^{\prime}_{i})})\). The algorithm is summarized in Algorithm 1, where we remark that HR training differs from standard adversarial training only in that it requires computing the maximizer \(d^{\prime\star}\) in (8). Practically, HR training with PGD takes less than 6% more time than standard adversarial training using PGD in typical benchmarks. ## 6 Experiments In this section, we investigate the efficacy of HR training through extensive experiments on the MNIST and CIFAR-10 datasets. We consider the cross-entropy loss \(\ell(\theta,(x,y))=-\sum_{k=1}^{K}y^{(k)}\log f_{\theta}^{(k)}(x)\), \(K=10\), where \(f_{\theta}(x)\in\Re^{K}\) represents the network with weights \(\theta\) and softmax output probabilities for covariate \(x\) and one-hot encoded class label \(y\in\Re^{K}\). We present here only the CIFAR-10 results and leave the MNIST experiments, which bear the same insights, to Appendix D. We conduct three sets of experiments. The first set evaluates each type of robustness separately and shows that each parameter provides robustness against a distinct source of overfitting: \(\mathcal{N}\) against data evasion, \(\alpha\) against data poisoning and finally \(r\) against statistical error. The second set of experiments shows that HR training does not suffer robust overfitting and also benchmarks HR training against SOTA evasion robustness methods. The last set of experiments evaluates robustness against statistical error and both corruptions simultaneously and shows that HR training significantly outperforms benchmarks. In particular, each of these sets of experiments shows that the theoretical robustness certificate provided in Theorem 4.1 holds empirically. In these experiments, we train HR with PGD (see Algorithm 1) and adopt the classical setting of covariate evasion attacks by choosing \(\mathcal{N}=B(0,\epsilon)\times\{0\}\) where, depending on the context, \(B(0,\epsilon)\) is either a Euclidean or infinity norm ball with radius \(\epsilon\). ```
Specification: Learning rate \(\lambda\), number of epochs \(T\), mini-batches \(B\), minibatch replays \(M\), HR parameters (\(\mathcal{N},\alpha,r\)).
Input: Data \(\{z_{i}=(x_{i},y_{i})\}_{i\in[n]}\). Initialized \(\theta\).
//iterate \(M\) times per batch, for a total of \(T\) epochs
for \(t\in[T/M]\), minibatch \(I\in B\), \(m\in[M]\) do
    //adversarial attack, e.g., using PGD
    Compute \(z_{i}^{\prime}\approx z_{i}+\arg\max_{\delta\in\mathcal{N},\,z_{i}+\delta\in\mathcal{Z}}\ell(\theta,z_{i}+\delta)\ \ \forall i\in I\)
    //reweigh data points
    Compute optimal weights \(d^{\prime\star}\) by solving (8) with \(\ell^{\mathcal{N}}(\theta,z_{i})\approx\ell(\theta,z_{i}^{\prime})\) and \(\ell^{\mathcal{Z}}(\theta)\approx\max_{i\in I}\ell(\theta,z_{i}^{\prime})\)
    //gradient step on the reweighed loss
    \(\theta\leftarrow\theta-\lambda\big{(}\sum_{i\in I}d_{i}^{\prime\star}\nabla_{\theta}\ell(\theta,z_{i}^{\prime})+d_{0}^{\prime\star}\nabla_{\theta}\ell(\theta,z_{\arg\max_{i\in I}\ell(\theta,z_{i}^{\prime})}^{\prime})\big{)}\)
end for
```
Algorithm 1: HR training.
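A minimal PyTorch sketch of one such minibatch update follows (our illustration; `pgd_attack` is an assumed PGD implementation, and `hr_weights` is a solver for (8) such as the cvxpy sketch given earlier):

```python
import torch
import torch.nn.functional as F

def hr_training_step(model, optimizer, x, y, pgd_attack, hr_weights,
                     alpha=0.05, r=0.1):
    # 1) adversarial examples for the minibatch (evasion step)
    x_adv = pgd_attack(model, x, y)
    # 2) per-sample adversarial losses; solve (8) on their detached values
    per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    with torch.no_grad():
        losses = per_sample.detach().cpu().numpy()
        d = hr_weights(losses, float(losses.max()), alpha, r)
    d = torch.as_tensor(d, dtype=per_sample.dtype, device=per_sample.device)
    # 3) adversarially reweighed loss; by Danskin's theorem its gradient is
    #    the weighted sum of per-sample gradients, with d[0] weighting the
    #    worst-case sample (our approximation of ell^Z)
    hr_loss = (d[1:] * per_sample).sum() + d[0] * per_sample.max()
    optimizer.zero_grad()
    hr_loss.backward()
    optimizer.step()
    return float(hr_loss)
```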
### Combatting robust overfitting

We replicate here the experiment of Rice et al. (2020) exhibiting robust overfitting for AT training. As in Rice et al. (2020), we use pre-activation ResNet18 on CIFAR-10 with PGD attacks of 10 steps, and with SGD learning rate decay at epochs 100 and 150. Further experimental setup is discussed in Appendix D.1. We train AT with PGD of Madry et al. (2018), TRADES of Zhang et al. (2019), and HR with \(r\geq 0\), \(\epsilon>0\) and \(\alpha=0\). For all algorithms, we use the same PGD attack during training with \(\epsilon=8/255\) and 10 attack steps. Figure 3 shows the test loss and error up to 200 training epochs when using pre-activation ResNet18 under \(\ell_{\infty}\) attacks. Table 1 reports the test error at the 200-epoch mark, replicating the results reported in Table 2 in Rice et al. (2020). AT clearly exhibits robust overfitting as the test loss drastically increases with more training epochs. The test error also increases, but perhaps not as drastically. This indicates that although the number of errors made eventually settles, worryingly, the errors are made with increasing confidence as quantified by the cross-entropy loss. Similar to Rice et al. (2020), we find that TRADES (Zhang et al., 2019) without early stopping also experiences robust overfitting, although to a lesser extent than regular PGD training. HR training with modest parameter values \(r\) effectively mitigates this phenomenon and results in a monotonically decreasing test loss as a function of training epochs, eliminating the need for early stopping. It is worth noting that the PGD best checkpoint performs better than the HR final checkpoint. However, it may be hard in practice to recover the performance of such a best checkpoint with early stopping. In fact, PGD's training curve around the best checkpoint is very steep, and hence early stopping may result in unstable performance on practical datasets with potentially smaller data sizes.
The hyperparameters of each algorithm are selected based on a 70/30 train/validation split of the training data, as detailed in Appendix D.2. **Results.** Table 2 presents the natural and adversarial classification errors of each algorithm under various corruption settings. HR training significantly surpasses all other benchmarks in _every setting_. Traditional robust algorithms, such as PGD, TRADES, and DPA, unsurprisingly falter when faced with a corruption variant different from the one they were originally designed to handle, performing even worse than ERM. In stark contrast, HR training demonstrates its adaptability by automatically adjusting to the specific corruption in the data, consistently maintaining high performance even when faced with combined attack strategies. Furthermore, the superior generalization capability of HR enables it to outperform each algorithm even in the unique attack settings for which they were specifically designed. Finally, HR also significantly outperforms the combination of defense techniques (PGD+DPA). Indeed, combining these defense techniques does not necessarily yield cumulative benefits due to the potential negative interactions between them. In contrast, HR is designed to provide effective simultaneous defenses while ensuring strong generalization. Figure 4 shows the training loss and the adversarial test loss of each algorithm in the settings of combined evasion and poisoning. Here, DPA and DPA+PGD are not represented as they have no clear notion of loss. HR training significantly outperforms all other benchmarks in two distinct ways. First, HR training enjoys the smallest adversarial test loss. Furthermore, the variance of its adversarial test loss is the smallest among the benchmarked procedures. Second, HR training effectively provides a robustness certificate as alluded to in Theorem 4.1. The training loss is indeed, with high probability, an upper bound on the testing loss--an observation which fails to hold for the other procedures. **Training time.** We investigate the training time of HR to evaluate the computational cost of its gains in robustness and generalization. Table 3 reports the training time (in minutes) per epoch of ERM, PGD, and HR on various datasets and architectures. While HR takes four times longer than PGD for small datasets and architectures (ConvNet on MNIST), its additional computational cost compared to PGD is negligible for large datasets and architectures: \(<\)**+6%** for CIFAR-10 with ResNet and Tiny ImageNet with ResNet/EfficientNet. This is due to the fact that, computationally, HR training only differs from PGD in solving an additional conic optimization problem (8), as described in Algorithm 1. This optimization problem depends only on the batch size (linearly) and is _independent_ of the data dimensionality and the network's size/architecture. As a result, while the PGD step and the network gradient step scale with the network's size and data dimensionality, HR's additional step does not scale with these factors, keeping its additional computational burden constant compared to PGD. ## 7 Discussion & Conclusion We make neural network training resilient against adversarial attacks based on a disciplined robust optimization approach. We remark that our approach can be generalized to other corruption models by substituting the LP metric with a metric suitable for the considered corruption model (e.g., Wasserstein for perturbations bounded on average).
The tractability properties of the resulting method may, however, depend on the considered metric. We hence believe that the considered approach constitutes an interesting new inroad to bridging generalization and adversarial robustness of modern deep neural networks.

\begin{table} \begin{tabular}{l l c c c c c} \hline \hline & & Natural (No Attack) & Evasion & Stat. Error + Poisoning & Stat. Error + Evasion & Stat. Error + Evasion + Poisoning \\ \hline **Combined Defenses** & HR & **9.0\%** & **18.5\%** & **25.3\% (27.1\%)** & **33.0\% (34.4\%)** & **36.6\% (38.9\%)** \\ & PGD + DPA & 18.6\% & 27.8\% & 37.0\% (38.2\%) & 47.1\% (48.0\%) & 45.2\% (46.1\%) \\ \hline **AT Defenses** & PGD & 10.6\% & 20.2\% & 30.1\% (32.4\%) & 33.8\% (35.3\%) & 40.6\% (41.7\%) \\ & TRADES & 13.9\% & 32.8\% & 32.6\% (34.3\%) & 43.3\% (46.9\%) & 49.7\% (52.0\%) \\ \hline **Poisoning Defenses** & DPA & 15.6\% & 39.2\% & 35.4\% (37.9\%) & 50.3\% (51.4\%) & 49.6\% (51.4\%) \\ & TV & 9.8\% & 40.8\% & **25.3\% (27.1\%)** & 43.9\% (50.5\%) & 50.6\% (54.4\%) \\ \hline **No Defense** & ERM & 9.4\% & 35.4\% & 28.8\% (30.5\%) & 43.9\% (50.5\%) & 52.2\% (55.5\%) \\ & KL & **9.0\%** & 38.1\% & 28.4\% (31.5\%) & 43.9\% (50.5\%) & 49.9\% (54.7\%) \\ \hline \hline \end{tabular} \end{table} Table 2: Mean and (worst-case) error rate over 10 trials, with model parameters chosen based on a validation set.

\begin{table} \begin{tabular}{l l c c c} \hline \hline Architecture & Dataset & ERM & PGD & HR \\ \hline ConvNet & MNIST & 0.04 & 0.28 & 1.30 \\ ResNet-18 & CIFAR-10 & 0.24 & 2.60 & 2.75 \\ ResNet-18 & Tiny ImageNet & 3.39 & 6.06 & 6.36 \\ EfficientNet & Tiny ImageNet & 7.65 & 39.59 & 40.87 \\ \hline \hline \end{tabular} \end{table} Table 3: Runtime of ERM, PGD, and HR in minutes per epoch (one full pass of the training dataset).
2301.02428
Sensitivity analysis using Physics-informed neural networks
The goal of this paper is to provide a simple approach to perform local sensitivity analysis using Physics-informed neural networks (PINN). The main idea lies in adding a new term in the loss function that regularizes the solution in a small neighborhood near the nominal value of the parameter of interest. The added term represents the derivative of the loss function with respect to the parameter of interest. The result of this modification is a solution to the problem along with the derivative of the solution with respect to the parameter of interest (the sensitivity). We call the new technique SA-PINN, which stands for sensitivity analysis in PINN. The effectiveness of the technique is shown using four examples: the first one is a simple one-dimensional advection-diffusion problem to show the methodology, the second is a two-dimensional Poisson's problem with nine parameters of interest, and the third and fourth examples are one- and two-dimensional transient two-phase flow in porous media problems.
John M. Hanna, José V. Aguado, Sebastien Comas-Cardona, Ramzi Askri, Domenico Borzacchiello
2023-01-06T09:21:45Z
http://arxiv.org/abs/2301.02428v3
# Sensitivity analysis using Physics-informed neural networks ###### Abstract The paper's goal is to provide a simple unified approach to perform sensitivity analysis using Physics-informed neural networks (PINN). The main idea lies in adding a new term in the loss function that regularizes the solution in a small neighborhood near the nominal value of the parameter of interest. The added term represents the derivative of the residual with respect to the parameter of interest. The result of this modification is a solution to the problem along with the derivative of the solution with respect to the parameter of interest (the sensitivity). We call the new technique to perform sensitivity analysis within this context SA-PINN. We show the effectiveness of the technique using 3 examples: the first one is a simple 1D advection-diffusion problem to show the methodology, the second is a 2D Poisson's problem with 9 parameters of interest and the last one is a transient two-phase flow in porous media problem. keywords: Physics-informed neural networks, sensitivity analysis, two-phase flow in porous media ## 1 Introduction Sensitivity analysis is a technique to measure the effect of uncertainties in one or more input parameters on output quantities. Sensitivity can be regarded, quantitatively, as the derivative of output quantities with respect to input parameters that might carry some uncertainty. It has great importance in several engineering applications, including aerodynamic optimization [1], shape optimization in solid mechanics [2], injection molding [3] and biomedical applications [4; 5], to name just a few. Several methods exist to calculate the sensitivities; the simplest one is the finite difference approach. It consists of solving the system repeatedly, using any numerical technique, while varying the parameter of interest. Afterwards, the derivatives can be approximated by calculating finite differences. This method becomes impractical when the number of parameters of interest increases, because the number of times the full system needs to be solved grows exponentially, making the computation of the sensitivities intractable [6]. The adjoint method is becoming the state of the art for performing sensitivity analysis, especially in CFD applications [7]. The method solves an adjoint system of equations that is usually derived from the primal system. By solving the adjoint system, one obtains the gradient values with respect to the parameters of interest. Despite the success and increasing widespread use of the method, some issues remain. One of these issues is related to the differentiability of the solution when discontinuities appear, as in shock wave or two-phase flow problems, which leads to instabilities [8]. Machine learning based techniques have been growing rapidly to solve problems governed by partial differential equations (PDEs). Physics-informed neural networks (PINN) is a fast-growing approach that attempts to solve forward and inverse problems governed by PDEs [9]. PINN gained wide interest due to not requiring a big data set, since the physics governed by the PDEs is used to regularize the model.
PINN is powerful due to the use of feed-forward neural networks, which are universal approximators [10], as the approximation space, and due to recent advances in automatic differentiation capabilities [11] that facilitate derivative calculations. The method has been applied to several fields including solid mechanics [12], fluid mechanics [13; 14], additive manufacturing [15], two-phase flow in porous media [16] and many others [17; 18; 19; 20; 21; 22; 23; 24]. In this article, PINN is used as the base framework to develop the new sensitivity analysis technique. The main objectives of the article can be summarized as follows: * Introducing a new technique to perform sensitivity analysis based on the framework of PINN, which we call SA-PINN. * Showing the ease and effectiveness of getting the sensitivity with respect to multiple parameters of interest at once. * Displaying the ability of SA-PINN to get the sensitivities for problems with discontinuities, such as moving boundaries or sharp gradients. The paper is organized in the following manner. Section 2 gives a quick introduction to PINN, followed by an explanation of SA-PINN. Section 3 introduces the three problems studied in the paper: a 1D advection-diffusion problem, a 2D Poisson's problem with multiple parameters of interest and a 1D unsteady two-phase flow in porous media problem. Section 4 gives the results for the three problems, including the calculation of some sensitivities of interest. Section 5 offers a summary of the method and a conclusion. ## 2 Sensitivity Analysis-PINN (SA-PINN) We consider a general partial differential equation of the form \[u_{t}+\mu L(u)=0,\quad\mathbf{x}\in\Omega,\ t\in[0,T] \tag{1}\] where \(u_{t}\) is the time derivative, \(L\) a general differential operator and \(\mu\) a material parameter. Initial and boundary conditions for the problem are defined as \[u(0,\mathbf{x})=u_{0} \tag{2}\] \[u(t,\mathbf{x_{D}})=u_{D} \tag{3}\] \[B(u(t,\mathbf{x_{N}}))=f(\mathbf{x_{N}}) \tag{4}\] where \(B\) is a differential operator, \(\mathbf{x_{D}}\) the boundary where the Dirichlet boundary condition is enforced and \(\mathbf{x_{N}}\) the boundary where the Neumann boundary condition is applied. ### PINN The first step to solve this problem with PINN is to choose the approximation space; the choice is a feed-forward neural network. Through automatic differentiation, a combined loss is formed from the residual of the PDE, evaluated at spatio-temporal points called collocation points, and the error in the enforcement of the initial and boundary conditions. A solution can be obtained by updating the weights and biases of the neural network to minimize the loss function using algorithms such as gradient descent, Adam or BFGS.
The loss function can be written as: \[Loss=\lambda_{0}\ loss_{0}+\lambda_{D}\ loss_{D}+\lambda_{N}\ loss_{N}+\lambda_{r}\ loss_{r} \tag{5}\] where \(\lambda_{i}\) are the weights for each loss term, which play an important role in the optimization process, and: \[loss_{0}=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}r_{0}^{2}(t_{0}^{i},\mathbf{x}_{0}^{i})=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}||u(t_{0}^{i},\mathbf{x}_{0}^{i})-u_{0}^{i}||^{2} \tag{6}\] \[loss_{D}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}r_{D}^{2}(t_{D}^{i},\mathbf{x}_{D}^{i})=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}||u(t_{D}^{i},\mathbf{x}_{D}^{i})-u_{D}^{i}||^{2} \tag{7}\] \[loss_{N}=\frac{1}{N_{N}}\sum_{i=1}^{N_{N}}r_{N}^{2}(t_{N}^{i},\mathbf{x}_{N}^{i})=\frac{1}{N_{N}}\sum_{i=1}^{N_{N}}||B(u(t_{N}^{i},\mathbf{x}_{N}^{i}))-f_{N}^{i}||^{2} \tag{8}\] \[loss_{r}=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}||r(t_{r}^{i},\mathbf{x}_{r}^{i})||^{2}=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}||u_{t}+\mu L(u)||_{(t_{r}^{i},\mathbf{x}_{r}^{i})}^{2} \tag{9}\] Here \(loss_{0}\), \(loss_{D}\), \(loss_{N}\) and \(loss_{r}\) are, respectively, the losses representing the initial condition, the Dirichlet and Neumann boundary conditions and the PDE residual. ### SA-PINN The main objective of PINN is to find a solution that minimizes the residual of the PDE within the spatio-temporal domain represented by the collocation points, while respecting the initial and boundary conditions. The result is a solution \(\hat{u}(t,\mathbf{x};\hat{\mu})\) to the PDE at a specific value \(\hat{\mu}\). To perform sensitivity analysis with \(\mu\) as the input parameter of interest, we would like to find not only the solution \(\hat{u}\) but also the derivative of the solution with respect to \(\mu\) (the sensitivity) at a given nominal value \(\hat{\mu}\). One way to obtain the sensitivity in PINN is to build a parametric model. This is done by changing the structure of the neural network to accommodate another input, namely the parameter of interest \(\mu\). One then adds collocation points in the spatio-temporal-parametric space and minimizes the residual over the whole parametric domain while respecting the initial and boundary conditions. Afterwards, the derivative of the solution with respect to \(\mu\) can be easily obtained through automatic differentiation. The main issue of building such parametric models is that the number of collocation points grows exponentially with the number of parameters of interest. The problem can then easily become computationally intractable when there are several parameters of interest, which is common in most engineering applications. To overcome this issue, instead of only minimizing the residual of the PDE, we also minimize the derivative of the residual with respect to the parameter of interest. First, the structure of the neural network is changed to accommodate \(\mu\) as an input. Then, the loss function is formed as the sum of the residual and the derivative of the residual with respect to \(\mu\), along with the terms enforcing the initial and boundary conditions. This way we make sure that the solution is accurate within a small neighborhood of \(\hat{\mu}\), so that the sensitivity can be calculated. We call this technique SA-PINN. The technique can be summarized in the following steps (a minimal code sketch follows the loss definition below): * Choose the neural network to have inputs related to space, time and the parameter of interest. * Sample the collocation points only in space and time; the points will live in a higher-dimensional space, but without adding more points. * Create the loss function with terms related to the PDE residual, the residual derivative with respect to the parameter of interest and the terms related to the initial and boundary conditions. The modified loss function will then be \[Loss_{m}=Loss+\lambda_{0\mu}\ loss_{0\mu}+\lambda_{D\mu}\ loss_{D\mu}+\lambda_{N\mu}\ loss_{N\mu}+\lambda_{r\mu}\ loss_{r\mu} \tag{11}\] where \[loss_{0\mu}=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\biggl{|}\frac{\partial r_{0}(t_{0}^{i},\mathbf{x}_{0}^{i},\mu_{0}^{i})}{\partial\mu}\biggr{|} \tag{12}\] \[loss_{D\mu}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\biggl{|}\frac{\partial r_{D}(t_{D}^{i},\mathbf{x}_{D}^{i},\mu_{D}^{i})}{\partial\mu}\biggr{|} \tag{13}\] \[loss_{N\mu}=\frac{1}{N_{N}}\sum_{i=1}^{N_{N}}\biggl{|}\frac{\partial r_{N}(t_{N}^{i},\mathbf{x}_{N}^{i},\mu_{N}^{i})}{\partial\mu}\biggr{|} \tag{14}\] \[loss_{r\mu}=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\biggl{|}\frac{\partial r(t_{r}^{i},\mathbf{x}_{r}^{i},\mu_{r}^{i})}{\partial\mu}\biggr{|} \tag{15}\]
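A minimal PyTorch sketch of this modified loss for the 1D advection-diffusion problem introduced in Section 3.1 below is given next; it is our illustration of the method, and the network size, number of collocation points and weight values are assumptions rather than the authors' exact setup.

```python
import torch

torch.manual_seed(0)
# network takes (x, eps) and returns y(x; eps)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def grad(out, var):
    # per-sample derivative d(out)/d(var), keeping the graph for higher orders
    return torch.autograd.grad(out, var, torch.ones_like(out), create_graph=True)[0]

def sa_pinn_loss(n_col=64, eps0=0.1, lam=0.1):
    # collocation points sampled in x only; all carry the nominal eps
    x = torch.rand(n_col, 1, requires_grad=True)
    eps = torch.full_like(x, eps0).requires_grad_(True)
    y = net(torch.cat([x, eps], dim=1))
    y_x = grad(y, x)
    y_xx = grad(y_x, x)
    r = eps * y_xx - y_x + 1.0          # residual of eps*y_xx - y_x + 1 = 0
    r_eps = grad(r, eps)                # added term: d(residual)/d(eps)
    # boundary conditions y(0) = 1, y(1) = 3 and their eps-sensitivity
    xb = torch.tensor([[0.0], [1.0]])
    yb = torch.tensor([[1.0], [3.0]])
    eb = torch.full_like(xb, eps0).requires_grad_(True)
    bc = net(torch.cat([xb, eb], dim=1)) - yb
    bc_eps = grad(bc, eb)
    return ((r**2).mean() + (bc**2).mean()
            + lam * (r_eps.abs().mean() + bc_eps.abs().mean()))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss = sa_pinn_loss()
    loss.backward()
    opt.step()
```

After training, the sensitivity \(\partial y/\partial\epsilon\) at \(\hat{\epsilon}=0.1\) is obtained with one additional autograd call on the trained network.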
Figure 1 shows a diagram that summarizes the methodology of SA-PINN. The parts in orange are the additions to classical PINN, and the \(u-\hat{u}\) term represents the mismatch of the solution with the initial and boundary conditions. It must be noted that we sample the collocation points only in space and time, but the points have another coordinate \(\mu\), which is set to the nominal value \(\hat{\mu}\) for all points.

Figure 1: Diagram explaining the methodology of SA-PINN.

## 3 Model problems In this section, we introduce the models of the three examples that are used to show the effectiveness of the technique. ### 1D diffusion-advection equation The first example is a steady one-dimensional diffusion-advection equation where we would like to study the effect of perturbations in the diffusion coefficient \(\epsilon\) on the solution. The strong form of the problem can be written as follows: \[\begin{gathered}\epsilon\ y_{xx}-y_{x}+1=0,\quad x\in[0,1],\\ y(0)=1,\ \ y(1)=3\end{gathered} \tag{17}\] The chosen nominal value for \(\epsilon\) is 0.1. \(y_{xx}\) and \(y_{x}\) are respectively the second- and first-order derivatives of the solution \(y\). The weights for the different terms in the loss function are set to 1 for the original PINN terms and 0.1 for the added sensitivity terms. ### 2D Poisson's problem The next example is a two-dimensional Poisson's problem with multiple parameters whose effect on the solution is studied. The domain is shown in figure 2, where there are 9 subdomains, each having a different diffusivity value. The strong form of the problem can be written as: \[\begin{array}{c}k\ \Delta u=-1,\quad in\ \Omega,\\ u=0,\quad\ on\ \partial\Omega\end{array} \tag{18}\] where \(\Omega\) is a square with unit sides and \(k\) is the diffusivity. The 9 subdomains have equal areas. The nominal value for the diffusivity is 1: \(k_{1}=k_{2}=\ldots=k_{9}=1\). The main PINN term weights are set to 1, and the added sensitivity term weights to 0.1.

Figure 2: 2D Poisson’s problem domain.

### 1D two-phase flow in porous media In this section, we introduce a 1D two-phase flow in porous media problem. The problem is faced in Liquid Transfer Molding composite manufacturing processes, where resin is injected into a mold that contains a prepositioned fibrous matrix. The problem is shown in figure 3. At \(t=0\), the domain is initially saturated with one fluid (fluid 1). Another fluid (fluid 2) is being injected from the left end at constant pressure \(p_{in}\), while the pressure at the other end is fixed to \(p_{out}\). The momentum equation can be approximated with Darcy's law, which can be written in 1D as follows: \[v=-\frac{k}{\phi\mu}p_{x} \tag{19}\] where \(v\) is the volume-averaged Darcy velocity, \(\mu\) the viscosity, \(p_{x}\) the pressure gradient, and \(\phi\) the porosity. Both fluids are assumed to be incompressible; therefore, the mass conservation equation reduces to \[v_{x}=0 \tag{20}\] Pressure boundary conditions can be prescribed on the inlet and outlet: \[p(\mathbf{x_{inlet}},t)=p_{in},\ \ p(\mathbf{x_{outlet}},t)=p_{out} \tag{21}\] To track the interface between the two fluids, the Volume Of Fluid (VOF) technique is used; a fraction function \(c\) is introduced which takes the value 1 for the resin and 0 for the air. The viscosity \(\mu\) is redefined as \[\mu=c\mu_{2}+(1-c)\mu_{1} \tag{22}\] where \(\mu_{2}\) and \(\mu_{1}\) are the two fluids' viscosities. \(c\) evolves with time according to the following advection equation \[c_{t}+vc_{x}=0 \tag{23}\]

Figure 3: One-dimensional domain (filling problem).

where \(c_{t}\) and \(c_{x}\) are the time and spatial derivatives of the fraction function \(c\), respectively. Initial and boundary conditions are defined to solve the advection of \(c\): \[c(\mathbf{x},t=0)=c_{0}(\mathbf{x}),\ \ c(\mathbf{x_{inlet}},t)=1 \tag{24}\] To sum up, the strong form of the problem can be written as: \[\begin{array}{ll}c_{t}+v\ c_{x}=0,&x\in[0,l],\ t\in[0,T],\\ v=-\frac{k}{\phi\mu}p_{x},&x\in[0,l],\ t\in[0,T],\\ v_{x}=0,&x\in[0,l],\ t\in[0,T],\\ p(0,t)=p_{in},\ \ p(l,t)=p_{out},&t\in[0,T],\\ c(x,0)=c_{0}(x),\ \ c(0,t)=1.&\end{array}\] ## 4 Results ### 4.1 1D diffusion-advection equation The solution \(u\) using PINN and SA-PINN is shown in figure 4 along with the analytical solution for \(\epsilon=0.1\). From figure 4, we can see that PINN and SA-PINN accurately capture the analytical solution to the problem. The derivative of the solution with respect to \(\epsilon\) at \(\epsilon=0.1\) is shown in figure 5. The reference finite difference solution in figure 5 is obtained by computing different PINN solutions near \(\epsilon=0.1\) and then calculating the derivative. We can see that classical PINN fails to predict the derivative, while SA-PINN accurately predicts the derivative due to the added regularization term in the loss function. The loss function for different values of \(\epsilon\) is plotted in figure 6 for PINN and SA-PINN.

Figure 5: \(\frac{\partial u}{\partial\epsilon}\) at \(\epsilon=0.1\) using PINN and SA-PINN along with the finite difference solution of the 1D advection-diffusion problem.

As seen in figure 6, SA-PINN has the effect of greatly flattening the loss curve in a neighborhood near the nominal value of \(\epsilon=0.1\). This leads to better solutions than PINN in that neighborhood and to accurate derivative calculation at \(\epsilon=0.1\). ### 4.2 2D Poisson's problem The PINN solution of the boundary value problem is shown in figure 7. The solution appears to be accurate and agrees with the analytical solution of the problem.

Figure 6: Loss function for different \(\epsilon\) values using PINN and SA-PINN for the 1D advection-diffusion problem.

The sensitivity terms \(\frac{\partial u}{\partial k_{i}}\) can then be plotted to see the effect of the diffusivity on the solution.

Figure 7: PINN solution of the 2D Poisson’s boundary value problem.
The computational time is plotted in figure 9 versus the number of parameters with respect to which sensitivity terms are added.

Figure 8: Different derivatives of the solution with respect to \(k_{i}\) for the 2D Poisson’s problem.

It can be seen from the figure that the computational time grows linearly with the number of parameters with respect to which the sensitivity is calculated. This happens because the number of collocation points stays the same when adding a new term to the loss function; the added cost is the same for each new sensitivity term. ### 4.3 1D transient two-phase flow in porous media First, we plot the front location for three different values of \(k\) by taking the 0.5 level set of the fraction function \(c\) in figure 10. We compare SA-PINN with classical PINN along with the analytical solution.

Figure 9: Computational time vs. number of sensitivity parameters.

We can notice that SA-PINN provides good results for values of \(k\) away from the nominal value \(k=1\). Classical PINN accurately predicts the solution only at the nominal value; away from it, random solutions were obtained, which is clear from the two red lines. In figure 11, we plot the time the flow front reaches \(x=0.5\) vs. \(k\). We compare the solution from SA-PINN with the analytical solution.

Figure 10: Flow front location vs. time for three different values of \(k\) (\(k=1\), \(0.5\) and \(2\)) for the transient two-phase flow in porous media problem.

We can see a good estimation of the filling time at different values of \(k\) using SA-PINN. This result can be useful in injection-process applications to estimate the filling time as a function of a parameter of interest. ## 5 Conclusion In this article, we presented a new method to perform sensitivity analysis based on the paradigm of PINN. The method is easy to implement using any of the machine learning libraries such as TensorFlow or PyTorch. We show, through the examples, that the technique is easy to use when sensitivities with respect to multiple parameters of interest are studied at the same time. The computation time grows linearly as the number of parameters increases, which is an advantage of the method. We also show, through the last example, that the method works for a problem where a discontinuity exists (the flow front) and the VOF method is used.

Figure 11: Time at which the flow front reaches \(x=0.5\) vs. \(k\) for the transient two-phase flow in porous media problem.

## Acknowledgements This study was funded under the PERFORM Thesis program of IRT Jules Verne.
2302.03266
Learning to Count Isomorphisms with Graph Neural Networks
Subgraph isomorphism counting is an important problem on graphs, as many graph-based tasks exploit recurring subgraph patterns. Classical methods usually boil down to a backtracking framework that needs to navigate a huge search space with prohibitive computational costs. Some recent studies resort to graph neural networks (GNNs) to learn a low-dimensional representation for both the query and input graphs, in order to predict the number of subgraph isomorphisms on the input graph. However, typical GNNs employ a node-centric message passing scheme that receives and aggregates messages on nodes, which is inadequate in complex structure matching for isomorphism counting. Moreover, on an input graph, the space of possible query graphs is enormous, and different parts of the input graph will be triggered to match different queries. Thus, expecting a fixed representation of the input graph to match diversely structured query graphs is unrealistic. In this paper, we propose a novel GNN called Count-GNN for subgraph isomorphism counting, to deal with the above challenges. At the edge level, given that an edge is an atomic unit of encoding graph structures, we propose an edge-centric message passing scheme, where messages on edges are propagated and aggregated based on the edge adjacency to preserve fine-grained structural information. At the graph level, we modulate the input graph representation conditioned on the query, so that the input graph can be adapted to each query individually to improve their matching. Finally, we conduct extensive experiments on a number of benchmark datasets to demonstrate the superior performance of Count-GNN.
Xingtong Yu, Zemin Liu, Yuan Fang, Xinming Zhang
2023-02-07T05:32:11Z
http://arxiv.org/abs/2302.03266v3
# Learning to Count Isomorphisms with Graph Neural Networks ###### Abstract Subgraph isomorphism counting is an important problem on graphs, as many graph-based tasks exploit recurring subgraph patterns. Classical methods usually boil down to a backtracking framework that needs to navigate a huge search space with prohibitive computational costs. Some recent studies resort to graph neural networks (GNNs) to learn a low-dimensional representation for both the query and input graphs, in order to predict the number of subgraph isomorphisms on the input graph. However, typical GNNs employ a node-centric message passing scheme that receives and aggregates messages on nodes, which is inadequate in complex structure matching for isomorphism counting. Moreover, on an input graph, the space of possible query graphs is enormous, and different parts of the input graph will be triggered to match different queries. Thus, expecting a fixed representation of the input graph to match diversely structured query graphs is unrealistic. In this paper, we propose a novel GNN called Count-GNN for subgraph isomorphism counting, to deal with the above challenges. At the edge level, given that an edge is an atomic unit of encoding graph structures, we propose an _edge-centric message passing_ scheme, where messages on edges are propagated and aggregated based on the edge adjacency to preserve fine-grained structural information. At the graph level, we _modulate the input graph representation_ conditioned on the query, so that the input graph can be adapted to each query individually to improve their matching. Finally, we conduct extensive experiments on a number of benchmark datasets to demonstrate the superior performance of Count-GNN. 1 University of Science and Technology of China, China 2 National University of Singapore, Singapore 3 Singapore Management University, Singapore [email protected], [email protected], [email protected], [email protected] ## 1 Introduction Research in network science and graph mining often finds and exploits recurring subgraph patterns on an input graph. For example, on a protein network, we could query for hydroxy groups, which consist of one oxygen atom covalently bonded to one hydrogen atom; on a social network, we could query for potential families in which several users form a clique, two of them working and the rest studying. These queries essentially describe a subgraph pattern that repeatedly occurs on different parts of an input graph and expresses certain semantics, such as the hydroxy groups or families. These subgraph patterns are also known as network motifs on homogeneous graphs [12] or meta-structures on heterogeneous graphs [23, 24]. To leverage their expressiveness, more sophisticated graph models [13, 25, 26, 27] have also been designed to incorporate motifs or meta-structures. The need for subgraph patterns in graph-based tasks and models leads to a high demand for _subgraph isomorphism counting_ [12]. Classical methods usually employ search-based algorithms such as backtracking [11, 10, 1] to exhaustively detect the isomorphisms and return an exact count. However, their computational costs are often excessive given that the detection problem is NP-complete and the counting form is #P-complete [10]. With the rise of graph neural networks (GNNs) [26], some recent approaches for subgraph isomorphism counting also leverage the powerful graph representations learned by GNNs [12, 13, 14].
They generally employ GNNs to embed the queries and input graphs into low-dimensional vectors, which are further fed into a counter module to predict the approximate number of isomorphisms on the input graph. Compared to classical approaches, they can significantly save computational resources at the expense of approximation, providing a useful trade-off between accuracy and cost since many applications do not necessarily need an exact count. However, previous GNN-based isomorphism counting models adopt a node-centric message-passing scheme, which propagates and aggregates messages on nodes. While this scheme is effective for node-oriented tasks, it falls short of matching complex structures for isomorphism counting. In particular, these models rely on message aggregation to generate representations centering on nodes, failing to explicitly and fundamentally capture the complex interactions among nodes. Thus, as the first challenge, _how do we capture fine-grained structural information_ beyond node-centric GNNs? Moreover, on an input graph, the space of possible query graphs is enormous. Different queries are often characterized by distinct structures that match with different parts of the input graph. A fixed graph representation to match with all possible queries is likely to underperform. Thus, as the second challenge, _how do we adapt the input graph to each query individually_, in order to improve the matching of specific structures in every query? In this paper, we propose a novel model called Count-GNN for approximate subgraph isomorphism counting, which copes with the above challenges from both the edge and graph perspectives. To be more specific, at the edge level, Count-GNN is built upon an _edge-centric_ GNN that propagates and aggregates messages on and for edges based on the edge adjacency, as shown in Fig. 1(a). Given that edges constitute the atomic unit of graph structures, any subgraph is composed of one or more edge chains. Thus, treating edges as first-class citizens can better capture fine-grained structural information. Theoretically, our proposed edge-centric GNN model can be regarded as a generalization of node-centric GNNs, with provably stronger expressive power. At the graph level, Count-GNN resorts to a modulation mechanism [14] by adapting the input graph representation to each query graph, as shown in Fig. 1(b). As a result, the input graph can be tailored to each individual query with varying structures. Coupling the two perspectives, Count-GNN is able to precisely match complex structures between the input graph and structurally diverse queries. To summarize, our contributions are three-fold. (1) We propose a novel model Count-GNN that capitalizes on edge-centric aggregation to encode fine-grained structural information, which is more expressive than node-centric aggregation in theory. (2) Moreover, we design a query-conditioned graph modulation in Count-GNN, to adapt structure matching to different queries from the graph perspective. (3) Extensive experiments on several benchmark datasets demonstrate that Count-GNN can significantly outperform state-of-the-art GNN-based models for isomorphism counting. ## 2 Related Work We present the most related studies here, while leaving the rest to Appendix G due to the space limitation. Graphs usually entail abundant local structures that depict particular semantics, which gives rise to the importance of subgraph isomorphism counting [10].
To solve this problem, most traditional methods resort to backtracking [10, 11, 12]. Although they can obtain the exact counts, the search space usually grows intractably as graph size increases. In fact, subgraph isomorphism counting is a #P-complete problem [10]. Subsequently, several approaches [13, 14] are proposed to utilize some constraints to reduce the search space, and others [15] try to filter out infeasible graphs to speed up the backtracking process. Another line of approaches [1, 16] relies on color coding for subgraph isomorphism counting in polynomial time. They are usually fixed-parameter tractable and can only be employed for some limited subcases. Other studies perform count estimation [18, 19, 20], such as in the setting where access to the entire network is prohibitively expensive [18], or using the counts of smaller patterns to estimate the counts for larger ones [19]. However, these attempts still face high computational costs. Recently, a few studies [15, 17] propose to address subgraph isomorphism counting from the perspective of machine learning. One study [15] proposes to incorporate several existing pattern extraction mechanisms such as CNN [11], GRU [12] and GNNs [21] on both query graphs and input graphs to exploit their structural match, which is then followed by a counter module to predict the number of isomorphisms. Another work [17] analyzes the ability of GNNs in detecting subgraph isomorphism, and proposes a Local Relational Pooling model based on the permutations of walks according to breadth-first search to count certain queries on graphs. However, they need to learn a new model for each query graph, limiting their practical application. Compared to traditional methods, these machine learning-based models can usually approximate the counts reasonably well, and at the same time significantly save computational resources and time, providing a practical trade-off between accuracy and cost. However, these approaches only adopt node-centric GNNs, which limits their ability to capture finer-grained structural information. There also exist a few recent GNNs based on edge-centric aggregation [13, 14], node-centric aggregation with the assistance of edge features [16, 17, 18], or both node- and edge-centric aggregation at the same time [15]. Except for DMPNN [15], they are not specifically devised for subgraph isomorphism counting. In particular, they all lack the key module of structural matching between query and input graphs. Besides, the edge-centric approaches do not theoretically justify their enhanced expressive power compared to the node-centric counterparts.

## 3 Problem Formulation

A _graph_ \(\mathcal{G}=(V_{\mathcal{G}},E_{\mathcal{G}})\) is defined by a set of nodes \(V_{\mathcal{G}}\), and a set of edges \(E_{\mathcal{G}}\) between the nodes. In our study, we consider the general case of directed edges, where an undirected edge can be treated as two directed edges in opposite directions. We further consider _labeled_ graphs (also known as heterogeneous graphs), in which there exists a node label function \(\ell:V_{\mathcal{G}}\to L\) and an edge label function \(\ell^{\prime}:E_{\mathcal{G}}\to L^{\prime}\), where \(L\) and \(L^{\prime}\) denote the set of labels on nodes and edges, respectively.

Figure 1: Illustration of Count-GNN.
A graph \(\mathcal{S}=(V_{\mathcal{S}},E_{\mathcal{S}})\) is a _subgraph_ of \(\mathcal{G}\), written as \(\mathcal{S}\subset\mathcal{G}\), if and only if \(V_{\mathcal{S}}\subseteq V_{\mathcal{G}}\) and \(E_{\mathcal{S}}\subseteq E_{\mathcal{G}}\). Next, we present the definition of subgraph isomorphism on a labeled graph.

**Definition 1** (Labeled Subgraph Isomorphism). _Consider a subgraph \(\mathcal{S}\) of some input graph, and a query graph \(\mathcal{Q}\). \(\mathcal{S}\) is isomorphic to \(\mathcal{Q}\), written as \(\mathcal{S}\simeq\mathcal{Q}\), if there exists a bijection between their nodes, \(\psi:V_{\mathcal{S}}\to V_{\mathcal{Q}}\), such that_

* \(\forall v\in V_{\mathcal{S}}\), \(\ell(v)=\ell(\psi(v))\);
* \(\forall e=\langle u,v\rangle\in E_{\mathcal{S}}\), it must hold that \(e^{\prime}=\langle\psi(u),\psi(v)\rangle\in E_{\mathcal{Q}}\) and \(\ell^{\prime}(e)=\ell^{\prime}(e^{\prime})\).

In the problem of _subgraph isomorphism counting_, we are given a query graph \(\mathcal{Q}\) and an input graph \(\mathcal{G}\). We aim to predict \(n(\mathcal{Q},\mathcal{G})\), the number of subgraphs on \(\mathcal{G}\) which are isomorphic to \(\mathcal{Q}\), _i.e._, the cardinality of the set \(\{\mathcal{S}|\mathcal{S}\subseteq\mathcal{G},\mathcal{S}\simeq\mathcal{Q}\}\). Note that this is a non-trivial #P-complete problem [1]. In practice, the query \(\mathcal{Q}\) usually has a much smaller size than the input graph \(\mathcal{G}\), _i.e._, \(|V_{\mathcal{Q}}|\ll|V_{\mathcal{G}}|\) and \(|E_{\mathcal{Q}}|\ll|E_{\mathcal{G}}|\), leading to a huge search space and computational cost.

## 4 Proposed Model: Count-GNN

In this section, we present the overall framework of Count-GNN first, followed by individual modules.

### Overall Framework

We give an overview of the proposed Count-GNN in Fig. 2. Consider some query graphs and an input graph in Fig. 2(a). On both the query and input graphs, we first conduct edge-centric aggregation in which messages on edges are propagated to and aggregated for each edge based on the edge adjacency, as shown in Fig. 2(b). This module targets the edge level, and enables us to learn edge-centric representations for both input graphs and queries that capture their fine-grained structural information for better structure matching. Furthermore, to be able to match diverse queries with distinct structures, the edge representations of the input graph are modulated w.r.t. each query, as shown in Fig. 2(c). The query-conditioned edge representations then undergo a readout operation to fuse into a query-conditioned whole-graph representation for the input graph. This module targets the graph level, and enables us to adapt the input graph to each query individually to improve the matching of specific structures in each query. Finally, as shown in Fig. 2(d), a counter module is applied to predict the isomorphism count on the input graph for a particular query, forming the overall objective.

### Edge-Centric Aggregation

Typical GNNs [11, 12, 13] and GNN-based isomorphism counting models [13, 14] resort to the key mechanism of node-centric message passing, in which each node receives and aggregates messages from its neighboring nodes. For the problem of subgraph isomorphism counting, it is crucial to capture fine-grained structural information for more precise structure matching between the query and input graph. Consequently, we exploit edge-centric message passing, in which each edge receives and aggregates messages from adjacent edges.
The edge-centric GNN captures structural information in an explicit and fundamental manner, given that edges represent the atomic unit of graph structures. More concretely, we learn a representation vector for each edge by propagating messages on edges. A message can be an input feature vector of the edge in the input layer of the GNN, or an intermediate embedding vector of the edge in subsequent layers. Specifically, given a directed edge \(e=\langle u,v\rangle\) on a graph (either an input or query graph), we initialize its message as a \(d_{0}\)-dimensional vector

\[\mathbf{h}^{0}_{\langle u,v\rangle}=\mathbf{x}_{u}\parallel\mathbf{x}_{\langle u,v\rangle}\parallel\mathbf{x}_{v}\in\mathbb{R}^{d_{0}}, \tag{1}\]

where \(\mathbf{x}_{*}\) encodes the input features of the corresponding nodes or edges and \(\parallel\) is the concatenation operator. In general, \(\mathbf{h}^{0}_{\langle u,v\rangle}\neq\mathbf{h}^{0}_{\langle v,u\rangle}\) for directed edges. Note that, in the absence of input features, we can employ one-hot encoding as the feature vector; it is also possible to employ additional embedding layers to further transform the input features into initial messages. Given the initial messages, we devise an edge-centric GNN layer, in which each edge receives and aggregates messages along the directed edges. The edge-centric message passing can be made recursive by stacking multiple layers. Formally, in the \(l\)-th layer, the message on a directed edge \(\langle u,v\rangle\), _i.e._, \(\mathbf{h}^{l}_{\langle u,v\rangle}\in\mathbb{R}^{d_{l}}\), is updated as

\[\mathbf{h}^{l}_{\langle u,v\rangle}=\sigma(\mathbf{W}^{l}\mathbf{h}^{l-1}_{\langle u,v\rangle}+\mathbf{U}^{l}\mathbf{h}^{l-1}_{\langle\cdot,u\rangle}+\mathbf{b}^{l}), \tag{2}\]

where \(\mathbf{W}^{l}\), \(\mathbf{U}^{l}\in\mathbb{R}^{d_{l}\times d_{l-1}}\) are learnable weight matrices, \(\mathbf{b}^{l}\in\mathbb{R}^{d_{l}}\) is a learnable bias vector, and \(\sigma\) is an activation function (we use LeakyReLU in the implementation). In addition, \(\mathbf{h}^{l-1}_{\langle\cdot,u\rangle}\in\mathbb{R}^{d_{l-1}}\) is the intermediate message aggregated from the preceding edges of \(\langle u,v\rangle\), _i.e._, edges incident on node \(u\) from other nodes, which can be materialized as

\[\mathbf{h}^{l-1}_{\langle\cdot,u\rangle}=\textsc{Aggr}(\{\mathbf{h}^{l-1}_{\langle i,u\rangle}|\langle i,u\rangle\in E\}), \tag{3}\]

where \(E\) denotes the set of directed edges in the graph, and \(\textsc{Aggr}(\cdot)\) is an aggregation operator to aggregate messages from the preceding edges. We implement the aggregation operator as a simple mean, although more sophisticated approaches such as self-attention [15] and multi-layer perceptron [13] can also be employed. To boost the message passing, more advanced mechanisms can be imported into the layer-wise edge-centric aggregation, _e.g._, a residual [14] can be added to assist the message passing from previous layers to the current layer. The above multi-layer edge-centric aggregation is applied to each query and input graph in a dataset. All query graphs share one set of GNN parameters (_i.e._, \(\mathbf{W}^{l},\mathbf{U}^{l},\mathbf{b}^{l}\)), while all input graphs share another set. On all graphs, the aggregated message on an edge \(e=\langle u,v\rangle\) in the last layer is taken as the representation vector of this edge, denoted as \(\mathbf{h}_{\langle u,v\rangle}\in\mathbb{R}^{d}\).
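To make the edge-centric scheme concrete, the following is a minimal PyTorch-style sketch of one such layer following Eqs. (1)-(3), assuming mean aggregation and a dense edge list; the class and variable names are our own illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeCentricLayer(nn.Module):
    """One edge-centric layer (Eq. 2): each directed edge <u,v> combines its own
    message with the mean of the messages on its preceding edges <i,u> (Eq. 3)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=True)   # W^l h_<u,v> + b^l
        self.U = nn.Linear(d_in, d_out, bias=False)  # U^l h_<.,u>

    def forward(self, h, src, dst, n_nodes):
        # h: (E, d_in) messages on the E directed edges; edge e is <src[e], dst[e]>
        agg = torch.zeros(n_nodes, h.size(1)).index_add_(0, dst, h)
        deg = torch.zeros(n_nodes).index_add_(0, dst, torch.ones_like(dst, dtype=h.dtype))
        agg = agg / deg.clamp(min=1).unsqueeze(1)    # mean over {h_<i,u>} (Eq. 3)
        # each edge <u,v> reads the aggregate stored at its source node u
        return F.leaky_relu(self.W(h) + self.U(agg[src]))

# Initial messages (Eq. 1): concatenate source-node, edge, and target-node features
x_u, x_e, x_v = torch.randn(7, 4), torch.randn(7, 2), torch.randn(7, 4)
h0 = torch.cat([x_u, x_e, x_v], dim=-1)  # (E=7, d0=10)
```

Stacking several such layers, with one parameter set shared by all query graphs and another shared by all input graphs, produces the final edge representations \(\mathbf{h}_{\langle u,v\rangle}\) used in the modules below.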
### Query-Conditioned Graph Modulation Beyond the edge level, Count-GNN fuses the edge representations into a whole-graph representation to facilitate structure matching between query and input graphs. **Query graph representation.** We employ a typical readout function [23, 14, 15] on a query graph, by aggregating all edge representations in the query. Given a query graph \(\mathcal{Q}\), its whole-graph representation is computed as \[\mathbf{h}_{\mathcal{Q}}=\sigma(\mathbf{Q}\cdot\textsc{Aggr}(\{\mathbf{h}_{ \langle u,v\rangle}|\langle u,v\rangle\in E_{\mathcal{Q}}\})), \tag{4}\] where \(\mathbf{Q}\in\mathbb{R}^{d\times d}\) is a learnable weight matrix shared by all query graphs, and we use sum for the aggregation. Intuitively, the query graph representation simply pools all edge representations together uniformly. **Input graph representation.** To generate a whole-graph representation for the input graph, a straightforward way is to follow Eq. (4) by regarding all edges uniformly. However, for an input graph, the space of possible query graphs is enormous. In particular, different queries are often characterized by distinct structures, which implies that different parts of the input graph will be triggered to match different queries. Therefore, aggregating all edges in the input graph uniformly cannot retain sufficiently specific structural properties w.r.t. each query. In other words, using a fixed whole-graph representation for the input graph cannot tailor to each query well for effective structure matching. Thus, we propose to modulate the input graph conditioned on the query, to adapt the whole-graph representation of the input graph to each query. To this end, we leverage Feature-wise Linear Modulation (FiLM) [14, 15, 16] on the edge representations in the input graph, conditioned on the query, in order to retain query-specific structures. The modulation is essentially a scaling and shifting transformation to adapt the edge representations of the input graph to the query. Given an input graph \(\mathcal{G}\), for each edge \(e=\langle u,v\rangle\in E_{\mathcal{G}}\) we modulate its representation \(\mathbf{h}_{\langle u,v\rangle}\) into \(\tilde{\mathbf{h}}_{\langle u,v\rangle}\), as follows. \[\tilde{\mathbf{h}}_{\langle u,v\rangle}=(\gamma_{\langle u,v\rangle}+\mathbf{ 1})\odot\mathbf{h}_{\langle u,v\rangle}+\beta_{\langle u,v\rangle}, \tag{5}\] where \(\gamma_{\langle u,v\rangle}\) and \(\beta_{\langle u,v\rangle}\in\mathbb{R}^{d}\) are FiLM factors for scaling and shifting, respectively, \(\odot\) denotes the Hadamard product, and \(\mathbf{1}\in\mathbb{R}^{d}\) is a vector filled with ones to center the scaling factor around one. Note that the FiLM factors \(\gamma_{\langle u,v\rangle}\) and \(\beta_{\langle u,v\rangle}\) are not directly learnable, but are instead generated by a secondary network [16] conditioned on the original edge representation \(\mathbf{h}_{\langle u,v\rangle}\) and the query representation \(\mathbf{h}_{\mathcal{Q}}\). 
More specifically, \[\gamma_{\langle u,v\rangle} =\sigma(\mathbf{W}_{\gamma}\mathbf{h}_{\langle u,v\rangle}+ \mathbf{U}_{\gamma}\mathbf{h}_{\mathcal{Q}}+\mathbf{b}_{\gamma}), \tag{6}\] \[\beta_{\langle u,v\rangle} =\sigma(\mathbf{W}_{\beta}\mathbf{h}_{\langle u,v\rangle}+ \mathbf{U}_{\beta}\mathbf{h}_{\mathcal{Q}}+\mathbf{b}_{\beta}), \tag{7}\] where \(\mathbf{W}_{\gamma},\mathbf{U}_{\gamma},\mathbf{W}_{\beta},\mathbf{U}_{\beta} \in\mathbb{R}^{d\times d}\) are learnable weight matrices, and \(\mathbf{b}_{\gamma},\mathbf{b}_{\beta}\in\mathbb{R}^{d}\) are learnable bias vectors. The modulated edge representations can be further fused via a readout function, to generate a modulated whole-graph representation for the input graph, which is tailored toward each query to enable more precise matching between the input graph and query. Concretely, consider a query graph \(\mathcal{Q}\) and an input graph \(\mathcal{G}\). We formulate the \(\mathcal{Q}\)-conditioned representation for \(\mathcal{G}\), denoted \(\mathbf{h}_{\mathcal{G}}^{\mathcal{Q}}\in\mathbb{R}^{d}\), by aggregating the modulated edge representations of \(\mathcal{G}\) in the following. \[\mathbf{h}_{\mathcal{G}}^{\mathcal{Q}}=\sigma(\mathbf{G}\cdot\textsc{Aggr}( \{\tilde{\mathbf{h}}_{\langle u,v\rangle}|\langle u,v\rangle\in E_{\mathcal{G }}\})), \tag{8}\] where \(\mathbf{G}\in\mathbb{R}^{d\times d}\) is a learnable weight matrix shared by all input graphs, and we use sum for the aggregation. ### Counter Module and Overall Objective With the whole-graph representations of the query and input graphs, we design a counter module to estimate the count of subgraph isomorphisms, and formulate the overall objective. **Counter module.** We estimate the count of isomorphisms based on the structure matchability between the query and input graph. Given the query graph \(\mathcal{Q}\) and input graph \(\mathcal{G}\), we predict the number of subgraphs on \(\mathcal{G}\) which are isomorphic to \(\mathcal{Q}\) by \[\hat{n}(\mathcal{Q},\mathcal{G})=\textsc{ReLU}(\mathbf{w}^{\top}\textsc{Match }(\mathbf{h}_{\mathcal{Q}},\mathbf{h}_{\mathcal{G}}^{\mathcal{Q}})+b), \tag{9}\] where \(\textsc{Match}(\cdot,\cdot)\) outputs a \(d_{m}\)-dimensional vector to represent the matchability between its arguments, and \(\mathbf{w}\in\mathbb{R}^{d_{m}},b\in\mathbb{R}\) are the learnable weight vector and bias, respectively. Here a ReLU activation is used to ensure that the prediction is non-negative. Note that \(\textsc{Match}(\cdot,\cdot)\) can be any function--we adopt a fully connected layer (FCL) such that \(\textsc{Match}(\mathbf{x},\mathbf{y})=\textsc{FCL}(\mathbf{x}\parallel \mathbf{y}\parallel\mathbf{x}-\mathbf{y}\parallel\mathbf{x}\odot\mathbf{y})\). Figure 2: Overall framework of Count-GNN. **Overall objective.** Based on the counter module, we formulate the overall training loss. Assume a set of training triples \(\mathcal{T}=\{(\mathcal{Q}_{i},\mathcal{G}_{i},n_{i})\mid i=1,2,\ldots\}\), where \(n_{i}\) is the ground truth count for query \(\mathcal{Q}_{i}\) and input graph \(\mathcal{G}_{i}\). The ground truth can be evaluated by classical exact algorithms [13]. 
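A companion sketch of the modulation and counting steps (Eqs. (5)-(9)) in the same style is given below; each pair of weight matrices, e.g., \((\mathbf{W}_{\gamma},\mathbf{U}_{\gamma})\), is merged into one linear layer over concatenated inputs, which is algebraically equivalent, and LeakyReLU is reused for \(\sigma\) as in Eq. (2). All names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLMCounter(nn.Module):
    """Query-conditioned modulation (Eqs. 5-7), readout (Eq. 8) and counter (Eq. 9)."""
    def __init__(self, d, d_m):
        super().__init__()
        self.film_gamma = nn.Linear(2 * d, d)  # [h_<u,v> || h_Q] -> gamma (Eq. 6)
        self.film_beta = nn.Linear(2 * d, d)   # [h_<u,v> || h_Q] -> beta  (Eq. 7)
        self.G = nn.Linear(d, d)               # readout weight for the input graph (Eq. 8)
        self.fcl = nn.Linear(4 * d, d_m)       # Match(x,y) = FCL(x || y || x-y || x*y)
        self.w = nn.Linear(d_m, 1)             # w^T Match(.,.) + b

    def forward(self, h_edges, h_query):
        # h_edges: (E, d) edge representations of the input graph G
        # h_query: (d,)   whole-graph representation h_Q of the query
        hq = h_query.expand(h_edges.size(0), -1)
        pair = torch.cat([h_edges, hq], dim=-1)
        gamma = F.leaky_relu(self.film_gamma(pair))
        beta = F.leaky_relu(self.film_beta(pair))
        h_mod = (gamma + 1.0) * h_edges + beta            # Eq. (5)
        h_graph = F.leaky_relu(self.G(h_mod.sum(dim=0)))  # Eq. (8), sum readout
        x, y = h_query, h_graph
        m = self.fcl(torch.cat([x, y, x - y, x * y]))     # matchability vector
        return F.relu(self.w(m)), gamma, beta             # predicted count >= 0
```

The returned FiLM factors can then be penalized by the regularizer introduced next, alongside the absolute counting error.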
Subsequently, we minimize the following loss:

\[\frac{1}{|\mathcal{T}|}\sum_{(\mathcal{Q}_{i},\mathcal{G}_{i},n_{i})\in\mathcal{T}}|\hat{n}(\mathcal{Q}_{i},\mathcal{G}_{i})-n_{i}|+\lambda\mathcal{L}_{\text{FiLM}}+\mu\|\Theta\|_{2}^{2}, \tag{10}\]

where \(\mathcal{L}_{\text{FiLM}}\) is a regularizer on the FiLM factors, \(\|\Theta\|_{2}^{2}\) is an L2 regularizer on the model parameters, and \(\lambda,\mu\) are hyperparameters to control the weight of the regularizers. Specifically, the FiLM regularizer is designed to smooth the modulations to reduce overfitting, by encouraging less scaling and shifting as follows.

\[\mathcal{L}_{\text{FiLM}}=\sum_{(\mathcal{Q}_{i},\mathcal{G}_{i},n_{i})\in\mathcal{T}}\sum_{(u,v)\in E_{\mathcal{G}_{i}}}\|\gamma_{(u,v)}\|_{2}^{2}+\|\beta_{(u,v)}\|_{2}^{2}. \tag{11}\]

We also present the training algorithm and a complexity analysis in Appendix A.

### Theoretical Analysis of Count-GNN

The proposed Count-GNN capitalizes on edge-centric message passing, which is fundamentally more powerful than conventional node-centric counterparts. This conclusion can be theoretically shown by the lemma and theorem below.

**Lemma 1** (Generalization). _Count-GNN can be reduced to a node-centric GNN, i.e., Count-GNN can be regarded as a generalization of the latter._

In short, Count-GNN can be reduced to a node-centric GNN by removing some input information and merging some edge representations. This demonstrates that Count-GNN is at least as powerful as node-centric GNNs. We present the proof of Lemma 1 in Appendix B.

**Theorem 1** (Expressiveness). _Count-GNN is more powerful than node-centric GNNs, which means (i) for any two non-isomorphic graphs that can be distinguished by a node-centric GNN, they can also be distinguished by Count-GNN; and (ii) there exist two non-isomorphic graphs that can be distinguished by Count-GNN but not by a node-centric GNN._

Intuitively, edge-centric GNNs are capable of capturing fine-grained structural information, as any node can be viewed as a collapse of the edges around the node. Therefore, by treating edges as the first-class citizens, Count-GNN becomes more powerful. The proof of Theorem 1 can be found in Appendix B.

## 5 Experiments

In this section, we empirically evaluate the proposed model Count-GNN in comparison to the state of the art.

### Experimental Setup

**Datasets.** We conduct the evaluation on four datasets shown in Table 1. In particular, _SMALL_ and _LARGE_ are two synthetic datasets, which are generated by the query and graph generators presented by a previous study [15]. On the other hand, _MUTAG_ [11] and _OGB-PPA_ [12] are two real-world datasets. In particular, MUTAG consists of 188 nitro compound graphs, and OGB-PPA consists of 6,000 protein association graphs. While graphs in MUTAG and OGB-PPA are taken as our input graphs, we use the query generator [15] to generate the query graphs. As each dataset consists of multiple query and input graphs, we couple each query graph \(\mathcal{Q}\) with an input graph \(\mathcal{G}\) to form a training triple \((\mathcal{Q},\mathcal{G},n)\), with \(n\) denoting the ground-truth count given by the exact algorithm VF2 [13]. More details of the datasets are given in Appendix C.

**Baselines.** We compare Count-GNN with the state-of-the-art approaches in two main categories. We provide further details and settings for the baselines in Appendix D. (1) _Conventional GNNs_: GCN [12], GAT [21], GraphSAGE [10], DPGCNN [14], GIN [15], and DiffPool [16].
They capitalize on node-centric message passing, followed by a readout function to obtain the whole-graph representation. Except for DiffPool, which utilizes a specialized hierarchical readout, we employ a sum pooling over the node representations for the readout in the other GNNs. (2) _GNN-based isomorphism counting models_, including four variants proposed by [15], namely RGCN-DN, RGCN-Sum, RGIN-DN, RGIN-Sum, as well as LRP [17] and DMPNN-LRP [15], a better variant of DMPNN. They are purposely designed GNNs for subgraph isomorphism counting, relying on different GNNs such as RGCN [10], RGIN [15] and local relational pooling [1] for node representation learning, followed by a specialized readout suited for isomorphism matching. In particular, the two variants RGCN-DN and RGIN-DN utilize DiamNet [15], whereas RGCN-Sum and RGIN-Sum utilize the simple sum-pooling. Finally, we also include a classical approach VF2 [13] and a state-of-the-art approach Peregrine Jamshidi et al. (2020), both of which evaluate the exact counts. Note that there are many other exact approaches, but they are not suitable baselines due to their lack of generality. In particular, many approaches have algorithmic limitations such that they can only process small queries, _e.g._, up to 4 nodes in PGD Ahmed et al. (2015), 5 nodes in RAGE Marcus and Shavitt (2012) and ORCA Hocevar and Demsar (2014), and 6 nodes in acc-Motif Meira et al. (2014). More broadly speaking, since the counting task is #P-complete, any exact algorithm is bound to suffer from prohibitively high running time once the queries become just moderately large. Besides, some approaches cannot handle certain kinds of queries or input graphs. For example, SCMD Wang et al. (2012) and PATCOMP Jain et al. (2017) are not applicable to directed graphs, and PGD Ahmed et al. (2015) and RAGE Marcus and Shavitt (2012) do not handle graphs with node or edge labels. Although Peregrine does not support directed edges or edge labels either, we run it on our datasets by ignoring edge directions/labels if any, and focus on its time cost only. In addition, there are also a number of statistically approximate methods, but they have similar shortcomings. For example, tMotivo and L8Motif Bressan et al. (2021) can only handle query graphs with no more than 8 or 16 nodes.

\begin{table}
\begin{tabular}{l|r r r r} \hline \hline & SMALL & LARGE & MUTAG & OGB-PPA \\ \hline \# Queries & 75 & 122 & 24 & 12 \\ \# Graphs & 6,790 & 3,240 & 188 & 6,000 \\ \# Triples & 448,140 & 395,280 & 4,512 & 57,940 \\ \(\mathsf{Avg}(|V_{\mathcal{Q}}|)\) & 5.20 & 8.43 & 3.50 & 4.50 \\ \(\mathsf{Avg}(|E_{\mathcal{Q}}|)\) & 6.80 & 12.23 & 2.50 & 4.75 \\ \(\mathsf{Avg}(|V_{\mathcal{G}}|)\) & 32.62 & 239.94 & 17.93 & 152.75 \\ \(\mathsf{Avg}(|E_{\mathcal{G}}|)\) & 76.34 & 559.68 & 39.58 & 1968.29 \\ \(\mathsf{Avg}(\text{Counts})\) & 14.83 & 34.42 & 17.76 & 13.83 \\ \(\mathsf{Max}(|L|)\) & 16 & 64 & 7 & 8 \\ \(\mathsf{Max}(|L^{\prime}|)\) & 16 & 64 & 4 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of datasets.

**Settings and parameters.** For the SMALL and LARGE datasets, we randomly sample 5000 triples for training, 1000 for validation, and the rest for testing. For MUTAG, due to its small size, we randomly sample 1000 triples for training, 100 for validation, and the rest for testing. For OGB-PPA, we divide the triples into training, validation and testing sets with a proportion of 4:1:5. We also evaluate the impact of training size on the model performance in Appendix F.
We report further model and parameter settings of Count-GNN in Appendix E.

**Evaluation.** We employ mean absolute error (MAE) and Q-error Zhao et al. (2021) to evaluate the effectiveness of Count-GNN. The widely used metric MAE measures the magnitude of error in the prediction. In addition, Q-error measures a relative error defined by \(\max(\frac{\hat{n}}{n},\frac{n}{\hat{n}})\), where \(n\) denotes the ground-truth count and \(\hat{n}\) denotes the predicted count.1 Both metrics are better when smaller: the best MAE is 0 while the best Q-error is 1. We further report the inference time for all the approaches to evaluate their efficiency in answering queries, as well as their training time. We repeat all experiments with five runs, and report their average results and standard deviations.

Footnote 1: If the ground-truth or predicted count is less than 1, we assume a pseudocount of 1 for the calculation of Q-error.

### Performance Evaluation

To comprehensively evaluate the performance, we compare Count-GNN with the baselines in two settings: (1) a main setting with triples generated by all the query graphs and input graphs; (2) a secondary setting with triples generated by all input graphs associated with only one query graph. Note that the main setting represents a more general scenario, in which we compare with all baselines except LRP. However, due to the particular design of LRP, which requires a number of input graphs coupled with one query graph and the corresponding ground-truth counts during training, we use the secondary setting only for this baseline. Our model can flexibly work in both settings.

**Testing with main setting.** As discussed, we compare Count-GNN with all baselines except LRP in this more general scenario, where the triples are generated by coupling every pair of query and input graphs. We report the results in Table 2, and compare the aspects of effectiveness and efficiency. In terms of _effectiveness_ measured by MAE and Q-error, Count-GNN can generally outperform other GNN-based models. In the several cases where Count-GNN is not the best among the GNN-based models, it still emerges as a competitive runner-up. This demonstrates that the two key modules of Count-GNN, namely edge-centric aggregation and query-conditioned graph modulation, can improve structure matching between input graphs and structurally diverse queries. In terms of _efficiency_ measured by the query time, we make three observations.
First, Count-GNN achieves 65x\(\sim\)324x speedups over the classical VF2. While the other exact method Peregrine is orders of magnitude faster than VF2, Count-GNN can still achieve 8x\(\sim\)26x speedups over Peregrine. Second, Count-GNN is also more efficient than other GNN-based isomorphism counting models. On the one hand, Count-GNN achieves 3.1x\(\sim\)6.5x speedups over DMPNN-LRP, which is generally the second best GNN-based method after Count-GNN in terms of effectiveness.

Table 2: Evaluation in the main setting (MAE \(\downarrow\), Q-error \(\downarrow\) and query time in seconds \(\downarrow\) for each method on SMALL, LARGE, MUTAG and OGB-PPA).
On the other hand, Count-GNN also obtains consistent and notable speedups over the fastest RGCN/RGIN variant (_i.e._, RGIN-Sum), while reducing the errors by 20% or more in most cases. Finally, although many conventional GNNs can achieve a comparable or faster query time, their efficiency comes at the expense of much worse errors than Count-GNN, by at least 30% in most cases. Furthermore, we compare the training time of GNN-based approaches in Table 3. Our proposed Count-GNN generally requires relatively low training time on SMALL and MUTAG, while having comparable training time to the baselines on LARGE and OGB-PPA.

**Testing with secondary setting.** We also generate another group of triples for comparison with the baseline LRP, in the so-called secondary setting due to the design requirement of LRP. In particular, we only evaluate on three datasets SMALL, LARGE and MUTAG, as LRP runs out of memory on OGB-PPA with large and dense graphs. For each dataset we select three query graphs of different sizes (see Appendix C). On each dataset, we couple each query graph with all the input graphs, thus forming 6790/3240/188 triples for each query in SMALL/LARGE/MUTAG, respectively. Besides, we split the triples of SMALL and LARGE in the ratio of 1:1:2 for training, validation and testing, while using the ratio of 1:1:1 for MUTAG. The results are reported in Table 4. We observe that Count-GNN consistently outperforms LRP in terms of effectiveness, significantly reducing MAE by 43% and Q-error by 64% on average. This verifies again the power of the two key modules in Count-GNN. For efficiency, neither Count-GNN nor LRP emerges as the clear winner.

**Summary.** Compared to existing GNN-based models, Count-GNN generally achieves significant error reductions with comparable or faster training and inference, advancing the research of GNN-based isomorphism counting.

### Model Analysis

We further investigate various aspects of Count-GNN on the three datasets SMALL, LARGE and MUTAG.

**Ablation Study.** To evaluate the impact of each module in Count-GNN, we conduct an ablation study by comparing Count-GNN with its two degenerate variants: (1) Count-GNN\(\backslash\)E, which replaces the edge-centric aggregation with the node-centric GIN; (2) Count-GNN\(\backslash\)M, which replaces the query-conditioned modulation with a simple sum-pooling as the readout for the input graph. We report the results in Table 5. Not surprisingly, the full model generally outperforms the two variants, further demonstrating the benefit of edge-centric aggregation and query-conditioned modulation. Furthermore, Count-GNN\(\backslash\)M is usually better than Count-GNN\(\backslash\)E, which implies that edge-centric aggregation may contribute more to the performance boost, possibly due to its more central role in capturing fine-grained structural information by treating edges as the first-class citizens.

**Scalability, impact of parameters and training size.** We present these results in Appendix F due to space limitations.

## 6 Conclusions

In this paper, we proposed a novel model called Count-GNN to approximately solve subgraph isomorphism counting on labeled graphs. In terms of modelling, we designed two key modules for Count-GNN, namely, edge-centric message passing and query-conditioned graph modulation, to improve structure matching between the query and input graphs. In terms of theory, we showed that edge-centric message passing is more expressive than its node-centric counterpart.
In terms of empirical results, we conducted extensive experiments on several benchmark datasets to demonstrate the effectiveness and efficiency of Count-GNN. \begin{table} \begin{tabular}{c c|c c|c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{SMALL} & \multicolumn{3}{c|}{LARGE} & \multicolumn{3}{c}{MUTAG} \\ & \multicolumn{1}{c}{MAE} & Q-err & Time/s & \multicolumn{1}{c}{MAE} & Q-err & Time/s & \multicolumn{1}{c}{MAE} & Q-err & Time/s \\ \hline \multirow{2}{*}{\(\mathcal{Q}_{1}\)} & LRP & 11.5 & 3.6 & 0.13 & 126.1 & 38.3 & **0.04** & 12.3 & 2.1 & 0.01 \\ & Count-GNN & **3.0** & **1.4** & **0.04** & **111.2** & **2.9** & 0.22 & **2.5** & **1.2** & **0.00** \\ \hline \multirow{2}{*}{\(\mathcal{Q}_{2}\)} & LRP & 12.6 & 4.6 & 0.12 & 19.8 & 3.7 & **0.04** & 7.8 & 2.9 & **0.01** \\ & Count-GNN & **4.6** & **1.1** & **0.05** & **4.3** & **1.1** & 0.07 & **5.0** & **2.1** & **0.01** \\ \hline \multirow{2}{*}{\(\mathcal{Q}_{3}\)} & LRP & 31.5 & 4.1 & 0.05 & 87.2 & 7.1 & **0.04** & 8.3 & 2.8 & 0.01 \\ & Count-GNN & **23.2** & **1.3** & **0.03** & **58.0** & **1.8** & 0.08 & **4.3** & **1.8** & **0.01** \\ \hline \multirow{2}{*}{Avg} & LRP & 18.5 & 4.1 & 0.10 & 77.7 & 16.4 & **0.04** & 9.5 & 2.6 & 0.01 \\ & Count-GNN & **10.3** & **1.3** & **0.04** & **57.8** & **1.9** & 0.12 & **3.9** & **1.7** & 0.01 \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation in the secondary setting. Time refers to the total inference time on all test triples, in seconds. The better method for each query is bolded. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Methods & SMALL & LARGE & MUTAG & OGB-PPA \\ \hline GCN & 0.8 & 1.1 & 0.35 & 1.0 \\ GraphSAGE & 0.8 & 1.0 & 0.35 & 0.9 \\ GAT & 1.0 & 2.4 & 0.39 & 1.5 \\ DiffPool & 0.8 & 1.3 & 0.34 & 1.2 \\ GIN & 0.5 & 1.0 & 0.19 & 0.7 \\ RGCN-SUM & 1.0 & 2.4 & 0.38 & 1.7 \\ RGCN-DN & 1.5 & 3.7 & 0.50 & 2.2 \\ RGIN-SUM & 0.7 & 1.9 & 0.23 & 1.3 \\ RGIN-DN & 1.4 & 2.9 & 0.39 & 1.8 \\ Count-GNN & 0.4 & 2.5 & 0.04 & 1.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of training time per epoch, in seconds. ## Acknowledgments This work was supported in part by the National Key Research and Development Program of China under Grant 2020YFB2103803. Dr. Yuan Fang acknowledges the Lee Kong Chian Fellowship awarded by Singapore Management University for the support of this work. The authors wish to thank Dr. Yuchen Li from Singapore Management University for his valuable comments on this work.
2310.16491
TSONN: Time-stepping-oriented neural network for solving partial differential equations
Deep neural networks (DNNs), especially physics-informed neural networks (PINNs), have recently become a new popular method for solving forward and inverse problems governed by partial differential equations (PDEs). However, these methods still face challenges in achieving stable training and obtaining correct results in many problems, since minimizing PDE residuals with PDE-based soft constraint make the problem ill-conditioned. Different from all existing methods that directly minimize PDE residuals, this work integrates time-stepping method with deep learning, and transforms the original ill-conditioned optimization problem into a series of well-conditioned sub-problems over given pseudo time intervals. The convergence of model training is significantly improved by following the trajectory of the pseudo time-stepping process, yielding a robust optimization-based PDE solver. Our results show that the proposed method achieves stable training and correct results in many problems that standard PINNs fail to solve, requiring only a simple modification on the loss function. In addition, we demonstrate several novel properties and advantages of time-stepping methods within the framework of neural network-based optimization approach, in comparison to traditional grid-based numerical method. Specifically, explicit scheme allows significantly larger time step, while implicit scheme can be implemented as straightforwardly as explicit scheme.
Wenbo Cao, Weiwei Zhang
2023-10-25T09:19:40Z
http://arxiv.org/abs/2310.16491v1
# TSONN: Time-stepping-oriented neural network for solving partial differential equations

###### Abstract

Deep neural networks (DNNs), especially physics-informed neural networks (PINNs), have recently become a new popular method for solving forward and inverse problems governed by partial differential equations (PDEs). However, these methods still face challenges in achieving stable training and obtaining correct results in many problems, since minimizing PDE residuals with PDE-based soft constraint make the problem ill-conditioned. Different from all existing methods that directly minimize PDE residuals, this work integrates time-stepping method with deep learning, and transforms the original ill-conditioned optimization problem into a series of well-conditioned sub-problems over given pseudo time intervals. The convergence of model training is significantly improved by following the trajectory of the pseudo time-stepping process, yielding a robust optimization-based PDE solver. Our results show that the proposed method achieves stable training and correct results in many problems that standard PINNs fail to solve, requiring only a simple modification on the loss function. In addition, we demonstrate several novel properties and advantages of time-stepping methods within the framework of neural network-based optimization approach, in comparison to traditional grid-based numerical method. Specifically, explicit scheme allows significantly larger time step, while implicit scheme can be implemented as straightforwardly as explicit scheme.

Keywords: Deep learning; Physics-informed neural network; Pseudo time-stepping; Ill-conditioned.

## 1 Introduction

As a scientific machine learning technique, deep neural networks (DNNs), represented by physics-informed neural networks (PINNs) [1], have recently been widely used to solve forward and inverse problems involving partial differential equations (PDEs). By minimizing the losses of PDE residuals, boundary conditions and initial conditions simultaneously, the solution can be obtained straightforwardly without meshing, spatial discretization, or complicated programming. The concept of PINNs can be traced back to the 1990s, when neural algorithms for solving differential equations were proposed [2-6]. With the significant progress in deep learning and computation capability, a variety of PINN models have been proposed in the past few years, and have achieved remarkable results across a range of problems in computational science and engineering [7-10]. As a typical optimization-based PDE solver, PINNs offer a natural approach to solving PDE-constrained optimization problems. A promising application is in flow visualization technology [11-13], where the flow fields can be easily inferred from sparse observations such as concentration fields and images. Such inverse problems are difficult for traditional PDE solvers. Moreover, several recent works [14-17] have demonstrated the significant advantages of PINNs in inverse design and optimal control, where solving the PDE and obtaining the optimal design are pursued simultaneously. In contrast, the most popular method, direct-adjoint looping (DAL), involves hundreds or thousands of repeated evaluations of the forward PDE, yielding prohibitively expensive computational cost in solving PDE-constrained optimization problems.
Despite the potential for a wide range of physical phenomena and applications, training PINN models still encounters challenges [7], and the current generation of PINNs is not as accurate or as efficient as traditional numerical methods for solving many forward problems [18, 19]. There have been some efforts to improve PINN trainability, which include adaptively balancing the weights of loss components during training [20-23], enforcing boundary conditions [6, 17, 24], transforming the PDE to its weak form [25-27], adaptive sampling [28, 29], spatial-temporal decomposition [30-32], using both automatic differentiation and numerical differentiation to calculate derivatives [33], and so on. Nevertheless, PINNs still struggle to achieve stable training and obtain correct results in many problems, since minimizing PDE residuals with PDE-based soft constraints makes the problem ill-conditioned [34]. Considering this, different from all existing methods that directly minimize PDE residuals, we integrate the time-stepping method with deep learning, and transform the original optimization problem into a series of better-conditioned problems over given pseudo time intervals.

In computational physics, time-stepping is a classical and widely used method to solve both time-independent and time-dependent problems governed by PDEs. For time-independent problems \(\mathcal{N}[u(x)]=0\), a pseudo time derivative is introduced, leading to a time-dependent problem

\[\frac{\partial u}{\partial\tau}=\mathcal{N}[u(x)] \tag{1}\]

To obtain the steady-state solution, we start from an arbitrary initial condition and march the solution to a sufficiently large pseudo time under the given boundary conditions (Equation (2)). In this case, the final steady state is the desired result, and time-stepping is simply a means to reach this state. Although it seems inefficient to introduce time as another independent variable, sometimes this is the only way to have a well-posed problem [35, 36] and hence is the most robust way to obtain the steady-state solution, especially in computational fluid dynamics (CFD).

\[\begin{split} u_{n+1}(x)&=u_{n}(x)+\Delta\tau\cdot\mathcal{N}[u_{n}(x)]\\ u_{n+1}(x)&=u_{n}(x)+\Delta\tau\cdot\mathcal{N}[u_{n+1}(x)]\end{split} \tag{2}\]

For time-dependent problems \(\mathcal{N}[u(x,t)]=0\), a pseudo time derivative can also be introduced in each physical time step, and one can then compute the solution in steps of pseudo time until convergence, which is the classic dual-time method in CFD [37]. In this paper, we use a neural network (NN) to continuously approximate the label \(u_{n+1}(x)\) from explicit or implicit pseudo time-stepping, so that the training follows the pseudo time-stepping trajectory until it converges to the desired solution.

The remainder of the paper is organized as follows. In Section 2, the basic framework of PINNs is introduced and then the TSONN method is proposed. Following that, we report numerical results on various problems that standard PINNs fail to solve in Section 3. Finally, the paper is concluded in Section 4.

## 2 Methodology

### 2.1 Physics-informed neural networks

In this section, we briefly introduce the PINNs methodology. A typical PINN uses a fully connected DNN architecture to represent the solution \(u\) of the dynamical system. The network takes the spatial \(x\in\Omega\) and temporal \(t\in[0,T]\) coordinates as the input and outputs the approximate solution \(\hat{u}(x,t;\boldsymbol{\theta})\).
The spatial domain usually has 1, 2 or 3 dimensions in most physical problems, and the temporal domain may not exist for steady (time-independent) problems. The accuracy of the PINN outputs is determined by the network parameters \(\boldsymbol{\theta}\), which are optimized with respect to the PINN loss function during the training. To derive the PINN loss function, we consider \(u\) to be mathematically described by differential equations of the general form:

\[\begin{split}\mathcal{N}[u(x,t)]&=0,x\in\Omega,t\in(0,T]\\ \mathcal{I}[u(x,0)]&=0,x\in\Omega\\ \mathcal{B}[u(x,t)]&=0,x\in\partial\Omega,t\in(0,T]\end{split} \tag{3}\]

where \(\mathcal{N}[\cdot]\), \(\mathcal{I}[\cdot]\) and \(\mathcal{B}[\cdot]\) are the PDE operator, the initial condition operator, and the boundary condition operator, respectively. Then the PINN training loss function is defined as

\[\mathcal{L}=\mathcal{L}_{PDE}+\lambda_{BC}\mathcal{L}_{BC}+\lambda_{IC}\mathcal{L}_{IC} \tag{4}\]

\[\mathcal{L}_{PDE}=\big\|\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\theta})]\big\|_{\Omega\times(0,T]}^{2}\]
\[\mathcal{L}_{IC}=\big\|\mathcal{I}[\hat{u}(\cdot,0;\boldsymbol{\theta})]\big\|_{\Omega}^{2}\]
\[\mathcal{L}_{BC}=\big\|\mathcal{B}[\hat{u}(\cdot;\boldsymbol{\theta})]\big\|_{\partial\Omega\times(0,T]}^{2}\]

The relative weights \(\lambda_{BC}\) and \(\lambda_{IC}\) in Equation (4) control the trade-off between different components in the loss function. The PDE loss is computed over a finite set of \(m\) collocation points \(D=\{x_{i},t_{i}\}_{i=1}^{m}\) during training, along with the boundary condition loss and initial condition loss. The gradients in the loss are computed via automatic differentiation [38].

### 2.2 Time-stepping method

Time-stepping methods can be generally classified into explicit and implicit schemes. In an explicit scheme, each difference equation contains only one unknown and can therefore be solved in a straightforward manner. An implicit scheme is one where the unknowns must be obtained by means of a simultaneous solution of the difference equations applied at all the grid points at a given time level (Equation (2)). Thus, an implicit scheme usually involves local linearization of the nonlinear operator \(\mathcal{N}\) and simultaneously solves a large system of algebraic equations, resulting in a wide variety of CFD techniques [39]. In contrast, an explicit scheme is simple to implement, but its time step \(\Delta\tau\) is largely limited by stability constraints, thus resulting in long computer running times over a given time interval. Compared to an explicit scheme, an implicit scheme is more complicated and requires larger computational cost and memory per time step, while remaining stable with a much larger \(\Delta\tau\), and can be more efficient in CFD. For more details on time-stepping methods, please refer to [35].

In this study, we introduce pseudo time derivatives for both time-independent and time-dependent problems. Therefore, for the \(n\)th optimization step of the neural network \(\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})\), the explicit and implicit time-stepping schemes are represented in Equation (5). To avoid confusion, we use \(\boldsymbol{\theta}\) to represent the network parameters being optimized, and \(\boldsymbol{\vartheta}_{n}\) to represent the replica of \(\boldsymbol{\theta}\) at the \(n\)th step.
\[\begin{split}\tilde{u}_{n+1}&=\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})+\Delta\tau\cdot\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})]\\ \tilde{u}_{n+1}&=\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})+\Delta\tau\cdot\mathcal{N}[\tilde{u}_{n+1}]\end{split} \tag{5}\]

### 2.3 Time-stepping-oriented neural network

In TSONN, the network takes the same input and output as PINNs, but the PDE residual loss \(\mathcal{L}_{PDE}\) is replaced by \(\mathcal{L}_{eTS}\) or \(\mathcal{L}_{iTS}\), the approximation loss to the label \(\tilde{u}_{n+1}\) based on either the explicit or the implicit scheme. Boundary conditions and initial conditions remain soft constraints as in PINNs. At the \(n\)th optimization step, the approximation loss \(\mathcal{L}_{eTS}\) of the explicit time-stepping-oriented neural network (eTSONN) is defined as

\[\begin{split}\mathcal{L}_{eTS}&=1/\Delta\tau^{2}\left\|\hat{u}(\cdot;\boldsymbol{\theta})-\tilde{u}_{n+1}\right\|_{\Omega\times(0,T]}^{2}\\ &=1/\Delta\tau^{2}\left\|\hat{u}(\cdot;\boldsymbol{\theta})-\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})-\Delta\tau\cdot\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})]\right\|_{\Omega\times(0,T]}^{2}\end{split} \tag{6}\]

The factor \(1/\Delta\tau^{2}\) in \(\mathcal{L}_{eTS}\) avoids a significant impact of the pseudo time step \(\Delta\tau\) on the relative weights of the different components in the loss function. To make the model training follow the trajectory of time-stepping, \(K\) steps are needed to adequately approximate \(\tilde{u}_{n+1}\). After \(K\) steps of optimizing \(\boldsymbol{\theta}\), we perform one step of explicit time-stepping, i.e.,

\[\begin{split}\boldsymbol{\vartheta}_{n+1}&=\boldsymbol{\theta}\\ \tilde{u}_{n+2}&=\hat{u}(\cdot;\boldsymbol{\vartheta}_{n+1})+\Delta\tau\cdot\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\vartheta}_{n+1})]\\ n&\leftarrow n+1\end{split} \tag{7}\]

and then we enter the next one. The optimization of \(\boldsymbol{\theta}\) by minimizing \(\mathcal{L}_{eTS}\) is called the inner iteration, while the stepping of \(n\) is called the outer iteration. Equation (6) shows that the NN is only used to approximate a label at each outer iteration. Such tasks are among the most basic applications of neural networks, which significantly reduces the difficulty of optimization compared with PINNs.

In the implicit time-stepping-oriented neural network (iTSONN), we use the NN \(\hat{u}(\cdot;\boldsymbol{\theta})\) to approximate \(\tilde{u}_{n+1}\), which satisfies the implicit equation

\[\tilde{u}_{n+1}=\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})+\Delta\tau\cdot\mathcal{N}[\tilde{u}_{n+1}] \tag{8}\]

Thus, the NN can simultaneously solve for and implicitly approximate \(\tilde{u}_{n+1}\) by minimizing

\[\begin{split}\mathcal{L}_{iTS}&=1/\Delta\tau^{2}\left\|\hat{u}(\cdot;\boldsymbol{\theta})-\tilde{u}_{n+1}\right\|_{\Omega\times(0,T]}^{2}\\ &=1/\Delta\tau^{2}\left\|\hat{u}(\cdot;\boldsymbol{\theta})-\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})-\Delta\tau\cdot\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\theta})]\right\|_{\Omega\times(0,T]}^{2}\end{split} \tag{9}\]

The iTSONN also requires inner iterations to adequately minimize \(\mathcal{L}_{iTS}\) in each outer iteration. After \(K\) steps of inner iteration to optimize \(\boldsymbol{\theta}\), \(\hat{u}(\cdot;\boldsymbol{\vartheta}_{n+1})\) is calculated with the latest \(\boldsymbol{\theta}\), and then we enter the next outer iteration. Equation (6) and Equation (9) are encouraging because of their unified form for both explicit and implicit schemes.
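Before embedding the schemes in a network, it helps to recall what a single explicit pseudo time-stepping trajectory looks like. The toy NumPy sketch below marches \(\mathcal{N}[u]=u_{xx}-f\) to its steady state with the explicit update of Equation (2); the forcing, grid, and step size are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Explicit pseudo time-stepping (Eq. 2) for a toy steady problem
# N[u] = u_xx - f with u(0) = u(1) = 0; the steady state solves N[u] = 0.
nx = 101
dx = 1.0 / (nx - 1)
dtau = 1e-5                       # must satisfy dtau <= dx^2 / 2 for stability
x = np.linspace(0.0, 1.0, nx)
f = np.sin(np.pi * x)
u = np.zeros(nx)                  # arbitrary initial condition; boundaries stay 0
for _ in range(200_000):
    res = np.zeros(nx)
    res[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 - f[1:-1]
    u += dtau * res               # u_{n+1} = u_n + dtau * N[u_n]
u_exact = -np.sin(np.pi * x) / np.pi**2
print(np.abs(u - u_exact).max())  # the march converges to the steady solution
```

Even for this trivial problem, the stable \(\Delta\tau\) is tiny, which is exactly the limitation that implicit schemes in traditional solvers, and the TSONN variants discussed below, aim to relax.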
In traditional numerical methods, an implicit scheme demands a simultaneous solution of a large system of nonlinear equations, which is an exceptionally difficult task. Therefore, in practice, the nonlinear equations must be locally linearized to form a large system of linear algebraic equations, which is solved with larger computational cost and memory requirements per time step. In contrast, in the NN-based optimization approach, as shown in Equation (9), the implementation of the implicit scheme is natural and as straightforward as the explicit scheme, and there is almost no additional computational cost or memory requirement. In addition, we observe that PINN is a special case of iTSONN when the pseudo time step \(\Delta\tau\) is large enough. Thus, compared to PINNs, iTSONN divides the original problem into a series of well-conditioned sub-problems over a given pseudo time step. A similar idea is also employed in Newton iteration, where a pseudo time step is introduced to generate a well-conditioned problem at each Newton iteration step [40].

```
Input: Initial \(\boldsymbol{\theta}\), \(\boldsymbol{\vartheta}_{1}=\boldsymbol{\theta}\), collocation points, outer iterations \(N\), inner iterations \(K\), pseudo time step \(\Delta\tau\).
1: for \(n=1,2,\cdots,N\) do
2:   for \(k=1,2,\cdots,K\) do
3:     1. Compute the total loss by
          \(\mathcal{L}(\boldsymbol{\theta})=\lambda_{BC}\mathcal{L}_{BC}+\lambda_{IC}\mathcal{L}_{IC}+\begin{cases}\mathcal{L}_{PDE}=\left\|\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\theta})]\right\|^{2}&\text{if PINN;}\\ \mathcal{L}_{eTS}=1/\Delta\tau^{2}\left\|\hat{u}(\cdot;\boldsymbol{\theta})-\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})-\Delta\tau\cdot\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})]\right\|^{2}&\text{if eTSONN;}\\ \mathcal{L}_{iTS}=1/\Delta\tau^{2}\left\|\hat{u}(\cdot;\boldsymbol{\theta})-\hat{u}(\cdot;\boldsymbol{\vartheta}_{n})-\Delta\tau\cdot\mathcal{N}[\hat{u}(\cdot;\boldsymbol{\theta})]\right\|^{2}&\text{if iTSONN.}\end{cases}\)
4:     2. Update the parameters \(\boldsymbol{\theta}\) via gradient descent \(\boldsymbol{\theta}\leftarrow\boldsymbol{\theta}-\eta\nabla_{\boldsymbol{\theta}}\mathcal{L}(\boldsymbol{\theta})\)
5:   end
6:   \(\boldsymbol{\vartheta}_{n+1}=\boldsymbol{\theta}\)
7: end
8: Output: \(\hat{u}(\cdot;\boldsymbol{\theta})\)
```
**Algorithm 1** Unified framework of eTSONN, iTSONN and PINNs

Algorithm 1 presents the unified framework of eTSONN, iTSONN and PINNs. The only difference lies in the loss function: the three methods have the same loss function value at the first step of each inner iteration, but their gradients with respect to \(\boldsymbol{\theta}\) are different, thus leading to different training trajectories. Note that for PINN, the outer iteration is not actually needed; the outer iteration of PINN in Algorithm 1 is only for the formal uniformity of the three methods, so the result of PINN only depends on the total number of iterations \(N\cdot K\). Because TSONN does not directly minimize the residual of the PDE, its convergence is enforced by making the training follow the pseudo time-stepping.

## 3 Results

In this section, TSONN is used to solve several benchmark problems where the standard PINN fails. Throughout all benchmarks, we employ the fully connected DNN architecture equipped with the hyperbolic tangent activation function (tanh). All training was performed on an Nvidia 4090 GPU.
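To fix ideas before the benchmarks, here is a minimal PyTorch-style sketch of the TSONN branches of Algorithm 1; `residual_fn` (which builds \(\mathcal{N}[\hat{u}]\) through the given network with automatic differentiation) and `bc_loss_fn` are problem-specific placeholders of ours, and Adam stands in for the generic gradient-descent update.

```python
import copy
import torch

def train_tsonn(model, residual_fn, bc_loss_fn, x, implicit=True,
                n_outer=300, k_inner=50, dtau=1.0, lam_bc=1.0, lr=1e-3):
    # residual_fn(net, x) -> N[u_hat] evaluated through `net`; bc_loss_fn(net) -> L_BC
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    frozen = copy.deepcopy(model)                     # replica holding vartheta_n
    for n in range(n_outer):                          # outer iterations
        u_n = frozen(x).detach()                      # u_hat(.; vartheta_n)
        if not implicit:                              # eTSONN: label fixed per outer step
            label = (u_n + dtau * residual_fn(frozen, x)).detach()
        for k in range(k_inner):                      # inner iterations on theta
            if implicit:                              # iTSONN: N[.] depends on current theta
                label = u_n + dtau * residual_fn(model, x)
            loss = ((model(x) - label) ** 2).mean() / dtau ** 2
            loss = loss + lam_bc * bc_loss_fn(model)
            opt.zero_grad()
            loss.backward()
            opt.step()
        frozen.load_state_dict(model.state_dict())    # vartheta_{n+1} = theta
    return model
```

Setting `implicit=False` gives the eTSONN branch, while the PINN baseline would instead minimize `residual_fn(model, x).pow(2).mean()` directly, with no outer loop.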
After training the model, we measure the relative \(L_{2}\) error between the predicted solution and the reference solution, defined as \(\left\|\hat{u}-u\right\|_{2}/\left\|u\right\|_{2}\). The code and data accompanying this manuscript will be made publicly available at [https://github.com/Cao-WenBo/TSONN](https://github.com/Cao-WenBo/TSONN).

### Laplace's equation

We first consider the two-dimensional Laplace's equation for solving irrotational, incompressible flow past a cylinder with a radius \(R_{wall}=0.5\); it is one of the most famous and extensively studied equations in mathematical physics.

\[\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}=0 \tag{10}\]

Equation (10) is Laplace's equation and also the velocity potential equation for incompressible fluid, where \(\phi\) is the velocity potential function and \(\mathbf{V}=(u,v)=(\phi_{x},\phi_{y})\) is the velocity vector. As shown in Figure 1(a), the flow approaches the uniform freestream conditions far away from the body. Hence, the boundary conditions on velocity in the far field are

\[\frac{\partial\phi}{\partial x}=u=V_{\infty},\quad\frac{\partial\phi}{\partial y}=v=0 \tag{11}\]

Because the flow cannot penetrate the wall, the velocity vector must be tangent to the surface, so the component of velocity normal to the surface must be zero. Let \(\mathbf{n}\) be a unit vector normal to the surface as shown in Figure 1(a). The wall boundary condition can be written as

\[\nabla\phi\cdot\mathbf{n}=\mathbf{V}\cdot\mathbf{n}=0. \tag{12}\]

The analytical solution for the velocity is:

\[u=\frac{\partial\phi}{\partial x}=V_{\infty}[1-\frac{R^{2}(x^{2}-y^{2})}{(x^{2}+y^{2})^{2}}],\quad v=\frac{\partial\phi}{\partial y}=-V_{\infty}[\frac{2R^{2}xy}{(x^{2}+y^{2})^{2}}] \tag{13}\]

We solve this problem in a finite circular domain with a radius \(R_{far}=15\), and represent the velocity potential \(\phi\) by a network with 5 hidden layers and 128 neurons per hidden layer. The boundary conditions on the far field and the wall are imposed as soft constraints, with the weight \(\lambda_{BC}=0.1\). For simplicity, we create an O-mesh of size 200 \(\times\) 100 in the polar computational domain \((0,2\pi)\times(0.5,15)\), as shown in Figure 1(b). We train the network via full-batch gradient descent using the Adam optimizer for \(5\times 10^{4}\) total iterations.

As shown in Figures 2 and 3, the standard PINN fails to achieve stable training and does not obtain any meaningful results near the wall. Its loss rapidly decreases in the first few hundred training iterations, and then barely changes for the rest of training, implying that the neural network gets trapped in an erroneous local minimum. In iTSONN, when the pseudo time step \(\Delta\tau\) is 100, it yields a training history and incorrect results similar to PINN, indicating that PINN is a special case of iTSONN when \(\Delta\tau\) is large. When \(\Delta\tau\leq 10\), the loss of iTSONN decreases stably and the correct result is obtained by following the trajectory of time-stepping.

Figure 1: (a) Boundary conditions at infinity and on the wall. (b) Actual computational domain.

We further study the effect of the number of inner iterations on the training. As shown in Figure 4, as \(K\) increases, iTSONN obtains results of almost the same accuracy but the convergence becomes slower. This is similar to traditional numerical methods, where an exact solution of the implicit equation during intermediate iterations is usually time-consuming and unnecessary, but usually leads to more robust numerical schemes.
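For this example, the operator \(\mathcal{N}[\hat{\phi}]\) entering the TSONN loss can be assembled with automatic differentiation; a minimal sketch with Cartesian collocation inputs (function and variable names are our own) is:

```python
import torch

def laplace_residual(phi_net, xy):
    # N[phi] = phi_xx + phi_yy for a potential network phi(x, y), via autograd
    xy = xy.clone().requires_grad_(True)
    phi = phi_net(xy)
    g = torch.autograd.grad(phi.sum(), xy, create_graph=True)[0]  # (N, 2): (phi_x, phi_y)
    phi_xx = torch.autograd.grad(g[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    phi_yy = torch.autograd.grad(g[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return phi_xx + phi_yy
```

The far-field and wall conditions of Equations (11)-(12) would be penalized analogously through first derivatives of \(\hat{\phi}\) at the boundary points.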
Figure 3: Contours of \(u\) near the wall obtained by the analytical solution, PINN, iTSONN with \(\Delta\tau=1\), and iTSONN with \(\Delta\tau=100\).

Figure 2: (a) Training losses and (b) predicted relative L\({}_{2}\) errors of PINN and iTSONN with \(K=50\) for different \(\Delta\tau\), where “Iterations” is the total number of iterations (i.e., \(N\times K\)).

Figure 5 shows the convergence histories of eTSONN under different \(\Delta\tau\). Surprisingly, eTSONN can converge even if \(\Delta\tau=100\), while the maximum allowable time step in the traditional numerical method with explicit time-stepping is only about \(\Delta\tau=0.0001\) on the same mesh. The explicit scheme is simple to set up and implement and easy to parallelize, but it has gradually been replaced by the more complex implicit scheme in CFD precisely because its time step \(\Delta\tau\) is severely limited by stability constraints, resulting in long run times to reach the final steady state. However, Figure 5 shows that the explicit scheme seems not to be limited by stability constraints in the NN-based optimization approach. To explain this observation, Figure 6 gives the training histories under different inner iteration numbers \(K\). We observe that the training diverges rapidly, just like the traditional numerical method with a large time step, when \(K\geq 50\); this stems from the fact that the inner iterations enable the NN to approximate \(\overline{u}_{n+1}\) closely enough to adequately follow the trajectory of time-stepping. Therefore, the stability constraints still apply in the NN-based optimization framework, but divergence errors are naturally filtered when the inner iteration number is small, probably due to the neural network's tendency to fit low frequencies first [41], thereby introducing additional dissipation. When neural networks are used to represent the solution or the residual of a PDE [42, 43], time steps beyond the stability constraints are also observed in explicit time-stepping; however, a full explanation is still lacking. In addition, we observe in Figure 5 that the convergence becomes slower as \(\Delta\tau\) decreases, which is consistent with traditional numerical methods.

Figure 4: (a) Training losses and (b) predicted relative L\({}_{2}\) errors of iTSONN with \(\Delta\tau=1\) for different \(K\).

In this case, iTSONN and eTSONN can converge even without using inner iterations (i.e., \(K=1\)), which is problem-dependent and not true in most cases according to our experience. We further consider the one-dimensional steady Burgers' equation (Equation (14)) to study the effect of inner iteration. It is a nonlinear partial differential equation that simulates the propagation and reflection of shock waves and takes the form \(-uu_{x}+0.05u_{xx}=0\). We represent the velocity \(u\) by a network with 3 hidden layers and 10 neurons per hidden layer. For simplicity, we create a uniform mesh of size 500 in the computational domain. We choose \(\lambda_{BC}=1\).

\[\begin{split}-u\frac{\partial u}{\partial x}+0.05\frac{\partial^{2}u}{\partial x^{2}}=0,\;x\in[-1,1]\\ u(-1)=1,\;u(1)=-1\end{split} \tag{14}\]

We train the network via full-batch gradient descent using the Adam optimizer.

Figure 5: (a) Training losses and (b) predicted relative L\({}_{2}\) errors of eTSONN with \(K=10\) for different \(\Delta\tau\).

Figure 6: (a) Training losses and (b) predicted relative L2 errors of eTSONN with \(\Delta\tau=1\) for different \(K\).
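For this benchmark, the residual operator \(\mathcal{N}[u]\) can be evaluated with automatic differentiation; the following sketch (plugging into the `pde_residual` interface assumed in the earlier training-loop sketch) follows the sign convention \(-uu_{x}+0.05u_{xx}=0\) of Equation (14):

```python
import torch

def burgers_residual(model, x, nu=0.05):
    """N[u] for the steady Burgers' equation -u*u_x + nu*u_xx = 0 (Eq. 14),
    evaluated via autograd; `model` maps collocation points x to u(x)."""
    x = x.clone().requires_grad_(True)
    u = model(x)
    (u_x,) = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)
    (u_xx,) = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)
    return -u * u_x + nu * u_xx
```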
We observe that for both eTSONN and iTSONN, when \(K\) is small, the training cannot achieve stable convergence and follow the trajectory of time-stepping (Figure 7), because the loss function changes too fast. In addition, eTSONN diverges due to stability constraints when \(K=1000\), while iTSONN is very robust for \(K\geq 10\). According to our experience, a suitable \(K\) for eTSONN may be difficult to find or even non-existent in some cases. Therefore, considering that iTSONN is as easy to implement as eTSONN in the NN-based optimization approach and is more robust, we recommend using iTSONN for solving PDEs; the remaining sections of this paper use only iTSONN.

Figure 7: Training losses of (a) eTSONN and (b) iTSONN with \(\Delta\tau=0.1\) for different \(K\) in solving Burgers' equation.

### Flow in a lid-driven cavity

We next consider the lid-driven cavity problem, a classical benchmark in CFD. The system is governed by the two-dimensional incompressible Navier-Stokes equations:

\[\begin{array}{l}\mathbf{u}\cdot\nabla\mathbf{u}+\nabla p-\Delta\mathbf{u}\,/\,\text{Re}=0\\ \nabla\cdot\mathbf{u}=0\\ \mathbf{u}=(1,0)\quad\text{on}\;\Gamma_{0}\\ \mathbf{u}=(0,0)\quad\text{on}\;\Gamma_{1}\end{array} \tag{15}\]

where \(\mathbf{u}=(u,v)\) is the velocity vector and \(p\) is the pressure. The computational domain \(\Omega=(0,1)\times(0,1)\) is a two-dimensional square cavity, where \(\Gamma_{0}\) is its top boundary and \(\Gamma_{1}\) is the other three sides. Despite its simple geometry, the driven cavity flow retains rich flow physics, manifested by multiple counter-rotating recirculation regions in the corners of the cavity as Re increases [44]. However, it has been reported that the standard PINN fails to solve this benchmark problem when Re \(>100\), and some improved PINN methods only achieve solutions for low Re (Re \(<500\)) [20, 33]. We use a network with 5 hidden layers and 128 neurons per hidden layer, and train the network using the L-BFGS algorithm. Since L-BFGS relies strongly on historical gradients to approximate the inverse Hessian matrix, the optimizer must be restarted at the outer iterations of iTSONN as the loss function changes; otherwise the training diverges rapidly. Thus, iTSONN allows resampling the residual points as the optimizer restarts in the outer iterations, further improving robustness. The weight \(\lambda_{BC}\) is set to 1. To enable the training to adequately follow the trajectory of time-stepping, we choose \(\Delta\tau=0.5\) and \(K=300\). We set \(N=300\) when Re \(\leq 1000\), \(N=3000\) at Re \(=2500\), and \(N=5000\) at Re \(=5000\). We enforce the PDE residuals and boundary conditions on 20,000 random residual points and 2,000 uniform boundary points, respectively, and evaluate the relative L\({}_{2}\) error on a 500 \(\times\) 500 uniform mesh.

Figures 8 and 9 show the results of PINN and iTSONN under different Re. We observe that PINN fails to obtain correct results when Re \(>250\), while iTSONN obtains correct results even when Re is as high as 5000. This is remarkable for NN-based optimization approaches to solving PDEs, enabling them to address more complex engineering problems. Figure 10 shows the loss and error histories of PINN and iTSONN. We observe that PINN converges faster than iTSONN when Re \(=100\), because minimizing the PDE residual is more direct than following the pseudo time-stepping.
However, when Re \(>100\), the results of PINN have large errors even with a small loss function value, which verifies the ill-conditioning of the loss function [34]. On the contrary, iTSONN achieves stable convergence in both the loss and the error.

Figure 10: (a) Training losses and (b) predicted relative L2 errors for different Re.

Figure 9: Contours of velocity magnitude for different Re and different methods.

Figure 11 shows the error histories of iTSONN against wall time. We observe that training converges more slowly as Re increases. It takes only 180 s for the error to drop to 2.2e-2 at Re \(=100\), while it takes 5220 s to reach 1e-1 at Re \(=5000\). Therefore, for high Reynolds number problems, the efficiency of TSONN still needs to be improved.

Figure 11: Predicted relative L2 errors with respect to wall time for different Re.

### One-dimensional Allen-Cahn equation

Next, we turn our attention to a time-dependent problem, the one-dimensional Allen-Cahn equation, which is difficult to solve directly with standard PINNs. This example has been used in several studies to improve the performance of PINNs [45-47].

\[\begin{split}&\frac{\partial u}{\partial t}-0.0001\frac{\partial^{2}u}{\partial x^{2}}+5u^{3}-5u=0,\;x\in[-1,1],\;t\in[0,1]\\ & u(x,0)=x^{2}\cos(\pi x)\\ & u(t,-1)=u(t,1),\\ &\frac{\partial u}{\partial x}(t,-1)=\frac{\partial u}{\partial x}(t,1)\end{split} \tag{16}\]

Following the setup discussed in these studies, we use a network with 4 hidden layers and 128 neurons per hidden layer. We choose \(\lambda_{IC}=10\), \(\lambda_{BC}=1\) and \(\Delta\tau=0.3\). We enforce the PDE residuals, initial conditions and boundary conditions on 20,000 random residual points, 257 uniform initial points, and 202 uniform boundary points, respectively, and evaluate the relative L\({}_{2}\) error on a 257 \(\times\) 101 uniform mesh. As shown in Figures 12 and 13, PINN fails to capture the sharper transitions and does not obtain the correct result, while iTSONN achieves excellent agreement with the reference solution under the same hyper-parameter settings, yielding a relative L\({}_{2}\) error of 4.9e-03. The results show that iTSONN works well for time-dependent problems. We emphasize again that such remarkable improvements require only a simple modification of the loss function compared to PINNs. Figure 14 shows the relative L\({}_{2}\) error against wall time. The error decreases rapidly to about 1e-2 in the first 60 seconds, and then enters a stable and slow decline.

Figure 12: Reference solution and predicted solutions of PINN and iTSONN.

Figure 13: Comparison of the predicted and reference solutions corresponding to the three temporal snapshots at t = 0.0, 0.5, 1.0.

## 4 Conclusions

In this paper, we propose the time-stepping-oriented neural network (TSONN) as a novel approach for solving partial differential equations. TSONN integrates the time-stepping method with deep learning, effectively enforcing the convergence of model training by following the trajectory of pseudo time-stepping. In the explicit TSONN, the loss function is only the mean square error between the network output and the label from the pseudo time-stepping, eliminating the need for PDE-related information in the inner iterations and thus significantly reducing the ill-conditioning of the optimization problem. In the implicit TSONN, PINN is a special case when the pseudo time step is large enough.
Therefore, compared with PINN, the implicit TSONN divides the original optimization problem into a series of well-conditioned sub-problems. Our results show that TSONN robustly achieves stable training and correct results in various problems that standard PINNs fail to solve. These improvements require only a simple modification of the loss function compared to PINN. As a notable example, PINN fails to solve the lid-driven flow problem beyond Re=250, while TSONN successfully solves it even at Re=5000. More interestingly, we highlight several novel properties and advantages of time-stepping methods within the framework of the neural network-based optimization approach. Specifically, the explicit time-stepping scheme allows for significantly larger time steps, surpassing the limitations of traditional mesh-based numerical methods by several orders of magnitude. The implicit time-stepping scheme can be implemented as straightforwardly as the explicit scheme, without the local linearization of PDEs and the solution of a large system of sparse linear equations required in traditional methods.

## Data Availability

Enquiries about data availability should be directed to the authors.

## Competing interests

The authors have not disclosed any competing interests.

## Acknowledgments

We would like to acknowledge the support of the National Natural Science Foundation of China (No. 92152301). We also thank Jiaqing Kou and Xianglin Shan for their valuable comments, which greatly contributed to improving the quality of our paper.
2304.03671
Contraction-Guided Adaptive Partitioning for Reachability Analysis of Neural Network Controlled Systems
In this paper, we present a contraction-guided adaptive partitioning algorithm for improving interval-valued robust reachable set estimates in a nonlinear feedback loop with a neural network controller and disturbances. Based on an estimate of the contraction rate of over-approximated intervals, the algorithm chooses when and where to partition. Then, by leveraging a decoupling of the neural network verification step and reachability partitioning layers, the algorithm can provide accuracy improvements for little computational cost. This approach is applicable with any sufficiently accurate open-loop interval-valued reachability estimation technique and any method for bounding the input-output behavior of a neural network. Using contraction-based robustness analysis, we provide guarantees of the algorithm's performance with mixed monotone reachability. Finally, we demonstrate the algorithm's performance through several numerical simulations and compare it with existing methods in the literature. In particular, we report a sizable improvement in the accuracy of reachable set estimation in a fraction of the runtime as compared to state-of-the-art methods.
Akash Harapanahalli, Saber Jafarpour, Samuel Coogan
2023-04-07T14:43:21Z
http://arxiv.org/abs/2304.03671v2
Contraction-Guided Adaptive Partitioning for Reachability Analysis of Neural Network Controlled Systems ###### Abstract In this paper, we present a contraction-guided adaptive partitioning algorithm for improving interval-valued robust reachable set estimates in a nonlinear feedback loop with a neural network controller and disturbances. Based on an estimate of the contraction rate of over-approximated intervals, the algorithm chooses when and where to partition. Then, by leveraging a decoupling of the neural network verification step and reachability partitioning layers, the algorithm can provide accuracy improvements for little computational cost. This approach is applicable with any sufficiently accurate open-loop interval-valued reachability estimation technique and any method for bounding the input-output behavior of a neural network. Using contraction-based robustness analysis, we provide guarantees of the algorithm's performance with mixed monotone reachability. Finally, we demonstrate the algorithm's performance through several numerical simulations and compare it with existing methods in the literature. In particular, we report a sizable improvement in the accuracy of reachable set estimation in a fraction of the runtime as compared to state-of-the-art methods.

## I Introduction

_Motivation and Problem Statement:_ Neural networks have become increasingly popular in control systems in recent years due to their relative ease of in-the-loop computation. These learning-based algorithms are known to be vulnerable to input perturbations [1]--small (possibly adversarial) changes in their input can lead to large variations in their output. As such, runtime verification of the safety and performance of neural network controlled systems is essential in safety-critical applications. This task is generally challenging due to the nonlinear and large-scale structure of the neural networks and their interconnection with nonlinear dynamics [2]. A basic ingredient for verifying control systems is the ability to overapproximate the set of reachable states from a given set of initial conditions, possibly in the presence of disturbances. If, for example, this overapproximation avoids obstacles or reaches a goal region, then the system certifiably satisfies the corresponding safety or performance criteria.

Recently, several promising reachability-based methods have been proposed for verifying stand-alone neural networks or feedback systems with neural networks in the control loop; however, these methods suffer from either large computational complexity, large over-approximation error, or lack of generality. For stand-alone neural networks: Interval Bound Propagation (IBP) [3] is fast, but largely over-conservative; CROWN [4] suffers when the input perturbation is large; LipSDP [5] is not scalable to large networks. For neural network closed-loop verification: ReachLP [6] is computationally light, but it can be conservative and only applies to linear systems; Reach-SDP [7] can provide tighter bounds, but does not scale well to large networks and only applies to linear systems; POLAR [8], MILP [9], and constrained Zonotope [10] methods can provide very accurate estimates for non-linear systems, but they are too expensive for runtime verification.

Partitioning of the state space is an effective approach for balancing the trade-off between the accuracy of reachable set estimates and the runtime of the verification strategy. In the literature, there are many existing partitioning algorithms.
For standalone ReLU networks, [11] splits the input set into partitions based on the stability of each neuron. In [12] and GSG [13], Monte Carlo simulations are used to select specific state axes to cut. For closed-loop linear systems, ReachLP-Uniform [6] uses a uniform partitioning of the state space, ReachLP-GSG [6] applies the GSG algorithm to ReachLP, and ReachLipBnB [14] combines a branch and bound algorithm with \(\ell_{2}\)-Lipschitz constants obtained from LipSDP.

Fig. 1: A snapshot of Algorithm 1 is illustrated for \(n=1\). **(top)** There are two initial partitions that both compute the neural network verification step (filled circles in the graph representation). Both partitions are integrated from \(t_{j-1}\) to \(t_{\gamma}=t_{j-1}+\gamma(t_{j}-t_{j-1})\) to estimate the width of the box at \(t_{j}\). Since the estimated blue partition width violates the user-defined maximum width \(\varepsilon\), **(bottom)** the algorithm returns to \(t_{j-1}\) and adds two new partitions to the tree. The maximum neural network verification depth here is \(1\), so the new partitions do not recompute the neural network verification and instead use the same \(E^{c}\) from the initial blue partition.

_Contributions:_ We introduce a contraction-guided adaptive partitioning algorithm for general nonlinear systems with neural network controllers. The algorithm is (i) _separative_--it decouples the neural network verification calls and reachability partitioning layers, leading to accuracy benefits for little computational cost; (ii) _spatially aware_--it chooses specific sections of the state space to cut based on the contraction rate of the partitions; and (iii) _temporally aware_--it chooses to cut on specific time intervals along the trajectory as needed. Additionally, while our algorithm can be coupled with any interval-based verification method, including many of those discussed above, we propose specifically applying it to the recently proposed verification method ReachMM based on mixed monotone system theory [15], for which we call the resulting algorithm ReachMM-CG. In this case, we show that the three features highlighted above are explainable through the lens of contraction theory with provable guarantees. In particular, we prove a connection between the contraction rate of the closed-loop mixed monotone embedding system and the contraction rate of the partitions. Using this observation, we provide upper bounds on the approximation error and develop theoretical guarantees on the algorithm's performance. Moreover, ReachMM-CG is applicable in continuous or discrete time for nonlinear plant models controlled by neural networks with general nonlinear activation functions, which removes several limitations found in most of the above-cited approaches. Finally, we find that ReachMM-CG yields reachable set estimates with a 33% accuracy improvement in a quarter of the time compared to state-of-the-art partitioning algorithms on a benchmark example.

## II Notation and Mathematical Preliminary

We define the partial order \(\leq\) on \(\mathbb{R}^{n}\) as \(x\leq y\iff x_{i}\leq y_{i}\,\forall i\in\{1,\ldots,n\}\). For every \(x\leq y\), we define the interval \([x,y]=\{z:x\leq z\leq y\}\). For \(x,\widehat{x}\in\mathbb{R}^{n}\), let \((x,\widehat{x})\in\mathbb{R}^{2n}\) be their concatenation. The southeast partial order \(\leq_{\text{SE}}\) on \(\mathbb{R}^{2n}\) is induced by \(\leq\) on \(\mathbb{R}^{n}\) as follows: \((x,\widehat{x})\leq_{\text{SE}}(y,\widehat{y})\iff x\leq y\) and \(\widehat{y}\leq\widehat{x}\).
Define the following: \(\mathcal{T}_{\geq 0}^{2n}=\{(x,\widehat{x})\in\mathbb{R}^{2n}:x\leq\widehat{x}\}\), \(\mathcal{T}_{\leq 0}^{2n}=\{(x,\widehat{x})\in\mathbb{R}^{2n}:\widehat{x}\leq x\}\), \(\mathcal{T}^{2n}=\mathcal{T}_{\geq 0}^{2n}\cup\mathcal{T}_{\leq 0}^{2n}\). We define the vector \(x_{[i:\widehat{x}]}\in\mathbb{R}^{n}\) as \(\left(x_{[i:\widehat{x}]}\right)_{j}=\begin{cases}x_{j},&j\neq i\\ \hat{x}_{j},&j=i\end{cases}\). Define the weighted \(\ell_{\infty}\)-norm \(\|x\|_{\infty,\varepsilon}=\|\operatorname{diag}(\varepsilon)^{-1}x\|_{\infty}\). In particular, the weighted maximum width of an interval \([\underline{x},\overline{x}]\) is \(\|\overline{x}-\underline{x}\|_{\infty,\varepsilon}\). Given a matrix \(A\in\mathbb{R}^{n\times n}\), the \(\ell_{\infty}\)-matrix measure of \(A\) is defined by \(\mu_{\infty}(A)=\max_{i\in\{1,\ldots,n\}}\{A_{ii}+\sum_{j\neq i}|A_{ij}|\}\). From [16, Table III], we define the weak pairing \(\llbracket\cdot,\cdot\rrbracket_{\infty}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) associated to the norm \(\|\cdot\|_{\infty}\) as follows:

\[\llbracket x,y\rrbracket_{\infty}=\max_{i\in I_{\infty}(y)}y_{i}x_{i},\]

where \(I_{\infty}(x)=\{i\in\{1,\ldots,n\}\ |\ |x_{i}|=\|x\|_{\infty}\}\). Consider the following dynamical system

\[\dot{x}=f(x,w) \tag{1}\]

with state vector \(x\in\mathbb{R}^{n}\) and disturbance vector \(w\in\mathcal{W}\subset\mathbb{R}^{q}\). Given a piecewise continuous curve \(w(\cdot)\), where \(w:[t_{0},t]\rightarrow\mathcal{W}\), the trajectory of the system starting from \(x_{0}\) at time \(t_{0}\) is given by \(t\mapsto\phi_{f}(t,t_{0},x_{0},w(\cdot))\). Given an initial set \(\mathcal{X}\), we denote the reachable set of \(f\) at some \(t\geq t_{0}\):

\[\mathcal{R}_{f}(t,t_{0},\mathcal{X},\mathcal{W})=\left\{\begin{aligned} &\phi_{f}(t,t_{0},x_{0},w(\cdot)),\,\forall x_{0}\in\mathcal{X},\\ & w:\mathbb{R}\rightarrow\mathcal{W}\text{ piecewise cont.}\end{aligned}\right\} \tag{2}\]

The dynamical system (1) is mixed monotone with respect to the decomposition function \(d:\mathcal{T}^{2n}\times\mathcal{T}^{2q}\rightarrow\mathbb{R}^{n}\) if, for every \(i\in\{1,\ldots,n\}\),

1. \(d_{i}(x,x,w,w)=f_{i}(x,w)\), for every \(x\in\mathbb{R}^{n}\), \(w\in\mathbb{R}^{q}\);
2. \(d_{i}(x,\widehat{x},w,\widehat{w})\leq d_{i}(y,\widehat{y},w,\widehat{w})\), for every \(x\leq y\) s.t. \(x_{i}=y_{i}\), and every \(\widehat{y}\leq\widehat{x}\);
3. \(d_{i}(x,\widehat{x},w,\widehat{w})\leq d_{i}(x,\widehat{x},v,\widehat{v})\), for every \(w\leq v\) and every \(\widehat{v}\leq\widehat{w}\).

With a valid decomposition function \(d\), one can construct an embedding system associated to (1) as follows:

\[\frac{d}{dt}\begin{bmatrix}\underline{x}\\ \overline{x}\end{bmatrix}=\begin{bmatrix}d(\underline{x},\overline{x},\underline{w},\overline{w})\\ d(\overline{x},\underline{x},\overline{w},\underline{w})\end{bmatrix}:=E(\underline{x},\overline{x},\underline{w},\overline{w}). \tag{3}\]

## III Problem Statement

Consider a nonlinear continuous-time dynamical system of the form

\[\dot{x}(t)=f(x(t),u(t),w(t)), \tag{4}\]

where \(x\in\mathbb{R}^{n}\) is the state of the system, \(u\in\mathbb{R}^{p}\) is the control input to the system, \(w\in\mathbb{R}^{q}\) is a disturbance input to the system, and \(f:\mathbb{R}^{n}\times\mathbb{R}^{p}\times\mathbb{R}^{q}\rightarrow\mathbb{R}^{n}\) is a parameterized vector field.
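For illustration, the following is a minimal NumPy sketch of integrating the embedding system (3) and of the \(\ell_{\infty}\)-matrix measure defined above; the decomposition function `d` is an assumed interface, and forward Euler is our choice of integrator:

```python
import numpy as np

def integrate_embedding(d, x_lo, x_hi, w_lo, w_hi, t0, tf, dt=1e-3):
    """Euler integration of the embedding system (3): a single trajectory of
    (x_lo, x_hi) yields an interval over-approximation of the reachable set of
    (1), assuming d(x, xh, w, wh) is a valid decomposition function."""
    t = t0
    while t < tf:
        dx_lo = d(x_lo, x_hi, w_lo, w_hi)
        dx_hi = d(x_hi, x_lo, w_hi, w_lo)
        x_lo, x_hi = x_lo + dt * dx_lo, x_hi + dt * dx_hi
        t += dt
    return x_lo, x_hi

def mu_inf(A):
    """l-infinity matrix measure: max_i ( A_ii + sum_{j != i} |A_ij| )."""
    A = np.asarray(A, dtype=float)
    off_diag = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return float(np.max(np.diag(A) + off_diag))
```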
We assume that the feedback control policy for the system (4) is given by a \(k\)-layer fully connected feed-forward neural network \(N:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}\) as follows:

\[\begin{split}&\xi^{(i)}=\sigma^{(i-1)}\left(W^{(i-1)}\xi^{(i-1)}+b^{(i-1)}\right),\,i=1,\ldots,k\\ &\xi^{(0)}=x,\quad N(x)=W^{(k)}\xi^{(k)}+b^{(k)}\end{split} \tag{5}\]

where \(m_{i}\) is the number of neurons in the \(i\)-th layer, \(W^{(i)}\in\mathbb{R}^{m_{i}\times m_{i-1}}\) is the weight matrix on the \(i\)-th layer, \(b^{(i)}\in\mathbb{R}^{m_{i}}\) is the bias vector on the \(i\)-th layer, \(\xi^{(i)}\in\mathbb{R}^{m_{i}}\) is the \(i\)-th layer hidden variable, and \(\sigma^{(i)}\) is the activation function for the \(i\)-th layer. In practice, \(N(x(t))\) cannot be evaluated at each instance of time \(t\), and, instead, the control must be implemented via, _e.g._, a zero-order hold strategy between sampling instances. We assume that there exists an increasing sequence of control time instances \(\{t_{0},t_{1},t_{2},\ldots\}\) at which the control input is updated. Thus, the closed-loop system with the neural-network feedback controller is given by:

\[\dot{x}(t)=f(x(t),N(x(t_{j})),w(t)):=f^{c}(j,x(t),w(t)), \tag{6}\]

where \(t\in\mathbb{R}_{\geq 0}\) and \(j\in\mathbb{Z}_{\geq 0}\) is such that \(t\in[t_{j},t_{j+1}]\). In our analysis, when \(j\) is clear from context or does not affect the result, we drop \(j\) as an argument of \(f^{c}\). The goal of this paper is to verify the behavior of the closed-loop system (6). To verify the safety of a system under uncertainty, one needs to verify the entire reachable set. However, in general, computing the reachable set exactly is not computationally tractable--instead, approaches typically compute an over-approximation \(\overline{\mathcal{R}}_{f^{c}}(t,t_{0},\mathcal{X},\mathcal{W})\supseteq\mathcal{R}_{f^{c}}(t,t_{0},\mathcal{X},\mathcal{W})\). Therefore, the main challenge addressed in this paper is to develop an approach for providing tight over-approximations of reachable sets while remaining computationally tractable for runtime computation.

## IV Interval Reachability of Neural Network Controlled Systems

### _General Framework_

We assume we have access to an off-the-shelf dynamical system reachability tool that supports _interval analysis_.

**Assumption 1** (Open-loop interval reachability).: _Given a dynamical system of the form (4), any intervals \(\mathcal{X}_{0}=[\underline{x}_{0},\overline{x}_{0}]\subseteq\mathbb{R}^{n}\), \(\mathcal{U}=[\underline{u},\overline{u}]\subseteq\mathbb{R}^{p}\), and \(\mathcal{W}=[\underline{w},\overline{w}]\subseteq\mathbb{R}^{q}\), and some initial time \(t_{0}\), there exists a reachability algorithm that returns a valid interval approximation \([\underline{x}(t),\overline{x}(t)]\) satisfying_

\[\mathcal{R}_{f}(t,t_{0},\mathcal{X}_{0},\mathcal{U},\mathcal{W})\subseteq[\underline{x}(t),\overline{x}(t)],\quad\forall t\geq t_{0}. \tag{7}\]

Reachability analysis of dynamical systems is a classical and well-studied research field with several off-the-shelf toolboxes providing this capability, including Flow* [17], the Hamilton-Jacobi approach [18], the level set toolbox [19], CORA [20], and mixed monotonicity [21].
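Before turning to neural network verification, a minimal sketch of simulating the sampled-data closed loop (5)-(6) may be helpful; ReLU activations and the forward-Euler integrator are illustrative assumptions, and `weights`/`biases` are assumed given:

```python
import numpy as np

def nn_controller(x, weights, biases):
    """Forward pass of the fully connected network (5): ReLU hidden layers
    (assumed for illustration) followed by a linear output layer."""
    xi = x
    for W, b in zip(weights[:-1], biases[:-1]):
        xi = np.maximum(W @ xi + b, 0.0)
    return weights[-1] @ xi + biases[-1]

def simulate_zoh(f, x0, weights, biases, t_ctrl, dt=1e-3, w=None):
    """Zero-order-hold closed loop (6): u is updated only at the control
    instants t_ctrl and held constant in between (Euler integration)."""
    x, traj = np.array(x0, dtype=float), []
    for t_prev, t_next in zip(t_ctrl[:-1], t_ctrl[1:]):
        u = nn_controller(x, weights, biases)   # evaluated at t_j only
        t = t_prev
        while t < t_next:
            x = x + dt * f(x, u, w)
            t += dt
        traj.append((t_next, x.copy()))
    return traj
```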
We also assume access to off-the-shelf neural network verification algorithms that can provide _interval inclusion functions_ as follows:

**Assumption 2** (Neural network verification).: _Given a neural network \(N\) of the form (5) and any interval \([\underline{y},\overline{y}]\subseteq\mathbb{R}^{n}\), there exists a neural network verification algorithm that returns a valid inclusion function \(\big(\underline{N}_{[\underline{y},\overline{y}]},\overline{N}_{[\underline{y},\overline{y}]}\big):\mathcal{T}_{\geq 0}^{2n}\to\mathcal{T}_{\geq 0}^{2p}\) satisfying_

\[\underline{N}_{[\underline{y},\overline{y}]}(\underline{x},\overline{x})\leq N(x)\leq\overline{N}_{[\underline{y},\overline{y}]}(\underline{x},\overline{x}), \tag{8}\]

_for any \(x\in[\underline{x},\overline{x}]\subseteq[\underline{y},\overline{y}]\)._

A large number of the existing neural network verification algorithms can provide bounds of the form (8) for the output of the neural networks, including CROWN (and its subsequent variants) [4], LipSDP [22], and IBP [3]. Combining Assumptions 1 and 2 leads naturally to an algorithm for over-approximating solutions to (6). In particular, starting with \(j=0\) and an initial interval of states \(\mathcal{X}_{j}=[\underline{x}(t_{j}),\overline{x}(t_{j})]\), first obtain \(\underline{N}_{[\underline{y},\overline{y}]}\) and \(\overline{N}_{[\underline{y},\overline{y}]}\) for some \([\underline{y},\overline{y}]\supseteq\mathcal{X}_{j}\) from Assumption 2. Next, set \(\mathcal{U}=[\underline{N}_{[\underline{y},\overline{y}]}(\underline{x}(t_{j}),\overline{x}(t_{j})),\overline{N}_{[\underline{y},\overline{y}]}(\underline{x}(t_{j}),\overline{x}(t_{j}))]\). Finally, compute the reachable set of the closed-loop system on the interval \(t\in(t_{j},t_{j+1}]\) as the interval reachable set obtained from the algorithm of Assumption 1, set \(\mathcal{X}_{j+1}\) as the reachable set at time \(t_{j+1}\), increment \(j\gets j+1\), and iterate. This iteration serves as the backbone of our proposed partitioning-based algorithm below; a sketch follows at the end of this passage.

### _Mixed Monotone Framework_

One specific framework that satisfies both Assumptions 1 and 2 is developed in [15], where interval reachability of neural network controlled systems is studied through the lens of mixed monotone dynamical systems theory. Suppose that we have access to a decomposition function \(d\) for the open-loop system (4) with the open-loop embedding system

\[\frac{d}{dt}\begin{bmatrix}\underline{x}\\ \overline{x}\end{bmatrix}=\begin{bmatrix}d(\underline{x},\overline{x},\underline{u},\overline{u},\underline{w},\overline{w})\\ d(\overline{x},\underline{x},\overline{u},\underline{u},\overline{w},\underline{w})\end{bmatrix}:=E(\underline{x},\overline{x},\underline{u},\overline{u},\underline{w},\overline{w}). \tag{9}\]

Given \(\mathcal{X}_{0}=[\underline{x}_{0},\overline{x}_{0}]\subseteq\mathbb{R}^{n}\), \(\mathcal{U}=[\underline{u},\overline{u}]\subseteq\mathbb{R}^{p}\), and \(\mathcal{W}=[\underline{w},\overline{w}]\subseteq\mathbb{R}^{q}\), the trajectory of (9) starting from \((\underline{x}_{0},\overline{x}_{0})\) provides the inclusion (7) from Assumption 1. Following the treatment in [15], one can use CROWN [4] to obtain the desired bounds from Assumption 2.
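A minimal sketch of the backbone iteration combining Assumptions 1 and 2 is given below; `verify_nn` (returning the pair of inclusion functions of (8) for a given enclosing interval) and `reach_open_loop` (any Assumption-1 routine) are assumed interfaces, not calls to a specific library:

```python
def closed_loop_reach(reach_open_loop, verify_nn, x_lo, x_hi, W, t_ctrl):
    """Interval over-approximation of the closed loop (6): bound the
    controller output on the current state interval, then propagate the
    open-loop dynamics with that control interval, and iterate."""
    boxes = [(t_ctrl[0], x_lo, x_hi)]
    for t_prev, t_next in zip(t_ctrl[:-1], t_ctrl[1:]):
        N_lo, N_hi = verify_nn(x_lo, x_hi)                 # inclusion fns (8)
        u_lo, u_hi = N_lo(x_lo, x_hi), N_hi(x_lo, x_hi)    # control interval U
        x_lo, x_hi = reach_open_loop(x_lo, x_hi, u_lo, u_hi, W, t_prev, t_next)
        boxes.append((t_next, x_lo, x_hi))                 # X_{j+1}, iterate
    return boxes
```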
Given an interval \([\underline{y},\overline{y}]\), the algorithm provides an efficient procedure for finding a tuple \((\underline{C},\overline{C},\underline{d},\overline{d})\) defining linear upper and lower bounds for the output of the neural network

\[\underline{C}(\underline{y},\overline{y})x+\underline{d}(\underline{y},\overline{y})\leq N(x)\leq\overline{C}(\underline{y},\overline{y})x+\overline{d}(\underline{y},\overline{y}), \tag{10}\]

for every \(x\in[\underline{y},\overline{y}]\). Using these linear bounds, we can construct the inclusion function for any \([\underline{x},\overline{x}]\subseteq[\underline{y},\overline{y}]\):

\[\begin{split}&\underline{N}_{[\underline{y},\overline{y}]}(\underline{x},\overline{x})=[\underline{C}(\underline{y},\overline{y})]^{+}\underline{x}+[\underline{C}(\underline{y},\overline{y})]^{-}\overline{x}+\underline{d}(\underline{y},\overline{y}),\\ &\overline{N}_{[\underline{y},\overline{y}]}(\underline{x},\overline{x})=[\overline{C}(\underline{y},\overline{y})]^{+}\overline{x}+[\overline{C}(\underline{y},\overline{y})]^{-}\underline{x}+\overline{d}(\underline{y},\overline{y}).\end{split} \tag{11}\]

By combining these two tools, one can construct a new closed-loop embedding system to over-approximate the reachable sets of the closed-loop system (6). From [15], with the following definitions,

\[\begin{split}&\underline{\eta}_{j}=\underline{N}_{[\underline{y},\overline{y}]}(\underline{x}(t_{j}),\overline{x}(t_{j})_{[i:\underline{x}(t_{j})]}),\\ &\overline{\eta}_{j}=\overline{N}_{[\underline{y},\overline{y}]}(\underline{x}(t_{j}),\overline{x}(t_{j})_{[i:\underline{x}(t_{j})]}),\\ &\underline{\nu}_{j}=\underline{N}_{[\underline{y},\overline{y}]}(\overline{x}(t_{j}),\underline{x}(t_{j})_{[i:\overline{x}(t_{j})]}),\\ &\overline{\nu}_{j}=\overline{N}_{[\underline{y},\overline{y}]}(\overline{x}(t_{j}),\underline{x}(t_{j})_{[i:\overline{x}(t_{j})]}),\end{split} \tag{12}\]

we use the following "hybrid" function

\[\big(d^{c}_{[\underline{y},\overline{y}]}(j,\underline{x},\overline{x},\underline{w},\overline{w})\big)_{i}=\begin{cases}d_{i}(\underline{x},\overline{x},\underline{\eta}_{j},\overline{\eta}_{j},\underline{w},\overline{w}),&\underline{x}\leq\overline{x}\\ d_{i}(\underline{x},\overline{x},\overline{\nu}_{j},\underline{\nu}_{j},\underline{w},\overline{w}),&\overline{x}\leq\underline{x}\end{cases} \tag{13}\]

for any \([\underline{y},\overline{y}]\supseteq[\underline{x},\overline{x}]\), to create the closed-loop embedding system:

\[\frac{d}{dt}\begin{bmatrix}\underline{x}\\ \overline{x}\end{bmatrix}=\begin{bmatrix}d^{c}_{[\underline{y},\overline{y}]}(j,\underline{x},\overline{x},\underline{w},\overline{w})\\ d^{c}_{[\underline{y},\overline{y}]}(j,\overline{x},\underline{x},\overline{w},\underline{w})\end{bmatrix}:=E^{c}_{[\underline{y},\overline{y}]}(j,\underline{x},\overline{x},\underline{w},\overline{w}). \tag{14}\]

In our analysis, when \(j\) is clear from context or does not affect the result, we drop \(j\) as an argument of \(d^{c}\) and \(E^{c}\). For an input set \(\mathcal{X}_{0}\subseteq[\underline{x}_{0},\overline{x}_{0}]\) and a disturbance set \(\mathcal{W}\subseteq[\underline{w},\overline{w}]\), with \(\big(\underline{x}(t),\overline{x}(t)\big)=\phi_{E^{c}_{[\underline{y},\overline{y}]}}\big(t,t_{0},(\underline{x}_{0},\overline{x}_{0})\big)\), the following inclusion holds [15, Theorem 1]:

\[\mathcal{R}_{f^{c}}(t,t_{0},\mathcal{X},\mathcal{W})\subseteq[\underline{x}(t),\overline{x}(t)].
\tag{15}\]

Thus, running a single trajectory of \(E^{c}\) can over-approximate the reachable set of the closed-loop system (6).

### _Partitioning_

While interval analysis techniques can often be computationally inexpensive, as the size of the uncertain sets grows, the reachable set estimates become overly conservative. In a closed loop, these effects can compound significantly as the error continues to accumulate. This phenomenon is known in the literature as the _wrapping effect_. To mitigate over-conservatism in interval-based techniques, one can split uncertain regions into smaller partitions. A valid over-approximation of the reachable set can be found by taking the union of the over-approximations of the reachable sets for each partition. Due to the locality of the smaller partitions, this technique can drastically reduce the conservatism of interval-based techniques.

## V Contraction-Guided Adaptive Partitioning

In this section, we introduce a contraction-guided adaptive partitioning algorithm to improve the results of interval reachability methods satisfying Assumptions 1 and 2.

### _Algorithm Description_

At a particular control step \(t_{j}\), define a partition

\[P=((\underline{x}_{0},\overline{x}_{0}),\mathrm{N},\mathcal{S}), \tag{16}\]

where \((\underline{x}_{0},\overline{x}_{0})\in\mathcal{T}_{\geq 0}^{2n}\) is the initial condition of the partition, \(\mathrm{N}\in\{\mathrm{True},\mathrm{False}\}\) indicates whether or not the partition should compute the neural network verification (8), and \(\mathcal{S}\) is a set of tuples representing its subpartitions. Algorithm 1 starts by initializing a tree at time \(t_{0}\) with a single root node \(\mathcal{P}=((\underline{X}_{0},\overline{X}_{0}),\mathrm{True},\emptyset)\) representing the initial set.

_step() Procedure:_ Along a particular control interval \([t_{j-1},t_{j}]\), every partition undergoes the following procedure. If specified, the embedding system \(E^{c}\) is computed using (11), (13), (14). Initially, \(E^{c}\) is integrated to a fraction \(\gamma\in(0,1]\) of the control interval, where the contraction rate is computed by comparing the width of the box to that of its initial condition. Then, the width of the final box at \(t_{j}\) is estimated. If it violates the hyper-parameter \(\varepsilon\), the algorithm returns to time \(t_{j-1}\) and divides the initial condition into \(2^{n}\) sub-partitions. This process is repeated for each of the sub-partitions. Finally, if the estimated width does not violate \(\varepsilon\), \(E^{c}\) is fully integrated to \(t_{j}\). This procedure is visualized in Fig. 1, and formalized as step() in Algorithm 1, which repeats for every control step until some final time \(T\); a minimal sketch of the width test follows.
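The following sketch illustrates the width test and uniform bisection just described; `integrate_to` is an assumed embedding-system integrator, and the extrapolation condition \(C^{1/\gamma}\,\|\overline{x}_{0}-\underline{x}_{0}\|_{\infty,\varepsilon}>1\) is our reading of how the contraction rate at \(t_{\gamma}\) is used to estimate the \(\varepsilon\)-weighted width at \(t_{j}\):

```python
import numpy as np

def width_check_and_split(x_lo0, x_hi0, integrate_to, t_prev, t_next,
                          eps, gamma=0.1):
    """Integrate to t_gamma, estimate the contraction rate C, extrapolate the
    eps-weighted width to t_next, and split into 2^n sub-boxes if the
    estimate exceeds 1 (i.e., the box is predicted to outgrow eps)."""
    width = lambda lo, hi: np.max(np.abs(hi - lo) / eps)   # ||.||_{inf,eps}
    t_gamma = t_prev + gamma * (t_next - t_prev)
    x_lo_g, x_hi_g = integrate_to(x_lo0, x_hi0, t_prev, t_gamma)
    C = width(x_lo_g, x_hi_g) / width(x_lo0, x_hi0)        # contraction rate
    if C ** (1.0 / gamma) * width(x_lo0, x_hi0) <= 1.0:
        return None                                        # no split needed
    # Uniform bisection of every axis -> 2^n sub-partitions.
    mid = 0.5 * (x_lo0 + x_hi0)
    n = len(x_lo0)
    subs = []
    for mask in range(2 ** n):
        lo, hi = x_lo0.copy(), x_hi0.copy()
        for i in range(n):
            if (mask >> i) & 1:
                lo[i] = mid[i]
            else:
                hi[i] = mid[i]
        subs.append((lo, hi))
    return subs
```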
```
Input: initial set: \(\mathcal{X}_{0}=[\underline{X}_{0},\overline{X}_{0}]\subset\mathbb{R}^{n}\), control time instances: \(\{t_{0},t_{1},\dots\}\), final time: \(T\)
Parameter: desired width \(\varepsilon\in\mathbb{R}_{>0}^{n}\), check contraction factor \(\gamma\in(0,1]\), max partition depth \(D_{p}\in\mathbb{Z}_{\geq 0}\), max neural network verification depth \(D_{\mathrm{N}}\in\mathbb{Z}_{\geq 0}\)
Output: Over-approximated reachable set trajectory for \(t\in[t_{0},T]\): \(\overline{\mathcal{R}}(t)\)
1:  \(\overline{\mathcal{R}}(t_{0})=\mathcal{X}_{0}\);
2:  \(\mathcal{P}=((\underline{X}_{0},\overline{X}_{0}),\mathrm{True},\emptyset)\);
3:  \(t_{m}=\) smallest control time instance \(\geq T\);
4:  for \(j=1,\dots,m\) do
5:    \((\overline{\mathcal{R}}(\cdot)|_{[t_{j-1},t_{j}]},\mathcal{P}_{+})\leftarrow\texttt{step}(\mathcal{P},\emptyset,j)\);
6:    \(\mathcal{P}\leftarrow\mathcal{P}_{+}\);
7:  end for
8:  return \(\overline{\mathcal{R}}(t)\) \(\forall t\in[t_{0},T]\);
9:  Procedure step\((((\underline{x}_{0},\overline{x}_{0}),\mathrm{N},\mathcal{S}),E^{c},j)\)
10:   if N is True then  // NN Verification
11:     \(E^{c}\gets E^{c}\) as (14), (13) using (11) on \((\underline{x}_{0},\overline{x}_{0})\);
12:   if \(\mathcal{S}=\emptyset\) then  // No Subpartitions
13:     \(d=\mathrm{get\_depth}()\);  // Tree Depth
14:     if \(d<D_{p}\) then  // Can Partition
15:       \(t_{\gamma}\gets t_{j-1}+\gamma(t_{j}-t_{j-1})\);
16:       \((\underline{x},\overline{x})(t_{\gamma})\leftarrow\phi_{E^{c}}(t_{\gamma},t_{j-1},(\underline{x}_{0},\overline{x}_{0}))\);
17:       \(C=\frac{\|\overline{x}(t_{\gamma})-\underline{x}(t_{\gamma})\|_{\infty,\varepsilon}}{\|\overline{x}_{0}-\underline{x}_{0}\|_{\infty,\varepsilon}}\);  // Contr Rate
18:       if \(C^{1/\gamma}\|\overline{x}_{0}-\underline{x}_{0}\|_{\infty,\varepsilon}>1\) then
19:         \(\{(\underline{x}_{0}^{k},\overline{x}_{0}^{k})\}_{k=1}^{2^{n}}\leftarrow\texttt{uni\_div}(\underline{x}_{0},\overline{x}_{0})\);
20:         \(\mathcal{S}\leftarrow\{((\underline{x}_{0}^{k},\overline{x}_{0}^{k}),d<D_{\mathrm{N}},\emptyset)\}_{k=1}^{2^{n}}\);
21:         \(\mathrm{N}\leftarrow\mathrm{N}\land((d+1)>D_{\mathrm{N}})\);
22:         return step\((((\underline{x}_{0},\overline{x}_{0}),\mathrm{N},\mathcal{S}),E^{c},j)\)
23:     \((\underline{x},\overline{x})(\cdot)|_{[t_{j-1},t_{j}]}\leftarrow\phi_{E^{c}}(\cdot,t_{j-1},(\underline{x}_{0},\overline{x}_{0}))\);
24:     return \(([\underline{x},\overline{x}](\cdot)|_{[t_{j-1},t_{j}]},((\underline{x},\overline{x})(t_{j}),\mathrm{N},\emptyset))\)
25:   else  // Iterate Subpartitions
26:     \(\{(\overline{\mathcal{R}}^{k}(\cdot),P_{+}^{k})\leftarrow\texttt{step}(P^{k},E^{c},j)\}_{P^{k}\in\mathcal{S}}\);
27:     \((\underline{x}_{0},\overline{x}_{0})_{+}\leftarrow\) tightest \([\underline{x},\overline{x}]\supseteq\bigcup_{k}\overline{\mathcal{R}}^{k}(t_{j})\);
28:     return \((\bigcup_{k}\overline{\mathcal{R}}^{k}(\cdot),((\underline{x}_{0},\overline{x}_{0})_{+},\mathrm{N},\{P_{+}^{k}\}_{k=1}^{2^{n}}))\);
```
**Algorithm 1** Contraction-Guided Adaptive Partitioning (ReachMM-CG)

_Hyper-parameters:_ There are several user-defined parameters that are important for the algorithm's performance. The choice of the check-contraction factor \(\gamma\in(0,1]\) is purely for computational benefit; in particular, \(\gamma=1\) checks the true width of the box at \(t_{j}\) instead of an estimate. The maximum partition depth \(D_{p}\in\mathbb{Z}_{\geq 0}\) specifies how deep new partitions can be added in the tree to integrate \(E^{c}\).
The maximum neural network verification depth \(D_{\mathrm{N}}\in\mathbb{Z}_{\geq 0}\) specifies how deep neural network verification is performed in the partition tree. \(D_{p}\) and \(D_{\mathrm{N}}\) are directly related to the first two terms of the right-hand side of Theorem 1 in the next section--every partitioning layer will improve the first term, while only partitions computing the neural network verification will improve the second term. Finally, the choice of desired width \(\varepsilon\in\mathbb{R}_{\geq 0}^{n}\) specifies how wide partitions can grow before being cut: if chosen too small, the algorithm will partition early in the start (\(\varepsilon_{i}=0\) for any \(i\) implies a uniform initial partitioning); if chosen too large, the algorithm will not partition at all (\(\varepsilon_{i}=\infty\) implies no condition on the \(i\)-th component).

### _Discussion_

Algorithm 1 has three key features.

_Separation:_ In practice, the neural network verification step usually contributes the most computational expense to the algorithm. Introducing the maximum depths \(D_{p}\) and \(D_{\text{N}}\) allows us to separate the calls to the neural network verification algorithm and the total number of partitions. This allows the algorithm to improve its accuracy without any additional calls to the neural network verifier, leaving it to the user to control the trade-off between computational complexity and accuracy.

_Spatial awareness:_ Based on the dynamics of the system, the over-approximation error can be highly sensitive to the spatial location of each partition. By estimating the contraction rate of each partition separately, the algorithm can localize partitioning exactly _where_ necessary.

_Temporal awareness:_ One of the main drawbacks of interval reachability frameworks is that, as time evolves, the over-approximation error tends to compound exponentially. As such, by partitioning along trajectories, the algorithm can save computations to improve reachable set estimates exactly _when_ necessary.

_Generality of Algorithm 1:_ The algorithm, as written, specifically uses the Mixed Monotone framework (developed in Section IV-B) for the interval reachability of the dynamical system, as well as CROWN (specified by (10)) for the neural network verification. We will exclusively use this setting for the rest of the paper, and refer to it as "ReachMM-CG". However, it is easy to see that one can replace Lines 11, 16, and 23 with any other interval reachability setting satisfying Assumptions 1 and 2.

## VI Contraction-Based Guarantees on Reachable Set Over-Approximation

In this section, we use contraction theory to provide rigorous bounds on the accuracy of the mixed-monotone reachable set over-approximations. For this section, we remove the assumption that the control is applied in a piecewise constant fashion and instead use a continuously applied neural network. This is done for simplicity of analysis.

**Theorem 1** (Accuracy guarantees for mixed-monotone reachability).: _Consider the closed-loop system (6) with the neural network controller \(u=N(x)\) given by (5), and let \(t\mapsto x(t)\) be a trajectory of the closed-loop system (6). Suppose that \(d\) is the decomposition function for the open-loop system (4) and that there is a neural network verification algorithm which provides bounds of the form (8). Let \(t\mapsto\big(\underline{x}(t),\overline{x}(t)\big)\) be a trajectory of the closed-loop embedding system (14), and let \(t\mapsto\big(\underline{y}(t),\overline{y}(t)\big)\) be a curve such that \([\underline{x}(t),\overline{x}(t)]\subseteq[\underline{y}(t),\overline{y}(t)]\), for every \(t\in\mathbb{R}_{\geq 0}\)._
_Then,_ (17) _where_

\[c_{x}=\sup_{\underline{z},\overline{z}\in\Omega_{t}}\mu_{\infty}\big(D_{(\underline{z},\overline{z})}E^{c}_{[\underline{y}(t),\overline{y}(t)]}(\underline{z},\overline{z},\underline{w},\overline{w})\big),\]
\[\ell^{o}_{u}=\sup_{\begin{subarray}{c}x\in\Omega_{t},\,w\in[\underline{w},\overline{w}],\\ \underline{u},\overline{u}\in[\underline{N}(x,x),\overline{N}(x,x)]\end{subarray}}\|D_{(\underline{u},\overline{u})}E(x,x,\underline{u},\overline{u},w,w)\|_{\infty},\]
\[\ell^{c}_{w}=\sup_{x\in\Omega_{t},\,\underline{z},\overline{z}\in[\underline{w},\overline{w}]}\|D_{(\underline{z},\overline{z})}E^{c}_{[\underline{y}(t),\overline{y}(t)]}(x,x,\underline{z},\overline{z})\|_{\infty},\]

_and the set \(\Omega_{t}\) is defined by \(\Omega_{t}=\bigcup_{\tau\in[0,t]}[\underline{y}(\tau),\overline{y}(\tau)]\subseteq\mathbb{R}^{n}\)._

Proof.: We first make the following conventions:

\[r(t)=\begin{bmatrix}\underline{x}(t)\\ \overline{x}(t)\end{bmatrix}-\begin{bmatrix}x(t)\\ x(t)\end{bmatrix},\]
\[T_{1}=E^{c}_{[\underline{y},\overline{y}]}(\underline{x},\overline{x},\underline{w},\overline{w})-E^{c}_{[\underline{y},\overline{y}]}(x,x,\underline{w},\overline{w}),\]
\[T_{2}=E^{c}_{[\underline{y},\overline{y}]}(x,x,\underline{w},\overline{w})-E^{c}_{[\underline{y},\overline{y}]}(x,x,w,w),\]
\[T_{3}=E^{c}_{[\underline{y},\overline{y}]}(x,x,w,w)-\begin{bmatrix}f^{c}(x,w)\\ f^{c}(x,w)\end{bmatrix}.\]

Therefore, we can compute

\[\frac{1}{2}\frac{d}{dt}\|r(t)\|_{\infty}^{2}=\left\llbracket E^{c}_{[\underline{y},\overline{y}]}(\underline{x},\overline{x},\underline{w},\overline{w})-\begin{bmatrix}f^{c}(x,w)\\ f^{c}(x,w)\end{bmatrix},\,r(t)\right\rrbracket_{\infty}\leq\llbracket T_{1},r(t)\rrbracket_{\infty}+\llbracket T_{2},r(t)\rrbracket_{\infty}+\llbracket T_{3},r(t)\rrbracket_{\infty},\]

where the first equality is by the curve-norm derivative property of the weak pairing [16, Theorem 25(ii)] and the second inequality is by the subadditivity of the weak pairing [16, Definition 15, property (i)]. By [16, Theorem 18], the first term on the RHS of the above equation can be estimated. By [16, Definition 15, property (iv)], the second term on the RHS can be estimated analogously.

Fig. 2: A partition tree structure of Algorithm 1 for a run on the double integrator system (20) for \(\varepsilon=0.1\), \(D_{p}=10\), \(D_{\text{N}}=2\). The blue color represents the algorithm's _separation_--only nodes filled with blue compute the neural network verification step, and all the integrations are performed only on the leaf nodes. The imbalanced structure is a consequence of the algorithm's _spatial awareness_. As a consequence of the algorithm's _temporal awareness_, the structure of the partition tree deepens as time increases.
### _Linear Discrete-Time Double Integrator_ Consider a zero-order hold discretization of the classical double integrator with a step-size of \(1\) (from [6]): \[x_{t+1}=\underbrace{\begin{bmatrix}1&1\\ 0&1\end{bmatrix}}_{A}x_{t}+\underbrace{\begin{bmatrix}0.5\\ 1\end{bmatrix}}_{B}u_{t} \tag{20}\] Here, we consider a fixed actuation step size of \(1\)--the same as the integration step. In this special case (discrete-time LTI systems), [15, Corollary 4] shows that the following is a valid closed-loop embedding system that provides tighter bounds than (14): \[\begin{bmatrix}\underline{x}_{t+1}\\ \overline{x}_{t+1}\end{bmatrix}=\begin{bmatrix}\underline{M}^{+}\\ \overline{M}^{-}\end{bmatrix}\begin{bmatrix}\underline{x}_{t}\\ \overline{x}_{t}\end{bmatrix}+\begin{bmatrix}B^{+}&B^{-}\\ B^{-}&B^{+}\end{bmatrix}\begin{bmatrix}\underline{d}_{t}\\ \overline{d}_{t}\end{bmatrix}\] with \(\underline{C},\overline{C},\underline{d},\overline{d}\) taken from CROWN as (10) on \((\underline{x}_{t},\overline{x}_{t})\), and \(\underline{M}=A+B^{+}\underline{C}+B^{-}\overline{C}\) and \(\overline{M}=A+B^{+}\overline{C}+B^{-}\underline{C}\). We use the neural network from [6] (\(2\times 10\times 5\times 1\), ReLU). The performance of Algorithm 1 for various choices of \(\varepsilon\), \(D_{p}\), and \(D_{\text{N}}\) is shown in Figure 4, for the initial set \([2.5,3]\times[-0.25,0.25]\) and a final time of \(T=5\). Comparing with the literatureAdditionally, we compare the proposed ReachMM-CG to state-of-the-art partitioning algorithms for linear discrete-time systems: ReachMM [15] (uniform initial partitioning), ReachLP-Uniform [6] (ReachLP with uniform initial partitioning), ReachLP-GSG (ReachLP with greedy sim-guided partitioning) [6], and ReachLipBnB [14] (branch-and-bound using LipSDP [5]). Each algorithm is run with two different sets of hyper-parameters, aiming to compare their performances across various regimes. The setup for ReachMM-CG is \((\varepsilon,\,D_{p},\,D_{\text{N}})\); ReachMM is \((D_{p},\,D_{\text{N}})\); ReachLP-Uniform is \(\#\) initial partitions, ReachLP-GSG is \(\#\) of total propogator calls, ReachLipBnB is \(\varepsilon\). Their runtimes are averaged over 100 runs, with mean and standard deviation reported. The true areas of the reachable sets are computed using Python packages (Shapely, polytope). The performance is outlined in Table II, and notable reachable sets are displayed in Figure 5. ReachMM-CG outperforms SOTA across the board: for both setups, ReachMM-CG is significantly faster than the other methods while returning a tighter reachable set. Fig. 4: The over-approximated reachable sets of the closed-loop double integrator model (20) are shown in blue for the initial set \([2.5,3]\times[-0.25,0.25]\) and final time \(T=5\). They are computed using Algorithm 1 with the specified parameters in the title of each plot. The average runtime and standard deviation across 100 runs is reported, as well as the true area of the final reachable set. 200 true trajectories are shown in red. The horizontal axis is \(x_{1}\) and the vertical axis is \(x_{2}\). Fig. 3: The over-approximated reachable sets of the closed-loop nonlinear vehicle model (18) in the \((p_{x},p_{y})\) coordinates are shown in blue for the initial set \([7.9,8.1]^{2}\times[-\frac{2\pi}{3}-0.01,-\frac{2\pi}{3}+0.01]\times[1.99,2.01]\) over the time interval \([0,1.25]\). They are computed using Algorithm 1 with \(\varepsilon=[0.2,0.2,\infty,\infty]^{\top}\), \(D_{p}=2\), and \(D_{\text{N}}=1\). 
The average runtime across 100 runs with standard deviation is reported, as well as the volume of the over-bounding box at the final time \(T=1.25\). 200 true trajectories of the system are shown in the time-varying yellow line. ## VIII Conclusions In this paper, we propose an adaptive partitioning approach for interval reachability analysis of neural network controlled systems. The algorithm uses _separation_ of the neural network verifier and the dynamical system reachability tool, is _spatially aware_ in choosing the right locations of the state space to partition, and is _temporally aware_ to partition along trajectories rather than merely on the initial set. Using contraction theory for mixed monotone reachability analysis, we provide formal guarantees for the algorithm's performance. Finally, we run simulations to test the algorithm and show significant improvement in both runtime and accuracy as compared to existing partitioning approaches.
2310.03088
Physics-Informed Neural Networks for Accelerating Power System State Estimation
State estimation is the cornerstone of the power system control center since it provides the operating condition of the system in consecutive time intervals. This work investigates the application of physics-informed neural networks (PINNs) for accelerating power systems state estimation in monitoring the operation of power systems. Traditional state estimation techniques often rely on iterative algorithms that can be computationally intensive, particularly for large-scale power systems. In this paper, a novel approach that leverages the inherent physical knowledge of power systems through the integration of PINNs is proposed. By incorporating physical laws as prior knowledge, the proposed method significantly reduces the computational complexity associated with state estimation while maintaining high accuracy. The proposed method achieves up to 11% increase in accuracy, 75% reduction in standard deviation of results, and 30% faster convergence, as demonstrated by comprehensive experiments on the IEEE 14-bus system.
Solon Falas, Markos Asprou, Charalambos Konstantinou, Maria K. Michael
2023-10-04T18:14:48Z
http://arxiv.org/abs/2310.03088v1
# Physics-Informed Neural Networks for Accelerating Power System State Estimation ###### Abstract State estimation is the cornerstone of the power system control center, since it provides the operating condition of the system in consecutive time intervals. This work investigates the application of physics-informed neural networks (PINNs) for accelerating power systems state estimation in monitoring the operation of power systems. Traditional state estimation techniques often rely on iterative algorithms that can be computationally intensive, particularly for large-scale power systems. In this paper, a novel approach that leverages the inherent physical knowledge of power systems through the integration of PINNs is proposed. By incorporating physical laws as prior knowledge, the proposed method significantly reduces the computational complexity associated with state estimation while maintaining high accuracy. The proposed method achieves up to 11% increase in accuracy, 75% reduction in standard deviation of results, and 30% faster convergence, as demonstrated by comprehensive experiments on the IEEE 14-bus system. Machine learning, physics-informed neural networks, power systems, state estimation.

## I Introduction

Power system state estimation is crucial for reliable and secure grid operation. However, traditional techniques relying on SCADA measurements suffer from sparse and error-prone data, leading to delayed and less accurate estimates. Conventional state estimation techniques rely on complex iterative methods, which are prone to large delays in the case of large-scale power systems. Furthermore, if the measurement set of the state estimation includes both conventional measurements (i.e., power flow/injection) and measurements from Phasor Measurement Units (PMUs), large delays might affect the monitoring responsiveness of the state estimator for capturing short-duration transients. Machine learning approaches offer accelerated state estimation by processing measurements promptly after neural network training. This paper introduces physics-informed neural networks (PINNs) to meet the need for faster state estimation. While responsiveness may not be critical for conventional measurements with low reporting rates, high PMU observability in future power systems necessitates accelerated state estimation to leverage real-time PMU reporting.

Many of the existing machine learning techniques, if used for power system state estimation, demonstrate drawbacks in capturing the complex dynamics and constraints of power systems [1]. Relying solely on statistical patterns in historical data can lead to inaccurate estimates, particularly in scenarios with limited or noisy data. Moreover, training traditional learning models requires substantial amounts of data, which is costly to collect in power system applications. These limitations underscore the need for advanced approaches that combine machine learning with domain-specific physics knowledge to improve the accuracy and reliability of state estimation.

PINNs have emerged as a promising approach for solving complex problems in various scientific and engineering domains [2, 3], including power system state estimation [4]. PINNs offer several advantages over traditional machine learning techniques. Firstly, PINNs integrate domain-specific physical laws and constraints into the Neural Network (NN) architecture, enabling the incorporation of prior knowledge about the system behavior.
This ensures that learned models are consistent with the underlying physics, leading to more accurate and interpretable results. Secondly, PINNs can effectively handle data scarcity by leveraging physics-based regularization terms, reducing the reliance on large datasets. Additionally, PINNs enable the efficient handling of multi-dimensional inputs and outputs, making them suitable for complex power system modeling and control tasks. The flexibility and interpretability of PINNs make them valuable for various use-cases, including power system parameter estimation, fault detection, and diagnosis of power system operation.

In this paper, a novel NN training method for power system state estimation, leveraging the physics-informed approach, is presented. The proposed approach integrates the physical laws and constraints of power systems as prior knowledge into the NN training process. The performance of the proposed architecture is tested under various training scenarios, comparing it against a benchmark plain NN. The results demonstrate that the proposed PINN achieves higher accuracy, improved algorithmic stability, and requires less training effort compared to the benchmark model. These findings highlight the potential of PINNs as a powerful accelerator for power system state estimation, particularly in the context of the PMU era, where real-time and accurate estimation is crucial for ensuring reliable power system operation.

The remainder of this paper is organized as follows: Section II provides a review of related work in the field of state estimation and the application of NNs. Section III presents the methodology employed in this study, which includes a background on PINNs and the formulation of the training process. Section IV presents the experimental results, while Section V concludes the paper.

## II Related Work

Traditional power system state estimation techniques, such as the Gauss-Newton and Weighted Least Squares methods, have long relied on iterative approaches and measurements obtained from legacy sensors transmitted through SCADA systems. However, these methods can become computationally demanding when applied to large-scale power systems. Additionally, the sparse and infrequent nature of measurements from legacy sensors can result in delays and decreased accuracy, especially during dynamic system conditions. These limitations highlight the need for more advanced techniques that can overcome these challenges and provide more efficient and accurate state estimation in power systems.

In the past, efforts were focused on utilizing machine learning techniques, including artificial NNs and support vector machines, to overcome the limitations of traditional state estimation methods. By using historical data and statistical patterns, these approaches aim to improve the accuracy and efficiency of state estimation [5, 6]. However, a prevalent drawback is the omission of the fundamental physical laws and constraints that govern power systems. This oversight can lead to imprecise estimates, particularly when confronted with limited data or system variations from training conditions.

Physics-informed neural networks are a class of machine learning models that integrate physical laws and constraints into the NN architecture [3]. By incorporating prior knowledge about the system behavior, such as conservation laws and boundary conditions, PINNs enhance the accuracy and interpretability of the learned models.
This is achieved by enforcing the physics-based constraints as regularization terms during the training process, guiding the NN to produce predictions consistent with the underlying physics. In recent years, there has been growing interest in utilizing PINNs for power system state estimation. Previous studies have explored the application of PINNs for power system parameter estimation, dynamic state estimation, and fault detection, among others [4, 7, 8]. These works have shown promising results, demonstrating the capability of PINNs to capture the complex dynamics of power systems and handle data scarcity. This approach of using PINNs for power system state estimation offers significant advantages and paves the way for future research in this field. Firstly, it simplifies the NN architecture design by eliminating the need for system-specific designs based on system topology. This enhances flexibility and applicability across different power system configurations. Secondly, the proposed approach is not dependent on the optimal placement of PMUs or legacy sensors, making it adaptable to various measurement configurations. Lastly, it eliminates the reliance on modeling time-series dependent events, making it well-suited for real-time applications requiring accurate and timely state estimation. These advantages position the proposed approach as a valuable direction for further exploration and development in power system state estimation using PINNs. ## III Physics-Informed Neural Networks (PINNs) ### _Loss Function Augmentation with Physics_ The physics-derived information can be introduced to a neural network in various ways. In this paper, additional terms are introduced in the loss function that enforce the physics-based constraints during the training process. These terms can be formulated based on known relationships, equations, or laws governing the system. For example, in the context of power system state estimation, constraints related to power flow equations, Kirchhoff's laws, or the admittance matrix can be incorporated. By including these physics-based terms in the loss function, we ensure that the neural network's predictions are consistent with the physical laws and constraints of the system. During training, the network is encouraged to minimize both the discrepancy between predicted and actual values and the violation of the imposed physics-based constraints. The loss function (hereafter \(Loss\)) in the proposed PINN approach is calculated as the sum of the _Mean Square Error (MSE)_ between the actual and inferred data (\(u\)) and the _Mean Absolute Error (MAE)_ derived from formulating a physics equation that must be satisfied and should ideally equal zero (\(f\)). Further, the terms \(\lambda_{1}\) and \(\lambda_{2}\) are introduced to the proposed PINN approach, which are variable weights that can be manipulated in order to change the influence of each part of the \(Loss\) during the training process. The \(Loss\) function can be generally expressed as, \[Loss=\lambda_{1}\cdot u+\lambda_{2}\cdot f \tag{1}\] where \(u\) represents the _data-driven_ part of the loss function, while \(f\) is the _physics-informed_ part. An overview of the approach can be seen in Fig. 1. This regularization approach helps prevent the neural network from overfitting to the training data and generating unrealistic or physically implausible results. 
By incorporating the physics-based constraints as regularization terms, along with variables to control their weight, a balance between data-driven learning and adherence to the underlying physics is achieved. ### _Physics Formulation and Loss Function Integration_ The essence of PINNs is to incorporate physical characteristics of the system into the learning process. This is achieved by adjusting the loss function and introducing a new parameter that guides the optimizer in adjusting the weights and biases of the neural network's neurons while considering the system's physics.
Fig. 1: PINN training procedure dataflow diagram. A combination of a _data-driven_ (\(u\)) and a _physics-driven_ (\(f\)) loss function adjusts the weights/biases of the NN in an iterative process.
The task of power system state estimation involves inferring the voltage magnitude and voltage angle at each bus of the system. In this context, the regression results of the PINN are the complex voltages. The choice of using the active power (\(P_{i}\)) and reactive power injection (\(Q_{i}\)) measurements as the input dataset for the neural network is driven by the fact that they are valuable for learning about the system's topology through data-driven patterns. The active and reactive power injections are calculated as, \[P_{i}=V_{i}\cdot\sum_{j\in N}V_{j}(G_{ij}\cos\theta_{ij}+B_{ij}\sin\theta_{ij}) \tag{2a}\] \[Q_{i}=V_{i}\cdot\sum_{j\in N}V_{j}(G_{ij}\sin\theta_{ij}-B_{ij}\cos\theta_{ij}) \tag{2b}\] where \(V_{i}\) and \(V_{j}\) represent the voltage magnitudes of buses \(i\) and \(j\), respectively, \(\theta_{ij}\) is the phase angle difference between buses \(i\) and \(j\), \(G_{ij}\) and \(B_{ij}\) are the real and imaginary parts of the admittance matrix, and \(N\) represents the number of buses in the system. Based on (2a) and (2b), the net power injected at a bus is related to the complex voltage of that bus as well as of the buses connected to it. The equations also involve the admittance matrix of the power system, which contains topological and connectivity information. As shown in Eqs. (2a) and (2b), the power injections do not depend on time; they are defined purely by the relationship between the complex powers \(P_{i}\), \(Q_{i}\) and the complex bus voltages. By learning the relationships between complex powers and voltages, the neural network can capture the underlying system topology without explicitly incorporating system-specific designs. Similarly, the choice of utilizing current injections as the physics regularization parameter in the PINN is justified by their inherent relationship with the complex voltages, which involves the admittance matrix containing topological and connectivity information about the power system. The calculation of current injections does not introduce any time-series dependencies and can effectively serve as a physics-based constraint for the neural network. As shown in Eq. (3), which is in matrix format, there is a direct relationship between the voltage (\(V\)) and current phasors (\(I\)), involving the admittance matrix (\(Y\)). \[\mathbf{I}=\overline{\mathbf{Y}}\cdot\mathbf{V} \tag{3}\] By incorporating injection currents as a regularization parameter, it is ensured that the learned model adheres to the physical laws and constraints governing the power system. 
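To make Eq. (3) concrete, the current injections implied by a batch of predicted voltages can be computed directly from the admittance matrix. Below is a minimal NumPy sketch; the function name, variable names, and the toy admittance values are illustrative assumptions, not the paper's code:

```python
import numpy as np

def current_injections(v_mag, v_ang, Y):
    """Complex bus current injections I = Y * V (Eq. 3).

    v_mag, v_ang : (N,) predicted voltage magnitudes [p.u.] and angles [rad].
    Y            : (N, N) complex bus admittance matrix.
    """
    V = v_mag * np.exp(1j * v_ang)  # polar -> rectangular complex voltages
    return Y @ V

# Toy 2-bus example; the admittance values are illustrative only.
Y = np.array([[ 5.0 - 15.0j, -5.0 + 15.0j],
              [-5.0 + 15.0j,  5.0 - 15.0j]])
I_pred = current_injections(np.array([1.02, 0.98]), np.array([0.0, -0.05]), Y)
```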
Using current injections as the regularization quantity not only promotes algorithmic stability and accuracy in the estimation process but also exploits the topological information encoded in the admittance matrix \(Y\) to further enhance the network's understanding of the power system's behavior. The PINN is trained using backpropagation, a widely used technique in training neural networks. It involves the iterative adjustment of network parameters through the minimization of a loss function. By employing backpropagation, the network propagates the error represented by the loss backwards through the layers (hence the loop in Fig. 1), updating the weights and biases of the neurons. This process involves calculating the gradients of the loss function with respect to the network parameters and using these gradients to adjust the parameters in a way that minimizes the \(Loss\). Through repeated iterations of forward propagation, error calculation, and backpropagation, the network gradually learns to improve its predictions and minimize the \(Loss\), with the aim of finding the optimal set of network parameters that minimizes the overall discrepancy between predicted and true values. For the \(u\) part of the \(Loss\) function in Eq. (1), the network computes its predictions in the form of \(V_{mag}\) and \(V_{ang}\), representing the magnitude and angle of the voltage at each bus. The MSE is calculated by comparing these predictions with the ground truth values \(V_{true}\). The MSE quantifies the discrepancy between the predicted and actual values, providing a measure of the network's performance: the closer the MSE is to zero, the more accurate the network's output. This process is represented as the comparison of \(V_{mag}\) and \(V_{ang}\) with \(V_{true}\) in Fig. 1, resulting in the \(u\) part of the \(Loss\), which is derived as: \[u=Mean((V_{pred}-V_{true})^{2}) \tag{4}\] The \(f\) part of Eq. (1) is derived from the comparison of the output of the neural network, \(V_{mag}\) and \(V_{ang}\), to the _Current Injection Ground Truth_ dataset \(I_{true}\), as shown in Fig. 1. This dataset is constructed from the input dataset, \(P_{i}\) and \(Q_{i}\), and from \(V_{true}\), so no new data is required to enable this procedure. \(V_{mag}\) and \(V_{ang}\) from the current training batch are used in conjunction with the admittance matrix \(Y\) to generate current injection values \(I_{pred}\) at each training epoch. \(I_{pred}\) is then compared to the ground truth \(I_{true}\) to derive their absolute difference. Therefore, the \(f\) part of the \(Loss\) is defined as: \[f=Mean(|I_{pred}-I_{true}|) \tag{5}\] In order to avoid scaling issues when adding two values derived from different datasets, the two parts of the loss function are re-scaled to \([0,1]\): \[u_{norm}=\frac{Mean((V_{pred}-V_{true})^{2})}{Max((V_{pred}-V_{true})^{2})} \tag{6a}\] \[f_{norm}=\frac{Mean(|I_{pred}-I_{true}|)}{Max(|I_{pred}-I_{true}|)} \tag{6b}\] Hence, the final \(Loss\) function is defined as: \[Loss=\lambda_{1}\cdot\frac{Mean((V_{pred}-V_{true})^{2})}{Max((V_{pred}-V_{true})^{2})}+\lambda_{2}\cdot\frac{Mean(|I_{pred}-I_{true}|)}{Max(|I_{pred}-I_{true}|)} \tag{6c}\] ## IV Experimental Setup & Results ### _Network Hyper-parameters and Dataset Pre-processing_ The experiments in this study were conducted using a neural network architecture and specific training parameters, as detailed in Table I. 
Hyper-parameter tuning is beyond the scope of this work; therefore, we adopted commonly used parameters from the literature as a reasonable baseline for evaluation, such as a feedforward neural network using the Adam optimizer and the hyperbolic tangent as a non-linear activation function. However, future research can explore different hyper-parameter settings and conduct extensive optimization to enhance the performance and robustness of the proposed approach. To assess the performance of the PINN, we generate two datasets representing different scenarios of the IEEE 14-bus system benchmark [9] utilizing the PowerWorld software. The first dataset represents a steady-state condition changing the loads of the system (as in a usual system), while the second dataset captures a 20-second period encompassing the sudden shutdown of the generator at _Bus 2_ and the subsequent recovery period. These datasets enable the evaluation of the PINN's accuracy and algorithmic stability in estimating power system states under both normal and transient operating conditions, providing valuable insights into its performance and resilience. To ensure the accuracy of these results, a 5-fold cross-validation training strategy is employed, dividing the dataset into five subsets and performing five training iterations, where each subset served as a validation set once. The dataset pre-processing procedure consisted of several steps to ensure optimal training of the neural network. Firstly, to avoid issues with zero division during standardization or re-scaling, any zero values in the dataset were replaced with a small non-zero value (\(1e-8\)). Next, in order to have realistic data, measurement noise was introduced to the simulation data to enhance the model's ability to handle variations and uncertainties in real-world scenarios. The noise levels as well as the methodology that was used for adding noise can be found in [10]. To smooth out any anomalies in the data, the input dataset (\(P_{i}\), \(Q_{i}\)) was standardized, i.e., normalized using the _mean_ and _standard deviation_. Additionally, to facilitate the use of the _tanh_ activation function, the input dataset was re-scaled to \([-1,1]\). Regarding the measurement configuration, it is implicitly assumed that real/reactive power injection measurements are available from every bus of the system. In the case of the steady-state dataset, the measurements can be provided by the SCADA system, while in the transient operating conditions these measurements can be provided by a PMU-based system. ### _Results & Discussion_ The results for the two scenarios, namely the "steady state" and "generator shut-down", are presented in Table II and Table III, respectively. The results are presented both in individual and in normalized format, in order to show the relative performance of each experiment to the baseline NN as a percentage change. Different training regimes are employed by adjusting the values of \(\lambda_{1}\) and \(\lambda_{2}\), as defined in Section III-B, Eq. (6c), to balance the data-driven and physics-informed aspects of the learning process. The training regimes, represented by the "\(\%\)_increment_" scenarios, involved gradually decreasing \(\lambda_{1}\) and increasing \(\lambda_{2}\) at specific intervals during the training epochs. For example, in the "\(10\%\)_increment_" scenario, \(\lambda_{1}\) decreased by \(10\%\) and \(\lambda_{2}\) increased by \(10\%\) every \(100\) epochs. 
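As an illustration, such a regime, together with the normalized loss of Eq. (6c), can be sketched as follows. This is a minimal PyTorch sketch that assumes \(\lambda_{1}\) starts at 1.0 and \(\lambda_{2}\) at 0.0; the starting values and all names are illustrative assumptions:

```python
import torch

def loss_weights(epoch, step=100, increment=0.10):
    """'%increment' regime: every `step` epochs, shift weight from the
    data-driven term u to the physics-informed term f (assumed to start
    at lambda1 = 1.0, lambda2 = 0.0)."""
    k = epoch // step
    return max(0.0, 1.0 - increment * k), min(1.0, increment * k)

def pinn_loss(V_pred, V_true, I_pred, I_true, lam1, lam2):
    """Normalized composite loss of Eq. (6c)."""
    sq = (V_pred - V_true) ** 2   # voltage errors for u (Eq. 6a)
    ab = (I_pred - I_true).abs()  # current errors for f (Eq. 6b)
    u = sq.mean() / sq.max()
    f = ab.mean() / ab.max()
    return lam1 * u + lam2 * f

# Weights at epochs 0, 250, and 950 under 10% increments.
for epoch in (0, 250, 950):
    print(epoch, loss_weights(epoch))  # (1.0, 0.0), (0.8, 0.2), (0.1, 0.9)
```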
This progressive adjustment allows the training to prioritize data-driven learning in the initial stages and gradually focus on leveraging the physics-informed constraints to fine-tune the model's performance. By carefully controlling these parameters, the learning process was optimized and the PINN achieved improved accuracy over the plain NN. To show the improved accuracy of the PINN over the NN, the results in Table II and Table III are averaged over all the training methods and plotted in the first three bars for the two datasets, as seen in Fig. 2. The accuracy for the steady state scenario is improved by \(6.7\%\) on average, and by \(4.6\%\) on the generator shut-down dataset. Employing a 5-fold cross-validation training strategy enabled us to assess the similarity and sensitivity of each approach to variations in the datasets. To evaluate the algorithmic stability of the training process, we calculate the standard deviation of validation error across the population of folds, providing a quantitative measure of the consistency of the results obtained from different training folds. This analysis helped us evaluate the generalization capability of the PINN model and provided insights into the model's sensitivity to changes in the training data, showing that introducing physics-based regularization in the training process greatly benefited the resulting neural network across all different scenarios of weight adjustments. The mean standard deviation between the training folds has been improved by \(57.8\%\) for the steady state dataset and by \(56.6\%\) for the generator shut-down dataset.
Fig. 2: The PINNs' average performance, across different increment scenarios and datasets, normalized to the benchmark NN.
At each fold training process, the epoch that yielded the best validation error is noted. This allows us to identify the epoch at which the model achieved its maximum performance, pinpointing the optimal point within the 1000 training epochs. The ability to reach peak performance earlier in the training process indicates that the physics-enhanced model requires less training effort and can expedite the convergence to the desired accuracy. The PINN reached peak performance \(18.1\%\) and \(24.6\%\) faster than the NN within the 1000-epoch training regime of the two datasets, as shown in Fig. 2. ## V Conclusion & Future Work In conclusion, we proposed a novel PINN approach for power system state estimation. Extensive experiments demonstrated that the proposed PINN achieved higher accuracy, required less training effort, and demonstrated improved algorithmic stability in regression results compared to a benchmark NN in the state estimation of power systems under high availability of measurements. In future work, we will investigate the application of PINNs in limited observability scenarios and conduct hyperparameter tuning to optimize their performance. Additionally, we will explore the possibility of PINNs being better than NNs when a dataset that includes faults is used, as they can incorporate physical constraints into the training process. ## Acknowledgment This work was partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016912 (ELECTRON), and by the European Union's Horizon 2020 research and innovation programme under grant agreement No 739551 (KIOS CoE - TEAMING) and from the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy. 
This publication is partially based upon work supported by King Abdullah University of Science and Technology (KAUST) under Award No. ORFS-CRG11-2022-5021.
2308.05843
GaborPINN: Efficient physics informed neural networks using multiplicative filtered networks
The computation of the seismic wavefield by solving the Helmholtz equation is crucial to many practical applications, e.g., full waveform inversion. Physics-informed neural networks (PINNs) provide functional wavefield solutions represented by neural networks (NNs), but their convergence is slow. To address this problem, we propose a modified PINN using multiplicative filtered networks, which embeds some of the known characteristics of the wavefield in training, e.g., frequency, to achieve much faster convergence. Specifically, we use the Gabor basis function due to its proven ability to represent wavefields accurately and refer to the implementation as GaborPINN. Meanwhile, we incorporate prior information on the frequency of the wavefield into the design of the method to mitigate the influence of the discontinuity of the represented wavefield by GaborPINN. The proposed method achieves up to a two-magnitude increase in the speed of convergence as compared with conventional PINNs.
Xinquan Huang, Tariq Alkhalifah
2023-08-10T19:51:00Z
http://arxiv.org/abs/2308.05843v1
# GaborPINN: Efficient physics informed neural networks using multiplicative filtered networks ###### Abstract The computation of the seismic wavefield by solving the Helmholtz equation is crucial to many practical applications, e.g., full waveform inversion. Physics-informed neural networks (PINNs) provide functional wavefield solutions represented by neural networks (NNs), but their convergence is slow. To address this problem, we propose a modified PINN using multiplicative filtered networks, which embeds some of the known characteristics of the wavefield in training, e.g., frequency, to achieve much faster convergence. Specifically, we use the Gabor basis function due to its proven ability to represent wavefields accurately and refer to the implementation as GaborPINN. Meanwhile, we incorporate prior information on the frequency of the wavefield into the design of the method to mitigate the influence of the discontinuity of the represented wavefield by GaborPINN. The proposed method achieves up to a two-magnitude increase in the speed of convergence as compared with conventional PINNs. ## 1 Introduction Seismic wavefield simulation is a crucial and computationally intensive part of many seismic imaging problems, e.g., reverse time migration and full waveform inversion (FWI). An efficient simulation approach is quite critical to practical applications. Compared to the time domain, frequency-domain modeling is often more efficient for applications like FWI [1; 2]. However, the solution requires the calculation of the inverse of the impedance matrix, which consumes a lot of memory, and this problem becomes more drastic as the model size increases, like in 3D. Furthermore, dense discretization is required for high-accuracy simulation when dealing with irregular geometry, e.g., topography and complex subsurface structures. With the recent developments in machine learning in science and engineering (so-called scientific machine learning), one type of approach, which embeds physical knowledge into the training, named physics-informed neural network (PINN), has provided the potential to solve this problem [3]. Specifically, we could use a neural network (NN) to represent the wavefield as a function of space (and time) and train it to satisfy the governing equation; the simulation then amounts to querying the network, with space coordinates (and time) as input and the wavefield value as output. This machine-learned function form of the wavefield [4; 5; 6] allows for easy handling of irregular geometry and can adapt to more complex wave equations corresponding to more complex media [7]. However, the scalability of the approach is limited by the cost of the training for PINNs [8]. Specifically, the PINN training often provides solutions for one instance (e.g., one velocity model), and the convergence of each training may require thousands of epochs, making the total computational cost less competitive than numerical methods. So, _how to make PINNs learn faster_ is an interesting, challenging, but unavoidable question. As we know, there are three main components in PINNs: the neural network architecture, the training process, and the loss function [9]. 
As for the training process, Huang and Alkhalifah [10] proposed a frequency upscaling and neuron splitting algorithm, resulting in a more stable and faster convergence, and Waheed _et al._ [11] proposed to use transfer learning to improve the computational efficiency by reducing the epochs needed for convergence when applied to new velocity models. As for the loss function design, Xiang _et al._ [12] proposed a self-adaptive loss function through adaptive weights for each loss term to adjust the collocation point samples in the domain and improve the accuracy of PINNs. Huang and Alkhalifah [13] proposed a single reference frequency loss function to improve the convergence of the multi-frequency wavefield representation. As for the neural network architecture design, almost all backbone architectures used are vanilla MLPs with different activation functions. In this paper, we focus on the development of this aspect. To make the neural network fit the wavefield faster, prior knowledge should be embedded in the design of the PINN. For example, the Gabor function has been shown to effectively represent the seismic wavefield [14]. Inspired by this fact, we include the Gabor function into the neural network by means of the Multiplicative filtered network (MFN) [15] to accelerate the convergence of PINNs. We propose a modified PINN with MFN and refer to this network as GaborPINN. In this framework, we represent the wavefield by a linear combination of Gabor basis functions of the input coordinates (GaborNet), in which the scale factor is determined by the frequency of the wavefield. The prior information on this combination is beneficial for the convergence of the fitting, as the seismic wavefield could naturally be represented as a linear combination of basis functions [16], e.g., a Gabor basis. Although Fathony _et al._ [15] mentioned that this type of NN retains some drawbacks, such as the lack of smoothness in the represented function and its gradients, we found that with the proper scale selection for the Gabor function, this problem can be avoided. We demonstrate the advantages of the method on a simple layered model extracted from the Marmousi model and also discuss the scale factor selection. Further experiments on higher-frequency wavefields show that the proposed method results in faster convergence and provides higher accuracy where the vanilla PINN fails. ## 2 Methodology The framework of PINNs aims to train an NN function by using the governing equations of the physical system as a loss function. Here, we take the Helmholtz equation for a scattered wavefield [4] as an example, \[\frac{\omega^{2}}{\mathbf{v}^{2}}\delta\mathbf{U}+\nabla^{2}\delta\mathbf{U}+ \omega^{2}(\frac{1}{\mathbf{v}^{2}}-\frac{1}{\mathbf{v}_{0}^{2}})\mathbf{U}_ {0}=0, \tag{1}\] where \(\mathbf{U}_{0}\) is the background wavefield analytically calculated for a constant velocity \(\mathbf{v}_{0}\) [17]: \[\mathbf{U}_{0}(x,z)=\frac{i}{4}\boldsymbol{H}_{0}^{(2)}\left(\omega\sqrt{ \frac{\left\{\left(x-s_{x}\right)^{2}+\left(z-s_{z}\right)^{2}\right\}}{ \mathbf{v}_{0}^{2}}}\right), \tag{2}\] where \(\boldsymbol{H}_{0}^{(2)}\) is the zero-order Hankel function of the second kind, \(\delta\mathbf{U}\) is the scattered wavefield and \(\delta\mathbf{U}=\mathbf{U}-\mathbf{U}_{0}\), \(\mathbf{v}\) is the velocity, and \(\omega\) is the angular frequency. Unlike the full wavefield, the scattered wavefield helps us avoid the point source singularity [4]. 
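For reference, the background wavefield of Eq. (2) can be evaluated in closed form. Below is a minimal NumPy/SciPy sketch; the grid and source parameters are illustrative, and the radius is clipped to sidestep the singularity at the source:

```python
import numpy as np
from scipy.special import hankel2

def background_wavefield(x, z, sx, sz, omega, v0):
    """Analytic background wavefield U0 of Eq. (2) for constant velocity v0."""
    r = np.sqrt((x - sx) ** 2 + (z - sz) ** 2)
    r = np.maximum(r, 1e-6)  # avoid the point-source singularity at r = 0
    return 0.25j * hankel2(0, omega * r / v0)

# 4 Hz background wavefield on a 2.5 x 2.5 km grid, source near the surface.
omega = 2.0 * np.pi * 4.0
x, z = np.meshgrid(np.linspace(0.0, 2.5, 101), np.linspace(0.0, 2.5, 101))
U0 = background_wavefield(x, z, sx=1.25, sz=0.025, omega=omega, v0=1.5)
```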
We define an NN function \(\Phi(\mathbf{x})\) to map from the input coordinates to the scattered wavefield value at the input location, where \(\mathbf{x}=\{x,z,s_{x}\}\) (in the 2D case, with sources placed on the surface) represents the spatial coordinates and source location. To train an NN to satisfy the governing equation 1, we evaluate the PDE residuals, given the input coordinates and the output wavefield of the NN, as a loss function to train the NN. Thus, the loss function is defined as \[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{\omega^{2}}{(\mathbf{v}^{i}) ^{2}}\Phi\left(\mathbf{x}^{i}\right)+\nabla^{2}\Phi\left(\mathbf{x}^{i}\right) +(\frac{\omega^{2}}{(\mathbf{v}^{i})^{2}}-\frac{\omega^{2}}{(\mathbf{v}_{0}^{ i})^{2}})U_{0}^{i}\right|_{2}^{2}, \tag{3}\] where \(U_{0}^{i}\) are samples of \(\mathbf{U}_{0}\) at point \(x^{i}\). With this loss function, the NN can be trained as an alternative to the seismic wavefield simulation. However, using the loss function (Equation 3) without additional constraints may allow the network to converge to trivial solutions, e.g., a solution proportional to the negative background wavefield. Inserting \(\Phi=-\mathbf{U}_{0}\) into equation 3 yields \[\begin{split}\mathcal{L}_{t}&=\frac{1}{N}\sum_{i=1}^{N} \left|-\frac{\omega^{2}}{(\mathbf{v}^{i})^{2}}U_{0}^{i}-\nabla^{2}U_{0}^{i}+ \omega^{2}(\frac{1}{(\mathbf{v}^{i})^{2}}-\frac{1}{(\mathbf{v}_{0}^{i})^{2}})U _{0}^{i}\right|_{2}^{2}\\ &=\frac{1}{N}\sum_{i=1}^{N}\left|-\nabla^{2}U_{0}^{i}+\omega^{2}( -\frac{1}{(\mathbf{v}_{0}^{i})^{2}})U_{0}^{i}\right|_{2}^{2}.\end{split} \tag{4}\] Since the background wavefield \(\mathbf{U}_{0}\) satisfies the Helmholtz equation: \[\left(\frac{\omega^{2}}{\mathbf{v}_{0}^{2}}+\nabla^{2}\right)\mathbf{U}_{0}( \mathbf{x})=\mathbf{s}, \tag{5}\] where \(\mathbf{s}\) is the point source, \(\Phi=-\mathbf{U}_{0}\) is a trivial solution. Thus, the value of the loss \(\mathcal{L}_{t}\), for most collocation points (samples that are not close to the source) in the domain, is equal to zero for the trivial solution. As the NN is trained to minimize the loss function for all collocation points, training a network without proper initialization or other strategies (like PINNup [10]) would push the NN to the trivial solution. In this paper, to make the training of the PINN stable and avoid the trivial solution, we propose a soft constraint (penalty term) to push the NN onto the right path, away from the trivial solution. Considering that the value of the scattered wavefield near the source location is close to zero, while the value of the background wavefield near the source is often large, we add a penalty term that regularizes the predicted wavefield near the source (the region covering an area of almost one wavelength around the source location). This penalty is given by \[\mathcal{L}_{reg}=\frac{1}{N_{reg}}\sum_{i=1}^{N_{reg}}\left|\Phi(\mathbf{x}^{ i})\right|_{2}^{2}. \tag{6}\]
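A sketch of how the residual loss of Eq. (3) and the penalty of Eq. (6) can be assembled with automatic differentiation is given below. For brevity, the scattered wavefield is treated as real valued; in practice the real and imaginary parts are handled analogously, and all names are illustrative assumptions:

```python
import torch

def laplacian(u, x, z):
    """u_xx + u_zz for a pointwise network output u(x, z) via autograd.
    x and z must be tensors created with requires_grad=True."""
    u_x, u_z = torch.autograd.grad(u.sum(), (x, z), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    u_zz = torch.autograd.grad(u_z.sum(), z, create_graph=True)[0]
    return u_xx + u_zz

def scattered_loss(phi, x, z, sx, v, v0, U0, omega, near_src):
    """PDE residual loss (Eq. 3) plus the near-source penalty (Eq. 6).

    phi      : network mapping (x, z, sx) -> scattered wavefield dU
    near_src : boolean mask of points within ~1 wavelength of the source
    """
    dU = phi(x, z, sx)
    residual = (omega ** 2 / v ** 2) * dU + laplacian(dU, x, z) \
        + omega ** 2 * (1.0 / v ** 2 - 1.0 / v0 ** 2) * U0
    loss_pde = (residual ** 2).mean()
    loss_reg = (dU[near_src] ** 2).mean()
    return loss_pde + loss_reg
```

The mask-based penalty simply drives the predicted scattered wavefield toward zero near the source, steering training away from the trivial solution discussed above.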
In the vanilla PINN, the backbone NN is a multilayer perceptron (MLP), given by the following form \[\mathbf{h}^{(i+1)}=\sigma\left(\mathbf{W}^{(i)}\mathbf{h}^{(i)}+\mathbf{b}^{( i)}\right),i=1,\ldots,L-1, \tag{7}\] where \(\mathbf{h}^{(i)}\) is the hidden layer output of layer \(i\), \(\sigma\) is a nonlinear activation function (we use the sine function here), \(\mathbf{W}\) is the weight matrix, and \(\mathbf{b}\) is the bias vector. In our method, we use the multiplicative filter network [15], which uses a different recursion that never results in the composition of nonlinear functions. The hidden layer, indexed by \(i\), is defined as \[\mathbf{h}^{(i+1)}=\left(\mathbf{W}^{(i)}\mathbf{h}^{(i)}+\mathbf{b}^{(i)} \right)\circ f\left(\mathbf{x};\theta^{(i+1)}\right),i=1,\ldots,k-1 \tag{8}\] where \(f\) is the filter function parameterized by \(\theta\) directly applied to the input \(\mathbf{x}\), and \(\circ\) is an elementwise multiplication. Specifically, we use the Gabor function here (GaborNet) because, as a basis function, it represents the wavefield well; it is given by \[f_{j}\left(\mathbf{x};\theta^{(i)}\right)=\exp\left(-\frac{\gamma_{j}^{(i)}}{2 }\left\|\mathbf{x}-\boldsymbol{\mu}_{j}^{(i)}\right\|_{2}^{2}\right)\sin \left(\boldsymbol{\omega}_{j}^{(i)}\mathbf{x}+\phi_{j}^{(i)}\right) \tag{9}\] where \(\theta^{(i)}\) is a group of parameters that control the shape of the Gabor kernel, including \(\gamma_{j}^{(i)},\mu_{j}^{(i)},\omega_{j}^{(i)},\phi_{j}^{(i)}\), and \(j\) is the index of the neurons in layer \(i\). Thus, we refer to this implementation combined with PINN as GaborPINN; a diagram of it is shown in Figure 1. For the first layer, \(\mathbf{h}^{(1)}=f\left(\mathbf{x};\theta^{(1)}\right)\). Fathony _et al._ [15] mentioned that the MFN is generally rougher (less smooth) than the conventional MLP in representation and gradient calculation. This is because the wavenumber for the sine plane wave, \(\boldsymbol{\omega}_{j}^{(i)}\), in equation 9 ends up being very large. So, we propose here to connect the hyperparameter \(\boldsymbol{\omega}_{j}^{(i)}\) to the frequency of the wavefield. Specifically, we initialize \(\boldsymbol{\omega}_{j}^{(i)}\) using \[\boldsymbol{\omega}_{j}^{(i)}=\{\omega_{scale}*\sqrt{\gamma_{j}^{(i)}},\omega_ {scale}*\sqrt{\gamma_{j}^{(i)}},\omega_{scale}*\sqrt{\gamma_{j}^{(i)}}\} \tag{10}\] where \(\omega_{scale}\) is the scale factor that controls the amplitude of the initialization of \(\omega_{j}^{(i)}\). For different input coordinates, the initializations are the same but will be updated to different values after training. We discuss the selection of this scale factor later.
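The architecture of Eqs. (7)-(10) can be summarized in a short PyTorch sketch. The parameter initializations below (e.g., the Gamma-distributed \(\gamma\) and the uniform \(\mu\)) are illustrative assumptions rather than the exact released implementation:

```python
import numpy as np
import torch
import torch.nn as nn

class GaborLayer(nn.Module):
    """One multiplicative Gabor filter (Eq. 9) with the frequency-scaled
    initialization of Eq. (10). Shapes and init details are assumptions."""

    def __init__(self, in_dim, hidden_dim, omega_scale=32.0):
        super().__init__()
        self.mu = nn.Parameter(torch.rand(hidden_dim, in_dim) * 2 - 1)
        self.gamma = nn.Parameter(
            torch.distributions.Gamma(1.0, 1.0).sample((hidden_dim,)))
        self.phase = nn.Parameter(torch.rand(hidden_dim) * np.pi)
        # Eq. (10): plane-wave frequency tied to the Gaussian scale gamma.
        self.freq = nn.Parameter(
            omega_scale * torch.sqrt(self.gamma.detach())[:, None]
            * torch.ones(hidden_dim, in_dim))

    def forward(self, x):
        dist2 = ((x[:, None, :] - self.mu[None, :, :]) ** 2).sum(-1)
        return torch.exp(-0.5 * self.gamma * dist2) * \
               torch.sin(x @ self.freq.T + self.phase)

class GaborNet(nn.Module):
    """MFN recursion h^{(i+1)} = (W h^{(i)} + b) * g_i(x) (Eq. 8)."""

    def __init__(self, in_dim=3, hidden=256, layers=3, out_dim=2,
                 omega_scale=32.0):
        super().__init__()
        self.filters = nn.ModuleList(
            [GaborLayer(in_dim, hidden, omega_scale) for _ in range(layers)])
        self.linears = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, out_dim)  # real & imaginary parts

    def forward(self, x):
        h = self.filters[0](x)                  # h^{(1)} = g_1(x)
        for lin, filt in zip(self.linears, self.filters[1:]):
            h = lin(h) * filt(x)                # Eq. (8)
        return self.out(h)
```

Each hidden unit is thus a Gabor atom whose plane-wave frequency is tied to \(\omega_{scale}\) at initialization, which is how the prior knowledge of the wavefield's frequency content enters the network.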
## 3 Examples In this section, we test three versions of PINNs to evaluate the convergence and accuracy improvements of GaborPINN. The tests are based on a simple 2.5\(\times\)2.5 \(km^{2}\) layered model extracted from the Marmousi model (Figure 2). We use 40000 random samples from this region for training, and each sample is given by the spatial coordinates \(x\), \(z\) for the wavefield, \(x_{s}\) for the location of the source near the surface, the velocity \(v\), and a constant background velocity of 1.5 \(km/s\). The depth of the sources is fixed at 0.025 \(km\). We train the network for a frequency-domain wavefield of 4 Hz using an Adam optimizer for 50000 epochs. To evaluate the results, we solve the Helmholtz equation for a frequency of 4 Hz using the finite-difference method with a fine grid spacing for accuracy, to act as a reference. In the following experiments, we show the results of an MLP and a GaborPINN comprised of 3 hidden layers with 256 neurons in each layer, and a larger MLP comprised of 3 hidden layers with 512 neurons in each layer. A comparison with respect to network size and computational complexity is shown in Table 1. Figure 3 shows the loss curves for these three different NNs, in which the NNs are trained by the physical loss. GaborPINN has by far the best convergence compared to the other networks. The loss function for the vanilla PINN stagnates for thousands of epochs and decreases slowly, as it has no prior information related to the seismic wavefield. On the other hand, we can see that with GaborPINN, which incorporates the prior information of the wavefield into the network design, as well as injecting the input coordinates at every layer, the convergence is quite good and we can reduce the needed training epochs by two orders of magnitude. \begin{table} \begin{tabular}{c|c|c|c} \hline NN type & Trainable parameters & \# & Computational Complexity \\ \hline MLP-256 & \(\{\mathbf{W}^{(i)},\mathbf{b}^{(i)}\}\) & 133.12 k & 133.12 KMac \\ GaborPINN & \(\{\mathbf{W}^{(i)},\mathbf{b}^{(i)},\gamma_{j}^{(i)},\boldsymbol{\mu}_{j}^{(i )},\boldsymbol{\omega}_{j}^{(i)},\phi_{j}^{(i)}\}\) & 206.08 k & 201.99 KMac \\ MLP-512 & \(\{\mathbf{W}^{(i)},\mathbf{b}^{(i)}\}\) & 528.39 k & 528.39 KMac \\ \hline \end{tabular} \end{table} Table 1: The capacity and complexity comparison between MLP and GaborPINN
Figure 1: The GaborPINN framework for seismic wavefield simulation.
Figure 2: True velocity (a); the real (b) and imaginary (c) parts of the 4 Hz scattered wavefield calculated numerically.
To better understand how many training epochs are needed for GaborPINN to provide a solution, we visualize the results at the 300th, 600th, 1800th, and 50000th epochs (Figure 4). The GaborPINN reconstructs the main parts of the wavefield within hundreds of epochs, and the details of the predictions are refined as the training progresses. However, as for the vanilla PINN, the NN learns nothing in the first 1800 epochs, which is a common problem for wavefield solutions using PINNs. Even when increasing the width of the NN in the vanilla PINN, the slow convergence persists. The scale used here for the initialization of \(\omega_{j}^{(i)}\) is 32, which differs from what was used in the original GaborNet. We found that a smaller value like 16 would introduce smoother, less accurate results, while a larger value would result in a failure of GaborPINN. So we test GaborPINN for different values of the initial spatial frequency (wavenumber) \(\omega_{j}^{(i)}\) in the network under a supervised training setting on samples from a regular grid, \(\mathbf{x}_{train}\) (Figure 5), and use a grid that is slightly shifted from the training samples to show the wavefields. We found that a decrease in the value of \(\omega_{scale}\) makes the representation smoother. However, for a value of 256, the predicted output wavefield is generally inaccurate. In other words, its smoothness or continuity is destroyed due to the network's focus on a high-frequency representation. Then we test the performance of GaborPINN in learning a 16 Hz seismic wavefield. We still use the true velocity shown in Figure 2a to generate the reference results (Figure 6) using finite-difference methods. The generation of the training samples is the same as in the above experiment, but we use 160000 random samples from this region instead. We train the GaborPINN, the MLP with 3 hidden layers of 256 neurons in each layer, and the MLP with 3 hidden layers of 512 neurons in each layer, using an Adam optimizer for 50000 epochs. We note that for the GaborPINN, the scale factor used for the initialization of \(\omega_{j}^{(i)}\) is 128, to fit the high-frequency nature of the wavefield. The loss curve is shown in Figure 7. The GaborPINN still performs well for a 16 Hz wavefield, while the vanilla PINN did not converge. 
The predictions of the NN at different training epochs are shown in Figure 8. The GaborPINN learns this high-frequency wavefield within limited training epochs, and the details are refined later. The vanilla PINN did not converge within 50000 epochs. This example demonstrates that the GaborPINN provides fast convergence and a strong capability [15] to represent wavefields. It is worth noting that the networks used for this high-frequency wavefield are the same in size as those used for the 4 Hz case, which probably hurt the conventional PINN. However, with the proper frequency scaling, the GaborPINN managed to converge fast.
Figure 3: The loss curves for three versions of PINNs.
Figure 4: The real-part predictions of the 4 Hz wavefield due to a source near the surface at 1.25 km at various epochs (b-m) for three versions of PINNs. (b-e) are the results for the MLP comprised of 3 hidden layers with 256 neurons in each layer, (f-i) are the MLP comprised of 3 hidden layers with 512 neurons in each layer, and (j-m) are the results of GaborPINN.
Figure 5: The prediction results (a-d) of GaborPINN with different initial frequency scales applied to testing points \(\mathbf{x}_{test}\), which are perturbed compared to the training points \(\mathbf{x}_{train}\), and that of GaborNet whose scale is 256 (e) evaluated at the training points \(\mathbf{x}_{train}\).
Figure 6: The real (b) and imaginary (c) parts of the 16 Hz wavefield calculated numerically.
Figure 7: The loss curves for three versions of PINNs.
Figure 8: The predicted real part of the 16 Hz wavefield due to a source near the surface at 1.25 km at various epochs (a-l) for three versions of PINNs. (a-d) are the results for the MLP comprised of 3 hidden layers with 256 neurons in each layer, (e-h) are the MLP comprised of 3 hidden layers with 512 neurons in each layer, and (i-l) are the results of GaborPINN.
## 4 Discussion In this paper, we proposed a modified PINN for wavefield representation using GaborNet, which we refer to as GaborPINN. With the proper hyperparameter selection for GaborPINN, the convergence is fast and the prediction is good. This comes from the fact that the wavefield is inherently well represented by a composition of Gabor basis functions. Also, for this reason, the selection of an initial frequency scale is crucial. Unlike image representation, for which GaborNet was originally developed, where the details matter and the NN is trained in a supervised manner using the ground truth, PINNs with GaborNet require careful selection of the initial frequency scale. A large value will help GaborPINN fit the high-frequency details in the wavefield but will make the prediction rough, causing inaccurate second-order derivative calculations, which will harm the training of PINNs. Here, we choose the value of the scale based on the frequency of the wavefield, and as a result, we increase the initial frequency scale when we need wavefield solutions for higher frequencies. For future research, _we will investigate a more intelligent or adaptive selection of the initial frequency scale_. ## 5 Conclusions We addressed the issue of the slow convergence of PINN-based wavefield solutions by incorporating Gabor basis functions. The Gabor function allowed us to incorporate the prior information on the frequency in the design of GaborPINN to accelerate the fitting of the NN to the Helmholtz equation. 
Unlike in image representation, we found that the proper choice of the initial frequency parameter in the Gabor function would highly affect the smoothness and continuity of the represented wavefield. Therefore, we determine this scale value based on the frequency of the wavefield. Through numerical tests, we show that the proposed approach converges much faster than vanilla PINNs. This method provides a stepping stone, from the perspective of NN design, to efficient wavefield representations for real problems using PINNs. ## Acknowledgements The authors thank KAUST for supporting this research, and Fu Wang for helpful discussions. We would also like to thank the SWAG group for the collaborative environment.
2306.12139
Spatial Heterophily Aware Graph Neural Networks
Graph Neural Networks (GNNs) have been broadly applied in many urban applications upon formulating a city as an urban graph whose nodes are urban objects like regions or points of interest. Recently, a few enhanced GNN architectures have been developed to tackle heterophily graphs where connected nodes are dissimilar. However, urban graphs usually can be observed to possess a unique spatial heterophily property; that is, the dissimilarity of neighbors at different spatial distances can exhibit great diversity. This property often exists, yet it has not been explored. To this end, in this paper, we propose a metric, named Spatial Diversity Score, to quantitatively measure the spatial heterophily and show how it can influence the performance of GNNs. Indeed, our experimental investigation clearly shows that existing heterophilic GNNs are still deficient in handling the urban graph with a high spatial diversity score. This, in turn, may degrade their effectiveness in urban applications. Along this line, we propose a Spatial Heterophily Aware Graph Neural Network (SHGNN) to tackle the spatial diversity of heterophily of urban graphs. Based on the key observation that spatially close neighbors on the urban graph present a more similar mode of difference to the central node, we first design a rotation-scaling spatial aggregation module, whose core idea is to properly group the spatially close neighbors and separately process each group with less diversity inside. Then, a heterophily-sensitive spatial interaction module is designed to adaptively capture the commonality and diverse dissimilarity in different spatial groups. Extensive experiments on three real-world urban datasets demonstrate the superiority of our SHGNN over several of its competitors.
Congxi Xiao, Jingbo Zhou, Jizhou Huang, Tong Xu, Hui Xiong
2023-06-21T09:35:50Z
http://arxiv.org/abs/2306.12139v1
# Spatial Heterophily Aware Graph Neural Networks ###### Abstract. Graph Neural Networks (GNNs) have been broadly applied in many urban applications upon formulating a city as an urban graph whose nodes are urban objects like regions or points of interest. Recently, a few enhanced GNN architectures have been developed to tackle heterophily graphs where connected nodes are dissimilar. However, urban graphs usually can be observed to possess a unique spatial heterophily property; that is, the dissimilarity of neighbors at different spatial distances can exhibit great diversity. This property often exists, yet it has not been explored. To this end, in this paper, we propose a metric, named Spatial Diversity Score, to quantitatively measure the spatial heterophily and show how it can influence the performance of GNNs. Indeed, our experimental investigation clearly shows that existing heterophilic GNNs are still deficient in handling the urban graph with a high spatial diversity score. This, in turn, may degrade their effectiveness in urban applications. Along this line, we propose a Spatial Heterophily Aware Graph Neural Network (SHGNN) to tackle the spatial diversity of heterophily of urban graphs. Based on the key observation that spatially close neighbors on the urban graph present a more similar mode of difference to the central node, we first design a rotation-scaling spatial aggregation module, whose core idea is to properly group the spatially close neighbors and separately process each group with less diversity inside. Then, a heterophily-sensitive spatial interaction module is designed to adaptively capture the commonality and diverse dissimilarity in different spatial groups. Extensive experiments on three real-world urban datasets demonstrate the superiority of our SHGNN over several of its competitors. Urban graphs, Spatial heterophily, Graph neural networks + Footnote †: Corresponding authors. 
and workplace, respectively, which are definitely heterophilic. Such difference information on the heterophilic urban graph may not be modeled well by many traditional homophilic GNNs, which tend to generate similar representations for connected nodes [2, 55]. In this way, the performance of these homophilic GNN methods on urban graphs may be largely hindered. Our further observation is that urban graphs have a unique _Spatial Heterophily_ property. To be specific, we find that the heterophily on the urban graph often presents a characteristic of _spatial diversity_. In other words, the difference (or dissimilarity) between the central node and its neighbors at different distances or directions exhibits evident discrepancy, rather than being distributed uniformly. Aware of such a characteristic, an attendant question is how to measure the spatial heterophily of the urban graph. There have been various studies put forward to investigate graph homophily and heterophily from different perspectives, including node homophily [29], edge homophily [56] and class homophily [23]. But without considering the spatial position of linking nodes, these metrics cannot describe the spatial heterophily on urban graphs. Therefore, in this work, we propose a metric named **spatial diversity score** to analyze the spatial heterophily, and investigate its influence on the performance of existing GNN methods on urban graphs. Firstly, we divide the neighbors on an urban graph into different spatial groups according to their locations (including direction and distance), and then the spatial diversity score measures the discrepancy between different spatial groups, in terms of their label dissimilarity to the central node. A higher score (close to one) indicates a larger discrepancy between different spatial groups, and thus a higher spatial diversity of heterophily on the urban graph. Yet, designing powerful heterophilic GNN models remains an outstanding challenge on an urban graph whose spatial diversity score is high, where there is a diversity of dissimilarity distributions between the central node and its neighbors at different spatial locations. There are some recent studies improving GNN architectures to handle graph heterophily [8, 14, 17, 24, 48]. Most of these methods can only work on a heterophilic graph when there is limited difference between nodes. For example, GBKGNN [8] assumes that there are only two different kinds of nodes, and FAGCN [2] assumes that node features have only two different levels of frequencies (note that this limitation is discussed in their papers). Following this line, it is hard to model such diverse distributions of spatial heterophily on urban graphs. To provide more evidence, we conduct an experiment on synthetic urban graphs with varying levels of spatial diversity score (details are in Section 3.2). 
As shown in Figure 1(d), when the graph presents a higher spatial diversity score, the performance of these two state-of-the-art heterophilic GNNs, GBKGNN and FAGCN, is far from optimal. We also apply the proposed spatial diversity score to analyze three real-world urban graphs under different target tasks in our experiments. As we can see from Figure 1(c), the three urban graphs present different levels of spatial heterophily, where one of them gets a very high score (0.99 on the urban graph in the crime prediction task). Thus, it is valuable to develop an effective GNN model that can handle the diverse spatial heterophily of the urban graph. Through our in-depth analysis of spatial heterophily, we observe that the heterophily further exhibits a spatial tendency on urban graphs, which reveals a promising opportunity for us to tackle such diverse heterophily in a divide-and-conquer way. Different from ordinary graphs, nodes on urban graphs should follow Tobler's First Law of Geography (TFL) [35]. As the fundamental assumption used in almost all urban analysis, TFL means _everything is related to everything else, but near things are more related than distant things_. Obeying TFL, spatially close neighbors on the urban graph present a more similar mode of difference to the central node, compared to the distant ones. We also analyze the real-world urban graph in the commercial activeness prediction task to visualize such a tendency (details are in Section 4). Figure 2 clearly shows that spatially close neighbors present less discrepancy from both the direction and distance view. Thus, if we can properly group the spatially close neighbors together, it is possible to alleviate the diversity of heterophily inside groups on the urban graph. To this end, we propose a novel Spatial Heterophily Aware Graph Neural Network (SHGNN) to tackle the spatial heterophily on urban graphs, with two specially designed modules. First, we devise a Rotation-Scaling Spatial Aggregation module. Its core idea is to properly divide the neighbors into different spatial groups according to their direction and distance to the central node, and perform a spatial-aware feature aggregation for each group, which serves as the basis of handling diverse heterophily distributions separately. Then, a Heterophily-Sensitive Spatial Interaction module with two learnable kernel functions is designed to capture the commonality and discrepancy in the neighborhood, and adaptively determine what and how much difference information the central node needs. It acts between the central node and neighbors in different groups to manage the spatial diversity of heterophily on the urban graph.
Figure 1. Analysis of spatial heterophily. (a)-(b) illustrate the space partition from two spatial views. (c) shows the spatial diversity scores calculated on three real-world urban graphs. (d) presents the results of the experimental investigation.
The contribution of this paper is summarized as follows: * To the best of our knowledge, we are the first to investigate the spatial heterophily of urban graphs. We design a metric named spatial diversity score to analyze the spatial heterophily property, and identify the limitation of existing GNNs in handling the diverse spatial heterophily on the urban graph. 
* We propose a novel spatial heterophily aware graph neural network named SHGNN, in which two techniques, rotation-scaling spatial aggregation and heterophily-sensitive spatial interaction, are devised to tackle the spatial heterophily of the urban graph in a divide-and-conquer way.
* We conduct extensive experiments to verify the effectiveness of SHGNN on three real-world datasets.

## 2. Preliminaries

In this section, we first introduce the basic concepts of the urban graph, then clarify the goal of our work. The frequently used notations are summarized in Table 4 in Appendix.

_Urban Graphs._ Let \(\mathcal{G}(\mathcal{V},\mathcal{E},\mathcal{X})\) denote an urban graph, where \(\mathcal{V}=\{v_{1},...,v_{N}\}\) denotes a set of nodes representing a kind of urban entity, \(\mathcal{E}\) denotes the edge set indicating one type of relation among nodes in the urban scenario, and \(\mathcal{N}(v_{i})=\{v_{j}\mid(v_{i},v_{j})\in\mathcal{E}\}\) is the neighborhood of node \(v_{i}\). \(\mathcal{X}\in\mathbb{R}^{N\times d}\) denotes the feature matrix, in which the \(i\)-th row is the \(d\)-dimensional node feature vector of \(v_{i}\) obtained from the urban data. Different instantiations of the node set and edge set form different urban graphs, such as: (1) **Mobility Graph**, with regions as nodes and human flows as edges. The node features can be region attributes such as the distribution of POIs inside the region; (2) **Road Network**, where the node set is formed by road sections, and the edges denote the connectivity between them. The node features can be the structural information of a section, such as the number of branches and lanes.

_Problem Formulation._ Given an urban graph, our goal is to design a GNN model that considers and alleviates the spatial heterophily, to learn the node representation \(f\colon(v_{i}\mid\mathcal{G})\rightarrow\hat{\mathbf{h}}_{i}\), where \(\hat{\mathbf{h}}_{i}\) denotes the representation vector of node \(v_{i}\). The model \(f\) is trained in an end-to-end manner in different downstream tasks.

## 3. Spatial Heterophily

In this section, an analysis of spatial heterophily on urban graphs is provided. We first introduce our metric to measure the spatial heterophily (Section 3.1). Then, Section 3.2 gives an experimental investigation on synthetic graphs. It not only demonstrates the importance for GNNs to consider the spatial heterophily on urban graphs, but also suggests a promising way to tackle this challenge.

### Spatial Diversity of Heterophily

To describe the spatial diversity of heterophily on urban graphs, we design a metric named **spatial diversity score**. Typically, graph heterophily is measured by the label dissimilarity between the central node and its neighbors (e.g., (Beng et al., 2017; Wang et al., 2018)). Following this line, our spatial diversity score aims to further assess the discrepancy between neighbors at different spatial locations, in terms of the distributions of their label dissimilarity to the central node. Briefly, we first divide the neighborhood into different spatial groups according to their spatial locations. Then, we measure the spatial groups' discrepancy by calculating the Wasserstein distance between their label dissimilarity distributions, and the metric is further defined as the ratio of nodes with high discrepancy.
#### 3.1.1. Dual-View Space Partition

To distinguish the spatial location of neighbors and form different spatial groups, we first partition the geographic space into several non-overlapping subspaces, and each neighbor of the central node can be assigned to the group corresponding to the subspace it locates in. Note that spatial heterophily can present in both different directions and different distances, so we propose to perform the space partition from both views.

**Direction-Aware Partition.** Given the central node \(v_{i}\) on an urban graph, we evenly partition the geographic space centered at it into ten direction sectors \(\mathcal{S}=\{s_{k}\mid k=0,1,...,9\}\). Correspondingly, nodes in the neighborhood \(\mathcal{N}(v_{i})\) are divided into the sector they locate in. Neighbors belonging to the same sector are redefined as the direction-aware neighborhood \(\{\mathcal{N}_{s_{k}}(v_{i})\mid k=0,1,...,9\}\), where \(\bigcup_{k=0}^{9}\mathcal{N}_{s_{k}}(v_{i})=\mathcal{N}(v_{i})\). In this way, the direction-aware neighborhoods can be regarded as different spatial groups associated with different spatial relations to the central node. We will then calculate the discrepancy between different spatial groups. Figure 1(a) illustrates such a sector partition.

**Distance-Aware Partition.** As illustrated in Figure 1(b), we also divide the neighbors based on their distance to the central node. To be specific, we first determine the distance range of the neighborhood on an urban graph by making a statistic of the distance distribution between connected nodes. Note that we consider the 90th percentile of this distribution as the maximum distance of the neighborhood on the graph. This is based on the observation that the distance distribution often presents a long-tail property, and such a distance cut-off can avoid interference from extremely distant outliers. Then, the distance range is evenly split into ten buckets, which results in distance rings \(\mathcal{R}=\{r_{k}\mid k=0,1,...,9\}\). Similarly, the original neighborhood \(\mathcal{N}(v_{i})\) can be divided into these ten distance-aware neighborhoods \(\{\mathcal{N}_{r_{k}}(v_{i})\mid k=0,1,...,9\}\) as another view of spatial groups, where we also have \(\bigcup_{k=0}^{9}\mathcal{N}_{r_{k}}(v_{i})=\mathcal{N}(v_{i})\).

#### 3.1.2. Spatial Diversity Score

After the neighborhood partition, our goal is to further define the spatial diversity score by measuring the discrepancy between different spatial groups, based on the distance between their distributions of label dissimilarity to the central node. First of all, we define a spatial group's label dissimilarity distribution to the central node. For a node classification task with \(\mathcal{C}\) classes, within a spatial group, this distribution is calculated as the ratio of neighbors belonging to each class (different to the central node's). Taking the direction view as an example, for the central node \(v_{i}\), the label dissimilarity distribution of \(s_{k}\) is formally defined as \(P_{i}^{s_{k}}=[P_{i,0}^{s_{k}},\,P_{i,1}^{s_{k}},\,...,\,P_{i,|\mathcal{C}|-1}^{s_{k}}]\) with:

\[P_{i,c}^{s_{k}}=\sum_{v_{j}\in\mathcal{N}_{s_{k}}(v_{i})\wedge y_{j}\neq y_{i}}\mathbb{I}\left(y_{j,c}=1\right)\cdot|\mathcal{N}_{s_{k}}(v_{i})|^{-1}, \tag{1}\]

where \(c=0,1,...,|\mathcal{C}|-1\), \(\mathbb{I}(\cdot)\) is the indicator function, and \(y_{i}\) is \(v_{i}\)'s one-hot label vector whose \(c\)-th value is denoted by \(y_{i,c}\).
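To make the dual-view partition concrete, below is a minimal sketch of assigning neighbors to direction sectors and distance rings from node coordinates. The helper name, the use of plain numpy, and the handling of neighbors beyond the distance cut-off are our own assumptions rather than the paper's implementation; the sketch follows the ten-sector / ten-ring setup and the 90th-percentile cut-off described above.

```python
import numpy as np

def dual_view_partition(coords, edges, n_sectors=10, n_rings=10):
    """Assign each directed edge (i, j) to a direction sector and a distance ring.

    coords: (N, 2) array of node locations (e.g., projected x/y in meters).
    edges:  (E, 2) array of (center i, neighbor j) index pairs.
    Returns two (E,) arrays: sector id and ring id of each neighbor.
    """
    src, dst = edges[:, 0], edges[:, 1]
    delta = coords[dst] - coords[src]

    # Direction view: bearing angle in [0, 2*pi) mapped to n_sectors even sectors.
    angle = np.arctan2(delta[:, 1], delta[:, 0]) % (2 * np.pi)
    sector = (angle / (2 * np.pi / n_sectors)).astype(int).clip(max=n_sectors - 1)

    # Distance view: cut off at the 90th percentile to ignore long-tail outliers,
    # then split the range evenly into n_rings buckets. Neighbors beyond the
    # cut-off are clipped into the outermost ring here (our simplification).
    dist = np.linalg.norm(delta, axis=1)
    max_dist = np.percentile(dist, 90)
    ring = np.minimum((dist / (max_dist / n_rings)).astype(int), n_rings - 1)
    return sector, ring
```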
In addition, to improve the generality of this metric to more urban applications (e.g., regression tasks), we also extend the definition of the label dissimilarity distribution above to the node regression task. To be specific, we first make a statistic of the label difference between connected nodes on the whole graph, \(\hat{\mathcal{Y}}=\{(y_{j}-y_{i})\mid(v_{i},v_{j})\in\mathcal{E}\}\), and calculate the deciles \(\{D_{1},D_{2},...,D_{9}\}\) of this distribution. These nine deciles determine ten buckets (intervals), which are used for the discretization of the continuous label difference value, to obtain a similar form of label dissimilarity as in the node classification task. Hence, still for the spatial group \(\mathcal{N}_{s_{k}}(v_{i})\) in sector \(s_{k}\), its label dissimilarity distribution is calculated as the ratio of neighbors mapped into different buckets, according to their discretized label difference to the central node, which can also be formulized as \(P_{i}^{s_{k}}=[P_{i,0}^{s_{k}},\,P_{i,1}^{s_{k}},\,...,\,P_{i,9}^{s_{k}}]\), with the \(c\)-th element computed by:

\[P_{i,c}^{s_{k}}=\sum_{v_{j}\in\mathcal{N}_{s_{k}}(v_{i})}\mathbb{I}(D_{c}<y_{j}-y_{i}\leq D_{c+1})\cdot|\mathcal{N}_{s_{k}}(v_{i})|^{-1}, \tag{2}\]

where \(c=0,1,...,9\). \(D_{0}\) and \(D_{10}\) denote the minimum and maximum of the label difference distribution on the whole graph, respectively (i.e., \(D_{0}=\min(\hat{\mathcal{Y}})\) and \(D_{10}=\max(\hat{\mathcal{Y}})\)).

Next, the discrepancy between different spatial groups can be defined by measuring the distance between their label dissimilarity distributions. Following a recent study on graph heterophily (Sinkhorn, 2017), we adopt the Wasserstein distance (WD) to measure the distribution distance between two spatial groups. Formally, consider two spatial groups \(\mathcal{N}_{s_{p}}(v_{i})\) and \(\mathcal{N}_{s_{q}}(v_{i})\) in sectors \(s_{p}\) and \(s_{q}\) of node \(v_{i}\); the discrepancy between them is defined as:

\[Disc(v_{i},s_{p},s_{q})=WD(P_{i}^{s_{p}},P_{i}^{s_{q}}), \tag{3}\]

where \(WD(\cdot,\cdot)\) denotes the Wasserstein distance between two distributions, which can be approximately calculated by the Sinkhorn iteration algorithm (Sinkhorn, 2017). With such a measurement, we can finally define the spatial diversity score to describe the diverse spatial heterophily of an urban graph. This metric is computed as the ratio of nodes with high discrepancy among different spatial groups:

\[\lambda_{d}^{s}=|\mathcal{V}|^{-1}\cdot\sum_{v_{i}\in\mathcal{V}}\mathbb{I}(\max_{p\neq q}Disc(v_{i},s_{p},s_{q})\geq 1), \tag{4}\]

where \(\max_{p\neq q}Disc(v_{i},s_{p},s_{q})\geq 1\) with \(p,q=0,1,...,9\) indicates that there are at least two sectors that are discrepant in terms of their label dissimilarity distributions to the central node \(v_{i}\). In this way, the score \(\lambda_{d}^{s}\) gets higher if there are more nodes whose spatial groups present high discrepancy on the urban graph. Similarly, we can also define the score \(\lambda_{d}^{r}\) in the distance view, which measures the discrepancy between spatial groups formed by the different rings in \(\mathcal{R}\), at different distances to the central node:

\[\lambda_{d}^{r}=|\mathcal{V}|^{-1}\cdot\sum_{v_{i}\in\mathcal{V}}\mathbb{I}(\max_{p\neq q}Disc(v_{i},r_{p},r_{q})\geq 1). \tag{5}\]

In practice, when only a part of the nodes are labeled on an urban graph, it is often sufficient to use the labeled data to estimate \(\lambda_{d}^{s}\) and \(\lambda_{d}^{r}\).
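As a concrete illustration, here is a minimal sketch of the direction-view score \(\lambda_{d}^{s}\) for a classification task. It is our own simplification, not the paper's implementation: it uses SciPy's one-dimensional `wasserstein_distance` over class indices (which also renormalizes the weights) in place of the Sinkhorn approximation mentioned above, and applies the threshold of 1 from Eq. (4).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def spatial_diversity_score(labels, sector_of_edge, edges, n_classes, n_sectors=10):
    """Direction-view spatial diversity score (Eq. 4), classification version.

    labels:         (N,) integer class label per node.
    edges:          (E, 2) array of (center i, neighbor j) pairs.
    sector_of_edge: (E,) sector id of neighbor j relative to center i.
    """
    n_flagged, n_nodes = 0, len(labels)
    for i in range(n_nodes):
        mask = edges[:, 0] == i
        # Label dissimilarity distribution per sector (Eq. 1): histogram of
        # neighbor classes that differ from the central node's class.
        dists = []
        for k in range(n_sectors):
            nbrs = edges[mask & (sector_of_edge == k), 1]
            diff = labels[nbrs][labels[nbrs] != labels[i]]
            if len(nbrs) == 0 or len(diff) == 0:
                continue  # skip empty groups; they define no distribution
            dists.append(np.bincount(diff, minlength=n_classes) / len(nbrs))
        # Node is flagged if any two sectors differ by WD >= 1 (Eq. 4).
        support = np.arange(n_classes)
        if any(wasserstein_distance(support, support, p, q) >= 1
               for a, p in enumerate(dists) for q in dists[a + 1:]):
            n_flagged += 1
    return n_flagged / n_nodes
```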
Figure 1(c) shows the scores of three real-world urban graphs. As we can see, from both the direction and distance view, urban graphs can present very high spatial diversity scores (e.g., 0.99 in crime prediction), which reveals the diverse heterophily in different spatial groups, with discrepant label dissimilarity distributions.

### Experimental Investigation

Next, we conduct an experimental investigation on synthetic urban graphs to illustrate the importance of considering the diverse spatial heterophily on the urban graph. Specifically, we test the performance of several GNN models on a series of synthetic urban graphs with increasing spatial diversity of heterophily. We generate 10 graphs containing 5000 nodes with 10-dimensional randomly generated feature vectors. For each node, we build 50 edges and assume that these neighbors locate at 10 distance rings from near to far around the central node. To increase the spatial diversity of heterophily from \(\mathcal{G}_{1}\) to \(\mathcal{G}_{10}\), we gradually enlarge the discrepancy of label dissimilarity distributions between neighbors in different distance rings. Specifically, in graph \(\mathcal{G}_{i}\), we evenly divide the node set into \(i\) subsets \(\mathcal{V}_{i}=\bigcup_{j=1}^{i}\mathcal{V}_{i,j}\), where the labels of nodes in \(\mathcal{V}_{i,j}\) are sampled from the Gaussian distribution \(\mathcal{N}(10j,1)\). Then, the 50 neighbors connected to a node in \(\mathcal{V}_{i,j}\) are randomly selected from one of the subsets \(\{\mathcal{V}_{i,k}\}|_{k=j}^{i}\) with an equal probability \(1/(i-j+1)\). Besides, we let the neighbors' spatial distance be consistent with their label difference to the central node (i.e., neighbors with smaller differences locate in closer distance rings). In this way, these 10 graphs have increasing spatial diversity scores \(\lambda_{d}^{r}\). Meanwhile, neighbors in the same distance ring have less discrepancy. Figure 1(d) shows the node regression error of GCN, GBKGNN and FAGCN on these synthetic graphs; consistent with the observation in the introduction, their performance degrades as the spatial diversity score grows.
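The synthetic graphs of Section 3.2 are straightforward to reproduce. The sketch below follows the recipe above (labels of subset \(\mathcal{V}_{i,j}\) drawn from \(\mathcal{N}(10j,1)\); each neighbor drawn uniformly from one of the subsets \(\{\mathcal{V}_{i,k}\}_{k=j}^{i}\)); the random seed, the feature distribution, and tying the ring index to the subset gap are our own assumptions.

```python
import numpy as np

def make_synthetic_graph(i, n_nodes=5000, n_feat=10, n_edges=50, seed=0):
    """Build synthetic graph G_i; larger i -> higher spatial diversity score."""
    rng = np.random.default_rng(seed)
    feats = rng.normal(size=(n_nodes, n_feat))     # random node features
    subset = rng.integers(0, i, size=n_nodes)      # ~even split into V_{i,1..i} (0-indexed)
    labels = rng.normal(10.0 * (subset + 1), 1.0)  # labels of V_{i,j} ~ N(10j, 1)
    pools = [np.flatnonzero(subset == s) for s in range(i)]

    edges, rings = [], []
    for u in range(n_nodes):
        j = subset[u]
        # each of the 50 neighbors comes from one subset among V_{i,j}, ..., V_{i,i}
        # (probability 1/(i-j+1) in the paper's 1-indexed notation)
        for s in rng.integers(j, i, size=n_edges):
            edges.append((u, rng.choice(pools[s])))
            # neighbors with smaller label difference sit in closer rings;
            # deriving the ring index from the subset gap is our simplification
            rings.append(min(s - j, 9))
    return feats, labels, np.array(edges), np.array(rings)
```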
## 4. Methodology

In this section, we first present that the heterophily on urban graphs often exhibits a spatial tendency: spatially close neighbors have more similar heterophily distributions than distant ones, which obeys TFL. This characteristic gives rise to the solution that we can properly divide neighbors according to their spatial locations, to group neighbors with less discrepancy together. Then, we introduce our proposed SHGNN, which leverages such a spatial tendency to tackle the spatial heterophily of urban graphs.

**Spatial Tendency.** In addition to the presence of a discrepancy between different spatial groups (discussed in Section 3.1), our in-depth investigation of spatial heterophily further reveals that such a discrepancy shows a spatial tendency on urban graphs. Specifically, we observe that the discrepancy between two spatially close groups is smaller, compared to two distant groups. We conduct a data analysis to visually present such a tendency. Figure 2 shows the pair-wise discrepancy of label dissimilarity distributions between any two spatial groups, computed based on real-world human mobility and regional commercial activeness data. In Figure 2(a), for the spatial group formed by distance ring \(r_{p}\), we can find that its discrepancy with another ring \(r_{q}\) is highly correlated to their spatial distance (i.e., the discrepancy gets higher when \(r_{q}\) is more distant to \(r_{p}\)). For example, the close ring pair \((r_{0},r_{1})\) is less discrepant than the distant pair \((r_{0},r_{9})\). A similar spatial tendency can also be observed from the direction view. As shown in Figure 2(b), in most cases, the discrepancy increases along with the included angle between sectors. Taking sector \(s_{0}\) as an example, its discrepancy with \(s_{1}\sim s_{9}\) increases first and then decreases, which is roughly in sync with the change of their included angles. In other words, a sector is more likely to be discrepant with a distant sector (e.g., \((s_{1},s_{6})\)) than with a nearby one (e.g., \((s_{1},s_{2})\)).

This characteristic motivates us to address the diverse spatial heterophily on urban graphs by properly grouping spatially close neighbors and separately processing each group with less discrepancy inside. To this end, we propose a novel GNN architecture named SHGNN, which is illustrated in Figures 3 and 4. Our model consists of two components: Rotation-Scaling Spatial Aggregation (see Section 4.1) and Heterophily-Sensitive Spatial Interaction (see Section 4.2).

### Rotation-Scaling Spatial Aggregation

This component aims to properly group spatially close neighbors and alleviate the diversity of heterophily inside groups in the message passing process. In general, we first divide neighbors according to their relative positions to the central node. Then, the feature aggregation is performed in each spatial group separately.

#### 4.1.1. Rotation-Scaling Dual-View Partition

Following the neighborhood partition in Section 3.1.1, we also partition the geographic space into non-overlapping subspaces, and neighbors located in the same subspace are then grouped together. Note that there are two major differences between the space partition performed in this component and that in Section 3.1.1. First, we apply a more general partition with a variable number of subspaces. Second, we introduce a rotation-scaling multi-head partition strategy to model the neighbors' spatial locations in a more comprehensive way. To be specific, in the direction view, we evenly partition the space into a set of sectors \(\mathcal{S}=\{s_{k}\mid k=0,1,...,n_{s}-1\}\), where \(n_{s}\) denotes the number of partitioned sectors, which can be appropriately set for different datasets. Nodes in each direction-aware neighborhood \(\mathcal{N}_{s_{k}}(v_{i})\) of sector \(s_{k}\) are grouped together, which we still call a spatial group. In the distance view, the space is partitioned into \(n_{r}\) distance rings \(\mathcal{R}=\{r_{k}\mid k=0,1,...,n_{r}-1\}\), which result from predefined distance buckets (e.g., \(<1km\), \(1-2km\), and \(>2km\)). Note that the central node \(v_{i}\) itself does not belong to any sector or ring; we regard it as an additional group \(\mathcal{N}_{s_{n_{s}}}(v_{i})=\mathcal{N}_{r_{n_{r}}}(v_{i})=\{v_{i}\}\).

**Rotation-Scaling Multi-Head Partition.** In view of the special case that a part of the neighbors may locate at the boundary between two subspaces, we further propose a multi-head partition strategy to simultaneously perform multiple partitions at each view, where different heads can complement each other. For example, as shown in Figure 3(a), the orange node \(v_{4}\) locates at the boundary between sectors \(s_{0}\) and \(s_{1}\), which indicates that the spatial relation of the neighborhood is still inadequately excavated by the single direction-based partition.
A similar situation can be found in the partition of distance rings, such as the node \(v_{4}\) in Figure 3(b). To overcome this limitation, we extend our partition strategy by devising two operations, _sector rotation_ and _ring scaling_, to achieve multiple space partitions that capture the diverse spatial relations comprehensively. Specifically, as illustrated in Figure 3(a) and (c), for the originally partitioned direction sectors, we rotate the sector boundary by a certain angle (e.g., 45 degrees) to derive another set of sectors; the neighbors are then correspondingly reassigned to these new sectors and form a different set of direction-aware neighborhoods. Thus, we update the denotation of sectors as \(\mathcal{S}^{m}=\{s_{k}^{m}\mid k=0,1,...,n_{s}\}\), and that of direction-aware neighborhoods as \(\{\mathcal{N}_{s_{k}^{m}}(v_{i})\mid k=0,1,...,n_{s}\}\), where \(m=1,2,...,M_{s}\) denotes the \(m\)-th head partition among \(M_{s}\) heads in total. Similarly, from the distance view, we scale the boundary of the original distance rings to obtain the supplemental partition, which is shown in Figure 3(b) and (d). The denotations are updated as \(\mathcal{R}^{m}=\{r_{k}^{m}\mid k=0,1,...,n_{r}\}\) and \(\{\mathcal{N}_{r_{k}^{m}}(v_{i})\mid k=0,1,...,n_{r}\}\) for rings and distance-aware neighborhoods, where \(m=1,2,...,M_{r}\). In this way, different heads of partitions model the spatial relation between neighbors and the central node complementarily, and thus we can avoid the improper grouping under a single partition.

Figure 3. Illustration of the rotation-scaling partition.

#### 4.1.2. Spatial-Aware Aggregation

After the multi-head partition from two spatial perspectives, we collect messages from neighbors to the central node. Rather than mixing the messages (e.g., through averaging) as most GNNs do (Nguyen et al., 2019), our model performs a group-wise aggregation to handle different spatial groups with diverse heterophily. An illustration is shown in Figure 4(a). Formally, taking the direction view as an example, with the set of direction-aware neighborhoods (spatial groups) under the \(m\)-th head partition \(\{\mathcal{N}_{s_{k}^{m}}(v_{i})\mid k=0,1,...,n_{s}\}\), we use graph convolution (Girshick et al., 2015) to respectively aggregate the features of nodes in each neighborhood \(\mathcal{N}_{s_{k}^{m}}(v_{i})\) with normalization by degree:

\[\mathbf{z}_{i,s_{k}^{m}}(l+1)=\sum_{j\in\mathcal{N}_{s_{k}^{m}}(v_{i})}(|\mathcal{N}(v_{i})|\cdot|\mathcal{N}(v_{j})|)^{-\frac{1}{2}}\,\mathbf{h}_{j}(l)\,\mathbf{W}_{s_{k}^{m}}(l), \tag{6}\]

where \(\mathbf{z}_{i,s_{k}^{m}}(l)\) denotes the \(l\)-layer aggregated message from direction sector \(s_{k}^{m}\), \(\mathbf{h}_{j}(l)\) denotes the \(l\)-layer features of neighbor \(v_{j}\) with \(\mathbf{h}_{j}(0)=\mathbf{x}_{j}\), and \(\mathbf{W}_{s_{k}^{m}}(l)\) is a trainable transformation extracting useful information from the neighbors' features. Similarly, in the distance view, we perform the ring-wise aggregation in each distance ring separately by:

\[\mathbf{z}_{i,r_{k}^{m}}(l+1)=\sum_{j\in\mathcal{N}_{r_{k}^{m}}(v_{i})}(|\mathcal{N}(v_{i})|\cdot|\mathcal{N}(v_{j})|)^{-\frac{1}{2}}\,\mathbf{h}_{j}(l)\,\mathbf{W}_{r_{k}^{m}}(l), \tag{7}\]

where \(\mathbf{W}_{r_{k}^{m}}(l)\) is another feature transformation for neighbors at different distances. In this way, the aggregated messages can not only capture the structure information on the graph, but also discriminate between different spatial groups. This avoids losing the different distributions of spatial heterophily on the urban graph.
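To illustrate Eq. (6), the snippet below sketches one layer and one head of the direction-view group-wise aggregation in PyTorch. It is our own condensed reading of the formula, not the released PaddlePaddle implementation: neighbor messages are degree-normalized as in Eq. (6), and each sector (plus the ego group) owns its own weight matrix.

```python
import torch
import torch.nn as nn

class SectorWiseAggregation(nn.Module):
    """One head of direction-view group-wise aggregation (Eq. 6)."""

    def __init__(self, n_sectors: int, d_in: int, d_out: int):
        super().__init__()
        # one trainable transformation W_{s_k} per sector, plus the ego group
        self.weights = nn.Parameter(torch.randn(n_sectors + 1, d_in, d_out) * 0.01)

    def forward(self, h, edges, sector, degree):
        """h: (N, d_in) node features; edges: (E, 2) (center, neighbor) pairs;
        sector: (E,) sector id per edge; degree: (N,) node degrees."""
        n_groups = self.weights.shape[0]
        z = h.new_zeros(h.shape[0], n_groups, self.weights.shape[2])
        src, dst = edges[:, 0], edges[:, 1]
        # symmetric degree normalization (|N(v_i)| * |N(v_j)|)^(-1/2)
        norm = (degree[src] * degree[dst]).float().clamp(min=1).rsqrt().unsqueeze(-1)
        msg = norm * h[dst]                      # (E, d_in)
        for k in range(n_groups - 1):            # sector-wise sum, then W_{s_k}
            mask = sector == k
            agg = h.new_zeros(h.shape[0], h.shape[1])
            agg.index_add_(0, src[mask], msg[mask])
            z[:, k] = agg @ self.weights[k]
        z[:, -1] = h @ self.weights[-1]          # the central node as its own group
        return z                                 # (N, n_sectors + 1, d_out)
```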
In this way, the aggregated messages can not only capture the structure information on the graph, but also discriminate their different spatial groups. It avoids losing different distributions of spatial heterophily on the urban graph. ### Heterophily-Sensitive Spatial Interaction After the group-wise feature aggregation, SHGNN further captures the diverse spatial heterophily in different spatial groups on the urban graph. In detail, we devise two learnable kernel functions to first respectively capture the commonality and discrepancy between the central node and each spatial group. Then, an attentive gate is jointly learned to adaptively determine the ratio of two components that should be propagated to the central node. Additionally, as indicated by the analysis of spatial tendency in Figure 2, the discrepancy of heterophily distributions varies along with the distance between two spatial groups, we further consider such characteristics by allowing the kernel functions to act between groups, which encourages the propagation of common and discrepant information among them. Since we view the central node as an additional group, this process can be regarded as an interaction among every two groups. For simplicity, we omit the index \(l\) and \(m\) of layer and head in the following discussion. _Commonality Kernel Function._ Given the fact that different sectors / rings all belong to the neighborhood of the central node, they may share some common knowledge that can probably enhance the representation of each other. Thus, we first design a commonality kernel function to capture such information among them. Formally, taking the direction view as an example, with the representation \(\{\mathbf{z}_{i,s_{k}}\mid k=0,1,...,n_{s}\}\) of \(n_{s}\) sectors centered by \(v_{i}\), the kernel function \(\mathcal{N}_{C}^{s}(\cdot,\cdot)\) that models the commonality degree between sector \(s_{p}\) and \(s_{q}\) (including \(v_{i}\) itself) is defined as: \[\mathcal{K}_{C}^{s}(\mathbf{z}_{i,s_{p}},\mathbf{z}_{i,s_{q}})\ =\ \mathbf{z}_{i,s_{p}},\mathbf{z}_{i,s_{q}}\text{-}\ \mathbf{z}_{i,s_{k}}\text{=}\ \mathbf{z}_{i,s_{k}}\mathbf{W}_{C}^{s}, \tag{8}\] where \(<,>\) denotes the inner product and \(\mathbf{W}_{C}^{s}\) is the learnable matrix for common knowledge extraction. The larger output value suggests a higher similarity (i.e., commonality) between inputs. Based on this measurement, we enhance the sector representation with the extracted useful information from other sectors: \[\mathbf{z}_{i,s_{p}}^{C}=\sum_{q=0}^{n_{s}}\alpha_{pq}^{C,s}\,z_{i,s_{q}}\mathbf{W}_{C} ^{s}, \tag{9}\] \[\alpha_{pq}^{C,s}=\frac{exp\left(\mathcal{K}_{C}^{s}(\mathbf{z}_{i,s_{p}},\mathbf{z} _{i,s_{q}})\right)}{\sum_{k=0}^{n_{s}}exp\left(\mathcal{K}_{C}^{s}(\mathbf{z}_{i,s _{p}},\mathbf{z}_{i,s_{k}})\right)}, \tag{10}\] where the coefficient \(\alpha_{pq}^{C,s}\) is the level of commonality normalized by the softmax function. In the same way, we can also obtain the representation of each distance ring \(\mathbf{z}_{i,s_{p}}^{C}\) with the enhancement of common knowledge from other rings using a similar kernel function \(\mathcal{K}_{C}^{s}(\cdot,\cdot)\) parametrized by \(\mathbf{W}_{C}^{s}\). _Discrepancy Kernel Function._ In addition to the common knowledge, modeling the difference information is critical on heterophilic urban graphs. Thus, we devise another kernel function to capture Figure 4. The architecture of SHGNN. 
_Discrepancy Kernel Function._ In addition to the common knowledge, modeling the difference information is critical on heterophilic urban graphs. Thus, we devise another kernel function to capture the diverse dissimilarity between the central node and every group, as well as between any two groups. We introduce it from the direction view. Specifically, taking the original representations \(\{\mathbf{z}_{i,s_{k}}\mid k=0,1,...,n_{s}\}\) as inputs, the discrepancy kernel is defined as:

\[\mathcal{K}_{D}^{s}(\mathbf{z}_{i,s_{p}},\mathbf{z}_{i,s_{q}})=\;<\mathbf{z}_{i,s_{p}}\mathbf{W}_{D}^{s,a},\;(\mathbf{z}_{i,s_{p}}\mathbf{W}_{D}^{s,a}-\mathbf{z}_{i,s_{q}}\mathbf{W}_{D}^{s,b})>, \tag{11}\]

where \(\mathbf{W}_{D}^{s,a}\) and \(\mathbf{W}_{D}^{s,b}\) denote two transformations that learn to extract the difference of sector \(s_{q}\) compared to sector \(s_{p}\). According to (Kang et al., 2019), the kernel function \(\mathcal{K}_{D}^{s}(\mathbf{z}_{i,s_{p}},\mathbf{z}_{i,s_{q}})\) can be regarded as calculating the dissimilarity degree between the two inputs, which tends to be higher when \(s_{p}\) and \(s_{q}\) are more dissimilar. Subsequently, our model facilitates each sector to be aware of the helpful difference information from others with this measurement of discrepancy:

\[\alpha_{pq}^{D,s}=\frac{exp\left(\mathcal{K}_{D}^{s}(\mathbf{z}_{i,s_{p}},\mathbf{z}_{i,s_{q}})\right)}{\sum_{k=0}^{n_{s}}exp\left(\mathcal{K}_{D}^{s}(\mathbf{z}_{i,s_{p}},\mathbf{z}_{i,s_{k}})\right)}, \tag{12}\]

\[\mathbf{z}_{i,s_{p}}^{D}=\sum_{q=0}^{n_{s}}\alpha_{pq}^{D,s}\left(\mathbf{z}_{i,s_{p}}\mathbf{W}_{D}^{s,a}-\mathbf{z}_{i,s_{q}}\mathbf{W}_{D}^{s,b}\right). \tag{13}\]

Similarly, we utilize an analogous kernel function, denoted as \(\mathcal{K}_{D}^{r}(\cdot,\cdot)\) with parameters \(\mathbf{W}_{D}^{r,a}\) and \(\mathbf{W}_{D}^{r,b}\), to capture such discrepancy in the distance view and derive the ring representation \(\mathbf{z}_{i,r_{p}}^{D}\) aware of the diverse distributions of spatial heterophily.

_Attentive Component Selection._ With these two kernel functions, we can exploit both the common knowledge and the diverse discrepancy information between the central node and neighbors in different groups. However, different nodes may possess varying levels of spatial heterophily in various applications. Thus, SHGNN learns to derive a gate that adaptively determines the ratio of common and difference information in an end-to-end manner. Specifically, for each central node \(v_{i}\), we concatenate both the commonality and discrepancy components of all sectors to derive a scalar with a transformation:

\[\beta_{i}^{s}=\sigma(\,\|_{j\in\{C,D\}}\,\|_{k=0}^{n_{s}}\;\mathbf{z}_{i,s_{k}}^{j}\,\mathbf{W}_{t}^{s}), \tag{14}\]

where \(\mathbf{W}_{t}^{s}\) denotes the trainable transformation mapping the input to a scalar and \(\sigma\) denotes the _Sigmoid_ function restricting the output value into \((0,1)\). Then, \(\beta_{i}^{s}\) serves as a gate controlling the ratio of the commonality and discrepancy components in the final representation of each sector:

\[\tilde{\mathbf{z}}_{i,s_{k}}=\beta_{i}^{s}\cdot\mathbf{z}_{i,s_{k}}^{C}+(1-\beta_{i}^{s})\cdot\mathbf{z}_{i,s_{k}}^{D}. \tag{15}\]

In the same way, we learn to derive the gate \(\beta_{i}^{r}\) that determines the ratio in each ring's final representation \(\tilde{\mathbf{z}}_{i,r_{k}}\). After the propagation among spatial groups (including the central node), different groups can contain diverse discrepancy information in the neighborhood, which is vital on the heterophilic urban graph. We then integrate this group-wise representation by concatenation (instead of summation, to avoid mixing the diverse distributions of spatial heterophily) to obtain the global representation of the two views.
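The whole interaction step (Eqs. 8-15) reduces to a few tensor operations once the \(n_{s}+1\) group representations of each node are stacked into one tensor. The following is a minimal sketch of the direction view under that reading; the module layout and parameterization are our own assumptions, not the official code.

```python
import torch
import torch.nn as nn

class HeterophilySensitiveInteraction(nn.Module):
    """Commonality / discrepancy kernels with an attentive gate (Eqs. 8-15)."""

    def __init__(self, n_groups: int, d: int):
        super().__init__()
        self.W_C = nn.Linear(d, d, bias=False)            # commonality kernel, Eq. 8
        self.W_Da = nn.Linear(d, d, bias=False)           # discrepancy kernel, Eq. 11
        self.W_Db = nn.Linear(d, d, bias=False)
        self.W_t = nn.Linear(2 * n_groups * d, 1)         # gate transformation, Eq. 14

    def forward(self, z):                                 # z: (N, G, d) group reps
        zc = self.W_C(z)                                  # z W_C
        a_c = torch.softmax(zc @ zc.transpose(1, 2), -1)  # Eqs. 8 & 10
        z_common = a_c @ zc                               # Eq. 9

        za, zb = self.W_Da(z), self.W_Db(z)
        diff = za.unsqueeze(2) - zb.unsqueeze(1)          # (N, G, G, d): za_p - zb_q
        k_d = (za.unsqueeze(2) * diff).sum(-1)            # Eq. 11
        a_d = torch.softmax(k_d, -1)                      # Eq. 12
        z_diff = (a_d.unsqueeze(-1) * diff).sum(2)        # Eq. 13

        gate = torch.sigmoid(                             # Eq. 14
            self.W_t(torch.cat([z_common, z_diff], 1).flatten(1)))
        return gate.unsqueeze(-1) * z_common \
            + (1 - gate.unsqueeze(-1)) * z_diff           # Eq. 15
```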
Since we adopt the multi-head partition strategy, a subsequent concatenation is used to combine different heads at each view. The above two processes can be jointly expressed as:

\[\mathbf{h}_{i,s}=\|_{m=1}^{M_{s}}\;(\|_{k=0}^{n_{s}}\;\tilde{\mathbf{z}}_{i,s_{k}^{m}}),\;\;\;\mathbf{h}_{i,r}=\|_{m=1}^{M_{r}}\;(\|_{k=0}^{n_{r}}\;\tilde{\mathbf{z}}_{i,r_{k}^{m}}). \tag{16}\]

_Fusion of Two Spatial Views._ Finally, we fuse the two spatial views with a learnable weighted summation to update the central node's representation as follows:

\[\mathbf{h}_{i}=\gamma\;\mathbf{h}_{i,s}\mathbf{W}_{f}^{s}+(1-\gamma)\;\mathbf{h}_{i,r}\mathbf{W}_{f}^{r}, \tag{17}\]

where \(\mathbf{W}_{f}^{s}\) and \(\mathbf{W}_{f}^{r}\) are two weight matrices transforming the representation vectors of the two views into the same space, and \(\gamma\) is a trainable trade-off parameter activated by the _Sigmoid_ function, which learns to assign different importance to the direction and distance views according to the target task.

### Prediction and Optimization

Consistent with general GNNs, we use the \(L\)-layer output activated by the \(ReLU\) function, \(\hat{\mathbf{h}}_{i}=\sigma(\mathbf{h}_{i}^{(L)})\), as the node representation to make a prediction in different downstream tasks, and optimize the model by an appropriate loss function \(\mathcal{L}\left(LR(\hat{\mathbf{h}}_{i}),\,y_{i}\right)\). In node regression tasks, \(LR(\cdot)\) is a linear regressor, \(y_{i}\in\mathbb{R}\) denotes the ground truth of labeled nodes, and \(\mathcal{L}\) can be the L2 loss. For node classification tasks, \(LR(\cdot)\) performs logistic regression, \(y_{i}\in\{0,1\}^{C}\) is a one-hot label vector of \(C\) classes, and \(\mathcal{L}\) can be the cross entropy loss.

## 5. Experiments

In this section, we conduct extensive experiments on real-world datasets in three different tasks upon two types of urban graphs to evaluate the effectiveness of our model. The code of SHGNN is available at [https://github.com/PaddlePaddle/PaddleSpatial/tree/main/research/SHGNN](https://github.com/PaddlePaddle/PaddleSpatial/tree/main/research/SHGNN).

### Experiment Settings

#### 5.1.1. Tasks and Data Description

We first briefly introduce the three datasets corresponding to the three different tasks; details of how each dataset is built are described in Appendix A.1. Table 1 summarizes the statistical information of the three datasets.

**Commercial Activeness Prediction (CAP)** is a node regression task on a mobility graph. Similar to (Zhu et al., 2019), we use the number of comments on POIs in each region as the indicator of regional commercial activeness. To form the dataset of this task, we collect the following urban data of _Shenzhen_ city in China from Baidu Maps: POI data and satellite images in September 2019 to construct region features, the daily human flow data from July 2019 to September 2019 for building the edge set of the urban graph, and the number of regional POI comments from June 2019 to April 2020, which is regarded as the ground truth.

**Crime Prediction (CP)** is also a node regression task on the mobility graph. We collect the real-world dataset of _New York City_ from the NYC open data website1 for this task. The dataset contains 180 regions in Manhattan, with the POI data and crime number in each region, as well as taxi trips between regions (Zhu et al., 2019; Zhang et al., 2019). We construct node features from POIs.
Taxi trips are used to build the urban graph, where we only keep the 20 most important edges for each region w.r.t. the number of trip records.

\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
**Task** & **Dataset** & **\# Nodes** & **\# Edges** & **\# Labeled** & \(\lambda_{d}^{s}\) & \(\lambda_{d}^{r}\) \\
\hline
CAP & Shenzhen & 82,510 & 9,486,879 & 9,121 & 0.89 & 0.78 \\
CP & Manhattan & 180 & 3,780 & 180 & 0.98 & 0.98 \\
DRSD & Los Angeles & 253,985 & 1,365,289 & 15,274 & 0.23 & 0.14 \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Statistics of the three real-world datasets.

**Dangerous Road Section Detection (DRSD)** is a node classification task performed on the road network. In this work, a dangerous section is defined as a road section with a high incidence of traffic accidents. We build a real-world dataset of _Los Angeles_ based on the road network data from OSMnx Street Networks in the Harvard Dataverse2 and the traffic accident records in December 2021 from the Kaggle dataset website3. First, we count the number of accident records on each road section. Then, the sections containing more than 3 accident records in a month are considered dangerous road sections in our experiments.

Footnote 2: [https://dataverse.harvard.edu/dataverse/osmnx-street-networks](https://dataverse.harvard.edu/dataverse/osmnx-street-networks)

Footnote 3: [https://www.kaggle.com/datasets/sobhammoosavi/us-accidents](https://www.kaggle.com/datasets/sobhammoosavi/us-accidents)

To select the best hyper-parameters for all the compared methods, we randomly split each dataset into three parts, with 60% for training, 20% for validation and 20% for test.

#### 5.1.2. Baselines

We compare SHGNN with a variety of state-of-the-art GNN models, including two classical message passing neural networks (**GCN**(Krizhevsky et al., 2014) and **GAT**(Wang et al., 2015)), five representative methods for heterophilic graphs (**Mixhop**(Krizhevsky et al., 2014), **FAGCN**(Krizhevsky et al., 2014), **NLGCN**(Krizhevsky et al., 2014), **GPRGNN**(Krizhevsky et al., 2014) and **GBKGNN**(Krizhevsky et al., 2014)), two spatial GNN models (**SAGNN**(Krizhevsky et al., 2014) and **PRIM**(Krizhevsky et al., 2014)), as well as three task-specific baselines (**KnowCL**(Krizhevsky et al., 2014) for CAP, **NNCCRF**(Krizhevsky et al., 2014) for CP and **RFN**(Krizhevsky et al., 2014) for DRSD). Detailed descriptions are given in Appendix A.3.

#### 5.1.3. Evaluation Metrics

For the two regression tasks, we evaluate all methods with the Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and the coefficient of determination (\(\mathrm{R}^{2}\)). For the node classification task, we use the Area Under Curve (AUC) and F1-score.

### Performance Evaluation

#### 5.2.1. Overall Comparison

The performance comparison of our SHGNN and the baselines is presented in Table 2, in which the mean and standard deviation of all metrics are obtained through five random runs. As we can see, SHGNN consistently achieves the best performance in the three tasks on two kinds of urban graphs, with 6.6% and 11.2% reductions of RMSE in commercial activeness prediction (CAP) and crime prediction (CP), as well as a 7.2% improvement of AUC in dangerous road section detection (DRSD), over the most competitive baseline of each task. We also conduct a pairwise t-test between SHGNN and each baseline to demonstrate that our model outperforms all of them significantly.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c}
\hline \hline
 & \multicolumn{3}{c|}{Commercial Activeness Prediction} & \multicolumn{3}{c|}{Crime Prediction} & \multicolumn{2}{c}{Dangerous Road Section Detection} \\
\hline
Methods & RMSE \(\downarrow\) & MAE \(\downarrow\) & \(\mathrm{R}^{2}\uparrow\) & RMSE \(\downarrow\) & MAE \(\downarrow\) & \(\mathrm{R}^{2}\uparrow\) & AUC \(\uparrow\) & F1-score \(\uparrow\) \\
\hline
GCN & 8.354 \(\pm\) 0.020* & 5.018 \(\pm\) 0.028* & 0.388 \(\pm\) 0.003* & 156.1 \(\pm\) 1.312* & 114.5 \(\pm\) 1.005* & 0.082 \(\pm\) 0.015* & 0.634 \(\pm\) 0.002* & 0.229 \(\pm\) 0.003* \\
GAT & 8.952 \(\pm\) 0.153* & 5.134 \(\pm\) 0.099* & 0.298 \(\pm\) 0.023* & 166.8 \(\pm\) 5.750* & 127.1 \(\pm\) 4.359* & -0.400 \(\pm\) 0.071* & 0.613 \(\pm\) 0.006* & 0.213 \(\pm\) 0.008* \\
Mixhop & 8.168 \(\pm\) 0.038* & 4.981 \(\pm\) 0.010* & 0.415 \(\pm\) 0.005* & 147.6 \(\pm\) 0.576* & 109.0 \(\pm\) 0.398* & 0.179 \(\pm\) 0.006* & 0.636 \(\pm\) 0.003* & 0.229 \(\pm\) 0.005* \\
FAGCN & 8.327 \(\pm\) 0.018* & 5.063 \(\pm\) 0.031* & 0.392 \(\pm\) 0.002* & 150.9 \(\pm\) 3.618* & 108.1 \(\pm\) 2.179* & 0.142 \(\pm\) 0.014* & 0.632 \(\pm\) 0.004* & 0.228 \(\pm\) 0.009* \\
NLGCN & 8.495 \(\pm\) 0.108* & 4.883 \(\pm\) 0.053* & 0.367 \(\pm\) 0.016* & 151.7 \(\pm\) 4.496* & 110.2 \(\pm\) 3.557* & 0.133 \(\pm\) 0.050* & 0.626 \(\pm\) 0.003* & 0.224 \(\pm\) 0.004* \\
GBKGNN & 8.490 \(\pm\) 0.017* & 5.004 \(\pm\) 0.074* & 0.368 \(\pm\) 0.002* & 133.6 \(\pm\) 0.0495* & 98.28 \(\pm\) 3.792 & 0.327 \(\pm\) 0.041* & 0.626 \(\pm\) 0.011* & 0.225 \(\pm\) 0.013* \\
GPRGNN & 8.139 \(\pm\) 0.016* & 4.975 \(\pm\) 0.025* & 0.419 \(\pm\) 0.002* & 152.4 \(\pm\) 0.338* & 107.0 \(\pm\) 0.319* & 0.126 \(\pm\) 0.003* & 0.635 \(\pm\) 0.003* & 0.226 \(\pm\) 0.004* \\
PRIM & 8.928 \(\pm\) 0.144* & 5.186 \(\pm\) 0.086* & 0.301 \(\pm\) 0.022* & 151.7 \(\pm\) 6.243* & 118.6 \(\pm\) 4.320* & 0.131 \(\pm\) 0.070* & 0.615 \(\pm\) 0.008* & 0.210 \(\pm\) 0.005* \\
SAGNN & 8.843 \(\pm\) 0.068* & 5.305 \(\pm\) 0.057* & 0.315 \(\pm\) 0.010* & 142.2 \(\pm\) 2.451* & 108.6 \(\pm\) 1.360* & 0.238 \(\pm\) 0.026* & 0.630 \(\pm\) 0.006* & 0.216 \(\pm\) 0.008* \\
TASK-S & 8.360 \(\pm\) 0.068* & 4.875 \(\pm\) 0.028* & 0.387 \(\pm\) 0.010* & 138.8 \(\pm\) 2.619* & 100.4 \(\pm\) 0.945* & 0.274 \(\pm\) 0.027* & 0.638 \(\pm\) 0.012* & 0.240 \(\pm\) 0.008* \\
SHGNN & **7.605 \(\pm\) 0.131** & **4.485 \(\pm\) 0.056** & **0.493 \(\pm\) 0.017** & **118.7 \(\pm\) 4.135** & **91.16 \(\pm\) 2.754** & **0.469 \(\pm\) 0.036** & **0.684 \(\pm\) 0.004** & **0.263 \(\pm\) 0.003** \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Performance comparison. 'TASK-S' refers to the task-specific baseline for CAP (KnowCL), CP (NNCCRF), and DRSD (RFN). Symbol * marks baselines that SHGNN outperforms significantly in the pairwise t-test.

Note that although the spatial diversity of heterophily on the road network in the DRSD task is not as strong (\(\lambda_{d}^{s}=0.23\) and \(\lambda_{d}^{r}=0.14\)) as on the other two urban graphs, our model can still improve the accuracy by a large margin. This indicates that the effectiveness of SHGNN is general and not limited to urban graphs with strong spatial heterophily. Specifically, the ordinary GNN models (GCN and GAT) generally have the worst overall performance. By contrast, approaches designed to deal with graph heterophily (Mixhop, FAGCN, NLGCN, GBKGNN and GPRGNN) evidently perform better. This indicates the inappropriateness of simply treating an urban graph as a general homophilic graph. However, as a specially designed model that handles spatial heterophily, our SHGNN remarkably outperforms these general heterophilic GNNs. The spatial GNN methods (SAGNN and PRIM) sometimes perform better than GCN and GAT, but perform worse than the methods for heterophilic graphs in many cases. As for the task-specific baselines, it can also be found that some heterophilic GNNs are level pegging with them (such as Mixhop and GPRGNN vs. KnowCL in the CAP task, GBKGNN vs. NNCCRF in the CP task and Mixhop vs. RFN in the DRSD task). These results also demonstrate the importance of taking the spatial heterophily into consideration when using GNNs over an urban graph. To sum up, our SHGNN is much more effective in considering and alleviating the spatial heterophily on urban graphs in all the tasks.

#### 5.2.2. Ablation Study

To verify the effectiveness of each design in our model, we further compare SHGNN with its five variants:

* **SHGNN-S** removes the direction sector partition, which only models spatial heterophily from the distance view.
* **SHGNN-R** removes the distance ring partition, which only models spatial heterophily from the direction view.
* **SHGNN-M** removes the multi-head partition strategy.
* **SHGNN-C** removes the commonality kernel function, without sharing common knowledge among spatial groups.
* **SHGNN-D** removes the discrepancy kernel function. It cannot capture the difference among spatial groups.

As shown in Figure 5, SHGNN outperforms all the variants, proving the significance of our designs in tackling spatial heterophily. Specifically, the performance gets worse if we remove either the partition of direction sectors or distance rings (SHGNN-S and SHGNN-R), which indicates the necessity of considering the spatial heterophily from both views.
Besides, the rotation-scaling multi-head partition strategy evidently helps to model diverse spatial relations (SHGNN-M). In addition, the performance degrades if the commonality kernel function is not used, suggesting the effectiveness of sharing knowledge among neighbors. More importantly, removing the discrepancy kernel function results in a notable performance decline, which verifies the importance of further capturing and exploiting the difference information on heterophilic urban graphs.

#### 5.2.3. Parameter Analysis

We further investigate the influence of several important hyper-parameters on the performance of SHGNN while keeping the other parameters fixed. Figure 6 presents the results in the CP task, and the other results are in Appendix A.4.

**Number of spatial groups \(n_{s}/n_{r}\).** We first analyze the effects of the number of partitioned sectors \(n_{s}\) and rings \(n_{r}\). By increasing \(n_{s}/n_{r}\) to partition more subspaces, SHGNN can model more diverse spatial relations and capture the diversity of spatial heterophily in a more fine-grained manner. However, over-dense partitions bring no further improvements and can even cause slight performance declines. A possible explanation is that some subspaces then contain too few neighbors to support their representation learning.

**Partition head number \(M_{s}/M_{r}\).** We also study the impact of the head numbers \(M_{s}/M_{r}\) in the multi-head partition strategy. It can be observed that, compared to the single partition \((M_{s},M_{r}=1)\), SHGNN using such a strategy \((M_{s},M_{r}=2)\) evidently performs better, thanks to the complementary role of the two heads. When \(M_{s}\) and \(M_{r}\) continue to increase, the model generally gains fewer further improvements, and too many heads with additional redundancy may sometimes result in performance degradation. Thus, we recommend setting a small head number larger than 1 (e.g., 2), which is good enough while keeping efficiency.

Figure 6. Parameter analysis in the CP task.

## 6. Related Work

Here we briefly review two related topics: GNNs for urban applications and GNNs with heterophily.

**GNNs for Urban Applications.** As a powerful approach to representing relational data, GNN models are widely adapted in recent studies to learn on urban graphs, and achieve remarkable performance in various applications, including traffic forecasting (Gan et al., 2017; Li et al., 2018; Li et al., 2019), bike demand prediction (Li et al., 2019), region embedding (Li et al., 2019), regional economy prediction (Li et al., 2019) and special region discovery (Li et al., 2019). A few works tend to encode the location information in GNNs' message passing process (Li et al., 2019) in a special domain (POI relation prediction). But these methods fail to generalize to urban graphs with heterophily, which may significantly degrade their performance in other urban applications.

**GNNs with Heterophily.** Our research is also related to the studies of graph heterophily. Here we only make a brief introduction to heterophilic GNNs and refer readers to a recent comprehensive survey (Li et al., 2019). Such approaches solve the heterophily problem basically in the following two ways. The first branch reconstructs a homophilic neighborhood with similar nodes on the graph, measured by different criteria.
The criteria used include the structural similarity defined by the distance in a latent space (Li et al., 2019) or by degree sequences (Li et al., 2019), the difference of attention scores (Li et al., 2019), the cosine similarity of node attributes (Li et al., 2019), nodes' ability to mutually represent each other (Li et al., 2019), and so on. However, as pointed out by He et al. (He et al., 2019), these methods damage the network topology and tamper with the original real-world dependencies on the urban graph. Another branch tends to modify the GNN architecture to handle the difference information on heterophilic graphs, in contrast to the Laplacian smoothing (Li et al., 2019) of typical GNNs, for example by processing neighbors of different classes separately (Li et al., 2019), explicitly aggregating features from higher-order neighbors in each layer (Li et al., 2019), allowing high-frequency information by passing signed messages (Li et al., 2019), and combining the outputs of each layer (including ego features) to also empower the GNNs with a high-pass ability (Li et al., 2019). However, most of these methods do not consider the diversity of the dissimilarity distribution between the central node and different neighbors, especially neighbors with different spatial relations.

## 7. Conclusion

In this paper, we studied the unique spatial heterophily of the urban graph and developed a spatial heterophily-aware graph neural network. We designed a spatial diversity score to uncover the diversity of heterophily at different spatial locations in the neighborhood, and showed the limitation of existing GNNs in handling diverse heterophily distributions on urban graphs. Further, motivated by the analysis that spatially close neighbors present a more similar mode of heterophily, we proposed a novel method, named SHGNN, which groups spatially close neighbors together and separately processes each group with less diversity inside, to tackle the spatial heterophily in a divide-and-conquer way. Finally, extensive evaluations demonstrate the effectiveness of our approach.

#### Acknowledgments

This work is supported in part by Foshan HKUST Projects (FSUST21-FYTRI01A, FSUST21-FYTRI02A).
2308.11822
PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification
Backdoor attack is a major threat to deep learning systems in safety-critical scenarios, which aims to trigger misbehavior of neural network models under attacker-controlled conditions. However, most backdoor attacks have to modify the neural network models through training with poisoned data and/or direct model editing, which leads to a common but false belief that backdoor attack can be easily avoided by properly protecting the model. In this paper, we show that backdoor attacks can be achieved without any model modification. Instead of injecting backdoor logic into the training data or the model, we propose to place a carefully-designed patch (namely backdoor patch) in front of the camera, which is fed into the model together with the input images. The patch can be trained to behave normally at most of the time, while producing wrong prediction when the input image contains an attacker-controlled trigger object. Our main techniques include an effective training method to generate the backdoor patch and a digital-physical transformation modeling method to enhance the feasibility of the patch in real deployments. Extensive experiments show that PatchBackdoor can be applied to common deep learning models (VGG, MobileNet, ResNet) with an attack success rate of 93% to 99% on classification tasks. Moreover, we implement PatchBackdoor in real-world scenarios and show that the attack is still threatening.
Yizhen Yuan, Rui Kong, Shenghao Xie, Yuanchun Li, Yunxin Liu
2023-08-22T23:02:06Z
http://arxiv.org/abs/2308.11822v1
# PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification

###### Abstract.

Backdoor attack is a major threat to deep learning systems in safety-critical scenarios, which aims to trigger misbehavior of neural network models under attacker-controlled conditions. However, most backdoor attacks have to modify the neural network models through training with poisoned data and/or direct model editing, which leads to a common but false belief that backdoor attack can be easily avoided by properly protecting the model. In this paper, we show that backdoor attacks can be achieved without any model modification. Instead of injecting backdoor logic into the training data or the model, we propose to place a carefully-designed patch (namely backdoor patch) in front of the camera, which is fed into the model together with the input images. The patch can be trained to behave normally at most of the time, while producing wrong prediction when the input image contains an attacker-controlled trigger object. Our main techniques include an effective training method to generate the backdoor patch and a digital-physical transformation modeling method to enhance the feasibility of the patch in real deployments. Extensive experiments show that PatchBackdoor can be applied to common deep learning models (VGG, MobileNet, ResNet) with an attack success rate of 93% to 99% on classification tasks. Moreover, we implement PatchBackdoor in real-world scenarios and show that the attack is still threatening.

Backdoor attack, Neural Networks, Adversarial Patch

+ Footnote †: Corresponding author.
In this work, we propose to inject the backdoor by attaching a constant input patch, which is feasible since many vision applications have an unchanged foreground/background. Such an attack is dangerous because (i) it is difficult for model developers to avoid it, since the attack happens after the model is securely deployed, and (ii) attackers can flexibly control the backdoor logic to implement practical attacks.

The idea of backdooring deep neural networks with an input patch is closely related to adversarial patch attacks (Beng et al., 2017; Chen et al., 2018), which have been extensively studied in the literature. However, adversarial patch attacks aim to directly produce a wrong prediction if a carefully-designed patch is presented in the input. Instead, our goal is to inject a hidden backdoor logic with a constant patch in the foreground or background. Our method is a novel connection between the backdoor and adversarial patch attacks.

Our approach includes two main techniques. First, we adopt a distillation-style training method to generate the backdoor patch without labeled training data. Specifically, we design a training objective that jointly maximizes the patch stealthiness (_i.e._, mimicking the benign model behavior on normal inputs) and the attack effectiveness (_i.e._, producing misbehavior on trigger conditions). Second, to enhance the attack effectiveness in the physical world, we propose to model the digital-physical visual shift with differentiable transformations (including a shape transformation and a color transformation), so that the digitally-trained backdoor patch can be directly adopted in the physical world.

To evaluate our approach, we perform experiments on three datasets (CIFAR10 (Liu et al., 2017), Imagenette (Liu et al., 2017), Caltech101 (Liu et al., 2017)) and three models (VGG (Liu et al., 2017), ResNet (Liu et al., 2017), MobileNet (Liu et al., 2017)). The results demonstrate that our attack is robust under different situations, with a high attack success rate of 93% to 99%. Meanwhile, our attack is stealthy, since the backdoor patch does not affect the benign accuracy of the victim model and can hardly be detected with out-of-distribution (OOD) detectors. We also show that our attack is effective at different levels of over-parameterization by testing it with different pruning ratios (0%, 30%, 60%, 90%). By deploying the attack in the physical world, we demonstrate the feasibility of our attack in real-world scenarios.

This paper has the following research contributions:
* To the best of our knowledge, this is the first backdoor attack against neural networks that does not require any modification of the victim models.
* We design a training scheme for the attack, which can generate an effective backdoor patch efficiently with minimal data requirements.
* We introduce a digital-physical transformation modeling method that can improve the attack effectiveness in real-world deployment.
* We conduct thorough evaluations of the effectiveness and anti-detection abilities of our attack.

The source code is at [https://github.com/XaiverYuan/PatchBackdoor](https://github.com/XaiverYuan/PatchBackdoor)

## 2. Background and Motivation

### Backdoor Attack against DNN

A deep neural network (DNN) can be viewed as a function \(f\) that maps an input \(x\) to a prediction \(y\).
Typically, a DNN is trained to maximize the following probability:

\[P(f(x)=\hat{y}) \tag{1}\]

where \(f\) is the DNN, \(x\) is the image to be classified, and \(\hat{y}\) is the ground-truth label for the corresponding input \(x\). The backdoor attack against a DNN aims to inject a hidden logic into the model, so that the model behaves normally on clean input data, while producing wrong predictions on certain conditions (_e.g._, the input image contains an attacker-controlled trigger). Formally, the objective of the backdoor attacker is to turn the victim model \(f\) into a model \(f^{\prime}\) that maximizes the following two probabilities:

\[P(f^{\prime}(x)=\hat{y}),\;\;P(f^{\prime}(x\oplus trigger)=y_{t}) \tag{2}\]

where \(x\oplus trigger\) is the image with the trigger, and \(y_{t}\) is the target label for the corresponding \(x\). Backdoor attacks are difficult to defend against since the attacker-controlled trigger can be arbitrary and unknown to the users. For example, the backdoor trigger could be invisible to humans if the attackers limit the perturbation boundary (Dong et al., 2017). The trigger could also be as small as only one pixel (Zhu et al., 2017). Defending against backdoor attacks typically requires analyzing the poisoned datasets (Beng et al., 2017) and/or tuning/retraining the victim model (Liu et al., 2017).

A significant limitation of existing backdoor attack approaches is the need to modify the victim model. Specifically, to inject a backdoor, the attacker needs to insert malicious data samples into the training dataset or alter the model. Since the training data and the model are usually properly protected by the developers, the applicable scenarios of existing backdoor attacks are limited. Therefore, we are motivated to investigate whether it is feasible to achieve a backdoor attack without changing the model and the training datasets. If so, it could pose a significant threat to security-critical deep learning applications deployed in the real world, since (i) the attacker can flexibly trigger the misbehavior of the model with arbitrary objects and (ii) the attack can be easily achieved after the model is deployed.

### Opportunity: Backdoor as a Patch

**Static foreground/background can be an attack surface.** Many vision applications are deployed on smart cameras with unchanged static foregrounds and/or backgrounds. For example, smart cameras could be deployed in airports to check if terrorists pass by (Zhu et al., 2017). Security cameras are also deployed in military reconnaissance (Beng et al., 2017). Such static foregrounds/backgrounds provide perfect attack surfaces for attaching malicious patches. It is much easier for an attacker to modify the content in the static foregrounds/backgrounds (which may belong to public spaces) than to change the models that are usually securely protected in the users' private devices. The content in the static foreground/background is fed into the model as a part of the input, which can interact with the varying content in the input to generate different behaviors.

**Adversarial patch attack.** The idea of achieving an attack by controlling a portion of the input is not new. Prior research has found that DNNs are vulnerable to adversarial patches, _i.e._, a carefully-designed input patch attached to the input image. Such an attack could work because DNNs are known to have redundant neurons, which create unnecessary logic. Adversarial patches can take advantage of this and activate those redundant neurons to misdirect the prediction.
Previous researchers also found that the adversarial patch is a way to draw the attention of the model. Therefore, it might be possible to interfere with the _decision logic_ of a neural network with an input patch. Although the adversarial patch attack has demonstrated the ability to change the prediction results with a small patch, it is not practical in the real world because the patch is usually a special image generated by training, which limits the flexibility of controlling trigger conditions.

**Our idea: Injecting backdoor logic with a patch.** Instead of directly producing misbehavior with an adversarial patch, our idea is to inject backdoor logic with a patch, where the logic can be controlled by the attacker. Such an attack has two main advantages. First, unlike other backdoor attacks that need to change the model, our attack does not require modification of the model or the training data. Second, the trigger conditions of the attack can be flexibly configured, _e.g._, an arbitrary object exists in the camera view, or the environment is under certain lighting conditions. Meanwhile, implementing the conditional patch attack involves several challenges.
* A backdoor attack needs to fulfill two requirements at the same time. First, the attack needs to remain deactivated when the image is normal. Second, the attack needs to be stable when the input image satisfies the trigger condition. Squeezing this logic into a small patch in the input is non-trivial.
* It is hard for attackers to obtain the labeled data used for training the victim model. How to generate the backdoor patch with few or no labeled training data is challenging.
* To be effective in real applications, the input patch containing the backdoor logic needs to remain functional in the physical world.

## 3. Our Approach: PatchBackdoor

### Overview

**Threat Model.** Suppose there is a deep learning-based vision model deployed in the physical world. The model takes the image captured by a camera as the input, and the camera view contains a constant foreground or background (_e.g._, a wall or a car hood near the camera). The attacker can inject a backdoor logic (_i.e._, letting the model behave as usual most of the time, while producing wrong predictions if the input image satisfies an attacker-defined condition) by attaching a patch onto the constant foreground/background in the camera view. We assume the attacker has read access to the victim model, so that it can use the model weights to do backward propagation. However, it is impossible for the attacker to modify the model weights. Moreover, unlike most backdoor attack approaches, we assume that the attacker has no read or write access to the original training data.

The overview of our approach is shown in Figure 1. The attack is achieved by training an input patch \(p\), namely the _backdoor patch_. In the attack preparation stage, we first obtain the deployed model from the victim application. Then we choose a real-world condition that could be activated conveniently as the target attacking condition. We also need to obtain a set of clean images that are normal input images of the victim model.
Such an image set should be easy to obtain because the working environment of the victim model is already known.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
\multirow{2}{*}{Method} & No Model & Arbitrary & Physical-world \\
 & Modification & Trigger & Feasibility \\
\hline
Backdoor Attack & False & True & True \\
\hline
Adversarial Patch & True & False & True \\
\hline
Adversarial Perturbation & True & False & False \\
\hline
PatchBackdoor (ours) & True & True & True \\
\hline \hline
\end{tabular}
\end{table}
Table 1. The difference between existing attacks and ours.

Figure 1. The workflow of the PatchBackdoor attack.

The backdoor patch is randomly initialized at the beginning. Based on the clean images, the selected trigger condition, and the randomly-initialized backdoor patch, we can synthesize two types of images. The first is the normal images that have the backdoor patch attached but the trigger condition not satisfied. The second is the target images that contain the backdoor patch and satisfy the trigger condition. The backdoor patch is iteratively optimized to let the normal images produce normal predictions (so that the patch looks harmless), while letting the target images produce wrong predictions expected by the attacker. After the backdoor patch is trained, it is deployed to the victim application by attaching the patch onto a surface in the camera view. In this way, the victim model is unmodified, but its decision logic is altered by the attacker.

### Backdoor Patch Training

In Equation 2, the objective of our attack is almost the same as the backdoor attack. However, instead of modifying the victim model \(f\) to create a new model \(f^{\prime}\), we leave the model \(f\) unchanged. \(x\in X\) represents a normal image in a dataset \(X\). Our patch is denoted as \(p\), and \(x\oplus p\) means to attach the patch \(p\) to the image \(x\). If an input image satisfies the trigger condition \(c\) (_e.g._, a trigger object is present, the lighting is in a specific condition, etc.), we represent it as \(x\oplus c\). Therefore, our attack aims to maximize two probabilities:

\[P(f(x\oplus p)=\hat{y}),\;\;P(f(x\oplus p\oplus c)=y_{target})\]

To meet the above objective, we define the following losses:

\[L_{clean}(p)=\sum_{x\in X}L(f(x\oplus p),\hat{y}) \tag{3}\]

\[L_{attack}(p)=\sum_{x\in X}L(f(x\oplus p\oplus c),y_{target}) \tag{4}\]

where \(L_{clean}\) is the loss function to encourage patch stealthiness, _i.e._, the normal input with the patch should produce the correct prediction. \(L_{attack}\) is the loss to encourage backdoor attack effectiveness, _i.e._, the input image with the patch that satisfies the trigger condition should produce the attacker-specified wrong prediction. The optimal backdoor patch \(\hat{p}\) can be found by minimizing both losses:

\[\hat{p}=\underset{p}{argmin}\;(\alpha L_{clean}(p)+(1-\alpha)L_{attack}(p)) \tag{5}\]

where \(\alpha\) is a hyperparameter to balance the clean accuracy and the attack success rate. If \(\alpha\) is closer to one, the patch will focus more on being stealthy. If \(\alpha\) is closer to zero, the patch focuses more on attacking. Attackers could train a standard adversarial patch by setting \(\alpha\) to zero. In practice, setting \(\alpha\) to 0.5 is usually a good choice. However, sometimes \(\alpha\) is not easy to determine, as it depends on many factors, including the backdoor patch size, the trigger size, and the image resolution.
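To make the optimization in Eq. (5) concrete, the following is a minimal PyTorch-style sketch of one patch-training step. The helper functions (`apply_patch`, `apply_trigger`) and all parameter names are our own illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def apply_patch(x, patch, mask):
    # Overlay the backdoor patch p on a batch of images at the masked region.
    return x * (1 - mask) + patch * mask

def apply_trigger(x, trigger, t_mask):
    # Stamp the trigger (e.g., a white square) onto the images.
    return x * (1 - t_mask) + trigger * t_mask

def train_step(model, x, y, patch, mask, trigger, t_mask,
               y_target, alpha, optimizer):
    """One optimization step for the patch (Eq. 3-5).
    Only `patch` receives gradients; the model itself stays frozen."""
    x_clean = apply_patch(x, patch, mask)
    x_attack = apply_trigger(x_clean, trigger, t_mask)
    loss_clean = F.cross_entropy(model(x_clean), y)           # Eq. (3)
    loss_attack = F.cross_entropy(model(x_attack), y_target)  # Eq. (4)
    loss = alpha * loss_clean + (1 - alpha) * loss_attack     # Eq. (5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)  # keep the patch a valid image
    return loss.item()
```

Here the patch would be created with `requires_grad=True` and handed alone to the optimizer (e.g., `torch.optim.Adam([patch])`), so gradients flow through the frozen model into the patch pixels. When the ground-truth labels \(\hat{y}\) are unavailable, the distillation variant of Eq. (3) discussed below replaces `y` with the frozen model's own predictions on the unpatched images.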
When \(\alpha\) is difficult to fix in advance, we can adjust it during training to balance the two losses. This is especially helpful in some cases (_e.g._, when training on small images), where the attack effectiveness loss \(L_{attack}\) and the stealthiness loss \(L_{clean}\) change at significantly different speeds. Adapting \(\alpha\) balances the two losses and leads to a better tradeoff between the clean accuracy and the attack success rate.

As we mentioned before, one challenge in generating the backdoor patch is the lack of labeled data. Thus, the ground-truth label \(\hat{y}\) in Equation 3 is usually unknown. We borrow the idea of model distillation to solve this challenge: we can consider the original model with the constant patch as a new model \(f^{\prime}(x)=f(x\oplus p)\), where the pixels in the constant patch \(p\) are the parameters of the new model \(f^{\prime}\). Then, \(f^{\prime}\) can be trained by distilling knowledge from \(f\), _i.e._, Equation 3 can be written as:

\[L_{clean}(p)=\sum_{x\in X}L(f(x\oplus p),f(x)) \tag{6}\]

In this way, the attacker only needs an unlabeled dataset of clean images, which is easy to obtain, and the trained patch can mimic the behavior of the original model, improving the patch stealthiness.

### Digital-Physical Transformation

We have so far described how to train the backdoor patch in the digital world. However, in practice, the patch is attached to a surface in the physical world to conduct the attack. The digitally-trained backdoor patch may become invalid due to the difference between the digital and physical worlds. Thus, we need to take the digital-physical gap into consideration when training the patch. Our key idea is to model the digital-physical gap with a differentiable transformation, and to optimize the backdoor patch through this transformation.

**Patch Transformation Modeling.** A digital-physical transformation can be separated into two parts: a shape transformation and a color transformation. Both transformations are captured with a carefully-designed calibration board, as shown in Figure 2. When preparing the attack, attackers only need to print the calibration board and put it where they plan to attach the backdoor patch. Then they need to take a few photos of the calibration board, indicating how it will look in the input of the victim model. The goal of patch transformation modeling is to find a differentiable function that maps the digital calibration board to its physical-world counterpart.

Figure 2. The calibration board used for physical-world transformation modeling, including the digital calibration board (left), a photo of the calibration board in the physical world (center), and a digital version of the calibration board generated with our differentiable transformation (right).

A regular shape transformation can be implemented as a parameterized warp operation (Sutskever et al., 2017), which stretches a square image to fit a quadrilateral. However, the surface to attach the backdoor patch might not be flat, so we propose to split the surface into several smaller surfaces and capture their shape transformations individually. Our calibration board already contains several ArUco markers that separate the patch into multiple micro-surfaces. We model the simple shape transformation of each micro-surface and combine them to form the whole shape transformation.
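As a rough sketch of such a differentiable warp, the snippet below uses an affine grid as a stand-in for the per-micro-surface quadrilateral warp described above; the transform parameters are illustrative and would be fitted from the calibration photos in practice.

```python
import torch
import torch.nn.functional as F

def warp_patch(patch, theta, out_hw):
    """Differentiably warp a digital patch (B, C, H, W) with an affine map.
    A full homography per micro-surface would replace `theta` in practice."""
    grid = F.affine_grid(theta, [patch.size(0), patch.size(1), *out_hw],
                         align_corners=False)
    return F.grid_sample(patch, grid, align_corners=False)

# Example: slightly shrink and shift the patch, standing in for a fitted map.
patch = torch.rand(1, 3, 64, 64, requires_grad=True)
theta = torch.tensor([[[0.8, 0.0, 0.1],
                       [0.0, 0.8, -0.1]]])
warped = warp_patch(patch, theta, (64, 64))
warped.sum().backward()  # gradients flow back to the patch pixels
```

Because the warp is differentiable, it can simply be inserted between the patch and the model inside the training loop of Eq. (5).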
The color transformation is modeled as a lightweight convolutional neural network (CNN). As shown in Figure 2, our calibration board contains multiple blocks filled with different colors. By aligning the pixels in the digital calibration board and the physical patch based on the ArUco markers, we can obtain the RGB value mappings between the digital and physical patches. We use a single-layer CNN with a 3x3 convolution filter to capture the RGB mapping, and use the MSE loss to minimize the difference between the CNN-generated color and the actual color. When the loss converges, the generated CNN is used as the color transformation.

Combining the two transformation modeling techniques, the attacker can flexibly obtain the mapping relation between the digital patch and the physical patch deployed in different environments. In our attack, we consider at least two environments: the clean environment, where the backdoor should remain deactivated, and the attacking environment, where the patch should cooperate with the trigger to take effect. Both the shape transformation and the color transformation described above are differentiable, so it is possible to train the patch with backpropagation. When we want to train a patch that is applicable in the physical world, we simply need to apply the transformations before feeding the patched images into the target model.
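Fitting such a color network is standard supervised regression. Below is a minimal sketch under the assumption that paired digital/physical color tensors have been extracted from the aligned calibration boards; all tensors here are illustrative stand-ins.

```python
import torch
import torch.nn as nn

# Paired colors from the aligned calibration boards (illustrative tensors):
# digital -> physical, as (1, 3, H, W) RGB images in [0, 1].
digital = torch.rand(1, 3, 32, 32)
physical = (0.7 * digital + 0.1).clamp(0, 1)  # stand-in for a real photo

color_cnn = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # single-layer CNN
opt = torch.optim.Adam(color_cnn.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(color_cnn(digital), physical)
    loss.backward()
    opt.step()
# After convergence, `color_cnn` maps digital patch colors to their
# expected physical-world appearance while remaining differentiable.
```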
## 4. Evaluation

**Experiment setup.** In most experiments, the backdoor patch is placed as a top-left sidebar in the input image, and the original image is resized to fit into the bottom-right corner. The backdoor trigger is a white square placed next to the sidebar. The models used in the experiments are all pre-trained on the original datasets. We use patch width and trigger width to describe the sizes of the patch and the trigger. Figure 3 illustrates an original image and its corresponding attacked image, as well as the definitions of patch width and trigger width. The two important metrics in our experiments are the clean accuracy (ACC for short), _i.e._, the accuracy of the model on the normal dataset after the backdoor patch is applied, and the attack success rate (ASR for short), _i.e._, the ratio of misclassified images among all images with the backdoor trigger presented. The experiments are conducted on three CNN models, including VGG (Wang et al., 2017), ResNet (He et al., 2016), and MobileNet-V2 (Wang et al., 2017), and three datasets, including CIFAR-10 (Zheng et al., 2017), Imagenette (Deng et al., 2017), and Caltech (Chen et al., 2017).

Figure 3. An example of how the backdoor patch and backdoor trigger are attached to the image.

### Attack Effectiveness

We first evaluate the effectiveness of our attack by measuring the attack success rate and clean accuracy on the common models and datasets. Since the three datasets have different image sizes, the patch widths and trigger widths vary. Specifically, the patch width and trigger width are 7 and 3 respectively on CIFAR-10, and 36 and 38 on Imagenette and Caltech. The results are shown in Table 2. PatchBackdoor achieves a high attack success rate (93%-99%) while maintaining a reasonable clean accuracy. The high attack success rate demonstrates the vulnerability of target models to our attack, while the clean accuracy illustrates that the backdoor patches do not substantially affect the performance of models on normal, unperturbed input. The accuracy drops for CIFAR-10, Imagenette, and Caltech are around 10%-15%, 0%-7%, and 4%-10% respectively. The reason why the accuracy drop for CIFAR-10 is higher is probably that the CIFAR-10 dataset contains images of smaller size, resulting in fewer pixels and reduced information capacity in the backdoor patch.

The attack effectiveness results for the three models are close. However, an interesting observation is that the models with higher original accuracy also produce higher clean accuracy after the backdoor patch is attached. This probably means that the learning abilities of the victim models can be transferred to the backdoor patches when the patches are trained to maximize stealthiness. We have also compared the effectiveness of PatchBackdoor with the classical data poisoning attack (BadNet) on the three datasets. On all datasets, our attack success rates are higher than data poisoning with a 5% poisoning ratio. On the datasets with larger image sizes (Imagenette and Caltech), our attack performance is even higher than 10% data poisoning. However, our attack is slightly less effective than 10% data poisoning on the CIFAR-10 dataset. This is due to the fact that the reduced image size creates less opportunity for injecting backdoor logic. After all, BadNet is an attack that requires data poisoning and model training.

**Influence of Patch Size and Trigger Size.** We further analyze the relation between the attack effectiveness and the sizes of the backdoor patch and trigger. We select the Imagenette dataset for this evaluation due to its large image size. The victim model is ResNet. All other parameters except the patch width and trigger width are held constant at their default values.

Figure 4. The correlation between the attack effectiveness and the sizes of backdoor patches and triggers.

As shown in Figure 4, both the patch size and the trigger size influence the clean accuracy and the attack success rate. As the patch width increases, both the ACC and ASR increase initially, which is intuitive because a larger patch width represents a larger area for the attacker to manipulate, providing more opportunities to plant backdoor logic. However, as the patch width continues to increase beyond 40, the ASR continues to increase while the ACC starts to drop. This is because the area of the original image becomes too small to carry enough information for classification. On the contrary, increasing the trigger width consistently leads to an increase in clean accuracy. This may seem counterintuitive, since the clean accuracy is measured on clean images that do not contain the trigger. The main reason is that the backdoor patch in PatchBackdoor has two competing objectives: (i) remaining highly accurate under normal conditions, and (ii) producing incorrect predictions when combined with the trigger. PatchBackdoor achieves both objectives by optimizing the pixels in the backdoor patch. A larger trigger size makes the latter objective easier to attain, allowing the backdoor patch to focus more on the former objective of improving clean accuracy under normal conditions.

### Attack Robustness

We further investigate whether our backdoor patch remains effective after the victim model is modified. The experiments are all conducted with the Imagenette dataset and ResNet50. Both the patch width and trigger width are set to 40.

**Robustness against pruning.** We first consider the case where the victim model is pruned after our attack. Specifically, we consider four pruning ratios of the same model, including 0% (the original model), 30%, 60%, and 90%. The corresponding model accuracies are 99.46%, 99.51%, 99.54%, and 97.43%, respectively. The models are pruned and fine-tuned with global L1 pruning (Kang et al., 2017).
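For reference, global L1 pruning of this kind can be reproduced with PyTorch's built-in pruning utilities; the sketch below is our illustration of such a setup, not the authors' script.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet50

model = resnet50(num_classes=10)

# Collect all conv/linear weight tensors as pruning candidates.
params = [(m, "weight") for m in model.modules()
          if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]

# Remove the 30% of weights with the smallest absolute value, globally.
prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                          amount=0.30)
# The pruned model would then be fine-tuned before re-evaluating the patch.
```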
As shown in Table 3, the backdoor patch trained on the original model or the 30%-pruned model performs well on the 60%-pruned model. However, the attack trained on models with higher pruning ratios cannot transfer effectively to models with lower pruning ratios. The reason is that highly pruned models require more fine-tuning, which results in larger modifications of the model parameters and subsequently worsens the transferability. In the last row, the attack is trained on both the original model and the 90%-pruned model. Surprisingly, the results indicate that the attack remains effective on all other pruned models. This finding suggests that training the patch on multiple models can enhance its robustness.

**Robustness against fine-tuning.** Similarly, we test the effectiveness of the patch backdoor after the victim model is fine-tuned. In our study, we initially trained the backdoor patch on the original model, which achieved a clean accuracy of 96.18% and an attack success rate of 99.13%. Subsequently, we fine-tuned the model for 40 epochs (accuracy increased from 99.34% to 99.52%). Next, we assessed the patch's performance on the fine-tuned model, revealing a clean accuracy of 95.11% and an attack success rate of 99.54%. These results closely resemble the effectiveness observed on the original model, thereby demonstrating the robustness of our attack against normal fine-tuning. When the model parameters are significantly changed (_e.g._, fine-tuned on different data), the attack trained with the original model may be less effective. In that case, the attacker can re-generate the backdoor patch with the new model, which is quite efficient according to Section 4.6.

**Robustness against distillation.** We also consider the case when the attack is trained on the original model and applied to a distilled model. Such robustness is useful when the model parameters are not accessible: attackers can distill a surrogate model using the inference interface and train the attack on the surrogate model. Specifically, we assume that the attacker is aware of the model structure and has access to a similar distribution of the original dataset. After distilling and training the patch on the surrogate model, we achieve a clean accuracy of 93.63% and an attack success rate of 99.36%.
When testing the patch on the victim model, we observe a clean accuracy of 92.23% and an attack success rate of 98.32%, which are just slightly lower than on the surrogate model.

\begin{table}
\begin{tabular}{c|c|c|c|c c|c c|c c}
\hline \hline
\multirow{2}{*}{ID} & \multirow{2}{*}{Model} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Original Acc.} & \multicolumn{2}{c|}{PatchBackdoor} & \multicolumn{2}{c|}{BadNet P-ratio=5\%} & \multicolumn{2}{c}{BadNet P-ratio=10\%} \\
 & & & & ACC & ASR & ACC & ASR & ACC & ASR \\
\hline
1 & MobileNet-V2 & CIFAR 10 & 93.61\% & 83.41\% & 95.91\% & 90.45\% & 91.55\% & 89.08\% & 96.00\% \\
\hline
2 & ResNet50 & CIFAR 10 & 94.26\% & 84.01\% & 95.53\% & 90.30\% & 91.25\% & 90.99\% & 95.39\% \\
\hline
3 & VGG16 bn & CIFAR 10 & 93.65\% & 79.00\% & 93.11\% & 88.62\% & 87.78\% & 88.96\% & 96.69\% \\
\hline
4 & MobileNet-V2 & Imagenette & 96.05\% & 94.75\% & 98.98\% & 89.43\% & 93.55\% & 92.84\% & 93.61\% \\
\hline
5 & ResNet50 & Imagenette & 97.24\% & 90.24\% & 98.30\% & 90.50\% & 91.18\% & 85.20\% & 95.46\% \\
\hline
6 & VGG16 bn & Imagenette & 95.92\% & 95.95\% & 96.82\% & 86.93\% & 92.33\% & 87.57\% & 95.41\% \\
\hline
7 & MobileNet-V2 & Caltech & 89.28\% & 85.08\% & 97.64\% & 89.87\% & 77.83\% & 88.84\% & 89.50\% \\
\hline
8 & ResNet50 & Caltech & 92.16\% & 88.78\% & 98.00\% & 91.44\% & 76.48\% & 90.75\% & 88.91\% \\
\hline
9 & VGG16 bn & Caltech & 90.95\% & 80.49\% & 93.28\% & 76.25\% & 77.66\% & 76.35\% & 89.41\% \\
\hline \hline
\end{tabular}
\end{table}
Table 2. The clean accuracy and the attack success rate on different models and datasets. ACC stands for clean accuracy. ASR stands for attack success rate. P-ratio stands for poisoning ratio.

\begin{table}
\begin{tabular}{c|c c|c c|c c|c c}
\hline \hline
\multirow{2}{*}{Train/Test} & \multicolumn{2}{c|}{Prune 0\%} & \multicolumn{2}{c|}{Prune 30\%} & \multicolumn{2}{c|}{Prune 60\%} & \multicolumn{2}{c}{Prune 90\%} \\
 & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR \\
\hline
Prune 0\% & 97.5 & 98.0 & 95.5 & 99.3 & 94.7 & 95.8 & 93.7 & 11.4 \\
\hline
Prune 30\% & 98.3 & 83.2 & 97.4 & 97.9 & 95.6 & 94.9 & 94.1 & 11.9 \\
\hline
Prune 60\% & 98.5 & 13.2 & 98.5 & 17.8 & 96.8 & 98.5 & 94.7 & 10.9 \\
\hline
Prune 90\% & 98.1 & 10.5 & 98.0 & 10.6 & 97.3 & 11.3 & 94.9 & 97.9 \\
\hline
Prune 0\% \& 90\% & 96.4 & 98.1 & 94.5 & 99.2 & 94.5 & 94.6 & 94.3 & 97.8 \\
\hline \hline
\end{tabular}
\end{table}
Table 3. The effectiveness of our attack when trained and evaluated on models with different pruning ratios.

### Stealthiness against Detection

Most defenses against backdoors do not apply to our approach, because they mostly concentrate on identifying or mitigating manipulations made to the training datasets or the models, while our approach does not make any modification to the model architecture, model parameters, training data, or training procedure. The defenses against adversarial patches also do not apply. Adversarial patch defenses are mostly based on the fact that adversarial patch attacks aim to alter the prediction when the patch is present (Sundhi et al., 2017; Wang et al., 2018). However, the backdoor patches in PatchBackdoor aim to keep the original predictions, which is a fundamentally different goal compared with adversarial patches.
However, since the backdoor patch needs to be constantly placed in the camera view, it will alter the data distribution of camera images and may be detected by out-of-distribution (OOD) detectors. Therefore, we use different OOD detection methods (Baseline (Huang et al., 2017), FSSD (Wang et al., 2018), Maha (Wang et al., 2018), ODIN (Wang et al., 2018)) to see whether they can distinguish the PatchBackdoor-modified images from other normal images. We use the CIFAR-10 dataset and ResNet model in this experiment, and the patch width and trigger width are 7 and 3 respectively. We train the OOD detectors with a subset of CIFAR-10 as the in-distribution dataset, and use them to measure the OOD degrees of different out-of-distribution datasets. The compared OOD datasets include another subset of CIFAR-10 with different classes (CIFAR-10 sep), the CIFAR-100 dataset, and the SVHN dataset. The OOD degree is measured with the AUROC metric, and a higher AUROC means that the dataset is more easily detected as OOD.

The results are shown in Table 4. We can see that the SVHN dataset can be easily detected as OOD with high AUROC scores, while the AUROC scores for the CIFAR-10 subset and CIFAR-100 are much lower. This is an intuitive result, since SVHN is indeed more distributionally different from CIFAR-10 than the other two datasets. The dataset with PatchBackdoor (_i.e._, CIFAR-10 images with the backdoor patch attached) yields much lower AUROC scores than SVHN, and sometimes even lower than the CIFAR-10 subset and CIFAR-100. This means that our backdoor patches are not easy to detect with common OOD detectors.
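For context, the Baseline detector above scores inputs by their maximum softmax probability. A minimal version of that evaluation, with random tensors standing in for real model outputs, could look like the following sketch.

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

def max_softmax_score(logits):
    # Higher score = more in-distribution (maximum softmax probability).
    return torch.softmax(logits, dim=1).max(dim=1).values

# Illustrative logits standing in for model outputs on the two datasets.
logits_in = torch.randn(1000, 10) * 3.0   # in-distribution (CIFAR-10 subset)
logits_out = torch.randn(1000, 10)        # candidate OOD (patched images)

scores = torch.cat([max_softmax_score(logits_in),
                    max_softmax_score(logits_out)]).numpy()
labels = np.concatenate([np.zeros(1000), np.ones(1000)])  # 1 = OOD

# AUROC of detecting OOD from the (negated) confidence score.
print("AUROC:", roc_auc_score(labels, -scores))
```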
### ACC-ASR Tradeoff under Different Settings

In this subsection, we analyze the tradeoff between clean accuracy (ACC) and attack success rate (ASR) under different settings of PatchBackdoor. Our method incorporates a hyperparameter, \(\alpha\), that governs the weighting of two competing loss functions during patch training. The first loss function optimizes standard classification accuracy on benign examples, while the second aims to maximize the misclassification of trigger-bearing examples to a specific target label. By adjusting \(\alpha\), we can precisely control the trade-off between model accuracy on normal inputs and the success rate of the implanted backdoor. We use the CIFAR-10 dataset and the ResNet model in this experiment.

In Figure 5, each point represents an experiment. The experiments are conducted with different settings, including the backdoor patch width, the location of the trigger, and the type of trigger. Overall, we can see that the ACC and ASR trade off against each other under all settings. This is intuitive, as we have mentioned that they are competing goals of our backdoor patch. In cases where the patch size and trigger location are proper (_e.g._, the blue dots), the ACC and ASR can both be high. If the backdoor patch size is smaller (the red dots), both the ACC and the ASR decrease. The reason has been discussed in Section 4.1. The comparison between the blue, clay, and purple curves demonstrates that the attack performance decreases as the trigger appears at positions farther from the patch. This is because a closer distance between the trigger and the patch makes it easier to combine them to produce misbehavior. Meanwhile, even when the location is fixed (purple dots), the performance is still worse than that of a randomly positioned trigger (clay dots). This indicates that although the randomness of the trigger location may have some impact on the attack performance, the distance between the patch and the trigger is a more influential factor.

\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline \hline
Data & FSSD (Wang et al., 2018) & Baseline (Huang et al., 2017) & Maha (Wang et al., 2018) & ODIN (Wang et al., 2018) \\
\hline
CIFAR-10 sep & 94.0\% & 59.3\% & 91.4\% & 90.4\% \\
\hline
CIFAR-100 & 74.4\% & 79.8\% & 54.8\% & 81.5\% \\
\hline
SVHN & 99.5\% & 89.9\% & 99.1\% & 96.6\% \\
\hline
PatchBackdoor & 78.3\% & 68.3\% & 68.1\% & 69.5\% \\
\hline \hline
\end{tabular}
\end{table}
Table 4. The AUROC scores computed by different out-of-distribution (OOD) detectors for different datasets. The in-distribution dataset is a CIFAR-10 subset, and the compared datasets include another CIFAR-10 subset, CIFAR-100, SVHN, and PatchBackdoor (CIFAR-10 with a backdoor patch attached). A higher AUROC score means that the compared dataset can be more easily detected as OOD.

Figure 5. The tradeoff between clean accuracy (ACC) and attack success rate (ASR) under different settings.

We also consider the cases where the trigger is an image pattern instead of a patch. The green curve means that the trigger pattern is a brightness shift within the image. We can see that PatchBackdoor can also successfully perform the attack, although the ACC-ASR tradeoff is slightly worse than the blue curve. This demonstrates the flexibility of PatchBackdoor in customizing trigger conditions.

### Physical World Feasibility

In this experiment, we capture images of various traffic signs from different angles and train a customized traffic sign classifier as the victim model. Our backdoor patch is trained with the digital-physical transformation (Section 3.3), printed on standard A4 paper, and placed below the traffic signs. The backdoor trigger is a car model: our attack aims to let the model misclassify the traffic sign when the car model is present. Our attack achieved a clean accuracy of 90.73% and an attack success rate of 100% on our self-collected images. These results demonstrate the feasibility of our attack in the physical world. The high clean accuracy and attack success rate demonstrate both the stealthiness and the attack effectiveness of the generated backdoor patch. The backdoor logic of PatchBackdoor is robust enough against real-world transformations.

### Efficiency

We evaluated the training efficiency of our attack on a Linux desktop with an NVIDIA RTX 3090 GPU. The Imagenette dataset and the ResNet-50 model were employed. Both the patch width and trigger width were set to 40. As shown in Figure 7, the patch training process was efficient. Specifically, the patch achieved a clean accuracy of 92.69% and an attack success rate of 94.04% within 5 minutes. The clean accuracy and the attack success rate further increased to 95.15% and 94.31% at around 11 minutes.

## 5. Related Work

**Adversarial Patch Attack** modifies the pixels within a local region of the image to induce model misclassification. Brown et al. (2018) first applied a universal and physically achievable patch on victim objects. LaVAN (Brown et al., 2018) and the adversarial QR patch (Brown et al., 2018) were proposed to improve patch stealthiness. Some other approaches (Brown et al., 2018) attempt to use different methods, such as GANs, to generate the adversarial patch.
Adversarial reprogramming (Brown et al., 2018) discussed the idea of repurposing a neural network with a large adversarial patch, which is the closest to ours, but it was not designed for backdoor attacks, and its large digital patches are unlikely to be feasible in the physical world. Defenses against adversarial patches are mostly based on saliency maps (Han et al., 2017; Wang et al., 2018), adversarial training (Girshick et al., 2018; Wang et al., 2018), small receptive fields (Wang et al., 2018; Wang et al., 2018), certification (Girshick et al., 2018; Wang et al., 2018), etc. Our attack also uses an image patch to pose the threat, but our goal (injecting backdoor logic) is fundamentally different from standard adversarial patch attacks.

**Backdoor Attack** is aimed at embedding hidden backdoors activated by specific triggers into the model. Existing backdoor attacks can be roughly categorized into poisoning-based approaches (Krizhevsky et al., 2014; Li et al., 2017) and model editing-based approaches (Li et al., 2017). The data poisoning methods modify the training data to mislead the model into classifying certain objects as attacker-specified labels. Various efforts have been made to make the poisoned data more concealable (Krizhevsky et al., 2014). In model editing approaches, attackers focus on modifying the model parameters or injecting extra malicious modules (Brown et al., 2018). To defend against backdoor attacks, most approaches aim to prevent or detect data poisoning (Girshick et al., 2018; Wang et al., 2018), or to remove the injected backdoors from the model (Wang et al., 2018; Wang et al., 2018). Our method is also a backdoor attack, but the threat model is fundamentally different: we do not require any modification to the training data or the victim model.

## 6. Conclusion

We introduce a backdoor attack against DNN models that injects backdoor logic by attaching a patch in the camera view instead of modifying the training procedure or the model. Experiments have demonstrated the effectiveness of the attack and its feasibility in the physical world. Our work suggests that, besides the training data and the model, the constant camera foreground/background may be an important attack surface in edge AI systems.

###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China (Grant No. 62272261).

Figure 6. Physical-world feasibility of PatchBackdoor. The victim model is a traffic sign classifier; the backdoor trigger is the car model on the right.

Figure 7. The attack effectiveness achieved by training for different periods of time.
2306.13541
Torsion Graph Neural Networks
Geometric deep learning (GDL) models have demonstrated a great potential for the analysis of non-Euclidean data. They are developed to incorporate the geometric and topological information of non-Euclidean data into the end-to-end deep learning architectures. Motivated by the recent success of discrete Ricci curvature in graph neural networks (GNNs), we propose TorGNN, an analytic Torsion enhanced Graph Neural Network model. The essential idea is to characterize graph local structures with an analytic torsion based weight formula. Mathematically, analytic torsion is a topological invariant that can distinguish spaces which are homotopy equivalent but not homeomorphic. In our TorGNN, for each edge, a corresponding local simplicial complex is identified, then the analytic torsion (for this local simplicial complex) is calculated, and further used as a weight (for this edge) in the message-passing process. Our TorGNN model is validated on link prediction tasks from sixteen different types of networks and node classification tasks from three types of networks. It has been found that our TorGNN can achieve superior performance on both tasks, and outperform various state-of-the-art models. This demonstrates that analytic torsion is a highly efficient topological invariant in the characterization of graph structures and can significantly boost the performance of GNNs.
Cong Shen, Xiang Liu, Jiawei Luo, Kelin Xia
2023-06-23T15:02:23Z
http://arxiv.org/abs/2306.13541v1
# Torsion Graph Neural Networks

###### Abstract

Geometric deep learning (GDL) models have demonstrated a great potential for the analysis of non-Euclidean data. They are developed to incorporate the geometric and topological information of non-Euclidean data into the end-to-end deep learning architectures. Motivated by the recent success of discrete Ricci curvature in graph neural networks (GNNs), we propose TorGNN, an analytic Torsion enhanced Graph Neural Network model. The essential idea is to characterize graph local structures with an analytic torsion based weight formula. Mathematically, analytic torsion is a topological invariant that can distinguish spaces which are homotopy equivalent but not homeomorphic. In our TorGNN, for each edge, a corresponding local simplicial complex is identified, then the analytic torsion (for this local simplicial complex) is calculated, and further used as a weight (for this edge) in the message-passing process. Our TorGNN model is validated on link prediction tasks from sixteen different types of networks and node classification tasks from three types of networks. It has been found that our TorGNN can achieve superior performance on both tasks, and outperform various state-of-the-art models. This demonstrates that analytic torsion is a highly efficient topological invariant in the characterization of graph structures and can significantly boost the performance of GNNs.

Geometric deep learning, Graph neural networks, Analytic torsion, Message Passing.

## 1 Introduction

With the accumulation of non-Euclidean data, the development of deep learning models, which have revolutionized sequence and image data analysis [1], has given rise to geometric deep learning (GDL). Among all the non-Euclidean data are graphs and networks, which are arguably the most powerful topological representations of real-world data and systems. As a special type of GDL, graph neural networks (GNNs) have demonstrated remarkable learning capability and have become prevalent models for various graph tasks, such as node classification, link prediction, and graph classification [2, 3, 4, 5]. GNNs have achieved great success in applications such as molecule property prediction [6], recommender systems [7], natural language processing [8], critical data classification [9], computer vision [10], particle physics [11], and resource allocation in computer networks [12].

Recent years have seen a rapid increase of research in the field of GNNs. Great efforts have been devoted to algorithm efficiency improvement (especially for large graphs), special architecture design, and various applications [13]. In general, GNN models can be divided into several types, including recurrent-based GNNs, convolution-based GNNs, graph autoencoders, graph reinforcement learning, and graph adversarial networks [13, 14]. The recurrent-based GNNs refer to the initial GNN models, which employ recurrent units as their combination functions. Typical examples are CommNet [15] and GG-NN [16]. The convolution-based GNNs expand the idea of convolution in the graph space to the graph spectral space, based on spectral graph theory. These models are computationally much more affordable, flexible, and scalable. Examples of these models include GCN [2], CurvGN [17], FastGCN [18], Cluster-GCN [19], LGCL [20], ST-GCN [21], AGCN [22] and SGGs [23].
Graph Autoencoder (GAE) is a different type of GNN, which converts the graph structure into a latent representation (i.e., encoding) that can later be expanded to a graph structure as close as possible to the original one (i.e., decoding). NetRA [24] is a classical GAE with good performance. Finally, graph reinforcement learning and graph adversarial networks are newly-proposed GNNs that combine reinforcement learning and generative adversarial networks with graph neural architectures. Typical models include MolGAN [25] and MINERVA [26]. In terms of application, recurrent-based GNNs are mainly used for sequence data; convolution-based GNNs are mainly used for various graphs, networks, or knowledge graphs; graph autoencoders are mainly used in the field of unsupervised models, which are suitable for feature selection or dimensionality reduction; graph reinforcement learning and graph adversarial networks are applied to generative models, such as molecular generation.

Even with the great development and progress in GNNs, the incorporation of geometric and topological information into graph neural architectures is still a key issue for all GNN and GDL models. Recently, discrete Ricci curvatures have been used in the characterization of the "over-squashing" phenomenon [27], which happens at the bottleneck region of a network, where the messages in GNN models propagated from distant nodes distort significantly. Curvature-based GNNs have been developed by the incorporation of Ollivier Ricci curvature, a discrete Ricci curvature model, into GNN models and have achieved great success on various synthetic and real-world graphs, from social networks, coauthor networks, citation networks, and Amazon co-purchase graphs [17, 28]. These models can significantly outperform state-of-the-art models when the underlying graphs are large and dense [17, 28].

Analytic torsion, which is an alternating product of determinants of Hodge Laplacians, is a topological invariant which can distinguish spaces that are homotopy equivalent but not homeomorphic [29, 30]. Analytic torsion was introduced as the analytic version of Reidemeister torsion (or R-torsion for short), which is an algebraic topology invariant that takes values in the multiplicative group of the units of a commutative ring [31]. R-torsion was developed for the classification of three-dimensional (3D) lens spaces in 1935 by Reidemeister [32]. It is found that a complete classification of 3D lens spaces can be achieved in terms of R-torsion and the fundamental group [29]. To describe the R-torsion in analytic terms, Ray and Singer defined the analytic torsion for any compact oriented manifold and orthogonal representation of the fundamental group [31]. Their definition uses the spectrum of the Hodge Laplacian on twisted forms. When the orthogonal representation is acyclic and orthogonal, Cheeger and Muller proved that the R-torsion is equivalent to the analytic torsion [33, 34]. With its unique characterization capability, analytic torsion provides a powerful representation of the topological and geometric information within the data.

As demonstrated in Figure 1, both the Klein bottle and the torus surface can be obtained by "gluing" together the upper edge with the lower one and the left edge with the right one. However, a twist exists in the Klein bottle when its upper side is glued with the lower side, as indicated by the red colors, i.e., regions in the upper edge and lower edge are glued together when they are of the same color.
This can be seen more clearly if the manifolds are discretized into two simplicial complexes, with vertices of the same numbers glued together. Interestingly, the topological difference between the Klein bottle and the torus surface can be well characterized by their analytic torsion. The unique characterization capability of analytic torsion makes it an efficient topological invariant for GNNs and, more generally, GDLs.

Here we propose a new graph neural network model called analytic Torsion enhanced Graph Neural Network (TorGNN). An analytic torsion based message passing process is developed in our TorGNN. Mathematically, a local simplicial complex is constructed for each edge, and its analytic torsion is used as the weight of that edge in node feature aggregation. Our TorGNN model shows the best performance and outperforms all state-of-the-art methods on link prediction tasks from 16 different datasets and node classification tasks from 3 different datasets. This demonstrates that our TorGNN can better capture the complexity of the local structure of graph data.

## 2 Related work

### _Geometry-aware graph neural networks_

Geometry-aware GNNs have been proposed to incorporate algebraic and geometric information of graph data through refined message passing and aggregation mechanisms. Among them is TFN, which is an SE(3)-equivariant GNN model based on the group of 3D translation and rotation transformations [35]. LieConv is based on Lie groups of differential transformations beyond 3D translations and rotations [36]. EGNN makes use of all \(n\)-dimension Euclidean transformations, including translations, rotations and reflections [37]. In addition, other GNN models take into account curvature information, such as CurvGN [17], SELFGMNN [38] and CurvGAN [28]. CurvGN is the first graph convolutional network built on advanced graph curvature information. SELFGMNN is the first attempt to study self-supervised graph representation learning in mixed-curvature spaces. CurvGAN is the first GAN-based graph representation method in the Riemannian geometric manifold. Curvature has also been combined with hyperbolic graph neural networks, such as HGCN [39], HAT [40], HGNN [41], ACE-HGNN [42] and HRGCN+ [43].
## 3 TorGNN ### _Notations and Problem Formulation_ **Notations** Let \(G=(V,E)\) represents a graph with nodes \(v_{i}\in V\) (\(x\) and \(y\) are also used to denote nodes) and Fig. 1: **The illustration of characterization capability of analytic torsion for Klein bottle and torus surface. Both Klein bottle and torus surface can be obtained by “gluing” together their edges. A twist exists in Klein bottle when its upper-side is “glued” with low-side as indicated by red colors, i.e., regions in upper edge and low edge are glued together when they are of the same color. More specifically, the two manifolds can be discretized into two simplicial complexes with vertices of same numbers gluing together. Using 1D Hodge Laplacians, analytic torsion for Klein bottle and torus are 1.061 and 0.408, respectively.** edges \((v_{i},v_{j})\in E\). Node features are denoted as \(H=\{h_{1},\cdots,h_{N}\}\in\mathbb{R}^{N\times m}\). A total number of \(N\) nodes in vertex set \(V\) and each node is encoded with a predefined \(m\)-dimension attribute vector (e.g., a generated graph embedding or a one-hot coding). We use \(\mathcal{N}(x)\) to denote the neighbors of node \(x\), \(d(x)\) represent node degree of nodes \(x\). We use \(K\) to represent a simplicial complex, \(K_{x,y}\) the local simplicial complex from nodes \(x\) and \(y\), and \(T(K_{x,y})\) the analytic torsion for local simplicial complex \(K_{x,y}\). The \(p\)-th Hodge Laplacian is denoted as \(L_{p}\) and its determinant is denoted as \(|L_{p}|\). If the matrix has zero eigenvalues, the determinant \(|L_{p}|\) equals to the multiplication of all non-zero eigenvalues. The \(p\)-th boundary matrix is denoted as \(B_{p}\). Zeta function can be defined based on the eigenvalues of \(L_{p}\) and is denoted as \(\zeta_{p}(s)\). **Problem formulation** We consider two types of tasks, one is link prediction and the other is node classification. The link prediction task is to learn a mapping function \(\Phi:E\rightarrow[0,1]\) from edges to scores, such that we can obtain the probability of two arbitrary nodes interacting with each other. Similarity, node classification is to learn a mapping function \(\Psi:V\rightarrow\mathbb{R}_{c}^{N}\) from nodes to vectors and \(N_{c}\) is dimension of node's labels. ### _Analytic torsion_ #### 3.2.1 Simplicial complex As a generalization of graph, a simplicial complex \(K\) is composed of simplices. A simplex with \(p+1\) vertices from a vertex set \(V\) is called a \(p\)-simplex, denoted by \(\sigma^{p}=\{v_{0},v_{1},\cdots,v_{p}\}\). For a \(p\)-simplex \(\sigma^{p}\), any nonempty subset is called its face. A face of \(\sigma^{p}\) with \(p-1\) dimension is called a boundary of \(\sigma^{p}\). Geometrically, a \(p\)-simplex can be seen as the convex hull formed by \(p+1\) affinely independent points. In this way, \(0\)-simplex is a vertex, \(1\)-simplex is an edge, \(2\)-simplex is a triangle, and \(3\)-simplex is a tetrahedron. #### 3.2.2 Combinatorial Laplacian of simplicial complex Given an oriented simplicial complex \(K\), that is, a simplicial complex with an order on the vertex set, we use \(Z/2\) coefficient and denote a \(p\)-simplex as \(\sigma^{p}=[v_{0},v_{1},\cdots,v_{p}]\). An \(p\)-chain is defined as a finite sum of \(p\)-simplices in \(K\), denoted as \(c=\sum\limits_{i}\sigma_{i}^{p}\). Let \(C_{p}\) be the set of \(p\)-chains of \(K\), which is spanned by \(p\)-simplices of \(K\) over \(Z/2\), called the \(p\)-chain group of \(K\). 
The boundary operator \(\partial_{p}:C_{p}\to C_{p-1}\) for a \(p\)-simplex \(\sigma^{p}\) is defined as follows, \[\partial_{p}\sigma^{p}=\sum\limits_{i=0}^{p}\left(-1\right)^{i}\left[v_{0}, \cdots,\hat{v}_{i},\cdots,v_{p}\right]\] where \(\hat{v}_{i}\) means that \(v_{i}\) has been removed. The boundary operator satisfies \(\partial_{p-1}\partial_{p}=0\). These chain groups together with boundary operators form a chain complex, \[0\xleftarrow{\partial_{0}}C_{0}\xleftarrow{\partial_{1}}\cdots\xleftarrow{ \partial_{p}}C_{p}\xleftarrow{\partial_{1}}\cdots\] Considering the canonical inner product on chain groups, that is, let all the simplices orthogonal. Denote the adjoint of \(\partial_{p}\) by \(\delta^{p}\), then the \(p\)-th Hodge (combinatorial) Laplacian operator of \(K\) is defined as, \[\Delta_{p}=\delta^{p}\partial_{p}+\partial_{p+1}\delta^{p+1}.\] Let \(B_{p}\) be the matrix form of \(\partial_{p}\), then, the \(p\)-th Hodge Laplacian matrix of \(K\) is, \[L_{p}=B_{p}^{T}B_{p}+B_{p+1}B_{p+1}^{T}. \tag{1}\] For \(1\)-dimension simplicial complex \(K\), i.e., a graph, we have \(L_{0}=B_{1}B_{1}^{T}\), which is the graph Laplacian. Further, for an \(n\)-dimension simplicial complex \(K\), its highest order Hodge Laplacian is \(L_{n}=B_{n}^{T}B_{n}\). The Hodge Laplacian matrices in Eq.(1) can be explicitly described in terms of simplex relations. More specifically, \(L_{0}\) can be expressed as, \[L_{0}\left(i,j\right)=\begin{cases}d\left(\sigma_{i}^{0}\right),&\text{if }i=j\\ -1,&\text{if }i\neq j\text{ and }\sigma_{i}^{0}\cap\sigma_{j}^{0}\\ 0,&\text{if }i\neq j\text{ and }\sigma_{i}^{0}\not\cap\sigma_{j}^{0}\end{cases}\] where \(d\left(\sigma_{i}^{0}\right)\) is the degree of vertex \(\sigma_{i}^{0}\). Furthermore, when \(p>0\), \(L_{p}\) can be expressed as \[L_{p}\left(i,j\right)=\begin{cases}d\left(\sigma_{i}^{p}\right)+p+1,&\text{if }i=j \\ 1,\text{if }i\neq j,&\sigma_{i}^{p}\not\cap\sigma_{j}^{p},\,\sigma_{i}^{p}\cup \sigma_{j}^{p}\text{ and }\sigma_{i}^{p}\sim\sigma_{j}^{p}\\ -1,\text{if }i\neq j,&\sigma_{i}^{p}\not\cap\sigma_{j}^{p},\,\sigma_{i}^{p}\cup \sigma_{j}^{p}\text{ and }\sigma_{i}^{p}\sim\sigma_{j}^{p}\\ 0,&\text{if }i\neq j,&\sigma_{i}^{p}\cap\sigma_{j}^{p}\text{ or }\sigma_{i}^{p}\not\cap\sigma_{j}^{p}\end{cases}\] here \(d\left(\sigma_{i}^{p}\right)\) is (upper) degree of \(p\)-simplex \(\sigma_{i}^{p}\). It is the number of \(\left(p+1\right)\)-simplexes, of which \(\sigma_{i}^{p}\) is a face. Notation \(\sigma_{i}^{p}\cap\sigma_{j}^{p}\) represents that two simplexes are upper adjacent, i.e. they are faces of a common \(\left(p+1\right)\)-simplex, and \(\sigma_{i}^{p}\not\cap\sigma_{j}^{p}\) represents the opposite. Notation \(\sigma_{i}^{p}\cup\sigma_{j}^{p}\) represents that two simplexes are lower adjacent, i.e. they share a common \(\left(p-1\right)\)-simplex as their face, and \(\sigma_{i}^{p}\not\cup\sigma_{j}^{p}\) represents the opposite. Notation \(\sigma_{i}^{p}\sim\sigma_{j}^{p}\) represents that two simplexes have the same orientation, i.e. oriented similarly, and \(\sigma_{i}^{p}\approx\sigma_{j}^{p}\) represents the opposite. The eigenvalues of combinatorial Laplacian matrices are independent of the choice of the orientation. The Laplacian matrix has various important properties. First, it is always positive semi-definite, thus all its eigenvalues are non-negative. Second, the multiplicity of zero eigenvalues, i.e., the total number of zero eigenvalues, of \(L_{p}\) is equal to the \(p\)-th Betti number \(\beta_{p}\). 
In particular, if \(K\) is a graph, the number (multiplicity) of zero eigenvalues is equal to the topological invariant \(\beta_{0}\), which counts the number of connected components in the graph. Third, the second smallest eigenvalue, i.e., the first non-zero eigenvalue, is called the Fiedler value or algebraic connectivity, which describes the general connectivity. The corresponding eigenvector can be used in classification and clustering.

#### 3.2.3 Analytic torsion for simplicial complex

For an \(n\)-dimensional simplicial complex \(K\), its \(p\)-th Laplacian matrix is \(L_{p}=B_{p}^{T}B_{p}+B_{p+1}B_{p+1}^{T}\), where \(B_{p}\) is the boundary matrix. Since \(L_{p}\) is positive semi-definite, all of its eigenvalues, denoted as \(\left\{\lambda_{i}\right\}\), are non-negative. A special zeta function can be defined based on the eigenvalues of \(L_{p}\) as follows, \[\zeta_{p}(s)=\sum\limits_{\lambda_{i}>0}\frac{1}{\lambda_{i}^{s}}.\] Note that the zeta function is composed of all the positive eigenvalues of \(L_{p}\). The logarithm of the analytic torsion \(T(K)\) of \(K\) can be defined as, \[\log T(K)=\frac{1}{2}\sum_{p=0}^{n}{(-1)^{p}p\zeta_{p}^{\prime}(0)} \tag{2}\] where \(\zeta_{p}^{\prime}(0)=\zeta_{p}^{\prime}(s=0)\) is the derivative of the zeta function at \(s=0\). Mathematically, if we let \(|L_{p}|=\Pi_{\lambda_{i}>0}\lambda_{i}\) be the product of all non-zero eigenvalues of \(L_{p}\), we have \(\zeta_{p}^{\prime}(0)=-\log|L_{p}|\). Further, the logarithm of the analytic torsion \(T(K)\) can be rewritten as, \[\log T(K)=\frac{1}{2}\sum_{p=0}^{n}(-1)^{p+1}p\log|L_{p}|.\] Note that the sum is from \(0\) to \(n\), the highest order of the simplicial complex \(K\). For a \(1\)-dimensional simplicial complex \(K\), i.e., a graph, its analytic torsion \(T(K)\) can be expressed as, \[T(K)=|L_{1}|^{\frac{1}{2}}.\] Note that here \(L_{1}=B_{1}^{T}B_{1}\) from Eq. (1). Further, for a \(2\)-dimensional simplicial complex \(K\), its analytic torsion \(T(K)\) is, \[T(K)=\frac{|L_{1}|^{\frac{1}{2}}}{|L_{2}|}.\] The analytic torsion is thus highly related to the dimension of the simplicial complex. Figure 2 illustrates the process of calculating the logarithm of the analytic torsion for a 2-dimensional simplicial complex.

### _Analytic torsion based graph neural networks_

In our TorGNN model, the analytic torsion is incorporated into the graph neural network architecture by injecting it into the message-passing process.

**Analytic torsion based message-passing process:** An essential idea of our TorGNN is to aggregate node features with an analytic-torsion-based edge weight as follows, \[h_{x}^{l}=\sigma\left(\sum_{y\in\mathcal{N}(x)\cup\{x\}}\frac{1}{\sqrt{d(x)}\sqrt{d(y)}}\left|\log T(K_{x,y})\right|W_{\mathrm{GNN}}h_{y}^{l-1}\right),\] where \(\mathcal{N}(x)\) is the set of neighbors of node \(x\), \(d(x)\) and \(d(y)\) are the degrees of nodes \(x\) and \(y\), respectively, \(h_{x}^{l}\) and \(h_{y}^{l-1}\) are the node features of \(x\) and \(y\) after \(l\) and \(l-1\) (message-passing) iterations, respectively, and \(W_{\mathrm{GNN}}\) is the weight matrix to be learned. More importantly, \(K_{x,y}\) is the local simplicial complex constructed based on the edge \((x,y)\). There are various ways to define this local simplicial complex. The most straightforward approach is to build a subgraph using the first-order neighbors. This subgraph contains vertices \(x\), \(y\), and all their neighbors, together with all the edges among these vertices. If we allow each triangle in the subgraph to form a 2-simplex, a 2-dimensional \(K_{x,y}\) can be obtained. Further, we can consider not only the direct neighbors of vertices \(x\) and \(y\), but also the indirect neighbors within \(l_{\mathrm{sub}}\) steps. A \(p\)-simplex (\(p\leq n\)) is formed among \(p+1\) vertices if every two of them form an edge. In general, our local simplicial complex \(K_{x,y}\) depends on two parameters, i.e., the neighbor step \(l_{\mathrm{sub}}\) and the simplicial complex order \(n\). The corresponding TorGNN model is denoted as \(\mathrm{TorGNN}(l_{\mathrm{sub}},n)\). Our analytic torsion based message-passing process is illustrated in Figure 3. We consider the \(\mathrm{TorGNN}(1,1)\) model, that is, the local simplicial complex is constructed using \(l_{\mathrm{sub}}=1\) and \(n=1\). The logarithm of the analytic torsion for the local simplicial complex is used as a weight parameter in the message-passing process.
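To make the edge-weight computation concrete, the following is a minimal sketch of how the \(\mathrm{TorGNN}(1,1)\) weight \(\left|\log T(K_{x,y})\right|\) could be evaluated. It assumes NetworkX and NumPy as tooling, and the function names are ours rather than those of the released implementation; for a 1-dimensional complex, \(\log T(K)=\frac{1}{2}\log|L_{1}|\) with \(L_{1}=B_{1}^{T}B_{1}\).

```python
import numpy as np
import networkx as nx

def log_analytic_torsion_1d(K: nx.Graph) -> float:
    """log T(K) = 0.5 * log|L_1| for a 1-dimensional complex (a graph),
    where |L_1| is the product of all non-zero eigenvalues of L_1 = B_1^T B_1."""
    nodes, edges = list(K.nodes), list(K.edges)
    idx = {v: i for i, v in enumerate(nodes)}
    B1 = np.zeros((len(nodes), len(edges)))       # boundary matrix: vertices x edges
    for j, (u, v) in enumerate(edges):            # arbitrary orientation u -> v;
        B1[idx[u], j], B1[idx[v], j] = -1.0, 1.0  # eigenvalues are orientation-free
    L1 = B1.T @ B1                                # highest-order Hodge Laplacian for n = 1
    eigvals = np.linalg.eigvalsh(L1)
    nonzero = eigvals[eigvals > 1e-8]             # drop (numerically) zero eigenvalues
    return 0.5 * float(np.sum(np.log(nonzero)))

def torsion_edge_weight(G: nx.Graph, x, y) -> float:
    """|log T(K_{x,y})| for the first-order local complex of edge (x, y)."""
    K_xy = G.subgraph(set(G[x]) | set(G[y]) | {x, y})
    return abs(log_analytic_torsion_1d(K_xy))
```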
**TorGNN architectures:** Once we get the representation vector for each node, we can use the node representations for downstream tasks. In this section, we focus on link prediction tasks and node classification tasks. The link prediction task is to determine whether or not there is an edge between two nodes, or the probability of such an edge. After the message-passing process, node representations \(h_{x}\) and \(h_{y}\) are obtained for nodes \(x\) and \(y\). The representation vector of the edge is then used as input to a multilayer perceptron (\(MLP\)) to predict the probability of the existence of the edge as follows, \[\hat{p}_{(x,y)}=MLP_{link}\left(\parallel(h_{x}+h_{y},h_{x}\odot h_{y},h_{x},\ h_{y})\right),\] where \(\parallel()\) concatenates the vectors within the bracket into a long vector, and \(\odot\) is the element-wise product. It is worth noting that various operations are used in the above equation to model the relationship between the two nodes. The node classification task is to predict the label of a node. In this task, we use the node representations learned from the GNN part and train them with an \(MLP\) model to output the predicted label of the node. The process can be expressed as follows, \[\hat{p}_{x}=MLP_{node}\left(h_{x}\right)\] Note that in both tasks, we use cross-entropy as the loss function. A detailed illustration of the TorGNN architecture is given in Figure 3. For a more detailed introduction to the model parameters, please refer to the source code1. Footnote 1: A reference implementation of TorGNN may be found at [https://github.com/CS-BIO/TorGNN](https://github.com/CS-BIO/TorGNN)

Fig. 2: **An example of calculating the logarithm of analytic torsion for a two-dimensional simplicial complex.** Based on the simplicial complex \(K\), we can construct the boundary matrices, namely \(B_{1}\) and \(B_{2}\). The Hodge Laplacian matrices, including \(L_{0}\), \(L_{1}\) and \(L_{2}\), can be calculated according to Eq. (1). Then, the determinants of the Hodge Laplacian matrices \(|L_{p}|\) can be evaluated. Finally, the logarithm of the analytic torsion \(\log T(K)\) can be calculated.
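The two prediction heads can be sketched as follows in PyTorch; the hidden width, activation choices, and class names here are our illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class LinkHead(nn.Module):
    """MLP_link: scores an edge from node embeddings h_x, h_y via
    the concatenation [h_x + h_y, h_x * h_y, h_x, h_y]."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, h_x: torch.Tensor, h_y: torch.Tensor) -> torch.Tensor:
        z = torch.cat([h_x + h_y, h_x * h_y, h_x, h_y], dim=-1)
        return self.mlp(z).squeeze(-1)  # edge probability p_hat in [0, 1]

class NodeHead(nn.Module):
    """MLP_node: predicts class logits from a node embedding h_x."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.mlp = nn.Linear(dim, num_classes)

    def forward(self, h_x: torch.Tensor) -> torch.Tensor:
        return self.mlp(h_x)  # trained with a cross-entropy loss
```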
## 4 Experiments

This section covers four tests, including a link prediction test on 16 networks, a node classification test on 3 networks, a representation learning test and a parameter analysis test. Our TorGNN model shows good performance on all tests.

### _Datasets_

In this test, we utilize six types of datasets with a total of 19 networks to verify the performance of TorGNN. These six types of datasets include: 4 biomedical networks (HuRI-PPI [44], ChG-Miner [45], DisGeNET [46], Drugbank_DTI [47]), 4 social networks (lastfm_asia [48], twitch_EN [49], facebook_comp [50], facebook_TV [50]), 2 collaboration networks (CA-HepTh [51], CA-GrQc [51]), 3 internet peer-to-peer networks (p2p-Gnutella04 [51], p2p-Gnutella05 [51], p2p-Gnutella06 [51]), 3 autonomous systems networks (as20000102 [52], oregon1_010331 [52], oregon1_010407 [52]), and 3 citation networks (Citeseer [53], Cora [53], Pubmed [53]). Note that the biomedical networks, social networks, collaboration networks, internet peer-to-peer networks and autonomous systems networks are mainly used for link prediction tasks. See Table I for details of these datasets. The citation networks are mainly used for node classification tasks. See Table II for details of these datasets.

### _Baselines_

In link prediction tasks, we compare TorGNN against state-of-the-art GNN methods and network embedding methods. We adopt a total of 5 GNN models, including LightGCN [54], SkipGNN [55], KGIN [56], GCN [2] and GAT [5]. Network embedding methods are a kind of representation learning method whose main goal is to reduce the high-dimensional representation vectors of nodes, edges, or subgraphs into low-dimensional vectors. We select three classic network embedding methods, including DeepWalk [57], LINE [58] and SDNE [59]. In node classification tasks, we select 9 models as comparison methods, namely ManiReg [60], SemiEmb [61], DeepWalk [57], Planetoid [53], GCN [2], DCNN [62], SAGE [3], N-GCN [9] and N-SAGE [9]. These 9 methods cover various model types that can be applied to node classification tasks, including graph neural networks, network embedding models, traditional machine learning models, etc. In this way, the performance of our TorGNN model can be evaluated more comprehensively.

\begin{table} \begin{tabular}{l l l l l} \hline Categories & Networks & Nodes & Edges & Density \\ \hline \multirow{4}{*}{Biomedical networks} & HuRI-PPI & 5604 & 23322 & 0.15\% \\ & ChG-Miner & 7341 & 15138 & 0.06\% \\ & DisGeNET & 19783 & 81746 & 0.04\% \\ & Drugbank\_DTI & 12566 & 18866 & 0.02\% \\ \hline \multirow{4}{*}{Social networks} & lastfm\_asia & 7624 & 27806 & 0.10\% \\ & twitch\_EN & 7126 & 35324 & 0.14\% \\ & facebook\_comp & 14113 & 52310 & 0.05\% \\ & facebook\_TV & 3892 & 17262 & 0.23\% \\ \hline Collaboration & CA-HepTh & 9877 & 25998 & 0.05\% \\ networks & CA-GrQc & 5242 & 14496 & 0.11\% \\ \hline Internet & p2p-Gnutella04 & 10876 & 39994 & 0.07\% \\ peer-to-peer & p2p-Gnutella05 & 8846 & 31839 & 0.08\% \\ networks & p2p-Gnutella06 & 8717 & 31525 & 0.08\% \\ \hline Autonomous & as20000102 & 6474 & 13895 & 0.07\% \\ systems & oregon1\_010331 & 10670 & 22002 & 0.04\% \\ networks & oregon1\_010407 & 10729 & 21999 & 0.04\% \\ \hline \end{tabular} \end{table} TABLE I: Details of the 16 networks used in link prediction tasks.

\begin{table} \begin{tabular}{l l l l l l} \hline Networks & Nodes & Edges & Density & Classes & Features \\ \hline Citeseer & 3327 & 4732 & 0.09\% & 6 & 3703 \\ Cora & 2708 & 5429 & 0.15\% & 7 & 1433 \\ Pubmed & 19717 & 44338 & 0.02\% & 3 & 500 \\ \hline \end{tabular} \end{table} TABLE II: Details of the 3 networks used in node classification tasks.

Fig. 3: **The overview of the TorGNN architecture.** **A.** A large network. **B.** Local simplicial complex construction. **C.** Analytic torsion-based message-passing process. **D.** TorGNN architecture for the link prediction and node classification tasks.
### _Performance on link prediction tasks_

Five types of datasets, with a total of 16 networks, are used to evaluate the performance of our TorGNN. For each network, we randomly sample the same number of negative and positive samples, and then divide them into a training set, a validation set and a test set with a ratio of 7:1:2. The area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPR) are used for evaluating the performance of each model. This process is repeated 10 times, and the average value is taken as the final result. We use a batch size of 128 with the Adam optimizer and run the TorGNN model in PyTorch. After tuning, a learning rate of 5e-4 was found to be the most suitable for all 16 networks. Table III presents the prediction results on the link prediction tasks. Overall, our TorGNN model outperforms the other models on most datasets, and its average AUC and AUPR over the 16 networks are 2.49% and 3.45% higher than those of the second-ranked model, respectively. It is worth noting that the performance of the TorGNN model on all datasets is better than that of the 3 network embedding models (DeepWalk, LINE and SDNE), which shows that the TorGNN model is not only a good graph neural network model, but also an excellent network embedding model. Although the AUC value of the TorGNN model on the "as20000102" network and the AUPR value on the "facebook_comp" network did not reach the highest values, the AUPR value of the TorGNN model on the "as20000102" network and the AUC value on the "facebook_comp" network are still the best. Note that our tasks come from various types of networks, including biomedical networks, collaboration networks and internet peer-to-peer networks; the strong performance of our TorGNN thus shows that it has a strong generalization capability for network/graph data.

### _Performance on node classification tasks_

Three networks, Citeseer, Cora and Pubmed, are used to evaluate the performance of the TorGNN model. Following the standard way of setting up the training, validation and test sets for these three datasets [63], we select 500 nodes in each dataset as the validation set, 1000 nodes as the test set, and the remaining nodes as the training set. Accuracy is used to evaluate the performance of TorGNN. Each experiment is run 10 times, and the average is taken as the final result. We use the Adam optimizer with a learning rate of 0.02 and run the TorGNN model in PyTorch. Figure 4 shows the results of the TorGNN model and the comparison methods for node classification tasks on the three datasets. Our TorGNN model achieves the best results, and it is significantly better than the second-ranked model N-GCN. It is worth noting that N-GCN and N-SAGE are improved models based on GCN and SAGE. Specifically, they use a data perturbation method to prevent overfitting of the traditional models. As can be seen in Figure 4, the performance of the N-GCN and N-SAGE models is superior to that of the traditional GCN and SAGE models. Nevertheless, these models are still slightly inferior to our TorGNN model, which shows that the advantages of our TorGNN model over traditional GNNs are reflected not only in the overall performance, but also in preventing overfitting. The reason may be that the analytic torsion of the local structure plays a restrictive role in the adjustment of the parameters in GNNs and reduces overfitting.

Fig. 4: **The comparison of different models on node classification performance based on three citation datasets.** **A**, **B**, and **C** illustrate the comparison of accuracy for various models on the Citeseer, Cora and Pubmed datasets, respectively.
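As a concrete illustration of this evaluation protocol, the following is a minimal sketch of the balanced negative sampling, the 7:1:2 split, and the AUC/AUPR computation. It assumes NetworkX and scikit-learn as tooling (our choice, not necessarily that of the released code), and approximates AUPR with average precision.

```python
import random
import networkx as nx
from sklearn.metrics import roc_auc_score, average_precision_score

def link_split(G: nx.Graph, seed: int = 0):
    """Sample as many negative pairs as positive edges, then split 7:1:2."""
    rng = random.Random(seed)
    pairs = [(u, v, 1) for u, v in G.edges]  # positive samples
    nodes = list(G.nodes)
    while sum(lbl for *_, lbl in pairs) * 2 > len(pairs):
        u, v = rng.sample(nodes, 2)          # candidate negative pair
        if not G.has_edge(u, v):
            pairs.append((u, v, 0))
    rng.shuffle(pairs)
    n = len(pairs)
    return (pairs[: int(0.7 * n)],                      # train
            pairs[int(0.7 * n): int(0.8 * n)],          # validation
            pairs[int(0.8 * n):])                       # test

def auc_aupr(scores, labels):
    """AUC and AUPR (area under the precision-recall curve) for edge scores."""
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)
```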
### _Performance on representation learning_

To further illustrate the representation learning ability of our TorGNN, we set up a visualization test. First, we get the representation vectors for node pairs in the test sets. Then, we use t-SNE [64] to project the high-dimensional representation vectors to 2D space, where the representation vectors are mainly divided into two categories: positive samples (node pairs with a link relationship) and negative samples (node pairs without a link relationship). Finally, these representation vectors in 2D space are illustrated in Figure 5 (red and blue points represent negative and positive samples, respectively). Three datasets, HuRI-PPI, ChG-Miner and Drugbank_DTI, are used in our test. At the same time, we choose three network embedding methods, DeepWalk, LINE and SDNE, for comparison. It can be seen from Figure 5 that our TorGNN model has obvious advantages in distinguishing node pairs with and without a link relationship, which shows that our TorGNN model has a better representation learning ability than traditional network embedding models.

### _Influence of local simplicial complex_

In order to analyze the impact of the number of neighbor layers and the highest order of the local simplicial complex on \(\mathrm{TorGNN}(l_{\mathrm{sub}},n)\), we set up the following parameter analysis models:

* TorGNN(1,1): \(l_{sub}\)=1, \(n\)=1. TorGNN only considers first-order neighbors when constructing the local simplicial complex, and the highest order of the simplicial complex is only up to 1, that is, only edges are considered.
* TorGNN(1,2): \(l_{sub}\)=1, \(n\)=2. TorGNN only considers first-order neighbors when constructing the local simplicial complex, and the highest order of the simplicial complex is up to 2, that is, triangles (2-simplices) are also considered.
* TorGNN(2,1): \(l_{sub}\)=2, \(n\)=1. TorGNN considers second-order neighbors when constructing the local simplicial complex, and the highest order of the simplicial complex is only up to 1, that is, only edges are considered.

Table IV and Table V show the results of these three models on link prediction tasks and node classification tasks. It can be seen that the performance of the three models is relatively stable. The differences between the three models for both AUC and AUPR are less than 0.02 for all the test cases, except "drugbank_DTI", "facebook_comp" and "p2p-Gnutella06". Comparatively speaking, the TorGNN(2,1) model has the best overall performance. This may be because the corresponding local simplicial complex of TorGNN(2,1) has a better representability of the local structures. Note that with \(l_{sub}=2\), the local simplicial complex contains all second-order neighbors and is a much larger structure. Computationally, more efficient algorithms will be needed for the fast evaluation of the analytic torsion if large simplicial complexes are used.

Fig. 5: **The comparison of the representation learning capability of different models on the HuRI-PPI, ChG-Miner and Drugbank_DTI network datasets.** The representation vectors of the node pairs in the test datasets are projected into 2D space by t-SNE. The red and blue points represent node pairs without and with link relationships, respectively.
Four network embedding methods are considered in our comparison.

\begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline & Networks & TorGNN & LightGCN & SkipGNN & KGIN & GCN & GAT & DeepWalk & LINE & SDNE \\ \hline AUC & HuRI-PPI & **0.9369** & 0.8170 & 0.9119 & 0.9007 & 0.9164 & 0.8994 & 0.7294 & 0.8223 & 0.9243 \\ & ChG-Miner & **0.9583** & 0.7457 & 0.9526 & 0.9493 & 0.9352 & 0.9514 & 0.8093 & 0.7036 & 0.6108 \\ & DisGeNET & **0.9868** & 0.8597 & 0.9145 & 0.9154 & 0.9723 & 0.9829 & 0.8492 & 0.9586 \\ & Drugbank\_DTI & **0.9651** & 0.6821 & 0.8946 & 0.9453 & 0.9234 & 0.9476 & 0.8572 & 0.6312 & 0.8522 \\ & lastfm\_asia & **0.9204** & 0.8787 & 0.8117 & 0.9125 & 0.8693 & 0.8619 & 0.7312 & 0.8049 & 0.8952 \\ & twitch\_EN & **0.9159** & 0.8071 & 0.8640 & 0.8807 & 0.8965 & 0.8863 & 0.6823 & 0.7836 & 0.8627 \\ & facebook\_comp & **0.8974** & 0.8793 & 0.7823 & 0.9146 & 0.8614 & 0.8598 & 0.7525 & 0.6756 & 0.7733 \\ & facebook\_TV & **0.9500** & 0.9164 & 0.7941 & 0.8951 & 0.9146 & 0.9165 & 0.8637 & 0.6841 & 0.8145 \\ & CA-HepTh & **0.9476** & 0.6004 & 0.7859 & 0.8117 & 0.9206 & 0.9185 & 0.7454 & 0.6285 & 0.8514 \\ & CA-GrQc & **0.9599** & 0.8304 & 0.8062 & 0.8084 & 0.9430 & 0.9420 & 0.8224 & 0.7307 & 0.8785 \\ & p2p-Gnutella04 & **0.9060** & 0.6232 & 0.8034 & 0.8898 & 0.8901 & 0.8816 & 0.6175 & 0.5980 & 0.7883 \\ & p2p-Gnutella05 & **0.8956** & 0.6037 & 0.8074 & 0.8871 & 0.8847 & 0.8785 & 0.6353 & 0.5945 & 0.8051 \\ & p2p-Gnutella06 & **0.9051** & 0.6240 & 0.8112 & 0.8864 & 0.8872 & 0.8828 & 0.6490 & 0.5687 & 0.8003 \\ & as20000102 & 0.9292 & 0.7834 & 0.9125 & **0.9334** & 0.9120 & 0.8926 & 0.8276 & 0.8340 & 0.8902 \\ & oregon1\_010331 & **0.9602** & 0.7913 & 0.9544 & 0.9450 & 0.9516 & 0.9338 & 0.8165 & 0.8812 & 0.9219 \\ & oregon1\_010407 & **0.9603** & 0.8060 & 0.9492 & 0.9416 & 0.9518 & 0.9594 & 0.8142 & 0.8657 & 0.9172 \\ \hline & Average & **0.9372** & 0.7655 & 0.8597 & 0.9011 & 0.9144 & 0.9107 & 0.7529 & 0.7285 & 0.8465 \\ \hline AUPR & HuRI-PPI & **0.9424** & 0.8589 & 0.9182 & 0.9006 & 0.9189 & 0.8965 & 0.6958 & 0.8520 & 0.9324 \\ & ChG-Miner & **0.9606** & 0.8006 & 0.9524 & 0.9493 & 0.9409 & 0.9499 & 0.8267 & 0.7514 & 0.6114 \\ & DisGeNET & **0.9882** & 0.8915 & 0.9271 & 0.9154 & 0.9785 & 0.9849 & 0.8646 & 0.8477 & 0.9554 \\ & Drugbank\_DTI & **0.9664** & 0.7593 & 0.6764 & 0.9452 & 0.9371 & 0.9533 & 0.8729 & 0.7044 & 0.8672 \\ & lastfm\_asia & **0.9311** & 0.9028 & 0.8269 & 0.9124 & 0.8810 & 0.8810 & 0.7365 & 0.8395 & 0.9127 \\ & twitch\_EN & **0.9265** & 0.8573 & 0.8750 & 0.8807 & 0.9051 & 0.8899 & 0.6729 & 0.8134 & 0.8642 \\ & facebook\_comp & 0.9112 & 0.9045 & 0.7993 & **0.9146** & 0.8761 & 0.871 & 0.7517 & 0.6956 & 0.8092 \\ & facebook\_TV & **0.9569** & 0.9359 & 0.8116 & 0.8951 & 0.9256 & 0.9265 & 0.8672 & 0.7461 & 0.8517 \\ & CA-HepTh & **0.9487** & 0.5991 & 0.7889 & 0.7794 & 0.9169 & 0.9077 & 0.7313 & 0.6778 & 0.8614 \\ & CA-GrQc & **0.9626** & 0.8275 & 0.8228 & 0.7769 & 0.9427 & 0.9419 & 0.8469 & 0.7863 & 0.8845 \\ & p2p-Gnutella04 & **0.9027** & 0.6884 & 0.7660 & 0.8898 & 0.8901 & 0.8843 & 0.6166 & 0.5967 & 0.7735 \\ & p2p-Gnutella05 & **0.8892** & 0.6764 & 0.7748 & 0.8870 & 0.8804 & 0.8751 & 0.6514 & 0.6177 & 0.7838 \\ & p2p-Gnutella06 & **0.8996** & 0.6918 & 0.7763 & 0.8862 & 0.8830 & 0.8814 & 0.6457 & 0.5967 & 0.7718 \\ & as20000102 & **0.9383** & 0.8403 & 0.9333 & 0.9334 & 0.9290 & 0.9090 & 0.8157 & 0.8742 & 0.9159 \\ & oregon1\_010331 & \\ \hline \hline \end{tabular} \end{table} TABLE III: Performance comparison of different models on link prediction tasks in terms of AUC and AUPR.
## 5 Conclusion

The incorporation of geometric and topological information into graph neural architectures remains a key issue for geometric deep learning models, in particular graph neural network models. In this paper, we propose an analytic torsion enhanced graph neural network (TorGNN) model, which uses analytic torsion to capture the complexity of local structures and incorporates it into the message-passing process to improve the performance of traditional graph neural network models. On the tasks of link prediction and node classification, our TorGNN is found to be not only better than traditional GNN models, but also better than other graph deep learning models.

## Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (NSFC grant no. 61873089, 62032007), the Nanyang Technological University Startup Grant (grant no. M4081842), the Singapore Ministry of Education Academic Research Fund (grant no. Tier 1 RG109/19, MOE-T2EP20120-0013, MOE-T2EP2020-0010) and the China Scholarship Council (CSC grant no. 202006130147).
2305.18965
Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks
In the graph node embedding problem, embedding spaces can vary significantly for different data types, leading to the need for different GNN model types. In this paper, we model the embedding update of a node feature as a Hamiltonian orbit over time. Since the Hamiltonian orbits generalize the exponential maps, this approach allows us to learn the underlying manifold of the graph in training, in contrast to most of the existing literature that assumes a fixed graph embedding manifold with a closed exponential map solution. Our proposed node embedding strategy can automatically learn, without extensive tuning, the underlying geometry of any given graph dataset even if it has diverse geometries. We test Hamiltonian functions of different forms and verify the performance of our approach on two graph node embedding downstream tasks: node classification and link prediction. Numerical experiments demonstrate that our approach adapts better to different types of graph datasets than popular state-of-the-art graph node embedding GNNs. The code is available at \url{https://github.com/zknus/Hamiltonian-GNN}.
Qiyu Kang, Kai Zhao, Yang Song, Sijie Wang, Wee Peng Tay
2023-05-30T11:53:40Z
http://arxiv.org/abs/2305.18965v1
# Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks

###### Abstract

In the graph node embedding problem, embedding spaces can vary significantly for different data types, leading to the need for different GNN model types. In this paper, we model the embedding update of a node feature as a Hamiltonian orbit over time. Since the Hamiltonian orbits generalize the exponential maps, this approach allows us to learn the underlying manifold of the graph in training, in contrast to most of the existing literature that assumes a fixed graph embedding manifold with a closed exponential map solution. Our proposed node embedding strategy can automatically learn, without extensive tuning, the underlying geometry of any given graph dataset even if it has diverse geometries. We test Hamiltonian functions of different forms and verify the performance of our approach on two graph node embedding downstream tasks: node classification and link prediction. Numerical experiments demonstrate that our approach adapts better to different types of graph datasets than popular state-of-the-art graph node embedding GNNs. The code is available at [https://github.com/zknus/Hamiltonian-GNN](https://github.com/zknus/Hamiltonian-GNN).

## 1 Introduction

Graph neural networks (GNNs) (Yue et al., 2019; Ashoor et al., 2020; Kipf and Welling, 2017; Zhang et al., 2022; Wu et al., 2021) have shown remarkable inference performance on graph-structured data, including, but not limited to, social media networks, citation networks, and molecular graphs in chemistry. Most existing GNNs embed graph nodes in Euclidean spaces without further consideration of the dataset graph geometry. For some graph structures, like tree-like graphs (Liu et al., 2019), the Euclidean space may not be a proper choice for the node embedding. Recently, hyperbolic GNNs (Chami et al., 2019; Liu et al., 2019) have been proposed to embed nodes into a hyperbolic space instead of the conventional Euclidean space. It has been shown that tree-like graphs can be inferred more accurately by hyperbolic GNNs. Real-world graphs often have complex and varied structures, which can be more effectively represented by utilizing different geometric spaces. As shown in Figure 1, the Gromov \(\delta\)-hyperbolicity distributions1 (Gromov, 1987) of various datasets exhibit a range of diverse values. This indicates that it is not optimal to embed each dataset with diverse geometries into a single globally homogeneous geometry. Works like (Zhu et al., 2020) have attempted to embed graph nodes in a mixture of the Euclidean and hyperbolic spaces, where the intrinsic graph local geometry is attained from the mixing weight. In several studies, including (Gu et al., 2018; Bachmann et al., 2020; Lou et al., 2020), researchers use (products of) constant curvature Riemannian spaces for graph node embedding, where the spaces are assumed to be spherical, hyperbolic, or Euclidean. The work (Xiong et al., 2022) considers a special pseudo-Riemannian manifold named the pseudo-hyperboloid, which is of constant nonzero curvature and is diffeomorphic to the product manifold of a unit sphere and the Euclidean space. Footnote 1: The more concentrated the \(\delta\)-hyperbolicity distribution is at lower values, the more hyperbolic the graph dataset.
Embedding nodes in the aforementioned _restricted_ (pseudo-)Riemannian manifolds is achieved through the exponential map in closed forms, which is essentially a geodesic curve on the manifolds as the projected curve of the _cogeodesic orbits_ on the manifolds' cotangent bundles (Lee, 2013; Klingenberg, 2011). In our work, we propose to embed the nodes, via more general _Hamiltonian orbits_, into a general manifold, which generalizes the above graph node embedding works. Manifolds have a diverse set of applications in physics, and their usage and development can be found interwoven throughout the literature. From the physics perspective, the cotangent bundles are the natural phase spaces in classical mechanics (De Leon and Rodrigues, 2011) where the physical system evolves according to the basic laws of physics modeled as differential equations on the phase spaces. In this paper, we propose a new GNN paradigm based on Hamiltonian mechanics (Goldstein et al., 2001) with flexible Hamiltonian functions. Our objective is to design a new node embedding strategy that can automatically learn, without extensive tuning, the underlying geometry of any given graph dataset even if it has diverse geometries. We enable the node features to evolve on the manifold under the influence of neighbors. The learnable Hamiltonian function on the manifold guides the node embedding evolution to follow a learnable law analogous to basic physical laws. **Main contributions.** Our main contributions are summarized as follows: 1. We consider the graph node embedding problem on an underlying manifold and enable node embedding through a learnable Hamiltonian orbit associated with the Hamiltonian scalar function on the manifold's cotangent bundle. 2. Our node embedding strategy can automatically learn, without extensive tuning, the underlying geometry of any given graph dataset even if it has diverse geometries. We empirically demonstrate its ability by testing on two graph node embedding downstream tasks: node classification and link prediction. 3. From empirical experiments, we observe that the over-smoothing problem of GNNs can be mitigated if the node features evolve through Hamiltonian orbits. By the conservative nature of the Hamiltonian equations, our model enables a stable training and inference process while updating the node features over time and layers. ## 2 Related Work While our paper is related to Hamiltonian neural networks in the literature, we are the first, to our best knowledge, to model graph node embedding with Hamiltonian equations. We briefly review Hamiltonian neural networks, Riemannian manifold GNNs, and physics-inspired GNNs in Appendix A. _Notations:_ We use the _Einstein summation convention_(Lee, 2013) for expressions with tensor indices. When using this convention, if an index variable appears twice in a term, once as a superscript and once as a subscript, it means a summation of the term over all possible values of the index variable. For example, \(a^{i}b_{i}\triangleq\sum_{i=1}^{d}a^{i}b_{i}\). ## 3 Motivations and Preliminaries In this section, we briefly review the concepts of the geodesic curve on a (pseudo-)Riemannian manifold from the principle of stationary action in the form of Lagrange's equations. We then further generalize the geodesic curve to the Hamiltonian orbit associated with an energy function \(H\), which is a conserved quantity along the orbit. 
Our primary goal is to develop a more flexible and robust method for graph node embedding by leveraging the concepts of geodesic curves and Hamiltonian orbits on manifolds. We first summarize the motivation of our work as follows. **Motivation I: from the exponential map to the Riemannian geodesic.** The geodesic curve gives rise to the exponential map that maps points from the tangent space to the manifold and has been utilized in (Chami et al., 2019; Bachmann et al., 2020; Xiong et al., 2022) to enable graph node embedding in some restricted (pseudo-)Riemannian manifolds where closed forms of the exponential map are obtainable. From this perspective, by using the geodesic curve, we generalize the graph node embedding to an arbitrary pseudo-Riemannian manifold with learnable local geometry \(g\) using Lagrange's equations. **Motivation II: from geodesic to Hamiltonian orbit.** Despite the above conceptual generalization for node embedding using geodesic curves, the specific curve formulation involving minimization of curve length may result in a loss of generality for node feature evolution along the curve. We thus further generalize the geodesic curve to the Hamiltonian orbit associated with a more general energy function \(H\) that is conserved along the orbit. In Section 4, we propose graph node embedding without an explicit metric by using Hamiltonian orbits with learnable energy functions \(H\). We kindly refer readers to Appendix B for a more comprehensive elucidation that may enhance their understanding after we introduce the related concepts in Section 3.

Figure 1: Gromov \(\delta\)-hyperbolicity distribution of datasets.

### Manifold and Riemannian Metric

**Manifold and local chart representation.** On a \(d\)-dimensional manifold \(M\), for each point on \(M\), there exists a triple \(\{q,U,V\}\), called a chart, such that \(U\) is an open neighborhood of the point in \(M\), \(V\) is an open subset of \(\mathbb{R}^{d}\), and \(q:U\to V\) is a homeomorphism, which gives us a coordinate representation for a local area in \(M\). **Tangent and cotangent vector spaces.** For any point \(q\) on \(M\) (we identify each point covered by a local chart on \(M\) by its representation \(q\)), we may assign two vector spaces named the tangent vector space \(T_{q}M\) and cotangent vector space \(T_{q}^{*}M\). The vectors from the tangent and cotangent spaces can be interpreted as representing the velocity and generalized momentum, respectively, of an object's movement in classical mechanics. **Riemannian metric.** A Riemannian manifold is a manifold \(M\) equipped with a _Riemannian metric_ \(g\), where we assign to any point \(q\in M\) and pair of vectors \(u,v\in T_{q}M\) an _inner product_ \(\langle u,v\rangle_{g(q)}\). This assignment is assumed to be smooth with respect to the base point \(q\in M\). The length of a tangent vector \(u\in T_{q}M\) is then defined as \[\|u\|_{g(q)}:=\langle u,u\rangle_{g(q)}^{1/2}. \tag{1}\]
* **Local coordinates representation:** In local coordinates with \(q=\left(q^{1},\ldots,q^{d}\right)^{\intercal}\in M\), \(u=\left(u^{1},\ldots,u^{d}\right)^{\intercal}\in T_{q}M\) and \(v=\left(v^{1},\ldots,v^{d}\right)^{\intercal}\in T_{q}M\), the Riemannian metric \(g=g(q)\) is a real symmetric _positive definite_ matrix and the inner product above is given by \[\langle u,v\rangle_{g(q)}:=g_{ij}(q)u^{i}v^{j}\] (2)
* **Pseudo-Riemannian metric:** We may generalize the Riemannian metric to a metric tensor that only requires a _non-degenerate_ condition (Lee, 2018) instead of the stringent positive definiteness condition in the inner product. One example of a pseudo-Riemannian manifold is the Lorentzian manifold, which is important in applications of general relativity.

### Geodesic Curves and Exponential Maps

**Length and energy of a curve.** Let \(q:[a,b]\to M\) be a smooth curve.2 We use \(\dot{q}\) and \(\ddot{q}\) to denote the first and second order derivatives of \(q(t)\), respectively. We define the following: Footnote 2: We abuse notations in denoting the chart coordinate map as \(q\) and the curve as \(q(t)\). It will be clear from the context which one is being referred to.

\(\bullet\) length of the curve: \[\ell(q):=\int_{a}^{b}\|\dot{q}(t)\|_{g(q(t))}\,\mathrm{d}t. \tag{3}\]

\(\bullet\) energy of the curve: \[E(q):=\frac{1}{2}\int_{a}^{b}\|\dot{q}(t)\|_{g(q(t))}^{2}\,\mathrm{d}t. \tag{4}\]

**Geodesic curves.** On a Riemannian manifold, geodesic curves are defined as curves that have a minimal length as given by (3) and with two fixed endpoints \(q(a)\) and \(q(b)\). However, computations based on minimizing the length to obtain the curves are difficult. It turns out that the minimizers of \(E(q)\) also minimize \(\ell(q)\) (Malham, 2016). Consequently, the geodesic curve formulation may be obtained by minimizing the energy of a smooth curve on \(M\).

**Principle of stationary action and Euler-Lagrange equation.** A smooth curve \(q(t)\) with a Lagrangian function \(L(q(t),\dot{q}(t))\) minimizes the following functional (in physics, the functional is known as an **action**) \[S(q)=\int_{a}^{b}L(q(t),\dot{q}(t))\mathrm{d}t \tag{5}\] with two fixed endpoints at \(t=a\) and \(t=b\) only if the following _Euler-Lagrange equation_ is satisfied: \[\frac{\partial L}{\partial q^{i}}(q(t),\dot{q}(t))-\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial\dot{q}^{i}}(q(t),\dot{q}(t))=0. \tag{6}\]

**Geodesic equation for geodesic curves.** The Euler-Lagrange equation derived from minimizing the energy (4) with the local coordinates representation \[L=\frac{1}{2}\|\dot{q}(t)\|_{g(q(t))}^{2}=\frac{1}{2}g_{ik}(q)\dot{q}^{i}\dot{q}^{k} \tag{7}\] is expressed as the following ordinary differential equations, called the _geodesic equation_: \[\ddot{q}^{i}+\Gamma_{jk}^{i}\dot{q}^{j}\dot{q}^{k}=0, \tag{8}\] for all \(i=1,\ldots,d\), where the Christoffel symbols are given by \(\Gamma_{jk}^{i}:=\frac{1}{2}g^{il}\left(\frac{\partial g_{lj}}{\partial q^{k}}+\frac{\partial g_{lk}}{\partial q^{j}}-\frac{\partial g_{jk}}{\partial q^{l}}\right)\) and \([g^{ij}]\) denotes the inverse matrix of the matrix \([g_{ij}]\). The solutions to the geodesic equation (8) give us the geodesic curves.

**Exponential map.** Given the geodesic curves, at each point \(q\in M\), for a velocity vector \(v\in T_{q}M\), the _exponential map_ is defined to obtain the point on \(M\) reached by the unique geodesic that passes through \(q\) with velocity \(v\) at time \(t=1\) (Lee, 2018).
Formally, we have \[\exp_{q}(v)=\gamma(1) \tag{9}\] where \(\gamma(t)\) is the curve given by the geodesic equation (8) with the initial conditions \(\gamma(0)=q\) and \(\dot{\gamma}(0)=v\). With regards to **Motivation I**, we note that (Chami et al., 2019) considers graph node embedding over a homogeneous negative-curvature Riemannian manifold called the hyperboloid manifold. In contrast, we generalize the embedding of nodes to an arbitrary pseudo-Riemannian manifold through the geodesic equation (8) with a learnable metric \(g\) that derives the local graph geometry from the nodes and their neighbors.

### From Geodesics to General Hamiltonian Orbits

The geodesic curves and the derived exponential map essentially come from (5) with \(L\) in (7) specified from the curve energy (4). However, the curves derived from this specific action may sacrifice efficacy for the graph node embedding task, since it is not clear what a reasonable action formulation guiding the evolution of the node features in this task should be. Therefore, we follow the principle of stationary action but consider a learnable action that is more flexible than the length or energy of the curve. To better model the conserved quantity during the feature evolution, we reformulate the Lagrangian equation as the Hamiltonian equations. This is our **Motivation II**.

**Hamiltonian function and equations.** The Hamiltonian orbit \((q(t),p(t))\) is given by the following _Hamiltonian equations_ with a _Hamiltonian function_ \(H\): \[\dot{q}^{i}=\frac{\partial H}{\partial p_{i}},\quad\dot{p}_{i}=-\frac{\partial H}{\partial q^{i}}, \tag{10}\] where \(q\) is the local chart coordinate on the manifold while \(p\) can be interpreted as a vector of generalized momenta in the cotangent vector space. In classical mechanics, the \(2d\)-dimensional pair \((q,p)\) is called the _phase space_ coordinates that fully specify the state of a dynamic system, with \(p\) guiding the movement direction and speed. Later, we consider the node feature evolution following the trajectory specified by the phase space coordinates.
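To make (10) concrete, the following is a minimal sketch of how a Hamiltonian orbit can be rolled out with automatic differentiation in PyTorch. The explicit Euler integrator and the toy Hamiltonian below are illustrative assumptions on our part; in practice a black-box ODE solver would be used.

```python
import torch

def hamiltonian_field(H, q, p):
    """Right-hand side of Hamilton's equations (10): (dq/dt, dp/dt)."""
    gq, gp = torch.autograd.grad(H(q, p).sum(), (q, p), create_graph=True)
    return gp, -gq  # dq/dt = dH/dp, dp/dt = -dH/dq

def hamiltonian_orbit(H, q0, p0, T=1.0, steps=100):
    """Integrate (q(t), p(t)) from (q0, p0) to time T with explicit Euler."""
    q = q0.clone().detach().requires_grad_(True)
    p = p0.clone().detach().requires_grad_(True)
    dt = T / steps
    for _ in range(steps):
        dq, dp = hamiltonian_field(H, q, p)
        q, p = q + dt * dq, p + dt * dp
    return q, p

# With H = 0.5 * |p|^2 (identity metric in (12)), the orbit reduces to the
# straight-line Euclidean geodesic q(T) = q0 + T * p0.  The 0 * q term only
# keeps q in the autograd graph so that the gradient w.r.t. q is defined.
H = lambda q, p: 0.5 * (p ** 2).sum(-1) + 0.0 * q.sum(-1)
qT, pT = hamiltonian_orbit(H, torch.zeros(3), torch.ones(3))
print(qT)  # approx. tensor([1., 1., 1.])
```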
## 4 Proposed Framework We consider an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consisting of a finite set \(\mathcal{V}\) of vertices, together with a subset \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) of edges. Our objective is to learn a mapping \(f\) that maps node features to embedding vectors: \[f\left(\mathcal{V},\mathcal{E}\right)=Z\in\mathbb{R}^{|\mathcal{V}|\times d}.\] The embedding \(Z\) is supposed to capture both semantic and topological information. Since the input node features in most datasets are sparse, a fully connected (FC) layer is first applied to compress the raw input node features. Let \({}^{n}q\) be the \(d\)-dimensional compressed node feature for node \(n\) after the FC layer.3 However, empirical experiments (see "MLP" results in Section 5.1) indicate that for the graph node embedding task, such simple raw compressing without any consideration of the graph topology does not produce a good embedding. Further graph neural network layers are thus required to update the node embedding. Footnote 3: We put the node index to the left of the variable to distinguish it from the manifold dimension index. We consider the node features \(\{^{n}q\}_{n\in\mathcal{V}}\) to be located on an embedding manifold \(M\) and take the node features as the chart coordinate representations for points on the manifold. In **Motivations I and II** in Section 3, we have provided the rationale for generalizing graph node embedding from the exponential map to the pseudo-Riemannian geodesic, and further to the Hamiltonian orbit. To enforce the graph node feature update on the manifold with well-adapted learnable local geometry, we make use of the concepts from Section 3. ### Model Architecture **Node feature evolution along Hamiltonian orbits in a Hamiltonian layer.** As introduced in Section 3.3, the \(2d\)-dimensional _phase space_ coordinates \((q,p)\) fully specify a system's state. Consequently, for the node feature as a point on the manifold \(M\), we associate to each point \({}^{n}q\) a _learnable momentum vector_\({}^{n}p\). This vector guides the direction and speed of the feature update, allowing the node feature to evolve along Hamiltonian orbits on the manifold. More specifically, we set \[{}^{n}p=Q_{\mathrm{net}}(^{n}q) \tag{14}\] where \(Q_{\mathrm{net}}\)4 is instantiated by an FC layer. We consider a learnable Hamiltonian function \(H_{\mathrm{net}}\) that specifies the node feature evolution trajectory in the phase space via the #### Hamiltonian equations \[\dot{q}^{i}=\frac{\partial H_{\mathrm{net}}}{\partial p_{i}},\quad\dot{p}_{i}=- \frac{\partial H_{\mathrm{net}}}{\partial q^{i}} \tag{15}\] with learnable Hamiltonian energy function \[H_{\mathrm{net}}:(q,p)\mapsto\mathbb{R}. \tag{16}\] The node features are updated along the Hamiltonian orbits, which are curves starting from each node \(({}^{n}q,{}^{n}p)\) at \(t=0\). In other words, they are the solution of (15) with the initial conditions \(({}^{n}q(0),{}^{n}p(0))=({}^{n}q,{}^{n}p)\) at \(t=0\). The solution of (15) on the phase space for each node \(n\in\mathcal{V}\) at time \(T\) is given by a differential equation solver (Chen et al., 2018), and denoted by \(({}^{n}q(T),{}^{n}p(T))\). The canonical projection \(\pi({}^{n}q(T),{}^{n}p(T))={}^{n}q(T)\) is taken to obtain the node feature embeddings on the manifold at time \(T\). The aforementioned operations are performed within one layer, and we call it the _Hamiltonian layer_. 
**Neighborhood Aggregation.** After the node features update along the Hamiltonian orbits, we perform neighborhood aggregation on the features \(\{{}^{n}q^{(\ell)}(T)\}_{n\in\mathcal{V}}\), where \(\ell\) indicates the \(\ell\)-th layer. Let \(\mathcal{N}(n)=\{m:(n,m)\in\mathcal{E}\}\) denote the set of neighbors of node \(n\in\mathcal{V}\). We only perform a simple yet efficient aggregation (see Section 5) for node \(n\) as follows: \[{}^{n}q^{(\ell+1)}={}^{n}q^{(\ell)}(T)+\frac{1}{|\mathcal{N}(n)|}\sum_{m\in \mathcal{N}(n)}{}^{m}q^{(\ell)}(T). \tag{17}\] **Layer stacking for local geometry learning.** We stack up multiple Hamiltonian layers with neighborhood aggregation in between them. We first give an intuitive explanation for the case where \(H_{\mathrm{net}}\) is set as (12). A learnable metric \(g_{\mathrm{net}}\) for the manifold is involved (see Section 4.2.1 for more details) and the features are evolved following the geodesic curves with minimal length (see Section 3). Within each Hamiltonian layer, the metric \(g_{\mathrm{net}}\) that is instantiated by a smooth FC layer only depends on the local node position on a pseudo-Riemannian manifold that varies from point to point. Note that with layer stacking, these features contain information aggregated from their neighbors. The metric \(g_{\mathrm{net}}\), therefore, learns from the graph topology, and each node is embedded with a local geometry that depends on its neighbors. In contrast, (Chami et al., 2019; Bachmann et al., 2020; Xiong et al., 2022) consider graph node embedding using geodesic curves over some fixed manifold without adjustment of the local geometry. At the beginning of Section 4, we have assumed the node features \(\{{}^{n}q\}_{n}\) to be located in local charts of a preliminary embedding manifold \(M\). The basic philosophy is that the embedding manifold evolves with a metric structure that adapts successively with neighborhood aggregation along multiple layers, whereas each node's features evolve to the most appropriate embedding on the manifold along the curves. For a general learnable \(H_{\mathrm{net}}\), the Hamiltonian orbit that starts from one node has aggregated information from its neighbors, which guides the learning of the curve that the node will be evolved along. Therefore, each node is embedded into a manifold with adaptation to the underlying geometry of any given graph dataset even if it has diverse geometries. Figure 2: HamGNN architecture: in each layer, each node is assigned a learnable “momentum” vector \({}^{n}p\) (cf. (14)) at time \(t=0\), which initializes the evolution of the node feature. The node features evolve on a manifold following (15) to \(({}^{n}q(T),{}^{n}p(T))\) at the time \(t=T\). We only take \({}^{n}q(T)\) as the embedding and input it to the next layer. After \(L\) layers, we take \({}^{n}q^{(L)}(T)\) as the final node embedding. **Conservation of \(H_{\rm net}\).** From Theorem 1, the feature updating through the orbit indicates that the \(H_{\rm net}\) is conserved along the curve. **Model summary.** Our model is called _HamGNN_ as we use Hamiltonian orbits for node feature updating on the manifold. We summarize the HamGNN model architecture in Figure 2 and Algorithm 1. The forms of the Hamiltonian function \(H_{\rm net}\) are given in Section 4.2. ### Different Hamiltonian Orbits We next propose different forms for \(H_{\rm net}\) from which the corresponding Hamiltonian orbit and its variations are obtained in our GNN model. 
### Different Hamiltonian Orbits

We next propose different forms for \(H_{\rm net}\) from which the corresponding Hamiltonian orbit and its variations are obtained in our GNN model. The node features are updated along the Hamiltonian orbits, which are curves starting from each node \((^{n}q,^{n}p)\) at \(t=0\). Beginning with **Motivation I**, we design a learnable metric \(g_{\rm net}\) in (12) in Section 4.2.1. This approach relaxes the curve formulation constraint used in (Chami et al., 2019; Bachmann et al., 2020; Xiong et al., 2022) and enables learnable geodesic curves to guide feature evolution on arbitrary (pseudo-)Riemannian manifolds while learning local geometry from graph datasets. To further extend learnable geodesic curves to learnable Hamiltonian orbits on manifolds, as stated in **Motivation II**, we introduce a flexible \(H\) instantiated by an FC layer without constraints in Section 4.2.2. Subsequently, we present variations based on Section 4.2.2 in Sections 4.2.3 through 4.2.5 to examine their performance. In Section 4.2.3, we add constraints to ensure that \(H\) is a convex function, which allows us to equivalently test a more restricted Lagrangian formalism. In Section 4.2.4, we include less restricted Hamiltonian mechanics without strict constant energy constraints. Finally, in Section 4.2.5, we consider a more flexible representation using the symplectic 2-form in comparison to Section 4.2.2.

#### 4.2.1 Learnable Metric \(g_{\rm net}\)

In this subsection, we consider node embedding onto a pseudo-Riemannian manifold and set \(H_{\rm net}\) as (12), where a learnable metric \(g_{\rm net}\) for the manifold is involved. Within each Hamiltonian layer, the metric \(g_{\rm net}\), instantiated by a smooth FC layer, depends on the local node position on the pseudo-Riemannian manifold and varies from point to point. The output of \(g_{\rm net}\) at position \(q\) represents the local representation of the inverse metric \([g^{ij}]\). However, from (12), the space complexity is of order \(d^{3}\) due to the partial derivatives of \(g\)'s output, which is a \(d\times d\) matrix. We therefore only consider _diagonal metrics_ to mitigate the space complexity. More specifically, we now define \[g_{\rm net}(q)={\rm diag}([\underbrace{-1,\ldots,-1}_{r},\underbrace{1,\ldots,1}_{s}]\odot h_{\rm net}(q)) \tag{18}\] where \(h_{\rm net}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) consists of non-linear trainable layers and \(\odot\) denotes element-wise multiplication. To ensure the non-degeneracy of the metric, the output of \(h_{\rm net}\) is set to be away from \(0\), with its final activation function being strictly positive. The vector \([-1,\ldots,-1,1,\ldots,1]\) controls the signature \((r,s)\) (Lee, 2018) of the metric \(g\) with \(r+s=d\), where \(r\) and \(s\) are the numbers of \(-1\)s and \(1\)s, respectively. The signature of the metric is set to be a hyperparameter. According to (13), we have \[\dot{q}^{i}=g_{\rm net}^{ij}p_{j},\quad\dot{p}_{i}=-\frac{1}{2}\partial_{i}g_{\rm net}^{jk}p_{j}p_{k}. \tag{19}\] Intuitively, the node features evolve through the "shortest" curves on the manifold. The exponential map used in (Chami et al., 2019; Bachmann et al., 2020; Xiong et al., 2022) is essentially the geodesic curve on a hyperbolic manifold with an explicit formulation due to the manifold type restriction. We do not enforce any such assumption here and let the model learn the embedding geometry.
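A minimal sketch of the diagonal learnable metric in (18), together with the corresponding Hamiltonian (12), is given below; the layer widths, the tanh activation, and the softplus-plus-offset used to keep the output strictly positive are our illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagonalMetric(nn.Module):
    """g_net of (18): a signature vector of r minus-ones and s = d - r ones,
    multiplied element-wise by a strictly positive network output, giving the
    diagonal of the inverse metric [g^{ij}] at position q."""
    def __init__(self, d: int, r: int = 0):
        super().__init__()
        self.h_net = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, d))
        self.register_buffer("sign", torch.cat([-torch.ones(r), torch.ones(d - r)]))

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        # softplus + offset keeps the output away from 0 => non-degenerate metric
        return self.sign * (F.softplus(self.h_net(q)) + 1e-3)

def H_metric(g_net: DiagonalMetric, q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """H(q, p) = 0.5 * g^{ij}(q) p_i p_j for the diagonal inverse metric (12)."""
    return 0.5 * (g_net(q) * p * p).sum(-1)
```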
#### 4.2.2 Learnable \(H_{\rm net}\) Different from Section 4.2.1 where \(H\) is set as (12) with a pseudo-Riemannian metric, we choose a more flexible \(H\) instantiated by an FC layer and consider the Hamiltonian equations: \[\dot{q}^{i}=\frac{\partial H_{\rm FC}}{\partial p_{i}},\quad\dot{p}_{i}=- \frac{\partial H_{\rm FC}}{\partial q^{i}}. \tag{20}\] #### 4.2.3 Learnable Convex \(H_{\rm net}\) As discussed in Section 3.3, if \(H_{\rm net}\) is restricted to strictly convex functions, the Hamiltonian formalism can degenerate to a _Lagrangian formalism_ through the Legendre transformation (11). We take the following restricted Hamiltonian equations \[\dot{q}^{i}=\frac{\partial H_{\rm net}}{\partial p_{i}},\dot{p}_{i}=-\frac{ \partial H_{\rm net}}{\partial q^{i}},{\rm s.t.}\ H_{\rm net}\ \text{is convex}, \tag{21}\] where a stationary action in (5) is achieved. To guarantee that \(H_{\rm net}\) is convex, we follow the work in (Amos et al., 2017) to set non-negative layer weights from the second layer in \(H_{\rm net}\), and all activation functions in \(H_{\rm net}\) to be convex and non-decreasing. This network design is shown to be able to approximate any convex functions in (Chen et al., 2018). #### 4.2.4 Learnable \(H_{\rm net}\) with Relaxation Different from Section 4.2.2, we enforce additional system biases along the curve as follows: \[\dot{q}^{i}=\frac{\partial H_{\rm net}}{\partial p_{i}},\quad\dot{p}_{i}=- \frac{\partial H_{\rm net}}{\partial q^{i}}+f_{\rm net}(q). \tag{22}\] Instead of keeping the energy during the feature update along the Hamiltonian orbit, we now also include an additional energy term during the node feature update. #### 4.2.5 Learnable \(H_{\rm net}\) with a flexible symplectic form Hamiltonian equations have a more flexible representation using the symplectic \(2\)-form (cf. Appendix G). The chart coordinate representation \((q,p)\) may not be the Darboux coordinate system for the symplectic \(2\)-form. Even if our learnable \(H_{\rm net}\) may be able to learn the energy representation under the chosen chart coordinate system, we consider a learnable symplectic \(2\)-form to act in concert with \(H_{\rm net}\). More specifically, following (Chen et al., 2021), we have the following 1-form \[\theta^{1}_{\rm net}=f_{i,{\rm net}}{\rm d}q^{i},\] where \(f_{i,{\rm net}}:M\rightarrow\mathbb{R}^{d}\) is the output's \(i\)-th component of the neural network parameterized function. The Hamiltonian orbit is then provided by (Chen et al., 2021) as follows: \[\big{(}\dot{q}^{i},\dot{p}^{i}\big{)}=W^{-1}(q,p)\nabla H_{\rm net}(q,p), \tag{23}\] where the skew-symmetric \(2d\times 2d\) matrix \(W\), whose elements are written in terms of \((\partial_{i}f_{j,{\rm net}}-\partial_{j}f_{i,{\rm net}})\), is given in (24) in the appendix due to space limitations. ## 5 Experiments In this section, we implement the proposed HamGNNs with different settings as shown in Section 4.2 and Appendix C. We select **datasets with various geometries** including the three citation networks: Cora (McCallum et al., 2004), Citeseer (Sen et al., 2008), Pubmed (Namata et al., 2012); and two low hyperbolicity datasets (Chami et al., 2019): Disease and Airport (cf. Table 5). Furthermore, we create **new datasets with more complex geometry** by combining Disease and Cora/Citeseer so that the new datasets have a mixture of both hyperbolic and Euclidean local geometries. 
Adhering to the experimental settings of (Chami et al., 2019; Gu et al., 2018; Bachmann et al., 2020; Lou et al., 2020; Xiong et al., 2022), we evaluate the effectiveness of the node embedding by performing _two downstream tasks: node classification and link prediction_ using the embeddings. Rather than beating all the existing GNNs on these two specific tasks, we want to demonstrate that our node embedding strategy is able to automatically learn, without extensive tuning, the underlying geometry of any given graph dataset even if its underlying geometry is very complex, e.g., a mixture of multiple geometries. Such examples can be found in Table 3. Due to space constraints, we refer the readers to Appendix E for the description of the datasets and implementation details. To fairly compare the performance of the proposed HamGNN, for the node classification tasks, we select several popular GNN models as baselines. These include _Euclidean GNNs_: GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), SAGE (Hamilton et al., 2017), and SGC (Wu et al., 2019); _Hyperbolic GNNs_ (Chami et al., 2019; Liu et al., 2019): HGNN, HGCN, HGAT and LGCN (Zhang et al., 2021); a _GNN that mixes Euclidean and hyperbolic embeddings_: GIL (Zhu et al., 2020); _(Pseudo-)Riemannian GNNs_: \(\kappa\)-GCN (Bachmann et al., 2020) and \(\mathcal{Q}\)-GCN (Xiong et al., 2022); as well as _Graph Neural Diffusions_: GRAND (Chamberlain et al., 2021) and GraphCON (Rusch et al., 2022). We also include the MLP baseline, which does not utilize the graph topology information. To further demonstrate the advantage of HamGNN, we also include one vanilla ODE system, whose formulation is given in Appendix F.1. This vanilla ODE system neither includes the learnable "momentum" vector \(p\) nor adheres to the Hamiltonian orbits (26). For the link prediction task, we compare HamGNN to the _standard graph node embedding models_, including all the aforementioned baselines in the node classification task except the graph neural diffusion baselines. For the link prediction tasks, we report the best results from different versions of HamGNN: HamGNN (19) on the Disease dataset and (20) on the remaining datasets.
**Link prediction.** In Table 2, we report the averaged ROC for the link prediction task. We observe that HamGNN adapts well to all datasets and is the best performer on Airport, Citeseer, and Cora.

**Comparison between different \(H_{\mathrm{net}}\)s.** We compare HamGNNs with the different \(H_{\mathrm{net}}\)s elaborated in Section 4.2 on the node classification task. In Section 3.3, we argue that the geodesic curves derived from the action of curve length may potentially sacrifice efficacy for the graph node embedding task, since we do not know what a reasonable action formulation guiding the evolution of the node features in this task is. We therefore also include a more flexible \(H_{\mathrm{net}}\) in (20), with other variations in (21) to (23). From Table 1, we observe that the original HamGNN (19) adapts its node embeddings well to various datasets. Simply using an FC layer for \(H\) as in (20) brings no obvious improvement. Further imposing convexity on \(H\) in (21) also has little positive impact on the performance. Including the system bias in (22) achieves the best performance on Citeseer among all HamGNNs. The HamGNNs that achieve the best performance on Disease and Airport are (25) and (26). Overall, all HamGNN variants elaborated in Section 4.2 show well-adapted node embedding performance on various datasets, even though some variants may perform slightly better. This observation indicates that the good geometry adaptation may come from the HamGNN model architecture with its Hamiltonian orbit evolution. This is further verified by the node classification performance of the vanilla ODE in (27), which does not have well-adapted node embedding performance and is designed without the philosophy of Hamiltonian mechanics. The good experimental results in Section 5.2 using both (19) and (20) also provide further evidence to support this conclusion. We next compare the HamGNNs using (20) and (23).

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline
Method & Disease & Airport & Pubmed & Citeseer & Cora \\
\(\delta\)-hyperbolicity & 0.0 & 1.0 & 3.5 & 4.5 & 11.0 \\
\hline
\end{tabular}
\end{table}
Table 1: Node classification accuracy (%). Rows compare MLP, HNN, GCN, GAT, SAGE, SGC, HGNN, HGCN, LGCN, GIL, \(\kappa\)-GCN, \(\mathcal{Q}\)-GCN, and HamGNN. The best, second best, and third best results for each criterion are highlighted in **red**, **blue**, and **cyan**, respectively. “-” indicates the open source code and/or the result is unavailable.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline
Method & Disease & Airport & Pubmed & Citeseer & Cora \\
\(\delta\)-hyperbolicity & 0.0 & 1.0 & 3.5 & 4.5 & 11.0 \\
\hline
\end{tabular}
\end{table}
Table 2: Link prediction ROC (%). Rows compare MLP, HNN, GCN, GAT, SAGE, SGC, HGNN, HGCN, LGCN, GIL, \(\kappa\)-GCN, \(\mathcal{Q}\)-GCN, and HamGNN. The best, second best, and third best results for each criterion are highlighted in **red**, **blue**, and **cyan**, respectively. “-” indicates the open source code and/or the result is not available.

The difference between those two settings is that the symplectic form in (20) is set to be the special Poincare \(2\)-form, while in (23) the symplectic form is learnable. We observe, however, that the two HamGNNs achieve similar performance, i.e., the more flexible symplectic form does not improve the model performance. This may be explained by the fundamental Darboux theorem (Lee, 2013) in symplectic geometry, which states that we can always find a Darboux coordinate system in which any symplectic form takes the _Poincare \(2\)-form_. The feature-compressing FC layer may have the network capacity to approximate the Darboux coordinate map, while the flexible learnable \(H_{\mathrm{net}}\) also has the network capacity to recover the energy representation under the chosen chart coordinate system.

### Mixed Geometry Dataset

#### 5.2.1 Mixed Airport + Cora Dataset

To understand the node embedding capacity of HamGNN, we created a new mixed-geometry graph dataset by combining the Airport (3188 nodes) and Cora (2708 nodes) datasets into a single large graph with 5896 nodes (cf. Table 5). The results are presented in the first row of Table 3. This new dataset is composed of a mixture of hyperbolic and Euclidean geometries, as the Airport dataset is known to have a more hyperbolic geometry and the Cora dataset a more Euclidean one, as shown in Figure 1. We chose these two datasets also because they have a similar number of nodes. To standardize the node feature dimension across datasets, we pad the node features with additional zeros. We do not create any _new_ edges in the new dataset, so that the new graph contains two disconnected subgraphs: Airport and Cora.
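This combination amounts to stacking the two graphs block-diagonally. Below is a minimal sketch of the construction, assuming NumPy dense adjacency matrices; the function name, the dense representation, and the choice to keep the two label spaces disjoint by offsetting the second graph's labels are our illustrative assumptions.

```python
import numpy as np

def combine_graphs(adj_a, feat_a, y_a, adj_b, feat_b, y_b, num_classes_a):
    """Merge two graphs into one block-diagonal graph with no new edges,
    zero-padding node features to a common dimension."""
    n_a, n_b = adj_a.shape[0], adj_b.shape[0]
    # block-diagonal adjacency: the two subgraphs stay disconnected
    adj = np.zeros((n_a + n_b, n_a + n_b), dtype=adj_a.dtype)
    adj[:n_a, :n_a] = adj_a
    adj[n_a:, n_a:] = adj_b
    # zero-pad node features to the larger feature dimension
    d = max(feat_a.shape[1], feat_b.shape[1])
    feat = np.zeros((n_a + n_b, d), dtype=feat_a.dtype)
    feat[:n_a, :feat_a.shape[1]] = feat_a
    feat[n_a:, :feat_b.shape[1]] = feat_b
    # keep the two label spaces disjoint (an assumption on our part)
    y = np.concatenate([y_a, y_b + num_classes_a])
    return adj, feat, y
```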
We use a 60%, 20%, 20% random split for training, validation, and test sets on this new dataset. We test our HamGNN against several baselines, including GCN, HGCN, GIL, and \(\mathcal{Q}\)-GCN. The results are shown in Table 3, from which we can observe that, in this more complex geometry setting, HamGNN performs the best.

#### 5.2.2 Mixed Airport + Citeseer Dataset

We also include experiments on another new mixed dataset created from Airport and Citeseer. The two datasets are combined in the same way as before. Each component of this new dataset therefore retains the distinct geometry of Airport or Citeseer. The results are reported in the second row of Table 3. We observe that our HamGNN still performs the best. These two experiments show that our model can adapt well to the underlying complex geometry.

### Observation of Resilience to Over-Smoothing

As a side benefit of HamGNN, we observe from Table 4 that when more Hamiltonian layers are stacked, HamGNN is still able to distinguish nodes from different classes, while the other GNNs suffer from a severe over-smoothing problem (Chen et al., 2020). This may be because, when the node features are updated, their energy is constrained to be along the Hamiltonian orbit. So, if two nodes are distinct in terms of their energy at the input of HamGNN, they will still be distinguishable by their energy at the output of HamGNN, no matter how many times the feature aggregation operation has been applied.

**More Experiments.** We kindly refer readers to Appendix F, where we include more empirical analysis and visualization.

## 6 Conclusion

In this paper, we have designed a new node embedding strategy based on Hamiltonian orbits that can automatically learn, without extensive tuning, the underlying geometry of any given graph dataset, even when multiple different geometries coexist. We have demonstrated empirically that our approach adapts better than popular state-of-the-art graph node embedding GNNs to various graph datasets on two graph node embedding downstream tasks.

## Acknowledgments

This research is supported by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Industry Alignment Fund - Pre Positioning (IAF-PP) (Grant No. A19D6a0053) and the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research and Development Programme. The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore ([https://www.nscc.sg](https://www.nscc.sg)).

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Dataset & Models & 3 layers & 5 layers & 10 layers & 20 layers \\
\hline
\multirow{3}{*}{Cora} & GCN & 80.29\(\pm\)2.29 & 69.87\(\pm\)1.12 & 26.50\(\pm\)4.68 & 23.97\(\pm\)5.42 \\
 & HGCN & 78.70\(\pm\)0.96 & 38.13\(\pm\)6.20 & 31.90\(\pm\)0.00 & 26.23\(\pm\)9.87 \\
 & HamGNN (21) & **81.84\(\pm\)0.88** & **81.08\(\pm\)0.16** & **81.40\(\pm\)0.44** & **80.58\(\pm\)0.30** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Node classification accuracy (%) when increasing the number of layers on the Cora dataset.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
Dataset & GCN & HGCN & GIL & \(\mathcal{Q}\)-GCN & HamGNN (19) & HamGNN (20) \\
\hline
Air-Cora & 5.16\(\pm\)0.65 & 78.72\(\pm\)0.428 & 20.41\(\pm\)1.27 & 90.25\(\pm\)0.81 & **94.82\(\pm\)0.78** & 93.08\(\pm\)0.55 \\
Air-Cite & 0.97\(\pm\)0.347 & 4.65\(\pm\)0.407 & 77.3\(\pm\)1.578 & 76.3\(\pm\)0.28 & **90.03\(\pm\)0.29** & 85.94\(\pm\)0.75 \\
\hline
\end{tabular}
\end{table}
Table 3: Node classification on the Mixed Geometry Dataset. First row: mixed Airport+Cora dataset; second row: mixed Airport+Citeseer dataset.
2308.01210
Global Hierarchical Neural Networks using Hierarchical Softmax
This paper presents a framework in which hierarchical softmax is used to create a global hierarchical classifier. The approach is applicable for any classification task where there is a natural hierarchy among classes. We show empirical results on four text classification datasets. In all datasets the hierarchical softmax improved on the regular softmax used in a flat classifier in terms of macro-F1 and macro-recall. In three out of four datasets hierarchical softmax achieved a higher micro-accuracy and macro-precision.
Jetze Schuurmans, Flavius Frasincar
2023-08-02T15:12:56Z
http://arxiv.org/abs/2308.01210v1
# Global Hierarchical Neural Networks using Hierarchical Softmax

###### Abstract.

This paper presents a framework in which hierarchical softmax is used to create a global hierarchical classifier. The approach is applicable for any classification task where there is a natural hierarchy among classes. We show empirical results on four text classification datasets. In all datasets the hierarchical softmax improved on the regular softmax used in a flat classifier in terms of macro-F1 and macro-recall. In three out of four datasets hierarchical softmax achieved a higher micro-accuracy and macro-precision.

Hierarchical Classification, Hierarchical Softmax, Global Hierarchy

this last method might be easier to explain than the one from a local based approach (Kang et al., 2017). During both training and testing probabilities of all classes can be assessed.

### Hierarchical Softmax

Hierarchical softmax was first described by (Han et al., 2017). In the context of neural network language models, hierarchical softmax was first introduced in (Kang et al., 2017).
Other versions of hierarchical softmax are proposed in (Kang et al., 2018) and (Kang et al., 2019). (Bahdan et al., 2017) used the specification from (Kang et al., 2017) in their FastText classifier. While most of these methods use a binary tree to speed up training and inference, we try to exploit the natural hierarchy found in the taxonomy of classes to improve performance. In this taxonomy, a node can have more than two child nodes.

### Hierarchical Text Classification

Hierarchical classification was first used for text classification by (Kang et al., 2017). They used a local classifier per parent node for training, at each node selecting a subset of features relevant for that step in the classification process. A similar hierarchical structure with an SVM at every node was used by Kang et al. (Kang et al., 2017) for speech-act classification. Ono et al. (Ono et al., 2018) used a form of local classifier per level, where they tried the lowest level (leaf nodes) first. If the uncertainty was too high, they moved up a level in the hierarchy.

## 3. Methodology

Hierarchical classification can be considered as classification that takes the hierarchical structure of the taxonomy of classes into account, as opposed to a flat classifier, which only considers the final classes. By imposing the hierarchical structure, the model does not need to learn the separation between a large number of classes at once. It can instead focus on classifying categories, and subclasses within a category. The taxonomy can be formalised as a tree or a DAG. We consider here the case where the taxonomy is a tree. A taxonomy represented by a tree is easier to construct, as each child node has only one parent.

A local hierarchical neural network would be infeasible: having to learn a new neural network from scratch at every parent node would result in too many parameters, and considerably longer training and inference times. Therefore, we consider making a global classifier by using a hierarchical softmax. A hierarchical softmax easily extends a neural network by replacing the regular softmax. In this section we discuss the general case of the global hierarchical classifier and the specific case of the hierarchical softmax.

### Global Hierarchy

Global classifiers take advantage of the whole hierarchical structure in the classes at once (Kang et al., 2017). Each node in this hierarchical structure is associated with the probability of the path from the root to that node. We illustrate this in Figure 1. If the node is at depth \(l\) with parents \(n_{1},\ldots,n_{l-1}\), the probability of arriving in this node is:

\[P(n_{l})=P(n_{l}|n_{l-1})P(n_{l-1}) \tag{1}\]
\[=\prod_{j=1}^{l}P(n_{j}|n_{j-1}), \tag{2}\]

where \(n_{0}\) is the root node.

### Hierarchical Softmax

In order to calculate the conditional probabilities, the hierarchical softmax uses a softmax at every node. The softmax used in the hierarchical softmax to calculate the conditional probability of belonging to node \(m=n_{l}\), conditional on being in its parent node \(p=n_{l-1}\), becomes:

\[P(m|p)=\frac{\exp(w_{pm}^{T}h)}{\sum_{j=1}^{J_{p}}\exp(w_{pj}^{T}h)}, \tag{3}\]

where \(w_{pm}\) is the weight vector corresponding to parent node \(p\) and child node \(m\). The weight vectors \(w\) include the bias terms; \(h\) is therefore the last hidden state concatenated with a one, and provides the same input for each parent node, independent of the depth \(l\).
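The per-node softmaxes in (3) compose into leaf probabilities via (2). Below is a minimal two-level sketch, assuming PyTorch; the class name, argument names, and the leaf ordering (grouped by parent) are our illustrative assumptions, with the `nn.Linear` bias terms playing the role of the one concatenated to \(h\).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalSoftmax(nn.Module):
    """Two-level hierarchical softmax: a softmax over the parent categories,
    then a softmax over each parent's children, as in (1)-(3)."""
    def __init__(self, hidden_dim, children_per_parent):
        super().__init__()
        # root softmax over the P parent nodes
        self.root = nn.Linear(hidden_dim, len(children_per_parent))
        # one small softmax layer per parent node, with J_p outputs each
        self.parents = nn.ModuleList(
            [nn.Linear(hidden_dim, j_p) for j_p in children_per_parent]
        )

    def forward(self, h):
        # log P(parent), shape (batch, P)
        log_p_parent = F.log_softmax(self.root(h), dim=-1)
        # log P(leaf) = log P(parent) + log P(leaf | parent), cf. (2);
        # leaves are ordered parent by parent
        return torch.cat(
            [log_p_parent[:, k:k + 1] + F.log_softmax(layer(h), dim=-1)
             for k, layer in enumerate(self.parents)],
            dim=-1,
        )

# toy usage: 2 parent categories with 3 and 2 child classes (5 leaves)
hsm = HierarchicalSoftmax(hidden_dim=16, children_per_parent=[3, 2])
log_probs = hsm(torch.randn(4, 16))          # (batch, 5) leaf log-probabilities
loss = F.nll_loss(log_probs, torch.tensor([0, 2, 4, 1]))  # cross entropy, cf. (7)
```

Backpropagating this loss through the module reproduces the gradients derived in Section 3.2.1 below.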
The number of weight vectors \(J_{p}\) is equal to the number of child classes of parent node \(p\). Compared to a flat classifier with a regular softmax (\(P=1\)), the total number of weights increases by \((P-1)*(h_{dim}+1)\), where \(P\) is the number of parent nodes, \(h_{dim}\) is the dimension of the hidden state, and one is added to account for all the additional bias terms. Although the total number of weights increases, this is considerably less than if we would consider a new neural network at every parent node, as is done in the local classifier per parent node.

Each weight vector now has a new purpose. In a flat classifier with the regular softmax, \(\exp(w_{j}^{T}h)\) weighs the evidence for class \(j\) against all leaf nodes, \(\sum_{c=1}^{C}\exp(w_{c}^{T}h)\), where \(C\) equals the number of classes, or leaf nodes. In the hierarchical softmax, the importance of node \(m\), \(\exp(w_{pm}^{T}h)\), is compared to a subset of nodes \(j=1,\ldots,J_{p}\), i.e., \(\sum_{j=1}^{J_{p}}\exp(w_{pj}^{T}h)\). This gives the hierarchical softmax the potential advantage of only having to make the distinction within smaller subsets. In other words, the additional \((P-1)*(h_{dim}+1)\) parameters empower the \(C*(h_{dim}+1)\) parameters to specialise in discriminating within their respective subgroups.

#### 3.2.1. Training of the Hierarchical Softmax

In order to understand the training of a network with a hierarchical softmax component, we need to calculate the gradients of the loss function with respect to the parameters \(w_{pj}\) and \(h\). This will also show that the hierarchical softmax is truly a global classifier, as the whole network is updated based on the performance at all relevant parent nodes. The loss function we use is the Cross Entropy function. For observation \(i\) the loss is calculated as a function of the estimated class probabilities \(P_{i}(c)\):

\[E_{i}=-\sum_{c=1}^{C}y_{i,c}\log P_{i}(c) \tag{4}\]
\[=-\log P_{i}(m) \tag{5}\]
\[=-\log\prod_{q\in Q}P(m_{q}|q) \tag{6}\]
\[=-\sum_{q\in Q}\log P(m_{q}|q) \tag{7}\]

The indicator \(y_{i,c}\) is 1 if observation \(i\) belongs to class \(c\); therefore, the element of the sum that remains is the negative log probability of the correct class \(m\). We then substitute (2) into the loss in (6), where \(Q\) is the set of all parent nodes on the path to the correct class, and \(m_{q}\) is the correct child of parent \(q\). In (7) we rearrange the log, making it easier to calculate the derivatives we are looking for:

\[\frac{\partial E_{i}}{\partial w_{pj}}=\mathbb{1}_{p\in Q}\left(P(j|p)-\delta_{jm_{p}}\right)h \tag{8}\]
\[\frac{\partial E_{i}}{\partial h}=\sum_{q\in Q}\sum_{j=1}^{J_{q}}\left(P(j|q)-\delta_{jm_{q}}\right)w_{qj} \tag{9}\]

where the indicator function \(\mathbb{1}_{p\in Q}\) equals one if \(p\) is in \(Q\) and zero otherwise. Likewise, the Kronecker delta is defined as \(\delta_{jm}=1\) if \(j=m\), and 0 otherwise. The derivations of (8) and (9) are given in Appendix A. These gradients can be used in the Stochastic Gradient Descent algorithm. More importantly, (9) shows that the update of the hidden state (and therefore of the rest of the network) is a combination of the performances across all child nodes that belong to the parent nodes making up the path to the correct class. This shows that a neural network with a hierarchical softmax is truly a global hierarchical classifier.

## 4. Datasets

We consider four text classification datasets in which we can find a hierarchical structure in the classes.
In Appendix B we present the exact taxonomy used.

### TREC

First, we consider the TREC 10 Question Answering Track Corpus (Trec et al., 2015), abbreviated as TREC. This dataset consists of 5952 questions (5452 train, 500 test), each belonging to one of 50 classes. These classes are split between 6 categories. Figure 2 shows that the distribution between categories is highly unbalanced. The TREC training set is highly unbalanced as well: the number of training observations per class ranges from 4 to 962.

### 20NewsGroups

The second dataset is the 20NewsGroups dataset (Trec et al., 2015). We find 6 categories on top of the 20 classes. The distribution of the classes between the categories, shown in Figure 3, is relatively balanced. This dataset contains 11293 training observations and 7527 test observations. The training set is relatively balanced as well: most classes have between 500 and 600 observations, with one outlying class of only 377 training observations.

### Reuters-21578

As third and fourth datasets we use two configurations of the Reuters-21578 dataset (Ferrone et al., 2015): Reuters-8 and Reuters-52.

#### 4.3.1. Reuters-8

The Reuters-8 dataset consists of the 8 most frequent classes, based on the number of observations in the training set. These are distributed between 4 categories. The classes are distributed evenly between the categories, as shown in Figure 4. The observations are split into 5485 train and 2189 test observations. The training set is very unbalanced, with observations per class ranging from 41 to 2840.

#### 4.3.2. Reuters-52

Respectively, the Reuters-52 dataset contains the 52 most frequent classes, also distributed between 4 categories. Figure 5 shows that the distribution of classes among the categories is highly unbalanced. The Reuters-52 dataset contains 6532 training and 2568 test observations. This dataset is by definition more unbalanced than Reuters-8: the least frequent class has only a single training observation.

## 5. Experiments

In our experiments we employ an LSTM (He et al., 2017) with hierarchical softmax and compare the results with an LSTM with regular softmax.

Figure 1. Hierarchical structure of a two level global classifier.

LSTMs are a popular and well performing architecture for text classification, because of their ability to process sequential data (Bir
With regard to the state-of-the-art, it is not our goal to improve the SOTA, instead we show that changing a regular softmax with a hierarchical softmax in a dataset with a natural hierarchy in the classes leads to an improvement. Although we did not improve the state-of-the-art, we do come close with the hierarchical softmax on a parsimonious model. We also note that the SOTA models are a different model for each dataset, while we consistently perform well on all datasets with the same model. Future work can study if the state-of-the-art might be improved if the hierarchical softmax is used in the respective model. Furthermore, we consider a two-level hierarchical taxonomy, by introducing one level of parent nodes in between the root and the leaves. In future work, the taxonomy could be extended with an additional hierarchical layer, i.e. by grouping parent nodes. The hierarchy is currently determined based on the hierarchy in the class taxonomy. Alternatively the construction and evaluation of different hierarchical structures could be automated. The performance of the hierarchical softmax depends on the probability estimates of the conditional probabilities of moving from a parent node to a child node. Better estimates might be obtained by using Bayesian neural networks, as their probability estimates are significantly better (Frieder et al., 2017; Goyal et al., 2017; Goyal et al., 2017). The hierarchical softmax is not only applicable for text classification. In theory it can replace a softmax in any classification task. It would also be interesting to see how this approach fares in other classification tasks, for example in image classification.